Applied Probability and Stochastic Processes: V. C. Joshua S. R. S. Varadhan Vladimir M. Vishnevsky


Infosys Science Foundation Series in Mathematical Sciences

V. C. Joshua
S. R. S. Varadhan
Vladimir M. Vishnevsky Editors

Applied Probability and Stochastic Processes

Infosys Science Foundation Series

Infosys Science Foundation Series in Mathematical Sciences

Series Editors
Irene Fonseca, Carnegie Mellon University, Pittsburgh, PA, USA
Gopal Prasad, University of Michigan, Ann Arbor, USA

Editorial Board
Manindra Agrawal, Indian Institute of Technology Kanpur, Kanpur, India
Weinan E, Princeton University, Princeton, USA
Chandrashekhar Khare, University of California, Los Angeles, USA
Mahan Mj, Tata Institute of Fundamental Research, Mumbai, India
Ritabrata Munshi, Tata Institute of Fundamental Research, Mumbai, India
S. R. S. Varadhan, New York University, New York, USA
The Infosys Science Foundation Series in Mathematical Sciences is a sub-series
of The Infosys Science Foundation Series. This sub-series focuses on high-quality
content in the domain of mathematical sciences and various disciplines of math-
ematics, statistics, bio-mathematics, financial mathematics, applied mathematics,
operations research, applied statistics and computer science. All content published
in the sub-series is written, edited, or vetted by the laureates or jury members of the
Infosys Prize. With this series, Springer and the Infosys Science Foundation hope
to provide readers with monographs, handbooks, professional books and textbooks
of the highest academic quality on current topics in relevant disciplines. Literature
in this sub-series will appeal to a wide audience of researchers, students, educators,
and professionals across mathematics, applied mathematics, statistics and computer
science disciplines.

More information about this subseries at http://www.springer.com/series/13817


V. C. Joshua • S. R. S. Varadhan •
Vladimir M. Vishnevsky
Editors

Applied Probability
and Stochastic Processes
Editors

V. C. Joshua
Department of Mathematics
CMS College
Kottayam, India

S. R. S. Varadhan
Courant Institute of Mathematical Sciences
New York University
New York, NY, USA

Vladimir M. Vishnevsky
Institute of Control Sciences
Russian Academy of Sciences
Moscow, Russia

ISSN 2363-6149  ISSN 2363-6157 (electronic)
Infosys Science Foundation Series
ISSN 2364-4036  ISSN 2364-4044 (electronic)
Infosys Science Foundation Series in Mathematical Sciences
ISBN 978-981-15-5950-1  ISBN 978-981-15-5951-8 (eBook)
https://doi.org/10.1007/978-981-15-5951-8

Mathematics Subject Classification: 60B10, 60J65, 60K20, 60K25, 60K30, 62H99, 68M18, 90B05,
90B15, 91B05

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Singapore
Pte Ltd. 2020
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface

This book is a collection of selected papers presented at the International Conference on Advances in Applied Probability and Stochastic Processes (ICAAP&SP@CMS 2019), held during 7–10 January 2019 at CMS College Kottayam, Kerala, India, in honour of Prof. Dr. A. Krishnamoorthy, Emeritus Professor and Former Head of the Department of Mathematics, Cochin University of Science and Technology, Kerala, India. It is the first in a series of conferences intended to be conducted biannually, with the aim of promoting high-quality research in applied probability that keeps pace with advances in science and technology. The conference focused on applied probability techniques for the modelling and analysis of systems evolving in time. Stochastic modelling plays a key role in analysing real-life problems in areas such as queueing, reliability, inventory, biology, medicine, and finance.
The conference was a great success, attracting 145 delegates from 14 countries. Professor S. R. S. Varadhan, FRS (Abel Laureate), delivered the keynote address. There were 9 plenary speakers and 26 invited speakers. The conference provided a platform for researchers, academicians, practitioners, and industrialists from various countries interested in the theory and applications of applied probability and stochastic processes to share their views, discuss prospective developments, and pursue collaborations in these areas.
The conference had 104 papers for presentation. Out of these, 30 papers were selected for publication in this Springer proceedings volume after peer review. These papers demonstrate the strong theoretical and practical foundations of probabilistic modelling tools. The target audience includes researchers and practitioners of probability theory, stochastic processes, and mathematical modelling.


We thank all the authors for their contributions, the reviewers for their peer reviews, the members of the program committee for their timely help, and the sponsors for financial support.

Kottayam, India V. C. Joshua


New York, NY, USA S. R. S. Varadhan
Moscow, Russia Vladimir M. Vishnevsky
January 2020
Contents

Shift-Coupling and Maximality   1
Hermann Thorisson

Diffusion Approximation Analysis of Multihop Wireless Networks: Quality-of-Service and Convergence of Stationary Distribution   15
K. S. Ashok Krishnan and Vinod Sharma

Analysis of Retrial Queue with Heterogeneous Servers and Markovian Arrival Process   29
Liu Mei and Alexander Dudin

What is Standard Brownian Motion?   51
Krishna B. Athreya

Busy Period Analysis of Multi-Server Retrial Queueing Systems   61
Srinivas R. Chakravarthy

Steady-State and Transient Analysis of a Single Channel Cognitive Radio Model with Impatience and Balking   77
Alexander Rumyantsev and Garimella Rama Murthy

Applications of Fluid Queues in Rechargeable Batteries   91
Shruti Kapoor and S. Dharmaraja

Analysis of BMAP/R/1 Queues Under Gated-Limited Service with the Server's Single Vacation Policy   103
Souvik Ghosh, A. D. Banik, and M. L. Chaudhry

A Production Inventory System with Renewal and Retrial Demands   129
G. Arivarignan, M. Keerthana, and B. Sivakumar

A Queueing System with Batch Renewal Input and Negative Arrivals   143
U. C. Gupta, Nitin Kumar, and F. P. Barbhuiya

Asymptotic Analysis Methods for Multi-Server Retrial Queueing Systems   159
Ekaterina Fedorova, Anatoly Nazarov, and Alexander Moiseev

On the Application of Dynamic Screening Method to Resource Queueing System with Infinite Servers   179
Michele Pagano and Ekaterina Lisovskaya

"Controlled" Versions of the Collatz–Wielandt and Donsker–Varadhan Formulae   199
Aristotle Arapostathis and Vivek S. Borkar

An (s, S) Production Inventory System with State Dependant Production Rate and Lost Sales   215
S. Malini and Dhanya Shajin

Analysis of a MAP Risk Model with Stochastic Incomes, Inter-Dependent Phase-Type Claims and a Constant Barrier   235
A. S. Dibu and M. J. Jacob

A PH Distributed Production Inventory Model with Different Modes of Service and MAP Arrivals   263
Salini S. Nair and K. P. Jose

On a Generalized Lifetime Model Using DUS Transformation   281
P. Kavya and M. Manoharan

Analysis of Inventory Control Model for Items Having General Deterioration Rate   293
V. P. Praveen and M. Manoharan

A Two-Server Queueing System with Processing of Service Items by a Server   307
A. Krishnamoorthy and Divya V.

A Two-Stage Tandem Queue with Specialist Servers   335
T. S. Sinu Lal, A. Krishnamoorthy, V. C. Joshua, and Vladimir Vishnevsky

The MAP/(PH,PH,PH)/1 Model with Self-Generation of Priorities, Customer Induced Interruption and Retrial of Customers   355
Jomy Punalal and S. Babu

Valuation of Reverse Mortgage   371
D. Kannan and Lina Ma

Stationary Distribution of Discrete-Time Finite-Capacity Queue with Re-sequencing   399
Rostislav Razumchik and Lusine Meykhanadzhyan

The Polaron Measure   415
Chiranjib Mukherjee and S. R. S. Varadhan

Batch Arrival Multiserver Queue with State-Dependent Setup for Energy-Saving Data Center   421
Tuan Phung-Duc

Weak Convergence of Probability Measures of Trotter–Kato Approximate Solutions of Stochastic Evolution Equations   441
T. E. Govindan

Stochastic Multiphase Models and Their Application for Analysis of End-to-End Delays in Wireless Multihop Networks   457
Vladimir Vishnevsky and Andrey Larionov

Variance Laplacian: Quadratic Forms in Statistics   473
Garimella Rama Murthy

On the Feynman–Kac Formula   491
B. Rajeev

Heterogeneous System GI/GI(n)/∞ with Random Customers Capacities   507
Ekaterina Lisovskaya, Svetlana Moiseeva, Michele Pagano, and Ekaterina Pankratova
About the Editors

V. C. Joshua is Associate Professor at the Department of Mathematics, CMS


College, Kerala, India. He received his Ph.D. in Mathematics from the Cochin
University of Science and Technology, Kerala, India, in 2003. His research interests
include stochastic modelling, analysis and applications, queueing theory, inventory,
and reliability. In addition to having authored one book and 30 research papers, he
is also a reviewer for a number of international journals. He has been a participating
scientist of two international bilateral scientific research projects and has organized
three international conferences on applied probability and stochastic processes.

S. R. S. Varadhan is the Frank Jay Gould Professor of Science at Courant


Institute of Mathematical Sciences, New York University, USA, and is a renowned
mathematician. He completed his Ph.D. in Mathematics from Indian Statistical
Institute, Kolkata, India. He is known for his fundamental contributions to prob-
ability theory and for creating a unified theory of large deviations. He is a recipient
of the National Medal of Science (in 2010) from President Barack Obama, the
highest honour bestowed by the Government of the United States of America on
scientists, engineers, and inventors. The Government of India awarded him the
Padma Bhushan (in 2008). He received the Abel Prize (in 2007) for his work on
large deviations with Monroe D. Donsker. He was also awarded the Leroy P. Steele
Prize for Seminal Contribution to Research (in 1996) by the American Mathematical
Society for his work with Daniel W. Stroock on diffusion processes, the Margaret and
Herman Sokol Award of the Faculty of Arts and Sciences, New York University
(in 1995), and the Birkhoff Prize (in 1994).
Université Pierre et Marie Curie, Paris, France, and from Indian Statistical Institute,
Kolkata, India.
He is a member of the National Academy of Sciences, Washington, and the
Norwegian Academy of Science and Letters, Oslo, Norway. He has been an elected
Fellow of the American Academy of Arts and Sciences, Cambridge, USA; the
World Academy of Sciences, Trieste, Italy; the Institute of Mathematical Statistics;
the Royal Society, London, UK; the Indian Academy of Sciences, Bangalore, India; the
Society for Industrial and Applied Mathematics, Philadelphia, USA; and the American
Mathematical Society, Providence, USA. His areas of research include probability
theory and its relation to analysis, various aspects of stochastic processes and their
connections to certain classes of linear and nonlinear partial differential equations.

Vladimir M. Vishnevsky is Head of the Telecommunication Networks Laboratory


at the V. A. Trapeznikov Institute of Control Sciences of Russian Academy of
Sciences (ICS RAS), Moscow, Russia. Earlier, he was Assistant Head of the
Institute of Information Transmission Problems of RAS, from 1990 to 2010,
and Assistant Head of Laboratory with ICS RAS, from 1971 to 1990. He also
served as Full Professor at ICS RAS from 1989 and the Moscow Institute of
Physics and Technology from 1990. He earned his Ph.D. in queuing theory and
telecommunication networks and D.Sc. in telecommunication networks from ICS
RAS in 1974 and 1988, respectively.
He has authored over 300 research papers in queuing theory and telecommu-
nications, 10 monographs, and 20 patents for inventions. His areas of research
include computer systems and networks, queuing systems, telecommunications,
discrete mathematics (extremal graph theory and mathematical programming), and
wireless information transmission networks. He is a co-chair of a number of IEEE
conferences and project leader of several international research projects related to
the research and development of the next-generation 5G/IMT-2020 networks. In
2019, by a decree of the President of the Russian Federation, he was awarded the
title “Honored Scientist of the Russian Federation”.
Shift-Coupling and Maximality

Hermann Thorisson

Abstract We consider shift-coupling on groups. The theory is based on a key maximality result that does not rely on the group condition.

Keywords Coupling · Shift-coupling · Invariant sets · Cesaro asymptotics · Total variation

AMS MSC 2010 60G60, 60G57, 60G55, 60B10

1 Introduction

This paper presents basic theory of shift-coupling on locally compact second countable Hausdorff topological groups, involving invariant sets and Cesaro total variation asymptotics. The key part, concerning the existence of shift-couplings, is based on a maximality result that is proved in the latter half of the paper. That result does not rely on the group condition.
Shift-coupling dates back to the 1979 monograph [2] by Berbee, where the
following result (Theorem 4.3.3 in [2]) for one-sided random sequences (Xk )k≥0
and (Yk )k≥0 on a Borel space, is proved: there exist copies (X̂k )k≥0 of (Xk )k≥0 and
(Ŷk )k≥0 of (Yk )k≥0 (coupling) and finite random times R and S (shifts) such that

(X̂R+k )k≥0 = (ŶS+k )k≥0 (1.1)

H. Thorisson ()
University of Iceland, Reykjavík, Iceland
e-mail: [email protected]


if and only if the distributions of $(X_k)_{k\ge 0}$ and $(Y_k)_{k\ge 0}$ converge in Cesaro total variation,

$$\Big\| \frac{1}{n}\sum_{i=0}^{n-1} P\big((X_{i+k})_{k\ge 0} \in \cdot\big) - \frac{1}{n}\sum_{i=0}^{n-1} P\big((Y_{i+k})_{k\ge 0} \in \cdot\big) \Big\| \to 0, \qquad n \to \infty.$$

Here $\|\cdot\|$ is the total variation norm, defined for bounded signed measures $\nu$ by

$$\|\nu\| = \sup \nu - \inf \nu.$$

Note that if $\nu = P - Q$, where $P$ and $Q$ are probability measures, then $\sup \nu = -\inf \nu$, so

$$\|P - Q\| = 2 \sup (P - Q) = 2 \sup |P - Q|. \qquad (1.2)$$
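For a concrete instance of (1.2), let $P$ and $Q$ be Bernoulli distributions with success probabilities $p$ and $q$; the supremum of $(P-Q)(A)$ over events $A$ is attained at $\{1\}$ (or at $\{0\}$), so

$$\|P - Q\| = |p - q| + |(1-p) - (1-q)| = 2|p - q|.$$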

Berbee’s result is a counterpart of earlier results linking exact coupling (coupling


such that (1.1) holds with R = S, invented by Doeblin [4] to prove the basic
limit theorem of Markov chains) to total variation convergence through a coupling
inequality and a maximal coupling; see [5, 6, 15]; see also Theorem 4.3.2 in [2]. In
[5] exact coupling is further linked to the tail σ -algebra.
The term ‘shift-coupling’ for ((X̂k )k≥0 , (Ŷk )k≥0 , R, S) is from the 1993 paper [1]
where a link to the invariant σ -algebra is established. In [17] the view was extended
to continuous time and a shift-coupling inequality presented providing the Cesaro
result. In that paper epsilon-couplings (for each  > 0 there is a shift-coupling
such that |R − S| < , see [13, 14]) were also linked to a smooth total variation
convergence through -coupling inequalities, and to a smooth tail σ -algebra. In [18]
the view was further extended to a group setting, where the Borel-space condition
turns out to be not needed. Applications in Palm theory to simple point processes
on Rd were presented in the 1999 paper [19]. Those applications cover both coin
tosses and the Poisson process in Rd as special cases, but no explicit constructions
were given, only the abstract existence of shift-coupling.
The first explicit construction of a shift-coupling was the surprising Extra Head
Scheme for doubly infinite coin tosses and for the Poisson process on the line,
presented by Liggett in the 2002 paper [12]. Then in [7, 8] and [3] came equally
surprising constructions for point processes on Rd . In [11] the Palm theoretic view
was extended from point processes on Rd to random measures on groups, and
applications to local time of two-sided Brownian motion (and Lévy processes)
followed in [9, 16] and [10]. Those applications involved unbiased Skorokhod
embedding, embedding the Brownian bridge into the path of Brownian motion, and
unbiased embedding of excursions. See the notes and references in these papers (and
in [20]) for a more complete background.

2 Shift-Coupling and the Invariant σ-Algebra I

Let $G$ be a locally compact second countable Hausdorff topological group with Borel sets $\mathcal{G}$. Let $(E, \mathcal{E})$ be a measurable space equipped with a measurable flow

$$\theta_t : E \to E, \qquad t \in G.$$

This means that the map $(t, x) \mapsto \theta_t x$ is measurable with respect to $\mathcal{G} \otimes \mathcal{E}$ and $\mathcal{E}$, that with $e$ the identity of $G$ the map $\theta_e$ is the identity on $E$, and that $\theta_s \theta_t = \theta_{st}$ for all $s, t \in G$.
Let $(\Omega, \mathcal{F}, P)$ be the probability space on which all the random elements in this paper are defined. We allow $(\Omega, \mathcal{F}, P)$ to be extended by introducing new independent random elements or new random elements with specified regular conditional distributions given a random element that already is defined on $(\Omega, \mathcal{F}, P)$; see Sect. 5.
Throughout the paper let $X$ and $Y$ be random elements in $(E, \mathcal{E})$. These random elements could be random measures on $(G, \mathcal{G})$ or random fields indexed by $G$. For instance, if $G = \mathbb{R}^d$, then for a random measure $X = X(\cdot)$ we have $\theta_t X = X(t + \cdot)$, while for a random field $X = (X_s)_{s \in \mathbb{R}^d}$ we have $\theta_t X = (X_{t+s})_{s \in \mathbb{R}^d}$.

Definition Say that $X$ and $Y$ admit shift-coupling if there exists (possibly after extension) a random element $T$ in $(G, \mathcal{G})$ such that

$$\theta_T X \overset{D}{=} Y.$$

Here $\overset{D}{=}$ denotes identity in distribution.

Let $I$ be the invariant $\sigma$-algebra,

$$I = \{A \in \mathcal{E} : \theta_t^{-1} A = A,\ t \in G\}.$$
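For instance, if $G = \mathbb{R}^d$ acts on random measures by shifts as above, then the set of measures with infinite total mass, $A = \{x \in E : x(\mathbb{R}^d) = \infty\}$, satisfies $\theta_t^{-1}A = A$ for all $t$, since shifting a measure leaves its total mass unchanged; thus $A \in I$.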

Lemma 2.1 If $A \in I$ and $T$ is a random element in $(G, \mathcal{G})$, then $\{\theta_T X \in A\} = \{X \in A\}$.

Proof From $A \in I$ we obtain the second step in

$$\{\theta_T X \in A\} = \bigcup_{t \in G}\{T = t,\ \theta_t X \in A\} = \bigcup_{t \in G}\{T = t,\ X \in A\} = \{X \in A\}.$$

The only-if direction in the following theorem is easy. The if-direction relies on the maximality result proved in Sect. 4.

Theorem 2.2 The random elements $X$ and $Y$ admit shift-coupling if and only if

$$P(X \in A) = P(Y \in A), \qquad A \in I.$$

Proof If $\theta_T X \overset{D}{=} Y$ and $A \in I$, then by Lemma 2.1

$$P(X \in A) = P(\theta_T X \in A) = P(Y \in A), \qquad A \in I.$$

For the converse claim, see Corollary 5.1 at the end of the paper.

3 Shift-Coupling Inequality and Cesaro Asymptotics

Let $\lambda$ be right-invariant Haar measure on $(G, \mathcal{G})$. For $B \in \mathcal{G}$ with $0 < \lambda(B) < \infty$ let $U_B$ be a random element in $(G, \mathcal{G})$ that is independent of $X$ and $Y$ and has distribution $\lambda(\cdot \mid B)$,

$$P(U_B \in \cdot \mid X, Y) = \lambda(\cdot \cap B)/\lambda(B).$$

Note that the distribution of $\theta_{U_B} X$ can be written on Cesaro-average form as follows:

$$P(\theta_{U_B} X \in \cdot) = \int_B P(\theta_s X \in \cdot)\,\lambda(ds)\Big/\lambda(B).$$

Let $\triangle$ denote the symmetric difference of two sets,

$$B \,\triangle\, C = (B \setminus C) \cup (C \setminus B).$$

Theorem 3.1 (The Shift-Coupling Inequality) If $\theta_T X \overset{D}{=} Y$, then

$$\big\| P(\theta_{U_B} X \in \cdot) - P(\theta_{U_B} Y \in \cdot) \big\| \;\le\; E\Big[\frac{\lambda(B \,\triangle\, BT)}{\lambda(B)}\Big]. \qquad (3.1)$$

Proof Take $A \in \mathcal{E}$ and let the uniform $U_B$ be independent of $T$, $X$, $Y$. Due to $\theta_T X \overset{D}{=} Y$,

$$P(\theta_{U_B} X \in A) - P(\theta_{U_B} Y \in A) = P(\theta_{U_B} X \in A) - P(\theta_{U_B} \theta_T X \in A).$$

Thus

$$P(\theta_{U_B} X \in A) - P(\theta_{U_B} Y \in A) = E[1\{\theta_{U_B} X \in A\}] - E[1\{\theta_{U_B} \theta_T X \in A\}]$$
$$= E\Big[\int_B 1\{\theta_s X \in A\}\,\lambda(ds) - \int_B 1\{\theta_s \theta_T X \in A\}\,\lambda(ds)\Big]\Big/\lambda(B).$$

Right-invariance of $\lambda$ yields $\int_B 1\{\theta_s \theta_T X \in A\}\,\lambda(ds) = \int_{BT} 1\{\theta_s X \in A\}\,\lambda(ds)$ and thus

$$P(\theta_{U_B} X \in A) - P(\theta_{U_B} Y \in A) = E\Big[\int_B 1\{\theta_s X \in A\}\,\lambda(ds) - \int_{BT} 1\{\theta_s X \in A\}\,\lambda(ds)\Big]\Big/\lambda(B).$$

The $B \cap BT$ parts of the integrals cancel and dropping what then remains of the negative integral yields

$$P(\theta_{U_B} X \in A) - P(\theta_{U_B} Y \in A) \le E\Big[\int_{B \setminus BT} 1\{\theta_s X \in A\}\,\lambda(ds)\Big]\Big/\lambda(B).$$

Now $1\{\theta_s X \in A\} \le 1$ and thus

$$P(\theta_{U_B} X \in A) - P(\theta_{U_B} Y \in A) \le E\Big[\frac{\lambda(B \setminus BT)}{\lambda(B)}\Big]. \qquad (3.2)$$

Use the right-invariance of $\lambda$ for the second identity in

$$\lambda(B \,\triangle\, BT) = \lambda(B \setminus BT) + \lambda(BT \setminus B) = 2\,\lambda(B \setminus BT).$$

Thus taking the supremum over $A \in \mathcal{E}$ in (3.2) and consulting (1.2) yields (3.1).
In order to obtain Cesaro total variation convergence we need to assume that $G$ is amenable. This means that there exist Følner sets, namely a family of bounded sets of positive $\lambda$-measure, $B_r \in \mathcal{G}$, $r > 0$, expanding to $G$ in such a way that

$$\forall t \in G: \quad \frac{\lambda(B_r \,\triangle\, B_r t)}{\lambda(B_r)} \to 0, \qquad r \to \infty.$$

For instance, if $G = \mathbb{R}^d$ (under addition, with $\lambda$ the Lebesgue measure) and $B$ is a convex set of positive finite volume containing $0$ in its interior, then $B_r = r \cdot B$, $r > 0$, are Følner sets.
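For a concrete one-dimensional check of the Følner condition, take $G = \mathbb{R}$ with $\lambda$ Lebesgue measure and $B_r = [0, r]$: for any fixed $t$ with $0 < t \le r$,

$$\frac{\lambda\big(B_r \,\triangle\, (B_r + t)\big)}{\lambda(B_r)} = \frac{\lambda([0,t)) + \lambda((r, r+t])}{r} = \frac{2t}{r} \to 0, \qquad r \to \infty.$$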
Applying the shift-coupling inequality under the Følner condition adds a limit result to the equivalence in Theorem 2.2.

Theorem 3.2 Suppose there exist Følner sets $B_r \in \mathcal{G}$, $r > 0$. Then $X$ and $Y$ admit shift-coupling if and only if

$$\big\| P(\theta_{U_{B_r}} X \in \cdot) - P(\theta_{U_{B_r}} Y \in \cdot) \big\| \to 0, \qquad r \to \infty.$$

Proof If $\theta_T X \overset{D}{=} Y$, apply Theorem 3.1 and bounded convergence to obtain

$$\big\| P(\theta_{U_{B_r}} X \in \cdot) - P(\theta_{U_{B_r}} Y \in \cdot) \big\| \le E\Big[\frac{\lambda(B_r \,\triangle\, B_r T)}{\lambda(B_r)}\Big] \to 0, \qquad r \to \infty.$$

Conversely, assume that $\big\| P(\theta_{U_{B_r}} X \in \cdot) - P(\theta_{U_{B_r}} Y \in \cdot) \big\| \to 0$ as $r \to \infty$. Take $A \in I$ and apply Lemma 2.1 to obtain the first step in

$$\big| P(X \in A) - P(Y \in A) \big| = \big| P(\theta_{U_{B_r}} X \in A) - P(\theta_{U_{B_r}} Y \in A) \big| \to 0, \qquad r \to \infty.$$

Thus $|P(X \in A) - P(Y \in A)| = 0$, that is, $P(X \in A) = P(Y \in A)$ for $A \in I$. Now apply Theorem 2.2 to obtain that $X$ and $Y$ admit shift-coupling.
Remark 3.3 (Application in Palm Theory) Let $\overset{tv}{\to}$ denote total variation convergence. If $Y$ is stationary (that is, $\theta_t Y \overset{D}{=} Y$, $t \in G$), then the limit claim in Theorem 3.2 becomes

$$\theta_{U_{B_r}} X \overset{tv}{\to} Y, \qquad r \to \infty.$$

Let $\eta$ be a stationary random measure (e.g. point process) with finite intensity and $\xi$ be its Palm version. Take $X = \xi$ and $Y = \eta$. Assume ergodicity, $P(\eta \in \cdot) = 0$ or $1$ on $I$. Then it is readily checked that $P(\xi \in \cdot) = P(\eta \in \cdot)$ on $I$. Thus according to Theorem 2.2, $\xi$ and $\eta$ admit shift-coupling. Moreover, according to Theorem 3.2,

$$\theta_{U_{B_r}} \xi \overset{tv}{\to} \eta, \qquad r \to \infty,$$

if Følner sets $B_r$, $r > 0$, exist.

4 The Key Theorem and a Corollary

In the following basic maximality theorem we drop the condition that $G$ is a group and $\theta_s$, $s \in G$, a flow. We only assume that $(G, \mathcal{G}, \lambda)$ is some $\sigma$-finite measure space and that

$$\theta_t : E \to E, \qquad t \in G,$$

is some collection of mappings such that $(t, x) \mapsto \theta_t x$ is measurable with respect to $\mathcal{G} \otimes \mathcal{E}$ and $\mathcal{E}$.

Theorem 4.1 There are random elements $\hat X, \hat Y$ in $(E, \mathcal{E})$ and $\hat R, \hat S$ in $(G, \mathcal{G})$, events $\hat C, \hat D$, and a set $A \in \mathcal{E}$, such that

$$\hat X \overset{D}{=} X \quad\text{and}\quad \hat Y \overset{D}{=} Y \qquad (4.1)$$

$$P(\{\theta_{\hat R} \hat X \in \cdot\} \cap \hat C) = P(\{\theta_{\hat S} \hat Y \in \cdot\} \cap \hat D) \qquad (4.2)$$

$$E\Big[\int_G 1_{\{\theta_s \hat X \in A\} \cap \hat C^c}\,\lambda(ds)\Big] = 0 = E\Big[\int_G 1_{\{\theta_s \hat Y \in A^c\} \cap \hat D^c}\,\lambda(ds)\Big]. \qquad (4.3)$$

Proof Let $T_1$ be a random element in $(G, \mathcal{G})$ with distribution $P$ having the same null sets as the $\sigma$-finite $\lambda$. For instance, $P = \sum_n \lambda(\cdot \mid B_n)2^{-n}$, where the $B_n$ are a disjoint covering of $G$ with $0 < \lambda(B_n) < \infty$. Let $T_1$ be independent of a quadruple $(X_1, Y_1, C_1, D_1)$, where $X_1, Y_1$ are random elements in $(E, \mathcal{E})$, $C_1, D_1$ are events, and

(a) $P(X_1 \in \cdot) = P(X \in \cdot)$ and $P(Y_1 \in \cdot) = P(Y \in \cdot)$,
(b) $P(\{\theta_{T_1} X_1 \in \cdot\} \cap C_1) = P(\{\theta_{T_1} Y_1 \in \cdot\} \cap D_1)$,
(c) $\exists\, A_1 \in \mathcal{E} : P(\{\theta_{T_1} X_1 \in A_1\} \cap C_1^c) = 0 = P(\{\theta_{T_1} Y_1 \in A_1^c\} \cap D_1^c)$.

In order to obtain this $(X_1, Y_1, C_1, D_1)$, let $X_1, Y_1$ satisfy (a), let $\mu$ be the common component of the measures $\nu = P(\theta_{T_1} X_1 \in \cdot)$ and $\eta = P(\theta_{T_1} Y_1 \in \cdot)$, that is,

$$\frac{d\mu}{d(\nu + \eta)} = \frac{d\nu}{d(\nu + \eta)} \wedge \frac{d\eta}{d(\nu + \eta)},$$

and then introduce (by extension, see Remark 5.2) splitting events $C_1$ and $D_1$ cutting out the common component $\mu$ from $P(\theta_{T_1} X_1 \in \cdot)$ and $P(\theta_{T_1} Y_1 \in \cdot)$,

$$P(\{\theta_{T_1} X_1 \in \cdot\} \cap C_1) = \mu = P(\{\theta_{T_1} Y_1 \in \cdot\} \cap D_1).$$

This yields (b) and also (c) because $P(\theta_{T_1} X_1 \in \cdot) - \mu$ and $P(\theta_{T_1} Y_1 \in \cdot) - \mu$ have no mass in common (are mutually singular).

Repeat this recursively to obtain i.i.d. $T_1, T_2, \ldots$ that are independent of a sequence of independent quadruples $(X_k, Y_k, C_k, D_k)_{1 \le k < \infty}$ where for $k > 1$

(d) $P(X_k \in \cdot) = P(X_{k-1} \in \cdot \mid C_{k-1}^c)$ and $P(Y_k \in \cdot) = P(Y_{k-1} \in \cdot \mid D_{k-1}^c)$,
(e) $P(\{\theta_{T_k} X_k \in \cdot\} \cap C_k) = P(\{\theta_{T_k} Y_k \in \cdot\} \cap D_k)$,
(f) $\exists\, A_k \in \mathcal{E} : P(\{\theta_{T_k} X_k \in A_k\} \cap C_k^c) = 0 = P(\{\theta_{T_k} Y_k \in A_k^c\} \cap D_k^c)$.

Now put

$$K = \inf\{k \ge 1 : 1_{C_k} = 1\} \quad\text{and}\quad N = \inf\{k \ge 1 : 1_{D_k} = 1\}.$$



By induction

$$P(X \in \cdot) = P(X_K \in \cdot, K \le k) + P(X_{k+1} \in \cdot)P(K > k) \qquad (4.4)$$

because for $k = 1$ we obtain (4.4) by summing the results of the following calculations:

$$P(X_K \in \cdot, K \le k) = P(X_1 \in \cdot, C_1) = \mu$$
$$P(X_{k+1} \in \cdot)P(K > k) = P(X_2 \in \cdot)P(C_1^c) = P(X_1 \in \cdot, C_1^c) = P(X \in \cdot) - \mu$$

while if (4.4) holds for some $k \ge 1$, then it holds with $k$ replaced by $k + 1$ since

$$P(X \in \cdot) - P(X_K \in \cdot, K \le k)$$
$$= P(X_{k+1} \in \cdot)P(K > k) \quad \text{(by the induction assumption)}$$
$$= P(X_{k+1} \in \cdot, C_{k+1})P(K > k) + P(X_{k+2} \in \cdot)P(C_{k+1}^c)P(K > k)$$
$$= P(X_K \in \cdot, K = k + 1) + P(X_{k+2} \in \cdot)P(K > k + 1).$$

Send $k \to \infty$ in (4.4) to obtain

$$P(X \in \cdot) \ge P(X_K \in \cdot, K < \infty).$$

Let $X_\infty$ be independent of $(T_k, X_k, Y_k, C_k, D_k)_{1 \le k < \infty}$ with distribution

$$P(X_\infty \in \cdot) = \big(P(X \in \cdot) - P(X_K \in \cdot, K < \infty)\big)\big/P(K = \infty).$$

This yields $X_K \overset{D}{=} X$ and in the same way we obtain $Y_N \overset{D}{=} Y$. Now define

$$(\hat X, \hat Y, \hat R, \hat S, \hat C, \hat D) = (X_K, Y_N, T_K, T_N, \{K < \infty\}, \{N < \infty\}). \qquad (4.5)$$

Thus $\hat X \overset{D}{=} X$ and $\hat Y \overset{D}{=} Y$, that is, (4.1) holds.

Further, (b) and (e) yield $P(\theta_{T_K} X_K \in \cdot, K = k) = P(\theta_{T_N} Y_N \in \cdot, N = k)$. Sum over $k$ to obtain

$$P(\theta_{T_K} X_K \in \cdot, K < \infty) = P(\theta_{T_N} Y_N \in \cdot, N < \infty).$$

Thus (4.2) holds.

It remains to establish (4.3). Let $T_\infty$ have distribution $P$ and be independent of $(T_k, X_k, Y_k, C_k, D_k)_{1 \le k < \infty}$, $X_\infty$ and $Y_\infty$. From $X_K \overset{D}{=} X$ and (4.4) we obtain

$$P(X_K \in \cdot, K > k) = P(X_{k+1} \in \cdot)P(K > k).$$



This and $P(X_K \in \cdot, K = \infty) \le P(X_K \in \cdot, K \ge k)$ yield

$$P(X_K \in \cdot, K = \infty) \le P(X_k \in \cdot)P(K \ge k).$$

Use this and $T_\infty \overset{D}{=} T_k$ (and the independence assumptions) to obtain

$$P(\theta_{T_\infty} X_K \in \cdot, K = \infty) \le P(\theta_{T_k} X_k \in \cdot)P(K \ge k). \qquad (4.6)$$

Due to (c) and (f) we have $P(\{\theta_{T_k} X_k \in A_k\} \cap C_k^c) = 0$. This and (4.6) yield

$$P(\theta_{T_\infty} X_K \in A_k, K = \infty) \le P(\{\theta_{T_k} X_k \in A_k\} \cap C_k)P(K \ge k).$$

Thus $P(\theta_{T_\infty} X_K \in A_k, K = \infty) \le P(C_k)P(K \ge k) = P(K = k)$, so

$$\sum_{n \le k < \infty} P\big(\theta_{T_\infty} X_K \in A_k,\ K = \infty\big) \le P(n \le K < \infty) \to 0, \qquad n \to \infty.$$

Put $A = \limsup_{k \to \infty} A_k$ to obtain

$$P(\theta_{T_\infty} X_K \in A, K = \infty) = 0.$$

Proceed similarly using $P(\{\theta_{T_k} Y_k \in A_k^c\} \cap D_k^c) = 0$ and $A^c = \liminf_{k \to \infty} A_k^c$ to obtain

$$P(\theta_{T_\infty} Y_N \in A^c, N = \infty) = 0.$$

Now $T_\infty$ is independent of $(X_K, Y_N, K, N)$ and its distribution $P$ has the same null sets as $\lambda$. Thus the last two displays imply that

$$\int_G P(\{\theta_s X_K \in A\} \cap \{K = \infty\})\,\lambda(ds) = 0 = \int_G P(\{\theta_s Y_N \in A^c\} \cap \{N = \infty\})\,\lambda(ds).$$

This is a reformulation of (4.3), due to Fubini and (4.5).


In the following corollary we return to the setting of Sects. 2 and 3.
Corollary 4.2 Let $G$ be a locally compact second countable Hausdorff topological group with Borel sets $\mathcal{G}$. If $P(X \in B) = P(Y \in B)$, $B \in I$, then

$$\theta_{\hat R} \hat X \overset{D}{=} \theta_{\hat S} \hat Y.$$

Proof Due to Lemma 2.1 and (4.2) we have $P(\{\hat X \in \cdot\} \cap \hat C) = P(\{\hat Y \in \cdot\} \cap \hat D)$ on $I$. By assumption and (4.1), $P(\hat X \in \cdot) = P(\hat Y \in \cdot)$ on $I$, so this implies that

$$P(\{\hat X \in \cdot\} \cap \hat C^c) = P(\{\hat Y \in \cdot\} \cap \hat D^c) \quad\text{on } I. \qquad (4.7)$$

Define a set $B \in \mathcal{E}$ by

$$B = \Big\{x \in E : \int_G 1\{\theta_s x \in A\}\,\lambda(ds) > 0\Big\}$$

and note that

$$B^c \subseteq \Big\{x \in E : \int_G 1\{\theta_s x \in A^c\}\,\lambda(ds) > 0\Big\}.$$

Since $\lambda$ is right-invariant we have $B \in I$. From (4.3) and $B \in I$ we obtain

$$P(\{\hat X \in B\} \cap \hat C^c) = 0 \quad\text{and}\quad P(\{\hat Y \in B^c\} \cap \hat D^c) = 0.$$

From (4.7), $B^c \in I$ and $P(\{\hat Y \in B^c\} \cap \hat D^c) = 0$ we further obtain

$$P(\{\hat X \in B^c\} \cap \hat C^c) = 0.$$

Thus $P(\hat C^c) = 0$ and similarly $P(\hat D^c) = 0$. This and (4.2) yield $\theta_{\hat R} \hat X \overset{D}{=} \theta_{\hat S} \hat Y$.

5 Transfer from X̂ and Ŷ to X and Y

The transfer method works as follows. Let $Z$ and $\hat Z$ be identically distributed random elements in some measurable space $(E, \mathcal{E})$ and $\hat V$ be a random element in a Borel space $(H, \mathcal{H})$. Let $Q$ be the joint distribution of $\hat Z$ and $\hat V$, that is, $Q$ is a probability measure on $(E, \mathcal{E}) \otimes (H, \mathcal{H})$ and

$$P\big((\hat Z, \hat V) \in \cdot\big) = Q.$$

Since $(H, \mathcal{H})$ is Borel there exists a regular version $Q(\cdot \mid \cdot)$ of $P(\hat V \in \cdot \mid \hat Z = \cdot)$. This can be used to extend the underlying probability space $(\Omega, \mathcal{F}, P)$ to support a new random element $V$ as follows: define a measure $Q$ on $(\Omega, \mathcal{F}) \otimes (H, \mathcal{H})$ through

$$Q(A \times B) = \int_A Q(B \mid Z)\,dP, \qquad A \in \mathcal{F},\ B \in \mathcal{H};$$

for $(\omega, v) \in \Omega \times H$ set $V(\omega, v) = v$, $Z(\omega, v) = Z(\omega)$, $\hat Z(\omega, v) = \hat Z(\omega)$, $\hat V(\omega, v) = \hat V(\omega)$; rename the extended space $(\Omega, \mathcal{F}, P)$. Then $Z \overset{D}{=} \hat Z$ still holds and moreover

$$P(V \in \cdot \mid Z = \cdot) = Q(\cdot \mid \cdot) = P(\hat V \in \cdot \mid \hat Z = \cdot).$$

Thus

$$(Z, V) \overset{D}{=} (\hat Z, \hat V).$$

We have transferred $\hat V$ from $\hat Z$ to $Z$ to obtain $V$.
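A minimal numerical sketch of this construction for finite state spaces is as follows (the arrays joint, z_vals and v_vals below are illustrative placeholders, not objects from the paper): V is attached to Z by sampling from the regular conditional distribution Q(· | ·).

import numpy as np

rng = np.random.default_rng(0)

z_vals = [0, 1]                      # state space of Z-hat (and of Z)
v_vals = [0, 1]                      # state space of V-hat
joint = np.array([[0.1, 0.3],        # joint[i, j] = P(Z-hat = z_vals[i], V-hat = v_vals[j])
                  [0.4, 0.2]])
marginal_z = joint.sum(axis=1)                 # law of Z-hat, which equals the law of Z
cond_v_given_z = joint / marginal_z[:, None]   # regular conditional distribution Q(. | z)

n = 100_000
z = rng.choice(len(z_vals), size=n, p=marginal_z)                        # sample Z from its own law
v = np.array([rng.choice(len(v_vals), p=cond_v_given_z[i]) for i in z])  # transfer V given Z

emp = np.zeros_like(joint)           # empirical joint law of (Z, V)
for i, j in zip(z, v):
    emp[i, j] += 1.0 / n
print(np.round(emp, 2))              # close to joint, i.e. (Z, V) has the law of (Z-hat, V-hat)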


In the following corollary we complete the proof of Theorem 2.2.

Corollary 5.1 Let $G$ be a locally compact second countable Hausdorff topological group with Borel sets $\mathcal{G}$. If

$$P(X \in B) = P(Y \in B), \qquad B \in I,$$

then there exists (possibly after extension) a random element $T$ in $(G, \mathcal{G})$ such that

$$\theta_T X \overset{D}{=} Y.$$

Proof Due to Corollary 4.2 we have $\theta_{\hat R} \hat X \overset{D}{=} \theta_{\hat S} \hat Y$, where $\hat X \overset{D}{=} X$, $\hat Y \overset{D}{=} Y$ and $\hat R$, $\hat S$ are random elements in $(G, \mathcal{G})$. Since $(G, \mathcal{G})$ is Borel we can first transfer $\hat R$ from $\hat X$ to $X$ to obtain $R$ such that

$$(X, R) \overset{D}{=} (\hat X, \hat R)$$

and then transfer $\hat S$ from $\hat Y$ to $Y$ to obtain $S$ such that

$$(Y, S) \overset{D}{=} (\hat Y, \hat S).$$

Since $\theta_{\hat R} \hat X \overset{D}{=} \theta_{\hat S} \hat Y$ this implies that

$$\theta_R X \overset{D}{=} \theta_S Y.$$

Now transfer $S$ from $\theta_S Y$ to $\theta_R X$ to obtain a $V$ such that

$$(\theta_R X, V) \overset{D}{=} (\theta_S Y, S).$$

Since $\theta_{S^{-1}} \theta_S Y = Y$ (where $S^{-1}$ denotes the group inverse of $S$) this implies that

$$\theta_{V^{-1}} \theta_R X \overset{D}{=} Y.$$

Put $T = V^{-1} R$ to obtain the desired result, $\theta_T X \overset{D}{=} Y$.
Remark 5.2 (Splitting, Used in the Proof of Theorem 4.1) Let $Z$ be a random element in $(E, \mathcal{E})$ and let $\mu$ be a measure that is a component of the distribution of $Z$, that is,

$$P(Z \in \cdot) \ge \mu.$$

Let $\hat Z$ be a random element in $(E, \mathcal{E})$ and $\hat V$ take the values $0$ and $1$. Let $\hat Z$ and $\hat V$ have the following joint distribution:

$$P(\hat Z \in \cdot, \hat V = 1) = \mu, \qquad P(\hat Z \in \cdot, \hat V = 0) = P(Z \in \cdot) - \mu.$$

Then $\hat Z$ has the same distribution as $Z$, so we can transfer $\hat V$ from $\hat Z$ to $Z$ to obtain $V$ such that $(Z, V)$ has the same distribution as $(\hat Z, \hat V)$. Thus

$$P(Z \in \cdot, V = 1) = \mu,$$

that is, we have introduced a splitting event $C = \{V = 1\}$ cutting out the component $\mu$ from the distribution of $Z$.

References

1. Aldous, D., Thorisson, H.: Shift-coupling. Stoch. Proc. Appl. 44, 1–14 (1993)
2. Berbee, H.C.P.: Random Walk with Stationary Increments and Renewal Theory. Mathematical Centre Tracts, vol. 112. Mathematisch Centrum, Amsterdam (1979)
3. Chatterjee, S., Peled, R., Peres, Y., Romik, D.: Gravitational allocation to Poisson points. Ann. Math. 172, 617–671 (2010)
4. Doeblin, W.: Exposé de la théorie des chaînes simples constantes de Markov à un nombre fini d'états. Rev. Math. Union Interbalkan. 2, 77–105 (1938)
5. Goldstein, S.: Coupling methods for Markov processes. Z. Wahrscheinlichkeitsth. 46, 193–204
(1979)
6. Griffeath, D.: Coupling methods for Markov processes. In: Studies in Probability and Ergodic Theory. Advances in Mathematics. Supplementary Studies, vol. 2 (1978)
7. Holroyd, A.E., Peres, Y.: Extra heads and invariant allocations. Ann. Probab. 33, 31–52 (2005)
8. Hoffman, C., Holroyd, A.E., Peres, Y.: A stable marriage of Poisson and Lebesgue. Ann.
Probab. 34, 1241–1272 (2006)
9. Last, G., Mörters, P., Thorisson, H.: Unbiased shifts of Brownian motion. Ann. Probab. 42,
431–463 (2014)
10. Last, G., Tang, W., Thorisson, H.: Transporting random measures on the line and embedding
excursions into Brownian motion. Ann. Inst. H. Poincaré Probab. Stat. 54, 2286–2303 (2018)
11. Last, G., Thorisson, H.: Invariant transports of stationary random measures and mass-
stationarity. Ann. Probab. 37, 790–813 (2009)
12. Liggett, T.M.: Tagged particle distributions or how to choose a head at random. In: Sidoravicius, V. (ed.) In and Out of Equilibrium. Progress in Probability, vol. 51, pp. 133–162. Birkhäuser, Boston (2002)
13. Lindvall, T.: On coupling of continuous-time renewal processes. J. Appl. Probab. 19, 82–89
(1982)
14. Ney, P.: A refinement of the coupling method in renewal theory. Stoch. Proc. Appl. 11, 11–26
(1981)
15. Pitman, J.: Uniform rates of convergence for Markov chain transition probabilities. Z. Wahrscheinlichkeitsth. 29, 193–227 (1974)

16. Pitman, J., Tang, W.: The Slepian zero set, and Brownian bridge embedded in Brownian motion
by a spacetime shift. Electron. J. Probab. 20, 1–28 (2015)
17. Thorisson, H.: Shift-coupling in continuous time. Probab. Theory Relat. Fields 99, 477–483 (1995)
18. Thorisson, H.: Transforming random elements and shifting random fields. Ann. Probab. 24,
2057–2064 (1996)
19. Thorisson, H.: Point-stationarity in d dimensions and Palm theory. Bernoulli 5, 797–831 (1999)
20. Thorisson, H.: Coupling, Stationarity, and Regeneration. Springer, New York (2000)
Diffusion Approximation Analysis
of Multihop Wireless Networks:
Quality-of-Service and Convergence
of Stationary Distribution

K. S. Ashok Krishnan and Vinod Sharma

Abstract Consider a multihop wireless network, with multiple source–destination pairs. We obtain a channel scheduling policy which can guarantee end-to-end mean delay for different traffic streams. We show the stability of the network for this policy by convergence to a fluid limit. It is intractable to obtain the stationary distribution of this network. Thus, we also provide a diffusion approximation for this scheme under heavy traffic. We further show that the stationary distribution of the scaled process of the network converges to that of the Brownian limit. This theoretically justifies the performance of the system. We verify the theoretical properties by means of simulations.

Keywords Multihop wireless network · Quality-of-service · Diffusion approximation

1 Introduction and Literature Review

A multihop wireless network is constituted by nodes communicating over a wireless channel. Some of the nodes, called source nodes, have data to be sent to other nodes, called receivers. In general, the data will have to be transmitted across multiple hops. The data, originating from different applications, may have different quality-of-service (QoS) requirements, such as delay or bandwidth constraints. Therefore, we need to design routing and link scheduling algorithms that can meet all these requirements.
Network performance has been studied using various mathematical techniques.
Stability of flows in a network is a minimum QoS requirement. Algorithms based

K. S. Ashok Krishnan () · V. Sharma


Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore,
India
e-mail: [email protected]; [email protected]


on backpressure [7] are throughput optimal, which means that they stabilize the
network if it is possible by any other policy. Another approach is to use the
framework of Markov decision processes [17]. The problem of minimizing power
while simultaneously providing mean and hard delay guarantees is studied in
[12]. However, knowledge of system statistics is required, and the scheme is not
throughput optimal. In [13], an algorithm using per hop queue length information is
presented, along with a low complexity approximation that stabilizes a fraction of
the capacity region. In [18], the problem of routing and scheduling transient flows
in a multihop network is studied. They also provide schemes for optimal routing.
The analysis of fluid scaling of networks was pioneered in works such as [16]
and [4], where it was demonstrated that stability of the fluid limit of the network
implies the stability of the network. Further, one may obtain bounds on moments
of asymptotic values of the queues using these techniques [5]. A comprehensive
treatment of work in this direction is provided in [14]. A delay-based scheduling
scheme is proposed in [9], where the analysis of stability uses fluid limits.
Diffusion approximation of networks [20] has been used to study the behaviour
of the system under a scaling corresponding to the functional central limit theorem
[1]. The weak limit of the diffusion scaled systems under heavy traffic is generally
a reflected Brownian motion [8], which under certain assumptions on the scaling
rate has a limiting stationary distribution. This distribution may be used as a proxy
for the actual distribution of the system state. The diffusion approximation of the
MaxWeight algorithm is studied in [19], using properties of certain fluid scaled
paths to obtain properties of the diffusion scaled paths, as in [2]. Of these, [19]
deals with a discrete time switch under the MaxWeight policy.
To obtain the behaviour of the network under stationarity, one also needs to show
the convergence of the stationary distribution of the network to that of the limiting
network. Sufficient conditions for these have only recently been studied, in [6] and
[3], in the case of Jackson networks. An important requirement for the exchange
of limits in [3] to hold is the Lipschitz continuity of an underlying Skorokhod
map, which may not always hold in general. A recent concise survey of diffusion
approximations and convergence of stationary distributions is given in [15].
Our main contributions in this work are summarized below.
– We propose a new link scheduling algorithm to guarantee end-to-end mean delay
for different traffic flows. This algorithm is close to the one proposed in [10] and
has the same fluid limit. Hence, it is also throughput optimal.
– We obtain a reflected Brownian motion (with drift) as the weak limit of the
system under diffusion scaling. This Brownian motion exhibits state space
collapse.
– We also show that the stationary distribution of our network converges to the
stationary distribution of the limiting Brownian network. This allows us to
approximate the stationary distribution of our network by that of the limiting
network which is explicitly available. While diffusion approximations have been
used to traditionally study networks, the proof of convergence of stationary
distributions is still not known in many systems. Our work proves this in a
controlled multihop wireless system with a general scheduling policy with QoS
provisions. However, our proof does not require Lipschitz continuity of the
Skorokhod map, unlike [3].
The rest of the paper is organized as follows. In Sect. 2, we describe the system
model and formulate the control policy used in the network. In Sect. 3, we describe
the two scaling regimes in which we study the network and prove the existence of
the Brownian limit. In Sect. 4, we show that the stationary distribution of the limit
of the scaled process is the stationary distribution of the limiting Brownian process.
In Sect. 5, we provide simulation results, followed by the conclusions in Sect. 6.

2 System Model and Control Policy

We consider a multihop wireless network (Fig. 1). The network is a connected graph $G = (V, E)$ with $V = \{1, 2, \ldots, N\}$ being the set of nodes and $E$ being the set of links on $V$. The system evolves in discrete time denoted by $t \in \{0, 1, 2, \ldots\}$. The links are directed, with link $(i, j)$ from node $i$ to node $j$ having a time varying channel gain $H_{ij}(t)$ at time $t$. Denote the channel gain vector at time $t$ by $H(t)$, evolving as an independent and identically distributed (i.i.d.) process across slots with distribution $\gamma$ over a finite set $\mathcal{H}$. Let $E_h(t)$ denote the cumulative number of slots till time $t$ when the channel state was $h \in \mathcal{H}$. The vector of all $E_h(t)$ is denoted by $E(t)$.

At a node $i$, $A_i^f(t)$ denotes the cumulative (in time) process of exogenous arrival of packets destined to node $f$. The packets arrive as an i.i.d. sequence across slots, with mean arrival rate $\lambda_i^f$ and variance $\sigma_i^f$. Let $\lambda$ denote the vector of all $\lambda_i^f$. All traffic in the network with the same destination $f$ is called flow $f$; the set of all flows is denoted by $\mathcal{F}$. Each flow has a fixed route to follow to its destination. At each node there are queues, with $Q_i^f(t)$ denoting the queue length at node $i$ corresponding to flow $f \in \mathcal{F}$ at time $t$. For a queue $Q_i^f$ with $i \ne f$, we have the queue evolution given by

$$Q_i^f(t) = Q_i^f(0) + A_i^f(t) + R_i^f(t) - D_i^f(t), \qquad (1)$$

Fig. 1 A simplified depiction of a wireless multihop network

where $R_i^f(t)$ is the cumulative arrival of packets by routing (i.e., arrivals from other nodes), and $D_i^f(t)$ is the cumulative departure of packets. Let $S_{ij}^f(t)$ be the cumulative number of packets of flow $f$ transmitted over link $(i, j)$. We write

$$R_i^f(t) = \sum_{k \ne i} S_{ki}^f(t), \quad\text{and}\quad D_i^f(t) = \sum_{j \ne i} S_{ij}^f(t). \qquad (2)$$

We assume that the links are sorted into $M$ interference sets $I_1, I_2, \ldots, I_M$. At any time, only one link from an interference set can be active. A link may belong to multiple interference sets. We also assume that each node transmits at unit power. Then, the rate of transmission between node $i$ and node $j$ is given by an achievable rate function, which depends on $H(t)$ and the schedule at time $t$.

The vector of queues at time $t$ is denoted by $Q(t)$. Similarly, we have the vectors $A(t)$, $R(t)$, $D(t)$ and $S(t)$. Consider a vector $I = [I_{ij}^f]_{(i,j) \in E,\, f \in \mathcal{F}}$. Define $\mathcal{S}$ to be the set of all $I$ that satisfy
1. $I_{ij}^f \in \{0, 1\}$ for all $i, j, f$,
2. $\sum_f I_{ij}^f \le 1$ for all $(i, j) \in E$,
3. $\sum_{(i,j) \in I_m} \sum_f I_{ij}^f \le 1$, $m = 1, \ldots, M$.

Such a vector $I$ is called a schedule. Clearly, any $I \in \mathcal{S}$ has elements $I_{ij}^f$ which, if one, indicate that flow $f$ is to be sent over link $(i, j)$. The constraints listed above represent the fact that no two flows can be transmitted simultaneously on a link at any time. Furthermore, no two links in an interference set can transmit at the same time. For any schedule $I$ and channel state $h$, we assume there exists a channel rate function $\mu = [\mu_{ij}^f]_{(i,j) \in E,\, f \in \mathcal{F}}$, where

$$\mu_{ij}^f = F(h, I), \qquad (3)$$

where $F$ is some achievable rate function.


We want to develop scheduling policies such that the different flows obtain their end-to-end mean delay deadline guarantees. Define $Q_{ij}^f = \max(Q_i^f - Q_j^f, 0)$ and $Q^f(t) = \sum_i Q_i^f(t)$. Let $M(t) = \{F(H(t), I) : I \in \mathcal{S}\}$ be the set of feasible rates at time $t$. Our network control policy is as follows. At each $t$, given the region of feasible rates $M(t)$, we obtain the optimal allocation $\mu^*$,

$$\mu^* = \arg\max_{\mu \in M(t)} \sum_{i,j,f} \alpha(Q^f(t), \bar Q^f)\, Q_{ij}^f(t)\, \mu_{ij}^f, \qquad (4)$$

assuming $Q_{ij}^f > 0$ for at least one link–flow pair $(i, j), f$. If all $Q_{ij}^f$ are zero, we define the solution to be $\mu^* = 0$. We optimize a weighted sum of rates, with more weight given to flows with larger backlogs, with $\alpha$ capturing the delay requirement
of the flow. The weights $\alpha$ are functions of $Q^f(t)$, and $\bar Q^f$ denotes a desired value for the queue length of flow $f$. We use

$$\alpha(x, \bar x) = 1 + \frac{a_1}{1 + \exp(-a_2(x - \bar x))}. \qquad (5)$$

Thus, flows requiring a lower mean delay would have a higher weight compared to flows needing a higher mean delay. Flows whose mean delay requirements are not met should get priority over the other flows. The $\bar Q^f$ are chosen, using Little's law, as $\bar Q^f = \lambda^f D$, where $D$ is the target end-to-end mean delay and $\lambda^f$ is the arrival rate of flow $f$. Note that we will often use $\alpha(x)$ instead of $\alpha(x, \bar x)$ for simplicity of notation.
Let $G_{ijf}^{hI}(t)$ be the number of slots till time $t$ in which the channel state was $h$, the schedule was $I$ and flow $f$ was scheduled over $(i, j)$. Denote the vector of all $G_{ijf}^{hI}(t)$ by $G(t)$. Define the process

$$Z = (A, E, G, D, R, S, Q), \qquad (6)$$

where we have $A = (A(t), t \ge 0)$ (and likewise for the other processes). This process describes the evolution of the system. The state of the system at time $t$ is $Q(t)$, which takes values in a state space $\mathcal{Q}$. Define the capacity region as follows.
Definition 1 The capacity region Λ of the network is the set of all λ for which a
stabilizing policy exists.
We denote the set of real numbers by $\mathbb{R}$, and the set of integers by $\mathbb{Z}$. We use $C[0, \infty)$ to denote the set of all continuous functions from $[0, \infty)$ to $\mathbb{R}$, and $D[0, \infty)$ the set of all right continuous functions with left limits (RCLL) from $[0, \infty)$ to $\mathbb{R}$. We use $\Rightarrow$ to denote weak convergence. For a vector $x$, $|x|$ denotes its norm (modulus). The vector of variables of the form $x_i^j$ over all $i$ and $j$ will be denoted by $(x_i^j)_{i,j}$. For any two vectors $x$ and $y$, we denote their inner product by $\langle x, y\rangle$. For a vector $x = (x_1, \ldots, x_n)$ and scalar $t$, $xt$ will be the product $(x_1 t, \ldots, x_n t)$. We will also need the following definition.

Definition 2 A sequence of functions $\zeta_n$ is said to converge uniformly on compact sets (u.o.c.) to $\zeta$ if $\zeta_n \to \zeta$ uniformly on every compact subset of the domain.

3 Fluid and Diffusion Limits

Now we describe the behaviour of Z under two scaling regimes, fluid and diffusion.

3.1 Fluid Scaling

For the process $Z$, define the scaled continuous time process

$$z^n(t) = \frac{Z(\lfloor nt \rfloor)}{n}, \qquad (7)$$

where $\lfloor \cdot \rfloor$ represents the floor function. This is called the fluid scaled process. Note that the time argument $t$ on the left side is continuous, while that on the right is discrete. Whether a time argument is discrete or continuous will be generally clear from the context. Let $z^n$ denote the process $(z^n(t), t \ge 0)$. We have

$$z^n = (a^n, e^n, g^n, d^n, r^n, s^n, q^n), \qquad (8)$$

with the scaling in (7) being applied to each component of $Z$. Note that $a^n = (a_i^{f,n})_{i,f}$, and a similar notational convention holds for all the constituent functions of $z$. The limit of $z^n$, as $n \to \infty$, offers insight into the behaviour of the system under the scheduling policy in (4). The following result can be shown for our policy.
Lemma 1 The algorithm described by the slot-wise optimization in (4) stabilizes
the system for all arrival rate vectors λ in the interior of Λ. Here, stability implies
that the Markov chain Q(t) is positive recurrent.
The proof of this lemma proceeds on the lines of the proof of Theorems 1 and 3
in [10]. It can be shown that there exists a subsequential limit z = (a, e, g, d, r, s, q)
for the family {zn , n ≥ 0}. This z is called the fluid limit, and the convergence of the
processes is u.o.c. The limiting functions are also Lipschitz continuous, and hence
almost everywhere differentiable. The points t at which it is differentiable are called
regular points. In addition, the limiting functions satisfy the following properties
(see [11]):

$$a(t) = \lambda t, \qquad e(t) = \gamma t, \qquad (9)$$

$$r_i^f(t) = \sum_j s_{ji}^f(t), \qquad d_i^f(t) = \sum_j s_{ij}^f(t), \qquad (10)$$

$$q(t) = q(0) + a(t) + r(t) - d(t), \qquad (11)$$

$$\dot q(t) = \lambda + \dot r(t) - \dot d(t), \qquad (12)$$

$$\sum_I g_{ijf}^{hI}(t) = e_h(t), \qquad s_{ij}^f(t) = \int_0^t \dot s_{ij}^f(\tau)\,d\tau, \qquad (13)$$

and $\dot s(t)$ satisfies

$$\sum_{i,j,f} \alpha(q^f(t))\, q_{ij}^f(t)\, \dot s_{ij}^f(t) = \max_{\bar\mu} \sum_{i,j,f} \alpha(q^f(t))\, q_{ij}^f(t)\, \bar\mu_{ij}^f, \qquad (14)$$

where the dot indicates the derivative, at regular $t$, and $\bar\mu = \sum_h \gamma_h\, \mu(h, S)$, where $\mu(h, S)$ is an achievable rate when the channel is in state $h$ and the schedule is $S$.

Using the Lyapunov function

$$L_1(q(t)) = -\int_t^\infty \exp(t - \tau) \sum_{i,f} \alpha(q^f(\tau))\, q_i^f(\tau)\, \dot q_i^f(\tau)\,d\tau,$$

we can establish that the fluid system is stable, and consequently, so is the stochastic system. We can also show that the draining time of the system, which is the time for all the fluid queues to go to zero, is of the form $T|x|/\epsilon$, where $T$ is a finite quantity, $|x|$ is the initial norm of the fluid queues and $\epsilon$ denotes the distance of $\lambda$ to the boundary of $\Lambda$.
of Λ.
Studying the fluid limit gives us insights into the stability properties of the
system. However, it only proves the existence of a stationary distribution. In order
to predict the behaviour of the system, one needs the stationary distribution, or
some approximation to the same. However, explicitly computing the stationary
distribution for our system is not feasible. Thus, we define the heavy traffic regime,
and the associated diffusion scaling, below. We will also show that the stationary
distribution of our system process converges to that of the limiting Brownian
network. This will provide us an approximation of the stationary distribution under
heavy traffic, the scenario of most practical interest.

3.2 Diffusion Scaling

Consider a sequence of systems $Z^n$. Each system differs from the other in its arrival rate $\lambda^n$. The $\lambda^n$ are chosen such that, as $n \to \infty$, $\lambda^n \to \lambda^*$, and

$$\lim_{n \to \infty} n\langle \psi, \lambda^n - \lambda^*\rangle = b^* \in \mathbb{R}, \qquad (15)$$

where $\lambda^*$ is a point on the boundary of $\Lambda$, and $\psi$ denotes the outer normal vector to $\Lambda$ at the point $\lambda^*$. This is known as heavy traffic scaling. We will also assume that $\lambda^*$ falls in the relative interior of one of the faces of the boundary of $\Lambda$. For this sequence of systems, we define the diffusion scaling, given by

$$\hat z^n(t) = \frac{Z^n(n^2 t)}{n}. \qquad (16)$$
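As a simple illustration (an example choice of sequence, not one specified in the paper), the arrival rates

$$\lambda^n = \lambda^* + \frac{b^*}{n\,|\psi|^2}\,\psi, \qquad\text{for which}\qquad n\langle \psi, \lambda^n - \lambda^*\rangle = b^* \ \text{for every } n,$$

satisfy (15) exactly; with $b^* < 0$ the rates approach the boundary point $\lambda^*$ from inside $\Lambda$.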

Let $\hat z^n$ denote the process $(\hat z^n(t), t \ge 0)$. As before, we have

$$\hat z^n = (\hat a^n, \hat e^n, \hat g^n, \hat d^n, \hat r^n, \hat s^n, \hat q^n).$$

Define the system workload $W^n(t)$ in the direction $\psi$ as

$$W^n(t) = \langle \psi, Q^n(t)\rangle, \qquad (17)$$

and

$$\hat w^n(t) = \frac{W^n(n^2 t)}{n}.$$

Denote $\hat w^n = (\hat w^n(t), t \ge 0)$. Define an invariant point to be a vector $\phi$ that satisfies, for some $k > 0$,

$$\alpha(\phi)\,\phi = k\psi, \qquad (18)$$

where $\alpha(\phi)$ is the vector of all $\alpha(\phi_j)$, with $\alpha$ defined in (5), and the product of the vectors is element-wise. Then, we have the following result, which characterizes the weak convergence of the diffusion scaled processes.
Theorem 1 Consider $\{\hat z^n, n \in \mathcal{N}\}$, under heavy traffic scaling satisfying (15), and $\mathcal{N}$ a sequence of positive integers $n$ increasing to infinity. Assume that the fluid scaled $z = (a, e, g, d, r, s, q)$ has components $a = (a_i^f)_{i,f}$ and $e = (e_h)_{h \in \mathcal{H}}$ that satisfy, with probability one, as $m \to \infty$, for any $T > 0$, for all $i$, $j$, $f$, $c \in \mathcal{H}$,

$$\max_{0 \le \ell \le mT}\ \sup_{0 \le u \le 1} \big|a_i^f(m, \ell + u) - a_i^f(m, \ell) - \lambda_i^f u\big| \to 0, \qquad (19)$$

$$\max_{0 \le \ell \le mT}\ \sup_{0 \le u \le 1} \big|e_c(m, \ell + u) - e_c(m, \ell) - \gamma_c u\big| \to 0. \qquad (20)$$

Further, assume that

$$\hat q^n(0) \Rightarrow c\phi, \qquad (21)$$

where $c$ is a non-negative real number. Then, the sequence $\{\hat w^n, n \in \mathcal{N}\}$ converges weakly to a reflected Brownian motion $\hat w$ as $n \to \infty$, in $D[0, \infty)$. Further, $\{\hat q^n, n \in \mathcal{N}\}$ converges weakly to $\phi \hat w$.
The existence of the Brownian limit is demonstrated as follows. We write the
scaled workload ŵn as the sum of two terms, one of which converges to a Brownian
motion, and the second as its corresponding regulating process. Together, they act
as a reflected Brownian motion. The detailed proof is available in [11].

Having established the existence of a limiting Brownian motion, we proceed to demonstrate that the stationary distributions of the scaled systems converge to the stationary distribution of the Brownian motion, in the next section.

4 Convergence of Stationary Distributions

In order to establish the convergence of stationary distributions, we use the following result, which is a consequence of Theorems 3.2, 3.3 and 3.4 of [3].

Lemma 2 Assume that, for all nodes $i$, $j$, flows $f$, for any $n \ge 1$, $t \ge 0$, we have, for some $B < \infty$,

$$E\Big[\sup_{0 \le k \le t} \big|A_i^{f,n}(k) - \bar a_i^{f,n}(k)\big|^2\Big] \le Bt, \qquad (22)$$

$$E\Big[\sup_{0 \le k \le t} \big|R_i^{f,n}(k) - \bar r_i^{f,n}(k)\big|^2\Big] \le Bt, \qquad (23)$$

$$E\Big[\sup_{0 \le k \le t} \big|D_i^{f,n}(k) - \bar d_i^{f,n}(k)\big|^2\Big] \le Bt. \qquad (24)$$

Further, assume that there exists $T$ such that for all $t \ge T$, we have

$$\lim_{|x| \to \infty}\ \sup_n\ \frac{1}{|x|^2}\, E\big|\hat q_x(n, t|x|)\big|^2 = 0. \qquad (25)$$

Then the sequence of distributions $\{\pi_n\}$ is tight.


It can be shown that the above conditions are satisfied in our case, as stated below.
Lemma 3 In our system model, conditions (22)–(24) hold. Further, there exists T
such that (25) holds. Consequently, the sequence {πn } is tight.
Proof See [11].
As a consequence of the above two lemmas, we have the following result.
Theorem 2 As $n \to \infty$,

$$\hat q^n(\infty) \Rightarrow \phi\, \hat w(\infty), \qquad (26)$$

where the time argument being infinity denotes the respective stationary distributions.
Proof See [11].

The Brownian motion $\hat w$ obtained as the limit of $\hat w^n$ is a unidimensional reflected Brownian motion, having drift $b^* < 0$. The distribution of $\hat w(\infty)$ is given by [8],

$$P[\hat w(\infty) < y] = 1 - \exp(2b^* y/\sigma^2). \qquad (27)$$

This therefore becomes an approximation for the queue length distribution of the system under heavy traffic.
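In particular, since (27) is an exponential distribution with rate $2|b^*|/\sigma^2$, its mean, which is the quantity behind the approximations used in Sect. 5, is

$$E[\hat w(\infty)] = \int_0^\infty P[\hat w(\infty) > y]\,dy = \int_0^\infty e^{2b^* y/\sigma^2}\,dy = \frac{\sigma^2}{2|b^*|}.$$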

5 Numerical Simulations

For simulations, we consider two topologies. In both cases, the slot-wise allocation is done by performing the optimization (4). We compute the solution numerically by means of an exhaustive search. Since the search space of the optimization increases exponentially in the number of channel states, we limit the channel states to take values over a finite set of small size.
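A minimal sketch of this slot-wise exhaustive search is given below; it is our own illustration rather than the authors' code, and the rate function rate, the collection schedules and the parameters a1, a2 are placeholders for the quantities in (3)–(5).

import math

def alpha(x, x_bar, a1=5.0, a2=1.0):
    # weight function (5): roughly 1 far below the target x_bar, roughly 1 + a1 far above it
    return 1.0 + a1 / (1.0 + math.exp(-a2 * (x - x_bar)))

def schedule_slot(q, q_bar, links, flows, schedules, rate, h):
    # One-slot exhaustive search for the maximizer in (4).
    # q: dict (node, flow) -> queue length; q_bar: dict flow -> target queue length (lambda_f * D);
    # links: list of directed links (i, j); schedules: iterable of feasible schedules,
    # each a dict (link, flow) -> 0/1; rate(h, I, link, flow): achievable rate under
    # channel state h and schedule I (placeholder for F in (3)).
    nodes = {i for (i, j) in links} | {j for (i, j) in links}
    q_tot = {f: sum(q.get((i, f), 0) for i in nodes) for f in flows}

    def q_diff(link, f):  # Q_ij^f = max(Q_i^f - Q_j^f, 0)
        i, j = link
        return max(q.get((i, f), 0) - q.get((j, f), 0), 0)

    best_sched, best_val = None, 0.0
    for I in schedules:
        val = sum(alpha(q_tot[f], q_bar[f]) * q_diff(link, f) * rate(h, I, link, f)
                  for link in links for f in flows if I.get((link, f), 0))
        if val > best_val:
            best_sched, best_val = I, val
    return best_sched  # None corresponds to mu* = 0 (all Q_ij^f zero)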
Example 1 We consider a star network topology (Fig. 2). There are two Poisson distributed arrival processes, one arriving at node 1, with node 4 as its destination. The other arrives at node 2, with node 5 as destination. Two links which share a common node interfere with each other. Thus, there is one interference set, which contains all the links. Consequently, only one link can be active at a time. We assume that the channels are independent and identically distributed, with the distribution being uniform over the values $\{0, 1, 2, 3\}$. The arrival vector $(\lambda_1, \lambda_2) = (\lambda, \lambda)$, i.e., increasing along the line of unit slope. Under heavy traffic, it is easy to see that, given the interference constraints, it is optimal to schedule the link with the highest channel gain. From simulations, the maximum arrival rate that can be supported by scheduling the link with the highest channel gain yields $\lambda^* = (0.65, 0.65)$. From the diffusion approximation and (27), we can see that the mean of the Brownian motion corresponding to the queue can be approximated by the vector $\phi\,\frac{\sigma^2}{2b^*}$. The Brownian motion is a limit of the scaled process of the form $\frac{Q(n^2 t)}{n}$. For a large $n$, we may approximately write $Q(n^2 t) \approx n\phi\,\frac{\sigma^2}{2b^*}$. If we run the simulations for a

Fig. 2 Example 1: the network

Table 1 Approximation of queues

Arrival rate λ   Mean queue length   Approximation
0.64             233                 232
0.641            263                 258
0.642            319                 290
0.643            367                 332
0.644            381                 387
0.645            479                 465
0.646            517                 581
0.647            568                 775

The mean queue length of the flow 1 → 3 → 5 corresponding to various arrival rates is displayed, along with the numerical approximation

Table 2 Mean queue length target and obtained, for both flows

λ       Mean queue length asked   Queue length obtained
0.63    (250, 100)                (213, 98)
0.64    (250, 100)                (264, 110)
0.641   (250, 100)                (292, 120)

time n, we may further also approximately write b∗ = n|λ − λ∗ |. Hence, we have


the approximation,

Q(∞) ≈ φ σ² / (2|λ − λ*|).    (28)

We will be looking at the total queue length of the flow 1 → 3 → 4. The value of σ² is 2λ + σ̂². The vector φ is approximately (1/√2, 1/√2). (The value of Q̄ for both queues is set at 100.) We take σ̂² ≈ 8. The values of the total queue length of the flow 1 → 3 → 4 are listed in Table 1 (owing to symmetry, both queue lengths are the same), for simulation runs of length 10⁵, averaged over 20 simulations. It can be seen that the approximations follow the queue length closely.
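For concreteness, the evaluation of (28) can be scripted as below (this snippet is not part of the original analysis). The values λ* = 0.65 and σ̂² ≈ 8 are taken from the text; the scalar weight 1/2 applied to the single-flow queue is an assumption about how the two-dimensional vector φ = (1/√2, 1/√2) enters the per-flow approximation. With these choices the printed values reproduce, up to rounding, the approximation column of Table 1.

# Evaluating the heavy-traffic approximation (28) for the queue of one flow in Example 1.
lambda_star = 0.65        # capacity boundary taken from the text
sigma_hat_sq = 8.0        # sigma_hat^2, as taken in the text
phi_eff = 0.5             # assumed effective per-flow weight (see the remark above)

def queue_approx(lam):
    sigma_sq = 2.0 * lam + sigma_hat_sq            # sigma^2 = 2*lambda + sigma_hat^2
    return phi_eff * sigma_sq / (2.0 * abs(lam - lambda_star))

for lam in (0.64, 0.641, 0.642, 0.643, 0.644, 0.645, 0.646, 0.647):
    print(f"lambda = {lam:.3f}   approximation = {queue_approx(lam):6.0f}")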
In order to demonstrate that the algorithm can satisfy different QoS requirements,
we simulate the network at three points in the interior of the capacity region. The
mean queue length asked from the flows is 250 and 100, respectively. We also pick
a2 in the expression of α for the second flow to be 4, since it requires a tighter
constraint to be met. In Table 2, the first column gives the arrival rate, the second
shows the target queue length for the two flows and the final column shows the
queue length obtained. We see that the end-to-end mean queue length requirement
is met for both the flows till rate 0.64. The capacity boundary is at 0.65. Thus, our
algorithm can provide QoS under heavy traffic as well.

Fig. 3 Example 2: the network

Example 2 Consider the network in Fig. 3. The arrival process, channel state
distribution and interference constraints are the same as in Example 1. There are
three flows, 1 → 3 → 4 → 6 → 8, 2 → 3 → 4 → 5 and 7 → 4 → 6 → 9.
They will be called flow 8, flow 5 and flow 9. From simulations, the boundary
of the capacity region, λ∗ ≈ (0.59, 0.59, 0.01). We take arrival rates close to this
point and show the values of total queue length of flow 8 obtained by simulations
and the numerical approximations (using (28)), in Table 3. For calculating the
approximation, we use σ̂ 2 ≈ 9. In this case also, the approximations track the
queue lengths well. Just as in the previous case, we provide an example to show
how the queue length values meet targets, in Table 4. These are simulated at the
arrival rate (0.55, 0.55, 0.01), which is in the interior of the capacity region. In the
weight function α, we use a1 = 5, a2 = 1 to give weights to flows. Since flows
8 and 5 are competing for network resources, delays of both cannot be reduced
simultaneously. This is also clear from the simulations.

Table 3 Total queue length of flow 8: simulation and numerical approximation

Arrival rate λ   Mean queue length   Approximation
0.5              21                  26
0.54             52                  47
0.56             99                  79
0.57             119                 144
0.58             253                 239
0.582            331                 299
0.584            403                 399
0.585            457                 479

Table 4 Entries of the form (a, b) indicate delay target a, delay achieved b. Mean delay (slots) for each flow

Flow 8        Flow 5        Flow 9
(50, 52)      (100, 112)    9
(40, 46)      (100, 114)    9
(100, 139)    (50, 53)      21
Arrival rate is (0.55, 0.55, 0.01)

6 Conclusion

We have presented an algorithm for scheduling in multihop wireless networks


that guarantees end-to-end mean delays of the packets transmitted in the network.
The algorithm is throughput optimal. Using diffusion scaling, we obtain the
Brownian approximation of the algorithm. We also prove theoretically that the
stationary distribution of the limiting Brownian motion is the limit of the stationary distributions of a sequence of scaled systems, and is consequently a good approximation for the
stationary distribution of the original system. Using these relations, we obtain an
approximation for queue lengths, and demonstrate via simulations that these are
accurate.

References

1. Billingsley, P.: Convergence of Probability Measures. Wiley, London (1968)


2. Bramson, M.: State space collapse with application to heavy traffic limits for multiclass
queueing networks. Queueing Syst. 30(1–2), 89–140 (1998)
3. Budhiraja, A., Lee, C.: Stationary distribution convergence for generalized Jackson networks
in heavy traffic. Math. Oper. Res. 34(1), 45–56 (2009)
4. Dai, J.G.: On positive Harris recurrence of multiclass queueing networks: a unified approach
via fluid limit models. Ann. Appl. Probab. 5, 49–77 (1995)
5. Dai, J.G., Meyn, S.P.: Stability and convergence of moments for multiclass queueing networks
via fluid limit models. IEEE Trans. Autom. Control 40(11), 1889–1904 (1995)
6. Gamarnik, D., Zeevi, A., et al.: Validity of heavy traffic steady-state approximations in
generalized Jackson networks. Ann. Appl. Probab. 16(1), 56–90 (2006)
7. Georgiadis, L., Neely, M.J., Tassiulas, L., et al.: Resource allocation and cross-layer control in
wireless networks. Found. Trends® Netw. 1(1), 1–144 (2006)
8. Harrison, J.: Brownian Motion and Stochastic Flow Systems. Wiley, London (1985)
9. Ji, B., Joo, C., Shroff, N.B.: Delay-based back-pressure scheduling in multihop wireless
networks. IEEE/ACM Trans. Netw. 21(5), 1539–1552 (2013)
10. Krishnan, A., Sharma, V.: Distributed control and quality-of-service in multihop wireless
networks. In: 2018 IEEE International Conference on Communications (ICC), pp. 1–7. IEEE,
Piscataway (2018)
11. Krishnan, A., Sharma, V.: Quality-of-service in multihop wireless networks: diffusion approx-
imation (2018). arXiv:1810.12209
12. Kumar, S.V., Sharma, V.: Joint routing, scheduling and power control providing hard deadline
in wireless multihop networks. In: 2017 Information Theory and Applications Workshop (ITA).
San Diego (2017)
13. Li, B., Srikant, R.: Queue-proportional rate allocation with per-link information in multihop
wireless networks. Queueing Syst. 83(3–4), 329–359 (2016)
14. Meyn, S.: Control Techniques for Complex Networks. Cambridge University Press, Cambridge
(2008)
15. Miyazawa, M.: Diffusion approximation for stationary analysis of queues and their networks:
a review. J. Oper. Res. Soc. Jpn. 58(1), 104–148 (2015)
16. Rybko, A.N., Stolyar, A.L.: Ergodicity of stochastic processes describing the operation of open
queueing networks. Problemy Peredachi Informatsii 28(3), 3–26 (1992)
17. Singh, R., Kumar, P.: Throughput optimal decentralized scheduling of multi-hop networks with
end-to-end deadline constraints: unreliable links (2016). arXiv:1606.01608

18. Siram, V., Varma, K., et al.: Routing and scheduling transient flows for QoS in multi-hop wire-
less networks. In: 2018 International Conference on Signal Processing and Communications
(SPCOM). Bangalore (2018)
19. Stolyar, A.L., et al.: Maxweight scheduling in a generalized switch: state space collapse and
workload minimization in heavy traffic. Ann. Appl. Probab. 14(1), 1–53 (2004)
20. Williams, R.J.: Diffusion approximations for open multiclass queueing networks: sufficient
conditions involving state space collapse. Queueing Syst. 30(1–2), 27–88 (1998)
Analysis of Retrial Queue
with Heterogeneous Servers
and Markovian Arrival Process

Liu Mei and Alexander Dudin

Abstract Multi-server retrial queueing system with heterogeneous servers is ana-


lyzed. Customers arrive to the system according to the Markovian arrival process.
Arriving primary customers and customers retrying from orbit occupy available
server with the highest service rate, if any. Otherwise, the customers move to the
orbit having an infinite capacity. Service times have exponential distribution. The
total retrial rate infinitely increases when the number of customers in orbit increases.
Behavior of the system is described by multi-dimensional continuous-time Markov
chain which belongs to the class of asymptotically quasi-Toeplitz Markov chains.
This allows us to derive a simple and transparent ergodicity condition and to compute the stationary distribution of the chain. The numerical results presented illustrate the dynamics of some performance indicators of the system when the average arrival rate increases, and show the importance of accounting for correlation in the arrival process.

Keywords Retrial queue · Heterogeneous servers · Markovian arrival process

1 Introduction

Theory of retrial queues is an important part of queueing theory that takes into
account the effect of retrials. Capacity of the system is finite and some customers
cannot be accepted for service immediately upon arrival due to the temporary

Research is supported by “RUDN University Program 5–100” and the grant F19KOR-001 of
Belarusian Republican Foundation for Fundamental Research.

L. Mei
Belarusian State University, Minsk, Belarus
A. Dudin ()
Belarusian State University, Minsk, Belarus
Peoples Friendship University of Russia (RUDN University), Moscow, Russia
e-mail: [email protected]


unavailability of the capacity. In contrast to the queues with buffers where such
customers are placed in a buffer and then picked up for service according to some discipline, and to the queues with losses where such customers are lost, in retrial queues such customers move to some virtual place called the orbit and try to get access to service after random intervals of time. Due to their high practical interest, retrial queues attract a lot of attention from researchers. The field of applications of the theory
of retrial queues includes various telecommunication systems with disciplines of
multiple access, databases, call centers, etc. For references to the state of the art in
research in retrial queues the books [1] and [9] are recommended.
Due to the state-inhomogeneous behavior of the Markov chains that describe
behavior of retrial queues, their analysis is essentially more involved than analysis
of queues with buffers or losses. The most essential difficulties arise in analysis
of multi-server retrial queues even in the simplest assumptions about the arrival,
service, and retrial processes, see, e.g., the study of the M/M/N retrial queue with
the classical retrial policy presented in [9]. The difficulties essentially increase if
more realistic assumptions about the arrival and service process are imposed. In
[2], the BMAP /P H /N type retrial queue is studied. Here, the BMAP stands
for the batch Markovian arrival process introduced in [13] as a potentially useful
descriptor of the correlated bursty flows in modern telecommunication networks.
For more information about the BMAP and related research see [3, 20]. In our
paper, we assume that the arrival process is the MAP which is the particular case
of the BMAP when no batch arrival is allowed. Abbreviation P H denotes phase-
type distribution, see [15]. This class of distributions is quite wide and includes, in
particular, exponential, Erlangian, Coxian distributions, and their mixtures.
In consideration of multi-server queues, usually it is assumed that the servers
are identical and an arbitrary idle server is engaged with equal probability for
the service when the new customer arrives. Much less investigated are the queues
with heterogeneous servers which are more interesting subject for research. Often,
quite non-trivial optimization problems relating to assigning the servers to arriving
customers, depending of relations of the means service rates and costs of their use,
arise. The problem of an optimal allocation of jobs between heterogeneous servers
aiming to minimize the mean number of jobs in the ordinary queueing system was
considered in [5, 11, 12, 16–19]. It was shown that the optimal policy belongs to
a class of monotone policies, i.e., threshold policies, which use a slow server only
when the queue length exceeds a certain threshold. In paper [6], it is shown for the
retrial queue with heterogeneous customers and the classical retrial policy that a
threshold policy is also optimal for retrial queues and an algorithm, which allows
to construct optimal policies for a versatile class of queueing systems, is proposed.
Analogous analysis is given in [6] for the case of the constant retrial rate.
Multi-server retrial queues, in which the servers are homogeneous, however the
new arriving customer does not select an arbitrary idle server with equal probability
but is addressed for the service to the certain concrete server, are considered, e.g., in
[8] and [14].
In this paper, we address the multi-server retrial queue of MAP /M̂N /N type.
The symbols M̂N mean that distributions of service time at the servers are

exponential with different service rates. We assume that the servers are enumerated
in the order of decreasing the rates, i.e., the server-1 is the fastest, . . . , the server-N
is the slowest. Known results about the structure of the optimal control, see, e.g.,
[7], assume that the decision-maker has an opportunity to observe the number of
customers in the orbit and activates a new, slower, server if this number exceeds
the definite threshold. In our paper, we make an assumptions that: (1) the number of
customers in the orbit is not observable that takes place in the majority of real-world
systems because the orbit is a virtual place and indeed the waiting customers are
distributed in some, probably very wide, area and are invisible; (2) service discipline
is conservative. This means that if the customer from the orbit makes an attempt and
not all servers are busy, this customer will be accepted for service. The problem of
choosing a concrete server from the set of available servers is quite difficult. Its
solution should be prefaced by formulation of some economic criterion including,
e.g., costs of waiting of customers in orbit (or sojourn time in the system) and costs
of using available servers per unit of time. In the borders of this paper, we do not
account the economic aspects (this is planned to in further research) and examine the
discipline of servers assignment as: the fastest server should be used first. Change
of the server is not allowed during the service of any customer.
The structure of the paper is as follows. In Sect. 2, mathematical model is
formulated. In Sect. 3, dynamics of the considered system is described by the
multi-dimensional continuous-time Markov chain. The generator of this chain is
presented. It is shown that this Markov chain belongs to the class of asymptotically
quasi-Toeplitz Markov chains. In Sect. 4, sufficient conditions for ergodicity and
non-ergodicity of this Markov chain are presented. Section 5 contains a short
comment about computation of the stationary distribution of the Markov chain and
some performance measures of the system. Section 6 is devoted to brief description
of the numerical results. Section 7 concludes the paper.

2 The Mathematical Model

We consider an N-server queueing system. The primary customers arrive to the


system according to a Markovian arrival process (MAP ). We denote the directing
process of the MAP by νt , t ≥ 0. The state space of the irreducible continuous-
time Markov chain νt is {0, 1, . . . , W }. The intensities of transitions of the process
νt are defined as the entries of matrices (D0 ,D1 ) of size W̄ = W + 1. The matrix
D(1) = D0 + D1 is an infinitesimal generator of the process νt . The vector θ that
is the unique solution to the system of equations θ D(1) = 0, θe = 1 defines the
stationary distribution of the process νt . Here and thereafter e is a column vector
of an appropriate size consisting of 1’s and 0 is a row-vector of an appropriate size
consisting of 0’s.
The average (fundamental) arrival rate λ of the MAP is defined as λ = θ D1 e.
The coefficient of variation c_var of the intervals between customer arrivals is defined by c_var² = 2λθ(−D₀)⁻¹e − 1. The coefficient of correlation c_cor of two successive intervals between arrivals is computed as c_cor = (λθ(−D₀)⁻¹D₁(−D₀)⁻¹e − 1)/c_var².
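These descriptors are easy to evaluate numerically. The following short Python sketch (not part of the paper) computes λ, c_var² and c_cor for a given pair (D₀, D₁); applied to the two-state MAP used in Sect. 6, it returns an arrival rate of about 1 and a correlation coefficient of about 0.2.

import numpy as np

def map_descriptors(D0, D1):
    """Fundamental rate, squared coefficient of variation and lag-1 correlation of a MAP."""
    D = D0 + D1
    m = D.shape[0]
    # stationary vector theta of the generator D: theta D = 0, theta e = 1
    A = np.vstack([D.T, np.ones(m)])
    b = np.zeros(m + 1); b[-1] = 1.0
    theta = np.linalg.lstsq(A, b, rcond=None)[0]
    e = np.ones(m)
    M = np.linalg.inv(-D0)
    lam = theta @ D1 @ e
    c_var_sq = 2.0 * lam * (theta @ M @ e) - 1.0
    c_cor = (lam * (theta @ M @ D1 @ M @ e) - 1.0) / c_var_sq
    return lam, c_var_sq, c_cor

# Example: the two-state MAP used in Sect. 6 of this chapter.
D0 = np.array([[-1.35164, 0.0], [0.0, -0.04387]])
D1 = np.array([[1.34265, 0.00899], [0.02443, 0.01944]])
print(map_descriptors(D0, D1))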

Service time distribution is assumed to be exponential. Different servers have


different service rates, μ₁, μ₂, …, μ_N, respectively. Here, we assume that the servers are enumerated in such a way that the inequalities μ₁ > μ₂ > ⋯ > μ_N are fulfilled. However, in the future the obtained results can be used for solving the problem of the optimal numeration of the servers, taking into account not only the
service rates, but also the costs of the use of the servers.
If an arriving customer finds all servers idle, the customer enters the first server to receive service. If the first server is busy, then the customer enters the
idle server with the minimum number. If all servers are busy, then the customer
goes to the orbit. Capacity of the orbit is unlimited. These customers are said to be
repeated customers. These customers try their luck later until they are served.
We assume that the total flow of retrials is such that the probability of generating the
retrial attempt in the interval (t, t + Δt) is equal to αi Δt + o (Δt) when the orbit
size (the number of customers on the orbit) is equal to i, i > 0, αi = 0 when
i = 0. We do not fix the explicit dependence of the intensities αi on i. We assume
the infinitely increasing retrial rate: lim_{i→∞} α_i = ∞. This holds true, in particular, for the classic retrial strategy where α_i = iα and for the linear strategy α_i = iα + γ.
Our goal is to derive the stationary state distribution of the system.

3 The Process of the System States

Let, at the moment t, t > 0,


– i_t be the number of customers on the orbit, i_t ≥ 0;
– ξ_t^{(n)} be the state of the service on the nth server, n = 1,…,N:

  ξ_t^{(n)} = \begin{cases} 0, & \text{if the } n\text{th server is idle},\\ 1, & \text{if the } n\text{th server is busy}; \end{cases}

– ν_t be the state of the directing process of the MAP, ν_t = 0,…,W.

Consider the continuous-time multi-dimensional process

ζ_t = {i_t, ξ_t^{(1)}, …, ξ_t^{(N)}, ν_t},  t ≥ 0.

It is easy to see that this process is an irreducible Markov chain.


Let us assume that the stationary probabilities of this Markov chain

π(i, r^{(1)}, …, r^{(N)}, ν) = \lim_{t→∞} P( i_t = i, ξ_t^{(1)} = r^{(1)}, …, ξ_t^{(N)} = r^{(N)}, ν_t = ν )

exist for any i ≥ 0, r^{(n)} = 0, 1, n = 1,…,N, ν = 0,…,W.



Enumerate the states of the chain ζ_t, t ≥ 0, in lexicographic order and form the row-vector

π(i, r^{(1)}, …, r^{(N)}) = (π(i, r^{(1)}, …, r^{(N)}, 0), …, π(i, r^{(1)}, …, r^{(N)}, W))

of the stationary probabilities π(i, r^{(1)}, …, r^{(N)}, ν), and the row-vectors π_i consisting of the vectors π(i, r^{(1)}, …, r^{(N)}), i ≥ 0. Note that the size of the vectors π_i is equal to K = (W + 1)2^N. Define also the infinite-dimensional probability vector π = (π_0, π_1, π_2, …).
For the use in the sequel, introduce the following notation:
• I is an identity matrix of appropriate dimension (when needed, the dimension is identified with a suffix);
• O_n denotes a zero matrix of size n;
• ⊗ and ⊕ are the symbols of the Kronecker product and sum of matrices, and S^{⊗l} = S ⊗ ⋯ ⊗ S (l factors), l ≥ 1;
• J is the square matrix of size 2^N given by J = diag{0, …, 0, 1};
• diag{…} means the diagonal matrix with the diagonal entries given in the brackets;
• G = \begin{pmatrix} 0 & 0 & 0 & 0 \\ \mu_N & -\mu_N & 0 & 0 \\ \mu_{N-1} & 0 & -\mu_{N-1} & 0 \\ 0 & \mu_{N-1} & \mu_N & -\mu_{N-1}-\mu_N \end{pmatrix};
• \check{I}_k^{(n)} = I_{2^{n-k-2}} \otimes \begin{pmatrix} O_{2^{k+1}} & O_{2^{k+1}} \\ I_{2^{k+1}} & -I_{2^{k+1}} \end{pmatrix}, k = 1,…,n−2, n = 2,…,N−1;
• a_1 = (0, 1)^T, a_2 = (1, 0)^T, b_1 = (0, 1), b_2 = (1, 0).
Lemma 1 If the vector π of stationary probabilities exists, then it satisfies the
equilibrium equations

π Q = 0, πe = 1

where 0 is the infinite row-vector consisting of zeroes and the matrix Q, which is the
infinitesimal generator of the chain ζt , t ≥ 0, has the following structure:
Q = \begin{pmatrix} Q_{00} & Q_{01} & 0 & 0 & \cdots \\ Q_{10} & Q_{11} & Q_{12} & 0 & \cdots \\ 0 & Q_{21} & Q_{22} & Q_{23} & \cdots \\ 0 & 0 & Q_{32} & Q_{33} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}    (1)

where the blocks Q_{ij}, i, j ≥ 0, j ∈ {max{0, i − 1}, i, i + 1}, of the matrix Q have size K and are defined as follows:

Q_{i,i+1} = J ⊗ D_1,   Q_{i,i−1} = α_i \tilde{I}_β,

where
\tilde{I}_β = \begin{pmatrix}
O_{2^{N-1}\times 2^{N-1}} & a_2\otimes I_{2^{N-2}} & a_1\otimes a_2\otimes I_{2^{N-3}} & \cdots & a_1^{\otimes(N-2)}\otimes a_2 & a_1^{\otimes(N-1)} \\
O_{2^{N-2}\times 2^{N-1}} & O_{2^{N-2}\times 2^{N-2}} & a_2\otimes I_{2^{N-3}} & \cdots & a_1^{\otimes(N-3)}\otimes a_2 & a_1^{\otimes(N-2)} \\
O_{2^{N-3}\times 2^{N-1}} & O_{2^{N-3}\times 2^{N-2}} & O_{2^{N-3}\times 2^{N-3}} & \cdots & a_1^{\otimes(N-4)}\otimes a_2 & a_1^{\otimes(N-3)} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
O_{2^{1}\times 2^{N-1}} & O_{2^{1}\times 2^{N-2}} & O_{2^{1}\times 2^{N-3}} & \cdots & a_2 & a_1 \\
O_{2^{0}\times 2^{N-1}} & O_{2^{0}\times 2^{N-2}} & O_{2^{0}\times 2^{N-3}} & \cdots & O_{2^{0}\times 2^{0}} & 1 \\
O_{1\times 2^{N-1}} & O_{1\times 2^{N-2}} & O_{1\times 2^{N-3}} & \cdots & O_{1\times 2^{0}} & 0
\end{pmatrix} \otimes I_{\bar W},


(Q_{i,i})_{r,r'} = \begin{cases}
\mu_{r'+1}\,\big(b_1^{\otimes(m-1)}\otimes b_2\otimes I_{2^{N-r-1}}\big)\otimes I_{\bar W}, & r' = r-m,\; m = 1,\dots,r,\; r = 0,\dots,N-1,\\
\mu_{r'+1}\, b_1^{\otimes(m-1)}\otimes I_{\bar W}, & r' = r-m,\; m = 1,\dots,r,\; r = N,\\
D_0\oplus\Delta_{N-r-1}-\alpha_i I_{\bar W\cdot 2^{N-r-1}}, & r' = r,\; r = 0,\dots,N-1,\\
D_0-\sum_{k=1}^{N}\mu_k I_{\bar W}, & r' = r,\; r = N,\\
a_1^{\otimes(l-1)}\otimes a_2\otimes I_{2^{N-r'-1}}\otimes D_1, & r' = r+l,\; l = 1,\dots,N-r-1,\; r = 0,\dots,N,\\
a_1^{\otimes(l-1)}\otimes D_1, & r' = r+l,\; l = N-r,\; r = 0,\dots,N.
\end{cases}

Here

\Delta_0 = -\sum_{k=1}^{N-1}\mu_k, \qquad \Delta_1 = \begin{pmatrix} -\sum_{k=1}^{N-2}\mu_k & 0 \\ \mu_N & -\sum_{k=1}^{N-2}\mu_k-\mu_N \end{pmatrix},

\Delta_n = I_{2^{n-2}}\otimes G + \sum_{k=1}^{n-2}\check{I}_k^{(n)}\,\mu_{N-k-1} - \sum_{k=1}^{N-n-1}\mu_k I_{2^{n}}, \qquad n = 2,\dots,N-1.

Proof of the lemma consists of careful analysis of transitions of the Markov chain
ζt during the interval of an infinitesimal length.
Corollary 1 Markov chain ζt belongs to the class of asymptotically quasi-Toeplitz
Markov chains, see [10].

Proof According to the definition of the asymptotically quasi-Toeplitz Markov


chains given in [10], we have to prove the existence of the limits

Y_0 = \lim_{i\to\infty} R_i^{-1} Q_{i,i-1}, \quad Y_2 = \lim_{i\to\infty} R_i^{-1} Q_{i,i+1}, \quad Y_1 = \lim_{i\to\infty} R_i^{-1} Q_{i,i} + I,

where Ri is a diagonal matrix with diagonal entries defined as the moduli of the
corresponding diagonal entries of the matrix Qii , i ≥ 0. It can be easily verified
that R_i is a matrix with the diagonal blocks T_i^{(n)}, n = 0,…,N, defined as follows:

T_i^{(n)} = \begin{cases} \Lambda\oplus Z_n + \alpha_i I_{\bar W\cdot 2^{N-n-1}}, & n = 0,\dots,N-1,\\ \Lambda + \sum_{k=1}^{N}\mu_k I_{\bar W}, & n = N, \end{cases}

where Λ, Zn are diagonal matrices with diagonal entries defined by the diagonal
entries of the matrices −D0 , ΔN−n−1 , n = 0, N, respectively.
Then, by direct calculations, it can be verified that
⎛ ⎞ ⎛ ⎞
O O ··· O
O OO ··· O O
⎜O O ··· O⎟
O ⎜O O ··· O O⎟
⎜ ⎟ ⎜ ⎟
⎜ .. ⎟ , Y = ⎜ .. .. .. ⎟
Y0 = I˜β , Y1 = ⎜ ... ..
.
..
.
..
.
.⎟⎟ 2 ⎜. . . . ..
. . .⎟
⎜ ⎜ ⎟
⎝O O ··· O O ⎠ ⎝O O · · · O O⎠
Γ1 Γ2 · · · ΓN Ψ OO ··· O Φ

where
* +−1

N
Γn = μn b1 ⊗(N−n) ⊗ C, n = 1, N , C = Λ + μk IW̄ ,
k=1

* +

N
Ψ = C D0 − μk IW̄ + I, Φ = CD1 .
k=1

Corollary 1 is proven.

4 Ergodicity Condition

Theorem 1 The Markov chain ζ_t is ergodic if the inequality

λ < \sum_{k=1}^{N} \mu_k    (2)

is fulfilled, and it is non-ergodic if

λ > \sum_{k=1}^{N} \mu_k.    (3)

Here λ is the fundamental rate of the MAP .


Proof It follows from [10] that the sufficient condition for ergodicity of the Markov
chain ζt is the fulfillment of the inequality

yY0 e > yY2 e (4)

where the vector y is the unique solution of the system of linear algebraic equations

y(Y0 + Y1 + Y2 ) = y, ye = 1. (5)

Let us represent the vector y in the form y = (y0 , . . . , yN ) and solve the system
(5) by means of sequential multiplication of the vector y by the corresponding block
columns of the matrix Y = Y0 + Y1 + Y2 . Multiplying this vector by the first block
column, we have the relation

y_0 = y_N \Gamma_1 = y_N \mu_1 H_{N-1}

where H_r = (O, …, O, C) is a block row-vector consisting of 2^r blocks, the last of which equals C, r = 0,…,N−1. Let us note that all the block entries of the vector y_0, except the last one, are equal to zero.
Multiplying vector y by the second block column of the matrix Y , we have the
relation

y1 = y0 ((a2 ⊗ I2N−2 ) ⊗ IW̄ ) + yN Γ2 .


Here a_2 ⊗ I_{2^{N-2}} = \begin{pmatrix} I_{2^{N-2}} \\ O_{2^{N-2}} \end{pmatrix}. Because all the block entries of the vector y_0, except the last one, are equal to zero, while the last block of a_2 ⊗ I_{2^{N-2}} equals zero, we conclude that y_0((a_2 ⊗ I_{2^{N-2}}) ⊗ I_{\bar W}) = 0 and, hence,

y1 = yN Γ2 = yN μ2 HN−2 .

Analogously, we sequentially derive relations:

yk = yN μk+1 HN−k−1 , k = 0, N − 1. (6)



Finally, by multiplying vector y by the last block column of the matrix Y , we have
the relation


y_N = \sum_{k=0}^{N-1} y_k \big(a_1^{\otimes(N-k-1)} \otimes I_{\bar W}\big) + y_N(\Psi + \Phi),

that can be rewritten as

y_N\Big[\sum_{k=1}^{N}\mu_k H_{N-k}\big(a_1^{\otimes(N-k)}\otimes I_{\bar W}\big) + C\Big(D_0 + D_1 - \sum_{k=1}^{N}\mu_k I_{\bar W}\Big)\Big] = 0.

Because a_1^{\otimes(N-k)}\otimes I_{\bar W} = (O, …, O, I_{\bar W})^T (a block column with 2^{N-k} blocks), we have that H_{N-k}\big(a_1^{\otimes(N-k)}\otimes I_{\bar W}\big) = C.
Therefore, the equation for the vector yN is rewritten in the form

yN C(D0 + D1 ) = 0.

This implies that yN C = gθ or

yN = gθ C −1 (7)

where the vector θ defines the stationary distribution of the underlying process ν_t of the MAP and g is a positive constant.
Formulas (6) and (7) completely define the components of the vector y and we
can substitute them into inequality (4).
The left-hand side of (4) is computed as:


y Y_0 e = \sum_{k=0}^{N-1} y_k e = \sum_{k=0}^{N-1} y_N \mu_{k+1} H_{N-k-1} e = \sum_{k=0}^{N-1} y_N \mu_{k+1} C e = g\theta \sum_{k=1}^{N}\mu_k e = g\sum_{k=1}^{N}\mu_k.

The right-hand side of (4) is computed as:

y Y_2 e = y_N \Phi e = g\theta C^{-1} C D_1 e = g\lambda.




Therefore, inequality (4) is rewritten in the form gλ < g\sum_{k=1}^{N}\mu_k, which is equivalent to the ergodicity condition (2). Condition (3) for non-ergodicity analogously follows from the condition y Y_0 e < y Y_2 e proven in [10]. The theorem is proven.
Remark 1 The ergodicity condition (2) is intuitively clear. Usually the ergodicity condition is the requirement that, in an overloaded system, the arrival rate is less than the service rate. In the considered model, when it is overloaded, i.e., a huge number of customers stay in the orbit, all servers are busy. Thus, the total service rate is \sum_{k=1}^{N}\mu_k.
In what follows we assume that condition (2) is fulfilled. Hence the vectors
π i , i ≥ 0, defined above exist. They satisfy the system of equilibrium equations
π Q = 0, πe = 1. This system is infinite and the problem of its solution is quite
difficult. The system can be solved using the algorithm developed in [10] and the more recent and efficient algorithm from [4].

5 Performance Measures

As soon as the vectors π_i, i ≥ 0, have been calculated, we are able to find various performance measures of the system.
The average number L_orbit of customers in the orbit is computed by

L_orbit = \sum_{i=1}^{\infty} i\, π_i e.

The probability P_imm that an arbitrary customer will start service immediately upon arrival is computed by

P_imm = λ^{-1} \sum_{i=0}^{\infty} π_i \big((I_{2^N} - J) \otimes D_1\big) e.

Let R = {(r^{(1)}, …, r^{(N)}) : r^{(n)} = 0, 1, n = 1,…,N}.
The average number N_busy of busy servers is computed by

N_busy = \sum_{i=0}^{\infty} \sum_{(r^{(1)},\dots,r^{(N)})\in R} \big(r^{(1)} + \dots + r^{(N)}\big)\, π\big(i, r^{(1)}, \dots, r^{(N)}\big) e.

The probability P_busy^{(n)} that the nth server is busy at an arbitrary moment is computed by

P_busy^{(n)} = \sum_{i=0}^{\infty} \sum_{(r^{(1)},\dots,r^{(N)})\in R,\ r^{(n)}=1} π\big(i, r^{(1)}, \dots, r^{(N)}\big) e, \qquad n = 1,\dots,N.

Remark 2 For control of the accuracy of computation of the vectors π_i, i ≥ 0, it is useful to check the fulfillment of the following equalities:

\sum_{i=0}^{\infty} π_i (e_{2^N} \otimes I_{\bar W}) = θ \qquad and \qquad \sum_{n=1}^{N} \mu_n P_busy^{(n)} = λ.
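For readers who want a quick numerical feeling for these quantities, the following Python sketch (it is not the algorithm of [10] or [4]) treats a drastically simplified special case: Poisson arrivals (so W = 0), N = 2 heterogeneous servers and the classical retrial rates α_i = iα. The orbit is truncated at a finite level L, the resulting finite generator is solved directly, and the measures above are evaluated; all parameter values and the truncation level are illustrative.

import numpy as np

def solve_truncated(lam, mu, alpha, L):
    """Stationary distribution of a truncated 2-server retrial queue with Poisson arrivals."""
    mu1, mu2 = mu
    idx = lambda i, r1, r2: i * 4 + r1 * 2 + r2        # state (i, r1, r2), i = 0..L
    n = 4 * (L + 1)
    Q = np.zeros((n, n))
    for i in range(L + 1):
        for r1 in (0, 1):
            for r2 in (0, 1):
                s = idx(i, r1, r2)
                # primary arrival: fastest idle server first, otherwise join the orbit
                if r1 == 0:
                    Q[s, idx(i, 1, r2)] += lam
                elif r2 == 0:
                    Q[s, idx(i, 1, 1)] += lam
                elif i < L:                            # at i = L the arrival is simply lost (truncation)
                    Q[s, idx(i + 1, 1, 1)] += lam
                # a retrial changes the state only if some server is idle
                if i > 0 and (r1 == 0 or r2 == 0):
                    target = idx(i - 1, 1, r2) if r1 == 0 else idx(i - 1, 1, 1)
                    Q[s, target] += i * alpha
                # service completions
                if r1 == 1:
                    Q[s, idx(i, 0, r2)] += mu1
                if r2 == 1:
                    Q[s, idx(i, r1, 0)] += mu2
    np.fill_diagonal(Q, -Q.sum(axis=1))
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return pi.reshape(L + 1, 2, 2)

pi = solve_truncated(lam=2.0, mu=(4.0, 3.0), alpha=1.0, L=200)
L_orbit = sum(i * pi[i].sum() for i in range(pi.shape[0]))
P_imm = 1.0 - pi[:, 1, 1].sum()                        # PASTA: arrival finds some server idle
N_busy = pi[:, 1, :].sum() + pi[:, :, 1].sum()         # P(server 1 busy) + P(server 2 busy)
print(L_orbit, P_imm, N_busy)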

6 Numerical Results

To illustrate the feasibility and outcome of the presented algorithms, as well as to show the effect of correlation in the arrival process, we briefly consider the following
example.
Let initially the MAP input be characterized by the matrices

D_0 = \begin{pmatrix} -1.35164 & 0 \\ 0 & -0.04387 \end{pmatrix}, \qquad D_1 = \begin{pmatrix} 1.34265 & 0.00899 \\ 0.02443 & 0.01944 \end{pmatrix}.

This arrival process has the coefficient of correlation of two successive intervals
between arrivals ccor = 0.2, and the squared coefficient of variation of the intervals
between customer arrivals cvar = 13.4. In the presented experiment, we will vary
the average rate of the MAP λ that is done by multiplying the matrices D0 and D1
by the appropriate scalar.
In parallel, we present the results of computation for the model where the arrival
flow is defined as the stationary Poisson process with the same intensity. Let us
assume that the total number N of servers is equal to 4 and the service rates at the corresponding servers are μ1 = 4, μ2 = 3, μ3 = 1, and μ4 = 0.5, respectively. The retrial rates are defined by α0 = 0, αi = iα, α = 1, i > 0.
Figures 1, 2, and 3 show the behavior of the value Lorbit depending on the input
rate λ when N = 2 (the slowest fourth and third servers are not used), N = 3 (the
slowest, fourth, server is not used), and N = 4 for two considered arrival processes.
Table 1 shows the value of Lorbit for several values of λ when the arrival process is
the MAP . Figure 4 shows the behavior of the value Lorbit depending on the input
rate λ under different numbers of servers for the arrival process MAP .

Fig. 1 Dependence of Lorbit on the input rate λ when N = 2 for flows with different correlation

Figures 5 and 6 show the behavior of the value Pimm depending on the input rate
λ when N = 3 and N = 4 for two considered arrival processes. Table 2 shows the
value of Pimm for several values of λ when the arrival process is the MAP . Figure 7
shows the behavior of the value Pimm depending on the input rate λ under different
numbers of servers for the arrival process MAP .

Fig. 2 Dependence of Lorbit on the input rate λ when N = 3 for flows with different correlation

Fig. 3 Dependence of Lorbit on the input rate λ when N = 4 for flows with different correlation

Table 1 The values of Lorbit for several values of λ and N = 2, 3, 4 when the arrival process is
the MAP

N λ 1 2 3 4 5 6 7 8
2 Lorbit 0.0656 0.5323 2.0526 6.4419 20.0501 70.0549
3 Lorbit 0.0115 0.1754 0.8670 2.9030 8.5491 24.8427 81.1391
4 Lorbit 0.0020 0.0672 0.4692 1.8211 5.5925 16.0165 46.4190 205.5134

Fig. 4 Dependence of Lorbit on the input rate λ under N = 2, 3, 4



Fig. 5 Dependence of Pimm on the input rate λ when N = 3 for flows with different correlation

Fig. 6 Dependence of Pimm on the input rate λ when N = 4 for flows with different correlation

Table 2 The values of Pimm for several values of λ and N = 2, 3, 4 when the arrival process is
the MAP

N λ 1 2 3 4 5 6 7 8
2 Pimm 0.9467 0.8184 0.6406 0.4386 0.2601 0.1318
3 Pimm 0.9895 0.9319 0.8109 0.6355 0.4360 0.2664 0.1388
4 Pimm 0.9975 0.9724 0.8990 0.7388 0.5426 0.3517 0.2061 0.0809

Fig. 7 Dependence of Pimm on the input rate λ under N = 2, 3, 4

Figure 8 shows the behavior of the value Nbusy when N = 4 for two considered
arrival processes. Table 3 shows the value of N_busy for several values of λ when the arrival process is the MAP. Figure 9 shows the behavior of the value N_busy for different numbers of servers for the arrival process MAP.
It is evident from the presented figures that correlation in the arrival process essentially impacts the values of the performance measures. Positive correlation worsens the performance measures of the system.

Fig. 8 Dependence of Nbusy on the input rate λ when N = 4 for flows with different correlation

Table 3 The values of Nbusy for several values of λ and N = 2, 3, 4 when the arrival process is
the MAP

N λ 1 2 3 4 5 6 7 8
2 Nbusy 0.6673 0.8347 0.9358 1.0272 1.1128 1.1845
3 Nbusy 0.7449 1.1552 1.4465 1.6001 1.7623 1.9351 2.1014
4 Nbusy 0.7777 1.4252 2.0214 2.2901 2.4619 2.6897 2.9261 3.1585

Fig. 9 Dependence of Nbusy on the input rate λ under N = 2, 3, 4

7 Conclusion

We analyzed a retrial queueing model with heterogeneous servers and a MAP arrival process. The results can be used for solving various optimization problems related, in particular, to the enumeration of the servers. The results are planned to be extended to the case of a phase-type distribution of the service time.

References

1. Artalejo, J.R., Gomez-Corral, A.: Retrial Queueing Systems: A Computational Approach.


Springer, Berlin (2008)
2. Breuer, L., Dudin, A.N., Klimenok, V.I.: A retrial BMAP/PH/N system. Queueing Syst. 40,
433–457 (2002)
3. Chakravarthy, S.R.: The batch Markovian arrival process: a review and future work. In:
Krishnamoorthy, A., Raju, N., Ramaswami, V. (eds.) Advances in Probability Theory and
Stochastic Processes. Notable Publications, New Jersey, pp. 21–29 (2001)

4. Dudin, S., Dudina, O.: Retrial multi-server queueing system with PHF service time distribution
as a model of a channel with unreliable transmission of information. Appl. Math. Model. 65,
676–695 (2019)
5. Efrosinin, D.V.: Controlled Queueing Systems with Heterogeneous Servers. Ph.D. Disserta-
tion, Trier University, Germany (2004)
6. Efrosinin, D., Breuer, L.: Threshold policies for controlled retrial queues with heterogeneous
servers. Ann. Oper. Res. 41(1), 139–162 (2006)
7. Efrosinin, D., Sztrik, J.: Performance analysis of a two-server heterogeneous retrial queue with
threshold policy. Quality Technol. Quant. Manag. 8(3), 211–236 (2011)
8. Falin, G.: Stability of the multiserver queue with addressed retrials. Ann. Oper. Res. 196(1)
241–246 (2012)
9. Falin, G.I., Templeton, J.G.C.: Retrial Queues. Chapman & Hall, London (1997)
10. Klimenok V., Dudin, A.: Multi-dimensional asymptotically quasi-Toeplitz Markov chains and
their application in queueing theory. Queueing Syst. 54(4), 245–259 (2006)
11. Lin, W., Kumar, P.R.: Optimal control of a queueing system with two heterogeneous servers.
IEEE Trans. Autom. Control 29, 696–703 (1984)
12. Luh, H.P., Viniotis, I.: Optimality of Threshold Policies for Heterogeneous Server Systems.
North Carolina State University, Raleigh (1990)
13. Lucantoni, D.: New results on the single server queue with a batch Markovian arrival process.
Commun. Stat. Stoch. Models 7, 1–46 (1991)
14. Mushko, V.V., Jacob, M.J., Ramakrishnan, K.O., Krishnamoorthy, A., Dudin, A.N.: Multi-
server queue with addressed retrials. Ann. Oper. Res. 141, 283–301 (2006)
15. Neuts M. Matrix-Geometric Solutions in Stochastic Models. The Johns Hopkins University
Press, Baltimore (1981)
16. Nobel, R., Tijms, H.C.: Optimal control of a queueing system with heterogeneous servers.
IEEE Trans. Autom. Control 45(4), 780–784 (2000)
17. Rosberg, Z., Makowski, A.M.: Optimal routing to parallel heterogeneous servers-small arrival
rates. Trans. Autom. Control 35(7), 789–796 (1990)
18. Rykov, V.V.: Monotone control of queueing systems with heterogeneous servers. Queueing
Syst. 37, 391–403 (2001)
19. Rykov, V.V., Efrosinin, D.V.: Numerical analysis of optimal control polices for queueing
systems with heterogeneous servers. Inf. Process. 2(2), 252–256 (2002)
20. Vishnevskii, V.M., Dudin, A.N.: Queueing systems with correlated arrival flows and their
applications to modeling telecommunication networks. Autom. Remote Control 78, 1361–1403
(2017)
What is Standard Brownian Motion?

Krishna B. Athreya

Abstract In this expository note, we explain several different historical approaches


to the construction of standard Brownian motion.

Keywords Standard Brownian motion · Gaussian process · Probability


measure · Stopping time · Martingale

1 Introduction

An easy and quick answer to the question posed in the title is that it is a Gaussian
process {X(t) : t ∈ [0, ∞)} with index set T ≡ [0, ∞), mean function m(t) ≡ 0
for all t ∈ T , and covariance function c(s, t) ≡ min(s, t) for all s, t ∈ T . That is,
for each positive integer k and any increasing k-tuple 0 ≤ t1 < t2 < . . . < tk <
∞ the random vector (X(t1 ), . . . , X(tk )) has a k-variate normal distribution in Rk
with mean vector (0, . . . , 0) and variance–covariance matrix Cov(X(ti ), X(tj )) =
min(t_i, t_j). That is, the probability distribution of the vector (X(t_1), …, X(t_k)) on R^k (with the standard Borel σ-algebra) is absolutely continuous with respect to Lebesgue and has probability density function

φ_{k,(t_1,t_2,…,t_k)}(x_1,…,x_k) ≡ \frac{1}{(\sqrt{2\pi})^{k}\,\|\Sigma_k\|^{1/2}} \exp\Big(-\frac{1}{2}\, x\,\Sigma_k^{-1} x^{T}\Big)

where

x = (x1 , . . . , xk ) ∈ Rk

K. B. Athreya ()
Department of Mathematics and Statistics, College of Liberal Arts and Sciences,
Iowa State University, Ames, IA, USA
e-mail: [email protected]


and

Σ_k = (min(t_i, t_j))_{1≤i,j≤k},

and xT is the transpose of x. A natural question is whether such a process exists.


Indeed it does. This follows from Kolmogorov’s consistency theorem (see, for
example, [1, Theorem 6.3.1]). Indeed, if Ω_T ≡ R^T ≡ the set of real-valued
functions on T = [0, ∞) and FT is the σ -algebra generated by the class of sets

Wt1 ,...,tk ;A = {w ∈ ΩT : (w(t1 ), w(t2 ), . . . , w(tk )) ∈ A},

where ti is as above and A is a Borel subset of Rk , there is a probability measure


PT on (ΩT , FT ) such that the stochastic process {w(t) : t ≥ 0} is the process X(t)
described above. One of the problems with the above construction is that the sample
space ΩT = RT is too large but the σ -algebra FT is too small. It can be shown
that FT coincides with the σ -algebra defined as follows: given a countable subset
I ⊂ T , let πI denote the projection from ΩT to RI , and consider the collection
of sets πI−1 (B), where B is a Borel subset of RI , and I ranges over all countable
subsets of T.
In particular, the subset C[0, 1] of real-valued continuous functions is not a
member of FT . It can be shown that under the measure PT the trajectories are in C[0,
1] with probability 1 but that this event is not in FT and hence is not measurable.
Similarly, the function

M(w) ≡ sup{w(t) : 0 ≤ t ≤ 1}

is not FT -measurable. An approach pioneered by J. L. Doob is the notion of


separable stochastic processes. Another approach pioneered by Kolmogorov and
Skorokhod is to restrict the sample space to functions that are continuous or which
are right continuous and have left limits (see, for example, Billingsley [2]). If
{X(t) : t ≥ 0} is a Gaussian process as defined above, it can be shown that for
any t, h > 0, the increment X(t + h) − X(t) is stochastically independent of
{X(u) : u ≤ t} and has variance |h|. This suggests that X(t + h) − X(t) goes
to zero as h → 0 if t is fixed, i.e., that the trajectory should be continuous at t.
However, before one gets hopes too high, one can show that the trajectories, while
continuous, are not differentiable.

2 Another Construction

N. Wiener solved this problem of measurability in a different way (see, e.g.,


Karatzas and Shreve [5]). Let {ηi (ω)}i≥1 be a sequence of independent, identically
distributed (i.i.d.) N(0, 1) random variables on a probability space (Ω, B, P ). It
can be shown that (Ω, B, P ) can be chosen to be the standard Lebesgue space

[0, 1] with the Borel σ -algebra B, and P standard Lebesgue measure. Let {ψj (.)}j ≥1
be a complete orthonormal basis in L2 ([0, 1], P ), and for each positive integer N,
let


B_N(t, ω) = \sum_{j=1}^{N} η_j(ω) \int_0^t ψ_j(u)\,du, \quad for 0 ≤ t ≤ 1, ω ∈ Ω.

Then for each N, and ω ∈ Ω, {BN (t, ω) : 0 ≤ t ≤ 1} is a well-defined function


in C0 [0, 1], the Banach space of continuous real-valued functions f on [0, 1] with
f (0) = 0. It can be shown that the sequence {BN (·, ω)}N≥0 is a Cauchy sequence
in C0 [0, 1] for almost all ω, and since C0 [0, 1] is a complete metric space (when
equipped with the sup norm), there exists a process {B(t, ω) : 0 ≤ t ≤ 1} such that

BN (·, ω) → B(·, ω)

in C0 [0, 1] for almost all ω. It can be further shown that {B(t, ω) : 0 ≤ t ≤ 1} is


a Gaussian process with mean function m(t) ≡ 0 for all 0 ≤ t ≤ 1 and covariance
function C(s, t) = min(s, t); 0 ≤ s, t ≤ 1. Thus, the standard Brownian motion
(SBM) on [0, 1] is a Gaussian process with continuous trajectories on [0, 1]. This is the definition we will use, instead of that from Sect. 1.
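A minimal numerical sketch of this series construction is given below (it is not part of the original exposition), using the specific complete orthonormal basis ψ_j(u) = √2 cos((j − 1/2)πu) of L²[0, 1], for which the integrals ∫₀ᵗ ψ_j(u) du are available in closed form; any other complete orthonormal basis would do.

import numpy as np

def brownian_path(t, n_terms=2000, seed=None):
    """Approximate B(t) on a grid t in [0, 1] by the truncated series B_N."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(n_terms)                 # i.i.d. N(0, 1) coefficients
    freq = (np.arange(1, n_terms + 1) - 0.5) * np.pi
    # int_0^t sqrt(2) cos(freq_j u) du = sqrt(2) sin(freq_j t) / freq_j
    basis_integrals = np.sqrt(2.0) * np.sin(np.outer(t, freq)) / freq
    return basis_integrals @ eta

t = np.linspace(0.0, 1.0, 501)
paths = np.stack([brownian_path(t, seed=s) for s in range(200)])
# sanity check: the sample variance of B(1) across paths should be close to 1
print(paths[:, -1].var())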

3 Definition of SBM on [0, ∞)

In Sect. 2, we defined SBM on [0, 1] using Wiener's approach. Now we extend it to the whole positive real line [0, ∞) as follows. Let {B^{(j)}(t, ω) : 0 ≤ t ≤ 1} be i.i.d. copies of the SBM described in Sect. 2. Given t ∈ [0, ∞), let n = ⌊t⌋. Define

B(t, ω) ≡ \sum_{j=1}^{n} B^{(j)}(1, ω) + B^{(n+1)}(t − n, ω).

Then it can be verified that {B(t, ω) : t ≥ 0} satisfies:


1. B(0, ω) = 0 for all ω.
2. The function t → B(t, ω) is continuous in t for all ω.
3. It is a Gaussian process with mean function m(t) ≡ E(B(t, ω)) = 0 for all t and
covariance function C(s, t) = min(s, t), 0 ≤ s, t < ∞.
We will refer to this process as standard Brownian motion (SBM) on [0, ∞).

4 Donsker’s Invariance Principle

A construction due to Donsker [3] gives another way of constructing SBM on [0, 1].
Theorem 1 (Donsker’s Invariance Principle) Let {Xi }∞
i=1 be i.i.d random vari-
ables with P (Xi = 1) = P (Xi = −1) = 1/2.
Let

1 
j
Yn (j/n) = √ , j = 0, 1, 2, . . . , n.
n
i=1

and define Yn (t) for 0 ≤ t ≤ 1 by linear interpolation. Let μn (·) denote the measure
on C0 [0, 1] supported on the set of realizations of Yn (t), i.e., for measurable A ⊂
C0 [0, 1],

μn (A) = P (Yn (·) ∈ A).

Then there is a probability measure μ on C_0[0, 1] so that μ_n → μ in the weak-* topology, that is, for any bounded continuous functional f : C_0[0, 1] → R,

\int_{C_0[0,1]} f\, dμ_n → \int_{C_0[0,1]} f\, dμ.

Remark 1 The limiting probability measure μ on C0 [0, 1] is the same as the Wiener
measure constructed in 2.
Remark 2 The above theorem of Donsker extends to the case when {X_i} are i.i.d. mean 0 random variables with variance E(X_1^2) = 1. This is useful in studying the limit behavior of the Kolmogorov–Smirnov statistic T_n = \sqrt{n}\,\sup_{0≤x≤1}|F_n(x) − x|, where F_n(x) = \frac{1}{n}\sum_{i=1}^{n} I_{\{U_i ≤ x\}} is the empirical distribution function of the sample U_1, …, U_n, and the U_i are i.i.d. uniform [0, 1] random variables. It can be shown that, for each x ∈ R,

P(T_n ≤ x) → P(T ≤ x),

where T ≡ \sup_{0≤t≤1}|Y(t) − tY(1)| and Y(·) is SBM on [0, 1], i.e., T_n → T in distribution as n → ∞.
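The following short experiment (not part of the original exposition) illustrates Remark 2 numerically: the empirical distribution of T_n for uniform samples is compared with the Kolmogorov limit P(T ≤ x) = 1 − 2∑_{k≥1}(−1)^{k−1}e^{−2k²x²}, which is the distribution of sup_{0≤t≤1}|Y(t) − tY(1)|.

import numpy as np

def ks_statistic(u):
    """sqrt(n) * sup_x |F_n(x) - x| for a sample u from the uniform [0, 1] distribution."""
    n = len(u)
    u = np.sort(u)
    grid = np.arange(1, n + 1) / n
    d_plus = np.max(grid - u)
    d_minus = np.max(u - (grid - 1.0 / n))
    return np.sqrt(n) * max(d_plus, d_minus)

def kolmogorov_cdf(x, terms=100):
    k = np.arange(1, terms + 1)
    return 1.0 - 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * (k * x) ** 2))

rng = np.random.default_rng(0)
n, reps = 1000, 5000
T = np.array([ks_statistic(rng.uniform(size=n)) for _ in range(reps)])
for x in (0.8, 1.0, 1.2, 1.5):
    print(x, (T <= x).mean(), kolmogorov_cdf(x))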

5 Some Basic Properties of SBM on [0, ∞)

Let {B(t, ω) : t ≥ 0} satisfy the conditions from Sect. 3, that is,


1. B(0, ω) = 0 for all ω.

2. The function t → B(t, ω) is continuous in t for all ω.


3. It is a Gaussian process with mean function m(t) ≡ E(B(t, ω)) = 0 for all t and
covariance function C(s, t) = min(s, t), 0 ≤ s, t < ∞.
Then we claim it has the following properties:
Scaling For each (deterministic) c > 0, let

B_c(t, ω) ≡ \frac{1}{\sqrt{c}}\, B(ct, ω).

Then {Bc (t, ω) : t ≥ 0} is also an SBM. To prove this, note that conditions (1) and
(2) are immediate, and condition (3) is an easy computation.
Reflection If {B(t, ω) : t ≥ 0}is an SBM, then so is

B̃(t, ω) = −B(t, ω).

This is also a straightforward verification of the above conditions.


Time Inversion For t > 0, set
B̃(t, ω) = t\, B\Big(\frac{1}{t}, ω\Big),

and set B̃(0, ω) = 0. Then {B̃(t, ω) : t ≥ 0} is also an SBM. To prove this, it is straightforward to verify that it is a Gaussian process with the specified mean and covariance functions, and that it is continuous on the open interval (0, ∞). It remains to verify that

\lim_{t→0} B̃(t, ω) = 0

with probability 1. To prove this, fix 0 < t_1 < t_2 < ∞. Then {B̃(t, ω) : t_1 ≤ t ≤ t_2} is a Gaussian process with continuous trajectories and has the same distribution as {B(t, ω) : t_1 ≤ t ≤ t_2}. Hence

X_1 ≡ sup{B(t, ω) : t_1 ≤ t ≤ t_2}

and

X_2 ≡ sup{B̃(t, ω) : t_1 ≤ t ≤ t_2}

have the same distribution. As t_1 → 0, X_1 and X_2 converge with probability 1 to

X_1^*(t_2) ≡ sup{B(t, ω) : 0 ≤ t ≤ t_2}



and

X_2^*(t_2) ≡ sup{B̃(t, ω) : 0 ≤ t ≤ t_2},

respectively, and thus these have the same distribution. Now as t_2 → 0, X_1^*(t_2) converges to \limsup_{t→0} B(t, ω), which is 0 with probability 1. This implies that X_2^*(t_2) → 0 with probability 1 as t_2 → 0. Thus

\lim_{t→0} B̃(t, ω) = 0,

with probability 1. As a corollary, we obtain that

\lim_{t→∞} \frac{B(t)}{t} = 0,
with probability 1.

6 Translation Invariance

6.1 Translation Invariance after a Deterministic Time

As above, let {B(t, ω) : t ≥ 0} be an SBM. Fix t0 > 0 and for t ≥ 0, let

Bt0 (t, ω) ≡ B(t0 + t, ω) − B(t0 , ω).

Then it is easy to check that {Bt0 (t, ω) : t ≥ 0} is also an SBM.

6.2 Translation Invariance after Stopping Times T

A random variable T ≡ T(ω) with values in [0, ∞) is a stopping time with respect
to an SBM if for any deterministic t0 ≥ 0, the event {T (ω) ≤ t0 } is determined
by the history {B(u, ω) : u ≤ t0 }, i.e., the set {ω : T (ω) ≤ t0 } belongs to the σ -
algebra generated by {B(u, ω) : u ≤ t0 }. We have the following result on translation
invariance:
Theorem 2 Let T be a stopping time with respect to an SBM {B(u, ω) : u ≥ 0}.
Then the stochastic process {B(u + T, ω) − B(T, ω) : u ≥ 0} is also an SBM and is independent of the σ-algebra σ_T ≡ {A : A ∩ {T ≤ t} ∈ σ(B(u, ω) : u ≤ t) for all t ≥ 0}.
For a proof of this theorem, see, for example, Karatzas and Shreve [5].

Examples of Stopping Times Two important examples of stopping times are


1. Fix a ∈ R. Let Ta ≡ inf {t ≥ 0 : B(t, ω) = a} be the first hitting time of a.
2. Fix −∞ < a < 0 < b < ∞. Let T_{a,b} ≡ inf{t ≥ 0 : B(t, ω) ∉ (a, b)}.

6.3 The Reflection Principle

Consider the stopping time Ta defined above. Then

P (Ta ≤ t) = P (Ta ≤ t, B(t, ω) > a) + P (Ta ≤ t, B(t, ω) < a),

since P (B(t, ω) = a) = 0. By the continuity of trajectories of SBM, B(Ta ) = a on


the event {Ta ≤ t}. We then have

P (Ta ≤ t, B(t) < a) = P (Ta ≤ t, B(t) − B(Ta ) < 0)


= P (Ta ≤ t, B(t) − B(Ta ) > 0).

So
P(T_a ≤ t) = 2P(T_a ≤ t, B(t) > a) = 2\Big(1 − Φ\big(\tfrac{a}{\sqrt{t}}\big)\Big),

where Φ(·) is the standard N(0, 1) cdf. So for every a > 0, T_a has an absolutely continuous distribution with probability density function

f_{T_a}(t) = \frac{a}{\sqrt{2\pi}\, t^{3/2}}\, e^{-a^2/(2t)}, \qquad t > 0.

Hence E(T_a^p) < ∞ for all p < 1/2 and E(T_a^p) = ∞ for p ≥ 1/2.

7 Distribution of the Maximum of SBM on [0, t]

Let

M(t, ω) ≡ sup{B(u, ω) : 0 ≤ u ≤ t},

and let a > 0. Then

P (M(t, ω) > a) = P (Ta ≤ t) = 2P (B(t) > a) = P (|B(t, ω)| > a)

since B(t, ω) has an N(0, t) distribution, which is symmetric about 0. Thus, M(t, ω)
has the same distribution as |B(t, ω)|. A similar argument shows that m(t, ω) ≡
min{B(u, ω) : 0 ≤ u ≤ t} has the same distribution as −|B(t, ω)|.
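This identity is easy to check numerically. The sketch below (not part of the original exposition) approximates Brownian paths by finely discretized Gaussian random walks and compares the empirical tails of the running maximum M(1) and of |B(1)|; the agreement is only approximate because of the discretization.

import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths = 5000, 20000
dt = 1.0 / n_steps
increments = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
paths = np.cumsum(increments, axis=1)
running_max = np.maximum(paths.max(axis=1), 0.0)    # M(1) = sup_{0 <= u <= 1} B(u) >= 0
abs_endpoint = np.abs(paths[:, -1])                 # |B(1)|
for a in (0.5, 1.0, 1.5, 2.0):
    print(a, (running_max > a).mean(), (abs_endpoint > a).mean())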

8 Sample Path Properties of SBM

Theorem 3 ([4]) Let f : R → R be integrable with respect to Lebesgue measure, and let {B(u) : u ≥ 0} be SBM on [0, ∞). Then

\frac{1}{N(t)} \int_0^t f(B(u))\,du \;\longrightarrow\; \int_{-\infty}^{\infty} f(u)\,du \quad as\ t → ∞,

where N(t) is the number of times the SBM {B(u) : u ≥ 0} hits level 1 and then hits level 0 at a later time before getting back to level 1, in the time interval [0, t].

8.1 Nondifferentiability

Theorem 4 With probability 1, the SBM {B(u) : u ≥ 0} is nowhere differentiable


on [0, ∞).

8.2 Increments

Theorem 5 Let {B(u) : u ≥ 0} be SBM and set


Δ ≡ \sup \sum_{j=1}^{n} \big(B(t_j) − B(t_{j-1})\big)^2

where the supremum is taken over all partitions {t0 < t1 . . . < tn } of [0, 1]. Then
Δ = 1 with probability 1.

8.3 Martingale Properties

Theorem 6 Let {B(u) : u ≥ 0} be SBM. Then
• (a) {B(u) : u ≥ 0} is a martingale;
• (b) {B²(u) − u : u ≥ 0} is a martingale;
• (c) for all θ ∈ R, {e^{θB(u) − θ²u/2} : u ≥ 0} is a martingale.

Acknowledgments I want to thank Prof. A. Krishnamoorthy and Dr. Varghese C. Joshua for inviting me to this conference and for letting me present this paper.

References

1. Athreya, K.B., Lahiri, S.N.: Measure Theory and Probability Theory, Springer Texts in
Statistics. Springer, New York (2006). MR2247694
2. Billingsley, P.: Convergence of probability measures. In: Wiley Series in Probability and
Statistics: Probability and Statistics. Wiley, New York (1999). A Wiley-Interscience Publication,
MR1700749
3. Donsker, M.D.: An invariance principle for certain probability limit theorems. Mem. Am. Math.
Soc. 6, 12 (1951). MR0040613
4. Kallianpur, G., Robbins, H.: Ergodic property of the Brownian motion process. Proc. Nat. Acad.
Sci. USA 39, 525–533 (1953). https://doi.org/10.1073/pnas.39.6.525. MR0056233
5. Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus, 2nd ed. In: Graduate Texts
in Mathematics, vol. 113. Springer, New York (1991). MR1121940
Busy Period Analysis of Multi-Server
Retrial Queueing Systems

Srinivas R. Chakravarthy

Abstract The literature on the busy period analysis in queueing theory is very
limited due to the inherent complexity in its study. Recently, using the simulation
approach the busy period for the classical multi-server queueing systems was
studied by this author and some interesting observations were reported. In this
paper we carry out a similar analysis but on a smaller scale in the case of multi-
server retrial queueing systems. It should be pointed out that while the literature
on retrial queueing systems is vast, the same cannot be said about the busy period
analysis in retrial queueing systems. Only a few papers with restricted assumptions
are available in the literature. This paper is an attempt to fill the void.

Keywords Retrial · Queueing · Busy period · Simulation

1 Introduction

In general, the busy period analysis in queueing systems is very involved and
complicated. This is not by choice but rather due to the difficulty inherent in
its study [10]. Realizing this difficulty, Chakravarthy [10] used simulation to record
some interesting observations on the busy period of classical queueing systems. We
refer the reader to [10] for details including a literature survey on the busy period
analysis of the classical queues.
In this paper, which can be considered as a sequel to [10], we look at multi-
server retrial queueing systems with the arrivals governed by a point process, general
services, and general retrial times. This point process also includes a Markovian
arrival process (MAP ). Recall that a retrial queueing system (see, e.g., [2]) is such
that an arriving customer finding all servers busy will enter into a retrial orbit and

S. R. Chakravarthy ()
Departments of Industrial and Manufacturing Engineering and Mathematics,
Kettering University, Flint, MI, USA
e-mail: [email protected]


attempt to capture a free server at random times. Such a system is studied as a level-
dependent queue unless the retrial rate is independent of the number in the retrial
orbit. While a number of retrial queueing models have been studied in the literature
(see, e.g., [1, 2, 14, 22]) placing restrictions on the retrial distribution, only a few
papers deal with more complex distributions for the retrial attempts (see, e.g., [8]
and the references therein).
While in the classical queueing systems the busy period is defined in terms of the servers' busy time (which coincides with that of the busy system in the case of a single server), the busy period for retrial queues should be defined in terms of the system being busy. This is due to the fact that in (continuous-time) retrial
queueing systems, the server will always alternate between an idle period and a
service period. This can be seen immediately by noting that there is no queue in
front of the server and that the next service starts either with a new arrival or with
a successful attempt from the retrial orbit by a customer. Thus, the non-trivial busy
period analysis involves that of the system which will be the case in this paper too.
There are very few papers that deal with busy period analysis in retrial queueing
systems. In the context of MAP /M/c with a finite buffer for retrial customers,
Artalejo et al., [4] derive the Laplace–Stieltjes transform of the busy period as well
as the probability generating function of the number of customers served during a
busy period. Recently, Kim [13] derived expressions (but without any computational
procedures) for the first and the second moments of the duration of the busy period
for M X /G/1 retrial queue. However, to our knowledge, there is no paper dealing
with the busy period analysis for a multi-server retrial queueing system in a general
context.
Thus, the objective of this paper is to present some insight into the study of
the busy period analysis of a general multi-server retrial queueing system. The
paper is organized as follows. The model under study is described in Sect. 2. After
validating the simulated model for the M/G/1 retrial queueing model [3] in Sect. 3,
we report some interesting observations based on our simulated results in Sect. 4.
Some concluding remarks are mentioned in Sect. 5.

2 Model Description

In this paper, we assume that the customers arrive according to a point process
including a MAP with (irreducible) representation matrices (D0 , D1 ) of dimension
m. Let D = D0 + D1 be the underlying generator with steady-state probability
vector π . That is, π D = 0 and π e = 1 so that the arrival rate is given by λ =
π D1 e. The MAP is a versatile class of point processes introduced by Neuts [17]
and studied extensively by Neuts and his colleagues in the context of a variety of
queueing, inventory, and reliability models among others. Modeling the inter-arrival
times with MAP has many advantages including capturing any correlation that may
be present between two successive inter-arrival times. We refer the reader to [5–
7, 9, 15–17, 19–21] for details on MAP and other key references.

There are c servers in the system and the service times are assumed to be generally distributed with mean 1/μ. Let ρ = λ/(cμ). An arriving customer finding a free server will enter into service immediately; however, an arriving customer finding all the servers busy will enter into a retrial buffer of infinite capacity. Each customer entering into the retrial orbit will attempt to capture a free server (independently of the others waiting in the orbit) after a random period of time. This random variable is assumed to be generally distributed with a finite mean, say, 1/ξ. We assume that the service time and the retrial time have a finite variance. We also assume that the inter-
arrival times, the service times, and the retrial times are all mutually independent.
Since we are considering retrial queueing models of the GI /G/c-type as well
as MAP /G/c-type with general retrial times and since our main focus in this
paper is the busy period analysis, we will resort to simulation. It should be pointed
out that our main purpose is to motivate the need for studying the busy period of
retrial queueing models, since in this era of technology customers' queries in service sectors, such as billing enquiries, fixing appointments, and checking status details, force customers to make repeated attempts when all servers are busy. The study
of general retrial queueing models becomes very complex even for obtaining the
standard measures such as the mean waiting time, leave alone a study on the busy
period of the system. We will use ARENA, a powerful simulation software package [12], in
our study here. Simulation should be considered as an important tool and also a way
to get insight into obtaining theoretical results. Hence, in this paper a few selected
retrial queueing models will be simulated and their key results will be summarized.
In this paper, we will be focusing on the following three measures: (a) the mean
busy period of the system; (b) the coefficient of variation of the busy period; and
(c) the number of busy periods during the simulation period. While the first two
measures are very standard and easy to explain the need for them, the measure given
in (c) needs some justification. The effect of the variability (as well as the correlation)
in the inter-arrival times on the system performance measures is well documented in the
queueing theory literature. For example, in [10] it is shown that hyperexponential inter-
arrival times appear to have a larger mean busy period compared to that of Erlang
inter-arrival times in GI /G/c-type queues, and positively correlated inter-arrival
times appear to yield a higher mean busy period as compared to the corresponding
negatively correlated ones in the context of MAP /G/c-type queues. However, in
retrial queues, since the server alternates between being idle and offering only one
service, not only the mean busy period is of interest but also the number of busy
periods.
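To make these three measures concrete, the following discrete-event sketch (Python; purely illustrative and much simpler than the paper's ARENA experiments, since it assumes Poisson arrivals and exponential service and retrial times, i.e., an M/M/c retrial queue) estimates them directly. Here a busy period is taken to start when a customer arrives to an empty system and to end when the servers and the orbit are empty again.

```python
import heapq
import random
import statistics

def busy_period_measures(lam, mu, xi, c, horizon, seed=1):
    """Estimate (mean busy period, CV of busy period, number of busy periods)
    for an M/M/c retrial queue with an infinite orbit (illustrative sketch)."""
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), 'arr')]     # (time, event type)
    busy, orbit = 0, 0                           # busy servers, orbit size
    start, periods = None, []
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == 'arr':                        # schedule the next external arrival
            heapq.heappush(events, (t + rng.expovariate(lam), 'arr'))
        if kind in ('arr', 'retry'):
            if kind == 'retry':
                orbit -= 1                       # the retrying customer leaves the orbit
            if busy < c:                         # a free server is captured
                busy += 1
                heapq.heappush(events, (t + rng.expovariate(mu), 'dep'))
            else:                                # all servers busy: (re)join the orbit
                orbit += 1
                heapq.heappush(events, (t + rng.expovariate(xi), 'retry'))
        else:                                    # 'dep': a service completion
            busy -= 1
        if busy + orbit > 0 and start is None:   # the system has just become busy
            start = t
        elif busy + orbit == 0 and start is not None:
            periods.append(t - start)            # a busy period has ended
            start = None
    n = len(periods)
    mean_bp = statistics.mean(periods) if n else float('nan')
    cv_bp = statistics.pstdev(periods) / mean_bp if n > 1 else float('nan')
    return mean_bp, cv_bp, n

# example run: c = 1, lambda = 0.5, mu = 1, retrial rate xi = 0.5
print(busy_period_measures(0.5, 1.0, 0.5, 1, 500_000))
```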

3 Validation

It is important to validate our simulated model by comparing the simulated results


with those of the published analytical results for some well-known retrial queueing
models in the literature. However, to the best of our knowledge, we found only one
paper [3] dealing with the busy period analysis in the context of M/G/1 queue

with exponential retrial times that has illustrative numerical examples reporting the
second moment of the busy period. In [3], the authors investigate a single server
queue with Poisson arrivals, general services, and exponential retrial times, and
point out the limitations in using the expressions (given in the form of Laplace–
Stieltjes transforms) for the busy period, and offer a direct way to compute the
second moment of the busy period. Thus, we validate our model by comparing
our simulated results with the numerical results based on the analytical expressions
given in [3]. First, it is worth mentioning that in [3] the authors report the numerical
values for the second moment of the busy period for the following four sets of data:
(1) hyperexponential services by considering three values, namely 1.25, 1.50, 1.75,
for the coefficient of variation of the service times and by varying the arrival rate.
For all these scenarios, the mean service time is fixed at 1.0 and the retrial rate
is fixed at 0.5; (2) hyperexponential services by considering three values, namely
0.25, 0.50, 0.75, for the arrival rate, and by varying the retrial rate. For all these
scenarios, the mean service time is fixed at 1 and the coefficient of variation to
be 1.25. Note that taking the retrial rate to be ∞ reduces the retrial model to the
classical queueing model; (3) Erlang services by looking at three values, namely
2, 4, 6, for the order of the Erlang distribution, and the arrival rate is varied. The
other parameters are set as in case (1); (4) the service time is Erlang of order 3
by varying the retrial rate, and considering three values for the arrival rate. The
parameter values are as given in (2).
Also, it should be pointed out that a number of measures (other than the busy
period ones) for the retrial queueing models in a more general setup have been
validated and reported in [8]. Unless otherwise specified, all our simulation involves
five replicates with each replicate for 500,000 units. Only in some cases like when
λ = 0.1, we had to simulate for longer periods of times so as to get the simulated
values close enough to the analytical results when validating. In Table 1 below,
we display the representative error percentages of our simulated results with the
numerical results (based on analytical formulas) for the mean and the coefficient of
variation of the busy period for the M/G/1 retrial queue with exponential retrial
times. For services, the authors in [3] consider (for numerical purposes) Erlang
and hyperexponential distributions under different sets of parameters. The error
percentage is calculated as $100 \left| \dfrac{\text{Artalejo et al.} - \text{Simulated}}{\text{Artalejo et al.}} \right| \%$.
As can be seen from the table below (and the other error percentages not
displayed here due to lack of space) the simulated values agree with the analytical
ones very well (the largest error percentage is less than 10%).

Table 1 Error percentages using Artalejo et al. [3]

        Hyperexponential services                 Erlang services
λ       CV = 1.25   CV = 1.50   CV = 1.75        k = 2   k = 4   k = 6
0.05 1.77% 1.79% 2.30% 0.56% 5.31% 2.08%
0.10 0.89% 0.90% 2.23% 0.37% 0.56% 0.16%
0.15 2.01% 1.02% 0.11% 2.47% 0.26% 0.48%
0.20 0.51% 0.59% 0.95% 0.81% 1.56% 0.44%
0.25 0.87% 0.44% 1.94% 0.27% 0.31% 1.14%
0.30 2.43% 0.23% 2.14% 0.21% 0.58% 0.35%
0.35 0.21% 0.95% 1.76% 1.51% 2.22% 1.07%
0.40 0.57% 0.24% 0.04% 0.19% 0.58% 0.17%
0.45 0.97% 1.10% 0.07% 0.93% 0.76% 0.73%
0.50 1.27% 0.20% 1.42% 1.11% 0.02% 1.84%
0.55 0.22% 1.64% 0.48% 0.76% 0.33% 1.31%
0.60 0.98% 1.64% 2.76% 0.32% 2.47% 1.58%
0.65 1.85% 0.32% 3.01% 2.18% 1.96% 1.34%
0.70 2.29% 0.78% 1.88% 0.19% 2.61% 2.27%
0.75 3.13% 3.00% 1.07% 3.31% 2.76% 3.30%
0.80 2.54% 3.90% 2.49% 9.08% 0.34% 4.11%
0.85 4.71% 4.28% 0.11% 3.65% 0.43% 0.45%

4 Simulated Results

In this section, we will discuss a few illustrative examples based on simulation.


Towards this end, we consider ten different arrival processes consisting of eight
MAP s, a Weibull, and a constant; five different types of service time; and three
retrial time distributions. While it is clear (see, e.g., [6]) that Erlang, exponential,
and hyperexponential are very special cases of a MAP , a few other MAP s
considered here are based on the construction (for numerical purposes) originally
described in [7] and elaborated with more details in [11]. Before we display the
arrival processes, we set some notation. Define
$$\alpha(m) = (1, 0, \cdots, 0), \qquad
T(\lambda_1, m) = \begin{pmatrix}
-\lambda_1 & \lambda_1 & & \\
 & -\lambda_1 & \lambda_1 & \\
 & & \ddots & \ddots \\
 & & & -\lambda_1
\end{pmatrix}, \qquad
T^0(\lambda_1, m) = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ \lambda_1 \end{pmatrix},$$

$$D_0(\lambda_1, \lambda_2, m) = \begin{pmatrix} T(m) & 0 \\ 0 & -\lambda_2 \end{pmatrix}, \qquad
D_1(\lambda_1, \lambda_2, p_1, p_2, m) = \begin{pmatrix} p_1 T^0 \alpha & q_1 T^0 \\ q_2 \lambda_2 \alpha & p_2 \lambda_2 \end{pmatrix},$$

where qi = 1 − pi and 0 < pi < 1, i = 1, 2. Note that the representation


(α(m), T (m)) of dimension m is for Erlang distribution of order m (see e.g., [18]).
The representation (D0 (λ1 , λ2 , m), D1 (λ1 , λ2 , p1 , p2 , m)) of dimension m + 1 is
for a MAP . This form of MAP will be used for arrival processes in our illustrative
examples below. Suppose that r denotes the 1-lag correlation coefficient of this
MAP . It is shown in [11] that r is obtained explicitly in terms of the parameters
of the MAP as

$$r = \frac{q_1 q_2 (1 - q_1 - q_2)(\lambda_1 - m\lambda_2)^2}{q_1 \lambda_1^2 (q_1 + 2q_2) - 2 m q_1 q_2 \lambda_1 \lambda_2 + m q_2 \lambda_2^2 \left[ q_2 + (m+1) q_1 \right]}.$$
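As a concrete aid, the sketch below (Python/NumPy, not part of the paper) assembles D0(λ1, λ2, m) and D1(λ1, λ2, p1, p2, m) as defined above and evaluates the displayed expression for r; the parameters of the NC1 process from the list that follows are used as a check against the quoted correlation value.

```python
import numpy as np

def erlang_rep(lam1, m):
    """alpha, T, T0 for an Erlang distribution of order m with phase rate lam1."""
    T = -lam1 * np.eye(m) + lam1 * np.eye(m, k=1)
    T0 = np.zeros((m, 1)); T0[-1, 0] = lam1          # absorption rate vector
    alpha = np.zeros((1, m)); alpha[0, 0] = 1.0
    return alpha, T, T0

def map_matrices(lam1, lam2, p1, p2, m):
    """D0, D1 of dimension m + 1, following the construction displayed above."""
    alpha, T, T0 = erlang_rep(lam1, m)
    q1, q2 = 1.0 - p1, 1.0 - p2
    D0 = np.block([[T, np.zeros((m, 1))],
                   [np.zeros((1, m)), np.array([[-lam2]])]])
    D1 = np.block([[p1 * T0 @ alpha, q1 * T0],
                   [q2 * lam2 * alpha, np.array([[p2 * lam2]])]])
    return D0, D1

def one_lag_corr(lam1, lam2, p1, p2, m):
    """1-lag correlation r of successive inter-arrival times, as displayed above."""
    q1, q2 = 1.0 - p1, 1.0 - p2
    num = q1 * q2 * (1 - q1 - q2) * (lam1 - m * lam2) ** 2
    den = (q1 * lam1 ** 2 * (q1 + 2 * q2) - 2 * m * q1 * q2 * lam1 * lam2
           + m * q2 * lam2 ** 2 * (q2 + (m + 1) * q1))
    return num / den

D0, D1 = map_matrices(1.25, 2.5, 0.01, 0.01, 2)        # the NC1 process below
print(np.allclose((D0 + D1).sum(axis=1), 0.0))         # rows of the generator sum to zero
print(round(one_lag_corr(1.25, 2.5, 0.01, 0.01, 2), 5))  # close to the -0.32667 quoted below
```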

A. Arrival Process: We consider the following distributions for the arrival pro-
cesses. Note that these will be normalized so as to have a specific arrival rate, λ,
for comparison purposes. Also, these are qualitatively different in that they have
different correlation and variance structure.
1. Erlang (ERA): This is Erlang of order 5 with parameter 5λ.
2. Hyperexponential (H EA): This is hyperexponential with mixing probability
vector taken as (0.7, 0.25, 0.05) with the corresponding rate vector given by
λ(8.2, 0.82, 0.082).
3. MAP with negative correlation (NC1): Here we consider the MAP with
representation (D0 (1.25, 2.5, 2), D1(1.25, 2.5, 0.01, 0.01, 2)) of dimension 3.
Note that r = −0.32667.
4. MAP with negative correlation (NC2): Here we consider the MAP with
representation (D0 (2.25, 4.5, 4), D1(2.25, 4.5, 0.01, 0.01, 4)) of dimension 5.
Note that r = −0.57855.
5. MAP with negative correlation (NC3): Here we consider the MAP with rep-
resentation (D0 (4.75, 9.5, 9), D1(4.75, 9.5, 0.01, 0.01, 9)) of dimension 10.
Note that r = −0.78022.
6. MAP with positive correlation (P C1): Here we consider the MAP with
representation (D0 (1.25, 2.5, 2), D1(1.25, 2.5, 0.99, 0.99, 2)) of dimension 3.
Note that r = 0.32667.
7. MAP with positive correlation (P C2): Here we consider the MAP with
representation (D0 (2.25, 4.5, 4), D1(2.25, 4.5, 0.99, 0.99, 4)) of dimension 5.
Note that r = 0.57855.
8. MAP with positive correlation (P C3): Here we consider the MAP with rep-
resentation (D0 (4.75, 9.5, 9), D1(4.75, 9.5, 0.99, 0.99, 9)) of dimension 10.
Note that r = 0.78022.
9. Constant (CTA): Here we consider constant inter-arrival times with a value of 1/λ.
10. Weibull (WBA): We consider a 2-parameter (where one is fixed to be 0.5) Weibull whose CDF is given by $F_{WBA}(x, \theta) = 1 - e^{-(x/\theta)^{0.5}}$, x ≥ 0, θ > 0.

B. Service Times We consider the following distributions for the services. Note
that these will be normalized so as to have a specific service rate, μ, for comparison

purposes. Also, these are qualitatively different in that they have a different variance
structure.
1. Erlang (ERS): This is Erlang of order 5 with parameter 5μ.
2. Hyperexponential (H ES): This is hyperexponential with mixing probabil-
ity vector taken as (0.9, 0.1) with the corresponding rate vector given by
μ(1.9, 0.19).
3. Constant (CTS): Here we consider constant services with a value of 1/μ.
4. Weibull (WBS): We consider a 2-parameter (where one is fixed to be 0.5) Weibull whose CDF is given by $F_{WBS}(x, \eta) = 1 - e^{-(x/\eta)^{0.5}}$, x ≥ 0, η > 0.
C. Retrial Times We consider the following distributions for the retrials. Note
that these will be normalized so as to have a specific retrial rate, ξ , for comparison
purposes. Also, these are qualitatively different in that they have different variance
structure.
1. Erlang (ERR): This is Erlang of order 5 with parameter 5ξ .
2. Exponential (EXR): Here we consider exponential retrials with parameter ξ .
3. Hyperexponential (H ER): This is hyperexponential with mixing probabil-
ity vector taken as (0.9, 0.1) with the corresponding rate vector given by
ξ(1.9, 0.19).
Example 1 In this example, we fix λ = 1, μ = 1/(0.95c), ξ = 0.5, and look at different
scenarios by varying the arrival process, the service times, the retrial times, and the
number of servers, c, in the system. In Figs. 1 and 2, respectively, we display the
Ln(Mean busy period) vs Ln(Mean number of busy periods) for the renewal and the
correlated arrivals, and in Figs. 3 and 4, respectively, we display the coefficient of
variation of the busy period for the renewal and the correlated arrivals. These figures
are under various scenarios.
Some key observations, keeping in mind the sampling errors due to simulation,
from these figures are summarized below.
1. When c = 1, the scenario corresponding to ERA arrivals and ERS services
appears to yield a small mean busy period along with a small mean number of
busy periods. Note that the traffic load of the queue is high here. Hence, this is
counterintuitive but can be explained. In another example, we will discuss this by
looking at smaller to medium values for the traffic load. The small values for the
mean number of periods along with the small mean busy period triggered us to
look into running the simulation even for a longer period of time from the current
one of 500,000 units. However, when simulating this scenario for 100,000,000
units, we still noticed a similar phenomenon. So, this indicates that when the
inter-arrival and service times have a smaller variability the mean busy period
lasts for a longer period of time. Due to the sampling error, the mean busy period
is obtained based on a very few busy periods and after which the simulation ends
with no more busy period completed. This results in small numbers for these two

Fig. 1 Ln(Mean busy period) vs Ln(Mean number of busy periods) under various scenarios for
the renewal arrivals

measures and hence one should not interpret that ERA appears to yield smaller
mean busy periods. On the contrary, the mean busy period is large for ERA as
compared to, say, H EA, when the traffic load is high.
2. As the number, c, of servers is increased, we see that the mean busy period
becomes larger and the mean number of busy periods stays small, but only for
scenarios wherein the variability in the arrivals and also the variability
in the services are small (i.e., the ERA and ERS combination).
3. In the case of H EA, noting that this has a higher variability in the inter-arrival
times, we see the system getting free more often and also getting busy more
often. This is the case for all scenarios involving hyperexponential arrivals. This

Fig. 2 Ln(Mean busy period) vs Ln(Mean number of busy periods) under various scenarios for
the correlated arrivals

can be, somewhat, explained intuitively as follows. Due to a large variability in


the inter-arrival times (for the H EA case), we see the customers arrive with short
inter-arrival times and then, once in a while, it takes a longer time for an arrival to
occur. During this longer interval, customers from the orbit occupy the server at a
faster rate.
4. When comparing the correlated arrivals, we notice that almost for all scenarios
the negatively correlated arrivals have a higher mean busy period compared to
those of the positively correlated ones. However, when looking at the mean
number of busy periods, we notice that it is the positively correlated arrivals
that have a higher value compared to those of the negatively correlated ones.

Fig. 3 Coefficient of variation of the busy period under various scenarios for the renewal arrivals

5. Having a high variability in either the inter-arrival times or the service times appears
to yield a larger coefficient of variation of the busy period.
6. Looking at the coefficient of variation of the busy period, we notice that the
positively correlated arrivals appear to have a higher value when compared with
the negatively correlated ones.
7. In the case of the constant arrivals (CT A) as well as the Weibull arrivals
(W BA), we see behavior of the three measures similar to that of the
hyperexponential arrivals (H EA). Note that here the types of services are
different from the ones seen for the H EA case. However, the main point here

Fig. 4 Coefficient of variation of the busy period under various scenarios for the correlated arrivals

is to show that even a heavy-tailed distribution (in the arrivals and/or in the services),
such as the Weibull considered here, behaves differently as compared to the
Erlang ones.
Finally, a few additional comments for which the figures are not included here due
to lack of space. We looked at other scenarios involving Erlang arrivals (ERA).
These include CT S, W BS, with the combinations of ERR, H ER and c = 1, 2, 5.
We noticed that constant service (CT S) indicated that (under all combinations for
retrials and the number of servers), during the entire simulation period, there was at
least one customer waiting in the retrial orbit with probability ranging from 0.98 to

1.0, while the average number of customers in the orbit ranged from 7.2679 through
38.6894 (based on the type of retrial times and the number of servers). In the case of
W BS, the range for the probability is from 0.9186 through 0.9964 and the average
number in the orbit from 45.7554 to 86.1952. Due to these additional observations
it is no wonder that for some scenarios involving ERA arrivals, the busy period
looked “almost” endless even after simulating for a long period of time.
As we saw earlier, the scenario corresponding to ERA − ERS − ERR (i.e.,
Erlang arrivals, Erlang services, and Erlang retrials) produces a larger mean busy
period and a smaller number of busy periods especially when the traffic intensity is
higher. In the next example, we investigate the effect of the traffic intensity and the
order of Erlang arrivals on the two of the three measures under consideration.
Example 2 In this example, we investigate the effect of the traffic intensity and the
order of Erlang arrivals on the three measures under consideration. Towards this
end, we fix μ = 1, ξ = 0.5, c = 1, vary λ = 0.1, . . . , 0.8, and consider Erlang
for arrivals, services, and for retrials. While we fix the order of the Erlang for the
services and the retrials to be 5, we vary the order of the Erlang for the arrivals from
m = 2, . . . , 9. In Fig. 5, we display Ln(Mean busy period) and Ln(mean number
of busy periods), under various scenarios. Looking at this figure with two plots, we
notice the following interesting observations.
1. For λ taking values up to 0.6, we see that the mean busy period decreases as m
increases; however, when λ > 0.6, we see this trend is reversed indicating that
the mean busy period increases as m is increased.
2. With respect to the number of busy periods, we notice that this measure increases
as m is increased for λ up to 0.6, and then the trend is reversed.
Thus, we notice that when arrivals and services occur with less variability, then as
the traffic load increases, the mean busy period becomes large and the number of
busy periods decreases.
Example 3 In this example, we investigate the effect of the retrial rate for Erlang
arrivals on the three measures under consideration. Towards this end, we fix λ =
1, μ = 1/0.95, c = 1, vary ξ = 1, 2, 5, 10, and consider various services and
retrial times. In Fig. 6, we display Ln(Mean busy period), Ln(mean number of busy
periods), and the coefficient of variation of the busy period under various scenarios.
Looking at this figure, we notice the following interesting observations.
1. As ξ is increased, we notice that the mean busy period decreases and the mean
number of busy periods increases. This is to be expected as the retrial queueing
model approaches the corresponding classical queueing models.
2. As ξ is increased, we observe that the coefficient of variation increases. Further-
more, for a fixed ξ , H ES has a higher coefficient of variation compared to the
corresponding case for ERS. This is true for both retrial distributions.

Fig. 5 Graphs of the two measures as functions of λ and m for Erlang arrivals

Fig. 6 Graphs of various measures for Erlang arrivals

5 Concluding Remarks

In this paper, we looked at the busy period in the context of multi-server retrial
queueing systems. As is known, the study of the busy period is inherently difficult
even in the classical queueing systems. This is further
compounded in the case of retrial queueing systems due to the server alternating
between idle and service periods even when the system has customers waiting in
the orbit. Hence, we used simulation in this paper to carry out the busy period
analysis. We showed how the variability in the arrival/service/retrial times as well
as the correlation in the inter-arrival times affects the busy period. Furthermore, we

offered some insight into the behavior of the busy period when arrivals, services, and
retrials are all modeled using Erlang distribution. Even though the few illustrative
examples provided some insights and interesting results, more examples need to be
simulated and this will be a topic for future research.

References

1. Artalejo, J.R.: Accessible bibliography on retrial queues. Math. Comput. Model. 30, 1–6 (1999)
2. Artalejo, J.R., Gomez-Corral, A.: Retrial Queueing Systems: A Computational Approach.
Springer, Berlin (2008)
3. Artalejo, J.R., Lopez-Herrero, M.J.: On the busy period of the M/G/1 retrial queue. Nav. Res.
Logist. 47, 115–127 (2000)
4. Artalejo, J.R., Chakravarthy, S.R., Lopez-Herrero, M.J.: The busy period and the waiting time
analysis of a MAP /M/c queue with finite retrial group. Stoch. Anal. Appl. 25, 445–469 (2007)
5. Artalejo, J.R., Gomez-Corral, A., He, Q.M.: Markovian arrivals in stochastic modelling: a
survey and some new results. SORT 34(2), 101–144 (2010)
6. Chakravarthy, S.R.: The batch Markovian arrival process: a review and future work. In:
Krishnamoorthy, A. et al. (ed.) Advances in Probability Theory and Stochastic Processes.
Notable Publications, New Jersey, pp. 21–39 (2001)
7. Chakravarthy, S.R.: Markovian arrival processes. In: Wiley Encyclopedia of operations
research and management science. Wiley, New York (2010)
8. Chakravarthy, S.R.: Analysis of MAP /P H /c retrial queue with phase type retrials—
simulation approach. In: Dudin, A. et al. (eds.) BWWQT 2013, CCIS 356, pp. 37–49 (2013)
9. Chakravarthy, S.R.: Matrix-analytic queueing models, Chapter 8. In: Narayan Bhat, U. (ed.)
An Introduction to Queueing Theory, 2nd edn, Birkhauser/Springer, New York (2015)
10. Chakravarthy, S.R.: Busy period analysis of GI /G/c and MAP /G/c queues. In: Deep, K.
et al. (eds.) Performance Prediction and Analysis of Fuzzy, Reliability and Queueing Models,
Asset Analytics, pp. 1–31 (2019). https://doi.org/10.1007/978-981-13-0857-4_1
11. Chakravarthy, S.R.: Queueing models in services—analytical and simulation approach. In:
Anisimov, V. Prof., Limnios, N. Prof. (eds.) To Appear in Advanced Trends in Queueing
Theory. Mathematics and Statistics. Sciences, ISTE/Wiley, London (2020)
12. Kelton, W.D., Sadowski, R.P., Swets, N.B.: Simulation with ARENA, 5th edn., McGraw-Hill,
New York (2010)
13. Kim, J.: Busy period distribution of a batch arrival retrial queue. Commun. Korean Math. Soc.
32(2), 425–433 (2017). https://doi.org/10.4134/CKMS.c160106
14. Kim, J., Kim. B.: A survey of retrial queueing systems. Ann. Oper. Res. 247(1), 3–36 (2016).
https://doi.org/10.1007/s10479-015-2038-7
15. Lucantoni, D.M.: New results on the single server queue with a batch Markovian arrival
process. Stoch. Model. 7, 1–46 (1991)
16. Lucantoni, D.M., Meier-Hellstern, K.S., Neuts, M.F.: A single-server queue with server
vacations and a class of nonrenewal arrival processes. Adv. Appl. Probl. 22, 676–705 (1990)
17. Neuts, M.F.: A versatile Markovian point process. J. Appl. Prob. 16, 764–779 (1979)
18. Neuts, M.F.: Matrix-geometric solutions in stochastic models: an algorithmic approach. The
Johns Hopkins University, Baltimore (1981). [1994 version is Dover Edition]
19. Neuts, M.F.: Structured stochastic matrices of M/G/1 type and their applications. Marcel
Dekker, New York (1989)

20. Neuts, M.F.: Models based on the Markovian arrival process. IEICE Trans. Commun. E75B,
1255–1265 (1992)
21. Neuts, M.F.: Algorithmic Probability: A Collection of Problems. Chapman and Hall, New York
(1995)
22. Phung-Duc, T.: Retrial queueing models: a survey on theory and applications. In: Dohi, T. et
al. (eds.) Stochastic Operations Research in Business and Industry. World Scientific, Singapore
(2017). http://infoshako.sk.tsukuba.ac.jp/~tuan/papers/Tuan_chapter_ver3.pdf
Steady-State and Transient Analysis
of a Single Channel Cognitive Radio
Model with Impatience and Balking

Alexander Rumyantsev and Garimella Rama Murthy

Abstract In this paper, motivated by an increasing interest in Cognitive Radio


wireless transmission systems, we study a stochastic model of a single node of
such a system with underlay transmission and balking. The considered model is
essentially a single-server system with an ON–OFF type environment governing
the service time intensity and triggering the balking events. We utilize the matrix
analytic method for steady-state analysis, and perform transient analysis by Com-
plete Level Crossing Information approach. The results of analysis are validated and
illustrated by simulation.

Keywords Radio · Structured Markov chain · Transient analysis · Matrix


analytic method · Complete level crossing information

1 Introduction

In recent years, due to proliferation of wireless networks, demand for electromag-


netic spectrum is increasing. However, measurements in an urban environment show
that the spectrum utilization can be relatively low both spatially and temporally.
Thus, spectrum sharing solutions are used to increase wireless networks efficiency.
One of widely used solutions is the so-called Cognitive Radio (CR) wireless
network technology. CR allows the wireless transmission channel to be shared among
the licensed users (known as Primary Users, PU) and unlicensed users (or Secondary

A. Rumyantsev ()
Institute of Applied Mathematical Research of the Karelian Research Centre of R.A.S.,
Petrozavodsk, Russia
Petrozavodsk State University, Petrozavodsk, Russia
e-mail: [email protected]
G. Rama Murthy
Mahindra Ecole Centrale, Bahadurpally, Hyderabad, India
e-mail: [email protected]


Users, SU). The latter are obliged to vacate the channel or decrease the transmission
activity in favor of the former. In general, one of the following methods of spectrum
sharing is used [15]:
– Interweaving: either PU or SU can use the channel at a time.
– Underlay: PU/SU share the channel given low interference at PU receiver.
– Overlay: PU and SU cooperatively improve signal/noise ratio at PU receiver.
The framework of stochastic modeling and queueing theory is a common tool for
researchers in the field of CR [15], and specifically matrix analytic method (MAM)
is widely adopted to study the steady-state performance of CR systems under
various assumptions [5, 14, 20]. MAM involves analysis of the structured discrete
space continuous time Markov chain (CTMC) describing the system dynamics.
In many cases, the CTMC belongs to a class of two-dimensional Markov chains,
known as quasi-birth-and-death (QBD) processes, which exhibit a matrix-geometric
solution for the steady-state probability distribution [6, 11, 13] and allow (under
some restrictions) the steady-state performance to be obtained explicitly.
While steady-state performance captures the long-run system behavior,
transient analysis delivers insights on the time-dependent system evolution and
the sensitivity of the system to various management parameters, which is of high
practical value. Such an analysis in general requires the solution of an
(infinite) system of differential equations and is more complicated than steady-
state analysis. At the same time, special structure of the CTMC under study
(e.g., QBD processes) allows to obtain the transient solution explicitly in terms of
Laplace transform, using the so-called Complete Level Crossing Information (LCI)
approach [1, 7, 18].
In this research paper, we consider CR networks with a single channel. This
channel is licensed for utilization by PU. Opportunistically, the channel is accessed
by a pool of SU. There are many practically interesting wireless networks in
which such assumption holds true, e.g. Mobile Ad-Hoc Networks with identical
wireless handsets employed in civilian applications. The interweaving paradigm is
easier for hardware implementation and more attractive for analysis. Compared
to the latter, the underlay and overlay methods achieve higher spectrum
utilization. However, both methods are underrepresented in the literature [15] due
to higher complexity of the models. This motivates us to study the single channel
underlay CR system. At the same time, to keep analytical tractability, we focus on
the exponential distributions of governing sequences.
The contribution of this paper is threefold. First, we study the model with
underlay paradigm and randomized balking, which generalizes the models studied
in [5, 14, 19, 20] (where the SU are always required to leave the system once the PU
arrives at the system). Second, we obtain the steady-state distribution explicitly by
the method developed in [8]. Finally, we perform transient analysis of the system,
illustrating the results with the help of a simulation model. The results of numerical
study verify the applicability of the approach. We stress that transient analysis is
performed using Laplace transform of the desired performance measures. To obtain

the transient performance in time domain, inverse transform is required, which may
be performed numerically.
This research paper is organized as follows. In Sect. 2, QBD model of a single
channel wireless CR is discussed in detail. In Sect. 3, steady-state analysis is
performed, followed by transient analysis in Sect. 4. Simulation results are presented
in Sect. 5. The research paper concludes in Sect. 6.

2 Model Description

In what follows we formulate and study a queueing-theoretic model of the CR


system. We adopt the following notions common in the queueing theory: server
(corresponding to a wireless transmission channel), PU/SU customer (PU/SU
transmission packet), queue (backlog of SU packets waiting for transmission),
service (transmission time of PU/SU packet).
The CR wireless transmission system is a single-server model capable of serving
two types of customers, PU and SU, with PU having absolute priority over SU.
The SU arrive at epochs of Poisson input of rate λs and are waiting to be served in a
single first-come-first-served queue. Service times of SU are independent identically
distributed (iid.) random variables (r.v.) having exponential distribution of rate μs .
At any time an arriving PU can reclaim the server for an exponentially distributed
service time of rate μp . We assume that a new PU arrives after an exponentially
distributed time, of rate λp , passes since the departure epoch of the previous PU.
Thus, the server state is indeed the so-called ON–OFF process with exponential ON
and OFF period duration (of different rates), which is a common assumption [15].
Upon arrival of a PU, the interrupted SU either returns back to the first position
in the queue (with probability α ∈ [0, 1]), or balks immediately (with probability
1 − α). Such a randomized balking policy is a continuous-time generalization of
the so-called α-retry policy [19]. When PU is present in the system, the SU are
served at a different service rate β. We can think of such a service as an underlay
transmission which requires the SU to use reduced signal strength once PU is
present in the system. (The periods of PU transmission may also be considered
as working breakdowns, or reduced energy/performance states.) The service rate μs
is restored at the PU departure epoch, and the SU being served at rate β (if any)
immediately starts a new service time at the rate μs at the departure epoch of PU.
(Note that β may be also thought as the rate of impatience of SU waiting in the
first position of the queue during the PU service.) Thus, service/interarrival times
of a PU are in fact periods of a binary environment state modifying the parameter
of service time distribution of SU. We may also think of the PU arrivals as an input
following a rate-λp Poisson process with no waiting space for PU.
Thus, given the model assumptions, the model is a state-dependent single-server
M/M/1-type system with randomized balking. The model assumptions allow us to
study the system dynamics as a two-dimensional discrete space CTMC

$\{X(t), J(t)\}_{t \geq 0},$

Fig. 1 State transition diagram of the CTMC model {X(t), J(t)}_{t≥0} of cognitive radio with
impatience and randomized balking

where X(t) ≥ 0 is the number of SU in the system, and J(t) ∈ {0, 1} is the number
of PU being served at time t ≥ 0. Note that such a structured CTMC has two states
at each level (i, ·), i ≥ 0. To simplify comprehension, we illustrate the transition
rate diagram on Fig. 1.
It is easy to see that the process {X(t), J(t)}_{t≥0} is a QBD process with infinitesimal
generator of the following block-tridiagonal form

$$Q = \begin{pmatrix}
A_{0,0} & A_0 & 0 & 0 & \cdots \\
A_2 & A_1 & A_0 & 0 & \cdots \\
0 & A_2 & A_1 & A_0 & \ddots \\
0 & 0 & A_2 & A_1 & \ddots \\
\vdots & \vdots & \ddots & \ddots & \ddots
\end{pmatrix}, \qquad (1)$$

where the matrix A0,0 corresponds to transitions between boundary states (0, ·),
matrix A2 corresponds to level decreasing transitions (departures of SU and balking
at arrivals of PU), A0 corresponds to arrivals of SU, and A1 is related to PU
arrival/service completion. Below we define these matrices explicitly.
$$A_0 = \lambda_s I, \qquad
A_1 = \begin{pmatrix} -c_1 & \alpha\lambda_p \\ \mu_p & -c_2 \end{pmatrix}, \qquad
A_2 = \begin{pmatrix} \mu_s & (1-\alpha)\lambda_p \\ 0 & \beta \end{pmatrix}, \qquad (2)$$

$$A_{0,0} = \begin{pmatrix} -\lambda_s - \lambda_p & \lambda_p \\ \mu_p & -\lambda_s - \mu_p \end{pmatrix}, \qquad (3)$$

where c1 = λp + λs + μs and c2 = λs + μp + β. The diagonal elements of A1
follow from the balance condition $A\mathbf{1} = 0$, where

$$A = A_0 + A_1 + A_2 = \begin{pmatrix} -\lambda_p & \lambda_p \\ \mu_p & -\mu_p \end{pmatrix}. \qquad (4)$$

It is interesting to note that A2 is a full-rank matrix if β > 0, which is the case
in focus in our analysis. If β = 0 and α = 1, A2 has only one nonzero element, and the model
corresponds to a classical M/M/1 queue with breakdowns.
The stability criterion follows from the celebrated Neuts ergodicity condition [11]

$$\alpha A_2 \mathbf{1} > \alpha A_0 \mathbf{1}, \qquad (5)$$

where α is derived from the following system:

$$\alpha A = 0, \qquad \alpha \mathbf{1} = 1. \qquad (6)$$

From (4) and (6), α readily follows:


$$\alpha = \left( \frac{\mu_p}{\lambda_p + \mu_p},\ \frac{\lambda_p}{\lambda_p + \mu_p} \right). \qquad (7)$$

Thus, the stability criterion (5) reduces to

$$\lambda_s < \frac{\mu_p}{\lambda_p + \mu_p}\,\mu_s + \frac{\lambda_p}{\lambda_p + \mu_p}\left(\beta + (1-\alpha)\lambda_p\right). \qquad (8)$$

Note that the stability condition (8) has a nice interpretation: the input rate of SU
should be less than the total output rate of SU (including balking at PU arrival
epochs) weighted by relative average lengths of PU interarrival/service periods.
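The condition can also be checked numerically. The sketch below (Python/NumPy, illustrative only and not part of the paper) builds the blocks of (2)–(3), uses the vector α of (7), and tests the drift inequality (5) directly; the parameter values are the ones used later in the numerical example of Sect. 5.

```python
import numpy as np

def blocks(lam_s, lam_p, mu_s, mu_p, alpha, beta):
    """A0, A1, A2, A00 of Eqs. (2)-(3)."""
    c1 = lam_p + lam_s + mu_s
    c2 = lam_s + mu_p + beta
    A0 = lam_s * np.eye(2)
    A1 = np.array([[-c1, alpha * lam_p],
                   [mu_p, -c2]])
    A2 = np.array([[mu_s, (1 - alpha) * lam_p],
                   [0.0, beta]])
    A00 = np.array([[-(lam_s + lam_p), lam_p],
                    [mu_p, -(lam_s + mu_p)]])
    return A0, A1, A2, A00

def is_stable(lam_s, lam_p, mu_s, mu_p, alpha, beta):
    A0, _, A2, _ = blocks(lam_s, lam_p, mu_s, mu_p, alpha, beta)
    a = np.array([mu_p, lam_p]) / (lam_p + mu_p)   # stationary vector of A, Eq. (7)
    e = np.ones(2)
    return a @ A2 @ e > a @ A0 @ e                 # Neuts condition, Eq. (5)

# parameters of the numerical example in Sect. 5
print(is_stable(lam_s=3.285, lam_p=3.0, mu_s=2.0, mu_p=5.0, alpha=0.2, beta=4.0))
```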

3 Steady-State Analysis

The steady-state probability vector π = (πi,j ), i, j ∈ E of a level-independent


QBD with state space {0, 1, . . . } × {1, . . . , m} can be obtained in matrix-geometric
form [3, 11, 13]

$$\pi_k = \pi_{k-1} R, \quad k \geq 1, \qquad (9)$$

where πk = (πk,1 , . . . , πk,m ) and the matrix R is the minimal nonnegative solution
of a matrix quadratic equation

$$P(R) := R^2 A_2 + R A_1 + A_0 = 0. \qquad (10)$$

The rate matrix R is in general obtained numerically [2, 12, 13]. However, the level-
independent QBD process with m = 2 states at each level admits explicit matrix-
geometric solution, both in case A2 is a full rank matrix [9] (by the Cayley–Hamilton
theorem) and if A2 is of rank one [10, 12], see also [17]. It remains to note that π0
is obtained by solving the following boundary value problem:

$$\pi_0 (A_{0,0} + R A_2) = 0, \qquad \pi_0 (I - R)^{-1} \mathbf{1} = 1. \qquad (11)$$

Since the matrices defined in (2) in general case are full rank, we utilize the method
proposed in [8] to obtain the steady-state solution. Below we briefly summarize the
method.
First, the largest real root ξ3 of det A(ξ) := det(A0 + ξA1 + ξ²A2) is obtained
explicitly (in trigonometric form) as a root of the following cubic polynomial
appearing after factorization:

$$\frac{\det A(\xi)}{\xi - 1} = \beta\mu_s \xi^3 - \left(c_1\beta + (c_2 - \beta)\mu_s + (1-\alpha)\lambda_p\mu_p\right)\xi^2 + \lambda_s(c_1 + c_2 - \lambda_s)\xi - \lambda_s^2.$$

Then, it is known that the eigenvalues of matrix R are the (real) roots of det A(ξ )
inside the unit disk, and the sum b1 and product b0 of these eigenvalues are obtained:

$$b_1 = \xi_3 - \frac{c_1\beta + (c_2 - \beta)\mu_s + (1-\alpha)\lambda_p\mu_p}{\beta\mu_s}, \qquad b_0 = \frac{\lambda_s^2}{\beta\mu_s\,\xi_3}.$$

Next, the Cayley–Hamilton theorem is used to obtain a solution R in the following


linear form:

R = (b0 A2 − A0 )(A1 − b1 A2 )−1 . (12)

Finally, π0 is obtained explicitly from (11), and the recursion (9) yields
the steady-state distribution π. It should be noted that the method can be directly
generalized to G/M/1-type processes [9]; however, in such a case the values b0 and
b1 have to be obtained numerically.
After obtaining the steady-state distribution, it is easy to derive the stationary
performance measures of the system, such as the mean stationary number of SU in
the system, as follows:

$$EX = \pi_0 R [I - R]^{-2} \mathbf{1}. \qquad (13)$$
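These steps can be reproduced numerically. The sketch below (Python/NumPy, illustrative) computes R by the standard successive-substitution iteration R ← −(A0 + R²A2)A1⁻¹ rather than by the closed form (12), then solves the boundary problem (11) and evaluates (13); with the parameters of the example in Sect. 5 it should give a value of EX close to the one reported there.

```python
import numpy as np

# parameters of the numerical example in Sect. 5
lam_s, lam_p, mu_s, mu_p, alpha, beta = 3.285, 3.0, 2.0, 5.0, 0.2, 4.0
c1, c2 = lam_p + lam_s + mu_s, lam_s + mu_p + beta

A0 = lam_s * np.eye(2)
A1 = np.array([[-c1, alpha * lam_p], [mu_p, -c2]])
A2 = np.array([[mu_s, (1 - alpha) * lam_p], [0.0, beta]])
A00 = np.array([[-(lam_s + lam_p), lam_p], [mu_p, -(lam_s + mu_p)]])

# minimal nonnegative solution of (10) by successive substitution, starting from R = 0
R = np.zeros((2, 2))
A1_inv = np.linalg.inv(A1)
for _ in range(100_000):
    R_next = -(A0 + R @ R @ A2) @ A1_inv
    if np.max(np.abs(R_next - R)) < 1e-13:
        R = R_next
        break
    R = R_next

# boundary vector pi0 from (11): left null vector of the (singular) 2x2 matrix A00 + R A2
M = A00 + R @ A2
pi0 = np.array([M[1, 0], -M[0, 0]])                # satisfies pi0 @ M = 0 when det(M) = 0
pi0 = pi0 / (pi0 @ np.linalg.inv(np.eye(2) - R) @ np.ones(2))

# mean stationary number of SU in the system, Eq. (13)
EX = pi0 @ R @ np.linalg.matrix_power(np.linalg.inv(np.eye(2) - R), 2) @ np.ones(2)
print(EX)
```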



4 Transient Analysis

It is established in [18] that the transient probability distribution π(t), t ≥ 0, of an


arbitrary level-independent QBD process can be computed by a matrix geometric-
type recursion with the help of the Laplace transform. However, the state space of
QBD process should possibly be mutated to satisfy the Complete Level Crossing
Information (LCI-complete) property that guarantees that each state (i, j ) of the
structured CTMC accepts transitions either from lower level (i −1, ·), or from upper
level (i + 1, ·) only. This requires to perform a partition of the state space of each
such a state (i, j ) accepting transitions both from upper and lower levels into two
states. We summarize the required state space transformation below.
The state (i, j) accepting inward transitions from upper and lower levels is split
into a couple (i, 0, j) and (i, 2, j). The outward transitions from each of these
split states are copies of the outward transitions of the original state (i, j); inward
transitions from level i − 1 arrive at state (i, 0, j), while inward transitions from
level i + 1 arrive at (i, 2, j). One of the two destinations for transitions from level
i (if any) can be chosen arbitrarily. The state (i′, j′) accepting transitions only
from upper or lower levels (and possibly from the same level i) is (for notation
consonance) replaced with the state (i′, k, j′), where k ∈ {0, 1, 2}. This leads to a
new Markov process with enriched state space, {X(t), D(t), J(t)}_{t≥0}, where D(t)
may be interpreted as the direction of the inward transition. Formally, each state
(i, j ) of the original process state space is replaced with
– a pair (i, 0, j ), (i, 2, j ), if A0 (·, j ) > 0 and A2 (·, j ) > 0, i.e. j -th columns of
matrices A0 and A2 are nonzero;
– a single state (i, k, j ), if Ak (·, j ) > 0 and A2−k (·, j ) = 0, k = 0, 2;
– a single state (i, 1, j), if A0(·, j) = 0 and A2(·, j) = 0.
Then transition rates of the new infinitesimal generator are defined as

Q̂([i, k, j ], [i " , k " , j " ]) := Q([i, j ], [i " , j " ]),

if one of the following conditions holds:


– i " = i − 1 and k " = 2;
– i " = i + 1 and k " = 0;
– i " = i and k " is minimal s.t. (i " , k " , j " ) is a state.
Note that the boundary states (0, j ) cannot receive transitions from below, thus,
they are replaced with a single state (i, k, j ), where k = 2 if A2 (·, j ) > 0 and
k = 1 otherwise. We use the three-dimensional lexicographically ordered index
of the matrix Q̂ and two-dimensional index of Q, which simplifies comprehension
compared to sequentially renumbered states in the state space. We select the minimal
k " w.o.l.o.g., since the destination for inter-level transitions can be chosen arbitrarily.
More details on LCI-completeness and the transformations can be found in [18]. We
specifically use the second component as opposed to third component used in [18],
since this allows to preserve lexicographical order without the need of reordering.

Fig. 2 State transition diagram of the CTMC model {X(t), D(t), J(t)}_{t≥0} of cognitive radio with
impatience and randomized balking satisfying LCI-completeness property

In Fig. 2 we illustrate the mutation of the state space performed to satisfy the LCI-
completeness (for two consecutive states).
After such a partition, the new state space allows each state (i, k, j ) to receive
transitions either from levels i+1 or from i−1 only. Thus, the infinitesimal generator
Q̂ of the new process has the following bidiagonal form known as LCI-complete
canonical form:
$$\hat{Q} = \begin{pmatrix}
B_0 & B_1 & 0 & 0 & \cdots \\
0 & C & B & 0 & \cdots \\
0 & 0 & C & B & \cdots \\
0 & 0 & 0 & C & \cdots \\
\vdots & \vdots & & \ddots & \ddots
\end{pmatrix}, \qquad (14)$$

where all matrices, except B0 and B1 , are square matrices of order m (recall m is
the number of states at non-boundary levels). B0 is an (m0 + m) × (m0 + m − m1 )
matrix, where m0 is the number of states at boundary level and m1 is the number of
states at non-boundary level that do not allow downward transitions to the boundary
level, while B1 is (m0 + m) × m. Note that after the transformation of CR model
CTMC, m = 4, m0 = 2, m1 = 2. Hereafter we define the blocks of Q̂ for CR model
explicitly:
$$C = \begin{pmatrix}
\mu_s & (1-\alpha)\lambda_p & -c_1 & \alpha\lambda_p \\
0 & \beta & \mu_p & -c_2 \\
\mu_s & (1-\alpha)\lambda_p & 0 & \alpha\lambda_p \\
0 & \beta & \mu_p & 0
\end{pmatrix}, \qquad
B = \begin{pmatrix}
0 & 0 & \lambda_s & 0 \\
0 & 0 & 0 & \lambda_s \\
-c_1 & 0 & \lambda_s & 0 \\
0 & -c_2 & 0 & \lambda_s
\end{pmatrix}, \qquad (15)$$

$$B_{0,0} = \begin{pmatrix} -(\lambda_p + \lambda_s) & \lambda_p \\ \mu_p & -(\mu_p + \lambda_s) \end{pmatrix}, \qquad
B_1 = \begin{pmatrix} \lambda_s & 0 \\ 0 & \lambda_s \end{pmatrix}, \qquad
B_0 = \begin{pmatrix} 0 & B_{0,0} \\ B & C \end{pmatrix}. \qquad (16)$$

The new matrix Q̂ allows the steady-state distribution of the LCI-complete QBD
to be obtained as a solution of a system of linear equations, since the matrix Q̂
is bidiagonal [1]. Note that the method of obtaining a solution is somewhat similar
to the linear solution (12); however, the interrelation of these methods is beyond the
scope of this paper and is left for future research. In contrast, to obtain the transient
solution, the following system is solved:

$$\frac{d\pi(t)}{dt} = \pi(t)\hat{Q}, \qquad (17)$$
with boundary condition π(0) = π0 . Performing a componentwise Laplace
transform of (17) leads to the following system:

Π(u)(Q̂ − uI ) = −π0 , (18)

where I is the identity matrix, and


$$\Pi(u) = \int_0^{\infty} \pi(t)e^{-ut}\,dt, \qquad \mathrm{Re}\,u \geq 0.$$

It is easy to obtain from (14) that blocks of matrix Q̂ − uI are obtained from
blocks of Q̂ by replacing ci with ci (u) := ci + u, i = 1, 2. We use the notation
C(u), B(u), Bi (u), i = 1, 2, and B0,0 (u) to refer to the corresponding submatrices
of (18).

It follows from (15) that |C(u)| = c1 (u)c2 (u)βμs > 0. After some algebra,
C −1 (u) can be obtained explicitly
$$C^{-1}(u) = \begin{pmatrix}
-\dfrac{k_1}{c_1(u)} & \dfrac{k_2}{c_2(u)} & \dfrac{1}{\mu_s} + \dfrac{k_1}{c_1(u)} & -\dfrac{k_1}{\mu_p} - \dfrac{k_2}{c_2(u)} \\[2mm]
\dfrac{\mu_p}{\beta c_1(u)} & 0 & -\dfrac{\mu_p}{\beta c_1(u)} & \dfrac{1}{\beta} \\[2mm]
-\dfrac{1}{c_1(u)} & 0 & \dfrac{1}{c_1(u)} & 0 \\[2mm]
0 & -\dfrac{1}{c_2(u)} & 0 & \dfrac{1}{c_2(u)}
\end{pmatrix},$$

where k1 = (1 − α)λp μp /(βμs ) and k2 = αλp /μs . The transient solution Π(u) is
where k1 = (1 − α)λp μp /(βμs ) and k2 = αλp /μs The transient solution Π(u) is
then obtained in matrix-geometric form by the following recursion:

Πn+1 (u) = Πn (u)W (u), (19)

where W (u) = −B(u)C −1 (u) can be found explicitly:


$$W(u) = \begin{pmatrix}
\dfrac{\lambda_s}{c_1(u)} & 0 & -\dfrac{\lambda_s}{c_1(u)} & 0 \\[2mm]
0 & \dfrac{\lambda_s}{c_2(u)} & 0 & -\dfrac{\lambda_s}{c_2(u)} \\[2mm]
\dfrac{\lambda_s}{c_1(u)} - k_1 & \dfrac{k_2 c_1(u)}{c_2(u)} & \dfrac{c_1(u)}{\mu_s} + k_1 - \dfrac{\lambda_s}{c_1(u)} & -\dfrac{k_1 c_1(u)}{\mu_p} - \dfrac{k_2 c_1(u)}{c_2(u)} \\[2mm]
\dfrac{\mu_p c_2(u)}{\beta c_1(u)} & \dfrac{\lambda_s}{c_2(u)} & -\dfrac{\mu_p c_2(u)}{\beta c_1(u)} & \dfrac{c_2(u)}{\beta} - \dfrac{\lambda_s}{c_2(u)}
\end{pmatrix}.$$
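For a sanity check of the closed forms, one can compute W(u) numerically. The sketch below (Python/NumPy, illustrative and not part of the paper) builds C(u) and B(u) by taking the matrices of (15) and replacing c_i with c_i(u) = c_i + u, and then evaluates −B(u)C(u)⁻¹ for comparison with the displayed entries; the value of u is arbitrary.

```python
import numpy as np

def C_B(u, lam_s, lam_p, mu_s, mu_p, alpha, beta):
    """C(u), B(u): the blocks of (15) with c_i replaced by c_i(u) = c_i + u."""
    a, b = (1 - alpha) * lam_p, alpha * lam_p
    c1u = lam_p + lam_s + mu_s + u
    c2u = lam_s + mu_p + beta + u
    C = np.array([[mu_s, a, -c1u, b],
                  [0.0, beta, mu_p, -c2u],
                  [mu_s, a, 0.0, b],
                  [0.0, beta, mu_p, 0.0]])
    B = np.array([[0.0, 0.0, lam_s, 0.0],
                  [0.0, 0.0, 0.0, lam_s],
                  [-c1u, 0.0, lam_s, 0.0],
                  [0.0, -c2u, 0.0, lam_s]])
    return C, B

u = 0.7                                        # arbitrary point of the transform variable
C, B = C_B(u, 3.285, 3.0, 2.0, 5.0, 0.2, 4.0)  # parameters of the example in Sect. 5
W = -B @ np.linalg.inv(C)                      # W(u) = -B(u) C^{-1}(u), used in (19)
print(np.round(W, 4))
```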

4.1 Boundary Value Problem

To obtain the Laplace transform of transient state vectors for boundary states,
Π0 (u), Π1 (u), the following linear system has to be solved:
$$[\Pi_0(u)\ \Pi_2(u)] \begin{pmatrix} B_0(u) & B_1(u) \\ 0 & C(u) \end{pmatrix} = -\pi_0, \qquad (20)$$

where Π0 (u) is an (m0 + m)-component vector (corresponding to the boundary state


0 as well as the level 1), and Π2 (u) has m components. However, the system has
multiple solutions and requires m1 more equations to find the solution explicitly.
This requirement is satisfied by obtaining the eigenvalues of matrix W (u) which are
on or outside of the unit circle, and eliminating the corresponding unstable modes
from the solution [18]. Some straightforward algebra allows to deduce the following
characteristic equation for the eigenvalues of W (u):

$$\phi(\xi) = (\xi - 1)\left(\xi^3 + a_2\xi^2 + a_1\xi + a_0\right) = 0,$$

where
$$a_2 = -\left(\frac{c_2(u)}{\beta} - 1\right) - \frac{c_1(u)}{\mu_s} - k_1, \qquad
a_1 = \frac{\lambda_s\left(c_1(u) + c_2(u) - \lambda_s\right)}{\beta\mu_s}, \qquad
a_0 = -\frac{\lambda_s^2}{\beta\mu_s}.$$

It can be seen that ξ = 1 is the root of φ(ξ ), which is as expected. To obtain the
largest real root of φ(ξ ), we use a trigonometric formula for cubic equation:
$$\xi_0 = 2\sqrt{-p}\,\cos\left(\frac{1}{3}\arccos\left(\frac{q}{2p\sqrt{-p}}\right)\right) - \frac{a_2}{3},$$

where

$$p = \frac{3a_1 - a_2^2}{9}, \qquad q = \frac{2a_2^3 - 9a_1 a_2 + 27a_0}{27}.$$
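The trigonometric expression is easy to verify against a general-purpose root finder; the short sketch below (Python/NumPy, illustrative, with arbitrary coefficients chosen so that all three roots are real) compares it with the largest real root reported by np.roots.

```python
import numpy as np

def largest_root_trig(a2, a1, a0):
    """Largest real root of xi^3 + a2 xi^2 + a1 xi + a0 via the trigonometric formula."""
    p = (3 * a1 - a2 ** 2) / 9.0
    q = (2 * a2 ** 3 - 9 * a1 * a2 + 27 * a0) / 27.0
    return 2 * np.sqrt(-p) * np.cos(np.arccos(q / (2 * p * np.sqrt(-p))) / 3.0) - a2 / 3.0

a2, a1, a0 = -6.0, 9.75, -3.5                    # cubic with roots 0.5, 2 and 3.5
xi_trig = largest_root_trig(a2, a1, a0)
xi_ref = max(r.real for r in np.roots([1.0, a2, a1, a0]) if abs(r.imag) < 1e-9)
print(xi_trig, xi_ref)                            # both should be 3.5
```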
The corresponding right eigenvector r0 is then any solution of the following linear
system:

(W − ξ0 I )r0 = 0,

and normalization condition r0 1 = 1 can be used to obtain unique r0 . Similarly, r1


is the right eigenvector corresponding to eigenvalue ξ1 = 1. This allows us to extend
the system (20) with the following equations:
$$[\Pi_0(u)\ \Pi_2(u)] \begin{pmatrix} 0 & 0 \\ r_0 & r_1 \end{pmatrix} = 0. \qquad (21)$$

For simplicity, we assume an initially empty system, that is, π0 = (1, 0, . . . , 0).
Now equations (20) and (21), together with recursion (19), allow Π(u) to be obtained
explicitly. To obtain the transient solution π(t), an inversion of the Laplace
transform Π(u) is required, which in general is done numerically. Finally, the
performance measures of the system may also be obtained in terms of Laplace
transform inversion. In particular, the Laplace transform of the transient mean
number of SU in the system is

$$X(u) = \sum_{i=1}^{\infty} i\,\Pi_i(u)\mathbf{1} = \Pi_1(u)\mathbf{1} + \Pi_2(u)P(u)\left(2I - \Lambda(u)\right)\left(I - \Lambda(u)\right)^{-2}P^{-1}(u)\mathbf{1},$$

where Π1 (u) is the vector of m rightmost components of Π0 corresponding to level


1, P (u) is the matrix of right eigenvectors of W (u), P −1 (u) is the matrix of left
eigenvectors of W (u), and Λ(u) is the diagonal matrix of eigenvalues of W (u) with
zero values replacing the eigenvalues ξ0 > 1 and ξ1 = 1. The details of obtaining
the Laplace transform of such a performance measure may be found in [18].

5 Simulation Results

For simulation purpose, we created a discrete event simulation model of the system
in R language [16]. We have validated the steady-state distribution and performance
measures with the analytical solution for trajectories up to 10^8 arrivals long, and the
difference is of order 10^{-4}. We also performed validation of the transient model, and
numerical errors for performance measures are reasonable. However, the sensitivity
of the transient solution to parameters and the numerical stability has to be studied
separately.
To illustrate the approach, we performed a simulation of the CR system model
with the following arbitrary chosen parameters: λp = 3, μs = 2, μp = 5, α =
0.2, β = 4. We selected λs = 3.285 such that the stability criterion is satisfied,
but the system load is relatively large. Steady-state analysis allows to obtain the
mean stationary number of SU in the system as EX ≈ 3.34. At the same time, we
simulated the system in transient state for t ≤ 100. To obtain the estimates of EX(t)
in the simulation model, we calculated simple ensemble averages $\frac{1}{N}\sum_{i=1}^{N} X_i(t_j)$,
where X_i(t_j) is the number of SU in the system at the i-th trajectory (simulation
run) at the j-th time point, t_j = j, j = 1, . . . , 100. We performed N = 10^5
simulations to obtain these point estimators, and performed Laplace inversion to
obtain the analytical result. The results of simulation depicted on Fig. 3 illustrate
good adequacy of analytical and simulation results.
[Plot: E X(t) versus time t, with curves for the transient solution, the steady-state value, and the simulation estimates.]

Fig. 3 Convergence of transient performance measure EX(t) to steady-state value EX for


analytical and simulation models

6 Conclusion

We performed steady-state and transient analysis of the QBD model of CR wireless


transmission system node with randomized balking and underlay transmission.
However, the proposed approach can be generalized to the so-called G/M/1-type
processes, which would require additional effort (in particular, the steady-state solu-
tion would require deriving the eigenvalues numerically). Such a generalized model
can incorporate the features of the so-called Spread Spectrum (CDMA) Cognitive
Radio Networks [4], where the SU utilize orthogonal signature sequences as in
direct sequence spread spectrum wireless communication systems. In such a system
there is a positive probability that multiple secondary users can successfully transmit
their packets without collision (sharing the channels efficiently). Furthermore, it
would be interesting to use the game-theoretic approach where SU are considered
as players, and the strategy could be seen as the pair (α, β) for each player. However,
these generalizations are beyond the scope of this paper, and we leave this for future
research.

Acknowledgments The authors thank Sergey Astafiev and anonymous referees for their sugges-
tions that helped to improve the paper. This work is supported by Russian Foundation for Basic
research, projects No 18-07-00147, 18-07-00156, 18-37-00094, 19-07-00303, 19-57-45022.

References

1. Beuerman, S.L., Coyle, E.J.: State space expansions and the limiting behavior of quasi-birth-
and-death processes. Adv. Appl. Probab. 21(02), 284–314 (1989). https://doi.org/10/cs58tc.
https://www.cambridge.org/core/product/identifier/S0001867800018553/type/journal_article
2. Bini, D.A., Latouche, G., Meini, B.: Solving matrix polynomial equations arising in queueing
problems. Linear Algebra Appl. 340(1), 225–244 (2002). https://doi.org/10.1016/S0024-
3795(01)00426-8
3. Bladt, M., Nielsen, B.F.: Matrix-Exponential Distributions in Applied Probability, Probability
Theory and Stochastic Modelling, vol. 81. Springer, Boston (2017). http://link.springer.com/
10.1007/978-1-4939-7049-0. https://doi.org/10.1007/978-1-4939-7049-0
4. Daoud, S., Haccoun, D., Cardinal, C.: Spread Spectrum-based underlay cognitive radio
wireless networks. In: COCORA 2017: The Seventh International Conference on Advances
in Cognitive Radio, pp. 20–24 (2017)
5. Dudin, A., Lee, M., Dudina, O., Lee, S.: Analysis of priority retrial queue with many
types of customers and servers reservation as a model of cognitive radio system. IEEE
Trans. Commun. 65(1), 186–199 (2016). https://doi.org/10/gfxg7c. http://ieeexplore.ieee.org/
document/7562570/
6. Evans, R.V.: Geometric distribution in some two-dimensional queuing systems. Oper. Res.
15(5), 830–846 (1967). https://doi.org/10.1287/opre.15.5.830
7. Garimella, R.M.: Transient and equilibrium analysis of computer networks: finite memory and
matrix geometric recursions, Ph.D. thesis. Purdue University, West Lafayette (1989). http://
docs.lib.purdue.edu/dissertations/AAI9018828/

8. Garimella, R.M., Rumyantsev, A.: On an exact solution of the rate matrix of Quasi-Birth-Death
process with small number of phases. In: Proceedings: 31st European Conference on Modelling
and Simulation (ECMS 2017), 23rd–26th May 2017, pp. 713–719. Budapest, Hungary (2017).
https://doi.org/10.7148/2017-0713
9. Garimella, R.M., Rumyantsev, A.: On an exact solution of the rate matrix of G/M/1—type
Markov process with small number of phases. J. Parallel Distrib. Comput. 119, 172–178
(2018). https://doi.org/10.1016/j.jpdc.2018.04.013, http://linkinghub.elsevier.com/retrieve/pii/
S074373151830282X
10. Gillent, F., Latouche, G.: Semi-explicit solutions for M/PH/1-like queuing systems. Eur. J.
Oper. Res. 13(2), 151–160 (1983). https://doi.org/10.1016/0377-2217(83)90077-2
11. He, Q.M.: Fundamentals of Matrix-Analytic Methods. Springer, New York (2014)
12. Latouche, G., Ramaswami, V.: Introduction to Matrix Analytic Methods in Stochastic Model-
ing. ASA-SIAM, Philadelphia (1999)
13. Neuts, M.F.: Matrix-Geometric Solutions in Stochastic Models. Johns Hopkins University,
Baltimore (1981)
14. Oklander, B., Sidi, M.: Modeling and analysis of system dynamics and state estimation in
cognitive radio networks. In: 2010 IEEE 21st International Symposium on Personal, Indoor
and Mobile Radio Communications Workshops, pp. 49–53. IEEE, Istanbul (2010). https://doi.
org/10/ctk99w. http://ieeexplore.ieee.org/document/5670521/
15. Paluncic, F., Alfa, A.S., Maharaj, B.T., Tsimba, H.M.: Queueing models for cognitive radio
networks: a survey. IEEE Access 6, 50801–50823 (2018). https://doi.org/10/gfvxsg. https://
ieeexplore.ieee.org/document/8445574/
16. R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for
Statistical Computing, Vienna (2018). https://www.R-project.org/
17. van Leeuwaarden, J., Winands, E.: Quasi-birth-and-death processes with an explicit rate
matrix. Stoch. Model. 22(1), 77–98 (2006). https://doi.org/10.1080/15326340500481747
18. Zhang, J., Coyle, E.J.: Transient analysis of quasi-birth-death processes. Commun. Stat.
Stoch. Model. 5(3), 459–496 (1989). https://doi.org/10.1080/15326348908807119. http://
www.tandfonline.com/doi/abs/10.1080/15326348908807119
19. Zhao, Y., Jin, S., Yue, W.: A novel spectrum access strategy with α-retry policy in cognitive
radio networks: a queueing-based analysis. J. Commun. Networks 16(2), 193–201 (2014).
https://doi.org/10.1109/JCN.2014.000030
20. Zhu, D.B., Wang, H.M., Xu, Y.N.: Performance analysis of CSMA in an unslotted cognitive
radio network under non-saturation condition. In: 2012 Second International Conference on
Instrumentation, Measurement, Computer, Communication and Control, pp. 1122–1126. IEEE,
Harbin (2012). https://doi.org/10/gfxg66. http://ieeexplore.ieee.org/document/6429100/
Applications of Fluid Queues
in Rechargeable Batteries

Shruti Kapoor and S. Dharmaraja

Abstract In this paper, the transient solution of the amount of charge in a


rechargeable battery of finite capacity is obtained. The level of charge in the battery
is governed by different input and output processes and are dependent on the level
of charge in the battery. This model has been already discussed in Jones et al. (Fluid
queue models of battery life. In: IEEE 19th international symposium on modeling,
analysis and simulation of computer and telecommunication systems (MASCOTS),
pp. 278–285, 2011), and the distribution of the hitting time was found numerically.
In this paper, the method chosen is based on probabilistic approach which allows
us to achieve a closed form solution for the distribution of the level of charge in the
battery at any time t. Numerical illustrations are presented to verify the analytical
results.

Keywords Fluid queue · Transient distribution · Recurrence relations · Battery


life time · Markovian queues

1 Introduction

An important issue in the energy-constrained ad-hoc wireless networks is to obtain


ways that increase their lifetime. The communication protocols for these wireless
networks have to be developed such that they are aware of the state of the battery
charge. The stored energy in these batteries is limited and should be effectively
utilized, thereby increasing the battery lifetime. In this paper, we propose a fluid
queue model for the charge in a battery.

S. Kapoor
Department of Mathematics, Jesus and Mary College, University of Delhi, New Delhi, India
S. Dharmaraja ()
Department of Mathematics, IIT Delhi, New Delhi, India
e-mail: [email protected]


The first stochastic models for batteries were developed by Chiasserini and Rao
[2]. They describe two models for the battery of a mobile communication device
which transmits packets. Further extensions were made to improve these models
in Chiasserini and Rao [3, 4].
The remaining paper is organized as follows. Section 2 presents the literature
survey and related work. Section 3 describes the fluid queue model in detail; Sect. 4
presents the transient analysis of the distribution of the battery charge. In Sect. 5 we
numerically illustrate the buffer content distribution and throughput in steady state
and finally, Sect. 5.1 shows the sensitivity analysis, which can help in reducing the
probability of zero charge. Pointers to further research and conclusion are given in
Sect. 6.

2 Literature Survey and Related Work

Fluid queue models have been applied to various real life situations and provide
extensive information. The existing literature shows the diverse fields where fluid
queue models have been applied. In Arunachalam et al. [1] the performance of the
IEEE 802.11 protocol is modeled using the fluid queues. The outage probability
performance of a new relay selection scheme is investigated in Liu [8] for the
energy harvesting relays based on the wireless power transfer. A recent paper by
Tunc and Akar [10] describes a fluid queue model to prolong the lifetime of Internet
of Things (IoT) device with energy harvesting rechargeable batteries. This paper
analyzes the use of fluid queues in rechargeable batteries which are used in digital
cameras, mobile phones, remote sensors, and communication satellites. By applying
fluid models to a rechargeable battery, we study the amount of charge in the battery
at any time t. Chiasserini and Rao [3] model the life of a battery using fluid queues.
Anupam and Dharmaraja [5] developed the fluid queue model for the battery life
of a DRX mechanism in LTE-A networks and the cumulative distribution function
of the battery life is derived. In this paper, our aim is to develop a fluid model to
describe the charge in a battery. By analyzing the fluid model, we hope to improve
the life of the batteries.
Transient solutions for the buffer content distribution help in analyzing the
system at any time. Various methodologies have been studied in literature to obtain
the transient buffer content distribution of fluid queues. Transient analysis for
fluid queue driven by chain sequence BDP with catastrophes was discussed in
Vijayalakshmi and Thangaraj [11]. The closed form expression for the transient
solution of fluid queue model driven by a birth death process is found in Kapoor
and Dharmaraja [7]. For a fluid queue driven by an M/M/1 Queue with disaster
and subsequent repair, the exact stationary solution is obtained in Vijayashree and
Anjuka [12]. In this paper, we analyze the application of fluid queue models in
a battery. We consider the model discussed in Jones et al. [6], and use simple
probability concepts to obtain an exact solution for the amount of charge in a battery
at any time.
Applications of Fluid Queues in Rechargeable Batteries 93

3 Model Description

In today’s world the wireless age is expanding to include not just the smart phones,
tablets, and laptops, but also the cars, homes, offices, and even whole communities.
The ubiquitous battery is why we can carry immense computing power in our
pocket. Thus, in this paper the charge level of a battery, which is subject to random
charging and discharging periods, has been presented. The battery can be charged
to a finite capacity B. The background process is determined by a switch, that is,
an on-off model. Assume that the transition rates of on to off and off to on are λ
and μ, respectively. When the switch is on, the battery gets charged at a rate α and
when the switch is off, the battery is being discharged. The rate at which the battery
gets discharged depends on the level of charge in the battery. The rate of discharge
is either βh if level of charge is above the level V or βl if it is below V .
Let {X(t), t ≥ 0} represent the state of the background process with state space
S = {0, 1}, generator matrix Q = [qij ] and C(t) represent the level of charge in
the battery at any time t. The fluid model is the pair {(X(t), C(t)), t ≥ 0} and the
corresponding distribution function is defined as

Fi (t, x) = P {X(t) = i, C(t) ≤ x}; t ≥ 0, x ≥ 0, i ∈ S. (1)

The net inflow rate is a vector depending on level of battery charge and the state of
the background process, defined as:


⎪ −βl , 0 < x ≤ V, i = 0

α − βl 0 < x ≤ V, i = 1
rk,i = . (2)
⎪ −βh ,
⎪ V < x ≤ B, i = 0

α − βh , V < x ≤ B, i = 1

where k can take value l or h depending on 0 < x ≤ V or V < x ≤ B, respectively.


The system of partial differential equations governing the above fluid queue model
is given by

∂Fi (t, x) ∂Fi (t, x) 


+ rk,i = qij Fj (t, x), i ∈ S. (3)
∂t ∂x
j ∈S

We assume the generator matrix takes the form

−λ λ
Q= .
μ −μ

Without loss of generality, we assume that λ > μ.


94 S. Kapoor and S. Dharmaraja

We consider rates given by Eq. (3) in two intervals (0, V ) and (V , B) and define
the corresponding distribution function be Fil (t, x) and Fih (t, x), respectively. The
system of partial differential equations reduces to

∂F0l (t, x) ∂F l (t, x)


− βl 0 = −λF0l (t, x) + μF1l (t, x)
∂t ∂x
∂F1l (t, x) ∂F l (t, x)
+ (α − βl ) 1 = λF0l (t, x) − μF1l (t, x) (4)
∂t ∂x

∂F0h (t, x) ∂F h (t, x)


− βh 0 = −λF0h (t, x) + μF1h (t, x)
∂t ∂x
∂F1h (t, x) ∂F h (t, x)
+ (α − βh ) 1 = λF0h (t, x) − μF1h (t, x). (5)
∂t ∂x
In Fig. 1, a sample path of the fluid queue model is shown. In charging periods,
the level of charge in the battery increases. Further, it is observed from the figure
that, when the level of charge in the battery reduces to less than the threshold level
V , the rate of discharge increases.
Charge Level

Time

Fig. 1 Sample path of the charge level vs time


Applications of Fluid Queues in Rechargeable Batteries 95

4 Transient Analysis

To simplify calculations, we follow the uniformization method and letting one-step


transition probability matrix P = I + Qμ . Let

Z = {Zn : n = 0, 1, . . .}

be a time-homogeneous discrete time Markov chain (DTMC) with finite state space
S. Let {N(t), t ≥ 0} be a time-homogeneous Poisson process with parameter μ.
Assume that, it is independent of Z. Hence, X(t) = ZN(t ) for t ≥ 0. Given the
number of transitions n in the interval (0, t), we define the probability vector p(t) =
[p0 (t), p1 (t)] as

 (μt)n
p(t) = e−μt πP n
n!
n=0

where pi (t) is the probability that the stochastic process {X(t), t ≥ 0} is in ith
state at time t, π is the probability vector at t = 0, satisfying πQ = 0 and P the
uniformized matrix.
We assume t1 , t2 , . . . , tn be the n transition times. Thus splitting the interval (0, t)
into n+1 subintervals with lengths t1 , t2 −t1 , . . ., t −tn . A net input rate is associated
with each of these intervals, based on the state of the stochastic process in that
interval. For the proposed model, the net input rate vector is r = [r1 , r2 , r3 , r4 ] =
[−βl , α − βl , −βh , α − βh ].
Given n transitions have occurred, let,

ki = the number of intervals associated with net input rate ri .

Here, k = (k1 , k2 , k3 , k4 ) is called a partition of n+1, i.e., ||k|| = k1 +k2 +k3 +k4 =
n + 1. Thus, by condition on the number of n transitions and k partition, we obtain

 
(μt)n
P [C(t) > x] = e−μt G[n, k]M(t, x, n, k)
n!
n=0 ||k||=n+1

 (μt)n 
n
= e−μt G[n, k]M(t, x, n, k) (6)
n!
n=0 k1 =0

 x, n, k) = P [C(t) > x|n transitions and partition k] and G[n, k] =


M(t,
i∈S Gi [n, k]. Here, given n transitions and k partition, Gi [n, k] is the probability
that the state visited after the last transition is i. Suppose that, if i and j are the states
visited after the last (n − 1)th and nth transitions, then k is equal to the previous
partition +1 at the entry corresponds to the net rate associated with the state j .
96 S. Kapoor and S. Dharmaraja

Therefore, the recurrence relations for the model described can be given by
  μ
Gl0 [n, k] = Gl1 [n − 1, k2 − 1] + Gh0 [n − 1, k3 − 1] + Gh1 [n − 1, k4 − 1]
λ+μ
  λ
Gl1 [n, k] = Gl0 [n − 1, k] + Gh0 [n − 1, k3 − 1]
λ+μ
μ
+Gl0 [n − 1, k1 − 1]
λ+μ
  μ
Gh0 [n, k] = Gl1 [n − 1, k2 − 1] + Gh1 [n − 1, k4 − 1]
λ+μ
λ μ μ
Gh1 [n, k] = Gl0 [n − 1, k] + Gl1 [n − 1, k] + Gl0 [n − 1, k1 − 1]
λ+μ λ+μ λ+μ
μ
+Gh0 [n − 1, k3 − 1] .
λ+μ

From the above, the function Gi [n, k] recursively satisfies the initial conditions
(0)
G1 [1, (1, 0)] = π0

Gi [0, (0, 1)] = πi(0) for i ∈ S \ {1}.

Now, in order to find the function M(t, x, n, k), first we assume that n transitions
yield k partition. Let U1 , U2 , . . . , Un be iid random variables each having uniform
distribution in the interval (0, 1). Therefore, U(1), U(2) ,. . . , U(n) be their order
statistics such that U(0) = 0 and U(n+1) = 1. Then, τi , i.e., the distribution of
time of the ith transition has the identical distribution as tU(i) (refer [9]). Thus, we
have

Y1 ≡ tU(1) , Y2 ≡ t (U(2) − U(1) ), . . . , Yn+1 ≡ t (1 − U(n) ).

We note that Yi ’s are exchangeable random variables, hence by rearranging the


intervals, we let first k1 intervals be associated with the rate r1 . Then, next k2
intervals be associated with the rate r2 . Then, k3 intervals be associated with the
rate r3 . Finally, the last k4 intervals be associated with the rate r3 .
Now, C(t) is the cumulative buffer during the interval (0, t). Hence, the required
event {C(t) > x given n transitions, and k partitions } can be given as

{C(t) > x | n transitions, k partitions} = {r1 (Y1 + . . . + Yk1 )


+r2 (Yk1 +1 + . . . + Yk2 )
+r3 (Yk2 +1 + . . . + Yk3 )
+r4 (Yk3 +1 + . . . + Yn+1 ) > x}.
Applications of Fluid Queues in Rechargeable Batteries 97

Substituting values of ri and solving, we obtain

P (C(t) > x | n transitions, k partitions) = P ((−αt)(Uk1 + Uk3 )


+t (βh − βl + α)Uk2
> x + t (βh − α)).

Now, we find obtain M(t, x, n, k). For that, first we need to obtain the distribution
of a linear combination of uniform distributed order statistics on the interval (0, 1).
A solution for this was presented in [13]. Using the result given in [13], we get

 f (ki −1) (ri , k)


M(t, x, n, k) = i

r t >x
(ki − 1)!
i

where fi(ki −1) is the (ki − 1)st derivative of the following function:

(y − x)n
fi (y, k) = ,1 .
j =0 (y − rj )kj
j =i

5 Numerical Illustration

The following graphs illustrate the model numerically. Values for some parameters
have been fixed to obtain the graphs, thereby analyzing the model in transient as
well as steady state. Variation of the measure, average buffer content in steady state
versus threshold is presented graphically. Table 1 describes the parameters and their
values assumed for numerical purpose.

Table 1 List of parameters


Rates Meaning Value
and their values
λ Arrival rate of X(t) 1
μ Departure rate of X(t) 2
α Charge rate 12
βl Discharge rate in regime l 5
βh Discharge rate in regime h 8
B Maximum buffer level 500
98 S. Kapoor and S. Dharmaraja

0.9 x=100
x=200
0.8 x=300

0.7

0.6
P(C(t)>x)

0.5

0.4

0.3

0.2

0.1

0
0 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000
Time (t)

Fig. 2 Complement cumulative buffer content distribution

In Fig. 2, the complement buffer content is plotted against time. It can be seen
from the graph that as time increases the complement buffer content decreases and
will eventually approach 0.
In Fig. 3, the average buffer content in steady state has been plotted against the
threshold (V ). It shows that as the threshold increases the average buffer content
also increases.

5.1 Sensitivity Analysis

In Figs. 4 and 5 the 3 dimensional graph of the probability of system being empty
against the discharge rates βl and βh has been plotted for V = 200 and V = 100.
It can be seen that the graph resembles a paraboloid and thus the probability of
the system being empty can be minimized. Also, with decrease in the value of the
threshold, the minimum probability of the system being empty is further reduced
which is in accordance with the numerical illustration.
Applications of Fluid Queues in Rechargeable Batteries 99

50

45

40
Average Buffer Content

35

30

25

20

15

V = 100
10 V = 80
V = 70
5

0
0 10 20 30 40 50 60 70 80 90 100
Threshold (V)

Fig. 3 Average buffer content vs threshold

1.4

1.2

1
P(empty system)

0.8

0.6

0.4

0.2

0
12
10
12
8 10
6 8
4 6
4
2 2
0 0
E
h E
l

Fig. 4 Probability of the system is empty versus discharge rate for V = 200
100 S. Kapoor and S. Dharmaraja

0.7

0.6

0.5
P(empty system)

0.4

0.3

0.2

0.1

0
12
10
12
8 10
6 8
4 6
4
2 2
0 0
E
h E
l

Fig. 5 Probability of the system is empty versus discharge rate for V = 100

6 Conclusion and Future Work

In this paper, we study a fluid queue model driven by a two state birth death process
in which the net flow rate of fluid into buffer is dependent on the state of birth death
process. The aim of the paper is to determine the amount of charge in a battery at any
time t. Using the fluid queue model, a new methodology for finding the distribution
of the level of charge in the battery at any time t is presented. This can be used
in increasing battery life of a device and reducing energy consumption. In future,
we plan to study the battery life distribution for a device modulated by a general
Markov process.

Acknowledgments Authors are thankful to the editor and two anonymous reviewers for their
valuable suggestions and comments which helped improve the manuscript to great extent and are
grateful for the financial support received from the Department of Telecommunications (DoT),
India.
Applications of Fluid Queues in Rechargeable Batteries 101

References

1. Arunachalam, V., Gupta, V., Dharmaraja, S.: A fluid queue modulated by two independent
birth-death processes. Comput. Math. Appl. 60(8), 2433–2444 (2010)
2. Chiasserini, C.F., Rao, R.R.: Pulsed battery discharge in communication devices. In: Pro-
ceedings of the 5th Annual ACM/IEEE International Conference on Mobile Computing and
Networking, pp. 88–95 (1999)
3. Chiasserini, C.F., Rao, R.R.: A model for battery pulsed discharge with recovery effect. In:
IEEE Wireless Communications and Networking Conference, vol. 2, pp. 636–639 (1999)
4. Chiasserini, C.F., Rao, R.R.: Improving battery performance by using traffic shaping tech-
niques. IEEE J. Sel. Areas Commun. 19(7), 1385–1394 (2001)
5. Gautam, A., Dharmaraja, S.: An analytical model driven by fluid queue for battery life time of
an user equipment in LTE-A networks. Phys. Commun. 30, 213–219 (2018)
6. Jones, G.L., Harrison, P.G., Harder, U., Field, T.: Fluid queue models of battery life. In:
IEEE 19th International Symposium on Modeling, Analysis and Simulation of Computer and
Telecommunication Systems (MASCOTS), pp. 278–285 (2011, July)
7. Kapoor, S., Dharmaraja, S.: On the exact transient solution of fluid queue driven by a birth
death process with specific rational rates and absorption. Opsearch 52, pp. 746–755 (2015)
8. Liu, K.H.: Performance analysis of relay selection for cooperative relays based on wireless
power transfer with finite energy storage. IEEE Trans. Veh. Technol. 65(7), 5110–5121 (2016)
9. Rubino, G., Sericola, B.: Markov Chains and Dependability Theory. Cambridge University
Press, Cambridge (2014)
10. Tunc, C., Akar, N.: Markov fluid queue model of an energy harvesting IoT device with adaptive
sensing. Perform. Eval. 111, 1–16 (2017)
11. Vijayalakshmi, T., Thangaraj, V.: Transient analysis of a fluid queue driven by a chain
sequenced birth and death process with catastrophes. Int. J. Math. Oper. Res. 8(2), 164–184
(2016)
12. Vijayashree, K.V., Anjuka, A.: Exact stationary solution for a fluid queue driven by an M/M/1
queue with disaster and subsequent repair. Int. J. Math. Oper. Res. 15, 92–109 (2019)
13. Weisberg, H.: The distribution of linear combinations of order statistics from the uniform
distribution. Ann. Math. Stat. 42(2), 704–709 (1971)
Analysis of BMAP /R/1 Queues Under
Gated-Limited Service with the Server’s
Single Vacation Policy

Souvik Ghosh, A. D. Banik, and M. L. Chaudhry

Abstract This paper deals with the finite-buffer single server vacation queues with
batch Markovian arrival process (BMAP ). The server follows gated-limited service
discipline, i.e., the server can serve a maximum of L customers out of those that are
waiting at the start of the busy period or all the waiting customers, whichever is
minimum. It has been assumed that the server can take only one vacation, i.e., if no
customers are found at the end of a vacation, the server remains idle until a batch of
customers arrives. The service time and vacation time distributions are considered to
possess rational Laplace–Stieltjes transform. The queue-length distribution at post-
departure, arbitrary, and pre-arrival epochs has been obtained. Various performance
measures like mean queue-length, mean waiting time of an arbitrary customer, and
mean length of busy and idle periods have been derived for this model. Numerical
results have been presented based on the analysis done.

Keywords Queueing · Batch Markovian arrival process · Gated-limited service ·


Single vacation · Roots method

S. Ghosh
Department of Statistics and Operations Research, School of Mathematical Sciences,
Tel Aviv University, Tel Aviv-Yafo, Israel
A. D. Banik ()
School of Basic Science, Indian Institute of Technology Bhubaneswar, Permanent campus, Argul,
Khurda, Odisha, India
e-mail: [email protected]
M. L. Chaudhry
Department of Mathematics and Computer Science, Royal Military College of Canada, Kingston,
ON, Canada
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive 103
licence to Springer Nature Singapore Pte Ltd. 2020
V. C. Joshua et al. (eds.), Applied Probability and Stochastic Processes,
Infosys Science Foundation Series, https://doi.org/10.1007/978-981-15-5951-8_8
104 S. Ghosh et al.

1 Introduction

In today’s connected society we often encounter system with server’s vacation. A


server may go for vacation due to routine maintenance or utilization of its idle time.
Queueing models with server’s vacation have found many applications in today’s
life. Vacation queueing models can be classified based on the number of vacations
taken by the server, i.e., single vacation (SV ) and multiple vacations (MV ) queueing
models. Further, vacation queueing models can be classified depending on the
starting and termination rules of the service time, namely, exhaustive, limited,
gated, exhaustive-limited (E-limited), gated-limited (G-limited), and many more.
For more details on different vacation models, readers are referred to Doshi [7],
Takagi [21], and Tian and Zhang [22]. In the analysis of queueing systems,
generally it is assumed that arrival follows Poisson process. However, modern day’s
correlated arrivals can be described by Markovian arrival process (MAP ) and batch
Markovian arrival process (BMAP ), see Neuts [16] and Lucantoni [14].
Lucantoni et al. [15] considered MAP /G/1 vacation queueing system and
obtain stationary queue-length distributions using matrix-analytic approach. The
MAP /G/1 queueing system under N-policy with and without vacations has been
studied by Kasahara et al. [13]. Alfa [1] considered a discrete-time MAP /P H /1
vacation queueing system under gated time and limited service. In addition to these
infinite-buffer capacity queueing models, Gupta and Sikdar [10] have studied the
finite-buffer MAP /G/1/N queueing system with the server’s single as well as
multiple vacations policy. Vacation queueing models with BMAP arrivals have
also drawn a great attention of the researchers, see, e.g., Banik et al. [3], Saffer
and Telek [18], Vishnevsky et al. [23], and the references therein. Recently, Banik
and Ghosh [2], and Ghosh [9] have studied both the finite- and infinite-buffer
BMAP /R/1/N(∞) multiple vacation queueing system under G-limited service
discipline. In the G-limited service discipline, the server can serve a maximum of
L customers out of those that were waiting at the start of the busy period or all
the waiting customers, whichever is minimum. The queue-length distribution of the
BMAP /R/1/N − SV queueing system under G-limited service discipline at post-
service-completion and post-vacation-termination epochs has been briefly discussed
by Banik and Ghosh [2].
In this paper, we have discussed the detailed computation procedure of the queue-
length distributions at various epochs of the BMAP /R/1 − SV queueing system
under G-limited service discipline. The computation of the queue-length distribu-
tion at post-service-completion or post-vacation-termination epochs, is based on
the determination of the roots of the characteristic equation, see Chaudhry et al.
[5, 19]. Further details on root method may be found in Gupta et al. [11] and
Singh et al. [20]. One may note here that the queue-length distribution at post-
service-completion or post-vacation-termination epochs may also be obtained using
matrix-analytic methods, see Neuts [17]. Further, considering remaining service
time of a customer and remaining vacation time of the server as supplementary
variables, we have described the determination of the queue-length distribution of
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 105

the queueing system at an arbitrary epoch. A detailed discussion on supplementary


variable technique can be found in Hokstad [12] and Choi et al. [6].
Organization of the paper is as follows. Section 2 describes the model. In Sect. 3,
we have analyzed the queue-length distribution at post-vacation-termination, post-
service-completion, and post-departure epochs. Queue-length distributions at arbi-
trary and pre-arrival epochs have been studied in Sect. 4. Section 5 deals with
various performance measures of the model. The computation procedure of the
queue-length distribution at different epochs has been discussed in Sect. 6. Finally,
in Sect. 7, numerical results have been presented based on the analytical expression
obtained in the previous sections.

2 Model Description

Let us consider a single server queueing system with infinite-buffer capacity


where the customers are arriving in batches following a BMAP . BMAP is a
generalization of the batch Poisson process where arrivals are governed by an
underlying m-state Markov chain. Let Na (t) denote the number of arrivals in (0, t]
and J (t) be the state of the underlying Markov chain at time t with state space
{j : 1 ≤ j ≤ m}. Then {Na (t), J (t)} is a two-dimensional Markov process of
BMAP with state space {(k, j ) : k ≥ 0, 1 ≤ j ≤ m}. The arrival process
is characterized by the matrices D k (k ≥ 0) of order m × m. The (i, j )-th
(1 ≤ i, j ≤ m) entry of D 0 denotes the state transition rate from state i to state
j in the underlying Markov chain without an arrival and the (i, j )-th entry of D k
(k ≥ 1) represents the state transition rate from state i to state j in the underlying
Markov chain with an arrival of batch size k. Theinfinitesimal generator of the
underlying Markov chain {J (t)} is given by D = ∞ k=0 D k . If -
π is the stationary
probability vector of the BMAP , then - π D = 0 and - π em = 1, where em is a column
vector of order m with all its entries equal to 1. Throughout the paper the subscript
m is not used and the vector is presented as e. However, when e’s dimension is other
than m it has been mentioned as its subscript. The average arrival rate ∞ λ and average
batch arrival
 rate λ b of the stationary BMAP are given by λ = -
π k=1 kD k e and
λb = - π ∞ k=1 D k e, respectively. Let P (n, t) be a square matrix of order m whose
(i, j )-th element is the conditional probability defined as

Pi,j (n, t) = P {Na (t) = n, J (t) = j | Na (0) = 0, J (0) = i},


1 ≤ i, j ≤ m, n ≥ 0, t ≥ 0.
∞
∞ |z| ≤ 1 nand t ≥ 0, we define D(z) = D(z)t k=0 D k z and P (z, t) =
For k

n=0 P (n, t)z . It may be shown that P (z, t) = e . For more details on
BMAP , see Lucantoni [14].
106 S. Ghosh et al.

In this paper, we have assumed that the server is entitled to take a single vacation
only and the service discipline is G-limited with a limit, say L. In a single vacation
queueing system, the server goes for a vacation whenever the system becomes
empty and after completing the vacation the server remains dormant until a batch
arrives in the system. In a G-limited service system, customers who have arrived
during the vacation period are considered for service and those customers who are
arriving during the busy period are excluded from service in the current busy period.
These newly arrived customers have to wait till the next vacation period terminates.
Further, in a queueing system under G-limited service with service limit L, the
server serves a maximum of L customers in a busy period even if he finds more
than L customers in the queue after termination of the vacation period. However,
the server serves all the waiting customers in the busy period, if he finds not more
than L number of waiting customers in the queue after termination of the vacation
period. One may note that for large L, i.e., L → ∞, gated-limited systems can
be expressed as exhaustive gated systems, whereas L = 1 presents pure limited
systems, see [22].
Let B, B(x), b(x), and b∗ (s) (R(s) ≥ 0) be the random variable (RV),
distribution function (DF), probability density function (pdf), and Laplace–Stieltjes
transformation (LST) of the service-time distribution of a customer, respectively.
Similarly, let V , V (x), v(x), and v ∗ (s) (R(s) ≥ 0) be the respective RV, DF, pdf,
and LST of the vacation time distribution of the server. So the expected service
time is given by E(B) = −b∗(1)(0) and the expected vacation time is given by
E(V ) = −v ∗(1) (0), where f ∗(i) (η) is the i-th (i ≥ 1) derivative of f ∗ (x) at x = η.
Hence the traffic intensity of the queueing system is given by ρ = λE(B). Any
arbitrary cycle time for this queueing system may be considered as a busy period
with L services followed by the vacation period of the server. Let B - denote the RV
for the whole cycle time duration. Hence, the expected length of the cycle time is
given by E(B) - = LE(B) + E(V ). Let us denote - ρ = λE(B),- then the stability
criterion for this gated-limited queueing system will be ρ - < L, which is equivalent
λE(V )
to L(1−ρ) < 1.

3 Analysis of Queue-Length Distribution


at Post-Vacation-Termination and
Post-Service-Completion Epochs

In this section, BMAP /G/1 − SV queueing system under G-limited service dis-
cipline has been analyzed at post-vacation-termination and post-service-completion
epochs. The states of the system at an arbitrary time epoch t has been presented by
the following random variables:
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 107



⎪ 0, the server is dormant,


⎨1, the server is on vacation,
ξ(t) '

⎪ (r, l), the server is serving the r-th customer during a busy period



started with l customers in the queue, 1 ≤ r ≤ L, l ≥ r.
N(t) ' number of customers present in the queue.
J (t) ' state of the underlying Markov chain.
.
B(t) ' remaining service time of the customer in service.
.(t)
V ' remaining vacation time of the server.

Let us observe the system at post-service-completion or post-vacation-termination


epochs which are taken as embedded points. Let ti (i = 0, 1, 2, · · · ) be the time
epoch at which either a service is completed or a vacation terminates. The state of the
system at the time epoch ti (i = 0, 1, 2, · · · ) is defined as {N + (ti ), ξ + (ti ), J + (ti )}.
Further, just after a service-completion epoch or a vacation-termination epoch, we
define the following steady-state probabilities:

(r,l)+
πn,j = lim P {N + (ti ) = n, ξ + (ti ) = (r, l), J + (ti ) = j },
i→∞

1 ≤ r ≤ L, l ≥ r, n ≥ l − r, 1 ≤ j ≤ m,
+
ωn,j = lim P {N + (ti ) = n, ξ + (ti ) = 1, J + (ti ) = j }, 1 ≤ j ≤ m, n ≥ 0.
i→∞

Let us define the row vectors π (r,l)+


n and ω+n , whose j -th (1 ≤ j ≤ m) component
(r,l)+ +
is πn,j and ωn,j , respectively. We also define an m × m order An (V n ) matrix
whose (i, j )-th entry represents the joint probability that n (≥ 0) customers arrive
during a service (vacation) period and at the end of the service (vacation) period,
the phase of the underlying Markov chain is j which was i at the beginning of the
service (vacation) period. Now, from the definition of P (n, t), matrices An and V n
can be obtained as
∞ ∞
An = P (n, t) dB(t) and Vn = P (n, t) dV (t).
0 0

Let us denote h+ n,j as the joint probability that there are n (≥ 0) customers in the
system just after the end of a busy period ( which also includes service-completion
instants) with state of the arrival process being j (1 ≤ j ≤ m). Further, we may
+
denote h+ n as a row vector whose j -th (1 ≤ j ≤ m) component is given by hn,j .
108 S. Ghosh et al.

Hence, from the definition of a gated-limited service discipline, we have


L 
L
h+
0,j =
(l,l)+
π0,j ⇒ h+
0 = π (l,l)+
0 , (1a)
l=1 l=1


L 
L+n 
L 
L+n
h+ πn(L,l)+ ⇒ h+
(l,l)+
n,j = πn,j + n = π (l,l)+
n + π (L,l)+
n , n ≥ 1.
l=1 l=L+1 l=1 l=L+1
(1b)

Considering the two consecutive embedded Markov points, it may be shown that
the probability vectors ω+
(r,l)+
n (n ≥ 0) and π n (1 ≤ r ≤ L, l ≥ r, n ≥ l − r, 1 ≤
j ≤ m) satisfy the following set of equations:


n
ω+
n = h+
k V n−k , n ≥ 0, (2a)
k=0

π (1,l)+
n = (ω+ +
0 D l + ωl )An−l+1 , l ≥ 1, n ≥ l − 1, (2b)

n+1
(r−1,l)+
π (r,l)+
n = πk An−k+1 , 2 ≤ r ≤ L, l ≥ r, n ≥ l − r, (2c)
k=l−r+1

where D l is the phase transition matrix of the underlying Markov process during the
idle time which terminates with the arrival of a batch of size l and can be computed
as D l = (−D 0 )−1 D l . Let us denote π n as a row vector whose j -th (1 ≤ j ≤ m)
(r)+

entry gives the joint probability that there are n customers in the system immediately
after the r-th (1 ≤ r ≤ L) a service completion during a busy period and the state of
the underlying Markov chain is j . Then the probability vector at r-th post-service-
completion epoch can be given as


n+r
π (r)+
n = π (r,l)+
n , 1 ≤ r ≤ L. (3)
l=r

For |z| ≤ 1, we define the vector-generating functions (V GF s) H + (z) =


∞ + n ∞ (r)+ (z) = ∞ π (r)+ zn , Π + (z) =
hn z , W + (z) = + n
n=0 ωn z , Π n
n=0
∞  L (r)+ n  ∞ n=0

n=0 π
r=1 n z , A(z) = A
n=0 n z n , and V (z) = n
n=0 n z . Hence, using
V
the properties of P (n, t), it can be shown that

∞ ∞
A(z) = P (z, t) dB(t) and V (z) = P (z, t) dV (t).
0 0
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 109

Lemma 1 The V GF of the probability vectors at r-th (1 ≤ r ≤ L) post-service-


completion epoch can be given as

1  
r−1 
Π (r)+
(z) = r ω+
0 (−D 0 ) −1
D(z) − D l zl
z
l=0

 
r−1   r
+ + l
+ W (z) − ωl z A(z) . (4)
l=0

Proof Using the expression of the probability vectors at r-th (1 ≤ r ≤ L) post-


service-completion epoch and the definition of the V GF , one may compute

 ∞ 
 n+r
Π (r)+ (z) = π (r)+
n zn = π (r,l)+
n zn . (5)
n=0 n=0 l=r

Hence, for r = 1, Eq. (5) can be rewritten as

∞ 
 n+1 ∞ 
 n+1
Π (1)+ (z) = π (1,l)+
n zn = (ω+ +
0 D l + ωl )An−l+1 z
n

n=0 l=1 n=0 l=1

1 
∞ ∞
  

= ω+
0 Dl z +
l
ω+
l z
l
An−l+1 zn−l+1
z
l=1 l=1 n=l−1

1 +    
= ω0 (−D 0 )−1 D(z) − D 0 + W + (z) − ω+
0 A(z).
z

Similarly, for r = 2, Eq. (5) can be rewritten as

∞ 
 n+2 ∞ 
 n+2 
n+1
Π (2)+
(z) = π (2,l)+
n zn = π (1,l)+
k An−k+1 zn
n=0 l=2 n=0 l=2 k=l−1
∞ 
 n+2 
n+1
= (ω+ +
0 D l + ωl )Ak−l+1 An−k+1 z
n

n=0 l=2 k=l−1


∞ ∞ ∞
1  + + l
 
= (ω 0 D l + ω l )z Ak−l+1 z k−l+1
An−k+1 zn−k+1
z2
l=2 k=l−1 n=k−1

1    1
+ −1
= ω 0 (−D 0 ) D(z) − D l z l
z2
l=0

 
1   2
+ W + (z) − ω+
l z l
A(z) .
l=0
110 S. Ghosh et al.

Again, for r = 3, Eq. (5) can be rewritten as

∞ 
 n+3 ∞ 
 n+3 
n+1
Π (3)+ (z) = π (3,l)+
n zn = π (2,l)+
k An−k+1 zn
n=0 l=3 n=0 l=3 k=l−2
∞ 
 n+3 
n+1 
k+1
(1,l)+
= πj Ak−j +1 An−k+1 zn
n=0 l=3 k=l−2 j =l−1

∞ 
 n+3 
n+1 
k+1
= (ω+ +
0 D l + ωl )Aj −l+1 Ak−j +1 An−k+1 z
n

n=0 l=3 k=l−2 j =l−1


∞ ∞
1  + + l

= (ω D
0 l + ω l )z Aj −l+1 zj −l+1
z3
l=3 j =l−1

 ∞

× Ak−j +1 zk−j +1 An−k+1 zn−k+1
k=j −1 n=k−1

1   
2
+ −1
= ω 0 (−D 0 ) D(z) − D l z l
z3
l=0

 
2   3
+ + l
+ W (z) − ωl z A(z) .
l=0

For 4 ≤ r ≤ L, proceeding similarly as above, i.e., using Eq. (2) successively in


Eq. (5), one may obtain the following result:
 
1  
r−1   
r−1 
Π (r)+
(z) = r ω+
0 (−D 0 )
−1
D(z) − +
D l z + W (z) −
l + l
ωl z
z
l=0 l=0
 r
× A(z) , 1 ≤ r ≤ L. (6)

Lemma 2 The V GF of the probability vectors at post-vacation-termination epoch


can be given as

W + (z) = H + (z)V (z). (7)

Proof The result follows directly from Eq. (2a) and the definition of the V GF .
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 111

Lemma 3 The V GF of the busy period completion epoch probability vectors may
be expressed as


L−1   l
H + (z) = ω+ +
0 D l + ωl A(z) + Π (L)+ (z). (8)
l=1

Proof Using Eq. (1b) and the definition of the V GF , we have



 ∞

+
H + (z) = h+
n z = h0 +
n
h+
nz
n

n=0 n=1

* L +

L   
L+n
= π (l,l)+
0 + π (l,l)+
n + π (L,l)+
n zn
l=1 n=1 l=1 l=L+1
∞ 
 L ∞ L+n
 
= π (l,l)+
n zn + π (L,l)+
n zn
n=0 l=1 n=1 l=L+1
∞ L−1
  ∞
 ∞ L+n
 
= π (l,l)+
n zn + π (L,L)+
L zn + π (L,l)+
n zn
n=0 l=1 n=0 n=1 l=L+1
∞ L−1
  ∞ L+n
 
= π (l,l)+
n zn + π (L,l)+
n zn
n=0 l=1 n=0 l=L


L−1 ∞ ∞

= π (l,l)+
n zn + π (L)+
n zn . (9)
l=1 n=0 n=0

Now, we may compute



 ∞

π (1,1)+
n zn = (ω+ + + +
0 D 1 + ω1 )An z = (ω0 D 1 + ω1 )A(z).
n

n=0 n=0

Similarly, one may compute


 ∞ 
 n+1
π (2,2)+
n zn = π (1,2)+
k An−k+1 zn
n=0 n=0 k=1
∞ 
 n+1
= (ω+ +
0 D 2 + ω2 )Ak−1 An−k+1 z
n

n=0 k=1
112 S. Ghosh et al.


 ∞

= (ω+ +
0 D 2 + ω2 ) Ak−1 zk−1 An−k+1 zn−k+1
k=1 n=0
 2
= (ω+
0 D 2 + ω +
2 ) A(z) .

Hence, proceeding similarly, for l ≥ 1, one may express



  l
π (l,l)+
n zn = (ω+ +
0 D l + ωl ) A(z) . (10)
n=0

Thereafter, Eq. (8) directly follows from Eqs. (9) and (10).
Theorem 1 The V GF of ω+
n ’s (n ≥ 0) satisfy the following equation:

  
L−1 
W + (z) zL I m − (A(z))L V (z) = ω+
0 (−D 0 ) −1
D(z) − D l z l

l=0


L−1 
− ω+
l z l
(A(z))L
l=0


L−1  l
+ ω+ +
0 D l + ωl A(z) zL V (z),
l=1
(11)

where I m is the identity matrix of order m.


Proof The Theorem directly follows from Lemmas 1–3.
It may be noted from Theorem 1, that the V GF of ω+ n s’ (n ≥ 0) is completely
dependent on L unknown vectors ω+ n s’ (0 ≤ n ≤ L − 1) which consists of mL
+
unknown joint probabilities, namely, ωn,j as 1 ≤ j ≤ m.

Now, we may define p + n as a row vector of order m whose j -th (1 ≤ j ≤ m)


element is the joint probability that there are n (≥ 0) customers in the queue and
phase of the arrival process +
+
∞ is +j at departure epoch of a customer. Since p n is
proportional to π n and n=0 pn e = 1, we may compute

π+
p+
n =
n
. (12)

∞ 
L
(r)+
πn e
n=0 r=1
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 113

4 Analysis of Queue-Length Distribution at Arbitrary


and Pre-arrival Epochs

(r,l)
Let πn,j (x, t) denote the joint pdf of having n (≥ 0) number of customers in the
queue at time t with state of the underlying Markov process is j (1 ≤ j ≤ m) when
the server is serving the r-th customer in a busy period which was started with l
waiting customers in the queue and the remaining service time is x. Similarly, we
denote ωn,j (x, t) as the joint pdf of having n (≥ 0) number of customers in the
queue at time t with state of the underlying Markov process is j (1 ≤ j ≤ m)
when the server is on vacation and the remaining vacation time is x. Hence, using
the random variables, we may write

(r,l)
πn,j . ≤ x + Δx, ξ(t) = (r, l)},
(x, t)Δx = P {N(t) = n, J (t) = j, x ≤ B(t)

1 ≤ r ≤ L, l ≥ r, n ≥ l − r, 1 ≤ j ≤ m, x ≥ 0,
.(t) ≤ x + Δx, ξ (t) = 1},
ωn,j (x, t)Δx = P {N(t) = n, J (t) = j, x ≤ V
1 ≤ j ≤ m, x ≥ 0.

(r,l)
Hence, in steady-state, the above pdfs’ can be rewritten as πn,j (x) and ωn,j (x).
Moreover, we denote ν0,j (t) as the probability that the server is dormant at the time
epoch t and the phase of the underlying Markov process is j (1 ≤ j ≤ m), i.e.,

ν0,j (t) = P {N(t) = 0, J (t) = j, ξ (t) = 0}, 1 ≤ j ≤ m.

Hence, in steady-state, the above probability can be rewritten as ν0,j . Further, we


define the row vectors π (r,l)
n (x), ωn (x), and ν 0 whose j -th (1 ≤ j ≤ m) components
(r,l)
are πn,j (x), ωn,j (x), and ν0,j , respectively. Further, in steady-state, we have

ν 0 = ω0 (0)[−D 0 ]−1 . (13)

Now, relating the state of the system at two consecutive time epochs t and (t + Δt)
and using probabilistic arguments, we may obtain a set of partial differential
equations for each phase j (1 ≤ j ≤ m). Thus, in steady-state, we have the
following set of equations:

d (1,l) (1,l)
− π (x) = π l−1 (x)D 0 + ωl (0)b(x) + ν 0 D l b(x), l ≥ 1, (14a)
dx l−1
d (1,l) 
n
(1,l)
− π (x) = π k (x)D n−k , l ≥ 1, n ≥ l, (14b)
dx n
k=l−1
114 S. Ghosh et al.

d (r,l) 
n
− π n (x) = π (r,l) (r−1,l)
k (x)D n−k + π n+1 (0)b(x),
dx
k=l−r

2 ≤ r ≤ L, l ≥ r, n ≥ l − r, (14c)

d  (l,l) L
− ω0 (x) = ω0 (x)D 0 + π 0 (0)v(x), (14d)
dx
l=1

d 
n 
L 
L+n
− ωn (x) = ωk (x)D n−k + n (0)v(x) +
π (l,l) π (L,l)
n (0)v(x), n ≥ 1,
dx
k=0 l=1 l=L+1
(14e)

where π (r,l)
n (0) and ωn (0) are the respective service completion rate of customers
and vacation termination rate of the server. Now, for R(s) ≥ 0, we define the LST
of π (r,l)
n (x) and ωn (x) as
 s  ∞
π (r,l)∗
n (s) = e−sx π (r,l)
n (x) dx and ω∗n (s) = e−sx ωn (x) dx,
0 0

so that
 ∞  ∞
π (r,l)
n = π (r,l)∗
n (0) = π (r,l)
n (x) dx and ωn = ω∗n (0) = ωn (x) dx.
0 0

(r,l)
We may define π n as a row vector whose j -th (1 ≤ j ≤ m) component represents
the joint probability of having n customers in the queue when the state of the arrival
process is j and the server is serving the r-th customer in a busy period which was
started with l waiting customers in the queue. Whereas ωn may be defined as a row
vector whose j -th (1 ≤ j ≤ m) component represents the joint probability that there
are n customers in the queue when the arrival process is in state j and the server is on
(s), ω∗n (s), π n , and ωn have been
(r,l)∗ (r,l)
vacation. Let the V GF of π r,l
n (0), ωn (0), π n
∗ ∗
denoted by Π(z, 0), W (z, 0), Π (z, s), W (z, s), Π(z), and W (z), respectively.
From the definition of ωn and π n , one may observe that W (z) = W ∗ (z, 0) and
Π(z) = Π ∗ (z, 0).
Now, multiplying equations (14a)–(14e) by e−sx (R(s) ≥ 0) and then integrating
w.r.t.x over 0 to ∞, we get

π l−1 (0) − sπ l−1 (s) = π l−1 (s)D 0 + ωl (0)b∗ (s) + ν 0 D l b∗ (s),


(1,l) (1,l)∗ (1,l)∗
l ≥ 1,

(15a)

n

n (0) − sπ n
π (1,l) (s) = l ≥ 1, n ≥ l,
(1,l)∗
π (1,l)∗
n (s)D n−k , (15b)
k=l−1
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 115


n
(0)b∗(s),
(r−1,l)
n (0) − sπ n
π (r,l) (s) = (s)D n−k + π n+1
(r,l)∗
π (r,l)∗
n
k=l−r

2 ≤ r ≤ L, l ≥ r, n ≥ l − r, (15c)

L
ω0 (0) − sω∗0 (s) = ω∗0 (s)D 0 + π (l,l) ∗
0 (0)v (s), (15d)
l=1


n 
L
ωn (0) − sω ∗n (s) = ω∗k (s)D n−k + π (l,l) ∗
n (0)v (s)
k=0 l=1


L+n
+ π (L,l)
n (0)v ∗ (s), n ≥ 1. (15e)
l=L+1

Hence, setting s = 0 in Eq. (15a)–(15e) and noting that b ∗ (0) = 1 and v ∗ (0) = 1,
we may obtain

(1,l) (1,l)
π l−1 (0) = π l−1 D 0 + ωl (0) + ν 0 D l , l ≥ 1, (16a)

n
π n(1,l) (0) = π n(1,l)D n−k , l ≥ 1, n ≥ l, (16b)
k=l−1


n
(r−1,l)
n (0) =
π (r,l) n D n−k + π n+1 (0),
π (r,l) 2 ≤ r ≤ L, l ≥ r, n ≥ l − r,
k=l−r
(16c)

L
ω0 (0) = ω0 D 0 + π (l,l)
0 , (16d)
l=1


n 
L
(l,l)

L+n
ωn (0) = ωk D n−k + π0 + π (L,l)
n (0), n ≥ 1. (16e)
k=0 l=1 l=L+1

Now, we may define pn as a row vector whose j -th (1 ≤ j ≤ m) entry denotes


the joint probability of having n (≥ 0) customer in the queue at an arbitrary time
and the state of the arrival process is j . Moreover, we may denote the V GF of pn
by P (z). It is clear from the context that P (z) = W (z) + Π(z) + ν 0 .
Lemma 4 The V GF W (z) satisfy the following equation:

 ∞
L  ∞
 ∞

W (z, 0) − W (z)D(z) = π (l,l)
n (0)z
n
+ π (L,l)
n (0)zn . (17)
l=1 n=0 l=L+1 n=l−L
116 S. Ghosh et al.

Proof Multiplying equation (16e) by zn and summing over n (≥1) along with
Eq. (16d) we have


 ∞ 
 n 
L ∞ 
 L
(l,l)
ωn (0)zn = ω 0 D 0 + ωk D n−k zn + π 0 (0) + π (l,l)
n (0)z
n

n=0 n=1 k=0 l=1 n=1 l=1


∞ L+n
  ∞ 
 n ∞ 
 L
+ π (L,l)
n (0)zn = ωk D n−k zn + π (l,l)
n (0)z
n

n=1 l=L+1 n=0 k=0 n=0 l=1


∞ L+n
 
+ π (L,l)
n (0)zn . (18)
n=1 l=L+1

Using the definition of W (z, 0) and changing the order of summation in the first
term of the right-hand side of Eq. (18), we can get


 ∞
  ∞
L  ∞
 ∞

W (z, 0) = ω k zk D n−k zn−k + n (0)z +
π (l,l) n
π (L,l)
n (0)zn .
k=0 n=k l=1 n=0 l=L+1 n=l−L
(19)

Hence using the definition of W (z) and D(z), Eq. (17) directly follows from
Eq. (19).
Lemma 5 The (V GF ) Π(z) may be computed as

1     −1
Π(z) = Π(z, 0) z − 1 − W (z) − ν 0 D(z) D(z) . (20)
z

Proof Multiplying equation (15a) by zl−1 and Eqs. (15b)–(15c) by zn and then
summing them over n (≥0), r (≤L) and l (≥r), one can get

∞ 

* ∞ 

+

L  
L 
π (r,l)
n (0)z
n
=s π (r,l)∗
n (s)zn
r=1 l=r n=l−r r=1 l=r n=l−r

 ∞ 
L  ∞ 
n
(r,l)∗
+ πk (s)D n−k zn
r=1 l=r n=l−r k=l−r

 ∞
L  ∞

n−1 ∗
+ π (r,l)
n (0)z b (s)
r=1 l=r+1 n=l−r


+ (ωl (0) + ν 0 D l )zl−1 b∗ (s). (21)
l=1
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 117

Now, changing the order of summation and using the definitions of Π(z, 0) and
Π ∗ (z, s), we can get
* L ∞ ∞ + ∞
  

Π(z, 0) = sΠ (z, s) + π (r,l)∗
k (s)zk D n−k zn−k
r=1 l=r k=l−r n=k

* L ∞ ∞
  
+ (ωl+1 (0) + ν 0 D l+1 )zl b∗ (s) + π (r,l)
n (0)
l=0 r=1 l=r n=l−r
+
 ∞
L  ∞
 ∞
 
− n (0) +
π (r,r) π (L,l)
n (0) zn−1 b∗ (s). (22)
r=1 n=0 l=L+1 n=l−L

Hence using Lemma 4 and the definition of Π ∗ (z, s), D(z), and W (z, 0), we have

1
Π(z, 0) = sΠ ∗ (z, s) + Π ∗ (z, s)D(z) + W (z, 0) − ω0 (0)
z
  L ∞ ∞
1    (r,l)
+ ν 0 D(z) − D 0 b∗ (s) + π n (0)zn b∗ (s)
z
r=1 l=r n=l−r

1
− W (z, 0) − W (z)D(z) b ∗ (s). (23)
z

Thereafter, setting s = 0 in Eq. (23), some simple algebraic calculation along with
Eq. (13) leads to Eq. (20).
Theorem 2 The relation among the V GF s at post-vacation (W + (z)), post-service
(Π + (z)), and arbitrary epoch probabilities (P (z)) is given by

1   1
P (z) = (z − 1)σ Π + (z) + W + (z) − Υ + (z) [D(z)]−1 + (z + 1)ν 0 , (24)
z z

where

 ∞ 
L  ∞ ∞

σ = n (0)e +
π (r,l) ωn (0)e,
r=1 l=r n=l−r n=0

 ∞
L  ∞
 ∞

Υ + (z) = π (l,l)+
n zn + π (L,l)+
n zn .
l=1 n=0 l=L+1 n=l−L

Proof Using Lemma 5 and the relation among P (z), Π(z), and W (z), we have

1 1
P (z) = (z − 1)[Π(z, 0) + W (z)D(z)][D(z)]−1 + (z + 1)ν 0 . (25)
z z
118 S. Ghosh et al.

and ωn (0) = σ ω +
(r,l) (r,l)+
One may note that π n (0) = σ π n n , where σ is
L ∞  ∞ (r,l)+
a 
proportional constant. Now, using the fact that r=1 l=r n=l−r π n e
∞ +
+ n=o ωn e = 1, we have

(r,l)
π n (0)
π (r,l)+
n = , 1 ≤ r ≤ L, l ≥ r, n ≥ l − r, (26a)
σ
ωn (0)
ω+
n = , n ≥ 0. (26b)
σ

Hence, using the definitions of Π + (z), W + (z), Π(z, 0), and W (z, 0), from
Lemma 4, we have
* ∞ ∞ ∞
+

L   
W (z)D(z) = σ W + (z) − π (l,l)+
n zn − π (L,l)+
n zn . (27)
l=1 n=0 l=L+1 n=l−L

L ∞ ∞ ∞
Now, denoting Υ + (z) =
(l,l)+ n (L,l)+ n
l=1 n=0 πn z + l=L+1 n=l−L π n z , we
may express
 
W (z)D(z) = σ W + (z) − Υ + (z) . (28)

Hence Eqs. (25) and (28) yield the relation among P (z), Π + (z), and W + (z), i.e.,
Eq. (24).
Now, we define p −n as a row vector whose j -th (1 ≤ j ≤ m) component is given

by pn,j which represents the joint probability that a batch arrival finds n (≥ 0)
customers in the queue and the arrival process is in state j . Then p−
n is given by
∞
pn k=1 D k
p−
n = , n ≥ 0. (29)
λb

Corollary 1 The mean number of entrances to the vacation state per unit of time is
equal to the mean number of departure from the vacation state per unit of time.

 ∞
L  ∞
 ∞
 ∞

n (0)e +
π (l,l) π (L,l)
n (0)e = ωn (0)e. (30)
l=1 n=0 l=L+1 n=l−L n=1

Proof Let us set z = 1 in Eq. (17) and then multiply these equations by e. Hence,
the result directly follows from the fact that W (1)D(1)e = 0.
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 119

Corollary 2 The traffic intensity (ρ) of this queueing system satisfies the following
equations:
* ∞ 

+ ∞ 


L  
L 
E(B) π (r,l)
n (0)e = n e = ρ,
π (r,l) (31a)
r=1 l=r n=l−r r=1 l=r n=l−r
* ∞
+ ∞
 
E(V ) ωn (0)e = ωn e = 1 − ρ − ν 0 e. (31b)
n=1 n=0

Proof Let us differentiate Eqs. (15a)–(15c) with respect to s and then multiply these
equations by e after setting s = 0. Now adding these multiplied equations and
using Corollary 1 along with the facts that De = 0 and ν 0 [−D 0 ] = ω0 (0), one
may formulate Eq. (31a). Similarly, from Eqs. (15d)–(15e), one may obtain (31b).
 ∞ ∞ (r,l)
Here one may note that L r=1 l=r n=l−r π n (0)e denotes the mean number of
service completion
 in unit of time and multiplying this by E(B) will give ρ. On the
other hand, ∞ n=0 ωn (0)e represents the mean number of vacation termination per
unit of time and multiplying this by E(V ) will give (1 − ρ − ν 0 e).
Remark 1 Using Eqs. (26a) in Corollary 2, one may compute
ρ
σ =  ∞ ∞ . (32)
L (r,l)+
E(B) r=1 l=r n=l−r πn e

5 Performance Measures

Various performance measures of a queueing system are often required for studying
the behavior of the queueing system in detail. This section deals with the determi-
nation of different performance measures related to the model.
Mean Queue-Length As the state probabilities at post-departure, arbitrary, and pre-
arrival epochs are known, the corresponding mean queue-lengths can be easily
obtained. For example, the mean queue-length can be obtained from the steady-state
queue-length distribution at arbitrary epochs. The average number of customers in
the queue can be given as


Lq = npn e. (33)
n=1

Similarly, the mean number


 of customers when the server is busy and on vacation
can be given by Lqb = ∞ n=1 nπ n e and Lqv =

n=1 nω n e, respectively.
120 S. Ghosh et al.

Mean Waiting Time Applying Little’s law, the mean waiting time of an arbitrary
customer in the queue in steady-state may be computed as

Lq
Wq = . (34)
λ
Expected Length of a Busy and Idle Period Let the mean length of a busy period,
i.e., the average time duration the server is busy to serve customers in a busy period
is denoted by θb . Further, we define θi as the mean idle period, i.e., the average time
period the server is on vacation or the server is dormant before starting a new busy
period. From the definition of the carried load ρ (the fraction of time that the server
is busy), one may have
θb ρ
= . (35)
θi 1−ρ
The mean number of service completion per unit of time is given by
L ∞ ∞ (r,l)
r=1 l=r n=l−r π n (0)e  and  the mean number of busy period completion 
∞ L (l,l) ∞ ∞ (L,l)
per unit of time is given by n=0 l=1 π n (0)e + n=1 l=L+1 π n (0)e .
Hence, the mean number of customers served in a busy period can be calculated as
L ∞ ∞ (r,l)
r=1 l=r n=l−r π n (0)e
∞ L (l,l)  ∞  ∞ (L,l)
. Therefore, the mean length of
n=0 l=1 π n (0)e + n=1 l=L+1 π n (0)e
a busy period is given by
 
L ∞ ∞ (r,l)+
E(B) r=1 l=r n=l−r π n e
θ b = ∞ L  ∞ . (36)
n=0 l=1 π n
(l,l)+
e+ ∞ n=1 l=L+1 π n
(L,l)+
e

Now, using Eqs. (36) and (35), we may obtain


 
$ % L ∞ ∞ (r,l) +
1−ρ E(B) r=1 l=r n=l−r π n e
θi = · . (37)
∞ L (l,l) +  ∞ (L,l) +
ρ
n=0 π l=1 e+ ∞
n n=1 π l=L+1 e n

Hence, using the probabilities obtained in Sect. 3, one may compute θb and θi from
Eqs. (36) and (37), respectively.

6 Computation Procedure of Queue-Length Distribution


at Different Epochs

In this section we have demonstrated how to compute queue-length distribution at


different epochs, such as post-vacation-termination, post-service-completion, post-
departure, arbitrary epochs, and pre-arrival epochs.
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 121

In order to compute all the probability vectors ω+ n (n ≥ 0), it is needed


to calculate all the unknown joint probabilities involved in Eq. (11). Singh et
al. [19] have used roots method for determination of the probability vectors at
the post-departure epochs of MAP /R (a,b) /1 queueing system. For computation
of the unknown joint probabilities involved in Eq. (11), we have used a similar
kind of technique as described in [19]. We have considered service and vacation
distributions having rational LST because distributions with rational LST cover a
wide range of distributions that are applicable in queueing theory, see Botta et al.
[4]. This implies that the LST of those distributions can be expressed in the form
P (s)
R(s) = Q(s) , where deg(P (s)) ≤ deg(Q(s)). For example, considering matrix-
exponential service-time distribution, the derivation of A(z) has been given by
Singh et al. [19]. Following similar arguments for phase-type (P H -type) service
distribution with representation (β, T ), where β is a row vector and T is a square
matrix of same order, say ζ , one can derive

A(z) = (I ζ ⊗ β)(−(D(z) ⊕ T ))−1 (I ζ ⊗ T 0 ), (38)

where ⊗ and ⊕ represent the Kronecker product and Kronecker sum, respectively,
and T 0 = −T eζ . Similar derivation can be done for V (z).
Assuming service and vacation time distributions possess rational LST, let,
dA(z) and dV (z) are the denominators of A(z) and V (z), respectively. Hence,
each entry of the matrix [zL I − (A(z))L V (z)] is also a rational function with
the denominator [dA(z)]LdV (z). Let the (i, j )-th (1 ≤ i, j ≤ m) entry of A(z)
ai,j (z) vi,j (z)
and V (z) is given by and , respectively. If the (i, k)-th entry of the
dA(z) dV (z)
(l)
ai,k (z)
matrix (A(z))l (1 ≤ l ≤ L) is denoted by , then the (i, j )-th entry of
(dA(z))l
[zL I − (A(z))L V (z)] is given by
 L ri,j (z)
zL I − A(z) V (z) = ,
i,j [dA(z)]LdV (z)

where

⎪ 
m

⎨z [dA(z)] dV (z) −
L L (L)
ai,k (z)vk,i (z), i = j,
ri,j (z) = 
m
k=1 (39)

⎪ (L)
⎩− ai,k (z)vk,j (z), i = j.
k=1

Since W + (z) is a row vectorof order (1 × m), the j -th (1 ≤ j ≤ m) entry



can be given as Wj+ (z) = + n
n=0 ωn,j z . Expanding both sides of Eq. (11) and
then comparing component wise we get m equations in terms of m unknowns, viz.
Wj+ (z). After canceling [dA(z)]LdV (z) from both sides of the expanded equation,
122 S. Ghosh et al.

a simplified system of linear equations may be written as

W + (z)R(z) = M(z), (40)

where R(z) is an m × m matrix whose (i, k)-th entry is given by rk,i (z) and M(z)
is a 1 × m row vector whose j -th element is given by


m 
L−1 m   
[ω+ + (l)
Mj (z) = zL 0 D l ] 1,i + ωl,i a i,k (z) [dA(z)]L−l
k=1 l=1 i=1


m  
L−1 
+ [ω+ −1
0 (−D 0 ) ]1,i [D(z)]i,k − z l
[D l ]i,k
i=1 l=0


L−1 
m 
+ (L)
− zl ωl,i ai,k (z) vk,j (z).
l=0 i=1

Above system of Eqs. (40) can be solved for Wj+ (z) using Cramer’s rule and the
solution can be given by

|R j (z)|
Wj+ (z) = 1 ≤ j ≤ m, (41)
|R(z)|

where R j (z) is a square matrix whose (i, k)-th entries is given by


/
rk,i (z) k = j.
(R j (z))i,k =
Mi (z) k = j, 1 ≤ i, j, k ≤ m.

For uniqueness of Wj+ (z), |R(z)| is considered to be a non-zero polynomial in


z, i.e., |R(z)|= 0. Under steady-state conditions, it can be shown that |zL I −
|R(z)|
(A(z))L V (z)| = = 0 has exactly mL roots in |z|≤ 1 (including
([dA(z)]LdV (z))m
multiplicity) which also includes the root at z = 1, see Gail et al. [8, Lemma
1]. The characteristic equation associated with queue-length distribution is defined
as |zL I − (A(z))L V (z)|= 0, i.e., |R(z)|= 0. Let the roots of the characteristic
equation whose value is less than 1 are denoted by z1 , z2 , · · · , zmL−1 and the
root 1 is denoted by zmL . These roots can be used to determine the unknown joint
probabilities of Eq. (11).
Since Wj+ (z) (given by Eq. (41)) is convergent for |z|≤ 1, then zn (n = 1, 2, 3,
· · · , mL) must be the zeros of the numerator of Eq. (41) and hence we can determine
the unknown vectors by considering any one component of W + (z), say Wκ+ (z) (1 ≤
κ ≤ m). This implies that

|R κ (zi )|= 0 1 ≤ i ≤ mL. (42)


BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 123

The above system of homogeneous equation gives mL equations in terms of mL


+
unknowns, namely, ωn,j (0 ≤ n ≤ L, 1 ≤ j ≤ m), which leads to a trivial
solution. Hence, a non-homogeneous system  of equation is needed to find a non-
trivial solution. Using W + (1)e = 1, i.e., m
κ=1 Wκ
+ (1) = 1 and |R(1)|= 0, from

Eq. (41), we have


m
|R "κ (1)|= |R " (1)|. (43)
κ=1

Equations (42) for (1 ≤ i ≤ mL − 1) and (43) gives a non-homogeneous system


+
of equation. Solving them we get unknown joint probabilities ωn,j (0 ≤ n ≤ L −
+
1, 1 ≤ j ≤ m) and probability vectors ωn (0 ≤ n ≤ L − 1) associated with
Eq. (11). It is assumed that z1 , z2 , · · · , zmL−1 and zmL are distinct. If some roots
of the characteristic equation, inside and on |z|= 1 are repeated, the above procedure
needs some modifications to get the unknown joint probabilities, see Singh et al.
[19]. Now after computing the probability vectors ω+ n (0 ≤ n ≤ L − 1), the rest
probability vectors at post-vacation-termination epochs can be obtained from the
V GF of ω+ n ’s, i.e., Eq. (11).
After computing the queue-length probability vectors at post-vacation-
termination epoch, the probability vectors at post-service-completion epoch can
be obtained from Eqs. (2b) and (2c). The queue-length distribution at post-departure
epoch can be computed from Eq. (12). Now, using the queue-length probability
vectors at post-vacation-termination and post-service-completion epochs, the
queue-length probability vectors at server’s vacation period can be obtained from
Eq. (28). Thereafter, the queue-length probability vectors at server’s busy period
can be obtained from Eq. (20) with the help of the previously determined queue-
length probability vectors along with Eq. (26). Finally, the queue-length distribution
at an arbitrary epoch can be computed from Eq. (24). At last, the queue-length
distribution at pre-arrival epoch can be computed from Eq. (29).

7 Numerical Results and Discussion

Based on the analytical results, a few numerical illustrations have been presented in
this section. We have observed that if the parameters satisfy the stability criteria of
the queueing system, then the numerical results are satisfactory.
Considering phase-type (P H -type) service time and vacation time distribution,
we have computed the queue-length distribution at various epochs. Although,
the numerical computations were carried out with high precision, due to lack
of place the results have been presented here up to 6 decimal places. For this
queueing system, we have considered the service limit L as 3. The BMAP has
been considered to have 3 phases, while the P H -distributions for the service and
124 S. Ghosh et al.

the vacation times have been considered to have 2 phases. The 3-state BMAP
representation has been taken as
⎡ ⎤ ⎡ ⎤
−0.95 0.08 0.05 0.02 0.15 0.07
D 0 = ⎣ 0.03 −0.87 0.04 ⎦ , D 1 = ⎣ 0.20 0.04 0.19 ⎦ ,
0.02 0.03 −0.73 0.06 0.17 0.01
⎡ ⎤ ⎡ ⎤
0.03 0.10 0.18 0.16 0.05 0.06
D 3 = ⎣ 0.07 0.11 0.05 ⎦ , D 4 = ⎣ 0.08 0.09 0.07 ⎦ , and
0.03 0.05 0.13 0.04 0.12 0.07
⎡ ⎤
0.0 0.0 0.0
D k = ⎣ 0.0 0.0 0.0 ⎦ , for k ≥ 4.
0.0 0.0 0.0

For this representation of BMAP , one may compute, -


π = [0.264842, 0.372648,
0.362510], λ = 1.982630, and λb = 0.761796. The P H -type representation of the
service-time distribution has been taken as

−18.0 18.0
β 1 = [0.5, 0.5], T1 = .
0.0 −12.0

Hence the expected service time and the traffic intensity can be computed as
E(B) = 0.111111 and ρ = λE(B) = 0.220292, respectively. The P H -type
representation of the vacation time has been taken as

−20.0 20.0
β 2 = [0.3, 0.7], T2 = .
0.5 −25.0

For this vacation time distribution the expected vacation time can be computed as
E(V ) = 0.055. In Tables 1 and 2, we have presented the queue-length distribution at
post-service-completion and post-vacation-termination epochs, respectively. Table 3
represents the distribution of the number of customers in the queue when the server
is in vacation (ωn ). Whereas Table 4 describes the queue-length distribution at an
arbitrary epoch. 
From Table 4, it may be observed that ∞ n=0 p n ' - π , which validates the
correctness of the obtained numerical values. Further, it may be mentioned here
that the value of σ can be computed from the Remark 1. Therefore, one may
compute ν 0 = σ ω+ −1
0 [−D 0 ] . Thus, for the given arrival, service time, and vacation
time distributions, the ν 0 is computed as [0.186044, 0.271003, 0.270698], i.e., the
probability that the server will remain
∞dormant is 0.727745. Moreover, Eq. (31b)
has been verified, i.e., the value n=0 ωn e has been matched with the value
(1 − ρ − ν 0 e). This may be also considered as a valid check of the numerical
computations.
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 125

Table 1 Queue-length (3,3)+ (3,3)+ (3,3)+ (3,3)+


n πn,1 πn,2 πn,3 πn e
distribution at busy
post-service-completion 0 0.010709 0.020170 0.027227 0.058106
epochs
1 0.001793 0.002186 0.000933 0.004911
2 0.000101 0.000101 0.000067 0.000269
3 0.000790 0.001457 0.002060 0.004308
4 0.001480 0.001940 0.001426 0.004846
5 0.000192 0.000186 0.000129 0.000507
6 0.000048 0.000079 0.000104 0.000231
7 0.000108 0.000169 0.000168 0.000445
8 0.000095 0.000096 0.000077 0.000268
9 0.000013 0.000014 0.000012 0.000039
10 0.000006 0.000010 0.000011 0.000027
.. .. .. .. ..
. . . . .

0.015351 0.026424 0.032228 0.074003

Table 2 Queue-length + + +
n ωn,1 ωn,2 ωn,3 ω+
ne
distribution at
post-vacation-termination 0 0.055748 0.072681 0.060622 0.189052
epochs
1 0.023214 0.026851 0.020785 0.070850
2 0.004468 0.005142 0.003649 0.013258
3 0.002893 0.004845 0.005853 0.013591
4 0.005314 0.006787 0.006470 0.018571
5 0.003059 0.003267 0.002763 0.009089
6 0.000879 0.001031 0.000924 0.002834
7 0.000522 0.000742 0.000749 0.002013
8 0.000490 0.000621 0.000588 0.001699
9 0.000283 0.000314 0.000278 0.000875
10 0.000110 0.000131 0.000121 0.000362
.. .. .. .. ..
. . . . .

0.097138 0.122609 0.102989 0.322737
126 S. Ghosh et al.

Table 3 Queue-length
n ωn,1 ωn,1 ωn,1 ωn e
distribution when the server
on vacation 0 0.008986 0.011714 0.009769 0.030470
1 0.003738 0.004323 0.003347 0.011408
2 0.000719 0.000827 0.000587 0.002133
3 0.000464 0.000777 0.000939 0.002180
4 0.000852 0.001089 0.001038 0.002979
5 0.000491 0.000524 0.000444 0.001459
6 0.000141 0.000165 0.000148 0.000455
7 0.000084 0.000119 0.000120 0.000322
8 0.000078 0.000099 0.000094 0.000272
9 0.000045 0.000050 0.000045 0.000140
10 0.000018 0.000021 0.000019 0.000058
.. .. .. .. ..
. . . . .

0.015642 0.019741 0.016580 0.051963

Table 4 Queue-length
n pn,1 pn,1 pn,1 pn e
distribution at an arbitrary
epoch 0 0.214281 0.307641 0.301053 0.822974
1 0.016514 0.021810 0.020889 0.059213
2 0.013986 0.018967 0.018967 0.051921
3 0.010658 0.012586 0.010718 0.033962
4 0.003824 0.004826 0.004535 0.013185
5 0.002436 0.003004 0.002814 0.008253
6 0.001427 0.001743 0.001611 0.004782
7 0.000779 0.000928 0.000857 0.002564
8 0.000410 0.000508 0.000476 0.001393
9 0.000240 0.000289 0.000268 0.000797
10 0.000129 0.000155 0.000144 0.000428
.. .. .. .. ..
. . . . .

0.264842 0.372648 0.362510 1.000000
BMAP /R/1 − SV Queueing System Under G-Limited Service Discipline 127

8 Conclusion and Future Scope

This paper deals with a detailed analysis of the infinite-buffer BMAP/R/1 − SV
queueing system under the G-limited service discipline. The analysis is based on
the determination of the roots of the characteristic equation obtained at post-
service-completion and post-vacation-termination epochs. Using those roots, the
queue-length distribution at post-service-completion and arbitrary epochs has been
determined. Moreover, the detailed computational procedure for determining the
queue-length probability vectors at different epochs has been described. Some
performance measures of the system have also been discussed. In the future, the
BMAP/R/1 queueing system with G-limited service under adaptive vacations may
be investigated. Further, one may be interested in the BMAP/R/1 queueing system
under the probabilistic-limited (P-limited) service discipline with server's vacation.

Acknowledgments The authors are sincerely thankful to the anonymous referees for their
valuable suggestions and constructive comments towards the significant restructure of the paper.
The third author is partially supported by NSERC under the research grant number RGPIN-2014-
06604.

A Production Inventory System with
Renewal and Retrial Demands

G. Arivarignan, M. Keerthana, and B. Sivakumar

Abstract This paper presents a continuous review inventory system with make-to-
stock production facility. We assume that the arrival time points of demands form
a renewal process and each demand requires only single item. The replenishment
of stock is done by producing items one at a time. The production process is
started when the inventory level drops to or below a prefixed inventory level,
denoted by s(> 0), and is terminated when the maximum inventory level, namely
S(> s), is reached. The inter-production time is assumed to be exponential. The
customer, whose demand cannot be met during a stock-out period, enters an orbit of
infinite size and from the orbit he sends signals to the inventory system to get his
demand satisfied. The inter-retrial time between two successive retrials is assumed
to follow an exponential distribution. With a suitable modeling process, we derive the
joint probability distribution of the number of customers in the orbit, the status of the
machine (producing or not), and the inventory level, using the matrix geometric method.

Keywords Renewal primary demand · Make-to-stock · Continuous review


inventory system · Infinite orbit · Matrix geometric method

1 Introduction

The literature on continuous review inventory systems mostly considered the


replenishment of stock by ordering items (in lots) from a supplier or manufacturer.
The most frequently used policies are "order up to," equivalently the (S − 1, S) policy
(placing an order at every occurrence of a demand), or the (r, Q) policy (placing an
order for Q items when the inventory level drops to r). The performance analysis

G. Arivarignan ()
Department of Statistics, Manonmaniam Sundaranar University, Tirunelveli, India
M. Keerthana · B. Sivakumar
School of Mathematics, Madurai Kamaraj University, Madurai, India


of these models have been carried out by [2, 3, 8, 12] and [1]. However, in certain
cases the production of items in lieu of placing orders for them is preferred. This is
necessitated to protect the trade mark rights of technology developed in-house, or
to catch up with the changing trend or specifications in the manufacturing process,
or to reduce the cost of stocking these items. Moreover, the production could be of
two types: “make-to-order” and “make-to-stock”. In the former, the production is
started whenever the demand arrives and in the latter, the production is started when
the inventory is at a low level (already fixed) and run until the stock is accumulated
to a desirable level.
A detailed discussion on stochastic models for manufacturing system is given in
[3]. He et al. [5, 6] considered a make-to-order production inventory system with
arrivals of demands forming a Poisson process, production time having exponential
distribution and zero lead time, and that the production can be initiated at any
required time. He et al. [7] extended this work by having phase type distribution for
the production time. They derived optimal replenishment policies by not only using
the inventory level, but also the information on the number of outstanding orders.
Krishnamoorthy and Narayanan [10] considered an inventory system with make-to-
stock production and assumed Markovian arrival process for demands and Markov
production process for the production times. They derived the joint probability
distribution of inventory level and the queue size and computed various measures
of system performance in steady state. Kim [9] considered a two-station tandem
production system with make-to-order production in one station and make-to-stock
in another. The make-to-order facility processes the customer order with the option
to accept or reject. They addressed the problem of coordinating the decisions by
presenting a Markov decision model.
In this paper, we consider an inventory system with one production machine
for augmenting items to the stock. The inter-demand occurrence time points are
assumed to have an arbitrary distribution, and customers are allowed to join an orbit
in case of nonavailability of items in the inventory and are permitted to retry for
their demands. A matrix geometric solution to the problem of finding the limiting
distribution of the number of customers in the system, the status of the production
machine, and the inventory level is provided.

2 Model Formulation

Consider a continuous review system in which customers arrive at an inventory
system demanding a unit item. To augment items to the stock, the items are produced
within the system. The production process starts only when the inventory level drops
to (or below) a prefixed safety stock, and items are produced one at a time until the
inventory level reaches the maximum inventory level. The customers who arrive during
stock-out periods are allowed to join an orbit and, from there, they can retry to
get their demands satisfied.

The various random processes considered for the above model are listed below:
• The inter-arrival times of customers are independent and identically distributed
with arbitrary distribution F (·) , density function f (·), and mean m. Each
demand requires only a single item.
• The items are produced one at a time with random production time.
• The production process is started when the inventory level drops to (or below)
  a prefixed quantity s and is terminated when the maximum inventory level S is
  reached (0 ≤ s < S). For mathematical tractability, we assume that the production
  is switched on only at demand epochs.
• The inter-production time is assumed to have exponential distribution with
parameter α.
• During the stock out period, the arriving customers are allowed to enter into an
orbit of infinite size. Retrial by customers from the orbit are entertained.
• The inter-retrial time is assumed to have exponential distribution with parameter
θ . We assume that the orbiting customers follow first come first serve discipline.
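
To make the above dynamics concrete, a minimal discrete-event simulation sketch of the model is given below. It is purely illustrative and not part of the analysis that follows; the particular rates, inventory levels, and the default exponential inter-demand distribution are placeholder choices.

```python
import random

def simulate(T=50_000.0, S=10, s=3, alpha=1.5, theta=2.0,
             draw_interarrival=lambda: random.expovariate(1.0)):
    """Sketch of the make-to-stock model: renewal demands (one item each),
    exponential unit production times (rate alpha) switched on at demand
    epochs when the level is <= s and off when the level reaches S, and an
    infinite orbit whose head customer retries after Exp(theta) times.
    All numerical values are illustrative placeholders."""
    inf = float("inf")
    t, level, producing, orbit = 0.0, S, False, 0
    t_demand, t_prod, t_retry = draw_interarrival(), inf, inf
    area = 0.0                                   # integral of the inventory level
    while t < T:
        t_next = min(t_demand, t_prod, t_retry)
        area += level * (t_next - t)
        t = t_next
        if t == t_demand:                        # a primary demand arrives
            if level > 0:
                level -= 1
            else:
                orbit += 1                       # stock-out: join the orbit
                if t_retry == inf:
                    t_retry = t + random.expovariate(theta)
            if not producing and level <= s:     # production is switched on
                producing = True                 # only at demand epochs
                t_prod = t + random.expovariate(alpha)
            t_demand = t + draw_interarrival()
        elif t == t_prod:                        # one item is produced
            level += 1
            if level == S:
                producing, t_prod = False, inf
            else:
                t_prod = t + random.expovariate(alpha)
        else:                                    # a retrial from the orbit
            if level > 0:
                level -= 1
                orbit -= 1
            t_retry = t + random.expovariate(theta) if orbit > 0 else inf
    return area / T, orbit

if __name__ == "__main__":
    avg_level, left_in_orbit = simulate()
    print("time-average inventory level:", round(avg_level, 3),
          "| customers still in orbit:", left_in_orbit)
```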

2.1 Embedded MRP and Its Analysis

Let X(t) denote the number of customers in the orbit, Y (t) denote the
production status (1 for on and 0 for off), and L(t) denote the inventory
level at time t > 0. The state space of these processes are, respectively,
{0, 1, 2, . . .}, {0, 1}, and {0, 1, 2, . . . , S}. We consider the joint process
Z(t) = (X(t), Y(t), L(t)) whose state space E is given by

E = {(x, y, l) : x = 0, 1, 2, . . . ; if y = 0, then l = s + 1, s + 2, . . . , S − 1, S, and
     x = 0, 1, 2, . . . ; if y = 1, then l = 0, 1, . . . , S − 1, S}.

Let 0 = T₀ < T₁ < T₂ < · · · be a sequence of time points at which demands arrive
to the system. Define Xₙ = X(Tₙ+), Yₙ = Y(Tₙ+), and Lₙ = L(Tₙ+). The state
space Ê of the discrete time process {Xₙ, Yₙ, Lₙ} is given by

Ê = {(x, y, l) : x = 0, 1, 2, . . . ; if y = 0, then l = s + 1, s + 2, . . . , S − 1, and
     x = 0, 1, 2, . . . ; if y = 1, then l = 0, 1, . . . , S − 1}.

This representation will help us to represent the state spaces in a block partitioned
form and in turn we can write the associated matrices in block partitioned form.

It can be shown that the discrete time stochastic process {Xₙ, Yₙ, Lₙ, Tₙ; n = 0, 1, 2, . . .}
with state space Ê × R⁺ is a Markov renewal process satisfying the property

Pr{X_{n+1}, Y_{n+1}, L_{n+1}, T_{n+1} − Tₙ ≤ t | X₀, . . . , Xₙ, Y₀, . . . , Yₙ, L₀, . . . , Lₙ, T₀, . . . , Tₙ}
      = Pr{X_{n+1}, Y_{n+1}, L_{n+1}, T_{n+1} − Tₙ ≤ t | Xₙ, Yₙ, Lₙ}

for all n = 0, 1, 2, . . . , t ∈ R⁺, and (x, y, l), (x′, y′, l′) ∈ Ê. Since this process is a
time-homogeneous one, we can write it as a function of the duration between two
successive demand time points, viz.,

Pr{X_{n+1} = x′, Y_{n+1} = y′, L_{n+1} = l′, T_{n+1} − Tₙ ≤ t | Xₙ = x, Yₙ = y, Lₙ = l}
      = K_{(x,y,l)}((x′, y′, l′), t),                                              (1)

and the quantity on the right-hand side of Eq. (1) is called the semi-Markov kernel
over Ê. It may be noted that the function

Pr[(x, y, l), (x′, y′, l′)] = lim_{t→∞} K_{(x,y,l)}((x′, y′, l′), t)

is the one-step transition probability function of the Markov chain {(Xₙ, Yₙ, Lₙ), n = 0, 1, 2, . . .}.
We introduce a set of blocks in the collection of states of the state space Ê as
indicated below:

Ê = (0̂, 1̂, 2̂, . . .),
x̂ = (⟨x, 0⟩, ⟨x, 1⟩),  x = 0, 1, 2, . . . ,
⟨x, 0⟩ = ((x, 0, s + 1), (x, 0, s + 2), . . . , (x, 0, S − 1)),
⟨x, 1⟩ = ((x, 1, 0), (x, 1, 1), . . . , (x, 1, S − 1)).

We use the notation [D]_{ij} for the (i, j)th entry of a matrix D. The derivative of
K_{(x,y,l)}((x′, y′, l′), t) is denoted by κ_{(x,y,l)}((x′, y′, l′), t). We define the following
sub-matrices:

[B_{((x,y),(x′,y′))}(t)]_{ll′} = κ_{(x,y,l)}((x′, y′, l′), t),
[A_{(x,x′)}(t)]_{yy′} = B_{((x,y),(x′,y′))}(t),
[κ(t)]_{xx′} = A_{(x,x′)}(t).

It can be easily seen that the sub-matrices A_{(x,x′)}(t) are zero matrices for
x′ > x + 1 and that these matrices do not depend on the individual values of
x and x′, but only on the difference x − x′ for x′ ≤ x. For brevity, we write

A_r(t) = A_{(x,x′)}(t), where r = x − x′ + 1 for x′ ≤ x + 1 and x > 0. Thus we have

κ(t) =
  ⎛ Ĉ₁   Ã₀₀   0     0     0    ··· ⎞
  ⎜ Ĉ₂   Â₁    Â₀    0     0    ··· ⎟
  ⎜ Ĉ₃   Â₂    Â₁    Â₀    0    ··· ⎟
  ⎜ Ĉ₄   Â₃    Â₂    Â₁    Â₀   ··· ⎟
  ⎝  ⋮    ⋮     ⋮     ⋮     ⋮    ⋱  ⎠

with rows and columns indexed by the blocks 0̂, 1̂, 2̂, 3̂, . . . .

The sub-matrices are defined below, where the first and second block row/column
correspond to the production status y = 0 and y = 1, respectively:

Ã₀₀ = [ 0   0 ; 0   Â^{11}_{00} ],      Â₀ = [ 0   0 ; 0   Â^{11}_{0} ],

and, for k = 1, 2, . . . ,

Âₖ = [ Â^{00}_k   Â^{01}_k ; Â^{10}_k   Â^{11}_k ],      Ĉₖ = [ Ĉ^{00}_k   Ĉ^{01}_k ; Ĉ^{10}_k   Ĉ^{11}_k ].

We need the following notation in the sequel:

f(t)   : pdf of the inter-arrival time between two successive demands,
F(t)   : distribution function associated with f(t),
F̄(t)  : 1 − F(t),
p_j(t) : e^{−αt}(αt)^j / j!, the probability of the production of j items in time t,
p̄_j(t): 1 − Σ_{k=0}^{j} p_k(t),
q_j(t) : e^{−θt}(θt)^j / j!, the probability of j retrials in time t,
q̄_j(t): 1 − Σ_{k=0}^{j} q_k(t).
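
Since the kernel entries below are built from these Poisson probabilities, a small helper for evaluating them may be useful; the following is only an illustrative sketch of the definitions above, with placeholder rates.

```python
import math

def p(j, t, alpha):
    """p_j(t): probability that exactly j items are produced in (0, t]."""
    return math.exp(-alpha * t) * (alpha * t) ** j / math.factorial(j)

def p_bar(j, t, alpha):
    """p_bar_j(t) = 1 - sum_{k=0}^{j} p_k(t)."""
    return 1.0 - sum(p(k, t, alpha) for k in range(j + 1))

def q(j, t, theta):
    """q_j(t): probability of exactly j retrials in (0, t]."""
    return math.exp(-theta * t) * (theta * t) ** j / math.factorial(j)

def q_bar(j, t, theta):
    """q_bar_j(t) = 1 - sum_{k=0}^{j} q_k(t)."""
    return 1.0 - sum(q(k, t, theta) for k in range(j + 1))

# quick sanity checks with illustrative rates alpha = 1.5, theta = 2.0
print(round(sum(p(k, 2.0, 1.5) for k in range(50)), 6))   # ~1.0
print(round(q_bar(3, 1.0, 2.0), 6))
```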

With the above notation, we can write the following:



[Â^{11}_{00}]_{ij} = f(t)p₀(t) for i = 0, j = 0, and 0 otherwise;
[Â^{11}_{0}]_{ij} = f(t)p₀(t)q₀(t) for i = 0, j = 0, and 0 otherwise.

For k = 1, 2, . . . , Q − 1:
[Â^{10}_{k}]_{ij} = f(t)p̄_{j−i+k−1}(t)q_{k−1}(t) for i = 0, 1, . . . , S − 1, j = S − k, . . . , S − 1, and 0 otherwise;
for k = Q, Q + 1, . . . :
[Â^{10}_{k}]_{ij} = f(t)p̄_{j−i+k−1}(t)q_{k−1}(t) for i = 0, 1, . . . , S − 1, j = s + 1, . . . , S − 1, and 0 otherwise.

For k = 1, 2, . . . , S − 1:
[Â^{11}_{k}]_{ij} = f(t)p_{j−i+k}(t)q_{k−1}(t) for i = k + 1, . . . , S − 1, j = i − k, . . . , S − 2,
                  = f(t)p_{j−i+k}(t)q_{k−1}(t) for i = 0, 1, . . . , k, j = 0, . . . , S − 2, and 0 otherwise;
for k = S, S + 1, . . . :
[Â^{11}_{k}]_{ij} = f(t)p_{j−i+k}(t)q_{k−1}(t) for i = 0, 1, . . . , S − 1, j = 0, . . . , S − 2, and 0 otherwise.

For k = 1, . . . , Q − 2:
[Â^{00}_{k}]_{ij} = f(t)q_{k−1}(t) for i = s + k + 1, . . . , S − 1, j = i − k, and 0 otherwise;
for k ≥ Q − 1, the Â^{00}_{k} are zero matrices.

For k = 1, . . . , s:
[Â^{01}_{k}]_{ij} = f(t)q_{k−1}(t) for i = s + 1, . . . , s + k, j = i − k, and 0 otherwise;
for k = s + 1, . . . , Q − 2:
[Â^{01}_{k}]_{ij} = f(t)q_{k−1}(t) for i = k + 1, . . . , s + k, j = i − k,
                  = f(t)q̄_{k−2}(t) for i = k, j = i − k, and 0 otherwise;
for k = Q − 1, . . . , S − 1:
[Â^{01}_{k}]_{ij} = f(t)q_{k−1}(t) for i = k + 1, . . . , S − 1, j = i − k,
                  = f(t)q̄_{k−2}(t) for i = k, j = i − k, and 0 otherwise;
for k = S, S + 1, . . . , the Â^{01}_{k} are zero matrices.

For k = 1:
[Ĉ^{00}_{k}]_{ij} = f(t) for i = s + 2, . . . , S − 1, j = i − 1, and 0 otherwise;
[Ĉ^{01}_{k}]_{ij} = f(t) for i = s + 1, j = i − 1, and 0 otherwise;
[Ĉ^{10}_{k}]_{ij} = f(t)p̄_{j−i+k−1}(t) for i = 0, 1, . . . , S − 1, j = S − k, . . . , S − 1, and 0 otherwise;
[Ĉ^{11}_{k}]_{ij} = f(t)p_{j−i+k}(t) for i = 1, . . . , S − 1, j = i − k, . . . , S − 2,
                  = f(t)p_{j−i+k}(t) for i = 0, j = 0, . . . , S − 2, and 0 otherwise.

For k = 2, 3, . . . , Q − 1:
[Ĉ^{10}_{k}]_{ij} = f(t)p̄_{j−i+k−1}(t)q̄_{k−2}(t) for i = 0, 1, . . . , S − 1, j = S − k, . . . , S − 1, and 0 otherwise;
for k = Q, Q + 1, . . . :
[Ĉ^{10}_{k}]_{ij} = f(t)p̄_{j−i+k−1}(t)q̄_{k−2}(t) for i = 0, 1, . . . , S − 1, j = s + 1, . . . , S − 1, and 0 otherwise.

For k = 2, 3, . . . , S − 1:
[Ĉ^{11}_{k}]_{ij} = f(t)p_{j−i+k}(t)q̄_{k−2}(t) for i = k, . . . , S − 1, j = i − k, . . . , S − 2,
                  = f(t)p_{j−i+k}(t)q̄_{k−2}(t) for i = 0, 1, . . . , k − 1, j = 0, . . . , S − 2, and 0 otherwise;
for k = S, S + 1, . . . :
[Ĉ^{11}_{k}]_{ij} = f(t)p_{j−i+k}(t)q̄_{k−2}(t) for i = 0, 1, . . . , S − 1, j = 0, . . . , S − 2, and 0 otherwise.

For k = 2, 3, . . . , Q − 2:
[Ĉ^{00}_{k}]_{ij} = f(t)q̄_{k−2}(t) for i = s + k + 1, . . . , S − 1, j = i − k, and 0 otherwise;
for k ≥ Q − 1, the Ĉ^{00}_{k} are zero matrices.

For k = 1, 2, . . . , s:
[Ĉ^{01}_{k}]_{ij} = f(t)q̄_{k−2}(t) for i = s + 1, . . . , s + k, j = i − k, and 0 otherwise;
for k = s + 1, . . . , Q − 2:
[Ĉ^{01}_{k}]_{ij} = f(t)q̄_{k−2}(t) for i = k, . . . , s + k, j = i − k, and 0 otherwise;
for k = Q − 1, . . . , S − 1:
[Ĉ^{01}_{k}]_{ij} = f(t)q̄_{k−2}(t) for i = k, . . . , S − 1, j = i − k, and 0 otherwise;
for k = S, S + 1, . . . , the Ĉ^{01}_{k} are zero matrices.

2.2 Steady State Analysis

The transition probability matrix (tpm) P of the Markov chain {(Xₙ, Yₙ, Lₙ), n = 0, 1, 2, . . .}
is obtained from the Markov renewal kernel κ(t) by

P = ∫₀^∞ κ(t) dt,

where the integration on the right-hand side is performed element-wise. The matrix P
can be written in block partitioned form

P =
  ⎛ C₁   Ã₀   0    0    0   ··· ⎞
  ⎜ C₂   A₁   A₀   0    0   ··· ⎟
  ⎜ C₃   A₂   A₁   A₀   0   ··· ⎟
  ⎜ C₄   A₃   A₂   A₁   A₀  ··· ⎟
  ⎝  ⋮    ⋮    ⋮    ⋮    ⋮   ⋱  ⎠

with rows and columns indexed by the blocks 0̂, 1̂, 2̂, 3̂, . . . , where the sub-matrices
Cᵢ and Aᵢ are the respective integrals of Ĉᵢ and Âᵢ. It may be noted that
C₁e + Ã₀e = e and C_{n+1}e + (Aₙ + · · · + A₀)e = e for n ≥ 1.

Let A = Σ_{i=0}^{∞} Aᵢ. Then A is a stochastic matrix. Following [11], we make the
following statement.

Proposition Since the Markov chain with tpm P is irreducible (and hence all states
are positive recurrent), the matrix A is stochastic. The Markov chain is ergodic if and
only if

1 < ν Σ_{k=1}^{∞} k Aₖ e,

where ν satisfies νA = ν and νe = 1.
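
As a hedged numerical illustration of this condition, the sketch below computes ν from νA = ν, νe = 1 and evaluates ν Σ_{k≥1} k Aₖ e for a small artificial family of blocks {Aₖ} truncated at k = 3; these placeholder matrices are not the kernel blocks constructed in Sect. 2.1.

```python
import numpy as np

def stationary(A):
    """Solve nu A = nu, nu e = 1 for an irreducible stochastic matrix A."""
    m = A.shape[0]
    M = np.vstack([A.T - np.eye(m), np.ones((1, m))])
    rhs = np.zeros(m + 1); rhs[-1] = 1.0
    nu, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return nu

def is_ergodic(A_blocks):
    """Check 1 < nu * sum_k k A_k e for a (truncated) family {A_k}."""
    A = sum(A_blocks)
    nu = stationary(A)
    drift = sum(k * A_blocks[k] for k in range(len(A_blocks))) @ np.ones(A.shape[0])
    return float(nu @ drift) > 1.0

# toy 2x2 blocks (placeholders), truncated at k = 3; rows of sum(A_k) sum to 1
A_blocks = [np.array([[0.05, 0.05], [0.10, 0.00]]),
            np.array([[0.30, 0.20], [0.25, 0.25]]),
            np.array([[0.20, 0.10], [0.20, 0.10]]),
            np.array([[0.05, 0.05], [0.05, 0.05]])]
print("ergodic:", is_ergodic(A_blocks))
```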


We need the characterization of the stationary (invariant) vector of P. It may be
noted that there is a positive probability that every state (x′, y′, l′) ∈ Ê can be
reached from (x, y, l) ∈ Ê, as the process involving the number of customers in the
orbit, the machine status, and the inventory level moves over all states of Ê. Hence
the chain is irreducible and we conclude that all the states of this chain are positive
recurrent. Thus the stationary vector Π satisfies

Π P = Π  and  Π e = 1.                                                            (2)

With the partition of states imposed on Ê, we can write the stationary vector in
partitioned form as

Π = (π₀, π₁, π₂, . . .).

Then Eq. (2) can be rewritten as

π₀ = Σ_{j=0}^{∞} π_j C_{j+1},                                                     (3)
π₁ = π₀ Ã₀ + Σ_{j=1}^{∞} π_j A_j,                                                 (4)
π_k = Σ_{j=0}^{∞} π_{k+j−1} A_j,  for k ≥ 2,                                      (5)
Σ_{k=0}^{∞} π_k e = 1.                                                            (6)

Following [11], we state the following results without providing the proofs.

Theorem 1 A matrix geometric solution to the set of Eqs. (3)–(6) is given by:
1. for n ≥ 1, we have π_n = π₁ R^{n−1};
2. the matrix R satisfies the equation R = Σ_{k=0}^{∞} R^k A_k and is the minimal
   nonnegative solution of the matrix equation X = Σ_{k=0}^{∞} X^k A_k;
3. the eigenvalues of R lie inside the unit disk;
4. the matrix C[R] = Σ_{k=0}^{∞} R^k C_{k+1} is stochastic; and
5. the vectors π₀ and π₁ are the unique positive solution of

   π₀ = π₀ C₁ + π₁ Σ_{n=1}^{∞} R^{n−1} C_{n+1},
   π₁ = π₀ Ã₀ + π₁ Σ_{n=1}^{∞} R^{n−1} A_n,
   1 = π₀ e + π₁ (I − R)^{−1} e.

The first two of these equations can be expressed in compact form as

(π₀, π₁) = (π₀, π₁) [ C₁   Ã₀ ; Σ_{n=1}^{∞} R^{n−1} C_{n+1}   Σ_{n=1}^{∞} R^{n−1} A_n ].
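
The following sketch indicates how Theorem 1 can be used computationally: R is found by successive substitution in R = Σ_k R^k A_k, the boundary pair (π₀, π₁) is obtained from the compact system together with the normalization π₀e + π₁(I − R)⁻¹e = 1, and the remaining components follow from π_n = π₁R^{n−1}. The 2×2 blocks used here are artificial placeholders chosen only so that the required row-sum relations hold; they are not the blocks derived above.

```python
import numpy as np

# illustrative 2x2 blocks (placeholders, not the kernel of Sect. 2.1)
A = [np.array([[0.05, 0.05], [0.10, 0.00]]),   # A_0 (one level up)
     np.array([[0.30, 0.20], [0.25, 0.25]]),   # A_1
     np.array([[0.20, 0.10], [0.20, 0.10]]),   # A_2
     np.array([[0.05, 0.05], [0.05, 0.05]])]   # A_3; A_k = 0 for k > 3
A0_tilde = A[0]
C = [None,                                      # C_0 unused
     np.array([[0.50, 0.40], [0.45, 0.45]]),    # C_1
     np.array([[0.20, 0.20], [0.10, 0.30]]),    # C_2
     np.array([[0.05, 0.05], [0.10, 0.00]])]    # C_3; C_k = 0 for k > 3
m = 2
e = np.ones(m)

# step 1: R as the minimal nonnegative solution of R = sum_k R^k A_k
R = np.zeros((m, m))
for _ in range(2000):                           # successive substitution
    R_new = sum(np.linalg.matrix_power(R, k) @ A[k] for k in range(len(A)))
    if np.max(np.abs(R_new - R)) < 1e-12:
        R = R_new
        break
    R = R_new
assert max(abs(np.linalg.eigvals(R))) < 1.0     # eigenvalues inside the unit disk

# step 2: boundary vectors (pi_0, pi_1) from the compact system
S_C = sum(np.linalg.matrix_power(R, n - 1) @ C[n + 1]
          for n in range(1, len(C) - 1))        # sum_{n>=1} R^{n-1} C_{n+1}
S_A = sum(np.linalg.matrix_power(R, n - 1) @ A[n]
          for n in range(1, len(A)))            # sum_{n>=1} R^{n-1} A_n
B = np.block([[C[1], A0_tilde], [S_C, S_A]])    # (pi0, pi1) = (pi0, pi1) B
M = np.vstack([B.T - np.eye(2 * m),
               np.concatenate([e, np.linalg.inv(np.eye(m) - R) @ e])])
rhs = np.zeros(2 * m + 1); rhs[-1] = 1.0        # normalisation row
x, *_ = np.linalg.lstsq(M, rhs, rcond=None)
pi0, pi1 = x[:m], x[m:]

# step 3: pi_n = pi_1 R^(n-1) for n >= 1
print("pi_0:", np.round(pi0, 4), " pi_1:", np.round(pi1, 4))
print("pi_3 = pi_1 R^2:", np.round(pi1 @ np.linalg.matrix_power(R, 2), 4))
print("total mass:", round(pi0 @ e + pi1 @ np.linalg.inv(np.eye(m) - R) @ e, 6))
```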

2.3 Semi-regenerative Process

It can be noted that for the process Z(t) we have a sequence of demand points
(which form a renewal process) and that, between any two consecutive demand
points, the process is affected only by mechanisms that possess the Markov property.
Hence we claim that the process Z(t) is a semi-regenerative process with the
embedded Markov renewal process {Xₙ, Yₙ, Lₙ, Tₙ; n = 0, 1, 2, . . .}.
We define the probability distribution of Z(t), conditioned on a fixed state at
time 0, as

Φ_{(x,y,l)}((x′, y′, l′), t) = Pr[(X(t), Y(t), L(t)) = (x′, y′, l′) | (X₀, Y₀, L₀) = (x, y, l)],

for (x, y, l) ∈ Ê and (x′, y′, l′) ∈ E.


By using the total probability rule,

Φ_{(x,y,l)}((x′, y′, l′), t)
  = Pr[(X(t), Y(t), L(t)) = (x′, y′, l′), T₁ > t | (X₀, Y₀, L₀) = (x, y, l)]
    + Pr[(X(t), Y(t), L(t)) = (x′, y′, l′), T₁ ≤ t | (X₀, Y₀, L₀) = (x, y, l)]
  = Ψ_{(x,y,l)}((x′, y′, l′), t)
    + lim_{h↓0} Σ_{(x″,y″,l″)∈Ê} ∫₀^t (1/h) Pr[(X₁, Y₁, L₁) = (x″, y″, l″), w < T₁ ≤ w + h | (X₀, Y₀, L₀) = (x, y, l)]
        × Pr[(X(t − w), Y(t − w), L(t − w)) = (x′, y′, l′) | (X₁, Y₁, L₁) = (x″, y″, l″)] dw
  = Ψ_{(x,y,l)}((x′, y′, l′), t)
    + Σ_{(x″,y″,l″)∈Ê} ∫₀^t κ_{(x,y,l)}((x″, y″, l″), w) Φ_{(x″,y″,l″)}((x′, y′, l′), t − w) dw,

where

Ψ_{(x,y,l)}((x′, y′, l′), t) = Pr[(X(t), Y(t), L(t)) = (x′, y′, l′), T₁ > t | (X₀, Y₀, L₀) = (x, y, l)]

and κ_{(x,y,l)}((x′, y′, l′), t) is the derivative of the Markov renewal kernel. Hence we
get the Markov renewal equation

Φ_{(x,y,l)}((x′, y′, l′), t) = Ψ_{(x,y,l)}((x′, y′, l′), t)
    + Σ_{(u,v,k)∈Ê} ∫₀^t κ_{(x,y,l)}((u, v, k), w) Φ_{(u,v,k)}((x′, y′, l′), t − w) dw.    (7)



We also have

Ψ_{(x,y,l)}((x′, y′, l′), t) = F̄(t) Ψ̃_{(x,y,l)}((x′, y′, l′), t),

where

Ψ̃_{(x,y,l)}((x′, y′, l′), t) = Pr[(X(t), Y(t), L(t)) = (x′, y′, l′) | T₁ > t, (X₀, Y₀, L₀) = (x, y, l)].

Using r = x − x′, the function Ψ̃_{(x,y,l)}((x′, y′, l′), t) is obtained as

Ψ̃_{(x,y,l)}((x′, y′, l′), t) =
  1,                                 if x′ = x, y′ = y, l′ = l, for x = 0, y = 0, l = s + 1, . . . , S;
  p_{(l′−l)}(t),                     if x′ = x, y′ = y, l′ = l, . . . , S − 1, for x = 0, y = 1, l = 0, 1, . . . , S − 1;
  p̄_{(l′−l)}(t),                    if x′ = x, y′ = 0, l′ = S, for x = 0, y = 1, l = 0, . . . , S − 1;
  p̄_{(x′−x−l−(S−l′))}(t) q_{(r)}(t), if x′ = 1, . . . , x − S, y′ = 0, l′ = 1, . . . , S, for x > S, y = 1, l = 1, . . . , S − 1;
  p̄_{(l′−l+r)}(t) q_{(r)}(t),       if x′ = x − S + 1, . . . , x, y′ = 0, l′ = S − r, . . . , S, for x > S, y = 1, l = 1, . . . , S − 1;
  p̄_{(l′−l+r)}(t) q̄_{(r)}(t),      if x′ = 0, y′ = 0, l′ = S − r, . . . , S, for x > S, y = 1, l = 1, . . . , S − 1;
  p̄_{(l′−l+r)}(t) q_{(r)}(t),       if x′ = 1, . . . , x, y′ = 0, l′ = S − r, . . . , S, for x < S, y = 1, l = 1, . . . , S − 1;
  p̄_{(l′−l+r)}(t) q̄_{(r)}(t),      if x′ = 0, y′ = 0, l′ = S − r, . . . , S, for x < S, y = 1, l = 1, . . . , S − 1;
  p_{(l′−l+r)}(t) q̄_{(r)}(t),       if x′ = 0, y′ = 1, l′ = l − x, . . . , S − 1, for x = 1, . . . , l − 1, y = 1, l = 1, . . . , S − 1;
  p_{(l′−l+r)}(t) q_{(r)}(t),        if x′ = 1, . . . , x, y′ = 1, l′ = l − r, . . . , S − 1, for x = 1, . . . , l − 1, y = 1, l = 1, . . . , S − 1;
  p_{(l′−l+r)}(t) q_{(r′)}(t),       if x′ = 1, . . . , x, y′ = 1, l′ = Max{1, l − r}, . . . , S − 1, for x = l, l + 1, . . . , y = 1, l = 1, . . . , S − 1;
  q_{(r)}(t),                        if x′ = x − l + s + 2, . . . , x, y′ = 0, l′ = l − r, for x = 1, . . . , l, y = 0, l = s + 2, . . . , S;
  0,                                 otherwise.

We shall use the following partition on E:

E = (0̃, 1̃, 2̃, . . .),
ĩ = (⟨i, 0⟩, ⟨i, 1⟩),  i = 0, 1, 2, . . . ,
⟨i, 0⟩ = ((i, 0, s + 1), (i, 0, s + 2), . . . , (i, 0, S)),
⟨i, 1⟩ = ((i, 1, 0), (i, 1, 1), . . . , (i, 1, S)),

and collect the functions Ψ̃_{(x,y,l)}((x′, y′, l′), t) in a block partitioned matrix

Ψ̃(t) =
  ⎛ E₁(t)    0       0       0       0    ··· ⎞
  ⎜ E₂(t)   D₁(t)    0       0       0    ··· ⎟
  ⎜ E₃(t)   D₂(t)   D₁(t)    0       0    ··· ⎟
  ⎜ E₄(t)   D₃(t)   D₂(t)   D₁(t)    0    ··· ⎟
  ⎝   ⋮       ⋮       ⋮       ⋮      ⋮     ⋱  ⎠

with rows indexed by the blocks 0̂, 1̂, 2̂, . . . of Ê and columns indexed by the
blocks 0̃, 1̃, 2̃, . . . of E.
The Markov renewal equation (7) can be conveniently expressed as

Φ = Ψ + κ ⋆ Φ,                                                                    (8)

where ⋆ represents the convolution of the matrices κ and Φ, so that the entry at the
((x, y, l), (x′, y′, l′))th position of the resultant matrix is the second term of Eq. (7).
Equation (8) is a generalization of the renewal equation studied in renewal theory.
Define the limiting probability distribution for Φ,

Φ(x′, y′, l′) = lim_{t→∞} Φ_{(x,y,l)}((x′, y′, l′), t),   (x, y, l) ∈ Ê, (x′, y′, l′) ∈ E.

By using the result of [4] we get

Φ(x′, y′, l′) = (1 / Σ_{(x,y,l)∈Ê} π_{(x,y,l)} m(x, y, l)) Σ_{(x,y,l)∈Ê} π_{(x,y,l)} ∫₀^∞ F̄(t) Ψ̃_{(x,y,l)}((x′, y′, l′), t) dt,

where m(x, y, l) is the mean recurrence time of the Markov chain (Xₙ, Yₙ, Lₙ) in
state (x, y, l). In this problem it is given by the mean time between two successive
demand points, namely, m = ∫₀^∞ (1 − F(t)) dt. Hence we get

Φ(x′, y′, l′) = (1/m) Σ_{(x,y,l)∈Ê} π_{(x,y,l)} ∫₀^∞ F̄(t) Ψ̃_{(x,y,l)}((x′, y′, l′), t) dt.

3 Conclusion

We presented in this article a stochastic model of an inventory system maintained
with a production facility, in which the inter-demand times have an arbitrary
distribution and the production time is exponentially distributed. The customers
whose demand cannot be satisfied due to want of stock are allowed to join an orbit
of infinite size. These orbiting customers can retry for their demand according to
the first-come-first-served discipline, and the inter-retrial times are exponentially
distributed. We used an embedded Markov renewal process and studied it. We used
the matrix geometric solution approach provided by [11], and thereby a wide range
of algorithmic solution procedures can be used to get the invariant vector of the
underlying Markov chain. Finally, we derived a renewal equation which can be used
to get the limiting probability distribution of the process under consideration.

Acknowledgments The work of the author “G. Arivarignan” was supported by University Grants
Commission, India, research award No. F.6-6/2017-18/EMERITUS-2017-18-OBC-9414/(SA-II).
The work of the author “M. Keerthana” was supported by the UGC BSR Research Fellowship,
University Grants Commission, India, F.25-1/2014-15 (BSR)/5-66/2007 (BSR).

References

1. Arivarignan, G., Sivakumar, B.: Inventory systems with renewal demands at service facilities.
In: Srinivasan, S.K., Vijayakumar, A. (eds.) Stochastic Point Processes, pp. 108–123. Narosa
Publishing House, New Delhi (2003)
2. Axsäter, S.: Inventory Control. Springer, Berlin (2006)
3. Buzacott, J.A., Shanthikumar, J.G.: Stochastic Models for Manufacturing Systems. Prentice
Hall, New Jersey (1993)
4. Cinlar, E.: Introduction to Stochastic Processes. Prentice Hall Inc., Upper Saddle River (1975)
5. He, Q., Jewkes, E.M., Buzacott, J.A.: Analysis of the value of information used in inventory
control of an inventory production system. In: ABS and ACORS Conference. Dalhousie
University, Halifax (1999)
6. He, Q., Jewkes, E.M., Buzacott, J.A.: Performance measures of a make to order inventory-
production system. IIE Trans. 32, 409–419 (2000)
7. He, Q., Jewkes, E.M., Buzacott, J.A.: The system of information used in inventory control of a
make to order inventory-production system. IIE Trans. 34, 999–1013 (2002)
8. Kalpakam, S., Arivarignan, G.: Semi Markov models in inventory systems. Electron. J. Math.
Phys. Sci. 18(5), 1–17 (1984–1985)
9. Kim, E.: On the admission control and demand management in a two-station tandem
production system. J. Ind. Manag. Optim. 7(1), 1–8 (2011)
10. Krishnamoorthy, A., Narayanan, V.C.: Production inventory with service time and vacation to
the server. IMA J. Manag. Math. 22, 33–45 (2011)
11. Neuts, M.F.: Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach.
Dover Publication Inc., New York (1994)
12. Zipkin, P.H.: Foundations of Inventory Management. McGraw-Hill, New York (2000)
A Queueing System with Batch Renewal
Input and Negative Arrivals

U. C. Gupta, Nitin Kumar, and F. P. Barbhuiya

Abstract This paper studies an infinite buffer single server queueing model with
exponentially distributed service times and negative arrivals. The ordinary (positive)
customers arrive in batches of random size according to renewal arrival process,
and join the queue/server for service. The negative arrivals are characterized by
two independent Poisson arrival processes, a negative customer which removes
the positive customer undergoing service, if any, and a disaster which makes the
system empty by simultaneously removing all the positive customers present in
the system. Using the supplementary variable technique and difference equation
method we obtain explicit formulae for the steady-state distribution of the number
of positive customers in the system at pre-arrival and arbitrary epochs. Moreover,
we discuss the results of some special models with or without negative arrivals
along with their stability conditions. The results obtained throughout the analysis are
computationally tractable as illustrated by few numerical examples. Furthermore,
we discuss the impact of the negative arrivals on the performance of the system by
means of some graphical representations.

Keywords Batch arrival · Difference equation · Disasters · RCH · Renewal


process · Negative customers

1 Introduction

Since the pioneering work of Gelenbe [16] in the year 1989, queueing model with
negative arrivals (also termed as G-networks) have gained considerable attention.
A negative arrival causes the removal of one or more ordinary customer (also
called positive customer) from the system, and prevents it from getting served. In
the literature, negative arrivals are generally introduced by the name of “negative

U. C. Gupta () · N. Kumar · F. P. Barbhuiya


Department of Mathematics, Indian Institute of Technology Kharagpur, Kharagpur, India
e-mail: [email protected]


customers” and/or “disasters.” The arrival of a negative customer removes one


ordinary customer from the system, according to a definite killing strategy, i.e.,
RCH (Removal of Customer at the Head) or RCE (Removal of Customer at the
End). Under RCH killing discipline, the customer who is undergoing service gets
removed, while in case of RCE, the customer at the end of the queue is eliminated.
Meanwhile, the occurrence of a disaster simultaneously removes all the present
customers in the system thus making the system idle. Disasters are also known
by the terms catastrophic events (Barbhuiya et al. [7]), mass exodus (Chen and
Renshaw [12]) or queue flushing (Towsley and Tripathi [27]). Both, a negative
customer and a disaster have no impact on the system when it is empty. For further
references on different queueing models with negative arrivals the readers may refer
to the bibliography by Van Do [28].
Initially, M/M/1 queueing model with positive and negative customers was
studied by Harrison and Pitel [18]. They derived the Laplace transforms of the
sojourn time density under both RCH and RCE killing discipline. They further
extended their work to M/G/1 queue with negative arrivals and obtained the
generating function of the queue length probability distribution (see Harrison and
Pitel [19, 20]). Jain and Sigman [21] derived a Pollaczek–Khintchine formula for an
M/G/1 queue with disasters using preemptive LIFO discipline, whereas Boxma et
al. [8] considered the same model by assuming the disasters to occur in deterministic
equidistant times or at random times. The M/M/1 queue with negative arrivals
was first extended to the GI /M/1 queue by Yang and Chae [30], assuming the
occurrence of negative customers (under RCE killing discipline) and disasters.
Meanwhile, Abbas and Aïssani [1] investigated the strong stability conditions
of the embedded Markov chain for GI /M/1 queue with negative customers.
A discrete-time GI /G/1 queue with negative arrivals was considered by Zhou
[31] where he derived the probability generating function of actual service time
of ordinary customers. Recently, Chakravarthy [10] investigated a single server
catastrophic queueing model assuming the arrival process to be versatile Markovian
point process with phase type service time. All the work discussed till now was
studied under steady-state condition. Kumar and Arivudainambi [23] and Kumar
and Madheswari [24] obtained the transient solution of system size for the M/M/1
and M/M/2 queueing model with catastrophes, respectively. Following this, a time
dependent solution for the system size of M/M/c queue with heterogeneous servers
and catastrophes was considered by Dharmaraja and Kumar [13]. A survey on
queueing models with interruptions due to various reasons such as catastrophes,
server breakdowns, etc. can be found in Krishnamoorthy et al. [22].
The papers referred to above study queueing models with negative arrivals of
one form or the other, under the assumption of single arrivals of positive customers.
But in many real-world scenarios, requests for service arrive in groups
of random size. For example, transmission of messages to the service station
occurs in the form of packets in batches, unfinished goods arrives in bulk into the
production systems for further processing. This gives us a practical motivation to
relax the assumption of single arrival and consider batch arrival of the positive
customers into the system. We study a continuous-time GI X /M/1 queue which

is influenced by negative customers (with RCH killing discipline) and disasters,


occurring independently of one another according to Poisson process. The arrival of
negative customers or disasters have no impact on the system when it is empty. We
first formulate the model using the supplementary variable technique and then apply
difference equation method to obtain the steady-state distribution of the number
of positive customers in the system at different epochs. In the literature, most of
the queueing models with negative arrivals are studied using the matrix geometric
(matrix analytic) method or the embedded Markov chain technique. However,
encouraged by some recent works (see Barbhuiya and Gupta [5, 6], Goswami
and Mund [17]), we try to implement the methodology based on supplementary
variable technique and difference equation method to study queueing model with
negative arrivals. The whole procedure involved is analytically tractable and easy
to implement, as we obtain explicit formulae of the system-content distribution at
pre-arrival and arbitrary epochs simultaneously, in terms of roots of the associated
characteristic equation and the corresponding constants. We discuss the stability
conditions along with some special cases of the model. We also present some
numerical results in order to illustrate the applicability of our theoretical work and
study the influence of different parameters on the system performance.
The queueing model described above may have possible use in computer commu-
nications and manufacturing systems (see Artalejo [3]). A real-world application can
be experienced within a network of computers, where a message affected with virus
often infects the whole system when it gets transferred from one node to another. A
signal which immediately removes the message and prevents further transmission
of it can be thought of as a negative customer. Moreover, a reset instruction in the
computer database may be considered as a disaster as it clears all the stored files
present in the system. In these systems, the stored files/data act as positive customers
whereas clearing operation plays the role of the negative arrivals (see Wang et al.
[29], Atencia and Moreno [4]).
The remaining portion of the paper is organized as follows. In Sect. 2 we give a
comprehensive description of the model under consideration. In Sect. 3 we perform
the steady-state analysis of the model and discuss the stability condition. We deduce
the results of some special cases of our model in Sect. 4 which is followed by some
illustrative numerical examples in Sect. 5. Finally, we give the concluding remarks
in Sect. 6.

2 Model Description

We consider an infinite buffer queueing model wherein customers (positive cus-


tomers) arrive into the system in batches and joins the queue. The arriving batch
size is a random variable X with probability mass function P (X = i) = gi ,
i = 1, 2, . . .. For theoretical analysis and numerical implementation we assume
that the maximum permissible size of the arriving batch is b, which also holds
true in many real-world circumstances. Consequently, the mean arriving batch size

Fig. 1 Pictorial representation of the GI X /M/1 queue with negative customer and disaster

is g = Σ_{i=1}^{b} i g_i and the probability generating function is G(z) = Σ_{i=1}^{b} g_i z^i.
The inter-arrival times T between the batches are independent and identically
distributed continuous random variables with probability density function (pdf)
a(t), distribution function A(t), the Laplace–Stieltjes transform (L.S.T) A∗ (s), and
the mean inter-arrival time λ−1 = a = −A∗(1)(0), where λ is the arrival rate of the
batches and A∗(1)(0) is the derivative of A∗ (s) evaluated at s = 0. The customers
are served individually by a single server and the service time follows exponential
distribution with parameter μ.
The system is affected by negative arrivals which is characterized by two
independent Poisson arrival processes namely, negative customers and disasters with
rate η and δ, respectively. The negative customer follows RCH killing discipline
and removes only the customer undergoing service, while the occurrence of a
disaster eliminates all the customers from the system. We further assume that the
negative customer or disaster have no impact on the system when it is empty. The
arrival process, service process, and the negative arrivals are independent of each
other. The model described above may be mathematically denoted by GI X /M/1
queue with negative customers and disasters. One may refer to Fig. 1 for a pictorial
representation of the model.

3 The Steady-State Analysis

In this section we analyze the model described in Sect. 2 in steady state. We first
formulate the governing equations of the system using supplementary variable
technique (SVT) by considering the remaining inter-arrival time of the next batch as
the supplementary variable. For this purpose, we denote the states of the system

N(t) and U (t), respectively, as the number of customers in the system and the
remaining inter-arrival time of the next batch, at time t. We further define

qn (u, t)du = P [N(t) = n, u < U (t) ≤ u + du], n ≥ 0, u ≥ 0,

and in steady state

p_n(u) = lim_{t→∞} q_n(u, t).

Relating the states of the system at two consecutive epochs t and t + Δt and using
the arguments of SVT, we obtain the following difference-differential equations in
steady state:

−(d/du) p₀(u) = (μ + η) p₁(u) + δ Σ_{k=1}^{∞} p_k(u),                              (1)

−(d/du) p_n(u) = −(μ + η + δ) p_n(u) + a(u) Σ_{i=1}^{min{n,b}} g_i p_{n−i}(0)
                 + (μ + η) p_{n+1}(u),  n ≥ 1.                                      (2)

Obtaining the steady-state solution directly from (1) and (2) is a rather difficult task.
Therefore, for further analysis we take the transform for which we define
p_n*(s) = ∫₀^∞ e^{−su} p_n(u) du  ⟹  p_n = p_n*(0) = ∫₀^∞ p_n(u) du,  n ≥ 0.

Multiplying (1) and (2) by e^{−su}, integrating with respect to u over 0 to ∞, and then
separating Eq. (2), we obtain the transformed equations as

−s p₀*(s) = (μ + η) p₁*(s) + δ Σ_{k=1}^{∞} p_k*(s) − p₀(0),                        (3)

(μ + η + δ − s) p_n*(s) = A*(s) Σ_{i=1}^{n} g_i p_{n−i}(0) + (μ + η) p_{n+1}*(s)
                          − p_n(0),  1 ≤ n ≤ b − 1,                                 (4)

(μ + η + δ − s) p_n*(s) = A*(s) Σ_{i=1}^{b} g_i p_{n−i}(0) + (μ + η) p_{n+1}*(s)
                          − p_n(0),  n ≥ b.                                         (5)

Adding (3)–(5) for all values of n, taking the limit s → 0, and using the normalizing
condition Σ_{n=0}^{∞} p_n = 1, we have

Σ_{n=0}^{∞} p_n(0) = 1/a = λ.                                                      (6)

The L.H.S. of Eq. (6) denotes the mean number of batches arriving into the system per
unit time such that the remaining inter-arrival time is 0, which is actually the arrival
rate λ. We now define p_n^− as the probability that the number of positive customers
in the system is n just before the arrival of a batch, i.e., at a pre-arrival epoch. Since
p_n^− is proportional to p_n(0) and Σ_{n=0}^{∞} p_n^− = 1, we have the relation between
p_n^− and p_n(0) as

p_n^− = p_n(0) / Σ_{k=0}^{∞} p_k(0) = p_n(0)/λ,  n ≥ 0.                            (7)

Based on the theory of difference equations we obtain the state probabilities at pre-
arrival (pn− ) and arbitrary (pn ) epochs in the following section.

3.1 Steady-State System-Content Distributions

We define the right shift operator D on the sequences of probabilities {p_n(0)} and
{p_n*(s)} as D p_n(0) = p_{n+1}(0) and D p_n*(s) = p_{n+1}*(s) for all n. Thus, (5) can be
rewritten in the form

(δ − s + (μ + η)(1 − D)) p_n*(s) = A*(s) (Σ_{i=1}^{b} g_i D^{b−i} − D^b) p_{n−b}(0),  n ≥ b.   (8)

Substituting s = δ + (μ + η)(1 − D) in (8), we get the following homogeneous
difference equation with constant coefficients:

(A*(δ + (μ + η)(1 − D)) Σ_{i=1}^{b} g_i D^{b−i} − D^b) p_n(0) = 0,  n ≥ 0.         (9)

The corresponding characteristic equation (c.e.) is

A*(δ + (μ + η)(1 − z)) Σ_{i=1}^{b} g_i z^{b−i} − z^b = 0,                          (10)

which has exactly b roots, denoted by r₁, r₂, . . . , r_b, inside the unit circle |z| = 1.
Thus the solution of (9) is of the form

p_n(0) = Σ_{i=1}^{b} c_i r_i^n,  n ≥ 0,                                             (11)

where c₁, c₂, . . . , c_b are the corresponding b arbitrary constants independent of n.
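
As an illustration of how the b roots inside the unit circle can be located in practice, the sketch below assumes exponentially distributed inter-arrival times, so that A*(s) = λ/(λ + s) and (10) becomes a polynomial equation of degree b + 1. The parameter values are those of the numerical example in Sect. 5, but the code itself is only an illustrative aid and not part of the authors' procedure.

```python
import numpy as np

lam, mu, eta, delta = 10.0, 10.0, 5.0, 2.0          # rates from the example in Sect. 5
g = {1: 0.2, 3: 0.4, 6: 0.3, 10: 0.1}               # batch-size p.m.f., b = 10
b = max(g)

# With A*(s) = lam/(lam + s), equation (10) becomes
#   lam * sum_i g_i z^(b-i) - (lam + delta + (mu+eta)(1 - z)) z^b = 0,
# a polynomial of degree b + 1 in z.
coeffs = np.zeros(b + 2)                 # coeffs[k] multiplies z^(b+1-k)
coeffs[0] = mu + eta                     # z^(b+1)
coeffs[1] = -(lam + delta + mu + eta)    # z^b
for i, gi in g.items():
    coeffs[1 + i] += lam * gi            # z^(b-i)

roots = np.roots(coeffs)
inside = roots[np.abs(roots) < 1.0 - 1e-10]
print("roots inside |z| = 1:", len(inside), "(should equal b =", b, ")")
print("largest modulus among them:", round(max(np.abs(inside)), 6))
# for this exponential case the largest root should be close to the limiting
# pre-arrival ratio reported in Table 1 of Sect. 5 (about 0.945)
```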


Now using (11) in (8), we have the following non-homogeneous difference equation:

(δ − s + (μ + η)(1 − D)) p_n*(s) = Σ_{j=1}^{b} c_j (A*(s) Σ_{i=1}^{b} g_i r_j^{−i} − 1) r_j^n,  n ≥ b.   (12)

The general solution of (12) is of the form

p_n*(s) = B (1 + (δ − s)/(μ + η))^n
          + Σ_{j=1}^{b} c_j [ (A*(s) G(r_j^{−1}) − 1) / (δ − s + (μ + η)(1 − r_j)) ] r_j^n,  n ≥ b,   (13)

where the first term on the R.H.S. of (13) is the solution corresponding to the
homogeneous equation of (12) for a fixed s, such that B is an arbitrary constant.
Meanwhile, the second term on the R.H.S. is a particular solution of (12). Taking the
limit as s → 0 and summing over n from b to ∞ in (13), we have
Σ_{n=b}^{∞} p_n*(0) = Σ_{n=b}^{∞} p_n ≤ 1. However, B Σ_{n=b}^{∞} (1 + (δ − s)/(μ + η))^n
tends to infinity as s → 0. Thus, to ensure the convergence of the solution we must
have B = 0, and (13) reduces to

p_n*(s) = Σ_{j=1}^{b} c_j [ (A*(s) G(r_j^{−1}) − 1) / (δ − s + (μ + η)(1 − r_j)) ] r_j^n,  n ≥ b.   (14)

We now find the conditions under which p_n*(s) satisfies (14) for 1 ≤ n ≤ b − 1 as
well. Thus, substituting the respective values in (4), we obtain

Σ_{j=1}^{b} c_j (Σ_{i=n+1}^{b} g_i r_j^{n−i}) = 0,  1 ≤ n ≤ b − 1,

which reduces to the following on using the condition g_b ≠ 0:

Σ_{j=1}^{b} c_j r_j^{n−b} = 0,  1 ≤ n ≤ b − 1.                                     (15)

Summing over n from 0 to ∞ in (11) and using relation (6), we obtain

λ = Σ_{i=1}^{b} c_i / (1 − r_i).                                                   (16)

One may note that (15) and (16) together constitute a system of b equations in b
unknowns, which can be solved to obtain the constants c_j for j = 1, 2, . . . , b. Once
the c_j's are obtained, the expression for p_n(0) given in (11) becomes completely
known and p_n*(s) is given by

p_n*(s) = Σ_{j=1}^{b} c_j [ (A*(s) G(r_j^{−1}) − 1) / (δ − s + (μ + η)(1 − r_j)) ] r_j^n,  n ≥ 1.   (17)

Now, using (7) and (17), the steady-state distributions of the number of positive
customers in the system at pre-arrival and arbitrary epochs are given by

p_n^− = (1/λ) Σ_{i=1}^{b} c_i r_i^n,  n ≥ 0,                                        (18)

p_n = p_n*(0) = Σ_{j=1}^{b} c_j [ (G(r_j^{−1}) − 1) / (δ + (μ + η)(1 − r_j)) ] r_j^n,  n ≥ 1,   (19)

p₀ = 1 − Σ_{n=1}^{∞} p_n = 1 − Σ_{j=1}^{b} [c_j r_j / (1 − r_j)] [ (G(r_j^{−1}) − 1) / (δ + (μ + η)(1 − r_j)) ].   (20)

This completes the analysis of the model under consideration. It may be noted that
the results derived so far are mainly expressed in terms of the roots of the c.e. (10)
lying inside the unit circle. It can be proved that δ > 0 is a sufficient condition for
the c.e. to have exactly b roots inside the unit circle (see Appendix), which ensures
the stability of the system. Or in other words, due to the occurrence of disasters
the system becomes empty and as a result the model under consideration always
remains stable.
Once the probability distributions are completely known, different characteristic
measures determining the performance of the system can be easily established. For
example, the average population sizes at pre-arrival (L⁻) and arbitrary (L) epochs
are given by L⁻ = Σ_{n=1}^{∞} n p_n^− and L = Σ_{n=1}^{∞} n p_n, respectively. That is,

L⁻ = (1/λ) Σ_{i=1}^{b} c_i r_i / (1 − r_i)²,
L  = Σ_{j=1}^{b} [c_j r_j / (1 − r_j)²] [ (G(r_j^{−1}) − 1) / (δ + (μ + η)(1 − r_j)) ].
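
Continuing the earlier sketch (still under the illustrative assumption of exponentially distributed inter-arrival times and the placeholder parameter values), the constants c_j follow from the linear system (15)–(16), after which (18)–(20) and the mean L are immediate. The output should be close to the GI = M column of Table 1 in Sect. 5.

```python
import numpy as np

lam, mu, eta, delta = 10.0, 10.0, 5.0, 2.0           # rates from the example in Sect. 5
g = {1: 0.2, 3: 0.4, 6: 0.3, 10: 0.1}                # batch-size p.m.f., b = 10
b = max(g)
G = lambda z: sum(gi * z**i for i, gi in g.items())  # p.g.f. of the batch size

# roots of the c.e. for exponential A* (cf. the previous sketch)
coeffs = np.zeros(b + 2)
coeffs[0] = mu + eta
coeffs[1] = -(lam + delta + mu + eta)
for i, gi in g.items():
    coeffs[1 + i] += lam * gi
r = np.roots(coeffs)
r = r[np.abs(r) < 1.0 - 1e-10]                       # the b roots inside |z| = 1

# constants c_1, ..., c_b from (15)-(16)
M = np.zeros((b, b), dtype=complex)
rhs = np.zeros(b, dtype=complex)
for row, n in enumerate(range(1, b)):                # (15): sum_j c_j r_j^(n-b) = 0
    M[row] = r ** (n - b)
M[b - 1] = 1.0 / (1.0 - r)                           # (16): sum_j c_j/(1-r_j) = lam
rhs[b - 1] = lam
c = np.linalg.solve(M, rhs)

# system-content distributions (18)-(20) and the mean L
w = c * (G(1.0 / r) - 1.0) / (delta + (mu + eta) * (1.0 - r))
p_pre = lambda n: (c @ r**n).real / lam              # p_n^-  (pre-arrival), n >= 0
p_arb = lambda n: (w @ r**n).real                    # p_n    (arbitrary), n >= 1
p0 = 1.0 - (w @ (r / (1.0 - r))).real
L = (w @ (r / (1.0 - r) ** 2)).real
print("p_0 =", round(p0, 6), " p_0^- =", round(p_pre(0), 6), " L =", round(L, 4))
```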

4 Special Cases

In this section we discuss a few special cases of the model by considering some
fixed values of the parameters. As a result our model reduces to some well-known
classical queueing models with or without negative arrivals.
Case 1: If η = 0 and δ = 0, i.e., negative customer or disaster does not occur
or their occurrence have no impact on the system, then our model reduces to
the classical GI X /M/1 queue. Consequently, the steady-state distributions of
the number of customers in the system at pre-arrival and arbitrary epochs can
be obtained directly from (18)–(20) by putting η = 0 and δ = 0, where r_j,
j = 1, 2, . . . , b, are the roots of the c.e. z^b − A*(μ − μz) Σ_{i=1}^{b} g_i z^{b−i} = 0
lying inside the unit circle, and then the corresponding arbitrary constants c_j,
j = 1, 2, . . . , b, can be obtained by solving the system of Eqs. (15) and (16).
Here it may be noted that λg < μ is the necessary and sufficient condition for
the stability of the system. This particular queueing model has been extensively
studied in the literature, both analytically and numerically, based on the use
of embedded Markov chain technique and roots method (see Chaudhry and
Templeton [11], Brière and Chaudhry [9], Easton et al. [14, 15]). However, the
present paper provides an alternative procedure for the solution of the model
which is theoretically tractable and easy to implement, as compared to the other
approaches.
Meanwhile, setting η = 0, δ = 0, g1 = 1, and gi = 0 for i ≥ 2 will give the
steady-state solution for GI /M/1 queue. The c.e. will have a single root inside
the unit circle (say r) under the condition λ < μ, and the corresponding arbitrary
constant can be obtained from (16) as c = λ(1 − r). It is followed by the system-
content distributions which can be obtained from (18)–(20).
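As a quick check of this GI/M/1 reduction (a routine verification, not reproduced from the paper), consider Poisson arrivals of single customers, so that A*(s) = λ/(λ + s). The c.e. z = A*(μ − μz) then becomes

z(λ + μ − μz) = λ  ⟺  μz² − (λ + μ)z + λ = 0  ⟺  (z − 1)(μz − λ) = 0,

so the unique root inside the unit circle is r = λ/μ = ρ (when λ < μ). With c = λ(1 − r), relation (18) gives p_n^− = (1/λ) c r^n = (1 − ρ)ρ^n, n ≥ 0, the familiar geometric queue-length distribution of the M/M/1 queue.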
Case 2: If δ = 0, i.e., the disaster does not play any role and the only negative
arrivals are the negative customers, then the model reduces to GI X /M/1 queue

with negative customers. The c.e. z^b − A*((μ + η) − (μ + η)z) Σ_{i=1}^{b} g_i z^{b−i} = 0
will have exactly b roots inside the unit circle under the necessary and
sufficient condition λg < μ + η. Equations (15) and (16) can be solved for the
arbitrary constants following which, the steady-state distributions of the number
of positive customers in the system can be obtained from (18)–(20). As discussed
in Case 1, the solution for GI /M/1 queue with negative customers (Yang and
Chae [30]) can be further derived by assuming g1 = 1 and gi = 0 for i ≥ 2.
Case 3: If η = 0, the system does not get affected by the negative customers
and our model reduces to GI X /M/1 queue with disaster. Due to the impact
of disasters, the system will always remain stable and hence the c.e. will have
exactly b roots inside the unit circle under the sufficient condition δ > 0.
The steady-state distributions can be derived from (18)–(20) after obtaining the
constants from (15) and (16). Similarly as before, the solution for GI /M/1 queue
with disasters (Park et al. [25]) can also be obtained.

5 Numerical Observation

In this section we demonstrate the analytical results obtained in Sect. 3 by some


numerical examples, which are represented in tabular and graphical form. The
results given in the table may be beneficial for other researchers who would like
to compare their results using some other methods in the near future.
Table 1 displays the steady-state distribution of the number of positive customers
in the system for Poisson (M) and deterministic (D) arrival processes. The
parameters chosen are λ = 10, μ = 10, η = 5, δ = 2, g1 = 0.2, g3 = 0.4, g6 = 0.3,
and g10 = 0.1. The last row of the table depicts the average system content at various
epochs. It is important to note that the system-content distributions in the 2nd and
3rd column are same due to the Poisson arrival process, which verifies the accuracy
of our analytical results. Meanwhile, for deterministic inter-arrival time distribution,
the L.S.T A∗ (s) is a transcendental function, which is approximated to a rational
function using the Padé(15, 15) approximation (see Akar and Arikan [2], Singh et al.
[26]). Another interesting trend can be observed in the 4th and 7th column of the
table. As n becomes larger, the ratio of the system-content distribution at pre-arrival
epoch converges to a particular value which is the largest real root (say rb ) of the
c.e. (10) lying inside the unit circle. This suggests that the limiting distributions at
the pre-arrival epoch can be approximated by the unique largest root of the c.e. as
p_n^− = (1/λ) c_b r_b^n.
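As a quick numerical check of this geometric tail behaviour, one can recompute the ratios directly from the tabulated pre-arrival probabilities; the following snippet uses only the values of Table 1 for GI = M at n = 200, . . . , 205.

```python
# pre-arrival probabilities for GI = M taken from Table 1 (n = 200, ..., 205)
p_pre = [0.00000060, 0.00000057, 0.00000054, 0.00000051, 0.00000048, 0.00000045]
ratios = [b / a for a, b in zip(p_pre, p_pre[1:])]
print([round(x, 4) for x in ratios])   # roughly constant near r_b ~ 0.945,
                                       # up to rounding of the table entries
```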
Figure 2 investigates the influence of η on L for different values of δ. As η
increase, L decreases for any value of δ, which is intuitive. Similarly, for a fixed η,
L decreases with increasing δ. However, as δ becomes too large (δ = 10), L seems
to attain a constant value irrespective of the values of η. A similar behavior can be
experienced on plotting L against μ for different values of δ, and consequently it is
omitted. Figure 3 depicts the impact of λ on L for different δ. Clearly, as λ increases
L increases for any δ. However, when λ is kept fixed along with other parameters,
L decreases significantly with the increase in δ.
Finally, in Figs. 4 and 5 we, respectively, illustrate the impact of δ and η on
L for different inter-arrival time distributions, namely, exponential (M), Erlang
(E4 ), and deterministic (D). It may be observed in Fig. 4 that for each inter-arrival
time distribution, L decreases as δ increases, which is obvious. However, for a
fixed δ, L is equal for all the three distributions. A possible explanation for this
phenomenon may be the frequent occurrence of disasters which removes all the
customers including the batch which has just arrived. The effect of inter-arrival time
distribution can be best understood from Fig. 5 as η increases. For higher values of
η, L decreases significantly. However, for exponential inter-arrival time distribution
L is greater, and decreases for Erlang followed by deterministic distribution. It may
be mentioned that in all the numerical results generated throughout this section,
the values of the parameters involved are not restricted to any condition except that
δ > 0, as the system with disaster is always stable.
Table 1 Steady-state distribution of the number of positive customers in the system at various
epochs for different inter-arrival time distributions

              GI = M                                        GI = D
  n     p_n^−         p_n           p_{n+1}^−/p_n^−    p_n^−         p_n           p_{n+1}^−/p_n^−
  0     0.20533567    0.20533567    0.15065676         0.23080160    0.12004016    0.15318913
  1     0.03093521    0.03093521    0.91498603         0.03535630    0.03653976    1.12629474
  2     0.02830528    0.02830528    1.65427828         0.03982161    0.03420904    1.01859822
  3     0.04682481    0.04682481    0.55001705         0.04056222    0.06060381    0.85997718
  4     0.02575445    0.02575445    1.23727398         0.03488259    0.02886916    1.20959058
  5     0.03186531    0.03186531    1.45536181         0.04219365    0.04113680    0.92958177
  6     0.04637555    0.04637555    0.55360053         0.03922244    0.05948070    0.75644328
  7     0.02567353    0.02567353    1.05065616         0.02966955    0.03095702    1.07090478
  ...   ...           ...           ...                ...           ...           ...
  200   0.00000060    0.00000060    0.94509121         0.00000009    0.00000010    0.93533903
  201   0.00000057    0.00000057    0.94509121         0.00000008    0.00000009    0.93533903
  202   0.00000054    0.00000054    0.94509121         0.00000008    0.00000009    0.93533903
  203   0.00000051    0.00000051    0.94509121         0.00000007    0.00000008    0.93533903
  204   0.00000048    0.00000048    0.94509121         0.00000007    0.00000008    0.93533903
  205   0.00000045    0.00000045    0.94509121         0.00000006    0.00000007    0.93533903
  ...   ...           ...           ...                ...           ...           ...
  Sum   1.00000000    1.00000000                       1.00000000    1.00000000
  Mean  15.04001756   15.04001756                      12.39890533   14.40030123


Fig. 2 Effect of η on L for various δ


Fig. 3 Effect of λ on L for various δ


Fig. 4 Effect of δ on L for various inter-arrival distributions




Fig. 5 Effect of η on L for various inter-arrival distributions

6 Concluding Remarks

In this paper, the steady-state analysis of a GI X /M/1 queueing model with negative
customers and disasters has been presented. We have derived the explicit closed-
form expressions of the distribution of the number of positive customers in the
system at pre-arrival and arbitrary epochs, in terms of roots of the associated
characteristic equation. The results of some classical queueing models with or
without negative arrivals have been discussed along with their stability conditions.
Additionally, through some numerical examples, we have investigated the influence
of negative customers and disasters on the performance characteristic of the system.
The methodology used in this paper is based on supplementary variable technique
and difference equation method which makes the analysis easily tractable, both
theoretically and computationally. The procedure developed throughout the analysis
can be utilized and further extended to study some more complicated models.

Appendix

Theorem 1 The c.e. A*(δ + (μ + η)(1 − z)) Σ_{i=1}^{b} g_i z^{b−i} − z^b = 0 has exactly
b roots inside the unit circle |z| = 1 subject to the condition δ > 0.

Proof Let us assume f₁(z) = −z^b and f₂(z) = A*(δ + (μ + η)(1 − z)) Σ_{i=1}^{b} g_i z^{b−i}
= K(z) Σ_{i=1}^{b} g_i z^{b−i}, where K(z) = A*(δ + (μ + η)(1 − z)). Since K(z) is an
analytic function, it can be written in the form K(z) = Σ_{i=0}^{∞} k_i z^i such that
k_i ≥ 0 for all i. Consider the circle |z| = 1 − ε, where ε > 0 is a sufficiently small
quantity. Now

|f₁(z)| = |z^b| = (1 − ε)^b = 1 − bε + o(ε),

|f₂(z)| = |K(z)| |Σ_{i=1}^{b} g_i z^{b−i}| ≤ K(|z|) Σ_{i=1}^{b} g_i |z|^{b−i} = K(1 − ε) Σ_{i=1}^{b} g_i (1 − ε)^{b−i}
        = A*(δ) − ε{A*(δ)(b − g) − (μ + η)A*^{(1)}(δ)} + o(ε)
        < 1 − bε + o(ε)

under the sufficient condition δ > 0. Thus, from Rouché's theorem, f₁(z) and
f₁(z) + f₂(z) have exactly the same number of zeros inside the unit circle, and hence
the theorem.

Asymptotic Analysis Methods for
Multi-Server Retrial Queueing Systems

Ekaterina Fedorova , Anatoly Nazarov , and Alexander Moiseev

Abstract In this paper, we consider a multi-server retrial queueing system of type M/M/N. We propose asymptotic methods for the analysis of the system under long delay and heavy load conditions. Application areas of each method are defined and numerical examples are given.

Keywords Retrial queues · Asymptotic analysis · Heavy load · Long delay

1 Introduction

Queueing theory is widely used for solving different practical problems in real
economic, technical, and social systems. There are two classes of queueing models:
systems with queues and loss systems. However, in real systems, there are situations
in which a queue is not identified explicitly, but also an arrival call is not lost if
it comes when all service devices are unavailable. Often a primary call does not
refuse to be serviced and performs repeated attempts after a random period of time.
Examples of this can be found in telecommunication systems, cellular networks,
and call centers [1, 21, 27, 32]. Thus, a new class of queueing system has appeared:
systems with repeated calls or retrial queueing systems.
The first papers regarding retrial queues were published in the middle of the
twentieth century by Wilkinson, Cohen, Elldin, and Gosztony [9, 13, 18, 35]. Most
were devoted to practical problems and the influence of repeated attempts on
telephone traffic, communication systems, etc. A comprehensive description and
detailed comparison of classical queueing systems and retrial queues was presented
by Falin and Artalejo in [5, 6, 15].
Nowadays, there are many papers devoted to retrial queueing systems. Scientists
from different countries have studied different types of retrial queues, developed
methods of their investigation, and solved practical and theoretical problems in this

E. Fedorova · A. Nazarov · A. Moiseev ()


Tomsk State University, Tomsk, Russian Federation


area. However, the majority of the studies are performed by matrix methods [10, 12,
17, 19] and involve further numerical analysis or computer simulation [6–8, 22, 28,
31].
Analytical results are obtained only for the simplest models, e.g., retrial queues with a stationary Poisson arrival process and an exponentially distributed service time (see [15]).
In this paper, we use the asymptotic analysis method developed by the Tomsk
scientific group for the study of different types of queueing systems and networks
(e.g., in [24, 29]). The principle of the method is a derivation of asymptotic equations
from systems of equations determining the model behavior and obtaining formulas
for asymptotic functions under some limit condition.
In the previous papers (e.g. [23, 26]), we applied the asymptotic analysis
method for single-server retrial queueing systems under heavy load and long delay
limit conditions. In addition, we proposed Gaussian, quasi-geometric, and gamma
approximation methods for the single-server retrial queues [16, 25]. Thus, in this
paper, we plan to generalize our results to a multi-server model.
Asymptotic and approximate methods are also offered in [3, 11, 15, 30, 36], etc.
The performance characteristics of retrial queues with Poisson arrival process under
heavy and light loads and long delay conditions are studied by [2, 4, 14, 33]. In
addition, the paper [34] is devoted to the “extreme” load of a retrial queue (when an
intensity of primary calls tends to infinity or zero).
The rest of the paper is organized as follows. In Sect. 2, the description of the
mathematical model of the retrial queue M/M/N is described and the stochastic
process of the system states is analyzed. In addition, we present the research
directions. In Sect. 3, we determine a limit condition of a long delay and prove
the theorem about the Gaussian form of the asymptotic characteristic function. In
Sect. 4, the retrial queue is studied under a limit condition of a heavy load and we
prove the theorem about the gamma distribution of the asymptotic characteristic
function. Section 5 is devoted to the hyper-gamma approximation as an improve-
ment of the gamma approximation. In Sect. 6, some numerical examples comparing the asymptotic distributions with exact ones (obtained via simulation) are presented, and conclusions about the application area of each method are made.

2 Mathematical Model and Problem Statement

Let us consider a multi-server retrial queueing system of type M/M/N. The system
structure is presented in Fig. 1. The arrival process is Poisson with a rate λ. There are
N servers with service times distributed exponentially with a rate μ. If a call arrives
when there is a free server, this call occupies it for the service. Otherwise, the call
goes to an orbit, where it stays for a random time distributed by the exponential law
with a rate σ . After the delay the call attempts to obtain the service. If there is a free
server, the call occupies it, otherwise the call instantly returns to the orbit.
Fig. 1 Retrial queueing system M/M/N

Let i(t) be the random process of the number of calls in the orbit and n(t) be the
random process which defines the servers block states as follows:


$$n(t) = \begin{cases} 0, & \text{if all servers are free at the moment } t,\\ 1, & \text{if 1 server is busy at the moment } t,\\ 2, & \text{if 2 servers are busy at the moment } t,\\ \;\vdots\\ N, & \text{if all servers are busy at the moment } t.\end{cases}$$

The aim of the research is to find the probability distribution of the number of
calls in the orbit.
The process i(t) is not Markovian, therefore we consider the two-dimensional
continuous-time Markov chain {n(t), i(t)}.
Denote the stationary probability distribution of the system states {n(t), i(t)} by
Pn (i) = P {n(t) = n, i(t) = i}, where n = 0 . . . N, i = 0 . . . ∞. The considered
process is Markovian, thus, the following system of Kolmogorov equations for Pn (i)
can be written


$$\begin{cases} -(\lambda + i\sigma)P_0(i) + \mu P_1(i) = 0,\\ -(\lambda + i\sigma + n\mu)P_n(i) + \lambda P_{n-1}(i) + (i+1)\sigma P_{n-1}(i+1) + (n+1)\mu P_{n+1}(i) = 0, \quad n = 1, \dots, N-1,\\ (i+1)\sigma P_{N-1}(i+1) - (\lambda + N\mu)P_N(i) + \lambda P_{N-1}(i) + \lambda P_N(i-1) = 0. \end{cases} \tag{1}$$
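System (1) has no known closed-form solution, but it can be solved numerically by truncating the orbit at a finite level. The following sketch is not taken from the paper: it assumes numpy is available, uses illustrative parameter values, builds the generator of the truncated Markov chain {n(t), i(t)}, and extracts the marginal distribution of the orbit size; this kind of "exact" reference distribution is what the asymptotic approximations of the later sections can be compared with.

```python
import numpy as np

# Numerical sketch: stationary distribution of system (1) with the orbit
# truncated at I_MAX (illustrative parameters; arrivals at the truncation
# boundary are simply dropped).
N, lam, mu, sigma = 3, 2.0, 1.0, 0.5
I_MAX = 200

idx = lambda n, i: n * (I_MAX + 1) + i        # state (n, i) -> row index
size = (N + 1) * (I_MAX + 1)
Q = np.zeros((size, size))

for n in range(N + 1):
    for i in range(I_MAX + 1):
        s = idx(n, i)
        if n < N:                             # arrival occupies a free server
            Q[s, idx(n + 1, i)] += lam
        elif i < I_MAX:                       # all servers busy: arrival joins the orbit
            Q[s, idx(N, i + 1)] += lam
        if n > 0:                             # service completion
            Q[s, idx(n - 1, i)] += n * mu
        if i > 0 and n < N:                   # successful retrial from the orbit
            Q[s, idx(n + 1, i - 1)] += i * sigma
np.fill_diagonal(Q, -Q.sum(axis=1))

# stationary distribution: pi Q = 0 with the normalization condition
A = np.vstack([Q.T, np.ones(size)])
bvec = np.zeros(size + 1); bvec[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, bvec, rcond=None)

P_orbit = pi.reshape(N + 1, I_MAX + 1).sum(axis=0)
print("mean orbit size:", np.dot(np.arange(I_MAX + 1), P_orbit))
```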

Let us introduce the partial characteristic functions




$$H_n(u) = \sum_{i=0}^{\infty} e^{jui} P_n(i), \tag{2}$$

where $j = \sqrt{-1}$ is the imaginary unit.
Substituting functions (2) into Eqs. (1), the following system of equations is
obtained:


$$\begin{cases} j\sigma H_0'(u) - \lambda H_0(u) + \mu H_1(u) = 0,\\ j\sigma H_n'(u) - j\sigma e^{-ju} H_{n-1}'(u) - (\lambda + n\mu)H_n(u) + \lambda H_{n-1}(u) + (n+1)\mu H_{n+1}(u) = 0, \quad n = 1, \dots, N-1,\\ -j\sigma e^{-ju} H_{N-1}'(u) - \left[\lambda\left(1 - e^{ju}\right) + N\mu\right] H_N(u) + \lambda H_{N-1}(u) = 0. \end{cases} \tag{3}$$

Analytical solutions of systems (1) and (3) are unknown in the scientific literature.
Therefore, we propose:
– to obtain an asymptotic solution of system (3) under a long delay limit condition
(σ → 0);
– to obtain an asymptotic solution of system (3) under a heavy load limit condition
(ρ → 1, where ρ = λ/(Nμ));
– to make conclusions about the application area of obtained asymptotic distribu-
tions using a comparison with computer simulation results.

3 Asymptotic Solution Under Long Delay

Considering the long delay limit condition (σ → 0), we formulate the following
theorem.
Theorem 1 The asymptotic partial characteristic function of the probability distri-
bution of the number of calls in the orbit for the retrial queueing system of M/M/N
type under the long delay condition σ → 0 has the form of a Gaussian distribution
$$H_n(u) = R_n \exp\left\{ ju\frac{\kappa_1}{\sigma} + \frac{(ju)^2}{2}\frac{\kappa_2}{\sigma} \right\}, \tag{4}$$

where
$$R_0 = \left[\sum_{n=0}^{N}\left(\frac{\kappa_1+\lambda}{\mu}\right)^n \frac{1}{n!}\right]^{-1}, \qquad R_n = R_0\left(\frac{\kappa_1+\lambda}{\mu}\right)^n \frac{1}{n!} \quad \text{for } n = 1, \dots, N,$$
$$\kappa_1 = \lambda\frac{R_N}{1-R_N}, \qquad \kappa_2 = \frac{\lambda R_N + (\lambda+\kappa_1)\varphi_N}{1 - R_N - (\lambda+\kappa_1)g_N},$$
$\varphi_N$ and $g_N$ are defined by the following systems of equations, respectively,
$$\begin{cases} -(\kappa_1+\lambda)g_0 + \mu g_1 = R_0,\\ -(\kappa_1+\lambda+n\mu)g_n + (\kappa_1+\lambda)g_{n-1} + (n+1)\mu g_{n+1} = R_n - R_{n-1}, \quad n = 1, \dots, N-1,\\ -N\mu g_N + (\kappa_1+\lambda)g_{N-1} = -R_{N-1},\\ \sum_{n=0}^{N} g_n = 0, \end{cases}$$
$$\begin{cases} -(\kappa_1+\lambda)\varphi_0 + \mu \varphi_1 = 0,\\ -(\kappa_1+\lambda+n\mu)\varphi_n + (\kappa_1+\lambda)\varphi_{n-1} + (n+1)\mu \varphi_{n+1} = \kappa_1 R_{n-1}, \quad n = 1, \dots, N-1,\\ -N\mu \varphi_N + (\kappa_1+\lambda)\varphi_{N-1} = -\lambda R_N + \kappa_1 R_{N-1},\\ \sum_{n=0}^{N} \varphi_n = 0. \end{cases}$$

To prove the theorem, we obtain the first- and the second-order asymptotic
functions for the solution of system (3).

3.1 First-Order Asymptotics

Denoting σ = ε, we use the substitutions

u = εw, Hn (u) = Fn (w, ε) for n = 0, . . . , N.

Then system (3) can be rewritten as follows:


$$\begin{cases} j\dfrac{\partial F_0(w,\varepsilon)}{\partial w} - \lambda F_0(w,\varepsilon) + \mu F_1(w,\varepsilon) = 0,\\[4pt] j\dfrac{\partial F_n(w,\varepsilon)}{\partial w} - j e^{-j\varepsilon w}\dfrac{\partial F_{n-1}(w,\varepsilon)}{\partial w} - (\lambda + n\mu)F_n(w,\varepsilon) + \lambda F_{n-1}(w,\varepsilon) + (n+1)\mu F_{n+1}(w,\varepsilon) = 0, \quad n = 1, \dots, N-1,\\[4pt] -j e^{-j\varepsilon w}\dfrac{\partial F_{N-1}(w,\varepsilon)}{\partial w} - \left[\lambda\left(1-e^{j\varepsilon w}\right) + N\mu\right]F_N(w,\varepsilon) + \lambda F_{N-1}(w,\varepsilon) = 0. \end{cases} \tag{5}$$

Denote $F_n(w) = \lim_{\varepsilon\to 0} F_n(w,\varepsilon)$. From Eqs. (5), we derive the following system:
$$\begin{cases} jF_0'(w) - \lambda F_0(w) + \mu F_1(w) = 0,\\ jF_n'(w) - jF_{n-1}'(w) - (\lambda+n\mu)F_n(w) + \lambda F_{n-1}(w) + (n+1)\mu F_{n+1}(w) = 0, \quad n = 1, \dots, N-1,\\ -jF_{N-1}'(w) - N\mu F_N(w) + \lambda F_{N-1}(w) = 0. \end{cases} \tag{6}$$

The solution Fn (w) of system (6) is written as

Fn (w) = Rn exp{j wκ1 }, (7)

where $R_n$ is the asymptotic probability of $n$ busy servers and $\kappa_1$ is the asymptotic mean of the number of calls in the orbit.
Substituting expression (7) into Eqs. (6), the system for $R_n$ is obtained
$$\begin{cases} -(\kappa_1+\lambda)R_0 + \mu R_1 = 0,\\ (\lambda+\kappa_1)R_{n-1} - (\kappa_1+\lambda+n\mu)R_n + (n+1)\mu R_{n+1} = 0,\\ (\lambda+\kappa_1)R_{N-1} - N\mu R_N = 0. \end{cases} \tag{8}$$

Obviously, the solution of system (8) has the form of a (discrete) Erlang distribution
$$R_n = R_0\left(\frac{\kappa_1+\lambda}{\mu}\right)^n \frac{1}{n!}, \qquad R_0 = \left[\sum_{n=0}^{N}\left(\frac{\kappa_1+\lambda}{\mu}\right)^n \frac{1}{n!}\right]^{-1}. \tag{9}$$

Let us find the parameter κ1 . We sum all equations of system (5):


$$j\frac{\partial}{\partial w}\left[\sum_{n=0}^{N-1} F_n(w,\varepsilon)\right] + \lambda e^{j\varepsilon w} F_N(w,\varepsilon) = 0.$$
Suppose $\varepsilon \to 0$ and substitute expression (7); then the following equation is obtained:
$$\sum_{n=0}^{N-1}\left(-\kappa_1 R_n\right) + \lambda R_N = 0.$$
Taking into account the normalization requirement $\sum_{n=0}^{N} R_n = 1$, we have
$$-\kappa_1(1 - R_N) + \lambda R_N = 0.$$
Thus, $\kappa_1$ is the solution of the following equation:
$$\kappa_1 = \lambda\frac{R_N}{1-R_N}, \tag{10}$$
where $R_N$ is the component of the solution of system (8) that depends on $\kappa_1$.


Denoting $x = \dfrac{\kappa_1+\lambda}{\mu}$, it is easy to show that $x$ is the solution of the following equation:
$$\sum_{n=0}^{N} \frac{\mu n - \lambda}{n!}\, x^n = 0,$$
which has no more than one positive root $x_1$. Thus, $\kappa_1$ is equal to
$$\kappa_1 = \mu x_1 - \lambda. \tag{11}$$

In this way, we obtain the probabilities Rn and the mean κ1 for expression (7).
The function Fn (w) is called the first-order asymptotic function.
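For illustration, the first-order quantities above are straightforward to compute. The following sketch (not from the paper; scipy is assumed available and the parameter values are illustrative) solves the polynomial equation for x1, evaluates κ1, and builds the probabilities Rn, checking consistency with (10).

```python
import math
import numpy as np
from scipy.optimize import brentq

# First-order asymptotics sketch: positive root x1 of
# sum_{n=0}^{N} (mu*n - lam) x^n / n! = 0, then kappa1 = mu*x1 - lam and
# the Erlang-type probabilities R_n of (9). Parameters are illustrative.
N, lam, mu = 10, 8.0, 1.0                     # load rho = lam/(N*mu) = 0.8

coef = [(mu * n - lam) / math.factorial(n) for n in range(N + 1)]
f = lambda x: sum(c * x**n for n, c in enumerate(coef))

x1 = brentq(f, 1e-9, 1000.0)                  # the single positive root
kappa1 = mu * x1 - lam

R = np.array([x1**n / math.factorial(n) for n in range(N + 1)])
R /= R.sum()                                  # R_n = R_0 * x1^n / n!

print("x1 =", round(x1, 6), " kappa1 =", round(kappa1, 6))
print("consistency with (10):", round(lam * R[-1] / (1 - R[-1]), 6))
```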

3.2 Second-Order Asymptotics

For more detailed analysis, the second-order asymptotics is derived. First, applying
results of the first-order asymptotics, we use the following substitution:
$$H_n(u) = H_n^{(2)}(u)\exp\left\{ju\frac{\kappa_1}{\sigma}\right\}. \tag{12}$$

From system (3), the following system of equations for $H_n^{(2)}(u)$ is obtained:
$$\begin{cases} j\sigma H_0^{(2)\prime}(u) - (\kappa_1+\lambda)H_0^{(2)}(u) + \mu H_1^{(2)}(u) = 0,\\ j\sigma H_n^{(2)\prime}(u) - j\sigma e^{-ju} H_{n-1}^{(2)\prime}(u) - (\kappa_1+\lambda+n\mu)H_n^{(2)}(u) + \left(\kappa_1 e^{-ju}+\lambda\right)H_{n-1}^{(2)}(u) + (n+1)\mu H_{n+1}^{(2)}(u) = 0, \quad n = 1, \dots, N-1,\\ -j\sigma e^{-ju} H_{N-1}^{(2)\prime}(u) - \left[\lambda\left(1-e^{ju}\right) + N\mu\right]H_N^{(2)}(u) + \left(\kappa_1 e^{-ju}+\lambda\right)H_{N-1}^{(2)}(u) = 0. \end{cases} \tag{13}$$
We use the notation

$$\sigma = \varepsilon^2, \quad u = \varepsilon w, \quad H_n^{(2)}(u) = F_n(w,\varepsilon) \quad \text{for } n = 0, \dots, N. \tag{14}$$

Note here that the notation of the parameter ε and functions Fn (w, ε) differ from
those in the previous subsection. We use the same symbols for brevity.

Substituting (14) into system (13), we obtain
$$\begin{cases} j\varepsilon\dfrac{\partial F_0(w,\varepsilon)}{\partial w} - (\kappa_1+\lambda)F_0(w,\varepsilon) + \mu F_1(w,\varepsilon) = 0,\\[4pt] j\varepsilon\dfrac{\partial F_n(w,\varepsilon)}{\partial w} - j\varepsilon e^{-j\varepsilon w}\dfrac{\partial F_{n-1}(w,\varepsilon)}{\partial w} - (\kappa_1+\lambda+n\mu)F_n(w,\varepsilon) + \left(\kappa_1 e^{-j\varepsilon w}+\lambda\right)F_{n-1}(w,\varepsilon) + (n+1)\mu F_{n+1}(w,\varepsilon) = 0, \quad n = 1, \dots, N-1,\\[4pt] -j\varepsilon e^{-j\varepsilon w}\dfrac{\partial F_{N-1}(w,\varepsilon)}{\partial w} - \left[\lambda\left(1-e^{j\varepsilon w}\right) + N\mu\right]F_N(w,\varepsilon) + \left(\kappa_1 e^{-j\varepsilon w}+\lambda\right)F_{N-1}(w,\varepsilon) = 0. \end{cases} \tag{15}$$

Let us solve system (15) in three steps.

Step 1 Denote $F_n(w) = \lim_{\varepsilon\to 0} F_n(w,\varepsilon)$. From Eqs. (15), we obtain the following system for the functions $F_n(w)$:
$$\begin{cases} -(\kappa_1+\lambda)F_0(w) + \mu F_1(w) = 0,\\ -(\kappa_1+\lambda+n\mu)F_n(w) + (\kappa_1+\lambda)F_{n-1}(w) + (n+1)\mu F_{n+1}(w) = 0, \quad n = 1, \dots, N-1,\\ -N\mu F_N(w) + (\kappa_1+\lambda)F_{N-1}(w) = 0. \end{cases}$$

Suppose the solution of this system has the form

Fn (w) = Φ(w)Rn , (16)

where Rn has the same meaning as in the previous subsection and it can be
calculated using expressions (9) and (11).
Step 2 Consider the following expansion of the function Fn (w, ε)

Fn (w, ε) = Φ(w) (Rn + j εwfn ) + O(ε2 ), (17)

where fn is unknown.
Let the function $\Phi(w)$ have the following form:
$$\Phi(w) = \exp\left\{\kappa_2\frac{(jw)^2}{2}\right\}. \tag{18}$$
Substituting (17) and (18) into system (15), we derive the system for $f_n$ as follows:
$$\begin{cases} -(\kappa_1+\lambda)f_0 + \mu f_1 = \kappa_2 R_0,\\ -(\kappa_1+\lambda+n\mu)f_n + (\kappa_1+\lambda)f_{n-1} + (n+1)\mu f_{n+1} = \kappa_1 R_{n-1} + \kappa_2 R_n - \kappa_2 R_{n-1}, \quad n = 1, \dots, N-1,\\ -N\mu f_N + (\kappa_1+\lambda)f_{N-1} = -\lambda R_N + \kappa_1 R_{N-1} - \kappa_2 R_{N-1}. \end{cases} \tag{19}$$

Note that the determinant of the system matrix is equal to zero and ranks of
the system matrix and the extended matrix are equal. In this way, there are many
solutions to system (19), and the general solution has the form

fn = CRn + fn0 , (20)

where $f_n^0$ is a particular solution satisfying some additional condition, for example, $\sum_{n=0}^{N} f_n^0 = 0$.
Let us write the particular solution $f_n^0$ as follows:
$$f_n^0 = \kappa_2 g_n + \varphi_n. \tag{21}$$

Substituting (21) into (19), we obtain the following system for $g_n$:
$$\begin{cases} -(\kappa_1+\lambda)g_0 + \mu g_1 = R_0,\\ -(\kappa_1+\lambda+n\mu)g_n + (\kappa_1+\lambda)g_{n-1} + (n+1)\mu g_{n+1} = R_n - R_{n-1}, \end{cases} \tag{22}$$
$$-N\mu g_N + (\kappa_1+\lambda)g_{N-1} = -R_{N-1},$$
with the additional condition
$$\sum_{n=0}^{N} g_n = 0, \tag{23}$$
and the following system of equations for $\varphi_n$:
$$\begin{cases} -(\kappa_1+\lambda)\varphi_0 + \mu \varphi_1 = 0,\\ -(\kappa_1+\lambda+n\mu)\varphi_n + (\kappa_1+\lambda)\varphi_{n-1} + (n+1)\mu \varphi_{n+1} = \kappa_1 R_{n-1},\\ -N\mu \varphi_N + (\kappa_1+\lambda)\varphi_{N-1} = -\lambda R_N + \kappa_1 R_{N-1}, \end{cases} \tag{24}$$
with the additional condition
$$\sum_{n=0}^{N} \varphi_n = 0. \tag{25}$$

Obviously, systems (22)–(23) and (24)–(25) have unique solutions.


Step 3 Summing up all equations of system (15), we obtain
$$\kappa_1\sum_{n=0}^{N-1} F_n(w,\varepsilon) - \lambda e^{j\varepsilon w} F_N(w,\varepsilon) - j\varepsilon\sum_{n=0}^{N-1}\frac{\partial F_n(w,\varepsilon)}{\partial w} = 0. \tag{26}$$

Using expansions (17) and taking into account $e^{j\varepsilon w} = 1 + j\varepsilon w + O(\varepsilon^2)$, formula (26) can be rewritten as
$$\kappa_1\, j\varepsilon w\sum_{n=0}^{N-1} f_n - \lambda\, j\varepsilon w\, R_N - \lambda\, j\varepsilon w\, f_N - j\varepsilon\frac{\Phi'(w)}{\Phi(w)}(1-R_N) = O(\varepsilon^2).$$
Substituting expression (18), we obtain
$$\kappa_1\sum_{n=0}^{N-1} f_n - \lambda f_N - \lambda R_N + \kappa_2(1-R_N) = 0.$$
Taking into account formula (20), we have
$$C\left[\kappa_1\sum_{n=0}^{N-1} R_n - \lambda R_N\right] + \kappa_1\sum_{n=0}^{N-1} f_n^0 - \lambda f_N^0 - \lambda R_N + \kappa_2(1-R_N) = 0.$$
Using (10), we obtain the following expression:
$$\kappa_2(1-R_N) = \lambda R_N + \lambda f_N^0 - \kappa_1\sum_{n=0}^{N-1} f_n^0,$$
or
$$\kappa_2(1-R_N) = \lambda R_N + (\lambda+\kappa_1)f_N^0 = \lambda R_N + (\lambda+\kappa_1)\left(\kappa_2 g_N + \varphi_N\right),$$
which does not depend on the constant $C$.
Finally, we obtain $\kappa_2$ as follows:
$$\kappa_2 = \frac{\lambda R_N + (\lambda+\kappa_1)\varphi_N}{1 - R_N - (\lambda+\kappa_1)g_N}. \tag{27}$$

Thus, we define the function $F_n(w)$, which is called the second-order asymptotic function.
Taking into account the expressions $\varepsilon = \sqrt{\sigma}$, $w = u/\varepsilon = u/\sqrt{\sigma}$ and Eqs. (14) and (16), the approximation can be written as follows:
$$H_n^{(2)}(u) = R_n\exp\left\{\frac{(ju)^2}{2}\frac{\kappa_2}{\sigma}\right\}.$$
Turning back to formula (12), we obtain the following asymptotic expression for the partial characteristic functions:
$$H_n(u) = R_n\exp\left\{ju\frac{\kappa_1}{\sigma} + \frac{(ju)^2}{2}\frac{\kappa_2}{\sigma}\right\},$$

where κ1 and κ2 are defined by expressions (11) and (27), respectively, and the
probability distribution Rn is defined by system (8).
This completes the proof.
Thus, we have proved that under the long delay condition the probability
distribution P (i) of the number of calls in the orbit has a Gaussian approximation
with parameters κ1 /σ and κ2 /σ .
Finally, denote the cumulative distribution function of the Gaussian distribution by $G(x)$; then the discrete probability distribution $P(i)$ of the process under study can be approximated as follows:
$$P(i) = \frac{G(i+1) - G(i)}{1 - G(0)}.$$
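The whole Gaussian approximation can be assembled numerically. The sketch below is an illustration only (not code from the paper): it solves the singular systems (22)–(23) and (24)–(25) by appending the zero-sum condition and using least squares, evaluates κ2 from (27), and builds the approximate distribution P(i); parameter values are illustrative and numpy/scipy are assumed available.

```python
import math
import numpy as np
from scipy.stats import norm

# Gaussian approximation of Theorem 1 (illustrative parameters).
N, lam, mu, sigma = 10, 8.0, 1.0, 0.1

# kappa1 and R_n as in the first-order step
coef = [(mu * n - lam) / math.factorial(n) for n in range(N + 1)]
x1 = [r.real for r in np.roots(coef[::-1]) if abs(r.imag) < 1e-9 and r.real > 0][0]
kappa1 = mu * x1 - lam
R = np.array([x1**n / math.factorial(n) for n in range(N + 1)])
R /= R.sum()

# common matrix of systems (22) and (24)
a = kappa1 + lam
M = np.zeros((N + 1, N + 1))
M[0, 0], M[0, 1] = -a, mu
for n in range(1, N):
    M[n, n - 1], M[n, n], M[n, n + 1] = a, -(a + n * mu), (n + 1) * mu
M[N, N - 1], M[N, N] = a, -N * mu

rhs_g = np.concatenate(([R[0]], R[1:N] - R[0:N - 1], [-R[N - 1]]))
rhs_phi = np.concatenate(([0.0], kappa1 * R[0:N - 1], [-lam * R[N] + kappa1 * R[N - 1]]))

A = np.vstack([M, np.ones(N + 1)])            # append the zero-sum condition (23)/(25)
g = np.linalg.lstsq(A, np.append(rhs_g, 0.0), rcond=None)[0]
phi = np.linalg.lstsq(A, np.append(rhs_phi, 0.0), rcond=None)[0]

kappa2 = (lam * R[N] + a * phi[N]) / (1 - R[N] - a * g[N])

# Gaussian approximation of the orbit-size distribution
G = norm(loc=kappa1 / sigma, scale=math.sqrt(kappa2 / sigma)).cdf
i = np.arange(0, 200)
P = (G(i + 1) - G(i)) / (1 - G(0))
print("kappa1/sigma =", kappa1 / sigma, " kappa2/sigma =", kappa2 / sigma)
print("P(i) sums to", P.sum())
```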

4 Asymptotic Solution Under Heavy Load

Denote the system load by ρ = λ/(Nμ). The stationary regime of the retrial queue
exists if ρ < 1. Let us consider the system under the limit condition of the heavy
load ρ → 1 or ε = 1 − ρ → 0.
Theorem 2 The asymptotic characteristic function of the probability distribution
of the number of calls in the orbit in the retrial queueing system of M/M/N type
under the heavy load condition ρ → 1 has a gamma distribution of the form
$$h(u) = \left(1 - \frac{ju}{\beta}\right)^{-\alpha} \tag{28}$$
with the shape parameter $\alpha = \dfrac{\mu+\sigma}{\sigma}$ and the inverse scale parameter $\beta = 1 - \rho$.
To prove the theorem, we introduce the following notation:
$$\lambda = (1-\varepsilon)N\mu, \quad u = \varepsilon w, \quad H_n(u) = \varepsilon^{N-n} F_n(w,\varepsilon) \quad \text{for } n = 0, \dots, N. \tag{29}$$

Substitute expressions (29) into system (3):
$$\begin{cases} j\sigma\varepsilon^{N-1}\dfrac{\partial F_0(w,\varepsilon)}{\partial w} - (1-\varepsilon)N\mu\varepsilon^{N} F_0(w,\varepsilon) + \mu\varepsilon^{N-1} F_1(w,\varepsilon) = 0,\\[4pt] j\sigma\varepsilon^{N-n-1}\dfrac{\partial F_n(w,\varepsilon)}{\partial w} - j\sigma e^{-j\varepsilon w}\varepsilon^{N-n}\dfrac{\partial F_{n-1}(w,\varepsilon)}{\partial w} - \left[(1-\varepsilon)N\mu + n\mu\right]\varepsilon^{N-n} F_n(w,\varepsilon) + (1-\varepsilon)N\mu\varepsilon^{N-n+1} F_{n-1}(w,\varepsilon) + (n+1)\mu\varepsilon^{N-n-1} F_{n+1}(w,\varepsilon) = 0, \quad n = 1, \dots, N-1,\\[4pt] (1-\varepsilon)N\mu\varepsilon F_{N-1}(w,\varepsilon) - j\sigma e^{-j\varepsilon w}\dfrac{\partial F_{N-1}(w,\varepsilon)}{\partial w} - \left[(1-\varepsilon)\left(1-e^{j\varepsilon w}\right)N\mu + N\mu\right] F_N(w,\varepsilon) = 0. \end{cases} \tag{30}$$
After some transformations, system (30) can be rewritten as follows:
$$\begin{cases} j\sigma\dfrac{\partial F_0(w,\varepsilon)}{\partial w} - (1-\varepsilon)N\mu\varepsilon F_0(w,\varepsilon) + \mu F_1(w,\varepsilon) = 0,\\[4pt] j\sigma\dfrac{\partial F_n(w,\varepsilon)}{\partial w} - j\sigma e^{-j\varepsilon w}\varepsilon\dfrac{\partial F_{n-1}(w,\varepsilon)}{\partial w} - \left[(1-\varepsilon)N\mu + n\mu\right]\varepsilon F_n(w,\varepsilon) + (1-\varepsilon)N\mu\varepsilon^{2} F_{n-1}(w,\varepsilon) + (n+1)\mu F_{n+1}(w,\varepsilon) = 0, \quad n = 1, \dots, N-1,\\[4pt] (1-\varepsilon)N\mu\varepsilon F_{N-1}(w,\varepsilon) - j\sigma e^{-j\varepsilon w}\dfrac{\partial F_{N-1}(w,\varepsilon)}{\partial w} - \left[(1-\varepsilon)\left(1-e^{j\varepsilon w}\right)N\mu + N\mu\right] F_N(w,\varepsilon) = 0. \end{cases} \tag{31}$$
Suppose the solution of system (31) has the form

Fn (w, ε) = Fn (w) + εfn (w) + O(ε2 ), (32)

where $F_n(w) = \lim_{\varepsilon\to 0} F_n(w,\varepsilon)$.
Substituting expression (32) into system (31), and writing equalities for terms with equal powers of $\varepsilon$, we obtain the following system for $F_n(w)$:
$$\begin{cases} j\sigma F_0'(w) + \mu F_1(w) = 0,\\ j\sigma F_n'(w) + (n+1)\mu F_{n+1}(w) = 0, \quad n = 1, \dots, N-1,\\ j\sigma F_{N-1}'(w) + N\mu F_N(w) = 0, \end{cases} \tag{33}$$

and the following system for $f_n(w)$:
$$\begin{cases} j\sigma f_0'(w) + \mu f_1(w) = N\mu F_0(w),\\ j\sigma f_n'(w) + (n+1)\mu f_{n+1}(w) = N\mu F_n(w), \quad n = 1, \dots, N-1,\\ j\sigma f_{N-1}'(w) + N\mu f_N(w) = N\mu F_{N-1}(w). \end{cases} \tag{34}$$

Summing up all equations of system (31), we obtain the additional expression
$$j\sigma\frac{\partial}{\partial w}\left[\frac{1}{\varepsilon}\sum_{n=0}^{N-1}\varepsilon^{N-n} F_n(w,\varepsilon)\right] + (1-\varepsilon)e^{j\varepsilon w}N\mu F_N(w,\varepsilon) = 0.$$
Taking into account expression (32) and making some transformations, we get
$$j\sigma f_{N-1}'(w) + j\sigma F_{N-2}'(w) - (1-jw)N\mu F_N(w) + N\mu f_N(w) = 0.$$
Using formulas (33) and (34), we obtain
$$F_{N-1}(w) - (1-jw)N F_N(w) = 0. \tag{35}$$

Let us differentiate equation (35) and take into account the last equation of system (33):
$$(\mu+\sigma)F_N(w) + j\sigma(1-jw)F_N'(w) = 0.$$
Clearly, the solution has the form
$$F_N(w) = C(1-jw)^{-\frac{\mu+\sigma}{\sigma}}.$$

Using the inverse expressions for substitutions (29), we obtain
$$F_N(w) = F_N\left(\frac{u}{\varepsilon}\right) = F_N\left(\frac{u}{1-\rho}\right) = C\left(1 - \frac{ju}{1-\rho}\right)^{-\frac{\mu+\sigma}{\sigma}}.$$

It is easy to show that $C = 1$ owing to the normalization requirement. Thus, the asymptotic characteristic function of the probability distribution of the number of calls in the orbit $h(u)$ has the form of the gamma distribution characteristic function
$$h(u) = \left(1 - \frac{ju}{\beta}\right)^{-\alpha}$$
with the shape parameter $\alpha = \dfrac{\mu+\sigma}{\sigma}$ and the inverse scale parameter $\beta = 1 - \rho$.
This completes the proof.
Denoting the probability distribution function of the gamma distribution by
Γ (x), the discrete probability distribution of the number of calls in the orbit P (i)
can be approximated as follows:

P (i) = Γ (i + 1) − Γ (i).
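For example, the gamma approximation above can be evaluated in a few lines; the sketch below uses scipy's gamma distribution (whose scale parameter is the reciprocal of the inverse scale β) and purely illustrative parameter values.

```python
import numpy as np
from scipy.stats import gamma

# Heavy-load gamma approximation (28): shape alpha = (mu+sigma)/sigma,
# inverse scale beta = 1 - rho, and P(i) = Gamma(i+1) - Gamma(i).
N, mu, sigma, rho = 10, 1.0, 1.0, 0.97        # illustrative values
alpha, beta = (mu + sigma) / sigma, 1.0 - rho

Gcdf = gamma(a=alpha, scale=1.0 / beta).cdf   # scipy uses scale = 1/beta
i = np.arange(0, 500)
P = Gcdf(i + 1) - Gcdf(i)
print("approximate mean orbit size:", np.dot(i, P), " (alpha/beta =", alpha / beta, ")")
```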

Moreover, during the study, we obtained the understandable conclusion that only
the function FN (w) (it describes the situation when all servers are busy) is signif-
icant under the heavy load condition. This allows us to obtain the approximation
of the probability distribution of the number of calls in the orbit for more general
retrial queueing systems.

5 Hyper-Gamma Approximation

During the proof of Theorem 2, we obtained the function $F_{N-1}(w)$ (see formula (35)). Thus, we propose to apply the following approximation:
$$h^*(u) = F_N(w) + \varepsilon F_{N-1}(w) = F_N\left(\frac{u}{1-\rho}\right) + (1-\rho)F_{N-1}\left(\frac{u}{1-\rho}\right). \tag{36}$$

Consider the functions $F_N(w)$ and $F_{N-1}(w)$. In Sect. 4, it is shown that
$$F_N(w) = C(1-jw)^{-\frac{\mu}{\sigma}-1}.$$
In addition, from Eq. (35) we have
$$F_{N-1}(w) = \frac{C}{N}(1-jw)^{-\frac{\mu}{\sigma}}.$$
Substituting these expressions into (36) at $u = 0$ gives
$$h^*(0) = F_N(0) + (1-\rho)F_{N-1}(0),$$
and taking into account the normalization equality $h^*(0) \equiv 1$,
$$C + (1-\rho)\frac{C}{N} = 1.$$
Therefore, we obtain $C = \dfrac{N}{1-\rho+N}$. Then Eq. (36) can be rewritten as follows:
$$h^*(u) = \frac{N}{1-\rho+N}\left(1 - \frac{ju}{\beta}\right)^{-\alpha} + \frac{1-\rho}{1-\rho+N}\left(1 - \frac{ju}{\beta}\right)^{-\alpha+1},$$
where $\alpha = \dfrac{\mu}{\sigma} + 1$ and $\beta = 1 - \rho$.

We use the notation
$$\Gamma_N(u) = \left(1 - \frac{ju}{\beta}\right)^{-\alpha}, \qquad \Gamma_{N-1}(u) = \left(1 - \frac{ju}{\beta}\right)^{-\alpha+1}, \qquad q = \frac{N}{1-\rho+N}.$$
Thus, we have the final expression
$$h^*(u) = q\,\Gamma_N(u) + (1-q)\,\Gamma_{N-1}(u). \tag{37}$$

Distribution (37) is called the hyper-gamma distribution.
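A corresponding sketch of the hyper-gamma approximation (37), again with purely illustrative parameter values, mixes the two gamma distribution functions with weight q:

```python
import numpy as np
from scipy.stats import gamma

# Hyper-gamma approximation (37): mixture of two gamma laws with shapes
# alpha = mu/sigma + 1 and alpha - 1, inverse scale beta = 1 - rho, weight q.
N, mu, sigma, rho = 2, 1.0, 1.0, 0.95         # illustrative values
alpha, beta = mu / sigma + 1.0, 1.0 - rho
q = N / (1.0 - rho + N)

cdf_N = gamma(a=alpha, scale=1.0 / beta).cdf
cdf_Nm1 = gamma(a=alpha - 1.0, scale=1.0 / beta).cdf
H = lambda x: q * cdf_N(x) + (1.0 - q) * cdf_Nm1(x)

i = np.arange(0, 400)
P = H(i + 1) - H(i)
print("hyper-gamma mean orbit size:", np.dot(i, P))
```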

6 Numerical Examples

Let us consider some numerical examples to demonstrate the area of applicability


of the proposed approximations. To do this, we provide simulations of the systems’
evolution and compare statistical results with analytical results derived in this paper.
The comparison is performed by using the Kolmogorov distance [20] between
respective cumulative distribution functions
$$d = \max_{i\ge 0}\left|\sum_{l=0}^{i}\left(\tilde{p}(l) - p(l)\right)\right|,$$

where p(l) is a probability distribution calculated using the approximation formulas


(4), (28), or (37), and p̃(l) is an empiric distribution of the number of calls in the
orbit based on the results of the simulation. For our purposes, we assume that values
d ≤ 0.05 are sufficient for good accuracy of approximation.
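The comparison procedure can be reproduced along the following lines. The sketch below is illustrative only: it simulates the M/M/N retrial queue as a continuous-time Markov chain, accumulates the time-weighted empirical distribution of the orbit size, and evaluates the Kolmogorov distance to the gamma approximation (28); the horizon T should be increased for more accurate estimates.

```python
import random
import numpy as np
from scipy.stats import gamma

# Simulation of the M/M/N retrial queue and Kolmogorov distance to the
# gamma approximation (28). All numerical values are illustrative.
random.seed(1)
N, mu, sigma, rho = 10, 1.0, 1.0, 0.97
lam = rho * N * mu

T, t, n, i = 1e5, 0.0, 0, 0                   # horizon, clock, busy servers, orbit size
occ = {}                                      # time spent at each orbit size

while t < T:
    r_arr, r_srv, r_ret = lam, n * mu, (i * sigma if n < N else 0.0)
    total = r_arr + r_srv + r_ret
    dt = random.expovariate(total)
    occ[i] = occ.get(i, 0.0) + dt
    t += dt
    u = random.random() * total
    if u < r_arr:                             # arrival
        if n < N: n += 1
        else: i += 1
    elif u < r_arr + r_srv:                   # service completion
        n -= 1
    else:                                     # successful retrial
        n, i = n + 1, i - 1

imax = max(occ)
emp = np.array([occ.get(k, 0.0) for k in range(imax + 1)]) / t
Gcdf = gamma(a=(mu + sigma) / sigma, scale=1.0 / (1.0 - rho)).cdf
approx = Gcdf(np.arange(1, imax + 2)) - Gcdf(np.arange(imax + 1))

d = np.max(np.abs(np.cumsum(emp - approx)))
print("Kolmogorov distance d =", round(d, 4))
```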
Let the number of servers N be equal to 10, the service rate of each server be
μ = 1, and the arrival process be Poisson with the rate λ = Nμρ. Parameters ρ and
σ will be varied according to the asymptotic condition under consideration.
First, we consider the long delay asymptotic condition. Let ρ = 0.8. The comparison of the Gaussian approximation (4) and the empiric distributions is presented in Fig. 2 for various values of the parameter σ. Values of the Kolmogorov distance
for this example are presented in Table 1. We note that the Gaussian approximation
(4) becomes accurate enough for σ ≤ 0.1.
Next, consider the asymptotic condition of the heavy load. Let σ = 1.
Cumulative distribution functions of the gamma and hyper-gamma approximations
(28) and (37), and the corresponding empiric distributions are shown in Fig. 3.
Values of the Kolmogorov distance are presented in Table 2. Note that sufficient
accuracy of approximation is achieved for ρ ≥ 0.97.


Fig. 2 Comparisons of the Gaussian approximation (dashed line) and the simulation results (solid
line) for: (a) σ = 0.5; (b) σ = 0.1; (c) σ = 0.05

Table 1 Kolmogorov distances d for the Gaussian approximation for various values of the parameter σ

σ:  1      0.5    0.1    0.05   0.01
d:  0.125  0.096  0.048  0.035  0.018


Fig. 3 Comparisons of the gamma approximation (dashed line), the hyper-gamma approximation
(dotted line, very close to gamma approximation), and the simulation results (solid line) for: (a)
ρ = 0.9; (b) ρ = 0.95; (c) ρ = 0.97

Table 2 Kolmogorov distances dg and dh for the gamma and hyper-gamma approximations, respectively, for various values of the parameter ρ

ρ:   0.9    0.95   0.96   0.97   0.98
dg:  0.218  0.095  0.077  0.049  0.022
dh:  0.215  0.093  0.076  0.048  0.022

In addition, we note that the hyper-gamma approximation is only slightly


better than the gamma approximation in the example. However, the accuracy
of the approximation and the difference between the hyper-gamma and gamma
approximation results increase for fewer servers (e.g., for the system M/M/2,
values of the Kolmogorov distance are presented in Table 3).

Table 3 Kolmogorov distances dg and dh for the gamma and hyper-gamma approximations, respectively, for the system M/M/2

ρ:   0.9    0.95   0.97
dg:  0.118  0.058  0.033
dh:  0.100  0.046  0.025

7 Conclusions

In this paper, the multi-server retrial queue M/M/N has been considered. We have
proposed asymptotic methods for the study of the system under long delay and
heavy load conditions.
It has been proved that the asymptotic characteristic function of the number of
calls in the orbit under the long delay condition is Gaussian (Sect. 3). Using the
numerical comparison of the asymptotic and the empiric distributions, we have
shown that the applicability area of this approximation is σ ≤ 0.1.
In Sect. 4, the retrial queue has been studied under the limit condition of a heavy
load, and the gamma form of the asymptotic characteristic function of the number
of calls in the orbit has been proved. The numerical analysis allows us to conclude
that the asymptotic formulas can be applied for ρ ≥ 0.97. In addition, we have
proposed an improvement of the asymptotic result in the form of the hyper-gamma
approximation (Sect. 5), which is better than the gamma approximation for N < 10.
In recent papers, we found that the formulas for asymptotic characteristic
functions of the probability distribution of the number of calls in the orbit (under
long delay and heavy load conditions) have the same form for single-server
retrial queues with different arrival processes and service laws: M/M/1, M/GI /1,
MMP P /M/1, and MMP P /GI /1. Thus, in future work, we plan to apply the
proposed methods for multi-server retrial queueing systems with non-Poisson
arrival processes.

References

1. Aguir, S., Karaesmen, F., Askin, O.Z., Chauvet, F.: The impact of retrials on call center
performance. OR Spektr. 26, 353–376 (2004)
2. Aissani, A.: Heavy loading approximation of the unreliable queue with repeated orders. In:
Colloque “Méthodes et outils d’aide á la décision”, pp. 97–102 (1992)
3. Anisimov, V.: Asymptotic analysis of highly reliable retrial systems with finite capacity. In:
Queues, Flows, Systems, Networks: Proceedings of the International Conference Modern
Mathematical Methods of Investigating the Telecommunication Networks, Minsk, pp. 7–12
(1999)
4. Anisimov, V.: Asymptotic Analysis of Reliability for Switching Systems in Light and Heavy
Traffic Conditions, pp. 119–133. Birkhäuser Boston, Boston (2000)

5. Artalejo, J., Falin, G.: Standard and retrial queueing systems: a comparative analysis. Rev. Mat.
Complut. 15, 101–129 (2002)
6. Artalejo, J., Gómez-Corral, A.: Retrial Queueing Systems. A Computational Approach.
Springer, Stockholm (2008)
7. Artalejo, J., Pozo, M.: Numerical calculation of the stationary distribution of the main
multiserver retrial queue. Ann. Oper. Res. 116, 41–56 (2002)
8. Artalejo, J., Gómez-Corral, A., Neuts, M.: Analysis of multiserver queues with constant retrial
rate. Eur. J. Oper. Res. 135, 569–581 (2001)
9. Cohen, J.: Basic problems of telephone traffic and the influence of repeated calls. Philips
Telecommun. Rev. 18(2), 49–100 (1957)
10. Diamond, J., Alfa, A.: Matrix analytical methods for M/P H /1 retrial queues. Stoch. Model.
11, 447–470 (1995)
11. Diamond, J., Alfa, A.: Approximation method for M/P H /1 retrial queues with phase type
inter-retrial times. Eur. J. Oper. Res. 113, 620–631 (1999)
12. Dudin, A., Klimenok, V.: Queueing system BMAP /G/1 with repeated calls. Math. Comput.
Model. 30(3–4), 115–128 (1999)
13. Elldin, A., Lind, G.: Elementary Telephone Traffic Theory. Ericsson Public Telecommunica-
tions, Stockholm (1971)
14. Falin, G.: M/G/1 queue with repeated calls in heavy traffic. Mosc. Univ. Math. Bull. 6, 48–50
(1980)
15. Falin, G., Templeton, J.: Retrial Queues. Chapman & Hall, London (1997)
16. Fedorova, E.: Quasi-geometric and gamma approximation for retrial queueing systems.
Commun. Comput. Inf. Sci. 487, 123–136 (2014)
17. Gómez-Corral, A.: A bibliographical guide to the analysis of retrial queues through matrix
analytic techniques. Ann. Oper. Res. 141, 163–191 (2006)
18. Gosztony, G.: Repeated call attempts and their effect on traffic engineering. Bell Syst. Tech. J.
2, 16–26 (1976)
19. Kim, C., Mushko, V., Dudin, A.: Computation of the steady state distribution for multi-server
retrial queues with phase type service process. Ann. Oper. Res. 201(1), 307–323 (2012)
20. Kolmogorov, A.N.: Sulla determinazione empirica di una legge di distribuzione. Giornale dell’
Intituto Italiano degli Attuari 4, 83–91 (1933)
21. Kuznetsov, D., Nazarov, A.: Analysis of non-Markovian models of communication networks
with adaptive protocols of multiple random access. Autom. Remote. Control. 5, 124–146
(2001)
22. Lopez-Herrero, M.J.: Distribution of the number of customers served in an M/G/1 retrial
queue. J. Appl. Probab. 39(2), 407–412 (2002)
23. Moiseeva, E., Nazarov, A.: Asymptotic analysis of RQ-systems M/M/1 on heavy load
condition. In: Proceedings of the IV International Conference Problems of Cybernetics and
Informatics, Baku, pp. 64–166 (2012)
24. Moiseev, A., Nazarov, A.: Queueing network MAP − (GI /∞)K with high-rate arrivals. Eur.
J. Oper. Res. 254, 161–168 (2016)
25. Nazarov, A., Chernikova, Y.: Gaussian approximations of probabilities distribution of states
of the retrial queueing system with r-persistent exclusion of alternative customers. Commun.
Comp. Inf. Sci. 564, 200–209 (2015)
26. Nazarov, A., Lyubina, T.: The non-Markov dynamic RQ system with the incoming MMP flow
of requests. Autom. Remote. Control. 74(7), 1132–1143 (2013)
27. Nazarov, A., Tsoj, S.: Common approach to studies of Markov models for data transmission
networks controlled by the static random multiple access protocols. Autom. Control. Comput.
Sci. 4, 73–85 (2004)
28. Neuts, M., Rao, B.: Numerical investigation of a multiserver retrial mode. Queueing Syst. 7(2),
169–189 (2002)
29. Pankratova, E., Moiseeva, S.: Queueing system MAP /M/∞ with n types of customers.
Commun. Comput. Inf. Sci. 487, 356–366 (2014)

30. Pourbabai, B.: Asymptotic analysis of G/G/K queueing-loss system with retrials and
heterogeneous servers. Int. J. Syst. Sci. 19, 1047–1052 (1988)
31. Ridder, A.: Fast simulation of retrial queues. In: Third Workshop on Rare Event Simulation
and Related Combinatorial Optimization Problems, Pisa, pp. 1–5 (2000)
32. Roszik, J., Sztrik, J., Kim, C.: Retrial queues in the performance modelling of cellular mobile
networks using MOSEL. Int. J. Simul. 6, 38–47 (2005)
33. Sakurai, H., Phung-Duc, T.: Scaling limits for single server retrial queues with two-way
communication. Ann. Oper. Res. 247(1), 229–256 (2015)
34. Stepanov, S.: Asymptotic analysis of models with repeated calls in case of extreme load. Probl.
Inf. Transm. 29(3), 248–267 (1993)
35. Wilkinson, R.: Theories for toll traffic engineering in the USA. Bell Syst. Tech. J. 35(2), 421–
507 (1956)
36. Yang, T., Posner, M., Templeton, J., Li, H.: An approximation method for the M/G/1 retrial
queue with general retrial times. Eur. J. Oper. Res. 76, 552–562 (1994)
On the Application of Dynamic Screening
Method to Resource Queueing System
with Infinite Servers

Michele Pagano and Ekaterina Lisovskaya

Abstract Infinite-server queues are a widely used modelling tool thanks to their
analytical tractability and their ability to provide conservative upper bounds for
the corresponding multi-server queueing systems. A relatively new research field is
represented by resource queues, in which every customer requires some volume of
resources during its stay in the queue and frees them only at the end of the service.
In a nutshell, in this paper the joint distribution of the processes describing the
number of busy servers and the total volume of occupied resources is derived and the
parameters of the corresponding bidimensional Gaussian distribution are explicitly
calculated as a function of the arrival process characteristics and the service time
and customers capacity distributions. The aim of this paper is twofold: on one
side it summarizes in a ready-to-be-used way the main results for different arrival
processes (namely, Poisson processes, renewal processes, MAP, and MMPP), on the
other it provides a detailed description of the employed methodology, presenting the
key ideas at the basis of powerful analysis tools (dynamic screening and asymptotic
analysis methods), developed in the last two decades by Tomsk researchers.

Keywords Resource queuing systems · Dynamic screening method · Asymptotic


analysis method · Renewal processes · MMPP · MAP

M. Pagano ()
Department of Information Engineering, University of Pisa, Pisa, Italy
e-mail: [email protected]
E. Lisovskaya
Tomsk State University, Tomsk, Russian Federation
e-mail: [email protected]


1 Introduction

Infinite-server queues play a relevant role in queueing theory and in performance


analysis. Indeed, different issues can be modelled in such a way, that the number of
servers is really infinite or so big, that in practice there are always free servers. A
typical example is represented by economical models, in which there is no reason
to limit the number of contracts that can be signed between credit organizations and
clients. Although in real systems physical resources are always finite, these models
can be applied to the analysis of computing clusters and multi-core supercomputers,
as well as to high-capacity routers (see [1] and references therein).
Moreover, infinite-server queues have a higher analytical tractability than the
corresponding multi-server systems: for several classes of arrival processes not
only mean values of the performance indexes are available, but it is also possible
to determine the corresponding probability distributions, at least under some
asymptotic conditions. For instance, “heavy traffic” scenarios are often encountered
in computer networks and the knowledge of the steady-state distribution of the
number of busy servers can provide conservative upper bounds for the correct
dimensioning of the system (e.g., output capacity of a router).
Traditionally, in network modelling the service was associated to packets trans-
mission or calls duration, but this assumption is getting less and less true in modern
network architectures. Indeed, issues related to virtual machine allocation in cloud
environments or performance of LTE (Long Term Evolution) networks require new
queueing models, in which the customers ask for some resources (CPU/memory
and radio resources, respectively) that are released at the end of the service. Such
models are known in the literature as resource queueing systems. For instance, in [2]
they are applied to the analysis of M2M traffic characteristics in a LTE network cell,
while [3] presents an overview of the resource queuing systems used for modeling of
a wide class of real systems with limited resources, focusing on wireless networks
with exponentially distributed service time. Resource queues in connection with
AQM (Active Queue Management) mechanisms are investigated in [4] under the
processor sharing discipline, but the analysis is limited to Poisson arrivals. Finally,
analytical results for systems with finite resources are given in [5], where M/M/n/m
queues are considered and the service time is assumed to be proportional to the
customer capacity.
All the above-mentioned works deal with finite resource queueing systems, and
analytical results are obtained under stringent condition for the arrival process and
the service time distribution. However, the inadequacy of the Poisson process as
arrival model is well-known in the literature [6, 7] and more realistic traffic models
have been proposed in the literature, such as MAPs (Markov Arrival Processes) and
MMPPs (Markov Modulated Poisson Processes). In case of infinite-server resource
queues the previous limitations disappear and such models can be used to calculate
conservative bounds on system performance under realistic traffic conditions and
general distributions of the service time and the customer capacity.

The aim of the paper is to present the principal results in the field of infinite-
server resource queueing systems, most of them derived and tested by the authors
in the last few years and gathered in [8], where complete proofs and simulation
results are also reported. However, to the best of our knowledge, this paper is the
first attempt (at least in the English scientific literature) to collect the main results for
such systems in a review work and provide for networking specialists the possibility
of choosing the most suitable model and finding the relevant performance indexes.
It is worth noticing that, above all in case of correlated arrivals, the derivation
of the Gaussian approximation is quite cumbersome, so we mainly focus on the
methodological elements and present a complete analysis only for Poisson arrivals.
Indeed, we also aim to popularize powerful tools developed in the last decades by
the “Tomsk queueing theory School,” such as the alternative description of MAPs,
the dynamic screening method, and the asymptotic analysis method.
The rest of the paper is organized as follows. In Sect. 2 we provide a thorough
description of the analyzed queueing systems and recall some background results
and definitions, while the following section details the application of the proposed
methodology to the case of Poisson arrivals. Then, in Sect. 4 we generalize the
analysis to renewal processes and MAPs, highlighting the differences with the
Poisson case and summarizing the key results. Finally, the main contributions of the
paper are pointed out in the Conclusions, together with future research directions.

2 Reference Model and Theoretical Background

In this section we describe the system under analysis, introducing the notation
and the mathematical apparatus used in the rest of the paper. In describing the
different arrival processes we detail the definition of MAPs, since our notation is
slightly different (although equivalent) from the one most widely used in the western
literature. Finally, we briefly recall the dynamic screening method for the study of
non-Markovian queueing systems.

2.1 Infinite-Server Resource Queueing System

Let us consider an infinite-server queueing system with infinite resources (so no


customer will be rejected) as shown in Fig. 1. An arriving customer can occupy any
free server for a random service time ξ ≥ 0, characterized by a distribution function
B(τ ) = P {ξ < τ } with finite first moment (roughly speaking, it is just required that
the mean service time is finite and no assumptions are made on its variance). As
already mentioned in the introduction, the customer requires during his service also
some resource of random volume ν, described by a distribution function G(y) =
P {ν < y} with finite first and second moments. When the service is completed,
the customer leaves the system and frees the resource. Moreover, service times {τ }

Fig. 1 Infinite-server
resource queueing system

and customer capacities {ν} are assumed to be mutually independent and do not depend on the epochs of customers’ arrivals.
Let us fix an initial moment t0 (to get the steady-state regime it will be enough to
consider t0 → −∞) and let the system be empty at time t0 .
Denote by i(t) the number of customers in the system at time t; then, the total
volume of occupied resources (i.e., the total customers capacity) is given by


$$V(t) = \sum_{i=1}^{i(t)} \nu_i$$

and the bidimensional process {i(t), V (t)} unambiguously characterizes the state of
the considered queueing system.
Due to the independence of the two components, it is easy to find a relation
among them. Indeed, the characteristic function of the total customers capacity can
be rewritten as
$$h(v) = M\left[e^{jvV(t)}\right] = M\left[M\left[e^{jv\sum_{k=1}^{i}\nu_k}\,\middle|\,i(t)=i\right]\right] = \sum_{i=0}^{\infty} M\left[e^{jv\sum_{k=1}^{i}\nu_k}\right]P\{i(t)=i\} = \sum_{i=0}^{\infty}\left(M\left[e^{jv\nu}\right]\right)^i P\{i(t)=i\}$$
and, taking into account that
$$M\left[e^{jv\nu}\right] = \int_0^{\infty} e^{jvy}\,dG(y) = G^*(v),$$
we get the link between traditional (the number of busy servers does not depend on the occupied resources) and resource queueing systems:
$$h(v) = \sum_{i=0}^{\infty}\left(G^*(v)\right)^i P\{i(t)=i\}. \tag{1}$$

However, this elegant result does not solve our problem. Indeed, the distribution
of the number of busy servers is known only for a limited set of systems (see

Sect. 3.1 for Poisson arrivals); even in that case, quite often the analytical expression
of the distribution of the total customers capacity is not available and only numerical
approximations can be found.
The process {i(t), V (t)} is, in general, non-Markovian; among the different
approaches (for instance, the analysis of the embedded Markov chain and the
method of supplementary variables) proposed in the literature, we consider the
dynamic screening method that provides a unified framework for the analysis of
infinite-server queueing systems (including tandem queues, queueing networks, and
resource systems) and is described in Sect. 2.3.

2.2 Arrival Process

The arrival process plays a major role in determining not only the queueing behavior,
but also the analytical tractability of the system. Indeed, analytical results can be
obtained only for Poisson arrivals, but it is well-known that the distribution of
inter-arrival times is typically quite far from the exponential one and, above all, the
arrivals are correlated [6]. To cope with these issues, we will consider two different
classes of traffic models, widely used in the literature: renewal processes and MAPs
(which include MMPPs as a special case). Unfortunately, for both classes closed-
form results are not available and only asymptotic approximations can be obtained
under heavy traffic conditions. In this paper we introduce a scale parameter N → ∞
(high intensity parameter) and focus on the case of “infinitely growing arrival rate”
(for the other regime, known in the literature as “infinitely growing service time,”
see for instance [9]).
In more detail, renewal processes are characterized by the sequence of inter-
arrival times {ζn }, which are independent identically distributed random variables
with common distribution A(z) = P {ζ < z}; in our analysis only the existence
of finite mean and variance is assumed. Hence, the asymptotic condition simply
corresponds to a scaled distribution A(zN) and the mean interarrival time goes to 0
as 1/N when N → ∞.
As far as MAPs are concerned, we make use of the characterization developed by
Tomsk researchers on the basis of the theory of doubly stochastic processes, which
includes the following components [1]:
– $k(t)$: a continuous time ergodic Markov chain with $K$ states and infinitesimal generator matrix $\mathbf{Q} = \|q_{k\nu}\|$, $k, \nu = 1, \dots, K$;
– $\lambda_k \ge 0$, $k = 1, \dots, K$: the conditional arrival rate for each state of the underlying Markov chain $k(t)$, typically denoted through the diagonal matrix $\mathbf{\Lambda} = \mathrm{diag}\{\lambda_k\}$, $k = 1, \dots, K$;
– $d_{k\nu}$, $k, \nu = 1, \dots, K$: the conditional probabilities that there is an arrival when the Markov chain $k(t)$ changes its state from $k$ to $\nu$ (it is assumed that $d_{kk} = 0$), grouped in the matrix $\mathbf{D} = \|d_{k\nu}\|$, $k, \nu = 1, \dots, K$.

In this way, unlike the classical notation [10], the model parameters have a clear
physical interpretation and MMPPs can be easily obtained by setting D = 0
since state transitions of the underlying Markov chain just imply a rate change,
but arrivals are not generated. Moreover, the asymptotic condition can be taken into
account by multiplying all the coefficients of the matrices Q and Λ by the high intensity
parameter N → ∞.
It is worth mentioning that this notation is equivalent to the classical one, based on the matrices $D_0$ and $D_1$. Indeed, it is possible to show that
$$D_0 = \mathbf{Q} - \left[\mathbf{\Lambda} + \mathbf{D}\circ\mathbf{Q}\right], \qquad D_1 = \mathbf{\Lambda} + \mathbf{D}\circ\mathbf{Q},$$
where $\circ$ denotes the Hadamard product (or entrywise product).
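As a small illustration of this correspondence, the sketch below converts a hypothetical two-state parameter set (Q, Λ, D) into the classical matrices D0 and D1; the entrywise product implements the Hadamard product.

```python
import numpy as np

# Conversion from the (Q, Lambda, D) description of a MAP to the classical
# (D0, D1) notation; the 2-state parameters below are illustrative only.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])                   # infinitesimal generator of k(t)
Lam = np.diag([3.0, 0.5])                     # conditional arrival rates
D = np.array([[0.0, 0.4],
              [0.7, 0.0]])                    # arrival probabilities at jumps of k(t)

D1 = Lam + D * Q                              # Lambda + D o Q (entrywise product)
D0 = Q - D1                                   # Q - [Lambda + D o Q]

print("D0 =\n", D0)
print("D1 =\n", D1)
print("row sums of D0 + D1 (should be 0):", (D0 + D1).sum(axis=1))
```

Setting D to the zero matrix recovers the MMPP special case, for which D1 reduces to Λ.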

2.3 Dynamic Screening Method

In a nutshell, the dynamic screening method is based on the construction of a


suitable screened process and its markovization by the addition of a suitable
component, depending in general on the arrival process.
Let us consider two time axes (see Fig. 2): the first one displays the arrival times
of all customers, while the other one corresponds to the screened customers. For
any t ≥ t0 let us define a continuous function S(t) that assumes values in the
interval [0, 1]; then, a customer arriving at time t is screened on the second axis
(i.e., generates an event on it) with probability S(t). Since the screening probability
depends on the arrival time t, the method is called dynamic.
In more detail, for an infinite-server queue we assume that the system is empty
at the initial time t0 , fix an arbitrary moment T > t0 and put

S(t) = 1 − B(T − t)

Fig. 2 Screening of the customers’ arrivals



i.e., S(t) represents the probability that a customer, arrived at time t < T , is not
served by time T and so it still occupies some resources in the queue. Instead, with
probability 1 − S(t) the customer has left the system and it is not screened on the
second axis.
Let us denote by {n(t)} and {W (t)} the counting process representing the number
of screened event in the interval [t0 , t) and their total capacity, respectively. The
process {n(t)} is, in general, non-Markovian (except the case of Poisson arrivals),
but it can be markovized by adding a suitable component:
– the residual time before the next arrival z(t) in case of renewal processes ⇒ the
process {n(t), z(t)} is Markovian;
– the state k(t) of the modulating Markov chain in case of MAPs ⇒ the process
{n(t), k(t)} is Markovian;
– the residual time z(t) and the state l(t) of the embedded Markov chain in case of
semi-Markov processes ⇒ the process {n(t), z(t), l(t)} is Markovian.
Moreover, the probability distributions of the number of customers in the system
{i(t)} and the number of screened arrivals on the second axis {n(t)} coincide at time
T:

P {i(T ) = m} = P {n(T ) = m} ∀m = 0, 1, 2, . . . (2)

The latter result, known as the fundamental equation of the dynamic screening
method, can be easily verified starting from the equality of the corresponding
conditional probabilities (given a sequence of L arrivals at times t1 , t2 , . . . tL )

P {i(T ) = m|t1 , t2 , . . . tL } = P {n(T ) = m|t1 , t2 , . . . tL } ∀m = 0, 1, 2, . . .

for any number of arrivals L and any sequence of arrival times t1 , t2 , . . . tL , which
is a direct consequence of the chosen S(t) as can be verified by direct calculation.
Since the distributions of the multidimensional random variable (L, t1 , t2 , . . . tL )
are the same in the two cases, also the distributions of the random variables i(T )
and n(T ) (i.e., of the values of the processes {i(t)} and {n(t)} at time T ) coincide.
It is easy to prove the same property for the extended process {i(t), V (t)}:

P {i(T ) = m, V (T ) < z} = P {n(T ) = m, W (T ) < z}


∀m = 0, 1, 2, . . . and z ≥ 0 (3)

that, by analogy with (2), represents the fundamental equation of the dynamic
screening method for resource queueing systems.
To summarize, the essence of the dynamic screening method consists in the
following steps:
1. Choose a suitable screening function S(t) and build the corresponding screened
process {n(t)};
2. Markovize the process {n(t), W (t)} by adding a suitable supplementary component (the residual time z(t), the chain state k(t), or both, depending on the arrival process, as listed above);
3. Determine the probabilistic characteristics of the resulting extended Markov process;

4. Derive the joint distribution of the process {n(t), W (t)} (and, in case, the
marginal distributions if relevant);
5. Set t = T and, according to (3), get the distribution of the process {i(t), V (t)} at
time t = T .
Finally, note that T was chosen arbitrarily (the only condition is T > t0 ) and so
we can calculate the probability distribution of the joint process at any time; in
particular, letting t0 → −∞, we can get the steady-state distribution, which is
typically the parameter of interest in the study of queueing systems.
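The fundamental equation (2) is easy to check by simulation. The following Monte Carlo sketch assumes Poisson arrivals and exponential service purely for illustration: it compares the number of customers still in service at time T with the number of arrivals screened with probability S(t) = 1 − B(T − t); the two counts should have the same distribution (here only the first two moments are compared).

```python
import numpy as np

# Monte Carlo check of the fundamental equation (2) for Poisson arrivals and
# exponential service (both chosen only for illustration).
rng = np.random.default_rng(7)
lam, mu, t0, T, runs = 5.0, 1.0, 0.0, 30.0, 20000

count_i, count_n = [], []
for _ in range(runs):
    m = rng.poisson(lam * (T - t0))
    arrivals = rng.uniform(t0, T, size=m)          # Poisson process on [t0, T]
    service = rng.exponential(1.0 / mu, size=m)
    count_i.append(np.sum(arrivals + service > T)) # still in service at time T
    S = np.exp(-mu * (T - arrivals))               # S(t) = 1 - B(T - t) for exp(mu) service
    count_n.append(np.sum(rng.random(m) < S))      # screened arrivals
print("mean i(T):", np.mean(count_i), " mean n(T):", np.mean(count_n))
print("var  i(T):", np.var(count_i), " var  n(T):", np.var(count_n))
```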

3 Analysis of Infinite-Server Resource Queueing System: Poisson Arrivals

Let us assume that the arrival process is Poissonian with rate λ and denote by
Mv /GI/∞ the corresponding resource queueing system to highlight that customers
are characterized by their capacity v. Although in this special case the analysis can
be carried out in different ways, we will take advantage of the analytical simplicity
of the input process to better illustrate our general methodology. In more detail, at
first in Sect. 3.1 we derive the Kolmogorov equation for the characteristic function
of the bidimensional process {i(t), V (t)} and find the corresponding analytical
solution that is possible thanks to the special structure of the arrival process. Then,
in Sect. 3.2 we present the general approach that provides first- and second-order
approximations of the characteristic function.

3.1 Direct Solution of Kolmogorov Equations

Let us define the screened process as described in Sect. 2; thanks to the memoryless
property of the exponential distribution, now the bidimensional stochastic process
{n(t), W (t)} is Markovian and no additional component is required. To visually
simplify the analysis, let us introduce the following notation:

$$P\{n(t) = n, W(t) < w\} \stackrel{\Delta}{=} P(n, w, t) \quad \forall n = 0, 1, 2, \dots \text{ and } w > 0,$$

and assume that P (n, w, t) = 0 for negative values of n and w. According to the
formula of total probability the following equality holds

$$P(n, w, t+\Delta t) = P(n, w, t)(1 - \lambda\Delta t) + P(n, w, t)\,\lambda\Delta t\,(1 - S(t)) + \lambda\Delta t\, S(t)\int_0^{\infty} P(n-1, w-y, t)\,dG(y) + o(\Delta t),$$
from which the set of Kolmogorov differential equations can be easily derived:
$$\frac{\partial P(n, w, t)}{\partial t} = \lambda S(t)\left[\int_0^{\infty} P(n-1, w-y, t)\,dG(y) - P(n, w, t)\right] \tag{4}$$
for $n = 0, 1, 2, \dots$ and $w > 0$, with initial conditions
$$P(n, w, t_0) = \begin{cases} 1, & n = w = 0,\\ 0, & \text{otherwise.} \end{cases} \tag{5}$$

To solve the Kolmogorov differential equations, let us introduce the characteristic function
$$h(u, v, t) \stackrel{\Delta}{=} M\{\exp(jun(t) + jvW(t))\} = \sum_{n=0}^{\infty} e^{jun}\int_0^{\infty} e^{jvw}\, P(n, dw, t). \tag{6}$$

Taking into account that
$$\sum_{n=0}^{\infty} e^{jun}\int_0^{\infty} e^{jvw}\int_0^{w} P(n-1, d(w-y), t)\,dG(y)$$
$$= e^{ju}\sum_{n=0}^{\infty}\int_0^{\infty}\int_0^{w} e^{ju(n-1)}\, e^{jvy}\, e^{jv(w-y)}\, P(n-1, d(w-y), t)\,dG(y)$$
$$= e^{ju}\int_0^{\infty} e^{jvy}\left[\sum_{n=0}^{\infty}\int_0^{\infty} e^{ju(n-1)}\, e^{jv(w-y)}\, P(n-1, d(w-y), t)\right]dG(y)$$
$$= e^{ju}\int_0^{\infty} e^{jvy}\, h(u, v, t)\,dG(y) = e^{ju}\, h(u, v, t)\int_0^{\infty} e^{jvy}\,dG(y) = e^{ju}\, G^*(v)\, h(u, v, t),$$
where
$$G^*(v) \stackrel{\Delta}{=} \int_0^{\infty} e^{jvy}\,dG(y),$$

Eq. (4) can be rewritten as
$$\frac{\partial h(u, v, t)}{\partial t} = \lambda S(t)\, h(u, v, t)\left[e^{ju}\, G^*(v) - 1\right] \tag{7}$$
with the initial condition

h(u, v, t0 ) = 1 (8)

and its solution is given by
$$h(u, v, t) = \exp\left\{\lambda\left[e^{ju}\, G^*(v) - 1\right]\int_{t_0}^{t} S(\tau)\,d\tau\right\}. \tag{9}$$

For $t = T$ and $t_0 \to -\infty$, by virtue of (3) we obtain the characteristic function of the bidimensional process describing the number of busy servers and the total customers capacity in steady-state conditions:
$$h(u, v) = \exp\left\{\lambda b\left[e^{ju}\, G^*(v) - 1\right]\right\}, \tag{10}$$
where
$$b \stackrel{\Delta}{=} \int_0^{\infty}\left(1 - B(\tau)\right)d\tau.$$

Putting $v = 0$ in (10), we get the characteristic function for the number of busy servers in steady-state conditions
$$h(u) \stackrel{\Delta}{=} h(u, v)\big|_{v=0} = \exp\left\{\lambda b\left(e^{ju} - 1\right)\right\} \tag{11}$$

that coincides with the characteristic function of the Poisson distribution with
parameter λb, in agreement with the well-known classical results for the M/GI/∞
queueing systems.
In a similar way the characteristic function for the total customers capacity is
$$h(v) \stackrel{\Delta}{=} h(u, v)\big|_{u=0} = \exp\left\{\lambda b\left[G^*(v) - 1\right]\right\} \tag{12}$$
in accordance with the results obtained by Oleg Tikhonenko [5] and with Eq. (1). Indeed, as shown by (11), in M/GI/∞ the number of busy servers has Poisson distribution with parameter $\lambda b$ and by direct substitution into (1) we get
$$h(v) = \sum_{i=0}^{\infty}\left(G^*(v)\right)^i P\{i(t)=i\} = \sum_{i=0}^{\infty}\left(G^*(v)\right)^i\frac{(\lambda b)^i}{i!}e^{-\lambda b} = e^{-\lambda b}\, e^{\lambda b\, G^*(v)} = \exp\left\{\lambda b\left[G^*(v) - 1\right]\right\}.$$
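Formulas (11)–(12) identify the stationary pair as a Poisson number of busy servers together with a compound-Poisson total capacity. The short sketch below checks the implied first two moments of V by simulation, assuming (purely for illustration) an exponential capacity distribution, for which a1 = a and a2 = 2a².

```python
import numpy as np

# Moment check of (11)-(12): i ~ Poisson(lam*b), V compound Poisson, so
# E[V] = lam*b*a1 and Var[V] = lam*b*a2 (illustrative parameter values).
rng = np.random.default_rng(3)
lam, b, a = 4.0, 2.0, 1.5          # arrival rate, mean service time, mean capacity
a1, a2 = a, 2.0 * a**2             # first two moments of an exponential capacity

i = rng.poisson(lam * b, size=100000)                 # number of busy servers
V = np.array([rng.exponential(a, size=k).sum() for k in i])

print("E[V]  simulated:", V.mean(), " theory:", lam * b * a1)
print("Var[V] simulated:", V.var(), " theory:", lam * b * a2)
```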

3.2 The Asymptotic Analysis Method

The asymptotic analysis method in queueing systems aims at determining their


characteristics under some limit condition [11]. In the following we will consider
its application to the differential Eq. (7) in case of “infinitely growing arrival rate”
and we look for its approximate solutions with different order of accuracy, namely
“first-order asymptotic” h(u, v, t) ≈ h1 (u, v, t) and “second-order asymptotic”
h(u, v, t) ≈ h1 (u, v, t)h2 (u, v, t), also known as Gaussian approximation. Note
that it is possible to derive higher order asymptotics, but in that case the inversion of
the characteristic function is, in general, possible only by numerical methods and,
as stated in [1], at least for “traditional” queueing systems the gain is not significant
in case of heavy traffic.

3.2.1 First-Order Asymptotic Analysis

By performing the substitutions

1
ε= , u = εx, v = εy, h(u, v, t) = f1 (x, y, t, ε) (13)
λ
in Eq. (7), we obtain the following Cauchy problem:
⎧  
⎪ ∂f (x, y, t, ε)
⎨ε 1 = S(t)f1 (x, y, t, ε) ej εx G∗ (εy) − 1
∂t (14)


f1 (x, y, t0 , ε) = 1.

For ε → 0, taking into account the first-order Taylor–Maclaurin expansion


 
ej εx = 1 + j εx + O ε2

the limit function f1 (x, y, t) = lim f1 (x, y, t, ε) satisfies the following differen-
ε→0
tial equation:

∂f1 (x, y, t)
= S(t)f1 (x, y, t) (j x + jya1) ,
∂t
where a1 is the average customer capacity, i.e.,
 ∞
a1 = ydG (y) .
0
190 M. Pagano and E. Lisovskaya

Taking into account the initial condition f1 (x, y, t0 ) = 1, we get


  t 5
f1 (x, y, t) = exp (j x + jya1) S(τ )dτ
t0

and, after performing the substitutions inverse to (13), the first-order approximation
of h(u, v, t), i.e.,
  t 5
h(u, v, t) ≈ exp λ (j u + j va1 ) S(τ )dτ . (15)
t0

3.2.2 Second-Order Asymptotic Analysis

The second-order asymptotic provides the bidimensional Gaussian approximation


of the process {i(t), V (t)}. Rewriting the corresponding characteristic function as
  t 5
h(u, v, t) = h2 (u, v, t) exp (j x + jya1) S(τ )dτ
t0

the differential Kolmogorov equation (7) becomes

∂h2 (u, v, t)  
+ λ(j u + j va1 )S(t)h2 (u, v, t) = h2 (u, v, t)λS(t) ej εu G∗ (v) − 1
∂t
and, after performing the substitutions

1
ε2 = , u = εx, v = εy, h2 (u, v, t) = f2 (x, y, t, ε) , (16)
λ
we obtain the following differential equation:

∂f2 (x, y, t, ε)
ε2 + (j εx + j εya1)S(t)f2 (x, y, t, ε)
∂t
 
= S(t)f2 (x, y, t, ε) ej εx G∗ (εy) − 1 (17)

with the initial condition

f2 (x, y, t0 , ε) = 1. (18)

As before we consider the limit as ε → 0 and then use the second-order Taylor–
Maclaurin expansion

(j εx)2  
ej εx = 1 + j εx + + O ε3 .
2
Analysis of Resource Queueing System with Infinite Servers 191

Then, the limit function f2 (x, y, t) = lim f2 (x, y, t, ε) satisfies the following
ε→0
differential equation:
$ %
∂f2 (x, y, t) (j x)2 (jy)2
= S(t)f2 (x, y, t) + a2 + j xjya1 , (19)
∂t 2 2

where a2 is the second moment of the random variable describing the customer
capacity, i.e.,
 ∞
a2 = y 2 dG (y) .
0

The solution of (19), with the initial condition f2 (x, y, t0 ) = 1, is


$ % 5
(j x)2 (jy)2 t
f2 (x, y, t) = exp + a2 + j xjya1 S(τ )dτ
2 2 t0

and, performing the substitutions inverse to (16), we get the second-order approxi-
mation of h(u, v, t), i.e.,
 $ % t 5
(j u)2 (j v)2
h(u, v, t) ≈ exp λ j u + j va1 + + a2 + j uj va1 S(τ )dτ .
2 2 t0
(20)

Finally, for t = T and t0 → −∞, by virtue of (3) we obtain the second-order


asymptotic for the characteristic function of the steady-state distribution of the
bidimensional process {i(t), V (t)}
 5
(j u)2 (j v)2
h(u, v) ≈ exp j uλb + j vλa1 b + λb + λa2 b + j uj vλa1 b
2 2
(21)

that corresponds to the characteristic function of a bivariate Gaussian process with


correlated components. This result has a much wider validity, not limited to Poisson
arrival, as shown in the next section.

4 Asymptotic Analysis of Infinite-Server Resource Queueing


System

Dynamic screening and asymptotic analysis can be applied to a great variety of


arrival processes and queueing systems. For instance, as shown in this section, the
proposed methodology can be easily extended to renewal processes and MMPPs, a
192 M. Pagano and E. Lisovskaya

special case of MAPs widely used in teletraffic [10, 12]. For sake of brevity, we will
just sketch the procedure, highlighting the additional complexity due to the change
in the input process as well as the general validity of the Gaussian approximation
and providing references with the detailed proof of the results.

4.1 The MMPP(ν) /GI/∞ Queue

As already stated in Sect. 2.2, an MMPP is characterized by the two matrices Q and
Λ and the evolution of the queue depends on the state of the modulating Markov
chain k(t). Therefore, it is now necessary to work with the tridimensional Markovian
process {k(t), n(t), W (t)}. Denoting the probability distribution of this process by

P (k, n, w, t) = P {k(t) = k, n(t) = n, W (t) < w} ,

and applying the formula of total probability as in the Poisson case, we get

P (k, n, w, t + Δt) = P (k, n, w, t)(1 − λk Δt)(1 + qkk Δt)


+ P (k, n, w, t)λk Δt (1 − S(t))
 w
+ λk ΔtS(t) P (k, n − 1, w − y, t)dG(y)
0

+ qνk ΔtP (ν, n, w, t) + o(Δt), (22)
ν=k

for k = 1, . . . , K, n = 0, 1, 2, . . . and w > 0.


From (22), we obtain the system of Kolmogorov differential equations
 w
∂P (k, n, w, t)
= λk S(t) P (k, n − 1, w − y, t)dG(y) − P (k, n, w, t)
∂t 0

+ qνk P (ν, n, w, t), (23)
ν

with initial conditions


/
r(k) n = w = 0
P (k, n, w, t0 ) =
0 otherwise,

where {r(k)}, k = 1, . . . , K are the stationary state probabilities of the modulating


Markov chain k(t). Note that the first term on the right-hand side of (23) is similar
to the one in (4), while the other one takes into account the state transitions in the
modulating Markov chain.
Analysis of Resource Queueing System with Infinite Servers 193

Introducing the partial characteristic function

h(k, u, v, t) = M {exp (j un(t) + j vW (t))}



  ∞
= e j un
ej vw P (k, n, dw, t),
n=0 0

we can write the following system of equations:

∂h(k, u, v, t) 
= λk S(t)h(k, u, v, t) ej u G∗ (v) − 1 + h(ν, u, v, t)qνk
∂t ν

with the initial condition

h(k, u, v, t0 ) = r(k) for k = 1, . . . , K ,

or in matrix form:
∂h(u, v, t)
= h(u, v, t) S(t)(ej u G∗ (v) − 1) + Q , (24)
∂t
with the initial condition

h(u, v, t0 ) = r ,

where

h(u, v, t) = [h(1, u, v, t), h(2, u, v, t), . . . , h(K, u, v, t)]

and

r = [r(1), r(2), . . . , r(K)]

is the row-vector of the stationary distribution of the modulating Markov chain:



rQ = 0
re = 1 ,

e being a column-vector with all entries equal to 1.


To the matrix differential equation (24) we apply the asymptotic analysis method
to get asymptotic results under the condition of “infinitely growing arrival rate.”
Denoting by N the scaling parameter, we consider the family of MMPP processes
with = N ˜ and Q = N Q̃ as N → ∞. Calculations are more cumbersome
since now we need to work with a matrix (and not scalar) equation, but, as shown
in [13], the procedure is analogous to the Poisson case: the first- and second-order
194 M. Pagano and E. Lisovskaya

approximations are derived and then, by setting t = T and t0 → −∞, we obtain


the characteristic function of the process {i(t), V (t)} at steady state:

(j u)2
h(u, v) ≈ exp Nλ(j u + j va1 )b1 + (Nλb1 + Nκb2 )
2
5
(j v)2 2
+ (Nλa2 b1 + Na1 κb2 ) + j uj v(Nλa1 b1 + Nκa1 b2 ) , (25)
2

where a1 and a2 are the first and the second moments of the random variable
describing the customer capacity,
 ∞  ∞
b1 = (1 − B(τ ))dτ, b2 = (1 − B(τ ))2 dτ
0 0

and
 
λ = r ˜ e, κ = 2g ˜ − λI e,

where the row-vector g satisfies the linear matrix system


⎧  
⎨gQ̃ = r λI − ˜
⎩ge = 1.

The form of the characteristic function (25) implies that the bidimensional
process {i(t), V (t)} is asymptotically Gaussian with the vector of mathematical
expectations

a = N [λb1 λa1 b1 ]

and the covariance matrix

λb1 + κb2 λa1 b1 + κa1 b2


K=N .
λa1 b1 + κa1 b2 λa2 b1 + κa12 b2

In the general case of MAPs [14], the procedure is exactly the same, only
equality (22) slightly changes since a transition of the modulating Markov chain k(t)
from state ν to state k (with k = ν) can now generate an arrival with probablity dνk .
This corresponds to substitute the matrix Λ with Λ + Q ◦ D , leaving unchanged all
the rest. Apart from the value of λ and κ, equality (25) still holds for the steady-state
characteristic function and hence the previous considerations about Gaussianity can
be extended to MAP(ν)/GI/∞ resource queues.
Analysis of Resource Queueing System with Infinite Servers 195

4.2 The GI(ν) /GI/∞ Queue

Let us consider as input flow a renewal process and assume that the inter-arrival
time, characterized by the distribution A(z), has finite mean and variance, i.e.,
 ∞  ∞
1
a= = (1 − A(z)) dz and σ 2 = (z − a) dA(z) .
λ 0 0

In this case the memoryless property does not hold, hence it is necessary to take
into account the residual time z(t) to obtain a Markovian process {z(t), n(t), W (t)}.
Denoting its probability distribution by

P (z, n, w, t) = P {z(t) < z, n(t) = n, W (t) < w} ,

the formula of total probability leads to the following equality (for n = 0, 1, 2, . . .,


and z, w > 0):

P (z, n, w, t + Δt) = [P (z + Δt, n, w, t) − P (Δt, n, w, t)]


+ P (Δt, n, w, t)(1 − S(t))A(z)
 w
+ A(z)S(t) P (Δt, n − 1, w − y, t)dG(y) + o(Δt),
0

from which the Kolmogorov differential equation is easily derived:

∂P (z, n, w, t) ∂P (z, n, w, t) ∂P (0, n, w, t)


= + (A(z) − 1)
∂t ∂z ∂z
 w
∂P (0, n − 1, w − y, t) ∂P (0, n, w, t)
+ S(t)A(z) dG(y) − , (26)
0 ∂z ∂z

with initial condition


/
R(z) n=w=0
P (z, n, w, t0 ) =
0 otherwise ,

where
 z
1
R(z) = (1 − A(u))du
a 0
196 M. Pagano and E. Lisovskaya

is the stationary distribution of the renewal arrival process. Also in this case it is
useful to rewrite the Kolmogorov equation in terms of the partial characteristic
function

h(z, u, v, t) = M {exp (j un(t) + j vW (t))}



  ∞
= e j un
ej vw P (z, n, dw, t)
n=0 0

and we obtain the following equation:

∂h(z, u, v, t) ∂h(z, u, v, t)
=
∂t ∂z
∂h(0, u, v, t)  
+ A(z) − 1 + A(z)S(t) ej u G∗ (v) − 1 ,
∂z
(27)

with the initial condition

h(z, u, v, t0 ) = R(z) . (28)

Since the exact solution of (27) is, in general, not available, we apply the
asymptotic analysis method under the condition of “infinitely growing arrival rate,”
rewriting the distribution function as A(Nz) with N → ∞ as in Sect. 4.1. Following
our usual approach, we get the second-order approximation of h(z, u, v, t) and,
setting z → ∞, t = T , t0 → −∞, we obtain the characteristic function of the
process {i(t), V (t)} in the steady-state regime (see [15] for the detailed proof):

(j u)2
h(u, v) ≈ exp Nλ(j u + j va1 )b1 + (Nλb1 + Nκb2 )
2
5
(j v)2
+ (Nλa2 b1 + Na12 κb2 ) + j uj v(Nλa1 b1 + Nκa1 b2 ) , (29)
2

where a1 , a2 , b1 , and b2 are the same as in (25), while the expression of κ has
changed:
 
κ = λ3 σ 2 − a 2 .

In complete analogy with the result in Sect. 4.1, the bidimensional process
{i(t), V (t)} is asymptotically Gaussian with the vector of mathematical expectations

a = N [λb1 λa1 b1 ]
Analysis of Resource Queueing System with Infinite Servers 197

and the covariance matrix

λb1 + κb2 λa1 b1 + κa1b2


K=N
λa1 b1 + κa1 b2 λa2 b1 + κa12b2

that have exactly the same expression (apart from the definition of κ and λ) as in the
MMPP case.

5 Conclusions

In this work we analyzed infinite-server resource queueing systems, collecting in


a review paper the most relevant results we obtained in the last few years. To the
best of our knowledge it is the first attempt in the English literature to describe
a general analysis methodology for such systems and provide a list of ready-
to-be-used formulas for different arrival processes (namely, Poisson processes,
renewal processes, MAP, and MMPP). The proposed approach is based on the
application at first of the dynamic screening method (for markovization purposes)
and then of the asymptotic analysis method (to find at least an asymptotic solution
for the corresponding Kolmogorov equations). In a nutshell, the paper highlights
that, under the condition of “infinitely growing arrival rate,” the joint distribution
of the processes describing the number of busy servers and the total volume of
occupied resources is bivariate Gaussian and provides analytical expressions for its
parameters (mean vector and covariance matrix) as a function of the arrival process
characteristics, the distribution of the service time and the first and second moments
of the customers capacity distribution.
Finally, it is worth mentioning that the proposed methodology is much more
general and can be applied to other arrival processes (e.g., semi-Markov processes),
heterogeneous customers/servers, multi-resource customers as well as to more
complex resource systems, including tandem queues and queueing networks.

Acknowledgments The publication has been prepared with the support of the University of Pisa
PRA 2018–2019 Research Project “CONCEPT—COmmunication and Networking for vehicular
CybEr-Physical sysTems.”

References

1. Nazarov, A., Moiseev, A.: Infinite-Server Queueing System and Networks (in Russian).
Publishing House STL, Tomsk (2015)
2. Sopin, E.S., Ageev, K.A., Markova, E.V., Vikhrova, O.G., Gaidamaka, Y.V.: Performance
analysis of M2M traffic in LTE network using queuing systems with random resource
requirements. Autom. Control Comput. Sci. 52(5), 345–353 (2018)
198 M. Pagano and E. Lisovskaya

3. Gorbunova, A.V., Naumov, V.A., Gaidamaka, Y.V., Samouylov, K.E.: Resource queuing
systems as models of wireless communication systems (in Russian). Inform. Primen 12(3),
48–55 (2018)
4. Tikhonenko, O., Kempa, W.: Queueing system with processor sharing and limited memory
under control of the AQM mechanism. Autom. Remote Control 76(10), 1784–1796 (2015)
5. Tikhonenko, O., Kawecka, M.: Total volume distribution for multiserver queueing systems
with random capacity demands. In: Kwiecień, A., Gaj, P., Stera, P. (eds.) Computer Networks,
pp. 394–405. Springer, Berlin (2013)
6. Pagano, M., Rykov, V., Yuri, K.: Teletraffic Models (in Russian). Publishing House Infra-M,
Moscow (2018)
7. Paxson, V., Floyd, S.: Wide area traffic: the failure of Poisson modeling. IEEE/ACM Trans.
Netw. 3(3), 226–244 (1995)
8. Lisovskaya, E.: Asymptotic methods for the analysis of resource queueing systems with non-
Poissonian arrival flows. Ph.D. Thesis, Tomsk State University (2018). Candidate of physical
and mathematical Sciences
9. Lisovskaya, E., Moiseeva, S., Pagano, M., Potatueva, V.: Study of the MMPP/GI/∞ queueing
system with random customers’ capacities. Inf. Appl. 11(4), 109–117 (2017)
10. Heffes, H., Lucantoni, D.M.: A Markov modulated characterization of packetized voice and
data traffic and related statistical multiplexer performance. IEEE J. Sel. Areas Commun. 4,
856–868 (1986)
11. Nazarov, A., Moiseeva, S.: The Asymptotic Analysis Method in Queueing Theory (in Russian).
Publishing House STL, Tomsk (2006)
12. Heyman, D.P., Lucantoni, D.: Modeling multiple IP traffic streams with rate limits. IEEE/ACM
Trans. Netw. 11(6), 948–958 (2003)
13. Lisovskaya, E., Moiseeva, S., Pagano, M.: The total capacity of customers in the infinite-server
queue with MMPP arrivals. In: Vishnevskiy, V.M., Samouylov, K.E., Kozyrev, D.V. (eds.)
Distributed Computer and Communication Networks, pp. 110–120. Springer, Cham (2016)
14. Kononov, I., Lisovskaya, E.: Analysis of infinite-server queues with arrivals of random volume
(in Russian). In: Proceedings of the XV International Conference Named After A. F. Terpugov,
vol. 1, pp. 67–71. Publishing House TSU, Tomsk (2016)
15. Lisovskaya, E., Moiseeva, S.: Asymptotic analysis of non-Markovian infinite-server queueing
with renewal arrivals of random volume customers (in Russian). Tomsk State University J.
Control Comput. Sci. 39, 30–38 (2017)
“Controlled” Versions of the
Collatz–Wielandt and
Donsker–Varadhan Formulae

Aristotle Arapostathis and Vivek S. Borkar

Abstract This is an overview of the work of the authors and their collaborators
on the characterization of risk-sensitive costs and rewards in terms of an abstract
Collatz–Wielandt formula and in case of rewards, also a controlled version of
the Donsker–Varadhan formula. For the finite state and action case, this leads to
useful linear and dynamic programming formulations for the reward maximization
problem in the reducible case.

Keywords Principal eigenvalue · Risk-sensitive control · Collatz–Wielandt


formula · Donsker–Varadhan functional

1 Introduction

This short article is an overview of the work of authors and their collaborators on
a somewhat novel perspective of the risk-sensitive control problem on infinite time
horizon that aims to optimize the asymptotic growth rate of a mean exponentiated
total reward, resp., cost. The viewpoint taken here is based on the fact that
the dynamic programming principle for this problem essentially reduces it to
an eigenvalue problem seeking the principal eigenvalue and eigenvector for a
monotone positively 1-homogeneous operator. This allows us to exploit the existing
generalized Perron–Frobenius (or Krein–Rutman) theory which leads to some
explicit expressions for the optimal growth rate. The first is the abstract Collatz–
Wielandt formula (see [1]) which can be shown to hold for both cost minimization
and reward maximization problems, though we have not exhausted all the cases

A. Arapostathis
Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX,
USA
e-mail: [email protected]
V. S. Borkar ()
Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India

© The Editor(s) (if applicable) and The Author(s), under exclusive 199
licence to Springer Nature Singapore Pte Ltd. 2020
V. C. Joshua et al. (eds.), Applied Probability and Stochastic Processes,
Infosys Science Foundation Series, https://doi.org/10.1007/978-981-15-5951-8_13
200 A. Arapostathis and V. S. Borkar

in our work. The second is a variational formula for the principal eigenvalue that
generalizes the Donsker–Varadhan formula for the same in the linear case. This
seems workable only for the reward maximization problem.
We first consider the discrete time case based on the results of [2] in the next
two sections, followed by those for reflected diffusions in a bounded domain,
based on [5], in Sect. 4. We then sketch, in Sect. 5, the very recent and highly
nontrivial extensions to diffusions on the whole space developed in [3] and [6].
Finally, we recall in Sect. 6 some developments in the simple finite state-action
setup from [10], where the aforementioned development allows us to derive the
dynamic programming equations for risk-sensitive reward process in the reducible
case. Section 7 concludes by highlighting some future directions.

2 Discrete Time Problems

The celebrated Courant–Fischer formula for the principal eigenvalue of a positive


definite symmetric matrix A ∈ Rd×d is

x T Ax
λ = max .
0=x∈Rd x Tx

Consider an irreducible nonnegative matrix Q ∈ Rd×d . The Perron–Frobenius


theorem guarantees a positive principal eigenvalue with an associated positive
eigenvector for Q. Is there a counterpart of the Courant–Fischer formula for this
eigenvalue?
The answer is a resounding “YES”! It is the Collatz–Wielandt formula for the
principal eigenvalue of an irreducible nonnegative matrix Q = [q(i, j )] ∈ Rd×d ,
stated as (see [17, Chapter 8]):
$ %
(Qx)i
λ = sup min
x=[x1 ,··· ,xd ]T , xi ≥0 ∀i i : xi >0 xi
$ %
(Qx)i
= inf max .
x=[x1 ,··· ,xd ]T , xi >0 ∀i i : xi >0 xi

An alternative characterization can be given as follows. Write

Q = ΓP ,
Controlled Version of the Donsker–Varadhan Formula 201

where

κi := q(i, j ) , 1≤i ≤d,
j

Γ := diag(κ1 , . . . , κd ) , κi > 0 ,
p(i, j ) := q(i,j )/κi , 1 ≤ i, j ≤ d ,
P := [p(j | i)] ,

with P a stochastic matrix. In other words, we have pulled out the row sums {κi } of
Q into a diagonal matrix Γ so that what is left is a stochastic matrix P . Also define


G0 := (π, P̃ ) : π is a stationary probability

for the stochastic matrix P̃ = [p̃(j |i)] .

Then the following representation holds [12]:


* +
 6  7
log λ = sup π(i) κi − D p̃(· | i) p(· | i) ,
(π,P̃ ) ∈ G0 i

where D(·  ·) denotes the Kullback–Leibler divergence or relative entropy. This is


the finite state counterpart of the Donsker–Varadhan formula [14] for the principal
eigenvalue of a nonnegative matrix.
As is well known, the infinite dimensional generalization of the Perron–
Frobenius theorem is given by the Krein–Rutman theorem [13, 16]. There are
also nonlinear variants of it. Let
1. B be a Banach space with a “positive cone” K such that K − K is dense in B,
2. T : B → B be a compact order preserving (i.e., f ≥ g ⇒ Tf ≥ T g), strictly
increasing (i.e., f > g ⇒ Tf > T g), strongly positive (i.e., maps nonzero
elements of K to its interior), positively 1-homogeneous (i.e., T (af ) = aTf for
all a > 0) operator.
A nonlinear variant of the Krein–Rutman theorem [18] then asserts that under some
technical hypotheses, a unique positive principal eigenvalue and a corresponding
unique (up to a scalar multiple) positive eigenvector for T exist.
Our interest is in the following nonlinear scenario arising in risk-sensitive
control: Consider
– a controlled Markov chain {Xn } on a compact metric state space S;
– an associated control process {Zn } in a compact metric control space U ;
– a per stage reward function r : S × U × S → R such that r ∈ C(S × U × S);
202 A. Arapostathis and V. S. Borkar

– a controlled transition kernel p(dy | x, u) with full support, such that for all
Borel A ⊂ S,

P (Xn+1 ∈ A | Xm , Zm , m ≤ n) = P (Xn+1 ∈ A | Xn , Zn )
(1)
= p(A | Xn , Zn ) .

This is called the controlled Markov property and the controls for which this
holds are said to be admissible. The maps

(x, u) → f (y)p(dy | x, u), f ∈ C(S), f  ≤ 1,

are assumed to be equicontinuous.


The control problem is to maximize the asymptotic growth rate of the exponential
reward, that is, to achieve
N−1 
1 
λ := sup sup lim inf log E e m=0 r(Xm ,Zm ,Xm+1 )  X0 = x .
x∈S {Zm } N↑∞ N

The second supremum in this definition is over all admissible controls. We allow
relaxed (i.e., probability measure valued) controls {μn } taking values in P(S), in
which case (1) gets replaced by

P (Xn+1 ∈ A | Xm , μm , m ≤ n) = P (Xn+1 ∈ A | Xn , μn )

= p(A | Xn , z)μn (dz), n ≥ 0 .

Define

Tf (x) := sup p(dy | x, u)φ(du | x)er(x,u,y)f (y) .
φ:S→P (U ) measurable

This is a compact, order preserving, strictly increasing, strongly positive, positively


1-homogeneous operator.
Using the nonlinear variant of the Krein–Rutman theorem stated above, this leads
to an abstract Collatz–Wielandt formula [2]:
Theorem 1 There exist ρ > 0, ψ ∈ int(C + (S)) such that T ψ = ρψ and

Tf dμ
ρ = inf sup
f ∈ int(C + (S)) M+ (S) f dμ
Tf dμ
= sup inf .
f ∈ int(C + (S)) M+ (S) f dμ

Also, log ρ is the optimal reward for the risk-sensitive control problem.
Controlled Version of the Donsker–Varadhan Formula 203

3 Variational Formula

We now state a variational formula for the principal eigenvalue [2]. Let G denote the
set of probability measures

η(dx, du, dy) ∈ P(S × U × S)

which disintegrate as

η(dx, du, dy) = η0 (dx)η1 (du | x)η2(dy | x, u) ,

such that η0 is invariant under the transition kernel



η2 (dy | x, u)η1 (du | x) .
U

These are the so-called “ergodic occupation measures” for discrete time control
problems.
Theorem 2 Under the above hypotheses,
$  
log ρ = sup η0 (dx)η1(du | x) r(x, u, y)η2 (dy | x, u)
η∈G
%
 
− D η2 (dy | x, u) p(dy | x, u) .

This can be viewed as a controlled version of the Donsker–Varadhan formula.


The hypotheses above can be relaxed to
1. Range(r) = [−∞, ∞) with er ∈ C(S × U × S);
2. p(dy | x, u) need not have full support.
The formula then is the same as before, the difference is that under the previous,
stronger set of conditions, the supremum over x ∈ S in the definition of λ was
redundant, it is no longer so. The extension proceeds via an approximation argument
that approximates the given transition kernel by a sequence of transition kernels for
which our original hypotheses hold.
We thus have an equivalent concave maximization problem, in fact a linear
program, as opposed to a “team” problem one would obtain from the usual “log
transformation” as in, e.g., [15]. Furthermore, if ρ(ϕ) denotes the asymptotic growth
rate for a randomized Markov control ϕ, then it can be shown that ρ = maxϕ ρ(ϕ),
implying the sufficiency of randomized Markov controls.
Some applications worth noting are [2]:
1. Growth rate of the number of directed paths in a graph. This requires −∞ as a
possible reward to account for the absence of edges.
204 A. Arapostathis and V. S. Borkar

2. Portfolio optimization in the framework of [8].


3. Problem of minimizing the exit rate from a domain.

4 Reflected Diffusions

Analogous results hold for reflected diffusions in a compact domain with smooth
boundary. These are described by the stochastic differential equation
   
dX(t) = b Xt , Ut dt + σ Xt dWt − γ (Xt ) dξt ,
(2)
dξ(t) = 1{Xt ∈ ∂Q} dξt ,

for t ≥ 0. Here:
1. Q is an open connected and bounded set with C 3 boundary ∂Q;
2. {Wt }t ≥0 is a standard d-dimensional Wiener process;
3. the control {Ut }t ≥0 lives in a metrizable compact action space U and is non-
anticipative, i.e., for t > s, W (t) − W (s) is independent of X0 ; Wy , Uy , y ≤ s;
4. b is continuous, and x → b(x, u) is Lipschitz uniformly in u;
5. σ is C 1,β0 and uniformly non-degenerate;
6. γi (x) = σ(x)σ(x)T η(x), where η(x) is the unit outward normal on ∂Q.
In contrast to the preceding section, we first consider the cost minimization
problem to highlight the differences with the reward maximization problem. Unlike
the classical cost/reward criteria such as discounted and average cost/reward, the
risk-sensitive cost and reward problems are not rendered equivalent by a mere sign
flip, and the differences are stark. For cost minimization, the control problem is to
minimize
1 t
r(Xs ,Us ) ds
lim sup log E e 0 ,
t ↑∞ t

where r is continuous.
The corresponding “Nisio semigroup” is defined as follows. For t ≥ 0, let
t
r(Xs ,Us ) ds
St f (x) := inf Ex e 0 f (Xt ) .
{Ut }t≥0

Then St : C(Q̄) → C(Q̄) is a semigroup of strongly continuous, bounded Lipschitz,


monotone, superadditive, positively 1-homogeneous, strongly positive operators
with infinitesimal generator G defined by

1  
Gf (x) := tr σ(x)σT (x)∇ 2 f (x) + min b(x, u) , ∇f (x) + r(x, u)f (x) .
2 u∈U
(3)
Controlled Version of the Donsker–Varadhan Formula 205

Let
 
Cγ2,+ (Q̄) := f : Q̄ → [0, ∞) : f ∈ C 2 (Q̄), ∇f (x), γ (x) = 0 for x ∈ ∂Q .

As in the discrete case, the nonlinear Krein–Rutman theorem then leads to the
following conclusions. There exists a unique pair (ρ, ϕ) ∈ R × Cγ2,+ (Q̄), satisfying
ϕ(0) = 1, such that

St ϕ = eρt ϕ .

This solves

Gϕ(x) = ρϕ(x) , x ∈ Q, and ∇ϕ(x), γ (x) = 0 , x ∈ ∂Q .

The abstract Collatz–Wielandt formula for this problem is



Gf
ρ = inf sup dν
f ∈Cγ2,+ (Q̄),f >0 ν∈P (Q̄) Q̄ f

Gf
= sup inf dν .
f ∈Cγ2,+ (Q̄),f >0 ν∈P (Q̄) Q̄ f

In the uncontrolled case, the first formula above is the convex dual of the Donsker–
Varadhan formula for the principal eigenvalue of G:
$ %
ρ = sup r(x)ν(dx) − I (ν) ,
ν∈P (Q̄) Q̄

where
 $ %
Lf
I (ν) := − inf dν ,
f ∈Cγ2,+ (Q̄),f >0 Q̄ f

with
1  
Lf (x) := tr σ(x)σT (x)∇ 2 f (x) + b(x) , ∇f (x) .
2
For the risk-sensitive reward problem, the same abstract Collatz–Wielandt
formula holds, except that the definition of the operator G now has a “max” in place
of the “min.” But as in the discrete time case, one can go a step further and have a
variational formulation. Let
1
R(x, u, w) := r(x, u) − |σT (x)w|2 , (x, u, w) ∈ Q̄ × U × Rd ,
2
206 A. Arapostathis and V. S. Borkar

and

M := μ ∈ P(Q̄ × U × Rd ) :
 5
Af (x, u, w)μ(dx, du, dw) = 0 ∀ f ∈ C 2 (Q) ∩ Cγ (Q̄) ,
Q̄×U ×Rd

with
1   8 9
Af (x, u, w) := tr σ(x)σT (x)∇ 2 f (x) + b(x, u) + σ(x)σT (x)w, ∇f (x)
2
(4)

for f ∈ C 2 (Q) ∩ C(Q̄). Recall the definition of an “ergodic occupation measure”


[4]. For a stochastic differential equation as in (2), but with the drift b replaced
with b(x, u) + σ(x)σT (x)w, and w taking values in some compact metrizable
space, this measure  is the time-t marginal of a stationary state-control process
Xt , v(Xt ), w(Xt ) , perforce independent of t. Thus, in the case when the parameter
w lives in a compact space, by a standard characterization of ergodic occupation
measures (ibid.), M is precisely the set thereof for controlled diffusions whose
(controlled) extended generator is A. This, however, is not necessarily the case
if w lives in Rd . An example to keep in mind is the one-dimensional stochastic
differential equation
 2  √
dXt = e t /2 − Xt dt + 2 dWt .
X

It is straightforward to verify that the standard Gaussian density satisfies the Fokker–
Planck equation. However, the diffusion is not even regular, so it does not have an
invariant probability measure. Therefore, we refer to M as the set of infinitesimal
ergodic occupation measures. The variational formula for this model is

ρ = sup R(x, u, w)μ(dx, du, dw) .
μ∈M Q̄×U ×Rd

This result is from [6].


An analogous abstract Collatz–Wielandt formula for the risk-sensitive cost
minimization problem was derived in [5]. We have not derived a corresponding
variational formula. Even if one were to do so, it is clear that it will be a “sup-inf/inf-
sup” formula rather than a pure maximization problem. This is already known
through a different route: it forms the basis of the approach initiated by Fleming
and McEneaney [15] and followed by many, in which the Hamilton–Jacobi–
Bellman equation for the risk-sensitive cost minimization problem is converted to
an Isaacs equation for an ergodic payoff zero sum stochastic differential game. The
aforementioned expression then is simply the value of this game. Going by pure
Controlled Version of the Donsker–Varadhan Formula 207

analogy, for the reward maximization problem, one would expect this route to yield a
stochastic team problem wherein the two agents seek to maximize a common payoff,
but non-cooperatively, i.e., without either of them having knowledge of the other
person’s decision. What this translates into is that under the corresponding ergodic
occupation measure, the two control actions are conditionally independent given
the state. The set of such measures is non-convex. What we have achieved instead
is a single concave programming problem, which is a significant simplification
from the point of view of developing computational schemes for the problem.
This also brings to the fore the difference between reward maximization and cost
minimization in risk-sensitive control.

5 Diffusions on the Whole Space

Here we consider a controlled diffusion in Rd of the form

dXt = b(Xt , Ut ) dt + σ(Xt ) dWt ,

where
1. W is a standard d-dimensional Brownian motion;
2. the control Ut lives in a metrizable compact action space U and is non-
anticipative, i.e., for t > s, W (t) − W (s) is independent of X0 ; Wy , Uy , y ≤ s;
3. b(x, u) is continuous and locally Lipschitz continuous in x uniformly in u ∈ U;
4. σ is locally Lipschitz continuous and locally non-degenerate;
5. b and σ have at most affine growth in x.
Without loss of generality, we may take Ut to be adapted to the increasing σ -fields
generated by {Xt , t ≥ 0}. Then these hypotheses guarantee the existence of a unique
weak solution for any admissible control {Ut }t ≥0 ([4, Chapter 2]).
As before, we let r(x, u) be a continuous running reward function, which is
locally Lipschitz in x uniformly in u, and is also bounded from above in Rd . We
define the optimal risk-sensitive value J ∗ by

1 T
J ∗ := sup lim inf log E e 0 r(Xt ,Ut ) dt
,
{Ut }t≥0 T →∞ T

where the supremum is over all admissible controls.


Consider the extremal operator
  8 9
. (x) := 1 trace a(x)∇ 2 f (x) + max b(x, u), ∇f (x) + r(x, u)f (x)
Gf
2 u∈U
208 A. Arapostathis and V. S. Borkar

. is defined by
for f ∈ C 2 (Rd ). The generalized principal eigenvalue of G
 
. := inf λ ∈ R : ∃ φ ∈ W2,d (Rd ), ϕ > 0, Gφ
λ∗ (G) . − λφ ≤ 0 a.e. in Rd ,
loc
(5)

where W2,d loc (R ) denotes the local Sobolev space of functions on R whose
d d
d d
generalized derivatives up to order 2 are in Lloc (R ), equipped with its natural
semi-norms. We assume that r − λ∗ is negative and bounded from above away
from zero on the complement of some compact set. This is always satisfied if
−r is an inf-compact function, that is the sublevel sets {−r ≤ c} are compact
(or empty) in Rd × U for each c ∈ R, or if r is a positive function vanishing at
infinity and the process {Xt }t ≥0 is recurrent under some stationary Markov control.
Then there exists a unique positive Φ∗ ∈ C 2 (Rd ) normalized as Φ∗ (0) = 1 which
. ∗ = λ∗ Φ∗ . In other words, the eigenvalue λ∗ = λ∗ (G)
solves GΦ . is simple. Let
ϕ∗ := log Φ∗ . As shown in [6], the function

1  T 2
H(x) := σ (x)∇ϕ∗ (x) , x ∈ Rd
2
is an infinitesimal relative entropy rate.
We let Z := Rd × U × Rd , and use the single variable z = (x, u, w) ∈ Z. Let
P(Z) denote the set of probability measures on the Borel σ -algebra of Z, and MA
denote the set of infinitesimal ergodic occupation measures for the operator A in (4)
defined for f ∈ C 2 (Rd ), which here can be written as
  5
MA := μ ∈ P(Z) : Af (z) μ(dz) = 0 ∀ f ∈ Cc2 (Rd ) ,
Z

where Cc2 (Rd ) is the class of functions in C 2 (Rd ) which have compact support.
Recall the definition R(x, u, w) := r(x, u) − 12 |σT (x)w|2 in Sect. 4. We also define
  5
P∗ (Z) := μ ∈ P(Z) : H(x) μ(dx, du, dw) < ∞ ,
Z
  5
P◦ (Z) := μ ∈ P(Z) : R(z) μ(dz) > −∞ .
Z

The following is a summary of the main results in [6, Section 4].


Controlled Version of the Donsker–Varadhan Formula 209

Theorem 3 We have

 
. =
J ∗ = λ∗ (G) sup inf Ag(z) + R(z) μ(dz)
μ∈P∗ (Z ) g∈Cc2 (Rd ) Z

= max R(z) μ(dz) .
μ∈MA-∩P∗ (Z ) Z

Suppose that the diffusion matrix a is bounded and uniformly elliptic, and either
|b|2
−r is inf-compact, or b, x− has subquadratic growth, or 1+|r| is bounded. Then
MA ∩ P◦ (Z) ⊂ P∗ (Z), and P∗ (Z) may be replaced by P(Z) in the variational
H is bounded, then
formula above. If, in addition, 1+|ϕ | ∗


 
. =
J ∗ = λ∗ (G) inf sup Ag(z) + R(z) μ(dz) .
g∈Cc2 (Rd ) μ∈P (Z ) Z

We continue with the Collatz–Wielandt formula in Rd for the risk-sensitive cost


minimization problem. This is studied in [3]. Here, we have a running cost r(x, u)
which is bounded from below in Rd × U, and is locally Lipschitz in x uniformly in
u. The assumptions on b and σ are as stated in the beginning of the section, except
that we may replace the affine growth assumption with the more general condition
 
sup b(x, u), x+ + σ(x)2 ≤ C0 1 + |x|2 ∀ x ∈ Rd ,
u∈U

for some constant C0 > 0. The risk-sensitive optimal value Λ∗ is defined by

1 T
Λ∗ := inf lim sup log E e 0 r(Xs ,Us ) ds
.
{Ut }t≥0 T →∞ T

The operator G here is as in (3) but for f ∈ C 2 (Rd ), and we let the generalized
principal eigenvalue λ∗ (G) be defined as in (5).
The running cost does not have any structural properties that penalize unstable
behavior such as near-monotonicity or inf-compactness, so uniform ergodicity for
the controlled process needs to be assumed. Let

1   8 9
Lf (x, u) := tr σ(x)σT (x)∇ 2 f (x) + b(x, u), ∇f (x) .
2
We consider the following hypothesis.
210 A. Arapostathis and V. S. Borkar

Assumption 1 The following hold.


(i) There exists an inf-compact function  ∈ C(Rd ), and a positive function V ∈
W2,d
loc (R ), satisfying infRd V > 0, such that
d

sup LV(x, u) ≤ κ1 1K (x) − (x)V(x) ∀ x ∈ Rd , (6)


u∈U

for some constant κ1 and a compact set K.


(ii) The function x → β(x) − maxu∈U r(x, u) is inf-compact for some β ∈ (0, 1).
As noted in [7], the Foster–Lyapunov equation in (6) cannot in general be
satisfied for diffusions with bounded a and b. Therefore, to treat this case, we
consider an alternate set of conditions.
Assumption 2 The following hold.
(i) There exists a positive function V ∈ W2,d loc (R ), satisfying infRd V > 0,
d

constants κ1 and γ > 0, and a compact set K such that

sup LV(x, u) ≤ κ1 1K (x) − γ V(x) ∀ x ∈ Rd .


u∈U

(ii) r − ∞ + lim sup|x|→∞ maxu∈U r(x, u) < γ .


Let o(V) denote the class of continuous functions f that grow slower than V, that
|f (x)|
is, → 0 as |x| → ∞. We quote the following result from [7].
V(x)
Theorem 4 Grant either Assumption 1, or 2. Then

Gf
Λ∗ = λ∗ (G) = sup inf dμ
f ∈C 2,+ (Rd )∩o(V) μ∈P (R )
d
Rd f
 (7)
Gf
= inf sup dμ ,
f ∈C 2,+ (Rd ) μ∈P (Rd ) Rd f

where C 2,+ (Rd ) denotes the set of positive functions in C 2 (Rd ).


We should remark here that the class of test functions f in the first representation
formula in (7) cannot, in general, be enlarged to C 2,+ (Rd ).
It is also interesting to consider the substitution f = eψ . Then (7) transforms to

λ∗ (G) = sup inf F [ψ](x) μ(dx)
ψ∈C 2,+ (Rd )∩o(log V) μ∈P (R )
d
Rd

= inf sup F [ψ](x) μ(dx) ,
ψ∈C 2,+ (Rd ) μ∈P (Rd ) Rd
Controlled Version of the Donsker–Varadhan Formula 211

with
6 7
F [ψ](x) := inf sup Aψ(x, u, w) + R(x, u, w) .
u∈U w∈Rd

This underscores the discussion in the last paragraph of Sect. 4.

6 Finite State and Action Space

For discrete time problems with finite state and action spaces (i.e., |S|, |U | < ∞
in Sects. 2 and 3), one can go significantly further for the reward maximization
problem. We recall below some results in this context from [10].
Consider a controlled Markov chain {Yn } on S with state-dependent action space
at state i given by

Ũi := ∪u∈U ({u} × Vi,u ) ,

where
  5
Vi,u := q(· | i, u) : q(· | i, u) ≥ 0, q(j | i, u) = 1 .
j

This is isomorphic to P(S). Let

K := ∪i∈S ({i} × Ũi ) .

The (controlled) transition probabilities of {Yn } are


 
p̃ j | i, (u, q(· | i, u)) := q(j | i, u) .

Define the per stage reward r̃ : K × S → R by


   
r̃ i, (u, q(· | i, u)), j := r(i, u, j ) − D q(· | i, u) p(· | i, u) .

Let {(Zn , Qn ), n ≥ 0} denote the ŨYn -valued control process. Consider the
problem: Maximize the long run average reward

1  6
N−1
7
lim inf E r̃ (Yn , (Zn , Qn ), Yn+1 ) .
N↑∞ N
n=0
212 A. Arapostathis and V. S. Borkar

Define the corresponding ergodic occupation measure γ ∈ P(K × S) by


 
γ (i, (u, dq), j ) := γ1 (i)γ2 (u, dq | i)γ3 j | i, (u, q) ,

where γ1 is an invariant probability distribution (not necessarily unique) under the


transition kernel
  
γ̌ (j | i) = γ2 (u, dq | i)γ3 j | i, (u, q) .
u Vi,u

Let E denote the set of such γ . The above average reward control problem is
equivalent to the linear program:
P0 Maximize

γ (i, (u, dq), j )r̃(i, (u, q), j )
i,j,u

over E.
Recall that E is specified by linear constraints and its extreme points correspond
to stationary Markov policies ([9, Chapter V]). The maximum will be attained at
an extreme point of E corresponding to a stationary Markov policy. This LP can be
simplified as
Maximize
  
γ " (i, u, j ) r(i, u, j ) − D q(· | i, u) p(· | i, u)
i,j


over Ẽ := γ " ∈ P(S × U × S) : γ " (i, u, j ) = γ1 (i)ϕ(u | i)q(j | i, u), where γ1 (·)
 
is invariant under the transition kernel γ̆ (j | i) := u ϕ(u | i)q(j | i, u) .
The dual LP is:
Minimize λ̆ subject to

λ̆ ≥ λ(i) ,
  
λ(i) + V (i) ≥ q(j | i, u) r̃(i, (u, q(· | i, u)), j ) + V (j ) ,
j

λ(i) ≥ q(j | i, u)λ(j ) , ∀ i ∈ S, (u, q(· | i, u)) ∈ Ũi .
j

The proof goes through finite approximations. Note that the LP has infinitely
many constraints. However, it does pave the way for the corresponding dynamic
Controlled Version of the Donsker–Varadhan Formula 213

programming principle. The dynamic programming formulation equivalent to the


above LP turns out to be as follows:

λ∗ = max λ(i) ,
i
  
λ(i) + V (i) = max q(j | i, u) V (j ) + r̃(i, (u, q(· | i, u), j )) ,
(u,q(· | i,u))∈Bi
j

λ(i) = max q(j | i, u)λ(j ) , (†)
(u,q(· | i,u))∈Bi
j

∀i ∈S,

where Bi is the Argmax in (†). Once again, the proof goes through finite approxi-
mations.

7 Future Directions

There are several directions left uncharted in this broad problem area. Some of them
are listed below.
1. There are some in-between cases that need to be analyzed, e.g., controlled
Markov chains with countably infinite state space. Under the strong “Doeblin
condition,” the abstract Collatz–Wielandt formula has been derived for these in
[11]. This needs to be extended to more general cases.
2. The counterpart of the dynamic programming equations derived for reducible
risk-sensitive reward processes can also be expected to hold for risk-sensitive
cost problems and is yet to be established.
3. Concrete computational schemes based on approximate concave maximization
problems is another direction worth pursuing.

Acknowledgments The work of A.A. was supported in part by the National Science Foundation
through grant DMS-1715210, and in part the Army Research Office through grant W911NF-17-
1-001. The work of V.S.B. was supported by a J. C. Bose Fellowship from the Government of
India.

References

1. Akian, M., Gaubert, S., Nussbaum, R.: A Collatz-Wielandt characterization of the spectral
radius of order-preserving homogeneous maps on cones (2011). arXiv1112.5968
2. Anantharam, V., Borkar, V.S.: A variational formula for risk-sensitive reward. SIAM J. Control
Optim. 55(2), 961–988 (2017). https://doi.org/10.1137/151002630
214 A. Arapostathis and V. S. Borkar

3. Arapostathis, A., Biswas, A.: A variational formula for risk-sensitive control of diffusions in
Rd . SIAM J. Control Optim. 58(1), 85–103 (2020). https://doi.org/10.1137/18M1218704
4. Arapostathis, A., Borkar, V.S., Ghosh, M.K.: Ergodic Control of Diffusion Processes. Encyclo-
pedia of Mathematics and its Applications, vol. 143. Cambridge University Press, Cambridge
(2012)
5. Arapostathis, A., Borkar, V.S., Kumar, K.S.: Risk-sensitive control and an abstract Collatz-
Wielandt formula. J. Theor. Probab. 29(4), 1458–1484 (2016). https://doi.org/10.1007/s10959-
015-0616-x
6. Arapostathis, A., Biswas, A., Borkar, V.S., Suresh Kumar, K.: A variational characterization of
the risk-sensitive average reward for controlled diffusions in Rd (2019). arXiv1903.08346
7. Arapostathis, A., Biswas, A., Saha, S.: Strict monotonicity of principal eigenvalues of elliptic
operators in Rd and risk-sensitive control. J. Math. Pures Appl. 124, 169–219 (2019). https://
doi.org/10.1016/j.matpur.2018.05.008
8. Bielecki, T., Hernández-Hernández, D., Pliska, S.R.: Risk sensitive control of finite state
Markov chains in discrete time, with applications to portfolio management. Math. Methods
Oper. Res. 50(2), 167–188 (1999). https://doi.org/10.1007/s001860050094
9. Borkar, V.S.: Topics in Controlled Markov Chains. Pitman Research Notes in Mathematics,
vol. 240. Longman Scientific and Technical, Harlow (1991). https://doi.org/10.1007/978-1-
4612-5320-4
10. Borkar, V.S.: Linear and dynamic programming approaches to degenerate risk-sensitive reward
processes. In: 56th IEEE Annual Conference on Decision and Control (CDC). pp. 3714–3718
(2017). https://doi.org/10.1109/CDC.2017.8264204
11. Cavazos-Cadena, R.: Characterization of the optimal risk-sensitive average cost in denumer-
able Markov decision chains. Math. Oper. Res. 43(3), 1025–1050 (2018). https://doi.org/10.
1287/moor.2017.0893
12. Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications. Applications of
Mathematics, vol. 38, 2nd edn. Springer, New York (1998). https://doi.org/10.1007/978-1-
4612-5320-4
13. de Pagter, B.: Irreducible compact operators. Math. Z. 192(1), 149–153 (1986). https://doi.org/
10.1007/BF01162028
14. Donsker, M.D., Varadhan, S.R.S.: On a variational formula for the principal eigenvalue for
operators with maximum principle. Proc. Nat. Acad. Sci. U.S.A. 72, 780–783 (1975). https://
doi.org/10.1073/pnas.72.3.780
15. Fleming, W.H., McEneaney, W.M.: Risk-sensitive control on an infinite time horizon. SIAM J.
Control Optim. 33(6), 1881–1915 (1995). https://doi.org/10.1137/S0363012993258720
16. Kreı̆n, M.G., Rutman, M.A.: Linear operators leaving invariant a cone in a Banach space.
Uspekhi Mat. Nauk. 3(1(23)), 3–95 (1948)
17. Meyer, C.: Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied
Mathematics (SIAM), Philadelphia (2000). https://doi.org/10.1137/1.9780898719512
18. Ogiwara, T.: Nonlinear Perron-Frobenius problem on an ordered Banach space. Jpn. J. Math.
21(1), 42–103 (1995). https://doi.org/10.4099/math1924.21.43
19. Puterman, M.L.: Markov Decision Processes: Discrete Stochastic Dynamic Programming.
Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics.
Wiley, New York (1994)
An (s, S) Production Inventory System
with State Dependant Production Rate
and Lost Sales

S. Malini and Dhanya Shajin

Abstract In this paper, the system under study is a production inventory system that
follows (s, S) replenishment policy and having state dependent production rate. The
system considered has infinite capacity where customers arrive according to Poisson
process. The service time follows exponential distribution. Further in the system,
when the inventory level depletes to s, the production process is switched on and
is kept on till the inventory level reaches its maximum capacity S. The production
time follows exponential distribution with parameter θi , where i represents number
of items in the inventory and 0 ≤ i ≤ S − 1. It is assumed that no new customers
join the queue when there is void inventory. This yields an explicit product form
solution for the steady state probability vector of the system, though there exists
a dependence relationship between number of customers joining the queue and
time interval for which the production process is turned on. Long run performance
measures are computed and lost sales of the system is analysed. A comparison chart
that points out the reduction of lost sales with state dependent production rate is
also provided along with numerical illustrations for the performance measures. An
expected cost function is constructed to numerically investigate the optimal (s, S)
pair.

Keywords Production inventory system · (s,S) reordering policy · State


dependent production rate · Stochastic decomposition

1 Introduction

Queueing theory is the branch of Mathematics that models and analyses queues
or waiting lines. The aim of queueing theory is to develop mathematical models
that can predict system behaviour. The system under consideration is those that

S. Malini () · D. Shajin


Department of Mathematics, Amrita School of Arts and Sciences, Amrita Vishwa Vidyapeetham,
Kochi, India

© The Editor(s) (if applicable) and The Author(s), under exclusive 215
licence to Springer Nature Singapore Pte Ltd. 2020
V. C. Joshua et al. (eds.), Applied Probability and Stochastic Processes,
Infosys Science Foundation Series, https://doi.org/10.1007/978-981-15-5951-8_14
216 S. Malini and D. Shajin

provide service for arbitrarily rising demands. Queueing systems with inventory
control are one of the areas under focus for the past decade. By inventory, we
mean any physical material stored, under process or waiting for processing in a
system like raw materials, goods, etc. or any item served to the customer after his
service period. Queueing system with inventory is studied under different heads
as—Single Server Queueing Systems with Inventory, Queueing Inventory System
with Stochastic Environment, Queueing Inventory System with Substitution Flexi-
bility, Queueing System with Production-Inventory, Queueing System with Service
Inventory, Continuous Review Inventory Systems with Server Vacation, Queueing
Inventory System with Postponed Demands/Customers, Queueing Systems with
New Inventory Models [4]. Here we consider an (s, S) production inventory system
where the production mode is also taken into consideration along with inventory
management. The reordering policy under study is (s, S) policy where s is the
reorder level and S is the maximum inventory level.
Turning the pages of development, a primary study on inventory with positive
service time was done by Sigman and Simchi-Levi [10], in which a FIFO M/G/1
queue is considered with customer arrival that follows Poisson process and time
taken for service having an arbitrary distribution and each customer requires one
item of inventory. A maximum capacity level and a minimum reorder level are also
considered in the system. Followed by this was the study made by Berman et al. [2]
in which a model was developed for an inventory management system. The service
mode of the system under consideration was to serve single unit of inventory on the
completion of each service. Also, it was assumed that there exists a deterministic
nature for arrival and service processes and the formation of queue happens only
when inventories are out of stock. An optimization for order quantity minimizing
the cost factor was also developed through the paper.
Schwarz and Daduna [9] in their work determined stationary distribution of
the system whose reorder point is taken to be zero. Various reordering policies
like (r, S), (r, Q) and some general randomized order policies are considered for
study. They derived product form solutions assuming no customer joining happens
when the inventory level is zero. Later Saffari and Haji[8] studied M/M/1 queueing
inventory system under (r, Q) policy whose lost sales, stationary distributions of
arising demands in the system and inventory level in the system considering lead
times as stochastic variables were derived. Several performance measures were also
derived along with cost analysis.
Related works in production industry include work by He and Jewkes [3] which
examines a production system in which demands follow Poisson process which are
processed as per FCFS principle. Using Markovian decision process approach an
optimal replenishment policy is arrived at. A recent development in this area is
the work by Baek and Moon [1] that evaluates a production inventory system as
an M/M/1 queue. The customer arrival is presumed to occur as a Poisson process
with a single server to render service that follows exponential distribution. The
replenishment of items happens either from an external agent who follows (r, Q)
policy or from an internal production system, where production process is supposed
to follow Poisson process. The product form solution in terms of joint probability
An (s, S) Production Inventory System with State Dependant Production Rate. . . 217

is arrived at and applying the same, performance measures and cost model are
developed.
In our study, the work by Krishnamoorthy et al. [7] requires special mention. A
queueing inventory system under (r, S) and (s, S) policy is analysed to obtain joint
probability distribution of number of customers and inventory level. Krishnamoor-
thy and Viswanath [5] published the first work in the case of a production inventory
system. The paper considered a production inventory system with Markovian arrival
process, Markovian production times, service time following phase type distribu-
tion, under (s, S) reordering policy. Under stability condition, state distributions and
various performance measures are evaluated. Current study in this paper finds its
motivation from the paper—Stochastic decomposition in production inventory with
service time by Krishnamoorthy and Viswanath [6] where demands follow Poisson
process, service rates and production times follows exponential distribution under a
constant rate. The current paper develops a comparative study with the former one
in the sense that, here we assume state dependent production rates.
Remaining paper is developed in the following manner. Section 2 provides a
model description, followed by system analysis in Sect. 3. In Sect. 4 performance
measures of the system are evaluated and in Sect. 5 the production cycle is analysed.
Following it, in Sect. 6, an expected cost function per unit time for the system under
consideration is developed for which numerical examples are provided. An optimal
value of s and S is arrived at from the illustrations.

2 Model Description

Here we consider an (s, S) production inventory system with single server and
infinite capacity. The customer arrival occurs according to Poisson process with
rate, λ. The services follow exponential distribution with rate, μ. The production is
switched on when the inventory level diminishes to s and is switched off as soon
as it reaches to S. The switch on mode is denoted by 1 and switch off mode by 0.
Thus, when the inventory level is between s + 1 and S − 1, the production mode can
be either 0 or 1. It is assumed that each production is of 1 unit and the production
times follow the exponential distribution with parameter, θi where i represents the
number of items in the inventory. Also, 0 ≤ i ≤ S − 1. Basic assumptions chosen
for this model are as follows:
• No customer joins the queue when the inventory level is zero.
• The product takes negligible time to reach the retail shop from the production
unit.
218 S. Malini and D. Shajin

2.1 Mathematical Model

The above conditions are modelled using  = {N(t), I (t), C(t); t ≥ 0} which is
a Continuous Time Markov Chain (CTMC). Here N(t) represents the number of
customers, I (t) represents the number of items in the inventory at the production
centre and C(t) represents the production mode as on or off.
Also,

C(t) = 1 for 0 ≤ I (t) ≤ s


= 0 for I (t) = S
= 0 or 1 for s + 1 ≤ I (t) ≤ S − 1

The state space of the above process is given by

{(n, i, 1); n ≥ 0, 0 ≤ i ≤ S − 1} ∪ {(n, i, 0), n ≥ 0, s + 1 ≤ i ≤ S}

When N(t) = n, known as the level of the system, there are 2S − s states in the
level.

2.2 Notations

In the following model explanation, below mentioned notations are used:


• Im → Identity matrix of order m
• en → Column vector of 1’s of order n × 1
• e → Column vector of 1’s of respective order
• 0 → Zero matrix of respective order
• i → Number of customers (0 to ∞)
• j → Number of inventory (0 to S)
• k → Production mode (0 or 1)

2.3 Transitions

For the CTMC  = {(N(t), I (t), C(t); t ≥ 0)} is given by


• Customer Arrival (rate λ)

(i, j, 1) → (i + 1, j, 1) for i ≥ 0; 1 ≤ j ≤ S − 1
(i, j, 0) → (i + 1, j, 0) for i ≥ 0; s + 1 ≤ j ≤ S
An (s, S) Production Inventory System with State Dependant Production Rate. . . 219

• Service Process (rate μ)

(i, j, 1) → (i − 1, j − 1, 1) for i ≥ 1; 1 ≤ j ≤ S − 1
(i, s + 1, 0) → (i − 1, s, 1) for i ≥ 1
(i, j, 0) → (i − 1, j − 1, 0) for i ≥ 1; s + 2 ≤ j ≤ S

• Production Process (rate θi )

(i, j, 1) → (i, j + 1, 1) for i ≥ 0; 0 ≤ j ≤ S − 2(i, S − 1, 1)


→ (i, S, 0) for i ≥ 0

• For the remaining transitions we have the rate zero.

2.4 Infinitesimal Generator

The infinitesimal generator for the CTMC  applying the transitions described is
given by
⎛ ⎞
Q R0
⎜ R2 R1 R0 ⎟
⎜ ⎟
⎜ ⎟
G=⎜ R2 R1 R0 ⎟ (1)
⎜ ⎟
⎝ R2 R1 R0 ⎠
.. .. ..
. . .

Here,
• R0 is the arrival matrix that represents the transition rates of customer arrival
• R2 is the service matrix that represents the transition rates of service times and
• R1 represents the system stay in same state.
• R00 = Q is given by
⎛ ⎞
−θ0 θ0
⎜ 0 −(λ + θ1 ) (θ1 ) ⎟
⎜ 0 −(λ + θ2 ) ⎟
⎜ ⎟
⎜ .. .. ⎟
⎜ . . ⎟
⎜ ⎟
⎜ .. .. ⎟
⎜ . . ⎟
Q=⎜
⎜ −(λ + θs−1 ) θs ⎟

⎜ −(λ + θs ) N " ⎟
⎜ ⎟
⎜ 0 Jl Nl

⎜ 0 Jl ⎟
⎜ ⎟
⎜ .. .. ⎟
⎝ . . ⎠
..
. N" 0
220 S. Malini and D. Shajin

with
6 7
N " = 0 θs ,

0
N "" = ,
θs−1
0 0
Nl = for s + 1 ≤ l ≤ S − 1,
0 θl

−(λ) 0
Jl = for s + 1 ≤ l ≤ S − 1,
0 −(λ + θl )

0 0
R0 =
0 λIm−1
μ
R1 = Q − R0
λ
⎛ ⎞
⎜μ 0 ⎟
⎜ ⎟
⎜ μ0 ⎟
⎜ ⎟
⎜ .. .. ⎟
⎜ . . ⎟
⎜ ⎟
⎜ ⎟
⎜ .. .. ⎟
⎜ . . ⎟
⎜ ⎟
R2 = ⎜ μ 0 ⎟
⎜ ⎟
⎜ H1 0 ⎟
⎜ ⎟
⎜ H2 0 ⎟
⎜ ⎟
⎜ Jl ⎟
⎜ ⎟
⎜ .. .. ⎟
⎜ . . ⎟
⎝ ⎠
..
. H3 0

with

μ μ0 6 7
H1 = , H2 = , H3 = μ 0
μ 0μ
An (s, S) Production Inventory System with State Dependant Production Rate. . . 221

3 System Analysis

Here we analyse the state of the system.

3.1 Stability Condition

For the same, we first establish the stability condition by defining D = R0 +R1 +R2
as the generator matrix. Then D has a steady state probability vector. Let it be !.
Then ! can be decomposed as (φ0 , φ1 , φ2 , . . . , φs , φ(s+1)0, φ(s+1)1, . . . , φS0 ). The
system under study is similar to a LIQBD (Level Independent Quasi Birth Death
Process). Hence the condition for stability is given by

! R0 e < ! R2 e
⇒ λ < μ

3.2 Steady State Probability Vector

For evaluating the steady state vector of the process , we assume that the produc-
tion inventory system performs with negligible service time and there are no backlog
of demands. The respective Markov Chain is given by  ˜ = {I (t), C(t); t ≥ 0}
where I (t) is the inventory level and C(t) is the production mode.
The state space of the above state process is given by
s S 
{i} {((i, 0), (i, 1))} S
i=0 i=s=1

The infinitesimal generator of the process Ω̃ = {(I(t), C(t)); t ≥ 0} is given by

$$
\tilde G=\begin{pmatrix}
-\theta_0 & \theta_0 \\
\lambda & -(\lambda+\theta_1) & \theta_1 \\
 & \lambda & -(\lambda+\theta_2) & \ddots \\
 & & \ddots & \ddots & \theta_{s-1} \\
 & & & \lambda & -(\lambda+\theta_s) & N' \\
 & & & & \tilde H_1 & J_{s+1} & N_{s+1} \\
 & & & & & \tilde H_2 & \ddots & \ddots \\
 & & & & & & \ddots & J_{S-1} & N'' \\
 & & & & & & & \tilde H_3 & -\lambda
\end{pmatrix}
$$

where H̃_1 = (λ/μ) H_1, H̃_2 = (λ/μ) H_2, H̃_3 = (λ/μ) H_3 and all other matrices are as described for the process Ω.
˜ let π be the steady state probability vector.
For the process ,

⇒ π = (π0 , π1 , π2 , . . . , πs , π(s+1)0, π(s+1)1, . . . , πS ).

Then π satisfies the equations

π G̃ = 0
πe = 1

On solving, the various components of π can be obtained as

π_{i0} = π_{S0}   for s + 1 ≤ i ≤ S − 1,

$$
\pi_{i1} = \lambda^{s-i}\sum_{j=1}^{S-s}\lambda^{j}\prod_{k=i}^{s+(j-1)}\frac{1}{\theta_k}\;\pi_{S0} \qquad \text{for } 0 \le i \le s-1,
$$
$$
\pi_{i1} = \sum_{j=1}^{S-i}\lambda^{j}\prod_{k=i}^{i+(j-1)}\frac{1}{\theta_k}\;\pi_{S0} \qquad \text{for } s \le i \le S-1.
$$

Applying the normalizing condition,

$$
\pi_{S0} = \left\{ \sum_{i=0}^{s-1}\lambda^{s-i}\sum_{j=1}^{S-s}\lambda^{j}\prod_{k=i}^{s+(j-1)}\frac{1}{\theta_k} \;+\; \sum_{i=s}^{S-1}\sum_{j=1}^{S-i}\lambda^{j}\prod_{k=i}^{i+(j-1)}\frac{1}{\theta_k} \;+\; (S-s) \right\}^{-1}.
$$

The steady state probability vector for the original system under study is computed
using π.
Let x be the steady state probability vector of the original system. Then x must satisfy the equations xG = 0 and xe = 1. Now x can be partitioned as (x_0, x_1, x_2, ...) corresponding to the levels, and each x_i can be further partitioned by writing its components as a product of
• P(number of customers in the system = i), and
• P(number of items in the inventory at the production centre = j, production mode = k),
that is,

x_i(j, k) = p_i π_j(k).

Let x_i = K ρ^i π, i ≥ 0.

Here ρ = λ/μ and K is a constant to be determined. On computation, K is found to be 1 − ρ, as shown below.
Consider the equations xG = 0 and xe = 1. On simplification, xG = 0 gives

x_0 Q + x_1 R_2 = 0                                        (2)
x_{i−1} R_0 + x_i R_1 + x_{i+1} R_2 = 0,   for i ≥ 1.      (3)

Now we check that this system of equations is satisfied. From Eq. (2),

x_0 Q + x_1 R_2 = K π (Q + (λ/μ) R_2).

But from the structure of the matrices we have

Q + (λ/μ) R_2 = G̃   and   π G̃ = 0,
⇒ x_0 Q + x_1 R_2 = 0.

Also, from Eq. (3),

x_{i−1} R_0 + x_i R_1 + x_{i+1} R_2
  = K ρ^{i−1} π [ R_0 + ρ R_1 + ρ² R_2 ]
  = K ρ^{i−1} π [ R_0 + ρ (Q − (μ/λ) R_0) + ρ² R_2 ]
  = K ρ^{i} π [ Q + (λ/μ) R_2 ]
  = K ρ^{i} π G̃
  = 0.

Hence, the condition xG = 0 is satisfied.


Now, applying the normalizing condition xe = 1, we have

K (1 + ρ + ρ² + · · · ) = 1  ⇒  K = 1 − ρ.

Thus, the steady state probability vector x is given by

x_i = (1 − ρ) ρ^i π,   where ρ = λ/μ.

Theorem Under the necessary and sufficient condition λ < μ, the steady state probability vector of the system under consideration has a product form decomposition: it can be written as the product of the probability of the number of customers in the system and the probability of the number of items in the inventory,

x_i = (1 − ρ) ρ^i π,   where ρ = λ/μ,

with π = (π_0, π_1, π_2, ..., π_s, π_{(s+1)0}, π_{(s+1)1}, ..., π_S) and

π_{i0} = π_{S0}   for s + 1 ≤ i ≤ S − 1,

$$
\pi_{i1} = \lambda^{s-i}\sum_{j=1}^{S-s}\lambda^{j}\prod_{k=i}^{s+(j-1)}\frac{1}{\theta_k}\;\pi_{S0}, \qquad 0 \le i \le s-1,
$$
$$
\pi_{i1} = \sum_{j=1}^{S-i}\lambda^{j}\prod_{k=i}^{i+(j-1)}\frac{1}{\theta_k}\;\pi_{S0}, \qquad s \le i \le S-1,
$$
$$
\pi_{S0} = \left\{ \sum_{i=0}^{s-1}\lambda^{s-i}\sum_{j=1}^{S-s}\lambda^{j}\prod_{k=i}^{s+(j-1)}\frac{1}{\theta_k} + \sum_{i=s}^{S-1}\sum_{j=1}^{S-i}\lambda^{j}\prod_{k=i}^{i+(j-1)}\frac{1}{\theta_k} + (S-s) \right\}^{-1}.
$$
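A numerical version of this product form is easy to obtain: instead of evaluating the closed-form products, one may solve π G̃ = 0 directly, using the identity G̃ = Q + (λ/μ) R_2 from the proof. The sketch below reuses the blocks and parameter names introduced in the earlier sketches; it is an illustration, not part of the paper's derivation.

```python
# Sketch: product-form vector x_i = (1 - rho) rho^i pi, with pi obtained from
# pi G_tilde = 0 and G_tilde = Q + (lambda/mu) R2 (the identity used in the proof).
import numpy as np

rho = lam / mu
G_tilde = Q + (lam / mu) * R2
A = np.vstack([G_tilde.T, np.ones(G_tilde.shape[0])])
b = np.zeros(G_tilde.shape[0] + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

def x(i):
    """Stationary probabilities of level i (i customers in the system)."""
    return (1.0 - rho) * rho ** i * pi
```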

4 System Performance Measures

• Mean number of customers in the system,

  E_cust = λ / (μ − λ)

• Expected number of items in the inventory in the system,

  E_invent = Σ_{i=0}^{s} i · π_{i1} + Σ_{i=s+1}^{S−1} i · (π_{i1} + π_{i0}) + S · π_{S0}

• Expected rate of production,

  E_pro.rate = Σ_{i=0}^{s} θ_i · π_i + Σ_{i=s+1}^{S−1} θ_i · π_{i1}

• Expected loss of customers,

  E_c.loss = λ · π_{01}

• Expected production switch-on rate,

  E_s.on = λ · π_S
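These measures are straightforward to evaluate once π is available. The sketch below uses the state ordering and names (level_states, pi, theta, lam, mu, s, S) introduced in the earlier sketches and is offered only as an illustration.

```python
# Sketch: evaluating the performance measures above from pi.
st = level_states(s, S)
E_cust     = lam / (mu - lam)
E_invent   = sum(j * p for (j, k), p in zip(st, pi))
E_pro_rate = sum(theta[j] * p for (j, k), p in zip(st, pi) if k == 1)
E_c_loss   = lam * pi[st.index((0, 1))]          # demand arriving to an empty inventory
E_s_on     = lam * pi[st.index((S, 0))]          # pi_S0
```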

5 Production Cycle

By a production cycle we mean the time period during which the production process is switched on. Let the production process be switched on at the time epoch T_0, at which a service completion brings the inventory level down to s (there were s + 1 items just before the completion). Until the epoch T_0 the production mode is kept off. Once the production process is turned on, it is kept on until the inventory level reaches the maximum capacity S; let T_1 be that time epoch. Thus, the length of the production cycle is T_1 − T_0.

5.1 Mathematical Model

The production cycle can be modelled as the time until absorption for the Markov chain Ω′ = {(N(t), I(t)); t ≥ 0}. Here, N(t) is the number of customers and I(t) is the inventory level in the system. The state space of this process is given by ⋃_{i≥0} {(i, j) : 0 ≤ j ≤ S − 1} ∪ {Δ}, where Δ represents the absorbing state in which the production is switched off. For the Markov chain Ω′ all transitions happen in the same manner as for Ω, except for the absorbing state Δ.
The infinitesimal generator of the process Ω′ is given as

$$
C = \begin{pmatrix} A & -A e \\ 0 & 0 \end{pmatrix}
$$

where
$$
A = \begin{pmatrix}
T_{00} & T_0 &     &     &        \\
T_2    & T_1 & T_0 &     &        \\
       & T_2 & T_1 & T_0 &        \\
       &     & \ddots & \ddots & \ddots
\end{pmatrix}
$$

in which

$$
T_{00} = \begin{pmatrix}
-\theta_0 & \theta_0 \\
 & -(\lambda+\theta_1) & \theta_1 \\
 & & -(\lambda+\theta_2) & \ddots \\
 & & & \ddots & \theta_{S-2} \\
 & & & & -(\lambda+\theta_{S-1})
\end{pmatrix},
$$
$$
T_1 = \begin{pmatrix}
-\theta_0 & \theta_0 \\
 & -(\lambda+\mu+\theta_1) & \theta_1 \\
 & & -(\lambda+\mu+\theta_2) & \ddots \\
 & & & \ddots & \theta_{S-2} \\
 & & & & -(\lambda+\mu+\theta_{S-1})
\end{pmatrix},
$$
$$
T_2 = \begin{pmatrix}
0 \\
\mu & 0 \\
 & \ddots & \ddots \\
 & & \mu & 0
\end{pmatrix},
\qquad
T_0 = \begin{pmatrix} 0 & 0 \\ 0 & \lambda I_{S-1} \end{pmatrix}.
$$

5.2 Steady State Analysis

Let z_i(j) denote the expected time until absorption of the process Ω′ from the state (i, j). Define the row vector z^T = (z_0^T, z_1^T, ...), where each z_i is a column vector with S elements. Also, let τ_i(j) be the probability of switching on the production process at a moment when there are i customers and j items of inventory in the system. Define τ to be the probability vector τ = (τ_0, τ_1, ...).

Here each τ_i is a vector of dimension S × 1. It is to be noted that τ_i(j) = 0 when j ≠ s. Thus, only τ_i(s) needs to be calculated, and it can be obtained from the values of x:

τ_i(s) = (1 − ρ) ρ^i,   for i ≥ 0.

Now, the expected length of the production cycle, denoted by E_PC, is given by

E_PC = τ · z = Σ_{i=0}^{∞} (1 − ρ) ρ^i z_i(s).

To find the vector z, we apply the concept that z satisfies the equations:

Cz = −e
⇒ T00 · z0 + T0 · z1 = −e
T2 · zi−1 + T1 · zi + T0 · zi+1 = −e for i ≥ 1 (4)

The above system of equations is solved by assuming that the production system has instantaneous service without any backlogs. Let the expected length of the production cycle under this assumption be denoted by Ẽ_PC.
To solve for Ẽ_PC, consider the CTMC Ω̃′ = {Y(t); t ≥ 0} whose absorbing state is ∇. Here, Y(t) represents the inventory level during the course of the production cycle at time t. The infinitesimal generator of the Markov chain Ω̃′ is given by

$$
C = \begin{pmatrix} J & -J e \\ 0 & 0 \end{pmatrix}
$$

where
$$
J = \begin{pmatrix}
-\theta_0 & \theta_0 \\
\lambda & -(\lambda+\theta_1) & \theta_1 \\
 & \lambda & -(\lambda+\theta_2) & \ddots \\
 & & \ddots & \ddots & \theta_{S-2} \\
 & & & \lambda & -(\lambda+\theta_{S-1})
\end{pmatrix}
$$

Let −J^{−1} e = (Y_0, Y_1, ..., Y_{S−1})^T be the column vector whose (s + 1)th entry is Ẽ_PC. From the relation J(−J^{−1} e) = −e we arrive at the following set of equations:

−θ_0 · Y_0 + θ_0 · Y_1 = −1
λ · Y_{i−1} − (λ + θ_i) · Y_i + θ_i · Y_{i+1} = −1,   for 1 ≤ i ≤ S − 2
λ · Y_{S−2} − (λ + θ_{S−1}) · Y_{S−1} = −1
228 S. Malini and D. Shajin

On computation, the above set of equations yields

$$
Y_i - Y_{i+1} = \sum_{j=0}^{i}\lambda^{i-j}\prod_{k=j}^{i}\frac{1}{\theta_k}, \qquad 0 \le i \le S-2,
$$
$$
Y_{S-1} = \sum_{j=0}^{S-1}\lambda^{S-1-j}\prod_{k=j}^{S-1}\frac{1}{\theta_k}.
$$

A solution to the above set of equations gives

$$
Y_s = \sum_{l=s}^{S-2}\sum_{j=0}^{l}\lambda^{l-j}\prod_{k=j}^{l}\frac{1}{\theta_k} \;+\; \sum_{j=0}^{S-1}\lambda^{S-1-j}\prod_{k=j}^{S-1}\frac{1}{\theta_k}.
$$

This gives the expected length Ẽ_PC. Now, multiplying the equations in (4) by (λ/μ)^i and summing over i ≥ 0, we have

$$
\Big[T_{00} + \tfrac{\lambda}{\mu} T_2\Big] z_0 + \sum_{i=0}^{\infty}\Big(\tfrac{\lambda}{\mu}\Big)^{i}\Big[T_0 + \tfrac{\lambda}{\mu} T_1 + \Big(\tfrac{\lambda}{\mu}\Big)^{2} T_2\Big] z_{i+1} = -\sum_{i=0}^{\infty}\Big(\tfrac{\lambda}{\mu}\Big)^{i} e. \qquad (5)
$$

From the structure of the matrices it can be observed that

$$
T_{00} + \tfrac{\lambda}{\mu} T_2 = J, \qquad
T_0 + \tfrac{\lambda}{\mu} T_1 = \tfrac{\lambda}{\mu} T_{00}, \qquad
T_0 + \tfrac{\lambda}{\mu} T_1 + \Big(\tfrac{\lambda}{\mu}\Big)^{2} T_2 = \tfrac{\lambda}{\mu} J.
$$

Applying these relations to Eq. (5), it reduces to

$$
\sum_{i=0}^{\infty}\Big(\tfrac{\lambda}{\mu}\Big)^{i} J\, z_i = -\frac{1}{K}\, e
\;\Rightarrow\; K \sum_{i=0}^{\infty}\Big(\tfrac{\lambda}{\mu}\Big)^{i} z_i = -J^{-1} e. \qquad (6)
$$

From Eqs. (4) and (6) it follows that the expected length of the production cycle and the expected length of the production cycle with instantaneous service are equal. Thus,

$$
E_{PC} = \sum_{l=s}^{S-2}\sum_{j=0}^{l}\lambda^{l-j}\prod_{k=j}^{l}\frac{1}{\theta_k} \;+\; \sum_{j=0}^{S-1}\lambda^{S-1-j}\prod_{k=j}^{S-1}\frac{1}{\theta_k}.
$$
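The closed-form expression lends itself to direct evaluation, and it can be cross-checked against −J^{−1}e for the instantaneous-service chain. The sketch below is our own illustration (function names and the S ≥ 2 assumption are not from the paper).

```python
# Sketch: expected production-cycle length.  expected_cycle_length evaluates the
# closed-form expression above; cycle_length_via_J computes -J^{-1}e numerically
# and returns its (s+1)th entry as an independent check (assumes S >= 2).
import numpy as np

def expected_cycle_length(s, S, lam, theta):
    def term(l):                                   # sum_{j=0}^{l} lam^{l-j} prod_{k=j}^{l} 1/theta_k
        return sum(lam ** (l - j) * np.prod([1.0 / theta[k] for k in range(j, l + 1)])
                   for j in range(l + 1))
    return sum(term(l) for l in range(s, S - 1)) + term(S - 1)

def cycle_length_via_J(s, S, lam, theta):
    J = np.zeros((S, S))
    J[0, 0], J[0, 1] = -theta[0], theta[0]
    for i in range(1, S - 1):
        J[i, i - 1], J[i, i], J[i, i + 1] = lam, -(lam + theta[i]), theta[i]
    J[S - 1, S - 2], J[S - 1, S - 1] = lam, -(lam + theta[S - 1])
    Y = -np.linalg.solve(J, np.ones(S))            # expected absorption times Y_0, ..., Y_{S-1}
    return Y[s]

# e.g. with the parameters of the earlier sketches the two values coincide:
# expected_cycle_length(3, 6, 5.0, theta) == cycle_length_via_J(3, 6, 5.0, theta)
```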

6 Analysis of Expected Cost Function per Unit Time

Based on the above performance measures, an expected cost function per unit time is constructed and the optimal values of s and S are evaluated. The cost function is given by

F_c = E_invent · C_invent + E_pro.rate · C_pro.rate + E_c.loss · C_c.loss + M · E_s.on

where
• C_invent is the holding cost per unit of inventory per unit time,
• C_pro.rate is the cost of producing one unit of inventory per unit time,
• C_c.loss is the cost incurred due to the loss of customers, and
• M is the fixed cost for starting the production.
Note: In all the cases below we assume that the variable production rates are related as θ_i = (S − (i − 1)) · θ for 1 ≤ i ≤ S.
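The numerical study in the following subsections can be reproduced along these lines. The sketch below reuses blocks and level_states from the earlier sketches and takes θ_j = (S − j)·θ as an illustrative reading of the relation above; it scans F_c over S and picks the minimiser.

```python
# Sketch: scanning the cost function F_c over S for fixed s, lambda, mu, theta.
import numpy as np

def cost(s, S, lam, mu, theta0, C_inv=50.0, C_pro=200.0, C_loss=400.0, M=2000.0):
    theta = [(S - j) * theta0 for j in range(S)]       # illustrative state-dependent rates
    Q, R0, R1, R2 = blocks(s, S, lam, mu, theta)
    G_tilde = Q + (lam / mu) * R2
    A = np.vstack([G_tilde.T, np.ones(G_tilde.shape[0])])
    rhs = np.zeros(G_tilde.shape[0] + 1); rhs[-1] = 1.0
    pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
    st = level_states(s, S)
    E_inv  = sum(j * p for (j, k), p in zip(st, pi))
    E_pro  = sum(theta[j] * p for (j, k), p in zip(st, pi) if k == 1)
    E_loss = lam * pi[st.index((0, 1))]
    E_on   = lam * pi[st.index((S, 0))]
    return E_inv * C_inv + E_pro * C_pro + E_loss * C_loss + M * E_on

best_S = min(range(6, 21), key=lambda S: cost(3, S, 5.0, 12.0, 6.0))
```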

6.1 Effect of Maximum Inventory Level S

The effect of the maximum inventory level S on the cost function and the various performance measures discussed above is tabulated below. Here, the values of s, λ, μ, θ and the other basic costs are held constant throughout the calculations.
Case 1: When θ = 6
An optimal value for the cost function is obtained and is indicated in bold letters
as in Table 1. From the table it can be concluded that as the value of S increases,
the value of Einvent also increases. Coming to production rates, as the maximum
inventory level increases, the expected production rate also increases. Also it can
be seen that as the maximum inventory level increases, the expected loss is found

Table 1 Effect of max inventory level S for θ = 6, λ = 5, μ = 12, s = 3, C_invent = 50, C_pro.rate = 200, C_c.loss = 400, M = 2000

S    E_invent    E_pro.rate    E_c.loss    E_PC    E_s.on    F_C


6 4.65957 7.665957 0.001216912 937 0.400705 2568.066
7 5.28534 8.111103 0.000357524 484.3 × 101 0.206020 2298.671
9 6.476668 10.131786 0.000061156 122.030 × 103 0.206020 2519.370
10 7.052521 11.317054 0.000031314 610.311 × 103 0.061866 2739.782
13 8.731847 15.841013 0.000000651 762.939 × 105 0.030666 3666.131
20 12.501280 26.930208 0.000000584 506.046 × 1015 0.010971 6033.048

Table 2 Effect of max inventory level S for θ = 2, λ = 5, μ = 12, s = 3, C_invent = 50, C_pro.rate = 200, C_c.loss = 400, M = 2000

S    E_invent    E_pro.rate    E_c.loss    E_PC    E_s.on    F_C


7 4.250082 5.124119 0.059368 4843 0.797907 2856.891
9 5.904106 3.654156 0.008299 1220.30 × 102 0.267705 1564.768
10 6.709735 3.271005 0.003417 6103.11 × 102 0.157480 1306.016
13 9.031418 3.021602 0.000533 7629.39 × 104 0.058634 1173.374
15 10.510924 3.371441 0.000235 1907.35 × 106 0.041093 1282.116
20 14.028628 4.774770 0.000050 5960.46 × 109 0.022298 1701.004

to follow a decreasing pattern. This is because there is less chance for backlogs
of demands when there is a large capacity level. As the maximum inventory level
increases, the length of production cycle also increases, i.e., the time for which
the system must be kept on to reach maximum level will increase with S. Another
observation is that the expected rate at which production is switched on decreases
with increasing value of S, because there is less chance that the inventory level falls
beyond the replenishment level.
Case 2: When θ = 2
An optimal value for the cost function and expected production rate is obtained
and is indicated in bold letters as in Table 2. In this case also, as the value of S
increases, the value of Einvent also increases. Also, as the maximum inventory level
increases, the expected loss is found to follow a decreasing pattern. As the maximum
inventory level increases, the length of production cycle also increases, i.e. the time
for which the system must be kept on to reach maximum level will increase with
S. The expected rate at which production is switched on decreases with increasing
value of S.

6.1.1 Comparison Chart for Customer Loss

Table 3 compares the expected customer loss with that of a system following a fixed production rate:

Table 3 Effect of max inventory level S for θ = 2.5, λ = 5, s = 10, C_invent = 50, C_pro.rate = 200, C_c.loss = 400, M = 2000

S     Var pro rate        Fixed pro rate
12    1.9082 × 10^−10     0.033
13    3.2634 × 10^−11     0.030
15    8.790 × 10^−12      0.024
16    6.685 × 10^−12      0.022

6.2 Effect of Minimum Inventory Level s

The effect of minimum inventory level s on cost function and various performance
measures discussed above is tabulated as per the tables. Here, the values of S, λ, μ,
θ and other basic costs are assumed to be constant throughout the calculations.
Case 1: When θ = 2
The cost function is obtained and increases with increase in value of s as in Table 4.
Meanwhile, as the value of s increases, the value of Einvent also increases, because
the stock is being replenished more frequently. Also, as the minimum inventory level
increases, the expected production rate increases. As the minimum inventory level
increases, the expected loss is found to follow a decreasing pattern, as inventories
are added to the system within a shorter interval of time (i.e., the smaller the value of
S − s, the more frequent is the addition of inventory). Also, as the minimum inventory
level increases, the length of production cycle decreases, i.e., the time for which the
system must be kept on to reach maximum level will decrease with s. The expected
rate at which production is switched on increases with increasing value of s. This is
because as s approaches to S, there will be more chance of switching on the system.

Case 2: When θ = 10
The cost function is obtained and decreases with increase in value of s (Table 5).
Other conclusions obtained include: As the value of s increases, the value of
Einvent also increases. But as the minimum inventory level increases, the expected
production rate decreases. As the minimum inventory level increases, the expected
loss is found to follow a decreasing pattern. Also, as the minimum inventory level
increases, the length of production cycle decreases, i.e., the time for which the
system must be kept on to reach maximum level will decrease with s. Similarly, the
expected rate at which production is switched on increases with increasing value
of s.

Table 4 Effect of min inventory level s for θ = 2, λ = 5, μ = 12, S = 13, C_invent = 50, C_pro.rate = 200, C_c.loss = 400, M = 2000

s    E_invent    E_pro.rate    E_c.loss    E_PC    E_s.on    F_C


5 9.498185 3.000086 0.0000068 76,292,967 0.102597 1280.147
6 9.708196 3.274726 0.0000031 76,289,061 0.157450 1455.268
7 9.900285 3.663149 0.0000018 76,269,530 0.267563 1762.778
8 10.072336 4.437888 0.0000013 76,171,874 0.473054 2337.309
9 10.221815 5.182550 0.0000011 75,683,593 0.793881 3135.367
10 10.345635 5.881182 0.000001 73,242,187 1.152411 3998.346

Table 5 Effect of min inventory level s for θ = 10, λ = 5, μ = 12, S = 13, C_invent = 50, C_pro.rate = 200, C_c.loss = 400, M = 2000

s    E_invent    E_pro.rate    E_c.loss    E_PC    E_s.on    F_C


3 8.662487 23.42065 0.000001645105 76,293,904 0.022583 5162.4225
5 9.588038 18.911789 0.000000007112 76,292,967 0.035036 4331.833
6 10.043674 16.491345 0.000000000602 76,289,061 0.045657 3891.766
7 10.492828 14.631114 0.000000000082 76,269,530 0.0621868 3575.238
9 11.36246 10.944421 0.000000000030 75,683,593 0.1456481 3048.303

Table 6 Effect of min inventory level s for θ = 2.5, λ = 5, S = 15, C_invent = 50, C_pro.rate = 200, C_c.loss = 400, M = 2000

s    Var pro rate       Fixed pro rate
2    2.4680 × 10^−5     0.088
3    1.722 × 10^−6      0.073
4    1.3978 × 10^−9     0.061
6    1.2074 × 10^−9     0.044

6.2.1 Comparison Chart for Customer Loss

Table 6 compares the expected customer loss with that of a system following a fixed production rate.

7 Conclusion

This paper analyses an (s, S) production inventory system with state dependent production rate and lost sales. The steady state analysis is performed by assuming that the system has negligible service time and no backlog of demands, and a product form solution is developed. Further, the expected length of the production cycle is derived. An expected cost function per unit time is also developed, with which the optimal values of s and S are calculated numerically. From the numerical examples it is evident that the loss rate can be considerably decreased with state dependent production rates, along with an optimal value for the expected cost function. It is to be noted that the expected customer loss in the system reduces considerably with an increase in production rate. One can extend the study to derive an optimal analytical expression for the dependency between the number of items in the inventory and the production rates.

Analysis of a MAP Risk Model with
Stochastic Incomes, Inter-Dependent
Phase-Type Claims and a Constant
Barrier

A. S. Dibu and M. J. Jacob

Abstract Inspired by the problems with random income feature, this paper focuses
on an insurance risk model with MAP inter-arrival time for premiums as well
as claims. We study the model for a convex combination of two types of inter-
dependent Phase-type claims, where the probability of claim switching is directly
associated with the inter-arrival time of claims. Furthermore, the surplus process of
this model is assumed to be restricted by a horizontal barrier “b” above the initial
surplus “u”. The transient analysis of the corresponding Markovian fluid flow model
is considered to develop the integral equations governing the Gerber–Shiu function
and the expected discounted dividends paid until ruin. The closed-form solutions for
these integral equations are obtained in terms of Lundberg roots. When the premium
sizes are Phase-type distributed, the solutions are explicit at “u = b”. For “u ≤ b”,
the solutions are explicit when the premium sizes are distributed exponentially.
Finally, to validate and present the tractability of these solution expressions, some
numerical illustrations are provided in individual cases.

Keywords Markovian arrival process · Random incomes · Phase-type claims ·


Inter-dependent claims

1 Introduction

The insurance risk model considered in this paper realises a surplus process with
random incomes. The complexity of the premium process structures in various
financial and insurance markets is investigated in [24]. Instead of a constant
premium rate, Boikov [7] linked the stochastic premium process in a surplus
process to characterise more realistic cash inflows. Temnov [28] estimated the
metric distances between the ruin probabilities of risk models corresponding to

A. S. Dibu () · M. J. Jacob


Department of Mathematics, National Institute of Technology Calicut, Kozhikode, Kerala, India
e-mail: [email protected]


both stochastic and constant premium processes. Bao and Ye [6] and Labbé and
Sendova [19] dealt with the Gerber–Shiu analysis in the compound Poisson risk
model with compound Poisson premiums. Further exercises in risk processes with
random incomes were done by Hao and Yang [17] and Jieming et al. [18] in delayed
claim strategies, Labbé et al. [20] in a model with amount sizes to take positive as
well as negative values, Gao and Wu [16] in a model with two classes of delayed
claims and Shija and Jacob [26] in the Markov-modulated model.
In recent years, the risk processes with multi-phase arrivals have got much
attention in the literature. Several strategic surplus processes are modelled in a
multi-phase environment with the aid of a general class of arrival process so-called
the Markovian arrival processes (MAP) for claim arrivals. The versatile MAP is
introduced by Neuts [25] and Latouche and Ramaswami [22] developed the matrix-
analytic methods to analyse the Markovian fluid flow process. The Gerber–Shiu
analysis of the risk processes under Markovian set-up is done in [1, 29] under
absolute ruin with debit interest and heavy-tailed claim sizes, Dong and Liu [15]
in the latent tax model. Furthermore, the generalised versions of the Gerber–Shiu
function have been investigated by Cheung and Landriault [13] using a reward-
based measure and Cheung and Feng [11] using a cost-utility measure. Meanwhile,
Landriault and Shi [21] analysed the occupation times and Li et al. [23] analysed the
probability function of the number of claims up to ruin in models having the MAP
inter-arrival time for claims. Apart from all these papers, we consider a multi-phase
cash inflow by modelling the premium arrivals into the MAP version.
Generally, the Phase-type (PH) amount sizes are fused to the MAP inter-
arrival times, since both have multi-phase structure and the matrix-analytic methods
are developed to analyse the Markovian fluid flow models. These methodologies
are inspired in [4] to analyse a risk process using corresponding fluid queues.
Technically, this kind of fusing will establish correlation between claims and inter-
arrival times (see [1, 5] and [2], and references therein for further details).
Risk processes with barrier strategy are proposed by De Finetti [14]. Under some
assumptions, he observed that the maximum dividend is availed to the shareholders
while implementing the barrier strategy. Some of the recent problems handled in the
MAP risk model under dividend strategy are as follows: with perturbation, Cheung
et al. [12] studied a barrier problem and Cheng and Wang [10] studied the threshold
dividend problem. Meanwhile, in the non-perturbed model, Ahn et al. [2] studied
a horizontal barrier problem and Zhang et al. [30] studied an Erlangised dividend
problem. The main quantities of interest in all the recent works are the expected
discounted dividends paid out until ruin and the Gerber–Shiu function.
The remaining sections in this paper are organised as follows. The next section
introduces the model assumptions and notations. In Sect. 3, the governing renewal
equations satisfying the two measures of interest, the expected discounted dividends
until ruin and the Gerber–Shiu function, are established. The closed-form solutions
are obtained for the two measures by taking barrier as initial surplus level u = b,
in Sect. 4. For exponential premiums, the closed-form solutions at the general initial
surplus level (u ≤ b) are then obtained in Sect. 5. The numerical examples for a

scalar and two-state versions are illustrated in Sect. 6. The concluding statements
are remarked in Sect. 7.

2 The MAP Premium Model with Horizontal Barrier


Strategy

In this paper, we consider a random income surplus process given by


$$
U(t) = u + \sum_{i=1}^{N_p(t)} X_i - \sum_{j=1}^{N_c(t)} Y_j \qquad (1)
$$

which initiates at U(0) = u. The process (1) has positive and negative jumps due to the aggregate premium process Σ_{i=1}^{N_p(t)} X_i and the aggregate claim process Σ_{j=1}^{N_c(t)} Y_j, respectively. The inter-arrival times of premium and claim, and the
amount sizes of premium and claim are all assumed to be mutually independent. In
all the aforementioned papers with stochastic income, the premium is assumed to
have a stochastic behaviour, but all the cash inflows are realised through a single
phase/channel. Instead of assuming a single phase, a multi-phase cash inflow to the
surplus process is realised in this model.
For developing the multi-phase model, the inter-arrival time of premiums, Tp ,
is assumed to follow a MAP. A MAP having n ≥ 1 transient phases with repre-
sentation MAPn (β, E0 , E1 ) is a two-dimensional Markov process {(Np (t), Jp (t))}
having state space N × {1, 2, . . . , n} for t ≥ 0. Here, the counting process Np (t)
denotes the number of premium arrivals and Jp (t) is the state of the underlying
continuous time Markov chain (CTMC) of premium arrivals at time t ≥ 0. The
surplus process U (t) is thus capable of realising the cash inflows through countably
infinite number of territories (regions), offices or the financial products of the
insurance company. Furthermore, the claim arrival is governed by the MAP inter-
arrival time, Tc with the representation MAPm (α, D0 , D1 ) having m ≥ 1 transient
phases. The two-dimensional Markov process associated with the MAP of claims is
denoted by {(Nc (t), Jc (t))} having state space N × {1, 2, . . . , m} for t ≥ 0 in which
Nc (t) is the number of claim arrivals and Jc (t) is the state of underlying CTMC of
claim arrivals at time t ≥ 0 (see [25] and [22] for further details about MAP).
The process defined by Eq. (1) will either have a positive jump due to a premium
arrival or have a negative jump due to a claim arrival at the first renewal time τ 1 .
To bring out what happens at τ 1 , define τ = min(Tp , Tc ) which will again follow a
MAP with representation MAPmn ([α ⊗β], F0 , F1 ) having mn ≥ 1 transient phases.
Here,

F0 = D0 ⊕ E0 = D0 ⊗ I n + E0 ⊗ I m
F1 = D1 ⊕ E1 = D1 ⊗ I n + E1 ⊗ I m

in which I m and I n denote identity matrices of order m and n, respectively, ⊕ is


the Kronecker sum and ⊗ is the Kronecker product. Then for t ≥ 0, the inter-arrival
times τ is a two-dimensional Markov process {(N(t), J (t))} having state space,
N × E where E = {1, 2, . . . , mn}. Then, N(t) denotes the number of renewals and
J (t) is the state of underlying CTMC of renewals at time t ≥ 0 .
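The Kronecker-sum construction of the superposed phase process is mechanical. The short sketch below builds F_0 and F_1 for the two-state matrices of Example 2, using the usual convention A ⊕ B = A ⊗ I + I ⊗ B; it is an illustration, not code from the paper.

```python
# Sketch: F0 = D0 (+) E0 and F1 = D1 (+) E1 via Kronecker sums (numpy).
import numpy as np

D0 = np.array([[-0.9, 0.1], [0.0, -0.8]]);  D1 = np.array([[0.7, 0.1], [0.1, 0.7]])
E0 = np.array([[-1.0, 0.3], [0.0, -0.7]]);  E1 = np.array([[0.4, 0.3], [0.3, 0.4]])
m, n = D0.shape[0], E0.shape[0]

F0 = np.kron(D0, np.eye(n)) + np.kron(np.eye(m), E0)
F1 = np.kron(D1, np.eye(n)) + np.kron(np.eye(m), E1)
assert np.allclose((F0 + F1).sum(axis=1), 0.0)       # F0 + F1 is a conservative generator
```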
Due to the matrix structure of the MAP premium inter-arrival time, the premium
sizes, {Xi }i∈N+ , are assumed to follow a PH distribution with representation
PHm1 (γ , G) where the transition between the transient phases of the CTMC is given
by m1 -dimensional square matrix G and the initial probability vector is given by
the m1 -dimensional row vector γ . For the minimum of two MAP arrivals, τ , the
transitions of related fluid queue process will be governed by an irreducible CTMC
with transition rate matrix
 
$$
T = \begin{pmatrix} T_{11\,(mn\times mn)} & T_{12\,(mn\times mnm_1)} \\ T_{21\,(mnm_1\times mn)} & T_{22\,(mnm_1\times mnm_1)} \end{pmatrix}
  = \begin{pmatrix} F_0 & \gamma \otimes F_1 \\ g \otimes I_{mn} & G \otimes I_{mn} \end{pmatrix} \qquad (2)
$$

where I_{mn} denotes the identity matrix of order mn and g = −G e^1_{m_1}, in which e^1_{m_1} denotes the m_1-dimensional column vector of ones. The PH claims that
disturb the surplus process (1) are of two classes—one kind with representation
PHn1 (γ 1 , G1 ) and the another kind with representation PHn2 (γ 2 , G2 ). The related
fluid process thus comprises of an irreducible CTMC which generates two transition
rate matrices—one for the first kind which is given by
 
T"11mn×mn T"12mn×mnn F0 γ 1 ⊗ F1
T" = 1 =
T"21mnn T"22mnn g1 ⊗ I mn G1 ⊗ I mn
1 ×mn 1 ×mnn1

and the another kind which is given by


 
""
T""11mn×mn T""12mn×mnn F0 γ 2 ⊗ F1
T = 2 =
T""21mnn T""22mnn g2 ⊗ I mn G2 ⊗ I mn
2 ×mn 2 ×mnn2

where g_1 = −G_1 e^1_{n_1} and g_2 = −G_2 e^1_{n_2}, in which e^1_{n_1} and e^1_{n_2} denote, respectively, the n_1-dimensional and n_2-dimensional column vectors of ones. Then the probability density functions of premiums and the two classes of claims satisfy the condition

$$
F_1 = \int_{x=0}^{\infty} T_{12}\, e^{T_{22} x}\, T_{21}\, dx
    = \int_{x=0}^{\infty} T'_{12}\, e^{T'_{22} x}\, T'_{21}\, dx
    = \int_{x=0}^{\infty} T''_{12}\, e^{T''_{22} x}\, T''_{21}\, dx.
$$

During the temporary time that the CTMC stays in the states governed by the
matrices T11 , T"11 and T""11 , the fluid level stays idle with no increase or decrease.
On the other hand, the fluid process characterised by transition rates of positive
jumps in the states governed by T22 and negative jumps in the states governed by
T"22 and T""22 . The fluid queue process and the surplus process are connected by

relating the duration of the idle fluid level with the duration of the idle surplus level
and, by freezing the time evolution upon a transition from idle level to increasing
(decreasing), the increasing (decreasing) fluid flow with the premium (claim) size
random variables (see [1, 4, 5] for further details).
In this paper, the inter-dependent claims structure proposed by Boudreault et al.
[8] is generalised to the MAP/PH set-up which satisfies
$$
F_1 = \gamma_1 \otimes F_1\, e^{[\Lambda\otimes I_n]t} \int_{y=0}^{\infty} e^{T'_{22} y}\, T'_{21}\, dy
    \;+\; \gamma_2 \otimes F_1\, \big(I_{mn} - e^{[\Lambda\otimes I_n]t}\big) \int_{y=0}^{\infty} e^{T''_{22} y}\, T''_{21}\, dy
$$

where Λ = diag[−λi ] for i = 1, 2, . . . , m. This inter-dependency implies that


the probability of a claim from first (second) kind is an exponentially decreasing
(increasing) function of the time separating this event from the last claim arrival time
and the λ_i's denote the Bayesian estimates of the rate of exponential decrease (increase)
in the ith claim-arriving phase.
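In the scalar case (m = 1) the switching mechanism is easy to visualise: a claim occurring a time t after the previous one is of the first kind with weight e^{−λ_1 t} and of the second kind with the complementary weight. A brief illustration (the value of λ_1 is arbitrary):

```python
# Sketch (scalar case m = 1): claim-type weights as a function of the inter-claim time t.
import numpy as np
lam1 = 2.0                                  # illustrative Bayesian decay rate
t = np.linspace(0.0, 3.0, 7)
p_type1 = np.exp(-lam1 * t)                 # exponentially decreasing weight of type-1 claims
p_type2 = 1.0 - p_type1                     # exponentially increasing weight of type-2 claims
```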
Remark 1 The inter-dependent strategy of two classes of claim sizes can be adapted
to the premium sizes also. We restrict the inter-dependence assumption for the claim
sizes to avoid redundancy in the analysis.
The security loading factor, θ = (average cash inflow/average cash outflow)−1,
is assumed to be positive where the average cash inflow is given by
$$
\big[E\{T_p\}\big]^{-1} E\{X\} = \pi_p\, [-\gamma \otimes E_1]\, [G \otimes I_n]^{-1}\, e^1_{nm_1}
$$

and the average cash outflow is given by


$$
\big[E\{T_c\}\big]^{-1} E\{Y\} = \pi_c \int_{t=0}^{\infty} \big[\pi_c \otimes D_1 e^{\Lambda t}\big]\, e^{[D_0\otimes I_m]t}\, \big[-\gamma_1 \otimes D_1 \otimes I_m\big] \big[G_1 \otimes I_{m^2}\big]^{-1} e^1_{m^2 n_1}\, dt
\;+\; \pi_c \int_{t=0}^{\infty} \big[\pi_c \otimes D_1 \big(I_m - e^{\Lambda t}\big)\big]\, e^{[D_0\otimes I_m]t}\, \big[-\gamma_2 \otimes D_1 \otimes I_m\big] \big[G_2 \otimes I_{m^2}\big]^{-1} e^1_{m^2 n_2}\, dt
$$

where π_p and π_c are the stationary probability row vectors of the CTMCs J_p = {J_p(t); t ≥ 0} and J_c = {J_c(t); t ≥ 0}, respectively. Furthermore, e^1_{m^2 n_1} and e^1_{m^2 n_2} denote the m²n_1-dimensional and m²n_2-dimensional column vectors of ones, respectively, and I_{m^2} denotes the identity matrix of order m².

The surplus process (1) is further restricted under the horizontal barrier, b ≥ u,
satisfying the equation
$$
dU_b(t) = \begin{cases}
d\Big(\displaystyle\sum_{i=1}^{N_p(t)} X_i\Big) - d\Big(\displaystyle\sum_{j=1}^{N_c(t)} Y_j\Big), & U_b(t) \le b, \\[6pt]
- d\Big(\displaystyle\sum_{j=1}^{N_c(t)} Y_j\Big), & U_b(t) > b.
\end{cases}
$$

Under the barrier-restricted strategy, the insurance company can undertake the
economic interest of the shareholders by paying out the extra surplus above the
constant barrier b as dividends. For expressing the revised risk process Ub (t)
formally (as of Fig. 1), let χ(t) = sup{U (v) | 0 ≤ v ≤ t} be the running maximum
of surplus process U (t). Denoting Db (t) = max{χ(t) − b, 0} as the aggregate
dividends paid by the company up to time t, the revised risk process Ub (t) can
be thus expressed as

Ub (t) = U (t) − Db (t) (3)

with the understanding that the aggregate dividends paid by the company up to time
t ≥ 0 is zero for Ub (t) < b.
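The bookkeeping behind U_b(t) = U(t) − D_b(t) is easiest to see in a simulation. The sketch below is an illustration only: it replaces the MAP/PH premium and claim streams by independent Poisson processes with exponential sizes (all parameter values are arbitrary) and estimates the expected discounted dividends by Monte Carlo.

```python
# Sketch (illustration only): simulate the barrier-modified surplus and the
# discounted dividends paid until ruin, with simplified Poisson/exponential input.
import numpy as np
rng = np.random.default_rng(1)

def discounted_dividends(u, b, delta, lam_p, mean_X, lam_c, mean_Y, horizon=1000.0):
    t, U, div = 0.0, u, 0.0
    while t < horizon:
        w_p = rng.exponential(1.0 / lam_p)            # time to next premium
        w_c = rng.exponential(1.0 / lam_c)            # time to next claim
        t += min(w_p, w_c)
        if w_p < w_c:                                  # premium arrives
            jump = rng.exponential(mean_X)
            div += np.exp(-delta * t) * max(U + jump - b, 0.0)   # overflow above b is paid out
            U = min(U + jump, b)
        else:                                          # claim arrives
            U -= rng.exponential(mean_Y)
            if U < 0:                                  # ruin time T reached
                return div
    return div

est = np.mean([discounted_dividends(5.0, 10.0, 0.05, 1.0, 1.2, 0.9, 1.0) for _ in range(2000)])
```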
For the amended process given by Eq. (3), we analyse the expected discounted
dividends paid until ruin (EDDR) and the Gerber–Shiu function (GSF). For this

Fig. 1 A realisation of the surplus process U_b(t), showing the dividend payments, the surplus prior to ruin U_b(T−), the ruin time T and the deficit at ruin |U_b(T)|



analysis, we denote the classical ultimate ruin time as T = inf{t ≥ 0 : U_b(t) < 0} (T = ∞ if the set is empty). Then for the discounting factor δ ≥ 0,

$$
D_{\delta,b}(T) = \int_{t=0}^{T} e^{-\delta t}\, dD_b(t)
$$

defines the discounted value of the aggregate dividends paid out until the time of ruin T. The EDDR for the initial surplus level u is defined as

$$
\nu_{\delta,b}(u) = E\big[D_{\delta,b}(T) \mid U_b(0) = u\big] = [\alpha \otimes \beta]\, V^1_{\delta,b}(u), \qquad 0 \le u \le b. \qquad (4)
$$

Here, V^1_{\delta,b}(u) denotes the mn-dimensional column vector for which

$$
\big[V^1_{\delta,b}(u)\big]_i = E\big[D_{\delta,b}(T) \mid U_b(0) = u,\ J(0) = i\big]. \qquad (5)
$$

Equation (5) determines the EDDR for the initial capital u and the initial phase i ∈ E. Furthermore, let |U_b(T)| and U_b(T^-) be the deficit at ruin and the surplus immediately before ruin, respectively. Then for δ ≥ 0, the mn-dimensional column vector Φ^1_{\delta,b}(u) is defined by

$$
\big[\Phi^1_{\delta,b}(u)\big]_i = E\big[e^{-\delta T}\, w(U_b(T^-), |U_b(T)|)\, \mathbf{1}(T < \infty) \mid U_b(0) = u,\ J(0) = i\big],
$$

which determines the GSF with the initial surplus level u and the initial phase i ∈ E. The function w(x, y) is the penalty the insurer abides to settle whenever the surplus process confirms a ruin time (T < ∞). As a consequence of the arguments from [1], the Gerber-Shiu function is given by

$$
\phi_{\delta,b}(u) = [\alpha \otimes \beta]\, \Phi^1_{\delta,b}(u), \qquad 0 \le u \le b. \qquad (6)
$$

3 The Governing Renewal Equations for the Expected Discounted Dividends Paid Out Up To Ruin and the Gerber-Shiu Function

The section focuses on developing two renewal equations that govern the pro-
cess (3): one which satisfies the EDDR and the other which satisfies the GSF with
the initial surplus level Ub (0) = u. For this development, Theorems 1 and 2 will
provide the existence of integral equations that satisfy the mn-dimensional vectors
V1 1
δ,b (u) and Φ δ,b (u), respectively.

Theorem 1 For 0 ≤ u ≤ b, the measure V 1


δ,b (u) satisfies the integral equation

 b−u  ∞
F0 V 1
δ,b (u) = F T12 eT22 x T21 V 1
δ,b (u + x) dx + T12 eT22 x T21
x=0 x=b−u

× (x + u − b) e1
mn +V1
δ,b (b) dx + C1
δ,b (u) (7)

in which e1 mn denotes the mn-dimensional column vector of ones. Here, F =


[E0 ⊗ I m ] [δI mn − F0 ]−1 , F" = [D0 ⊗ I n ] [δI mn − ([Λ ⊗ I n ] + F0 )]−1 and F"" =
[D0 ⊗ I n ] [δI mn − F0 ]−1 are mn-dimensional square matrices. Furthermore,
 u  
" ""
C1
δ,b (u) = F" T"12 eT22 y T"21 − T""12 eT22 y T""21
y=0
""
+ F"" T""12 eT22 y T""21 V 1
δ,b (u − y) dy

is an mn-dimensional column vector which furnishes the renewal of EDDR process


when a claim arrives at τ 1 .
Proof In the range of initial surplus 0 ≤ u < b, the first renewal on the process (3)
may be due to an arrival of either a claim or a premium. Conditioning on τ 1 , the
time of first renewal (claim or premium arrival time), it follows
 ∞ 6  7
V1
δ,b (u) = e−δt β ⊗ P r τ 1 = t, τ 1 = Tp
t =0
 b−u
eT22 x T21 V 1
δ,b (u + x) dx
x=0
 ∞
+ eT22 x T21 (x + u − b)e1 1
mn + V δ,b (b) dx dt
x=b−u
 ∞
−δt [Λ⊗I n ]t
+ e e [α ⊗ P r (τ 1 = t, τ 1 = Tc )]
t =0
 u
"
× eT22 y T"21 V 1
δ,b (u − y) dy dt
y=0
 ∞  
+ e−δt I mn − e[Λ⊗I n ]t [α ⊗ P r (τ 1 = t, τ 1 = Tc )]
t =0
 u ""
× eT22 y T""21 V 1
δ,b (u − y) dy dt. (8)
y=0

where
    
P r τ 1 = t, τ 1 = Tp = P rij τ 1 = t, τ 1 = Tp i,j =1,2,...,mn
 
P r (τ 1 = t, τ 1 = Tc ) = P rij (τ 1 = t, τ 1 = Tc ) i,j =1,2,...,mn .
 
Here, P rij τ 1 = t, τ 1 = Tp and P rij (τ 1 = t, τ 1 = Tc ) are the probabilities that
the first shock on the process happens due to a claim arrival and a premium arrival,
respectively, along with a transition from ith state to j th state at time t ≥ 0 for
i, j ∈ E. Then,

P r (τ 1 = t, τ 1 = Tc ) = F−1
0 [E0 ⊗ I m ]
 
P r τ 1 = t, τ 1 = Tp = F−1 0 [D0 ⊗ I n ] .

As a consequence of Bayes’ theorem, it follows that

P r (τ 1 > t | τ 1 = Tc ) = F−1
0 [E0 ⊗ I m ] e
F0 t
[−F0 ]−1 F1 (9)
 
P r τ 1 > t | τ 1 = Tp = F−1 0 [D0 ⊗ I n ] e
F0 t
[−F0 ]−1 F1 . (10)

After applying these probabilities and integrating over time in Eq. (8), after some
can rearrange the obtained equality to the integral equation (7).
Now without any further arguments, V 1 δ,b (b) satisfies
 ∞ 6  7
V1
δ,b (b) = e−δt β ⊗ P r τ 1 = t, τ 1 = Tp
t =0
 ∞
× eT22 x T21 x e1 1
mn + V δ,b (b) dx dt
x=0
 ∞
+ e−δt e[Λ⊗I n ]t [α ⊗ P r (τ 1 = t, τ 1 = Tc )]
t =0
 b "
× eT22 y T21 " V 1
δ,b (b − y) dy dt
y=0
 ∞  
+ e−δt I mn − e[Λ⊗I n ]t [α ⊗ P r (τ 1 = t, τ 1 = Tc )]
t =0
 b
""
× eT22 y T21 "" V 1
δ,b (b − y) dy dt.
y=0

Hence at u = b, Eq. (7) can be alternatively written as


 ∞
F0 V 1
δ,b (b) = F xT12 eT22 x T21 e1 1 1
mn dx + F1 V δ,b (b) + Cδ,b (b). (11)
x=0

Theorem 2 For 0 ≤ u ≤ b, the measure Φ 1


δ,b (u) satisfies the integral equation

 b−u
F0 Φ 1
δ,b (u) = F T12 eT22 x T21 Φ 1
δ,b (u + x) dx
x=0
 ∞
+ T12 eT22 x T21 Φ 1 1 1
δ,b (b) dx + Dδ,b (u) + Wδ,b (u).
x=b−u
(12)

Here,
 u  
" ""
D1
δ,b (u) = F" T"12 eT22 y T"21 − T""12 eT22 y T""21
y=0
""
+ F"" T""12 eT22 y T""21 Φ 1
δ,b (u − y) dy,

is an mn-dimensional column vector which furnishes the renewal of GSF process


when a claim arrives at τ 1 and
6 1 7
W1 " 1 "" 1
δ,b (u) = F w 1 (u) − w 2 (u) + F w 2 (u),

∞ " ∞ ""
where, w1 " T22 y t" w(u, y − u) dy and w 1 (u) =
1 (u) = y=u T12 e 2 2
"" T22 y t"" w
y=u T12 e 2
(u, y − u) dy for which t"2mnn ×1 = −T"22 e1
mnn1 and t ""
2mnn ×1 = −T "" e1
22 mnn2 . Here,
1 2
e1 1
mnn1 and emnn2 denote mnn1 -dimensional and mnn2 -dimensional column vectors
of ones, respectively.
Proof Using the same arguments of Theorem 1, it follows
 ∞ 
6  7 b−u
Φ1
δ,b (u) = e−δt β ⊗ P r τ 1 = t, τ 1 = Tp eT22 x T21 Φ 1
δ,b (u + x) dx
t =0 x=0
 ∞
+ eT22 x T21 × (x + u − b) e1 1
mn + Φ δ,b (b) dx dt
x=b−u
 ∞
−δt [Λ⊗I n ]t
+ e e [α ⊗ P r (τ 1 = t, τ 1 = Tc )]
t =0
 u  ∞
" "
× eT22 y T"21 Φ 1
δ,b (u − y) dy + eT22 y t"2 w(u, y − u) dy dt
y=0 y=u
 ∞  
+ e−δt I mn − e[Λ⊗I n ]t × [α ⊗ P r (τ 1 = t, τ 1 = Tc )]
t =0
 u  ∞
T""22 y ""
× e T""21 Φ 1
δ,b (u − y) dy + eT22 y t""2 w(u, y − u) dy dt.
y=0 y=u
(13)

After substituting the probabilities from Eqs. (9) and (10), integrate out the time in
Eq. (13) to furnish the integral equation (12) and then at u = b, Eq. (12) reduces to
the form

F0 Φ 1 1 1 1
δ,b (b) = F F1 Φ δ,b (b) + Dδ,b (b) + Wδ,b (b). (14)

4 The Closed-Form Solutions of V 


δ,b
(b) and Φ 
δ,b
(b)

For analysing the solution of the governing equations (7) and (12), the expressions
of V 1 1
δ,b (u) and Φ δ,b (u) at the horizontal barrier b have to be determined. This
section focuses to derive the expressions of V 1 1
δ,b (b) and Φ δ,b (b) by applying
analytical Laplace inversion on the Laplace transformed version of the implicit
integral equations (11) and (14), respectively. For applying these transformations

to the equations, let f˜(sx ) = x=0 e−sx f (x) dx be the Laplace transform from the
real domain of x to the complex domain of s of an integrable function f (.). The
proposition below provides the Laplace transform of Eqs. (11) and (14).
∞ −sy " T" y "
Proposition 1 Let M̃1 (sy ) = y=0 e T12 e 22 T21 dy and M̃2 (sy ) =
∞ −sy "" T"" y "" 1 1
y=0 e T12 e 22 T21 dy. Then the Laplace transforms Ṽ δ,b (sb ) and Φ̃ δ,b (sb )
of
1 1
the measures V δ,b (b) and Φ δ,b (b), respectively, will satisfy the following equations:

1 1
Lb (sb )Ṽ δ,b (sb ) = ÃV (sb ) (15)
1 1
Lb (sb )Φ̃ δ,b (sb ) = ÃΦ (sb ) (16)

1
where Lb (sb ) = F0 − F F1 − F" M̃1 (sb ) − M̃2 (sb ) − F"" M̃2 (sb ), ÃV (sb ) =
∞ T22 x T e1 dx 1
1
sb F x=0 xT12 e 21 mn and ÃΦ (sb ) = W̃1
δ,b (sb ), the Laplace transform of
W1 δ,b (b).
Proof Taking the Laplace transform from the domain of b to s, Eqs. (11) and (14)
yield
 ∞
1 1 1
F0 Ṽ δ,b (sb ) =F xT12 eT22 x T21 e1 1
mn dx + F1 Ṽ δ,b (sb ) + C̃δ,b (sb ) (17)
sb x=0
1 1
F0 Φ̃ δ,b (sb ) = F F1 Φ̃ δ,b (sb ) + D̃1
δ,b (sb ) + W̃1
δ,b (sb ) (18)

respectively. Here

1
C̃1 " ""
δ,b (sb ) = F M̃1 (sb ) − M̃2 (sb ) + F M̃2 (sb ) Ṽ δ,b (sb )

1
D̃1 " ""
δ,b (sb ) = F M̃1 (sb ) − M̃2 (sb ) + F M̃2 (sb ) Φ̃ δ,b (sb ).

1 1
Now collecting the coefficients of Ṽ δ,b (sb ) and Φ̃ δ,b (sb ) to the left-hand side,
respectively, of Eqs. (17) and (18) will provide the Laplace transform of the integral
equations (11) and (14).
In Eqs. (15) and (16), the elements of mn-dimensional square matrix Lb (sb ) are
rational which can be represented in the form

Lb (sb ) = [Lb (sb )]i,j =1,2,...,mn


pij (sb )
= ⊗ I n for i, j = 1, 2, . . . , m
qij (sb )
1 6 7
= pij (sb ) ⊗ I n for i, j = 1, 2, . . . , m
Qb (sb )

where Qb (sb ) = [qb (sb )]nκ for κ ∈ {1, 2, . . . , m} in which qb (sb ) is the n1 + n2
degree unique denominator polynomial qij (s) that occurs in κ number of columns
pij (sb ) adj
of . Denoting the Lb (sb ) and Ldetb (sb ), respectively, as the adjoint and
qij (sb )
determinant of Lb (sb ), (15) and (16) can be alternatively settled as

adj
1 Lb (sb ) 1
Ṽ δ,b (sb ) = ÃV (sb ) (19)
Ldet
b (sb )
adj
1 Lb (sb ) 1
Φ̃ δ,b (sb ) = ÃΦ (sb ) (20)
Ldet
b (sb )

adj
where Ldet
b (sb ) and the elements of the mn-dimensional square matrix Lb (sb )
adj 1
are rational. Then Ldet
b (sb ) and Lb (sb ) can be represented as multiple of ,
Qb (sb )
which is followed due to the MAP/PH structure of the model. Now, the Laplace
transform inversion of Eqs. (19) and (20) with respect to sb will determine the
expression for V 1 1
δ,b (b) and Φ δ,b (b), respectively, which is explained by Theorem 3
given below.
adj adj
Theorem 3 Let Qb (sb ) = Qb (sb )Lb (sb ), Qdet b (sb ) = Qb (sb )Lb (sb ) and
det

denote ρ b:l for l = 1, 2, . . . , κ (n1 + n2 ) as the distinct roots of Qb (sb ) = 0,


det

having algebraic multiplicity n, occur in the left half of the complex plane. Then for
 κ(n +n )
an arbitrary ωb ∈/ ρ b:l l=11 2 and using the generalised Hermite’s interpolating

polynomial, the closed-form expressions for the solution of equations (11) and (14)
are given by
 b
1
V1
δ,b (b) = ξ ωb (x)A1
V dx (21)
Qdet
b (ωb ) x=0
 b
1
Φ1
δ,b (b) = ξ ωb (x)A1
Φ (b − x)dx (22)
Qdet
b (ωb ) x=0

where

1 +n2 ) 
κ(n n
d j −1 adj
ξ z (b) = clj (z, b) j −1
Qb (sb )|sb =ρ b:l
l=1 j =1 dsb
 ∞
A1
V = F xT12 eT22 x T21 e1
mn dx
x=0

= −FT12 T−1 1 1
22 (em1 ⊗ I mn )emn

A1 1
Φ (b) = Wδ,b (b)

in which
 n n−j
z − ρ b:l  b(n−j −i) eρ b:l b g (i)
l (ρ b:l )
clj (z, b) = P l (z)
Γ (j ) Γ (n − j − i + 1)Γ (i + 1)
i=0
6 7−1
P l (z) = g l (z)
⎡ ⎤n
κ(n:1 +n2 )
 
=⎣ z − ρ b:l ⎦ .
l=1

Proof Multiplying and dividing by Qb (sb ) in the right-hand side of Eqs. (19)
and (20), it follows
adj
1 Qb (sb ) 1
Ṽ δ,b (sb ) = ÃV (sb ) (23)
Qdet
b (sb )
adj
1 Qb (sb ) 1
Φ̃ δ,b (sb ) = ÃΦ (sb ). (24)
Qdet
b (sb )

 κ(n +n )
Now for an arbitrary ωb ∈ / ρ b:l l=11 2 and using the generalised Hermite’s
interpolating polynomial (see [27]), Eqs. (23) and (24) can be alternatively written
as

1 ξ̃ ωb (sb ) 1
Ṽ δ,b (sb ) = ÃV (sb ) (25)
Qdet
b (ωb )

1 ξ̃ ωb (sb ) 1
Φ̃ δ,b (sb ) = ÃΦ (sb ) (26)
Qdet
b (ωb )

κ(n1 +n2 ) n d j−1 adj


where ξ̃ z (sb ) = l=1 j =1 c̃ lj (z, sb ) ds j−1 Qb (sb )|sb =ρ b:l in which the
b
coefficient function clj (z, sb ) is given by
 (j −1) n−j  
z − ρ b:l  z − ρ b:l i (i)
c̃lj (z, sb ) = P l (z) g (ρ b:l )R li (z, sb ) (27)
Γ (j ) Γ (i + 1) l
i=0

for which R li (z, sb ) is a weight function fixed to neutralise the degree of polynomi-
adj
als comprised in the parent matrix, Qb (u). A suitable divided difference form for
this weight function is given by

n−(j −1)−i
z − ρ b:l
R li (z, sb ) = . (28)
sb − ρ b:l

This form will admit the closed-form solutions of V 1 1


δ,b (b) and Φ δ,b (b) in terms of
Lundberg roots. Substituting the divided difference weight function R li (z, sb ) given
by Eq. (28) in Eq. (27), the particular form of coefficient function c̃lj (z, sb ) can be
represented by the equation
 n
z − ρ b:l 
n−j
g (i)
l (ρ b:l )
c̃lj (z, sb ) = P l (z) −1)−i) Γ (i + 1)
Γ (j ) (sb − ρ b:l ) (n−(j
i=0

Now, inverting Eqs. (25) and (26) with respect to sb will yield the expressions for
EDDR (21) and GSF (22), respectively, with barrier b, being the initial surplus level.

5 The Closed-Form Solutions of V 


δ,b
(u) and Φ 
δ,b
(u)
for Exponential Premiums

In this section, we work on obtaining the expressions for V 1 1


δ,b (u) and Φ δ,b (u), for
0 ≤ u ≤ b, as the solution of defective renewal equations (7) and (12). For obtaining
the solution, the method of analytical Laplace inversion is applied on the expressions

equivalent to the Laplace transform of the differential equations derived from the
governing renewal equations (7) and (12). The existence of the differential equations
is detailed in Theorem 4 below.
For proceeding the analysis, the generalised inverse of mn × mnm1 dimensional
component matrix F T12 of matrix T given by Eq. (2) is required. We observe that
it is difficult to provide such an explicit generalised matrix suitable for the analysis
with the PH premiums unless the columns of F T12 are linearly independent. Hence
through Theorem 4, we restrict our analysis by taking exponential premiums so that
m1 = 1 and thus the matrix F T12 reduces to a square matrix of order mn and will be
invertible if the columns of F1 are linearly independent. In Theorem 4, the integro-
differential equations that satisfy EDDR and GSF for the initial capital “u ≤ b” are
obtained.
Theorem 4 Taking Z1 1 1
δ,b (u) = Bδ,b (u)+Wδ,b (u) and assuming that the columns of
F1 are linearly independent, the measures V 1 1
δ,b (u) and Φ δ,b (u), respectively, satisfy
the non-homogeneous first order integro-differential equations
$ %
d
I mn + F T12 T22 M−1 F0 + F T12 T21 V 1 δ,b (u)
du
$ % 
d −1
= I mn + F T12 T22 M A1
δ,b (u) + C 1
δ,b (u) (29)
du
$ %
d
I mn + F T12 T22 M−1 F0 + F T12 T21 Φ 1 δ,b (u)
du
$ % 
d
= I mn + F T12 T22 M−1 Z1 1
δ,b (u) + Dδ,b (u) (30)
du

where M−1 is the inverse of the mn order square matrix F T12 and
 ∞
A1
δ,b (u) = F T12 eT22 (z−u) T21 (z − b)e1 1
mn + V δ,b (b) dz
z=b

⎨−FT eT22 (b−u) T−1 (e1 ⊗ I )e1 + T V 1 (b) , u≤b
12 22 m1 mn mn 21 δ,b
=
⎩0, u>b
 ∞
B1
δ,b (u) = F T12 eT22 (z−u) T21 Φ 1
δ,b (b) dz
z=b
/
−FT12 eT22 (b−u) T−1 1
22 T21 Φ δ,b (b), u ≤ b
=
0, u > b.

Proof Using change of variables, Eqs. (7) and (12) can be written as
 b
F0 V 1
δ,b (u) =F T12 eT22 (z−u) T21 V 1 1 1
δ,b (z)dz + Aδ,b (u) + Cδ,b (u)
z=u
 b
F0 Φ 1
δ,b (u) = F T12 eT22 (z−u) T21 Φ 1 1 1
δ,b (z)dz + Zδ,b (u) + Dδ,b (u).
z=u

Now, the left product operation with M−1 throughout in the above equations yields
 b  
M−1 F0 V 1
δ,b (u) = eT22 (z−u) T21 V 1
δ,b (z)dz + M−1
A 1
δ,b (u) + C 1
δ,b (u)
z=u
(31)
 b  
M−1 F0 Φ 1
δ,b (u) = eT22 (z−u) T21 Φ 1
δ,b (z)dz + M
−1
Z1 1
δ,b (u) + Dδ,b (u) .
z=u
(32)

Differentiating Eqs. (31) and (32) with respect to initial capital u will then deliver,

d 1 d  1 
M−1 F0 V δ,b (u) = M−1 Aδ,b (u) + C1
δ,b (u)
du du
   
+ T22 M−1 A1 1 −1 1
δ,b (u) + Cδ,b (u) − T22 M F0 + T21 V δ,b (u)

d 1 d  1 
M−1 F0 Φ δ,b (u) = M−1 Zδ,b (u) + D1
δ,b (u)
du du
   
+ T22 M−1 Z1 1 −1 1
δ,b (u) + Dδ,b (u) − T22 M F0 + T21 Φ δ,b (u)

which can be alternatively written as


$ %
d
I mn + T22 M−1 F0 + T21 V 1 δ,b (u)
du
$ %  
d
= I mn + T22 M−1 A1 1
δ,b (u) + Cδ,b (u)
du
$ %
d
I mn + T22 M−1 F0 + T21 Φ 1 δ,b (u)
du
$ %  
d
= I mn + T22 M−1 Z1 δ,b (u) + D 1
δ,b (u) .
du

Then the left product operation with F T12 on the above equations provides the
required equations (29) and (30) and hence the theorem holds.
Now, applying the Laplace transform on Eqs. (29) and (30), we have the following
corollary.

Corollary 1 The Laplace transforms with respect to the initial capital “u ≤ b” of


the measures V 1 1
δ,b (u) and Φ δ,b (u) satisfy the equations

1 1
Lu (su )Ṽ δ,b (su ) = B̃ V (su ) (33)
1 1
Lu (su )Φ̃ δ,b (su ) = B̃ Φ (su ) (34)

 
where, Lu (su ) = su I mn + FT12 T22 M−1 F0 − F" M̃1 (su ) − M̃2 (su )

−F"" M̃2 (su ) + FT12 T21 and

1
 
B̃ V (su ) = su I mn + FT12 T22 M−1 Ã1 1 1
δ,b (su ) − Aδ,b (0) + F0 V δ,b (0) (35)
1
 
B̃ Φ (su ) = su I mn + FT12 T22 M−1 Z̃1 1 1
δ,b (su ) − Zδ,b (0) + F0 Φ δ,b (0). (36)

Proof Taking the Laplace transform on Eqs. (29) and (30) and collecting the
1 1
coefficients of Ṽ δ,b (su ) and Φ̃ δ,b (su ) on the left-hand side of the transformed
equations will bring out Eqs. (33) and (34) and hence completes the proof.
The expression for V 1 1
δ,b (u) and Φ δ,b (u) will be explicit if the solution of the
1 1
unknown quantities V δ,b (0) and Φ δ,b (0) in Eqs. (35) and (36), respectively, is
available. To follow up, we use Lemma 1 to determine the solutions at zero initial
capital.
6
Lemma 1 Let Δρ δ = diag(ρ δ,1 , ρ δ,2 , . . . , ρ δ,mn ) and Γ δ = γ δ,1 , γ δ,2 , . . . ,
7T
γ δ,mn be the eigenvalues matrix and the left eigenvectors matrix of Pδ =
Γ −1
δ Δρ δ δ , respectively. Then ρ δ,i ’s are determined by the roots of the equation
Γ
Lu (s) = 0 which must be in the right-half complex plane and γ δ,i can be obtained
det

by solving the equations γ δ,i Lu (ρ δ,i ) = 0; i = 1, 2, . . . , mn.


Proposition 2 The exact solutions for F0 V 1 1
δ,b (0) and F0 Φ δ,b (0) are given by
 ∞
F0 V 1 1
δ,b (0) = Aδ,b (0) − Pδ e−Δρ δ u A1
δ,b (u)du
u=0
 ∞
− e−Pδ u FT12 T22 M−1 A1
δ,b (u) du (37)
u=0
 ∞
F0 Φ 1 1
δ,b (0) = Zδ,b (0) − Pδ e−Δρδ u Z1
δ,b (u)du
u=0
 ∞
− e−Pδ u FT12 T22 M−1 Z1
δ,b (u) du (38)
u=0

respectively.

Proof For the proposed model in this paper, equation (17) in [29] which is derived
using Theorems 1 and 2 of [9], is revised to obtain the suitable form given by
 
Pδ + FT12 T22 M−1 F0 − F" M̃1 (Pδ ) − M̃2 (Pδ )

−F"" M̃2 (Fδ ) + FT12 T21 = 0 (39)

By Lemma 1, it is easy to show that Eq. (39) is equivalent to γ δ,i Lu (ρ δ,i ) = 0 for
i = 1, 2, . . . , mn. Setting su to ρ δ,i in Eqs. (33) and (34) yields
 
γ δ,i F0 V 1
δ,b (0) = γ A 1
δ,i δ,b (0) − γ δ,i ρ I
δ,i mn + FT T
12 22 M −1
Ã1
δ,b (ρ δ,i )
 ∞
= γ δ,i A1δ,b (0) − ρ γ
δ,i δ,i e−ρ δ,i u A1
δ,b (u) du
u=0
 ∞
− e−ρ δ,i u γ δ,i FT12 T22 M−1 A1
δ,b (u) du (40)
u=0
 
γ δ,i F0 Φ 1 1
δ,b (0) = γ δ,i Zδ,b (0) − γ δ,i ρ δ,i I mn + FT12 T22 M
−1
Z̃1
δ,b (ρ δ,i )
 ∞
= γ δ,i Z1
δ,b (0) − ρ δ,i γ δ,i e−ρ δ,i u Z1
δ,b (u) du
u=0
 ∞
− e−ρ δ,i u γ δ,i FT12 T22 M−1 Z1
δ,b (u) du (41)
u=0

respectively for i = 1, 2, . . . , mn. The matrix form of Eqs. (40) and (41) is given by
 ∞
Γ δ F0 V 1
δ,b (0) = Γ δ A1
δ,b (0) − Δδ Γ δ e−Δδ u A1
δ,b (u)du
u=0
 ∞
− e−Δδ u Γ δ FT12 T22 M−1 A1
δ,b (u) du (42)
u=0
 ∞
Γ δ F0 Φ 1
δ,b (0) = Γ δ Z1
δ,b (0) − Δδ Γ δ e−Δδ u Z1
δ,b (u)du
u=0
 ∞
− e−Δδ u Γ δ FT12 T22 M−1 Z1
δ,b (u) du (43)
u=0

respectively. Multiplying throughout by Γ −1 δ in Eqs. (42) and (43) will result in


Eqs. (37) and (38), respectively, and hence the proof is completed.
Remark 2 By the Perron–Frobenius theorem [3], the eigenvalue of Pδ with the
minimum real part, say ρ δ,1 , is real and strictly less than the real part of all
other eigenvalues. Let γ δ,1 = γ δ be the associated left eigenvector normalised by
γ δ e1
mn = 1. Then all components of γ δ are real and non-negative. In particular for

δ = 0, we have the minimum non-negative root of Ldet u (su ) = 0, ρ 0,1 = 0, and the
associated left eigenvector, γ 0,1 = π , the stationary probability row vector of the
CTMC, {J (t)}t ≥0.
In Eqs. (33) and (34), the elements of mn-dimensional square matrix Lu (su ) are
rational which can be represented in the form

Lu (su ) = [Lu (su )]i,j =1,2,...,mn


tij (su )
= ⊗ I n for i, j = 1, 2, . . . , m
rij (su )
1 6 7
= tij (su ) ⊗ I n for i, j = 1, 2, . . . , m
Qu (su )

where Qu (su ) = [qu (su )]nκ for κ ∈ {1, 2, . . . , m} in which qu (su ) is the m+n1 +n2
degree unique denominator polynomial rij (su ) that occurs in κ number of columns
tij (su ) pij (sb ) adj
of which is same as in . Now denoting Lu (su ) and Ldet u (su )
rij (su ) qij (sb )
as the adjoint and determinant of Lu (su ), Eqs. (33) and (34) can be alternatively
settled as
adj
1 Lu (su ) 1
Ṽ δ,b (su ) = B̃ V (su ) (44)
Ldet
u (su )
adj
1 Lu (su ) 1
Φ̃ δ,b (su ) = B̃ Φ (su ) (45)
Ldet
u (su )

adj
where Ldet
u (su ) and the elements of the mn-dimensional square matrix Lu (su )
adj 1
are rational. Then Ldet
u (su ) and Lu (su ) can be represented as multiple of
Qu (su )
which is followed again due to the MAP/PH structure. Now, inverting Eqs. (44)
and (45) with respect to su will give the expression for V 1 1
δ,b (u) and Φ δ,b (u),
respectively, which is explained by adopting the similar arguments from Theorem 3
and modified as given below.
adj adj
Theorem 5 Let Qu (su ) = Qu (su )Lu (su ) and Qdet u (su ) = Qu (su )Lu (su )
det

where Qu (su ) is the denominator polynomial of Lu (su ). For l = 1, 2, . . . , κ(m +


det

n1 + n2 ), let ρ u:l denote the distinct roots of Qdet


u (su ) = 0 for which “κm”
number of roots are in the right half, κ(n1 + n2 ) number of roots are in the left
half of the complex plane, with all roots having algebraic multiplicity n. Then
 κ(m+n +n )
for an arbitrary ωb ∈ / ρ u:l l=1 1 2 , the closed-form expressions for the

solution of equations (11) and (14) are obtained by using the generalised Hermite’s
interpolating polynomial which are given by
 u
1
V1
δ,b (u) = η ωu (x)B 1
V (u − x)dx (46)
Qdet
u (ωu ) x=0
 u
1
Φ1
δ,b (u) = η ωu (x)B 1
Φ (u − x)dx (47)
Qdet
u (ωb ) x=0

κ(m+n1 +n2 ) n d j−1 adj


where, ηz (u) = l=1 j =1 d lj (z, u) ds j−1 Qu (su )|su =ρ u:l and
u

$ %
d
B1
V (u) = I mn + FT T
12 22 M−1
A1 1
δ,b (u) + F0 V δ,b (0)δ d (u)
du
$ %
d
B1
Φ (u) = I mn + FT T
12 22 M−1
Z1 1
δ,b (u) + F0 Φ δ,b (0)δ d (u)
du

in which
 n n−j
z − ρ u:l  u(n−j −i) eρ u:l u f (t )
l (ρ u:l )
d lj (z, u) = K l (z)
Γ (j ) Γ (n − j − i + 1)Γ (i + 1)
i=0
6 7−1
K l (z) = f l (z)
⎡ ⎤n
κ(m+n:1 +n2 )  
=⎣ z − ρ u:l ⎦
l=1

and δ d (u) is the Dirac delta function.


Proof Operating left product and division by Qu (su ) in the right-hand side of
Eqs. (44) and (45), we have

adj
1 Qu (su ) 1
Ṽ δ,b (su ) = B̃ V (su ) (48)
Qdet
u (su )
adj
1 Qu (su ) 1
Φ̃ δ,b (su ) = B̃ Φ (su ). (49)
Qdet
u (su )

 κ(m+n +n )
Now for an arbitrary ωu ∈ / ρ u:l l=1 1 2 and using the generalised Hermite’s
interpolating polynomial, Eqs. (48) and (49) can be alternatively written as

1 ξ̃ ωu (su ) 1
Ṽ δ,b (su ) = B̃ V (su ) (50)
Qdet
u (ωu )

1 ξ̃ ωu (su ) 1
Φ̃ δ,b (su ) = B̃ Φ (su ) (51)
Qdet
u (ωu )

κ(m+n1 +n2 ) n d j−1 adj


where ξ̃ z (su ) = l=1 j =1 d̃ lj (z, su ) ds j−1 Qu (su )|su =ρ u:l in which the
u
coefficient function d̃ lj (z, su ) is given by
 (j −1) n−j  
z − ρ u:l  z − ρ u:l i (i)
d̃ lj (z, su ) = K l (z) f (ρ u:l )S li (z, su )
Γ (j ) Γ (i + 1) l
i=0

for which S li (z, su ) is given by

n−(j −1)−i
z − ρ u:l
S li (z, su ) = .
su − ρ u:l

Inverting Eqs. (50) and (51) with respect to su will yield the expressions for EDDR
and GSF with initial surplus level 0 ≤ u ≤ b.

6 Numerical Examples

An illustration on a scalar prototype is done, to validate expressions in the previous


section, using Example 1. Two other numerical examples are illustrated in a two-
state model through Examples 2 and 3 to show the tractability of expressions in the
multi-phase environment. The computational codes are compiled in Mathematica,
and the GUI of MATLAB is utilised to plot the graphs.
Example 1 Before moving to the multi-phase model, the expressions derived in
Sect. 6 are to be validated. A single-phase model is considered in this example.
We use the expression of the GSF φ_{δ,b}(u) given by Eq. (6) and obtain the
Laplace transform of the ruin time ψ_{δ,b}(u) by taking the unit penalty function.
Furthermore, the other parameters are set as follows: δ = 0.6 as the discounting
factor and b = 10 as the horizontal barrier.
The corresponding plots of ψ_{δ,b}(u) against the initial capital u resemble Fig. 1(b)
of example (4.1) in [31] for different values of λ. While increasing the Bayesian
parameter λ, the probability of switching from type I claims to the relatively lower
intensity type II claims increases. In the example, the intensity of claim sizes of type
II claims is less than that of type I claims. From the graph (see Fig. 2) and the table
values (see Table 1), it is evident that the values of the Laplace transform of ruin time
decrease with an increase in the parameter λ and converge to zero for unbounded
initial capital.


Fig. 2 Single-state model with barrier, b = 10—Laplace transform of ruin time vs initial capital

Table 1 Single-state model with barrier, b = 10—ruin probabilities vs initial capital

λ   ψ_{0.6,10}(0)   ψ_{0.6,10}(1)   ψ_{0.6,10}(3)   ψ_{0.6,10}(7)   ψ_{0.6,10}(9)
1   0.4944          0.2754          0.0911          0.0115          0.0073
2   0.4860          0.2523          0.07615         0.0073          0.0023
3   0.4800          0.2371          0.0662          0.0063          0.0038
4   0.4757          0.2254          0.0590          0.0051          0.0030
5   0.4724          0.2165          0.0536          0.0043          0.0027

Example 2 In this example, we consider a two-state model having exponential


premiums with rate G0 = [0.7]. Along with δ = 1 and b = 10, the following
parameters are further taken into account:

\alpha = [1, 0], \quad D_0 = \begin{pmatrix} -0.9 & 0.1 \\ 0 & -0.8 \end{pmatrix}, \quad D_1 = \begin{pmatrix} 0.7 & 0.1 \\ 0.1 & 0.7 \end{pmatrix},

\beta = [0.8, 0.2], \quad E_0 = \begin{pmatrix} -1 & 0.3 \\ 0 & -0.7 \end{pmatrix}, \quad E_1 = \begin{pmatrix} 0.4 & 0.3 \\ 0.3 & 0.4 \end{pmatrix},

\gamma_1 = [0.9, 0.1], \quad G_1 = \begin{pmatrix} -0.9 & 0.9 \\ 0 & -4 \end{pmatrix}, \quad \gamma_2 = [0.6, 0.4] \ \text{and} \ G_2 = \begin{pmatrix} -1.8 & 1.8 \\ 0 & -1.6 \end{pmatrix}.

The Bayesian parameter matrix is taken as Λ = diag[−1, −0.9] so that the positive
average cash flow is assured with a security loading factor θ = 0.14. The values of
EDDR ν_{1,10}(u) and, by using the unit penalty function in the expression of GSF, the
values of the Laplace transform of ruin time ψ_{1,10}(u) are obtained for 0 ≤ u ≤ b.
Furthermore, the expected discounted deficit at ruin φ_{1,10}(u) is also obtained by
taking the penalty function as w(x, y) = e^{−ηy} in the expression of GSF.
In Example 2, we expect ν_{1,10}(u) to be an increasing function of u that coincides
with ν_{1,10}(10) at u = b = 10 (see Table 2a and Figs. 3a, b and 4). The plots and
table values support this intuition. The values of ψ_{1,10}(u) and φ_{1,10}(u) exhibit the
same coinciding property. Furthermore, we have independent expressions for all the
performance measures at zero initial capital, given by (37) and (38). Our final
expressions (46) and (47) reduce to the values of these expressions at zero capital,
which also validates the accuracy of the closed-form solutions (46) and (47).
Example 3 For u = b, a two-state model having Phase-type premiums is illustrated
below with the following parameters along with δ = 1.

\alpha = [1, 0], \quad D_0 = \begin{pmatrix} -0.9 & 0.9 \\ 0 & -0.8 \end{pmatrix}, \quad D_1 = \begin{pmatrix} 0 & 0 \\ 0.8 & 0 \end{pmatrix},

\beta = [0.8, 0.2], \quad E_0 = \begin{pmatrix} -1 & 1 \\ 0 & -0.9 \end{pmatrix}, \quad E_1 = \begin{pmatrix} 0 & 0 \\ 0.9 & 0 \end{pmatrix},

\gamma_0 = [0.7, 0.3], \quad G_0 = \begin{pmatrix} -1 & 1 \\ 0 & -0.9 \end{pmatrix}, \quad \gamma_1 = [0.9, 0.1],

G_1 = \begin{pmatrix} -0.9 & 0.9 \\ 0 & -0.8 \end{pmatrix}, \quad \gamma_2 = [0.6, 0.4] \ \text{and} \ G_2 = \begin{pmatrix} -1.8 & 1.8 \\ 0 & -1.6 \end{pmatrix}.

The Bayesian parameter matrix is taken as Λ = diag [−1, −0.9] so that the
positive average cash flow is assured with a security loading factor θ = 0.457.
The values of EDDR ν_{1,b}(b) and, by using unit penalty functions, the values of the
Laplace transform of ruin time ψ_{1,b}(b) and the expected discounted deficit at ruin
φ_{1,b}(b) are obtained for 0 ≤ b ≤ ∞ (see Table 2b, Figs. 5a, b and 6).
On observing the output values of EDDR and the GSF under the unit penalty
function and under the discounted penalty function that depends only on the deficit
at ruin, it is noted that as b tends to ∞, the EDDR ν_{1,b}(b) increases and converges
to a constant, while the values of the Laplace transform of ruin time and the
discounted deficit at ruin settle down to zero. The outputs support our expectations.
Table 2 Two-state model

(a) With exponential premiums and for u ≤ b
u   ν_{1,10}(u)   ψ_{1,10}(u)   φ_{1,10}(u)
0   0.0075394     0.383013      0.44138800
1   0.0157673     0.232532      0.24320700
3   0.0438987     0.0765822     0.08110710
5   0.1092820     0.028174      0.03001510
7   0.2674270     0.0128726     0.01369940
9   0.6498320     0.0072313     0.00767374

(b) With Phase-type premiums and for u = b
b   ν_{1,b}(b)    ψ_{1,b}(b)    φ_{1,b}(b)
0   0.508040      0.24771900    0.4152220
1   0.562532      0.16703100    0.2515440
3   0.632340      0.06366140    0.09363030
5   0.659250      0.02381490    0.03446420
10  0.674023      0.00194031    0.00277501
∞   0.675333      0.00000000    0.00000000

Fig. 3 Two-state model with exponential premiums. (a) Expected discounted dividends paid until
ruin vs initial capital. (b) Laplace transform of ruin time vs initial capital


Fig. 4 Two-state model with exponential premiums—expected discounted deficit at ruin vs initial
capital

7 Conclusion

The paper introduces the random income feature in a MAP risk model. The inter-
arrival time of random income is considered to follow a MAP to realise the cash
inflows through multiple phases in an insurance company. Two types of claims are
assumed to follow Phase-type distribution for which the claims are inter-dependent
according to an exponential rate matrix in which the rates may be dependent on
prior data. Furthermore, the surplus process is restricted by a horizontal barrier,
b ≥ u. For u = b, the closed-form expressions of the expected discounted dividends


Fig. 5 Two-state model with Phase-type premiums. (a) Expected discounted dividend paid until
ruin vs initial capital, u = b. (b) Laplace transform of ruin time vs initial capital


Fig. 6 Two-state model with Phase-type premiums—expected discounted deficit at ruin vs initial
capital, u = b

paid out until ruin and of the Gerber–Shiu function are obtained using the method
of Laplace transforms when the premiums are Phase-type distributed, while for a
general initial capital u ≤ b, the closed-form expressions of the expected discounted
dividends paid out until ruin and the Gerber–Shiu function are obtained using the
method of Laplace transforms when the premiums are exponentially distributed.
The expressions are validated in the scalar prototype by mimicking the results of
example (4.1) in [31], and examples for two-state models are illustrated under both
exponential and Phase-type premiums.

Remark 3 An important observation on implementing Lundberg's roots method
for a multi-phase random income model is that the algebraic multiplicity of the
Lundberg roots equals the number of phases of the MAP inter-arrival times of
premiums.

References

1. Ahn, S., Badescu, A.L.: On the analysis of the Gerber-Shiu discounted penalty function for
risk processes with Markovian arrivals. Insurance Math. Econom. 41(2), 234–249 (2007)
2. Ahn, S., Badescu, A.L., Ramaswami, V.: Time dependent analysis of finite buffer fluid flows
and risk models with a dividend barrier. Queueing Syst. 55, 207–222 (2007)
3. Asmussen, S., Albrecher, H.: Ruin Probabilities. Advanced Series on Statistical Science &
Applied Probability, vol. 14. World Scientific, Singapore (2010)
4. Badescu, A., Breuer, L., Da Silva Soares, A., Latouche, G., Remiche, M.A., Stanford, D.: Risk
processes analysed as fluid queues. Scand. Actuar. J. 2005(2), 127–141 (2005)
5. Badescu, A.L., Breuer, L., Drekic, S., Latouche, G., Stanford, D.A.: The surplus prior to ruin
and the deficit at ruin for a correlated risk process. Scand. Actuar. J. 2005(6), 433–445 (2005)
6. Bao, Z.h., Ye, Z.x.: The Gerber-Shiu discounted penalty function in the delayed renewal risk
process with random income. Appl. Math. Comput. 184(2), 857–863 (2007)
7. Boikov, A.V.: The Cramér–Lundberg model with stochastic premium process. Theory Probab.
Appl. 47(3), 489–493 (2003)
8. Boudreault, M., Cossette, H., Landriault, D., Marceau, E.: On a risk model with dependence
between interclaim arrivals and claim sizes. Scand. Actuar. J. 2006(5), 265–285 (2006)
9. Breuer, L.: First passage times for Markov additive processes with positive jumps of phase
type. J. Appl. Probab. 45(3), 779–799 (2008) 55(4), 207–222 (2007)
10. Cheng, J.h., Wang, D.h.: On a perturbed MAP risk model under a threshold dividend strategy.
J. Korean Stat. Soc. 42(4), 543–564 (2013)
11. Cheung, E.C., Feng, R.: A unified analysis of claim costs up to ruin in a Markovian arrival risk
model. Insurance Math. Econom. 53(1), 98–109 (2013)
12. Cheung, E.C., Landriault, D.: Perturbed MAP risk models with dividend barrier strategies. J.
Appl. Probab. 46(2), 521–541 (2009)
13. Cheung, E.C., Landriault, D.: A generalized penalty function with the maximum surplus prior
to ruin in a MAP risk model. Insurance Math. Econom. 46(1), 127–134 (2010)
14. De Finetti, B.: Su un’impostazione alternativa della teoria collettiva del rischio. In: Transac-
tions of the XVth International Congress of Actuaries, New York, vol. 2, pp. 433–443 (1957)
15. Dong, H., Liu, Z.m.: On a risk model with Markovian arrivals and tax. Appl. Math. J. Chin.
Univ. 27(2), 150–158 (2012)
16. Gao, J., Wu, L.: On the Gerber-Shiu discounted penalty function in a risk model with two types
of delayed claims and random income. J. Comput. Appl. Math. 269, 42–52 (2014)
17. Hao, Y., Yang, H.: On a compound Poisson risk model with delayed claims and random
incomes. Appl. Math. Comput. 217(24), 10195–10204 (2011)
18. Jieming, Z., Xiaoyun, M., Hui, O., Xiangqun, Y.: Expected present value of total dividends in
the compound binomial model with delayed claims and random income. Acta Math. Sci. 33(6),
1639–1651 (2013)
19. Labbé, C., Sendova, K.P.: The expected discounted penalty function under a risk model with
stochastic income. Appl. Math. Comput. 215(5), 1852–1867 (2009)
20. Labbé, C., Sendov, H.S., Sendova, K.P.: The Gerber–Shiu function and the generalized
Cramér–Lundberg model. Appl. Math. Comput. 218(7), 3035–3056 (2011)
21. Landriault, D., Shi, T.: Occupation times in the MAP risk model. Insurance Math. Econom. 60,
75–82 (2015)

22. Latouche, G., Ramaswami, V.: Introduction to Matrix Analytic Methods in Stochastic Model-
ing. SIAM, Philadelphia (1999)
23. Li, J., Dickson, D.C., Li, S.: Analysis of some ruin-related quantities in a Markov-modulated
risk model. Stoch. Model. 32(3), 351–365 (2016)
24. Melnikov, A.: Risk Analysis in Finance and Insurance. Chapman and Hall/CRC, Boca Raton
(2011)
25. Neuts, M.F.: A versatile Markovian point process. J. Appl. Probab. 16(4), 764–779 (1979)
26. Shija, G., Jacob, M.: Gerber-Shiu function of Markov modulated delayed by-claimtype risk
model with random incomes. J. Math. Finance 6(4), 489 (2016)
27. Spitzbart, A.: A generalization of Hermite’s interpolation formula. Am. Math. Mon. 67(1),
42–46 (1960)
28. Temnov, G.: Risk process with random income. J. Math. Sci. 123(1), 3780–3794 (2004)
29. Zhang, Z., Yang, H., Yang, H.: On the absolute ruin in a MAP risk model with debit interest.
Adv. Appl. Probab. 43(1), 77–96 (2011)
30. Zhang, Z., Cheung, E.C.K.: The Markov additive risk process under an Erlangized dividend
barrier strategy. Methodol. Comput. Appl. Probab. 18(2), 275 (2016)
31. Zou, W., Gao, J.w., Xie, J.h.: On the expected discounted penalty function and optimal dividend
strategy for a risk model with random incomes and interclaim-dependent claim sizes. J.
Comput. Appl. Math. 255, 270–281 (2014)
A PH Distributed Production Inventory
Model with Different Modes of Service
and MAP Arrivals

Salini S. Nair and K. P. Jose

Abstract This paper studies a production inventory model with retrial of customers
under (s, S) policy. The arrival of customers is according to a Markovian Arrival
Process with representation (D0 , D1 ) and service times follow an exponential
distribution. The production process follows a phase-type distribution. When the
inventory level reduces to a pre-assigned level s due to demands, production starts
and service is given at a reduced rate. This reduced rate continues up to the zero
level of inventory. The arriving customers are directed to a buffer of finite capacity
equal to the current inventory level. An arriving customer, who notices the buffer
full, proceeds to an orbit of infinite capacity with some probability and decides to
leave the system with the complementary probability. An orbiting customer may
retry from the orbit and inter-retrial times are exponentially distributed with linear
rate. Various system performance measures of the model are defined. A suitable
cost function is constructed and analyzed algorithmically. The optimum (s, S) pair
is obtained. The effect of correlation between two successive inter-arrival times is
also analyzed.

Keywords Production inventory · Retrial of customers · Markovian arrival


process · Phase-type distribution

1 Introduction

Many researchers are interested in queuing-inventory systems with the production


of inventory items for the last few decades. Nowadays, manufacturing companies
produce inventory in response to the actual demand. Production inventory under
(s, S) policy can be used to model these types of systems efficiently. Different
notions such as retrial of customers, impatience of customers, server vacations,

S. S. Nair () · K. P. Jose


PG and Research Department of Mathematics, St. Peter’s College, Kolenchery, Kerala, India


interruptions of the service as well as the production process, etc. are being
investigated in different production inventory systems. Artalejo et al. [1] were the
first to study inventory policies with positive lead time along with the retrial of
unsatisfied customers. They obtained solutions algorithmically. Krishnamoorthy
and Jose [8] analyzed three production inventory systems with positive service
time and retrial of customers. They assumed all the underlying distributions to
be exponential. They found that the model with buffer size equal to the current
inventory level is the most profitable model.
Benjaafar et al. [3] analyzed a production inventory system by considering
customer impatience. They obtained that the optimal policy can be described
by a production base-stock level and an admission threshold. They also char-
acterized analytically the sensitivity of these thresholds to operating parameters.
Krishnamoorthy and Viswanath [9] considered a production inventory system
with positive service time. The time for producing each item was assumed to
follow a Markovian production scheme. The customer arrival process followed a
Markovian arrival process. The service time to each customer followed a phase-
type distribution. They obtained the effect of the control variables s and S on the
fraction of time the system goes out of inventory and on the expected loss rate of
customers.
He and Zhang [6] considered an inventory-production system consisting of a
warehouse and a production facility. They obtained explicit solutions and developed
computational methods for analyzing system performance measures. Yu and Dong
[17] considered a production lot size problem as a renewal process and used a
numerical approach to find out the optimal solution to the problem. Wensing and
Kuhn [16] analyzed periodic replenishment processes that exhibited order crossover.
It was compared with the existing concept of outstanding orders. Based on this,
formulas were developed to give an exact analysis of three essential performance
indicators of a periodic-review order-up-to inventory system with independent
stochastic lead times.
Krishnamoorthy et al. [10] considered an (s, S) production inventory system
with positive service time, interruption to both service and production process.
Production time and service time followed Erlang distribution and other random
variables were exponentially distributed. They obtained an explicit expression for
the necessary and sufficient condition for the stability of the system under study.
Several system performance measures were derived, and their dependence on
the system parameters was studied numerically. Anoop and Jacob [11] studied a
multi-server Markovian queuing system by considering the servers as a standard
production inventory and they obtained the condition for checking ergodicity and
the steady state solutions.
Baek and Moon [2] studied an (s, S) production inventory system. They analyzed
the model using a regenerative process and obtained the result that the queue size
and inventory level processes were independent in steady state. Zare et al. [18]
analyzed a production inventory system consisting of a warehouse and a distribution
center. The time taken to transport the inventory between these two centers was
generally distributed. They obtained the optimal reorder point at the distribution

center and the optimal base-stock level in the warehouse. Chan et al. [4] studied a
production inventory model with non-stop production. They assumed deterioration
during deliveries. They optimized the cost for the system in which some of the cost
parameters were production rate dependent.
The primary concern in most of the manufacturing companies is to meet
customer’s demands and to make maximum profit. Therefore, manufacturers have
to increase or decrease production and sales of the items according to the demand
of customers. Thus the effects of variations in production rate and service rate on
inventory systems need further study. Jose and Salini [7] studied two production
inventory systems with positive service time and retrial of customers. They assumed
different rates of production depending on the inventory level and constructed a
suitable cost function for an algorithmic solution using Matrix Analytic Method.
Dhanya et al. [14] introduced two modes of service in an inventory system with
positive service time. They assumed positive lead time for replenishment and
obtained the stability condition. Product form solution for system state distribution
is established. The optimal value of the service rate in a lower mode of service is
also obtained.
This paper describes a production inventory system with the retrial of customers
and varying service rates. The arrival of customers is according to a Markovian
arrival process with representation (D0 , D1 ) of order m1 . The service times follow
an exponential distribution with rate μ. When the inventory level reduces to s,
production starts and service is given at a reduced rate. The service time distribution
has rate αμ, 0 < α < 1 when the inventory level lies between 0 and s. The
production process is switched off when the inventory reaches the maximum level S.
The production process follows phase-type distribution with representation (β, T )
of order m2 . The arriving customers are directed to a buffer of finite capacity equal
to the current inventory level. An arriving customer, who notices the buffer full,
proceeds to an orbit of infinite capacity with probability γ or decides to leave the
system with probability (1 − γ ). An orbiting customer may retry from there and
inter-retrial times are exponentially distributed with parameter iθ when there are i
customers in the orbit. A retrial customer, who finds the buffer full returns back to
the orbit with probability δ or decides to leave the system with probability (1 − δ).
This paper is organized as follows. Section 2 contains the mathematical descrip-
tion of the model. Section 3 deals with the system stability. Performance measures
are included in Sect. 4. Numerical results are provided in Sect. 5 and a related
optimization problem is discussed in Sect. 6. Section 7 describes the Correlation
analysis. Finally, concluding remarks are included in Sect. 8.

2 Mathematical Description of the Model

To analyze the model mathematically, we use the following notations.


• I (t): Inventory level at time t.

• N(t): Number of customers in the orbit at time t.


• M(t): Number of customers in the buffer at time t.
• F(t) = 0 if the production is in OFF mode, and F(t) = 1 if the production is in ON mode.
• J1 (t): Phase of the arrival process at time t.
• J2 (t): Phase of the production process at time t.
Now, {X(t), t ≥ 0}, where X(t) = (N(t), F (t), I (t), M(t), J1 (t), J2 (t)) is a
level dependent quasi birth-and-death process on the state space {(i, 0, j, h, k) : i ≥
0; j = s + 1, . . . , S; h = 0, . . . , j ; k = 1, . . . , m1 } ∪ {(i, 1, j, h, k, l) : i ≥ 0; j =
0, . . . , S − 1; h = 0, . . . , j ; k = 1, . . . , m1 ; l = 1, . . . , m2 }.
Now, we describe the transitions in the Markov chain as follows.
(a) Transitions due to arrival of customers
From (i, 0, j, h, k) to (i, 0, j, h + 1, k) is given by

D1 , j = s + 1, . . . , S; h = 0, . . . , j − 1

From (i, 1, j, h, k, l) to (i, 1, j, h + 1, k, l) is given by

D1 ⊗ Im2 , j = 0, . . . , S − 1; h = 0, . . . , j − 1

From (i, 0, j, j, k) to (i + 1, 0, j, j, k) is given by

γ D1 , j = s + 1, . . . , S

From (i, 1, j, j, k, l) to (i + 1, 1, j, j, k, l) is given by

γ D1 ⊗ Im2 , j = 0, . . . , S − 1

(b) Transitions due to service completion


From (i, 0, s + 1, h, k) to (i, 1, s, h − 1, k, l) is given by

Im1 ⊗ μβ, h = 1, . . . , j

From (i, 0, j, h, k) to (i, 0, j − 1, h − 1, k) is given by

μIm1 , j = s + 2, . . . , S; h = 1, . . . , j

From (i, 1, j, h, k, l) to (i, 1, j − 1, h − 1, k, l) is given by

αμIm1 m2 , j = 1, . . . , s; h = 1, . . . , j

From (i, 1, j, h, k, l) to (i, 1, j − 1, h − 1, k, l) is given by

μIm1 m2 , j = s + 1, . . . , S − 1; h = 1, . . . , j

(c) Transitions due to completion of production of an item


From (i, 1, j, h, k, l) to (i, 1, j + 1, h, k, l) is given by

Im1 ⊗ T 0 β, j = 0, . . . , S − 2; h = 0, . . . , j

From (i, 1, S − 1, h, k, l) to (i, 0, S, h, k) is given by

Im1 ⊗ T 0 , h = 0, . . . , S − 1

(d) Transitions due to retrial of customers from the orbit


From (i, 0, j, h, k) to (i − 1, 0, j, h + 1, k) is given by

iθ Im1 , j = s + 1, . . . , S; h = 0, . . . , j − 1

From (i, 0, j, j, k) to (i − 1, 0, j, j, k) is given by

iθ (1 − δ)Im1 , j = s + 1, . . . , S

From (i, 1, j, h, k, l) to (i − 1, 1, j, h + 1, k, l) is given by

iθ Im1 m2 , j = 1, . . . , S − 1; h = 0, . . . , j − 1

From (i, 1, j, j, k, l) to (i − 1, 1, j, j, k, l) is given by

iθ (1 − δ)Im1 m2 , j = 0, . . . , S − 1

(e) Transitions that leave the first four coordinates fixed


From (i, 0, j, 0) to (i, 0, j, 0) is given by

D0 − iθ Im1 , j = s + 1, . . . , S

From (i, 0, j, h) to (i, 0, j, h) is given by

D0 − μIm1 − iθ Im1 , j = s + 1, . . . , S; h = 1, . . . , j − 1

From (i, 0, j, j ) to (i, 0, j, j ) is given by

D0 + (1 − γ )D1 − μIm1 − iθ (1 − δ)Im1 , j = s + 1, . . . , S



From (i, 1, 0, 0) to (i, 1, 0, 0) is given by

(D0 + (1 − γ )D1 ) ⊕ T − iθ (1 − δ)Im1 m2

From (i, 1, j, 0) to (i, 1, j, 0) is given by

D0 ⊕ T − iθ Im1 m2 , j = 1, . . . , S − 1

From (i, 1, j, h) to (i, 1, j, h) is given by

D0 ⊕ T − αμIm1 m2 − iθ Im1 m2 , j = 1, . . . , s; h = 1, . . . , j − 1

From (i, 1, j, j ) to (i, 1, j, j ) is given by

(D0 + (1 − γ )D1 ) ⊕ T − αμIm1 m2 − iθ (1 − δ)Im1 m2 , j = 1, . . . , s

From (i, 1, j, h) to (i, 1, j, h) is given by

D0 ⊕ T − μIm1 m2 − iθ Im1 m2 , j = s + 1, . . . , S − 1; h = 1, . . . , j − 1

From (i, 1, j, j ) to (i, 1, j, j ) is given by

(D0 + (1 − γ )D1 ) ⊕ T − μIm1 m2 − iθ (1 − δ)Im1 m2 , j = s + 1, . . . , S − 1

The generator matrix of the Markov chain is given by


Q = \begin{pmatrix}
A_{1,0} & A_{0}   &         &         & \\
A_{2,1} & A_{1,1} & A_{0}   &         & \\
        & A_{2,2} & A_{1,2} & A_{0}   & \\
        &         & A_{2,3} & A_{1,3} & A_{0} \\
        &         &         & \ddots  & \ddots & \ddots
\end{pmatrix}   (1)

where A_{1,i}, i ≥ 0, governs transitions within level i; A_0, transitions from level i to
i + 1; and A_{2,i}, i ≥ 1, transitions from level i to i − 1. The Neuts–Rao [13] truncation
method is used to modify the infinitesimal generator Q so that A_{1,i} = A_1 and
A_{2,i} = A_2 for i ≥ N.
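The block-tridiagonal structure in Eq. (1) can be assembled numerically once the blocks are available. The following Python sketch (not part of the original formulation) builds a finite truncation of such a generator from generic blocks; the function name and the toy 2×2 blocks are illustrative placeholders rather than the actual A-matrices of this model.

```python
import numpy as np

def assemble_qbd_generator(A1_blocks, A0, A2_blocks):
    """Assemble a finite truncation of the block-tridiagonal generator of Eq. (1).

    A1_blocks[i] is the diagonal block of level i, A0 the (level-independent)
    up block, and A2_blocks[i] the down block from level i+1 to level i.
    All blocks are taken square and of equal size for simplicity.
    """
    L, m = len(A1_blocks), A0.shape[0]
    Q = np.zeros((L * m, L * m))
    for i in range(L):
        Q[i*m:(i+1)*m, i*m:(i+1)*m] = A1_blocks[i]
        if i + 1 < L:
            Q[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = A0             # level i -> i+1
            Q[(i+1)*m:(i+2)*m, i*m:(i+1)*m] = A2_blocks[i]   # level i+1 -> i
    return Q

# Toy 2x2 placeholder blocks, only to show the banded block pattern (not the model's blocks).
A0 = 0.5 * np.eye(2)
A2 = 1.0 * np.eye(2)
A1 = [-A0 if i == 0 else -(A0 + A2) for i in range(4)]
Q = assemble_qbd_generator(A1, A0, [A2] * 3)
print(Q.shape)   # (8, 8); the banded block pattern of Eq. (1)
```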

3 System Stability

In order to establish the stability of the system, we take the Lyapunov test function
(Falin and Templeton [5]) defined by φ(r) = i if r is a state in level i. The mean drift
y_r, for any r belonging to the level i ≥ 1, is given by



y_r = \sum_{p \neq r} q_{rp}\,(\varphi(p) - \varphi(r))
    = \sum_{u} q_{ru}\,(\varphi(u) - \varphi(r)) + \sum_{v} q_{rv}\,(\varphi(v) - \varphi(r)) + \sum_{w} q_{rw}\,(\varphi(w) - \varphi(r))

where u, v, and w vary over the states belonging to the levels (i − 1), i and (i + 1),
respectively. Then, by using the definition of φ, we can define φ(u) = i − 1, φ(v) =
i and φ(w) = i + 1.
 
y_r = -\sum_{u} q_{ru} + \sum_{w} q_{rw}

= \begin{cases}
-i\theta, & r = (i, 0, j, h, k),\; j = s+1,\dots,S;\; h = 1,\dots,j-1;\; k = 1,\dots,m_1 \\
-i\theta(1-\delta) + \gamma (D_1 e)_k, & r = (i, 0, j, h, k),\; j = s+1,\dots,S;\; h = j;\; k = 1,\dots,m_1 \\
-i\theta, & r = (i, 1, j, h, k, l),\; j = 0,\dots,S-1;\; h = 1,\dots,j-1;\; k = 1,\dots,m_1;\; l = 1,\dots,m_2 \\
-i\theta(1-\delta) + (\gamma (D_1 e) \otimes e_{m_2})_{(k-1)m_1+l}, & r = (i, 1, j, h, k, l),\; j = 0,\dots,S-1;\; h = j;\; k = 1,\dots,m_1;\; l = 1,\dots,m_2
\end{cases}

Since (1 − δ) > 0, for any ε > 0 we can find N′ large enough so that y_r < −ε
for any r belonging to the level i ≥ N′. Hence, by Tweedie's result [15], the system
under consideration is stable.

3.1 Rate Matrix R and Truncation Level N

We use an iterative method to find R. The sequence {R_n(N)} is defined by
R_0(N) = 0 and R_{n+1}(N) = (−R_n^2(N) A_2(N) − A_0(N)) A_1^{-1}(N).
The value of N must be chosen such that |η(N) − η(N + 1)| < ε, where ε
is an arbitrarily small value and η(N) is the spectral radius of R(N). For a detailed
discussion of the selection of the value of N, one can refer to Neuts [12].
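The recursion is straightforward to implement. The sketch below is a minimal NumPy version of R_{n+1}(N) = (−R_n^2(N) A_2(N) − A_0(N)) A_1^{-1}(N); the function name, tolerance and the scalar M/M/1-type test blocks are our own illustrative choices, not part of the paper.

```python
import numpy as np

def rate_matrix_R(A0, A1, A2, tol=1e-12, max_iter=10_000):
    """Iterate R_{n+1} = (-R_n^2 A2 - A0) A1^{-1} from R_0 = 0 until convergence."""
    A1_inv = np.linalg.inv(A1)
    R = np.zeros_like(A0)
    for _ in range(max_iter):
        R_new = (-R @ R @ A2 - A0) @ A1_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    raise RuntimeError("R iteration did not converge")

# Tiny M/M/1-type illustration (1x1 'blocks' with lambda = 1, mu = 2):
lam, mu = 1.0, 2.0
A0, A1, A2 = np.array([[lam]]), np.array([[-(lam + mu)]]), np.array([[mu]])
R = rate_matrix_R(A0, A1, A2)
print(R)                                           # ~0.5 = lam/mu, as expected
print(np.max(np.abs(np.linalg.eigvals(R))))        # spectral radius eta(N) used to fix N
```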

4 Performance Measures of the Model

The (i + 1)th component of the steady state probability vector

x = (x0 , x1 , x2 , . . . , xN−1 , xN , . . .)

is given by

xi = (yi,0,j , yi,1,j )

where

yi,0,j = (yi,0,j,0,1 , . . . , yi,0,j,0,m1 , . . . , yi,0,j,j,1 , . . . , yi,0,j,j,m1 ), j = s + 1, . . . , S;

yi,1,j = (yi,1,j,0,1,1, . . . , yi,1,j,0,1,m2 , . . . , yi,1,j,0,m1 ,1 , . . . , yi,1,j,0,m1 ,m2 , . . . ,


yi,1,j,j,1,1 , . . . , yi,1,j,j,1,m2 , . . . , yi,1,j,j,m1 ,1 , . . . , yi,1,j,j,m1 ,m2 ),
j = 0, . . . , S − 1.

Then,
1. Expected Inventory level, EI, in the system is given by

EI = \sum_{i=0}^{\infty}\sum_{j=s+1}^{S}\sum_{h=0}^{j}\sum_{k=1}^{m_1} j\, y_{i,0,j,h,k} + \sum_{i=0}^{\infty}\sum_{j=0}^{S-1}\sum_{h=0}^{j}\sum_{k=1}^{m_1}\sum_{l=1}^{m_2} j\, y_{i,1,j,h,k,l}

2. Expected number of customers, EO, in the orbit is given by

EO = \sum_{i=1}^{\infty} i\, x_i\, e

3. Expected number of customers, EB, in the buffer is given by

EB = \sum_{i=0}^{\infty}\sum_{j=s+1}^{S}\sum_{h=0}^{j}\sum_{k=1}^{m_1} h\, y_{i,0,j,h,k} + \sum_{i=0}^{\infty}\sum_{j=0}^{S-1}\sum_{h=0}^{j}\sum_{k=1}^{m_1}\sum_{l=1}^{m_2} h\, y_{i,1,j,h,k,l}

4. Expected switching rate, ESR, is given by

ESR = \mu \sum_{i=0}^{\infty}\sum_{h=1}^{s+1}\sum_{k=1}^{m_1} y_{i,0,s+1,h,k}

5. Expected number of departures, EDS, after completing service is given by

EDS = \mu \sum_{i=0}^{\infty}\sum_{j=s+1}^{S}\sum_{h=1}^{j}\sum_{k=1}^{m_1} y_{i,0,j,h,k} + \alpha\mu \sum_{i=0}^{\infty}\sum_{j=1}^{s}\sum_{h=1}^{j}\sum_{k=1}^{m_1}\sum_{l=1}^{m_2} y_{i,1,j,h,k,l}
      + \mu \sum_{i=0}^{\infty}\sum_{j=s+1}^{S-1}\sum_{h=1}^{j}\sum_{k=1}^{m_1}\sum_{l=1}^{m_2} y_{i,1,j,h,k,l}

6. Expected number of external customers lost, EL_1, before entering the orbit per
   unit time is

EL_1 = \sum_{i=0}^{\infty}\sum_{j=s+1}^{S}\sum_{k=1}^{m_1} y_{i,0,j,j,k}\,\bigl((1-\gamma)D_1 e\bigr) + \sum_{i=0}^{\infty}\sum_{j=0}^{S-1}\sum_{k=1}^{m_1}\sum_{l=1}^{m_2} y_{i,1,j,j,k,l}\,\bigl((1-\gamma)D_1 \otimes e_{m_2}\bigr)

7. Expected number of customers lost, EL_2, due to retrials per unit time is

EL_2 = \theta(1-\delta)\sum_{i=0}^{\infty}\sum_{j=s+1}^{S}\sum_{k=1}^{m_1} i\, y_{i,0,j,j,k}\, e_{m_1} + \theta(1-\delta)\sum_{i=0}^{\infty}\sum_{j=0}^{S-1}\sum_{k=1}^{m_1}\sum_{l=1}^{m_2} i\, y_{i,1,j,j,k,l}\, e_{m_1 m_2}

8. Overall rate of retrials, ORR, is given by

ORR = \theta \sum_{i=1}^{\infty} i\, x_i\, e

9. Successful rate of retrials, SRR, is given by

SRR = \sum_{i=0}^{\infty}\sum_{j=s+1}^{S}\sum_{h=0}^{j-1}\sum_{k=1}^{m_1} i\theta\, y_{i,0,j,h,k}\, e_{m_1} + \sum_{i=0}^{\infty}\sum_{j=0}^{S-1}\sum_{h=0}^{j-1}\sum_{k=1}^{m_1}\sum_{l=1}^{m_2} i\theta\, y_{i,1,j,h,k,l}\, e_{m_1 m_2}

10. Expected number of crossovers, ECC, in one cycle is

ECC = \sum_{i=0}^{\infty}\sum_{h=0}^{s-1}\sum_{k=1}^{m_1}\sum_{l=1}^{m_2} y_{i,1,s-1,h,k,l}\,\bigl(e_{m_1} \otimes T^{0}\bigr) + \mu \sum_{i=0}^{\infty}\sum_{h=1}^{s+1}\sum_{k=1}^{m_1}\sum_{l=1}^{m_2} y_{i,1,s+1,h,k,l}

5 Numerical Results

We analyze the model by considering the performance measures overall and


successful rate of retrials (ORR and SRR) and expected number of crossovers in
one cycle (ECC). The values of ORR, SRR, and ECC by varying the parameters
α, γ , δ, and θ are given in the following tables.
Consider the following parameter values.

m_1 = 2, \; m_2 = 2, \quad D_0 = \begin{pmatrix} -2.1 & 1.0 \\ 1.0 & -3.1 \end{pmatrix}, \quad D_1 = \begin{pmatrix} 0.1 & 1.0 \\ 1.0 & 1.1 \end{pmatrix}

\beta = (0.5, 0.5), \quad T = \begin{pmatrix} -6 & 3 \\ 1 & -4 \end{pmatrix}, \quad T^{0} = \begin{pmatrix} 2 \\ 2 \end{pmatrix}

Then the average arrival rate = 1.6 and correlation between two successive inter-
arrival times = −0.0067.
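These two quantities can be computed directly from (D_0, D_1) with the standard matrix-analytic formulas. The sketch below is illustrative (the helper name is ours, not from the paper); for the matrices above it reproduces values close to the quoted 1.6 and −0.0067.

```python
import numpy as np

def map_rate_and_lag1_corr(D0, D1):
    """Arrival rate and lag-1 correlation of inter-arrival times for a MAP (D0, D1).

    With P = (-D0)^{-1} D1 the phase transition matrix at arrival epochs and phi
    its stationary vector:
      E[T] = phi (-D0)^{-1} e,  E[T^2] = 2 phi (-D0)^{-2} e,
      E[T_n T_{n+1}] = phi (-D0)^{-1} P (-D0)^{-1} e.
    """
    m = D0.shape[0]
    e = np.ones(m)
    inv = np.linalg.inv(-D0)
    P = inv @ D1
    # stationary vector of P: solve phi(P - I) = 0 with normalisation phi e = 1
    A = np.vstack([(P - np.eye(m)).T, np.ones(m)])
    phi = np.linalg.lstsq(A, np.append(np.zeros(m), 1.0), rcond=None)[0]
    mean = phi @ inv @ e
    var = 2 * phi @ inv @ inv @ e - mean ** 2
    cross = phi @ inv @ P @ inv @ e
    return 1.0 / mean, (cross - mean ** 2) / var

D0 = np.array([[-2.1, 1.0], [1.0, -3.1]])
D1 = np.array([[0.1, 1.0], [1.0, 1.1]])
rate, corr = map_rate_and_lag1_corr(D0, D1)
print(round(rate, 4), round(corr, 4))   # approximately 1.6 and -0.0067
```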
In Table 1, the overall rate of retrials (ORR) decreases and the successful rate of
retrials (SRR) increases. As the service rate increases, the number of customers in
the orbit decreases and hence ORR decreases. However, an increase in the service
rate increases the rate of successful retrials. The increase in service rate decreases
ECC as in
Table 1 Variations in α (S = 7; s = 2; γ = 0.6; N = 50; θ = 1.5; δ = 0.8; μ = 5)

α     ORR       SRR      ECC
0.1   13.2152   1.3473   1.1766
0.2   13.2145   1.3485   1.1651
0.3   13.2138   1.3494   1.1192
0.4   13.2132   1.3502   1.0626
0.5   13.2127   1.3509   1.0059
0.6   13.2122   1.3514   0.9533
0.7   13.2118   1.3518   0.9059
0.8   13.2114   1.3522   0.8639
0.9   13.2111   1.3525   0.8268

Table 1. In Tables 2 and 3, values of all the performance measures increase. As the
probability of primary customers going to orbit and probability of retrial customers
going to orbit increase, the number of customers in the orbit increases. This leads
to the increase of ORR, SRR, and ECC. In Table 4, values of all the performance
measures except ECC increase with the increase in retrial rate.

Table 2 Variations in γ (S = 7; s = 2; α = 0.6; N = 50; θ = 1.5; δ = 0.8; μ = 3)

γ     ORR      SRR      ECC
0.1   0.5192   0.0561   0.4147
0.2   1.1318   0.1185   0.4350
0.3   1.8466   0.1874   0.4569
0.4   2.6704   0.2627   0.4801
0.5   3.6066   0.3440   0.5046
0.6   4.6542   0.4305   0.5300
0.7   5.8068   0.5211   0.5559
0.8   7.0526   0.6141   0.5818
0.9   8.3742   0.7080   0.6073

Table 3 Variations in δ (S = 7; s = 2; α = 0.6; γ = 0.6; N = 50; θ = 1.5; μ = 3)

δ     ORR      SRR      ECC
0.1   1.1416   0.1368   0.4376
0.2   1.2838   0.1507   0.4421
0.3   1.4657   0.1680   0.4476
0.4   1.7060   0.1902   0.4547
0.5   2.0373   0.2198   0.4641
0.6   2.5208   0.2615   0.4773
0.7   3.2854   0.3245   0.4971
0.8   4.6542   0.4305   0.5300
0.9   7.7400   0.6355   0.5903

Table 4 Variations in θ (S = 7; s = 2; α = 0.6; γ = 0.6; N = 50; δ = 0.8; μ = 3)

θ     ORR       SRR      ECC
1.1   10.5773   1.2258   0.8380
1.2   11.3201   1.2323   0.8365
1.3   12.0443   1.2372   0.8343
1.4   12.7515   1.2408   0.8316
1.5   13.4432   1.2436   0.8285
1.6   14.1205   1.2456   0.8252
1.7   14.7846   1.2471   0.8218
1.8   15.4364   1.2483   0.8182
1.9   16.0769   1.2492   0.8146

6 Optimization Problem

Consider the following costs.


• C = the fixed cost
• c1 = the procurement cost/unit/unit time
• c2 = the holding cost of inventory/unit/unit time
• c3 = the holding cost of customers in the orbit/unit/unit time
• c4 = the holding cost of customers in the buffer/unit/unit time
• c5 = the cost due to loss of primary customers/unit/unit time
• c6 = the cost due to loss of retrial customers/unit/unit time
• c7 = the cost due to service/unit/unit time.
We define the expected total cost/unit time as

ETC = (C + (S − s)c_1) ESR + c_2 EI + c_3 EO + c_4 EB + c_5 EL_1 + c_6 EL_2 + c_7 EDS

6.1 Numerical Illustrations

Here, we find out the optimum values of the parameters α, γ , δ, and θ correspond-
ing to the expected minimum total cost. We calculate the minimum expected total
cost, by varying one parameter and keeping all others fixed. In Fig. 1, the optimum

Fig. 1 ET C versus α. S = 7; s = 2; γ = 0.6; N = 50; θ = 1.5; μ = 3; δ = 0.8; C = 20; c1 =


0.8; c2 = 0.1; c3 = 1; c4 = 0.99; c5 = 1.02; c6 = 1.02; c7 = 16.45

Fig. 2 ET C versus γ . S = 7; s = 2; N = 50; θ = 1.5; μ = 3; δ = 0.8; α = 0.6; C = 20; c1 =


1; c2 = 12.9; c3 = 1; c4 = 1; c5 = 1; c6 = 1; c7 = 1

value of α which minimizes the expected total cost is obtained. The optimum value
of α is 0.3 and the minimum ET C is 33.3947. The minimum ET C is 51.2581 at
γ = 0.5 in Fig. 2. Figure 3 shows that the minimum ET C is 328.1441 at δ = 0.8.
From Fig. 4, the optimum retrial rate θ is 1.3 and the minimum ET C is 30.5237.

6.2 Optimum (s, S) Pair

We find out the optimum (s, S) pair, by fixing the parameter values and cost values.
The optimum value of s, for each value of S, is obtained as in Table 5. The optimum
value of s is 6 for all values 12, 13, 14, 15, 16 and 17 of S considered. The optimum
(s, S) pair, which minimizes ET C is (6, 15) and the minimum value of ET C is
17.1538.
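A simple way to locate the optimum pair numerically is a grid search over candidate (s, S) values, as sketched below. Here `expected_total_cost` is a hypothetical placeholder for the full steady-state evaluation of ETC (via the R matrix of Sect. 3.1); the toy cost surface is only there to make the sketch runnable and mimics the shape of Table 5.

```python
# Minimal grid-search sketch for the optimum (s, S) pair.
def optimum_pair(s_values, S_values, expected_total_cost):
    best = None
    for S in S_values:
        for s in s_values:
            if s >= S:                     # reorder level must stay below the maximum level
                continue
            etc = expected_total_cost(s, S)
            if best is None or etc < best[2]:
                best = (s, S, etc)
    return best

# Toy stand-in cost surface (not the model's ETC), minimised at (6, 15):
toy_cost = lambda s, S: (s - 6) ** 2 + (S - 15) ** 2 + 17.15
print(optimum_pair(range(4, 10), range(12, 18), toy_cost))   # -> (6, 15, 17.15)
```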

Fig. 3 ET C versus δ. S = 7; s = 2; γ = 0.6; N = 50; θ = 1.5; μ = 3; α = 0.6; C = 20; c1 =


7; c2 = 88.5; c3 = 10.1; c4 = 0.9; c5 = 1; c6 = 0.8; c7 = 0.8

Fig. 4 ET C versus θ. S = 7; s = 2; γ = 0.6; N = 50; μ = 3; δ = 0.8; α = 0.6; C = 20; c1 =


1.5; c2 = 6.8; c3 = 1; c4 = 0.1; c5 = 1; c6 = 1; c7 = 1

Table 5 Optimum (s, S) pair (γ = 0.6; N = 50; θ = 1.5; δ = 0.8; μ = 3; α = 0.6; C = 20;
c1 = 1; c2 = 1; c3 = 4; c4 = 1; c5 = 2; c6 = 2; c7 = 1)

        S
s    12        13        14        15        16        17
4    17.9286   17.7367   17.6398   17.6188   17.6599   17.7529
5    17.6202   17.4054   17.3028   17.2854   17.3350   17.4391
6    17.5974   17.3196   17.1854   17.1538   17.1991   17.3044
7    17.8814   17.4791   17.2771   17.2081   17.2333   17.3286
8    18.5415   17.9069   17.5801   17.4406   17.4248   17.4959
9    19.7641   18.6733   18.1196   17.8564   17.7688   17.7965

7 Correlation Analysis

7.1 MAP with Negative Correlation

Let
D_0 = \begin{pmatrix} -2.0 & 2.0 & 0 \\ 0 & -1.1 & 0 \\ 0 & 0 & -13.2 \end{pmatrix}, \quad D_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0.1 & 0 & 1.0 \\ 12.0 & 0 & 1.2 \end{pmatrix}

Then r = −0.3690. That is, the arrival process has negatively correlated arrivals; the
correlation between two successive inter-arrival times is −0.3690.

7.2 MAP with Positive Correlation

Let
D_0 = \begin{pmatrix} -2.0 & 2.0 & 0 \\ 0 & -1.1 & 0 \\ 0 & 0 & -13.2 \end{pmatrix}, \quad D_1 = \begin{pmatrix} 0 & 0 & 0 \\ 1.0 & 0 & 0.1 \\ 1.2 & 0 & 12.0 \end{pmatrix}

Then r = 0.3690. That is, the arrival process has positive correlation with value
0.3690.

Table 6 Effect of correlation on performance measures (S = 15; s = 6; γ = 0.6; N = 50; θ = 1.5; δ = 0.8; μ = 3; α = 0.6)

r         ESR      EI       EO       EB       EL_1
−0.3690   0.0504   7.8710   0.2736   1.0079   0.0020
0.3690    0.0537   8.0844   2.4005   1.7592   0.0437

r         EL_2     EDS      ORR      SRR      ERC
−0.3690   0.0617   1.2839   0.4105   0.1389   0.2970
0.3690    0.5351   1.2210   3.6007   1.1226   0.2838

7.3 Effect of Correlation on Performance Measures

From Table 6, when correlation between two successive inter-arrival times is


positive, all performance measures except EDS and ERC have higher values.

8 Concluding Remarks

In this paper, we analyzed a production inventory system with different service rates
and retrial of customers. We investigated the stability of the system and derived vari-
ous performance measures of the system in the steady state. A suitable cost function
is constructed and analyzed. The optimum value of α is obtained graphically. The
optimum values of other parameters are also obtained. The optimum (s, S) pair is
also computed. The effect of correlation between two successive inter-arrival times
on different performance measures is also analyzed.
The analyzed model can be extended further by assuming Batch Markovian
Arrival Process (BMAP).

Acknowledgments Salini S. Nair acknowledges the financial support of University Grants


Commission of India under Faculty Development Programme F.No. FIP/12th Plan/KLMG045
TF07/2015.

References

1. Artalejo, J.R., Krishnamoorthy, A., Lopez-Herrero, M.J.: Numerical analysis of (s, S) inventory
systems with repeated attempts. Ann. Oper. Res. 141(1), 67–83 (2006)
2. Baek, J.W., Moon, S.K.: A production–inventory system with a Markovian service queue and
lost sales. J. Korean Stat. Soc. 45(1), 14–24 (2016)
3. Benjaafar, S., Gayon, J.P., Tepe, S.: Optimal control of a production–inventory system with
customer impatience. Oper. Res. Lett. 38(4), 267–272 (2010)

4. Chan, C.K., Wong, W.H., Langevin, A., Lee, Y.: An integrated production-inventory model
for deteriorating items with consideration of optimal production rate and deterioration during
delivery. Int. J. Prod. Econ. 189, 1–13 (2017)
5. Falin, G.I., Templeton, J.G.C.: Retrial Queues, vol. 75. CRC Press, Boca Raton (1997)
6. He, Q.M., Zhang, H.: Performance analysis of an inventory–production system with shipment
consolidation in the production facility. Perform. Eval. 70(9), 623–638 (2013)
7. Jose, K.P., Salini, S.N.: Analysis of two production inventory systems with buffer, retrials and
different production rates. J. Ind. Eng. Int. 13(3), 369–380 (2017)
8. Krishnamoorthy, A., Jose, K.P.: Three production inventory systems with service, loss and
retrial of customers. Int. J. Inf. Manag. Sci. 19(3), 367–389 (2008)
9. Krishnamoorthy, A., Narayanan, V.C.: Production inventory with service time and vacation to
the server. IMA J. Manag. Math. 22(1), 33–45 (2011)
10. Krishnamoorthy, A., Nair, S.S., Narayanan, V.C.: Production inventory with service time and
interruptions. Int. J. Syst. Sci. 46(10), 1800–1816 (2015)
11. Nair, A.N., Jacob, M.: An production inventory controlled self-service queuing system. J.
Probab. Stat. 505082, 1–8 (2015)
12. Neuts, M.F.: Matrix-geometric Solutions in Stochastic Models: An Algorithmic Approach.
Johns Hopkins University, Baltimore (1981)
13. Neuts, M.F., Rao, B.M.: Numerical investigation of a multiserver retrial model. Queue. Syst.
7(2), 169–189 (1990)
14. Shajin, D., Benny, B., Razumchik, R.V., Krishnamoorthy, A.: Discrete product inventory
control system with positive service time and two operation modes. Autom. Remote Control
79(9), 1593–1608 (2018)
15. Tweedie, R.L.: Sufficient conditions for regularity, recurrence and ergodicity of Markov
processes. In: Mathematical Proceedings of the Cambridge Philosophical Society, vol. 78, pp.
125–136. Cambridge University Press, Cambridge (1975)
16. Wensing, T., Kuhn, H.: Analysis of production and inventory systems when orders may cross
over. Ann. Oper. Res. 231(1), 265–281 (2015)
17. Yu, A.J., Dong, Y.: A numerical solution for a two-stage production and inventory system with
random demand arrivals. Comput. Oper. Res. 44, 13–21 (2014)
18. Zare, A.G., Abouee-Mehrizi, H., Berman, O.: Exact analysis of the (r, q) inventory policy in a
two-echelon production–inventory system. Oper. Res. Lett. 45(4), 308–314 (2017)
On a Generalized Lifetime Model Using
DUS Transformation

P. Kavya and M. Manoharan

Abstract In this paper, we propose a new lifetime distribution based on the


generalized DUS transformation by using Weibull distribution as the baseline
distribution. This new distribution exhibits various behaviour of hazard function like
increasing, decreasing and inverse bathtub. Here we try to study the characteristics
of the new distribution and also analyse a real data set to illustrate the flexibility of
the model.

Keywords DUS transformation · Inverse bathtub · Weibull distribution

1 Introduction

For analysing the lifetime data, there are number of models available in the
literature. Earlier, only the constant, increasing and decreasing hazard rates received
serious consideration. But, in real-life problem, a situation does arise when the
hazard rate is expected to be non-monotone, for example, human life. To model
such problems, several non-monotone hazard rate distributions were introduced.
Mudholkar and Srivastava [7], Xie and Lai [10], Gupta et al. [4] and Xie et al.
[11] are notable research works related to the non-monotone hazard rate data.
There is a growing interest in the study of inverse bathtub hazard rates nowadays.
A study of head and neck cancer data, [2] showed inverse bathtub hazard rates, in
which the hazard rate initially increased, attained a maximum, and then decreased
before it finally stabilized due to therapy. Log-normal, log-logistic, Burr Type XII,
Burr Type III, log-Burr Type XII and the inverse Weibull distributions are some of
the statistical distributions that show inverse bathtub hazard rates.

P. Kavya () · M. Manoharan


Department of Statistics, University of Calicut, Malappuram, Kerala, India


In the present study, we used a transformation called DUS transformation


proposed by Kumar et al. [5]. If F (x) is the cdf of some baseline distribution, then
the cdf G(x) of the new distribution is given by

G(x) = \frac{e^{F(x)} - 1}{e - 1}.

Maurya et al. [6] introduced a new class of distribution by using the generalization
of DUS transformation. The cdf of Generalized DUS (GDUS) transformation is
G(x) = \frac{e^{F^{\alpha}(x)} - 1}{e - 1},

where F(x) is the cdf of some baseline distribution. In both these papers, the
exponential distribution was used as the baseline distribution.
Our main objective in this paper is to introduce a new class of distribution which
includes all types of failure rates for suitable choice of parameter. By using GDUS
transformation, the obtained distribution is expected to possess both monotone and
non-monotone failure rates depending on the values of the parameter. The Weibull
distribution has wide application in reliability and survival analysis. Depending on
the shape parameter, Weibull models show different types of observed failures of
components. Therefore here we consider Weibull distribution with parameters λ and
k as the baseline distribution in GDUS transformation. The cdf and pdf of Weibull
distribution are, respectively, G(x) = 1 − e−( λ ) and g(x) = ( λk )( xλ )k−1 e−( λ ) ,
x k x k

x > 0, λ, k > 0.
Using GDUS and Weibull distribution, the cdf and pdf of the new distribution,
i.e., GDUS Weibull Distribution (GDUSWD) can be obtained as
F(x) = \frac{e^{\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha}} - 1}{e - 1};   (1)

f(x) = \frac{\alpha k x^{k-1} e^{-(x/\lambda)^{k}} \left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha-1} e^{\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha}}}{\lambda^{k}(e-1)}, \quad x > 0,\ \alpha, \lambda, k > 0.   (2)

The hazard function of the distribution is,


$ %α
 α−1 x k
1−e−( λ )
αkx k−1 e−( λ ) −( λx )k
x k
1−e e
h(x; α, λ, k) = ⎛ $ % ⎞ . (3)
x k α
1−e−( λ )
λk ⎝e − e ⎠
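For readers who wish to evaluate these expressions numerically, a small Python sketch of Eqs. (1)–(3) is given below; the function names are ours and the parameter values are illustrative only.

```python
import numpy as np

def gdusw_cdf(x, alpha, lam, k):
    """cdf (1) of the GDUS-Weibull distribution."""
    G = 1.0 - np.exp(-(x / lam) ** k)          # baseline Weibull cdf
    return (np.exp(G ** alpha) - 1.0) / (np.e - 1.0)

def gdusw_pdf(x, alpha, lam, k):
    """pdf (2) of the GDUS-Weibull distribution."""
    t = (x / lam) ** k
    G = 1.0 - np.exp(-t)
    return (alpha * k * x ** (k - 1) * np.exp(-t) * G ** (alpha - 1)
            * np.exp(G ** alpha)) / (lam ** k * (np.e - 1.0))

def gdusw_hazard(x, alpha, lam, k):
    """hazard (3); equivalently pdf / (1 - cdf)."""
    return gdusw_pdf(x, alpha, lam, k) / (1.0 - gdusw_cdf(x, alpha, lam, k))

x = np.linspace(0.1, 5.0, 5)
print(gdusw_hazard(x, alpha=1.5, lam=1.5, k=2.0))   # roughly increasing, the IFR case
```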

The paper deals with the selected topics as follows: In Sect. 2, we plot the pdf
and hazard rates for different values of parameters of the GDUSWD. Various
statistical characteristics of proposed distribution like moments, quantile function
order statistic and Reńyi entropy are included in Sect. 3. The parametric estimation
for the new distribution is discussed in Sect. 4. In Sect. 5 we illustrate the flexibility
of the proposed model for a real data set by using AIC (Akaike information
criterion) and BIC (Bayesian information criterion), and finally recapitulate the
conclusions in Sect. 6.

2 Shape of the pdf and Hazard Function

The distribution function may seem complicated, so we plot it to gain a better


understanding of the nature of the distribution. Using Eq. (2), the plots of pdf for
various values of the parameters α, λ and k are given in Fig. 1.

Fig. 1 The probability density plot of GDUSWD. (Red) α = 0.5, k = 1.5, λ = 1.5; (blue)
α = 1.5, k = 2, λ = 1.5; (green) α = 0.8, k = 2, λ = 1.5
The term η(x) = −f′(x)/f(x), where f(x) is the density function of the distribution
and f′(x) is the first derivative of f(x) with respect to x, is defined by Glaser [3]
for the study of the shapes of the hazard rate. He stated the following theorem:
Theorem 1
1. If η′(x) > 0 for all x > 0, then the distribution has increasing failure rate (IFR).
2. If η′(x) < 0 for all x > 0, then the distribution has decreasing failure rate (DFR).
3. Suppose there exists x_0 > 0 such that η′(x) > 0 for all x ∈ (0, x_0), η′(x_0) = 0,
   and η′(x) < 0 for all x > x_0, and ε = lim_{x→0} f(x) exists. Then if
   (i) ε = 0, the distribution has inverse bathtub failure rate.
   (ii) ε = ∞, the distribution has DFR.
In GDUSWD,
 
\eta(x) = -\frac{(k-1)}{x} + \frac{k x^{k-1}}{\lambda^{k}}\left[1 - \frac{(\alpha-1)e^{-(x/\lambda)^{k}}}{1-e^{-(x/\lambda)^{k}}} - \alpha\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha-1} e^{-(x/\lambda)^{k}}\right]

and
\eta'(x) = \frac{k-1}{x^{2}} + \frac{k(k-1)x^{k-2}}{\lambda^{k}}
 - \frac{k(\alpha-1)}{\lambda^{k}}\left[\left(1-e^{-(x/\lambda)^{k}}\right)e^{-(x/\lambda)^{k}}\left((k-1)x^{k-2} - \frac{k}{\lambda^{k}}x^{2(k-1)}\right) - \frac{k}{\lambda^{k}}x^{2(k-1)} e^{-2(x/\lambda)^{k}}\right]
 - \frac{k\alpha}{\lambda^{k}} x^{k-1} e^{-(x/\lambda)^{k}}\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha-2}\left[(\alpha-1)e^{-(x/\lambda)^{k}} + \left(1-e^{-(x/\lambda)^{k}}\right)\right].   (4)
The obtained expression of η′(x) is too complicated, so we have used the
software MATHEMATICA for checking the conditions mentioned in Theorem 1.
Here we have observed that
– When α ≤ 0.5, we have η′(x) < 0 for all x > 0, hence the distribution has DFR
– When α ≥ 1, we have η′(x) > 0 for all x > 0, hence the distribution has IFR
– When 0.5 < α < 1, there exists an x_0 such that η′(x) > 0 when x ∈ (0, x_0),
  η′(x_0) = 0 and η′(x) < 0 for all x > x_0, where x_0 depends on the values of α, λ
  and k.
From Eq. (2), we can easily verify that limx→0 f (x) = 0, hence the distribution has
inverse bathtub shaped failure rate. Figure 2 shows the different shapes of hazard
rates.
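The symbolic check described above can also be mimicked numerically, for instance by finite differences on log f(x), as in the rough sketch below. The grid, step sizes and parameter values are arbitrary choices of ours; the output only reports the share of grid points where η′(x) > 0 (1.0 suggests IFR on that grid, 0.0 suggests DFR, and an intermediate value points to a non-monotone hazard).

```python
import numpy as np

def log_pdf(x, alpha, lam, k):
    t = (x / lam) ** k
    G = 1.0 - np.exp(-t)
    return (np.log(alpha * k) + (k - 1) * np.log(x) - t + (alpha - 1) * np.log(G)
            + G ** alpha - k * np.log(lam) - np.log(np.e - 1.0))

def eta(x, alpha, lam, k, h=1e-5):
    """eta(x) = -f'(x)/f(x) = -(d/dx) log f(x), via central differences."""
    return -(log_pdf(x + h, alpha, lam, k) - log_pdf(x - h, alpha, lam, k)) / (2 * h)

def eta_prime_share_positive(alpha, lam, k, grid=None, h=1e-4):
    """Share of grid points with eta'(x) > 0, as a crude stand-in for Theorem 1."""
    if grid is None:
        grid = np.linspace(0.05, 10.0, 400)
    d = np.array([(eta(x + h, alpha, lam, k) - eta(x - h, alpha, lam, k)) / (2 * h)
                  for x in grid])
    return float((d > 0).mean())

for a in (0.3, 0.8, 1.5):
    print(a, eta_prime_share_positive(alpha=a, lam=1.5, k=2.0))
```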

Fig. 2 The hazard rate plots of GDUSWD. (a) (Blue) α = 0.3, k = 2, λ = 0.1, (red) α = 0.4,
k = 2, λ = 0.3. (b) (Blue) α = 0.8, k = 2, λ = 4, (red) α = 0.6, k = 2, λ = 3. (c) (Green)
α = 1.5, k = 2, λ = 1.5, (blue) α = 2, k = 2, λ = 1.5

3 Some Analytical Characteristics

Different statistical characteristics like moments, quantile function, order statistic


and Reńyi entropy of our proposed distribution are discussed below.

3.1 Moments

The moments are used to understand the various characteristics of the proposed
distribution. The rth raw moment of the GDUSWD is
E(X^{r}) = \frac{\alpha k}{\lambda^{k}(e-1)} \int_{0}^{\infty} x^{r+k-1} e^{-(x/\lambda)^{k}} \left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha-1} e^{\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha}} dx

Expanding the exponential term e^{x} = \sum_{i=0}^{\infty} \frac{x^{i}}{i!}, we get

E(X^{r}) = \frac{\alpha k}{\lambda^{k}(e-1)} \int_{0}^{\infty} x^{r+k-1} e^{-(x/\lambda)^{k}} \left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha-1} \sum_{l=0}^{\infty} \frac{\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha l}}{l!}\, dx.

Since the summation is absolutely convergent, we can interchange the summation
and the integral:

E(X^{r}) = \frac{\alpha k}{\lambda^{k}(e-1)} \sum_{l=0}^{\infty} \frac{1}{l!} \int_{0}^{\infty} x^{r+k-1} e^{-(x/\lambda)^{k}} \left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha-1} \left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha l} dx.

Using the series expansion (1-y)^{b} = \sum_{i=0}^{\infty} (-1)^{i}\binom{b}{i} y^{i} and simplifying,

E(X^{r}) = \frac{\alpha}{\lambda^{k}(e-1)} \sum_{l=0}^{\infty} \sum_{m=0}^{\infty} \frac{(-1)^{m}}{l!} \binom{\alpha l+\alpha-1}{m} \frac{\Gamma\left(\frac{r}{k}+1\right)}{\left(\frac{m+1}{\lambda^{k}}\right)^{\frac{r}{k}+1}}.

From this expression of the rth raw moment, the variance and other higher-order
central moments can be obtained.
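As a cross-check of the series, the raw moments can also be obtained by direct numerical quadrature of x^r f(x). The sketch below assumes SciPy is available and uses an arbitrary illustrative parameter choice; it is not part of the original derivation.

```python
import numpy as np
from scipy.integrate import quad

def gdusw_pdf(x, alpha, lam, k):
    t = (x / lam) ** k
    G = 1.0 - np.exp(-t)
    return (alpha * k * x ** (k - 1) * np.exp(-t) * G ** (alpha - 1)
            * np.exp(G ** alpha)) / (lam ** k * (np.e - 1.0))

def raw_moment(r, alpha, lam, k):
    """E[X^r] by numerical quadrature, an alternative to evaluating the double series."""
    val, _ = quad(lambda x: x ** r * gdusw_pdf(x, alpha, lam, k), 0.0, np.inf)
    return val

m1 = raw_moment(1, alpha=1.5, lam=1.5, k=2.0)
m2 = raw_moment(2, alpha=1.5, lam=1.5, k=2.0)
print(m1, m2 - m1 ** 2)   # mean and variance for one illustrative parameter choice
```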

3.2 Quantile Function

The pth quantile function Q(p) of our proposed distribution is obtained from the
equation F(Q(p)) = p, which gives

Q(p) = \lambda \left[ -\log\left( 1 - \left( \log\left( 1 + p(e-1) \right) \right)^{\frac{1}{\alpha}} \right) \right]^{\frac{1}{k}}.   (5)
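Equation (5) also gives a direct way to simulate from the GDUSWD by inverse transform sampling, as in the short sketch below; the function names and parameter values are our own illustrative choices.

```python
import numpy as np

def gdusw_quantile(p, alpha, lam, k):
    """Quantile function (5); feeding uniform variates through it gives GDUSWD samples."""
    inner = np.log1p(p * (np.e - 1.0)) ** (1.0 / alpha)
    return lam * (-np.log(1.0 - inner)) ** (1.0 / k)

def gdusw_cdf(x, alpha, lam, k):
    G = 1.0 - np.exp(-(x / lam) ** k)
    return (np.exp(G ** alpha) - 1.0) / (np.e - 1.0)

rng = np.random.default_rng(1)
u = rng.uniform(size=100_000)
x = gdusw_quantile(u, alpha=1.5, lam=1.5, k=2.0)      # inverse-transform sample
p = np.array([0.1, 0.5, 0.9])
print(gdusw_cdf(gdusw_quantile(p, 1.5, 1.5, 2.0), 1.5, 1.5, 2.0))   # ~ [0.1, 0.5, 0.9]
print(x.mean())                                        # sample mean of the simulated data
```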

3.3 Order Statistic

Order statistics are sample values placed in ascending order. The study of order
statistics deals with the applications of these ordered values and their functions. Let
X1 , X2 , . . . , Xn be a random sample of size n from the GDUSWD distribution and
X(1) , X(2) , . . . , X(n) denote the corresponding order statistics. The pdf and cdf of
the rth order statistics fr (x) and Fr (x) are given by
f_r(x) = \frac{n!}{(r-1)!(n-r)!}\, F^{r-1}(x)\,[1-F(x)]^{n-r} f(x)

and

F_r(x) = \sum_{j=r}^{n} \binom{n}{j} F^{j}(x)\,[1-F(x)]^{n-j}.

The pdf f_r(x) and cdf F_r(x) of the rth order statistic of our proposed distribution are
obtained by using Eqs. (1) and (2) as

f_r(x) = \frac{n!}{(r-1)!(n-r)!}\left(\frac{e^{\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha}}-1}{e-1}\right)^{r-1}\left(\frac{e-e^{\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha}}}{e-1}\right)^{n-r}
\frac{\alpha\frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1} e^{-(x/\lambda)^{k}}\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha-1} e^{\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha}}}{e-1}   (6)

and

F_r(x) = \sum_{j=r}^{n} \binom{n}{j}\left(\frac{e^{\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha}}-1}{e-1}\right)^{j}\left(\frac{e-e^{\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha}}}{e-1}\right)^{n-j}.   (7)

The pdf of smallest and largest order statistics X(1) and X(n) is obtained by putting
r = 1 and r = n, respectively, in Eq. (6). The cdf of X(1) and X(n) is obtained by
putting r = 1 and r = n, respectively, in Eq. (7).

3.4 Entropy

Entropy is interpreted as the degree of disorder or randomness in the system. Rényi
entropy [8] is one of the well-known entropy measures. If a random variable X has
the pdf f(x), then the Rényi entropy is defined as

J_R(\gamma) = \frac{1}{1-\gamma} \log\left(\int f^{\gamma}(x)\, dx\right),   (8)

where γ > 0 and γ ≠ 1. From Eq. (2), we get

\int_{0}^{\infty} f^{\gamma}(x)\, dx = \int_{0}^{\infty} \left(\frac{\alpha k}{\lambda^{k}(e-1)}\right)^{\gamma} x^{\gamma(k-1)} e^{-\gamma(x/\lambda)^{k}} \left(1-e^{-(x/\lambda)^{k}}\right)^{\gamma(\alpha-1)} e^{\gamma\left(1-e^{-(x/\lambda)^{k}}\right)^{\alpha}} dx

and, after some algebra along the lines of Sect. 3.1,

\int_{0}^{\infty} f^{\gamma}(x)\, dx = \frac{\alpha^{\gamma} k^{\gamma-1}}{\left(\lambda^{k}\right)^{\gamma-1}(e-1)^{\gamma}} \sum_{i=0}^{\infty}\sum_{j=0}^{\infty} \frac{\gamma^{i}}{i!}\binom{\alpha i+\gamma\alpha-\gamma}{j}\frac{(-1)^{j}}{(\gamma+j)}.

With this, Eq. (8) becomes

J_R(\gamma) = \frac{\gamma}{1-\gamma}\log\left(\frac{\alpha}{e-1}\right) - \log\left(\frac{k}{\lambda^{k}}\right)
 + \frac{1}{1-\gamma}\log\left[\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\frac{\gamma^{i}}{i!}\binom{\alpha i+\gamma\alpha-\gamma}{j}\frac{(-1)^{j}}{(\gamma+j)}\right].   (9)

4 Estimation

Maximizing the logarithm of likelihood function is the most common method


for finding the estimates—called MLE (Maximum Likelihood Estimates)—of the
parameters involved in the given distribution. In this section, we use this method
for obtaining the maximum likelihood estimates of the parameters α, λ and k of the
proposed distribution. The log-likelihood function is,

L(x; \alpha, \lambda, k) = \prod_{i=1}^{n} f(x_i; \alpha, \lambda, k)

\log L(x; \alpha, \lambda, k) = n\log\alpha + n\log\left(\frac{k}{\lambda}\right) - n(k-1)\log\lambda + (k-1)\sum_{i=1}^{n}\log x_i
 - \sum_{i=1}^{n}\left(\frac{x_i}{\lambda}\right)^{k} + (\alpha-1)\sum_{i=1}^{n}\log\left(1-e^{-(x_i/\lambda)^{k}}\right)
 + \sum_{i=1}^{n}\left(1-e^{-(x_i/\lambda)^{k}}\right)^{\alpha} - n\log(e-1).

Differentiating this function with respect to the parameters we get,

\frac{\partial \log L}{\partial \alpha} = \frac{n}{\alpha} + \sum_{i=1}^{n}\log\left(1-e^{-(x_i/\lambda)^{k}}\right) + \sum_{i=1}^{n}\left(1-e^{-(x_i/\lambda)^{k}}\right)^{\alpha}\log\left(1-e^{-(x_i/\lambda)^{k}}\right),

\frac{\partial \log L}{\partial \lambda} = -\frac{n}{\lambda} - \frac{n(k-1)}{\lambda} + \frac{k\sum_{i=1}^{n}x_i^{k}}{\lambda^{k+1}} - (\alpha-1)\sum_{i=1}^{n}\frac{e^{-(x_i/\lambda)^{k}}}{1-e^{-(x_i/\lambda)^{k}}}\left(\frac{x_i}{\lambda}\right)^{k}\frac{k}{\lambda}
 - \alpha\sum_{i=1}^{n}\left(1-e^{-(x_i/\lambda)^{k}}\right)^{\alpha-1} e^{-(x_i/\lambda)^{k}}\left(\frac{x_i}{\lambda}\right)^{k}\frac{k}{\lambda},

and

\frac{\partial \log L}{\partial k} = \frac{n}{k} - n\log\lambda + \sum_{i=1}^{n}\log x_i - \sum_{i=1}^{n}\left(\frac{x_i}{\lambda}\right)^{k}\log\left(\frac{x_i}{\lambda}\right)
 + (\alpha-1)\sum_{i=1}^{n}\frac{e^{-(x_i/\lambda)^{k}}}{1-e^{-(x_i/\lambda)^{k}}}\left(\frac{x_i}{\lambda}\right)^{k}\log\left(\frac{x_i}{\lambda}\right)
 + \alpha\sum_{i=1}^{n}\left(1-e^{-(x_i/\lambda)^{k}}\right)^{\alpha-1} e^{-(x_i/\lambda)^{k}}\left(\frac{x_i}{\lambda}\right)^{k}\log\left(\frac{x_i}{\lambda}\right).

Equating these partial derivatives to zero yields three non-linear equations, and
their solutions provide the maximum likelihood estimate of the parameters α, λ and
k. Newton–Raphson method can be used to solve these equations with the help of
the available statistical packages.
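As a rough illustration of this estimation step, the sketch below maximises the log-likelihood numerically (with SciPy's Nelder–Mead routine rather than Newton–Raphson) on the flood-level data quoted in Sect. 5 and reports AIC and BIC. Starting values and optimiser settings are our own choices, so the resulting numbers depend on these choices and are not guaranteed to reproduce the values reported in Table 1.

```python
import numpy as np
from scipy.optimize import minimize

# Flood-level data of Sect. 5 (Dumonceaux and Antle [1]).
data = np.array([0.654, 0.613, 0.315, 0.449, 0.297, 0.402, 0.379, 0.423, 0.379, 0.324,
                 0.296, 0.740, 0.418, 0.412, 0.494, 0.416, 0.338, 0.392, 0.484, 0.265])

def neg_log_lik(theta, x):
    """Negative GDUSWD log-likelihood; returns +inf outside the parameter space."""
    alpha, lam, k = theta
    if alpha <= 0 or lam <= 0 or k <= 0:
        return np.inf
    t = (x / lam) ** k
    G = 1.0 - np.exp(-t)
    ll = (len(x) * (np.log(alpha * k) - k * np.log(lam) - np.log(np.e - 1.0))
          + (k - 1) * np.sum(np.log(x)) - np.sum(t)
          + (alpha - 1) * np.sum(np.log(G)) + np.sum(G ** alpha))
    return -ll

res = minimize(neg_log_lik, x0=np.array([1.0, 0.5, 2.0]), args=(data,), method="Nelder-Mead")
logL, n, p = -res.fun, len(data), 3
print("MLEs (alpha, lambda, k):", res.x)
print("AIC:", 2 * p - 2 * logL, " BIC:", p * np.log(n) - 2 * logL)
```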

5 Application

In this section, we have checked the flexibility of the proposed distribution


and compared it with some well-known distributions namely, Inv. Lindley, Inv.
Exponential, Gn. Inv. Lindley, Inv. Weibull, Inv. Gamma, Inv. Gaussian and Gn. Inv.
Exponential, see Vikas et al. [9]. For the purpose of comparison, we have considered
a set of real data of flood levels [1],
0.654, 0.613, 0.315, 0.449, 0.297, 0.402, 0.379, 0.423, 0.379, 0.324,
0.296, 0.740, 0.418, 0.412, 0.494, 0.416, 0.338, 0.392, 0.484, 0.265.
Many authors have used this data for checking the flexibility of their proposed
distributions. We have used AIC (Akaike information criterion) and BIC (Bayesian

Table 1 AIC, BIC and ranks [1 (best) to 8 (worst)] of the fitted models

Model                          AIC         BIC         Rank
Inv. Lindley (θ)               0.8291      0.1302      7
Inv. Exponential (λ)           7.4806      6.7817      8
Gn. Inv. Lindley (α, θ)        −28.2950    −29.6930    2
Inv. Weibull (α, λ)            −28.1947    −29.5927    4
Inv. Gamma (α, β)              −28.2833    −29.6812    3
Inv. Gaussian (μ, λ)           −27.7040    −29.1020    5
Gn. Inv. Exponential (α, λ)    −26.8224    −28.2203    6
GDUSWD (α, λ, k)               −482.4123   −482.4209   1

information criterion) for comparing our model with other models. AIC and BIC
are defined as,

AIC = 2k − 2 log L,

and

BIC = k log(n) − 2 log L,

where n is the sample size, k is the number of parameters and L is the maximum
value of the likelihood function for the considered distribution. A smaller value
of AIC and BIC indicates a better fit.
From Table 1, it can be seen that the proposed distribution gives the lowest AIC
and BIC values. So we can conclude that GDUSWD provides the best fit for the
data set compared to the other distributions given in this study.

6 Conclusion

In the present study, we have introduced a new lifetime distribution exhibiting


increasing, decreasing and inverse bathtub failure rates. We then derived its
moments, quantile function, order statistic and Rényi entropy. We have considered
a real dataset and compared our proposed distribution with some other well-known
distributions. It is seen that among the distributions considered, the one proposed
here fits best with the data. Thus, we can say that the GDUSWD is more flexible
compared to the distributions mentioned in this study.

References

1. Dumonceaux, R., Antle, C.: Discrimination between the lognormal and the Weibull distribu-
tions. Technometrics 15, 923–926 (1973)
2. Efron, B.: Logistic regression, survival analysis, and the Kaplan-Meier curve. J. Am. Stat.
Assoc. 83, 414–425 (1988)
3. Glaser, R.E.: Bathtub and related failure rate characterizations. J. Am. Stat. Assoc. 75, 667–672
(1980)
4. Gupta, R.C., Gupta, P.L., Gupta, R.D.: Modeling failure time data by Lehmann alternatives.
Commun. Stat. Theory Methods 27, 887–904 (1998)
5. Kumar, D., Singh, U., Singh, S.K.: A method of proposing new distribution and its application
to bladder cancer patient data. J. Stat. Appl. Probab. Lett. 2, 235–245 (2015)
6. Maurya, S.K., Kaushik, A., Singh, S.K., Singh, U.: A new class of distribution having
decreasing, increasing, and bathtub-shaped failure rate. Commun. Stat. Theory Methods
46(20), 10359–10372 (2017)
7. Mudholkar, G., Srivastava, D.: Exponentiated Weibull family for analyzing bathtub failure-rate
data. IEEE Trans. Reliab. 42, 299–302 (1993)
8. Renyi, A.: On measures of entropy and information. In: Proceedings of the 4th Berkeley
Symposium on Mathematical Statistics and Probability, vol. 1, pp. 547–561. University of
California Press, Berkeley (1961)
9. Sharma, V.K., Singh, S.K., Singh, U., Merovci, F.: The generalized inverse Lindley distribu-
tion: a new inverse statistical model for the study of upside-down bathtub data. Commun. Stat.
Theory Methods 45(19), 5709–5729 (2016)
10. Xie, M., Lai, C.: Reliability analysis using an additive Weibull model with bathtub shaped
failure rate function. Reliab. Eng. Syst. Saf. 52, 87–93 (1996)
11. Xie, M., Goh, T., Tang, Y.: A modified Weibull extension with bathtub-shaped failure rate
function. Reliab. Eng. Syst. Saf. 76, 279–285 (2002)
Analysis of Inventory Control Model for
Items Having General Deterioration Rate

V. P. Praveen and M. Manoharan

Abstract Deterministic inventory control models for stochastic deteriorating items


have been extensively studied in the past. However, there is not much work reported
to model situations where different phases of deterioration rate are prevalent. In
this paper, we develop a deterministic inventory control model with stochastic
deterioration incorporated through additive Weibull distribution. In this study, an
elegant approach is proposed to consider a time-dependent demand in the planning
process and we consider that the holding cost totally depends on time and shortages
are allowed for this model. The objective is to minimize the total inventory cost of
the proposed model. Finally, the formulated model is illustrated through numerical
examples to determine the effectiveness of the proposed model.

Keywords Inventory control · Stochastic deterioration · Additive Weibull


distribution · Time-dependent demand · Shortage · Optimization

1 Introduction

It is reasonable to note that a product may be understood to have a lifetime
which ends when its utility reaches zero. A look through the inventory models with
deteriorating items extensively studied by researchers in the past shows that the
deterioration rate is either considered constant or treated as some real-valued function.
However, in real life situations, combinations of various factors would form the
basis for modeling the inventory problem for deteriorating items. Deterioration is
defined as decay, damage, or spoilage such that items cannot be used for intended
purpose.
Research in this direction began with the work of Whitin [23] who considered
fashion goods deteriorating at the end of a prescribed storage period. Derman and

V. P. Praveen () · M. Manoharan


Department of Statistics, University of Calicut, Malappuram, Kerala, India


Klein [3] studied the problem of choosing the order in which items are issued from a stockpile of materials whose utility changes with time. Ghare and Schrader [5] studied
the inventory model in which the deterioration is proportional to the inventory at
the beginning of the time period. The economic order quantity under the condition
of constant demand and exponential decay was derived. Covert and Phillip [1]
and Phillip [18] developed an EOQ formula where the rate of deterioration is
treated as random variable with time of deterioration following Weibull distribution.
Donaldson [4] gave the fundamental result in the development of economic order
quantity models with time-varying demand patterns. He established the classical
no shortage inventory model with a linear trend in demand over a known and
finite horizon. Dave and Patel [2] studied a deteriorating inventory model where
the demand rate is a linear increasing function of time and with an assumption that
shortages are not allowed.
According to Nahmias [14], certain types of inventories undergo change in storage so that in time they may become partially or entirely unfit for consumption. Hence deterioration of stock takes place over time, and the quantity that deteriorates can be modeled as a linear function of time and quantity. Mandan and Phaujdhari [12] developed a single-item deterministic order model for deteriorating items with a uniform rate of production and a stock-dependent consumption rate.
Goswami and Choudhuri [6] developed an inventory replenishment policy for a deteriorating item with a deterministic demand pattern having a linear trend, allowing shortages. Raafat [19] reviewed inventory models with deteriorating items. Hariga [8]
has studied optimum inventory lot sizing model for deterioration items with general
continuous time-varying demand over a finite planning horizon. Krishnamoorthy and Varghese [10] considered a continuous review inventory model for deteriorating items with shortages.
Goyal and Giri [7] gave recent trends of modeling in deteriorating items inventory.
They classified inventory models on the basis of demand variations and various
other conditions or constraints. Ouyang et al. [15] developed an inventory model
for deteriorating items with exponential declining demand and partial backlogging.
Ajanta Roy [20] developed a deterministic inventory model when the deterioration
rate is time proportional, demand rate is a function of selling price, and holding
cost is time dependent. Mandal [11] gave an EOQ inventory model for Weibull
distributed deteriorating items under ramp type demand and shortages. Hung [9]
gave an inventory model with generalized type demand, deterioration, and back
order rates. Shah et al. [21] integrated time-varying deterioration and holding cost
rates in the inventory model where shortages were not prohibited. Mishra et al.
[13] gave an inventory model for deteriorating items with time-dependent demand
and time-varying holding cost under partial backlogging. Tripathi and Pandey
[22] presented an inventory model for deteriorating items with Weibull distributed
deterioration and time-dependent demand under trade-credit policy. Yadav and
Vats [25] proposed an inventory model with constant holding cost under partial
backlogging and inflation. Pervin et al. [16] presented an inventory model with declining demand for deteriorating items under a trade-credit policy. Pervin et al. [17] proposed an inventory model with shortages under time-dependent demand and time-varying holding cost, including stochastic deterioration.

The main motivation for the new model is that, among the various demand patterns considered in EOQ models, time-dependent demand is a very sensible way to represent a realistic situation. In order to address more realistic circumstances, we consider non-instantaneous deteriorating items and allow shortages to occur. Many inventory models have been presented for a variable deterioration rate, with the deterioration time following an exponential distribution, a Weibull distribution, etc. Most of these works model only one of the phases of a deterioration rate. However, not much work has been reported on situations where different phases of the deterioration rate are prevalent. In this paper, we consider all the phases of the deterioration rate by using the additive Weibull model, based on the simple idea, proposed by Xie and Lai [24], of combining the failure rates of two Weibull distributions. The proposed model is validated with the help of four illustrative numerical examples, through which we discuss its effectiveness. The remaining sections of the paper are developed along these lines.

2 Model Description

2.1 Notations and Assumptions

The following notations and assumptions are made in order to formulate the problem
and EOQ model.
A Ordering cost per order;
p Unit purchasing cost per item;
T Length of cycle time;
δ Backlogging rate, 0 ≤ δ ≤ 1;
s Lost sale cost per unit;
I (t) Inventory level at time t, t ≥ 0;
I0 Maximum inventory level during [0, T ];
I1 (t) Inventory level that changes with time t during the production period;
I2 (t) Inventory level that changes with time t during the non-production period;
TC Total average cost;
D(t) The demand rate D(t) at time t is a linearly increasing function of t;
i.e., D(t) = x + yt, 0 ≤ t ≤ T , where x and y are nonnegative constants;
k Replenishment rate which is always finite;
θ (t) The time to deterioration of an item follows the additive Weibull distribution with density function $(abt^{b-1} + cdt^{d-1})\exp(-at^{b} - ct^{d})$, $t \ge 0$, where $a > 0$ and $c > 0$ are scale parameters and $b > d > 0$ (or $b < d$) are shape parameters. The inventory level changes at a varying rate; hence, to formulate the differential model we use the deterioration rate function of the additive Weibull distribution, $\theta(t) = abt^{b-1} + cdt^{d-1}$, $t \ge 0$, where $0 \le a, c \le 1$ and $b, d > 0$, which gives the fraction of the on-hand inventory that deteriorates per unit time;
Q The stock level reached in the cycle at the end of production period
will be used in non-production period;
p2 Shortage cost per unit time, i.e., shortages are allowed to occur;
h(t) Holding cost per item per time unit is time dependent and is assumed as
h(t) = h + γ t, t ≥ 0, where γ > 0 and h > 0.
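Since the additive Weibull rate combines an increasing and a decreasing Weibull failure rate, it can cover several deterioration phases at once. The short Python sketch below is illustrative only (it is not part of the paper); the parameter values are assumed, loosely following Example 3 of Sect. 3, and it simply evaluates θ(t) and locates its turning point numerically.

```python
import numpy as np

# Illustrative sketch (not from the paper): evaluate the additive Weibull
# deterioration rate theta(t) = a*b*t**(b-1) + c*d*t**(d-1) for assumed
# parameter values and locate where it stops decreasing and starts increasing.
a, b, c, d = 0.5, 1.6, 0.5, 0.7   # assumed values, in the spirit of Example 3

def theta(t):
    """Additive Weibull deterioration (failure) rate."""
    return a * b * t**(b - 1) + c * d * t**(d - 1)

t = np.linspace(0.05, 5.0, 200)
rate = theta(t)
turning_point = t[np.argmin(rate)]   # a bathtub-type shape has an interior minimum
print(f"theta(1) = {theta(1.0):.4f}, minimum of theta on the grid near t = {turning_point:.2f}")
```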

2.2 Methodology

In the proposed inventory model, depending on the above assumptions, the inventory
system can be considered as follows. At the beginning of each inventory cycle with
zero stock level, k units of products arrive at the system. Up to time t1 due to
replenishment, the inventory level meets the demand in the market. Replenishment
is occurring during the production period only and at time t = t1 production stops
and inventory level reaches the level Q which is the stock level that will be used in
the non-production period. During the time interval [t1, T], the inventory level diminishes due to market demand and deterioration of items, and shortages begin to accumulate, which are partially backlogged. The inventory level declines to its lowest position at time t = T. Just after the cycle period the process repeats itself, with k units of products arriving at the system, so replenishment is instantaneous and the lead time is zero. Figure 1 depicts the proposed inventory system.
Let I(t) denote the inventory position at time t, t ≥ 0. The differential equations over the interval [0, T] that describe the instantaneous state I(t) are given by

$$\frac{dI_1(t)}{dt} + \theta(t)\,I_1(t) = k - D(t), \qquad I_1(0) = 0, \quad 0 \le t \le t_1 \tag{1}$$

and

$$\frac{dI_2(t)}{dt} + \theta(t)\,I_2(t) = -D(t), \qquad I_2(T) = 0,\; I_1(t_1) = I_2(t_1) = Q, \quad t_1 \le t \le T \tag{2}$$
The solution of (1) using the boundary condition is

$$I_1(t) = (k - x)\left[t - \frac{ab}{(b+1)}t^{b+1} - \frac{cd}{(d+1)}t^{d+1}\right] - y\left[\frac{t^2}{2} - \frac{ab}{2(b+2)}t^{b+2} - \frac{cd}{2(d+2)}t^{d+2}\right], \quad 0 \le t \le t_1 \tag{3}$$

Fig. 1 Graphical representation of our proposed model

and the solution of (2) using the boundary condition is

$$\begin{aligned}
I_2(t) = {}& x\left[(t_1 - t) + \left(at^{b} + ct^{d}\right)(t - t_1) + \frac{a}{(b+1)}\left(t_1^{b+1} - t^{b+1}\right) + \frac{c}{(d+1)}\left(t_1^{d+1} - t^{d+1}\right)\right] \\
& + y\left[\left(\frac{t_1^2}{2} - \frac{t^2}{2}\right) + \left(\frac{at^{b}}{2} + \frac{ct^{d}}{2}\right)\left(t^2 - t_1^2\right) + \frac{a}{(b+2)}\left(t_1^{b+2} - t^{b+2}\right) + \frac{c}{(d+2)}\left(t_1^{d+2} - t^{d+2}\right)\right], \quad t_1 \le t \le T \tag{4}
\end{aligned}$$

Using the boundary condition $I_2(t_1) = Q$, the equation becomes

$$\begin{aligned}
I_2(t) = {}& Q\left[1 + a\left(t_1^{b} - t^{b}\right) + c\left(t_1^{d} - t^{d}\right)\right] \\
& + x\left[(t_1 - t) + \left(at^{b} + ct^{d}\right)(t - t_1) + \frac{a}{(b+1)}\left(t_1^{b+1} - t^{b+1}\right) + \frac{c}{(d+1)}\left(t_1^{d+1} - t^{d+1}\right)\right] \\
& + y\left[\left(\frac{t_1^2}{2} - \frac{t^2}{2}\right) + \left(\frac{at^{b}}{2} + \frac{ct^{d}}{2}\right)\left(t^2 - t_1^2\right) + \frac{a}{(b+2)}\left(t_1^{b+2} - t^{b+2}\right) + \frac{c}{(d+2)}\left(t_1^{d+2} - t^{d+2}\right)\right], \quad t_1 \le t \le T \tag{5}
\end{aligned}$$

The maximum inventory level during $[0, T]$ is given by

$$\begin{aligned}
I_0 = \int_0^{t_1} I_1(t)\,dt = {}& (k - x)\left[\frac{t_1^2}{2} - \frac{ab}{(b+1)(b+2)}t_1^{b+2} - \frac{cd}{(d+1)(d+2)}t_1^{d+2}\right] \\
& - y\left[\frac{t_1^3}{6} - \frac{ab}{2(b+2)(b+3)}t_1^{b+3} - \frac{cd}{2(d+2)(d+3)}t_1^{d+3}\right] \tag{6}
\end{aligned}$$
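The closed forms (3) and (6) can be cross-checked by integrating Eq. (1) numerically. The following sketch is illustrative only (the paper itself uses Mathematica); the parameter values and helper names are assumptions made for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Independent numerical cross-check of I1(t) and I0 (a sketch, not the paper's code).
k, x, y = 35.0, 20.0, 40.0          # assumed values
a, b, c, d = 0.5, 1.6, 0.5, 0.7
t1 = 1.5

theta = lambda t: a * b * t**(b - 1) + c * d * t**(d - 1)
D = lambda t: x + y * t             # linear demand D(t) = x + y t

# Equation (1): dI1/dt + theta(t) I1 = k - D(t),  I1(0) = 0.
rhs = lambda t, I: k - D(t) - theta(t) * I
sol = solve_ivp(rhs, (1e-6, t1), [0.0], dense_output=True, rtol=1e-8)

I1 = lambda t: float(sol.sol(t)[0])
I0, _ = quad(I1, 1e-6, t1)          # I0 = integral of I1 over [0, t1], cf. (6)
print(f"I1(t1) = {I1(t1):.4f},  I0 = {I0:.4f}")
```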

For the proposed model, a cost structure is imposed and it is analyzed by the criterion of minimization of the total expected cost per unit time. Hence, to obtain the total inventory cost, we calculate the following terms:
– Annual ordering cost,

$$OC = A \tag{7}$$
– Total annual stock holding cost, HC, during the time span $[0, t_1]$ is defined as follows:

$$HC = \int_0^{t_1} h(t)\,I_1(t)\,dt$$

$$\begin{aligned}
HC = {}& h(k - x)\left[\frac{t_1^2}{2} - \frac{ab}{(b+1)(b+2)}t_1^{b+2} - \frac{cd}{(d+1)(d+2)}t_1^{d+2}\right] - hy\left[\frac{t_1^3}{6} - \frac{ab}{2(b+2)(b+3)}t_1^{b+3} - \frac{cd}{2(d+2)(d+3)}t_1^{d+3}\right] \\
& + \gamma(k - x)\left[\frac{t_1^3}{3} - \frac{ab}{(b+1)(b+3)}t_1^{b+3} - \frac{cd}{(d+1)(d+3)}t_1^{d+3}\right] - \gamma y\left[\frac{t_1^4}{8} - \frac{ab}{2(b+2)(b+4)}t_1^{b+4} - \frac{cd}{2(d+2)(d+4)}t_1^{d+4}\right]. \tag{8}
\end{aligned}$$

– Purchase cost, PC, during the time span $[t_1, T]$:

$$PC = p\left[I_0 + \int_{t_1}^{T} \delta D(t)\,dt\right]$$

$$\begin{aligned}
PC = {}& p(k - x)\left[\frac{t_1^2}{2} - \frac{ab}{(b+1)(b+2)}t_1^{b+2} - \frac{cd}{(d+1)(d+2)}t_1^{d+2}\right] - py\left[\frac{t_1^3}{6} - \frac{ab}{2(b+2)(b+3)}t_1^{b+3} - \frac{cd}{2(d+2)(d+3)}t_1^{d+3}\right] \\
& + p\delta x(T - t_1) + \frac{1}{2}p\delta y\left(T^2 - t_1^2\right). \tag{9}
\end{aligned}$$
– Deteriorating cost, DC:

$$DC = p\left[\int_0^{t_1} \bigl(k - (x + yt)\bigr)\,dt\right]$$

$$DC = p\left[(k - x)t_1 - \frac{y\,t_1^2}{2}\right]. \tag{10}$$

– Shortage cost, SC, during the time span $[t_1, T]$ is expressed as:

$$SC = p_2 \int_{t_1}^{T} I_2(t)\,dt$$

$$\begin{aligned}
SC = {}& Q p_2\left[(T - t_1) + a\left(T t_1^{b} - \frac{T^{b+1} + b t_1^{b+1}}{b+1}\right) + c\left(T t_1^{d} - \frac{T^{d+1} + d t_1^{d+1}}{d+1}\right)\right] \\
& + x p_2\left[\frac{2T t_1 - T^2 - t_1^2}{2} + \frac{a}{b+1}\left(T^{b+2} - T^{b+1}t_1 + T t_1^{b+1} - t_1^{b+2}\right) + \frac{c}{d+1}\left(T^{d+2} - T^{d+1}t_1 + T t_1^{d+1} - t_1^{d+2}\right)\right] \\
& + y p_2\left[\frac{3T t_1^2 - T^3 - 2t_1^3}{6} + \frac{a}{2}\left(\frac{T^{b+3} - t_1^{b+3}}{b+3} - \frac{T^{b+1}t_1^2 - t_1^{b+3}}{b+1}\right) + \frac{c}{2}\left(\frac{T^{d+3} - t_1^{d+3}}{d+3} - \frac{T^{d+1}t_1^2 - t_1^{d+3}}{d+1}\right)\right. \\
& \qquad\quad \left. + \frac{a}{b+2}\left(t_1^{b+2}(T - t_1) - \frac{T^{b+3} - t_1^{b+3}}{b+3}\right) + \frac{c}{d+2}\left(t_1^{d+2}(T - t_1) - \frac{T^{d+3} - t_1^{d+3}}{d+3}\right)\right]. \tag{11}
\end{aligned}$$

– Lost sale cost: not all customers are willing to wait for the next lot to arrive during the shortage period $[t_1, T]$, which may cause some loss in profit. Hence the lost sale cost, LSC, is

$$LSC = s \int_{t_1}^{T} (1 - \delta)\,D(t)\,dt$$

$$LSC = s(1 - \delta)\left[x(T - t_1) + y\,\frac{T^2 - t_1^2}{2}\right]. \tag{12}$$

Hence, the total average cost of the system per unit time, denoted by TC, is defined as

$$TC = \frac{1}{T}\left[OC + HC + PC + DC + SC + LSC\right], \tag{13}$$

where the component costs are as given in Eqs. (7)–(12).

2.3 Solution Procedure

The total average cost given by (13) is a highly nonlinear function of $T$ and $t_1$, and our problem is to determine the optimal values of $T$ and $t_1$ that minimize the total average cost TC. The optimum values of $T$ and $t_1$ are obtained by equating to zero the first-order partial derivatives of the total average cost TC with respect to $T$ and $t_1$:

$$\frac{\partial TC}{\partial t_1} = 0 \tag{14}$$

and

$$\frac{\partial TC}{\partial T} = 0 \tag{15}$$

These two partial derivatives yield a minimizer $(T^*, t_1^*)$ provided that the following second-order sufficient conditions are satisfied at that point:

$$\left(\frac{\partial^2 TC}{\partial t_1^2}\right)\left(\frac{\partial^2 TC}{\partial T^2}\right) - \left(\frac{\partial^2 TC}{\partial t_1\,\partial T}\right)^2 > 0 \tag{16}$$

and

$$\frac{\partial^2 TC}{\partial t_1^2} > 0, \qquad \frac{\partial^2 TC}{\partial T^2} > 0 \tag{17}$$

Equation (13) is our objective function, which needs to be minimized. For this, we use classical optimization techniques. Equations (14) and (15) obtained thereafter are highly nonlinear in the variables $T$ and $t_1$. However, once particular values are assigned to the model parameters, the objective function reduces to a function of the two variables $T$ and $t_1$ alone. We have used the mathematical software MATHEMATICA to arrive at the solution of the system under consideration, and the optimal values of the time variables are obtained with its help. With these optimal values, Eq. (13) provides the minimum total average cost per unit time of the system under consideration.
We can also present the solution procedure step by step as given below:
Algorithm
Step 1: Initialize the values of the parameters A, h, p, k, x, y, a, b, c, d, γ, δ, Q, p2, and s.
Step 2: Find t1∗ satisfying (14).
Step 3: Find T ∗ satisfying (15).
Step 4: If such t1∗ and T ∗ are found, check whether t1∗ and T ∗ also satisfy (16) and (17).
Step 5: If every condition is satisfied, calculate T C from (13).
Step 6: The optimal solution is (T ∗ , t1∗ , T C ∗ ), where t1∗ and T ∗ are the associated values of t1 and T , respectively, and T C ∗ is the associated value of T C.
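The algorithm above can be mirrored numerically. The sketch below is an illustration, not the authors' Mathematica code: it assembles TC(t1, T) directly from the defining integrals (7)–(12) by quadrature (so the result need not coincide exactly with the closed-form values reported in Sect. 3) and minimizes it with a derivative-free routine. The helper names and starting point are assumptions; the parameter values follow Example 2 below.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad
from scipy.optimize import minimize

A, h, p, k = 2000.0, 0.8, 20.0, 35.0
x, y = 20.0, 40.0
a, b, c, d = 0.5, 0.8, 0.5, 0.4
gamma, delta, Q, p2, s_cost = 0.95, 0.7, 25.0, 6.0, 8.0

theta = lambda t: a*b*t**(b-1) + c*d*t**(d-1)   # additive Weibull rate
D = lambda t: x + y*t                            # linear demand
hold = lambda t: h + gamma*t                     # time-dependent holding cost

def total_cost(v):
    t1, T = v
    if not (0.05 < t1 < T):
        return 1e9                                # crude feasibility penalty
    # I1 on [0, t1] from (1); I2 on [t1, T] from (2) with I2(t1) = Q, cf. (5)
    I1 = solve_ivp(lambda t, I: k - D(t) - theta(t)*I, (1e-6, t1), [0.0],
                   dense_output=True, rtol=1e-8).sol
    I2 = solve_ivp(lambda t, I: -D(t) - theta(t)*I, (t1, T), [Q],
                   dense_output=True, rtol=1e-8).sol
    OC = A
    HC = quad(lambda t: hold(t)*I1(t)[0], 1e-6, t1)[0]
    I0 = quad(lambda t: I1(t)[0], 1e-6, t1)[0]
    PC = p*(I0 + quad(lambda t: delta*D(t), t1, T)[0])
    DC = p*((k - x)*t1 - y*t1**2/2)
    SC = p2*quad(lambda t: I2(t)[0], t1, T)[0]
    LSC = s_cost*(1 - delta)*(x*(T - t1) + y*(T**2 - t1**2)/2)
    return (OC + HC + PC + DC + SC + LSC)/T

res = minimize(total_cost, x0=[1.5, 6.0], method="Nelder-Mead")
print("t1*, T* =", res.x, " TC* =", res.fun)
```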

3 Numerical Examples

The proposed model is illustrated below by considering examples in which all the associated parameters are taken in proper units. The optimal solution of the inventory system is calculated with the help of the Mathematica software.
Example 1 Consider an inventory system with parameters A = 2000, h = 0.8,
p = 20, k = 35, x = 20, y = 40, a = 0, b = 1.6, c = 0, d = 1.1, γ = 0.95,
δ = 0.7, Q = 25, p2 = 6, s = 8. Then the optimal solution is t1∗ = 1.4321 and
T ∗ = 6.11341 and the minimum total average inventory cost T C ∗ = 1132.41. The

Fig. 2 Total average cost according to various choices of parameters with t1 , T , and T C along the
x-axis, the y-axis, and the z-axis, respectively

graphical representation of the total average cost in Example 1 for the proposed
model is shown in Fig. 2.
Example 2 Consider an inventory system with parameters A = 2000, h = 0.8,
p = 20, k = 35, x = 20, y = 40, a = 0.5, b = 0.8, c = 0.5, d = 0.4, γ = 0.95,
δ = 0.7, Q = 25, p2 = 6, s = 8. Then the optimal solution is t1∗ = 1.6143 and
T ∗ = 6.4638 and the minimum total average inventory cost T C ∗ = 1867.85. The
graphical representation of the total average cost in Example 2 for the proposed
model is shown in Fig. 3.
Example 3 Consider an inventory system with parameters A = 2000, h = 0.8,
p = 20, k = 35, x = 20, y = 40, a = 0.5, b = 1.6, c = 0.5, d = 0.7, γ = 0.95,
δ = 0.7, Q = 25, p2 = 6, s = 8. Then the optimal solution is t1∗ = 1.6472 and
T ∗ = 6.25146 and the minimum total average inventory cost T C ∗ = 5139.46. The

Fig. 3 Total average cost according to various choices of parameters with t1 , T , and T C along the
x-axis, the y-axis, and the z-axis, respectively

graphical representation of the total average cost in Example 3 for the proposed
model is shown in Fig. 4.
Example 4 Consider an inventory system with parameters A = 2000, h = 0.8,
p = 20, k = 35, x = 20, y = 40, a = 0.5, b = 1.6, c = 0.5, d = 1.1, γ = 0.95,
δ = 0.7, Q = 25, p2 = 6, s = 8. Then the optimal solution is t1∗ = 1.93214
and T ∗ = 6.14792 and the minimum total average inventory cost T C ∗ = 5076.92.
The graphical representation of the total average cost in Example 4 for the proposed
model is shown in Fig. 5.
Fig. 4 Total average cost according to various choices of parameters with t1 , T , and T C along the x-axis, the y-axis, and the z-axis, respectively

From the above four numerical examples, if we do not consider deterioration of the item (see Example 1), the minimum total average inventory cost is less than that of the proposed inventory system with additive Weibull distributed deterioration. Example 3 deals with a constant rate of deterioration and has a higher total average inventory cost than the other two examples. This tells us that, when dealing with a general deterioration rate, our proposed model with additive Weibull distributed deterioration is more appropriate.

4 Conclusion

The proposed model incorporates some realistic and practical features, viz. the demand function and holding cost are fully time dependent, the inventory deteriorates at a variable rate over time, and shortages are allowed and backlogged. The new model is introduced with an additive Weibull deterioration rate to model the different phases of the deterioration rate. Four numerical assessments of the theoretical model have been carried out to illustrate the theory. The variations in the system statistics with variations in the system parameters have also been illustrated graphically. The solution obtained also reveals that the model is quite suitable and stable. All these features together make this study unique and practical, and we have examined the total inventory cost in each of these situations.

Fig. 5 Total average cost according to various choices of parameters with t1 , T , and T C along the
x-axis, the y-axis, and the z-axis, respectively

Acknowledgement The authors wish to thank the anonymous referees for their constructive
comments and suggestions on the earlier version of the paper.

References

1. Covert, R.P., Phillip, G.C.: An EOQ model for items with Weibull distribution deterioration. AIIE Trans. 323–326 (1973)
2. Dave, U., Patel, L.K.: (T , Sj ) policy inventory model for deteriorating items with time
proportional demand. J. Oper. Res. Soc. 32, 137–142 (1981)
3. Derman, C., Klein, M.: Inventory depletion. Manage. Sci. 4, 450–456 (1958)
4. Donaldson, W.A.: Inventory replenishment policy for a linear trend in demand-an analytical
solution. Oper. Res. Q. 28(3), 663–670 (1977)
5. Ghare, P.M., Schrader, G.P.: A model for an exponentially decaying inventory. J. Ind. Eng. 14(5),
238–243 (1963)
6. Goswami, A., Choudhuri, K.S.: An EOQ model for deteriorating items with shortages and a
linear trend demand. J. Oper. Res. Soc. 42, 1105–1110 (1991)
7. Goyal, S.K., Giri, B.C.: Recent trends in modeling of deteriorating inventory. Eur. J. Oper. Res.
134, 1–16 (2001)

8. Hariga, M.A, Benkherouf, L.: Optimal and heuristic inventory replenishment models for
deteriorating items with exponential time-varying demand. Eur. J. Oper. Res. 79, 123–137
(1994)
9. Hung, K.C.: An inventory model with generalized type demand, deterioration and backorder
rates. Eur. J. Oper. Res. 208(3), 239–242 (2011)
10. Krishna Moorthy, A., Varghese, T.V.: Inventory with disaster. Optimization 35, 83–93 (1995)
11. Mandal, B.: An EOQ inventory model for Weibull distributed deteriorating items under ramp
type demand and shortages. OPSEARCH 47(2), 158–165 (2010)
12. Mandan, B.N, Phaujdhari, S.: An inventory model for deteriorating items and stock depended
consumption rate. J. O. R. Soc. 40, 483–488 (1989)
13. Mishra, V.K., Singh, L.S., Kumar, R.: An inventory model for deteriorating items with time
dependent demand and time-varying holding cost under partial backlogging. J. Ind. Eng. Int. 9,
(2013). Article number: 4
14. Nahmias, S.: Perishable inventory theory- a review. Oper. Res. 30, 680–708 (1982)
15. Ouyang, L.Y., Wu, K.S., Cheng, M.C.: An inventory model for deteriorating items with
exponential declining demand and partial backlogging. Yugoslav J. Oper. Res. 15, 277–288
(2005)
16. Pervin, M., Mahata, G.C., Roy, S.K.: An inventory model with demand declining market for
deteriorating items under trade credit policy. Int. J. Manage. Sci. Eng. Manage. 11, 243–251
(2015)
17. Pervin, M., Weber, G.W., Roy, S.K.: Analysis of inventory control model with shortage under
time-dependent demand and time-varying holding cost including stochastic deterioration. Ann.
Oper. Res. 260 437–460 (2016)
18. Phillip, G.C.: A generalized EOQ model for items with Weibull distribution deterioration. AIIE
Trans. 6(2), 159 (1974)
19. Raafat, F., Wolfe, P.M., Eidin, H.K.: An inventory model for deteriorating items. Comput. Ind.
Eng. 20, 89–94 (1991)
20. Roy, A.: An inventory model for deteriorating items with price dependent demand and time-
varying holding cost. Adv. Modell. Optim. 10(1), 25–37 (2008)
21. Shah, N.H., Soni, H.N., Patel, K.A.: Optimizing inventory and marketing policy for noninstan-
taneous deteriorating items with generalized type deterioration and holding cost rates. Omega
41(2), 421–430 (2013)
22. Tripathi, R.P., Pandey, H.S.: An EOQ model for deteriorating item with Weibull time dependent
demand rate under trade credits. Int. J. Inf. Manage. Sci. 24, 329–347 (2013)
23. Whitin, T.M.: The Theory of Inventory Management. Princeton University Press, Princeton
(1957)
24. Xie, M., Lai, C.D.: Reliability analysis using an additive Weibull model with bathtub-shaped failure rate function. Reliab. Eng. Syst. Saf. 52, 87–93 (1996)
25. Yadav, R.K., Vats, A.K.: A deteriorating inventory model for quadratic demand and constant holding cost with partial backlogging and inflation. IOSR J. Math. 10(3), 47–52 (2014)
A Two-Server Queueing System with
Processing of Service Items by a Server

A. Krishnamoorthy and Divya V.

Abstract We consider a two-server (S1 and S2 ) queueing system in which the


customers arrive according to Markovian arrival process. Each customer is to be
provided with a processed item (inventory) at the end of his service. S1 provides
service alone, whereas S2 provides service and also processes the items required
to serve the customers. The maximum number of processed items permitted is L. The processing time follows a phase type distribution. When the inventory level hits
L, S2 starts serving customers if any waiting; else stays idle. S1 is dedicated to
service only. Service is rendered only if there are processed items. Also, when a
customer arrives to the system when both servers are idle, S1 provides him service
and S2 continuously remains idle even if it has completed the processing of L items.
The duration of service time given by both servers follows phase type distributions
of same order, but S1 provides service at a slower rate than S2 . If the inventory
level drops to a predetermined level s due to a service completion by S2 , then
he starts processing items. If the inventory level drops to level s due to a service
completion by S1 , then the customer served by S2 is shifted to S1 to provide him the
residual service; S2 starts processing items. The arrival process is independent of
the inventory processing and service process. The long run behavior of the system
is analyzed under condition for stability. We derive some important distributions

A. Krishnamoorthy: Emeritus Fellow (EMERITUS-2017-18 GEN 10822(SA-II)), University


Grants commission, India.
The author’s “Divya V.” research was supported by the University Grants Commission, Govt. of
India, under Faculty Development Programme (Grant No.F.No.FIP/12th Plan/KLKE008 TF 04) in
Department of Mathematics, Cochin University of Science and Technology, Cochin-22.

A. Krishnamoorthy ()
Department of Mathematics, CMS College, Kottayam, Kerala, India
Divya V.
Department of Mathematics, N.S.S. College, Cherthala, Kerala, India


associated with the model. Numerical investigation of the optimal values of L and s
is provided.

Keywords Two-server queue · Additional item for service · Control policy ·


Level crossing problem

1 Introduction

This paper is concerned with a two-server queueing system in which Server 1 (S1 )
provides service alone, whereas Server 2 (S2 ) provides service and also processes
the item required (we call this “additional item”) to serve the customers. Each
customer requires exactly one additional item for his service. In the absence of this
additional item service cannot be provided. Therefore, S2 keeps processing the item
until it hits a threshold value L. At this epoch he switches to serve customers, if
any waiting. However, when the additional item level reduces to s, S2 returns to
processing items. His service rate is higher than that of S1 ; both servers provide
service according to phase type distributed random variables. Processing of each additional item requires a phase type distributed amount of time, independent of the
arrival and service processes.
This work is an extension of those discussed in Kazimirsky [5], Hanukov et al.
[4], Divya et al. [3], Baek et al. [1], and Dhanya et al. [2] to the two-server case.
In classical queueing theory, it is implicitly assumed that if the server is ready
to serve and customers are available to receive service then the service process
proceeds. Either availability of “additional” items required to provide service is
not taken into consideration/ignored or its abundance is taken for granted. In the
latter case, the holding cost incurred is completely ignored. Sometimes the item(s)
required for service may not be available. In such cases, service cannot be provided
even when the server is readily available and customer(s) are waiting. A typical example in the medical context is an operation theatre: in the absence of an organ for a patient in need of it, surgery cannot be performed. In a vehicle repair shop, a vehicle requiring a
specific part replacement cannot be serviced if spares are not available.
Thus, in several cases, availability of both customers and servers alone cannot
guarantee service. This naturally leads to the investigation of availability of
additional item(s) required to provide service. Then some control problems also
arise—how much of additional item(s) to be held, time required to procure such
items, and so on. This leads to the consideration of holding cost, shortage cost,
and associated revenue loss. Kazimirsky [5] was the first to introduce “additional
items needed for service.” He considered a BMAP/G/1 queue, in which the server
proceeds to produce additional items whenever no customer is found at a departure
epoch. Exactly one processed item is needed for each customer. Service time
distribution of customers depends on whether processed item is available or not.
Thus, there are two distinct service time distributions.

Baek et al. [1] considered MMAP of customers of two types—type I (high


priority) and type II (low priority). Both types of customers require a certain minimum number of additional items to start their service. Type I customers do not have a waiting space. If a type I customer is in service while another type I customer arrives, the latter leaves the system. On the other hand, if a type II customer is in service, it is pushed out of the system by the arriving type I customer, provided the number of additional items available is at least equal to the minimum number required to start the latter's service. Else, the arriving type I customer leaves the system without changing the status of the system.
Type II customers have an infinite capacity waiting space. Additional items arrive
to the system according to MAP. They investigate system stability and analyze its
performance. Dhanya et al. [2] extend the above to retrial queueing setup.
Hanukov et al. [4] analyze a simple queueing system where again additional
items are needed for service of a customer (one item for each customer). The
arrival process is Poisson and service time is exponentially distributed. Divya et
al. [3] considered a single server queue in which customers arrive according to
Markovian arrival process. Whenever customers are not waiting, the server goes for
vacation and produces inventory for future use during this period. The maximum
inventory level permitted is L. The processing time is phase type distributed. The
server returns from vacation when there are N customers in the system. The service
time follows two distinct phase type distributions depending on whether a processed item is available or not at the service commencement epoch.
The rest of the paper is arranged as follows. The model description and
mathematical formulation are given in Sect. 2. Section 3 provides steady-state
analysis of the model. Section 4 contains some level crossing problems. Some
important performance measures are provided in Sect. 5 and a related cost function
is analyzed in Sect. 6. Some numerical experiments to find the optimal values of L
and s are discussed in Sect. 7.
Notations and abbreviations used in the sequel:
– e(a): column vector of 1's of order a.
– e: column vector of 1's of appropriate order.
– Ia : identity matrix of order a.
– ea (b): column vector of order b with 1 in the ath position and the remaining entries zero.
– CTMC: continuous time Markov chain.
– MAP: Markovian arrival process.
– LIQBD: level independent quasi-birth-and-death process.

2 Model Description and Mathematical Formulation

We consider a two-server queueing system in which the customers arrive according


to Markovian arrival process with representation (D0 , D1 ) of order n. Each
customer is to be provided with a processed item at the end of his service. S1 is

always available to the customers provided processed item is available, whereas


S2 produces items for service (inventory) for future use whenever the inventory
level drops to a threshold s. Until the inventory level reaches L (the maximum
permitted level) he does not provide service to customers. The inventory processing
time follows phase type distribution PH(α, T ) of order m1 . After processing L
items, S2 starts serving customers if any waiting; else stays idle. S1 is dedicated
to service only. Servers provide service only if there are processed items. Also,
when a customer arrives to an empty system, S1 provides him service and S2
remains idle even if he is not engaged in processing the inventory. The service time at
S2 follows phase type distribution P H (β, S) of order m2 and that at S1 follows
phase type distribution P H (β, θ S) of order m2 , 0 < θ < 1. If the inventory
level drops to a predetermined level s after a service completion by S2 , then he
starts processing items. If the inventory level drops to a predetermined level s
after a service completion by S1 , then the customer served by S2 is shifted to S1
for the remaining service and S2 goes for processing items. The arrival process is
independent of the inventory processing and service process.

2.1 The QBD Process

The model described in Sect. 2 can be studied as a LIQBD process. First, we


introduce the following notations:
At time t:
N(t): the number of customers in the system,
I (t): the number of processed items in the inventory,

$$J(t) : \text{status of } S_2 = \begin{cases} 0, & \text{when } S_2 \text{ is idle} \\ 1, & \text{when } S_2 \text{ is processing items} \\ 2, & \text{when } S_2 \text{ is serving a customer} \end{cases}$$

$$K_1(t) = \begin{cases} \text{processing/service phase of } S_2 \\ 0, \text{ when } S_2 \text{ is idle} \end{cases} \qquad K_2(t) = \begin{cases} \text{service phase of } S_1 \\ 0, \text{ when } S_1 \text{ is idle} \end{cases}$$

M(t): the phase of arrival of the customer.



It is easy to verify that {(N(t), I (t), J (t), K1 (t), K2 (t), M(t)) : t ≥ 0} is a


LIQBD with state space:
(i) no customer in the system
l(0) = {(0, i, 1, k1 , 0, p) : 0 ≤ i ≤ L − 1, 1 ≤ k1 ≤ m1 , 1 ≤ p ≤ n} ∪
{(0, i, 0, 0, 0, p) : s + 1 ≤ i ≤ L, 1 ≤ p ≤ n}
(ii) when there is 1 customer in the system
l(1) = {(1, 0, 1, k1, 0, p) : 1 ≤ k1 ≤ m1 ; 1 ≤ p ≤ n} ∪ {(1, i, 1, k1 , k2 , p) :
1 ≤ i ≤ L − 1; 1 ≤ k1 ≤ m1 ; 1 ≤ k2 ≤ m2 ; 1 ≤ p ≤ n} ∪ {(1, i, 0, 0, k2 , p) :
s + 1 ≤ i ≤ L; 1 ≤ k2 ≤ m2 ; 1 ≤ p ≤ n} ∪ {(1, i, 2, k1 , 0, p) : s + 1 ≤ i ≤
L − 1; 1 ≤ k1 ≤ m2 ; 1 ≤ p ≤ n}
(iii) when there are h customers in the system, h ≥ 2:
l(h) = {(h, 0, 1, k1 , 0, p) : 1 ≤ k1 ≤ m1 ; 1 ≤ p ≤ n} ∪ {(h, i, 1, k1 , k2 , p) :
1 ≤ i ≤ L − 1; 1 ≤ k1 ≤ m1 ; 1 ≤ k2 ≤ m2 ; 1 ≤ p ≤ n} ∪ {(h, i, 2, k1 , k2 , p) :
s + 1 ≤ i ≤ L; 1 ≤ k1 , k2 ≤ m2 ; 1 ≤ p ≤ n}
The infinitesimal generator of this CTMC is
$$\bar{Q} = \begin{bmatrix}
A_{00} & A_{01} & & & \\
A_{10} & A_{11} & A_{12} & & \\
 & A_{21} & A_{1} & A_{0} & \\
 & & A_{2} & A_{1} & A_{0} \\
 & & & \ddots & \ddots & \ddots
\end{bmatrix}$$

where A00 contains transitions within level 0; A01 , A10 , A11 , A12 , A21 represent
transitions from level 0 to level 1, from level 1 to level 0, within level 1, from level 1
to level 2, from level 2 to level 1, respectively; A0 represents transitions from level
h to level h + 1 for h ≥ 2, A1 represents transitions within the level h for h ≥ 2
and A2 represents transitions from level h to h − 1 for h ≥ 3. The boundary blocks
A00 , A01 , A10 , A11 , A12 , A21 are of orders (s + 1)m1 n + (L − s − 1)(1 + m1 )n + n,
((s + 1)m1n + (L − s − 1)(1 + m1)n + n) × (m1n + sm1 m2 n + (L − s − 1)(2m2 +
m1 m2 )n+m2 n),(m1 n+sm1 m2 n+(L−s−1)(2m2 +m1 m2 )n+m2 n)×((s+1)m1 n+
(L − s − 1)(1 + m1 )n + n), m1 n + sm1 m2 n + (L − s − 1)(2m2 + m1 m2 )n + m2 n,
(m1 n + sm1 m2 n + (L − s − 1)(2m2 + m1 m2 )n + m2n) × (m1 n + sm1 m2 n + (L − s −
1)(m1 m2 n+m2 2 n)+m22 n), (m1 n+sm1 m2 n+(L−s−1)(m1m2 n+m2 2 n)+m22 n)×
(m1 n + sm1 m2 n + (L − s − 1)(2m2 + m1m2 )n + m2 n), respectively. A0 , A1 , A2 are
square matrices of order m1 n + sm1 m2 n + (L − s − 1)(m1 m2 n + m2 2 n) + m2 2 n.
(h ,i ,j ,k ,l )
Define the entries of Apq2(h12,i12,j1 ,k2 1 ,l21 ) as transition submatrices which contains
transitions of the form (p, h1 , i1 , j1 , k1 , l1 ) → (q, h2 , i2 , j2 , k2 , l2 ), where q = 0
or 1, when p = 0; q = 0, 1 or 2, when p = 1 and q = 1, when p = 2.
(h ,i ,j ,k ,l ) (h ,i ,j ,k ,l ) (h ,i ,j ,k ,l )
Define the entries of A0(h2 ,i2 ,j2 ,k 2,l 2) , A1(h2 ,i2 ,j2 ,k 2,l 2) , A2(h2 ,i2 ,j2 ,k 2,l 2) as transition
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
submatrices which contains transitions of the form (g, h1 , i1 , j1 , k1 , l1 ) → (g +
1, h2 , i2 , j2 , k2 , l2 ), where g ≥ 2, (g, h1 , i1 , j1 , k1 , l1 ) → (g, h2 , i2 , j2 , k2 , l2 ),
where g ≥ 2, (g, h1 , i1 , j1 , k1 , l1 ) → (g −1, h2 , i2 , j2 , k2 , l2 ), where g ≥ 3 respec-
tively. Since none or one event alone could take place in a short interval of time

with positive probability, in general, a transition such as (g1 , h1 , i1 , j1 , k1 , l1 ) →


(g2 , h2 , i2 , j2 , k2 , l2 ) has positive rate only for exactly one of g2 , h2 , i2 , j2 , k2 , l2
different from g1 , h1 , i1 , j1 , k1 , l1 .
⎧ 0
⎪ T α ⊗ In
⎪ i2 = i1 + 1, 0 ≤ i1 ≤ L − 2; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ;



⎪ k2 = k2" = 0; 1 ≤ l1 , l2 ≤ n




⎪ T 0 ⊗ In
⎪ i1 = L − 1, i2 = L; j1 = j2 = 1; 1 ≤ k1 ≤ m1 ,


(i2 ,j2 ,k1" ,k2" ,l2 ) k1" = 0; k2 = k2" = 0; 1 ≤ l1 , l2 ≤ n
A00(i =
1 ,j1 ,k1 ,k2 ,l1 ) ⎪
⎪ T ⊕ D0 i1 = i2 , 0 ≤ i1 ≤ L − 1; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ;



⎪ k2 = k2" = 0; 1 ≤ l1 , l2 ≤ n






⎪ D0 i1 = i2 , s + 1 ≤ i1 ≤ L; j1 = j2 = 0; k1 = k1" = 0;

k2 = k2" = 0; 1 ≤ l1 , l2 ≤ n


⎪ Im ⊗ D1 i1 = i2 = 0; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ;
⎪ 1



⎪ k2 = k2" = 0; 1 ≤ l1 , l2 ≤ n



⎨ Im1 ⊗ (β ⊗ D1 ) i1 = i2 , 1 ≤ i1 ≤ L − 1; j1 = j2 = 1;
(i2 ,j2 ,k1" ,k2" ,l2 )
A01 = 1 ≤ k1 , k1" ≤ m1 ; k2 = 0;
(i1 ,j1 ,k1 ,k2 ,l1 ) ⎪


⎪ 1 ≤ k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n





⎪ β ⊗ D1 i1 = i2 , s + 1 ≤ i1 ≤ L; j1 = j2 = 0;

k1 = k1" = 0; k2 = 0; 1 ≤ k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n


⎪ Im1 ⊗ (θS 0 ⊗ In ) i2 = i1 − 1, 1 ≤ i1 ≤ L − 1; j1 = j2 = 1;



⎪ 1 ≤ k1 , k1" ≤ m1 , 1 ≤ k2 ≤ m2 ; k2" = 0;





⎪ 1 ≤ l1 , l2 ≤ n


⎪ θS 0 α ⊗ I
⎪ i1 = s + 1, i2 = s; j1 = 0, j2 = 1; k1 = 0, 1 ≤ k1" ≤ m1 ;

⎪ n



⎪ 1 ≤ k2 ≤ m2 ; k2" = 0; 1 ≤ l1 , l2 ≤ n


⎨ θS 0 ⊗ I i2 = i1 − 1, s + 2 ≤ i1 ≤ L; j1 = j2 = 0; k1 = k1" = 0;
(i2 ,j2 ,k1" ,k2" ,l2 ) n
A =
10(i1 ,j1 ,k1 ,k2 ,l1 ) ⎪
⎪ 1 ≤ k2 ≤ m2 ; k2" = 0; 1 ≤ l1 , l2 ≤ n



⎪ S 0 α ⊗ In

⎪ i1 = s + 1, i2 = s; j1 = 2, j2 = 1; 1 ≤ k1 ≤ m2 ,



⎪ 1 ≤ k1" ≤ m1 ; k2 = k2" = 0; 1 ≤ l1 , l2 ≤ n




⎪ S 0 ⊗ In
⎪ i2 = i1 − 1, s + 2 ≤ i1 ≤ L − 1; j1 = 2, j2 = 0;



⎪ 1 ≤ k1 ≤ m2 , k1" = 0; k2 = k2" = 0;



1 ≤ l1 , l2 ≤ n

⎧ 0

⎪ T (α ⊗ β) ⊗ In i1 = 0, i2 = 1; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ; k2 = 0,



⎪ 1 ≤ k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n


⎪ 0


⎪ T α ⊗ Im2 n 1 ≤ i1 ≤ L − 2, i2 = i1 + 1; j1 = j2 = 1;



⎪ 1 ≤ k1 , k1" ≤ m1 ; 1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n





⎪ T 0 ⊗ Im2 n i1 = L − 1, i2 = L; j1 = 1, j2 = 0; 1 ≤ k1 ≤ m1 , k1" = 0;



⎪ 1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n





⎨ T ⊕ D0 i1 = i2 = 0; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ; k2 = k2" = 0
(i2 ,j2 ,k " ,k " ,l2 )
A11(i ,j ,k1 ,k2 ,l ) = 1 ≤ l1 , l2 ≤ n
1 1 1 2 1 ⎪


⎪ θS ⊕ D0 i1 = i2 , s + 1 ≤ i1 ≤ L; j1 = j2 = 0; k1 = k1" = 0;





⎪ 1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n



⎪ S ⊕ D0 i1 = i2 , s + 1 ≤ i1 ≤ L − 1; j1 = j2 = 2;






⎪ 1 ≤ k1 , k1" ≤ m2 ; k2 = k2" = 0;



⎪ 1 ≤ l1 , l2 ≤ n



⎪ T ⊕ θS ⊕ D0 i1 = i2 , 1 ≤ i1 ≤ L − 1; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ;



1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n


⎪ Im1 ⊗ D1 i1 = i2 = 0; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ;



⎪ k2 = k2" = 0; 1 ≤ l1 , l2 ≤ n





⎪ Im1 m2 ⊗ D1 i1 = i2 , 1 ≤ i1 ≤ L − 1; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ;



⎪ 1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n

(i2 ,j2 ,k1" ,k2" ,l2 )
A12(i = β ⊗ (Im2 ⊗ D1 ) i1 = i2 , s + 1 ≤ i1 ≤ L; j1 = 0, j2 = 2; k1 = 0,
1 ,j1 ,k1 ,k2 ,l1 ) ⎪


⎪ 1 ≤ k1" ≤ m2 ; 1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n





⎪ Im2 ⊗ (β ⊗ D1 ) i1 = i2 , s + 1 ≤ i1 ≤ L − 1; j1 = j2 = 2;



⎪ 1 ≤ k1 , k1" ≤ m2 ; k2 = 0, 1 ≤ k2" ≤ m2 ;



1 ≤ l1 , l2 ≤ n


⎪ Im1 ⊗ (θS 0 ⊗ In ) i1 = 1, i2 = 0; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ,





⎪ 1 ≤ k2 ≤ m2 , k2" = 0; 1 ≤ l1 , l2 ≤ n



⎪ Im1 ⊗ (θS 0 β ⊗ In ) i2 = i1 − 1, 2 ≤ i1 ≤ L − 1; j1 = j2 = 1;






⎪ 1 ≤ k1 , k1" ≤ m1 ; 1 ≤ k2 , k2" ≤ m2 ;



⎪ 1 ≤ l1 , l2 ≤ n




⎨ θS ⊗ Im2 n
⎪ i2 = i1 − 1, s + 2 ≤ i1 ≤ L; j1 = j2 = 2;
0
(i ,j2 ,k1" ,k2 ,l2 )
A212(i = 1 ≤ k1 , k1" ≤ m2 ; 1 ≤ k2 ≤ m2 ; k2" = 0;
1 ,j1 ,k1 ,k2 ,l1 ) ⎪


⎪ 1 ≤ l1 , l2 ≤ n





⎪ S 0 ⊗ I m2 n i2 = i1 − 1, s + 2 ≤ i1 ≤ L; j1 = 2, j2 = 0; 1 ≤ k1 ≤ m2 ,





⎪ k1" = 0; 1 ≤ k2 , k2" ≤ m2 ;



⎪ 1 ≤ l1 , l2 ≤ n





⎪ S 0 α ⊗ I m2 n + B i1 = s + 1, i2 = s; j1 = 2, j2 = 1; 1 ≤ k1 ≤ m2 ;


⎩ 1 ≤ k1" ≤ m1 ; 1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n

where

$$B = \begin{bmatrix} \alpha \otimes B_1 \\ \alpha \otimes B_2 \\ \vdots \\ \alpha \otimes B_{m_2} \end{bmatrix}, \qquad B_i = \begin{bmatrix} 0 & \cdots & \theta S^0 \otimes I_n & \cdots & 0 \end{bmatrix},$$

where $\theta S^0 \otimes I_n$ is in the $i$th position.


⎪ Im1 ⊗ D1
⎪ i1 = i2 = 0; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ; k2 = k2" = 0;



⎪ 1 ≤ l1 , l2 ≤ n


⎨I i1 = i2 , 1 ≤ i1 ≤ L − 1; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ;
(i2 ,j2 ,k1" ,k2" ,l2 ) m1 m2 ⊗ D1
A0(i =
1 ,j1 ,k1 ,k2 ,l1 ) ⎪
⎪ 1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n



⎪ Im2 ⊗ D1 i1 = i2 , s + 1 ≤ i1 ≤ L; j1 = j2 = 2; 1 ≤ k1 , k1" ≤ m2 ;




2
1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n


⎪ T (α ⊗ β) ⊗ In
0 i1 = 0, i2 = 1; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ; k2 = 0,



⎪ 1 ≤ k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n




⎪ T 0 α ⊗ Im2 n
⎪ 1 ≤ i1 ≤ L − 2, i2 = i1 + 1; j1 = j2 = 1;



⎪ 1 ≤ k1 , k1" ≤ m1 ; 1 ≤ k2 , k2" ≤ m2 ;





⎪ 1 ≤ l1 , l2 ≤ n


⎪ T 0β ⊗ I
⎪ i1 = L − 1, i2 = L; j1 = 1, j2 = 2; 1 ≤ k1 ≤ m1 ,

⎪ m2 n



⎨ 1 ≤ k1" ≤ m2 ; 1 ≤ k2 , k2" ≤ m2 ;
(i2 ,j2 ,k1" ,k2" ,l2 )
A1(i = 1 ≤ l1 , l2 ≤ n
1 ,j1 ,k1 ,k2 ,l1 ) ⎪


⎪ T ⊕ D0 i1 = i2 = 0; j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ; k2 = k2" = 0





⎪ 1 ≤ l1 , l2 ≤ n



⎪ T ⊕ θS ⊕ D0 i1 = i2 , 1 ≤ i1 ≤ L − 1; j1 = j2 = 1;





⎪ 1 ≤ k1 , k1" ≤ m1 ; 1 ≤ k2 , k2" ≤ m2 ;



⎪ 1 ≤ l1 , l2 ≤ n





⎪ S ⊕ θS ⊕ D0 i1 = i2 , s + 1 ≤ i1 ≤ L; j1 = j2 = 2; 1 ≤ k1 , k1" ≤ m2 ;


1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n



⎪ Im1 ⊗ (θS 0 ⊗ In ) i1 = 1, i2 = 0; j1 = j2 = 1;



⎪ 1 ≤ k1 , k1" ≤ m1 , 1 ≤ k2 ≤ m2 ,





⎪ k2" = 0; 1 ≤ l1 , l2 ≤ n


⎪ Im1 ⊗ (θS 0 β ⊗ In )
⎪ i2 = i1 − 1, 2 ≤ i1 ≤ L − 1;





⎪ j1 = j2 = 1; 1 ≤ k1 , k1" ≤ m1 ;


(i2 ,j2 ,k1" ,k2" ,l2 ) 1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n
A2(i =
1 ,j1 ,k1 ,k2 ,l1 ) ⎪
⎪ S 0 α ⊗ Im2 n + B i1 = s + 1, i2 = s; j1 = 2, j2 = 1;



⎪ 1 ≤ k1 ≤ m2 ; 1 ≤ k1" ≤ m1 ;






⎪ 1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n

⎪ I ⊗ (θS 0 β ⊗ I ) + S 0 β ⊗ I

⎪ i2 = i1 − 1, s + 2 ≤ i1 ≤ L;
⎪ m1

n m2 n

⎪ j1 = j2 = 2; 1 ≤ k1 , k1" ≤ m2 ;



1 ≤ k2 , k2" ≤ m2 ; 1 ≤ l1 , l2 ≤ n
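Before moving to the steady-state analysis, it is convenient to be able to assemble a finite truncation of Q̄ from the blocks defined above and sanity-check it (for instance, every non-truncated row of a generator must sum to zero). The following Python sketch is a generic illustration under the assumption that the blocks are available as NumPy arrays; it is not part of the paper.

```python
import numpy as np

# Assemble a truncation of the block-tridiagonal generator Q-bar from its blocks.
def build_qbd_generator(A00, A01, A10, A11, A12, A21, A0, A1, A2, levels=6):
    blocks_diag = [A00, A11] + [A1] * (levels - 2)
    blocks_up   = [A01, A12] + [A0] * (levels - 3)
    blocks_low  = [A10, A21] + [A2] * (levels - 3)
    sizes = [blk.shape[0] for blk in blocks_diag]
    offsets = np.concatenate(([0], np.cumsum(sizes)))
    Q = np.zeros((offsets[-1], offsets[-1]))
    for i, blk in enumerate(blocks_diag):
        Q[offsets[i]:offsets[i+1], offsets[i]:offsets[i+1]] = blk
    for i, blk in enumerate(blocks_up):    # transitions one level up
        Q[offsets[i]:offsets[i+1], offsets[i+1]:offsets[i+2]] = blk
    for i, blk in enumerate(blocks_low):   # transitions one level down
        Q[offsets[i+1]:offsets[i+2], offsets[i]:offsets[i+1]] = blk
    return Q

# Rows of all but the last (truncated) level should sum to zero:
# np.allclose(build_qbd_generator(...)[:-A1.shape[0]].sum(axis=1), 0.0)
```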

3 Steady-State Analysis

Let $\pi = (\pi_0, \pi_1, \ldots, \pi_L)$ denote the steady-state probability vector of the generator

$$A = A_0 + A_1 + A_2 = \begin{bmatrix}
F_0 & F_1 & & & & & & & \\
F_2 & F_3 & F_4 & & & & & & \\
 & F_5 & F_3 & F_4 & & & & & \\
 & & \ddots & \ddots & \ddots & & & & \\
 & & & F_5 & F_3 & F_6 & & & \\
 & & & & F_7 & F_8 & F_9 & & \\
 & & & & & F_{10} & F_8 & F_9 & \\
 & & & & & & \ddots & \ddots & \ddots \\
 & & & & & & F_{10} & F_8 & F_{11} \\
 & & & & & & & F_{12} & F_{13}
\end{bmatrix}.$$
Then $\pi$ satisfies

$$\pi A = 0, \qquad \pi e = 1. \tag{1}$$

The LIQBD description of the model indicates that the queueing system is stable (see Neuts [6]) if and only if the left drift exceeds the right drift, that is,

$$\pi A_0 e < \pi A_2 e. \tag{2}$$

The vector $\pi$ cannot be obtained directly in terms of the parameters of the model. From (1), we get

$$\pi_i = \pi_{i-1} U_{i-1}, \quad 1 \le i \le L, \tag{3}$$

where

$$U_i = \begin{cases}
-F_1 (F_3 + U_1 F_5)^{-1} & \text{for } i = 0 \\
-F_4 (F_3 + U_{i+1} F_5)^{-1} & \text{for } 1 \le i \le s - 2 \\
-F_4 (F_3 + U_s F_7)^{-1} & \text{for } i = s - 1 \\
-F_6 (F_8 + U_{s+1} F_{10})^{-1} & \text{for } i = s \\
-F_9 (F_8 + U_{i+1} F_{10})^{-1} & \text{for } s + 1 \le i \le L - 3 \\
-F_9 (F_8 + U_{L-1} F_{12})^{-1} & \text{for } i = L - 2 \\
-F_{11} (F_{13})^{-1} & \text{for } i = L - 1
\end{cases}$$

From the normalizing condition $\pi e = 1$, we have

$$\pi_0\left(\sum_{j=0}^{L-1} \prod_{i=0}^{j} U_i + I\right)e = 1 \tag{4}$$

We get $\pi_0$ by solving (1) and (4). Substituting (3) and (4) in (2) gives the stability condition as

$$\begin{aligned}
\pi_0\Bigg[(I_{m_1} \otimes D_1)e &+ \sum_{j=1}^{s}\prod_{i=0}^{j} U_i\, (I_{m_1 m_2} \otimes D_1)e + \sum_{j=s+1}^{L-1}\prod_{i=0}^{j} U_i\, (I_{m_1 m_2} \otimes D_1)e + \prod_{i=0}^{L-1} U_i\, (I_{m_2^2} \otimes D_1)e\Bigg] \\
< \pi_0\Bigg[\sum_{j=1}^{s}\prod_{i=0}^{j} U_i &\left(e(m_1) \otimes (\theta S^0 \otimes I_n)e\right) + \sum_{j=s+1}^{L-1}\prod_{i=0}^{j} U_i\, A_2' e + \prod_{i=0}^{L-1} U_i \left(e(m_1) \otimes (\theta S^0 \otimes I_n) + (S^0 \otimes I_{m_2 n})\right)e\Bigg] \tag{5}
\end{aligned}$$

where

$$A_2' = \begin{bmatrix} e(m_1) \otimes (\theta S^0 \otimes I_n) \\ e(m_1) \otimes (\theta S^0 \otimes I_n) + (S^0 \otimes I_{m_2 n}) \end{bmatrix}$$
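The drift condition (2)/(5) is straightforward to verify numerically once the repeating blocks A0, A1, A2 are available. A minimal sketch, assuming the blocks are given as NumPy arrays (this is an illustration, not the authors' code):

```python
import numpy as np

def is_stable(A0, A1, A2):
    A = A0 + A1 + A2                      # generator of the repeating-level process
    n = A.shape[0]
    # Solve pi A = 0, pi e = 1 by replacing one balance equation with the normalization.
    M = np.vstack([A.T[:-1], np.ones(n)])
    rhs = np.zeros(n); rhs[-1] = 1.0
    pi = np.linalg.solve(M, rhs)
    e = np.ones(n)
    return pi @ A0 @ e < pi @ A2 @ e      # left drift must exceed right drift
```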

Let x be the steady-state probability vector of Q̄. We partition this vector as

$$x = (x_0, x_1, x_2, \ldots),$$

where x0 is of dimension (s + 1)m1 n + (L − s − 1)(1 + m1 )n + n, x1 is of dimension


m1 n + sm1 m2 n + (L − s − 1)(2m2 + m1 m2 )n + m2 n, x2 , x3 , . . . are of dimension
m1 n+sm1 m2 n+(L−s −1)(m1m2 n+m2 2 n)+m2 2 n . Under the stability condition,
we have

$$x_i = x_2 R^{\,i-2}, \quad i \ge 3,$$

where the matrix $R$ is the minimal nonnegative solution of the matrix quadratic equation

$$R^2 A_2 + R A_1 + A_0 = 0,$$

and the vectors $x_0$, $x_1$, and $x_2$ are obtained by solving the equations

$$x_0 A_{00} + x_1 A_{10} = 0 \tag{6}$$
$$x_0 A_{01} + x_1 A_{11} + x_2 A_{21} = 0 \tag{7}$$
$$x_1 A_{12} + x_2 (A_1 + R A_2) = 0 \tag{8}$$

subject to the normalizing condition

$$x_0 e + x_1 e + x_2 (I - R)^{-1} e = 1. \tag{9}$$
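A minimal sketch of how R can be computed in practice, assuming the repeating blocks are available as NumPy arrays; successive substitution is used here for simplicity, although logarithmic reduction converges faster. This is an illustration, not the authors' implementation.

```python
import numpy as np

# Minimal nonnegative solution R of R^2 A2 + R A1 + A0 = 0 by successive
# substitution: R_{k+1} = -(A0 + R_k^2 A2) A1^{-1}, starting from R_0 = 0.
def solve_R(A0, A1, A2, tol=1e-12, max_iter=100_000):
    A1_inv = np.linalg.inv(A1)
    R = np.zeros_like(A0)
    for _ in range(max_iter):
        R_new = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    raise RuntimeError("R iteration did not converge")

# With R in hand, x0, x1, x2 follow from the boundary equations (6)-(8) together
# with the normalization (9), e.g. by stacking them into one linear system.
```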

4 Level Crossing Problems

4.1 Distribution of Number of Downcrossings from Inventory


Level s to s − 1 Before Hitting s + 1

To find this distribution, first we find the distribution of duration of time till down-
crossing from s to s − 1 occurs before hitting s + 1. This can be studied as the time
until absorption in the CTMC, χ1 = {(N1 (t), N2 (t), I (t), K1 (t), K2 (t), K3 (t))},
where N1 (t) denotes the number of downcrossings from s to s−1, N2 (t), the number
of customers in the system, I (t), the number of processed items, K1 (t), processing
phase of S2 , K2 (t), the service phase of S1 , K3 (t), the phase of the customer arrival
process at time t.
The state space of the process is {(i, 0, k, l1 , 0, p) : i ≥ 0, 0 ≤ k ≤ s, 1 ≤ l1 ≤
m1 , 1 ≤ p ≤ n} ∪ {(i, j, 0, l1 , 0, p) : i ≥ 0, 1 ≤ j ≤ M, 1 ≤ l1 ≤ m1 , 1 ≤ p ≤
n} ∪ {(i, j, k, l1 , l2 , p) : i ≥ 0, 1 ≤ j ≤ M, 1 ≤ k ≤ s, 1 ≤ l1 ≤ m1 , 1 ≤ l2 ≤

m2 , 1 ≤ p ≤ n} ∪ {∗}, where ∗ denotes the absorbing state indicating the hitting of level s + 1. Here, $M(\epsilon)$ is chosen in such a way that $\sum_{h=0}^{M(\epsilon)} x_h e > 1 - \epsilon$ for every $\epsilon > 0$.
The infinitesimal generator of the process is given by

$$U = \begin{bmatrix}
0 & 0 & 0 & 0 & \cdots \\
E^0 & B & C & & \\
E^0 & & B & C & \\
E^0 & & & B & C \\
\vdots & & & \ddots & \ddots
\end{bmatrix}.$$

where
$$B = \begin{bmatrix}
F_1 & G_1 & & & & \\
H_1 & F_2 & G_2 & & & \\
 & H_2 & F_2 & G_2 & & \\
 & & \ddots & \ddots & \ddots & \\
 & & & H_2 & F_2 & G_2 \\
 & & & & & F_3
\end{bmatrix}$$

with
⎡ ⎤
T ⊕ D0 T 0 α ⊗ In
⎢ . ⎥  
⎢ .. ⎥
⎥ , G1 = Im1 ⊗ D1
..
F1 = ⎢

.
⎥ ,
⎣ T ⊕ D0 T 0 α ⊗ In ⎦ Is ⊗ (Im1 ⊗ (β ⊗ D1 ))
T ⊕ D0

0 0
H1 = ,
Is ⊗ (Im1 ⊗ (θ S 0 ⊗ In )) 0
⎡ ⎤
T ⊕ D0 T 0 (α ⊗ β) ⊗ In
⎢ T ⊕ θ S ⊕ D0 T 0 α ⊗ Im2 n ⎥
⎢ ⎥
⎢ .. .. ⎥
F2 = ⎢ . . ⎥,
⎢ ⎥
⎣ T ⊕ θ S ⊕ D0 T 0 α ⊗ Im2 n ⎦
T ⊕ θ S ⊕ D0

⎡ ⎤
0 0
G2 = Im1 +sm1 m2 ⊗D1 , H2 = ⎣ Im1 ⊗ (θ S 0 ⊗ In ) 0 ⎦,
0 Is−1 ⊗ (Im1 ⊗ (θ S β ⊗ In ))
0

⎡ ⎤
T ⊕ D0 − Im1 ⊗ Δ T 0 (α ⊗ β) ⊗ In
⎢ T ⊕ θ S ⊕ D0 − Im1 m2 ⊗ Δ T 0 α ⊗ Im2 n ⎥
⎢ ⎥
⎢ ⎥
⎢ .. .. ⎥
F3 = ⎢ ⎥
⎢ . . ⎥
⎢ ⎥
⎣ T ⊕ θ S ⊕ D0 − Im1 m2 ⊗ Δ T 0 α ⊗ Im2 n ⎦
T ⊕ θ S ⊕ D0 − Im1 m2 ⊗ Δ

with
⎡ ⎤
δ1
⎢ .. ⎥
Δ=⎣ . ⎦.
δn
⎡ ⎤
0 ···0··· 0
⎢ .. .. .. ⎥
⎢ .⎥
C=⎢. . ⎥ , where,
⎣0 ···0··· 0⎦
0 · · · C" · · · 0
⎡ ⎤
0 0
C " = ⎣ Im1 ⊗ (θ S 0 ⊗ In ) 0 ⎦
0 Is−1 ⊗ (Im1 ⊗ (θ S β ⊗ In ))
0

E10
E0 =
e(M) ⊗ E20

with
⎡ ⎤
0
⎢ .. ⎥ 0
E10 = ⎣ .
0
⎦ , E2 =
T 0 ⊗ e (m2 n)
T 0 ⊗ e (n)

Let yk , k = 0, 1, · · · be the probability that the number of downcrossings from


inventory level s to s − 1 is k. Then yk is the probability that the absorption occurs
from the level k for the process χ1 . Hence, yk are given by

$$y_0 = \gamma_1 (-B)^{-1} E^0$$

and for $k = 1, 2, 3, \ldots$

$$y_k = \gamma_1 \left((-B)^{-1} C\right)^k (-B)^{-1} E^0,$$

where

$$\gamma_1 = \frac{1}{d}\left(x_{0,0,1,1,0,1}, \cdots, x_{0,s,1,m_1,0,n}, \cdots, x_{M,0,1,1,0,1}, \cdots, x_{M,s,1,m_1,m_2,n}\right)$$

with

$$d = \sum_{i=0}^{s}\sum_{k_1=1}^{m_1}\sum_{p=1}^{n} x_{0,i,1,k_1,0,p} + \sum_{h=1}^{M}\sum_{i=0}^{s}\sum_{k_1=1}^{m_1}\sum_{k_2=1}^{m_2}\sum_{p=1}^{n} x_{h,i,1,k_1,k_2,p}$$

Thus, we arrive at the lemma.


Lemma 1 The expected number of downcrossings from inventory level s to s − 1
before hitting s + 1 is


$$E(i) = \sum_{k=0}^{\infty} k\, y_k$$
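The series in Lemma 1 can be summed in closed form, which avoids truncating over k. A sketch under the assumption that γ1, B, C, and E0 are available as NumPy arrays (illustrative only):

```python
import numpy as np

# With M = (-B)^{-1} C and v = (-B)^{-1} E0, y_k = gamma1 M^k v, so
#   sum_{k>=0} k y_k = gamma1 M (I - M)^{-2} v
# whenever the spectral radius of M is less than 1.
def expected_downcrossings(gamma1, B, C, E0):
    M = np.linalg.solve(-B, C)            # (-B)^{-1} C
    v = np.linalg.solve(-B, E0)           # (-B)^{-1} E0
    I = np.eye(M.shape[0])
    W = np.linalg.solve(I - M, np.linalg.solve(I - M, v))   # (I - M)^{-2} v
    return float(gamma1 @ (M @ W))
```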

4.2 Distribution of Number of Upcrossings of Inventory Level


from s to s + 1 Before Hitting s − 1

To find this distribution, first we find the distribution of duration of time till
upcrossing from s to s + 1 occurs before hitting s − 1. This again can be studied as
the time until absorption in a CTMC, χ2 = {(N1 (t), N2 (t), I (t), J (t), K1 (t), K2 (t),
K3 (t))}, where N1 (t) denotes the number of upcrossings from s to s + 1, N2 (t),
the number of customers in the system, I (t), number of processed items, J (t), status
of S2 , K1 (t), processing/service phase of S2 , K2 (t), the service phase of S1 , K3 (t),
the arrival phase at time t.
The state space of the process is {(h, 0, j, 1, k1 , 0, l) : h ≥ 0, s ≤ j ≤ L−1, 1 ≤
k1 ≤ m1 , 1 ≤ l ≤ n} ∪ {(h, 0, j, 0, 0, 0, l) : h ≥ 0, s + 1 ≤ j ≤ L, 1 ≤ l ≤
n} ∪ {(h, i, j, 1, k1 , k2 , l) : h ≥ 0, 1 ≤ i ≤ M, s ≤ j ≤ L − 1, 1 ≤ k1 ≤ m1 , 1 ≤
k2 ≤ m2 , 1 ≤ l ≤ n} ∪ {(h, 1, j, 0, 0, k2, l) : s + 1 ≤ j ≤ L, 1 ≤ k2 ≤ m2 , 1 ≤
l ≤ n} ∪ {(h, 1, j, 2, k1 , 0, l) : h ≥ 0, s + 1 ≤ j ≤ L − 1, 1 ≤ k1 ≤ m2 , 1 ≤
l ≤ n} ∪ {(h, i, j, 2, k1 , k2 , l) : h ≥ 0, 2 ≤ i ≤ M, s + 1 ≤ j ≤ L, 1 ≤ k1 , k2 ≤
m2 , 1 ≤ l ≤ n} ∪ {∗}, where ∗ denotes the absorbing state indicating the hitting of level s + 1. Here, $M(\epsilon)$ is chosen in such a way that $\sum_{h=0}^{M(\epsilon)} x_h e > 1 - \epsilon$ for every $\epsilon > 0$.
Let zk , k = 0, 1, · · · be the probability that the number of upcrossings from
inventory level s to s + 1 is k. Then zk is the probability that the absorption occurs
from the level k for the process χ2 .
Proceeding on similar lines as in the proof of Lemma 1, we arrive at the following lemma.

Lemma 2 The expected number of upcrossings from inventory level s to s+1 before
hitting s − 1 is


$$E(i) = \sum_{k=0}^{\infty} k\, z_k$$

5 Performance Measures

In this section, we use the notation $\eta_k$ for the absorption rate from phase $k$ in $PH(\alpha, T)$, $\sigma_k$ for the absorption rate from phase $k$ in $PH(\beta, S)$, $\theta\sigma_k$ for the absorption rate from phase $k$ in $PH(\beta, \theta S)$, and $d^{(1)}_{pp'}$ to denote the $(p, p')$th entry of $D_1$.
1. Expected number of customers in the system, $E_s = \sum_{h=1}^{\infty} h\, x_h e$
2. Expected number of processed items in the inventory,

$$\begin{aligned}
E_{it} = {}& \sum_{i=1}^{L-1}\sum_{k_1=1}^{m_1}\sum_{p=1}^{n} i\, x_{0,i,1,k_1,0,p} + \sum_{i=s+1}^{L}\sum_{p=1}^{n} i\, x_{0,i,0,0,0,p} + \sum_{h=1}^{\infty}\sum_{i=1}^{L-1}\sum_{k_1=1}^{m_1}\sum_{k_2=1}^{m_2}\sum_{p=1}^{n} i\, x_{h,i,1,k_1,k_2,p} \\
& + \sum_{i=s+1}^{L}\sum_{k_2=1}^{m_2}\sum_{p=1}^{n} i\, x_{1,i,0,0,k_2,p} + \sum_{i=s+1}^{L-1}\sum_{k_1=1}^{m_2}\sum_{p=1}^{n} i\, x_{1,i,2,k_1,0,p} + \sum_{h=2}^{\infty}\sum_{i=s+1}^{L}\sum_{k_1=1}^{m_2}\sum_{k_2=1}^{m_2}\sum_{p=1}^{n} i\, x_{h,i,2,k_1,k_2,p}
\end{aligned}$$

3. Expected rate at which the inventory processing is switched on,

$$R_{ipo} = \sum_{k_1=1}^{m_2}\sum_{p=1}^{n} \sigma_{k_1}\, x_{1,s+1,2,k_1,0,p} + \sum_{k_2=1}^{m_2}\sum_{p=1}^{n} \theta\sigma_{k_2}\, x_{1,s+1,0,0,k_2,p} + \sum_{h=2}^{\infty}\sum_{k_1=1}^{m_2}\sum_{k_2=1}^{m_2}\sum_{p=1}^{n} (\theta\sigma_{k_2} + \sigma_{k_1})\, x_{h,s+1,2,k_1,k_2,p} \tag{10}$$

4. Expected rate of switching of S2 to service mode,

$$R_{sn} = \sum_{k_2=1}^{m_2}\sum_{i=s+1}^{L}\sum_{p=1}^{n}\sum_{p'=1}^{n} d^{(1)}_{pp'}\, x_{1,i,0,0,k_2,p} + \sum_{h=2}^{\infty}\sum_{k_1=1}^{m_1}\sum_{k_2=1}^{m_2}\sum_{p=1}^{n} \eta_{k_1}\, x_{h,L-1,1,k_1,k_2,p} \tag{11}$$
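For computing Es in item 1, the tail of the sum need not be truncated: with the matrix-geometric relation x_i = x_2 R^{i-2} for the higher levels, the infinite sum collapses to a finite matrix expression. A sketch, assuming x1, x2, and R are available as NumPy arrays (illustration only, not the authors' code):

```python
import numpy as np

# E_s = x_1 e + x_2 [ 2 (I - R)^{-1} + R (I - R)^{-2} ] e,
# since sum_{h>=2} h R^{h-2} = 2 (I - R)^{-1} + R (I - R)^{-2}.
def mean_customers(x1, x2, R):
    n = R.shape[0]
    I, e = np.eye(n), np.ones(n)
    inv1 = np.linalg.solve(I - R, e)        # (I - R)^{-1} e
    inv2 = np.linalg.solve(I - R, inv1)     # (I - R)^{-2} e
    return float(x1 @ np.ones(x1.shape[0]) + x2 @ (2 * inv1 + R @ inv2))
```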

6 Analysis of a Cost Function

We construct a cost function based on the above performance measures.


Let
c1 : Unit time cost for switching on inventory processing
c2 : Unit time cost for switching of S2 to service mode
h1 : Unit time cost for holding a customer
h2 : Unit time cost for holding an item in inventory
Then the expected cost per unit time,

C = c1 Ripo + c2 Rsn + h1 Es + h2 Eit
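In Sect. 7 the optimal pair (s, L) is found by evaluating C over a range of thresholds. A sketch of such a search is given below; expected_cost is a hypothetical wrapper (not defined in the paper) that is assumed to build the model for given (s, L), solve for the steady state, and return C.

```python
# Plain grid search over the thresholds (illustrative sketch only).
def optimise_thresholds(expected_cost, s_values, L_values):
    best = None
    for L in L_values:
        for s in s_values:
            if s >= L:                     # the reorder threshold must stay below L
                continue
            cost = expected_cost(s, L)     # hypothetical model wrapper
            if best is None or cost < best[2]:
                best = (s, L, cost)
    return best                            # (s*, L*, C*)

# Example usage: optimise_thresholds(expected_cost, range(2, 11), range(8, 21))
```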

7 Numerical Experiments

We find optimal s and optimal L by using the above cost function.


We fix $\alpha = \begin{bmatrix} 0.9 & 0.1 \end{bmatrix}$, $T = \begin{bmatrix} -4 & 4 \\ 0 & -4 \end{bmatrix}$, $\beta = \begin{bmatrix} 0.8 & 0.2 \end{bmatrix}$, $S = \begin{bmatrix} -3 & 3 \\ 0 & -3 \end{bmatrix}$, $\theta = 0.6$, $c_1 = 100$, $c_2 = 5$, $h_1 = 30$, and $h_2 = 1$.

For the customer arrival process, we consider the following five sets of matrices for $D_0$ and $D_1$:
1. Exponential (EXP)

D0 = (−1), D1 = (1)

2. Erlang (ERA)
$$D_0 = \begin{bmatrix} -3 & 3 & 0 \\ 0 & -3 & 3 \\ 0 & 0 & -3 \end{bmatrix}, \qquad D_1 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 3 & 0 & 0 \end{bmatrix}$$

3. Hyperexponential (HEXP)

$$D_0 = \begin{bmatrix} -3.4000 & 0 \\ 0 & -0.8500 \end{bmatrix}, \qquad D_1 = \begin{bmatrix} 0.6800 & 2.7200 \\ 0.1700 & 0.6800 \end{bmatrix}$$

4. MAP with negative correlation (MNA)


$$D_0 = \begin{bmatrix} -0.8101 & 0.8101 & 0 \\ 0 & -1.3497 & 0 \\ 0 & 0 & -40.5065 \end{bmatrix}, \qquad D_1 = \begin{bmatrix} 0 & 0 & 0 \\ 0.0810 & 0 & 1.2687 \\ 38.0761 & 0 & 2.4304 \end{bmatrix}$$

5. MAP with positive correlation (MPA)


$$D_0 = \begin{bmatrix} -0.8101 & 0.8101 & 0 \\ 0 & -1.3497 & 0 \\ 0 & 0 & -40.5065 \end{bmatrix}, \qquad D_1 = \begin{bmatrix} 0 & 0 & 0 \\ 1.2687 & 0 & 0.0810 \\ 2.4304 & 0 & 38.0761 \end{bmatrix}$$

These two MAP processes are normalized so as to have an arrival rate of 1. The
arrival process labeled MNA has correlated arrivals with correlation between two
successive interarrival times given by −0.4211 and the arrival process correspond-
ing to the one labeled MPA has a positive correlation with value 0.4211.
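The quoted rate and lag-1 correlations of the MNA and MPA processes can be recomputed from (D0, D1) with the standard MAP moment formulas. The sketch below is illustrative (not from the paper); it uses the MPA matrices given above.

```python
import numpy as np

def map_rate_and_corr(D0, D1):
    D = D0 + D1
    n = D.shape[0]
    # Stationary vector of the generator D: theta D = 0, theta e = 1.
    M = np.vstack([D.T[:-1], np.ones(n)])
    rhs = np.zeros(n); rhs[-1] = 1.0
    theta = np.linalg.solve(M, rhs)
    e = np.ones(n)
    lam = float(theta @ D1 @ e)             # arrival rate
    phi = (theta @ D1) / lam                # phase vector at arrival epochs
    U = np.linalg.inv(-D0)                  # (-D0)^{-1}
    P = U @ D1                              # phase transition matrix at arrivals
    m1 = float(phi @ U @ e)                 # E[T]
    m2 = float(2.0 * phi @ U @ U @ e)       # E[T^2]
    joint = float(phi @ U @ P @ U @ e)      # E[T_k T_{k+1}]
    return lam, (joint - m1**2) / (m2 - m1**2)

D0 = np.array([[-0.8101, 0.8101, 0.0], [0.0, -1.3497, 0.0], [0.0, 0.0, -40.5065]])
D1 = np.array([[0.0, 0.0, 0.0], [1.2687, 0.0, 0.0810], [2.4304, 0.0, 38.0761]])
print(map_rate_and_corr(D0, D1))   # MPA: rate close to 1, correlation close to +0.42
```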
Tables 1, 2, 3, 4, 5 indicate the effect of the parameter s on various performance
measures and the cost function corresponding to different arrival processes when L
is fixed. In the following, we summarize the observations based on these tables.

Table 1 Effect of s: Fix L = 20 and arrival process as EXP

s 2 3 4 5 6 7 8 9 10
Ripo 0.035 0.037 0.039 0.042 0.045 0.049 0.053 0.058 0.064
Rsn 0.178 0.180 0.181 0.182 0.184 0.186 0.188 0.191 0.194
Es 1.984 1.952 1.923 1.894 1.866 1.837 1.808 1.779 1.750
Eit 10.467 10.976 11.485 11.994 12.500 13.005 13.507 14.005 14.497
C 74.336 74.105 73.990 73.918 73.883 73.894 73.963 74.110 74.340
Bold values represent the optimal value for the cost function

Table 2 Effect of s: Fix L = 20 and arrival process as ERA

s 2 3 4 5 6 7 8 9 10
Ripo 0.034 0.036 0.038 0.041 0.045 0.048 0.053 0.058 0.064
Rsn 0.199 0.201 0.202 0.204 0.206 0.209 0.212 0.215 0.219
Es 1.553 1.527 1.501 1.475 1.448 1.421 1.393 1.365 1.336
Eit 10.487 11.001 11.516 12.031 12.546 13.061 13.576 14.089 14.602
C 61.475 61.420 61.414 61.432 61.475 61.553 61.680 61.872 62.155
Bold values represent the optimal value for the cost function

Table 3 Effect of s: Fix L = 20 and arrival process as HEXP

s 2 3 4 5 6 7 8 9 10
Ripo 0.035 0.037 0.039 0.042 0.045 0.049 0.053 0.058 0.064
Rsn 0.171 0.172 0.173 0.175 0.176 0.178 0.180 0.183 0.186
Es 2.152 2.119 2.090 2.060 2.032 2.003 1.975 1.947 1.920
Eit 10.457 10.963 11.469 11.975 12.478 12.978 13.474 13.966 14.451
C 79.319 79.051 78.923 78.848 78.815 78.831 78.909 79.068 79.333
Bold values represent the optimal value for the cost function

Table 4 Effect of s: Fix L = 20 and arrival process as MPA

s 2 3 4 5 6 7 8 9 10
Ripo 0.033 0.035 0.036 0.040 0.043 0.046 0.050 0.055 0.060
Rsn 0.076 0.078 0.079 0.081 0.083 0.085 0.088 0.092 0.095
Es 16.697 16.645 16.631 16.629 16.630 16.633 16.635 16.637 16.639
Eit 10.644 11.122 11.605 12.090 12.573 13.054 13.533 14.009 14.480
C 515.27 514.38 514.69 515.37 516.19 517.08 518.03 519.04 520.14
Bold values represent the optimal value for the cost function

Table 5 Effect of s: Fix L = 20 and arrival process as MNA

s 2 3 4 5 6 7 8 9 10
Ripo 0.035 0.037 0.040 0.042 0.046 0.049 0.054 0.059 0.065
Rsn 0.208 0.209 0.211 0.212 0.214 0.216 0.219 0.222 0.225
Es 2.100 2.068 2.037 2.008 1.9778 1.949 1.918 1.890 1.858
Eit 10.418 10.918 11.427 11.924 12.430 12.918 13.419 13.892 14.381
C 77.951 77.702 77.546 77.460 77.380 77.383 77.399 77.542 77.729
Bold values represent the optimal value for the cost function

We see that Ripo increases when s increases. This happens because when s
increases, the inventory level reaches s more rapidly from above. Rsn also increases
as s increases. This is due to the fact that when s increases, S2 is switched on to
processing at a faster rate, and hence the inventory level reaches the maximum value L at a faster rate; as a result, S2 is switched on to service mode if customers are waiting. Es decreases as s increases. This happens since when s increases both Ripo and Rsn increase, and as a result customers get service at a faster rate. Eit increases as s increases. This is because when s increases, S2 is switched on to processing mode at a faster rate. The cost function first decreases, reaches a minimum value, and then increases for all arrival processes. The optimal cost varies for different arrival processes (see Fig. 1). It is the highest for MPA. This shows the effect of positive
correlation.
Tables 6, 7, 8, 9, 10 indicate the effect of the parameter L on various performance
measures and the cost function when s is fixed. We summarize the observations
based on these tables below.
Ripo decreases as L increases. This is due to the fact that the level s is attained
at a slower rate. Rsn also decreases as L increases. This happens since L is attained
at a slower rate. Es increases as L increases. This happens since when L increases
both Ripo and Rsn decrease and as a result customers get service at a slower rate.
Eit increases as L increases since more items are processed at a stretch. The cost
function first decreases, reaches a minimum value, and then increases for all arrival processes. The optimal cost varies for different arrival processes (see Fig. 2). It is the
highest for MPA. This shows the effect of positive correlation.

(Five panels, one for each arrival process EXP, ERA, HEXP, MPA, and MNA, each plotting C against s.)

Fig. 1 Effect of s on C when L = 20


Table 6 Effect of L: Fix s = 3 and arrival process as EXP

L 8 9 10 11 12 13 14 15 16 17 18
Ripo 0.131 0.109 0.093 0.081 0.072 0.064 0.058 0.053 0.049 0.045 0.042
Rsn 0.227 0.216 0.208 0.202 0.197 0.194 0.191 0.188 0.186 0.184 0.182
Es 1.617 1.641 1.667 1.695 1.723 1.751 1.780 1.809 1.838 1.867 1.895
Eit 5.090 5.597 6.096 6.591 7.083 7.573 8.062 8.549 9.035 9.521 10.006
C 67.86 66.80 66.43 66.52 66.90 67.48 68.21 69.04 69.96 70.93 71.96
Bold values represent the optimal value for the cost function

Table 7 Effect of L: Fix s = 3 and arrival process as ERA

L 8 9 10 11 12 13 14 15 16 17 18
Ripo 0.134 0.111 0.094 0.081 0.072 0.064 0.058 0.053 0.048 0.045 0.041
Rsn 0.255 0.244 0.235 0.229 0.223 0.219 0.215 0.212 0.209 0.206 0.204
Es 1.187 1.216 1.247 1.277 1.307 1.336 1.365 1.393 1.421 1.448 1.475
Eit 5.208 5.694 6.178 6.660 7.142 7.624 8.106 8.588 9.070 9.552 10.035
C 55.51 54.46 54.12 54.22 54.61 55.19 55.90 56.70 57.57 58.49 59.44
Bold values represent the optimal value for the cost function
Table 8 Effect of L: Fix s = 3 and arrival process as HEXP

L 8 9 10 11 12 13 14 15 16 17 18
Ripo 0.130 0.108 0.092 0.081 0.071 0.064 0.058 0.053 0.049 0.045 0.042
Rsn 0.219 0.208 0.200 0.194 0.189 0.186 0.183 0.180 0.178 0.176 0.175
Es 1.795 1.817 1.842 1.867 1.894 1.921 1.949 1.977 2.005 2.033 2.062
Eit 5.048 5.559 6.063 6.561 7.056 7.549 8.040 8.529 9.017 9.504 9.991
C 73.02 71.93 71.54 71.59 71.94 72.49 73.20 74.01 74.92 75.88 76.90
Bold values represent the optimal value for the cost function

Table 9 Effect of L: Fix s = 3 and arrival process as MPA

L 8 9 10 11 12 13 14 15 16 17 18
Ripo 0.122 0.101 0.087 0.076 0.067 0.060 0.055 0.050 0.046 0.043 0.040
Rsn 0.139 0.124 0.114 0.106 0.100 0.096 0.092 0.088 0.085 0.083 0.081
Es 16.71 16.70 16.69 16.69 16.68 16.68 16.67 16.67 16.66 16.66 16.65
Eit 5.033 5.551 6.063 6.573 7.080 7.587 8.093 8.599 9.104 9.609 10.113
C 519.3 517.3 516.1 515.3 514.7 514.4 514.2 514.1 514.0 514.0 514.1
Bold values represent the optimal value for the cost function
Table 10 Effect of L: Fix s = 3 and arrival process as MNA

L 8 9 10 11 12 13 14 15 16 17 18
Ripo 0.132 0.111 0.094 0.082 0.072 0.065 0.059 0.054 0.049 0.046 0.042
Rsn 0.264 0.250 0.242 0.235 0.230 0.225 0.222 0.219 0.216 0.214 0.212
Es 1.723 1.745 1.776 1.800 1.833 1.860 1.891 1.919 1.950 1.979 2.009
Eit 4.940 5.488 5.979 6.505 6.987 7.500 7.980 8.483 8.964 9.461 9.942
C 71.12 70.14 69.81 69.90 70.32 70.89 71.67 72.50 73.46 74.44 75.51
Bold values represent the optimal value for the cost function

Fig. 2 Effect of L on C when s = 3 (cost C plotted against L for the EXP, ERA, HEXP, MPA and MNA arrival processes)



8 Conclusion

We considered a MAP/(PH,PH)/2 queue with processing of service items by a server. We analyzed the model in steady state by the matrix analytic method and derived some important distributions. We also provided numerical experiments to find the optimal values of L and s.

A Two-Stage Tandem Queue
with Specialist Servers

T. S. Sinu Lal, A. Krishnamoorthy, V. C. Joshua, and Vladimir Vishnevsky

Abstract The queueing system considered consists of two multi-server stations in series. Customers arrive according to a Markovian arrival process to an infinite capacity queue at the first station. There are c servers who provide identical exponentially distributed service at the first station. A customer at the head of the queue can enter service if any one of the servers at the first stage is idle. At the second station there are N identical servers called specialist servers. The service time distribution of the specialist servers is phase type. There is a finite buffer in between the two stations. On completion of service at the first stage, a customer needs service at the second station with probability p or leaves the system with probability 1 − p. In the former case, the customer joins the second station for service if the waiting room is not full; otherwise he is lost to the system. A customer in the finite buffer can enter service if at least one of these servers is free. Stability of the system is established and the stationary distribution is obtained using matrix analytic methods. We compute the distribution of the waiting time of customers in the first queue, the mean number of customers lost due to the capacity restriction of the waiting space of the second station, and the mean waiting time of customers who get into service at the second station. An optimization problem on the capacity of the second waiting station is also analyzed.

Keywords Tandem queue · Specialist server · Matrix analytic method · Phase type distribution · Markovian arrival process

T. S. Sinu Lal · A. Krishnamoorthy · V. C. Joshua ()


Department of Mathematics, CMS College Kottayam, Kottayam, Kerala, India
e-mail: [email protected];[email protected];[email protected]
http://www.cmscollege.ac.in
V. Vishnevsky
Trapeznikov Institute of Control Sciences, Russian Academy of Sciences, Moscow, Russia
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive 335
licence to Springer Nature Singapore Pte Ltd. 2020
V. C. Joshua et al. (eds.), Applied Probability and Stochastic Processes,
Infosys Science Foundation Series, https://doi.org/10.1007/978-981-15-5951-8_20

1 Introduction

Tandem queues receive much attention in the area of mathematical modeling for their wide range of applications to the many queueing situations that arise in common life as well as in various fields of science, industry, and technology. In a tandem queue, an arriving customer is offered multistage service facilities with various strategies. Numerous examples of such models can be found in our everyday life itself. A very visible example is a hospital, where an arriving patient is first examined at the casualty clinic; after the preliminary examination, the patient's illness is sometimes cured, or otherwise he may be sent to the clinic of a specialist for further treatment. The situation perfectly exemplifies a tandem queue of two stations, the first being the casualty clinic and the second the clinic of the specialist doctor. The problem discussed in this paper is motivated by this example. Analogous situations can also be found in manufacturing systems: the production processes of many commodities are completed at service facilities arranged sequentially, and the manufacturing of motor vehicles, aircraft, etc. are good examples. The mathematical models behind all these queueing systems are also applicable in the design and control of various communication networks. A message transmission system used for security checking of transmitted messages illustrates this: messages (data packets) are initially examined at the first stage and are transmitted if no threat is found; otherwise they are passed on to experts for further examination. Likewise, tandem queues are observable in many realms of life.
Phase type distributions were introduced as a generalization of the exponential distribution by M. F. Neuts (1975). The set of all phase type distributions is a dense subset of the set of all distributions defined on the nonnegative real line. This means that any probability distribution on the nonnegative real line can be approximated by a sequence of phase type distributions converging to it, which is the main reason why phase type distributions are widely used in stochastic modeling. The inadequacy of the stationary Poisson process in modeling arrivals to a system is its inability to address correlated arrival flows. This is best overcome by the Markovian arrival process (MAP) introduced by Neuts [19], with further results on the MAP given by Lucantoni [17]. The class of Markovian arrival processes is versatile and includes the PH renewal process, the Markov modulated process, etc. Moreover, any stochastic counting process can be approximated by a sequence of Markovian arrival processes to the desired degree of accuracy. The use of phase type distributions for service times and a Markovian arrival process for customer arrivals increases the complexity of the models, and hence the analysis becomes extremely tedious. The matrix analytic method [20] is the most advantageous tool for computing the stationary distribution of the system process and hence evaluating the performance metrics. Latouche and Ramaswami [16] emphasize the applications of the matrix analytic method to various queueing models.
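As a standard textbook illustration (added here, not taken from this paper), an Erlang distribution of order 2 with rate $\lambda$ is itself of phase type, with representation
$$\alpha = (1, 0), \qquad T = \begin{pmatrix} -\lambda & \lambda \\ 0 & -\lambda \end{pmatrix},$$
i.e. the time to absorption of a chain that traverses two exponential phases in sequence; the exponential distribution is the one-phase special case, and hyperexponential and Coxian distributions arise in a similar way.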
Chakravarthy et al. [4] study a MAP/PH/c queueing system with retrials and search. Gomez-Corral et al. [7, 9] provide a detailed study of the performance evaluation of a two-stage tandem queue. Reference [9] describes a two-stage tandem queue with blocking, which occurs according to a mechanism called blocking after service: there is a finite buffer between the stations, and when its capacity is exceeded the customers are forced to be blocked at the first station if they want to proceed to the next station. A tandem queue with MAP arrivals and blocking is studied in [7]; in that model there is no intermediate buffer between the stations. A MAP/PH/1/1→./PH/1/k+1 queue with blocking and retrial is described in [8]. This model has no waiting space before the first station, which leads to customer loss at the early stage of service. In [10, 11] Kim et al. analyze tandem queueing networks of two stations; the model in [11] has a finite and an infinite intermediate buffer between the stations. A model for the interactive voice response of call centers is investigated in [13]. The paper [12] studies an MMAP/PH/N queueing system with an optimal strategy of control by the number of active servers in a multi-server queue. The model studied in [2] is of the type MAP/PH/c1 → ./PH/c2/c2+k2; it is a loss system due to the capacity restriction of the queue at the first service station. Reference [1] is an excellent survey of queueing networks with finite capacity queues. For a wide range of details on the MAP, [3] is referred to. References [6, 14, 15, 18] study models with finite buffers and use matrix analytic methods to compute the stationary distribution of the system process. A detailed description of the performance analysis of queueing networks can be seen in [21]. For the fundamentals of stochastic processes, [5] is referred to.
The rest of the paper is arranged as follows. In Sect. 2 the mathematical model of the system is described and its analysis is carried out; the stability of the system is also characterized in this section. In Sect. 3 the steady state distribution of the process is obtained. Section 4 gives the waiting time distribution of a customer in the intermediate buffer. Performance characteristics of the system are defined in Sect. 5. A constraint on the system revenue is defined as a cost function in Sect. 6. The model is numerically illustrated in Sect. 7. The study concludes with Sect. 8.

2 Model Description

The queueing system under study consists of two multi-server stations operating in tandem. Customers arrive at the first station according to a Markovian arrival process (MAP). The MAP is directed by an underlying random process $\nu_t$, $t \ge 0$, which is an irreducible continuous time Markov chain on a finite state space $\{1, 2, \ldots, w\}$. The transition intensities of the process $\nu_t$ are defined by the square matrices $D_0$ and $D_1$, each of dimension $w \times w$. The matrix $D_0$ corresponds to chain transitions without an arrival, whereas $D_1$ corresponds to transitions generating the arrival of a customer. The matrix $D = D_0 + D_1$ is the infinitesimal generator of the process $\nu_t$, $t \ge 0$. The stationary distribution $\eta$ of this chain is the unique solution of the system $\eta D = 0$, $\eta e = 1$, where $0$ is a zero row vector and $e$ is the column vector of 1's of appropriate dimension. The fundamental arrival rate $\lambda$ is given by $\lambda = \eta D_1 e$. The squared coefficient of variation of the inter-arrival times is $c_{var}^2 = 2\lambda\eta(-D_0)^{-1}e - 1$, and the correlation coefficient of two successive inter-arrival times is $c_{cor} = \big(\lambda\eta(-D_0)^{-1}D_1(-D_0)^{-1}e - 1\big)/c_{var}^2$. At the first station there are $c$ identical exponential servers arranged in parallel, each having service rate $\mu$, $0 < \mu < \infty$. An arriving customer directly enters one of these servers if at least one of them is idle; otherwise he is required to wait in an infinite queue in front of the first station. For a customer coming out of the first station, a probabilistic decision determines whether he quits the system or proceeds to the second station: a customer proceeds to the second station with probability $p$, or otherwise leaves the system forever with the complementary probability $1 - p$. The second station has $N$ identical servers, each with phase type distributed service time; this phase type distribution has the irreducible representation $(\alpha, S)$.
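To make these arrival descriptors concrete, the following sketch (an illustration added here, not part of the original text) computes $\eta$, $\lambda$, $c_{var}^2$ and $c_{cor}$ for a given pair $(D_0, D_1)$ with NumPy; the matrices used at the bottom are placeholders, and any irreducible MAP of matching dimension could be substituted.

```python
import numpy as np

def map_descriptors(D0, D1):
    """Stationary phase vector, fundamental arrival rate, squared coefficient
    of variation and lag-1 correlation coefficient of a MAP (D0, D1)."""
    D = D0 + D1
    w = D.shape[0]
    e = np.ones(w)
    # eta solves eta D = 0, eta e = 1: replace one balance equation by the
    # normalization condition and solve the resulting linear system.
    A = np.vstack([D.T[:-1, :], np.ones(w)])
    b = np.zeros(w)
    b[-1] = 1.0
    eta = np.linalg.solve(A, b)
    lam = eta @ D1 @ e                      # lambda = eta D1 e
    M = np.linalg.inv(-D0)                  # (-D0)^{-1}
    c_var2 = 2.0 * lam * (eta @ M @ e) - 1.0
    c_cor = (lam * (eta @ M @ D1 @ M @ e) - 1.0) / c_var2
    return eta, lam, c_var2, c_cor

# Placeholder two-phase MAP (rows of D0 + D1 sum to zero).
D0 = np.array([[-3.0, 1.0], [2.0, -4.0]])
D1 = np.array([[1.5, 0.5], [1.0, 1.0]])
print(map_descriptors(D0, D1))
```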
The process of the system is modeled as the irreducible continuous time Markov chain
$$X(t) = \{(q_1(t), q_2(t), \zeta^1(t), \zeta^2(t), \ldots, \zeta^r(t), a(t)),\; t \ge 0\},$$
which is a quasi birth–death (QBD) process. The first component $q_1(t)$ is the number of customers at the first station: those receiving service from the first station together with those in the infinite waiting line constitute $q_1(t)$, and it can take nonnegative integer values. $q_2(t)$ is the number of customers in the second stage of service, that is, those waiting in the finite buffer and those receiving service at the second station; $q_2(t)$ takes values in the set $\{0, 1, 2, \ldots, k+N\}$. For $i = 1, 2, \ldots, r$, $\zeta^i(t)$ is the number of customers in the second station with service phase $i$, so that $0 \le \zeta^1(t) + \zeta^2(t) + \cdots + \zeta^r(t) \le N$, where $r$ is the total number of phases of the service process. This method of representing the phases of service was put forward by Ramaswami and Lucantoni (1985). Taking the components $\zeta^i(t)$ has the great advantage of keeping the dimension of the state space considerably small: if we counted the phase of each of the $N$ servers separately, then for larger values of $N$ the number of states in each level would grow enormously with the number of phases of the service process and the dimension of the MAP, and the computation of performance characteristics becomes extremely tedious when the state space grows indefinitely. $a(t)$ is the phase of the arrival process, which takes values in $\{1, 2, \ldots, w\}$. The state space is $S = S_1 \cup S_2$, where
$$S_1 = \{(i, j, k_1, k_2, \ldots, k_r, l) : i \in \mathbb{Z}^+,\; 0 \le j \le N + k,\; 1 \le k_m \le r,\; 1 \le l \le w\}$$
and $S_2 = \{(i, 0, l) : i \in \mathbb{Z}^+,\; 1 \le l \le w\}$. The states in $S_1$ correspond to nonzero values of $q_2$ and the states of $S_2$ correspond to $q_2 = 0$. The levels in $S$ are determined by the values of $i$, and the number of states in the $i$th level is $l_i = \big(1 + \sum_{i=1}^{N-1} d_i + k d_N\big)w$, where $d_i$ is the number of combinations of nonnegative integers $(i_1, i_2, \ldots, i_r)$ such that $0 \le i_j \le N$ and $\sum_{j=1}^{r} i_j = i$.

The infinitesimal generator of the QBD process $X(t)$ is of the form
$$Q = \begin{pmatrix}
B_{00} & B_0 & & & \\
B_{10} & B_{11} & B_0 & & \\
& B_{20} & B_{21} & B_0 & \\
& & \ddots & \ddots & \ddots \\
& & & B_2 & B_1 & B_0 \\
& & & & \ddots & \ddots & \ddots
\end{pmatrix}.$$
The blocks of the generator matrix are obtained as follows:
$$B_0 = \mathrm{diag}(D_1,\; I_{d_1} \otimes D_1,\; I_{d_2} \otimes D_1,\; \ldots,\; I_{d_{N-1}} \otimes D_1,\; I_{d_N} \otimes D_1,\; \ldots,\; I_{d_N} \otimes D_1),$$
$$B_{i1} = B_{00} - i\mu I_{l_i}, \qquad B_1 = B_{00} - c\mu I_{l_N},$$
$$B_{00} = \begin{pmatrix}
\hat B_{00} & & & & \\
\hat B_{10} & \hat B_{11} & & & \\
& \hat B_{20} & \hat B_{21} & & \\
& & \ddots & \ddots & \\
& & & \hat B_{N0} & \hat B_{N1} \\
& & & & \ddots & \ddots \\
& & & & & \hat B_{N+k,0} & \hat B_{N+k,1}
\end{pmatrix},$$
where each of the sub-blocks (written here as $\hat B_{ij}$ to distinguish them from the level blocks $B_{ij}$) is described below:
$$\hat B_{00} = D_0, \qquad \hat B_{i1} = [M_{lk}] + I_{d_i} \otimes D_0, \; i = 1, 2, \ldots, N, \qquad \hat B_{N+i,1} = \hat B_{N1} \text{ for } i = 1, 2, \ldots, k,$$
where $[M_{lk}]$ is a block matrix in which each $M_{lk}$ is a matrix of order $w$, for $1 \le l \le d_i$, $1 \le k \le d_i$. The blocks $M_{lk}$ are defined below.


$$M_{l,k} = \begin{cases}
0_{w \times w}, & |\zeta^i(t_1) - \zeta^i(t_2)| > 1 \text{ for at least one } i,\\[2pt]
\mathrm{diag}\big(\textstyle\sum_i \zeta^i(t_1) S_{ii}, \ldots, \sum_i \zeta^i(t_1) S_{ii}\big)_w, & \zeta^i(t_1) = \zeta^i(t_2)\ \forall\, i = 1, 2, \ldots, r,\\[2pt]
\mathrm{diag}\big(\zeta^i(t_1) S_{i0}\alpha_j, \ldots, \zeta^i(t_1) S_{i0}\alpha_j\big)_w, & |\zeta^i(t_1) - \zeta^j(t_2)| = 1 \text{ for exactly one pair } (i,j),
\end{cases}$$
where $0_{w \times w}$ is the zero square matrix of order $w$.


$$\hat B_{j0} = [T_{lk}], \quad 1 \le j \le N-1,$$
where $[T_{lk}]$ is a block matrix in which each $T_{lk}$, $1 \le l \le d_i$, $1 \le k \le d_i$, is a square block of order $w$, defined by
$$T_{lk} = \begin{cases}
0_{w \times w}, & |\zeta^i(t_1) - \zeta^i(t_2)| > 1 \text{ for at least one } i,\\[2pt]
\mathrm{diag}\big(\zeta^i(t) S_{i0}, \ldots, \zeta^i(t) S_{i0}\big)_w, & |\zeta^i(t_1) - \zeta^i(t_2)| = 1\ \forall\, i = 1, 2, \ldots, r,
\end{cases}$$
where $0_{w \times w}$ is the zero square matrix of order $w$. For $j \ge N$, $\hat B_{j0}$ is defined by $\hat B_{j0} = [V_{lk}]$, where $[V_{lk}]$ is a block matrix in which each $V_{lk}$, $1 \le l \le d_i$, $1 \le k \le d_i$, is a square block of order $w$, defined by
$$V_{lk} = \begin{cases}
0_{w \times w}, & |\zeta^i(t_1) - \zeta^i(t_2)| > 1 \text{ for at least one } i,\\[2pt]
\mathrm{diag}\big(\zeta^i S_{i0}\alpha_i, \ldots, \zeta^i S_{i0}\alpha_i\big)_w, & |\zeta^i(t_1) - \zeta^i(t_2)| = 1\ \forall\, i = 1, 2, \ldots, r.
\end{cases}$$

For $i = 1, 2, \ldots, c-1$, $B_{i0}$ is defined as follows:
$$B_{i0} = B_{**} - i\mu I_{wd_i}, \qquad B_2 = B_{**} - c\mu I_{wd_i},$$
where
$$B_{**} = \begin{pmatrix}
q\mu I_w & p\mu\,\alpha \otimes I_w & & & & \\
& q\mu I_{wd_i} & p\mu I_{wd_i} & & & \\
& & \ddots & \ddots & & \\
& & & q\mu I_{wd_N} & p\mu I_{wd_N} & \\
& & & & \ddots & \ddots \\
& & & & & q\mu I_{wd_N} & p\mu I_{wd_N} \\
& & & & & & \mu I_{wd_N}
\end{pmatrix}.$$

Theorem 1 The Markov chain described above is stable if and only if
$$\Big(\pi_0 D_1 + \sum_{i=1}^{N-1} \pi_i\, I_{d_i} \otimes D_1 + \sum_{i=N}^{N+k} \pi_i\, I_{d_N} \otimes D_1\Big)e < c\mu.$$

Proof Let $B = B_0 + B_1 + B_2$. Then $B$ takes the form
$$B = \begin{pmatrix}
C_0 & C_0^{\#} & & & \\
C_0^{\$} & C_1 & C_1^{\#} & & \\
& C_1^{\$} & C_2 & C_2^{\#} & \\
& & \ddots & \ddots & \ddots \\
& & & C_N^{\$} & C_N & C_N^{\#} \\
& & & & \ddots & \ddots & C_N^{\#} \\
& & & & & C_N^{\$} & C_N
\end{pmatrix},$$

where the sub-blocks are defined as below:
$$C_0^{\#} = D_0, \quad C_1^{\#} = D_1 + cp\mu\,\alpha \otimes I_a, \quad C_j^{\#} = \hat B_{j1} + jp\mu\,\alpha \otimes I_a \;\text{ for } 2 \le j \le N,$$
$$C_1 = \hat B_{00}, \quad C_j = \hat B_{j1} + cp\mu I_{d_j} \;\text{ for } j < N+k, \quad C_{N+k} = \hat B_{N+k,1} + c\mu I, \quad C_j^{\$} = \hat B_{j0}, \; 0 \le j \le N+k-2.$$
Let $\pi = (\pi_0, \pi_1, \ldots, \pi_{N+k})$ be the steady state probability vector of the generator matrix $B$. The subvectors $\pi_i$ are given by
$$\pi_i = \pi_{N+k} \prod_{j=i}^{N+k-1} H_j,$$
where the $H_j$ are recursively defined by
$$H_0 = -C_0^{\$}\, C_0^{-1}, \qquad H_j = -C_j^{\$}\,\big(H_{j-1} C_{j-1}^{\#} + C_j\big)^{-1} \;\text{ for } j = 1, 2, \ldots, N+k.$$
$\pi_{N+k}$ is found using the normalizing condition
$$\pi_{N+k}\Big(\sum_{i=0}^{N+k-1}\; \prod_{j=i}^{N+k-1} H_j + I\Big)e = 1.$$

The system is stable if and only if $\pi B_0 e < \pi B_2 e$. Substituting all the components, we get
$$\Big(\pi_0 D_1 + \sum_{i=1}^{N-1} \pi_i\, I_{d_i} \otimes D_1 + \sum_{i=N}^{N+k} \pi_i\, I_{d_N} \otimes D_1\Big)e < c\mu.$$

Remark 1 In the vector $\pi = (\pi_0, \pi_1, \ldots, \pi_{N+k})$ defined in Theorem 1, $\pi_0$ is a subvector of length $w$; for $i = 1, 2, \ldots, N-1$ the $\pi_i$ are subvectors of length $wd_i$, and for $j \ge N$ each $\pi_j$ has length $wd_N$.
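The drift criterion used in the proof can be checked numerically. The sketch below (an added illustration, not part of the paper) evaluates the condition $\pi B_0 e < \pi B_2 e$ for a level-independent QBD once the blocks $B_0$, $B_1$, $B_2$ defined above have been assembled; the blocks themselves are assumed to be supplied by the modeler.

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary row vector of a finite CTMC generator Q (solves pi Q = 0, pi e = 1)."""
    n = Q.shape[0]
    A = np.vstack([Q.T[:-1, :], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

def qbd_is_stable(B0, B1, B2):
    """Mean-drift condition pi B0 e < pi B2 e, with pi the stationary
    vector of the generator B = B0 + B1 + B2."""
    pi = stationary_distribution(B0 + B1 + B2)
    e = np.ones(B0.shape[0])
    return (pi @ B0 @ e) < (pi @ B2 @ e)
```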

3 Steady State Probability Vector

Under the stability condition, the steady state probability distribution exists. Let $\xi = (\xi_0, \xi_1, \xi_2, \ldots)$ be the steady state probability vector of the Markov chain $X$. Then $\xi$ is the unique solution of the system of equations $\xi Q = 0$ and $\xi e = 1$.
From $\xi Q = 0$ and $\xi e = 1$, we have the system of equations
$$\xi_0 B_{00} + \xi_1 B_{10} = 0,$$
$$\xi_0 B_0 + \xi_1 B_{11} + \xi_2 B_{20} = 0,$$
$$\vdots$$
$$\xi_{c-1} B_0 + \xi_c B_1 + \xi_{c+1} B_2 = 0,$$
$$\xi_i B_0 + \xi_{i+1} B_1 + \xi_{i+2} B_2 = 0, \quad i \ge c.$$
From matrix analytic methods, $\xi_{c+i} = \xi_c R^i$, $i = 0, 1, 2, \ldots$, where $R$ is the minimal nonnegative solution of the matrix quadratic equation $R^2 B_2 + R B_1 + B_0 = 0$; $R$ can be computed algorithmically using the logarithmic reduction algorithm [16]. For $i = 1, 2, \ldots, c$, $\xi_i = \xi_{i-1} G_i$, where $G_c = -B_0(B_1 + RB_2)^{-1}$ and, for $i = 1, 2, \ldots, c-1$, $G_i = -B_0(B_{i1} + G_{i+1} B_{i+1,0})^{-1}$. Finally we reach $\xi_0(B_{00} + G_1 B_{10}) = 0$, and hence $\xi_0$ is obtained as the steady state distribution of a Markov chain on a finite state space having infinitesimal generator $B_{00} + G_1 B_{10}$. The vector $\xi$ is normalized by dividing each $\xi_i$ by the constant $\sum_{i=0}^{\infty} \xi_i e$.
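The minimal nonnegative solution $R$ of $R^2 B_2 + R B_1 + B_0 = 0$ can also be approximated by the elementary fixed-point iteration sketched below (an added illustration; the logarithmic reduction algorithm of [16] converges much faster but is longer to state). The blocks passed in are assumed to be the $B_0$, $B_1$, $B_2$ constructed above.

```python
import numpy as np

def solve_R(B0, B1, B2, tol=1e-12, max_iter=100_000):
    """Minimal nonnegative solution of R^2 B2 + R B1 + B0 = 0 for a stable QBD,
    computed by the fixed-point iteration R <- -(B0 + R^2 B2) B1^{-1}."""
    n = B0.shape[0]
    B1_inv = np.linalg.inv(B1)
    R = np.zeros((n, n))
    for _ in range(max_iter):
        R_next = -(B0 + R @ R @ B2) @ B1_inv
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    raise RuntimeError("iteration for R did not converge")
```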

4 Waiting Time Distribution

Finding the distribution of the waiting time of a customer who joins the finite buffer as the $m$th one involves constructing a new process
$$W(t) = (\tau(t), \zeta^1(t), \zeta^2(t), \ldots, \zeta^r(t)),$$
which is an irreducible continuous time Markov chain. In the definition of $W(t)$, $\tau(t)$ denotes the tag of the customer under observation, which decreases as he proceeds towards the service center. Each $\zeta^i(t)$ is defined exactly as in the system process. The state space of the process is
$$\{m, m-1, \ldots, 1\} \times \Big\{(i_1, i_2, \ldots, i_r) : 0 \le i_j \le N,\; \sum_{j=1}^{r} i_j = N\Big\} \;\cup\; \{0\}.$$

{0} represents an absorbing state describing the state of the chain when the customer
under consideration is taken into service.

The infinitesimal generator of this process is given by
$$\Omega = \begin{pmatrix} \Omega_1 & \Omega_2 \\ 0_r & 0 \end{pmatrix},$$
where $\Omega_1$ is an $m \times m$ block matrix obtained as
$$\Omega_1 = \begin{pmatrix}
\Delta_1 & \Delta_2 & & & \\
& \Delta_1 & \Delta_2 & & \\
& & \ddots & \ddots & \\
& & & \Delta_1 & \Delta_2 \\
& & & & \Delta_1
\end{pmatrix}, \qquad
\Omega_2 = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ \Delta_2 \end{pmatrix},$$
where each $\Delta_i$, $i = 1, 2$, is a block matrix $\Delta_i = [\Delta_i(l,k)]$; each $\Delta_i(l,k)$ is a square block of order $w$ corresponding to the transitions from the phase configuration $(\zeta^1(t_1), \ldots, \zeta^r(t_1))$ with $\sum_{i=1}^{r} \zeta^i(t_1) = l$ to the configuration $(\zeta^1(t_2), \ldots, \zeta^r(t_2))$ with $\sum_{i=1}^{r} \zeta^i(t_2) = k$, for $1 \le l, k \le d_N$:
$$\Delta_1(l,k) = \begin{cases}
[0]_w, & |\zeta^i(t_1) - \zeta^i(t_2)| > 1 \text{ for at least one } i,\\[2pt]
\mathrm{diag}\big(\textstyle\sum_i \zeta^i(t_1) S_{ii}, \ldots, \sum_i \zeta^i(t_1) S_{ii}\big)_w, & \zeta^i(t_1) = \zeta^i(t_2)\ \forall\, i = 1, 2, \ldots, r,\\[2pt]
\mathrm{diag}\big(\zeta^i(t_1) S_{ij}, \ldots, \zeta^i(t_1) S_{ij}\big)_w, & |\zeta^i(t_1) - \zeta^j(t_2)| = 1 \text{ for exactly one pair } (i,j),
\end{cases}$$
$$\Delta_2(l,k) = \begin{cases}
[0]_a, & |\zeta^i(t_1) - \zeta^i(t_2)| > 1 \text{ for at least one } i,\\[2pt]
\mathrm{diag}\big(\textstyle\sum_i \zeta^i(t_1) S_{i0}\alpha_i, \ldots, \sum_i \zeta^i(t_1) S_{i0}\alpha_i\big)_a, & \zeta^i(t_1) = \zeta^i(t_2)\ \forall\, i = 1, 2, \ldots, r,\\[2pt]
\mathrm{diag}\big(\zeta^i(t_1) S_{i0}\alpha_j, \ldots, \zeta^i(t_1) S_{i0}\alpha_j\big)_a, & |\zeta^i(t_1) - \zeta^j(t_2)| = 1 \text{ for exactly one pair } (i,j).
\end{cases}$$


The waiting time of the tagged customer in the buffer is the time until the Markov chain $W(t)$ enters the absorbing state, and it follows a phase type distribution with irreducible representation $(\gamma, \Omega_1)$. Here $\gamma$ is the initial probability vector whose $m$th component is 1 and all other components are zero, which means that the process always starts from level $m$. The distribution function $F_m$ of the waiting time is given by
$$F_m(t) = 1 - \gamma \exp(\Omega_1 t)\, e,$$
where $\exp(\Omega_1 t)$ denotes the matrix exponential of $\Omega_1 t$. The expected waiting time of a customer who joins as the $m$th customer, $E_w^m$, is given by
$$E_w^m = -\gamma\, \Omega_1^{-1} e.$$

The waiting time of an arbitrary customer in the buffer is
$$E_w = \sum_{i=0}^{\infty} \sum_{m=1}^{k} y_{i,N+m}\, E_w^m, \qquad \text{where } y_{i,N+m} = \xi_{i,N+m}\, e_{d_N w}.$$
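As an added numerical sketch (not part of the paper), the phase-type quantities above are straightforward to evaluate once $\gamma$ and the sub-generator $\Omega_1$ have been built: the distribution function is $1 - \gamma\exp(\Omega_1 t)e$ and the mean is $-\gamma\Omega_1^{-1}e$. The two-phase representation at the bottom is a placeholder, not the $\Omega_1$ constructed in this section.

```python
import numpy as np
from scipy.linalg import expm

def ph_cdf_and_mean(gamma, Omega1, t):
    """CDF value F(t) = 1 - gamma exp(Omega1 t) e and mean -gamma Omega1^{-1} e
    of a phase-type distribution with representation (gamma, Omega1)."""
    e = np.ones(Omega1.shape[0])
    cdf = 1.0 - gamma @ expm(Omega1 * t) @ e
    mean = -gamma @ np.linalg.solve(Omega1, e)
    return cdf, mean

# Placeholder representation.
gamma = np.array([1.0, 0.0])
Omega1 = np.array([[-2.0, 1.5], [0.0, -3.0]])
print(ph_cdf_and_mean(gamma, Omega1, t=1.0))
```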

5 Performance Measures

1. Expected number of customers in the first stage:
$$E_{c1} = \sum_{i=0}^{\infty} i\, \xi_i e_{l_i},$$
where $l_i = \big(1 + \sum_{i=1}^{N-1} d_i + k d_N\big)w$ and $d_i$ is the number of combinations of nonnegative integers $(i_1, i_2, \ldots, i_r)$ such that $0 \le i_j \le N$ and $\sum_{j=1}^{r} i_j = i$.
2. Expected number of customers in the queue:
$$E_{q1} = \sum_{j=c+1}^{\infty} (j-c)\, \xi_j e_{l_j}.$$
3. Expected number of busy servers in the first station: $E_{b1} = E_{c1} - E_{q1}$.
4. Expected number of customers in the finite buffer:
$$E_{BF} = \sum_{j=0}^{\infty} \sum_{i=0}^{N+k} i\, \xi_{ji}\, e.$$

5. Expected number of busy servers in the second station:
$$E_{s2} = \sum_{j=0}^{\infty} \sum_{i=0}^{N-1} i\,(\xi_{ij} e_{d_i w}) + N \sum_{j=0}^{\infty} \sum_{i=N}^{N+k} \xi_{ij} e_{d_N w}.$$
6. Expected number of customers in the system = expected number of customers in stage 1 + expected number of customers in stage 2: $E_{cs} = E_{c1} + E_{s2}$.
7. Average intensity of the flow of customers from the first stage: $A_{vf} = E_{c1}\mu$.
8. Probability that the system is empty: $P_e = \xi_{00}\, e_w$.
9. Expected time spent by a customer in the first queue: $E_{t1} = E_{c1}/A_{vf}$.
10. Expected number of customers leaving the system after the first stage of service:
$$L_{n1} = \sum_{i=1}^{c} i q\mu\, \xi_i e_{(N+k+1)d_i w} + \sum_{i=c+1}^{\infty} c q\mu\, \xi_i e_{(N+k+1)d_i w}.$$
11. Probability that the buffer is full: $P_{bf} = \sum_{i=0}^{\infty} \xi_{i,N+k}\, e_{(N+k+1)d_i w}$.
12. Expected waiting time of a customer in the system:
$$E_w^{r} = \begin{cases} E_{t1}\big((1-p) + p\, P_{bf}\big), & \text{if the customer leaves after the first stage},\\ E_{t1} + p\,(1-P_{bf})\, E_w, & \text{if the customer proceeds to the second station}.\end{cases}$$
13. Expected departure rate from the first station:
$$E_{rate}^{1} = c\mu \sum_{i=c}^{\infty} \xi_i e_{(N+k+1)d_i w} + \mu \sum_{i=0}^{c-1} i\, \xi_i e_{(N+k+1)d_i w}.$$
14. Rate of losing customers after the first stage: $R_L = P_{bf}\, E_{rate}^{1}\, q$.
15. Rate at which customers proceed to the second station from the first: $R_{P2} = E_{rate}^{1}\,(1 - P_{bf})\, p$.
16. Expected departure rate from the second station:
$$E_{rate}^{2} = \sum_{j=1}^{N} \sum_{m=1}^{d_j} \sum_{i=1}^{r} (\xi_i S_{i0}) + \sum_{j=N+1}^{\infty} \sum_{m=1}^{d_N} \sum_{i=1}^{r} (\xi_i S_{i0} \alpha_i).$$

17. Probability that all servers at the first station are busy:
$$P_{busy} = \sum_{j=0}^{\infty} \sum_{i=N}^{N+k} \xi_{ji}\, e.$$
18. Probability that at least one server remains idle at the first station:
$$P_{idle}^{1} = \sum_{j=0}^{c} \sum_{i=N}^{N+k} \xi_{ji}\, e.$$
19. Probability that at least one server in the first station is busy:
$$P_{busy}^{1} = \sum_{j=1}^{c} \xi_{j}\, e_{(N+k+1)d_i w}.$$
20. Probability that at least one server in the second station is busy:
$$P_{busy}^{2} = \sum_{j=0}^{\infty} \sum_{i=1}^{N+k} \xi_{ji}\, e.$$

6 Optimal Control on System Parameters

A revenue function involving the parameters governing the system process is defined as follows:
$$\Phi_c = L_{n1} \cdot C_1 + E_{s2} \cdot C_2,$$
where $C_1$ represents the service cost per unit time for a server in the first station and $C_2$ the cost per unit time for a specialist server. Then $\Phi_c$ is a function of the system parameters. Optimal values of a parameter can be obtained by varying that parameter over a given range while keeping all other parameters constant. The values of $\Phi_c$ are plotted against the values of the parameters; the graphs obtained are convex in nature, and hence the values at which the system revenue reaches its maximum over the given range can be determined. It is to be noted that the values assumed by $\Phi_c$ depend on the specific values assigned to the costs $C_1$ and $C_2$.

7 Numerical Experiments

7.1 Example 1

In this example, we consider a tandem queue of two stations where the servers at both stations have exponentially distributed service times with parameters $\mu_1$ and $\mu_2$, respectively, and customers arrive according to a Poisson process with parameter $\lambda$. We fix $\lambda = 0.5$, $\mu_2 = 3$, $w = 2$, and $N = 2$. The experiment is carried out by varying different parameters.
In Tables 1 and 2 the service rate $\mu_1$ of the initial servers is varied and the corresponding variations in the system performance measures are calculated. From Table 1, the length of the queue in front of the first station decreases with the increase in $\mu_1$, and this is pictorially represented in Fig. 1. Also, the number of customers in the intermediate buffer is directly proportional to $\mu_1$, as represented in Fig. 2. With the increase in $\mu_2$, the mean number of customers in the intermediate buffer decreases, as represented graphically in Fig. 3.

Table 1 Variation in system performance measures with respect to μ1

μ1 Ec1 Eq1 Eb1 EBF Ec2 Ecs Avf


0.1000 36.8151 34.9402 1.8749 0.3125 3.1034 39.9184 0.5625
0.2000 10.7943 9.1091 1.6852 0.6594 5.8427 16.6370 0.5056
0.3000 5.4412 3.9398 1.5014 0.9325 8.2159 13.6570 0.4504
0.4000 3.4959 2.1422 1.3537 1.1479 10.2280 13.7239 0.4061
0.5000 2.5578 1.3226 1.2352 1.3346 12.1234 14.6812 0.3706
0.6000 2.0220 0.8844 1.1375 1.5105 14.0596 16.0816 0.3413
0.7000 1.6792 0.6245 1.0547 1.6834 16.0913 17.7705 0.3164
0.8000 1.4416 0.4587 0.9829 1.8551 18.2108 19.6524 0.2949
0.9000 1.2668 0.3472 0.9196 2.0251 20.3836 21.6504 0.2759
1.0000 1.1324 0.2691 0.8632 2.1918 22.5693 23.7017 0.2590

Table 2 Variation in system performance measures with respect to μ1

μ1 Pe Et1 Ln1 P1busy E1rate Pbf Es2
0.1000 0.0134 122.7169 116.467 0.9257 93.9660 0.0038 0.9798
0.2000 0.0372 35.9809 30.363 0.8086 86.7843 0.0088 0.9293
0.3000 0.0600 18.1372 13.131 0.6944 79.8980 0.0115 0.8740
0.4000 0.0814 11.6531 7.1406 0.6007 74.5461 0.0122 0.8293
0.5000 0.1027 8.5259 4.4086 0.5244 70.3697 0.0116 0.7934
0.6000 0.1251 6.7399 2.948 0.4612 66.9616 0.0105 0.7629
0.7000 0.1486 5.5973 2.0816 0.4079 64.0387 0.0093 0.7357
0.8000 0.1729 4.8052 1.529 0.3624 61.4292 0.0081 0.7103
0.9000 0.1976 4.2226 1.1573 0.3233 59.0349 0.0070 0.6862
1.0000 0.2222 3.7746 0.897 0.2895 56.8015 0.0061 0.6629

Fig. 1 Variation in queue length with service rate of initial servers for different values of c (Eq1 versus μ1; curves for c = 1, 3, 5)

Fig. 2 Variation in number of customers accumulated in intermediate buffer with the increase in p (EBF versus p; curves for c = 3, 5, 7)

Fig. 3 Variation in number of customers accumulated in intermediate buffer with μ2

Figure 4a shows the variation in EBF (along the Z axis) with respect to $\mu_1$ and $p$; in Fig. 4b the variation in EBF (along the Z axis) with $c$ and $\mu_2$ is depicted. Figure 5 shows that the system revenue increases with the increase in buffer size $K$ and reaches its maximum in the given range of values.

Fig. 4 Variation in EBF with respect to the different parameters (surface plots of EBF, panels (a) and (b))

Fig. 5 Variation in cost with respect to variation in buffer size (cost Φc versus K)



7.2 Example 2

$$D_1 = \begin{pmatrix} 0.4 & 0.55 \\ 1.5 & 1.4 \end{pmatrix}, \qquad D_0 = \begin{pmatrix} -2.6 & 1.65 \\ 1.2 & -4.1 \end{pmatrix},$$
$$S = \begin{pmatrix} -0.7 & 0.3 \\ 0.1 & -0.7 \end{pmatrix}, \qquad S^0 = \begin{pmatrix} 0.4 \\ 0.6 \end{pmatrix}, \qquad \alpha = (0.6, 0.4).$$
The fundamental rate of the Markovian arrival process is calculated as $\lambda = 0.6250$. The squared coefficient of variation is $c_{var}^2 = 2\lambda\eta(-D_0)^{-1}e - 1 = 1.5323$, and the correlation coefficient is $c_{cor} = \big(\lambda\eta(-D_0)^{-1}D_1(-D_0)^{-1}e - 1\big)/c_{var}^2 = 0.7548$. In this case a Markovian arrival process with positively correlated inter-arrival times is considered. The system behavior is studied by varying the different parameters, and the values of the corresponding performance characteristics are tabulated in Tables 3 and 4.
In Fig. 6 the expected number of customers accumulated in the buffer is plotted against $p$ and $\mu$. Figure 7a and b show graphically that the number of customers accumulated in the infinite waiting line decreases as the service rate $\mu$ of the initial servers and $p$ increase; this behavior is intuitively expected. In Fig. 8a the probability that all servers in the first station are busy ($P^{*}_{busy}$) is plotted against the service rate $\mu$ of the initial servers; evidently $P^{*}_{busy}$ decreases with the increase in $\mu$. In Fig. 8b the expected number of customers accumulated in the intermediate buffer (EBF) is plotted against $p$, and EBF monotonically increases with increase in $p$.

Table 3 Variation in system characteristics with respect to the increase in p

p Eq1 Ec1 Es2 Ecs Pe Avf Pe Et1 Ln1

0.1 0.0515 0.4734 0.1751 0.1326 0.7811 1.6876 0.0355 0.4855 0.9385
0.2 0.0466 0.4197 0.1908 0.2238 0.8344 1.4925 0.0406 0.4305 0.7377
0.3 0.0426 0.3763 0.2049 0.2984 0.8796 1.3347 0.0440 0.3860 0.5771
0.4 0.0394 0.3403 0.2182 0.3606 0.9191 1.2038 0.0465 0.3490 0.4460
0.5 0.0366 0.3098 0.2311 0.4134 0.9544 1.0930 0.0485 0.3178 0.3373
0.6 0.0342 0.2837 0.2440 0.4589 0.9866 0.9976 0.0502 0.2909 0.2462
0.7 0.0322 0.2609 0.2571 0.4985 1.0165 0.9146 0.0517 0.2675 0.1692
0.8 0.0304 0.2408 0.2707 0.5333 1.0448 0.8414 0.0531 0.2470 0.1038
0.9 0.0289 0.2229 0.2849 0.5641 1.0720 0.7762 0.0545 0.2286 0.0478
1 0.0275 0.2069 0.3001 0.5917 1.0986 0.7176 0.0560 0.2122 0.0000

Table 4 Variation in system characteristics with increase in μ

μ Eq1 Ec1 Es2 Ecs Pe Avf Pe Et1 Ln1


4 0.0119 0.1485 0.2091 0.5653 0.9229 0.8197 0.0581 0.1523 0.1780
6 0.0055 0.0918 0.1964 0.6317 0.9199 0.6905 0.0627 0.0942 0.1395
8 0.0030 0.0625 0.1906 0.6771 0.9303 0.5950 0.0658 0.0641 0.1147
10 0.0018 0.0453 0.1876 0.7101 0.9431 0.5222 0.0680 0.0465 0.0974
12 0.0012 0.0344 0.1859 0.7352 0.9555 0.4650 0.0697 0.0353 0.0847
14 0.0008 0.0270 0.1849 0.7549 0.9667 0.4189 0.0710 0.0277 0.0749
16 0.0006 0.0217 0.1842 0.7707 0.9767 0.3811 0.0720 0.0223 0.0671
18 0.0004 0.0179 0.1838 0.7838 0.9855 0.3495 0.0729 0.0184 0.0608
20 0.0003 0.0150 0.1835 0.7947 0.9933 0.3227 0.0736 0.0154 0.0556
22 0.0003 0.0127 0.1833 0.8040 1.0001 0.2997 0.0742 0.0131 0.0512

Fig. 6 Variation in number of customers accumulated in intermediate buffer with p (surface plot of EBF against p and μ)

8 Conclusion

We analyzed a two-stage tandem queue modeling a hospital, where service is provided at two different stations connected in series: the first is the casualty clinic and the second is the clinic of the specialist doctors. The waiting space at the second station is limited, and the capacity of this waiting space is optimally determined by designing an appropriate cost function. On numerical investigation, the cost function shows a convex nature: it increases with the increase in buffer size, reaches its maximum, and then becomes flat at an optimal point. Variations in different system characteristics with respect to various parameters are also investigated.

Fig. 7 Variation in queue length with respect to the variations in p and μ. (a) μ versus Eq1. (b) p versus Eq1

Fig. 8 Variation in different system characteristics with respect to μ and p. (a) μ versus P*busy. (b) p versus EBF


Acknowledgments Sinu Lal T S thanks Kerala State Council for Science Technology and
Environment (KSCSTE), Kerala, India for KSCSTE Research Fellowship 2015 (No 001/FSHP-
MAIN/2015/KSCSTE).

References

1. Balsamo, S., Person, V.D.N., Inverardi, P.: A review on queueing network models with finite
capacity queues for software architectures performance prediction. Perform. Eval. 51(2–4),
269–288 (2003)
2. Baumann, H., Sandmann, W.: Multi-server tandem queue with Markovian arrival process,
phase-type service times, and finite buffers. Eur. J. Oper. Res. 256(1), 187–195 (2017)
A Two-Stage Tandem Queue with Specialist Servers 353

3. Chakravarthy, S.R.: The batch Markovian arrival process: A review and future work. Adv.
Probab. Theory Stoch. Process. 1, 21–49 (2001)
4. Chakravarthy, S.R., Krishnamoorthy, A., Joshua, V.C.: Analysis of a multi-server retrial queue
with search of customers from the orbit. Perform. Eval. 63(8), 776–798 (2006)
5. Cinlar, E.: Introduction to Stochastic Processes. Prentice-Hall, Englewood Cliffs, NJ (1975)
6. Ferng, H.W., Chang, J.F.: Connection-wise end-to-end performance analysis of queueing
networks with MMPP inputs. Perform. Eval. 43(1), 39–62 (2001)
7. Gomez-Corral, A.: A tandem queue with blocking and Markovian arrival process. Queueing
Systems 41, 343–370 (2002)
8. Gomez-Corral, A.: A matrix-geometric approximation for tandem queues with blocking and
repeated attempts. Oper. Res. Lett. 30(6), 360–374 (2002)
9. Gomez-Corral, A., Martos, M.E.: Performance of two-stage tandem queues with blocking: The
impact of several flows of signals. Perform. Eval. 63(9–10), 910–938 (2006)
10. Kim, C., Dudin, A.N., Dudin, S., Dudina, O.: Tandem queueing system with impatient
customers as a model of call center with interactive voice response. Perform. Eval. 70, 440–453
(2013)
11. Kim, C., Dudin, A., Dudina, O., Dudin, S.: Tandem queueing system with infinite and finite
intermediate buffers and generalized phase-type service time distribution. Eur. J. Oper. Res.
235(1), 170–179 (2014)
12. Kim, C., Dudin, A., Dudin, S., Dudina, O.: Hysteresis control by the number of active servers
in queueing system MMAP/PH/N with priority service. Perform. Eval. 101, 20–33 (2016)
13. Kim, C., Klimenok, V.I., Dudin, A.N.: Priority tandem queueing system with retrials and
reservation of channels as a model of call center. Comput. Ind. Eng. 96, 61–71 (2016)
14. Krishnamoorthy, A., Deepak, T.G., Joshua, V.C.: Queues with postponed work. Top 12(2),
375–398 (2004)
15. Krishnamoorthy, A., Joshua, V.C., Babu, D.: A token based parallel processing queueing sys-
tem with priority. In: International Conference on Distributed Computer and Communication
Networks, pp. 231–239. Springer, Cham (2017, September)
16. Latouche, G., Ramaswami, V.: Introduction to Matrix Analytic Methods in Stochastic Model-
ing, vol. 5. SIAM (1999)
17. Lucantoni, D.M.: New results on the single server queue with a batch Markovian arrival
process. Commun. Stat. Stoch. Models 7(1), 1–46 (1991)
18. Mathew, A.P., Krishnamoorthy, A., Joshua, V.C.: A Retrial queueing system with orbital
search of customers lost from an offer zone. In: Information Technologies and Mathematical
Modelling. Queueing Theory and Applications, pp. 39–54. Springer, Cham (2018)
19. Neuts, M.F.: A versatile Markovian point process. J. Appl. Probab. 16(4), 764–779 (1979)
20. Neuts, M.F.: Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach.
Courier Corporation (1994)
21. Perros, H.G.: A bibliography of papers on queueing networks with finite capacity queues.
Perform. Eval. 10(3), 255–260 (1989)
The MAP/(PH,PH,PH)/1 Model with
Self-Generation of Priorities, Customer
Induced Interruption and Retrial of
Customers

Jomy Punalal and S. Babu

Abstract In this article, we consider a MAP/(PH,PH,PH)/1 model to which customers arrive according to a Markovian arrival process. At the time of arrival, all customers are viewed as ordinary. If the server is busy, an arriving customer enters an orbit of infinite capacity. Each customer in the orbit tries, independently of the others, to access the server at a constant rate. Each customer in the orbit, regardless of the others, generates a priority with inter-occurrence time exponentially distributed with parameter γ. A priority generated customer is immediately taken for service if the server is free; otherwise such a customer is placed in a waiting space A1 of capacity one, which is reserved only for priority generated customers. We consider a customer induced interruption while service is going on; the interruptions occur according to a Poisson process. The interrupted customers enter a buffer B1 of finite capacity K, where they spend a random period for completion of the interruption. The duration of the interruption of customers in B1 follows an exponential distribution. The service facility consists of one server, and the service times of ordinary, priority, and interruption completed customers follow phase-type distributions with appropriate representations. Various performance measures are obtained, and a suitable profit function for determining the optimal buffer size K is also derived.

Keywords Retrial queues; Self-generation of priorities; Customer induced interruption; Markovian arrival process; Level dependent quasi-birth-death process; Matrix analytic method

Jomy Punalal () · S. Babu


Department of Mathematics, University College, Thiruvananthapuram, Kerala, India

© The Editor(s) (if applicable) and The Author(s), under exclusive 355
licence to Springer Nature Singapore Pte Ltd. 2020
V. C. Joshua et al. (eds.), Applied Probability and Stochastic Processes,
Infosys Science Foundation Series, https://doi.org/10.1007/978-981-15-5951-8_21

1 Introduction

There are a large number of probabilistic models on priority queues in the literature (Gross and Harris [17], chapter 3; Jaiswal [9], chapter 7; Takagi [18], chapter 3). Studies on priority queues have found many applications in health care systems (Brahimi [2], Taylor [20]). All these articles treat priority queues with some external priority rules. In many applications, this discipline may not be an accurate modeling approach. Self-generation of priorities of customers in queues was introduced in the literature by Gomez-Corral, Krishnamoorthy, and Viswanath [6]; pioneering works on self-generation of priorities are Krishnamoorthy, S. Babu, and Viswanath [10, 11]. In classical queueing models, servers are always available to serve customers. In many practical queueing systems, however, servers get interrupted due to failure of the servers (see Avi-Itzhak [1], Gaver [5], Neuts [15], Krishnamoorthy [12], Takagi [18, 19], Wang [22]) or get preempted due to the arrivals of high priority customers (see White [23], Jaiswal [8]). There is a survey on queues with interruption by Krishnamoorthy [13]; in that survey, customer induced interruption is discussed in the final section. Varghese et al. [7] discuss customer induced interruption, which is entirely different from service interruption: when one customer is self-interrupted, the server is ready to offer service to the other waiting customers. Retrial queues are a special type of queueing system that accommodates re-service and blocking; retrial queues are extensively investigated in Yang [24], Falin [3], and Falin and Templeton [4].
In this paper, we consider the MAP/(PH,PH,PH)/1 model with self-generation of priorities, customer induced interruption, and retrial of customers. The stability of the system is established, and some system performance measures are derived. These measures are used to define an expected total profit function, and the effect of the different system parameters on the profit function is explained numerically and illustrated graphically.

2 Model Description

Customers arrive at a retrial queueing system according to a Markovian arrival process with representation $(D_0, D_1)$ of order $n$. An arriving customer enters service immediately if the server is free; if the server is busy, the customer enters an orbit of infinite capacity. Each customer in the orbit independently tries to access the server according to a Poisson process with parameter $\sigma$. A retrial customer who finds the server busy returns to the orbit with probability $\delta$ and leaves the system with probability $1 - \delta$. A customer in the orbit can generate a priority according to a Poisson process with parameter $\gamma$. The priority generated customer is immediately taken for service if the server is free. If the server is busy, the priority generated customer is moved to a waiting space $A_1$ of capacity one, which is reserved only for priority generated customers; if the waiting space $A_1$ is already occupied, the newly priority generated customer leaves the system forever. We also consider a customer induced interruption while service is going on; the interruption occurs according to a Poisson process with parameter $\theta$. The interrupted customers enter a buffer $B_1$ of finite capacity $K$ according to the availability of space, and a customer is lost forever if there is no space for him in buffer $B_1$. The service provided here is non-preemptive, and the service times follow phase-type distributions with representations $(\alpha, T)$, $(\beta, S)$, $(\nu, V)$ of orders $m_1$, $m_2$, $m_3$, respectively. Define $T_0 = -Te$, $S_0 = -Se$, and $V_0 = -Ve$, where $T_0$, $S_0$, $V_0$ represent the service completions corresponding to the three service processes. When an interruption occurs, the customer currently in service is forced to leave the service facility, and the freed server is ready to offer service to other customers. The interrupted customer spends a random period of time for completion of the interruption, which follows an exponential distribution with rate $\eta$, and the interruption completed customers move to a buffer $B_2$ whose size is also $K$. We assume that priority generated customers never undergo interruption and that not more than one interruption is allowed for a customer during service. We also assume that no customer is lost before entering the orbit. Further, we assume that the sum of the numbers of customers in buffers $B_1$ and $B_2$ is at most $K$. When buffer $B_2$ is full and a customer induced interruption happens, the self-interrupted customer is lost from the system even though buffer $B_1$ may have free space; in particular, when buffer $B_2$ is full, $B_1$ must be empty. A pictorial representation of the model is shown in Fig. 1.

3 Mathematical Formulation

The model is studied as a quasi birth–death (QBD) process and a matrix geometric solution is obtained. For the analysis we use the following notation:
$N_1(t)$ = number of customers in the orbit at time $t$;
$N_2(t)$ = number of busy servers at time $t$;
$$S(t) = \begin{cases} 1, & \text{server busy with an ordinary customer at time } t,\\ 2, & \text{server busy with a priority generated customer at time } t,\\ 3, & \text{server busy with a customer from buffer } B_2 \text{ at time } t;\end{cases}$$
$N_3(t)$ = number of priority generated customers waiting for service at time $t$;
$N_4(t)$ = number of interruption completed customers in buffer $B_2$ at time $t$;
$N_5(t)$ = number of interrupted customers in buffer $B_1$ at time $t$;
$M(t)$ = phase of the service process at time $t$;
$A(t)$ = phase of the arrival process at time $t$.
Under the assumptions on the arrival and service processes, $\{\chi(t) : t \ge 0\}$, where $\chi(t) = (N_1(t), N_2(t), S(t), N_3(t), N_4(t), N_5(t), M(t), A(t))$, forms a continuous time Markov chain on the state space $\bigcup_{i \ge 0}\big(L_1(i) \cup L_2(i)\big)$, where
$$L_1(i) = \{(i, 0, w, b_2, b_1, y) : i \ge 0;\; w = 0, 1;\; b_2, b_1 = 0, 1, 2, \ldots, K;\; b_2 + b_1 \le K;\; y = 1, 2, \ldots, n\},$$
$$L_2(i) = \{(i, 1, s, w, b_2, b_1, x, y) : i \ge 0;\; s = 1, 2, 3;\; w = 0, 1;\; b_2, b_1 = 0, 1, 2, \ldots, K;\; b_2 + b_1 \le K;\; x = 1, 2, \ldots, m_s;\; y = 1, 2, \ldots, n\}.$$

Fig. 1 Pictorial representation of the model

By partitioning the state space into levels with respect to the number of customers in the orbit, the generator of the above Markov process is of the form
$$Q = \begin{pmatrix}
A_{10} & A_0 & & & & \\
A_{21} & A_{11} & A_0 & & & \\
& A_{22} & A_{12} & A_0 & & \\
& & \ddots & \ddots & \ddots & \\
& & & A_{2N} & A_{1N} & A_0 \\
& & & & A_{2N+1} & A_{1N+1} & A_0 \\
& & & & & \ddots & \ddots & \ddots
\end{pmatrix},$$
where $A_0$, $A_{10}$, $A_{2i}$, $A_{1i}$, for $i = 1, 2, 3, \ldots$, are square matrices of order $4(K+1)(K+2)mn$, defined as follows.

3.1 Matrix A10 Has the Following Transitions

The entries of $A_{10}$ are the transition rates within level 0. Let $b = 0, 1$; $s = 1, 2, 3$; $w = 0, 1$; $b_2, b_1 = 1, 2, 3, \ldots, K$; $b_2 + b_1 \le K$; $x = 1, 2, \ldots, m_s$; $y = 1, 2, \ldots, n$.
• $(0, w, b_2, b_1, y) \xrightarrow{J} (0, w, b_2, b_1, y)$
• $(0, b, s, w, b_2, b_1, x, y) \xrightarrow{J} (0, b, s, w, b_2, b_1, x, y)$, where
$$J = \begin{cases} I_n \otimes D_0 - b_1\eta I_{m_s,n}, & \text{if } b_1 = K \text{ or } b_1 + b_2 = K \text{ or } b_1 + b_2 < K,\\ I_n \otimes D_0, & \text{otherwise}.\end{cases}$$
• $(0, b, s, w, b_2, b_1, x, y) \xrightarrow{b_1\eta I_{m_s n}} (0, b, s, w, b_2 + 1, b_1 - 1, x, y)$
3.2 Matrix A0 Has the Following Transitions

Entries of A0 are the transition rates from level i to (i + 1). Let b = 0, 1; s =


1, 2, 3; w = 0, 1; b2 , b1 = 1, 2, 3, . . . K; b2 + b1 ≤ K; x = 1, 2, . . . ms ; y =
1, 2, . . . n.
In ⊗ D1
• (i, b, s, w, b2 , b1 , x, y) −−−−−−−→ (i + 1, b, s, w, b2 , b1 , x, y).
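These Kronecker-product blocks translate directly into code; the two-line sketch below (an added illustration with placeholder matrices, not the actual blocks of this model) shows how a block of the form $I \otimes D_1$ is assembled.

```python
import numpy as np

D1 = np.array([[1.5, 0.5], [1.0, 1.0]])  # placeholder arrival matrix D1
I = np.eye(3)                            # identity over the state components unchanged by an arrival (placeholder size)
block = np.kron(I, D1)                   # a diagonal block of A0 of the form I ⊗ D1
```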

3.3 Matrix A2i Has the Following Transitions

The entries of $A_{2i}$ are the transition rates from level $i$ to level $i-1$. Let $s = 1, 2, 3$; $b_2, b_1 = 1, 2, 3, \ldots, K$; $b_2 + b_1 \le K$; $x = 1, 2, \ldots, m_s$; $y = 1, 2, \ldots, n$.
• $(i, 0, s, 0, b_2, b_1, x, y) \xrightarrow{i\gamma I_{m_s,n}} (i-1, 0, s, 1, b_2, b_1, x, y)$.
• $(i, 0, s, 1, b_2, b_1, x, y) \xrightarrow{A} (i-1, 0, s, 1, b_2, b_1, x, y)$, where
$$A = \begin{cases} i\gamma\, I_{m_s,n}, & \text{if } b_1 = K,\\ (i\gamma + \theta)\, I_{m_s,n}, & \text{if } b_1 = K \text{ or } b_1 + b_2 = K.\end{cases}$$
• $(i, 0, s, 0, b_2, b_1, x, y) \xrightarrow{\theta I_{m_s,n}} (i-1, 0, s, 0, b_2, b_1 + 1, x, y)$.
• $(i, 0, s, 0, b_2, b_1, x, y) \xrightarrow{\theta I_{m_s,n}} (i-1, 0, s, 0, b_2, b_1, x, y)$, if $b_1 = K$ or $b_2 + b_1 = K$.
• $(i, 0, s, w, b_2, b_1, x, y) \xrightarrow{i\sigma I_{m_s,n}} (i-1, 1, s, w, b_2, b_1, x, y)$, for $w = 0, 1$.
• $(i, 1, s, 0, b_2, b_1, x, y) \xrightarrow{B} (i-1, 0, s, 0, b_2, b_1, x, y)$, where
$$B = \begin{cases} T_0\alpha \otimes I_n, & \text{if } s = 1,\\ S_0\beta \otimes I_n, & \text{if } s = 2 \text{ and } b_2 = 0,\\ V_0\nu \otimes I_n, & \text{if } s = 3 \text{ and } b_2 = 0.\end{cases}$$
• $(i, 1, s, 0, b_2, b_1, x, y) \xrightarrow{C} (i-1, 0, 3, 0, b_2, b_1, x, y)$, where
$$C = \begin{cases} S_0\beta \otimes I_n, & \text{if } s = 2 \text{ and } b_2 = 0,\\ V_0\nu \otimes I_n, & \text{if } s = 3 \text{ and } b_2 = 0.\end{cases}$$
• $(i, 1, s, 1, b_2, b_1, x, y) \xrightarrow{D} (i-1, 0, 2, 0, b_2, b_1, x, y)$, where
$$D = \begin{cases} T_0\alpha \otimes I_n, & \text{if } s = 1,\\ S_0\beta \otimes I_n, & \text{if } s = 2,\\ V_0\nu \otimes I_n, & \text{if } s = 3.\end{cases}$$
• $(i, 0, 0, b_2, b_1) \longrightarrow (i-1, 0, 1, b_2, b_1)$.
• $(i, 1, s, 0, b_2, b_1) \longrightarrow (i-1, 1, s, 1, b_2, b_1)$.
• $(i, 0, 1, b_2, b_1) \longrightarrow (i-1, 0, 1, b_2, b_1)$.
• $(i, 1, s, 1, b_2, b_1) \xrightarrow{E} (i-1, 1, s, 1, b_2, b_1)$, where
$$E = \begin{cases} i\gamma, & \text{if } b_1 = K,\\ i\gamma + \theta, & \text{if } b_1 = K \text{ or } b_1 + b_2 = K.\end{cases}$$
• $(i, 1, s, 0, b_2, b_1) \xrightarrow{\theta} (i-1, 1, s, 0, b_2, b_1 + 1)$.
• $(i, 1, s, 0, b_2, b_1) \xrightarrow{\theta} (i-1, 1, s, 0, b_2, b_1)$, if $b_1 = K$ or $b_2 + b_1 = K$.
• $(i, 0, w, b_2, b_1) \longrightarrow (i-1, 1, 1, w, b_2, b_1)$, for $w = 0, 1$.
• $(i, 1, s, w, b_2, b_1) \xrightarrow{i\sigma(1-\delta)} (i-1, 1, s, w, b_2, b_1)$, for $w = 0, 1$.

3.4 Matrix A1i Has the Following Transitions

The entries of $A_{1i}$ are the transitions within level $i$. Let $b = 0, 1$; $w = 0, 1$; $b_2, b_1 = 1, 2, 3, \ldots, K$; $b_2 + b_1 \le K$; $x = 1, 2, \ldots, m_s$; $y = 1, 2, \ldots, n$. Let $\dagger$ denote the condition: $b_1 = K$ or $b_1 + b_2 = K$ or $b_1 + b_2 < K$.
• $(i, b, s, w, b_2, b_1, x, y) \xrightarrow{F} (i, b, s, w, b_2, b_1, x, y)$, where
$$F = \begin{cases}
I_n \otimes D_0 - (i\gamma + i\sigma + \theta + b_1\eta)\, I_{m_s,n}, & \text{if } s = 0 \text{ and } \dagger,\\
I_n \otimes D_0 - (i\gamma + i\sigma + \theta)\, I_{m_s,n}, & \text{if } s = 0, \text{ otherwise},\\
T \oplus D_0 - (i\gamma + i\sigma + \theta + b_1\eta)\, I_{m_s,n}, & \text{if } s = 1 \text{ and } \dagger,\\
T \oplus D_0 - (i\gamma + i\sigma + \theta)\, I_{m_s,n}, & \text{if } s = 1, \text{ otherwise},\\
S \oplus D_0 - (i\gamma + i\sigma + \theta + b_1\eta)\, I_{m_s,n}, & \text{if } s = 2 \text{ and } \dagger,\\
S \oplus D_0 - (i\gamma + i\sigma + \theta)\, I_{m_s,n}, & \text{if } s = 2, \text{ otherwise},\\
V \oplus D_0 - (i\gamma + i\sigma + \theta + b_1\eta)\, I_{m_s,n}, & \text{if } s = 3 \text{ and } \dagger,\\
V \oplus D_0 - (i\gamma + i\sigma + \theta)\, I_{m_s,n}, & \text{if } s = 3, \text{ otherwise}.
\end{cases}$$

4 System Stability

Theorem 1 The system under discussion is stable.


Proof Consider the Lyapunov test function defined by φ(s) = i where s is a state
in level i. For a state s in level i , the mean drift ys is given by

ys = [φ(p) − φ(s)]qsp
p=s
  
= [φ(s " ) − φ(s)]qss " + [φ(s "" ) − φ(s)]qss "" + [φ(s """ ) − φ(s)]qss """
s" s "" s """

where s " , s "" , s """ vary over states belonging to levels i − 1, i, and i + 1, respectively.
Then φ(s) = i, φ(s " ) = i − 1, φ(s "" ) = i, φ(s """ ) = i + 1
 
ys = − qss" + qss"""
s" s"""

⎪ 

⎪ −iγ − iσ + s""" qss""" , when server is idle.



⎨−i(γ + σ (1 − δ)) − θ − [(T α ⊗ I )e] +  q

when server is busy with OR.
0 n s s""" ss""" ,
= 



⎪ −i(γ + σ (1 − δ)) − θ − [(S0 β ⊗ In )e]s + s""" qss""" , when server is busy with PG.


⎩−i(γ + σ (1 − δ)) − θ − [(V ν ⊗ I )e] +  q

when server is busy with IC.
0 n s s""" ss""" ,

where OR, PG, and IC denotes ordinary, priority generated, and interruption
completed customers, respectively. Since s""" qss""" is bounded by some fixed
constant
 for any s in level i ≥ 1 we can find a positive real number K such that
s""" q ss""" < K for all s in level k ≥ 1 Thus for any  > 0 , we can find K ∗
large enough that ys < − for any s belonging to level i ≥ K ∗ . Hence the theorem
follows from Tweedie’s [21] result.

4.1 Neuts–Rao Truncation Method

When we apply this method, our process $\chi$ transforms to $\bar\chi$ with infinitesimal generator
$$\bar Q = \begin{pmatrix}
A_{10} & A_0 & & & & & \\
A_{21} & A_{11} & A_0 & & & & \\
& A_{22} & A_{12} & A_0 & & & \\
& & \ddots & \ddots & \ddots & & \\
& & & A_{2N-1} & A_{1N-1} & A_0 & \\
& & & & A_2 & A_1 & A_0 \\
& & & & & A_2 & A_1 & A_0 \\
& & & & & & \ddots & \ddots & \ddots
\end{pmatrix},$$
where $A_1 = A_{1N}$ and $A_2 = A_{2N}$.
Let the steady state probability vector of the Markov process be $x = (x_0, x_1, x_2, \ldots, x_{N-1}, x_N, x_{N+1}, \ldots)$. We take
$$x_{N+i} = x_{N-1} R_N^{i+1}, \quad i = 0, 1, 2, \ldots, \qquad (1)$$
where $R_N$ is the minimal solution of the matrix quadratic equation
$$R_N^2 A_{2N} + R_N A_{1N} + A_0 = 0.$$

4.1.1 Choice of N

To find the truncation level N, we use the Neuts–Rao method (see [16]). As mentioned in [14], Elsner's algorithm is used to determine the spectral radius $\eta(N)$ of $R(N)$. To minimize the effect of the approximation on the probabilities, N must be chosen such that $|\eta(N) - \eta(N+1)| < \epsilon$, where $\epsilon$ is an arbitrarily small value.
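An added sketch of this truncation-level search is given below; the helper functions `level_blocks` and `solve_R` are hypothetical stand-ins for the model-specific block construction and for Elsner's algorithm (or any other solver of the matrix quadratic equation), neither of which is defined in the paper.

```python
import numpy as np

def choose_truncation_level(level_blocks, solve_R, eps=1e-8, N_max=500):
    """Increase N until the spectral radius eta(N) of R(N) changes by less
    than eps, where R(N) solves R^2 A_{2N} + R A_{1N} + A_0 = 0.
    level_blocks(N) must return the triple (A_0, A_{1N}, A_{2N})."""
    eta_prev = None
    for N in range(1, N_max + 1):
        A0, A1N, A2N = level_blocks(N)
        R = solve_R(A0, A1N, A2N)
        eta = max(abs(np.linalg.eigvals(R)))
        if eta_prev is not None and abs(eta - eta_prev) < eps:
            return N, R
        eta_prev = eta
    raise RuntimeError("no suitable truncation level found below N_max")
```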
Again, $x\bar Q = 0$ leads to
$$x_{N-i} = x_{N-i-1} R_{N-i}, \quad i = 1, 2, \ldots, N-2, \qquad (2)$$
and
$$x_1 = x_0 R_1, \qquad (3)$$
where $R_{N-i} = -A_0(A_{1N-i} + R_{N-i+1} A_{2N-i+1})^{-1}$ and $R_1 = -A_0(A_{11} + R_2 A_{22})^{-1}$.

Finally, from $x_0 A_{10} + x_1 A_{21} = 0$ we find $x_0$ as the steady state distribution of a finite state Markov chain with generator $A_{10} + R_1 A_{21}$. Then from (1), (2) and (3) we get $x_i$ for $i = 1, 2, 3, \ldots$, and $x$ is calculated by dividing each $x_i$ by the normalizing constant $\sum_{i=0}^{\infty} x_i e$.

5 Performance Measures

Let $\xi = (\xi_0, \xi_1, \xi_2, \ldots)$ be the steady state probability vector of the Markov process $\chi$. For the evaluation of the system performance measures we partition each $\xi_i$, $i \ge 0$, as $\xi_i = (w_i, x_i, y_i, z_i)$, where the subvectors correspond, respectively, to the server being idle, busy with an ordinary customer, busy with a priority generated customer, and busy with an interruption completed customer, with $i$ customers in the orbit.
• Probability that the server is idle: $P_{idle} = \sum_{i=0}^{\infty} w_i e$.
• Probability that the server is busy with an ordinary customer: $P_{sbor} = \sum_{i=0}^{\infty} x_i e$.
• Probability that the server is busy with a priority generated customer: $P_{sbpr} = \sum_{i=0}^{\infty} y_i e$.
• Probability that the server is busy with interruption completed customers from $B_2$: $P_{sbb_2} = \sum_{i=0}^{\infty} z_i e$.
• Probability that the server is idle with customers in the orbit: $P_{idleco} = \sum_{i=0}^{\infty} w_i e - w_1 e$.
• Expected number of customers in the orbit: $E_{or} = \sum_{i=1}^{\infty} i\,\xi_i e$.
• Expected number of customers in the orbit when the server is idle: $E_{sidle} = \sum_{i=1}^{\infty} i\, w_i e$.
• Successful retrial rate: $S_{rr} = \sigma \sum_{i=1}^{\infty} i\, w_i e$.
• Overall retrial rate: $O_{rr} = \sigma \sum_{i=1}^{\infty} i\,\xi_i e$.
• Fraction of successful retrials: $F_{srr} = S_{rr}/O_{rr} = \sum_{i=1}^{\infty} i\, w_i e \,\big/ \sum_{i=1}^{\infty} i\,\xi_i e$.
Let $\xi_i = \zeta_i(b, s, w, b_2, b_1)$, where $\zeta_i(b, s, w, b_2, b_1)$ is the row vector corresponding to $N_2(t) = b$, $S(t) = s$, $N_3(t) = w$, $N_4(t) = b_2$, $N_5(t) = b_1$, with $b = 0, 1$; $s = 1, 2, 3$; $w = 0, 1$; $b_2, b_1 = 0, \ldots, K$; $b_2 + b_1 \le K$.
• Probability that priority generated customers are lost from the system:
$$P_{prl} = \sum_{i=1}^{\infty} \sum_{b=0}^{1} \sum_{s=1}^{3} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} \zeta_i(b, s, 1, b_2, b_1)\, e.$$
• Probability that interrupted customers are lost from the system:
$$P_{inl} = \sum_{i=1}^{\infty} \sum_{b=0}^{1} \sum_{s=1}^{3} \sum_{b_2=0}^{K} \zeta_i(b, s, 1, b_2, K - b_2)\, e.$$

• Expected number of ordinary customers in the orbit:
$$E_{orc} = \sum_{i=0}^{\infty} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} i\,\zeta_i(0, 0, b_2, b_1)e + \sum_{i=0}^{\infty} \sum_{w=0}^{1} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} i\,\zeta_i(1, 1, w, b_2, b_1)e$$
$$+ \sum_{i=1}^{\infty} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} (i-1)\,\zeta_i(1, 2, 0, b_2, b_1)e + \sum_{i=2}^{\infty} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} (i-2)\,\zeta_i(1, 2, 1, b_2, b_1)e$$
$$+ \sum_{i=0}^{\infty} \sum_{w=0}^{1} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} i\,\zeta_i(1, 3, w, b_2, b_1)e.$$
• Expected number of priority generated customers in the orbit:
$$E_{prc} = \sum_{i=0}^{\infty} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} \zeta_i(0, 1, b_2, b_1)e + \sum_{i=0}^{\infty} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} \zeta_i(1, 1, 1, b_2, b_1)e$$
$$+ \sum_{i=0}^{\infty} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} \zeta_i(1, 2, 0, b_2, b_1)e + \sum_{i=0}^{\infty} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} 2\,\zeta_i(1, 2, 1, b_2, b_1)e$$
$$+ \sum_{i=0}^{\infty} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} i\,\zeta_i(1, 3, 1, b_2, b_1)e.$$
• Expected number of priority generated customers lost from the system:
$$E_{prl} = \sum_{i=1}^{\infty} \sum_{b=0}^{1} \sum_{s=1}^{3} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} i\,\zeta_i(b, s, 1, b_2, b_1)e.$$
• Expected number of interrupted customers lost from the system:
$$E_{inl} = \sum_{i=1}^{\infty} \sum_{b=0}^{1} \sum_{s=1}^{3} \sum_{b_2=0}^{K} i\,\zeta_i(b, s, 1, b_2, K - b_2)e.$$
• Expected number of interrupted customers in buffer $B_1$:
$$E_{inb_1} = \sum_{i=0}^{\infty} \sum_{b=0}^{1} \sum_{s=1}^{3} \sum_{w=0}^{1} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} b_1\,\zeta_i(b, s, w, b_2, b_1)e.$$

• Expected number of interruption completed customers in buffer $B_2$:
$$E_{incb_2} = \sum_{i=0}^{\infty} \sum_{b=0}^{1} \sum_{s=1}^{3} \sum_{w=0}^{1} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} b_2\,\zeta_i(b, s, w, b_2, b_1)e.$$
• Expected number of customers lost after retrials per unit time:
$$E_{clr} = \sigma(1-\delta) \sum_{i=1}^{\infty} \sum_{s=1}^{3} \sum_{w=0}^{1} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} i\,\zeta_i(1, s, w, b_2, b_1)e.$$
• Expected number of departures of ordinary customers after completing service:
$$E_{deor} = \sum_{i=0}^{\infty} \sum_{w=0}^{1} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} \zeta_i(1, 1, w, b_2, b_1)e.$$
• Expected number of departures of priority generated customers after completing service:
$$E_{depr} = \sum_{i=0}^{\infty} \sum_{w=0}^{1} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} \zeta_i(1, 2, w, b_2, b_1)e.$$
• Expected number of departures of interruption completed customers after completing service:
$$E_{deinc} = \sum_{i=0}^{\infty} \sum_{w=0}^{1} \sum_{b_2=0}^{K} \sum_{b_1=0}^{K-b_2} \zeta_i(1, 3, w, b_2, b_1)e.$$

6 Cost Analysis

Based on the above system characteristics, we propose an optimization problem and illustrate it with a numerical example. Define a revenue (profit) function as
$$ETP = r_1 E_{deor} + r_2 E_{depr} + r_3 E_{deinc} - c_1 E_{orc} - c_2 E_{prc} - c_3 E_{incb_2} - c_4 E_{inb_1} - c_5 E_{inl} - c_6 E_{prl} - c_7 E_{clr} - c_{fixed},$$
where $r_1$ monetary units is the revenue obtained for each ordinary customer getting service and leaving the system without interruption, $r_2$ monetary units is the revenue obtained for each priority generated customer getting service and leaving the system, $r_3$ monetary units is the revenue obtained for each ordinary customer getting service and leaving the system after a customer induced interruption, $c_1$ monetary units is the holding cost for each unit of time that an ordinary customer has to wait in the system, $c_2$ monetary units is the holding cost for each unit of time that a priority generated customer has to wait in the system, $c_3$ monetary units is the holding cost for each unit of time that an interruption completed customer has to wait in buffer $B_2$, $c_4$ monetary units is the holding cost for each unit of time that an interrupted customer has to wait in buffer $B_1$, $c_5$ monetary units is the cost for each customer lost due to no vacant space in buffer $B_1$ at the time an interruption occurs, $c_6$ monetary units is the cost for each priority generated customer lost due to no space in the waiting space $A_1$ at the time a priority is generated, $c_7$ monetary units is the cost for each customer lost after retrial, and $c_{fixed}$ is a miscellaneous fixed cost. Our goal is to find an optimum value of $K$ (denoted by $K^*$; with all other parameters fixed) that maximizes the expected total profit $ETP$.

7 Numerical Illustration

Consider the three phase-type distribution with representations (α, T ), (β, S),
(ν, V ) (For convenience we say (α, T ) as Type-I service, (β, S) as Type-II service
and (ν, V ) as Type-III service) are defined by

Type-I service; Type-II service; Type-III service;


α = [0.3 0.7]; β = [0.4 0.6]; ν = [0.5 0.5];
−15 3 −8 4 −5.5 2.5
T = ; S= ; V = ;
3 −15 4 −8 2.5 −5.5
12 4 3
T0 = ; S0 = ; V0 =
12 4 3

In order to demonstrate the effect of correlation, we introduce four MAP arrival of


customers. Denote the four MAP as: a MAPp , a MAPn , b MAPp , b MAPn and
−4.05 1.55 2.05 0.45
• a MAP
p is defined by D0 = , D1 =
3.5 −5.5 1 1
Average arrival rate, λ = 2.3462, Correlation coefficient, ccor = +0.00028752
−5.5 3.5 1 1
• a MAPn is defined by D0 = , D1 =
1 −3.5 1 1.5
Average arrival rate, λ = 2.3462, Correlation coefficient, ccor = −0.00028532
−5.15 2.10 2.60 0.45
• b MAPp is defined by D0 = , D1 =
4.05 −6.60 1.00 1.55
Average arrival rate, λ = 2.8822, Correlation coefficient, ccor = +0.00040550
−6.60 4.05 1.55 1.00
• b MAPn is defined by D0 = , D1 =
1.55 −4.60 1.00 2.05
Average arrival rate, λ = 2.8822, Correlation coefficient, ccor = −0.000411974
The MAP/(PH,PH,PH)/1 Model with Self-Generation of Priorities, Customer. . . 367

7.1 Optimum Buffer Size K

We fix η = 10.6, θ = 5.7, γ = 25.0, σ = 0.5, δ = 0.8, m = 2 = n, r1 = 900, r2 =


900, r3 = 900, c1 = 2, c2 = 1, c3 = 50, c4 = 50, c5 = 10, c6 = 5, c7 = 1, cf =
10 and different service combinations shown in Table 1. (For example, System A
means Type-I service given to ordinary customers, Type-II service given to priority
generated customers, Type-III service given to interruption completed customers
and System F means all customers are given Type-III service.) Then compute ET P
for different K and selected arrival process a MAPp , a MAPn , b MAPp , b MAPn .
Results are plotted in Figs. 2 and 3. From Figs. 2 and 3 we can see that K = 2
is the optimal buffer size for all the above considered cases.

7.2 Effect of θ in Einl for Different MAP and Buffer Sizes K

We fix η = 0.6, γ = 25.0, σ = 0.5, δ = 0.8, m = 2 = n and compute Einl for


different interruption rates θ and for buffer sizes K. Results are plotted graphically
in Fig. 4.
From Fig. 4 when interruption rate θ increases, Einl increases initially and then
decreases. For the above set of fixed parameters here Einl is maximum when K = 2
and all other considered buffer sizes K = 1, 3, 4, 5, Einl is less than that at K = 2
for all arrival processes assumed in previous sections.

7.3 Effect of η in Einl for Different MAP and Buffer Sizes K

We fix θ = 5.7, γ = 25.0, σ = 0.5, δ = 0.8, m = 2 = n and compute Einl for


different rates η and for buffer sizes K. Results are plotted graphically in Fig. 4.

Table 1 Some selected phase-type service combinations

Service Ordinary Priority Interruption


combination customer generated completed
name customer customer
System A Type I Type II Type III
System B Type II Type I Type III
System C Type II Type III Type I
System D Type I Type I Type I
System E Type II Type II Type II
System F Type III Type III Type III
368 Jomy Punalal and S. Babu

a
MAP p a
MAP n
60 60

50 50

40 40
ETP

ETP
30 30

20 20

System A System A
10 System B
10 System B
System C System C

0 0
1 2 3 4 5 6 7 1 2 3 4 5 6 7
K K

MAP p MAP n
b b
70 70

60 60

50 50

40 40
ETP

ETP

30 30

20 20
System A System A
10 System B 10 System B
System C System C

0 0
1 2 3 4 5 6 7 1 2 3 4 5 6 7
K K

Fig. 2 Graph of ET P vs. K for System A, B, C

From Fig. 4 when η increases, Einl increases initially and then decreases. For the
above set of fixed parameters here Einl is maximum when K = 2 and all other
considered buffer sizes K = 1, 3, 4, 6, 9, Einl is less than that at K = 2 for all
arrival processes assumed in previous sections.

8 Conclusion

A single-server queueing system with self-generation of priorities, customer


induced interruption and retrial of customers is analyzed in this paper. Arrival of
customers is according to Markovian arrival process and service times are different
phase-type distributions. The interruption we discussed here is customer induced
interruption. Performance measures required for an appropriate system designing
were computed and numerically analyzed.
The MAP/(PH,PH,PH)/1 Model with Self-Generation of Priorities, Customer. . . 369

a
MAP p MAP n
a

60 60

50 50

40 40
ETP

ETP
30 30

20 20

10 10
System D System D
System E System E
0 System F 0 System F

1 2 3 4 5 6 1 2 3 4 5 6 7
K K
b
MAP p MAP n
b
70 70

60 60

50 50

40 40
ETP

ETP

30 30

20 20
System D System D
System E System E
10 System F
10 System F

0 0
1 2 3 4 5 6 7 1 2 3 4 5 6 7
K K

Fig. 3 Graph of ET P vs. K for System D, E, F

0.08 0.09
K=1
0.07 K=2
0.08 K=3
0.06 K=4
K=6
0.07 K=9
0.05
E inl

E inl

0.04 0.06
K=1
K=2
0.03 K=3
K=4
0.05
0.02 K=5
K=6
K=7 0.04
0.01

0 0.03
0 5 10 15 0 2 4 6 8 10 12 14 16 18 20
q h

Fig. 4 Graph of Einl vs. θ, η for different buffer size K

Acknowledgments The authors are thankful to Professor A. Krishnamoorthy for constructive


suggestion and advice in the entire work of this paper. Support from the University Grants
Commission (sanction no. FIP/12th Plan/KLKE029 TF-36) is gratefully acknowledged.
370 Jomy Punalal and S. Babu

References

1. Avi-Itzhak, B., Naor, P.: Some queuing problems with the service station subject to breakdown.
Oper. Res. 11(3), 303–320 (1963)
2. Brahimi, M., Worthington, D.J.: Queueing models for out-patient appointment systems—a
case study. J. Oper. Res. Soc. 42(9), 733–746 (1991)
3. Falin, G.: A survey of retrial queues. Queueing Systems 7(2), 127–167 (1990)
4. Falin, G., Templeton, J.G.C.: Retrial Queues, vol. 75. CRC Press, 1997
5. Gaver Jr., D.P.: A waiting line with interrupted service, including priorities. J. R. Stat. Soc. B
(Methodol.), 73–90 (1962)
6. Gomez-Corral, A., Krishnamoorthy, A., Narayanan, V.C.: The impact of self-generation of
priorities on multi-server queues with finite capacity. Stoch. Models 21(2–3), 427–447 (2005)
7. Jacob, V., Chakravarthy, S.R., Krishnamoorthy, A.: On a customer-induced interruption in a
service system. Stoch. Anal. Appl. 30(6), 949–962 (2012)
8. Jaiswal, N.K.: Preemptive resume priority queue. Oper. Res. 9(5), 732–742 (1961)
9. Jaiswal, N.K.: Priority Queues. Elsevier (1968)
10. Krishnamoorthy, A., Babu, S., Narayanan, V.C.: MAP/(PH/PH)/c queue with self-generation
of priorities and non-preemptive service. Stoch. Anal. Appl. 26(6), 1250–1266 (2008)
11. Krishnamoorthy, A., Babu, S., Narayanan, V.C.: The MAP/(PH/PH)/1 queue with self-
generation of priorities and non-preemptive service. Eur. J. Oper. Res. 195(1), 174–185 (2009)
12. Krishnamoorthy, A., Pramod, P.K., Deepak, T.G.: On a queue with interruptions and repeat or
resumption of service. Nonlinear Anal. Theory Methods Appl. 71(12), e1673–e1683 (2009)
13. Krishnamoorthy, A., Pramod, P.K., Chakravarthy, S.R.: Queues with interruptions: a survey.
Top 22(1), 290–320 (2014)
14. Neuts, M.F.: Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach.
Courier Corporation (1981)
15. Neuts, M.F., Lucantoni, D.M.: A Markovian queue with n servers subject to breakdowns and
repairs. Manag. Sci. 25(9), 849–861 (1979)
16. Neuts, M.F., Rao, B.M.: Numerical investigation of a multiserver retrial model. Queueing
Systems 7(2), 169–189 (1990)
17. Shortle, J.F., Thompson, J.M., Gross, D., Harris, C.M.: Fundamentals of Queueing Theory.
Wiley (2018)
18. Takagi, H.: Queueing Analysis: Vacations and Priority System, vol. i. North-Holland,
Amsterdam (1991)
19. Takagi, H.: Queueing Analysis: Vacations and Priority System, vol. iii. North-Holland,
Amsterdam (1993)
20. Taylor, I.D.S., Templeton, J.G.C.: Waiting time in a multi-server cutoff-priority queue, and its
application to an urban ambulance service. Oper. Res. 28(5), 1168–1188 (1980)
21. Tweedie, R.L.: Sufficient conditions for regularity, recurrence and ergodicity of Markov
processes. In: Mathematical Proceedings of the Cambridge Philosophical Society, vol. 78,
pp. 125–136. Cambridge University Press, Cambridge (1975)
22. Wang, J.: An M/G/1 queue with second optional service and server breakdowns. Comput.
Math. Appl. 47(10–11), 1713–1723 (2004)
23. White, H., Christie, L.S.: Queuing with preemptive priorities or with breakdown. Oper. Res.
6(1), 79–95 (1958)
24. Yang, T., Templeton, J.G.C.: A survey on retrial queues. Queueing Systems 2(3), 201–233
(1987)
Valuation of Reverse Mortgage

D. Kannan and Lina Ma

Abstract This article provides an analytic valuation formula for reverse mortgage.
We achieve this by utilizing the principle of balance between the expected gain
and expected payment. The underlying model employs a jump-diffusion process
to represent the dynamics of the house price, the Vasicek model to drive the
instantaneous interest rate, and a bivariate distribution function to describe the
longevity risk. We obtain, in particular, the formulas for the lump sum payment,
joint annuity, increasing (decreasing) annuity, level annuity of reverse mortgage,
and the valuation equation that the variable payment annuities satisfy. We then
discuss the monotonicity of the lump sum, annuity, and annuity payment factors
with respect to the parameters associated with the home price and the interest rate
model. Finally, we analyze the sensitivity of the joint annuity with respect to the
parameters associated with the home price, interest rate, and lifetime model. The
numerical analysis supports our theoretical results.

Keywords Reverse mortgage · Valuation · Joint annuity · Jump-diffusion ·


Vasicek model · Lifetime model

1 Introduction

Reverse mortgage is an attractive financial lending product offered to any senior


citizen who owns a house. It is categorized normally into two categories, namely
collateral reverse mortgage and ownership conversion reserve mortgage (Ohgaki
[7]). The collateral reverse mortgage is redeemable, while the ownership conversion
reverse mortgage is not. Home equity conversion mortgage system is a typical

D. Kannan ()
Department of Mathematics, University of Georgia, Athens, GA, USA
e-mail: [email protected]
L. Ma
School of Finance, Capital University of Economics and Business, Beijing, China

© The Editor(s) (if applicable) and The Author(s), under exclusive 371
licence to Springer Nature Singapore Pte Ltd. 2020
V. C. Joshua et al. (eds.), Applied Probability and Stochastic Processes,
Infosys Science Foundation Series, https://doi.org/10.1007/978-981-15-5951-8_22
372 D. Kannan and L. Ma

collateral reverse mortgage in the USA. In a collateral reverse mortgage, the


elderly householder borrows annuity like periodical installment mortgage on his/her
residential house. With the collateral reverse mortgage, the borrower is able to
redeem the reverse mortgage by repaying the loan principal and accumulated
interests through property sale at any time from the mortgage’s effective date to due
date. Rente Viager is a typical ownership conversion reverse mortgage in France.
In the ownership conversion reverse mortgage, the borrower enters into a contract
with a lending institution to obtain an annuity until his/her death, and at death the
pledged property ownership is transferred to the lender.
Since the introduction of reverse mortgage, many scholars and practitioners have
engaged in research, mainly on the basic principles, operation modes, feasibility,
effectiveness, policies, laws, risks, and valuation. These aspects of reverse mortgage
are well studied compared to the valuation problem. Hence, we focus our attention
to the valuation problem. The valuation of reverse mortgage mainly includes three
aspects: (1) determining a lump sum and annuity payments that the lender can pay
before signing the reverse mortgage contract, (2) pricing the redemption right
of the collateral reverse mortgage before signing the reverse mortgage contract,
and (3) finding the value of reverse mortgage at any time t after signing the
contract. The main idea behind the first aspect is to employ the principle of expected
balance between gain and payment under the assumption of perfect competition
market, which makes the discounted present value of payment of the lender be
equal to a certain proportion of discounted present value of the mortgaged property
(see, for example, Mitchell and Piggott [6]). The main valuation idea of the last
two aspects is to apply the option pricing concept, which regards the mortgaged
property as the underlying asset and the loan principal and accumulated interests
as the strike price of underlying asset. When the contract expires, the lender or
its successor determines whether to execute the option (i.e., redeem the pledged
property) according to the difference between the price of pledged property and the
loan principle and accumulated interests (see, for example, Tsay et al. [8]).
The main risks involved with reverse mortgage include property-value risk,
interest rate risk, and longevity risk. In general, the risk of house price is modeled
in two ways. One is to assume directly that the dynamics of house price is driven by
a forward stochastic differential equation, as in Tsay et al. [8]. The other is to fit the
time series model based on the historical data of the house price, as discussed by
Li et al. [3]. The literature on classical interest rate model is vast and we follow the
Vasicek [9] model. While there are usually several ways to describe the longevity
risk, we follow the Gompertz [2].
As pointed out earlier, this work focuses on the valuation problem of reverse
mortgage. We provide analytic valuation formulas for the lump sum and three
different annuity aspects of reverse mortgage. We also derive the valuation equation
that the variable payment annuities satisfy. For our analysis, we appeal to the
principle of balance between expected gain and expected payment. Taking into
account of the influence of the parameters associated with the home price and
interest rate models, we discuss the monotonicity of the lump sum, annuity, and
annuity payment factors. Finally, we analyze the sensitivity of the joint annuity with
Valuation of Reverse Mortgage 373

respect to the parameters associated with the home price, interest rate, and lifetime
model. Our numerical results show that the average return of home price exerts a
dominating influence on the joint annuity, followed by the mean reversion level of
interest rate. We observe here that both of them have more impact on the annuities of
young applicants than those of old applicants. It is interesting to note that the initial
age of male and that of female produce asymmetrical effect on the joint annuity.
Remarkably, the dependence of joint lifetime significantly affects the joint annuity
value.
Organization of the article: Sect. 2 presents the models of risk factors. In Sect. 3,
we first design the reverse mortgage predicated on the ownership conversion with
fixed yearly payment until death, and then derive the valuation model for the lump
sum and annuity payments under the principle of balance between expected gain and
expected payment. Section 4 analyzes the monotonicity of the lump sum, annuity
payments, and annuity payment factors with respect to the parameters involved in
housing price and interest rate models. We provide in Sect. 5 some numerical results
to examine how the housing price risk, interest rate risk, and longevity risk impact
the lump sum, annuity payment, and the annuity payment factors.
The main steps of the proofs of propositions are provided in the Appendix. This
review article is based on the references Ma et al. [5] and Ma and Kannan [4].

2 Risk Factors

We follow a stochastic model to determine the lump sum and annuity of the
reverse mortgage without redemption right applied by a joint lives. All our random
elements are defined on a complete filtered probability space (, F , P, {Ft }t ≥0),
where (, F , P) is a complete probability space and {Ft }t ≥0 is a right continuous
increasing family of sub σ -algebras of F with all the null events in F0 . The
risk factors that the reverse mortgage without redemption right involves (1) we
employ the jump-diffusion model to mimic the dynamics of home price, (2) the
Vasicek model to drive the instantaneous interest rate, and (3) a bivariate distribution
function to describe the dependent longevity risk of a joint-life (i.e. a couple).

2.1 House Price

We assume that the house price h(t) (t ≥ 0) follows the exponential Lévy process
⎡ ⎤
 t $ % )
N(t
1 2
h(t) = h(0) exp ⎣ μh (s)ds − σ + λh kh t + σh Wh (t) + Ji ⎦ ,
0 2 h
i=1

h(0) = h0 . (2.1)
374 D. Kannan and L. Ma

Here: (1) the P-standard Brownian motion process {Wh (t), t ≥ 0} captures the
unanticipated instantaneous change of house price. (2) As the Wiener process
{Wh (t)} will not capture the abnormal shocks caused by sudden rise or drop in
the house price, we model the rise/drop in the house price by independent Gaussian
random jumps {Ji , i ≥ 0} with mean μJ and variance σJ2 , and we count the number
of price jumps during the time interval (0, t] using a Poisson process {N(t), t ≥ 0}
with intensity λh . (3) We assume that the processes {Wh (t), t ≥ 0}, {N(t), t ≥ 0},
and {Ji , i ≥ 0} are independent. (4) The drift coefficient μh (t) denotes the average
rate of return. (5) The diffusion coefficient σh (> 0) represents the volatility of the
house price. (6) The parameter kh is given by kh = exp (μJ + 12 σJ2 ) − 1.

2.2 Interest Rate

We assume that the instantaneous short-rate dynamics follows the Vasicek model
[9]. More precisely, the interest rate process {r(t), t ≥ 0} is governed by the
following stochastic differential equation

dr(t) = αr (μr − r(t))dt + σr dWr (t), r(0) = r0 , (2.2)

where {Wr (t), t ≥ 0} is a P-standard Brownian motion with Cov(dWr (t),


dWh (t)) = ρhr dt. We assume that r0 , αr , μr , σr are positive constants.
From Itô’s formula applied to eαr u r(u) we obtain
 t
r(t) = e−αr t r(0) + μr (1 − e−αr t ) + σr e−αr (t −u)dWr (u), t ≥ 0. (2.3)
0

The discount factor at time t, denoted by d(t), is defined as


$  t %
d(t) := exp − r(s)ds . (2.4)
0

Some trivial computation yields


$ %
σr2 1
E [d(t)] = exp 2
− μr t + (μr − r0 )(1 − e−αr t )
2αr αr
5
σr2
+ 1 − (2 − e−αr t )2 . (2.5)
4αr3
Valuation of Reverse Mortgage 375

2.3 Joint Lives

In our analysis, the initial time t = 0 represents the time at which the reverse
mortgage is signed. Let x0 and y0 represent the age of the husband and wife at
time 0, respectively. Let X and Y be the age-at-death of the husband and wife,
respectively. By F (x, y) := P (X ≤ x, Y ≤ y) we denote the joint distribution
function of random vector (X, Y ), with F1 (x) and F2 (y) denoting the respective
marginal distributions. The bivariate distribution function can be specified by a
copula function and two marginal distributions; that is,

F (x, y) = C(F1 (x), F2 (y)),

where C is a real-valued copula function that provides a link between the marginal
distributions and the corresponding bivariate distribution. The copula function is
given by (see Frees et al. [1]),

1 (eαu − 1)(eαv − 1)
C(u, v) = ln 1 + ,
α eα − 1

with the two marginal distributions following the Gompertz distribution:


m  x 
− σ1
F1 (x) = 1 − exp e 1 1 − e σ1 ,

m  y 
− σ2
F2 (y) = 1 − exp e 2 1 − e σ2 .

The corresponding density functions for X and Y are given by

1 x−m m  x 
1 − 1
f1 (x) = e σ1 exp e σ1 1 − e σ1 ,
σ1
m  y 
1 y−m 2 − 2
f2 (y) = e σ2 exp e σ2 1 − e σ2 .
σ2

We consider a bivariate residual lifetime random vector (X − x0 , Y − y0 ), where


X − x0 and Y − y0 represents the time-until-death of the husband and that of the
wife, respectively. The joint distribution function for (X − x0 , Y − y0 ) is

1
Fc (x, y) = [F (x0 + x, y0 + y) − F (x0 + x, y0 ) − F (x0 , y0 + y)
p0
+F (x0 , y0 )] , (2.6)
376 D. Kannan and L. Ma

where

p0 = 1 − F1 (x0 ) − F2 (y0 ) + F (x0 , y0 ).

Let

T1 = min{X − x0 , Y − y0 }, T2 = max{X − x0 , Y − y0 }, (2.7)

then the density function fT (t) for T2 is

1 dF (x0 + t, y0 + t) dF (x0 + t, y0 ) dF (x0 , y0 + t)


fT (t) = − − .
p0 dt dt dt
(2.8)
dF (x0 +t,y0 +t ) dF
Abbreviating dt as dt , we have

dF f1 (x0 + t)eαF1 (x0 +t) (eαF2 (y0 +t) − 1) + f2 (y0 + t)eαF2 (y0 +t) (eαF1 (x0 +t) − 1)
= ,
dt eα − 1 + (eαF1 (x0 +t) − 1)(eαF2 (y0 +t) − 1)

dF (x0 + t, y0 ) (eαF2 (y0 ) − 1)eαF1(x0 +t ) f1 (x0 + t)


= α ,
dt e − 1 + (eαF1 (x0 +t ) − 1)(eαF2(y0 ) − 1)

dF (x0 , y0 + t) (eαF1 (x0 ) − 1)eαF2(y0 +t ) f2 (y0 + t)


= α .
dt e − 1 + (eαF1 (x0 ) − 1)(eαF2(y0 +t ) − 1)

3 Valuation of Reverse Mortgage

We start our valuation process by first introducing a reverse mortgage with the
joint and γ annuities applied by the dependent joint lives. Then, the valuation
models are built based on the principle of balance between expected gain and
expected payment. Under the two-dimensional Gauss distribution and independence
assumptions, we obtain the analytic valuation formulas for the lump sum, joint
and γ annuities, increasing (decreasing) annuities, and level annuities of reverse
mortgage without redemption right, and derive the valuation equation that the
variable payment annuities satisfy.

3.1 Case of Joint and γ Annuities

We shall now design a reverse mortgage with the joint and γ annuities. The joint
life, i.e., a couple contract offers a yearly annuity payment until the last annuitant
dies. The product that we design has the following basic features: (1) The lender
starts the payments of annuity to the joint annuitants at the beginning of signing the
Valuation of Reverse Mortgage 377

contract, and the annuity payment is terminated upon the death of the last annuitant.
While both annuitants are alive, the lender pays annuity amount A at the beginning
of each year, and γ A while only one annuitant is alive. (2) When the last applicant
dies, the lender will take over the annuitant’s pledged property, sell it in the market,
and keep all of the proceeds from the sale of the property.
The essence of reverse mortgage with the joint and γ annuities is to exchange the
profit from selling the mortgaged house with the joint-life’s annuities until the last
annuitants’ death. Upon the passing of the last annuitant, the lender will take over
the homeowner’s mortgaged property and sell it. The cash acquired from the sale
is used to repay loan (including annuities and accumulated interests) that the joint
annuitants owe to the lender. Reverse mortgage possesses the non-recourse clauses,
which is, that the lender may not reclaim the loan against the annuitants’ other
assets or cash income. So, the lender will suffer a loss when the cash of selling the
mortgaged property is less than those annuities and accumulated interests, otherwise
the lender will make a profit.
Next, we will provide a simple example to see how the reverse mortgage deals
with the joint and γ annuities. Assume that the age-at-death of the husband and
the wife be X = 65.7 and Y = 67.9 years, respectively. Let the initial age be,
respectively, x0 = 65 and y0 = 64 years. Then the loan tenure is T2 = max{X −
x0 , Y − y0 } = 3.9 years. This implies that the couple claims once cash payment A at
the beginning of the first year of the contract. The wife as the last annuitant claims
three times cash payments γ A at the beginning of the second, third, and fourth year
of the contract, respectively. When the wife, the last survivor, dies at age of 67.9
years, the lender will take over the pledged house and sell it in the market. Most of
the time, it is impossible to sell the pledged house as soon as the lender takes over
it. Thus the time of selling out the pledged house usually lags behind that of taking
over the pledged house for a time. In the following valuation models, we will take
this delay time into consideration.

3.2 Valuation: Joint and γ Annuities

In the valuation of the reverse mortgage we assume that the market is perfectly com-
petitive and, price the reverse mortgage with the joint and γ annuity by the principle
of balance between expected gain and expected payment. The terminology principle
of balanced expected gain and payment means that the expected discounted present
value of future sale of the pledged property is the same as the expected discounted
present value of annuities that the lender pays during the whole loan period.
At time T2 , the lender takes over the annuitants’ mortgaged property, and sells
it at time T2 + t0 , where, recall that, t0 ≥ 0 is the delay time between the lender
taking over the pledged property and the sale of that property. We assume that t0 is
378 D. Kannan and L. Ma

deterministic. Then the expectation of discounted present value of the sale price of
the property (i.e., the lender’s expected gain) is

E [h(T2 + t0 )d(T2 + t0 )] , (3.1)

where h(t) is the value of the mortgaged property at time t, and d(t) is the discount
factor at time t given by Eq. (2.4).
The expectation of discounted present value of the joint and γ annuities during
the whole loan period (i.e., the lender’s expected payment) is
⎡ ⎤
T
 1 T
 2
E⎣ Ad(k) + γ Ad(k)⎦ , (3.2)
k=0 k=T1 +1

where x is the floor function (the largest integer not greater than x). Then, the
principle of balance between expected gain and expected payment yields
⎡ ⎤
T
 1 T
 2
E [h(T2 + t0 )d(T2 + t0 )] = E ⎣ Ad(k) + γ Ad(k)⎦ . (3.3)
k=0 k=T1 +1

In general, the explicit formula of annuity payment is difficult to obtain from


Eq. (3.3). However, we can obtain the analytic annuity formula under the two-
dimensional Gaussian distribution and independence assumption. The following
Proposition 1 presents the analytic formula for the expected discounted present
value of the mortgaged property at any time t.
Proposition 1 Assume that (a) the dynamics of home price follows the exponential
Lévy process given by Eq. (2.1), and (b) the instantaneous short interest rate is
governed by Eq. (2.2). Define
 t  s
Y (t) := σr e−αr s eαr u dWr (u) ds. (3.4)
0 0

Assume also that (c) the joint distribution of (Wh (t), Y (t)) follows the two-
dimensional Gaussian distribution, and that (d) σh Wh (t) − Y (t) is independent
 )
of N(t
i=1 Ji . Then, the expectation of discounted present value of the mortgaged
property at time t is given by

E [h(t)d(t)] = G(t)D(t), (3.5)


Valuation of Reverse Mortgage 379

where
 t $ %5
1 1 −αr t 1
G(t) = h0 exp μ(s)ds − σh σr ρhr t+ e − , (3.6)
0 αr αr αr
$ 2 %
σr 1  
D(t) = exp 2
− μr t + (μr − r0 ) 1 − e−αr t
2αr αr
2 5
σr −αr t 2
+ 3 1 − (2 − e ) . (3.7)
4αr

Proof A gist of the proof of Proposition 1 is given in the Appendix.


The analytic valuation formula for the expected lump sum that the householder
can borrow in average at time 0, and the analytic valuation formula for the joint
annuity are given in Proposition 2.
Proposition 2 Assume that (a) h(t)d(t), (t ≥ 0), is independent of T2 , (b) r(t) is
independent of (T1 , T2 ), and (c) the pledged property is sold at time T2 +t0 . Then:
- is
(1) The expectation of the lump sum, denoted by G,
 +∞
-=
G G(x + t0 )D(x + t0 )fT (x)dx, (3.8)
0

where G(x + t0 ), D(x + t0 ) and fT (x) are given by Eqs. (3.6), (3.7) and (2.8),
respectively.
(2) For the joint and γ annuity, the fixed amount A of annuity is given by

-
G
A= , (3.9)
-1
F

where
+∞

-1 =
F D(i) [1 − (1 − γ )Fc (i, +∞) − (1 − γ )Fc (+∞, i)
i=0
+(1 − 2γ )Fc (i, i)] , (3.10)

and Fc (x, y) is given by Eq. (2.6).


Proof A concise proof is provided in the Appendix.
380 D. Kannan and L. Ma

3.3 Valuation: Variable Payment Annuities

The phrase reverse mortgage with variable payment annuity tells us that the lender
starts the payments of annuity to the joint lives at the beginning of signing the
contract until the death of the last survivor, and that the annuity payment amount
at year k, (k ≥ 1),  is Ak . Here, if the last annuitant passes away at k-th year,
then a total amount ki=1 Ai of annuity payments has been paid. The increasing
or decreasing annuity is a special case of the variable payment annuity with
Ak ≡ A0 +d ·k (k = 1, 2, . . .). At the beginning of k-th period, the annuity payment
is A0 + d · k, as long as at least one of the annuitants is alive. In the following, we
call A0 the basic annuity, and d the annuity increment. The level annuity is a special
case of the joint and γ annuity with γ = 1, and is also a special case of the variable
payment annuity with Ak being the same constant for all k ≥ 1.
Proposition 3 Assume that (a) h(t)d(t) (t ≥ 0) is independent of T2 , (b) r(t) is also
independent of T2 , and (c) The pledged property is sold at time T2 + t0 . Then:
(1) For the variable payment annuity, the annuity payments Ak (k = 1, 2, . . .)
satisfy the following valuation equation
 +∞ +∞

G(x + t0 )D(x + t0 )fT (x)dx = Ak+1 D(k)P (T2 ≥ k), (3.11)
0 k=0

where

P (T2 ≥ k) = 1 − Fc (k, k), (3.12)

and D(k) is given by Eq. (3.7).


(2) For the increasing (decreasing) annuity, A0 and d are determined by the
simultaneous equations

-−d ·F
G -3 - − A0 · F
G -2
A0 = , d= , (3.13)
-2
F -3
F

where
+∞
 +∞

-2 =
F D(k)P (T2 ≥ k), -3 =
F kD(k)P (T2 ≥ k), (3.14)
k=0 k=0

- and P (T2 ≥ k) are, respectively, defined by Eqs. (3.7), (3.8), and


and D(k), G
(3.12).
Valuation of Reverse Mortgage 381

(3) For the level annuity, the fixed annuity amount A∗ is given by

-
G
A∗ = , (3.15)
-2
F

-2 is given by Eq. (3.14).


where F
Proof We omit the proof of Proposition 3 as it parallels that of Proposition 2.

4 Effect of Parameters on the Annuity

This and the following section show how the parameters associated with the house
price, interest rate, and the delay duration in selling the pledged house would affect
the various annuity payments.

4.1 Monotonicity Subject to the Parameters of House Price

For our further analysis, we assume that the average rate of return μh (t) ≡ μh of
the house price is deterministic. The next Proposition analyzes the monotonicity of
the annuity payment, lump sum, and annuity payment factors w.r.t the parameters
related with the house price model, including the average return rate μh , the
volatility σh , the initial house price h0 , the correlation coefficient between the
Brownian motion driving the house price and those driving the interest rate ρhr , and
the delay time in selling the pledged house t0 . For the descriptions of the parameters
μh , σh , ρhr , and h0 connected to the house price, we refer to Sect. 2.
Proposition 4 Predicated on the parameters μh , σh , ρhr , and h0 of the house
price, we have the following properties:
4.1. Parameter μh : (a) The annuity payment factors F -i (i = 1, 2, 3) are indepen-
-
dent of μh ; (b) The lump sum G is an increasing function of μh ; and (c) The
A, A0 , d and A∗ are all increasing functions of μh .
4.2. Parameter σh : (a) The annuity payment factors F -i (i = 1, 2, 3) are indepen-
dent of the volatility σh of the house price. (b) If ρhr > 0, σr > 0, and αr = 0,
then the lump sum G - is a decreasing function of σh . (c) If ρhr < 0, σr > 0,
and αr = 0, then the lump sum G - is an increasing function of σh . (d) If
ρhr > 0, σr > 0, and αr = 0, then A, A0 , d and A∗ all are decreasing
functions of σh . (e) If ρhr < 0, σr > 0, and αr = 0, then A, A0 , d, and A∗ all
are increasing functions of σh .
4.3. Parameter ρhr : (a) The annuity payment factors F -i , i = 1, 2, 3, are
independent of ρhr . (b) If σh > 0, σr > 0, and αr = 0, then the lump sum G -
382 D. Kannan and L. Ma

is a decreasing function of ρhr . (c) If σh > 0, σr > 0, and αr = 0, then A, A0 ,


d, and A∗ are all decreasing functions of ρhr .
4.4. Parameter h0 : (a) The annuity payment factors F -i , i = 1, 2, 3, are indepen-
dent of the initial house price h0 . (b) The lump sum G- is an increasing function

of h0 . (c) The A, A0 , d, and A are all increasing functions of h0 .
Proof The main steps of the proof are moved to the Appendix.
Proposition 5 With respect to the delay time t0 , between acquiring the house and
- and F
selling the house, the factors A, A0 , d, A∗ , G, -i (i = 1, 2, 3) have the
following properties:
-i (i = 1, 2, 3) do not depend on t0 .
(a) The annuity payment factors F
(b) Now set
$ %
σh σr ρhr 2 2σr2 (r0 − μh )
$ := μr − r0 + + , (4.1)
αr αr2

αr2 σr2 σh σr ρhr √


z1 := − 2
μ r − r0 − 2
+ + $ , (4.2)
σr αr αr

and

αr2 σ2 σh σr ρhr √
z2 := − 2
μr − r0 − r2 + − $ . (4.3)
σr αr αr

(b-1) If any one of the following conditions is satisfied

$ ≤ 0,
$ ≥ 0, αr > 0, z1 ≥ 1,
$ ≥ 0, αr > 0, z2 ≤ 0,

- and the quantities A, A0 , d, and A∗ are all increasing


then the lump sum G,
functions of t0 .
(b-2) If

$ ≥ 0, αr > 0, z1 ≤ 0, z2 ≥ 1, (4.4)

holds, then the lump sum G - and the quantities A, A0 , d, and A∗ are all
decreasing functions of t0 .
Proof A summary of proof is delegated to the Appendix.
Valuation of Reverse Mortgage 383

4.2 Monotonicity Subject to Parameters of Interest Rate

The following Proposition 6 analyzes how the annuity payment, lump sum payment,
and the annuity payment factors are affected by the parameters r0 , μr ,, σr involved
in the interest rate mode.
Proposition 6 With respect to the parameters of the interest rate model, the factors
- F
A, A0 , d, A∗ , G, -i (i = 1, 2, 3) have the following properties:
- F
6.1. Initial Interest Rate r0 : If αr = 0, then G, -i , (i = 1, 2, 3), are decreasing
functions of r0 .
6.2. Mean Reversion Level μr : If αr > 0, then G, - F-i , (i = 1, 2, 3), are decreasing
functions of μr . If the opposite case αr < 0 holds, then G,- F -i , (i = 1, 2, 3),
are increasing functions of μr .
6.3. Volatility σr :
(a) If αr > 0, σh > 0 and ρhr ≥ 0, then G - is a decreasing function of σr in
the interval σr ∈ (0, σh ρhr αr ]. If αr > 0, σh > 0, and ρhr ≤ 0, then G - is
an increasing functions of σr .
(b) In case of αr = 0, σr > 0, then F -i , (i = 1, 2, 3), are increasing functions
of σr .
(c) If αr > 0, σh > 0, and ρhr ≥ 0, then A, A0 , d, and A∗ are decreasing
functions of σr in the interval σr ∈ (0, σh ρhr αr ].
Proof See Appendix.

5 Numerical Experiment

In this section, we illustrate the impact of risks due to the house price, the
interest rate, and the longevity on the annuity payment on the valuation of reverse
mortgage. The following Table 1 provides, as the standard case, the parametric
values involved in the models of house price, interest rate, and lifetime. The values
of the parameters (m1 , m2 , σ1 , σ2 , α) come from the bivariate distribution function
of the joint lifetimes (see Frees et al. [1]). For illustration purpose, we assume
that the initial age of the male is 2 years greater than that of the female, that is,
x0 = y0 + 2.

Table 1 Parameters of the standard case

P ara μh σh ρhr h0 t0 r0 μr σr
V alue 0.04 0.08 0.3 $100 0 0.04 0.06 0.01
P ara αr y0 m1 m2 σ1 σ2 α γ
V alue 0.5 x0 − 2 85.82 89.40 9.98 8.12 −3.367 0.5
384 D. Kannan and L. Ma

5.1 Effect of House Price on Annuity Values

The effect of the parameters of the house price and initial age on the joint annuity
and γ annuity, keeping other parametric values fixed. Table 2 supports the following
analysis.
(a) The average return μh of house price: (1) When the initial age is fixed,
the annuity increases significantly with the increase of the average return rate
of house price μh ; this agrees with the theory established by Proposition 4.
This is reasonable because the higher average return rate of house price implies
the higher average gains that can be obtained by the lender when selling the
mortgaged property in future. With the fair valuation principle, the lender is
bound to pay enhanced annuities to the annuitants. (2) Compared to the applicant
with higher initial age, the mean return has a stronger impact on the annuity
of the annuitant with lower initial age. As the initial age increases, the annuity
under different average return rates is stabilizing. While the average return rate
of house price μh remains unchanged, the annuity increases with the increase of
male initial age x0 .
(b) The volatility σh in house price: (1) When the initial age is fixed, the annuity
decreases with the increase of σh , (as supported by Proposition 4). This also is
reasonable. After all, higher the volatility of house price, greater the market risk.
In order to avoid the higher market risk, the lender will have to reduce the amount
of annuity. (2) As the volatility of house price σh remains unchanged, the annuity
increases with the increase of the initial age; that is, the older applicant will get
better annuity.
(c) The correlation coefficient ρhr between Brown motions: Here, the parameter
ρhr denotes the correlation coefficient between the Brownian motion driving
house price and that driving interest rate. (1) As the initial age is fixed, the
annuity decreases with the increase of the correlation coefficient ρhr , as proved
in Proposition 4. When the Brownian motion driving the house price and the
Brownian motion driving the interest rate are completely negatively correlated
(ρhr = −1), the annuity reaches the maximum. If they are completely positively
correlated (ρhr = 1), the annuity reaches the minimum. In addition, the annuity
values under the different correlation coefficients are very close, which implies
that the influence of the correlation coefficient ρhr on the annuity is very weak.
(2) Compared with the older applicants, the annuity of the younger applicant is
more susceptible to the correlation coefficient. When the correlation coefficient
ρhr is fixed, the annuity increases with the increase of the initial age, that is, the
older applicant will receive a larger annuity.
(d) The initial house price h0 : (1) As the initial age is fixed, the annuity increases
obviously with the increase of initial house price. The higher initial house price
implies that the lender reaps greater benefits while selling the mortgaged property
in the future. With the fair valuation, the lender will pay better annuity to the
borrower. As is clear from Table 2 that these annuity values get closer to each
other in the case of lower initial age, and while the initial age increases these
Table 2 Effect of house price on annuity values

x0 50 55 60 65 70 75 80 85 90 95 100
μh = 0.01 1.072 1.429 1.913 2.578 3.497 4.793 6.684 9.586 14.333 22.413 35.177
μh = 0.025 1.833 2.281 2.865 3.632 4.659 6.067 8.072 11.087 15.933 24.065 36.790
μh = 0.04 3.193 3.708 4.356 5.187 6.276 7.745 9.809 12.877 17.757 25.876 38.505
μh = 0.055 5.658 6.124 6.722 7.504 8.545 9.969 11.993 15.02 19.844 27.866 40.329
Valuation of Reverse Mortgage

μh = 0.07 10.191 10.271 10.52 10.989 11.755 12.939 14.753 17.594 22.235 30.055 42.273
σh = 0.001 3.247 3.762 4.411 5.242 6.331 7.798 9.861 12.927 17.803 25.916 38.535
σh = 0.1 3.179 3.694 4.342 5.173 6.262 7.731 9.796 12.865 17.746 25.866 38.497
σh = 0.2 3.112 3.626 4.274 5.105 6.194 7.664 9.73 12.802 17.687 25.816 38.459
σh = 0.3 3.046 3.559 4.207 5.038 6.127 7.597 9.665 12.739 17.629 25.767 38.42
σh = 0.4 2.982 3.494 4.141 4.971 6.061 7.531 9.6 12.677 17.572 25.717 38.382
ρhr = −1 3.438 3.955 4.603 5.433 6.519 7.984 10.041 13.099 17.961 26.050 38.638
ρhr = −0.5 3.342 3.858 4.506 5.337 6.425 7.891 9.951 13.013 17.882 25.983 38.587
ρhr = 0 3.248 3.763 4.412 5.243 6.331 7.799 9.862 12.928 17.804 25.916 38.535
ρhr = 0.5 3.156 3.671 4.319 5.151 6.240 7.709 9.774 12.844 17.726 25.850 38.484
ρhr = 1 3.068 3.582 4.229 5.060 6.150 7.619 9.686 12.760 17.649 25.783 38.433
h0 = 100 3.193 3.708 4.356 5.187 6.276 7.745 9.809 12.877 17.757 25.876 38.505
h0 = 200 6.385 7.415 8.712 10.375 12.552 15.489 19.618 25.755 35.514 51.752 77.009
h0 = 300 9.578 11.123 13.068 15.562 18.828 23.234 29.426 38.632 53.272 77.628 115.514
h0 = 400 12.771 14.830 17.424 20.749 25.105 30.979 39.235 51.509 71.029 103.505 154.019
h0 = 500 15.963 18.538 21.780 25.936 31.381 38.723 49.044 64.387 88.786 129.381 192.523
385
386 D. Kannan and L. Ma

annuity values gradually diverge. It means that the annuity for the older applicant
is more affected by the initial house price than that of the younger applicant. (2)
When the initial house price is fixed, the annuity increases with the increase of
initial age; that is, the older applicant will be paid higher annuities every year as
other factors, except for the initial age, are same.

5.2 Effect of Interest Rate on Annuity Values

Table 3 portrays the following analysis.


(a) The initial interest rate r0 : Keeping the initial age fixed, the annuity decreases
slightly with the increase of the initial interest rate r0 , that is, the higher the initial
interest rate the lower the annuity payment. The initial interest rate r0 has a less
influence on the annuity of younger applicants than those of older applicants. In
general, the initial interest rate r0 weakly affects the annuity payments. When the
initial interest rate is fixed, the annuity increases with the increase of the initial
age.
(b) The average reversion level μr of interest rate: (1) Fixing the initial age, the
annuity decreases with the increase of μr . The average reversion level μr impacts
more the annuity of younger borrowers than that of the older. Generally, μr has
a significant effect on the annuity. (2) With fixed average reversion level μr , the
annuity increases with the increase of initial age.
(c) The volatility σr of interest rate: The volatility σr of interest rate in Table 2
takes five values: 0.001, 0.01, 0.02, 0.03, 0.04. The corresponding annuity values
with different σr almost coincide with each other under fixed initial male age.
This indicates that the volatility of interest rate has weak effect on the annuity,
while the volatility of interest rate is at a low level. It is known from the original
data that: while the initial age kept fixed, the annuity decreases slightly, with the
increase of volatility rate σr in case that σr ≤ σh ρhr αr = 0.012 (it is consistent
with Proposition 6); and the annuity increases slightly with the increase of
volatility rate σr in the case of σr ≥ σh ρhr αr = 0.012. Proposition 6 shows that
the annuity amount decreases with increase of σr whenever σr ≤ σh ρhr αr . It
implies that the valuation models can be used to determine the annuity payments
as long as the volatility of interest rate σr can be controlled by the quantity
σh ρhr αr (irrespective of the volatility rate).
(d) The reversion speed αr of interest rate: As the initial age is fixed, the annuity
decreases slowly with the increase of the reversion speed αr . While the reversion
speed of interest rate is more than 0.75, the annuity is basically stable. The αr
has a greater impact on the annuity of younger applicants than that of older
applicants. While αr remains unchanged, the annuity increases with the increase
of the initial age.
Table 3 Effect of interest rate on annuity values

x0 50 55 60 65 70 75 80 85 90 95 100
r0 = 0.01 3.222 3.744 4.402 5.246 6.354 7.851 9.959 13.101 18.104 26.418 39.304
r0 = 0.04 3.193 3.708 4.356 5.187 6.276 7.745 9.809 12.877 17.757 25.876 38.505
r0 = 0.07 3.162 3.670 4.310 5.128 6.198 7.638 9.657 12.652 17.410 25.337 37.713
r0 = 0.1 3.131 3.633 4.262 5.067 6.118 7.529 9.503 12.426 17.064 24.801 36.931
Valuation of Reverse Mortgage

r0 = 0.13 3.100 3.594 4.214 5.006 6.037 7.419 9.349 12.199 16.717 24.268 36.156
μr = 0.02 8.821 8.894 9.139 9.602 10.355 11.515 13.285 16.045 20.550 28.178 40.236
μr = 0.04 5.346 5.766 6.319 7.056 8.052 9.428 11.396 14.354 19.083 26.987 39.351
μr = 0.06 3.193 3.708 4.356 5.187 6.276 7.745 9.809 12.877 17.757 25.876 38.505
μr = 0.08 1.899 2.384 3.011 3.831 4.917 6.393 8.477 11.588 16.557 24.839 37.695
μr = 0.1 1.136 1.544 2.099 2.852 3.881 5.309 7.360 10.462 15.470 23.870 36.919
σr = 0.001 3.226 3.741 4.390 5.222 6.311 7.779 9.843 12.910 17.788 25.903 38.526
σr = 0.01 3.193 3.708 4.356 5.187 6.276 7.745 9.809 12.877 17.757 25.876 38.505
σr = 0.02 3.187 3.701 4.349 5.179 6.267 7.734 9.797 12.865 17.744 25.863 38.492
σr = 0.03 3.215 3.727 4.373 5.201 6.287 7.753 9.814 12.879 17.754 25.867 38.491
σr = 0.04 3.276 3.786 4.430 5.256 6.339 7.801 9.858 12.918 17.786 25.888 38.502
αr = 0.05 4.426 4.897 5.504 6.299 7.358 8.804 10.848 13.892 18.719 26.728 39.186
αr = 0.15 3.360 3.895 4.568 5.430 6.557 8.073 10.191 13.318 18.244 26.369 38.942
αr = 0.35 3.204 3.722 4.375 5.213 6.310 7.791 9.872 12.963 17.869 26.009 38.641
αr = 0.55 3.192 3.706 4.353 5.184 6.271 7.737 9.797 12.860 17.733 25.845 38.470
αr = 0.75 3.191 3.704 4.350 5.177 6.260 7.720 9.771 12.820 17.671 25.757 38.364
387
388 D. Kannan and L. Ma

5.3 Effect of Joint Lifetime on Annuity Values

In this subsection we discuss the impact on annuity value by joint lifetime. Table 4
shows that:
(a) The modal value m1 of male lifetime: Since the parameters m1 and m2 in
the respective Gompertz distributions have the same function, we consider only
the parameter m1 . (1) When the initial age is fixed, the annuity decreases with
the increase of m1 . The annuity for the older applicants is more sensitive to the
change of m1 than that for the younger applicants. (2) When the modal value m1
is fixed, the annuity increases with the increase of the initial age, that is, older
the applicant is, higher the annuity he will receive. (3) Smaller the parameter m1
becomes, greater the impact on annuity the parameter m1 will exert.
(b) The dispersion coefficient σ1 of male lifetime: We shall treat only the
parameter σ1 as σ2 shares the same function with σ1 . With fixed initial age, the
annuity shows two different trends with the change of σ1 : (1) When the initial
age is at a lower level (say, x0 = 50, 55, 60), the annuity increases first and then
decreases with the increase of σ1 . (2) When the initial age is at a higher level (say,
x0 = 70, 75, . . . , 100), the annuity decreases with the increase of σ1 . The annuity
of an older applicant is more strongly affected by σ1 . As σ1 remains unchanged,
the annuity increases with the increase of initial age x0 .
(c) Parameter α, the dependence between male and female lifetime: As the
initial age remains fixed, the change of annuity shows three different trends:
(1) when the initial age is at a lower level (say x0 = 50, 55, 60), the annuity
decreases with the increase of α; (2) when the initial age is at the middle level
(say x0 = 65), the annuity decreases first and then increases with the increase
of α; (3) as the initial age is at a higher level (say x0 = 70, 75, . . . , 100), the
annuity increases with the increase of α. (4) In general, the impact of α on older
applicants’ annuities is significantly stronger than that for young applicants’
annuities. As α is fixed, the annuity increases with the increase of the applicant
age, that is, the older the applicants are, the greater annuity they will be paid.
Keeping the initial age y0 (y0 = 50, 55, . . . , 100) of the female annuitant fixed,
the annuity value increases with the increase of the male applicant’s initial age x0 ,
(50 ≤ x0 ≤ 100). The larger the initial age of female is, the stronger effect it will
exert on the annuity amount.
When the initial age x0 (x0 = 50, 55, . . . , 100) of male applicant is fixed, the
annuity amount increases with the increase of the female’s initial age y0 , (50 ≤ y0 ≤
100). The larger the initial age of male is, the stronger influence it will produce on
the annuity value.
We note that: the annuity value is approximately symmetrical, though not
completely symmetrical, in the initial x0 and y0 . When the age difference between
the male and female annuitants is the same, different influences are made on the
annuity value in the case with the initial age y0 of female is greater than that of male
and vice versa. Especially, when y0 − x0 = d (d > 0) the annuity value is greater
Table 4 Effect of joint lifetime on annuity values

x0 50 55 60 65 70 75 80 85 90 95 100
m1 = 69 3.69 4.337 5.134 6.123 7.39 9.129 11.757 16.083 23.491 35.692 52.474
m1 = 79 3.401 3.976 4.701 5.626 6.832 8.45 10.743 14.287 20.304 30.641 45.906
Valuation of Reverse Mortgage

m1 = 89 3.081 3.567 4.176 4.955 5.972 7.339 9.245 12.039 16.368 23.391 34.514
m1 = 99 2.658 3.05 3.534 4.142 4.918 5.929 7.269 9.089 11.637 15.412 21.528
m1 = 109 2.205 2.515 2.891 3.356 3.935 4.663 5.575 6.714 8.141 10.027 12.859
σ1 = 6 3.138 3.66 4.342 5.259 6.529 8.339 11.001 15.037 21.627 33.609 51.684
σ1 = 8 3.176 3.699 4.37 5.249 6.426 8.045 10.359 13.846 19.529 29.443 45.100
σ1 = 10 3.193 3.708 4.356 5.186 6.275 7.742 9.804 12.868 17.741 25.843 38.443
σ1 = 12 3.191 3.691 4.312 5.095 6.106 7.451 9.316 12.041 16.239 22.884 32.886
σ1 = 14 3.175 3.658 4.251 4.991 5.935 7.179 8.882 11.327 14.974 20.478 28.458
α = −5 3.223 3.736 4.378 5.195 6.260 7.690 9.689 12.640 17.335 25.322 38.180
α = −4 3.205 3.720 4.365 5.190 6.267 7.717 9.751 12.768 17.573 25.645 38.376
α = −3 3.185 3.700 4.350 5.186 6.283 7.765 9.851 12.953 17.879 26.021 38.581
α = −2 3.160 3.677 4.334 5.184 6.310 7.842 10.006 13.224 18.286 26.464 38.797
α = −1 3.134 3.652 4.316 5.185 6.350 7.954 10.236 13.615 18.832 26.987 39.023
389
390 D. Kannan and L. Ma

than that when x0 − y0 = d. To validate this claim, we present the annuity values
with the age difference of 5, 10, and 15 years in Table 5.

5.4 Effect of Other Parameters on Annuity Values

(a) The delay time t0 in selling the pledged house: Table 6 shows that: When the
initial age is fixed, the annuity slowly decreases with the increase of the delay
time of selling the pledged house. The impact of delay time on the annuity is
complex. While the other parameters in the valuation model change, the annuity
may also increase with the increase of delay time. The delay time has a weak
impact on the annuity. However, the effect is stronger on the older borrowers
than on the younger borrowers.
When the delay time of selling house remains unchanged, the annuity
increases with the increase of the initial age. Facing the different delay time,
the applicant with different initial age may get approximately the same annuity.
(b) Parameter γ , the proportion coefficient of joint annuity: In order to evaluate
the effect of the γ on the annuity, Table 6 presents the joint and γ annuity values.
It shows that: As the initial age is fixed, the annuity payment decreases slowly
with the increase of γ . The effect of γ on the annuity for older applicants is
stronger than that of younger applicants. As γ remains unchanged, the annuity
increases with the increase of initial age.

6 Appendix

Proof of Proposition 1 It is easy to see that Y (t) follows the normal distribution
with the mean 0 and variance

σr2 σr2
σy2 (t) = t + 1 − (2 − e−αr t )2 . (6.1)
αr2 2αr3

Noting that Cov(dWh (t), dWr (t)) = ρhr dt, the covariance between Wh (t) and
Y (t) is given by
$ %
1 1 −αr t 1
Cov(Wh (t), Y (t)) = σr ρhr t+ e − ,
αr αr αr

and hence the correlation coefficient ρ(t) between Wh (t) and Y (t) is
$ %
σr ρhr 1 1
ρ(t) = √ t + e−αr t − . (6.2)
αr σy (t) t αr αr
Table 5 Effect of joint lifetime on annuity values

(x0 , y0 ) (55,50) (60,55) (65,60) (70,65) (75,70) (80,75) (85,80) (90,85) (95,90) (100,95)
Valuation of Reverse Mortgage

Annuity 3.490 4.084 4.837 5.811 7.099 8.863 11.410 15.374 22.032 33.089
(x0 , y0 ) (50,55) (55,60) (60,65) (65,70) (70,75) (75,80) (80,85) (85,90) (90,95) (95,100)
Annuity 3.590 4.206 4.995 6.028 7.418 9.358 12.184 16.502 23.417 34.329
(x0 , y0 ) (60,50) (65,55) (70,60) (75,65) (80,70) (85,75) (90,80) (95,85) (100,90)
Annuity 3.653 4.293 5.103 6.142 7.505 9.369 12.110 16.572 24.357
(x0 , y0 ) (50,60) (55,65) (60,70) (65,75) (70,80) (75,85) (80,90) (85,95) (90,100)
Annuity 3.846 4.537 5.430 6.606 8.193 10.399 13.587 18.530 26.675
(x0 , y0 ) (65,50) (70,55) (75,60) (80,65) (85,70) (90,75) (95,80) (100,85)
Annuity 3.811 4.492 5.342 6.417 7.816 9.756 12.723 17.726
(x0 , y0 ) (50,65) (55,70) (60,75) (65,80) (70,85) (75,90) (80,95) (85,100)
Annuity 4.086 4.852 5.837 7.121 8.816 11.119 14.521 20.137
391
392

Table 6 Effect of other parameters on annuity values

x0 50 55 60 65 70 75 80 85 90 95 100
t0 = 0 3.193 3.708 4.356 5.187 6.276 7.745 9.809 12.877 17.757 25.876 38.505
t0 = 3 3.004 3.489 4.099 4.881 5.907 7.290 9.236 12.133 16.752 24.475 36.573
t0 = 6 2.827 3.283 3.857 4.593 5.558 6.860 8.692 11.420 15.772 23.057 34.487
t0 = 9 2.660 3.089 3.629 4.322 5.230 6.455 8.179 10.746 14.843 21.702 32.466
t0 = 12 2.503 2.907 3.415 4.067 4.922 6.074 7.696 10.112 13.967 20.422 30.553
γ = 1/2 3.193 3.708 4.356 5.187 6.276 0.745 9.809 12.877 17.757 25.876 38.505
γ = 2/3 3.129 3.615 4.221 4.990 5.987 7.314 9.157 11.868 16.162 23.349 34.755
γ = 3/4 3.099 3.571 4.157 4.897 5.851 7.116 8.862 11.421 15.468 22.262 33.141
γ = 4/5 3.080 3.545 4.119 4.843 5.773 7.002 8.694 11.168 15.079 21.657 32.242
γ =1 3.010 3.444 3.975 4.638 5.481 6.582 8.082 10.260 13.701 19.534 29.088
D. Kannan and L. Ma
Valuation of Reverse Mortgage 393

Since the joint distribution of (Wh (t), Y (t)) follows the two- dimensional normal
distribution with the correlation coefficient ρ(t) given by Eq. (6.2), we have with
Eq. (6.1)
 +∞  +∞
E {exp [σh Wh (t) − Y (t)]} = exp(σh x − y)f (x, y)dxdy,
−∞ −∞

where
 5
1 1
f (x, y) = A exp − Sxy ,
2πσy (t) t (1 − ρ 2 (t)) 2(1 − ρ 2 (t))

and
$ %2 $ %2
x x y y
Sxy = √ − 2ρ(t) √ + .
t t σy (t) σy (t)

Under the transformation u = x



t
and v = y − ρ(t)σy (t)u, we obtain

E {exp [σh Wh (t) − Y (t)]}


 +∞  +∞  √ 
= exp σh t − ρ(t)σy (t) u − v g(u, v)dudv
−∞ −∞
1 2 √ 1
= exp σh t − ρ(t)σy (t)σh t + σy2 (t) , (6.3)
2 2

where
/  4
1 1 2 v2
g(u, v) = A exp − u + 2 .
2πσy (t) 1 − ρ 2 (t) 2 σy (t)(1 − ρ 2 (t))

Noting {N(t), t ≥ 0} and {Ji , i ≥ 1} are independent, and Ji follows the normal
distribution with mean μJ and variance σJ2 , we obtain
N(t)
E e i=1 Ji
= exp(kh λh t). (6.4)

Setting
 t $ %
1 2
m1 := μh (s)ds − σ + λh kh t,
0 2 h

1  
m2 := μr t + (μr − r0 ) e−αr t − 1 ,
αr
394 D. Kannan and L. Ma

one easily obtains


⎡ ⎤
)
N(t
h(t) = h0 exp ⎣m1 + σh Wh (t) + Ji ⎦ , (6.5)
i=1
 t
r(u)du = m2 + Y (t). (6.6)
0

N(t )
Since σh Wh (t) − Y (t) is independent of i=1 Ji , we have from Eqs. (6.3)–(6.6)
N(t)
E [h(t)d(t)] = h0 em1 −m2 E eσh Wh (t )−Y (t ) E e i=1 Ji
= G(t)D(t),

where G(t) and D(t) is, respectively, defined by Eqs. (3.6) and (3.7). We obtain the
Proposition 1.
Proof of Proposition 2 The lump sum that the applicants can borrow at time t = 0
of signing the reverse mortgage contract is the random quantity h(T2 +t0 )d(T2 +t0 ).
Since h(t)d(t) (t ≥ 0) is independent of T2 ,

- = E [h(T2 + t0 )d(T2 + t0 )]
G
 +∞
= E [h(x + t0 )d(x + t0 )] fT (x)dx
0
 +∞
= G(x + t0 )D(x + t0 )fT (x)dx.
0

From the independence of r(t) and (T1 , T2 ), we have


⎡ ⎤
T
 1 T
 2
E⎣ Ad(k) + γ Ad(k)⎦
k=0 k=T1 +1
+∞ i  ⎡ ⎤
 +∞ 
 +∞ 
j
=E Ad(k)1{T1 =i} + E ⎣ γ Ad(k)1{T1 =i,T2 =j } ⎦
i=0 k=0 i=0 j =i+1 k=i+1

+∞ 
 i
=A E[d(k)]P (T1 = i)
i=0 k=0
+∞ 
 +∞ 
j
+γ A E[d(k)]P (T1  = i, T2  = j )
i=0 j =i+1 k=i+1
Valuation of Reverse Mortgage 395

+∞
 +∞

=A D(i)P (T1 ≥ i) + γ A D(i)P (T1 < i, T2 ≥ i)
i=0 i=0
-1
=A·F

where D(k) is characterized as in (3.7). Recalling that the probability density


function for T2 is given by the Relation (2.8), we get Eq. (3.9). The proof is
complete.
Proof of Proposition 4 From the descriptions of the annuity payment factors
-i (i = 1, 2, 3) (see Relations (3.10) and (3.14)), we note that these two annuity
F
payment factors are independent of μh .
- (see Relation (3.8)),
From the integrand in the definition of G

∂ [G(x + t0 )D(x + t0 )fT (x)]


= (x + t0 )G(x + t0 )D(x + t0 )fT (x).
∂μh

Since G(x + t0 ) > 0, D(x + t0 ) > 0, fT (x) ≥ 0, and x + t0 ≥ 0, the lump sum
- is an increasing function of μh . Furthermore, from Eqs. (3.9), (3.13), and (3.15),
G
we note that A, A0 , d, and A∗ are increasing functions of μh . This obtains Part 4.1.
From Eqs. (3.10) and (3.14), defining the annuity payment factors F -i (i =
1, 2, 3) we see that these annuity payment factors are independent of σh .
Defining

1 1  
g1 (z) := −z + 1 − e−αr z , (6.7)
αr αr

∂ [G(x + t0 )D(x + t0 )fT (x)]


= σr ρhr G(x + t0 )D(x + t0 )fT (x)g1 (x + t0 ),
∂σh
 
where g1 (x +t0 ) = α1r −(x + t0 ) + α1r 1 − e−αr (x+t0) . When αr = 0 and z ≥ 0,
we have g1 (z) ≤ 0. We thus proved Part 4.2.
Since D(x + t0 ) and fT (x) are free from ρhr ,

∂ [G(x + t0 )D(x + t0 )fT (x)]


= σh σr G(x + t0 )D(x + t0 )fT (x)g1 (x + t0 ).
∂ρhr

Noting that g1 (z) ≤ 0 when z ≥ 0, αr = 0, Part 4.3 follows.


Since the Part 4.4 is obvious, we omit its proof.
Proof of Proposition 5 Define

σr2 2
g2 (z) := z + β1 z + β0 , (−∞ < z < +∞),
2αr2
396 D. Kannan and L. Ma

where

σh σr ρhr σ2 σh σr ρhr σ2
β 0 = μh − μr − + r2 , β1 = μr − r0 + − r2 .
αr 2αr αr αr

Now
∂ [G(x + t0 )D(x + t0 )fT (x)]
= G(x + t0 )D(x + t0 )fT (x)g2 (e−αr (x+t0) ),
∂t0

where

σr2 −2αr (x+t0 )


g2 (e−αr (x+t0 ) ) = e + β1 e−αr (x+t0) + β0 .
2αr2

α2
The minimum of g2 (z) is − 2σr2 $ ($ given by Eq. (4.1)). If the condition $ ≤ 0
r
holds, we then have g2 (z) ≥ 0, and thus G - is an increasing function of t0 .
Recall the definitions of z1 and z2 given above by the Relations (4.2) and (4.3),
respectively. Now, if the condition $ ≥ 0 holds, then g2 (zi ) = 0, i = 1, 2.
Moreover, it is obvious that 0 < exp(−αr (x + t0 )) ≤ 1 whenever αr > 0 and
x + t0 ≥ 0. Thus the lump sum G - is a decreasing function of t0 if the Condition
(4.4) holds. One similarly obtains the rest of the properties in Part (b-1).
Proof of Proposition 6 First note that

∂ [G(x + t0 )D(x + t0 )fT (x)] 1


=− 1 − e−αr (x+t0 ) G(x + t0 )D(x + t0 )fT (x),
∂r0 αr

+∞
-1
∂F 1 
=− (1 − e−αr k )D(k)[P (T1 ≥ k) + γ P (T1 < k, T2 ≥ k)],
∂r0 αr
k=1

-2 +∞

∂F 1
=− (1 − e−αr k )D(k)P (T2 ≥ k),
∂r0 αr
k=1
+∞
-3
∂F 1 
=− k(1 − e−αr k )D(k)P (T2 ≥ k).
∂r0 αr
k=1

−αr z )
αr (1 − e ≥ 0, ( αr = 0) and z ≥ 0, we obtain Part 6.1.
1
Since
 
Defining g3 by g3 (z) := −z + α1r 1 − e−αr z . It is obvious that ∂D(x)
∂μr =
D(x)g3 (x), and

∂ [G(x + t0 )D(x + t0 )fT (x)]


= G(x + t0 )D(x + t0 )fT (x)g3 (x + t0 ),
∂μr
Valuation of Reverse Mortgage 397

 
where g3 (x + t0 ) = −(x + t0 ) + α1r 1 − e−αr (x+t0 ) .
Noting g3 (z) ≤ 0 if αr > 0, z ≥ 0, and g3 (z) ≥ 0 if αr < 0, z ≥ 0, we obtain
Part 6.2.
For −∞ < z < +∞, define
$ % $ %
σr −2αr z 2σr σh ρhr −αr z σr σh ρhr
g4 (z) = − 3 e + − e + − z
2αr αr3 αr2 αr2 αr
$ %
σh ρhr 3σr
+ − 3 ,
αr2 2αr
$ % $ %
1 σr 2 2σr σr
g5 (y) = y + σh ρhr − y+ − σh ρhr .
αr αr αr αr

We have g5 (y) has two zero points y1 = 1 − σh ρσhrr αr and y2 = 1 if σh ρhr ≥ 0;


and two zero points y1 = 1 and y2 = 1 − σh ρσhrr αr if σh ρhr ≤ 0.
Now,
$ % $ %
dg4 (z) 1 σr −2αr z 2σr σr
= e + σh ρhr − e−αr z + − σh ρhr = g5 (e−αr z ).
dz αr αr αr αr

Note that 0 < e−αr z ≤ 1 in case of z ∈ [0, +∞), αr > 0. In case of z ∈


[0, +∞), αr > 0, σh > 0 and ρhr ≥ 0, we have g5 (e−αr z ) ≤ 0 in the interval
σr ∈ (0, σh ρhr αr ], then g4 (z) ≤ g4 (0) = 0 in the interval σr ∈ (0, σh ρhr αr ]. In
case of z ∈ [0, +∞), αr > 0, σh > 0 and ρhr ≤ 0, we have g5 (e−αr z ) ≥ 0, then
g4 (z) is an increasing function of z and g4 (z) ≥ g4 (0) = 0.
Next we note
∂ [G(x + t0 )D(x + t0 )fT (x)]
= G(x + t0 )D(x + t0 )fT (x)g4 (x + t0 ).
∂σr
Defining
1
g6 (z) := z + 1 − (2 − e−αr z )2 ,
2αr
we see that
+∞
-1
∂F σr 
= 2 D(k)g6 (k)[P (T1 ≥ k) + γ P (T1 < k, T2 ≥ k)],
∂σr αr
k=1
+∞
-2
∂F σr 
= 2 D(k)g6 (k)P (T2 ≥ k),
∂σr αr
k=1
+∞
-3
∂F σr 
= 2 kD(k)g6 (k)P (T2 ≥ k).
∂σr αr
k=1

Noting that g6 (z) ≥ 0 if z ∈ [0, +∞), we get Part 6.3. This completes the proof.
398 D. Kannan and L. Ma

References

1. Frees, E.W., Carrière, J., Valdez, E.: Annuity valuation with dependent mortality. J. Risk Insur.
63(2), 229–261 (1996)
2. Gompertz, B.: On the nature of the function expressive of the law of human mortality, and on
a new mode of determining the value of life contingencies. Philos. Trans. R. Soc. Lond. 115,
513C583 (1825)
3. Li, J.S.H., Hardy, M.R., Tan, K.S.: On pricing and hedging the no-negative equity guarantee in
equity release mechanisms. J. Risk Insur. 77(2), 499–522 (2010)
4. Ma, L., Kannan, D.: Valuation of reverse mortgage with dependent joint life. Dyn. Syst. Appl.
27, 895 (2018)
5. Ma, L., Zhang, J., Kannan, D.: Fair pricing of reverse mortgage without redemption right. Dyn.
Syst. Appl. 26, 473–498 (2017)
6. Mitchell, O.S., Piggott, J.: Unlocking housing equity in Japan. J. Jpn. Int. Econ. 18, 466–505
(2004)
7. Ohgaki, H.: Economic implication and possible structure for reverse mortgage in Japan. Rits
University, pp. 1–14 (2003)
8. Tsay, J.T., Lin, C.C., Prather, L.J., Buttimer, R.J. Jr.: An approximation approach for valuing
reverse mortgages. J. Hous. Econ. 25, 39–52 (2014)
9. Vasicek, O.: An equilibrium characterization of the term structure. J. Financ. Econ. 5, 177–188
(1977)
Stationary Distribution of Discrete-Time
Finite-Capacity Queue with
Re-sequencing

Rostislav Razumchik and Lusine Meykhanadzhyan

Abstract The discrete-time re-sequencing model, consisting of one high and one
low priority finite-capacity queue and a single server, which serves the low priority
queue if and only if the high priority queue is empty, is being considered. Two
types of customers, regular and re-sequencing, arrive at the system. The arrival and
service processes are geometric, i.e. in each time slot at most one customer of each
type may arrive at the system and at most one customer may be served. A regular
customer upon arrival occupies one place in the high priority queue. An arriving re-
sequencing customer moves one customer from the high priority queue (if it is not
empty) to the low priority queue and itself leaves the system. A regular customer
which sees the high priority queue full and a re-sequenced customer which sees the
low priority queue full, are lost. Using the generating function method the recursive
procedure for the computation of the joint stationary distribution of the number of
customers in the high and in the low priority queues is derived.

Keywords Queueing system · Discrete-time · Finite-capacity · Re-sequencing ·


Negative customers · Generating function

The reported study was funded by RFBR according to the research projects №20-07-00804 and
№19-07-00739.

R. Razumchik ()
Institute of Informatics Problems, FRC CSC RAS, Moscow, Russia
Peoples’ Friendship University of Russia (RUDN University), Moscow, Russia
e-mail: [email protected]; [email protected]
L. Meykhanadzhyan
Financial University Under the Government of the Russian Federation, Moscow, Russia
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive 399
licence to Springer Nature Singapore Pte Ltd. 2020
V. C. Joshua et al. (eds.), Applied Probability and Stochastic Processes,
Infosys Science Foundation Series, https://doi.org/10.1007/978-981-15-5951-8_23
400 R. Razumchik and L. Meykhanadzhyan

1 Introduction

Consideration is given to the discrete-time counterpart of continuous-time re-


sequencing queue considered in [31]. The system consists of two finite-capacity
queues (high and low priority) and a single server, which serves the low priority
queue if and only if the high priority queue is empty. Two types of customers
(regular and re-sequencing) arrive at the system. The arrival and service processes
are geometric,1 i.e. in each time slot at most one customer of each type may arrive
at the system and at most one customer may be served. A regular customer upon
arrival occupies one place in the high priority queue and waits there for service.
A re-sequencing customer upon arrival moves one customer from the high priority
queue (if it is not empty) to the low priority queue and itself leaves the system.
Whenever a re-sequencing arrival sees the high priority queue empty, it leaves the
system having no effect on it. A regular customer which sees the high priority queue
full and a re-sequenced customer which sees the low priority queue full, are lost.
For this queueing system one is interested in the computation of the joint stationary
distribution of the number of customers in the high and in the low priority queues.
Nowadays discrete-time models keep receiving attention from the research
community (see, for example, [4, 6, 10, 11, 20, 26–28, 35]). For some general
overviews on discrete-time queueing models one can refer to [7, 25, 30, 34]. Our
motivation behind the study of the particular queueing problem described in the
previous paragraph is methodological. This can already be seen from the scope
of the problem: the attention is restricted only to the computation of (quantities
related only to) the stationary queues’ size distribution. Since the stationary
probabilities satisfy the (finite) system of linear algebraic equations, numerous
solution methods are available out there by which one can approach the problem and
find the solution. Among those which are particularly suited for applied probability
problems involving Markov chains, one can mention the following. Firstly, matrix
analytic methods, which have been the topic of many papers published during the
last decades (see, for example, [2, 3, 19, 21]). This algorithmic approach allows one
to obtain numerical results for a great variety of models and the model considered
here is not an exception (especially given that its transition probability matrix has
the block tridiagonal form). Secondly, there exist general procedures for finding
stationary distributions of finite irreducible discrete-time Markov chains, which
involve the generalized inverses (see [16, 17]). Finally there is one traditional
method which stands alone2 — the generating function method (GFM) (see, for
example, [7, 8, 12] and many others). In the contrast to the algorithmic methods, the
GFM may lead to explicit and closed-form expressions for the system’s performance
characteristics and some useful probabilistic interpretations.

1 Also sometimes called Bernoulli.


2 Another such method is the compensation approach (see [1]). Yet we are unaware of any use cases
of its application to problems with a finite state space.
Finite-Capacity Queue with Re-sequencing in Discrete Time 401

Fig. 1 The transition


diagram for the discrete-time
Markov chain representing
the considered re-sequencing
queue. The x-axis (y-axis)
denotes the total number of
customer in the high priority
(low priority) queue when the
server is busy. Empty system
state is not shown

In this paper we approach the problem of the determination of the joint stationary
distribution by the GFM. The main reason for that is that we seek to find the
boundaries of tractability in using the GFM for the models with a finite state space.
In this respect the considered discrete-time queueing problem gives rise to the new
Markov chain on the finite two-dimensional grid (see Fig. 1), which to our best
knowledge has not been analyzed before with the GFM. Its most peculiar feature
is that the chain can jump not only to the adjoining states (horizontally, vertically,
and diagonally) but can also skip one state in the west-north direction. From the
system of balance equations for the joint stationary distribution, it can be seen that
the system cannot be solved recursively. This is the starting point for this paper. It
shall be seen in the next sections, that the double generating function of the joint
stationary distribution allows one to deduce the new system of equations, which
permits the recursive solution. The main idea behind the adopted method is the
following. Due to the finite state space, in the expression (which a ratio of two
polynomials) for the generating function all roots (and not only those which are of
absolute value is less than 1) of the denominator are the roots of the numerator
as well. By plugging the roots of the denominator into the numerator and after
collecting the common terms, one obtains two polynomial functions with real
coefficients in a single variable, say v, both of which are equal to zero for any value
of v ∈ [0, 1]. Thus their coefficients must be equal to zero and this produces the
system of equations, which can be used to determine the joint stationary distribution
in a recursive manner.
The rest of the paper is organized as follows. In Sect. 2, the model under the
investigation is described in detail; we look at the case where the maximum queues’
402 R. Razumchik and L. Meykhanadzhyan

sizes are equal. The double probability generating function (PGF) for the joint
stationary distribution is given in Sect. 3. Since it contains many unknowns, it is
used only to derive some useful local balance relations. In Sect. 4 one applies the
method described above to obtain the recursive procedure for the computation of
the joint stationary distribution. Some conclusions are drawn in Sect. 5.

2 The Model

Consider the discrete-time re-sequencing system consisting of two queues, each


of capacity3 r < ∞: one high priority queue and one low priority queue. Two
types of customers (regular and re-sequencing) arrive at the system independently
of each other. Arriving regular customers are enqueued in the high priority queue
and wait there for service. A re-sequencing customer4 moves one customer from
the high priority queue (if it is not empty; otherwise the re-sequencing customer
has no effect on the system) to the low priority queue and itself leaves the system.
There is one server, which serves customers one by one and the service process
is the same for both high and low priority customers. The service policy is non-
preemptive priority,5 i.e. upon a service completion a high priority customer is
picked for service and, in case there are no high priority customers, a low priority
customer is picked.
Since the discrete-time setting is considered, one has to establish the precedence
relations between arrivals, departures, and start of service. The following conven-
tions are used:6 if the service of a customer has been finished in the slot, it leaves the
system, and the server immediately takes the next customer from the high priority
queue or, if the high priority queue is empty, from the low priority queue; then if
a regular customer arrives in the same slot it is placed in the high priority queue if
the server is busy and enters the server otherwise (its servicing begins immediately);
then if a re-sequencing customer arrives in the same slot it moves a customer from
the high priority queue to the low priority queue. If a re-sequencing customer finds
the high priority queue empty, it immediately leaves the system without any effect
on it. Finally, whenever a regular customer arrives and sees busy server and full high
priority queue, it is lost; if a customer, which is being moved from the high priority
queue to the low priority queue, sees that the latter is full, it is lost.

3 If 0 ≤ r ≤ 5, then some of the relations given below have to be left out. In order not to deal with

such special cases, it is further assumed that r ≥ 6.


4 Also known in the literature as negative signal/customer, see [13–15].
5 Since the waiting time characteristics of the regular customers are not studied in this paper (the

waiting time distribution in the case of infinite-capacity queues had been studied in [29]), the
service order in each queue and the re-sequencing order are irrelevant. For certainty one can
consider all three orders to be head-of-queue.
6 In other words, EAS-IA setup (see, for example, [9] and [27, pp. 2–3]) with regular arrivals, start

of service, and departures having precedence over re-sequencing customers.


Finite-Capacity Queue with Re-sequencing in Discrete Time 403

Regular and re-sequencing customers arrive at the system according to a


geometric (Bernoulli) input flow, with the arrival probability in a slot equal to a
and c, respectively. The service times of both high and low priority customers are
assumed to constitute a set of i.i.d. random variables having geometric distribution
with parameter b, which is also the probability of a service completion in a slot.

3 The Double Probability Generating Function

In what follows the notations a = 1 − a, b = 1 − b, c = 1 − c are used


for brevity. Introduce the discrete-time Markov chain (DTMC) by observing the
system at epochs at the end of time slots (i.e., after all possible events in a slot
have occurred). Let Ht be the total number of customers in the high priority queue
plus server at time t and Lt —the total number of customers in the low priority
queue at time t. Whenever the server is idle (i.e., Ht = 0), Lt is not defined. The
stochastic process {(Ht , Lt ) : t = 0, 1, 2, . . . } is the irreducible aperiodic DTMC. It
is convenient to define its state space as (0)∪{(k, m), 0 ≤ k ≤ r, 0 ≤ m ≤ r}, where
the state (0) corresponds to the empty system; the state (k, m) corresponds to the
state with the busy server, k customers in the high priority queue and m customers
in the low priority queue. The transition diagram7 is given in Fig. 1.
Define the joint stationary distribution

π0 = lim P(Ht = 0), πkm = lim P(Ht = k + 1; Lt = m), 0 ≤ k, m ≤ r,


t →∞ t →∞

which exists for all possible values of a, b, and c, with its associated double PGF


r 
r
(u, v) = uk v m πkm , 0 ≤ u, v ≤ 1.
k=0 m=0

From the system of balance equations8 one obtains the following expression for the
double PGF (u, v):

B(u, v) (u, v) = A(u, v), (1)

where

B(u, v) = u3 + u2 p(v) + uq(v) + r(v),

7 The state (0) is not included in the diagram. From the system’s description it can be seen that the

only possible transition to (from) the state (0) is from (to) the state (0, 0).
8 The system consists of r 2 + 1 equations and due to the lack of space is not presented.
404 R. Razumchik and L. Meykhanadzhyan

* * ++
abc abc + abc abc 
r
A(u, v) = (v − u ) 2 2
+ (uv − u ) 2
+ v m π0m
abcv abc abcv m=0
* +
abcv r
abc r
+(u − 1) u + ur+1 v m πrm + (uv − u2 ) v m π1m
abc m=0
abc m=0
* +
abc 2 abc + abc abc r
abc
+(v − 1) u + u+ vr uk πkr + (v − 1)u2 v r π0r
abc abc abc k=1
abc

abc ab abc
+(u − 1)(1 − v)ur+1 v r πrr + (1 − v)u2 π00 + (1 − v)uv r π1r ,
abc abcv abc
and the functions p(v) = p0 + vp1 , q(v) = q0 + vq1 , and r(v) = vr1 stand for

ab+ab+abc+abc abc abc abc+abc abc


p(v) = − + v, q(v) = + v, r(v) = v.
abc abc abc abc abc
The given expression for (u, v) contains many unknowns but already in this form
allows one to obtain some useful local balance relations. One set of such relations
 distribution {π·,m , 0 ≤ m ≤ r} of the low priority queue
concerns the (marginal)
size, where π·,m = rk=0 πkm . By substituting u = 1 in (1) and using the method
of equating the coefficients, one gets

ab
π·,m = π0,m+1 + abπ1m + (1 − ab)π0m , 1 ≤ m ≤ r − 1,
c
  ab
π·,0 = b + ab π00 + π01 + abπ10 .
c
Another set of useful relations is obtained if one puts v = 1 in (1) and then
equates the coefficients of uk in the left-hand side and in the right-hand side of (1).
This gives
 c
πr−1,· + 1 + p(1) − πr,· = 0,
c
 c
πr−2,· + p(1)πr−1,· + q(1) + πr,· = 0,
c
πk,· + p(1)πk+1,· + q(1)πk+2,· + r(1)πk+3,· = 0, 0 ≤ k ≤ r − 3,
r
where the notation πk,· = m=0 πkm is used. Thus πk,· = yk πr,· , 0 ≤ k ≤ r,
for positive constants yk , which are uniquely determined from this system and
which depend only on the values of a,b, and c. Since aπ0 = abπ00 and
the normalization
condition implies that rk=0 πk,· = 1 − π0 , we have πr,· =
r
(a − abπ00)/(a k=0 yk ). Let xm be such positive constants (depending only on
Finite-Capacity Queue with Re-sequencing in Discrete Time 405

the values of a, b, and c), that πrm = xm π00 , 0 ≤ m ≤ r. Then the latter relation
between πr,· and π00 yields the closed-form expression for the probability π00 :

a
π00 = . (2)

r r
a xm yk + ab
m=0 k=0

In (2) xm , 0 ≤ m ≤ r, are the only unknowns and in Sect. 4 it is shown how these
quantities can be determined solely from the double PGF (u, v) given by (1). As a
side result of manipulations with the PGF (u, v), one obtains such new9 relations,
which allow the recursive computation of the whole joint distribution πkm .

4 Solution for the Joint Stationary Distribution

Setting B(u, v) = 0 produces a cubic equation, which always has three roots,
further denoted by u1 (v), u2 (v), and u3 (v). For the sake of brevity henceforth, when
referred to the roots ui (v), the argument v is omitted. The roots ui may be real or
complex numbers; in either case the following derivations remain true. Define the
functions

u2 (u3 − u1 )(uk2 − uk1 ) − u3 (u2 − u1 )(uk3 − uk1 )


+k (v) = , k ≥ 1, (3)
(u2 − u1 )(u3 − u1 )(u3 − u2 )

with +k (v) ≡ 0 for k ≤ 0. The function +k (v) is the symmetric function of the roots
ui and thus10 it can be expressed directly in terms of the coefficients p(v), q(v), and
r(v) of the equation B(u, v) = 0, i.e. +k (v) is a polynomial function. In order to find
its degree and its coefficients it is sufficient to notice that +1 (v) = −1 = λ10 and for
k ≥ 2 the functions +k (v) satisfy the recurrence relation +k (v) = −p(v)+k−1 (v)−
q(v)+k−2 (v) − r(v)+k−3 (v). Thus the degree of +k (v) is k − 1. Substitution of
+k (v) = k−1 i
i=0 v λki into the previous relation yields the recursive procedure

λk0 = −q0 λk−2,0 −p0 λk−1,0 ,


λki = −r1 λk−3,i−1 −q0 λk−2,i −p0 λk−1,i −q1λk−2,i−1 −p1 λk−1,i−1 , 1 ≤ i ≤ k −3,
λk,k−2 = −p0 λk−1,k−2 −q1λk−2,k−3 −p1 λk−1,k−3 ,
λk,k−1 = −p1 λk−1,k−2 ,

which can be used to compute the coefficients of the polynomial function +k (v) for
any k ≥ 2.

9 “New” means that these relations cannot be seen directly from the system of balance equations.
10 See, for example, [22, Chapter IX].
406 R. Razumchik and L. Meykhanadzhyan

Let us look closer at the double PGF (u, v) given by (1). It is the ratio of two
polynomial functions and for each value of v (u, v) is the continuous function
of u. Thus, since the left-hand side of (1) vanishes at points (u1 (v), v), (u2 (v), v),
and (u3 (v), v), the right-hand side must vanish at these points too. This observation
leads to the system of three equations:

⎪ A(u1 (v), v) = 0, (4)

A(u2 (v), v) = 0, (5)


A(u3 (v), v) = 0. (6)

By expressing the term (1 − v)abπ00/(abcv) from (4) and substituting it firstly


into 
(5) and then into (6), one gets two new equations sharing the same term
abc rm=0 v m+1 π1m /(abc), which does not depend on ui . Cancelation of this term
yields the following equation:11


r 
r 
r
v m π0m − "(v) v m πrm − (1 − v)v r -k (v)πkr
m=0 m=0 k=1

+(1 − v)v r (+r (v) − +r−1 (v)) p1 πrr = 0, (7)

where the functions "(v) and -k (v) are defined by

"(v) = +r (v) − +r+1 (v) + p1 v+r−1 (v) − p1 v+r (v),


-k (v) = p1 +k (v) + q1 +k−1 (v) + r1 +k−2 (v).

Since +k (v) is the polynomial function of degree k −1, then "(v) and -k (v) are the
polynomial functions of degrees r and k − 1, respectively, i.e. "(v) = ri=0 v i ψi

and -k (v) = k−1 i=0 v θki . The coefficients ψi and θki , 1 ≤ k ≤ r, can be computed
i

directly from the coefficients λki :

ψr = 0, ψ0 = λr0 − λr+1,0 ,
ψi = λri − λr+1,i + p1 λr−1,i−1 − p1 λr,i−1 , 1 ≤ i ≤ r − 1,
θki = p1 λki + q1 λk−1,i + r1 λk−2,i , 0 ≤ i ≤ k − 3,
θk,k−2 = p1 λk,k−2 + q1 λk−1,k−2 ,
θk,k−1 = p1 λk,k−1 .

From the fact that the polynomial (of degree 2r) in the left-hand side of (7) is equal
to zero for all values of v in [0, 1], it follows that all coefficients of v m , 0 ≤ m ≤ 2r,

11 Due to the lack of space the details of these and some further derivations are omitted.
Finite-Capacity Queue with Re-sequencing in Discrete Time 407

must be equal to zero. Hence one obtains the following system of linear algebraic
equations with constant coefficients:
⎧ (p1 λr,r−1 − θr,r−1 )πrr = 0, (8)





⎪ (θr,r−1 − p1 λr,r−1 )πrr − θr−1,r−2 πr−1,r





⎪ +(ψr−1 − θr,r−2 − p1 (λr−1,r−2 − λr,r−2 ))πrr = 0, (9)





⎪ 
r−1 
2r−1−m 
2r−m



⎪ ψi πr,m−i + θr−k,m−r πr−k,r − θr−k,m−r−1 πr−k,r



⎪ i=m−r k=0 k=0

⎪  



⎪ + λr−1,m−r −λr,m−r −λr−1,m−r−1 +λr,m−r−1 p1 πrr = 0, r+2 ≤ m ≤ 2r−2, (10)




⎨ 
r−1 
r−2 
r−2
ψi πr,r+1−i + θr−k,1πr−k,r −
θr−k,0 πr−k,r



⎪ i=1 k=0 k=0

⎪  

⎪ + − − + p1 πrr + p1 π1r = 0, (11)

⎪ λr−1,1 λ r,1 λr−1,0 λ r,0





⎪ 
r−1 
r−2

⎪ ψi πr,r−i +


θr−k,0 πr−k,r




i=0 k=0



⎪ +(λr−1,0 − λr,0 )p1 πrr − p1 π1r − π0r = 0, (12)





⎪ m

⎪ ψi πr,m−i − π0m = 0, 0 ≤ m ≤ r − 1.
⎩ (13)
i=0

Analysis of this system shows, that Eqs. (8)–(11) can be used to express the
probabilities πkr , 1 ≤ k ≤ r − 1, in terms of the probabilities πrm , 2 ≤ m ≤ r.
Indeed, by summing (8) and (9) one gets the relation between πr−1,r and πrr ; next,
summation of (8), (9), and (10) for m = 2r − 2 yields the relation between πr−2,r ,
πr−1,r , πr,r−1 , and πrr , and so on. After some tedious but simple algebra one finds
the general expressions12


r−1
πkr = βkm πrm + αk πrr , 2 ≤ k ≤ r − 1, (14)
m=k+1


r 
r−1
ψj  θk0r
π1r = − πrm + πkr + (λr−1,0 − λr0 )πrr , (15)
p1 p1
m=2 j =r+1−m k=2

k−1
12 Here and henceforth the agreement m=k ≡ 0 is used.
408 R. Razumchik and L. Meykhanadzhyan

in which the constants αk and βkm are computed recursively by

ψk −θr,k−1 −p1 (λr−1,k−1 −λr,k−1 )  ψm −θm,k−1 αm


r−1
αk = + , 2 ≤ k ≤ r −1,
θk,k−1 θk,k−1
m=k+1

ψr−1
βk,k+1 = , 3 ≤ k ≤ r − 2,
θk,k−1

r−1
ψj 
m−1
θj,k−1
βkm = − βj m , k +2 ≤ m ≤ r −1, 3 ≤ k ≤ r −3.
θk,k−1 θk,k−1
j =k+r−m j =k+1

To summarize, Eq. (7) obtained from the system (4)–(6) allowed one to obtain
explicit balance relations between the boundary probabilities πkr and πrm (and
between π0m and πrm , see (13)). Now it will be shown that the system (4)–(6) can
also be used to obtain the expressions
 for the probabilities πrm in terms of π00
only. By expressing the term abc rm=0 v m π1m /(abc) from (4) and substituting it
firstly
into (5) and then into (6), one gets two new equations sharing the same term
abc rm=0 v m π0m /(abc), which does not depend on ui . Elimination of this term
leads to the equation


r 
r
!0 (v) v m πrm + (1 − v)v r !k (v)πkr + abc!r+1 (v)(1 − v)v r πrr
m=0 k=1

+ abcv r+2
(1 − v)π0r − ab(1 − v)vπ00 − abc(1 − v)v r+1 π1r = 0, (16)

where the functions !k (v) are defined by


 
!0 (v) = −ab(cv−1)vr(v)+r−1 (v)−ab v 2 −cv 3 +cr(v) +r+1 (v)
 
−ab cv 3 +cp(v)v 2 −cr(v)−cq(v)v+cq(v)v 2 +cr(v)v +r (v)

+abc (v + p(v)) v+r+2 (v) + abcv+r+3 (v), (17)


!k (v) = abc(p1 !∗k (v) + q1 !∗k−1 (v) + r1 !∗k−2 (v)), 1 ≤ k ≤ r, (18)
!r+1 (v) = −(p(v)+q(v)+r1 +v)v+r (v)+(1−v)(r(v)+r−1 (v)−v+r+1 (v)),
(19)

and !∗k (v) = (vq(v)+r(v))+k (v)+vr(v)+k−1 (v)−v 2 +k+1 (v). Recall that +k (v)
is the polynomial function. Thus the functions !k (v) are polynomials as well. Since
the degree of +k (v) is k −1, the degrees of !0 (v) and !r+1 (v) are both equal to r +
2, and the degree of !k (v) for 1 ≤ k ≤ r is equal to k+2. Note that the lowest
 degree
of monomials in each !k (v) is equal to 1. By substituting !0 (v) = r+2 i
i=1 v φ0i ,
Finite-Capacity Queue with Re-sequencing in Discrete Time 409

 r+2 i
!k (v) = k+2 i=1 v φki , and !r+1 (v) =
i
i=1 v φr+1,i in the left-hand side and in
the right-hand side of (17)–(19) and using the method of equating the coefficients,
one obtains the recursive procedure for the computation of all coefficients13 φki :

φ0i = abλr+2,i−2 − abλr+1,i−2 + abcp0 λr+2,i−1 + abcλr+3,i−1


+abλr,i−1 − abcλr+1,i−1 − abcr1 λr−1,i−3
+abcλr+1,i−3 − abc(1 + p1 + q1 )λr,i−3
+abr1 λr−1,i−2 − (abcp0 + abcr1 + abcq0 − abcq1 )λr,i−2 , 1 ≤ i ≤ r + 2,
φki = abc(−p1 λk+1,i−2 + p1 (r1 + q0 )λk,i−1 + (p1 − 1)q1 λk,i−2
+(p1 r1 + q12 − r1 )λk−1,i−2 + (q1 r1 + q0 q1 )λk−1,i−1 + 2q1 r1 λk−2,i−2
+r1 (r1 + q0 )λk−2,i−1 + r12 λk−3,i−2 ), 1 ≤ i ≤ k + 2, 1 ≤ k ≤ r,
φr+1,i = λr+1,i−2 − (p0 + q0 + r1 )λr,i−1 − (p1 + q1 + 1)λr,i−2 + r1 λr−1,i−1
−r1 λr−1,i−2 − λr+1,i−1 , 1 ≤ i ≤ r + 2.

The polynomial (of degree 2r + 3) in the left-hand side of (16) is equal to zero
for all values of v in [0, 1]. Hence the coefficients of v m must be equal to zero.
Consideration of the coefficients of v 1 , v 2 , . . . , v r+1 yields the following relations:

⎪ φ01 πr0 − abπ00 = 0, (20)





⎪ φ02 πr0 + φ01 πr1 + abπ00 = 0, (21)



⎪ m



⎨ φ0,m+1−i πri = 0, 2 ≤ m ≤ r − 1, (22)
i=0



⎪ 
r−1 
r−1



⎪ φ π + φk1 πkr + (φ11 − abc)π1r


0,r+1−i ri




i=0 k=2
⎩  
+ φ01 + abcφr+1,1 + φr1 πrr = 0. (23)

By substituting πrm /π00 = xm in (20)–(23) one gets the recursive procedure


for the computation of xm . Indeed, from (20) it follows that x0 = ab/φ01 .
Relation
 (21) gives x1 = −(ab + φ02 x0 )/φ01 and relations (22) yield xm =
− m−1i=0 0,m+1−i xi /φ01 , 2 ≤ m ≤ r − 1. Finally, the value xr is computed
φ 14

from (23), since the last two terms in the left-hand side of (23) can be expressed
through πrm (see (14) and (15)).

13 Note that, by definition, λki ≡ 0 for i < 0 and i ≥ k.


14 Its expression is too cumbersome to be given here and thus is omitted.
410 R. Razumchik and L. Meykhanadzhyan

The values xm , 0 ≤ m ≤ r, obtained from the system (20)–(23), are used to


compute the probability π00 by (2). Once it is done, the whole joint stationary
distribution πkm can be determined from (11)–(13) and the system of balance
equations. The respective procedure in pseudocode is given below.

Procedure for the recursive computation of the joint stationary distribution πkm 15
procedure STEADYSTATEDISTRIBUTION(πri ,πir ,π0i ,0 ≤ i ≤ r)
for m = 0 → r − 1 do
πr−1,m = −(1/c + p0 − p1 )πrm
for k = r − 2 → 1 do
πk0 = −(p0 πk+1,0 + q0 πk+2,0 )
for m = 1 → r − 1 do
πr−2,m = −(p0 πr−1,m + p1 πr−1,m−1 + q0 πrm + (p1 + q1 )πr,m−1 )
for k = r − 3 → 1 do
πk,r−1 = − cc p(1)πkr +πk−1,r +q(1)πk+1,r +q1πk+1,r−1

+r1 (πk+2,r +πk+2,r−1 )
for m = 1 → r − 2 do
π1m = − abc
ab (p0 + q1 )π0m +r1 (π /c+π2,m−1 )
 0,m+1
+p1 π0,m−1 +q1 π1,m−1
for k = 2 → r − 3 do
πkm = − ab
ab p0 πk−1,m +p1πk−1,m−1 +r1 πk+1,m−1
+πk−2,m +q1 πk,m−1
15 The values of πri , πir , π0i , 0 ≤ i ≤ r, computed from (12)–(15) and (20)–(23), are the input for
the procedure.

5 Conclusion

The technique used in the paper to obtain the recursive procedure for the joint
stationary distribution is not new,16 but is rarely used. It is suitable for exact
arithmetic implementation but sometimes may suffer from the numerical instability
(in the considered model such case is when the re-sequencing arrival probability c
is much greater than the regular arrival probability a). On the one hand, it must be
admitted that the technique is not well suited for the computation of the whole joint
stationary distribution πkm . But on the other hand, as can be seen from Sect. 3, the
whole distribution πkm is not needed if one is only interested in the computation of

16 It had been used before for the analysis of some other types of queueing systems (see, for

example, [5, 18, 33]).


Finite-Capacity Queue with Re-sequencing in Discrete Time 411

the system’s main performance characteristics like loss probabilities, moments of


queues’ sizes, etc.
The severe limitation of the technique is the memoryless assumption of arrival
and service processes and the extension to a more general case is the open question.
Yet sometimes the technique proves to be useful because it yields recursive solutions
to problems, which seemed not to have such. As an example,17 one can mention
the heterogeneous Markov ordered entry queue with two finite-capacity queues (see
[23, Chap. 3] and [33]).
Finally it is worth mentioning that the adopted technique permits one noteworthy
modification. All the relations ((8)–(13) and (20)–(23)), which eventually allow
the computation of xm , depend on the values of λki , being the coefficients of the
polynomials +k (v) given by (3). The larger the value of k, the higher the degree of
+k (v) is. By assuming that the degree of +k (v) is min(k − 1, n) with n < r, one
reduces the degrees of the polynomials "(v), -k (v), and !k (v), which are needed
to compute xm . Consequently, this leads to the simplification of calculations18 but
surprisingly not always at the expense of accuracy loss.19

References

1. Adan, I.J.B.F., Wessels, J., Zijm, W.H.M.: A compensation approach for two-dimensional
Markov processes. Adv. Appl. Probab. 25(4), 783–817 (1993). https://doi.org/10.2307/
1427792
2. Akar, N., Oğuz, N.C., Sohraby, K.: A novel computational method for solving finite QBD
processes. Commun. Stat. Stoch. Models 16(2), 273–311 (2000). https://doi.org/10.1080/
15326340008807588
3. Alfa, A.S.: Discrete time queues and matrix-analytic methods. TOP 10, 147–185 (2002).
https://doi.org/10.1007/BF02579008
4. Atencia, I.: A discrete-time queueing system with changes in the vacation times. Int. J. Appl.
Math. Comput. Sci. 26(2), 379–390 (2016). https://doi.org/10.1515/amcs-2016-0027
5. Avrachenkov, K.E., Vilchevsky, N.O., Shevlyakov, G.L.: Priority queueing with finite buffer
size and randomized push-out mechanism. In: Proceedings of the 2003 ACM SIGMETRICS
International Conference on Measurement and Modeling of Computer Systems, San Diego,
pp. 324–335 (2003). https://doi.org/10.1145/781027.781079
6. Barbhuiya, F.P., Gupta, U.C.: Discrete-time queue with batch renewal input and random
serving capacity rule: GI X /GeoY /1. Queueing Syst. Theory Appl. 91(3), 347–365 (2019).
https://doi.org/10.1007/s11134-019-09600-7

17 Another example worth mentioning here is the computation of the joint stationary distribution in

the two M/M/1/r queues running in parallel with coupled arrivals. Although in this problem the
technique does not help, it leads to some insights into the interdependence between the equilibrium
probabilities (see [24]).
18 In the sense, that the number of terms in Eqs. (8)–(13) and (20)–(23) will be smaller.
19 Even though the whole joint stationary distribution π
km cannot be computed accurately under
this assumption, some performance characteristics (like loss probabilities, mean waiting times)
can be. The example of one such study is [32].
412 R. Razumchik and L. Meykhanadzhyan

7. Bruneel, H., Kim, B.G.: Discrete-Time Models for Communication Systems Including ATM.
Kluwer Academic Publishers, Dordrecht (1993). https://doi.org/10.1007/978-1-4615-3130-2
8. Chaudhry, M.L.: Exact and approximate numerical solutions of steady-state single-server bulk-
arrival discrete-time queues: GeomX /G/1. Int. J. Math. Stat. Sci. 62, 133–185 (1993)
9. Chaudhry, M.L., Gupta, U.C.: Queue-length and waiting-time distributions of discrete-time
GI X /Geom/1 queueing systems with early and late arrivals. Queueing Systems 25, 307–324
(1997). https://doi.org/10.1023/A:1019144116136
10. Claeys, D., De Vuyst, S.: Discrete-time modified number- and time-limited vacation queues.
Queueing Systems 91(3), 297–318 (2019). https://doi.org/10.1007/s11134-018-9596-8
11. De Clercq, S., Laevens, K., Steyaert, B., Bruneel, H.: A multi-class discrete-time queueing
system under the FCFS service discipline. Ann. Oper. Res. 202(1), 59–73 (2013). https://doi.
org/10.1007/s10479-011-1051-8
12. Dester, P.S., Fricker, C., Tibi, D.: Stationary analysis of the shortest queue problem. Queueing
Systems 87(3–4), 211–243 (2017). https://doi.org/10.1007/s11134-017-9556-8
13. Do, T.V.: An initiative for a classified bibliography on G-networks. Perform. Eval. 68(4), 385–
394 (2011). https://doi.org/10.1016/j.peva.2010.10.001
14. Gelenbe, E.: G-networks: a unifying model for neural and queueing networks. Ann. Oper. Res.
48(5), 433–461 (1994). https://doi.org/10.1007/bf02033314
15. Gelenbe, E., Glynn, P., Sigman, K.: Queues with negative arrivals. J. Appl. Prob. 28(1), 245–
250 (1991). https://doi.org/10.2307/3214756
16. Hunter, J.J.: A survey of generalized inverses and their use in stochastic modelling. Adv.
Probab. Stoch. Process. 1, 79–90 (2000)
17. Hunter, J.J.: Generalized inverses of Markovian kernels in terms of properties of the Markov
chain. Linear Algebra Appl. 447, 38–55 (2014). https://doi.org/10.1016/j.laa.2013.08.037
18. Ilyashenko, A., Zayats, O., Muliukha, V., Laboshin, L.: Further investigations of the priority
queuing system with preemptive priority and randomized push-out mechanism. In: Balandin,
S., Andreev, S., Koucheryavy, Y. (eds.) Internet of Things, Smart Spaces, and Next Generation
Networks and Systems, vol. 8638, pp. 433–443. Springer, Heidelberg (2014). https://doi.org/
10.1007/978-3-319-10353-2_38
19. Kapodistria, S., Palmowski, Z.: Matrix geometric approach for random walks: stability
condition and equilibrium distribution. Stoch. Models 33(4), 572–597 (2017). https://doi.org/
10.1080/15326349.2017.1359096
20. Krishnamoorthy, A., Pramod, P.K., Chakravarthy, S.R.: Queues with interruptions: a survey.
TOP 22(1), 290–320 (2012). https://doi.org/10.1007/s11750-012-0256-6
21. Latouche, G., Ramaswami, V.: Introduction to Matrix Analytic Methods in Stochastic Mod-
eling. ASA-SIAM Series on Statistics and Applied Probability. SIAM, Philadelphia (2000).
https://doi.org/10.1137/1.9780898719734
22. Littlewood, D.E.: The Skeleton Key of Mathematics: A Simple Account of Complex Algebraic
Theories. Courier Corporation, North Chelmsford (2002)
23. Medhi, J.: Stochastic Models in Queueing Theory. Academic, Amsterdam (2003)
24. Meykhanadzhyan, L., Matyushenko, S., Pyatkina, D., Razumchik R.: Revisiting joint station-
ary distribution in two finite capacity queues operating in parallel. Inf. Appl. 11(3), 106–112
(2017). https://doi.org/10.14357/19922264170312
25. Miyazawa, T., Takagi, H.: Advances in discrete-time queues. Queueing Systems 18, 1–3 (1994)
26. Morozov, E., Fiems, D., Bruneel, H.: Stability analysis of multiserver discrete-time queueing
systems with renewal-type server interruptions. Perform. Eval. 68(12), 1261–1275 (2011).
https://doi.org/10.1016/j.peva.2011.07.002
27. Nobel, R.: Retrial queueing models in discrete time: a short survey of some late arrival models.
Ann. Oper. Res. 247(1), 37–63 (2015). https://doi.org/10.1007/s10479-015-1904-7
28. Ozawa, T., Kobayashi, M.: Exact asymptotic formulae of the stationary distribution of a
discrete-time two-dimensional QBD process. Queueing Systems 90(3–4), 351–403 (2018)
https://doi.org/10.1007/s11134-018-9586-x
Finite-Capacity Queue with Re-sequencing in Discrete Time 413

29. Pechinkin, A., Razumchik, R.: Waiting characteristics of queueing system Geo/Geo/1 with
negative claims and a bunker for superseded claims in discrete time. In: International Congress
on Ultra Modern Telecommunications and Control Systems, Moscow, pp. 1051–1055 (2010).
https://doi.org/10.1109/ICUMT.2010.5676508
30. Pechinkin, A.V., Razumchik, R.V.: Queueing Systems in Discrete Time. Fizmatlit, Moscow
(2018, in Russian). ISBN:978-5-9221-1791-3
31. Razumchik, R.: Analysis of finite capacity queue with negative customers and bunker for
ousted customers using Chebyshev and Gegenbauer polynomials. Asia-Pac. J. Oper. Res.
31(04), 1450029 (2014). https://doi.org/10.1142/S0217595914500298
32. Razumchik, R.: Algebraic method for approximating joint stationary distribution in finite
capacity queue with negative customers and two queues. Inf. Appl. 9(4), 68–77 (2015). https://
doi.org/10.14357/1992264150407
33. Razumchik, R., Zaryadov, I.: Stationary blocking probability in multi-server finite queuing sys-
tem with ordered entry and Poisson arrivals. In: Vishnevsky V., Kozyrev D. (eds.) Distributed
Computer and Communication Networks. DCCN 2015. Communications in Computer and
Information Science, vol. 601, pp. 344–357. Springer, Cham (2016). https://doi.org/10.1007/
978-3-319-30843-2_36
34. Takagi, H.: Queueing Analysis: A Foundation of Performance Evaluation. Discrete-Time
Systems, vol. 3. North-Holland, New York (1993)
35. Ushakumari, P.V., Krishnamoorthy, A.: The queueing system BD /GD /∞. Optim. J. Math.
Program. Oper. Res. 34(2), 185–193 (1995). https://doi.org/10.1080/02331939508844104
The Polaron Measure

Chiranjib Mukherjee and S. R. S. Varadhan

Abstract {x(t) − x(s)} are the increments of the three dimensional Brown-
e−|t−s|
ian motion over the intervals [s, t]. F (T , ω) = −T ≤s<t ≤T |x(t )−x(s)| dtds.
Qα,T is defined as the measure with Radon–Nikodym derivative [Z(T , α)]−1
exp[αF (T , ω)] with respect to Brownian Motion, Z(α, T ) being the normalization
constant Z(T , α) = E[exp[αF (T , ω)]]. We are interested in the existence of the
Polaron measure Qα = limT →∞ Qα,T , the validity of central limit theorem for
1
(2T )− 2 (x(T ) − x(−T )) under Qα,T as well as Qα and the behavior of Qα for large
α.

Keywords Polaron measure · White noise · Regeneration property · Gaussian


process · Birth death process

Statistical mechanics raises questions of the following type. There is a simple


reference measure like Poisson point process or white noise on R or R d . In both
cases we have a collection ζ (A) of random variables defined for Borel sets A that
satisfy ζ (A ∪ B) = ζ (A) + ζ (B) for disjoint sets.
For the Poisson point process ζ (A) is a Poisson random variable with E[ζ(A)] =
μ(A) and {ζ (Ai )} are mutually independent for disjoint sets.
White noise is a jointly Gaussian family of random variables ζ(A) with
E[ζ (A)] = 0 and E[ζ (A)ζ (B)] = μ(A ∩ B). It can be multidimensional with
independent components. Usually μ is the Lebesgue measure on R or R d .

C. Mukherjee
University of Münster, Münster, Germany
S. R. S. Varadhan ()
Courant Institute of Mathematical Sciences, New York, NY, USA
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive 415
licence to Springer Nature Singapore Pte Ltd. 2020
V. C. Joshua et al. (eds.), Applied Probability and Stochastic Processes,
Infosys Science Foundation Series, https://doi.org/10.1007/978-981-15-5951-8_24
416 C. Mukherjee and S. R. S. Varadhan

In addition there is a local function g(ω) and it translates g(τx ω). For example,
if g(ω) = g(ζ (J )) for some J , then g(τx ω) = g(ζ (J + x)). In our case we have a
3 dimensional white noise on R and
 ∞
e−t
g(ω) = dt
0 |x(0) − x(−t)|

It is not quite local but almost so.


For a finite T , we consider the quantity
 
1 T T e−|t −s|
F (T , ω) = dtds
2 −T −T |x(t) − x(s)|
 
e−|t −s|
= dtds
−T ≤s<t ≤T |x(t) − x(s)|

If P is white noise we define the measure QTα by

1
dQTα = exp[αF (T , ω)]dP (1)
Z(T , α)

where Z(T , α) = exp[αF (T , ω)]dP is the normalization constant.


We are interested in the following quantities.

1
G(α) = lim log Z(T , α)
T →∞ T

Qα = lim QTα
T →∞

1
σ 2 (α) = lim |x(T ) − x(−T )|2 dQTα
T →∞ 2T

1
σ (α) = lim
2
|x(T ) − x(−T )|2 dQα
T →∞ 2T

Do the limits exist? Are the last two equal? What is the behavior of G(α), Qα , and
σ 2 (α) as α → ∞
What do we know? As shown in [1] and [2], it follows from large deviation theory
that G(α) exists and is given by
 0 et
G(α) = sup[E Q [α dt] − H (Q|P )] (2)
Q −∞ |x(0) − x(t)|
The Polaron Measure 417

over processes with stationary increments. Here

1
H (Q|P ) = lim h(Q[0,T ] , P[0,T ] )
T →∞ T
  5 
dβ dβ dβ
h(β, α) = log dβ = log dα
dα dα dα

It is shown in [3] that the limit Qα of QTα as T → ∞ exists. It is a mixture of


Gaussian processes. They all have zero mean and rotationally symmetric covariance.
It is shown in [4] that a rescaled version of Qα has a limit Q0 as α → ∞.
The covariance process has a regeneration property. The increments over differ-
ent generations are independent. The covariance in a single generation is based on a
birth and death process with a constant birth rate α and death rate 1 for each member
of the current population. {Ji = [si , ti ]} are their life spans. Total number ever lived
is n. [mini si , maxi ti ] = ∪i [si , ti ]. Generation starts from 0 at time 0 and returns
to 0 at a random time τ . ξ is a realization of the random birth and death records
ξ
{Ji } = {[si , ti ]} and u = (u1 , . . . , un ) ∈ R n . Qu is a random Gaussian process with
mean 0 and covariance ρ(s, t, ξ, u)I .
If P is white noise

1 2
ξ
dQu
= c(u, ξ ) exp − ui |ζ([si , ti ])|2 (3)
dP 2
3
c(u, ξ ) = [Det A(u, ξ )] 2

where

A(u, ξ ) = {ai,k (u, ξ )} = {δi,k + ui uk |Ji ∩ Jk |}

and λ = λ(α) is chosen so that



−λτ (ξ )
[c(u, ξ )]−1 (2π) 2 du] = 1
n
E [e
πα
(4)
Rn

This is possible for λ sufficiently large or sufficiently small. We now have a


description of Qα . We have a stationary version of a renewal process R = ∪i Ui ,
{Ui } are random intervals. Each interval is the lifetime of a birth and death process
ξ
with history ξ . πα is its distribution. There is a random Gaussian Qu that is a
ξ
superposition of Qu on each Ui ,with the weight
3 n(ξ)
e−λτ (ξ )[c(u, ξ )]− 2 (2π) 2 dudπα
418 C. Mukherjee and S. R. S. Varadhan

They are independent over different Ui . Average over {Ui }. This defines Qα . σ 2 (α)
exists by an application of the ergodic theorem. The central limit theorem follows
from the ergodic theorem for the covariance.
The next question is what happens as α → ∞.
  
G(α) φ 2 (x)φ 2 (y) 1
lim = sup dxdy − |∇φ|2 dx (5)
α→∞ α 2 φ2 =1 |x − y| 2

The supremum is attained at a unique (modulo translation) φ0 (x) that is radially


symmetric around 0 and positive. There is an Ornstein–Uhlenbeck type process Q0
with generator 12 $− (∇φ 0 )(x)
φ0 (x) ·∇ with invariant distribution |φ0 (x)| dx. This process
2

is not unique because φ0 is unique only up to translations. But the distribution of the
increments of the process is unique and a rescaled version of Qα converges to Q0
on the σ -field of increments.
Proofs
 
e−|t −s|
exp α dtds
−T ≤s≤t ≤T |x(t) − x(s)|
   :
αn e−|ti −si |
= ··· dti dsi
n! −T ≤si ≤ti ≤T |x(ti ) − x(si )|

We have a point process {si , ti } on the subset [−T ≤ s ≤ t ≤ T ] of R 2 . It is a birth


and death process on [−T , T ]. The birth rate is α s e−(t −s)dt = α(1 − e−(T −s) )
T

−s) . As T → ∞, birth rate tends to α and the death rate


1
and the death rate is 1−e−(T
to 1. We use the identity
 ∞
1 1 1 2
= √ e− 2 u |x|2
du
|x| 2π −∞

to represent
:  
1 n 1
= (2π) 2 e− 2 i u2i |x(ti )−x(si |2
du1 · · · dun
|x(ti ) − x(si )| Rn

which shows that Qα is a superposition of Gaussians. See [3].


What is the limiting behavior of Qα ? With Brownian scaling we can study the
behavior of

1 e−|s−t|
.T , =
Q e
 −T ≤s≤t≤T |x(t)−x(s)| dsdt dP
Z(T , )

. is Qα rescaled.
with  = α −2 . Q
The Polaron Measure 419

. maximizes
Q

e−|t −s |
sup E Q
 dtds − H (Q|P )
Q |x(t) − x(s)|

Let  → 0. The variational problem can now be reduced to


 
φ 2 (x)φ 2 (y) 1
sup dxdy] − |∇φ|2 dx
φ2 =1 |x − y)| 2

and the increments of Q . can be shown to converge, in distribution, to the


increments of Q0 . See [4].

References

1. Donsker, M.D., Varadhan, S.R.S.: Asymptotics for the polaron. Commun. Pure Appl. Math. 36,
505–528 (1983)
2. Mukherjee, C., Varadhan, S.R.S.: Brownian occupation measures, compactness and large
deviations. Ann. Probab. 44(6), 3934–3964 (2016)
3. Mukherjee, C., Varadhan, S.R.S.: Identification of the polaron measure I: fixed coupling regime
and the central limit theorem for large times. Commun. Pure Appl. Math. (to appear)
4. Mukherjee, C., Varadhan, S.R.S.: Identification of the Polaron measure in strong coupling and
the Pekar variational formula. Ann. Probab. (to appear)
Batch Arrival Multiserver Queue with
State-Dependent Setup for
Energy-Saving Data Center

Tuan Phung-Duc

Abstract Queues with setup time are extensively studied because they have
application in performance evaluation of power-saving data centers. In data centers,
there are a huge number of servers which consume a large amount of energy. In
the current technology, an idle server still consumes about 60% of the energy when
it is busy. Thus, a simple way to save energy is to turn off idle servers. However,
when there are some waiting jobs, we have to turn on the OFF servers in order
to reduce the waiting time. A server needs some setup time to be active during
which it consumes energy but cannot process jobs. Therefore, there exists a trade-
off between power consumption and delay performance. Gandhi et al. (Eval Rev
38:48–50, 2010; Perform Eval 67:1123–1138, 2010) analyze this trade-off using
an M/M/c queue with staggered setup (one server in setup at a time). In this paper,
using an alternative approach, we obtain generating functions for the joint stationary
distribution of the number of active servers and that of jobs in the system for a
more general model with batch arrivals and state-dependent setup time. We further
obtain moments for the joint queue length. Numerical results reveal that under the
same traffic intensity, the mean power consumption decreases with the mean batch
size. One of the main theoretical contributions is a new conditional decomposition
formula showing that the number of waiting customers under the condition that all
servers are busy can be decomposed to the sum of two independent random variables
with clear physical interpretation.

Keywords Batch arrival · Energy-saving · Multiserver queue · Setup time ·


Conditional decomposition · Generating function

T. Phung-Duc ()
Department of Policy and Planning Sciences, University of Tsukuba, University of Tsukuba,
Japan
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive 421
licence to Springer Nature Singapore Pte Ltd. 2020
V. C. Joshua et al. (eds.), Applied Probability and Stochastic Processes,
Infosys Science Foundation Series, https://doi.org/10.1007/978-981-15-5951-8_25
422 T. Phung-Duc

1 Introduction

Cloud computing is an Internet-based service where customers pay money to access


and use remote computing resources. Thus, the customers do not own the computing
resources at their companies and thus they do not have to install software as well
as maintain hardware by themselves. The most important parts of cloud computing
are data centers in which there are a huge number of servers consuming a huge
amount of energy. One of the most important management issues in data centers
is minimizing the energy consumption while satisfying service level agreement
(SLA) with customers [3, 5, 17–23, 25–27]. However, in the current technology,
60% of the peak energy when processing a job is still consumed by an idle server
(not processing any job). Therefore, a natural idea for saving energy is turning off
idle servers and turning on them again when jobs are waiting (ON–OFF policy).
However, setup time is necessary for OFF servers to be active again so as to process
jobs. Furthermore, it should be noted that during the setup time the servers consume
energy but cannot process jobs. Thus, while such a simple ON–OFF policy might
reduce the amount of energy consumption, it might also increase the response time
of jobs. Therefore, a careful investigation of the ON–OFF policy is important in
management of data centers. Motivated by this need, we propose to analyze a
queueing model with setup time for modelling ON–OFF policy in data centers.
While queues with setup time are extensively studied in the literature, most
of works are about single server models [6, 7, 28, 30] in which service time
follows an arbitrary distribution. Artalejo et al. [2] carry out a detailed analysis
of multiserver queues with setup time in which at most one server can be in setup
mode at a time. This policy is later named staggered setup in [14]. It is reported
in [2] that the underlying Markov chain of the model is a quasi-birth-and-dearth
process (QBD) whose rate matrix is analytically obtained. Using difference equation
technique, Artalejo et al. [2] obtain an exact solution for the stationary queue length
distribution. Recently, Gandhi et al. [11–16] extensively investigate multiserver
queues with setup time with applications to data centers. They study the M/M/c
system with staggered setup and obtain some closed form approximations for the
ON–OFF policy without limiting the number of servers in setup at a time. Gandhi
et al. [11] further consider the case where an idle server waits for a moment before
turning off. Tian et al. [29] study an M/M/c model with vacation.
In all the papers on multiserver queues presented above, customers (jobs) arrive
according to a Poisson process. However, in cloud computing a group of jobs
(subtasks) divided from a big task might arrive concurrently and might be processed
in parallel [9]. Motivated by this, we consider a multiserver queueing system with
state-dependent setup time and batch arrival, where a batch is a big task and a job
in the batch is a subtask. Our model unifies and extends the models in [14, 29]. In
this paper, we obtain generating functions for the joint stationary distribution of the
number of active servers and that of customers in the system (those in service and in
queue). The generating functions are recursively obtained. Models in [2, 14, 24, 29]
are special cases of our model in this paper. Furthermore, we show that all the
Batch Arrival Multiserver Queue with State-Dependent Setup 423

moments of any order of the joint queue length are recursively calculated. We also
investigate the effect of batch arrivals on the performance of the system by numerical
experiments. Furthermore, the derivation of waiting time distribution is also briefly
presented. We also obtain the conditional decomposition formula which shows that
the queue length under the condition that all the servers are busy can be decomposed
into the sum of two independent random variables with clear physical meaning.
This can be considered as an important theoretical contribution. A special case of
our model, i.e., batch arrival model with staggered setup is briefly presented in our
previous work [24].
The rest of our paper is organized as follows. First we present the model
in Sect. 2. Section 3 is devoted to the detailed analysis where we derive the
generating functions and the joint stationary distribution. Section 4 briefly presents
the method to compute the waiting time distribution. In Sect. 5, we discuss the
conditional decomposition property for the queue length. In Sect. 6, we derive
some performance measures and in Sect. 7, we provide numerical results for the
performance measures. Finally, concluding remarks are presented in Sect. 8.

2 Model

We consider MX /M/c queueing systems with c identical servers and state-dependent


staggered setup. Batches of customers arrive at the system according to a Poisson
process with rate λ. The distribution of the batch size X is given by P (X = i) = βi
(i ∈ N = {1, 2, . . . }). The generating function of X is given by β(z) = i∈N βi zi .
In our system, an idle server is turned off immediately. Upon the arrival of a batch,
only one OFF server is turned one. OFF servers are turned on one by one as long as
jobs are waiting at the buffer. The OFF server needs some setup time to be active so
as to serve a waiting customers. We assume that the service time distribution of the
setup time of an OFF server is the exponential distribution with mean 1/αi provided
that there are already i active servers. By using state-dependent setup rate, we are
able to consider two important cases at the same time; the staggered setup policy
αi = α and the vacation model αi = (c − i)α. If a server finishes serving a job,
this server picks a waiting job if there are some waiting jobs in the system. In the
case that there is not a waiting job, the server in setup process and the idle ones are
turned off immediately. Under these assumptions, in this model a server is in one
of three states; BUSY or OFF or SETUP. We assume that all customers enter the
system eventually receive service and depart from the system. This implies that no
abandonment is allowed. Furthermore, the distribution of the service time of jobs is
the exponential distribution with mean 1/μ.
Let C(t) and N(t) denote the number of busy servers and the number of jobs
in the system at time t, respectively. Under the Markovian assumptions made in
424 T. Phung-Duc

Fig. 1 State transition diagram (β(z) = z)

Sect. 2, it is easy to confirm that {X(t) = (C(t), N(t)); t ≥ 0} forms a Markov


chain in the state space

S = {(i, j ); j ∈ Z+ , i = 0, 1, . . . , min(c, j )},

where Z+ = {0, 1, . . . }. Figure 1 shows the transitions among states for the special
case of single arrival, i.e., β(z) = z.
The necessary and sufficient stability condition for {X(t); t ≥ 0} is given by
ρ = λβ " (1)/(cμ) < 1. Under this stability condition, let

πi,j = lim P(N(t) = i, C(t) = j ), (i, j ) ∈ S,


t →∞

denote the stationary probability of state (i, j ). In the next section, we derive the
generating functions of πi,j .
Batch Arrival Multiserver Queue with State-Dependent Setup 425

3 Analysis of the Model

In this section, we present an analytical solution for our model based on a generating function approach. The key to our analysis is that we exploit the "upward" structure of the underlying Markov chain, i.e., the number of active servers changes in only one direction, except at the boundary states (i, i), i = 0, 1, . . . , c − 1, c.

3.1 Generating Functions

We present Rouché's theorem, which will be used repeatedly in this section.

Theorem 1 (Rouché's Theorem (see, e.g., [1])) Let D denote a bounded region with a simple closed contour C, and let f(z) and g(z) be two analytic functions on C and D. Assume that |f(z)| < |g(z)| on C. Then f(z) − g(z) has in D the same number of zeros as g(z), where all zeros are counted with their multiplicity.
We define the generating functions of the π_{i,j} as follows:

Π_i(z) = Σ_{j=i}^{∞} π_{i,j} z^{j−i},   |z| ≤ 1.

In the following analysis, we express Π_0(z) in terms of π_{0,0} and then Π_i(z) in terms of Π_{i−1}(z) for 1 ≤ i ≤ c.
The balance equations for states (0, j) (j ∈ N) read as follows:

λ Σ_{i=1}^{j} β_i π_{0,j−i} = (λ + α_0) π_{0,j},   j ∈ N.

Multiplying the above equation by z^j and adding over j ∈ N yields

λβ(z)Π_0(z) = (λ + α_0)(Π_0(z) − π_{0,0}),

or equivalently

Π_0(z) = (λ + α_0) π_{0,0} / (λ + α_0 − λβ(z)).   (1)

The balance equation for state (0, 0) is given by

λπ0,0 = μπ1,1 .

This equation is also derived from the balance between the flows into and out of the group of states {(0, j); j ∈ Z_+}. Indeed, we have

α_0(Π_0(1) − π_{0,0}) = μπ_{1,1},

leading to

π_{1,1} = α_0(Π_0(1) − π_{0,0})/μ = (λ/μ) π_{0,0}.

Next, we shift to the case where there are i (1 ≤ i ≤ c − 1) active servers. The balance equations read as follows:

(λ + iμ)π_{i,i} = α_{i−1}π_{i−1,i} + iμπ_{i,i+1} + (i + 1)μπ_{i+1,i+1},   (2)

(λ + iμ + α_i)π_{i,j} = λ Σ_{k=1}^{j−i} β_k π_{i,j−k} + iμπ_{i,j+1} + α_{i−1}π_{i−1,j},   j ≥ i + 1.   (3)

Multiplying (2) by z^0 and (3) by z^{j−i} and adding over j = i, i + 1, . . . , we obtain

(λ + iμ + α_i)Π_i(z) − α_i π_{i,i} = λβ(z)Π_i(z) + (iμ/z)(Π_i(z) − π_{i,i}) + (α_{i−1}/z)(Π_{i−1}(z) − π_{i−1,i−1}) + (i + 1)μπ_{i+1,i+1},

or equivalently

f_i(z)Π_i(z) − α_i z π_{i,i} = (i + 1)μ z π_{i+1,i+1} − iμπ_{i,i} + α_{i−1}(Π_{i−1}(z) − π_{i−1,i−1}),   (4)

where f_i(z) = (λ + iμ + α_i)z − λzβ(z) − iμ. Since f_i(0) = −iμ < 0 and f_i(1) = α_i > 0, there exists z_i with 0 < z_i < 1 such that f_i(z_i) = 0.
Furthermore, Rouché's theorem (Theorem 1) shows that z_i is the unique root in the unit circle. Indeed, letting g(z) = (λ + iμ + α_i)z and f(z) = λzβ(z) + iμ, C = {z ∈ C | |z| = 1} and D = {z ∈ C | |z| < 1}, we see that

|f(z)| ≤ λ|z||β(z)| + iμ ≤ λ + iμ < λ + iμ + α_i = |g(z)|,   z ∈ C.

Thus, applying Rouché's theorem, we have that f(z) − g(z) and g(z) have the same number of zeros in D. Furthermore, because f_i(1) = α_i > 0 and f_i(z) → −∞ as z → ∞ along the real axis, there also exists at least one root outside the unit circle.

For the case of single arrivals, i.e., β(z) = z, we have

z_i = [λ + iμ + α_i − sqrt((λ + iμ + α_i)^2 − 4iλμ)] / (2λ),

and the other root, outside the unit circle, is given by

z̄_i = [λ + iμ + α_i + sqrt((λ + iμ + α_i)^2 − 4iλμ)] / (2λ).

Furthermore, if the batch size follows a geometric distribution with parameter q, i.e., β(z) = (1 − q)z/(1 − qz), the root inside the unit circle (z_i) and the one outside (z̄_i) are given by

z_i = [λ + iμ(1 + q) + α_i − sqrt(Δ)] / (2[q(λ + iμ + α_i) + λ(1 − q)]),   z̄_i = [λ + iμ(1 + q) + α_i + sqrt(Δ)] / (2[q(λ + iμ + α_i) + λ(1 − q)]),   (5)

respectively, where

Δ = [λ + iμ(1 + q) + α_i]^2 − 4iμ[q(λ + iμ + α_i) + λ(1 − q)].
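For a general batch-size distribution, z_i has no closed form, but the sign change of f_i on (0, 1) noted above makes it easy to locate numerically. The following fragment is only an illustrative sketch (it is not taken from the paper; all function names and the tolerance are assumptions), and the last line shows how the single-arrival closed form above can be reproduced.

```python
def f_i(z, i, lam, mu, alpha_i, beta_pgf):
    # f_i(z) = (lambda + i*mu + alpha_i) z - lambda z beta(z) - i*mu
    return (lam + i * mu + alpha_i) * z - lam * z * beta_pgf(z) - i * mu

def root_in_unit_interval(i, lam, mu, alpha_i, beta_pgf, tol=1e-12):
    """Bisection on (0, 1): f_i(0) = -i*mu < 0 and f_i(1) = alpha_i > 0."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f_i(mid, i, lam, mu, alpha_i, beta_pgf) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: single arrivals, beta(z) = z, i = 1 (compare with the closed form for z_1).
z1 = root_in_unit_interval(1, lam=0.5, mu=1.0, alpha_i=1.0, beta_pgf=lambda z: z)
```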

Putting z = z_i into (4), we obtain

π_{i+1,i+1} = [(iμ − α_i z_i)π_{i,i} + α_{i−1}(π_{i−1,i−1} − Π_{i−1}(z_i))] / ((i + 1)μ z_i),   (6)
i = 1, 2, . . . , c − 1.

It should be noted that π_{i+1,i+1} is obtained either by (6) or by the following balance equation between the flow coming into and the flow going out of the set of states {(i, j); j = i, i + 1, . . . }:

(i + 1)μπ_{i+1,i+1} = α_i(Π_i(1) − π_{i,i}).

Substituting (6) into (4) and arranging the result, we also have

Π_i(z) = [(i + 1)μπ_{i+1,i+1} + α_i π_{i,i} + α_{i−1} π̂_{i−1}(z)] / g_i(z),

where

π̂_{i−1}(z) = (Π_{i−1}(z) − Π_{i−1}(z_i)) / (z − z_i),   g_i(z) = f_i(z) / (z − z_i).

Remark 1 At this point, we have expressed the generating functions Π_i(z) (i = 0, 1, . . . , c − 1) and the boundary probabilities π_{i,i} (i = 0, 1, . . . , c) in terms of π_{0,0}.
Finally, we consider the case i = c, i.e., all servers are active. The balance equations are given as follows:
(λ + cμ)π_{c,c} = α_{c−1}π_{c−1,c} + cμπ_{c,c+1},   (7)

(λ + cμ)π_{c,j} = α_{c−1}π_{c−1,j} + λ Σ_{i=1}^{j−c} β_i π_{c,j−i} + cμπ_{c,j+1},   j ≥ c + 1.   (8)

Multiplying (7) by z^0 and (8) by z^{j−c} and summing over j ≥ c, we obtain

(λ + cμ)Π_c(z) = (α_{c−1}/z)(Π_{c−1}(z) − π_{c−1,c−1}) + (cμ/z)(Π_c(z) − π_{c,c}) + λβ(z)Π_c(z),

or equivalently

Π_c(z) = [α_{c−1}(Π_{c−1}(z) − π_{c−1,c−1}) − cμπ_{c,c}] / f_c(z)
       = α_{c−1}(Π_{c−1}(z) − Π_{c−1}(1)) / f_c(z),

where f_c(z) = (λ + cμ)z − λzβ(z) − cμ and the second equality is due to the balance between the flows into and out of the group of states {(c, j); j = c, c + 1, . . . }. It should be noted that f_c(1) = 0. Thus, applying L'Hôpital's rule and arranging the result yields

Π_c(1) = α_{c−1}Π'_{c−1}(1) / (cμ − λβ'(1)).   (9)

Remark 2 It should be noted that we have expressed Π_i(z) (i = 0, 1, . . . , c) in terms of π_{0,0}, which is uniquely determined by the following normalization condition:

Σ_{i=0}^{c} Π_i(1) = 1.   (10)

According to (9), in order to calculate Π_c(1), we need Π'_{c−1}(1), which is recursively obtained by Theorem 2.
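Since all of these quantities are proportional to π_{0,0}, one convenient way to apply (10) in a computation is to run the whole recursion with the trial value π_{0,0} = 1 and rescale afterwards. The fragment below is only an illustrative sketch of that final step (the list unnormalized_Pi1 is assumed to hold the trial values Π_0(1), . . . , Π_c(1); the names are not from the paper).

```python
def apply_normalization(unnormalized_Pi1):
    # unnormalized_Pi1[i] = Pi_i(1) computed with the trial value pi_{0,0} = 1
    total = sum(unnormalized_Pi1)            # left-hand side of (10) before scaling
    pi_00 = 1.0 / total                      # the true pi_{0,0}
    return pi_00, [v / total for v in unnormalized_Pi1]
```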
Remark 3 Once π_{i,i} (i = 0, 1, . . . , c) are determined, we can calculate all the steady state probabilities π_{i,j} in a recursive manner via the balance equations. In particular, the calculation order is {π_{0,j}; j ≥ 0} → {π_{1,j}; j ≥ 1} → · · · → {π_{c,j}; j ≥ c}.

In Sect. 3.2, we show some simple recursive formulae for the factorial moments.
Remark 4 In the case of a geometric batch size, β(z) = (1 − q)z/(1 − qz), we can easily confirm that Π_i(z) has only the simple poles z̄_0, z̄_1, . . . , z̄_i. Thus, the generating functions can be decomposed into the following form:

Π_i(z) = B_i + Σ_{j=0}^{i} A_{i,j} / (z − z̄_j),   (11)

where B_i and A_{i,j} are recursively computed using π_{0,0}. In the case of single arrivals, i.e., β(z) = z, the generating functions have the same form with B_i = 0.

3.2 Factorial Moments

In this section, we derive simple recursive formulae for the factorial moments. Because the generating function Π_0(z) is given in a simple form, its derivatives at z = 1 are also explicitly obtained in a simple form.
Theorem 2 The first moments of the queue length are recursively calculated as follows:

Π'_i(1) = (α_{i−1}/α_i) Π'_{i−1}(1) + [(λβ'(1) − α_i − iμ)/α_i] Π_i(1) + [(i + 1)μπ_{i+1,i+1} + α_i π_{i,i}]/α_i,   i = 1, 2, . . . , c − 1,   (12)

where Π'_0(1) = π_{0,0} λβ'(1)(λ + α_0)/α_0^2 due to (1). Equation (12) can be further simplified to

Π'_i(1) = (α_{i−1}/α_i) Π'_{i−1}(1) + [(λβ'(1) − iμ)/α_i] Π_i(1),   (13)

by using the balance between the transitions per time unit into and out of the set of states S_i = {(k, j) | k = 0, 1, . . . , i, j = k, k + 1, . . . }, i.e.,

(i + 1)μπ_{i+1,i+1} = α_i(Π_i(1) − π_{i,i}),   i = 0, 1, . . . , c − 1.


Furthermore, the n-th (n ≥ 2) factorial moment is given by

Π_i^(n)(1) = (α_{i−1}/α_i) Π_{i−1}^(n)(1) + n(λβ'(1) − iμ − α_i) Π_i^(n−1)(1)/α_i + [Σ_{k=2}^{n} nC_k (λβ^(k)(1) + kλβ^(k−1)(1)) Π_i^(n−k)(1)]/α_i,   (14)
i = 1, 2, . . . , c − 1,

where Π_0^(n)(1) is computed using the following recursive formula:

Π_0^(n)(1) = λ [Σ_{k=1}^{n} nC_k β^(k)(1) Π_0^(n−k)(1)] / α_0,

due to (1), and nC_k = n!/(k!(n−k)!).
Proof Differentiating (4), we obtain

f_i(z)Π'_i(z) = −(λ + iμ + α_i − λβ(z) − λzβ'(z))Π_i(z) + α_{i−1}Π'_{i−1}(z) + α_i π_{i,i} + (i + 1)μπ_{i+1,i+1}.

Substituting z = 1 into the above equation and arranging the result yields (12). Differentiating (4) n ≥ 2 times, substituting z = 1, and arranging the result, we obtain (14).
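As an illustration of how Theorem 2 and (9) are used together, the following sketch (it is not part of the paper; all names are illustrative) computes the first moments Π'_i(1), i = 0, . . . , c − 1, via (13) and then Π_c(1) via (9), assuming that the values Π_i(1) for i = 1, . . . , c − 1 and Π'_0(1) have already been obtained from (1), (6) and the normalization (10).

```python
def first_moments(Pi1, dPi0_1, lam, mu, alpha, beta1, c):
    """Pi1[i] = Pi_i(1), dPi0_1 = Pi_0'(1), beta1 = beta'(1) (mean batch size)."""
    dPi = [0.0] * c
    dPi[0] = dPi0_1
    for i in range(1, c):                                   # recursion (13)
        dPi[i] = (alpha[i - 1] / alpha[i]) * dPi[i - 1] \
                 + (lam * beta1 - i * mu) / alpha[i] * Pi1[i]
    Pi_c_1 = alpha[c - 1] * dPi[c - 1] / (c * mu - lam * beta1)   # eq. (9)
    return dPi, Pi_c_1
```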
Theorem 3 We have

Π_c^(n)(1) = A_n / [(n + 1)(cμ − λβ'(1))],   n ∈ N,   (15)

where

A_n = α_{c−1} Π_{c−1}^(n+1)(1) + Σ_{k=2}^{n+1} (n+1)C_k (λkβ^(k−1)(1) + λβ^(k)(1)) Π_c^(n+1−k)(1).

Proof We have

f_c(z)Π_c(z) = α_{c−1}(Π_{c−1}(z) − π_{c−1,c−1}) − cμπ_{c,c}.

Differentiating this equation n ≥ 1 times, we obtain

f_c(z)Π_c^(n)(z) + Σ_{k=1}^{n} nC_k f_c^(k)(z) Π_c^(n−k)(z) = α_{c−1} Π_{c−1}^(n)(z).

Rearranging this equation leads to

Π_c^(n)(z) = [α_{c−1} Π_{c−1}^(n)(z) − Σ_{k=1}^{n} nC_k f_c^(k)(z) Π_c^(n−k)(z)] / f_c(z).   (16)

We observe inductively that both the numerator and the denominator on the right-hand side of (16) vanish at z = 1 (see Remark 5). Thus, applying L'Hôpital's rule and arranging the result, we obtain (15).
Remark 5 In the recursive derivation of Π_i(z) (i = 0, 1, . . . , c − 1), it is easy to see that these generating functions have only poles outside the unit circle. Furthermore, assuming that the generating function of the batch size β(z) has moments of any order β^(n)(1), it is easy to confirm that Π_c(z) also has moments of any order, i.e., Π_c^(n)(1) (n = 1, 2, . . . ).
Remark 6 It should be noted that in order to obtain the n-th factorial moment Π_c^(n)(1), we need the (n + 1)-th factorial moment Π_{c−1}^(n+1)(1). Fortunately, Π_{c−1}^(n+1)(1) is expressed in terms of Π_0^(n+1)(1), which is explicitly obtained for any n according to Theorem 2.
Remark 7 It should be noted that when α_i = α (i = 0, 1, . . . , c − 1), our results reduce to those presented in [24].

4 Waiting Time Distribution

This section is devoted to the waiting time distribution of an arbitrary customer. To this end, we first find the steady state probability p_{i,n−1} that an arriving customer finds i servers in active mode and n − 1 (n ≥ 1) customers standing before him. We then find the conditional waiting time W_{i,n} of a tagged customer that finds i active servers and n − 1 customers standing before him. Let W̃_{i,n}(s) denote the LST of W_{i,n}. Let W denote the waiting time of an arbitrary customer and W̃(s) denote the LST of W. We then have

W̃(s) = Σ_{i=0}^{c} Σ_{n=i+1}^{∞} p_{i,n−1} W̃_{i,n}(s).

In Artalejo et al. [2], an explicit expression for W̃_{i,n}(s) is obtained for the case α_i = α (i = 0, 1, . . . , c − 1). Although this is different from our setting, the analysis in [2] can be easily adapted to our model. In fact, W_{i,n} is the first passage time from state (i, n) to the boundary {(i, i); i = 0, 1, . . . , c − 1, c} when the arrivals are ignored. Thus, we can obtain the waiting time distribution by inverting the LST.

4.1 Computation of p_{i,n}

Because p_{i,n} denotes the probability that an arriving customer finds i active servers and himself at the n-th position in the queue (to depart from the system), we have

p_{i,n} = Σ_{j=1}^{n} π_{i,n−j} r_j,

where r_j is the probability that an arriving customer finds himself at the j-th position in the batch. According to Burke [4] and Cromie et al. [8], we have

r_j = (1/E[B]) Σ_{i=j}^{∞} β_i,   j = 1, 2, . . . ,

where E[B] = β'(1) is the mean batch size.
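The two formulas above translate directly into code. The following is a small illustrative sketch (the names and the truncation level j_max are assumptions, not part of the paper); beta is the batch-size distribution as a dictionary {j: β_j} and pi holds the stationary probabilities computed in Sect. 4.2.

```python
def position_in_batch_probs(beta, mean_batch, j_max):
    # r_j = (1/E[B]) * sum_{i >= j} beta_i, truncated at j_max for computation
    return {j: sum(beta.get(i, 0.0) for i in range(j, j_max + 1)) / mean_batch
            for j in range(1, j_max + 1)}

def p_in(i, n, pi, r):
    # p_{i,n} = sum_{j=1}^{n} pi_{i,n-j} r_j
    return sum(pi.get((i, n - j), 0.0) * r.get(j, 0.0) for j in range(1, n + 1))
```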

4.2 Algorithm for the Stationary Distribution

In this section, we present an algorithm for calculating all the joint steady state probabilities. Since π_{i,i} (i = 0, 1, . . . , c) are already obtained, we can calculate all other steady state probabilities using a recursive algorithm. Indeed, π_{0,n} is recursively obtained once π_{0,0} is given. Given that π_{0,n} is known for any n and that π_{1,1} is known, we can recursively obtain all the probabilities π_{1,n} for n ≥ 1. Similarly, we can obtain all the probabilities π_{i,n}, (i, n) ∈ S.
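A minimal sketch of this recursion is given below (it is not from the paper; the truncation level j_max, the dictionary representation and all names are illustrative assumptions). Row i = 0 uses the balance equations for states (0, j), rows 1 ≤ i ≤ c − 1 solve (2) for π_{i,i+1} and (3) for π_{i,j+1}, and the last row i = c is filled in the same way from (7) and (8).

```python
def stationary_rows(pi_diag, lam, mu, alpha, beta, c, j_max):
    """pi_diag[i] = pi_{i,i} (already known); beta = {k: beta_k}."""
    pi = {(i, i): pi_diag[i] for i in range(c + 1)}
    # Row i = 0: (lambda + alpha_0) pi_{0,j} = lambda * sum_k beta_k pi_{0,j-k}
    for j in range(1, j_max + 1):
        s = sum(beta.get(k, 0.0) * pi[(0, j - k)] for k in range(1, j + 1))
        pi[(0, j)] = lam * s / (lam + alpha[0])
    # Rows i = 1, ..., c-1: solve (2) for pi_{i,i+1}, then (3) for pi_{i,j+1}
    for i in range(1, c):
        pi[(i, i + 1)] = ((lam + i * mu) * pi[(i, i)]
                          - alpha[i - 1] * pi[(i - 1, i)]
                          - (i + 1) * mu * pi[(i + 1, i + 1)]) / (i * mu)
        for j in range(i + 1, j_max):
            s = sum(beta.get(k, 0.0) * pi[(i, j - k)] for k in range(1, j - i + 1))
            pi[(i, j + 1)] = ((lam + i * mu + alpha[i]) * pi[(i, j)]
                              - lam * s - alpha[i - 1] * pi[(i - 1, j)]) / (i * mu)
    return pi
```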

5 Conditional Decomposition

We have derived the following result:

Π_c(z) = [α_{c−1}(Π_{c−1}(z) − π_{c−1,c−1}) − cμπ_{c,c}] / f_c(z),
Π_c(1) = α_{c−1}Π'_{c−1}(1) / (cμ − λβ'(1)).

Let Q^(c) denote the conditional queue length given that all c servers are busy, i.e.,

P(Q^(c) = i) = P(N = i + c | C = c),



where N and C are the numbers of customers in the system and of busy servers in the steady state, respectively. Let P_c(z) denote the generating function of Q^(c). It is easy to see that

P_c(z) = Π_c(z) / Π_c(1)
       = [α_{c−1}(Π_{c−1}(z) − π_{c−1,c−1}) − cμπ_{c,c}] / [α_{c−1}Π'_{c−1}(1)(z − 1)] · g(z)
       = [Π_{c−1}(z) − Π_{c−1}(1)] / [Π'_{c−1}(1)(z − 1)] · g(z)
       = [Σ_{j=1}^{∞} π_{c−1,c−1+j}(z^j − 1)] / [Π'_{c−1}(1)(z − 1)] · g(z)
       = [Σ_{j=1}^{∞} π_{c−1,c−1+j} Σ_{i=0}^{j−1} z^i] / Π'_{c−1}(1) · g(z)
       = [Σ_{i=0}^{∞} Σ_{j=i+1}^{∞} π_{c−1,c−1+j} z^i] / Π'_{c−1}(1) · g(z),

where we have used cμπ_{c,c} = α_{c−1}(Π_{c−1}(1) − π_{c−1,c−1}) in the third equality and

g(z) = (cμ − λβ'(1))(z − 1) / [(cμ + λ)z − λzβ(z) − cμ].
It should be noted that g(z) is the generating function of the number of customers in the conventional M^X/M/1 system without setup time (denoted by Q^(c)_{ON–IDLE}), where the arrival rate, the PGF of the batch size, and the service rate are λ, β(z), and cμ, respectively.
We give a clear interpretation of the generating function

[Σ_{i=0}^{∞} Σ_{j=i+1}^{∞} π_{c−1,c−1+j} z^i] / Π'_{c−1}(1).

For simplicity, we define

q_{c−1,i} = [Σ_{j=i+1}^{∞} π_{c−1,c−1+j}] / Π'_{c−1}(1),   i ∈ Z_+.

We have

Σ_{j=i+1}^{∞} π_{c−1,c−1+j} = P(N − C > i | C = c − 1) P(C = c − 1).

Thus, we have

q_{c−1,i} = P(N − C > i | C = c − 1) / E[N − C | C = c − 1].

It should be noted that under the condition C = c − 1, N − C represents the number of customers in the system that are not receiving service. Thus, q_{c−1,i} (i = 0, 1, 2, . . . ) is the probability that a waiting customer finds i other customers waiting in front of him under the condition that c − 1 servers are active (see Burke [4]). Let Q_Res denote the discrete random variable following this distribution.
Thus our decomposition result is summarized as follows:

Q^(c) =_d Q^(c)_{ON–IDLE} + Q_Res.

Remark 8 Tian et al. [29] and Zhang and Tian [31] obtain a similar result for a multiserver model with Poisson arrivals and vacations, i.e., α_i = (c − i)α and β(z) = z. However, the random variable with the distribution q_{c−1,i} here is not given a clear physical meaning in [29, 31].

6 Performance Measures

We derive the power consumption and the mean queue length for our model and for the corresponding model without setup time.

6.1 Power Consumption

The costs per unit time for the SETUP, ON, and IDLE states of a server are denoted by C_setup, C_run, and C_idle, respectively. The power consumption of our system is given by

P_{On–Off} = C_setup (1 − Σ_{i=0}^{c−1} π_{i,i} − Π_c(1)) + C_run · cρ,

where cρ = λβ'(1)/μ is the mean number of running servers. For comparison, we also plot the curves for the conventional M/M/c queue under the same setting. It should be noted that in the conventional M/M/c system, an idle server is not turned off. As a result, the cost for power consumption is given by

P_{On–Idle} = C_run · cρ + C_idle (c − cρ).
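As a small illustration (not from the paper; the argument names are assumptions), these two measures can be evaluated directly once the boundary probabilities and Π_c(1) are available:

```python
def power_consumption(pi_diag, Pi_c_1, rho, c, C_setup, C_run, C_idle):
    """pi_diag[i] = pi_{i,i}; Pi_c_1 = Pi_c(1); rho = lambda*beta'(1)/(c*mu)."""
    p_setup = 1.0 - sum(pi_diag[:c]) - Pi_c_1        # probability a setup is in progress
    p_on_off = C_setup * p_setup + C_run * c * rho   # On-Off policy
    p_on_idle = C_run * c * rho + C_idle * (c - c * rho)   # conventional On-Idle policy
    return p_on_off, p_on_idle
```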



6.2 Mean Queue Length

The mean number of waiting customers for our model is given by

E[Q_{On–Off}] = Σ_{i=0}^{c} Π'_i(1).

Let E[Q_{On–Idle}] denote the mean queue length of the M^X/M/c queue without setup time, which can be obtained from the analysis in [8].

7 Numerical Experiments

In this section, we consider the case where α_i = α, i.e., the staggered setup policy. Furthermore, we consider a fixed batch size, i.e., β(z) = z^k for k = 1, 2, . . . ; in this case ρ = kλ/(cμ), and a batch consists of k customers. In all the figures, the curves for the On–Idle policy are indicated by "On–Idle" and the other curves are for the On–Off model. Furthermore, we fix μ = 1 and c = 10 in all the numerical examples.

7.1 Power Consumption Against ρ

We set the costs as follows: C_setup = 1, C_run = 1, and C_idle = 0.6. In this section we investigate the power consumption against the traffic intensity. Figures 2, 3, and 4 show the power consumption against the traffic intensity for α = 0.1, 1, and 10, respectively. We observe from these three figures that the On–Off policy always outperforms the On–Idle policy. However, from the performance point of view, the waiting time under the former is expected to be longer than under the latter. Thus, we investigate the impact of the setup time on the total cost of the system in the next section. An important observation is that, keeping the traffic intensity the same, the power consumption decreases with the batch size k. This suggests that it is more efficient to design the system so that customers arrive in groups with a large batch size. Furthermore, we also observe from these figures that the power consumption decreases with α.

7.2 Power Consumption Against α

In this section, we set the costs as follows: C_setup = 5, C_run = 1, and C_idle = 0.6. It should be noted that in this setting the power consumption of a server in setup is five times that of a running server.

Fig. 2 Power consumption against ρ (α = 0.1). [Plot: system power consumption vs. traffic intensity (0.5–1); curves: ON-IDLE and k = 1, 3, 5, 7, 9]

Fig. 3 Power consumption against ρ (α = 1). [Plot: system power consumption vs. traffic intensity (0.5–1); curves: ON-IDLE and k = 1, 3, 5, 7, 9]



Fig. 4 Power consumption against ρ (α = 10). [Plot: system power consumption vs. traffic intensity (0.5–1); curves: ON-IDLE and k = 1, 3, 5, 7, 9]

Fig. 5 Power consumption against α (ρ = 0.5). [Plot: total power cost vs. setup rate α (0.01–100, log scale); curves: k = 1, 3, 5, 7, 9, each for the On–Off model and the ON-IDLE policy]



Figure 5 shows the power consumption against α for ρ = 0.5. We observe that the power consumption decreases with the setup rate α. Furthermore, there exists some threshold α_T such that the On–Off model is more power-saving than the On–Idle model if α > α_T. However, if α < α_T, it is more power-saving to keep the servers idle even when there is no waiting job.

7.3 Queue Length

Figure 6 shows the queue length against the setup rate α. It should be noted that for
the model without setup, the queue length does not depend on α. We observe that the
queue length of the model with setup time decreases with the setup rate and tends
to the queue length of the On–Idle model. Furthermore, the queue length increases
with the batch size k.

Fig. 6 Queue length against α (ρ = 0.5). [Plot: queue length (0.01–1000, log scale) vs. setup rate α (0.01–100, log scale); curves: k = 1, 3, 5, 7, 9, each for the On–Off model and the ON-IDLE policy]



8 Concluding Remarks

In this paper, we have considered the M^X/M/c queueing system with state-dependent setup rates. A server is turned off immediately after serving a job if there is no waiting customer. If there are some waiting customers, OFF servers are turned on according to a policy in which the setup rate depends on the number of active servers. This policy covers two important cases: the staggered setup policy, where the servers are set up one by one, and the vacation model, where a server goes on vacation once it has no job to process and returns to normal mode after the vacation time. Using a generating function approach, we have obtained the generating functions of the queue length. We have also obtained recursive formulae for computing the factorial moments of the number of waiting jobs. Numerical experiments have shown some insights into the performance of the system. Furthermore, it is also important to consider the case where a fixed number of servers are always kept ON in order to reduce the delay of customers. It is also interesting to find the relation between the decomposition formula in this paper and that of Fuhrmann and Cooper [10]. We have obtained generating functions for the joint queue lengths. A possible future work is to obtain the tail asymptotics for the joint queue lengths.

References

1. Adan, I.J., Van Leeuwaarden, J.S.H., Winands, E.M.: On the application of Rouche’s theorem
in queueing theory. Oper. Res. Lett. 34, 355–360 (2006)
2. Artalejo, J.R., Economou, A., Lopez-Herrero, M.J.: Analysis of a multiserver queue with setup
times. Queue. Syst. 51, 53–76 (2005)
3. Barroso, L.A., Holzle, U.: The case for energy-proportional computing. Computer 40, 33–37
(2007)
4. Burke, P.J.: Delays in single-server queues with batch input. Oper. Res. 23, 830–833 (1975)
5. Chen, Y., Das, A., Qin, W., Sivasubramaniam, A., Wang, Q., Gautam, N.: Managing server
energy and operational costs in hosting centers. ACM SIGMETRICS Perform. Eval. Rev. 33,
303–314 (2005)
6. Choudhury, G.: On a batch arrival Poisson queue with a random setup time and vacation period.
Comput. Oper. Res. 25, 1013–1026 (1998)
7. Choudhury, G.: An M X /G/1 queueing system with a setup period and a vacation period. Queue.
Syst. 36, 23–38 (2000)
8. Cromie, M.V., Chaudhry, M.L., Grassmann, W.K.: Further results for the queueing system
M X /M/c. J. Oper. Res. Soc. 30, 755–763 (1979)
9. Dean, J., Ghemawat, S.: MapReduce: simplified data processing on large clusters. Commun.
ACM 51(1), 107–113 (2008)
10. Fuhrmann, S.W., Cooper, R.B.: Stochastic decompositions in the M/G/1 queue with general-
ized vacations. Oper. Res. 33(5), 1117–1129 (1985)
11. Gandhi, A., Harchol-Balter, M.: How data center size impacts the effectiveness of dynamic
power management. In: Proceedings of 49th Annual Allerton Conference on Communication,
Control, and Computing (Allerton), pp. 1164–1169 (2011)

12. Gandhi, A., Harchol-Balter, M.: M/G/k with staggered setup. Oper. Res. Lett. 41, 317–320
(2013)
13. Gandhi, A., Harchol-Balter, M., Adan, I.: Decomposition results for an M/M/k with staggered
setup. ACM SIGMETRICS Perform. Eval. Rev. 38, 48–50 (2010)
14. Gandhi, A., Harchol-Balter, M., Adan, I.: Server farms with setup costs. Perform. Eval. 67,
1123–1138 (2010)
15. Gandhi, A., Gupta, V., Harchol-Balter, M., Kozuch, M.A.: Optimality analysis of energy-
performance trade-off for server farm management. Perform. Eval. 67, 1155–1171 (2010)
16. Gandhi, A., Harchol-Balter, M., Kozuch, M.A.: The case for sleep states in servers. In:
Proceedings of the 4th Workshop on Power-Aware Computing and Systems (2011). Article
no. 2
17. Greenberg, A., Hamilton, J., Maltz, D.A., Patel, P.: The cost of a cloud: research problems in
data center networks. ACM SIGCOMM Comput. Commun. Rev. 39, 68–73 (2008)
18. Mazzucco, M., Dyachuk, D.: Balancing electricity bill and performance in server farms with
setup costs. Future Gener. Comput. Syst. 28, 415–426 (2012)
19. Meisner, D., Gold, B.T., Wenisch, T.F.: PowerNap: eliminating server idle power. ACM Sigplan
Not. 44, 205–216 (2009)
20. Mitrani, I.: Service center trade-offs between customer impatience and power consumption.
Perform. Eval. 68, 1222–1231 (2011)
21. Mitrani, I.: Trading power consumption against performance by reserving blocks of servers.
In: Computer Performance Engineering. Springer, Berlin (2013), pp. 1–15
22. Mitrani, I.: Managing performance and power consumption in a server farm. Ann. Oper. Res.
202, 121–134 (2013)
23. Phung-Duc, T.: Impatient customers in power-saving data centers. In: Proceedings of 21th
International Conference on Analytical and Stochastic Modeling Techniques and Applications
(ASMTA 2014), Lecture Notes in Computer Science LNCS 8499. Springer, Cham (2014), pp.
185–199
24. Phung-Duc, T.: Server farms with batch arrival and staggered setup. In: Proceedings of the
Fifth Symposium on Information and Communication Technology (SoICT). ACM, New York
(2014), pp. 240–247
25. Phung-Duc, T.: Multiserver queues with finite capacity and setup time. In: International
Conference on Analytical and Stochastic Modeling Techniques and Applications, Lecture
Notes in Computer Science LNCS 9081. Springer, Cham (2015), pp. 173–187
26. Phung-Duc, T.: Exact solutions for M/M/c/setup queues. Telecommun. Syst. 64(2), 309–324
(2017)
27. Schwartz, C., Pries, R., Tran-Gia, P.: A queuing analysis of an energy-saving mechanism in
data centers. In: Proceedings of International Conference on Information Networking (ICOIN)
(2012), pp. 70–75
28. Takagi, H.: Priority queues with setup times. Oper. Res. 38, 667–677 (1990)
29. Tian, N., Li, Q.L., Gao, J.: Conditional stochastic decompositions in the M/M/c queue with
server vacations. Stoch. Models 15, 367–377 (1999)
30. Wolfgang, B.: Analysis of M/G/1-queues with setup times and vacations under six different
service disciplines. Queue. Syst. 39, 265–301 (2001)
31. Zhang, Z.G., Tian, N.: Analysis of queueing systems with synchronous single vacation for
some servers. Queue. Syst. 45, 161–175 (2003)
Weak Convergence of Probability
Measures of Trotter–Kato Approximate
Solutions of Stochastic Evolution
Equations

T. E. Govindan

Abstract The paper considers semilinear stochastic evolution equations in real Hilbert spaces. The goal here is to establish the weak convergence of probability measures induced by mild solutions of Trotter–Kato approximating equations.

Keywords Stochastic evolution equations in infinite dimensions · Existence and uniqueness of a mild solution · Trotter–Kato approximations · Weak convergence of probability measures

2000 Mathematics Subject Classification 60H10

1 Introduction

Stochastic evolution equations (SEEs) in infinite dimensions have been investigated


by several authors, see Ichikawa [8], Da Prato and Zabczyk [2], and Govindan [7]
and the references cited therein for details. SEEs are well known to model real world
problems arising from many areas of science, engineering, and finance.
The aim of this paper is to study weak convergence of probability measures
induced by Trotter–Kato approximate mild solutions of SEEs in a real separable
Hilbert space X of the form, see, for instance, Govindan [6]:

dx(t) = [Ax(t) + f (t, x(t))]dt + g(t, x(t))dw(t), t > 0, (1.1)


x(0) = x0 , (1.2)

where A is the infinitesimal generator of a strongly continuous semigroup {S(t) :


t ≥ 0} of bounded linear operators on X; f : R + × X → X (R + = [0, ∞)),

T. E. Govindan ()
Department of Mathematics, ESFM-IPN, Mexico City, Mexico


g : R^+ × X → L(Y, X), and {w(t), t ≥ 0} is a Y-valued Wiener process. In Eq. (1.2), x_0 is an F_0-measurable X-valued random variable satisfying E|x_0|^p < ∞, p ≥ 2.
Kunze and van Neerven [10] and Govindan [6] studied Trotter–Kato approximations of Eq. (1.1). Such approximations have been considered earlier for other classes of stochastic equations, see Kannan and Bharucha-Reid [9] and Govindan [4, 5]. Motivated by Kannan and Bharucha-Reid [9] and Govindan [4], the approximation result given in Theorem 3.2 of Govindan [6], see Sect. 3 below, can be used to derive another approximation result, namely the weak convergence of the probability measure P_n induced by a mild solution of the Trotter–Kato approximating equation to the probability measure P induced by the mild solution of Eq. (1.1). So, the objective of this paper is to consider weak convergence of induced probability measures.
The rest of the paper is organized as follows: In Sect. 2, we give the preliminaries
and essentially work in the framework of Ichikawa [8] and Govindan [6]. The main
results are presented in Sect. 3. An example is given in Sect. 4.

2 Preliminaries

Let X, Y be a pair of real separable Hilbert spaces and L(Y, X) the space of
bounded linear operators mapping Y into X. For convenience, we shall use the
notations | · | and (·, ·) for norms and scalar products for both the Hilbert spaces.
We write L(X) for L(X, X). Let (Ω, F, P) be a complete probability space. A map x : Ω → X is a random variable if it is strongly measurable. Let x : Ω → X be a square integrable random variable, that is, x ∈ L²(Ω, F, P; X). The covariance operator of the random element x is Cov[x] = E[(x − Ex) ◦ (x − Ex)], where E denotes the expectation and g ◦ h ∈ L(X) for any g, h ∈ X is defined by (g ◦ h)k = g(h, k), k ∈ X. Then Cov[x] is a self-adjoint nonnegative trace class (or nuclear) operator and tr Cov[x] = E|x − Ex|², where tr denotes the trace. The joint covariance of any pair {x, y} ⊂ L²(Ω, F, P; X) is defined as Cov[x, y] = E[(x − Ex) ◦ (y − Ey)].
Let I be a subinterval of [0, ∞). A stochastic process {x} with values in X is a
family of random variables {x(t), t ∈ I }, taking values in X. Let Ft , t ∈ I , be a
family of increasing sub σ -algebras of the sigma algebra F . A stochastic process
{x(t), t ≥ 0} is adapted to Ft if x(t) is Ft measurable for all t ∈ I.
A stochastic process {w(t), t ≥ 0} in a real separable Hilbert space Y is a Wiener process if (a) w(t) ∈ L²(Ω, F, P; Y) and Ew(t) = 0 for all t ≥ 0, (b) Cov[w(t) − w(s)] = (t − s)W, where W ∈ L₁⁺(Y) is a nonnegative nuclear operator, (c) w(t) has continuous sample paths, and (d) w(t) has independent increments. The operator W is called the incremental covariance (operator) of the Wiener process w(t). Then w has the representation w(t) = Σ_{n=1}^{∞} β_n(t)e_n, where {e_n} (n = 1, 2, 3, . . .) is an orthonormal set of eigenvectors of W, and β_n(t), n = 1, 2, 3, . . . ,

are mutually independent real-valued Wiener processes with incremental covariances λ_n > 0, W e_n = λ_n e_n, and tr W = Σ_{n=1}^{∞} λ_n.
In the sequel, we will use the notation A ∈ G(M, α) for an operator A which
is the infinitesimal generator of a C0 -semigroup {S(t) : t ≥ 0} of bounded linear
operators on X satisfying ||S(t)|| ≤ M exp(αt), t ≥ 0 for some positive constants
M ≥ 1 and α, where || · || denotes the operator norm.
Now we make the system (1.1)–(1.2) more precise: Let A : D(A) ⊆ X → X
(D(A) is the domain of A) be the infinitesimal generator of a strongly continuous
semigroup {S(t) : t ≥ 0} in X. Let the functions f and g with f : R + × X → X,
and g : R + × X → L(Y, X) be Borel measurable maps.
Next, we introduce the notion of a mild solution for the system (1.1)–(1.2).
Definition 2.1 A stochastic process x : [0, T] → X defined on the probability space (Ω, F, P) is called a mild solution of Eq. (1.1) if
(i) x is jointly measurable and F_t-adapted and its restriction to the interval [0, T] satisfies ∫_0^T |x(t)|² dt < ∞, P-a.s., and
(ii) x(t) satisfies the integral equation

x(t) = S(t)x_0 + ∫_0^t S(t − s)f(s, x(s)) ds + ∫_0^t S(t − s)g(s, x(s)) dw(s),   t ∈ [0, T], P-a.s.

Note that the second integral in the last equality is the Itô stochastic integral. For
the definition and properties of this integral, see Ichikawa [8]. See also Da Prato and
Zabczyk [2] and Govindan [7].

3 Weak Convergence of Probability Measures

In this section, we shall establish weak convergence of induced probability measures


associated with the Trotter–Kato approximations of Eq. (1.1).
Let us state the following basic assumptions used in the rest of the paper.
Hypothesis (H1)
The nonlinear functions f(t, x) and g(t, x) satisfy the following Lipschitz and linear growth conditions for all t ≥ 0: For p ≥ 2,

|f(t, x) − f(t, y)| ≤ L_1|x − y|,   L_1 > 0, x, y ∈ X,
|g(t, x) − g(t, y)| ≤ L_2|x − y|,   L_2 > 0, x, y ∈ X,
|f(t, x)|^p ≤ L_3(1 + |x|^p),   L_3 > 0, x ∈ X,
|g(t, x)|^p ≤ L_4(1 + |x|^p),   L_4 > 0, x ∈ X.

Note that the constants L_i, i = 1, 2, 3, 4, do not depend on t.

Theorem 3.1 (Govindan [6]) Let A ∈ G(M, α) and the assumption (H1) hold. Then, Eq. (1.1) has a unique mild solution x ∈ C([0, T], L^p(Ω, X)), p ≥ 2. Moreover, for any p ≥ 1, we have sup_{0≤t≤T} E|x(t)|^{2p} ≤ k_{p,T}(1 + E|x_0|^{2p}), where k_{p,T} is a positive constant.
Consider the Trotter–Kato approximations of Eq. (1.1):

dx_n(t) = [A_n x_n(t) + f(t, x_n(t))]dt + g(t, x_n(t))dw(t),   t > 0,   (3.1)
x_n(0) = x_0,

where A_n, n = 1, 2, 3, . . . , is the infinitesimal generator of a strongly continuous semigroup {S_n(t) : t ≥ 0} of bounded linear operators on X.
For each n ≥ 1, as before, one can define a mild solution x_n ∈ C([0, T], L^p(Ω, X)), p ≥ 2, of Eq. (3.1) so that x_n(t) satisfies the stochastic integral equation

x_n(t) = S_n(t)x_0 + ∫_0^t S_n(t − s)f(s, x_n(s)) ds + ∫_0^t S_n(t − s)g(s, x_n(s)) dw(s),   t ∈ [0, T], P-a.s.

The following hypothesis is needed to consider the next result.


Hypothesis (H2)
(i) Let An ∈ G(M, α) for each n = 1, 2, 3, . . . ,
(ii) As n → ∞, An x → Ax for every x ∈ D, where D is a dense subset of X,
(iii) There exists a γ with Re γ > α for which (γI − A)D is dense in X; then the closure Ā of A is in G(M, α).
A consequence of the Trotter–Kato theorem is the following.
Proposition 3.1 (Pazy [11], Theorem 4.5, p. 88) Let the hypothesis (H2) hold. If S_n(t) and S(t) are the C_0-semigroups generated by A_n and A, respectively, then

lim_{n→∞} S_n(t)x = S(t)x,   x ∈ X,   (3.2)

for all t ≥ 0, and the limit in (3.2) is uniform in t for t in bounded intervals.
Theorem 3.2 (Govindan [6]) Suppose that the hypotheses (H1) and (H2) are satisfied. Then, there exists a unique mild solution x_n in C([0, T], L^p(Ω, X)), for each n = 1, 2, 3, . . . , of Eq. (3.1), and for each T > 0,

sup_{0≤t≤T} E|x_n(t) − x(t)|² → 0 as n → ∞,

where x(t) is a mild solution of Eq. (1.1).



As mentioned in the introduction, the Trotter–Kato approximation result given in Theorem 3.2 can be used to derive another interesting approximation result. For this, first observe that the solutions x and x_n, n = 1, 2, 3, . . . , are elements of C([0, T], L^p(Ω, X)), p ≥ 2. Let P and P_n be the probability measures on C([0, T], L^p(Ω, X)) induced by x and x_n, respectively. We shall show in what follows that P_n converges weakly to P as n → ∞. Towards this, note that Theorem 3.2 implies that every finite dimensional (joint) distribution of P_n converges weakly to the corresponding one of P. Our claim will be proved once we establish the tightness of the family {P_n, n = 1, 2, 3, . . .}.
In order to prove the weak convergence result, we need to make assumptions different from Hypothesis (H2).
First observe that a closed operator A generates a strongly continuous analytic semigroup on a Banach space X if and only if A is densely defined and sectorial, that is, there exist M ≥ 1 and ω ∈ R, the real line, such that {λ ∈ C : Re λ > ω} is contained in the resolvent set ρ(A) and

sup_{Re λ > ω} ||(λ − ω)R(λ, A)|| ≤ M;   (3.3)

the constants M and ω are called the sectoriality constants of A; in this context, we say A is sectorial of type (M, ω).
We now make the following further assumptions, see Kunze and van Neerven
[10]:
Hypothesis (H3)
(i) The operators A and A_n, for each n = 1, 2, 3, . . . , are densely defined, closed, and uniformly sectorial on X in the sense that there exist numbers M ≥ 1 and ω ∈ R such that A and each A_n is sectorial of type (M, ω).
(ii) The operators A_n converge to A in the strong resolvent sense:

lim_{n→∞} R(λ, A_n)x = R(λ, A)x

for all Re λ > ω and x ∈ X.


Under the Hypothesis (H3)(i), the operators A and A_n generate strongly continuous analytic semigroups S(t) and S_n(t), respectively, satisfying the uniform bounds

||S(t)||, ||S_n(t)|| ≤ Me^{ωt},   t ≥ 0,
||AS(t)||, ||A_n S_n(t)|| ≤ (M'/t) e^{ωt},   t > 0.
The following Trotter–Kato type approximation theorem is well known. For the
proof of part (i), see Arendt et al. [1, Theorem 3.6.1] and for part (ii), see Kunze and
van Neerven [10].

Proposition 3.2 (Kunze and van Neerven [10]) Let the Hypothesis (H3) hold.
(i) For all t ∈ [0, ∞) and x ∈ X, we have

lim_{n→∞} S_n(t)x = S(t)x,

and the convergence is uniform on compact subsets of [0, ∞) × X.
(ii) For all t ∈ (0, ∞) and x ∈ X, we have

lim_{n→∞} A_n S_n(t)x = AS(t)x,

and the convergence is uniform on compact subsets of (0, ∞) × X.


Theorem 3.3 Let the Hypotheses (H1) and (H3) hold. Then, there exists a unique mild solution x_n in C([0, T], L^p(Ω, X)), for each n = 1, 2, 3, . . . , of Eq. (3.1), and for each T > 0,

sup_{0≤t≤T} E|x_n(t) − x(t)|² → 0 as n → ∞,

where x(t) is a mild solution of Eq. (1.1). Moreover, P_n converges weakly to P as n → ∞.
Proof The existence and uniqueness of a mild solution x_n of Eq. (3.1) in C([0, T], L^p(Ω, X)) for each n = 1, 2, 3, . . . follows from Theorem 3.1. Next, the proof of

sup_{0≤t≤T} E|x_n(t) − x(t)|² → 0 as n → ∞

is exactly the same as in Theorem 3.2, wherein Proposition 3.1 was employed to show that

sup_{0≤t≤T} E|S_n(t)x − S(t)x|² → 0 as n → ∞,   (3.4)

for all x ∈ X, the limit in (3.4) being uniform in t for t in bounded intervals. However, to prove our theorem, we shall employ Proposition 3.2 (i) to show (3.4) instead of Proposition 3.1.
It remains to show that P_n converges weakly to P as n → ∞. The proof is divided into three steps.
Step 1 We claim that for each 0 < T < ∞, we have

sup_n sup_{0≤t≤T} E|x_n(t)|^p < ∞.   (3.5)

To show this, consider for each n = 1, 2, 3, . . . ,

x_n(t) = S_n(t)x_0 + ∫_0^t S_n(t − s)f(s, x_n(s)) ds + ∫_0^t S_n(t − s)g(s, x_n(s)) dw(s),   t ∈ [0, T], P-a.s.

Now using Lemma 1.9 from Ichikawa [8], it follows that

E|x_n(t)|^p ≤ 3^{p−1} { E|S_n(t)x_0|^p + E|∫_0^t S_n(t − s)f(s, x_n(s)) ds|^p + E|∫_0^t S_n(t − s)g(s, x_n(s)) dw(s)|^p }
≤ 3^{p−1} { M^p exp(pωT) E|x_0|^p + T^{p−1} M^p exp(pωT) E∫_0^t |f(s, x_n(s))|^p ds + c(p, T) M^p exp(pωT) E∫_0^t |g(s, x_n(s))|^p ds },

where c(p, T ) > 0 is a constant.


Hence Hypothesis (H1) yields

E|x_n(t)|^p ≤ 3^{p−1} { M^p exp(pωT) E|x_0|^p + M^p exp(pωT)[T^{p−1}L_3 + c(p, T)L_4] ∫_0^t (1 + E|x_n(s)|^p) ds }
≤ 3^{p−1} { M^p exp(pωT) E|x_0|^p + M^p exp(pωT)[T^{p−1}L_3 + c(p, T)L_4] T + M^p exp(pωT)[T^{p−1}L_3 + c(p, T)L_4] ∫_0^t E|x_n(s)|^p ds },

for each n = 1, 2, 3, . . .. An appeal to Bellman–Gronwall’s lemma proves the


claim (3.5).
Step 2 For an arbitrarily fixed 0 < T < ∞, we claim that, for each n =
1, 2, 3, . . . and 0 ≤ s < t ≤ T , there exists a constant C > 0 such that

E|xn (t) − xn (s)|4 ≤ C(t − s)2 . (3.6)



First, by Theorem 2.4 from Pazy [11], we have

|(S_n(t) − S_n(s))x_0|² ≤ T ∫_s^t |S_n(u)A_n x_0|² du ≤ T (M exp(ωT))² E|x_0|² (t − s).

So,

|(S_n(t) − S_n(s))x_0|⁴ ≤ C_1(t − s)²,   (3.7)

for some constant C_1 > 0.


Next, consider

∫_0^t S_n(t − u)f(u, x_n(u)) du − ∫_0^s S_n(s − u)f(u, x_n(u)) du
= ∫_0^s [S_n(t − u) − S_n(s − u)]f(u, x_n(u)) du + ∫_s^t S_n(t − u)f(u, x_n(u)) du
= I_1 + I_2, say.

Now, by Hypothesis (H1), we get

E|I_2|⁴ ≤ (t − s)³ E ∫_s^t M⁴ e^{4ω(t−u)} |f(u, x_n(u))|⁴ du
≤ (t − s)³ M⁴ exp(4ωT) E ∫_s^t L_3 (1 + |x_n(u)|⁴) du
≤ T M⁴ exp(4ωT) L_3 (T + sup_{0≤t≤T} E|x_n(t)|⁴)(t − s)²,

and

E|I_1|⁴ = E|∫_0^s ∫_{s−u}^{t−u} S_n(v)A_n f(u, x_n(u)) dv du|⁴
≤ T³ E ∫_0^t |∫_{s−u}^{t−u} S_n(v)A_n f(u, x_n(u)) dv|⁴ du
≤ T³ E ∫_0^t |∫_0^{t−s} S_n(v + s − u)A_n f(u, x_n(u)) dv|⁴ du
≤ T⁵ M'⁴ e^{4ωT} L_3 (T + sup_{0≤t≤T} E|x_n(t)|⁴)(t − s)².

Hence, from Step 1,

E|∫_0^t S_n(t − u)f(u, x_n(u)) du − ∫_0^s S_n(s − u)f(u, x_n(u)) du|⁴ ≤ C_2(t − s)²,   (3.8)

where C_2 > 0 is a constant.

Lastly, consider the stochastic integral term:

∫_0^t S_n(t − u)g(u, x_n(u)) dw(u) − ∫_0^s S_n(s − u)g(u, x_n(u)) dw(u)
= ∫_0^s [S_n(t − u) − S_n(s − u)]g(u, x_n(u)) dw(u) + ∫_s^t S_n(t − u)g(u, x_n(u)) dw(u)
= I_3 + I_4, say.

By applying Lemma 7.2 from Da Prato and Zabczyk [2, p. 182] and exploiting Hypothesis (H1), we get

E|I_4|⁴ ≤ K E[∫_s^t |S_n(t − u)g(u, x_n(u))|² du]²
≤ K M⁴ exp(4ωT) E[∫_s^t |g(u, x_n(u))|² du]²
≤ 2K M⁴ exp(4ωT) T² L_4 (T + sup_{0≤t≤T} E|x_n(t)|⁴)(t − s)²,

where K > 0 is a constant, and

E|I_3|⁴ = E|∫_0^s ∫_{s−u}^{t−u} S_n(v)A_n g(u, x_n(u)) dv dw(u)|⁴
≤ K E[∫_0^t |∫_{s−u}^{t−u} S_n(v)A_n g(u, x_n(u)) dv|² du]²
≤ K E[∫_0^t |∫_0^{t−s} S_n(v + s − u)A_n g(u, x_n(u)) dv|² du]²
≤ T² K M'⁴ e^{4ωT} L_4 (T + sup_{0≤t≤T} E|x_n(t)|⁴)(t − s)²,

Thus, using (3.5), we obtain

E|∫_0^t S_n(t − u)g(u, x_n(u)) dw(u) − ∫_0^s S_n(s − u)g(u, x_n(u)) dw(u)|⁴ ≤ C_3(t − s)²,   (3.9)

where C_3 > 0 is a constant.
Combining all the estimates (3.7)–(3.9), the claim (3.6) follows.
Step 3 In Step 2, it was shown that x_n(t) converges to x(t) uniformly on compact intervals of [0, ∞) as n → ∞. Then, the family {P_n} of probability measures is tight on C([0, T], L^p(Ω, X)). This together with Theorem 3.2 implies that P_n → P weakly on C([0, T], L^p(Ω, X)). The proof is complete.
As an application of Theorem 3.3, we consider a classical limit theorem on the dependence of the stochastic evolution equation (1.1) on a parameter. For this, we shall follow Gikhman and Skorokhod [3, pp. 50–54].
Consider the family of stochastic evolution equations

dx_n(t) = [A_n x_n(t) + f_n(t, x_n(t))]dt + g_n(t, x_n(t))dw(t),   t > 0,   (3.10)
x_n(0) = x_0,

where A_n, n = 1, 2, 3, . . . , is the infinitesimal generator of a strongly continuous semigroup {S_n(t) : t ≥ 0} of bounded linear operators on X.
For each n = 1, 2, 3, . . . , one can define a mild solution x_n ∈ C([0, T]; L^p(Ω, X)), p ≥ 2, as before that satisfies the stochastic integral equation

x_n(t) = S_n(t)x_0 + ∫_0^t S_n(t − s)f_n(s, x_n(s)) ds + ∫_0^t S_n(t − s)g_n(s, x_n(s)) dw(s),   t ∈ [0, T], P-a.s.

Hypothesis (H4)
For each n = 1, 2, 3, . . . , the nonlinear functions f_n(t, x) and g_n(t, x) satisfy the following Lipschitz and linear growth conditions for all t ≥ 0: For p ≥ 2,

|f_n(t, x) − f_n(t, y)| ≤ L'_1|x − y|,   L'_1 > 0, x, y ∈ X,
|g_n(t, x) − g_n(t, y)| ≤ L'_2|x − y|,   L'_2 > 0, x, y ∈ X,
|f_n(t, x)|^p ≤ L'_3(1 + |x|^p),   L'_3 > 0, x ∈ X,
|g_n(t, x)|^p ≤ L'_4(1 + |x|^p),   L'_4 > 0, x ∈ X.

Note that the constants L'_i, i = 1, 2, 3, 4, do not depend on t.



We now make the following further assumption. See Gikhman and Skorokhod [3, p. 52].
Hypothesis (H5)
For each N > 0,

sup_{|x|≤N} |f_n(t, x) − f(t, x)| → 0 and sup_{|x|≤N} |g_n(t, x) − g(t, x)| → 0

as n → ∞ for each t ∈ [0, T].

Theorem 3.4 Suppose that the hypotheses (H1), (H3), (H4), and (H5) hold. Then, there exists a unique mild solution x_n in C([0, T], L^p(Ω, X)), for each n = 1, 2, 3, . . . , of Eq. (3.10), and for each T > 0,

sup_{0≤t≤T} E|x_n(t) − x(t)|² → 0 as n → ∞,

where x(t) is the mild solution of Eq. (1.1).


Proof The existence and uniqueness of a mild solution x_n of Eq. (3.10) in C([0, T], L^p(Ω, X)) for each n = 1, 2, 3, . . . follows from Theorem 3.1. Next, the proof of

sup_{0≤t≤T} E|x_n(t) − x(t)|² → 0 as n → ∞

is exactly the same as in Theorem 4.1 from Govindan [6], wherein Proposition 3.1 was employed to show that

sup_{0≤t≤T} E|S_n(t)x − S(t)x|² → 0 as n → ∞,   (3.11)

for all x ∈ X, the limit in (3.11) being uniform in t for t in bounded intervals. We shall instead employ Proposition 3.2 (i). This completes the proof.
Let P and P*_n be the probability measures on C([0, T], L^p(Ω, X)) induced by the mild solution x of Eq. (1.1) and the mild solution x_n of Eq. (3.10), respectively. We shall show, in what follows, that P*_n converges weakly to P as n → ∞.
Theorem 3.5 Let all the hypotheses of Theorem 3.4 hold. Then P*_n converges weakly to P as n → ∞.
Proof The proof follows as in Theorem 3.3 and is divided into three steps.
Step 1 We claim that for each 0 < T < ∞, we have

sup_n sup_{0≤t≤T} E|x_n(t)|^p < ∞.   (3.12)

Consider for each n = 1, 2, 3, . . . ,

x_n(t) = S_n(t)x_0 + ∫_0^t S_n(t − s)f_n(s, x_n(s)) ds + ∫_0^t S_n(t − s)g_n(s, x_n(s)) dw(s),   t ∈ [0, T], P-a.s.

As before, from Lemma 1.9 from Ichikawa [8] and Hypothesis (H4), for each n = 1, 2, 3, . . . , it follows that

E|x_n(t)|^p ≤ 3^{p−1} { M^p exp(pωT) E|x_0|^p + M^p exp(pωT)[T^{p−1}L'_3 + c(p, T)L'_4] T + M^p exp(pωT)[T^{p−1}L'_3 + c(p, T)L'_4] ∫_0^t E|x_n(s)|^p ds },

where c(p, T) > 0 is a constant. An appeal to the Bellman–Gronwall lemma proves (3.12).
Step 2 For an arbitrarily fixed 0 < T < ∞, we claim that, for each n = 1, 2, 3, . . . and 0 ≤ s < t ≤ T, there exists a constant C' > 0 such that

E|x_n(t) − x_n(s)|⁴ ≤ C'(t − s)².   (3.13)

As in Step 2 of Theorem 3.3, we have

|(S_n(t) − S_n(s))x_0|⁴ ≤ C'_1(t − s)²,   (3.14)

for some constant C'_1 > 0.

Next, consider

∫_0^t S_n(t − u)f_n(u, x_n(u)) du − ∫_0^s S_n(s − u)f_n(u, x_n(u)) du
= ∫_0^s [S_n(t − u) − S_n(s − u)]f_n(u, x_n(u)) du + ∫_s^t S_n(t − u)f_n(u, x_n(u)) du
= J_1 + J_2, say.

By Hypothesis (H4), we get

E|J_2|⁴ ≤ T M⁴ exp(4ωT) L'_3 (T + sup_{0≤t≤T} E|x_n(t)|⁴)(t − s)²,

and

E|J_1|⁴ = E|∫_0^s ∫_{s−u}^{t−u} S_n(v)A_n f_n(u, x_n(u)) dv du|⁴
≤ T⁵ M'⁴ e^{4ωT} L'_3 (T + sup_{0≤t≤T} E|x_n(t)|⁴)(t − s)².

Hence, from Step 1,

E|∫_0^t S_n(t − u)f_n(u, x_n(u)) du − ∫_0^s S_n(s − u)f_n(u, x_n(u)) du|⁴ ≤ C'_2(t − s)²,   (3.15)

where C'_2 > 0 is a constant.


Lastly, consider

∫_0^t S_n(t − u)g_n(u, x_n(u)) dw(u) − ∫_0^s S_n(s − u)g_n(u, x_n(u)) dw(u)
= ∫_0^s [S_n(t − u) − S_n(s − u)]g_n(u, x_n(u)) dw(u) + ∫_s^t S_n(t − u)g_n(u, x_n(u)) dw(u)
= J_3 + J_4, say.

By Lemma 7.2 from Da Prato and Zabczyk [2, p. 182] and exploiting Hypothesis (H4), we get

E|J_4|⁴ ≤ 2K' M⁴ exp(4ωT) T² L'_4 (T + sup_{0≤t≤T} E|x_n(t)|⁴)(t − s)²,

where K' > 0 is a constant, and

E|J_3|⁴ = E|∫_0^s ∫_{s−u}^{t−u} S_n(v)A_n g_n(u, x_n(u)) dv dw(u)|⁴
≤ T² K' M'⁴ e^{4ωT} L'_4 (T + sup_{0≤t≤T} E|x_n(t)|⁴)(t − s)²,

Thus, using (3.12), we obtain

E|∫_0^t S_n(t − u)g_n(u, x_n(u)) dw(u) − ∫_0^s S_n(s − u)g_n(u, x_n(u)) dw(u)|⁴ ≤ C'_3(t − s)²,   (3.16)

where C'_3 > 0 is a constant.
Combining all the estimates (3.14)–(3.16), (3.13) follows.
Step 3 In this step, we use similar arguments as in Theorem 3.3. The proof is complete.

4 An Example

In this section, we discuss an example from Kunze and van Neerven [10]. Consider the stochastic partial differential equation of the form:

∂z/∂t (t, x) = Az(t, x) + f(z(t, x)) + Σ_{k=1}^{K} g_k(z(t, x)) ∂w_k/∂t (t),   x ∈ O, t ≥ 0,   (4.1)
z(t, x) = 0,   x ∈ ∂O, t ≥ 0,
z(0, x) = ϕ(x),   x ∈ O,

where O is a bounded open domain in R^d and the w_k(t) are independent real-valued standard Wiener processes. Here, A is the second-order divergence form differential operator defined by

Az(x) = Σ_{i=1}^{d} ∂/∂x_i ( Σ_{j=1}^{d} a_{ij}(x) ∂z/∂x_j (x) ) + Σ_{j=1}^{d} b_j(x) ∂z/∂x_j (x),

whose coefficients a = (a_{ij}) and b = (b_j) satisfy suitable boundedness and uniform ellipticity conditions. The functions f and g_k are Lipschitz continuous. Let ||f||_Lip = sup_{t≠s} |f(t) − f(s)|/|t − s| denote the Lipschitz seminorm of a function f.
Hypothesis (H6)
Let a, a_n ∈ L^∞(O; R^{d×d}) and b, b_n ∈ L^∞(O; R^d). Let f, f_n, g_k, g_{k,n} : R → R be Lipschitz continuous. Assume that there exist finite constants κ, C such that:
(i) a, a_n are symmetric and ax · x, a_n x · x ≥ κ|x|² for all x ∈ R^d,
(ii) ||a||_∞, ||a_n||_∞, ||b||_∞, ||b_n||_∞ ≤ C,
(iii) ||f||_Lip, ||f_n||_Lip, ||g_k||_Lip, ||g_{k,n}||_Lip ≤ C,
(iv) lim_{n→∞} a_n = a, lim_{n→∞} b_n = b a.e. on O, and
(v) lim_{n→∞} f_n = f, lim_{n→∞} g_{k,n} = g_k pointwise on O.
Similarly, A_n can be defined. Let S_n(·) and S(·) be the strongly continuous analytic semigroups generated by A_n and A, respectively.
In order to reformulate the above equation in the abstract setting on the Banach space L^r(O), 1 < r < ∞, we use a variational approach. Consider the sesquilinear form

a[u, v] := ∫_O (a∇u) · ∇v + (b · ∇u)v dx

on the domain

D(a) := H_0^1(O).

The sectorial operator A on L^r(O) associated with a generates a strongly continuous analytic semigroup {S(t) : t ≥ 0}, which extrapolates to a consistent family of strongly continuous analytic semigroups {S^(r)(t) : t ≥ 0} on L^r(O). Let us denote their generators by A^(r). The forms a_n and the associated semigroups S_n^(r)(t) with generators A_n^(r) are defined likewise.
Lemma 4.1 (Kunze and van Neerven [10]) Let the Hypothesis (H6)(i), (ii), and (iv) hold. Then, the operators A^(r) and A_n^(r) satisfy Hypothesis (H3).
Lemma 4.2 (Kunze and van Neerven [10]) Suppose that Hypothesis (H6) (iii) and (v) hold. Then
(i) the maps f, f_n : L^r(O) → L^r(O) defined by

[f(z)](x) := f(z(x)),   [f_n(z)](x) := f_n(z(x)),

satisfy Hypotheses (H1) and (H4), and
(ii) the maps g, g_n : L^r(O) → L(R^K, L^r(O)) defined by

[g(z)h](x) := Σ_{k=1}^{K} g_k(z(x))(e_k, h),   [g_n(z)h](x) := Σ_{k=1}^{K} g_{k,n}(z(x))(e_k, h),

where {e_k}_{k=1}^{K} is the standard unit basis of R^K, satisfy Hypotheses (H1) and (H4).
Hence, Eq. (4.1) can be expressed in the abstract setting as Eq. (1.1) with A, f,
and g as defined above.

Acknowledgement This research is supported by SIP from IPN, Mexico.

References

1. Arendt, W., Batty, C.J.K., Hieber, M., Neubrander, F.: Vector-Valued Laplace Transforms and
Cauchy Problems. Monographs in Mathematics, vol. 96. Birkhäuser-Verlag, Basel (2001)
2. Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University
Press, Cambridge (1992)
3. Gikhman, I.I., Skorokhod, A.V.: Stochastic Differential Equations. Springer, Berlin (1972)
4. Govindan, T.E.: Autonomous semilinear stochastic Volterra integrodifferential equations in
Hilbert spaces. Dyn. Syst. Appl. 3, 51–74 (1994)
5. Govindan, T.E.: Trotter-Kato approximations of semilinear stochastic evolution equations. Bol.
Soc. Matemat. Mexicana 12, 109–120 (2006)
6. Govindan, T.E.: On Trotter-Kato approximations of semilinear stochastic evolution equations
in infinite dimensions. Stat. Probab. Lett. 96, 299–306 (2015)
7. Govindan, T.E.: Yosida Approximations of Stochastic Differential Equations in Infinite
Dimensions and Applications. Probability Theory and Stochastic Modelling Series, vol. 79.
Springer, Switzerland (2016)
8. Ichikawa, A.: Stability of semilinear stochastic evolution equations. J. Math. Anal. Appl. 90,
12–44 (1982)
9. Kannan, D., Bharucha-Reid, A.T.: On a stochastic integrodifferential evolution equation of
Volterra type. J. Integr. Eqns. 10, 351–379 (1985)
10. Kunze, M., van Neerven, J.M.A.M.: Approximating the coefficients in semilinear stochastic
partial differential equations. J. Evol. Eqns. 11, 577–604 (2011)
11. Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, Berlin (1983)
Stochastic Multiphase Models and Their
Application for Analysis of End-to-End
Delays in Wireless Multihop Networks

Vladimir Vishnevsky and Andrey Larionov

Abstract This paper presents a study of applying open queueing networks with
MAP /P H /1/N nodes for estimation of performance characteristics of wireless
networks with linear topology using either relay, or DCF channels. Basic properties
of such queueing networks are outlined along with Markovian arrival processes
(MAPs) and phase-type (PH) distributions fitting methods. Due to exponential
growth of the system state space in MAP /P H /1/N → · · · → •/P H /1/N queue-
ing networks, the exact calculation of its characteristics is practically impossible
for an arbitrary large number of nodes, and we propose an algorithm which finds
approximated results by iterative estimations of node parameters using departure
processes approximations with MAPs of smaller order. We use this approach to get
numerical results, which are further compared with the data obtained by Monte-
Carlo method. The comparison shows that the results obtained by both methods are
very close to each other, while the iterative approach requires significantly less time.
The paper provides results of fitting transmission delays using PH distributions and
end-to-end delays estimations for wireless networks with simple relay and IEEE
802.11 DCF channels. All numerical results are validated using a simulation model.

Keywords DCF · Wireless relay networks · Markovian arrival processes · PH distributions · Queueing networks · Multihop wireless networks

1 Introduction

Wireless networks are often used as permanent or temporary backbone networks,


for example, for transmitting data from sensors, organizing communication along
roads or pipelines, or for connecting base stations in cellular networks. Most often,
radio relay or IEEE 802.11 channels are used to build such networks. In both cases,
it is possible to build a network capable of transmitting sufficiently large amounts

V. Vishnevsky · A. Larionov ()


V.A. Trapeznikov Institute of Control Sciences of RAS, Moscow, Russia


Fig. 1 An example of a wireless multihop network with linear topology and its model

of information. However, errors that occur when transmitting signals in the air,
interference from neighboring stations, as well as features of the channel access
protocols can significantly affect the performance of the wireless networks. For
its evaluation, one can use various methods—from the construction of test-beds
with wireless equipment to analytical and simulation modeling. In cases where it
is necessary to estimate network parameters such as end-to-end delays or station
utilization, queueing models can be used. The use of queueing theory allows one to abstract from the complex features of data transmission protocols and applications, describing transmission times, intervals, and packet sizes with random variables.
A typical example of a wireless network with linear topology and its queueing model
is shown on Fig. 1.
One of the promising ways to model telecommunication networks is open
queuing networks MAP /P H /1/N → · · · → •/P H /1/N. In these queueing
networks applications traffic is modeled with Markovian arrival processes (MAP)
[13], and channel transmission delays with phase-type (PH) distributions. The use of
Markovian arrival processes allows to take into account the correlation present in the
traffic of real network applications [8, 12], while PH distributions provide enough
means to model rather complex transmission delays. Various questions of applying
these queueing systems for wireless networks performance evaluation were studied
in previous work [20–22].
The key problem in MAP /P H /1/N → · · · → •/P H /1/N queueing network
analysis is the exponential growth of the state space as the number of stations
increases. Because of this, numerical methods have to be used for the queueing
system properties estimation in case of large networks. The most widely used
approach here is Monte-Carlo method, in which the results are estimated from the
repeated sampling of the target characteristics. Another possible approach, described in this paper, is to replace departure processes with approximating MAPs of smaller order. These approximating MAPs are used as arrival processes at the next node's input, and have the same first moments and correlation lag values as the original departure MAP of higher order. Below we use both approaches for the evaluation of a tandem queueing network and compare the calculated end-to-end delay values. It will be shown that the iterative approximation approach provides good precision and requires significantly less time for computation.

Two types of wireless networks channels are considered in this paper: radio
relay and channels operating under IEEE 802.11 DCF (Distributed Coordination
Function). Radio relay channels use frequency duplexing, directed antennas and it
is assumed that no collisions take place during transmissions; channel access is very
similar to the mechanism used in wired networks. On the contrary, IEEE 802.11
DCF channel access is based on CSMA/CA scheme and assumes a competition
with possible collisions. In case of relay channels, the transmission delay is
defined by the payload transmission time plus constant intervals, preambles, and
headers transmissions. In case of DCF channels, transmission delay is also defined
with random channel listening intervals, acknowledgements, and possible packet
retransmissions.
For each type of the channel we find a PH distribution that models transmission
delays. To do this, we run simulation models of the channels, collect transmission
delays samples and use them to find the fitting PH distributions. These PH distri-
butions are used in MAP /P H /1/N → · · · → •/P H /1/N queueing networks.
For each queueing network we find end-to-end delays using Monte-Carlo method
and iterative algorithm with departures approximation, and compare the results with
end-to-end delays estimated from a multihop wireless network simulation.
The paper is organized as follows. In Sect. 2 we outline the related work, in
particular—approaches to estimation of transmission delays in wireless channels,
and MAP and PH fitting methods. In Sect. 3 we briefly describe the relay and
DCF channels and provide parameters values used in numerical experiment. Then,
in Sect. 4 we outline MAP /P H /1/N systems, MAP/PH fitting methods and
describe an iterative algorithm for estimation of tandem properties using departure
processes approximations. Section 5 provides numerical experiments results and
Sect. 6 concludes the paper.

2 Related Work

Since this work aims at the performance analysis of multihop wireless networks
using open queuing networks and methods for their approximate estimation, here
we consider studies related to estimating packet transmission delays in wireless
channels, as well as methods for fitting MAP flows and PH distributions.
A large number of papers are devoted to the performance analysis of IEEE
802.11 networks. In many cases the proposed methods are based on the Markov chain
proposed by Bianchi in [2]. This chain was originally used by the author to estimate
the network throughput in saturated mode, when there is always another
packet to transmit. Based on this model, many methods have been developed for
estimating the time of data transmission in DCF channels. For instance, in [1, 6, 15]
transmission delays are analyzed when the network operates in saturated mode, paper
[19] takes into account queueing delay, and papers [7, 18] estimate delays when the
network is unsaturated.
A significant number of papers are also devoted to the methods of fitting MAP
flows [3–5, 9–11, 14, 16] and PH distributions [3, 17]. Some authors [3, 10, 16]
focus on moments matching methods, while others use approaches based on the EM
(Expectation Maximization) algorithm [9, 14, 17].
In this paper, we will use the moments matching method to find a smaller order
MAP flow when reducing the state space of the departure flow. For fitting PH
distributions of the packet transmission delay, we will use the G-FIT method [17].
Initial data for fitting PH distributions of packet transmission delay will be obtained
analytically for a relay network and using a simulation model for a network with
DCF channels. Note that instead of simulation modeling, in further studies one can
use the estimates obtained from the works cited above.

3 Channel Access in Wireless Networks

The basic channel access mechanism used in IEEE 802.11 networks from the
earliest versions of the standard is Distributed Coordination Function (DCF). This
mechanism is based on the CSMA/CA access scheme and involves listening to the
channel for a random time before transmitting data. The success of a transmission
is confirmed by the receiver; in the absence of confirmation, a retransmission is
performed. Transmission errors (collisions) may occur due to simultaneous transmissions
of multiple stations. More details about DCF can be found in the IEEE 802.11
standard, or in one of the many papers (e.g., [2]). In our model example, we consider
a simplified version of DCF (see Fig. 2), not using the RTS/CTS mechanism, not
limiting the number of retransmissions, and also ignoring the post-backoff, since
accounting for it would lead to dependencies between successive service times and the
need to use a Markovian service process (MSP) instead of PH distributions.

Fig. 2 Basic channel access under control of DCF



Fig. 3 Basic channel access in wireless relay channel

Table 1 Parameter values used in the experiment

Parameter                  Value                        Unit   Used in
PHY header                 128                          bit    DCF, Relay
MAC DATA header            272                          bit    DCF, Relay
IP header                  160                          bit    DCF, Relay
MAC ACK frame              112 (+PHY header)            bit    DCF
Slot duration (σ)          50                           μs     DCF
IFS                        128                          μs     Relay
DIFS                       128                          μs     DCF
SIFS                       28                           μs     DCF
CWmin                      16                           –      DCF
CWmax                      1024                         –      DCF
Bitrate                    1000                         kbps   DCF, Relay
Payload avg. size          6344                         bit    DCF, Relay
Payload min. size          2000                         bit    DCF, Relay
Payload max. size          10,688                       bit    DCF, Relay
Traffic rates for relay    120, 250, 500                kbps   Relay
Traffic rates for DCF      20, 40, 60, 80, 120, 250     kbps   DCF

In radio relay channels (see Fig. 3) access is deterministic. In many cases it does
not involve sensing the channel for a random time interval, interference with other
stations is neglected, and there are also no acknowledgements.
In our model example we assume an ideal channel, i.e. transmissions that do not
collide are transmitted successfully (the bit error rate, BER, is assumed to be zero). In
a more general case, it is certainly desirable to use more realistic channel models.
In our numerical experiment we use the channel access parameter
values shown in Table 1. These parameters are basically the same as the values from
[2], which was required for model validation. However, higher-speed modern
IEEE 802.11 versions can be modeled if these parameters are updated. We assumed
that the IFS (interframe space) for a relay channel is equal to the DIFS (DCF interframe
space). We also omit preambles since in the ideal channel they are simply an
additional constant that can be counted in another interval (IFS for the relay channel
or DIFS for DCF). We also show the application traffic parameters at the bottom
of the table. Note that uniformly distributed payload sizes are used in both models,
while in relay networks modeling we will also use exponentially distributed and
constant payloads with the same mean values.

4 Open Queueing Tandem Networks with MAP /P H /1/N Nodes

In this section we briefly describe Markovian arrival processes and phase-
type distributions, MAP fitting using moments and lags matching, and present an
algorithm for approximate estimation of tandem queueing network properties in which
the departure process is approximated with a MAP of a given order.

4.1 Markovian Arrival Processes (MAP) and Phase-Type (PH) Distributions

A Markovian arrival process is defined by an irreducible continuous-time Markov
chain νt, t ≥ 0, with a finite state space {0, . . . , W}. The process νt, t ≥ 0, stays in
state ν for an exponentially distributed time with parameter λν, ν ∈ {0, . . . , W}. After
this time expires, the chain jumps from state ν to state ν̃ with probability p0(ν, ν̃)
if the transition is unobserved and p1(ν, ν̃) otherwise. An observed transition
generates a message. It is also assumed that the process cannot stay in the same state
(ν̃ = ν) without message generation. Matrices D0, D1 are used to define the MAP:

(D_0)_{\nu,\nu'} = \begin{cases} -\lambda_\nu, & \text{if } \nu = \nu', \\ \lambda_\nu\, p_0(\nu,\nu'), & \text{otherwise}, \end{cases} \qquad (D_1)_{\nu,\nu'} = \lambda_\nu\, p_1(\nu,\nu').

The matrix D = D0 + D1 defines an infinitesimal generator of the random
process νt, t ≥ 0. Its stationary probability vector θ is obtained from the system

θD = 0,   θ1 = 1,

where 0 is a row vector of zeros and 1 is a column vector of ones. The steady-state
probability vector π of the discrete-time Markov chain embedded at arrival instants,
with transition probability matrix P = (−D0)^{−1} D1, can be obtained as the solution of
the following linear system:

πP = π,   π1 = 1.

The average arrival intensity of a MAP is λ = 1/(π(−D0)^{−1}1). The k-th moment
and lag-k correlation can be expressed as

m_k = k!\, \pi (-D_0)^{-k} \mathbf{1}, \quad k \ge 1,  (1)

l_k = \frac{\lambda^2 \pi (-D_0)^{-1} P^k (-D_0)^{-1} \mathbf{1} - 1}{\lambda^2 \pi (-D_0)^{-2} \mathbf{1} - 1}, \quad k \ge 1.  (2)
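For illustration, formulas (1) and (2) are straightforward to evaluate numerically. The following minimal Python/NumPy sketch (an illustration of ours, not the authors' code; the function name and structure are assumptions) computes the arrival rate, moments, and lag correlations of a MAP given by D0 and D1:

```python
import numpy as np
from math import factorial

def map_characteristics(D0, D1, k_max=3):
    """Arrival rate, moments (1) and lag correlations (2) of a MAP; illustrative sketch."""
    D0, D1 = np.asarray(D0, float), np.asarray(D1, float)
    n = D0.shape[0]
    ones = np.ones(n)
    inv_negD0 = np.linalg.inv(-D0)
    # Embedded chain at arrival instants: P = (-D0)^{-1} D1, stationary vector pi.
    P = inv_negD0 @ D1
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    pi = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]
    lam = 1.0 / (pi @ inv_negD0 @ ones)          # average arrival intensity
    m = [factorial(k) * pi @ np.linalg.matrix_power(inv_negD0, k) @ ones
         for k in range(1, k_max + 1)]           # Eq. (1)
    denom = lam ** 2 * pi @ np.linalg.matrix_power(inv_negD0, 2) @ ones - 1.0
    l = [(lam ** 2 * pi @ inv_negD0 @ np.linalg.matrix_power(P, k) @ inv_negD0 @ ones - 1.0) / denom
         for k in range(1, k_max + 1)]           # Eq. (2)
    return lam, m, l
```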

A phase-type (PH) distribution is defined as the hitting time of the absorbing
state in a continuous-time Markov chain with a single absorbing state. Formally,
a random variable X is said to have a PH distribution, X ∼ PH(S, τ), if τ ∈ R^V is a
probability distribution and S ∈ R^{V×V} is a subinfinitesimal matrix, defining the initial
state probabilities and the transition rates between non-absorbing states, respectively.
The background Markov chain has the following generator matrix:

\begin{pmatrix} S & -S\mathbf{1} \\ \mathbf{0} & 0 \end{pmatrix}

The k-th moment E[X^k], X ∼ PH(S, τ), can be found via the expression

m_k = k!\, \tau (-S)^{-k} \mathbf{1}, \quad k \ge 1.  (3)
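A corresponding evaluation of the PH moments (3), again as an illustrative sketch rather than the authors' code:

```python
import numpy as np
from math import factorial

def ph_moments(S, tau, k_max=3):
    """Moments of a PH(S, tau) distribution via Eq. (3); illustrative sketch."""
    S, tau = np.asarray(S, float), np.asarray(tau, float)
    inv_negS = np.linalg.inv(-S)
    ones = np.ones(S.shape[0])
    return [factorial(k) * tau @ np.linalg.matrix_power(inv_negS, k) @ ones
            for k in range(1, k_max + 1)]
```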

Markovian arrival processes and MAP /P H /1/N queues satisfy the following
properties [20, 21]:
1. The result of sifting a MAP with constant probability is also a MAP;
2. The composition of a finite number of MAPs is a MAP;
3. The departure process of MAP /P H /1/N system is also a MAP.
Note that a MAP /P H /1/N queue can lose packets due to queue overflow,
and the flow of lost packets is also a MAP. Taking into account these properties,
it can be shown that the departure process from the first server is a MAP and,
consequently, the arrival processes to all succeeding servers are also MAPs, as well
as the departure processes. Thus an iterative procedure can be built to compute the
parameters of a queueing network [21]. However, the order of the departure MAP at
the i-th phase, i = 1, . . . , K, is O\big(W_1 \prod_{j=1}^{i} V_j (N_j + 2)\big), so it is impossible to use this
procedure without approximations for networks of arbitrary size.

4.2 MAP and PH Fitting

For PH fitting we will make use of the expectation-maximization (EM) method
implemented in the G-FIT algorithm [17], in which the PH distribution is found in the
form of a hyper-Erlang distribution. To fit MAPs, we will use the generalized method
of moments.

In the moments matching method, the elements of the matrices D0, D1 are considered
unknown. Assuming we know the first Km moments and the first Kl lags,
Eqs. (1) and (2) are used to build a system of equations for the MAP. MAP
fitting may then be described as the solution of an optimization problem constrained by
the values of the moments and the lag-k autocorrelation coefficients. Let mKm be
the vector of the first Km moments of the MAP and lKl the vector of the first Kl lags
given by (1) and (2), respectively; let μ and ν be the vectors of moments and lags
of the random process to be fitted. Using this notation, the problem of MAP
fitting can be formulated as the solution of the nonlinear algebraic system

m_{K_m}(D_0, D_1) = \mu, \qquad l_{K_l}(D_0, D_1) = \nu.  (4)

System (4) should be solved for D0 and D1 such that D = D0 + D1 is an
infinitesimal generator and D0 is a subgenerator. Under these restrictions, the system
may have no solution for some pairs (μ, ν) and order N; in that case a MAP with such
lags and moments does not exist. It should be noted that there are no known closed
form bounds on the moments and lag values of MAPs and PH distributions of
arbitrary order, which makes the problem much harder.
We suggest that an approximate solution of the system can be obtained via an
optimization problem as follows. Define a loss function L(·) = (| · |)^2 and a loss
functional

Q(D_0, D_1) = L(m_{K_m}(D_0, D_1) - \mu) + L(l_{K_l}(D_0, D_1) - \nu).  (5)

Then a proper MAP is found as a solution of (D_0, D_1) = \arg\min_{D_0, D_1} Q(D_0, D_1).
The problem described is generally nonconvex, which leads to local optima and requires
additional effort to randomize the initial vectors and search for the best solution.
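As an illustration of how such a fitting procedure might be organized, the sketch below minimizes the loss (5) with a random multistart. The parameterization of (D0, D1), the use of scipy.optimize, and the helper map_characteristics from the earlier sketch are assumptions of ours, and error handling is omitted:

```python
import numpy as np
from scipy.optimize import minimize

def fit_map(mu, nu, order=3, n_restarts=20, seed=0):
    """Fit a MAP of a given order to target moments mu and lags nu by
    minimizing the loss (5).  Illustrative sketch, not the authors' code."""
    rng = np.random.default_rng(seed)
    n = order

    def unpack(x):
        # Exp-transform keeps the rates non-negative; the diagonal of D0 is set
        # so that D0 + D1 has zero row sums (D0 is then a subgenerator).
        rates = np.exp(x).reshape(2 * n, n)
        D0, D1 = rates[:n].copy(), rates[n:]
        np.fill_diagonal(D0, 0.0)
        np.fill_diagonal(D0, -(D0.sum(axis=1) + D1.sum(axis=1)))
        return D0, D1

    def loss(x):
        D0, D1 = unpack(x)
        _, m, l = map_characteristics(D0, D1, max(len(mu), len(nu)))
        return (np.sum((np.array(m[:len(mu)]) - mu) ** 2)
                + np.sum((np.array(l[:len(nu)]) - nu) ** 2))

    best = None
    for _ in range(n_restarts):                    # random multistart against local optima
        res = minimize(loss, rng.normal(size=2 * n * n), method="Nelder-Mead",
                       options={"maxiter": 20000, "fatol": 1e-10})
        if best is None or res.fun < best.fun:
            best = res
    return unpack(best.x)
```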

4.3 Computing End-to-End Delays with Departure Approximation

We assume that the wireless network contains K channels, so the queueing network
consists of K queues. Let us denote the arrival MAP at phase i by Yi (Y1 is
known), the departure flow by Y′i, and the approximated departure MAP by Y″i. We can
now describe an iterative algorithm (Algorithm 1 below):
Besides this approximation procedure, we will also use Monte-Carlo method by
implementing the tandem queue in a discrete-event simulation system to compare
results.

Result: End-to-end delay T
i := 1;
T := 0;
while i ≤ K do
    if i > 1 then
        Yi := Y″i−1;
    end
    find matrices D′0, D′1 of the departure MAP Y′i (see [20, 21]);
    compute the response time Ti;
    compute the moments mKm and lags lKl of Y′i;
    find Y″i := MAP(D″0, D″1) by solving (4) with loss (5);
    T := T + Ti;
    i := i + 1;
end
Algorithm 1: Iterative procedure for MAP /P H /1/N tandem properties estimation with departure process approximation
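A schematic Python rendering of Algorithm 1 is given below. The callables departure_map and mean_response_time are hypothetical placeholders for the matrix-analytic computations of [20, 21] and must be supplied by the caller; fit_map and map_characteristics refer to the earlier sketches:

```python
import numpy as np

def tandem_end_to_end_delay(Y1, K, departure_map, mean_response_time, order=3):
    """Schematic version of Algorithm 1.  `departure_map(D0, D1)` and
    `mean_response_time(D0, D1)` are hypothetical placeholders."""
    T = 0.0
    Y = Y1                               # arrival MAP (D0, D1) of the current phase
    for _ in range(K):
        D0d, D1d = departure_map(*Y)     # departure MAP Y' of the MAP/PH/1/N queue
        T += mean_response_time(*Y)      # response time of the current phase
        _, m, l = map_characteristics(D0d, D1d, k_max=3)
        Y = fit_map(np.array(m), np.array(l), order=order)   # approximated MAP Y''
    return T
```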

5 Numerical Results

In the numerical experiment we estimate end-to-end delays by performing the
following four steps:
1. Fit PH distributions for the given channel using samples collected from the
channel simulation;
2. Estimate end-to-end delays using the Monte-Carlo method with the PH distributions
from the first step;
3. Estimate approximate end-to-end delays using the iterative Algorithm 1;
4. Compute end-to-end delays using wireless network simulation.
The simulation model used in steps 1 and 4 takes into account the actual channel access
protocols and models the interactions between nodes. A description of the model is
beyond the scope of this work. It should be noted that it is implemented in Python,
allows the data transmission process to be investigated in detail, and is
significantly simpler than the models implemented in NS-3 or OMNeT++. The source
code and documentation are available at GitHub.1
In all experiments we use the same Markovian arrival process A0, which was
fitted using the moments matching method from a sample of values of the random variable
|γ|, γ ∼ N(a0, σa). To model various arrival rates, we scale this MAP by
multiplying its matrices D0, D1 by the corresponding constants.
For DCF channels we assume only uniformly distributed payload sizes and data
rates from 20 kbps up to 250 kbps. For radio relay channels we consider different

1 Simulation model source code: https://github.com/larioandr/pycsmaca.



payload size distributions (uniform, exponential and constant) and data rates from
120 kbps up to 500 kbps.

5.1 Channel Delay Fitting with PH Distributions

We simulate the channels shown in Fig. 4 to collect samples for PH distribution fitting.
Several stations may compete for channel access in the DCF network, and all
stations have applications generating data at the same rate. Any two or more
simultaneous transmissions cause a collision and a retransmission after a longer random
backoff. In the radio relay channel there are no competing stations.
We assume that in the multihop wireless network with DCF channels the stations
are placed far enough from each other to neglect interference between two-hop
neighbors. Then each station in the network competes with either one neighbor
(for the boundary stations) or with two neighbors (for intermediate stations), see
Fig. 1. In the case of a one-hop network there are no competing stations, as well as in
the network with radio relay channels. For PH fitting we use the G-FIT algorithm [17].
Figure 5 illustrates mean values and standard deviations of transmission delays
in DCF channels depending on arrival traffic rate for 1, 2, and 3 competing stations,
and Fig. 6 shows the density functions of the transmission delay in DCF channel,
obtained from the fitted PH distribution and the collected samples from the channel
simulation.
Regarding the radio relay channel, Fig. 7 shows the probability density functions
for the original distributions of transmission delays along with the fitted PH
distributions. The analytic expression for the channel delay used here is τ = (ξ +
H)/B + IFS, where ξ is a random variable describing the payload size, H is the total length
of the MAC, PHY, and IP headers, B is the channel bitrate, and IFS is the interframe space;
all constant values are given in Table 1.
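For example, with the Table 1 values this expression is easy to evaluate; the numbers below are a back-of-the-envelope illustration of ours under those parameters, not part of the paper's toolchain:

```python
# Relay channel delay tau = (xi + H)/B + IFS with the Table 1 parameters.
H = 272 + 128 + 160        # MAC DATA + PHY + IP headers, bits
B = 1_000_000              # channel bitrate, bit/s (1000 kbps)
IFS = 128e-6               # interframe space, s

def relay_delay(payload_bits):
    return (payload_bits + H) / B + IFS

print(relay_delay(6344) * 1e3, "ms")   # mean payload of 6344 bits -> about 7.03 ms
```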

Fig. 4 Network topologies used in channel delay fitting



Fig. 5 Average transmission delay in DCF channels

Fig. 6 Probability density functions of transmission delay in DCF channels with 1, 2, and 3
colliding stations under various user data bitrates

Fig. 7 Probability density functions of transmission delay in relay channel under various payload
size distributions

5.2 Estimation of End-to-End Delays in Wireless Relay Networks

For each type of payload size distribution we found end-to-end delays in networks
with radio relay channels using the PH distributions found and described above.
In the approximate estimation with the iterative algorithm we approximated departure
processes with a Poisson process (which can be described as a MAP of order 1) and with
a more general MAP of order 3. The results are shown in Fig. 8.
Payload transmission takes more than 90% of the service time in the relay channel,
so the system utilization can be roughly approximated as the ratio of the payload
bitrate to the channel bitrate. It can be seen that if the utilization is under 0.25, the results
obtained using both the Monte-Carlo and the iterative departure approximation methods
are fairly close to each other in all cases except the exponential payload size distribution.
The cause of the error here is the service time dependency existing in the wireless
network (the payload size is fixed for each packet) but ignored in the queueing network,
where service times at different nodes are independent.
Note that in the case of the other payload size distributions the results obtained using the
Monte-Carlo method are very close to the results obtained with the iterative departure
approximation method with MAPs of order 3. However, if the departure processes
are fitted with Poisson processes, then the error grows as the utilization
coefficient increases.

Fig. 8 End-to-end delays in wireless networks with relay channels containing from 1 to 10 nodes

5.3 Estimation of End-to-End Delays in Networks with DCF Access

End-to-end delays were estimated for a wireless network with DCF channels in
the same way as for the network with radio relay channels, but using only the uniform
payload size distribution. In contrast to the queueing model for a radio relay
network, here we used different PH distributions for different nodes: boundary
servers used PH distributions fitted for the channel with two competing stations,
while intermediate servers used PH distributions fitted for the channel with three
competing stations. In the basic case of a network of size 1, we used the PH distribution
obtained from the channel without competing stations. End-to-end delay estimation
results are shown in Fig. 9. It can be seen that both the iterative departure approximation
and Monte-Carlo methods provide fairly accurate results until the payload bitrate
reaches 120 kbps.
The possible reason for the increase in error is the drop in service time with increasing
distance from the source that is observed in the real network under load (see Fig. 10).
Ways to take this effect into account in the queueing model are expected to be
considered in future work.

Fig. 9 End-to-end delays in wireless networks with DCF channels containing from 1 to 10 nodes

Fig. 10 Node response time in a network of size 10 with DCF channels



6 Conclusion

The paper presented the results of applying open queueing networks to calculate
end-to-end delays in wireless networks with radio relay and IEEE 802.11 DCF
channels. Methods for fitting realistic PH distributions of packet transmission times
were described, and a numerical experiment showed that under low
utilization the open queueing networks MAP /P H /1/N → · · · → •/P H /1/N can be
used to adequately evaluate end-to-end delays. At the same time, the dependence of
the service times at neighboring stations leads to an error with increasing utilization.
The key conclusion of this work is that the method of iterative property
estimation with approximation of the departure flow can be used to calculate the
parameters of a queueing network. The approximation by a Poisson arrival process
leads to errors earlier than the approximation by a "real" MAP flow (the third
order was considered in this work), but under a small load it also gives an
adequate result. At the same time, the approximation method has a much smaller
time complexity: for example, it took about 10 min to obtain estimates for the radio
relay network using the Monte-Carlo method on a computer with an i7 processor,
and only about 1 min 44 s using the iterative approximation algorithm, with both Poisson
and MAP-3 flows. In the case of a network with DCF channels, the
times were 5 min 16 s versus 1 min, respectively. The calculation algorithm plays
a significant role here: when evaluating with the Monte-Carlo method, it was
necessary to model a network of each size independently, while the approximation
method uses an iterative algorithm that allows one to obtain results for all networks of
a size not exceeding a specified value in one pass. In addition, the Monte-Carlo method
requires generating a sufficiently large number of events (packet arrivals
and service terminations), while in the iterative departure approximation method all
properties are computed analytically.
All numerical experiments whose results are presented in this work, as well as
some additional data, including the matrices of the fitted PH distributions and
measurement details, are available in the repository2 on GitHub. All simulation and
analytical models, and the numerical experiment itself, are written in Python.

Acknowledgement This work was partly financially supported by the Russian Foundation for
Basic Research, grant No. 18-57-00002.

References

1. Banchs, A., Serrano, P., Azcorra, A.: End-to-end delay analysis and admission control in
802.11 DCF WLANs. Comput. Commun. 29(7), 842–854 (2006)
2. Bianchi, G.: Performance analysis of the IEEE 802.11 distributed coordination function. IEEE
J. Sel. Areas Commun. 18(3), 535–547 (2000)

2 Experiment code: https://github.com/larioandr/2019-icaap-queues-model.



3. Bobbio, A., Horvath, A., Telek, M.: Matching three moments with minimal acyclic phase type
distributions. Stoch. Model. 21, 303–326 (2005)
4. Bodrog, L., Heindl, A., Horvath, G., Telek, M.: A Markovian canonical form of second-order
matrix-exponential processes. Eur. J. Oper. Res. 190, 459–477 (2008)
5. Casale, G., Zhang, E.Z., Smirni, E.: Trace data characterization and fitting for Markov
modeling. Perform. Eval. 67, 61–79 (2010)
6. Chatzimisios, P., Vitsas, V., Boucouvalas, A.: Throughput and delay analysis of IEEE 802.11
protocol. In: Proceedings 3rd IEEE International Workshop on System-on-Chip for Real-Time
Applications, pp. 168–174. IEEE, Piscataway (2002)
7. Dong, L.F., Shu, Y.T., Chen, H.M., Ma, M.D.: Packet delay analysis on IEEE 802.11 DCF
under finite load traffic in multi-hop ad hoc networks. Sci. China Ser. F Inf. Sci. 51(4), 408–
416 (2008)
8. Heyman, D., Lucantoni, D.: Modelling multiple IP traffic streams with rate limits. IEEE ACM
Trans. Netw. 11, 948–958 (2003)
9. Horvath, G., Okamura, H.: A fast EM algorithm for fitting marked Markovian arrival processes
with a new special structure. In: Computer Performance Engineering, pp. 119–133. Springer,
Berlin (2013)
10. Horvath, G., Buchholz, P., Telek, M.: A map fitting approach with independent approximation
of the inter-arrival time distribution and the lag correlation. In: Second International Confer-
ence on the Quantitative Evaluation of Systems, pp. 124–133 (2005)
11. Horvath, G., Reinecke, P., Telek, M., Wolter, K.: Heuristic representation optimization for
efficient generation of PH-distributed random variates. Ann. Oper. Res. 239, 643–665 (2016)
12. Klemm, A., Lindermann, C., Lohmann, M.: Modelling IP traffic using the batch Markovian
arrival process. Perf. Eval. 54, 149–173 (2008)
13. Neuts, M.: A versatile Markovian point process. J. Appl. Probab. 16, 764–779 (1979)
14. Okamura, H., Dohi, T.: Faster maximum likelihood estimation algorithms for Markovian arrival
processes. In: IEEE Sixth International Conference on the Quantitative Evaluation of Systems
(QEST’09) (2009)
15. Sakurai, T., Vu, H.: MAC access delay of IEEE 802.11 DCF. IEEE Trans. Wirel. Commun.
6(5), 1702–1710 (2007)
16. Telek, M., Horvath, G.: A minimal representation of Markov arrival processes and a moments
matching method. Perf. Eval. 64, 1153–1168 (2007)
17. Thummler, A., Buchholz, P., Telek, M.: A novel approach for fitting probability distributions
to real trace data with the EM algorithm. In: International Conference on Dependable Systems
and Networks (2005)
18. Tickoo, O., Sikdar, B.: Modeling queueing and channel access delay in unsaturated IEEE
802.11 random access MAC based wireless networks. IEEE/ACM Trans. Netw. 16(4), 878–
891 (2008)
19. Vardakas, J., Papapanagiotou, I., Logothetis, M., Kotsopoulos, S.: On the end-to-end delay
analysis of the IEEE 802.11 distributed coordination function. In: Second International
Conference on Internet Monitoring and Protection (ICIMP 2007), pp. 16–16. IEEE, Piscataway
(2007)
20. Vishnevski, V., Larionov, A., Ivanov, R.: An open queueing network with a correlated input
arrival process for broadband wireless network performance evaluation. In: International
Conference on Information Technologies and Mathematical Modelling, pp. 354–365 (2016)
21. Vishnevsky, V., Dudin, A., Kozyrev, D., Larionov, A.: Methods of performance evaluation of
broadband wireless networks along the long transport routes. In: Communications in Computer
and Information Science, vol. 601, pp. 72–85. Springer, Berlin (2016)
22. Vishnevsky, V., Larionov, A., Semenova, O., Ivanov, R.: State reduction in analysis of a tandem
queueing system with correlated arrivals. In: Communications in Computer and Information
Science, vol. 800, pp. 215–230. Springer, Berlin (2017)
Variance Laplacian: Quadratic Forms
in Statistics

Garimella Rama Murthy

Abstract In this research paper, it is proved that the variance of a discrete random
variable Z can be expressed as a quadratic form associated with a Laplacian matrix,
i.e.

Variance[Z] = X^T G X,

where G is a Laplacian matrix whose elements are expressed in terms of probabilities.
We formally state and prove the properties of the Variance Laplacian matrix G.
Some implications of the properties of such matrix to statistics are discussed. It
is reasoned that several interesting quadratic forms can be naturally associated with
statistical measures such as the covariance of two random variables. It is hoped that
VARIANCE LAPLACIAN MATRIX G will be of significant interest in statistical
applications. The results are generalized to continuous random variables also. It is
reasoned that cross-fertilization of results from the theory of quadratic forms and
probability theory/statistics will lead to new research directions.

Keywords Variance · Laplacian matrix · Eigenvalues · Eigenvectors ·


Quadratic form

1 Introduction

Structured matrices such as Toeplitz matrices naturally arise in various application
areas of mathematics, science, and engineering. Specifically, in probability theory
as well as statistics, the autocorrelation matrix of an Auto-Regressive (AR) random
process is a Toeplitz matrix. Auto-Regressive stochastic processes find many
applications in stochastic modeling. Motivated by practical considerations, detailed

G. Rama Murthy ()


Mahindra Ecole Centrale, Hyderabad, India
e-mail: [email protected]


research efforts went into understanding the properties of Toeplitz matrices (such
as connections to orthogonal polynomials). For instance, considerable research
effort went into efficiently inverting a Toeplitz matrix (such as Levinson–Durbin
algorithm).
In the research area of Graph theory, a structured matrix called Laplacian
naturally arises [1]. It is defined utilizing the adjacency matrix of a graph (which
essentially summarizes the adjacency information associated with the vertices of the
graph). Thus, the Graph Laplacian was subjected to detailed study and several new
properties of it are discovered. Some of these properties have graph-theoretic
significance.
Effectively, researchers are interested in discovering the connections between
concepts in probability/statistics and structured matrices. Discrete random variables
find many applications in Statistics. Thus, a curious natural question is to see
whether structured matrices are naturally associated with scalar measures of discrete
random variables, such as the moments.

2 Review of Related Literature

In the field of mathematics, research related to quadratic forms has a long history
dating back to the time of Fermat, Bhaskara, and others. Several interesting results,
such as Rayleigh's theorem, were discovered and proved. Quadratic forms have
connections to such diverse areas as topology, differential geometry, etc. To the
best of our knowledge, the author discovered for the first time that the variance of
a discrete random variable can be expressed as the quadratic form associated with
a Laplacian matrix (of probabilities) [3, 4]. This discovery motivated the author
to express other statistical/probabilistic measures as quadratic forms. This line of
research enables cross-fertilization of ideas between probability theory/Statistics
and the theory of quadratic forms.

3 Variance of a Discrete Random Variable: Laplacian Quadratic Form

Consider a discrete random variable Z with probability mass function
{p1, p2, . . . , pN}. The variance of Z is given by

Variance(Z) = Var(Z) = E[Z^2] − (E(Z))^2

Let the values assumed by the random variable Z be given by {T1, T2, . . . , TN}, and let
the associated vector of values assumed by Z be denoted by T. Hence, we have that

\mathrm{Var}(Z) = \sum_{i=1}^{N} T_i^2 p_i - \Bigg(\sum_{i=1}^{N} T_i p_i\Bigg)^2
= \sum_{i=1}^{N} T_i^2 p_i - \sum_{i=1}^{N}\sum_{j=1}^{N} T_i T_j p_i p_j
= T^{T} [D - \tilde{P}]\, T,

where D is a diagonal matrix whose diagonal elements are {p1, p2, . . . , pN} and
P̃ij = pi pj for all 1 ≤ i, j ≤ N.
Let G = D − P̃. Hence, we have that Var(Z) = T^T G T.

Thus, we have shown that the variance of the discrete random variable Z constitutes
a quadratic form associated with the matrix G. We now state the following well
known definition:

Definition 1 ([1]) A square matrix is called a Laplacian matrix if and only if all of its
diagonal elements are positive, all non-diagonal elements are non-positive,
and all row sums are zero. We now prove that the square matrix G is a
Laplacian matrix.
Lemma 1 The square matrix G is a Laplacian matrix.
Proof From the definition of G, we readily have that

Gii = pi − pi2 = pi (1 − pi )

Also, we have that G_{ij} = −p_i p_j for i ≠ j. Further,

\sum_{j=1}^{N} G_{ij} = G_{ii} + \sum_{\substack{j=1 \\ j\ne i}}^{N} G_{ij}
= p_i(1-p_i) - \sum_{\substack{j=1 \\ j\ne i}}^{N} p_i p_j
= p_i(1-p_i) - p_i(1-p_i) = 0.

Hence, the square matrix G is a Laplacian matrix. Q.E.D.
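A quick numerical illustration of the construction and of Lemma 1 (a sketch of ours, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
p = rng.random(N)
p /= p.sum()                              # a probability mass function
T = rng.normal(size=N)                    # values assumed by Z

G = np.diag(p) - np.outer(p, p)           # variance Laplacian  G = D - P~

# The quadratic form equals the variance of Z
var_direct = np.sum(T**2 * p) - np.sum(T * p) ** 2
assert np.isclose(T @ G @ T, var_direct)

# Laplacian properties: zero row sums and positive semi-definiteness
assert np.allclose(G @ np.ones(N), 0.0)
assert np.all(np.linalg.eigvalsh(G) >= -1e-12)
```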


Note In the case of specific discrete random variables (such as Bernoulli, Poisson,
Binomial, etc.), the associated Laplacian matrix can easily be determined. Also, if
the number of values assumed by the random variables is at most 5, the eigenvalues
of Laplacian matrix (roots of the associated characteristic polynomial) can be
determined by algebraic formulas (Galois Theory).

Example 1 Specifically, when the dimension of G is 2 (i.e. the random variable
Z is a Bernoulli random variable), we determine its eigenvalues and eigenvectors
explicitly. Let Probability{Z = 0} = q. Then we have that

G = \begin{pmatrix} q(1-q) & -q(1-q) \\ -q(1-q) & q(1-q) \end{pmatrix}

The eigenvalues are \{0,\ 2(q - q^2)\}. The orthonormal basis of eigenvectors is
\left\{ \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}, \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix} \right\}. When q = 1/2, the spectral radius is 1/2.
Note Suppose we consider a discrete random variable Z which assumes the values
{+1, −1}. In such case, it is easy to show that

Variance (Z) = 4q(1 − q)

Example 2 We now consider the discrete uniform random variable whose probability
mass function is given by {1/N, 1/N, . . . , 1/N}. The Variance Laplacian associated with
it is given by

G = \begin{pmatrix} \frac{N-1}{N^2} & \frac{-1}{N^2} & \cdots & \frac{-1}{N^2} \\ \frac{-1}{N^2} & \frac{N-1}{N^2} & \cdots & \frac{-1}{N^2} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{-1}{N^2} & \frac{-1}{N^2} & \cdots & \frac{N-1}{N^2} \end{pmatrix}

Since the sum of the absolute values of the elements in every row is the same, the spectral
radius Sp(G) can be bounded (using a well known result in linear algebra):

Sp(G) \le \frac{2(N-1)}{N^2}, \quad \mathrm{Trace}(G) = \frac{N-1}{N}, \quad \mathrm{Determinant}(G) = 0.

Since G is a right circulant matrix, from linear algebra its eigenvalues as well as
eigenvectors can be explicitly determined (here the eigenvalues are 0 and 1/N, the latter
with multiplicity N − 1).
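A quick numerical check of the uniform case, say for N = 6 (an illustration of ours):

```python
import numpy as np

N = 6
p = np.full(N, 1.0 / N)
G = np.diag(p) - np.outer(p, p)

print(np.round(np.linalg.eigvalsh(G), 6))      # one zero eigenvalue, the rest equal 1/N
print(np.isclose(np.trace(G), (N - 1) / N))    # Trace(G) = (N-1)/N
print(np.isclose(np.linalg.det(G), 0.0))       # Determinant(G) = 0
```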
Note The matrix −G constitutes a generator matrix of a finite state space Continuous
Time Markov Chain (CTMC). Thus a discrete random variable can be associated
with a CTMC. In general, since G is a symmetric matrix, it is completely specified
by its eigenvalues and eigenvectors.
Now, we briefly summarize a few properties of the G matrix that readily follow.
Let e be a column vector of 1's (ONES), i.e. e = (1, 1, \ldots, 1)^T.

• From Lemma 1, we have that Ge ≡ 0. Hence '0' is an eigenvalue of G and the
corresponding eigenvector is e.
• Since Variance[Z] is non-negative, we have that the quadratic form T^T G T ≥ 0
for all vectors T. Hence the Laplacian matrix G is a positive semi-definite
matrix. Thus, all eigenvalues of G are real and non-negative.
We now derive an important property of G in the following lemma.
Lemma 2 The spectral radius, μmax i.e. largest eigenvalue of G is less than or
equal to 1/2.
Proof From linear algebra (particularly matrix norms), it is well known that the
spectral radius of any square matrix A i.e. Sp(A) is bounded in the following
manner:

Minimum absolute row sum (A) ≤ Sp(A) ≤ Maximum absolute row sum (A)

But, in the case of the Laplacian matrix G, we have that

\sum_{j=1}^{N} |G_{ij}| = 2 p_i (1 - p_i) \quad \text{for all } i.

Hence, using the above fact from linear algebra, we have that

\min_i \{2 p_i (1 - p_i)\} \le Sp(G) \le \max_i \{2 p_i (1 - p_i)\}.

Using the fact that the p_i's are probabilities, we now bound \max_i \{2 p_i (1 - p_i)\}.
Let f(p_i) = 2 p_i (1 - p_i) = 2 p_i - 2 p_i^2. We now calculate the stationary points
of f(p_i).
Let f'(p_i) = 2 - 4 p_i = 0. Hence p_i = 1/2 is the unique critical point in the feasible
region. Also we have that f''(p_i) = -4. Thus the critical point is a maximum of f(p_i).
Thus, f(1/2) = 1/2. Hence we readily have that the spectral radius of G satisfies
Sp(G) \le \frac{1}{2}. \qquad Q.E.D.
Note The function f (pi ) constitutes the well known logistic map whose properties
were investigated by several researchers.
Goals
• Goal 1: In view of the above discovery related to the variance of a discrete
random variable (i.e. Laplacian quadratic form), we would like to discover other
quadratic forms which naturally arise in probability/statistics.

• Goal 2: Once the interesting quadratic forms are identified, the results from
the theory of quadratic forms (for instance Rayleigh’s Theorem) are applied
to statistical/probabilistic quadratic forms. On the other hand, results related to
statistical/probabilistic quadratic forms are invoked to derive new results in the
theory of quadratic forms (such as inequalities between quadratic forms).
• We now derive a specific inequality associated with quadratic forms based on
statistical/probabilistic quadratic forms.
Consider a vector K whose components are all positive real numbers. It
readily follows that by means of the following normalization procedure, it can
be converted into a probability vector p (i.e. vector whose components are
probabilities and sum to one i.e. probability mass function of a random variable,
say Z). Let the vector of values assumed by the random variable Z be T .

p = \frac{K}{\sum_{i=1}^{N} K_i} = \frac{K}{\alpha}.

But, we know that the variance of the discrete random variable Z is non-negative.
Hence T^T \big( \mathrm{diag}(p) - p p^T \big) T \ge 0, where diag(p) is a diagonal matrix
whose diagonal entries are the components of the vector p. It readily follows
(on using the above normalization equation) that we have the following inequality:

\alpha \, \big( T^T \mathrm{diag}(K)\, T \big) \ge T^T K K^T T \quad \text{for all } T, K, \alpha.

We now state the following Theorem, useful in bounding the variance of Z.


Rayleigh’s Theorem The local/global optimum values of a quadratic form associ-
ated with a matrix B evaluated on the unit Euclidean hypersphere (constraint set) are
the eigenvalues of B and they are attained at the corresponding eigenvectors of B.
Using Rayleigh’s theorem, we arrive at the following result.
Lemma 3

\mathrm{Variance}(Z) \le \frac{1}{2} \big( L^2\text{-norm}(T) \big)^2.

Proof Formally, if the vector of values assumed by the random variable, i.e. T, lies
on the unit Euclidean hypersphere, then we have that

\mu_{\min} \le T^T G T \le \mu_{\max} \le \frac{1}{2}, \quad \text{if } L^2\text{-norm}(T) = 1.

Suppose L^2-norm(T) ≠ 1. Then, we readily have that T / (L^2\text{-norm}(T)) is a vector
whose L^2-norm is equal to one, and Rayleigh's Theorem can be applied to the
quadratic form based on it. Thus, it follows that

\mu_{\min} \big( L^2\text{-norm}(T) \big)^2 \le \mathrm{Variance}(Z) \le \mu_{\max} \big( L^2\text{-norm}(T) \big)^2.

Hence, by applying the earlier upper bound on the spectral radius, we have

\mathrm{Variance}(Z) \le \frac{1}{2} \big( L^2\text{-norm}(T) \big)^2. \qquad Q.E.D.

Corollary The non-zero lower bound on Variance(Z) is given by (using \mu_{\min}):

\mu_{\min} \big( L^2\text{-norm}(T) \big)^2 \le \mathrm{Variance}(Z).

Property (iv) Now, we consider the sum of the eigenvalues of G, i.e. Trace(G).
It readily follows that

\mathrm{Trace}(G) = \sum_{i=1}^{N} p_i (1 - p_i) = \sum_{i=1}^{N} p_i - \sum_{i=1}^{N} p_i^2 = 1 - \sum_{i=1}^{N} p_i^2 = \sum_{i=1}^{N} \mu_i.

Since Trace(G) is the sum of the eigenvalues, we have the following obvious bounds:

N \mu_{\min} \le \mathrm{Trace}(G) \le N \mu_{\max}.

The following Lemma provides an interesting upper bound on Trace(G).

Lemma 4 Let G be an N × N matrix. Then Trace(G) has the following upper bound:

\mathrm{Trace}(G) \le \left( 1 - \frac{1}{N} \right).

Proof Let {p_1, p_2, \ldots, p_N} be the probability mass function of the random variable Z.
We now apply the Lagrange multipliers method to bound \sum_{i=1}^{N} p_i^2. The objective
function for the optimization problem is given by J(p_1, p_2, \ldots, p_N) = \sum_{i=1}^{N} p_i^2,
with the constraint that the probabilities sum to one. Hence the Lagrangian is given by

\mathcal{L}(p_1, p_2, \ldots, p_N) = \sum_{i=1}^{N} p_i^2 + \alpha \left( \sum_{i=1}^{N} p_i - 1 \right).

Now, we compute the critical point and the components of the Hessian matrix:

\frac{\partial \mathcal{L}}{\partial p_i} = 2 p_i + \alpha, \quad \frac{\partial^2 \mathcal{L}}{\partial p_i^2} = 2 \text{ for all } i, \quad \frac{\partial^2 \mathcal{L}}{\partial p_i \partial p_j} = 0 \text{ for all } i \ne j.

Hence, there is a single critical point and the Hessian matrix is positive definite
at the critical point. Thus, we conclude that the objective function has a unique
minimum, which occurs at

\frac{\partial \mathcal{L}}{\partial p_i} = 0, \quad \text{i.e. } p_i = \frac{-\alpha}{2}.

Using the constraint that the probabilities sum to one, we have \alpha = -\frac{2}{N}. Thus, the global
minimum occurs at p_i = \frac{1}{N} for all i.
Equivalently, we have the following upper bound on Trace(G):

\mathrm{Trace}(G) \le \left( 1 - \frac{1}{N} \right). \qquad Q.E.D.

Corollary We now bound the second smallest eigenvalue \mu_2 of G. It is clear that
\mathrm{Trace}(G) \le 1 - \frac{1}{N}. Further, (N-1)\mu_2 \le \mathrm{Trace}(G). Hence \mu_2 \le \frac{1}{N}. Thus we have

\mu_2 \in \left[ 0, \frac{1}{N} \right] \quad \text{and} \quad \mu_i \in \left[ \frac{1}{N}, \frac{1}{2} \right] \text{ for } i \ge 3. \qquad Q.E.D.

Note The upper bound on Trace(G) is attained for the uniform probability mass
function, i.e. p_i = \frac{1}{N} for all i.

Note The finite condition number of the Laplacian matrix G is defined as \frac{\mu_{\max}}{\mu_{\min}}, where
\mu_{\min} is the smallest non-zero eigenvalue of G and \mu_{\max} is the spectral radius of G.
Using the content of Lemma 2 and the above corollary, the following lower bound on the
finite condition number of G follows:

\frac{\mu_{\max}}{\mu_{\min}} \ge 2 N p_{\min} (1 - p_{\min}),

where p_{\min} is the minimum of all the probabilities in the PMF of the random variable Z.

3.1 Connections to Statistical Mechanics

Note The expression for Trace(G) has a familiar relationship to the Tsallis entropy
concept from statistical mechanics. We have the following Definition:

Definition The Tsallis entropy of a probability mass function {p_1, p_2, \ldots, p_N} is
defined as

S_q(p) = \frac{k}{q-1} \left( 1 - \sum_{i=1}^{N} p_i^q \right),

where "k" is the Boltzmann constant and q is a real number.

We thus readily have that

\mathrm{Trace}(G) = k\, S_2(p),

where p specifies the probability mass function [Property (iv)].


Note It readily follows that Trace(G) is the DC/constant contribution to the variance
Laplacian based quadratic form evaluated on the unit hypercube (i.e. set of all
vectors whose components are +1 or −1). We readily have that

T
T GT = Trace(G) + terms dependent on T

Trace(G) is exactly equal to the scaled Tsallis entropy, kS2 (p) associated with the
probability mass function of the discrete random variable.
N q
In the following lemma, we derive interesting results related to i=1 pi .
Specifically, the set of inequalities can have interesting consequences for Tsallis
entropy.
Lemma 5 Consider probability mass function {p1 , p2 , . . . , pN }. The following
inequalities
N hold true:

p
i=1 i
2m+1
≤ N m+1
i=1 pi for all integer m. But

* +2

N 
N
pi2m+1 ≥ pim+1 for all m.
i=1 i=1

Hence
⎛ *N +2 ⎞
k ⎝ 
S2m+1 (p) ≤ 1− pim+1 ⎠ .
2m
i=1

Proof Since the p_i's are probabilities, we readily have that p_i^{2m+1} \le p_i^{m+1} for any
integer m. Thus, \sum_{i=1}^{N} p_i^{2m+1} \le \sum_{i=1}^{N} p_i^{m+1} for all integer m.
Now, consider a random variable Z which assumes the values \{p_1^m, p_2^m, \ldots, p_N^m\},
i.e. the values assumed are higher integer powers of the probabilities in the associated
PMF. We know that the variance of Z is non-negative:

\mathrm{Variance}(Z) = \mathrm{Var}(Z) = E[Z^2] - (E(Z))^2 \ge 0.

Thus it readily follows that E[Z^2] \ge (E(Z))^2 and hence

\sum_{i=1}^{N} p_i^{2m+1} \ge \left( \sum_{i=1}^{N} p_i^{m+1} \right)^2 \quad \text{for all } m.

Hence S_{2m+1}(p) \le \frac{k}{2m} \left( 1 - \left( \sum_{i=1}^{N} p_i^{m+1} \right)^2 \right), or equivalently

S_{2m+1}(p) \le S_{m+1}(p) - \frac{m}{2k} \big( S_{m+1}(p) \big)^2.

Corollary Suppose the random variable Z assumes probability values q_i's different
from the p_i's. Then, using the fact that the variance of Z is non-negative, we have the
following inequality:

\sum_{i=1}^{N} q_i^2 p_i \ge \left( \sum_{i=1}^{N} q_i p_i \right)^2.

It should be noted that both sides of the inequality are convex combinations of real
numbers. Q.E.D.
 
Note Suppose the values assumed by the random variable are \{1/p_1^m, 1/p_2^m, \ldots, 1/p_N^m\};
then, using the idea in the above proof, we have that

\sum_{i=1}^{N} \frac{1}{p_i^{2m-1}} \ge \left( \sum_{i=1}^{N} \frac{1}{p_i^{m-1}} \right)^2.

In the above inequalities, the probabilities can be rational numbers less than one.
Hence the above inequalities hold true between rational numbers.

Now, we compute Trace(G^2) (in the same spirit as Trace(G)) and briefly
study its properties. It readily follows that, treating G as a vector, we have that

\mathrm{Trace}(G^2) = \big( L^2\text{-norm}(G) \big)^2 = \sum_{i=1}^{N} \mu_i^2,

i.e. treating the set of eigenvalues as an eigenvalue vector, Trace(G^2) is the
square of the L^2-norm of such a vector (of eigenvalues, the smallest of which is zero).
Also, from the theory of matrix norms, the L^2-norm of a matrix is related to the
spectral radius. We have

\mathrm{Trace}(G^2) = \sum_{i=1}^{N} p_i^2 (1-p_i)^2 + \sum_{\substack{i=1 \\ i \ne j}}^{N} \sum_{j=1}^{N} p_i^2 p_j^2
= \sum_{i=1}^{N} \sum_{j=1}^{N} p_i^2 p_j^2 + \sum_{\substack{i=1 \\ i \ne j}}^{N} \sum_{j=1}^{N} p_i^2 p_j^2
= 2 \Bigg[ \sum_{\substack{i=1 \\ i \ne j}}^{N} \sum_{j=1}^{N} p_i^2 p_j^2 \Bigg] = \sum_{i=1}^{N} \mu_i^2 \quad (\text{with } \mu_{\min} = 0).

Hence, Trace(G^2) is divisible by 2. Using the definition of the Tsallis entropy S_q(p),
it can be readily seen that

\mathrm{Trace}(G^2) = 2 \left[ \frac{1}{k^2} (S_2(p))^2 - \frac{2}{k}\, S_2(p) + \frac{3}{k}\, S_4(p) \right].

Now, we derive an interesting property related to the eigenvectors of G.

Lemma 6 The right eigenvectors g's (whose transposes are the left eigenvectors) of
the variance Laplacian G that are different from the all-ones vector (i.e. e, which
lies in the right null space of G) are such that they lie in the null space of the matrix of
all ones, S, i.e. S_{ij} = 1 for all i, j.

Proof Since G is a symmetric matrix, the set of eigenvectors forms an orthonormal
basis. Also, the eigenvector corresponding to the ZERO eigenvalue of G is the
column vector of all ONES. Hence, we readily have the following fact:
g_i^T e = 0 for all i. Thus, the components of all other eigenvectors sum to zero.
Also, it readily follows that g^T S g = 0. Since S is a rank one matrix whose only
non-zero eigenvalue is "N" (with e being the associated eigenvector), all the
vectors g's lie in the null space of S (in fact they form a basis of the null space of S).
Hence the L^1-norm of g_i is divisible by 2 for all eigenvectors g's.
Also, let g be an eigenvector of G other than the all-ones vector e. We have that

\Bigg( \sum_{i=1}^{N} g_i \Bigg)^2 = \sum_{i=1}^{N} g_i^2 + 2\,(\text{pairwise products of distinct components of } g) = 0.

Hence it follows that g^T \tilde{S} g = -1, where \tilde{S} is a matrix all of whose diagonal
elements are zero and all of whose non-diagonal elements are 1.
Since the L^2-norm of g is one, it readily follows that the sum of pairwise products of
distinct components of g equals -\frac{1}{2}. \qquad Q.E.D.
A similar result can be derived based on the L^p-norm of g. Details are omitted
for brevity.
We now propose an interesting orthonormal basis which satisfies all the proper-
ties required of the set of eigenvectors of an arbitrary Laplacian matrix.
Definition The Hadamard basis (orthonormal) is the normalized set of rows/columns
of a symmetric Hadamard matrix H_m. For instance, it is well known that

H_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.

Hence the Hadamard basis is given by

\left\{ \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}, \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix} \right\}.

Note Two +1, −1 vectors are orthogonal if and only if the number of +1’s is equal
to the number of −1’s. Such vectors exist if and only if the dimension of vectors is
an even number. Further the sum of elements in such vectors is zero (as required by
the eigenvectors of an arbitrary Laplacian matrix which is not necessarily a variance
Laplacian matrix).
Note In view of Rayleigh’s Theorem, if the orthonormal basis of eigenvectors of
a Variance Laplacian G is the Hadamard basis, then the global maximum value
of associated quadratic form evaluated on the unit hypercube is attained at the
eigenvector corresponding to its spectral radius.

3.2 Spectral Representation of Symmetric Laplacian Matrix G

We now arrive at the spectral representation of the variance Laplacian matrix G, i.e.

G = P D P^T = \sum_{i=2}^{N} \mu_i\, f_i f_i^T,

where the \mu_i's are the eigenvalues with \mu_1 = 0 and the f_i are the normalized eigenvectors
of G. It should be noted that the column vector of all ones,
i.e. e = (1, 1, \ldots, 1)^T, is an eigenvector corresponding to the zero eigenvalue and \frac{1}{\sqrt{N}}\, e
is the associated normalized eigenvector.
We know that G is completely specified by the probability mass function of the
associated discrete random variable, i.e. {p_1, p_2, \ldots, p_N}. Hence we have that

\sum_{i=2}^{N} \mu_i f_{ij}^2 = p_j (1 - p_j) \quad \text{for } 1 \le j \le N \quad (\text{i.e. the diagonal elements of } G).

Also, we have that

\sum_{i=2}^{N} \mu_i f_{il} f_{im} = -p_l p_m \quad \text{for } l \ne m \text{ and } 1 \le l, m \le N \quad (\text{i.e. the off-diagonal elements of } G).

The orthogonal matrix P is of the following form:

P = \begin{pmatrix} \frac{1}{\sqrt{N}} & f_{21} & f_{31} & \cdots & f_{N1} \\ \frac{1}{\sqrt{N}} & f_{22} & f_{32} & \cdots & f_{N2} \\ \frac{1}{\sqrt{N}} & f_{23} & f_{33} & \cdots & f_{N3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{1}{\sqrt{N}} & f_{2N} & f_{3N} & \cdots & f_{NN} \end{pmatrix}

Since we have that P^T P = P P^T = I, the L^2-norm of the row and column vectors of
P is one.
The residue matrices E_i = f_i f_i^T are such that \sum_{i=1}^{N} E_i = I. Hence

\sum_{i=2}^{N} E_i = Q \quad \text{with } Q_{ii} = \frac{N-1}{N} \text{ for all } i \text{ and } Q_{ij} = -\frac{1}{N} \text{ for } i \ne j.

Also, we readily have that

\sum_{i=2}^{N} f_{ij}^2 = \frac{N-1}{N} \quad \text{and} \quad \sum_{i=2}^{N} f_{il} f_{im} = -\frac{1}{N} \quad \text{for } l \ne m \text{ and } 1 \le l, m \le N.

Note In the spirit of properties of Laplacian G, we can derive new results related to
Graph Laplacian. Thus new results in spectral graph theory can be readily derived.
Abstract Vector Space of Random Variables Consider a collection of discrete
random variables, all of which assume the same values. Specifically, consider two
random variables X, Y. From the research literature [2], E(XY) (i.e. the expected value
of their product) can be regarded as an inner product between the random variables
X, Y (regarded as abstract vectors). Let T be the vector of values assumed by the
random variables X, Y. It readily follows that E(XY) = T^T \tilde{P}\, T, where \tilde{P} can be
considered as a symmetric matrix.
Using the Dirac notation, E(XY) = \langle T, \tilde{P} T \rangle. It readily follows that the inner product
E(XY) is zero, i.e. the associated random variables are orthogonal, if T lies in the null
space of the symmetric matrix \tilde{P}. Thus, the null space of the matrix \tilde{P} determines
the space of orthogonal random variables.

3.3 Connections to Stochastic Processes

Let us first consider a discrete time, discrete state space stochastic process i.e. a
countable collection of discrete random variables. In view of the above results,
the variance values of random variables constitute a sequence of quadratic forms.
Thus, the sequence of scalar variance values constitute an infinite sequence of
real/complex numbers. We consider the following special cases:
• Consider the case where the random process is a strict sense stationary random
process. Hence, the sequence of variance values (i.e. the associated quadratic
forms) form a constant sequence (DC sequence).
• Consider the case where the random process constitutes a homogeneous Discrete
Time Markov Chain (DTMC). Since such a process exhibits an equilibrium
behavior, the sequence of variance values of the discrete random variables (i.e.
associated quadratic forms) converges to an equilibrium variance value (based on
the equilibrium probability mass function).

4 Other Interesting Quadratic Forms in Probability/Statistics

In this section, we investigate several other quadratic forms which are naturally
associated with measures such as covariance/Correlation of two random variables
which assume same values.
• In general, a quadratic form is of the form β = \sum_{i=1}^{N} \sum_{j=1}^{N} T_i T_j B_{ij}, where B_{ij}
has statistical or probabilistic significance e.g. B could be Toeplitz autocorrela-
tion matrix of an Auto-Regressive process. In fact B could be the state transition
matrix of a Discrete Time Markov chain (DTMC). Further B could be −Q, where
Q is the generator matrix of a CTMC.
• The Variance Laplacian related investigation naturally leads to studying the following
more general quadratic form associated with two jointly distributed random
variables X, Y that are "symmetric" in the sense that their "marginal probability
mass functions" are exactly the same and the values assumed by them are the same. Let
the common marginal probability mass function of the two random variables be
{p_1, p_2, \ldots, p_N}. In the spirit of the Laplacian G, we are motivated to introduce a
more general Laplacian matrix H, i.e.

H = D - \tilde{P},

where D = diag(p_1, p_2, \ldots, p_N), i.e. a diagonal matrix, and \tilde{P}_{ij} =
Probability{X = i, Y = j}, i.e. the matrix of joint probabilities.
With such a definition H need not be symmetric but is still Laplacian. If
\tilde{P} is a symmetric matrix (a stronger condition which ensures that the random
variables X, Y are "symmetric"), H will be a symmetric Laplacian matrix.
Let the common vector of values assumed by the random variables X, Y be
T. Hence, the quadratic form associated with H is given by T^T H T. Explicitly,
we have the following novel measure associated with the jointly distributed random
variables X, Y:

\theta = T^T H T = \sum_{i=1}^{N} T_i^2 p_i - \sum_{i=1}^{N} \sum_{j=1}^{N} T_i T_j\, \mathrm{Prob}\{X = i, Y = j\}
= E[X^2] - E(XY) = E[Y^2] - E(XY).

Note If X, Y are independent and identically distributed random variables, then
the above measure is their common variance. Also, if X and Y are the same, then
θ is zero.
• We now introduce the concept of "symmetrization" of jointly distributed
random variables, based on the following well known result associated with
quadratic forms:

T^T \tilde{P}\, T = \frac{1}{2}\, T^T (\tilde{P} + \tilde{P}^T)\, T, \quad \text{i.e. a symmetric quadratic form.}

We now introduce a new definition.

Definition Two jointly distributed random variables with joint PMF matrix \tilde{P}
(not necessarily symmetric) are "symmetrized" when they are associated with the
symmetric joint PMF matrix \frac{1}{2}\big( \tilde{P} + \tilde{P}^T \big).
Lemma 7 The Laplacian quadratic form T^T H T is always positive semi-definite.

Proof It readily follows that if E(XY) is non-positive, then "θ" is non-negative. Thus,
the more interesting case is when E(XY) is non-negative. In this case, we invoke
a well known result in the abstract vector space of random variables. From [2], the
following definition is well known.

Definition The second moment of the random variables X, Y, i.e. E(XY), is defined
as their inner product. Further, the ratio

\frac{E(XY)}{\sqrt{E[X^2]\, E[Y^2]}}

is the cosine of their angle β, i.e. say cos(β).

Hence, it is well known that |cos(β)| ≤ 1. Thus, in the case of random variables
X, Y whose joint probability mass function matrix \tilde{P} is symmetric, we have that
|E(XY)| \le E[X^2]. Thus, if E(XY) \ge 0, then E(XY) \le E[X^2].
Thus, the Laplacian quadratic form T^T H T is always positive semi-definite.
Q.E.D.
Corollary In this case, the covariance of the random variables considered above can
be bounded in the following manner:

C_{xy} = E(XY) - (E(X))^2.

Since the variance is non-negative, we have that E[X^2] \ge (E(X))^2, or -E[X^2] \le
-(E(X))^2. Hence, C_{xy} \ge -\theta. \qquad Q.E.D.
We now briefly consider familiar scalar measures routinely utilized in probabilis-
tic/statistical investigations and provide them with quadratic form interpretation.
• Covariance: By definition, the covariance of two random variables X, Y is given by

C_{xy} = E(XY) - E(X)E(Y).

Suppose the random variables X, Y assume the same vector of values T. Then
we have the following quadratic form interpretation of the covariance of X, Y:

C_{xy} = \sum_{i=1}^{N} \sum_{j=1}^{N} T_i T_j\, \mathrm{Prob}\{X = i, Y = j\} - \sum_{i=1}^{N} \sum_{j=1}^{N} T_i T_j\, p_i p_j
= T^T \tilde{P}\, T - T^T \tilde{J}\, T, \quad \text{where } \tilde{J}_{ij} = p_i p_j
= T^T (\tilde{P} - \tilde{J})\, T.

Thus, we have a quadratic form that is not Laplacian.


• From the above
 discussion, it readily follows that given a random variable, X
(E(X))2 , E X2 are arbitrary quadratic forms.

Correlation Matrix of Finitely Many Random Variables Let us consider
finitely many real valued discrete random variables, all of which assume the same
set of finitely many values. The correlation matrix of such random variables is
given by

R_N = \begin{pmatrix} R_{11} & \cdots & R_{1N} \\ \vdots & \ddots & \vdots \\ R_{N1} & \cdots & R_{NN} \end{pmatrix}, \quad \text{where } R_{ij} = E[X_i X_j].

From the above discussion, it is clear that the elements of R_N are quadratic forms in the set
of values assumed by the random variables, T (the diagonal elements are Laplacian
quadratic forms whereas the other elements are not necessarily Laplacian). It is
well known that R_N is non-negative definite. Using the above discussion, the
correlation matrix R_N can be written as R_N = T^T \circ P \circ T, where "\circ" is a suitably
defined product such as the Kronecker or Schur product. It should be noted that P is the
associated block symmetric matrix of probabilities.

5 Conclusions

In this research paper, it is proved that the variance of a discrete random variable
constitutes the quadratic form associated with a Laplacian matrix (whose elements
are expressed in terms of probabilities). Various interesting properties of the
associated Laplacian matrix are proved. Also, other quadratic forms which naturally
arise in statistics are identified. It is shown that cross-fertilization of results between
the theory of quadratic forms and statistics/probability theory leads to new research
directions.

References

1. Chung, F.R.K.: Spectral Graph Theory. American Mathematical Society, Providence (1994)
2. Papoulis, A., Pillai, S.U.: Probability, Random Variables and Stochastic Processes. Tata-
McGraw Hill, New Delhi (2002)
3. Rama Murthy, G.: Time optimal spectrum sensing. IIIT Technical Report No. 63, December
(2015)
4. Rama Murthy, G., Singh, R.P., Chilamkurti, N.: Wide band time optimal spectrum sensing. Int.
J. Internet Technol. Secur. Trans. 10(4), (2020)
On the Feynman–Kac Formula

B. Rajeev

Abstract In this article, given y : [0, η) → H, a continuous map into a Hilbert
space H, we study the equation

\hat{y}(t) = e^{\int_0^t c(s,\hat{y})\,ds}\, y(t),

where c(s, ·) is a given “potential” on C([0, η), H ). Applying the transformation


y → ŷ to the solutions of the SPDE and SDE underlying a diffusion, we study the
Feynman–Kac formula.

Keywords S′-valued process · Diffusion processes · Hermite–Sobolev space ·
Path transformations · Quasi-linear SPDE · Feynman–Kac formula · Translation
invariance

Subject Classification 2010 60G51, 60H10, 60H15

1 Introduction

One of the well-known formulas at the boundary of probability and analysis is
the Feynman–Kac formula u(t, x) = E_x\big(f(X_t)\, e^{-\int_0^t V(X_s)\,ds}\big), which represents the
solution u(t, x) of the evolution equation for the operator L − V , where L is the
infinitesimal generator of a diffusion (Xt , Px ), x ∈ Rd , V (x) ≥ 0 the potential,
and f the initial value [5]. We refer to [6, 8, 9] for basic material on this topic. It is
also known that this formula defines a sub-Markovian semi-group whose underlying
process (X̂t ) is obtained from (Xt ) by the operation known as “killing” according

B. Rajeev ()
Department of Theoretical Statistics and Mathematics, Indian Statistical Institute, Bangalore,
India
e-mail: [email protected]


to the multiplicative functional M_t := e^{-\int_0^t V(X_s)\,ds} [14]. It may be of interest to
have an answer to the following natural question: is it possible to have a “pathwise”
construction of the process X̂. The special case when (Xt ) satisfies an Itô stochastic
differential equation (SDE) is of interest. However, it turns out that it is the SPDE
satisfied by the distribution valued process (Yt ) := (δXt ) [10, 11] rather than the
SDE for (Xt ) that is more relevant for our purposes.
To motivate our “pathwise” construction we proceed as follows. Let H be a
separable real Hilbert space and consider C([0, η), H ), the space of continuous
functions on [0, η), 0 < η ≤ ∞, with values in H . Let u(t) ∈ C([0, η), H ) be
the solution of the following evolution equation in H , viz.

$$\partial_t u(t) = Lu(t)$$
with $L : H \to H$ a linear operator. Consider $\bar{u}(t) := u(t)\,e^{\int_0^t c(s,u)\,ds}$, where
c(s, ·) : C([0, η), H ) → R is a given function (the potential) for each 0 ≤ s < η.
Then integrating by parts, it is easy to see that ū solves

∂t ū(t) = Lū(t) + c(t, u)ū(t).

We would have a good and proper evolution equation for ū(t) if we were able
to write c(t, u) = ĉ(t, ū) for some ĉ(t, ·) : C([0, η), H ) → R. If the map
$u \to S(u) := \bar{u} \equiv u(t)\,e^{\int_0^t c(s,u)\,ds}$ were invertible, then we may define $\hat{c}(t,u) := c(t, R(u))$, where $R(u) = S^{-1}(u)$, so that $\hat{c}(t,\bar{u}) = c(t,u)$. It is easy to see that the
inverse R is a path transformation R : C([0, η), H ) → C([0, η), H ) induced by the
potential c : [0, η) × C([0, η), H ) → R as follows: For a given y ∈ C([0, η), H ),
R(y) ∈ C([0, η), H ) is the solution ŷ of the equation
$$\hat{y}(t) = y(t)\,e^{-\int_0^t c(s,\hat{y})\,ds}.$$

In Sect. 2, we prove existence and uniqueness to the above equation in Theo-


rem 2.2, using a fixed point argument. Thus the map R is well defined and injective.
Since −c satisfies the conditions of Theorem 2.2 whenever c does, the map R is
also onto. From a modeling point of view, R(y) may be viewed as a perturbation,
induced by the potential c(t, y), of the trajectory of a particle represented by y(·).
We deal with real Hilbert spaces as we consider applications only to the theory
of diffusions. However, complex Hilbert spaces and complex valued potentials
(with the corresponding interpretation of “amplitude” and “phase”) may also be
of interest.
Given a diffusion (Xt , 0 ≤ t < η, Px , x ∈ Rd ), we try to realize the Feynman–
Kac formula by applying the above transformation to the paths of the diffusion. We
remark here that we could choose H = Rd but this does not lead to the Feynman–
Kac formula (see Remark 3.2). However if we look at the process (Yt ) := (δXt )
up to time η, then this is a semi-martingale in a Hilbert space $S_p$ (the Hermite–Sobolev space) and is indeed the unique solution of a quasi-linear stochastic partial

differential equation (SPDE) [10, 11]; one may then look at the process $(\hat{Y}_t) := \big(e^{-\int_0^t V(X_s)\,ds}\,\delta_{X_t}\big)$ and, using the rules of stochastic calculus, write an SPDE for $\hat{Y}$.
Note that we can write $V(X_s) = \langle V, \delta_{X_s}\rangle =: c(s, \delta_{X_{\cdot}})$, $0 \le s < \eta$, if V belongs to a suitable class of test functions.
In Sect. 3, we show that when (Yt ) satisfies a quasi-linear SPDE in Sp then (Ŷt ) is
the solution of a new SPDE with a potential term, viz. c(t, Ŷ ) and whose coefficients
are defined on the path space C([0, η), Sp ) using the coefficients of the original
equation and the transformation discussed above. This transformation works at both
levels, viz. the SPDE and the PDE underlying the diffusion, although the “Kac
functional” (we use the terminology from [1]) induced by the potential function
V (x) is necessarily different in the two cases (see the discussion on diffusions in
Sect. 5). In Sect. 4, we allow c(·, ·) to depend also on x ∈ Rd and we show that the
above transformation may also be applied directly to the solutions of a class of non-
linear PDEs. We conclude in Sect. 5 with a discussion on two classes of examples
in both of which the functional c(t, x, y) depends on x albeit in different ways. The
second example that we discuss in Sect. 5 concerns diffusion processes and shows
also the connections that can arise between the transformations of the solutions to
the SPDE and the solutions to the associated PDE. In Sects. 3, 4, and 5, we work in
the framework of [11] to which we refer for results relating to SPDEs, the related
notations and references. See also Example 7 of [11] where we had briefly indicated
the results in Sect. 2.
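As a quick numerical illustration of the classical representation recalled above (not part of the argument that follows), the expectation can be estimated by Monte Carlo along Euler–Maruyama paths. In the sketch below the diffusion, potential, payoff, and discretization parameters are assumptions chosen only for this example.

```python
import numpy as np

# Illustrative Monte Carlo check of the representation
# u(t, x) = E_x[ f(X_t) * exp(-int_0^t V(X_s) ds) ].
# The drift, diffusion coefficient, potential and initial function below are
# arbitrary choices for this sketch, not taken from the paper.
def feynman_kac_mc(x0, t, f, V, drift, sigma, n_paths=100_000, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    x = np.full(n_paths, x0, dtype=float)
    kac = np.zeros(n_paths)                  # accumulates int_0^t V(X_s) ds
    for _ in range(n_steps):
        kac += V(x) * dt                     # left-endpoint quadrature of the potential
        x += drift(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.mean(f(x) * np.exp(-kac))

# Example: standard Brownian motion (L = (1/2) d^2/dx^2), V(x) = x^2, f = 1;
# the estimate approximates u(t, x) solving  du/dt = (1/2)u'' - x^2 u,  u(0, .) = 1.
u_est = feynman_kac_mc(x0=0.0, t=1.0,
                       f=lambda x: np.ones_like(x),
                       V=lambda x: x**2,
                       drift=lambda x: np.zeros_like(x),
                       sigma=lambda x: np.ones_like(x))
```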

2 A Transformation on Path Space

Let H be a separable real Hilbert space with norm denoted by $\|\cdot\|$. We consider for
0 ≤ T < ∞, the space C([0, T ], H ) of continuous functions y : [0, T ] → H with
the sigma field Bt , 0 ≤ t ≤ T generated by the coordinate maps up to time t. Let
η > 0. We denote by C([0, η), H ) the set of continuous maps y : [0, η) → H .
We denote the norm on $C([0,s],H)$ by $\|y\|_s := \sup_{u \le s}\|y(u)\|$. Fix $T > 0$. Let $c : [0,T] \times C([0,T],H) \to \mathbb{R}$ satisfy
C1. For $0 \le t \le T$, $|c(t,y_1) - c(t,y_2)| \le \beta\,\|y_1 - y_2\|_t$,
where β = β(T ) depends only on T .
We note that as a consequence of this condition we have the following: for
0 ≤ s ≤ T , and y1 , y2 ∈ C([0, T ], H ), y1 (u) = y2 (u), 0 ≤ u ≤ s implies
c(s, y1 ) = c(s, y2 ).
C2. For α > 0 and T > 0 there exists a constant M(α, T ) such that

|c(t, y)| ≤ M(α, T )

for $0 \le t \le T$ and $y \in B(0,\alpha) \equiv B(0,\alpha,T) := \{y \in C([0,T],H) : \|y\|_T \le \alpha\}$.

We note that if c(t, y) satisfies the above conditions, then so does −c(t, y). Let
$$\alpha(t) \equiv \alpha(t,y) := e^{-\int_0^t c(s,y)\,ds}$$

for y ∈ C([0, T ], H ). Given a y ∈ C([0, η), H ) for some η > 0, and 0 < T < η
we consider the following equation for ŷ in C([0, T ], H ), viz.
$$\hat{y}(t) = y(t)\,\alpha(t,\hat{y}) = y(t)\,e^{-\int_0^t c(s,\hat{y})\,ds} \qquad (2.1)$$

for 0 ≤ t ≤ T . We first derive an a priori estimate for the distance between two
solutions of (2.1) corresponding to inputs y1 and y2 .
Lemma 2.1 Let η > 0. Let y1 , y2 ∈ C([0, η), H ) and suppose ŷ1 , ŷ2 are the
corresponding solutions of (2.1). Then for every 0 < T < η, we have the following
estimate, viz.

$$\|\hat{y}_1 - \hat{y}_2\|_T \le M\,\|y_1 - y_2\|_T\, e^{TM\|y_2\|_T\,\beta\, e^{\delta}}, \qquad (2.2)$$
where $\delta > \beta T\,\|\hat{y}_1 - \hat{y}_2\|_T$ and $M := e^{\int_0^T |c(s,y_1)|\,ds}$.

Proof Let 0 < T < η and δ, M as above. Then


$$\int_0^T |c(s,\hat{y}_1) - c(s,\hat{y}_2)|\,ds \le \beta T\,\|\hat{y}_1 - \hat{y}_2\|_T < \delta.$$

Consequently, using the elementary estimate |1 − ex | ≤ eδ |x|, |x| < δ we have for
any 0 ≤ t ≤ T ,
$$\Big|1 - e^{\int_0^t (c(s,\hat{y}_1) - c(s,\hat{y}_2))\,ds}\Big| \le e^{\delta}\,\beta \int_0^T \|\hat{y}_1 - \hat{y}_2\|_s\,ds.$$

Then we have
$$\begin{aligned}
\|\hat{y}_1 - \hat{y}_2\|_T &= \Big\| (y_1 - y_2)\,e^{-\int_0^{\cdot} c(s,\hat{y}_1)\,ds} + y_2\,e^{-\int_0^{\cdot} c(s,\hat{y}_1)\,ds}\Big(1 - e^{\int_0^{\cdot}(c(s,\hat{y}_1) - c(s,\hat{y}_2))\,ds}\Big)\Big\|_T \\
&\le \|y_1 - y_2\|_T\, e^{\int_0^T |c(s,\hat{y}_1)|\,ds} + \|y_2\|_T\, e^{\int_0^T |c(s,\hat{y}_1)|\,ds}\, \sup_{t \le T}\Big|1 - e^{\int_0^t (c(s,\hat{y}_1) - c(s,\hat{y}_2))\,ds}\Big| \\
&\le M\,\|y_1 - y_2\|_T + M\,\|y_2\|_T\, e^{\delta}\,\beta \int_0^T \|\hat{y}_1 - \hat{y}_2\|_s\,ds \\
&\le M\,\|y_1 - y_2\|_T\, e^{TM\|y_2\|_T\,\beta\, e^{\delta}},
\end{aligned}$$

where the last step follows from Gronwall's inequality.


Let y1 ∈ C([0, t1 ], H ), y2 ∈ C([0, t2 ], H ), where 0 ≤ t1 < t1 + t2 < T .
In the proof of the following theorem we need a construction of “concatenation”
$y_1 \sqcup y_2 \in C([0,T],H)$ of the paths $y_1$ and $y_2$:
$$(y_1 \sqcup y_2)(s) := y_1(s)\,\mathbf{1}_{[0,t_1]}(s) + \big(y_2(s - t_1) - y_2(0) + y_1(t_1)\big)\,\mathbf{1}_{(t_1,\,t_1+t_2]}(s) + \big(y_2(t_2) - y_2(0) + y_1(t_1)\big)\,\mathbf{1}_{(t_1+t_2,\,T]}(s),$$
where $\mathbf{1}_A$ is the indicator of the set A. The following theorem is our main result.
Theorem 2.2 Let η > 0 and let c(t, y) satisfy C1 and C2 above for every T , 0 ≤
T < η. Then for a given y ∈ C([0, η), H ) there exists a unique ŷ ∈ C([0, η), H )
satisfying Eq. (2.1) for every T , 0 ≤ T < η.
Proof It suffices to show existence and uniqueness of Eq. (2.1) on [0, T ] for every
T < η. Using uniqueness, we can then patch up the solutions on different intervals
to get the required solution. So let 0 < T < η. Uniqueness of the solution on [0, T ]
is immediate from (2.2).
To show existence on [0, T] suppose we have the decomposition $(0,T] = \bigcup_{n=0}^{m-1} (T_n, T_{n+1}]$. Fix $n$, $0 \le n \le m-1$. Define $\hat{y}(0) = y(0)$. Suppose $\hat{y}(t)$, $t \in$
[0, Tn ] has been defined. We use an inductive procedure to extend ŷ to the interval
(Tn , Tn+1 ] as follows: We first solve the following equation on [0, Tn+1 − Tn ], viz.

ŷn (t) = y(t + Tn )α(Tn , ŷ)αn (t, ŷn ) − y(Tn )α(Tn , ŷ) (2.3)
= yn (t)αn (t, ŷn ) + an ,

where for y ∈ C([0, Tn+1 − Tn ], H ) and t ∈ [0, Tn+1 − Tn ],


$$\alpha_n(t,y) := e^{-\int_0^t c(s+T_n,\,\hat{y}\,\sqcup\, y)\,ds}, \qquad y_n(t) := y(T_n + t)\,\alpha(T_n,\hat{y}), \qquad a_n := -y(T_n)\,\alpha(T_n,\hat{y}).$$


We extend ŷ to the interval (Tn , Tn+1 ] as follows:

ŷ(t) := ŷn (t − Tn ) + ŷ(Tn ), t ∈ (Tn , Tn+1 ].

Then provided ŷ satisfies Eq. (2.1) in [0, Tn ], we have

ŷ(t) := ŷn (t − Tn ) + ŷ(Tn )


= y(t)α(Tn , ŷ)αn (t − Tn , ŷn ) − y(Tn )α(Tn , ŷ) + ŷ(Tn )
= y(t)α(Tn , ŷ)αn (t − Tn , ŷn )
= y(t)α(t, ŷ),

where in the third equality we have used the assumption that ŷ satisfies Eq. (2.1) in
[0, Tn ]. As for the fourth equality, we use the fact that ŷ on the interval (Tn , Tn+1 ]
is the concatenation of ŷ ∈ C([0, Tn ], H ) and ŷn ∈ C([0, Tn+1 − Tn ], H ), i.e.
$\hat{y}(t) = (\hat{y} \sqcup \hat{y}_n)(t)$, $t \in (T_n, T_{n+1}]$. Thus it suffices to solve (2.3) on $[0, T_{n+1}-T_n]$ for
a suitable partition 0 = T0 < T1 < · · · < Tm = T of [0, T ].
So let $\alpha > \sup_{s \le T}\|y(s)\|$, and let $c(\cdot,\cdot)$ satisfy C1 and C2 on [0, T] for some $M(\alpha) := M(\alpha,T)$ and $\beta$. Let $\varepsilon > 0$ be such that $\varepsilon\, e^{M(3\alpha)T} < \frac{\alpha}{2}$. By uniform continuity of y on [0, T] we can divide [0, T] into a finite number (say m) of subintervals $[T_n, T_{n+1}]$ with $T_m = T$ such that
$$\|y(t_1) - y(t_2)\| \le \varepsilon \qquad \forall\, t_1, t_2 \in [T_n, T_{n+1}],\ n = 0,\dots,m-1.$$

Next we choose $\delta > 0$ such that $|e^x - 1| < e^{\delta}|x|$ for $|x| < \delta$. By refining the partition if necessary we may assume without loss of generality that
$$\alpha M(3\alpha)\, e^{M(\alpha)T + \delta}\,(T_{n+1} - T_n) < \frac{\alpha}{2}, \qquad K_n := 2\alpha\beta\, e^{M(3\alpha)T + \delta}\,(T_{n+1} - T_n) < 1, \quad n = 0,\dots,m-1,$$
and
$$2M(3\alpha)(T_{n+1} - T_n) < \delta.$$

With this choice of the partition {Tn } we now solve (2.3) on [0, Tn+1 − Tn ] by a
fixed point argument. Let α be as above. Recall the definition of B(0, α) from C2
above, with T there replaced with Tn+1 − Tn . For z ∈ B(0, α), t ∈ [0, Tn+1 − Tn ]
let

Sn (z)(t) := y(t + Tn )α(Tn , ŷ)αn (t, z) − y(Tn )α(Tn , ŷ).

Note that $\alpha_n(t,z)$ depends on $\hat{y} \sqcup z$, where $\hat{y}$ is the solution on $[0, T_n]$. Assume that
ŷ ∈ B(0, α). Then we claim that

Sn : B(0, α) ⊂ C([0, Tn+1 − Tn ], H ) → B(0, α).

To see this we write $S_n(z)(t)$ as
$$S_n(z)(t) = \big(y(t+T_n) - y(T_n)\big)\,\alpha(T_n,\hat{y})\,\alpha_n(t,z) + y(T_n)\,\alpha(T_n,\hat{y})\,\big(\alpha_n(t,z) - 1\big).$$

Let $t \in [0, T_{n+1}-T_n]$. Then from the triangle inequality and the choice of $\varepsilon$ and $\{T_n\}$ we have
$$\begin{aligned}
\|S_n(z)(t)\| &\le \|y(t+T_n) - y(T_n)\|\,\alpha(T_n,\hat{y})\,\alpha_n(t,z) + \|y(T_n)\|\,\alpha(T_n,\hat{y})\,\big|\alpha_n(t,z) - 1\big| \\
&\le e^{M(3\alpha)T}\,\|y(t+T_n) - y(T_n)\| + \alpha\, e^{M(\alpha)T + \delta}\,\Big|\int_0^t c(u+T_n,\,\hat{y}\sqcup z)\,du\Big| \\
&\le \varepsilon\, e^{M(3\alpha)T} + \alpha M(3\alpha)\, e^{M(\alpha)T + \delta}\,(T_{n+1}-T_n) \le \frac{\alpha}{2} + \frac{\alpha}{2} = \alpha.
\end{aligned}$$
Note that in the second inequality we have used the fact that C2 implies
$$\Big|\int_0^t c(s+T_n,\,\hat{y}\sqcup z)\,ds\Big| < M(3\alpha)(T_{n+1}-T_n) < \delta, \qquad t \in [0, T_{n+1}-T_n],$$
and in the second and third inequalities above we have used the fact that $\|(\hat{y}\sqcup z)(t)\| \le 3\alpha$, $t \in [0,T]$.
We now show that the map Sn : B(0, α) → B(0, α), defined above is a
contraction. Let y1 , y2 ∈ B(0, α). For t ∈ [0, Tn+1 − Tn ],

$$\begin{aligned}
\|S_n(y_1)(t) - S_n(y_2)(t)\| &= \|y(t+T_n)\|\,\alpha(T_n,\hat{y})\,\alpha_n(t,y_2)\,\Big| e^{\int_0^t (c(s+T_n,\,\hat{y}\sqcup y_1) - c(s+T_n,\,\hat{y}\sqcup y_2))\,ds} - 1\Big| \\
&\le \|y(t+T_n)\|\,\alpha(T_n,\hat{y})\,\alpha_n(t,y_2)\, e^{\delta}\,\Big|\int_0^t \big(c(s+T_n,\,\hat{y}\sqcup y_1) - c(s+T_n,\,\hat{y}\sqcup y_2)\big)\,ds\Big| \\
&\le e^{TM(3\alpha) + \delta}\,\alpha\beta\, 2\,(T_{n+1}-T_n)\,\|y_1 - y_2\|_{T_{n+1}-T_n},
\end{aligned}$$

and by the definition of the constant $K_n$ we have
$$\|S_n(y_1) - S_n(y_2)\|_{T_{n+1}-T_n} \le K_n\,\|y_1 - y_2\|_{T_{n+1}-T_n}.$$
Since $K_n < 1$ by our choice, the map
$$S_n : C([0, T_{n+1}-T_n], B(0,\alpha)) \to C([0, T_{n+1}-T_n], B(0,\alpha))$$

is a contraction on a complete metric space and has a unique fixed point. Thus (2.3)
has a unique solution. This completes the proof of the Theorem.

Corollary 2.3 For y ∈ C([0, η), H ) let R(y) := ŷ, where ŷ is the unique solution
of (2.1). Then R is 1–1 and onto. Further for every t > 0, R : C([0, t], H ) →
C([0, t], H ) is a homeomorphism. In particular, for every t > 0, the map R :
(C([0, t], H ), Bt ) → (C([0, t], H ), Bt ) is a measurable isomorphism.
Proof To see that R is 1–1, suppose that R(y1 ) = R(y2 ). Then since this implies
ŷ1 = ŷ2 , we also have y1 = y2 . That R is onto follows from the observation that
if $\hat{y} \in C([0,\eta),H)$ is given and if we define $y(t) := \hat{y}(t)\,e^{\int_0^t c(s,\hat{y})\,ds}$, then clearly
R(y) = ŷ.
Note that for a given $y \in C([0,\eta),H)$, $R^{-1}(y) = y(\cdot)\,e^{\int_0^{\cdot} c(s,y)\,ds}$. Since $R^{-1}$
has the same form as R it suffices to show that R is continuous. But this is clear
from (2.2). The last statement follows from the continuity of R and the fact that the
Borel sigma field on C([0, η), H ) is the same as Bt .
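The fixed point construction behind Theorem 2.2 can also be carried out numerically. The sketch below performs a plain Picard iteration for Eq. (2.1) on a time grid; the use of real-valued paths, the particular potential, and iterating over the whole interval (rather than over the small subintervals used in the proof) are simplifying assumptions of this illustration.

```python
import numpy as np

# Picard iteration for  yhat(t) = y(t) * exp(-int_0^t c(s, yhat) ds)  on a grid.
# The potential c(s, path) may depend on the whole past of the path up to time s.
def solve_transformation(y, dt, c, n_iter=100, tol=1e-10):
    yhat = y.copy()                                       # initial guess
    for _ in range(n_iter):
        c_vals = np.array([c(k * dt, yhat[:k + 1]) for k in range(len(y))])
        integral = np.concatenate(([0.0], np.cumsum(0.5 * (c_vals[1:] + c_vals[:-1]) * dt)))
        new = y * np.exp(-integral)                       # one Picard step
        if np.max(np.abs(new - yhat)) < tol:
            return new
        yhat = new
    return yhat

# Example with a path-dependent potential c(t, y) = sup_{s <= t} |y(s)|,
# which satisfies C1 (with beta = 1) and C2 on bounded sets.
t_grid = np.linspace(0.0, 1.0, 501)
y = np.sin(2 * np.pi * t_grid)
yhat = solve_transformation(y, dt=t_grid[1] - t_grid[0],
                            c=lambda t, path: np.max(np.abs(path)))
```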

3 Application to Stochastic PDEs

In this section we discuss the applications to SPDEs of the results in the previous
section. We work in the framework of the Hermite–Sobolev spaces, Sp , p ∈ R, and
we refer to [4, 7, 11] for the results and notations that we use. Let $S$, $S'$ denote, respectively, the Schwartz space of rapidly decreasing smooth functions and its dual.
We refer to [2, 3] for results on stochastic calculus in Hilbert spaces. We work on a
probability space $(\Omega, \mathcal{F}, P)$ on which is given an r-dimensional Brownian motion
(Bt ). Let (FtB )t ≥0 be the filtration of (Bt ). We now consider solutions of the SPDE

dYt = L(Yt ) dt + A(Yt ) · dBt ; Y0 = Y, (3.4)

where L, Ai , i = 1, · · · r are quasi-linear partial differential operators of the form

$$L(y) := \frac{1}{2}\sum_{i,j=1}^{d} (\sigma\sigma^{t})_{ij}(y)\,\partial^2_{ij}y \;-\; \sum_{i=1}^{d} b_i(y)\,\partial_i y,$$
$$A_i(y) := -\sum_{j=1}^{d} \sigma_{ji}(y)\,\partial_j y,$$

where $\sigma_{ij}, b_i : S_p \to \mathbb{R}$ and $Y : \Omega \to S_p$ is independent of $(B_t)$.


In [11] we have proved existence and uniqueness of solutions to (3.4) and shown
that for a given $Y : \Omega \to S_p$, a unique solution $(Y_t, \eta)$ exists under a Lipschitz
condition on the coefficients σij and bi . Here 0 < η ≤ ∞ is the lifetime of the
process and if σij , bi are uniformly bounded on Sp , then η = ∞ almost surely (see
[11, Proposition 5.2]).
Let η > 0 be a fixed positive number, which later will also be allowed to be
random. Let c(·, ·) : [0, η) × C([0, η), Sp ) → R satisfy C1, C2 on bounded

intervals [0, T ], T < η. Given y ∈ C([0, η), Sp ) let ŷ be the solution of (2.1)
given by Theorem 2.2 with H = Sp .
Let σij , bi : Sp → R and L, Ai as above. Then, the transformation y → ŷ
induced by the map c(·, ·) and Eq. (2.1) induces a corresponding transformation of
maps σij (·), bi (·) → σ̂ij (·, ·), b̂i (·, ·) as follows: σ̂ij , b̂i : [0, η) × C([0, η), Sp ) →
R are given by σ̂ij (s, y) := σij (ŷ(s)), b̂i (s, y) := bi (ŷ(s)). Define ĉ(s, y) :=
c(s, ŷ), 0 ≤ s < η, y ∈ C([0, η), Sp ). Let L̂(t, y) and Âi (t, y) be maps from
[0, η) × C([0, η), Sp ) to Sp−1 defined as follows:

$$\hat{L}(s,y) := \frac{1}{2}\sum_{i,j=1}^{d} (\hat\sigma\hat\sigma^{t})_{ij}(s,y)\,\partial^2_{ij}y_s \;-\; \sum_{i=1}^{d} \hat{b}_i(s,y)\,\partial_i y_s \;+\; \hat{c}(s,y)\,y_s,$$
$$\hat{A}_i(s,y) := -\sum_{j=1}^{d} \hat\sigma_{ji}(s,y)\,\partial_j y_s.$$

Let (Yt , η) be a strong solution (see [11]) of Eq. (3.4) with initial value Y
and η now a random variable. Then for each $\omega \in \Omega$, the trajectory $Y_{\cdot}(\omega) \in C([0,\eta(\omega)), S_p)$. Define for $0 \le t < \eta(\omega)$
$$\hat{Y}_t(\omega) := Y_t(\omega)\,e^{\int_0^t c(s,\,Y(\omega))\,ds}.$$
Let $\hat\sigma_{ij}, \hat{b}_i, \hat{c}, \hat{L}, \hat{A}_i$ be as above. We take $\hat{Y}_t(\omega) := \delta$, $t \ge \eta$, where δ is a "coffin state." By the continuity of $c(\cdot,\cdot)$ and the definition of a strong solution [11], $(\hat{Y}_t)$ is a continuous $\mathcal{F}^B_t$-adapted, $\hat{S}_p := S_p \cup \{\delta\}$ valued process.
Theorem 3.1 Let (Yt , η) be a strong solution of (3.4) and let c(·, ·) satisfy C1 and
C2. Then (Ŷt )0≤t <η is a strong solution of the equation

d Ŷt = L̂(t, Ŷ )dt + Â(t, Ŷ ) · dBt (3.5)


Ŷ0 = y.

If Eq. (3.4) has a pathwise unique strong solution, then so has Eq. (3.5).
Proof Let $M_t := e^{\int_0^t c(s,Y)\,ds}$. To prove existence, we use integration by parts.
Indeed one can verify the following equation by acting on it with a test function.
We have in differential form

d Ŷt = d(Mt Yt ) = Yt dMt + Mt dYt


= Ŷt c(t, Y ) dt + Mt L(Yt ) dt + Mt A(Yt ) · dBt

Now from the definition of Ŷt (ω), we have that for each fixed ω, Yt (ω), 0 ≤ t <
η(ω) is the unique solution ŷ of (2.1) with y(t) := Ŷ (t), 0 ≤ t < η,
$$\hat{y}(t) = \hat{Y}_t(\omega)\,e^{-\int_0^t c(s,\hat{y})\,ds}.$$

It follows that σij (Yt ) = σ̂ij (t, Ŷ ), c(t, Y ) = ĉ(t, Ŷ ), etc., and hence from above,

d Ŷt = L̂(t, Ŷ )dt + Â(t, Ŷ ) · dBt .

The pathwise uniqueness of the above equation follows from the pathwise unique-
ness of Eq. (3.4). Indeed if $\hat{Y}^1, \hat{Y}^2$ are solutions of Eq. (3.5), then setting $Y^i(t) := \hat{Y}^i_t\, e^{-\int_0^t c(s,Y^i)\,ds}$, it is easy to check, using the integration by parts formula for the product $\hat{Y}^i_t\, e^{-\int_0^t c(s,Y^i)\,ds}$ and the definition of the "hat" functionals, that $Y^i$, $i = 1, 2$, both solve Eq. (3.4); hence almost surely $Y^1_t = Y^2_t$, $t \ge 0$, which in turn implies that almost surely $\hat{Y}^1_t = \hat{Y}^2_t$, $t \ge 0$.
Remark 3.2 Let (Xt , η) be the solution of an Itô SDE with diffusion and drift
coefficients σ̄ij , b̄i , j = 1, · · · , r, i = 1, · · · , d, respectively, and initial value
x ∈ Rd . Given c : [0, η) × C([0, η), Rd ) → R satisfying C1 and C2, with
$H = \mathbb{R}^d$, we can define $\hat{X}_t := X_t\, e^{\int_0^t c(s,X)\,ds}$ and show, as in the proof of Theorem 3.1, that $(\hat{X}, \eta)$ will satisfy an SDE with path-dependent coefficients $\hat\sigma_{ij}(s,y), \hat{b}_i(s,y)$, $y \in C([0,\eta),\mathbb{R}^d)$, and an additional drift term involving $c(t,y)$.
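In the finite-dimensional setting of Remark 3.2 the transformation amounts to reweighting each simulated trajectory by the exponential of the accumulated potential. The following sketch does this for a scalar Itô SDE; the coefficients and the potential are hypothetical choices, and $c(s,X) = V(X_s)$ is used only as a simple example of a path functional satisfying C1 and C2.

```python
import numpy as np

# Pathwise transformation X -> Xhat of Remark 3.2 for a scalar Ito SDE.
# Coefficients and the potential are assumed choices; here c(s, X) = V(X_s).
def transformed_paths(x0, t, drift, sigma, V, n_paths=10_000, n_steps=500, seed=1):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    x = np.full(n_paths, x0, dtype=float)
    kac = np.zeros(n_paths)                 # int_0^t c(s, X) ds along each path
    for _ in range(n_steps):
        kac += V(x) * dt
        x += drift(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x, x * np.exp(kac)               # (X_t, Xhat_t = X_t * exp(int_0^t c(s, X) ds))

# Ornstein-Uhlenbeck-type example with a bounded potential (all choices hypothetical):
X_t, Xhat_t = transformed_paths(x0=1.0, t=1.0,
                                drift=lambda x: -x,
                                sigma=lambda x: np.full_like(x, 0.5),
                                V=lambda x: np.tanh(x))
```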

4 Application to PDE’s

We now apply the transformation y → ŷ developed in Sect. 2, to solutions of


(parabolic, non-linear) partial differential equations of the form

∂t u(t, x) = L(x, u(t, x)) (4.6)


u(0, x) = u(x).

Here u : Rd → Sp is the initial value and the operator L is defined by

$$L(x,y) := \frac{1}{2}\sum_{i,j=1}^{d} (\sigma\sigma^{t})_{ij}(x,y)\,\partial^2_{ij}y \;-\; \sum_{i=1}^{d} b_i(x,y)\,\partial_i y,$$

where σij , bi : Rd × Sp → R, i, j = 1, · · · , d are assumed to satisfy a Lipschitz


condition as follows: Let $f : \mathbb{R}^d \times S' \to \mathbb{R}$. We say that f satisfies a (p, q) local Lipschitz condition, uniformly in $x \in \mathbb{R}^d$, if for all $\lambda > 0$ there exists $C = C(\lambda, p, q)$ such that
$$|f(x,\varphi) - f(x,\psi)| \le C\,\|\varphi - \psi\|_q$$
for all $\varphi, \psi \in B_p(0,\lambda) := \{\varphi \in S_p : \|\varphi\|_p \le \lambda\}$ and $x \in \mathbb{R}^d$.


Under the above condition, we can show the existence and uniqueness of
solutions of the above equation [13]. Here, given a measurable map u : Rd → Sp
we will assume the existence of a unique solution to the above PDE, i.e. for each
x ∈ Rd , the existence of a unique map u(., x) : [0, T ] → Sp which is continuous
and satisfies

$$u(t,x) = u(x) + \int_0^t L(x, u(s,x))\,ds,$$

where the equation holds in Sq , q ≤ p − 1. Suppose now we are given a potential


function, i.e. a real valued function of the form c(t, x, y), 0 ≤ t ≤ T , x ∈ Rd , y ∈
C([0, T ], Sp ), satisfying for each x, conditions C1 and C2 of Sect. 2 for H = Sp .
Let σ̂ij (t, x, .), b̂i (t, x, .), ĉ(t, x, .) be as defined in Sect. 3. For t ∈ [0, T ], x ∈
Rd , y ∈ C([0, T ], Sp ) define the operator

$$\hat{L}(s,x,y) := \frac{1}{2}\sum_{i,j=1}^{d} (\hat\sigma\hat\sigma^{t})_{ij}(s,x,y)\,\partial^2_{ij}y_s \;-\; \sum_{i=1}^{d} \hat{b}_i(s,x,y)\,\partial_i y_s \;+\; \hat{c}(s,x,y)\,y_s.$$

The following theorem can be proved in the same manner as Theorem 3.1.
Theorem 4.1 Let (u(t, x)) be a solution of Eq. (4.6) for a given u : Rd → Sp and
let c(·, ·) satisfy C1 and C2. Then,
$$\hat{u}(t,x) := u(t,x)\,e^{-\int_0^t c(s,x,u(\cdot,x))\,ds}$$

solves the equation

∂t û(t, x) = L̂(., x, û(., x)) (4.7)


u(0, x) = u(x).

If Eq. (4.6) has a unique solution, so has Eq. (4.7).



The solutions of the PDE (4.6) have a stochastic representation u(t, x) = EYtx ,
where Ytx := τZtx u(x), τx : Sp → Sp are the translation operators (see [11] and
references therein), the coefficients σij (x, y), bi (x, y) are uniformly bounded in y ∈
Sp for each x ∈ Rd and

$$Z^x_t := \int_0^t \sigma(x, u(s,x))\cdot dB_s + \int_0^t b(x, u(s,x))\,ds.$$

The proof follows as in the proof of Theorem 6.3 of [11] applied to the present
situation for each fixed x. As an immediate consequence we have the following
Corollary 4.2 $\hat{u}(t,x) = (E\,Y^x_t)\,e^{-\int_0^t c(s,x,\,E Y^x_{\cdot})\,ds}$.

5 Conclusion

In this section we make a few remarks on the applications of Theorem 2.2. We


consider the PDE (4.6) and its interplay with the Sp -valued process considered in
Sect. 3. The existence and uniqueness of solutions of (4.6) in the non-linear case will
be considered, as mentioned above, in a separate paper [13]. Here in the remarks
below, we will consider two separate classes of Eq. (4.6), corresponding to different
classes of coefficients in the operator L(x, ·) in (4.6).
1. We assume that the coefficients depend only on x ∈ Rd , i.e. σij (x, φ) =
σij (x), bi (x, φ) = bi (x), i = 1, · · · , d, j = 1, · · · , r. In this case L(x) :
$S_p \to S_q$, $q \le p-1$, is a linear operator. The solution u(t, x) exists uniquely, because of the monotonicity inequality satisfied by L(x) [11, 13], and is given by $u(t,x) := E\,\tau_{Z^x_t} u(x)$, where for each $x \in \mathbb{R}^d$,
$$Z^x_t := \sigma(x)\cdot B_t + b(x)\,t$$

as defined in Sect. 4. In particular, û(t, x) is the unique solution to (4.7) for any
given potential function c(t, x, y) satisfying C1 and C2. In this example, the role
of x ∈ Rd in the coefficients of the equation is that of an “external parameter”
and as a consequence (Ztx ) is a Gaussian process for each x.
2. In the calculations below we consider the analogs of Eqs. (4.6) and (4.7) for
a special type of linear operator arising as the formal adjoint L̄∗ of a second
order elliptic operator L̄ and potential c(t, x, y) := V (x) for a suitable function
V (x), x ∈ Rd . Note that c(t, x, y) is independent of t, y. The analog of (4.6) that
we consider is as follows:

∂t u(t, x) = L̄∗ u(t, x) (5.8)


u(0, x) = δx ,

where L̄ is defined as

$$\bar{L}\phi(x) := \frac{1}{2}\sum_{i,j=1}^{d} (\bar\sigma\bar\sigma^{t})_{ij}(x)\,\partial^2_{ij}\phi(x) + \sum_{i=1}^{d} \bar{b}_i(x)\,\partial_i\phi(x).$$

In the above definition, $\bar\sigma_{ij}, \bar{b}_i \in S_p$, $p > d/4$, so that they are bounded continuous
functions [12] and φ ∈ S is a test function. Consider the Itô SDE

dXt = σ̄ (Xt ) · dBt + b̄(Xt ) dt


X0 = x.

If in addition, σ̄ij , b̄i are Lipschitz functions, then a unique strong solution (Xtx )
exists for all t ≥ 0. We denote by (Pt ) the transition semi-group of the diffusion
and by $\delta_x$ the Dirac measure at x. Let $P_t f(x) := E f(X^x_t)$, $f \in S$. Since $P_t : S \to S'$ and since $(S')' = S$ (where $S'$ is given the weak star topology, see Theorem 1, Section 8, Chapter 4 of [16]), we have $P^*_t : S \to S'$, where for $f, g \in S$ we have the duality relation
$$\langle P_t f, g\rangle = \langle f, P^*_t g\rangle.$$

Again using duality it is easy to see that $P^*_t$ is given by the kernel $P^*_t(x) := E\,\delta_{X^x_t}$, which is just the transition probability measure of $(X^x_t)$ represented as an element of $S_{-p}$. In other words, for $g \in S$,
$$P^*_t g = \int g(x)\, E\,\delta_{X^x_t}\, dx,$$

where the RHS is a Bochner integral in S−p . When σ̄ij , b̄i are twice continuously
differentiable with bounded derivatives then according to Theorem 2.2.9 of
[15], (5.8) has a unique solution. We can ensure these assumptions on the
coefficients by choosing p sufficiently large. It is now easy to see that the unique
solution is given by the S−p valued kernel Pt∗ (x): indeed, one has

L̄∗ Pt∗ (x) = EL(δXtx )

as is easily verified by acting on both sides with a test function. With u(t, x) :=
Pt∗ (x), (5.8) is verified by acting with a test function and using Itô’s formula.
We now consider the analog of Eq. (4.7). Let V (x) be a bounded measurable
function. We define $\hat{u}(t,x) := P^*_t(x)\,e^{tV(x)}$. Then, integrating by parts in $S_{-p}$ as done in earlier sections, it is easy to see that $\hat{u}(t,x)$ satisfies the $S_{-p}$-valued
evolution equation

∂t u(t, x) = (L̄∗ + V (x))u(t, x) (5.9)


u(0, x) = δx .

Uniqueness for solutions of (5.9) follows from uniqueness of solutions of (5.8)


as in the proof of Theorem 3.1. Equations (5.8) and (5.9) are the analogs of
Eqs. (4.6) and (4.7) for the linear operator L̄∗ .
We now establish the connection between the PDE results in Sect. 4 and the
SPDE results in Sect. 3, for the specific example of the operator L̄∗ . To do this
we consider, for $p > d/4$ sufficiently large, the $S_{-p}$-valued process $Y_t := \delta_{X^x_t}$. Then $(Y_t)$ satisfies the SPDE (3.4) with operators $L, A_i$, $i = 1,\dots,r$, given as in Sect. 3 with coefficients $\sigma_{ij}, b_i : S_{-p} \to \mathbb{R}$ given as $\sigma_{ij}(\phi) := \langle \bar\sigma_{ij}, \phi\rangle$, etc., for $\phi \in S$.
On the other hand, let now $V \in S_p$ and define $c(\cdot,\cdot) : [0,\infty)\times C([0,\infty), S_{-p}) \to \mathbb{R}$ as $c(t,y) := \langle V, y(t)\rangle$, $y \in C([0,\infty), S_{-p})$. Let $\hat{Y}_t := Y_t\, e^{\int_0^t c(s,Y)\,ds}$, where $c(s,Y) := \langle V, Y_s\rangle = V(X^x_s)$. If V is bounded above by K, then we have
$$E\,\|\hat{Y}_t\|_p \le e^{Kt}\, E\,\|Y_t\|_p < \infty.$$

We note that $(\hat{Y}_t)$ satisfies the SPDE (3.5) with $\hat{L}(t,\cdot), \hat{A}_i(t,\cdot)$ defined as in Sect. 3 with the coefficients $\hat\sigma_{ij}, \hat{b}_i, \hat{c}$ all defined through the corresponding $\sigma_{ij}, b_i, c$ defined above. We define $P^V_t : S \to S'$ as follows. For $f \in S_p$, we define
$$P^V_t f(x) := E\Big(e^{\int_0^t V(X^x_s)\,ds}\, f(X^x_t)\Big)$$
and let the kernel $P^V_t(x) \in S_{-p}$ be defined as
$$P^V_t(x) := E\,\hat{Y}_t = E\Big(e^{\int_0^t V(X^x_s)\,ds}\,\delta_{X^x_t}\Big).$$

Then the following calculation shows that $P^V_t(x)$ satisfies (5.9). Let $f \in S$. Firstly we note that from the definition of $\hat{L}(t,y)$ we have
$$\langle f, \hat{L}(t,\hat{Y})\rangle = e^{\int_0^t V(X^x_s)\,ds}\,\langle f, L(\delta_{X^x_t})\rangle + e^{\int_0^t V(X^x_s)\,ds}\, V(X^x_t)\,\langle f, \delta_{X^x_t}\rangle = e^{\int_0^t V(X^x_s)\,ds}\,(\bar{L} + V)f(X^x_t).$$

Hence from the equation satisfied by $\hat{Y}_t$ we get
$$\begin{aligned}
\langle f, P^V_t(x)\rangle = P^V_t f(x) = \langle f, E\,\hat{Y}_t\rangle &= f(x) + E\int_0^t \langle f, \hat{L}(s,\hat{Y})\rangle\, ds \\
&= f(x) + \int_0^t E\Big[e^{\int_0^s V(X^x_u)\,du}\,(\bar{L} + V)f(X^x_s)\Big]\,ds \\
&= f(x) + \int_0^t P^V_s\big((\bar{L} + V)f\big)(x)\,ds \\
&= f(x) + \int_0^t \big\langle f, (\bar{L}^* + V)\,P^V_s(x)\big\rangle\, ds.
\end{aligned}$$

It follows by uniqueness of solutions of (5.9) that with the coefficients


σ̄ij , b̄i , V (x) as above and p sufficiently large, we have the following special
case of Corollary 4.2: For each x ∈ Rd ,

$$P^V_t(x) = P^*_t(x)\,e^{tV(x)} = e^{tV(x)}\, E\,\delta_{X^x_t},$$

where the equality holds in S−p .

Acknowledgments The author would like to acknowledge the financial support from the
SERB (Science and Engineering Research Board, India) through the MATRIX project No.
MTR/2017/000750.

References

1. Chung, K.L., Varadhan, S.R.S.: Kac functionals and Schrödinger equations. In: Bhatia, R., Bhat, A.G., Parthasarathy, K.R. (eds.) Collected Papers of S.R.S. Varadhan, vol. 2, pp. 304–315. Hindustan Book Agency, New Delhi (1980)
2. Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University
Press, Cambridge (1992)
3. Gawarecki, L., Mandrekar, V.: Stochastic Differential Equations with Applications to Stochas-
tic Partial Differential Equations. Springer, Berlin (2011)
4. Itô, K.: Foundations of Stochastic Differential Equations in Infinite Dimensional Spaces.
CBMS 47. SIAM, Philadelphia (1984)
5. Kac, M.: On the distribution of certain Wiener functionals. Trans. Am. Math. Soc. 65, 1–13
(1949)
6. Kallenberg, O.: Foundations of Modern Probability. Springer, New York (2010)
7. Kallianpur, G., Xiong, J.: Stochastic Differential Equations in Infinite Dimensional Spaces.
Lecture Notes, Monograph Series, vol. 26. Institute of Mathematical Statistics, Hayward
(1995)

8. Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. Springer, New York
(1998)
9. Oksendal, B.: Stochastic Differential Equations: An Introduction with Applications (Universi-
text). Springer, Berlin (2010)
10. Rajeev, B.: Translation invariant diffusions in the space of tempered distributions. Indian J.
Pure Appl. Math. 44(2), 231–258 (2013)
11. Rajeev, B.: Translation invariant diffusions and stochastic partial differential equations in S′ (2019). http://arxiv.org/abs/1901.00277
12. Rajeev, B., Thangavelu, S.: Probabilistic representations of solutions to the forward equation.
Potential Anal. 28, 139–162 (2008)
13. Rajeev, B., Vasudeva Murthy, A.S.: Existence and uniqueness for 2nd order quasi linear
parabolic PDE’s. Preprint
14. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. Springer, Berlin (1999)
15. Stroock, D.W.: Partial Differential Equations for Probabilists. Cambridge University Press,
Cambridge (2008)
16. Yosida, K.: Functional Analysis. Springer, Berlin (1979)
Heterogeneous System GI/GI(n) /∞ with
Random Customers Capacities

Ekaterina Lisovskaya, Svetlana Moiseeva, Michele Pagano, and Ekaterina Pankratova

Abstract In this paper, we consider a queueing system with n types of customers. We assume that customers arrive at the queue according to a renewal process and that each customer takes a random resource amount, independent of its service time. We write the Kolmogorov integro-differential equation, which, in general, cannot be solved analytically. Hence, we look for the solution under the condition of infinitely growing service time, and we obtain multi-dimensional asymptotic approximations. We show that the n-dimensional probability distribution of the total resource amounts is asymptotically Gaussian, and we assess its accuracy via the Kolmogorov distance.

Keywords Renewal arrival process · Different types of servers · Queueing


system

1 Introduction

The globalization of modern managed systems sets new tasks at the hardware,
structural, and organizational levels. Such systems include both global computer networks and complex socio-economic relations. In addition to being highly heterogeneous, they can also comprise a large number of diverse objects linked by highly connected cooperations.

E. Lisovskaya ()
Tomsk State University, Tomsk, Russian Federation
e-mail: [email protected]
S. Moiseeva
Tomsk State University, Tomsk, Russian Federation
M. Pagano
Department of Information Engineering, University of Pisa, Pisa, Italy
e-mail: [email protected]
E. Pankratova
V. A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, Moscow,
Russian Federation


highly connected cooperations. For example, the actively developing conceptions


of Internet of Things (IoT), Internet of Everything (IoE), and Internet of Nano
Things (IoNT) involve the interaction of both objects and subjects of the social
environment [3, 9, 15]. In this regard, an integrated approach is needed to solve
multi-dimensional problems of managing complex technical and social objects in a
dynamically changing environment.
Cellular networks are transformed from a planned set of large base-stations to an
irregular deployment of heterogeneous infrastructure elements. In paper [2], authors
developed a tractable, flexible, and accurate model for a heterogeneous cellular
network consisting of K level of randomly located base-station, where each level
may differ in terms of average transmit power and supported data rate.
It should be noted that the number of publications has been devoted to modeling
of wireless communication systems by the resource queueing system [1, 4, 5].
However, the main results were obtained assuming that requests to resources is
deterministic. Thus, considering new models of heterogeneous resource queues is
currently relevant [7, 8, 12].
Important task of modeling connection networks is cost criterion, which defines
the quality of the system operation. A tandem queueing systems with heterogeneous
customers is analyzed in the paper [16]. The authors computated the stationary
distribution of the system states under the fixed set of the thresholds—the most
difficult part of solving the problem of minimizing the cost.
Similarly, in our article, the problem of finding a stationary probability distribu-
tion of the total volumes of occupied resources in a heterogeneous queue is solved.
The considered heterogeneous resource queue can be applied when analyzing the
performance indicators of radio resource separation schemes of next-generation
telecommunication [6, 14].

2 Problem Statement

2.1 Mathematical Model

Consider the queueing system (see Fig. 1) with an unlimited number of servers of n different types, and assume that each customer carries a random capacity (i.e., requires some random amount of resource).
Customers arrive at the system according to a renewal arrival process given by the distribution function A(z) of the inter-arrival times, which have finite mean a and variance σ².
Each arriving customer randomly selects its type according to the set of probabilities $p_i$ ($i = 1, \dots, n$), with $\sum_{i=1}^{n} p_i = 1$. Further, the customer goes to the corresponding server, staying there for a random time with distribution function $B_i(x)$, and occupies a random resource amount $v_i > 0$ with distribution function $G_i(y)$.

Fig. 1 Queueing system with n server types

Fig. 2 Dynamic screening of the arrival process
A queueing system with such a service discipline was considered by the authors in [13]. However, that model does not take into account that each customer requires a random amount of resources.
Denote by {V1(t), . . . , Vn(t)} the total capacities of the customers of each type present in the system at time t. This process is non-Markovian; therefore, we use the dynamic screening method for its investigation.
Let the system be empty at moment t0 , and let us fix any time moment T in
the future, as shown in Fig. 2. The dynamic probability Si(t) represents the probability that a customer arriving at time t is of type i and is still in service at the moment T, i.e. Si(t) = pi(1 − Bi(T − t)), for t0 ≤ t ≤ T.
Denote by {W1(t), . . . , Wn(t)} the total capacities of the customers of each type screened before the moment t. It is easy to prove the following property of the probability distributions of these stochastic processes [10]:

P {V1 (T ) < w1 , . . . , Vn (T ) < wn } = P {W1 (T ) < w1 , . . . , Wn (T ) < wn }, wi ≥ 0.

The above n-dimensional process is non-Markovian, so we supplement it with the residual time z(t) from t to the next arrival.
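The screened totals {V1(T), . . . , Vn(T)} are also easy to estimate by direct simulation, which is how the asymptotic result of Sect. 3 is checked later. The sketch below is a minimal Monte Carlo illustration, not the authors' simulation code from [11]; the distributional choices in the example are assumptions.

```python
import numpy as np

# One Monte Carlo realization of {V_1(T), ..., V_n(T)}: total capacities of the
# customers of each type still in service at time T, with the system empty at t0 = 0.
# All distributional choices are assumptions made for this illustration.
def simulate_totals(T, p, interarrival, service, capacity, seed=None):
    rng = np.random.default_rng(seed)
    totals = np.zeros(len(p))
    t = 0.0
    while True:
        t += interarrival(rng)                 # renewal arrival epochs
        if t > T:
            break
        i = rng.choice(len(p), p=p)            # customer type, chosen with probabilities p_i
        if t + service[i](rng) > T:            # the customer is still in service at time T
            totals[i] += capacity[i](rng)      # its capacity contributes to V_i(T)
    return totals

# Example: uniform inter-arrival times on [0.5, 1.5], exponential service and capacities.
totals = simulate_totals(
    T=200.0, p=[0.7, 0.3],
    interarrival=lambda rng: rng.uniform(0.5, 1.5),
    service=[lambda rng: rng.exponential(20.0), lambda rng: rng.exponential(30.0)],
    capacity=[lambda rng: rng.exponential(0.5), lambda rng: rng.exponential(1.0)],
)
```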

2.2 Kolmogorov Integro-Differential Equation

For the probability distribution of the (n + 1)-dimensional Markovian process


{z(t), W1 (t), . . . , Wn (t)}:

P (z, w1 , . . . , wn , t) = P {z(t) < z, W1 (t) < w1 , . . . , Wn (t) < wn } ,


z, w1 , . . . , wn > 0,

we can write the following Kolmogorov integro-differential equation:

$$\begin{aligned}
\frac{\partial P(z, w_1, \dots, w_n, t)}{\partial t} ={}& \frac{\partial P(z, w_1, \dots, w_n, t)}{\partial z} + \frac{\partial P(0, w_1, \dots, w_n, t)}{\partial z}\,(A(z) - 1) \\
&+ A(z)\sum_{i=1}^{n} S_i(t)\left[\int_0^{w_i}\frac{\partial P(0, w_1, \dots, w_i - y_i, \dots, w_n, t)}{\partial z}\,dG_i(y_i) - \frac{\partial P(0, w_1, \dots, w_n, t)}{\partial z}\right]. \qquad (1)
\end{aligned}$$

We define the initial conditions in the form



$$P(z, w_1, \dots, w_n, t_0) = \begin{cases} R(z), & w_1 = \dots = w_n = 0,\\ 0, & \text{otherwise,}\end{cases}$$
where $R(z) = \frac{1}{a}\int_0^z (1 - A(u))\,du$ is the stationary probability distribution of the values of the random process z(t).
To solve (1), we introduce the partial characteristic function:

$$h(z, v_1, \dots, v_n, t) = \int_0^\infty \cdots \int_0^\infty e^{jv_1w_1}\cdots e^{jv_nw_n}\, P(z, dw_1, \dots, dw_n, t),$$
where $j = \sqrt{-1}$ is the imaginary unit. Then, we obtain the following equation:

$$\frac{\partial h(z, v_1, \dots, v_n, t)}{\partial t} = \frac{\partial h(z, v_1, \dots, v_n, t)}{\partial z} + \frac{\partial h(0, v_1, \dots, v_n, t)}{\partial z}\left[A(z) - 1 + A(z)\sum_{i=1}^{n} S_i(t)\big(G^*_i(v_i) - 1\big)\right], \qquad (2)$$

where
$$G^*_i(v_i) = \int_0^\infty e^{jv_iy}\,dG_i(y),$$

with the initial condition

h(z, v1 , . . . , vn , t0 ) = R(z). (3)

3 Asymptotic Analysis

In general, Eq. (2) cannot be solved analytically, but it is possible to find approxi-
mate solutions under suitable asymptotic conditions; in this paper we consider the
case that the service times of the different types of customers grow proportionally to each other.
We state and prove the following theorem.
Theorem 1 The asymptotic characteristic function of the stationary probability
distribution of the process {V1 (t), . . . , Vn (t)} has the form
$$h(v_1, \dots, v_n) \approx \exp\left\{ \lambda\sum_{i=1}^{n} jv_i\, a_1^{(i)} p_i b_i + \lambda\sum_{i=1}^{n} \frac{(jv_i)^2}{2}\, a_2^{(i)} p_i b_i + \kappa\sum_{i=1}^{n}\sum_{m=1}^{n} \frac{jv_i\, jv_m}{2}\, a_1^{(i)} a_1^{(m)} p_i p_m K_{im} \right\}, \qquad (4)$$

where $\lambda = a^{-1}$, $\kappa = \lambda^3(\sigma^2 - a^2)$ (a and σ² being the mean and the variance of the inter-arrival time, respectively), and
$$a_1^{(i)} = \int_0^\infty y\,dG_i(y), \qquad a_2^{(i)} = \int_0^\infty y^2\,dG_i(y), \qquad b_i = \int_0^\infty (1 - B_i(x))\,dx, \qquad K_{im} = \int_0^\infty (1 - B_i(x))(1 - B_m(x))\,dx.$$

Proof At first, we prove an auxiliary statement.



Lemma 1 The first-order asymptotic characteristic function of the process


{z(t), W1 (t), . . . , Wn (t)} is given by
$$h(z, v_1, \dots, v_n, t) \approx R(z)\exp\left\{\lambda\sum_{i=1}^{n} jv_i\, a_1^{(i)} \int_{t_0}^{t} S_i(\theta)\,d\theta\right\}.$$

Proof Let $b_i = bq_i$ for some real values $q_i > 0$ and $b \to \infty$. Put
$$\varepsilon = \frac{1}{bq_i}, \quad v_i = \varepsilon y_i, \quad t\varepsilon = \tau, \quad t_0\varepsilon = \tau_0, \quad T\varepsilon = \tilde{T}, \quad S_i(t) = \tilde{S}_i(\tau), \quad h(z, v_1, \dots, v_n, t) = f_1(z, y_1, \dots, y_n, \tau, \varepsilon).$$

Then, from the expressions (2) and (3), we get

$$\varepsilon\frac{\partial f_1(z, y_1, \dots, y_n, \tau, \varepsilon)}{\partial\tau} = \frac{\partial f_1(z, y_1, \dots, y_n, \tau, \varepsilon)}{\partial z} + \frac{\partial f_1(0, y_1, \dots, y_n, \tau, \varepsilon)}{\partial z}\left[A(z) - 1 + A(z)\sum_{i=1}^{n}\tilde{S}_i(\tau)\big(G^*_i(\varepsilon y_i) - 1\big)\right], \qquad (5)$$

with the initial condition

f1 (z, y1 , . . . , yn , τ0 , ε) = R(z).

Let ε → 0; then Eq. (5) becomes:

$$\frac{\partial f_1(z, y_1, \dots, y_n, \tau)}{\partial z} + \frac{\partial f_1(0, y_1, \dots, y_n, \tau)}{\partial z}\,(A(z) - 1) = 0,$$

and hence f1 (z, y1 , . . . , yn , τ ) can be expressed as

f1 (z, y1 , . . . , yn , τ ) = R(z)Φ1 (y1 , . . . , yn , τ ), (6)

where Φ1 (y1 , . . . , yn , τ ) is some scalar function, satisfying the condition

Φ1 (y1 , . . . , yn , τ0 ) = 1.

Now let z → ∞ in (5):

$$\varepsilon\frac{\partial f_1(\infty, y_1, \dots, y_n, \tau, \varepsilon)}{\partial\tau} = \frac{\partial f_1(0, y_1, \dots, y_n, \tau, \varepsilon)}{\partial z}\sum_{i=1}^{n}\tilde{S}_i(\tau)\big(G^*_i(\varepsilon y_i) - 1\big).$$

Then, we substitute here the expression (6), take advantage of the Taylor expansion
 
$$e^{j\varepsilon s} = 1 + j\varepsilon s + O(\varepsilon^2), \qquad (7)$$
divide by ε and perform the limit as ε → 0. Since $R'(0) = \lambda$, we get the following differential equation:
$$\frac{\partial\Phi_1(y_1, \dots, y_n, \tau)}{\partial\tau} = \Phi_1(y_1, \dots, y_n, \tau)\,\lambda\sum_{i=1}^{n}\tilde{S}_i(\tau)\,jy_i\, a_1^{(i)}. \qquad (8)$$

Taking into account the initial condition, the solution of (8) is


$$\Phi_1(y_1, \dots, y_n, \tau) = \exp\left\{\lambda\sum_{i=1}^{n} jy_i\, a_1^{(i)}\int_{\tau_0}^{\tau}\tilde{S}_i(\theta)\,d\theta\right\}.$$

Substituting $\Phi_1(y_1, \dots, y_n, \tau)$ into (6), we can write
$$\begin{aligned}
h(z, v_1, \dots, v_n, t) = f_1(z, y_1, \dots, y_n, \tau, \varepsilon) &\approx f_1(z, y_1, \dots, y_n, \tau) = R(z)\,\Phi_1(y_1, \dots, y_n, \tau) \\
&= R(z)\exp\left\{\lambda\sum_{i=1}^{n} jy_i\, a_1^{(i)}\int_{\tau_0}^{\tau}\tilde{S}_i(\theta)\,d\theta\right\} \\
&= R(z)\exp\left\{\lambda\sum_{i=1}^{n} jv_i\, a_1^{(i)}\int_{t_0}^{t} S_i(\theta)\,d\theta\right\}.
\end{aligned}$$

Let h2 (z, v1 , . . . , vn , t) be a solution of the following equation:


$$h(z, v_1, \dots, v_n, t) = h_2(z, v_1, \dots, v_n, t)\exp\left\{\lambda\sum_{i=1}^{n} jv_i\, a_1^{(i)}\int_{t_0}^{t} S_i(\theta)\,d\theta\right\}. \qquad (9)$$

Substituting this expression into (2) and (3), we get the following equivalent
problem:

$$\frac{\partial h_2(z, v_1, \dots, v_n, t)}{\partial t} + \lambda\, h_2(z, v_1, \dots, v_n, t)\sum_{i=1}^{n} jv_i\, a_1^{(i)} S_i(t) = \frac{\partial h_2(z, v_1, \dots, v_n, t)}{\partial z} + \frac{\partial h_2(0, v_1, \dots, v_n, t)}{\partial z}\left[A(z) - 1 + A(z)\sum_{i=1}^{n} S_i(t)\big(G^*_i(v_i) - 1\big)\right], \qquad (10)$$

with the initial condition

h2 (z, v1 , . . . , vn , t0 ) = R(z). (11)

By performing the following changes of variable
$$\varepsilon^2 = \frac{1}{bq_i}, \quad v_i = \varepsilon y_i, \quad t\varepsilon = \tau, \quad t_0\varepsilon = \tau_0, \quad T\varepsilon = \tilde{T}, \quad S_i(t) = \tilde{S}_i(\tau), \quad h_2(z, v_1, \dots, v_n, t) = f_2(z, y_1, \dots, y_n, \tau, \varepsilon), \qquad (12)$$

in (10) and (11), we get the following problem:

$$\varepsilon^2\frac{\partial f_2(z, y_1, \dots, y_n, \tau, \varepsilon)}{\partial\tau} + f_2(z, y_1, \dots, y_n, \tau, \varepsilon)\,\lambda\sum_{i=1}^{n} j\varepsilon y_i\, a_1^{(i)}\tilde{S}_i(\tau) = \frac{\partial f_2(z, y_1, \dots, y_n, \tau, \varepsilon)}{\partial z} + \frac{\partial f_2(0, y_1, \dots, y_n, \tau, \varepsilon)}{\partial z}\left[A(z) - 1 + A(z)\sum_{i=1}^{n}\tilde{S}_i(\tau)\big(G^*_i(\varepsilon y_i) - 1\big)\right], \qquad (13)$$

with the initial condition

f2 (z, y1 , . . . , yn , τ0 , ε) = R(z).

As a generalization of the approach used above, we look for the asymptotic solution of this problem in the form
$$f_2(z, y_1, \dots, y_n, \tau) = \lim_{\varepsilon\to 0} f_2(z, y_1, \dots, y_n, \tau, \varepsilon).$$

Letting ε → 0 in (13), we get the following equation:


$$\frac{\partial f_2(z, y_1, \dots, y_n, \tau)}{\partial z} + \frac{\partial f_2(0, y_1, \dots, y_n, \tau)}{\partial z}\,(A(z) - 1) = 0.$$
Hence, we can express f2 (z, y1 , . . . , yn , τ ) as

f2 (z, y1 , . . . , yn , τ ) = R(z)Φ2 (y1 , . . . , yn , τ ), (14)

where Φ2 (y1 , . . . , yn , τ ) is some scalar function that satisfies the condition

Φ2 (y1 , . . . , yn , τ0 ) = 1.

The solution $f_2(z, y_1, \dots, y_n, \tau, \varepsilon)$ can be represented in the expansion form
$$f_2(z, y_1, \dots, y_n, \tau, \varepsilon) = \Phi_2(y_1, \dots, y_n, \tau)\left[R(z) + f(z)\sum_{i=1}^{n} j\varepsilon y_i\, a_1^{(i)}\tilde{S}_i(\tau) + O(\varepsilon^2)\right], \qquad (15)$$

where f(z) is a suitable function such that f(∞) = const; we set f(∞) = 0. By substituting the previous expression and the Taylor–Maclaurin expansion (7) in (13), and taking into account that $R'(z) = \lambda(1 - A(z))$, it is easy to verify that
$$f(z) = \frac{\kappa}{2}\int_0^z (1 - A(u))\,du + \lambda\int_0^z (R(u) - A(u))\,du.$$

Letting z → ∞ in (13), by the definition of the function f2 (z, y1 , . . . , yn , τ, ε),


we obtain
$$\lim_{z\to\infty}\frac{\partial f_2(z, y_1, \dots, y_n, \tau, \varepsilon)}{\partial z} = 0,$$
and, taking into account the expansion
$$e^{j\varepsilon s} = 1 + j\varepsilon s + \frac{(j\varepsilon s)^2}{2} + O(\varepsilon^3),$$
we can write

$$\varepsilon^2\frac{\partial f_2(\infty, y_1, \dots, y_n, \tau, \varepsilon)}{\partial\tau} + f_2(\infty, y_1, \dots, y_n, \tau, \varepsilon)\,\lambda\sum_{i=1}^{n}\tilde{S}_i(\tau)\, j\varepsilon y_i\, a_1^{(i)} = \frac{\partial f_2(0, y_1, \dots, y_n, \tau, \varepsilon)}{\partial z}\sum_{i=1}^{n}\tilde{S}_i(\tau)\left[j\varepsilon y_i\, a_1^{(i)} + \frac{(j\varepsilon y_i)^2}{2}\, a_2^{(i)}\right] + O(\varepsilon^3).$$

By substituting here the expansion (15) and taking the limit as z → ∞, we get

$$\begin{aligned}
&\varepsilon^2\frac{\partial\Phi_2(y_1, \dots, y_n, \tau)}{\partial\tau} + \Phi_2(y_1, \dots, y_n, \tau)\,\lambda\sum_{i=1}^{n} j\varepsilon y_i\, a_1^{(i)}\tilde{S}_i(\tau) \\
&\qquad= \Phi_2(y_1, \dots, y_n, \tau)\,\lambda\sum_{i=1}^{n}\tilde{S}_i(\tau)\left[j\varepsilon y_i\, a_1^{(i)} + \frac{(j\varepsilon y_i)^2}{2}\, a_2^{(i)}\right] \\
&\qquad\quad+ \Phi_2(y_1, \dots, y_n, \tau)\, f'(0)\sum_{i=1}^{n}\tilde{S}_i(\tau)\, j\varepsilon y_i\, a_1^{(i)}\sum_{m=1}^{n}\tilde{S}_m(\tau)\left[j\varepsilon y_m\, a_1^{(m)} + \frac{(j\varepsilon y_m)^2}{2}\, a_2^{(m)}\right] + O(\varepsilon^3).
\end{aligned}$$

After simple rearrangements, and taking into account that $\kappa = 2f'(0)$, we get the following differential equation for $\Phi_2(y_1, \dots, y_n, \tau)$:
$$\frac{\partial\Phi_2(y_1, \dots, y_n, \tau)}{\partial\tau} = \Phi_2(y_1, \dots, y_n, \tau)\left[\lambda\sum_{i=1}^{n}\frac{(jy_i)^2}{2}\, a_2^{(i)}\tilde{S}_i(\tau) + \kappa\sum_{i=1}^{n}\sum_{m=1}^{n}\frac{jy_i\, jy_m}{2}\, a_1^{(i)} a_1^{(m)}\tilde{S}_i(\tau)\tilde{S}_m(\tau)\right],$$

whose solution (with the given initial condition) can be expressed as



$$\Phi_2(y_1, \dots, y_n, \tau) = \exp\left\{\lambda\sum_{i=1}^{n}\frac{(jy_i)^2}{2}\, a_2^{(i)}\int_{\tau_0}^{\tau}\tilde{S}_i(\theta)\,d\theta + \kappa\sum_{i=1}^{n}\sum_{m=1}^{n}\frac{jy_i\, jy_m}{2}\, a_1^{(i)} a_1^{(m)}\int_{\tau_0}^{\tau}\tilde{S}_i(\theta)\tilde{S}_m(\theta)\,d\theta\right\}.$$

Substituting this expression into (14) and performing the inverse substitutions
of (12) and (9), we get the following expression for the asymptotic characteristic
function of the process {z(t), W1 (t), . . . , Wn (t)}:

$$\begin{aligned}
h(z, v_1, \dots, v_n, t) \approx R(z)\exp\Bigg\{&\lambda\sum_{i=1}^{n} jv_i\, a_1^{(i)}\int_{t_0}^{t} S_i(\theta)\,d\theta + \lambda\sum_{i=1}^{n}\frac{(jv_i)^2}{2}\, a_2^{(i)}\int_{t_0}^{t} S_i(\theta)\,d\theta \\
&+ \kappa\sum_{i=1}^{n}\sum_{m=1}^{n}\frac{jv_i\, jv_m}{2}\, a_1^{(i)} a_1^{(m)}\int_{t_0}^{t} S_i(\theta) S_m(\theta)\,d\theta\Bigg\}.
\end{aligned}$$

For z → ∞, t = T and t0 → −∞ we get the characteristic function of the


process {V1 (t), . . . , Vn (t)} in the steady state regime
$$h(v_1, \dots, v_n) \approx \exp\left\{\lambda\sum_{i=1}^{n} jv_i\, a_1^{(i)} p_i b_i + \lambda\sum_{i=1}^{n}\frac{(jv_i)^2}{2}\, a_2^{(i)} p_i b_i + \kappa\sum_{i=1}^{n}\sum_{m=1}^{n}\frac{jv_i\, jv_m}{2}\, a_1^{(i)} a_1^{(m)} p_i p_m K_{im}\right\}.$$

The structure of this characteristic function implies that the n-dimensional


process {V1 (t), . . . , Vn (t)} is asymptotically Gaussian with mean

$$\mathbf{a} = \lambda\,\big(a_1^{(1)} p_1 b_1,\ a_1^{(2)} p_2 b_2,\ \dots,\ a_1^{(n)} p_n b_n\big)$$
and covariance matrix
$$K = \lambda K^{(1)} + \kappa K^{(2)},$$
where
$$K^{(1)} = \begin{bmatrix} a_2^{(1)} p_1 b_1 & 0 & \dots & 0\\ 0 & a_2^{(2)} p_2 b_2 & \dots & 0\\ \dots & \dots & \dots & \dots\\ 0 & 0 & \dots & a_2^{(n)} p_n b_n \end{bmatrix},$$

⎡ (1) (1) (1) (2) (1) (n)



a a p1 p1 K11 a1 a1 p1 p2 K12 . . . a1 a1 p1 pn K1n
⎢ 1(2) 1(1) (2) (2) (2) (n) ⎥
⎢ a1 a1 p2 p1 K21 a1 a1 p2 p2 K22 . . . a1 a1 p2 pn K2n ⎥
K(2) =⎢ ⎥.
⎣ ... ... ... ... ⎦
(n) (1) (n) (2) (n) (n)
a1 a1 pn p1 Kn1 a1 a1 pn p2 Kn2 . . . a1 a1 pn pn Knn
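The parameters of this Gaussian law are straightforward to evaluate numerically for given input distributions. The sketch below assembles the mean vector and the covariance matrix K = λK(1) + κK(2) by numerical integration of the service-time survival functions; the truncation of the integrals, the grid size, and the example laws are assumptions of the illustration (exponential service is used here instead of the Gamma laws of Table 1, purely to keep the example short).

```python
import numpy as np

# Numerical evaluation of the Gaussian approximation of Theorem 1:
# mean vector a and covariance matrix K = lambda*K1 + kappa*K2.
# The truncation point, grid size and example laws below are assumptions.
def gaussian_approximation(lam, kappa, p, a1, a2, survival_funcs, t_max=1_000.0, n_grid=200_001):
    x = np.linspace(0.0, t_max, n_grid)
    S = np.array([sf(x) for sf in survival_funcs])               # 1 - B_i(x) on the grid
    b = np.trapz(S, x, axis=1)                                   # b_i = int (1 - B_i) dx
    K_int = np.trapz(S[:, None, :] * S[None, :, :], x, axis=2)   # K_im = int (1-B_i)(1-B_m) dx
    mean = lam * a1 * p * b                                      # a_i = lambda a1^(i) p_i b_i
    K1 = np.diag(a2 * p * b)
    K2 = np.outer(a1 * p, a1 * p) * K_int
    return mean, lam * K1 + kappa * K2

# Example loosely mimicking Sect. 4 (uniform inter-arrival on [0.5, 1.5], two types,
# exponential resource amounts); exponential service replaces the Gamma laws of Table 1.
lam, a, sigma2 = 1.0, 1.0, 1.0 / 12.0
kappa = lam**3 * (sigma2 - a**2)
p = np.array([0.7, 0.3])
a1 = np.array([0.5, 1.0])            # E[v_i]   for Exponential(2), Exponential(1)
a2 = np.array([0.5, 2.0])            # E[v_i^2]
survival = [lambda x: np.exp(-x / 20.0), lambda x: np.exp(-x / 30.0)]
mean, K = gaussian_approximation(lam, kappa, p, a1, a2, survival)
```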

4 Simulation Results

The result (4) was obtained under the asymptotic condition of an unlimited increase of the service time (bi → ∞). We conducted several simulation experiments [11], changing all the system parameters (i.e., the laws that characterize the incoming flow, the service times, and the customer resources, as well as the probabilities pi), in order to investigate the practical applicability of the approximation. Since different values of the source data show similar results, we present only one of them as an example.
Thus, we assume that the arrival renewal process is characterized by a uniform distribution of the inter-arrival time on [0.5, 1.5], corresponding to a fundamental arrival rate λ = 1 customer per time unit. The remaining distribution laws and their parameters are presented in Table 1, according to the customer type.
We compared the asymptotic distributions with the empirical ones by means of the Kolmogorov distance
$$\Delta = \sup_x |F_{em}(x) - F_{as}(x)|,$$

where Fem (x) is the distribution function built on the basis of simulation results,
and Fas (x) is the Gaussian approximation given by (4).
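For a one-dimensional marginal this distance can be computed exactly from the simulated sample, without binning. The snippet below is a minimal sketch; the names simulated_V1, mean and K refer to hypothetical outputs of a simulation and of the approximation (4), not to quantities defined in the paper.

```python
import numpy as np
from math import erf, sqrt

# Kolmogorov distance between the empirical CDF of a simulated sample and a
# Gaussian approximation N(mu, sigma^2), for a one-dimensional marginal.
def kolmogorov_distance(sample, mu, sigma):
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    F_as = np.array([0.5 * (1.0 + erf((v - mu) / (sigma * sqrt(2.0)))) for v in x])
    # the empirical CDF jumps at the sample points, so both one-sided gaps are checked
    d_plus = np.max(np.arange(1, n + 1) / n - F_as)
    d_minus = np.max(F_as - np.arange(0, n) / n)
    return max(d_plus, d_minus)

# Hypothetical usage with simulated totals of the first customer type and the
# approximation parameters mean, K computed from (4):
# delta_1 = kolmogorov_distance(simulated_V1, mu=mean[0], sigma=np.sqrt(K[0, 0]))
```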
Table 2 shows the results for the marginal distributions of the total resource amount for each customer type (Δ1 and Δ2, respectively) and for the two-dimensional distribution (Δ).
As expected, the asymptotic results become more precise when the service time
parameter b increases. This conclusion is also confirmed by Figs. 3 and 4, which
compare the asymptotic approximations with the empirical histograms for the total
resource amount of each type of customers for two different values of b.

Table 1 Types of customers and their distribution laws

Type   | Probability | Service time     | Resources
First  | p1 = 0.7    | Gamma(0.5b, 0.5) | Exponential(2)
Second | p2 = 0.3    | Gamma(1.5b, 1.5) | Exponential(1)

Table 2 Kolmogorov distance

b  | 10    | 20    | 50    | 100   | 200
Δ1 | 0.136 | 0.072 | 0.035 | 0.024 | 0.017
Δ2 | 0.041 | 0.027 | 0.020 | 0.014 | 0.010
Δ  | 0.136 | 0.072 | 0.035 | 0.024 | 0.017

Fig. 3 Distributions of the total resource amount for the first type of customers (theoretical and simulated densities f(v1)). (a) b = 20. (b) b = 200
Fig. 4 Distributions of the total resource amount for the second type of customers (theoretical and simulated densities f(v2)). (a) b = 20. (b) b = 200

5 Conclusion

In this work we considered a queue with n customer types under the assumption that arrival epochs correspond to a renewal process and each customer occupies a random resource amount. At first we constructed the system of Kolmogorov differential equations, which in the general case cannot be solved analytically. Hence, we obtained approximations of the probability distributions in the case of infinitely growing service time by the asymptotic analysis method, and we showed that the n-dimensional probability distribution of the total resource amount is asymptotically Gaussian.

Finally, by discrete-event simulation we tested the reliability of the approximation, using the Kolmogorov distance as the accuracy measure.

Acknowledgement This publication has been prepared with the support of the University of Pisa
PRA 2018-2019 Research Project “CONCEPT—COmmunication and Networking for vehicular
CybEr-Physical sysTems”.

References

1. Basharin, G.P., Samouylov, K.E., Yarkina, N.V., Gudkova, I.A.: A new stage in mathematical
teletraffic theory. Automat. Rem. Contr. 70(12), 1954–1964 (2009). https://doi.org/10.1134/
S0005117909120030
2. Dhillon, H.S., Ganti, R.K., Baccelli, F., Andrews, J.G.: Modeling and analysis of k-tier
downlink heterogeneous cellular networks. IEEE J. Sel. Area Comm. 30(3), 550–560 (2012).
https://doi.org/10.1109/JSAC.2012.120405
3. Elie, E.: Intel OptaneTM technology as differentiator for internet of everything and fog
computing. In: 2018 Fifth International Conference on Software Defined Systems (SDS),
pp. 3–3 (2018). https://doi.org/10.1109/SDS.2018.8370412
4. Gimpelson, L.: Analysis of mixtures of wide- and narrow-band traffic. IEEE Trans. Commun.
Technol. 13(3), 258–266 (1965). https://doi.org/10.1109/TCOM.1965.1089121
5. Kelly, F.P.: Loss networks. Ann. Appl. Probab. 1(3), 319–378 (1991). https://doi.org/10.1214/
aoap/1177005872
6. Lebedenko, T., Yeremenko, O., Harkusha, S., Ali, A.S.: Dynamic model of queue management
based on resource allocation in telecommunication networks. In: 2018 14th International
Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer
Engineering (TCSET), pp. 1035–1038 (2018). https://doi.org/10.1109/TCSET.2018.8336371
7. Lisovskaya, E., Moiseeva, S., Pagano, M.: The total capacity of customers in the infinite-server
queue with MMPP arrivals. In: Vishnevskiy, V.M., Samouylov, K.E., Kozyrev, D.V. (eds.)
Distributed Computer and Communication Networks, pp. 110–120. Springer International
Publishing, Cham (2016)
8. Lisovskaya, E., Moiseeva, S., Pagano, M.: Infinite–server tandem queue with renewal arrivals
and random capacity of customers. In: Vishnevskiy, V.M., Samouylov, K.E., Kozyrev, D.V.
(eds.) Distributed Computer and Communication Networks, pp. 201–216. Springer Interna-
tional Publishing, Cham (2017)
9. Miraz, M.H., Ali, M., Excell, P.S., Picking, R.: A review on internet of things (IoT), internet
of everything (IoE) and internet of nano things (IoNT). In: 2015 Internet Technologies and
Applications (ITA), pp. 219–224 (2015). https://doi.org/10.1109/ITechA.2015.7317398
10. Moiseev, A., Nazarov, A.: Queueing network MAP–(GI/∞)^K with high-rate arrivals. Eur. J. Oper. Res. 254(1), 161–168 (2016). https://doi.org/10.1016/j.ejor.2016.04.011
11. Moiseev, A., Demin, A., Dorofeev, V., Sorokin, V.: Discrete-event approach to simulation of
queueing networks. In: High Technology: Research and Applications 2015. Key Engineering
Materials, vol. 685, pp. 939–942. Trans Tech Publications, Stafa-Zurich (2016). https://doi.org/
10.4028/www.scientific.net/KEM.685.939
12. Pankratova, E., Moiseeva, S.: Queueing system with renewal arrival process and two types
of customers. In: 2014 6th International Congress on Ultra Modern Telecommunications
and Control Systems and Workshops (ICUMT), pp. 514–517 (2014). https://doi.org/10.1109/
ICUMT.2014.7002154
13. Pankratova, E., Moiseeva, S.: Queueing system GI/GI/∞ with n types of customers. In: Dudin,
A., Nazarov, A., Yakupov, R. (eds.) Information Technologies and Mathematical Modelling -
Queueing Theory and Applications, pp. 216–225. Springer International Publishing, Cham
(2015)
14. Petrov, V., Solomitckii, D., Samuylov, A., Lema, M.A., Gapeyenko, M., Moltchanov, D.,
Andreev, S., Naumov, V., Samouylov, K., Dohler, M., Koucheryavy, Y.: Dynamic multi-
connectivity performance in ultra-dense urban mmWave deployments. IEEE J. Sel. Areas
Commun. 35(9), 2038–2055 (2017). https://doi.org/10.1109/JSAC.2017.2720482
15. Raj, A., Prakash, S.: Internet of everything: A survey based on architecture, issues and
challenges. In: 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical,
Electronics and Computer Engineering (UPCON), pp. 1–6 (2018). https://doi.org/10.1109/
UPCON.2018.8596923
16. Sun, B., Dudina, O.S., Dudin, S.A., Samouylov, K.E.: Optimization of admission control in
tandem queue with heterogeneous customers and pre-service. Optimization 69(1), 165–185
(2020). https://doi.org/10.1080/02331934.2018.1505887
