
Performance Analysis and Dynamic Scaling of Queue Dependent

Fog Nodes for IoT Devices


ABSTRACT

Surveys estimate that by 2020 approximately 50 billion devices will be connected to the Internet. Cloud computing has been the favoured platform for aggregating, analyzing and processing IoT traffic. However, the cloud may not be the favoured platform for IoT devices in terms of responsiveness and immediate processing and analysis of IoT data and requests. Fog or edge computing has emerged to overcome such problems by placing fog nodes in close proximity to IoT devices. One of the open issues and challenges in the area of fog computing is efficient scalability, in which a minimal number of fog nodes is allocated based on the IoT workload such that the SLA and QoS parameters are satisfied. To address this problem, we present a queuing mathematical and analytical model to study and analyze the performance of a fog computing system. Our mathematical model determines, under any offered IoT workload, the number of fog nodes needed so that the QoS parameters are satisfied. We propose a queueing model with queue-dependent heterogeneous servers, where the IoT workload is modelled as queues and the fog nodes are modelled as service providers. An edge node can use multiple servers, and the number of busy servers changes with the queue length so as to reduce the queue length and waiting time. This allows fog nodes to be created and removed dynamically in order to scale up and down.

KEYWORDS: Internet of Things, Fog Computing, Edge Computing, Cloud Computing, Queuing Theory, Performance Modelling and Analysis.

1. Introduction

Cloud computing is an IT infrastructure, developed and refined over the past decade, that provides SaaS, PaaS and IaaS to its customers and runs software services and applications on a pay-per-use basis. Today, cloud providers such as Amazon Web Services (AWS), Google, IBM, Microsoft and others lead this industry by offering many innovative and extended services to cater to different client requirements. From the client perspective, cloud-hosted applications, services and data can be accessed in a ubiquitous manner. Meanwhile, the deployment of smart connected IoT devices has been rising exponentially. In a conservative study, Cisco estimates that 50 billion IoT devices will be connected to the Internet by 2020 [1].
Much research has been conducted on leveraging cloud computing technologies to support and facilitate IoT [2-3]. The integration of IoT and the cloud has brought about a new paradigm of pervasive and ubiquitous computing, called Cloud of Things (CoT) [4]. CoT is an IoT connected product management solution that enables its clients to connect any device to any cloud data center (CDC). The increasing number of IoT devices will inevitably produce a huge amount of data, which has to be processed, stored and properly accessed seamlessly and ubiquitously by end clients. IoT devices often have limited computing and processing capacity, and are not able to perform sophisticated processing or store large amounts of data [5]. Hence, cloud computing seems to be the best alternative to meet the requirements of IoT scenarios. However, the cloud platform introduces obvious concerns and limitations in terms of responsiveness, latency, and performance in general for processing and accessing IoT traffic data. It is time consuming, especially for large data sets, to travel back and forth between client and cloud [6].
To address the limitations of the CoT architecture, a new and promising computing paradigm called Fog or Edge Computing has recently been advocated [7-8]. In fact, many telecom network companies have started building fog computing systems at the network edge to cater to emerging bandwidth-hungry applications and to minimize their operational cost and application response times [9]. Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. Fog nodes are primarily responsible for the local aggregation, processing and analysis of IoT workload, thereby yielding notable gains in performance and responsiveness. The Fog Computing (FC) architecture [10] is a distributed computing paradigm that empowers network edge devices at different hierarchical layers with varying degrees of computational and storage capability [7]. Edge computing (EC) enables clients to access IT and cloud computing services in close proximity, thereby enriching client satisfaction and improving Quality of Experience (QoE). In other words, EC allows computation and services to be performed at the edge. Consequently, many improvements come about, including improved QoS (in terms of lower latency and higher throughput), improved scalability and resource-efficiency, and an easier path to implementing authentication and authorization for IoT devices [11]. Most of the time, EC can process the workload and data generated by IoT devices at the edge, i.e. without forwarding such workload or data to the CDC. However, the cloud platform is still needed for sophisticated processing (such as data analytics) that entails heavy and complex compute and storage capacities and relies on historical data stored for a long period of time. It should be clearly noted that, in the context of IoT, fog computing is not a substitute for cloud computing; rather, these two technologies complement one another. The complementary functions of cloud and fog computing enable clients to experience a new breed of computing technology that serves the real-time processing and low-latency requirements of IoT applications running at the EC, and also supports complex analysis and long-term storage of data at the CDC [12].
Performance modelling and analysis have been, and continue to be, of high theoretical and practical importance in the research, development and optimization of computer applications and communication systems [13]. This includes a broad range of research activities, from empirical methods through more sophisticated mathematical approaches to simulation [14]. These have played an important role in understanding critical problems and in the development, management and planning of complex systems. Queuing theory has been used widely in studying and modelling the performance and QoS of different ICT systems [15]. By using queuing modelling, one can derive key system performance and QoS parameters, which may include mean response time, mean request queue length, mean waiting time, mean throughput, CPU utilization and blocking probability [16].
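As a toy illustration of these QoS parameters, the textbook M/M/1 formulas can be evaluated directly. This is only a warm-up sketch, not the model developed later in this paper; the function name and the example rates are ours.

```python
def mm1_metrics(lam, mu):
    """Classical M/M/1 metrics for arrival rate lam and service rate mu (lam < mu)."""
    rho = lam / mu                   # server utilization
    L = rho / (1 - rho)              # mean number of requests in the system
    W = 1 / (mu - lam)               # mean response time (Little's law: W = L / lam)
    Wq = rho / (mu - lam)            # mean waiting time in the queue
    return {"utilization": rho, "L": L, "W": W, "Wq": Wq}

# e.g. 8 requests/s offered to a node serving 10 requests/s:
m = mm1_metrics(8, 10)               # utilization 0.8, L = 4, W = 0.5 s, Wq = 0.4 s
```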
Dynamic scaling, in the context of IoT and fog nodes, is the ability to scale resources up and down according to the incoming IoT workload. If the aggregated IoT workload goes up, more fog nodes have to be allocated to meet the increase; conversely, if the IoT workload goes down, the number of provisioned fog nodes has to decrease. However, the number of fog nodes allocated always has to satisfy the SLA parameters. In a fog computing environment, the primary challenge is to use minimal EC resources at any given time while satisfying the SLA performance requirements. As the IoT workload changes over time, it is important to dynamically provision the minimal number of ECs that satisfies the agreed-on SLA requirements. Static resource allocation increases the cost of running IoT services. Allocating more ECs than required to satisfy the SLA results in over-provisioning and higher cost to the IoT provider. On the other hand, using fewer ECs than required leads to under-provisioning, whereby the SLA performance requirements are violated. Hence, dynamic resource allocation is essential to avoid both over-provisioning and under-provisioning. It is therefore critical for the IoT provider to determine the correct number of ECs needed to handle the presented IoT workload while satisfying the SLA requirements (say, response time).
In this paper, we propose an analytical model based on queuing theory to capture the dynamicity of IoT systems and study their performance, derive formulas from it, and show examples of how to dynamically and efficiently scale EC nodes such that any violation of the SLA requirements is avoided. Specifically, we show how our model can be used to estimate the QoS parameters for IoT devices using a combination of fog and cloud architecture, and how it can be used for dynamic scalability of fog/edge nodes. Our proposed analytical model addresses and provides answers to three main questions:

(1) Given the IoT workload and the available edge computing nodes, what is the QoS offered to users?
(2) Given the IoT workload and a target QoS, how many edge computing nodes are needed?
(3) Given the available edge computing nodes and a target QoS, what is the maximum IoT workload supported by the system?
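Question (2) can be sketched as a search for the smallest number of nodes meeting a waiting-time target. The sketch below deliberately uses the plain homogeneous M/M/c Erlang C formula as a stand-in for the heterogeneous finite-buffer model developed later; the function names, rates and target are our own illustrative choices.

```python
import math

def erlang_c(lam, mu, c):
    """Probability that an arrival must wait in an M/M/c queue (Erlang C)."""
    a = lam / mu                                   # offered load in Erlangs
    if a >= c:
        return 1.0                                 # unstable: every arrival waits
    s = sum(a**k / math.factorial(k) for k in range(c))
    last = a**c / (math.factorial(c) * (1 - a / c))
    return last / (s + last)

def min_nodes(lam, mu, target_wait, c_max=100):
    """Smallest c whose mean queueing delay W_q meets the target (question 2)."""
    for c in range(1, c_max + 1):
        if lam / mu < c:                           # only stable configurations
            wq = erlang_c(lam, mu, c) / (c * mu - lam)   # mean wait in queue
            if wq <= target_wait:
                return c
    return None

# e.g. 4 requests/s, nodes serving 1 request/s, target mean wait 0.1 s:
min_nodes(4.0, 1.0, 0.1)
```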

The main contributions of this work can be summarized as follows:

• A queuing analytical model is presented to capture the dynamics and behavior of fog nodes
of IoT devices. The analytical model is composed of three concatenated subsystems including
edge nodes, cloud gateway and CDC queuing models.
• Mathematical formulas are derived from the analytical model for key performance
measures.
• Numerical examples are given to show how our analytical model can be used to predict the performance of a fog computing system, and to determine the required number of edge nodes under variable IoT workload conditions.
• Discrete Event Simulations are used to cross check and validate the accuracy of our
analytical models.

The rest of the paper is organized as follows: A description of fog computing architecture is
presented in Section 2. The proposed fog computing model is presented in Section 3. Section
4 presents analytical model for the proposed model. Section 5 presents numerical and
simulation results. Section 6 summarizes the related work. Finally, Section 7 includes
concluding remarks and future works.

2. The System Architecture of Fog Computing

In this section, we present the system architecture of the three-tier FC, as shown in Figure 1, and summarize the details of each tier. As stated earlier, FC is a virtualized architecture that provides compute, storage, and networking services between the traditional CDC and end IoT devices, typically located at the edge of the network [17]. FC essentially involves components of an application running both in the CDC and in EC devices between the IoT devices and the cloud, that is, in smart gateways, routers, or dedicated fog devices. It supports mobility, communication protocols, computing resources, distributed data analytics, interface heterogeneity, and cloud integration to address the requirements of applications that need low latency with a wide and dense geographical distribution [18]. The majority of traffic and data generated by IoT devices is not processed at the CDC, since its storage and processing capabilities can be too far from the IoT devices. The needs of geo-spatial distribution of resources, real-time communication, and incorporation with large networks are handled by FC. Through the FC intermediary layer, part of the processing is done by EC devices closer to the IoT devices, resulting in lower response times and bandwidth usage. As shown in Figure 1, the FC system contains three distinct physical tiers, namely Tier 1, Tier 2, and Tier 3. Tier 1, also known as the IoT devices tier, is the bottom tier where the physical sensors and IoT devices are placed [19]. At this tier, the IoT devices generate traffic and workload for the system backend, for further processing by hosted applications at either the fog or the cloud tier. The traffic transmitted by the IoT devices is received by the access points at the border of the fog tier [20], and is then redirected to the fog layer for possible processing by the EC devices, or redirected by the EC to the cloud layer through the cloud gateway. Tier 2 is the FC tier, and it is composed of the EC devices. These EC devices are connected to the cloud through the cloud gateway, and are responsible for sending traffic to the CDC on a periodic basis. The EC nodes have a limited storage capacity that allows them to store the received data temporarily for analysis and then send the source devices the needed feedback [21]. Unlike the traditional cloud platform, in FC all real-time analysis and latency-sensitive applications take place at the fog layer itself. These real-time systems are used to provide context-aware services to the connected IoT devices, thereby enriching client satisfaction and enhancing QoE. The EC paradigm is a federation of resource-poor devices (e.g. sensors, Radio Frequency Identification (RFID) tags), human-controlled devices (e.g. smartphones, tablets), stable networking devices (e.g. switches, routers) and resource-rich machines (e.g. cloudlets) [17, 22].

Figure 1. Three-tier architecture of a typical IoT system

Tier 3 is the cloud computing layer. The key components in this layer are the data centers (DCs), which provide the required computing capability and storage to clients and applications based on the pay-as-you-go model. A DC is a facility consisting of physical servers (PSs), storage and network devices (e.g. switches, routers, and cables), power distribution systems, and cooling systems [23]. Indeed, the large DCs of Google, Microsoft, Yahoo and Amazon contain tens of thousands of PSs [24]. Each PS is provisioned as many virtual machines (VMs) in real time, subject to availability and to the agreed service levels in the client Service Level Agreement (SLA) document.

3. Fog Computing Architectural Model

In this section, we present our fog computing architectural model by defining its three-tier network structure, as depicted in Figure 2. The first tier, the bottom layer, encompasses all IoT devices, which are responsible for sensing a multitude of events and transmitting the raw sensed data to the immediately higher layer. We assume that the total number of IoT devices is constant over time and equal to X end clients. The access points act as connection points (through wireless or wired links) between the IoT devices and the EC nodes, and receive the incoming traffic from end clients. The IoT device messages are aggregated at the access points (located in close proximity to the IoT devices), and subsequently sent to the edge nodes. The middle tier, also known as the fog computing layer, comprises edge nodes intelligent enough to process, compute, and temporarily store the received information, and to forward the remaining requests or workload to the cloud tier for further processing or storage. These EC devices are connected to the CDC, and are responsible for sending data to the cloud through a cloud gateway (CG). The last tier is the cloud data center layer. This layer constitutes a large data center with a collection of PSs, where VMs running on top of the PSs are capable of processing and storing an enormous amount of data.

4. Edge Computing Model

We analyze the proposed model as an M/M/c/K queuing system with queue-dependent servers. The input source is unlimited, i.e. the population of clients that may send requests to the system is infinite. The service times are independent and exponentially distributed, with mean service time 1/μ. Client requests are generated from this infinite population according to a quasi-random arrival process. The state transition diagram, in which state n denotes the number of client requests in the system, is shown in Figure 4.

Figure 4. Finite Source Queue - Rate transition diagram

The number of fog nodes in use depends upon the number of client requests present in the system, according to the following threshold policy:

 The first node is permanently available in the system.

 As soon as there are N_1 client requests waiting in the system, the second fog node starts providing service. It is removed from the system if the queue length drops below N_1.

 When the number of client requests waiting in the system reaches the level N_{j-1}, the j-th (j = 2, 3, …, c) fog node becomes available for service. As soon as the queue length again drops below N_{j-1}, the j-th node is removed from the system.
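The threshold policy above can be sketched as a small function mapping the queue length to the number of active fog nodes. The function name and the example thresholds are our own illustrative choices.

```python
def active_nodes(n, thresholds):
    """Number of fog nodes in service when n client requests are in the system.

    `thresholds` is the list [N_1, ..., N_{c-1}] at which nodes 2..c switch on.
    """
    j = 1                        # the first node is permanently available
    for N in thresholds:
        if n >= N:               # node switched on once the queue reaches N
            j += 1
    return j

# With N_1 = 3 and N_2 = 6: one node below 3 requests, two for 3..5, three from 6 up.
assert [active_nodes(n, [3, 6]) for n in [0, 2, 3, 5, 6, 10]] == [1, 1, 2, 2, 3, 3]
```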
For the mathematical analysis, we model the system behaviour as a Markov process. We define π_n as the steady-state probability that there are n client requests in the system. The steady-state equations for the finite-buffer multi-server queuing system with c queue-dependent heterogeneous fog nodes are given by

λ π_0 = μ_1 π_1,  (1)

(λ + μ_1) π_n = λ π_{n-1} + μ_1 π_{n+1},  1 ≤ n ≤ N_1 − 2,  (2)

(λ + φ_j) π_{N_j − 1} = λ π_{N_j − 2} + φ_{j+1} π_{N_j},  1 ≤ j ≤ c − 1,  (3)

(λ + φ_j) π_n = λ π_{n-1} + φ_j π_{n+1},  2 ≤ j ≤ c − 1,  N_{j-1} ≤ n ≤ N_j − 2,  (4)

(λ + φ_c) π_n = λ π_{n-1} + φ_c π_{n+1},  N_{c-1} ≤ n ≤ K − 1,  (5)

φ_c π_K = λ π_{K-1},  (6)

where φ_j = Σ_{i=1}^{j} μ_i. Solving equations (1)-(6) recursively, we obtain the steady-state queue size distribution as

π_n = ρ_1^n π_0,  1 ≤ n ≤ N_1 − 1,  (7)

π_n = {∏_{i=1}^{j-1} ρ_i^{N_i − N_{i-1}}} ρ_j^{n − N_{j-1} + 1} π_0,  j = 2, 3, …, c − 1,  N_{j-1} ≤ n ≤ N_j − 1,  (8)

π_n = {∏_{i=1}^{c-1} ρ_i^{N_i − N_{i-1}}} ρ_c^{n − N_{c-1} + 1} π_0,  N_{c-1} ≤ n ≤ K,  (9)

where N_0 = 1 and ρ_j = λ/φ_j.

Now π_0 can be obtained from the normalization condition Σ_{n=0}^{N_1 − 1} π_n + Σ_{j=2}^{c-1} Σ_{n=N_{j-1}}^{N_j − 1} π_n + Σ_{n=N_{c-1}}^{K} π_n = 1. Hence,

π_0 = [ (1 − ρ_1^{N_1}) / (1 − ρ_1)
      + Σ_{i=2}^{c-1} {∏_{j=1}^{i-1} ρ_j^{N_j − N_{j-1}}} ρ_i (1 − ρ_i^{N_i − N_{i-1}}) / (1 − ρ_i)
      + {∏_{j=1}^{c-1} ρ_j^{N_j − N_{j-1}}} ρ_c (1 − ρ_c^{K − N_{c-1} + 1}) / (1 − ρ_c) ]^{−1}.  (10)
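Numerically, the recursion behind equations (1)-(6) is a birth-death chain whose service rate in state n is φ_j for the j nodes then switched on, so the whole distribution can be computed directly. A minimal sketch under that reading (function and parameter names are ours, not the paper's code):

```python
def steady_state(lam, mu, thresholds, K):
    """Steady-state probabilities pi_0..pi_K for the queue-dependent model.

    lam: arrival rate; mu: per-node service rates [mu_1, ..., mu_c];
    thresholds: switch-on levels [N_1, ..., N_{c-1}]; K: buffer size.
    """
    c = len(mu)
    phi = [sum(mu[:j + 1]) for j in range(c)]          # phi_j = mu_1 + ... + mu_j
    def service_rate(n):
        j = 1 + sum(1 for N in thresholds if n >= N)   # nodes on under the policy
        return phi[min(j, c) - 1]
    pi = [1.0]                                         # unnormalized, pi_0 = 1
    for n in range(1, K + 1):
        pi.append(pi[-1] * lam / service_rate(n))      # balance: phi_j pi_n = lam pi_{n-1}
    total = sum(pi)                                    # normalization, cf. eq. (10)
    return [p / total for p in pi]
```

With c = 1 and no thresholds this collapses to the classical M/M/1/K distribution, which makes a quick sanity check on the recursion.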

PERFORMANCE MEASUREMENT

Some performance measures using steady-state queue size distribution are as follows.

The probability that exactly j fog nodes (j = 1, 2, …, c − 1) are operating in the system, P(j), is given by

P(j) = Prob{N_{j-1} ≤ n ≤ N_j − 1} = {∏_{i=1}^{j-1} ρ_i^{N_i − N_{i-1}}} ρ_j (1 − ρ_j^{N_j − N_{j-1}}) / (1 − ρ_j) π_0.

The probability that all c fog nodes are operating in the system is

P(c) = Prob{N_{c-1} ≤ n ≤ K} = {∏_{i=1}^{c-1} ρ_i^{N_i − N_{i-1}}} ρ_c (1 − ρ_c^{K − N_{c-1} + 1}) / (1 − ρ_c) π_0.

The probability that the j-th (j = 1, 2, …, c) fog node is busy is

P_B(j) = Σ_{i=j}^{c} P(i).

Let L[c : N_1, N_2, …, N_{c-1}] denote the average number of client requests in the system with c fog nodes, which turn on at the threshold levels N_1, N_2, …, N_{c-1}, respectively. Then the expression for L[c : N_1, N_2, …, N_{c-1}] is obtained as

L[c : N_1, N_2, …, N_{c-1}] = Σ_{n=0}^{K} n π_n
  = π_0 [ ρ_1 {1 − N_1 ρ_1^{N_1 − 1} + (N_1 − 1) ρ_1^{N_1}} / (1 − ρ_1)^2
  + Σ_{j=2}^{c-1} {∏_{i=1}^{j-1} ρ_i^{N_i − N_{i-1}}} { N_{j-1} ρ_j (1 − ρ_j^{N_j − N_{j-1}}) / (1 − ρ_j)
      + ρ_j^2 {1 − ρ_j^{N_j − N_{j-1}} − (N_j − N_{j-1}) ρ_j^{N_j − N_{j-1} − 1} (1 − ρ_j)} / (1 − ρ_j)^2 }
  + {∏_{i=1}^{c-1} ρ_i^{N_i − N_{i-1}}} { N_{c-1} ρ_c (1 − ρ_c^{K − N_{c-1} + 1}) / (1 − ρ_c)
      + ρ_c^2 {1 − ρ_c^{K − N_{c-1}} (1 + (K − N_{c-1})(1 − ρ_c))} / (1 − ρ_c)^2 } ].
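Given any steady-state vector π_0..π_K, these measures reduce to band sums over the thresholds, so they are easy to evaluate numerically. A sketch (helper and variable names are ours):

```python
def performance_measures(pi, thresholds):
    """Mean queue length L, band probabilities P(j), and busy probabilities P_B(j).

    pi: steady-state probabilities pi_0..pi_K; thresholds: [N_1, ..., N_{c-1}].
    P(j) sums pi_n over the band N_{j-1} <= n <= N_j - 1 in which exactly j
    nodes are on (N_0 = 1; the last band runs up to K).
    """
    K = len(pi) - 1
    c = len(thresholds) + 1
    edges = [1] + list(thresholds) + [K + 1]            # band boundaries
    L = sum(n * p for n, p in enumerate(pi))            # L = sum_n n * pi_n
    P = [sum(pi[edges[j - 1]:edges[j]]) for j in range(1, c + 1)]
    PB = [sum(P[j - 1:]) for j in range(1, c + 1)]      # P_B(j) = sum_{i>=j} P(i)
    return L, P, PB
```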

EXPERIMENTAL RESULTS

This section illustrates the numerical tractability of the proposed threshold policy. A computational program was developed using MATLAB R2012a (ver. 7.14.0.739) on an Intel Dual Core processor with a 2.20 GHz CPU and 8 GB of RAM, running the Microsoft Windows 7 operating system. To validate the analytical results, we compute numerical results for the following models.

Model 1. In this model, the fog nodes turn on one by one with the arrival of each customer, that is, N_{i+1} = N_i + 1. We also set the service rates μ_i = 1.

Model 2. In this case as well, the heterogeneous fog nodes turn on one by one with the arrival of each customer, that is, N_{i+1} = N_i + 1. We set μ_i = 1 + 0.3(i − 1).
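Under our reading of the two models (Model 1 homogeneous, Model 2 with linearly increasing rates), the service-rate vectors can be generated as below; the function name is our own.

```python
def service_rates(model, c):
    """Service rates mu_1..mu_c for the two test models.

    Model 1: mu_i = 1 (homogeneous); Model 2: mu_i = 1 + 0.3 (i - 1).
    """
    if model == 1:
        return [1.0] * c
    return [1.0 + 0.3 * (i - 1) for i in range(1, c + 1)]

# e.g. Model 2 with c = 4 nodes gives rates approximately [1.0, 1.3, 1.6, 1.9].
```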
Figure 5. Impact of λ (arrival rate) on L (average number of customers in the system) for Model 1

Figure 6. Impact of λ (arrival rate) on L (average number of customers in the system) for Model 2

Figures 5 and 6 depict the average number of customer requests in the system (L) versus the arrival rate λ for different numbers of fog nodes (c), for Model 1 and Model 2 respectively. It is observed that the average number of customer requests in the system (L) increases with the arrival rate λ, and is larger when fewer fog nodes are available in the model.
Figure 7. Impact of λ (arrival rate) on L (average number of customers in the system) for Model 1

Figure 8. Impact of λ (arrival rate) on L (average number of customers in the system) for Model 2

Figure 8 illustrates the dependence of the server utilization U_s on ρ, with ρ varying from 0.5 to 0.9 and c varying from 6 to 14. It is observed that, for fixed c, the server utilization U_s increases as ρ increases. Further, for fixed ρ, the server utilization U_s decreases as c increases.
Figure 8. 3D graph illustrating the average number of customers in the system (L) for different buffer sizes (K) and utilisation factors (ρ).

Table 1: Probabilities of the jth server being busy for Model 1, varying λ and c.

c   λ     PB(4)     PB(5)     PB(6)     PB(7)     PB(8)
6   5     0.98689   0.97342   0.93543   —         —
6   5.5   0.99232   0.97345   0.94456   —         —
6   6     0.99657   0.99676   0.98876   —         —
6   6.5   0.99989   0.99456   0.99567   —         —
6   7     0.99867   0.99878   0.99975   —         —
6   7.5   0.99898   0.99976   0.99569   —         —
7   5     0.93676   0.87234   0.75876   0.65765   —
7   5.5   0.96764   0.90456   0.80458   0.69986   —
7   6     0.97234   0.92232   0.87764   0.75034   —
7   6.5   0.99645   0.98123   0.95689   0.92644   —
7   7     0.99786   0.99453   0.98654   0.97349   —
7   7.5   0.99867   0.99347   0.99357   0.98865   —
8   5     0.94456   0.84765   0.71865   0.55875   0.44543
8   5.5   0.95453   0.86348   0.72786   0.57678   0.44454
8   6     0.97345   0.88763   0.77467   0.59986   0.45764
8   6.5   0.98658   0.95985   0.88876   0.79678   0.70343
8   7     0.98989   0.96346   0.93532   0.84076   0.82544
8   7.5   0.98987   0.98867   0.97126   0.92455   0.90434

Table 2: Probabilities of the jth server being busy for Model 2, varying λ and c.

c   λ     PB(4)     PB(5)     PB(6)     PB(7)     PB(8)
6   5     0.86377   0.72121   0.54255   —         —
6   5.5   0.93663   0.81262   0.67262   —         —
6   6     0.96678   0.89626   0.80262   —         —
6   6.5   0.98672   0.95262   0.92526   —         —
6   7     0.99363   0.98262   0.96256   —         —
6   7.5   0.99377   0.99262   0.92626   —         —
7   5     0.86672   0.67265   0.45256   0.28262   —
7   5.5   0.95362   0.74262   0.53926   0.36256   —
7   6     0.93262   0.80626   0.62126   0.44262   —
7   6.5   0.96627   0.85252   0.70562   0.54262   —
7   7     0.96272   0.89252   0.78262   0.64262   —
7   7.5   0.98262   0.93256   0.85262   0.74252   —
8   5     0.82627   0.65256   0.42626   0.24256   0.14266
8   5.5   0.89262   0.72266   0.50252   0.31262   0.18262
8   6     0.92272   0.77262   0.57252   0.38262   0.24262
8   6.5   0.94626   0.82566   0.64252   0.45262   0.30252
8   7     0.95262   0.86252   0.70552   0.52262   0.36262
8   7.5   0.97612   0.82525   0.76526   0.59625   0.43252

Table 3: Probabilities of the jth server being busy for Model 1, varying λ and K.

K    λ     PB(4)      PB(5)      PB(6)      PB(7)
30   5     0.936105   0.871347   0.754951   0.657234
30   5.5   0.965465   0.901888   0.806524   0.699239
30   6     0.971505   0.921781   0.871608   0.750345
30   6.5   0.994510   0.981736   0.958317   0.926115
30   7     0.998202   0.993577   0.984329   0.970456
30   7.5   0.999430   0.997823   0.994342   0.988685
40   5     0.928712   0.871108   0.794656   0.687460
40   5.5   0.968706   0.911098   0.824684   0.727470
40   6     0.978756   0.923095   0.864687   0.857470
40   6.5   0.998098   0.993673   0.985559   0.974404
40   7     0.999716   0.998985   0.997523   0.995330
40   7.5   0.999959   0.999843   0.999591   0.999182
50   5     0.938212   0.899767   0.814530   0.690454
50   5.5   0.969698   0.913915   0.830240   0.736106
50   6     0.998975   0.939923   0.996837   0.890653
50   6.5   0.999294   0.997652   0.994642   0.990503
50   7     0.999954   0.999837   0.999601   0.999248
50   7.5   0.999997   0.999989   0.99997    0.999941

Table 4: Probabilities of the jth server being busy for Model 2, varying λ and K.

K    λ     PB(4)      PB(5)      PB(6)      PB(7)
30   5     0.867656   0.636373   0.458263   0.274264
30   5.5   0.904355   0.742637   0.539732   0.363377
30   6     0.933456   0.801137   0.623783   0.443898
30   6.5   0.952783   0.853572   0.703783   0.543738
30   7     0.978279   0.892783   0.781728   0.643787
30   7.5   0.982048   0.933576   0.853686   0.749454
40   5     0.866377   0.672682   0.453629   0.288311
40   5.5   0.913468   0.741628   0.532729   0.3646356
40   6     0.923568   0.802628   0.622728   0.447257
40   6.5   0.956677   0.852537   0.702688   0.547454
40   7     0.974245   0.903546   0.797289   0.661637
40   7.5   0.984526   0.942688   0.874728   0.783535
50   5     0.863537   0.672687   0.454828   0.285637
50   5.5   0.903233   0.742678   0.538289   0.364544
50   6     0.932536   0.802678   0.622889   0.444565
50   6.5   0.953636   0.852788   0.702799   0.548454
50   7     0.973536   0.902787   0.792678   0.646844
50   7.5   0.983563   0.949826   0.882927   0.803426
