
Indetermsoft-Set-Based D* Extra Lite Framework for Resource
Provisioning in Cloud Computing
Bhargavi Krishnamurthy 1, * and Sajjan G. Shiva 2, *

1 Department of CSE, Siddaganga Institute of Technology, Tumakuru 572103, Karnataka, India


2 Department of CS, University of Memphis, Memphis, TN 38152, USA
* Correspondence: [email protected] (B.K.); [email protected] (S.G.S.)

Abstract: Cloud computing is an immensely complex, huge-scale, and highly diverse computing plat-
form that allows the deployment of highly resource-constrained scientific and personal applications.
Resource provisioning in cloud computing is difficult because of the uncertainty associated with it in
terms of dynamic elasticity, rapid performance change, large-scale virtualization, loosely coupled
applications, the elastic escalation of user demands, etc. Hence, there is a need to develop an intelli-
gent framework that allows effective resource provisioning under uncertainties. The Indetermsoft
set is a promising mathematical model that is an extension of the traditional soft set that is designed
to handle uncertain forms of data. The D* extra lite algorithm is a dynamic heuristic algorithm that
makes use of the history of knowledge from past search experience to arrive at decisions. In this
paper, the D* extra lite algorithm is enabled with the Indetermsoft set to perform proficient resource
provisioning under uncertainty. The experimental results show that the proposed algorithm is promising with respect to performance metrics such as power consumption, resource utilization, total execution time, and learning rate. The expected value analysis also validated the
experimental results obtained.

Keywords: resource provisioning; uncertainty; D* extra lite; Indetermsoft set; cloud computing

1. Introduction
Cloud computing is a highly expansive platform that supports a broad spectrum of applications, catering to diverse needs across various domains. Its vast infrastructure allows for the scalable deployment of both resource-intensive scientific simulations and lightweight personal applications. By offering on-demand access to computing resources, cloud computing enables flexibility and efficiency, accommodating everything from large-scale data processing to everyday tasks. This adaptability makes it an invaluable tool for organizations and individuals alike, providing the necessary resources to handle complex workloads and drive innovation across different fields. Effective resource management is composed of several activities, which include the provisioning of resources, the reporting of provisioning, task scheduling, and thermal management. The resource management system must be capable of dealing with the huge size, heterogeneity, and changing workload demands of cloud users. The main aim of the resource-provisioning scheme is to achieve the maximum data transfer rate with the minimum incurred cost of transfer. For applications utilizing the cloud, their Quality of Service (QoS) must be upgraded without violating the Service-Level Agreement (SLA). While the resources are provisioned, resource availability must be ensured, deployment must be optimized, and interdependency between the user tasks must be managed very well. The improper provisioning of resources leads to an increase in the host machine downtime, compromised service, the inefficient functioning of the application, energy wastage, prolonged time to scale up/down, inability to satisfy the cost goal of users, etc. [1–3].


Modern cloud computing structures are utilized to serve computation-intensive applications with large datasets. Resource provisioning is an important aspect of accommodating
this use. Potential sources that cause interruption to effective resource provisioning are
uncertain operating conditions, performance fluctuations, the improper positioning of the
cloud, and the complex infrastructure of a multi-cloud environment. Several forms of
uncertainty in the cloud include dynamic elasticity, rapid performance change, large-scale
virtualization, loosely coupled applications, the elastic escalation of user demands, etc.
These uncertainties make resource provisioning a difficult activity. It is also impossible to
determine the workload demands and to obtain the exact knowledge of system parameters
like processor speed and bandwidth. Hence, there is a need to develop an intelligent
framework with a self-learning ability to make resource-provisioning decisions by properly
handling these uncertainties [4,5].
The classical soft set deals only with the determinate form of data, whose values
are obviously certain and precise. However, in real-world scenarios, there are several
sources that generate uncertainty, which include a lack of information and ignorance. The
Indetermsoft set is an extension of the traditional soft set that is designed to handle the
uncertain form of data. The word “indeterm” represents indeterminate, reflecting the
conflicting and unique form of output. The mathematical definition of the Indetermsoft
set is as follows: Let U be the universe of discourse, H be a non-empty subset of U, and P(H) be the power set of H. The Indetermsoft set function ISSF: A → P(H) satisfies three properties: the set A has indeterminacy, P(H) has some indeterminacy, and there exists an attribute value v ∈ A such that ISSF(v) is indeterminate [6–8].
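As an illustration only, the following minimal Python sketch shows one way such an indeterminate mapping could be represented: attribute values map to subsets of H (elements of P(H)), and an explicit flag marks outputs that are indeterminate. The class and method names here are hypothetical and are not part of the cited definition.

from dataclasses import dataclass, field

@dataclass
class IndetermSoftSet:
    # Toy model of an Indetermsoft set: attributes map to subsets of a
    # universe H, and any mapping may be flagged as indeterminate.
    universe: frozenset                      # the non-empty subset H of U
    mapping: dict = field(default_factory=dict)

    def assign(self, attribute, subset, indeterminate=False):
        # Each attribute value is mapped to a subset of H (an element of P(H));
        # the flag records that the value at this attribute is uncertain.
        self.mapping[attribute] = (frozenset(subset) & self.universe, indeterminate)

    def is_indeterminate(self, attribute):
        return self.mapping.get(attribute, (frozenset(), True))[1]

# Example: the set of VMs satisfying a "high CPU" requirement is only partially known.
H = frozenset({"vm1", "vm2", "vm3"})
iss = IndetermSoftSet(universe=H)
iss.assign("high_cpu", {"vm2", "vm3"}, indeterminate=True)
print(iss.is_indeterminate("high_cpu"))  # True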
D* lite is exactly the reverse of the A* algorithm and is much simpler. The trivial A*
algorithm is executed in reverse order; i.e., it begins from the goal state and traverses to the
start state. It first determines the current solution and goes into the waiting state until an
obstacle occurs. Then, D* lite performs re-planning and incrementally repairs the path by
keeping the modifications around the robot’s current pose [9,10]. D* extra lite is a novel
and general-purpose algorithm that performs a shortest-path search using an incremental
search approach. It performs a fast re-initialization of the search space to determine the
shortest path in an unknown or highly dynamic environment. In this paper, the novel
D* extra lite algorithm is enriched with the Indetermsoft set mathematical model, which
performs a superior re-initialization process under cloud uncertainty to provide stable
resource-provisioning decisions.
The objectives of this paper are as follows:
• Providing a brief introduction to the need for effective resource provisioning in uncer-
tain cloud computing systems;
• The efficient handling of parameter uncertainty in the user tasks and virtual machines
using the Indetermsoft set mathematical model;
• The design of a novel Indetermsoft-set-based D* extra lite framework for resource
provisioning in the cloud;
• An experimental evaluation of the D* extra lite framework's performance on the Google
Cluster dataset and the Bitbrains dataset using the CloudSim 3.0 open-source framework;
• An expected value analysis and validation of the D* extra lite framework in a dynamic
cloud scenario with respect to future time intervals.
The remaining sections of this paper are organized as follows. Section 2 discusses
related work. Section 3 provides a mathematical definition of the system model, along with
the performance objectives. Section 4 presents the novel Indetermsoft-set-based D* extra lite
framework. Section 5 provides the mathematical modeling of the performance objectives
that were considered for evaluation. Section 6 deals with the results and discussion by
considering the Google Cluster dataset and the Bitbrains dataset. Finally, Section 7 presents
the conclusions.

2. Related Work
This section provides a comprehensive overview of the existing literature and de-
velopments pertinent to resource provisioning in cloud computing. Through this review
of previous research, methodologies, and technological advancements, we establish the
foundation/scope for the proposed work.
Shreshth et al. [11] present an artificial-intelligence-based holistic approach named
HUNTER for resource management in cloud computing. They view optimizing energy
emission in cloud data centers as a multiple-objective-based scheduling problem. A large
amount of energy is consumed by cloud data centers, and hence, there is a need to optimize
energy emissions. One of the key factors to consider while optimizing energy consumption
is reducing the number of thermal hotspots which lead to the degradation of system
performance. Three important models are considered for resource management: cooling,
thermal, and energy. A gated graph convolutional network is viewed as a surrogate model
for optimizing Quality of Service (QoS) to perform optimal task scheduling. The continuous
training of the model helps in quick adaption to dynamic scenarios by providing the
permission to change the scheduling decisions through task migration. Power performance
is used as a heuristic to efficiently balance the load among the cloud hosts. The performance
of the HUNTER is evaluated through the CloudSim simulation toolkit and is good with
respect to energy conservation. However, the approach exhibits poor scalability as it cannot
scale up to large-scale graphs with higher node degrees.
Spyridon et al. propose an auto-scaling framework for resource provisioning in a
cloud computing environment [12]. The over- and under-provisioning of resources results
in a loss of revenue for the cloud brokers whose primary function is to select, manage, and
provide the resources in a complex heterogeneous environment. Since the client resource
requirements are uncertain, it becomes difficult for the cloud broker to predict and process
the client resource demands. Here, the resource-provisioning problem is divided into two
stages: resource selection and resource management. The resource selection problem deals
with the process of selecting the services that meet the requirements of multiple cloud
service providers. The resource management problem deals with the effective maintenance
of cloud resources in terms of resource utilization and overhead maintenance. Both selection
and management of resources have been considered as decision-making problems that
focus on matching the resource requests with services provided. The cloud users make use
of virtualized resources in order to benefit from long-term pricing strategies. For providing
cost-effective solutions, a precise estimation of the upcoming workload is required. An adaptive auto-scaling framework for resource provisioning is proposed here; it uses historical time-series data to train a K-means-enabled CNN framework that categorizes future workload demands as low, medium, or high according to their CPU utilization rate. From
the performance evaluation, it is observed that the solution deployment cost is minimal.
However, the framework exhibits higher sensitivity towards initial parameter setting and
an inability to handle categorical data in large-state-space environments.
Kumar et al. present an efficient algorithm for resource provisioning for the efficient
execution of workflows in cloud computing platforms [13]. The workflow considered here
is composed of tasks exhibiting varying resource requirements in terms of memory storage,
memory type, and computation speed. Improper mapping of the workflows to resources
leads to wastage of resources and increased makespan time. Here, the workflow is divided
into three categories: compute-intensive, memory-intensive, and storage-intensive. The
proposed algorithm operates in two phases to provision resources precisely by distinguish-
ing the tasks as computation-intensive and non-computation-intensive. The workflow
model is composed of limited information about the task contained in it, which makes it
applicable to real-time scenarios. The Amazon EC2 cloud model is considered for offer-
ing on-demand computational resources for applications using the presented algorithm.
However, the approach is found to be static and applies a standard set of operations to pro-
cess computation-intensive and non-computation-intensive tasks. This limits the practical
applicability of the approach.

Mao et al. discuss a game approach based on mean-field theory for resource man-
agement in cloud computing [14]. Resource management is viewed as one of the promi-
nent problems in serverless cloud computing environments. Multiple users compete for
resources which usually suffer from scalability issues. Here, an actor–critic learning algo-
rithm is developed to effectively deal with large-state-space environments. The algorithm
is implemented by considering linear and neural network approximations. The mean-
field approach is compatible with several forms of function approximations. Theoretically,
convergence to Nash equilibrium is achieved under linear and softmax approximations.
The performance of this approach is better in terms of resource utilization, but the time to
converge towards better resource-provisioning policies is excessive.
Lucia et al. [15] present an adaptive reinforcement-learning-based approach for re-
source provisioning in serverless computing. Cloud service providers expect a resource-
provisioning scheme to be flexible to meet the fluctuating demands of the customers. A
request-based policy is proposed here in which the resources are scaled for the maximum
number of requests processed in parallel. The performance is strongly influenced by the
predetermined concurrency level. The performance evaluation indicates that with a lim-
ited number of iterations, the model formulates efficient scaling policies. But identifying
the concurrency level that provides the maximum QoS is difficult because of the varying
workload, complex infrastructure, and high latency.
Sangeetha et al. discuss a novel resource management framework based on deep
learning [16]. Increased multimedia traffic in the cloud leads to minimum extensibility of a
service portfolio and poor resource management. A gray-wolf-optimization-based resource
allocation strategy that mimics the hunting behavior of grey wolves is proposed here. The
deep neural network utilized here provides routing direction based on the data input rate
and storage availability. The neural network operates in two phases: data pre-processing
and routing, and controlling application. While the delay in processing the requests is
reduced by this policy, it suffers from a poor search ability and a slow convergence rate.
Saxena et al. develop an elastic resource management framework to provide cloud
services with high availability [17]. There is a very high demand for resources on the
cloud, and the failure to provide on-demand services leads to load imbalance, performance
degradation, and excessive power consumption. An online failure predictor that predicts
the possibility of the virtual machines resulting in resource starvation due to a resource
contention situation is developed. The server under operation is continuously monitored
with the aid of a power analyzer, resource monitor, and thermal analyzer. The virtual
machines that exhibit a high probability of failure are assigned to the fault tolerance unit
that can handle all outages and performance degradation. The virtual machine failure
prediction accuracy in this technique is poor, leading to poor resource management policies.
In summary, the existing works exhibit the following drawbacks:
• Inability to determine the parameter uncertainty in the user tasks and virtual machines.
• Conventional resource-provisioning approaches are static in nature, limiting their
practical application.
• Rule-based approaches are time-consuming and hard to scale, and the rate of virtual
machine violations in terms of cost and response time is very high.
• Most of the heuristic approaches exhibit a higher tendency for premature convergence
under uncertainty.
• Predictive approaches exhibit poor prediction accuracy leading to over- or under-
utilization of resources.
• The computational complexity of the soft computing approaches is high as they deal
with a large number of optimization parameters.
• The learning algorithms fail to consider the highly dynamic operating conditions of
a cloud system. As a result, they cannot handle the dynamic task scheduling and
dynamic placement of resources efficiently.

3. System Model
This section provides the details of the structure, components, and interactions among
the components of the proposed cloud resource-provisioning system. A detailed represen-
tation of its operational dynamics and a discussion of its theoretical foundations are also
presented. The Indetermsoft-set-based D* extra lite framework for edge computing systems
is composed of three functional modules: Indetermsoft set task manager, Indetermsoft set
resource manager, and D* extra lite.
The user submits a set of tasks (UTs) to the system accessibility layer, i.e., UT_1 = {ut_1, ut_2, ut_3, ..., ut_m}, UT_2 = {ut_1, ut_2, ut_3, ..., ut_m}, ..., UT_m = {ut_1, ut_2, ut_3, ..., ut_m}. The system accessibility layer places these user tasks into the task queue of the task manager. The Indetermsoft set task manager applies the Indetermsoft set function (ISF) to the user tasks. That is,

ISF(UT_1) = {isf(ut_1), isf(ut_2), isf(ut_3), ..., isf(ut_m)},
ISF(UT_2) = {isf(ut_1), isf(ut_2), isf(ut_3), ..., isf(ut_m)},
...,
ISF(UT_m) = {isf(ut_1), isf(ut_2), isf(ut_3), ..., isf(ut_m)} (1)
Similarly, the resource center is composed of several resource instances; each instance consists of a set of hosts, and a set of virtual machines is mounted on each host:

RCI_1 = {H_1(vm_1, vm_2, ..., vm_n), ..., H_n(vm_1, vm_2, ..., vm_n)},
RCI_2 = {H_1(vm_1, vm_2, ..., vm_n), ..., H_n(vm_1, vm_2, ..., vm_n)},
...,
RCI_n = {H_1(vm_1, vm_2, ..., vm_n), ..., H_n(vm_1, vm_2, ..., vm_n)} (2)

The Indetermsoft set resource manager applies the Indetermsoft set function to the resource instances:

ISF(RCI_1) = {isf(H_1(vm_1, vm_2, ..., vm_n)), ..., isf(H_n(vm_1, vm_2, ..., vm_n))},
ISF(RCI_2) = {isf(H_1(vm_1, vm_2, ..., vm_n)), ..., isf(H_n(vm_1, vm_2, ..., vm_n))},
...,
ISF(RCI_n) = {isf(H_1(vm_1, vm_2, ..., vm_n)), ..., isf(H_n(vm_1, vm_2, ..., vm_n))} (3)

The D* extra lite functional module combines the Indetermsoft set functions of the user tasks and the resources. It is an incremental heuristic search algorithm, a dynamic form of the A* algorithm, that generates the D* extra lite task resource-provisioning policies:

D*(RPP) = {D*(RPP_1), D*(RPP_2), ..., D*(RPP_n)} (4)
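For concreteness, the sketch below shows one possible in-memory representation of these sets under the notation above; the type and function names (UserTask, Host, ResourceCenterInstance, apply_isf) are illustrative assumptions rather than the paper's implementation.

from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class UserTask:
    name: str
    cpu_demand: Optional[float]   # None models an indeterminate attribute value
    mem_demand: Optional[float]

@dataclass
class Host:
    name: str
    vms: List[str]                # identifiers of the mounted virtual machines

@dataclass
class ResourceCenterInstance:
    name: str
    hosts: List[Host]

def apply_isf(task_set: List[UserTask]) -> Dict[str, dict]:
    # Toy ISF: keep the attributes of each task that are actually known and
    # flag the task as indeterminate if any attribute is missing.
    out = {}
    for t in task_set:
        attrs = {"cpu": t.cpu_demand, "mem": t.mem_demand}
        known = {k: v for k, v in attrs.items() if v is not None}
        out[t.name] = {"known": known, "indeterminate": len(known) < len(attrs)}
    return out

ut1 = [UserTask("ut1", 2.0, None), UserTask("ut2", 1.0, 512.0)]
rci1 = ResourceCenterInstance("RCI1", [Host("H1", ["vm1", "vm2"])])
print(apply_isf(ut1), rci1.name)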

The following performance objectives (POs) are set for the D* extra lite framework.
PO1: Power Consumption (PC(D* extra lite)): The power consumption of the D* extra lite framework is the summation of the power consumption of the resource instances PC(RCI_i):

PC(D* extra lite) = ∑_{i=1}^{n} PC(RCI_i) (5)

where PC(RCI_i) is determined by the summation of the power consumption of the hosts PC(H_i), i.e., PC(RCI_i) = ∑_{i=1}^{n} PC(H_i). Here, PC(H_i) is determined by considering the maximum power consumption state PC(vm_i^max), minimum power consumption state PC(vm_i^min), and idle-state power consumption PC(vm_i^idle) of the virtual machine:

PC(H_i) = [PC(vm_i^max) − PC(vm_i^min)] ∗ RU(vm_i) + PC(vm_i^idle) (5a)

PO2: Resource Utilization (RU(D* extra lite)): The resource utilization of the D* extra lite framework is computed by the summation of the resources utilized by the resource instances RU(RCI_i):

RU(D* extra lite) = ∑_{i=1}^{n} RU(RCI_i) (6)

where RU(RCI_i) is determined by the summation of the resources consumed by the hosts RU(H_i), i.e., RU(RCI_i) = ∑_{i=1}^{n} RU(H_i). Here, RU(H_i) is the measure of the utilities of the virtual machines that are used by the user tasks. It is determined by computing the over-utilized resource center instances RCI_i^over and under-utilized resource center instances RCI_i^under among the set of available resource center instances N_RCI:

RU(D* extra lite) = ∑_{i=1}^{n} (RCI_i^over − RCI_i^under) / N_RCI (6a)

PO3: Total Execution Time (TET(D* extra lite)): The total execution time of the D* extra lite framework is the time taken to assign the user tasks to virtual machines. It is the summation of the total execution times of the Indetermsoft set function of user tasks ISF(UT_i), the Indetermsoft set function of resource instances ISF(RCI_i), and the D* policy D*(RPP_i):

TET(D* extra lite) = ∑_{i=1}^{m} TET(ISF(UT_i)) + ∑_{i=1}^{n} TET(ISF(RCI_i)) + ∑_{i=1}^{n} TET(D*(RPP_i)) (7)

PO4: Learning Rate (LR(D* extra lite)): The learning rate of the D* extra lite framework is the speed at which the Indetermsoft set function of user tasks ISF(UT_i) is mapped to the Indetermsoft set function of resource instances ISF(RCI_i). It is computed by considering the total execution time for the formulation of the first resource-provisioning policy TET(D*(RPP_1)) and the number of resource-provisioning policies formulated N(D*(RPP)):

LR(D* extra lite) = TET(D*(RPP_1)) ∗ N(D*(RPP)) (8)
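A minimal Python sketch of how these four objectives could be computed from logged simulation values, following Equations (5a)–(8), is given below; the data layout and variable names are our own assumptions, not the paper's implementation.

from typing import Dict, List

def host_power(pc_max: float, pc_min: float, pc_idle: float, ru: float) -> float:
    # Eq. (5a): PC(H_i) = [PC(vm_max) - PC(vm_min)] * RU(vm_i) + PC(vm_idle)
    return (pc_max - pc_min) * ru + pc_idle

def total_power(hosts: List[Dict[str, float]]) -> float:
    # Eq. (5): total power is the sum of host power over all instances
    return sum(host_power(h["pc_max"], h["pc_min"], h["pc_idle"], h["ru"]) for h in hosts)

def resource_utilization(over: List[float], under: List[float]) -> float:
    # Eq. (6a): (over-utilized minus under-utilized instances) / N_RCI
    n_rci = len(over)
    return sum(o - u for o, u in zip(over, under)) / n_rci

def total_execution_time(tet_isf_ut, tet_isf_rci, tet_policies) -> float:
    # Eq. (7): TET of ISF(UT), ISF(RCI) and the D* policies
    return sum(tet_isf_ut) + sum(tet_isf_rci) + sum(tet_policies)

def learning_rate(tet_first_policy: float, num_policies: int) -> float:
    # Eq. (8): time to the first policy scaled by the number of policies formed
    return tet_first_policy * num_policies

hosts = [{"pc_max": 135.0, "pc_min": 93.7, "pc_idle": 93.7, "ru": 0.6}]
print(total_power(hosts), learning_rate(0.8, 5))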

4. Proposed Work
This section outlines the innovative contributions and research directions that form
the core of this research study and presents the novel ideas, methodologies, and pro-
posed solutions, emphasizing their significance and potential impact on the resource-
provisioning field.
As shown in Figure 1, the Indetermsoft-set-based D* extra lite framework is composed
of three distinct modules. The users submit requests to the system accessibility layer. The
task manager is responsible for monitoring the resources and tasks. The resource center
is composed of a set of virtual machines that are hosted on several physical machines.
The uncertainty in the incoming tasks is managed by the Indetermsoft set task manager.
Similarly, the uncertainty in the resources is managed by the Indetermsoft set resource
manager. The D* extra lite algorithm is executed on the Indetermsoft set of tasks and
resources. D* extra lite is also referred to as dynamic A*, which determines an ideal path
between the starting point and goal of an application. The obstacles that occur are handled
efficiently when they are encountered on the path towards the destination.
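A high-level view of how the three modules could be wired together is sketched below; the functions isstm, issrm, and d_star_extra_lite are placeholders standing in for Algorithms 1–3 and are not the authors' code.

def isstm(user_task_sets):
    # Placeholder for Algorithm 1: wrap each task set with its ISF output.
    return {name: {"isf": attrs, "indeterminate": None in attrs.values()}
            for name, attrs in user_task_sets.items()}

def issrm(resource_center_instances):
    # Placeholder for Algorithm 2: wrap each resource center instance with its ISF output.
    return {name: {"isf": hosts} for name, hosts in resource_center_instances.items()}

def d_star_extra_lite(isf_ut, isf_rci):
    # Placeholder for Algorithm 3: pair each task set with some instance.
    rci_names = list(isf_rci)
    return {ut: rci_names[i % len(rci_names)] for i, ut in enumerate(isf_ut)}

tasks = {"UT1": {"cpu": 2, "mem": None}, "UT2": {"cpu": 1, "mem": 512}}
rcis = {"RCI1": ["H1", "H2"], "RCI2": ["H3"]}
policies = d_star_extra_lite(isstm(tasks), issrm(rcis))
print(policies)   # e.g. {'UT1': 'RCI1', 'UT2': 'RCI2'}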

Figure 1. Indetermsoft-set-based D* extra lite framework.

4.1. Indetermsoft Set Task Manager (ISSTM)
The ISSTM component of the D* extra lite framework accepts a set of user tasks UT as inputs; UT = {UT_1, UT_2, UT_3, ..., UT_m}. It applies the Indetermsoft set function to generate the Indetermsoft set function ISF of the user task set, ISF(UT) = {ISF(UT_1), ISF(UT_2), ISF(UT_3), ..., ISF(UT_m)}. The Indetermsoft set function handles the indeterminate and conflicting parameters of the user tasks by mapping the attributes of the user tasks to the power set of the user task set. The working of the ISSTM module is depicted in Algorithm 1, which consists of a training phase and a testing phase. During the training phase, for each set of user tasks, the Indetermsoft set function is applied to generate the Indetermsoft set function of the user task set. Likewise, during the testing phase, the cumulative aggregation of the Indetermsoft set function of the user task set is computed.

Algorithm 1: Working of ISSTM
1: Start
2: Input user task set UT = {UT_1, UT_2, UT_3, ..., UT_m}
3: Output ISF of user task set ISF(UT) = {ISF(UT_1), ISF(UT_2), ISF(UT_3), ..., ISF(UT_m)}
4: Training phase of ISSTM
5: For each training user task set UT_i ∈ UT do
6:   For each training user task set attribute set UT_i^at = {ut_1^at, ut_2^at, ..., ut_m^at} in UT do
7:     Train: ∀ isf_i(ut_k^at) ∈ ut^at, initialize σ(isf_i(ut_k^at)) = NULL
8:     Calculate training ISF of user tasks:
9:     ISF(UT_i): ut_k^at → P(H(UT_i^at)), H ⊆ UT
10:  End For
11: End For
12: Testing phase of ISSTM
13: For each testing user task set UT_i ∈ UT do
14:   For each testing user task set attribute set UT_i^at = {ut_1^at, ut_2^at, ..., ut_m^at} in UT do
15:     Test: ∀ isf_i(ut_k^at) ∈ ut^at, initialize σ(isf_i(ut_k^at)) = NULL
16:     Compute aggregation of Indetermsoft set function:
        ISF(UT) ::= ISF(UT_i) ∪ (UT_i^at, isf_i(ut_k^at))
17:   End For
18: End For
19: Output ISF(UT) = {ISF(UT_1), ISF(UT_2), ISF(UT_3), ..., ISF(UT_m)}
20: Stop
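As a rough illustration of the training/testing pattern shared by Algorithms 1 and 2 (per-attribute ISF evaluation during training, followed by aggregation during testing), consider the Python sketch below; the function and variable names are assumptions for this example only.

def train_isf(task_sets, isf):
    # Training phase: evaluate the Indetermsoft set function for each
    # attribute of every user task set and remember the result.
    model = {}
    for name, attributes in task_sets.items():
        model[name] = {attr: isf(attr, value) for attr, value in attributes.items()}
    return model

def test_isf(model, task_sets, isf):
    # Testing phase: aggregate the per-set ISF outputs by union, mirroring
    # ISF(UT) ::= ISF(UT_i) ∪ (UT_i^at, isf_i(ut_k^at)).
    aggregate = {}
    for name, attributes in task_sets.items():
        per_set = dict(model.get(name, {}))
        for attr, value in attributes.items():
            per_set[attr] = isf(attr, value)
        aggregate[name] = per_set
    return aggregate

# Example ISF: mark attributes whose values are unknown as indeterminate.
isf = lambda attr, value: {"value": value, "indeterminate": value is None}
ut = {"UT1": {"cpu": 2, "mem": None}, "UT2": {"cpu": 1, "mem": 512}}
model = train_isf(ut, isf)
print(test_isf(model, ut, isf))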

4.2. Indetermsoft Set Resource Manager (ISSRM)


The ISSRM component of the D* extra lite framework receives the set of resource center
instances RCI as inputs, RCI = {RCI_1, RCI_2, RCI_3, ..., RCI_n}. It applies the Indetermsoft
set function to generate the Indetermsoft set function ISF of the resource center instance
set, ISF(RCI) = {ISF(RCI_1), ISF(RCI_2), ISF(RCI_3), ..., ISF(RCI_n)}. The Indetermsoft
set function handles the indeterminate, conflicting parameters of the resource center in-
stances by mapping the attributes of resource center instances to the power set of the
resource center instance set. The working of the ISSRM module is shown in Algorithm
2, which consists of training and testing phases. During the training phase, for every
set of resource center instances, the Indetermsoft set function is applied to generate the
Indetermsoft set function of resource center instances. Likewise, during the testing phase,
the cumulative aggregation of the Indetermsoft set function of the resource center instance
set is computed.

Algorithm 2: Working of ISSRM
1: Start
2: Input resource center instance set RCI = {RCI_1, RCI_2, RCI_3, ..., RCI_n}
3: Output ISF of resource center instance set ISF(RCI) = {ISF(RCI_1), ISF(RCI_2), ISF(RCI_3), ..., ISF(RCI_n)}
4: Training phase of ISSRM
5: For each training resource center instance RCI_i ∈ RCI do
6:   For each training resource center instance attribute set RCI_i^at = {H_i(vm_1^at), ..., H_i(vm_i^at)} in RCI do
7:     Train: ∀ isf_i(H_i(vm_k^at)) ∈ H_i(vm)^at, initialize σ(isf_i(H_i(vm_k^at))) = NULL
8:     Calculate training ISF of resource center instances:
9:     ISF(RCI_i): H_i(vm_k^at) → P(H(H_i(vm_k^at))), H ⊆ RCI
10:  End For
11: End For
12: Testing phase of ISSRM
13: For each testing resource center instance RCI_i ∈ RCI do
14:   For each testing resource center instance attribute set RCI_i^at = {H_i(vm_1^at), ..., H_i(vm_i^at)} in RCI do
15:     Test: ∀ isf_i(H_i(vm_k^at)) ∈ H_i(vm)^at, initialize σ(isf_i(H_i(vm_k^at))) = NULL
16:     Compute aggregation of Indetermsoft set function:
        ISF(RCI) ::= ISF(RCI_i) ∪ (RCI_i^at, isf_i(H_i(vm_k^at)))
17:   End For
18: End For
19: Output ISF(RCI) = {ISF(RCI_1), ISF(RCI_2), ISF(RCI_3), ..., ISF(RCI_n)}
20: Stop
19 : Output ISF ( RCI ) = { ISF ( RCI 1 ), ISF ( RCI 2 ), ISF ( RCI 3 ), ..., ISF ( RCI n )}
20: Stop

4.3. D* Extra Lite (D*EL)


The D*EL component of the D* extra lite framework accepts the Indetermsoft set
function of the user task set ISF(UT) = {ISF(UT_1), ISF(UT_2), ..., ISF(UT_m)} and the
Indetermsoft set function of resource center instances to generate the D* extra lite resource-
provisioning policies D*(RPP) = {D*(RPP_1), ..., D*(RPP_n)}. The working of the D*EL
component is shown in Algorithm 3, which is composed of training and testing phases.
The user tasks and resource center instances are represented in terms of an acyclic tree
composed of nodes and edges. During training, D*EL uses an incremental heuristic search
technique for the robust navigation of user tasks to ideal resource center instances. During
testing, for the incoming user tasks, an ideal path is determined by using a priority queue
data structure. The time incurred in training and testing is less as the user tasks are aligned
in a priority queue which prevents frequent reordering and fast re-planning during user
task navigation. The efficiency lies in re-expanding the parts of the search space that are
registered for changes. The CALCULATE KEY function uses the key value for assigning
priority to the list elements, and this value is determined through the sum of heuristic
values. The REINITIALIZE function performs fast re-initialization of the affected search
space using a search tree. Quick re-computation of the optimal path is performed by
keeping track of visited and unvisited states of the resource center instances using the CUT
BRANCHES function.

Algorithm 3: Working of D*EL
1: Start
2: Input
   ISF(UT) = {ISF(UT_1), ISF(UT_2), ..., ISF(UT_m)}
   ISF(RCI) = {ISF(RCI_1), ISF(RCI_2), ISF(RCI_3), ..., ISF(RCI_n)}
3: Output D*(RPP) = {D*(RPP_1), D*(RPP_2), ..., D*(RPP_n)}
4: Function CALCULATE KEY (State S)
5:   Return [g(s) + h(S_start, S) + K_m; g(s)], where h is the heuristic value,
     g(s) is the cost from the goal state, and K_m is the bias value
6: Function SOLUTION FOUND ()
7:   Return TOP OPEN = S_start or Visited(S_start) AND NOT OPEN(S_start)
8: Function INITIALIZE ()
9:   K_m = 0, Visited(S_goal) = True, Parent(S_goal) = NULL, g(S_goal) = 0
10:  PUSH OPEN (S_goal, CALCULATE KEY(S_goal))
11: Function SEARCH STEP ()
12:  s = TOP OPEN ()
13:  POP OPEN ()
14:  k_old = key(s)
15:  k_new = CALCULATE KEY(s)
16:  if k_old < k_new then
17:    PUSH OPEN (s, CALCULATE KEY(s))
18:  else
19:    for all s' ∈ Pred(s) do
20:      if NOT VISITED(s') OR g(s') > cost(s', s) + g(s) then
21:        Parent(s') = s
22:        g(s') = cost(s', s) + g(s)
23:        if NOT VISITED(s') then
24:          VISITED(s') = true
25:          PUSH OPEN (s', CALCULATE KEY(s'))
26:    End for
27: Function REINITIALIZE ()
28:  if any edge cost changed then
29:    CUT BRANCHES ()
30:    if seeds ≠ ∅ then
31:      K_m = K_m + h(S_last, S_start)
32:      S_last = S_start
33:      for all s ∈ seeds do
34:        if visited(s) AND NOT OPEN(s) then
35:          PUSH OPEN (s, CALCULATE KEY(s))
36:      seeds = ∅
37: Function CUT BRANCHES ()
38:  reopen_start = false
39:  For all directed edges (u, v) with changed cost do
40:    if visited(u) AND visited(v) then
41:      c_old = cost(u, v)
42:      Update edge cost(u, v)
43:      if c_old > cost(u, v) then
44:        if g(start) > g(v) + cost(u, v) + h(S_start, u) then
45:          reopen_start = true
46:        seeds = seeds ∪ v
47:      else if c_old < cost(u, v) then
48:        if parent(u) = v then
49:          CUT BRANCHES (u)
50:  if reopen_start = true AND visited(S_start) then
51:    seeds = seeds ∪ S_start
52:  End for
53: Function CUT BRANCHES (s)
54:  Visited(s) = false
55:  Parent(s) = NULL
56:  REMOVE OPEN (s)
57:  for all s' ∈ Succ(s) do
58:    if visited(s') AND NOT parent(s') = s then
59:      seeds = seeds ∪ s'
60:  End for
61:  for all s' ∈ Pred(s) do
62:    if visited(s') AND parent(s') = s then
63:      CUT BRANCHES (s')
64:  End for
65: Output D*(RPP) = {D*(RPP_1), D*(RPP_2), ..., D*(RPP_n)}
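The Python sketch below illustrates the flavor of the incremental search loop in Algorithm 3: a keyed priority queue and goal-to-start expansion. It is a simplified illustration written for this text, not the authors' implementation, and it omits the re-initialization and branch-cutting machinery (REINITIALIZE, CUT BRANCHES).

import heapq

def heuristic(a, b):
    # Assumed heuristic: Manhattan distance on grid-like state ids (illustrative).
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def backward_search(start, goal, neighbors, cost):
    # Expand from the goal toward the start, as D*-style planners do,
    # keeping the OPEN list ordered by key = g(s) + h(start, s).
    g = {goal: 0.0}
    parent = {goal: None}
    open_heap = [(heuristic(start, goal), goal)]      # (key, state)
    while open_heap:
        _, s = heapq.heappop(open_heap)
        if s == start:                                # SOLUTION FOUND
            break
        for s2 in neighbors(s):                       # predecessors of s
            new_g = g[s] + cost(s2, s)
            if new_g < g.get(s2, float("inf")):
                g[s2] = new_g
                parent[s2] = s
                heapq.heappush(open_heap, (new_g + heuristic(start, s2), s2))
    if start not in parent:
        return None                                   # start never reached
    path, s = [], start                               # follow parents to the goal
    while s is not None:
        path.append(s)
        s = parent[s]
    return path

def nbrs(p):
    # 3x3 grid with 4-connected moves and unit edge costs.
    x, y = p
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 3 and 0 <= y + dy < 3]

print(backward_search((0, 0), (2, 2), nbrs, lambda a, b: 1.0))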


5. Mathematical Modeling
This section presents the mathematical structures, equations, and algorithms that
are central to this resource-provisioning study, providing a rigorous basis for our the-
oretical and empirical investigations. The performance of the D* extra lite framework
is analyzed through mathematical modeling. For modeling purposes, a limited cloud
setup that is composed of a predefined set of user tasks and resource center instances is
considered. The performance objectives considered for evaluation purposes are power
consumption (PC(D* extra lite)), resource utilization (RU(D* extra lite)), total execution
time (TET(D* extra lite)), and learning rate (LR(D* extra lite)). The expected value analysis
is performed to determine the performance metrics for the future time FT. Finally, the
performance of the proposed D* extra lite framework is compared with three existing
works: EW1-TP [13], EW2-AC [14], and EW3-GC [16].
PO1: Power Consumption (PC(D* extra lite)): The power consumption of D* extra lite is mainly influenced by the power consumed by the virtual machines during the maximum power consumption state PC(vm_i^max) and the minimum power consumption state PC(vm_i^min). The PC(D* extra lite) is low as D* extra lite performs smooth matching of tasks and resource instances by finding safer paths, whereas the power consumption of EW1 is comparatively higher as it performs static classification of tasks as computation-intensive, memory-intensive, and storage-intensive by considering the present traffic scenario and ignoring the history. The power consumption of EW2 and EW3 is higher than that of EW1 due to loose function approximation, poor scalability, and inappropriate selection of resources.

E[PC(D* extra lite) / D*(TSP), FT] = ∫_x^y (∑_{a∈π} PC(D* extra lite)(a)) / |D*(TSP)|
E[PC(D* extra lite) / D*(TSP), FT] = ∑_{d∈D} ∫_x^y d (∑_{a∈π} PC(D* extra lite)(a)) / |D*(TSP)|
E[PC(D* extra lite) / D*(TSP), FT] = E[1 ∗ π ∗ ∑_{i=1}^{n} PC(RCI_i)] / P(D*(TSP))
  = ∑_{d∈D} ∫_q^Q d [(PC(vm_i^max) − PC(vm_i^min)) ∗ RU(vm_i) + PC(vm_i^idle)]
  = ∫_q^Q [(PC(vm_i^max) − PC(vm_i^min)) ∗ RU(vm_i) + PC(vm_i^idle)] / D*(TSP) dP

PC(D* extra lite): E[PC(D* extra lite) / D*(TSP), FT] ≈ Low
PC(EW1): E[PC(EW1) / D*(TSP), FT] ≈ Medium
PC(EW2): E[PC(EW2) / D*(TSP), FT] ≈ High
PC(EW3): E[PC(EW3) / D*(TSP), FT] ≈ High

PO2: Resource Utilization (RU(D* lite)): The resource utilization of D* extra lite is mainly influenced by the over-utilization of resources of resource center instance RCI_i^over and the under-utilization of resources of resource center instance RCI_i^under. The RU(D* lite) is high as it combines an incremental search and a heuristic search strategy for task mapping. The resource utilization of EW1 is low due to improper formulation of the workflow model, which leads to more resource wastage. The resource utilization of EW2 is moderate as the model is not capable of dealing with a large-state-space environment and takes a longer time to converge to Nash equilibrium. The RU(EW3) is low due to convergence to suboptimal solutions and poor task-routing capability.

E[RU(D* extra lite) / D*(TSP), FT] = ∫_x^y (∑_{a∈π} RU(D* extra lite)(a)) / |D*(TSP)|
E[RU(D* extra lite) / D*(TSP), FT] = ∑_{d∈D} ∫_x^y d (∑_{a∈π} RU(D* extra lite)(a)) / |D*(TSP)|
E[RU(D* extra lite) / D*(TSP), FT] = E[1 ∗ π ∗ ∑_{i=1}^{n} RU(RCI_i)] / P(D*(TSP))
  = ∑_{d∈D} ∫_q^Q d [∑_{i=1}^{n} (RCI_i^over − RCI_i^under) / N_RCI]
  = ∫_q^Q [∑_{i=1}^{n} (RCI_i^over − RCI_i^under) / N_RCI] / D*(TSP) dP

RU(D* extra lite): E[RU(D* extra lite) / D*(TSP), FT] ≈ High
RU(EW1): E[RU(EW1) / D*(TSP), FT] ≈ Low
RU(EW2): E[RU(EW2) / D*(TSP), FT] ≈ Medium
RU(EW3): E[RU(EW3) / D*(TSP), FT] ≈ Low

PO3: Total Execution Time (TET(D* lite)): The total execution time of D* extra lite is mainly influenced by the total execution times of the Indetermsoft set function of user tasks ISF(UT_i), the Indetermsoft set function of resource instances ISF(RCI_i), and the D* policy D*(Tsp_i). The TET(D* lite) is low as it is capable of handling dynamic obstacles in cloud scenarios using an incremental search strategy. The TET(EW1) is moderate due to inappropriate mapping of resources. The TET(EW2) is very high because of poor workflow classification and softmax parametrization taking an infinite period of time for convergence. The TET(EW3) is low due to bad local search capability and a poor convergence rate.

E[TET(D* extra lite) / D*(TSP), FT] = ∫_x^y (∑_{a∈π} TET(D* extra lite)(a)) / |D*(TSP)|
E[TET(D* extra lite) / D*(TSP), FT] = ∑_{d∈D} ∫_x^y d (∑_{a∈π} TET(D* extra lite)(a)) / |D*(TSP)|
  = E[1 ∗ π ∗ (∑_{i=1}^{m} TET(ISF(UT_i)) + ∑_{i=1}^{n} TET(ISF(RCI_i)) + ∑_{i=1}^{n} TET(D*(Tsp_i)))] / P(D*(TSP))
  = ∑_{d∈D} ∫_q^Q d [∑_{i=1}^{m} TET(ISF(UT_i)) + ∑_{i=1}^{n} TET(ISF(RCI_i)) + ∑_{i=1}^{n} TET(D*(Tsp_i))]
  = ∫_q^Q [∑_{i=1}^{m} TET(ISF(UT_i)) + ∑_{i=1}^{n} TET(ISF(RCI_i)) + ∑_{i=1}^{n} TET(D*(Tsp_i))] / D*(TSP) dP

TET(D* extra lite): E[TET(D* extra lite) / D*(TSP), FT] ≈ Low
TET(EW1): E[TET(EW1) / D*(TSP), FT] ≈ Medium
TET(EW2): E[TET(EW2) / D*(TSP), FT] ≈ High
TET(EW3): E[TET(EW3) / D*(TSP), FT] ≈ Low

PO4: Learning Rate (LR(D* extra lite)): The learning rate of D* extra lite is mainly influenced by the total execution time for the formulation of the first task-scheduling policy TET(D*(TSP_1)) and the number of task-scheduling policies formulated D*(TSP). The LR(D* extra lite) is very high due to high planning efficiency, and it provides a fast, reliable path from user tasks to resource center instances. The LR(EW1) is medium as it makes poor task-scheduling decisions considering limited information about user tasks. The LR(EW2) is low as it does not consider the computation-sensitive and storage-sensitive features of the tasks. The LR(EW3) is low due to low learning accuracy and converging to a low-optimality solution, as it is easily affected by external factors.

E[LR(D* extra lite) / D*(TSP), FT] = ∫_x^y (∑_{a∈π} LR(D* extra lite)(a)) / |D*(TSP)|
E[LR(D* extra lite) / D*(TSP), FT] = ∑_{d∈D} ∫_x^y d (∑_{a∈π} LR(D* extra lite)(a)) / |D*(TSP)|
E[LR(D* extra lite) / D*(TSP), FT] = E[1 ∗ π ∗ TET(D*(TSP_1)) ∗ N(D*(TSP))] / P(D*(TSP))
  = ∑_{d∈D} ∫_q^Q d [TET(D*(TSP_1)) ∗ N(D*(TSP))]
  = ∫_q^Q [TET(D*(TSP_1)) ∗ N(D*(TSP))] / D*(TSP) dP

LR(D* extra lite): E[LR(D* extra lite) / D*(TSP), FT] ≈ High
LR(EW1): E[LR(EW1) / D*(TSP), FT] ≈ Medium
LR(EW2): E[LR(EW2) / D*(TSP), FT] ≈ Low
LR(EW3): E[LR(EW3) / D*(TSP), FT] ≈ Low
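One simple way to approximate such expected values numerically is to average a metric over the policies applied during a window of future time steps, as sketched below; this Monte Carlo style estimate is our own illustration of the idea, not the authors' procedure, and the policy/metric data are made up.

import random

def expected_metric(policies, metric, future_steps=100, seed=7):
    # Estimate E[metric / D*(TSP), FT] by sampling which policy is applied
    # at each future step and averaging the metric values observed.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(future_steps):
        policy = rng.choice(policies)        # policy applied at this step
        total += metric(policy)
    return total / future_steps

# Toy policies with a precomputed power figure (Watts) attached.
policies = [{"name": "RPP1", "power": 110.0}, {"name": "RPP2", "power": 96.5}]
print(expected_metric(policies, lambda p: p["power"]))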

6. Results and Discussion


This section presents our findings and interprets their significance in the context of
resource-provisioning studies. As the key outcome of our research, a summary of the data
and insights that have emerged from the analyses is presented. Also, a stage is set for a
comprehensive comparison of our work with three recent existing works.

6.1. Experimental Setup


The CloudSim 3.0.3 framework is used for the modeling and simulation of the pro-
posed Indetermsoft-set-based D* extra lite framework [18,19]. The experimental setup for
execution is as follows: The resource center consists of three systems: Intel 5150 CPU
containing 40 core processors with 3.0 GHz clock speed, Xeon 4150 CPU containing
45 core processors with 4.0 GHz clock speed, and Silver 6150 CPU containing 50 core
processors with 2.0 GHz clock speed. Four host machine configurations are used: Host Machine 1 = {PE = 2 (small), MIPS = 2660, RAM = 4 GB, Storage = 160 GB, PC(vm_i^max) = 135, PC(vm_i^min)/PC(vm_i^idle) = 93.7}, Host Machine 2 = {PE = 4 (medium), MIPS = 3067, RAM = 8 GB, Storage = 250 GB, PC(vm_i^max) = 113, PC(vm_i^min)/PC(vm_i^idle) = 42.3}, Host Machine 3 = {PE = 12 (large), MIPS = 3067, RAM = 16 GB, Storage = 500 GB, PC(vm_i^max) = 222, PC(vm_i^min)/PC(vm_i^idle) = 58.4}, and Host Machine 4 = {PE = 24 (Extra-large), MIPS = 4067, RAM = 18 GB, Storage = 600 GB, PC(vm_i^max) = 322, PC(vm_i^min)/PC(vm_i^idle) = 68.4}.
Four virtual machine configurations are used: Virtual Machine 1 = {Host Machine 1, VM
type = small, PE = 1, MIPS = 500, RAM (GB) = 0.5, Storage (GB) = 40}, Virtual Machine 2 =
{Host Machine 2, VM type = medium, PE = 2, MIPS = 1000, RAM (GB) = 1.0, Storage (GB) =
60}, Virtual Machine 3 = {Host Machine 3, VM type = large, PE = 3, MIPS = 1500, RAM (GB)
= 2.0, Storage (GB) = 80}, and Virtual Machine 4 = {Host Machine 4, VM type = Extra-large,
PE = 4, MIPS = 2000, RAM (GB) = 3.0, Storage (GB) = 100}. Two real-time datasets are used
for implementation: the Google Cluster dataset and the Bitbrains dataset [20].
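For reference, the host and virtual machine configurations above can be captured as plain data structures, as in the sketch below; this is our own illustrative encoding (it does not use the CloudSim API), and the field names are assumptions.

# Host machine configurations from the experimental setup (PE count, MIPS,
# RAM in GB, storage in GB, and the max / min-idle power figures in Watts).
HOSTS = [
    {"id": 1, "size": "small",       "pe": 2,  "mips": 2660, "ram_gb": 4,  "storage_gb": 160, "pc_max": 135, "pc_min_idle": 93.7},
    {"id": 2, "size": "medium",      "pe": 4,  "mips": 3067, "ram_gb": 8,  "storage_gb": 250, "pc_max": 113, "pc_min_idle": 42.3},
    {"id": 3, "size": "large",       "pe": 12, "mips": 3067, "ram_gb": 16, "storage_gb": 500, "pc_max": 222, "pc_min_idle": 58.4},
    {"id": 4, "size": "extra-large", "pe": 24, "mips": 4067, "ram_gb": 18, "storage_gb": 600, "pc_max": 322, "pc_min_idle": 68.4},
]

# Virtual machine configurations, each pinned to its host machine type.
VMS = [
    {"host": 1, "type": "small",       "pe": 1, "mips": 500,  "ram_gb": 0.5, "storage_gb": 40},
    {"host": 2, "type": "medium",      "pe": 2, "mips": 1000, "ram_gb": 1.0, "storage_gb": 60},
    {"host": 3, "type": "large",       "pe": 3, "mips": 1500, "ram_gb": 2.0, "storage_gb": 80},
    {"host": 4, "type": "extra-large", "pe": 4, "mips": 2000, "ram_gb": 3.0, "storage_gb": 100},
]

if __name__ == "__main__":
    # Quick sanity check: total processing elements across all hosts.
    print(sum(h["pe"] for h in HOSTS))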

6.2. Google Cluster Dataset
The Google Cluster dataset is composed of physical resources such as memory, CPU, and disk storage. The user task set is composed of 672,300 tasks that are executed over 12,500 host machines for a period of 30 days. The experiments are conducted with varying sizes of resource center instances considering 100 (Tiny), 300 (Small), 700 (Large), 900 (Extra Large), and 1000 (Huge) host machines. The ratio of virtual machines to host machines is 2:1. As per the user task set resource demands, the virtual machines are allocated every 5 min randomly. The user task set is created with a virtual machine ratio of 60 percent. The number of virtual machines varies over time, and each user task set can keep 0 to 20 virtual machines. The user task set exhibits varying resource requirements. There will be user task set demand of workload burst conditions in terms of massive failure of virtual machines, and at peak service hours, situations of overloads and resource contention will occur. Each experiment is conducted for a time period of 100 min to analyze the performance of the proposed D* extra lite framework dynamically with respect to the performance objectives mentioned.
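The workload conditions described above could be emulated with a simple generator along the following lines; the sampling choices (uniform VM counts per task set, a fixed burst probability) are illustrative assumptions and are not specified by the dataset.

import random

def generate_task_sets(num_sets=100, max_vms=20, burst_prob=0.1, seed=42):
    # Create synthetic user task sets: each set requests a varying number of
    # VMs, and a small fraction of sets simulate burst / overload conditions.
    rng = random.Random(seed)
    task_sets = []
    for i in range(num_sets):
        burst = rng.random() < burst_prob
        n_vms = rng.randint(0, max_vms)
        task_sets.append({
            "id": f"UT{i}",
            "vms_requested": n_vms * (2 if burst else 1),  # bursts double demand
            "burst": burst,
        })
    return task_sets

sets_ = generate_task_sets()
print(sum(s["burst"] for s in sets_), "burst sets out of", len(sets_))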
PO1: Power Consumption (PC(D* extra lite))
A graph of varying resource center instances versus power consumption is shown in Figure 2. It is observed from the graph that the power consumption of D* extra lite is minimal over varying sizes of resource center instances. The framework is focused and performs smooth matching between tasks and resource instances by finding safer paths. The power consumption of EW1-TP is very high as it performs static classification of tasks as computation-intensive, memory-intensive, and storage-intensive by considering the present traffic scenario and ignoring the history. The power consumptions of EW2-AC and EW3-GW are moderate due to loose function approximation, poor scalability, and inappropriate selection of resources. The workflow model considered in EW2-AC and EW3-GW has limited information about the task parameters, which makes it not suitable for real-world situations.

Figure 2. Resource center instances versus power consumption (W).

PO2: Resource Utilization (RU(D* lite))
A graph of varying sizes of user task sets versus resource utilization is shown in Figure 3. The resource utilization of D* extra lite is optimal over all sizes of user tasks as it combines incremental and heuristic search strategies for task mapping. The resource utilization of EW1-TP is poor due to improper formulation of the workflow model, which does not suit the real-time scenario of cloud systems. The structural features extracted from the tasks are poor, which leads to resource wastage. The resource utilization of EW2-AC and EW3-GW is moderate due to convergence to suboptimal solutions and poor task-routing capability. They are not capable of dealing with a large-state-space environment, and the resource allocation models take a longer time to converge to Nash equilibrium.

Figure 3. User task set versus resource utilization.

PO3: Total Execution Time (TET(D* lite))
A graph of varying sizes of user task sets versus total execution time (ms) is shown in Figure 4. The total execution time of D* extra lite is considerably less for all sizes of user task sets due to effective task scheduling despite dynamic obstacles in cloud scenarios. The total execution time of EW1-TP and EW2-AC is very high due to the poor classification of workflow and inappropriate mapping of resources, which cause the execution time to spike. The softmax parametrization also takes an infinite period of time for convergence to promising solutions. The total execution time of EW3-GW is moderate due to poor local search capability and a poor convergence rate. Also, the resource management model provides minimal extensibility of a service portfolio.

Figure 4. User task set versus total execution time (ms).

PO4: Learning Rate (LR(D* lite))
A graph of varying sizes of user task sets versus learning rate is shown in Figure 5. The learning rate of D* extra lite is very high due to high planning efficiency, and it provides a fast, reliable path from user tasks to resource center instances. The learning rate of EW1-TP is very low as it makes poor task-scheduling decisions considering limited information on user tasks. It does not consider the computation-sensitive and storage-sensitive features of the tasks. The learning rates of EW2-AC and EW3-GW are moderate due to low learning accuracy and convergence to low-optimality solutions. They also fail to capture the complexity of real-world situations, and the outcomes of the approach are easily affected by external factors.
Figure 5. User task set versus learning rate.
6.3. Bitbrains Dataset
6.3. Bitbrains Dataset
The Bitbrains dataset is composed of performance metrics related to rapid data storage of 2830 virtual machines over distributed resource center instances for a period of one month. It mainly constitutes information related to the percentage of CPU usage, memory consumed (kilobytes), memory sanctioned (kilobytes), network capacity, throughput, etc. The ratio of virtual machines to host machines is 4:2, and, per the user task set resource demand, the virtual machines are allocated randomly every minute. The user task sets are created with a virtual machine ratio of 40 percent. The number of virtual machines varies over time, and each user task set can keep 0 to 10 virtual machines. The experiments are conducted for different resource center instances for 24 hours over a period of 60 days. Resource center instances are composed of 200 (Tiny), 400 (Small), 600 (Large), 800 (Extra-large), and 1000 (Huge) host machines. The user task sets exhibit varying resource requirements and often lead to burst conditions at peak service hours. Each experiment is executed for a time period of 600 min to analyze the performance of the proposed D* extra lite framework dynamically with respect to the performance objectives mentioned.
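To make the setup above easier to follow, the sketch below restates the reported parameters as a small Python configuration. It is an illustration only: the names (RESOURCE_CENTER_SIZES, make_user_task_set, etc.) are hypothetical, and the random per-minute placement is a simplification of the behaviour described in the text, not the authors' CloudSim setup.

```python
import random

# Illustrative parameters mirroring the Bitbrains experiments described above.
# All names are hypothetical; this is not the authors' simulator configuration.
RESOURCE_CENTER_SIZES = {            # host machines per resource center instance
    "Tiny": 200, "Small": 400, "Large": 600, "Extra-large": 800, "Huge": 1000,
}
VM_TO_HOST_RATIO = 4 / 2             # 4:2 ratio of virtual machines to host machines
EXPERIMENT_MINUTES = 600             # each experiment runs for 600 simulated minutes
OBSERVATION_DAYS = 60                # experiments repeated over a 60-day window

def make_user_task_set(task_set_id: int) -> dict:
    """One user task set holding 0 to 10 virtual machines, as stated in the text."""
    return {"id": task_set_id, "vms": random.randint(0, 10)}

def allocate_for_one_minute(task_sets: list, hosts: int) -> dict:
    """Randomly map each task set's VMs onto hosts for a single simulated minute."""
    return {ts["id"]: [random.randrange(hosts) for _ in range(ts["vms"])]
            for ts in task_sets}

if __name__ == "__main__":
    task_sets = [make_user_task_set(i) for i in range(50)]
    for name, hosts in RESOURCE_CENTER_SIZES.items():
        placements = allocate_for_one_minute(task_sets, hosts)
        capacity = int(hosts * VM_TO_HOST_RATIO)
        print(f"{name}: {hosts} hosts, ~{capacity} VM slots, "
              f"{sum(len(p) for p in placements.values())} VM placements this minute")
```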
PO1: Power Consumption (PC(D* lite))
A graph of varying sizes of resource center instances versus power consumption is shown in Figure 6. It is observed from the graph that the power consumption of D* extra lite is considerably less over varying sizes of resource center instances, from small to huge. It easily navigates the tasks to the resource center instances in a highly dynamic environment by using a priority queue to minimize the effect of reordering. The power consumption of EW3-GW is moderate because of poor exploration capability when exposed to a large state space; its convergence speed is never satisfactory due to insufficient handling of task diversities. The power consumption of EW1-TP and EW2-AC is very high since they take a long time (potentially infinite) to converge to Nash equilibrium under linear function approximation. Also, their tendency towards the selection of inappropriate solutions is high, and their ability to handle enormous computational resources for larger task sets is poor.

Figure 6. Resource center instances versus power consumption (W).
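The priority-queue mechanism credited here for keeping reordering cheap is characteristic of D* Lite-style planners in general rather than specific to this framework. The sketch below shows the usual lexicographic key and a lazy-deletion open list; g, rhs, the heuristic h, and the key modifier km follow the standard D* Lite conventions and are assumed here purely for illustration.

```python
import heapq

def calculate_key(state, g, rhs, h, start, km):
    """Standard D* Lite priority: a lexicographic (k1, k2) pair."""
    best = min(g.get(state, float("inf")), rhs.get(state, float("inf")))
    return (best + h(start, state) + km, best)

class OpenList:
    """Minimal open list with lazy deletion, so key changes do not force a rebuild."""

    def __init__(self):
        self._heap = []
        self._current_key = {}        # state -> latest key pushed for that state

    def push(self, state, key):
        self._current_key[state] = key
        heapq.heappush(self._heap, (key, state))

    def pop(self):
        while self._heap:
            key, state = heapq.heappop(self._heap)
            if self._current_key.get(state) == key:   # skip stale (reordered) entries
                del self._current_key[state]
                return state, key
        return None, None

# Minimal usage: a zero heuristic keeps the example self-contained.
open_list = OpenList()
g, rhs, km = {}, {"goal": 0.0}, 0.0
h = lambda a, b: 0.0
open_list.push("goal", calculate_key("goal", g, rhs, h, "start", km))
print(open_list.pop())                # -> ('goal', (0.0, 0.0))
```

Lazy deletion is one common way to limit the cost of key reordering in a dynamic environment: stale queue entries are simply skipped at pop time instead of being relocated whenever their priorities change.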
PO2: Resource Utilization (RU(D* lite))
A graph of varying sizes of user task sets versus resource utilization is shown in Figure 7. The resource utilization rate of D* extra lite is very high for varying sizes of user task sets. It simplifies the maintenance of user task set priority by efficiently analyzing the program workflow. The resource utilization of EW3-GW is average as the algorithm converges to a suboptimal solution due to an imbalance in grey wolf behavior. The resource utilization of EW1-TP and EW2-AC is very low as the task-scheduling policy gradient quality is low due to a mismatch between the critic's value function and the actor's policy. Also, the softmax parametrization of the user task set leads to an infinite time required for convergence to a promising solution. They are also unable to handle a spike in user requests, which causes resource contention given the multi-tenant nature of a serverless platform.

Figure 7. User task set versus resource utilization.
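The precise definition of RU(D* lite) is given earlier in the paper; the snippet below only illustrates one common reading of such a metric, namely consumed capacity divided by sanctioned capacity averaged over hosts, using hypothetical field names (used, allocated).

```python
def resource_utilization(samples):
    """Average ratio of consumed to sanctioned capacity over all host samples.

    `samples` is a list of dicts with hypothetical keys 'used' and 'allocated';
    hosts with no sanctioned capacity are skipped.
    """
    ratios = [s["used"] / s["allocated"] for s in samples if s["allocated"] > 0]
    return sum(ratios) / len(ratios) if ratios else 0.0

# Example: three hosts reporting consumed vs. sanctioned memory (kilobytes).
print(resource_utilization([
    {"used": 3_200_000, "allocated": 4_000_000},
    {"used": 1_500_000, "allocated": 2_000_000},
    {"used": 0, "allocated": 1_000_000},
]))  # -> 0.516..., i.e. roughly 52% average utilization
```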
PO3: Total Execution Time (TET(D* lite))
A graph of varying sizes of resource center instances versus total execution time is shown in Figure 8. The total execution time of D* extra lite is low for all sizes of user tasks as it is able to find the optimal path between the starting point (user task set) and ending point (resource instances) in a dynamic cloud environment. The total execution time of EW3-GW is moderate as it generates an approximate solution whose correctness of operation is not guaranteed; service management and operating system deployment are improper due to poor approximation towards a promising solution. The total execution time of EW1-TP and EW2-AC is very high due to their limited capability to perform global search operations and their susceptibility to convergence towards local suboptimal solutions. Also, their task-scheduling decisions suffer from severe scalability issues and inappropriate convergence towards promising solutions.

Figure 8. Resource center instances versus total execution time (ms).
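Total execution time is reported in milliseconds per resource center instance; a common, makespan-style reading of that figure is sketched below under the assumption (not stated by the authors) that it spans from the first task submission to the last task completion in a run.

```python
def total_execution_time_ms(task_records):
    """Makespan-style total execution time: last finish minus earliest start, in ms.

    `task_records` is a list of (start_ms, finish_ms) tuples for one experiment run;
    this is an assumed reading of the metric, used only for illustration.
    """
    if not task_records:
        return 0
    earliest = min(start for start, _ in task_records)
    latest = max(finish for _, finish in task_records)
    return latest - earliest

print(total_execution_time_ms([(0, 120), (40, 300), (90, 260)]))  # -> 300
```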

PO4: Learning Rate (LR(D* lite))
A graph of varying sizes of resource center instances versus learning rate is shown in Figure 9. The learning rate of D* extra lite is very high for all varying sizes of user task sets since it efficiently handles uncertainty in the environment by precisely taking actions using the knowledge gained from previous searches. The learning rate of EW2-AC is very low due to neural network function approximation. Even the asymptotic behavior of the differential game approach decreases its learning rate towards promising solutions. The learning rates of EW1-TP and EW3-GW are moderate due to an imbalanced relationship between exploration and exploitation. They also fail to handle larger user requests, and the task data cannot be handled effectively with increasing user demands.

Figure 9. Resource center instances versus learning rate.
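The "knowledge gained from previous searches" mentioned above is the defining trait of incremental planners: cost estimates computed in one replanning episode persist, and only states touched by a change are re-evaluated. The fragment below illustrates that reuse pattern in stripped-down form; IncrementalPlanner, g_values, and recompute are hypothetical names, and a real planner would of course also update rhs values, reorder queue keys, and re-expand affected neighbours.

```python
class IncrementalPlanner:
    """Skeleton showing how cost estimates can persist across replanning calls."""

    def __init__(self):
        self.g_values = {}                 # cost-to-goal estimates kept between searches

    def recompute(self, changed_states):
        """Invalidate only the states whose incident edge costs changed."""
        for state in changed_states:
            # Everything not listed here is reused as-is on the next search.
            self.g_values.pop(state, None)

planner = IncrementalPlanner()
planner.g_values.update({"s1": 4.0, "s2": 7.5, "s3": 2.0})
planner.recompute(changed_states=["s2"])   # only s2 is invalidated
print(planner.g_values)                    # {'s1': 4.0, 's3': 2.0}
```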
7. Conclusions
This paper presented a novel Indetermsoft-set-based D* Extra Lite framework for resource provisioning in the cloud. The experimental evaluation was carried out using the CloudSim simulator, with the Google Cluster dataset and Bitbrains dataset. The results obtained were found to outperform three of the existing works with respect to the performance objectives of power consumption, resource utilization, total execution time, and learning rate. However, the proposed framework also suffered from limitations in terms of limited handling of highly dynamic obstacles, extensive memory usage, and increased computational overhead. As future work, complete analytical modeling and exhaustive testing of the framework are planned to address higher-end performance objectives such as fault tolerance, correctness, confidentiality, reliability, and transparency.

Author Contributions: Conceptualization, methodology, validation, and formal analysis: B.K. and
S.G.S.; writing—original draft preparation: B.K.; review and editing: S.G.S. All authors have read
and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data are contained within the article.
Conflicts of Interest: The authors declare no conflicts of interest.


Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
