Article
Indetermsoft-Set-Based D* Extra Lite Framework for Resource
Provisioning in Cloud Computing
Bhargavi Krishnamurthy 1, * and Sajjan G. Shiva 2, *
Abstract: Cloud computing is an immensely complex, huge-scale, and highly diverse computing plat-
form that allows the deployment of highly resource-constrained scientific and personal applications.
Resource provisioning in cloud computing is difficult because of the uncertainty associated with it in
terms of dynamic elasticity, rapid performance change, large-scale virtualization, loosely coupled
applications, the elastic escalation of user demands, etc. Hence, there is a need to develop an intelli-
gent framework that allows effective resource provisioning under uncertainties. The Indetermsoft
set is a promising mathematical model, an extension of the traditional soft set, that is designed
to handle uncertain forms of data. The D* extra lite algorithm is a dynamic heuristic algorithm that
makes use of the history of knowledge from past search experience to arrive at decisions. In this
paper, the D* extra lite algorithm is enabled with the Indetermsoft set to perform proficient resource
provisioning under uncertainty. The experimental results show that the performance of the proposed algorithm is promising with respect to metrics such as power consumption, resource
utilization, total execution time, and learning rate. The expected value analysis also validated the
experimental results obtained.
Keywords: resource provisioning; uncertainty; D* extra lite; cloud computing; Indetermsoft set
2. Related Work
This section provides a comprehensive overview of the existing literature and de-
velopments pertinent to resource provisioning in cloud computing. Through this review
of previous research, methodologies, and technological advancements, we establish the
foundation and scope of the proposed work.
Tuli et al. [11] present an artificial-intelligence-based holistic approach named
HUNTER for resource management in cloud computing. They view optimizing energy
emission in cloud data centers as a multiple-objective-based scheduling problem. A large
amount of energy is consumed by cloud data centers, and hence, there is a need to optimize
energy emissions. One of the key factors to consider while optimizing energy consumption
is reducing the number of thermal hotspots which lead to the degradation of system
performance. Three important models are considered for resource management: cooling,
thermal, and energy. A gated graph convolutional network is viewed as a surrogate model
for optimizing Quality of Service (QoS) to perform optimal task scheduling. The continuous
training of the model helps in quick adaption to dynamic scenarios by providing the
permission to change the scheduling decisions through task migration. Power performance
is used as a heuristic to efficiently balance the load among the cloud hosts. The performance
of HUNTER is evaluated using the CloudSim simulation toolkit and is found to be good with
respect to energy conservation. However, the approach exhibits poor scalability as it cannot
scale up to large-scale graphs with higher node degrees.
Chouliaras and Sotiriadis propose an auto-scaling framework for resource provisioning in a
cloud computing environment [12]. The over- and under-provisioning of resources results
in a loss of revenue for the cloud brokers whose primary function is to select, manage, and
provide the resources in a complex heterogeneous environment. Since the client resource
requirements are uncertain, it becomes difficult for the cloud broker to predict and process
the client resource demands. Here, the resource-provisioning problem is divided into two
stages: resource selection and resource management. The resource selection problem deals
with the process of selecting the services that meet the requirements of multiple cloud
service providers. The resource management problem deals with the effective maintenance
of cloud resources in terms of resource utilization and overhead maintenance. Both selection
and management of resources have been considered as decision-making problems that
focus on matching the resource requests with services provided. The cloud users make use
of virtualized resources in order to benefit from long-term pricing strategies. For providing
cost-effective solutions, a precise estimation of the upcoming workload is required. An adaptive auto-scaling framework for resource provisioning is proposed here that uses historical time-series data to train a K-means-enabled CNN model to categorize future workload demands as low, medium, or high according to their CPU utilization rate. From
the performance evaluation, it is observed that the solution deployment cost is minimal.
However, the framework exhibits higher sensitivity towards initial parameter setting and
an inability to handle categorical data in large-state-space environments.
Kumar et al. present an efficient algorithm for resource provisioning for the efficient
execution of workflows in cloud computing platforms [13]. The workflow considered here
is composed of tasks exhibiting varying resource requirements in terms of memory storage,
memory type, and computation speed. Improper mapping of the workflows to resources
leads to wastage of resources and increased makespan time. Here, the workflow is divided
into three categories: compute-intensive, memory-intensive, and storage-intensive. The
proposed algorithm operates in two phases to provision resources precisely by distinguish-
ing the tasks as computation-intensive and non-computation-intensive. The workflow
model is composed of limited information about the task contained in it, which makes it
applicable to real-time scenarios. The Amazon EC2 cloud model is considered for offer-
ing on-demand computational resources for applications using the presented algorithm.
However, the approach is found to be static and applies a standard set of operations to pro-
cess computation-intensive and non-computation-intensive tasks. This limits the practical
applicability of the approach.
Mao et al. discuss a game approach based on mean-field theory for resource man-
agement in cloud computing [14]. Resource management is viewed as one of the promi-
nent problems in serverless cloud computing environments, where multiple users compete for resources and scalability issues typically arise. Here, an actor–critic learning algo-
rithm is developed to effectively deal with large-state-space environments. The algorithm
is implemented by considering linear and neural network approximations. The mean-
field approach is compatible with several forms of function approximations. Theoretically,
convergence to Nash equilibrium is achieved under linear and softmax approximations.
The performance of this approach is better in terms of resource utilization, but the time to
converge towards better resource-provisioning policies is excessive.
Schuler et al. [15] present an adaptive reinforcement-learning-based approach for re-
source provisioning in serverless computing. Cloud service providers expect a resource-
provisioning scheme to be flexible to meet the fluctuating demands of the customers. A
request-based policy is proposed here in which the resources are scaled for the maximum
number of requests processed in parallel. The performance is strongly influenced by the
predetermined concurrency level. The performance evaluation indicates that with a lim-
ited number of iterations, the model formulates efficient scaling policies. But identifying
the concurrency level that provides the maximum QoS is difficult because of the varying
workload, complex infrastructure, and high latency.
Sangeetha et al. discuss a novel resource management framework based on deep
learning [16]. Increased multimedia traffic in the cloud limits the extensibility of the service portfolio and leads to poor resource management. A grey-wolf-optimization-based resource allocation strategy that mimics the hunting behavior of grey wolves is proposed here. The
deep neural network utilized here provides routing direction based on the data input rate
and storage availability. The neural network operates in two phases: data pre-processing
and routing, and controlling application. While the delay in processing the requests is
reduced by this policy, it suffers from a poor search ability and a slow convergence rate.
Saxena et al. develop an elastic resource management framework to provide cloud
services with high availability [17]. There is a very high demand for resources on the
cloud, and the failure to provide on-demand services leads to load imbalance, performance
degradation, and excessive power consumption. An online failure predictor is developed that predicts the possibility of virtual machines facing resource starvation due to resource contention. The server under operation is continuously monitored
with the aid of a power analyzer, resource monitor, and thermal analyzer. The virtual
machines that exhibit a high probability of failure are assigned to the fault tolerance unit
that can handle all outages and performance degradation. The virtual machine failure
prediction accuracy in this technique is poor, leading to poor resource management policies.
In summary, the existing works exhibit the following drawbacks:
• Inability to determine the parameter uncertainty in the user tasks and virtual machines.
• Conventional resource-provisioning approaches are static in nature, limiting their
practical application.
• Rule-based approaches are time-consuming and hard to scale, and the rate of virtual
machine violations in terms of cost and response time is very high.
• Most of the heuristic approaches exhibit a higher tendency for premature convergence
under uncertainty.
• Predictive approaches exhibit poor prediction accuracy leading to over- or under-
utilization of resources.
• The computational complexity of the soft computing approaches is high as they deal
with a large number of optimization parameters.
• The learning algorithms fail to consider the highly dynamic operating conditions of
a cloud system. As a result, they cannot handle the dynamic task scheduling and
dynamic placement of resources efficiently.
3. System Model
This section provides the details of the structure, components, and interactions among
the components of the proposed cloud resource-provisioning system. A detailed represen-
tation of its operational dynamics and a discussion of its theoretical foundations are also
presented. The Indetermsoft-set-based D* extra lite framework for cloud computing systems
is composed of three functional modules: Indetermsoft set task manager, Indetermsoft set
resource manager, and D* extra lite.
The user submits a set of tasks (UTs) to the system accessibility layer, i.e., UT_1 = {ut_1, ut_2, ut_3, ..., ut_m}, UT_2 = {ut_1, ut_2, ut_3, ..., ut_m}, ..., UT_m = {ut_1, ut_2, ut_3, ..., ut_m}. The system accessibility layer places these user tasks into the task queue of the task manager. The Indetermsoft set task manager applies the Indetermsoft set function (ISF) to the user tasks. That is,

ISF(UT_1) = {isf(ut_1), isf(ut_2), isf(ut_3), ..., isf(ut_m)},
ISF(UT_2) = {isf(ut_1), isf(ut_2), isf(ut_3), ..., isf(ut_m)},
...,
ISF(UT_m) = {isf(ut_1), isf(ut_2), isf(ut_3), ..., isf(ut_m)}   (1)
Similarly, the resource center is composed of several resource instances; each instance consists of a set of hosts, and a set of virtual machines is mounted on each host:

ISF(RCI_2) = {isf(H_1(vm_1, vm_2, ..., vm_n)), ..., isf(H_n(vm_1, vm_2, ..., vm_n))},
...,
ISF(RCI_n) = {isf(H_1(vm_1, vm_2, ..., vm_n)), ..., isf(H_n(vm_1, vm_2, ..., vm_n))}   (3)
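To make the role of the Indetermsoft set function (ISF) more concrete, the sketch below shows one possible in-code representation of an indeterminacy-aware attribute mapping. It is a minimal illustration under our own simplifying assumptions (three-valued memberships, hypothetical names such as IndetermSoftSet and cpu_demand); it is not the implementation behind Equations (1)–(3).

```python
# Illustrative sketch only: a simplified stand-in for an Indetermsoft-set-style mapping,
# showing how uncertain task/VM attributes could be represented. Names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, Optional

INDETERMINATE = None  # membership that is unknown or unclear for a given attribute value

@dataclass
class IndetermSoftSet:
    """Maps an attribute value (e.g., 'cpu_demand=high') to element memberships.

    Memberships are True (belongs), False (does not belong), or INDETERMINATE,
    which is the essential relaxation over a classical soft set."""
    universe: set
    mapping: Dict[str, Dict[str, Optional[bool]]] = field(default_factory=dict)

    def assign(self, attribute_value: str, memberships: Dict[str, Optional[bool]]) -> None:
        # Only elements of the universe may appear in the approximation.
        assert set(memberships) <= self.universe
        self.mapping[attribute_value] = memberships

    def certain_members(self, attribute_value: str) -> set:
        return {e for e, m in self.mapping.get(attribute_value, {}).items() if m is True}

    def indeterminate_members(self, attribute_value: str) -> set:
        return {e for e, m in self.mapping.get(attribute_value, {}).items() if m is INDETERMINATE}

# Example: ISF applied to a small user task set, with an uncertain CPU demand for ut_2.
tasks = {"ut_1", "ut_2", "ut_3"}
isf_tasks = IndetermSoftSet(universe=tasks)
isf_tasks.assign("cpu_demand=high", {"ut_1": True, "ut_2": INDETERMINATE, "ut_3": False})
print(isf_tasks.certain_members("cpu_demand=high"))        # {'ut_1'}
print(isf_tasks.indeterminate_members("cpu_demand=high"))  # {'ut_2'}
```

The same structure can wrap the resource center instances, with host and virtual machine attributes taking the place of task attributes.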
The D* extra lite functional module combines the Indetermsoft set functions of the user tasks and the resource center instances. It is an incremental heuristic search algorithm, a dynamic form of the A* algorithm, that generates the D* extra lite task resource-provisioning policies. The following performance objectives (POs) are set for the D* extra lite framework.
PO1: Power Consumption (PC(D* extra lite)): The power consumption of the D* extra lite framework is the summation of the power consumption of the resource instances PC(RCI_i):

PC(D* extra lite) = ∑_{i=1}^{n} PC(RCI_i)   (5)

where PC(RCI_i) is determined by the summation of the power consumption of the hosts PC(H_i), i.e., PC(RCI_i) = ∑_{i=1}^{n} PC(H_i). Here, PC(H_i) is determined by considering the maximum power consumption state PC(vm_i^max), the minimum power consumption state PC(vm_i^min), and the idle state power consumption PC(vm_i^idle) of the virtual machine:

PC(H_i) = [PC(vm_i^max) − PC(vm_i^min)] ∗ RU(vm_i) + PC(vm_i^idle)   (5a)
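As a worked illustration of Equations (5) and (5a), the sketch below computes the framework-level power consumption, assuming the per-VM terms of Equation (5a) are summed over the virtual machines mounted on each host; the variable names and example wattages are ours, not values taken from the paper.

```python
# A minimal sketch of Equations (5) and (5a) under a simple linear per-VM power model.
from typing import List

def vm_power(p_max: float, p_min: float, p_idle: float, utilization: float) -> float:
    """Per-VM contribution of Equation (5a): (PC(vm_max) - PC(vm_min)) * RU(vm) + PC(vm_idle)."""
    return (p_max - p_min) * utilization + p_idle

def host_power(vms: List[dict]) -> float:
    # PC(H_i): sum the contributions of the VMs mounted on the host (our assumption).
    return sum(vm_power(v["p_max"], v["p_min"], v["p_idle"], v["ru"]) for v in vms)

def framework_power(resource_instances: List[List[List[dict]]]) -> float:
    """Equation (5): PC(D* extra lite) = sum over resource instances of sum over hosts."""
    return sum(host_power(host) for instance in resource_instances for host in instance)

# Example: one resource center instance with two hosts.
rci = [[[{"p_max": 250.0, "p_min": 120.0, "p_idle": 70.0, "ru": 0.5},
         {"p_max": 200.0, "p_min": 100.0, "p_idle": 60.0, "ru": 0.25}],
        [{"p_max": 180.0, "p_min": 90.0, "p_idle": 50.0, "ru": 0.75}]]]
print(framework_power(rci))  # 337.5
```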
PO2: Resource Utilization (RU(D* extra lite)): The resource utilization of the D* extra lite framework is computed by the summation of the resources utilized by the resource instances RU(RCI_i):

RU(D* extra lite) = ∑_{i=1}^{n} RU(RCI_i)   (6)

RU(D* extra lite) = ∑_{i=1}^{n} (RCI_i^over − RCI_i^under) / N_RCI   (6a)
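A corresponding sketch for Equation (6a) is shown below. It assumes that the over- and under-utilization terms are expressed as fractions of instance capacity, which is our reading of the equation rather than a definition given in the paper.

```python
# A small sketch of Equation (6a): RU = sum_i (RCI_i_over - RCI_i_under) / N_RCI.
from typing import List, Tuple

def resource_utilization(instances: List[Tuple[float, float]]) -> float:
    """Each tuple holds (over_utilized_fraction, under_utilized_fraction) for one
    resource center instance; the result is normalized by the number of instances."""
    n_rci = len(instances)
    if n_rci == 0:
        return 0.0
    return sum(over - under for over, under in instances) / n_rci

# Example with three resource center instances.
print(resource_utilization([(0.5, 0.25), (0.75, 0.5), (0.25, 0.25)]))  # 0.16666666666666666
```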
PO3: Total Execution Time (TET(D* extra lite)): The total execution time of the D* extra lite framework is the time taken to assign the user tasks to virtual machines. It is the summation of the total execution times of the Indetermsoft set function of user tasks ISF(UT_i), the Indetermsoft set function of resource instances ISF(RCI_i), and the D* policy D*(RPP_i).
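For completeness, a toy sketch of how PO3 could be measured is given below: the total execution time is taken as the sum of the three stage timings named above, with placeholder stage bodies standing in for the actual ISF and D* computations.

```python
# Toy measurement of PO3: TET as the sum of the three stages named in the text.
# The stage functions are placeholders; only the timing composition is the point.
import time

def timed(stage) -> float:
    start = time.perf_counter()
    stage()
    return time.perf_counter() - start

def isf_user_tasks():          # placeholder: apply ISF to the user task sets
    time.sleep(0.01)

def isf_resource_instances():  # placeholder: apply ISF to the resource center instances
    time.sleep(0.01)

def dstar_policy():            # placeholder: D* extra lite policy generation
    time.sleep(0.02)

tet = sum(timed(s) for s in (isf_user_tasks, isf_resource_instances, dstar_policy))
print(f"TET(D* extra lite) ~ {tet * 1000:.1f} ms")
```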
4. Proposed Work
This section outlines the innovative contributions and research directions that form
the core of this research study and presents the novel ideas, methodologies, and pro-
posed solutions, emphasizing their significance and potential impact on the resource-
provisioning field.
As shown in Figure 1, the Indetermsoft-set-based D* extra lite framework is composed
of three distinct modules. The users submit requests to the system accessibility layer. The
task manager is responsible for monitoring the resources and tasks. The resource center
is composed of a set of virtual machines that are hosted on several physical machines.
The uncertainty in the incoming tasks is managed by the Indetermsoft set task manager.
Similarly, the uncertainty in the resources is managed by the Indetermsoft set resource
manager. The D* extra lite algorithm is executed on the Indetermsoft set of tasks and
resources. D* extra lite is also referred to as dynamic A*, which determines an ideal path
between the starting point and goal of an application. The obstacles that occur are handled
efficiently when they are encountered on the path towards the destination.
A priority is assigned to the list elements, and this value is determined through the sum of heuristic values. The REINITIALIZE function performs fast re-initialization of the affected search space using a search tree. Quick re-computation of the optimal path is performed by keeping track of the visited and unvisited states of the resource center instances using the CUT BRANCHES function.
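The sketch below conveys the flavor of this incremental behavior in a compact form. It is not the authors' D* extra lite implementation: the OPEN-list key here is simply the sum of path cost and heuristic, the cache is kept per start–goal pair, and the REINITIALIZE / CUT BRANCHES steps are approximated by dropping only the cached plans that an edge change can affect. All names (IncrementalPlanner, update_edge, the toy task/host/VM graph) are illustrative.

```python
# Simplified sketch of incremental replanning; real D* extra lite reuses the search
# tree at a much finer granularity than this start-goal plan cache.
import heapq
from typing import Dict, List, Tuple

Graph = Dict[str, Dict[str, float]]  # node -> {neighbor: edge cost}

def a_star(graph: Graph, start: str, goal: str, h: Dict[str, float]) -> Tuple[float, List[str]]:
    """Best-first search; OPEN-list priority is the sum g(n) + h(n)."""
    open_list = [(h.get(start, 0.0), 0.0, start, [start])]
    best_g: Dict[str, float] = {}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue  # an equal or better expansion already exists: cut this branch
        best_g[node] = g
        for nbr, cost in graph.get(node, {}).items():
            heapq.heappush(open_list, (g + cost + h.get(nbr, 0.0), g + cost, nbr, path + [nbr]))
    return float("inf"), []

class IncrementalPlanner:
    def __init__(self, graph: Graph, h: Dict[str, float]):
        self.graph, self.h = graph, h
        self.cached: Dict[Tuple[str, str], Tuple[float, List[str]]] = {}

    def plan(self, start: str, goal: str) -> Tuple[float, List[str]]:
        if (start, goal) not in self.cached:
            self.cached[(start, goal)] = a_star(self.graph, start, goal, self.h)
        return self.cached[(start, goal)]

    def update_edge(self, u: str, v: str, cost: float) -> None:
        old = self.graph.setdefault(u, {}).get(v, float("inf"))
        self.graph[u][v] = cost
        # Coarse REINITIALIZE: discard only plans the change can affect (the edge lies
        # on a cached path, or the edge became cheaper and may open a better path).
        affected = [k for k, (_, path) in self.cached.items()
                    if cost < old or any(a == u and b == v for a, b in zip(path, path[1:]))]
        for k in affected:
            del self.cached[k]

# Toy use: a task routed to a VM through an intermediate host layer.
g: Graph = {"task": {"h1": 1.0, "h2": 2.5}, "h1": {"vm": 2.0}, "h2": {"vm": 1.0}}
planner = IncrementalPlanner(g, h={"task": 0.0, "h1": 1.0, "h2": 1.0, "vm": 0.0})
print(planner.plan("task", "vm"))      # (3.0, ['task', 'h1', 'vm'])
planner.update_edge("h1", "vm", 5.0)   # an "obstacle" appears on the current path
print(planner.plan("task", "vm"))      # (3.5, ['task', 'h2', 'vm'])
```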
5. Mathematical Modeling
This section presents the mathematical structures, equations, and algorithms that
are central to this resource-provisioning study, providing a rigorous basis for our the-
oretical and empirical investigations. The performance of the D* extra lite framework
is analyzed through mathematical modeling. For modeling purposes, a limited cloud
setup that is composed of a predefined set of user tasks and resource center instances is
considered. The performance objectives considered for evaluation purposes are power consumption (PC(D* extra lite)), resource utilization (RU(D* extra lite)), total execution time (TET(D* extra lite)), and learning rate (LR(D* extra lite)). The expected value analysis is performed to determine the performance metrics for the future time FT. Finally, the performance of the proposed D* extra lite framework is compared with three existing works: EW1-TP [13], EW2-AC [14], and EW3-GW [16].
PO1: Power Consumption (PC(D* extra lite)): The power consumption of D* extra lite is mainly influenced by the power consumed by the virtual machines during the maximum power consumption state PC(vm_i^max) and the minimum power consumption state PC(vm_i^min).
The PC(D* extra lite) is low as D* extra lite performs smooth matching of tasks and
resource instances by finding safer paths, whereas the power consumption of EW1 is
comparatively higher as it performs static classification of tasks as computation-intensive,
memory-intensive, and storage-intensive by considering the present traffic scenario and
ignoring the history. The power consumption of EW2 and EW3 is higher than that of
EW1 due to loose function approximation, poor scalability, and inappropriate selection
of resources.
E[PC(D* extra lite) / D*(TSP), FT] = ∫_x^y (∑_{a∈π} PC(D* extra lite)(a)) / |D*(TSP)|

E[PC(D* extra lite) / D*(TSP), FT] = ∑_{d∈D} ∫_x^y (∑_{a∈π} PC(D* extra lite)(a)) / |D*(TSP)|

E[PC(D* extra lite) / D*(TSP), FT] = (1 / P(D*(TSP)_i)) ∗ π ∗ ∑_{i=1}^{n} PC(RCI_i)

= ∑_{d∈D} ∫_q^Q ([PC(vm_i^max) − PC(vm_i^min)] ∗ RU(vm_i) + PC(vm_i^idle))_i

= ∫_q^Q ([PC(vm_i^max) − PC(vm_i^min)] ∗ RU(vm_i) + PC(vm_i^idle))_i / D ∗ dP*(TSP)
PC(D* extra lite): E[PC(D* extra lite) / D*(TSP), FT] ≈ Low
PC(EW1): E[PC(EW1) / D*(TSP), FT] ≈ Medium
PC(EW2): E[PC(EW2) / D*(TSP), FT] ≈ High
PC(EW3): E[PC(EW3) / D*(TSP), FT] ≈ High
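An illustrative Monte Carlo reading of this expected-value analysis is sketched below: candidate provisioning decisions are sampled over a future horizon FT and the power consumption per decision is averaged. The sampling model (uniform host utilizations, fixed wattages) is our assumption and is used only to show how E[PC / D*(TSP), FT] could be estimated numerically.

```python
# Monte Carlo estimate of the expected power consumption per sampled policy.
import random

def host_power(p_max: float, p_min: float, p_idle: float, ru: float) -> float:
    return (p_max - p_min) * ru + p_idle          # Equation (5a)

def sampled_policy_power(n_hosts: int, rng: random.Random) -> float:
    # One sampled provisioning decision: each host receives a random utilization in [0, 1].
    return sum(host_power(250.0, 120.0, 70.0, rng.random()) for _ in range(n_hosts))

def expected_power(n_policies: int, n_hosts: int, horizon: int, seed: int = 7) -> float:
    rng = random.Random(seed)
    total = sum(sampled_policy_power(n_hosts, rng)
                for _ in range(n_policies) for _t in range(horizon))
    return total / (n_policies * horizon)

print(f"E[PC | sampled policies, FT] ~ {expected_power(200, 10, 5):.1f} W")
```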
PO2: Resource Utilization (RU(D* lite)): The resource utilization of D* extra lite is mainly influenced by the over-utilization of resources of resource center instance RCI_i^over and the under-utilization of resources of resource center instance RCI_i^under. The RU(D* lite) is high as it combines an incremental search and a heuristic search strategy for task mapping. The resource utilization of EW1 is low due to improper formulation of the workflow model, which leads to more resource wastage. The resource utilization of EW2 is moderate as the model is not capable of dealing with a large-state-space environment and takes a
longer time to converge to Nash equilibrium. The RU(EW3) is low due to convergence to
suboptimal solutions and poor task-routing capability.
PO3: Total Execution Time (TET(D* lite)): The total execution time of D* extra lite is mainly influenced by the total execution time of the Indetermsoft set function of user tasks ISF(UT_i), the Indetermsoft set function of resource instances ISF(RCI_i), and the D* policy D*(TSP_i). The TET(D* lite) is low as it is capable of handling dynamic obstacles in
cloud scenarios using an incremental search strategy. The TET(EW1) is moderate due to
inappropriate mapping of resources. The TET(EW2) is very high because of poor workflow
classification and softmax parametrization taking an infinite period of time for convergence.
The TET(EW3) is moderate due to bad local search capability and a poor convergence rate.
PO4: Learning Rate (LR(D* extra lite)): The LR(D* extra lite) is very high due to high planning efficiency, and it provides a fast, reliable path from user tasks to resource center instances. The LR(EW1) is medium as it
makes poor task-scheduling decisions considering limited information about user tasks.
The LR(EW2) is low as it does not consider the computation-sensitive and storage-sensitive
features of the tasks. The LR(EW3) is low due to low learning accuracy and converging to a
low-optimality solution as it is easily affected by external factors.
Figure 2. Resource center instances versus power consumption (W).
PO2: Resource Utilization (RU(D* lite))

A graph of varying sizes of user task sets versus resource utilization is shown in Figure 3. The resource utilization of D* extra lite is optimal over all sizes of user tasks as it combines incremental and heuristic search strategies for task mapping. The resource utilization of EW1-TP is poor due to improper formulation of the workflow model, which does not suit the real-time scenario of cloud systems. The structural features extracted from the tasks are poor, which leads to resource wastage. The resource utilization of EW2-AC and EW3-GW is moderate due to convergence to suboptimal solutions and poor task-routing capability. They are not capable of dealing with a large-state-space environment, and the resource allocation models take a longer time to converge to Nash equilibrium.

Figure 3. User task set versus resource utilization.

PO3: Total Execution Time (TET(D* lite))

A graph of varying sizes of user task sets versus total execution time (ms) is shown in Figure 4. The total execution time of D* extra lite is considerably less for all sizes of user task sets due to effective task scheduling despite dynamic obstacles in cloud scenarios. The total execution time of EW1-TP and EW2-AC is very high due to the poor classification of workflows and inappropriate mapping of resources, which cause the execution time to spike. The softmax parametrization also takes an infinite period of time for convergence to promising solutions. The total execution time of EW3-GW is moderate due to poor local search capability and a poor convergence rate. Also, the resource management model provides minimal extensibility of a service portfolio.

Figure 4. User task set versus total execution time (ms).

PO4: Learning Rate (LR(D* lite))

A graph of varying sizes of user task sets versus learning rate is shown in Figure 5. The learning rate of D* extra lite is very high due to high planning efficiency, and it provides a fast, reliable path from user tasks to resource center instances. The learning rate of EW1-TP is very low as it makes poor task-scheduling decisions considering limited information on user tasks. It does not consider the computation-sensitive and storage-sensitive features of the tasks. The learning rates of EW2-AC and EW3-GW are moderate due to low learning accuracy and convergence to low-optimality solutions. They also fail to capture the complexity of real-world situations, and the outcomes of the approach are easily affected by external factors.
and the ability to handle enormous computational resources for larger task sets is poor.

Figure 8. Resource center instances versus total execution time (ms).

PO4: Learning Rate (LR(D* lite))

A graph of varying sizes of resource center instances versus learning rate is shown in Figure 9. The learning rate of D* extra lite is very high for all varying sizes of user task sets since it efficiently handles uncertainty in the environment by precisely taking actions using the knowledge gained from previous searches. The learning rate of EW2-AC is very low due to neural network function approximation. Even the asymptotic behavior of the differential game approach decreases its learning rate towards promising solutions. The learning rates of EW1-TP and EW3-GW are moderate due to an imbalanced relationship between exploration and exploitation. They also fail to handle larger user requests, and the task data cannot be handled effectively with increasing user demands.

Figure 9. Resource center instances versus learning rate.
7. Conclusions

This paper presented a novel Indetermsoft-set-based D* Extra Lite framework for resource provisioning in the cloud. The experimental evaluation was carried out using the CloudSim simulator, with the Google Cluster dataset and Bitbrains dataset. The results obtained were found to outperform three of the existing works with respect to the performance objectives of power consumption, resource utilization, total execution time, and learning rate. However, the proposed framework also suffered from limitations in terms of limited handling of highly dynamic obstacles, extensive memory usage, and increased computational overhead. As future work, complete analytical modeling and exhaustive testing of the framework are planned to address higher-end performance objectives such as fault tolerance, correctness, confidentiality, reliability, and transparency.
Author Contributions: Conceptualization, methodology, validation, and formal analysis: B.K. and
S.G.S.; writing—original draft preparation: B.K.; review and editing: S.G.S. All authors have read
and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data are contained within the article.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Abid, A.; Manzoor, M.F.; Farooq, M.S.; Farooq, U.; Hussain, M. Challenges and Issues of Resource Allocation Techniques in Cloud
Computing. KSII Trans. Internet Inf. Syst. 2020, 14, 2815–2839.
2. Laha, S.R.; Parhi, M.; Pattnaik, S.; Pattanayak, B.K.; Patnaik, S. Issues, Challenges and Techniques for Resource Provisioning
in Computing Environment. In Proceedings of the 2020 2nd International Conference on Applied Machine Learning (ICAML),
Changsha, China, 16–18 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 157–161.
3. Maenhaut, P.J.; Volckaert, B.; Ongenae, V.; De Turck, F. Resource management in a containerized cloud: Status and challenges. J.
Netw. Syst. Manag. 2020, 28, 197–246. [CrossRef]
4. Kabir, H.D.; Khosravi, A.; Mondal, S.K.; Rahman, M.; Nahavandi, S.; Buyya, R. Uncertainty-aware decisions in cloud computing:
Foundations and future directions. ACM Comput. Surv. (CSUR) 2021, 54, 1–30. [CrossRef]
5. Li, B.; Tan, Z.; Arreola-Risa, A.; Huang, Y. On the improvement of uncertain cloud service capacity. Int. J. Prod. Econ. 2023,
258, 108779. [CrossRef]
6. Smarandache, F. Introduction to the IndetermSoft Set and IndetermHyperSoft Set; Infinite Study; University of New Mexico: Albu-
querque, NM, USA, 2022; Volume 1.
7. Smarandache, F. New Types of Soft Sets: HyperSoft Set, IndetermSoft Set, IndetermHyperSoft Set, and TreeSoft Set; Infinite Study;
American Scientific Publishing Group: Gretna, LA, USA, 2023.
8. Smarandache, F.; Abdel-Basset, M.; Broumi, S. (Eds.) Neutrosophic Systems with Applications (NSWA); Infinite Study; Sciences Force
LLC: Clifton, NJ, USA, 2023; Volume 3.
9. Xie, K.; Qiang, J.; Yang, H. Research and optimization of D-Star Lite algorithm in track planning. IEEE Access 2020, 8, 161920–161928.
[CrossRef]
10. Ren, Z.; Rathinam, S.; Likhachev, M.; Choset, H. Multi-objective path-based D* lite. IEEE Robot. Autom. Lett. 2022, 7, 3318–3325.
[CrossRef]
11. Tuli, S.; Gill, S.S.; Xu, M.; Garraghan, P.; Bahsoon, R.; Dustdar, S.; Sakellariou, R.; Rana, O.; Buyya, R.; Casale, G.; et al. HUNTER:
AI based holistic resource management for sustainable cloud computing. J. Syst. Softw. 2022, 184, 111124. [CrossRef]
12. Chouliaras, S.; Sotiriadis, S. An adaptive auto-scaling framework for cloud resource provisioning. Future Gener. Comput. Syst.
2023, 148, 173–183. [CrossRef]
13. Kumar, M.S.; Choudhary, A.; Gupta, I.; Jana, P.K. An efficient resource provisioning algorithm for workflow execution in cloud
platform. Clust. Comput. 2022, 25, 4233–4255. [CrossRef]
14. Mao, W.; Qiu, H.; Wang, C.; Franke, H.; Kalbarczyk, Z.; Iyer, R.; Basar, T. A mean-field game approach to cloud resource
management with function approximation. Adv. Neural Inf. Process. Syst. 2022, 35, 36243–36258.
15. Schuler, L.; Jamil, S.; Kühl, N. AI-based resource allocation: Reinforcement learning for adaptive auto-scaling in serverless
environments. In Proceedings of the 2021 IEEE/ACM 21st International Symposium on Cluster, Cloud and Internet Computing
(CCGrid), Melbourne, Australia, 10–13 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 804–811.
16. Sangeetha, S.B.; Sabitha, R.; Dhiyanesh, B.; Kiruthiga, G.; Yuvaraj, N.; Raja, R.A. Resource management framework using deep
neural networks in multi-cloud environment. In Operationalizing Multi-Cloud Environments: Technologies, Tools and Use Cases;
Springer: Cham, Switzerland, 2022; pp. 89–104.
17. Saxena, D.; Gupta, I.; Singh, A.K.; Lee, C.N. A fault tolerant elastic resource management framework toward high availability of
cloud services. IEEE Trans. Netw. Serv. Manag. 2022, 19, 3048–3061. [CrossRef]
18. Habaebi, M.H.; Merrad, Y.; Islam, M.R.; Elsheikh, E.A.; Sliman, F.M.; Mesri, M. Extending CloudSim to simulate sensor networks.
Simulation 2023, 99, 3–22. [CrossRef]
19. Barbierato, E.; Gribaudo, M.; Iacono, M.; Jakobik, A. Exploiting CloudSim in a multiformalism modeling approach for cloud
based systems. Simul. Model. Pract. Theory 2019, 93, 133–147. [CrossRef]
20. Shen, S.; van Beek, V.; Iosup, A. Statistical characterization of business-critical workloads hosted in cloud datacenters. In
Proceedings of the International Symposium on Cluster, Cloud and Grid Computing, Shenzhen, China, 4–7 May 2015; IEEE:
Piscataway, NJ, USA, 2015; pp. 465–474.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.