MrLBA - Multi-Resource Load Balancing Algorithm For Cloud Computing Using Ant Colony Optimization
https://doi.org/10.1007/s10586-021-03322-3
Abstract
Cloud computing is a new paradigm of computing. This paradigm delivers services over the internet and eliminates the requirement for local data storage. Instead of purchasing hardware and software, cloud computing enables users to consume storage or applications as a service. Scheduling is the process of allocating the available resources in a cloud environment. Scientific workflows consist of a large number of tasks. Workflow scheduling is a critical issue in cloud computing that aims to complete workflow execution while considering different parameters such as execution time, user deadlines, execution cost, and Quality of Service (QoS). In this article, we present a Multi-resource Load Balancing Algorithm (MrLBA) for the cloud computing environment. The algorithm is based on Ant Colony Optimization (ACO). The proposed algorithm targets makespan and cost while keeping the system well load-balanced. The algorithm is validated with experimental results on benchmark workflows. The results show that MrLBA reduces both execution time and cost and efficiently utilizes the available resources by maintaining a balanced load among them.
better performance. A load-balancing algorithm for IaaS that analyzes incoming tasks to find a suitable VM for maximum utilization of cloud infrastructure resources is proposed in [22]. A queuing model is adopted for mapping tasks to VMs. Tasks are scheduled to VMs to reduce response time and task completion time. Experimental results show that the algorithm performed better than existing load balancing algorithms in terms of waiting time, makespan, and resource utilization. Authors in [23] proposed a hybrid multi-objective task scheduling algorithm for cloud computing environments. The approach uses a hybrid discrete artificial bee colony algorithm. The problem is modeled as hybrid flow shop scheduling by considering single and multiple objectives. The objectives include completion time, device workload, and cluster workload. To verify the performance of the proposed algorithm, experiments on benchmark datasets are presented.

The technique proposed in [24] takes advantage of both online and offline load balancing strategies to develop a more efficient load balancing algorithm. Bacteria Foraging Optimization (BFO) theory is used in the offline phase, whereas in online mode load balancing according to users' needs is used to complete tasks in minimum time. A hybrid GA-PSO algorithm for load balancing and reducing makespan is presented in [25] that defines a specific number of iterations. The algorithm starts by initializing a random population and applying GA on half of the defined number of iterations, i.e. n/2, to reduce complexity. After this, PSO is applied over the results of GA to find the gbest and pbest positions and to update velocities and positions. The algorithm searches for the tasks with minimum execution time to execute them first. For mapping tasks onto VMs, the algorithm ignores computational cost and time. Another technique that uses machine learning to recommend suitable fault-tolerant techniques for workflow management systems is proposed in [26]. The target of the proposed method is to improve resiliency. The method uses historical data along with machine learning to achieve the desired results. Experimental results are presented to validate the proposed technique. Another algorithm that uses parallel GA for workflow scheduling in the cloud environment is presented in [27]. The proposed algorithm uses GA in a parallel fashion. First, the best chromosomes for individual parameters are calculated. In the next round, the best chromosomes are used to calculate the super best chromosomes for multiple parameters. The algorithm targets makespan, cost, and load balance. Comparative experimental results are shown to verify the effectiveness of the proposed algorithm. Another algorithm for dynamic resource allocation in cloud computing is presented in [28]. In the first phase, tasks are analyzed and different features are extracted. In the next phase, features are reduced to shorten the resource allocation time. Finally, features are combined to allocate resources. Hybrid PSO and modified GA are used for this purpose. Comparative experimental results are presented to validate the performance of the proposed algorithm. A hybrid approach based on modified PSO and the Q-learning algorithm is presented in [29]. The algorithm targets load balancing among VMs. The purpose of the hybridization is to improve load management, improve throughput, and manage the priorities of the tasks. Q-learning is used to improve the variables of PSO. The proposed algorithm is validated with experimental results. Another multi-objective optimization algorithm for scheduling workflows in cloud computing is proposed in [16]. The proposed method consists of two phases. In the first phase, pre-processing is done to avoid bottleneck tasks and reduce dependencies between workflow tasks. In the next phase, PSO is used to optimize the scheduling process. The algorithm targets makespan, cost, and load balancing as a multi-objective optimization. The proposed method is validated with comparative experimental results. A utility-based algorithm for load balancing of cloud resources is presented in [30]. The method is based on the firefly technique. The algorithm uses the bargaining protocol to optimize the utility. Comparative experimental results are shown to validate the proposed method. Another algorithm for load balancing in the cloud computing environment is proposed in [31]. The algorithm is a hybridization of the firefly algorithm and an improved hybrid PSO. The algorithm works by selecting the global best particle with a small distance of point to a line in a reduced search space with the help of the firefly algorithm. The algorithm targets load balance and response time. Comparative experimental results are shown to validate the proposed algorithm. Authors in [32] presented a novel approach for load balancing in cloud computing. The proposed algorithm tries to solve load balancing issues in the cloud computing environment by optimizing response time as the second objective. The proposed technique distributes dynamic workload over the entire system. Authors in [33] proposed a hybrid machine learning algorithm for load balancing in cloud computing. The algorithm uses supervised learning, unsupervised learning, and fuzzy logic as the hybrid mechanism. In the first phase, overloaded and underloaded resources are identified, followed by task scheduling to improve resource utilization and achieve load balancing. Task scheduling is supported by a multi-objective optimization technique. Experimental results are presented to validate the proposed algorithm.

The performance of cloud computing infrastructure is influenced by load balancing and task scheduling. Many load balancing and scheduling algorithms have been proposed by researchers whose aim is to distribute tasks fairly on the available VMs to achieve the desired objectives. In this article, we propose a scheduling algorithm for the cloud
computing environment. The algorithm uses ACO for optimization purposes. The proposed algorithm targets execution time, monetary cost, and load balance.

3 Materials and methods

This section presents the details of MrLBA. The proposed algorithm schedules workflow tasks at each level and finds optimal solutions through pheromone trails. The algorithm sorts workflow tasks by different parameters in the pre-processing phase and then optimizes the solution with ACO. First, we present the workflow and cloud model used in this article, followed by the details of the proposed method.

3.1 Workflow and cloud model

The workflow scheduler is strongly influenced by the nature of the tasks. For example, some workflows are computationally intensive whereas others are data intensive. Allocation of resources to such tasks is constrained by parameters such as makespan, cost, user deadlines, and QoS. Workflows are represented as Directed Acyclic Graphs (DAG), where vertices represent workflow tasks and edges represent the relationships between tasks. Workflow tasks are arranged in different levels, which creates parent-child relationships, or dependencies, between tasks at different levels. In addition to these dependencies, workflow tasks also share data during execution.
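To make this workflow model concrete, the sketch below shows one possible task representation; it is an illustrative Python fragment under assumed names (Task, add_edge, level_of), not the implementation used in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A workflow task (a DAG vertex); edges are the parent/child links."""
    name: str
    length: float                      # task length, e.g. in Million Instructions (MI)
    data_size: float                   # size of data shared with children, e.g. in MB
    children: list = field(default_factory=list)
    parents: list = field(default_factory=list)

def add_edge(parent: Task, child: Task) -> None:
    """Create a dependency edge: the child cannot start before the parent finishes."""
    parent.children.append(child)
    child.parents.append(parent)

def level_of(task: Task) -> int:
    """Level of a task = 1 + longest chain of parents (entry tasks are level 0)."""
    if not task.parents:
        return 0
    return 1 + max(level_of(p) for p in task.parents)

# Small example: t1 is the parent of t2 and t3, and t4 depends on both.
t1, t2, t3, t4 = (Task(f"t{i}", length=1000.0 * i, data_size=10.0 * i) for i in range(1, 5))
add_edge(t1, t2); add_edge(t1, t3); add_edge(t2, t4); add_edge(t3, t4)
print([(t.name, level_of(t)) for t in (t1, t2, t3, t4)])   # [('t1', 0), ('t2', 1), ('t3', 1), ('t4', 2)]
```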
The processing unit in cloud computing is referred to as a VM. VMs are created on physical hardware or servers to efficiently utilize cloud resources and serve a large number of users. Datacenters are used to host the physical infrastructure. Each resource in cloud computing has a finite amount of processing, storage, and other resources. VMs are created with a fixed amount of processing, storage, memory, and network bandwidth capabilities. The proposed algorithm targets the execution of workflow tasks according to the required resources. For this purpose, the capacity of the VMs and the resource utilization are calculated first. Assigning resources to tasks accordingly results in better resource utilization and reduced overall execution time. The processing capacity of $VM_i$ can be measured from the number of Processing Elements (PE) and the processing power of each PE, as shown in Eq. 1.

$C_{vm_i} = PE_i \times MIPS_i$   (1)

The processing capacity of n VMs can be calculated with Eq. 2.

$C_n = \sum_{i=1}^{n} C_{vm_i}$   (2)

While scheduling workflow tasks to VMs, the load of each VM needs to be measured. The load of $VM_i$ is the ratio of the total length (TL) of the tasks assigned to it and its processing capacity, as shown in Eq. 3.

$L_{vm_i} = \frac{TL}{C_{vm_i}}$   (3)

The load of n VMs in a host can be calculated with Eq. 4.

$L_n = \sum_{i=1}^{n} L_{vm_i}$   (4)

Load balance ($\sigma$) can be measured as the standard deviation shown in Eq. 5; smaller values mean better load management. Here $L_{vm_i}$ refers to the load of $VM_i$, $\bar{L}$ is the average load of all VMs, and n is the number of VMs.

$\sigma = \sqrt{\frac{\sum_{i=1}^{n} (L_{vm_i} - \bar{L})^2}{n}}$   (5)

The completion time of a task $t_i$ is the time that the task takes for execution and for getting the required data. Completion time can be calculated with Eq. 6.

$Time(t_i) = TimeT_{(t_i, t_j)} + TimeE_{(t_i, VM_k)}$   (6)

where $TimeT_{(t_i, t_j)}$ is the time of data transmission from task $t_i$ to $t_j$ and $TimeE_{(t_i, VM_k)}$ is the execution time of task $t_i$ on $VM_k$. Transfer time can be calculated with Eq. 7, whereas execution time can be calculated with Eq. 8.

$Trans(t_i, t_j) = \frac{sizeof(t_i, t_j)}{B_{(VM_k, VM_m)}}$   (7)

In the above equation, $sizeof(t_i, t_j)$ is the size of the data transferred from task $t_i$ to $t_j$ and $B_{(VM_k, VM_m)}$ is the bandwidth between the data centers where $VM_k$ and $VM_m$ are located. In case both VMs are in the same data center, the transmission time will be zero.

$TE = \frac{l_i}{C_{vm_j}}$   (8)

In Eq. 8, $l_i$ is the length of task i and $C_{vm_j}$ is the processing capacity of $VM_j$ calculated with Eq. 1. Makespan refers to the finish time of the last task in the workflow. Makespan is calculated with Eq. 9.

$MS = \max_{i=1}^{n} FT(task_i)$   (9)

where MS refers to the makespan and FT is the finish time of a task.
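The metrics defined in Eqs. 1-9 can be computed directly; the following Python sketch is illustrative only, and the function names and example values are assumptions rather than the authors' code.

```python
import math

def vm_capacity(pe: int, mips: float) -> float:
    """Eq. 1: capacity of a VM = number of processing elements x MIPS per PE."""
    return pe * mips

def vm_load(total_task_length: float, capacity: float) -> float:
    """Eq. 3: load of a VM = total length of the assigned tasks / processing capacity."""
    return total_task_length / capacity

def load_balance(loads: list) -> float:
    """Eq. 5: standard deviation of the VM loads (smaller = better balanced)."""
    mean = sum(loads) / len(loads)
    return math.sqrt(sum((l - mean) ** 2 for l in loads) / len(loads))

def transfer_time(data_size: float, bandwidth: float, same_datacenter: bool = False) -> float:
    """Eq. 7: data size / bandwidth between the two data centers; zero if co-located."""
    return 0.0 if same_datacenter else data_size / bandwidth

def execution_time(task_length: float, capacity: float) -> float:
    """Eq. 8: task length / processing capacity of the chosen VM."""
    return task_length / capacity

def completion_time(task_length: float, capacity: float,
                    data_size: float, bandwidth: float, same_dc: bool = False) -> float:
    """Eq. 6: transfer time plus execution time."""
    return transfer_time(data_size, bandwidth, same_dc) + execution_time(task_length, capacity)

def makespan(finish_times: list) -> float:
    """Eq. 9: finish time of the last task in the workflow."""
    return max(finish_times)

# Example with two VMs (illustrative values only).
c1, c2 = vm_capacity(2, 1000.0), vm_capacity(4, 1000.0)                   # Eq. 1
loads = [vm_load(8000.0, c1), vm_load(12000.0, c2)]                        # Eq. 3
print(load_balance(loads))                                                  # Eq. 5 -> 0.5
print(completion_time(4000.0, c2, data_size=200.0, bandwidth=100.0))        # Eq. 6 -> 3.0
```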
Monetary Cost (MC) is the cost of executing the workflow tasks plus the cost of transferring data between different resources, as shown in Eq. 10.

$MC = TEC + TTC$   (10)

where TEC refers to the Total Execution Cost calculated with Eq. 11 and TTC refers to the Total Transfer Cost calculated with Eq. 12.

$TEC_{w_i} = \sum_{i=1}^{N} vm_i^{time} \times vm_i^{cost}$   (11)

where N is the number of tasks in the workflow, $vm_i^{time}$ refers to the time a VM has spent executing a particular task, and $vm_i^{cost}$ is the cost of the VM for executing a task.

$TTC_{w_i} = \sum_{i=1}^{n} \frac{size(t_i, t_j)}{B_{cost}}$   (12)

Size refers to the size of the data to be transferred and $B_{cost}$ is the bandwidth cost.

3.2 Ant colony optimization (ACO)

ACO algorithms consist of an initialization phase, i.e. initializing the performance indicated by the pheromone. The next phase involves the evolution of generations. Each generation consists of solutions, and each task is constructed as an ant step. Suitable resources are selected for tasks based on the pheromone performance and heuristic information. Local and global variables are updated as all tasks are constructed. The pheromone of $T_i$ to $R_j$ is assumed to be a small positive value. Allocation of resources to tasks is carried out using Eq. 13 [34].

$P_k(T_i, R_j) = \dfrac{[\tau(T_i, R_j)]^{\alpha}\,[\eta(T_i, R_j)]^{\beta}}{\sum_{h \in L_k} [\tau(T_i, h)]^{\alpha}\,[\eta(T_i, h)]^{\beta}}$ if $R_j \in L_k$, and $0$ otherwise.   (13)

Here $\tau(T_i, R_j)$ is the pheromone of task $T_i$ and $\eta(T_i, R_j)$ is the heuristic information, set to the reciprocal of the start time of task $T_i$. $\alpha$ and $\beta$ are the weight factors [34]. $L_k$ is the list that satisfies the optimization criteria. Pheromone is updated according to Eq. 14: after an ant completes its tour, the pheromone value of each ant is placed on its edge $T_i$ to $R_j$.

$\tau(T_i, R_j) = (1 - \rho)\,\tau(T_i, R_j) + \Delta\tau(T_i, R_j)$   (14)

where $\rho$ is the evaporation factor and $\Delta\tau(T_i, R_j)$ is computed with Eq. 15.

$\Delta\tau(T_i, R_j) = Q\,(w_e(f(x)) + w_e(b(x)) + w_e(g(x)))$ if $R_j \in P_k$, and $0$ otherwise.   (15)

where $P_k$ is path k and Q is the adaptive parameter, whose value was set to 100 as in [34]. The fitness value is calculated according to Eq. 16.

$Fitness(x) = w_e(f(x)) + w_e(b(x)) + w_e(g(x))$   (16)

where w is the weight factor for each parameter, and f(x), b(x), and g(x) are the objective functions for makespan, cost, and load balance respectively. In the evaluation of the proposed algorithm, an equal weightage of 0.33 was used for all parameters.

The standard ACO algorithm consists of the following steps [35] (a brief illustrative sketch is given after the list).

(1) Initialize the performance, i.e. the pheromone.
(2) Evolve each generation.
(3) Perform the pheromone local and global update.
(4) Update each ant according to the fitness function.
(5) For each ant select the best trails.
(6) Repeat the process until the stopping criterion is met.
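The ACO rules of Eqs. 13-16 can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation; the data structures are assumptions, and the default parameter values simply mirror the settings reported later in the evaluation (α = 3, β = 2, ρ = 0.01, Q = 100).

```python
import random

ALPHA, BETA, RHO, Q = 3.0, 2.0, 0.01, 100.0   # weight factors, evaporation factor, adaptive parameter

def selection_probabilities(task, candidate_vms, pheromone, heuristic):
    """Eq. 13: probability of mapping `task` to each candidate VM from pheromone and
    heuristic information (heuristic = reciprocal of the task start time on that VM)."""
    weights = {vm: (pheromone[(task, vm)] ** ALPHA) * (heuristic[(task, vm)] ** BETA)
               for vm in candidate_vms}
    total = sum(weights.values())
    return {vm: w / total for vm, w in weights.items()}

def fitness(makespan_obj, cost_obj, balance_obj, w=(0.33, 0.33, 0.33)):
    """Eq. 16: weighted combination of the makespan, cost and load-balance objectives."""
    return w[0] * makespan_obj + w[1] * cost_obj + w[2] * balance_obj

def update_pheromone(pheromone, ant_path, fit_value):
    """Eqs. 14-15: evaporate all trails, then deposit Q * fitness on the edges of the ant's path."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - RHO)
    for edge in ant_path:
        pheromone[edge] += Q * fit_value

# Toy usage: one task, two candidate VMs.
task, vms = "t1", ["vm1", "vm2"]
pheromone = {(task, vm): 0.1 for vm in vms}                 # small positive initial value
heuristic = {("t1", "vm1"): 1 / 5.0, ("t1", "vm2"): 1 / 8.0}
probs = selection_probabilities(task, vms, pheromone, heuristic)
chosen = random.choices(vms, weights=[probs[v] for v in vms])[0]
update_pheromone(pheromone, [(task, chosen)], fitness(0.4, 0.3, 0.2))
```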
3.3 Proposed algorithm

In this article, our objective is to maintain a well-balanced system and to reduce makespan and cost with effective utilization of cloud resources. The proposed approach is designed for heterogeneous cloud environments due to the heterogeneity of the computational resources placed in cloud data centers. Each workflow consists of a different number of tasks in terms of length, number of parent tasks, number of children tasks, and data size. Moreover, the number of tasks differs at each level. Hence, the proposed approach works on pre-processing of the workflow and mapping of tasks to VMs using ACO to achieve the desired parameters. The scheduling problem with multiple parameters and constraints becomes more complicated; it is difficult to solve such combinatorial problems and obtain optimal solutions. ACO has the advantage of solving such problems, and many studies in scheduling have shown good results [34]. Initially, the workflow is parsed and the tasks at each level are arranged according to the number of children, task execution time, and file size. A task with more children, a longer execution time, and a larger file size is executed first. To get the optimal solution, ants try to schedule tasks on VMs at each iteration. The proposed algorithm follows the traditional structure of the ACO algorithm. Algorithm 1 presents the pseudo-code of the ACO algorithm, whereas Table 1 shows the notations and symbols used in the article.

Table 1 Notations and symbols used in the article

Symbol     Description
C_vmi      Capacity of the ith VM
C_n        Capacity of n VMs
L_vmi      Load of the ith VM
L_n        Load of n VMs
n          Number of VMs
L̄          Average load
B          Bandwidth
TE         Execution time
l_i        Length of the ith task
MS         Makespan
FT         Finish time
MC         Monetary cost
TEC        Total execution cost
TTC        Total transfer cost
N          Number of tasks
T_i        Task i
R_j        Resource j
P_k        Path k
ρ          Evaporation factor
w          Weight factor for each parameter
PE         Processing elements
α, β       Weight factors of the heuristic information
σ          Standard deviation
Δ          Change factor

Algorithm 1: Procedure of ACO
Input: workflow w, resources list
Output: best list
  while termination condition not met do
      Perform ant optimization
      Calculate fitness
      Construct solution with the help of the ant
      Update pheromone values
  end

The procedure starts with the initialization of the pheromone and the computation of the heuristic value associated with each node in the search space. In the while loop, artificial ants calculate the fitness value. In the last step, the pheromone value is updated on the basis of the best fitness value. This is done by either increasing the pheromone values on edges or decreasing them, which simulates pheromone evaporation. The working flow of MrLBA is presented in Fig. 2. The proposed architecture has four basic elements which describe the interaction of the different processes. First, the input workflow is taken and parsed. Tasks in the workflow are arranged according to the number of children, execution time, and data size; tasks with more children (i.e., larger depth), longer execution time, and larger file size come first. ACO is then used to optimize the schedule and calculate the fitness function. The objective function is optimized by selecting the best fitness value. The procedure repeats until the desired number of iterations is reached. Algorithm 2 shows the pseudo-code of the proposed algorithm.

Algorithm 2: Pseudo code of MrLBA
Input: list of workflow tasks
Output: map (T_i to R_j)
  for each VM v in VMList do
      get processing capacity of v
      sort VMs by processing capacity
  end
  for each workflow w do
      calculate depth of w (for levels of tasks)
      for each level l in w do
          for each task t in l do
              sort tasks and set priorities by
                  number of children tasks
                  execution time
                  data size
              demand d = requested resources of t
              if d ≤ processing capacity of VM v then
                  allocate VM according to the priority of the task
              else
                  select next VM
              end
          end
      end
      call ACO
  end
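A rough sketch of the pre-processing and mapping steps of Algorithm 2 is shown below. It is an illustrative Python fragment; the data structures and field names are assumptions, and the actual scheduler additionally invokes the ACO optimization (Algorithm 1) on the resulting mapping.

```python
def sort_vms_by_capacity(vms):
    """First loop of Algorithm 2: order VMs by processing capacity (largest first)."""
    return sorted(vms, key=lambda vm: vm["capacity"], reverse=True)

def prioritize(tasks):
    """Pre-processing: tasks with more children, longer execution time and larger
    data size come first (ties broken in that order)."""
    return sorted(tasks,
                  key=lambda t: (len(t["children"]), t["exec_time"], t["data_size"]),
                  reverse=True)

def map_level(tasks_in_level, vms):
    """Map one workflow level: give each prioritized task the first VM whose capacity
    satisfies its demand, otherwise fall back to the next available VM."""
    mapping = {}
    ordered_vms = sort_vms_by_capacity(vms)
    for task in prioritize(tasks_in_level):
        vm = next((v for v in ordered_vms if task["demand"] <= v["capacity"]),
                  ordered_vms[-1])          # no VM is large enough: take the next best
        mapping[task["name"]] = vm["name"]
    return mapping

# Illustrative data for one level of a workflow.
vms = [{"name": "vm1", "capacity": 2000}, {"name": "vm2", "capacity": 4000}]
level = [
    {"name": "t1", "children": ["t3", "t4"], "exec_time": 40.0, "data_size": 300.0, "demand": 3000},
    {"name": "t2", "children": ["t4"], "exec_time": 25.0, "data_size": 100.0, "demand": 1500},
]
print(map_level(level, vms))   # {'t1': 'vm2', 't2': 'vm2'}
```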
4 Experimental evaluation

This section presents the experimental evaluation of MrLBA. First, the datasets and assessment approaches used for the experiments are discussed. Then the details of the simulation setup and the results obtained are presented. To evaluate the proposed algorithm, simulations are performed on a Dell Core i5 machine with a 2.4 GHz processor and 16 GB memory, running the Ubuntu 16.04 operating system. Three datacenters with eight VMs are configured for the simulation. Each VM is allocated a different processing capacity, bandwidth capacity, etc. A collection of five datasets, i.e. Montage, LIGO, Sipht, CyberShake, and Epigenomics, is used in the simulation. The structure of the datasets is shown in Fig. 3, whereas Table 2 shows the details of the datasets.

The proposed algorithm is compared with standard ACO and with specialized schedulers for workflow scheduling [16, 25]. The standard algorithm is used to form a baseline for the evaluation. The specialized schedulers are selected based on the targeted parameters, the datasets used, etc. Standard ACO and MrLBA were executed with 10 ants and 100 iterations. The values of α, β, and ρ were set to 3, 2, and 0.01 respectively, taking into consideration the results obtained in [34]. The other methods were executed with the parameter settings discussed in the respective articles. Table 3 shows the comparative results of the proposed algorithm and the other methods. The results are shown in terms of execution time, cost, and load balance. The proposed algorithm achieves better results in terms of execution time, cost, and load balance over all datasets in comparison to the other methods. The proposed algorithm uses the characteristics of the tasks in the workflow to process tasks according to the capacity of the available resources. Here we present the improvement achieved by the proposed method over the other compared algorithms. The results are reported in Figs. 4, 5, 6, 7 and 8 for Montage, Sipht, LIGO, CyberShake, and Epigenomics respectively. For the Montage dataset, the proposed algorithm achieved percent improvements of 40.08, 26.14, and 9.55 over ACO, GA-PSO, and the hybrid approach respectively in execution time. The improvement in cost over the compared algorithms is 38.9, 26.5, and 11.1. In case of load balance, the improvement gain of the proposed algorithm over ACO, GA-PSO, and the hybrid approach is 47.5, 26.8, and 14.18 respectively. A similar improvement can also be observed for the Sipht dataset. The improvement gain in makespan over the compared methods is 30.5, 10.4, and 2.3. For the monetary cost, the percent improvement gain
of the proposed algorithm over ACO, GA-PSO, and the hybrid approach is 38.7, 26.9, and 11.9 respectively. In terms of load balance, the proposed algorithm achieved percent improvement gains of 34.15 over ACO, 24.4 over GA-PSO, and 11.9 over the hybrid approach. In case of the LIGO dataset, the improvement gain for makespan over ACO, GA-PSO, and the hybrid approach is 29.8, 7.18, and 2.5 respectively. The improvement in cost over the compared techniques is 49.1, 33.6, and 4.5. For load balance, the improvement over ACO, GA-PSO, and the hybrid approach is 33.1, 19.8, and 9.1 respectively. For the CyberShake dataset, the improvement gain in makespan over ACO, GA-PSO, and the hybrid approach is 29.4, 5.2, and 2.06 respectively. For the same dataset, the improvement in cost is 50.6, 26.6, and 8.13 over the compared methods. In case of load balance, the improvement gain of the proposed algorithm over the other methods is 32.2, 16.4, and 11.1. The percent improvement gain of the proposed algorithm over ACO, GA-PSO, and the hybrid approach in makespan for the Epigenomics dataset is 38.4, 11.7, and 3.8 respectively. For the same dataset, the improvement in cost is 43.3, 27, and 7.9 over the compared methods. In case of load balance, the gain is 36.9, 22.3, and 11.01 over ACO, GA-PSO, and the hybrid approach respectively. The workflows used for the evaluation have different intensities of required resources. These characteristics influence the execution flow and the results of the simulation. MrLBA makes better use of resources for CPU-intensive tasks, i.e., resources are allocated according to the demands of the tasks. For workflows with more dependencies, e.g., Sipht, MrLBA gives high priority to executing parent tasks to remove bottlenecks. For workflows with more parallel tasks and fewer dependencies, e.g., LIGO and CyberShake, tasks are processed with the priority of execution time to reduce the overall execution time and cost. The hybrid methods compared with the proposed algorithm suffer either from waiting time in the pre-processing phase or from avoiding the best solutions in the selection phase, and thus lead to worse results than the proposed algorithm.
Table 3 Comparative results of MrLBA, ACO, GA-PSO, and the hybrid approach

Dataset        Metric      ACO      GA-PSO   Hybrid   MrLBA
Montage        Makespan    278      218      178      161
               Cost        1.18     0.98     0.81     0.72
               LB          242      178      148      127
Sipht          Makespan    4755     3689     3381     3302
               Cost        1.24     1.04     0.82     0.76
               LB          202      176      151      133
LIGO           Makespan    4802     3632     3458     3371
               Cost        1.24     0.95     0.66     0.63
               LB          223      186      164      149
CyberShake     Makespan    6113     4549     4402     4311
               Cost        2.29     1.54     1.23     1.13
               LB          232      188      172      157
Epigenomics    Makespan    77897    34382    31813    30972
               Cost        6.51     4.98     4.28     3.92
               LB          244      195      171      152

Makespan is shown in seconds, cost is shown in dollars, and Load Balance (LB) is calculated with Eq. 5. Average results of 20 runs are shown.

Fig. 5 Percent improvement gain of MrLBA over ACO, GA-PSO and Hybrid approach for Sipht dataset
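As a quick check, the percent improvements reported above can be reproduced from the Table 3 values (up to rounding), for example for the Montage cost column:

```python
def improvement(baseline: float, mrlba: float) -> float:
    """Percent improvement of MrLBA over a baseline value (lower is better)."""
    return (baseline - mrlba) / baseline * 100.0

# Montage cost (Table 3): ACO 1.18, GA-PSO 0.98, Hybrid 0.81, MrLBA 0.72
for name, value in [("ACO", 1.18), ("GA-PSO", 0.98), ("Hybrid", 0.81)]:
    print(name, round(improvement(value, 0.72), 1))   # ~39.0, 26.5, 11.1
```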
24. Tang, L., Li, Z., Ren, P., Pan, J., Lu, Z., Su, J., Meng, Z.: Online and offline based load balance algorithm in cloud computing. Knowl. Based Syst. 138, 91–104 (2017)
25. Manasrah, A.M., Ba Ali, H.: Workflow scheduling using hybrid GA-PSO algorithm in cloud computing. Wireless Commun. Mobile Comput. (2018). https://doi.org/10.1155/2018/1934784
26. Guedes, T., Jesus, L.A., Ocaña, K.A., Drummond, L.M., de Oliveira, D.: Provenance-based fault tolerance technique recommendation for cloud-based scientific workflows: a practical approach. Cluster Comput. 23(1), 123–148 (2020)
27. Sardaraz, M., Tahir, M.: A parallel multi-objective genetic algorithm for scheduling scientific workflows in cloud computing. Int. J. Distrib. Sens. Netw. 16(8), 1550147720949142 (2020)
28. Ramasamy, V., Pillai, S.T.: An effective HPSO-MGA optimization algorithm for dynamic resource allocation in cloud environment. Cluster Comput. 23(3), 1711–1724 (2020)
29. Jena, U., Das, P., Kabat, M.: Hybridization of meta-heuristic algorithm for load balancing in cloud computing environment. J. King Saud Univ. Comput. Inf. Sci. (2020). https://doi.org/10.1016/j.jksuci.2020.01.012
30. Tapale, M.T., Goudar, R.H., Birje, M.N., Patil, R.S.: Utility based load balancing using firefly algorithm in cloud. J. Data Inf. Manage. 2, 215 (2020)
31. Devaraj, A.F.S., Elhoseny, M., Dhanasekaran, S., Lydia, E.L., Shankar, K.: Hybridization of firefly and improved multi-objective particle swarm optimization algorithm for energy efficient load balancing in cloud computing environments. J. Parallel Distrib. Comput. 142, 36–45 (2020)
32. Dam, S., Mandal, G., Dasgupta, K., Dutta, P.: An ant-colony-based meta-heuristic approach for load balancing in cloud computing. In: Khalid, S. (ed.) Applied Computational Intelligence and Soft Computing in Engineering, pp. 204–232. IGI Global, Hershey (2018)
33. Negi, S., Rauthan, M.M.S., Vaisla, K.S., Panwar, N.: CMODLB: an efficient load balancing approach in cloud computing environment. J. Supercomput. (2021). https://doi.org/10.1007/s11227-020-03601-7
34. Zuo, L., Shu, L., Dong, S., Zhu, C., Hara, T.: A multi-objective optimization scheduling method based on the ant colony algorithm in cloud computing. IEEE Access 3, 2687–2699 (2015)
35. Zhan, Z.H., Liu, X.F., Gong, Y.J., Zhang, J., Chung, H.S.H., Li, Y.: Cloud computing resource scheduling and a survey of its evolutionary approaches. ACM Comput. Surv. (CSUR) 47(4), 1–33 (2015)

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Arfa Muteeh is a graduate student in the Department of Computer Science, COMSATS University Islamabad, Attock Campus. She completed her master's degree in computer science from the same department. Her area of research is in cloud computing applications.

Muhammad Sardaraz received his master's degree in computer science from Foundation University Islamabad. He completed his Ph.D. in Computer Science in 2016 from Iqra University Islamabad, Pakistan. He worked as a Lecturer in the Department of Computer Science, University of Wah, Wah Cantt. Presently Dr. Sardaraz is working as Assistant Professor in the Department of Computer Science, COMSATS Institute of Information Technology Attock, Pakistan. His research interests are cloud computing, cluster and grid computing, and bioinformatics.

Muhammad Tahir completed his Ph.D. (Computer Science) at the Department of Computing & Technology, Iqra University, Islamabad, Pakistan in 2016. He worked as a Lecturer in the Department of Computer Science, University of Wah, Wah Cantt. He is currently working as Assistant Professor in the Department of Computer Science, COMSATS Institute of Information Technology Attock, Pakistan. His research interests are in parallel and distributed computing, the Hadoop MapReduce framework, the Internet of Things, security, and cryptography.