
Arab J Sci Eng
DOI 10.1007/s13369-017-2798-2

RESEARCH ARTICLE - COMPUTER ENGINEERING AND COMPUTER SCIENCE

Task Partitioning Scheduling Algorithms for Heterogeneous Multi-Cloud Environment

Sanjaya Kumar Panda 1,2 · Sohan Kumar Pande 2 · Satyabrata Das 2

Received: 16 March 2017 / Accepted: 6 August 2017
© King Fahd University of Petroleum & Minerals 2017

Abstract  Cloud computing is now an emerging trend for cost-effective, universal-access, reliable, available, recoverable and flexible IT resources. Although cloud computing has seen tremendous growth, there is a wide scope of research in different dimensions. For instance, one of the challenging topics is the task scheduling problem, which is shown to be NP-hard. Recent studies report that tasks are assigned to clouds based on their current load, without considering the partition of a task into pre-processing and processing time. Here, pre-processing time is the time needed for the initialization, linking and loading of a task, whereas processing time is the time needed for the execution of a task. In this paper, we present three task partitioning scheduling algorithms, namely cloud task partitioning scheduling (CTPS), cloud min–min task partitioning scheduling and cloud max–min task partitioning scheduling, for a heterogeneous multi-cloud environment. The proposed CTPS is an online scheduling algorithm, whereas the others are offline scheduling algorithms. These proposed algorithms partition the tasks into two different phases, pre-processing and processing, to schedule a task on two different clouds. We compare the proposed algorithms with four task scheduling algorithms as per their applicability. All the algorithms are extensively simulated and compared using various benchmark and synthetic datasets. The simulation results show the benefit of the proposed algorithms in terms of two performance metrics, makespan and average cloud resource utilization. Moreover, we evaluate the simulation results using the analysis of variance statistical test and confidence intervals.

Keywords  Cloud computing · Multi-cloud · Task scheduling · Task partitioning · Makespan

Sanjaya Kumar Panda
[email protected]
Sohan Kumar Pande
[email protected]
Satyabrata Das
[email protected]
1 Department of Computer Science and Engineering, Indian Institute of Technology (ISM), Dhanbad 826004, India
2 Department of Computer Science and Engineering and Information Technology, Veer Surendra Sai University of Technology, Burla 768018, India

1 Introduction

There is enormous attention on cloud computing in various businesses, organizations and research communities [1–3]. Many enterprises, especially small to medium enterprises, are moving to the cloud because it provides cost-cutting solutions, reliability, universal access, availability, recovery and flexible IT resources [4,5]. Cloud computing provides services in the form of infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS) [3]. According to Gartner Inc. [6], the cloud service market is expected to reach $208.6 billion in 2016, where 42.8% growth is expected from IaaS. On the other hand, Amazon Web Services (AWS) leads in IaaS according to a RightScale survey [7]. Therefore, many enterprises reduce their capital expenditures as they do not require any investment for building and maintaining new infrastructure. They can use the service and pay as per their usage [8], without knowing the infrastructure complexity of the cloud.

In IaaS clouds, the resources (such as CPU, memory, networks and applications) are provisioned in the form of virtual machines (VMs). These VMs are dynamically created based on the customer requirements [9].
Subsequently, they are deployed in the physical machines of the data centers [10]. However, the capacity of the resources is not unlimited, so not all customer requests can be satisfied. As a result, some of the requests are transferred from one cloud to another cloud. This requires collaboration among cloud service providers (CSPs), which is the future of cloud computing as stated in [11–13]. This is called multi-cloud, as a customer may concurrently access two or more clouds in a single heterogeneous environment. Note that we represent the customer requests in the form of independent tasks, and we use customer requests and tasks interchangeably.

Task scheduling is the ordering of a set of tasks onto the available resources such that the overall processing time (i.e., makespan) is minimized and the utilization of the resources is maximized. It is a well-known NP-hard problem [14–23]. Therefore, many practitioners [2,11,12,14–38] have developed task scheduling algorithms to find near-optimal solutions. Most of these works were developed for a single-cloud homogeneous environment. However, they have not considered multi-cloud collaboration. Although a few works [2,11,12,21] have been developed for the multi-cloud environment, they have not produced good makespan and resource utilization. Moreover, they have only considered the processing time of the tasks, without focusing on the initialization, linking and loading time of the tasks. To the best of our knowledge, this is the first paper which deals with the task partitioning scheduling problem for a heterogeneous multi-cloud environment.

In this paper, we present the following task scheduling problem for a heterogeneous multi-cloud environment. Given a set of n tasks, a set of m clouds and the pre-processing and processing time of all the tasks on different clouds, the objective is to order the pre-processing time and processing time of the tasks so that minimum makespan and maximum resource utilization are achieved. We develop three algorithms, called cloud task partitioning scheduling (CTPS), cloud min–min task partitioning scheduling (CMMTPS) and cloud max–min task partitioning scheduling (CMAXMTPS), for the problem stated above. CTPS is a two-phase online algorithm, whereas the rest are two-phase offline algorithms. In the first phase, the pre-processing completion time of a task is calculated over all the clouds. Based on the pre-processing completion time, the task is selected and assigned to a cloud. In the later phase, the processing time of the same task is calculated and the task is assigned to one of the clouds. The effectiveness of the proposed algorithms is evaluated using various benchmark and synthetic datasets, and the evaluation results are compared with the existing algorithms. We show that CTPS outperforms the existing cloud list scheduling (CLS) [11] and minimum completion cloud (MCC) [2] algorithms, whereas CMMTPS and CMAXMTPS outperform the existing cloud min–min scheduling (CMMS) [11] and cloud max–min scheduling (CMAXMS) [2] algorithms in terms of makespan and average cloud resource utilization, respectively.

Our major contributions can be summarized as follows:

– We design one online and two offline task scheduling algorithms for the heterogeneous multi-cloud environment. The proposed algorithms are well suited for the autonomic feature between clouds (i.e., multi-cloud) and the diversity feature of VMs.
– We introduce partitioning of the tasks into two different phases, namely pre-processing and processing.
– We extensively simulate the proposed algorithms with the benchmark dataset by Braun et al. [39] and with synthetic datasets.
– We evaluate the proposed algorithms in terms of makespan and average cloud resource utilization and validate the results using the ANOVA statistical test and confidence interval (CI).
– We compare the results with existing algorithms to show the benefit of the proposed algorithms.

The paper is structured as follows: In Sect. 2, we review related works on the task scheduling problem. In Sect. 3, we describe the cloud system model and the mathematical formulation of the task scheduling problem, followed by the proposed algorithms in Sect. 4. Performance metrics and simulation results are presented in Sects. 5 and 6, respectively. Finally, Sect. 7 provides concluding remarks and the scope for future work.

2 Related Work

To date, a vast number of task scheduling models and algorithms have been proposed, especially for grid and cloud computing [2,11,12,14–38]. In this paper, we present and discuss a few of them. Liu et al. [18] have introduced a scheduling problem where precedence-constrained jobs are scheduled on unrelated parallel machines. They have developed a priority rule-based heuristic to find the job–machine pair. Kumar et al. [19] have presented a polylogarithmic approximation, which is applicable when precedence-constrained jobs are present in a tree-like structure. Maheswaran et al. [20] have proposed various heuristics for batch and immediate mode scheduling. In batch mode, tasks are kept in a batch, and they are observed at a pre-scheduled time. For this, they have developed the Sufferage heuristic. On the contrary, the immediate mode assigns a task to a machine on its arrival to the system. Here, they have introduced the k-percent best (KPB) and opportunistic load balancing (OLB) heuristics. It is observed from the experimental results that the KPB heuristic outperforms the other immediate mode heuristics in HiHi heterogeneity of the semi-consistent and inconsistent matrices and that OLB performs better on the semi-consistent matrix.
Similarly, in the batch mode, the Sufferage heuristic performs better in HiHi heterogeneity of the inconsistent matrix. Reda et al. [22] have presented a sort-mid algorithm to find the mapping between tasks and machines. The objectives are to maximize the resource utilization and minimize the overall makespan. The experimental results reveal that the sort-mid algorithm produces more than 98% utilization in 10 out of 12 instances. Gogos et al. [23] have proposed two heuristics, namely list sufferage and a tenacious penalty-based heuristic, for the heterogeneous task scheduling problem. Moreover, they have formulated a mathematical model and used the linear programming method of column pricing with restarts. The limitation of column pricing with restarts is that it only performs better on the consistent type of problems. Celaya et al. [24] have proposed a decentralized approach to minimize the stretch of user applications (or tasks). This approach is compared with the centralized approach, which reveals 11% worse performance in the implementation. Anglano et al. [25] have proposed five selection policies, first come first served (FCFS)—exclusive, FCFS—shared, round robin, round robin—no replica first and longest idle. They have decomposed the selection problem into two independent sub-problems, namely bag selection and individual bag. However, they have used a static replication strategy in all the proposed policies. Wang et al. [26] have proposed a scheduling algorithm that combines OLB and the load-balanced min–min algorithm. They have applied this algorithm for load balancing in a three-level cloud computing network. Most of the above works were not designed for the multi-cloud environment.

One of the major parameters in a service-level agreement (SLA) is response time, which has a direct impact on the quality of service (QoS) performance of the user applications. Therefore, Salah et al. [40] have presented a Markovian analytical model to roughly calculate the response time for elastic applications in the cloud. It is fulfilled by monitoring the user applications and scaling the resources as and when required. Again, Salah et al. [41] have extended their work in [40] and validated the approach on the Amazon Web Services platform. Moreover, they have verified their analytical model using a discrete-event simulator and used the prime modulus multiplicative linear congruential generator for random number generation. However, they have not estimated the minimum number of VMs that are required to satisfy the given service-level objective response time. Kafhali et al. [42] have presented a stochastic model for analyzing performance in cloud data centers, which is based on queuing theory. This model estimates the number of VMs needed to fulfill the target QoS parameters. Calyam et al. [43] have introduced another important parameter, called quality of experience, in order to maintain the SLA. They have also introduced a tool, called virtual desktop cloud-analyst, to capture various metrics such as net utility and service response time. This tool supports simulation and emulation and runs in two different modes, namely run simulation and run experiment. They have also stated that the tool may be integrated with virtual private local area networks, overlay transport virtualization and virtual tenant networks in the near future. Ahmed et al. [44] have defined execution and migration time using formal analysis. They have analyzed two distributed application frameworks and found that transfer time is the main factor in execution time.

Lin et al. [27] have introduced a two-tier scheduling problem for the SaaS cloud and proposed four scheduling algorithms, namely two-tier strict backfilling, two-tier flexible backfilling with zero slack factor, two-tier flexible backfilling with nonzero slack factor and two-tier priority backfilling. Here, two-tier strict backfilling is non-preemptive scheduling, whereas the others are preemptive. However, a higher-priority user request only preempts a lower-priority user request in two-tier priority backfilling. The main objective is to improve the project turn-around time. Xu et al. [28] have proposed five allocation policies: a random allocation policy, three sequence allocation policies and a greedy allocation policy. In the random allocation policy, tasks are assigned to the VMs without following any order. As a result, a VM may be overloaded by receiving too many tasks, whereas other VMs may be under-loaded. In the first sequence allocation policy, tasks are assigned to the VMs in a particular sequence. The main goal behind this policy is that each VM will receive an equal number of tasks. In the second sequence allocation policy, the tasks are sorted in descending order of instruction length. Then, the sorted tasks are assigned to the clouds in a sequence. In the third sequence allocation policy, the VMs are sorted in ascending order of processing speed. Then, the sorted VMs are assigned to the tasks in a sequence. The greedy allocation policy combines the last two allocation policies to make an allocation decision. However, none of these works considered the different phases of tasks.

The task scheduling problem is not limited to grid and cloud computing; it is also applied to mobile computing, video surveillance centers, real-time systems and many more. Shi et al. [29] have introduced a scheduling problem for mobile cloud computing. Also, they have proposed an adaptive probabilistic scheduler and compared it with three traditional schedulers, namely round robin, greedy and probabilistic. Xiong et al. [30] have optimized the energy consumption of resources by designing a task scheduling model. They have formulated the optimization as a multi-dimensional bin packing problem. Later, they have reduced the multi-dimensional problem to a single-dimensional problem in order to apply a greedy best-fit search method. Hosseinimotlagh et al. [31] have presented a scheduling algorithm based on the utilization level and QoS.
The main limitation of this algorithm is that it underperforms when the optimal threshold of CPU utilization is surpassed.

Many meta-heuristic methods have been proposed to solve the task scheduling problem. Abdullahi et al. [32] have applied the discrete symbiotic organism search algorithm to find a solution for the scheduling problem. This algorithm undergoes mutualism, commensalism and parasitism to find possible solutions from the given populations [33]. Salman et al. [34] have applied particle swarm optimization to the static task scheduling problem and shown that it performs better in comparison with the genetic algorithm. Pandey et al. [35] have developed a particle swarm optimization-based workflow application scheduling for data-intensive applications. They have stated that the genetic algorithm is not best for workflow scheduling. The above works produce near-optimal results.

Recently, Li et al. [11] and Panda et al. [2] have proposed various task scheduling algorithms, especially CLS, MCC, CMMS and CMAXMS, for the heterogeneous multi-cloud environment. Note that CMAXMS is an extended version of max–min [17,20,36] following the same approach as CMMS. These algorithms follow a distributed approach to assign the tasks among the multiple clouds. This is due to the fact that future cloud computing will federate multiple CSPs in order to deliver the resources to the customers. However, these works were based on the execution or completion time of the task, without separating the pre-processing and processing completion times of the task. Panda et al. [12] have presented various normalization-based task scheduling algorithms for the heterogeneous multi-cloud environment. The algorithms are based on traditional normalization techniques, which are applied to minimize the impact of the inconsistent elements in the expected time to compute (ETC) matrix. However, these works focus on the ETC matrix, which considers the execution time only. In this paper, we consider the same task scheduling problem as presented in [2,11,12]. However, the proposed algorithms have the following notable differences. (a) The ETC matrix used in [2,11,12] is divided into a pre-processing time matrix and a processing time matrix, in order to incorporate the initialization, linking, loading, mapping, scheduling and finalizing time of the tasks. (b) A task can only be assigned to one cloud in [2,11,12]. In contrast, a task can be assigned to two different clouds in the pre-processing and processing phases of the proposed algorithms. (c) The proposed algorithms consider the maximum ready time and next ready time, in contrast to the ready time of the existing algorithms. The proposed algorithms have the same time complexity as given in [2,11,12], but they produce better makespan and average cloud resource utilization on benchmark and synthetic datasets.

3 Cloud Model and Problem Statement

3.1 Cloud Model

We assume that a number of heterogeneous IaaS clouds collaborate to provide services to customers all over the world. They are connected to each other using a mesh interconnection network. Each cloud has a server resource manager (SRM) to handle the customer requests. The SRM is also responsible for collaborating with the SRMs of the other clouds. The collaboration details are as follows. A customer may submit its application (or task) to one of the clouds. In general, a task is represented in the form of a volume of instructions [in million instructions (MI)] and a volume of data [in megabits (Mb)]. A task undergoes various stages, such as initialization, linking, loading and processing, to complete its execution. The first three stages are referred to as the pre-processing phase, and the last stage is referred to as the processing phase in the cloud. Note that the above stages are also used for computer program execution. On receiving the customer application, the corresponding SRM divides the task into two different phases, namely pre-processing and processing, and checks the status of the resources in its own cloud and collects the status of the resources from the other clouds through their SRMs. Here, we assume that a task is always separable into two phases. Note that the resources are delivered by creating VMs on the respective cloud data centers. Here, the resource status shows the pre-processing completion time and processing completion time of the customer application on the various clouds. If the pre-processing completion time is minimum in its own cloud, then it assigns the task to the same cloud. Otherwise, it transfers the task to the cloud where the minimum is achieved. The same process is repeated for the processing phase. However, we assume that the disk image of the task is readily available to the selected cloud for its execution. Upon completion of the service, the customers have to pay the cost of the used resources as per the agreement. However, we have not considered this in our model.

As there is no data center that has unlimited resource capacity, we assume that the number of clouds is fixed and the number of VMs deployed in each cloud varies. When an application (or a task) is deployed in the cloud, we also assume that a sufficient number of VMs is readily available for its execution. This setup is well suited for the autonomic feature between clouds and the diversity feature of VMs. Therefore, it is of practical relevance, and it is shown to work online. Note that the same setup and assumptions were also adopted in [11,22,23,28].

It is noteworthy to mention that the following communication overheads may arise in the cloud model: (a) customer-SRM, (b) SRM-SRM, (c) SRM-resource, (d) resource-SRM, (e) SRM-customer and (f) pre-processing phase-processing phase.
Apart from these communication overheads, each individual component (SRM, resource) also takes time to make a scheduling decision. However, we assume that the above overheads are negligible for the simplicity of the cloud model.

A real application of the proposed model is cloud service brokerage (CSB), which is adopted by many CSPs such as Appirio, AWS Marketplace, BlueWolf and CloudMore. A CSB acts as an intermediary between CSPs and cloud customers. It selects the services as per the customers' requirements and delivers them to the customers. Here, the service selection may be treated as pre-processing and the service delivery as processing.

3.2 Problem Formulation

Consider a set of m clouds C = {C_j | j = 1, 2, 3, ..., m}, a set of n independent tasks T = {T_i | i = 1, 2, 3, ..., n} from a set of n users, a ppt matrix in which ppt_ij, 1 ≤ i ≤ n, 1 ≤ j ≤ m is the pre-processing time of task i on cloud j (Eq. 1) and a pt matrix in which pt_ij, 1 ≤ i ≤ n, 1 ≤ j ≤ m is the processing time of task i on cloud j (Eq. 2).

          C_1       C_2      ...   C_m
  T_1  [ ppt_11    ppt_12    ...   ppt_1m ]
  T_2  [ ppt_21    ppt_22    ...   ppt_2m ]
   .   [    .         .       .       .   ]
  T_n  [ ppt_n1    ppt_n2    ...   ppt_nm ]                (1)

          C_1       C_2      ...   C_m
  T_1  [ pt_11     pt_12     ...   pt_1m ]
  T_2  [ pt_21     pt_22     ...   pt_2m ]
   .   [    .         .       .      .   ]
  T_n  [ pt_n1     pt_n2     ...   pt_nm ]                 (2)

Here, the ppt matrix is the time needed for a given task during the initialization, linking, loading, ordering and mapping process. On the other hand, the pt matrix is the time needed for a given task during the scheduling and finalizing process. The problem of task scheduling is to map a task T_i, 1 ≤ i ≤ n of the ppt matrix, followed by the same task T_i of the pt matrix, in some order such that the makespan is minimized. The problem is subject to the following constraints.

1. A task T_i, 1 ≤ i ≤ n can only be assigned to one cloud C_j, 1 ≤ j ≤ m at a time.
2. The pre-processing time of a task T_i, 1 ≤ i ≤ n on a cloud C_j, 1 ≤ j ≤ m is followed by the processing time of the task T_i on a cloud C_k, 1 ≤ k ≤ m, where j = k or j ≠ k.
3. The communication time of a task T_i, 1 ≤ i ≤ n from its pre-processing phase to its processing phase is negligible.

4 Proposed Algorithms

In this section, we present three task scheduling algorithms, namely the cloud task partitioning, cloud min–min task partitioning and cloud max–min task partitioning scheduling algorithms, with illustrative examples.

4.1 Cloud Task Partitioning Scheduling Algorithm

The cloud task partitioning scheduling (CTPS) algorithm is an online algorithm for the heterogeneous multi-cloud environment. It consists of two phases, namely pre-processing and processing. In the first phase, the pre-processing completion time of a task is calculated over all the clouds, which is the sum of the pre-processing time and the maximum ready time. It is mathematically expressed as follows.

pct_ij = ppt_ij + mrt_j,  1 ≤ i ≤ n, 1 ≤ j ≤ m   (3)

where pct_ij = pre-processing completion time of task i on cloud j, ppt_ij = pre-processing time of task i on cloud j and mrt_j = maximum ready time of cloud j in the worst case. Here, mrt is the earliest time at which the cloud is going to be ready after completing the currently assigned tasks, and it is set to zero if there are no currently assigned tasks. Note that the complete ppt matrix is unknown in CTPS as it is an online algorithm.

Remark 1 The pre-processing completion time is the same as the pre-processing time in the best case, i.e., pct_ij = ppt_ij.

Next, the task is assigned to the cloud which holds the minimum pct_ij. In the latter phase, the completion time of the same task is calculated over all the clouds. Note that the completion time is the sum of the processing time, maximum ready time and next ready time in the worst case. Mathematically,

ct_ij = pt_ij + mrt_j + nrt,  1 ≤ i ≤ n, 1 ≤ j ≤ m   (4)

where ct_ij = completion time of task i on cloud j, pt_ij = processing time of task i on cloud j and nrt = next ready time. Here, nrt is the earliest time at which the cloud is going to be ready after completing the pre-processing completion time of the task.

Remark 2 The completion time is the sum of the processing time and the next ready time in the best case, i.e., ct_ij = pt_ij + nrt.

Like the pre-processing phase, this phase also assigns the task to the cloud with the minimum ct_ij. The above procedure is repeated for all the incoming tasks.
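The assignment rule above can be made concrete with a small sketch. The following Python fragment (our illustration, not part of the paper) stores the Table 1 values of Eqs. (1) and (2) as nested lists and evaluates the worst-case bounds of Eqs. (3) and (4); the helper names worst_case_pct and worst_case_ct are ours.

# A minimal sketch of the quantities in Eqs. (1)-(4), using the 5-task /
# 3-cloud example of Table 1. Helper names are illustrative only.
ppt = [  # pre-processing time of task i on cloud j, Eq. (1)
    [10, 20, 15],
    [15,  5, 20],
    [20, 40, 30],
    [10, 30, 20],
    [40, 50, 60],
]
pt = [   # processing time of task i on cloud j, Eq. (2)
    [60, 35, 45],
    [55, 30, 25],
    [60, 50, 40],
    [40, 30, 35],
    [10, 20,  5],
]

def worst_case_pct(i, mrt):
    """Eq. (3): pct_ij = ppt_ij + mrt_j for every cloud j."""
    return [ppt[i][j] + mrt[j] for j in range(len(mrt))]

def worst_case_ct(i, mrt, nrt):
    """Eq. (4): ct_ij = pt_ij + mrt_j + nrt for every cloud j."""
    return [pt[i][j] + mrt[j] + nrt for j in range(len(mrt))]

# With all clouds initially idle (mrt = 0), task T1 has pct = [10, 20, 15],
# so its pre-processing phase goes to cloud C1 and nrt becomes 10.
mrt = [0, 0, 0]
print(worst_case_pct(0, mrt))            # [10, 20, 15]
mrt[0] = 10
print(worst_case_ct(0, mrt, nrt=10))     # [80, 45, 55] -- worst-case bounds

The actual completion times used by the algorithms can be smaller than these bounds, because a task may fill idle slots left by earlier tasks; that gap-filling step is what Procedures 1 and 2 implement in Sect. 4.1.2.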
Table 1 (a) A ppt matrix and (b) a pt matrix

ppt   C1   C2   C3        pt   C1   C2   C3
T1    10   20   15        T1   60   35   45
T2    15    5   20        T2   55   30   25
T3    20   40   30        T3   60   50   40
T4    10   30   20        T4   40   30   35
T5    40   50   60        T5   10   20    5
(a)                       (b)

4.1.1 An Illustration

For the sake of easy understanding, let us illustrate the proposed algorithm CTPS. Assume that there are five tasks, T1 to T5, and they are to be assigned to three different clouds. The ppt matrix and pt matrix of these tasks are given in Table 1a, b, respectively.

The first task T1 takes the minimum pre-processing completion time (pct) on cloud C1, assuming that the ready time of all the clouds is zero. Therefore, the pre-processing time of task T1 is assigned to cloud C1. The maximum ready time (mrt) of cloud C1 is updated to 10, which is also termed the next ready time (nrt). Subsequently, the processing completion time of the same task is calculated from the pt matrix, mrt and nrt, and it is (60 + 10 + 10), (35 + 0 + 10) and (45 + 0 + 10) on the three different clouds by considering the worst-case scenario. However, in this case, the actual processing completion times are 70, 45 and 55, respectively. The rationale behind this is that early arrival tasks may occupy some of the execution slots of the clouds in the pre-processing or processing phase. As a result, the actual processing completion time may differ from the worst-case processing completion time; in the worst case, the two are the same. Here, the processing time of task T1 is assigned to cloud C2, and the mrt of cloud C2 is updated to 45. Next, the second task T2 calculates the pct on the three different clouds, which is (15 + 10), (5 + 45) and (20 + 0), respectively. However, the actual pct values are 25, 5 and 20, respectively. As a result, the pre-processing time of task T2 is assigned to cloud C2, and the mrt of cloud C2 remains the same. The processing completion time of task T2 is then calculated, and it is (55 + 10 + 5), (30 + 45 + 5) and (25 + 0 + 5) on the three different clouds. However, the actual processing completion times are 65, 70 and 30, respectively. The processing time of task T2 is assigned to cloud C3, and the mrt of cloud C3 is updated to 30. Similarly, the other tasks are assigned to the three different clouds. The Gantt chart for the proposed CTPS is listed in Table 2a. It is easy to see that the makespan of CTPS is 75 and the average cloud resource utilization is 0.9778 (i.e., 97.78%).

We also generate the Gantt charts of the existing CLS [11] and MCC [2] algorithms, as depicted in Table 2b, c, respectively. The makespans of these algorithms are 100 and 125, respectively, which are 33.34 and 66.67% more than that of the proposed algorithm. The average cloud resource utilization of CLS and MCC is 0.8667 and 0.8134, respectively, which is 11.36 and 16.81% worse than that of the proposed algorithm. Moreover, this clearly shows that the proposed CTPS produces a better load-balanced schedule.

Table 2 Gantt chart for (a) proposed CTPS, (b) CLS and (c) MCC
(a) CTPS (C1 C2 C3): 0–5: T2 ∗; 5–10: T1 ∗; 10–30: T3 T2; 30–40: T4; 40–45: T1; 45–70: T5 T3; 70–75: ∗ T4 T5
(b) CLS (C1 C2 C3): 0–50: T4; 50–55: T1; 55–70: T3; 70–90: T2; 90–100: T5 ∗ ∗
(c) MCC (C1 C2 C3): 0–45: T2; 45–55: T1; 55–80: T3; 80–100: T4; 100–125: ∗ T5 ∗

4.1.2 Pseudo-Code for CTPS

For the pseudo-code of CTPS, we describe the notations and their definitions in Table 3.

Table 3 Notations and their definitions
Notation      Definition
Q             Queue of tasks
gantt(i, j)   Gantt chart of time i on cloud j

The pseudo-code for CTPS is shown in Algorithm 1. The CTPS algorithm begins with n tasks, n ≥ 1, present in the queue Q (Line 1). Here, we assume that these tasks have arrived at the same time. However, they are mapped on a first-come first-served basis. CTPS calculates the expected pre-processing and processing times of these tasks, and they are stored in the ppt and pt matrices, respectively. The algorithm calls Procedure 1 to schedule the pre-processing time of the tasks (Line 2). The pseudo-code for this procedure is shown in Procedure 1. Note that |Q| = n.
Algorithm 1: Pseudo-code for CTPS
Input:
  1. 1-D matrix: mrt
  2. 2-D matrices: ppt, pt and gantt
  3. n: Total number of tasks
  4. m: Total number of clouds
Output:
  1. Makespan
  2. Average cloud utilization
1: while Q ≠ NULL do
2:   Call ASSIGN-PPT-OF-TASK(ppt, pt, mrt, gantt, n, m)
3: end while

Procedure 1: ASSIGN-PPT-OF-TASK(ppt, pt, mrt, gantt, n, m)
1:  for i = 1, 2, 3, ..., n do
2:    for j = 1, 2, 3, ..., m do
3:      execution = ppt_ij + mrt_j
4:      count = 0
5:      for k = 1, 2, 3, ..., execution do
6:        if gantt(k, j) == NULL then
7:          count = count + 1
8:        end if
9:        if count == ppt_ij then
10:         pct_ij = k
11:         Break
12:       end if
13:     end for
14:   end for
15:   minimum = pct_i1
16:   index = 1
17:   for j = 1, 2, 3, ..., m do
18:     if minimum > pct_ij then
19:       minimum = pct_ij
20:       index = j
21:     end if
22:   end for
23:   for k = 1, 2, 3, ..., pct_i,index do
24:     if gantt(k, index) == NULL then
25:       gantt(k, index) = i
26:     end if
27:   end for
28:   if mrt_index < pct_i,index then
29:     mrt_index = pct_i,index
30:   end if
31:   nrt = pct_i,index
32:   Call ASSIGN-PT-OF-TASK(pt, mrt, gantt, nrt, i, n, m)
33: end for
34: Return

Procedure 2: ASSIGN-PT-OF-TASK(pt, mrt, gantt, nrt, i, n, m)
1:  for j = 1, 2, 3, ..., m do
2:    execution = pt_ij + mrt_j + nrt
3:    count = 0
4:    for k = nrt + 1, nrt + 2, ..., execution do
5:      if gantt(k, j) == NULL then
6:        count = count + 1
7:      end if
8:      if count == pt_ij then
9:        ct_ij = k
10:       Break
11:     end if
12:   end for
13: end for
14: minimum = ct_i1
15: index = 1
16: for j = 1, 2, 3, ..., m do
17:   if minimum > ct_ij then
18:     minimum = ct_ij
19:     index = j
20:   end if
21: end for
22: for k = nrt + 1, nrt + 2, ..., ct_i,index do
23:   if gantt(k, index) == NULL then
24:     gantt(k, index) = i
25:   end if
26: end for
27: if mrt_index < ct_i,index then
28:   mrt_index = ct_i,index
29: end if
30: n = n − 1
31: Return

Procedure 1 selects the first unassigned task and finds its pre-processing completion time over all the clouds (Lines 2–14). The actual pre-processing completion time is calculated and stored in pct (Line 10). Next, it selects the cloud that holds the minimum actual pre-processing completion time to assign the task (Lines 15–22). Subsequently, the task is executed in the selected cloud (Lines 23–27). Finally, it updates the maximum ready time and the next ready time (Lines 28–31).

Procedure 2 is called from Line 32 of Procedure 1 to assign the processing time of the same task, and the pseudo-code for this procedure is shown in Procedure 2. This procedure calculates the processing completion time of the task over all the clouds (Lines 1–13). The actual processing completion time is calculated and stored in ct (Line 9). Next, it finds the cloud that results in the minimum ct (Lines 14–21), and it executes the task in the corresponding cloud (Lines 22–26). Finally, it updates the maximum ready time and the total number of tasks (Lines 27–30).

Lemma 1 The time complexity of Procedure 1 is O(nmo).

Proof Let n be the total number of tasks, m the total number of clouds and o the maximum of the actual pre-processing completion time or the completion time. The innermost for loop (Lines 5–13, Procedure 1) iterates o times. Thus, it takes O(o) time. The inner for loop (Lines 2–14) takes O(m) time. To find the minimum of the actual pre-processing completion time, the for loop (Lines 17–22) takes O(m) time. To assign the pre-processing time of the task, the for loop (Lines 23–27) takes O(o) time in the worst case. Similarly, the processing time of the same task is assigned by calling Procedure 2, which takes O(mo) time (refer to Lemma 2). However, O(1) time is needed for the decision control statement and the assignment process (Lines 28–31). The outer for loop (Lines 1–33) continues for each task, which takes O(n) time. Therefore, the time complexity of Procedure 1 is O(nmo). □

Lemma 2 The time complexity of Procedure 2 is O(mo).

Proof The inner for loop (Lines 4–12, Procedure 2) iterates o times. Thus, this loop takes O(o) time. The outer for loop (Lines 1–13) takes O(m) time as this loop iterates m times. To find the minimum completion time, Lines 14–21 take O(m) time. In the worst case, the process of task assignment in the processing phase takes O(o) time (Lines 22–26). The decision control statement (Lines 27–29) requires O(1) time. Again, O(1) time is needed to update the total number of tasks (Line 30). Therefore, the time complexity of Procedure 2 is O(mo). □

Theorem 1 The overall time complexity of the algorithm CTPS is O(knmo).

Proof The algorithm invokes Procedure 1 followed by Procedure 2, say k times. Therefore, the overall time complexity of the algorithm CTPS is O(knmo). □
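To make the two-phase, slot-based bookkeeping of Procedures 1 and 2 concrete, the following Python sketch mirrors their gap-filling logic. It is our illustration, not the authors' implementation: the dictionary-based gantt and the helper names (finish_slot, fill, ctps) are ours, and time is modelled as unit slots, as in the gantt(k, j) array of the pseudo-code.

# Sketch of the slot-filling logic behind Procedures 1 and 2: a task's
# pre-processing phase may fill idle slots anywhere before its completion
# slot, and its processing phase may only fill idle slots after nrt.
def finish_slot(busy, length, start_after=0):
    """Slot at which `length` idle unit slots have been accumulated,
    scanning slots start_after+1, start_after+2, ... (cf. Lines 5-13 of
    Procedure 1 and Lines 4-12 of Procedure 2)."""
    count, k = 0, start_after
    while count < length:
        k += 1
        if k not in busy:
            count += 1
    return k

def fill(busy, task, upto, start_after=0):
    """Occupy every idle slot in (start_after, upto] with `task`."""
    for k in range(start_after + 1, upto + 1):
        busy.setdefault(k, task)

def ctps(ppt, pt):
    m = len(ppt[0])
    gantt = [dict() for _ in range(m)]   # per-cloud map: slot -> task id
    for i in range(len(ppt)):            # tasks handled in FCFS order
        # phase 1: pre-processing (Procedure 1)
        pct = [finish_slot(gantt[j], ppt[i][j]) for j in range(m)]
        c1 = min(range(m), key=lambda j: pct[j])
        fill(gantt[c1], ('ppt', i), pct[c1])
        nrt = pct[c1]
        # phase 2: processing, only after nrt (Procedure 2)
        ct = [finish_slot(gantt[j], pt[i][j], start_after=nrt) for j in range(m)]
        c2 = min(range(m), key=lambda j: ct[j])
        fill(gantt[c2], ('pt', i), ct[c2], start_after=nrt)
    return gantt

def makespan(gantt):
    return max(max(g) if g else 0 for g in gantt)

The sketch keeps only the actual (gap-filled) completion times; the worst-case bound ppt_ij + mrt_j of Eq. (3) appears in the pseudo-code only as a cap on the scan, and tie-breaking details follow the listings above.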
Lemma 3 The difference between the processing completion time and the processing time of a task in the processing phase is no more than the sum of the maximum ready time and the next ready time.

Proof Let the completion time of a task in the processing phase be ct and its processing time be pt. We know that ct ≥ pt. We have to prove that (ct − pt) ≤ (mrt + nrt). As nrt is the pre-processing completion time, the processing of the same task can start only after time nrt. It follows that ct ≥ pt + nrt. However, the early arrival tasks (including their pre-processing and processing phases) may occupy the time between nrt and (pt + nrt). As a result, the completion time is delayed by mrt time in the worst case. Mathematically, ct ≤ pt + mrt + nrt. Alternatively, (ct − pt) ≤ (mrt + nrt). □

Lemma 4 The total time duration of a task in both phases (Δ) is the sum of the pre-processing and processing time, assuming the maximum ready time and the next ready time are zero.

Proof

(mrt = 0) → (pct = ppt)   (5)
(mrt = 0) ∧ (nrt = 0) → (ct = pt)   (6)

From Eq. 5 and Eq. 6, (mrt = 0) ∧ (nrt = 0) → (Δ = ppt + pt), where Δ = pct + ct. □

Lemma 5 The pre-processing completion time of a task is no more than the sum of the pre-processing time and the maximum ready time.

Proof The maximum ready time shows the maximum delay (worst case) in the execution of a task due to the early arrival tasks. Mathematically, pct = ppt + mrt (Eq. 3). In the best and average cases, there may be one or more available periods between 0 and mrt. Therefore,

pct < ppt + mrt   (7)

If mrt = 0, then pct = ppt (Lemma 4). From Eq. 3, Eq. 7 and Lemma 4, we have pct ≤ ppt + mrt. □

Lemma 6 The maximum ready time is updated to the pre-processing or processing completion time if and only if it is less than the pre-processing or processing completion time.

Proof

ct = pt + mrt + nrt  (from Lemma 3)
⇒ mrt = ct − pt − nrt
⇒ mrt < ct, since pt, nrt > 0

In this case, mrt is updated to ct for the upcoming tasks, as we have considered the worst-case scenario.

pct = ppt + mrt  (from Lemma 4)
⇒ mrt = pct − ppt
⇒ mrt < pct, since ppt > 0

In this case, mrt is updated to pct for the processing phase of the same task, as we have considered the worst-case scenario. □

Lemma 7 The actual processing completion time is the sum of the processing time and the next ready time in the best case.

Proof In general,

ct = pt + mrt + nrt   (8)

In the best case, no early arrival tasks have occupied the time between nrt and (pt + nrt). As a result, mrt = nrt and the actual processing of the task takes pt time units from nrt. Therefore, ct = pt + nrt, ignoring mrt. □

Lemma 8 A task T_j, 1 ≤ j ≤ n may complete its execution on or before a task T_i, 1 ≤ i ≤ n, i ≠ j, even if T_j arrives after T_i, assuming both phases of T_i are assigned to different clouds.

Proof Let the actual completion time of task T_i in the pre-processing phase be pct, and let it be assigned to cloud C_1. We assume that the processing phase of the same task is assigned to another cloud (say, cloud C_2). However, the processing phase of task T_i can only be started after pct. Therefore, ct = pct + α, where α is the time spent in the processing phase. Next, task T_j may start its pre-processing phase after pct in cloud C_1. Then the processing phase of the same task may also be started in cloud C_1, assuming the worst case. Let the time taken by task T_j in both phases be β. If β < α, then task T_j completes its execution before T_i. □
4.2 Cloud Min–Min Task Partitioning Scheduling Algorithm

The cloud min–min task partitioning scheduling (CMMTPS) algorithm is an offline algorithm for the heterogeneous multi-cloud environment. Like CTPS, it has two phases, pre-processing and processing. However, it processes the incoming tasks in a batch-wise manner. In the first phase, it finds the pre-processing completion time of the unassigned tasks over all the clouds as shown in Eq. 3. Note that the complete ppt matrix is known in CMMTPS as it is an offline algorithm. Next, it finds the minimum pre-processing completion time for each task, followed by the minimum among them. It then assigns that task to the corresponding cloud where the minimum is achieved. Note that the first phase of CMMTPS is similar to the min–min [11,17,20,36] scheduling algorithm. However, the proposed algorithm has the following differences. (1) CMMTPS partitions the tasks into two different phases, namely pre-processing and processing. As a result, a task can be executed on two different clouds, one after another, to minimize the completion time, whereas min–min assigns a task to a single cloud, which increases the overall processing time. (2) CMMTPS finds the maximum ready time and next ready time to calculate the maximum delay time of a task, in contrast to the ready time of the min–min algorithm.

In the latter phase, the processing completion time of the same task is calculated over all the clouds as shown in Eq. 4. Next, it assigns the task to the cloud where it achieves the minimum processing completion time. The above process is repeated for all the tasks present in the batch.

4.2.1 An Illustration

Let us illustrate the proposed algorithm CMMTPS using the ppt matrix and the pt matrix given in Table 1a, b, respectively.

The proposed algorithm CMMTPS calculates the pre-processing completion time of the five tasks (T1 to T5) on the three different clouds (C1 to C3). The minimum pre-processing completion time of these tasks is 10, 5, 20, 10 and 40 on C1, C2, C1, C1 and C1, respectively, in which task T2 on cloud C2 is the minimum among them. Therefore, the pre-processing time of task T2 is assigned to cloud C2. Here, the mrt of cloud C2 and nrt are updated to 5. Next, it calculates the processing completion time of task T2 over all the clouds, which is (55 + 0 + 5), (30 + 5 + 5) and (25 + 0 + 5), respectively. However, the actual processing completion times are 60, 35 and 25, respectively. As a result, the processing time of task T2 is assigned to cloud C3. In a similar fashion, it assigns the remaining tasks to one of the clouds, and the execution order is T1, T4, T3 and T5. The Gantt chart for the proposed CMMTPS is listed in Table 4a. The makespan of CMMTPS is 100, and the average cloud resource utilization is 0.95.

We also produce the Gantt chart of the existing CMMS [11] algorithm, as listed in Table 4b. The makespan of CMMS is 125, which is 25% more than that of the proposed algorithm. The average cloud resource utilization of CMMS is 0.76, which is 20% worse than that of the proposed CMMTPS. This illustration clearly shows that the proposed algorithm outperforms the existing CMMS in terms of makespan and average cloud resource utilization.

Table 4 Gantt chart for (a) CMMTPS and (b) CMMS
(a) CMMTPS (C1 C2 C3): 0–5: T2 T3; 5–10: T1 T5; 10–20: T4; 20–30: T2; 30–45: T1; 45–55: T3; 55–60: T4; 60–90: ∗ T5; 90–95: T3; 95–100: T5 ∗ ∗
(b) CMMS (C1 C2 C3): 0–35: T2; 35–50: T4; 60–100: T5; 100–125: ∗ T3 ∗

Table 5 Notations and their definitions
Notation   Definition
mpct_i     Minimum pre-processing time of task i
I_i        Index of the cloud that keeps the minimum pre-processing time of task i
4.2.2 Pseudo-Code for CMMTPS

For the pseudo-code of CMMTPS, we describe the notations and their definitions in Table 5, in addition to the notations and their definitions given in Table 3. The pseudo-code for CMMTPS is shown in Algorithm 2.

Algorithm 2: Pseudo-code for CMMTPS
Input:
  1. 1-D matrix: mrt
  2. 2-D matrices: ppt, pt and gantt
  3. n: Total number of tasks
  4. m: Total number of clouds
Output:
  1. Makespan
  2. Average cloud utilization
1: while Q ≠ NULL do
2:   Call ASSIGN-PPT-OF-TASK-CMMTPS(ppt, pt, mrt, gantt, n, m)
3: end while

Procedure 3: ASSIGN-PPT-OF-TASK-CMMTPS(ppt, pt, mrt, gantt, n, m)
1:  while n ≠ 0 do
2:    for i = 1, 2, 3, ..., n do
3:      for j = 1, 2, 3, ..., m do
4:        execution = ppt_ij + mrt_j
5:        count = 0
6:        for k = 1, 2, 3, ..., execution do
7:          if gantt(k, j) == NULL then
8:            count = count + 1
9:          end if
10:         if count == ppt_ij then
11:           pct_ij = k
12:           Break
13:         end if
14:       end for
15:     end for
16:     minimum = pct_i1
17:     index = 1
18:     for j = 1, 2, 3, ..., m do
19:       if minimum > pct_ij then
20:         minimum = pct_ij
21:         index = j
22:       end if
23:     end for
24:     mpct_i = minimum
25:     I_i = index
26:   end for
27:   minimum = mpct_1
28:   task = 1
29:   for i = 1, 2, 3, ..., n do
30:     if minimum > mpct_i then
31:       minimum = mpct_i
32:       task = i
33:     end if
34:   end for
35:   cloud = I_task
36:   for k = 1, 2, 3, ..., minimum do
37:     if gantt(k, cloud) == NULL then
38:       gantt(k, cloud) = task
39:     end if
40:   end for
41:   if mrt_cloud < minimum then
42:     mrt_cloud = minimum
43:   end if
44:   nrt = minimum
45:   Call ASSIGN-PT-OF-TASK(pt, mrt, gantt, nrt, i, n, m)
46: end while
47: Return

The CMMTPS algorithm places all the incoming tasks into the queue Q (Line 1). Then, the algorithm calls Procedure 3 to schedule the pre-processing time of the incoming tasks (Line 2). The pseudo-code of this procedure is shown in Procedure 3.

In the pre-processing phase, this procedure selects a task from a batch of tasks using the min–min scheduling rule (Lines 2–34). Note that CMMTPS calculates the actual pre-processing completion time of each task over all the clouds in Line 11 of Procedure 3. Then, the selected task is executed in the corresponding cloud (Lines 36–40), and the mrt and nrt are updated (Lines 41–44).

Procedure 2 is called from Line 45 of Procedure 3 to assign the processing time of the selected task. Note that Procedure 2 is common to all the proposed algorithms.

Lemma 9 The time complexity of Procedure 3 is O(n²mo).

Proof The innermost for loop (Lines 6–14, Procedure 3) iterates o times. As a result, it takes O(o) time. The for loop in Line 3 iterates over all the clouds. Thus, it takes O(m) time. The outer for loop (Lines 2–26) repeats for all the tasks, which requires O(n) time. Therefore, Lines 2–26 take O(n) × O(m) × O(o) = O(nmo) time. Note that Lines 2–26 find the minimum pre-processing completion time of all the tasks over the clouds. To find the minimum among them, Lines 27–34 take O(n) time. Then the task is assigned to the corresponding cloud (Lines 35–40), which requires O(o) time. Next, the mrt and nrt values are updated (Lines 41–44), which takes O(1) time. Finally, it calls Procedure 2 to assign the processing time of the same task, which requires O(mo) time. However, the while loop (Lines 1–46) iterates n times. Therefore, the time complexity of Procedure 3 is O(n²mo). □

Theorem 2 The overall time complexity of the CMMTPS algorithm is O(kn²mo).

Proof The proposed algorithm CMMTPS calls Procedure 3 followed by Procedure 2, say k times, in which Procedure 3 dominates Procedure 2. Therefore, the overall time complexity of CMMTPS is O(kn²mo). □
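The batch selection step of Procedure 3 can be sketched in a few lines of Python (ours, not the authors' code). For simplicity, the sketch scores each task with the worst-case bound of Eq. (3), ppt_ij + mrt_j, whereas Procedure 3 itself uses the gap-filled actual pre-processing completion time; the helper name minmin_select is ours.

# Sketch of the CMMTPS first-phase selection: compute each task's minimum
# worst-case pre-processing completion time mpct_i and the cloud index I_i
# where it occurs, then pick the task with the smallest mpct_i (min-min).
def minmin_select(unassigned, ppt, mrt):
    """Return (task, cloud) chosen by the CMMTPS pre-processing phase."""
    best = None                      # (mpct_i, task i, cloud I_i)
    for i in unassigned:
        pct_row = [ppt[i][j] + mrt[j] for j in range(len(mrt))]
        j_min = min(range(len(mrt)), key=lambda j: pct_row[j])
        if best is None or pct_row[j_min] < best[0]:
            best = (pct_row[j_min], i, j_min)
    _, task, cloud = best
    return task, cloud

# For the Table 1 ppt matrix with idle clouds (mrt = [0, 0, 0]), the per-task
# minima are 10, 5, 20, 10 and 40, so T2 (index 1) on cloud C2 is picked
# first, matching the illustration in Sect. 4.2.1.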
4.3 Cloud Max–Min Task Partitioning Scheduling Algorithm

The cloud max–min task partitioning scheduling (CMAXMTPS) algorithm is also an offline algorithm for the heterogeneous multi-cloud environment. It has two phases, the same as CTPS and CMMTPS. In the first phase, it finds the minimum pre-processing completion time of each task over the clouds and then finds the maximum among them. Note that the complete ppt matrix is known in CMAXMTPS as it is an offline algorithm. Then, it assigns the corresponding task, which holds the maximum of the minimum pre-processing completion times, to the cloud. Note that the first phase of CMAXMTPS is similar to the max–min [17,20,36] scheduling algorithm. However, the proposed algorithm has the following differences. (1) CMAXMTPS performs task partitioning to achieve a minimum overall processing time, whereas max–min suffers from poor makespan. (2) Unlike max–min, it considers the maximum ready time and next ready time. In the second phase, the processing completion time of the same task is processed in the cloud with the minimum processing completion time. The above process is repeated for all the tasks.

4.3.1 An Illustration

Let us illustrate the proposed algorithm CMAXMTPS using the ppt and pt matrices as listed in Table 6a, b, respectively.

CMAXMTPS calculates the pre-processing completion time of the five tasks over the three different clouds. The minimum pre-processing completion time of these tasks is 10 (C1), 5 (C2), 20 (C1), 10 (C1) and 40 (C1), respectively, and the maximum among them is 40. Therefore, the pre-processing time of task T5 is assigned to C1. Here, the mrt and nrt are updated to 40. In the second phase, the processing completion time of task T5 is calculated on the three different clouds as (10 + 40 + 40), (20 + 0 + 40) and (5 + 0 + 40), respectively. However, the actual processing completion times are 50, 60 and 45, respectively. Therefore, task T5 is assigned to cloud C3. In a similar fashion, the other tasks are processed on the three different clouds. The Gantt chart for the proposed CMAXMTPS is listed in Table 7a. The makespan of CMAXMTPS is 90, and the average cloud resource utilization is 0.94.

For the sake of comparison, we also produce the Gantt chart of the existing CMAXMS [2] algorithm, as listed in Table 7b. The makespan of CMAXMS is 105. Note that it is 16.67% more than that of CMAXMTPS. The average cloud resource utilization of CMAXMS is 0.83, which is 12.6% worse than that of CMAXMTPS. The results clearly show the efficiency of the proposed algorithm CMAXMTPS.

Table 6 (a) A ppt matrix and (b) a pt matrix

ppt   C1   C2   C3        pt   C1   C2   C3
T1    10   20   15        T1   25   35   45
T2    15    5   20        T2   55   30   25
T3    20   40   30        T3   60   50   40
T4    10   30   20        T4   40   30   35
T5    40   50   60        T5   10   20    5
(a)                       (b)

Table 7 Gantt chart for (a) CMAXMTPS and (b) CMAXMS
(a) CMAXMTPS (C1 C2 C3): 0–30: T4 T3; 30–40: T5 T3; 40–45: T5; 45–55: T2; 55–60: T4; 60–65: T1; 65–75: T3; 75–90: T1 T2 ∗
(b) CMAXMS (C1 C2 C3): 0–50: T4; 50–70: T5 T3; 70–85: T1; 85–105: ∗ T2 ∗

4.3.2 Pseudo-Code for CMAXMTPS

The pseudo-code of CMAXMTPS is similar to that of CMMTPS. Therefore, we present the similarities and differences of both algorithms as follows. (1) The main algorithm of CMAXMTPS is the same as Algorithm 2; however, we replace CMMTPS by CMAXMTPS for this proposed algorithm. (2) The procedure call of CMMTPS is the same as that of CMAXMTPS. However, we select the maximum of the minimum pre-processing completion times for CMAXMTPS, in contrast to the minimum of the minimum pre-processing completion times for CMMTPS. Therefore, the partial procedure of CMAXMTPS is shown in Procedure 4. (3) Procedure 2 is called from Procedure 4 to perform the processing phase.

Procedure 4: ASSIGN-PPT-OF-TASK-CMAXMTPS(ppt, pt, mrt, gantt, n, m)
    Steps 1 to 26 are the same as Steps 1 to 26 of Procedure 3
27: maximum = mpct_1
28: task = 1
29: for i = 1, 2, 3, ..., n do
30:   if maximum < mpct_i then
31:     maximum = mpct_i
32:     task = i
33:   end if
34: end for
35: cloud = I_task
36: for k = 1, 2, 3, ..., maximum do
37:   if gantt(k, cloud) == NULL then
38:     gantt(k, cloud) = task
39:   end if
40: end for
41: if mrt_cloud < maximum then
42:   mrt_cloud = maximum
43: end if
44: nrt = maximum
    Steps 45 to 47 are the same as Steps 45 to 47 of Procedure 3

Lemma 10 The time complexity of Procedure 4 is O(n²mo).

Proof The proof is the same as that of Lemma 9. □

Theorem 3 The overall time complexity of the CMAXMTPS algorithm is O(kn²mo).

Proof The proposed algorithm CMAXMTPS calls Procedure 4 followed by Procedure 2, say k times. Therefore, the overall time complexity of CMAXMTPS is O(kn²mo). □
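Procedure 4 differs from Procedure 3 only in the outer selection criterion: among the per-task minima mpct_i, CMMTPS picks the smallest and CMAXMTPS picks the largest. A sketch of that single change (ours, with the same simplification as before of scoring by the worst-case bound of Eq. (3)):

# Sketch of the CMAXMTPS selection step: identical to minmin_select except
# that the comparison over the per-task minima uses '>' instead of '<'.
def maxmin_select(unassigned, ppt, mrt):
    best = None                      # (mpct_i, task i, cloud I_i)
    for i in unassigned:
        pct_row = [ppt[i][j] + mrt[j] for j in range(len(mrt))]
        j_min = min(range(len(mrt)), key=lambda j: pct_row[j])
        if best is None or pct_row[j_min] > best[0]:   # max of the minima
            best = (pct_row[j_min], i, j_min)
    _, task, cloud = best
    return task, cloud

# On the Table 6 ppt matrix with idle clouds, the per-task minima are
# 10, 5, 20, 10 and 40, so T5 on cloud C1 is selected first (Sect. 4.3.1).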
5 Performance Metrics

In this section, we use two performance metrics, namely cloud makespan and average cloud resource utilization, to evaluate the performance of the proposed and existing algorithms. We briefly describe the metrics as follows.

5.1 Cloud Makespan

The cloud makespan (M) is the overall completion time to execute all the tasks in the pre-processing and processing phases. Let M_j, 1 ≤ j ≤ m be the makespan of cloud C_j, 1 ≤ j ≤ m, which is mathematically expressed as follows.

M_j = Σ_{i=1}^{n} ppt_ij × F_ij + Σ_{i=1}^{n} pt_ij × F_ij,  1 ≤ j ≤ m   (9)

where

F_ij = 1 if task T_i is assigned to cloud C_j, and 0 otherwise.

Therefore, the overall makespan of the cloud system is [2,11,17,20,36]

M = max(M_j),  1 ≤ j ≤ m   (10)

Remark 3 The cloud makespan needs to be minimized in order to complete the execution of all the tasks.

5.2 Average Cloud Resource Utilization

Cloud resource utilization is the ratio of the local makespan to the overall makespan. Note that the local makespan is the makespan of the corresponding cloud. The cloud resource utilization of cloud C_j, 1 ≤ j ≤ m (i.e., U(C_j)) is mathematically expressed as follows.

U(C_j) = M_j / M,  1 ≤ j ≤ m   (11)

Therefore, the average cloud resource utilization is [2,11,17,20,36]

U = (1/m) Σ_{j=1}^{m} U(C_j)   (12)

Remark 4 The average cloud resource utilization needs to be maximized in order to utilize the cloud resources proficiently.
Table 8 Comparison of cloud makespan and average cloud resource utilization for CLS, MCC and CTPS in benchmark dataset

Dataset     CLS: M       CLS: U   MCC: M       MCC: U   CTPS: M      CTPS: U
u_c_hihi    4.7526e+05   1.0000   1.1451e+05   0.9505   2.7480e+04   0.9350
u_c_hilo    1.2295e+04   0.5026   1.9220e+03   0.9534   4.1700e+02   0.9535
u_c_lohi    1.5028e+04   0.5012   3.8400e+03   0.9726   9.0200e+02   0.9700
u_c_lolo    1.0430e+03   0.3445   9.8000e+01   0.9643   6.7000e+01   0.9832
u_i_hihi    4.5124e+04   0.6288   4.4466e+04   0.9426   1.8425e+04   0.9581
u_i_hilo    1.0520e+03   0.7204   9.7500e+02   0.9613   3.6100e+02   0.9484
u_i_lohi    1.8930e+03   0.5438   1.4840e+03   0.9405   5.8100e+02   0.9580
u_i_lolo    3.6900e+02   0.1851   7.5000e+01   0.9817   6.5000e+01   0.9904
u_s_hihi    2.5189e+05   1.1971   6.5726e+04   0.9347   1.8520e+04   0.9393
u_s_hilo    6.6440e+03   0.1834   1.2840e+03   0.9549   3.5500e+02   0.9496
u_s_lohi    7.0750e+03   0.2153   1.9530e+03   0.9621   6.8200e+02   0.9449
u_s_lolo    8.8100e+02   0.1245   8.4000e+01   0.9606   6.6000e+01   0.9839
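The makespan and utilization values reported in Table 8 (and in the later tables) follow Eqs. (9)–(12). A minimal sketch of that computation (ours), assuming a hypothetical per-task record of which cloud and how much time each phase received:

# Sketch of the metrics in Eqs. (9)-(12): per-cloud makespan M_j as the total
# pre-processing plus processing time booked to cloud j, overall makespan
# M = max_j M_j, and average utilization U = (1/m) * sum_j M_j / M.
def metrics(assignment, m):
    """assignment: list of (cloud_ppt, ppt_time, cloud_pt, pt_time) per task
    (a hypothetical record; each phase is booked to the cloud it ran on)."""
    M_j = [0.0] * m
    for c1, t1, c2, t2 in assignment:
        M_j[c1] += t1                    # Eq. (9), pre-processing term
        M_j[c2] += t2                    # Eq. (9), processing term
    M = max(M_j)                         # Eq. (10)
    U = sum(Mj / M for Mj in M_j) / m    # Eqs. (11)-(12)
    return M, U

# Example with two clouds: one fully loaded (100 time units), one half loaded.
print(metrics([(0, 40, 0, 60), (1, 20, 1, 30)], m=2))   # (100.0, 0.75)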
Fig. 1 Graphical comparison of cloud makespan for CLS, MCC and CTPS in benchmark dataset (y-axis: cloud makespan, logarithmic scale; x-axis: benchmark instance)

Table 9 Comparison of cloud makespan and average cloud resource utilization for CMAXMS, CMMS, CMAXMTPS and CMMTPS in benchmark dataset

Dataset     CMAXMS: M, U     CMMS: M, U     CMAXMTPS: M, U     CMMTPS: M, U

u_c_hi hi 1.2526e+05 0.9993 8.5084e+04 0.8937 3.5794e+04 0.9969 2.4838e+04 0.9048


u_c_hilo 2.1060e+03 0.9993 1.6810e+03 0.9463 5.3800e+02 0.9887 3.7800e+02 0.9038
u_c_lohi 4.0310e+03 0.9990 2.8430e+03 0.8963 1.1460e+03 0.9974 8.3800e+02 0.8785
u_c_lolo 1.1300e+02 0.9945 9.0000e+01 0.9569 7.6000e+01 0.9934 6.8000e+01 0.9586
u_i_hi hi 8.0395e+04 0.9991 3.5176e+04 0.8343 2.4841e+04 0.9628 1.7013e+04 0.8706
u_i_hilo 1.5120e+03 0.9991 8.5100e+02 0.9114 4.7200e+02 0.9955 3.3000e+02 0.9155
u_i_lohi 2.5140e+03 0.9980 1.3710e+03 0.7734 8.4000e+02 0.9949 5.1000e+02 0.9235
u_i_lolo 9.9000e+01 0.9949 7.0000e+01 0.9759 7.1000e+01 0.9921 6.6000e+01 0.9725
u_s_hi hi 9.1314e+04 0.9985 5.2067e+04 0.8022 2.5946e+04 0.9887 1.6761e+04 0.8903
u_s_hilo 1.7620e+03 0.9993 1.0910e+03 0.9217 4.7500e+02 0.9897 3.3000e+02 0.9114
u_s_lohi 2.8200e+03 0.9982 1.4930e+03 0.8547 8.8500e+02 0.9912 6.5000e+02 0.8389
u_s_lolo 1.1200e+02 0.9922 7.5000e+01 0.9650 7.5000e+01 0.9900 6.7000e+01 0.9636

Fig. 2 Graphical comparison of cloud makespan for CMAXMS, CMMS, CMAXMTPS and CMMTPS in benchmark dataset (y-axis: cloud makespan, logarithmic scale; x-axis: benchmark instance)

6 Simulation Results

The proposed algorithms are evaluated using various benchmark and synthetic datasets. Here, the synthetic datasets are generated using the Monte Carlo method as used in [12]. The simulations were performed using MATLAB R2014a version 8.3.0.532. The system configuration for the simulation is as follows. (1) Processor: Intel(R) Core(TM) i5-4210U CPU @ 1.70 GHz, (2) System type: 64-bit operating system, (3) Memory (RAM): 8 GB and (4) Platform: Microsoft Windows 7.

6.1 Simulation on Benchmark Dataset

In this simulation, we took a benchmark dataset of size 512 × 16 generated by Braun et al. [39]. Here, the first value, 512, denotes the number of tasks, and the latter value denotes the number of clouds. This dataset has twelve different instances of the same size. The general structure of these instances is u_x_yyzz [2,12,17,20]. The character 'u' in the structure denotes the use of the uniform distribution in the generation of the instances. Then, the character 'x' denotes the type of consistency, which is one of the following: (1) consistent (c), (2) inconsistent (i) and (3) semi-consistent (s). Next, 'yy' and 'zz' denote the heterogeneity of the tasks and clouds, respectively.
Table 10 Parameters and their values for synthetic datasets

Parameter                    Value
Number of tasks              250, 500, 2500, 5000, 25000
Number of clouds             10, 20, 30, 40, 50
Structure of the datasets    number_of_tasks × number_of_clouds
Range of the datasets        10–2000
Instances (ix)               i1, i2, i3, i4, i5

Table 11 Comparison of cloud makespan and average cloud resource utilization for CLS, MCC and CTPS in synthetic datasets

Dataset    Instance   CLS: M       CLS: U   MCC: M       MCC: U   CTPS: M      CTPS: U
250 × 10   i1         1.2222e+04   0.7805   1.2517e+04   0.9627   6.9250e+03   0.9813
i2 1.2262e+04 0.8476 1.3598e+04 0.9590 7.1130e+03 0.9607
i3 1.3903e+04 0.7486 1.3593e+04 0.9676 7.0940e+03 0.9492
i4 1.2775e+04 0.7449 1.2365e+04 0.9712 6.4750e+03 0.9813
i5 1.1802e+04 0.8202 1.2414e+04 0.9705 5.7880e+03 0.9801
500 × 20 i1 1.0851e+04 0.6714 9.4770e+03 0.9649 3.8640e+03 0.9588
i2 1.0543e+04 0.6831 9.4440e+03 0.9632 3.7450e+03 0.9537
i3 1.0458e+04 0.6866 9.6020e+03 0.9707 3.9150e+03 0.9608
i4 1.1360e+04 0.6571 9.7120e+03 0.9667 3.9440e+03 0.9532
i5 1.1217e+04 0.6733 9.8560e+03 0.9655 4.0010e+03 0.9624
2500 × 30 i1 2.6550e+04 0.7643 2.6770e+04 0.9816 8.9300e+03 0.9862
i2 2.4366e+04 0.8229 2.6492e+04 0.9852 8.9770e+03 0.9845
i3 2.4619e+04 0.8257 2.6257e+04 0.9885 9.1230e+03 0.9821
i4 2.4674e+04 0.8117 2.6191e+04 0.9837 9.0680e+03 0.9839
i5 2.6249e+04 0.7857 2.6741e+04 0.9879 9.0510e+03 0.9854
5000 × 40 i1 3.2266e+04 0.8247 3.4927e+04 0.9894 1.0874e+04 0.9902
i2 3.2652e+04 0.8272 3.4568e+04 0.9917 1.0825e+04 0.9854
i3 3.5017e+04 0.7611 3.4814e+04 0.9873 1.0824e+04 0.9870
i4 3.1002e+04 0.8618 3.4680e+04 0.9911 1.0844e+04 0.9883
i5 3.2582e+04 0.8179 3.4796e+04 0.9927 1.0960e+04 0.9845
25000 × 50 i1 1.0672e+05 0.9034 1.2456e+05 0.9980 3.6608e+04 0.9970
i2 1.0797e+05 0.9007 1.2617e+05 0.9962 3.8147e+04 0.9954
i3 1.0870e+05 0.8910 1.2583e+05 0.9980 3.6753e+04 0.9972
i4 1.0639e+05 0.9101 1.2547e+05 0.9974 3.6557e+04 0.9967
i5 1.0281e+05 0.9360 1.2477e+05 0.9974 3.6621e+04 0.9972



Fig. 3 Graphical comparison of cloud makespan for CLS, MCC and CTPS in 250 × 10, 500 × 20, 2500 × 30, 5000 × 40 and 25,000 × 50 datasets

Here, high (hi) and low (lo) are used to represent the high and low heterogeneity of tasks or clouds, respectively. For instance, the u_i_hihi instance shows that uniform distribution is used to generate the instance, the type of matrix is inconsistent and the heterogeneity of both tasks and clouds is high. Note that this dataset is extensively used in task scheduling algorithms [2,12,17,20]. It is also important to note that we use ETC and dataset interchangeably.
We partitioned the execution time of each task (i.e., each element of the instance) into pre-processing and processing time as follows. (1) We generated a random number between 0 and 1 (say, 0.25). (2) Next, we divided the corresponding execution time of the task into two parts, namely (0.25 × etc(i, j)) and ((1 − 0.25) × etc(i, j)), where etc(i, j) is the expected time to compute task i on cloud j. The first part is denoted as ppt(i, j), whereas the second part is denoted as pt(i, j). However, the communication time between the pre-processing time and the processing time is assumed to be zero.
We calculated the cloud makespan and average cloud resource utilization of the proposed algorithm CTPS for the 512 × 16 dataset, which are listed in Table 8. The results of CTPS are compared with two well-known algorithms, namely CLS [11] and MCC [2], as listed in Table 8. The graphical comparison of makespan is also shown in Fig. 1.
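A minimal MATLAB sketch of this partitioning step is given below. The ETC matrix is filled with random values here only for illustration (the actual 512 × 16 instances come from [39]), and the fixed fraction 0.25 follows the example above; the variable names ppt and pt mirror the notation ppt(i, j) and pt(i, j).

```matlab
% Split an expected-time-to-compute (ETC) matrix into pre-processing and
% processing parts, as described above (a sketch, not the authors' code).
etc = randi([10 2000], 512, 16);   % example ETC matrix: 512 tasks x 16 clouds
f   = 0.25;                        % randomly chosen fraction in (0, 1)

ppt = f * etc;                     % pre-processing time ppt(i, j)
pt  = (1 - f) * etc;               % processing time pt(i, j)

% Sanity check: the two parts add up to the original execution time.
assert(max(max(abs((ppt + pt) - etc))) < 1e-9);
```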


Table 12 Comparison of cloud makespan and average cloud resource utilization for CMAXMS, CMMS, CMAXMTPS and CMMTPS in synthetic
datasets
Dataset Instance CMAXMS CMMS CMAXMTPS CMMTPS
M U M U M U M U

250 × 10 i1 2.0960e+04 0.9929 1.0285e+04 0.9426 9.9210e+03 0.9666 5.8860e+03 0.9397


i2 2.2008e+04 0.9945 1.1003e+04 0.9532 1.0294e+04 0.9680 6.5170e+03 0.9393
i3 2.0793e+04 0.9935 1.1060e+04 0.9575 9.5610e+03 0.9897 6.2610e+03 0.9543
i4 2.0129e+04 0.9917 1.0378e+04 0.9360 9.1480e+03 0.9835 6.1520e+03 0.9137
i5 2.1217e+04 0.9919 1.0271e+04 0.9546 8.9950e+03 0.9780 5.1100e+03 0.9651
500 × 20 i1 1.6910e+04 0.9877 7.9310e+03 0.9335 5.4580e+03 0.9661 3.6010e+03 0.9140
i2 1.6446e+04 0.9862 7.7210e+03 0.9454 5.7180e+03 0.9703 3.1860e+03 0.9471
i3 1.6490e+04 0.9759 7.9440e+03 0.9186 5.7940e+03 0.9632 3.4740e+03 0.9253
i4 1.6877e+04 0.9828 7.9950e+03 0.9519 5.4580e+03 0.9661 3.3470e+03 0.9537
i5 1.7118e+04 0.9883 8.1730e+03 0.9413 5.5850e+03 0.9695 3.4690e+03 0.9398
2500 × 30 i1 5.2269e+04 0.9968 2.0887e+04 0.9786 1.4288e+04 0.9931 7.5870e+03 0.9801
i2 5.1679e+04 0.9969 2.0850e+04 0.9691 1.4522e+04 0.9914 7.7120e+03 0.9771
i3 5.2105e+04 0.9953 2.0849e+04 0.9820 1.4300e+04 0.9946 7.6850e+03 0.9661
i4 5.1598e+04 0.9967 2.0698e+04 0.9725 1.4364e+04 0.9902 7.6790e+03 0.9800
i5 5.2613e+04 0.9947 2.1437e+04 0.9706 1.4226e+04 0.9938 7.7660e+03 0.9632


Fig. 4 Graphical comparison of cloud makespan for CMAXMS, CMMS, CMAXMTPS and CMMTPS in 250 × 10, 500 × 20 and 2500 × 30
datasets


Here, 12 out of 12 instances give better makespan and average cloud resource utilization for CTPS in comparison with the CLS [11] and MCC [2] algorithms.
The cloud makespan and average cloud resource utilization are also calculated for the proposed algorithms, CMMTPS and CMAXMTPS, which are jointly listed in Table 9. The results of the proposed algorithms are compared with two well-known algorithms, namely CMMS [11] and CMAXMS [2] (refer to Table 9 and Fig. 2). Here, the cloud makespan and average cloud resource utilization are better for all the instances of the 512 × 16 dataset in comparison with the CMMS [11] and CMAXMS [2] algorithms.

6.2 Simulation on Synthetic Datasets

In this section, we took five different datasets. In each dataset, we generated five different instances (i.e., i1 to i5). The specification of the datasets is given in Table 10. The general structure of these datasets is n × m, where n is the number of tasks and m is the number of clouds. It is important to note that these datasets are generated using the Monte Carlo method as per the procedure given in [12]. Here, we used the pre-defined random number generation function randi() of MATLAB to generate the instances of the datasets [2,12]. Like the benchmark dataset, we followed the same procedure to divide the execution time of the tasks.
The cloud makespan and average cloud resource utilization of CTPS are compared with those of the CLS and MCC algorithms, which are listed in Table 11. The graphical comparison of makespan is shown separately for each dataset in Fig. 3. The results of CTPS are better in makespan for all the 25 instances of the five different datasets.
Similarly, we compared the performance of the proposed CMMTPS and CMAXMTPS algorithms with CMMS and CMAXMS as listed in Table 12 (Fig. 4). Note that we present three datasets due to the system resource limitation. Here, all the 15 instances give better results for the proposed algorithm CMMTPS in comparison with CMMS and for the proposed algorithm CMAXMTPS in comparison with CMAXMS.

6.3 Analysis of Variance

Analysis of variance (ANOVA) [45] is a well-known statistical method, which is used to validate the simulation results. Let us assume that the means of three different populations are the same. Mathematically, μ1 = μ2 = μ3, and we call this the null hypothesis H0. On the other hand, we set the alternate hypothesis as: the mean of at least one population is different. Our goal is to reject the null hypothesis. The test was performed using the data analysis tool of Microsoft Excel 2013. We used ANOVA single factor as our analysis tool and assumed the value of alpha to be 0.05. We conducted the ANOVA for the 512 × 16 benchmark dataset and all the synthetic datasets. The ANOVA results are given in Tables 13, 14 and 15 for the benchmark dataset and in Tables 16, 17 and 18 for the synthetic datasets. Note that df denotes the degrees of freedom in Tables 13, 14, 15, 16, 17 and 18, which is calculated by finding the difference between the number of samples and the number of groups under consideration.
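Although the ANOVA was carried out with the Excel data analysis tool, the quantities reported in Tables 13–18 follow the standard single-factor computation sketched below. The sample values are the CLS, MCC and CTPS makespans of the 250 × 10 instances from Table 11, used here purely as an example; finv() requires the Statistics and Machine Learning Toolbox.

```matlab
% Single-factor (one-way) ANOVA over k algorithm columns of makespan values
% (a sketch of the computation behind Tables 13-18, not the Excel tool itself).
M = [12222 12517  6925;        % i1: CLS, MCC, CTPS makespan (from Table 11)
     12262 13598  7113;        % i2
     13903 13593  7094;        % i3
     12775 12365  6475;        % i4
     11802 12414  5788];       % i5
[n, k]  = size(M);             % n samples per group, k groups (algorithms)
grand   = mean(M(:));          % grand mean
grpMean = mean(M, 1);          % group (column) means

ss_between = n * sum((grpMean - grand).^2);                % sum of squares between groups
ss_within  = sum(sum((M - repmat(grpMean, n, 1)).^2));     % sum of squares within groups

df_between = k - 1;
df_within  = n * k - k;        % number of samples minus number of groups
F          = (ss_between / df_between) / (ss_within / df_within);

F_critical = finv(0.95, df_between, df_within);  % alpha = 0.05
rejectH0   = F > F_critical;   % true: at least one population mean differs
```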

Table 13 Comparison of sum of square, df, mean square, F value, p value and F critical for CLS, MCC and CTPS in benchmark dataset

Source of variation   Sum of square   df   Mean square   F value   p value   F critical
Between groups        2.5853e+10      2    1.2927e+10    1.6939    0.1994    3.2849
Within groups         2.5183e+11      33   7.6313e+09
Total                 2.7769e+11      35

Table 14 Comparison of sum of square, df, mean square, F value, p value and F critical for CMAXMS, CMMS and CMAXMTPS in benchmark dataset

Source of variation   Sum of square   df   Mean square   F value   p value   F critical
Between groups        2.0544e+09      2    1.0272e+09    1.0351    0.3664    3.2849
Within groups         3.2748e+10      33   9.9237e+08
Total                 3.4803e+10      35

Table 15 Comparison of sum of square, df, mean square, F value, p value and F critical for CMAXMS, CMMS and CMMTPS in benchmark dataset

Source of variation   Sum of square   df   Mean square   F value   p value   F critical
Between groups        2.6095e+09      2    1.3048e+09    1.3567    0.2715    3.2849
Within groups         3.1736e+10      33   9.6169e+08
Total                 3.4345e+10      35


Table 16 Comparison of sum of square, df, mean square, F value, p value and F critical for CLS, MCC and CTPS in synthetic datasets

Source of variation   Sum of square   df   Mean square   F value   p value   F critical
Between groups        1.1691e+10      2    5.8453e+09    5.2199    0.0076    3.1239
Within groups         8.0626e+10      72   1.1198e+09
Total                 9.2317e+10      74

Table 17 Comparison of sum of square, df, mean square, F value, p value and F critical for CMAXMS, CMMS and CMAXMTPS in synthetic datasets

Source of variation   Sum of square   df   Mean square   F value       p value      F critical
Between groups        3.4845e+09      2    1.7423e+09    1.6714e+01    4.5710e-06   3.2199
Within groups         4.3780e+09      42   1.0424e+08
Total                 7.8625e+09      44

Table 18 Comparison of sum of square, df, mean square, F value, p value and F critical for CMAXMS, CMMS and CMMTPS in synthetic datasets

Source of variation   Sum of square   df   Mean square   F value       p value      F critical
Between groups        4.6280e+09      2    2.3140e+09    2.2960e+01    1.8295e-07   3.2199
Within groups         4.2329e+09      42   1.0078e+08
Total                 8.8608e+09      44

Fig. 5 Graphical comparison of confidence interval for CLS, MCC and CTPS in benchmark dataset

Fig. 6 Graphical comparison of confidence interval for CMAXMS, CMMS and CMAXMTPS in benchmark dataset

The test criteria are as follows: if F value > F critical, then we reject the null hypothesis; otherwise, we fail to reject it. In the benchmark dataset (Tables 13, 14, 15), we fail to reject the null hypothesis as F value < F critical. It shows that the means of the three different populations are equal. It is also confirmed by the fact that the p value is much greater than 0.05. The reason behind this outcome is that the instances do not lie in a specified range; they vary widely from one instance to another. In the synthetic datasets (Tables 16, 17, 18), we reject the null hypothesis as F value > F critical. It clearly indicates that the means of the three populations are not the same. It is confirmed by the fact that the p value is much less than 0.05. The ANOVA test clearly shows that the proposed algorithms outperform the existing algorithms as per their applicability.
We also calculated the confidence interval (CI) for the benchmark and synthetic datasets. The results of CI are shown in Figs. 5, 6 and 7 for the benchmark dataset and in Figs. 8, 9 and 10 for the synthetic datasets. Note that we set the CI level as 95%. The comparison clearly shows that the proposed algorithms outperform the existing algorithms.

6.4 Discussion

The rationale behind the better performance of the three proposed algorithms is as follows.

– The proposed algorithms partition the task into two phases. Therefore, each phase of the task can be scheduled in different clouds. However, the existing algorithms schedule the task in a single cloud. Therefore, they produce poor makespan.
– The proposed algorithms use machine ready time and next ready time to calculate the worst-case completion time of the task. However, the existing algorithms consider machine ready time only. Therefore, the completion time is not estimated properly as the tasks are partitioned into different parts. As a result, they perform worse than the proposed algorithms.
– The proposed algorithms calculate the actual pre-processing or processing completion time to minimize the task assignment time and properly utilize the idle slots, whereas the existing algorithms calculate the completion time without considering the idle slots created by early arrival tasks.
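The 95% confidence intervals plotted in Figs. 5–10 can be reproduced from the per-instance makespan samples of each algorithm. The sketch below uses the CLS makespans of the 250 × 10 instances from Table 11 as example data and a normal-approximation quantile of 1.96; the exact interval formula used by the authors is not stated, so this is an assumption.

```matlab
% 95% confidence interval of the mean makespan for one algorithm
% (a sketch; x holds the makespan of that algorithm over all instances).
x  = [12222 12262 13903 12775 11802];   % e.g., CLS on the 250 x 10 instances
n  = numel(x);
m  = mean(x);
s  = std(x);                            % sample standard deviation (n - 1 denominator)
z  = 1.96;                              % 95% level, normal approximation
ci = [m - z * s / sqrt(n), m + z * s / sqrt(n)];
fprintf('mean = %.1f, 95%% CI = [%.1f, %.1f]\n', m, ci(1), ci(2));
```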


Fig. 7 Graphical comparison of confidence interval for CMAXMS, CMMS and CMMTPS in benchmark dataset

Fig. 8 Graphical comparison of confidence interval for CLS, MCC and CTPS in synthetic dataset

Fig. 9 Graphical comparison of confidence interval for CMAXMS, CMMS and CMAXMTPS in synthetic dataset

Fig. 10 Graphical comparison of confidence interval for CMAXMS, CMMS and CMMTPS in synthetic dataset

7 Conclusion

We have presented three task partitioning scheduling algorithms, namely CTPS, CMMTPS and CMAXMTPS, for the heterogeneous multi-cloud environment. The proposed algorithms comprise two different phases, pre-processing and processing. The CTPS is an online algorithm, which has been shown to run in O(knmo) time for k iterations, n tasks, m clouds and o actual completion time. The CMMTPS and CMAXMTPS are offline algorithms, which have been shown to run in O(kn²mo) time. The simulation results have been presented in terms of two performance metrics, makespan and average cloud resource utilization, using various benchmark and synthetic datasets. The results of the proposed algorithms have been compared with those of the existing CLS, MCC, CMMS and CMAXMS algorithms as per their applicability. The comparison results have shown that the proposed algorithms outperform the four existing algorithms in terms of the two performance metrics.
In the proposed algorithms, we have not considered the communication time between the pre-processing and processing phases. Moreover, the cost of the execution time and the transfer cost have not been considered in the proposed algorithms. Therefore, our future effort is aimed at incorporating both time and cost in order to make the algorithms more efficient and realistic.

References

1. Avetisyan, A.I.; Campbell, R.; Gupta, I.; Heath, M.T.; Ko, S.Y.; Ganger, G.R.; Kozuch, M.A.; O'Hallaron, D.; Kunze, M.; Kwan, T.T.; Lai, K.; Lyons, M.; Milojicic, D.S.; Lee, H.Y.; Soh, Y.C.; Ming, N.K.; Luke, J.; Namgoong, H.: Open cirrus: a global cloud computing testbed. IEEE Comput. Soc. 43(4), 35–43 (2010)
2. Panda, S.K.; Jana, P.K.: Efficient task scheduling algorithms for heterogeneous multi-cloud environment. J. Supercomput. 71(4), 1505–1533 (2015)
3. Buyya, R.; Yeo, C.S.; Venugopal, S.; Broberg, J.; Brandic, I.: Cloud computing and emerging IT platforms: vision, hype and reality for delivering computing as the 5th utility. Future Gener. Comput. Syst. 25, 599–616 (2009)
4. Beaty, K.A.; Naik, V.K.; Perng, C.S.: Economics of cloud computing for enterprise IT. IBM J. Res. Dev. 55(6), 1–13 (2011)
5. Foster, I.; Zhao, Y.; Raicu, I.; Lu, S.: Cloud computing and grid computing 360-degree compared. Grid Computing Environments Workshop, IEEE, pp. 1–10 (2008)
6. Public Cloud Adoption. http://www.gartner.com/newsroom/id/3443517. Accessed on 17th Oct 2016
7. Cloud Computing Trends. http://www.rightscale.com/blog/cloud-industry-insights/cloud-computing-trends-2016-state-cloud-survey. Accessed on 18th Oct 2016
8. Kalyaev, A.I.; Kalyaev, I.A.: Method of multiagent scheduling of resources in cloud computing environments. J. Comput. Syst. Sci. Int. 55(2), 211–221 (2016)
9. Dabbagh, M.; Hamdaoui, B.; Guizani, M.; Rayes, A.: An energy-efficient VM prediction and migration framework for overcommitted clouds. IEEE Trans. Cloud Comput. 1–13 (2016). doi:10.1109/TCC.2016.2564403
10. Zhou, A.; Wang, S.; Cheng, B.; Zheng, Z.; Yang, F.; Chang, R.N.; Lyu, M.R.; Buyya, R.: Cloud service reliability enhancement via virtual machine placement optimization. IEEE Trans. Serv. Comput. 1–13 (2015) (in press)
11. Li, J.; Qiu, M.; Ming, Z.; Quan, G.; Qin, X.; Gu, Z.: Online optimization for scheduling preemptable tasks on IaaS cloud systems. J. Parallel Distrib. Comput. 72, 666–677 (2012)
12. Panda, S.K.; Jana, P.K.: Normalization-based task scheduling algorithms for heterogeneous multi-cloud environment. Inf. Syst. Front. (2016) (in press)
13. Panda, S.K.; Jana, P.K.: Uncertainty-based QoS min–min algorithm for heterogeneous multi-cloud environment. Arab. J. Sci. Eng. 41(8), 3003–3025 (2016)
14. Kwok, Y.; Ahmad, I.: Dynamic critical-path scheduling: an effective technique for allocating task graphs to multiprocessors. IEEE Trans. Parallel Distrib. Syst. 7(5), 506–521 (1996)
15. Topcuoglu, H.; Hariri, S.; Wu, M.: Performance-effective and low-complexity task scheduling for heterogeneous computing. IEEE Trans. Parallel Distrib. Syst. 13(3), 260–274 (2002)
16. Bajaj, R.; Agrawal, D.P.: Improving scheduling of tasks in a heterogeneous environment. IEEE Trans. Parallel Distrib. Syst. 15(2), 107–118 (2004)
17. Braun, T.D.; Siegel, H.J.; Beck, N.; Boloni, L.L.; Maheswaran, M.; Reuther, A.I.; Robertson, J.P.; Theys, M.D.; Yao, B.; Hensgen, D.; Freund, R.F.: A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems. J. Parallel Distrib. Comput. 61(6), 810–837 (2001)
18. Liu, C.; Yang, S.: A heuristic serial schedule algorithm for unrelated parallel machine scheduling with precedence constraints. J. Softw. 6(6), 1146–1153 (2011)
19. Kumar, V.S.A.; Marathe, M.V.; Parthasarathy, S.; Srinivasan, A.: Scheduling on unrelated machines under tree-like precedence constraints. Algorithmica 55, 205–226 (2009)
20. Maheswaran, M.; Ali, S.; Seigel, H.J.; Hensgen, D.; Freund, R.F.: Dynamic mapping of class of independent tasks onto heterogeneous computing system. J. Parallel Distrib. Comput. 59, 107–131 (1999)
21. Gounaris, A.; Karampaglis, Z.; Naskos, A.; Manolopoulos, Y.: A bi-objective cost model for optimizing database queries in a multi-cloud environment. J. Innov. Digit. Ecosyst. 1, 12–25 (2014)
22. Reda, N.M.; Tawfik, A.; Marzok, M.A.; Khamis, S.M.: Sort-mid tasks scheduling algorithm in grid computing. J. Adv. Res. 6(6), 987–993 (2015)
23. Gogos, C.; Valouxis, C.; Alefragis, P.; Goulas, G.; Voros, N.; Housos, E.: Scheduling independent tasks on heterogeneous processors using heuristics and column pricing. Future Gener. Comput. Syst. 60, 48–66 (2016)
24. Celaya, J.; Arronategui, U.: Fair scheduling of bag-of-tasks applications on large-scale platforms. Future Gener. Comput. Syst. 49, 28–44 (2015)
25. Anglano, C.; Canonico, M.: Scheduling algorithms for multiple bag-of-task applications on desktop grids: a knowledge-free approach. IEEE International Symposium on Parallel and Distributed Processing, pp. 1–8 (2008)
26. Wang, S.; Yan, K.; Liao, W.; Wang, S.: Towards a load balancing in a three-level cloud computing network. 3rd IEEE International Conference on Computer Science and Information Technology, Vol. 1, pp. 108–113 (2010)
27. Lin, Y.; Thai, M.; Wang, C.; Lai, Y.: Two-tier project and job scheduling for SaaS cloud service providers. J. Netw. Comput. Appl. 52, 26–36 (2015)
28. Xu, X.; Hu, H.; Hu, N.; Ying, W.: Cloud task and virtual machine allocation strategy in cloud computing environment. Netw. Comput. Inf. Secur. Commun. Comput. Inf. Sci. 345, 113–120 (2012)
29. Shi, T.; Yang, M.; Li, X.; Lei, Q.; Jiang, Y.: An energy-efficient scheduling scheme for time-constrained tasks in local mobile clouds. Pervasive Mob. Comput. 27, 90–105 (2016)
30. Xiong, Y.; Wan, S.; She, J.; Wu, M.; He, Y.; Jiang, K.: An energy-optimization-based method of task scheduling for a cloud video surveillance center. J. Netw. Comput. Appl. 59, 63–73 (2016)
31. Hosseinimotlagh, S.; Khunjush, F.; Samadzadeh: SEATS: smart energy-aware task scheduling in real-time cloud computing. J. Supercomput. 71, 45–66 (2015)
32. Abdullahi, M.; Ngadi, M.A.; Abdulhamid, S.M.: Symbiotic organism search optimization based task scheduling in cloud computing environment. Future Gener. Comput. Syst. 56, 640–650 (2016)
33. Cheng, M.; Prayogo, D.: Symbiotic organisms search: a new metaheuristic optimization algorithm. Comput. Struct. 139, 98–112 (2014)
34. Salman, A.; Ahmad, I.; Al-Madani, S.: Particle swarm optimization for task assignment problem. Microprocess. Microsyst. 26(8), 363–371 (2002)
35. Pandey, S.; Wu, L.; Guru, S.M.; Buyya, R.: A particle swarm optimization-based heuristic for scheduling workflow applications in cloud computing environments. 24th IEEE International Conference on Advanced Information Networking and Applications, pp. 400–407 (2010)
36. Ibarra, O.H.; Kim, C.E.: Heuristic algorithms for scheduling independent tasks on nonidentical processors. J. Assoc. Comput. Mach. 24(2), 280–289 (1977)
37. Sotomayor, B.; Montero, R.S.; Llorente, I.M.: Resource leasing and the art of suspending virtual machines. 11th IEEE International Conference on High Performance Computing and Communications, pp. 59–68 (2009)
38. Patel, D.K.; Tripathy, D.; Tripathy, C.R.: Survey of load balancing techniques for grid. J. Netw. Comput. Appl. 65, 103–119 (2016)
39. Braun Data Set. https://github.com/chgogos/hcsp/tree/master/512x16. Accessed on 31st March 2017
40. Salah, K.; Boutaba, R.: Estimating service response time for elastic cloud applications. IEEE 1st International Conference on Cloud Networking, pp. 12–16 (2012)
41. Salah, K.; Elbadawi, K.; Boutaba, R.: An analytical model for estimating cloud resources of elastic services. J. Netw. Syst. Manag. 24(2), 285–308 (2016)


42. Kafhali, S.E.; Salah, K.: Stochastic modelling and analysis of cloud computing data center. 20th Conference on Innovations in Clouds, Internet and Networks, IEEE, pp. 122–126 (2017)
43. Calyam, P.; Rajagopalan, S.; Seetharam, S.; Selvadhurai, A.; Salah, K.; Ramnath, R.: VDC-analyst: design and verification of virtual desktop cloud resource allocations. Comput. Netw. 68, 110–122 (2014)
44. Ahmed, E.; Naveed, A.; Hamid, S.H.A.; Gani, A.; Salah, K.: Formal analysis of seamless application execution in mobile cloud computing. J. Supercomput. 1–27 (2017). doi:10.1007/s11227-017-2028-4
45. Muller, K.E.; Fetterman, B.A.: Regression and ANOVA: An Integrated Approach Using SAS Software. SAS Publisher, Barhawar (2002)
