KEYWORDS
cloud computing, differential evolution, independent task, task scheduling
1 INTRODUCTION
With the rapid development of modern science and technology, cloud computing1 has been widely adopted in numerous applications to achieve
high processing capability while keeping costs under control. Especially in systems with a large number of compute-intensive tasks,2 such as intelligent
transportation systems,3,4 video surveillance systems,5 healthcare service systems,6,7 IoT systems,8 real-time simulation systems,9 and DNA motif finding
systems,10 there is a clear trend toward realizing various functionalities on cloud computing platforms composed of hardware and software
resources11 that are offered to consumers in a pay-as-you-go manner. Cloud computing is a high-level processing model built on parallel computing,
distributed computing, and grid computing, and it constantly provides meaningful services that allow numerous users to handle massive numbers of tasks.12,13
Although cloud computing brings many advantages to large-scale real-time systems, it also introduces serious resource allocation and task scheduling
problems. A cloud is an organic combination of heterogeneous and distributed resources,14 which are geographically distributed and share
computing services. The elasticity and scalability of cloud computing enable access to services whenever and wherever possible and eliminate
the need for consumers to maintain large amounts of permanent resources.15,16 Therefore, how to effectively perform task scheduling and
resource allocation17 to meet users' demands on cloud computing services, while maximizing resource utilization18 and system
revenue, is a complex and important issue in current cloud computing research.
Task scheduling in cloud computing is the process of adjusting resources among different users according to certain resource usage rules in a
specific cloud environment.19 It resolves the supply and demand constraint between application load requests and platform resource
provisioning capabilities,20,21 and it affects the benefits of both cloud providers and users. Cloud service providers are concerned with data center energy
consumption, resource load balancing, and system revenue, whereas cloud users pay more attention to the completion time and execution cost of tasks.
Generally, task scheduling in cloud computing consists of three layers: the physical resource layer, the virtual resource layer, and the task scheduler
layer. As shown in Figure 1, the physical resource layer is composed of the physical resources of clouds, such as computing resources, memory,
network, and storage.22,23 The virtual resource layer shields the heterogeneity of the underlying physical resources through virtualization
technologies such as server virtualization, storage virtualization, and network virtualization, and provides the higher layer with a unified
resource-access interface. The task scheduler layer stores the scheduling algorithms and allocates tasks to their most suitable resources. The
resource monitor observes the running status of physical and virtual resources and feeds this status back to the task scheduler
in time. According to the information captured by the resource monitor, tasks are allocated to appropriate virtual machines to balance the
workload.
In this article, we focus on the scheduling problem of independent tasks in cloud computing systems and propose an efficient approach
to allocate tasks to resources such that the execution times of tasks are reduced as much as possible. The main contributions of this article are
as follows:
1. Constraints and goals of the task scheduling problem in cloud computing are analyzed, and an exact formulation based on integer programming
is proposed to search the solution space and provide optimal solutions for the task scheduling problem.
2. Inspired by the differential evolution algorithm (DEA), a population-based approach is proposed to allocate tasks to their suitable resources
such that the total time cost is minimized. The proposed approach not only maintains the diversity of the population, but also increases the
probability of finding approximately optimal solutions.
Since DEAs can provide efficient solutions to optimization problems owing to their few control parameters, strong convergence performance,
and strong global search ability, they are widely adopted in multi-task scheduling problems. By improving the population initialization,
mutation, and selection operations of the standard DEA, the proposed approach avoids falling into local optima and can quickly obtain good
enough resource allocations for independent tasks.
The rest of this article is structured as follows. Section 2 reviews the research status of task scheduling in cloud computing. Section 3 models the
task scheduling problem and gives an exact formulation. Section 4 presents the improved DEA-based task scheduling method. Section 5 analyzes and
discusses the experimental results. Section 6 concludes this article.
2 RELATED WORK
Task scheduling is a core part of cloud platform resource management, and many studies have investigated it in depth. Current research on task
scheduling and resource allocation in cloud computing addresses the following issues: (1) analysis of mathematical task scheduling models, including
problem assumptions, constraints, and optimization goals, according to the characteristics of task sets; and (2) research on the application scope,
improvement, and integration of different types of scheduling algorithms, especially evolutionary algorithms, and their application to large-scale
task scheduling and resource allocation problems.
Braun et al.24 introduced the main heuristic algorithms for task scheduling in distributed computing systems and analyzed their advantages and
disadvantages. Xu et al.25 analyzed a random strategy and four greedy strategies for the cloud task scheduling problem and compared their
performance through extensive experiments. However, the task scenarios to which each algorithm applies were not described, and such
deterministic algorithms struggle to obtain optimal solutions for multi-task scheduling problems.
Based on the gravity search algorithm and the firefly algorithm, Neelima et al.26 proposed a heuristic approach named the hybrid
self-adaptive learning global search algorithm and firefly algorithm (HSLGSAFA) for scheduling deadline-constrained tasks. The
proposed adaptive strategy generated better individuals, but it had a large number of control parameters and complex settings. Several
experiments were needed to determine reasonable parameters, and the algorithm suffered from slow convergence and long iteration
cycles.
Pandey et al.27 used a particle swarm algorithm to solve the workflow scheduling problem for tasks with dependencies, which has the benefits
of few operators (as in intelligent data fusion algorithms28), simple implementation, and strong global search capability. However,
during execution the parameters of the proposed algorithm were fixed and lacked self-adaptation, so the algorithm could
not maintain a good balance between population diversity in the early stage and convergence ability in the later stage. Ergu et al.17
adopted an analytic hierarchy process (AHP) to analyze task weights combined with intrinsic characteristics of cloud tasks such as
bandwidth requirements, completion time, and task cost. They established a task priority matrix and allocated computing resources
according to the priorities of tasks. In many cases, however, clouds have to deal with a collection of independent tasks and cannot distinguish the priorities
of tasks.
Zuo et al.29 proposed a particle swarm optimization (PSO)-based adaptive learning task scheduling algorithm. This algorithm optimized PSO
in three aspects: initial population selection, speed update strategy selection, and fitness function design, and it can efficiently solve the
deadline-constrained task scheduling problem. However, the proposed algorithm requires a hybrid cloud platform, which limits its applicable scope.
Tsai et al.30 presented a modified DEA for the cloud computing environment based on a cost and time model. This algorithm integrated DEA with the
Taguchi method, which effectively improved the global search ability of DEA. However, convergence toward the optimal solution was slow and the
iteration period was too long.
3 SYSTEM MODEL
This article studies the scheduling problem of independent tasks in cloud computing, which belongs to the category of static task scheduling.31 Tasks
can be scientific computing tasks, data processing tasks, application simulation tasks, and so on. Tasks are scheduled non-preemptively: once
a task begins to execute, it runs until completion. Each task can be allocated to one and only one computing resource, and each resource handles at
most one task at any time. Many factors affect the execution time of tasks; for the sake of simplicity, we consider that a task's execution
time is determined only by the length of the task and the computing power of the computing resource.
T = {t1 , t2 , … , tM } is used to denote the set of M independent tasks in a given cloud. A specific task ti can be represented by
ti = <li , mti > (1 ≤ i ≤ M), where li and mti are the instruction length and memory requirement of task ti , measured in million instructions (MI) and
million bytes (MB), respectively.
In this article, we use VM = {vm1 , vm2 , … , vmN } to denote the set of virtual machines, and N is the number of virtual machines in
the cloud platform. A specific virtual machine vmj can be represented as a quadruple vmj = <cj , mj , bj , sj > (1 ≤ j ≤ N), where cj , mj , bj , and sj represent the
computing power, memory capacity, bandwidth capacity, and disk capacity of the virtual machine, respectively. The computing power is measured in
million instructions per second (MIPS) and the bandwidth in million bits per second (Mbps).
Tasks considered in this work are independent; that is, there is no precedence or data dependence between tasks. Since the tasks are
executed non-preemptively, their execution time depends only on the instruction lengths of tasks and the computing capabilities of virtual
machines. We use etc_{ij} to denote the expected execution time of task ti when it is allocated to virtual machine vmj ; then etc_{ij} can be
computed via
etc_{ij} = l_i / c_j    (1)
Hence, the expected execution time matrix32 for all tasks on the virtual machines can be described as follows:
ETC_{M \times N} =
\begin{bmatrix}
l_1/c_1 & l_1/c_2 & \cdots & l_1/c_N \\
l_2/c_1 & l_2/c_2 & \cdots & l_2/c_N \\
\vdots  & \vdots  & \ddots & \vdots  \\
l_M/c_1 & l_M/c_2 & \cdots & l_M/c_N
\end{bmatrix}    (2)
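As a small illustration of Equations (1) and (2), the sketch below builds the ETC matrix from task lengths and virtual machine computing powers. Only the formula etc_{ij} = l_i/c_j comes from the text; the numeric values are made up for illustration and are not the ones used in the paper's experiments.

```python
# Sketch: building the expected-execution-time (ETC) matrix of Equation (2).
# Task lengths (MI) and VM computing powers (MIPS) are illustrative values only.
task_lengths = [4000.0, 12000.0, 7500.0]        # l_i, in million instructions
vm_speeds = [1000.0, 2000.0, 500.0, 4000.0]     # c_j, in MIPS

# etc[i][j] = l_i / c_j : expected time of task t_i on virtual machine vm_j (seconds)
etc = [[l / c for c in vm_speeds] for l in task_lengths]

for i, row in enumerate(etc, start=1):
    print(f"t{i}:", [round(v, 2) for v in row])
```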
In this article, we assume that each task can be executed on one and only one virtual machine, and each virtual machine can hold at most one
task at any time. We use a boolean variable zij (1 ≤ i ≤ M, 1 ≤ j ≤ N) to indicate whether task ti is allocated to virtual machine vmj . Therefore,
z_{ij} =
\begin{cases}
1, & \text{if task } t_i \text{ is allocated to } vm_j; \\
0, & \text{otherwise}
\end{cases}    (3)
We use a binary decision variable rik (1 ≤ i, k ≤ M)33 to indicate whether task ti and task tk are allocated to the same virtual machine. If they are
allocated to the same virtual machine, the value of rik is 1, otherwise, the value of rik is 0, that is,
r_{ik} = \sum_{1 \le j \le N} z_{ij} \, z_{kj}, \quad 1 \le i \ne k \le M    (4)
Now we analyze the constraints of the multi-task scheduling problem. Since each task can be assigned to one and only one virtual machine,
z_{ij} satisfies the following constraint:
\sum_{1 \le j \le N} z_{ij} = 1, \quad 1 \le i \le M    (5)
In order to ensure the consistency and correctness of system execution, a task can be allocated to a virtual machine only if its memory requirement
is satisfied, which can be expressed as
z_{ij} \, mt_i \le m_j, \quad 1 \le i \le M, \; 1 \le j \le N
Meanwhile, we use st_i and ft_i to denote the start time and the finish time of task ti , respectively. Then st_i and ft_i can be computed via
st_i = \sum_{1 \le j \le N} \sum_{1 \le k < i} z_{ij} \, r_{ki} \, etc_{kj}, \quad 1 \le i \le M
ft_i = st_i + \sum_{1 \le j \le N} z_{ij} \, etc_{ij}, \quad 1 \le i \le M
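The recurrences above say that a task starts as soon as the lower-indexed tasks on the same virtual machine have finished. A minimal sketch of that computation follows; the function name, variable names, and sample numbers are illustrative and not taken from the paper.

```python
# Sketch of the start/finish-time recurrences above: tasks assigned to the same VM
# run back to back in index order. Names and values are illustrative.
def start_finish_times(assignment, etc):
    """assignment[i] = index of the VM that task i runs on; etc[i][j] = exec time."""
    ready = [0.0] * (max(assignment) + 1)   # time at which each VM becomes free
    st, ft = [], []
    for i, j in enumerate(assignment):
        st.append(ready[j])                 # st_i: total etc of earlier tasks on vm_j
        ft.append(ready[j] + etc[i][j])     # ft_i = st_i + etc_ij
        ready[j] = ft[i]
    return st, ft

etc = [[4.0, 2.0], [12.0, 6.0], [7.5, 3.75]]    # made-up execution times
print(start_finish_times([0, 1, 0], etc))        # tasks t1, t3 on vm1; t2 on vm2
```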
Since at most one task can be executed on a given virtual machine at any time, the execution times of tasks assigned to the same virtual machine
cannot overlap. In other words, for two tasks ti and tj allocated to the same virtual machine, ti either ends before tj begins to execute, or starts
after tj is finished, which can be expressed as
r_{ij} = 1 \;\Rightarrow\; ft_i \le st_j \ \text{or} \ ft_j \le st_i, \quad 1 \le i \ne j \le M
The optimization goal of the scheduling problem studied in this article is to minimize the makespan (i.e., the maximum completion time of tasks).
We use M(T) to denote the makespan of tasks in the set T; then
M(T) = \max_{1 \le i \le M} ft_i
With the above constraints and objective function, the task scheduling problem studied in this article can be formulated as the following integer
programming (IP) model:
\min \; M(T), subject to the constraints above.
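To make the formulation concrete, the sketch below encodes a simplified version of the IP model. Because tasks on one virtual machine run back to back, the makespan equals the largest per-VM sum of execution times, which lets the explicit start/finish-time variables be dropped. The PuLP modeling library and the CBC solver are only illustrative choices; the paper does not prescribe a solver, and the data values are made up.

```python
# Sketch of the integer-programming formulation (assignment constraint + makespan
# objective), using PuLP as an illustrative modeling library.
import pulp

etc = [[4.0, 2.0, 8.0],       # etc[i][j]: made-up execution times
       [12.0, 6.0, 24.0],
       [7.5, 3.75, 15.0]]
M, N = len(etc), len(etc[0])

prob = pulp.LpProblem("task_scheduling", pulp.LpMinimize)
z = pulp.LpVariable.dicts("z", (range(M), range(N)), cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan                                          # objective: min M(T)
for i in range(M):                                        # Equation (5): one VM per task
    prob += pulp.lpSum(z[i][j] for j in range(N)) == 1
for j in range(N):                                        # makespan bounds every VM's load
    prob += pulp.lpSum(z[i][j] * etc[i][j] for i in range(M)) <= makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan =", pulp.value(makespan))
print([[int(z[i][j].value()) for j in range(N)] for i in range(M)])
```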
4 DEA-BASED TASK SCHEDULING ALGORITHM
In this section, a DEA-based approach is proposed to allocate tasks to their suitable resources such that the total time cost of tasks is
minimized.
4.1 Standard differential evolution algorithm
DEA, an evolutionary algorithm,34 was proposed by Storn and Price in 1995 on the basis of evolutionary ideas such as genetic
algorithms to solve the Chebyshev polynomial problem. DEA is a heuristic random search method based on differences among individuals in the
population. Its advantages lie in its strong global search capability, few control parameters, ease of understanding and implementation, and strong
convergence and robustness. DEA contains four standard operations: population initialization, mutation, crossover, and selection.
4.1.1 Population initialization
The population in DEA consists of Ps individuals and can be represented as X_i^0 = (x_{i,1}^0, x_{i,2}^0, \ldots, x_{i,n}^0), where 1 ≤ i ≤ Ps. Each individual is composed of n
dimensions, where n is the number of tasks. Each dimension of a given individual is chosen from the interval (x_j^-, x_j^+), and the value of the jth dimension
of the ith individual can be calculated by
x_{i,j}^0 = x_j^- + rand(0,1) \cdot (x_j^+ - x_j^-)
The value of Ps ranges from 5n to 10n. All individuals of the population then go through three steps of exploration, namely mutation, crossover, and
selection.
4.1.2 Mutation
The most basic variation component of DEA is the difference vector of parents; each difference vector is formed from two different individuals
of the parent population. According to how the parent individuals are chosen, DEA has a variety of mutation
strategies.35 When the population evolves to generation G, for each target vector X_i^G, DEA uses one of the following mutation strategies to generate a
variant individual V_i^G through a differential mutation operation:
V_i^G = X_{r1}^G + F \cdot (X_{r2}^G - X_{r3}^G)    (8)
V_i^G = X_{best}^G + F \cdot (X_{r1}^G - X_{r2}^G)    (9)
V_i^G = X_{best}^G + F \cdot (X_{r1}^G + X_{r2}^G - X_{r3}^G - X_{r4}^G)    (10)
Equation (8) is the most commonly used mutation strategy, in which X_{r1}^G, X_{r2}^G, and X_{r3}^G are randomly selected from the population, and r1 ≠ r2 ≠ r3 ≠ i.
As a scaling factor, F is an important parameter that controls the convergence and diversity of the population; its value generally lies between 0 and 2.
The difference vector of X_{r2}^G and X_{r3}^G acts on the base vector X_{r1}^G. Since X_{r1}^G, X_{r2}^G, and X_{r3}^G are randomly selected, the search scope of DEA under
this mutation strategy is wide. The diversity of the population is preserved to the maximum degree, and the probability of finding the global optimal
solution is high. Of course, because of the large search range, the convergence rate of DEA is slow at the beginning of evolution.
Equations (9) and (10) introduce the optimal solutions of the parent population to guide the search direction, so the generation of the variant
individual is constrained by the optimal solution. The search expands around the optimal solution, which improves the convergence performance
of the algorithm. However, because the base vector is selected in a targeted way, the diversity of the population is insufficient, which makes DEA
prone to falling into a local optimum.
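A compact sketch of the three mutation strategies of Equations (8)-(10) for a real-valued population is given below; the population is a plain list of Python lists and the helper names are illustrative, not part of the paper.

```python
# Sketch of the mutation strategies of Equations (8)-(10) for real-valued DEA.
import random

def mutate_rand_1(pop, i, F):                      # Equation (8): DE/rand/1
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    return [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(len(pop[i]))]

def mutate_best_1(pop, i, F, best):                # Equation (9): DE/best/1
    r1, r2 = random.sample([k for k in range(len(pop)) if k != i], 2)
    return [pop[best][j] + F * (pop[r1][j] - pop[r2][j]) for j in range(len(pop[i]))]

def mutate_best_2(pop, i, F, best):                # Equation (10): DE/best/2
    r1, r2, r3, r4 = random.sample([k for k in range(len(pop)) if k != i], 4)
    return [pop[best][j] + F * (pop[r1][j] + pop[r2][j] - pop[r3][j] - pop[r4][j])
            for j in range(len(pop[i]))]
```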
4.1.3 Crossover
In order to increase the diversity of the population, DEA conducts a crossover operation on the variant individual V_i^G and the target individual X_i^G
to generate the trial individual U_i^G. The genes of U_i^G come from V_i^G and X_i^G, as decided by the crossover probability factor CR, which lies in the
interval [0, 1]:
U_{i,j}^G =
\begin{cases}
V_{i,j}^G, & \text{if } rand_j(0,1) \le CR \ \text{or} \ j = k; \\
X_{i,j}^G, & \text{otherwise}
\end{cases}
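The binomial crossover above can be sketched in a few lines; the function name is illustrative, and index k is drawn once per individual so that at least one gene comes from the variant.

```python
# Sketch of binomial crossover: each gene of the trial individual is taken from the
# variant with probability CR, and gene k is always taken from the variant.
import random

def crossover(target, variant, CR):
    n = len(target)
    k = random.randrange(n)                        # guarantees one gene from the variant
    return [variant[j] if (random.random() <= CR or j == k) else target[j]
            for j in range(n)]
```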
4.1.4 Selection
Similar to other evolutionary algorithms, DEA adopts a survival-of-the-fittest selection operation to ensure that the algorithm evolves toward an
optimal solution. DEA evaluates the fitness values of the trial individual U_i^G and the target individual X_i^G, then picks the individual of the next
generation according to the following equation:
X_i^{G+1} =
\begin{cases}
U_i^G, & \text{if } f(U_i^G) \le f(X_i^G); \\
X_i^G, & \text{otherwise}
\end{cases}    (11)
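For completeness, Equation (11) reduces to a one-line greedy comparison; the function name and the assumption that fitness is minimized are illustrative.

```python
# Sketch of the greedy selection of Equation (11): the trial individual replaces the
# target only if its (minimized) fitness is no worse.
def select(target, trial, fitness):
    return trial if fitness(trial) <= fitness(target) else target
```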
Through the above mutation, crossover, and selection operations, the population evolves to the next stage. DEA continues this iterative process
until the optimal solution of the population reaches a predetermined error precision or the number of iterations reaches the predetermined
upper limit.
4.2 Modified DEA-based scheduling algorithm
DEA is an effective method for solving complex optimization problems, with the advantages of strong global search capability and rapid convergence.
Since the scheduling problem of independent tasks is an integer programming problem, its solution space is huge and hard to search completely. The
standard DEA has drawbacks when applied to the task scheduling and resource allocation problem: it is prone to trapping in local optima and to
premature convergence, so it is not suitable to use DEA directly for cloud task scheduling.
To solve the above problems, this article proposes an efficient modified DEA-based scheduling algorithm (MDEA-SA), in which the standard
DEA is improved in three aspects: population initialization, a new mutation strategy, and adaptive adjustment of parameters.
According to the description of cloud task scheduling in Section 3, the proposed algorithm deals with the mapping between tasks and virtual
machines, represented as T = {t1 , t2 , … , tn } and VM = {vm1 , vm2 , … , vmm }, respectively. An individual of the population can be represented
as an n-dimensional vector, each element of which indicates the virtual machine selected to execute the corresponding task. For example, assume that the cloud has
five tasks and three virtual machines, and the ith individual of the population is Xi = (vm2 , vm1 , vm3 , vm1 , vm3 ). Then, according to the individual
Xi , task t1 is allocated to vm2 , tasks t2 and t4 are allocated to vm1 , and virtual machine vm3 holds and executes tasks
t3 and t5 .
When MDEA-SA is adopted to generate the task execution sequence for virtual machines, an evaluation function needs to be defined to calculate
the fitness values of individuals. From Section 3, the major concern of cloud task scheduling is the completion time of tasks:
individuals with smaller makespan values have better fitness, that is, they are the elites of the population.
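The sketch below illustrates this encoding and the makespan fitness: an individual is a list of virtual machine indices, one per task, and its fitness is the largest summed execution time over all virtual machines. The helper name, the 0-based indexing, and the etc values are illustrative assumptions; only the encoding idea and the example assignment come from the text.

```python
# Sketch of the MDEA-SA encoding and fitness evaluation. An individual maps each task
# to a VM; its fitness is the makespan over all VMs. Values below are made up.
def makespan(individual, etc):
    """individual[i] = index of the VM that task t_(i+1) is allocated to."""
    n_vms = len(etc[0])
    load = [0.0] * n_vms
    for task, vm in enumerate(individual):
        load[vm] += etc[task][vm]           # tasks on one VM execute sequentially
    return max(load)

# Example from the text: 5 tasks, 3 VMs, X_i = (vm2, vm1, vm3, vm1, vm3)
etc = [[2.0, 1.0, 4.0], [3.0, 1.5, 6.0], [1.0, 0.5, 2.0],
       [4.0, 2.0, 8.0], [2.5, 1.25, 5.0]]   # illustrative execution times
print(makespan([1, 0, 2, 0, 2], etc))       # 0-based VM indices
```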
The size and diversity of the initial population X greatly affect the performance of MDEA-SA. If the size is too small, the population quickly loses
its diversity, falls into a local optimum, and converges prematurely. If the population size is too large, although the search capacity of
the population increases, the convergence rate becomes much slower. Extensive experiments have shown that a population size of 5 to 10 times the
problem dimension is recommended.
In order to enhance the diversity of the initial population, a swap strategy is added to the original random distribution scheme: each initial
individual is iterated several times by randomly selecting two dimensions and exchanging their components. The population initialization strategy is shown
in Algorithm 1.
Algorithm 1. Population initialization strategy
1: for i = 1 to Ps do
2:   Create an initial individual X_i^0 = (x_{i,1}^0, x_{i,2}^0, \ldots, x_{i,n}^0)
3:   for j = 1 to n do
4:     Generate two random numbers r1 and r2, where 1 ≤ r1 ≠ r2 ≤ n
5:     Swap the values at positions r1 and r2
6:   end for
7:   Obtain an initial individual X_i^0 = (x_{i,1}^0, x_{i,2}^0, \ldots, x_{i,n}^0)
8: end for
9: Obtain Ps feasible individuals
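A minimal sketch of Algorithm 1 follows: each individual starts as a random task-to-VM mapping and is then perturbed by repeatedly exchanging two randomly chosen dimensions. Parameter names and the number of swap iterations are illustrative assumptions.

```python
# Sketch of the swap-based population initialization of Algorithm 1.
import random

def init_population(Ps, n_tasks, n_vms, n_swaps=None):
    n_swaps = n_swaps if n_swaps is not None else n_tasks   # assumed: n swaps per individual
    population = []
    for _ in range(Ps):
        individual = [random.randrange(n_vms) for _ in range(n_tasks)]
        for _ in range(n_swaps):                             # extra shuffling for diversity
            r1, r2 = random.sample(range(n_tasks), 2)
            individual[r1], individual[r2] = individual[r2], individual[r1]
        population.append(individual)
    return population

pop = init_population(Ps=10, n_tasks=5, n_vms=3)
```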
The mutation operation is one of the core steps of DEA and is also the key difference between DEA and other evolutionary algorithms. The mutation
strategy determines the evolution direction of the population. Since the task scheduling problem in cloud computing can be formulated as an integer
programming problem, a dimension component is simply the identifier of a resource, and arithmetic on such components is not effective in directing
the search toward better solutions. Therefore, considering the discrete nature of the search space, this article adopts a jumping mutation strategy to
drive population evolution.
The jumping mutation strategy37 is similar to Equation (9): variant individuals are determined by X_{best}^G, X_{r1}^G, and X_{r2}^G, and the diversity of the
population is maintained by X_{r1}^G and X_{r2}^G. The three parameters F1, F2, and F3 represent the probabilities of randomly jumping toward the best
individual X_{best}^G and the random individuals X_{r1}^G and X_{r2}^G, respectively. Using this jumping mutation strategy, the variant individual V_i^G is
synthesized from X_{best}^G, X_{r1}^G, and X_{r2}^G.
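One plausible reading of this strategy is sketched below: each dimension of the variant is copied from the best individual with probability F1, or from one of the two random individuals with probabilities F2 and F3 (with F1 + F2 + F3 = 1). The exact formula of the paper did not survive extraction, so this per-dimension interpretation is an assumption, and the function name is illustrative.

```python
# Assumed sketch of the jumping mutation: each dimension of V_i jumps toward X_best,
# X_r1, or X_r2 with probabilities F1, F2, and F3 = 1 - F1 - F2.
import random

def jumping_mutation(x_best, x_r1, x_r2, F1, F2):
    variant = []
    for j in range(len(x_best)):
        u = random.random()
        if u < F1:
            variant.append(x_best[j])       # jump toward the best individual
        elif u < F1 + F2:
            variant.append(x_r1[j])         # jump toward the first random individual
        else:
            variant.append(x_r2[j])         # jump toward the second random individual
    return variant
```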
Compared with other evolutionary algorithms, DEA has few control parameters, and both the jumping factors F1, F2, F3 and the crossover
probability factor CR have a great influence on the performance of the algorithm. The jumping factors determine the degree to which the
differential variation disturbs the base vector. When F1 decreases and both F2 and F3 increase, the diversity of the population
becomes larger and the search ability becomes stronger; conversely, the probability of the population converging to a local optimal solution
increases.
CR also has a great influence on the diversity of the population. When CR is small, the proportion of the mutant individual in the trial
individual is also small, which helps maintain the diversity of the population. In order to simplify the parameter adaptive adjustment
strategy, we suppose that F2 and F3 take the same value. The values of F1 and CR are chosen from [Fmin, Fmax] and [CRmin, CRmax],
respectively. We use Gmax and gen to denote the total number of iterations and the current iteration number; F1, F2, F3, and CR are then
computed via
F_2 = F_3 = (1 - F_1)/2
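The schedules for F1 and CR were lost in extraction, so the sketch below uses an assumed linear interpolation between the bounds named in the text, growing F1 and CR with the iteration count so that exploration dominates early and exploitation later; only F2 = F3 = (1 - F1)/2 comes from the paper.

```python
# Assumed adaptive parameter schedule consistent with the description above.
# The linear interpolation and the default bounds are illustrative, not the paper's.
def adapt_parameters(gen, Gmax, Fmin=0.2, Fmax=0.8, CRmin=0.3, CRmax=0.9):
    progress = gen / Gmax
    F1 = Fmin + (Fmax - Fmin) * progress       # more diversity early, more exploitation late
    F2 = F3 = (1.0 - F1) / 2.0                 # relation given in the text
    CR = CRmin + (CRmax - CRmin) * progress
    return F1, F2, F3, CR
```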
When the number of iterations reaches the predefined maximum number of generations Gmax, the modified DEA-based scheduling algorithm
stops, and an approximate optimal solution is obtained. Algorithm 2 lists the pseudo-code of the scheduling algorithm proposed in this
section.
5 EXPERIMENTAL RESULTS
In this section, we compare the proposed MDEA-SA approach with four existing algorithms: DEA-SA, PSO-SA, Min-min, and Max-min. We
conduct experiments under the same conditions and compare the performance of the five algorithms. DEA-SA is based on the standard DEA
with the standard strategies; PSO-SA is based on standard particle swarm optimization; and Min-min and Max-min are traditional and classic
deterministic algorithms for multi-task scheduling problems. In our experiments, 100 independent tasks and 10 virtual machines are used,
and their parameters are shown in Tables 1 and 2. Each test is repeated 100 times, and each data point in this section denotes
the average of the results over 100 instances.
According to Tables 1 and 2, the execution times of tasks on virtual machines can be obtained, as shown in Table 3. For the sake of simplicity,
only the execution times of tasks t1 to t5 on virtual machines vm1 to vm5 are shown here.
For the evolutionary algorithms, the population size, iteration period, and run number are set to 100, 1000, and 20, respectively. The other
control parameters of the evolutionary algorithms are set as in Table 4.
Task 1 2 3 4 5 6 7 8 9 10
t1-t10 2.15 1.99 1.33 1.83 1.22 0.35 2.9 1.55 2.09 2.53
t11-t20 0.32 1.12 1.98 1.81 2.47 2.06 1.46 0.71 1.07 2.77
t21-t30 1.32 0.51 2.91 2.47 1.04 0.51 0.33 2.02 1.27 1.08
t31-t40 0.31 2.37 0.68 0.73 1.08 2.24 2.44 2.08 0.61 1.44
t41-t50 1.21 2.66 1.64 2.22 2.31 0.45 2.14 1.28 0.78 1.32
t51-t60 2.77 2.29 1.28 2.82 0.78 0.60 0.36 0.73 1.91 2.55
t61-t70 1.52 2.45 2.74 1.88 2.19 0.39 2.27 2.70 0.99 2.00
t71-t80 1.60 2.19 1.32 2.30 2.86 0.34 0.83 2.65 0.33 2.45
t81-t90 1.45 2.12 2.18 2.77 1.57 2.29 0.32 1.57 1.39 2.64
t91-t100 2.07 0.39 0.3 2.80 1.61 2.14 1.79 0.42 0.81 2.11
In this section, the number of tasks varies from 10 to 100, and each instance is repeated 100 times. The makespans of tasks obtained by the different
scheduling algorithms are shown in Table 5 and Figure 2.
From Table 5 and Figure 2, we can see that our approach achieves a smaller makespan than the others, regardless of the number of tasks
tested. When 100 tasks are scheduled on 10 virtual machines, the makespan obtained by our approach is 18, which is 1.96%, 1.37%,
0.88%, and 8.12% less than that of DEA-SA, PSO-SA, Max-min, and Min-min, respectively. The experimental results show the effectiveness of our
algorithm.
In order to verify the convergence of MDEA-SA, the results of MDEA-SA and DEA-SA over the iterations are analyzed and compared
when 100 tasks are scheduled on 10 virtual machines. The convergence of MDEA-SA and DEA-SA is shown in Figure 3.
As can be seen from Figure 3, MDEA-SA has better convergence performance and stabilizes faster. MDEA-SA increases the diversity
of the population by adding a swap strategy when the initial population is generated, so the initial population of MDEA-SA contains better individuals.
Meanwhile, MDEA-SA uses the jumping mutation strategy and adjusts the control parameters in time, which greatly accelerates the convergence
speed.
In order to verify the load balance of the virtual machines when tasks are scheduled by the above five algorithms, the minimum and
maximum execution times of the virtual machines are shown in Figure 4.
As can be seen from Figure 4, the difference in execution times obtained by Max-min is the smallest, because the fairness of Max-min is optimal.
Our approach has the second-best performance in terms of load balance: the difference between the minimum and maximum
execution times obtained by our approach is 1.2, which is 4.9, 4.0, and 8.7 times smaller than that obtained by DEA-SA, PSO-SA, and Min-min, respectively.
These results show that MDEA-SA achieves a quite small makespan while preserving the load balance of the virtual machines.
6 CONCLUSIONS
This article studies the scheduling problem of independent tasks in the cloud environment and proposes an improved DEA-based scheduling algorithm
to allocate tasks to their appropriate virtual machines. Since the traditional DEA is not suitable for task scheduling in cloud computing, we improve
DEA in the aspects of initial population generation, mutation strategy, and parameter adjustment. The proposed approach increases the diversity of
the population by adopting an exchange strategy and increases the probability of finding approximately optimal solutions. Experiments show that the
proposed approach achieves better performance in terms of makespan, convergence, and load balance.
ACKNOWLEDGMENTS
This research is supported by National Key Research and Development Program No. 2017YFB1001900 and No. 2018YFB2101304, Defense
Industrial Technology Development Program No. JCKY2016607B006, the Special Civil Aircraft Research Program No. MJ-2017-S-39, and the
Fundamental Research Funds for the Central Universities.
ORCID
Jinchao Chen https://orcid.org/0000-0001-6234-1001
REFERENCES
1. Buyya R, Yeo CS, Venugopal S, Broberg J, Brandic I. Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the
5th utility. Futur Gener Comput Syst. 2009;25(6):599-616.
2. Zhang M, Zhang D, Yao H, Zhang K. A probabilistic model of human error assessment for autonomous cargo ships focusing on human autonomy
collaboration. Saf Sci. 2020;130:104838.
3. Chen Z, Zhang Y, Wu C, Ran B. Understanding individualization driving states via latent dirichlet allocation model. IEEE Intell Transp Syst Mag.
2019;11(2):41-53.
4. Jiang C, Li R, Chen T, Xu C, Li L, Li S. A two-lane mixed traffic flow model with drivers’ intention to change lane based on cellular automata. Int J Bio-Inspir
Comput. 2020;6(4):29-240.
5. Chen Z, Cai H, Zhang Y, et al. A novel sparse representation model for pedestrian abnormal trajectory understanding. Expert Syst Appl. 2019;138:112753.
6. Liang H, Zou J, Zuo K, Khan MJ. An improved genetic algorithm optimization fuzzy controller applied to the wellhead back pressure control system. Mech
Syst Signal Process. 2020;142:106708.
7. Wu X, Zhang Y, Wang A, Shi M, Wang H, Liu L. MNSSp3: medical big data privacy protection platform based on Internet of Things. Neural Comput Appl.
2020. https://doi.org/10.1007/s00521-020-04873-z.
8. Liu X, Xiao Z, Zhu R, Wang J, Liu L, Ma M. Edge sensing data-imaging conversion scheme of load forecasting in smart grid. Sustain Cities Soc.
2020;62:102363.
9. Chen J, Du C, Han P, Du X. Real-time digital simulator for distributed systems. Simulat Trans Soc Model Simulat Int. 2021. https://doi.org/10.1177/0037549720986865.
10. Wu X, Wei Y, Mao Y, Wang L. A differential privacy DNA motif finding method based on closed frequent patterns. Clust Comput. 2018;22(21):2907-2919.
11. Chen J, Du C, Xie F, Lin B. Scheduling non-preemptive tasks with strict periods in multi-core real-time systems. J Syst Archit. 2018;90:72-84.
12. Han P, Du C, Chen J, Ling F, Du X. Cost and makespan scheduling of workflows in clouds using list multiobjective optimization technique. J Syst Archit.
2021;112:101837.
13. Wu X, Wang H, Wei D, Shi M. ANFIS with natural language processing and gray relational analysis based cloud computing framework for real time energy
efficient resource allocation. Comput Commun. 2020;150:122-130.
14. Wadhonkar A, Theng D. A survey on different scheduling algorithms in cloud computing. Paper presented at: Proceedings of the 2016 2nd International
Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB). Chennai, India; 2016:665-669. https://doi.org/10.1109/AEEICB.2016.7538374.
15. Han P, Du C, Liu Y, Chen J, Du X. An efficient differential evolution algorithm for task scheduling in heterogeneous cloud systems. Paper presented
at: Proceedings of the 2019 IEEE 3rd Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC).
Chongqing, China; 2019:1578-1582. https://doi.org/10.1109/IMCEC46724.2019.8984085.
16. Liang H, Zou J, Li Z, Khan MJ, Lu Y. Dynamic evaluation of drilling leakage risk based on fuzzy theory and PSO-SVR algorithm. Futur Gener Comput Syst.
2019;95:454-466.
17. Ergu D, Kou G, Peng Y, Shi Y, Shi Y. The analytic hierarchy process: task scheduling and resource allocation in cloud computing environment. J Supercomput.
2013;64(3):835-848.
18. Jafarnejad GE, Masoud RA, Nasih QN. Load-balancing algorithms in cloud computing: a survey. J Netw Comput Appl. 2017;88:50-71.
19. Madni SHH, Latiff MSA, Coulibaly Y, Abdulhamid SM. Resource scheduling for infrastructure as a service (IaaS) in cloud computing. J Netw Comput Appl.
2016;68:173-200.
20. Chen J, Du C, Xie F, Yang Z. Schedulability analysis of non-preemptive strictly periodic tasks in multi-core real-time systems. Real-Time Syst.
2016;52(3):239-271.
21. Chen J, Du C, Han P. Scheduling independent partitions in integrated modular avionics systems. PLoS One. 2016;11(12):e0168064.
22. Chen J, Du C, Han P, Du X. Work-in-progress: non-preemptive scheduling of periodic tasks with data dependency upon heterogeneous multiprocessor
platforms. Paper presented at: Proceedings of the 2019 IEEE Real-Time Systems Symposium (RTSS). Hong Kong, China; 2019:540-543. https://doi.org/10.1109/RTSS46320.2019.00059.
23. Chen J, Du C, Han P, Zhang Y. Sensitivity analysis of strictly periodic tasks in multi-core real-time systems. IEEE Access. 2019;7:135005-135022.
24. Braun TD, Siegel HJ, Beck N, et al. A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed
computing systems. J Parall Distrib Comput. 2001;61(6):810-837.
25. Xu X, Hu H, Hu N, Ying W. Cloud task and virtual machine allocation strategy in cloud computing environment. Network Computing and Information
Security; Berlin, Germany: Springer Berlin Heidelberg; 2012:113-120.
26. Neelima P, Rama Mohan Reddy A. An efficient hybridization algorithm based task scheduling in cloud environment. J Circuits Syst Comput.
2018;27(02):1-25.
27. Pandey S, Wu L, Guru SM, Buyya R. A particle swarm optimization-based heuristic for scheduling workflow applications in cloud computing environ-
ments. Paper presented at: Proceedings of the 2010 24th IEEE International Conference on Advanced Information Networking and Applications. Perth,
WA, Australia; 2010:400-407. https://doi.org/10.1109/AINA.2010.31.
28. Liu X, Zhu R, Anjum A, Wang J, Zhang H, Ma M. Intelligent data fusion algorithm based on hybrid delay-aware adaptive clustering in wireless sensor
networks. Futur Gener Comput Syst. 2020;104:1-14.
29. Zuo X, Zhang G, Wei T. Self-adaptive learning PSO-based deadline constrained task scheduling for hybrid IaaS cloud. IEEE Trans Automat Sci Eng.
2014;11(2):564-573.
30. Tsai JT, Fang JC, Chou JH. Optimized task scheduling and resource allocation on cloud computing environment using improved differential evolution
algorithm. Comput Oper Res. 2013;40(12):3045-3055.
31. Amiri M, Mohammad-Khanli L. Survey on prediction models of applications for resources provisioning in cloud. J Netw Comput Appl.
2017;82(Mar):93-113.
32. Li K, Wang J. Multi-objective optimization for cloud task scheduling based on the ANP model. Chin J Electron. 2017;26(5):889-898.
33. Yuan H, Bi J, Tan W, Zhou M, Li BH, Li J. TTSA: an effective scheduling approach for delay bounded tasks in hybrid clouds. IEEE Trans Cybern.
2017;47(11):3658-3668.
34. Qin AK, Huang VL, Suganthan PN. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans Evol Comput.
2009;13(2):398-417.
35. Islam SM, Das S, Ghosh S, Roy S, Suganthan PN. An adaptive differential evolution algorithm with novel mutation and crossover strategies for global
numerical optimization. IEEE Trans Syst Man Cybern B. 2012;42(2):482-500.
36. Teylo L, Paula UD, Frota Y, Oliveira DD, Drummond LMA. A hybrid evolutionary algorithm for task scheduling and data assignment of data-intensive scientific
workflows on clouds. Futur Gener Comput Syst. 2017;76(11):1-17.
37. Ismail A, Jeng DS. Self-evolving neural network based on PSO and JPSO algorithms. Int J Civil Environ Eng. 2012;6(9):723-729.
How to cite this article: Chen J, Han P, Liu Y, Du X. Scheduling independent tasks in cloud environment based on modified
differential evolution. Concurrency Computat Pract Exper. 2021;e6256. https://doi.org/10.1002/cpe.6256