Article
Multi-Objective and Parallel Particle Swarm Optimization
Algorithm for Container-Based Microservice Scheduling
Xinying Chen * and Siyi Xiao
virtualization. Compared with a virtual machine, a Docker container has lower resource consumption, is simpler, and can be deployed faster. Current mainstream container management tools
include Docker Swarm [7], Apache Mesos [8], and Google Kubernetes [9]. Despite the
rapid development of these technologies and a certain number of practical container-based
microservice scheduling solutions, there are still some important issues that need to be
resolved in container-based microservice scheduling.
Three scheduling strategies are commonly used in the currently popular container
cluster management tool Docker Swarm [10]: Spread, Binpack, and Random. In the
Kubernetes scheduler, there are two scheduling phases: the predicate phase and the priority phase. These
two management tools only focus on the use of physical resources, ignoring other aspects
such as network overhead and cluster load balancing. An effective scheduling scheme
should be more comprehensive, such that the allocation of computing resources and storage
resources of physical nodes is more effective. While realizing cluster load balancing, local
load balancing should also be realized. To achieve this, comprehensive consideration of
service reliability and network transmission overhead is required. Further research is
needed to create such a scheduling method.
The container-based microservice scheduling problem is a typical NP-hard problem.
At present, researchers have applied a variety of methods to solve the virtual machine scheduling problem in cloud computing. Daniel Guimaraes Lago et al. [11] have proposed a
container-based microservice scheduling algorithm based on resource type awareness. This
algorithm includes two parts: The first finds the optimal deployment of physical machines
for the container, and the other reduces the network transmission power consumption.
Mosong Zhou et al. [12] have inferred task resource requirements based on similar task
runtime information and proposed a fine-grained resource scheduling method. Carlos
Guerrero et al. [13] have proposed a genetic algorithm approach with the aim of finding
a suitable solution to address the problem of container allocation and elasticity using the
NSGA-II. Lin Miao et al. [14] have proposed a multi-objective optimization model for
container-based microservice scheduling with the aim of solving the scheduling problem
using an ant colony algorithm. Nguyen Dinh Nguyen et al. [15] aimed to overcome the
bottleneck problem through use of a leader election algorithm, which functions by evenly
distributing the leaders throughout the nodes in a cluster. Salman Taherizadeh et al. [16]
considered the specific quality of service (QoS) trade-offs and proposed an innovative
capillary computing architecture.
These methods can solve the container-based microservice scheduling problem, to
some extent; however, most of them can only achieve cluster load balancing, and cannot
achieve local load balancing. These methods are prone to uneven use of resources within
the node, resulting in unreasonable container allocation, which leads to increased trans-
mission overhead and reduced reliability. At the same time, these methods suffer from
slow optimization speeds and can easily fall into local optima. In order to solve these
problems, we first re-design the representation of the scheduling scheme. Then, in order to reduce transmission overhead, improve cluster reliability, and improve load balancing, three target models are proposed. Finally, a parallel particle swarm optimization algorithm [17]
is used, in order to solve the multi-objective optimization problem of container-based
microservice scheduling.
The main contributions of this paper are as follows.
• First, we establish three new optimization target models for the container-based
microservice scheduling problem: the network transmission cost model between
microservices, the global and local load balancing model, and the service reliability
model. The optimization target model proposed in this paper can solve the above-
mentioned problems that exist in current methods, at least to a certain extent.
• Second, a new representation of the scheduling scheme and particle is proposed to
increase the searching speed. Based on this representation, a new particle swarm
updating method is proposed, which preserves the diversity of particles while also
approaching the optimal solution during the optimization process.
Sensors 2021, 21, 6212 3 of 28
• Finally, a parallel particle swarm optimization algorithm is used to solve the multi-
objective optimization problem of container-based microservice scheduling. The algo-
rithm utilizes Pareto-optimal theory to select the individual extremum and the global
extremum of the particle swarm to improve the optimization efficiency of the algo-
rithm. At the same time, parallel computing is used to improve the solution speed of
the algorithm. Through inter-process communication, particle swarms can exchange
optimal solutions with each other, thus improving the efficiency of the global search
and allowing the particle swarms to avoid falling into local optima.
The rest of this paper is structured as follows. Section 2 briefly introduces related
technologies. Section 3 proposes the three optimization objective models. Section 4 intro-
duces the multi-objective optimization parallel particle swarm optimization algorithm for
container-based microservice scheduling (MOPPSO-CMS) in detail. Section 5 provides the
experimental comparison and analysis, and concludes the paper.
2. Related Technologies
This section introduces the techniques and theories used in this paper.
[Figure: flowchart of the basic particle swarm optimization process: Start → Initialize → Evaluate Particle → Update Particle → if the stop condition is satisfied → End.]
[Figure: an example microservice application, showing microservices ms1–ms5 and the number of container instances of each (1–3 instances per microservice).]
where xi,j represents whether the container of microservice i is allocated to physical node j.
If the container of microservice i is allocated to physical node j, then xi,j = 1; otherwise,
xi,j = 0. In any physical node, there can be at most one container instance from the same
microservice [14]. This model uses the average network distance of all the container pairs
between consumer and provider microservices to calculate the data transmission overhead
between two microservice containers.
The model of network transmission overhead in GA-MOCA [13] is defined as follows:
$$\mathrm{ServiceMeanDistance}(ms_i) = \frac{\sum_{\forall cont_k \equiv ms_i} \left( \sum_{\forall cont_{k'} \equiv ms_{i'} \,\mid\, (ms_{i'},\, ms_i)\, prov/cons} dist_{alloc(cont_k),\, alloc(cont_{k'})} \right)}{|cont_k| \times |cont_{k'}|}, \qquad (5)$$
where cont_k ≡ ms_i means that container cont_k encapsulates/executes microservice ms_i, alloc(cont_k/ms_i) = pm_j means that physical machine pm_j hosts microservice ms_i/container cont_k, and |cont_k| is the total number of containers. This model approximates the network
overhead between the microservices of an application using the average network distance
between all the pairs of consumer and provider containers.
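For illustration, the mean-distance computation of Equation (5) can be sketched as follows; the container lists, the alloc mapping, and the dist table are hypothetical data structures, not the paper's actual implementation:

```python
from itertools import product

def service_mean_distance(provider_conts, consumer_conts, alloc, dist):
    # Average network distance over all (provider, consumer) container
    # pairs, in the spirit of GA-MOCA's ServiceMeanDistance (Eq. 5).
    # `alloc` maps a container to its physical machine; `dist` maps a
    # (pm_a, pm_b) pair to the network distance between the machines.
    total = sum(dist[(alloc[ck], alloc[ck2])]
                for ck, ck2 in product(provider_conts, consumer_conts))
    return total / (len(provider_conts) * len(consumer_conts))
```

With two provider containers on pm1 and pm2 and one consumer container on pm1, the mean distance averages one same-machine pair and one cross-machine pair.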
In the GA-MOCA algorithm model, only the network distance between physical nodes is considered, while the request and transmission amounts are ignored. The ACO-CMS algorithm model takes the request and transmission quantities into account on the basis of GA-MOCA, but it still has shortcomings. During network transmission, there are obvious differences in transmission speed, distance, and other factors between containers allocated to the same physical node and containers allocated to different physical nodes. This problem is not adequately solved in the models of the above two papers; therefore, we propose a new network transmission overhead model definition:
$$\mathrm{Trans\_Consume}(type) = Link(cont_i, cont_k^{type}) \times Trans(cont_i, cont_k^{type}) \times Dist(cont_i, cont_k^{type}) \times PassTime(cont_i, cont_k^{type}), \qquad (6)$$

The total network transmission overhead, Total_Consume, consists of the network transmission overhead between containers assigned to the same physical node, Inner_Consume, and the network transmission overhead between containers assigned to different physical nodes, Outer_Consume. Trans_Consume(type) indicates how the network transmission consumption is calculated for each type. According to type, cont_k^type is divided into cont_k^in and cont_k^out. cont_i represents a container instance of microservice ms_i. Container instances of microservices that have consumer relationships with microservice ms_i and are assigned to different physical nodes are represented as cont_k^out; those assigned to the same physical node are represented as cont_k^in. Building on the GA-MOCA and ACO-CMS algorithm models, this model focuses on the difference between the network overhead transmitted between containers on the same physical node and the network overhead transmitted between containers on different physical nodes. By also considering the transmission time, the optimization of the transmission overhead is more comprehensive.
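A minimal sketch of this objective, assuming Equation (6) multiplies its four factors and that the consumer/provider container pairs have already been split into same-node and cross-node sets (both assumptions of this sketch, not statements of the paper's implementation):

```python
def trans_consume(link, trans, dist, pass_time):
    # Overhead of one container pair (Eq. 6), read here as the product of
    # the number of requests (Link), the transferred data volume (Trans),
    # the network distance (Dist), and the transmission time (PassTime).
    return link * trans * dist * pass_time

def total_consume(inner_pairs, outer_pairs):
    # Total_Consume = Inner_Consume (same-node pairs) + Outer_Consume
    # (cross-node pairs); each pair is a (link, trans, dist, pass_time) tuple.
    return (sum(trans_consume(*p) for p in inner_pairs)
            + sum(trans_consume(*p) for p in outer_pairs))
```

Same-node pairs would typically contribute smaller distance and transmission-time factors than cross-node pairs, which is exactly the distinction the model introduces.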
$$\mathrm{RESRC\_CONS}(X) = \frac{1}{\sigma_1 + \sigma_2} \max_{1 \le j \le n} \max\!\left( \sigma_1 \sum_{i=1}^{m} x_{i,j} \frac{Link_i \times Cal\_Reqst_i}{Scale_i \times Cal\_Resrc_i},\; \sigma_2 \sum_{i=1}^{m} x_{i,j} \frac{Link_i \times Str\_Reqst_i}{Scale_i \times Str\_Resrc_i} \right), \qquad (10)$$
where Cal_Reqsti , Cal_Resrci , Str_Reqsti , and Str_Resrci have the same meaning as Cal_Reqi ,
Cal_Resi , Str_Reqi , and Str_Resi in this paper, respectively. σ1 and σ2 are the standard de-
viation values of the utilization rate of computing resources and storage resources of the
physical nodes in the cluster, respectively. This model operates on the assumption that
the worst load of the cluster is not necessarily the maximum resource utilization rate with
a relatively balanced resource load, but a high resource utilization rate with a relatively
unbalanced resource load.
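Under the reconstruction of Equation (10) above, this worst-case load metric can be sketched as follows; the placement matrix and the precomputed per-microservice load terms are illustrative structures:

```python
from statistics import pstdev

def resrc_cons(x, cal_load, str_load):
    # ACO-CMS worst-case resource consumption (Eq. 10), as reconstructed
    # here. x[i][j] is the 0/1 placement of microservice i on node j;
    # cal_load[i] stands for Link_i*Cal_Reqst_i/(Scale_i*Cal_Resrc_i) and
    # str_load[i] for the analogous storage term. Names are assumptions.
    n = len(x[0])
    cal = [sum(x[i][j] * cal_load[i] for i in range(len(x))) for j in range(n)]
    stor = [sum(x[i][j] * str_load[i] for i in range(len(x))) for j in range(n)]
    s1, s2 = pstdev(cal), pstdev(stor)  # std dev of per-node utilizations
    if s1 + s2 == 0:
        return 0.0  # perfectly even load across nodes
    return max(max(s1 * c for c in cal), max(s2 * s for s in stor)) / (s1 + s2)
```

The standard deviations weight the per-node maxima, so an unbalanced but high utilization is penalized more than a balanced one, matching the assumption stated above.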
The model of load balancing in GA-MOCA is defined as
$$\mathrm{BalanceClusterUse} = \sigma\left(PM_{usage}^{pm_l}\right), \quad \text{if } \exists\, ms_i \mid alloc(ms_i) = pm_l, \qquad (11)$$
where ureq_i denotes the number of user requests for application i, msreq_i denotes the number of requests of microservice ms_i needed for each user request ureq_i, and res_i denotes the computational resources required for a microservice request. In this model, we define a metric called the threshold distance, which is the difference between
the resource consumption of a container and the threshold value of a microservice. This
is formalized in Equation (13), which uses the standard deviation of the percentage of
resource usages of the physical nodes, in order to evaluate the balance of the cluster.
It is obvious that GA-MOCA ignores other factors relevant to load balancing. On the basis of GA-MOCA, the influence of storage on load balancing is added in the ACO-CMS model. Using the coefficient-weighted maximum resource utilization rate among the physical nodes reflects the worst-case load for the load balancing of the cluster. Although the use of a maximum value is more comprehensive, it ignores the combined effects of other factors on load balancing, which leads to inefficient use of storage and computational resources.
The models of these two papers cannot adequately address these problems. In order to
address these problems, we propose a global load balancing approach in this paper. Global
load balancing consists of cluster load balancing, which is the load balancing of the whole
physical node cluster, and local load balancing, which means the resources are balanced
within one physical node. Global load balancing aims to achieve the load balancing of the
entire physical node cluster and the rational use of the entire cluster resources at the same
time. The objective model of load balancing is designed as follows:
$$CalcStrDif_j = \left| \frac{Calc\_Req_j}{Calc\_Res_j} - \frac{Str\_Req_j}{Str\_Res_j} \right|, \qquad (14)$$

$$StrMemDif_j = \left| \frac{Str\_Req_j}{Str\_Res_j} - \frac{Mem\_Req_j}{Mem\_Res_j} \right|, \qquad (15)$$

$$MemCalcDif_j = \left| \frac{Mem\_Req_j}{Mem\_Res_j} - \frac{Calc\_Req_j}{Calc\_Res_j} \right|, \qquad (16)$$

$$LocalLoadBalancing = \frac{\sum_{j=1}^{n} \left( CalcStrDif_j + StrMemDif_j + MemCalcDif_j \right)}{3n}, \qquad (17)$$

$$ClusterLoadBalancing = \frac{\sigma_{calc} + \sigma_{str} + \sigma_{mem}}{3}, \qquad (18)$$

$$GlobalLoadBalancing = \frac{ClusterLoadBalancing + LocalLoadBalancing}{2}, \qquad (19)$$
where LocalLoadBalancing is the sum of the differences of the ratio between the three re-
sources of the physical node. The differences are represented as CalcStrDi f , StrMemDi f ,
and MemCalcDi f . The larger the value is, the more unbalanced it is. σcalc , σstr , and σmem
represent the standard deviation of computing resources, storage resources, and memory
resources used throughout the physical node cluster, respectively. ClusterLoadBalancing
is calculated using three standard deviations. The greater the standard deviation, the
more discrete and the more unbalanced the use. GlobalLoadBalancing is the mean of
ClusterLoadBalancing and LocalLoadBalancing. We intend to achieve cluster load balancing for each resource, making the use of each resource more reasonable. In this paper, the physical node storage, memory, and computing resources are all taken into account, combining local load balancing with cluster load balancing.
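The objective of Equations (14)–(19) can be sketched as follows; the per-node dictionary of utilization ratios is an illustrative structure, not the paper's actual data model:

```python
from statistics import pstdev

def global_load_balancing(nodes):
    # Each node is a dict of utilization ratios (requested / available),
    # e.g. {"calc": 0.5, "str": 0.3, "mem": 0.4}.
    diffs = []
    for nd in nodes:
        diffs.append(abs(nd["calc"] - nd["str"])      # CalcStrDif (Eq. 14)
                     + abs(nd["str"] - nd["mem"])     # StrMemDif (Eq. 15)
                     + abs(nd["mem"] - nd["calc"]))   # MemCalcDif (Eq. 16)
    local = sum(diffs) / (3 * len(nodes))                          # Eq. 17
    cluster = (pstdev(n["calc"] for n in nodes)
               + pstdev(n["str"] for n in nodes)
               + pstdev(n["mem"] for n in nodes)) / 3              # Eq. 18
    return (cluster + local) / 2                                   # Eq. 19
```

A cluster whose nodes all have identical, internally balanced utilizations scores 0, while either within-node imbalance or across-node imbalance raises the value.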
This model uses the average number of request failures as an indicator to measure
the reliability of cluster services, which is mainly related to the number of microservice
requests and the failure rate of the nodes.
The reliability model in GA-MOCA measures the reliability of the system through the failure rate of the applications. An application fails when any of its microservices fail, and a microservice fails when all of its container replicas fail. A container failure is caused either by a failure of the container itself, fail_i, or by a failure of the physical machine that hosts the container, fail_l.
As both physical nodes and containers may have unpredictable errors due to various
problems, the number of requests failed is an important indicator to measure the reliability
of a service. The definition of the model in GA-MOCA is multiplicative. When the number
of microservices and physical nodes is large, the result is too small, which is not conducive
to calculation by the computer and the comparison between the results. Compared with GA-MOCA, the ACO-CMS model considers only the failure rate of physical nodes, while the failure rate of containers is ignored. In addition, the container instance of the same microservice in each node in the ACO-CMS model is unique. This constraint means that the ACO-CMS model is unable to find an effective allocation scheme when there are many container instances and few physical nodes. To solve the above problems, the model
proposed in this paper is as follows:
node that the microservice has been allocated, for example, if ms6 = {3, 1}, then perhaps
ms6 = {3} after mutation. Therefore, if ms6 has only two container instances when such operations occur, ms6 may fail to meet the required quantity or may exceed the resources that the physical nodes can provide. This can generate an invalid schedule scheme; the same holds for all of the other operations.
[Figure: example GA-MOCA chromosome, listing the allocated physical nodes per microservice: ms1 {3,1}, ms2 {3}, ms3 {2,3}, ms4 {1,2,1,1}, ms5 {1,2}, ms6 {3,1}.]
Third, when there are large numbers of containers and microservices, the representation method uses a significant amount of memory to record the allocation order of containers while the algorithm is running, even though the allocation order of containers has no direct impact on the optimization of the scheduling plan; it is, however, required by the operation of their specific algorithm.
Considering the above problems, we define a new scheduling scheme expression,
based on the number of containers. Each scheduling scheme is represented by a two-dimensional array, in which each row represents a microservice ms_i and each column represents a physical node pm_j. The element (ms_i, pm_j) represents the number of containers of
microservice msi allocated to physical node pm j . Consider the simple application we
mentioned above (shown in Figure 2) as an example; one of its schedule schemes (or
particles) is shown in Figure 5. Figure 5 shows the original state of the particle, which is
randomly initialized by the MOPPSO-CMS algorithm. As microservice ms1 has two container instances, the elements in row ms1 sum to two. The allocations (ms1, pm1) = 1 and
(ms1 , pm3 ) = 1 are randomly initialized, where one of the ms1 containers is assigned to
pm1 and the other is assigned to pm3 . Compared to the previous representation method,
this method has several advantages:
First, the new representation method, combined with the characteristics of the MOPPSO-CMS algorithm, reduces the time complexity. When the MOPPSO-CMS algorithm begins, it
first initializes the particles (shown in Figure 5), then finds the suitable schedule scheme
by changing the number of containers in the physical node, instead of traversing each
container and physical node. Thus, if there are x containers, y microservices, and z physical
nodes, and the MOPPSO-CMS algorithm has a population of m particles and n iterations,
the time complexity of the MOPPSO-CMS algorithm is O(y × m × n).
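The random initialization of such a particle can be sketched as follows; the function name and parameters are illustrative, and capacity checks are omitted in this sketch:

```python
import random

def init_particle(instances, n_nodes, seed=None):
    # Build one particle: a 2-D array whose entry (ms_i, pm_j) counts the
    # containers of microservice i placed on node j. instances[i] is the
    # number of container instances of microservice i.
    rng = random.Random(seed)
    particle = []
    for count in instances:
        row = [0] * n_nodes
        for _ in range(count):
            row[rng.randrange(n_nodes)] += 1  # place each container randomly
        particle.append(row)
    return particle
```

By construction, each row sums to the microservice's instance count, so the total number of containers is always respected.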
Second, the transfer and copy operations, which are discussed later, can avoid gener-
ating an invalid schedule scheme while looking for a suitable schedule scheme, as they do
not change the total number of containers.
Third, the memory consumption of the new representation method depends only on the numbers of microservices and physical nodes; the number of containers does not significantly affect it.
In conclusion, the new representation method combines the advantages and over-
comes the shortcomings of both ACO-CMS and GA-MOCA. ACO-CMS will not generate
an invalid schedule scheme, as it picks the containers in order to find suitable physical
nodes; however, this may result in increased time complexity. GA-MOCA may have lower time complexity, but can generate many invalid schedule schemes. The new representation will reduce the time complexity and avoid generating invalid schedule schemes at
the same time, thus combining the advantages of both methods.
[Figure: example of the transfer operation (rand() = 0.5). Left particle: ms1 (1,0,1), ms2 (1,2,0), ms3 (1,1,1), ms4 (0,0,1), ms5 (0,2,0); right particle, after transfer: ms1 (0,1,1), ms2 (1,1,1), ms3 (0,2,1), ms4 (0,0,1), ms5 (1,0,1).]
In the figure, there is a 0.5 probability for the transfer operation to occur in each
position of the particle. If the transfer occurs, the microservice would randomly transfer
its containers to other physical nodes. For example, if a transfer occurs at (ms1 , pm1 ),
the containers in the physical node pm1 are randomly transferred to pm2 . Similarly, if a
transfer occurs at (ms5, pm2), the containers in the physical node pm2 are randomly transferred to pm1 and pm3. The number of transferred containers is random; for example, for (ms2, pm2), it could transfer one or two containers to pm3. If the number of containers in a position is 0, no transfer occurs.
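The transfer operation described above can be sketched as follows; this is a minimal interpretation under the assumption that destinations are chosen uniformly among the other nodes:

```python
import random

def transfer(particle, prob=0.5, rng=random):
    # Transfer operation sketch: each non-zero cell (ms_i, pm_j) moves,
    # with probability `prob`, a random number of its containers to
    # randomly chosen other nodes. Row sums (containers per microservice)
    # are preserved, so no invalid scheme is generated.
    n_nodes = len(particle[0])
    for row in particle:
        for j in range(n_nodes):
            if row[j] > 0 and rng.random() < prob:
                moved = rng.randint(1, row[j])
                row[j] -= moved
                others = [k for k in range(n_nodes) if k != j]
                for _ in range(moved):
                    row[rng.choice(others)] += 1  # random destination node
    return particle
```

Because containers are only moved between columns of the same row, the per-microservice container totals never change, which is the key validity property noted later for this operation.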
Further, in order to increase the global optimization ability and optimization efficiency, a copy operation is integrated into the particle swarm optimization process. According to specified probabilities (the learning factors c1 and c2), each row of a particle copies the corresponding row of the individual extremum pbest or the global extremum gbest. Taking the particle itself and its individual extremum pbest as an example, the copy operation is illustrated in Figure 7.
[Figure 7: example of the copy operation (rand() = 0.5). Left particle: ms1 (0,1,1), ms2 (1,1,1), ms3 (0,2,1), ms4 (0,0,1), ms5 (1,1,0); right (the particle's pbest): ms1 (0,0,2), ms2 (1,1,1), ms3 (0,1,2), ms4 (1,0,0), ms5 (1,1,0).]
In the figure, the left side is the particle, and the right side is the individual extremum
pbest of the particle. According to the learning factor, the probability of a copy operation
occurring is 0.5. The copy operation occurs at ms2 , and the particle copies the elements
of the same row in pbest, covering their own elements to achieve the purpose of learning
from the individual extremum.
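The copy operation can be sketched as follows; the function name and the row-wise copy interpretation are assumptions of this sketch:

```python
import random

def copy_rows(particle, best, learning_factor=0.5, rng=random):
    # Copy operation sketch: each row of the particle is overwritten by the
    # corresponding row of `best` (pbest or gbest) with probability equal
    # to the learning factor, pulling the particle toward the extremum
    # while leaving the other rows untouched.
    for i in range(len(particle)):
        if rng.random() < learning_factor:
            particle[i] = list(best[i])
    return particle
```

Because whole rows are copied, the per-microservice container totals of valid particles remain valid after the operation.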
Each iteration generates a new set of gbest, with the gbest in the new set denoted by gbest′. The gbest′ are compared with the gbest in the Pareto-optimal front. All gbest that are Pareto-dominated by a gbest′ are dropped, and that gbest′ is added to the Pareto-optimal front. If a gbest′ is Pareto-dominated by any one of the gbest, it is dropped. If a gbest′ does not Pareto-dominate any gbest, and no gbest Pareto-dominates it, then the gbest′ is also added to the Pareto-optimal front.
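This Pareto-front maintenance can be sketched as follows; fitness vectors stand in for full particles, and all objectives are assumed to be minimized:

```python
def dominates(a, b):
    # True if fitness vector `a` Pareto-dominates `b`: no worse in every
    # objective and strictly better in at least one (minimization).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_front(front, candidates):
    # Update a Pareto-optimal front with a new set of gbest' vectors.
    # A candidate dominated by any front member is dropped; otherwise it
    # enters the front and evicts every member it dominates.
    for c in candidates:
        if any(dominates(g, c) for g in front):
            continue  # c is dominated: drop it
        front = [g for g in front if not dominates(c, g)]
        front.append(c)
    return front
```

Mutually non-dominating solutions coexist in the front, which is what allows the algorithm to keep several trade-off schedules at once.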
When the Pareto-optimal front of a swarm is updated, inter-process communication is carried out. The swarm uploads the most recently added gbest′ from its Pareto-optimal front to the shared memory. Another swarm downloads the gbest′ from the shared memory; all of its local gbest that are Pareto-dominated by the gbest′ are dropped, the gbest′ is added to its Pareto-optimal front, and the gbest′ is uploaded to the shared memory again for the rest of the swarms to download. If the gbest′ is Pareto-dominated by any one of the local gbest, the gbest′ is dropped. If the gbest′ does not Pareto-dominate any gbest, and no gbest Pareto-dominates it, then the gbest′ is added to the Pareto-optimal front and uploaded again. The operation of inter-process communication is shown in Figure 8.
Finally, the particle (schedule scheme) with the minimum sum of fitness values in the Pareto-optimal front is output. The algorithm pseudo-code is shown in Algorithm 2.
[Figure 8. Inter-process communication via shared memory: (a) independent computation of individual particle swarms; (b) particle swarm 1 puts its own gbest1 in shared memory; (c) particle swarm 4 obtains gbest1 from shared memory.]
Table 4. Cont.

pnum   iter_num   ω      c1    c2
300    300        0.45   0.1   0.45
[Figure 9. TotalConsume of the four algorithms (MOPPSO-CMS, GA-MOCA, ACO-CMS, Spread) versus UserRequest.]
The Spread algorithm selects the physical node with the fewest container instances for each deployment. The algorithm distributes the microservice containers as evenly as possible across the physical nodes, so containers with consumption relations are easily assigned to different physical nodes. Thus, the transmission overhead between microservices is greatly increased.
GA-MOCA considers the influence of the distance between physical nodes in finding
solutions, so its effect is slightly better than Spread.
ACO-CMS optimizes factors such as physical node distance and transmission data size,
but ignores the possible impact of transmission time. Moreover, due to the characteristics
of the ACO-CMS algorithm, the container instance of the microservice on a physical node is
unique, so it is impossible to deploy multiple consumption-related microservice containers
to the same physical node.
When the numbers of containers of two communicating microservices differ significantly, the result is an increase in network transmission overhead. Figure 9 shows that the Spread
algorithm performed the worst, and it always had the highest network transmission
overhead, with the ACO-CMS and GA-MOCA algorithms performing better; however,
the MOPPSO-CMS algorithm achieved the best performance, as it considers the influ-
ence of the data transmission amount and network distance on the network transmission
overhead, as well as the influence of time required for data transmission on the network
transmission overhead.
Figure 10. Performance differences of the four algorithms on local load balancing.
Figure 10 shows that the Spread algorithm achieved reasonable performance under
low user requests; however, when the amount of user requests increased, the performance
worsened. This is because the algorithm tries to distribute the containers evenly across the physical nodes; when there are few user requests, each physical node is allocated fewer containers, which does not reduce performance, as the resources are not overwhelmed. Both
the ACO-CMS algorithm and the GA-MOCA algorithm fluctuated, and their performance
was not stable. This may be because both algorithms optimize the load balancing of the
cluster, but ignore local load balancing. The MOPPSO-CMS algorithm had the best effect,
as it was specially designed to optimize the local load balancing, such that the resources on
the physical nodes can be balanced and reasonably used in the calculation process.
[Figure: cluster memory resource standard deviation of the four algorithms (MOPPSO-CMS, NSGA-II, ACO-CMS, Spread) versus UserRequest.]
[Figure: cluster storage resource standard deviation of the four algorithms versus UserRequest.]
Figure 14. Performance differences of the four algorithms in global load balancing.
Figure 15. Performance comparison results of the four algorithms in service reliability.
Figure 15 shows that the Spread algorithm was the worst, as it divides the containers as
evenly as possible between each physical node, which means that the number of requests
that may fail during transmission between containers on different physical nodes can
increase significantly. Although the ACO-CMS and GA-MOCA algorithms optimize
service reliability, they still have shortcomings. The MOPPSO-CMS algorithm in this paper
performed best, as it optimizes the requests between different physical nodes and between
the same physical nodes, while the three other algorithms do not.
Figure 16. Performance comparison results of the four algorithms, in terms of running time.
Table 6 shows that the running times of the MOPPSO-CMS and Spread algorithms were significantly lower than those of the GA-MOCA and ACO-CMS algorithms, which are shown in bold. The first reason is that the convergence rates of the NSGA and ACO algorithms are slow. The second reason is that the design of each ant and genome significantly increases the calculation time of the algorithm when there are multiple containers present.
The MOPPSO-CMS algorithm in this paper optimizes the shortcomings of the above two
algorithms and uses the particle swarm optimization algorithm with faster convergence in
parallel, which resulted in a quicker running time. The Spread algorithm achieves good
performance in running speed, due to the algorithm’s simplicity.
6. Conclusions
In this paper, according to the characteristics of container microservice scheduling,
three optimization objectives were proposed to reduce network transmission overhead,
stabilize load balancing, and increase service reliability. A multi-objective optimization
model was established, and a multi-objective optimization algorithm, based on the particle
swarm optimization algorithm, was proposed to solve the microservice container schedul-
ing problem. In this paper, parallel computing was used to ensure that different particle
swarms share the optimal solution through inter-process interaction, which effectively
avoids the particle swarm optimization algorithm falling into local optima. Further, we
optimized the representation of particles, and successfully reduced the calculation time of
the algorithm when multiple containers are present, addressing a critical disadvantage of
the previously discussed methods. Although our algorithm was slightly worse in partial load balancing than some other algorithms, it had obvious advantages in reducing network transmission overhead, balancing cluster and node loads, improving service reliability, and reducing running time. In future research, we plan
to study the results of the proposed optimization algorithm in an actual physical cluster.
On the basis of the three optimization objectives proposed in this paper, we will consider
other optimization objectives, and try to combine other optimization algorithms with
microservice container scheduling. Finally, future research will also include multi-objective
optimization algorithm performance improvement methods.
Author Contributions: Software, S.X.; Writing original draft, S.X.; Writing review & editing, X.C. All
authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Johannes, T. Microservices. Softw. IEEE 2015, 32, 116.
2. Newman, S. Building Microservices: Designing Fine-Grained Systems; O'Reilly & Associates Inc.: Sebastopol, CA, USA, 2015.
3. Daya, S.; Van Duy, N.; Eati, K.; Ferreira, C.M.; Glozic, D.; Gucer, V.; Gupta, M.; Joshi, S.; Lampkin, V.; Martins, M.; et al.
Microservices from Theory to Practice: Creating Applications in IBM Bluemix Using the Microservices Approach; IBM Redbooks:
Washington, DC, USA, 2016.
4. Hoff, T. Lessons Learned from Scaling Uber to 2000 Engineers, 1000 Services, and 8000 git Repositories. Available online: http://highscalability.com/blog/2016/10/12/lessons-learned-from-scaling-uber-to-2000-engineers-1000-ser.html (accessed on 12 October 2016).
5. Ren, Z.; Wang, W.; Wu, G.; Gao, C.; Chen, W.; Wei, J.; Huang, T. Migrating web applications from monolithic structure to
microservices architecture. In Proceedings of the Tenth Asia-Pacific Symposium on Internetware, Beijing, China, 16 September
2018; pp. 1–10.
6. Guidi, C.; Lanese, I.; Mazzara, M.; Montesi, F. Microservices: A Language-Based Approach. In Present and Ulterior Software Engineering; Mazzara, M., Meyer, B., Eds.; Springer: Cham, Switzerland, 2017; pp. 217–225.
7. Naik, N. Building a virtual system of systems using docker swarm in multiple clouds. In Proceedings of the 2016 IEEE
International Symposium on Systems Engineering (ISSE), Edinburgh, UK, 3–5 October 2016; pp. 1–3.
8. Frampton, M. Apache mesos. In Complete Guide to Open Source Big Data Stack; Apress: Berkeley, CA, USA, 2018; Volume 59,
pp. 644–645.
9. Sabharwal, N.; Pandey, P. Pro Google Kubernetes engine: Network, security, monitoring, and automation configuration. In Pro
Google Kubernetes Engine: Network, Security, Monitoring, and Automation Configuration; Apress: Berkeley, CA, USA, 2020.
10. Freeman, A. Docker swarms. In Essential Docker for ASP.NET Core MVC; Apress: Berkeley, CA, USA, 2017.
11. Lago, D.; Madeira, E.; Medhi, D. Energy-Aware Virtual Machine Scheduling on Heterogeneous Bandwidths’ Data Centers. IEEE
Trans. Parallel. Distrib. Syst. 2017, 29, 1. [CrossRef]
12. Zhou, M.; Dong, X.; Chen, H.; Zhang, X. A Dynamic Fine-grained Resource Scheduling Method in Cloud Environment. J. Softw.
2020, 31, 315–333.
13. Guerrero, C.; Lera, I.; Juiz, C. Genetic Algorithm for Multi-Objective Optimization of Container Allocation in Cloud Architecture.
J. Grid Comput. 2018, 16, 113–135. [CrossRef]
14. Lin, M.; Xi, J.; Bai, W.; Wu, J. Ant Colony Algorithm for Multi-Objective Optimization of Container-Based Microservice Scheduling
in Cloud. IEEE Access 2019, 7, 83088–83100.
15. Nguyen, N.D.; Kim, T. Balanced Leader Distribution Algorithm in Kubernetes Clusters. Sensors 2021, 21, 869. [CrossRef]
[PubMed]
16. Taherizadeh, S.; Stankovski, V.; Grobelnik, M. A Capillary Computing Architecture for Dynamic Internet of Things: Orchestration
of Microservices from Edge Devices to Fog and Cloud Providers. Sensors 2018, 18, 2938. [CrossRef] [PubMed]
17. Fan, S.K.S.; Chang, J.M. A parallel particle swarm optimization algorithm for multi-objective optimization problems. Eng. Optim.
2009, 41, 673–697. [CrossRef]
18. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks,
Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
19. Jun, S. Research on Quantum Behavior Particle Swarm Optimization Algorithm. Ph.D. Thesis, Southern Yangtze University,
Wuxi, China, 2009.
20. Lifeng, X.; Zusheng, H.; Zhongzhu, Y.; Weilong, D. Hybrid Particle Swarm Optimization Algorithm with Multi-level Disturbance.
J. Softw. 2019, 30, 1835–1852.
21. Liu, K.; Cui, Y.; Ren, J.; Li, P. An Improved Particle Swarm Optimization Algorithm for Bayesian Network Structure Learning via
Local Information Constraint. IEEE Access 2021, 9, 40963–40971.
22. Liao, L.; Leung, V.C.M.; Li, Z.; Chao, H.C. Genetic Algorithms with Variant Particle Swarm Optimization Based Mutation for
Generic Controller Placement in Software-Defined Networks. Symmetry 2021, 13, 1133. [CrossRef]
23. Qamar, M.S.; Tu, S.; Ali, F.; Armghan, A.; Munir, M.F.; Alenezi, F.; Muhammad, F.; Ali, A.; Alnaim, N. Improvement of Traveling
Salesman Problem Solution Using Hybrid Algorithm Based on Best-Worst Ant System and Particle Swarm Optimization. Appl.
Sci. 2021, 11, 4780. [CrossRef]
24. Wang, X.; Lv, S. The roles of particle swarm intelligence in the prisoner’s dilemma based on continuous and mixed strategy
systems on scale-free networks. Appl. Math. Comput. 2019, 355, 213–220. [CrossRef]
25. Chhibber, D.; Bisht, D.C.; Srivastava, P.K. Pareto-optimal solution for fixed-charge solid transportation problem under intuitionis-
tic fuzzy environment. Appl. Soft Comput. 2021, 107, 107368. [CrossRef]
26. Nagaballi, S.; Kale, V.S. Pareto optimality and game theory approach for optimal deployment of DG in radial distribution system
to improve techno-economic benefits. Appl. Soft Comput. 2020, 92, 106234. [CrossRef]
27. Czajkowski, M.; Kretowski, M. A Multi-Objective Evolutionary Approach to Pareto Optimal Model Trees. A Preliminary Study; Springer
International Publishing: Berlin/Heidelberg, Germany, 2016.
28. Wang, H.; Yen, G.G.; Xin, Z. Multi-objective Particle Swarm Optimization Algorithm based on Pareto Entropy. J. Softw. 2014, 25,
1025–1050.
29. Yuliang, S.; Yali, S.; Zhongmin, Z.; Honglei, Z.; Yu, C.; Lizhen, C. Privacy Protection Service Pricing Model based on Pareto
Optimal. J. Comput. 2016, 39, 1267–1280.
30. Chang’an, S.; Yimin, L.; Xiqin, W.; Peng, Y. Research on Radar-communication Shared Aperture based on Pareto optimal. J.
Electron. Inform. 2016, 38, 2351–2357.
31. Fu, S. Failure-aware resource management for high-availability computing clusters with distributed virtual machines. J. Parallel
Distrib. Comput. 2010, 70, 384–393. [CrossRef]
32. Corp, A. Alibaba Cluster Trace V2018. Available online: https://github.com/alibaba/clusterdata (accessed on 3 December 2018).
33. Borowska, B. An improved CPSO algorithm. In Proceedings of the 2016 XIth International Scientific and Technical Conference
Computer Sciences and Information Technologies (CSIT), Lviv, Ukraine, 6–10 September 2016; pp. 1–3. [CrossRef]
34. Borowska, B. Social strategy of particles in optimization problems. In World Congress on Global Optimization; Springer:
Berlin/Heidelberg, Germany, 2019; pp. 537–546.