Multiprocessor Scheduling Using Particle Swarm Optimization
The problem of scheduling a set of dependent or independent tasks in a …

International Journal of the Computer, the Internet and Management Vol. 17. No.3 (September-December, 2009) pp 11-24

S.N.Sivanandam, P Visalakshi, and A.Bhuvaneswari

Several research works have been carried out on the Task Assignment Problem [TAP]. The traditional methods such as branch and bound, divide and conquer, and dynamic programming give the global optimum, but are often too time consuming or do not apply to typical real-world problems. The researchers [9] [10] [23] derived optimal task assignments that minimize the sum of task execution and communication costs with the branch-and-bound method and evaluated the computational complexity of this method using simulation. V.M. Lo [3] says that many of the heuristic algorithms use a graphical representation of the task-processor system such that a Max Flow/Min Cut algorithm can be utilized to find assignments of tasks to processors which minimize total execution and communication costs [19], and concludes that a measure of the degree to which an algorithm achieves load balancing [20] can yield fairly unbalanced assignments. Traditional methods used in optimization are deterministic, fast, and give exact answers, but they often tend to get stuck at local optima. Moreover, the time complexity of traditional search algorithms on NP-hard problems cannot be reduced from exponential to polynomial.

Consequently, another approach is needed when the traditional methods cannot be applied. The modern heuristic approach helps in such situations. Modern heuristics are general-purpose optimization algorithms; their efficiency or applicability is not tied to any specific problem domain. Available heuristics include Simulated Annealing …

… discusses the work related to the task assignment problem. Section 4 explains the Particle Swarm Optimization heuristic used in this paper. Section 5 illustrates the proposed methods of this paper in detail. Section 6 reports the results obtained in this work. Finally, Section 7 discusses the conclusions and directions for further research in this area.

3. Problem Definition

This paper considers the Task Assignment Problem with the following scenario. The system consists of a set of n heterogeneous processors having different memory and processing resources, which implies that the r tasks encounter different execution costs on different processors. The communication links are assumed to be identical; however, a communication cost is incurred between two tasks when they are executed on different processors. A task makes use of the resources of its execution processor [19].

The objective is to minimize the total execution and communication cost encountered by the task assignment, subject to the resource constraints. To achieve the minimum cost for the TAP, the function is formulated as

\min Q(X) = \sum_{i=1}^{r} \sum_{k=1}^{n} e_{ik} x_{ik} + \sum_{i=1}^{r-1} \sum_{j=i+1}^{r} \sum_{k=1}^{n} c_{ij} x_{ik} (1 - x_{jk})    (1)
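The objective above can be evaluated directly from an assignment vector. The following is a minimal Python sketch of that evaluation (the paper's implementation is in MATLAB; the array names exec_cost and comm_cost and the 0-based processor indexing are illustrative assumptions, not the authors' code):

```python
def total_cost(assign, exec_cost, comm_cost):
    """Total execution plus communication cost of one assignment.

    assign[i]       -- processor (0..n-1) chosen for task i
    exec_cost[i][k] -- cost of executing task i on processor k
    comm_cost[i][j] -- cost incurred when tasks i and j run on
                       different processors
    """
    r = len(assign)
    # execution cost of every task on its chosen processor
    cost = sum(exec_cost[i][assign[i]] for i in range(r))
    # communication cost only for task pairs split across processors
    for i in range(r - 1):
        for j in range(i + 1, r):
            if assign[i] != assign[j]:
                cost += comm_cost[i][j]
    return cost
```

For example, co-locating two heavily communicating tasks lowers the total cost even if their individual execution costs rise slightly.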
particle flies through potential solutions toward pbest_i and gbest in a navigated way, while still exploring new areas by the stochastic mechanism so as to escape from local optima. If c1 = c2, each particle will be attracted to the average of pbest and gbest [1]. Since c1 expresses how much the particle trusts its own past experience, it is called the cognitive parameter, and since c2 expresses how much it trusts the swarm, it is called the social parameter. Most implementations use a setting with c1 roughly equal to c2 [2]. The inertia weight w controls the momentum of the particle. The inertia weight can be varied dynamically by applying an annealing scheme to the w-setting of the PSO, where w decreases from w = 0.9 to w = 0.4 over the whole run. A significant performance improvement is seen by varying the inertia.

4.1 Neighborhood Topology

The selection of the neighborhood plays a major role in reaching the optimal solution faster. The various topology structures are illustrated in Figure 1. The all topology represents a fully connected graph, and, based on its graph statistics, it is conjectured that information spreads quickly [11]. Sociologically, it could represent a small, closed community where decisions are taken in consensus.

The ring topology represents a regular graph with a minimum number of edges between its nodes. The graph statistics show that information travels slowly along the graph. This allows different regions of the search space to be explored at the same time, as information about successful regions takes a long time to travel to the other side of the graph. In general it is called the k-best topology (each node is connected to k nodes); it is a circle topology when k = 2, while the fully connected case corresponds to the gbest topology. The four clusters topology represents four cliques connected among themselves by several gateways. Sociologically, it resembles four mostly isolated communities, where a few individuals have an acquaintance outside their group.

The pyramid represents a three-dimensional wire-frame pyramid. It has the lowest average distance of all the graphs and the highest numbers of first- and second-degree neighbors. The square is a graph representing a rectangular lattice that folds like a torus. This structure is commonly used to represent neighborhoods in the Evolutionary Computation and Cellular Automata communities, and is referred to as the von Neumann neighborhood (hypercube topology).

The wheel topology is one in which the only connections are from one central particle to the others. Based on the available topologies [15], it can be concluded that the shorter the average topological distance between any two particles, the faster the convergence; on the other hand, less connected topologies do not seem to prevent premature convergence [5].
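A compact sketch of the velocity update with the annealed inertia weight and a ring (k = 2) neighborhood is given below in Python (the paper's code is MATLAB; the linear annealing from w = 0.9 to w = 0.4 follows the description above, while the function names and default c1 = c2 values are illustrative assumptions):

```python
import random

def ring_best(best_pos, best_cost, i):
    """lbest for particle i in a ring topology (k = 2):
    the best of particle i and its two neighbours on the circle."""
    n = len(best_cost)
    nbrs = [(i - 1) % n, i, (i + 1) % n]
    return best_pos[min(nbrs, key=lambda j: best_cost[j])]

def update_velocity(v, x, pbest, lbest, it, max_it, c1=2.0, c2=2.0):
    """One PSO velocity update; the inertia weight w decreases
    linearly from 0.9 to 0.4 over the whole run."""
    w = 0.9 - (0.9 - 0.4) * it / max_it
    return [w * vd
            + c1 * random.random() * (pb - xd)   # cognitive term
            + c2 * random.random() * (lb - xd)   # social term
            for vd, xd, pb, lb in zip(v, x, pbest, lbest)]
```

Replacing ring_best with the best position of the entire swarm turns this lbest update into the gbest variant.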
Figure 1. Fully connected, Ring and Master-Slave Topology in PSO.

5. Proposed Methodology

This section discusses Simple PSO, the proposed Hybrid PSO and the proposed Parallel PSO. In PSO, each particle …
T1 T2 T3 T4 T5
particle 1 P3 P2 P1 P2 P2
particle 2 P1 P2 P3 P1 P1
particle 3 P1 P3 P2 P1 P2
particle 4 P2 P1 P2 P3 P1
particle 5 P2 P2 P1 P3 P1
Figure 2. Representation of particles
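As Figure 2 shows, a particle is simply a vector of processor labels indexed by task. A tiny Python sketch of this encoding (the paper used MATLAB; the helper below is illustrative):

```python
# Particle 1 from Figure 2: tasks T1..T5 run on processors P3, P2, P1, P2, P2
particle_1 = [3, 2, 1, 2, 2]

def tasks_on(particle, k):
    """Return the 1-based task numbers assigned to processor k."""
    return [t + 1 for t, p in enumerate(particle) if p == k]
```

So tasks_on(particle_1, 2) lists the tasks placed on processor P2.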
In the hybrid version, hybridization is done by performing simulated annealing at the end of each iteration of simple PSO [22]. In the parallel version of PSO, data parallelization is implemented. Generally, the Task Assignment Problem is to assign the n tasks to m processors so that the load is shared and also balanced. Here the proposed system considers n = 20 and m = 5, i.e., 20 tasks are to be shared among 5 processors. The system calculates the fitness value of each assignment and selects the optimal assignment from the set of solutions. The system compares the memory and processing …

… the fitness value for each vector. The objective value of Q(X) in Equation (1) can be used to measure the quality of each solution vector. In modern heuristics, infeasible solutions are also considered, since they may provide a valuable clue towards the optimal solution [14]. A penalty function is devised to estimate the infeasibility level of a solution. The penalty function is related only to constraints (3) and (4), and it is given by

Penalty(X) = \sum_{k=1}^{n} \left[ \max\left(0, \sum_{i=1}^{r} m_i x_{ik} - M_k\right) + \max\left(0, \sum_{i=1}^{r} p_i x_{ik} - P_k\right) \right]    (10)
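A sketch of such a penalty in Python (the paper's code is MATLAB; the sketch assumes memory requirements m_i against capacities M_k and, analogously, processing requirements against capacities, and the names mem_req, mem_cap, proc_req, proc_cap are illustrative):

```python
def penalty(assign, mem_req, mem_cap, proc_req, proc_cap):
    """Infeasibility level of an assignment (cf. Equation (10)).

    For each processor k, any memory or processing demand above its
    capacity adds to the penalty; a feasible assignment scores 0.
    """
    pen = 0
    for k in range(len(mem_cap)):
        used_mem = sum(mem_req[i] for i, p in enumerate(assign) if p == k)
        used_cpu = sum(proc_req[i] for i, p in enumerate(assign) if p == k)
        pen += max(0, used_mem - mem_cap[k]) + max(0, used_cpu - proc_cap[k])
    return pen
```

Such a penalty can then be added to the objective value so that infeasible particles survive in the swarm but rank below comparable feasible ones.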
[Figures 3 and 4: flow charts of the Gbest and Lbest PSO. Evaluate the initial swarm using the fitness function; initialize the personal best of each particle and the global best of the entire swarm; update the particle velocity using the global best (Figure 3) or the personal best (Figure 4); apply the velocities to the particle positions; repeat until the stopping condition holds; then get the best individual from the last generation.]
The global neighborhood 'gbest' is the most intuitive neighborhood. In the local neighborhood 'lbest', a particle is connected to only a fraction of the other particles [1].

5.2.1 GBEST PSO

In the global version, every particle has access to the fitness and best value so far of all other particles in the swarm. Each particle compares its fitness value with those of all other particles. This method implements the star topology. It provides good exploitation of the solution space, but its exploration is weak [18]. The implementation can be depicted as a flow chart, as shown in figure 3.

5.2.2 LBEST PSO

In the local version, each particle keeps track of the best vector lbest attained by its local topological neighborhood of particles. Each particle compares itself with its neighbors, decided based on the size of the neighborhood. The groups exchange information about local optima. Here the exploitation of the solution space is weakened and exploration becomes stronger. The implementation can be depicted as a flow chart, as shown in figure 4.

5.3 Hybrid PSO

Modern meta-heuristics manage to combine exploration and exploitation search. The exploration search seeks new regions, and once it finds a good region, the exploitation search kicks in. However, since the two strategies are usually inter-wound, the search may be conducted to other regions before it reaches the local optima. As a result, many researchers suggest employing a hybrid strategy, which embeds a local optimizer between the iterations of the meta-heuristics [2] [8].

The embedded simulated annealing heuristic proceeds as follows. Given a particle vector, its r elements are sequentially examined for updating. The value of the examined element is replaced, in turn, by each integer value from 1 to n, and the one that attains the highest fitness value among them is retained. While an element is examined, the values of the remaining r-1 elements remain unchanged. A neighbor of the new particle is selected. The fitness values for the new particle and its neighbor are found, they are compared, and the minimum value is selected. This minimum value is assigned to the personal best of this particle. The heuristic terminates when all the elements of the particle have been examined for updating and all the particles have been examined. The computation of the fitness value due to the element updating can be minimized: since a value change in one element affects the assignment of exactly one task, we can save fitness computation by recalculating only the system costs and constraint conditions related to the reassigned task. The flow is shown in figure 5.
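The element-wise sweep described above can be sketched as follows (Python rather than the paper's MATLAB; the fitness callback, the maximization convention, and in-place mutation are assumptions, and the saving from recalculating only the reassigned task's costs is omitted for brevity):

```python
def element_wise_sweep(particle, n, fitness):
    """Examine each of the r elements in turn, trying every processor
    1..n and retaining the value with the highest fitness; the other
    r-1 elements stay fixed while one element is examined."""
    best = fitness(particle)
    for i in range(len(particle)):
        keep = particle[i]
        for k in range(1, n + 1):
            particle[i] = k
            f = fitness(particle)
            if f > best:          # retain the processor with highest fitness
                best, keep = f, k
        particle[i] = keep        # restore the best value found for element i
    return particle, best
```

Because only one element changes at a time, a full implementation would recompute only the costs and constraints touching the reassigned task, as the text notes.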
[Figure 5 and Figure 6: flow charts with the decision boxes "Has maximum iteration reached?" and "Is m reached?"; on "Yes" the output is produced and the algorithm stops.]
In the parallel version we implemented data parallelism [12], which involves the generation of new particles in every iteration. The existing particles are evaluated and new particles are generated in each iteration. If a new particle has a better solution, the existing particles get adjusted to that value. The search space is increased in this version; hence the convergence is also much faster, outperforming all the other versions of PSO. Figure 6 illustrates the working of parallel PSO, where K is the population size and m is the maximum iteration specified.

6. Results and Discussion

This section describes the results of simulations conducted to gain insight into the performance of the PSO algorithm implementation. Various versions of the PSO algorithm, namely the simple PSO, the global PSO, Hybrid PSO with Simulated Annealing and Parallel PSO, were implemented in MATLAB 7.0.1 and run with E. Taillard's benchmark data. The experimental results clearly demonstrate the effectiveness of the Parallel version of PSO. The value of (r, n) is set to (20, 5). The values of the other parameters are generated randomly as specified in [2]. The results of this experiment are obtained by varying the number of particles, the number of iterations and the topology of neighborhood particles.

6.1 Cost Evaluation

The task incurs the execution cost and communication cost when executed on different machines. Our objective is to minimize this total cost. We shall discuss the evaluation of cost in the various versions of PSO. For each version the number of iterations was increased up to 100 and the results were recorded.

6.1.1 Cost Evaluation in Global Best PSO

In the first method the cost is compared with an increase in the number of iterations. The cost obtained initially was 1011. As illustrated in figure 7, the cost reduces as we increase the number of iterations, and Gbest PSO converges to the minimum cost of 857 at the 28th iteration and remains the same till the last iteration.
[Figure 7: Gbest PSO, cost versus number of iterations.]
In the second method the cost obtained is compared with an increase in the population size. The cost initially was 883. As illustrated in figure 8, the cost reduces as we increase the population size, and Gbest PSO converges to the minimum cost of 783 for the population size of 500, remaining the same despite further increases in the population size.

Lbest PSO likewise settles for the population size of 500; its increase in cost relative to Gbest is because each particle gets the information only from its neighbors.
Figure 8 Decreasing Cost in Gbest PSO with varying population size

Figure 10 Decreasing Cost in Lbest PSO with varying population size
6.1.2 Cost Evaluation in Local Best PSO

In the first method the cost is compared with an increase in the number of iterations. Figure 9 depicts the cost obtained initially as 1035. The cost reduces as we increase the number of iterations, and Lbest PSO converges to the minimum cost of 857 only at the 100th iteration. This is because each particle gets the information only from its neighbors.

Figure 9 Local best PSO with varying number of iterations

6.1.3 Cost Evaluation in Hybrid PSO

In the first method the cost is compared with an increase in the number of iterations. In this method the cost obtained initially was 940, as shown in figure 11. The cost reduces as we increase the number of iterations, and Hybrid PSO converges to the minimum cost of 857 at the 21st iteration and remains the same till the last iteration. The faster convergence is due to the hybridization with simulated annealing.

[Figure 11: Hybrid PSO, cost versus number of iterations.]
In the second method the cost obtained is compared with an increase in the population size. In this method the cost obtained initially was 885, as shown in figure 12. The cost reduces as we increase the population size, and Hybrid PSO converges to the minimum cost of 783 for a population size of 500 and remains the same for any increase in the population size. The faster convergence is due to the hybridization with simulated annealing.

Figure 12 Hybrid PSO with varying population size

6.1.4 Cost Evaluation in Parallel PSO

In the first method the cost is compared with an increase in the number of iterations. In this method the cost obtained initially was 1188. The cost reduces as we increase the number of iterations, and Parallel PSO converges to the minimum cost of 857 at the 34th iteration and remains the same till the last iteration; the results are depicted in Figure 13.

Figure 13 Parallel PSO with varying number of iterations

In the second method the cost obtained is compared with an increase in the population size. In this method the cost obtained initially was 880. The cost reduces as we increase the population size, and Parallel PSO converges to the minimum cost of 783 for the population size of 300 and remains the same for any increase in the population size; the results are depicted in Figure 14.

Figure 14 Parallel PSO with varying population size

6.2 Time Taken For Convergence

Next let us consider the time taken for the convergence of the particles. The parallel version of PSO converges faster than all the other versions. This is checked for varying population sizes.

Figure 15. Time Taken for convergence (time in seconds versus population size, 100-500, for the Global Best, Local Best, Hybrid and Parallel versions)
As the population increases, the global and local best versions take longer for convergence, but the hybrid version performs better, and the parallel version better still. This can be inferred from Figure 15.

7. Conclusion

In many problem domains, we are required to assign the tasks of an application to a set of distributed processors such that the incurred cost is minimized and the system throughput is maximized. Several versions of the task assignment problem (TAP) have been formally defined but, unfortunately, most of them are NP-complete. In this paper, we have proposed a particle swarm optimization/simulated annealing (PSO/SA) algorithm which finds a near-optimal task assignment in reasonable time. We then implemented the parallel version, which outperforms all the other versions of PSO. We are currently conducting research on using PSO to solve another version of the TAP with dependent tasks, where the objective is to minimize the cost of accomplishing the task execution in a dynamic environment.

References

[1] J. Kennedy and Russell C. Eberhart (2001), Swarm Intelligence, pp. 337-342, Morgan-Kaufmann.
[2] Peng-Yeng Yin, Shiuh-Sheng Yu, Pei-Pei Wang, Yi-Te Wang (2006), "A hybrid particle swarm optimization algorithm for optimal task assignment in distributed systems", Computer Standards & Interfaces, Vol. 28, pp. 441-450.
[3] Virginia Mary Lo (1988), "Heuristic algorithms for task assignment in distributed systems", IEEE Transactions on Computers, Vol. 37, No. 11, pp. 1384-1397.
[4] A. Abdelmageed Elsadek, B. Earl Wells (1999), "A Heuristic model for task allocation in heterogeneous distributed computing systems", The International Journal of Computers and Their Applications, Vol. 6, No. 1.
[5] M. Fatih Taşgetiren and Yun-Chia Liang, "A Binary Particle Swarm Optimization Algorithm for Lot Sizing Problem", Journal of Economic and Social Research, Vol. 5, No. 2, pp. 1-20.
[6] Tzu-Chiang Chiang, Po-Yin Chang, and Yueh-Min Huang (2006), "Multi-Processor Tasks with Resource and Timing Constraints Using Particle Swarm Optimization", IJCSNS International Journal of Computer Science and Network Security, Vol. 6, No. 4.
[7] K. E. Parsopoulos, M. N. Vrahatis (2002), "Recent approaches to global optimization problems through particle swarm optimization", Natural Computing, Vol. 1, pp. 235-306.
[8] Chen Ai-ling, Yang Gen-ke, Wu Zhi-ming (2006), "Hybrid discrete particle swarm optimization algorithm for capacitated vehicle routing problem", Journal of Zhejiang University, Vol. 7, No. 4, pp. 607-614.
[9] Ruey-Maw Chen, Yueh-Min Huang (2001), "Multiprocessor Task Assignment with Fuzzy Hopfield Neural Network Clustering Techniques", Neural Computing and Applications, Vol. 10, No. 1.
[10] Dar-Tzen Peng, Kang G. Shin, Tarek F. Abdelzaher (1997), "Assignment and Scheduling Communicating Periodic Tasks in Distributed Real-Time Systems", IEEE Transactions on Software Engineering, Vol. 23, No. 12.
[11] Rui Mendes, James Kennedy and José Neves (2005), "The Fully Informed Particle Swarm: Simpler, Maybe Better", …