


Multiprocessor Scheduling Using Particle Swarm Optimization

S.N. Sivanandam (1), P. Visalakshi (2), and A. Bhuvaneswari (3)

(1) Professor and Head, (2) Senior Lecturer, (3) PG Student
Department of Computer Science and Engineering,
PSG College of Technology,
Peelamedu, Coimbatore – 641004, Tamilnadu, India

Abstract

The problem of task assignment in heterogeneous computing systems has been studied for many years with many variations. We have developed a new hybrid approximation algorithm and also a parallel version of Particle Swarm Optimization to solve the task assignment problem. The proposed hybrid heuristic model involves the Particle Swarm Optimization (PSO) algorithm and the Simulated Annealing (SA) algorithm. This PSO/SA performs static allocation of tasks in a heterogeneous distributed computing system in a manner that is designed to minimize the cost. Particle Swarm Optimization with dynamically reducing inertia is implemented, which yields better results than fixed inertia. The parallel version of Particle Swarm Optimization involves data parallelism. The experimental results show that, of the two proposed methods, the parallel version is effective and efficient in finding near-optimal solutions.

Keywords: Task assignment problem, Distributed systems, Hybrid strategy, Parallel strategy, Particle swarm optimization, Simulated Annealing.

1. Introduction

The problem of scheduling a set of dependent or independent tasks in a distributed computing system is a well-studied area. In this paper, static task allocation [4] in a heterogeneous computing system is examined, which provides a variety of architectural capabilities, orchestrated to perform on application problems whose tasks have diverse execution requirements. Static allocation techniques can be applied to the large set of real-world applications that can be formulated in a manner which allows for deterministic execution. Some advantages of these techniques over dynamic ones, which determine the module assignment during run time, are that static techniques have no run-time overhead and that they can be designed using very complex algorithmic mechanisms which fully utilize the known properties of a given application. In this paper, a very fast and easily implemented hybrid algorithm is presented, based on particle swarm optimization (PSO) [15] and the simulated annealing (SA) algorithm. A parallel version of the PSO algorithm is also proposed. The proposed methods assign the tasks to processors, avoid becoming trapped in local optima, and also lead to faster convergence towards the targeted solution.

2. Related Work

Several research works have been carried out on the Task Assignment Problem (TAP). The


traditional methods such as branch and bound, divide and conquer, and dynamic programming give the global optimum, but are often too time consuming or do not apply to typical real-world problems. The researchers in [9] [10] [23] derived optimal task assignments that minimize the sum of task execution and communication costs with the branch-and-bound method and evaluated the computational complexity of this method using simulation. V.M. Lo [3] notes that many of the heuristic algorithms use a graphical representation of the task-processor system such that a Max Flow/Min Cut algorithm can be utilized to find assignments of tasks to processors which minimize total execution and communication costs [19], and concludes that a measure of the degree to which an algorithm achieves load balancing [20] can yield fairly unbalanced assignments. Traditional methods used in optimization are deterministic and fast and give exact answers, but they often tend to get stuck on local optima, and the exponential time complexity of traditional search algorithms on NP-hard problems cannot be reduced to polynomial.

Consequently, another approach is needed when the traditional methods cannot be applied. The modern heuristic approach helps in such situations. Modern heuristics are general-purpose optimization algorithms; their efficiency or applicability is not tied to any specific problem domain. Available heuristics include Simulated Annealing algorithms [13], Genetic Algorithms [17] [23], and Ant Colony algorithms [24]. Peng-Yeng Yin et al. [2] proposed a hybrid strategy using a Hill Climbing algorithm as a local search method along with Particle Swarm Optimization; the Hill Climbing heuristic has the problem of getting trapped in local optima.

The remainder of this paper is organized as follows. Section 3 formulates the TAP. Section 4 explains the Particle Swarm Optimization heuristic used in this paper. Section 5 illustrates the proposed methods in detail. Section 6 reports the results obtained in this work. Finally, Section 7 discusses the conclusions and directions for further research in this area.

3. Problem Definition

This paper considers the Task Assignment Problem with the following scenario. The system consists of a set of heterogeneous processors (n) having different memory and processing resources, which implies that the tasks (r) encounter different execution costs on different processors. The communication links are assumed to be identical; however, a communication cost between two tasks is incurred when they are executed on different processors. A task makes use of the resources of its execution processor [19].

The objective is to minimize the total execution and communication cost incurred by the task assignment, subject to the resource constraints. To achieve minimum cost for the TAP, the problem is formulated as

\min Q(X) = \sum_{i=1}^{r} \sum_{k=1}^{n} e_{ik} x_{ik} + \sum_{i=1}^{r-1} \sum_{j=i+1}^{r} c_{ij} \left( 1 - \sum_{k=1}^{n} x_{ik} x_{jk} \right)    (1)

subject to

\sum_{k=1}^{n} x_{ik} = 1, \quad \forall i = 1, 2, \ldots, r    (2)

\sum_{i=1}^{r} m_i x_{ik} \le M_k, \quad \forall k = 1, 2, \ldots, n    (3)

\sum_{i=1}^{r} p_i x_{ik} \le P_k, \quad \forall k = 1, 2, \ldots, n    (4)

x_{ik} \in \{0, 1\}, \quad \forall i, k    (5)


x_{ik} is set to 1 if task i is assigned to processor k. n denotes the number of processors, r denotes the number of tasks, and e_{ik} denotes the execution cost incurred if task i is executed on processor k. c_{ij} is the communication cost incurred if tasks i and j are executed on different processors. m_i and p_i represent the memory and processing requirements of task i, respectively. M_k and P_k are the memory and processing capacity of processor k.

Q(X) is the objective function, which combines the total execution cost and the total communication cost, specified as the first and second terms respectively in equation (1). The first constraint, equation (2), states that each task must be assigned to exactly one processor. Equations (3) and (4), the second and third constraints, ensure that the resource demand never exceeds the resource capacity. The final constraint, equation (5), specifies that x_{ik} is a binary decision variable.
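To make equation (1) concrete, the following sketch computes Q(X) for a given assignment. The authors' implementation is in MATLAB (Section 6); this Python/NumPy rendering is our own illustrative translation, with assign mapping each task to a processor index 0..n-1 and e, c given as NumPy arrays.

    import numpy as np

    def total_cost(assign, e, c):
        # Q(X) of equation (1): execution cost of each task on its chosen
        # processor, plus communication cost for every pair of tasks that
        # ends up on different processors.
        r = len(assign)
        execution = sum(e[i, assign[i]] for i in range(r))
        communication = sum(c[i, j]
                            for i in range(r - 1)
                            for j in range(i + 1, r)
                            if assign[i] != assign[j])
        return execution + communication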
4. Particle Swarm Optimization

PSO is a stochastic optimization technique [21] which operates on the principle of social behavior such as bird flocking or fish schooling [1] [15]. In a PSO system, a swarm of individuals (called particles) flies through the search space. Rui Mendes [11] discusses Particle Swarm Optimization in detail. Each particle represents a candidate solution to the optimization problem. The position of a particle is influenced by the best position visited by itself, i.e. its own experience, and by the position of the best particle in its neighborhood, i.e. the experience of neighboring particles. When the neighborhood of a particle is the entire swarm, the best position in the neighborhood is referred to as the global best particle, and the resulting algorithm is referred to as the gbest PSO. When smaller neighborhoods are used, the algorithm is generally referred to as the lbest PSO. The performance of each particle is measured using a fitness function that varies depending on the optimization problem [24].

Each particle in the swarm is represented by the following characteristics: x_i, the current position of the particle; v_i, the current velocity of the particle; and y_i, the personal best position of the particle. The personal best position of particle i is the best position visited by particle i so far. There are two versions for keeping the neighbors' best vector, namely lbest and gbest. In the local version, each particle keeps track of the best vector lbest attained by its local topological neighborhood of particles. For the global version, the best vector gbest is determined over all particles in the entire swarm. Hence, the gbest model is a special case of the lbest model. During each PSO iteration, particle i adjusts its velocity v_{ij} and position vector particle_{ij} through each dimension j by referring, with random multipliers, to the personal best vector (pbest_{ij}) and the swarm's best vector (gbest_j, if the global version is adopted). If the global version is adopted, equations (6) and (7) are used:

v_{ij} = w \cdot v_{ij} + c_1 \cdot rand_1 \cdot (pbest_{ij} - particle_{ij}) + c_2 \cdot rand_2 \cdot (gbest_j - particle_{ij})    (6)

particle_{ij} = particle_{ij} + v_{ij}    (7)

If the local version is adopted, then equations (8) and (9) are used:

v_{ij} = w \cdot v_{ij} + c_1 \cdot rand_1 \cdot (pbest_{ij} - particle_{ij}) + c_2 \cdot rand_2 \cdot (lbest_j - particle_{ij})    (8)

particle_{ij} = particle_{ij} + v_{ij}    (9)

where c_1 and c_2 are the cognitive coefficients and rand_1 and rand_2 are random real numbers drawn from U(0, 1). Thus, the


particle flies through potential solutions toward pbest_i and gbest in a navigated way, while still exploring new areas by the stochastic mechanism, to escape from local optima. If c_1 = c_2, each particle will be attracted to the average of pbest and gbest [1]. Since c_1 expresses how much the particle trusts its own past experience, it is called the cognitive parameter, and since c_2 expresses how much it trusts the swarm, it is called the social parameter. Most implementations use a setting with c_1 roughly equal to c_2 [2]. The inertia weight w controls the momentum of the particle. The inertia weight can be dynamically varied by applying an annealing scheme for the w-setting of the PSO, where w decreases from w = 0.9 to w = 0.4 over the whole run. A significant performance improvement is seen by varying the inertia.
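A minimal sketch of one gbest update step per equations (6) and (7), with the inertia weight w annealed linearly from 0.9 down to 0.4 over the run as described above. The value c1 = c2 = 2.0 is a conventional choice we assume here; the text only says the two coefficients are usually set roughly equal.

    import numpy as np

    rng = np.random.default_rng()

    def pso_step(pos, vel, pbest, gbest, it, max_it,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
        # Equations (6)-(7): inertia term plus cognitive and social pulls.
        w = w_max - (w_max - w_min) * it / max_it   # annealed inertia weight
        r1 = rng.random(pos.shape)                  # rand1 ~ U(0, 1)
        r2 = rng.random(pos.shape)                  # rand2 ~ U(0, 1)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        return pos + vel, vel

The lbest variant of equations (8) and (9) is identical except that the per-particle neighborhood best is passed in place of gbest.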
4.1 Neighborhood Topology

The selection of the neighborhood plays a major role in reaching the optimal solution faster. The various topology structures are illustrated in Figure 1. The all topology represents a fully connected graph, and, based on all the statistics, it is conjectured that information spreads quickly [11]. Sociologically, it could represent a small and closed community where decisions are taken in consensus.

The ring topology represents a regular graph with a minimum number of edges between its nodes. The graph statistics show that information travels slowly along the graph. This allows different regions of the search space to be explored at the same time, as information about successful regions takes a long time to travel to the other side of the graph. This is called the k-best topology in general (each node connected to k nodes); with k = 2 it is a circle topology, and when fully connected it becomes the gbest topology. The four clusters topology represents four cliques connected among themselves by several gateways. Sociologically, it resembles four mostly isolated communities, where a few individuals have an acquaintance outside their group.

The pyramid represents a three-dimensional wire-frame pyramid. It has the lowest average distance of all the graphs and the highest number of first- and second-degree neighbors. The square is a graph representing a rectangular lattice that folds like a torus. This structure is commonly used to represent neighborhoods in the Evolutionary Computation and Cellular Automata communities, and is referred to as the von Neumann neighborhood (hypercube topology).

In the wheel topology, the only connections are from one central particle to the others. Based on the available topologies [15], it can be concluded that the shorter the average topological distance between any two particles, the faster the convergence, and that less connected topologies do not seem to prevent premature convergence [5]. It must be noted that the results only indicate (although with some certainty) that the best topology is gbest in general.

Figure 1. Fully connected, Ring and Master-Slave Topology in PSO.

5. Proposed Methodology

This section discusses Simple PSO, the proposed Hybrid PSO and the proposed Parallel PSO. In PSO, each particle


corresponds to a candidate solution of the underlying problem. Thus, we let each particle represent a decision for task assignment using a vector of r elements, each element being an integer value between 1 and n. Figure 2 shows an illustrative example where each row represents a particle corresponding to a task assignment that assigns five tasks to three processors; for instance, the entry of particle 3 at T4 being P1 means that Task 4 is assigned to Processor 1.

               T1  T2  T3  T4  T5
    particle 1 P3  P2  P1  P2  P2
    particle 2 P1  P2  P3  P1  P1
    particle 3 P1  P3  P2  P1  P2
    particle 4 P2  P1  P2  P3  P1
    particle 5 P2  P2  P1  P3  P1

Figure 2. Representation of particles
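A swarm of this shape can be generated directly. A minimal sketch with the toy sizes of Figure 2 (five particles, r = 5 tasks, n = 3 processors), using 0-based processor indices to stand for P1..P3 so that it composes with the other sketches in this paper:

    import numpy as np

    rng = np.random.default_rng()
    # Entry [s, i] is the processor (0..2, standing for P1..P3) that
    # particle s assigns task i to; one row per particle, as in Figure 2.
    swarm = rng.integers(0, 3, size=(5, 5))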

In the hybrid version, hybridization is done by performing simulated annealing at the end of each iteration of simple PSO [22]. In the parallel version of PSO, data parallelization is implemented. Generally, the Task Assignment Problem is to assign the r tasks to n processors so that the load is shared and also balanced. Here the proposed system considers r = 20 and n = 5, i.e. 20 tasks are to be shared among 5 processors. The system calculates the fitness value of each assignment and selects the optimal assignment from the set of solutions. The system compares the memory and processing capacity of each processor with the memory and processing requirements of the tasks, respectively. If the capacity is sufficient then the task is assigned; otherwise a penalty is added to the calculated fitness value.

5.1 Fitness Evaluation

The initial population is generated randomly and checked for consistency [6]. Then each particle must be assigned a randomly obtained velocity lying in the interval [0, 1]. Each solution vector in the solution space is evaluated by calculating its fitness value. The objective value Q(X) in equation (1) can be used to measure the quality of each solution vector. In modern heuristics, infeasible solutions are also considered, since they may provide a valuable clue towards the optimal solution [14]. A penalty function is devised to estimate the infeasibility level of a solution. The penalty function is only related to constraints (3) and (4), and it is given by

Penalty(X) = \max\left( 0, \sum_{i=1}^{r} m_i x_{ik} - M_k \right) + \max\left( 0, \sum_{i=1}^{r} p_i x_{ik} - P_k \right)    (10)

This penalty is added to the objective function whenever the resource requirement exceeds the capacity. Hence the fitness function of the particle vector can finally be defined as

Fitness(X) = (Q(X) + Penalty(X))^{-1}    (11)

Hence, as the fitness value increases, the total cost is minimized, which is the objective of the problem.
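A sketch of equations (10) and (11), reusing total_cost from the Section 3 sketch; we assume the violation in equation (10) is accumulated over all processors k:

    def fitness(assign, e, c, m, p, M, P):
        # Equation (10): penalty for memory and processing overload,
        # accumulated over every processor (our reading of the formula).
        r, n = len(assign), len(M)
        penalty = 0.0
        for k in range(n):
            on_k = [i for i in range(r) if assign[i] == k]
            penalty += max(0.0, sum(m[i] for i in on_k) - M[k])
            penalty += max(0.0, sum(p[i] for i in on_k) - P[k])
        # Equation (11): higher fitness means lower total cost.
        return 1.0 / (total_cost(assign, e, c) + penalty)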


5.2 Simple PSO

As seen earlier in Section 4, the classical PSO is very simple. There are two versions for keeping the neighbors' best vector, namely lbest and gbest. Figures 3 and 4 depict the two variants as flow charts; they differ only in which best vector drives the velocity update.

Figure 3. Global best PSO:
1. Generate the initial swarm.
2. Evaluate the initial swarm using the fitness function.
3. Initialize the personal best of each particle and the global best of the entire swarm.
4. Update the particle velocities using the global best.
5. Apply the velocities to the particles' positions.
6. Evaluate the new particle positions and re-evaluate the original swarm.
7. Find the new personal best and global best.
8. If the maximum iteration has not been reached, go to step 4; otherwise take the best individual from the last generation.

Figure 4. Local best PSO: the same flow, except that in step 4 the particle velocities are updated using the personal best rather than the global best.
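The flow of Figure 3 maps directly to code. Below is a compact sketch of the gbest loop, reusing pso_step, rng and a fitness callable fit from the earlier sketches; keeping the positions continuous and rounding them to processor indices 0..n-1 at evaluation time is our own assumption, since the paper does not spell out the discretization.

    import numpy as np

    def gbest_pso(fit, r, n, swarm_size=20, max_it=100):
        # Figure 3 as code: initialize, then iterate velocity/position
        # updates while tracking personal bests and the global best.
        pos = rng.uniform(0, n - 1, size=(swarm_size, r))
        vel = rng.uniform(0, 1, size=(swarm_size, r))  # velocities start in [0, 1]
        decode = lambda x: np.clip(np.rint(x), 0, n - 1).astype(int)
        pbest = pos.copy()
        pfit = np.array([fit(decode(x)) for x in pos])
        for it in range(max_it):
            gbest = pbest[pfit.argmax()]               # swarm's best vector
            pos, vel = pso_step(pos, vel, pbest, gbest, it, max_it)
            f = np.array([fit(decode(x)) for x in pos])
            better = f > pfit                          # improved particles
            pbest[better], pfit[better] = pos[better], f[better]
        return decode(pbest[pfit.argmax()])

In use, fit freezes the problem data, e.g. fit = lambda a: fitness(a, e, c, m, p, M, P).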


The global neighborhood, gbest, is the most intuitive neighborhood. In the local neighborhood, lbest, a particle is connected to only a fragmentary number of particles [1].

5.2.1 GBEST PSO

In the global version, every particle has access to the fitness and best value so far of all other particles in the swarm. Each particle compares its fitness value with those of all other particles. This method implements the star topology. It has strong exploitation of the solution space but weak exploration [18]. The implementation is depicted as a flow chart in Figure 3.

5.2.2 LBEST PSO

In the local version, each particle keeps track of the best vector lbest attained by its local topological neighborhood of particles. Each particle compares itself with its neighbors, decided based on the size of the neighborhood. The groups exchange information about local optima. Here the exploitation of the solution space is weakened and exploration becomes stronger. The implementation is depicted as a flow chart in Figure 4.

5.3 Hybrid PSO

Modern meta-heuristics manage to combine exploration and exploitation search. The exploration search seeks new regions, and once it finds a good region, the exploitation search kicks in. However, since the two strategies are usually intertwined, the search may be conducted to other regions before it reaches the local optima. As a result, many researchers suggest employing a hybrid strategy, which embeds a local optimizer between the iterations of the meta-heuristic [2] [8].

The embedded simulated annealing heuristic proceeds as follows. Given a particle vector, its r elements are sequentially examined for updating. The value of the examined element is replaced, in turn, by each integer value from 1 to n, and the value that attains the highest fitness among them is retained. While an element is examined, the values of the remaining r-1 elements remain unchanged. A neighbor of the new particle is then selected, the fitness values of the new particle and its neighbor are found and compared, and the minimum value is assigned to the personal best of this particle. The heuristic terminates when all the elements of the particle have been examined for updating and all the particles have been examined. The computation of the fitness value due to an element update can be economized: since a value change in one element affects the assignment of exactly one task, we can save fitness computation by recalculating only the system costs and constraint conditions related to the reassigned task. The flow is shown in Figure 5.


Figure 5. Hybrid PSO:
1. Generate the initial swarm.
2. Evaluate the initial swarm using the fitness function.
3. Initialize the personal best of each particle and the global best of the entire swarm.
4. Update the particle velocities using the personal best or local best.
5. Apply the velocities to the particles' positions.
6. Evaluate the new particle positions.
7. Re-evaluate the original swarm and find the new personal best and global best.
8. Improve solution quality using simulated annealing.
9. If the maximum iteration has not been reached, go to step 4; otherwise take the best individual from the last generation.
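A sketch of step 8, the element-wise refinement described in Section 5.3: each element is tried against every processor while the rest of the vector is held fixed, and the value with the highest fitness is retained. For brevity this omits the neighbor comparison and the incremental recalculation of only the reassigned task's costs mentioned in the text; fit is a fitness callable as in the earlier sketches.

    def refine(assign, fit, n):
        # Examine each of the r elements in turn; try every processor
        # 0..n-1 for it and keep the best, leaving the remaining
        # elements unchanged while this one is examined.
        for i in range(len(assign)):
            best_k, best_f = assign[i], fit(assign)
            for k in range(n):
                assign[i] = k
                f = fit(assign)
                if f > best_f:
                    best_f, best_k = f, k
            assign[i] = best_k   # retain the highest-fitness value
        return assign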


5.4 Parallel PSO

The parallel algorithm is summarized by the flow chart of Figure 6:

1. START: initialize the constants K, m, c1, c2, w_max and w_min.
2. Randomly initialize the positions of the K particles.
3. Randomly initialize the velocities of the K particles.
4. Evaluate F(X_K1), F(X_K2), ..., F(X_Km) in parallel.
5. Update the local best of each of the K particles.
6. Update the global best of the K particles obtained till now.
7. If m iterations have not been reached, return to step 4.
8. Otherwise, if the fitness value has stayed the same for m*2 evaluations, the output is reached: STOP.

Figure 6. Parallel PSO


In the parallel version, we implemented data parallelism [12], which involves the generation of new particles in every iteration. The existing particles are evaluated and new particles are generated in each iteration. If a new particle has a better solution, the existing particles get adjusted towards that value. The search space is enlarged in this version; hence the convergence is also much faster, outperforming all the other versions of PSO. Figure 6 illustrates the working of parallel PSO, where K is the population size and m is the maximum number of iterations specified.
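The paper does not give the implementation details of its data parallelism; a minimal sketch of one common reading, evaluating the fitness of the whole swarm concurrently in the spirit of [12], is shown below. fit must be a picklable top-level function, and on platforms that spawn processes the call belongs under an if __name__ == "__main__": guard.

    from multiprocessing import Pool

    def evaluate_swarm(swarm, fit):
        # Fan the fitness evaluations out across worker processes,
        # one task per particle, and collect the results in order.
        with Pool() as pool:
            return pool.map(fit, list(swarm))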
6. Results and Discussion

This section describes the results of simulations conducted to gain insight into the performance of the PSO algorithm implementations. The various versions of the PSO algorithm, namely the simple (global and local best) PSO, the Hybrid PSO with Simulated Annealing, and the Parallel PSO, were implemented in MATLAB 7.0.1 and run with E. Taillard's benchmark data. The experimental results clearly demonstrate the effectiveness of the parallel version of PSO. The value of (r, n) is set to (20, 5). The values of the other parameters are generated randomly as specified in [2]. The results of this experiment are obtained by varying the number of particles, the number of iterations and the topology of the neighborhood particles.

6.1 Cost Evaluation

A task incurs an execution cost, and a communication cost arises when communicating tasks are executed on different machines. Our objective is to minimize this total cost. We shall discuss the evaluation of cost in the various versions of PSO. For each version the number of iterations was increased up to 100 and the results were recorded.

6.1.1 Cost Evaluation in Global Best PSO

In the first method the cost is compared against an increasing number of iterations. The cost obtained initially was 1011. As illustrated in Figure 7, the cost reduces as the number of iterations increases, and Gbest PSO converges to the minimum cost of 857 at the 28th iteration and remains the same till the last iteration.

Figure 7. Global best PSO with varying number of iterations (cost vs. iterations).

In the second method the cost is compared against an increasing population size. The cost initially was 883. As illustrated in Figure 8, the cost reduces as the population size increases, and Gbest PSO converges to the minimum cost of 783 for


the population size of 500 and remains the same despite further increases in the population size.

Figure 8. Decreasing cost in Gbest PSO with varying population size.

6.1.2 Cost Evaluation in Local Best PSO

In the first method the cost is compared against an increasing number of iterations. Figure 9 depicts the cost obtained initially as 1035. The cost reduces as the number of iterations increases, and Lbest PSO converges to the minimum cost of 857 only at the 100th iteration. This is because each particle gets information only from its neighbors.

Figure 9. Local best PSO with varying number of iterations.

In the second method the cost is compared against an increasing population size. Figure 10 depicts the cost obtained initially as 914. The cost reduces as the population size increases, and Lbest PSO converges to the minimum cost of 818 for the population size of 500. The increase in cost is because each particle gets information only from its neighbors.

Figure 10. Decreasing cost in Lbest PSO with varying population size.

6.1.3 Cost Evaluation in Hybrid PSO

In the first method the cost is compared against an increasing number of iterations. In this method the cost obtained initially was 940, as shown in Figure 11. The cost reduces as the number of iterations increases, and Hybrid PSO converges to the minimum cost of 857 at the 21st iteration and remains the same till the last iteration. The faster convergence is due to the hybridization with simulated annealing.

Figure 11. Hybrid PSO with varying number of iterations.

In the second method the cost is compared against an increasing


population size. In this method the cost obtained initially was 885, as shown in Figure 12. The cost reduces as the population size increases, and Hybrid PSO converges to the minimum cost of 783 for a population size of 500 and remains the same for any further increase in the population size. The faster convergence is due to the hybridization with simulated annealing.

Figure 12. Hybrid PSO with varying population size.

6.1.4 Cost Evaluation in Parallel PSO

In the first method the cost is compared against an increasing number of iterations. In this method the cost obtained initially was 1188. The cost reduces as the number of iterations increases, and Parallel PSO converges to the minimum cost of 857 at the 34th iteration and remains the same till the last iteration; the results are depicted in Figure 13.

Figure 13. Parallel PSO with varying number of iterations.

In the second method the cost is compared against an increasing population size. In this method the cost obtained initially was 880. The cost reduces as the population size increases, and Parallel PSO converges to the minimum cost of 783 for the population size of 300 and remains the same for any further increase in the population size; the results are depicted in Figure 14.

Figure 14. Parallel PSO with varying population size.

6.2 Time Taken for Convergence

Next let us consider the time taken for the convergence of the particles. The parallel version of PSO converges faster than all the other versions. This is checked for varying population sizes.

Figure 15. Time taken for convergence (in seconds) of the Global Best, Local Best, Hybrid and Parallel versions, for varying population size.


As the population increases, the global and local best versions take longer to converge, but the hybrid version performs better and the parallel version better still. This can be inferred from Figure 15.

7. Conclusion

In many problem domains, we are required to assign the tasks of an application to a set of distributed processors such that the incurred cost is minimized and the system throughput is maximized. Several versions of the task assignment problem (TAP) have been formally defined but, unfortunately, most of them are NP-complete. In this paper, we have proposed a particle swarm optimization/simulated annealing (PSO/SA) algorithm which finds a near-optimal task assignment in reasonable time. We then implemented the parallel version, which outperforms all the other versions of PSO. We are currently conducting research on using PSO to solve another version of the TAP with dependent tasks, where the problem objective is to minimize the cost of accomplishing the task execution in a dynamic environment.

References

[1] J. Kennedy and R. C. Eberhart (2001), Swarm Intelligence, pp. 337-342, Morgan Kaufmann.
[2] Peng-Yeng Yin, Shiuh-Sheng Yu, Pei-Pei Wang, Yi-Te Wang (2006), "A hybrid particle swarm optimization algorithm for optimal task assignment in distributed systems", Computer Standards & Interfaces, Vol. 28, pp. 441-450.
[3] Virginia Mary Lo (1988), "Heuristic algorithms for task assignment in distributed systems", IEEE Transactions on Computers, Vol. 37, No. 11, pp. 1384-1397.
[4] A. Abdelmageed Elsadek, B. Earl Wells (1999), "A heuristic model for task allocation in heterogeneous distributed computing systems", The International Journal of Computers and Their Applications, Vol. 6, No. 1.
[5] M. Fatih Taşgetiren and Yun-Chia Liang, "A binary particle swarm optimization algorithm for lot sizing problem", Journal of Economic and Social Research, Vol. 5, No. 2, pp. 1-20.
[6] Tzu-Chiang Chiang, Po-Yin Chang, and Yueh-Min Huang (2006), "Multi-processor tasks with resource and timing constraints using particle swarm optimization", IJCSNS International Journal of Computer Science and Network Security, Vol. 6, No. 4.
[7] K. E. Parsopoulos, M. N. Vrahatis (2002), "Recent approaches to global optimization problems through particle swarm optimization", Natural Computing, Vol. 1, pp. 235-306.
[8] Chen Ai-ling, Yang Gen-ke, Wu Zhi-ming (2006), "Hybrid discrete particle swarm optimization algorithm for capacitated vehicle routing problem", Journal of Zhejiang University, Vol. 7, No. 4, pp. 607-614.
[9] Ruey-Maw Chen, Yueh-Min Huang (2001), "Multiprocessor task assignment with fuzzy Hopfield neural network clustering techniques", Neural Computing and Applications, Vol. 10, No. 1.
[10] Dar-Tzen Peng, Kang G. Shin, Tarek F. Abdelzaher (1997), "Assignment and scheduling communicating periodic tasks in distributed real-time systems", IEEE Transactions on Software Engineering, Vol. 23, No. 12.

[11] Rui Mendes, James Kennedy and José Neves (2004), "The fully informed particle swarm: simpler, maybe better", IEEE Transactions on Evolutionary Computation, Vol. 8, No. 3.
[12] J. F. Schutte, J. A. Reinbolt, B. J. Fregly, R. T. Haftka and A. D. George (2004), "Parallel global optimization with the particle swarm algorithm", International Journal for Numerical Methods in Engineering, Vol. 61, pp. 2296-2315.
[13] I. H. Osman (1993), "Metastrategy simulated annealing and tabu search algorithms for the vehicle routing problem", Annals of Operations Research, Vol. 41, No. 4, pp. 421-451.
[14] Y. Shi and R. Eberhart (1998), "Parameter selection in particle swarm optimization", Evolutionary Programming VII: Proceedings of Evolutionary Programming, pp. 591-600.
[15] James Kennedy, Russell Eberhart (1995), "Particle swarm optimization", Proc. IEEE International Conference on Neural Networks, Vol. 4, pp. 1942-1948.
[16] F. van den Bergh, A. P. Engelbrecht (2006), "A study of particle swarm optimization particle trajectories", Information Sciences, Vol. 176, pp. 937-971.
[17] Edwin S. H. Hou, Nirwan Ansari, and Hong Ren (1994), "A genetic algorithm for multiprocessor scheduling", IEEE Transactions on Parallel and Distributed Systems, Vol. 5, No. 2.
[18] Maurice Clerc and James Kennedy (2002), "The particle swarm: explosion, stability, and convergence in a multidimensional complex space", IEEE Transactions on Evolutionary Computation, Vol. 6, No. 1.
[19] Ioan Cristian Trelea (2003), "The particle swarm optimization algorithm: convergence analysis and parameter selection", Information Processing Letters, Vol. 85, pp. 317-325.
[20] S. Bataineh and M. Al-Ibrahim (1998), "Load management in loosely coupled multiprocessor systems", Journal of Dynamics and Control, Vol. 8, No. 1, pp. 107-116.
[21] Yuhui Shi (2004), "Particle swarm optimization", IEEE Neural Networks Society.
[22] Yskandar Hamam, Khalil S. Hindi (2000), "Assignment of program modules to processors: a simulated annealing approach", European Journal of Operational Research, Vol. 122, pp. 509-513.
[23] Annie S. Wu, Shiyun Jin, Kuo-Chi Lin and Guy Schiavone (2004), "Incremental genetic algorithm approach to multiprocessor scheduling", IEEE Transactions on Parallel and Distributed Systems.
[24] Graham Ritchie (2003), "Static multi-processor scheduling with ant colony optimisation and local search", Master of Science thesis, University of Edinburgh.
