Adaptive Swarm Intelligence Algorithms in Solving Combinatorial Optimization Problems
by Matrenin, P. V.
Matrenin, P. V. (2014). Adaptive Swarm Intelligence Algorithms in Solving Combinatorial Optimization Problems. In Young Scientist USA (p. 129). Auburn, WA: Lulu Press.
Abstract. The paper proposes a new approach to the study and implementation of swarm intelligence algorithms in which the algorithms' parameters are adapted to the conditions of the problems being solved. The results of applying the approach to the Particle Swarm Optimization algorithm for solving scheduling problems are presented.
Scheduling problems are among the most pressing tasks in the class of NP-hard problems. The complexity of planning problems, together with the constant improvement of automation facilities, has led to a rise of interest in scheduling theory. Despite the diversity of production systems, the formalized description of the scheduling problem (the job-shop problem) presented in [1] and used in this work may be viewed as a basis for a large class of multi-stage systems.
There is a finite set of jobs and a finite set of devices (machine tools, performers). The service process includes several stages, each of which is performed on a certain device during a specific time period. The functioning of the system can be described by specifying a schedule: a set of instructions defining exactly which jobs are to be serviced by which devices at any given moment of time. The best schedule is the one under which the completion of all jobs requires the least time, i.e., the one that minimizes the completion time of the longest job.
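To make the objective concrete, the following minimal Python sketch evaluates the makespan of a job-shop schedule encoded as a sequence of job indices; the data layout and the decoding rule here are illustrative assumptions, not the formalization of [1].

def makespan(jobs, sequence):
    """jobs[j] is the list of (machine, duration) operations of job j;
    the k-th occurrence of j in sequence starts the k-th operation of
    job j as early as possible."""
    next_op = [0] * len(jobs)          # index of the next operation of each job
    job_ready = [0.0] * len(jobs)      # time when the job's previous operation ends
    machine_ready = {}                 # time when each machine becomes free
    for j in sequence:
        machine, duration = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(machine, 0.0))
        job_ready[j] = machine_ready[machine] = start + duration
        next_op[j] += 1
    return max(job_ready)              # completion time of the longest job

# Two jobs on two machines: job 0 uses M0 then M1, job 1 uses M1 then M0.
jobs = [[(0, 3), (1, 2)], [(1, 4), (0, 1)]]
print(makespan(jobs, [0, 1, 0, 1]))    # 6.0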
2. Methods of solution
The brute-force method theoretically allows finding the exact solution, but it is not used in practice because it is too time-consuming: for a task with n jobs and m devices, where each job consists of m stages in different sequences, its complexity is O((n!)^m); for instance, with n = m = 10 this already amounts to roughly 4·10^65 variants. Directed enumeration methods, such as branch and bound, are applicable only in some particular cases [1].
In practice, various heuristic, approximate methods are used. Methods based on mechanisms found in nature, such as genetic algorithms, artificial neural networks, and swarm intelligence, are effective. The adaptive swarm intelligence algorithm proposed in this paper is based on these methods. The concept of swarm intelligence is a promising area of research of both theoretical and practical nature.
The concept of Swarm Intelligence originated in 1989 [2] and is based on the collective behavior of decentralized self-organizing systems. Such systems consist of a population of simple agents locally interacting with each other and with the external environment in order to achieve a predetermined goal, and are characterized by relatively simple behavior of the individual elements combined with intelligent collective behavior.
An essential property of all swarm intelligence methods is the dependence of their efficiency on the coefficients used [3, 4]. Since the coefficients can assume an infinite number of values within a certain range, the question arises of how to select them. It is reasonable to apply a genetic algorithm for this purpose, by analogy with nature, which uses natural selection for evolution. At the moment, the proposed approach has been implemented and studied for Ant Colony Optimization [4] and Particle Swarm Optimization.
The Particle Swarm Optimization method was originally designed for modeling social behavior and is based on the behavior of flocks of birds [3]. Its essence is the movement of particles in the solution space. Consider the task of finding a minimum (maximum) of a function f(X), where X is a vector of variable parameters that can assume values from some domain D. Each particle at each moment is then characterized by the values of the parameters X from the domain D (the coordinates of a point in the solution space) and by the value of the optimized function f(X). A particle "remembers" the best point in the solution space it has visited and seeks to return to it. The so-called shared memory serves as the link between the particles: each particle knows the coordinates of the best point visited by any particle of the swarm. At each iteration, the velocity and position of every particle are updated as
Vi,j+1 = ω·Vij + α1·r1·(Pij - Xij) + α2·r2·(G - Xij),
Xi,j+1 = Xij + Vi,j+1,
where Vij is the velocity of the i-th particle at the j-th iteration of the algorithm, Pij are the coordinates of the best point in the solution space visited by the i-th particle from the first to the j-th iteration, Xij are the coordinates of the position of the i-th particle at the j-th iteration, G are the coordinates of the best point found by the swarm by the j-th iteration, and r1 and r2 are random numbers uniformly distributed in the interval [0, 1). The coefficients α1 and α2 determine the significance for the particle of its own best position and of the best position of the whole swarm, respectively. The coefficient ω characterizes the inertial properties of the particles.
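For illustration, a minimal Python sketch of this update for minimizing a continuous function f(X) over a box-shaped domain D is given below; the test function, the bounds, the swarm size, and the coefficient values are assumptions chosen only for the example, not the settings used in this study.

import random

def pso_minimize(f, bounds, n_particles=30, n_iters=100,
                 omega=0.7, alpha1=1.5, alpha2=1.5):
    # bounds[d] = (lower, upper) limits of the d-th coordinate of the domain D
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    p_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: p_val[i])
    G, g_val = P[g][:], p_val[g]                # shared (swarm-wide) best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (omega * V[i][d]
                           + alpha1 * r1 * (P[i][d] - X[i][d])
                           + alpha2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            value = f(X[i])
            if value < p_val[i]:                # update the particle's own memory
                P[i], p_val[i] = X[i][:], value
                if value < g_val:               # update the shared memory
                    G, g_val = X[i][:], value
    return G, g_val

# Example: minimize the sphere function x1^2 + x2^2 on [-5, 5]^2.
best_point, best_value = pso_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)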
To implement the selection of the parameters, we used a genetic algorithm as an add-on over the swarm intelligence algorithm, in this case over Particle Swarm Optimization. The proposed approach applied to Ant Colony Optimization is presented in [4]; for other algorithms, including Particle Swarm Optimization, the only difference is the number of coefficients and their ranges of admissible values. The approach can be described as follows. The applied optimization problem is solved by a swarm intelligence algorithm, while a genetic algorithm used as an add-on conducts the parameter selection. The algorithm's parameters serve as genes, and the quality of the solution obtained with a given set of coefficients is taken as the value of the fitness function. The best sets of coefficients are selected for the next iteration and undergo crossover and mutation, thus achieving adaptation to the particular problem being solved. The process runs until the stop criterion is reached.
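A hedged sketch of this scheme is given below: a simple genetic algorithm whose genes are the PSO coefficients (ω, α1, α2) and whose fitness is the quality of the schedule found by PSO with those coefficients. The coefficient ranges, population size, and genetic operators are assumptions made for illustration, not the exact settings used in this study.

import random

RANGES = [(0.0, 1.0), (0.0, 2.5), (0.0, 2.5)]    # assumed ranges of omega, alpha1, alpha2

def random_genes():
    return [random.uniform(lo, hi) for lo, hi in RANGES]

def crossover(a, b):
    # each gene is inherited from one of the two parents
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genes, rate=0.2):
    # each gene is re-drawn from its range with a small probability
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g, (lo, hi) in zip(genes, RANGES)]

def adapt_parameters(run_pso, n_genes=10, n_generations=20):
    """run_pso(omega, alpha1, alpha2) -> objective value of the solution found by PSO."""
    population = [random_genes() for _ in range(n_genes)]
    for _ in range(n_generations):
        ranked = sorted(population, key=lambda g: run_pso(*g))   # fitness = solution quality
        parents = ranked[:n_genes // 2]                          # select the best sets
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(n_genes - len(parents))]
        population = parents + children
    return min(population, key=lambda g: run_pso(*g))

# run_pso would wrap the swarm algorithm on the current problem, e.g. reusing the
# earlier pso_minimize sketch on a continuous test function:
# best = adapt_parameters(lambda w, a1, a2: pso_minimize(lambda x: sum(v * v for v in x),
#                                                        [(-5, 5)] * 2, omega=w,
#                                                        alpha1=a1, alpha2=a2)[1])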
The proposed method is very flexible. As can be seen from the above description, it depends neither on the type of the optimization problem nor on the choice of the swarm intelligence algorithm and its implementation, which sets it apart from other methods of coefficient selection.
6. Research results
To study the effectiveness of the proposed method, well-known common test problems were used [5, 6, 7]. Table 1 presents a comparison of the results for some tasks obtained by the particle swarm algorithm without adaptation and with adaptation.
Table 1 (column headings not recoverable): ft06 5 5 5 59.77 55 55 10 5
Table 1 shows that the schedules obtained with adaptation are significantly more effective not only than the average results, but even than the best results obtained by manual selection of the coefficients. In [8], the best results for the problems from [5, 6, 7], obtained by various authors using various methods, are given. The comparison showed that for many problems the results obtained in this study were identical; for some test problems they were worse, but the maximum relative deviation does not exceed 3%. A comparison for some problems is given in Table 2.
Table 2
Task | Adaptive Particle Swarm Optimization | Best known result
ft06 | 55 | 55
7. Conclusion
Using evolutionary adaptation of the parameters of Particle Swarm Optimization increases its efficiency, simplifies its study, and allows the algorithm to adapt to the problems being solved. Adaptation can be implemented with a genetic algorithm, by analogy with evolutionary selection in nature. The proposed approach makes it possible to obtain schedules whose effectiveness is close to that of the best existing methods.
References
1. Tanayev V.S., Sotskov Yu.I., Strusevich V.A. Scheduling Theory. Multi-Stage Systems. Moscow: Nauka, 1989. 328 p.
2. Beni G., Wang J. Swarm Intelligence in Cellular Robotic Systems // Proceedings of the NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, June 26-30, 1989.
3. Kennedy J., Eberhart R.C. Particle Swarm Optimization // Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ. 1995. Pp. 1942-1948.
4. Matrenin P.V., Seca V.G. Optimizing Adaptive Ant Colony Algorithm for the Scheduling Problem // Software Engineering. 2013. No. 4. Pp. 34-40.
5. Adams J., Balas E., Zawack D. The shifting bottleneck procedure for job shop scheduling // Management Science. 1991. No. 34. Pp. 391-401.
6. Fisher H., Thompson G. Probabilistic learning combination of local job-shop scheduling rules // Industrial Scheduling. Englewood Cliffs, NJ: Prentice-Hall, 1963.
7. Lawrence S. Supplement to "Resource Constrained Project Scheduling: An Experimental Investigation of Heuristic Scheduling Techniques" // Tech. rep., GSIA, Carnegie Mellon University, October 1984.
8. Pezzella F., Merelli E. A tabu search method guided by shifting bottleneck for the job shop scheduling problem // European Journal of Operational Research. 2000. No. 120. Pp. 297-310.