A New Optimization Algorithm For Combinatorial Problems 2013
Abstract—Combinatorial optimization problems are those problems that have a finite set of possible solutions. The best way to solve a combinatorial optimization problem is to check all the feasible solutions in the search space. However, checking all the feasible solutions is not always possible, especially when the search space is large. Thus, many meta-heuristic algorithms have been devised and modified to solve these problems. The meta-heuristic approaches are not guaranteed to find the optimal solution, since they evaluate only a subset of the feasible solutions, but they explore different areas of the search space in a smart way to reach a near-optimal solution at less cost and in less time. In this paper, we propose a new meta-heuristic algorithm for solving combinatorial optimization problems, named the Global Neighborhood Algorithm (GNA). The algorithm is principally based on a balance between global and local search. A set of random solutions is first generated from the global search space, and the best of them is taken as the current optimal value. After that, the algorithm iterates, and in each iteration two sets of solutions are generated: one from the global search space, and the other from the neighborhood of the best solution. Throughout the paper, the algorithm is delineated with examples. In the final phase of the research, the results of GNA are discussed and compared with the results of the Genetic Algorithm (GA) as an example of another optimization method.

Keywords—meta-heuristic; optimization; combinatorial problems

I. INTRODUCTION

Many optimization problems are encountered in different domains of manufacturing and industry. Usually the optimization problem that needs to be solved is first formulated and all the constraints are given. The optimization problem mainly consists of an objective function and a set of constraints. The objective function can be in mathematical form or combinatorial form. Once the objective function of the optimization problem is formulated and all the constraints are defined, the main issue is to solve this problem. The solution is usually the best values of the variables, or the best scenario, which can also be called the optimal solution. This optimal solution should give the best performance, or best fitness, in terms of the objective function. In most optimization problems there is more than one local solution. It is therefore very important to choose an optimization method that is not greedy and does not look only in the neighborhood of the current best solution, because this will mislead the search process and leave it stuck at a local solution. Rather, the optimization algorithm should have a mechanism to balance between local and global search. An example of a two-dimensional function that has more than one local and global solution is shown in Fig.1 [1].

There are multiple methods used to solve optimization problems of both the mathematical and combinatorial types. If the optimization problem is simple or the search space is small, the problem can be solved using conventional analytical or numerical procedures. However, if the optimization problem is difficult or the search space is large, it becomes difficult to solve using conventional mathematics or numerical induction techniques. For this reason, many meta-heuristic optimization methods have been developed to solve such difficult optimization problems. These include the Genetic Algorithm (GA), Simulated Annealing (SA), the Ant Colony Algorithm (ACA), and Particle Swarm (PS). Most of these meta-heuristic optimization methods are inspired by nature, biology, or the environment.

The term meta-heuristic refers to a specific class of heuristic methods. Fred Glover first used this term and defined it as follows: "A meta-heuristic refers to a master strategy that guides and modifies other heuristics to produce solutions beyond those that are normally generated in a quest for local optimality. The heuristics guided by such a meta-strategy may be high level procedures or may embody nothing more than a description of available moves for transforming one solution into another, together with an associated evaluation rule." [2]

Meta-heuristic algorithms do not always guarantee an optimal solution; however, in most cases a near-optimal solution can be obtained in much less time than with exact computational methods [3-4].
(IJARAI) International Journal of Advanced Research in Artificial Intelligence,
Vol. 2, No.5, 2013
2) Single-solution and population-based approaches: In single-solution approaches, a unique solution is first generated, and then, based on a certain move criterion, other solutions are generated from it. Meta-heuristic methods that start with a single solution include Tabu Search (TS) and Simulated Annealing (SA). Population-based algorithms, on the other hand, start by generating a set of multiple initial solutions; examples of such methods are the Genetic Algorithm (GA) and the Ant Colony Algorithm (ACA).

The computational drawbacks of mathematical techniques and methods (i.e., complex derivatives, sensitivity to initial values, and the large amount of enumeration memory required) have forced researchers to rely on meta-heuristic algorithms based on simulations and some degree of randomness to solve optimization problems [9]. Although these meta-heuristic approaches are not exact and do not always give the optimal solution, in most cases they give a near-optimal solution with less effort and time than the mathematical methods [10]. The meta-heuristic algorithms are general-purpose stochastic search methods simulating natural selection and biological evolution.

In this paper we introduce a new optimization algorithm that can be applied to combinatorial problems. The new algorithm is named the Global Neighborhood Algorithm (GNA); it is a population-based and derivative-free algorithm, like other evolutionary optimization algorithms including the Genetic Algorithm (GA), the Ant Colony Algorithm (ACA), and Evolution Strategy (ES). A set of solutions is first randomly generated from the entire search space, and the best of these solutions is chosen. After that, the algorithm iterates, and in each iteration two sets of solutions are generated: one from the global search space, and the other from the neighborhood of the best solution. This paper starts with background on optimization problems; then the methodology of the GNA algorithm is explained; after that, results of using this algorithm to solve the well-known Traveling Salesman Problem (TSP) are discussed.

II. METHODOLOGY

The algorithm proposed in this paper is used to optimize combinatorial problems. Combinatorial problems can have more than one local and global optimal value within the
search space values. The proposed methodology works to find the optimal value among these local optima by switching between exploration and exploitation. Exploration allows searching the whole search space; exploitation focuses the search in the neighborhood of the best generated solution.

In order to explain the methodology of the GNA algorithm, assume we have a discrete function that we need to optimize, and let us say that we need to minimize this function (without loss of generality). So the objective function we have is:

minimize g(x1, x2, ..., xN)

Where:

x1, x2, ..., xN are the different combinations of the solution sequence; we can think of these combinations as the city sequence in the TSP problem.

We need to find the optimal combination or solution (x1*, x2*, ..., xN*) that will give the optimal (minimum) value of the objective function g. In general, if each of the variables x1, x2, ..., xN can be chosen in n1, n2, ..., nN ways respectively, then enumerating all the possible solutions will yield n1 × n2 × ... × nN solutions. However, this process could take several hours or days depending on the size of the problem. Thus, using a meta-heuristic approach is better even if it does not always give the optimal solution; in most cases it will give a solution close to the optimal one with less computational power.

According to the GNA algorithm, a set of (m) random solutions is first generated from the set of all possible solutions, where each xi can be chosen in ni ways. The generated solutions will then look like:

Xj = (x1, x2, ..., xN), where j = 1, 2, ..., m

The fitness of each of these solutions is evaluated by substituting it into the objective function (g). The solutions are then sorted according to their fitness obtained from the objective function.

In each subsequent iteration, 50% of the (m) new solutions are generated from the neighborhood of the best solution found so far (exploitation). The other 50% of the (m) generated solutions will still be generated from the whole search space. The reason is to allow exploration of the search space: if we chose only solutions close to the best solution, we would only be able to find the local optimum around that point, and since the function that needs to be optimized can have more than one local optimum, this might leave us stuck at one of them.

Next, the best of the above (m) solutions (50% from the neighborhood, 50% from the whole space) is found. Its value is compared with the best-known solution, and if it is found to be better, it replaces it.

The procedure is then repeated until a certain stopping criterion is met. This stopping criterion can be a pre-specified number of iterations (t), or the absence of further improvement in the value of the optimal solution obtained.

The pseudo code for the GNA algorithm is shown in Fig.3.

Define objective function (g)
Initialize the values for all parameters: m, t
Generate (m) feasible solutions from the search space
Evaluate the fitness from the objective function (g)
Optimal solution = the best solution
i = 1
Do while i < t, i++
    Generate 50% × m solutions from the neighborhood of the best solution
    Generate 50% × m solutions from the search space
    Find the best solution from the (m) generated solutions
    If best solution is better (less) than optimal solution
        Optimal solution = best solution
End do
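As an illustration, the iteration scheme described above can be sketched in Python. The helper names (`random_solution`, `neighbor`) and the toy objective used at the end are assumptions for the example, not part of the paper:

```python
import random

def gna(g, random_solution, neighbor, m, t):
    """Minimize g with a sketch of the Global Neighborhood Algorithm:
    in each iteration, half of the m candidate solutions are drawn from
    the neighborhood of the best solution found so far (exploitation)
    and half from the whole search space (exploration)."""
    # Initial population: m feasible solutions from the entire search space.
    population = [random_solution() for _ in range(m)]
    optimal = min(population, key=g)

    for _ in range(t):
        # 50% x m solutions near the best solution, 50% x m global solutions.
        population = [neighbor(optimal) for _ in range(m // 2)]
        population += [random_solution() for _ in range(m - m // 2)]
        best = min(population, key=g)
        if g(best) < g(optimal):  # keep the best-known solution
            optimal = best
    return optimal

# Toy run: minimize (x - 7)^2 over the integers 0..99.
random.seed(1)
result = gna(lambda x: (x - 7) ** 2,
             random_solution=lambda: random.randrange(100),
             neighbor=lambda x: min(99, max(0, x + random.choice([-1, 1]))),
             m=20, t=200)
```

The 50/50 split is the balance between exploitation and exploration that the methodology describes: the neighborhood draws refine the incumbent, while the global draws guard against getting stuck at a local optimum.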
For the TSP, the total length of a tour is:

D = d(c1, c2) + d(c2, c3) + ... + d(cN-1, cN) + d(cN, c1)

Where:

D: the total distance for a sequence of N cities.
d(ci, ci+1): the Euclidean distance between the current city and the next city to be visited.
d(cN, c1): the Euclidean distance between the last visited city and the first visited city.
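The tour length D can be computed directly; a minimal sketch, assuming cities are given as (x, y) coordinate pairs:

```python
import math

def tour_length(cities):
    """Total distance D of a closed TSP tour: the Euclidean distance from
    each city to the next, plus the wrap-around edge d(cN, c1) from the
    last visited city back to the first."""
    n = len(cities)
    return sum(math.dist(cities[i], cities[(i + 1) % n]) for i in range(n))

# A unit square visited corner by corner gives a tour of length 4.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```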
The GNA algorithm was used to solve the Traveling Salesman Problem (TSP). The TSP consists of a number of cities, each of which must be visited exactly once, starting from one city and ending at the same city. To optimize the TSP, the sequence of cities that gives the minimum cost (distance) of the tour has to be found.

RUN | Fitness of Optimal Solution | Number of Iterations | Run Time
1 | 2026 | 10000 | 28.64
2 | 2022 | 10000 | 28.31
3 | 2026 | 10000 | 28.39
TABLE II shows that the run time for the GA is more than twice the run time for the GNA, and that the solution obtained by the GA is not always close to the known optimal solution. MINITAB software was used to conduct a statistical analysis of the means of the two sets of optimal solutions obtained by GA and GNA. The analysis tests whether there is a statistically significant difference between the average results of the two algorithms; a two-sample t-test was used for this purpose. The output from MINITAB is shown in Fig.5.
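For reference, the statistic behind such a two-sample t-test can be sketched as follows (Welch's form is assumed here; the sample values used in the example are placeholders, not the paper's MINITAB data):

```python
import math
from statistics import mean, stdev

def two_sample_t(a, b):
    """Welch's two-sample t statistic: the difference of the sample means
    divided by the combined standard error of the two samples."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

# Placeholder samples for illustration only.
t_stat = two_sample_t([1, 2, 3], [2, 3, 4])
```

A statistics package then compares this statistic against the t distribution with the appropriate degrees of freedom to obtain the p-value reported in output such as Fig.5.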
IV. CONCLUSION

In this paper, a new meta-heuristic optimization method named the Global Neighborhood Algorithm (GNA) was introduced. This optimization method is a population-based algorithm, since it starts by generating a set of random solutions from the search space of the optimization problem. The proposed algorithm can be used to solve combinatorial optimization problems, which are usually more difficult to solve than continuous optimization problems. The methodology of the algorithm was elaborated, and a 29-city TSP optimization problem was solved using the GNA. The same TSP problem was also solved using the Genetic Algorithm (GA), and the results were compared with those of the GNA. Statistical analysis was conducted using MINITAB software, and it was found that the GNA showed better performance, with results very close to the known optimal solution. Future studies can include different variants of the basic GNA algorithm to enhance its search power.

REFERENCES
[1] T. Weise, Global Optimization Algorithms – Theory and Application, Germany: it-weise.de (self-published), [Online]. Available: http://www.it-weise.de/, 2009.
[2] F. Glover and M. Laguna, Tabu Search, Kluwer Academic Publishers, 1997.
[3] C. R. Reeves and J. E. Beasley, Modern Heuristic Techniques for Combinatorial Problems, McGraw-Hill, 1995.
[4] W. Wang, P. C. Nelson, and T. M. Tirpak, "Optimization of high-speed multistation SMT placement machines using evolutionary algorithms," IEEE Transactions on Electronics Packaging Manufacturing, 22(2), 137-146, 1995.
[5] E. Silver, R. V. Vidal, and D. de Werra, "A tutorial on heuristic methods," European Journal of Operational Research, 5, 153-162, 1980.
[6] S. H. Zanakis, J. R. Evans, and A. A. Vazacopoulos, "Heuristic methods and applications: a categorized survey," European Journal of Operational Research, 43, 88-110, 1989.
[7] D. S. Johnson, "Local optimization and the traveling salesman problem," in G. Goos and J. Hartmanis (eds), Automata, Languages and Programming, Lecture Notes in Computer Science, 442, Springer, Heidelberg, 446-461, 1990.
[8] E. Aarts and J. K. Lenstra, Local Search in Combinatorial Optimization, Wiley, 1997.
[9] K. S. Lee and Z. W. Geem, "A new structural optimization method based on the harmony search algorithm," Computers and Structures, 82, 781-798, 2004.
[10] K. S. Lee and Z. W. Geem, "A new meta-heuristic algorithm for continuous engineering optimization," Computer Methods in Applied Mechanics and Engineering, 194, 3902-3933, 2005.
[11] M. G. Omran and M. Mahdavi, "Global-best harmony search," Applied Mathematics and Computation, 198, 643-656, 2008.
[12] I. Rechenberg, Cybernetic Solution Path of an Experimental Problem, Royal Aircraft Establishment, Library Translation no. 1122, 1965.
[13] L. J. Fogel, A. J. Owens, and M. J. Walsh, Artificial Intelligence Through Simulated Evolution, Chichester, UK: John Wiley, 1966.
[14] K. De Jong, "Analysis of the behavior of a class of genetic adaptive systems," Ph.D. thesis, Ann Arbor, MI: University of Michigan, 1975.
[15] J. R. Koza, "Genetic programming: A paradigm for genetically breeding populations of computer programs to solve problems," Report No. STAN-CS-90-1314, Stanford, CA: Stanford University, 1990.
[16] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Boston, MA: Addison-Wesley, 1989.
[17] J. H. Holland, Adaptation in Natural and Artificial Systems, Ann Arbor, MI: University of Michigan Press, 1975.
[18] F. Glover, "Heuristics for integer programming using surrogate constraints," Decision Sciences, 8(1), 156-166, 1977.
[19] M. Dorigo, V. Maniezzo, and A. Colorni, "The ant system: Optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics, 26(1), 29-41, 1996.
[20] J. Kennedy and R. Eberhart, "Particle swarm optimization," IEEE International Conference on Neural Networks, Perth, Australia, 1942-1948, 1995.
[21] Z. W. Geem, J. H. Kim, and G. Loganathan, "A new heuristic optimization algorithm: Harmony search," Simulation, 76(2), 60, 2001.
[22] S. Nakrani and C. Tovey, "On honey bees and dynamic server allocation in internet hosting centers," Adaptive Behavior, 12(3-4), 223, 2004.
[23] S. Kirkpatrick, C. Gelatt, and M. Vecchi, "Optimization by simulated annealing," Science, 220(4598), 671-680, 1983.
[24] E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, "GSA: A gravitational search algorithm," Information Sciences, 179(13), 2232-2248, 2009.
[25] M. Khajehzadeh, M. R. Taha, A. El-Shafie, and M. Eslami, "A survey on meta-heuristic global optimization algorithms," Research Journal of Applied Sciences, Engineering and Technology, 3(6), 569-578, 2011.
[26] Zuse Institute Berlin, "MP-TESTDATA – The TSPLIB symmetric traveling salesman problem instances," 2010.