Metaheuristic Optimization Frameworks: A Survey and Benchmarking
DOI 10.1007/s00500-011-0754-8

J. A. Parejo · A. Ruiz-Cortés · S. Lozano · P. Fernandez
University of Sevilla, Seville, Spain
e-mail: [email protected]
Abstract This paper performs an unprecedented comparative study of metaheuristic optimization frameworks. As criteria for comparison, a set of 271 features grouped into 30 characteristics and 6 areas has been selected. These features include the different metaheuristic techniques covered, mechanisms for solution encoding, constraint handling, neighborhood specification, hybridization, parallel and distributed computation, software engineering best practices, documentation and user interface, etc. A metric has been defined for each feature so that the scores obtained by a framework are averaged within each group of features, leading to a final average score for each framework. Out of 33 frameworks, ten have been selected from the literature using well-defined filtering criteria, and the results of the comparison are analyzed with the aim of identifying improvement areas and gaps in specific frameworks and in the whole set. Generally speaking, a significant lack of support has been found for hyper-heuristics and for parallel and distributed computing capabilities. It is also desirable to have a wider implementation of some software engineering best practices. Finally, wider support for some metaheuristics and for hybridization capabilities is needed.

1 Introduction and motivation

Heuristic methods have proven to be a comprehensive tool to solve hard optimization problems; they bring a balance of "good" solutions (relatively close to the global optimum) and affordable time and cost. However, heuristics are usually based on specific characteristics of the problem at hand, which makes their design and development a complex task. In order to overcome this drawback, metaheuristics appear as a significant advance (Glover 1977); they are problem-agnostic algorithms that can be adapted to incorporate problem-specific knowledge. Metaheuristics have developed remarkably in recent decades (Voß 2001), becoming popular and being applied to many problems in diverse areas (Glover and Kochenberger 2002; Back et al. 1997). However, when new problems are considered, metaheuristics must be implemented and tested, implying costs and risks.

As a solution, the object-oriented paradigm has become a successful mechanism used to ease the burden of application development and, particularly, of adapting a given metaheuristic to the specific problem to solve. Based on this paradigm, there are a number of proposals that jointly offer support for the most widespread techniques, platforms and languages. In this article, we coin the term metaheuristic optimization frameworks (MOFs) for this kind of approach.

In addition to the advantages of having pre-implemented metaheuristics in terms of testing and reuse, using a MOF provides a further valuable benefit: MOFs support the evaluation and comparison of different metaheuristics in order to select the best-performing one for the problem at hand.

However, as the number of alternatives is extensive (we have identified 33 different MOFs in the literature), this becomes a double-edged sword and the choice of the right MOF becomes a major issue. Due to the wide number of metaheuristics (and variants), each MOF is focused on a particular subset; in this context, not choosing the right MOF leads to a no-win situation; it would imply further costs due to the change from one MOF to another, or the risk of obtaining a sub-optimal solution due to the use of inappropriate metaheuristics.
A comparative framework is a useful tool to guide the selection of the MOF that best suits a particular scenario. However, comparisons of frameworks in the literature are either informal evaluations using the authors' own criteria or focused on performance (Wilson et al. 2004). Gagnè and Parizeau (2006) present a comparison (over 6 features) of MOFs supporting evolutionary algorithms. Voß (2002) presents a constructive discussion of various software libraries, but there is a lack of a comparative analysis. Alternatively, some articles presenting a concrete MOF (such as Cahon et al. 2004; Di Gaspero and Schaerf 2003) include a related work section with a comparison of specific features with other MOFs; however, those works present a narrow perspective with a comparison of a reduced set of MOFs.

To the best of our knowledge, no general reviews nor detailed comparative studies of MOFs have been conducted in the literature. Moreover, a conceptual discussion about the desirable set of features of a MOF has not been carried out.

The key point of this article is to provide a general comparative framework to guide the selection of a particular MOF and to evaluate the current MOFs found in the literature from a research perspective. In doing so, this article extends the comparative framework of Gagnè and Parizeau (2006), including frameworks that incorporate several types of metaheuristic techniques (cf. Sect. 4), and presents a comparative analysis of a large set of features. Specifically, this paper advances the state of the art in the following:

1. A general comparative framework for MOFs that can be used to classify, evaluate and compare them.
2. An analysis of the current relevant MOFs in the literature based on the comparative framework proposed.
3. An evaluation of the current state of the art of MOFs from the research context that can be used: (i) to guide newcomers in the area and (ii) to identify relevant gaps to MOF developers.

It is important to highlight that the main value of this study lies neither in comparing the rankings of two concrete MOFs in a feature or characteristic, nor in stating which MOF better fulfills the benchmark criteria. The main contribution of the paper is the establishment of a general comparison framework which clearly defines the set of desirable features of MOFs, depicting a real "state of the art" MOF with improvement directions and gaps in feature support. This comparison framework has shown its value and generality, allowing the evaluation of the new versions of assessed MOFs released during the realization of this study without modifications (four MOFs released new versions). Moreover, the possibility of downloading the benchmark as a spreadsheet and tailoring it to user needs by modifying its weights is also crucial for making it more relevant and applicable.

The remainder of this article is organized as follows: Section 2 defines what a metaheuristic optimization framework is and outlines the advantages and disadvantages of using such tools. Next, Sect. 3 describes the methodology used to create our comparative framework, divided into six areas. In further sections, each area is developed in detail (Sects. 4 to 9), defining a set of characteristics, their importance, and the metrics and data sources used for their evaluation. In each section, charts and interesting results on the current support by the selected MOFs are provided. In Sect. 10 we discuss the results obtained from a global perspective, showing significant gaps and general tendencies. Finally, in Sect. 11 we summarize and present the main conclusions and future work.

Details about MOF assessment are provided as tables in the "Appendix" and at http://www.isa.us.es/MOFComparison.

2 Metaheuristic optimization frameworks

Problem types that model real-life situations (e.g. the traveling salesman problem, knapsack problem, MAX-SAT problem, etc.) have concrete instances whose solution space contains specific solutions. When those solutions are evaluated using an objective function (or a set of functions for multi-objective problems), we can define an optimization problem as searching for the solution that provides the maximum (or minimum) value.

According to Glover and Kochenberger (2002), we define metaheuristics as: "An iterative process that guides the operation of one or more subordinate heuristics (which may range from a local search process to a constructive process of random solutions) to efficiently produce quality solutions for a problem". An interesting concept in this definition is the establishment of two distinct levels for metaheuristic problem solving: the heuristic level, which is by definition highly dependent on the problem, and the metaheuristic level, which builds on the former but is expressed as a problem-independent process. For instance, when we apply simulated annealing (SA) (Kirkpatrick et al. 1983), we use three subordinate heuristics: the creation of an initial solution to the problem, the generation of similar (neighboring) solutions to a given solution by some criterion, and the evaluation of solutions (objective function). These heuristics are highly dependent on the specific problem addressed and on how we encode solutions, but based on them, we can establish a general iterative algorithm that has been successfully applied to a huge variety of problems.
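To make this two-level separation concrete, the following sketch (all interface and class names are hypothetical and do not belong to any assessed MOF) keeps the simulated annealing loop problem-independent and delegates the three subordinate heuristics mentioned above to pluggable components:

```java
import java.util.Random;

// Hypothetical interfaces for the three problem-dependent subordinate heuristics named in the text;
// real MOFs (ECJ, ParadisEO, FOM, ...) expose their own APIs for the same roles.
interface InitialSolutionBuilder<S> { S build(Random rnd); }               // creation of an initial solution
interface NeighborGenerator<S>      { S neighbor(S current, Random rnd); } // generation of similar solutions
interface ObjectiveFunction<S>      { double evaluate(S solution); }       // evaluation (minimized here)

/** Problem-independent simulated annealing loop built only on the interfaces above. */
final class SimulatedAnnealing<S> {
    private final InitialSolutionBuilder<S> builder;
    private final NeighborGenerator<S> neighbors;
    private final ObjectiveFunction<S> objective;

    SimulatedAnnealing(InitialSolutionBuilder<S> b, NeighborGenerator<S> n, ObjectiveFunction<S> o) {
        this.builder = b; this.neighbors = n; this.objective = o;
    }

    S solve(double temperature, double coolingRate, int iterations, Random rnd) {
        S current = builder.build(rnd);
        S best = current;
        for (int i = 0; i < iterations; i++) {
            S candidate = neighbors.neighbor(current, rnd);
            double delta = objective.evaluate(candidate) - objective.evaluate(current);
            // Metropolis acceptance: always accept improvements; accept worsening moves with prob e^(-delta/T)
            if (delta <= 0 || rnd.nextDouble() < Math.exp(-delta / temperature)) {
                current = candidate;
            }
            if (objective.evaluate(current) < objective.evaluate(best)) {
                best = current;
            }
            temperature *= coolingRate; // simple exponential cooling
        }
        return best;
    }
}
```

Adapting the algorithm to a new problem then amounts to supplying implementations of the three interfaces; packaging and reusing exactly this kind of separation is the service that MOFs provide.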
Thus, for each metaheuristic technique and type of problem, we have a set of subordinate heuristics that define how the metaheuristic is adapted to the problem type at hand. Note that a given problem type may have multiple valid sets of subordinate heuristics. For instance, when using a genetic algorithm we can have different solution encodings (e.g. using bit strings or integer vectors) and, consequently, different ways of generating the initial population, different crossover and mutation operators, etc. For a specific instance of a problem, the application of a metaheuristic will provide a solution that depends on the specific subordinate heuristics used.

A MOF can be defined as "a set of software tools that provide a correct and reusable implementation of a set of metaheuristics, and the basic mechanisms to accelerate the implementation of its partner subordinate heuristics (possibly including solution encodings and technique-specific operators), which are necessary to solve a particular problem instance using the techniques provided". Figure 1 depicts a conceptual map showing these elements and their relationships. In this figure, MOFs and their components are shaded.

Specifically, MOFs not only provide a set of implemented techniques, but also facilities to simplify the adaptation of those implementations to the specific problem to address, and additional tools to help with the whole set of optimization problem-solving activities. Moreover, MOFs usually provide mechanisms to monitor the optimization process, supporting tools to determine appropriate values for the parameters of the techniques, and means to identify the reasons that prevent techniques from finding optimal solutions.

2.1 Why are MOFs valuable?

The No Free Lunch (NFL) theorem of Wolpert and Macready (1997) can be summarized as follows: "There is no strategy or algorithm that generally behaves better than another for the entire set of possible problems". Ho and Pepyne (2002) expressed it as follows: "Universal optimizers are impossible".

The NFL theorem has been used as an argument against the use of MOFs, since there can be no universal optimal solver nor a software implementation of it (Voß 2002, Chapter 4, pp 82–83). However, frameworks are not intended to be a universal, optimal, implemented solution. Frameworks are tailorable tools that allow us to perform this implementation in a better way in terms of implementation cost and effort.

The NFL theorem implies the need to "match" a problem and the optimization technique used to solve it in order to obtain optimal or near-optimal solutions. Metaheuristics allow such a matching to be performed by adapting their underlying heuristics. The purpose of MOFs is to provide such adaption mechanisms in a more reusable and effortless way.

Furthermore, if no algorithm behaves better than another (as stated by the NFL theorem), when trying to solve a new problem without specific knowledge (with regard to well-known similar problems and their best-matching techniques), it is even more advantageous to use various metaheuristics to ensure a proper matching to the problem. In this case, the benefits of using MOFs that implement several metaheuristics are even more obvious.
The main advantage of using MOFs is that they provide correct, fully functional and optimized versions of a set of metaheuristic techniques and their variants. Moreover, they provide mechanisms to facilitate the proper implementation of the underlying heuristics, depending on the problem, the representation of solutions, etc. As a consequence, we only have to implement those elements which are directly related to the problem, freeing us, as far as possible, from worrying about the aspects that do not depend on it. In addition, the use of MOFs generally decreases the risk of bugs in the implementation and therefore the time (and associated cost) invested in debugging. Complementarily, some MOFs provide additional features to aid in solving the optimization problem, such as optimization process monitoring and results analysis tools, capabilities for parallel and distributed execution of optimization tasks, supporting mechanisms for determining technique parameter values, graphical reports and user-friendly interfaces (see Sects. 6 and 7 for a detailed review of this kind of features).

Some of the advantages shown above, as well as features described in the following sections, are more valuable than others depending on the application context. Specifically, we identify three main MOF usage contexts:

• Industrial application of optimization problem solving: In this context, implementation burden reduction and its optimization are the most valuable features.
• Research on metaheuristics and optimization problem solving: In this context, optimization process monitoring and results analysis tools are likely to be the most valuable features.
• Teaching of metaheuristics: In this context, ease of use of the graphical interface, reports and graphical representation of solutions, and methodological guidance through wizards and the GUI are likely to be the most valuable features.

2.2 Drawbacks: all that glitters is not gold

MOFs also have some drawbacks. One is their steep learning curve. The user needs to know the set of variation and extension points to use in order to adapt the framework to the problem, and to understand how they are related to the behavior of the software. This means that when we know exactly which technique to apply and we are confident in our implementation skills, using a MOF may be discouraged unless we have expertise in using it. Another drawback to consider when using MOFs is that the flexibility to adapt the MOF is limited by its design. Consequently, a proper framework design is essential to achieve the most favorable balance between the capabilities provided and its flexibility. This drawback implies that it could be impossible to implement certain variants or modify certain behavior when using a MOF; this is especially serious in the context of research, where experimentation with different variants and the capability of customization are key features (cf. Sect. 2.1 for the definition of the usage contexts identified). An increased testing and debugging complexity is a disadvantage resulting from the inversion of control (i.e. the loss of explicit control over the execution flow of our application) that the use of a framework involves. Finally, the use of MOFs implies increasing the size of the software, creating dependencies on third-party libraries and increasing the complexity of the application.

The advantages and drawbacks of using MOFs discussed in this section are summarized in Table 1.

Table 1 Advantages and drawbacks of using MOFs

Advantages:
• Reduced implementation effort and the ability to apply various techniques and variants with little additional effort
• Additional tools to help problem solving (monitoring, reporting, parallel and distributed computing)
• Optimized and error-free implementation (except the extensions and adaptions created by users or the undetected errors that could be present in the MOF)
• New users with little knowledge can use the framework not only as a tool for software application development but as a methodological aid

Drawbacks:
• Steep learning curve
• Advanced knowledge needed for adaptation, and inflexibility to adapt some metaheuristic variants
• Induced complexity (when debugging and testing) and additional dependences
• The choice of the right MOF may be an issue, since switching from one MOF to another has a high cost, MOFs provide diverse feature support, and there are no comparative benchmarks in the literature

3 Review method

The present comparative study is based on the software technology evaluation methodology proposed by Brown and Wallnau (1996), which seeks to identify the value added by a technology through the establishment of a descriptive model in terms of its features of interest and their relationship and importance to its usage contexts.
In this work, only the first phase (descriptive modeling) of the proposed methodology is performed, providing a solid basis for the evaluation of technologies and a context for describing the features of interest. The second phase includes conducting experiments with each of the MOFs associated with specific use scenarios and is beyond the scope of this article.

In order to establish our descriptive model of the characteristics to be supported by MOFs, and to select the set of MOFs to assess, we followed a systematic and structured method inspired by the guidelines of Kitchenham (2004). First, we stated a set of research questions (see next subsection). Second, in order to find the list of candidate MOFs, we established the information sources used for the search (cf. Sect. 3.2). Then, we applied filtering criteria to obtain the final set of MOFs to be analyzed (cf. Sect. 3.3). Finally, we composed and grouped the full set of comparison criteria and used them to assess the MOFs.

3.1 Research questions

The aim of this study is to answer the following research questions:

• RQ1: What metaheuristics are currently supported by MOFs? This question motivates the following sub-questions:
  – Is there a MOF that supports the whole set of techniques?
  – What is the most popular technique, i.e., which is the technique implemented by most MOFs?
  – Is there a "core set of techniques" supported by more than half of the assessed MOFs?
• RQ2: What tailoring mechanisms do current MOFs provide in order to adapt to solving a problem, and to what extent are those mechanisms supported? This question motivates the following sub-questions:
  – Is there a "core set of adaption mechanisms" (such as solution encoding mechanisms, operators, etc.) supported by more than half of the assessed MOFs?
  – Which MOF is better suited to adapt to specific problem solving?
• RQ3: What combinations of techniques (hybrid approaches) are supported when using a MOF? This question motivates the following sub-questions:
  – Is hybridization a widely supported feature (supported by more than half of the assessed frameworks)?
  – What is the most common hybridization mechanism supported by MOFs?
• RQ4: Can current MOFs help to find the best parameter values for their supported metaheuristics (perform hyper-heuristic search)?
• RQ5: To what extent do current MOFs take advantage of parallelization capabilities of metaheuristics and distributed computing?
• RQ6: What additional tools are provided by current MOFs in order to support the whole optimization problem-solving process?
• RQ7: What costs and licensing models do current MOFs have?
• RQ8: What platforms (operating system, programming languages, etc.) are supported by current MOFs? This question motivates the following sub-question:
  – Given the current support of techniques by MOFs, are all techniques available on each platform?
• RQ9: Are current MOFs using software engineering best practices in order to improve code quality, maintainability, stability and performance?

After reviewing all this information we also want to answer some more general questions:

• RQ10: What degree of maturity and popularity do current MOFs have? This question motivates the following sub-questions:
  – What problems have been solved with each MOF?
  – What documentation and help on its use does each MOF provide to its users?
  – Are current MOFs supported by scientific publications?
  – What is the user community of each current MOF?
  – Which is currently the most popular MOF?
• RQ11: What are the challenges to be faced in the evolution and development of MOFs?

3.2 Source material

The information sources used for the search of MOFs have primarily been electronic databases, through their online search engines. Specifically, we have searched on: IEEE Xplore, ACM Digital Library, SpringerLink and Scopus. The following search strings have been used: "Metaheuristic Optimization Framework", "Heuristic Optimization Framework", "Metaheuristic Software library", "Metaheuristic Optimization Library" and "Metaheuristic Optimization Tool".

Based on the results obtained, a list of candidate MOFs was generated, which was later enlarged using direct web searches (using Google and the search strings described above) and references present in papers and on frameworks' web sites.
Key references obtained during this phase were Voß (2002) and Gagnè and Parizeau (2006). However, framework web sites were a key data source, given that their links, articles and related work sections allowed us to establish the full reference set to study. After a detailed analysis of these references, an initial set of main supported features and MOFs was established, and basic information gathering on those tools was performed. The list of candidate optimization tools contains 33 entries: Comet, EvA2, evolvica, Evolutionary::Algorithm, GAPlayground, jaga, JCLEC, JGAP, jMetal, n-genes, Open Beagle, Opt4j, ParadisEO/EO, Pisa, Watchmaker, FOM, Hypercube, HotFrame, Templar, EasyLocal, iOpt, OptQuest, JDEAL, Optimization Algorithm Toolkit, HeuristicLab, MAFRA, Localizer, GALIB, DREAM, Discropt, MALLBA, MAGMA and UOF.

3.3 Inclusion and exclusion criteria

Some MOFs were discarded to keep the size and complexity of the review at a manageable level, establishing the following filtering criteria:

• The development of the MOF must be alive, and error fixing supported by its developers. A MOF where users must debug all errors found by themselves and that will not provide future improvements or features is not a valid option. Consequently, this is our first filtering criterion. We consider as abandoned those frameworks without new versions (even minor bug fixes) or papers published in the past 5 years. This criterion eliminated the following frameworks: jaga, HotFrame, Templar, MAFRA, DREAM, Discropt and UOF.
• Optimization tools to be evaluated must be frameworks implemented in general-purpose object-oriented languages (such as Java or C++). They must provide a general design where user-defined classes are integrated in order to produce an optimization application for solving the problem at hand. There are useful optimization tools that do not meet those requirements and are consequently out of the scope of this article, but might be studied in a similar comparative research work. This criterion eliminated the optimization tools Evolutionary::Algorithm, PISA, Comet and OptQuest.
• MOFs must support at least two different optimization techniques (we consider multi-objective variants of techniques as different techniques). Otherwise, they are considered specific applications, even if they can adapt to various problems using the mechanisms that characterize OO frameworks. This criterion eliminated nine MOFs, namely: evolvica, n-genes, GALib, GAPlayground, Hypercube, JGAP, Open Beagle, jMetal and Watchmaker.
• Those frameworks for which an executable version or source code with its documentation could not be obtained were also eliminated (after contacting the authors and requesting a valid version from them). This criterion eliminated four frameworks, namely: iOpt, JDEAL, OptQuest and MAGMA.

Table 2 shows the final set of frameworks compared, along with their specific versions and web sites.

As a consequence, only a subset of the possible optimization tools has been evaluated. In spite of the considerable effort invested during the development of this work, and although the MOFs have been chosen based on well-defined and consistent filtering criteria, some metaheuristic optimization libraries of great practical interest did not qualify and therefore have not been included in this study (e.g. JGAP, Hypercube, Watchmaker or Comet).
3.4 Comparison criteria

Evaluating a software tool usually implies understanding and balancing competing concerns regarding the new technology. In this sense, the proposed comparative criteria cover six areas of interest that are in turn subdivided into 30 specific characteristics (with an additional subdivision level that comprises 271 features). Table 3 shows the areas and the corresponding set of characteristics in this study, along with the associated research question that we intend to answer through the evaluation of each characteristic.

Table 3 covers a wide range of concerns, from MOF-specific characteristics such as supported metaheuristic techniques or solution encoding (covered in areas C1, C2 and C3), to general concerns such as usability, documentation and licensing model (covered in areas C4, C5 and C6). Consequently, the use of these six areas allows us to easily discern the interesting features for the three usage contexts described in Sect. 2.1.
This benchmark, by focusing on key areas and features for each context, provides a general view of the state of the art of MOFs and provides a global assessment. Specifically, these areas are directly related to our research questions:

• Area C1 is related to RQ1, establishing a set of metaheuristic techniques and variants to be supported by MOFs. The assessment of this area for each framework allows us to answer both RQ1 and its sub-questions.
• Area C2 is related to RQ2; the characteristics of this area describe the possible ways of tailoring the metaheuristics to the problem. Thus its assessment provides a basic way of answering RQ2, showing the support provided by each framework and also which tasks are the responsibility of the user.
• Area C3 is related to RQ3, RQ4 and RQ5, grouped as advanced capabilities support.
• Area C4 is related to RQ6, by defining different kinds of additional tools that are (or could be) supported by MOFs.
• Area C5 is related to RQ7, RQ8 and RQ9, showing the platforms and programming languages supported by each framework, along with the use of software engineering best practices.
• Area C6 is related to RQ10, by defining characteristics that assess the issues concerning the sub-questions of RQ10.

As there are different kinds of characteristics, a proper quantification of the facilities provided by MOFs is a complex issue. Sometimes it is meaningless to use quantitative values for assessing certain characteristics (e.g. it makes no sense to associate a quantitative value with the language in which the MOF is implemented). Therefore, for some characteristics we avoid defining metrics, treating them simply as attributes of MOFs which might be relevant to users. In other cases (such as MOF size), the characteristics have been left out of the comparative analysis because they do not affect the research questions. However, the information harvested can be useful for further analysis.

In our comparative approach, we have attempted to obtain a knowledge base about the real capabilities provided by MOFs that is as objective as possible. In so doing, each characteristic has been defined, and a set of features is identified to evaluate its support (with minor exceptions). Features are defined taking into account the maximum possible support that an ideal MOF could provide, not the current state of the art of MOFs, in order to identify gaps and answer RQ11. Consequently, there are characteristics that are not fully supported by any MOF and even some for which current support is nearly non-existent. In cases where we need a subjective criterion, we have adopted the perspective of the research use context (cf. Sect. 2) and the research questions stated. We are working on three levels: areas, characteristics and features; characteristics are aggregated into areas, and various features are used to evaluate individual characteristics. For each feature and MOF a value is measured with one of two methods. First, features corresponding to characteristics of areas C1–C4 are evaluated using a binary true/false value, avoiding subjectivity in the value assignment. This information is defined as feature coverage and is the base of a more general evaluation that provides a global quantitative value for each characteristic and area. Second, areas C5 and C6 represent non-functional characteristics corresponding to transversal aspects that cannot be measured in an objective way; as a consequence, each feature is given a score marked by the research use context.

A specific value has been given to each characteristic based on these features. In so doing, a weighting that defines the contribution of each feature to the general support of the characteristic has been set ("Weight" column of Table 4). In the same way, each area is measured based on a weighted sum of the evaluations of its corresponding characteristics. The proposed weights range from 0.0 to 1.0, meaning no contribution and full contribution to the characteristic's support, respectively.

Three different types of metrics have been devised:

• Uniform: the weighting is associated evenly with each feature of the characteristic. This metric type is usually associated with variants or features with no clear predominance in terms of popularity or performance.
• Proportional: a basis feature is given a significant weight (usually 0.5) and the remaining weight is evenly associated with the other features of the characteristic. This metric type is associated with a characteristic having one most useful feature along with some rare variants or additional features.
• Ad hoc: the weighting is associated with features based on specific author criteria.

It is important to note that we have set the weights from a research use context on optimization problem solving; however, in other specific scenarios, such as teaching or industrial problem solving, weights could vary in order to reflect the exact importance of features, characteristics and areas in those contexts. This mechanism allows customized versions of the comparative study and tailored conclusions. This information is published as a public Google Docs spreadsheet at http://www.isa.us.es/MOFComparison (moreover, this document contains comments about the coverage of features and why some features are assessed as only partially supported by MOFs). In this way the data can be verified and reused, and the weights can be redefined.
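The scoring scheme described above can be captured in a few lines. The following sketch (class and method names are hypothetical; the public spreadsheet remains the authoritative source of weights and coverage data) computes a characteristic score as the weighted sum of binary feature coverage, and an area score as the average of its characteristic scores:

```java
import java.util.List;

/** Minimal model of the benchmark's three levels: features, characteristics and areas. */
final class Benchmark {

    /** A feature has a weight in [0,1] expressing its contribution to its characteristic. */
    record Feature(String name, double weight) { }

    /** Score of one characteristic for one MOF: weighted sum of the binary feature coverage. */
    static double characteristicScore(List<Feature> features, List<Boolean> covered) {
        double score = 0.0;
        for (int i = 0; i < features.size(); i++) {
            if (covered.get(i)) {            // true/false coverage, as used for areas C1-C4
                score += features.get(i).weight();
            }
        }
        return score;                        // the weights of a characteristic sum to 1.0
    }

    /** Area score: here the plain average of its characteristic scores (equal characteristic weights). */
    static double areaScore(List<Double> characteristicScores) {
        return characteristicScores.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }

    public static void main(String[] args) {
        // Example with the two C1.1 features described in Sect. 4 (basic and multi-start hill climbing).
        List<Feature> c11 = List.of(new Feature("SD/HC basic", 0.5), new Feature("SD/HC multi-start", 0.5));
        double score = characteristicScore(c11, List.of(true, false));
        System.out.println("C1.1 score = " + score);   // prints 0.5
    }
}
```

Changing the weights (e.g. for a teaching or industrial context, as suggested above) only requires editing the Feature weights, which mirrors what tailoring the downloadable spreadsheet achieves.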
Moreover, for areas C1, C2, C3 and C4, tables showing the feature coverage per framework (and the associated weights as an additional column) are provided in this article, corresponding to Tables 4, 5, 6 and 7, respectively. In the following sections we describe each area, its characteristics, the corresponding features and weights, and the global scores obtained by each MOF. Tables 9, 10 and 11 in the appendix show these scores in detail.

4 Metaheuristic techniques (C1)

The main feature of any MOF is the set of supported metaheuristics. A characteristic is defined for each metaheuristic, which indicates the support the MOFs provide for it.

4.1 Characteristics description

A set of 11 characteristics has been defined, with 52 features, comprising most major metaheuristics proposed in the literature, based either on intelligent search (characteristics C1.1, C1.2, C1.3 and C1.5), on solution building (C1.4, C1.9 and C1.10) or on populations (C1.6, C1.7, C1.8, C1.9 and C1.10). Furthermore, we have evaluated the incorporation of techniques for multi-objective problem solving (C1.11). The metaheuristics and variants described in this section have been chosen following Glover and Kochenberger (2002) and some technique-specific references such as Aarts and Lenstra (1997), Back et al. (1997) and Clerc (2006). We next describe each of these characteristics in detail; the coverage of features by frameworks and their weights are shown in Table 4.

C1.1 Steepest descent/hill climbing This technique searches successively for the best neighbor solution until reaching a local optimum. It is commonly used for hybridization (cf. characteristic C3.1). Metric: We have defined two different features: (1) basic implementation until a local optimum is found, and (2) multi-start implementation using a random initial solution when a local optimum is found. A uniform metric is used (with each feature weighing 0.5).

C1.2 Simulated annealing This technique is inspired by the natural process of slow cooling used in metallurgy. It was proposed by Kirkpatrick et al. (1983). We have defined a feature associated with the basic implementation of this technique and features for some of its variants. Specifically, we have evaluated variants of the cooling scheme: the linear and exponential schemes proposed by Kirkpatrick et al. (1983), the logarithmic scheme defined by Geman and Geman (1987), and schemes based on thermodynamics (defined by Nulton and Salamon (1988) and Andresen and Gordon (1994)). Additionally, we have evaluated variants of the acceptance criterion for worsening solutions: metropolis acceptance as proposed by Kirkpatrick et al. (1983) and logistic acceptance (Goldberg 1990). Metric: A proportional metric is used, where the basic implementation has a weight of 0.5, each cooling scheme variant weighs 0.1 and each acceptance criterion variant weighs 0.1.
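To illustrate how a MOF can expose the C1.2 variation points as interchangeable components, here is a small sketch (hypothetical interfaces, not taken from any assessed framework) of cooling schemes and acceptance criteria that could be plugged into a simulated annealing loop such as the one sketched in Sect. 2:

```java
/** Hypothetical strategy interfaces for the C1.2 variation points of simulated annealing. */
interface CoolingScheme {
    double next(double temperature, int iteration);
}

interface AcceptanceCriterion {
    /** delta is the change in objective value (minimization); decides whether a move is accepted. */
    boolean accept(double delta, double temperature, java.util.Random rnd);
}

final class ExponentialCooling implements CoolingScheme {
    private final double alpha;                       // e.g. 0.95
    ExponentialCooling(double alpha) { this.alpha = alpha; }
    public double next(double t, int i) { return t * alpha; }
}

final class LinearCooling implements CoolingScheme {
    private final double step;
    LinearCooling(double step) { this.step = step; }
    public double next(double t, int i) { return Math.max(0.0, t - step); }
}

final class LogarithmicCooling implements CoolingScheme {
    private final double c;                           // T_i = c / log(1 + i); shifted here to avoid log(1) = 0
    LogarithmicCooling(double c) { this.c = c; }
    public double next(double t, int i) { return c / Math.log(2.0 + i); }
}

final class MetropolisAcceptance implements AcceptanceCriterion {
    public boolean accept(double delta, double t, java.util.Random rnd) {
        return delta <= 0 || rnd.nextDouble() < Math.exp(-delta / t);
    }
}

final class LogisticAcceptance implements AcceptanceCriterion {
    public boolean accept(double delta, double t, java.util.Random rnd) {
        return rnd.nextDouble() < 1.0 / (1.0 + Math.exp(delta / t));
    }
}
```

A SA implementation that receives a CoolingScheme and an AcceptanceCriterion as constructor arguments can then cover the basic implementation and the variants listed above without code duplication.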
C1.3 Tabu search The basic ideas of tabu search were proposed by Glover (1989). This technique uses procedures designed to cross boundaries of local optima by establishing an adaptive memory to guide the search process, avoiding searching in circles through the solution space. This memory scheme is implemented using data structures that store either visited solutions (tabu list) or components of those solutions, and even the frequency of appearance of each solution component. In order to avoid discarding promising solutions, aspiration criteria are implemented; for instance, they allow the selection of a tabu solution if it improves the current solution by a given percentage. Metric: An ad hoc metric is used to assess this characteristic. A feature representing the basic implementation of this technique using a tabu list weighs 0.3, the component recency memory feature weighs 0.2, the component frequency-based memory feature weighs 0.3 and the aspiration criteria feature weighs 0.2.

C1.4 GRASP This technique was proposed by Feo and Resende (1989, 1995), and specifies two stages for each iteration: (1) solution building, adding components in a stochastic greedy way (one among the best choices for each component is selected sequentially until the solution is built), and (2) a local search performed based on the built solution. Candidate components of the first stage are sorted and evaluated using a greedy value function, generating a restricted candidate list (RCL). Metric: A unique feature indicating support for this technique is used, evaluated as a binary value indicating whether the framework provides some kind of support for it.

C1.5 Variable neighborhood search (VNS) This technique proposes a systematic change of neighborhood structure in a local search context. It was proposed by Mladenović (1995). Many variants of this technique have been proposed in the literature, and based on them we propose the following features: (1) original proposal implementation (VNS); (2) variable neighborhood descent (VND); (3) reduced VNS (RVNS); (4) variable neighborhood decomposition search (VNDS) by Hansen et al. (2001); and (5) skewed VNS by de Souza and Martins (2008). Metric: A uniform metric is used (having a weight of 0.2 for each one of the five features).

C1.6 Evolutionary algorithms (EA) There are many techniques based on the principles of biological evolution that can be called evolutionary algorithms. These techniques can be divided into three independently developed approaches: evolutionary strategies (ES) proposed by Rechenberg (1965), evolutionary programming according to Fogel et al. (1966) and genetic algorithms as developed by Holland (1992). These techniques present different variants based on the elements used for adapting to the problem (some of them present in other techniques) and some additional variation points. In order to create global and coherent comparative criteria, we have identified various characteristics for those variations. Remarkably, the selection of individuals for crossover and survival is independent of the solution encoding; thus, frameworks can provide implementations using different selection criteria and can reuse them, since mechanisms for selecting solutions are used in various metaheuristics. We have created a characteristic for evaluating the support for solution selection (C2.4). Crossover and mutation mechanisms are dependent on the representation scheme used, and the efficiency of a specific mechanism will strongly depend on the problem to be solved. Consequently, we have created an associated characteristic in the area of adaptation to the problem (C2.3).

Thus, this characteristic (C1.6) only measures the support provided by frameworks for general evolutionary algorithms, without taking into account the solution encoding capabilities, the genetic operators or the selection mechanisms available. Of the many variants that have been proposed in the literature for the basic evolutionary algorithm, we take into account: (1) the use of variable population sizes (e.g. GAVaPS, Arabas et al. (1994)); (2) niching methods (commonly used to solve multi-modal optimization problems); (3) individuals that encode more than one solution to the problem (usually diploid) (Goldberg and Smith 1987); (4) coevolution of multiple populations in competitive and cooperative environments, as described in (Back et al. 1997, Chapter on Coevolutionary Algorithms); and (5) differential evolution as developed by Price et al. (2005). Variants (1), (3) and (4), as well as some versions of (2), can be implemented regardless of the problem, the solution encoding or the operators used.

Metric: An ad hoc metric is defined to assess this characteristic. Three features have been identified to evaluate the support of the different evolutionary approaches, with each feature weighing 0.2. With regard to the variants, (1) weighs 0.05, (2) weighs 0.1, (3) weighs 0.05, (4) weighs 0.1 and (5) weighs 0.1. We evaluate variants as binary variables, in terms of the support afforded by frameworks.
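As a complement to C1.6, the following minimal generational loop (a sketch under assumed, hypothetical interfaces; it is not the API of any assessed MOF) shows the variation points the text separates out: selection (C2.4) and the encoding-dependent crossover and mutation operators (C2.3) are injected, while the evolutionary skeleton itself stays problem-independent:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical operator interfaces; concrete implementations depend on the solution encoding (cf. C2.3, C2.4).
interface Selector<S>          { List<S> select(List<S> population, int n, Random rnd); }
interface CrossoverOperator<S> { S recombine(S a, S b, Random rnd); }
interface MutationOperator<S>  { S mutate(S individual, Random rnd); }
interface Fitness<S>           { double evaluate(S individual); }

/** Problem-independent generational evolutionary algorithm skeleton (maximization assumed). */
final class EvolutionaryAlgorithm<S> {
    private final Selector<S> selector;
    private final CrossoverOperator<S> crossover;
    private final MutationOperator<S> mutation;
    private final Fitness<S> fitness;

    EvolutionaryAlgorithm(Selector<S> s, CrossoverOperator<S> c, MutationOperator<S> m, Fitness<S> f) {
        this.selector = s; this.crossover = c; this.mutation = m; this.fitness = f;
    }

    S run(List<S> population, int generations, Random rnd) {
        S best = population.get(0);
        for (int g = 0; g < generations; g++) {
            List<S> parents = selector.select(population, population.size(), rnd);
            List<S> offspring = new ArrayList<>();
            // Pair parents and produce two mutated children per pair to keep the population size stable.
            for (int i = 0; i + 1 < parents.size(); i += 2) {
                S a = parents.get(i), b = parents.get(i + 1);
                offspring.add(mutation.mutate(crossover.recombine(a, b, rnd), rnd));
                offspring.add(mutation.mutate(crossover.recombine(b, a, rnd), rnd));
            }
            for (S child : offspring) {
                if (fitness.evaluate(child) > fitness.evaluate(best)) best = child;
            }
            if (!offspring.isEmpty()) population = offspring;
        }
        return best;
    }
}
```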
C1.7 Particle swarm optimization (PSO) This technique is a stochastic algorithm inspired by the behavior of birds flocking and fish schooling. The algorithm iteratively modifies a population of solutions (named the swarm), whose interactions are expressed as equations. Solutions in the swarm are represented as particles in an n-dimensional space with a position and a speed. The original proposal by Kennedy and Eberhart (1995) has been applied successfully to a variety of problems (Clerc 2006; Parsopoulos and Vrahatis 2002a). Moreover, this technique has been adapted to support discrete variables, and different equations to rule the swarm interaction have been proposed (Chatterjee and Siarry 2006; Wilke et al. 2007; Vesterstrm and Riget 2002; Rahman et al. 2009). The topology of the neighborhood of particles, i.e. the particles that influence the position of a given particle according to the equations, generates a full set of possible variants. In the original PSO, two different kinds of topologies were defined: (1) global, specifying that all particles are neighbors of each other; and (2) local, specifying that only a specific number of particles can affect a given particle. In Kennedy and Mendes (2002) a systematic review of neighborhood topologies is described, and in Suganthan (1999) the concept of a "dynamic" neighborhood topology is proposed. Another interesting variant is the use of a "life time" for solutions in the swarm; after this time solutions are randomized. Metric: We have created a feature to represent the original proposal for real variables and classic equations; it weighs 0.3. Discrete variable support weighs 0.2. Equation customization weighs 0.2. The explicit modeling and support of different neighborhood topologies weighs 0.2. Finally, lifetime support weighs 0.1.

C1.8 Artificial immune systems (AIS) This technique intends to use the structure and operation of the biological immune systems of mammals and apply them to solving optimization problems. It comprises various proposals: clonal selection algorithms, originally proposed by Nossal and Lederberg (1958), and their variants such as CLONALG, developed by de Castro and Von Zuben (2002), and opt-IA; immune network algorithms; and dendritic cell algorithms. Metric: A uniform metric is used to assess this characteristic (with each feature weighing 0.25).

C1.9 Ant colony system (ACS) This technique, also known as ant systems (AS), is a probabilistic optimization algorithm inspired by the food foraging behavior of ants. Ant systems use a data structure called the "pheromone trace" to support communication between ants. In this article the following variants are taken into account: the original proposals of Ant System (AS) and Ant Colony System (ACS) as proposed by Dorigo and Gambardella (1997), Ant System using rankings (ASrank), Min–Max Ant System (MMAS) according to Stutzle and Hoos (1997), and API as developed by Monmarchè et al. (2000). Metric: An ad hoc metric is defined for this characteristic; the corresponding weights are shown in Table 4.

C1.10 Scatter search (SS) This technique was proposed by Glover (1977). It operates on a set of solutions, the reference set, by combining existing solutions to create new ones. In contrast to other evolutionary methods like genetic algorithms, scatter search is based on systematic designs and methods, where new solutions are created from the linear combination of two solutions of the reference set, using strategies for search diversification and intensification. Metric: This technique has a unique feature, evaluated as a binary value, which indicates whether the framework provides some kind of support for it.

C1.11 Multi-objective metaheuristics The technique most commonly used to solve multi-objective optimization problems is EA (Dreo et al. 2005). However, some variants of other techniques have also been taken into account: SA (MOSA as proposed by Ulungu et al. (1999) and PASA as developed by Suresh and Mohanasundaram (2004)), PSO (Parsopoulos and Vrahatis 2002b) and ACO (Iredi et al. 2001). Those variants have been adapted to solve multi-objective optimization problems. Regarding the EA variants to evaluate, we have taken into account the original proposal by Goldberg (1989) (PGA), MOGA as proposed by Fonseca and Fleming (1993), the Non-dominated Sorting Genetic Algorithm (NSGA and NSGA-II) as developed by Deb et al. (2002), the Niched Pareto Genetic Algorithm (NPGA) according to Horn et al. (1994), the Strength Pareto Evolutionary Algorithm (SPEA and SPEA-II) (Zitzler and Thiele 1999; Zitzler et al. 2001), the Pareto Envelope-based Selection Algorithms (PESA and PESA-II) (Corne et al. 2000), the Pareto-archived ES (PAES) (Knowles and Corne 2000), the multi-objective messy GA (MOMGA) (Van Veldhuizen and Lamont 2000) and ARMOGA (Sasaki 2005). Metric: A uniform metric is used to assess this characteristic.
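The swarm-interaction equations mentioned in C1.7 are, in the classic formulation, a velocity update followed by a position update. The sketch below shows that canonical inertia-weight rule; the parameter names follow the usual PSO literature and do not come from any particular MOF:

```java
import java.util.Random;

/** Classic (inertia-weight) particle update of Kennedy and Eberhart's PSO for real-valued variables. */
final class Particle {
    final double[] position;
    final double[] velocity;
    final double[] bestOwnPosition;      // personal best found so far

    Particle(double[] position, double[] velocity) {
        this.position = position.clone();
        this.velocity = velocity.clone();
        this.bestOwnPosition = position.clone();
    }

    /**
     * v = w*v + c1*r1*(personalBest - x) + c2*r2*(neighborhoodBest - x);  x = x + v
     * The neighborhood best comes from the chosen topology (global or local, cf. C1.7).
     */
    void update(double[] neighborhoodBest, double w, double c1, double c2, Random rnd) {
        for (int d = 0; d < position.length; d++) {
            double r1 = rnd.nextDouble();
            double r2 = rnd.nextDouble();
            velocity[d] = w * velocity[d]
                        + c1 * r1 * (bestOwnPosition[d] - position[d])
                        + c2 * r2 * (neighborhoodBest[d] - position[d]);
            position[d] += velocity[d];
        }
    }
}
```

Supporting the variants listed in C1.7 essentially means letting users replace this update rule (equation customization), the variable type (discrete PSO) and the source of neighborhoodBest (topology).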
4.2 Assessment and feature coverage analysis

In order to assess this area, we have crawled the source code, the user and technical documentation and the user interface of each selected MOF. Table 4 shows the feature coverage of area C1, along with the weight corresponding to each feature in its associated characteristic. The last column of this table shows the number of MOFs supporting each feature. The last two rows show the number of features supported by each MOF and a score computed as the weighted sum of the features supported divided by the number of characteristics in the area. It is remarkable that only four features of this area are supported by a minimum of six out of the ten MOFs under study. Only the core techniques (namely SD/HC, SA and EA) have features in this range. This shows a dispersion in the techniques supported by MOFs and, consequently, implies that users have little choice if they want to use techniques outside this core set. Thus, the choice of MOF is determined by the technique the user wants to apply.

An interesting fact shown in Table 4 is that 39% of the features in this area are not supported by any MOF. Consequently, current MOFs have ample scope for improvement in this area. Moreover, the distribution of those unsupported features implies that MOF technique support is aimed at the basic variants. This does not apply to the techniques in the core set, TS and some multi-objective variants, since those techniques only have features representing variants with more than 30% of MOFs supporting them. ParadisEO, EvA2 and FOM have the highest number of features supported in this area, followed by HeuristicLab and OAT.

4.3 Comparative analysis

FOM is the framework that provides the broadest support of optimization techniques, closely followed by ParadisEO, EvA2 and HeuristicLab. It is important to note that more features supported does not imply more techniques supported, since some techniques have a number of variants and specific heuristic implementations modeled as features. The weights contribute to expressing this fact by making each technique sum to a total score of 1 unit once the features are weighted. Figure 2 shows a stacked-column diagram for the C1 area characteristics. Each color or texture represents a metaheuristic and each column the support provided by a MOF. The number of techniques supported by each MOF can be easily identified by the number of different colors/textures in its column. The degree of support for each technique is expressed through each color's height (computed based on the weight associated with its features and the feature support information shown). The total height of each column provides a measure of the global support of metaheuristics by its corresponding MOF.

The almost universal support for EA and the lack of support for AIS are remarkable. SS is only supported by EvA2, and GRASP is only supported by FOM. Other metaheuristics with very little support are ACO, TS and VNS. This could be due to the complexity of modeling in the abstract the elements involved in their operation and of reusing or customizing them (ACO and TS are based on features of solutions, and VNS needs to apply different neighborhood structures). When applying EAs using Java, ECJ, JCLEC and EvA2 appear as highly competitive options, while ParadisEO and MALLBA are the MOFs available if the user plans to use C++. In .NET environments, the only option available for applying EAs is HeuristicLab.

We can provide an answer to RQ1 and its sub-questions based on the information shown in Table 4 and Fig. 2. The characteristics of area C1 summarize the whole set of metaheuristics currently supported by the assessed frameworks. Most variants of those techniques are unsupported. The most widely supported techniques are EA, SD/HC and SA, which are supported by more than 60% of the assessed frameworks. Finally, there is no universal MOF which provides support for all the techniques.

5 Adapting to a problem and its structure (C2)

As stated in the previous section, MOFs provide implementations of metaheuristic techniques for problem solving. They also provide mechanisms to express problems properly in order to apply these techniques. MOFs allow for the adaptation of their supported metaheuristics for better problem solving.

For instance, frameworks can provide appropriate data structures that the techniques can handle. This two-way adaptation (techniques to problem for efficient problem solving, and problems to techniques for proper solution handling and underlying heuristics implementation) is basically done in three ways: selecting an appropriate solution representation/encoding, specifying the objective function to optimize, and implementing the set of underlying heuristics required by the metaheuristic used to solve the problem.
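As a concrete illustration of these three adaptation points, the sketch below writes out, for a trivial binary maximization problem (OneMax), the problem-side artifacts the paragraph names: a bit-vector solution encoding, its objective function, and two encoding-dependent underlying heuristics (a bit-flip mutation and the one-point crossover of the kind enumerated in C2.3). All class names are hypothetical and only make the division of responsibilities explicit:

```java
import java.util.Arrays;
import java.util.Random;

/** Solution encoding: a plain bit vector (cf. C2.1). */
final class BitVector {
    final boolean[] bits;
    BitVector(boolean[] bits) { this.bits = bits.clone(); }
}

/** Objective function for the OneMax toy problem: count the ones (to be maximized). */
final class OneMax {
    static double evaluate(BitVector s) {
        int ones = 0;
        for (boolean b : s.bits) if (b) ones++;
        return ones;
    }
}

/** Encoding-dependent underlying heuristics (cf. C2.3): bit-flip mutation and one-point crossover. */
final class BitVectorOperators {

    static BitVector flipMutation(BitVector s, double probability, Random rnd) {
        boolean[] copy = s.bits.clone();
        for (int i = 0; i < copy.length; i++) {
            if (rnd.nextDouble() < probability) copy[i] = !copy[i];
        }
        return new BitVector(copy);
    }

    static BitVector onePointCrossover(BitVector a, BitVector b, Random rnd) {
        int point = 1 + rnd.nextInt(a.bits.length - 1);            // cut point in [1, n-1]
        boolean[] child = Arrays.copyOf(a.bits, a.bits.length);    // first part from a
        System.arraycopy(b.bits, point, child, point, b.bits.length - point); // rest from b
        return new BitVector(child);
    }
}
```

Everything else (the search loop, selection, stopping criteria, reporting) is problem-independent and is exactly what a MOF is expected to supply.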
5.1 Characteristics description

This area evaluates the capabilities provided by MOFs to support this adaption. Characteristic C2.1 aims at assessing the capabilities to represent solutions to optimization problems based on the set of data structures provided by the frameworks. Characteristics C2.2, C2.3 and C2.4 aim to assess the supported set of underlying heuristics. Characteristic C2.5 aims to assess the capabilities of declarative objective function specification based on the representations assessed in C2.1. Finally, C2.6 aims to assess the capabilities of constraint handling. The features and characteristics described in this section have been structured following Back et al. (1997) and Rothlauf (2006) for solution encodings (C2.1), Back et al. (1997) for selection and genetic operators (C2.3 and C2.4), Aarts and Lenstra (1997) for neighborhood definition capabilities (C2.2) and Michalewicz and Fogel (2004) for constraint handling techniques (C2.6). Next we describe each of these characteristics in detail:

• C2.1 Solution encoding: Solution encodings are data structures that allow the modeling of solutions for metaheuristic techniques to handle. In this sense, the greater the flexibility and the more data structures provided, the lower the effort invested by users to address their problems. Metric: In order to evaluate this characteristic, we have taken into account three criteria: the data structures provided (vectors, matrices, trees, graphs and maps), the data types and information encoding, and the ability to use combined representations as described by Rothlauf (2006). A proportional metric is used, where this last feature weighs 0.4. The data types taken into account are bits (with usual or Gray encoding), integers, floating point numbers and strings. The remaining weight is evenly divided among the combinations of data type and data structure.
• C2.2 Neighborhood structure definition: A proper neighborhood structure definition is a key factor for the success of intelligent search-based heuristics. The neighborhood structure strongly depends on the solution representation, and its suitability depends on the problem to be solved and the technique used to solve it (as stated by Aarts and Lenstra (1997)). Metric: The assessment is divided into three features: pre-defined neighborhood structures provided by MOFs weigh 0.6; neighborhood structures for composite representations weigh 0.3; and a weight of 0.1 is given to complex neighborhood structures that apply different neighborhood structures randomly or based on some rule.
• C2.3 Auxiliary mechanisms supporting population-based heuristics (genetic operators): Genetic operators are the main underlying heuristics in EA. Their implementation (except for selection operators, evaluated in C2.4) is usually dependent on the solution representation; therefore, MOFs must provide the corresponding implementations for their supported representations. Various alternatives for implementing each genetic operator have been proposed in the literature, as described below. We have relied primarily on (Back et al. 1997, chapter C3.3) to develop the definition and features of this characteristic.

The most common genetic operators are crossover and mutation. Weights have been evenly distributed among all the variants provided for each operator. Next, we enumerate the crossover operators proposed in the literature for the solution encodings of Table 3.

• Binary and integer vectors: The original crossover operator was proposed by Holland (1992) and named "one-point crossover" (1PX); the generalization of this operator for n crossover points (NPX) was proposed by Jong (1975). Other operators are uniform crossover (UX) (Ackley 1987), punctuated crossover (PNCTX) (Schaffer and Morishima 1987), shuffled crossover (SX) (Eshelman et al. 1989), half uniform crossover (HCX) (Eshelman 1991) and random respectful crossover (RRX) as proposed by Radcliffe (1991).
• Floating point vectors: Operators 1PX, NPX and UX are in principle applicable to floating point vectors, but these vectors also support a set of specific crossover operators that MOFs can implement: arithmetic crossover (AX/BLX) (Michalewicz 1994, p 112), heuristic crossover (HX) (Wright 1994), simplex crossover (SPLX) (Renders and Bersini 1994), geometric crossover (GEOMX) (Michalewicz 1994), blend crossover (BLX-alpha) (Eshelman and Schaffer 1993), crossover operators based on objective function scanning (F-BSX) and diagonal multi-parental crossover (DMPX) as proposed by Eiben et al. (1994).
• Permutations: Basic crossover operators, such as 1PX, NPX, UX, etc., generate infeasible individuals when using permutation-based representations; it is therefore necessary to design specific operators for such representations, such as the order crossover operator (OX) (Davis 1985), partially mapped crossover (PMX) (Goldberg and Lingle 1985), the order-2 and position crossover operators (Syswerda 1991), uniform crossover for permutations (UPX) (Davis 1985, p 80), maximal preservative crossover (MPX) (Muhlenbein 1991, p 331), cycle crossover (CX) (Oliver et al. 1987) and merge crossover (MX) as defined by Blanton and Wainwright (1993).
and merge crossover (MX) ad defined by Blanton and • State machines: The basic mutation operator for state
Wainwright (1993). machines is based on the set of its states and transitions,
• State machines: Crossover operators for state machines slightly modifying any state or transition as porposed
(SMFx) were initially proposed by Fogel (1964), Fogel by (Back et al. 1997, C3.2.4).
et al. (1966) (pp 21–23). In this comparative study, we • Trees: The mutation operators for trees covered by this
evaluate those operators and 1PX using a vectorial comparison are those proposed by Angeline et al.
representation of the state machine (SM1PX) as defined (1996): (1) grow mutation operator (TGm); (2) reduc-
by Zhou and Grefenstette (1986), state one to one tion mutation operator (TSHRm); (3) swapping muta-
state interchange as proposed by Fogel and Fogel tion operator (TSWm); (4) cycle mutation operator
(1986), uniform crossover for state machines (SMUX) (TCm); and (5) the gaussian mutation operator for
and the merge operator (SMJO) as defined by Birgmeier numeric nodes (TGNm). The adaption proposed by
(1996). Montana (1995) is also taken into account.
• Trees: There is real difficulty in defining proper cross- • Mutation operators for composite representations
over operators for trees, and specifically trees represent- (CSm): Mutation operators for individuals using
ing programs, since generally constraints have to be composite representations can be created by applying
imposed on their structure, semantics and associate data the corresponding operators to each component of the
types. The most common crossover operator for trees representation.
were proposed by Cramer (1985). In this comparative, • Composite Mutation operators (CPXm): Composite
we also considered those defined by Koza (1992) and the mutation operators are possible through the assignment
adaptations proposed by Montana (1995). of a probability (or decision rule) to the application of
• Crossover operators for composite representations an operator from a set of valid operators for the
(CSX): Crossover operators for individuals using com- representation used.
posite representations can be used by applying the • Mutation operators using dynamic probability (DEm):
corresponding operators to each component of the There exists empirical evidence (Fogarty 1989) that the
representation. use of a dynamic mutation probability that decreases
• Composite crossover operators (CMPX): By assigning exponentially along the evolution process, improves the
a probability (or decision rule) to the application of an performance of EAs. In this comparison, we have taken
operator from a set of valid crossover operators for the this feature into account.
representation used, composite crossover operators are
Metric: A uniform metric is defined, where the weight
possible.
is evenly distributed among mutation (0.5) and crossover
Next we enumerate the mutation operators proposed in (0.5) operators. For each variant of those operators, weights
literature for the solution encodings of Table 3. are uniformly associated.
• Binary and integer vectors: We have taken into account • C2.4 Selection mechanisms: This characteristic assesses
the original mutation operator proposed by (Holland the support for the different criteria for solution
1975, pp 109–111). selection. The problem of selecting a subset amongst
• Floating point vectors: The mutation operator based on a larger set of solutions appears as a specific heuristic
a uniform distribution U(−b, b) (RUm) proposed by on a number of metaheuristic techniques (SA, TS, EA,
Davis (1989), the normal mutation operator (RNm) ACO, etc.). By applying OO analysis and design
developed by Schwefel (1981), the mutation operators methodologies and specifically the strategy design
based on Cauchy (RCm) and Laplace (RLm) distribu- pattern1, objects encapsulating the solution selection
tion as proposed by Montana and Davis (1989) and Yao and logic are called selectors. The use of different selectors
Liu (1996), and the proposals for adapting the mutation allows for controlling the trade-off between exploration
rate according to Schwefel (1981) and Fogel et al. and exploitation of the search space. As a consequence,
(1991), are the mutation operators for floating point vectors performance of metaheuristic techniques in finding
that have been considered. good solutions to problems is drastically affected by
• Permutations: The mutation operators for permutations those selection criteria. Usually, selection criteria are
covered by this comparison are 2-opt (P2Optm), 3-opt
(P3Optm) and k-opt (PKOptm), simple interchange ¹ The strategy pattern is a particular software design pattern, whereby
mutation operator (PSWm) or insertion operator (deleting algorithms can be selected at runtime. This pattern is useful for
situations where it is necessary to dynamically swap the algorithms
the item from its original position) of 2 elements (PIm)
used in an application. The strategy pattern is intended to provide a
and ‘‘scramble mutation operator’’ (PSCm) (Syswerda means to define a family of algorithms, encapsulate each one as an
1991). object and make them interchangeable Gamma et al. (1994).
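To make the composite operators described above (CMPX for crossover, CPXm for mutation) and the strategy pattern of the footnote more concrete, the following is a minimal Java sketch; the Operator interface and the CompositeOperator class are hypothetical names, not the API of any surveyed MOF. Each invocation draws one operator from a user-supplied set according to a probability vector and delegates to it, so concrete operators remain interchangeable at runtime.

import java.util.List;
import java.util.Random;

// A variation operator applied to one solution encoding (assumed interface).
interface Operator<S> {
    S apply(S solution, Random rng);
}

// Composite operator in the spirit of CMPX/CPXm: each call selects one operator
// from a set according to a probability vector (an application of the strategy pattern).
class CompositeOperator<S> implements Operator<S> {
    private final List<Operator<S>> operators;
    private final double[] probabilities; // assumed to sum to 1.0

    CompositeOperator(List<Operator<S>> operators, double[] probabilities) {
        this.operators = operators;
        this.probabilities = probabilities;
    }

    @Override
    public S apply(S solution, Random rng) {
        double u = rng.nextDouble();
        double acc = 0.0;
        for (int i = 0; i < operators.size(); i++) {
            acc += probabilities[i];
            if (u <= acc) {
                return operators.get(i).apply(solution, rng);
            }
        }
        // Guard against rounding error: fall back to the last operator.
        return operators.get(operators.size() - 1).apply(solution, rng);
    }
}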
based on the adequacy of solutions, but there is a wide evaluation of solutions. Moreover, a partial implementation
set of possibilities, from random to elitism (stochastic would be provided, where MOF users would customize the
and deterministic). data entry form and solution representation (graphical or
textual), designing a user friendly interface integrated
In this comparison the following criteria are taken into
within the framework. Metric: A uniform metric is defined
account: (1) elitist selector (Es), that picks the best solutions,
to assess this characteristic, using features enumerated
and its variants; expected value selector (EVs) and elitist
above: DSL support, DSL tools and forms for solution
expected value selector (EEVs) as proposed by Jong (1975);
evaluation by human operators.
(2) proportional selector (Ps) as proposed by Holland (1975),
where the probability of selecting s, P(s), is proportional to its • C2.6 Constraint Handling: A feature of great impor-
fitness, and its variants, random sampling selector (RSSs) tance for proper problem modeling is constraint defi-
and stochastic tournament selector (STs) Brindle (1981); nition support. There are usually two different ways to
stochastic universal sampling selector (SUSs) as proposed handle constraints when solving optimization prob-
by Baker (1987); (3) ranking based selectors: linear (LRs) lems2: (1) include constraint meeting in objective
and non-linear (NLRs), developed by Whitley (1989); (4) function definition as penalties; (2) and create repairing
selection schemas (μ, λ), (μ + λ) and (5) threshold based mechanisms that are applied to infeasible solutions.
selectors (Ths); (6) Boltzmann selector (Bs), (7) a fully ran- There are three alternatives of implementation for those
dom selector (RNDs) (8) and a selector that combines a pair mechanisms on MOFS: (a) provide global repairing
of different selectors (COMBs) by dividing the set of ele- mechanisms that users can implement for the problem
ments to select amongst its components. Metric: A uniform at hand, (b) explicit modeling of each constraint and
metric is used to assess this characteristic. (c) specific repairing mechanisms for each constraint.
In the same way as in characteristic C2.5, (3) the use of
• C2.5 Fitness function specification Support: The most
a DSL can make it easier to specify constraints for
problem dependent element of metaheuristic techniques
users, and some mechanisms, such as penalization [cf.
is the objective function to be optimized. Therefore,
(1)], can be applied without the need of implementation
even when using MOFs, its evaluation is usually
by users. Metric: An ad hoc metric is defined to assess
implemented explicitly by users and integrated into
this characteristic, where the weights have been asso-
the framework through its extension points. However,
ciated with each feature as follows: (1) penalization 0.3,
based on the solution encodings supplied by MOFs, it is
(2.a) global repairing mechanism 0.2, (2.b) individual
possible to provide tools for declarative objective
constraint modeling 0.2 (2.c) individual constraints
function specification, freeing the user from the low-
repairing mechanisms 0.2 and (3) DSL support 0.1.
level task of implementing it.
In this case, a Domain-Specific Language (DSL) is a tool
5.2 Assessment and feature coverage analysis
of great interest for objective function specification. The
advantages of using a DSL, compared with classical imple-
Table 5 shows the feature coverage of area C2, along with
mentation, are that the DSL can be a much simpler language
the weight corresponding to each feature in its associated
than the implementation language, and integration of the
characteristics. The last row and last column of this table,
objective function can be automatic if the MOF supports it. If
respectively, show the sum of features supported by each
the MOF provides suitable DSL tools for the specification of
MOF and the number of MOFs supporting each feature. It
the objective function (such as syntax highlighting and in-
is remarkable that only 9.57% of features of this area are
line debugging and error information), it could lead to a more
supported by a minimum of six out of the ten MOFs under
declarative paradigm for metaheuristic problem solving,
study. Moreover, those features are associated with only
improving the usability of metaheuristics and contributing to
three characteristics (namely C2.1, C2.3 and C2.4) and are
a wider application of such techniques. There are also mainly related to EA. An interesting fact shown in Table 5
mainly related to EA. An interesting fact shown in Table 5
drawbacks when using DSLs for objective function specifi-
is that more than 25% of features in this area are not
cation, such as the need to learn a new language, performance
supported by any framework.
loss and the inability to model some objective functions
using the language constructs.
Finally, there are problem types for which the automa- ² Various techniques to adapt metaheuristics to constrained problems
have been proposed in literature (c.f. Michalewicz and Fogel (2004)
tization of objective function evaluation is impossible, for instance). However, most of these approaches require ad hoc
since it relies on a human operator’s interaction to evaluate implementation of the techniques depending on the problem and type
solutions. In order to support this kind of problems, MOFs of constraints to handle; consequently, it is difficult to integrate those
can provide a form in which users can directly provide the proposals into a MOF. Those ad hoc techniques have been omitted in
our comparison.
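As an illustration of the penalization mechanism, option (1) of C2.6, the sketch below wraps a raw objective function and subtracts a weighted sum of constraint violations; the interface names and the maximization convention are assumptions made for this example only, not the design of any surveyed framework.

import java.util.List;

// Evaluates a candidate solution; larger values are assumed to be better.
interface ObjectiveFunction<S> {
    double evaluate(S solution);
}

// A constraint reports by how much a solution violates it (0 means feasible).
interface Constraint<S> {
    double violation(S solution);
}

// Penalty-based constraint handling: the raw objective is degraded in proportion
// to the total amount of constraint violation.
class PenalizedObjective<S> implements ObjectiveFunction<S> {
    private final ObjectiveFunction<S> raw;
    private final List<Constraint<S>> constraints;
    private final double penaltyWeight;

    PenalizedObjective(ObjectiveFunction<S> raw, List<Constraint<S>> constraints, double penaltyWeight) {
        this.raw = raw;
        this.constraints = constraints;
        this.penaltyWeight = penaltyWeight;
    }

    @Override
    public double evaluate(S solution) {
        double totalViolation = 0.0;
        for (Constraint<S> c : constraints) {
            totalViolation += c.violation(solution);
        }
        return raw.evaluate(solution) - penaltyWeight * totalViolation;
    }
}

With this arrangement a MOF user only implements the raw objective and the individual constraints; infeasible solutions are never discarded, they are simply ranked worse.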
5.3 Comparative analysis heuristics and distributed and parallel execution. These
characteristics are of great interest since they can either
Area C2 along with C3 have the smallest average score of drastically improve the results obtained or simplify the
our benchmark, evidencing that framework developers application of techniques. They are especially interesting
have put more emphasis on coding algorithms for problem because their implementation involves high cost and
solving than in the support for an easy and efficient complexity, preventing their application in many contexts.
adaptation of these algorithms to the problem. Remarkably, As MOFs can provide these characteristics pre-imple-
there is a lack of support for: (1) the definition of neigh- mented, their applicability is significantly broadened.
borhood structures (except EasyLocal, ParadisEO and
HeuristicLab), (2) the specification of the objective func- 6.1 Characteristics description
tion and (3) constraint handling (exceptions are FOM,
Eva2, ParadisEO and HeuristicLab). The following describes these characteristics:
In Fig. 3 a stacked columns diagram is shown for the C3.1 hybridization Hybrid metaheuristic techniques are
characteristics of this area. Just like in Fig. 2 colors rep- those that combine several techniques. There is ample
resent characteristics of this area and columns their support empirical evidence of the success of hybrid techniques for
by the assessed MOFs. optimization problem solving (as stated by Talbi 2002).
Based on information shown in Table 5 and Fig. 3, we Several authors have described taxonomies of hybrid
can provide an answer for RQ2. The means of problem metaheuristics, to discern the ways techniques can be
adaption are summarized by the characteristics of area C2; combined such as Talbi (2002) and Roli and Blum (2008).
however, current support of these mechanisms is limited In this work we restrict the concept of hybrid metaheuristic
and strongly depends on the MOF and metaheuristic to use to a combination of techniques integrated at a high level (as
for problem solving. defined by Raidl 2006), where each technique keeps its
It is important to note that characteristic C2.4 is inti- overall structure except at the point of invocation of the
mately related to EA support, and consequently those other. Specifically, we have considered four different types
MOFs that do not support this technique are not able to of hybridization: (1) batch execution of the same technique
support the features of this characteristic. However, those (BEMIh), in which the technique is executed several times;
MOFs, such as EasyLocal, are still able to provide support (2) batch execution of different techniques (BEMMh),
for the rest of the area and constitute very useful alterna- where various techniques are executed sequentially and
tives when applying other techniques. Thus, users must where the results of one can be used as an initial solution of
take this into account when comparing different MOFs. others; (3) interleaved execution of a technique as a step in
each iteration of another, possibly affecting the internal
variables (IMMh); and (4) combinations of various types of
6 Advanced characteristics (C3) the above (Ch). Metric: An ad hoc metric is defined to
assess this characteristic, with the weights of the features
In this area we evaluate general and advanced character- being (1) BEMIh 0.1, (2) BEMMh 0.2, (3) IMMh 0.6 and
istics, not related to specific metaheuristics techniques. (4) Ch 0.1.
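A minimal Java sketch of the relay style of hybridization labelled BEMMh above: techniques are executed sequentially and the solution returned by one stage seeds the next. The Metaheuristic interface is a hypothetical name introduced for this example only.

import java.util.List;
import java.util.Random;

// A metaheuristic that tries to improve an initial solution (assumed interface).
interface Metaheuristic<S> {
    S run(S initialSolution, Random rng);
}

// Sequential (relay) hybrid: the output of each stage is the initial solution of the next.
class SequentialHybrid<S> implements Metaheuristic<S> {
    private final List<Metaheuristic<S>> stages;

    SequentialHybrid(List<Metaheuristic<S>> stages) {
        this.stages = stages;
    }

    @Override
    public S run(S initialSolution, Random rng) {
        S current = initialSolution;
        for (Metaheuristic<S> stage : stages) {
            current = stage.run(current, rng); // e.g., an EA followed by a local search
        }
        return current;
    }
}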
Specifically, the characteristics assessed in this area are the C3.2 hyper-heuristics A hyper-heuristic is readily defined
use of hybrid techniques, the implementation of hyper- as a heuristic that selects heuristics. Hyper-heuristics are
Fig. 4 Advanced characteristics support
intended to provide robust and general techniques of broad and parallel exploration of its current solution’s neighbor-
applicability without needing extensive knowledge of both hood (LSPDNM).
the technique and the problem to solve. Hyper-heuristics Parallel population-based metaheuristics There are two
have received much attention in recent years (Chakhlevitch different approaches to create parallel population-based
and Cowling 2008; Cowling et al. 2002). Hyper-heuristics metaheuristics: (1) parallel and distributed objective func-
search the heuristic space for the heuristic that best solves a tion evaluation for the individuals of the population
particular problem. The search space for hyper-heuristics (PDPEDM), where in each network node a different subset
could consist of four different subspaces: (1) optimization of individuals conform the current population to be eval-
techniques space, with fixed parameters for each technique; uated. The main difference with SSPDM is that a unique
(2) parameter values space for a technique; (3) underlying instance of the metaheuristic algorithm is executed in the
heuristics space for a technique (e.g. searching on a space of distributed environment. (2) Parallel evaluation of the
applicable selection, mutation or crossover operators when objective function, where computing objective function of
using an evolutionary algorithm); and (4) search space of a solution implies parallel processing in various nodes
possible solution encodings. Metric: A uniform metric is (PDESSM). Metric: A uniform metric is defined to assess
defined to assess these characteristics (with each search this characteristic, where variants taken into account are
space weighing 0.25). IPDM, SSPDM, LSPDNM, PDPEDM and PDESSM.
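The following toy Java sketch illustrates a selection hyper-heuristic operating on subspace (3), the space of underlying heuristics: it keeps a credit score per low-level heuristic and favours those that recently improved the solution. The names, the credit-assignment rule and the minimization convention are illustrative assumptions; real hyper-heuristics use considerably richer selection and acceptance strategies.

import java.util.Arrays;
import java.util.List;
import java.util.Random;
import java.util.function.ToDoubleFunction;
import java.util.function.UnaryOperator;

class SelectionHyperHeuristic {
    private final List<UnaryOperator<double[]>> heuristics; // low-level moves on a real-vector encoding
    private final double[] scores;                           // credit assigned to each heuristic
    private final Random rng = new Random(0);

    SelectionHyperHeuristic(List<UnaryOperator<double[]>> heuristics) {
        this.heuristics = heuristics;
        this.scores = new double[heuristics.size()];
        Arrays.fill(scores, 1.0); // optimistic initial credit
    }

    // One step: choose a heuristic by roulette wheel over scores, apply it, reward improvement.
    double[] step(double[] solution, ToDoubleFunction<double[]> objective) {
        int h = rouletteWheel();
        double before = objective.applyAsDouble(solution);
        double[] candidate = heuristics.get(h).apply(solution);
        double after = objective.applyAsDouble(candidate);
        scores[h] = Math.max(0.1, scores[h] + (before - after)); // minimization: improvement adds credit
        return after < before ? candidate : solution;            // accept only improving moves
    }

    private int rouletteWheel() {
        double total = 0.0;
        for (double s : scores) total += s;
        double u = rng.nextDouble() * total, acc = 0.0;
        for (int i = 0; i < scores.length; i++) {
            acc += scores[i];
            if (u <= acc) return i;
        }
        return scores.length - 1;
    }
}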
C3.3 parallel and distributed computation Many adap-
tations of metaheuristics have been proposed in the litera- 6.2 Assessment and feature cover analysis
ture to exploit the parallel processing capabilities available
in current distributed environments. Incorporating these Table 6 shows feature coverage of Area C3, along with the
strategies in a MOF is a significant improvement in their weight corresponding to each feature in its associated
applicability and relevance to the resolution of a great characteristic. The last row and last column of this table,
number of real problems, given the complexity and cost of respectively, show the sum of features supported by each
its implementation. Parallel and distributed execution of MOF and the number of MOFs supporting each feature. It
metaheuristics techniques without intercommunication is remarkable that only 6.25% of features of this area are
(IPDM) can be implemented independently of the tech- supported by a minimum of six out of the ten MOFs under
nique to apply. The only requirement is the installation of study. Furthermore, 40% of MOFs provide a nearly nil
the MOF in each of the computers of the distributed support (fewer than 10% of features) in this area.
environment and enabling a mechanism for communication
and control in order to design, plan, launch execution and 6.2.1 Comparative analysis
control optimization tasks in that distributed environment.
Another similar variant is one in which techniques can With respect to the features of this criterion, the highest
exchange solutions (SSPDM). A parallel EA based on scores correspond to ParadisEO and FOM. Although both
islands with migration (as proposed by Whitley et al. frameworks support the first characteristic, FOM does not
1999) would qualify as a SSPDM technique. Finally, support Parallel and Distributed Optimization whilst Pa-
techniques that need a change on the implementation of radiseEO does not support Hyper-heuristics. Currently,
metaheuristics are sub-classified by Cahon et al. (2004) FOM is the only framework that supports Hyper-heuristics.
into Parallel Local Search Metaheuristics, where a unique exe- In Fig. 4 a stacked columns diagram is shown for the
cuting instance of the metaheuristic controls the distributed characteristics of this area.
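As a single-machine analogue of the parallel objective-function evaluation variants discussed above (PDPEDM in particular), the sketch below evaluates a whole population concurrently with Java parallel streams; it is an assumption-laden illustration, not the mechanism actually used by ParadisEO, ECJ or MALLBA.

import java.util.List;
import java.util.function.ToDoubleFunction;

// Evaluates the fitness of every individual in parallel; results keep the population order.
class ParallelEvaluator<S> {
    private final ToDoubleFunction<S> objective;

    ParallelEvaluator(ToDoubleFunction<S> objective) {
        this.objective = objective;
    }

    double[] evaluate(List<S> population) {
        // parallelStream() spreads the (assumed thread-safe) evaluations over the common fork-join pool.
        return population.parallelStream()
                         .mapToDouble(objective)
                         .toArray();
    }
}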
Table 6 and Fig. 4, answer RQ3, RQ4 and RQ5. Basic C4.2 Batch mode execution The ability to automatically
hybridization, such as (BEMIh) and (BEMMh) is currently run a set of optimization tasks, where the user only has to
supported by many MOFs, but more advanced hybridiza- specify the sequence and number of times to execute each
tion techniques, such as (IMMh) and (Ch) are not. Parallel task is important when performing experiments. The sup-
and distributed computing is currently supported by Pa- port of this feature promotes cost reduction, by automating
radisEO, ECJ, MALLBA and to a limited extent by other one of the most tedious tasks of research and studies with
mainly EA-oriented frameworks such as JCLEC and EvA2. empirical validation. We have defined four features related
to this automation: (1) repeated execution of a task (using
the same technique, parameter values and instance of the
7 Global optimization process support (C4) problem); (2) repeated execution of a task with different
parameters (defining a range or set of values for the
One of the strengths of MOFs is their capacity to support parameters of the technique); (3) execution of various tasks
the optimization process in its broadest sense, from prob- on the same instance of the problem; and (4) execution of
lem modeling to experimentation, execution and results various tasks on multiple instances of the problem. Metric:
analysis. This support allows users without a deep knowl- A weight of 0.2 has been given for the four features
edge in the area to apply metaheuristic techniques and described above. In addition the ability to randomize the
obtain useful real results. This area evaluates these optimization task execution sequence and the generation
capacities. and loading of a document or file where tasks are defined
(the task execution plan, where description of tasks to
7.1 Characteristics description execute can be user-supplied or generated by MOFs)
weighs 0.2.
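A hedged sketch of the batch-mode features of C4.2: a task execution plan is expanded from one technique, a set of parameter configurations and a set of problem instances, producing one task per combination plus a repetition count. All class names are hypothetical.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// One optimization task: a technique configuration applied to one problem instance.
class Task {
    final String technique;
    final Map<String, Object> parameters;
    final String problemInstance;
    final int repetitions;

    Task(String technique, Map<String, Object> parameters, String problemInstance, int repetitions) {
        this.technique = technique;
        this.parameters = parameters;
        this.problemInstance = problemInstance;
        this.repetitions = repetitions;
    }
}

// Expands the cross product of parameter configurations and instances into a task execution plan.
class TaskPlanBuilder {
    List<Task> build(String technique, List<Map<String, Object>> parameterConfigs,
                     List<String> instances, int repetitionsPerTask) {
        List<Task> plan = new ArrayList<>();
        for (Map<String, Object> params : parameterConfigs) {
            for (String instance : instances) {
                plan.add(new Task(technique, params, instance, repetitionsPerTask));
            }
        }
        return plan;
    }
}

Such a plan can then be shuffled, persisted to a file and replayed, which corresponds to the randomization and plan-generation features weighted in the metric above.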
Seven characteristics have been established, covering the C4.3 Experimental design The appropriate design of
various stages of execution of the global optimization experiments is essential to obtain valid conclusions in any
problem-solving process (4.1, 4.2, 4.3, 4.4 and 4.7) and the study. This characteristic assesses the support provided by
ability to interact with the user (4.5) and with other systems MOFs to establish hypothesis, identify dependent and
(4.6). The following describes those characteristics: independent variables and select and define experiments
C4.1 termination conditions Metaheuristics do not pro- properly using standard designs (factorial, latin squares,
vide explicit termination criteria, since, in general, it is not possible to evaluate whether it has reached the global opti-
possible to evaluate whether it has reached the global opti- dently of the previous characteristic (C4.2) and the capacity
mum solution. Therefore, users have to set criteria based on for statistical analysis of results (C4.4). There are two dif-
the specific needs and context of the problem to decide when ferent ways to support this characteristic: (1) provide inte-
to stop the execution of the metaheuristic. MOFs can provide gration mechanisms with design of experiments systems
implementations of the usual criteria for reuse, among which (such as GOSSET Sloane and Hardin (1991–2003)); and (2)
we find the following: (1) maximum number of iterations, (2) implement the utilities for experimental design in the MOF
maximum execution time, (3) maximum number of objec- itself. The alternative (1) implies that capabilities for
tive function evaluations, (4) maximum number of iterations experiment design are those of the system to integrate with
or execution time without improvement in the optimal and are difficult to assess in the context of this comparative.
solution found (5) reaching a concrete objective function We have created a set of features in order to assess the
value (6) and logical combinations (using operators AND/ capabilities of frameworks that use this approach (2):
OR) of the above (e.g. ExecTime ≤ 36,000 OR ExecTime- on the same instance of the problem; and (4) execution of
WithoutImprovement ≥ 3,600). (7) Furthermore, termina- various tasks on multiple instances of the problem. Metric:
tion conditions can be established independently of the niques or irrelevance of the value of a parameter in a range;
problem to solve but dependent on the technique used, such (b) experiments modeling, supporting the definition of
as a termination criterion based on the diversity of the pop- dependent and independent variables and their nature
ulation when using an EA. Finally, (8) we evaluate the (nominal, ordinal or scalar); (c) experiments design based
facilities provided to enable the definition of specific crite- on the previous model using common schemes; and finally
rion by its implementation. In this sense, we have assessed (d) the capability of executing the experiments automati-
the use of abstract classes or interfaces to evaluate the ter- cally, this feature assess the capability of generating a
mination condition and its use in the implementation of the proper task execution plan for the experiments designed
metaheuristic techniques provided. Metric: A proportional (C4.2 evaluates capabilities of automation of those plans
metric is defined, where (8) weighs 0.3, and the remaining execution). Metric: A proportional metric is defined, where
weight is evenly distributed among the other described approach (1) weighs 0.2, and the remaining weight is evenly
criteria. distributed among features of approach (2).
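Since criterion (6) combines termination conditions with AND/OR operators, a natural implementation is to express each condition as a predicate over the search state and compose them with boolean combinators. The sketch below is a minimal Java illustration; the class names and state fields are assumptions for this example only.

import java.util.function.Predicate;

// Snapshot of the information a termination test may inspect (hypothetical fields).
class SearchState {
    int iteration;
    long elapsedMillis;
    long millisSinceImprovement;
}

// Common termination criteria expressed as composable predicates.
final class Terminations {
    static Predicate<SearchState> maxIterations(int limit) {
        return s -> s.iteration >= limit;
    }
    static Predicate<SearchState> maxTime(long millis) {
        return s -> s.elapsedMillis >= millis;
    }
    static Predicate<SearchState> noImprovementFor(long millis) {
        return s -> s.millisSinceImprovement >= millis;
    }
}

// Usage mirroring the example in the text (times expressed here in milliseconds):
// Predicate<SearchState> stop =
//     Terminations.maxTime(36_000_000L).or(Terminations.noImprovementFor(3_600_000L));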
C4.4 Statistical analysis One of the most important buttons, etc.); (2) technique specification and parameter
elements to ensure the validity of any study is the ability to configuration support, (3) problem modeling and data
perform statistical tests on its results. Therefore, one of the import, (4) Graphical support of advanced features (sub-
most common tasks in solving optimization problems (and divided into batch mode execution configuration, design of
in any study with an empirical component) is the statistical experiments and statistical analysis of results) (5) the use of
analysis of experimental data and results. There are two optimization project where all the information about
different ways to support this characteristic: (1) to provide problem instances, techniques and results are stored and (6)
integration mechanisms with statistical analysis systems the graphical representation of results through diagrams
(such as R or SPSS); and (2) to implement the utilities for and figures. Metric: A uniform metric is defined to assess
statistical analysis in the MOF itself. One of the disad- this characteristic (each feature weighs 0.2). If the MOF
vantages of approach (1) is that the user must import data only shows the evolution of the objective function of the
into the statistical analysis system and perform statistical best solution, but no additional metrics are provided (such
tests on it, interpret results and return to the framework to as population diversity when using EA, or current solution
change parameters or implementations if necessary. This when using TS or SA), then feature (6) has been evaluated
approach frees the MOF from the implementation of the with half of the weight.
statistical tests. Moreover, statistical analysis systems are C4.6 Interoperability This characteristic assesses the set
usually more complete and powerful than implementations of capabilities that frameworks provide to exchange
of tests integrated on frameworks. On the other hand, the information and interact with other systems. Specifically
use of strategy (2) allows the framework to automate the the following features are taken into account: (1) results
tests and associated data exchange, showing the results and data export capabilities (considering formats such as
integrated in its user interface and even react autonomously CSV or excel/odf files); (2) data import capabilities (using
to the results of tests. A set of features have been created in formats such as CSV, excel/odf files or specific formats of
order to assess capabilities of frameworks that use standard libraries of each problem type, such as SATLIB or
approach (2), concerning the support of various tests both TSPLIB); (3) the capability of deployment and invocation
parametric and non-parametric: (a) t student; (2) one-way as a web service (as in Garcı́a-Nieto et al. (2007); and (4)
ANOVA; (3) two-way ANOVA; (4) n-way ANOVA; (5) the use of XML to store information associated with
Mann–Withney U test; (6) Wilcoxon test; and (7) Kol- optimization projects (selected solution encoding, objective
mogorov–Smirnov test (or any test to assess the distribu- function and problem model, techniques and their param-
tion of normal data). The use of approach (2) does not eters, experiment design and results and statistical analysis,
necessarily imply that approach (1) cannot be applied. In etc.), so that other systems can process these data and
this sense the integration with the statistical software can parameters in a simple way. Metric: A uniform metric is
be performed at the test execution level (to free the defined to assess this characteristic (each feature weighs
implementation burden), while providing programmatic 0.25).
support or graphical interfaces integrated in the MOF.
Metric: A proportional metric is defined, where approach
7.2 Assessment and feature cover analysis
(1) weighs 0.3, and the remaining weight is distributed
uniformly among features of approach (2).
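To ground the data-export capability of C4.6 (feature (1)), the sketch below writes one CSV row per optimization run; the record layout and class names are illustrative assumptions, not the format of any surveyed MOF.

import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal CSV export of experiment results.
class CsvResultsExporter {

    // Plain record of a single optimization run (hypothetical structure).
    static class RunResult {
        String technique;
        String instance;
        double bestObjective;
        long elapsedMillis;
    }

    void export(Path file, List<RunResult> results) throws IOException {
        try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(file))) {
            out.println("technique,instance,bestObjective,elapsedMillis");
            for (RunResult r : results) {
                out.printf("%s,%s,%.6f,%d%n", r.technique, r.instance, r.bestObjective, r.elapsedMillis);
            }
        }
    }
}

A file of this kind can be loaded directly into R, SPSS or a spreadsheet, which is precisely the integration-based route to statistical analysis described for C4.4.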
The feature coverage of C4 area is shown on Table 7,
C4.5 User interface, graphical reports and charts The
along with the weight corresponding to each feature in its
usability of applications strongly depends on the proper
associated characteristic. As an exception, the features of
design of its Graphical User Interface (GUI). Specifically,
the GUI characteristic have been assessed using a real
an appropriate GUI for MOFs requires taking into account
value between 0.0 and 1.0. The last row and last column of
the rest of the characteristics of this comparison criteria:
this table, respectively, show the sum of features supported
the ability to select and configure the parameters of the
by each MOF and the number of MOFs supporting each
different techniques, reporting of the results and monitor-
feature.
ing of the status of optimization tasks and of the global
execution plan, the control of nodes in distributed and
parallel computing environments, the on-line technical 7.2.1 Comparative analysis
support and the assistance or communication with the user
forums and developers of the MOF. Moreover, although The low score obtained by ParadisEO in this area is sur-
GUI design and usability could be assessed, the evaluation prising, highlighting this as a potential area of improve-
would include a subjective bias. In order to avoid it, we ment for that framework. OAT is among the highest scored
have defined the following set of features to be evaluated: frameworks (which has a well-designed GUI as well as
(1) Integrated help and basic usability (menus, shortcut powerful experiments execution and statistical analysis
support) followed by JCLEC, whose characteristics in this defined to group this set of characteristics as described
area have been evaluated together with those of its asso- below.
ciated project KEEL (focused on Data Mining and classi-
fication applications). Note that this area has, together with 8.1 Characteristics description
areas C2 and C3, the lowest support levels, thus repre-
senting significant areas of improvement in the present C5.1 Language Implementation language can be a key
framework ecosystem. In Fig. 5 a stacked columns diagram factor for users of MOFs, since the use of a well-known
is shown for the characteristics of this area. programming language reduces development costs and
Table 7 and Fig. 5 answer the requirements for RQ6. likelihood of errors. Frameworks under consideration in
Area C4 characteristics summarize the capabilities pro- this study are implemented in C++, C# and Java.
vided by current MOFs for helping to conduct research C5.2 Licensing Cost is not a characteristic of interest
studies and the general problem-solving process. Those since all the frameworks assessed are free; however,
characteristics vary from statistical analysis and experi- licensing of MOFs can limit the context and purposes of
ment execution engines, to GUIs with wizards and chart their use, or they can be forced to provide the client with
generation. These tools, however, are not inter-operable, the source code of the generated application. From this
and the quality and support of each MOF is not homoge- perspective the types of license we take into account are (1)
neous; it is dispersed on the set of frameworks. Conse- commercial; (2) free without providing MOF source code
quently, those tools are not available for all techniques or nor commercial use; (3) free with MOF source code
for programming languages and platforms. available only for certain organization and usages (usually
universities and non profit activities); (4) MOF source code
available under GPL (GNU General Public License) or
8 Design, implementation and licensing (C5) similar, that forces the distribution of the source code of
derived products under GPL license; and (5) MOF source
Both a suitable licensing model and the availability to run code available under LGPL (GNU Lesser General Public
in multiple platforms are essential to the success of any License) or similar, that allows the use for commercial
software product. In the case of software frameworks, application without restrictions on source code availability.
incorporating proper design and effective implementation Metric: This feature is not evaluated using a set of features
is also very important, since applications created using it but we establish a direct score, based on the freedom that
incorporate their design therein (with the errors and prob- each license provides: (1) Commercial Licensing = 0; (2)
lems that they may contain). Moreover, the efficiency of Free binaries (no commercial use) = 0.25; (3) Restricted
those applications is limited by the efficiency of the availability of source code = 0.5; (4) GPL = 0.75 and (5)
framework. As a consequence, a comparison area has been LGPL =1.
C5.3 Supported platforms The set of platforms taken quantitative evaluative criteria, since the functionalities
into account are: Windows, Unix (Linux, Solaris, HPUX, supported are not directly related to it, and an increase in its
etc.) and Mac. Metric: A uniform metric is defined, with size does not necessarily imply greater complexity in its
each platform weighing 1/3; in the case of partial support use. Therefore, we consider it as a qualitative criterion. As
(only a limited set of features are available on a certain a consequence, we consider some of these measures for
platform) we penalize it with 50%. each framework, but they will not be included in the
C5.4 Software engineering best practices A proper quantitative assessments.
design and following of software engineering best practices C5.6 Numerical handling Most metaheuristic techniques
is especially important for MOFs. However, assessing the are stochastic, requiring the use of a random number
design of a framework in a quantitative and objective way generator. This fact has two consequences: (1) choosing a
is a difficult task. As a result, features only evaluate basic good random number generator is a key point for the
use of certain tools and processes recognized as best proper behavior of the techniques implemented by MOFs;
practices such as (1) the use of design patterns to promote and (2) in order to support experiments replicability, a
flexibility in variation points; (2) the use of automated tests unique seed must be used on all random number generators
(unit tests): this characteristic is evaluated based on the used along the framework and its customizations/
source code of MOFs (for those that do not provide the extensions developed by users. Features evaluating these two
source code, evaluation is based on the documentation, if important points are defined for this characteristic, where
tests exist); (3) explicit documentation of the MOF vari- (1) evaluates if a proper random number generator is pro-
ation and extension points; and (4) the use of reflective vided (either a Mersenne Twister implementation or support
capabilities and dependency injection to promote flexibility for customization of the random number generation
as described by Fowler (2004). The latter feature corre- scheme); and (2) evaluates the replicability of experiments
sponds to the capabilities of the framework to dynamically based on the support of a global seed and provision of a
load types of problems, objective functions and other ele- random number generator using this seed to user imple-
ments associated with customization or extension without mented modules. Metric: A uniform metric is defined to
having to recompile the framework. With regard to feature assess this characteristic.
(4), MOFs that perform runtime loading of modules have
been associated with half of the weight, while those that 8.2 Assessment and feature cover analysis
use a dependency injection system for the management of
modules have full weight. Metric: A uniform metric is This area seems to be the most homogeneous and sup-
defined to assess this characteristic. ported in the sense that most frameworks support almost all
C5.5 Size A basic measure of the complexity of a the features and to a high degree. The platforms supported
framework is its size. The size of a framework can be is practically universal, except for HeuristicLab, EasyLocal
measured by various metrics, number of lines of code, and some modules of ParadisEO. It is remarkable also the
number of classes and packages/modules, number of vari- general adoption of the UML notation, as well as the open
ation points and possible combinations of components, etc. source licensing models. In Fig. 6 a stacked columns dia-
It would be inappropriate to use the size of frameworks as a gram is shown for some characteristics of this area. With
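To illustrate the replicability requirement of C5.6, the sketch below centralizes random number generation behind one global seed so that the framework and user-implemented modules draw from deterministically derived streams; java.util.Random is used only for brevity, whereas the text recommends a Mersenne Twister or a customizable generation scheme. Names are hypothetical.

import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Central provider of seeded generators: one global seed makes whole experiments replayable.
final class RandomProvider {
    private static volatile long globalSeed = 12345L;
    private static final ConcurrentMap<String, Random> generators = new ConcurrentHashMap<>();

    // Set once at the start of an experiment; every later draw becomes reproducible.
    static void setGlobalSeed(long seed) {
        globalSeed = seed;
        generators.clear();
    }

    // Each named module gets its own stream, derived deterministically from the global seed.
    static Random forModule(String moduleName) {
        return generators.computeIfAbsent(moduleName,
                name -> new Random(globalSeed ^ name.hashCode()));
    }

    private RandomProvider() { }
}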
the authors’ own personal view of open questions, based on ECJ, PARADISEO, JCLEC and HeuristicLab; moreover,
the analysis presented in this paper. Figure 10 shows global other frameworks such as EvA2 and Opt4j released minor
score results for MOFs as Kiviat diagrams, summarizing versions with bug fixes and minor features. This evolution
the results of this study; evaluating MOFs from a research allowed us to test the evaluation framework presented in this
user perspective. In the appendix, Table 7 shows the global study. No modifications were needed in order to assess those
score obtained for each MOF and characteristic as well as new versions of the MOFs and their features; thus it validates
the average for each area. the flexibility and completeness of our approach. Moreover,
To achieve the maximum score in areas C1, C2 and C3, both the previous and new versions of those frameworks
each MOF would have to implement an ample subset of the were evaluated, providing a dynamic view of the ecosystem,
current state of the art on metaheuristics, so it is not sur- in contrast with the static one shown in the previous sections.
prising that the scores do not generally reach the maximum In this sense, we can evaluate the ‘‘hot areas’’, i.e. those areas
possible value. On the contrary, the small average values where more evolution has been performed and the speed in
on areas C4 and C6 are significant and therefore show a the evolution of the assessed MOFs. In this sense the area
general improvement direction for current MOFs. with bigger improvements are C4 and C5, primarily due to
the improvements in the GUI and licensing model of Heu-
10.1 Capabilities discussion risticLab and the new GUI of ECJ. Additionally, C1 and C6
have also improved significantly but in a smaller scale, since
On average, the MOF with the best score is ECJ (maximum new techniques and better documentation are provided by
area in Fig. 10), making it a preferred choice if users can use the assessed MOFs. The MOF with a bigger improvement in
EA on Java. However, this MOF scores below average in areas this period was HeuristicLab, changing directly from version
C1 and C5, which are clear improvement areas for it, and 1.1 to version 3.3. In this new version significant improve-
could lead users to evaluate different options (C1 measures ment is demonstrated in the licensing model (it becomes an
techniques available). The next best scored MOF is Paradi- open source project under GPL license) and the GUI and
sEO, salient in areas C1 and C3, which uses C++ as its documentation have been improved significantly. The next
implementation language. This MOF, however, scores below framework in terms of improvement during the creation of
average in C4 area, making this a clear improvement area. The this benchmark was ECJ, where a multi-objective technique
MOFs that provide the amplest support in terms of the variety and a GUI were added, i.e. complementary, significant
of metaheuristics (criterion C1) are FOM and ParadisEO. The improvements in documentation have been developed.
score obtained by OAT in C4 area is remarkable, much above Finally, the evolution measured shows that the current MOFs
average, and it is due to its GUI, execution of experiments and Ecosystem is a vibrant and living one, where new versions
statistical analysis tooling. In this same area the support of and important features are added continuously.
JCLEC (and its twin project KEEL) is also above average. Both the final evaluation of current versions and the
However, the best score of the GUI characteristic is obtained previous one are available as Google Docs spreadsheets at
by HeuristicLab that in its last version (3.3) provided a com- http://www.isa.us.es/MOFComparison and http://www.isa.
plete, highly configurable and intuitive user interface. C5 area us.es/MOFComparison-OLD, respectively. They can be
is where all of MOFs provide better average results. This is not downloaded and exported to different formats such as MS
surprising given that these characteristics are key for frame- Office or open office for customization and tailoring.
works use and success and are clear signs of technical com-
petence and maturity. In this sense, MOFs without good
10.3 Potential areas of improvement of current
design or implementation simply do not survive. Finally, the
frameworks
average value of area C6 indicates the need to improve doc-
umentation, user guidance and support. Thus we define
In addition to the points stated above about area C6, based on
Challenge 1: Improve documentation, user guidance and
the finished comparative study carried out, and on results
support and GUI tooling.
described above, we enumerate below some gaps and unsup-
ported features that have been identified. The areas where we
10.2 Evolution of the ecosystem of MOFs
see the most room for improvement are C2 (adaption to the
problem and its structure), C3 (advanced characteristics) and C4
The creation of this benchmark has been a time-consuming
(general optimization process support). Specifically, some
and demanding task. However, the length of this task has
features that have room for improvement are
allowed the evaluation of an additional feature of the eco-
system of MOFS: its liveliness and evolution speed. During • Hyper-heuristics support.
the creation of this benchmark, various frameworks released • Support for designing and automated running of
new major versions with important improvements, namely experiments and for analyzing results.
• User guides together with wizards, project templates the use of any of these frameworks and on the availability
and GUI to aid the optimization process. of a good number of MOFs.
• Parallel and distributed computing support. From the MOFs assessment carried out, we can draw the
• Domain-Specific Languages for objective function and following conclusions:
constraints formulation.
• Frameworks are useful tools that can speed up the
Thus we define Challenge 2: Provide added-value fea- development of optimization-based problem solving
tures for optimization, such as hyper-heuristics and parallel projects, reducing their development time and costs.
and distributed computing capabilities. They might also be applied by non-expert users as well as
In particular, in the context of area C5 (design, imple- extend the user base and the applications scope for
mentation and licensing), we have identified the following metaheuristics techniques.
issues regarding software engineering best practices: • There are many MOFs available, which overlap and
provide similar capabilities which means that a certain
• Absence of unit tests. Note that one of the discarded
duplication of efforts has been made. It would be great
EA-oriented optimization libraries (JGAP) is a recognized
if a certain coordination and standardization of these
reference for this practice Meffert (2006); however,
MOFs were carried out in order to improve the support
assessed MOFs do not provide unit tests in general
given to the user community.
(except for JCLEC and HeuristicLab).
• There are visible gaps in the support of specific key
• Heterogeneity of project building and description
characteristics, as shown in Sect. 10.3.
mechanisms. It would be interesting that, as in Paradi-
• There is impending work we have to face in the near
sEO, projects provide files for framework compilation
future, namely
using standard mechanisms such as makefiles in C++,
• Perform the second phase of the technology evaluation
or Ant or Maven build files in Java.
methodology followed in this study as defined by Brown
• Absence of explicit documentation of variation points.
and Wallnau (1996), establishing a set of specific use
Although all the frameworks that have been evaluated
scenarios and conducting experiments of application
provide extensive technical documentation of the differ-
using the evaluated MOFs.
ent classes and modules, none of them provide a scheme
• As the authors of one of the frameworks studied
(such as feature models) to describe the variation points
(FOM), we plan to enhance it according to the potential
of the framework, nor are these even described explicitly
improvement areas identified in this paper.
in natural language in the documentation. Moreover,
none of the frameworks use the UML profiles for
framework documentation Fontoura et al. (2001). Acknowledgments We would like to thank Stefan Wagner, Andreas
Schaerf, Sebastián Ventura, Sean Luke, Marcel Kronfeld and David L.
• Limited dynamic and reflexive capabilities for loading
Woodruff for their helpful comments in earlier versions of this article.
problems, heuristics and techniques variants. Thus, We are thankful to David Benavides and Sergio Segura for providing us
only Opt4j uses a dependency injection mechanism their inspirational work Benavides et al. (2009), and Ana Galan for her
(such as Google Juice or Spring). linguistic support. This work has been partially funded by the European
Commission (FEDER) and Spanish Government under CICYT project
Finally, regarding area C1 (Metaheuristic techniques) SETI (TIN2009-07366) and the Andalusian Government projects
there is always the possibility of enlarging the portfolio of ISABEL (TIC-2533) and THEOS (TIC-5906).
techniques implemented. The current support is uneven,
with some techniques (such as EA) practically universally
Appendix: Data tables
supported and others (such as GRASP, SS, ACO or AIS)
being rarely implemented.
In this section, we provide detailed information about the
Thus we define Challenge 3: Improve techniques and
scores obtained in each characteristic by each framework.
variants support and Challenge 4: Develop standard bench-
Interested readers can obtain more detailed information about
marks for MOFs.
assessment on characteristics and features (including com-
ments on problems found on the assessment, penalizations on
11 Conclusions some features and its underlying reasons and informations
sources used to assess it) in http://www.isa.us.es/MOF
In this paper an assessment based on the state of the art of Comparison. Moreover, this spreadsheet can be downloaded
the main MOFs has been made. The motivation of the and exported to various formats, and it is provided in such a
study is based on the implications of the NFL theorem in way that user can customize weights of each characteristic,
terms of the desirability and advantages of using such tools, feature and area, allowing the creation of tailored benchmarks
on the complexity and difficulty of learning and mastering more adapted to its specific needs (see Tables 9, 10, 11).
C1-Metaheuristic techniques
Steepest descent/fill 1 1 1 1 0 1 0 1 1 1 0.800
climbing (SD)
Simulated annealing (SA) 0 0.8 0.6 0.9 0 0 0.6 0.7 0.8 0.7 0.510
Tabu search (TS) 0 0.7 0 0.9 0 0 0 1 0.7 0 0.330
GRASP 0 0 0 1 0 0 0 0 0 0 0.100
Variable neighborhood 0 0.2 0 0.2 0 0 0 0.2 0 0 0.060
search (VNS)
Evolutionary 0.85 0.6 0.8 0.25 0.7 0.7 0.7 0 0.7 0.4 0.570
algorithms (EA)
Particle swarm 0.3 0.7 0.5 0 0 0 0.5 0 0.3 0.3 0.260
optimization (PSO)
Artificial immune 0 0 0 0 0 0.25 0 0 0 0 0.025
systems (AIS)
ACO 0 0 0 0.7 0 0.9 0 0 0 0.3 0.190
Scatter search 0 0 0.438 0 0 0 0 0 0 0 0.044
Multiobjective 0.125 0.188 0.438 0 0.188 0 0.125 0 0.0625 0 0.113
metaheuristics
C2-Adaption to the problem and its structure
Solution encoding 0.7 0.7 0.775 0.075 0.588 0.113 0.738 0 0.588 0.15 0.443
Neighborhood definition 0 0.9 0 0 0 0 0 0.9 0.9 0 0.270
E/A auxiliary methods 0.226 0.409 0.393 0 0.426 0.02 0.321 0 0.616 0.197 0.261
Solution selection 0.6 0.467 0.667 0.467 0.533 0.333 0.2 0 0.467 0.267 0.400
Objective function 0 0 0 0 0 0 0 0 0.333 0 0.033
specification
Constraint handling 0 0.3 0.6 0.7 0 0 0 0 0 0 0.160
C3-Advanced characteristics
Hybridization support 0.4 0.7 0 0.5 0.4 0.1 0.1 0.3 0.9 0.3 0.370
Hyper-heuristics support 0 0 0 0.5 0 0 0 0 0 0 0.050
Parall. and dist. opt 0.8 0.8 0.6 0 0 0 0 0 0 0.8 0.300
C4-General optimization process support
Finalization conditions 0.7 0.9 0.95 0.8 0.6 0.75 0.6 0.15 0.5 0.7 0.665
support
Batch processing 0.4 0 0.2 0.2 0.4 0.3 0.2 0.6 0.4 0 0.270
Experiments design 0 0 0 0.6 0.72 0.74 0 0.04 0.1 0 0.220
support
Statistical Analysis 0 0 0.15 0.5 0.6 0.7 0 0.2 0 0 0.215
features
User interface and 0.483 0 0.533 0.367 0.25 0.75 0.45 0 1 0.083 0.392
graphical reports
Interoperability 0.625 0.25 0.125 0 0.25 0.25 0 0 0.75 0 0.225
C6-Documentation and support
Problems and tutorials 0.36 0.136 0.068 0.034 0.153 0.288 0.136 0.033898305 0.407 0.136 0.175
Papers 1 0.407 0.283 0.027 0.195 0.027 0.097 0.142 0.345 0.221 0.274
Documentation 0.8 0.79 0.6 0.41 0.59 0.14 0.62 0.2 0.61 0.35 0.511
Popularity/ users 1 0.0595238 0 0 0 0.027 0 0 0 0 0.118
Boldface values denote the best (higher) values of each row
Licensing Open source CECILL LGPL LGPL LGPL LGPL LGPL GPL GPL Open
(academic free (ParadisEO) and Source
license) LGPL (EO)
Supported All All (except for All All All All All Windows Windows Unix
platforms windows if and
using PEO) Unix
Sof. eng. best 0.62 0.4 0.2 0.64 0.9 0.1 0.7 0.4 0.7 0.2
practices
Packages/ 28 10 54 215 63 109 35 80 119 80
modules
Classes/files (for 226 542 594 510 304 373 417 244 785 514
non OO
languages)
Numerical 1 1 0.75 0 1 0.5 1 1 1 0
handling
Boldface value denotes the best (higher) value of the row
1 Supported metaheuristics 0.207 0.381 0.394 0.450 0.081 0.259 0.175 0.264 0.324 0.245 0.282
2 Problem adaption/encoding 0.254 0.413 0.306 0.090 0.258 0.078 0.210 0.150 0.484 0.102 0.249
3 Advanced metaheuristic 0.400 0.500 0.200 0.333 0.133 0.033 0.033 0.100 0.300 0.367 0.226
characteristics
4 Optimization process support 0.368 0.192 0.347 0.411 0.470 0.582 0.208 0.165 0.458 0.131 0.356
5 Design, implementation and 0.905 0.797 0.738 0.660 0.975 0.650 0.925 0.717 0.708 0.417 0.786
licensing
6 Documentation, samples and 0.789 0.348 0.238 0.118 0.234 0.364 0.213 0.094 0.340 0.177 0.304
popularity
Average per framework 0.487 0.439 0.371 0.344 0.359 0.328 0.294 0.248 0.436 0.240 0.367
Boldface values denote the best (higher) values of each row
of the 5th International Conference on Genetic Algorithms, Morgan Eshelman LJ, Schaffer JD (1993) Real-coded genetic algorithms and
Kaufmann Publishers Inc., San Francisco, pp 452–459 interval-schemata. In: Whitley DL (ed) Foundation of Genetic
Brindle A (1981) Genetic algorithms for function optimization. Ph.D. Algorithms 2, Morgan Kaufmann., San Mateo, pp 187–202
thesis, University of Alberta, Edmonton Eshelman LJ, Caruana RA, Schaffer JD (1989) Biases in the
Brown AW, Wallnau KC (1996) A framework for evaluating software crossover landscape. In: Proceedings of the third international
technology. IEEE Softw 13(5):39–49. doi:10.1109/52.536457 conference on Genetic algorithms, Morgan Kaufmann Publishers
Brownlee J (2007) Oat: the optimization algorithm toolkit. Tech. rep., Inc., San Francisco, pp 10–19
Complex Intelligent Systems Laboratory, Swinburne University Feo T, Resende M (1989) A probabilistic heuristic for a computa-
of Technology tionally difficult set covering problem. Oper Res Lett 8:67–
Cahon S, Melab N, Talbi EG (2004) Paradiseo: a framework for the 71
reusable design of parallel and distributed metaheuristics. Feo TA, Resende MG (1995) Greedy randomized adaptive search
J Heuristics 10(3):357–380. doi:10.1023/B:HEUR.0000026900. procedures. J Glob Optim 6:109–133
92269.ec Fogarty TC (1989) Varying the probability of mutation in the genetic
Chakhlevitch K, Cowling P (2008) Hyperheuristics: recent develop- algorithm. In: Proceedings of the 3rd International Conference
ments. In: Adaptive and multilevel metaheuristics, pp 3–29 on Genetic Algorithms, Morgan Kaufmann Publishers Inc., San
Chatterjee A, Siarry P (2006) Nonlinear inertia weight variation for Francisco, pp 104–109
dynamic adaptation in particle swarm optimization. Comput Fogel D, Fogel L, Atmar J (1991) Meta-evolutionary programming,
Oper Res 33(3):859–871. doi:10.1016/j.cor.2004.08.012, http:// vol 1. In: Conference Record of the Twenty-Fifth Asilomar
www.sciencedirect.com/science/article/B6VC5-4DBJG28-2/2/5 Conference on Signals, Systems and Computers, 1991.
7210fee165fec156db017ff5e59aa5f pp 540–545. doi:10.1109/ACSSC.1991.186507
Clerc M (2006) Particle swarm optimization. ISTE Publishing Fogel LJ (1964) On the organization of intellect. Ph.D. thesis, UCLA
Company Fogel LJ, Fogel DB (1986) Artificial intelligence through evolution-
Corne D, Knowles JD, Oates MJ (2000) The pareto envelope-based ary programming. Tech. rep., Final Report for US Army
selection algorithm for multi-objective optimisation. In: Pro- Research Institute, contract no PO-9-X56-1102C-1
ceedings of the 6th International Conference on Parallel Problem Fogel LJ, Owens AJ, Walsh MJ (1966) Artificial intelligence through
Solving from Nature (PPSN VI), Springer, London, pp 839–848 simulated evolution. Wiley
Cowling PI, Kendall G, Soubeiga E (2002) Hyperheuristics: a tool for Fonseca CM, Fleming PJ (1993) Genetic algorithms for multiobjec-
rapid prototyping in scheduling and optimisation. In: Proceed- tive optimization: Formulationdiscussion and generalization. In:
ings of the Applications of Evolutionary Computing on Proceedings of the 5th International Conference on Genetic
EvoWorkshops 2002, Springer, London, pp 1–10 Algorithms, Morgan Kaufmann Publishers Inc., San Francisco,
Cramer NL (1985) A representation for the adaptive generation of pp 416–423
simple sequential programs. In: Proceedings of the 1st Interna- Fontoura M, Lucena C, Andreatta A, Carvalho S, Ribeiro C (2001)
tional Conference on Genetic Algorithms, L. Erlbaum Associates Using uml-f to enhance framework development: a case study in
Inc., Hillsdale, pp 183–187 the local search heuristics domain. J Syst Softw 57(3):201–206
Davis L (1985) Applying adaptive algorithms to epistatic domains. In: Fowler M (2004) Inversion of control containers and the dependency
Proceedings of the 9th international joint conference on Artificial injection pattern. http://www.martinfowler.com/articles/injection.
intelligence (IJCAI’85). Morgan Kaufmann Publishers Inc., San html
Francisco, pp 162–164 Gagnè C, Parizeau M (2006) Genericity in evolutionary computation
Davis L (1989) Adapting operator probabilities in genetic algorithms. software tools: principles and case-study. Int J Artif Intell Tools
In: Proceedings of the third international conference on Genetic 15(2):173–194
algorithms. Morgan Kaufmann Publishers Inc., San Francisco Gamma E, Helm R, Johnson R, Vlissides J (1994) Design patterns:
pp 61–69 elements of reusable object-oriented software, illustrated edition.
Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist Addison-Wesley Professional
multiobjective genetic algorithm: Nsga-ii. IEEE Trans Evol Garcı́a-Nieto J, Alba E, Chicano F (2007) Using metaheuristic
Comput 6:182–197 algorithms remotely via ros. In: Proceedings of the 9th annual
de Castro L, Von Zuben F (2002) Learning and optimization using the conference on Genetic and evolutionary computation (GECCO
clonal selection principle. IEEE Trans Evol Comput 6(3):239– ’07), ACM, New York, pp 1510–1510. doi: 10.1145/1276
251 958.1277239
Di Gaspero L, Schaerf A (2003) Easylocal??: An object-oriented Geman S, Geman D (1987) Stochastic relaxation, gibbs distributions,
framework for flexible design of local search algorithms. Softw and the bayesian restoration of images. Readings in computer
Pract Exp 33(8):733–765. doi:10.1002/spe.524 vision: issues, problems, principles, and paradigms, pp 564–584
Dorigo M, Gambardella L (1997) Ant colony system: a cooperative Glover F (1977) Heuristics for integer programming using surrogate
learning approach to the traveling salesman problem. IEEE constraints. Decis Sci 8(1):156–166. doi:10.1111/j.1540-5915.
Trans Evol Comput 1(1):53–66. doi:10.1109/4235.585892 1977.tb01074.x
Dreo, Pétrowski A, Siarry P, Taillard E (2005) Metaheuristics for Glover F (1989) Tabu search—part i. ORSA J Comput 1:190–206
hard optimization: methods and case studies. Springer Glover F, Kochenberger GA (2002) Handbook of metaheuristic.
Eiben AE, Raué PE, Ruttkay Z (1994) Genetic algorithms with multi- Kluwer Academic Publishers
parent recombination. In: Proceedings of the International Goldberg D, Lingle R (1985) Alleles loci and the traveling salesman
Conference on Evolutionary Computation. The Third Confer- problem. In: Proc. 1st Int. Conf. on Genetic Algorithms and their
ence on Parallel Problem Solving from Nature (PPSN III), Applications, pp 154–159
Springer, London, pp 78–87 Goldberg DE (1989) Genetic algorithms in search, optimization, and
Eshelman LJ (1991) The chc adaptive search algorithm: how to have machine learning. Addison-Wesley.
safe search when engaging in nontraditional genetic recombina- Goldberg DE (1990) A note on boltzmann tournament selection for
tion. Foundations of Genetic Algorithms pp 265–283. http:// genetic algorithms and population-oriented simulated annealing.
ci.nii.ac.jp/naid/10000024547/en/ Complex Syst 4(4):445–460
Goldberg DE, Smith RE (1987) Nonstationary function optimization using genetic algorithms with dominance and diploidy. In: Proceedings of the Second International Conference on Genetic Algorithms on Genetic algorithms and their application, L. Erlbaum Associates Inc., Hillsdale, pp 59–68
Hansen P, Mladenović N, Perez-Britos D (2001) Variable neighborhood decomposition search. J Heuristics 7(4):335–350. doi:10.1023/A:1011336210885
Ho YC, Pepyne DL (2002) Simple explanation of the no-free-lunch theorem and its implications. J Optim Theory Appl 115(3):549–570. http://www.ingentaconnect.com/content/klu/jota/2002/00000115/00000003/00450394
Holland JH (1975) Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. University of Michigan Press
Holland JH (1992) Adaptation in natural and artificial systems. MIT Press, Cambridge
Horn J, Nafpliotis N, Goldberg D (1994) A niched Pareto genetic algorithm for multiobjective optimization. In: Proceedings of the First IEEE Conference on Evolutionary Computation, 1994, vol 1, pp 82–87. doi:10.1109/ICEC.1994.350037
Iredi S, Merkle D, Middendorf M (2001) Bi-criterion optimization with multi colony ant algorithms. In: Proceedings of the First International Conference on Evolutionary Multi-Criterion Optimization (EMO '01), Springer, London, pp 359–372
Jong KAD (1975) An analysis of the behavior of a class of genetic adaptive systems. Ph.D. thesis, University of Michigan
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, 1995, vol 4, pp 1942–1948. doi:10.1109/ICNN.1995.488968
Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC)
Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680. http://www.jstor.org/stable/1690046
Kitchenham BA (2004) Procedures for undertaking systematic reviews. Tech. rep., Computer Science Department, Keele University
Knowles JD, Corne DW (2000) Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 8(2):149–172. doi:10.1162/106365600568167
Koza JR (1992) Genetic programming: on the programming of computers by natural selection. MIT Press
Kronfeld M, Planatscher H, Zell A (2010) The EvA2 optimization framework. In: Blum C, Battiti R (eds) Learning and Intelligent Optimization Conference, Special Session on Software for Optimization (LION-SWOP), Springer, Venice, no. 6073 in Lecture Notes in Computer Science, LNCS, pp 247–250. http://www.ra.cs.uni-tuebingen.de/publikationen/2010/Kron10EvA2Short.pdf
Luke S, Panait L, Balan G, Paus S, Skolicki Z, Popovici E, Sullivan K, Harrison J, Bassett J, Hubley R, Chircop A, Compton J, Haddon W, Donnelly S, Jamil B, O'Beirne J (2009) ECJ: a Java-based evolutionary computation research system. http://cs.gmu.edu/eclab/projects/ecj/
Lukasiewycz M, Glaß M, Reimann F, Helwig S (2009) Opt4J—the optimization framework for Java. http://www.opt4j.org
Meffert K (2006) JUnit Profi-Tips. Entwickler Press
Michalewicz Z (1994) Genetic algorithms + data structures = evolution programs. Springer, New York
Michalewicz Z, Fogel DB (2004) How to solve it: modern heuristics. Springer
Mladenović N (1995) A variable neighborhood search algorithm - a new metaheuristic for combinatorial optimization. Abstracts of papers published at Optimization Week, p 112
Monmarché N, Venturini G, Slimane M (2000) On how Pachycondyla apicalis ants suggest a new search algorithm. Futur Gener Comput Syst 16(9):937–946
Montana DJ (1995) Strongly typed genetic programming. Evol Comput 3(2):199–230. doi:10.1162/evco.1995.3.2.199
Montana DJ, Davis L (1989) Training feedforward neural networks using genetic algorithms. In: Proceedings of the 11th international joint conference on Artificial intelligence (IJCAI '89). Morgan Kaufmann Publishers Inc., San Francisco, pp 762–767
Mühlenbein H (1991) Evolution in time and space - the parallel genetic algorithm. Foundations of Genetic Algorithms. http://ci.nii.ac.jp/naid/10016718767/en/
Nossal GJV, Lederberg J (1958) Antibody production by single cells. Nature 181(4620):1419–1420. doi:10.1038/1811419a0
Nulton JD, Salamon P (1988) Statistical mechanics of combinatorial optimization. Phys Rev A 37(4):1351–1356. doi:10.1103/PhysRevA.37.1351
Oliver IM, Smith DJ, Holland JRC (1987) A study of permutation crossover operators on the traveling salesman problem. In: Proceedings of the Second International Conference on Genetic Algorithms on Genetic algorithms and their application, L. Erlbaum Associates Inc., Hillsdale, pp 224–230
Parejo JA, Racero J, Guerrero F, Kwok T, Smith K (2003) FOM: a framework for metaheuristic optimization. Computational Science - ICCS 2003, Lecture Notes in Computer Science 2660:886–895
Parsopoulos K, Vrahatis M (2002a) Recent approaches to global optimization problems through particle swarm optimization. Nat Comput 1(2):235–306
Parsopoulos KE, Vrahatis MN (2002b) Particle swarm optimization method in multiobjective problems. In: Proceedings of the 2002 ACM symposium on Applied computing (SAC '02), ACM, New York, pp 603–607. doi:10.1145/508791.508907
Price KV, Storn RM, Lampinen JA (2005) Differential evolution: a practical approach to global optimization. Natural Computing Series, Springer, Berlin. http://www.springer.com/west/home/computer/foundations?SGWID=4-156-22-32104365-0&teaserId=68063&CENTER_ID=69103
Radcliffe NJ (1991) Forma analysis and random respectful recombination. In: Foundations of Genetic Algorithms, pp 222–229
Rahman I, Das AK, Mankar RB, Kulkarni BD (2009) Evaluation of repulsive particle swarm method for phase equilibrium and phase stability problems. Fluid Phase Equilibria. doi:10.1016/j.fluid.2009.04.014
Raidl GR (2006) A unified view on hybrid metaheuristics. In: Hybrid metaheuristics, Springer, pp 1–12
Rechenberg I (1965) Cybernetic solution path of an experimental problem. Royal Aircraft Establishment Library Translation 1122, Farnborough
Renders JM, Bersini H (1994) Hybridizing genetic algorithms with hill-climbing methods for global optimization: two possible ways. In: Proceedings of the First IEEE Conference on Computational Intelligence, vol 1, pp 312–317. doi:10.1109/ICEC.1994.349948
Roli A, Blum C (2008) Hybrid metaheuristics: an introduction. In: Hybrid metaheuristics, Springer
Rothlauf F (2006) Representations for genetic and evolutionary algorithms, 2nd edn. Springer
Sasaki D (2005) ARMOGA: an efficient multi-objective genetic algorithm. Tech. rep.
Schaffer JD, Morishima A (1987) An adaptive crossover distribution mechanism for genetic algorithms. In: Proceedings of the Second International Conference on Genetic Algorithms on Genetic algorithms and their application, L. Erlbaum Associates Inc., Hillsdale, pp 36–40
Schwefel HP (1981) Numerical optimization of computer models. Wiley, New York
Sloane NJA, Hardin RH (1991–2003) Gosset: a general-purpose program for designing experiments. http://www.research.att.com/njas/gosset/index.html
de Souza MC, Martins P (2008) Skewed VNS enclosing second order algorithm for the degree constrained minimum spanning tree problem. Eur J Oper Res 191(3):677–690. doi:10.1016/j.ejor.2006.12.061, http://www.sciencedirect.com/science/article/B6VCT-4N2KTC4-7/2/7799160d76fbba32ad42f719ee72bbf9
Stützle T, Hoos H (1997) MAX–MIN ant system and local search for the traveling salesman problem. In: IEEE International Conference on Evolutionary Computation, 1997, pp 309–314. doi:10.1109/ICEC.1997.592327
Suganthan PN (1999) Particle swarm optimiser with neighbourhood operator. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pp 1958–1962
Suresh R, Mohanasundaram K (2004) Pareto archived simulated annealing for permutation flow shop scheduling with multiple objectives. In: IEEE Conference on Cybernetics and Intelligent Systems, 2004, vol 2, pp 712–717. doi:10.1109/ICCIS.2004.1460675
Syswerda G (1991) A study of reproduction in generational and steady-state genetic algorithms. In: Foundations of genetic algorithms, Morgan Kaufmann
Talbi EG (2002) A taxonomy of hybrid metaheuristics. J Heuristics 8(5):541–564
Ulungu E, Teghem J, Fortemps P, Tuyttens D (1999) MOSA method: a tool for solving multiobjective combinatorial optimization problems. J Multi Criter Decis Anal 8(4):221–236
Van Veldhuizen DA, Lamont GB (2000) Multiobjective optimization with messy genetic algorithms. In: Proceedings of the 2000 ACM symposium on Applied computing (SAC '00), ACM, New York, pp 470–476. doi:10.1145/335603.335914
Ventura S, Romero C, Zafra A, Delgado J, Hervás C (2008) JCLEC: a Java framework for evolutionary computation. Soft Computing—a fusion of foundations, methodologies and applications 12(4):381–392. doi:10.1007/s00500-007-0172-0
Vesterstrøm JS, Riget J (2002) Particle swarms: extensions for improved local, multi-modal, and dynamic search in numerical optimization. Ph.D. thesis, Dept. of Computer Science, University of Aarhus
Voß S (2001) Meta-heuristics: the state of the art. pp 1–23
Voß S (2002) Optimization software class libraries. Kluwer Academic Publishers
Wagner S (2009) Heuristic optimization software systems - modeling of heuristic optimization algorithms in the HeuristicLab software environment. Ph.D. thesis, Johannes Kepler University, Linz
Whitley D (1989) The GENITOR algorithm and selection pressure: why rank-based allocation of reproductive trials is best. In: Proceedings of the Third International Conference on Genetic Algorithms, pp 116–121
Whitley D, Rana S, Heckendorn RB (1999) The island model genetic algorithm: on separability, population size and convergence. CIT J Comput Inf Technol 7(1):33–47
Wilke DN, Kok S, Groenwold AA (2007) Comparison of linear and classical velocity update rules in particle swarm optimization: notes on diversity. Int J Numer Methods Eng 70(8):962–984
Wilson GC, McIntyre A, Heywood MI (2004) Resource review: three open source systems for evolving programs–lilgp, ECJ and grammatical evolution. Genet Program Evolv Mach 5(1):103–105. doi:10.1023/B:GENP.0000017053.10351.dc
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82. doi:10.1109/4235.585893
Wright AH (1994) Genetic algorithms for real parameter optimization. In: Foundations of genetic algorithms, Morgan Kaufmann, pp 205–218
Yao X, Liu Y (1996) Fast evolutionary programming. In: Proc. 5th Ann. Conf. on Evolutionary Programming
Zhou H, Grefenstette JJ (1986) Induction of finite automata by genetic algorithms. In: Proceedings of the 1986 IEEE International Conference on Systems, Man, and Cybernetics, pp 170–174
Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 3(4):257–271
Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. Tech. rep., Computer Engineering and Networks Laboratory (TIK), Department of Electrical Engineering, Swiss Federal Institute of Technology (ETH)