A Hybrid Approach To Parameter Tuning in Genetic Algorithms
Bo Yuan
Marcus Gallagher
Abstract- Choosing the best parameter setting is a well-known, important and challenging task in Evolutionary Algorithms (EAs). As one of the earliest parameter tuning techniques, the Meta-EA approach regards each parameter as a variable and the performance of the algorithm as the fitness value, and conducts a search on this landscape using various genetic operators.
However, there are some inherent issues in this
method. For example, some algorithm parameters are
generally not searchable because it is difficult to define
any sensible distance metric on them. In this paper, a
novel approach is proposed by combining the Meta-EA
approach with a method called Racing, which is based
on the statistical analysis of algorithm performance
with different parameter settings. A series of experiments is conducted to show the reliability and
efficiency of this hybrid approach in tuning Genetic
Algorithms (GAs) on two benchmark problems.
1 Introduction
Evolutionary Algorithms (EAs) refer to a broad class of
optimization algorithms under a unified framework. The
total number of algorithms and their variations is very large and, in practice, a number of choices, such as the crossover rate, the type of crossover and the population size, must be made in advance in order to fully specify a complete EA. The importance of these choices is at least
threefold. Firstly, EAs are not completely parameter-robust, and EAs with inappropriate choices may not be able to solve problems effectively [10]. Secondly, when two or
more EAs are compared, arbitrarily specified parameters
may make the comparison unfair and conclusions
misleading. Finally, finding the optimal setting may also be
helpful for better understanding the mechanisms of EAs.
There has been a large amount of work dedicated to finding the optimal parameters of EAs [6, 7]. However, it has been shown that this kind of optimal setting does not exist in general. In fact, for different problems, there are different optimal specifications [17]. If each algorithm
parameter is regarded as a variable and the performance of
the EA is regarded as the objective value, there is a
performance landscape of EAs on each problem and
finding the best parameter setting simply means finding the
global optimum.
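To make this performance landscape concrete, the following minimal sketch (all names here are illustrative, not from the paper) maps one parameter setting to an objective value by running the GA several times and averaging, since a single run of a stochastic algorithm gives only a noisy estimate:

    import statistics

    def meta_fitness(run_ga, setting, n_runs=10):
        """One point on the performance landscape: the mean performance
        of the GA under `setting`, averaged over independent runs to
        reduce the noise inherent in a stochastic algorithm."""
        return statistics.mean(run_ga(**setting) for _ in range(n_runs))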
2 Genetic Algorithms
A classical Genetic Algorithm [8, 11] is used in this paper, as specified in Figure 1 (note that there are other versions of GAs that differ to a greater or lesser extent). At each step, two parents are selected from the current population, and two offspring are generated through recombination of these two parents with probability Pc (the crossover rate). Otherwise, the two parents are kept unchanged and copied to the mating pool. When the mating pool is full, mutation is applied, changing the value of each variable with probability Pm. For the variables of binary problems, which is the case in this paper, mutation flips a bit from 0 to 1 and vice versa.
Finally, new individuals are evaluated and replace the old
population. If elitism is applied, the best individual found
so far in previous generations will be copied to the new
population, replacing a randomly chosen individual.
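Since Figure 1 is not reproduced here, the following sketch outlines the classical GA described above (function and parameter names are ours; binary tournament selection and one-point crossover are shown as one concrete choice among the operators the paper considers):

    import random

    def classical_ga(fitness, n_bits, pop_size=50, pc=0.8, pm=0.01,
                     generations=100, elitism=True):
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        best = max(pop, key=fitness)
        for _ in range(generations):
            mating_pool = []
            while len(mating_pool) < pop_size:
                # Select two parents (binary tournament as one concrete scheme).
                p1 = max(random.sample(pop, 2), key=fitness)
                p2 = max(random.sample(pop, 2), key=fitness)
                if random.random() < pc:
                    # Recombine with probability Pc (one-point crossover here).
                    cut = random.randint(1, n_bits - 1)
                    c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                else:
                    # Otherwise copy the parents unchanged into the mating pool.
                    c1, c2 = p1[:], p2[:]
                mating_pool += [c1, c2]
            # Mutation: flip each bit with probability Pm.
            for ind in mating_pool:
                for i in range(n_bits):
                    if random.random() < pm:
                        ind[i] = 1 - ind[i]
            pop = mating_pool[:pop_size]
            if elitism:
                # The best-so-far individual replaces a random new individual.
                pop[random.randrange(pop_size)] = best[:]
            best = max([best] + pop, key=fitness)
        return best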
[Table: candidate selection settings: Tournament, size ∈ {2, 4, 6}; Linear Ranking, S ∈ {1.5, 1.8, 2.0}]
f(B) = \begin{cases} 1, & \text{if } |B| = 1 \\ f(B_L) + f(B_R) + |B|, & \text{if } \forall i\,\{b_i = 0\} \text{ or } \forall i\,\{b_i = 1\} \\ f(B_L) + f(B_R), & \text{otherwise} \end{cases} \quad (1)
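Equation (1) matches the well-known recursive definition of the Hierarchical-If-and-Only-If (H-IFF) function, where B_L and B_R denote the left and right halves of block B. A minimal sketch of its evaluation, assuming the input is a bit string whose length is a power of two:

    def h_iff(block):
        # Base case: a single bit always contributes 1.
        if len(block) == 1:
            return 1
        half = len(block) // 2
        left, right = block[:half], block[half:]
        value = h_iff(left) + h_iff(right)
        # A block consisting entirely of 0s or entirely of 1s earns a
        # bonus equal to its size |B|.
        if all(b == block[0] for b in block):
            value += len(block)
        return value

    # Example: h_iff([1, 1, 1, 1]) == 12, while h_iff([0, 1, 1, 0]) == 4.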
3 The Hybrid Approach

[Figure: The hybrid framework. The Meta-EA maintains a population of high-level individuals (parameter settings); for each of the N individuals, Racing identifies the best GA among the candidate configurations (Best GA(1), …, Best GA(N)), and the performance of that GA is returned as the fitness value of the corresponding (1st, …, Nth) individual.]
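In code, the division of labour shown in the figure can be sketched as follows (a sketch under our own naming, not the paper's implementation): the Meta-EA searches over the numeric, searchable parameters, and the fitness of each high-level individual is obtained by letting Racing pick the best GA among candidates that differ only in the non-searchable parameters. The `race` function is assumed to return the winning candidate and its performance:

    def hybrid_fitness(searchable, candidate_configs, race):
        # Build one GA per non-searchable configuration; all of them
        # share the searchable parameters proposed by the Meta-EA.
        gas = [dict(searchable, **config) for config in candidate_configs]
        best_ga, best_performance = race(gas)
        # The performance of the best surviving GA becomes the fitness
        # value of this high-level individual.
        return best_performance

    def meta_ea(initial, mutate, candidate_configs, race, generations=30):
        pop = list(initial)
        for _ in range(generations):
            scored = sorted(pop, reverse=True,
                            key=lambda s: hybrid_fitness(s, candidate_configs, race))
            parents = scored[:len(pop) // 2]
            # Truncation selection plus mutation: a simple ES-like step,
            # echoing the "simple ES" the paper uses as the Meta-EA.
            pop = parents + [mutate(p) for p in parents]
        return max(pop, key=lambda s: hybrid_fitness(s, candidate_configs, race))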
4 Experiments
4.1 The Effect of GA Parameters
In this section, preliminary experiments are conducted to demonstrate the effect of parameter settings on the performance of GAs. More specifically, we focus on three points:
1. the choice of the selection scheme;
2. the choice of the crossover operator;
3. the value of the mutation rate Pm.
[Table: GA performance (mean ± standard deviation) under different operator settings, with Population = 50 and Pc = 0.8. Settings compared: Truncation Ratio = 0.1; Tournament Size = 2; S = 0.3; S = 0.5; Uniform crossover; Two-Point crossover.
Pm = 0.01: 118.1 ± 14.7, 127.0 ± 13.4, 136.7 ± 17.9
Pm = 0.20: 104.4 ± 9.1, 96.2 ± 9.1, 92.6 ± 8.5]
[Table: tuning results with and without Racing (recovered column headings: Racing, Cost Ratio; the columns appear to be Population, Pc, Pm, performance without Racing, performance with Racing, Cost Ratio):
…, …, 0.01: 87.7, 87.6, cost ratio 0.0833
…, 0.8, 0.20: 73.7, 73.4, cost ratio 0.1142
20, 0.8, 0.01: 92.0, 92.0, cost ratio 0.1179
20, 0.5, 0.01: 90.0, 89.3, cost ratio 0.1042]
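The cost ratios in the table above reflect Racing's key saving: poor candidates are eliminated after a few runs instead of receiving the full evaluation budget. A minimal sketch of such a scheme, using Welch's t-test from scipy as the elimination test (our choice for illustration; the paper's exact statistical test is not reproduced here):

    from scipy import stats

    def race(candidates, run, max_runs=50, min_runs=5, alpha=0.05):
        """Sequentially evaluate the surviving candidates, dropping any
        whose mean performance is significantly worse than the current
        leader's. Returns the winner and its mean performance."""
        results = [[] for _ in candidates]
        alive = list(range(len(candidates)))

        def mean(i):
            return sum(results[i]) / len(results[i])

        for r in range(max_runs):
            for i in alive:
                results[i].append(run(candidates[i]))  # one more run per survivor
            if r + 1 >= min_runs and len(alive) > 1:
                leader = max(alive, key=mean)
                keep = [leader]
                for i in alive:
                    if i == leader:
                        continue
                    # Drop candidate i only if significantly worse than the leader.
                    t, p = stats.ttest_ind(results[leader], results[i],
                                           equal_var=False)
                    if not (p < alpha and t > 0):
                        keep.append(i)
                alive = keep
        winner = max(alive, key=mean)
        return candidates[winner], mean(winner)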
[Table: searchable parameters, their ranges, and the standard deviations used for mutation in the Meta-EA:
Population: [20, 100]
Pc: [0, 1], standard deviation 0.05
Pm: [0, 0.2], standard deviation 0.02]
5 Conclusion
The major motivation of this paper is to investigate the
issue of parameter tuning in GAs as well as other EAs. The
traditional Meta-EA approach and a relatively new
statistical Racing approach were analysed and their
advantages and disadvantages were also discussed. A
novel method was proposed by combining these two
approaches in order to exploit their unique strengths while
avoiding some inherent weaknesses. The core idea is to use
the Meta-EA approach to optimize those tunable algorithm
parameters while Racing is used to identify the best
algorithm from a set of candidates that differ from each other only in terms of the non-searchable parameters. By
doing so, this new method could enjoy both the global
optimization ability of the Meta-EA and Racing's ability to handle non-searchable parameters.
A simple Evolution Strategy (ES) in combination with Racing was used to tune a GA with six parameters on the One-Max problem.
Note that the focus here is not to argue which EA should
be used as the Meta-EA. Instead, we intend to highlight the
advantage of the hybrid approach as well as the efficiency
of Racing. The estimated running time of the parameter
tuning task is more than 70 hours on a Pentium III 800 MHz PC without Racing (i.e., Meta-EA + Brute Force), while it actually took only about 9 hours with the help of Racing (i.e., Meta-EA + Racing).
Certainly, we are also aware that the parameter tuning
of EAs is still a very time-consuming process, especially for
complex test problems, which prevents it from being
widely applied in practice. One possible direction is to
look for more effective statistical methods that could
achieve better efficiency and reliability. After all, Racing is
a general framework, which allows different techniques to
be plugged into it. On the other hand, if the search space is
known to be unimodal or a good starting point can be provided, classical local optimization methods may converge faster than Meta-EAs.
Acknowledgement
This work was supported by an Australian Postgraduate
Award granted to Bo Yuan.
References
[1] Bäck, T. Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York, 1996.
[2] Bartz-Beielstein,
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]