A Genetic Algorithm For Function Optimization: A Matlab Implementation
Christopher R. Houck
North Carolina State University
and
Jeffery A. Joines
North Carolina State University
and
Michael G. Kay
North Carolina State University
A genetic algorithm implemented in Matlab is presented. Matlab is used for the following reasons:
it provides many built-in auxiliary functions useful for function optimization; it is completely
portable; and it is efficient for numerical computations. The genetic algorithm toolbox developed
is tested on a series of non-linear, multi-modal, non-convex test problems and compared with
results using simulated annealing. The genetic algorithm using a float representation is found to
be superior to both a binary genetic algorithm and simulated annealing in terms of efficiency and
quality of solution. The use of the genetic algorithm toolbox, as well as the code itself, is
introduced in the paper.
Categories and Subject Descriptors: G.1 [Numerical Analysis]: Optimization - Unconstrained
Optimization, nonlinear programming, gradient methods
General Terms: Optimization, Algorithms
Additional Key Words and Phrases: genetic algorithms, multimodal nonconvex functions, Matlab
1. INTRODUCTION
Algorithms for function optimization are generally limited to convex regular functions. However,
many functions are multi-modal, discontinuous, and nondifferentiable. Stochastic sampling methods
have been used to optimize these functions.

Authors' addresses: Christopher R. Houck, North Carolina State University, Box 7906, Raleigh,
NC 27695-7906, USA, (919) 515-5188, (919) 515-1543, [email protected]; Jeffery A. Joines,
North Carolina State University, Box 7906, Raleigh, NC 27695-7906, USA, (919) 515-5188,
(919) 515-1543, [email protected]; Michael G. Kay, North Carolina State University, Box 7906,
Raleigh, NC 27695-7906, USA, (919) 515-2008, (919) 515-1543, [email protected].
Sponsor: This research was funded in part by the National Science Foundation under grant number
DMI-9322834.
Whereas traditional search techniques use characteristics of the problem to determine the next
sampling point (e.g., gradients, Hessians, linearity, and continuity), stochastic search techniques
make no such assumptions. Instead, the next sampled points are determined by stochastic
sampling/decision rules rather than by a set of deterministic decision rules.
Genetic algorithms have been used to solve difficult problems with objective
functions that do not possess "nice" properties such as continuity, differentiability,
satisfaction of the Lipschitz condition, etc. [Davis 1991; Goldberg 1989; Holland
1975; Michalewicz 1994]. These algorithms maintain and manipulate a family, or
population, of solutions and implement a "survival of the fittest" strategy in their
search for better solutions. This provides an implicit as well as explicit parallelism
that allows for the exploitation of several promising areas of the solution space at
the same time. The implicit parallelism is due to the schema theory developed by
Holland, while the explicit parallelism arises from the manipulation of a population
of points; the evaluation of the fitness of these points is easy to accomplish in
parallel.
Section 2 presents the basic genetic algorithm, and in Section 3 the GA is tested
on several multi-modal functions and shown to be an efficient optimization tool.
Finally, Section 4 briefly describes the code and presents the list of parameters of
the Matlab implementation.
2. GENETIC ALGORITHMS
Genetic algorithms search the solution space of a function through the use of simulated evolution, i.e., the survival of the ttest strategy. In general, the ttest
individuals of any population tend to reproduce and survive to the next generation, thus improving successive generations. However, inferior individuals can, by
chance, survive and also reproduce. Genetic algorithms have been shown to solve
linear and nonlinear problems by exploring all regions of the state space and exponentially exploiting promising areas through mutation, crossover, and selection
operations applied to individuals in the population [Michalewicz 1994]. A more
complete discussion of genetic algorithms, including extensions and related topics,
can be found in the books by Davis [Davis 1991], Goldberg [Goldberg 1989], Holland[Holland 1975], and Michalewicz [Michalewicz 1994]. A genetic algorithm (GA)
is summarized in Fig. 1, and each of the major components is discussed in detail
below.
Fig. 1. A simple genetic algorithm:
(1) initialize a population P
(2) evaluate P
(3) while the termination criterion is not met:
(4)     select parents from P (selection function)
(5)     create new solutions from the parents (reproduction functions)
(6)     evaluate the new solutions
(7)     select survivors from the old and new solutions to form the next P
(8) return the best solution found
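The cycle of Fig. 1 is language-independent. As an illustration only (this is not part of the GAOT toolbox), a minimal Python sketch of the same generational loop might look as follows; the choices of binary tournament for parent selection and truncation for survival are assumptions of the sketch, not prescriptions of the paper:

```python
import random

def simple_ga(evaluate, init, mutate, crossover, pop_size=30, generations=100):
    """Minimal generational GA in the shape of Fig. 1: evaluate the
    population, select parents, recombine and mutate to create new
    solutions, evaluate them, and keep the fittest (maximizing)."""
    population = [init() for _ in range(pop_size)]
    scored = [(evaluate(ind), ind) for ind in population]
    for _ in range(generations):
        def pick():
            # Binary tournament stands in for the selection function.
            a, b = random.sample(scored, 2)
            return a[1] if a[0] >= b[0] else b[1]
        offspring = []
        while len(offspring) < pop_size:
            c1, c2 = crossover(pick(), pick())
            offspring.extend([mutate(c1), mutate(c2)])
        scored = scored + [(evaluate(ind), ind) for ind in offspring]
        # Survivor selection: keep the best pop_size individuals.
        scored = sorted(scored, key=lambda s: s[0], reverse=True)[:pop_size]
    return max(scored)  # (best value, best individual)
```

For example, maximizing f(x) = -(x - 3)^2 over one real variable drives the population toward x = 3.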
Roulette wheel selection limits the genetic algorithm to maximization, since the evaluation
function must map the solutions to a fully ordered set of values on R+. Extensions, such as
windowing and scaling, have been proposed to allow for minimization and negativity.

Ranking methods only require the evaluation function to map the solutions to
a partially ordered set, thus allowing for minimization and negativity. Ranking
methods assign P_i based on the rank of solution i when all solutions are sorted.
Normalized geometric ranking [Joines and Houck 1994] defines P_i for each
individual by:

    P_i = q'(1 - q)^(r-1)                                     (2)

where:
    q  = the probability of selecting the best individual,
    r  = the rank of the individual, where 1 is the best,
    P  = the population size,
    q' = q / (1 - (1 - q)^P).
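As an illustration of equation (2), the ranking probabilities can be computed directly. The sketch below is Python rather than Matlab and is not part of the toolbox; the default q = 0.08 matches the Normalized Geometric Selection parameter used later in Tables I and II:

```python
def normalized_geometric_probs(pop_size, q=0.08):
    """P_i = q'(1 - q)^(r-1), with r the rank (1 = best) and
    q' = q / (1 - (1 - q)^P), so the probabilities sum to one."""
    q_prime = q / (1.0 - (1.0 - q) ** pop_size)
    return [q_prime * (1.0 - q) ** (r - 1) for r in range(1, pop_size + 1)]
```

The normalization by q' is what distinguishes this scheme from plain geometric ranking: it makes the P_i a proper probability distribution for any population size.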
Genetic operators provide the basic search mechanism of the GA. The operators are
used to create new solutions based on existing solutions in the population. There
are two basic types of operators: crossover and mutation. Crossover takes two
individuals and produces two new individuals, while mutation alters one individual
to produce a single new solution. The application of these two basic types of
operators and their derivatives depends on the chromosome representation used.
Let X and Y be two m-dimensional row vectors denoting individuals (parents)
from the population. For X and Y binary, the following operators are defined:
binary mutation and simple crossover.
Binary mutation flips each bit in every individual in the population with
probability p_m according to equation (3):

    x'_i = 1 - x_i,  if U(0,1) < p_m
    x'_i = x_i,      otherwise                                (3)
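Equation (3) translates directly into code. The following is an illustrative Python sketch, not the toolbox's Matlab implementation:

```python
import random

def binary_mutation(individual, pm=0.05):
    """Flip each bit independently with probability pm (equation 3).
    pm = 0.05 matches the binary-GA setting of Table II."""
    return [1 - bit if random.random() < pm else bit for bit in individual]
```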
Simple crossover generates a random number r from a uniform distribution from
1 to m and creates two new individuals (X' and Y') according to equations (4) and (5):

    x'_i = x_i,  if i < r
    x'_i = y_i,  otherwise                                    (4)

    y'_i = y_i,  if i < r
    y'_i = x_i,  otherwise                                    (5)
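Equations (4) and (5) amount to swapping the tails of the two parents past a random cut point. A Python sketch (illustrative only; the toolbox's version is the Matlab file simpleXover.m listed in Table IV) follows, drawing r from 1 to m-1 so that both children mix genes from both parents:

```python
import random

def simple_crossover(x, y):
    """One-point crossover (equations 4 and 5): positions before the
    cut point r come from the individual's own parent, positions from
    r onward come from the other parent."""
    m = len(x)
    r = random.randint(1, m - 1)  # cut point, assumed in 1..m-1 here
    return x[:r] + y[r:], y[:r] + x[r:]
```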
Operators for real-valued representations, i.e., an alphabet of floats, were
developed by Michalewicz [Michalewicz 1994]. For real X and Y, heuristic crossover,
for example, performs a linear extrapolation of the two parents: with r = U(0,1)
and X the parent with the better fitness, offspring are created according to
equations (12) and (13), where a child is accepted only if it is feasible according
to equation (14):

    X' = X + r(X - Y)                                         (12)
    Y' = X                                                    (13)

    feasibility = 1,  if a_i <= x'_i <= b_i for all i
    feasibility = 0,  otherwise                               (14)
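A Python sketch of heuristic crossover with the feasibility check of equation (14) is given below. It is illustrative only (the toolbox's version is heuristicXover.m in Table IV); the retry-on-infeasibility behavior, with the retry count corresponding to Table IV's "number of retries (t)" option, is an assumption of the sketch:

```python
import random

def heuristic_crossover(x, y, bounds, retries=3):
    """Equations (12)-(14): X' = X + r(X - Y) with r = U(0,1), where x is
    the better parent; Y' = X. If X' falls outside the box [a_i, b_i], a
    new r is drawn, up to `retries` times, after which x is returned."""
    for _ in range(retries):
        r = random.random()
        child = [xi + r * (xi - yi) for xi, yi in zip(x, y)]
        if all(a <= ci <= b for ci, (a, b) in zip(child, bounds)):
            return child, list(x)
    return list(x), list(x)  # give up: fall back to the better parent
```

This is the only operator described here that uses fitness information, since it extrapolates away from the worse parent.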
3. TEST RESULTS

The Matlab implementation of the algorithm has been tested with respect to
efficiency and reliability by optimizing a family of multi-modal non-linear test
problems. The family of test problems is taken from Corana [Corana et al. 1987],
which compares the use of the simulated annealing algorithm to the simplex method
of Nelder-Mead and adaptive random search. In [Houck et al. 1995a] we report in
detail the effectiveness of the genetic algorithm for solving the continuous
location-allocation problem, and in [Houck et al. 1995b] on the use of the genetic
algorithm in conjunction with local-improvement heuristics for non-linear function
optimization, location-allocation, and the quadratic assignment problem.
The Corana family [Corana et al. 1987] of parameterized functions, f_n, are very
simple to compute and contain a large number of local minima. The function is
basically an n-dimensional parabola with rectangular pockets removed and with the
global minimum at the origin (0, 0, ..., 0). This family is defined as follows:

    f_n(x) = sum_{i=1..n} d_i x_i^2,   x in D_r - D_m,  d in R^n_+,
    f_n(x) = sum_{i=1..n} d_i z_i^2,   x in d_{k1,...,kn}, (k1,...,kn) != 0,

where D_m is the union over (k1,...,kn) in Z^n of the rectangular pockets
d_{k1,...,kn}, and

    z_i = k_i s_i + t_i,  if k_i < 0,
    z_i = 0,              if k_i = 0,
    z_i = k_i s_i - t_i,  if k_i > 0.
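To make the pocket structure concrete, a Python sketch of the family is given below. The specific parameter values (s_i = 0.2, t_i = 0.05, and the cycling weights d_i in (1, 1000, 10, 100)) and the factor C = 0.15 applied inside the pockets are taken from common Corana benchmark settings and are assumptions here, not values stated in this text:

```python
def corana(x, S=0.2, T=0.05, C=0.15, d_cycle=(1.0, 1000.0, 10.0, 100.0)):
    """n-dimensional paraboloid with rectangular 'pockets' cut out.
    Outside the pockets f = sum d_i x_i^2; inside a pocket away from
    the origin, f = C * sum d_i z_i^2 with z_i at the pocket's edge."""
    k = [round(xi / S) for xi in x]  # index of the nearest pocket
    in_pocket = all(abs(xi - ki * S) < T for xi, ki in zip(x, k))
    total = 0.0
    if in_pocket and any(ki != 0 for ki in k):
        for i, ki in enumerate(k):
            if ki < 0:
                zi = ki * S + T
            elif ki == 0:
                zi = 0.0
            else:
                zi = ki * S - T
            total += C * d_cycle[i % 4] * zi * zi
    else:
        for i, xi in enumerate(x):
            total += d_cycle[i % 4] * xi * xi
    return total
```

Each pocket floor sits below the paraboloid above it, so the function has a dense grid of local minima surrounding the global minimum at the origin.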
For the optimization of the test function, two different representations were used.
A real-valued alphabet was employed in conjunction with the selection, mutation,
and crossover operators with their respective options as shown in Table I. Also,
a binary representation was used in conjunction with the selection, mutation, and
crossover operators with their respective options as shown in Table II. A description
of the options for each of the functions is provided in the following section, Section 4.
Table I. GAOT Parameters used for Real-Valued Corana Function Optimization

    Name                            Parameters
    Uniform Mutation                4
    Non-Uniform Mutation            [4 max 3]
    Multi-Non-Uniform Mutation      [6 max 3]
    Boundary Mutation               4
    Simple Crossover                4
    Arithmetic Crossover            4
    Heuristic Crossover             [2 3]
    Normalized Geometric Selection  0.08
Table II. GAOT Parameters used for Binary Corana Function Optimization

    Name                            Parameters
    Binary Mutation                 0.05
    Simple Crossover                0.6
    Normalized Geometric Selection  0.08
Two different evaluation functions were used for both the float and binary genetic
algorithms. The first simply returns the value of the Corana function at the point
determined by the genetic string. The second utilizes a Sequential Quadratic
Programming (SQP) method (available in Matlab) to optimize the Corana function
starting from the point determined by the genetic string. This provides the genetic
algorithm with a local improvement operator which, as shown in [Houck et al.
1995b], can greatly enhance the performance of the genetic algorithm. Many
researchers have shown that GAs perform well for a global search but perform very
poorly in a localized search [Davis 1991; Michalewicz 1994; Houck et al. 1995a;
Bersini and Renders 1994]. GAs are capable of quickly finding promising regions
of the search space but may take a relatively long time to reach the optimal
solution.
Both the float genetic algorithm (FGA) and binary genetic algorithm (BGA) were
run 10 times with different random seeds. The simulated annealing (SA) results
are taken from the 10 replications of these test problems reported in [Corana et al.
1987]. The resulting solution value found and the number of function evaluations
to obtain that solution are shown in Table III. Since Corana et al. did not use
an improvement procedure, both the FGA and BGA were run without the use of
SQP. As shown in the table, the FGA outperformed both BGA and SA in terms
of computational efficiency and solution quality. With respect to the epsilon of
1e-6 as used in [Corana et al. 1987], FGA found the optimal in all three cases
in all replications, while SA was unable to find the optimal two times for the
4-dimensional case and not at all for the 10-dimensional case. The table also shows
that the use of the local improvement operator significantly increases the power of
the genetic algorithm in terms of solution quality and speed of convergence to the
optimal.
Table III. Solution Quality and Procedure Efficiency

    Dim  Method   Avg Sol.  Std. of Sol.  Min. Sol.  Avg. # of eval.  Std. # of eval.  Min # of eval.
     2   FGA      5.75e-7   2.87e-7       2.09e-7    6.90e+3          1.33e+3          5.87e+3
         FGA-SQP  0.00e+0   0.00e+0       0.00e+0    6.02e+2          1.89e+2          3.95e+2
         BGA      4.51e-7   3.40e-7       3.31e-8    9.60e+3          3.56e+3          4.56e+3
         BGA-SQP  8.45e-15  2.67e-15      5.40e-79   8.48e+2          2.61e+2          6.03e+2
         SA       1.13e-8   1.42e-8       4.21e-10   6.89e+5          1.73e+4          6.56e+5
     4   FGA      6.80e-7   3.35e-7       1.58e-7    1.06e+5          5.56e+4          4.81e+4
         FGA-SQP  0.00e+0   0.00e+0       0.00e+0    3.76e+3          1.27e+3          1.66e+3
         BGA      5.34e-7   2.99e-7       3.57e-9    3.07e+5          7.25e+4          1.93e+5
         BGA-SQP  2.53e-9   7.71e-9       6.80e-26   3.32e+4          1.78e+4          1.32e+4
         SA       6.18e-4   1.40e-3       8.70e-8    1.38e+6          1.11e+5          1.18e+6
    10   FGA      6.15e-7   4.01e-7       1.68e-8    2.31e+5          3.06e+4          1.77e+5
         FGA-SQP  0.00e+0   0.00e+0       0.00e+0    5.38e+4          3.29e+4          3.80e+4
         BGA      1.74e+2   1.85e+2       2.29e+1    1.47e+6          6.96e+4          1.34e+6
         BGA-SQP  5.74e+2   1.09e+3       4.23e+0    8.26e+2          1.63e+2          5.27e+2
         SA       5.40e-4   0.00e+0       5.40e-4    1.62e+6          3.65e+4          1.55e+6
The results of this testing show that the use of genetic algorithms for function
optimization is highly efficient and effective. The use of a local improvement
procedure, in this case SQP, can greatly enhance the performance of the genetic
algorithm.
4. GAOT: A MATLAB IMPLEMENTATION
Matlab is a technical computing environment for high-performance numeric computation. Matlab integrates numerical analysis, matrix computation and graphics
in an easy-to-use environment. User-dened Matlab functions are simple text les
of interpreted instructions. Therefore, Matlab functions are completely portable
from one hardware architecture to another without even a recompilation step.
The algorithm discussed in Section 2 has been implemented as a Matlab toolbox,
i.e., a group of related functions, named GAOT, Genetic Algorithms for Optimization Toolbox. Each module of the algorithm is implemented using a Matlab function. This provides for easy extensibility, as well as modularity. The basic function
is the ga function, which runs the simulated evolution. The basic call to the ga
function is given by the following Matlab command.
[x,endPop,bPop,traceInfo] = ga(bounds,evalFN,evalParams,params,startPop,...
termFN,termParams,selectFN,selectParams,xOverFNs,xOverParams,mutFNs,mutParams)
Output parameters:
- x is the best solution string, i.e., the final solution,
- endPop (optional) is the final population,
- bPop (optional) is a matrix of the best individuals and the corresponding
  generation in which they were found,
- traceInfo (optional) is a matrix of the maximum and mean functional value of the
  population for each generation.

Input parameters:
- bounds is a matrix of upper and lower bounds on the variables,
- evalFN is the evaluation function, usually a .m file,
- evalParams (optional) is a row matrix of any parameters to the evaluation
  function; defaults to [NULL],
- params (optional) is a vector of options, i.e., [epsilon prob_param disp_param],
  where epsilon is the change required to consider two solutions different,
  prob_param is 0 to use the binary version of the algorithm or 1 for the float
  version, and disp_param controls the display of the progress of the algorithm:
  1 displays the current generation and the value of the best solution in the
  population, while 0 prevents any output during the run. This parameter
  defaults to [1e-6 1 0],
- startPop (optional) is a matrix of solutions and their respective functional values.
  The starting population defaults to a randomly created population created with
  initialize,
- termFN (optional) is the name of the termination function, which defaults to
  ['maxGenTerm'],
- termParams (optional) is a row matrix of parameters which defaults to [100],
4.1 Evaluation Function
The evaluation function is the driving force behind the GA. The evaluation function
is called from the GA to determine the fitness of each solution string generated
during the search. An example evaluation function is given below:
function [x, val] = gaDemo1Eval(x,parameters)
val = x(1) + 10*sin(5*x(1))+7*cos(4*x(2));
To run the ga using this test function use either of the following function calls from
Matlab.
bstX = ga([0 10; 0 -10],'gaDemo1Eval')
bstX = ga([0 10; 0 -10],'x(1) + 10*sin(5*x(1))+7*cos(4*x(2))');
Within ga, the second argument passed to the evaluation function is the vector
[current_generation, evalParams].
The evaluation function must return both the value of the string, val, and the string
itself, x. This is done so that an evaluation can repair or improve the string if
desired. This allows for the use of local improvement procedures as discussed in
Section 3.
An evaluation function is unique to the optimization problem at hand; therefore,
every time the ga is used for a different problem, an evaluation function
must be developed to determine the fitness of the individuals.
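The return-the-string convention is what makes evaluate-and-improve schemes possible. As an illustration (in Python, not Matlab, and not part of the toolbox), the sketch below evaluates the gaDemo1 objective and locally improves the string before returning it; a crude coordinate hill-climb stands in for the SQP routine used in Section 3:

```python
import math

def eval_fn(x):
    # The gaDemo1Eval objective from the text, written in Python.
    return x[0] + 10 * math.sin(5 * x[0]) + 7 * math.cos(4 * x[1])

def evaluate_and_improve(x, step=0.01, iters=50):
    """Mirror the [x, val] = evalFN(x, params) convention: evaluate the
    string, locally improve it (here by coordinate hill-climbing, as a
    stand-in for SQP), and return both the improved string and its value."""
    x = list(x)
    best = eval_fn(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                v = eval_fn(trial)
                if v > best:  # maximizing, as the GA does by default
                    x, best, improved = trial, v, True
        if not improved:
            break
    return x, best
```

Because the improved string is returned to the GA, the refinement is inherited by the next generation rather than thrown away.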
The remainder of this section describes the other modules of the genetic toolbox.
While GAOT allows for easy modification of any of these modules, the defaults as
given work well for a wide class of optimization problems, as shown in [Houck et al.
1995b].
4.2 Operator Functions
Operators provide the search mechanism of the GA. The operators are used to
create new solutions based on existing solutions in the population. There are two
basic types of operators, crossover and mutation. Crossover takes two individuals
and produces two new individuals while mutation alters one individual to produce
a single new solution. The ga function calls each of the operators to produce new
solutions. The function call for crossovers is as follows:
[c1,c2] = crossover(p1,p2,bounds,params)
where p1 is the first parent, [solution string, function value], p2 is the second
parent, bounds is the bounds matrix for the solution space, and params is the vector
[current_generation, operatorParams], where operatorParams is the appropriate
row of parameters for this crossover/mutation operator. The first value of
operatorParams is the frequency of application of this operator. For the float ga, this is
the discrete number of times to call this operator every generation, while for the
binary ga it is the probability of application to each member of the population.
The mutation function call is similar, but only takes one parent and returns one
child:
[c1] = mutation(p1,bounds,params)
The crossover operator must take all four arguments: the two parents, the bounds
of the search space, the information on how much of the evolution has taken place,
and any other special options required. Similarly, mutations must all take the
three arguments and return the resulting child. Table IV shows the operators
implemented in Matlab, their corresponding file names, and any options that the
operator takes in addition to the first option, the number of applications per
generation.
4.3 Selection Function
The selection function determines which of the individuals will survive and continue
on to the next generation. The ga function calls the selection function each generation after all the new children have been evaluated to create the new population
from the old one.
The basic function call used in ga for selection is:
Table IV. Matlab Implemented Operator Functions

    Name                        File                Options
    Arithmetic Crossover        arithXover.m        none
    Heuristic Crossover         heuristicXover.m    number of retries (t)
    Simple Crossover            simpleXover.m       none
    Boundary Mutation           boundary.m          none
    Multi-Non-Uniform Mutation  multiNonUnifMut.m   max num of generations, shape parameter (b)
    Non-Uniform Mutation        nonUnifMut.m        max num of generations, shape parameter (b)
    Uniform Mutation            unifMut.m           none
[newPop] = selectFunction(oldPop,options)
where newPop is the new population selected, oldPop is the current population,
and options is a vector for any other optional parameters.
Notice that all selection routines must take both parameters: the old population
from which to select members, and any specific options for that particular
selection routine. The function must return the new population. Table V shows
the selection routines that have been implemented in GAOT. The file names are
provided, as they are the function names to be used in Matlab, and the options for
each function are also provided.
Table V. Matlab Implemented Selection Functions

    Name
    Roulette Wheel
    Normalized Geometric Select
    Tournament

4.4 Termination Functions

The termination function determines when the simulated evolution should stop.
The basic call used in ga for termination is:

[done] = terminateFunction(options,bestPop,pop)

where options is a vector of termination options, the first of which is always the
current generation. bestPop is a matrix of the best individuals and the respective
generation in which each was found. pop is the current population. Table VI shows
the termination routines that have been implemented in GAOT. The file names are
provided, as they are the function names to be used in Matlab, and the options for
each function are also provided.
Table VI. Matlab Implemented Termination Functions

    Name                              File              Options
    Terminate at Specified Generation maxGenTerm.m      final generation
    Terminate at Optimal or max gen   maxGenOptTerm.m   final generation, optimal value, epsilon
Several Matlab demos are provided as a tutorial to the genetic algorithm toolbox.
The first demo, gademo1, gives a brief introduction to GAs using a simple
one-variable function. The second demo, gademo2, uses a more complicated example,
the 4-dimensional Corana function, to further illustrate the use of the toolbox. The
final demo, gademo3, is a reference to the format used for the operator, selection,
evaluation, and termination functions.
5. SUMMARY

REFERENCES
Bersini, H. and Renders, B. 1994. Hybridizing genetic algorithms with hill-climbing methods
for global optimization: Two possible ways. In 1994 IEEE International Symposium on
Evolutionary Computation, Orlando, FL, pp. 312-317.
Corana, A., Marchesi, M., Martini, C., and Ridella, S. 1987. Minimizing multimodal functions
of continuous variables with the "simulated annealing" algorithm. ACM Transactions
on Mathematical Software 13, 3, 262-280.
Davis, L. 1991. The Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York.
Goldberg, D. 1989. Genetic Algorithms in Search, Optimization, and Machine Learning.
Addison-Wesley.
Holland, J. 1975. Adaptation in Natural and Artificial Systems. The University of Michigan
Press, Ann Arbor.
Houck, C., Joines, J., and Kay, M. 1995a. A comparison of genetic algorithms, random
restart, and two-opt switching for solving large location-allocation problems. Computers &
Operations Research, forthcoming in special issue on evolutionary computation.
Houck, C., Joines, J., and Kay, M. 1995b. The effective use of local improvement procedures
in conjunction with genetic algorithms. Technical Report NCSU-IE Technical Report 95,
North Carolina State University.
Joines, J. and Houck, C. 1994. On the use of non-stationary penalty functions to solve
constrained optimization problems with genetic algorithms. In 1994 IEEE International
Symposium on Evolutionary Computation, Orlando, FL, pp. 579-584.