
Chapter 1

A Survey of Fitness Approximation Methods Applied in Evolutionary Algorithms

L. Shi and K. Rasheed

L. Shi
Applied Research, McAfee
e-mail: [email protected]

K. Rasheed
Computer Science Department, University of Georgia
e-mail: [email protected]

Abstract. Evolutionary algorithms (EAs) used in complex optimization domains usually need to perform a large number of fitness function evaluations in order to
get near-optimal solutions. In real world application domains such as engineering
design problems, such evaluations can be extremely computationally expensive. In
some extreme cases there is no clear definition of the fitness function or the fitness
function is too ambiguous to be deterministically evaluated. It is therefore common
to estimate or approximate the fitness. A popular method is to construct a so-called
surrogate or meta-model, which can simulate the behavior of the original fitness
function, but can be evaluated much faster. An interesting trend is to use multiple
surrogates to gain better performance in fitness approximation. In this chapter, an
up-to-date survey of fitness approximation applied in evolutionary algorithms is pre-
sented. The main focus areas are the methods of fitness approximation, the working
styles of fitness approximation, and the management of the approximation during the
optimization process. To conclude, some open questions in this area are discussed.

1.1 Introduction
In recent years, EAs have been applied to many real-world application domains and have gained much research interest. They have proved to be powerful tools for optimization problems and are therefore used in a wide range of real-world applications, especially in engineering design domains. In such domains, the so-called fitness functions are sometimes discontinuous, non-differentiable, with many local optima, noisy
and ambiguous. It was found that EAs perform better than the conventional optimiz-
ers such as sequential quadratic programming and Simulated Annealing [2, 3, 4, 73].
Many challenges still arise in the application of EAs to real-world domains.
For engineering design problems, a large number of objective evaluations may be
required in order to obtain near-optimal solutions. Moreover, the search space can be
complex, with many constraints and a small feasible region. However, determining
the fitness of each point may involve the use of a simulator or analysis code that takes
an extremely long time to execute. Therefore it would be difficult to be cavalier about
the number of objective evaluations used for an optimization [5, 6]. For tasks like
art design and music composition, no explicit fitness function exists; experienced
human users are needed to do the evaluation. A human’s ability to deal with a large
number of evaluations is limited as humans easily get tired. Another challenge is that
the environment of an EA can be noisy, which means that the exact fitness cannot be
determined, and an approximate fitness must be assigned to each individual. Addressing the noise by averaging the fitness over repeated evaluations requires even more evaluations. For such
problems surrogate-assisted evolution methods based on fitness approximation are
preferable, as they can simulate the exact fitness at a much lower computational cost.
A good fitness approximation method can still lead the EA process to find optimal
or near-optimal solutions and is also tolerant to noise [7, 71].
In this chapter we further extend the discussion about fitness approximation by
introducing more concepts in this area and by presenting new developments in re-
cent years. Three main aspects of fitness approximation are our main focus areas.
Those are the different types of fitness approximation methods, the working styles
and the management schemes of the fitness approximation.
For the methods of fitness approximation, instance-based learning methods, ma-
chine learning methods and statistical learning methods are the most popular ones.
Instance-based and machine learning methods include fitness inheritance, radial ba-
sis function models, the K-nearest-neighbor method, clustering techniques, and neu-
ral network methods. Statistical learning methods, also known as functional models, such as polynomial models, Kriging models, and support vector machines, are all widely used for fitness approximation in EAs. Comparative studies
among these methods are presented in this chapter.
For the working styles of the fitness approximation, we discuss both direct and
indirect fitness replacement strategies. The direct fitness replacement method is to
use the approximate fitness to directly replace the original exact fitness during the
course of the EA process. Thus individuals mostly have the approximate fitness
during the optimization. The indirect fitness replacement method is to use the ap-
proximate fitness only for some but not all processes in the EA, such as population
initialization and EA operators. Individuals have the exact fitness during most if not
all of the optimization process.
With fitness approximation in EAs, the quality of an approximate model is
always a concern due to the lack of training data and the often high dimensionality
of the problem. Obtaining a perfect approximate model is not possible in such
cases. Usually the original fitness function is used with the approximate method
to solve this problem. The original fitness function can either correct some/all
individuals' fitness in some generations or improve the approximate model by giving the exact fitness. This is called the management of the fitness approximation
or evolution control. In this chapter, different management methods of approximate
fitness are presented, including online fitness update, offline model training, online
model update, hierarchical models, and model migration. At the end of this chapter, two real-world expensive optimization problems solved by surrogate-assisted EAs are given.

1.2 Fitness Approximation Methods


Optimization problems usually involve non-linear functions. Some classic approximation methods transform the original function into a simpler one: the original functions are converted to linear ones, and a linear programming technique is then applied, such as the Frank-Wolfe method [8]
or Powell’s quadratic approximation [9]. Other classical methods like the Fourier
approximation and Walsh function approximation use a set of basis functions and
find a weighted sum of such basis functions to use as an approximation. These tech-
niques have been used to first transform the original problem into an easy one, and
then apply the EA to find the optima of the easier version of the original fitness
function [10, 11].
Another class of methods determines a function’s approximation using a chosen
set of evaluated points extracted from the whole design space or from the evalua-
tion history. This class includes instance-based learning methods (also known as
lazy learning methods), machine learning methods and statistical learning methods.
The relationships between the different types of fitness approximation methods are
shown in Fig. 1.1. Many instance-based learning and other machine learning meth-
ods have been used for fitness approximation in EAs. Several of the most popular of
these methods are reviewed next.

1.2.1 Instance-Based Learning Methods


1.2.1.1 Fitness Inheritance (FI)
Fitness inheritance techniques are one of the main subclasses of fitness approxima-
tion techniques. One such technique simply assigns the fitness of a new solution
(child) based on the average fitness of its parents or a weighted average based on
how similar the child is to each parent [12]. To deal with a noisy fitness function,
a resampling method combined with a simple average fitness inheritance method is
used to reduce the computational cost in [15]. Another approach is to divide the pop-
ulation into building blocks according to certain schemata. Under this approach, an
individual obtains its fitness from the average fitness of all the members in its build-
ing block [13]. More sophisticated methods such as conditional probability tables
and decision trees are used in [14] for fitness inheritance.
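To make the parent-averaging idea concrete, the following minimal sketch (our illustration, not the exact scheme of [12]; the Euclidean similarity measure is an assumption) weights each parent's fitness by how close the child is to it:

```python
import numpy as np

def inherited_fitness(child, parent_a, parent_b, fitness_a, fitness_b):
    """Estimate a child's fitness from its parents, weighting each parent's
    fitness by the child's (assumed Euclidean) similarity to that parent."""
    d_a = np.linalg.norm(np.asarray(child, float) - np.asarray(parent_a, float))
    d_b = np.linalg.norm(np.asarray(child, float) - np.asarray(parent_b, float))
    if d_a + d_b == 0.0:                 # child identical to both parents
        return 0.5 * (fitness_a + fitness_b)
    w_a = d_b / (d_a + d_b)              # the closer parent gets the larger weight
    w_b = d_a / (d_a + d_b)
    return w_a * fitness_a + w_b * fitness_b
```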
Fig. 1.1 Fitness approximation methods. FI: Fitness Inheritance; KNN: K-Nearest Neighbors; RBF: Radial Basis Functions; NN: Neural Networks; DT: Decision Tree; PM: Polynomial Model; SVM: Support Vector Machines

1.2.1.2 Radial Basis Function Model (RBF)

The Radial Basis Function (RBF) model is another instance-based learning method.
RBF networks can also be viewed as a type of neural network. Since it is a very popular technique for fitness approximation in EAs [3, 19, 31], it is worth introducing separately from standard multilayer neural networks.
An RBF network consists of an input layer with the same number of input units
as the problem dimension, a single hidden layer of k nonlinear processing units and
an output layer of linear weights w_i (Fig. 1.2). The size of the hidden layer (k) can
be equal to the sample size if the sample size is small. In the case of a larger sample
size, k is usually smaller than the sample size to avoid excessive calculations. This
RBF network is called the generalized RBF network. The output y(x) of the RBF
network is given as a linear combination of a set of radial basis functions expressed
in the following way:
y(x) = w_0 + ∑_{i=1}^{k} w_i φ_i(‖x − c_i‖)    (1.1)

where w_0 and w_i are the unknown coefficients to be learned. The term φ_i(‖x − c_i‖), also called the kernel, represents the i-th radial basis function. It evaluates the distance between the input x and the center c_i. For the generalized RBF network, the centers c_i are also unknown and have to be learned by other methods such as the k-means method.

Fig. 1.2 Structure of RBF network models
Typical choices for the kernel include linear splines, cubic splines, multi-
quadratics, thin-plate splines, and Gaussian kernels. A Gaussian kernel is the most
commonly used in practice, having the form:

φ_i(‖x − c_i‖) = exp(−‖x − c_i‖² / (2σ²))    (1.2)

A comprehensive description of RBF networks can be found in [32].
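As an illustration of Eqs. (1.1) and (1.2), the sketch below fits a Gaussian-kernel RBF surrogate in which every sample point is used as a center (k = N) and the width σ is fixed by hand; both choices are simplifying assumptions rather than part of any particular published method:

```python
import numpy as np

class RBFSurrogate:
    """Gaussian-kernel RBF model of Eqs. (1.1)-(1.2); centers = all samples."""

    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def _kernel(self, X):
        # phi_i(||x - c_i||) for every row of X against every center
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y):
        self.centers = np.asarray(X, dtype=float)
        A = np.hstack([np.ones((len(self.centers), 1)), self._kernel(self.centers)])
        self.weights, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        A = np.hstack([np.ones((len(X), 1)), self._kernel(X)])
        return A @ self.weights            # w_0 + sum_i w_i * phi_i(...)
```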

1.2.2 Machine Learning Methods


1.2.2.1 Clustering Techniques
Clustering algorithms include hierarchical clustering (such as single-linkage,
complete-linkage, average-linkage and Ward’s method), partition clustering (such
as Hard C-Means and K-Means algorithm), and overlapping clustering (such as
Fuzzy C-Means and B-Clump algorithm). Among them, the K-Means algorithm ap-
pears to be the most popular one for application to EAs due to its relative simplicity
and low computational cost. In [16, 17], the entire population is divided into many
clusters, and only the center of each cluster is evaluated. Other individuals’ fitness
values in the clusters are computed using their distance from these centers. Another
approach is to build an approximate model based on sample points composed of
the cluster centers. Every other individual’s fitness is estimated by this approximate
model, which may be a neural network model [18] or an RBF model [19]. An-
other interesting clustering approach applied in EAs is to divide the population into
8 L. Shi and K. Rasheed

several clusters and then build an approximate model for each cluster. The motiva-
tion is that multiple approximate models are believed to utilize more local informa-
tion about the search space and fit the original fitness function better than a single
model [5, 20, 21].
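A rough sketch of the cluster-representative strategy described above follows (assuming scikit-learn is available; evaluating the member nearest to each K-Means center, rather than the center itself, and copying its fitness to the whole cluster are simplifications of the schemes in [16, 17]):

```python
import numpy as np
from sklearn.cluster import KMeans   # assumed available

def cluster_based_fitness(population, exact_fitness, n_clusters=5):
    """Perform one exact evaluation per cluster and reuse it for the members."""
    X = np.asarray(population, dtype=float)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    fitness = np.empty(len(X))
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # representative: the member closest to the cluster center
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        rep = members[np.argmin(dists)]
        fitness[members] = exact_fitness(X[rep])
    return fitness
```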

1.2.2.2 Multilayer Perceptron Neural Networks (MLPNNs)


Multilayer Perceptron Neural Networks (MLPNNs) are usually trained with the back-propagation algorithm. MLPNNs have proven to be powerful tools for fitness approximation. An MLPNN model is generally used to accelerate convergence by
replacing the original fitness function [34, 36]. In engineering design domains and
drug design, MLPNNs have been used to reduce the evaluation times of complex
fitness functions [35, 37, 38]. In [68], MLPNNs are used as surrogates to speed up an expensive blade design process.

Fig. 1.3 Structure of the feed-forward MLPNN model

A simple feed-forward MLPNN with one input layer, one hidden layer and one
output layer can be expressed as:

y(x) = ∑_{j=1}^{K} w_j f(∑_{i=1}^{n} w_{ij} x_i + θ_j) + θ_0    (1.3)

where n is the number of input neurons (which is usually equal to the problem
dimension), K is the number of nodes of the hidden layer, and the function f is
called the activation function. The structure of a feed-forward MLPNN is shown in
Fig. 1.3. The weights w and biases θ are the unknown parameters to be learned. The most commonly used
activation function is the logistic function, which has the form:
f(x) = 1 / (1 + exp(−c x))    (1.4)

where c is a constant. A comprehensive study can be found in [32].
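The forward pass of Eqs. (1.3) and (1.4) can be written compactly as below (training by back-propagation is omitted; the parameter shapes are our own convention):

```python
import numpy as np

def logistic(z, c=1.0):
    """Activation function of Eq. (1.4)."""
    return 1.0 / (1.0 + np.exp(-c * z))

def mlp_output(x, W_hidden, theta_hidden, w_out, theta0):
    """One-hidden-layer MLP of Eq. (1.3).

    W_hidden: (K, n) hidden weights w_ij, theta_hidden: (K,) hidden biases,
    w_out: (K,) output weights w_j, theta0: output bias.
    """
    hidden = logistic(W_hidden @ np.asarray(x, dtype=float) + theta_hidden)
    return float(w_out @ hidden + theta0)
```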



1.2.2.3 Other Machine Learning Techniques


Other machine learning techniques have also been applied for fitness approximation in
EAs. An individual’s fitness can be estimated by its neighbors using the K-nearest-
neighbor algorithm [22]. The screening technique has been used for pre-selection
[2, 5]. Decision Tree (DT) is another machine learning technique which has been
used in [14].

1.2.3 Statistical Learning Methods


Statistical learning methods for fitness approximation, which are essentially statistical learning models, have gained much interest among researchers applying them to EAs, and have
been used in several successful GA packages. In these methods, single or multiple
models are built during the optimization process to approximate the original fit-
ness function. These models are also referred to as approximate models, surrogates
or meta-models. Among these models, Polynomial Models, Kriging Models, and
Support Vector Machines (SVM) are the most commonly used.

1.2.3.1 Polynomial Models


Polynomial models (PM) are sometimes called Response Surfaces. Commonly used
quadratic polynomial models have the form:
F̂(X) = a_0 + ∑_{i=1}^{n} a_i x_i + ∑_{i=1, j=1}^{n,n} a_{ij} x_i x_j    (1.5)

where a_0, a_i and a_{ij} are the coefficients to be fitted, n is the dimension of the problem, and x_i is design variable number i.
Usually the least-squares approximation method is used to fit the unknown co-
efficients a0 , ai and ai j . The main limitation of the least-squares method is that the
number of sample points (N) must exceed (n+1)(n+2)/2 for a second-order polyno-
mial model. Even if this condition is satisfied, the fitting cannot be guaranteed be-
cause the singularity problem may still arise. Another drawback of the least-squares
method is that its computational complexity grows quickly with the problem’s di-
mension which can be unacceptable. The gradient method is introduced to address
the problems of the least-squares method. More implementation details for using
these two methods to fit a polynomial model can be found in [23]. Polynomial
models are widely used as surrogates. One application can be found in [69], which
uses a response surface method to approximate the Pareto front in NSGA-II for an
expensive liquid-rocket engine design problem.
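A minimal least-squares fit of the quadratic response surface in Eq. (1.5) might look as follows (a sketch only; it simply checks the N > (n+1)(n+2)/2 sample-size condition mentioned above and otherwise relies on numpy's least-squares solver):

```python
import numpy as np

def quadratic_features(X):
    """Columns for Eq. (1.5): constant, linear terms, and products x_i * x_j (i <= j)."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    n = X.shape[1]
    cols = [np.ones(len(X))] + [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack(cols)

def fit_quadratic_surrogate(X, y):
    """Least-squares fit of the coefficients a_0, a_i, a_ij."""
    A = quadratic_features(X)
    n = np.atleast_2d(np.asarray(X)).shape[1]
    if len(A) <= (n + 1) * (n + 2) // 2:
        raise ValueError("need more than (n+1)(n+2)/2 sample points")
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return lambda x: (quadratic_features(x) @ coeffs)[0]
```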

1.2.3.2 Kriging Models

The Kriging model consists of two component models which can be mathematically
expressed as:

y(x) = f (x) + Z(x) (1.6)


Where f (x) represents a global model and Z(x) is the realization of a stationary
Gaussian random function that creates a localized deviation from the global model.
Typically f (x) is a polynomial and can be as simple as an underlying constant β in
many cases, and then equation (1.6) becomes:

y(x) = β + Z(x) (1.7)

The estimated model of equation (1.7) is given as:

ŷ(x) = β̂ + r^T(x) R^{-1} (y − f β̂)    (1.8)

where y is the vector of the N observed responses at the sampled data points, ŷ is the estimated value of y at the current input x, f is a column vector of ones, and R is the correlation matrix, which can be obtained by computing the correlation
function between any two sampled data points. The form of the correlation function
is specified by the user. Gaussian exponential correlation functions are commonly
used, which is why the Kriging model is also sometimes called a Gaussian process:

R(x^i, x^j) = exp(−∑_{k=1}^{n} θ_k |x_k^i − x_k^j|²)    (1.9)

The correlation vector between x and the sampled data points is expressed as:
r^T(x) = [R(x, x^1), R(x, x^2), …, R(x, x^N)]^T    (1.10)

Estimation of the parameters is often carried out using the generalized least squares
method or the maximum likelihood method. Detailed implementations can be found
in [24, 25].
In addition to the approximate values, the Kriging method can also provide accu-
racy information about the fitting in the form of confidence intervals for the estimated
values without additional computational cost. In [6, 28], a Kriging model is used to
build the global models because it is believed to be a good solution for fitting com-
plex surfaces. A Kriging model is used to pre-select the most promising solutions in
[29]. In [26, 27, 30], a Kriging model is used to accelerate the optimization or reduce
the expensive computational cost of the original fitness function. In [67], a Kriging
model with a pattern search technique is used to approximate the original expen-
sive function. In [70], a Gaussian process method is used for landscape search in a
multi-objective optimization problem that gives promising performance. One disad-
vantage of the Kriging method is that it is sensitive to the problem's dimension: the computational cost can become unacceptable when the dimension of the problem is high.
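For illustration, a bare-bones constant-mean Kriging predictor implementing Eqs. (1.7)-(1.10) is sketched below; using a single fixed θ instead of maximum-likelihood estimation, and adding a small nugget term for numerical stability, are simplifying assumptions:

```python
import numpy as np

class SimpleKriging:
    """Constant-mean Kriging with the Gaussian correlation of Eq. (1.9)."""

    def __init__(self, theta=1.0, nugget=1e-10):
        self.theta = theta          # one scalar theta for all dimensions (assumption)
        self.nugget = nugget        # tiny ridge to keep R well conditioned

    def _corr(self, A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-self.theta * d2)

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        y = np.asarray(y, dtype=float)
        R = self._corr(self.X, self.X) + self.nugget * np.eye(len(y))
        ones = np.ones(len(y))
        Rinv_y, Rinv_1 = np.linalg.solve(R, y), np.linalg.solve(R, ones)
        self.beta = (ones @ Rinv_y) / (ones @ Rinv_1)          # generalized LS estimate
        self.alpha = np.linalg.solve(R, y - self.beta * ones)  # R^{-1}(y - f*beta)
        return self

    def predict(self, x):
        r = self._corr(np.atleast_2d(np.asarray(x, dtype=float)), self.X)[0]
        return self.beta + r @ self.alpha                      # Eq. (1.8)
```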

1.2.3.3 Support Vector Machines (SVM)


The SVM model is primarily a classifier that performs classification tasks by con-
structing hyper-planes in a multidimensional space to separate cases with different

class labels. Contemporary SVM models support both regression and classification
tasks and can handle multiple continuous and categorical variables. A detailed de-
scription of SVM models can be found in [40, 41]. SVM models compare favorably
to many other approximation models because they are not sensitive to local optima,
their optimization process does not depend on the problem dimensions, and over-
fitting is seldom an issue. Applications of SVM for fitness approximation can be
found in [42]. The regression SVM is used for constructing approximate models.
There are two types of regression SVMs: epsilon-SVM regression and nu-SVM re-
gression. The epsilon-SVM regression model is more commonly used for fitness
approximation, where the linear epsilon-insensitive loss function is defined by:

L_ε(x, y, f) = |y − f(x)|_ε = max(0, |y − f(x)| − ε)    (1.11)

The sum of the linear epsilon-insensitive losses must be minimized:


(1/2) w^T w + C ∑_{i=1}^{N} L_ε(x_i, y_i, f)    (1.12)

This is equivalent to a constrained minimization problem having the form:


(1/2) w^T w + C ∑_{i=1}^{N} ζ_i + C ∑_{i=1}^{N} ζ_i^*    (1.13)

Subject to the following constraints:

w^T φ(x_i) + b − y_i ≤ ε + ζ_i^*    (1.14)
y_i − w^T φ(x_i) − b ≤ ε + ζ_i    (1.15)
ζ_i, ζ_i^* ≥ 0,  i = 1, …, N    (1.16)

where φ(x) is the feature mapping associated with the kernel function. The kernel may be linear, polynomial, Gaussian (RBF) or sigmoid. The RBF kernel is by far the most popular choice used in SVMs, mainly because of its localized and finite response across the entire range of the real x-axis. This optimization problem
can be solved by using quadratic programming techniques.
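In practice an epsilon-SVM regression surrogate is rarely coded from scratch; a typical usage, assuming scikit-learn is available and with C and ε chosen arbitrarily here, looks like this:

```python
import numpy as np
from sklearn.svm import SVR          # epsilon-SVM regression, assumed available

def fit_svr_surrogate(X, y, C=100.0, epsilon=0.01):
    """Fit an epsilon-SVR with the popular RBF kernel and return the model;
    model.predict(X_new) then yields approximate fitness values."""
    return SVR(kernel="rbf", C=C, epsilon=epsilon).fit(
        np.asarray(X, dtype=float), np.asarray(y, dtype=float))
```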

1.2.4 Existing Research in Multi-surrogate Assisted EAs


For some real world applications, special approximation methods have been used.
For example, a one dimensional approximation of the Kubelka Munk model is used
to replace the expensive Monte Carlo method in an EA for analyzing colon tissue
structure [55]. In [58], a classifier with confidence information is evolved to replace
time consuming evaluations during tournament selection.

Fig. 1.4 A multiple surrogate model structure used in [18, 33]

Fig. 1.5 A multiple surrogate model structure used in [5]

In some applications, several approximation methods have been combined to construct a type of fitness approximation model known as a multi-surrogate. In
[18, 33] the MLPNN model was combined with clustering methods for construct-
ing approximate models (shown in Fig. 1.4). Fig. 1.5 shows another strategy using
clustering techniques and polynomial models together [5]. A trained RBF model
was used to generate sample points for the construction of polynomial models for
fitness approximation in [39]. In [28, 51], the Kriging method was used to construct
a global approximate model for pre-selection then RBF models were built using
those pre-selected sample points for further fitness approximation. Fig. 1.6 shows
the structure of this model. Multiple approximate models formed in a hierarchical
structure have been used to assist the fitness evaluations in 1.11. In [59, 62], multiple
local approximate models are built for each individual, and then these local models
are aggregated into an average or weighted average of all approximate models. In
[64, 65], multiple surrogates are built, and then the best surrogate is used [64] or
the weighted sum of all surrogates is used, where the weights associated with each
surrogate are determined based on the accuracy of the surrogate [65]. Fig. 1.7 shows
this model in detail.

Fig. 1.6 A multiple surrogate model structure used in [51]

Fig. 1.7 A multiple surrogate model structure used in [59]
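The weighted-sum-of-surrogates idea of [65] can be sketched as follows; the inverse-error weighting used here is our own simplification, not the exact rule of that paper, and each surrogate is assumed to be a callable returning an approximate fitness:

```python
import numpy as np

def weighted_ensemble(surrogates, errors):
    """Combine several fitted surrogates into a single predictor.

    surrogates: list of callables f(x) -> approximate fitness
    errors:     estimated (e.g. cross-validation) error of each surrogate
    """
    w = 1.0 / (np.asarray(errors, dtype=float) + 1e-12)   # more accurate -> larger weight
    w /= w.sum()
    return lambda x: float(sum(wi * f(x) for wi, f in zip(w, surrogates)))
```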

Multi-surrogates are also used for multi-objective optimization problems. In [63], a multi-objective EA based on the NSGA-II algorithm that uses PM and RBF surrogates together is presented. A local surrogate-assisted evolution strategy using KNN
and RBF models is introduced in [61]. For each new offspring in this strategy, a cu-
bic RBF surrogate is built using the k-nearest previously evaluated points. This local
RBF surrogate is then used to estimate the new offspring's fitness. Fig. 1.8 shows
the model structure used in [61]. In [66], Polynomial Models are used to estimate
the coefficients of the fitness surrogate. Thus the surrogate is made adaptive to char-
acteristics of the specific optimization problems.
A recent trend is to use multiple approximate models adaptively. In [60], both
global and local (for each cluster) surrogate models are used. The global model
adaptively evolves from a simple average of the fitness of all individuals in a pop-
ulation, all the way to a Support Vector Machine (SVM) model. The local models
follow a similar path but do not exceed quadratic polynomials. The model evolu-
tion depends on the time complexity as well as model accuracy for each model. A
formula is used to decide whether to continue using the same type of model or switch to the next at any time. Fig. 1.9 shows the evolution path.

Fig. 1.8 A multiple surrogate structure used in [61]

Fig. 1.9 Global approximate model evolution path in [60]

1.3 Comparative Studies for Different Approximate Models


Many approximation methods have been introduced for special problem domains.
Even though these methods are claimed to save many function evaluations and to
be nearly as good as the original fitness function, they are bound to their special
domains, and thus no comparative studies have been conducted on them. On the
other hand, the performance of many general-purpose approximation methods has
been compared in early papers, especially for popular methods such as statistical
learning methods.

The neural network model and the polynomial model were compared in [46, 72].
The study concluded that the performance of the two types of approximation was
comparable in terms of the number of function evaluations required to build the
approximations and the number of undetermined parameters associated with the ap-
proximations. However, the polynomial model had a much lower construction cost.
In [72], after evaluating both methods in several applications, the authors concluded
that both of them can perform comparably given modest amounts of data. In [43], a quadratic poly-
nomial model was found to be the best method among the polynomial model, RBF
network, and the Quick-prop neural network when the models were built for regions
created by clustering techniques. The authors were in favor of the polynomial model
because they found that it formed approximations more than an order of magnitude
faster than the other methods and did not require any tuning of parameters. The
authors also pointed out that the polynomial approximation was in a mathemati-
cal form which could be algebraically analyzed and manipulated, as opposed to the
black-box results that neural networks give.
The Kriging model and the neural network model were compared using bench-
mark problems in [47]. However, no clear conclusion was drawn about which model
is better. Instead, the author showed that optimization with a meta-model could lead
to degraded performance. Another comparison was presented in [45] between the
polynomial model and the Kriging model. By testing these two models on a real-
world engineering design problem, the author found that the polynomial and Kriging
approximations yielded comparable results with minimal difference in predictive ca-
pability. Comparisons between several approximate models were presented in [44],
which compared the performance of the polynomial model, the multivariate adaptive regression splines model, the RBF model, and the Kriging model using 14 test problems with
different scales and nonlinearities. Their conclusion was that the polynomial model
is the best for low-order nonlinear problems, and the RBF model is the best for deal-
ing with high-order nonlinear problems (details shown in Table 1.1). In [59], four
types of approximate models - Gaussian Process (Kriging), RBF, Polynomial model
and Extreme Learning Machine Neural Network (ELMNN) - were compared on
artificial unconstrained benchmark domains. Polynomial Models (PM) were found
to be the best for final solution quality and RBF was found to be the best when
considering correlation coefficients between the exact fitness and estimated fitness.
Table 1.2 shows the performance ranks of these four models in terms of the quality
of the final solution.
So far different approximate models have been compared based on their perfor-
mance, but the word performance itself has not been clearly defined. This is because
the definition of performance may depend on the problem to be addressed, and mul-
tiple criteria need to be considered. Model accuracy is probably the most important
criterion, since approximate models with a low accuracy may lead the optimization
process to local optima. Model accuracy also should be based on new sample points
instead of the training data set points. The reason for this is that for some models
such as the neural network, overfitting is a common problem. In the case of over-
fitting, the model works very well on training data, yielding good model accuracy,

Table 1.1 Summary of best methods in [44]

              Low-order Nonlinearity   High-order Nonlinearity
Small Scale   Polynomial               RBF
Large Scale   Kriging                  RBF
Overall       Polynomial               RBF

Table 1.2 Final solution quality ranks for the Kriging, PM, RBF and ELMNN approximate models in [59]

Benchmark domain   Kriging   PM   RBF   ELMNN
Ackley             2         1    4     3
Griewank           3         1    2     4
Rosenbrock         1         3    2     4
Step               3         1    2     4

but may perform poorly on new sample points. The optimization process could
easily go in the wrong direction if it is assisted by a model suffering from overfit-
ting. There are other important criteria to be considered, including robustness, effi-
ciency, and time spent on model construction and updating. A fair comparison would
consider the model accuracy as well as all of these criteria.
It is difficult to draw a clear conclusion on which model is the best for the reasons
stated above, though the polynomial model seems to be the best choice for a local
model when dealing with local regions or clusters and enough sample points are
available [43]. In such cases, the fitting problem usually has low-order nonlinearity
and the polynomial model is the best candidate according to [44]. The polynomial
model is also believed to perform the best for problems with noise [44]. As for high-
order nonlinear problems the RBF model is believed to be the best and it is the least
sensitive to the sample size and has the most robustness [44]. So the RBF model is a
good choice for a global model with or without many samples. In [72], NN is found to perform significantly better than PM when the search space is very complex and the parameters are correctly set.
The SVM model is a powerful fitting tool that belongs to the class of kernel
methods. Because of the beneficial features of SVMs stated above, the SVM model
becomes a good choice for constructing a global model, especially for problems
with high dimension and many local optima, provided that a large sample of points
exists.

1.4 The Working Styles of Fitness Approximation


There are two categories of surrogate incorporation mechanisms in EAs, as shown
in Fig. 1.10. In one category the original fitness is directly replaced by the estimated
fitness when the individual is evaluated throughout the optimization. Only a few
individuals have their exact fitness calculated for control purposes. In the other cat-
egory, the original fitness is kept for each individual and the approximate fitness is
not used to directly replace the original fitness. These two methods are reviewed
next.

Fig. 1.10 Working styles of fitness approximation

1.4.1 Direct Fitness Replacement Methods


Direct fitness replacement is straightforward. Individuals are evaluated by surro-
gates and then the estimated fitness is assigned to each individual. During the course
of the EA process, the approximate fitness assumes the role of the original fitness.
This method has been used in numerous research efforts [6, 12, 13, 14, 15, 16, 18,
19, 22, 26, 27, 28, 29, 30, 31, 34, 35, 36, 37, 42, 48, 56]. The obvious drawback
is that the inaccuracy of the approximate fitness may lead the EA to inferior local
optima. Consequently, the direct fitness replacement method needs a continuous cal-
ibration process called Evolution Control (described below). Even with Evolution
Control, convergence to true optima cannot be guaranteed.

1.4.2 Indirect Fitness Approximation Methods

The indirect surrogate method computes the exact fitness for each individual during
an EA process and the approximate fitness is used in other ways. For example, the
approximate fitness can be used for population pre-selection. In this method, instead
of generating a random initial population, an individual for the initial population

can be generated by selecting the best individual from a number of uniformly dis-
tributed random individuals in the design space according to the approximate fitness
[5, 43, 49].
Approximate fitness can also be used for crossover or mutation in a similar man-
ner, through a technique known as Informed Operators [5, 17, 43, 49]. Under this
approach, the approximate models are used to evaluate candidates only during the
crossover and/or mutation process. After the crossover and/or mutation process,
the exact fitness is still computed for the newly created candidate solutions. Using
the approximate fitness indirectly in the form of Informed Operators - rather than di-
rect evaluation - is expected to keep the optimization moving toward the true global
optima and to reduce the risk of convergence to suboptimal solutions because each
individual in the population is still assigned its exact fitness [49]. Experimental re-
sults have shown that a surrogate-assisted, informed-operator-based multi-objective GA can outperform state-of-the-art multi-objective GAs for several benchmark prob-
lems [5]. Informed Operators also make it easy to use surrogates adaptively, as the
number of candidates can be adaptively determined. Some of the informed operators
used in [49] are explained as follows:
• Informed initialization: Approximate fitness is used for population pre-selection.
Instead of generating a random initial population, an individual for the initial
population can be generated by selecting the best individuals from a number of
uniformly distributed random individuals in the design space according to the
approximate fitness.
• Informed mutation: To perform informed mutation, several random mutations
of the base point are generated. The mutation with the best approximate fitness
value is returned as the result.
• Informed crossover: Two parents are selected at random according to the usual
selection strategy. These two parents are not changed in the course of the
informed crossover operation. Several crossovers are conducted by randomly
selecting a crossover method, randomly selecting its internal parameters and
applying it to the two parents to generate a potential child. The surrogate is used
to evaluate every potential child, and the best child is selected as the outcome.
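A minimal sketch of informed mutation and informed crossover is given below (our illustration of the general idea in [49]: Gaussian perturbations, blend crossover, the candidate counts, and minimization of the approximate fitness are all assumptions):

```python
import numpy as np

def informed_mutation(parent, surrogate, n_trials=5, scale=0.1, rng=np.random):
    """Generate several random mutations and keep the one the surrogate rates best."""
    parent = np.asarray(parent, dtype=float)
    candidates = [parent + rng.normal(0.0, scale, size=len(parent)) for _ in range(n_trials)]
    return min(candidates, key=surrogate)        # best approximate fitness (minimization)

def informed_crossover(p1, p2, surrogate, n_trials=5, rng=np.random):
    """Try several blend crossovers of two unchanged parents, return the best child."""
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    children = []
    for _ in range(n_trials):
        alpha = rng.uniform(0.0, 1.0, size=len(p1))   # random blend weights
        children.append(alpha * p1 + (1.0 - alpha) * p2)
    return min(children, key=surrogate)
```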

1.5 The Management of Fitness Approximation


For direct fitness replacement methods, the management of the fitness approximation is necessary to drive the EA to converge to the global optima while reducing the cost as much as possible. There are several ways to conduct the model management, as
shown in Fig. 1.11.

1.5.1 Evolution Control

Evolution Control uses surrogates together with the original fitness function in an EA process, where the original fitness function is used to evaluate some/all individuals in some/all generations.

Fig. 1.11 Management of fitness approximation

There are two categories of Evolution Con-


trol methods: Fixed Evolution Control and Adaptive Evolution Control. For fixed
evolution control, there are individual-based and generation-based methods. In in-
dividual-based evolution control, only some selected individuals are evaluated by
the exact fitness function. The individual selection can be random or follow some strategy, e.g., selecting the best individuals (according to the surrogate) for evolution control. In generation-based Evolution Control, all individuals in a selected generation will be evaluated by the original fitness function; the generation selection can be random or occur at a fixed frequency. Adaptive Evolution Control adjusts the
frequency of control according to the fidelity of the surrogates.
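A simple individual-based evolution control step might look like the sketch below (the control fraction and the strategy of re-evaluating the surrogate's most promising individuals are assumptions; minimization is assumed):

```python
import numpy as np

def evaluate_with_control(population, surrogate, exact_fitness, control_fraction=0.2):
    """Individual-based evolution control: the individuals ranked best by the
    surrogate are re-evaluated exactly; the rest keep their approximate fitness."""
    approx = np.array([surrogate(ind) for ind in population])
    n_exact = max(1, int(control_fraction * len(population)))
    controlled = np.argsort(approx)[:n_exact]      # most promising under the surrogate
    fitness = approx.copy()
    for i in controlled:
        fitness[i] = exact_fitness(population[i])  # exact (controlled) evaluations
    return fitness, controlled
```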

1.5.2 Offline Model Training


Offline model training constructs surrogates based on human evaluation or previous
optimization history data. In this case, either the approximate model is of high fi-
delity or the original fitness cannot be easily evaluated during the EA process (as in evolutionary art), so the original fitness is never used. An example of this method
can be found in [50].

1.5.3 Online Model Updating


Fitness approximation may be constructed at an early stage of the EA process.
Because of the limited sample points, a surrogate may concentrate on the region
spanned by the existing sample points and not cover the rest of the search space
well. As the EA continues and new individuals enter into the population, the ac-
curacy of the previously built surrogate model will decrease. Thus the surrogate
model needs to be rebuilt using the old sample points together with the new sample points. This technique is known as online surrogate updating. There has been
considerable research with this method [3, 5, 6, 16, 19, 20, 26, 27, 28, 29, 30, 31].
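The mechanics of online updating reduce to maintaining an archive of exactly evaluated points and rebuilding the model as the archive grows, roughly as sketched here (surrogate_factory is a hypothetical callable, e.g. a constructor for a model with a fit method):

```python
def online_update(surrogate_factory, archive_X, archive_y, new_X, new_y):
    """Append the newly exact-evaluated points to the archive and rebuild the
    surrogate on the combined (old + new) sample set."""
    archive_X = list(archive_X) + list(new_X)
    archive_y = list(archive_y) + list(new_y)
    surrogate = surrogate_factory().fit(archive_X, archive_y)
    return surrogate, archive_X, archive_y
```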

1.5.4 Hierarchical Approximate Models and Model Migration


The hierarchical surrogate method builds multiple models with a hierarchical structure during the course of an EA process [51, 52]. In [51], a Gaussian process model is built as the global model. A user-specified percentage of the best
individuals according to the global model are selected to form a local search space.
Then Lamarckian evolution is performed involving a trust region-enabled gradient-
based search strategy that employs RBF local approximate models to accelerate
convergence in the local search space. In [52] the whole population is divided into
several sub-populations. Each sub-population constructs its own surrogate. At a cer-
tain interval, the individuals in the different sub-populations can migrate into other
sub-populations. This is called an Island Model. To gain a balance between the
model performance and the population diversity, the selection of migration size and
migration interval is important. It has been found that the migration interval plays a
more dominant role than migration size [53].

1.6 Case Studies: Two Surrogate-Assisted EA Real-World Applications

1.6.1 The Welded Beam Design Domain


The Welded Beam Design problem is illustrated in Fig. 1.12. The goal for this design
is to use the least material to sustain a certain weight. It has four design variables
x = (h, l, t, b); the definition of the Welded Beam Design problem can be found in the
appendix. The known optimal solution is 2.38116 with h = 0.2444, l = 6.2187,
t = 8.2915, and b = 0.2444.

Fig. 1.12 Welded Beam Structure
We demonstrate it with three GA methods, GADO, GADO-R and ASAGA. GADO
[2] stands for Genetic Algorithm for Design Optimization, a GA that has proved to

be powerful for solving engineering design problems. GADO-R is based on GADO,


and includes global and local polynomial surrogate models structured by clustering
techniques. ASAGA [60] is an adaptive multi-surrogate-assisted EA with GADO as its backbone. GADO was used with no approximate model assistance. GADO-R incor-
porates fixed quadratic polynomial surrogates through Informed Operators [45, 59].
All three methods ran 30 times with different random starting populations. The av-
erage best fitness values of the 30 runs with corresponding number of actual fitness
evaluations are shown in Fig. 1.13. The figure shows that the surrogate-assisted GA
outperforms the GA with no surrogate assistance and the adaptive surrogate-assisted
GA further outperforms the non-adaptive surrogate-assisted GA.

Fig. 1.13 Welded Beam design with the known global optimum 2.38116

1.6.2 Supersonic Aircraft Design Domain


This domain concerns the conceptual design of supersonic transport aircraft. It is
summarized briefly here and is described in more detail in [74]. Fig. 1.14 shows a
diagram of a typical airplane automatically designed by the software system. The
GA attempts to find a good design for a particular mission by varying twelve of
the aircraft conceptual design parameters over a continuous range of values. An
optimizer evaluates candidate designs using a multidisciplinary simulator. The op-
timizer’s goal is to minimize the takeoff mass of the aircraft, a measure of merit
commonly used in the aircraft industry at the conceptual design stage. Takeoff mass
is the sum of fuel mass, which provides a rough approximation of the operating cost
of the aircraft, and “dry” mass, which provides a rough approximation of the cost of building the aircraft. In summary, the problem has 12 parameters and 37 inequality constraints, and only 0.6% of the search space is evaluable.

Fig. 1.14 Supersonic aircraft design problem
Fig. 1.15 shows a performance comparison in this domain. Each curve in the fig-
ure shows the average of 15 runs of GADO starting from random initial populations.
The experiments were done once for each surrogate: Least Square PM (LS), Quick-
Prop NN (QP) and RBF in addition to one without the surrogate-assisted informed
operators altogether, with all other parameters kept the same. Fig. 1.15 demonstrates
the performance with each of the three surrogate-assisted methods as well as per-
formance with no approximation at all (the solid line). The figure plots the average
(over the 15 runs) of the best measure of merit found so far in the optimization as
a function of the number of iterations. The figure shows that all surrogate-assisted
methods are better than the plain GADO and the LS approximation method gave the
best performance in all stages of the search in this domain.

Fig. 1.15 Comparison of the four GA methods in the supersonic aircraft design domain; the horizontal axis shows the number of fitness function evaluations and the vertical axis shows the fitness value (curves: GADO_Aircraft, GADO_Aircraft_LS, GADO_Aircraft_QP, GADO_Aircraft_RBF)

1.7 Final Remarks


Using fitness approximation methods to assist GAs and other Evolutionary Algo-
rithms has gained increasing popularity in recent years. This chapter presented a
survey of the popular and recent trends in approximation methods, control strate-
gies and management approaches. An interesting question in this area is: what is
the best model for fitness approximation? Though the answer depends on the prob-
lem and user requirements, we propose an interesting generic solution, which is
to try the simplest model first. If the performance is not satisfactory or degrades
with time, more sophisticated models can be used. So far many researchers use only
one type of approximation model, as in [26]. Some researchers use multiple mod-
els for different levels of approximation, but the approximate model itself is still
fixed [18, 33, 51]. An interesting new direction [60] is to use an adaptive model-
ing method. In this method, first a simple approximation can be used such as the
fitness inheritance or K-nearest neighbor method. If the fitting is not satisfactory,
a more sophisticated model can be used, such as the polynomial model. The polynomial model itself has several levels, from the linear to the cubic polynomial model, which can be applied in order of increasing complexity. If this level of approximation is still inadequate, more complex
models should be introduced such as the RBF, Kriging or SVM models. Usually the
more complex models give better fitting accuracy but need more construction time.
This adaptive method can provide the best trade-off between model performance
and efficiency by adaptively adjusting the fitness approximation.
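The simple-to-complex escalation suggested above can be captured in a few lines (a sketch of the general idea only, not ASAGA's actual switching formula; the accuracy callback and threshold are assumptions):

```python
def adaptive_surrogate(X, y, model_factories, accuracy, threshold=0.9):
    """Try surrogate models from simplest to most complex and keep the first
    whose estimated accuracy (e.g. cross-validated correlation) is acceptable."""
    model = None
    for make_model in model_factories:        # ordered from simple to complex
        model = make_model().fit(X, y)
        if accuracy(model, X, y) >= threshold:
            break                             # good enough: stop escalating
    return model
```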

Appendix: Definition of the Welded Beam Design Problem

Minimize  f_weldedbeam(x) = 1.10471 h² l + 0.04811 t b (14 + l)    (1.17)

Subject to  13600 − τ(x) ≥ 0    (1.18)
30000 − σ(x) ≥ 0    (1.19)
b − h ≥ 0    (1.20)
P_c(x) − 6000 ≥ 0    (1.21)
0.25 − δ(x) ≥ 0    (1.22)
0.125 ≤ h ≤ 10    (1.23)
0.1 ≤ l, t, b ≤ 10    (1.24)

The terms τ(x), σ(x), and δ(x) are given below:

τ(x) = √( τ′(x)² + τ″(x)² + l τ′(x) τ″(x) / √(0.25 (l² + (h + t)²)) )    (1.25)
σ(x) = 504000 / (t² b)    (1.26)
P_c(x) = 64746.022 (1 − 0.0282346 t) t b³    (1.27)
δ(x) = 2.1952 / (t³ b)    (1.28)

where

τ′(x) = 6000 / (√2 h l)    (1.29)
τ″(x) = 6000 (14 + 0.5 l) √(0.25 (l² + (h + t)²)) / (2 (0.707 h l (l²/12 + 0.25 (h + t)²)))    (1.30)
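For reference, the objective and constraint values of Eqs. (1.17)-(1.30) can be evaluated with the short sketch below (it assumes the reconstructed square-root terms above; with the rounded optimum h = 0.2444, l = 6.2187, t = 8.2915, b = 0.2444 the objective evaluates to roughly 2.3815, consistent with the reported value of 2.38116):

```python
import math

def welded_beam_objective(h, l, t, b):
    """Eq. (1.17): material cost of the welded beam design."""
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def welded_beam_constraints(h, l, t, b):
    """Constraint left-hand sides of Eqs. (1.18)-(1.22); all must be >= 0."""
    root = math.sqrt(0.25 * (l**2 + (h + t)**2))
    tau_p = 6000.0 / (math.sqrt(2.0) * h * l)                              # tau'  (Eq. 1.29)
    tau_pp = (6000.0 * (14.0 + 0.5 * l) * root
              / (2.0 * 0.707 * h * l * (l**2 / 12.0 + 0.25 * (h + t)**2)))  # tau'' (Eq. 1.30)
    tau = math.sqrt(tau_p**2 + tau_pp**2 + l * tau_p * tau_pp / root)       # Eq. (1.25)
    sigma = 504000.0 / (t**2 * b)                                           # Eq. (1.26)
    p_c = 64746.022 * (1.0 - 0.0282346 * t) * t * b**3                      # Eq. (1.27)
    delta = 2.1952 / (t**3 * b)                                             # Eq. (1.28)
    return [13600.0 - tau, 30000.0 - sigma, b - h, p_c - 6000.0, 0.25 - delta]

print(welded_beam_objective(0.2444, 6.2187, 8.2915, 0.2444))   # approximately 2.3815
```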

References
[1] Abuthinien, M., Chen, S., Hanzo, L.: Semi-blind joint maximum likelihood channel
estimation and data detection for MIMO systems. IEEE Signal Processing Letters 15,
202–205 (2008)
[2] Rasheed, K.: GADO: A genetic algorithm for continuous design optimization. Techni-
cal Report DCS-TR-352, Department of Computer Science, Rutgers University. Ph.D.
Thesis (1998)
[3] Ong, Y.S., Nair, P.B., Keane, A.J., Wong, K.W.: Surrogate-Assisted Evolutionary Opti-
mization Frameworks for High-Fidelity Engineering Design Problems. In: Jin, Y. (ed.)
Knowledge Incorporation in Evolutionary Computation. Studies in Fuzziness and Soft
Computing, pp. 307–332. Springer, Heidelberg (2004)
[4] Schwefel, H.-P.: Evolution and Optimum Seeking. Wiley, Chichester (1995)
[5] Chafekar, D., Shi, L., Rasheed, K., Xuan, J.: Multi-objective GA optimization using
reduced models. IEEE Trans. on Systems, Man, and Cybernetics: Part C 9(2), 261–265
(2005)
[6] Chung, H.-S., Alonso, J.J.: Multi-objective optimization using approximation model-
based genetic algorithms. Technical report 2004-4325, AIAA (2004)

[7] Jin, Y.: A comprehensive survey of fitness approximation in evolutionary computation.


Soft Computing Journal 9(1), 3–12 (2005)
[8] Reklaitis, G.V., Ravindran, A., Ragsdell, K.M.: Engineering Optimization Methods and
Application. Wiley, New York (1983)
[9] Deb, K.: Optimization for Engineering Design: Algorithms and Examples. Prentice-
Hall, New Delhi (1995)
[10] Weinberger, E.D.: Fourier and Taylor series on fitness landscapes. Biological Cybernet-
ics 65(55), 321–330 (1991)
[11] Hordijk, W., Stadler, P.F.: Amplitude Spectra of Fitness Landscapes. J. Complex Sys-
tems 1, 39–66 (1998)
[12] Smith, R., Dike, B., Stegmann, S.: Fitness inheritance in genetic algorithms. In: Pro-
ceedings of ACM Symposiums on Applied Computing, pp. 345–350. ACM, New York
(1995)
[13] Sastry, K., Goldberg, D.E., Pelikan, M.: Don’t evaluate, inherit. In: Proceedings of Ge-
netic and Evolutionary Computation Conference, pp. 551–558. Morgan Kaufmann, San
Francisco (2001)
[14] Pelikan, M., Sastry, K.: Fitness Inheritance in the Bayesian Optimization Algorithm. In:
Deb, K., et al. (eds.) GECCO 2004. LNCS, vol. 3103, pp. 48–59. Springer, Heidelberg
(2004)
[15] Bui, L.T., Abbass, H.A., Essam, D.: Fitness inheritance for noisy evolutionary multi-
objective optimization. In: Proceedings of the 2005 conference on Genetic and evolu-
tionary computation, pp. 779–785 (2005)
[16] Kim, H.-S., Cho, S.-B.: An efficient genetic algorithm with less fitness evaluation by
clustering. In: Proceedings of IEEE Congress on Evolutionary Computation, pp. 887–
894. IEEE, Los Alamitos (2001)
[17] Elliott, L., Ingham, D.B., Kyne, A.G., Mera, N.S., Pourkashanian, M., Wilson, C.W.:
An informed operator based genetic algorithm for tuning the reaction rate parame-
ters of chemical kinetics mechanisms. In: Deb, K., et al. (eds.) GECCO 2004. LNCS,
vol. 3103, pp. 945–956. Springer, Heidelberg (2004)
[18] Jin, Y., Sendhoff, B.: Reducing Fitness Evaluations Using Clustering Techniques and
Neural Network Ensembles. In: Deb, K., et al. (eds.) GECCO 2004. LNCS, vol. 3102,
pp. 688–699. Springer, Heidelberg (2004)
[19] Ong, Y.S., Keane, A.J., Nair, P.B.: Surrogate-Assisted Coevolutionary Search. In: 9th
International Conference on Neural Information Processing, Special Session on Trends
in Global Optimization, Singapore, pp. 2195–2199 (2002)
[20] Rasheed, K.: An incremental-approximate-clustering approach for developing dynamic
reduced models for design optimization. In: Proceedings of the Congress on Evolution-
ary Computation (CEC 2002), pp. 986–993 (2000)
[21] Pelikan, M., Sastry, K., Goldberg, D.E.: Multiobjective hBOA, clustering, and scalabil-
ity. In: Proceedings of the 2005 conference on Genetic and evolutionary computation,
Washington DC, USA, pp. 663–670 (2005)
[22] Takagi, H.: Interactive evolutionary computation. Fusion of the capabilities of EC opti-
mization and human evaluation. Proceedings of the IEEE 89(9), 1275–1296 (2001)
[23] Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes in
C: the Art of Scientific Computing, 2nd edn. Cambridge University Press, Cambridge
(1992)
[24] Gibbs, M., MacKay, D.J.C.: Efficient Implementation of Gaussian Processes. Cavendish
Laboratory, Cambridge (1997) (unpublished manuscript)

[25] Williams, C.K.I., Rasmussen, C.E.: Gaussian Processes for regression. In: Touretzky,
D.S., Mozer, M.C., Hasselmo, M.E. (eds.) Advances in Neural Information Processing
Systems, vol. 8. MIT Press, Cambridge (1996)
[26] Emmerich, M., Giotis, A., Özdemir, M., Bäck, T., Giannakoglou, K.: Metamodel-
assisted evolution strategies. In: Guervós, J.J.M., Adamidis, P.A., Beyer, H.-G.,
Fernández-Villacañas, J.-L., Schwefel, H.-P. (eds.) PPSN 2002. LNCS, vol. 2439, pp.
361–380. Springer, Heidelberg (2002)
[27] El-Beltagy, M.A., Keane, A.J.: Evolutionary optimization for computationally expen-
sive problems using Gaussian processes. In: Proceedings of International Conference
on Artificial Intelligence, pp. 708–714. CSREA (2001)
[28] Zhou, Z., Ong, Y.S., Nair, P.B.: Hierarchical surrogate-assisted evolutionary optimiza-
tion framework. In: Congress on Evolutionary Computation, pp. 1586–1593. IEEE, Los
Alamitos (2004)
[29] Ulmer, H., Streichert, F., Zell, A.: Evolution strategies assisted by Gaussian processes
with improved pre-selection criterion. In: Proceedings of IEEE Congress on Evolution-
ary Computation, pp. 692–699 (2003)
[30] Bueche, D., Schraudolph, N.N., Koumoutsakos, P.: Accelerating evolutionary algo-
rithms with Gaussian process fitness function models. IEEE Trans. on Systems, Man,
and Cybernetics: Part C 35(2), 183–194 (2005)
[31] Ulmer, H., Streicher, F., Zell, A.: Model-assisted steady-state evolution strategies. In:
Cantú-Paz, E., Foster, J.A., Deb, K., Davis, L., Roy, R., O’Reilly, U.-M., Beyer, H.-
G., Kendall, G., Wilson, S.W., Harman, M., Wegener, J., Dasgupta, D., Potter, M.A.,
Schultz, A., Dowsland, K.A., Jonoska, N., Miller, J., Standish, R.K. (eds.) GECCO
2003. LNCS, vol. 2723, pp. 610–621. Springer, Heidelberg (2003)
[32] Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press, Ox-
ford (1995)
[33] Graening, L., Jin, Y., Sendhoff, B.: Efficient evolutionary optimization using individual-
based evolution control and neural networks: A comparative study. In: European Sym-
posium on Artificial Neural Networks, pp. 273–278 (2005)
[34] Hong, Y.-S., Lee, H., Tahk, M.-J.: Acceleration of the convergence speed of evolu-
tionary algorithms using multi-layer neural networks. Engineering Optimization 35(1),
91–102 (2003)
[35] Hüsken, M., Jin, Y., Sendhoff, B.: Structure optimization of neural networks for aero-
dynamic optimization. Soft Computing Journal 9(1), 21–28 (2005)
[36] Jin, Y., Hüsken, M., Olhofer, M., Sendhoff, B.: Neural networks for fitness approxima-
tion in evolutionary optimization. In: Jin, Y. (ed.) Knowledge Incorporation in Evolu-
tionary Computation, pp. 281–305. Springer, Berlin (2004)
[37] Papadrakakis, M., Lagaros, N., Tsompanakis, Y.: Optimization of large-scale 3D trusses
using Evolution Strategies and Neural Networks. Int. J. Space Structures 14(3), 211–223
(1999)
[38] Schneider, G.: Neural networks are useful tools for drug design. Neural Networks 13,
15–16 (2000)
[39] Shyy, W., Tucker, P.K., Vaidyanathan, R.: Response surface and neural network tech-
niques for rocket engine injector optimization. Technical report 99-2455, AIAA (1999)
[40] Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines. Cam-
bridge Press (2000)
[41] Shawe-Taylor, J., Cristianini, N.: Kernel Methods for Pattern Analysis. Cambridge Press
(2004)

[42] Llorà, X., Sastry, K., Goldberg, D.E., Gupta, A., Lakshmi, L.: Combating User Fatigue
in iGAs: Partial Ordering, Support Vector Machines, and Synthetic Fitness. In: Proceed-
ings of the 2005 conference on Genetic and evolutionary computation, pp. 1363–1370
(2005)
[43] Rasheed, K., Ni, X., Vattam, S.: Comparison of Methods for Developing Dynamic Re-
duced Models for Design Optimization. Soft Computing Journal 9(1), 29–37 (2005)
[44] Jin, R., Chen, W., Simpson, T.W.: Comparative studies of metamodeling techniques
under multiple modeling criteria. Technical report 2000-4801, AIAA (2000)
[45] Simpson, T., Mauery, T., Korte, J., Mistree, F.: Comparison of response surface and
Kriging models for multidisciplinary design optimization. Technical report 98-4755,
AIAA (1998)
[46] Carpenter, W., Barthelemy, J.-F.: A comparison of polynomial approximation and arti-
ficial neural nets as response surface. Technical report 92-2247, AIAA (1992)
[47] Willmes, L., Baeck, T., Jin, Y., Sendhoff, B.: Comparing neural networks and kriging for
fitness approximation in evolutionary optimization. In: Proceedings of IEEE Congress
on Evolutionary Computation, pp. 663–670 (2003)
[48] Branke, J., Schmidt, C.: Fast convergence by means of fitness estimation. Soft Comput-
ing Journal 9(1), 13–20 (2005)
[49] Rasheed, K., Hirsh, H.: Informed operators: Speeding up genetic-algorithm-based de-
sign optimization using reduced models. In: Proceedings of the Genetic and Evolution-
ary Computation Conference (GECCO 2000), pp. 628–635 (2000)
[50] Biles, J.A.: GenJam: A genetic algorithm for generating jazz solos. In: Proceedings of
International Computer Music Conference, pp. 131–137 (1994)
[51] Zhou, Z.Z., Ong, Y.S., Nair, P.B., Keane, A.J., Lum, K.Y.: Combining Global and Lo-
cal Surrogate Models to Accelerate Evolutionary Optimization. IEEE Transactions on
Systems, Man and Cybernetics - Part C 37(1), 66–76 (2007)
[52] Sefrioui, M., Periaux, J.: A hierarchical genetic algorithm using multiple models for op-
timization. In: Deb, K., Rudolph, G., Lutton, E., Merelo, J.J., Schoenauer, M., Schwefel,
H.-P., Yao, X. (eds.) PPSN 2000. LNCS, vol. 1917, pp. 879–888. Springer, Heidelberg
(2000)
[53] Skolicki, Z., De Jong, K.: The influence of migration sizes and intervals on island mod-
els. In: Proceedings of the 2005 conference on Genetic and evolutionary computation,
pp. 1295–1302 (2005)
[54] Rasheed, K., Hirsh, H.: Learning to be selective in genetic-algorithm-based design op-
timization. Artificial Intelligence in Engineering, Design, Analysis and Manufactur-
ing 13, 157–169 (1999)
[55] Hidović, D., Rowe, J.E.: Validating a model of colon colouration using an evolution
strategy with adaptive approximations. In: Deb, K., et al. (eds.) GECCO 2004. LNCS,
vol. 3103, pp. 1005–1016. Springer, Heidelberg (2004)
[56] Ziegler, J., Banzhaf, W.: Decreasing the number of evaluations in evolutionary algo-
rithms by using a meta-model of the fitness function. In: Ryan, C., Soule, T., Keijzer,
M., Tsang, E.P.K., Poli, R., Costa, E. (eds.) EuroGP 2003. LNCS, vol. 2610, pp. 264–
275. Springer, Heidelberg (2003)
[57] Jin, Y., Branke, J.: Evolutionary optimization in uncertain environments: A survey.
IEEE Transactions on Evolutionary Computation 9(3), 303–317 (2005)
[58] Ziegler, J., Banzhaf, W.: Decreasing the number of evaluations in evolutionary algo-
rithms by using a meta-model of the fitness function. In: Ryan, C., Soule, T., Keijzer,
M., Tsang, E.P.K., Poli, R., Costa, E. (eds.) EuroGP 2003. LNCS, vol. 2610, pp. 264–
275. Springer, Heidelberg (2003)

[59] Lim, D., Ong, Y.S., Jin, Y., Sendhoff, B.: A Study on Metamodeling Techniques, En-
sembles, and Multi-Surrogates in Evolutionary Computation. In: Genetic and Evolu-
tionary Computation Conference, London, UK, pp. 1288–1295. ACM Press, New York
(2007)
[60] Shi, L., Rasheed, K.: ASAGA: An Adaptive Surrogate-Assisted Genetic Algorithm. In:
Genetic and Evolutionary Computation Conference (GECCO 2008), pp. 1049–1056.
ACM Press, New York (2008)
[61] Regis, R.G., Shoemaker, C.A.: Local Function Approximation in Evolutionary Algo-
rithms for the Optimization of Costly Functions. IEEE Transactions on Evolutionary
Computation 8(5), 490–505 (2004)
[62] Zerpa, L.E., Queipo, N.V., Pintos, S., Salager, J.-L.: An Optimization Methodology of
Alkaline-surfactant-polymer Flooding Processes Using Field Scale Numerical Simula-
tion and Multiple Surrogates. Journal of Petroleum Science and Engineering 47, 197–
208 (2005)
[63] Lundström, D., Staffan, S., Shyy, W.: Hydraulic Turbine Diffuser Shape Optimization
by Multiple Surrogate Model Approximations of Pareto Fronts. Journal of Fluids Engi-
neering 129(9), 1228–1240 (2007)
[64] Zhou, Z., Ong, Y.S., Lim, M.H., Lee, B.S.: Memetic Algorithm Using Multi-surrogates
for Computationally Expensive Optimization Problems. Soft Computing 11(10), 957–
971 (2007)
[65] Goel, T., Haftka, R.T., Shyy, W., Queipo, N.V.: Ensemble of Surrogates? Structural and
Multidisciplinary Optimization 33, 199–216 (2007)
[66] Sastry, K., Lima, C.F., Goldberg, D.E.: Evaluation Relaxation Using Substructural In-
formation and Linear Estimation. In: Proceedings of the 8th annual conference on Ge-
netic and Evolutionary Computation Conference (2006)
[67] Torczon, V., Trosset, M.: Using approximations to accelerate engineering design opti-
mization. NASA/CR-1998-208460 (or ICASE Report No. 98-33) (1998)
[68] Pierret, S., Braembussche, R.A.V.: Turbomachinery Blade Design Using a Navier-
Stokes Solver and ANN. Journal of Turbomachinery (ASME) 121(2) (1999)
[69] Goel, T., Vaidyanathan, R., Haftka, R.T., Shyy, W., Queipo, N.V., Tucker, K.: Response
surface approximation of Pareto optimal front in multi-objective optimization. Com-
puter Methods in Applied Mechanics and Engineering (2007)
[70] Knowles, J.: ParEGO: A Hybrid Algorithm with On-Line Landscape Approximation for
Expensive Multiobjective Optimization Problems. IEEE Transactions on Evolutionary
Computation 10(1) (February 2005)
[71] Giannakoglou, K.C.: Design of optimal aerodynamic shapes using stochastic optimiza-
tion methods and computational intelligence. Progress in Aerospace Sciences 38(1)
(2000)
[72] Shyy, W., Papila, N., Vaidyanathan, R., Tucker, K.: Global design optimization for aero-
dynamics and rocket propulsion components. Progress in Aerospace Sciences 37 (2001)
[73] Quagliarella, D., Periaux, J., Poloni, C., Winter, G. (eds.): Genetic Algorithms and Evo-
lution Strategies in Engineering and Computer Science. Recent Advances and Industrial
Applications, ch. 13, pp. 267–288. John Wiley and Sons, West Sussex (1997)
[74] Gelsey, A., Schwabacher, M., Smith, D.: Using modeling knowledge to guide design
space search. In: Fourth International Conference on Artificial Intelligence in Design
1996 (1996)
