A Survey of Fitness Approximation Methods Applied in Evolutionary Algorithms
1.1 Introduction
In recent years, EAs have been applied to many real-world application domains and have gained much research interest. EAs have proved to be powerful tools for optimization problems and are therefore used in a wide range of real-world applications, especially in engineering design domains. In such domains, the so-called fitness functions are sometimes discontinuous, non-differentiable, riddled with local optima, noisy
and ambiguous. It was found that EAs perform better than the conventional optimiz-
ers such as sequential quadratic programming and Simulated Annealing [2, 3, 4, 73].
Many challenges still arise in the application of EAs to real-world domains.
For engineering design problems, a large number of objective evaluations may be
required in order to obtain near-optimal solutions. Moreover, the search space can be
complex, with many constraints and a small feasible region. However, determining
the fitness of each point may involve the use of a simulator or analysis code that takes
an extremely long time to execute. Therefore it would be difficult to be cavalier about
the number of objective evaluations used for an optimization [5, 6]. For tasks like
art design and music composition, no explicit fitness function exists; experienced
human users are needed to do the evaluation. A human’s ability to deal with a large
number of evaluations is limited as humans easily get tired. Another challenge is that
the environment of an EA can be noisy, which means that the exact fitness cannot be
determined, and an approximate fitness must be assigned to each individual. Mitigating the noise by averaging fitness over repeated evaluations requires even more evaluations. For such
problems surrogate-assisted evolution methods based on fitness approximation are
preferable, as they can approximate the exact fitness at a much lower computational cost.
A good fitness approximation method can still lead the EA process to find optimal
or near-optimal solutions and is also tolerant to noise [7, 71].
In this chapter we further extend the discussion about fitness approximation by
introducing more concepts in this area and by presenting new developments in re-
cent years. We focus on three main aspects of fitness approximation: the different types of approximation methods, their working styles, and their management schemes.
For the methods of fitness approximation, instance-based learning methods, ma-
chine learning methods and statistical learning methods are the most popular ones.
Instance-based and machine learning methods include fitness inheritance, radial ba-
sis function models, the K-nearest-neighbor method, clustering techniques, and neu-
ral network methods. Statistical learning methods, also known as functional models, including polynomial models, Kriging models, and support vector machines, are widely used for fitness approximation in EAs. Comparative studies
among these methods are presented in this chapter.
For the working styles of the fitness approximation, we discuss both direct and
indirect fitness replacement strategies. The direct fitness replacement method is to
use the approximate fitness to directly replace the original exact fitness during the
course of the EA process. Thus individuals mostly have the approximate fitness
during the optimization. The indirect fitness replacement method is to use the ap-
proximate fitness only for some but not all processes in the EA, such as population
initialization and EA operators. Individuals have the exact fitness during most if not
all of the optimization process.
With fitness approximation in EAs, the quality of the approximate model is always a concern, owing to the lack of training data and the often high dimensionality of the problem. Obtaining a perfect approximate model is not possible in such cases. Usually the original fitness function is used alongside the approximate method to address this problem. The original fitness function can either correct some/all
Fig. 1.1 Fitness approximation methods (FI: Fitness Inheritance; KNN: K-Nearest Neighbors; RBF: Radial Basis Functions; NN: Neural Networks; DT: Decision Tree; PM: Polynomial Model; SVM: Support Vector Machines)
The Radial Basis Function (RBF) model is another instance-based learning method.
RBF networks can also be viewed as a type of neural networks. Since it is a very
popular technique for fitness approximation in EAs [3, 19, 31], it is worth introducing separately from standard multilayer neural networks.
An RBF network consists of an input layer with the same number of input units
as the problem dimension, a single hidden layer of k nonlinear processing units and
an output layer of linear weights w_i (Fig. 1.2). The size of the hidden layer (k) can
be equal to the sample size if the sample size is small. In the case of a larger sample
size, k is usually smaller than the sample size to avoid excessive calculations. This
RBF network is called the generalized RBF network. The output y(x) of the RBF
network is given as a linear combination of a set of radial basis functions expressed
in the following way:
y(x) = w_0 + \sum_{i=1}^{k} w_i \, \phi_i\left( \| x - c_i \| \right)    (1.1)
where w_0 and w_i are the unknown coefficients to be learned. The term \phi_i(\| x - c_i \|), also called the kernel, represents the ith radial basis function. It evaluates the distance between the input x and the center c_i. For the generalized RBF network, the
centers ci are also unknown and have to be learned by other methods such as the
k-means method.
Typical choices for the kernel include linear splines, cubic splines, multiquadrics, thin-plate splines, and Gaussian kernels. The Gaussian kernel is the most commonly used in practice, having the form:
\phi_i\left( \| x - c_i \| \right) = \exp\left( -\frac{\| x - c_i \|^2}{2\sigma^2} \right)    (1.2)
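To make Eqs. (1.1)-(1.2) concrete, the following is a minimal sketch of a generalized RBF surrogate in Python, assuming a single shared kernel width sigma and simple k-means-style center placement; the function names and defaults are illustrative rather than taken from any cited implementation:

import numpy as np

def fit_rbf(X, y, k=10, sigma=1.0, iters=50, seed=0):
    # Generalized RBF network (Eqs. 1.1-1.2): k-means centers,
    # Gaussian kernels, linear least squares for w_0 and w_i.
    # X: (n, d) float array of sample points; y: (n,) exact fitness values.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):  # naive k-means for the centers c_i
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
    Phi = np.hstack([np.ones((len(X), 1)), np.exp(-d2 / (2 * sigma ** 2))])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # [w_0, w_1, ..., w_k]
    return centers, w

def predict_rbf(X, centers, w, sigma=1.0):
    # Evaluate Eq. (1.1) at new points.
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
    Phi = np.hstack([np.ones((len(X), 1)), np.exp(-d2 / (2 * sigma ** 2))])
    return Phi @ w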
A related approach uses clustering techniques to divide the population into several clusters and then build an approximate model for each cluster. The motiva-
tion is that multiple approximate models are believed to utilize more local informa-
tion about the search space and fit the original fitness function better than a single
model [5, 20, 21].
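A minimal sketch of this multi-model idea follows, assuming scikit-learn is available; the choice of a quadratic model per cluster and the cluster count are illustrative, and each cluster is assumed to retain enough points to fit its model:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def fit_cluster_models(X, y, n_clusters=4):
    # Partition the samples with k-means and fit one local
    # quadratic model per cluster.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    models = []
    for j in range(n_clusters):
        mask = km.labels_ == j
        m = make_pipeline(PolynomialFeatures(degree=2), Ridge())
        models.append(m.fit(X[mask], y[mask]))
    return km, models

def predict_local(x, km, models):
    # Route a query point to the model of its nearest cluster center.
    j = km.predict(np.atleast_2d(x))[0]
    return models[j].predict(np.atleast_2d(x))[0]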
A simple feed-forward MLPNN with one input layer, one hidden layer and one
output layer can be expressed as:
y(x) = \sum_{j=1}^{K} w_j \, f\left( \sum_{i=1}^{n} w_{ij} x_i + \theta_j \right) + \theta_0    (1.3)
where n is the number of input neurons (which is usually equal to the problem
dimension), K is the number of nodes of the hidden layer, and the function f is
called the activation function. The structure of a feed-forward MLPNN is shown in
Fig. 1.3. W and θ are the unknown weights to be learned. The most commonly used
activation function is the logistic function, which has the form:
f(x) = \frac{1}{1 + \exp(-cx)}    (1.4)
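For concreteness, here is a direct transcription of Eqs. (1.3)-(1.4) as a forward pass; training the weights (e.g., by backpropagation) is omitted, and the shapes are assumptions for illustration:

import numpy as np

def logistic(x, c=1.0):
    # Eq. (1.4): the logistic activation function.
    return 1.0 / (1.0 + np.exp(-c * x))

def mlp_forward(x, W_in, theta_hidden, w_out, theta0):
    # Eq. (1.3): y(x) = sum_j w_j f(sum_i w_ij x_i + theta_j) + theta_0.
    # x: (n,) input; W_in: (K, n); theta_hidden: (K,); w_out: (K,).
    hidden = logistic(W_in @ x + theta_hidden)
    return w_out @ hidden + theta0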
The Kriging model consists of two component models, a global regression model and a stochastic-process model of the local deviations; the resulting predictor can be mathematically expressed as:
\hat{y} = \hat{\beta} + r^T(x) \, R^{-1} \left( y - f \hat{\beta} \right)    (1.8)
The correlation vector between x and the sampled data points is expressed as:
r^T(x) = \left[ R(x, x^1), R(x, x^2), \ldots, R(x, x^n) \right]^T    (1.10)
Estimation of the parameters is often carried out using the generalized least squares
method or the maximum likelihood method. Detailed implementations can be found
in [24, 25].
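The following is a minimal sketch of the predictor in Eq. (1.8) with a constant trend (f = 1, i.e., ordinary Kriging) and a Gaussian correlation function; theta is held fixed here, whereas a real implementation would estimate it by maximum likelihood as in [24, 25]:

import numpy as np

def gauss_corr(A, B, theta=1.0):
    # Gaussian correlation R(x, x') = exp(-theta * ||x - x'||^2).
    return np.exp(-theta * ((A[:, None] - B[None]) ** 2).sum(-1))

def fit_kriging(X, y, theta=1.0, nugget=1e-10):
    # Generalized-least-squares estimate of beta, then precompute
    # R^{-1}(y - f*beta) for the predictor of Eq. (1.8).
    R = gauss_corr(X, X, theta) + nugget * np.eye(len(X))
    f = np.ones(len(X))
    beta = (f @ np.linalg.solve(R, y)) / (f @ np.linalg.solve(R, f))
    return X, np.linalg.solve(R, y - f * beta), beta, theta

def predict_kriging(x, model):
    X, Ri_res, beta, theta = model
    r = gauss_corr(np.atleast_2d(x), X, theta)[0]  # Eq. (1.10)
    return beta + r @ Ri_res                       # Eq. (1.8)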
In addition to the approximate values, the Kriging method can also provide accu-
racy information about the fitting in the form of confidence intervals for the estimated
values without additional computational cost. In [6, 28], a Kriging model is used to
build the global models because it is believed to be a good solution for fitting com-
plex surfaces. A Kriging model is used to pre-select the most promising solutions in
[29]. In [26, 27, 30], a Kriging model is used to accelerate the optimization or reduce
the expensive computational cost of the original fitness function. In [67], a Kriging
model with a pattern search technique is used to approximate the original expen-
sive function. In [70], a Gaussian process method is used for landscape search in a multi-objective optimization problem, with promising performance. One disadvantage of the Kriging method is that it is sensitive to the problem dimension: the computational cost becomes unacceptable when the dimensionality is high.
SVMs were originally developed for classification, i.e., for predicting discrete class labels. Contemporary SVM models support both regression and classification
tasks and can handle multiple continuous and categorical variables. A detailed de-
scription of SVM models can be found in [40, 41]. SVM models compare favorably
to many other approximation models because they are not sensitive to local optima,
their optimization process does not depend on the problem dimensions, and over-
fitting is seldom an issue. Applications of SVM for fitness approximation can be
found in [42]. The regression SVM is used for constructing approximate models.
There are two types of regression SVMs: epsilon-SVM regression and nu-SVM re-
gression. The epsilon-SVM regression model is more commonly used for fitness
approximation, where the linear epsilon-insensitive loss function is defined by

L_\varepsilon(y, f(x)) = \max\left( 0, \, |y - f(x)| - \varepsilon \right)

so that prediction errors smaller than \varepsilon are ignored. In the resulting model, \phi(x) is called the kernel function. It may have the forms of linear, polyno-
mial, Gaussian, RBF and sigmoid functions. The RBF is by far the most popular
choice of kernel type used in SVMs, mainly because of its localized and finite response across the entire range of the real x-axis. This optimization problem
can be solved by using quadratic programming techniques.
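As a small usage sketch, an epsilon-SVM regression surrogate can be built with an off-the-shelf library such as scikit-learn; the data and hyperparameters below are placeholders, not values from the cited studies:

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(60, 3))
y = (X ** 2).sum(axis=1)  # stand-in for an expensive exact fitness

# epsilon sets the width of the insensitive tube; the RBF kernel is
# the popular default discussed above.
surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.05, gamma="scale").fit(X, y)
y_hat = surrogate.predict(X[:5])  # approximate fitness for new candidates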
formula is used to decide whether to continue using the same type of model or
switch to the next at any time. Fig. 1.9 shows the evolution path.
The neural network model and the polynomial model were compared in [46, 72].
The study concluded that the performance of the two types of approximation was
comparable in terms of the number of function evaluations required to build the
approximations and the number of undetermined parameters associated with the ap-
proximations. However, the polynomial model had a much lower construction cost.
In [72], after evaluating both methods in several applications, the authors concluded that both can perform comparably when the amount of data is modest. In [43], a quadratic poly-
nomial model was found to be the best method among the polynomial model, RBF
network, and the Quick-prop neural network when the models were built for regions
created by clustering techniques. The authors were in favor of the polynomial model
because they found that it formed approximations more than an order of magnitude
faster than the other methods and did not require any tuning of parameters. The
authors also pointed out that the polynomial approximation was in a mathemati-
cal form which could be algebraically analyzed and manipulated, as opposed to the
black-box results that neural networks give.
The Kriging model and the neural network model were compared using bench-
mark problems in [47]. However, no clear conclusion was drawn about which model
is better. Instead, the author showed that optimization with a meta-model could lead
to degraded performance. Another comparison was presented in [45] between the
polynomial model and the Kriging model. By testing these two models on a real-
world engineering design problem, the author found that the polynomial and Kriging
approximations yielded comparable results with minimal difference in predictive ca-
pability. Comparisons between several approximate models were presented in [44],
which compared the performance of the polynomial model, the multivariate adaptive regression splines (MARS) model, the RBF model, and the Kriging model using 14 test problems with
different scales and nonlinearities. Their conclusion was that the polynomial model
is the best for low-order nonlinear problems, and the RBF model is the best for deal-
ing with high-order nonlinear problems (details shown in Table 1.1). In [59], four
types of approximate models - Gaussian Process (Kriging), RBF, Polynomial model
and Extreme Learning Machine Neural Network (ELMNN) - were compared on
artificial unconstrained benchmark domains. Polynomial Models (PM) were found
to be the best for final solution quality and RBF was found to be the best when
considering correlation coefficients between the exact fitness and estimated fitness.
Table 1.2 shows the performance ranks of these four models in terms of the quality
of the final solution.
So far different approximate models have been compared based on their perfor-
mance, but the word performance itself has not been clearly defined. This is because
the definition of performance may depend on the problem to be addressed, and mul-
tiple criteria need to be considered. Model accuracy is probably the most important
criterion, since approximate models with a low accuracy may lead the optimization
process to local optima. Model accuracy also should be based on new sample points
instead of the training data set points. The reason for this is that for some models
such as the neural network, overfitting is a common problem. In the case of over-
fitting, the model works very well on training data, yielding good model accuracy,
Table 1.1 Best-performing model type for each problem category, per [44]

              Low-order Nonlinearity   High-order Nonlinearity
Small Scale   Polynomial               RBF
Large Scale   Kriging                  RBF
Overall       Polynomial               RBF
Table 1.2 Final quality measures for the Kriging, PM, RBF and ELMNN approximate models in [59]

Benchmark domain   Kriging   PM   RBF   ELMNN
Ackley                2       1    4      3
Griewank              3       1    2      4
Rosenbrock            1       3    2      4
Step                  3       1    2      4
but may perform poorly on new sample points. The optimization process could
easily go in the wrong direction if it is assisted by a model suffering from overfit-
ting. There are other important criteria to be considered, including robustness, effi-
ciency, and time spent on model construction and updating. A fair comparison would
consider the model accuracy as well as all of these criteria.
It is difficult to draw a clear conclusion on which model is the best for the reasons
stated above, though the polynomial model seems to be the best choice for a local
model when dealing with local regions or clusters and enough sample points are
available [43]. In such cases, the fitting problem usually has low-order nonlinearity
and the polynomial model is the best candidate according to [44]. The polynomial
model is also believed to perform the best for problems with noise [44]. For high-order nonlinear problems, the RBF model is believed to be the best; it is the least sensitive to sample size and the most robust [44]. The RBF model is therefore a good choice for a global model, with or without many samples. In [72], NN was found to perform significantly better than PM when the search space is very complex and the parameters are correctly set.
The SVM model is a powerful fitting tool that belongs to the class of kernel
methods. Because of the beneficial features of SVMs stated above, the SVM model
becomes a good choice for constructing a global model, especially for problems
with high dimension and many local optima, provided that a large sample of points
exists.
The indirect surrogate method computes the exact fitness for each individual during
an EA process and the approximate fitness is used in other ways. For example, the
approximate fitness can be used for population pre-selection. In this method, instead
of generating a random initial population, an individual for the initial population
can be generated by selecting the best individual from a number of uniformly dis-
tributed random individuals in the design space according to the approximate fitness
[5, 43, 49].
Approximate fitness can also be used for crossover or mutation in a similar man-
ner, through a technique known as Informed Operators [5, 17, 43, 49]. Under this
approach, the approximate models are used to evaluate candidates only during the
crossover and/or mutation process. After the crossover and/or mutation process,
the exact fitness is still computed for the newly created candidate solutions. Using
the approximate fitness indirectly in the form of Informed Operators - rather than di-
rect evaluation - is expected to keep the optimization moving toward the true global
optima and to reduce the risk of convergence to suboptimal solutions because each
individual in the population is still assigned its exact fitness [49]. Experimental results have shown that a surrogate-assisted, informed-operator-based multi-objective GA can outperform state-of-the-art multi-objective GAs on several benchmark problems [5]. Informed Operators also make it easy to use surrogates adaptively, as the number of candidates can be adaptively determined. Some of the informed operators used in [49] are explained as follows (a code sketch follows the list):
• Informed initialization: Approximate fitness is used for population pre-selection.
Instead of generating a random initial population, an individual for the initial
population can be generated by selecting the best individuals from a number of
uniformly distributed random individuals in the design space according to the
approximate fitness.
• Informed mutation: To perform informed mutation, several random mutations
of the base point are generated. The mutation with the best approximate fitness
value is returned as the result.
• Informed crossover: Two parents are selected at random according to the usual
selection strategy. These two parents are not changed in the course of the
informed crossover operation. Several crossovers are conducted by randomly
selecting a crossover method, randomly selecting its internal parameters and
applying it to the two parents to generate a potential child. The surrogate is used
to evaluate every potential child, and the best child is selected as the outcome.
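A minimal sketch of informed mutation and crossover, assuming minimization and a surrogate callable that scores a batch of candidate vectors at once; the trial counts, mutation scale, and blend crossover are illustrative choices, not the exact operators of [49]:

import numpy as np

def informed_mutation(parent, surrogate, n_trials=5, scale=0.1, rng=None):
    # Generate several random mutations of the base point and return
    # the one the surrogate ranks best; only the returned child would
    # later receive an exact evaluation.
    rng = rng or np.random.default_rng()
    trials = parent + rng.normal(0.0, scale, size=(n_trials, len(parent)))
    return trials[np.argmin(surrogate(trials))]

def informed_crossover(p1, p2, surrogate, n_trials=5, rng=None):
    # Try several random blend crossovers of two fixed parents and
    # keep the child with the best approximate fitness.
    rng = rng or np.random.default_rng()
    alphas = rng.uniform(0.0, 1.0, size=(n_trials, 1))
    children = alphas * p1 + (1.0 - alphas) * p2
    return children[np.argmin(surrogate(children))]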
of the aircraft, and the "dry" mass, which provides a rough approximation of the cost of
building the aircraft. In summary, the problem has 12 parameters and 37 inequality
constraints and only 0.6% of the search space is evaluable.
Fig. 1.15 shows a performance comparison in this domain. Each curve in the fig-
ure shows the average of 15 runs of GADO starting from random initial populations.
The experiments were done once for each surrogate: Least Square PM (LS), Quick-
Prop NN (QP) and RBF in addition to one without the surrogate-assisted informed
operators altogether, with all other parameters kept the same. Fig. 1.15 demonstrates
the performance with each of the three surrogate-assisted methods as well as per-
formance with no approximation at all (the solid line). The figure plots the average
(over the 15 runs) of the best measure of merit found so far in the optimization as
a function of the number of iterations. The figure shows that all surrogate-assisted
methods are better than the plain GADO and the LS approximation method gave the
best performance in all stages of the search in this domain.
Fig. 1.15 Comparison of four GA methods in the supersonic aircraft design domain (GADO_Aircraft, GADO_Aircraft_LS, GADO_Aircraft_QP, GADO_Aircraft_RBF); the horizontal axis shows the number of fitness function evaluations and the vertical axis shows the fitness value
where

\tau'(x) = \frac{6000}{\sqrt{2} \, h l}    (1.29)

\tau''(x) = \frac{6000 (14 + 0.5 l) \sqrt{0.25 \left( l^2 + (h + t)^2 \right)}}{2 \left( 0.707 \, h l \left( l^2 / 12 + 0.25 (h + t)^2 \right) \right)}    (1.30)
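For reference, the two reconstructed stress terms can be computed directly. This sketch assumes the standard welded-beam parameterization suggested by the symbols in the text (weld height h, weld length l, bar thickness t, load 6000 lb, 14 in overhang); the combined stress expression that the "where" refers to is not shown in this excerpt:

import math

def weld_shear_terms(h, l, t):
    tau1 = 6000.0 / (math.sqrt(2.0) * h * l)            # Eq. (1.29)
    M = 6000.0 * (14.0 + 0.5 * l)                       # bending moment
    R = math.sqrt(0.25 * (l ** 2 + (h + t) ** 2))       # lever arm
    J = 2.0 * (0.707 * h * l * (l ** 2 / 12.0 + 0.25 * (h + t) ** 2))
    return tau1, M * R / J                              # (tau', tau'')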
References
[1] Abuthinien, M., Chen, S., Hanzo, L.: Semi-blind joint maximum likelihood channel
estimation and data detection for MIMO systems. IEEE Signal Processing Letters 15,
202–205 (2008)
[2] Rasheed, K.: GADO: A genetic algorithm for continuous design optimization. Techni-
cal Report DCS-TR-352, Department of Computer Science, Rutgers University. Ph.D.
Thesis (1998)
[3] Ong, Y.S., Nair, P.B., Keane, A.J., Wong, K.W.: Surrogate-Assisted Evolutionary Opti-
mization Frameworks for High-Fidelity Engineering Design Problems. In: Jin, Y. (ed.)
Knowledge Incorporation in Evolutionary Computation. Studies in Fuzziness and Soft
Computing, pp. 307–332. Springer, Heidelberg (2004)
[4] Schwefel, H.-P.: Evolution and Optimum Seeking. Wiley, Chichester (1995)
[5] Chafekar, D., Shi, L., Rasheed, K., Xuan, J.: Multi-objective GA optimization using
reduced models. IEEE Trans. on Systems, Man, and Cybernetics: Part C 35(2), 261–265
(2005)
[6] Chung, H.-S., Alonso, J.J.: Multi-objective optimization using approximation model-
based genetic algorithms. Technical report 2004-4325, AIAA (2004)
[25] Williams, C.K.I., Rasmussen, C.E.: Gaussian Processes for regression. In: Touretzky,
D.S., Mozer, M.C., Hasselmo, M.E. (eds.) Advances in Neural Information Processing
Systems, vol. 8. MIT Press, Cambridge (1996)
[26] Emmerich, M., Giotis, A., Özdemir, M., Bäck, T., Giannakoglou, K.: Metamodel-
assisted evolution strategies. In: Guervós, J.J.M., Adamidis, P.A., Beyer, H.-G.,
Fernández-Villacañas, J.-L., Schwefel, H.-P. (eds.) PPSN 2002. LNCS, vol. 2439, pp.
361–380. Springer, Heidelberg (2002)
[27] El-Beltagy, M.A., Keane, A.J.: Evolutionary optimization for computationally expen-
sive problems using Gaussian processes. In: Proceedings of International Conference
on Artificial Intelligence, pp. 708–714. CSREA (2001)
[28] Zhou, Z., Ong, Y.S., Nair, P.B.: Hierarchical surrogate-assisted evolutionary optimiza-
tion framework. In: Congress on Evolutionary Computation, pp. 1586–1593. IEEE, Los
Alamitos (2004)
[29] Ulmer, H., Streichert, F., Zell, A.: Evolution strategies assisted by Gaussian processes
with improved pre-selection criterion. In: Proceedings of IEEE Congress on Evolution-
ary Computation, pp. 692–699 (2003)
[30] Bueche, D., Schraudolph, N.N., Koumoutsakos, P.: Accelerating evolutionary algo-
rithms with Gaussian process fitness function models. IEEE Trans. on Systems, Man,
and Cybernetics: Part C 35(2), 183–194 (2005)
[31] Ulmer, H., Streichert, F., Zell, A.: Model-assisted steady-state evolution strategies. In:
Cantú-Paz, E., Foster, J.A., Deb, K., Davis, L., Roy, R., O’Reilly, U.-M., Beyer, H.-
G., Kendall, G., Wilson, S.W., Harman, M., Wegener, J., Dasgupta, D., Potter, M.A.,
Schultz, A., Dowsland, K.A., Jonoska, N., Miller, J., Standish, R.K. (eds.) GECCO
2003. LNCS, vol. 2723, pp. 610–621. Springer, Heidelberg (2003)
[32] Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press, Ox-
ford (1995)
[33] Graening, L., Jin, Y., Sendhoff, B.: Efficient evolutionary optimization using individual-
based evolution control and neural networks: A comparative study. In: European Sym-
posium on Artificial Neural Networks, pp. 273–278 (2005)
[34] Hong, Y.-S., Lee, H., Tahk, M.-J.: Acceleration of the convergence speed of evolu-
tionary algorithms using multi-layer neural networks. Engineering Optimization 35(1),
91–102 (2003)
[35] Hüscken, M., Jin, Y., Sendhoff, B.: Structure optimization of neural networks for aero-
dynamic optimization. Soft Computing Journal 9(1), 21–28 (2005)
[36] Jin, Y., Hüsken, M., Olhofer, M., Sendhoff, B.: Neural networks for fitness approxima-
tion in evolutionary optimization. In: Jin, Y. (ed.) Knowledge Incorporation in Evolu-
tionary Computation, pp. 281–305. Springer, Berlin (2004)
[37] Papadrakakis, M., Lagaros, N., Tsompanakis, Y.: Optimization of large-scale 3D trusses
using Evolution Strategies and Neural Networks. Int. J. Space Structures 14(3), 211–223
(1999)
[38] Schneider, G.: Neural networks are useful tools for drug design. Neural Networks 13,
15–16 (2000)
[39] Shyy, W., Tucker, P.K., Vaidyanathan, R.: Response surface and neural network tech-
niques for rocket engine injector optimization. Technical report 99-2455, AIAA (1999)
[40] Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines. Cambridge University Press (2000)
[41] Shawe-Taylor, J., Cristianini, N.: Kernel Methods for Pattern Analysis. Cambridge University Press (2004)
[42] Llorà, X., Sastry, K., Goldberg, D.E., Gupta, A., Lakshmi, L.: Combating User Fatigue
in iGAs: Partial Ordering, Support Vector Machines, and Synthetic Fitness. In: Proceed-
ings of the 2005 conference on Genetic and evolutionary computation, pp. 1363–1370
(2005)
[43] Rasheed, K., Ni, X., Vattam, S.: Comparison of Methods for Developing Dynamic Re-
duced Models for Design Optimization. Soft Computing Journal 9(1), 29–37 (2005)
[44] Jin, R., Chen, W., Simpson, T.W.: Comparative studies of metamodeling techniques
under multiple modeling criteria. Technical report 2000-4801, AIAA (2000)
[45] Simpson, T., Mauery, T., Korte, J., Mistree, F.: Comparison of response surface and
Kriging models for multidisciplinary design optimization. Technical report 98-4755,
AIAA (1998)
[46] Carpenter, W., Barthelemy, J.-F.: A comparison of polynomial approximation and arti-
ficial neural nets as response surfaces. Technical report 92-2247, AIAA (1992)
[47] Willmes, L., Baeck, T., Jin, Y., Sendhoff, B.: Comparing neural networks and kriging for
fitness approximation in evolutionary optimization. In: Proceedings of IEEE Congress
on Evolutionary Computation, pp. 663–670 (2003)
[48] Branke, J., Schmidt, C.: Fast convergence by means of fitness estimation. Soft Comput-
ing Journal 9(1), 13–20 (2005)
[49] Rasheed, K., Hirsh, H.: Informed operators: Speeding up genetic-algorithm-based de-
sign optimization using reduced models. In: Proceedings of the Genetic and Evolution-
ary Computation Conference (GECCO 2000), pp. 628–635 (2000)
[50] Biles, J.A.: GenJam: A genetic algorithm for generating jazz solos. In: Proceedings of
International Computer Music Conference, pp. 131–137 (1994)
[51] Zhou, Z.Z., Ong, Y.S., Nair, P.B., Keane, A.J., Lum, K.Y.: Combining Global and Lo-
cal Surrogate Models to Accelerate Evolutionary Optimization. IEEE Transactions on
Systems, Man and Cybernetics - Part C 37(1), 66–76 (2007)
[52] Sefrioui, M., Periaux, J.: A hierarchical genetic algorithm using multiple models for op-
timization. In: Deb, K., Rudolph, G., Lutton, E., Merelo, J.J., Schoenauer, M., Schwefel,
H.-P., Yao, X. (eds.) PPSN 2000. LNCS, vol. 1917, pp. 879–888. Springer, Heidelberg
(2000)
[53] Skolicki, Z., De Jong, K.: The influence of migration sizes and intervals on island mod-
els. In: Proceedings of the 2005 conference on Genetic and evolutionary computation,
pp. 1295–1302 (2005)
[54] Rasheed, K., Hirsh, H.: Learning to be selective in genetic-algorithm-based design op-
timization. Artificial Intelligence in Engineering, Design, Analysis and Manufactur-
ing 13, 157–169 (1999)
[55] Hidović, D., Rowe, J.E.: Validating a model of colon colouration using an evolution
strategy with adaptive approximations. In: Deb, K., et al. (eds.) GECCO 2004. LNCS,
vol. 3103, pp. 1005–1016. Springer, Heidelberg (2004)
[56] Ziegler, J., Banzhaf, W.: Decreasing the number of evaluations in evolutionary algo-
rithms by using a meta-model of the fitness function. In: Ryan, C., Soule, T., Keijzer,
M., Tsang, E.P.K., Poli, R., Costa, E. (eds.) EuroGP 2003. LNCS, vol. 2610, pp. 264–
275. Springer, Heidelberg (2003)
[57] Jin, Y., Branke, J.: Evolutionary optimization in uncertain environments: A survey.
IEEE Transactions on Evolutionary Computation 9(3), 303–317 (2005)
[58] Ziegler, J., Banzhaf, W.: Decreasing the number of evaluations in evolutionary algo-
rithms by using a meta-model of the fitness function. In: Ryan, C., Soule, T., Keijzer,
M., Tsang, E.P.K., Poli, R., Costa, E. (eds.) EuroGP 2003. LNCS, vol. 2610, pp. 264–
275. Springer, Heidelberg (2003)
[59] Lim, D., Ong, Y.S., Jin, Y., Sendhoff, B.: A Study on Metamodeling Techniques, En-
sembles, and Multi-Surrogates in Evolutionary Computation. In: Genetic and Evolu-
tionary Computation Conference, London, UK, pp. 1288–1295. ACM Press, New York
(2007)
[60] Shi, L., Rasheed, K.: ASAGA: An Adaptive Surrogate-Assisted Genetic Algorithm. In:
Genetic and Evolutionary Computation Conference (GECCO 2008), pp. 1049–1056.
ACM Press, New York (2008)
[61] Regis, R.G., Shoemaker, C.A.: Local Function Approximation in Evolutionary Algo-
rithms for the Optimization of Costly Functions. IEEE Transactions on Evolutionary
Computation 8(5), 490–505 (2004)
[62] Zerpa, L.E., Queipo, N.V., Pintos, S., Salager, J.-L.: An Optimization Methodology of
Alkaline-surfactant-polymer Flooding Processes Using Field Scale Numerical Simula-
tion and Multiple Surrogates. Journal of Petroleum Science and Engineering 47, 197–
208 (2005)
[63] Lundström, D., Staffan, S., Shyy, W.: Hydraulic Turbine Diffuser Shape Optimization
by Multiple Surrogate Model Approximations of Pareto Fronts. Journal of Fluids Engi-
neering 129(9), 1228–1240 (2007)
[64] Zhou, Z., Ong, Y.S., Lim, M.H., Lee, B.S.: Memetic Algorithm Using Multi-surrogates
for Computationally Expensive Optimization Problems. Soft Computing 11(10), 957–
971 (2007)
[65] Goel, T., Haftka, R.T., Shyy, W., Queipo, N.V.: Ensemble of Surrogates? Structural and
Multidisciplinary Optimization 33, 199–216 (2007)
[66] Sastry, K., Lima, C.F., Goldberg, D.E.: Evaluation Relaxation Using Substructural In-
formation and Linear Estimation. In: Proceedings of the 8th annual conference on Ge-
netic and Evolutionary Computation Conference (2006)
[67] Torczon, V., Trosset, M.: Using approximations to accelerate engineering design opti-
mization. NASA/CR-1998-208460 (or ICASE Report No. 98-33) (1998)
[68] Pierret, S., Braembussche, R.A.V.: Turbomachinery Blade Design Using a Navier-
Stokes Solver and ANN. Journal of Turbomachinery (ASME) 121(2) (1999)
[69] Goel, T., Vaidyanathan, R., Haftka, R.T., Shyy, W., Queipo, N.V., Tucker, K.: Response
surface approximation of Pareto optimal front in multi-objective optimization. Com-
puter Methods in Applied Mechanics and Engineering (2007)
[70] Knowles, J.: ParEGO: A Hybrid Algorithm with On-Line Landscape Approximation for
Expensive Multiobjective Optimization Problems. IEEE Transactions on Evolutionary
Computation 10(1) (February 2006)
[71] Giannakoglou, K.C.: Design of optimal aerodynamic shapes using stochastic optimiza-
tion methods and computational intelligence. Progress in Aerospace Sciences 38(1)
(2000)
[72] Shyy, W., Papila, N., Vaidyanathan, R., Tucker, K.: Global design optimization for aero-
dynamics and rocket propulsion components. Progress in Aerospace Sciences 37 (2001)
[73] Quagliarella, D., Periaux, J., Poloni, C., Winter, G. (eds.): Genetic Algorithms and Evo-
lution Strategies in Engineering and Computer Science. Recent Advances and Industrial
Applications, ch. 13, pp. 267–288. John Wiley and Sons, West Sussex (1997)
[74] Gelsey, A., Schwabacher, M., Smith, D.: Using modeling knowledge to guide design
space search. In: Fourth International Conference on Artificial Intelligence in Design
1996 (1996)