Surrogate-Based Optimization
Zhong-Hua Han and Ke-Shi Zhang
School of Aeronautics, Northwestern Polytechnical University, Xi’an,
P.R. China
1. Introduction
Surrogate-based optimization (Queipo et al. 2005, Simpson et al. 2008) represents a class of
optimization methodologies that make use of surrogate modeling techniques to quickly find
the local or global optima. It provides us a novel optimization framework in which the
conventional optimization algorithms, e.g. gradient-based or evolutionary algorithms are
used for sub-optimization(s). Surrogate modeling techniques are of particular interest for
engineering design when high-fidelity, thus expensive analysis codes (e.g. Computation
Fluid Dynamics (CFD) or Computational Structural Dynamics (CSD)) are used. They can be
used to greatly improve the design efficiency and be very helpful in finding global optima,
filtering numerical noise, realizing parallel design optimization and integrating simulation
codes of different disciplines into a process chain. Here the term “surrogate model” has the
same meaning as “response surface model”, “metamodel”, “approximation model”,
“emulator” etc. This chapter aims to give an overview of existing surrogate modeling
techniques and issues about how to use them for optimization.
www.intechopen.com
344 Real-World Applications of Genetic Algorithms
the amount of information gained from a limited number of sample points (Giunta et al.,
2001). Existing DoE methods can be classified into two
categories: “classic” DoE methods and “modern” DoE methods. The classic DoE methods,
such as full-factorial design, central composite design (CCD), Box-Behnken and D-Optimal
Design (DOD), were developed for the arrangement of laboratory experiments, with the
consideration of reducing the effect of random error. In contrast, the modern DoE methods
such as Latin Hypercube Sampling (LHS), Orthogonal Array Design (OAD) and Uniform
Design (UD) (Fang et al., 2000) were developed for deterministic computer experiments,
which are free of the random error that arises in laboratory experiments. An overview of the classic
and modern DoE methods was presented by Giunta et al. (2001). A more detailed
description of existing DoE methods is beyond the scope of this chapter.
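As an illustration of the LHS idea, a minimal sampler can be written with only the standard library; the stratified-permutation construction below is the standard LHS recipe (the function name and seed handling are illustrative choices, not taken from any particular library):

```python
import random

def latin_hypercube(n, m, seed=0):
    """Draw n samples in [0, 1]^m by Latin Hypercube Sampling: each axis is
    split into n equal strata, and each stratum is visited exactly once per
    dimension via an independent random permutation."""
    rng = random.Random(seed)
    # one random permutation of the strata indices per dimension
    perms = [rng.sample(range(n), n) for _ in range(m)]
    samples = []
    for i in range(n):
        # place the point uniformly at random inside its stratum
        point = [(perms[d][i] + rng.random()) / n for d in range(m)]
        samples.append(point)
    return samples

X = latin_hypercube(40, 2)  # 40 points in the unit square, as in Figure 1
```

The defining property is easy to check: projecting the 40 points onto either axis hits each of the 40 strata exactly once.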
The schematics of 40 sample points selected by LHS and UD for a two-dimensional problem
are sketched in Figure 1.
Fig. 1. Schematics of 40 sample points selected by LHS (left) and UD (right) for a
two-dimensional problem (axes V1 and V2, each on [0, 1]).
For an m-dimensional problem, suppose we are concerned with predicting the output of a
high-fidelity, and thus expensive, computer code, which corresponds to an unknown
function y : ℝᵐ → ℝ. By running the computer code, y is observed at n sites (determined by
the DoE)
The pair ( S , y_S ) denotes the sampled data set in the vector space.
With the above descriptions and assumptions, our objective here is to build a surrogate
model that predicts the output of the computer code at any untried site x (that is,
estimates y(x)) from the sampled data set ( S , y_S ), in an attempt to achieve the desired
accuracy with the fewest possible sample points.
y(x) = ŷ(x) + ε ,  x ∈ ℝᵐ ,  (3)
where ŷ(x) is the quadratic polynomial approximation and ε is a random error assumed to
be normally distributed with zero mean and variance σ². The errors ε_i at the individual
observations are assumed independent and identically distributed. The quadratic RSM
predictor ŷ(x) is defined as:
ŷ(x) = β₀ + Σ_{i=1}^{m} β_i x_i + Σ_{i=1}^{m} β_ii x_i² + Σ_{i=1}^{m} Σ_{j>i}^{m} β_ij x_i x_j ,  (4)
β = ( U T U )−1 U T y S , (5)
where
U = [ 1  x_1^(1) … x_m^(1)  (x_1^(1))² … (x_m^(1))²  x_1^(1)x_2^(1) … x_{m-1}^(1)x_m^(1)
      ⋮
      1  x_1^(n) … x_m^(n)  (x_1^(n))² … (x_m^(n))²  x_1^(n)x_2^(n) … x_{m-1}^(n)x_m^(n) ] ∈ ℝ^{n×p} .  (6)
After the unknown coefficients in β are determined, the approximated response ŷ at any
untried x can be efficiently predicted by Eq. (4).
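As a sketch of Eqs. (4)-(6), the following Python builds one row of the matrix U per sample site and solves the least-squares problem of Eq. (5) with numpy (the function names are illustrative; `lstsq` is used instead of forming (UᵀU)⁻¹ explicitly, which is numerically safer but algebraically equivalent):

```python
import numpy as np

def quadratic_basis(x):
    # one row of U, Eq. (6): [1, x_i, x_i^2, cross terms x_i*x_j (i < j)]
    x = np.asarray(x, float)
    m = x.size
    cross = [x[i] * x[j] for i in range(m) for j in range(i + 1, m)]
    return np.concatenate(([1.0], x, x ** 2, cross))

def fit_rsm(S, yS):
    # least-squares estimate of beta, Eq. (5)
    U = np.array([quadratic_basis(x) for x in S])
    beta, *_ = np.linalg.lstsq(U, np.asarray(yS, float), rcond=None)
    return beta

def predict_rsm(beta, x):
    # Eq. (4) evaluated at an untried site x
    return float(quadratic_basis(x) @ beta)
```

Fitted to data generated from a quadratic function, the predictor reproduces it exactly; on general data it returns the best quadratic fit in the least-squares sense.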
y(x) = fᵀ(x)β + Z(x) ,  x ∈ ℝᵐ ,  (7)
where f(x) = [ f_0(x), …, f_{p-1}(x) ]ᵀ ∈ ℝᵖ is defined with a set of regression basis functions
and β = [ β₀, …, β_{p-1} ]ᵀ ∈ ℝᵖ denotes the vector of the corresponding coefficients. In general,
fᵀ(x)β is taken as either a constant or low-order polynomials. Practice suggests that the
constant trend function is sufficient for most problems. Thus, fᵀ(x)β is taken as a
constant β₀ in the text hereafter. In Eq. (7), Z(⋅) denotes a stationary random process with
zero mean, variance σ², and nonzero covariance of

Cov[ Z(x), Z(x′) ] = σ² R(x, x′) .  (8)

Here R(x, x′) is the correlation function, which depends only on the distance
between any two sites x and x′ in the design space. In this study, a Gaussian exponential
correlation function is adopted; it is of the form

R(x, x′) = exp( −Σ_{k=1}^{m} θ_k | x_k − x′_k |^{p_k} ) ,  (9)

where θ = [θ₁, θ₂, ..., θ_m]ᵀ and p = [p₁, p₂, ..., p_m]ᵀ denote the vectors of the unknown model
parameters (hyperparameters) to be tuned. The schematic of a Gaussian exponential
correlation function for a one-dimensional problem is sketched in Figure 2.
Fig. 2. Schematics of the Gaussian exponential correlation function for a one-dimensional
problem, plotted against x_i − x (left: p = 2.0 with θ = 0.1, 1.0, 10.0; right: θ = 1.0 with
p = 0.5, 1.0, 2.0).
From the derivation by Sacks et al. (1989) the Kriging predictor ŷ ( x ) for any untried x can
be written as
yˆ ( x ) = β 0 + r T ( x )R −1 ( y S − β0 1) , (10)
β 0 = ( 1T R −1 1)−1 1T R −1 y S , (11)
and 1 ∈ ℝⁿ is a vector filled with ones; R and r are the correlation matrix and the
correlation vector, respectively, defined as

R = [ R(x^(1), x^(1))  R(x^(1), x^(2))  …  R(x^(1), x^(n))
      ⋮
      R(x^(n), x^(1))  R(x^(n), x^(2))  …  R(x^(n), x^(n)) ] ,
r(x) = [ R(x^(1), x), R(x^(2), x), …, R(x^(n), x) ]ᵀ ,  (12)
where R( x( i ) , x( j ) ) denotes the correlation between any two observed points x( i ) and x( j ) ;
R( x( i ) , x ) denotes the correlation between the i-th observed point x( i ) and the untried point
x.
A unique feature of the Kriging model is that it provides an uncertainty estimate (or mean
squared error, MSE) for the prediction, which is very useful for sample-point refinement. It
is of the form

s²(x) = σ² [ 1 − rᵀR⁻¹r + ( 1 − 1ᵀR⁻¹r )² / ( 1ᵀR⁻¹1 ) ] .  (13)
Assuming that the sampled data are distributed according to a Gaussian process, the
responses at the sampling sites are treated as correlated random variables, with the
corresponding likelihood function

L(β₀, σ², θ, p) = [ (2π)ⁿ (σ²)ⁿ |R| ]^{-1/2} exp( −( y_S − β₀1 )ᵀ R⁻¹ ( y_S − β₀1 ) / (2σ²) ) .  (14)

Maximizing Eq. (14) with respect to β₀ and σ² yields their closed-form estimates

β₀(θ, p) = ( 1ᵀR⁻¹1 )⁻¹ 1ᵀR⁻¹y_S ,
σ²(β₀, θ, p) = (1/n) ( y_S − β₀1 )ᵀ R⁻¹ ( y_S − β₀1 ) .  (15)
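To make Eqs. (10), (11), (13), and (15) concrete, here is a compact Python sketch of a Kriging predictor. The hyperparameters θ and p are held fixed rather than tuned by MLE, and the small nugget added to R is a numerical-stability choice, not part of the equations; all names are illustrative:

```python
import numpy as np

def corr(x1, x2, theta, p):
    # Gaussian exponential correlation, Eq. (9)
    d = np.abs(np.asarray(x1, float) - np.asarray(x2, float))
    return np.exp(-np.sum(theta * d ** p))

def kriging_fit(S, yS, theta, p):
    n = len(S)
    R = np.array([[corr(S[i], S[j], theta, p) for j in range(n)] for i in range(n)])
    R += 1e-10 * np.eye(n)                      # tiny nugget for stability
    Rinv = np.linalg.inv(R)
    one = np.ones(n)
    beta0 = (one @ Rinv @ yS) / (one @ Rinv @ one)                 # Eq. (11)
    sigma2 = (yS - beta0 * one) @ Rinv @ (yS - beta0 * one) / n    # Eq. (15)
    return dict(S=S, yS=yS, theta=theta, p=p, Rinv=Rinv,
                beta0=beta0, sigma2=sigma2, one=one)

def kriging_predict(model, x):
    r = np.array([corr(xi, x, model["theta"], model["p"]) for xi in model["S"]])
    y_hat = model["beta0"] + r @ model["Rinv"] @ (
        model["yS"] - model["beta0"] * model["one"])               # Eq. (10)
    u = 1.0 - model["one"] @ model["Rinv"] @ r
    mse = model["sigma2"] * (1.0 - r @ model["Rinv"] @ r
                             + u ** 2 / (model["one"] @ model["Rinv"] @ model["one"]))  # Eq. (13)
    return float(y_hat), max(float(mse), 0.0)
```

At a sampled site the predictor reproduces the observed value and reports (near) zero MSE; away from the samples the MSE grows, which is exactly the behavior the infill criteria of the later sections exploit.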
ŷ(x) = Σ_{i=1}^{n} ω_i φ_i(x) + P(x) ,  (17)

where ω_i is the i-th unknown weight coefficient, φ_i(x) = φ( ‖x^(i) − x‖ ) are the basis
functions that depend on the Euclidean distance between the observed point x^(i) and the
untried point x (similar to the correlation function of the Kriging model), and P(x) is the
global trend function, taken here as a constant β₀. To ensure that the function values at
the observed points are reproduced by the RBF predictor, the following constraints should
be satisfied:
ŷ( x^(i) ) = y^(i) ,  i = 1, …, n .  (18)

Then the additional constraint on P(x) is imposed as

Σ_{i=1}^{n} ω_i = 0 .  (19)
Solving the linear system formed by Eq. (18) and Eq. (19) for ω_i and β₀, and substituting
into Eq. (17), yields the RBF predictor
yˆ ( x ) = β 0 + φT ( x )Ψ −1 ( y S − β 0 1) . (20)
Comparing the above RBF predictor with the Kriging predictor (see Eq. (10)), one
can observe that they are essentially similar; only the basis-function matrix Ψ (also
called the Gram matrix) and the basis-function vector φ(x) differ from the correlation
matrix R and the correlation vector r(x) of the Kriging predictor, respectively. In addition,
RBFs differ from Kriging in two respects: 1) RBFs do not provide an uncertainty
estimate for the prediction; 2) the model parameters cannot be tuned by MLE
as in Kriging. Generally, Kriging can be regarded as a particular form of RBFs.
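A minimal RBF interpolator with a constant trend β₀ can be sketched as follows. The augmented linear system enforces Eq. (18) and Eq. (19) simultaneously, which is algebraically equivalent to the predictor of Eq. (20); the Gaussian basis and the shape constant `c` are illustrative choices:

```python
import numpy as np

def gauss_rbf(r, c=1.0):
    # Gaussian (decaying) basis: phi(r) = exp(-(c*r)^2); c is an assumed shape constant
    return np.exp(-(c * r) ** 2)

def rbf_fit(S, yS, basis=gauss_rbf):
    S = np.asarray(S, float)
    n = len(S)
    dist = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
    Psi = basis(dist)                        # Gram matrix of Eq. (20)
    # augmented system: interpolation (Eq. 18) plus sum(w) = 0 (Eq. 19)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = Psi
    A[:n, n] = 1.0                           # constant trend beta0
    A[n, :n] = 1.0                           # enforces Eq. (19)
    b = np.append(np.asarray(yS, float), 0.0)
    sol = np.linalg.solve(A, b)
    return S, sol[:n], sol[n], basis         # sites, weights w, beta0, basis

def rbf_predict(model, x):
    S, w, beta0, basis = model
    r = np.linalg.norm(S - np.asarray(x, float), axis=-1)
    return float(beta0 + basis(r) @ w)
```

Because the Gaussian basis yields a positive definite Ψ, the system is always solvable for distinct sample sites, and the predictor reproduces the observed values exactly.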
To build an RBF model, one needs to prescribe the type of basis function, which depends
only on the Euclidean distance r = ‖x − x′‖ between any two sites x and x′. Compared with
the correlation functions used for a Kriging model, more choices are available for an RBF
model; some are listed in Table 1.
All the basis functions listed in Table 1 can be classified into two categories: decaying
functions (such as GAUSS and IHMQ) and growing functions (POW, TPS, and HMQ). The
decaying functions yield a positive definite matrix Ψ, which allows Cholesky
decomposition to be used for its inversion; the growing functions generally result in a
non-positive-definite matrix Ψ, so LU decomposition is usually used instead. Schematics
of the basis functions for a one-dimensional problem are sketched in Figure 3.
Fig. 3. Schematics of basis functions for Radial Basis Functions (left: decaying functions;
right: growing functions).
(Smola & Schoelkopf, 2004). Although these methods come from different research
communities, the underlying idea is similar when they are used for function prediction in
surrogate modeling. They are not described in detail here owing to limited space; readers are
referred to the review by Wang & Shan (2007) and the books by Keane & Nair (2005)
and Forrester et al. (2008) for more on surrogate modeling techniques.
The average relative error over a set of test points is defined by

e = (1/n_t) Σ_{i=1}^{n_t} e^(i) ,   e^(i) = | ŷ_t^(i) − y_t^(i) | / | y_t^(i) | ,  (22)
where n_t is the number of test points, and y_t^(i) and ŷ_t^(i) are the true and predicted
values at the i-th test point, respectively. The root mean squared error is defined by
σ_e = √( Σ_{i=1}^{n_t} ( e^(i) )² / n_t ) .  (23)
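These two metrics are straightforward to compute; a plain-Python sketch (the function names are illustrative):

```python
import math

def avg_relative_error(y_true, y_pred):
    # Eq. (22): individual relative errors and their average
    errs = [abs(p - t) / abs(t) for t, p in zip(y_true, y_pred)]
    return sum(errs) / len(errs), errs

def rmse(errs):
    # Eq. (23): root mean square of the individual relative errors
    return math.sqrt(sum(e * e for e in errs) / len(errs))
```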
Consider the general constrained optimization problem

Objective: minimize y(x)
s.t. g_i(x) ≤ 0 , i = 1, …, nc ,  (24)
x_l ≤ x ≤ x_u ,

where nc is the number of state functions, which equals the number of inequality
constraints (assuming that all equality constraints have been transformed into inequality
constraints); x_l and x_u are the lower and upper bounds of the design variables,
respectively; and the objective function y(x) and the state functions g_i(x) are evaluated by
an expensive analysis code. Traditionally, the optimization problem is solved by either a
gradient-based algorithm or a gradient-free algorithm such as a GA. This may become
prohibitive due to the large computational cost associated with running the expensive
analysis code. Alternatively, here we are concerned with using surrogate modeling
techniques to solve the optimization problem, in an attempt to dramatically improve the
efficiency.
(Flowchart of the simple surrogate-based optimization framework: design space → DoE →
sampled database → surrogate models.)
(Flowchart of the bi-level surrogate-based optimization framework: starting from a DoE
over the design space, the samples are simulated, possibly with distributed computing, to
build surrogate models; an optimizer such as a GA performs sub-optimizations on the
surrogates, and the resulting new designs are fed back as new samples to the main
optimization; nested convergence checks terminate the process at the optimum design.)
Objective: minimize ŷ(x)
s.t. ĝ_i(x) ≤ 0 , i = 1, …, nc ,  (25)
x_l ≤ x ≤ x_u ,
where ŷ(x) and ĝ_i(x) are surrogate models of y(x) and g_i(x), respectively. With the
optimal design variables x̂_opt obtained from the surrogate models in hand, one runs the
expensive analysis code to compute the corresponding true function value and compares it
with the value predicted by the surrogate models. If the error between them is below a
threshold, the optimization process can be terminated; if not, the new sample point is added
to the sampled data set, the surrogate models are rebuilt, and the process is repeated until
the optimum solution is approached.
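The loop just described can be sketched in a few lines of Python. The "expensive" function below is a deliberately simple stand-in (a pure quadratic, so the quadratic RSM is exact and the loop terminates after one cycle), and the brute-force grid search plays the role of the sub-optimizer; all names are illustrative:

```python
import numpy as np

def expensive(x):                      # stand-in for the costly analysis code
    return (x - 0.3) ** 2

S = list(np.linspace(0.0, 1.0, 4))     # initial DoE
y = [expensive(x) for x in S]
grid = np.linspace(0.0, 1.0, 2001)     # sub-optimizer: brute-force search

for _ in range(20):
    # rebuild the surrogate (1-D quadratic RSM, Eqs. (4)-(5))
    beta, *_ = np.linalg.lstsq(np.vander(np.asarray(S), 3),
                               np.asarray(y), rcond=None)
    x_opt = float(grid[np.argmin(np.polyval(beta, grid))])  # surrogate optimum
    y_true = expensive(x_opt)          # one expensive call per cycle
    if abs(y_true - np.polyval(beta, x_opt)) < 1e-6:
        break                          # surrogate agrees with the truth: stop
    S.append(x_opt)                    # otherwise augment the sample set
    y.append(y_true)
```

On a multimodal true function the same loop would keep augmenting the data set and rebuilding the surrogate, which is exactly the accuracy limitation of this simple framework noted below.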
This criterion applies to all surrogate models and is very efficient for local exploitation
of the promising region of the design space.
E[I(x)] = { ( y_min − ŷ(x) ) Φ( ( y_min − ŷ(x) ) / ŝ(x) ) + ŝ(x) φ( ( y_min − ŷ(x) ) / ŝ(x) ) ,  if ŝ(x) > 0
          { 0 ,  if ŝ(x) = 0  (26)

where Φ(·) and φ(·) are the cumulative distribution function and probability density
function of the standard normal distribution, respectively, and y_min = Min( y^(1), y^(2), …, y^(n) )
denotes the minimum of the observed data so far. The greater the EI, the more improvement
we expect to achieve. The point with maximum EI is located by a global optimizer such as a
GA and is then observed by running the analysis code. For this infill criterion, the
constraints can be accounted for by introducing the probability that they are satisfied. The
corresponding sub-optimization problem can be modeled as
Objective: maximize E[I(x)] · Π_{i=1}^{nc} P[ G_i(x) ≤ 0 ] ,
s.t. x_l ≤ x ≤ x_u ,  (27)
where P[Gi ( x ) ≤ 0] denotes the probability that i-th constraint may be satisfied and Gi ( x ) is
a random function corresponding to i-th state function gi ( x ) . P[Gi ( x ) ≤ 0] → 1 when the
constraint is satisfied and P[Gi ( x ) ≤ 0] → 0 when the constraint is violated. P[Gi ( x ) ≤ 0] can
be calculated by

P[ G_i(x) ≤ 0 ] = 1/( ŝ_i(x)√(2π) ) ∫_{−∞}^{0} exp( −[ G_i(x) − ĝ_i(x) ]² / (2ŝ_i²(x)) ) dG_i(x) = Φ( −ĝ_i(x) / ŝ_i(x) ) ,  (28)
where sˆi ( x ) denotes the estimated standard error corresponding to the surrogate model
gˆ i ( x ) .
The optimum site x̂_opt obtained by solving Eq. (27) is observed by running the analysis
code, and the new sample point is added to the sampled data set; the surrogate models are
rebuilt and the whole process is repeated until the global optimum is approached.
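Eqs. (26)-(28) translate directly into code; a plain-Python sketch using only the standard library (`math.erf` gives the standard normal CDF; the function names are illustrative):

```python
import math

def normal_pdf(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def normal_cdf(u):
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def expected_improvement(y_min, y_hat, s_hat):
    # Eq. (26): zero where the surrogate is certain (s_hat = 0)
    if s_hat <= 0.0:
        return 0.0
    u = (y_min - y_hat) / s_hat
    return (y_min - y_hat) * normal_cdf(u) + s_hat * normal_pdf(u)

def prob_feasible(g_hat, s_hat):
    # Eq. (28): P[G_i(x) <= 0] = Phi(-g_hat / s_hat)
    if s_hat <= 0.0:
        return 1.0 if g_hat <= 0.0 else 0.0
    return normal_cdf(-g_hat / s_hat)

def constrained_ei(y_min, y_hat, s_hat, g_hats, g_shats):
    # objective of Eq. (27): EI times the product of feasibility probabilities
    p = 1.0
    for g, s in zip(g_hats, g_shats):
        p *= prob_feasible(g, s)
    return expected_improvement(y_min, y_hat, s_hat) * p
```

In practice `constrained_ei` would be evaluated on surrogate predictions (ŷ, ŝ, ĝ_i, ŝ_i) supplied by a Kriging model, with a GA maximizing it over the design space.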
The lower confidence bound (LCB) of the prediction,

LCB(x) = ŷ(x) − A·ŝ(x) ,  (29)

can also serve as an infill criterion, where A is a constant which balances the influence of the
predicted function value and the corresponding uncertainty. Best practice suggests that
A = 1 works well for a number of realistic problems. The corresponding sub-optimization
problem can be modeled as

Objective: minimize ŷ(x) − A·ŝ(x)
s.t. x_l ≤ x ≤ x_u .  (30)

The above optimization problem can be solved via a global optimizer such as a GA. Since the
point with the smallest LCB value indicates the possible minimum of the unknown function,
the optimum site x̂_opt is then observed and added to the sampled data set to refine the
surrogate models. This procedure is performed iteratively until the global optimum is
reached.
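Given surrogate predictions ŷ and error estimates ŝ at a set of candidate points, selecting the LCB infill point is a one-liner; a small numpy sketch under the assumption A = 1 (the function name is illustrative):

```python
import numpy as np

def lcb_infill(x_cand, y_hat, s_hat, A=1.0):
    """Return the candidate minimizing the lower confidence bound y_hat - A*s_hat."""
    lcb = np.asarray(y_hat, float) - A * np.asarray(s_hat, float)
    i = int(np.argmin(lcb))
    return x_cand[i], float(lcb[i])
```

Note how a large ŝ can make a point attractive even when its predicted ŷ is mediocre, which is how the criterion trades off exploitation against exploration.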
Objective: minimize C_d
s.t. (1) Area ≥ 0.99 × Area_0
     (2) C_l ≥ 0.99 × C_l0  (31)
     (3) C_m ≤ C_m0 ,
where Area_0, C_l0, and C_m0 are the area, lift coefficient, and moment coefficient of the
baseline airfoil, respectively. The first constraint reflects the structural design of the wing,
guaranteeing its internal volume; the second enforces a constant lift so that the weight of the
aircraft is balanced at the cruise condition; the third controls the pitching moment of the
airfoil to avoid a large drag penalty on the horizontal tail that would be paid for trimming
the aircraft.
The initial number of samples for Kriging is set to 20, selected by Latin Hypercube
Sampling (LHS). The airfoil is parameterized by 10 Hicks-Henne bump functions (Hicks &
Henne, 1978), and the maximum amplitude of each bump is A_max / c = 0.544%. Both the
SSM and EI infill strategies are adopted in the surrogate refinement. Table 2 presents the
optimization results of the two optimization methods. The optimized and initial airfoils and
the corresponding pressure-coefficient distributions are compared in Figure 7. Note that the
aerodynamic coefficients of the initial airfoil RAE 2822 are normalized to 100. The Kriging-
based optimization method clearly gives the better result with higher efficiency, and is more
likely to find the global optimum.
(Panels compare the pressure-coefficient distributions Cp and the airfoil shapes y/c of the
initial, RSM-optimized, and Kriging-optimized airfoils at Ma = 0.73, Re = 6.5×10⁶, α = 2.7°.)
Fig. 7. Aerodynamic shape optimization of a transonic airfoil (RAE 2822) via Kriging and
quadratic Response Surface Model (left: pressure distribution; right: airfoil shape); using the
Kriging model with the Expected Improvement infill criterion, the drag is reduced by 33.6%
with only 56 calls to the Navier-Stokes flow solver.
Table 2. Drag reduction of an RAE 2822 airfoil via Kriging and RSM-based optimizations
Objective: maximize L/D and minimize W_wing
s.t. L ≥ 54 × 10³ kg
     100 m² ≤ S_wing ≤ 110 m²  (32)
     σ_max ≤ σ_b
     δ_max ≤ 1 m
Eight supercritical airfoils are configured along the span. The optimization is subject to four
constraints. The first is to enforce a constant lift of the wing in order to balance the weight of
the aircraft at the cruise condition; the second guarantees a near-constant wing loading; the
third and fourth make sure that the strength and rigidity requirements are satisfied. The
limits of the design variables are listed in Table 3. The first four design variables define the
aerodynamic configuration of the wing; the remaining four are for the structural design.
Details can be found in the paper by Zhang et al. (2008).
The uniform design table U100(10⁸) is used to create one hundred candidate wings for
building the surrogate models. Another forty-five candidate wings are created by the
uniform design table U45(10⁸) for evaluating the approximation models. For each wing, a
static aeroelastic analysis is performed to obtain the responses of lift (L), lift-to-drag ratio
(L/D), wing area (S_wing), maximum stress (σ_max), maximum deformation (δ_max), and wing weight
(W_wing). Then the average relative errors and the root mean squared errors are calculated
to evaluate the approximation models, as listed in Table 4. In this case, Kriging and RSM
have comparably high accuracy.
The multi-objective optimization of the supercritical wing is then performed based on the
RSM due to its higher computational efficiency. The weighted-sum method is used to
transform the multi-objective optimization into a single-objective one, and sequential
quadratic programming is employed to solve the resulting problem. One of the candidate
wings with better performance is selected as the initial point for the optimization. The
optimal design is verified by running the analysis codes, and the results are listed in
Table 5, where X⁰ and Y⁰ are the initial wing scheme and its response, respectively; X* and
Y* are the optimal wing scheme and its actual response, respectively; and Ŷ is the response
at X* calculated by the approximation models. For the optimal wing scheme, the largest
relative error of the approximation models is no more than 3 percent, which again confirms
their high accuracy.
Figure 8 shows the contour of the equivalent stress of the optimal wing: the stress is larger
at the intersection of the inner and outer wing because of the inflexion there. Figure 9 shows
the pressure distribution of the optimal wing. It shows that the wing
basically meets the design requirements of a supercritical wing. The slight non-smoothness
of the pressure distribution may be caused by non-uniform deformation of the skin. Figure
10 shows the convergence history of the aeroelastic deformation, demonstrating fast
convergence for the optimal wing.
The optimization, together with the aeroelastic analysis of all candidate wings, takes only
about two days on a personal computer with a Pentium 4 2.8 GHz CPU. If more computers
are used to calculate the performance of different candidate wings concurrently, the cost can
be reduced much further.
Fig. 9. Pressure distributions (Cp versus x/c) of the optimal wing at the spanwise stations
z/b = 0.25, 0.40, 0.55, 0.70, 0.85, and 0.95.
Fig. 10. Convergence history of the Y-direction deformation (m) and torsion angle (°) at the
wing tip over the aeroelastic iterations.
5. Conclusion
An overview of existing surrogate models and of the techniques for using them in
optimization has been presented in this chapter. Among the surrogate models, regression
models such as the quadratic response surface model (RSM) are well suited to local
optimization problems with relatively simple design spaces; interpolation models such as
Kriging or RBFs can handle highly nonlinear, multi-modal functions and are thus well
suited to global problems with more complicated design spaces. From an application point
of view, the simple framework of surrogate-based optimization is a good choice for
engineering design, since the surrogate model acts as an interface between the expensive
analysis code and the optimizer, and the analysis code itself does not need to be changed.
The drawback of this framework is that the accuracy of the optimum depends solely on the
approximation accuracy of the surrogate model, so generally only an approximation to the
true optimum is obtained. In contrast, the bi-level framework with different infill criteria
provides an efficient way to find the true optimum quickly without building globally
accurate surrogate models. Multiple infill criteria appear to be a good way to overcome the
drawbacks of any single infill criterion.
Examples of airfoil and wing design show that surrogate-based optimization is very
promising for aerodynamic problems with fewer than about 10 design variables. For
higher-dimensional problems, the computational cost grows very quickly and can become
prohibitive; the use of surrogate models for such problems therefore remains an important
topic for future work.
6. Acknowledgements
This research was sponsored by the National Natural Science Foundation of China (NSFC)
under grant No. 10902088 and the Aeronautical Science Foundation of China under grant
No. 2011ZA53008.
7. References
Elanayar, S. V. T., and Shin, Y. C., “Radial basis function neural network for approximation
and estimation of nonlinear stochastic dynamic systems,” IEEE Transactions on
Neural Networks, Vol. 5, No. 4, 1994, pp. 594-603.
Fang, K. T., Lin, D., Winker, P., Zhang, Y., “Uniform design: Theory and application,”
Technometrics, Vol. 42, No. 3, 2000, pp. 237-248.
Forrester, A. I. J., Sóbester, A., and Keane, A. J., “Multi-Fidelity Optimization via Surrogate
Modeling,” Proceedings of the Royal Society A, Vol. 463, No. 2088, 2007, pp. 3251-
3269.
Forrester, A. I. J., Sóbester, A., and Keane, A., “Engineering Design via Surrogate Modeling:
A Practical Guide,” Progress in Astronautics and Aeronautics Series, 226, published
by John Wiley & Sons, 2008.
Giunta, A. A., Wojtkiewicz Jr, S. F., and Eldred, M. S. , “Overview of Modern Design of
Experiments Methods for Computational Simulations,” AIAA paper 2003-649,
2001.
Han, Z. -H., Zhang, K. -S., Song, W. -P., and Qiao, Z. -D., "Optimization of Active Flow
Control over an Airfoil Using a Surrogate-Management Framework," Journal of
Aircraft, 2010, Vol. 47, No. 2, pp. 603-612.
Han, Z.-H., Görtz, S., Zimmermann, R. “On Improving Efficiency and Accuracy of Variable-
Fidelity Surrogate Modeling in Aero-data for Loads Context,” Proceeding of CEAS
2009 European Air and Space Conference, Manchester, UK, Oct. 26-29, 2009.
Han, Z.-H., Zimmermann, R., and Görtz, S., “A New Cokriging Method for Variable-Fidelity
Surrogate Modeling of Aerodynamic Data,” AIAA Paper 2010-1225, 48th AIAA
Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace
Exposition, Orlando, Florida, Jan. 4-7, 2010.
Hardy, R. L., “Multiquadric Equations of Topography and Other Irregular Surface,” Journal
of Geophysical Research, Vol. 76, March, 1971, pp. 1905-1915.
Hicks, R. M., and Henne, P. A., “Wing Design by Numerical Optimization,” Journal of
Aircraft, Vol. 15, No. 7, 1978, pp. 407–412.
Jeong, S., Murayama, M., and Yamamoto, K., "Efficient Optimization Design Method Using
Kriging Model," Journal of Aircraft, Vol. 42, No. 2, 2005, pp. 413- 420.
Jones, D., Schonlau, M., Welch W., “Efficient Global Optimization of Expensive Black-Box
Functions,” Journal of Global Optimization, 1998, Vol. 13, pp. 455-492.
Keane, A. J., Nair, P. B., “Computational Approaches for Aerospace Design: The Pursuit of
Excellence”, John Wiley & Sons, Ltd, Chichester, 2005.
Kovalev, V.E., Karas, O. V. “Computation of a Transonic Airfoil Flow Considering Viscous
Effects and Thin Separated Regions.” La Recherche Aerospatiale (English Edition)
(ISSN 0379-380X), No. 1, 1991, pp. 1-15.
Krige, D. G., “A Statistical Approach to Some Basic Mine Valuations Problems on the
Witwatersrand,” Journal of the Chemical, Metallurgical and Mining Engineering Society
of South Africa, Vol. 52, No. 6, 1951, pp. 119-139.
Laurenceau, J., and Sagaut, P. “Building Efficient Response Surfaces of Aerodynamic
Functions with Kriging and Cokriging,” AIAA Journal, Vol. 46, No. 2, 2008, pp. 498-
507.
Park, J., and Sandberg, I. W., “Universal Approximation Using Radial-Basis-Function
Networks,” Neural Computation, Vol. 3, No. 2, 1991, pp. 246-257.
Powell, M. J. D. “Radial Basis Functions for Multivariable Interpolation: A Review, ”
Algorithms for Approximation, edited by J. C. Mason and M. G. Cox, Oxford Univ.
Press, New York, 1987, Chap. 3, pp. 141-167.
Queipo, N. V., Haftka, R. T., Shyy, W., Goel, T., Vaidyanathan, R., and Tucker, P. K.,
“Surrogate-based Analysis and Optimization,” Progress in Aerospace Sciences, Vol.
41, 2005, pp. 1-28.
Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P., “Design and Analysis of Computer
Experiments,” Statistical Science, Vol. 4, 1989, pp. 409-423.
Simpson, T. W., Mauery, T. M., Korte, J. J., et al., “Kriging Models for Global Approximation
in Simulation-Based Multidisciplinary Design Optimization”, AIAA Journal, Vol. 39,
No. 12, 2001, pp. 2233-2241.
Simpson, T. W., Toropov, V., Balabanov, V., and Viana, F. A. C., “Design and Analysis of
Computer Experiments in Multidisciplinary Design Optimization: a Review of
How Far We Have Come – or Not,” AIAA Paper 2008-5802, 2008.
Smola, A. J. and Schoelkopf, B., “A tutorial on support vector regression,” Statistics and
Computing, Vol. 14, 2004, pp. 199-222.
Toal, D. J. J., Bressloff, N. W., and Keane, A. J., “Kriging Hyperparameter Tuning Strategies,”
AIAA Journal, Vol. 46, No. 5, 2008, pp. 1240-1252.
Wang, G. G., Shan S., “Review of Metamodeling Techniques in Support of Engineering
Design Optimization,” Journal of Mechanical Design, Vol. 129, No. 4. 2007, pp. 370-
380.
Zhang, K. -S., Han, Z. -H., Li, W. -J., and Song, W. -P., “Coupled Aerodynamic/Structural
Optimization of a Subsonic Transport Wing Using a Surrogate Model,” Journal of
Aircraft, 2008, Vol. 45, No. 6, pp. 2167-2171.
Real-World Applications of Genetic Algorithms
Edited by Dr. Olympia Roeva
ISBN 978-953-51-0146-8
Hard cover, 376 pages
Publisher InTech
Published online 07, March, 2012
Published in print edition March, 2012
How to reference
In order to correctly reference this scholarly work, feel free to copy and paste the following:
Zhong-Hua Han and Ke-Shi Zhang (2012). Surrogate-Based Optimization, Real-World Applications of Genetic
Algorithms, Dr. Olympia Roeva (Ed.), ISBN: 978-953-51-0146-8, InTech, Available from:
http://www.intechopen.com/books/real-world-applications-of-genetic-algorithms/surrogate-based-optimization