
Applied Mathematics and Computation 219 (2013) 7348–7364


A Directed Genetic Algorithm for global optimization


Hsin-Chuan Kuo *, Ching-Hai Lin *
Department of System Engineering and Naval Architecture, National Taiwan Ocean University, No. 2, Beining Rd, Jhongjheng District, Keelung City 202, Taiwan, ROC

Keywords: Directed genetic algorithm; Nelder–Mead's simplex algorithm; Global optimization

Abstract

Within the framework of real-coded genetic algorithms, this paper proposes a directed genetic algorithm (DGA) that introduces a directed crossover operator and a directed mutation operator. The operation schemes of these operators borrow from the reflection and expansion search modes of the Nelder–Mead simplex method. First, the Taguchi method is employed to analyze the influence of the parameters in the DGA. The results show that the parameters of the DGA are highly robust for finding the global optimal solution. Then, several strategies are proposed to enhance the solution accuracy of the DGA. All of the strategies are applied to a set of 30/100-dimensional benchmark functions to demonstrate their superiority over several genetic algorithms. Finally, a cantilevered beam design problem with constraint conditions is used as a practical structural optimization example to demonstrate the very good performance of the proposed method.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

Optimization technology is an approach to solving design problems with N variables. It defines the problem to be solved as an objective function f(X), where the design variable X = {x1, x2, . . . , xN} is a vector with N variables. The search domains of the variables form a search space U, which is the set consisting of all possible X's. The problem may be subject to Ineq inequality-constrained conditions g_i(X), i = 1, . . . , Ineq. In the search space U, the function f(X) has a global optimal solution that meets all of the constrained conditions. For example, if the objective of a problem is to search for an X* ∈ U so that f(X*) is the global optimal value, then the mathematical model of the problem can be defined as follows:

\[
\min_{X \in U} f(X) = f(X^{*}) \quad \text{with } g_i(X) \le 0, \quad i = 1, \ldots, I_{neq}. \tag{1}
\]

Along with constant progress in computer science and technology, innovative thinking has led to the proposition of new
computing methods. Studies on optimization technology have leaped from classical mathematical programming approaches
to new methods that imitate the human genetic code, cite animal behavioral rules from natural ecology, or refer to devel-
opment mechanisms of human culture in the social sciences to develop evolutionary computational algorithms that have
high accuracy and efficiency.
An optimization algorithm, a search procedure used to solve problems, is a method based on certain concepts and mech-
anisms for finding a solution through a fixed process. There are two frequently used optimization algorithms. (1) Mathemat-
ical programming approaches: These include traditional algorithms, such as linear programming, nonlinear programming,
integer programming, and dynamic programming. All of these algorithms are able to search out the local optimal solutions
to a problem rapidly. (2) Heuristic Algorithms: Since the 1970s, a large number of researchers have adopted such concepts as
imitating natural ecology, the humanities and social sciences, music, and electromagnetic attraction and repulsion to

* Corresponding authors.
E-mail addresses: [email protected], [email protected] (H.-C. Kuo), [email protected] (C.-H. Lin).

0096-3003/$ - see front matter © 2012 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.amc.2012.12.046

propose innovative and executable computing algorithms for solving problems. Examples include the Genetic Algorithm
(GA) [1] presented by Holland in 1975; the Particle Swarm Optimization (PSO) [2] published by Kennedy in 1995; the
Ant Colony Optimization (ACO) [3] introduced by Dorigo in 1995; the Differential Evolution Algorithm (DE) [4] proposed
by Storn in 1997; the Swarm Intelligence [5,6] presented by Bonabeau in 1999; and the ElectroMagnetic-like algorithm
(EM) [7] described by Birbil in 2003.
In 1975, Holland proposed the genetic algorithm by imitating Darwin’s evolutionary theory – survival of the fittest. Later
in the same year, De-Jong [8] applied the GA to optimization problems. In 1989, Goldberg [9] made the GA a popular and
widely used algorithm. By imitating the mode of the biological gene string, he used 0/1 binary codes to encode variables,
gave the binary gene string a corresponding fitness value according to certain criteria, and adopted the population-based
evolution. In the evolutionary process, the rule of ‘‘survival of the fittest’’ was applied to individuals by performing simple
operations on gene-string codes. The operators that were used to generate new population consisted of a selection operator,
a crossover operator, and a mutation operator. The crossover operation created a new individual by mixing gene strings,
while the mutation operation modified the genes in the gene string to provide the population with sufficient proliferation
or diversification. With no constraints on the search space and no need to solve the derivative, such an algorithmic mech-
anism was suitable for parallel computing. Then, in 1991, Goldberg [10] used the Bit-string GA (BGA) to explore the issue of
global optimization. He found that the BGA had a robust evolutionary capability for finding the global optimal solution, but it consumed too much computing time and performed poorly on high-dimensional or high-accuracy problems. Furthermore, there was the so-called Hamming-cliff problem: the Hamming distance between two adjacent decimal integers, such as 31 and 32, becomes very large when the integers are expressed in binary codes, which are 01111 and 10000, respectively.
To improve the BGA's weakness in addressing continuous-variable optimization problems, the real-coded genetic algorithm (RGA) [10,11] adopts a real-coded variable model. There are two main developments regarding the crossover and mutation operations of the RGA. (1) The crossover operation. In 1991, Wright [12] first used the heuristic mating operation model to randomly create a new generation; in 1992, Michalewicz [13] proposed the arithmetical crossover; in 1993, Eshelman and Schaffer [14] published the BLX-α crossover; in 1997, Ono and Kobayashi [15] introduced the unimodal normal distribution crossover (UNDX), which belongs to the mean-centric crossovers; in 2006, Deb et al. came up with the parent-centric crossover (PCX) [16]; and in 2007, Deep and Thakur proposed the Laplace crossover (LX) [17], which is another crossover of the PCX type. (2) The mutation operation. This operation includes the random mutation (RM) [18] and the non-uniform mutation (NUM) [19]. Among them, the NUM proposed by Michalewicz in 1992 is the most widely used; in 2010, Gong et al. proposed a linear map-based mutation scheme for real-coded genetic algorithms [20]. Both the BGA and the RGA use crossover and mutation operators to generate the new individuals of the next generation; these operators are usually regarded as playing the roles of intensification and diversification in the evolutionary process, respectively.
The performance of population-based algorithms, such as the GA, PSO, and DE, depends on whether the population is capable of wide exploration and deep exploitation. This dependence indirectly reflects whether the mutual promotion, mutual transformation, and reciprocal advance between the two characteristics of diversification and intensification can achieve a dynamic balance. Although population-based algorithms are able to find the global optimal solution and are often applied to multimodal optimization problems [21,22], in cases where there are many similar local optima near the global optimal solution, or where the region close to the global optimal solution is long and narrow, their solution accuracy and success rates are relatively low, or the methods even become unusable. In recent years, two types of algorithms have been proposed for improvement. (1) The hybrid local search method. In 2003, Chelouah et al. [23] announced the Continuous Hybrid Algorithm (CHA), which runs the real-coded genetic algorithm (RGA) first and then adopts the Nelder–Mead simplex method (NMSM) [24] to continue the search by using the RGA's result as the starting point. (2) The strategy-based method. Adnan and Akin [25] proposed the memory particle swarm optimization (MPSO) algorithm, which stores the information of the particle swarm in an external memory during the evolutionary process of the PSO [2], thus increasing the vitality of the population. The algorithm then conducts a local search among the best particles of the current population to improve the solution accuracy.
In light of these advances, directed crossover and mutation operators, borrowing from the reflection and expansion mechanisms of the NMSM, are introduced into the real-coded genetic framework; the resulting algorithm is referred to as the directed genetic algorithm (DGA). First, two benchmark problems, one unimodal and one multimodal, are examined, and the parameter analysis of the DGA is performed using the Taguchi method [26]. Then, four strategies for the DGA are proposed to improve the solution accuracy, tested through six benchmark functions.
Lastly, the proposed DGA is applied to the optimization design of a structural engineering problem with constraint conditions. The mathematical form of the constrained optimization problem (COP) is expressed as follows:
\[
\text{Minimize } f(x), \tag{2}
\]
\[
\text{subject to } h_i(x) = 0, \quad i = 1, \ldots, I_{eq}, \tag{3}
\]
\[
g_j(x) \le 0, \quad j = 1, \ldots, I_{neq}. \tag{4}
\]

When addressing the COP, this paper adopts the penalty function method as follows:

\[
\phi(x) = f(x) + R\left( \sum_{i=1}^{I_{eq}} \left[h_i(x)\right]^2 + \sum_{j=1}^{I_{neq}} \max\left[0, g_j(x)\right] \right), \tag{5}
\]

where Ieq and Ineq are the numbers of the equality-constrained conditions and the inequality-constrained conditions, respectively, and R is the penalty factor.
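The penalized objective of Eq. (5) can be sketched in Python as follows (a minimal illustration; the function and parameter names are ours, not from the paper):

```python
def penalty(f, h_list, g_list, R):
    """Penalized objective: phi(x) = f(x) + R*(sum h_i(x)^2 + sum max(0, g_j(x)))."""
    def phi(x):
        eq_term = sum(h(x) ** 2 for h in h_list)          # equality violations, squared
        ineq_term = sum(max(0.0, g(x)) for g in g_list)   # inequality violations
        return f(x) + R * (eq_term + ineq_term)
    return phi

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
phi = penalty(lambda x: x ** 2, [], [lambda x: 1.0 - x], R=1000.0)
```

A feasible point such as x = 2 is charged only f(x) = 4, while an infeasible point such as x = 0 is charged the penalty R·1 = 1000, steering the search back into the feasible region.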

2. Nelder–Mead’s simplex method (NMSM)

The Nelder–Mead’s simplex method [24] is based on two concepts: to determine the search direction and to replace the
worst point in the simplex algorithm. The process of replacing the worst point consists of four types of search schemes:
reflection, expansion, contraction, and shrinkage. The process steps of the simplex method are given below.
(1) Initialization process. Assign a dimension n, a convergence limit ε, a reflection coefficient ρ, an expansion coefficient χ, a contraction coefficient γ, and a shrinkage coefficient σ. Generate the initial simplex: given the starting point X1 in the n-dimensional space, find the n remaining vertices Xj, each one step length h away from X1 along one coordinate axis:

\[
X_j = X_1 + h \cdot e_{j-1}, \quad j = 2, 3, \ldots, n + 1,
\]

where e_i is the unit vector of the i-th coordinate axis and h is the step length, which is usually between 0.5 and 15.0.
(2) Assessment process. Calculate the objective function values of these n + 1 vertices and sort them in ascending order, so that f(X_{n+1}) ≥ · · · ≥ f(X_2) ≥ f(X_1). Store the point with the smallest objective function value, X_1, as the current best record.
(3) Simplex generation. Make use of reflection, expansion, contraction, and shrinkage to generate a new vertex.

A. Reflection. Compute the reflection point Xr with the expression \(X_r = X_c + \rho(X_c - X_{n+1})\), where \(X_c = \sum_{i=1}^{n} X_i / n\). Evaluate f(Xr). If \(f(X_1) \le f(X_r) < f(X_n)\), replace X_{n+1} with X_r.
B. Expansion. If the objective function value f(Xr) computed at the reflection point Xr is smaller than the current best record f(X1), then the search direction is correct. Continue to expand along the ray through Xc and Xr to the expansion point \(X_e = X_c + \chi(X_r - X_c)\). Evaluate f(Xe). If \(f(X_e) < f(X_r)\), replace X_{n+1} with X_e; otherwise, replace X_{n+1} with X_r.
C. Contraction. If \(f(X_r) \ge f(X_n)\), perform a contraction. If \(f(X_n) \le f(X_r) < f(X_{n+1})\), compute the contraction point \(X_s = X_c + \gamma(X_r - X_c)\); if \(f(X_s) \le f(X_r)\), replace X_{n+1} with X_s; otherwise, perform the shrinkage. If \(f(X_r) \ge f(X_{n+1})\), compute the other contraction point \(X_s' = X_c + \gamma(X_{n+1} - X_c)\); if \(f(X_s') < f(X_{n+1})\), replace X_{n+1} with X_s'; otherwise, perform the shrinkage.
D. Shrinkage. Calculate new vertices \(X_j' = X_1 + \sigma(X_j - X_1)\) to replace \(X_j,\ j = 2, \ldots, n + 1\).

(4) Termination criterion. Check whether the convergence criterion is met; if so, stop searching and take the best point as the optimal solution; otherwise, go to Step (2). The convergence criterion is the following:

\[
\left\{ \frac{1}{n+1} \sum_{j=1}^{n+1} \left[f(X_j) - f(X_c)\right]^2 \right\}^{1/2} \le \varepsilon.
\]
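The four search schemes above can be collected into a compact sketch (Python with NumPy; a simplified rendering of this section that merges the two contraction cases, with the common defaults ρ = 1, χ = 2, γ = σ = 0.5 assumed):

```python
import numpy as np

def nelder_mead(f, x1, h=0.5, rho=1.0, chi=2.0, gamma=0.5, sigma=0.5,
                eps=1e-6, max_iter=500):
    """Minimal Nelder-Mead sketch: reflection, expansion, contraction, shrinkage."""
    n = len(x1)
    # Initial simplex: x1 plus one step of length h along each coordinate axis.
    simplex = [np.asarray(x1, dtype=float)]
    for i in range(n):
        v = simplex[0].copy()
        v[i] += h
        simplex.append(v)
    for _ in range(max_iter):
        simplex.sort(key=f)                       # ascending: best vertex first
        fvals = [f(v) for v in simplex]
        xc = np.mean(simplex[:-1], axis=0)        # centroid of the n best vertices
        fc = f(xc)
        # Termination criterion: rms deviation of vertex values from f(xc).
        if np.sqrt(np.mean([(fv - fc) ** 2 for fv in fvals])) <= eps:
            break
        xr = xc + rho * (xc - simplex[-1])        # reflection of the worst vertex
        fr = f(xr)
        if fvals[0] <= fr < fvals[-2]:
            simplex[-1] = xr
        elif fr < fvals[0]:                       # expansion
            xe = xc + chi * (xr - xc)
            simplex[-1] = xe if f(xe) < fr else xr
        else:                                     # contraction (outside/inside merged)
            xs = xc + gamma * ((xr if fr < fvals[-1] else simplex[-1]) - xc)
            if f(xs) < min(fr, fvals[-1]):
                simplex[-1] = xs
            else:                                 # shrinkage toward the best vertex
                simplex = [simplex[0]] + [simplex[0] + sigma * (v - simplex[0])
                                          for v in simplex[1:]]
    return min(simplex, key=f)

# Usage: minimize a 2-D quadratic bowl from the origin.
x = nelder_mead(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```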

3. Directed genetic algorithm

Selection, crossover, and mutation are the stochastic operators that generate new generations in the above-mentioned real-coded genetic algorithms. The course of socio-cultural evolution, however, is not a phenomenon of aimless, arbitrary development; it moves towards a clear goal. Such a process of moving towards a higher-level spiritual and mental state is a common goal of the evolution of social species [27]. Based on this view of socio-cultural evolution, this research proposes a directed crossover operator and a directed mutation operator that borrow from the reflection and expansion schemes of the Nelder–Mead simplex method. The resulting Directed Genetic Algorithm (DGA) is outlined in the rest of this section.

3.1. Stochastic operations

3.1.1. Crossover operator


Two parent individuals, n-dimensional P1 and P2, are randomly selected from the mating pool:

\[
P_1 = (p_{11}, p_{12}, \ldots, p_{1n}), \qquad P_2 = (p_{21}, p_{22}, \ldots, p_{2n}).
\]


(1) Wright’s Heuristic Crossover (HX) [12].
Let P1 be the best among the two parent individuals P1 and P2. The child individual is generated as follows:

\[
s_i = p_{1i} + r(p_{1i} - p_{2i}), \quad i = 1, \ldots, n, \tag{6}
\]

where r ∈ [0, 1] is a random number with a uniform distribution.
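Eq. (6) can be sketched in one line of Python (with r ∈ [0, 1], the child lies on the extension of the segment from P2 through the better parent P1):

```python
import random

def heuristic_crossover(p1, p2):
    """Wright's heuristic crossover: p1 is the better parent; the child is
    pushed from p1 further away from p2 along the p2 -> p1 direction (Eq. (6))."""
    r = random.random()  # uniform in [0, 1]
    return [x1 + r * (x1 - x2) for x1, x2 in zip(p1, p2)]
```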
(2) Laplace Crossover (LX) [17]
The LX uses the Laplace distribution and belongs to the parent-centric operators. The probability density function of the
Laplace distribution is the following:
 
\[
P(x) = \frac{1}{2b} \exp\left(-\frac{|x - a|}{b}\right), \quad -\infty < x < \infty. \tag{7}
\]

Two offspring are generated as follows:

\[
s_{1i} = p_{1i} + \beta\,|p_{1i} - p_{2i}|, \qquad s_{2i} = p_{2i} + \beta\,|p_{1i} - p_{2i}|, \tag{8}
\]

where the random number β is generated by the inverse distribution function of the Laplace distribution:

\[
\beta = \begin{cases} a - b \ln(r), & r \le \tfrac{1}{2}, \\ a + b \ln(r), & r > \tfrac{1}{2}, \end{cases} \tag{9}
\]

where r ∈ [0, 1] is a random number with a uniform distribution.
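Eqs. (8)-(9) can be sketched in Python (the defaults a = 0 and b = 0.5 are our illustrative choice; note that both offspring share the same displacement β|p1i − p2i|):

```python
import math
import random

def laplace_crossover(p1, p2, a=0.0, b=0.5):
    """Laplace crossover (LX): both offspring are displaced from their parents
    by beta*|p1_i - p2_i|, with beta drawn via the inverse Laplace CDF (Eq. (9))."""
    r = random.random()
    beta = a - b * math.log(r) if r <= 0.5 else a + b * math.log(r)
    d = [abs(x1 - x2) for x1, x2 in zip(p1, p2)]
    s1 = [x1 + beta * di for x1, di in zip(p1, d)]
    s2 = [x2 + beta * di for x2, di in zip(p2, d)]
    return s1, s2
```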

3.1.2. Mutation operator


A variable M ∈ [xl, xu] is randomly selected from an individual P = (p1, . . . , M, . . . , pn) for the mutation operation, where xl and xu are the lower and upper bounds of M, respectively.
(1) Non-Uniform Mutation (NUM) [13]
The mutation interval of the variable M decreases as the ratio a between the number of completed generations G and the generation limit Gmax increases. The mutated value M′ is computed as follows:

\[
M' = \begin{cases} M + \Delta(G, x_u - M), & \text{if } s = 0, \\ M - \Delta(G, M - x_l), & \text{if } s = 1, \end{cases} \tag{10}
\]

where s is a uniformly distributed random number equal to 0 or 1,

\[
\Delta(G, y) = y\left\{1 - r^{(1 - a)^b}\right\}, \tag{11}
\]

a = G/Gmax, b is a parameter determined by the user, and r is a uniformly distributed random number in the interval [0, 1].
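Eqs. (10)-(11) can be sketched in Python for a single variable (b = 5 is an illustrative default; at G = Gmax the step Δ vanishes, so mutation becomes inert):

```python
import random

def non_uniform_mutation(M, xl, xu, G, Gmax, b=5.0):
    """Non-uniform mutation (NUM): the mutation step shrinks as generation G
    approaches Gmax (Eqs. (10)-(11)); the result stays inside [xl, xu]."""
    a = G / Gmax
    delta = lambda y: y * (1.0 - random.random() ** ((1.0 - a) ** b))
    if random.randint(0, 1) == 0:     # s = 0: move towards the upper bound
        return M + delta(xu - M)
    return M - delta(M - xl)          # s = 1: move towards the lower bound
```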
(2) Mäkinen, Periaux and Toivanen Mutation (MPTM) [29]
A parent individual P = (p1, . . . , pi, . . . , pn) generates a new offspring S = (s1, . . . , si, . . . , sn) as follows. Let r be a uniformly distributed random number with r ∈ [0, 1]. Then

\[
s_i = (1 - \hat{t})\,x_i^l + \hat{t}\,x_i^u, \tag{12}
\]

where

\[
\hat{t} = \begin{cases} t - t\left(\dfrac{t - r}{t}\right)^b, & \text{if } r < t, \\[4pt] t, & \text{if } r = t, \\[4pt] t + (1 - t)\left(\dfrac{r - t}{1 - t}\right)^b, & \text{if } r > t, \end{cases} \tag{13}
\]

\[
t = \frac{p_i - x_i^l}{x_i^u - x_i^l}, \qquad p_i \in [x_i^l, x_i^u].
\]
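Eqs. (12)-(13) can be sketched in Python (mutating every component for simplicity; b = 2 is an illustrative default). Because t̂ stays in [0, 1], the offspring always remains inside the variable bounds:

```python
import random

def mptm_mutation(p, xl, xu, b=2.0):
    """MPTM mutation sketch (Eqs. (12)-(13)): t is p's normalized position in
    [xl, xu]; t_hat perturbs it towards whichever bound r falls beyond."""
    s = []
    for pi, lo, hi in zip(p, xl, xu):
        t = (pi - lo) / (hi - lo)
        r = random.random()
        if r < t:
            t_hat = t - t * ((t - r) / t) ** b
        elif r > t:
            t_hat = t + (1.0 - t) * ((r - t) / (1.0 - t)) ** b
        else:
            t_hat = t
        s.append((1.0 - t_hat) * lo + t_hat * hi)
    return s
```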

3.2. Directed operations

3.2.1. Directed crossover operator


By borrowing from the reflection mechanism of the NMSM, the directed crossover operator selects one individual Xh at random and then selects the m individuals that are better than Xh. The centroid of these m individuals is given by

\[
X_c = \frac{1}{m} \sum_{j=1}^{m} X_j, \tag{14}
\]

where m is the number of individuals that are better than Xh.


The new individual Xn is randomly generated as follows:

\[
X_n = X_c + a_1 \cdot S_c, \tag{15}
\]
\[
S_c = X_c - X_h, \tag{16}
\]

where a1 is either set to the golden-section coefficient 0.618 or taken as a random number between 0 and 1. This operator, which condenses the whole population towards its centroid, is also known as the intensification operator.
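Eqs. (14)-(16) can be sketched in Python for a minimization problem (the list-of-lists representation and the fallback when Xh is already the best individual are our assumptions):

```python
import random

def directed_crossover(population, fitness, alpha1=0.618):
    """Directed crossover (Eqs. (14)-(16)): reflect a randomly chosen
    individual Xh through the centroid Xc of all individuals better than it."""
    xh = random.choice(population)
    fh = fitness(xh)
    better = [x for x in population if fitness(x) < fh]
    if not better:                       # Xh is already the best: nothing to reflect through
        return list(xh)
    m = len(better)
    xc = [sum(coords) / m for coords in zip(*better)]
    return [c + alpha1 * (c - h) for c, h in zip(xc, xh)]
```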

3.2.2. Directed mutation operator


By borrowing from the expansion mechanism of the NMSM, the directed mutation operator randomly selects two individuals from the population and generates a new individual Xn in the direction from the worse individual Xh to the better individual Xg. The following equations are used for this purpose:

\[
X_n = X_g + a_2 \cdot S_m, \tag{17}
\]
\[
S_m = X_g - X_h, \tag{18}
\]

where a2 is a random number with 1 < a2 ≤ 2. This operator is also referred to as the diversification operator.
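Eqs. (17)-(18) can be sketched in Python (`random.uniform(1, 2)` approximates the range 1 < a2 ≤ 2):

```python
import random

def directed_mutation(population, fitness):
    """Directed mutation (Eqs. (17)-(18)): pick two individuals and step
    beyond the better one, away from the worse one, by a factor a2."""
    xa, xb = random.sample(population, 2)
    xg, xh = (xa, xb) if fitness(xa) < fitness(xb) else (xb, xa)
    a2 = random.uniform(1.0, 2.0)        # expansion-like step size
    return [g + a2 * (g - h) for g, h in zip(xg, xh)]
```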
The flowchart of the DGA is shown in Fig. 1. The evolutionary process includes the following steps.
Step 1: Initialization

Fig. 1. Flowchart of the DGA.



Determine the population size Np, the number of replacements Rn = (1 − Er) × Np (where Er is the elite rate), the maximum number of generations, and the probabilities of the directed crossover and mutation operators, pc and pm, which are between 0 and 1. Randomly create the initial population over the n-dimensional search space U by considering a uniform distribution.
Step 2: Evaluation
Calculate the fitness (objective) function value of each individual and then sort these values.
Step 3: Generation of the new population
Judge whether pc and pm have been met to decide which operator should be used to generate the new individuals of the next generation. Use roulette selection to pick individuals. Use the directed crossover operator and the directed mutation operator to generate new individuals. Store the elitist individuals.
Step 4: Termination condition
Check whether the maximum number of generations has been reached; if so, stop the evolutionary process and store the best individual as the optimum solution. Otherwise, check whether the required number of new individuals has been generated; if so, return to Step 2; otherwise, return to Step 3.
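Steps 1-4 can be combined into a compact sketch of the DGA loop (Python; a simplification of Fig. 1 that keeps elitism and the two directed operators but omits roulette selection, with boundary clipping and default rates chosen by us):

```python
import random

def dga(f, bounds, np_=40, er=0.2, pc=0.7, gmax=300):
    """Compact DGA loop sketch (minimization): elites survive, the rest of the
    population is refilled by directed crossover / directed mutation."""
    rand_pt = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    pop = sorted((rand_pt() for _ in range(np_)), key=f)   # Step 1 + Step 2
    elites = max(1, int(er * np_))
    for _ in range(gmax):
        new = pop[:elites]                                 # keep elitist individuals
        while len(new) < np_:                              # Step 3
            if random.random() < pc:                       # directed crossover
                xh = random.choice(pop)
                fh = f(xh)
                better = [x for x in pop if f(x) < fh] or [pop[0]]
                xc = [sum(v) / len(better) for v in zip(*better)]
                child = [c + 0.618 * (c - h) for c, h in zip(xc, xh)]
            else:                                          # directed mutation
                xa, xb = random.sample(pop, 2)
                xg, xh = (xa, xb) if f(xa) < f(xb) else (xb, xa)
                a2 = random.uniform(1.0, 2.0)
                child = [g + a2 * (g - h) for g, h in zip(xg, xh)]
            # clip the child back into the search space
            child = [min(max(v, lo), hi) for v, (lo, hi) in zip(child, bounds)]
            new.append(child)
        pop = sorted(new, key=f)                           # Step 2 again
    return pop[0]                                          # Step 4: best individual

# Usage: minimize the 2-D sphere function over [-5, 5]^2.
best = dga(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```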

3.3. Sequential DGAs (SDGAs)

To improve solution accuracy of the DGA, this paper proposes three algorithms: Sequential DGA (SDGA), Sequential DGA
Excluding Memory (SDGAMEX), and Sequential DGA Including Memory (SDGAMIN). These three algorithms are described in
the subsections that follow.

3.3.1. Sequential DGA (SDGA)


When the population has reached its mature stage in the search space and has lost its activity, the SDGA stops the evolution. Taking the final best solution as the center, the SDGA generates a new search space based on this center, reinitializes the population, and then continues to perform the DGA over this new search space. The basic steps are the following:

(1) Convergence criterion. When the average distance between the best individual and the other individuals of the population differs by less than 1% from its value in the previous generation, and this situation persists through Gs generations, the population is considered to have reached a stable convergence. In this paper, the parameter Gs was set equal to 50.
(2) Definition of the new search space. Take the best individual as the center and a radius r to form a hyper-sphere as the new search space U′, as shown in Fig. 2.
(3) Re-initialization of the population. Reinitialize the population over the new search space U′, as shown in Fig. 2.
(4) Execution of the DGA. Redo the DGA with the new initial population over the new search space U′.
(5) Stopping criterion. When the DGA approaches convergence again, check whether the best solution falls in the search space. If so, terminate the evolution; otherwise, jump to Step (2).
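The re-initialization of Steps (2)-(3) can be sketched as follows (Python; rejection sampling inside the hyper-sphere is our choice, practical for low dimensions):

```python
import random

def resample_sphere(center, radius, np_):
    """SDGA re-initialization sketch: draw np_ points uniformly inside a
    hyper-sphere of the given radius around the current best solution."""
    n = len(center)
    pts = []
    while len(pts) < np_:
        # Rejection sampling from the bounding cube (acceptance ~ pi/4 in 2-D).
        v = [random.uniform(-radius, radius) for _ in range(n)]
        if sum(x * x for x in v) <= radius * radius:
            pts.append([c + x for c, x in zip(center, v)])
    return pts
```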

3.3.2. Sequential DGA excluding memory (SDGAMEX)


This evolutionary procedure, named the sequential DGA excluding memory (SDGAMEX), is similar to the SDGA except for Step (3): the initial population does not include the previous best individual. In the SDGAMEX, the initial population is generated over the new search space U′, which is formed by taking the best individual as the center with a radius r, but the best individual itself is excluded (see Fig. 3). The best individuals found in each DGA run are then compared, and the best of them is taken as the global optimal solution.

3.3.3. Sequential DGA including memory (SDGAMIN)


The SDGAMIN is similar to the SDGAMEX except for Step (3): the initial population is created over the new search space and includes the best individual found in the previous DGA run (see Fig. 4).

Fig. 2. Schematic diagram illustrating the new initial population of SDGA (legend: newly generated population; the optimal solution; average population distance).



Fig. 3. Schematic diagram illustrating the new initial population of SDGAMEX (legend: newly generated population; the optimal solution; average population distance).

Fig. 4. Schematic diagram illustrating the new initial population of SDGAMIN (legend: newly generated population; the optimal solution; average population distance; reserved optimal solution).

3.4. Local search

After the DGA is finished, Nelder–Mead's simplex method is adopted to improve the solution accuracy. The parameters of this local search are the convergence limit ε = 10⁻⁶, the reflection coefficient ρ = 1, the expansion coefficient χ = 2, the contraction coefficient γ = 0.5, and the shrinkage coefficient σ = 0.5.

4. Results and discussion

To examine the DGA's evolution characteristics, this paper first investigates two functions: the unimodal Rosenbrock function and the multimodal Griewank function. The Taguchi method [26,28] is used for the parameter analysis of the DGA, probing how the parameters affect the search characteristics, measuring each parameter's degree of influence, and identifying an appropriate parameter combination.
This paper then tests the DGAs and SDGAs on six benchmark functions and compares their results with other results in the literature. In the end, the proposed algorithm is applied to a structural optimization problem.

4.1. Benchmark problems [30]

4.1.1. Unimodal problems


(1) Sphere: \(F_1(X) = \sum_{i=1}^{n} x_i^2\).
(2) Rotated hyper-ellipsoid: \(F_2(X) = \sum_{i=1}^{n} \left(\sum_{j=1}^{i} x_j\right)^2\).
(3) Rosenbrock: \(F_3(X) = \sum_{i=1}^{n-1} \left[100\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right]\).

The three unimodal functions (F1(X), F2(X), and F3(X)) are sorted by their search difficulty: F1(X) < F2(X) < F3(X).
4.1.2. Multimodal problems

(4) Ackley: \(F_4(X) = -20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e\).
(5) Griewank: \(F_5(X) = \tfrac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1\).
(6) Rastrigin: \(F_6(X) = \sum_{i=1}^{n} \left(x_i^2 - 10\cos(2\pi x_i) + 10\right)\).
2

The Ackley function has one global optimal solution, which is located in a narrow basin; adjacent to this global optimal solution are many local optima. Both the Rastrigin and the Griewank functions have a large number of highly similar local optima. Thus, these cases are considered complex multimodal problems that are more difficult to solve [30]. The global optimal solutions X* of the six benchmark functions are X* = [0, . . . , 0], except for F3(X), whose solution is X* = [1, . . . , 1]. All of the optimal function values are 0.
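The six benchmarks can be written directly from their definitions (Python sketch; each function takes a list x of arbitrary dimension n):

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def rotated_hyper_ellipsoid(x):
    return sum(sum(x[: i + 1]) ** 2 for i in range(len(x)))

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def ackley(x):
    n = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
            - math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
            + 20.0 + math.e)

def griewank(x):
    prod = 1.0
    for i, v in enumerate(x, start=1):
        prod *= math.cos(v / math.sqrt(i))
    return sum(v * v for v in x) / 4000.0 - prod + 1.0

def rastrigin(x):
    return sum(v * v - 10.0 * math.cos(2 * math.pi * v) + 10.0 for v in x)
```

Evaluating each function at its stated global optimum returns 0, a quick sanity check on the definitions.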

4.1.3. Evolution characteristics (intensification/diversification) of DGA


To observe the DGA's intensification and diversification characteristics across generations, two average distance ratios, d̄pg and d̄pa, are computed in every generation and examined through the two 30-dimensional benchmark functions F3(X) and F5(X). The ratio d̄pg compares dpg with L, where dpg is the average distance between each individual and the best solution (the best individual) of the population in the current generation, and L is the length of the diagonal of the search space formed by the variable bounds. The ratio d̄pa is defined similarly, except that the best solution is replaced with the global optimal solution. Their mathematical expressions are as follows:

\[
\bar{d}_{pg} = d_{pg}/L \times 100\%, \quad \text{where } d_{pg} = \frac{\sum_{j=1}^{N_p} |X_j - S_{best}|}{N_p},
\]

and Xj and Sbest are the j-th individual's position and the best solution of the population in the current generation, respectively;

\[
\bar{d}_{pa} = d_{pa}/L \times 100\%, \quad \text{where } d_{pa} = \frac{\sum_{j=1}^{N_p} |X_j - S^{*}|}{N_p},
\]

and S* is the global optimal solution;

\[
L = \sqrt{\sum_{i=1}^{n} \left(x_i^u - x_i^l\right)^2},
\]

where x_i^u and x_i^l are the upper and lower bounds of variable i, respectively.
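Both ratios reduce to one helper that differs only in the reference point (Python sketch; the Euclidean norm is assumed for |Xj − S|):

```python
import math

def avg_distance_ratio(population, ref, bounds):
    """Average-distance ratio: mean Euclidean distance from each individual to
    a reference point (best-so-far for dpg, global optimum for dpa), as a
    percentage of the search-space diagonal L."""
    d = sum(math.dist(x, ref) for x in population) / len(population)
    L = math.sqrt(sum((hi - lo) ** 2 for lo, hi in bounds))
    return 100.0 * d / L
```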

When the DGA is applied to optimizing F3(X) and F5(X), the histories of d̄pg and d̄pa are shown in Figs. 5–8. Curves Nos. 1–5 in each figure correspond to five independent runs of the DGA. They are described as follows:
(1) For F3(X)
As shown in Fig. 5, in less than 100 generations, the dpg, the average distance ratio between the best solution and all indi-
viduals of the population, becomes close to 0%, which indicates that all individuals of the population have gathered closely
around the best solution. In other words, the population has reached its mature stage and has lost its activity. Then, in Fig. 6,
the dpa, which is the average distance ratio between the global optimal solution and all individuals, shows the same trend;
however, its value falls to approximately 2%, which reveals the population’s failure to congregate near the global optimal
solution.
(2) For F5(X)
There exist a large number of local optima in the proximity of this function’s global optimal solution. As shown in Figs. 7
and 8, both the dpg and the dpa require more generations to converge, but the DGA is finally able to search out a solution that
is close to the global optimal solution. The oscillation of the data in Fig. 7 illustrates that the DGA has an outstanding ability
to escape from the local optimum regions.

4.1.4. Parameters analysis of the DGA on 30-dimensional F3(X) and F5(X)


In this paper, Dr. Genichi Taguchi's experimental design method (the Taguchi method) [26] is used to analyze how the selection of the DGA parameters modulates the evolution characteristics of the population. The Taguchi method is frequently adopted to determine the best combination of product parameters and the degree of influence of each parameter on the product quality. The experimental design uses an orthogonal array to obtain information. In this paper, the orthogonal array L25(5^6) [26] is selected to illustrate the analysis of the DGA with four parameters, each having five levels. The symbol L of L25(5^6) refers to the Latin square, and the subscript 25 denotes that 25 experiments (or parameter combinations) are performed. Each experimental result is viewed as the response (or objective) value of the problem. The notation (5^6) means that the array accommodates up to six parameters, each with five variant values (named levels), in the experimental

Fig. 5. The history of d̄pg of F3(X) for five independent runs (axes: d̄pg (%) vs. generation number).

Fig. 6. The history of d̄pa of F3(X) for five independent runs (axes: d̄pa (%) vs. generation number).

Fig. 7. The history of d̄pg of F5(X) for five independent runs (axes: d̄pg (%) vs. generation number).

Fig. 8. The history of d̄pa of F5(X) for five independent runs.

Table 1
Levels assigned to DGA parameters in the parameter analysis.

Level pc pm Np Rn
1 0.1 0.9 20 0.7
2 0.3 0.7 40 0.6
3 0.5 0.5 60 0.5
4 0.7 0.3 80 0.4
5 0.9 0.1 100 0.3

problem. A simple statistical method is then used to determine each parameter’s best level (the level that results in the best
average response) and a ranking of the degree of influence of the parameters can be obtained.
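The per-level averaging described above is a small grouping computation (Python sketch; `levels` holds one parameter's level per experiment, and `responses` holds the matching experiment results):

```python
def level_responses(levels, responses):
    """Taguchi-style response table for one parameter: average the responses
    of all experiments that share each level of that parameter."""
    table = {}
    for lv, resp in zip(levels, responses):
        table.setdefault(lv, []).append(resp)
    return {lv: sum(vals) / len(vals) for lv, vals in table.items()}
```

For each parameter, the level with the best average response is its best level, and the spread between the largest and smallest level averages ranks the parameter's influence.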
The DGA has four parameters: the crossover rate pc, the mutation rate pm, the population size Np, and the number of replacements Rn. Each parameter is given five levels, as shown in Table 1. Both test functions are 30-dimensional, and the maximum number of generations is 500. The orthogonal array L25(5^6) is adopted. For each parameter combination in this array, 30 independent runs of the DGA are performed on each function. The average and the standard deviation of the optimal values of the 30 runs of each experiment are taken as the responses. For each parameter, the average response of a level is defined as the average of the responses of all experiments in L25(5^6) that use that level. The parameter analyses of the average and the standard deviation of the objective function values for the two test functions are listed in Tables 2–5, respectively. The results of the parameter analysis of the DGA with the Taguchi method are as follows:
(1) The best parameter combination.
The best parameter combination is defined as the combination composed of the best level of each parameter [26].

Table 2
Responses of average optimal function values for F3(X).

Level pc pm Np Rn
1 28.96436 28.96266 28.98181 28.95999
2 28.96295 28.96311 28.96991 28.96049
3 28.96000 28.96617 28.95897 28.96011
4 28.95883 28.96193 28.95158 28.96103
5 28.96115 28.95342 28.94502 28.96567
Largest difference 0.00553 0.01275 0.03679 0.00556
Rank 4 2 1 3

Table 3
Responses of standard deviation of optimal function values for F3(X).

Level pc pm Np Rn
1 0.02274 0.02102 0.01712 0.02286
2 0.07120 0.07295 0.02125 0.02350
3 0.02248 0.02020 0.07170 0.02278
4 0.02153 0.02190 0.02481 0.06902
5 0.02128 0.02315 0.02434 0.02108
Largest difference 0.04992 0.05275 0.05458 0.04794
Rank 3 2 1 4

Table 4
Responses of average optimal function values for F5(X).

Level pc pm Np Rn
1 0.84791 0.65926 0.73958 0.65057
2 0.67371 0.66830 0.63318 0.49173
3 0.61528 0.68655 0.60062 0.38798
4 0.57396 0.59610 0.58710 0.89628
5 0.49014 0.59081 0.64053 0.77445
Largest difference 0.35777 0.09574 0.15248 0.50830
Rank 2 4 3 1

Table 5
Responses of standard deviation of optimal function values for F5(X).

Level pc pm Np Rn
1 0.24165 0.33029 0.32584 0.40076
2 0.36991 0.37534 0.39542 0.41257
3 0.42400 0.35644 0.37024 0.37131
4 0.39743 0.32476 0.26925 0.17746
5 0.26859 0.31475 0.34084 0.33949
Largest difference 0.18235 0.06059 0.12617 0.23511
Rank 2 4 3 1

(A) For Rosenbrock F3(X): For the best average optimal value (boldface in Table 2), the best combination is
{pc = 0.7, pm = 0.1, Np = 100, Rn = 0.7}, and for the best average standard deviation (boldface in Table 3), it is
{pc = 0.9, pm = 0.5, Np = 20, Rn = 0.3}.
(B) For Griewank F5(X): For the best average optimal value, the combination is {pc = 0.9, pm = 0.1, Np = 80, Rn = 0.5}, and for
the best average standard deviation, it is {pc = 0.1, pm = 0.1, Np = 80, Rn = 0.4}.

(2) Ranking the degree of influence of the parameters [26]


In Tables 2–5, for each parameter, the difference between the maximum and minimum average level responses is called the
largest difference. The greater a parameter's largest difference, the stronger its influence on the evolutionary process,
i.e., the higher its degree of influence. The degrees of influence of the DGA parameters are found to be: (a) For the
Rosenbrock function F3(X), the parameter with the highest degree of influence on both the average optimal value and the
average standard deviation is Np, and pm is second. (b) For the Griewank function F5(X), ordering the parameters by their
degree of influence on the average optimal value and on the average standard deviation gives the same list: Rn, pc, Np, and
pm. As shown in Tables 2–5, each parameter's largest differences with respect to the two responses, the average optimal
value and the standard deviation, are minor, which shows that the five different values given to each parameter of the
proposed DGA have little influence on the evolution performance; i.e., the DGA is highly robust on these two test functions.
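The level-averaging, largest-difference ranking, and best-level selection described above can be sketched as follows (a minimal illustration; the array layout and function name are generic, and the paper's actual L25(5⁶) assignments and responses are not reproduced here):

```python
import numpy as np

def taguchi_analysis(levels, responses, n_levels=5):
    """Per-level response analysis for an orthogonal-array experiment.

    levels    : (n_experiments, n_params) array of level indices (1..n_levels),
                e.g. the L25(5^6) column assignments.
    responses : (n_experiments,) response of each experiment, e.g. the average
                optimal value over 30 independent DGA runs.
    Returns (avg, largest_diff, rank, best_level):
      avg[l, p]       - average response of parameter p at level l+1,
      largest_diff[p] - max minus min of the level averages (as in Tables 2-5),
      rank[p]         - 1 = most influential parameter,
      best_level[p]   - 1-based level with the lowest average response
                        (minimization is assumed here).
    """
    levels = np.asarray(levels)
    responses = np.asarray(responses, dtype=float)
    n_params = levels.shape[1]
    avg = np.empty((n_levels, n_params))
    for p in range(n_params):
        for lv in range(1, n_levels + 1):
            # Average over all experiments that used level lv for parameter p.
            avg[lv - 1, p] = responses[levels[:, p] == lv].mean()
    largest_diff = avg.max(axis=0) - avg.min(axis=0)
    # Rank 1 goes to the parameter with the greatest largest difference.
    rank = np.argsort(np.argsort(-largest_diff)) + 1
    best_level = avg.argmin(axis=0) + 1
    return avg, largest_diff, rank, best_level
```

The best parameter combination is then read off as `best_level` column by column, exactly as in the boldface entries of Tables 2–5.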

4.1.5. The effects of different strategies


To improve the accuracy of the global optimal solution, this paper combines Nelder–Mead's simplex method (NMSM) with the
DGA and the three SDGAs to form the following four algorithms: DGALS, SDGALS, SDGAMINLS, and SDGAMEXLS. For the six 30-
dimensional benchmark functions F1(X)–F6(X), the parameters obtained in the previous section are set as follows: for
uni-modal functions, the population size Np is 100, the replacement rate Rn = 70% (the elite rate is 30%),
the crossover rate pc = 0.7, and the mutation rate pm = 0.1; for multimodal functions, the population size is 80, the replace-
ment rate is 50% (the elite rate is 50%), the crossover rate pc = 0.9, and the mutation rate pm = 0.1. The radius of the
hyper-sphere for the new search space is r = dpg × L, and the maximum number of generations is 500. The convergence
criterion ε of the NMSM is 10⁻⁶. Each algorithm is run 30 times independently on each function. The optimization results
are listed in Table 6 and the convergence histories are shown in Figs. 9–14.
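As a sketch of the local-search step, a simplified Nelder–Mead simplex minimizer with the standard reflection, expansion, contraction, and shrink moves can be written as below; the authors' FORTRAN implementation and its exact stopping rule may differ.

```python
import numpy as np

def nelder_mead(f, x0, step=0.1, tol=1e-6, max_iter=2000):
    """Simplified Nelder-Mead simplex minimizer (coefficients 1, 2, 0.5, 0.5)."""
    n = len(x0)
    # Initial simplex: x0 plus a perturbation of each coordinate.
    simplex = [np.asarray(x0, dtype=float)]
    for i in range(n):
        v = simplex[0].copy()
        v[i] += step
        simplex.append(v)
    fvals = [f(v) for v in simplex]
    for _ in range(max_iter):
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if abs(fvals[-1] - fvals[0]) < tol:   # simplex has collapsed
            break
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - simplex[-1])            # reflection
        fr = f(xr)
        if fr < fvals[0]:
            xe = centroid + 2.0 * (centroid - simplex[-1])  # expansion
            fe = f(xe)
            if fe < fr:
                simplex[-1], fvals[-1] = xe, fe
            else:
                simplex[-1], fvals[-1] = xr, fr
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        else:
            xc = centroid + 0.5 * (simplex[-1] - centroid)  # contraction
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                           # shrink toward best
                xb = simplex[0]
                simplex = [xb] + [xb + 0.5 * (v - xb) for v in simplex[1:]]
                fvals = [fvals[0]] + [f(v) for v in simplex[1:]]
    best = int(np.argmin(fvals))
    return simplex[best], fvals[best]
```

In the hybrid algorithms, such a routine would be called to polish the best individual found by the (S)DGA until the 10⁻⁶ convergence criterion is met.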

(1) Uni-modal functions.

The optimization results for the uni-modal functions in Table 6 show that the SDGALS performs best overall, doing an
excellent job on F1(X) and F2(X). For F3(X), the four algorithms combined with Nelder–Mead's simplex

Table 6
Optimization results with DGA and SDGAs on the problems with 30 variables.

Method Function F1(x) F2(x) F3(x) F4(x) F5(x) F6(x)


DGA Aver. 2.02E-03 4.231 31.268 1.93E-05 3.12E-02 2.72E-05
Best 2.90E-04 7.49E-03 29.114 4.53E-07 1.0E-08 8.64E-07
DGALS Aver. 8.05E-07 4.01E-03 6.62E-07 1.42E-06 3.84E-07 4.96E-07
Best 5.01E-07 3.35E-03 5.31E-07 3.05E-07 1.23E-07 3.13E-07
SDGALS Aver. 2.85E-07 3.48E-07 6.20E-07 1.60E-06 4.87E-07 5.65E-07
Best 5.06E-09 6.58E-09 4.99E-07 5.77E-07 5.45E-09 4.63E-07
SDGAMEXLS Aver. 5.14E-07 1.11E-03 4.99E-07 1.42E-06 3.14E-07 4.55E-07
Best 3.50E-07 2.83E-04 3.62E-07 6.44E-07 5.22E-12 4.30E-08
SDGAMINLS Aver. 5.00E-07 6.32E-04 7.20E-07 1.50E-06 3.95E-07 4.54E-07
Best 3.78E-07 6.78E-06 5.24E-07 1.26E-07 5.21E-12 1.50E-08

Fig. 9. Convergence curves of 30-dimensional F1(x) for four different DGAs.



Fig. 10. Convergence curves of 30-dimensional F2(x) for four different DGAs.

Fig. 11. Convergence curves of 30-dimensional F3(X) for four different DGAs.

Fig. 12. Convergence curves of 30-dimensional F4(x) for three different DGAs.

Fig. 13. Convergence curves of 30-dimensional F5(x) for three different DGAs.

method show strong effects on the accuracy of the results. It is thus clear that, when applied to uni-modal functions, the
DGAs must rely on the local search method to enhance the accuracy of their results.
The convergence histories shown in Figs. 9–11 for three different functions with four strategies of the DGA indicate that
the evolution efficiency and accuracy of the DGA with the local search and sequential strategies, DGALS, SDGALS,

Fig. 14. Convergence curves of 30-dimensional F6(x) for three different DGAs.

SDGAMINLS, and SDGAMEXLS, are better than those of the DGA. Among these algorithms, the SDGALS achieves the best
accuracy and the SDGAMINLS has the fastest convergence.

(2) Multimodal functions

From the results for the multimodal functions in Table 6, the SDGAMINLS shows the best overall performance. It is superior
to the original DGA on all of the multimodal functions, and shows the largest improvement, by five orders of magnitude
(100,000 times), on F5(X).
The convergence histories shown in Figs. 12–14 for three different functions with three strategies of the DGA indicate that
the computational efficiency and solution accuracy of the DGA with the local search and sequential strategies, DGALS,
SDGAMINLS, and SDGAMEXLS, are improved, and their results are better than those of the DGA. The set of parameter values
used for the evolutionary process in these sequential strategies is suitable for reducing the evolution cost and increasing
the solution accuracy.
Summarizing the above results, the SDGA combined with a local search greatly improves the performance of the DGA. For
the uni-modal benchmark functions, the SDGALS has the best performance; for the multimodal benchmark functions, the
SDGAMINLS outperforms the others. Furthermore, it is clear from the DGA's data in Table 6 that, despite their limited
accuracy, its solutions are all close to the global optimal solutions. Thus, the population evolution of the proposed DGA
moves in the correct direction, and by adding the proposed strategies as well as the local search mechanism, the developed
algorithms indeed produce accurate and robust results with fast convergence.
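For concreteness, the GA side of such a hybrid can be sketched with the parameter roles used throughout this section (Np, pc, pm, and Rn with 1 − Rn elites). Blend crossover and Gaussian mutation are generic stand-ins here, since the directed crossover and mutation operators themselves are defined in an earlier section of the paper:

```python
import numpy as np

def simple_rcga(f, lower, upper, n_pop=60, n_gen=100, pc=0.7, pm=0.1,
                rn=0.7, seed=0):
    """Generic real-coded GA skeleton: population size Np, crossover rate pc,
    mutation rate pm, replacement rate Rn (the top 1-Rn fraction survives as
    elites).  The crossover/mutation operators are placeholders, not the
    paper's directed operators."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pop = rng.uniform(lower, upper, size=(n_pop, dim))
    fit = np.array([f(x) for x in pop])
    n_elite = n_pop - int(rn * n_pop)        # e.g. Rn = 70% -> 30% elites
    for _ in range(n_gen):
        order = np.argsort(fit)
        pop, fit = pop[order], fit[order]
        children = []
        while len(children) < n_pop - n_elite:
            i, j = rng.integers(0, max(n_elite, 2), size=2)  # parents: elites
            child = pop[i].copy()
            if rng.random() < pc:            # blend crossover (stand-in)
                a = rng.random(dim)
                child = a * pop[i] + (1 - a) * pop[j]
            if rng.random() < pm:            # Gaussian mutation (stand-in)
                child += rng.normal(0.0, 0.05 * (upper - lower), dim)
            children.append(np.clip(child, lower, upper))
        pop = np.vstack([pop[:n_elite], children])
        fit = np.array([f(x) for x in pop])
    best = int(fit.argmin())
    return pop[best], fit[best]
```

Pairing such a loop with a final local search, as in the DGALS strategy summarized above, then polishes the returned best individual.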

4.1.6. Comparison with the results of other studies in the literature


Using the six benchmark functions with 30 and 100 variables, the search performance of the proposed DGA algorithms is
compared with that of one PSO, two BGAs, and seven RGAs from other studies [11,17,20,25].
All of the algorithms in this paper were developed in the FORTRAN programming language and adopted single precision
variables; they were compiled using the PowerStation compiler. Particle swarm optimization (PSO) uses three quantities –
the speed of the particle itself, the best position of the individual, and the best location of the group – to determine the direc-
tion in which the particle moves. The memory particle swarm optimization (MPSO) proposed in the literature [25] is based
on the PSO with a strategy that uses external memory to store better particles in every generation. The maximum number of
generations to be searched is 3000. Among the algorithms presented in [25], the MPSOLS applies a local search

Table 7
Comparison of SDGAs’ optimization results with literature data for the unconstrained problems with 30 variables.

Method Function F1(x) F2(x) F3(x) F4(x) F5(x) F6(x)


SDGALS Aver. 2.85E-07 3.48E-07 6.20E-07 – – –
Best 5.06E-09 6.58E-09 4.99E-07 – – –
SDGAMINLS Aver. – – – 1.50E-06 3.95E-07 4.54E-07
Best – – – 1.26E-07 5.21E-12 1.50E-08
MPSOLS [25] Aver. 0 4.50E-04 1.56E+00 0 2.16E-07 0
Best 0 3.98E-08 1.18E-04 0 0 0
RCGA [20] Aver. 5.94E-02 1.76E+02 9.31E+01 6.08E-02 1.21E-01 2.46E-02
Best 1.28E-02 6.82E+01 1.05E+01 2.88E-02 3.29E-02 3.33E-03
LM_RCGA [20] Aver. 8.52E-255 3.53E+03 7.14E+01 4.89E-15 9.51E-03 0
Best 2.58E-264 1.08E+02 6.79E-01 4.14E-15 0 0
HMGA [17] Aver. 5.48E-05 – 1.61E+01 3.17E-04 1.52E-03 9.22E-12
LMGA [17] Aver. 1.32E-07 – 1.85E+01 2.18E-07 1.14E-03 1.23E-12
HNGA [17] Aver. 3.64E-04 – 1.25E+01 1.28E-03 3.68E-03 3.11E-13
LNGA [17] Aver. 8.92E-07 – 1.85E+01 6.66E-07 8.57E-04 7.30E+00

Table 8
Comparison of SDGAs’ optimization results with literature data for the unconstrained problems with 100 variables.

Method Function F1(x) F2(x) F3(x) F4(x) F5(x) F6(x)


SDGALS Aver. 6.74E-07 2.95E-03 1.60E-05 – – –
Best 3.81E-07 5.23E-08 9.97E-07 – – –
SDGAMINLS Aver. – – – 1.67E-06 7.11E-07 6.69E-07
Best – – – 1.83E-07 4.00E-14 1.25E-09
MPSOLS [25] Aver. 0 9.02E+00 6.96E+01 6.25E-08 6.63E-06 5.53E-08
Best 0 6.25E+00 5.73E-01 0 1.98E-08 0
RGA [11] Aver. 1.94E+01 – 5.15E+00 – – –
BGA original mutation [11] Aver. 7.57E+01 – 4.01E+01 – – –
BGA modified mutation [11] Aver. 2.87E+01 – 1.61E+01 – – –

with a limited number of iterations to the best particle in every generation. The BGA in [11] uses original and
modified mutation operators. The RGA in [11] uses the original operators; the RGAs in [17] use combinations of two
crossover operators, HX and LX, and two mutation operators, MPTM and NUM, to form four methods: HX-MPTM
(HMGA), LX-MPTM (LMGA), HX-NUM (HNGA), and LX-NUM (LNGA); and [20] uses a linear map-based mutation
scheme for the real-coded GA (LM_RCGA). In this paper, for the uni-modal functions F1(X)–F3(X), the SDGALS is adopted,
whereas for multimodal functions F4(X)–F6(X), the SDGAMINLS is used. The results obtained are compared with those of
the other algorithms that were mentioned above. Tables 7 and 8 show the results that were collected by applying these algo-
rithms to the six benchmark functions with 30 and 100 variables. The comparisons are made as follows:

(1) 30-dimensional benchmark problems

From Table 7, which summarizes the results on the six benchmark functions, it is apparent that the overall performances
of the two DGA methods presented in this paper and of the MPSOLS rank first, and the RGAs [17] rank second. Both proposed
methods, SDGALS and SDGAMINLS, are able to find the global optimal solutions of the uni-modal and the multimodal
functions. The MPSOLS is inferior to the SDGALS only on F3(X); this is because the local search in the MPSOLS is performed
on the best particle in every generation, which makes it difficult to obtain a more accurate value.

(2) 100-dimensional benchmark problems

Provided that a function value is considered to be at the best level when it is less than 10⁻⁶ (Table 8), the proposed
SDGAMINLS and the MPSOLS have the best overall performance when compared with the other RGA and two BGA algorithms,
and the RGA ranks second. When applied to F2(X) and F3(X), the SDGALS has the best performance.
The optimization results in Tables 7 and 8, collected from testing the 30/100-dimensional functions with several algo-
rithms, show that the solution robustness of the proposed DGAs does not deteriorate as the number of variables increases.
The results show that the proposed algorithms, the SDGALS and the SDGAMINLS, have the features of fast convergence and
high robustness to deal with high-dimensional problems.

4.2. The cantilevered beam design problem

The cantilevered beam design problem with discrete rectangular cross-sections is shown in Fig. 15. The objective of this
problem is to determine the best combination of the five different cross-section areas so that the volume of the cantilever

Fig. 15. Schematic of the cantilevered beam structure with an indication of the design variables.

beam is minimized. The design problem has 10 variables: the width and height of each cross-section, bi and hi
(i = 1, ..., 5). An external force p = 50,000 N is applied at the free end of the cantilevered beam. The maximum
allowable stress at the left end of each section is σmax = 14,000 N/cm², the material elasticity modulus E is 200 GPa, the
length of each section li (i = 1, ..., 5) is 100 cm, and the maximum allowable deflection is ymax = 2.715 cm. The height-to-
width aspect ratio of each cross-section is restricted to at most 20. Thus, the mathematical model for this optimization
problem is defined as follows:

X = [b1, h1, b2, h2, b3, h3, b4, h4, b5, h5]^T = [x1, x2, ..., x10]^T

Minimize f(x) = 100(x1x2 + x3x4 + x5x6 + x7x8 + x9x10)  (19)

Subject to
Stress constraints:

g1(x) = 10.7143 − x1x2²/10³ ≤ 0,  (20)

g2(x) = 8.5714 − x3x4²/10³ ≤ 0,  (21)

g3(x) = 6.4286 − x5x6²/10³ ≤ 0,  (22)

g4(x) = 4.2857 − x7x8²/10³ ≤ 0,  (23)

g5(x) = 2.1429 − x9x10²/10³ ≤ 0.  (24)

Deflection constraint:

g6(x) = 10⁴(244/(x1x2³) + 148/(x3x4³) + 76/(x5x6³) + 28/(x7x8³) + 4/(x9x10³)) − 10.86 ≤ 0.  (25)

Geometric constraints:

g7(x) = x2 − 20x1 ≤ 0,  (26)

g8(x) = x4 − 20x3 ≤ 0,  (27)

g9(x) = x6 − 20x5 ≤ 0,  (28)

g10(x) = x8 − 20x7 ≤ 0,  (29)

g11(x) = x10 − 20x9 ≤ 0.  (30)


There are 11 constraints in this problem. Among them, Eqs. (20)–(24) are the allowable stress constraints, Eq. (25) is the
allowable deflection constraint, and Eqs. (26)–(30) restrict the geometric shape of the cross-sections. The search ranges of
the design variables are 1 ≤ xi ≤ 5 (i = 1, 3, 5, 7, 9) and 30 ≤ xi ≤ 65 (i = 2, 4, 6, 8, 10). The optimization design

Table 9
Comparison of SDGA’s optimization results with the literature data [31,32] for the
cantilevered beam problem.

Variables SDGAMINLS MPNN [31] SUMT [32]


b1 3.0459 3.0606 2.17
h1 60.8969 61.2115 42.74
b2 2.8018 2.8161 2.27
h2 56.0168 56.3214 44.99
b3 2.5251 2.5216 2.82
h3 50.4643 50.4290 50.47
b4 2.2252 2.2136 2.79
h4 44.4745 44.2759 55.42
b5 1.7678 1.7503 3.00
h5 34.8462 35.0141 59.77
f(x) 63044.17 63240.67 65678.00

Table 10
Comparison of the constraint values for the cantilevered beam obtained with the
SDGAMINLS and the methods of [31] and [32].

Constrained values SDGAMINLS MPNN [31] SUMT [32]


g1 -0.5814 -0.0703 -0.465
g2 -0.2205 -0.0422 -0.068
g3 -0.0020 0.0025 -0.003
g4 -0.1158 -0.0125 -0.0002
g5 -0.00370 -0.0014 -0.0003
g6 -0.00074 -0.0094 -0.002
g7 -0.02241 -0.0006 -0.524
g8 -0.02071 -0.0005 -0.442
g9 -0.03804 -0.0030 -0.0007
g10 -0.03109 0.0039 -0.375
g11 -0.51026 0.0081 -0.222

problem in this paper is addressed by using the penalty function method, Eq. (5). The penalty factor R is set to 10⁷. The
population size is set to 60, the replacement rate is 70% (the elite rate is 30%), the crossover rate pc = 0.7, and the
mutation rate pm = 0.1.
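A sketch of the penalized objective for this design problem follows. The exact form of Eq. (5) is given earlier in the paper; a quadratic exterior penalty with R = 10⁷ is assumed here for illustration.

```python
import numpy as np

# Coefficients of the stress constraints, Eqs. (20)-(24).
STRESS_COEF = np.array([10.7143, 8.5714, 6.4286, 4.2857, 2.1429])
# Numerators of the deflection constraint, Eq. (25).
DEFLECTION_COEF = np.array([244.0, 148.0, 76.0, 28.0, 4.0])

def beam_volume(x):
    """Objective of Eq. (19): total volume of the five 100 cm sections."""
    b, h = np.asarray(x[0::2], float), np.asarray(x[1::2], float)
    return 100.0 * np.sum(b * h)

def beam_constraints(x):
    """g1..g11 of Eqs. (20)-(30); every entry must be <= 0 when feasible."""
    b, h = np.asarray(x[0::2], float), np.asarray(x[1::2], float)
    g_stress = STRESS_COEF - b * h**2 / 1e3                      # Eqs. (20)-(24)
    g_defl = 1e4 * np.sum(DEFLECTION_COEF / (b * h**3)) - 10.86  # Eq. (25)
    g_geom = h - 20.0 * b                                        # Eqs. (26)-(30)
    return np.concatenate([g_stress, [g_defl], g_geom])

def penalized(x, R=1e7):
    """Exterior-penalty objective: feasible designs incur no penalty."""
    g = beam_constraints(x)
    return beam_volume(x) + R * np.sum(np.maximum(g, 0.0) ** 2)
```

Evaluating these functions at the SDGAMINLS design of Table 9 reproduces the reported volume of about 63,044 cm³ with every constraint value near or below zero.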
The optimization results (the best objective function values and the corresponding design variable values) obtained using
the proposed SDGAMINLS, the MPNN (Mathematical Programming Neural Network) [31], and the SUMT (Sequential Uncon-
strained Minimization Techniques) [32] are listed in Table 9, and the constraint values are listed in Table 10. The MPNN is a
method that uses neural networks to solve nonlinear optimization problems. In [32], the ALM (Augmented Lagrange
Multiplier) method is used first to transform the problem into an unconstrained optimization problem; the solution is then
found using the SUMT. It is clear from Table 9 that the function value obtained by the proposed SDGAMINLS is slightly
better than that of the MPNN and is also superior to that of the SUMT. At the same time, Table 10 reveals that three of the
MPNN's constraints, g3, g10, and g11, are not satisfied.

5. Conclusions

(1) Based on the framework of the real-coded genetic algorithm, this paper borrows search mechanisms from Nelder–
Mead's simplex method to propose two operators, a directed crossover operator and a directed mutation operator,
and develops a new variant of the real-coded genetic algorithm, called the directed genetic algorithm (DGA).
(2) With the Taguchi method, a parameter analysis of the DGA on two functions is performed to determine the best com-
bination of parameters and the degree of influence of each parameter.
(3) The results of applying the DGA to the six benchmark functions indicate that the DGA itself already converges quickly.
To improve the solution accuracy, four strategies with a local search method are proposed and applied to six
benchmark functions, three uni-modal and three multimodal. For uni-modal functions, the SDGALS has the best
performance, while for multimodal functions, the SDGAMINLS outperforms the others. Moreover, the local search
method adopted in this paper, the Nelder–Mead simplex method, has shown its ability to effectively improve the
accuracy of the solution.
(4) In comparison with the results of the eleven algorithms from the literature [11,17,20,25] on the six 30- and
100-dimensional benchmark functions, the proposed algorithms demonstrate superior performance and the ability
to find results nearly identical to the global optimal solutions.
(5) A constrained optimization problem, the cantilevered beam design problem, has been addressed in this paper using
the penalty function method. The results of the directed genetic algorithm are better than those of the other studies
in the literature [31,32].

References

[1] J.H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, 1975.
[2] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural Networks, vol. 4, Perth, Australia,
1995, pp. 1942–1948.
[3] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of co-operating agents, IEEE Trans. Syst. Man Cybern. Part B Cybern. 26 (1996)
29–41.
[4] R. Storn, K. Price, Differential evolution – A simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997)
341–359.
[5] E. Bonabeau, M. Dorigo, G. Theranlaz, Swarm Intelligence. From Natural to Artificial System, Oxford University Press, 1999.
[6] E. Bonabeau, G. Theranlaz, Swarm smarts, Sci. Am. (2000) 72–79.
[7] S.L. Birbil, S.C. Fang, An electromagnetism-like mechanism for global optimization, J. Global Optim. 25 (2003) 263–282.
[8] K.A. De Jong, An analysis of the behavior of a class of genetic adaptive systems, Ph. D. Dissertation, University of Michigan, Ann Arbor, 1975.
[9] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, Reading, MA, 1989.
[10] D.E. Goldberg, Real-coded genetic algorithms, virtual alphabets, and blocking, Complex Syst. 5 (1991) 139–167.

[11] R. Arora, R. Tulshyan, K. Deb, Parallelization of binary and real-coded genetic algorithms on GPU using CUDA, IEEE Congr. Evol. Comput. 2010 (2010) 1–
8.
[12] A. Wright, Genetic algorithms for real parameter optimization, in: G.J.E. Rawlin (Ed.), Foundations of Genetic Algorithms 1, Morgan Kaufmann, San
Mateo, 1991, pp. 205–218.
[13] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, New York, 1992.
[14] L.J. Eshelman, J.D. Scahffer, Real-coded genetic algorithms and interval-schemata, in: L. Darrel Whitley (Ed.), Foundations of Genetic Algorithms 2,
Morgan Kaufmann, San Mateo, 1993, pp. 187–202.
[15] I. Ono, S. Kobayashi, A real-coded genetic algorithm for function optimization using unimodal normal distribution crossover, in: Proceedings of 7th Int’l
Conf. on Genetic Algorithms, 1997, 246–253.
[16] K. Deb, A population-based algorithm-generator for real-parameter optimization, Soft Comput. 9 (2005) 236–253.
[17] K. Deep, M. Thakur, A new mutation operator for real coded genetic algorithms, Appl. Math. Comput. 193 (2007) 211–230.
[18] M.E. Farmer, S. Bapna, A.K. Jain, Large scale feature selection using modified random mutation hill climbing, in: Proceedings of the 17th International
Conference on Pattern Recognition (ICPR 2004), 2004.
[19] A. Neubauer, A theoretical analysis of the non-uniform mutation operator for the modified genetic algorithm, in: IEEE International Conference on
Evolutionary Computation, 1997, pp. 93–96.
[20] Y.J. Gong, X. Hu, J. Zhang, O. Liu, H.L. Liu, A linear map-based mutation scheme for real coded genetic algorithms, IEEE Congr. Evol. Comput. 2010 (2010)
1–7.
[21] T. Murata, H. Ishibuchi, H. Tanaka, Genetic algorithms for flow-shop scheduling problems, Comput. Ind. Eng. 30 (1996) 1061–1071.
[22] A. Salman, I. Ahmad, S. Al-Madani, Particle swarm optimization for task assignment problem, Microprocess. Microsys. 26 (2002) 363–371.
[23] R. Chelouah, P. Siarry, Genetic and Nelder–Mead algorithms hybridized for a more accurate global optimization of continuous multi-minima functions,
Eur. J. Oper. Res. 148 (2003) 335–348.
[24] J.A. Nelder, R. Mead, A simplex method for function minimization, Comput. J. 7 (1965) 308–313.
[25] A. Adnan, G. Akin, Enhanced particle swarm optimization through external memory support, IEEE Trans. Evol. Comput. 2 (2005) 1875–1882.
[26] G. Taguchi, Techniques for Quality Engineering, Asian Productivity Organization, Tokyo, 1990.
[27] E.O. Wilson, Sociobiology: The New Synthesis, Twenty-fifth Anniversary Edition, Harvard University Press, 2000.
[28] H.C. Kuo, J.L. Wu, A new approach with orthogonal array for global optimization in design of experiments, J. Global Optim. 44 (2009) 563–578.
[29] R.A. Makinen, E.J. Periaux, J. Toivanen, Multidisciplinary shape optimization in aerodynamics and electromagnetic using genetic algorithms, Int. J.
Numer. Methods Fluids 30 (1999) 149–159.
[30] X. Yao, Y. Liu, G. Lin, Evolutionary programming made faster, IEEE Trans. Evol. Comput. 3 (1999) 82–102.
[31] K.C. Gupta, J. Li, Robust design optimization with mathematical programming neural networks, Comput. Struct. 76 (2000) 507–516.
[32] A.V. Fiacco, G.P. McCormick, Nonlinear programming: Sequential Unconstrained Minimization Techniques, John Wiley & Sons, Inc., New York, 1968.
