A Directed Genetic Algorithm For Global Optimization
Abstract

Within the framework of real-coded genetic algorithms, this paper proposes a directed genetic algorithm (DGA) that introduces a directed crossover operator and a directed mutation operator. The operation schemes of these operators borrow from the reflection and expansion search modes of the Nelder–Mead simplex method. First, the Taguchi method is employed to analyze the influence of the parameters of the DGA. The results show that the parameters of the DGA are strongly robust for finding the global optimal solution. Then, several strategies are proposed to enhance the solution accuracy of the DGA. All of the strategies are applied to a set of 30/100-dimensional benchmark functions to prove their superiority over several genetic algorithms. Finally, a cantilevered beam design problem with constrained conditions is used as a practical structural optimization example to demonstrate the very good performance of the proposed method.

© 2012 Elsevier Inc. All rights reserved.

Keywords: Directed genetic algorithm; Nelder–Mead's simplex algorithm; Global optimization
1. Introduction
Optimization technology is an approach to solving design problems with N variables. It defines the problem to be solved as an objective function f(X), where the design variable X = {x1, x2, ..., xN} is a vector with N components. The search domains of the variables form a search space U, which is the set of all possible X's. The problem may be subject to Ineq inequality-constrained conditions g_i(X), i = 1, ..., Ineq. In the search space U, the function f(X) has a global optimal solution that meets all of the constrained conditions. For example, if the objective of a problem is to search for an X* ∈ U such that f(X*) is the global optimal value, then the mathematical model of the problem can be defined as follows:

min_{X ∈ U} f(X) = f(X*)  subject to  g_i(X) ≤ 0,  i = 1, ..., Ineq.   (1)
Along with constant progress in computer science and technology, innovative thinking has led to the proposal of new computing methods. Studies on optimization technology have leaped from classical mathematical programming approaches to new methods that imitate the human genetic code, borrow animal behavioral rules from natural ecology, or draw on the development mechanisms of human culture in the social sciences to develop evolutionary computational algorithms with high accuracy and efficiency.
An optimization algorithm, a search procedure used to solve problems, is a method based on certain concepts and mech-
anisms for finding a solution through a fixed process. There are two frequently used optimization algorithms. (1) Mathemat-
ical programming approaches: These include traditional algorithms, such as linear programming, nonlinear programming,
integer programming, and dynamic programming. All of these algorithms are able to search out the local optimal solutions
to a problem rapidly. (2) Heuristic Algorithms: Since the 1970s, a large number of researchers have adopted such concepts as
imitating natural ecology, the humanities and social sciences, music, and electromagnetic attraction and repulsion to
* Corresponding authors.
E-mail addresses: [email protected], [email protected] (H.-C. Kuo), [email protected] (C.-H. Lin).
0096-3003/$ - see front matter © 2012 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.amc.2012.12.046
H.-C. Kuo, C.-H. Lin / Applied Mathematics and Computation 219 (2013) 7348–7364 7349
propose innovative and executable computing algorithms for solving problems. Examples include the Genetic Algorithm (GA) [1] presented by Holland in 1975; Particle Swarm Optimization (PSO) [2] published by Kennedy in 1995; Ant Colony Optimization (ACO) [3] introduced by Dorigo in 1996; the Differential Evolution algorithm (DE) [4] proposed by Storn in 1997; Swarm Intelligence [5,6] presented by Bonabeau in 1999; and the ElectroMagnetic-like algorithm (EM) [7] described by Birbil in 2003.
In 1975, Holland proposed the genetic algorithm by imitating Darwin’s evolutionary theory – survival of the fittest. Later
in the same year, De-Jong [8] applied the GA to optimization problems. In 1989, Goldberg [9] made the GA a popular and
widely used algorithm. By imitating the structure of the biological gene string, he used 0/1 binary codes to encode variables, gave each binary gene string a fitness value according to certain criteria, and adopted population-based evolution. In the evolutionary process, the rule of "survival of the fittest" was applied to individuals by performing simple operations on gene-string codes. The operators used to generate a new population consisted of a selection operator, a crossover operator, and a mutation operator. The crossover operation created a new individual by mixing gene strings, while the mutation operation modified the genes in the gene string to provide the population with sufficient proliferation or diversification. With no constraints on the search space and no need to compute derivatives, such an algorithmic mechanism was suitable for parallel computing. Then, in 1991, Goldberg [10] used the Bit-string GA (BGA) to explore the issue of global optimization. He found that the BGA had a robust evolutionary capability for finding the global optimal solution, but it consumed too much computing time and performed poorly when applied to high-dimensional or high-accuracy problems. Furthermore, there was the so-called Hamming-cliff problem: the Hamming distance between two adjacent decimal integers, such as 15 and 16, becomes very large when the integers are expressed in binary codes, which are 01111 and 10000, respectively.
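The cliff is easy to quantify; the short check below (an illustration, not taken from the paper) counts the differing bits between two fixed-width binary codes.

```python
# The Hamming cliff: the adjacent integers 15 and 16 have 5-bit codes
# 01111 and 10000, which differ in every bit position, so a bit-string GA
# must flip all five bits at once to step between these neighboring values.
def hamming(a: int, b: int, bits: int = 5) -> int:
    """Number of differing bits between two fixed-width binary codes."""
    return sum(((a >> i) & 1) != ((b >> i) & 1) for i in range(bits))

print(format(15, "05b"), format(16, "05b"), hamming(15, 16))  # 01111 10000 5
```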
To improve the BGA's weakness in addressing continuous-variable optimization problems, the real-coded genetic algorithm (RGA) [10,11] adopts a real-coded variable model. There are two main developments regarding the crossover and mutation operations of the RGA. (1) The crossover operation. In 1991, Wright [12] first used the heuristic mating operation model to randomly create a new generation; in 1992, Michalewicz [13] proposed the arithmetical crossover; in 1993, Eshelman and Schaffer [14] published the BLX-α crossover; in 1997, Ono and Kobayashi [15] introduced the unimodal normal distribution crossover (UNDX), which belongs to the mean-centric crossovers; in 2006, Deb et al. came up with the parent-centric crossover (PCX) [16]; and in 2007, Deep and Thakur proposed the Laplace crossover (LX) [17], another crossover of the PCX type. (2) The mutation operation. This operation includes the random mutation (RM) [18] and the non-uniform mutation (NUM) [19]. Among them, the NUM proposed by Michalewicz in 1992 is the most widely used; in 2010, Gong et al. proposed a linear map-based mutation scheme for real-coded genetic algorithms [20]. Both the BGA and the RGA use crossover and mutation operators to generate new individuals for the next generation; these operators are usually regarded as playing the roles of intensification and diversification, respectively, in the evolutionary process.
The performance of population-based algorithms, such as the GA, PSO, and DE, depends on whether the population is capable of wide exploration and deep exploitation. This dependence indirectly reflects whether the mutual promotion, mutual transformation, and reciprocal advance between the two characteristics of diversification and intensification can achieve a dynamic balance. Although population-based algorithms are capable of finding the global optimal solution and are often applied to multimodal optimization problems [21,22], in cases where there are many similar local optimal solutions near the global optimal solution, or where the region close to the global optimal solution is long and narrow, their solution accuracy and success rates are relatively low, and the algorithms may even become unusable. In recent years, two types of algorithms have been proposed for improvement. (1) The hybrid local search method. In 2003, Chelouah et al. [23] announced the CHA (Continuous Hybrid Algorithm), which runs the RGA (real-coded genetic algorithm) first and then adopts the Nelder–Mead simplex method (NMSM) [24] to continue the search by using the RGA's result as the starting point. (2) The strategy-based method. Adnan and Akin [25] proposed the memory particle swarm optimization (MPSO) algorithm, which stores the information of the particle swarm in an external memory during the evolutionary process of the PSO [2], thus increasing the vitality of the population. The algorithm then conducts a local search among the best particles of the current population to improve the solution accuracy.
In light of these advances, directed crossover and mutation operators, borrowing from the reflection and expansion mechanisms of the NMSM, are introduced into the real-coded genetic framework; the resulting algorithm is referred to as the directed genetic algorithm (DGA). First, in this paper, two benchmark problems, one unimodal and one multimodal, are examined, and the parameter analysis of the DGA is performed using the Taguchi method [26]. Then, four strategies for the DGA are proposed to improve the solution accuracy, evaluated through six benchmark functions.
Lastly, the proposed DGA is applied to an optimization design of a structural engineering problem with constrained conditions. The mathematical form of the constrained optimization problem (COP) is expressed as follows:

Minimize f(x)   (2)

subject to  h_i(x) = 0,  i = 1, ..., Ieq,   (3)

g_j(x) ≤ 0,  j = 1, ..., Ineq.   (4)

When addressing the COP, this paper adopts the penalty function method as follows:

φ(x) = f(x) + R ( Σ_{i=1..Ieq} [h_i(x)]² + Σ_{j=1..Ineq} max[0, g_j(x)]² ),   (5)

where Ieq and Ineq are the numbers of equality-constrained and inequality-constrained conditions, respectively, and R is the penalty factor.
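As a sketch, the penalty function of Eq. (5) can be realized as a higher-order function that wraps the objective; the example constraint at the end is illustrative only.

```python
# Penalty method of Eq. (5): squared equality violations plus squared
# positive parts of the inequality constraints, both weighted by R.
def penalty(f, h_list, g_list, R):
    def phi(x):
        eq = sum(h(x) ** 2 for h in h_list)
        ineq = sum(max(0.0, g(x)) ** 2 for g in g_list)
        return f(x) + R * (eq + ineq)
    return phi

# Illustrative use: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
phi = penalty(lambda x: x * x, [], [lambda x: 1.0 - x], R=1e6)
```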
The Nelder–Mead simplex method [24] is based on two concepts: determining the search direction and replacing the worst point of the simplex. The process of replacing the worst point consists of four types of search schemes: reflection, expansion, contraction, and shrinkage. The steps of the simplex method are given below.

(1) Initialization process. Assign a dimension n, a convergence limit ε, a reflection coefficient ρ, an expansion coefficient χ, a contraction coefficient γ, and a shrinkage coefficient σ. Generate the initial simplex: given the starting point X1 in the n-dimensional space, find n further vertices, each one step length h away from X1 along one coordinate axis:

X_j = X_1 + h e_{j-1},  j = 2, 3, ..., n+1,

where e_i is the unit vector of the i-th coordinate axis and h is the step length, which is usually between 0.5 and 15.0.

(2) Assessment process. Calculate the objective function values of the n+1 vertices and sort the vertices in ascending order of these values; that is, f(X_{n+1}) ≥ f(X_n) ≥ ... ≥ f(X_2) ≥ f(X_1). Store the point with the smallest objective function value, X_1, as the current best record.
(3) Simplex generation. Make use of reflection, expansion, contraction, and shrinkage to generate a new vertex.

A. Reflection. Compute the reflection point X_r = X_c + ρ(X_c − X_{n+1}), where X_c = (1/n) Σ_{i=1..n} X_i is the centroid of the n best vertices. Evaluate f(X_r). If f(X_1) ≤ f(X_r) < f(X_n), replace X_{n+1} with X_r.

B. Expansion. If the objective function value f(X_r) at the reflection point X_r is smaller than the current best record f(X_1), then the search direction is correct. Continue to expand along the ray through X_c and X_r to the expansion point X_e = X_c + χ(X_r − X_c). Evaluate f(X_e). If f(X_e) < f(X_r), replace X_{n+1} with X_e; otherwise, replace X_{n+1} with X_r.

C. Contraction. If f(X_r) ≥ f(X_n), perform a contraction. If f(X_n) ≤ f(X_r) < f(X_{n+1}), the contraction point is X_s = X_c + γ(X_r − X_c); if f(X_s) ≤ f(X_r), replace X_{n+1} with X_s; otherwise, perform the shrinkage. If instead f(X_r) ≥ f(X_{n+1}), the contraction point is X'_s = X_c + γ(X_{n+1} − X_c); if f(X'_s) < f(X_{n+1}), replace X_{n+1} with X'_s; otherwise, perform the shrinkage.

D. Shrinkage. Calculate new vertices X'_j = X_1 + σ(X_j − X_1) to replace X_j, j = 2, ..., n+1.
(4) Termination criterion. Check whether the convergence criterion

{ (1/(n+1)) Σ_{j=1..n+1} [f(X_j) − f(X_c)]² }^{1/2} ≤ ε

is met. If so, stop searching and take the best point as the optimal solution; otherwise, go to Step (2).
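The four schemes above can be condensed into a compact sketch. This is a minimal illustration following Steps (1)-(4) with the coefficient names used in the text; it is not the paper's production code.

```python
import numpy as np

def nelder_mead(f, x1, h=0.5, rho=1.0, chi=2.0, gamma=0.5, sigma=0.5,
                eps=1e-6, max_iter=10000):
    """Minimal Nelder-Mead sketch following Steps (1)-(4)."""
    n = len(x1)
    # Step (1): initial simplex, one step h along each coordinate axis.
    simplex = np.vstack([x1] + [x1 + h * np.eye(n)[i] for i in range(n)])
    for _ in range(max_iter):
        # Step (2): sort so that f(X1) <= f(X2) <= ... <= f(X_{n+1}).
        simplex = simplex[np.argsort([f(x) for x in simplex])]
        fv = np.array([f(x) for x in simplex])
        xc = simplex[:n].mean(axis=0)        # centroid of the n best vertices
        # Step (4): termination criterion on the spread of function values.
        if np.sqrt(np.mean((fv - f(xc)) ** 2)) <= eps:
            break
        # Step (3A): reflection.
        xr = xc + rho * (xc - simplex[-1])
        fr = f(xr)
        if fv[0] <= fr < fv[-2]:
            simplex[-1] = xr
        elif fr < fv[0]:
            # Step (3B): expansion along the successful direction.
            xe = xc + chi * (xr - xc)
            simplex[-1] = xe if f(xe) < fr else xr
        else:
            # Step (3C): contraction (toward X_r or toward X_{n+1}).
            if fr < fv[-1]:
                xs = xc + gamma * (xr - xc)
                accepted = f(xs) <= fr
            else:
                xs = xc + gamma * (simplex[-1] - xc)
                accepted = f(xs) < fv[-1]
            if accepted:
                simplex[-1] = xs
            else:
                # Step (3D): shrink every vertex toward the best one.
                simplex = simplex[0] + sigma * (simplex - simplex[0])
    return min(simplex, key=f)
```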
Selection, crossover, and mutation are the stochastic operators that generate new generations in the above-mentioned real-coded genetic algorithms. The course of socio-cultural evolution is not a phenomenon of aimless, arbitrary development; it moves towards a clear goal. Such a process of moving towards a higher-level spiritual experience and mental state is a common goal of social species' evolution [27]. Motivated by socio-cultural evolution, this research proposes a directed crossover operator and a directed mutation operator that borrow from the reflection and expansion schemes of the Nelder–Mead simplex method. The directed genetic algorithm (DGA) thus developed is outlined in the rest of this section.
t = (p_i − x_i^l) / (x_i^u − p_i),  p_i ∈ [x_i^l, x_i^u],

where x_i^l and x_i^u are the lower and upper bounds of the i-th variable.

X_c = (1/m) Σ_{j=1..m} X_j,   (14)

S_c = X_c − X_h,   (16)

where a1 is either set to 0.618, the coefficient of the golden-section search method, or taken as a random number between 0 and 1. This operator, which condenses the whole population towards its centroid, is also known as the intensification operator.

S_m = X_g − X_h,   (18)

where a2 is a random number with 1 < a2 ≤ 2. This operator is also referred to as the diversification operator.
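A hedged sketch of the two directed operators is given below. Eqs. (14), (16), and (18) define the centroid X_c and the directions S_c and S_m; the update forms X_h + a1·S_c and X_h + a2·S_m used here are assumptions made for illustration, since the full operator equations are not reproduced in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

def directed_crossover(pop, f, a1=0.618):
    """Intensification sketch: move the worst individual X_h toward the
    population centroid X_c along S_c = X_c - X_h (Eqs. (14) and (16))."""
    h = np.argmax([f(x) for x in pop])      # index of worst individual X_h
    xc = pop.mean(axis=0)                   # centroid X_c, Eq. (14)
    return pop[h] + a1 * (xc - pop[h])      # assumed update X_h + a1*S_c

def directed_mutation(pop, f):
    """Diversification sketch: push the worst individual X_h past the best
    one X_g along S_m = X_g - X_h (Eq. (18)), like the NMSM expansion."""
    fv = [f(x) for x in pop]
    g, h = np.argmin(fv), np.argmax(fv)     # best X_g and worst X_h
    a2 = 2.0 - rng.random()                 # random a2 in (1, 2]
    return pop[h] + a2 * (pop[g] - pop[h])  # assumed update X_h + a2*S_m
```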
The flowchart of the DGA is shown in Fig. 1. The evolutionary process includes the following steps.

Step 1: Initialization
Determine the population size Np; the number of replacements Rn = (1 − Er)·Np, where Er is the elite rate; the maximum number of generations; and the probabilities pc and pm of the directed crossover and mutation operators, both between 0 and 1. Randomly create the initial population over the n-dimensional search space U from a uniform distribution.

Step 2: Evaluation
Calculate the fitness (objective) function value of each individual and then sort these values.

Step 3: Generation of the new population
Use pc and pm to decide which operator generates the new individuals of the next generation. Use roulette selection to pick individuals, apply the directed crossover and directed mutation operators to generate new individuals, and store the elitist individuals.

Step 4: Termination condition
Check whether the maximum number of generations has been reached. If so, stop the evolutionary process and store the best individual as the optimal solution. Otherwise, check whether the required number Rn of new individuals has been generated; if it has, execute Step 2, and if not, execute Step 3.
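The four steps above can be sketched as the following skeleton. This is an illustration under stated assumptions, not the authors' reference implementation: the directed operators are reduced to a centroid pull and a move beyond the best individual, since the exact update equations are not reproduced in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(1)

def dga(f, bounds, Np=60, Er=0.3, pc=0.7, pm=0.1, max_gen=500):
    """Skeleton of the DGA evolutionary loop (Steps 1-4), minimization."""
    lo, hi = bounds
    n = len(lo)
    Rn = int((1 - Er) * Np)                     # number of replacements
    pop = rng.uniform(lo, hi, size=(Np, n))     # Step 1: initialization
    for _ in range(max_gen):
        fv = np.array([f(x) for x in pop])      # Step 2: evaluation
        order = np.argsort(fv)
        pop, fv = pop[order], fv[order]         # ascending: best first
        elites = pop[:Np - Rn].copy()           # store elitist individuals
        w = fv.max() - fv + 1e-12               # roulette weights (minimization)
        xc = pop.mean(axis=0)                   # population centroid
        children = []
        while len(children) < Rn:               # Step 3: new individuals
            i = rng.choice(Np, p=w / w.sum())   # roulette selection
            if rng.random() < pc:               # directed crossover (intensify)
                child = pop[i] + 0.618 * (xc - pop[i])
            elif rng.random() < pm:             # directed mutation (diversify)
                a2 = 2.0 - rng.random()         # a2 in (1, 2]
                child = pop[i] + a2 * (pop[0] - pop[i])
            else:
                child = pop[i].copy()
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([elites, children])     # Step 4 loops until max_gen
    fv = np.array([f(x) for x in pop])
    return pop[np.argmin(fv)]
```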
To improve the solution accuracy of the DGA, this paper proposes three algorithms: the Sequential DGA (SDGA), the Sequential DGA Excluding Memory (SDGAMEX), and the Sequential DGA Including Memory (SDGAMIN). These three algorithms are described in the subsections that follow.
(1) Convergence criterion. When the average distance between the best individual and the other individuals of the population differs by less than 1% from its counterpart in the previous generation, and this situation persists for Gs generations, the population is considered to have reached stable convergence. In this paper, the parameter Gs was set equal to 50.
(2) Definition of the new search space. Take the best individual as the center and a radius r to form a hyper-sphere, the new search space U', as shown in Fig. 2.
(3) Re-initialization of the population. Reinitialize the population over the new search space U'.
(4) Execution of the DGA. Redo the DGA with the new initial population over the new search space U'.
(5) Stopping criterion. When the DGA approaches convergence again, check whether the best solution falls within the search space. If so, terminate the evolution; otherwise, jump to Step (2).
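Steps (2) and (3) can be sketched as follows. Sampling uniformly inside a hyper-sphere around the best individual is one plausible realization; the paper does not specify the sampling distribution, so this routine is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def reinitialize(best, r, Np, lo, hi):
    """Sample a new population uniformly inside a hyper-sphere of radius r
    centered at the current best individual (sketch of Steps (2)-(3))."""
    n = len(best)
    # Random unit directions, with radii scaled for uniform volume density.
    d = rng.normal(size=(Np, n))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    radii = r * rng.random(Np) ** (1.0 / n)
    # Clip back into the original box bounds; since the box is convex and
    # contains `best`, clipping cannot move a point farther than r away.
    return np.clip(best + radii[:, None] * d, lo, hi)
```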
After the DGA is finished, the Nelder–Mead simplex method is adopted to improve the solution accuracy. The parameters of this local search are the convergence limit ε = 10⁻⁶, the reflection coefficient ρ = 1, the expansion coefficient χ = 2, the contraction coefficient γ = 0.5, and the shrinkage coefficient σ = 0.5.
To examine the DGA's evolution characteristics, this paper first investigates two functions (the unimodal Rosenbrock function and the multimodal Griewank function). The Taguchi method [26,28] is used for the parameter analysis of the DGA, probing how the parameters affect the search characteristics, measuring each parameter's degree of influence, and identifying an appropriate parameter combination.

This paper then tests the DGAs and SDGAs on six benchmark functions and compares their results with other results in the literature. In the end, the proposed algorithm is applied to a structural optimization problem.
The three unimodal functions (F1(X), F2(X), and F3(X)) are sorted by their search difficulty: F1(X) < F2(X) < F3(X).

The Ackley function has one global optimal solution, which is located in a narrow basin and surrounded by many local optima. Both the Rastrigin and the Griewank functions have a large number of highly similar local optima. These cases are therefore considered complex multimodal problems that are more difficult to solve [30]. The global optimal solutions X* of the six benchmark functions are X* = [0, ..., 0], except for F3(X), whose solution is X* = [1, ..., 1]. All of the optimal function values are 0.
When the DGA is applied to optimizing F3(X) and F5(X), the histories of dpg and dpa are shown in Figs. 5–8. Curves Nos. 1–5 in each figure correspond to five independent runs of the DGA. The results are described as follows:
(1) For F3(X)
As shown in Fig. 5, in less than 100 generations, the dpg, the average distance ratio between the best solution and all indi-
viduals of the population, becomes close to 0%, which indicates that all individuals of the population have gathered closely
around the best solution. In other words, the population has reached its mature stage and has lost its activity. Then, in Fig. 6,
the dpa, which is the average distance ratio between the global optimal solution and all individuals, shows the same trend;
however, its value falls to approximately 2%, which reveals the population’s failure to congregate near the global optimal
solution.
(2) For F5(X)
There exist a large number of local optima in the proximity of this function’s global optimal solution. As shown in Figs. 7
and 8, both the dpg and the dpa require more generations to converge, but the DGA is finally able to search out a solution that
is close to the global optimal solution. The oscillation of the data in Fig. 7 illustrates that the DGA has an outstanding ability
to escape from the local optimum regions.
Fig. 5. The history of dpg of F3(X) for five independent runs.

Fig. 6. The history of dpa of F3(X) for five independent runs.

Fig. 7. The history of dpg of F5(X) for five independent runs.
Table 1
Levels assigned to DGA parameters in the parameter analysis.
Level pc pm Np Rn
1 0.1 0.9 20 0.7
2 0.3 0.7 40 0.6
3 0.5 0.5 60 0.5
4 0.7 0.3 80 0.4
5 0.9 0.1 100 0.3
problem. A simple statistical method is then used to determine each parameter's best level (the level that yields the best average response), and a ranking of the parameters' degrees of influence can be obtained.

The DGA has four parameters: the crossover rate pc, the mutation rate pm, the population size Np, and the number of replacements Rn. Each parameter is given five levels, as shown in Table 1. Both test functions are 30-dimensional, and the maximum number of generations is 500. The orthogonal array L25(5⁶) is adopted. For each parameter combination in this array, each function is optimized in 30 independent runs of the DGA, and the average and standard deviation of the optimal values of the 30 runs are taken as the responses. For each parameter, the average response of a level is defined as the average of the responses of all experiments in L25(5⁶) that use that level. The parameter analyses on the average and the standard deviation of the objective function values for the two test functions are listed in Tables 2–5, respectively. The results of the parameter analysis for the DGA with the Taguchi method are as follows:

(1) The combination of the best parameters.
The best parameter combination is composed of the best level of each parameter [26].
Table 2
Responses of average optimal function values for F3(X).
Level pc pm Np Rn
1 28.96436 28.96266 28.98181 28.95999
2 28.96295 28.96311 28.96991 28.96049
3 28.96000 28.96617 28.95897 28.96011
4 28.95883 28.96193 28.95158 28.96103
5 28.96115 28.95342 28.94502 28.96567
Largest difference 0.00553 0.01275 0.03679 0.00556
Rank 4 2 1 3
Table 3
Responses of Standard deviation of optimal function values for F3(X).
Level pc pm Np Rn
1 0.02274 0.02102 0.01712 0.02286
2 0.07120 0.07295 0.02125 0.02350
3 0.02248 0.02020 0.07170 0.02278
4 0.02153 0.02190 0.02481 0.06902
5 0.02128 0.02315 0.02434 0.02108
Largest difference 0.04992 0.05275 0.05458 0.04794
Rank 3 2 1 4
Table 4
Responses of average optimal function values for F5(X).
Level pc pm Np Rn
1 0.84791 0.65926 0.73958 0.65057
2 0.67371 0.66830 0.63318 0.49173
3 0.61528 0.68655 0.60062 0.38798
4 0.57396 0.59610 0.58710 0.89628
5 0.49014 0.59081 0.64053 0.77445
Largest difference 0.35777 0.09574 0.15248 0.50830
Rank 2 4 3 1
Table 5
Responses of Standard deviation of optimal function values for F5(X).
Level pc pm Np Rn
1 0.24165 0.33029 0.32584 0.40076
2 0.36991 0.37534 0.39542 0.41257
3 0.42400 0.35644 0.37024 0.37131
4 0.39743 0.32476 0.26925 0.17746
5 0.26859 0.31475 0.34084 0.33949
Largest difference 0.18235 0.06059 0.12617 0.23511
Rank 2 4 3 1
(A) For Rosenbrock F3(X): For the best average optimal value (boldface in Table 2), the best combination is {pc = 0.7, pm = 0.1, Np = 100, Rn = 0.7}, and for the best standard deviation (boldface in Table 3), it is {pc = 0.9, pm = 0.5, Np = 20, Rn = 0.3}.
(B) For Griewank F5(X): For the best average optimal value, the combination is {pc = 0.9, pm = 0.1, Np = 80, Rn = 0.5}, and for the best standard deviation, it is {pc = 0.1, pm = 0.1, Np = 80, Rn = 0.4}.
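The level-response computation behind Tables 2–5 can be sketched as follows. The data layout is an assumption for illustration: one row of the orthogonal array per experiment, one column per parameter, with levels coded 0–4 and `y` holding the response of each experiment.

```python
import numpy as np

def level_responses(array, y, n_levels=5):
    """Average response of each level of each parameter, plus the largest
    difference used to rank the parameters' degrees of influence."""
    n_params = array.shape[1]
    resp = np.zeros((n_levels, n_params))
    for p in range(n_params):
        for lv in range(n_levels):
            # Average over all experiments that assign level `lv` to parameter `p`.
            resp[lv, p] = y[array[:, p] == lv].mean()
    spread = resp.max(axis=0) - resp.min(axis=0)   # "largest difference" row
    rank = np.argsort(-spread)                     # most influential first
    return resp, spread, rank
```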
The optimization results for the unimodal functions in Table 6 show that the SDGALS has the best performance, and it does an excellent job when applied to F1(X) and F2(X). For F3(X), the four algorithms combined with Nelder–Mead's simplex method show strong effects on the accuracy of the results. It is thus clear that, when applied to unimodal functions, the DGAs have to rely on the local search method to enhance the accuracy of their results.

Table 6
Optimization results with DGA and SDGAs on the problems with 30 variables.

Fig. 10. Convergence curves of 30-dimensional F2(X) for four different DGAs.
Fig. 11. Convergence curves of 30-dimensional F3(X) for four different DGAs.
Fig. 12. Convergence curves of 30-dimensional F4(X) for three different DGAs.
Fig. 13. Convergence curves of 30-dimensional F5(X) for three different DGAs.
The convergence curves shown in Figs. 9–11 for three different functions with four DGA strategies indicate that the evolution efficiency and accuracy of the variants having the local search and sequential strategies, DGALS, SDGALS, SDGAMINLS, and SDGAMEXLS, are better than those of the DGA. Among these algorithms, the SDGALS has the best performance, and the SDGAMINLS has the fastest convergence.

Fig. 14. Convergence curves of 30-dimensional F6(X) for three different DGAs.
From the results of multimodal functions in Table 6, the SDGAMINLS shows the best overall performance. It is superior to
the original DGA when applied to all of the multimodal functions. The SDGAMINLS shows the largest improvement, by five
orders of magnitude (100,000 times), when used for finding the result of F5(X).
The convergence curves shown in Figs. 12–14 for three different functions with three DGA strategies indicate that the computational efficiency and solution accuracy of the DGA with the local search and sequential strategies, DGALS, SDGAMINLS, and SDGAMEXLS, are improved, and their results are better than those of the DGA. The parameter values set for the evolutionary process in these sequential strategies are suitable for reducing the evolution cost and increasing the solution accuracy.
Summarizing the above results, the SDGA combined with a local search greatly improves the performance of the DGA. For the unimodal benchmark functions, the SDGALS has the best performance; for the multimodal benchmark functions, the SDGAMINLS outperforms the others. Furthermore, it is clear from the DGA's data in Table 6 that, despite the solutions' limited accuracy, the results are all close to the global optimal solutions. Thus, the population evolution of the proposed DGA moves in the correct direction, and by adding the proposed strategies and the local search mechanism, the developed algorithms indeed produce accurate and robust results with fast convergence.
Table 7
Comparison of SDGAs’ optimization results with literature data for the unconstrained problems with 30 variables.
Table 8
Comparison of SDGAs’ optimization results with literature data for the unconstrained problems with 100 variables.
with a limited number of iterations applied to the best particle in every generation. The BGA in [11] uses original and modified mutation operators. The RGA in [11] uses the original operators; the RGAs in [17] use combinations of two crossover operators, HX and LX, and two mutation operators, MPTM and NUM, to form four methods, HX-MPTM (HMGA), LX-MPTM (LMGA), HX-NUM (HNGA), and LX-NUM (LNGA); and the RGA in [20] uses a linear map-based mutation scheme (LM_RCGA). In this paper, the SDGALS is applied to the unimodal functions F1(X)–F3(X), whereas the SDGAMINLS is used for the multimodal functions F4(X)–F6(X). The results obtained are compared with those of the other algorithms mentioned above. Tables 7 and 8 show the results collected by applying these algorithms to the six benchmark functions with 30 and 100 variables, respectively. The comparisons are made as follows:
From Table 7, which summarizes the results conducted on the six benchmark functions, it is apparent that the overall per-
formances of the two DGA methods presented in this paper as well as the MPSOLS rank first, and the RGAs [17] rank second. Both
the proposed methods, SDGALS and SDGAMINLS, are able to search out the global optimization solutions of the unimodal and the
multimodal functions. The MPSOLS is only inferior to SDGALS when applied to F3(X); this result is because the local search in
MPSOLS is performed for the best particle in every generation, which makes it difficult to obtain a more accurate value.
Provided that a function value is considered to be at the best level when it is less than 10⁻⁶ (Table 8), the proposed SDGAMINLS and the MPSOLS have the best overall performance when compared with the other RGA and two BGA algorithms, and the RGA ranks second. When applied to F2(X) and F3(X), the SDGAMINLS has the best performance.
The optimization results in Tables 7 and 8, collected from testing the 30/100-dimensional functions with several algo-
rithms, show that the solution robustness of the proposed DGAs does not deteriorate as the number of variables increases.
The results show that the proposed algorithms, the SDGALS and the SDGAMINLS, have the features of fast convergence and
high robustness to deal with high-dimensional problems.
The cantilevered beam design problem with discrete rectangular cross-sections is shown in Fig. 15. The objective of this problem is to determine the best combination of the five different cross-sections so that the volume of the cantilevered beam is minimized. The design problem has up to 10 variables: the width and height of each cross-section, bi and hi (i = 1, ..., 5).

Fig. 15. Schematic of the cantilevered beam structure with an indication of the design variables.

An external force p = 50,000 N is applied at the free end of the cantilevered beam. The maximum allowable stress at the left end of each section is σmax = 14,000 N/cm², the material elasticity modulus is E = 200 GPa, the length of each section is li = 100 cm (i = 1, ..., 5), and the maximum allowable deflection is ymax = 2.715 cm. The height-to-width aspect ratio of each cross-section is restricted to less than 20. Thus, the mathematical model for the optimization of this problem is defined as follows:
g1(x) = 10.7143 − x1 x2²/10³ ≤ 0,   (20)

g2(x) = 8.5714 − x3 x4²/10³ ≤ 0,   (21)

g3(x) = 6.4286 − x5 x6²/10³ ≤ 0,   (22)

g4(x) = 4.2857 − x7 x8²/10³ ≤ 0,   (23)

g5(x) = 2.1429 − x9 x10²/10³ ≤ 0,   (24)

g6(x) = 10⁴ (244/(x1 x2³) + 148/(x3 x4³) + 76/(x5 x6³) + 28/(x7 x8³) + 4/(x9 x10³)) − 10.86 ≤ 0.   (25)

Geometric constraints: the aspect-ratio conditions g_{6+i}(x) = x_{2i}/x_{2i−1} − 20 ≤ 0, i = 1, ..., 5.
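The objective and constraints above can be evaluated as follows. This is a sketch under stated assumptions: the volume objective V = 100·Σ b_i h_i is inferred from the 100 cm section lengths, the variable ordering is taken as x = [b1, h1, ..., b5, h5], and `penalized` applies the penalty method of Eq. (5) with the paper's factor R = 10⁷.

```python
import numpy as np

# Constant terms of the stress constraints (20)-(24) and the
# deflection constraint (25).
STRESS = np.array([10.7143, 8.5714, 6.4286, 4.2857, 2.1429])
DEFL = np.array([244.0, 148.0, 76.0, 28.0, 4.0])

def volume(x):
    """Assumed objective: beam volume with 100 cm section lengths."""
    b, h = x[0::2], x[1::2]
    return 100.0 * np.sum(b * h)

def constraints(x):
    """All inequality constraints, each required to be <= 0."""
    b, h = x[0::2], x[1::2]
    g_stress = STRESS - b * h**2 / 1e3                 # Eqs. (20)-(24)
    g_defl = 1e4 * np.sum(DEFL / (b * h**3)) - 10.86   # Eq. (25)
    g_aspect = h / b - 20.0                            # aspect ratio < 20
    return np.concatenate([g_stress, [g_defl], g_aspect])

def penalized(x, R=1e7):
    """Penalty function of Eq. (5), inequality constraints only."""
    g = constraints(x)
    return volume(x) + R * np.sum(np.maximum(0.0, g) ** 2)
```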
Table 9
Comparison of SDGA’s optimization results with the literature data [31,32] for the
cantilevered beam problem.
Table 10
Comparison among the constrained values in the cantilevered beam with
SDGAMINLS and methods [31] and [32].
problem in this paper is addressed by using the penalty function method, Eq. (5). The penalty factor R is set to 10⁷. The population size is set to 60, the replacement rate to 70% (the elite rate is 30%), the crossover rate to pc = 0.7, and the mutation rate to pm = 0.1.
The optimization results (the best objective function values and the design variable values) that are obtained using the
proposed SDGAMINLS, the MPNN (Mathematical Programming Neural Network) [31], and the SUMT (Sequential Uncon-
strained Minimization Techniques) [32] are listed in Table 9, and the constraints are listed in Table 10. The MPNN is a meth-
od that uses neural networks to solve nonlinear optimization problems. In the literature [32], the ALM (Augmented Lagrange
Multiplier) method is used first to transform the problem into an unconstrained optimization problem; the solution is then
searched out using the SUMT. It is clear from Table 9 that the function value obtained from the proposed SDGAMINLS is slightly better than that from the MPNN and is also superior to that from the SUMT. At the same time, Table 10 reveals that three of the MPNN's constraints, g3, g10, and g11, are not satisfied.
5. Conclusions
(1) Based on the framework of real-coded genetic algorithm, this paper borrows from search mechanisms of Nelder–
Mead’s simplex method to propose two operators, directed crossover operator and directed mutation operator, and
develop a new variant of real-coded genetic algorithm, called as the directed genetic algorithm (DGA).
(2) Using the Taguchi method, a parameter analysis of the DGA on two functions is performed to determine the best com-
bination of parameters and the degree of influence of each parameter.
(3) The results of applying the DGA to the six benchmark functions indicate that the DGA itself already converges quickly.
To improve the solution accuracy, four strategies incorporating a local search method are proposed and applied to six
benchmark functions, including three uni-modal ones and three multi-modal ones. For unimodal functions, SDGALS
has the best performance, while for multimodal functions, SDGAMINLS outperforms the others. Moreover, the local
search method adopted in this paper, the Nelder–Mead simplex method, has shown its ability to effectively improve
the accuracy of the solution.
(4) In comparison with the results of the eleven algorithms from the literature [11,17,20,25] applied to the six
30- and 100-dimensional benchmark functions, the proposed algorithms have demonstrated superior performance
and the ability to find a solution nearly identical to the global optimum.
(5) A constrained optimization problem, the cantilevered beam design optimization problem, has been addressed in
this paper using the penalty function method. The results of the directed genetic algorithm are better than those of
the other studies in the literature [31,32].
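As a conceptual sketch only, the directed crossover summarized in point (1) can be pictured through the two Nelder–Mead moves it borrows: stepping from the worse parent through the better one (reflection) and further along the same direction (expansion). The coefficients and the exact update rule below are illustrative assumptions, not the paper's operator definition.

```python
# Illustrative reflection/expansion step in the spirit of the directed
# crossover; the expansion coefficient is an assumed value.
def directed_crossover(worse, better, expand=2.0):
    """Return reflection and expansion offspring of two parent vectors."""
    # Reflection: mirror the worse parent through the better one.
    reflected = [b + (b - w) for w, b in zip(worse, better)]
    # Expansion: step further along the improving direction.
    expanded = [b + expand * (b - w) for w, b in zip(worse, better)]
    return reflected, expanded
```

Both offspring lie on the line from the worse parent through the better one, which is what gives the operator its directed character.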
References
[1] J.H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, 1975.
[2] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural Networks, vol. 4, Perth, Australia,
1995, pp. 1942–1948.
[3] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of co-operating agents, IEEE Trans. Syst. Man Cybern. Part B Cybern. 26 (1996)
29–41.
[4] R. Storn, K. Price, Differential evolution – A simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997)
341–359.
[5] E. Bonabeau, M. Dorigo, G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, 1999.
[6] E. Bonabeau, G. Theraulaz, Swarm smarts, Sci. Am. (2000) 72–79.
[7] S.I. Birbil, S.C. Fang, An electromagnetism-like mechanism for global optimization, J. Global Optim. 25 (2003) 263–282.
[8] K.A. De Jong, An analysis of the behavior of a class of genetic adaptive systems, Ph.D. Dissertation, University of Michigan, Ann Arbor, 1975.
[9] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, Reading, MA, 1989.
[10] D.E. Goldberg, Real-coded genetic algorithms, virtual alphabets, and blocking, Complex Syst. 5 (1991) 139–167.
7364 H.-C. Kuo, C.-H. Lin / Applied Mathematics and Computation 219 (2013) 7348–7364
[11] R. Arora, R. Tulshyan, K. Deb, Parallelization of binary and real-coded genetic algorithms on GPU using CUDA, IEEE Congr. Evol. Comput. 2010 (2010) 1–
8.
[12] A. Wright, Genetic algorithms for real parameter optimization, in: G.J.E. Rawlin (Ed.), Foundations of Genetic Algorithms 1, Morgan Kaufmann, San
Mateo, 1991, pp. 205–218.
[13] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, New York, 1992.
[14] L.J. Eshelman, J.D. Schaffer, Real-coded genetic algorithms and interval-schemata, in: L. Darrel Whitley (Ed.), Foundations of Genetic Algorithms 2,
Morgan Kaufmann, San Mateo, 1993, pp. 187–202.
[15] I. Ono, S. Kobayashi, A real-coded genetic algorithm for function optimization using unimodal normal distribution crossover, in: Proceedings of the 7th
International Conference on Genetic Algorithms, 1997, pp. 246–253.
[16] K. Deb, A population-based algorithm-generator for real-parameter optimization, Soft Comput. 9 (2005) 236–253.
[17] K. Deep, M. Thakur, A new mutation operator for real coded genetic algorithms, Appl. Math. Comput. 193 (2007) 211–230.
[18] M.E. Farmer, S. Bapna, A.K. Jain, Large scale feature selection using modified random mutation hill climbing, in: Proceedings of the 17th International
Conference on Pattern Recognition (ICPR 2004), 2004.
[19] A. Neubauer, A theoretical analysis of the non-uniform mutation operator for the modified genetic algorithm, in: IEEE International Conference on
Evolutionary Computation, 1997, pp. 93–96.
[20] Y.J. Gong, X. Hu, J. Zhang, O. Liu, H.L. Liu, A linear map-based mutation scheme for real coded genetic algorithms, IEEE Congr. Evol. Comput. 2010 (2010)
1–7.
[21] M. Tadahiko, I. Hisao, T. Hideo, Genetic algorithms for flow-shop scheduling problems, Comput. Ind. Eng. 30 (1996) 1061–1071.
[22] S. Ayed, A. Imtiaz, A.M. Sabah, Particle swarm optimization for task assignment problem, Microprocess. Microsys. 26 (2002) 363–371.
[23] R. Chelouah, P. Siarry, Genetic and Nelder–Mead algorithms hybridized for a more accurate global optimization of continuous multi-minima functions,
Eur. J. Oper. Res. 148 (2003) 335–348.
[24] J.A. Nelder, R. Mead, A simplex method for function minimization, Comput. J. 7 (1965) 308–313.
[25] A. Adnan, G. Akin, Enhanced particle swarm optimization through external memory support, IEEE Trans. Evol. Comput. 2 (2005) 1875–1882.
[26] G. Taguchi, Techniques for Quality Engineering, Asian Productivity Organization, Tokyo, 1990.
[27] E.O. Wilson, Sociobiology: The New Synthesis, Twenty-Fifth Anniversary Edition, Harvard University Press, 2000 (first published 1975).
[28] H.C. Kuo, J.L. Wu, A new approach with orthogonal array for global optimization in design of experiments, J. Global Optim. 44 (2009) 563–578.
[29] R.A.E. Mäkinen, J. Periaux, J. Toivanen, Multidisciplinary shape optimization in aerodynamics and electromagnetics using genetic algorithms, Int. J.
Numer. Methods Fluids 30 (1999) 149–159.
[30] X. Yao, Y. Liu, G. Lin, Evolutionary programming made faster, IEEE Trans. Evol. Comput. 3 (1999) 82–102.
[31] K.C. Gupta, J. Li, Robust design optimization with mathematical programming neural networks, Comput. Struct. 76 (2000) 507–516.
[32] A.V. Fiacco, G.P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, John Wiley & Sons, Inc., New York, 1968.