
Int. J. Mathematical Modelling and Numerical Optimisation, Vol. x, No. x, xxxx

Backtracking Search Algorithm for Multiobjective Design Optimization
Abstract: In engineering, most design problems are multi-objective and subject to complex nonlinear constraints. For multi-objective problems, the computing effort often rises significantly with the number of objective and constraint evaluations. Metaheuristics are nowadays seen as powerful algorithms for dealing with multiobjective optimization problems. In this paper, we develop a new Backtracking Search Algorithm for Multiobjective Optimization, named BSAMO, built to solve multiobjective design problems. We evaluate the performance of the algorithm on a benchmark of test problems and on two structural multiobjective design optimisation problems. The performance of BSAMO is compared to that of the NSGA-II algorithm, which is considered one of the best available metaheuristics. The numerical results show not only the effectiveness and superior performance of BSAMO compared to NSGA-II, but also its speed and efficiency.

Keywords: Multiobjective optimization; Evolutionary algorithms; Backtracking search; Structural optimization; Design optimization.

1 Introduction

In a single-objective engineering design problem, the optimal solution can be clearly defined, for example in reliability engineering Lopeza et al. (2011), Lopeza et al. (2013) and Kharmanda et al. (2004). However, most engineering design problems involve multiple and often conflicting design objectives. In multi-objective engineering design, the goal is usually to optimize several objectives simultaneously, and these objectives are in conflict with each other in most cases. Consequently, the ideal solution is usually located outside the feasible design space, so the aim is to obtain a set of solutions representing the best trade-offs between the objectives, also called the Pareto front. Formally, a solution is called a Pareto solution if it cannot be improved with respect to one objective without degrading the quality of at least one other objective, and the Pareto front is the image of the Pareto solutions in objective space. To analyse trade-offs in real life, decision makers can usually examine only a limited number of solutions. Therefore, they require a sufficiently uniformly distributed set of solutions that is representative of the entire Pareto front. Generating such a well-spread set is difficult because of features such as discontinuities in the Pareto frontier, a non-uniform density of attainable solutions, non-convexity, and nonlinear objective functions and constraints. In the scientific literature, Multiobjective Optimization Problems (MOPs) are therefore considered very difficult to solve. Despite these difficulties, multiobjective optimization offers many effective algorithms with numerous successful applications Zhou et al. (2011), Mohsine et al. (2006), Talbi (2009) and Ben Abdessalem and El-Hami (2014).
Recently, a significant number of multi-objective evolutionary algorithms (MOEAs) have been used to solve MOPs in science and engineering, approximating the whole Pareto front in a single run. The best known state-of-the-art MOEAs include SPEA2 Zitzler et al. (2002), PAES Corne and Knowles (2000) and NSGA-II Deb et al. (2002), among others Song and Gu (2004) and MODE Wisittipanich and Kachitvichyanukul (2014). On the other hand, many articles have adopted MOEAs to solve engineering design problems, see for example Gong et al. (2009), Yang and Deb (2013), Yang et al. (2014) and Yang (2011). The Backtracking Search Algorithm (BSA) is an evolutionary algorithm recently developed by Civicioglu (2013) for solving real-valued numerical optimization problems, and it has been applied successfully to engineering applications: it has been used for the synthesis of concentric circular antenna arrays (CCAAs) by Guney et al. (2014), to solve the multi-objective economic emission dispatch problem Delshad and Abd-Rahim (2016), and for multi-objective allocation of distributed generators El-Fergany (2016). In this paper, we propose the Backtracking Search Algorithm for Multiobjective Design Optimization (BSAMO), a new way of extending BSA Civicioglu (2013) to make it suitable for solving engineering design problems. This paper is organized as follows. Section 2 introduces BSA. Section 3 focuses on the formulation of MOPs. Section 4 proposes the BSAMO method, which solves the MOPs introduced in Section 3. Section 5 presents simulations of BSAMO and compares it with the NSGA-II algorithm. In the last section we apply the algorithm to design optimization problems in engineering and finish with a short conclusion.

2 Backtracking Search Algorithm

Backtracking Search Algorithm (BSA) Civicioglu (2013) is an evolutionary algorithm recently developed by Civicioglu. According to the author, BSA shows good convergence performance compared to other algorithms and needs only one control parameter. As described in Civicioglu (2013), BSA includes five evolutionary mechanisms: initialization, selection-I, mutation, crossover and selection-II. Hereafter, we give a detailed description of the evolutionary mechanisms used in the original version of BSA.

2.1 Initialization

To generate an initial population, the BSA algorithm uses a uniform random distribution. The individuals of the initial population are built according to Equation (1):

$$P_{i,j} \sim \mathrm{rand}(low_j, up_j), \quad i = 1, \ldots, N, \quad j = 1, \ldots, D, \qquad (1)$$

where N is the population size, D is the number of decision variables of the problem at hand, rand denotes the uniform random distribution, and low_j and up_j are respectively the lower and upper bounds of the j-th decision variable.
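For illustration, a minimal NumPy sketch of this initialization step is given below; the function and argument names (`initialize_population`, `low`, `up`) are ours and not part of the original BSA code.

```python
import numpy as np

def initialize_population(N, D, low, up, rng=np.random.default_rng()):
    """Equation (1): draw N individuals uniformly in the box [low_j, up_j], j = 1..D."""
    low = np.asarray(low, dtype=float)   # lower bounds, shape (D,)
    up = np.asarray(up, dtype=float)     # upper bounds, shape (D,)
    return low + rng.random((N, D)) * (up - low)
```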

2.2 Selection I

This first selection step chooses the historical population (oldP) used to determine the search direction. In the same way as for the initial population, the individuals of the historical population are redefined at the beginning of each iteration, according to Equation (2):

$$\text{if } a < b \mid a, b \sim \mathrm{rand}(0,1) \text{ then } oldP \leftarrow P \qquad (2)$$

The order of the individuals in the historical population oldP is then changed randomly by a shuffling (permutation) function, Equation (3):

$$oldP \leftarrow \mathrm{permuting}(oldP) \qquad (3)$$
2.3 Mutation operator

The initial form of the trial population is generated by the mutation operator of BSA using Equation (4):

$$P^{m} = P + F \times (oldP - P) \qquad (4)$$

where F is a parameter controlling the amplitude of the search-direction matrix, computed as the difference between the historical and current population matrices (oldP − P). In this paper, we use F = α · N(0, 1), where α = 3 and N(0, 1) is the standard normal distribution.
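A sketch of selection-I (Equations (2)-(3)) and of the mutation step (Equation (4)) in the same NumPy style follows; α = 3 is the value used in this paper, while the function names are illustrative.

```python
def selection_I(P, oldP, rng=np.random.default_rng()):
    """Equations (2)-(3): possibly refresh the historical population, then shuffle it."""
    a, b = rng.random(2)
    if a < b:
        oldP = P.copy()                      # oldP memorises the current population
    return oldP[rng.permutation(len(oldP))]  # random permutation of the individuals

def mutate(P, oldP, alpha=3.0, rng=np.random.default_rng()):
    """Equation (4): Pm = P + F * (oldP - P) with F = alpha * N(0, 1)."""
    F = alpha * rng.standard_normal()
    return P + F * (oldP - P)
```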

2.4 Crossover Operator

From the initial trial population built by the mutation process, the crossover mechanism of BSA creates the final form of the trial population Pc. The crossover operator thus works on the mutant population Pm. Based on their fitness values, the best individuals of the trial population are selected to guide the search through the target population individuals. The crossover operator of BSA has two main parts. First, a binary N × D matrix called 'map' is computed; this matrix determines which components of the final trial population Pc are taken from the mutant population Pm and which are kept from the target population P, for {i, j} ∈ {1, ..., N} × {1, ..., D}. The second step of BSA's crossover strategy consists in repairing the individuals of the trial population generated in the first step that overflow the allowed search space; those individuals are regenerated so that they stay within the allowed search space. Algorithm 1 presents BSA's crossover process:

Algorithm 1: Crossover process of BSA

Input: Pm, P, N and D
Output: crossover (final trial) population Pc
map(1:N, 1:D) = 0
IF rand ≤ rand THEN
    FOR each individual i = 1, ..., N: map(i, u(1 : ⌈rand · D⌉)) = 1, where u = permuting(1, ..., D) END FOR
ELSE
    FOR each individual i = 1, ..., N: map(i, randi(D)) = 1 END FOR
END IF
Pc = Pm .* map + P .* (1 − map)
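A possible NumPy transcription of Algorithm 1, including the boundary-control (repair) step described above, is sketched below; the `mixrate` parameter and the helper names are ours, not the paper's.

```python
def crossover(Pm, P, mixrate=1.0, rng=np.random.default_rng()):
    """Algorithm 1: build the binary 'map' matrix and mix the mutant Pm with P."""
    N, D = P.shape
    map_ = np.zeros((N, D), dtype=bool)
    if rng.random() < rng.random():
        # strategy 1: mark a random subset of dimensions for each individual
        for i in range(N):
            k = max(1, int(np.ceil(mixrate * rng.random() * D)))
            map_[i, rng.permutation(D)[:k]] = True
    else:
        # strategy 2: mark a single random dimension for each individual
        map_[np.arange(N), rng.integers(0, D, size=N)] = True
    return np.where(map_, Pm, P)             # Pc = Pm .* map + P .* (1 - map)

def repair(Pc, low, up, rng=np.random.default_rng()):
    """Boundary control: regenerate coordinates that left the allowed search space."""
    low, up = np.asarray(low, dtype=float), np.asarray(up, dtype=float)
    out = (Pc < low) | (Pc > up)
    redraw = low + rng.random(Pc.shape) * (up - low)
    return np.where(out, redraw, Pc)
```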

2.5 Selection II

The BSA algorithm performs a second selection step on the trial population generated by the crossover step: the individuals of the trial population that have a better fitness than the corresponding individuals of the current population replace them. Similarly, if the best individual of the updated population has a better fitness than the global minimum obtained so far, BSA uses this best individual as the new global minimum in the next iteration.
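A minimal sketch of selection-II for the single-objective BSA follows; the fitness function `fitness`, returning one value per individual to be minimised, is an assumption of ours.

```python
def selection_II(P, Pc, fitness):
    """Greedy replacement: keep a trial individual wherever it improves the fitness."""
    fP, fPc = fitness(P), fitness(Pc)        # fitness vectors of shape (N,)
    improved = fPc < fP
    P_new = np.where(improved[:, None], Pc, P)
    f_new = np.where(improved, fPc, fP)
    best = P_new[np.argmin(f_new)]           # candidate new global best of this generation
    return P_new, best
```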
3 Multi-objective optimization problems

Let us consider the following Multi-objective Optimization Problem (MOP):

$$
\begin{cases}
\min\limits_{x \in \Omega} \; f(x) = (f_1(x), \ldots, f_m(x))^T \\
\text{subject to } \; g_i(x) \le 0, \quad i = 1, \ldots, l \\
\phantom{\text{subject to }} \; h_j(x) = 0, \quad j = 1, \ldots, k,
\end{cases}
\qquad (5)
$$

where m is the number of objective functions, x = (x_1, ..., x_n) is a vector of the n-dimensional decision space Ω, g_i(x) (i = 1, ..., l) are l inequality constraints and h_j(x) (j = 1, ..., k) are k equality constraints. Let us first introduce some basic but useful concepts for the rest of our paper Bosman and Thierens (2003):

3.1 Pareto dominance:

Let a = (a_1, ..., a_n) and b = (b_1, ..., b_n) be two solution vectors. The solution a is said to dominate b (denoted by a ≺ b) if and only if f(a) is partially less than f(b), i.e.

$$(\forall k \in \{1, \ldots, m\}: f_k(a) \le f_k(b)) \wedge (\exists k \in \{1, \ldots, m\}: f_k(a) < f_k(b)) \qquad (6)$$
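Equation (6) translates directly into a small dominance test; the sketch below assumes minimisation and takes the two objective vectors as arguments.

```python
def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (Equation 6, minimisation)."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))
```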

3.2 Pareto-optimal:

A solution a ∈ Ω is said to be Pareto-optimal with respect to (w.r.t.) Ω if and only if there is no solution b ∈ Ω for which f(b) = (f_1(b), ..., f_m(b)) dominates f(a) = (f_1(a), ..., f_m(a)):

$$\neg \exists\, b \in \Omega \text{ such that } f(b) \prec f(a) \qquad (7)$$

In other words, a solution is Pareto-optimal if there is no feasible solution which can improve one criterion without simultaneously degrading at least one other criterion.

3.3 Pareto-optimal set:

For a given MOP, the set PS of all Pareto-optimal solutions is called the Pareto set and is defined as follows:

$$PS = \{x \in \Omega \mid \neg \exists\, a \in \Omega : f(a) \prec f(x)\} \qquad (8)$$

3.4 Pareto-optimal front:

For a given MOP and its corresponding Pareto set PS, the Pareto front (denoted PF) is defined as follows:

$$PF = \{f(x) = (f_1(x), \ldots, f_m(x))^T \mid x \in PS\} \qquad (9)$$

The Pareto front is the collection of all nondominated vectors when plotted in the objective
space.
4 The proposed algorithm

BSA was initially developed for single-objective optimisation problems only. Here, we extend BSA to make it suitable for multi-objective optimisation problems. For this purpose, BSAMO integrates the fast non-dominated sorting and the crowding distance proposed by Deb et al. (2002). Moreover, BSAMO uses the mutation and crossover operators presented in Equation (4) and Algorithm 1.

4.1 Fast nondominated sorting

The fast non-dominated sorting was developed in the framework of the NSGA-II algorithm Deb et al. (2002). For every solution p, two quantities are calculated: the domination count n_p, i.e. the number of solutions which dominate p, and the set S_p of solutions that p dominates. The first non-dominated front is created and initialised with all solutions having a domination count of zero. Then, for each solution p with n_p = 0, each member q of its set S_p is visited and its domination count is reduced by one. If, for any member, the domination count becomes zero, it is put in a separate list Q. The second non-dominated front is then created as the union of all individuals belonging to Q. The procedure is repeated for the subsequent fronts (F_3, F_4, etc.) until all individuals are assigned a rank. The fitness of an individual is set to its front number; the lower the front number, the higher the fitness (F_1 is the best).
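A compact sketch of this procedure, operating on an (N, m) matrix of objective values and reusing the `dominates` test sketched in Section 3.1, is given below; the names and structure are ours.

```python
def fast_nondominated_sort(F):
    """Return a list of fronts (lists of row indices of F), best front first."""
    N = len(F)
    S = [[] for _ in range(N)]          # S[p]: indices of solutions dominated by p
    n = np.zeros(N, dtype=int)          # n[p]: number of solutions dominating p
    fronts = [[]]
    for p in range(N):
        for q in range(N):
            if p == q:
                continue
            if dominates(F[p], F[q]):
                S[p].append(q)
            elif dominates(F[q], F[p]):
                n[p] += 1
        if n[p] == 0:
            fronts[0].append(p)         # first front: non-dominated solutions
    i = 0
    while fronts[i]:
        next_front = []
        for p in fronts[i]:
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:
                    next_front.append(q)
        fronts.append(next_front)
        i += 1
    return fronts[:-1]                  # drop the trailing empty front
```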

4.2 Crowding distance

The crowding distance, used here in the form proposed by Li and Liu (2011), is an estimate of the density of individuals surrounding a given individual i in the population. It is based on the average distance between the two individuals situated on either side of the given solution along each objective, as shown in Figure 1, which depicts the individuals i − 1 and i + 1 bordering the individual i on the Pareto front. This distance is an estimate of the perimeter of the cuboid formed by the nearest neighbours; the metric represents half of the perimeter of the cuboid enclosing solution i. The crowding distance of each individual of a front is computed from its m objective values. The computation, based on the normalised values of the objectives, is given in Algorithm 2, where f_m^max and f_m^min are respectively the maximum and minimum values of the m-th objective function. The sum of the individual crowding-distance values corresponding to each objective gives the overall crowding-distance value.

Algorithm 2: Crowding-distance calculation for a set of solutions I

n = length(I)
FOR each i: set I[i].distance = 0 END FOR
FOR each objective m
    I = sort(I, m)
    I[1].distance = I[n].distance = ∞
    FOR i = 2 to (n − 1)
        I[i].distance = I[i].distance + (I[i + 1].m − I[i − 1].m) / (f_m^max − f_m^min)
    END FOR
END FOR

Here, I is a non-dominated set, n is its number of elements, I[i].m is the m-th objective value of the i-th individual in I, and sort(I, m) sorts the individuals of I according to the m-th objective.
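Algorithm 2 can be sketched in vectorised form as follows; the input is the (n, m) matrix of objective values of one front, and the normalisation by f_m^max − f_m^min is skipped when a front is degenerate in an objective.

```python
def crowding_distance(F):
    """Crowding distance of each member of one front; F has shape (n, m)."""
    n, m = F.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])                      # sort the front by objective k
        dist[order[0]] = dist[order[-1]] = np.inf        # boundary solutions are always kept
        span = F[order[-1], k] - F[order[0], k]
        if n > 2 and span > 0:
            dist[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return dist
```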
The pseudo-code of BSAMO is given in Algorithm 3.

Algorithm 3: Pseudo-code of BSAMO

Input: N (population size), D (number of decision variables), MaxGen (maximum number of generations)
g ← 0
Randomly initialise the population P^g = {x^g_1, x^g_2, ..., x^g_N}, with x^g_i = (x^g_{i1}, x^g_{i2}, ..., x^g_{iD}) for i = 1, ..., N
Evaluate F(P^g), the objective values of each vector x^g_i
Perform non-dominated sorting of F(P^g) and calculate the crowding distances
WHILE g ≤ MaxGen
    Apply the mutation operator of Equation (4) to generate a mutant population Pm^g from the population P^g, then the crossover operator of Algorithm 1 to Pm^g to obtain the final trial population Pc^g
    Combine the populations: C^g = Pc^g ∪ P^g
    Evaluate the objective values F(C^g) of each vector of the combined population C^g
    Perform non-dominated sorting and compute the crowding distances of F(C^g), and keep the best N individuals to form the next population
    g = g + 1
END WHILE
Combine the Pareto solutions obtained over the runs.
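The following condensed sketch shows how these pieces can be assembled into the BSAMO loop. It reuses the illustrative helpers sketched earlier (`initialize_population`, `selection_I`, `mutate`, `crossover`, `repair`, `fast_nondominated_sort`, `crowding_distance`) and assumes an objective function `f` that maps an (N, D) population to an (N, m) matrix of (possibly constraint-penalised) objective values; none of these names come from the paper.

```python
def bsamo(f, N, D, low, up, max_gen, rng=np.random.default_rng()):
    """Minimal BSAMO sketch: BSA variation operators + NSGA-II-style environmental selection."""
    P = initialize_population(N, D, low, up, rng)
    oldP = initialize_population(N, D, low, up, rng)
    for _ in range(max_gen):
        oldP = selection_I(P, oldP, rng)
        Pm = mutate(P, oldP, rng=rng)
        Pc = repair(crossover(Pm, P, rng=rng), low, up, rng)
        C = np.vstack([P, Pc])                            # combined population C = Pc U P
        FC = f(C)
        survivors = []                                    # fill the next population front by front
        for front in fast_nondominated_sort(FC):
            if len(survivors) + len(front) <= N:
                survivors.extend(front)
            else:                                         # last admitted front: keep the least crowded out
                cd = crowding_distance(FC[front])
                keep = np.asarray(front)[np.argsort(-cd)][: N - len(survivors)]
                survivors.extend(keep.tolist())
                break
        P = C[survivors]
    FP = f(P)
    first_front = fast_nondominated_sort(FP)[0]
    return P[first_front], FP[first_front]                # approximate Pareto set and front
```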

5 Numerical simulations

In this section, we present the numerical experiments performed on a benchmark of eight test functions. Our algorithm BSAMO was used to solve these test problems and was compared with a reference MOEA, NSGA-II Deb et al. (2002). Two metrics have been used to compare the performance of BSAMO with that of the reference algorithm.

5.1 Performance metrics

Multiobjective optimisation algorithms, unlike single-objective ones, aim simultaneously to ensure convergence to the Pareto-optimal set and to maintain diversity within it. Two kinds of metrics are therefore necessary to adequately evaluate these algorithms: convergence metrics and diversity metrics. To measure the performance of our algorithm, we opted for the two following metrics: the Generational Distance (GD), proposed by Van Veldhuizen and Lamont (1998), which measures the extent of convergence to a true set of Pareto-optimal solutions, and the Spacing metric, which measures the extent of the spread of the obtained non-dominated solutions.
5.1.1 Spacing metric:

The Spacing metric (SP) was introduced by Schott (1995). It measures the uniformity of the distribution of the points of the obtained solution set in the objective space. Its mathematical formula is as follows:

$$S = \sqrt{\frac{1}{|P|} \sum_{i=1}^{|P|} (\bar{d} - d_i)^2}, \qquad (10)$$

where

$$d_i = \min_{j \ne i} \sum_{k=1}^{m} |f_k(p_i) - f_k(p_j)|, \quad i = 1, \ldots, |P|, \qquad (11)$$

m is the number of objective functions and $\bar{d}$ is the average value of all d_i.
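A small sketch of this metric follows, where `F` is the (|P|, m) matrix of objective values of the obtained front; the function name is ours.

```python
def spacing(F):
    """Schott's spacing metric (Equations 10-11) of an approximation front F."""
    F = np.asarray(F, dtype=float)
    n = len(F)
    # d_i: smallest sum of absolute objective differences to any other point of the front
    d = np.array([min(np.abs(F[i] - F[j]).sum() for j in range(n) if j != i)
                  for i in range(n)])
    return float(np.sqrt(((d.mean() - d) ** 2).sum() / n))
```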

5.1.2 Generational distance:

The Generational Distance (GD) measures the distance between the approximation P of the Pareto-optimal set and the true Pareto front P*:

$$GD(P, P^*) = \frac{\sqrt{\sum_{p \in P} d(p, P^*)^2}}{|P|}, \qquad (12)$$

where d(p, P*) is the minimum Euclidean distance between p and the points of P*.
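A corresponding sketch of GD, where `F` is the obtained front and `F_true` a discretised reference Pareto front (both as arrays of objective vectors):

```python
def generational_distance(F, F_true):
    """Equation (12): GD between an approximation front F and a reference front F_true."""
    F, F_true = np.asarray(F, float), np.asarray(F_true, float)
    d = np.array([np.min(np.linalg.norm(F_true - p, axis=1)) for p in F])
    return float(np.sqrt((d ** 2).sum()) / len(F))
```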

5.2 Comparison of BSAMO with NSGA-II

In this subsection, BSAMO is compared with the NSGA-II algorithm Deb et al. (2002), which is considered a reference in the multi-objective optimization community. For this purpose, we considered eight test MOPs: ZDT1, ZDT2, ZDT3 and ZDT6 proposed by Zitzler and Thiele (1998) and Zitzler (1999), Kursawe and Fonseca chosen from Kursawe (1990) and Fonseca and Fleming (1989), and two constrained test problems, namely OSY proposed by Osyczka and Kundu (1995) and Tanaka proposed by Tanaka (1995).
The Pareto-optimal fronts of these eight problems present several interesting characteristics: convex, non-convex, concave and discontinuous sets of solutions.
The population size and the maximum number of generations have both been set to 100 for all problems. As can be seen from Figure 2, for the ZDT1, ZDT2 and ZDT3 problems, the final non-dominated fronts obtained by BSAMO are more diverse and are much better approximations of the true Pareto front (PF) than those found by NSGA-II. For ZDT6, Kursawe and Fonseca, BSAMO obtained better diversity than NSGA-II. For the Tanaka function, the final Pareto front found by BSAMO has better approximation quality than that obtained by NSGA-II. The Pareto front obtained by BSAMO for problem OSY is much better than that of NSGA-II in terms of both approximation and uniformity. With 10,000 function evaluations, we can conclude that BSAMO performs similarly to or better than NSGA-II in terms of both approximation and uniformity.

Problem    Statistic    GD (BSAMO)    GD (NSGA-II)    SP (BSAMO)    SP (NSGA-II)
ZDT1 Average 1.9332057e-04 2.9551035e-04 8.3318911e-03 9.9576397e-03
STD. Dev. 5.4992218e-05 4.3787857e-05 1.4874490e-03 6.1334206e-04
Median 2.0086169e-04 2.9336616e-04 8.3536855e-03 1.0068180e-02
ZDT2 Average 9.2557854e-05 2.2868761e-04 8.6931978e-03 1.0378741e-02
STD. Dev. 6.4796965e-06 3.2438807e-05 1.5567553e-03 7.6266010e-04
Median 9.2971188e-05 2.3077645e-04 9.2323898e-03 1.0374613e-02
ZDT3 Average 1.5686597e-04 1.8580478e-04 1.0545011e-02 2.0146783e-02
STD. Dev. 1.2808089e-05 1.7098021e-05 8.5796544e-04 7.6755400e-03
Median 1.5639053e-04 1.8769260e-04 1.0446814e-02 2.0191533e-02
ZDT6 Average 7.0749926e-05 7.1167404e-05 7.3880130e-03 8.5770620e-03
STD. Dev. 4.8712366e-06 3.5178331e-06 9.8349305e-04 3.9006884e-04
Median 7.0684795e-05 7.1366966e-05 7.6396622e-03 8.5718258e-03
Fonseca Average 2.7989221e-04 3.0065864e-04 3.5203914e-03 5.3891371e-03
STD. Dev. 3.3520272e-05 3.1116652e-05 6.7429997e-04 3.8175319e-04
Median 2.7991972e-04 2.9705892e-04 6.5559554e-03 3.4124656e-03
Kursawe Average 1.6745161e-03 2.1893311e-03 8.9990255e-02 8.8457263e-02
STD. Dev. 2.6183193e-04 3.1584969e-04 1.2727878e-02 1.5233638e-02
Median 1.6009207e-03 2.1539100e-03 7.7906278e-02 9.1567442e-02
Tanaka Average 3.3164734e-01 3.0189159e-01 1.4131062e+00 1.3296550e+00
STD. Dev. 8.5356017e-02 4.5854287e-02 1.9489467e-01 2.9204362e-01
Median 3.1613798e-01 2.9237446e-01 1.3995290e+00 1.2911461e+00
OSY Average 6.7119780e-04 8.5937344e-04 6.6138209e-03 6.7974225e-03
STD. Dev. 1.0399835e-04 1.1609770e-04 1.4246481e-03 1.1234016e-03
Median 6.7181790e-04 8.3098530e-04 6.3362175e-03 6.8506880e-03
Table 1 Statistical results of the Generational Distance (GD) and Spacing (SP) metrics for the eight test problems

Table 1 provides the experimental results of the two algorithms on the eight test problems. It gives the mean, standard deviation and median of the Generational Distance (GD) and the Spacing metric (SP) obtained by both algorithms over 30 independent runs. As GD is a good metric to benchmark the convergence of an algorithm, these results indicate that BSAMO has better convergence than NSGA-II on seven of the eight test problems; for the Tanaka test problem, the mean GD obtained by BSAMO is very close to that obtained by NSGA-II. For the Spacing metric (SP), BSAMO provides the best performance on seven problems, while NSGA-II shows slightly better coverage on the others.

6 Design optimization problems

In this section we solve two design optimization problems taken from Gong et al. (2009) and Yang et al. (2014) using our BSAMO algorithm. These problems were selected to test the efficiency and applicability of the algorithm for multiobjective design optimization.
6.1 Disc brake design

In this example, we deal with the disc brake design problem studied by Gong et al. (2009) and Yang and Deb (2013). The objectives are to minimize the mass of the brake and to minimize the stopping time. There are four design variables: the inner radius of the discs x1, the outer radius of the discs x2, the engaging force x3 and the number of friction surfaces x4. The mathematical formulation of the problem is as follows:

$$
\begin{cases}
\min\limits_{x} \; f_1(x) = 4.9 \times 10^{-5} (x_2^2 - x_1^2)(x_4 - 1) \\
\min\limits_{x} \; f_2(x) = \dfrac{9.82 \times 10^{6} (x_2^2 - x_1^2)}{x_3 x_4 (x_2^3 - x_1^3)} \\
\text{subject to} \\
c_1(x) = 20 - (x_2 - x_1) \le 0 \\
c_2(x) = 2.5(x_4 + 1) - 30 \le 0 \\
c_3(x) = \dfrac{x_3}{3.14\,(x_2^2 - x_1^2)} - 0.4 \le 0 \\
c_4(x) = \dfrac{2.22 \times 10^{-3}\, x_3 (x_2^3 - x_1^3)}{(x_2^2 - x_1^2)^2} - 1 \le 0 \\
c_5(x) = 900 - \dfrac{2.66 \times 10^{-2}\, x_3 x_4 (x_2^3 - x_1^3)}{x_2^2 - x_1^2} \le 0
\end{cases}
\qquad (13)
$$

The plot in Figure 3 shows the approximate Pareto fronts obtained by the two algorithms on the disc brake design problem. BSAMO generally yields better approximations of the Pareto front than NSGA-II.
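For reference, the sketch below shows one way the objectives and constraints of Equation (13) could be evaluated for a single design vector x = (x1, x2, x3, x4); the constraint values are returned in the g(x) ≤ 0 convention and the function name is ours.

```python
def disc_brake(x):
    """Disc brake design problem of Equation (13): returns (f1, f2) and the constraints."""
    x1, x2, x3, x4 = x           # inner radius, outer radius, engaging force, friction surfaces
    f1 = 4.9e-5 * (x2**2 - x1**2) * (x4 - 1)
    f2 = 9.82e6 * (x2**2 - x1**2) / (x3 * x4 * (x2**3 - x1**3))
    g = [
        20.0 - (x2 - x1),                                               # c1
        2.5 * (x4 + 1) - 30.0,                                          # c2
        x3 / (3.14 * (x2**2 - x1**2)) - 0.4,                            # c3
        2.22e-3 * x3 * (x2**3 - x1**3) / (x2**2 - x1**2)**2 - 1.0,      # c4
        900.0 - 2.66e-2 * x3 * x4 * (x2**3 - x1**3) / (x2**2 - x1**2),  # c5
    ]
    return (f1, f2), g
```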

6.2 Design of a welded beam

We consider here the multi-objective design of a welded beam as formulated in Gong et al. (2009) and Yang and Deb (2013). In this problem, we optimize two objective functions, namely the fabrication cost f1 and the end deflection f2 = δ. The problem has four design variables: the width w and length L of the welded area, and the depth d and thickness h of the main beam. The mathematical formulation of this design problem is as follows:

$$
\begin{cases}
\min\limits_{x} \; f_1(x) = 1.10471\, w^2 L + 0.04811\, d h (14 + L) \\
\min\limits_{x} \; f_2(x) = \delta \\
\text{subject to} \\
c_1(x) = w - h \le 0, \quad c_2(x) = \delta - 0.25 \le 0, \quad c_3(x) = \tau - 13600 \le 0 \\
c_4(x) = \sigma - 30000 \le 0, \quad c_5(x) = 0.10471\, w^2 + 0.04811\, h d (14 + L) - 5.0 \le 0 \\
c_6(x) = 0.125 - w \le 0, \quad c_7(x) = 6000 - P \le 0
\end{cases}
\qquad (14)
$$

where

$$
\begin{cases}
Q = 6000\left(14 + \dfrac{L}{2}\right), \quad D = \dfrac{1}{2}\sqrt{L^2 + (w + d)^2}, \quad J = \sqrt{2}\, w L \left[\dfrac{L^2}{6} + \dfrac{(w + d)^2}{2}\right] \\
\alpha = \dfrac{6000}{\sqrt{2}\, w L}, \quad \beta = \dfrac{Q D}{J}, \quad \tau = \sqrt{\alpha^2 + \dfrac{\alpha \beta L}{D} + \beta^2} \\
\sigma = \dfrac{504000}{h d^2}, \quad \delta = \dfrac{65856}{30000\, h d^3} \\
P = \dfrac{4.013 \times 30 \times 10^{6}}{196} \cdot \dfrac{d h^3}{6} \left(1 - \dfrac{d}{28}\sqrt{\dfrac{30}{48}}\right)
\end{cases}
\qquad (15)
$$

with 0.1 ≤ L, d ≤ 10 and 0.125 ≤ w, h ≤ 2.

The resulting Pareto front is depicted in Figure 4, with the fabrication cost f1 of the welded beam on the horizontal axis and the end deflection f2 = δ on the vertical axis. The Pareto front curves were obtained after 100 generations with a population size of 100 (see Figure 4). As can be seen, BSAMO yields a better-distributed and better-approximated Pareto front (PF) than NSGA-II.

7 Conclusions

In this paper, we proposed a new algorithm based on the Backtracking Search Algorithm (BSA) for solving multiobjective optimization problems. The performance of this approach was evaluated on eight multiobjective benchmark functions. The experimental results show that our algorithm finds better solutions than the comparison algorithm and that, according to the performance metrics, BSAMO produces solutions closer to the Pareto-optimal front for the majority of these problems. BSAMO was also applied to two structural design optimization problems, which demonstrates the robustness of our algorithm. According to the results, BSAMO requires smaller populations and fewer generations, and attains better efficiency and good convergence towards the Pareto-optimal front.

Acknowledgements

References

Ben Abdessalem, A. and El-Hami, A. (2014) ’Global sensitivity analysis and multi-objective
optimisation of loading path in tube hydroforming process based on metamodelling
techniques’, International Journal of Advanced Manufacturing Technology, Vol. 71, pp.
753-773

Bosman, P. A. N. and Thierens, D. (2003) 'The balance between proximity and diversity in multiobjective evolutionary algorithms', IEEE Transactions on Evolutionary Computation, Vol. 7, No. 2, pp. 174-188

Civicioglu, P. (2013) 'Backtracking Search Optimization Algorithm for numerical optimization problems', Applied Mathematics and Computation, Vol. 219, No. 15, pp. 8121-8144

Corne, D. W. and Knowles, J. D. (2000) 'PAES: Approximating the nondominated front using the Pareto Archived Evolution Strategy', Evolutionary Computation, Vol. 8, No. 2, pp. 149-172

Van Veldhuizen, D. A. and Lamont, G. B. (1998) 'Evolutionary computation and convergence to a Pareto front', In: Koza, J. R. (Ed.) Late Breaking Papers at the Genetic Programming 1998 Conference, Stanford University, California, pp. 1129-1141

Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T. (2002) 'A fast and elitist multiobjective genetic algorithm: NSGA-II', IEEE Transactions on Evolutionary Computation, Vol. 6, No. 2, pp. 182-197
Delshad, M. M. and Abd-Rahim (2016) 'Multi-objective backtracking search algorithm for economic emission dispatch problem', Applied Soft Computing, Vol. 44, No. 3, pp. 479-494
El-Fergany, A. (2015) ’ Multi-objective Allocation of Multi-type Distributed Generators
along Distribution Networks Using Backtracking Search Algorithm and Fuzzy Expert
Rules’ , Engineering Optimization, Vol. 44, No. 3, pp. 252-267
Fonseca, C. M. and Fleming, P. J. (1989) 'Multiobjective optimization and multiple constraint handling with evolutionary algorithms - Part II: application example', IEEE Transactions on Systems, Man and Cybernetics, Vol. 28, pp. 38-47
Gong, W., Cai, Z. and Zhu, L. (2009) 'An effective multiobjective differential evolution algorithm for engineering design', Structural and Multidisciplinary Optimization, Vol. 38, pp. 137-157
Guney, K., Durmus, A. and Basbug, S. (2014) 'Backtracking Search Optimization Algorithm for Synthesis of Concentric Circular Antenna Arrays', International Journal of Antennas and Propagation.
Kharmanda, G., Olhoff, N. and El-Hami, A. (2004) ’Optimum values of structural safety
factors for a predefined reliability level with extension to multiple limit states’, Structural
and Multidisciplinary Optimization, Vol. 27, No. 6, pp. 421-434
Kursawe F. (1990) ’ A variant of evolution strategies for vector optimization’, Proceedings
of the parallel problem solving from nature first workshop PPSN I. Lecture notes in
computer science Berlin, Vol. 496, pp. 193-207
Li, L. and Liu, F. (2011) ’Group Search Optimization for Applications in Structural Design’,
Adaptation, Learning, and Optimization, Vol. 9
Lopeza, R.H., Lemosse, D., Souza de Cursi, J. E., Rojas and El-Hami, A. (2011)
’An approach for the reliability based design optimization of laminated composites ’,
Engineering Optimization, Vol. 43, pp. 1079-1094
Lopeza, R.H., Lemosse, D., Souza de Cursi, J. E., Rojas and El-Hami, A. (2013) ’Iterative
projection on critical states for reliability-based design optimization ’, Engineering
Optimization, Vol. 45, pp. 577-590
Mehr, F. and Azarm, A. (2007) ’Hybridization of Genetic Algorithm with Immune System
for Optimization Problems in Structural Engineering’, Structural and Multidisciplinary
Optimization, Vol. 24 pp. 415-429
Mohsine, A., Kharmanda, G. and El-Hami, A. (2006) ’ Improved hybrid method as a
robust tool for reliability-based design optimization’, Structural and Multidisciplinary
Optimization, Vol. 32, pp. 203-213
Osyczka, A. and Kundu, S. (1995) 'A new method to solve generalized multicriteria optimization problems using the simple genetic algorithm', Structural and Multidisciplinary Optimization, Vol. 10, No. 2, pp. 94-99
Song, M. P. and Gu, G. C. (2004) 'Research on particle swarm optimization: a review', Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 2236-2241
Schott, J. R. (1995) 'Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization', Master's thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology
Tanaka, M. (1995) 'GA-based decision support system for multi-criteria optimization', Proceedings of the International Conference on Systems, Man and Cybernetics, pp. 1556-1561

Talbi, E. (2009) 'Metaheuristics: From Design to Implementation', John Wiley & Sons
Yang, X. and Deb, S. (2013) ’ Multiobjective cuckoo search for design optimization’,
Computers and Operations Research, Vol. 40, pp. 1616-1624
Zhou, A., Qu, B., Li, H., Zhao, S., Suganthan, P. and Zhang, Q. (2011) ’Multiobjective
evolutionary algorithms: A survey of the state of the art’, Swarm and Evolutionary
Computation, Vol. 1, pp. 32-49
Zitzler, E. and Thiele, L. (1998) 'Multiobjective optimization using evolutionary algorithms - A comparative case study', In: Eiben, A. E., Bäck, T., Schoenauer, M. and Schwefel, H.-P. (Eds.) Parallel Problem Solving from Nature, Vol. 1498, Springer, Berlin, Germany, pp. 292-301

Zitzler, E. (1999) 'Evolutionary algorithms for multiobjective optimization: methods and applications', doctoral dissertation ETH 13398, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland
Zitzler, E., Laumanns, M. and Thiele, L. (2002) 'SPEA2: Improving the strength Pareto evolutionary algorithm', in Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems, Athens, Greece, pp. 95-100
Wisittipanich, W. and Kachitvichyanukul, V. (2014) ’Mutation strategies toward Pareto front
for multi-objective differential evolution algorithm’, International Journal Operational
Research, Vol. 19, No. 3, pp. 315-337

Yang, X. S., Karamanoglu, M. and He, X. S. (2014) 'Flower Pollination Algorithm: A Novel Approach for Multiobjective Optimization', Engineering Optimization, Vol. 46, No. 9, pp. 1222-1237
Yang, X. S. (2011) 'Bat Algorithm for Multiobjective Optimization', Int. J. Bio-Inspired Computation, Vol. 3, No. 5, pp. 267-274.
Figure 1 Crowding distance of individual i (individuals i − 1, i and i + 1 plotted in the (f1, f2) objective space).

Figure 2 Pareto fronts of eight test functions



Figure 3 Pareto fronts obtained by BSAMO and NSGA-II for the disc brake design problem

Figure 4 Pareto fronts obtained by BSAMO and NSGA-II for the welded beam design problem
