
Computers and Structures 75 (2000) 647–660

www.elsevier.com/locate/compstruc

Multiobjective optimization of trusses using genetic algorithms

C.A. Coello a,*,1, A.D. Christiansen b

a Laboratorio Nacional de Informática Avanzada, Xalapa, Veracruz, 91090, Mexico
b Department of Computer Science, Tulane University, New Orleans, LA 70118, USA

Received 26 December 1996; accepted 30 April 1999

Abstract

In this paper we propose the use of the genetic algorithm (GA) as a tool to solve multiobjective optimization problems in structures. Using the concept of min–max optimum, a new GA-based multiobjective optimization technique is proposed and two truss design problems are solved using it. The results produced by this new approach are compared to those produced by other mathematical programming techniques and GA-based approaches, proving that this technique generates better trade-offs and that the genetic algorithm can be used as a reliable numerical optimization tool. © 2000 Elsevier Science Ltd. All rights reserved.

Keywords: Genetic algorithms; Multiobjective optimization; Multicriteria optimization; Vector optimization; Structural optimization; Truss optimization

1. Introduction

In most real-world problems, several goals must be satisfied simultaneously in order to obtain an optimal solution. The multiple objectives are typically conflicting and non-commensurable, and must be satisfied simultaneously. For example, we might want to be able to minimize the total weight of a truss while minimizing its maximum deflection and maximizing its maximum allowable stress. The common approach in this sort of problem is to choose one objective (for example, the weight of the structure) and incorporate the other objectives as constraints. This approach has the disadvantage of limiting the choices available to the designer, making the optimization process a rather difficult task.

Another common approach is the combination of all the objectives into a single objective function. This technique has the drawback of modelling the original problem in an inadequate manner, generating solutions that will require a further sensitivity analysis to become reasonably useful to the designer.

A more appropriate approach to deal with multiple objectives is to use techniques that were originally designed for that purpose in the field of Operations Research. Work in that area started a century ago, and many approaches have been refined and commonly applied in economics and control theory.

This paper addresses the importance of multiobjective structural optimization and reviews some of the basic concepts and part of the most relevant work in this area. Also, we discuss the suitability of a heuristic technique inspired by the mechanics of natural selection (the genetic algorithm, or GA) to solve multiobjective optimization problems.

* Corresponding author.
1 This work was done while the author was at Tulane University.

0045-7949/00/$ - see front matter © 2000 Elsevier Science Ltd. All rights reserved.
PII: S0045-7949(99)00110-8

We also introduce a new method, based on the concept of min–max optimum. The new method is compared with other GA-based multiobjective optimization methods and some mathematical programming techniques. We show that the new method is capable of finding better trade-offs among the competing objectives. Our approach is tested on two well-known truss optimization problems. We perform these tests with a computer program called Multiobjective Optimization of Systems in the Engineering sciences (MOSES), which was developed by the authors to experiment with new and existing multiobjective optimization algorithms.

2. Previous work on multiobjective structural optimization

The first application of multiobjective optimization concepts in structural mechanics appeared in a 1968 paper by Krokosky [1]. In this early paper, Krokosky adopted a random search technique to find the best trade-off correlating the different objectives in terms of the a priori chosen design parameters. Since then, multiobjective optimization has attracted a lot of attention among structural engineers, and several surveys are available in the literature [2–5].

To facilitate the study and comparison of the most important mathematical programming and GA-based multiobjective optimization techniques, the authors developed MOSES, which is intended to serve as a common platform to test any new and/or existing multiobjective optimization technique. MOSES was written in GNU C and runs under Unix. For details on its implementation see Ref. [4].

3. Basic concepts

Multiobjective optimization (also called multicriteria optimization, multi-performance or vector optimization) can be defined as the problem of finding [6]:

A vector of decision variables which satisfies constraints and optimizes a vector function whose elements represent the objective functions. These functions form a mathematical description of performance criteria which are usually in conflict with each other. Hence, the term `optimize' means finding such a solution which would give the values of all the objective functions acceptable to the designer.

Formally, we can state it as follows:

Find the vector x* = [x1*, x2*, ..., xn*]^T which will satisfy the m inequality constraints:

   g_i(x) >= 0,   i = 1, 2, ..., m                                   (1)

the p equality constraints

   h_i(x) = 0,   i = 1, 2, ..., p                                    (2)

and optimize the vector function

   f(x) = [f_1(x), f_2(x), ..., f_k(x)]^T                            (3)

where x = [x1, x2, ..., xn]^T is the vector of decision variables.

In other words, we wish to determine from among the set of all numbers which satisfy Eqs. (1) and (2) the particular set x1*, x2*, ..., xk* which yields the optimum values of all the objective functions. The vector x* will be reserved to denote the optimal solutions (normally there will be more than one).

The problem is that the meaning of optimum is not well defined in this context, since we rarely have an x* such that for all i = 1, 2, ..., k

   f_i(x*) <= f_i(x)   for every x ∈ F                               (4)

where x* is a desirable solution. However, we normally never have a situation like this, in which all the f_i(x) have a minimum in the feasible region F at a common point x*.

3.1. Pareto optimum

We say that a point x* ∈ F is Pareto optimal if for every x ∈ F either,

   f_i(x) = f_i(x*)   for every i ∈ I                                (5)

or, there is at least one i ∈ I such that

   f_i(x) > f_i(x*)                                                  (6)

In words, this definition says that x* is Pareto optimal if there exists no feasible vector x which would decrease some criterion without causing a simultaneous increase in at least one criterion. Unfortunately, the Pareto optimum almost always gives not a single solution, but rather a set of solutions called non-dominated solutions.
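Since the GA-based methods discussed later rely on this definition, a small illustrative sketch may help. The Python fragment below is our own illustration (it is not part of the original MOSES code, which was written in GNU C, and the function and variable names are ours); it shows one common way to test Pareto dominance between two objective vectors and to filter the non-dominated solutions out of a set, assuming all objectives are to be minimized.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all objectives minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Keep only the non-dominated objective vectors of a list.
    This is the O(k*m^2) filtering step mentioned later in Section 6."""
    front = []
    for i, a in enumerate(solutions):
        if not any(dominates(b, a) for j, b in enumerate(solutions) if j != i):
            front.append(a)
    return front

# Example: (weight, displacement, stress) triples of three candidate trusses
candidates = [(500.0, 0.30, 35.0), (480.0, 0.28, 34.0), (470.0, 0.40, 41.0)]
print(non_dominated(candidates))   # the first triple is dominated by the second
```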
3.2. Min–max optimum

The min–max optimum compares relative deviations from the separately attainable minima. Consider the ith objective function, for which the relative deviation can be calculated from

   z_i'(x) = |f_i(x) − f_i^0| / |f_i^0|                              (7)

or from

   z_i''(x) = |f_i(x) − f_i^0| / |f_i(x)|                            (8)

It should be clear that for Eqs. (7) and (8) we have to assume that for every i ∈ I and for every x ∈ F, f_i(x) ≠ 0.

If all the objective functions are going to be minimized, then Eq. (7) defines function relative increments, whereas if all of them are going to be maximized, it defines relative decrements. Eq. (8) works conversely. The optimum is defined as:

   x* = min_x max_i { z_i'(x), z_i''(x) }                            (9)

This optimum can be described in words as follows. Knowing the extremes of the objective functions which can be obtained by solving the optimization problems for each criterion separately, the desirable solution is the one which gives the smallest values of the relative increments of all the objective functions.
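To make Eqs. (7)–(9) concrete, here is a small Python sketch (again our own illustration, not code from the paper) that computes the relative deviations z' and z'' with respect to a given ideal vector f0 and picks, from a set of candidate designs, the one whose worst-case relative deviation is smallest.

```python
def relative_deviations(f, f0):
    """z'_i and z''_i of Eqs. (7) and (8); assumes f_i(x) != 0 and f0_i != 0."""
    z1 = [abs(fi - f0i) / abs(f0i) for fi, f0i in zip(f, f0)]
    z2 = [abs(fi - f0i) / abs(fi) for fi, f0i in zip(f, f0)]
    return z1, z2

def minmax_best(designs, f0):
    """Eq. (9): choose the design whose largest relative deviation is smallest."""
    def worst(f):
        z1, z2 = relative_deviations(f, f0)
        return max(max(a, b) for a, b in zip(z1, z2))
    return min(designs, key=worst)

ideal = [468.93, 0.002757, 148.30]       # e.g. an ideal vector (weight, displ., stress)
designs = [[520.0, 0.0031, 160.0], [700.0, 0.0028, 150.0]]
print(minmax_best(designs, ideal))       # the first design deviates least in the worst case
```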
4. Osyczka's multicriterion optimization system

Osyczka's system contains several multiobjective optimization methods [7]:

1. Min–max method: Eq. (9) is used to determine the elements of the vector z(x).

2. Global criterion method: In this method, the equation

      f(x) = Σ_{i=1}^{k} [ (f_i^0 − f_i(x)) / f_i^0 ]^p              (10)

   is used as the global function. We assumed p = 2 for our experiments.

3. Weighting min–max method: This is a combination of the weighting method and the min–max approach that can find the Pareto set of solutions for both convex and non-convex problems, using the equation

      x* = min_x max_i { w_i z_i'(x), w_i z_i''(x) }                 (11)

4. Pure weighting method: The equation

      min Σ_{i=1}^{k} w_i f_i(x)                                     (12)

   is used to determine a preferred solution, where w_i >= 0 are the weighting coefficients representing the relative importance of the objectives. It is usually assumed that

      Σ_{i=1}^{k} w_i = 1                                            (13)

5. Normalized weighting method: the normalized objective values f̄_i(x) are used in Eq. (12).

Since all these methods require the ideal vector, the user is given the choice of providing it, or letting the system find it automatically using an iterative or random search method.
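As a rough illustration of how two of these scalarizations collapse a vector of objectives into a single number, the following Python sketch (ours; Osyczka's original system is a FORTRAN package [7], and the function names here are hypothetical) implements the global criterion of Eq. (10) with p = 2 and the pure weighting sum of Eq. (12).

```python
def global_criterion(f, f0, p=2):
    """Eq. (10): sum of p-th powers of the relative shortfalls from the ideal vector f0."""
    return sum(((f0i - fi) / f0i) ** p for fi, f0i in zip(f, f0))

def weighted_sum(f, w):
    """Eq. (12): plain weighted sum; the weights are usually normalized per Eq. (13)."""
    return sum(wi * fi for wi, fi in zip(w, f))

f = [520.0, 0.0031, 160.0]          # (weight, displacement, stress) of one design
f0 = [468.93, 0.002757, 148.30]     # ideal vector
print(global_criterion(f, f0))      # smaller is better
print(weighted_sum(f, [0.33, 0.33, 0.33]))
```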

5. Multiobjective optimization using GAs

The notion of genetic search in a multicriteria problem dates back to the late 1960s, when Rosenberg's [8] study contained a suggestion that would have led to multicriteria optimization if he had carried it out as presented. His suggestion was to use multiple properties (nearness to some specified chemical composition) in his simulation of the genetics and chemistry of a population of single-celled organisms. Since his actual implementation contained only one single property, the multiobjective approach could not be shown in his work, but it was a starting point for researchers interested in this topic.

Genetic algorithms require scalar fitness information to work, which means that when approaching multicriteria problems, we need to perform a scalarization of the objective vectors. One problem is that it is not always possible to derive a global criterion based on the formulation of the problem. In the absence of information, objectives tend to be given equivalent importance, and when we have some understanding of the problem, we can combine them according to the information available, probably assigning more importance to some objectives. Optimizing a combination of the objectives has the advantage of producing a single compromise solution, requiring no further interaction with the decision maker [9]. The problem is that, if the optimal solution cannot be accepted, either because the function used excluded aspects of the problem which were unknown prior to optimization or because we chose an inappropriate setting of the coefficients of the combining function, additional runs may be required until a suitable solution is found. Some of the main approaches proposed in the literature are the following (for a detailed survey of this subject see Coello [5]):

1. VEGA: David Schaffer [10] modified the selection operator of a Simple Genetic Algorithm (SGA) so that at each generation a number of sub-populations was generated by performing proportional selection according to each objective function in turn. Thus, for a problem with k objectives, k sub-populations of size N/k each would be generated, assuming a total population size of N. These sub-populations would be shuffled together to obtain a new population of size N, on which the GA would apply the crossover and mutation operators in the usual way.

2. Lexicographic ordering: The basic idea of this technique is that the designer ranks the objectives in order of importance. The optimum solution is then found by minimizing the objective functions, starting with the most important one and proceeding according to the order of importance of the objectives [11]. Another version of the algorithm reported by Fourman [12] consisted of randomly selecting the objective to be used at each generation.

3. Weighted sum: Hajela and Lin [13] included the weights of each objective in the chromosome, and promoted their diversity in the population through fitness sharing. Their goal was to be able to simultaneously generate a family of Pareto optimal designs corresponding to different weighting coefficients in a single run of the GA. Besides using sharing, Hajela and Lin used a vector evaluated approach based on VEGA to achieve their goal. Also, a mating restriction mechanism was imposed, to prevent members within a radius σ_mat from crossing.

4. Multiple objective genetic algorithm: Fonseca and Fleming [14] have proposed a scheme in which the rank of a certain individual corresponds to the number of chromosomes in the current population by which it is dominated. Consider, for example, an individual x_i at generation t, which is dominated by p_i(t) individuals in the current generation. Its current position in the individuals' rank can be given by Ref. [14]:

      rank(x_i, t) = 1 + p_i(t)                                      (14)

   All non-dominated individuals are assigned rank 1, while dominated ones are penalized according to the population density of the corresponding region of the trade-off surface (a small sketch of this ranking is given after this list).

5. Non-dominated sorting genetic algorithm: The Non-dominated Sorting Genetic Algorithm (NSGA) was proposed by Srinivas and Deb [15], and is based on several layers of classifications of the individuals. Before the selection is performed, the population is ranked on the basis of non-domination: all non-dominated individuals are classified into one category (with a dummy fitness value, which is proportional to the population size, to provide an equal reproductive potential for these individuals). To maintain the diversity of the population, these classified individuals are shared with their dummy fitness values. Then this group of classified individuals is ignored and another layer of non-dominated individuals is considered. The process continues until all individuals in the population are classified. A stochastic remainder proportionate selection was used for this approach.

6. Niched Pareto GA: Horn and Nafpliotis [16] proposed a tournament selection scheme based on Pareto dominance. Instead of limiting the comparison to two individuals, a number of other individuals in the population was used to help determine dominance. When both competitors were either dominated or non-dominated (i.e., there was a tie), the result of the tournament was decided through fitness sharing [17]. Population sizes considerably larger than usual were used so that the noise of the selection method could be tolerated by the emerging niches in the population [9].
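The sketch below (ours, in Python; not taken from any of the cited implementations) illustrates the dominance-based ranking of Eq. (14): each individual's rank is one plus the number of population members that dominate it, so every non-dominated individual receives rank 1.

```python
def dominates(a, b):
    """True if `a` is no worse than `b` in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def moga_rank(objs):
    """Eq. (14): rank(x_i, t) = 1 + p_i(t), where p_i(t) is the number of
    individuals in the current population that dominate x_i."""
    return [1 + sum(dominates(b, a) for j, b in enumerate(objs) if j != i)
            for i, a in enumerate(objs)]

objs = [(500.0, 0.30, 35.0), (480.0, 0.28, 34.0), (470.0, 0.40, 41.0)]
print(moga_rank(objs))   # [2, 1, 1]: only the first design is dominated (by the second)
```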
Fig. 1. Flowchart to illustrate the main algorithm used by the new method proposed in this paper.

6. A new GA-based approach based on a weighted min–max strategy

The basic algorithm proposed by the authors is the following [6] (see Fig. 1):

1. The initial population is generated randomly, but in such a way that all its individuals constitute feasible solutions. This can be ensured by checking that none of the constraints is violated by the solution vector encoded by the corresponding chromosome. The procedure adopted in this case is death penalty (i.e., if a chromosome encodes an infeasible solution, it is destroyed and replaced by a newly generated string). However, a penalty function or any other constraint-handling approach can also be used.

2. The user should provide a vector of weights, which are used to spawn as many processes as weight combinations are provided (normally this number will be reasonably small). Each process is really a separate genetic algorithm in which the given weight combination is used in conjunction with a min–max approach to generate a single solution (see below for details). The number of weight combinations is usually small (no more than 15 in the case of the experiments reported in this paper) and can be generated using a deterministic procedure in which each weight ranges from a certain initial value to a final value using a user-defined increment. Notice that the use of a file containing weight combinations makes it unnecessary to encode such weights in the chromosome itself as another decision variable.

3. The fitness value of each chromosome is computed according to Eq. (9). In general, the fitness function has the form:

      fitness = Σ_{i=1}^{n} w_i min{ max[ z_i'(x), z_i''(x) ] }      (15)

   (a small sketch of this computation is given after this list). Notice that the variation of the weights will allow us to explore different parts of the Pareto front, and that this approach works both with convex and non-convex search spaces [4]. Since this expression requires knowing the ideal vector, the user is given the choice to provide such values directly (in case he/she knows them) or to use another genetic algorithm to generate it (see next Section). Alternatively, an estimated set of values close to the desired goals can be provided by the user. These goals can underestimate or overestimate the ideal vector, as long as they lie in the feasible region.

4. The crossover and mutation operators were modified to ensure that they produced only feasible solutions. Whenever a child encoded an infeasible solution, it was replaced by one of its parents (randomly chosen). The best solution found was kept through the generations until a better one emerged at a further stage of the search process (elitism).

5. No sharing is required in this case, since each process spawned deals with a single point of the Pareto front, and we do not have to avoid global convergence of the population towards such a point. However, in case we wish to generate, with the same process, a set of points instead of only one (requiring a single GA run), we may use a sharing function of the form:

      φ(d_ij) = 1 − (d_ij / σ_share)^α   if d_ij < σ_share
      φ(d_ij) = 0                        otherwise                   (16)

   where normally α = 1, d_ij is a metric indicative of the distance between designs i and j, and σ_share is the sharing parameter which controls the extent of sharing allowed (a value between 0.01 and 0.1 is normally used). The fitness of a design i would then be modified as:

      fs_i = f_i / Σ_{j=1}^{M} φ(d_ij)                               (17)

   where M is the number of designs located in the vicinity of the ith design. The main reason why we did not take this approach (as in Hajela and Lin's work [13]) is that the definition of σ_share is subject to intensive experimentation and its choice has a dramatic impact on the performance of the technique.

6. After the n processes are terminated (n = number of weight combinations provided by the user), a final file is generated containing the non-dominated solutions found. This file is formed by picking up the best solution from each of the processes spawned in step 2, and will contain the min–max optimum solutions to the problem (equivalent to the Pareto set).

7. Notice that the solutions produced by this method are guaranteed to be feasible, as opposed to the other GA-based methods, in which there could be convergence towards a non-feasible solution.
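As a rough Python illustration of steps 3 and 5 (our own sketch; MOSES itself is written in GNU C, and the names used here are ours), the fragment below computes the weighted min–max fitness of one chromosome, reading Eq. (15) as a weighted sum of the larger of the two relative deviations of Eqs. (7) and (8) for each objective, and also implements the sharing function of Eqs. (16) and (17).

```python
def weighted_minmax_fitness(f, f0, w):
    """Eq. (15), as we read it: for each objective take the larger of the two relative
    deviations z'_i, z''_i from the ideal value f0_i, weight it, and sum.
    Smaller values are better."""
    total = 0.0
    for fi, f0i, wi in zip(f, f0, w):
        z1 = abs(fi - f0i) / abs(f0i)
        z2 = abs(fi - f0i) / abs(fi)
        total += wi * max(z1, z2)
    return total

def shared_fitness(fit, dists, sigma_share, alpha=1.0):
    """Eqs. (16)-(17): divide a design's fitness by its niche count, where `dists`
    holds the distances d_ij to the designs located in its vicinity."""
    niche = sum(1.0 - (d / sigma_share) ** alpha for d in dists if d < sigma_share)
    return fit / niche if niche > 0 else fit

# One design evaluated against an ideal vector, with equal weights:
print(weighted_minmax_fitness([520.0, 0.0031, 160.0],
                              [468.93, 0.002757, 148.30],
                              [0.33, 0.33, 0.33]))
```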
The procedure described above is not only very easy to implement, but is also very efficient (computationally speaking) if we use a distributed system, because the processes can be assigned to different processors and be run in parallel.

GA-based methods that use Pareto-based techniques need to check for non-dominated solutions, and that is a process that requires k × m² operations, where k is the number of objectives and m is the population size.

This is, therefore, an expensive process (computationally speaking).

6.1. The GA optimizer for single-objective problems

Using the GA itself as an optimizer for single-objective problems is a controversial topic, mainly because of the difficulties found in adjusting its parameters (i.e., population size, maximum number of generations, mutation and crossover rates) [18]. Since one of the goals of this work is to be able to produce a reliable design optimization system, this is a natural problem to face. In practice, GA parameters are empirically adjusted in a trial and error process that could take quite a long time in some cases.

In some previous work [4,19], we have successfully used a very simple methodology, explained below, for a variety of engineering design optimization problems. The results that we obtained led us to think that it was a reasonable choice to use in MOSES. The method is the following (a small sketch is given after the list):

• Choose a certain value for the random number seed and make it a constant.
• Make constants for the population size and the maximum number of generations (we normally use 100 chromosomes and 50 generations, respectively).
• Loop the mutation and crossover rates from 0.1 to 0.9 at increments of 0.1 (this is actually a nested loop). This implies that 81 runs are necessary. In each step of the loop, the population is not reinitialized.
• For each run, update two files. One contains only the final costs, and the other has a summary that includes, besides the cost, the corresponding values of the design parameters and the mutation and crossover rates used.
• When the whole process ends, the file with the costs is sorted in ascending order, and the smallest value is searched for in the other file, returning the corresponding design parameters as the final answer.
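A minimal Python sketch of this sweep follows (our own stand-in, not the MOSES code, which is written in GNU C): a tiny real-coded GA with two-point crossover and uniform mutation is driven through the 81 mutation/crossover-rate combinations with a fixed seed and without reinitializing the population, and the best design seen is kept. The objective function is a placeholder for the structural analysis, and the file logging of the original methodology is replaced by an in-memory record.

```python
import random

random.seed(42)                       # fixed random number seed (kept constant)
POP_SIZE, MAX_GEN, N_VARS = 100, 50, 5

def cost(x):                          # stand-in objective; a real run would call the
    return sum(xi * xi for xi in x)   # structural analysis and return e.g. the weight

def evolve(pop, pc, pm, bounds=(-10.0, 10.0)):
    """Run MAX_GEN generations of a minimal real-coded GA with two-point crossover
    (rate pc) and uniform mutation (rate pm), starting from the given population."""
    lo, hi = bounds
    for _ in range(MAX_GEN):
        scored = sorted(pop, key=cost)
        new_pop = [scored[0][:]]                           # simple elitism
        while len(new_pop) < POP_SIZE:
            p1, p2 = (min(random.sample(scored, 2), key=cost) for _ in range(2))
            child = p1[:]
            if random.random() < pc:                       # two-point crossover
                a, b = sorted(random.sample(range(N_VARS), 2))
                child[a:b] = p2[a:b]
            for i in range(N_VARS):                        # uniform mutation
                if random.random() < pm:
                    child[i] = random.uniform(lo, hi)
            new_pop.append(child)
        pop = new_pop
    return pop

pop = [[random.uniform(-10, 10) for _ in range(N_VARS)] for _ in range(POP_SIZE)]
best = None
for pc in [i / 10 for i in range(1, 10)]:          # 9 x 9 = 81 runs in total
    for pm in [i / 10 for i in range(1, 10)]:
        pop = evolve(pop, pc, pm)                  # population is NOT reinitialized
        candidate = min(pop, key=cost)
        if best is None or cost(candidate) < cost(best):
            best = candidate
print(cost(best), best)
```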
So far, we have found much better results using floating point representation with this methodology, and our results show that this is a trend in numerical optimization problems [4]. This approach is actually a dynamic adjustment of parameters, because the population is initialized only once in the process, so that the individuals' fitness continues improving while the crossover and mutation rates change. Notice that even when we could know the crossover and mutation rates that produced the best answer, running the GA once with those parameters will not necessarily generate the exact same answer. The reason is that the population at the moment of finding the best result could have been recombined and improved several times, being quite different from the random initial population of a simple GA. This procedure has some resemblance to Eshelman's CHC Adaptive Search Algorithm [20], but in our case we do not use any re-feeding of the population through high mutation values when it has stabilized, nor a highly disruptive recombination operator that produces offspring that are maximally different from both parents. Our approach uses a conventional two-point crossover [21] and it exhibits its best behavior with a floating point representation in numerical optimization problems.

7. Structural optimization using genetic algorithms

Goldberg and Samtani [22] appear to have first suggested the use of GAs for structural optimization. They considered the use of a GA to optimize a 10-bar plane truss. Jenkins [23] used a straightforward implementation of Goldberg's SGA (Simple Genetic Algorithm) [21] to optimize a trussed-beam roof structure, a three-bar truss and a thin-walled cross-section.

Hajela [24] analyzed the potential of GAs as function optimizers in the context of structural optimization. He discussed encoding, optimal population size, selection, crossover and mutation over binary alphabets, making an important distinction between random search and genetic search. His FORTRAN implementation of a GA was applied to problems with non-convex search spaces: a two-beam grillage structure, a two-element thin-walled cantilever torsional rod subjected to sinusoidal excitation, and the dynamic response of a 10-bar plane truss.

Rajeev and Krishnamoorthy [25] used the GA for discrete optimization of generalized trusses. Schoenauer and Xanthakis [26] presented a general method of handling constraints in genetic optimization, based on the Behavioral Memory paradigm. Instead of requiring the problem-dependent design of either repair operators (projection onto the feasible region) or penalty functions (weighted sum of constraint violations and the objective function), they sampled the feasible region by evolving from an initial random population, successively applying a series of different fitness functions which embodied constraint satisfaction. Only in the final step was the optimization restricted to the feasible region. The success of the whole process was highly dependent on the genetic diversity maintained during the first steps, ensuring a uniform sampling of the feasible region. They applied this scheme to test problems of truss structure optimization: a 10-bar (2D) and a 25-bar (3D) truss. Sharing and restricted mating were used to ensure genetic diversity in these applications.

Lin and Hajela [27] described a design optimization ation methodology, previously used with continuum
tool based on genetic search which inspired the devel- structures, was adapted to handle skeletal structures.
opment of MOSES [4]. This system, called EVOLVE, Element connectivity and boundary conditions were
was able to handle mixes of integer, discrete and con- treated as Boolean design variables in the context of
tinuous design variables. It had automatic encoding/ topology design. Rajan used a penalty function as the
decoding facilities, automatic constraint handling, ®tness, and exception handling was considered to deal
sharing to prevent convergence of all candidate designs with unstable structures, absence of deformations in
to a single optimum, it varied the granularity of the the structure and zero force members. Also, in an
representation to increase or decrease the precision e€ort to avoid recomputing the ®tness function, a his-
with which a design space is represented, and used an tory of each chromosome was kept so that when dupli-
special directed crossover operator that identi®es sig- cates appeared, it was not necessary to recompute its
ni®cant bit positions on a string constraining the cross- ®tness. The examples used in Rajan's paper include a
over to such bit locations. 6-node truss and a 14-node truss.
Another important paper by Hajela and Lin [13] Yeh [33] used a hybrid genetic algorithm to optimize
constitutes one of the very few attempts to achieve truss structures. This work focuses on the eciency of
multiobjective structural optimization using GAs. The the optimization process using a GA, rather than in
goal of the researchers in this work was to generate the results obtained (which are, nevertheless better
the Pareto set with a single run of the GA, and a uti- than those obtained by a simple GA), and the search
lity function with sharing is used for that sake. A stati- space is considered as discrete to exploit the search
cally loaded 10-bar truss was used to exemplify their capabilities of the GA.
approach. Liu et al. [34] used a weighted sum approach [5] to
Adeli and Cheng [28] used a GA to minimize the optimize the layout and actuator placement of a 45-
total weight of a space truss subject to stress, displace- bar plane truss in which the objectives were to mini-
ment and fabricational (availability of cross-sectional mize the linear regulator quadratic control cost, the
areas) constraints. A quadratic penalty function was robustness and the modal controllability of the con-
used to transform this constrained problem into an trolled system subject to total weight, asymptotical
unconstrained one, and the ®tness function was re- stability and eigenvalues constraints. Although this is
scaled because the GA always maximizes and this was one of the few papers on evolutionary multiobjective
a minimization problem. Three space trusses were used optimization of trusses available in the literature, the
to illustrate their approach: a 12-bar truss, a 25-bar focus of Liu's is totally di€erent from the one pre-
truss and a 72-bar truss. In a further paper by the sented in this paper.
same authors [29], a hybrid GA that integrated the
penalty function method with the primal-dual method
was proposed. This approach is based on sequential
minimization of the Lagrangian method, and elimi-
nated the diculties of the unpredictability of the pen-
alty function coecient. Adeli and Kumar [30]
proposed a distributed GA for optimization of large
structures on a cluster of workstations connected via a
local area network (LAN). The GA used a centralized
population model in which the master process had glo-
bal knowledge about the search process, which resulted
in a faster convergence towards the optimal solution.
A penalty function method and the augmented
Lagrangian method were used again to eliminate the
original constraints of the problem. A 17-member truss
and a 50-story megastructure (848-element space truss)
were solved using this approach. In a further paper,
Adeli and Kumar [31] also used a GA for structural
optimization of large scale structures on massively par-
allel supercomputers.
Rajan [32] used a GA to design the size, shape and
topology of space structures. Discrete and continuous
values were used to de®ne the cross-sectional areas of
the members. The nodal locations were treated as con-
tinuous design variables and the hybrid shape-optimiz- Fig. 2. The 25-bar space truss used for the ®rst example.

8. Examples

To introduce our new GA-based multiobjective optimization approach, we will use two truss design problems that are commonly referenced in the literature. In each of these examples, three objectives will be considered: minimize the weight, the maximum displacement and the stress of the structure, using the cross-sectional area of each element as the design variables. Such objectives are conflicting in nature, because if we want to reduce the displacement and the stress that an element supports, we have to increase the cross-sectional area, consequently increasing the weight of the structure. These objectives are also non-commensurable, because whereas stress and weight usually have large values, the maximum allowable displacement is in general a small value.

8.1. Example 1: design of a 25-bar space truss

Consider the 25-bar space truss taken from Rajeev and Krishnamoorthy [25], shown in Fig. 2. The problem is to find the cross-sectional area of each member of this truss, such that we minimize its weight, the displacement of each free node, and the stress that each member has to support.

Loading conditions are given in Table 1, member groupings are given in Table 2, and node coordinates are given in Table 3. The assumed data are: modulus of elasticity E = 1 × 10^4 ksi, ρ = 0.10 lb/in.^3, σ_a = ±40 ksi, u_a = ±0.35 in.

Table 1
Loading conditions for the 25-bar space truss shown in Fig. 2

Node   Fx (lbs)   Fy (lbs)   Fz (lbs)
1      1000       -10000     -10000
2      0          -10000     -10000
3      500        0          0
6      600        0          0

Table 2
Group membership for the 25-bar space truss shown in Fig. 2

Group number   Members
1              1-2
2              1-4, 2-3, 1-5, 2-6
3              2-5, 2-4, 1-3, 1-6
4              3-6, 4-5
5              3-4, 5-6
6              3-10, 6-7, 4-9, 5-8
7              3-8, 4-7, 6-9, 5-10
8              3-7, 4-8, 5-9, 6-10

Table 3
Coordinates of the joints of the 25-bar space truss shown in Fig. 2

Node   X         Y         Z
1      -37.50    0.00      200.00
2      37.50     0.00      200.00
3      -37.50    37.50     100.00
4      37.50     37.50     100.00
5      37.50     -37.50    100.00
6      -37.50    -37.50    100.00
7      -100.00   100.00    0.00
8      100.00    100.00    0.00
9      100.00    -100.00   0.00
10     -100.00   -100.00   0.00

8.2. Example 2: design of a 200-bar plane truss

Consider the 200-bar plane truss taken from Belegundu [35], shown in Fig. 3. The problem is to find the cross-sectional area of each member of this truss, such that we minimize its weight, the displacement of each free node, and the stress that each member has to support.

There are a total of three loading conditions: (1) 1 kip acting in the positive x-direction at node points 1, 6, 15, 20, 29, 34, 43, 48, 57, 62, and 71; (2) 10 kips acting in the negative y-direction at node points 1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 15, 16, 17, 18, 19, 20, 24, 71, 72, 73, 74, and 75; and (3) loading conditions 1 and 2 acting together. The 200 elements of this truss are linked into 29 groups. The grouping information is shown in Table 4. The stress in each element is limited to a value of 10 ksi for both tension and compression members. Young's modulus of elasticity = 30,000 ksi; weight density = 0.283 × 10^-3 kips/in.^3.

9. Comparison of results

We will compare the ideal vector that each method generates with the best results reported in the literature for the two truss-design problems. We used the two Monte Carlo methods included in MOSES (see Ref. [4] for details), together with Osyczka's multiobjective optimization system, to obtain the ideal vector. Also, several GA-based approaches will be tested using the same parameters (same population size and same crossover and mutation rates). If niching is required, then the niche size will be computed according to the methodology suggested by the developers of the method (see Ref. [4] for details). To perform the analysis required for the examples, we used the matrix factorization method included in Ref. [36] together with the stiffness method [36] as implemented in Ref. [37].

9.1. Example 1

The ideal vector of this problem was computed using the two Monte Carlo methods included in MOSES (generating 300 points), Osyczka's multiobjective optimization system, and a GA (with a population of 300 chromosomes running during 100 generations) using binary and floating point representation, with the procedure described before to adjust its parameters. The corresponding results are shown in Table 5, including the best results reported in the literature [38]. The results for Monte Carlo method 2 are the same as for method 1, and the results presented for the min–max method are also the basis for computing the best trade-off for all the methods in Osyczka's system. As can be seen from these results, the GA provided the best ideal vector, combining the results produced with both binary and floating point representation, although the second representation scheme provides better results in general [4]. The mathematical programming techniques did not provide any reasonable results in this example, mainly because of the high non-convexity of the search space and the high number of variables involved. It should be noted that the set of results reported by Coello et al. [38] was produced optimizing only the first objective (i.e., the total weight of the truss) in a discrete manner. Assuming continuous variables, the GA engine for single objective optimization was able to find a lighter truss.

Fig. 3. 200-bar plane truss used for the second example.



Table 4
Group membership for the 200-bar plane truss shown in Fig. 3

Group number Member number

1 1, 2, 3, 4
2 5, 8, 11, 14, 17
3 19, 20, 21, 22, 23, 24
4 18, 25, 56, 63, 94, 101, 132, 139, 170, 177
5 26, 29, 32, 35, 38
6 6, 7, 9, 10, 12, 13, 15, 16, 27, 28, 30, 31, 33, 34, 36, 37
7 39, 40, 41, 42
8 43, 46, 49, 52, 55
9 57, 58, 59, 60, 61, 62
10 64, 67, 70, 73, 76
11 44, 45, 47, 48, 50, 51, 53, 54, 65, 66, 68, 69, 71, 72, 74, 75
12 77, 78, 79, 80
13 81, 84, 87, 90, 93
14 95, 96, 97, 98, 99, 100
15 102, 105, 108, 111, 114
16 82, 83, 85, 86, 88, 89, 91, 92, 103, 104, 106, 107, 109, 110, 112, 113
17 115, 116, 117, 118
18 119, 122, 125, 128, 131
19 133, 134, 135, 136, 137, 138
20 140, 143, 146, 149, 152
21 120, 121, 123, 124, 126, 127, 129, 130, 141, 142, 144, 145, 147, 148, 150, 151
22 153, 154, 155, 156
23 157, 160, 163, 166, 169
24 171, 172, 173, 174, 175, 176
25 178, 181, 184, 187, 190
26 158, 159, 161, 162, 164, 165, 167, 168, 179, 180, 182, 183, 185, 186, 188, 189
27 191, 192, 193, 194
28 195, 197, 198, 200
29 196, 199

Table 5
Comparison of results computing the ideal vector of the first example (design of a 25-bar space truss)^a

Method          f1          f2         f3
Monte Carlo 1   57144.60    0.050551   1958.00
Monte Carlo 1   275439.48   0.003382   207.27
Monte Carlo 1   232253.56   0.003764   194.88
Min–max (OS)    1166.98     0.781186   42028.65
Min–max (OS)    1359.41     0.598842   33872.70
Min–max (OS)    1359.41     0.598842   33872.70
GA (Binary)     72845.41    1.544286   87294.85
GA (Binary)     330717.40   0.002757   148.303585
GA (Binary)     330717.40   0.002757   148.303585
GA (FP)         468.93      1.565098   90959.54
GA (FP)         330716.80   0.002757   148.303654
GA (FP)         330717.25   0.002757   148.303598
Literature      493.94      1.285167   79916.70
Literature      493.94      1.285167   79916.70
Literature      493.94      1.285167   79916.70

^a For each method the best results for optimum f1, f2 and f3 are shown in boldface. OS stands for Osyczka's Multiobjective Optimization System.

As we can see in Table 6, the new GA-based approach proposed by the authors, named GAminmax, provides the best overall results when a floating point representation is used. It should be noted that our approach performs barely above average when binary representation is used. The reason for its poor performance here, and before (when trying to find the ideal vector), is that the population size does not seem to be large enough to guarantee convergence, considering the length of the string, which in this case is 136 genes [6].

The results obtained for this problem show how easily the mathematical programming techniques can be surpassed by a GA-based approach using the same number of points, even though the GA starts with a completely random population (our approach ensures that the initial population contains only feasible individuals, but these solutions are still randomly generated). Although we used the same random number generator that the Monte Carlo techniques use, the results are quite different. For those who think that a simple linear combination of objectives should be good enough to deal with multiobjective optimization problems, the results for GALC (see Table 6) show the contrary. Our approach used a set of fifteen weights to compute the ideal vector.

Table 6
Comparison of the best overall solution found by each one of the methods included in MOSES for the first example (design of a 25-bar space truss)^a

Method              f1           f2         f3           Lp(f)
Ideal vector        468.928261   0.002757   148.303585   0.000000
Monte Carlo 1       113293.85    0.006212   363.6076     243.306621
Monte Carlo 2       110264.89    0.006925   394.8020     237.316252
Min–max (OS)        1344.32      0.676830   34793.19     479.969778
GCM (OS)            1359.41      0.598842   33872.70     445.507894
WMM (OS)            1344.32      0.676830   34793.19     479.969778
PMM (OS)            1359.41      0.598842   33872.70     445.50789
NMM (OS)            1359.41      0.598842   33872.70     445.50789
GALC (B)            254696.03    0.003078   178.85       542.46737
GALC (FP)           193849.17    0.003389   202.83       412.98462
Lexicographic (B)   219176.78    0.005791   280.06       468.38825
Lexicographic (FP)  129424.79    0.005412   303.46       277.010455
VEGA (B)            234854.32    0.003473   205.70       500.478800
VEGA (FP)           219453.21    0.003482   202.59       467.617915
NSGA (B)            250615.78    0.002975   171.74       533.680836
NSGA (FP)           226478.31    0.003971   205.53       482.796241
MOGA (B)            85297.74     0.023970   990.87       194.274927
MOGA (FP)           81778.41     0.021254   908.03       185.226194
NPGA (B)            92943.08     0.010969   585.22       203.127877
NPGA (FP)           55812.18     0.029665   1307.44      135.596546
Hajela (B)          99464.52     0.017495   1134.42      223.105294
Hajela (FP)         107947.60    0.007593   421.48       232.796738
GAminmax (B)        85604.05     0.036615   2190.83      207.605910
GAminmax (FP)       16230.99     0.037474   2227.73      60.226711

^a GA-based methods were tried with binary (B) and floating point (FP) representations. The following abbreviations were used: OS — Osyczka's System, GCM — Global Criterion Method (exponent = 2.0), WMM — Weighting Min–max, PMM — Pure Weighting Method, NMM — Normalized Weighting Method, GALC — Genetic Algorithm with a linear combination of objectives using scaling. In all cases, weights were assumed equal to 0.33 (equal weight for every objective).

9.2. Example 2

The second example (200-bar plane truss design) presents a larger structure in which the time taken by the analysis becomes a critical issue. Monte Carlo methods 1 and 2 were used with 500 points, and the GA also used a population size of 500 chromosomes (over 100 generations) with binary and floating point representations, with the procedure previously described to adjust its parameters. The corresponding ideal vector is shown in Table 7, including the best results reported in the literature [35]. Notice that the results presented by Belegundu violate 34 constraints of the problem, which means that his solution is not valid. This explains why the GA could not achieve such a low weight using floating point representation. In fact, in Belegundu's dissertation [35] he even provides a better solution (with a total weight of 26261.05), but that solution violates 48 constraints. We chose to include a solution with a higher weight but a lower number of violations. Nevertheless, the number of constraints violated is still high and the GA could not possibly converge towards such solutions.

In this example, the Monte Carlo methods provided results that are better (in general) than the solutions provided by the GA-based techniques, which is remarkable, considering the large size of the search space (see Table 8). This reflects the difficulty that traditional GA-based techniques have in finding reasonable trade-offs when the length of the chromosome string is too large (493 genes in this case). Also, the large number of constraints (200 in total) makes this problem easier for mathematical programming techniques than for the GA using a penalty function. The performance of Osyczka's multiobjective optimization system is extremely good, but mainly because the initial guesses provided by the user were quite close to a Pareto solution. The main use of such techniques is precisely in the cases in which we have a rough approximation of the solution, or a lot of knowledge about what the solution space looks like, and we want to experiment within the boundaries of our partial result. Nevertheless, it should be pointed out that our technique was able to find a better overall result than any other approach (including mathematical programming methods) when a floating point representation was used.

Table 7
Comparison of results computing the ideal vector of the second example (design of a 200-bar plane truss)^a

Method          f1           f2          f3
Monte Carlo 1   3019191.78   3.9983      74.99
Monte Carlo 1   5423359.65   0.522427    9.5056
Monte Carlo 1   5635985.90   0.556857    8.7115
Min–max (OS)    617227.10    3.8398      91.9799
Min–max (OS)    650984.03    3.6148      86.0389
Min–max (OS)    641862.42    3.6235      85.8669
GA (Binary)     893885.79    33.6943     957.1493
GA (Binary)     9963295.72   0.370375    5.124250
GA (Binary)     9963295.72   0.370375    5.124250
GA (FP)         36167.73     38.675848   1062.77608
GA (FP)         9961698.96   0.370376    5.124382
GA (FP)         9962313.62   0.370377    5.124336
Literature      35162.93     44.144661   1137.7476
Literature      35162.93     44.144661   1137.7476
Literature      35162.93     44.144661   1137.7476

^a For each method the best results for optimum f1, f2 and f3 are shown in boldface. OS stands for Osyczka's Multiobjective Optimization System. All the objectives are being minimized.

Table 8
Comparison of the best overall solution found by each one of the methods included in MOSES for the second example (design of a 200-bar plane truss)^a

Method              f1           f2         f3          Lp(f)
Ideal vector        36167.73     0.370376   5.124250    0.000000
Monte Carlo 1       4475679.05   0.773293   12.173738   125.211428
Monte Carlo 2       5075790.44   0.561163   9.682691    140.745011
Min–max (OS)        641862.42    3.686123   88.04811    41.881840
GCM (OS)            641862.42    3.623485   85.86687    41.287050
WMM (OS)            641862.42    3.686123   88.04811    41.881840
PMM (OS)            617227.10    3.839831   91.97989    42.382994
NMM (OS)            641862.42    3.623485   85.86687    41.287050
GALC (B)            5388876.23   0.418137   6.623930    148.418423
GALC (FP)           4662186.53   0.426666   6.736934    128.371292
Lexicographic (B)   5106929.51   0.590409   11.446706   142.029185
Lexicographic (FP)  3925963.62   0.538302   9.329877    108.822923
VEGA (B)            5956689.85   0.546020   10.048603   165.131483
VEGA (FP)           4051105.81   0.662998   10.306576   112.810251
NSGA (B)            7020831.93   0.424951   6.678441    193.569332
NSGA (FP)           5369341.87   0.548805   8.334404    148.564917
MOGA (B)            3626863.83   0.489753   8.435848    100.247575
MOGA (FP)           2910316.85   0.722471   12.477403   81.852839
NPGA (B)            4028058.04   3.723483   73.607848   132.789464
NPGA (FP)           4453361.10   0.970288   5.468708    123.817748
Hajela (B)          1924166.83   1.362296   28.222939   59.387070
Hajela (FP)         4291090.29   0.874752   11.478412   120.245983
GAminmax (B)        1508966.74   2.213371   47.516121   53.970163
GAminmax (FP)       686362.29    2.223398   43.910769   30.549494

^a GA-based methods were tried with binary (B) and floating point (FP) representations. The following abbreviations were used: OS — Osyczka's System, GCM — Global Criterion Method (exponent = 2.0), WMM — Weighting Min–max, PMM — Pure Weighting Method, NMM — Normalized Weighting Method, GALC — Genetic Algorithm with a linear combination of objectives using scaling. In all cases, weights were assumed equal to 0.33 (equal weight for every objective).

10. Conclusions

We have proposed a new multiobjective optimization method based on the min–max optimization approach. This approach is very robust because it transforms the multiobjective optimization problem into several single objective optimization problems which are easier and faster to solve. When this approach is used with a floating point representation, the technique seems to work better (i.e., faster and more accurately) than the other approaches considered in this paper. The main drawbacks of our approach are that it requires the ideal vector and a set of weights to delineate the Pareto set. However, our GA-based engine included in MOSES was used to compute the ideal vector, generating results better than those previously reported in the literature.

Also, if the ideal vector is not known in advance, a set of goal (desirable) values for each objective can be provided instead. On the other hand, finding proper weights is typically an easy task, since not many of them are required to get reasonably good results. In our applications, for example, no more than fifteen weights were used by our method.

Our technique ensures that only feasible points are produced at generation zero, and the crossover and mutation operators were modified in such a way that infeasible solutions are never generated by the algorithm. This property makes our approach unique, since none of the other GA-based techniques analyzed considered this important issue. This is mainly because most of the previous work with multiobjective optimization techniques dealt only with unconstrained problems.

Finally, the importance of MOSES as a benchmark for new and existing multiobjective optimization methods should be obvious, since no other similar tools, combining GA-based approaches with mathematical programming techniques, were previously available. Its modular structure allows the easy incorporation of new algorithms without having to modify its main routines. Additional details may be found in Ref. [4]. Also, it should be said that the system is, as it stands, a valuable tool for engineering design optimization, because of the variety of different approaches that it contains.

11. Future work

Much additional work remains to be done to improve the performance of our approach. One of our main interests is to be able to compute the ideal vector at run-time, instead of having to give it in advance to the GA. In that respect, we have developed another method that is very promising, but that still has some flaws and does not work properly with problems like the trusses used in this paper, in which one of the objectives may strongly guide the search towards the ideal value, disregarding the importance of the remaining objectives [4].

It would also be desirable to parallelize the GA and the analysis of the structure, to reduce the computational time required for each iteration. Adeli's approach [30] is an excellent example of the kind of work that can be done in that respect. We also aim to encourage theoreticians to develop a theory of convergence for GAs in multiobjective optimization problems by using concepts from Operations Research such as the min–max optimum. In this respect, some important work has been recently done by Rudolph [39] and Van Veldhuizen and Lamont [40], but several issues remain to be solved, such as diverse aspects related to the parallelization of evolutionary multiobjective approaches (e.g., load balancing, impact on Pareto convergence, performance issues, etc.), including new algorithms that are more suitable for parallelization than those currently in use.

Finally, it is highly desirable to be able to find more ways of incorporating knowledge about the domain into the GA, as long as it can be automatically assimilated by the algorithm during its execution and does not have to be provided by the user (to preserve its generality). It is also important to follow Eshelman and Schaffer's [41] work on the pursuit of a theoretical framework that explains the excellent performance of real-coded GAs, so that practice can finally meet theory in the use of GAs for numerical optimization.

References

[1] Krokosky EM. The ideal multifunctional constructural material. Journal of the Structural Division, ASCE 1968;94:958–81.
[2] Stadler W. Multicriteria optimization in mechanics: a survey. Applied Mechanics Review 1984;37(3):277–86.
[3] Duckstein L. Multiobjective optimization in structural design: the model choice problem. In: Atrek E, Gallagher RH, Ragsdell KM, Zienkiewicz OC, editors. New directions in optimum structural design. New York: Wiley, 1984. p. 459–81.
[4] Coello CAC. An empirical study of evolutionary techniques for multiobjective optimization in engineering design. PhD thesis, Department of Computer Science, Tulane University, New Orleans, LA, April 1996.
[5] Coello CAC. A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowledge and Information Systems: An International Journal 1999;1(3):269–308.
[6] Osyczka A. Multicriteria optimization for engineering design. In: Gero JS, editor. Design optimization. New York: Academic Press, 1985. p. 193–227.
[7] Osyczka A. Multicriterion optimization in engineering with FORTRAN programs. Chichester, UK: Ellis Horwood, 1984.
[8] Rosenberg RS. Simulation of genetic populations with biochemical properties. PhD thesis, University of Michigan, Ann Arbor, Michigan, 1967.
[9] Fonseca CM, Fleming PJ. An overview of evolutionary algorithms in multiobjective optimization. Technical Report, Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, UK, 1994.
[10] Schaffer JD. Multiple objective optimization with vector evaluated genetic algorithms. In: Genetic algorithms and their applications: Proceedings of the First International Conference on Genetic Algorithms. London: Lawrence Erlbaum, 1985. p. 93–100.
[11] Rao SS. Multiobjective optimization in structural design with uncertain parameters and stochastic processes. AIAA Journal 1984;22(11):1670–8.
[12] Fourman MP. Compaction of symbolic layout using genetic algorithms. In: Genetic algorithms and their applications: Proceedings of the First International Conference on Genetic Algorithms. London: Lawrence Erlbaum, 1985. p. 141–53.
[13] Hajela P, Lin CY. Genetic search strategies in multicriterion optimal design. Structural Optimization 1992;4:99–107.
[14] Fonseca CM, Fleming PJ. Genetic algorithms for multiobjective optimization: formulation, discussion and generalization. In: Forrest S, editor. Proceedings of the Fifth International Conference on Genetic Algorithms, University of Illinois at Urbana-Champaign. Los Altos, CA: Morgan Kaufmann, 1993. p. 416–23.
[15] Srinivas N, Deb K. Multiobjective optimization using non-dominated sorting in genetic algorithms. Technical Report, Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, India, 1993.
[16] Horn J, Nafpliotis N. Multiobjective optimization using the Niched Pareto genetic algorithm. Technical Report IlliGAL 93005, University of Illinois at Urbana-Champaign, Urbana, IL, USA, 1993.
[17] Goldberg DE, Richardson J. Genetic algorithms with sharing for multimodal function optimization. In: Grefenstette JJ, editor. Genetic algorithms and their applications: Proceedings of the Second International Conference on Genetic Algorithms. London: Lawrence Erlbaum, 1987. p. 41–9.
[18] Grefenstette JJ. Optimization of control parameters for genetic algorithms. IEEE Transactions on Systems, Man, and Cybernetics 1986;16(1):122–8.
[19] Coello CA, Hernández FS, Farrera FA. Optimal design of reinforced concrete beams using genetic algorithms. Expert Systems with Applications: An International Journal 1997;12(1):101–8.
[20] Eshelman LJ. The CHC adaptive search algorithm: how to have safe search when engaging in non-traditional genetic recombination. In: Rawlins GE, editor. Foundations of genetic algorithms. Los Altos, CA: Morgan Kaufmann, 1991. p. 265–83.
[21] Goldberg DE. Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison-Wesley, 1989.
[22] Goldberg DE, Samtani MP. Engineering optimization via genetic algorithm. In: Ninth Conference on Electronic Computation. New York: ASCE, 1986. p. 471–82.
[23] Jenkins WM. Towards structural optimization via the genetic algorithm. Computers and Structures 1991;40(5):1321–7.
[24] Hajela P, Shih CJ. Multiobjective optimum design in mixed integer and discrete design variable problems. AIAA Journal 1990;28(4):670–5.
[25] Rajeev S, Krishnamoorthy CS. Discrete optimization of structures using genetic algorithms. Journal of Structural Engineering 1992;118(5):1233–50.
[26] Schoenauer M, Xanthakis S. Constrained GA optimization. In: Fifth International Conference on Genetic Algorithms, University of Illinois at Urbana-Champaign. Los Altos, CA: Morgan Kaufmann, 1993. p. 573–80.
[27] Lin CY, Hajela P. EVOLVE: a genetic search based optimization code via multiple strategies. In: Hernández S, Brebbia CA, editors. Computer Aided Optimum Design of Structures III: Optimization of Structural Systems and Applications. Amsterdam: Elsevier, 1993. p. 639–54.
[28] Adeli H, Cheng NT. Integrated genetic algorithm for optimization of space structures. Journal of Aerospace Engineering 1993;6(4):315–28.
[29] Adeli H, Cheng NT. Augmented Lagrangian genetic algorithm for structural optimization. Journal of Aerospace Engineering 1994;7(1):104–18.
[30] Adeli H, Kumar S. Distributed genetic algorithm for structural optimization. Journal of Aerospace Engineering 1995;8(3):156–63.
[31] Adeli H, Kumar S. Concurrent structural optimization on massively parallel supercomputer. Journal of Structural Engineering 1995;121(11):1588–97.
[32] Rajan SD. Sizing, shape, and topology design optimization of trusses using genetic algorithm. Journal of Structural Engineering 1995;121(10):1480–7.
[33] Yeh IC. Hybrid genetic algorithms for optimization of truss structures. Microcomputers in Civil Engineering 1999;14(3):199–206.
[34] Liu X, Begg DW, Fishwick RJ. Genetic approach to optimal topology/controller design of adaptive structures. International Journal for Numerical Methods in Engineering 1998;41:815–30.
[35] Belegundu AD. A study of mathematical programming methods for structural optimization. PhD thesis, Department of Civil and Environmental Engineering, University of Iowa, 1982.
[36] Gere JM, Weaver W. Analysis of framed structures. Princeton, NJ: Van Nostrand, 1965.
[37] Coello CA. Análisis de estructuras reticulares por computadora (método de rigideces). Tesis de Licenciatura, 1991 (in Spanish).
[38] Coello CA, Rudnick R, Christiansen AD. Using genetic algorithms for optimal design of trusses. In: Proceedings of the Sixth International Conference on Tools with Artificial Intelligence, New Orleans, LA. Silver Spring, MD: IEEE Computer Society Press, 1994. p. 88–94.
[39] Rudolph G. On a multi-objective evolutionary algorithm and its convergence to the Pareto set. In: Proceedings of the Fifth IEEE Conference on Evolutionary Computation, Piscataway, New Jersey. New York: IEEE Press, 1998. p. 511–6.
[40] Veldhuizen DAV, Lamont GB. Evolutionary computation and convergence to a Pareto front. In: Koza JR, editor. Late Breaking Papers at the Genetic Programming 1998 Conference. Stanford University, California: Stanford University Bookstore, 1998. p. 221–8.
[41] Eshelman LJ, Schaffer JD. Real-coded genetic algorithms and interval-schemata. In: Whitley LD, editor. Foundations of Genetic Algorithms, vol. 2. Los Altos, CA: Morgan Kaufmann, 1993. p. 187–202.
