
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 1, NO. 1, APRIL 1997

Evolutionary Computation: Comments on the History and Current State
Thomas Bäck, Ulrich Hammel, and Hans-Paul Schwefel

Abstract— Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950's. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) [with links to genetic programming (GP) and classifier systems (CS)], evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e., representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain incomplete.

Index Terms— Classifier systems, evolution strategies, evolutionary computation, evolutionary programming, genetic algorithms, genetic programming.
I. EVOLUTIONARY COMPUTATION: ROOTS AND PURPOSE

THIS first issue of the IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION marks an important point in the history of the rapidly growing field of evolutionary computation, and we are glad to participate in this event. In preparation for this summary, we strove to provide a comprehensive review of both the history and the state of the art in the field for both the novice and the expert in evolutionary computation. Our selections of material are necessarily subjective, and we regret any significant omissions.

Although the origins of evolutionary computation can be traced back to the late 1950's (see, e.g., the influencing works of Bremermann [1], Friedberg [2], [3], Box [4], and others), the field remained relatively unknown to the broader scientific community for almost three decades. This was largely due to the lack of available powerful computer platforms at that time, but also due to some methodological shortcomings of those early approaches (see, e.g., Fogel [5, p. 103]).

The fundamental work of Holland [6], Rechenberg [7], Schwefel [8], and Fogel [9] served to slowly change this picture during the 1970's, and we currently observe a remarkable and steady (still exponential) increase in the number of publications (see, e.g., the bibliography of [10]) and conferences in this field, a clear demonstration of the scientific as well as economic relevance of this subject matter.

But what are the benefits of evolutionary computation (compared to other approaches) which may justify the effort invested in this area? We argue that the most significant advantage of using evolutionary search lies in the gain of flexibility and adaptability to the task at hand, in combination with robust performance (although this depends on the problem class) and global search characteristics. In fact, evolutionary computation should be understood as a general adaptable concept for problem solving, especially well suited for solving difficult optimization problems, rather than a collection of related and ready-to-use algorithms.

The majority of current implementations of evolutionary algorithms descend from three strongly related but independently developed approaches: genetic algorithms, evolutionary programming, and evolution strategies.

Genetic algorithms, introduced by Holland [6], [11], [12], and subsequently studied by De Jong [13]–[16], Goldberg [17]–[21], and others such as Davis [22], Eshelman [23], [24], Forrest [25], Grefenstette [26]–[29], Koza [30], [31], Mitchell [32], Riolo [33], [34], and Schaffer [35]–[37], to name only a few, have been originally proposed as a general model of adaptive processes, but by far the largest application of the techniques is in the domain of optimization [15], [16]. Since this is true for all three of the mainstream algorithms presented in this paper, we will discuss their capabilities and performance mainly as optimization strategies.

Evolutionary programming, introduced by Fogel [9], [38] and extended in Burgin [39], [40], Atmar [41], Fogel [42]–[44], and others, was originally offered as an attempt to create artificial intelligence. The approach was to evolve finite state machines (FSM) to predict events on the basis of former observations. An FSM is an abstract machine which transforms a sequence of input symbols into a sequence of output symbols. The transformation depends on a finite set of states and a finite set of state transition rules. The performance of an FSM with respect to its environment might then be measured on the basis of the machine's prediction capability, i.e., by comparing each output symbol with the next input symbol and measuring the worth of a prediction by some payoff function.
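As an illustration of this prediction task, the following minimal Python sketch (ours, not from the original paper; the transition-table representation and the payoff measure, the fraction of correct predictions, are illustrative assumptions) scores an FSM against an observed symbol sequence:

    # Minimal sketch: score a finite state machine as a predictor (illustrative only).
    # The FSM is a table mapping (state, input symbol) -> (next state, output symbol).
    def fsm_payoff(transitions, start_state, sequence):
        """Return the fraction of correctly predicted next symbols."""
        state, correct = start_state, 0
        for observed, upcoming in zip(sequence, sequence[1:]):
            state, predicted = transitions[(state, observed)]
            if predicted == upcoming:       # output compared with the next input symbol
                correct += 1
        return correct / max(len(sequence) - 1, 1)

    # Example: a two-state machine that predicts an alternating 0,1,0,1,... sequence.
    T = {("A", 0): ("B", 1), ("A", 1): ("B", 0),
         ("B", 0): ("A", 1), ("B", 1): ("A", 0)}
    print(fsm_payoff(T, "A", [0, 1, 0, 1, 0, 1]))   # -> 1.0

In evolutionary programming such a payoff value plays the role of the fitness that drives variation and selection of the machines.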
Evolution strategies, as developed by Rechenberg [45], [46] and Schwefel [47], [48], and extended by Herdy [49], Kursawe [50], Ostermeier [51], [52], Rudolph [53], Schwefel [54], and others, were initially designed with the goal of solving difficult discrete and continuous, mainly experimental [55], parameter optimization problems.

Manuscript received November 13, 1996; revised January 23, 1997. The work of T. Bäck was supported by a grant from the German BMBF, Project EVOALG.
T. Bäck is with the Informatik Centrum Dortmund, Center for Applied Systems Analysis (CASA), D-44227 Dortmund, Germany, and Leiden University, NL-2333 CA Leiden, The Netherlands (e-mail: [email protected]).
U. Hammel and H.-P. Schwefel are with the Computer Science Department, Dortmund University, D-44221 Dortmund, Germany (e-mail: [email protected]; [email protected]).
Publisher Item Identifier S 1089-778X(97)03305-5.


During the 1980's, advances in computer performance enabled the application of evolutionary algorithms to solve difficult real-world optimization problems, and the solutions received a broader audience. In addition, beginning in 1985, international conferences on the techniques were offered (mainly focusing on genetic algorithms [56]–[61], with an early emphasis on evolutionary programming [62]–[66], as small workshops on theoretical aspects of genetic algorithms [67]–[69], as a genetic programming conference [70], with the general theme of problem solving methods gleaned from nature [71]–[74], and with the general topic of evolutionary computation [75]–[78]). But somewhat surprisingly, the researchers in the various disciplines of evolutionary computation remained isolated from each other until the meetings in the early 1990's [59], [63], [71].

The remainder of this paper is intended as an overview of the current state of the field. We cannot claim that this overview is close to complete. As good starting points for further studies we recommend [5], [18], [22], [31], [32], [48], and [79]–[82]. In addition, moderated mailing lists¹ and newsgroups² allow one to keep track of current events and discussions in the field.

¹ For example, [email protected] and [email protected].
² For example, comp.ai.genetic.

In the next section we describe the application domain of evolutionary algorithms and contrast them with the traditional approach of mathematical programming.

II. OPTIMIZATION, EVOLUTIONARY COMPUTATION, AND MATHEMATICAL PROGRAMMING

In general, an optimization problem requires finding a setting $\vec{x}^*$ of the free parameters of the system under consideration, such that a certain quality criterion $f : M \to \mathbb{R}$ (typically called the objective function) is maximized (or, equivalently, minimized)

$$f(\vec{x}) \to \max. \qquad (1)$$

The objective function might be given by real-world systems of arbitrary complexity. The solution to the global optimization problem (1) requires finding a vector $\vec{x}^* \in M$ such that $\forall \vec{x} \in M : f(\vec{x}) \le f(\vec{x}^*) =: f^*$. Characteristics such as multimodality, i.e., the existence of several local maxima $\hat{\vec{x}}$ with

$$\exists\, \varepsilon > 0 \ \forall \vec{x} \in M : \ \rho(\vec{x}, \hat{\vec{x}}) < \varepsilon \Rightarrow f(\vec{x}) \le f(\hat{\vec{x}}) \qquad (2)$$

(where $\rho$ denotes a distance measure on $M$), constraints, i.e., restrictions on the set $M$ by functions $g_j : M \to \mathbb{R}$ such that the set of feasible solutions $F \subseteq M$ is only a subset of the domain of the variables

$$F = \{ \vec{x} \in M \mid g_j(\vec{x}) \ge 0 \ \forall j \} \subseteq M \qquad (3)$$

and other factors, such as large dimensionality, strong nonlinearities, nondifferentiability, and noisy and time-varying objective functions, frequently lead to difficult if not unsolvable optimization tasks (see [83, p. 6]). But even in the latter case, the identification of an improvement of the currently known best solution through optimization is often already a big success for practical problems, and in many cases evolutionary algorithms provide an efficient and effective method to achieve this.

Optimization problems occur in many technical, economic, and scientific projects, like cost-, time-, and risk-minimization or quality-, profit-, and efficiency-maximization [10], [22] (see also [80, part G]). Thus, the development of general strategies is of great value.

In real-world situations the objective function and the constraints are often not analytically treatable or are even not given in closed form, e.g., if the function definition is based on a simulation model [84], [85].

The traditional approach in such cases is to develop a formal model that resembles the original functions close enough but is solvable by means of traditional mathematical methods such as linear and nonlinear programming. This approach most often requires simplifications of the original problem formulation. Thus, an important aspect of mathematical programming lies in the design of the formal model.

No doubt, this approach has proven to be very successful in many applications, but has several drawbacks which motivated the search for novel approaches, where evolutionary computation is one of the most promising directions. The most severe problem is that, due to oversimplifications, the computed solutions do not solve the original problem. Such problems, e.g., in the case of simulation models, are then often considered unsolvable.

The fundamental difference in the evolutionary computation approach is to adapt the method to the problem at hand. In our opinion, evolutionary algorithms should not be considered as off-the-peg, ready-to-use algorithms but rather as a general concept which can be tailored to most of the real-world applications that often are beyond solution by means of traditional methods. Once a successful EC-framework has been developed it can be incrementally adapted to the problem under consideration [86], to changes of the requirements of the project, to modifications of the model, and to the change of hardware resources.


penetration of the population with the genetic information of individuals of above-average fitness. The nondeterministic nature of reproduction leads to a permanent production of novel genetic information and therefore to the creation of differing offspring (see [5], [79], and [87] for more details).

This neo-Darwinian model of organic evolution is reflected by the structure of the following general evolutionary algorithm.

Algorithm 1:
  t := 0;
  initialize P(t);
  evaluate P(t);
  while not terminate do
    P'(t) := variation [P(t)];
    evaluate [P'(t)];
    P(t+1) := select [P'(t) ∪ Q];
    t := t + 1;
  od

In this algorithm, P(t) denotes a population of μ individuals at generation t. Q is a special set of individuals that might be considered for selection, e.g., Q = P(t) (but Q = ∅ is possible as well). An offspring population P'(t) of size λ is generated by means of variation operators such as recombination and/or mutation (but others such as inversion [11, pp. 106–109] are also possible) from the population P(t). The offspring individuals are then evaluated by calculating the objective function values for each of the solutions represented by individuals in P'(t), and selection based on the fitness values is performed to drive the process toward better solutions. It should be noted that λ = 1 is possible, thus including so-called steady-state selection schemes [88], [89] if used in combination with Q = P(t). Furthermore, by choosing 1 ≤ λ ≤ μ, an arbitrary value of the generation gap [90] is adjustable, such that the transition between strictly generational and steady-state variants of the algorithm is also taken into account by the formulation offered here. It should also be noted that λ ≫ μ, i.e., a reproduction surplus, is the normal case in nature.
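The following short Python sketch (ours, not part of the paper) mirrors the structure of Algorithm 1 for a minimization task; the population sizes, operators, and fixed-generation termination rule are placeholder assumptions, and selection is shown in its elitist "plus" form with Q = P(t):

    import random

    # Illustrative skeleton of Algorithm 1 (assumed operators and sizes; minimization).
    def evolutionary_algorithm(init, variation, evaluate, mu=20, lam=40, generations=100):
        population = [init() for _ in range(mu)]
        fitness = [evaluate(x) for x in population]
        for _ in range(generations):                      # termination: fixed generation count
            offspring = [variation(population) for _ in range(lam)]
            off_fitness = [evaluate(x) for x in offspring]
            # Q = P(t): selection acts on the union of parents and offspring
            pool = sorted(zip(population + offspring, fitness + off_fitness),
                          key=lambda pair: pair[1])
            population = [x for x, _ in pool[:mu]]
            fitness = [f for _, f in pool[:mu]]
        return population[0], fitness[0]

    # Example usage on a five-dimensional sphere model:
    best, value = evolutionary_algorithm(
        init=lambda: [random.uniform(-5, 5) for _ in range(5)],
        variation=lambda pop: [x + random.gauss(0, 0.1) for x in random.choice(pop)],
        evaluate=lambda x: sum(v * v for v in x))
    print(best, value)

Replacing the selection step by one that draws survivors from the offspring only would give the non-elitist "comma" variant (Q = ∅) discussed later in Section IV-E.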
IV. DESIGNING AN EVOLUTIONARY ALGORITHM

As mentioned, at least three variants of evolutionary algorithms have to be distinguished: genetic algorithms, evolutionary programming, and evolution strategies. From these ("canonical") approaches innumerable variants have been derived. Their main differences lie in:
• the representation of individuals;
• the design of the variation operators (mutation and/or recombination);
• the selection/reproduction mechanism.

In most real-world applications the search space is defined by a set of objects, e.g., processing units, pumps, heaters, and coolers of a chemical plant, each of which have different parameters such as energy consumption, capacity, etc. Those parameters which are subject to optimization constitute the so-called phenotype space. On the other hand the genetic operators often work on abstract mathematical objects like binary strings, the genotype space. Obviously, a mapping or coding function between the phenotype and genotype space is required. Fig. 1 sketches the situation (see also [5, pp. 38–43]).

[Fig. 1. The relation of genotype space and phenotype space [5, p. 39].]

In general, two different approaches can be followed. The first is to choose one of the standard algorithms and to design a decoding function according to the requirements of the algorithm. The second suggests designing the representation as close as possible to the characteristics of the phenotype space, almost avoiding the need for a decoding function.

Many empirical and theoretical results are available for the standard instances of evolutionary algorithms, which is clearly an important advantage of the first approach, especially with regard to the reuse and parameter setting of operators. On the other hand, a complex coding function may introduce additional nonlinearities and other mathematical difficulties which can hinder the search process substantially [79, pp. 221–227], [82, p. 97].

There is no general answer to the question of which one of the two approaches mentioned above to follow for a specific project, but many practical applications have shown that the best solutions could be found after imposing substantial modifications to the standard algorithms [86]. We think that most practitioners prefer natural, problem-related representations. Michalewicz [82, p. 4] offers:

  It seems that a "natural" representation of a potential solution for a given problem plus a family of applicable "genetic" operators might be quite useful in the approximation of solutions of many problems, and this nature-modeled approach is a promising direction for problem solving in general.

Furthermore, many researchers also use hybrid algorithms, i.e., combinations of evolutionary search heuristics and traditional as well as knowledge-based search techniques [22, p. 56], [91], [92].

It should be emphasized that all this becomes possible because the requirements for the application of evolutionary heuristics are so modest compared to most other search techniques. In our opinion, this is one of the most important strengths of the evolutionary approach and one of the reasons for the popularity evolutionary computation has gained throughout the last decade.


A. The Representation

Surprisingly, despite the fact that the representation problem, i.e., the choice or design of a well-suited genetic representation for the problem under consideration, has been described by many researchers [82], [93], [94], only a few publications explicitly deal with this subject, except for specialized research directions such as genetic programming [31], [95], [96] and the evolution of neural networks [97], [98].

Canonical genetic algorithms use a binary representation of individuals as fixed-length strings over the alphabet {0, 1} [11], such that they are well suited to handle pseudo-Boolean optimization problems of the form

$$f : \{0, 1\}^\ell \to \mathbb{R}. \qquad (4)$$

Sticking to the binary representation, genetic algorithms often enforce the utilization of encoding and decoding functions that facilitate mapping solutions $\vec{x} \in M$ to binary strings and vice versa, which sometimes requires rather complex mappings. In case of continuous parameter optimization problems, for instance, genetic algorithms typically represent a real-valued vector $\vec{x} \in \mathbb{R}^n$ by a binary string $\vec{a} \in \{0, 1\}^\ell$ as follows: the binary string is logically divided into $n$ segments of equal length, each segment is decoded to yield the corresponding integer value, and the integer value is in turn linearly mapped to the interval $[u_i, v_i] \subseteq \mathbb{R}$ (corresponding with the $i$th segment of the binary string) of real values [18].
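As a concrete illustration of such a decoding function, the following Python sketch (ours; the segment length and the bounds $[u_i, v_i]$ are assumed values) maps a binary string of equal-length segments onto real parameters:

    def decode(bits, bounds, seg_len):
        """Decode a binary string into one real value per segment.

        bits    -- list of 0/1 entries of length n * seg_len
        bounds  -- list of (u_i, v_i) intervals, one per segment
        seg_len -- number of bits per segment
        """
        values = []
        for i, (u, v) in enumerate(bounds):
            segment = bits[i * seg_len:(i + 1) * seg_len]
            k = int("".join(map(str, segment)), 2)                # integer value of the segment
            values.append(u + (v - u) * k / (2 ** seg_len - 1))   # linear mapping onto [u_i, v_i]
        return values

    # Example: two variables, 8 bits each, both constrained to [-5.12, 5.12].
    x = decode([1, 0, 1, 1, 0, 0, 1, 0] * 2, [(-5.12, 5.12)] * 2, 8)

The objective function is then evaluated on the decoded vector, which is exactly where the coding function can introduce the additional difficulties discussed next.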
The strong preference for using binary representations of solutions in genetic algorithms is derived from schema theory [11], which analyzes genetic algorithms in terms of their expected schema sampling behavior under the assumption that mutation and recombination are detrimental. The term schema denotes a similarity template that represents a subset of $\{0, 1\}^\ell$, and the schema theorem of genetic algorithms offers that the canonical genetic algorithm provides a near-optimal sampling strategy (in terms of minimizing expected losses) for schemata by increasing the number of well-performing, short (i.e., with small distance between the left-most and right-most defined position), and low-order (i.e., with few specified bits) schemata (so-called building blocks) over subsequent generations (see [18] for a more detailed introduction to the schema theorem). The fundamental argument to justify the strong emphasis on binary alphabets is derived from the fact that the number of schemata is maximized for a given finite number of search points under a binary alphabet [18, pp. 40–41]. Consequently, the schema theory presently seems to favor binary representations of solutions (but see [99] for an alternative view and [100] for a transfer of schema theory to S-expression representations used in genetic programming).

Practical experience, as well as some theoretical hints regarding the binary encoding of continuous object variables [101]–[105], however, indicate that the binary representation has some disadvantages. The coding function might introduce an additional multimodality, thus making the combined objective function (the composition of the objective function with the decoding function) more complex than the original problem was. In fact, the schema theory relies on approximations [11, pp. 78–83] and on the optimization criterion to minimize the overall expected loss (corresponding to the sum of all fitness values of all individuals ever sampled during the evolution) rather than the criterion to maximize the best fitness value ever found [15]. In concluding this brief excursion into the theory of canonical genetic algorithms, we would like to emphasize the recent work by Vose [106]–[109] and others [110], [111] on modeling genetic algorithms by Markov chain theory. This approach has already provided a remarkable insight into their convergence properties and dynamical behavior and led to the development of so-called executable models that facilitate the direct simulation of genetic algorithms by Markov chains for problems of sufficiently small dimension [112], [113].

In contrast to genetic algorithms, the representation in evolution strategies and evolutionary programming is directly based on real-valued vectors when dealing with continuous parameter optimization problems of the general form

$$f : M \subseteq \mathbb{R}^n \to \mathbb{R}. \qquad (5)$$

Both methods have originally been developed and are also used, however, for combinatorial optimization problems [42], [43], [55]. Moreover, since many real-world problems have complex search spaces which cannot be mapped "canonically" to one of the representations mentioned so far, many strategy variants, e.g., for integer [114], mixed-integer [115], structure optimization [116], [117], and others [82, ch. 10], have been introduced in the literature, but exhaustive comparative studies especially for nonstandard representations are still missing. The actual development of the field is characterized by a progressing integration of the different approaches, such that the utilization of the common labels "genetic algorithm," "evolution strategy," and "evolutionary programming" might be sometimes even misleading.

B. Mutation

Of course, the design of variation operators has to obey the mathematical properties of the chosen representation, but there are still many degrees of freedom.

Mutation in genetic algorithms was introduced as a dedicated "background operator" of small importance (see [11, pp. 109–111]). Mutation works by inverting bits with very small probability such as $p_m = 0.001$ [13], $p_m = 0.01$ [118], or $p_m = 1/\ell$ [119], [120]. Recent studies have impressively clarified, however, that much larger mutation rates, decreasing over the course of evolution, are often helpful with respect to the convergence reliability and velocity of a genetic algorithm [101], [121], and that even self-adaptive mutation rates are effective for pseudo-Boolean problems [122]–[124].

Originally, mutation in evolutionary programming was implemented as a random change (or multiple changes) of the description of the finite state machines according to five different modifications: change of an output symbol, change of a state transition, addition of a state, deletion of a state, or change of the initial state.

[Fig. 2. Two-dimensional contour plot of the effect of the mutation operator in case of self-adaptation of (a) a single step size, (b) n step sizes, and (c) covariances. x* denotes the optimizer. The ellipses represent one line of equal probability to place an offspring that is generated by mutation from the parent individual located at the center of the ellipses. Five sample individuals are shown in each of the plots.]

The mutations were typically performed with uniform probability, and the number of mutations for a single offspring was either fixed or also chosen according to a probability distribution. Currently, the most frequently used mutation scheme as applied to real-valued representations is very similar to that of evolution strategies.

In evolution strategies, the individuals consist of object variables $x_i$ ($1 \le i \le n$) and so-called strategy parameters, which are discussed in the next section. Mutation is then performed independently on each vector element by adding a normally distributed random value with expectation zero and standard deviation $\sigma$ (the notation $N_i(0, 1)$ indicates that the random variable is sampled anew for each value of the index $i$)

$$x_i' = x_i + \sigma \cdot N_i(0, 1). \qquad (6)$$

This raises the question of how to control the so-called step size $\sigma$ of (6), which is discussed in the next section.
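A minimal Python sketch of the two mutation operators discussed above (ours; the particular mutation rate and step size in the usage lines are illustrative assumptions, not recommendations from the paper):

    import random

    def bitflip_mutation(bits, p_m):
        """GA-style mutation: invert each bit independently with probability p_m."""
        return [1 - b if random.random() < p_m else b for b in bits]

    def gaussian_mutation(x, sigma):
        """ES-style mutation as in (6): add N(0, sigma) noise, sampled anew per component."""
        return [x_i + sigma * random.gauss(0.0, 1.0) for x_i in x]

    offspring_bits = bitflip_mutation([0, 1, 1, 0, 1], p_m=1.0 / 5)
    offspring_real = gaussian_mutation([0.5, -1.2, 3.0], sigma=0.1)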
C. Self-Adaptation

In [125] Schwefel introduced an endogenous mechanism for step-size control by incorporating these parameters into the representation in order to facilitate the evolutionary self-adaptation of these parameters by applying evolutionary operators to the object variables and the strategy parameters for mutation at the same time, i.e., searching the space of solutions and strategy parameters simultaneously. This way, a suitable adjustment and diversity of mutation parameters should be provided under arbitrary circumstances.

More formally, an individual consists of object variables $x_i$ and strategy parameters $\sigma_i$ ($1 \le i \le n$). The mutation operator works by adding a normally distributed random vector whose components are independently normally distributed with expectation zero and variance $\sigma_i^2$. The effect of mutation is now defined as

$$\sigma_i' = \sigma_i \cdot \exp\bigl(\tau' \cdot N(0, 1) + \tau \cdot N_i(0, 1)\bigr) \qquad (7)$$
$$x_i' = x_i + \sigma_i' \cdot N_i(0, 1) \qquad (8)$$

where $\tau' \propto (\sqrt{2n})^{-1}$ and $\tau \propto (\sqrt{2\sqrt{n}})^{-1}$.

This mutation scheme, which is most frequently used in evolution strategies, is schematically depicted (for $n = 2$) in the middle of Fig. 2. The locations of equal probability density for descendants are concentric hyperellipses (just one is depicted in Fig. 2) around the parental midpoint. In the case considered here, i.e., up to $n$ variances, but no covariances, the axes of the hyperellipses are congruent with the coordinate axes.

Two modifications of this scheme have to be mentioned: a simplified version uses just one step-size parameter for all of the object variables. In this case the hyperellipses are reduced to hyperspheres, as depicted in the left part of Fig. 2. A more elaborate correlated mutation scheme allows for the rotation of hyperellipses, as shown in the right part of Fig. 2. This mechanism aims at a better adaptation to the topology of the objective function (for details, see [79]).

The settings given above for the learning rates $\tau$ and $\tau'$ are recommended as upper bounds for the choice of these parameters (see [126, pp. 167–168]), but one should have in mind that, depending on the particular topological characteristics of the objective function, the optimal setting of these parameters might differ from the values proposed. For the case of one self-adaptable step size, however, Beyer has recently theoretically shown that, for the sphere model (a quadratic bowl), the setting $\tau \propto 1/\sqrt{n}$ is the optimal choice, maximizing the convergence velocity [127].

The amount of information included into the individuals by means of the self-adaptation principle increases from the simple case of one standard deviation up to the order of $n^2$ additional parameters, which reflects an enormous degree of freedom for the internal models of the individuals. This growing degree of freedom often enhances the global search capabilities of the algorithm at the cost of the expense in computation time, and it also reflects a shift from the precise adaptation of a few strategy parameters (as in case of one step size) to the exploitation of a large diversity of strategy parameters. In case of correlated mutations, Rudolph [128] has shown that an approximation of the Hessian could be computed with an upper bound on the population size that grows quadratically with $n$, but the typical population sizes $\mu = 15$ and $\lambda = 100$, independently of $n$, are certainly not sufficient to achieve this.
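A compact Python sketch of this self-adaptive mutation, following (7) and (8) with learning rates set according to the proportionalities above (ours; the lower bound imposed on the step sizes is an added safeguard, not part of the formulas):

    import math
    import random

    def self_adaptive_mutation(x, sigma):
        """Mutate the strategy parameters first, as in (7), then the object variables, as in (8)."""
        n = len(x)
        tau_prime = 1.0 / math.sqrt(2.0 * n)
        tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))
        common = tau_prime * random.gauss(0.0, 1.0)            # one draw shared by all sigma_i
        new_sigma = [s * math.exp(common + tau * random.gauss(0.0, 1.0)) for s in sigma]
        new_sigma = [max(s, 1e-10) for s in new_sigma]         # safeguard (our addition)
        new_x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(x, new_sigma)]
        return new_x, new_sigma

    x, sigma = self_adaptive_mutation([1.0, -2.0, 0.5], [0.3, 0.3, 0.3])

Note that the mutated step sizes are used to displace the object variables, which is exactly the ordering whose importance is discussed below.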


The choice of a logarithmic normal distribution for the modification of the standard deviations $\sigma_i$ is presently also acknowledged in evolutionary programming literature [129]–[131]. Extensive empirical investigations indicate some advantage of this scheme over the original additive self-adaptation mechanism introduced independently (but about 20 years later than in evolution strategies) in evolutionary programming [132], where

$$\sigma_i' = \sigma_i \cdot (1 + \alpha \cdot N(0, 1)) \qquad (9)$$

(with a setting of $\alpha \approx 0.2$ [131]). Recent preliminary investigations indicate, however, that this becomes reversed when noisy objective functions are considered, where the additive mechanism seems to outperform multiplicative modifications [133].

A study by Gehlhaar and Fogel [134] also indicates that the order of the modifications of the object variables and the step sizes has a strong impact on the effectiveness of self-adaptation: It appears important to mutate the standard deviations first and to use the mutated standard deviations for the modification of object variables. As the authors point out in that study, the reversed mechanism might suffer from generating offspring that have useful object variable vectors but poor strategy parameter vectors, because these have not been used to determine the position of the offspring itself.

More work needs to be performed, however, to achieve any clear understanding of the general advantages or disadvantages of one self-adaptation scheme compared to the other mechanisms. A recent theoretical study by Beyer presents a first step toward this goal [127]. In this work, the author shows that the self-adaptation principle works for a variety of different probability density functions for the modification of the step size, i.e., it is an extremely robust mechanism. Moreover, [127] clarifies that (9) is obtained from the corresponding equation for evolution strategies with one self-adaptable step size by Taylor expansion breaking off after the linear term, such that both methods behave equivalently for small settings of the learning rates $\tau$ and $\alpha$ when $\alpha \approx \tau$. This prediction was confirmed perfectly by an experiment reported in [135].

Apart from the early work by Schaffer and Morishima [37], self-adaptation has only recently been introduced in genetic algorithms as a mechanism for evolving the parameters of variation operators. In [37], punctuated crossover was offered as a method for adapting both the number and position of crossover points for a multipoint crossover operator in canonical genetic algorithms. Although this approach seemed promising, the operator has not been used widely. A simpler approach toward self-adapting the crossover operator was presented by Spears [136], who allowed individuals to choose between two-point crossover and uniform crossover by means of a self-adaptable operator choice bit attached to the representation of individuals. The results indicated that, in case of crossover operators, rather than adapting to the single best operator for a given problem, the mechanism seems to benefit from the existing diversity of operators available for crossover.

Concerning the mutation operator in genetic algorithms, some effort to facilitate self-adaptation of the mutation rate has been presented by Smith and Fogarty [123], based on earlier work by Bäck [137]. These approaches incorporate the mutation rate $p_m$ into the representation of individuals and allow for mutation and recombination of the mutation rate in the same way as the vector of binary variables is evolved. The results reported in [123] demonstrate that the mechanism yields a significant improvement in performance of a canonical genetic algorithm on the test functions used.

D. Recombination

The variation operators of canonical genetic algorithms, mutation and recombination, are typically applied with a strong emphasis on recombination. The standard algorithm performs a so-called one-point crossover, where two individuals are chosen randomly from the population, a position in the bitstrings is randomly determined as the crossover point, and an offspring is generated by concatenating the left substring of one parent and the right substring of the other parent. Numerous extensions of this operator, such as increasing the number of crossover points [138], uniform crossover (each bit is chosen randomly from the corresponding parental bits) [139], and others, have been proposed, but similar to evolution strategies no generally useful recipe for the choice of a recombination operator can be given. The theoretical analysis of recombination is still to a large extent an open problem. Recent work on multi-parent recombination, where more than two individuals participate in generating a single offspring individual, clarifies that this generalization of recombination might yield a performance improvement in many application examples [140]–[142]. Unlike evolution strategies, where it is either utilized for the creation of all members of the intermediate population (the default case) or not at all, the recombination operator in genetic algorithms is typically applied with a certain probability $p_c$, and commonly proposed settings of the crossover probability are $p_c = 0.6$ [13] and $p_c = 0.95$ [118].

In evolution strategies recombination is incorporated into the main loop of the algorithm as the first operator (see Algorithm 1) and generates a new intermediate population of $\lambda$ individuals by $\lambda$-fold application to the parent population, creating one individual per application from $\varrho$ ($1 \le \varrho \le \mu$) parent individuals. Normally, $\varrho = 2$ or $\varrho = \mu$ (so-called global recombination) are chosen. The recombination types for object variables and strategy parameters in evolution strategies often differ from each other, and typical examples are discrete recombination (random choices of single variables from parents, comparable to uniform crossover in genetic algorithms) and intermediary recombination (often arithmetic averaging, but other variants such as geometrical crossover [143] are also possible). For further details on these operators, see [79].

The advantages or disadvantages of recombination for a particular objective function can hardly be assessed in advance, and certainly no generally useful setting of recombination operators (such as the discrete recombination of object variables and global intermediary recombination of strategy parameters, as we have claimed in [79, pp. 82–83]) exists. Recently, Kursawe has impressively demonstrated that, using an inappropriate setting of the recombination operator, the (15, 100)-evolution strategy with self-adaptable variances might even diverge on a sphere model [144]. Kursawe shows that the appropriate choice of the recombination operator not only depends on the objective function topology, but also on the dimension of the objective function and the number of strategy parameters incorporated into the individuals.

Only recently, Rechenberg [46] and Beyer [142] presented first results concerning the convergence velocity analysis of global recombination in case of the sphere model. These results clarify that, for one optimally chosen standard deviation $\sigma$ (rather than $n$ of them, as in Kursawe's experiment), a $\mu$-fold speedup is achieved by both recombination variants. Beyer's interpretation of the results, however, is somewhat surprising because it does not attribute the success of this operator to the existence of building blocks which are usefully rearranged in an offspring individual, but rather explains it as a genetic repair of the harmful parts of mutation.

Concerning evolutionary programming, a rash statement based on the common understanding of the contending structures as individuals would be to claim that evolutionary programming simply does not use recombination. Rather than focusing on the mechanism of sexual recombination, however, Fogel [145] argues that one may examine and simulate its functional effect and correspondingly interpret a string of symbols as a reproducing population or species, thus making recombination a nonissue (refer to [145] for philosophical reasons underlining this choice).
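The following Python sketch (ours) contrasts one-point crossover on bitstrings with the discrete and intermediary recombination typical of evolution strategies; the particular parents and equal-length assumption are illustrative only:

    import random

    def one_point_crossover(a, b):
        """Canonical GA crossover: left substring of one parent, right substring of the other."""
        point = random.randint(1, len(a) - 1)
        return a[:point] + b[point:]

    def discrete_recombination(a, b):
        """ES-style discrete recombination: choose each component from either parent."""
        return [random.choice(pair) for pair in zip(a, b)]

    def intermediary_recombination(a, b):
        """ES-style intermediary recombination: arithmetic average of the parents."""
        return [(x + y) / 2.0 for x, y in zip(a, b)]

    child_bits = one_point_crossover([0, 1, 1, 0, 1, 0], [1, 1, 0, 0, 0, 1])
    child_real = intermediary_recombination([0.5, 2.0, -1.0], [1.5, 0.0, -3.0])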
E. Selection

Unlike the variation operators, which work on the genetic representation, the selection operator is based solely on the fitness values of the individuals.

In genetic algorithms, selection is typically implemented as a probabilistic operator, using the relative fitness $p(\vec{a}_i) = f(\vec{a}_i) / \sum_j f(\vec{a}_j)$ to determine the selection probability of an individual $\vec{a}_i$ (proportional selection). This method requires positive fitness values and a maximization task, so that scaling functions are often utilized to transform the fitness values accordingly (see, e.g., [18, p. 124]). Rather than using absolute fitness values, rank-based selection methods utilize the indexes of individuals when ordered according to fitness values to calculate the corresponding selection probabilities. Linear [146] as well as nonlinear [82, p. 60] mappings have been proposed for this type of selection operator. Tournament selection [147] works by taking a random uniform sample of a certain size $q$ from the population, selecting the best of these $q$ individuals to survive for the next generation, and repeating the process until the new population is filled. This method gains increasing popularity because it is easy to implement, computationally efficient, and allows for fine-tuning the selective pressure by increasing or decreasing the tournament size $q$. For an overview of selection methods and a characterization of their selective pressure in terms of numerical measures, the reader should consult [148] and [149]. While most of these selection operators have been introduced in the framework of a generational genetic algorithm, they can also be used in combination with the steady-state and generation gap methods outlined in Section III.

The $(\mu, \lambda)$-evolution strategy uses a deterministic selection scheme. The notation $(\mu, \lambda)$ indicates that $\mu$ parents create $\lambda > \mu$ offspring by means of recombination and mutation, and the best $\mu$ offspring individuals are deterministically selected to replace the parents (in this case, $Q = \emptyset$ in Algorithm 1). Notice that this mechanism allows that the best member of the population at generation $t + 1$ might perform worse than the best individual at generation $t$, i.e., the method is not elitist, thus allowing the strategy to accept temporary deteriorations that might help to leave the region of attraction of a local optimum and reach a better optimum. In contrast, the $(\mu + \lambda)$ strategy selects the $\mu$ survivors from the union of parents and offspring, such that a monotonic course of evolution is guaranteed [$Q = P(t)$ in Algorithm 1]. Due to recommendations by Schwefel, however, the $(\mu, \lambda)$ strategy is preferred over the $(\mu + \lambda)$ strategy, although recent experimental findings seem to indicate that the latter performs as well as or better than the $(\mu, \lambda)$ strategy in many practical cases [134]. It should also be noted that both schemes can be interpreted as instances of the general $(\mu, \kappa, \lambda)$ strategy, where $\kappa \ge 1$ denotes the maximum life span (in generations) of an individual. For $\kappa = 1$, the selection method yields the $(\mu, \lambda)$ strategy, while it turns into the $(\mu + \lambda)$ strategy for $\kappa = \infty$ [54].

A minor difference between evolutionary programming and evolution strategies consists in the choice of a probabilistic variant of $(\mu + \lambda)$ selection in evolutionary programming, where each solution out of the $\lambda$ offspring and $\mu$ parent individuals is evaluated against $q$ (typically, $q = 10$) other randomly chosen solutions from the union of parent and offspring individuals [$Q = P(t)$ in Algorithm 1]. For each comparison, a "win" is assigned if an individual's score is better than or equal to that of its opponent, and the $\mu$ individuals with the greatest number of wins are retained to be parents of the next generation. As shown in [79, pp. 96–99], this selection method is a probabilistic version of $(\mu + \lambda)$ selection which becomes more and more deterministic as the number of competitors $q$ is increased. Whether or not a probabilistic selection scheme should be preferable over a deterministic scheme remains an open question.

Evolutionary algorithms can easily be ported to parallel computer architectures [150], [151]. Since the individuals can be modified and, most importantly, evaluated independently of each other, we should expect a speed-up scaling linearly with the number of processing units as long as this number does not exceed the population size. But selection operates on the whole population, so this operator eventually slows down the overall performance, especially for massively parallel architectures where the number of processing units is large relative to the population size. This observation motivated the development of parallel algorithms using local selection within subpopulations like in migration models [53], [152] or within small neighborhoods of spatially arranged individuals like in diffusion models [153]–[156] (also called cellular evolutionary algorithms [157]–[159]). It can be observed that local selection techniques not only yield a considerable speed-up on parallel architectures, but also improve the robustness of the algorithms [46], [116], [160].
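A small Python sketch (ours) of three of the selection schemes just described, assuming a minimization task; the population contents, the tournament size, and the value of mu in the usage line are illustrative assumptions:

    import random

    def tournament_selection(population, fitness, q):
        """Pick the best of q uniformly sampled individuals (minimization)."""
        contenders = random.sample(range(len(population)), q)
        return population[min(contenders, key=lambda i: fitness[i])]

    def plus_selection(parents, offspring, pool_fitness, mu):
        """(mu + lambda) selection: survivors come from the union of parents and offspring;
        pool_fitness lists fitness values for parents followed by offspring."""
        pool = sorted(zip(parents + offspring, pool_fitness), key=lambda p: p[1])
        return [x for x, _ in pool[:mu]]

    def comma_selection(offspring, offspring_fitness, mu):
        """(mu, lambda) selection: survivors come from the offspring only (non-elitist)."""
        pool = sorted(zip(offspring, offspring_fitness), key=lambda p: p[1])
        return [x for x, _ in pool[:mu]]

    survivors = comma_selection([[random.random()] for _ in range(10)],
                                [random.random() for _ in range(10)], mu=3)

The probabilistic EP variant can be obtained from plus_selection by replacing the global sort with the pairwise q-fold comparisons described above.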


F. Other Evolutionary Algorithm Variants

Although it is impossible to present a thorough overview of all variants of evolutionary computation here, it seems appropriate to explicitly mention order-based genetic algorithms [18], [82], classifier systems [161], [162], and genetic programming [31], [70], [81], [163] as branches of genetic algorithms that have developed into their own directions of research and application. The following overview is restricted to a brief statement of their domain of application and some literature references:

• Order-based genetic algorithms were proposed for searching the space of permutations directly rather than using complex decoding functions for mapping binary strings to permutations and preserving feasible permutations under mutation and crossover (as proposed in [164]). They apply specialized recombination (such as order crossover or partially matched crossover) and mutation operators (such as random exchanges of two elements of the permutation) which preserve permutations (see [82, ch. 10] for an overview, and the short sketch following this list).

• Classifier systems use an evolutionary algorithm to search the space of production rules (often encoded by strings over a ternary alphabet, but also sometimes using symbolic rules [165]) of a learning system capable of induction and generalization [18, ch. 6], [161], [166], [167]. Typically, the Michigan approach and the Pittsburgh approach are distinguished according to whether an individual corresponds with a single rule of the rule-based system (Michigan) or with a complete rule base (Pittsburgh).

• Genetic programming applies evolutionary search to the space of tree structures which may be interpreted as computer programs in a language suitable to modification by mutation and recombination. The dominant approach to genetic programming uses (a subset of) LISP programs (S-expressions) as genotype space [31], [163], but other programming languages including machine code are also used (see, e.g., [70], [81], and [168]).
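As an illustration of permutation-preserving operators (our sketch, not the specific operators of any cited work), a swap mutation and an order-crossover-like recombination can be written as follows:

    import random

    def swap_mutation(perm):
        """Exchange two randomly chosen elements; the result is still a permutation."""
        p = perm[:]
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
        return p

    def order_crossover(p1, p2):
        """Copy a slice from the first parent, then fill the remaining positions with
        the missing elements in the order they appear in the second parent."""
        n = len(p1)
        i, j = sorted(random.sample(range(n), 2))
        child = [None] * n
        child[i:j + 1] = p1[i:j + 1]
        remaining = [g for g in p2 if g not in child]
        for k in range(n):
            if child[k] is None:
                child[k] = remaining.pop(0)
        return child

    tour = swap_mutation(list(range(8)))
    child = order_crossover([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])

Both operators return valid permutations, which is exactly the feasibility property that a naive binary encoding of tours would lose.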
Throughout this section we made the attempt to compare the constituents of evolutionary algorithms in terms of their canonical forms. But in practice the borders between these approaches are much more fluid. We can observe a steady evolution in this field by modifying (mutating), (re)combining, and validating (evaluating) the current approaches, permanently improving the population of evolutionary algorithms.

V. APPLICATIONS

Practical application problems in fields as diverse as engineering, natural sciences, economics, and business (to mention only some of the most prominent representatives) often exhibit a number of characteristics that prevent the straightforward application of standard instances of evolutionary algorithms. Typical problems encountered when developing an evolutionary algorithm for a practical application include the following.

1) A suitable representation and corresponding operators need to be developed when the canonical representation is different from binary strings or real-valued vectors.
2) Various constraints need to be taken into account by means of a suitable method (ranging from penalty functions to repair algorithms, constraint-preserving operators, and decoders; see [169] for an overview, and the short sketch following this list).
3) Expert knowledge about the problem needs to be incorporated into the representation and the operators in order to guide the search process and increase its convergence velocity—without running into the trap, however, of being confused and misled by expert beliefs and habits which might not correspond with the best solutions.
4) An objective function needs to be developed, often in cooperation with experts from the particular application field.
5) The parameters of the evolutionary algorithm need to be set (or tuned) and the feasibility of the approach needs to be assessed by comparing the results to expert solutions (used so far) or, if applicable, solutions obtained by other algorithms.
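As an illustration of the first option mentioned in item 2), a penalty approach can be sketched in Python as follows (ours; the penalty weight is an arbitrary assumption, and the constraint convention $g_j(\vec{x}) \ge 0$ follows (3)):

    def penalized_objective(f, constraints, weight=1000.0):
        """Wrap objective f (minimization) so that violations of g_j(x) >= 0
        are punished proportionally to their magnitude."""
        def wrapped(x):
            violation = sum(max(0.0, -g(x)) for g in constraints)
            return f(x) + weight * violation
        return wrapped

    # Example: minimize the sum of squares subject to x_0 >= 1, i.e., g(x) = x_0 - 1 >= 0.
    f_pen = penalized_objective(lambda x: sum(v * v for v in x),
                                [lambda x: x[0] - 1.0])
    print(f_pen([0.0, 0.0]), f_pen([1.0, 0.0]))   # infeasible point is penalized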
Most of these topics require experience with evolutionary algorithms as well as cooperation between the application's expert and the evolutionary algorithm expert, and only few general results are available to guide the design of the algorithm (e.g., representation-independent recombination and mutation operators [170], [171], the requirement that small changes by mutation occur more frequently than large ones [48], [172], and a quantification of the selective pressure imposed by the most commonly used selection operators [149]). Nevertheless, evolutionary algorithms often yield excellent results when applied to complex optimization problems where other methods are either not applicable or turn out to be unsatisfactory (a variety of examples can be found in [80]). Important practical problem classes where evolutionary algorithms yield solutions of high quality include engineering design applications involving continuous parameters (e.g., for the design of aircraft [173], [174], structural mechanics problems based on two-dimensional shape representations [175], electromagnetic systems [176], and mobile manipulators [177], [178]), discrete parameters (e.g., for multiplierless digital filter optimization [179], the design of a linear collider [180], or nuclear reactor fuel arrangement optimization [181]), and mixed-integer representations (e.g., for the design of survivable networks [182] and optical multilayer systems [115]). Combinatorial optimization problems with a straightforward binary representation of solutions have also been treated successfully with canonical genetic algorithms and their derivatives (e.g., set partitioning and its application to airline crew scheduling [183], knapsack problems [184], [185], and others [186]). Relevant applications to combinatorial problems utilizing a permutation representation of solutions are also found in the domains of scheduling (e.g., production scheduling [187] and related problems [188]), routing (e.g., of vehicles [189] or telephone calls [190]), and packing (e.g., of pallets on a truck [191]).

The existing range of successful applications is extremely broad, thus by far preventing an exhaustive overview—the list of fields and example applications should be taken as a hint for further reading rather than a representative overview. Some of the most challenging applications with a large profit potential are found in the field of biochemical drug design, where evolutionary algorithms have gained remarkable interest

and success in the past few years as an optimization procedure to support protein engineering [134], [192]–[194]. Also, finance and business provide a promising field of profitable applications [195], but of course few details are published about this work (see, e.g., [196]). In fact, the relation between evolutionary algorithms and economics has found increasing interest in the past few years and is now widely seen as a promising modeling approach for agents acting in a complex, uncertain situation [197].

In concluding this section, we refer to the research field of computational intelligence (see Section VI for details) and the applications of evolutionary computation to the other main fields of computational intelligence, namely fuzzy logic and neural networks. An overview of the utilization of genetic algorithms to train and construct neural networks is given in [198], and of course other variants of evolutionary algorithms can also be used for this task (see, e.g., [199] for an evolutionary programming example, [200] for an evolution strategy example, and [97] and [201] for genetic algorithm examples). Similarly, both the rule base and membership functions of fuzzy systems can be optimized by evolutionary algorithms, typically yielding improvements of the performance of the fuzzy system (e.g., [202]–[206]). The interaction of computational intelligence techniques and hybridization with other methods such as expert systems and local optimization techniques certainly opens a new direction of research toward hybrid systems that exhibit problem solving capabilities approaching those of naturally intelligent systems in the future. Evolutionary algorithms, seen as a technique to evolve machine intelligence (see [5]), are one of the mandatory prerequisites for achieving this goal by means of algorithmic principles that are already working quite successfully in natural evolution [207].

VI. SUMMARY AND OUTLOOK

To summarize, the current state of evolutionary computation research can be characterized as in the following.

• The basic concepts have been developed more than 35 years ago, but it took almost two decades for their potential to be recognized by a larger audience.
• Application-oriented research in evolutionary computation is quite successful and almost dominates the field (if we consider the majority of papers). Only few potential application domains could be identified, if any, where evolutionary algorithms have not been tested so far. In many cases they have been used to produce good, if not superior, results.
• In contrast, the theoretical foundations are to some extent still weak. To put it more pithily: "We know that they work, but we do not know why." As a consequence, inexperienced users fall into the same traps repeatedly, since there are only few rules of thumb for the design and parameterization of evolutionary algorithms.
  A constructive approach for the synthesis of evolutionary algorithms, i.e., the choice or design of the representations, variation operators, and selection mechanisms, is needed. But first investigations pointing in the direction of design principles for representation-independent operators are encouraging [171], as is the work on complex nonstandard representations such as in the field of genetic programming.
• Likewise, the field still lacks a sound formal characterization of the application domain and the limits of evolutionary computation. This requires future efforts in the field of complexity theory.

There exists a strong relationship between evolutionary computation and some other techniques, e.g., fuzzy logic and neural networks, usually regarded as elements of artificial intelligence. Following Bezdek [208], their main common characteristic lies in their numerical knowledge representation, which differentiates them from traditional symbolic artificial intelligence. Bezdek suggested the term computational intelligence for this special branch of artificial intelligence with the following characteristics³:

1) numerical knowledge representation;
2) adaptability;
3) fault tolerance;
4) processing speed comparable to human cognition processes;
5) error rate optimality (e.g., with respect to a Bayesian estimate of the probability of a certain error on future data).

³ The term "computational intelligence" was originally coined by Cercone and McCalla [209].

We regard computational intelligence as one of the most innovative research directions in connection with evolutionary computation, since we may expect that efficient, robust, and easy-to-use solutions to complex real-world problems will be developed on the basis of these complementary techniques. In this field, we expect an impetus from the interdisciplinary cooperation, e.g., techniques for tightly coupling evolutionary and problem domain heuristics, more elaborate techniques for self-adaptation, as well as an important step toward machine intelligence.

Finally, it should be pointed out that we are far from using all potentially helpful features of evolution within evolutionary algorithms. Comparing natural evolution and the algorithms discussed here, we can immediately identify a list of important differences, which all might be exploited to obtain more robust search algorithms and a better understanding of natural evolution.

• Natural evolution works under dynamically changing environmental conditions, with nonstationary optima and even changing optimization criteria, and the individuals themselves are also changing the structure of the adaptive landscape during adaptation [210]. In evolutionary algorithms, environmental conditions are often static, but nonelitist variants are able to deal with changing environments. It is certainly worthwhile, however, to consider a more flexible life span concept for individuals in evolutionary algorithms than just the extremes of a maximum life span of one generation [as in a $(\mu, \lambda)$ strategy] and of an unlimited life span (as in an elitist strategy), by introducing an aging parameter $\kappa$ as in the $(\mu, \kappa, \lambda)$ strategy [54].


• The long-term goal of evolution consists of the maintenance of evolvability of a population [95], guaranteed by mutation, and a preservation of diversity within the population (the term meliorization describes this more appropriately than optimization or adaptation does). In contrast, evolutionary algorithms often aim at finding a precise solution and converging to this solution.
• In natural evolution, many criteria need to be met at the same time, while most evolutionary algorithms are designed for single fitness criteria (see [211] for an overview of the existing attempts to apply evolutionary algorithms to multiobjective optimization). The concepts of diploidy or polyploidy combined with dominance and recessivity [50], as well as the idea of introducing two sexes with different selection criteria, might be helpful for such problems [212], [213].
• Natural evolution assumes neither global knowledge (about all fitness values of all individuals) nor a generational synchronization, while many evolutionary algorithms still identify an iteration of the algorithm with one complete generation update. Fine-grained, asynchronously parallel variants of evolutionary algorithms, introducing local neighborhoods for recombination and selection and a time-space organization like in cellular automata [157]–[159], represent an attempt to overcome these restrictions.
• The co-evolution of species such as in predator-prey interactions implies that the adaptive landscape of individuals of one species changes as members of the other species make their adaptive moves [214]. Both the work on competitive fitness evaluation presented in [215] and the co-evolution of separate populations [216], [217] present successful approaches to incorporating the aspect of mutual interaction of different adaptive landscapes into evolutionary algorithms. As clarified by the work of Kauffman [214], however, we are just beginning to explore the dynamics of co-evolving systems and to exploit the principle for practical problem solving and evolutionary simulation.
• The genotype-phenotype mapping in nature, realized by the genetic code as well as the epigenetic apparatus (i.e., the biochemical processes facilitating the development and differentiation of an individual’s cells into organs and systems), has evolved over time, while the mapping is usually fixed in evolutionary algorithms (dynamic parameter encoding as presented in [218] being a notable exception). An evolutionary self-adaptation of the genotype-phenotype mapping might be an interesting way to make the search more flexible, starting with a coarse-grained, volume-oriented search and focusing on promising regions of the search space as the evolution proceeds.
• Other topics, such as multicellularity and the ontogeny of individuals, up to the development of their own brains (individual learning, such as accounted for by the Baldwin effect in evolution [219]), are usually not modeled in evolutionary algorithms. The self-adaptation of strategy parameters is just a first step in this direction, realizing the idea that each individual might have its own internal strategy to deal with its environment. This strategy might be more complex than the simple mutation parameters presently taken into account by evolution strategies and evolutionary programming.
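As a sketch of the kind of individual-level strategy mentioned in the last item, the following Python fragment illustrates log-normal self-adaptation of a single mutation step size in a (1, λ)-style loop: the strategy parameter sigma is stored with the individual, is itself mutated before the object variables, and is kept only if the resulting offspring survives selection. The toy objective, the learning-rate rule of thumb, and all identifiers are illustrative assumptions rather than a prescription taken from the references above.

```python
import math
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def mutate(genome, sigma, tau):
    """Log-normal self-adaptation of one step size (illustrative sketch).

    The step size sigma travels with the individual: it is perturbed
    first, and the object variables are then mutated with the new value,
    so selection implicitly judges the strategy parameter as well.
    """
    new_sigma = sigma * math.exp(tau * random.gauss(0.0, 1.0))
    new_genome = [x + new_sigma * random.gauss(0.0, 1.0) for x in genome]
    return new_genome, new_sigma

random.seed(2)
n = 10
tau = 1.0 / math.sqrt(n)            # common rule-of-thumb learning rate
genome = [random.uniform(-5, 5) for _ in range(n)]
sigma = 1.0
for generation in range(200):       # (1, 10)-style loop on a toy problem
    offspring = [mutate(genome, sigma, tau) for _ in range(10)]
    genome, sigma = min(offspring, key=lambda pair: sphere(pair[0]))
```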
With all this in mind, we are convinced that we are just beginning to understand and to exploit the full potential of evolutionary computation. Concerning basic research as well as practical applications to challenging industrial problems, evolutionary algorithms offer a wide range of promising further investigations, and it will be exciting to observe the future development of the field.

ACKNOWLEDGMENT
The authors would like to thank D. B. Fogel and three anonymous reviewers for their very valuable and detailed comments that helped them improve the paper. They also appreciate the informal comments of another anonymous reviewer, and the efforts of the anonymous associate editor responsible for handling the paper submission and review procedure. The first author would also like to thank C. Müller for her patience.

REFERENCES
[1] H. J. Bremermann, “Optimization through evolution and recombination,” in Self-Organizing Systems, M. C. Yovits et al., Eds. Washington, DC: Spartan, 1962.
[2] R. M. Friedberg, “A learning machine: Part I,” IBM J., vol. 2, no. 1, pp. 2–13, Jan. 1958.
[3] R. M. Friedberg, B. Dunham, and J. H. North, “A learning machine: Part II,” IBM J., vol. 3, no. 7, pp. 282–287, July 1959.
[4] G. E. P. Box, “Evolutionary operation: A method for increasing industrial productivity,” Appl. Statistics, vol. VI, no. 2, pp. 81–101, 1957.
[5] D. B. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. Piscataway, NJ: IEEE Press, 1995.
[6] J. H. Holland, “Outline for a logical theory of adaptive systems,” J. Assoc. Comput. Mach., vol. 3, pp. 297–314, 1962.
[7] I. Rechenberg, “Cybernetic solution path of an experimental problem,” Royal Aircraft Establishment, Library translation No. 1122, Farnborough, Hants., U.K., Aug. 1965.
[8] H.-P. Schwefel, “Projekt MHD-Staustrahlrohr: Experimentelle Optimierung einer Zweiphasendüse, Teil I,” Technischer Bericht 11.034/68, 35, AEG Forschungsinstitut, Berlin, Germany, Oct. 1968.
[9] L. J. Fogel, “Autonomous automata,” Ind. Res., vol. 4, pp. 14–19, 1962.
[10] J. T. Alander, “Indexed bibliography of genetic algorithms papers of 1996,” University of Vaasa, Department of Information Technology and Production Economics, Rep. 94-1-96, 1995 (ftp.uwasa.fi, cs/report94-1, ga96bib.ps.Z).
[11] J. H. Holland, Adaptation in Natural and Artificial Systems. Ann Arbor, MI: Univ. of Michigan Press, 1975.
[12] J. H. Holland and J. S. Reitman, “Cognitive systems based on adaptive algorithms,” in Pattern-Directed Inference Systems, D. A. Waterman and F. Hayes-Roth, Eds. New York: Academic, 1978.
[13] K. A. De Jong, “An analysis of the behavior of a class of genetic adaptive systems,” Ph.D. dissertation, Univ. of Michigan, Ann Arbor, 1975, Diss. Abstr. Int. 36(10), 5140B, University Microfilms no. 76-9381.
[14] ——, “On using genetic algorithms to search program spaces,” in Proc. 2nd Int. Conf. on Genetic Algorithms and Their Applications. Hillsdale, NJ: Lawrence Erlbaum, 1987, pp. 210–216.
[15] ——, “Are genetic algorithms function optimizers?” in Parallel Problem Solving from Nature 2. Amsterdam, The Netherlands: Elsevier, 1992, pp. 3–13.
[16] ——, “Genetic algorithms are NOT function optimizers,” in Foundations of Genetic Algorithms 2. San Mateo, CA: Morgan Kaufmann, 1993, pp. 5–17.
[17] D. E. Goldberg, “Genetic algorithms and rule learning in dynamic system control,” in Proc. 1st Int. Conf. on Genetic Algorithms and Their Applications. Hillsdale, NJ: Lawrence Erlbaum, 1985, pp. 8–15.


[18] , Genetic Algorithms in Search, Optimization and Machine Learn- [47] H.-P. Schwefel, Evolutionsstrategie und numerische Optimierung Dis-
ing. Reading, MA: Addison-Wesley, 1989. sertation, Technische Universität Berlin, Germany, May 1975.
[19] , “The theory of virtual alphabets,” in Parallel Problem Solving [48] , Evolution and Optimum Seeking. New York: Wiley, 1995
from Nature—Proc. 1st Workshop PPSN I. (Lecture Notes in Computer (Sixth-Generation Computer Technology Series).
Science, vol. 496). Berlin, Germany: Springer, 1991, pp. 13–22. [49] M. Herdy, “Reproductive isolation as strategy parameter in hierarchi-
[20] D. E. Goldberg, K. Deb, and J. H. Clark, “Genetic algorithms, noise, and cally organized evolution strategies,” in Parallel Problem Solving from
the sizing of populations,” Complex Syst., vol. 6, pp. 333–362, 1992. Nature 2. Amsterdam, The Netherlands: Elsevier, 1992, pp. 207–217.
[21] D. E. Goldberg, K. Deb, H. Kargupta, and G. Harik, “Rapid, accurate [50] F. Kursawe, “A variant of Evolution Strategies for vector optimization,”
optimization of difficult problems using fast messy genetic algorithms,” in Parallel Problem Solving from Nature—Proc. 1st Workshop PPSN
in Proc. 5th Int. Conf. on Genetic Algorithms. San Mateo, CA: Morgan I (Lecture Notes in Computer Science, vol. 496). Berlin, Germany:
Kaufmann, 1993, pp. 56–64. Springer, 1991, pp. 193–197.
[22] L. Davis, Ed., Handbook of Genetic Algorithms. New York: Van [51] A. Ostermeier, “An evolution strategy with momentum adaptation of the
Nostrand Reinhold, 1991. random number distribution,” in Parallel Problem Solving from Nature
[23] L. J. Eshelman and J. D. Schaffer, “Crossover’s niche,” in Proc. 5th 2. Amsterdam, The Netherlands: Elsevier, 1992, pp. 197–206.
Int. Conf. on Genetic Algorithms. San Mateo, CA: Morgan Kaufmann, [52] A. Ostermeier, A. Gawelczyk, and N. Hansen, “Step-size adaptation
1993, pp. 9–14. based on nonlocal use of selection information,” in Parallel Problem
[24] , “Productive recombination and propagating and preserving Solving from Nature—PPSN III, Int. Conf. on Evolutionary Computation.
schemata,” in Foundations of Genetic Algorithms 3. San Francisco, (Lecture Notes in Computer Science, vol. 866). Berlin, Germany:
CA: Morgan Kaufmann, 1995, pp. 299–313. Springer, 1994, pp. 189–198.
[25] S. Forrest and M. Mitchell, “What makes a problem hard for a genetic [53] G. Rudolph, “Global optimization by means of distributed evolution
algorithm? Some anomalous results and their explanation,” Mach. strategies,” in Parallel Problem Solving from Nature—Proc. 1st Work-
Learn., vol. 13, pp. 285–319, 1993. shop PPSN I (Lecture Notes in Computer Science, vol. 496). Berlin,
[26] J. J. Grefenstette, “Optimization of control parameters for genetic Germany: Springer, 1991, pp. 209–213.
algorithms,” IEEE Trans. Syst., Man Cybern., vol. SMC-16, no. 1, pp. [54] H.-P. Schwefel and G. Rudolph, “Contemporary evolution strategies,”
122–128, 1986. in Advances in Artificial Life. 3rd Int. Conf. on Artificial Life (Lecture
[27] , “Incorporating problem specific knowledge into genetic algo- Notes in Artificial Intelligence, vol. 929), F. Morán, A. Moreno, J. J.
rithms,” in Genetic Algorithms and Simulated Annealing, L. Davis, Ed. Merelo, and P. Chacón, Eds. Berlin, Germany: Springer, 1995, pp.
San Mateo, CA: Morgan Kaufmann, 1987, pp. 42–60. 893–907.
[28] , “Conditions for implicit parallelism,” in Foundations of Genetic [55] J. Klockgether and H.-P. Schwefel, “Two-phase nozzle and hollow
Algorithms. San Mateo, CA: Morgan Kaufmann, 1991, pp. 252–261. core jet experiments,” in Proc. 11th Symp. Engineering Aspects of
[29] , “Deception considered harmful,” in Foundations of Genetic Magnetohydrodynamics, D. G. Elliott, Ed. Pasadena, CA: California
Algorithms 2. San Mateo, CA: Morgan Kaufmann, 1993, pp. 75–91. Institute of Technology, Mar. 24–26, 1970, pp. 141–148.
[30] J. R. Koza, “Hierarchical genetic algorithms operating on populations [56] J. J. Grefenstette, Ed., Proc. 1st Int. Conf. on Genetic Algorithms and
of computer programs,” in Proc. 11th Int. Joint Conf. on Artificial Their Applications. Hillsdale, NJ: Lawrence Erlbaum, 1985.
Intelligence, N. S. Sridharan, Ed. San Mateo, CA: Morgan Kaufmann, [57] , Proc. 2nd Int. Conf. on Genetic Algorithms and Their Applica-
1989, pp. 768–774. tions. Hillsdale, NJ: Lawrence Erlbaum, 1987.
[31] , Genetic Programming: On the Programming of Computers by [58] J. D. Schaffer, Ed., Proc. 3rd Int. Conf. on Genetic Algorithms. San
Means of Natural Selection. Cambridge, MA: MIT Press, 1992. Mateo, CA: Morgan Kaufmann, 1989.
[32] M. Mitchell, An Introduction to Genetic Algorithms. Cambridge, MA: [59] R. K. Belew and L. B. Booker, Eds., Proc. 4th Int. Conf. on Genetic
MIT Press, 1996. Algorithms. San Mateo, CA, Morgan Kaufmann, 1991.
[33] R. L. Riolo, “The emergence of coupled sequences of classifiers,” in [60] S. Forrest, Ed., Proc. 5th Int. Conf. on Genetic Algorithms. San Mateo,
Proc. 3rd Int. Conf. on Genetic Algorithms. San Mateo, CA: Morgan CA: Morgan Kaufmann, 1993.
[61] L. Eshelman, Ed., Genetic Algorithms: Proc. 6th Int. Conf. San Fran-
Kaufmann, 1989, pp. 256–264.
[34] , “The emergence of default hierarchies in learning classifier cisco, CA: Morgan Kaufmann, 1995.
[62] D. B. Fogel and W. Atmar, Eds., Proc 1st Annu. Conf. on Evolutionary
systems,” in Proc. 3rd Int. Conf. on Genetic Algorithms. San Mateo,
Programming. San Diego, CA: Evolutionary Programming Society,
CA: Morgan Kaufmann, 1989, pp. 322–327.
1992.
[35] J. D. Schaffer, “Multiple objective optimization with vector evaluated
[63] , Proc. 2nd Annu. Conf. on Evolutionary Programming. San
genetic algorithms,” in Proc. 1st Int. Conf. on Genetic Algorithms
Diego, CA: Evolutionary Programming Society, 1993.
and Their Applications. Hillsdale, NJ: Lawrence Erlbaum, 1985, pp. [64] A. V. Sebald and L. J. Fogel, Eds., Proc. 3rd Annual Conf. on Evolu-
93–100. tionary Programming. Singapore: World Scientific, 1994.
[36] J. D. Schaffer and L. J. Eshelman, “On crossover as an evolutionary [65] J. R. McDonnell, R. G. Reynolds, and D. B. Fogel, Eds., Proc. 4th Annu.
viable strategy,” in Proc. 4th Int. Conf. on Genetic Algorithms. San Conf. on Evolutionary Programming. Cambridge, MA: MIT Press,
Mateo, CA: Morgan Kaufmann, 1991, pp. 61–68. 1995.
[37] J. D. Schaffer and A. Morishima, “An adaptive crossover distribution [66] L. J. Fogel, P. J. Angeline, and T. Bäck, Eds., Proc. 5th Annu. Conf. on
mechanism for genetic algorithms,” in Proc. 2nd Int. Conf. on Genetic Evolutionary Programming. Cambridge, MA: The MIT Press, 1996.
Algorithms and Their Applications. Hillsdale, NJ: Lawrence Erlbaum, [67] G. J. E. Rawlins, Ed., Foundations of Genetic Algorithms. San Mateo,
1987, pp. 36–40. CA: Morgan Kaufmann, 1991.
[38] L. J. Fogel, “On the organization of intellect,” Ph.D. dissertation, [68] L. D. Whitley, Ed., Foundations of Genetic Algorithms 2. San Mateo,
University of California, Los Angeles, 1964. CA: Morgan Kaufmann, 1993.
[39] G. H. Burgin, “On playing two-person zero-sum games against nonmin- [69] M. D. Vose and L. D. Whitley, Ed., Foundations of Genetic Algorithms
imax players,” IEEE Trans. Syst. Sci. Cybern., vol. SSC-5, no. 4, pp. 3. San Francisco, CA: Morgan Kaufmann, 1995.
369–370, Oct. 1969. [70] J. R. Koza, D. E. Goldberg, D. B. Fogel, and R. L. Riolo, Eds., Genetic
[40] , “Systems identification by quasilinearization and evolutionary Programming 1996. Proc. 1st Annu. Conf. Cambridge, MA: MIT Press,
programming,” J. Cybern., vol. 3, no. 2, pp. 56–75, 1973. 1996.
[41] J. W. Atmar, “Speculation on the evolution of intelligence and its [71] H.-P. Schwefel and R. Männer, Eds., Parallel Problem Solving from
possible realization in machine form,” Ph.D. dissertation, New Mexico Nature—Proc. 1st Workshop PPSN I. Berlin, Germany: Springer, 1991,
State Univ., Las Cruces, 1976. vol. 496 of Lecture Notes in Computer Science.
[42] L. J. Fogel, A. J. Owens, and M. J. Walsh, Artificial Intelligence Through [72] R. Männer and B. Manderick, Eds., Parallel Problem Solving from
Simulated Evolution. New York: Wiley, 1966. Nature 2. Amsterdam, The Netherlands: Elsevier, 1992.
[43] D. B. Fogel, “An evolutionary approach to the traveling salesman [73] Y. Davidor, H.-P. Schwefel, and R. Männer, Eds., Parallel Problem
problem,” Biological Cybern., vol. 60, pp. 139–144, 1988. Solving from Nature—PPSN III, Int. Conf. on Evolutionary Computation.
[44] , “Evolving artificial intelligence,” Ph.D. dissertation, Univ. of (Lecture Notes in Computer Science, vol. 866) Berlin: Springer, 1994.
California, San Diego, 1992. [74] H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, Eds., Par-
[45] I. Rechenberg, Evolutionsstrategie: Optimierung technischer Systeme allel Problem Solving from Nature IV. Proc. Int. Conf. on Evolutionary
nach Prinzipien der biologischen Evolution. Stuttgart, Germany: Computation. Berlin, Germany: Springer, 1996, vol. 1141 of Lecture
Frommann-Holzboog, 1973. Notes in Computer Science.
[46] , Evolutionsstrategie ’94, in Werkstatt Bionik und Evolutionstech- [75] Proc. 1st IEEE Conf. on Evolutionary Computation, Orlando, FL. Pis-
nik. Stuttgart, Germany: Frommann-Holzboog, 1994, vol. 1. cataway, NJ: IEEE Press, 1994.


[76] Proc. 2nd IEEE Conf. on Evolutionary Computation, Perth, Australia. [102] L. J. Eshelman and J. D. Schaffer, “Real-coded genetic algorithms
Piscataway, NJ: IEEE Press, 1995. and interval-schemata,” in Foundations of Genetic Algorithms 2. San
[77] Proc. 3rd IEEE Conf. on Evolutionary Computation, Nagoya, Japan. Mateo, CA: Morgan Kaufmann, 1993, pp. 187–202.
Piscataway, NJ: IEEE Press, 1996. [103] C. Z. Janikow and Z. Michalewicz, “An experimental comparison of
[78] Proc. 4th IEEE Conf. on Evolutionary Computation, Indianapolis, IN. binary and floating point representations in genetic algorithms,” in
Piscataway, NJ: IEEE Press, 1997. Proc. 4th Int. Conf. on Genetic Algorithms. San Mateo, CA, Morgan
[79] T. Bäck, Evolutionary Algorithms in Theory and Practice. New York: Kaufmann, 1991, pp. 31–36.
Oxford Univ. Press, 1996. [104] N. J. Radcliffe, “Equivalence class analysis of genetic algorithms,”
[80] T. Bäck, D. B. Fogel, and Z. Michalewicz, Eds., Handbook of Evolu- Complex Systems, vol. 5, no. 2, pp. 183–206, 1991.
tionary Computation. New York: Oxford Univ. Press and Institute of [105] A. H. Wright, “Genetic algorithms for real parameter optimization,” in
Physics, 1997. Foundations of Genetic Algorithms. San Mateo, CA: Morgan Kauf-
[81] K. E. Kinnear, Ed., Advances in Genetic Programming. Cambridge, mann, 1991, pp. 205–218.
MA: MIT Press, 1994. [106] A. Nix and M. D. Vose, “Modeling genetic algorithms with markov
[82] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution chains,” Ann. Math. Artif. Intell., vol. 5, pp. 79–88, 1992.
Programs. Berlin, Germany: Springer, 1996. [107] M. D. Vose, “Modeling simple genetic algorithms,” in Foundations of
[83] A. Törn and A. Žilinskas, Global Optimization (Lecture Notes in Genetic Algorithms 2. San Mateo, CA: Morgan Kaufmann, 1993, pp.
Computer Science, vol. 350). Berlin: Springer, 1989. 63–73.
[84] T. Bäck, U. Hammel, M. Schütz, H.-P. Schwefel, and J. Sprave, [108] M. D. Vose and A. H. Wright, “Simple genetic algorithms with linear
“Applications of evolutionary algorithms at the center for applied fitness,” Evolutionary Computation, vol. 2, no. 4, pp. 347–368, 1994.
systems analysis,” in Computational Methods in Applied Sciences’96, [109] M. D. Vose, “Modeling simple genetic algorithms,” Evolutionary Com-
J.-A. Désidéri, C. Hirsch, P. Le Tallec, E. O~nate, M. Pandolfi, J. Périaux, putation, vol. 3, no. 4, pp. 453–472, 1995.
and E. Stein, Eds. Chichester, UK: Wiley, 1996, pp. 243–250. [110] G. Rudolph, “Convergence analysis of canonical genetic algorithms,”
[85] H.-P. Schwefel, “Direct search for optimal parameters within simulation IEEE Trans. Neural Networks, Special Issue on Evolutionary Computa-
models,” in Proc. 12th Annu. Simulation Symp., Tampa, FL, Mar. 1979, tion, vol. 5, no. 1, pp. 96–101, 1994.
pp. 91–102. [111] J. Suzuki, “A Markov chain analysis on simple genetic algorithms,”
[86] Z. Michalewicz, “A hierarchy of evolution programs: An experimental IEEE Trans. Syst., Man, Cybern., vol. 25, no. 4, pp. 655–659, Apr. 1995.
study,” Evolutionary Computation, vol. 1, no. 1, pp. 51–76, 1993. [112] K. A. De Jong, W. M. Spears, and D. F. Gordon, “Using Markov chains
[87] W. Atmar, “Notes on the simulation of evolution,” IEEE Trans. Neural to analyze GAFO’s,” in Foundations of Genetic Algorithms 3. San
Networks, vol. 5, no. 1, pp. 130–148, 1994. Francisco, CA: Morgan Kaufmann, 1995, pp. 115–137.
[88] L. D. Whitley, “The GENITOR algorithm and selection pressure: Why [113] L. D. Whitley, “An executable model of a simple genetic algorithm,”
rank-based allocation of reproductive trials is best,” in Proc. 3rd Int. in Foundations of Genetic Algorithms 3. San Francisco, CA: Morgan
Conf. on Genetic Algorithms. San Mateo, CA: Morgan Kaufmann, Kaufmann, 1995, pp. 45–62.
1989, pp. 116–121. [114] G. Rudolph, “An evolutionary algorithm for integer programming,”
[89] L. D. Whitley and J. Kauth, “GENITOR: A different genetic algorithm,” in Parallel Problem Solving from Nature—PPSN III, Int. Conf. on
in Proc. Rocky Mountain Conf. Artificial Intel., Denver, CO, 1988, pp. Evolutionary Computation (Lecture Notes in Computer Science, vol.
118–130. 866). Berlin, Germany: Springer, 1994, pp. 139–148.
[90] K. A. De Jong and J. Sarma, “Generation gaps revisited,” in Foundations [115] M. Schütz and J. Sprave, “Application of parallel mixed-integer evolu-
of Genetic Algorithms 2. San Mateo, CA: Morgan Kaufmann, 1993, tion strategies with mutation rate pooling,” in Proc. 5th Annu. Conf. on
pp. 19–28. Evolutionary Programming. Cambridge, MA: MIT Press, 1996, pp.
[91] D. J. Powell, M. M. Skolnick, and S. S. Tong, “Interdigitation: A 345–354.
hybrid technique for engineering design optimization employing genetic [116] B. Groß, U. Hammel, A. Meyer, P. Maldaner, P. Roosen, and M. Schütz,
algorithms, expert systems, and numerical optimization,” in Handbook “Optimization of heat exchanger networks by means of evolution
of Genetic Algorithms. New York: Van Nostrand Reinhold, 1991, ch. strategies,” in Parallel Problem Solving from Nature IV. Proc. Int. Conf.
20, pp. 312–321. on Evolutionary Computation. (Lecture Notes in Computer Science, vol.
[92] J.-M. Renders and S. P. Flasse, “Hybrid methods using genetic algo- 1141). Berlin: Springer, 1996, pp. 1002–1011.
rithms for global optimization,” IEEE Trans. Syst., Man, Cybern. B, vol. [117] R. Lohmann, “Structure evolution in neural systems,” in Dynamic,
26, no. 2, pp. 243–258, 1996. Genetic, and Chaotic Programming, B. Soucek and the IRIS Group,
[93] K. A. De Jong, “Evolutionary computation: Recent developments and Eds. New York: Wiley, 1992, pp. 395–411.
open issues,” in 1st Int. Conf. on Evolutionary Computation and Its [118] J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das, “A study of
Applications, E. D. Goodman, B. Punch, and V. Uskov, Eds. Moskau: control parameters affecting online performance of genetic algorithms
Presidium of the Russian Academy of Science, 1996, pp. 7–17. for function optimization,” in Proc. 3rd Int. Conf. on Genetic Algorithms.
[94] M. Mitchell and S. Forrest, “Genetic algorithms and artificial life,” San Mateo, CA: Morgan Kaufmann, 1989, pp. 51–60.
Artificial Life, vol. 1, no. 3, pp. 267–289, 1995. [119] H. J. Bremermann, M. Rogson, and S. Salaff, “Global properties of
[95] L. Altenberg, “The evolution of evolvability in genetic programming,” evolution processes,” in Natural Automata and Useful Simulations, H. H.
in Advances in Genetic Programming. Cambridge, MA: MIT Press, Pattec, E. A. Edelsack, L. Fein, and A. B. Callahan, Eds. Washington,
1994, pp. 47–74. DC: Spartan, 1966, ch. 1, pp. 3–41.
[96] R. Keller and W. Banzhaf, “Genetic programming using genotype- [120] H. Mühlenbein, “How genetic algorithms really work: I. Mutation and
phenotype mapping from linear genomes into linear phenotypes,” in hillclimbing,” in Parallel Problem Solving from Nature 2. Amsterdam:
Genetic Programming 1996: Proc. 1st Annu. Conf., J. R. Koza, D. E. Elsevier, 1992, pp. 15–25.
Goldberg, D. B. Fogel, and R. L. Riolo, Eds., 1996. [121] T. C. Fogarty, “Varying the probability of mutation in the genetic
[97] F. Gruau, “Genetic synthesis of modular neural networks,” in Proc. 5th algorithm,” in Proc. 3rd Int. Conf. on Genetic Algorithms. San Mateo,
Int. Conf. on Genetic Algorithms. San Mateo, CA: Morgan Kaufmann, CA: Morgan Kaufmann, 1989, pp. 104–109.
1993, pp. 318–325. [122] T. Bäck and M. Schütz, “Intelligent mutation rate control in canonical
[98] M. Mandischer, “Representation and evolution of neural networks,” in genetic algorithms,” in Foundations of Intelligent Systems, 9th Int.
Artificial Neural Nets and Genetic Algorithms, R. F. Albrecht, C. R. Symp., ISMIS’96 (Lecture Notes in Artificial Intelligence, vol. 1079), Z.
Reeves, and N. C. Steele, Eds. Wien, Germany: Springer, 1993, pp. W. Ras and M. Michalewicz, Eds. Berlin, Germany: Springer, 1996,
643–649. pp. 158–167.
[99] H. J. Antonisse, “A new interpretation of schema notation that [123] J. Smith and T. C. Fogarty, “Self adaptation of mutation rates in a
overturns the binary encoding constraint,” in Proc. 3rd Int. Conf. steady state genetic algorithm,” in Proc. 3rd IEEE Conf. on Evolutionary
on Genetic Algorithms. San Mateo, CA: Morgan Kaufmann, 1989, Computation. Piscataway, NJ: IEEE Press, 1996, pp. 318–323.
pp. 86–91. [124] M. Yanagiya, “A simple mutation-dependent genetic algorithm,” in
[100] U.-M. O’Reilly and F. Oppacher, “The troubling aspects of a building Proc. 5th Int. Conf. on Genetic Algorithms. San Mateo, CA: Morgan
block hypothesis for genetic programming,” in Foundations of Ge- Kaufmann, 1993, p. 659.
netic Algorithms 3. San Francisco, CA: Morgan Kaufmann, 1995, pp. [125] H.-P. Schwefel, Numerical Optimization of Computer Models. Chich-
73–88. ester: Wiley, 1981.
[101] T. Bäck, “Optimal mutation rates in genetic search,” in Proc. 5th Int. [126] , Numerische Optimierung von Computer-Modellen mittels der
Conf. on Genetic Algorithms, S. Forrest, Ed. San Mateo, CA: Morgan Evolutionsstrategie, vol. 26 of Interdisciplinary Systems Research.
Kaufmann, 1993, pp. 2–8. Basel, Germany: Birkhäuser, 1977.


[127] H.-G. Beyer, “Toward a theory of evolution strategies: Self-adaptation,” J. Stender, Ed. Amsterdam, The Netherlands: IOS, 1993, pp. 5–42.
Evolutionary Computation, vol. 3, no. 3, pp. 311–348, 1995. [151] F. Hoffmeister, “Scalable parallelism by evolutionary algorithms,” in
[128] G. Rudolph, “On correlated mutations in evolution strategies,” in Parallel Computing and Mathematical Optimization, (Lecture Notes in
Parallel Problem Solving from Nature 2. Amsterdam, The Netherlands: Economics and Mathematical Systems, vol. 367), M. Grauer and D. B.
Elsevier, 1992, pp. 105–114. Pressmar, Eds. Berlin, Germany: Springer, 1991, pp. 177–198.
[129] N. Saravanan and D. B. Fogel, “Evolving neurocontrollers using evo- [152] M. Munetomo, Y. Takai, and Y. Sato, “An efficient migration scheme
lutionary programming,” in Proc. 1st IEEE Conf. on Evolutionary for subpopulation-based asynchronously parallel genetic algorithms,” in
Computation. Piscataway, NJ: IEEE Press, 1994, vol. 1, pp. 217–222. Proc. 5th Int. Conf. on Genetic Algorithms. San Mateo, CA: Morgan
[130] , “Learning of strategy parameters in evolutionary programming: Kaufmann, 1993, p. 649.
An empirical study,” in Proc. 3rd Annu. Conf. on Evolutionary Program- [153] S. Baluja, “Structure and performance of fine-grain parallelism in genetic
ming. Singapore: World Scientific, 1994, pp. 269–280. search,” in Proc. 5th Int. Conf. on Genetic Algorithms. San Mateo, CA:
[131] N. Saravanan, D. B. Fogel, and K. M. Nelson, “A comparison of Morgan Kaufmann, 1993, pp. 155–162.
methods for self-adaptation in evolutionary algorithms,” BioSystems, [154] R. J. Collins and D. R. Jefferson, “Selection in massively parallel genetic
vol. 36, pp. 157–166, 1995. algorithms,” in Proc. 4th Int. Conf. on Genetic Algorithms. San Mateo,
[132] D. B. Fogel, L. J. Fogel, and W. Atmar, “Meta-evolutionary program- CA, Morgan Kaufmann, 1991, pp. 249–256.
ming,” in Proc. 25th Asilomar Conf. Sig., Sys. Comp., R. R. Chen, Ed. [155] M. Gorges-Schleuter, “ASPARAGOS: An asynchronous parallel genetic
Pacific Grove, CA, 1991, pp. 540–545. optimization strategy,” in Proc. 3rd Int. Conf. on Genetic Algorithms.
[133] P. J. Angeline, “The effects of noise on self-adaptive evolutionary San Mateo, CA: Morgan Kaufmann, 1989, pp. 422–427.
optimization,” in Proc. 5th Annu. Conf. on Evolutionary Programming. [156] P. Spiessens and B. Manderick, “Fine-grained parallel genetic algo-
Cambridge, MA: MIT Press, 1996, pp. 433–440. rithms,” in Proc. 3rd Int. Conf. on Genetic Algorithms. San Mateo,
[134] D. K. Gehlhaar and D. B. Fogel, “Tuning evolutionary programming for CA: Morgan Kaufmann, 1989, pp. 428–433.
conformationally flexible molecular docking,” in Proc. 5th Annu. Conf. [157] V. S. Gordon, K. Mathias, and L. D. Whitley, “Cellular genetic
on Evolutionary Programming. Cambridge, MA: MIT Press, 1996, pp. algorithms as function optimizers: Locality effects,” in Proc. 1994 ACM
419–429. Symp. on Applied Computing, E. Deaton, D. Oppenheim, J. Urban, and
[135] T. Bäck and H.-P. Schwefel, “Evolutionary computation: An overview,” H. Berghel, Eds. New York: ACM, 1994, pp. 237–241.
in Proc. 3rd IEEE Conf. on Evolutionary Computation. Piscataway, [158] G. Rudolph and J. Sprave, “A cellular genetic algorithm with self-
NJ: IEEE Press, 1996, pp. 20–29. adjusting acceptance threshold,” in Proc. 1st IEE/IEEE Int. Conf. Genetic
[136] W. M. Spears, “Adapting crossover in evolutionary algorithms,” in Proc. Algorithms in Eng. Sys.: Innovations and Appl. London: IEE, 1995,
4th Annu. Conf. on Evolutionary Programming. Cambridge, MA: MIT pp. 365–372.
Press, 1995, pp. 367–384. [159] L. D. Whitley, “Cellular genetic algorithms,” in Proc. 5th Int. Conf. on
[137] T. Bäck, “Self-Adaptation in Genetic Algorithms,” in Proceedings of the Genetic Algorithms. San Mateo, CA: Morgan Kaufmann, 1993, p. 658.
1st European Conference on Artificial Life, F. J. Varela and P. Bourgine, [160] M. Gorges-Schleuter, “Comparison of local mating strategies in mas-
Eds. Cambridge, MA: MIT Press, 1992, pp. 263–271. sively parallel genetic algorithms,” in Parallel Problem Solving from
[138] L. J. Eshelman, R. A. Caruna, and J. D. Schaffer, “Biases in the Nature 2. Amsterdam: Elsevier, 1992, pp. 553–562.
crossover landscape,” in Proc. 3rd Int. Conf. on Genetic Algorithms. [161] J. H. Holland, K. J. Holyoak, R. E. Nisbett, and P. R. Thagard, Induction:
San Mateo, CA: Morgan Kaufmann, 1989, pp. 10–19. Processes of Inference, Learning, and Discovery. Cambridge, MA:
[139] G. Syswerda, “Uniform crossover in genetic algorithms,” in Proc. 3rd MIT Press, 1986.
Int. Conf. on Genetic Algorithms. San Mateo, CA: Morgan Kaufmann, [162] R. Serra and G. Zanarini, Complex Systems and Cognitive Processes.
1989, pp. 2–9. Berlin: Springer, 1990.
[140] A. E. Eiben, P.-E. Raué, and Zs. Ruttkay, “Genetic algorithms [163] M. L. Cramer, “A representation for the adaptive generation of simple
with multi-parent recombination,” in Parallel Problem Solving from sequential programs,” in Proc. 1st Int. Conf. on Genetic Algorithms
Nature—PPSN III, Int. Conf. on Evolutionary Computation. Berlin: and Their Applications. Hillsdale, NJ: Lawrence Erlbaum, 1985, pp.
Springer, 1994, vol. 866 of Lecture Notes in Computer Science, pp. 183–187.
78–87. [164] J. C. Bean, “Genetics and random keys for sequences and optimization,”
[141] A. E. Eiben, C. H. M. van Kemenade, and J. N. Kok, “Orgy in the Department of Industrial and Operations Engineering, The Univ. of
computer: Multi-parent reproduction in genetic algorithms,” in Advances Michigan, Ann Arbor, Tech. Rep. 92-43, 1993.
in Artificial Life. 3rd Int. Conf. on Artificial Life, F. Morán, A. Moreno, [165] D. A. Gordon and J. J. Grefenstette, “Explanations of empirically
J. J. Merelo, and P. Chacón, Eds. Berlin: Springer, 1995, vol. 929 of derived reactive plans,” in Proc. Seventh Int. Conf. on Machine Learning.
Lecture Notes in Artificial Intelligence, pp. 934–945. San Mateo, CA: Morgan Kaufmann, June 1990, pp. 198–203.
[142] H.-G. Beyer, “Toward a theory of evolution strategies: On the benefits [166] L. B. Booker, D. E. Goldberg, and J. H. Holland, “Classifier systems
of sex—the (=; )-theory,” Evolutionary Computation, vol. 3, no. 1, and genetic algorithms,” in Machine Learning: Paradigms and Methods,
pp. 81–111, 1995. J. G. Carbonell, Ed. Cambridge, MA: MIT Press/Elsevier, 1989, pp.
[143] Z. Michalewicz, G. Nazhiyath, and M. Michalewicz, “A note on use- 235–282.
fulness of geometrical crossover for numerical optimization problems,” [167] S. W. Wilson, “ZCS: A zeroth level classifier system,” Evolutionary
in Proc. 5th Annu. Conf. on Evolutionary Programming. Cambridge, Computation, vol. 2, no. 1, pp. 1–18, 1994.
MA: The MIT Press, 1996, pp. 305–312. [168] F. D. Francone, P. Nordin, and W. Banzhaf, “Benchmarking the
[144] F. Kursawe, “Toward self-adapting evolution strategies,” in Proc. 2nd generalization capabilities of a compiling genetic programming system
IEEE Conf. Evolutionary Computation, Perth, Australia. Piscataway, using sparse data sets,” in Genetic Programming 1996. Proc. 1st Annu.
NJ: IEEE Press, 1995, pp. 283–288. Conf. Cambridge, MA: MIT Press, 1996, pp. 72–80.
[145] D. B. Fogel, “On the philosophical differences between evolutionary [169] Z. Michalewicz and M. Schoenauer, “Evolutionary algorithms for con-
algorithms and genetic algorithms,” in Proc. 2nd Annu. Conf. on Evo- strained parameter optimization problems,” Evolutionary Computation,
lutionary Programming. San Diego, CA: Evolutionary Programming vol. 4, no. 1, pp. 1–32, 1996.
Society, 1993, pp. 23–29. [170] N. J. Radcliffe, “The algebra of genetic algorithms,” Ann. Math. Artif.
[146] J. E. Baker, “Adaptive selection methods for genetic algorithms,” in Intell., vol. 10, pp. 339–384, 1994.
Proc. 1st Int. Conf. on Genetic Algorithms and Their Applications. [171] P. D. Surry and N. J. Radcliffe, “Formal algorithms + formal represen-
Hillsdale, NJ: Lawrence Erlbaum, 1985, pp. 101–111. tations = search strategies,” in Parallel Problem Solving from Nature
[147] D. E. Goldberg, B. Korb, and K. Deb, “Messy genetic algorithms: IV. Proc. Int. Conf. on Evolutionary Computation. (Lecture Notes in
Motivation, analysis, and first results,” Complex Syst., vol. 3, no. 5, Computer Science, vol. 1141) Berlin, Germany: Springer, 1996, pp.
pp. 493–530, Oct. 1989. 366–375.
[148] T. Bäck, “Selective pressure in evolutionary algorithms: A characteriza- [172] N. J. Radcliffe and P. D. Surry, “Fitness variance of formae and
tion of selection mechanisms,” in Proc. 1st IEEE Conf. on Evolutionary performance prediction,” in Foundations of Genetic Algorithms 3. San
Computation. Piscataway, NJ: IEEE Press, 1994, pp. 57–62. Francisco, CA: Morgan Kaufmann, 1995, pp. 51–72.
[149] D. E. Goldberg and K. Deb, “A comparative analysis of selection [173] M. F. Bramlette and E. E. Bouchard, “Genetic algorithms in parametric
schemes used in genetic algorithms,” in Foundations of Genetic Algo- design of aircraft,” in Handbook of Genetic Algorithms. New York:
rithms. San Mateo, CA: Morgan Kaufmann, 1991, pp. 69–93. Van Nostrand Reinhold, 1991, ch. 10, pp. 109–123.
[150] M. Dorigo and V. Maniezzo, “Parallel genetic algorithms: Introduction [174] J. Périaux, M. Sefrioui, B. Stoufflet, B. Mantel, and E. Laporte, “Robust
and overview of current research,” in Parallel Genetic Algorithms: The- genetic algorithms for optimization problems in aerodynamic design,”
ory & Applications, Frontiers in Artificial Intelligence and Applications, in Genetic Algorithms in Engineering and Computer Science, G. Winter,


J. Périaux, M. Galán, and P. Cuesta, Eds. Chichester: Wiley, 1995, ch. [196] S. Goonatilake and P. Treleaven, Eds., Intelligent Systems for Finance
19, pp. 371–396. and Business. Chichester: Wiley, 1995.
[175] M. Schoenauer, “Shape representations for evolutionary optimization [197] P. G. Harrald, “Evolutionary algorithms and economic models: A view,”
and identification in structural mechanics,” in Genetic Algorithms in in Proc. 5th Annu. Conf. on Evolutionary Programming. Cambridge,
Engineering and Computer Science, G. Winter, J. Périaux, M. Galán, MA: MIT Press, 1996, pp. 3–7.
and P. Cuesta, Eds. Chichester: Wiley, 1995, ch. 22, pp. 443–463. [198] L. D. Whitley, “Genetic algorithms and neural networks,” in Genetic
[176] E. Michielssen and D. S. Weile, “Electromagnetic system design using Algorithms in Engineering and Computer Science, G. Winter, J. Périaux,
genetic algorithms,” in Genetic Algorithms in Engineering and Com- M. Galán, and P. Cuesta, Eds. Chichester, UK: Wiley, 1995, ch. 11,
puter Science, G. Winter, J. Périaux, M. Galán, and P. Cuesta, Eds. pp. 203–216.
Chichester: Wiley, 1995, ch. 18, pp. 345–369. [199] P. J. Angeline, G. M. Saunders, and J. B. Pollack, “An evolutionary
[177] B. Anderson, J. McDonnell, and W. Page, “Configuration optimization algorithm that constructs recurrent neural networks,” IEEE Trans. Neural
of mobile manipulators with equality constraints using evolutionary Networks, vol. 5, no. 1, pp. 54–65, 1994.
programming,” in Proc. 1st Annu. Conf. on Evolutionary Programming. [200] W. Wienholt, “Minimizing the system error in feedforward neural
San Diego, CA: Evolutionary Programming Society, 1992, pp. 71–79. networks with evolution strategy,” in Proc. Int. Conf. on Artificial Neural
[178] J. R. McDonnell, B. L. Anderson, W. C. Page, and F. G. Pin, “Mobile Networks, S. Gielen and B. Kappen, Eds. London: Springer, 1993, pp.
manipulator configuration optimization using evolutionary program- 490–493.
ming,” in Proc 1st Annu. Conf. on Evolutionary Programming. San [201] M. Mandischer, “Genetic optimization and representation of neural
Diego, CA: Evolutionary Programming Society, 1992, pp. 52–62. networks,” in Proc. 4th Australian Conf. on Neural Networks, P.
[179] J. D. Schaffer and L. J. Eshelman, “Designing multiplierless digital Leong and M. Jabri, Eds. Sidney Univ., Dept. Elect. Eng., 1993,
filters using genetic algorithms,” in Proc. 5th Int. Conf. on Genetic pp. 122–125.
Algorithms. San Mateo, CA: Morgan Kaufmann, 1993, pp. 439–444. [202] A. Homaifar and E. McCormick, “Full design of fuzzy controllers
[180] H.-G. Beyer, “Some aspects of the ‘evolution strategy’ for solving TSP- using genetic algorithms,” in Neural and Stochastic Methods in Image
like optimization problems appearing at the design studies of a 0.5 and Signal Processing, S.-S. Chen, Ed. The International Society for
TeV e+ e0 -linear collider,” in Parallel Problem Solving from Nature Optical Engineering, 1992, vol. SPIE-1766, pp. 393–404.
2. Amsterdam: Elsevier, 1992, pp. 361–370. [203] C. L. Karr, “Genetic algorithms for fuzzy controllers,” AI Expert, vol.
[181] T. Bäck, J. Heistermann, C. Kappler, and M. Zamparelli, “Evolutionary 6, no. 2, pp. 27–33, 1991.
algorithms support refueling of pressurized water reactors,” in Proc. [204] S. B. Haffner and A. V. Sebald, “Computer-aided design of fuzzy
3rd IEEE Conference on Evolutionary Computation. Piscataway, NJ: HVAC controllers using evolutionary programming,” in Proc. 2nd Annu.
IEEE Press, 1996, pp. 104–108. Conf. on Evolutionary Programming. San Diego, CA: Evolutionary
[182] L. Davis, D. Orvosh, A. Cox, and Y. Qiu, “A genetic algorithm Programming Society, 1993, , pp. 98–107.
for survivable network design,” in Proc. 5th Int. Conf. on Genetic [205] P. Thrift, “Fuzzy logic synthesis with genetic algorithms,” in Proc. 4th
Algorithms. San Mateo, CA: Morgan Kaufmann, 1993, pp. 408–415. Int. Conf. on Genetic Algorithms. San Mateo, CA, Morgan Kaufmann,
[183] D. M. Levine, “A genetic algorithm for the set partitioning problem,” in 1991, pp. 514–518.
Proc. 5th Int. Conf. on Genetic Algorithms. San Mateo, CA: Morgan [206] P. Wang and D. P. Kwok, “Optimal fuzzy PID control based on genetic
Kaufmann, 1993, pp. 481–487. algorithm,” in Proc. 1992 Int. Conf. on Industrial Electronics, Control,
[184] V. S. Gordon, A. P. W. Böhm, and L. D. Whitley, “A note on the and Instrumentation. Piscataway, NJ: IEEE Press, 1992, vol. 2, pp.
performance of genetic algorithms on zero-one knapsack problems,” in 977–981.
Proc. 1994 ACM Symp. Applied Computing, E. Deaton, D. Oppenheim, [207] D. C. Dennett, Darwin’s Dangerous Idea, New York: Touchstone,
J. Urban, and H. Berghel, Eds. New York: ACM, 1994, pp. 194–195. 1995.
[185] A. Olsen, “Penalty functions and the knapsack problem,” in Proc. [208] J. C. Bezdek, “What is computational intelligence?” in Computational
1st IEEE Conf. on Evolutionary Computation. Piscataway, NJ: IEEE Intelligence: Imitating Life, J. M. Zurada, R. J. Marks II, and Ch. J.
Press, 1994, pp. 554–558. Robinson, Eds. New York: IEEE Press, 1994, pp. 1–12.
[186] S. Khuri, T. Bäck, and J. Heitkötter, “An evolutionary approach to com- [209] N. Cercone and G. McCalla, “Ten years of computational intelligence,”
binatorial optimization problems,” in Proc. 22nd Annu. ACM Computer Computational Intelligence, vol. 10, no. 4, pp. i–vi, 1994.
Science Conf., D. Cizmar, Ed. New York: ACM, 1994, pp. 66–73. [210] J. Schull, “The view from the adaptive landscape,” in Parallel Problem
[187] R. Bruns, “Direct chromosome representation and advanced genetic Solving from Nature—Proc. 1st Workshop PPSN I, Berlin: Springer,
operators for production scheduling,” in Proc. 1st Annu. Conf. on Evo- 1991, vol. 496 of Lecture Notes in Computer Science, pp. 415–427.
lutionary Programming. San Diego, CA: Evolutionary Programming [211] C. M. Fonseca and P. J. Fleming, “An overview of evolutionary
Society, 1992, pp. 352–359. algorithms in multiobjective optimization,” Evolutionary Computation,
[188] H.-L. Fang, P. Ross, and D. Corne, “A promising genetic algorithm vol. 3, no. 1, pp. 1–16, 1995.
approach to job-shop scheduling, rescheduling, and open-shop schedul- [212] J. Lis and A. E. Eiben, “Multi-sexual genetic algorithm for multiobjec-
ing problems,” in Proc. 1st Annu. Conf. on Evolutionary Programming. tive optimization,” in Proc. 4th IEEE Conf. Evolutionary Computation,
San Diego, CA: Evolutionary Programming Society, 1992, pp. 375–382. Indianapolis, IN Piscataway, NJ: IEEE Press, 1997.
[189] J. L. Blanton and R. L. Wainwright, “Multiple vehicle routing with [213] E. Ronald, “When selection meets seduction,” in Genetic Algorithms:
time and capacity constraints using genetic algorithms,” in Proc. 5th Proc. 6th Int. Conf. San Francisco, CA: Morgan Kaufmann, 1995, pp.
Int. Conf. on Genetic Algorithms. San Mateo, CA: Morgan Kaufmann, 167–173.
1993, pp. 452–459. [214] S. A. Kauffman, The Origins of Order. Self-Organization and Selection
[190] L. A. Cox, L. Davis, and Y. Qiu, “Dynamic anticipatory routing in in Evolution. New York: Oxford Univ. Press, 1993.
circuit-switched telecommunications networks,” in Handbook of Genetic [215] P. J. Angeline and J. B. Pollack, “Competitive environments evolve
Algorithms. New York: Van Nostrand Reinhold, 1991, ch. 11, pp. better solutions for complex tasks,” in Proc. 5th Int. Conf. on Genetic
109–143. Algorithms. San Mateo, CA: Morgan Kaufmann, 1993, pp. 264–270.
[191] K. Juliff, “A multi-chromosome genetic algorithm for pallet loading,” in [216] W. D. Hillis, “Co-evolving parasites improve simulated evolution as
Proc. 5th Int. Conf. on Genetic Algorithms. San Mateo, CA: Morgan an optimization procedure,” in Emergent Computation. Self-Organizing,
Kaufmann, 1993, pp. 467–473. Collective, and Cooperative Phenomena in Natural and Artificial Com-
[192] S. Schulze-Kremer, “Genetic algorithms for protein ternary structure puting Networks. Cambridge, MA: MIT Press, 1990, pp. 228–234.
prediction,” in Parallel Genetic Algorithms: Theory & Applications, J. [217] J. Paredis, “Coevolutionary life-time learning,” in Parallel Problem
Stender, Ed. Amsterdam: IOS, 1993, Frontiers in Artificial Intelligence Solving from Nature IV. Proc. Int. Conf. on Evolutionary Computation.
and Applications, pp. 129–150. (Lecture Notes in Computer Science, vol. 1141). Berlin, Germany:
[193] R. Unger and J. Moult, “A genetic algorithm for 3D protein folding Springer, 1996, pp. 72–80.
simulation,” in Proc. 5th Int. Conf. on Genetic Algorithms. San Mateo, [218] N. N. Schraudolph and R. K. Belew, “Dynamic parameter encoding for
CA: Morgan Kaufmann, 1993, pp. 581–588. genetic algorithms,” Machine Learning, vol. 9, pp. 9–21, 1992.
[194] D. C. Youvan, A. P. Arkin, and M. M. Yang, “Recursive ensemble [219] R. W. Anderson, “Genetic mechanisms underlying the Baldwin ef-
mutagenesis: A combinatorial optimization technique for protein en- fect are evident in natural antibodies,” in Proc. 4th Annu. Conf. on
gineering,” in Parallel Problem Solving from Nature 2. Amsterdam: Evolutionary Programming. Cambridge, MA: MIT Press, 1995, pp.
Elsevier, 1992, pp. 401–410. 547–564.
[195] R. F. Walker, E. W. Haasdijk, and M. C. Gerrets, “Credit evaluation [220] S. Forrest, Ed., Emergent Computation. Self-Organizing, Collective, and
using a genetic algorithm,” in Intelligent Systems for Finance and Cooperative Phenomena in Natural and Artificial Computing Networks.
Business. Chichester: Wiley, 1995, ch. 3, pp. 39–59. Cambridge, MA: MIT Press, 1990.


Thomas Bäck received the Diploma degree in computer science in 1990 and the Ph.D. degree in computer science in 1994, both from the University of Dortmund, Germany. In 1995, he received the best dissertation award of the German Association for Computer Science (GI) for his Ph.D. thesis on evolutionary algorithms.
From 1990–1994, he worked as a Scientific Assistant at the Department of Computer Science of the University of Dortmund. From 1994, he was a Senior Research Fellow at the Center for Applied Systems Analysis within the Informatik Centrum Dortmund, and Managing Director of the Center for Applied Systems Analysis since 1996. He also serves as an Associate Professor in the Computer Science Department of Leiden University, The Netherlands, and teaches courses on evolutionary computation at the University of Dortmund and at Leiden University. His current research interests are in the areas of theory and application of evolutionary computation and related areas of computational intelligence. He is author of the book Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms (New York: Oxford Univ. Press, 1996), co-editor-in-chief of the Handbook of Evolutionary Computation (New York: Oxford Univ. Press and Institute of Physics, 1997), and a member of the editorial board of Evolutionary Computation.
Dr. Bäck is an associate editor of the IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION. He is a Member of the Dutch Association for Theoretical Computer Science (NVTI), has served on the IEEE Neural Networks Council’s technical committee on evolutionary computation since 1995, was a co-program chair of the 1996 and 1997 IEEE International Conferences on Evolutionary Computation (ICEC) and the Fifth Annual Conference on Evolutionary Programming (EP’96), and is program chair of the Seventh International Conference on Genetic Algorithms and Their Applications (ICGA’97).

Ulrich Hammel received the Diploma degree in computer science in 1985 from the University of Dortmund, Germany.
Since 1985, he has been a Scientific Assistant at the Department of Computer Science of the University of Dortmund and is Managing Director of the Sonderforschungsbereich (Collaborative Research Center) “Design and Management of Complex Technical Processes and Systems by Computational Intelligence Methods,” which began in 1997, involving partners from the Departments of Computer Science, Electrical Engineering, Mechanical Engineering, and Chemical Engineering at the University of Dortmund. His research interests are in the intersection of modeling and optimization. Currently, he works on genetic representations and robust design strategies in evolutionary computation.

Hans-Paul Schwefel received the Diploma degree in engineering (aero- and space-technology) in 1965 and the Ph.D. degree in process engineering in 1975, both from the Technical University of Berlin, Germany.
From 1963–1966, he worked as a Junior Assistant and Research Assistant at the Hermann-Föttinger Institute for Hydrodynamics, Technical University of Berlin. Subsequently (1967–1970), he was a Research and Development Engineer at the AEG Research Center, Berlin. From 1970 to 1976, he was a Research Consultant and DFG Grantee for various research projects concerning the development of evolution strategies at the Technical University of Berlin and the Medical School of the University of Hanover, Germany. He was a Senior Research Fellow at the Nuclear Research Center (KFA), Jülich, and a Group Leader within the Program Group of Systems Analysis and Technological Development during 1976–1984, and has held the Chair of Systems Analysis as a Full Professor at the Department of Computer Science of the University of Dortmund since 1985. He teaches courses on systems analysis, programming, evolutionary computation, and self-organization. He has been President of the Informatik Centrum Dortmund since 1989 and Speaker of the Sonderforschungsbereich (Collaborative Research Center) “Design and Management of Complex Technical Processes and Systems by Computational Intelligence Methods” since 1996. He is author of the books Numerical Optimization of Computer Models (Chichester, UK: Wiley, 1981) and Evolution and Optimum Seeking (New York: Wiley, 1995). He is a member of the editorial boards of Evolutionary Computation and BioSystems.
Dr. Schwefel received the Lifetime Achievement Award from the Evolutionary Programming Society (La Jolla, CA) in 1995. He is an Associate Editor of the IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION. He is a member of the steering committee of the Parallel Problem Solving from Nature Conference Series (PPSN), elected member of the International Society of Genetic Algorithms Council (ISGA) since 1995, and advisory board member of the Handbook of Evolutionary Computation (New York, NY: Oxford Univ. Press and Institute of Physics, 1997).
