
Title: The Evolution and Foundations of Evolutionary Computation

Abstract:
Evolutionary computation (EC) is a branch of computational intelligence inspired by natural
evolution and adaptation, incorporating algorithms such as genetic algorithms (GA), evolutionary
programming (EP), evolution strategies (ES), and genetic programming (GP). These algorithms
share common principles of solution generation, evaluation, selection, and evolution, with distinct
origins—from Holland’s GA for adaptive systems in the 1960s to Koza’s GP for automatic
program design in the 1990s—yet their fundamental computational processes reveal notable
similarities. This paper explores "The Evolution and Foundations of Evolutionary Computation,"
offering a comprehensive review of the key algorithms and their connections to traditional search
techniques. We trace the early history and development of EC, discussing significant theoretical
advancements and highlighting innovations in fields such as artificial life, evolving programs, and
evolvable hardware. The paper concludes by examining current challenges in the field and
proposing future research directions, particularly in advancing the applicability and theoretical rigor
of evolutionary approaches to optimization and adaptive learning across various scientific and
engineering domains. 1. Introduction Evolutionary computation is now nearly 50 years old,
originating with the seminal work of John Holland at the University of Michigan in 1975 which
introduced the genetic algorithm [1]. Evolutionary computation [2] encompasses a variety of
problem-solving methodologies that take inspiration from natural evolutionary and genetic
processes. The most well-known form of evolutionary computation is the genetic algorithm [3,4],
which evolves a population of solutions to the problem at hand, each represented as a bit-string—
the genotype—with a fitness function measuring the fitness of the bit-string within the context of
the problem (i.e., mapping a genotype to a phenotype). Evolutionary operators, such as mutation,
crossover, and selection, control the simulated evolution over several generations. There are now
many forms of evolutionary computation (a few of which are illustrated in Figure 1) that have
developed over the years, including genetic programming [5], evolution strategies [6], differential
evolution [7,8], evolutionary programming [9], permutation-based evolutionary algorithms [10],
memetic algorithms [11], the estimation of distribution algorithms [12], particle swarm optimization
[13], interactive evolutionary algorithms [14], ant colony optimization [15,16], and artificial
immune systems [17], among others [18,19]. Among the characteristics of evolutionary algorithms
that lead to powerful problem solving is the fact that they lend themselves very well to parallel
implementation [20,21,22], enabling the exploitation of today’s multicore and manycore computer
architectures. Rich theoretical foundations also exist which are related to convergence properties
[23,24,25], parameter optimization, and control [26], as well as the powerful analytical tools of
fitness landscape analysis [27,28,29], such as fitness–distance correlation [30] and search landscape
calculus [31], among others. These theoretical foundations inform the engineering of evolutionary
solutions to specific problems. There are also many open-source libraries and toolkits available for
evolutionary computation in a variety of programming languages [32,33,34,35,36,37,38,39,40,41],
making the application of evolutionary algorithms to new problems and domains particularly easy.
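The generate, evaluate, select, and vary loop shared by these algorithms can be illustrated with a minimal bit-string genetic algorithm. This is only a sketch, not the implementation of any particular toolkit; the OneMax fitness function (count of 1 bits) and all parameter values are illustrative choices.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=100,
                      crossover_rate=0.9, mutation_rate=0.05):
    """Minimal generational GA over bit-string genotypes."""
    # Initial population of random bit-strings.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            # Selection: binary tournament between two random individuals.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:
                # One-point crossover: swap tails at a random cut point.
                cut = random.randint(1, n_bits - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):
                # Bit-flip mutation, applied independently per bit.
                offspring.append([b ^ 1 if random.random() < mutation_rate else b
                                  for b in c])
        pop = offspring[:pop_size]
        best = max(pop + [best], key=fitness)   # remember the best-so-far
    return best

# OneMax: fitness is the number of 1 bits; the optimum is the all-ones string.
solution = genetic_algorithm(sum)
```

Because the best solution found so far is retained, the returned fitness never decreases across generations; on an easy problem like OneMax the loop typically reaches, or comes very close to, the all-ones optimum.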
Figure 1. A few of the many forms of evolutionary computation. Evolutionary computation has been
effective in solving problems with a variety of characteristics, and within many application
domains, such as multiobjective optimization [42,43,44,45], data science [46], machine learning
[47,48,49], classification [50], feature selection [51], neural architecture search [52], neuroevolution
[53], bioinformatics [54], scheduling [55], algorithm selection [56], computer vision [57], hardware
validation [58], software engineering [59,60], and multi-task optimization [61,62], among many
others. This Special Issue brings together recent advances in the theory and application of
evolutionary computation. It includes 13 articles. The authors of the 13 articles represent
institutions from 11 different countries, demonstrating the global reach of the topic of evolutionary
computation. The published articles span the breadth of evolutionary computation techniques, and
cover a variety of applications. The remainder of this Editorial briefly describes the articles
included within this Special Issue, and I encourage you to read and explore each. 2. Overview of the
Published Articles This overview of the articles is organized in the order in which the contributions
to the Special Issue were published. Cicirello (contribution 1) presents a new mutation operator for
evolutionary algorithms where solutions are represented by permutations. The new mutation
operator, cycle mutation, is inspired by cycle crossover. Cycle mutation is designed specifically for
assignment and mapping problems (e.g., quadratic assignment, largest common subgraph, etc.)
rather than ordering problems like the traveling salesperson. This article includes a fitness landscape
analysis exploring the strengths and weaknesses of cycle mutation in terms of permutation features.
Osuna-Enciso and Guevara-Martínez (contribution 2) propose a variation of differential evolution
that they call stigmergic differential evolution which can be used for solving continuous
optimization problems. Their approach integrates the concept of stigmergy with differential
evolution. Stigmergy originated from swarm intelligence, and refers to the indirect communication
among members of a swarm that occurs when swarm members manipulate the environment and
detect modifications made by others (e.g., the pheromone trail-following behavior of ants, among
others). Córdoba, Gata, and Reina (contribution 3) consider a problem related to energy access in
remote, rural areas. Namely, they utilize a (𝜇+𝜆)-evolutionary algorithm to optimize the design of
mini hydropower plants, using cubic Hermite splines to model the terrain in 3D, rather than the
more common 2D simplifications. Parra, et al. (contribution 4) consider the binary classification
problem of predicting obesity. In their experiments, they explore utilizing evolutionary computation
in feature selection for binary classifier systems. They consider ten different machine learning
classifiers, combined with four feature-selection strategies. Two of the feature-selection strategies
considered use the classic bit-string-encoded genetic algorithm. Fan and Liang (contribution 5)
consider directional sensor networks and target coverage. In their approach to target coverage, they
developed a hybrid of particle swarm optimization and a genetic algorithm. Their experiments
demonstrate that the hybrid approach outperforms both particle swarm optimization and the genetic
algorithm alone for the problem of maximizing covered targets and minimizing active sensors.
Wang, et al. (contribution 6) developed a hybrid between particle swarm optimization and
differential evolution for real-valued function optimization. Their hybrid combines a self-adaptive
form of differential evolution with particle swarm optimization, and they experiment with their
approach on a variety of function optimization benchmarks. Chen, et al. (contribution 7) explore the
constrained optimization problem of optimizing the linkage system for vehicle wipers. Their aim
was to improve steadiness of wipers. They utilize differential evolution to optimize the maximal
magnitude of the angular acceleration of the links in the system subject to a set of constraints. They
were able to reduce the maximal magnitude of angular acceleration by 10%. Tong, Sung, and Wong
(contribution 8) analyze the performance of a parameter-free evolutionary algorithm known as pure
random orthogonal search. They propose improvements to the algorithm involving local search.
They performed experiments on a variety of benchmark function optimization problems with a
variety of features (e.g., unimodal vs. multi-modal, convex vs. non-convex, separable vs. non-separable). Anđelić, et al. (contribution 9) approach the problem of searching for candidates for
dark matter particles, so-called weakly interacting massive particles, using symbolic regression via
genetic programming. Their approach estimates the interaction locations with high accuracy. Wu, et
al. (contribution 10) developed a recommender system utilizing an interactive evolutionary
algorithm for making personalized recommendations. In an interactive evolutionary algorithm,
human users are directly involved in evaluating the fitness of members of the population. Wu, et al.
use a surrogate model in their approach to reduce the number of evaluations required by users.
Dubey and Louis (contribution 11) utilize a (𝜇+𝜆)-evolutionary algorithm. They developed an
approach to deploying a UAV-based ad hoc network to cover an area of interest. UAV motion is
controlled by a set of potential fields that are optimized by the (𝜇+𝜆)-evolutionary algorithm using
polynomial mutation and simulated binary crossover. Lazari and Chassiakos (contribution 12) take
on the problem of deploying electric vehicle charging stations. They define it as a multi-objective
optimization problem with two cost functions: station deployment costs and user travel costs
between areas of demand and the station’s location. Their evolutionary algorithm’s chromosome
representation combines x and y coordinates of candidate charging station locations, using the
classic bit-string of genetic algorithms to model whether or not each candidate station is deployed.
Reffad and Alti (contribution 13) use NSGA-II to optimize enterprise resource planning
performance. They aimed to optimize average service quality and average energy consumption.
They propose an adaptive and dynamic solution within IoT, fog, and cloud environments. 3.
Conclusions This collection of articles spans a variety of forms of evolutionary computation,
including genetic algorithms, genetic programming, differential evolution, particle swarm
optimization, and evolutionary algorithms more generally, as well as hybrids of multiple forms of
evolutionary computation. The evolutionary algorithms represent solutions in several ways,
including the common bit-string representation, vectors of reals, and permutations, as well as
custom representations. The authors of the articles tackle a very diverse collection of problems of
different types and from many application domains. For example, some of the problems considered
are discrete optimization problems, while others optimize continuous functions. Although many of
the articles focus on optimizing a single objective function, others involve multi-objective
optimization. Some of the articles primarily utilize common benchmarking optimization functions
and problems, while several others explore a variety of real-world applications, such as optimizing
mini hydropower plants, UAV deployment, the deployment of electric vehicle charging stations,
target coverage in wireless sensor networks, enterprise resource planning, recommender systems,
dark matter detection, and optimizing vehicle wiper linkage systems, among others. The diversity of
evolutionary techniques, evolutionary operators, problem features, and applications that are covered
within this collection of articles demonstrates the wide reach and applicability of evolutionary
computation. The essential aspects of an evolutionary algorithm are shown in Figure 1. A
population of candidate solutions to a problem is scored with respect to a so-called fitness criterion.
These solutions serve as parents for offspring. These offspring are created via random variation of
the parents in the form of mutations and/or recombination, or possibly other operations. The
offspring are scored and the solutions compete for survival and the right to become parents of the
next generation. This basic protocol or variants of it appear in virtually all evolutionary algorithms,
with the exception of artificial life studies that seek to determine emergent properties of
simulations, and also some extended variations of evolutionary computing such as particle swarm
and ant colony methods. In the past 20 years, there have been numerous successful applications of
this general evolutionary approach, with various extensions, in the areas of medicine,
bioinformatics, military planning, scheduling, forecasting, and other areas. It is reasonable to
believe that over 3000 papers are published annually in evolutionary computing around the world.
Evolutionary computation has a long history, which extends over 55 years. Some of the earliest
progenitors of that history can be found even 20 years still earlier. For example, in 1932 Cannon1
noted that evolution was a learning process and made a direct comparison to individual learning. In
addition, Turing2 offered that there is an “obvious connection between [machine learning] and
evolution.” By the mid-1950s, the idea of simulating evolution on a computer had already taken
root. One of the first theoretical works on simulated evolution involved a robot learning to move in
an arena. In this work, Friedman3,4 also remarked that mutation and selection would be able to
design thinking machines, including chess-playing machines. The concept that evolution and
learning are intimately connected was also offered by Campbell5,6,7 in the late 1950s and also
1960. Unfortunately, these historical contributions, and likely the majority of other early
contributions, are virtually unknown in the evolutionary computation community. This unawareness
is a result of a lack of rigor and a reflection of the immaturity of the field, as compared with, say,
physics, mathematics, or chemistry. Many innovative approaches to evolutionary computing were
undertaken in the earliest days of computing. With current computational capabilities, we now have
the computing power to explore these seminal ideas further. If this review paper encourages such
efforts, it will have served its purpose. This paper focuses on historical contributions in artificial
life, modeling genetic systems, evolving programs, and evolving hardware which may not be as
well known as others in popular literature (e.g., those featured routinely in trade magazines). It also
offers remarks on the future of evolutionary algorithms in light of current knowledge. Not all early
and unrecognized efforts are given due attention owing to space limitations. Interested readers may
find these covered in Fogel19. 2. HISTORICAL FOUNDATIONS 2.1 Artificial Life As noted in
Fogel8,13, perhaps the earliest published record of any work in evolutionary computation is
Barricelli9. Nils Aall Barricelli worked on John von Neumann’s high-speed computer at the
Institute for Advanced Study in Princeton, New Jersey, in 1953. Barricelli’s experiments were
essentially trials in the area of artificial life, in which numbers were placed in a grid and moved
based on local interaction rules. His original research was published in Italian, but was republished
in 1957 in English10. Two additional publications in 1962 and 1963 extended his work11, 12.
Barricelli’s essential experiments worked as follows. Numbers were entered into a grid of
predetermined size. Positive numbers shifted to the right, while negative numbers shifted to the left.
When collisions occurred between two or more numbers that entered the same cell in the grid, other
rules were applied to determine how to alter the numbers. For example, suppose that a one-dimensional grid had N = 20 cells and numbers were distributed in those cells at the initial
generation g = 0. For g = 1, the numbers would shift to different cells based on the arrangement of
numbers at g = 0, and then progress further or “migrate” based on their then-current positions. To
explain Barricelli’s specific migration rules, let x(i, g) denote the numeric entry in cell i at generation g. Barricelli10 used rules such as: (1) A number n shifts n cells to the right if it is positive, or |n| cells to the left if it is negative, unless this results in a “collision” with another number (in which two numbers arrive at the same cell location). (2) The same number x(i, g) = n may reproduce m cells to the right (or left) if x(i+n, g) = m, again with the exception of a collision. (3) Reproduction may occur more than once: if x(i+m, g) = r (where r ≠ 0, since 0 designated an empty square), then x(i+r, g+1) = n, again with the exception of a collision. (4) If two numeric elements collide in a cell and they are equal, only one copy of the number is placed in the cell. If the numbers are not equal, other rules are applied to determine which number to place in the cell or other cells. Figure 2, taken from
Barricelli10, shows an example of numbers propagating in time through a series of 20-cell grids.
The numbers involved are −3, 1, and 5, and also 0 representing an empty square. Starting from the
arrangement at g = 0 at the top of the grid (usually found by using a set of playing cards), by the
fourth generation, the pattern (5, −3, 1, −3, 0, −3, 1) appears and persists in every other generation.
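A heavily simplified sketch of such a migration universe follows. It implements only the shift rule and the equal-numbers collision rule on a wrap-around (tubular) grid, treats unequal collisions as leaving the cell empty, and omits Barricelli's reproduction rules (2) and (3); the starting arrangement is invented for illustration, not taken from Figure 2.

```python
def barricelli_step(grid):
    """One migration step of a simplified Barricelli-style universe.

    Each nonzero number n moves n cells to the right (negative n moves
    left) on a tubular grid. When equal numbers collide, a single copy
    survives; unequal collisions leave the cell empty here, whereas
    Barricelli's actual conflict rules were richer.
    """
    size = len(grid)
    arrivals = {}                      # target cell -> set of arriving numbers
    for i, n in enumerate(grid):
        if n == 0:                     # 0 designates an empty square
            continue
        j = (i + n) % size             # wrap around: tubular design
        arrivals.setdefault(j, set()).add(n)
    new = [0] * size
    for j, nums in arrivals.items():
        if len(nums) == 1:             # lone arrival, or equal numbers collided
            new[j] = nums.pop()
    return new

# A hypothetical 20-cell starting arrangement with entries 1, -3, and 5.
g0 = [0, 1, 0, 0, -3, 0, 5] + [0] * 13
g1 = barricelli_step(g0)
```

Iterating `barricelli_step` reproduces the flavor of the experiment: numbers drift, collide, and settle into repeating patterns, although the persistent patterns Barricelli reported depend on the reproduction rules omitted here.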
The Figure 2 example uses a “flat” grid, but Barricelli10 noted experiments with 512 cells in a tubular design (connecting the left and right edges). 2.2 Modeling Genetic Systems The interest in modeling genetic systems to explore and better
understand biology dates back at least to the work of Fraser27 in 1957, who conducted simulations
involving diploid organisms represented by binary strings of a given length (n bits). Each bit
represented an allele (dominant or recessive) and the phenotype of each organism was determined
by its genetic composition. Reproduction was accomplished using an n-point crossover operator in
which each position along an organism’s genetic string was assigned a probability of breaking for
recombination. Linkage groups could form between genes. This was accomplished by varying the
probability of crossover occurring at each locus along either string. Fraser’s general procedure used
a population of P parents that generated P′ offspring via recombination. Selection then eliminated
all but P of the offspring (and all of the parents were also eliminated). Readers who are well versed
in evolution strategies will note that this anticipated the (μ,λ) selection method. Selection could be
applied toward extreme values of a phenotype (the equivalent of function maximization or
minimization) or the mean values (stabilizing selection against extremes). The possibility for
varying the number of progeny per parent was also introduced. Fraser and colleagues published
numerous papers in this line of research. These works included Fraser28,29,30,31,32,33,
Barker34,35, Fraser and Hansche36, Fraser et al.37, Fraser and Burnell38,39. Fraser and Burnell40,
Computer Models in Genetics, was the second book in the history of evolutionary computation.
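Fraser's selection scheme, in which P parents generate P′ > P offspring and all parents are then discarded, is the pattern evolution strategies later named (μ,λ) selection. The following is a minimal sketch under illustrative assumptions (real-valued genotypes, fixed-width Gaussian mutation, and a toy sphere objective), not a reconstruction of Fraser's diploid binary-string model.

```python
import random

def comma_selection(parents, fitness, offspring_per_parent=7, sigma=0.1):
    """One generation of (mu, lambda)-style selection in Fraser's spirit:
    P parents produce P' > P offspring, every parent is discarded, and
    only the best P offspring survive."""
    mu = len(parents)
    offspring = []
    for p in parents:
        for _ in range(offspring_per_parent):
            # Variation: Gaussian mutation of a real-valued genotype.
            offspring.append([x + random.gauss(0, sigma) for x in p])
    offspring.sort(key=fitness, reverse=True)
    return offspring[:mu]              # parents themselves never survive

# Maximize f(x) = -(x0^2 + x1^2); the optimum is at the origin.
f = lambda v: -(v[0] ** 2 + v[1] ** 2)
pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(5)]
for _ in range(50):
    pop = comma_selection(pop, f)
```

Discarding all parents makes the method non-elitist, which (as later evolution strategies research emphasized) helps escape stale solutions at the cost of occasionally losing the current best.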
Given the close overlap in subsequent work in genetic algorithms, as say offered in Holland41,
including epistasis, linkage groups, recombination, binary strings, diploid representations, and so
forth, it is perhaps surprising and certainly unfortunate that these earlier works were mostly ignored
outside the field of genetics for over two decades. By the time of Fraser’s later contributions, say
Fraser42, the models also incorporated inversion to build arbitrary linkages between alternative
genes and were described in clear terms as “purposive” and “learning” (Figure 6). Thus Fraser’s
model was essentially equivalent to the canonical genetic algorithm popularized in Holland41. 2.3 Evolving Programs Conrad’s artificial ecosystem experiments described earlier involved organisms
that were programs, which reacted to inputs, followed a sequence of steps, and generated output;
however, two other earlier efforts were made to evolve programs to execute specific tasks. In 1958
and 1959, Friedberg53 and Friedberg et al.54 evolved machine language programs (binary
language) to perform simple operations such as moving data from one location to another or
performing the sum of numbers in two data locations. The speed of the available computers at the
time demanded several procedural shortcuts in order to test the devised methods. There was a hope
that structurally similar programs could be grouped together in classes. This concept was another
forerunner of what was described as intrinsic parallelism in Holland41. Specifically, a class was
defined to consist of all programs having a certain instruction at a certain location. A learning
procedure was intended to compare the performance of two nonoverlapping classes of programs,
essentially evaluating alternative instructions that might occur at a given location. This concept is
also similar to what was later described as schemata in Holland41. A credit assignment method was
invented to partition the influence of individual instructions, which recorded the number of overall
successes and failures for each instruction. The hope was that by combining programs with more
“good” instructions this would lead to improved programs overall. Friedberg et al.54 noted that the
evolved programs compared favorably to completely random generate-and-test methods, but
complex problems that were broken into parts were not always solved to completion. Minsky55
offered an unfavorable and apparently erroneous criticism of Friedberg’s work, saying that “The
machine did learn to solve some extremely simple problems. But it took on the order of 1,000 times
longer than pure chance would expect.” There appears to be no basis for this description.
Unfortunately, Minsky’s criticism was well publicized and left a lasting impression of this work.
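Friedberg's credit-assignment idea, tallying successes and failures for each instruction at each program location and biasing future programs toward instructions with better records, can be sketched as follows. The toy instruction set, the all-or-nothing task, and the proportional sampling rule are illustrative assumptions, not Friedberg's actual machine-language setup.

```python
import random

INSTRUCTIONS = ["MOV", "ADD", "SUB", "NOP"]    # a hypothetical instruction set

def credit_search(task, locations=3, trials=1500):
    """Friedberg-style search: keep per-(location, instruction) tallies of
    successes and failures, and sample each location's instruction in
    proportion to its historical success rate."""
    tally = {(loc, ins): [1, 1]                # [successes, failures], smoothed
             for loc in range(locations) for ins in INSTRUCTIONS}
    best = None
    for _ in range(trials):
        # Sample a candidate program, biased by the current tallies.
        program = []
        for loc in range(locations):
            w = [tally[(loc, i)][0] / sum(tally[(loc, i)]) for i in INSTRUCTIONS]
            program.append(random.choices(INSTRUCTIONS, weights=w)[0])
        ok = task(program)                     # True if the program succeeds
        # Credit assignment: every instruction in the program shares the
        # program's overall success or failure.
        for loc, ins in enumerate(program):
            tally[(loc, ins)][0 if ok else 1] += 1
        if ok:
            best = program
    return best

# Hypothetical task: succeed only when the program matches a fixed target.
target = ["MOV", "ADD", "MOV"]
found = credit_search(lambda p: p == target)
```

The weakness Friedberg et al. observed is visible in the sketch: because credit is assigned from whole-program success, a good instruction inside a failing program is penalized, so problems that must be solved in parts converge slowly.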
2.4 Evolvable Hardware Recent interest in evolvable hardware has focused on evolving electronic circuits (which was suggested in Atmar68); however, perhaps the earliest effort in evolvable hardware evolved physical
devices such as bent pipes, airfoils, and flashing nozzles. This research was performed by Ingo
Rechenberg, Hans-Paul Schwefel, and Peter Bienert in 1964 and subsequently at the Technical
University of Berlin, Germany. Instead of using conventional gradient methods to optimize these
devices for various tasks, e.g., to design an airfoil that minimizes drag, they used a random variation
and selection method that employed throwing dice to make physical alterations to devices. This was
published first in 1965 by Rechenberg74. In 1966, Schwefel75 used this “evolution strategy” to
design a nozzle for convergent-divergent flow of hot water that would offer maximum efficiency.
He built a nozzle out of brass rings of varying diameter and then concatenated them into a single
device. By throwing dice, he changed both the number of rings in the nozzle as well as the diameter.
Starting with a conventional design shown in Figure 9, through 45 iterations of variation and
selection, the final design shown in the figure was obtained. These efforts led to many other
developments in evolution strategies for numerical optimization, including the use of a population
of solutions, the common (μ+λ) and (μ,λ) forms of selection, self-adaptation, and various forms of
recombination.
Studying the geomaterial constitutive model is a very important aspect of the
theoretical study for geomechanics and geotechnical engineering. This model is the basis of
geotechnical engineering research. Currently, there are many studies in this field, and numerous
theoretical models have been developed for geomaterials [58], [87], [91]. However, the
development trend is that models become increasingly complicated and model parameters become
increasingly numerous. Thus, the practicability of constitutive models becomes poorer. However,
for real engineering practices, it is more important that a constitutive model can describe the
engineering behavior very well, whereas the precision of the material model is not crucial.
Therefore, it is significant to study a geomaterial constitutive model based on real engineering
behavior. This is an issue that back analysis can solve [71]. In reality, back analysis for a
geomaterial constitutive model, called model identification, has existed since the 1970s [46].
However, although it is an important aspect of back analysis, model identification has not been
rapidly developed, primarily because great controversy exists regarding the need for model
identification. One viewpoint suggests that the merit of back analysis is using a simple constitutive
model [31] and the reasons are as follows. First, the model parameters can determine the
consistency of computing results with real measurements. Moreover, the mechanical behaviors
described by the model probably do not reflect the real behaviors of the geomaterial. In other words,
no model can comprehensively describe the real behaviors of geomaterials. Therefore, if a
complicated model is constructed in the back analysis, which considers more aspects of the
geomaterial, the original problem of computing the actual engineering will be faced again, thus
violating the original intention of the back analysis [77]. Another viewpoint suggests that the back
analysis of mechanical parameters should be called “parameter identification” [70]. The real back
analysis must simultaneously inverse the mechanical parameters and constitutive model.
Theoretically, identification of the geomaterial constitutive model is more important than
identification of the mechanical parameters [85] and the reason is as follows. If the constitutive
model does not reflect the actual behaviors of the geomaterial, the actual engineering behaviors
cannot be described, regardless of the precision of the mechanical parameters. Thus, if the
constitutive model is suitable, it is easy to back-calculate suitable mechanical parameters. However,
another possible reason which may retard the development of model identification is that compared
to parameter identification, model identification is an extremely complicated problem that cannot be
solved well using a traditional method [24]. In 1987, Gioda and Sakurai proposed that model
identification based on measurement displacement should be the main aspect of back analysis
development [32]. Moreover, in 1997, Sakurai demonstrated that the identification of a constitutive
model was critical [70]. Therefore, to solve the complicated problem of constitutive model
identification, it is essential to broaden the research and determine appropriate methods. Previous
studies [16], [33], [39] have shown that a multidisciplinary approach is the development tendency in
all applied sciences and that the introduction of intelligent science can stimulate the development of
geotechnical engineering research. Therefore, computational intelligence has been introduced into
model identification research, and many related studies have been conducted. The main studies in
computational intelligence are summarized in Table 1. In this study, previous studies are reviewed
in Table 1 based on two comput
