Optimization Problems: Initialization

1. Genetic algorithms use populations of candidate solutions that are evolved toward better solutions over generations. Each candidate has a genotype that can be mutated and altered.
2. In each generation, individuals are stochastically selected from the current population based on fitness and modified through recombination and mutation to form a new generation.
3. Genetic algorithms require a genetic representation of solutions and a fitness function to evaluate solutions. They initialize a population randomly and improve it through operators like mutation, crossover, and selection that breed a new generation.


Optimization problems

In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.[2]
The evolution usually starts from a population of randomly generated individuals, and is an iterative
process, with the population in each iteration called a generation. In each generation, the fitness of
every individual in the population is evaluated; the fitness is usually the value of the objective
function in the optimization problem being solved. The more fit individuals are stochastically selected
from the current population, and each individual's genome is modified (recombined and possibly
randomly mutated) to form a new generation. The new generation of candidate solutions is then
used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a
maximum number of generations has been produced, or a satisfactory fitness level has been
reached for the population.
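The iterative process described above can be sketched as a generic loop. This is an illustrative sketch, not a fixed API: the operator functions (`init_population`, `fitness`, `select`, `recombine`, `mutate`) are placeholder names for components the caller supplies.

```python
def genetic_algorithm(init_population, fitness, select, recombine, mutate,
                      max_generations=100, target_fitness=None):
    """Generic GA loop; all operators are caller-supplied callables."""
    population = init_population()
    for generation in range(max_generations):
        # Evaluate the fitness of every individual in the population.
        scored = [(fitness(ind), ind) for ind in population]
        best_fitness = max(score for score, _ in scored)
        # Terminate early once a satisfactory fitness level is reached.
        if target_fitness is not None and best_fitness >= target_fitness:
            break
        # Stochastically select parents, recombine and possibly mutate them
        # to form the next generation.
        population = [mutate(recombine(select(scored), select(scored)))
                      for _ in range(len(population))]
    return max(population, key=fitness)
```

The loop terminates on whichever comes first: the generation cap or a population member reaching the target fitness, matching the two stopping conditions described above.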
A typical genetic algorithm requires:

1. a genetic representation of the solution domain,
2. a fitness function to evaluate the solution domain.
A standard representation of each candidate solution is as an array of bits.[2] Arrays of other types
and structures can be used in essentially the same way. The main property that makes these
genetic representations convenient is that their parts are easily aligned due to their fixed size, which
facilitates simple crossover operations. Variable length representations may also be used, but
crossover implementation is more complex in this case. Tree-like representations are explored
in genetic programming and graph-form representations are explored in evolutionary programming;
a mix of both linear chromosomes and trees is explored in gene expression programming.
Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a
population of solutions and then to improve it through repetitive application of the mutation,
crossover, inversion and selection operators.
Initialization
The population size depends on the nature of the problem, but typically contains several hundred to several thousand candidate solutions. Often, the initial population is generated randomly, covering the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.
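A minimal sketch of such an initialization, assuming bit-string genomes; the optional `seeds` parameter is an illustrative way to place hand-picked solutions in promising regions:

```python
import random

def init_population(pop_size, genome_length, seeds=None):
    """Generate a random initial population of bit-string genomes.

    If `seeds` is given, those hand-picked genomes are included first,
    and the remainder of the population is filled in randomly.
    """
    population = list(seeds) if seeds else []
    while len(population) < pop_size:
        population.append([random.randint(0, 1) for _ in range(genome_length)])
    return population
```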
Selection
Main article: Selection (genetic algorithm)
During each successive generation, a portion of the existing population is selected to breed a new
generation. Individual solutions are selected through a fitness-based process, where fitter solutions
(as measured by a fitness function) are typically more likely to be selected. Certain selection
methods rate the fitness of each solution and preferentially select the best solutions. Other methods
rate only a random sample of the population, as the former process may be very time-consuming.
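Tournament selection is one common method of the "random sample" kind: rather than ranking the whole population, each pick compares only a few randomly drawn individuals. A minimal sketch, assuming a list-based population and a callable fitness function:

```python
import random

def tournament_select(population, fitness, k=3):
    """Pick the fittest of a random sample of k individuals.

    Only the sampled individuals are rated, which avoids the cost of
    evaluating and ranking the entire population on every selection.
    """
    sample = random.sample(population, k)
    return max(sample, key=fitness)
```

Larger values of `k` increase selection pressure toward the fittest solutions; `k=1` degenerates to uniform random selection.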
The fitness function is defined over the genetic representation and measures the quality of the
represented solution. The fitness function is always problem dependent. For instance, in
the knapsack problem one wants to maximize the total value of objects that can be put in a
knapsack of some fixed capacity. A representation of a solution might be an array of bits, where
each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the
object is in the knapsack. Not every such representation is valid, as the size of objects may exceed
the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the
knapsack if the representation is valid, or 0 otherwise.
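The knapsack fitness described above might be written as follows (the function name and parameter names are illustrative):

```python
def knapsack_fitness(bits, values, weights, capacity):
    """Fitness for the 0/1 knapsack problem.

    Each bit selects one object; the fitness is the total value of the
    selected objects, or 0 when their total weight exceeds the capacity
    (i.e. the representation is invalid).
    """
    total_weight = sum(w for bit, w in zip(bits, weights) if bit)
    if total_weight > capacity:
        return 0  # invalid: the objects do not fit in the knapsack
    return sum(v for bit, v in zip(bits, values) if bit)
```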
In some problems, it is hard or even impossible to define the fitness expression; in these cases,
a simulation may be used to determine the fitness function value of a phenotype (e.g. computational
fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the
phenotype), or even interactive genetic algorithms are used.
Genetic operators
Main articles: Crossover (genetic algorithm) and Mutation (genetic algorithm)
The next step is to generate a second generation population of solutions from those selected
through a combination of genetic operators: crossover (also called recombination), and mutation.
For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the
pool selected previously. By producing a "child" solution using the above methods of crossover and
mutation, a new solution is created which typically shares many of the characteristics of its "parents".
New parents are selected for each new child, and the process continues until a new population of
solutions of appropriate size is generated. Although reproduction methods based on two parents are more "biology inspired", some research[3][4] suggests that more than two "parents" can generate higher-quality chromosomes.
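One-point crossover and per-bit mutation are the classic operators for fixed-length bit strings; a minimal sketch (function names are illustrative):

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Recombine two fixed-length parents at a random cut point.

    The child takes the prefix of one parent and the suffix of the other,
    which is straightforward precisely because the genomes are aligned.
    """
    cut = random.randint(1, len(parent_a) - 1)
    return parent_a[:cut] + parent_b[cut:]

def bit_flip_mutation(genome, rate=0.01):
    """Flip each bit independently with a small probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]
```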
These processes ultimately result in the next generation population of chromosomes that is different
from the initial generation. Generally, the average fitness will have increased by this procedure for
the population, since only the best organisms from the first generation are selected for breeding,
along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity
within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent
generation of children.
Opinion is divided over the importance of crossover versus mutation. There are many references
in Fogel (2006) that support the importance of mutation-based search.
Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.
