
UNIT III GENETIC ALGORITHMS

Chromosome Encoding Schemes - Population initialization and selection methods - Evaluation function - Genetic operators - Crossover - Mutation - Fitness Function - Maximizing function

A Genetic Algorithm (GA) is a search-based optimization technique based on the principles of genetics and natural selection. It is used to find optimal or near-optimal solutions to difficult problems which would otherwise take a very long time to solve, and it is widely applied to optimization problems in research and in machine learning.
Introduction to Optimization
Optimization is the process of making something better. In any process, we have
a set of inputs and a set of outputs as shown in the following figure.

Optimization refers to finding the values of inputs in such a way that we get the
“best” output values. The definition of “best” varies from problem to problem, but
in mathematical terms, it refers to maximizing or minimizing one or more
objective functions, by varying the input parameters.
The set of all possible solutions or values which the inputs can take makes up the search space. In this search space lies a point, or a set of points, which gives the optimal solution. The aim of optimization is to find that point or set of points in the search space.

This algorithm reflects the process of natural selection where the fittest individuals
are selected for reproduction in order to produce offspring of the next generation.

The process of natural selection starts with the selection of the fittest individuals from a population. They produce offspring which inherit the characteristics of their parents and will be added to the next generation. If parents have better fitness, their offspring will tend to be better than the parents and have a better chance of surviving. This process keeps on iterating, and at the end a generation with the fittest individuals will be found.
This notion can be applied for a search problem. We consider a set of solutions
for a problem and select the set of best ones out of them.
Five phases are considered in a genetic algorithm.
1. Initial population
2. Fitness function
3. Selection
4. Crossover
5. Mutation

Common Terms used in Genetic Algorithms (GA):

1. Individual :- Any possible solution
2. Population :- Group of all individuals
3. Search Space :- All possible solutions to the problem
4. Chromosome :- Blueprint for an individual
5. Trait :- Possible aspect (feature) of an individual
6. Allele :- Possible settings of a trait (black, blond, etc.)
7. Locus :- The position of a gene on the chromosome
8. Genome :- Collection of all chromosomes for an individual

Outline of Basic Genetic Algorithm

1. [Start] Generate a random population of n chromosomes (suitable solutions for the problem).
2. [Fitness] Evaluate the fitness f(x) of each chromosome x in the population.
3. [New population] Create a new population by repeating the following steps until the new population is complete.
a. [Selection] Select two parent chromosomes from the population according to their fitness (the better the fitness, the bigger the chance of being selected).
b. [Crossover] With a crossover probability, cross over the parents to form new offspring (children). If no crossover is performed, the offspring are exact copies of the parents.
c. [Mutation] With a mutation probability, mutate the new offspring at each locus (position in the chromosome).
d. [Accepting] Place the new offspring in the new population.
4. [Replace] Use the newly generated population for a further run of the algorithm.
5. [Test] If the end condition is satisfied, stop and return the best solution in the current population.
6. [Loop] Go to step 2.

GA Steps:

Step 1 : Set population size and probabilities
Step 2 : Define fitness function
Step 3 : Generate initial population
Step 4 : Calculate fitness for each chromosome
Step 5 : Mating of chromosomes
Step 6 : Create offspring - crossover and mutation
Step 7 : Place offspring in new population
Step 8 : Repeat from Step 5 until the new population equals the initial population size
Step 9 : Replace initial population with new population
Step 10 : Go to Step 4 and repeat until the stopping criteria are achieved

Fig. : Flow diagram of the genetic algorithm process


3.1 Chromosome Encoding Schemes:

 Chromosome: All living organisms consist of cells. In each cell, there is the same set of chromosomes. Chromosomes are strings of DNA and consist of genes, blocks of DNA. Each gene encodes a trait, for example, the color of the eyes.

Basic principles:
 An individual is characterized by a set of parameters: Genes
 The genes are joined into a string: Chromosome
 The chromosome forms the genotype
 The genotype contains all information to construct an organism: Phenotype
 Reproduction is a “dumb” process on the chromosome of the genotype
 Fitness is measured in the real world (‘Struggle for life’) of the phenotype.

Simple_Genetic_Algorithm()
{
    Initialize the population;
    Calculate Fitness Function;

    while (Fitness Value != Optimal Value)
    {
        Selection;   // Natural selection, survival of the fittest
        Crossover;   // Reproduction, propagate favorable characteristics
        Mutation;
        Calculate Fitness Function;
    }
}

Encoding of chromosomes is the first step in solving a problem with a genetic algorithm, and the choice of encoding depends heavily on the problem.

 Encoding is the process of representing the solution in the form of a string that conveys the necessary information. It is a process of representing individual genes. The process can be performed using bits, numbers, trees, arrays or any other objects.

 Just as in a chromosome, each gene controls a particular characteristic of the individual; similarly, each bit in the string represents a characteristic of the solution.

 When choosing an encoding method, rely on the following key ideas:


1. Use a data structure as close as possible to the natural representation.
2. Write appropriate genetic operators as needed.
3. If possible, ensure that all genotypes correspond to feasible solutions.
4. If possible, ensure that genetic operators preserve feasibility.

Binary Encoding

 Binary encoding is the most common method of encoding. Chromosomes are strings of 1s and 0s, and each position in the chromosome represents a particular characteristic of the problem. The length of the string is usually determined according to the desired solution accuracy.

Chromosome A 110100011010
Chromosome B 011111111100
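As a rough illustration, encoding and decoding such a bit-string chromosome might look as in the minimal sketch below; the 12-bit length is an arbitrary assumption chosen only to match the example chromosomes above.

# Minimal sketch of binary encoding/decoding; the 12-bit length is an assumption.
import random

CHROMOSOME_LENGTH = 12

def random_chromosome():
    # A chromosome is simply a list of 0s and 1s.
    return [random.randint(0, 1) for _ in range(CHROMOSOME_LENGTH)]

def decode(chromosome):
    # Interpret the bit string as an unsigned integer.
    value = 0
    for bit in chromosome:
        value = value * 2 + bit
    return value

# Chromosome A above (110100011010) decodes to 3354.
print(decode([1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0]))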

 Octal encoding : This encoding uses strings made up of octal numbers 0 to 7.


Chromosome A 03467216
Chromosome B 15723314

 Hexadecimal encoding : This encoding uses strings made up of hexadecimal numbers 0 to 9 and A to F.
Chromosome A 9CE7
Chromosome B 3DBA

Permutation Encoding
 Useful in ordering problems such as the Traveling Salesman Problem (TSP).
Example : In TSP, every chromosome is a string of numbers, each of which
represents a city to be visited.

 Permutation encoding is used in scheduling and ordering problems. Every chromosome is a string of numbers that represents an ordering (a sequence). This kind of encoding usually requires corrections after the crossover and mutation operations, because these transformations might create an illegal sequence.

Chromosome A 1 5 3 2 6 4 7 9 8
Chromosome B 8 5 6 7 2 3 1 4 9

 Example of problem : Traveling Salesman Problem (TSP).

 The problem : There are cities with given distances between them. The traveling salesman has to visit all of them, but should not travel more than necessary. Find a sequence of cities that minimizes the distance traveled.

Encoding : The chromosome gives the order of the cities, in which the salesman will visit them.
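A small sketch of permutation encoding for a TSP-like problem is given below; the 5-city distance matrix is invented purely for illustration.

# Minimal sketch of permutation encoding for a TSP-like problem.
# The 5-city distance matrix is an invented example for illustration only.
import random

DISTANCES = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def random_tour(n_cities):
    # A chromosome is a permutation of city indices.
    tour = list(range(n_cities))
    random.shuffle(tour)
    return tour

def tour_length(tour):
    # Total distance of visiting the cities in the given order and returning home.
    total = 0
    for i in range(len(tour)):
        total += DISTANCES[tour[i]][tour[(i + 1) % len(tour)]]
    return total

tour = random_tour(5)
print(tour, tour_length(tour))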

Value Encoding

 Value encoding is used in problems where complicated values, such as real numbers, are needed and where binary encoding would not be sufficient. It is good for some problems, but it is often necessary to develop specific crossover and mutation techniques for these chromosomes.

 Example of problem : Finding weights for a neural network.

 The problem : There is a neural network with a given architecture. Find the weights for the inputs of the neurons so that the network produces the desired output.

 Encoding : Real values in chromosomes represent corresponding weights for inputs.

Chromosome A 1.2324 5.3243 0.4556 2.3293 2.4545


Chromosome B ABDJEIFJDHDIERJFDLDFLFEGT
Chromosome C (back), (back), (right), (forward), (left)
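The following sketch illustrates value encoding with operators adapted to real numbers; the blend crossover and Gaussian mutation used here are common choices assumed for the example, since the notes only state that specific techniques are often needed.

# Minimal sketch of value (real-number) encoding with adapted operators.
# Blend crossover and Gaussian mutation are assumed choices for illustration.
import random

def random_weights(n):
    # A chromosome is a list of real-valued weights, e.g. for a neural network.
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def blend_crossover(parent_a, parent_b):
    # Each child gene is a random mix of the corresponding parent genes.
    return [a + random.random() * (b - a) for a, b in zip(parent_a, parent_b)]

def gaussian_mutation(chromosome, rate=0.1, sigma=0.05):
    # Each gene is perturbed by Gaussian noise with a small probability.
    return [g + random.gauss(0, sigma) if random.random() < rate else g
            for g in chromosome]

parent_a, parent_b = random_weights(5), random_weights(5)
child = gaussian_mutation(blend_crossover(parent_a, parent_b))
print(child)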

Tree Encoding

 In tree encoding every chromosome is a tree of some objects, such as functions or commands in a programming language. It is used in genetic programming.

 Example of problem : Finding a function from given values.

 The problem : Some input and output values are given. The task is to find a function which gives outputs closest to the desired values for all inputs.

 Encoding : The chromosome is a function represented as a tree.


3.2. Population Initialization and Selection Method:

The process begins with a set of individuals which is called a Population. Each
individual is a solution to the problem you want to solve.

An individual is characterized by a set of parameters (variables) known as Genes.


Genes are joined into a string to form a Chromosome (solution).

In a genetic algorithm, the set of genes of an individual is represented using a string, in terms of an alphabet. Usually, binary values are used (a string of 1s and 0s). We say that we encode the genes in a chromosome.

Population is a subset of solutions in the current generation. It can also be defined as a set of chromosomes.
There are several things to be kept in mind when dealing with GA population −
 The diversity of the population should be maintained otherwise it might lead
to premature convergence.
 The population size should not be kept very large as it can cause a GA to
slow down, while a smaller population might not be enough for a good
mating pool. Therefore, an optimal population size needs to be decided by
trial and error.

The population is usually defined as a two-dimensional array of size population size × chromosome size.
Population Initialization
There are two primary methods to initialize a population in a GA. They are −
 Random Initialization − Populate the initial population with completely
random solutions.
 Heuristic initialization − Populate the initial population using a known
heuristic for the problem.
It has been observed that the entire population should not be initialized using a
heuristic, as it can result in the population having similar solutions and very little
diversity. It has been experimentally observed that the random solutions are the
ones to drive the population to optimality. Therefore, with heuristic initialization,
we just seed the population with a couple of good solutions, filling up the rest with
random solutions rather than filling the entire population with heuristic based
solutions.
It has also been observed that heuristic initialization, in some cases, only affects the initial fitness of the population; in the end, it is the diversity of the solutions which leads to optimality.
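A small sketch of this initialization strategy follows; 'heuristic_solution' stands in for whatever problem-specific heuristic is available, and the seed count of 2 is an arbitrary choice.

# Minimal sketch of population initialization: a couple of heuristic seeds,
# the rest random. 'heuristic_solution' is a placeholder for a problem heuristic.
import random

def random_solution(chromosome_size):
    return [random.randint(0, 1) for _ in range(chromosome_size)]

def initialize_population(pop_size, chromosome_size, heuristic_solution=None,
                          n_seeds=2):
    population = []
    if heuristic_solution is not None:
        # Seed only a few individuals with the heuristic to preserve diversity.
        population.extend(heuristic_solution(chromosome_size)
                          for _ in range(min(n_seeds, pop_size)))
    while len(population) < pop_size:
        population.append(random_solution(chromosome_size))
    return population

population = initialize_population(pop_size=20, chromosome_size=12)
print(len(population))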
Population Models
There are two population models widely in use −
Steady State - In a steady state GA, we generate one or two offspring in each iteration and they replace one or two individuals from the population. A steady state GA is also known as an Incremental GA.

Generational - In a generational model, we generate 'n' offspring, where n is the population size, and the entire population is replaced by the new one at the end of the iteration.

3.3. Selection method in GA

Selection is the stage of a genetic algorithm in which individual genomes are chosen from a population for later breeding (e.g., using the crossover operator). Selection mechanisms are also used to choose candidate solutions (individuals) for the next generation.
Retaining the best individuals of a generation unchanged in the next generation is called elitism or elitist selection. It is a successful (slight) variant of the general process of constructing a new population.
A selection procedure for breeding used early on may be implemented as follows:

1. The fitness values that have been computed (fitness function) are
normalized, such that the sum of all resulting fitness values equals 1.
2. Accumulated normalized fitness values are computed.
Accumulated fitness value of an individual = its own fitness value + the
fitness values of all the previous individuals.

The accumulated fitness of the last individual should be 1, otherwise something went wrong in the normalization step.

3. A random number R between 0 and 1 is chosen.


4. The selected individual is the first one whose accumulated normalized value
is greater than or equal to R.

If this procedure is repeated until there are enough selected individuals, this
selection method is called fitness proportionate selection or roulette-wheel
selection.
If instead of a single pointer spun multiple times, there are multiple, equally
spaced pointers on a wheel that is spun once, it is called stochastic universal
sampling.
Repeatedly selecting the best individual of a randomly chosen subset is tournament
selection.
Taking the best half, third or another proportion of the individuals is truncation
selection.
Roulette Wheel Selection
In roulette wheel selection, the probability of choosing an individual for breeding of the next generation is proportional to its fitness: the better the fitness, the higher the chance for that individual to be chosen. Choosing individuals can be depicted as spinning a roulette wheel that has as many pockets as there are individuals in the current generation, with sizes depending on their probability. The probability of choosing individual i is equal to

p_i = f_i / (f_1 + f_2 + ... + f_N)

where f_i is the fitness of i and N is the size of the current generation.
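A minimal sketch of this selection rule, following the normalize-accumulate-spin procedure described earlier, is given below; the population labels and fitness values in the example call are arbitrary.

# Minimal sketch of roulette-wheel (fitness proportionate) selection,
# following the normalize/accumulate/spin procedure described above.
import random

def roulette_wheel_select(population, fitnesses):
    total = sum(fitnesses)
    r = random.random()                   # random number R between 0 and 1
    cumulative = 0.0
    for individual, fitness in zip(population, fitnesses):
        cumulative += fitness / total     # accumulated normalized fitness
        if cumulative >= r:
            return individual
    return population[-1]                 # guard against rounding error

population = ["A", "B", "C", "D"]
fitnesses = [10, 42, 7, 25]
print(roulette_wheel_select(population, fitnesses))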


Rank Selection
In rank selection, the selection probability does not depend directly on the fitness,
but on the fitness rank of an individual within the population. This puts large
fitness differences into perspective; moreover, the exact fitness values themselves
do not have to be available, but only a sorting of the individuals according to
quality.
Linear ranking, which goes back to Baker, is often used. It allows the selection pressure to be set by the parameter sp, which can take values between 1.0 (no selection pressure) and 2.0 (high selection pressure). The probability P for rank positions Ri is obtained as follows:
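A commonly used form of Baker's linear ranking (stated here as an assumption, since formulations vary) gives the individual at rank Ri, with rank 1 being the best, the probability P(Ri) = (1/N) * (sp - (2*sp - 2) * (Ri - 1) / (N - 1)), where N is the population size. A small sketch:

# Linear ranking selection using the assumed Baker-style formula above.
# sp is the selection pressure, between 1.0 (none) and 2.0 (high).
import random

def linear_rank_probabilities(n, sp=1.5):
    # Rank 1 is the best individual; the probabilities sum to 1.
    return [(sp - (2.0 * sp - 2.0) * (rank - 1) / (n - 1)) / n
            for rank in range(1, n + 1)]

def rank_select(population, fitnesses, sp=1.5):
    # Sort individuals from best to worst, then sample by rank probability.
    ranked = [ind for _, ind in sorted(zip(fitnesses, population),
                                       key=lambda pair: pair[0], reverse=True)]
    probs = linear_rank_probabilities(len(ranked), sp)
    return random.choices(ranked, weights=probs, k=1)[0]

print(linear_rank_probabilities(4, sp=1.5))   # approx. [0.375, 0.2917, 0.2083, 0.125]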

Steady State Selection


In every generation a few good chromosomes (with high fitness) are selected for creating new offspring. Then some bad chromosomes (with low fitness) are removed and the new offspring are placed in their place. The rest of the population survives to the new generation.
Tournament Selection
Tournament selection chooses an individual by picking a small subset of individuals at random from the population and taking the best of them. The winner of each tournament is selected to perform crossover.
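A minimal sketch; the tournament size k = 3 and the example fitness values are arbitrary assumptions.

# Minimal sketch of tournament selection: the best of a random subset wins.
import random

def tournament_select(population, fitnesses, k=3):
    contenders = random.sample(range(len(population)), k)
    winner = max(contenders, key=lambda i: fitnesses[i])
    return population[winner]

population = ["A", "B", "C", "D", "E"]
fitnesses = [10, 42, 7, 25, 18]
print(tournament_select(population, fitnesses))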
Elitist Selection
Often to get better results, strategies with partial reproduction are used. One of
them is elitism, in which a small portion of the best individuals from the last
generation is carried over (without any changes) to the next one.

3.4 Genetic operators:

Generation of successors is determined by a set of operators that recombine and mutate selected members of the current population. Operators correspond to idealized versions of the genetic operations found in biological evolution.

 A genetic operator is an operator for generating new genes based on the fitness of each individual.

Selection Operator

 Key idea : give preference to better individuals, allowing them to pass on their
genes to the next generation.
 It determines which parents participate in producing offspring for the next
generation. The goodness of each individual depends on its fitness.
 Fitness may be determined by an objective function or by a subjective
judgement.
 Selection replicates the most successful solutions found in a population at a rate
proportional to their relative quality.
 A common selection method in genetic algorithm is fitness proportionate
selection, in which the number of times an individual is expected to reproduce is
equal to its fitness divided by the average of fitness in the population.
 Common selection methods used in GAs are
a. Fitness Proportionate Selection
b. Rank Selection
c. Tournament Selection

Crossover Operator

 A binary variation operator is called recombination or crossover. A method of mixing good solutions to produce better ones is called crossover.
 Crossover means choosing a random position in the string (say, after 2 digits)
and exchanging the segments either to the right or to the left of this point with
another string partitioned similarly to produce two new off spring.
 Crossover produces two new offspring from two parent strings by copying selected bits from each parent. The bit at position "i" in each offspring is copied from the bit at position "i" in one of the two parents. Which parent contributes the bit at position "i" is determined by an additional string, called the crossover mask.
 In the crossover operator, new strings are created by exchanging information
among strings of the mating pool.
 The primary objective of the recombination operator is to emphasize the good
solutions and eliminate the bad solutions in a population, while keeping the
population size constant.
 “Selects The Best, Discards The Rest”. “Recombination” is different from
“Reproduction”.
 Identify the good solutions in a population. Make multiple copies of the good
solutions. Eliminate bad solutions from the population so that multiple copies of
good solutions can be placed in the population. It is the process in which two
chromosomes (strings) combine their genetic material (bits) to produce a new
offspring which possesses both their characteristics. Two strings are picked from
the mating pool at random to cross over. The method chosen depends on the
Encoding Method.
 Crossover can be rather complicated and depends very much on the encoding of the chromosome. A crossover operator designed for a specific problem can improve the performance of the genetic algorithm.

Steps of crossover :

1. The reproduction operator selects at random a pair of two individual strings for
the mating.
2. A cross site is selected at random along the string length.
3. Finally, the position values are swapped between the two strings following the
cross site.
Crossover Types

1. Single-point crossover,
2. Two-point crossover,
3. Uniform crossover

1. One point crossover

 One point crossover is the most basic crossover operator. A crossover point on the genetic code is selected at random and the two parent chromosomes are interchanged at this point. The binary string from the beginning of the chromosome to the crossover point is copied from one parent, and the rest is copied from the second parent.

2. Two Point Crossover

 Two-point crossover is very similar to single-point crossover except that two cut-points are randomly generated instead of one.
 Two point crossover : Two crossover points are selected and the part of the chromosome string between these two points is then swapped to generate two children. The binary string from the beginning of the chromosome to the first crossover point is copied from one parent, the part from the first to the second crossover point is copied from the second parent, and the rest is copied from the first parent.
3. Uniform Crossover
 In uniform crossover, each gene position is handled independently: with a probability pc, the value of the first parent's gene is assigned to the first offspring and the value of the second parent's gene to the second offspring.
 Otherwise (with probability 1 - pc), the value of the first parent's gene is assigned to the second offspring and the value of the second parent's gene is assigned to the first offspring.
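Rough sketches of the three crossover types on bit-string chromosomes follow; the 0.5 per-gene swap probability in the uniform crossover is an arbitrary assumption.

# Minimal sketches of single-point, two-point and uniform crossover.
# The 0.5 per-gene swap probability in uniform_crossover is an assumption.
import random

def single_point_crossover(p1, p2):
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def two_point_crossover(p1, p2):
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])

def uniform_crossover(p1, p2, swap_prob=0.5):
    c1, c2 = list(p1), list(p2)
    for i in range(len(p1)):
        if random.random() < swap_prob:
            c1[i], c2[i] = c2[i], c1[i]
    return c1, c2

parent_a = [0, 1, 1, 0, 1, 1]
parent_b = [1, 0, 1, 1, 0, 0]
print(single_point_crossover(parent_a, parent_b))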

 Crossover between 2 good solutions MAY NOT ALWAYS yield a better or equally good solution. Since the parents are good, the probability of the child being good is high. If an offspring is not good (a poor solution), it will be removed in the next iteration during "Selection".

 One site crossover is more suitable when string length is small while two site
crossover is suitable for large strings.

 Crossover example :
Parent A 011011
Parent B 101100

 Mate the parents by splitting each number as shown between the second and
third digits (position is randomly selected)
01*1011 10*1100
 Now combine the first digits of A with the last digits of B and the first digits of
B with the last digits of A. This gives you two new offspring.
011100
101011
 If these new solutions, or offspring, are better than the parent solutions, the system will keep them as more optimal solutions and they will become parents. This is repeated until some condition (for example, the number of generations or the improvement of the best solution) is satisfied.

Matrix Crossover

 Some real problems are naturally suitable for a two-dimensional representation. If this kind of problem is to be solved by genetic algorithms, then each possible solution can very conveniently and naturally be represented as a two-dimensional table.
 Two-dimensional substring crossover generates two offspring chromosomes by
choosing only one of the two crossover strategies (horizontal or vertical).

 Alternatively, the two-dimensional crossover operator can be easily modified to generate four offspring chromosomes from a pair of parents by executing the horizontal and the vertical crossovers at the same time.

 The new offspring chromosomes that result from executing the crossover
operation may become infeasible for some application problems.

Fig. : Horizontal substring crossover

Fig. : Vertical substring crossover

Effects of Genetic Operators

 Using selection alone will tend to fill the population with copies of the best
individual from the population
 Using selection and crossover operators will tend to cause the algorithms to
converge on a good but sub-optimal solution
 Using mutation alone induces a random walk through the search space.
 Using selection and mutation creates a parallel, noise-tolerant, hill climbing
algorithm
 Only crossover can combine information from two parents
 Only mutation can introduce new information (alleles)
 Crossover does not change the allele frequencies of the population.

3.5. Fitness Function

 Fitness is an important concept in genetic algorithms. The fitness of a chromosome determines how likely it is that it will reproduce. Fitness is usually measured in terms of how well the chromosome solves some goal problem.
 Fitness can also be subjective (aesthetic). E.g., if the genetic algorithm is to be
used to sort numbers, then the fitness of a chromosome will be determined by how
close to a correct sorting it produces.

 A fitness function quantifies the optimality of a solution (chromosome) so that a particular solution may be ranked against all the other solutions. A fitness value is assigned to each solution depending on how close it actually is to solving the problem.
 An ideal fitness function correlates closely with the goal and is quickly computable.

 Example : In TSP, f(x) is sum of distances between the cities in solution. The
lesser the value, the fitter the solution is.

 The performance of the individual strings is measured by a fitness function. A fitness function is a problem-specific, user-defined heuristic. After each iteration, the members are given a performance measure derived from the fitness function, and the "fittest" members of the population propagate to the next iteration.

The fitness function is defined over the genetic representation and measures the
quality of the represented solution. The fitness function is always problem
dependent.

 For instance, in the knapsack problem we want to maximize the total value of
objects that we can put in a knapsack of some fixed capacity. A representation of a
solution might be an array of bits, where each bit represents a different object and
the value of the bit (0 or 1) represents whether or not the object is in the knapsack.
 Not every such representation is valid, as the size of objects may exceed the
capacity of the knapsack. The fitness of the solution is the sum of values of all
objects in the knapsack if the representation is valid or 0 otherwise. In some
problems, it is hard or even impossible to define the fitness expression; in these
cases, interactive genetic algorithms are used.
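A minimal sketch of such a knapsack fitness function is given below; the item values, weights and capacity are invented for illustration.

# Minimal sketch of the knapsack fitness described above.
# The item values, weights and capacity are invented example data.
VALUES = [60, 100, 120, 30]
WEIGHTS = [10, 20, 30, 5]
CAPACITY = 50

def knapsack_fitness(chromosome):
    # chromosome[i] == 1 means object i is placed in the knapsack.
    total_weight = sum(w for w, bit in zip(WEIGHTS, chromosome) if bit)
    if total_weight > CAPACITY:
        return 0                      # invalid representation gets fitness 0
    return sum(v for v, bit in zip(VALUES, chromosome) if bit)

print(knapsack_fitness([1, 1, 0, 1]))   # weight 35 <= 50, value 190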
3.6. Maximizing Function

Consider the problem of maximizing the function f(x) = x^2, where x is permitted to vary between 0 and 31.

The steps involved in solving this problem are as follows:

Step 1: To use the genetic algorithm approach, one must first code the decision variable 'x' into a finite-length string. Using a five-bit unsigned binary integer, numbers between 0 (00000) and 31 (11111) can be represented. The objective function here is f(x) = x^2, which is to be maximized. A single generation of a genetic algorithm is performed here with encoding, selection, crossover and mutation. To start with, select an initial population at random. Here an initial population of size 4 is chosen, but any population size can be selected based on the requirement and application. Table 1 shows a randomly selected initial population.

Table 1. Selection

String No.  Initial population  x value  Fitness f(x) = x^2  Prob. of selection  Expected count  Actual count
1           0 1 1 0 0           12       144                 0.1247              0.4987          1
2           1 1 0 0 1           25       625                 0.5411              2.1645          2
3           1 0 0 1 1           19       361                 0.3126              1.2502          1
4           0 0 1 0 1           5        25                  0.0216              0.0866          0
Sum                                      1155                1.0000              4.0000          4
Average                                  288.75              0.2500              1.0000          1

Step 2: Obtain the decoded x values for the initial population generated. Consider string 1 (0 1 1 0 0), which decodes to x = 12; in the same way the decoded values of all four strings are obtained.

Step 3: Calculate the fitness or objective function. This is obtained by simply squaring the 'x' value, since the given function is f(x) = x^2. When x = 12, the fitness value is f(x) = 144; for x = 25, f(x) = 625; and so on, until the fitness of the entire population is computed.

Step 4: Compute the probability of selection,

Pi = f(xi) / Σf(x)

where n = the number of individuals in the population, f(xi) = the fitness value corresponding to a particular individual in the population, and Σf(x) = the summation of the fitness values of the entire population (over all n individuals).
Considering string 1, Fitness f(x) = 144
Σf(x) = 1155
The probability that string 1 occurs is given by, P1 = 144/1155 = 0.1247
The percentage probability is obtained as, 0.1247∗100 = 12.47%

The same operation is done for all the strings. It should be noted that the sum of the probabilities of selection is 1.

Step 5: The next step is to calculate the expected count, which is calculated as

Expected count = Fitness f(x) / Average fitness of the population

For string 1, Expected count = Fitness/Average = 144/288.75 = 0.4987

Computing the expected count for the entire population. The expected count gives
an idea of which population can be selected for further processing in the mating
pool.

Step 6: Now the actual count is to be obtained to select the individuals that would participate in the crossover cycle, using Roulette wheel selection. The Roulette wheel is formed as shown in Fig. 3.15. The Roulette wheel covers 100%, and the probabilities of selection calculated in Step 4 for the entire population are used to size the slots of the wheel. Now the wheel may be spun and the number of occurrences of each string is noted to get the actual count. String 1 occupies 12.47%, so there is a chance for it to occur at least once; hence its actual count may be 1. With string 2 occupying 54.11% of the Roulette wheel, it has a fair chance of being selected twice; thus its actual count can be considered as 2.

On the other hand, string 4 has the least probability percentage, 2.16%, so its chance of occurring in the next cycle is very poor. As a result, its actual count is 0. String 3, with 31.26%, has at least one chance of occurring when the Roulette wheel is spun, thus its actual count is 1. The above values of actual count are tabulated as shown in Table 1.

Fig. 15 : Roulette Wheel Selection
Step 7: Now, the mating pool is written based upon the actual count, as shown in Table 2. The actual count of string no. 1 is 1, hence it occurs once in the mating pool. The actual count of string no. 2 is 2, hence it occurs twice in the mating pool. The actual count of string no. 3 is 1, so it also occurs once in the mating pool. Since the actual count of string no. 4 is 0, it does not occur in the mating pool. Based on this, the mating pool is formed.

Step 8: The crossover operation is performed to produce new offspring (children). The crossover point is specified, and based on the crossover point, single-point crossover is performed and new offspring are produced.
The parents are:
Parent 1 0 1 1 0 0
Parent 2 1 1 0 0 1
The offspring are produced as:
Offspring 1 0 1 1 0 1
Offspring 2 1 1 0 0 0

In a similar manner, crossover is performed for the next strings.

Step 9: After the crossover operation, new offspring are produced, the 'x' values are decoded and the fitness is calculated.
Step 10: In this step, the mutation operation is performed to produce new offspring after the crossover operation. As discussed under mutation, a flipping operation is performed and new offspring are produced.

Table 3 shows the new offspring after mutation. Once the offspring are obtained after mutation, they are decoded to x values and their fitness values are computed. This completes one generation. The mutation is performed on a bit-by-bit basis. The crossover probability and mutation probability were assumed to be 1.0 and 0.001 respectively. Once selection, crossover and mutation are performed, the new population is ready to be tested. This is done by decoding the new strings created by the simple genetic algorithm after mutation and calculating the fitness function values from the x values thus decoded.

The results for successive cycles of simulation are shown in Tables 1–3. From the tables, it can be observed how genetic algorithms combine high-performance notions to achieve better performance. In the tables, it can be noted how the maximal and average performance has improved in the new population.
The population average fitness has improved from 288.75 to 636.5 in one
generation.

The maximum fitness has increased from 625 to 841 during the same period. Although random processes produce this best solution, successive improvement can also be seen.

The best string of the initial population (1 1 0 0 1) receives two chances for its existence because of its high, above-average performance. When this combines at random with the next highest string (1 0 0 1 1) and is crossed at crossover point 2 (as shown in Table 3.2), one of the resulting strings (1 1 0 1 1) proves to be a very good solution indeed.

Thus after mutation at random, a new offspring (1 1 1 0 1) is produced which is an excellent choice. This example has shown one generation of a simple genetic algorithm.
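Putting the single generation traced above together, a compact end-to-end sketch of the f(x) = x^2 maximization might look as follows. It follows the same scheme as these notes (5-bit strings for x in 0..31, roulette-wheel selection, single-point crossover, bit-flip mutation, crossover probability 1.0 and mutation probability 0.001); the population size of 4 and the number of generations are arbitrary choices, and since the process is random it will not reproduce the exact tables above.

# Minimal end-to-end sketch of the x^2 maximization example.
# 5-bit chromosomes encode x in [0, 31]; roulette-wheel selection,
# single-point crossover and bit-flip mutation are used as in the notes.
# Population size and generation count are arbitrary choices.
import random

BITS, POP_SIZE, GENERATIONS = 5, 4, 20
P_CROSSOVER, P_MUTATION = 1.0, 0.001

def decode(bits):
    return int("".join(map(str, bits)), 2)

def fitness(bits):
    return decode(bits) ** 2

def select(population):
    # Roulette-wheel (fitness proportionate) selection.
    weights = [fitness(ind) for ind in population]
    if sum(weights) == 0:
        return random.choice(population)
    return random.choices(population, weights=weights, k=1)[0]

def crossover(p1, p2):
    if random.random() < P_CROSSOVER:
        point = random.randint(1, BITS - 1)
        return p1[:point] + p2[point:], p2[:point] + p1[point:]
    return p1[:], p2[:]

def mutate(bits):
    return [1 - b if random.random() < P_MUTATION else b for b in bits]

population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    new_population = []
    while len(new_population) < POP_SIZE:
        child1, child2 = crossover(select(population), select(population))
        new_population += [mutate(child1), mutate(child2)]
    population = new_population[:POP_SIZE]

best = max(population, key=fitness)
print(decode(best), fitness(best))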
