Genetic Algorithm Implementation in Python
Ahmed Gad
Jul 15, 2018
This tutorial will implement the genetic algorithm optimization technique in Python
based on a simple example in which we are trying to maximize the output of an
equation. The tutorial uses the decimal representation for genes, one point
crossover, and uniform mutation.
The GitHub project for this tutorial has since been updated with major changes that support multiple features: https://github.com/ahmedfgad/GeneticAlgorithmPython. For example, multiple types of mutation and crossover are implemented, in addition to the ability to customize the fitness function to work on any type of problem. Based on the project, a library named PyGAD is published to PyPI, where you can install it using pip: https://pypi.org/project/pygad
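Installing the library from PyPI is a single command:

pip install pygad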
The original code of this tutorial is available under the Tutorial Project directory at this link: https://github.com/ahmedfgad/GeneticAlgorithmPython/tree/master/Tutorial%20Project
For example, there are different types of representations for genes such as binary, decimal, integer, and others, and each type is treated differently. There are different types of mutation such as bit flip, swap, inverse, uniform, non-uniform, Gaussian, shrink, and others. Also, crossover has different types such as blend, one point, two points, uniform, and others. This tutorial will not implement all of them; it implements just one type of each step involved in GA: the decimal representation for genes, one point crossover, and uniform mutation. The reader should have an understanding of how GA works. If not, please read the article titled "Introduction to Optimization with Genetic Algorithm" found at these links:
LinkedIn: https://www.linkedin.com/pulse/introduction-optimization-genetic-algorithm-ahmed-gad/
KDnuggets: https://www.kdnuggets.com/2018/03/introduction-optimization-with-genetic-algorithm.html
TowardsDataScience: https://towardsdatascience.com/introduction-to-optimization-with-genetic-algorithm-2f5001d9964b
SlideShare: https://www.slideshare.net/AhmedGadFCIT/introduction-to-optimization-with-genetic-algorithm-ga
Tutorial Example
The tutorial starts by presenting the equation that we are going to maximize. The equation is shown below:
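Y = w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6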
The equation has 6 inputs (x1 to x6) and 6 weights (w1 to w6) as shown, and the input values are (x1,x2,x3,x4,x5,x6)=(4,-2,7,5,11,1). We are looking for the parameters (weights) that maximize this equation. The idea of maximizing it seems simple: each positive input should be multiplied by the largest possible positive number, and each negative input by the smallest possible negative number. But what we want to implement is how to make the GA do that on its own, so that it learns that it is better to use positive weights with positive inputs and negative weights with negative inputs. Let us start implementing the GA.
At first, let us create a list of the 6 inputs and a variable to hold the number of
weights as follows:
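# Inputs of the equation, using the values listed above.
equation_inputs = [4, -2, 7, 5, 11, 1]
# Number of weights we are looking to optimize (num_weights is our name for it here).
num_weights = 6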
The next step is to define the initial population. Based on the number of weights, each chromosome (solution or individual) in the population will definitely have 6 genes, one gene for each weight. But the question is: how many solutions per population? There is no fixed value for that, and we can select the value that fits well with our problem, but we could also leave it generic so that it can be changed in the code. Next, we create a variable that holds the number of solutions per population, another to hold the size of the population, and finally a variable that holds the actual initial population:
import numpy

sol_per_pop = 8
pop_size = (sol_per_pop, num_weights)  # 8 chromosomes, each with 6 genes
new_population = numpy.random.uniform(low=-4.0, high=4.0, size=pop_size)
After importing the numpy library, we are able to create the initial population randomly using the numpy.random.uniform function (the -4 to 4 bounds above are one reasonable choice, consistent with the sample values below). According to the selected parameters, it will be of shape (8, 6): 8 chromosomes, each with 6 genes, one for each weight. After running this code, the first row of the population looks as follows:

[[-2.19134006 -2.88907857 2.02365737 -3.97346034 3.45160502 2.05773249]
 ...]

Note that it is generated randomly, so it will certainly change each time the code is run.
After preparing the population, next is to follow the flowchart in figure 1. Based on the fitness function, we are going to select the best individuals within the current population as parents for mating. Next is to apply the GA operators (crossover and mutation) to produce the offspring of the next generation, create the new population by appending both parents and offspring, and repeat these steps for a number of iterations/generations. The next code applies these steps:
import ga

num_generations = 5
num_parents_mating = 4
for generation in range(num_generations):
    print("Generation : ", generation)
    # Measuring the fitness of each chromosome in the population.
    fitness = ga.cal_pop_fitness(equation_inputs, new_population)
    # Selecting the best parents in the population for mating.
    parents = ga.select_mating_pool(new_population, fitness, num_parents_mating)
    # Generating the offspring of the next generation using crossover.
    offspring_crossover = ga.crossover(parents, offspring_size=(pop_size[0]-parents.shape[0], num_weights))
    # Adding some variations to the offspring using mutation.
    offspring_mutation = ga.mutation(offspring_crossover)
    # Creating the new population by appending both parents and offspring.
    new_population[0:parents.shape[0], :] = parents
    new_population[parents.shape[0]:, :] = offspring_mutation
The first step is to find the fitness value of each solution within the population using the ga.cal_pop_fitness function. A sketch of this function inside the GA module, directly implementing the sum of products described next, is as follows:
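def cal_pop_fitness(equation_inputs, pop):
    # Calculating the fitness value of each solution in the current population.
    # The fitness of each chromosome is the sum of products between its genes
    # (weights) and the equation inputs.
    fitness = numpy.sum(pop*equation_inputs, axis=1)
    return fitness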
The fitness function accepts the equation input values (x1 to x6) in addition to the population. The fitness value is calculated as the sum of products (SOP) between each input and its corresponding gene (weight). According to the number of solutions per population, there will be one SOP per solution. As we previously set the number of solutions to 8 in the variable named sol_per_pop, there will be 8 SOPs.
After calculating the fitness values for all solutions, next is to select the best of them as parents in the mating pool using the ga.select_mating_pool function. This function accepts the population, the fitness values, and the number of parents needed, and returns the selected parents. Its implementation inside the GA module is as follows:
def select_mating_pool(pop, fitness, num_parents):
    # Selecting the best individuals in the current generation as parents
    # for producing the offspring of the next generation.
    parents = numpy.empty((num_parents, pop.shape[1]))
    for parent_num in range(num_parents):
        max_fitness_idx = numpy.where(fitness == numpy.max(fitness))
        max_fitness_idx = max_fitness_idx[0][0]
        parents[parent_num, :] = pop[max_fitness_idx, :]
        fitness[max_fitness_idx] = -99999999999
    return parents
Looping until the required number of parents is selected, the function gets the index of the current highest fitness value, because that is the best remaining solution, and copies that solution into the parents array according to this line:

parents[parent_num, :] = pop[max_fitness_idx, :]

To avoid selecting the same solution again, its fitness value is then set to a very small value, -99999999999, so that it is not selected again. Finally, the parents array is returned. In our example, it holds the four best individuals within the current population, whose fitness values are 18.24112489, 17.0688537, 15.99527402, and 14.40299221, respectively.
The next step is to use the selected parents for mating in order to generate the offspring. The mating starts with the crossover operation using the ga.crossover function. This function accepts the parents and the offspring size; it uses the offspring size to know the number of offspring to produce from such parents. The function is implemented as follows inside the GA module:
def crossover(parents, offspring_size):
    offspring = numpy.empty(offspring_size)
    # The point at which crossover takes place between two parents; here it is at the center.
    crossover_point = numpy.uint8(offspring_size[1]/2)
    for k in range(offspring_size[0]):
        # Index of the first parent to mate.
        parent1_idx = k%parents.shape[0]
        # Index of the second parent to mate.
        parent2_idx = (k+1)%parents.shape[0]
        # The new offspring will have the first half of its genes taken from the first parent.
        offspring[k, 0:crossover_point] = parents[parent1_idx, 0:crossover_point]
        # The new offspring will have the second half of its genes taken from the second parent.
        offspring[k, crossover_point:] = parents[parent2_idx, crossover_point:]
    return offspring
The function starts by creating an empty array based on the offspring size as in this
line:
offspring = numpy.empty(offspring_size)
Because we are using single point crossover, we need to specify the point at which crossover takes place. The point is selected so that it divides the solution into two equal halves, according to this line:

crossover_point = numpy.uint8(offspring_size[1]/2)

With 6 genes per solution, the crossover point is 3, so each offspring takes its first 3 genes from one parent and its last 3 genes from the other.
Then we need to select the two parents to crossover. The indices of these parents
are selected according to these two lines:
parent1_idx = k%parents.shape[0]
parent2_idx = (k+1)%parents.shape[0]
The parents are selected in a way similar to a ring: each offspring is produced from two parents with consecutive indices. The first offspring is produced from the parents with indices 0 and 1, the next from the parents with indices 1 and 2, then from the parents with indices 2 and 3. By index 3, we have reached the last parent, so if more offspring are needed, the indices wrap around and the next offspring is produced from the parent with index 3 and the parent with index 0, and so on, as the quick check below shows.
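With 4 parents, the modulo arithmetic in the two lines above yields the pairs (0, 1), (1, 2), (2, 3), and (3, 0):

for k in range(4):
    # parent1_idx and parent2_idx for offspring k
    print(k % 4, (k + 1) % 4)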
The offspring resulting from applying the crossover operation to the parents are stored in the offspring variable.
Next is to apply the second GA operator, mutation, to the results of the crossover stored in the offspring variable, using the ga.mutation function inside the GA module. This function accepts the crossover offspring and returns them after applying uniform mutation. It is implemented as follows:
def mutation(offspring_crossover):
    # Mutation randomly changes a single gene in each offspring.
    for idx in range(offspring_crossover.shape[0]):
        # A uniformly generated random value in the range -1 to 1.
        random_value = numpy.random.uniform(-1.0, 1.0, 1)
        # Adding the random value to the gene with index 4.
        offspring_crossover[idx, 4] = offspring_crossover[idx, 4] + random_value
    return offspring_crossover
It loops through the offspring and adds a uniformly generated random number in the range from -1 to 1 according to this line:

random_value = numpy.random.uniform(-1.0, 1.0, 1)

This random number is then added to the gene with index 4 of each offspring according to this line:

offspring_crossover[idx, 4] = offspring_crossover[idx, 4] + random_value

Note that the index could be changed to mutate any other gene. The mutated offspring are stored back into the offspring_crossover variable and returned by the function.
At this point, we have successfully produced 4 offspring from the 4 selected parents, and we are ready to create the new population of the next generation from both of them:
new_population[0:parents.shape[0], :] = parents
new_population[parents.shape[0]:, :] = offspring_mutation
Calculating the fitness of all solutions (parents and offspring) of the first generation shows that the highest fitness, which was previously 18.24112489, is now 31.7328971158. That means that the random changes moved towards a better solution. This is GREAT. Such results can be enhanced further by going through more generations. Below are the results of each step for another 4 generations:
For each of generations 1 through 4, the program prints the fitness values, the selected parents, and the mutation result of that generation.
After the above 5 generations, the best result has a fitness value equal to 44.8169235189, compared to 18.24112489 for the best solution in the initial population.
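As a final step, the best solution can be retrieved by calculating the fitness one last time after all generations finish. A minimal sketch, reusing the ga.cal_pop_fitness function from above:

# Measuring the fitness of the final population.
fitness = ga.cal_pop_fitness(equation_inputs, new_population)
# Index of the solution with the highest fitness.
best_match_idx = numpy.where(fitness == numpy.max(fitness))
print("Best solution : ", new_population[best_match_idx, :])
print("Best solution fitness : ", fitness[best_match_idx])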
E-mail: [email protected]