
University of Mosul

College of Engineering
Electrical Engineering Department

Genetic Algorithms
1. INTRODUCTION
Genetic algorithms are a type of optimization algorithm, meaning they are
used to find the optimal solution(s) to a given computational problem that
maximizes or minimizes a particular function. Genetic algorithms
represent one branch of the field of study called evolutionary computation
[1], in that they imitate the biological processes of reproduction and
natural selection to solve for the ‘fittest’ solutions [2]. As in evolution,
many of a genetic algorithm’s processes are random; however, this
optimization technique allows one to set the level of randomization and
the level of control [2]. These algorithms are far more powerful and
efficient than random search and exhaustive search algorithms [1], yet
require no extra information about the given problem. This feature allows
them to find solutions to problems that other optimization methods
cannot handle due to a lack of continuity, derivatives, linearity, or other
features.

Fig.(1): Classes of search techniques


2. Basic idea behind GAs
GAs begin with a set of candidate solutions (chromosomes) called a
population. A new population is created from the solutions of an old
population in the hope of obtaining a better one. The solutions chosen to
form new solutions (offspring) are selected according to their fitness: the
more suitable the solutions are, the greater their chances to reproduce.
This process is repeated until some stopping condition is satisfied [3].

3. Genetic Algorithms Techniques


GAs are used when the objective function:

a. Is discontinuous
b. Is highly nonlinear
c. Is stochastic
d. Has unreliable or undefined derivatives

4. Components of a GA

In order to solve a problem, we have to define the following:

1. Encoding technique (gene, chromosome)

2. Initialization procedure (creation)

3. Evaluation function (environment)

4. Selection of parents (reproduction)

5. Genetic operators (mutation, recombination)

6. Parameter settings (practice and art)


4.1. Representation (encoding)
Possible individual encodings (see the bit-string sketch after this list):

1. Bit strings (0101 ... 1100)

2. Real numbers (43.2 -33.1 ... 0.0 89.2)

3. Permutations of elements (E11 E3 E7 ... E1 E15)

4. Lists of rules (R1 R2 R3 ... R22 R23)

5. Program elements (genetic programming)

6. Any data structure
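As a small illustration of the first option (a sketch only; the 4-bit length anticipates the numerical example in Section 10, and the helper names are ours, not from the text):

def decode(chromosome):
    """Interpret a list of bits (most significant bit first) as an unsigned integer."""
    value = 0
    for bit in chromosome:
        value = (value << 1) | bit
    return value

def encode(x, n_bits=4):
    """Encode an integer x as a list of n_bits bits (most significant bit first)."""
    return [(x >> i) & 1 for i in range(n_bits - 1, -1, -1)]

print(decode([0, 1, 1, 1]))   # -> 7
print(encode(7))              # -> [0, 1, 1, 1]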

4.2. Initialization
Initially, many individual solutions are (usually) randomly generated to
form an initial population. The population size depends on the nature of
the problem, but it typically contains several hundred or several thousand
possible solutions. Traditionally, the population is generated randomly,
covering the entire range of possible solutions (the search space).
Occasionally, the solutions may be "seeded" in areas where optimal
solutions are likely to be found.
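A minimal sketch of random initialization for the bit-string representation of Section 4.1 (the population size and chromosome length below are illustrative values only):

import random

def init_population(pop_size, n_bits):
    """Create pop_size random bit-string chromosomes, each of length n_bits."""
    return [[random.randint(0, 1) for _ in range(n_bits)]
            for _ in range(pop_size)]

population = init_population(pop_size=6, n_bits=4)
print(population)   # e.g. [[1, 0, 1, 0], [0, 1, 1, 1], ...]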

4.3. Selection

During each successive generation, a proportion of the existing population
is selected to breed a new generation. Individual solutions are selected
through a fitness-based process, where the fittest solutions (as measured by
a fitness function) are typically more likely to be selected. Certain selection
methods rate the fitness of every solution and preferentially select the best
solutions. Other methods rate only a random sample of the population, as
the former process may be very time-consuming.
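A sketch of one common fitness-based scheme, roulette wheel (fitness-proportionate) selection, which is described in more detail in Section 10; it assumes non-negative fitness values:

import random

def roulette_select(population, fitnesses):
    """Pick one chromosome with probability proportional to its fitness."""
    total = sum(fitnesses)
    spin = random.uniform(0, total)      # where the 'ball' lands on the wheel
    cumulative = 0.0
    for chromosome, fit in zip(population, fitnesses):
        cumulative += fit
        if spin <= cumulative:
            return chromosome
    return population[-1]                # guard against floating-point round-off

# Example: the fitter chromosome (x = 7, f = 56) is selected most often.
population = [[0, 0, 1, 1], [0, 1, 1, 1], [1, 1, 1, 0]]
fitnesses = [36, 56, 14]                 # f(x) = 15x - x^2 for x = 3, 7, 14
print(roulette_select(population, fitnesses))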
4.4. Reproduction
The next step is to generate a second generation population of solutions
from those selected through genetic operators: crossover (also called
recombination), and/or mutation. For each new solution to be produced, a
pair of "parent" solutions is selected for breeding from the pool selected
previously. By producing a "child" solution using the above methods of
crossover and mutation, a new solution is created which typically shares
many of the characteristics of its "parents". New parents are selected for
each new child and the process continues until a new population of
solutions of appropriate size is generated. Although reproduction methods
that are based on the use of two parents are more "biology inspired",
some research suggests that more than two "parents" generate higher
quality chromosomes. These processes ultimately result in the next
generation population of chromosomes that is different from the initial
generation. Generally, the average fitness of the population will have
increased by this procedure, since only the best organisms from the first
generation are selected for breeding, along with a small proportion of less
fit solutions that help preserve genetic diversity. Although
Crossover and Mutation are known as the main genetic operators, it is
possible to use other operators such as regrouping, colonization-
extinction, or migration in genetic algorithms.

5. GA operators

5.1. Fitness Function


A genetic algorithm requires a fitness function that assigns a score to each
chromosome in the current population. This score indicates how well the
solution is coded and how well it solves the problem.
• The fitness function is defined over the genetic representation and
measures the quality of the represented solution.

• The fitness function is always problem dependent.

• In the knapsack problem, for example, a representation of a solution
might be an array of bits, where each bit represents a different object and
the value of the bit (0 or 1) represents whether or not the object is in the
knapsack.

• Not every such representation is valid, as the size of the objects may
exceed the capacity of the knapsack.

• The fitness of the solution is the sum of the values of all objects in the
knapsack if the representation is valid, or 0 otherwise (a short code
sketch is given below). In some problems, it is hard or even impossible
to define the fitness expression; in these cases, interactive genetic
algorithms are used.

Fig.(2): Interactive genetic algorithms
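A sketch of the knapsack fitness just described; the item values, weights, and capacity are made-up illustrative numbers, not data from the text:

VALUES = [60, 100, 120, 30]     # hypothetical object values
WEIGHTS = [10, 20, 30, 15]      # hypothetical object weights
CAPACITY = 50                   # hypothetical knapsack capacity

def knapsack_fitness(chromosome):
    """Sum of the values of the selected objects if their weight fits, 0 otherwise."""
    total_value = sum(v for v, bit in zip(VALUES, chromosome) if bit)
    total_weight = sum(w for w, bit in zip(WEIGHTS, chromosome) if bit)
    return total_value if total_weight <= CAPACITY else 0

print(knapsack_fitness([1, 1, 0, 1]))   # weight 45 <= 50, so fitness = 190
print(knapsack_fitness([1, 1, 1, 0]))   # weight 60 > 50, invalid, so fitness = 0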


5.2. Crossover

1. Crossover enables the evolutionary process to move towards promising
regions of the search space.

2. Choose a random point on the two parents.

3. Split the parents at this crossover point.

4. Create children by exchanging tails.

Several possible crossover strategies:

• Single-point crossover: randomly select a single point for a crossover
(see the sketch after this list).

• Multi-point crossover.

• Uniform crossover.

• Two-point crossover: avoids cases where genes at the beginning and end
of a chromosome are always split.
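A sketch of single-point crossover on bit strings (two-point crossover differs only in drawing two cut points and exchanging the middle segment):

import random

def single_point_crossover(parent1, parent2):
    """Split both parents at one random point and exchange the tails."""
    point = random.randint(1, len(parent1) - 1)   # cut strictly inside the string
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

p1 = [1, 0, 0, 0]
p2 = [0, 1, 1, 1]
print(single_point_crossover(p1, p2))   # e.g. ([1, 0, 1, 1], [0, 1, 0, 0]) for point = 2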

5.3. Mutation
Mutation is performed after crossover to prevent all solutions in the
population from falling into a local optimum of the solved problem.
Mutation changes the new offspring by flipping bits from 1 to 0 or from 0
to 1. Mutation can occur at each bit position in the string with some
probability, usually very small (e.g. 0.001). For example, consider the
following chromosome with the mutation point at position 2:

Not mutated chromosome: 1 0 0 0 1 1 1


Mutated: 1 1 0 0 1 1 1

The 0 at position 2 flips to 1 after mutation.
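A sketch of bit-flip mutation with a per-bit probability, matching the description above (pm is exaggerated in the demonstration call so that flips are actually visible):

import random

def mutate(chromosome, pm=0.001):
    """Flip each bit independently with probability pm."""
    return [1 - bit if random.random() < pm else bit for bit in chromosome]

original = [1, 0, 0, 0, 1, 1, 1]
print(mutate(original, pm=0.5))   # large pm used only to make flips visible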

6. Genetic Algorithm – Basic algorithm

1. Start: Randomly generate a population of N chromosomes.


2. Fitness: Calculate the fitness of all chromosomes.
3. Create a new population:
a. Selection: According to the selection method select 2 chromosomes
from the population.
b. Crossover: Perform crossover on the 2 chromosomes selected.
c. Mutation: Perform mutation on the chromosomes obtained.
4. Replace: Replace the current population with the new population.
5. Test: Test whether the end condition is satisfied. If so, stop and return
the best solution in the current population. If not, go to Step 2.
Each iteration of this process is called a generation; a complete sketch of
this loop is given below.
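Putting the five steps together for the function maximized in Section 10, f(x) = 15x - x^2 with x an integer between 0 and 15, a minimal self-contained sketch might look as follows. The parameters N = 6, pc = 0.7 and pm = 0.001 are the ones quoted in that example; everything else (function names, the fixed number of generations used as the stopping condition) is an illustrative choice, not the document's own code:

import random

N_BITS = 4         # chromosome length: x in 0..15
POP_SIZE = 6       # population size N
PC = 0.7           # crossover probability
PM = 0.001         # mutation probability per bit
GENERATIONS = 50   # simple stopping condition

def fitness(chromosome):
    """f(x) = 15x - x^2, with x decoded from the bit string."""
    x = int("".join(str(bit) for bit in chromosome), 2)
    return 15 * x - x * x

def roulette_select(population, fitnesses):
    """Fitness-proportionate (roulette wheel) selection of one parent."""
    spin = random.uniform(0, sum(fitnesses))
    cumulative = 0.0
    for chromosome, fit in zip(population, fitnesses):
        cumulative += fit
        if spin <= cumulative:
            return chromosome
    return population[-1]

def crossover(p1, p2):
    """Single-point crossover, applied with probability PC."""
    if random.random() < PC:
        point = random.randint(1, N_BITS - 1)
        return p1[:point] + p2[point:], p2[:point] + p1[point:]
    return p1[:], p2[:]

def mutate(chromosome):
    """Flip each bit with probability PM."""
    return [1 - bit if random.random() < PM else bit for bit in chromosome]

def run_ga():
    # 1. Start: randomly generate a population of N chromosomes.
    population = [[random.randint(0, 1) for _ in range(N_BITS)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # 2. Fitness: calculate the fitness of all chromosomes.
        fitnesses = [fitness(c) for c in population]
        # 3. Create a new population by selection, crossover and mutation.
        new_population = []
        while len(new_population) < POP_SIZE:
            p1 = roulette_select(population, fitnesses)
            p2 = roulette_select(population, fitnesses)
            c1, c2 = crossover(p1, p2)
            new_population.extend([mutate(c1), mutate(c2)])
        # 4. Replace: the new population becomes the current population.
        population = new_population[:POP_SIZE]
    # 5. Test: here the end condition is simply a fixed number of generations.
    return max(population, key=fitness)

best = run_ga()
print(best, fitness(best))   # expected near x = 7 or x = 8, where f = 56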
7. Advantage of Genetic Algorithms
• Concept is easy to understand

• Modular, separate from application

• Supports multi-objective optimization

• Always an answer; answer gets better with time.

• Easy to exploit previous or alternate solutions

• Flexible building blocks for hybrid applications.

8. Disadvantages of binary-coded GAs:

• more computation
• lower accuracy
• longer computing time
• solution space discontinuity
• Hamming cliff

GA Applications
9. Optimization

• Optimization can be defined as the process of finding the conditions that
give the maximum or minimum value of a function.

• It can be seen from Fig.(3) that if a point x* corresponds to the minimum
value of the function f(x), the same point also corresponds to the maximum
value of the negative of the function, -f(x).

Fig.(3): Minimum of f(x) is the same as maximum of -f(x)
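In symbols, this is the standard identity (stated here for clarity; the notation is ours):

\[
\max_{x} f(x) \;=\; -\min_{x}\bigl[-f(x)\bigr],
\qquad
x^{*} \;=\; \arg\max_{x} f(x) \;=\; \arg\min_{x}\bigl[-f(x)\bigr].
\]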

• A local minimum of a function is a point where the function value


is smaller than or equal to the value at nearby points, but possibly
greater than at a distant point.
• A global minimum is a point where the function value is smaller
than or equal to the value at all other feasible points.
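Written compactly (a standard formulation, not quoted from the source), x* is a local minimum of f over the feasible set X if

\[
\exists\, \varepsilon > 0 :\; f(x^{*}) \le f(x) \quad \text{for all } x \in X \text{ with } \lVert x - x^{*} \rVert < \varepsilon,
\]

and a global minimum if

\[
f(x^{*}) \le f(x) \quad \text{for all } x \in X.
\]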
Fig.(4): Local and global optima

Fig.(5): Multiple optimal solutions


10. A Numerical Example of a Genetic Algorithm:

▪ A simple example will help us to understand how a GA works.


Let us find the maximum value of the function (15x - x^2), where the
parameter x varies between 0 and 15.
▪ For simplicity, we may assume that x takes only integer values.
Thus, chromosomes can be built with only four genes:

Table.1: Encoding of parameter x with four-bit chromosomes

▪ Suppose that the size of the chromosome population N is 6, the


crossover probability pc equals 0.7, and the mutation probability
pm equals 0.001. (The values chosen for pc and pm are fairly typical
in GAs.) The fitness function in our example is defined by:
f(x) = 15x - x^2

• The GA creates an initial population of chromosomes by filling six
4-bit strings with randomly generated ones and zeros. The initial
population might look like that shown in Table 2. The
chromosomes’ initial locations on the fitness function are
illustrated in Figure 5.

Table.2. The initial randomly generated population of chromosomes


Fig.5 The fitness function and chromosome locations:
(a) chromosome initial locations; (b) chromosome final locations
▪ The next step is to calculate the fitness of each individual
chromosome. The results are also shown in Table 2. The average
fitness of the initial population is 36. In order to improve it, the
initial population is modified by using selection, crossover and
mutation, the genetic operators.
▪ In natural selection, only the fittest species can survive,
breed, and thereby pass their genes on to the next
generation. GAs use a similar approach, but unlike nature,
the size of the chromosome population remains unchanged
from one generation to the next.

▪ The last column in Table 2 shows the ratio of the individual


chromosome’s fitness to the population’s total fitness. This
ratio determines the chromosome’s chance of being
selected for mating.
▪ Thus, the chromosomes X5 and X6 stand a fair chance,
while the chromosomes X3 and X4 have a very low
probability of being selected. As a result, the chromosome’s
average fitness improves from one generation to the next.
▪ One of the most commonly used chromosome selection
techniques is roulette wheel selection, as shown in Fig.6.
▪ To select a chromosome for mating, a random number is
generated in the interval (0-100), and the chromosome
whose segment spans the random number is selected. It is
like spinning a roulette wheel where each chromosome has
a segment on the wheel proportional to its fitness.

Fig.6: Spinning a roulette wheel


▪ In our example, we have an initial population of six
chromosomes. Thus, to establish the same population in
the next generation, the roulette wheel would be spun six
times.

▪ The first two spins might select chromosomes X6 and X2 to


become parents, the second pair of spins might choose
chromosomes X1 and X5, and the last two spins might
select chromosomes X2 and X5. Once a pair of parent
chromosomes is selected, the crossover operator is applied.
10.1 How does the crossover operator work?
▪ First, the crossover operator randomly chooses a crossover
point where two parent chromosomes ‘break’, and then
exchanges the chromosome parts after that point. As a
result, two new offspring are created.
▪ For example, the chromosomes X6 and X2 could be crossed
over after the second gene in each to produce the two
offspring.

Fig.7: Crossover operator at work

10.2 What does mutation represent?


▪ Mutation, which is rare in nature, represents a change in
the gene. It may lead to a significant improvement in
fitness, but more often has rather harmful results. So why
use mutation at all? Its role is to provide a guarantee that
the search algorithm is not trapped on a local optimum.
▪ The sequence of selection and crossover operations may
stagnate at any homogeneous set of solutions. Under such
conditions, all chromosomes are identical, and thus the
average fitness of the population cannot be improved.
▪ However, the solution might appear to become optimal, or
rather locally optimal, only because the search algorithm is
not able to proceed any further. Mutation is equivalent to a
random search, and aids us in avoiding loss of genetic
diversity.
10.3 How does the mutation operator work?
▪ The mutation operator flips a randomly selected gene in a
chromosome. For example, the chromosome X1 might be
mutated in its second gene, and the chromosome X2 in its
third gene, as shown in Figure 8.
▪ Mutation can occur at any gene in a chromosome with
some probability. The mutation probability is quite small in
nature, and is kept quite low for GAs, typically in the range
between 0.001 and 0.01.

10.4 Final Result


▪ Genetic algorithms assure the continuous improvement of
the average fitness of the population, and after a number of
generations (typically several hundred) the population
evolves to a near-optimal solution.
▪ In our example, the final population would consist of only
chromosomes [0,1,1,1] and [1,0,0,0] .

▪ Figure 8 represents the whole process of applying the main


three operations (selection, crossover and mutation), and
the repetition of these operations from one generation to
another until the desired fitness is acquired.
Fig.(8): GA operation

11. MATLAB Simulation:
1. The function is defined as follows:

Fig.(9): M-file defining the (15x - x^2) function


2. The program code for the optimization process is as follows:

Fig.(10): Program code for the optimization

3. Creation of the GA tool: on typing “gatool” at the command prompt, the GA
toolbox opens. In the tool, type @func for the fitness value and enter the
number of variables defined in the function. Select best fitness for the plot
and specify the other parameters as shown in Fig.(11).

Fig.(11): Genetic algorithm tool for (15x – x^2) function


Fig.(12): plot function in optimization toolbox

4. The output for 51 generations is as shown in Fig.(13).

Fig.(13) Different Plots during optimization of the function


References

1. Kinnear, K. E. (1994). A Perspective on the Work in this Book. In K. E. Kinnear
(Ed.), Advances in Genetic Programming (pp. 3-17). Cambridge: MIT Press.

2. Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine
Learning. Reading: Addison-Wesley.

3. Mitchell, M. (1998). An Introduction to Genetic Algorithms. Massachusetts:
The MIT Press.

4. Engineering Optimization: Theory and Practice.
