Evolutionary Computing

Evolutionary computing is a computer science research area inspired by natural evolution, utilizing trial-and-error problem-solving techniques to optimize solutions. Key components include genetic algorithms, natural selection, mutation, and crossover, which collectively enhance solution quality over generations. The approach is particularly valuable for addressing complex problems requiring automated and adaptable algorithms.

Evolutionary Computing Metaphor

● Evolutionary computing is a research area within computer science.

● It draws inspiration from the process of natural evolution.

● Natural evolution demonstrates its power through the diverse species, each adapted to
survive in its niche.

● Computer scientists have adopted evolution as a model due to its effectiveness in adaptation
and optimization.

● The fundamental metaphor of evolutionary computing relates natural evolution to a trial-and-error problem-solving approach.
Natural Evolution and Problem Solving

Natural Evolution:

● A population of individuals competes for survival and reproduction.

● Fitness is determined by the environment and affects survival chances.

Evolutionary Computing Analogy:

● Problem-solving follows a trial-and-error (generate-and-test) approach.

● A collection of candidate solutions is evaluated based on quality.

● Better solutions have a higher chance of being retained and refined.


Brief History of Evolutionary Computing

Origins (1940s–1960s):

● 1948: Alan Turing proposed “genetical or evolutionary search.”


● 1962: Bremermann conducted computer experiments on evolutionary optimization.

Key Developments in the 1960s:

● USA:
○ Fogel, Owens, and Walsh introduced Evolutionary Programming.
○ Holland developed Genetic Algorithms.
● Germany:
○ Rechenberg and Schwefel pioneered Evolution Strategies.

Unification (1990s–Present):

● Evolutionary Programming, Evolution Strategies, and Genetic Algorithms were unified under Evolutionary Computing (EC).
Darwinian Evolution – Natural Selection
● Darwin’s Theory: Explains the origin of biological diversity through natural selection.

● Limited Resources & Competition:

○ The environment can only support a limited number of individuals.

○ Individuals compete for survival; the fittest (best adapted) have a higher chance of
reproducing.

● Survival of the Fittest:

○ Those best suited to the environment survive and pass on their traits.

○ Unfit individuals are less likely to reproduce.


Darwinian Evolution – Variation & Mutation
● Phenotypic Variation:

○ Traits (physical & behavioral) determine fitness.


○ Favorable traits increase survival and reproduction chances.

● Role of Mutation:

○ Small, random changes in traits occur during reproduction.


○ Some mutations improve fitness, leading to evolutionary progress.

● Evolution Over Time:

○ Successful individuals reproduce, passing beneficial traits.


○ Over generations, the population’s characteristics evolve.
Why Evolutionary Computing?
1. Inspiration from Nature:

● Engineers and scientists often look to nature for problem-solving techniques.


● Two powerful natural problem solvers:

○ Human Brain → Basis for Neurocomputing


○ Evolutionary Process → Basis for Evolutionary Computing

2. Need for Automated Problem Solving:

● Increasing complexity of problems in computer science and engineering.

● Limited time for manual problem analysis and custom algorithm design.

● Demand for robust, adaptable algorithms that work across various problems.
What Is an Evolutionary Algorithm?
● Core Idea: Inspired by natural selection (survival of the fittest).

● Process Overview:

1. Create a random population of candidate solutions.

2. Evaluate fitness using a quality function.

3. Select the better candidates for reproduction.

4. Apply Genetic Operators:


■ Recombination: Combines traits of parent candidates.
■ Mutation: Introduces random changes to individuals.

5. Generate Offspring: Evaluate their fitness and compete for survival.

6. Repeat until an optimal or acceptable solution is found.
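The loop above can be sketched in a few lines of Python. This is a minimal illustration only: the OneMax problem (maximize the number of 1-bits), the parameter values, and the choice of tournament selection, one-point recombination, and bit-flip mutation are assumptions made for the sketch, not prescribed here.

```python
import random

random.seed(0)  # for reproducibility of this sketch

CHROM_LEN = 20   # length of each candidate solution (bit string)
POP_SIZE = 30    # number of candidates in the population

def fitness(chrom):
    # Step 2: quality function (OneMax: count the 1-bits)
    return sum(chrom)

def select(pop):
    # Step 3: pick the better of two random candidates
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def recombine(p1, p2):
    # Step 4: one-point recombination of two parents
    point = random.randint(1, CHROM_LEN - 1)
    return p1[:point] + p2[point:]

def mutate(chrom, p_mu=0.05):
    # Step 4: independent bit-flip mutation
    return [1 - g if random.random() < p_mu else g for g in chrom]

def evolve(generations=100):
    # Step 1: random initial population
    pop = [[random.randint(0, 1) for _ in range(CHROM_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):              # step 6: repeat
        # Steps 4-5: generate offspring that replace the population
        pop = [mutate(recombine(select(pop), select(pop)))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)

best = evolve()
```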


Key Forces in Evolutionary Algorithms

● 1. Variation (Diversity Creation):

○ Recombination & Mutation introduce new traits.

○ Facilitates novelty and exploration of solution space.

● 2. Selection (Quality Improvement):

○ Higher fitness individuals have a better chance of survival.

○ Drives optimization by favoring stronger solutions.

● Outcome: Over generations, solutions adapt and improve based on fitness.


Stochastic Nature of Evolutionary Algorithms

● Not Fully Deterministic:

○ Selection: Even weak individuals have a chance to survive.

○ Recombination: Randomly selects traits from parents.

○ Mutation: Randomly alters parts of a solution.

● Why Randomness?

○ Prevents premature convergence.

○ Ensures a diverse search for better solutions.

○ Mimics natural evolutionary processes.


Key Features of Natural Evolution Relevant to Computation

1. Chromosomes
2. Natural Selection
3. Crossover
4. Mutation
1. Chromosomes
🔹 What are Chromosomes?

● Encoded structures storing traits of an individual

● Composed of genes, which determine characteristics like eye color, lip shape, etc.

🔹 Example in Humans

● Humans have 23 pairs (46 in total) of chromosomes.

Chromosomes in Genetic Algorithms (GAs):

🔹 How Chromosomes are Used in GAs?

● Represent feasible solutions to an optimization problem


● Encoded as strings of bits, characters, etc.
● Evolve through successive generations
2. Natural Selection – Survival of the Fittest
● Nature’s way of promoting ‘good’ traits and eliminating ‘bad’ traits in a species.

● Leads to permanent genetic adaptations over generations.

🔹 Example: Polar Bears


✅ White fur – Camouflage & heat retention
✅ Fat storage – Provides energy during food scarcity in winter
✅ Survival advantage – Only the best-adapted bears survived

● Adaptable traits increase survival chances and are passed to future generations.

● This principle forms the basis of Genetic Algorithms (GAs), where better solutions evolve over time.
3. Crossover
● A genetic operation that exchanges genetic material between two chromosomes.

● Occurs in nature during reproduction when parental cells combine to form a zygote.

🔹 How It Works?

● Similar to a string operation where two strings of the same length swap partial contents.

● Ensures a reshuffling of traits from parents to offspring.

🔹 Role in Genetic Algorithms (GAs)

● Helps in exploring new solutions by combining features from different individuals.

● Enhances diversity and improves optimization efficiency.


4. Mutation
● A permanent change in the DNA sequence of a gene.

● Introduces variations in an organism’s lineage.

🔹 How Does Mutation Occur?

1. Inherited – Passed from parents through crossover.

2. Acquired – Due to environmental factors, habits, or external influences (e.g., radiation).

🔹 Why is Mutation Important?

● Creates diversity in species.

● Drives evolution, leading to the emergence of new traits and species.


GENETIC ALGORITHM
● A computational search process for optimization.

● Continuously improves solutions to get the best possible outcome.

● Works with large search spaces that may have multiple local maxima.

● Goal: Find the global maximum or a close approximation.


Features associated with a GA
1. The chromosomes.
2. A procedure to encode a solution as a chromosome, and a procedure to decode a
chromosome back to the corresponding solution.
3. Fitness function to evaluate each solution, i.e., each chromosome.
4. Population size.
5. Initial population.
6. The mating pool, i.e., the set of chromosomes selected from current population who will
generate the new population/generation.
7. GA operators, e.g., selection, crossover, and mutation.
8. Termination condition.
Chromosomes
● In its simplest form, a chromosome is a one-dimensional string of bits.
● Select an appropriate encoding scheme on the basis of the nature of the problem to be
solved.
Criteria to decide the effectiveness of an encoding scheme:
➢ Space required for encoded chromosomes.

➢ Time for crossover, mutation, and fitness evaluation.

➢ All encoded chromosomes must map to feasible solutions.

➢ Chromosomes after crossover and mutation must be feasible.


Example: Chromosome for Travelling Salesperson Problem

TSP: to find a minimal-cost tour, i.e., a cycle containing each node of the
graph exactly once, with the total cost of the tour being minimal.

● TSP is essentially a minimization problem.


● In order to apply a GA to it, we must transform it into a suitable equivalent maximization problem.
● The cost associated with a link is converted into a reward by subtracting it from the maximum cost.
Issues:
● Consider the chromosome ch = 101 011 001 110 001.
1. 101 and 110 do not represent any node.
2. The pattern 001 occurs twice, though a node may be visited exactly once
in a tour.

Solution: Select the next available node in the list of nodes.

Chromosome ch = 101 011 001 110 001 will be interpreted as the tour

a→d→b→c→e
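A sketch of this repair rule in Python. The node encoding (a = 000, b = 001, c = 010, d = 011, e = 100) is an assumption chosen to be consistent with the example above.

```python
# Decode a bit-string TSP chromosome, repairing invalid or repeated
# 3-bit groups by taking the next available (unvisited) node.
NODES = ["a", "b", "c", "d", "e"]  # assumed encoding: a=000, ..., e=100

def decode_tour(chromosome):
    groups = [chromosome[i:i + 3] for i in range(0, len(chromosome), 3)]
    visited, tour = set(), []
    for g in groups:
        idx = int(g, 2)
        if idx >= len(NODES) or NODES[idx] in visited:
            # repair: select the next available node in the node list
            node = next(n for n in NODES if n not in visited)
        else:
            node = NODES[idx]
        visited.add(node)
        tour.append(node)
    return tour
```

Under this encoding, decode_tour("101011001110001") reproduces the tour a → d → b → c → e from the example.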
Fitness Function
● Fitness functions are objective functions that are used to evaluate a particular
solution.
● Higher fitness values may represent better solutions.
● The fitness of a → b → c → d → e is 43 + 39 + 42 + 23 + 46 = 193.
Population
● The GA starts with a group of chromosomes known as population.
● The complexity of the problem, which is reflected by the size of the search
space, is a factor to be considered while fixing the size of the population.
● The initial population is normally randomly generated.
GA Operators
● GAs employ three operators, viz., selection, crossover, and mutation.

● The selection operator ensures the continuation of good qualities in the solutions.

● The crossover and mutation operators help to explore the entire search space by
reshuffling individual traits and introducing variations.
1. Selection.
● Chromosomes with higher fitness values have a greater chance of being selected for
the mating pool.
● Less fit chromosomes should also have a chance of producing offspring.
● The most widely used selection operators are:
1. roulette wheel selection
2. tournament selection
Roulette Wheel
Ensures that the survival probability of a chromosome is proportional to its fitness
value.
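A minimal sketch of roulette-wheel selection, assuming non-negative fitness values; the final return line guards against floating-point rounding at the wheel's boundary.

```python
import random

def roulette_select(population, fitnesses):
    # Spin the wheel: each chromosome owns a slice of [0, total]
    # proportional to its fitness, so its survival probability is
    # proportional to its fitness value.
    total = sum(fitnesses)
    r = random.uniform(0, total)
    cumulative = 0.0
    for chrom, fit in zip(population, fitnesses):
        cumulative += fit
        if r <= cumulative:
            return chrom
    return population[-1]  # guard against rounding at the boundary
```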
Tournament
● A selection method where chromosomes compete in a mini-tournament.

● Winners are added to the mating pool for reproduction.

🔹 How It Works?

1. Two chromosomes (e.g., chr₁ & chr₂) are randomly picked.

2. Chromosome with higher fitness wins.

3. Repeat this process for the entire population size (PopSize).
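The three steps can be sketched as follows; the tie-breaking rule (the first pick wins on equal fitness) is an assumption of this sketch.

```python
import random

def tournament_select(population, fitness, pop_size):
    # Build a mating pool of pop_size winners of binary tournaments.
    pool = []
    for _ in range(pop_size):
        chr1, chr2 = random.sample(population, 2)   # step 1: random pair
        winner = chr1 if fitness(chr1) >= fitness(chr2) else chr2  # step 2
        pool.append(winner)                          # step 3: repeat
    return pool
```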


2. Crossover – Sharing Genetic Information
🔹 Purpose of Crossover

● Improves solution quality over generations.

● Allows exchange of genetic material among chromosomes.

● Helps GA explore the search space effectively.

🔹 How It Works?

1. Select two parent chromosomes randomly from the mating pool.

2. Shuffle their genetic material to create offspring.

3. New generation inherits traits from parents.


Crossover Probability & Process
🔹 Crossover Probability (pc)

● A predefined probability that determines whether crossover occurs.

● A random number r in [0, 1] is generated:

○ If r ≤ pc, crossover happens.

○ Else, the parents are copied unchanged.

🔹 Crossover Process:

1. Select crossover point – a random integer in [1, ChrLength].

2. Swap genetic segments from the crossover point to the end.
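The probability check and the two-step process above can be sketched as one-point crossover on lists; the parameter names and the default pc = 0.8 are illustrative assumptions.

```python
import random

def crossover(parent1, parent2, pc=0.8):
    # With probability 1 - pc the parents are copied unchanged.
    if random.random() > pc:
        return parent1[:], parent2[:]
    # Step 1: pick a random crossover point.
    point = random.randint(1, len(parent1) - 1)
    # Step 2: swap the segments from the crossover point to the end.
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2
```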


3. Mutation.
● A small random change at a position within a chromosome.
● Helps escape local optima and explore the search space more effectively.

🔹 Mutation Probability (pμ)

● Determines if a chromosome undergoes mutation.


● Kept low to avoid excessive disruption in the search process.

🔹 How Mutation Works?

1. For each chromosome, decide mutation based on pμ.


2. If selected, choose a random mutation point.
3. In binary representation, the bit at that point is flipped (0 → 1 or 1 → 0).
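The three steps translate to a short sketch for binary chromosomes; the default p_mu value is an illustrative assumption.

```python
import random

def mutate(chromosome, p_mu=0.01):
    # Step 1: decide, with probability p_mu, whether to mutate at all.
    if random.random() > p_mu:
        return chromosome[:]
    # Step 2: choose a random mutation point.
    point = random.randrange(len(chromosome))
    # Step 3: flip the bit at that point (0 -> 1 or 1 -> 0).
    mutated = chromosome[:]
    mutated[point] = 1 - mutated[point]
    return mutated
```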
Convergence
● As the GA approaches a global optimum, the fitness values of the average and the best
chromosomes approach equality.
● A GA is considered converged when over 90% of the population has approximately the same
fitness value.
Genetic Drift in GA
What is Genetic Drift?

● A random accumulation of stochastic errors in a GA population.

● Can cause convergence to suboptimal solutions even without selection pressure.

● Once a gene becomes dominant, it may spread across generations and become fixed.

🔹 Why is Genetic Drift a Problem?


❌ Fixation of genes – Crossover can't introduce new values.
❌ Loss of diversity – Reduces exploration in the search space.
❌ Can slow or mislead convergence towards the optimal solution.

🔹 How to Reduce Genetic Drift?


✔ Increase mutation rate – Introduces variations to prevent fixation.
✔ Balance mutation carefully – Too high a rate makes the search essentially random, losing the
guided improvement that selection provides.
Simulated Annealing
Introduction to Simulated Annealing
Introduced by Kirkpatrick et al. (1983) as a metaheuristic optimization technique, initially
applied to the Travelling Salesman Problem (TSP).

Inspired by the annealing process in metallurgy:

● Metal is heated and then gradually cooled.


● High temperature → Atoms move rapidly.
● Cooling → Atoms settle into a stable, low-energy state.

Simulated Annealing mimics this process:

● Starts with a high-energy state (initial solution).

● Gradually lowers temperature (control parameter).


Simulated Annealing Algorithm

● Starts with an initial solution.

● Iteratively modifies the solution by introducing random changes.

● Acceptance of new solutions:

○ Better solutions → Always accepted.

○ Worse solutions → Accepted with a probability that decreases over time.

● Early stages: High probability of accepting worse solutions → Encourages exploration.

● Later stages: Lower probability → Gradually converges to an optimal solution.


How Simulated Annealing Is Better Than Other Algorithms
Simulated annealing's strength is that it avoids getting caught at local maxima.
● In the accompanying figure, the best solution is at the yellow star.
● If a simple algorithm finds its way to one of the green stars, it won't move away from it:
all of the neighboring solutions are worse, so a green star is a local maximum.
● Simulated annealing injects just the right amount of randomness into things to escape
local maxima early in the process.
Step 1 – Defining the Problem
● Goal: Optimize a given problem by minimizing or maximizing an energy function.

● Energy function examples:

○ For a mathematical function: f(x, y) = x² + y² (minimization).

○ For TSP: Energy = Total travel distance.

● Initialization:

○ Set initial temperature.

○ Choose an initial candidate solution (randomly or heuristically).

○ Compute the energy of the initial solution.


Step 2 – Defining the Perturbation Function

● Purpose: Generate new candidate solutions that are close but not too similar to the
current solution.

● Example:

○ Function minimization: modify (x, y) by adding a small random value (e.g., between −0.1 and 0.1).

○ TSP optimization: Swap two cities in the current travel sequence.

● Ensures exploration of the solution space while maintaining proximity to the current
solution.
Step 3 – Acceptance Criterion

● Determines whether to accept or reject a new solution.

● Rules for acceptance:

○ Better solution (lower energy) → Always accepted.

○ Worse solution (higher energy) → Accepted with probability:


P = exp(−ΔE / T)
where ΔE = energy difference, T = current temperature.

● Implementation:
○ Generate a random value r in the range [0,1).

○ Accept worse solution if P > r.

● Helps avoid local minima and ensures better exploration.
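The acceptance rules and their implementation amount to a few lines of Python:

```python
import math
import random

def accept(delta_e, temperature):
    # Better (or equal) energy: always accept.
    if delta_e <= 0:
        return True
    # Worse energy: accept with probability P = exp(-dE / T),
    # compared against a random value r in [0, 1).
    p = math.exp(-delta_e / temperature)
    return p > random.random()
```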


Step 4 – Temperature Schedule
● Controls how temperature decreases over time.

● Early stages:

○ High temperature → Allows exploration of diverse solutions, including worse ones.

● Later stages:

○ Lower temperature → Becomes more selective, favoring better solutions.

● Simple temperature update rule:


T = α · T, where α < 1
○ Ensures gradual cooling for effective convergence.
Step 5 – Running the SA Algorithm

● Iteratively apply:

○ Perturbation function (generate new solutions).

○ Acceptance criterion (decide whether to accept the new solution).

● Stopping conditions:

○ Temperature reaches a minimum value (T_min).

○ Energy of the current solution falls below a threshold (E_th).

● Ensures gradual convergence to an optimal or near-optimal solution.
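Putting Steps 1–5 together, here is a compact sketch that minimizes the example energy function f(x, y) = x² + y². The perturbation range, cooling rate, thresholds, and moves-per-temperature count are illustrative assumptions.

```python
import math
import random

def energy(state):
    # Step 1: energy function to minimize, f(x, y) = x^2 + y^2
    x, y = state
    return x * x + y * y

def perturb(state):
    # Step 2: small random move in each coordinate
    x, y = state
    return (x + random.uniform(-0.1, 0.1), y + random.uniform(-0.1, 0.1))

def simulated_annealing(t0=10.0, t_min=1e-3, alpha=0.9,
                        e_th=1e-3, moves_per_t=40):
    state = (random.uniform(-5, 5), random.uniform(-5, 5))  # Step 1
    e = energy(state)
    t = t0
    while t > t_min and e > e_th:          # Step 5: stopping conditions
        for _ in range(moves_per_t):
            candidate = perturb(state)      # Step 2: new candidate
            delta = energy(candidate) - e
            # Step 3: accept better moves always, worse with exp(-dE/T)
            if delta <= 0 or math.exp(-delta / t) > random.random():
                state, e = candidate, energy(candidate)
        t *= alpha                          # Step 4: cooling schedule
    return state, e

random.seed(0)  # for reproducibility of this sketch
best_state, best_energy = simulated_annealing()
```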


Flowchart
