
MATHEMATICAL PROGRAMMING

CO4
INFINITE DIMENSIONAL OPTIMIZATION
DR. VUDA SREENIVASA RAO
ASSOCIATE PROFESSOR
CO4-INFINITE DIMENSIONAL OPTIMIZATION

• Heuristic and metaheuristics
• Single solution vs. population-based
• Parallel metaheuristics
• Evolutionary algorithms
• Nature-inspired metaheuristics
• Genetic Algorithm
• Ant-colony optimization
• Particle swarm optimization
• Simulated annealing
• Tabu Search
MATHEMATICAL OPTIMIZATION
• Mathematical optimization is the process of finding the best set of inputs that
maximizes (or minimizes) the output of a function.
• In the field of optimization, the function being optimized is called the
objective function.

min   x1·x4·(x1 + x2 + x3)              (objective function)
s.t.  x1·x2·x3·x4 ≥ 26                  (inequality constraint)
      x1² + x2² + x3² + x4² = 40        (equality constraint)
      1 ≤ x1, x2, x3 ≤ 25               (bounds on the variables)
      x1 = 10, 12, 46                   (initial values)
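As a concrete illustration (not part of the slides), here is a minimal sketch of how a problem with this structure could be handed to a numerical solver, assuming SciPy's SLSQP method. The bound and initial-value lines on the slide are ambiguous, so the starting point and the application of the printed bounds to all four variables are assumptions made only for the example.

import numpy as np
from scipy.optimize import minimize

# Objective: x1 * x4 * (x1 + x2 + x3)
def objective(x):
    return x[0] * x[3] * (x[0] + x[1] + x[2])

constraints = [
    {"type": "ineq", "fun": lambda x: np.prod(x) - 26},    # x1*x2*x3*x4 >= 26
    {"type": "eq",   "fun": lambda x: np.sum(x**2) - 40},  # x1^2 + ... + x4^2 = 40
]
bounds = [(1, 25)] * 4               # bounds as printed, assumed to apply to all four variables
x0 = np.array([1.0, 5.0, 5.0, 1.0])  # assumed starting point, for illustration only

result = minimize(objective, x0, method="SLSQP", bounds=bounds, constraints=constraints)
print(result.x, result.fun)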

APPLICATIONS

SOLUTION STRATEGIES FOR OPTIMIZATION PROBLEMS
Methods to solve optimization problems    Nature of solution
Linear or non-linear programming          Exact solution
Branch and bound                          Exact solution
Heuristic method                          Inexact, near-optimal solution
Metaheuristic method                      Inexact, near-optimal solution

• In heuristic and metaheuristic methods, we make a trade-off between solution quality and computational time.
HEURISTIC METHOD VS METAHEURISTIC METHOD

                     Heuristic method                    Metaheuristic method
Nature               Deterministic                       Randomization + heuristic
Type                 Algorithmic                         Nature-inspired, iterative
Example              Nearest neighbour for the           Genetic algorithm for the
                     travelling salesman problem         travelling salesman problem
Nature of solution   Inexact, near-optimal solution      Inexact, near-optimal solution
METAHEURISTIC
• The word ‘meta’ means higher level, whereas the word ‘heuristics’ means to find.
• In computer science, a metaheuristic designates a computational method that optimizes a problem by iteratively trying to improve a candidate solution with respect to a given measure of quality.
• Metaheuristic optimization is well suited to optimizing non-convex objective functions.
• Metaheuristics do not guarantee an optimal solution.
• Metaheuristics implement some form of stochastic optimization.
METAHEURISTICS HAVE FUNDAMENTAL CHARACTERISTICS :

• A metaheuristic can employ problem-specific heuristics as domain knowledge, guided by its upper-level strategy.
• Metaheuristics are not tied to a particular problem.
• Metaheuristics are usually approximate.
• Metaheuristics can essentially be described by their level of abstraction.
• Metaheuristics usually allow an easy parallel implementation.
• Metaheuristics range from basic local search to advanced learning techniques.
• Metaheuristics may incorporate various mechanisms to avoid premature convergence.
Algorithmic framework for metaheuristics

CLASSIFICATION OF METAHEURISTICS

• Nature-inspired vs. non-nature inspired

• Population-based vs. single point search

• Dynamic vs. static objective function

• Memory usage vs. memory-less methods


Nature Inspired Algorithms for Optimization
FEATURES OF EAs (EVOLUTIONARY ALGORITHMS)
WHAT IS GA
Basic Structure of GA
EXAMPLE

• Using a genetic algorithm, maximize the function

  f(x) = x²

  with x in the interval [0, 31], i.e. x = 0, 1, 2, …, 30, 31.

• Select an encoding technique: binary encoding.

• The minimum value is 0 and the maximum value is 31.

• To represent these values, use 5-bit binary strings for the numbers 0 to 31:
  0 (00000) to 31 (11111).

• The objective function to be maximized is f(x) = x².


String No. | Initial population (randomly selected) | x value | Fitness f(x) = x² | Prob. f(x)/Σf(x) | % prob. | Expected count f(x)/avg f(x) | Actual count
1       | 01100 | 12 | 144    | 0.1247 | 12.47 | 0.4987 | 1
2       | 11001 | 25 | 625    | 0.5411 | 54.11 | 2.1645 | 2
3       | 00101 | 5  | 25     | 0.0216 | 2.16  | 0.0866 | 0
4       | 10011 | 19 | 361    | 0.3126 | 31.26 | 1.2502 | 1
Sum     |       |    | 1155   | 1.0    | 100   | 4      | 4
Average |       |    | 288.75 | 0.25   | 25    | 1      | 1
Max.    |       |    | 625    | 0.5411 | 54.11 | 2.1645 | 2

String No. | Mating pool | Crossover point | Offspring after crossover | x value | Fitness f(x) = x²
1       | 01100 | 4 | 01101 | 13 | 169
2       | 11001 | 4 | 11000 | 24 | 576
3       | 11001 | 2 | 11011 | 27 | 729
4       | 10011 | 2 | 10001 | 17 | 289
Sum     |       |   |       |    | 1763
Average |       |   |       |    | 440.75
Max.    |       |   |       |    | 729
String No. | Offspring after crossover | Mutation chromosome (bits to flip) | Offspring after mutation | x value | Fitness f(x) = x²
1       | 01101 | 10000 | 11101 | 29 | 841
2       | 11000 | 00000 | 11000 | 24 | 576
3       | 11011 | 00000 | 11011 | 27 | 729
4       | 10001 | 00101 | 10100 | 20 | 400
Sum     |       |       |       |    | 2546
Average |       |       |       |    | 636.5
Max.    |       |       |       |    | 841
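The worked example above can be reproduced end to end with a short program. Below is a minimal Python sketch of the same textbook GA – 5-bit binary encoding, roulette-wheel selection, single-point crossover and bit-flip mutation – for maximizing f(x) = x² on [0, 31]; the population size, mutation rate and number of generations are illustrative choices, not values taken from the slides.

import random

def fitness(x):
    return x * x                                   # objective: f(x) = x^2

def decode(bits):
    return int("".join(map(str, bits)), 2)        # 5-bit string -> integer in [0, 31]

def roulette_select(pop):
    total = sum(fitness(decode(ind)) for ind in pop)
    r = random.uniform(0, total)
    acc = 0.0
    for ind in pop:
        acc += fitness(decode(ind))
        if acc >= r:
            return ind
    return pop[-1]

def crossover(a, b):
    point = random.randint(1, len(a) - 1)          # single-point crossover
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(4)]   # 4 random 5-bit strings
for generation in range(20):
    next_pop = []
    while len(next_pop) < len(pop):
        p1, p2 = roulette_select(pop), roulette_select(pop)
        c1, c2 = crossover(p1, p2)
        next_pop += [mutate(c1), mutate(c2)]
    pop = next_pop
best = max(pop, key=lambda ind: fitness(decode(ind)))
print(decode(best), fitness(decode(best)))         # typically approaches x = 31, f = 961

With a population of only four strings, as in the tables above, the outcome depends strongly on the random seed, so repeated runs may be needed before the maximum x = 31 is reached.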
PARTICLE SWARM OPTIMIZATION
(PSO)

INTRODUCTION TO THE PSO: ORIGINS

• Inspired by the social behavior, dynamic movements and communication of insects, birds and fish in nature.
CONT..

• In 1986, Craig Reynolds described this flocking behavior with 3 simple rules:

  Separation: avoid crowding local flockmates.
  Alignment: move towards the average heading of local flockmates.
  Cohesion: move toward the average position of local flockmates.
CONT....

• Application to optimization: Particle Swarm Optimization


• Proposed by James Kennedy & Russell Eberhart (1995)
• Combines self-experiences with social experiences

INTRODUCTION TO THE PSO: CONCEPT

• Uses a number of agents (particles) that constitute a swarm moving around in the search space looking for the best solution.
• Each particle in the search space adjusts its “flying” according to its own flying experience as well as the flying experience of other particles.
CONT....
• Collection of flying particles (swarm) – changing solutions
• Search area – possible solutions
• Movement towards a promising area to get the global optimum.
• Each particle keeps track of:
  • its best solution, the personal best (pbest)
  • the best value found by any particle, the global best (gbest)
CONTD...

• Each particle adjusts its travelling speed dynamically according to its own flying experience and that of its colleagues.

• Each particle modifies its position according to:
  • its current position
  • its current velocity
  • the distance between its current position and pbest
  • the distance between its current position and gbest
PARTICLE SWARM OPTIMIZATION (PSO) ALGORITHM

• Basic Algorithm of PSO


1. Initialize the swarm from the solution space.
2. Evaluate fitness of each particle.
3. Update individual and global bests.
4. Update velocity and position of each particle.
5. Go to step 2, and repeat until termination condition.

UPDATE VELOCITY AND POSITION OF EACH PARTICLE.
• Velocity of particle:

  v(t+1) = w·v(t) + c1·r1·(pbest − x(t)) + c2·r2·(gbest − x(t))

  where
  x: particle's position, v: particle's velocity
  w: inertia weight
  r1, r2: random numbers in the range (0, 1)
  c1: weight of local information, c2: weight of global information
  pbest: best position found by the particle
  gbest: best position found by the swarm

• Position of particle: x(t+1) = x(t) + v(t+1)
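A minimal Python sketch of the basic PSO loop built directly on these two update rules, written for maximization; the swarm size, inertia weight and iteration count are illustrative choices, not values prescribed by the slides.

import random

def pso(f, lo, hi, n_particles=10, iters=50, w=1.0, c1=1.0, c2=1.0):
    x = [random.uniform(lo, hi) for _ in range(n_particles)]   # positions
    v = [0.0] * n_particles                                     # velocities start at zero
    pbest = x[:]                                                # personal best positions
    gbest = max(pbest, key=f)                                   # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = min(max(x[i] + v[i], lo), hi)                # keep within the bounds
            if f(x[i]) > f(pbest[i]):                           # update personal best
                pbest[i] = x[i]
        gbest = max(pbest, key=f)                               # update global best
    return gbest, f(gbest)

# Example: the function used in the worked example that follows, on [-10, 10]
print(pso(lambda x: -x**2 + 5*x + 20, -10, 10))                 # should approach x = 2.5, f = 26.25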

PSO ALGORITHM - PARAMETERS

• Number of particles usually between 10 and 50

• C1 is the importance of personal best value

• C2 is the importance of neighborhood best value

• Usually C1 + C2 = 4 (empirically chosen value)

• If velocity is too low → algorithm too slow

• If velocity is too high → algorithm too unstable

PROBLEM ANALYSIS

1. Size of a swarm.
2. How to generate initial particles with position and velocity.
3. Finding the fitness function.
4. Finding pbest and gbest.
5. Updating velocity (values of c1, c2, w, etc.).
6. Limits for velocity (Vmax, Vmin).
7. Updating position.
8. Terminating condition.

DEFINE THE PROBLEM

Find the maximum of the function

  f(x) = −x² + 5x + 20

with −10 ≤ x ≤ 10, using the PSO algorithm.

INITIALIZATION

• Use 9 particles with the initial positions x1 = −9.6, x2 = −6, x3 = −2.6, x4 = −1.1, x5 = 0.6, x6 = 2.3, x7 = 2.8, x8 = 8.3 and x9 = 10.

Particle number | Initial position | Objective value f(x) = −x² + 5x + 20
1 | −9.6 | −120.16
2 | −6.0 | −46.00
3 | −2.6 | 0.24
4 | −1.1 | 13.29
5 | 0.6  | 22.64
6 | 2.3  | 26.21
7 | 2.8  | 26.16
8 | 8.3  | −7.39
9 | 10.0 | −30.00
PARTICLE VELOCITY INITIALIZATION
• Let c1 = c2 = 1 and set the initial velocities of all particles to zero:
  v1(0) = v2(0) = … = v9(0) = 0.

• Step 2: Set the iteration number to t = 0 + 1 = 1 and go to step 3.

• Step 3: Find the personal best (pbest) for every particle.
• Step 4: gbest = max(pbest), so gbest = 2.3 (the position with the highest objective value).
• Step 5: Update the velocities of the particles using the random numbers r1 = 0.213, r2 = 0.876, with c1 = c2 = 1 and w = 1.

Step 6: Update the positions as well.

Step 7: Find the objective function values at the updated positions.

Step 8: Stopping criterion. If the stopping criterion is not satisfied, go back to step 2; otherwise stop the iteration and note the result.
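A small numeric sketch of steps 5–7 for this example, using the constants given above (w = 1, c1 = c2 = 1, r1 = 0.213, r2 = 0.876, zero initial velocities, pbest equal to the initial positions at the first iteration, and gbest = 2.3):

def f(x):
    return -x**2 + 5*x + 20                        # objective to maximize

xs = [-9.6, -6.0, -2.6, -1.1, 0.6, 2.3, 2.8, 8.3, 10.0]   # initial positions
vs = [0.0] * 9                                             # initial velocities
pbest, gbest = xs[:], 2.3                                  # personal bests, global best
r1, r2, c1, c2, w = 0.213, 0.876, 1.0, 1.0, 1.0

for i in range(9):
    vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])   # step 5
    xs[i] = xs[i] + vs[i]                                                           # step 6
print([round(x, 4) for x in xs])        # updated positions
print([round(f(x), 4) for x in xs])     # step 7: objective values at the new positions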

SWARM INTELLIGENCE
(ANT-COLONY OPTIMIZATION)

WHAT IS A SWARM?

• A loosely structured collection of interacting agents.


• Agents:
• Individuals that belong to a group (but are not necessarily identical).
• They contribute to and benefit from the group.
• They can recognize, communicate, and/or interact with each other.
• The natural perception of swarms is a group of agents in motion – but that
does not always have to be the case.
• A swarm is better understood if thought of as agents exhibiting a collective
behavior.
SWARM INTELLIGENCE (SI)

• An artificial intelligence (AI) technique based on the collective behavior in decentralized, self-organized systems.
• Generally made up of agents who interact with each other and the environment.
• No centralized control structures.
• Based on group behavior found in nature.
• “The emergent collective intelligence of groups of simple agents.”
(Bonabeau et al, 1999)
EXAMPLES OF SWARMS IN NATURE:
• Classic Example: Swarm of Bees.
• Can be extended to other similar systems:
• Ant colony
• Agents: ants
• Flock of birds
• Agents: birds
• Traffic
• Agents: cars
• Crowd
• Agents: humans
• Immune system
• Agents: cells and molecules
CHARACTERISTICS OF SWARMS

• Composed of many individuals


• Individuals are homogeneous
• Local interaction based on simple rules
• Self-organization (no centralized control)
SWARM INTELLIGENCE (SI) - ALGORITHM

• Inspiration from swarm intelligence has led to some highly successful optimisation algorithms.

• Ant Colony (-based) Optimisation – a way to solve optimisation problems based on the way that ants indirectly communicate directions to each other.
ANT COLONY OPTIMIZATION (ACO)

• The study of artificial systems modeled after the behavior of real ant colonies; such systems are useful in solving discrete optimization problems.

• Introduced in 1992 by Marco Dorigo.

• Originally called the Ant System (AS).

• Has been applied to:
  • the Traveling Salesman Problem (and other shortest-path problems);
  • several NP-hard problems.

• It is a population-based metaheuristic used to find approximate solutions to difficult optimization problems.
ACO CONCEPT

• Ant Colony Optimization (ACO) studies artificial systems that take inspiration from the behavior of real ant colonies and which are used to solve discrete optimization problems.

• Ants navigate from the nest to a food source. Ants are blind!

• The shortest path is discovered via pheromone trails. Each ant moves at random.

• Pheromone is deposited on the path.

• More pheromone on a path increases the probability of that path being followed.


A KEY CONCEPT: STIGMERGY

• Stigmergy is: indirect communication via interaction with the environment.

• A problem gets solved bit by bit.

• Individuals communicate with each other in the above way, affecting what each other
does on the task.

• Individuals leave markers or messages – these don’t solve the problem in themselves,
but they affect other individuals in a way that helps them solve the problem …
NATURALLY OBSERVED ANT BEHAVIOR

All is well in the world of the ant.


NATURALLY OBSERVED ANT BEHAVIOR

Oh no! An obstacle has blocked our path!


NATURALLY OBSERVED ANT BEHAVIOR

Where do we go? Everybody, flip a coin.


NATURALLY OBSERVED ANT BEHAVIOR

Shorter path reinforced.


STIGMERGY IN ANTS

• Ants are behaviorally unsophisticated, but collectively they can perform complex tasks.
• Ants have highly developed, sophisticated sign-based stigmergy:
  • they communicate using pheromones;
  • they lay trails of pheromone that can be followed by other ants.
• If an ant has a choice of two pheromone trails to follow, one to the NW, one to the NE, but the NW one is stronger – which one will it follow?
PHEROMONE TRAILS

• Individual ants lay pheromone trails while travelling from the nest to the food source, back to the nest, or possibly in both directions.
• The pheromone trail gradually evaporates over time.
• But pheromone trail strength accumulates as multiple ants use the path.

(Figure: pheromone trail between nest and food source.)
PROPERTIES OF THE PHEROMONE

• The pheromone is olfactory and volatile.
• The pheromone is stronger if more ants go along the same path (reinforced by numbers).
• The pheromone is stronger if the path from the nest to the food is shorter.
Pheromone Trails continued

(Figure: the classic two-branch experiment, panels (a)–(c). Ants travel between nest A and food E via branch points B and D, choosing between a long branch through H (d = 1) and a short branch through C (d = 0.5). At t = 0, with no pheromone yet, the 30 ants split evenly (15/15) at each branch point. Because the shorter branch is traversed faster, its pheromone builds up more quickly (τ = 30 vs. τ = 15), so at t = 1 more ants (20 vs. 10) choose the shorter branch.)

ANT COLONY OPTIMISATION ALGORITHMS: BASIC IDEAS

• Ants are agents that:

• Move between nodes in a graph.

• They choose where to go based on pheromone strength.

• An ant’s path represents a specific candidate solution.

• When an ant has finished a solution, pheromone is laid on its path, according to the quality of the solution.

• This pheromone trail affects behaviour of other ants by `stigmergy’ …


USING ACO
• The optimization problem must be written in the form of a path finding problem
with a weighted graph
• The artificial ants search for “good” solutions by moving on the graph
• Ants can also build infeasible solutions – which could be helpful in solving some optimization problems.
• The metaheuristic is constructed using three procedures:
• Construct Ants Solutions
• Update Pheromones
• Daemon Actions
CONSTRUCT ANTS SOLUTIONS

• Manages the colony of ants.

• Ants move to neighboring nodes of the graph.

• Moves are determined by stochastic local decision policies based on pheromone

trails and heuristic information.

• Evaluates the current partial solution to determine the quantity of pheromones

the ants should deposit at a given node.


UPDATE PHEROMONES
• Process for modifying the pheromone trails
• Modified by
• Increase
• Ants deposit pheromones on the nodes (or the edges)
• Decrease
• Pheromones that ants do not replenish evaporate
• Increasing the pheromones increases the probability of paths being used (i.e., building
the solution)
• Decreasing the pheromones decreases the probability of the paths being used (i.e.,
forgetting)
DAEMON ACTIONS

• Used to implement larger actions that require more than one ant
• Examples:
• Perform a local search
• Collection of global information
A GENERAL ALGORITHM

• Step 1: Initialize the pheromone information.

• Step 2: For each ant, do the following:

  • Find a solution (a path) based on the current pheromone trail.

  • Reinforcement: add pheromone.

  • Evaporation: reduce pheromone.

• Step 3: Stop if the terminating condition is satisfied; otherwise return to step 2.


APPLICATIONS OF ACO

• Vehicle routing with time window constraints

• Network routing problems

• Assembly line balancing

• Heating oil distribution

• Data mining

• Robotic Path Problem


E.G. A 4-CITY TSP

Initially, random levels of pheromone are scattered on the edges

(Figure: a 4-city graph with nodes A, B, C, D. Pheromone levels: AB: 10, AC: 10, AD: 30, BC: 40, CD: 20.)
E.G. A 4-CITY TSP

An ant is placed at a random node

E.G. A 4-CITY TSP

The ant decides where to go from that node, based on probabilities calculated from:
- pheromone strengths,
- next-hop distances.

Suppose this one chooses BC.
E.G. A 4-CITY TSP

The ant is now at C, and has a `tour memory’ = {B, C} – so he cannot visit B or C again.

Again, he decides the next hop (from those allowed) based on pheromone strength and distance; suppose he chooses CD.
E.G. A 4-CITY TSP

The ant is now at D, and has a `tour memory’ = {B, C, D}.

There is only one place he can go now: A.
E.G. A 4-CITY TSP

So, he has nearly finished his tour, having gone over the links:
BC, CD, and DA.
E.G. A 4-CITY TSP

So, he has nearly finished his tour, having gone over the links:
BC, CD, and DA. AB is added to complete the round trip.
Now, pheromone on the tour is increased, in line with the fitness of that tour.
E.G. A 4-CITY TSP

Next, pheromone everywhere is decreased a little, to model decay of trail strength over time.
E.G. A 4-CITY TSP

We start again, with another ant in a random position.

Where will he go?

Next, the actual algorithm and variants.
THE ACO ALGORITHM FOR THE TSP
[A SIMPLIFIED VERSION WITH ALL ESSENTIAL DETAILS]

We have a TSP, with n cities.


1. We place some ants at each city. Each ant then does this:
• It makes a complete tour of the cities, coming back to its starting city, using a transition rule
to decide which links to follow. By this rule, it chooses each next-city at random, but biased
partly by the pheromone levels existing at each path, and biased partly by heuristic
information.

2. When all ants have completed their tours.


Global Pheromone Updating occurs.
• The current pheromone levels on all links are reduced (i.e. pheromone levels decay over time).
• Pheromone is laid (belatedly) by each ant as follows: it places pheromone on all links of its tour, with strength depending on how good the tour was.

Then we go back to 1 and repeat the whole process many times, until we reach a termination criterion.
ACO Algorithm

Set all parameters and initialize the pheromone trails
Loop
    Sub-loop
        Construct solutions based on the state transition rule
        Apply the online pheromone update rule
    Continue until all ants have been generated
    Apply local search
    Evaluate all solutions and record the best solution so far
    Apply the offline pheromone update rule
Continue until the stopping criterion is reached
THE TRANSITION RULE

T(r, s) is the amount of pheromone currently on the path that goes directly from city r to city s.

H(r, s) is the heuristic value of this link – in the classic TSP application, this is chosen to be 1/distance(r, s), i.e. the shorter the distance, the higher the heuristic value.

p_k(r, s) is the probability that ant k will choose the link that goes from r to s.

β is a parameter that we can call the heuristic strength.

The rule is:

    p_k(r, s) = [ T(r, s) · H(r, s)^β ] / Σ_c [ T(r, c) · H(r, c)^β ]

where our ant is at city r, s is a city as yet unvisited on its tour, and the sum in the denominator runs over all of ant k's unvisited cities c.
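A direct transcription of this transition rule into Python (a sketch only): T and H are assumed to be 2-D tables indexed by city, and the value of β is illustrative rather than taken from the slides.

import random

def transition_probabilities(r, unvisited, T, H, beta=2.0):
    # Numerator T(r,s) * H(r,s)^beta for each unvisited city s
    weights = {s: T[r][s] * (H[r][s] ** beta) for s in unvisited}
    total = sum(weights.values())
    # Normalize so the probabilities over the unvisited cities sum to 1
    return {s: w / total for s, w in weights.items()}

def choose_next_city(r, unvisited, T, H, beta=2.0):
    # Roulette-wheel choice of the next city according to those probabilities
    probs = transition_probabilities(r, unvisited, T, H, beta)
    cities, p = list(probs.keys()), list(probs.values())
    return random.choices(cities, weights=p, k=1)[0]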
GLOBAL PHEROMONE UPDATE

A_k(r, s) is the amount of pheromone added to the (r, s) link by ant k.

m is the number of ants.

ρ is a parameter called the pheromone decay rate.

L_k is the length of the tour completed by ant k.

T(r, s) at the next iteration becomes:

    T(r, s) ← ρ · T(r, s) + Σ_{k=1}^{m} A_k(r, s)

where A_k(r, s) = 1/L_k if ant k's tour used the link (r, s), and 0 otherwise.
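The same update written as a Python sketch; the pheromone table T is assumed to be a dict of dicts, each tour is a list of cities treated as a closed loop, and the value of ρ is illustrative.

def global_pheromone_update(T, tours, lengths, rho=0.9):
    # Decay: every link keeps a fraction rho of its pheromone
    for r in T:
        for s in T[r]:
            T[r][s] *= rho
    # Deposit: ant k adds 1/L_k on every link of its (closed) tour
    for tour, L in zip(tours, lengths):
        for r, s in zip(tour, tour[1:] + tour[:1]):
            T[r][s] += 1.0 / L
            T[s][r] += 1.0 / L      # symmetric TSP: same pheromone in both directions
    return T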
Ant Colony Optimization
Characteristics

• An ant is a solution.

• Solutions (ants) are at different places in the solution space.

• How they change is based on the probability of changing to a different schedule.

• An ant completes its tour after selecting a choice for each stand.

• Utilities (objective function values) of each tour are calculated.

• Pheromone levels are updated after all of the ants have completed all of
their tours.
Ant Colony Optimization
Advantages:
• It is intuitive to biologically-minded people, mimicking nature.

• The system is built on positive feedback (pheromone attraction) and negative feedback (pheromone evaporation).

• Pheromone evaporation helps avoid convergence to a local optimum.


Disadvantages:

• For routing problems it may make more sense, but for harvest
scheduling problems, it requires a conceptual leap of faith.

• Fine-tuning the sensitive parameters may require significant effort.


Example
An ant is at a distance of 5 m from a TREE, 15 m from a CAR and 4 m from a DOLL; the distance between the TREE and the CAR is 4 m, between the CAR and the DOLL is 1 m, and between the DOLL and the TREE is 8 m. Create a matrix and solve using Ant Colony Optimization.
ANT COLONY OPTIMIZATION
• Ant Colony Optimization (ACO) is a nature-inspired metaheuristic algorithm that simulates the foraging behaviour of ants to find optimal solutions to complex problems.
• Initially proposed by Marco Dorigo in 1992.
• It aims to search for an optimal path in a graph, based on the behaviour of ants seeking paths between their colony and a source of food.
ACO CONCEPT
ACO is inspired by the foraging behaviour of ants, which find the shortest path to food sources using pheromone communication.
The first version of ACO was called the Ant System.
ACO has been applied to the Travelling Salesman Problem, demonstrating improved solutions over traditional algorithms.
STIGMERGY
• The main inspiration of the ACO algorithm comes from stigmergy.
• Stigmergy refers to the interaction and coordination of organisms in nature by modifying the environment.
• Ants produce chemicals called pheromones, which they use to communicate.
• Ants are more likely to choose paths with higher pheromone.
GENERAL ALGORITHM
• Ant Movement: Place artificial ants randomly and let them move around the
problem space.
• Pheromone Update: Ants leave pheromone on their paths based on
solution quality.
• Solution Evaluation: Evaluate the quality of solutions based on an
objective function.
• Repeat and Improve: Keep repeating steps 1-3, allowing ants to iteratively
improve their paths.
NUMERICAL EXAMPLE
• An ant is at a distance of 5 m from a TREE, 15 m from a CAR and 4 m from a POND; the distance between the TREE and the CAR is 4 m, between the CAR and the POND is 1 m, and between the POND and the TREE is 8 m. Create a matrix and solve using Ant Colony Optimization.
Cost Matrix
MATHEMATICAL MODEL
Cost And Pheromone Graph
CALCULATING THE PROBABILITIES
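The cost-matrix and probability slides above were not captured as text, so the sketch below reconstructs the distance matrix directly from the problem statement (node order: ANT, TREE, CAR, POND) and computes first-step transition probabilities. The uniform initial pheromone and the values of α and β are assumptions for illustration, not numbers from the slides.

import numpy as np

# Distance (cost) matrix built from the stated distances:
# Ant-Tree 5, Ant-Car 15, Ant-Pond 4, Tree-Car 4, Car-Pond 1, Pond-Tree 8.
labels = ["ANT", "TREE", "CAR", "POND"]
D = np.array([[0.0,  5.0, 15.0, 4.0],
              [5.0,  0.0,  4.0, 8.0],
              [15.0, 4.0,  0.0, 1.0],
              [4.0,  8.0,  1.0, 0.0]])

eta = np.zeros_like(D)
np.divide(1.0, D, out=eta, where=D > 0)   # visibility eta = 1/d, diagonal left at 0

tau = np.ones_like(D)                     # assumed uniform initial pheromone
alpha, beta = 1.0, 1.0                    # assumed weighting parameters

# Probability of the ant (node 0) moving first to each of the other nodes
weights = (tau[0] ** alpha) * (eta[0] ** beta)
probs = weights / weights.sum()
for name, p in zip(labels[1:], probs[1:]):
    print(name, round(float(p), 3))       # TREE and POND are favoured over the distant CAR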
SIMULATED ANNEALING

