Team 9 CIA 3
CIA 3 Report
Submitted by
FERDENO (Reg No.: 2062018, Email ID: [email protected])
CHATHERIYAN (Reg No.: 2062016, Email ID: [email protected])
YOGESH KUMAR S(Reg No.: 2062040, Email ID: [email protected])
MERLA GANESH REDDY (Reg No.: 2062030, Email ID: [email protected])
Under Supervision of
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.1 Steps/Phases of ABC algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.2 Recent Development on ABC algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3 Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3.1 Steps/Phases of Differential evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3.2 Recent Development on Differential Evolution algorithm . . . . . . . . . . . . . . . . . 6
4 Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4.1 Steps/Phases of Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4.2 Recent Development on Genetic algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 9
5 Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5.1 Steps/Phases of Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 10
5.2 Recent Development on Particle Swarm Optimization Algorithm . . . . . . . . . . . . 12
6 Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
6.1 Steps/Phases of Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
6.2 Recent Development on Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 15
7 Cuckoo Search Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
7.1 Steps/Phases of Cuckoo Search Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 16
7.2 Recent Development on Cuckoo Search Algorithm . . . . . . . . . . . . . . . . . . . . 18
8 Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
8.1 Steps/Phases of Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
8.2 Recent Development on Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 21
9 Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 22
9.1 Steps/Phases of SMO Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
9.2 Recent Development on Spider Monkey Optimization Algorithm . . . . . . . . . . . . 23
10 Ant Colony Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
10.1 Steps/Phases of Ant Colony Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 24
10.2 Recent Development on Ant Colony Algorithm . . . . . . . . . . . . . . . . . . . . . . 25
11 Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
11.1 Steps/Phases of Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
11.2 Recent Development on Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . 28
12 Intelligent Image Color Reduction and Quantization . . . . . . . . . . . . . . . . . . . . . . . 29
12.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 29
12.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
12.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
12.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 30
12.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
12.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
12.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
12.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 31
12.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 31
12.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
12.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
13 Minimum Spanning Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
13.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 32
13.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
13.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
13.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 33
13.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
13.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
13.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
13.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 34
13.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 34
13.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
13.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
14 Robot Path Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
14.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 36
14.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
14.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
14.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 37
14.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
14.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
14.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
14.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 38
14.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 39
14.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
14.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
15 Data Envelopment Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
15.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 40
15.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
15.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
15.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 41
15.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
15.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
15.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
15.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 42
15.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 42
15.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
15.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
16 Portfolio Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
16.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 43
16.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
16.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
16.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 44
16.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
16.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
16.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
16.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 46
16.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 46
16.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
16.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
17 Facility Layout Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
17.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 48
17.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
17.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
17.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 49
17.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
17.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
17.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
17.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 51
17.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 51
17.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
17.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
18 Vehicle Routing Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
18.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 53
18.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
18.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
18.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 54
18.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
18.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
18.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
18.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 56
18.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 56
18.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
18.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
19 Parallel Machine Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
19.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 58
19.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
19.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
19.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 59
19.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
19.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
19.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
19.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 61
19.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 61
19.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
19.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
20 Bin Packing Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
20.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 63
20.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
20.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
20.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 65
20.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
20.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
20.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
20.8 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 67
20.9 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
20.10 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
21 Assignment problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
21.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 68
21.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
21.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
21.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 70
21.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
21.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
21.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
21.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 73
21.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 73
21.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
21.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Abstract
This report presents a detailed study of ten soft-computing optimization algorithms: the Artificial Bee Colony (ABC) algorithm, Differential Evolution, Genetic Algorithms, Particle Swarm Optimization, the Firefly Algorithm, Cuckoo Search, the Bat Algorithm, Spider Monkey Optimization, the Ant Colony algorithm, and Simulated Annealing. For each algorithm, we describe its steps/phases and recent developments, and we then apply all ten algorithms to ten optimization problems, ranging from image color quantization and minimum spanning trees to vehicle routing and the assignment problem.
1 Introduction
This report contains a detailed study of the following algorithms:
1. Artificial Bee Colony (ABC) Algorithm
2. Differential Evolution
3. Genetic Algorithms
4. Particle Swarm Optimization
5. Firefly Algorithm
6. Cuckoo Search
7. Bat Algorithm
8. Spider Monkey Optimization (SMO) Algorithm
9. Ant Colony Algorithm
10. Simulated Annealing
2 Artificial Bee Colony (ABC) Algorithms
Artificial Bee Colony Algorithm (ABC) is an optimization algorithm based on the intelligent foraging be-
havior of a honey bee swarm.
• Introduced by Karaboga in 2005, the Artificial Bee Colony (ABC) algorithm is a swarm-based meta-
heuristic algorithm for optimizing numerical problems and was inspired by the intelligent foraging
behavior of honey bees.
• The model consists of four essential components:
– Food Sources
– Employed bees
– Onlooker bees
– Scout bees
2.1 Steps/Phases of ABC algorithm
• Initialization Phase
In the initialization phase, we generate one food source for each employed bee. How a food source is encoded depends on the type of problem being solved. The bees are distributed randomly across the solution space; some researchers instead distribute them evenly, which may work better for certain solution spaces.
X = {x1 , x2 , . . . , xNpop }, where xi = (xi1 , xi2 , . . . , xiD )
• Employed Bee Phase
In the employed bee phase, each bee goes out to explore its food source. In the process, the bee explores the neighbourhood and, if it finds a food source with more nectar, its current food source is replaced by the newer, better one.
x′ij = xij + φij (xij − xkj ), where φij ∈ [−1, 1] and k ≠ i is a randomly chosen food source
If f (x′i ) < f (xi ), then replace xi with x′i
• Onlooker Bee Phase
The employed bees then return home and begin their waggle dance. Each onlooker bee perceives, with some error, the amount of nectar that each employed bee collected from its food source, and picks a food source according to that perception. The higher the nectar, the more probable it is that an onlooker bee will pick that source.
pi = f (xi ) / Σ_{j=1}^{Npop} f (xj )
x′i = select a solution based on the probability pi
If f (x′i ) < f (xi ), then replace xi with x′i
• Scout Bee Phase
When the neighbourhood of a food source has been explored enough without improvement, the source is abandoned. Every time a food source is explored, we increment its trial counter. When the trial count exceeds the maximum configured value, we delete the source from the list of food sources and generate a new, random food source in its place.
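The bee phases above can be sketched in a few lines of Python. This is an illustrative sketch rather than code from the report: it assumes a simple sphere objective and shows only the greedy employed-bee pass over a population of random food sources.

```python
import random

def sphere(x):
    # Illustrative objective: sum of squares, minimized at the origin.
    return sum(v * v for v in x)

def abc_employed_step(pop, fn, lb=-5.0, ub=5.0):
    # One employed-bee pass: perturb one dimension of each food source
    # using a random neighbour, then keep the better of old and new.
    new_pop = []
    for i, x in enumerate(pop):
        k = random.choice([j for j in range(len(pop)) if j != i])
        d = random.randrange(len(x))
        phi = random.uniform(-1.0, 1.0)
        v = list(x)
        v[d] = min(ub, max(lb, x[d] + phi * (x[d] - pop[k][d])))
        new_pop.append(v if fn(v) < fn(x) else x)   # greedy selection
    return new_pop

random.seed(0)
pop = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(10)]
init_best = min(map(sphere, pop))
for _ in range(200):
    pop = abc_employed_step(pop, sphere)
best = min(map(sphere, pop))
```

Because a food source is replaced only when the new candidate is strictly better, the best fitness in the population never worsens between passes.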
Flow-chart (For reference): Start → Initialize population → Evaluate fitness → (employed, onlooker, and scout bee phases, repeated until termination) → Stop
Algorithm(For reference)
1. Initialize the population of N candidate solutions, Xi , i = 1, 2, . . . , N .
2. Evaluate the fitness of each candidate solution using a fitness function f (Xi ).
3. Repeat the following phases until a stopping criterion is met:
(a) Employed bees phase: for each candidate solution Xi , generate a new candidate solution Vi by perturbing one randomly chosen dimension toward a random neighbour, evaluate f (Vi ), and replace Xi with Vi if f (Vi ) < f (Xi ).
(b) Onlooker bees phase:
i. Calculate the probability pi of each candidate solution Xi as follows:
pi = f (Xi ) / Σ_{j=1}^{N} f (Xj )
ii. For each onlooker bee k = 1, 2, . . . , n_onlooker , select a candidate solution Xi with probability
proportional to pi .
iii. Generate a new candidate solution Vi for each selected candidate solution Xi using the same
procedure as in the employed bees phase.
iv. Evaluate the fitness of the new candidate solutions Vi .
v. If f (Vi ) < f (Xi ), replace Xi with Vi .
(c) Scout bees phase:
i. For each candidate solution Xi that has not been improved for a certain number of iterations,
replace it with a new candidate solution randomly generated within the search space.
2.2 Recent Development on ABC algorithm
• Hybrid ABC (HABC):
– Abdolrasol et al. [1] combine the ABC algorithm with other optimization
methods, such as local search algorithms, gradient-based methods, or swarm intelligence algo-
rithms.
– Advantages: Improves the convergence speed and robustness of the ABC algorithm.
– Applications: Feature selection, image registration, and data clustering.
• Dynamic ABC (DABC):
– Thirunavukkarasu et al. [10] introduce a dynamic search space based on the fitness landscape to improve the performance of the ABC algorithm on complex optimization problems with changing landscapes.
– Advantages: Improves the performance of the ABC algorithm on complex optimization problems
with changing landscapes.
– Applications: Feature selection, machine learning, and control engineering.
• Adaptive ABC (AABC):
– Li et al. [6] adapt the search strategy of the ABC algorithm based on the convergence behaviour of the algorithm.
– Advantages: Improves the ABC algorithm’s convergence speed and exploration capability.
– Applications: Engineering design optimization, data mining, and image processing.
3 Differential Evolution
Differential evolution (DE) is a method that optimizes a problem by trying to improve a candidate solution
with regard to a given measure of quality.
• Differential evolution was proposed by Storn and Price [8] in 1995.
• Advantages of DE include simplicity, efficiency, real-valued encoding, ease of use, good local search capability, and speed.
3.1 Steps/Phases of Differential evolution
1. Mutation Phase:
• This phase generates a trial vector for every candidate solution of the present population. A target vector is perturbed by a weighted difference of two other population members to produce the mutant vector:
vi,g+1 = xr1,g + F (xr2,g − xr3,g ), where r1 , r2 , r3 are distinct random indices and F is the scaling factor
2. Crossover Phase:
• In this stage, a crossover operator is applied to the mutated solutions, creating offspring solutions
by combining the parameter values of two or more parent solutions. The crossover operator helps
explore the search space and combine promising features from different solutions.
ui,g+1 =
  vi,g+1 , if rand(0, 1) ≤ Cr or i = j
  xi,g ,   otherwise
3. Selection Phase:
• In this stage, the offspring solutions are evaluated based on a fitness function that measures their
performance. The fitness function determines which solutions are selected to become part of the
next generation population. Typically, solutions with higher fitness values are selected, as they
are considered more promising.
xi,g+1 =
  ui,g+1 , if f (ui,g+1 ) ≤ f (xi,g )
  xi,g ,   otherwise
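The mutation, crossover, and selection phases can be combined into one DE/rand/1/bin generation, sketched below. The sphere objective and the parameter values F = 0.8 and Cr = 0.9 are illustrative assumptions, not values from the report.

```python
import random

def sphere(x):
    # Illustrative objective: sum of squares, minimized at the origin.
    return sum(v * v for v in x)

def de_generation(pop, fn, F=0.8, Cr=0.9):
    # One DE/rand/1/bin generation: mutation, binomial crossover, selection.
    nxt = []
    D = len(pop[0])
    for i, x in enumerate(pop):
        r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
        mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
        j_rand = random.randrange(D)        # guarantees at least one mutant gene
        trial = [mutant[d] if random.random() <= Cr or d == j_rand else x[d]
                 for d in range(D)]
        nxt.append(trial if fn(trial) <= fn(x) else x)   # greedy selection
    return nxt

random.seed(1)
pop = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(12)]
init_best = min(map(sphere, pop))
for _ in range(100):
    pop = de_generation(pop, sphere)
best = min(map(sphere, pop))
```

The greedy selection rule means each individual, and hence the population's best fitness, can only improve or stay the same from one generation to the next.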
Flow-chart (For reference): Start → Initialize population → Evaluate population → Mutation → Crossover → Selection → Update population → Termination? (No: repeat from Mutation; Yes: Stop)
Algorithm(For Reference)
Algorithm Differential Evolution
Initialize a population of N candidate vectors xi randomly within the search bounds
while termination condition not met do
for i = 1 to N do
Select distinct random indices r1 , r2 , r3 ≠ i
Mutation: vi ← xr1 + F (xr2 − xr3 )
Crossover: build trial vector ui by mixing components of vi and xi with rate Cr
Selection: if f (ui ) ≤ f (xi ) then xi ← ui
end for
end while
4 Genetic Algorithms
Genetic algorithms are often used to solve complex optimization problems that are difficult to solve using
traditional computational methods by Mirjalili, Seyedali [7]
• One of the main advantages of using genetic algorithms in soft computing is their ability to find
near-optimal solutions to complex problems in a relatively short amount of time.
2. Crossover Phase:
• Two selected parent solutions xp1 and xp2 exchange genetic material to produce an offspring solution xc .
xc = crossover(xp1 , xp2 )
3. Mutation Phase:
• The mutation phase involves introducing a small random change to the genetic material of the
offspring solutions. Let xm be the mutated offspring solution.
xm = mutation(xc )
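The selection, crossover, and mutation phases can be sketched as one generation of a simple GA. The OneMax objective (maximize the number of 1-bits), tournament selection, elitism, and the rates pc and pm below are illustrative assumptions, not details from the report.

```python
import random

def fitness(bits):
    # OneMax: maximize the number of 1-bits (illustrative objective).
    return sum(bits)

def ga_generation(pop, pc=0.9, pm=0.02):
    # One GA generation: tournament selection, one-point crossover,
    # bit-flip mutation, with the best individual kept (elitism).
    n, L = len(pop), len(pop[0])
    def tournament():
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    nxt = [max(pop, key=fitness)[:]]          # elitism: carry over the best
    while len(nxt) < n:
        p1, p2 = tournament(), tournament()
        if random.random() < pc:              # crossover phase
            cut = random.randrange(1, L)
            child = p1[:cut] + p2[cut:]
        else:
            child = p1[:]
        # mutation phase: flip each bit with small probability pm
        child = [1 - g if random.random() < pm else g for g in child]
        nxt.append(child)
    return nxt

random.seed(2)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
init_best = max(map(fitness, pop))
for _ in range(60):
    pop = ga_generation(pop)
best = max(map(fitness, pop))
```

Elitism guarantees the best fitness in the population never decreases across generations.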
Flow-chart (For reference): Initialize Population → Selection → Crossover → Mutation → Terminate
Algorithm(For reference)
1: Initialize population
2: Evaluate fitness of each individual
3: while termination condition not met do
4: Select parents for reproduction
5: Perform crossover to create offspring
6: Perform mutation on offspring
7: Evaluate fitness of offspring
8: Replace least fit individuals in population with offspring
9: end while
5 Particle Swarm Optimization
• PSO is a stochastic optimization technique based on the movement and intelligence of swarms, originally
proposed by Kennedy and Eberhart [3] in 1995.
• In PSO, the concept of social interaction is used for solving a problem.
• It uses a number of particles (agents) that constitute a swarm moving around in the search space,
looking for the best solution.
5.1 Steps/Phases of Particle Swarm Optimization
1. Initialization:
• Initialize a swarm of particles with random positions xi and velocities vi in the search space.
2. Fitness evaluation:
• Evaluate the fitness of each particle in the population.
• The fitness value is calculated using the fitness function of the problem being solved.
fi = f (xi )
3. Velocity and position update:
• Update each particle’s velocity and position:
vi = w · vi + c1 · rand() · (pbesti − xi ) + c2 · rand() · (gbest − xi )
xi = xi + vi
where w is the inertia weight, c1 and c2 are the acceleration constants, rand() is a random number between 0 and 1, pbesti is the best position found so far by particle i, and gbest is the best position found by the swarm.
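The velocity and position updates can be sketched as a minimal global-best PSO. The sphere objective and the parameter values w = 0.7, c1 = c2 = 1.5 below are illustrative assumptions, not values from the report.

```python
import random

def sphere(x):
    # Illustrative objective: sum of squares, minimized at the origin.
    return sum(v * v for v in x)

def pso(fn, dim=3, n=15, iters=150, w=0.7, c1=1.5, c2=1.5):
    # Minimal global-best PSO: velocity mixes inertia, a pull toward
    # the particle's personal best, and a pull toward the global best.
    X = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # personal bests
    g = min(P, key=fn)[:]                      # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if fn(X[i]) < fn(P[i]):            # update personal best
                P[i] = X[i][:]
                if fn(P[i]) < fn(g):           # update global best
                    g = P[i][:]
    return g

random.seed(3)
best = pso(sphere)
```

Since gbest is replaced only by strictly better positions, the returned solution is at least as good as the best initial particle.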
Flow-chart(For reference): Start → Initialize swarm → Evaluate fitness → update particles → Evaluate fitness → termination? (no: repeat; yes: Stop)
Algorithm(For reference)
6 Firefly Algorithm
• Firefly Algorithm (FA) is a swarm intelligence algorithm that was inspired by the flashing behavior of
fireflies.
• It was first proposed by Xin-She Yang [4] in 2008 and has been used in a wide range of optimization
problems.
6.1 Steps/Phases of Firefly Algorithm
1. Initialization:
• Generate an initial population of N fireflies randomly within the search space.
2. Evaluation:
• Evaluate the fitness of each firefly in the population using the objective function of the optimization
problem.
fi = f (xi ) for i = 1, 2, . . . , N
3. Attraction:
• Calculate the attractiveness of each firefly, which depends on its brightness (fitness value) and its
distance to other fireflies in the population.
• The attractiveness can be calculated using a function that incorporates these factors, such as the
inverse square law used in the original FA formulation.
aij = β0 exp(−γ rij²), where β0 and γ are parameters and rij is the Euclidean distance between fireflies i and j
4. Movement:
• Move each firefly towards the most attractive firefly in its neighborhood, which is defined based
on the attractiveness calculated in the previous step.
• The movement of each firefly can be calculated using a formula that includes a random component
to allow for exploration.
xi (t + 1) = xi (t) + aij (xj (t) − xi (t)) + α ϵ, where aij is the attractiveness computed in the previous step, α is the step size of the random component ϵ, and xj (t) is the position of the most attractive firefly to i
5. Updating:
• Update the brightness of each firefly based on the new position it has moved to.
• Evaluate the fitness of the new solution and compare it to the previous solution.
• If the new solution is better, update the brightness of the firefly accordingly.
fi′ = f (x′i ) where fi′ is the new fitness value of firefly i after moving to position x′i
6. Termination:
• Repeat steps 3-5 until a stopping criterion is met, such as a maximum number of iterations, a
convergence threshold, or a time limit.
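Steps 3-5 can be sketched as repeated sweeps in which every firefly moves toward each brighter one. The sphere objective (lower value = brighter firefly, for minimization) and the parameter values β0, γ, and α below are illustrative assumptions, not values from the report.

```python
import math, random

def sphere(x):
    # Illustrative objective: sum of squares, minimized at the origin.
    return sum(v * v for v in x)

def firefly_sweep(pop, fn, beta0=1.0, gamma=1.0, alpha=0.2):
    # One FA sweep: every firefly moves toward each brighter (fitter) one.
    # Attractiveness decays exponentially with squared distance, and a
    # small uniform random step keeps the swarm exploring.
    n, D = len(pop), len(pop[0])
    bright = [fn(x) for x in pop]        # lower objective = brighter
    new = [x[:] for x in pop]
    for i in range(n):
        for j in range(n):
            if bright[j] < bright[i]:    # j is brighter than i
                r2 = sum((pop[i][d] - pop[j][d]) ** 2 for d in range(D))
                beta = beta0 * math.exp(-gamma * r2)
                for d in range(D):
                    new[i][d] += (beta * (pop[j][d] - new[i][d])
                                  + alpha * (random.random() - 0.5))
    return new

random.seed(4)
pop = [[random.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(12)]
init_best = min(map(sphere, pop))
for _ in range(80):
    pop = firefly_sweep(pop, sphere)
best = min(map(sphere, pop))
```

The brightest firefly in each sweep has no brighter neighbour and therefore stays put, so the best fitness in the swarm never worsens.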
Flow-chart(For reference): Initialize → Evaluate fitness → Calculate attractiveness → Move fireflies → Update fitness → Termination criterion (No: repeat; Yes: stop)
6.2 Recent Development on Firefly Algorithm
• Hybrid Firefly Algorithm:
– The Hybrid Firefly Algorithm (HFA) combines the Firefly Algorithm with other optimization
algorithms such as the Genetic Algorithm and Particle Swarm Optimization.
– Advantages: The advantage of this modification is that it can improve the convergence rate and
global search ability of the algorithm.
– Applications: HFA has been applied to various optimization problems such as feature selection,
image processing, and parameter tuning.
• Chaotic Firefly Algorithm:
– The Chaotic Firefly Algorithm (CFA) adds a chaotic map to the Firefly Algorithm to increase the diversity of the fireflies’ movements and avoid getting stuck in local optima.
– Advantages: The advantage of this modification is that it can improve the global search ability
and robustness of the algorithm.
– Applications: CFA has been applied to various optimization problems such as image segmenta-
tion, parameter estimation, and function optimization.
• Multi-Objective Firefly Algorithm:
– The Multi-Objective Firefly Algorithm (MOFA) is an extension of the Firefly Algorithm that can
handle multiple objectives simultaneously.
– Advantages: The advantage of this modification is that it can provide a set of solutions that
represent the trade-offs between different objectives.
– Applications: MOFA has been applied to various multi-objective optimization problems such
as feature selection, clustering, and image segmentation.
• Adaptive Firefly Algorithm:
– The Adaptive Firefly Algorithm (AFA) adjusts the step size of the fireflies’ movements based on
their fitness values to balance exploration and exploitation.
– Advantages: The advantage of this modification is that it can improve the convergence rate and
global search ability of the algorithm.
– Applications: AFA has been applied to various optimization problems such as feature selection,
image segmentation, and load forecasting.
• Self-Adaptive Firefly Algorithm:
– The Self-Adaptive Firefly Algorithm (SAFA) adapts the parameters of the Firefly Algorithm, such
as the attraction coefficient and randomization parameter, during the optimization process.
– Advantages: The advantage of this modification is that it can improve the global search ability
and robustness of the algorithm.
– Applications: SAFA has been applied to various optimization problems such as feature selection,
image segmentation, and function optimization.
7 Cuckoo Search Algorithm
• Cuckoo search is a nature-inspired optimization algorithm that is based on the behavior of cuckoo
birds.
• It was first proposed by Xin-She Yang and Suash Deb [11] in 2009.
• The algorithm is designed to solve optimization problems, especially those that involve complex, multi-
dimensional search spaces.
• The basic idea behind cuckoo search is to mimic the behavior of cuckoo birds in laying their eggs in
the nests of other bird species.
7.1 Steps/Phases of Cuckoo Search Algorithm
1. Initialization:
• The algorithm starts with an initial population of candidate solutions called nests.
• The nests are randomly generated in the search space.
2. Egg Laying:
• The algorithm selects some of the eggs to be laid in the nests of other birds.
• The selection is based on the quality of the egg, with higher-quality eggs being more likely to be
selected.
4. Abandonment:
• The algorithm checks if any eggs have been laid in a nest that is already occupied by another egg.
• If this is the case, the algorithm randomly chooses one of the eggs to be removed from the nest.
5. Local Search:
• The algorithm applies a local search operator to some of the nests to improve their quality.
6. Nest Selection:
• The algorithm selects some of the nests to be replaced by the new candidate solutions created in
steps 2-5.
• The selection is based on the quality of the new solution, with higher-quality solutions being more
likely to be selected.
7. Termination:
• The algorithm terminates when a stopping criterion is met, such as a maximum number of itera-
tions or a minimum level of improvement.
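The egg-laying, abandonment, and nest-selection steps can be sketched as one cuckoo-search iteration. The Lévy-flight step (generated with Mantegna's algorithm), the sphere objective, and the abandonment fraction pa = 0.25 follow the standard Yang-Deb formulation and are illustrative assumptions rather than details from the report.

```python
import math, random

def sphere(x):
    # Illustrative objective: sum of squares, minimized at the origin.
    return sum(v * v for v in x)

def levy():
    # Mantegna's algorithm for a heavy-tailed Lévy step, exponent beta = 1.5.
    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_iteration(nests, fn, pa=0.25, step=0.01):
    # One iteration: a cuckoo lays an egg via a Lévy flight around the best
    # nest, the egg replaces a random nest if it is better, and the worst
    # fraction pa of nests is abandoned and re-seeded randomly.
    nests = [n[:] for n in nests]
    best = min(nests, key=fn)
    i = random.randrange(len(nests))
    new = [x + step * levy() * (x - best[d]) for d, x in enumerate(nests[i])]
    j = random.randrange(len(nests))
    if fn(new) < fn(nests[j]):
        nests[j] = new
    nests.sort(key=fn)                         # abandon the worst nests
    for k in range(len(nests) - int(pa * len(nests)), len(nests)):
        nests[k] = [random.uniform(-5.0, 5.0) for _ in nests[k]]
    return nests

random.seed(5)
nests = [[random.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(15)]
init_best = min(map(sphere, nests))
for _ in range(200):
    nests = cuckoo_iteration(nests, sphere)
best = min(map(sphere, nests))
```

Only the worst nests are ever abandoned and replacements happen only when the new egg is better, so the best nest's quality never degrades.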
Flow-chart(For reference): Start → generate nests and lay eggs → Any nest already occupied? (yes: remove one egg randomly) → Apply local search operator to some nests → termination check → Stop
Algorithm(For reference)
8 Bat Algorithm
• The Bat Algorithm is a metaheuristic optimization algorithm that is inspired by the echolocation
behavior of bats.
• It was first proposed by Xin-She Yang [12] in 2010.
• The basic idea of the Bat Algorithm is to simulate the behavior of bats in searching for prey.
8.1 Steps/Phases of Bat Algorithm
1. Echolocation Phase:
• Each bat updates its position using its previous velocity, evaluates the objective, and draws random loudness and frequency values:
Xi (t) = Xi (t − 1) + Vi (t − 1)
Evaluate f (Xi (t))
Generate a random loudness: ri (t) ∼ Uniform(0, 1)
Generate a random frequency: Ai (t) ∼ Uniform(0, 1)
Vi (t) = Vi (t − 1) + (Xbest − Xi (t)) Ai (t)
Xi (t) = Xi (t) + ri (t) sin(2π Ai (t))
2. Movement Phase:
• Based on the echolocation results, each bat updates its position and velocity to move towards
better solutions.
• The speed and direction of the movement are determined by the frequency and loudness of the
bat.
3. Pulse Rate Phase:
• The pulse rate of each bat is adjusted according to its fitness and the global best solution found
so far.
• Bats with better fitness or closer to the global best solution will have a higher pulse rate.
favg = (1/N ) Σ_{i=1}^{N} f (Xi (t)),   ravg = (1/N ) Σ_{i=1}^{N} ri (t),
fbest = min( f (Xbest ), favg ),   Ai (t) = A0 · α^( f (Xi (t)) − fbest )
4. Loudness Phase:
• The loudness of each bat is also adjusted based on its fitness and the global best solution found
so far.
• Bats with better fitness or closer to the global best solution will have a higher loudness.
Flow-chart(For reference): Start → Initialize population → Evaluate fitness → Echolocation → Update loudness → termination check → Stop
Algorithm(For reference)
1: Initialize population
2: Evaluate fitness
3: Update best solution
4: while stopping criterion not met do
5: for each bat i do
6: Generate a new solution xi (t + 1) by updating the velocity and position
7: With probability ri , perform local search on xi (t + 1)
8: if f (xi (t + 1)) < f (xbest ) then
9: Update the best solution: xbest = xi (t + 1)
10: end if
11: Update loudness Ai and pulse rate ri
12: end for
13: end while
8.2 Recent Development on Bat Algorithm
• Enhanced Bat Algorithm (EBA):
– The EBA incorporates a new mechanism for updating the frequency and velocity of the bats, as
well as an adaptive local search strategy.
– The EBA has shown improved convergence speed and accuracy compared to the original Bat
Algorithm.
– Advantages: Improved convergence speed and accuracy.
– Applications: Optimization problems such as feature selection, clustering, and image segmen-
tation.
• Multi-Objective Bat Algorithm (MOBA):
– The MOBA is an extension of the Bat Algorithm for solving multi-objective optimization prob-
lems.
– The MOBA uses a Pareto-based approach to maintain a set of non-dominated solutions.
– Advantages: Ability to solve multi-objective optimization problems.
– Applications: Engineering design optimization, financial portfolio optimization.
• Quantum Bat Algorithm (QBA):
– The QBA applies quantum computing principles to the Bat Algorithm to improve its search
ability.
– The QBA uses quantum-inspired operators to update the positions and velocities of the bats.
– Advantages: Improved search ability.
– Applications: Combinatorial optimization, scheduling problems.
• Hybrid Bat Algorithm (HBA):
– The HBA combines the Bat Algorithm with other optimization algorithms, such as the Genetic
Algorithm or Particle Swarm Optimization, to improve its search ability and convergence speed.
– Advantages: Improved search ability and convergence speed.
– Applications: Engineering design optimization, data clustering.
• Improved Hybrid Bat Algorithm (IHBA):
– The IHBA further improves the HBA by incorporating an adaptive local search strategy and a
new operator for updating the loudness and pulse rate of the bats.
– Advantages: Improved search ability, convergence speed, and accuracy.
– Applications: Feature selection, image classification, scheduling problems.
9 Spider Monkey Optimization (SMO) Algorithm
• Spider Monkey Optimization (SMO) Algorithm is a metaheuristic optimization algorithm inspired by
the social behaviour of spider monkeys in their search for food.
• SMO was first introduced by Mirjalili et al. [7] in 2017.
• The SMO algorithm is based on the following three main behaviours of spider monkeys:
1. Exploration: In this behaviour, spider monkeys move randomly to search for new food sources. Some
spider monkeys are randomly selected to move to new positions in the search space. This step is
intended to introduce diversity into the population and prevent the algorithm from getting stuck
in local optima.
2. Exploitation: When spider monkeys find a promising food source, they exploit it by gathering
as much food as possible. The remaining spider monkeys exploit the best food sources that have
been found so far. This step is intended to converge the population towards promising regions of
the search space.
3. Homecoming: After spider monkeys have gathered enough food, they return to their home
location.
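The three behaviours above can be sketched as a highly simplified Python loop. This is an illustrative toy only (it omits the local-leader and global-leader phases of the full SMO algorithm); the objective, bounds, and parameters are assumptions:

```python
import random

def sphere(x):
    # Test objective: sum of squares; global minimum 0 at the origin.
    return sum(v * v for v in x)

def simple_smo(f, dim=2, n=30, iters=300, explore_frac=0.2, seed=7):
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            if rng.random() < explore_frac:
                # Exploration: jump to a random point to keep diversity.
                cand = [rng.uniform(lo, hi) for _ in range(dim)]
            else:
                # Exploitation: move towards the best food source found so far.
                cand = [x + rng.uniform(0, 1) * (b - x)
                        for x, b in zip(pop[i], best)]
            # Homecoming: each monkey keeps the better of old and new position.
            if f(cand) < f(pop[i]):
                pop[i] = cand
            if f(pop[i]) < f(best):
                best = pop[i][:]
    return best, f(best)

best, val = simple_smo(sphere)
print(val)
```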
Flow-chart(For reference)
Initialize population → Explore → Exploit → Homecoming → Stopping criterion
9.2 Recent Development on SMO Algorithm
• Hybrid SMO Algorithm (HSMO):
– This algorithm combines the SMO algorithm with other optimization algorithms such as Particle
Swarm Optimization (PSO) or Genetic Algorithm (GA) to enhance its performance.
– The advantage of HSMO is its ability to combine the strengths of different algorithms to obtain
better results.
– HSMO has been applied to various applications, such as economic load dispatch, feature selection,
and power system optimization.
• Multi-Objective SMO Algorithm (MOSMO):
– This algorithm extends the SMO algorithm to handle multi-objective optimization problems.
– The advantage of MOSMO is its ability to handle multiple conflicting objectives simultaneously.
– MOSMO has been applied to various applications, such as scheduling problems, image segmenta-
tion, and clustering.
• Parallel SMO Algorithm (PSMO):
– This algorithm proposes a parallel implementation of the SMO algorithm using multiple threads
or processors.
– PSMO splits the population into multiple sub-populations and applies the SMO algorithm inde-
pendently on each sub-population.
– The advantage of PSMO is its ability to reduce the computation time and obtain better results
by exploring a larger search space.
– PSMO has been applied to various applications, such as pattern recognition, image processing,
and machine learning.
10 Ant Colony Algorithm
• Ant colony algorithm (also known as ant colony optimization) is a metaheuristic algorithm inspired by
the behaviour of ants in their search for food.
• The ant colony optimization algorithm was proposed by Marco Dorigo [2] in his PhD thesis in 1992.
10.1 Steps/Phases of Ant Colony algorithm
1. Solution construction:
• In this phase, each ant incrementally constructs a candidate solution, choosing each next path
probabilistically according to the pheromone levels and heuristic information.
2. Pheromone update:
• In this phase, the pheromone level on each path is updated based on the quality of the solution
found by the ants.
• The shorter the path, the higher the amount of pheromone deposited on it.
3. Local search:
• In this phase, the ants perform a local search on their current path to improve their solution.
• Local search can be done by swapping two nodes on the path or by using a 2-opt heuristic.
Flow-chart(For reference)
Start → Initialization → Pheromone update → Local search → Stop (the middle steps repeat each iteration)
Algorithm Ant Colony Optimization
1: Initialize ants and pheromone levels on all paths
2: Set termination criterion
3: while termination criterion not met do
4: Let each ant construct a solution
5: Update the pheromone levels on all paths using: Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k
6: end while
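The construct-and-update loop of ant colony optimisation can be sketched on a tiny travelling-salesman instance. The distance matrix and all parameter values below are invented for illustration:

```python
import random

# Symmetric distance matrix of a tiny 4-city TSP (invented data).
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]

def tour_length(tour):
    # Length of the closed tour (returns to the start city).
    return sum(D[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def ant_colony(n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=3):
    rng = random.Random(seed)
    n = len(D)
    tau = [[1.0] * n for _ in range(n)]  # pheromone levels on all edges
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            # Solution construction: pick the next city with
            # probability proportional to tau_ij^alpha * (1/d_ij)^beta.
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1.0 / D[i][j]) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            tours.append(tour)
            L = tour_length(tour)
            if L < best_len:
                best_tour, best_len = tour, L
        # Pheromone update: evaporate, then deposit 1/L on each used edge,
        # so shorter tours deposit more pheromone.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour in tours:
            L = tour_length(tour)
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / L
                tau[j][i] += 1.0 / L
    return best_tour, best_len

tour, length = ant_colony()
print(tour, length)
```

On this 4-city instance only three distinct tours exist, so the colony quickly reinforces the shortest one (length 18).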
10.2 Recent Development on Ant Colony algorithm
• Max-Min Ant System (MMAS):
– MMAS is a modification of the standard ant colony optimization algorithm that uses a max-min
strategy for updating the pheromone trails.
– This strategy ensures that the pheromone levels remain within a predefined range, which prevents
the algorithm from getting stuck in local optima.
– MMAS has been shown to improve the performance of the ant colony algorithm on various op-
timization problems, including the traveling salesman problem (TSP), vehicle routing problem
(VRP), and job shop scheduling problem (JSSP).
– Applications: TSP, VRP, JSSP, and other combinatorial optimization problems.
– Advantages:
∗ Prevents the algorithm from getting stuck in local optima.
∗ Improves the performance of the ant colony algorithm on various optimization problems.
• Ant-Q:
– Ant-Q is a modification of the ant colony algorithm that uses a reinforcement learning mechanism
to update the pheromone trails.
– In Ant-Q, the pheromone levels are updated based on a combination of the current pheromone
level, the quality of the solution, and a reinforcement signal that indicates the quality of the
solution relative to the best solution found so far.
– Ant-Q has been shown to outperform the standard ant colony algorithm on various optimization
problems.
– Applications: TSP, VRP, JSSP, and other combinatorial optimization problems.
– Advantages:
∗ Uses a reinforcement learning mechanism to update the pheromone trails.
∗ Outperforms the standard ant colony algorithm on various optimization problems.
11 Simulated Annealing
• Simulated annealing is a metaheuristic optimization algorithm used to find the global minimum or
maximum of a function.
• It is inspired by the physical process of annealing, which is the gradual cooling of a material to reduce
its defects and improve its strength.
• Simulated annealing was first proposed by S. Kirkpatrick et al. [5] in a seminal paper published in
Science in 1983.
1. Perturbation:
• At each iteration, a new candidate solution is generated by making a small random perturbation
to the current solution.
• The perturbation can be chosen in different ways, depending on the problem being solved and the
nature of the solution space.
2. Evaluation:
• The algorithm evaluates the objective function at the new solution and computes the change in
the objective function value, which represents the improvement or deterioration of the candidate
solution relative to the current solution.
3. Acceptance:
• The algorithm decides whether to accept or reject the candidate solution based on a probabilistic
criterion.
• The acceptance probability is a function of the change in the objective function value and a
temperature parameter that controls the degree of randomness in the search.
• At high temperatures, the algorithm accepts solutions with a higher probability, while at low tem-
peratures, the acceptance probability decreases, and the algorithm becomes more deterministic.
4. Cooling:
• The temperature parameter is gradually decreased over time, according to a cooling schedule that
determines how fast the temperature decreases.
• The cooling schedule can be chosen in different ways, depending on the problem being solved and
the desired level of convergence.
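The four phases above can be sketched as a short Python routine minimising a one-dimensional function with several local minima; the objective, step size, and cooling rate are illustrative assumptions:

```python
import math
import random

def objective(x):
    # A one-dimensional test function with several local minima.
    return x * x + 10 * math.sin(x)

def simulated_annealing(f, x0=8.0, T0=10.0, cooling=0.99, iters=2000, seed=0):
    rng = random.Random(seed)
    x, T = x0, T0
    best = x
    for _ in range(iters):
        # Perturbation: small random step around the current solution.
        cand = x + rng.gauss(0, 1)
        # Evaluation: change in the objective value.
        dE = f(cand) - f(x)
        # Acceptance: always take improvements; accept worse moves
        # with probability exp(-dE / T) (Metropolis criterion).
        if dE < 0 or rng.random() < math.exp(-dE / T):
            x = cand
        if f(x) < f(best):
            best = x
        # Cooling: geometric schedule.
        T *= cooling
    return best, f(best)

x, val = simulated_annealing(objective)
print(x, val)
```

At high temperature the routine wanders across the local minima; as T decays it settles into the deepest basin it has found.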
Flow-chart(For reference)
Initialize → Current solution → Evaluate → Compute ΔE → accept? (yes: Update current solution; no: keep current solution) → Decrease temperature
11.2 Recent Development on Simulated Annealing
• Adaptive Simulated Annealing (ASA):
– In ASA, the temperature schedule is automatically adjusted based on the progress made during
the search. ASA uses the Metropolis criterion with an adaptive temperature that decreases as
the search progresses. This helps in achieving faster convergence to the optimal solution. ASA
has been successfully applied to various optimization problems such as the travelling salesman
problem, the quadratic assignment problem, and the job shop scheduling problem.
• Quantum Simulated Annealing (QSA):
– QSA is a hybrid optimization algorithm that combines the principles of quantum computing and
simulated annealing. QSA uses a quantum-inspired operator to generate new candidate solutions
and then applies the Metropolis criterion to accept or reject the new solution. QSA has been
shown to outperform SA in various optimization problems such as the MAX-CUT problem and
the travelling salesman problem.
12 Intelligent Image Color Reduction and Quantization
The method of colour quantization involves minimising the amount of colours in a digital image. The basic
goal of the quantization process is to maintain important information while lightening an image’s colour
palette. Due to their limitations, the majority of image printing and display equipment cannot replicate an
image’s true colours. small colour palettes. As a result, to match the amount of palette colours available,
fewer picture colours must be used.
12.4 Solution by the Particle Swarm Optimization
Three different types of bees are present in the swarm under consideration: employed bees, onlookers, and
scout bees. Each employed bee is associated with a particular food source; it brings nectar to the beehive
and informs the onlooker bees about the food source. Based on the information given by the employed bees,
each onlooker bee chooses a food source. The scout bees randomly search for fresh food sources. One of the
most crucial methods for image processing and compression is colour quantization (CQ). The vast majority
of quantization techniques rely on clustering algorithms. Unsupervised classification methods like data
clustering fall within the category of NP-hard problems. Using swarm intelligence algorithms is one strategy
for tackling NP-hard problems. Swarm intelligence techniques include the Artificial Fish Swarm Algorithm
(AFSA). This paper suggests a modified AFSA for conducting CQ. The proposed algorithm modifies
behaviours, settings, and the algorithm's process in order to increase the AFSA's effectiveness and eliminate
its flaws. Four well-known images were subjected to CQ using the suggested approach as well as other
known algorithms. Comparison of experimental findings demonstrates the suggested algorithm's acceptable
efficiency.
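The clustering view of colour quantization can be illustrated with a naive k-means-style sketch. This is a generic illustration of clustering-based CQ, not the modified AFSA the text describes, and the pixel values are invented:

```python
import random

def kmeans_quantize(pixels, k=2, iters=20, seed=0):
    """Reduce an image's colours to k palette entries (naive k-means)."""
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # Assign each pixel to the nearest palette colour.
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:
                # Move each palette colour to the mean of its cluster.
                centers[j] = tuple(sum(ch) / len(cl) for ch in zip(*cl))
    # Map every pixel to its nearest palette colour.
    quantized = [min(centers,
                     key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))
                 for p in pixels]
    return centers, quantized

# Invented pixels: a reddish group and a bluish group.
pixels = [(250, 10, 10), (245, 5, 12), (10, 10, 250), (12, 8, 240)]
palette, quantized = kmeans_quantize(pixels, k=2)
print(len(set(quantized)))
```

Swarm-intelligence approaches such as the modified AFSA replace the greedy centre-update step with a population-based search over candidate palettes.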
B3. There are an infinite number of alternative colour spaces for the representation of the RGB image. We
create optimum thresholds appropriate for each plane of the transformed RGB image after translating it
into B1B2B3 space. The image decompression stage is maintained using the bat method, where the cost
function is built to balance the perfect deletion of unnecessary DCT coefficients without wasting the
energy information. To increase the compression rate, we use a lossless coding strategy built on the TRE
coding and adaptive scanning suggested.
12.11 Conclusion
It has been suggested to use a colour quantization technique that combines artificial ants and bees. This
approach is based on the algorithm Ozturk et al. used to solve the same issue by fusing the K-means algorithm
and artificial bees. As a result of the K-means algorithm's high time requirement, the ATCQ method is
used instead. According to computational results, the new method can produce images with a quality
similar to that of the method proposed by Ozturk et al., but takes much less time to complete.
Also, compared to other widely used colour quantization techniques such as Wu's approach, Neuquant, K-means,
Octree, or the variance-based technique, this technique produces better images. Even though combining ABC
and ATCQ can sometimes produce better images than PSO, it should be noted that PSO takes an unnecessary
amount of time to do so. The standard ATCQ algorithm, which creates a multilayer tree, and the variant
of the algorithm that produces a 3-level tree have both been merged with the ABC algorithm. The study of
the data reveals that the second case generates images more quickly and in a manner that is remarkably
similar to the first scenario.
for listing every Pareto optimum spanning tree was proposed in a paper by Zhou and Gen [10] to assess
their suggested GA. However, Knowles [5] argued that the suggested enumeration algorithm was flawed. This
study proposes an enhanced enumeration approach to produce all actual Pareto optimum solutions for the
mc-MST issue in order to assess the proposed non-generational GA. The experimental findings demonstrate
that the majority of the outcomes obtained using the suggested GA approach are genuine Pareto optimum
solutions. Hence, the suggested non-generational GA is very successful. The idea behind the non-dominated
sorting procedure is that a ranking selection method is used to emphasize good points and a niche method
is used to maintain stable sub-populations of good points.
they direct the cuckoos to fly towards the closest cuckoo in the MST. This helps to preserve population
diversity and prevents premature convergence. Ultimately, the strengths of Cuckoo Search and the MST are
combined in the MSTCS algorithm to provide a powerful and successful optimisation technique.
random starting points. Evaluation: Using the objective function, assess each ant's fitness. MST Construction:
Use Kruskal's algorithm to build the MST for the ant population. Ant Movement: Move each ant to a new
position depending on the pheromone trails and the distances in the MST, choosing node j with probability
p_i(j) = (τ_ij^α · η_ij^β) / Σ_l (τ_il^α · η_il^β),
where p_i(j) denotes the likelihood of ant i moving to node j, τ_ij denotes the pheromone level on the edge
between node i and node j, η_ij denotes the heuristic information, and α and β denote the parameters that
control the relative influence of pheromone and distance. Evaluation: Using the objective function, assess
each ant's fitness. Pheromone Update: Update the pheromone trails using
τ_ij ← (1 − ρ) · τ_ij + 1/f(x_i),
where f(x_i) is the fitness of ant i, ρ is the evaporation rate, and 1/f(x_i) is the quantity of pheromone
deposited by ant i. Nest Update: Update each ant's nest by choosing the best one from among those in its
current and prior positions. Repeat steps 3 through 7 until the termination requirement is satisfied.
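The transition-probability and pheromone-update rules described above take the following shape in code; the pheromone and heuristic values below are invented for the example:

```python
def transition_probability(j, candidates, tau, eta, alpha=1.0, beta=2.0):
    """p_i(j) = tau_ij^alpha * eta_ij^beta, normalised over candidate nodes."""
    weights = {l: tau[l] ** alpha * eta[l] ** beta for l in candidates}
    total = sum(weights.values())
    return weights[j] / total

def pheromone_update(tau_ij, fitness, rho=0.1):
    """Evaporate, then deposit pheromone proportional to 1/f(x_i)."""
    return (1 - rho) * tau_ij + 1.0 / fitness

# Invented values: pheromone and heuristic info for three candidate nodes.
tau = {1: 0.5, 2: 1.0, 3: 0.25}
eta = {1: 1.0, 2: 0.5, 3: 2.0}
p = {j: transition_probability(j, [1, 2, 3], tau, eta) for j in [1, 2, 3]}
print(p)
```

Here node 3 wins despite its low pheromone because the heuristic term η^β dominates; raising α relative to β would shift the choice back towards the most-reinforced edge.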
13.11 Conclusion
As a result, the Minimum Spanning Tree (MST) is a fundamental problem in graph theory that aims to find
a minimum-weight set of edges connecting a group of nodes or vertices, ensuring that every vertex is
connected and that there are no cycles. Network design, image segmentation, and clustering are just a few
of the many useful applications of the MST. The MST has served as the foundation for the development of
numerous optimisation algorithms, including Ant Colony Optimization, Simulated Annealing, Genetic
Algorithms, and Particle Swarm Optimization. These algorithms direct the search for the best solutions
using the MST as a guiding principle. The incorporation of the MST into optimisation algorithms can
result in effective and efficient solutions for a variety of problems. The algorithms can take advantage of
the MST's characteristics to preserve population diversity, prevent premature convergence, and search for
the global best solution. The MST is a powerful idea that has been applied in a variety of domains, and its
application to optimisation methods has proven to be a viable route for resolving challenging optimisation
problems.
heuristic optimization algorithm that is inspired by the foraging behavior of honey bees. The algorithm
mimics the food-source discovery process of bees to search for optimal solutions in a search space. The
RP-ABC method provides a number of benefits, such as the capacity to handle complicated settings with
numerous barriers, robustness to noise and uncertainty in the environment, and the ability to swiftly and
effectively locate optimal paths. The RP-ABC algorithm can also be extended to handle dynamic contexts,
where obstacles or the position of the target may change over time.
an evaluation. Termination: Return to step 3 if the termination requirement is not satisfied; otherwise,
return the best path discovered throughout the search process. The GA serves as a global search algorithm
in the RP-GA algorithm, while the local search algorithm serves as a refinement phase to enhance the
quality of the best path discovered by the GA. The RP-GA method provides a number of benefits, including
the capacity to handle challenging situations with numerous barriers, robustness to noise and ambiguity in
the environment, and speed and effectiveness in locating the best paths. The RP-GA algorithm can also be
extended to handle dynamic contexts, where obstacles or the location of the target may change over
time. Overall, the RP-GA algorithm is a promising method for planning robot paths and can offer a
practical resolution to a variety of robotic navigation issues.
14.6 Solution by the Cuckoo Search
Robot path planning and the Cuckoo Search algorithm can be combined to create a powerful system that
can determine the best routes across challenging terrain for robots to travel. The result is the robot path
planning based cuckoo search (RP-CS) algorithm. The RP-CS algorithm operates as follows:
1. Initialization: Provide a population of potential routes for the robot to follow as it explores its
surroundings.
2. Evaluation: Determine the candidate paths' fitness using a fitness function that considers the length
of the path, obstacle avoidance, and other restrictions.
3. Cuckoo Search: Apply the Cuckoo Search algorithm to the population of potential paths in order to
identify the path that best satisfies the problem's requirements.
4. Local Search: To enhance the quality of the optimal path discovered by the Cuckoo Search algorithm,
use a local search technique, such as gradient descent or hill climbing.
5. Termination: Return the best route discovered throughout the search procedure.
The local search algorithm serves as a refining phase to enhance the quality of the best path discovered by
the Cuckoo Search algorithm, which serves as a global search algorithm in the RP-CS algorithm. The RP-CS
algorithm provides a number of benefits, such as the capacity to handle complicated environments with
numerous barriers, robustness to noise and uncertainty in the environment, and speed and efficiency in
locating optimal paths. The RP-CS algorithm can also be extended to handle dynamic contexts, where
obstacles or the location of the target may vary over time. Overall, the RP-CS algorithm offers a promising
method for planning robot paths and can effectively address a variety of robotic navigational issues.
of benefits, including the capacity to handle challenging situations with numerous barriers, robustness to
noise and uncertainty in the environment, and speed and effectiveness in locating the best routes. The
RP-SMO algorithm can also handle dynamic environments, where the obstacles or the target position may
change over time. Overall, the RP-SMO algorithm is a promising approach to robot path planning, and
it can provide an efficient and effective solution to a wide range of robotic navigation problems.
14.11 Conclusion
In conclusion, determining the best path for a robot to take from its starting point to a goal position
while avoiding obstacles is a difficult challenge in robotics. The Artificial Bee Colony, Differential Evolution,
Genetic Algorithms, Particle Swarm Optimization, Firefly Algorithm, Cuckoo Search, Bat Algorithm, Spider
Monkey Optimization, Ant Colony Optimization, and Simulated Annealing are a few optimisation algorithms
that can be used to solve the robot path planning problem. The choice of algorithm depends on the unique
aspects of the problem and the performance requirements. Each algorithm has strengths and limitations. A
few population-based algorithms, like Genetic Algorithms and Particle Swarm Optimization, can effectively
search the solution space. The Minimal Spanning Tree algorithm can make optimisation algorithms more
effective and economical, particularly when tackling complicated situations with numerous barriers. The A*
algorithm, a common pathfinding technique in robotics, can be incorporated into optimisation algorithms to
increase their effectiveness by offering a heuristic function to direct the search. Optimisation
algorithms like those described here have the potential to help robots navigate in complicated situations
more effectively and safely. They can offer an efficient and effective solution to the robot path planning
problem.
space to find the optimal solution. In summary, the use of the Differential Evolution (DE) algorithm for solving
Data Envelopment Analysis (DEA) problems has shown promising results. The DE algorithm is able to handle
large and complex data sets and non-linear and non-convex problems, and it is a global optimization algorithm.
handle large and complex data sets. Second, it can handle non-linear and non-convex problems, which are
common in DEA. Third, it is a global optimization algorithm, which means that it can search the entire
solution space to find the optimal solution. In summary, the use of Cuckoo Search (CS) for solving Data
Envelopment Analysis (DEA) problems has shown promising results. The CS algorithm is able to handle large
and complex data sets and non-linear and non-convex problems, and it is a global optimization algorithm.
the annealing process in metallurgy. The SA algorithm has several advantages when applied to DEA problems.
First, it is a global optimization algorithm, which means that it can search the entire solution space to
find the optimal solution. Second, it can handle non-linear and non-convex problems, which are common in
DEA. Third, it is able to escape local optima, which is a common problem in optimization algorithms. In
summary, the use of Simulated Annealing (SA) for solving Data Envelopment Analysis (DEA) problems has
shown promising results. The SA algorithm is able to handle non-linear and non-convex problems, is a global
optimization algorithm, and is able to escape local optima.
15.11 Conclusion
Data Envelopment Analysis (DEA) is a widely used method for evaluating the relative efficiency of decision-
making units (DMUs) in various industries. DEA has been applied in various fields such as healthcare,
education, manufacturing, and finance. The goal of DEA is to identify the most efficient DMUs and to
provide recommendations for improving the efficiency of inefficient DMUs. Recently, several metaheuristic
optimization algorithms, such as Artificial Bee Colony (ABC), Differential Evolution (DE), Genetic Algo-
rithms (GA), Particle Swarm Optimization (PSO), Firefly Algorithm, Cuckoo Search, Bat Algorithm, Spider
Monkey Optimization (SMO), Ant Colony Algorithms, and Simulated Annealing (SA), have been proposed
for solving DEA problems. These algorithms have shown promising results in terms of efficiency and accu-
racy. They have also demonstrated the ability to handle non-linear and non-convex problems and to escape
local optima. Overall, the use of metaheuristic optimization algorithms in DEA has improved the accuracy
and efficiency of DEA models and provided new insights into the efficiency of decision-making units. The
choice of the most appropriate algorithm depends on the specific characteristics of the DEA problem at
hand.
16 Portfolio Optimization
The process of designing an investment portfolio that maximises returns while minimising risks is known
as portfolio optimisation. To optimise a portfolio, the combination of assets that will yield the best possible
return at each risk level needs to be found. Mean-variance optimization, minimum variance optimization,
and risk parity are only a few of the techniques available for portfolio optimization. The most popular
method, mean-variance optimization, involves maximising the expected return on the portfolio while
reducing its variance. On the other hand, minimum variance optimization aims to reduce the portfolio's
volatility while preserving a specific level of expected return. Risk parity optimisation seeks to distribute
the assets in a portfolio so that the risk contributions from each asset are equal.
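Mean-variance optimization rests on two quantities: the expected portfolio return and the portfolio variance for a given weight vector. A minimal numeric sketch (the returns and covariance matrix below are invented for illustration):

```python
def portfolio_return(weights, mean_returns):
    # Expected portfolio return: weighted sum of asset mean returns.
    return sum(w * r for w, r in zip(weights, mean_returns))

def portfolio_variance(weights, cov):
    # Portfolio variance: w^T * Cov * w.
    n = len(weights)
    return sum(weights[i] * cov[i][j] * weights[j]
               for i in range(n) for j in range(n))

# Invented example: two assets.
mu = [0.10, 0.05]                     # expected returns
cov = [[0.04, 0.01],                  # covariance matrix
       [0.01, 0.02]]
w = [0.6, 0.4]                        # portfolio weights (sum to 1)
print(portfolio_return(w, mu), portfolio_variance(w, cov))
```

Mean-variance optimization then searches over the weight vector w to trade these two quantities off against each other.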
16.2 Solution by the Differential Evolution
The population-based optimisation technique Differential Evolution (DE) has also been used to solve portfolio
optimisation problems. This algorithm evolves a population of potential solutions (i.e., portfolios) through a
series of iterations. The DE algorithm for portfolio optimisation works by treating each portfolio as a vector
of decision variables that represents how the assets in the portfolio are allocated. The algorithm then
mutates and recombines the current solutions in the population to produce new candidate solutions. These
candidate solutions are assessed using a fitness function that considers the expected return and risk of the
portfolio. Portfolio optimisation has been demonstrated to benefit from the DE method, especially when
the problem is complicated and multidimensional. The algorithm can deal with nonlinear and non-convex
optimisation problems and swiftly converge to a close-to-optimal solution. The DE algorithm's sensitivity to
the selection of its parameters, such as the mutation and crossover rates, is one of its possible drawbacks. However,
there are ways to tune these settings to boost the algorithm's effectiveness. The DE algorithm has been
shown to occasionally outperform other well-known optimisation algorithms, making it a viable method
for portfolio optimisation in general. It is crucial to carefully examine the outcomes and consider various
portfolio optimisation strategies, as with any optimisation technique.
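The mutate-recombine-select loop described above can be sketched as a minimal DE/rand/1/bin implementation evolving long-only portfolio weights. The return and covariance data, the risk-penalty fitness, and all parameter values are illustrative assumptions, not from this report:

```python
import random

MU = [0.10, 0.05, 0.07]                 # invented expected returns
COV = [[0.040, 0.010, 0.002],           # invented covariance matrix
       [0.010, 0.020, 0.005],
       [0.002, 0.005, 0.030]]

def fitness(w):
    """Negative of (return - risk penalty); DE minimises this."""
    ret = sum(wi * mi for wi, mi in zip(w, MU))
    var = sum(w[i] * COV[i][j] * w[j] for i in range(3) for j in range(3))
    return -(ret - 2.0 * var)

def normalize(w):
    # Keep weights non-negative and summing to 1 (long-only portfolio).
    w = [max(wi, 0.0) for wi in w]
    s = sum(w) or 1.0
    return [wi / s for wi in w]

def differential_evolution(f, dim=3, pop_size=20, iters=200, F=0.8, CR=0.9, seed=5):
    rng = random.Random(seed)
    pop = [normalize([rng.random() for _ in range(dim)]) for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            # Mutation v = x_a + F * (x_b - x_c), with binomial crossover.
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            trial = normalize(trial)
            if f(trial) <= f(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

w = differential_evolution(fitness)
print([round(x, 3) for x in w])
```

Projecting each trial vector back onto the simplex (via normalize) is one simple way to keep the budget constraint satisfied throughout the evolution.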
It is crucial to carefully examine the outcomes and consider various portfolio optimisation strategies, as with
any optimisation technique.
bat are considered when the bats fly through the search space and modify their positions accordingly. Using a
fitness function that evaluates the projected return and risk of the portfolio, the candidate solutions are
assessed. The BA algorithm has been demonstrated to perform well in portfolio optimization, especially
when the problem is complicated and multidimensional. The algorithm is capable of dealing with nonlinear and
non-convex optimisation problems and can swiftly converge to a close-to-optimal solution. The BA method may
have certain drawbacks because it is sensitive to the selected parameters, such as the bats' loudness and
pulse rate. There are ways to tune these settings, though, to boost the algorithm's effectiveness. The BA
algorithm has been shown to occasionally outperform other well-known optimisation algorithms, making it
a viable method for portfolio optimisation in general. It is crucial to carefully examine the outcomes and
consider various portfolio optimisation strategies, as with any optimisation technique.
16.10 Solution by the Simulated Annealing
The metaheuristic optimisation technique Simulated Annealing (SA) has also been applied to portfolio optimisation
problems. It is modelled after the metallurgical annealing process, in which a metal is heated and then gradually
cooled to increase its strength and minimise its flaws. The SA algorithm for portfolio optimisation works
by treating each portfolio as a vector of decision variables representing the portfolio's allocation of assets. The
method starts with a preliminary solution and iteratively changes it by adjusting the asset allocations.
The algorithm uses a fitness function that considers the portfolio's predicted return and risk to evaluate each
proposed solution. The temperature parameter, which regulates the likelihood of accepting a worse solution
as the algorithm advances, is the main component of the SA algorithm. The algorithm's temperature is
set high at the start to explore many potential solutions. The temperature gradually drops as the
algorithm continues, lowering the likelihood of an inferior solution being accepted. It has been demonstrated
that the SA algorithm performs well in portfolio optimisation, especially when the problem is complicated and
multidimensional. The algorithm can deal with nonlinear and non-convex optimisation problems and swiftly
converge to a close-to-optimal solution. One potential drawback is the SA algorithm's sensitivity to the
selection of its parameters, such as the starting temperature and the cooling schedule. However, there are
ways to tune these settings to boost the algorithm's effectiveness. The SA algorithm has been demonstrated
to occasionally outperform other well-known optimisation algorithms, making it a viable method for portfolio
optimisation in general. It is crucial to carefully examine the outcomes and consider various portfolio
optimisation strategies, as with any optimisation technique.
16.11 Conclusion
Building an investment portfolio that maximises profits while minimising risk is known as portfolio optimisa-
tion. As it aids in balancing risk and return objectives, it is a crucial task for investors, fund managers, and
financial institutions. Many different optimisation methods exist, including Mean-Variance Optimization,
Minimum Variance Optimization, Risk Parity, Genetic Algorithms, Firefly Algorithms, and many others.
Choosing an appropriate technique depends on the problem's complexity, size, and aim. Each method has
advantages and limits. While newer algorithms like Differential Evolution, Bat Algorithm, Cuckoo Search,
Spider Monkey Optimization, and Simulated Annealing are becoming more popular due to their capacity
to handle complex, multi-dimensional, and non-convex problems, traditional optimisation techniques like
Mean-Variance Optimization continue to be widely used. Finally, portfolio optimisation is an important
task that calls for careful consideration of several variables, such as asset allocation, diversification, and risk
management. Investors and financial institutions should keep researching and comparing various strategies
to discover the best one, because choosing the proper optimisation methodology is essential for obtaining
optimal portfolio performance.
4. Safety: The layout should be planned with personnel safety in mind, ensuring enough room for people
to move around safely and that machinery and equipment are positioned in areas without worker risk.
5. Accessibility: The facility's plan should be thought out so that all areas are easily accessible and
people, tools, and materials may move about freely without being constrained.
Designing a facility's layout can be complicated, but several techniques and technologies can help, such
as computer-aided design (CAD) software, simulation modelling, and flowcharting. The ultimate objective
is to design a facility plan that meets the organisation's and its clients' needs while being functional, safe,
and economical.
preset stopping criterion—like a set number of iterations or a minimum improvement in the fitness func-
tion—is satisfied. Differential Evolution can be utilized as a sophisticated optimisation algorithm to resolve
challenging facility layout design issues. By specifying suitable decision variables and fitness functions, we
may use DE to create layouts that are effective, efficient, and customized to the facility's unique needs.
17.5 Solution by the Firefly Algorithm
A population-based optimisation technique called the Firefly technique (FA) was developed in response to
the flashing behaviour of fireflies. Facility layout design, which entails maximising the arrangement of offices,
workstations, and equipment within a facility, is one issue to which FA has been successfully applied. The
fundamental idea behind FA is to replicate the flashing behaviour of fireflies, using their attraction to one another and their proximity. Attractiveness is indicated by the brightness of a firefly's flash, which is a function of its fitness or objective value; distance is measured by the Euclidean distance between fireflies in the search space. The algorithm works by iteratively adjusting each firefly's position based on its own attractiveness and that of its neighbours. To use FA for facility layout design, we must specify the decision variables characterising the layout, such as each department's location and size, the location of each workstation, and the setup of the facility's equipment. The fitness function would gauge the layout's effectiveness and productivity, considering factors like the space between departments, the movement of personnel and goods, and the ease of access to equipment. Depending on the unique requirements of the situation, FA can be applied in various ways. One typical strategy is using a swarm of fireflies that evolves over several iterations or generations; each iteration updates each firefly's position, producing a new population of potentially improved solutions. The algorithm ends when a workable solution is discovered
or a preset stopping criterion—like a set number of iterations or a minimum improvement in the fitness
function—is satisfied. As a sophisticated optimisation technique, the Firefly technique can be utilised to
resolve challenging facility layout design issues. By specifying proper decision variables and fitness functions,
we may employ FA to develop layouts that are efficient, productive, and suited to the specific demands of
the facility.
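As an illustration only, the FA logic described above can be sketched in a few lines of Python. The layout fitness function is replaced here by a toy objective (squared distance of a placement vector to an assumed "ideal" position); the function names, parameter values, and instance are our own illustrative assumptions, not part of any standard library.

```python
import math
import random

def firefly_minimize(objective, dim, n_fireflies=15, iters=60,
                     beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Minimal Firefly Algorithm sketch: brighter (fitter) fireflies
    attract dimmer ones; attractiveness decays with squared distance."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_fireflies)]
    for _ in range(iters):
        light = [objective(x) for x in pop]  # lower objective = brighter
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:  # firefly j is brighter: move i towards j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    light[i] = objective(pop[i])
    best = min(pop, key=objective)
    return best, objective(best)

# Toy stand-in for a layout fitness: squared distance to an "ideal" placement.
ideal = [1.0, 2.0, 3.0]
fitness = lambda x: sum((a - b) ** 2 for a, b in zip(x, ideal))
best, val = firefly_minimize(fitness, dim=3)
```

In a real layout problem the vector would encode department coordinates and the objective would score material flow and adjacency, but the attraction-and-random-walk mechanics are the same.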
ness of the bat's echolocation pulse. As the algorithm moves forward, it iteratively changes each bat's position and frequency, producing a fresh set of potentially better solutions. To apply BA to facility layout design, we must establish the decision variables that describe the layout, such as each department's
location and size, the location of each workstation, and the setup of the facility’s equipment. The fitness
function would gauge the layout’s effectiveness and productivity, considering things like the space between
departments, the movement of personnel and goods, and the ease of access to equipment. Depending on
the unique requirements of the problem, BA can be applied in various ways. One typical strategy is using a
population of bats that has evolved over several iterations or generations. The position and frequency of each
bat are updated throughout each loop, creating a fresh population of potentially more effective solutions.
When a workable solution is discovered or a preset stopping criterion—like a set number of iterations or a
minimum improvement in the fitness function—is satisfied, the algorithm ends. The Bat technique can be
utilised as a sophisticated optimisation technique to resolve challenging facility layout design issues. We can
use BA to create layouts that are effective, productive, and suited to the facility’s particular requirements
by establishing suitable decision variables and fitness functions.
around the facility throughout each iteration and create solutions using the pheromone trails and heuristic
data. The method ends when a good solution is discovered or a specified stopping criterion is satisfied, such
as a maximum number of iterations or a minimum improvement in the fitness function. The pheromone
trails are updated based on the quality of the solutions discovered by the ants. In conclusion, Ant Colony
Optimization is a potent optimisation approach that can be utilised to address challenging facility layout
design issues. We can use ACO to create layouts that are effective, productive, and suited to the facility’s
unique needs by specifying suitable decision variables and fitness functions.
17.11 Conclusion
Optimizing the arrangement of offices, workstations, and equipment within a facility is a difficult
challenge. The objective is to create a layout that is effective, productive, and suited to the
facility’s particular requirements. Numerous optimization strategies, such as genetic algorithms, particle
swarm optimization, ant colony optimization, simulated annealing, firefly algorithm, bat algorithm, cuckoo
search, and spider monkey optimization, can be used to tackle this problem. Each algorithm has advantages
and disadvantages, and the best one to use depends on the particulars of the issue. Generally speaking,
specifying the proper decision variables, objective functions, and the problem’s parameters and restrictions
is necessary for optimization methods. The algorithms produce new answers iteratively, assess their quality
or fitness, and change the solutions by some search strategy. Optimization algorithms in facility layout design
can help save expenses, boost production, and enhance a facility’s overall efficiency. Facility managers can
use these algorithms to design layouts optimized for their unique requirements and flexible enough to alter
when the production process or the business environment does.
18 Vehicle Routing Problem
The Vehicle Routing Problem (VRP) is a well-known optimisation problem in which a fleet of vehicles must be routed to visit a set of customers as efficiently as possible. The challenge is to identify the best possible set of routes for the vehicles to take to minimise the overall distance travelled and any associated costs, such as
time, fuel use, or vehicle capacity utilisation. VRP has many valuable applications, including logistics, waste
collection, package delivery, and public transit. Depending on the particular requirements of the situation, the
VRP can be formulated in various ways. In its most basic version, the VRP assumes that every customer must
be visited precisely once and that every vehicle has the same capacity and speed. The challenge is choosing the
best possible set of routes for the cars to reduce their overall mileage or other associated expenditures. The
Capacitated Vehicle Routing Problem (CVRP) is the name given to this issue. The VRP can be expanded to
include more restrictions and complexities, such as time windows (when customers must be visited), various
depots (where the vehicles start and conclude their routes), heterogeneous cars (with varying capacity or
speeds), and other pragmatic considerations. Numerous optimisation strategies, including heuristic and
metaheuristic ones like genetic algorithms, simulated annealing, ant colony optimization, and particle swarm
optimization, as well as exact approaches like branch-and-bound and dynamic programming, can be used
to solve the VRP. The particulars of the problem, such as its size, the number of variables and constraints,
and the needed level of excellence and optimality of the solution, determine the algorithm to be used.
Exact approaches are typically more suited for small to medium-sized instances, while heuristic and
metaheuristic algorithms are better suited for larger, harder instances. The VRP can be solved,
resulting in significant cost reductions and logistical and transportation efficiency gains. Businesses may
save fuel consumption, vehicle wear and tear, and labour expenses by optimising the routing of their cars
while also improving customer satisfaction and overall productivity.
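To make the route-construction idea concrete, here is a minimal Python sketch of a greedy nearest-neighbour heuristic for a capacity-constrained instance: the vehicle keeps visiting the nearest unserved customer that still fits its capacity, and a new route is opened when nothing fits. The coordinates, demands, and capacity are invented for illustration; real VRP solvers are far more sophisticated.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_routes(depot, customers, demand, capacity):
    """Greedy CVRP construction: repeatedly visit the nearest unserved
    customer that still fits in the vehicle; open a new route when full."""
    unserved = set(customers)
    routes = []
    while unserved:
        load, pos, route = 0, depot, []
        while True:
            feasible = [c for c in unserved if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(pos, c))
            route.append(nxt)
            unserved.discard(nxt)
            load += demand[nxt]
            pos = nxt
        routes.append(route)
    return routes

def total_distance(depot, routes):
    total = 0.0
    for route in routes:
        pos = depot
        for c in route:
            total += dist(pos, c)
            pos = c
        total += dist(pos, depot)  # each vehicle returns to the depot
    return total

depot = (0.0, 0.0)
customers = [(2.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
demand = {(2.0, 0.0): 1, (4.0, 0.0): 1, (0.0, 3.0): 1}
routes = nearest_neighbour_routes(depot, customers, demand, capacity=2)
cost = total_distance(depot, routes)
```

Constructions like this often serve as starting solutions for the metaheuristics discussed in the following subsections.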
or fitness using an objective function. The evolution of natural species, in which individuals struggle for
resources and reproduce to produce new offspring, serves as an inspiration for the search process. Three sig-
nificant operations make up the DE algorithm: mutation, crossover, and selection. A new solution is created
in the mutation operation by scaling the difference between two previously chosen solutions and adding it to
a third solution. The present and new solutions are joined to create a trial solution in the crossover process.
The trial solution is either accepted or rejected during the selection operation, depending on its suitability
or quality. To apply the DE algorithm to the VRP, a solution is expressed as a set of vehicle routes that visit the customers. Typically, the total distance covered by the vehicles or some related cost parameter is used
to define the objective function. The method continues by applying the mutation, crossover, and selection
operations to the existing solutions to generate new ones iteratively. According to experimental research, the
DE algorithm may generate high-quality solutions for the VRP with a low computational cost. Additionally,
the algorithm has been expanded to consider the VRP’s additional restrictions and complications, including
time windows, various depots, and heterogeneous vehicles. However, the DE algorithm’s performance may
vary depending on the particular problem instance and parameter choices, and it may not always find the optimal solution.
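One simple way to apply the continuous mutation, crossover, and selection operators described above to a routing problem is a random-key encoding, sketched below: each real-valued vector is decoded (by sorting) into a visiting order for a single vehicle. The instance and parameter values are illustrative assumptions, not a production solver.

```python
import math
import random

def tour_length(order, points, depot):
    pos, total = depot, 0.0
    for i in order:
        total += math.dist(pos, points[i])
        pos = points[i]
    return total + math.dist(pos, depot)

def de_route(points, depot, pop_size=20, iters=150, F=0.6, CR=0.9, seed=1):
    """Differential Evolution with a random-key encoding: each real-valued
    vector decodes (by argsort) to a customer visiting order."""
    rng = random.Random(seed)
    n = len(points)
    decode = lambda v: sorted(range(n), key=lambda i: v[i])
    pop = [[rng.random() for _ in range(n)] for _ in range(pop_size)]
    fit = [tour_length(decode(v), points, depot) for v in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([k for k in range(pop_size) if k != i], 3)
            # Mutation: scaled difference of two vectors added to a third.
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(n)]
            # Crossover: mix target and mutant genes.
            trial = [mutant[d] if rng.random() < CR else pop[i][d] for d in range(n)]
            # Selection: keep the better of target and trial.
            f = tour_length(decode(trial), points, depot)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=lambda i: fit[i])
    return decode(pop[best]), fit[best]

depot = (0.0, 0.0)
points = [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
order, length = de_route(points, depot)
```

The random-key trick is what lets DE's arithmetic operators, which are defined on real vectors, act on a combinatorial ordering.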
have demonstrated that the PSO algorithm can generate superior VRP solutions at a cost-effective level of
computation. Additionally, the algorithm has been expanded to consider the VRP’s additional restrictions
and complications, including time windows, various depots, and heterogeneous vehicles. However, the PSO
algorithm’s performance may vary depending on the particular issue instance and parameter choices, and it
may only sometimes ensure the discovery of the ideal solution.
BA to the VRP. The total distance covered by the vehicles or some related cost parameter is typically
used to define the objective function. Based on the quality of each bat’s individual best solution and the
best solution the swarm has so far discovered, the algorithm continues by repeatedly updating the position
and velocity of the bats. Exploring the search space and utilising potential solutions must be balanced
during the search process. According to experimental research, the BA algorithm can generate superior
VRP solutions with a reasonable computational cost. Additionally, the algorithm has been expanded to
consider the VRP’s additional restrictions and complications, including time windows, various depots, and
heterogeneous vehicles. However, the BA algorithm’s performance may vary depending on the particular
issue instance and parameter choices, and it may not always ensure the discovery of the best solution.
18.10 Solution by the Simulated Annealing Algorithm
Simulated Annealing (SA) is a metaheuristic optimization algorithm commonly used to solve combinatorial
optimization problems such as the vehicle routing problem (VRP). The annealing process inspires SA in
metallurgy. The annealing process involves heating and cooling the metal to reduce defects and improve its
properties. Here is an overview of how to apply the SA algorithm to solve the VRP.
Step 1: Initialization. Generate an initial solution, which can be a set of routes for the vehicles, and evaluate its fitness based on the VRP objective function. This is usually to minimise the total distance travelled or the cost incurred.
Step 2: Neighbourhood search. Generate a new solution by making minor changes to the current solution, e.g., swapping two customers between two routes or adding a new route to the solution, and evaluate the fitness of the new solution.
Step 3: Acceptance. Decide whether to accept or reject the new solution based on the Metropolis criterion, which compares the fitness difference between the current and new solutions against a probability derived from the cooling schedule. If the new solution is better, accept it unconditionally; if it is worse, accept it with a probability that decreases as the temperature drops.
Step 4: Cooling. Cool down the system according to a cooling schedule that determines how quickly the temperature drops. A cooling schedule can be linear, geometric, or any other function that decreases the temperature over time.
Step 5: Termination. Stop the algorithm when the termination criteria are met, e.g., a maximum number of iterations is reached or a satisfactory solution is found.
By repeating steps 2-4 until the termination criteria are met, the SA algorithm can explore the solution space and converge to a near-optimal solution for the VRP. Additionally, the performance of the SA algorithm can be improved using various techniques, such as perturbing the cooling schedule, integrating local search algorithms, or using adaptive temperature control strategies.
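The five SA steps can be sketched in Python as follows. This is a minimal single-vehicle illustration with a swap neighbourhood, Metropolis acceptance, and a geometric cooling schedule; the instance and parameter values are invented for demonstration.

```python
import math
import random

def route_cost(order, points, depot):
    pos, total = depot, 0.0
    for i in order:
        total += math.dist(pos, points[i])
        pos = points[i]
    return total + math.dist(pos, depot)

def simulated_annealing(points, depot, t0=10.0, cooling=0.95,
                        iters_per_temp=30, t_min=1e-3, seed=2):
    """SA sketch for a single-vehicle route: swap two customers (Step 2),
    accept by the Metropolis criterion (Step 3), cool geometrically (Step 4)."""
    rng = random.Random(seed)
    current = list(range(len(points)))      # Step 1: random initial route
    rng.shuffle(current)
    cost = route_cost(current, points, depot)
    best, best_cost = current[:], cost
    t = t0
    while t > t_min:                        # Step 5: stop when temperature is spent
        for _ in range(iters_per_temp):
            i, j = rng.sample(range(len(points)), 2)
            cand = current[:]
            cand[i], cand[j] = cand[j], cand[i]   # neighbourhood move
            c = route_cost(cand, points, depot)
            # Metropolis criterion: always accept improvements; accept
            # worse moves with probability exp(-delta / t).
            if c < cost or rng.random() < math.exp(-(c - cost) / t):
                current, cost = cand, c
                if cost < best_cost:
                    best, best_cost = current[:], cost
        t *= cooling                        # geometric cooling schedule
    return best, best_cost

depot = (0.0, 0.0)
points = [(0.0, 1.0), (0.0, 2.0), (0.0, 3.0), (0.0, 4.0), (0.0, 5.0)]
best, best_cost = simulated_annealing(points, depot)
```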
18.11 Conclusion
In conclusion, the Vehicle Routing Problem (VRP) is a challenging optimization issue that involves determining the most cost- or distance-effective routes for a fleet of vehicles to serve a group of clients. The VRP can be solved using a variety of methods, such as exact ones like branch and bound and dynamic programming, and metaheuristic ones like Genetic Algorithms (GA), Particle Swarm Optimization (PSO),
Ant Colony Optimization (ACO), and Simulated Annealing (SA). Each method has pros and cons, and the
best solution depends on the scope of the issue, the available computing power, and the amount of optimality
required. While metaheuristic algorithms can handle larger instances of the problem and produce
reasonable solutions in a reasonable period, exact approaches are best suited for small cases of the problem
and can guarantee an optimal answer. The VRP is a critical challenge in logistics and transportation overall, and
breakthroughs in optimization algorithms can result in considerable increases in effectiveness, cost savings,
and environmental impact.
Simulated annealing: This approach is based on the physical annealing process, which involves heating
and cooling a material to reduce its energy. In simulated annealing, the system is treated as a physical one, and a solution is discovered by gradually cooling it. Ant colony optimisation: This approach was developed from the way ant colonies leave pheromone trails to communicate with one another. In this approach, a colony of virtual
ants follows pheromone trails to find a satisfactory solution to the scheduling problem.
The tabu search algorithm avoids revisiting previously investigated solutions by tracking them in a
memory structure. A collection of tabu rules limiting possible moves is used to direct the search process.
Generally, parallel machine scheduling is a complex issue without a universally applicable answer. The
particular task and associated limitations determine the algorithm or technique to use.
To create the next generation, select the best solutions from the parent and new candidate (trial)
populations.
Repeat steps 3-6 until a stopping condition is met (e.g., a maximum number of generations or a
minimum fitness level is reached).
DE has successfully solved the parallel machine scheduling problem, and it can produce good results with
only a few function evaluations. However, the specific situation and the selection of parameters, such as the
mutation rate and crossover frequency, could affect how well the algorithm performs.
acceleration coefficients, may affect how well the method performs. The algorithm's success also depends on
designing a well-constructed fitness function.
19.7 Solution by the Bat Algorithm
The Bat Algorithm (BA), a metaheuristic optimisation tool, can resolve the parallel machine scheduling
issue. BA’s core concept is to mimic bats’ echolocation behaviour while looking for the best answer.
In parallel machine scheduling, the aim is to find the distribution of tasks among machines that minimises the overall completion time. The BA algorithm can be applied as follows:
Create a colony of bats, each representing a potential assignment of tasks to machines.
Evaluate each bat's fitness; the fitness function determines the overall job completion time for
a particular assignment of tasks to machines. Initialise each bat's frequency and velocity at random.
Adjust each bat's frequency and velocity based on its own best solution and the population's best solution so far.
Assess the fitness of each new candidate solution.
Update each bat's best solution and the population's best solution so far.
Repeat steps 4-6 until a stopping condition is met (e.g., a maximum number of iterations or a
minimum fitness level is reached). The parallel machine scheduling problem has been successfully solved
using BA, which can produce satisfactory results with few function evaluations. However, the algorithm's
performance may vary depending on the particular problem at hand and the parameters chosen, such as the
bats' loudness and pulse rate. The algorithm's success also relies on designing a well-constructed fitness function.
Each ant selects a machine for each task based on a probabilistic decision rule that takes into account
the pheromone trail levels and heuristics (such as the task’s processing time and the machines’ capacity).
Analyze each ant’s physical condition. For a specific job assignment to machines, the fitness function
determines the overall completion time of all tasks.
According to each ant’s degree of fitness, update the pheromone trail levels. For the machine-task
assignments that lead to better solutions, the pheromone levels are raised, while those that lead to worse
results are lowered.
Steps 3-5 should be repeated until a stopping condition is satisfied (such as the maximum number of
iterations or the minimum fitness level). The parallel machine scheduling problem has been successfully
solved by ACO, which can produce satisfactory results with a minimal number of function evaluations. The
specific problem and the parameters used, such as the pheromone update rate and the ratio of exploration
to exploitation, could, however, affect how well the method performs. The success of the algorithm also
depends on the creation of a strong fitness function.
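The pheromone mechanics described above can be sketched in Python as follows. Pheromone values tau[task][machine] bias each ant's machine choice alongside a lighter-machine heuristic; the instance and parameter values are illustrative assumptions.

```python
import random

def aco_schedule(proc_times, n_machines, n_ants=10, iters=60,
                 evaporation=0.3, seed=4):
    """ACO sketch: pheromone tau[task][machine] biases each ant's
    machine choice; trails on good assignments are reinforced."""
    rng = random.Random(seed)
    n = len(proc_times)
    tau = [[1.0] * n_machines for _ in range(n)]
    best_assign, best_makespan = None, float("inf")
    for _ in range(iters):
        solutions = []
        for _ in range(n_ants):
            loads = [0.0] * n_machines
            assign = []
            for task in range(n):
                # Choice probability ~ pheromone x heuristic (prefer lighter machines).
                weights = [tau[task][m] / (1.0 + loads[m]) for m in range(n_machines)]
                m = rng.choices(range(n_machines), weights=weights)[0]
                assign.append(m)
                loads[m] += proc_times[task]
            ms = max(loads)
            solutions.append((ms, assign))
            if ms < best_makespan:
                best_makespan, best_assign = ms, assign
        # Evaporate, then deposit pheromone inversely proportional to makespan.
        for task in range(n):
            for m in range(n_machines):
                tau[task][m] *= (1.0 - evaporation)
        for ms, assign in solutions:
            for task, m in enumerate(assign):
                tau[task][m] += 1.0 / ms
    return best_assign, best_makespan

proc = [5.0, 4.0, 3.0, 2.0, 2.0]   # total 16 on 2 machines: ideal makespan 8
assign, ms = aco_schedule(proc, n_machines=2)
```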
19.11 Conclusion
Parallel machine scheduling, which entails distributing a set of jobs across various machines so as to
minimise the overall completion time, is a challenging and crucial topic in manufacturing and other sectors.
Metaheuristic optimisation algorithms, which explore the solution space efficiently and discover suitable
solutions quickly, have proven helpful in resolving this issue.
The scheduling of parallel machines has been accomplished using a variety of metaheuristic algorithms, in-
cluding Genetic Algorithms, Particle Swarm Optimization, Ant Colony Optimization, Simulated Annealing,
Differential Evolution, Firefly Algorithm, Cuckoo Search, and Spider Monkey Optimization. Each method
has advantages and disadvantages, and the best solution relies on the particular problem and limitations.
In addition to picking the correct algorithm, selecting the proper algorithm parameters and designing a
solid fitness function is also essential for getting good performance. Combining these algorithms with other
optimisation strategies, such as hybrid or local search algorithms, can further boost their performance.
Overall, research on efficient metaheuristic optimisation methods for scheduling parallel machines is still
ongoing, and future developments in this area will allow for more effective and efficient scheduling in various
industries.
exchanging two items or moving one item to a different location for each employed bee. Evaluate the
fitness of each adjusted solution and, if the modification yields an improvement, update the bee's position.
The phase of the onlooker bees: For each bee, choose a solution created by an employed bee using a
probability distribution determined by the solutions' fitness. Assess the fitness of the selected solution
and, if it is better, update the onlooker bee's position.
The phase of the scout bees: Produce a new random solution for each scout bee and assess its fitness. If
the new solution is superior, update the scout bee's position. The employed, onlooker, and scout bee phases
should be repeated until a termination requirement is fulfilled. Update: Update the best solution discovered
thus far.
Bin Packing Problem solutions produced by the ABC algorithm have been demonstrated to be competitive with those of other state-of-the-art algorithms. However, the choice of algorithm parameters, such as the
population size, the number of iterations, and the probability distribution used in the onlooker bee phase,
might have an impact on the performance of the ABC method. To get good performance, careful tuning
of the algorithm parameters is required.
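As a hedged illustration of the three bee phases, the sketch below encodes each food source as an item permutation decoded by the first-fit rule; the sizes and capacity are an invented toy instance, and the neighbourhood move is a simple swap.

```python
import random

def first_fit_bins(order, sizes, capacity):
    """Decode an item order into bins with the first-fit rule."""
    bins = []
    for i in order:
        for b in bins:
            if sum(sizes[j] for j in b) + sizes[i] <= capacity:
                b.append(i)
                break
        else:
            bins.append([i])
    return bins

def abc_bin_packing(sizes, capacity, n_bees=10, iters=100, limit=10, seed=5):
    """ABC sketch: each food source is an item permutation; a neighbour
    swaps two items; fitness is the number of bins after first-fit decoding."""
    rng = random.Random(seed)
    n = len(sizes)
    cost = lambda order: len(first_fit_bins(order, sizes, capacity))
    sources = []
    for _ in range(n_bees):
        p = list(range(n)); rng.shuffle(p)
        sources.append(p)
    costs = [cost(p) for p in sources]
    trials = [0] * n_bees
    for _ in range(iters):
        for i in range(n_bees):            # employed bees: local move per source
            cand = sources[i][:]
            a, b = rng.sample(range(n), 2)
            cand[a], cand[b] = cand[b], cand[a]
            c = cost(cand)
            if c < costs[i]:
                sources[i], costs[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1
        weights = [1.0 / c for c in costs]  # onlookers favour fewer bins
        for _ in range(n_bees):
            i = rng.choices(range(n_bees), weights=weights)[0]
            cand = sources[i][:]
            a, b = rng.sample(range(n), 2)
            cand[a], cand[b] = cand[b], cand[a]
            c = cost(cand)
            if c < costs[i]:
                sources[i], costs[i], trials[i] = cand, c, 0
        for i in range(n_bees):            # scouts abandon stagnant sources
            if trials[i] > limit:
                p = list(range(n)); rng.shuffle(p)
                sources[i], costs[i], trials[i] = p, cost(p), 0
    best = min(range(n_bees), key=lambda i: costs[i])
    return sources[best], costs[best]

sizes = [5, 5, 4, 3, 3, 2, 2]   # total 24; with capacity 8, 3 bins are possible
order, n_bins = abc_bin_packing(sizes, capacity=8)
```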
The GA then iteratively repeats this process until a stopping requirement is satisfied, such as a maximum
number of generations being reached or a good solution being found. Usually, the best individual in the final
population is chosen as the solution. GAs may not always find the best solution to the Bin Packing Problem,
but they frequently produce a reasonable one in a fair length of time, especially for larger instances of the
problem. They can also be simply modified to deal with variants of the problem, such as multiple containers
or restrictions on the orientation of the items.
Move each firefly towards its brighter neighbours using a random walk algorithm that considers the
attractiveness and randomness parameters.
Update the population and assess the fitness of the relocated fireflies.
Repeat steps 3-5 until a stopping requirement is satisfied.
We may adapt the procedure above to apply the FA to the Bin Packing Problem as follows: In the initial
stage, create a population of packing arrangements, each representing a potential solution.
Using the specified fitness function, determine the packing density of each arrangement.
Based on their proximity and packing density, evaluate each arrangement's attractiveness to its neighbours.
Use a random walk algorithm that considers the attractiveness and randomness parameters to move
each arrangement towards its brighter neighbours. Moving an arrangement in this context entails swapping
or rotating objects inside containers.
Update the population and determine the packing density of the relocated arrangements.
Repeat steps 3-5 until a stopping requirement is satisfied. Any condition supplied by the user, such
as achieving a specific level of packing density or a maximum number of repetitions, might serve as the
stopping criterion. The FA has been demonstrated to successfully solve combinatorial optimisation problems,
including the Bin Packing Problem.
following each repetition. The algorithm terminates and gives the best result whenever the allotted number
of iterations has been reached, or a predetermined threshold has been hit.
In conclusion, the Bin Packing Problem, which is challenging to address optimally using conventional
techniques, may be resolved using the Bat Algorithm. To find a near-optimal answer in a reasonable
period, the method iteratively modifies a population of candidate solutions.
the algorithm can explore different regions of the solution space by escaping local optima. The acceptance
probability function, which is built on the Metropolis criterion, strikes a balance between exploiting the
existing solution and exploring novel ones.
Step 4: Update the temperature
After a predetermined number of rounds, the temperature parameter is decreased. This aids the algorithm's ability to concentrate on the present solution and move closer to the best one.
Step 5: Termination
The algorithm is stopped when a predetermined stopping requirement is satisfied. This can be a maximum number of iterations or a minimal change in the value of the objective function.
In conclusion, Simulated Annealing is a powerful approach for resolving the Bin Packing Problem and other
combinatorial optimisation issues. The algorithm avoids local optima and converges towards the best solution
by balancing the exploration of new alternatives with the exploitation of the existing solution.
20.10 Conclusion
The Bin Packing Problem is a well-known and thoroughly researched problem in computer science. The
challenge is to pack items of various sizes efficiently into a limited space. Since there are numerous potential
packings and it can frequently be tricky to locate the ideal one, the problem is complex.
Heuristics or approximation algorithms are among the most popular methods for solving the Bin Packing
Problem; while they offer good answers, they cannot guarantee optimality. The first-fit, best-fit, and worst-fit
algorithms are the heuristics most frequently utilised for packing problems. These algorithms are simple to
use and typically yield good outcomes. The development of optimisation algorithms, including metaheuristics
like genetic algorithms and simulated annealing, has advanced significantly in recent years. It has been
demonstrated that these algorithms, which are more sophisticated than conventional heuristics, produce
superior outcomes.
Overall, the Bin Packing Problem continues to be a challenging and exciting problem in computer science.
Future studies should focus on creating more sophisticated optimisation algorithms and heuristics that can
effectively tackle the problem for larger and more complicated datasets.
21 Assignment problem
The assignment problem is a well-known optimisation issue that includes determining the best distribution
of jobs or resources to maximise overall gain or reduce overall loss. This issue can be encountered in various
situations, including matching hospital programmes for medical residents, assigning individuals to jobs,
allocating robots to tasks, and scheduling airline crew members.
The goal of the assignment problem is to choose the best assignment possible to minimise or maximise
the overall cost or profit. The issue can be expressed mathematically as a linear programming
problem, where the objective function and constraints capture the cost or benefit of each assignment, and
decision variables represent the job or resource assignments to agents. The Hungarian method, the auction
algorithm, and the branch-and-bound approach are just a few of the many algorithms and techniques that
can be used to solve the assignment problem. The computational sophistication, memory consumption, and
scalability of these approaches vary.
Numerous real-world applications of the assignment problem exist, including supply chain management,
logistics, sports analytics, and online dating. The solutions to this fundamental issue in operations research
and decision science have significant repercussions for effectiveness, equity, and competitiveness.
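For a small instance, the optimal assignment can be found by exhaustively checking every permutation, which makes the objective concrete (the Hungarian method or branch-and-bound would be used for larger instances). The cost matrix below is invented for illustration.

```python
from itertools import permutations

def solve_assignment(cost):
    """Exhaustive search for the minimum-cost one-to-one assignment of
    n agents to n tasks (fine for small n; the Hungarian method scales better)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):       # perm[i] = task given to agent i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# cost[i][j] = cost of assigning agent i to task j (illustrative values)
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = solve_assignment(cost)
# Optimal here: agent 0 -> task 1, agent 1 -> task 0, agent 2 -> task 2, cost 5
```

Since the search space grows as n!, this brute-force check is only a baseline against which the metaheuristics in the following subsections can be compared.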
The honeybees’ foraging strategy inspired the metaheuristic optimisation algorithm known as the ABC
algorithm. The algorithm comprises a swarm of artificial bees that iteratively update their positions while
assessing the fitness of potential solutions to find the best one. To apply the ABC algorithm to the
assignment problem, we can define the decision variables as a binary vector X = (x1, x2, ..., xn), where
xi = 1 if agent i is assigned to a task and xi = 0 otherwise. The cost of the assignment can be represented
by the objective function f(X), which is the total of the costs of the agents assigned to their various tasks.
The initial step of the ABC algorithm is to generate a population of potential solutions at random,
represented by the placement of the bees in the search space. Each bee receives a randomly chosen solution,
and the fitness of the solution is assessed by calculating the assignment's cost. The global best solution is
the best one identified so far. The algorithm then repeats the following steps:
The phase of the employed bees: Each bee chooses a nearby solution and assesses its fitness. If the new
solution is superior to the existing one, the bee moves to it; if not, the bee stays in its current place.
The phase of the onlooker bees: Each bee chooses a solution with probability proportionate to the
solutions' fitness. The chosen solution is then assessed, and if it is superior, the bee changes its location.
The phase of the scout bees: If an employed bee or an onlooker bee fails to discover a better solution after
a predetermined number of iterations, the bee turns into a scout and generates a new solution at random.
Global update: The previous best solution is updated if a new global best solution is discovered.
The algorithm stops when a stopping requirement is satisfied, such as a set number of iterations or a
satisfactory global best solution.
The assignment problem and other combinatorial optimisation issues can both be resolved using the ABC
algorithm. Optimizing the algorithm’s parameters, such as the population size and iterations, determines
how well it performs.
21.3 Solution by the Genetic Algorithms
The Genetic Algorithm (GA) is a well-known metaheuristic optimisation technique inspired by natural
selection. The assignment problem is just one of the many optimisation issues that GA has successfully
resolved. By identifying an ideal or nearly ideal assignment of agents to tasks, the GA can be utilised to
reduce the overall assignment cost in this situation.
The GA solves the assignment problem by keeping track of a population of potential solutions, each
represented by a binary string of length n, where n is the total number of agents or tasks. The value of
the ith bit in the string indicates whether the ith agent is assigned to the jth job, either (1) or (0). The
GA begins by initialising a population of potential solutions at random.
The GA moves forward iteratively by carrying out the following actions: Selection: Using a fitness
function that assesses each potential solution's fitness based on its overall cost, a subset of the population is
chosen for reproduction. For the algorithm to minimise the overall cost, the fitness function can be taken as
the negative of the assignment's total cost.
Crossover: A crossover operator is employed to produce a new offspring by combining two chosen parent
solutions. The offspring can be made using uniform, one-point or two-point crossover operators. Mutation:
A mutation operator is employed to inject new genetic information into the population by randomly flipping
bits in the progeny.
Replacement: If an offspring's fitness is higher than that of an existing candidate solution, it replaces that
solution in the population; typically, the offspring takes the place of the population's least fit candidate solution.
Termination: When a stopping requirement is satisfied, such as a maximum number of iterations or when
the fitness of the best candidate solution reaches an acceptable level, the algorithm is said to have terminated.
The GA has successfully solved the assignment problem and can discover nearly optimal solutions with a
modest computational effort. However, the choice of the algorithm's parameters, such as the population size,
the crossover rate, the mutation rate, and the halting criterion, affects how well it performs. To enhance the
performance of the essential GA, numerous modifications can be made, such as the use of elitism or adaptive
operators.
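As an illustrative sketch of these steps, the following Python code applies a GA to a small cost matrix. It uses a permutation encoding (agent i is assigned job perm[i]) rather than the binary string described above, since a permutation keeps every candidate feasible; the function name, default parameter values, and the specific operators (tournament selection, order crossover, swap mutation) are implementation choices, not prescribed by the text.

```python
import random

# Hypothetical GA sketch for the assignment problem.
# Encoding: perm[i] = job assigned to agent i (always a feasible assignment).
def ga_assignment(cost, pop_size=30, generations=200,
                  crossover_rate=0.9, mutation_rate=0.2, seed=0):
    rng = random.Random(seed)
    n = len(cost)
    total = lambda p: sum(cost[i][p[i]] for i in range(n))  # fitness = -total cost

    # Initialise a population of random permutations.
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: binary tournament (cheaper assignment wins).
        def pick():
            a, b = rng.sample(pop, 2)
            return a if total(a) < total(b) else b
        p1, p2 = pick(), pick()

        # Crossover: order crossover keeps the child a valid permutation.
        if rng.random() < crossover_rate:
            i, j = sorted(rng.sample(range(n), 2))
            child = [None] * n
            child[i:j] = p1[i:j]
            rest = [g for g in p2 if g not in child]
            for k in range(n):
                if child[k] is None:
                    child[k] = rest.pop(0)
        else:
            child = p1[:]

        # Mutation: swap the jobs of two agents.
        if rng.random() < mutation_rate:
            i, j = rng.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]

        # Replacement: the child replaces the least-fit member if it is better.
        worst = max(pop, key=total)
        if total(child) < total(worst):
            pop[pop.index(worst)] = child
    return min(pop, key=total)
```

For example, on the 3x3 cost matrix [[4, 1, 3], [2, 0, 5], [3, 2, 2]] the sketch returns a low-cost assignment such as [1, 0, 2].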
V_ij(t+1) = w * V_ij(t) + c1 * r1 * (pbest_ij − X_ij(t)) + c2 * r2 * (gbest_j − X_ij(t))

where V_ij(t) is the velocity of the ith particle in the jth dimension at time t, w is the inertia weight that controls the balance between exploration and exploitation, c1 and c2 are the cognitive and social learning factors that control the influence of the personal best and the global best on the particle's movement, r1 and r2 are random numbers between 0 and 1, pbest_ij is the personal best of the ith particle in the jth dimension, and gbest_j is the global best in the jth dimension.
Each particle's new position is then obtained by adding its updated velocity to its current position: X_ij(t+1) = X_ij(t) + V_ij(t+1).
Termination: When a stopping requirement is satisfied, such as a maximum number of iterations or when
the fitness of the best particle reaches a desirable level, the algorithm ends.
In conclusion, the PSO method is a valuable optimisation approach for solving the assignment problem. Its performance depends on the chosen parameters, such as the number of particles, the inertia weight, the cognitive and social learning factors, and the stopping criterion. The performance of the basic PSO can also be enhanced through modifications such as alternative velocity update rules or adaptive parameters.
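A minimal sketch of these update rules in Python is shown below for a continuous objective (the sphere function); applying PSO to the discrete assignment problem additionally requires an encoding or rounding scheme, which is omitted here. The function and parameter names are illustrative assumptions.

```python
import random

# Minimal PSO sketch: minimise f over `dim` continuous dimensions.
def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]        # personal bests
    gbest = min(pbest, key=f)[:]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                # V_ij(t+1) = w*V_ij + c1*r1*(pbest_ij - X_ij) + c2*r2*(gbest_j - X_ij)
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (pbest[i][j] - X[i][j])
                           + c2 * r2 * (gbest[j] - X[i][j]))
                # X_ij(t+1) = X_ij(t) + V_ij(t+1)
                X[i][j] += V[i][j]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
```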
x_j(t+1) = x_j(t) + beta * (x_i − x_j(t)) + alpha * (rand() − 1/2)
y_j(t+1) = y_j(t) + beta * (y_i − y_j(t)) + alpha * (rand() − 1/2)

where x_i and y_i are the coordinates of firefly i, x_j and y_j are the coordinates of firefly j, r is the distance between firefly i and firefly j, beta = beta0 * e^(−gamma * r^2) is the attractiveness drawing firefly j towards firefly i, beta0 is the attractiveness parameter, gamma is the light absorption coefficient, alpha is the random step size, and rand() is a random number generator.
The step-size updating function can be defined as follows:

alpha(t+1) = alpha0 * e^(−delta * (t/T)^2)

where alpha0 is the initial step size, T is the maximum number of iterations, and delta is the step size adjustment parameter.
Conclusion
The assignment problem can be solved using the Firefly Algorithm, a robust optimisation algorithm applicable to many optimisation problems. The method is simple and produces reliable solutions even for complex instances. It is crucial to set the algorithm's parameters correctly to ensure convergence and good performance.
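The firefly movement and step-size decay described above can be sketched in Python as follows; this is a generic continuous-optimisation version, and the names (firefly, beta0, gamma, alpha0) and the search range are assumptions for illustration, not taken from the text.

```python
import math
import random

# Hypothetical firefly sketch: minimise f over `dim` continuous dimensions.
def firefly(f, dim, n=15, iters=60, beta0=1.0, gamma=0.05, alpha0=0.3, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-3, 3) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        alpha = alpha0 * math.exp(-(t / iters) ** 2)       # decaying step size
        for i in range(n):
            for j in range(n):
                if f(X[j]) < f(X[i]):                      # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness falls with distance
                    X[i] = [xi + beta * (xj - xi) + alpha * (rng.random() - 0.5)
                            for xi, xj in zip(X[i], X[j])]
    return min(X, key=f)
```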
Ranking: List the bats in order of fitness, and record the best solution found so far. Then update each bat's velocity and position using the following formulae:

v_i(t+1) = v_i(t) + A * (X_best − x_i(t)) + r * (X_k − x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)

where A and r are parameters that control the amplitude and randomness of the search, respectively, v_i(t) and x_i(t) are the velocity and position of bat i at time t, X_best is the position of the best solution discovered so far, and X_k is the position of a randomly selected bat.
Local search: a local search strategy can be applied to refine each bat's position. This can be achieved by repeatedly swapping how agents are assigned to tasks and evaluating the resulting fitness.
Return the best solution (i.e., the assignment with the lowest cost or highest profit) as an output.
Compared to other metaheuristic algorithms, the Bat Algorithm offers advantages such as quick convergence and the capacity to handle complex problems. It also has disadvantages, such as the need to fine-tune several parameters and the risk of becoming stuck in local optima.
The Bat Algorithm can be implemented in any programming language, making it a helpful tool for solving the Assignment Problem and other optimisation problems.
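The ranking and update steps above can be sketched as follows. This is a continuous illustration with assumed names and parameter values; a small velocity damping factor (an implementation choice, not part of the formulae above) is included to keep the search numerically stable.

```python
import random

# Illustrative bat-style sketch: minimise f over `dim` continuous dimensions.
def bat(f, dim, n=20, iters=100, A=0.5, r=0.3, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    best = min(X, key=f)[:]              # best solution found so far
    for _ in range(iters):
        for i in range(n):
            k = rng.randrange(n)         # randomly selected bat X_k
            for j in range(dim):
                # v_i(t+1) = v_i(t) + A*(X_best - x_i) + r*(X_k - x_i)
                # (the 0.9 damping below is an added stabilisation, not in the text)
                V[i][j] = (0.9 * V[i][j]
                           + A * (best[j] - X[i][j])
                           + r * (X[k][j] - X[i][j]))
                # x_i(t+1) = x_i(t) + v_i(t+1)
                X[i][j] += V[i][j]
            if f(X[i]) < f(best):
                best = X[i][:]
    return best
```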
are exploration and exploitation. Here is a step-by-step solution to the Assignment Problem using the ACO algorithm:
Step 1: Set an initial pheromone value tau0 on each task-agent pair. The pheromone trail indicates to the ants how desirable each assignment is.
Step 2: Create a population of ants. Each ant represents a candidate solution to the assignment problem and moves through the problem's graph by following the pheromone trails.
Step 3: For each ant, choose an initial task at random, then select each subsequent assignment based on the pheromone trail and the desirability of the task.
Step 4: Evaluate each ant's fitness by summing the cost or profit of the assignments in its tour.
Step 5: Update the pheromone trails according to the fitness of the ants that have used each assignment. The pheromone trail is updated by the following equation:

tau(i,j) = (1 − rho) * tau(i,j) + sum(delta_tau(i,j))

where tau(i,j) denotes the pheromone trail between task i and agent j, rho is the pheromone evaporation rate, and delta_tau(i,j) is the quantity of pheromone deposited on the trail by the ants that have assigned task i to agent j.
Step 6: Check the stopping criterion. Stop the algorithm if the maximum number of iterations has been reached or the solution has converged; otherwise, return to Step 3.
Step 7: Return the best solution to the assignment problem found by the ants.
By following these steps, the ACO algorithm can solve the Assignment Problem effectively. It is important to note that the algorithm's performance depends on the choice of parameters such as the pheromone evaporation rate, the initial pheromone concentration, and the way the ants' selection probabilities are computed. It is therefore advisable to experiment with several parameter values to find what works best for a particular problem.
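The ACO procedure above can be sketched in Python as follows; the desirability heuristic 1/(1 + cost) and the deposit amount 1/(1 + tour cost) are illustrative assumptions, not specified in the text.

```python
import random

# Hedged ACO sketch for the assignment problem.
# tau[i][j] is the pheromone for assigning task i to agent j.
def aco_assignment(cost, n_ants=20, iters=50, rho=0.1, tau0=1.0, seed=0):
    rng = random.Random(seed)
    n = len(cost)
    tau = [[tau0] * n for _ in range(n)]   # initial pheromone trails
    best, best_cost = None, float('inf')
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):            # each ant builds one candidate assignment
            free = list(range(n))
            tour = []
            for i in range(n):
                # selection probability ~ pheromone * desirability (here 1/(1+cost))
                weights = [tau[i][j] / (1.0 + cost[i][j]) for j in free]
                j = rng.choices(free, weights=weights)[0]
                tour.append(j)
                free.remove(j)
            c = sum(cost[i][tour[i]] for i in range(n))   # fitness of this tour
            tours.append((tour, c))
            if c < best_cost:
                best, best_cost = tour, c
        # evaporation and deposit: tau(i,j) = (1 - rho)*tau(i,j) + sum(delta_tau(i,j))
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour, c in tours:
            for i in range(n):
                tau[i][tour[i]] += 1.0 / (1.0 + c)
    return best, best_cost
```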
21.11 Conclusion
In conclusion, the Assignment Problem is a well-known optimisation problem that entails distributing N agents among N jobs with the aim of either maximising total profit or minimising total cost. It can be solved using various methods and has practical applications in many disciplines, including operations research, computer science, and engineering.
The Hungarian algorithm, branch and bound, the genetic algorithm, particle swarm optimisation, ant colony optimisation, and simulated annealing are just a few of the exact and heuristic techniques suggested for addressing the Assignment Problem. Each method has advantages and disadvantages and is appropriate for particular problems and environments. Overall, the size, complexity, and requirements of the particular problem determine the choice of algorithm and parameter settings. The Assignment Problem is a significant problem that has received much attention and has real-world implications in numerous fields. Its effective solution can result in substantial advantages, including enhanced resource allocation, scheduling, and logistics.
Bibliography
[1] Maher GM Abdolrasol, SM Suhail Hussain, Taha Selim Ustun, Mahidur R Sarker, Mahammad A
Hannan, Ramizi Mohamed, Jamal Abd Ali, Saad Mekhilef, and Abdalrhman Milad. Artificial neural
networks based optimization techniques: A review. Electronics, 10(21):2689, 2021.
[2] Marco Dorigo, Vittorio Maniezzo, and Alberto Colorni. Ant system: optimization by a colony of
cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),
26(1):29–41, 1996.
[3] Russell Eberhart and James Kennedy. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, volume 4, pages 1942–1948. IEEE, 1995.
[4] Iztok Fister, Iztok Fister Jr, Xin-She Yang, and Janez Brest. A comprehensive review of firefly algorithms. Swarm and Evolutionary Computation, 13:34–46, 2013.
[5] Scott Kirkpatrick, C Daniel Gelatt Jr, and Mario P Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, 1983.
[6] Shuijia Li, Wenyin Gong, Ling Wang, Xuesong Yan, and Chengyu Hu. Optimal power flow by means
of improved adaptive differential evolution. Energy, 198:117314, 2020.
[7] Seyedali Mirjalili. Genetic algorithm. Evolutionary Algorithms and Neural Networks: Theory and Applications, pages 43–55, 2019.
[8] Kenneth V Price. Differential evolution. Handbook of Optimization: From Classical to Modern Approach,
pages 187–214, 2013.
[9] Salah Mortada Shahen. Enhancement on the modified artificial bee colony algorithm to optimize the vehicle routing problem with time windows.
[10] M Thirunavukkarasu, Yashwant Sawle, and Himadri Lala. A comprehensive review on optimization
of hybrid renewable energy systems using various optimization techniques. Renewable and Sustainable
Energy Reviews, 176:113192, 2023.
[11] Xin-She Yang and Suash Deb. Cuckoo search via Lévy flights. In 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), pages 210–214. IEEE, 2009.
[12] Xin-She Yang and Xingshi He. Bat algorithm: literature review and applications. International Journal of Bio-Inspired Computation, 5(3):141–149, 2013.