Team 9 CIA 3

Soft Computing

CIA 3 Report

Submitted by
FERDENO (Reg No.: 2062018, Email ID: [email protected])
CHATHERIYAN (Reg No.: 2062016, Email ID: [email protected])
YOGESH KUMAR S(Reg No.: 2062040, Email ID: [email protected])
MERLA GANESH REDDY (Reg No.: 2062030, Email ID: [email protected])

Under Supervision of

Dr. Sandeep Kumar

Professor, Department of Computer Science and Engineering,


CHRIST (Deemed to be University), Bangalore.

March 25, 2024


Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.1 Steps/Phases of ABC algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.2 Recent Development on ABC algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3 Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3.1 Steps/Phases of Differential evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3.2 Recent Development on Differential Evolution algorithm . . . . . . . . . . . . . . . . . 6
4 Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4.1 Steps/Phases of Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
4.2 Recent Development on Genetic algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 9
5 Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5.1 Steps/Phases of Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 10
5.2 Recent Development on Particle Swarm Optimization Algorithm . . . . . . . . . . . . 12
6 Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
6.1 Steps/Phases of Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
6.2 Recent Development on Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 15
7 Cuckoo Search Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
7.1 Steps/Phases of Cuckoo Search Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 16
7.2 Recent Development on Cuckoo Search Algorithm . . . . . . . . . . . . . . . . . . . . 18
8 Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
8.1 Steps/Phases of Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
8.2 Recent Development on Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 21
9 Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 22
9.1 Steps/Phases of SMO Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
9.2 Recent Development on Spider Monkey Optimization Algorithm . . . . . . . . . . . . 23
10 Ant Colony Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
10.1 Steps/Phases of Ant Colony Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 24
10.2 Recent Development on Ant Colony Algorithm . . . . . . . . . . . . . . . . . . . . . . 25
11 Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
11.1 Steps/Phases of Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
11.2 Recent Development on Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . 28
12 Intelligent Image Color Reduction and Quantization . . . . . . . . . . . . . . . . . . . . . . . 29
12.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 29
12.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
12.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
12.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 30
12.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
12.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
12.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
12.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 31

12.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 31
12.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
12.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
13 Minimum Spanning Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
13.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 32
13.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
13.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
13.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 33
13.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
13.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
13.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
13.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 34
13.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 34
13.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
13.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
14 Robot Path Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
14.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 36
14.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
14.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
14.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 37
14.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
14.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
14.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
14.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 38
14.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 39
14.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
14.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
15 Data Envelopment Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
15.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 40
15.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
15.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
15.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 41
15.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
15.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
15.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
15.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 42
15.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 42
15.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
15.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
16 Portfolio Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
16.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 43
16.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
16.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
16.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 44
16.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
16.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
16.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
16.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 46
16.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 46
16.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
16.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

17 Facility Layout Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
17.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 48
17.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
17.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
17.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 49
17.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
17.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
17.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
17.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 51
17.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 51
17.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
17.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
18 Vehicle Routing Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
18.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 53
18.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
18.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
18.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 54
18.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
18.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
18.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
18.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 56
18.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 56
18.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
18.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
19 Parallel Machine Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
19.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 58
19.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
19.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
19.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 59
19.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
19.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
19.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
19.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 61
19.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 61
19.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
19.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
20 Bin Packing Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
20.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 63
20.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
20.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
20.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 65
20.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
20.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
20.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
20.8 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 67
20.9 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
20.10 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
21 Assignment problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
21.1 Solution by the Artificial Bee Colony (ABC) Algorithms . . . . . . . . . . . . . . . . . 68
21.2 Solution by the Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
21.3 Solution by the Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

21.4 Solution by the Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . 70
21.5 Solution by the Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
21.6 Solution by the Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
21.7 Solution by the Bat Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
21.8 Solution by the Spider Monkey Optimization (SMO) Algorithm . . . . . . . . . . . . . 73
21.9 Solution by the Ant Colony Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 73
21.10 Solution by the Simulated Annealing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
21.11 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

Abstract

Abstract - You have to write about all the algorithms (please refer to Unit IV of the syllabus for Soft Computing)
and all ten problems (please refer to Unit V of the syllabus for Soft Computing). Make sure that each team member
has a significant contribution to this report. Please note that an identical report may lead to a penalty.
1 Introduction
This report contains a detailed study of the following algorithms:

1. Artificial Bee Colony (ABC) Algorithms [?]


2. Differential Evolution [8]
3. Genetic Algorithms [?]
4. Particle Swarm Optimization [?]

5. Firefly Algorithm
6. Cuckoo Search
7. Bat Algorithm

8. Spider Monkey Optimization (SMO) Algorithm


9. Ant Colony Algorithms
10. Simulated Annealing
Our team implemented the ten algorithms mentioned above to solve ten real-world problems. The selected
problems are as follows:
1. Intelligent Image Color Reduction and Quantization
2. Minimum Spanning Tree
3. Robot Path Planning

4. Data Envelopment Analysis


5. Portfolio Optimization
6. Facility Layout Design

7. Vehicle Routing Problem


8. Parallel Machine Scheduling
9. Bin Packing Problem
10. Assignment problem

The contributions of the team members are as follows:


• Chatheriyan T wrote about all the algorithms and explained the steps and recent developments for
each algorithm.
• Ferdeno A wrote about and implemented algorithms for solving Intelligent Image Color Reduction and
Quantization, Minimum Spanning Tree, Robot Path Planning, and Data Envelopment Analysis.
• Yogesh Kumar wrote about and implemented algorithms for solving Portfolio Optimization, Facility
Layout Design, and Vehicle Routing Problem.
• Merla Ganesh Reddy wrote about and implemented algorithms for solving Parallel Machine Scheduling,
Bin Packing Problem, and Assignment Problem.

2 Artificial Bee Colony (ABC) Algorithms
Artificial Bee Colony Algorithm (ABC) is an optimization algorithm based on the intelligent foraging be-
havior of a honey bee swarm.
• Introduced by Karaboga in 2005, the Artificial Bee Colony (ABC) algorithm is a swarm-based meta-
heuristic algorithm for optimizing numerical problems and was inspired by the intelligent foraging
behavior of honey bees.
• The model consists of four essential components:
– Food Sources
– Employed bees
– Onlooker bees
– Scout bees

2.1 Steps/Phases of ABC algorithm


After initializing the food sources, the main loop is run. In the loop, we perform these phases:
• Initialization Phase

In the initialization phase, we generate enough food sources for each of the employed bees. The
actual food source generation depends on the type of problem we are solving. The bees are distributed
in the solution space, having been generated randomly. Some researchers also distribute them evenly
in the space and that might work better for some solution spaces.
X = [x_1, x_2, . . . , x_{Npop}], where x_i = [x_{i1}, x_{i2}, . . . , x_{iD}]

• Employed Bee Phase

The employed bee phase consists of each of the bees going out to explore a food source. In the
process, the bees explore the neighbourhood and, if they find a food source with more nectar, their
food source gets replaced by a newer, better food source.
x'_{ij} = x_{ij} + φ_{ij} (x_{kj} − x_{ij}), where φ_{ij} ∈ [−1, 1] and x_k is a randomly chosen food source with k ≠ i.
If f(x'_i) < f(x_i), then replace x_i with x'_i.
• Onlooker Bee Phase

The employed bees then return home and begin their waggle dance. Each onlooker bee perceives,
with some error, the amount of nectar that each bee got from its food source. So, each onlooker bee,
according to their perception of the nectar produced by the food source, will pick that food source.
The higher the nectar, the more probable it is that the onlooker bee will pick it.
p_i = f(x_i) / Σ_{j=1}^{Npop} f(x_j)
x'_i = select a solution based on the probability p_i
If f(x'_i) < f(x_i), then replace x_i with x'_i.
• Scout Bee Phase

When the neighborhood of a food source has been explored enough, the source is abandoned. Every time
a food source is explored without improvement, we increment its trial counter. When the trial count
exceeds the maximum configured value, we delete the source from the array of food sources and generate
a new, random food source.

Flow-chart (For reference)

Start

Initialize population

Evaluate fitness

Employed bees phase

Onlooker bees phase

Scout bees phase

Stop

Algorithm(For reference)
1. Initialize the population of N candidate solutions, Xi , i = 1, 2, . . . , N .
2. Evaluate the fitness of each candidate solution using a fitness function f (Xi ).

3. Repeat the following steps until a stopping criterion is met:


(a) Employed bees phase:
i. For each candidate solution Xi , select a random candidate solution Xj ̸= Xi and generate a
new candidate solution Vi as follows:

Vi,k = Xi,k + ϕij (Xj,k − Xi,k ),

where k = 1, 2, . . . , D indexes the components of the candidate solution, ϕij is a random number
drawn from a uniform distribution in [−1, 1], and D is the dimensionality of the problem.
ii. Evaluate the fitness of the new candidate solution Vi .
iii. If f (Vi ) < f (Xi ), replace Xi with Vi .

(b) Onlooker bees phase:
i. Calculate the probability pi of each candidate solution Xi as follows:

p_i = f(X_i) / Σ_{j=1}^{N} f(X_j).

ii. For each onlooker bee k = 1, 2, . . . , n_{onlooker}, select a candidate solution Xi with probability
proportional to pi .
iii. Generate a new candidate solution Vi for each selected candidate solution Xi using the same
procedure as in the employed bees phase.
iv. Evaluate the fitness of the new candidate solutions Vi .
v. If f (Vi ) < f (Xi ), replace Xi with Vi .
(c) Scout bees phase:
i. For each candidate solution Xi that has not been improved for a certain number of iterations,
replace it with a new candidate solution randomly generated within the search space.
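The phases above can be condensed into a short Python sketch. This is an illustrative minimal implementation, not part of the original report: the names (abc_minimize, n_food, limit) are our own, nectar is modelled as 1/(1 + f) under the assumption f ≥ 0, so that minimizing f corresponds to maximizing nectar, and candidate solutions are clipped to a box-shaped search space.

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, max_iter=200, seed=0):
    """Minimal ABC sketch: employed, onlooker, and scout phases (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds

    def new_source():
        return [rng.uniform(lo, hi) for _ in range(dim)]

    foods = [new_source() for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbour(i):
        # x'_ij = x_ij + phi * (x_kj - x_ij): one random dimension j, random k != i
        k = rng.choice([s for s in range(n_food) if s != i])
        j = rng.randrange(dim)
        cand = foods[i][:]
        phi = rng.uniform(-1.0, 1.0)
        cand[j] = min(hi, max(lo, cand[j] + phi * (foods[k][j] - cand[j])))
        return cand

    def greedy(i, cand):
        # Greedy replacement; a failed improvement increments the trial counter
        fc = f(cand)
        if fc < fits[i]:
            foods[i], fits[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        # Employed bee phase: each food source gets one neighbourhood move
        for i in range(n_food):
            greedy(i, neighbour(i))
        # Onlooker bee phase: roulette-wheel selection on nectar = 1 / (1 + f)
        nectar = [1.0 / (1.0 + fi) for fi in fits]
        for _ in range(n_food):
            i = rng.choices(range(n_food), weights=nectar)[0]
            greedy(i, neighbour(i))
        # Scout bee phase: abandon sources whose trial count exceeded the limit
        for i in range(n_food):
            if trials[i] > limit:
                foods[i] = new_source()
                fits[i] = f(foods[i])
                trials[i] = 0

    best = min(range(n_food), key=fits.__getitem__)
    return foods[best], fits[best]
```

Running it on the 2-D sphere function f(x) = Σ x_j², for instance, drives the best fitness close to zero.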

2.2 Recent Development on ABC algorithm


• Multi-objective ABC (MOABC):
– Shahen [9] incorporated a non-dominated sorting mechanism and an archive to store non-dominated
solutions for handling multi-objective optimization problems.
– Advantages: Outperforms other state-of-the-art multi-objective optimization algorithms on a
variety of benchmark problems.
– Applications: Design optimization, scheduling problems, and image segmentation.
• Hybrid ABC algorithms:

– Abdolrasol, Hussain, et al. [1] combine the ABC algorithm with other optimization methods,
such as local search algorithms, gradient-based methods, or other swarm intelligence algorithms.
– Advantages: Improves the convergence speed and robustness of the ABC algorithm.
– Applications: Feature selection, image registration, and data clustering.
• Dynamic ABC (DABC):
– Thirunavukkarasu, Sawle, and Lala [10] introduce a dynamic search space based on the fitness
landscape.
– Advantages: Improves the performance of the ABC algorithm on complex optimization problems
with changing landscapes.
– Applications: Feature selection, machine learning, and control engineering.
• Adaptive ABC (AABC):

– Li and Gong [6] adapt the search strategy of the ABC algorithm based on its convergence
behaviour.
– Advantages: Improves the ABC algorithm’s convergence speed and exploration capability.
– Applications: Engineering design optimization, data mining, and image processing.

3 Differential Evolution
Differential evolution (DE) is a method that optimizes a problem by trying to improve a candidate solution
with regard to a given measure of quality.
• Differential evolution was proposed by Price and Storn [8] in 1995.
• Advantages of DE include simplicity, efficiency, real-valued encoding, ease of use, a good local search
property, and speed.

3.1 Steps/Phases of Differential evolution


It starts with an arbitrarily generated, evenly distributed initial population. It then repeatedly
applies mutation, crossover, and selection to generate a fresh population.
1. Mutation Phase:

• This phase generates a trial vector for every candidate solution of the present population. A
target vector is perturbed with a weighted difference of two other vectors to produce the trial vector.

vi,g+1 = xr1,g + F · (xr2,g − xr3,g )

2. Crossover Phase:
• In this stage, a crossover operator is applied to the mutated solutions, creating offspring solutions
by combining the parameter values of two or more parent solutions. The crossover operator helps
explore the search space and combine promising features from different solutions.
u_{i,g+1} = v_{i,g+1} if rand(0, 1) ≤ Cr or i = j_rand, and u_{i,g+1} = x_{i,g} otherwise

3. Selection Phase:
• In this stage, the offspring solutions are evaluated based on a fitness function that measures their
performance. The fitness function determines which solutions are selected to become part of the
next generation population. Typically, solutions with higher fitness values are selected, as they
are considered more promising.
x_{i,g+1} = u_{i,g+1} if f(u_{i,g+1}) ≤ f(x_{i,g}), and x_{i,g+1} = x_{i,g} otherwise
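Putting the three phases together gives a compact Python sketch of the classic DE/rand/1/bin scheme. The function name, parameter defaults, and box-constraint clipping here are our own illustrative choices, not part of the original formulation.

```python
import random

def differential_evolution(f, bounds, n_pop=20, F=0.8, Cr=0.9, max_gen=100, seed=1):
    """Minimal DE/rand/1/bin sketch: mutation, binomial crossover, greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_pop)]
    fits = [f(x) for x in pop]

    for _ in range(max_gen):
        for i in range(n_pop):
            r1, r2, r3 = rng.sample([j for j in range(n_pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated component
            trial = pop[i][:]
            for j in range(dim):
                if rng.random() <= Cr or j == j_rand:
                    # Mutation: v = x_r1 + F * (x_r2 - x_r3), clipped to the box
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    lo, hi = bounds[j]
                    trial[j] = min(hi, max(lo, v))
            ft = f(trial)
            if ft <= fits[i]:  # Selection: keep the better of target and trial
                pop[i], fits[i] = trial, ft

    best = min(range(n_pop), key=fits.__getitem__)
    return pop[best], fits[best]
```

On a smooth test problem such as the 2-D sphere function, this sketch typically converges to near-zero fitness within the default budget.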

Flow-chart (For reference)

Start

Initialize population

Evaluate population

Mutation

Crossover

Selection

Update population

No

Termination?

Yes

Stop

Algorithm(For Reference)

Algorithm Differential Evolution
Initialize a population of N candidate solutions x_{i,0}, scale factor F, and crossover rate Cr
Set g ← 0
while termination criterion not met do
for i = 1 to N do
Select distinct random indices r1, r2, r3 ≠ i
Mutation: v_{i,g+1} ← x_{r1,g} + F · (x_{r2,g} − x_{r3,g})
Crossover: combine v_{i,g+1} and x_{i,g} using Cr to obtain u_{i,g+1}
Selection: if f(u_{i,g+1}) ≤ f(x_{i,g}) then x_{i,g+1} ← u_{i,g+1} else x_{i,g+1} ← x_{i,g}
end for
Set g ← g + 1
end while

3.2 Recent Development on Differential Evolution algorithm


• Adaptive Differential Evolution (JADE):
– JADE is an adaptive version of DE that improves its performance by adapting the mutation and
crossover parameters during the optimization process. This modification can significantly enhance
the convergence speed and accuracy of DE.
– JADE has been applied to solve many complex optimization problems, including feature selection,
image segmentation, and parameter tuning.
• Opposition-Based Differential Evolution (ODE):
– ODE is a variant of DE that enhances its exploration capabilities by using an opposition-based
learning strategy.
– This modification generates additional candidate solutions by considering the opposite of the
current best solutions in the population.
– ODE has been applied to solve various optimization problems, including feature selection, image
clustering, and power system optimization.

• Multi-Objective Differential Evolution (MODE):


– MODE is a modification of DE that is designed to solve multi-objective optimization problems.
– This variant uses a Pareto-based approach to maintain a diverse set of non-dominated solutions
during the optimization process.
– MODE has been applied to solve many multi-objective problems, including portfolio optimization,
job-shop scheduling, and power system planning.
• Dynamic Differential Evolution (DDE):
– DDE is a modification of DE that is designed to handle dynamic optimization problems, where
the objective function or constraints change over time.
– This variant adapts the mutation and crossover parameters dynamically based on the current
state of the optimization problem.
– DDE has been applied to solve many dynamic optimization problems, including mobile robot
navigation, wireless sensor network optimization, and parameter identification.

4 Genetic Algorithms
Genetic algorithms are often used to solve complex optimization problems that are difficult to solve using
traditional computational methods (Mirjalili [7]).
• One of the main advantages of using genetic algorithms in soft computing is their ability to find
near-optimal solutions to complex problems in a relatively short amount of time.

4.1 Steps/Phases of Genetic Algorithm


1. Selection Phase:
• A population of potential solutions to the problem is randomly generated. Each solution is
represented as a chromosome, which is a string of binary digits or other data types. The fittest
chromosomes are then selected to form the mating pool:
F = {x_1, x_2, . . . , x_k}, k < N
2. Crossover Phase:
• The crossover phase involves creating new offspring solutions by exchanging genetic information
between two parent solutions. Let x_{p1} and x_{p2} be the parent solutions, and x_c be the offspring solution:

x_c = crossover(x_{p1}, x_{p2})

3. Mutation Phase:
• The mutation phase involves introducing a small random change to the genetic material of an
offspring solution. Let x_m be the mutated offspring solution:

x_m = mutation(x_c)

Flow-chart (For reference)

Initialize
Population

Selection

Crossover

Mutation

Terminate
Algorithm

Algorithm(For reference)

1: Initialize population
2: Evaluate fitness of each individual
3: while termination condition not met do
4: Select parents for reproduction
5: Perform crossover to create offspring
6: Perform mutation on offspring
7: Evaluate fitness of offspring
8: Replace least fit individuals in population with offspring
9: end while
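The pseudocode above maps directly onto a compact Python sketch for binary chromosomes. The specific choices here (tournament selection, one-point crossover, bit-flip mutation, and the OneMax fitness used in the example) are illustrative assumptions, not the only options.

```python
import random

def genetic_algorithm(fitness, chrom_len=16, pop_size=30, p_mut=0.02,
                      generations=100, seed=2):
    """Minimal binary GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(chrom_len)] for _ in range(pop_size)]

    def tournament():
        # Selection: the fitter of two random individuals becomes a parent
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, chrom_len)   # one-point crossover: x_c = crossover(x_p1, x_p2)
            child = p1[:cut] + p2[cut:]
            # Mutation: x_m = mutation(x_c), flipping each bit with probability p_mut
            child = [bit ^ 1 if rng.random() < p_mut else bit for bit in child]
            new_pop.append(child)
        pop = new_pop

    return max(pop, key=fitness)

# Example: OneMax, i.e. maximize the number of 1-bits in the chromosome
best = genetic_algorithm(sum)
```

With these settings the GA reliably finds chromosomes at or near the all-ones optimum of OneMax.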

4.2 Recent Development on Genetic algorithm


• Multi-objective Genetic Algorithms (MOGA):
– MOGAs are designed to optimize multiple objectives simultaneously, which is a significant im-
provement over traditional genetic algorithms that focus on a single objective.
– MOGAs enable the decision-makers to balance the trade-offs among multiple objectives in a better
way.
– Advantages: More efficient and effective than traditional genetic algorithms in handling multiple
objectives simultaneously.
• Adaptive Genetic Algorithms:
– These algorithms are designed to adjust their parameters, such as population size and crossover
probability, during the course of the optimization process.
– The adaptive genetic algorithm can help in achieving better performance and higher convergence
speed.
– Advantages: Can adjust the parameters in real-time, leading to faster convergence and better
optimization results.
• Hybrid Genetic Algorithms:
– Hybrid genetic algorithms combine the principles of genetic algorithms with other optimization
methods such as simulated annealing, tabu search, and ant colony optimization.
– The hybrid approach aims to leverage the strengths of multiple optimization techniques to achieve
better performance and faster convergence.
– Advantages: Can leverage the strengths of multiple optimization techniques and achieve better
results than individual optimization methods.
• Parallel Genetic Algorithms:
– Parallel genetic algorithms use multiple processors or computers to solve a problem simultaneously.
– The parallel approach can speed up the optimization process and handle larger datasets.
– Advantages: Can speed up the optimization process and handle larger datasets.
• Dynamic Genetic Algorithms:
– These algorithms are designed to dynamically adapt the genetic operators, such as selection,
crossover, and mutation, based on the state of the population.
– Advantages: Can adapt the genetic operators in real-time and achieve better optimization
results.

5 Particle Swarm Optimization
• PSO is a stochastic optimization technique based on the movement and intelligence of swarms, originally
proposed by Kennedy and Eberhart [3] in 1995.
• In PSO, the concept of social interaction is used for solving a problem.
• It uses a number of particles (agents) that constitute a swarm moving around in the search space,
looking for the best solution.

5.1 Steps/Phases of Particle Swarm Optimization


1. Initialization:
• Initialize the population with a group of random particles.
• Each particle is assigned a random position in the search space and a random velocity.

2. Fitness evaluation:
• Evaluate the fitness of each particle in the population.
• The fitness value is calculated using the fitness function of the problem being solved.
fi = f (xi )

3. Update the personal best:


• Update the personal best position of each particle based on its current position and fitness value.
• The personal best position is the best position that the particle has achieved so far.
if f_i > f_pbest_i then pbest_i = x_i

4. Update the global best:


• Update the global best position of the swarm based on the best position achieved by any particle
in the population.
if f_pbest_i > f_gbest then gbest = pbest_i

5. Update the velocity and position of each particle:


• Update the velocity and position of each particle using the following equations:

velocity = w * velocity + c1 * rand() * (personal best − current position) + c2 * rand() * (global best − current position)

position = position + velocity

where w is the inertia weight, c1 and c2 are the acceleration constants, and rand() is a random
number between 0 and 1.
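The update equations above can be put together into a minimal PSO sketch. It minimises the sphere function f(x) = Σ x_d², so the personal-best and global-best comparisons from steps 3 and 4 are flipped (lower is better); w = 0.7 and c1 = c2 = 1.5 are common illustrative values, not values fixed by the text.

```python
import random

def pso(dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO minimising the sphere function f(x) = sum(x_d^2)."""
    rng = random.Random(seed)
    def f(x):
        return sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # velocity = w*velocity + c1*rand()*(pbest - pos) + c2*rand()*(gbest - pos)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:            # update personal best (minimising)
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:           # update global best
                    gbest, gbest_f = pos[i][:], fi
    return gbest_f

result = pso()
```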

Flow-chart(For reference): Start → Initialize swarm → Evaluate fitness → Update personal best → Update global best → Update velocity and position → Evaluate fitness → stopping criteria met? (no: repeat from "Update personal best"; yes: Stop)

Algorithm(For reference)

Algorithm Particle Swarm Optimization


Require: Objective function f (x), swarm size N , maximum iterations Tmax
Ensure: Best position found xbest and corresponding fitness value fbest
1: Initialize swarm with random positions and velocities
2: Evaluate fitness of each particle
3: Set personal best position and fitness for each particle
4: Set global best position and fitness to the best particle in the swarm
5: for t ← 1 to Tmax do
6: for i ← 1 to N do
7: Update velocity of particle i
8: Update position of particle i
9: Evaluate fitness of particle i
10: if fitness of particle i is better than personal best fitness then
11: Set personal best position and fitness for particle i
12: if fitness of particle i is better than global best fitness then
13: Update global best position and fitness
14: end if
15: end if
16: end for
17: end for

5.2 Recent Development on Particle Swarm Optimization Algorithm


• PSO with Different Topologies:
– Recently, researchers have explored different topologies for the swarm, such as ring, wheel, and
star topologies, to improve the search performance.
– Advantages: These topologies can increase the diversity of the swarm and avoid premature
convergence.
• PSO with Hybrid Initialization:
– PSO initialization plays a crucial role in the search process, and recent research has focused on
combining PSO with other optimization algorithms for initialization.
– Advantages: This hybrid approach can improve the diversity of the swarm and lead to better
solutions.
• PSO with Adaptive Parameters:
– The performance of PSO is highly dependent on the choice of its parameters, such as inertia
weight and learning rate.
– Advantages: This can improve the convergence speed and avoid stagnation in the search process.
• Multi-Objective PSO with Diversity Maintenance:
– Multi-objective PSO (MOPSO) is used to optimize multiple objectives simultaneously, but it can
suffer from premature convergence and lack of diversity.
– Advantages: Recent research has focused on incorporating diversity maintenance strategies,
such as crowding distance and niching techniques, into MOPSO to improve the diversity of the
solutions and provide a better set of trade-off solutions.

6 Firefly Algorithm
• Firefly Algorithm (FA) is a swarm intelligence algorithm that was inspired by the flashing behavior of
fireflies.
• It was first proposed by Xin-She Yang [4] in 2008 and has been used in a wide range of optimization
problems.

6.1 Steps/Phases of Firefly Algorithm


1. Initialization:
• Generate an initial population of fireflies, where each firefly represents a candidate solution to the
optimization problem.
• The population size and the search space bounds must be defined.

Generate N fireflies: xi ∈ [lb, ub] for i = 1, 2, . . . , N

2. Evaluation:
• Evaluate the fitness of each firefly in the population using the objective function of the optimization
problem.
fi = f (xi ) for i = 1, 2, . . . , N
3. Attraction:
• Calculate the attractiveness of each firefly, which depends on its brightness (fitness value) and its
distance to other fireflies in the population.
• The attractiveness can be calculated using a function that incorporates these factors, such as the
inverse square law used in the original FA formulation.
β_ij = β_0 e^(−γ r_ij²), where β_0 and γ are parameters and r_ij is the Euclidean distance between fireflies i and j

4. Movement:
• Move each firefly towards the most attractive firefly in its neighborhood, which is defined based
on the attractiveness calculated in the previous step.
• The movement of each firefly can be calculated using a formula that includes a random component
to allow for exploration.

xi (t+1) = xi (t)+α(xj (t)−xi (t))+ϵ where α is the step size, xj (t) is the position of the most attractive firefly to i

5. Updating:
• Update the brightness of each firefly based on the new position it has moved to.
• Evaluate the fitness of the new solution and compare it to the previous solution.
• If the new solution is better, update the brightness of the firefly accordingly.

fi′ = f (x′i ) where fi′ is the new fitness value of firefly i after moving to position x′i

6. Termination:
• Repeat steps 3-5 until a stopping criterion is met, such as a maximum number of iterations, a
convergence threshold, or a time limit.
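Steps 1-6 can be sketched as follows for minimising f(x) = Σ x_d². The attractiveness follows β_ij = β_0 exp(−γ r_ij²) as above; the parameter values (and the gradual damping of the random step) are illustrative choices, not values taken from the text.

```python
import math
import random

def firefly(dim=2, n=15, iters=100, beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Minimal Firefly Algorithm minimising f(x) = sum(x_d^2)."""
    rng = random.Random(seed)
    def f(x):
        return sum(v * v for v in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:                        # j is brighter (lower f)
                    r2 = sum((X[i][d] - X[j][d]) ** 2 for d in range(dim))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness
                    for d in range(dim):
                        X[i][d] += (beta * (X[j][d] - X[i][d])
                                    + alpha * (rng.random() - 0.5))
                    fit[i] = f(X[i])
        alpha *= 0.97                                      # damp the random step
    return min(fit)

best_val = firefly()
```

Note that the brightest firefly never moves during its own sweep, so the population minimum can only improve over time.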

Flow-chart(For reference): Initialize → Evaluate fitness → Calculate attractiveness → Move fireflies → Update fitness → termination criterion met? (no: repeat; yes: return best solution)

Algorithm (For reference)

Algorithm Firefly Algorithm


1: Initialize the population of fireflies X = {x1 , x2 , ..., xn } randomly.
2: Evaluate the fitness of each firefly xi .
3: while termination criterion not met do
4: for each firefly xi do
5: for each firefly xj do
6: if f (xj ) < f (xi ) then
7: Calculate the attractiveness βij
8: Move firefly xi towards firefly xj
9: end if
10: end for
11: Evaluate the fitness of the moved firefly xi .
12: end for
13: end while
14: Return the best solution found during the algorithm.

6.2 Recent Development on Firefly Algorithm
• Hybrid Firefly Algorithm:
– The Hybrid Firefly Algorithm (HFA) combines the Firefly Algorithm with other optimization
algorithms such as the Genetic Algorithm and Particle Swarm Optimization.
– Advantages: The advantage of this modification is that it can improve the convergence rate and
global search ability of the algorithm.
– Applications: HFA has been applied to various optimization problems such as feature selection,
image processing, and parameter tuning.
• Chaotic Firefly Algorithm:

– The Chaotic Firefly Algorithm (CFA) adds a chaotic map to the Firefly Algorithm to increase
the diversity of the fireflies movements and avoid getting stuck in local optima.
– Advantages: The advantage of this modification is that it can improve the global search ability
and robustness of the algorithm.
– Applications: CFA has been applied to various optimization problems such as image segmenta-
tion, parameter estimation, and function optimization.
• Multi-Objective Firefly Algorithm:
– The Multi-Objective Firefly Algorithm (MOFA) is an extension of the Firefly Algorithm that can
handle multiple objectives simultaneously.
– Advantages: The advantage of this modification is that it can provide a set of solutions that
represent the trade-offs between different objectives.
– Applications: MOFA has been applied to various multi-objective optimization problems such
as feature selection, clustering, and image segmentation.
• Adaptive Firefly Algorithm:

– The Adaptive Firefly Algorithm (AFA) adjusts the step size of the fireflies’ movements based on
their fitness values to balance exploration and exploitation.
– Advantages: The advantage of this modification is that it can improve the convergence rate and
global search ability of the algorithm.
– Applications: AFA has been applied to various optimization problems such as feature selection,
image segmentation, and load forecasting.
• Self-Adaptive Firefly Algorithm:
– The Self-Adaptive Firefly Algorithm (SAFA) adapts the parameters of the Firefly Algorithm, such
as the attraction coefficient and randomization parameter, during the optimization process.
– Advantages: The advantage of this modification is that it can improve the global search ability
and robustness of the algorithm.
– Applications: SAFA has been applied to various optimization problems such as feature selection,
image segmentation, and function optimization.

7 Cuckoo Search Algorithm
• Cuckoo search is a nature-inspired optimization algorithm that is based on the behavior of cuckoo
birds.
• It was first proposed by Xin-She Yang and Suash Deb [11] in 2009.
• The algorithm is designed to solve optimization problems, especially those that involve complex, multi-
dimensional search spaces.

• The basic idea behind cuckoo search is to mimic the behavior of cuckoo birds in laying their eggs in
the nests of other bird species.

7.1 Steps/Phases of Cuckoo Search Algorithm


1. Initialization:

• The algorithm starts with an initial population of candidate solutions called nests.
• The nests are randomly generated in the search space.
2. Egg Laying:

• The algorithm generates new candidate solutions called eggs.


• The eggs are created by combining elements from two or more existing nests using a random
crossover operator.
3. Egg Selection:

• The algorithm selects some of the eggs to be laid in the nests of other birds.
• The selection is based on the quality of the egg, with higher-quality eggs being more likely to be
selected.
4. Abandonment:
• The algorithm checks if any eggs have been laid in a nest that is already occupied by another egg.
• If this is the case, the algorithm randomly chooses one of the eggs to be removed from the nest.
5. Local Search:
• The algorithm applies a local search operator to some of the nests to improve their quality.

6. Nest Selection:
• The algorithm selects some of the nests to be replaced by the new candidate solutions created in
steps 2-5.
• The selection is based on the quality of the new solution, with higher-quality solutions being more
likely to be selected.

7. Termination:
• The algorithm terminates when a stopping criterion is met, such as a maximum number of itera-
tions or a minimum level of improvement.
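The phases above can be sketched as a small loop that blends two nests to lay an egg, replaces the worst nest when the egg is better, and lets abandoned nests take greedy random walks. This follows the simplified pseudocode style of this report rather than the original Lévy-flight formulation; the objective and all parameter values are illustrative.

```python
import random

def cuckoo_search(dim=2, n_nests=15, iters=200, p=0.25, step=0.5, seed=0):
    """Simplified cuckoo search minimising f(x) = sum(x_d^2)."""
    rng = random.Random(seed)
    def f(x):
        return sum(v * v for v in x)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        # Egg laying: blend two random nests (a random crossover).
        a, b = rng.sample(range(n_nests), 2)
        u = rng.random()
        egg = [u * nests[a][d] + (1 - u) * nests[b][d] for d in range(dim)]
        worst = max(range(n_nests), key=lambda i: fit[i])
        fe = f(egg)
        if fe < fit[worst]:                    # egg replaces the worst nest
            nests[worst], fit[worst] = egg, fe
        # Abandonment / local search: greedy random walk with probability p.
        for i in range(n_nests):
            if rng.random() < p:
                trial = [x + step * rng.gauss(0, 1) for x in nests[i]]
                ft = f(trial)
                if ft < fit[i]:
                    nests[i], fit[i] = trial, ft
    return min(fit)

best_val = cuckoo_search()
```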

Flow-chart(For reference): Start → Initialize the nests randomly → Lay eggs by combining elements of nests → Select eggs based on their quality → any nest already occupied? (yes: remove one egg randomly) → Apply local search operator to some nests → new solution better than old one? (yes: replace old nests with new solutions; no: repeat) → Stop

Algorithm(For reference)

Algorithm Cuckoo Search Algorithm


Require: Population size N , number of iterations T , initial step size s, initial probability p, fitness function
f (x)
Ensure: Best solution found x∗
1: Initialize population of N nests randomly
2: Evaluate fitness of each nest f (xi )
3: for t = 1 to T do
4: Generate new solution by combining elements of two nests
5: Evaluate fitness of new solution f (xnew )
6: if f (xnew ) < f (xworst ) then
7: Replace worst nest with new solution
8: end if
9: for all nests do
10: if nest has probability p > rand() then
11: Generate new solution by random walk
12: Evaluate fitness of new solution f (xnew )
13: if f (xnew ) < f (xn ) then
14: Replace nest with new solution
15: end if
16: end if
17: end for
18: end for
19: return best solution x∗

7.2 Recent Development on Cuckoo Search Algorithm


• Lévy Flight Cuckoo Search (LFCS):
– The LFCS modification incorporates Lévy flights, which are long-range jumps with heavy-tailed
distributions, to enhance exploration of the search space.
– LFCS has been shown to outperform the original cuckoo search algorithm in terms of convergence
rate and accuracy, particularly in high-dimensional and multimodal optimization problems.
– Applications: Feature selection, image processing, and medical diagnosis.
• Chaotic Cuckoo Search (CCS):
– The CCS modification incorporates chaos theory to enhance the diversity and convergence rate
of the cuckoo search algorithm.
– This is achieved by introducing a chaotic map to generate random numbers used in the search
process.
– Applications: Function optimization, image processing, and data clustering.
• Opposition-based Cuckoo Search (OCS):
– The OCS modification improves the exploitation process of the cuckoo search algorithm by consid-
ering not only the current solution, but also its opposite (i.e., the inverse of the decision variables).
– This allows for a more comprehensive search of the solution space and increases the probability
of finding the global optimum.
– Applications: Data clustering, function optimization, and feature selection.

8 Bat Algorithm
• The Bat Algorithm is a metaheuristic optimization algorithm that is inspired by the echolocation
behavior of bats.
• It was first proposed by Xin-She Yang [12] in 2010.
• The basic idea of the Bat Algorithm is to simulate the behavior of bats in searching for prey.

8.1 Steps/Phases of Bat Algorithm


1. Echolocation Phase:
• In this phase, each bat searches for the optimal solution by emitting a pulse and listening to the
echoes.
• The frequency of the pulse is determined by the position of the bat, and the loudness of the pulse
is determined by the fitness of the corresponding solution.

X_i^t = X_i^(t−1) + V_i^(t−1)
f(X_i^t) = evaluate f at X_i^t
Generate a random loudness: r_i^t ∼ Uniform(0, 1)
Generate a random frequency: A_i^t ∼ Uniform(0, 1)
V_i^t = V_i^(t−1) + (X_best − X_i^t) A_i^t
X_i^t = X_i^t + r_i^t sin(2π A_i^t)

2. Movement Phase:
• Based on the echolocation results, each bat updates its position and velocity to move towards
better solutions.
• The speed and direction of the movement are determined by the frequency and loudness of the
bat.
3. Pulse Rate Phase:
• The pulse rate of each bat is adjusted according to its fitness and the global best solution found
so far.
• Bats with better fitness or closer to the global best solution will have a higher pulse rate.
f_avg = (1/N) Σ_{i=1}^{N} f(X_i^t),   r_avg = (1/N) Σ_{i=1}^{N} r_i^t
f_best = min(f(X_best), f_avg),   A_i^t = A_0 · α^(f(X_i^t) − f_best)

4. Loudness Phase:

• The loudness of each bat is also adjusted based on its fitness and the global best solution found
so far.
• Bats with better fitness or closer to the global best solution will have a higher loudness.

r_i^t = γ r_i^t + β (f(X_i^t) − f_best)
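A runnable sketch of the overall loop is given below. It uses the common frequency-based velocity update of the standard Bat Algorithm (rather than the exact expressions above), with decaying loudness and a growing pulse rate; the objective and all parameter values are illustrative assumptions.

```python
import math
import random

def bat_algorithm(dim=2, n=20, iters=200, fmin=0.0, fmax=2.0,
                  A0=0.9, r0=0.5, alpha=0.97, gamma=0.1, seed=0):
    """Minimal bat algorithm minimising f(x) = sum(x_d^2)."""
    rng = random.Random(seed)
    def f(x):
        return sum(v * v for v in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    fit = [f(x) for x in X]
    b = min(range(n), key=lambda i: fit[i])
    x_best, f_best = X[b][:], fit[b]
    A, r = A0, r0
    for t in range(iters):
        for i in range(n):
            freq = fmin + (fmax - fmin) * rng.random()    # pulse frequency
            for d in range(dim):
                V[i][d] += (X[i][d] - x_best[d]) * freq
            cand = [X[i][d] + V[i][d] for d in range(dim)]
            if rng.random() > r:                          # local walk near the best
                cand = [x_best[d] + 0.01 * A * rng.gauss(0, 1) for d in range(dim)]
            fc = f(cand)
            if fc < fit[i] and rng.random() < A:          # loudness-gated acceptance
                X[i], fit[i] = cand, fc
            if fc < f_best:                               # track global best
                x_best, f_best = cand[:], fc
        A *= alpha                                        # loudness decreases
        r = r0 * (1.0 - math.exp(-gamma * t))             # pulse rate increases
    return f_best

best_val = bat_algorithm()
```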

Flow-chart(For reference): Start → Initialize population → Evaluate fitness → Update best solution → Echolocation → Update positions and velocities → Update pulse rates → Update loudness → Stop

Algorithm(For reference)
1: Initialize population
2: Evaluate fitness
3: Update best solution
4: while stopping criterion not met do
5: for each bat i do
6: Generate a new solution xi (t + 1) by updating the velocity and position
7: With probability ri , perform local search on xi (t + 1)
8: if f (xi (t + 1)) < f (xbest ) then
9: Update the best solution: xbest = xi (t + 1)
10: end if
11: Update loudness Ai and pulse rate ri
12: end for
13: end while

8.2 Recent Development on Bat Algorithm
• Enhanced Bat Algorithm (EBA):
– The EBA incorporates a new mechanism for updating the frequency and velocity of the bats, as
well as an adaptive local search strategy.
– The EBA has shown improved convergence speed and accuracy compared to the original Bat
Algorithm.
– Advantages: Improved convergence speed and accuracy.
– Applications: Optimization problems such as feature selection, clustering, and image segmen-
tation.
• Multi-Objective Bat Algorithm (MOBA):

– The MOBA is an extension of the Bat Algorithm for solving multi-objective optimization prob-
lems.
– The MOBA uses a Pareto-based approach to maintain a set of non-dominated solutions.
– Advantages: Ability to solve multi-objective optimization problems.
– Applications: Engineering design optimization, financial portfolio optimization.
• Quantum Bat Algorithm (QBA):
– The QBA applies quantum computing principles to the Bat Algorithm to improve its search
ability.
– The QBA uses quantum-inspired operators to update the positions and velocities of the bats.
– Advantages: Improved search ability.
– Applications: Combinatorial optimization, scheduling problems.
• Hybrid Bat Algorithm (HBA):

– The HBA combines the Bat Algorithm with other optimization algorithms, such as the Genetic
Algorithm or Particle Swarm Optimization, to improve its search ability and convergence speed.
– Advantages: Improved search ability and convergence speed.
– Applications: Engineering design optimization, data clustering.
• Improved Hybrid Bat Algorithm (IHBA):

– The IHBA further improves the HBA by incorporating an adaptive local search strategy and a
new operator for updating the loudness and pulse rate of the bats.
– Advantages: Improved searchability, convergence speed, and accuracy.
– Applications: Feature selection, image classification, scheduling problems.

9 Spider Monkey Optimization (SMO) Algorithm
• Spider Monkey Optimization (SMO) Algorithm is a metaheuristic optimization algorithm inspired by
the social behaviour of spider monkeys in their search for food.
• SMO was first introduced by Bansal et al. [7] in 2014.
• The SMO algorithm is based on the following three main behaviours of spider monkeys:
1. Exploration: In this behaviour, spider monkeys move randomly to search for new food sources. Some
spider monkeys are randomly selected to move to new positions in the search space. This step
introduces diversity into the population and prevents the algorithm from getting stuck in local
optima.
2. Exploitation: When spider monkeys find a promising food source, they exploit it by gathering
as much food as possible. The remaining spider monkeys exploit the best food sources that have
been found so far. This step converges the population towards promising regions of the search
space.
3. Homecoming: After spider monkeys have gathered enough food, they return to their home
location.

9.1 Steps/Phases of SMO Algorithm


Flow-chart(For reference): Initialize population → Explore → Exploit → Homecoming → stopping criterion met? (no: repeat; yes: stop)
Algorithm(For reference)

Algorithm Spider Monkey Optimization (SMO) Algorithm


1: Initialize population: Generate a set of spider monkeys randomly in the search space.
2: Evaluate fitness: Evaluate the fitness of each spider monkey by applying the objective function to its
position.
3: while stopping criterion not met do
4: Select a subset of spider monkeys randomly and move them to new positions in the search space
(Exploration).
5: Select the remaining spider monkeys and exploit the best food sources found so far (Exploitation).
6: Return the spider monkeys to their home location (Homecoming).
7: Evaluate fitness: Evaluate the fitness of each spider monkey by applying the objective function to its
position.
8: end while
9: Return: The best solution found in the population.
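The three behaviours can be caricatured in a few lines of code. This is a highly simplified sketch (random restarts for exploration, a pull toward the best source for exploitation, and a remembered "home" position for homecoming) on f(x) = Σ x_d², not the full fission-fusion SMO; all parameters are illustrative.

```python
import random

def smo(dim=2, n=20, iters=150, explore_frac=0.2, step=0.4, seed=0):
    """Highly simplified sketch of the three SMO behaviours, minimising f."""
    rng = random.Random(seed)
    def f(x):
        return sum(v * v for v in x)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    home = [p[:] for p in pop]                       # remembered home locations
    for _ in range(iters):
        fit = [f(p) for p in pop]
        best = pop[min(range(n), key=lambda i: fit[i])][:]
        for i in range(n):
            if rng.random() < explore_frac:
                # Exploration: jump to a random point (diversity).
                pop[i] = [rng.uniform(-5, 5) for _ in range(dim)]
            else:
                # Exploitation: move part of the way toward the best food source.
                trial = [pop[i][d] + step * rng.random() * (best[d] - pop[i][d])
                         for d in range(dim)]
                # Homecoming: keep the better of the trial and the home position.
                if f(trial) < f(home[i]):
                    pop[i], home[i] = trial, trial[:]
                else:
                    pop[i] = home[i][:]
    return min(f(p) for p in home)

best_val = smo()
```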

9.2 Recent Development on Spider Monkey Optimization Algorithm


• Hybrid SMO Algorithm (HSMO):

– This algorithm combines the SMO algorithm with other optimization algorithms such as Particle
Swarm Optimization (PSO) or Genetic Algorithm (GA) to enhance its performance.
– The advantage of HSMO is its ability to combine the strengths of different algorithms to obtain
better results.
– HSMO has been applied to various applications, such as economic load dispatch, feature selection,
and power system optimization.
• Multi-Objective SMO Algorithm (MOSMO):
– This algorithm extends the SMO algorithm to handle multi-objective optimization problems.
– The advantage of MOSMO is its ability to handle multiple conflicting objectives simultaneously.
– MOSMO has been applied to various applications, such as scheduling problems, image segmenta-
tion, and clustering.
• Parallel SMO Algorithm (PSMO):
– This algorithm proposes a parallel implementation of the SMO algorithm using multiple threads
or processors.
– PSMO splits the population into multiple sub-populations and applies the SMO algorithm inde-
pendently on each sub-population.
– The advantage of PSMO is its ability to reduce the computation time and obtain better results
by exploring a larger search space.
– PSMO has been applied to various applications, such as pattern recognition, image processing,
and machine learning.

10 Ant Colony Algorithm
• Ant colony algorithm (also known as ant colony optimization) is a metaheuristic algorithm inspired by
the behaviour of ants in their search for food.
• The ant colony optimization algorithm was proposed by Marco Dorigo [2] in his PhD thesis in 1992.

10.1 Steps/Phases of Ant Colony Algorithm


1. Ant movement:
• In this phase, each ant moves from its current position to the next position based on certain rules.
• The ant chooses the next node to visit based on the pheromone level on the path and the heuristic
information.

2. Pheromone update:
• In this phase, the pheromone level on each path is updated based on the quality of the solution
found by the ants.
• The shorter the path, the higher the amount of pheromone deposited on it.
3. Local search:

• In this phase, the ants perform a local search on their current path to improve their solution.
• Local search can be done by swapping two nodes on the path or by using a 2-opt heuristic.
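The phases above, together with the pheromone deposit Δτ = Q/L_k and the evaporation factor (1 − ρ), can be sketched on a tiny travelling-salesman instance; the values of α, β, ρ and Q below are illustrative.

```python
import random

def aco_tsp(cities, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, Q=1.0, seed=0):
    """Minimal ant colony optimization for a small symmetric TSP."""
    rng = random.Random(seed)
    n = len(cities)
    dist = [[((cities[i][0] - cities[j][0]) ** 2
              + (cities[i][1] - cities[j][1]) ** 2) ** 0.5
             for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]                # initial pheromone levels
    def tour_len(t):
        return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ant in range(n_ants):                     # ant movement phase
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # Transition weight: pheromone^alpha * (1/distance)^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            tours.append(tour)
        for i in range(n):                             # evaporation: (1 - rho)
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for t in tours:                                # deposit Q / L_k per edge
            L = tour_len(t)
            if L < best_len:
                best_tour, best_len = t, L
            for k in range(n):
                i, j = t[k], t[(k + 1) % n]
                tau[i][j] += Q / L
                tau[j][i] += Q / L
    return best_tour, best_len

# Four cities on a unit square; the optimal tour is the perimeter of length 4.
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour, length = aco_tsp(square)
```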

Flow-chart(For reference): Start → Initialization → iterate [Ant movement → Pheromone update → Local search] until the termination criterion is met → Stop
Algorithm(For reference)

Algorithm Ant Colony Optimization

1: Initialize ants and pheromone levels τ_ij on all paths
2: while termination criterion not met do
3:    Let each ant construct a solution
4:    Update the pheromone levels on all paths using:
      Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k, where Δτ_ij^k = Q/L_k if ant k uses path (i, j), and 0 otherwise
      τ_ij ← (1 − ρ) τ_ij + Δτ_ij
5:    Perform local search on the best solution found
6: end while

10.2 Recent Development on Ant Colony Algorithm


• Max-Min Ant System (MMAS):

– MMAS is a modification of the standard ant colony optimization algorithm that uses a max-min
strategy for updating the pheromone trails.
– This strategy ensures that the pheromone levels remain within a predefined range, which prevents
the algorithm from getting stuck in local optima.
– MMAS has been shown to improve the performance of the ant colony algorithm on various op-
timization problems, including the traveling salesman problem (TSP), vehicle routing problem
(VRP), and job shop scheduling problem (JSSP).
– Applications: TSP, VRP, JSSP, and other combinatorial optimization problems.
– Advantages:
∗ Prevents the algorithm from getting stuck in local optima.
∗ Improves the performance of the ant colony algorithm on various optimization problems.

• Ant-Q:

– Ant-Q is a modification of the ant colony algorithm that uses a reinforcement learning mechanism
to update the pheromone trails.
– In Ant-Q, the pheromone levels are updated based on a combination of the current pheromone
level, the quality of the solution, and a reinforcement signal that indicates the quality of the
solution relative to the best solution found so far.
– Ant-Q has been shown to outperform the standard ant colony algorithm on various optimization
problems.
– Applications: TSP, VRP, JSSP, and other combinatorial optimization problems.
– Advantages:
∗ Uses a reinforcement learning mechanism to update the pheromone trails.
∗ Outperforms the standard ant colony algorithm on various optimization problems.

11 Simulated Annealing
• Simulated annealing is a metaheuristic optimization algorithm used to find the global minimum or
maximum of a function.
• It is inspired by the physical process of annealing, which is the gradual cooling of a material to reduce
its defects and improve its strength.
• Simulated annealing was first proposed by Kirkpatrick et al. [5] in a seminal paper published in
Science in 1983.

11.1 Steps/Phases of Simulated Annealing


The success of the algorithm depends on the choice of the initial solution, the perturbation strategy, the
cooling schedule, and the acceptance criterion, as well as the characteristics of the problem being solved.

1. Perturbation:
• At each iteration, a new candidate solution is generated by making a small random perturbation
to the current solution.
• The perturbation can be chosen in different ways, depending on the problem being solved and the
nature of the solution space.

2. Evaluation:
• The algorithm evaluates the objective function at the new solution and computes the change in
the objective function value, which represents the improvement or deterioration of the candidate
solution relative to the current solution.

3. Acceptance:
• The algorithm decides whether to accept or reject the candidate solution based on a probabilistic
criterion.
• The acceptance probability is a function of the change in the objective function value and a
temperature parameter that controls the degree of randomness in the search.
• At high temperatures, the algorithm accepts solutions with a higher probability, while at low tem-
peratures, the acceptance probability decreases, and the algorithm becomes more deterministic.

4. Cooling:
• The temperature parameter is gradually decreased over time, according to a cooling schedule that
determines how fast the temperature decreases.
• The cooling schedule can be chosen in different ways, depending on the problem being solved and
the desired level of convergence.
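The four phases can be combined into a short routine. The geometric cooling schedule, the temperature-scaled Gaussian perturbation, and all constants below are illustrative choices rather than prescribed values.

```python
import math
import random

def simulated_annealing(f, x0, T0=1.0, r=0.95, steps_per_T=50, T_min=1e-3, seed=0):
    """Minimal SA: perturb, evaluate dE, accept with exp(-dE/T), then cool."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x0), fx
    T = T0
    while T > T_min:
        for _ in range(steps_per_T):
            cand = [v + rng.gauss(0, T) for v in x]    # perturbation scales with T
            dE = f(cand) - fx                          # change in objective value
            if dE < 0 or rng.random() < math.exp(-dE / T):
                x, fx = cand, fx + dE                  # Metropolis acceptance
                if fx < fbest:
                    best, fbest = list(x), fx
        T *= r                                         # geometric cooling schedule
    return best, fbest

# Example: minimise the 2-D sphere function from a poor starting point.
sol, val = simulated_annealing(lambda x: sum(v * v for v in x), [4.0, -3.0])
```

At high temperature the walk accepts uphill moves freely; as T shrinks, both the step size and the acceptance probability contract, and the search becomes a local descent.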

Flow-chart(For reference): Initialize → Current solution → New candidate solution → Evaluate → Compute ΔE → accept new solution? (yes: update current solution) → Decrease temperature → stop? (yes: output final solution; no: repeat)
Algorithm(For reference)

Algorithm Simulated Annealing


Initialize: set the initial temperature T , the cooling rate r, and the initial solution x
Set the current solution xc to x
while T > 0 do
Generate a new candidate solution xn by perturbing the current solution xc
Calculate the energy difference ∆E = E(xn ) − E(xc ) between the current and candidate solutions
if ∆E < 0 then
Accept the candidate solution by setting xc to xn
else
Accept the candidate solution with probability p = e−∆E/T
if the candidate solution is accepted then
Set xc to xn
end if
end if
Decrease the temperature according to the cooling rate: T ← rT
end while
Output the final solution xc

11.2 Recent Development on Simulated Annealing


• Adaptive Simulated Annealing (ASA):

– In ASA, the temperature schedule is automatically adjusted based on the progress made during
the search. ASA uses the Metropolis criterion with an adaptive temperature that decreases as
the search progresses. This helps in achieving a faster convergence to the optimal solution. ASA
has been successfully applied to various optimization problems such as the travelling salesman
problem, the quadratic assignment problem, and the job shop scheduling problem.
• Quantum Simulated Annealing (QSA):

– QSA is a hybrid optimization algorithm that combines the principles of quantum computing and
simulated annealing. QSA uses a quantum-inspired operator to generate new candidate solutions
and then applies the Metropolis criterion to accept or reject the new solution. QSA has been
shown to outperform SA in various optimization problems such as the MAX-CUT problem and
the travelling salesman problem.

• Hybrid Simulated Annealing (HSA):


– HSA combines SA with other optimization algorithms such as genetic algorithms or tabu search.
This helps in exploiting the strengths of each algorithm and overcoming their weaknesses. For
example, HSA has been applied to the vehicle routing problem and has been shown to achieve
better results than SA or genetic algorithms alone.
• Parallel Simulated Annealing (PSA):
– PSA uses parallel computing to speed up the search process by exploring multiple solutions
simultaneously. PSA has been applied to various optimization problems such as the quadratic
assignment problem and the job shop scheduling problem. PSA has been shown to significantly
reduce search time and improve the quality of the solutions.

12 Intelligent Image Color Reduction and Quantization
The method of colour quantization involves minimising the amount of colours in a digital image. The basic
goal of the quantization process is to maintain important information while lightening an image’s colour
palette. Due to their limitations, the majority of image printing and display equipment cannot replicate an
image’s true colours. small colour palettes. As a result, to match the amount of palette colours available,
fewer picture colours must be used.

12.1 Solution by the Artificial Bee Colony (ABC) Algorithms


Honey bees are social insects that reside in beehives. A beehive contains a variety of bee species, each
of which plays a particular role in the hunt for food. The goal of the bee colony's work is to locate reliable
food sources and transport their nectar to the beehive. A food source is abandoned and replaced by another
one with more nectar when the amount of nectar in that source sharply declines. Karaboga created the
ABC method to handle optimisation problems based on the behaviour of these bees. Three kinds of
bees are present in the swarm under consideration: employed bees, onlooker bees, and scout bees. Each
employed bee is connected to a particular food source; it brings nectar to the beehive and informs the onlooker
bees about the food source. Based on the information given by the employed bees, each onlooker bee chooses
a food source. The scout bees randomly look for fresh food sources.

12.2 Solution by the Differential Evolution


Colour image quantization is a popular image processing method that reduces the number of colours
displayed in a colour image while minimising distortion. Its basic goals are to use less storage space and to
transfer images more quickly. Colour image quantization has two crucial stages. The first is to create a
colormap that has fewer colours than the colour image (usually 8-256 colours). The second assigns each
pixel of the colour image a specific colour from the colormap. The majority of colour quantization techniques
emphasise producing the best possible colormap. Finding the ideal colormap would take an unreasonable
amount of time because it is an NP-hard problem. Researchers have used stochastic optimisation techniques
such as GA and PSO to solve this issue. In particular, the PSO-CIQ colour image quantization algorithm has
been compared in the literature with a number of other well-known colour image quantization techniques.
According to the experimental findings, PSO-CIQ performs better.
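Whatever optimiser is used to search for the colormap, the two stages described above can be illustrated with a plain k-means quantizer on a toy four-pixel "image". This is a generic sketch of the colormap-plus-mapping pipeline, not the PSO-CIQ method itself.

```python
import random

def quantize(pixels, k=2, iters=10, seed=0):
    """Two-stage colour quantization: build a k-colour colormap with
    k-means, then map every pixel to its nearest palette colour."""
    rng = random.Random(seed)
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    palette = rng.sample(pixels, k)                    # stage 1: initial colormap
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            clusters[min(range(k), key=lambda c: dist2(p, palette[c]))].append(p)
        # Move each palette colour to the mean of its cluster (k-means step).
        palette = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else palette[c]
                   for c, cl in enumerate(clusters)]
    # Stage 2: assign each pixel its nearest palette colour.
    return [palette[min(range(k), key=lambda c: dist2(p, palette[c]))]
            for p in pixels]

# Toy "image": two near-white and two near-black pixels collapse to 2 colours.
img = [(250, 250, 250), (240, 245, 250), (10, 0, 5), (20, 10, 0)]
out = quantize(img, k=2)
```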

12.3 Solution by the Genetic Algorithms


The main goal of this technique is to produce a bi-level image from an old manuscript, in which the background and back-to-front interference are white while the text is black. The original image is quantized with a genetic process: the goal is to narrow the colour spectrum so that the remaining colours (or grey levels) correspond to classes such as text, background, or back-to-front interference. Note that several colours may represent the same class. Quantization is then followed by a threshold that categorises the pixels as text or non-text, discarding the background and the back-to-front interference, on the assumption that the text consists of the darkest grey levels. The thresholding separates the classes into text and non-text (the darkest remaining shades). In our case, we used 256-level grayscale images of old manuscripts. A binary bit string encoded the grey level representing each class, and the parameters the quantization process uses to define the threshold were formalised in the same way; each parameter and each threshold was represented with 8 bits. An image fidelity index (Q) served as the fitness function.
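The 8-bit encoding described above can be illustrated with a small sketch (the helper names and chromosome layout are our own assumptions for illustration):

```python
BITS = 8  # 8 bits per class representative and per threshold, as above

def decode(chromosome, n_params):
    """Split a bit string into n_params 8-bit integers (grey levels 0-255)."""
    assert len(chromosome) == n_params * BITS
    return [int(chromosome[i * BITS:(i + 1) * BITS], 2) for i in range(n_params)]

# e.g. two class representatives and one threshold -> three 8-bit fields
chrom = '00001111' + '10000000' + '11111111'
print(decode(chrom, 3))  # [15, 128, 255]
```

A GA then evolves such bit strings with crossover and mutation, scoring each decoded parameter set with the fidelity index Q.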

12.4 Solution by the Particle Swarm Optimization
The swarm under consideration contains three kinds of bees: employed bees, onlookers, and scouts. Each employed bee is attached to a particular food source; it brings nectar to the hive and informs the onlooker bees about the source. Based on the information supplied by the employed bees, each onlooker chooses a food source, while the scouts search randomly for fresh sources. Colour quantization (CQ) is one of the most important methods in image processing and compression. The vast majority of quantization techniques rely on clustering algorithms, and unsupervised classification methods such as data clustering fall within the category of NP-hard problems. Using swarm intelligence algorithms is one strategy for tackling such problems. One swarm intelligence technique is the Artificial Fish Swarm Algorithm (AFSA). A modified AFSA has been proposed for performing CQ: the modification changes behaviours, settings, and the algorithm's process in order to increase the AFSA's effectiveness and eliminate its flaws. Four well-known images were subjected to CQ using this approach as well as other known algorithms, and comparison of the experimental findings shows that the modified algorithm achieves acceptable efficiency.

12.5 Solution by the Firefly Algorithm


Colour quantization (CQ), also known as colour image quantization, is a technique used in image processing to reduce the number of colours in an image because of limits on visual display, data storage, and transmission. Colour quantization methods come in two flavours: splitting algorithms such as median-cut, center-cut, and octree, and clustering algorithms. Clustering-based colour quantization assembles colour points into manageable groups and then identifies a representative for each group. This is an optimisation problem, since it seeks to minimise the quantization error by reducing the total distance between the centre of each cluster and its members; to obtain the overall solution, the greatest inter-cluster distance is minimised. The K-means algorithm, the most widely used clustering technique, is a standard choice for colour quantization. A variety of optimisation techniques, including swarm intelligence or nature-inspired algorithms, have also been used, such as Particle Swarm Optimization (PSO), a modified artificial fish swarm algorithm, Bacteria Foraging Optimization, and Honey Bee Optimization, as well as hybrid approaches that combine them.
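As a concrete reference point, the clustering-based quantization that K-means performs can be sketched as follows (deterministic seeding from the first k pixels is a simplification for illustration):

```python
def kmeans_palette(pixels, k, iters=10):
    """Plain k-means on colour points; the final centroids form the palette."""
    centroids = pixels[:k]  # simple deterministic seeding for this sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest centroid
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[i].append(p)
        # move each centroid to the mean of its cluster (keep it if empty)
        centroids = [tuple(sum(ch) / len(c) for ch in zip(*c)) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids
```

Swarm-based methods replace the centroid-update loop with their own search dynamics while keeping the same quantization-error objective.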

12.6 Solution by the Cuckoo Search


Representing an image with fewer, carefully chosen colours is a procedure called colour image quantization. The rapid advancement of computer hardware and software makes it simple to display and store high-quality images. Unfortunately, the level of detail in these images can be extremely high, which can slow down processing and transfer. To prevent these issues, images should undergo some pre-processing before processing and transmission to remove extraneous data. Colour quantization is a widely used pre-processing technique that minimises the number of colours in an image while minimising distortion, so that the visual quality of the reproduced image remains very close to the original. Colour quantization is typically done in two phases. Palette design, choosing the right set of colours (generally 8–256), comes first; pixel mapping, changing each pixel's colour to one from the palette, is the second phase. Colour quantization can therefore be considered a lossy image compression process. Researchers have used a number of stochastic optimisation techniques, including PSO and the Genetic Algorithm (GA), to solve this problem. In particular, the literature compares the PSO-based colour image quantization algorithm with a number of other widely used colour image quantization techniques.

12.7 Solution by the Bat Algorithm


To distribute the energy of the RGB image through the DCT with higher compaction, we first construct an improved colour space, B1B2B3. There are an infinite number of alternative colour spaces for representing an RGB image; the right choice among them is determined using BA, where the cost function is designed to maximise the energy in plane B1 relative to B2 and B3. After translating the RGB image into B1B2B3 space, we design optimal thresholds for each plane of the transformed image. The image decompression stage is also handled with the bat method, where the cost function balances the removal of unnecessary DCT coefficients against preserving the energy information. To increase the compression rate, we use a lossless coding strategy built on TRE coding and the adaptive scanning previously suggested in the literature.

12.8 Solution by the Spider Monkey Optimization (SMO) Algorithm


According to recent studies, distributed generation (DG) using renewable energy will increase significantly in the power networks of the future. The voltage security status of a distribution network can be improved directly by exploiting the effects of DG and distribution management system (DMS) activity. In the first part of this study, the voltage security state of the distribution network is determined using a combination of the learning vector quantization (LVQ) algorithm and Kohonen's self-organizing feature map (SOFM). The classification result is verified using two indicators: the voltage stability index (VSI) and the distribution system stability indicator (DSSI). The system's voltage profile must be improved in order to guarantee voltage security, so in the next step, genetic algorithm (GA) and spider monkey optimisation (SMO) approaches are used to determine the ideal location and size of DG for the reconfigured distribution system's improved voltage security condition. In a smart grid setting, accurate DG allocation can support demand side management (DSM) for better real-time service to customers. The concept has been tested on the IEEE 33-bus, IEEE 69-bus, and Indian 85-bus real radial distribution systems.

12.9 Solution by the Ant Colony Algorithms


This section introduces the general structure, block diagram, and algorithm of the suggested method. The proposed method's block diagram is depicted. In the first stage, a modified ant-based clustering algorithm is applied to the image pixels. The second phase selects a representative for each cluster. In the third stage, colour reduction is completed and pixels are mapped to the created palette using the most straightforward technique. As previously mentioned, the proposed approach begins with a sampling phase that randomly selects certain image pixels. This stage has a direct bearing on the outcome; to prevent oscillating results across different images, a large number of pixels is used in all samples. Additionally, the average of a few consecutive runs is used in the experiments to improve precision and reliability. Four images were chosen for analysis.

12.10 Solution by the Simulated Annealing


The fundamental benefit of black-box optimisation algorithms is that they can deliver a near-optimal solution without requiring any domain-specific expertise. Simulated annealing (SA) was first presented as a general optimisation technique by Kirkpatrick (1983). It mimics the annealing process, in which metal is heated to a temperature close to its melting point and then gradually cooled, allowing the particles to settle into a minimal-energy state with a more crystalline structure; the technique thus allows some control over the microstructure. Simulated annealing is a variant of the hill-climbing algorithm: both begin at a random location in the search space of all potential solutions.
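The annealing loop described above can be sketched generically (the parameter values and the geometric cooling schedule are illustrative choices, not part of the original formulation):

```python
import math
import random

def simulated_annealing(f, x0, neighbour, T0=1.0, cooling=0.95, steps=500, seed=1):
    """Generic SA loop: always accept improvements, accept worse moves with
    probability exp(-delta / T), and cool the temperature geometrically."""
    rng = random.Random(seed)
    x, best = x0, x0
    T = T0
    for _ in range(steps):
        y = neighbour(x, rng)
        delta = f(y) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            x = y                       # accept the move
            if f(x) < f(best):
                best = x                # track the best solution seen
        T *= cooling
    return best

# Toy example: minimise a 1-D parabola starting from x = 0
best = simulated_annealing(lambda x: (x - 3) ** 2, 0.0,
                           lambda x, r: x + r.uniform(-1, 1))
```

As the temperature falls, the loop behaves more and more like plain hill climbing, which is exactly the relationship the paragraph above describes.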

12.11 Conclusion
A colour quantization technique that combines artificial ants and bees has been suggested. This approach is based on the algorithm of Ozturk et al., who solved the same problem by fusing the K-means algorithm with artificial bees. Because of the K-means algorithm's high time requirement, the ATCQ method is used instead. According to the computational results, the new method produces images of quality similar to that of Ozturk et al.'s method but takes far less time to complete. Moreover, compared with other widely used colour quantization techniques such as Wu's approach, Neuquant, K-means, Octree, or the variance-based technique, this technique produces better images. Although PSO can sometimes produce better images than ABC + ATCQ, it takes a disproportionate amount of time to do so. The standard ATCQ algorithm, which creates a multilevel tree, and the variant that produces a 3-level tree have both been merged with the ABC algorithm; analysis of the data reveals that the second case generates images more quickly, with quality remarkably similar to the first.

13 Minimum Spanning Tree


A minimum spanning tree (MST), or minimum weight spanning tree, is a subset of the edges of a connected, edge-weighted, undirected graph that connects all of the vertices, contains no cycles, and has the least possible total edge weight. In other words, it is a spanning tree whose total edge weight is as small as possible. Any edge-weighted undirected graph (not just a connected one) has a minimum spanning forest, which is the union of the minimum spanning trees of its connected components.
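Since the hybrid methods in this section repeatedly rely on Kruskal's algorithm to build the MST, a compact sketch is worth having (edges are given as (weight, u, v) tuples; the union-find is kept deliberately simple):

```python
def kruskal(n, edges):
    """Kruskal's MST: sort edges by weight and add each edge that joins two
    different components; union-find tracks the components."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                      # edge connects two components
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (7, 2, 3), (3, 1, 3)]
print(kruskal(4, edges))  # MST of weight 6 with 3 edges
```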

13.1 Solution by the Artificial Bee Colony (ABC) Algorithms


The artificial bee colony algorithm is a population-based metaheuristic put forward by Karaboga [9] and improved upon by Karaboga and Basturk. It is modelled on the intelligent foraging behaviour of honeybee swarms. The three types of foraging bees are employed bees, onlookers, and scouts; a bee is categorised as "employed" if it is currently exploiting a food source, carrying nectar from that source back to the hive. The ABC-LCMST implementation was developed in C and run under Red Hat Linux 9.0 on a Pentium 4 system with 512 MB of RAM operating at 3.0 GHz. A colony of 100 bees was employed, as per [13], divided into two groups: employed bees and onlookers. If an employed bee's solution to a problem of size n does not improve after 2n iterations, it is replaced by a randomly generated solution; the limit of 2n iterations was established empirically.

13.2 Solution by the Differential Evolution


The goal of multimodal optimisation problems (MMOPs) is to locate as many global optima as possible with high accuracy. Because they help maintain population diversity, niching strategies have been widely used in evolutionary algorithms for MMOPs. Unfortunately, most multimodal algorithms are sensitive to the niching parameters and lack efficient ways to update "stagnant" individuals, such as those that have converged to or become trapped in the same local optimum. To address these problems more effectively, a minimum spanning tree niching-based differential evolution (MSTNDE) with a knowledge-driven update (KDU) strategy has been proposed. The "knowledge" here refers to information on past evolutionary trends, fitness distribution, and individual distribution. A minimum spanning tree niching (MSTN) technique splits the population adaptively, which allows the number of niches to change on the fly, while the KDU approach finds and updates stagnant individuals. To speed convergence and increase solution accuracy, respectively, an improved differential evolution with local stage-based mutation (LSM) and directional guided selection (DGS) procedures is suggested. Comparison with 16 state-of-the-art algorithms on CEC2013 demonstrates that MSTNDE achieves notable benefits on high-dimensional problems and on problems with several global optima.

13.3 Solution by the Genetic Algorithms


Because of the NP-hardness of the mc-MST problem, approximation methods must be used to solve it effectively. In order to address the mc-MST, a non-generational GA is proposed in this study. An approach for listing every Pareto optimal spanning tree was proposed in a paper by Zhou and Gen [10] to assess their suggested GA. However, Knowles [5] argued that the suggested enumeration algorithm was flawed. This study therefore proposes an enhanced enumeration approach that produces all true Pareto optimal solutions for the mc-MST problem in order to assess the proposed non-generational GA. The experimental findings demonstrate that the majority of the outcomes obtained by the suggested GA are genuine Pareto optimal solutions; hence the non-generational GA is very successful. The idea behind the non-dominated sorting procedure is that a ranking selection method is used to emphasise good points while a niche method maintains stable subpopulations of good points.
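The notion of Pareto optimality used above can be made concrete with a small sketch (minimisation of all objectives is assumed):

```python
def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every objective and
    strictly better in at least one (minimisation assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep the points that no other point dominates (the Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

print(non_dominated([(1, 5), (2, 2), (3, 1), (4, 4)]))
# [(1, 5), (2, 2), (3, 1)]
```

Non-dominated sorting repeatedly peels off such fronts and ranks individuals by the front they fall in.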

13.4 Solution by the Particle Swarm Optimization


Traditional algorithms cannot solve the minimum spanning tree with length constraint problem (MSTLCP), so an improved particle swarm optimisation (PSO), based on the idea of global and feasible searching, is proposed as a solution. A more reasonable fitness function is designed in the improved PSO based on the relationship between a spanning tree and its cotree, and the improved position-update rule draws the current position of each particle towards the best position in its neighbourhood. These features keep the particle swarm feasible, so the improved PSO handles the MSTLCP more reliably than standard PSO. Simulation experiments analysing parameter changes, swarm-size changes, and iteration-count changes lead to the conclusion that the enhanced PSO is an effective algorithm.

13.5 Solution by the Firefly Algorithm


Combining the Firefly Algorithm with the MST yields an optimisation method that can address a variety of problems; the resulting technique is known as the Firefly Algorithm based on Minimum Spanning Trees (MSTFA). The MSTFA algorithm incorporates the MST construction process into the Firefly Algorithm's fundamental design and operates as follows:

1. Initialization: place the fireflies at random locations in the search space.
2. Evaluation: determine each firefly's fitness using the objective function.
3. MST Construction: use Kruskal's algorithm to build the MST over the population of fireflies.
4. Movement: update each firefly's position using x_i(t+1) = x_i(t) + beta * (x_j(t) - x_i(t)) + alpha * (x_MST(i) - x_i(t)), where x_j(t) is the position of the brightest firefly at time t, x_MST(i) is the position of firefly i's closest neighbour in the MST, and beta and alpha are step sizes.
5. Evaluation: determine each firefly's fitness using the objective function.
6. Update: adjust each firefly's brightness according to its fitness value.
7. Repeat steps 3 through 6 until the termination requirement is satisfied.

The MST construction step is central to MSTFA because it directs each firefly towards its closest firefly in the MST, which helps preserve population diversity and prevents premature convergence. In general, MSTFA combines the strengths of the Firefly Algorithm and the MST to offer an effective optimisation method for a variety of problems.
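The MSTFA position update described above can be sketched as follows (the function name and parameter values are illustrative, and the per-dimension form is our reading of the update rule):

```python
def mstfa_update(x_i, x_brightest, x_mst_neighbour, alpha=0.2, beta=0.5):
    """One MSTFA move: step towards the brightest firefly (beta term)
    and towards the nearest firefly in the MST (alpha term)."""
    return [xi + beta * (xb - xi) + alpha * (xm - xi)
            for xi, xb, xm in zip(x_i, x_brightest, x_mst_neighbour)]

print(mstfa_update([0.0, 0.0], [1.0, 1.0], [2.0, 0.0]))
# approximately [0.9, 0.5]
```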

13.6 Solution by the Cuckoo Search


Combining the Cuckoo Search with the MST yields an optimisation method that can address a variety of problems; the resulting algorithm is the Minimum Spanning Tree based Cuckoo Search (MSTCS). The MST building process is incorporated into the Cuckoo Search's core framework, and the algorithm operates as follows:

1. Initialization: assign the cuckoos random starting places in the search space.
2. Evaluation: determine each cuckoo's fitness using the objective function.
3. MST Construction: use Kruskal's algorithm to build the MST over the cuckoo population.
4. Lévy Flight: generate a new position for each cuckoo using a Lévy flight.
5. MST Update: move each cuckoo towards its closest cuckoo in the MST.
6. Evaluation: determine each cuckoo's fitness using the objective function.
7. Nest Selection: keep the best solutions in the cuckoo population and replace the worst with fresh solutions produced by Lévy flights.
8. Repeat steps 3 through 7 until the termination requirement is satisfied.

The MST construction and update steps are central to MSTCS because they direct each cuckoo towards its closest cuckoo in the MST, which helps preserve population diversity and prevents premature convergence. Ultimately, the strengths of Cuckoo Search and the MST are combined in the MSTCS algorithm to provide an effective optimisation technique.

13.7 Solution by the Bat Algorithm


The Bat Algorithm and the MST can be combined into an optimisation method that handles a variety of problems. The resulting algorithm is the Minimum Spanning Tree based Bat Algorithm (MSTBA). The MSTBA algorithm incorporates the MST building process while maintaining the basic architecture of the Bat Algorithm, and it operates as follows:

1. Initialization: place the bats at random starting points in the search space.
2. Evaluation: determine each bat's fitness using the objective function.
3. MST Construction: use Kruskal's algorithm to build the MST over the population of bats.
4. Frequency Update: update each bat's frequency with f_i(t+1) = f_min + (f_max - f_min) * rand(), where f_min and f_max are the minimum and maximum frequencies and rand() generates a random number between 0 and 1.
5. Velocity Update: update each bat's velocity with v_i(t+1) = v_i(t) + (x_i(t) - x_j(t)) * f_i, where x_j(t) is the position of the brightest bat at time t and v_i(t) is bat i's current velocity at time t.
6. MST Update: move each bat towards its closest bat in the MST to update its position.
7. Evaluation: determine each bat's fitness using the objective function.
8. Loudness Update: update each bat's loudness with A_i(t+1) = alpha * A_i(t), where alpha is a constant and A_i(t) is bat i's current loudness at time t.
9. Pulse Update: update each bat's pulse rate with r_i(t+1) = r_i(0) * (1 - exp(-gamma * t)), where gamma is a constant and r_i(0) is bat i's initial pulse rate.
10. Repeat steps 3 through 9 until the termination requirement is satisfied.

The MST construction and MST update steps are essential because they direct each bat towards its neighbouring bat in the MST.
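The frequency and velocity updates above can be sketched together. The position step x ← x + v used here is the standard Bat Algorithm form; in MSTBA the MST-guided move replaces it. The stub RNG makes the example deterministic:

```python
def bat_step(x, v, x_best, f_min, f_max, rand):
    """One bat update: draw a frequency, then update velocity and position."""
    f = f_min + (f_max - f_min) * rand()                           # frequency update
    v = [vi + (xi - xb) * f for vi, xi, xb in zip(v, x, x_best)]   # velocity update
    x = [xi + vi for xi, vi in zip(x, v)]                          # position update
    return x, v, f

# With rand() fixed at 0.5 the step is fully deterministic
x, v, f = bat_step([1.0], [0.0], [0.0], f_min=0.0, f_max=2.0, rand=lambda: 0.5)
print(x, v, f)  # [2.0] [1.0] 1.0
```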

13.8 Solution by the Spider Monkey Optimization (SMO) Algorithm


Combining the SMO with the MST yields an optimisation method that can address a variety of problems; the final algorithm is the Minimum Spanning Tree based Spider Monkey Optimization (MSTSMO). The MSTSMO algorithm adds the MST construction procedure to the conventional SMO structure and operates as follows:

1. Initialization: place the spider monkeys at random starting points in the search space.
2. Evaluation: assess each spider monkey's fitness using the objective function.
3. MST Construction: use Kruskal's algorithm to build the MST over the spider monkey population.
4. Movement Update: update each spider monkey's position with x_i(t+1) = x_i(t) + (d_i(t) / d_max) * (x_j(t) - x_i(t)), where x_i(t) is spider monkey i's current location at time t, d_i(t) is the distance to its closest neighbour in the MST, d_max is the maximum distance in the MST, and x_j(t) is the position of the spider monkey in the MST closest to spider monkey i.
5. Evaluation: assess each spider monkey's fitness using the objective function.
6. Nest Update: update each spider monkey's nest by choosing the better of its current and former positions.
7. Spider Update: update each spider monkey's position with x_i(t+1) = x_i(t) + rand() * (x_best - x_i(t)), where rand() generates a random number between 0 and 1 and x_best is the position of the best spider monkey.
8. Evaluation: assess each spider monkey's fitness using the objective function.
9. Selection: keep the best solutions in the population and replace the rest with fresh solutions produced by the spider update.
10. Repeat steps 3 through 9 until the termination requirement is satisfied.

The MST construction and movement-update steps are essential because they direct each spider monkey towards its closest spider monkey in the MST.

13.9 Solution by the Ant Colony Algorithms


Combining ACO with the MST yields an optimisation method that can address a variety of problems. The resulting approach is the Ant Colony Algorithm based on Minimum Spanning Trees (MSTACO), which adds the MST building process to ACO's basic framework. The algorithm operates as follows:

1. Initialization: place the ants at random starting points in the search space.
2. Evaluation: assess each ant's fitness using the objective function.
3. MST Construction: use Kruskal's algorithm to build the MST over the ant population.
4. Ant Movement: move each ant to a new node based on the pheromone trails and the distances in the MST, choosing node j with probability p_i(j) = (tau_i(j)^alpha * eta_i(j)^beta) / sum_k (tau_i(k)^alpha * eta_i(k)^beta), where tau_i(j) is the pheromone level on the edge between node i and node j, eta_i(j) is the heuristic information, and alpha and beta are parameters that control the relative influence of pheromone and distance.
5. Evaluation: assess each ant's fitness using the objective function.
6. Pheromone Update: update the pheromone trails with tau_i(j) = (1 - rho) * tau_i(j) + 1 / f(x_i), where rho is the evaporation rate, f(x_i) is the fitness of ant i, and 1 / f(x_i) is the quantity of pheromone deposited by ant i.
7. Nest Update: update each ant's best-known solution by choosing the better of its current and prior positions.
8. Repeat steps 3 through 7 until the termination requirement is satisfied.
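The transition probability and pheromone update above can be sketched for a single ant (the names are illustrative, and the deposit is applied to every trail for brevity rather than only to the edges the ant used):

```python
def transition_probs(taus, etas, alpha=1.0, beta=2.0):
    """p(j) proportional to tau_j^alpha * eta_j^beta, normalised over candidates."""
    weights = [(t ** alpha) * (e ** beta) for t, e in zip(taus, etas)]
    total = sum(weights)
    return [w / total for w in weights]

def pheromone_update(taus, rho=0.5, deposit=0.0):
    """Evaporate each trail by (1 - rho), then add the ant's deposit (1 / fitness)."""
    return [(1 - rho) * t + deposit for t in taus]

probs = transition_probs([1.0, 2.0], [1.0, 1.0])
print(probs)  # approximately [1/3, 2/3]
```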

13.10 Solution by the Simulated Annealing


Combining SA with the MST yields an optimisation method that can address a variety of problems; the resulting algorithm is Simulated Annealing with Minimum Spanning Trees (MSTSA). The MST building process is added to SA's fundamental structure, and the algorithm operates as follows:

1. Initialization: set the solution to a random state in the search space.
2. Evaluation: assess the solution's fitness using the objective function.
3. MST Construction: use Kruskal's algorithm to build the MST for the solution.
4. Annealing: repeat until the stopping requirement is satisfied:
   a. Perturbation: change the current solution in a random way.
   b. Evaluation: assess the perturbed solution's fitness using the objective function.
   c. MST Construction: use Kruskal's algorithm to build the MST for the perturbed solution.
   d. Acceptance: accept the perturbed solution with the probability given by the Metropolis criterion, P = exp(-delta / T), where delta is the fitness difference between the perturbed and current solutions and T is the current temperature.
   e. Temperature Update: update the temperature according to a cooling schedule.
5. Finish: return the best solution found during the annealing procedure.

The MST construction step is essential because it directs perturbations towards the MST's shortest paths, which helps preserve diversity and prevents premature convergence. In essence, MSTSA combines the strengths of SA and the MST into an optimisation algorithm that can solve a variety of problems.
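The Metropolis acceptance test used in the annealing loop can be sketched on its own (improvements, delta <= 0, are always accepted):

```python
import math
import random

def metropolis_accept(delta, T, rng=random):
    """Accept a worse solution (delta > 0) with probability exp(-delta / T);
    improving or equal moves (delta <= 0) are always accepted."""
    return delta <= 0 or rng.random() < math.exp(-delta / T)

print(metropolis_accept(-1.0, 1.0))  # True: improvements always pass
```

At high T almost any perturbation is accepted; as T falls, exp(-delta / T) shrinks towards zero and the procedure degenerates into pure hill climbing.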

13.11 Conclusion
In summary, the Minimum Spanning Tree (MST) is a fundamental problem in graph theory: it seeks to connect a set of nodes or vertices with the minimum total edge weight, ensuring that every vertex is connected and that there are no cycles. The MST has many practical uses, including network design, image segmentation, and clustering. It has also served as a building block for numerous optimisation algorithms, including Ant Colony Optimization, Simulated Annealing, Genetic Algorithms, and Particle Swarm Optimization, which use the MST as a guiding principle in the search for good solutions. Incorporating the MST into optimisation algorithms can yield effective and efficient solutions for a variety of problems: the algorithms exploit the MST's characteristics to preserve population diversity, prevent premature convergence, and search for the global optimum. The MST is a powerful idea that has been applied in many domains, and its use within optimisation methods has proved a viable route for tackling challenging optimisation problems.

14 Robot Path Planning


Robot path planning is a critical task in robotics: the goal is to find an optimal path for a robot to move from its initial position to a target position while avoiding obstacles. Artificial Bee Colony (ABC) is a metaheuristic optimization algorithm inspired by the foraging behavior of honey bees; it mimics the food-source discovery process of bees to search for optimal solutions in a search space. The RP-ABC method offers a number of benefits, such as the capacity to handle complicated settings with numerous obstacles, robustness to noise and uncertainty in the environment, and the ability to locate optimal paths quickly and effectively. The RP-ABC algorithm can also be extended to dynamic contexts, where the obstacles or the position of the target may change over time.

14.1 Solution by the Artificial Bee Colony (ABC) Algorithms


Robot path planning and ABC can be combined into an algorithm that locates good routes for a robot in challenging environments: the Robot Path Planning based Artificial Bee Colony (RP-ABC) algorithm. It operates as follows:

1. Initialization: generate a population of candidate paths through the robot's environment.
2. Evaluation: determine the candidate paths' fitness using a fitness function that considers path length, obstacle avoidance, and other constraints.
3. ABC Search: apply the ABC algorithm to the population of candidate paths to identify the path that best satisfies the problem's requirements.
4. Local Search: use a local search method, such as gradient descent or hill climbing, to refine the best path found by the ABC algorithm.
5. Finish: return the best path found during the search process.

In the RP-ABC method, the ABC algorithm serves as a global search algorithm, while the local search algorithm serves as a refinement phase that improves the quality of the best path the ABC algorithm discovered.
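A fitness function of the kind used in the evaluation step might combine path length with an obstacle penalty. The names, the clearance radius, and the penalty weight below are illustrative assumptions, not part of the original method:

```python
import math

def path_fitness(path, obstacles, clearance=1.0, penalty=1000.0):
    """Fitness = total path length + a heavy penalty for each waypoint
    that comes within `clearance` of an obstacle (lower is better)."""
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    hits = sum(1 for p in path for o in obstacles if math.dist(p, o) < clearance)
    return length + penalty * hits

safe = [(0, 0), (0, 5), (5, 5)]     # goes around the obstacle
risky = [(0, 0), (3, 3), (5, 5)]    # passes straight through it
print(path_fitness(safe, [(3, 3)]), path_fitness(risky, [(3, 3)]))
```

The large penalty makes any colliding path lose to any collision-free one, so the search concentrates on feasible routes first and shortens them second.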

14.2 Solution by the Differential Evolution


Robot path planning and DE can be combined into an algorithm that locates good routes for a robot in challenging environments: the Robot Path Planning based Differential Evolution (RP-DE) algorithm. It operates as follows:

1. Initialization: generate a population of candidate paths through the robot's environment.
2. Evaluation: determine the candidate paths' fitness using a fitness function that considers path length, obstacle avoidance, and other constraints.
3. DE Search: apply the DE algorithm to the population of candidate paths to identify the path that best satisfies the problem's requirements.
4. Local Search: use a local search technique, such as gradient descent or hill climbing, to refine the best path found by the DE algorithm.
5. Finish: return the best path found during the search procedure.

The DE algorithm serves as a global search algorithm in the RP-DE method, while the local search algorithm serves as a refinement phase. The RP-DE algorithm offers a number of benefits, including the capacity to handle challenging environments with numerous obstacles, robustness to noise and uncertainty, and speed and effectiveness in locating good routes; it can also be extended to dynamic contexts, where the obstacles or the position of the target may change over time. Overall, the RP-DE algorithm is a promising method for planning robot paths and can offer a practical resolution to a variety of robotic navigation issues.

14.3 Solution by the Genetic Algorithms


Combining GAs with robot path planning yields an efficient method for determining good routes for a robot in complicated environments: the Robot Path Planning based Genetic Algorithm (RP-GA). It operates as follows:

1. Initialization: generate a population of candidate paths through the robot's environment.
2. Evaluation: determine the candidate paths' fitness using a fitness function that considers path length, obstacle avoidance, and other constraints.
3. Selection: choose the best-fitting members of the population to produce the next round of candidate paths.
4. Crossover: apply crossover operators to the selected individuals to create new candidate paths.
5. Mutation: apply mutation operators to the new candidate paths to add diversity to the population.
6. Evaluation: assess the fitness of the new candidate paths.
7. Termination: if the termination requirement is not satisfied, return to step 3; otherwise, return the best path discovered during the search process.

The GA serves as a global search algorithm in the RP-GA algorithm, while a local search algorithm can serve as a refinement phase to improve the quality of the best path the GA discovered. The RP-GA method offers a number of benefits, including the capacity to handle challenging environments with numerous obstacles, robustness to noise and ambiguity, and speed and effectiveness in locating good paths; it can also be extended to dynamic contexts, where the obstacles or the location of the target may change over time. Overall, the RP-GA algorithm is a promising method for planning robot paths and can offer a practical resolution to a variety of robotic navigation issues.

14.4 Solution by the Particle Swarm Optimization


Robot path planning and PSO can be combined into a powerful algorithm that determines the best routes for a robot through challenging situations. The result is the robot path planning based particle swarm optimisation (RP-PSO) algorithm, which operates as follows:

1. Initialization: Generate a population of candidate paths for the robot through its environment.
2. Evaluation: Compute each candidate path's fitness with a fitness function that considers path length, obstacle avoidance, and other constraints.
3. PSO Search: Apply the PSO algorithm to the population of candidate paths to identify the path that best satisfies the problem's constraints.
4. Local Search: Apply a local search method, such as gradient descent or hill climbing, to refine the best path found by PSO.
5. Return the best route discovered during the search.

The PSO algorithm serves as the global search component of RP-PSO, while the local search algorithm serves as a refinement phase that improves the quality of the best path PSO discovers. The RP-PSO method has several benefits: it can handle complicated settings with numerous obstacles, it is tolerant of noise and uncertainty in the environment, and it locates good paths swiftly and effectively. It can also be extended to dynamic environments in which obstacles or the position of the target vary over time. Overall, RP-PSO is a promising method for planning robot paths and can offer a practical and efficient answer to a variety of robotic navigation issues.
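The PSO search step can be illustrated with the canonical velocity-and-position update applied to a real-valued path encoding; the inertia and acceleration coefficients below are conventional defaults, not values from the report:

```python
import random

def pso_step(position, velocity, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5):
    # Canonical PSO update for one particle:
    #   v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v
    new_v, new_x = [], []
    for x, v, pb, gb in zip(position, velocity, personal_best, global_best):
        r1, r2 = random.random(), random.random()
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_v.append(v)
        new_x.append(x + v)
    return new_x, new_v
```

A particle sitting at both its personal and the global best with zero velocity stays put, while particles elsewhere are pulled toward those attractors.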

14.5 Solution by the Firefly Algorithm


Robot path planning and the Firefly Algorithm can be combined into a powerful algorithm that locates the best routes for robot navigation through challenging situations. The result is the Robot Path Planning based Firefly Algorithm (RP-FA), which operates as follows:

1. Initialization: Generate a population of candidate paths for the robot through its environment.
2. Evaluation: Compute each candidate path's fitness with a fitness function that considers path length, obstacle avoidance, and other constraints.
3. Firefly Search: Apply the Firefly Algorithm to the population of candidate paths to identify the path that best satisfies the problem's requirements.
4. Local Search: Apply a local search method, such as gradient descent or hill climbing, to refine the best path found by the Firefly Algorithm.
5. Return the best route discovered during the search.

The Firefly Algorithm performs the global search in RP-FA, while the local search algorithm performs a refinement step that improves the quality of the best path the Firefly Algorithm discovers. The RP-FA algorithm provides several benefits: it can handle complicated environments with numerous obstacles, it is robust to noise and uncertainty in the environment, and it locates good paths quickly and efficiently. It can also be extended to dynamic environments in which obstacles or the location of the target change over time. Overall, RP-FA offers a promising method for designing robot paths and can effectively address a variety of robotic navigational issues.
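One way to picture the Firefly Search step is the standard attraction move, in which a dimmer firefly (a worse path encoding) moves toward a brighter one; the brightness constant beta0 and absorption coefficient gamma below are assumed parameter choices:

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.0):
    # Move firefly i toward brighter firefly j:
    #   x_i <- x_i + beta0*exp(-gamma*r^2)*(x_j - x_i) + alpha*noise
    # where r is the distance between the two fireflies.
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```

With gamma = 0 the attraction is at full strength and the firefly lands exactly on the brighter one; larger gamma makes attraction fade with distance.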

14.6 Solution by the Cuckoo Search
Robot path planning and the Cuckoo Search algorithm can be combined into a powerful system that determines the best routes for robots through challenging terrain. The result is the robot path planning based cuckoo search (RP-CS) algorithm, which operates as follows:

1. Initialization: Generate a population of candidate paths for the robot through its environment.
2. Evaluation: Compute each candidate path's fitness with a fitness function that considers path length, obstacle avoidance, and other constraints.
3. CS Search: Apply the Cuckoo Search algorithm to the population of candidate paths to identify the path that best satisfies the problem's requirements.
4. Local Search: Apply a local search technique, such as gradient descent or hill climbing, to refine the best path found by Cuckoo Search.
5. Return the best route discovered during the search.

Cuckoo Search performs the global search in RP-CS, while the local search algorithm serves as a refining phase that improves the quality of the best path Cuckoo Search discovers. The RP-CS algorithm provides several benefits: it can handle complicated environments with numerous obstacles, it is robust to noise and uncertainty in the environment, and it locates good paths quickly and efficiently. It can also be extended to dynamic environments in which obstacles or the location of the target vary over time. Overall, RP-CS offers a promising method for planning robot paths and can effectively address a variety of robotic navigational issues.
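The Cuckoo Search step is usually driven by Lévy flights; a common sketch (assuming Mantegna's algorithm with the conventional exponent beta = 1.5) looks like this:

```python
import math
import random

def levy_step(beta=1.5):
    # Draw one Levy-distributed step via Mantegna's algorithm, a common
    # choice in Cuckoo Search; beta = 1.5 is the conventional exponent.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_move(x, best, scale=0.01):
    # Propose a new candidate path encoding by a Levy flight around x,
    # scaled by the distance to the current best solution.
    return [xi + scale * levy_step() * (xi - bi) for xi, bi in zip(x, best)]
```

The heavy-tailed Lévy steps mix many small local moves with occasional long jumps, which is what gives Cuckoo Search its global exploration ability.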

14.7 Solution by the Bat Algorithm


Robot path planning and the Bat Algorithm can be combined into a powerful algorithm that determines the best routes for robots through challenging situations. The result is the Robot Path Planning based Bat Algorithm (RP-BA), which operates as follows:

1. Initialization: Generate a population of candidate paths for the robot through its environment.
2. Evaluation: Compute each candidate path's fitness with a fitness function that considers path length, obstacle avoidance, and other constraints.
3. Bat Search: Apply the Bat Algorithm to the population of candidate paths to identify the path that best satisfies the problem's requirements.
4. Local Search: Apply a local search method, such as gradient descent or hill climbing, to refine the best path found by the Bat Algorithm.
5. Return the best route discovered during the search.

The Bat Algorithm performs the global search in RP-BA, while the local search algorithm serves as a refining phase that improves the quality of the best path the Bat Algorithm discovers. The RP-BA method provides several benefits: it can handle complicated environments with numerous obstacles, it is robust to noise and uncertainty in the environment, and it locates good paths quickly and efficiently. It can also be extended to dynamic environments in which obstacles or the location of the target change over time. Overall, RP-BA is a promising method for planning robot paths and can offer a practical resolution to a variety of robotic navigation issues.
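The Bat Search step can be sketched with the canonical frequency-tuned update, in which each bat's velocity is pulled toward the best-known solution; the frequency range below is an assumed choice:

```python
import random

def bat_step(x, v, best, fmin=0.0, fmax=2.0):
    # Canonical Bat Algorithm update: draw a random frequency, update the
    # velocity toward the swarm's best solution, then advance the position.
    f = fmin + (fmax - fmin) * random.random()
    new_v = [vi + (xi - bi) * f for vi, xi, bi in zip(v, x, best)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

A bat already at the best-known position with zero velocity does not move, which keeps the current best stable between iterations.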

14.8 Solution by the Spider Monkey Optimization (SMO) Algorithm


Robot path planning and the Spider Monkey Optimization (SMO) algorithm can be combined into an efficient algorithm that locates the best routes for a robot navigating challenging surroundings. The result is the robot path planning based spider monkey optimisation (RP-SMO) algorithm, which operates as follows:

1. Initialization: Generate a population of candidate paths for the robot through its environment.
2. Evaluation: Compute each candidate path's fitness with a fitness function that considers path length, obstacle avoidance, and other constraints.
3. SMO Search: Apply the SMO algorithm to the population of candidate paths to identify the path that best satisfies the problem's requirements.
4. Local Search: Apply a local search algorithm, such as gradient descent or hill climbing, to refine the best path found by SMO.
5. Return the best route discovered during the search.

The SMO algorithm performs the global search in RP-SMO, while the local search algorithm serves as a refinement phase that improves the quality of the best path SMO discovers. The RP-SMO algorithm provides several benefits: it can handle challenging situations with numerous obstacles, it is robust to noise and uncertainty in the environment, and it locates good routes quickly and effectively. It can also handle dynamic environments in which the obstacles or the target position change over time. Overall, RP-SMO is a promising approach to robot path planning, and it can provide an efficient and effective solution to a wide range of robotic navigation problems.
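As a rough sketch of how SMO perturbs candidate solutions, the local-leader phase can be written as follows; the perturbation rate and the coefficient ranges are conventional choices rather than anything specified here:

```python
import random

def smo_local_phase(x, local_leader, random_member, pr=0.9):
    # Local-leader phase of SMO: with probability pr, each dimension of a
    # spider monkey's position moves toward the local leader and is
    # perturbed by the position of a randomly chosen group member.
    new_x = []
    for xi, li, ri in zip(x, local_leader, random_member):
        if random.random() < pr:
            b = random.random()          # attraction strength in [0, 1]
            d = random.uniform(-1, 1)    # peer perturbation in [-1, 1]
            new_x.append(xi + b * (li - xi) + d * (ri - xi))
        else:
            new_x.append(xi)
    return new_x
```

When a monkey, its local leader, and the random peer all coincide, the update leaves the position unchanged, which is what makes converged groups stable until the leader moves.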

14.9 Solution by the Ant Colony Algorithms


Robot path planning and the ACO algorithm can be combined into a powerful algorithm that determines the best routes for a robot through challenging situations. The result is the Robot Path Planning based Ant Colony Optimization (RP-ACO) algorithm, which operates as follows:

1. Initialization: Generate a population of candidate paths for the robot through its environment.
2. Evaluation: Compute each candidate path's fitness with a fitness function that considers path length, obstacle avoidance, and other constraints.
3. ACO Search: Apply the ACO algorithm to the population of candidate paths to identify the path that best satisfies the problem's constraints.
4. Local Search: Apply a local search algorithm, such as gradient descent or hill climbing, to refine the best path found by ACO.
5. Return the best route discovered during the search.

The ACO algorithm performs the global search in RP-ACO, while the local search algorithm serves as a refinement phase that improves the quality of the best path ACO discovers. The RP-ACO algorithm has several benefits: it can handle complicated environments with numerous obstacles, it is tolerant of noise and uncertainty in the environment, and it locates good paths swiftly and effectively. It can also be extended to dynamic environments in which obstacles or the position of the target change over time. Overall, RP-ACO offers a promising method for planning robot paths and can effectively address a variety of robotic navigational issues.
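The pheromone mechanics behind the ACO search step can be sketched as evaporation followed by deposit; the evaporation rate rho and deposit constant q are assumed parameters:

```python
def update_pheromone(tau, paths, costs, rho=0.5, q=1.0):
    # Standard ACO pheromone update: evaporate every trail, then let each
    # ant deposit pheromone on the edges of its path, in proportion to
    # the path's quality (here, inversely proportional to its cost).
    tau = {edge: (1 - rho) * t for edge, t in tau.items()}
    for path, cost in zip(paths, costs):
        for edge in zip(path, path[1:]):
            tau[edge] = tau.get(edge, 0.0) + q / cost
    return tau
```

Edges on good (cheap) paths accumulate pheromone faster than evaporation removes it, so later ants are biased toward them, while unused edges fade away.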

14.10 Solution by the Simulated Annealing


Robot path planning and the SA algorithm can be combined into a powerful algorithm that determines the best routes for a robot through challenging situations. The result is the Robot Path Planning based Simulated Annealing (RP-SA) algorithm, which operates as follows:

1. Initialization: Generate an initial candidate path for the robot through its environment.
2. Evaluation: Compute the candidate path's fitness with a fitness function that considers path length, obstacle avoidance, and other constraints.
3. SA Search: Apply the SA algorithm to the candidate path to identify the path that best satisfies the problem's constraints.
4. Return the best route discovered during the search.

The SA algorithm serves as a global search procedure in RP-SA, with the ability to escape from local optima, while the fitness function directs the search toward viable paths that adhere to the problem's restrictions. The RP-SA algorithm provides several benefits: it can handle complicated environments with numerous obstacles, it is robust to noise and uncertainty in the environment, and it locates good paths quickly and efficiently. It can also be extended to dynamic environments in which obstacles or the location of the target change over time. Overall, RP-SA offers a promising method for planning robot paths and can effectively address a variety of robotic navigational issues.
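The heart of the SA search step is the Metropolis acceptance rule, which is what lets the algorithm escape local optima; a minimal sketch:

```python
import math
import random

def sa_accept(current_cost, candidate_cost, temperature):
    # Metropolis acceptance rule: always accept an improvement; accept a
    # worse path with probability exp(-delta / T), so uphill moves are
    # common while the temperature is high and rare once it has cooled.
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)
```

A cooling schedule (for example, multiplying the temperature by 0.95 each iteration) gradually turns this from a random walk into a greedy descent.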

14.11 Conclusion
In conclusion, determining the best path for a robot to take from its starting point to a goal position while avoiding obstacles is a difficult challenge in robotics. The Artificial Bee Colony, Differential Evolution, Genetic Algorithms, Particle Swarm Optimization, Firefly Algorithm, Cuckoo Search, Bat Algorithm, Spider Monkey Optimization, Ant Colony Optimization, and Simulated Annealing algorithms can all be used to solve the robot path planning problem. The choice of algorithm depends on the unique aspects of the problem and the performance requirements; each algorithm has strengths and limitations. Population-based algorithms, such as Genetic Algorithms and Particle Swarm Optimization, can search the solution space effectively. The Minimum Spanning Tree algorithm can make optimisation algorithms more effective and economical, particularly when tackling complicated situations with numerous obstacles, and the A* algorithm, a common pathfinding technique in robotics, can be incorporated into optimisation algorithms to increase their effectiveness by offering a heuristic function to direct the search. Overall, optimisation algorithms like those described here have the potential to help robots navigate complicated situations more effectively and safely, and they can offer an efficient and effective solution to the robot path planning problem.

15 Data Envelopment Analysis


Data envelopment analysis (DEA) is a non-parametric technique for assessing the relative effectiveness of decision-making units (DMUs). A DMU is an organisation that uses inputs to generate outputs, such as a hospital, a division of a university, or a branch of a bank. Operations research, economics, and management science all make extensive use of DEA to assess DMU performance and pinpoint best practices. The DEA approach evaluates DMUs by comparing how well they transform inputs into outputs: using linear programming techniques, it constructs a frontier of efficient DMUs that represents the greatest performance achievable given the DMUs' inputs and outputs. The DEA method has several advantages over other methods used for evaluating the performance of DMUs. First, it is non-parametric, which means that it does not require any assumptions about the functional form of the production process. Second, it can handle multiple inputs and outputs simultaneously, which allows for a more comprehensive evaluation of performance. Finally, DEA can handle both constant returns to scale and variable returns to scale, which allows for a more nuanced evaluation of the performance of DMUs.
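As a toy illustration of the efficiency score DEA assigns, the sketch below approximates a CCR-style score for one DMU by grid search over output weights instead of the usual linear program; the single-input setting and the grid resolution are simplifying assumptions:

```python
from itertools import product

def dea_efficiency(k, inputs, outputs, grid=20):
    # Naive CCR-style efficiency for DMU k: search a grid of output
    # weights, normalize so that no DMU's output/input ratio exceeds 1,
    # and report the best normalized ratio DMU k can achieve.
    # inputs[j] is a scalar input; outputs[j] is a tuple of outputs.
    best = 0.0
    m = len(outputs[0])
    for w in product(range(grid + 1), repeat=m):
        if sum(w) == 0:
            continue
        ratios = [sum(wi * o for wi, o in zip(w, out)) / inp
                  for inp, out in zip(inputs, outputs)]
        scale = max(ratios)            # frontier DMUs pin the scale at 1
        best = max(best, ratios[k] / scale)
    return best
```

A DMU on the efficient frontier scores 1.0; a DMU producing half the output from the same input scores 0.5. Real DEA solves this weight-selection problem exactly with linear programming rather than a grid.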

15.1 Solution by the Artificial Bee Colony (ABC) Algorithms


Data envelopment analysis (DEA) is a widely used mathematical method for assessing the relative effectiveness of decision-making units (DMUs) that convert multiple inputs into multiple outputs. The goal of DEA is to find the input and output weights that maximise each DMU's efficiency; finding the best solution, however, can be difficult, particularly for large and complicated data sets. The Artificial Bee Colony (ABC) algorithm is a meta-heuristic method based on honeybee foraging behaviour that has been used successfully to address a variety of optimisation issues in many industries, and researchers have recently suggested using it to solve DEA problems. Applied to DEA, the ABC algorithm first initialises a population of candidate solutions, each representing a set of input and output weights for the DMUs. Each candidate solution is evaluated using the DEA model. The best candidates are then used to produce new candidate solutions through a process of exploration and exploitation that mimics bees searching for food sources. Using the ABC approach for DEA problems has several benefits: it can manage large and complicated data sets, it can deal with the non-linear and non-convex cases that arise in DEA, and it is reliable and does not require any prior understanding of the problem.

15.2 Solution by the Differential Evolution


The DEA problem seeks the input and output weights that maximise each DMU's efficiency. The DE method can be used to solve this problem by creating a population of candidate solutions, each representing the weights of a DMU's inputs and outputs. Each candidate solution is evaluated using the DEA model, and the best candidates are then used to develop new candidate solutions via mutation and crossover, which mimic natural selection and reproduction in biological systems. The DE algorithm has a number of benefits when used to solve DEA issues. First, it can manage large and complicated data sets. Second, it can deal with the non-linear and non-convex cases that arise in DEA. Third, it is a global optimization algorithm, which means that it can search the entire solution space to find the optimal solution. In summary, the use of the Differential Evolution (DE) algorithm for solving Data Envelopment Analysis (DEA) problems has shown promising results: DE can handle large and complex data sets and non-linear, non-convex problems, and it is a global optimization algorithm.
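The mutation-and-crossover process described above corresponds to the classic DE/rand/1/bin trial-vector construction; in this sketch the vectors stand for candidate weight sets, and F and CR are conventional defaults rather than values from the text:

```python
import random

def de_trial(target, a, b, c, f=0.8, cr=0.9):
    # DE/rand/1/bin: build a mutant vector a + F*(b - c), then perform
    # binomial crossover with the target; index j_rand guarantees at
    # least one dimension comes from the mutant.
    dims = len(target)
    j_rand = random.randrange(dims)
    trial = []
    for j in range(dims):
        if j == j_rand or random.random() < cr:
            trial.append(a[j] + f * (b[j] - c[j]))
        else:
            trial.append(target[j])
    return trial
```

In a full DE loop the trial vector replaces the target only if the DEA model rates it at least as efficient, which is the selection step.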

15.3 Solution by the Genetic Algorithms


The GA algorithm can be used to solve the DEA problem by generating a population of candidate solutions,
which represent the weights of inputs and outputs of each DMU. Each candidate solution is evaluated using
the DEA model to determine its efficiency. The best candidate solutions are then used to generate new
candidate solutions through a process of selection, crossover, and mutation, which simulates the natural
selection and reproduction process in biological systems. The GA algorithm has several advantages when applied to DEA problems. First, it can handle large and complex data sets. Second, it can handle non-linear and non-convex problems, which are common in DEA. Third, it is a global optimization algorithm, which means that it can search the entire solution space to find the optimal solution. In summary, the use of Genetic
Algorithm (GA) for solving Data Envelopment Analysis (DEA) problems has shown promising results. GA
algorithm is able to handle large and complex data sets, non-linear and non-convex problems, and is a global
optimization algorithm.

15.4 Solution by the Particle Swarm Optimization


The PSO algorithm can be used to solve the DEA problem by generating a population of candidate solutions,
which represent the weights of inputs and outputs of each DMU. Each candidate solution is evaluated using
the DEA model to determine its efficiency. The best candidate solutions are then used to generate new
candidate solutions through a process of velocity update and position update, which simulates the social
behavior of birds or fish. PSO algorithm has several advantages when applied to DEA problems. First, it
can handle large and complex data sets. Second, it can handle non-linear and non-convex problems, which
are common in DEA. Third, it is a global optimization algorithm, which means that it can search the entire
solution space to find the optimal solution. In summary, the use of Particle Swarm Optimization (PSO) for
solving Data Envelopment Analysis (DEA) problems has shown promising results. PSO algorithm is able
to handle large and complex data sets, non-linear and non-convex problems, and is a global optimization
algorithm.

15.5 Solution by the Firefly Algorithm


The FA algorithm can be used to solve the DEA problem by generating a population of candidate solutions,
which represent the weights of inputs and outputs of each DMU. Each candidate solution is evaluated using
the DEA model to determine its efficiency. The best candidate solutions are then used to generate new
candidate solutions through a process of attraction and movement, which simulates the flashing behavior of
fireflies. The FA algorithm has several advantages when applied to DEA problems. First, it can handle large and
complex data sets. Second, it can handle non-linear and non-convex problems, which are common in DEA.
Third, it is a global optimization algorithm, which means that it can search the entire solution space to find
the optimal solution. In summary, the use of Firefly Algorithm (FA) for solving Data Envelopment Analysis
(DEA) problems has shown promising results. FA algorithm is able to handle large and complex data sets,
non-linear and non-convex problems, and is a global optimization algorithm.

15.6 Solution by the Cuckoo Search


The CS algorithm can be used to solve the DEA problem by generating a population of candidate solutions,
which represent the weights of inputs and outputs of each DMU. Each candidate solution is evaluated using
the DEA model to determine its efficiency. The best candidate solutions are then used to generate new
candidate solutions through a process of random walk and levy flight, which simulates the brood parasitism
behavior of cuckoo birds. The CS algorithm has several advantages when applied to DEA problems. First, it can handle large and complex data sets. Second, it can handle non-linear and non-convex problems, which are
common in DEA. Third, it is a global optimization algorithm, which means that it can search the entire
solution space to find the optimal solution. In summary, the use of Cuckoo Search (CS) for solving Data
Envelopment Analysis (DEA) problems has shown promising results. CS algorithm is able to handle large
and complex data sets, non-linear and non-convex problems, and is a global optimization algorithm.

15.7 Solution by the Bat Algorithm


The BA algorithm can be used to solve the DEA problem by generating a population of candidate solutions,
which represent the weights of inputs and outputs of each DMU. Each candidate solution is evaluated using
the DEA model to determine its efficiency. The best candidate solutions are then used to generate new
candidate solutions through a process of frequency modulation and pulse emission, which simulates the
echolocation behavior of bats. The BA algorithm has several advantages when applied to DEA problems. First, it
can handle large and complex data sets. Second, it can handle non-linear and non-convex problems, which
are common in DEA. Third, it is a global optimization algorithm, which means that it can search the entire
solution space to find the optimal solution. In summary, the use of Bat Algorithm (BA) for solving Data
Envelopment Analysis (DEA) problems has shown promising results. BA algorithm is able to handle large
and complex data sets, non-linear and non-convex problems, and is a global optimization algorithm.

15.8 Solution by the Spider Monkey Optimization (SMO) Algorithm


The SMO algorithm can be used to solve the DEA problem by generating a population of candidate solutions,
which represent the weights of inputs and outputs of each DMU. Each candidate solution is evaluated using
the DEA model to determine its efficiency. The best candidate solutions are then used to generate new
candidate solutions through a process of social learning and personal learning, which simulates the foraging
behavior of spider monkeys. SMO algorithm has several advantages when applied to DEA problems. First,
it can handle large and complex data sets. Second, it can handle non-linear and non-convex problems, which
are common in DEA. Third, it is a global optimization algorithm, which means that it can search the entire
solution space to find the optimal solution. In summary, the use of Spider Monkey Optimization (SMO) for
solving Data Envelopment Analysis (DEA) problems has shown promising results. SMO algorithm is able
to handle large and complex data sets, non-linear and non-convex problems, and is a global optimization
algorithm.

15.9 Solution by the Ant Colony Algorithms


The ACO algorithm can be used to solve the DEA problem by generating a population of candidate solutions,
which represent the weights of inputs and outputs of each DMU. Each candidate solution is evaluated using
the DEA model to determine its efficiency. The best candidate solutions are then used to generate new
candidate solutions through a process of pheromone trail update, which simulates the foraging behavior of
ants. The ACO algorithm has several advantages when applied to DEA problems. First, it can handle large and
complex data sets. Second, it can handle non-linear and non-convex problems, which are common in DEA.
Third, it is a global optimization algorithm, which means that it can search the entire solution space to find
the optimal solution. In summary, the use of Ant Colony Optimization (ACO) for solving Data Envelopment
Analysis (DEA) problems has shown promising results. ACO algorithm is able to handle large and complex
data sets, non-linear and non-convex problems, and is a global optimization algorithm.

15.10 Solution by the Simulated Annealing


The SA algorithm can be used to solve the DEA problem by generating a population of candidate solutions,
which represent the weights of inputs and outputs of each decision-making unit (DMU). Each candidate
solution is evaluated using the DEA model to determine its efficiency. The best candidate solutions are
then used to generate new candidate solutions through a perturbation-and-acceptance process that mimics the annealing process in metallurgy. The SA algorithm has several advantages when applied to DEA problems.
First, it is a global optimization algorithm, which means that it can search the entire solution space to
find the optimal solution. Second, it can handle non-linear and non-convex problems, which are common in
DEA. Third, it is able to escape local optima, which is a common problem in optimization algorithms. In
summary, the use of Simulated Annealing (SA) for solving Data Envelopment Analysis (DEA) problems has
shown promising results. SA algorithm is able to handle non-linear and non-convex problems, is a global
optimization algorithm, and is able to escape local optima.

15.11 Conclusion
Data Envelopment Analysis (DEA) is a widely used method for evaluating the relative efficiency of decision-
making units (DMUs) in various industries. DEA has been applied in various fields such as healthcare,
education, manufacturing, and finance. The goal of DEA is to identify the most efficient DMUs and to
provide recommendations for improving the efficiency of inefficient DMUs. Recently, several metaheuristic
optimization algorithms, such as Artificial Bee Colony (ABC), Differential Evolution (DE), Genetic Algo-
rithms (GA), Particle Swarm Optimization (PSO), Firefly Algorithm, Cuckoo Search, Bat Algorithm, Spider
Monkey Optimization (SMO), Ant Colony Algorithms, and Simulated Annealing (SA), have been proposed
for solving DEA problems. These algorithms have shown promising results in terms of efficiency and accu-
racy. They have also demonstrated the ability to handle non-linear and non-convex problems and to escape
local optima. Overall, the use of metaheuristic optimization algorithms in DEA has improved the accuracy
and efficiency of DEA models and provided new insights into the efficiency of decision-making units. The
choice of the most appropriate algorithm depends on the specific characteristics of the DEA problem at
hand.

16 Portfolio Optimization
The process of designing an investment portfolio that maximises returns while minimising risk is known as portfolio optimisation: the aim is to find the combination of assets that yields the best possible return at each level of risk. Several techniques are available, including mean-variance optimisation, minimum-variance optimisation, and risk parity. Mean-variance optimisation, the most popular method, maximises the portfolio's expected return while reducing its variance. Minimum-variance optimisation, by contrast, aims to reduce the portfolio's volatility while preserving a specified level of expected return. Risk-parity optimisation distributes the assets in a portfolio so that each asset contributes equally to the portfolio's overall risk.
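The mean-variance idea can be written out directly: the sketch below computes a portfolio's expected return and variance and combines them into a single score, with the risk-aversion coefficient an assumed tuning parameter:

```python
def portfolio_stats(weights, mean_returns, cov):
    # Expected return and variance of a portfolio:
    #   mu_p = w . mu,   var_p = w^T Sigma w
    mu = sum(w * m for w, m in zip(weights, mean_returns))
    var = sum(wi * wj * cov[i][j]
              for i, wi in enumerate(weights)
              for j, wj in enumerate(weights))
    return mu, var

def mv_objective(weights, mean_returns, cov, risk_aversion=3.0):
    # Mean-variance score to maximise: expected return minus a
    # risk-aversion penalty on variance (lambda is an assumed parameter).
    mu, var = portfolio_stats(weights, mean_returns, cov)
    return mu - risk_aversion * var
```

Any of the metaheuristics discussed below can use a function like mv_objective as its fitness function, searching over weight vectors that sum to one.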

16.1 Solution by the Artificial Bee Colony (ABC) Algorithms


The Artificial Bee Colony (ABC) algorithm is a swarm-intelligence optimisation method inspired by research on honeybee foraging behaviour, and it has been used to solve numerous optimisation issues, including portfolio optimisation. The ABC algorithm for portfolio optimisation treats each portfolio as a candidate solution whose decision variables are the portfolio's asset allocations. The algorithm then searches the solution space for the ideal asset allocation using a combination of employed, onlooker, and scout bees: the employed bees perturb existing solutions to create new candidates, the onlooker bees evaluate these candidates and choose which solutions to pursue further, and the scout bees probe new regions of the solution space to prevent the search from becoming trapped in local optima. The ABC method has been demonstrated to perform well in portfolio optimisation, especially when the problem is complicated and multidimensional, although, like any optimisation technique, it does not always find the global optimum. It is crucial to carefully examine the outcomes and consider various portfolio optimisation strategies, as with any optimisation technique.
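A single employed-bee move on a portfolio might look like the sketch below (an illustrative assumption, not the report's method); clipping and renormalisation keep the result a valid allocation whose weights sum to one:

```python
import random

def abc_neighbor(weights, peer):
    # Employed-bee move: perturb one asset's allocation relative to a
    # peer solution, clip it into [0, 1], then renormalise so the
    # allocation remains a valid set of portfolio weights.
    k = random.randrange(len(weights))
    phi = random.uniform(-1, 1)
    new = list(weights)
    new[k] = min(1.0, max(0.0, new[k] + phi * (new[k] - peer[k])))
    total = sum(new)
    return [w / total for w in new]
```

Onlooker bees would then evaluate such neighbours with a fitness function (for example, a mean-variance score) and concentrate further moves on the most promising portfolios.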

16.2 Solution by the Differential Evolution
The population-based optimisation technique Differential Evolution (DE) has also been used to solve portfolio
optimisation issues. This algorithm evolves a population of potential solutions (i.e., portfolios) through a
series of rounds. The DE algorithm for portfolio optimisation works by treating each portfolio as a vector
of decision variables that represents how the assets in the portfolio are distributed. The algorithm then
mutates and recombines the current solutions in the population to produce new candidate solutions. These
potential solutions are assessed using a fitness function that considers the expected return and risk of the
portfolio.Portfolio optimisation has been demonstrated to benefit from the DE method, especially when
the issue is complicated and multidimensional. The algorithm can deal with nonlinear and non-convex
optimisation issues and swiftly converge to a close-to-optimal solution. The DE algorithm’s sensitivity to
the parameters’ selection, such as the mutation and crossover rates, is one of its possible drawbacks. However,
there are ways to adjust these settings to boost the algorithm’s effectiveness. The DE algorithm has been
proven to occasionally outperform other well-known optimisation algorithms, making it a viable method
for portfolio optimisation in general. It is crucial to carefully examine the outcomes and consider various
portfolio optimisation strategies, as with any optimisation technique.

16.3 Solution by the Genetic Algorithms


Another population-based optimisation approach used to solve portfolio optimisation issues is genetic al-
gorithms (GAs). After several generations, this algorithm evolves a population of potential solutions (i.e.,
portfolios). The GA algorithm for portfolio optimisation works by treating each portfolio as a binary vector representing asset allocation decision variables. The algorithm then produces new candidate solutions
by using genetic operators like crossover and mutation on the population’s already-existing solutions. These
potential solutions are assessed using a fitness function that considers the expected return and risk of the
portfolio. It has been demonstrated that the GA method performs well in portfolio optimisation, especially
when the issue is complicated and multidimensional. The algorithm can deal with nonlinear and non-convex
optimisation issues and swiftly converge to a close-to-optimal solution. The GA method may have several
drawbacks because it is susceptible to the selected parameters, including population size, crossover rate, and
mutation rate. However, there are ways to adjust these settings to boost the algorithm’s effectiveness. The
GA algorithm has been proven to occasionally outperform other well-known optimisation methods, making
it a viable way for portfolio optimisation in general. It is crucial to carefully examine the outcomes and
consider various portfolio optimisation strategies, as with any optimisation technique.
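Taking the binary-vector representation literally, a small elitist GA can be sketched as follows. The six assets, their returns and variances, the equal-weighting of any selected subset, and the tournament selection are all simplifying assumptions made for this sketch.

```python
import random

# Illustrative data for six hypothetical assets (not real market figures).
RETURNS = [0.06, 0.11, 0.09, 0.14, 0.05, 0.08]
VARIANCES = [0.01, 0.08, 0.04, 0.12, 0.01, 0.03]

def fitness(bits):
    """Equal-weight the selected assets; reward return, penalise risk."""
    chosen = [i for i, b in enumerate(bits) if b]
    if not chosen:
        return float("-inf")
    w = 1.0 / len(chosen)
    ret = sum(RETURNS[i] for i in chosen) * w
    risk = sum(VARIANCES[i] for i in chosen) * w * w
    return ret - 2.0 * risk

def genetic_algorithm(pop_size=30, gens=100, pc=0.8, pm=0.05, seed=7):
    rng = random.Random(seed)
    n = len(RETURNS)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(pop, key=fitness)  # elitist: remember the best ever seen

    def pick():
        # Binary tournament selection
        a, b = rng.sample(range(pop_size), 2)
        return pop[a] if fitness(pop[a]) >= fitness(pop[b]) else pop[b]

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < pc:  # one-point crossover
                cut = rng.randrange(1, n)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                # Bit-flip mutation with probability pm per gene
                nxt.append([1 - g if rng.random() < pm else g for g in child])
        pop = nxt[:pop_size]
        elite = max(pop, key=fitness)
        if fitness(elite) > fitness(best):
            best = elite
    return best

best = genetic_algorithm()
```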

16.4 Solution by the Particle Swarm Optimization


The popular population-based optimisation method, Particle Swarm Optimization (PSO), has been used to
solve portfolio optimisation issues. This algorithm evolves a population of potential solutions (i.e., portfolios)
through a series of rounds based on particle behaviour. The PSO algorithm for portfolio optimisation treats
each portfolio as a vector of decision variables representing the portfolio's distribution of assets. The algorithm then simulates how particles move around the search space, guided by each particle's own best-known position and the best-known position of the swarm; here, the quality of a position corresponds to the fitness of the portfolio it represents. The PSO algorithm creates new candidate solutions by altering
the population’s current solutions based on particle movement, with the movement’s strength depending
on the portfolio’s fitness. Using a fitness function that considers the projected return and risk of the
portfolio, the candidate solutions are assessed. It has been demonstrated that the PSO method works well
for portfolio optimization, especially when the issue is complicated and multidimensional. The algorithm is
capable of dealing with nonlinear and non-convex optimisation issues and can swiftly converge to a close-
to-optimal solution. The PSO algorithm may have the drawback of being sensitive to selecting specific
parameters, such as population size, inertia weight, and acceleration coefficients. However, there are ways
to adjust these settings to boost the algorithm’s effectiveness. In general, the PSO algorithm is a promising
portfolio optimisation method and has occasionally outperformed other well-known optimisation methods.

It is crucial to carefully examine the outcomes and consider various portfolio optimisation strategies, as with
any optimisation technique.
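The velocity and position updates described above look like this in a minimal sketch. The asset data, the inertia and acceleration coefficients, and the re-normalisation of each position onto the weight simplex are illustrative assumptions, not values from this report.

```python
import random

RETURNS = [0.08, 0.12, 0.10, 0.07]     # illustrative expected returns
VARIANCES = [0.02, 0.09, 0.05, 0.01]   # illustrative variances

def normalize(w):
    w = [max(x, 0.0) for x in w]
    s = sum(w) or 1.0
    return [x / s for x in w]

def fitness(w):
    ret = sum(wi * ri for wi, ri in zip(w, RETURNS))
    risk = sum(wi * wi * vi for wi, vi in zip(w, VARIANCES))
    return ret - 3.0 * risk

def pso(n_particles=15, iters=150, inertia=0.7, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    n = len(RETURNS)
    pos = [normalize([rng.random() for _ in range(n)]) for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    gbest = max(pos, key=fitness)[:]            # swarm best position
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(n):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][k] = (inertia * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
            pos[i] = normalize([pos[i][k] + vel[i][k] for k in range(n)])
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pos[i]) > fitness(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
```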

16.5 Solution by the Firefly Algorithm


Another swarm intelligence optimization technique used to solve portfolio optimization issues is the Firefly
Algorithm (FA). Based on the behaviour of fireflies, this method evolves a population of potential solutions
(i.e., portfolios) through a series of iterations. While optimizing a portfolio, the FA algorithm treats each
portfolio as a vector of decision variables that represents how the assets in the portfolio are distributed.
The algorithm then simulates the behaviour of fireflies, which are drawn to others brighter than themselves
(i.e., fitness value). In this instance, the brightness corresponds to the portfolio's fitness. Based on
the attraction between the fireflies, with the attraction’s strength depending on the firefly’s brightness, the
FA algorithm creates new candidate solutions by altering the existing solutions in the population. The
candidate solutions are assessed using a fitness function that considers the portfolio’s projected return and
risk. The FA algorithm has been demonstrated to perform well in portfolio optimization, especially when
the issue is complicated and multidimensional. The algorithm can deal with nonlinear and non-convex
optimization issues and swiftly converge to a close-to-optimal solution. One potential drawback is the FA algorithm's sensitivity to its parameters; for example, the initial attractiveness coefficient and the randomisation factor can significantly impact its performance. However,
there are ways to adjust these settings to boost the algorithm's effectiveness. The FA algorithm has been
proven to occasionally outperform other well-known optimisation algorithms, making it a potential method
for portfolio optimisation in general. It is crucial to carefully examine the outcomes and consider various
portfolio optimisation strategies, as with any optimisation technique.
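The attraction rule can be sketched as follows, with brightness playing the role of portfolio fitness. The exponential decay of attractiveness with squared distance follows the standard FA formulation; the asset data and all parameter values are invented for illustration.

```python
import math
import random

RETURNS = [0.08, 0.12, 0.10, 0.07]     # illustrative expected returns
VARIANCES = [0.02, 0.09, 0.05, 0.01]   # illustrative variances

def normalize(w):
    w = [max(x, 0.0) for x in w]
    s = sum(w) or 1.0
    return [x / s for x in w]

def brightness(w):
    """A firefly's brightness is its portfolio's fitness."""
    ret = sum(wi * ri for wi, ri in zip(w, RETURNS))
    risk = sum(wi * wi * vi for wi, vi in zip(w, VARIANCES))
    return ret - 3.0 * risk

def firefly(n_flies=12, iters=100, beta0=1.0, gamma=1.0, alpha=0.05, seed=5):
    rng = random.Random(seed)
    n = len(RETURNS)
    flies = [normalize([rng.random() for _ in range(n)]) for _ in range(n_flies)]
    for _ in range(iters):
        for i in range(n_flies):
            for j in range(n_flies):
                if brightness(flies[j]) > brightness(flies[i]):
                    # Attractiveness decays with squared distance between flies
                    r2 = sum((flies[i][k] - flies[j][k]) ** 2 for k in range(n))
                    beta = beta0 * math.exp(-gamma * r2)
                    # Move the dimmer fly toward the brighter one, plus noise
                    flies[i] = normalize(
                        [flies[i][k]
                         + beta * (flies[j][k] - flies[i][k])
                         + alpha * (rng.random() - 0.5)
                         for k in range(n)])
    return max(flies, key=brightness)

best = firefly()
```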

16.6 Solution by the Cuckoo Search


A population-based optimisation technique called Cuckoo Search (CS) has been used to solve portfolio
optimisation issues. Based on the behaviour of cuckoo birds, this algorithm evolves a population of potential
solutions (i.e., portfolios) through a series of iterations. When optimising a portfolio, the CS algorithm
treats each portfolio as a vector of decision variables that represents how the assets in the portfolio are
distributed. Next, the algorithm mimics the actions of cuckoo birds, who occasionally swap the eggs in other
birds’ nests with their own. In this instance, the nests stand in for the population, while the eggs represent
prospective solutions (i.e., portfolios). Based on the behaviour of the cuckoo birds, the CS algorithm modifies
the population’s existing solutions to produce new candidate solutions. The nests (i.e., keys) are updated
based on the quality of the eggs, and the eggs (i.e., prospective answers) are generated using a random walk
in the search space. Using a fitness function that considers the projected return and risk of the portfolio, the
candidate solutions are assessed. It has been demonstrated that the CS algorithm performs well in portfolio
optimization, especially when the problem is intricate and multidimensional. The algorithm is capable of
dealing with nonlinear and non-convex optimisation issues and can swiftly converge to a close-to-optimal
solution. The CS algorithm may have the drawback of being sensitive to selecting specific parameters, such
as step size and population size. However, there are ways to adjust these settings to boost the algorithm’s
effectiveness. The CS algorithm has been proven to occasionally outperform other well-known optimisation
algorithms, making it a viable method for portfolio optimisation in general. It is crucial to carefully examine
the outcomes and consider various portfolio optimisation strategies, as with any optimisation technique.
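A minimal sketch of the egg-laying and nest-abandonment steps is given below. Note one deliberate simplification: canonical Cuckoo Search generates new eggs with Lévy flights, while this sketch substitutes a plain Gaussian step; the asset data and parameter values are invented.

```python
import random

RETURNS = [0.08, 0.12, 0.10, 0.07]     # illustrative expected returns
VARIANCES = [0.02, 0.09, 0.05, 0.01]   # illustrative variances

def normalize(w):
    w = [max(x, 0.0) for x in w]
    s = sum(w) or 1.0
    return [x / s for x in w]

def fitness(w):
    ret = sum(wi * ri for wi, ri in zip(w, RETURNS))
    risk = sum(wi * wi * vi for wi, vi in zip(w, VARIANCES))
    return ret - 3.0 * risk

def cuckoo_search(n_nests=15, iters=150, pa=0.25, step=0.1, seed=9):
    rng = random.Random(seed)
    n = len(RETURNS)
    nests = [normalize([rng.random() for _ in range(n)]) for _ in range(n_nests)]
    for _ in range(iters):
        # A cuckoo lays a new egg: a random walk from a randomly chosen nest
        # (a Gaussian step here, standing in for a Levy flight).
        i = rng.randrange(n_nests)
        egg = normalize([x + step * rng.gauss(0.0, 1.0) for x in nests[i]])
        # The egg replaces a randomly chosen nest only if it is of higher quality.
        j = rng.randrange(n_nests)
        if fitness(egg) > fitness(nests[j]):
            nests[j] = egg
        # A fraction pa of the worst nests is abandoned and rebuilt at random.
        nests.sort(key=fitness)
        for k in range(int(pa * n_nests)):
            nests[k] = normalize([rng.random() for _ in range(n)])
    return max(nests, key=fitness)

best = cuckoo_search()
```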

16.7 Solution by the Bat Algorithm


The Bat Algorithm (BA) is a population-based optimisation technique that draws inspiration from bats' use of echolocation. It has also been used to solve portfolio optimisation issues. When optimising a portfolio, the BA
algorithm treats each portfolio as a vector of decision variables that represents how the assets in the portfolio
are distributed. The algorithm mimics bat behaviour: bats use echolocation to locate prey and navigate, and in this instance the prey corresponds to the optimal portfolio. Based on bat behaviour, the BA algorithm modifies the population's
existing solutions to provide new candidate solutions. As the bats fly through the search space, each bat adjusts its position according to its own best-known position and that of the swarm. Using a
fitness function that evaluates the projected return and risk of the portfolio, the candidate solutions are
assessed. The BA algorithm has been demonstrated to perform well in portfolio optimization, especially
when the issue is complicated and multidimensional. The algorithm is capable of dealing with nonlinear and
non-convex optimisation issues and can swiftly converge to a close-to-optimal solution. The BA method may
have certain drawbacks because it is susceptible to the selected parameters, such as the bats' loudness and pulse rate. There are ways to adjust these settings, though, to boost the algorithm's effectiveness. The BA
algorithm has been proven to occasionally outperform other well-known optimisation algorithms, making it
a viable method for portfolio optimisation in general. It is crucial to carefully examine the outcomes and
consider various portfolio optimisation strategies, as with any optimisation technique.
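The frequency, velocity, and loudness-gated acceptance steps can be sketched as follows. For brevity the loudness and pulse rate are held fixed, whereas the canonical algorithm decreases loudness and increases pulse rate over time; all numbers are illustrative.

```python
import random

RETURNS = [0.08, 0.12, 0.10, 0.07]     # illustrative expected returns
VARIANCES = [0.02, 0.09, 0.05, 0.01]   # illustrative variances

def normalize(w):
    w = [max(x, 0.0) for x in w]
    s = sum(w) or 1.0
    return [x / s for x in w]

def fitness(w):
    ret = sum(wi * ri for wi, ri in zip(w, RETURNS))
    risk = sum(wi * wi * vi for wi, vi in zip(w, VARIANCES))
    return ret - 3.0 * risk

def bat_algorithm(n_bats=15, iters=150, fmin=0.0, fmax=1.0,
                  loudness=0.8, pulse=0.3, seed=6):
    rng = random.Random(seed)
    n = len(RETURNS)
    pos = [normalize([rng.random() for _ in range(n)]) for _ in range(n_bats)]
    vel = [[0.0] * n for _ in range(n_bats)]
    best = max(pos, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()  # echolocation frequency
            for k in range(n):
                vel[i][k] += (pos[i][k] - best[k]) * f
            cand = normalize([pos[i][k] + vel[i][k] for k in range(n)])
            if rng.random() > pulse:
                # Local random walk around the best-known solution
                cand = normalize([best[k] + 0.05 * rng.gauss(0.0, 1.0)
                                  for k in range(n)])
            # Conditional acceptance, gated by the (fixed) loudness
            if fitness(cand) > fitness(pos[i]) and rng.random() < loudness:
                pos[i] = cand
                if fitness(cand) > fitness(best):
                    best = cand[:]
    return best

best = bat_algorithm()
```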

16.8 Solution by the Spider Monkey Optimization (SMO) Algorithm


An optimisation approach called Spider Monkey Optimization (SMO) has been used to solve portfolio op-
timisation issues. The social interactions and foraging skills of spider monkeys served as the basis for this
programme. The SMO algorithm for portfolio optimisation treats each portfolio as a vector of decision vari-
ables representing the portfolio's distribution of assets. The algorithm mimics spider monkey behaviour, in which the monkeys live in groups and communicate to locate food; here, the food source corresponds to the optimal portfolio. Based on the behaviour of spider monkeys, the SMO algorithm modifies the population's
existing solutions to produce new candidate solutions. The spider monkeys move around the search space and communicate with one another to share information about promising solutions. Using a fitness function that considers the
projected return and risk of the portfolio, the candidate solutions are assessed. The SMO algorithm has
been demonstrated to perform well in portfolio optimisation, especially when the issue is intricate and mul-
tidimensional. The algorithm is capable of dealing with nonlinear and non-convex optimisation issues and
can swiftly converge to a close-to-optimal solution. The SMO algorithm’s sensitivity to the selection of its
parameters, such as the communication radius and the population density of spider monkeys, is one potential
drawback. However, there are ways to adjust these settings to boost the algorithm's effectiveness. SMO has
been proven to occasionally outperform other well-known optimisation algorithms, making it a potential
method for portfolio optimisation in general. It is crucial to carefully examine the outcomes and consider
various portfolio optimisation strategies, as with any optimisation technique.
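A heavily simplified sketch of the local-leader and global-leader phases is shown below. It omits the fission-fusion regrouping and the leader-limit counters of the full algorithm, and all data and parameter values are invented for illustration.

```python
import random

RETURNS = [0.08, 0.12, 0.10, 0.07]     # illustrative expected returns
VARIANCES = [0.02, 0.09, 0.05, 0.01]   # illustrative variances

def normalize(w):
    w = [max(x, 0.0) for x in w]
    s = sum(w) or 1.0
    return [x / s for x in w]

def fitness(w):
    ret = sum(wi * ri for wi, ri in zip(w, RETURNS))
    risk = sum(wi * wi * vi for wi, vi in zip(w, VARIANCES))
    return ret - 3.0 * risk

def smo(n_monkeys=12, n_groups=3, iters=120, seed=4):
    rng = random.Random(seed)
    n = len(RETURNS)
    size = n_monkeys // n_groups
    pos = [normalize([rng.random() for _ in range(n)]) for _ in range(n_monkeys)]
    for _ in range(iters):
        gbest = max(pos, key=fitness)                # global leader
        for g in range(n_groups):
            members = range(g * size, (g + 1) * size)
            lbest = max((pos[i] for i in members), key=fitness)  # local leader
            for i in members:
                # Local-leader phase: drift toward the group's best position
                cand = normalize(
                    [pos[i][k]
                     + rng.random() * (lbest[k] - pos[i][k])
                     + 0.1 * (rng.random() - 0.5)
                     for k in range(n)])
                if fitness(cand) > fitness(pos[i]):
                    pos[i] = cand
                # Global-leader phase: drift toward the swarm's best position
                cand = normalize(
                    [pos[i][k] + rng.random() * (gbest[k] - pos[i][k])
                     for k in range(n)])
                if fitness(cand) > fitness(pos[i]):
                    pos[i] = cand
    return max(pos, key=fitness)

best = smo()
```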

16.9 Solution by the Ant Colony Algorithms


A meta-heuristic algorithm called Ant Colony Optimization (ACO) has been used to solve portfolio optimi-
sation issues. It draws inspiration from ant behaviour, specifically how they may choose the quickest route
between their nest and a food supply. When doing portfolio optimisation, the ACO algorithm treats each
portfolio as a vector of decision variables that represents how the assets in the portfolio are distributed. The
programme imitates ant behaviour: ants deposit pheromone trails as they forage for food, and in this situation the strongest trails lead toward the ideal portfolio. Based on ant behaviour, the ACO
algorithm modifies the population’s existing solutions to produce new candidate solutions. The ants search
the area and follow pheromone trails to find potential solutions. Using a fitness function that considers the
projected return and risk of the portfolio, the candidate solutions are assessed. The ACO algorithm has
been demonstrated to perform well in portfolio optimisation, especially when the issue is complicated and
multidimensional. The algorithm can deal with nonlinear and non-convex optimisation issues and swiftly
converge to a close-to-optimal solution. The ACO algorithm may have the drawback of being sensitive to the
selection of its parameters, such as the rate of pheromone evaporation and the amount of pheromone deposited by the ants. However, there are ways to adjust these settings to boost the algorithm's effectiveness.
The ACO algorithm has occasionally outperformed other well-known optimisation algorithms, making it a
potential method for portfolio optimisation in general. It is crucial to carefully examine the outcomes and
consider various portfolio optimisation strategies, as with any optimisation technique.
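ACO is normally defined on a construction graph; the sketch below adapts it to a discrete asset-selection variant, with one pheromone value per asset controlling its inclusion probability. This encoding, the toy data, and the deposit rule are all assumptions made for illustration.

```python
import random

# Illustrative data for six hypothetical assets (not real market figures).
RETURNS = [0.06, 0.11, 0.09, 0.14, 0.05, 0.08]
VARIANCES = [0.01, 0.08, 0.04, 0.12, 0.01, 0.03]

def fitness(bits):
    """Equal-weight the selected assets; reward return, penalise risk."""
    chosen = [i for i, b in enumerate(bits) if b]
    if not chosen:
        return float("-inf")
    w = 1.0 / len(chosen)
    ret = sum(RETURNS[i] for i in chosen) * w
    risk = sum(VARIANCES[i] for i in chosen) * w * w
    return ret - 2.0 * risk

def aco(n_ants=10, iters=100, rho=0.1, seed=13):
    rng = random.Random(seed)
    n = len(RETURNS)
    tau = [1.0] * n              # pheromone on "include asset i"
    best = [1] * n               # start from the all-assets portfolio
    for _ in range(iters):
        for _ in range(n_ants):
            # Each ant includes asset i with a pheromone-driven probability
            bits = [1 if rng.random() < tau[i] / (tau[i] + 1.0) else 0
                    for i in range(n)]
            if fitness(bits) > fitness(best):
                best = bits
        # Evaporation, then deposit on the assets of the best-so-far portfolio
        tau = [(1.0 - rho) * t for t in tau]
        for i in range(n):
            if best[i]:
                tau[i] += rho
    return best

best = aco()
```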

16.10 Solution by the Simulated Annealing
The metaheuristic optimisation technique Simulated Annealing (SA) has been applied to portfolio optimisation
issues. It is modelled after the metallurgical annealing process, in which a metal is heated and then gradually
cooled to increase its strength and minimise its flaws. The SA algorithm for portfolio optimisation functions
by treating each portfolio as a vector of decision variables representing the portfolio's distribution of assets. The
method starts with a preliminary solution and iteratively changes it by switching out the asset allocations.
The algorithm uses a fitness function that considers the portfolio’s predicted return and risk to evaluate each
proposed solution. The temperature parameter, which regulates the likelihood of accepting a worse solution
as the algorithm advances, is the main component of the SA algorithm. The algorithm’s temperature is
set high at the start to investigate many potential solutions. The temperature gradually drops as the
algorithm continues, lowering the likelihood of an inferior answer being accepted. It has been demonstrated
that the SA algorithm performs well in portfolio optimisation, especially when the issue is complicated and
multidimensional. The algorithm can deal with nonlinear and non-convex optimisation issues and swiftly
converge to a close-to-optimal solution. One potential drawback is the SA algorithm’s sensitivity to the
selection of its parameters, such as the starting temperature and the cooling schedule. However, there are
ways to adjust these settings to boost the algorithm’s effectiveness. The SA algorithm has been demonstrated
to occasionally outperform other well-known optimisation algorithms, making it a viable method for portfolio
optimisation in general. It is crucial to carefully examine the outcomes and consider various portfolio
optimisation strategies, as with any optimisation technique.
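The temperature-controlled acceptance rule translates directly into code. The Metropolis criterion and geometric cooling below are standard; the asset data, step size, and schedule constants are invented for this sketch.

```python
import math
import random

RETURNS = [0.08, 0.12, 0.10, 0.07]     # illustrative expected returns
VARIANCES = [0.02, 0.09, 0.05, 0.01]   # illustrative variances

def normalize(w):
    w = [max(x, 0.0) for x in w]
    s = sum(w) or 1.0
    return [x / s for x in w]

def fitness(w):
    ret = sum(wi * ri for wi, ri in zip(w, RETURNS))
    risk = sum(wi * wi * vi for wi, vi in zip(w, VARIANCES))
    return ret - 3.0 * risk

def simulated_annealing(t0=1.0, cooling=0.999, steps=2000, seed=11):
    rng = random.Random(seed)
    n = len(RETURNS)
    cur = normalize([rng.random() for _ in range(n)])
    best = cur[:]
    temp = t0
    for _ in range(steps):
        # Neighbour: perturb the current weights and repair onto the simplex
        cand = normalize([x + 0.05 * rng.gauss(0.0, 1.0) for x in cur])
        delta = fitness(cand) - fitness(cur)
        # Metropolis rule: always accept improvements; accept worse moves
        # with probability exp(delta / temp), which shrinks as temp cools
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            cur = cand
            if fitness(cur) > fitness(best):
                best = cur[:]
        temp *= cooling  # geometric cooling schedule
    return best

best = simulated_annealing()
```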

16.11 Conclusion
Building an investment portfolio that maximises profits while minimising risk is known as portfolio optimisa-
tion. As it aids in balancing risk and return objectives, it is a crucial duty for investors, fund managers, and
financial institutions. Many different optimisation methods exist, including Mean-Variance Optimization,
Minimum Variance Optimization, Risk Parity, Genetic Algorithms, Firefly algorithms, and many others.
Choosing an appropriate technique depends on the problem’s complexity, size, and aim. Each method has
advantages and limits. While newer algorithms like Differential Evolution, Bat Algorithm, Cuckoo Search,
Spider Monkey Optimization, and Simulated Annealing are becoming more popular due to their capacity
to handle complex, multi-dimensional, and non-convex problems, traditional optimisation techniques like
Mean-Variance Optimization continue to be widely used. Finally, portfolio optimisation is an important
task that calls for careful consideration of several variables, such as asset allocation, diversification, and risk
management. Investors and financial institutions should keep researching and comparing various strategies
to discover the best one because choosing the proper optimisation methodology is essential for obtaining
optimal portfolio performance.

17 Facility Layout Design


To make the most use of the available space, reduce material handling, and increase productivity, facility
layout design involves choosing the most effective configuration of physical resources, such as people, equip-
ment, and materials. The aim of facility layout design is to create a layout that maximises throughput while
minimising expenses and waste.
When planning a facility layout, several things should be taken into account, such as:
1. Space availability: The design of the layout will be significantly influenced by the amount of space available. The layout should be optimised to utilise the limited area as effectively as possible.
2. Equipment and machinery: The layout should consider the kind and scale of the machinery and
equipment needed to run the facility.
3. Material flow: The plan should be created to limit the amount of material handling, the time and
effort needed to transport materials from one area to another, and to guarantee that the flow of materials is
adequate.

4. Safety: The layout should be planned with personnel safety in mind, ensuring enough room for people
to move around safely and that machinery and equipment are positioned in areas without worker risk.
5. Accessibility: The facility’s plan should be thought out such that all areas are easily accessible and
that people, tools, and materials may move about freely without being constrained.
Designing a facility’s layout can be complicated. Still, several techniques and technologies can help, such
as computer-aided design (CAD) software, simulation modelling, and flowcharting. The ultimate objective
is to design a facility plan that meets the organisation’s and its clients’ needs while being functional, safe,
and economical.
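Whatever optimiser is chosen, the objective in the sections that follow must first be quantified. A common proxy is total material-handling cost, the sum of inter-department flow multiplied by travel distance; the departments, flow figures, and coordinates below are invented for illustration.

```python
# Toy material-handling cost: flow volume times rectilinear distance
# between department centroids. All numbers are made up.
FLOWS = {("A", "B"): 10, ("A", "C"): 4, ("B", "C"): 7}   # trips per day
POS = {"A": (0, 0), "B": (10, 0), "C": (10, 8)}          # centroids in metres

def handling_cost(pos):
    """Sum of (trips x rectilinear distance) over all department pairs."""
    total = 0.0
    for (d1, d2), trips in FLOWS.items():
        (x1, y1), (x2, y2) = pos[d1], pos[d2]
        dist = abs(x1 - x2) + abs(y1 - y2)   # rectilinear travel distance
        total += trips * dist
    return total

total_cost = handling_cost(POS)  # 10*10 + 4*18 + 7*8 = 228.0
```

Any of the algorithms in this section can then be driven by this function: a candidate layout is simply a choice of `POS` and its fitness is the negated handling cost.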

17.1 Solution by the Artificial Bee Colony (ABC) Algorithms


A metaheuristic optimisation system called the Artificial Bee Colony (ABC) was developed after studying
honey bee behaviour. It has been applied to solve various optimisation problems, including facility layout
design. By replicating honey bee foraging behaviour, the ABC algorithm operates. The algorithm begins with
a population of potential answers, symbolised as bees. Every bee represents a possible facility arrangement.
The algorithm then employs three different sorts of bees—employed bees, onlooker bees, and scout bees—to iteratively refine the population. Employed bees exploit the current solutions by making small adjustments to them: each picks a nearby candidate, evaluates it, and keeps the better of the two. Onlooker bees choose solutions to work on based on the quality reported by the employed bees, so higher-quality solutions are more likely to be selected and further refined. Scout
bees investigate new areas of the solution space. They produce fresh solutions at random and assess their
effectiveness. A new solution is added to the population if it outperforms any of the existing ones. The
ABC algorithm can be applied to facility layout design to improve a facility’s layout by lowering material
handling expenses, limiting the time and effort needed to move items, and increasing throughput. The
algorithm can consider several restrictions, including the amount of space available, the kind and size of
machinery and equipment, and safety issues. Overall, the ABC method has been demonstrated to perform
better than other optimisation algorithms in various situations, making it a useful tool for designing facility
layouts. However, the algorithm’s success will be determined by the issue being resolved and the selection
of algorithmic parameters.
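The three bee roles can be sketched on a toy quadratic-assignment version of the layout problem: a permutation assigns departments to locations, and cost is flow times distance. The flow and distance matrices are invented, and a simple tournament stands in for the fitness-proportional choice onlooker bees normally use.

```python
import random

# Toy layout instance (all numbers invented): FLOW[i][j] is the material flow
# between departments i and j; DIST[a][b] is the distance between locations.
FLOW = [[0, 5, 2, 4],
        [5, 0, 3, 0],
        [2, 3, 0, 6],
        [4, 0, 6, 0]]
DIST = [[0, 1, 2, 3],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [3, 2, 1, 0]]
N = 4

def cost(perm):
    """Material-handling cost of assigning department i to location perm[i]."""
    return sum(FLOW[i][j] * DIST[perm[i]][perm[j]]
               for i in range(N) for j in range(N))

def neighbour(perm, rng):
    """A nearby layout: swap the locations of two departments."""
    a, b = rng.sample(range(N), 2)
    p = perm[:]
    p[a], p[b] = p[b], p[a]
    return p

def abc_layout(n_food=8, iters=200, limit=20, seed=2):
    rng = random.Random(seed)
    foods = [rng.sample(range(N), N) for _ in range(n_food)]
    trials = [0] * n_food
    best = min(foods, key=cost)
    for _ in range(iters):
        # Employed bees: try one neighbouring layout per food source
        for i in range(n_food):
            cand = neighbour(foods[i], rng)
            if cost(cand) < cost(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Onlooker bees: bias extra search toward lower-cost sources
        # (a tournament stands in for fitness-proportional selection)
        for _ in range(n_food):
            a, b = rng.sample(range(n_food), 2)
            i = a if cost(foods[a]) <= cost(foods[b]) else b
            cand = neighbour(foods[i], rng)
            if cost(cand) < cost(foods[i]):
                foods[i], trials[i] = cand, 0
        # Scout bees: abandon sources that have stopped improving
        for i in range(n_food):
            if trials[i] > limit:
                foods[i], trials[i] = rng.sample(range(N), N), 0
        cur = min(foods, key=cost)
        if cost(cur) < cost(best):
            best = cur
    return best

best_layout = abc_layout()
```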

17.2 Solution by the Differential Evolution


Facility layout design entails organizing various departments, workstations, and equipment to maximize pro-
duction and efficiency. It is a challenging optimization problem. Differential Evolution (DE), a population-
based optimization technique that has been successfully used to solve a wide variety of situations, is one
strategy for resolving this issue. Maintaining a population of potential solutions, each represented as a vector
of decision variables, is the fundamental tenet of DE. The population is subjected to the algorithm's iteratively
applied procedures: mutation, crossover, and selection. Mutation produces an alternative solution, similar to but distinct from the original, by randomly perturbing one or more decision variables of a candidate solution.
Crossover entails mixing two potential solutions to create a new one that incorporates some characteristics
from both parents. The selection process entails comparing the fitness of the new solution to the parent
solution and keeping the fitter of the two. We must establish the decision variables that describe the layout
before applying DE to the design of facility layouts. These can consist of the location and size of each depart-
ment, the location of each workstation, and the arrangement of the facility’s equipment. The fitness function
would serve as a gauge for the layout’s effectiveness and productivity, taking into account things like the
space between departments, the movement of personnel and goods, and the ease of access to equipment. DE can be applied in a variety of ways, depending on the particular conditions of the issue. One typical strategy
is utilising a population of potential solutions that changes over several generations or iterations. With each
iteration, the existing population is subjected to mutation, crossover, and selection operations to create new
individuals with possibly superior solutions. The algorithm ends when a workable solution is discovered, or a preset stopping criterion—like a set number of iterations or a minimum improvement in the fitness function—is satisfied. Differential Evolution can be utilized as a sophisticated optimisation algorithm to resolve
challenging facility layout design issues. We may use DE to create effective, efficient, and customized layouts
to the facility’s unique needs by specifying suitable decision variables and fitness functions.

17.3 Solution by the Genetic Algorithms


Natural selection is the basis for the powerful class of optimisation algorithms known as genetic algorithms
(GA). Facility layout design, which entails maximising the positioning of offices, workstations, and equipment
inside a facility, is one issue GA has successfully applied. Maintaining a population of candidate solutions,
each represented as a chromosome or a string of decision variables, is the fundamental tenet of GA. Three
basic operations—selection, crossover, and mutation—are applied to the population iteratively as the algo-
rithm advances. Every chromosome in the population is compared for fitness, and the fittest are chosen to
serve as parents for the following generation. Crossover refers to joining two-parent chromosomes’ genetic
material to produce a new offspring that includes some traits from both parents. A chromosome’s decision
variables can be arbitrarily perturbed through the mutation process to introduce new genetic material that
could produce better results. To apply GA to facility layout design, we must specify the decision variables that characterise the layout: for example, each department's location and size, the location of
each workstation, and the setup of the facility’s equipment. The fitness function would gauge the layout’s
effectiveness and productivity, considering things like the space between departments, the movement of per-
sonnel and goods, and the ease of access to equipment. Depending on the particular needs of the situation,
GA can be applied in various ways. One typical strategy is utilising a population of potential solutions that
changes over several generations or iterations. With each iteration, the existing population is subjected to
selection, crossover, and mutation procedures to create new individuals with possibly superior solutions. The
algorithm stops when a suitable outcome or a predefined stopping requirement, such as a maximum number
of iterations or a minimum improvement in the fitness function, is satisfied. As a powerful optimisation tool,
genetic algorithms can be utilised to resolve challenging facility layout design issues. We can use GA to
create effective, efficient, and customised layouts to the facility’s unique needs by specifying suitable decision
variables and fitness functions.

17.4 Solution by the Particle Swarm Optimization


The social behaviour of fish schools and bird flocks served as the basis for the population-based optimisation
technique known as particle swarm optimisation (PSO). PSO has been used to solve various issues, including
facility layout design, which entails strategically placing offices, workstations, and other equipment inside
a facility. Maintaining a population of particles, each representing a potential solution to the issue, is the
fundamental tenet of PSO. Each particle’s position denotes a possible solution, and each particle’s velocity
indicates the magnitude and direction of its journey in the search space. The algorithm advances by iter-
atively updating each particle’s position and velocity depending on both its own and its neighbours’ best
solutions so far. Before applying PSO to facility layout design, we must identify the decision variables that specify the layout: for example, each department's location and size, the location of each
workstation, and the setup of the facility’s equipment. The fitness function would gauge the layout’s effec-
tiveness and productivity, considering things like the space between departments, the movement of personnel
and goods, and the ease of access to equipment. Depending on the unique requirements of the situation,
PSO can be implemented in various methods. One typical strategy is using a swarm of particles that evolves
over several iterations or generations. Every iteration entails updating each particle’s position and speed,
creating a new multitude of potentially superior solutions. The algorithm ends when a workable solution is
discovered, or a preset stopping criterion—like a set number of iterations or a minimum improvement in the
fitness function—is satisfied. Particle Swarm Optimization can be utilised as a robust and powerful optimi-
sation algorithm to resolve challenging facility layout design issues. We can use PSO to create layouts that
are effective, productive, and suited to the facility’s particular requirements by specifying suitable decision
variables and fitness functions.

17.5 Solution by the Firefly Algorithm
A population-based optimisation technique called the Firefly Algorithm (FA) was developed in response to
the flashing behaviour of fireflies. Facility layout design, which entails maximising the arrangement of offices,
workstations, and equipment within a facility, is one issue to which FA has been successfully applied. The
fundamental idea behind FA is to use firefly attraction and their proximity to each other to replicate the
flashing behaviour of fireflies. The brilliance of the firefly’s flash, a function of its fitness or objective worth,
indicates attractiveness. The Euclidean distance between fireflies in the search space is used to measure
distance. The programme works by iteratively adjusting each firefly’s position based on how appealing it
is and how attractive its neighbours are. To apply FA to facility layout design, we must specify the decision variables characterising the layout: for example, each department's location and size, the
location of each workstation, and the setup of the facility’s equipment. The fitness function would gauge the
layout’s effectiveness and productivity, considering things like the space between departments, the movement
of personnel and goods, and the ease of access to equipment. Depending on the unique requirements of the
situation, FA can be applied in various ways. Using a swarm of fireflies that develops over several iterations
or generations is one typical strategy. Each iteration includes changing each firefly’s position, which results
in a fresh multitude of potential improvements. The algorithm ends when a workable solution is discovered
or a preset stopping criterion—like a set number of iterations or a minimum improvement in the fitness
function—is satisfied. As a sophisticated optimisation technique, the Firefly Algorithm can be utilised to
resolve challenging facility layout design issues. By specifying proper decision variables and fitness functions,
we may employ FA to develop layouts that are efficient, productive, and suited to the specific demands of
the facility.

17.6 Solution by the Cuckoo Search


A population-based optimisation algorithm called Cuckoo Search (CS) was inspired by the brood-parasitic behaviour of cuckoo birds. Facility layout design, which entails optimising the placement of departments, workstations, and equipment within a facility, is one of the many challenges to which CS has been successfully applied.
The fundamental tenet of CS is to mimic how cuckoo birds behave when they deposit eggs and deceive other
birds into incubating them. Each cuckoo in the optimisation algorithm represents a potential solution, and
each egg represents a prospective candidate. The algorithm iteratively generates new solutions using random
walks in the search space, then replaces existing solutions with better ones based on a fitness function. To apply CS to facility layout design, we must specify the decision variables that characterise the layout: for example, each department's location and size, the location of each workstation,
and the setup of the facility’s equipment. The fitness function would gauge the layout’s effectiveness and
productivity, considering things like the space between departments, the movement of personnel and goods,
and the ease of access to equipment. CS can be applied in various ways, depending on the particular
conditions of the issue. One typical strategy is using a population of cuckoos that has evolved over several
iterations or generations. Random walks in the search space produce new solutions during each iteration,
creating a fresh population of potentially better solutions. The algorithm ends when a workable solution is
discovered or a preset stopping criterion—like a set number of iterations or a minimum improvement in the
fitness function—is satisfied. As a powerful optimisation technique, Cuckoo Search can be utilised to resolve
challenging facility layout design issues. We can use CS to create effective, efficient, and customised layouts
to the facility’s unique needs by specifying suitable decision variables and fitness functions.

17.7 Solution by the Bat Algorithm


The Bat Algorithm (BA) is a population-based optimisation technique that draws inspiration from bats' use of echolocation. Facility layout design, which entails optimising the location of offices, workstations, and equipment inside a facility, is one issue to which BA has been successfully applied. The fundamental idea behind BA is to simulate
how bats might behave when using echolocation to find prey. Each bat acts as a potential solution to the
problem in the optimisation method, and the quality of the solution is determined by the frequency and loudness of the bat's echolocation pulse. Iteratively changing each bat's position and frequency as the algorithm
moves forward produces a fresh set of potentially better solutions. To apply BA to facility layout design, we must establish the decision variables that describe the layout: for example, each department's
location and size, the location of each workstation, and the setup of the facility’s equipment. The fitness
function would gauge the layout’s effectiveness and productivity, considering things like the space between
departments, the movement of personnel and goods, and the ease of access to equipment. Depending on
the unique requirements of the problem, BA can be applied in various ways. One typical strategy is using a
population of bats that has evolved over several iterations or generations. The position and frequency of each
bat are updated throughout each loop, creating a fresh population of potentially more effective solutions.
When a workable solution is discovered or a preset stopping criterion—like a set number of iterations or a
minimum improvement in the fitness function—is satisfied, the algorithm ends. The Bat Algorithm can be
utilised as a sophisticated optimisation technique to resolve challenging facility layout design issues. We can
use BA to create layouts that are effective, productive, and suited to the facility’s particular requirements
by establishing suitable decision variables and fitness functions.

17.8 Solution by the Spider Monkey Optimization (SMO) Algorithm


A metaheuristic optimisation algorithm called Spider Monkey Optimization (SMO) was developed after
studying spider monkey behaviour. Facility layout planning, which involves optimising the location of
departments, workstations, and equipment within a facility, is one optimisation problem for which SMO
has been effectively used. The main goal of SMO is to simulate how spider monkeys behave when looking for
food and avoiding predators. Each spider monkey in the optimisation algorithm represents a candidate
solution to the problem, and the food source represents the best solution found so far. A fresh set of
potentially better solutions is produced by iteratively updating each spider monkey’s position as the
algorithm moves forward. Before using SMO for facility layout design, we must identify the decision variables
that specify the layout. Examples include each department’s location and size, the location of each
workstation, and the setup of the
facility’s equipment. The fitness function would gauge the layout’s effectiveness and productivity, considering
things like the space between departments, the movement of personnel and goods, and the ease of access
to equipment. Depending on the unique requirements of the situation, SMO can be applied in various
ways. Using a population of spider monkeys that has evolved over several iterations or generations is one
typical strategy. Each iteration entails changing each spider monkey’s position, creating a fresh population of
potentially better answers. The algorithm ends when a workable solution is discovered, or a preset stopping
criterion—like a set number of iterations or a minimum improvement in the fitness function—is satisfied.
Spider Monkey Optimization can be utilised as a robust optimisation algorithm to resolve challenging facility
layout design issues. We can use SMO to create layouts that are effective, productive, and suited to the
facility’s particular requirements by specifying suitable decision variables and fitness functions.

17.9 Solution by the Ant Colony Algorithms


ACO is a metaheuristic optimisation technique that draws inspiration from how ant colonies behave. Facility
layout planning, which involves optimising the location of departments, workstations, and equipment within
a facility, is one optimisation problem to which ACO has been effectively applied. ACO primarily aims to
simulate how ants behave when foraging for food and leaving pheromone trails. Each ant in the optimisation
algorithm symbolises a potential solution to the issue, and the pheromone trails represent the quality of
the solutions found. The algorithm proceeds by repeatedly modifying the pheromone trails based on the
quality of the solutions discovered by the ants, producing a fresh set of possibly superior solutions. Before using ACO
for facility layout design, we must identify the decision variables that characterise the design. Examples
include each department’s location and size, the location of each workstation, and the setup of the
facility’s equipment. The fitness function would gauge the layout’s effectiveness and productivity, considering
things like the space between departments, the movement of personnel and goods, and the ease of access to
equipment. Depending on the unique requirements of the situation, ACO can be implemented in various
ways. One typical method is using a population of ants to build the layout iteratively. The ants wander
around the facility throughout each iteration and create solutions using the pheromone trails and heuristic
data. The method ends when a good solution is discovered or a specified stopping criterion is satisfied, such
as a maximum number of iterations or a minimum improvement in the fitness function. The pheromone
trails are updated based on the quality of the solutions discovered by the ants. In Conclusion, Ant Colony
Optimization is a potent optimisation approach that can be utilised to address challenging facility layout
design issues. We can use ACO to create layouts that are effective, productive, and suited to the facility’s
unique needs by specifying suitable decision variables and fitness functions.

17.10 Solution by the Simulated Annealing


The metallurgical annealing process inspired the metaheuristic optimisation technique known as Simulated
Annealing (SA). Facility layout planning, which involves optimising the location of departments, worksta-
tions, and equipment within a facility, is one optimisation problem for which SA has been effectively used.
The fundamental concept of SA is to simulate the cooling of a heated metal to determine the ideal atom
configuration. Each alternative answer to the problem in the optimisation procedure corresponds to a state
of the metal at a specific temperature. Iteratively altering the metal’s state by randomly swapping the
positions of its atoms, the algorithm advances and accepts the new form if it improves the objective func-
tion or is based on a probability dictated by the temperature at the time. We must establish the decision
factors that describe the layout toto apply SA to design facility layouts. These are a few examples of each
department’s location and size, the location of each workstation, and the setup of the facility’s equipment.
The objective function would be a measurement of the layout’s effectiveness and productivity, accounting
for elements like the separation between departments, the movement of personnel and supplies, and the
accessibility of equipment. Depending on the unique requirements of the situation, SA can be applied in
various ways. One typical strategy is to start with a high initial temperature and slowly lower it over several
iterations or epochs. With each iteration, a new solution is created by randomly switching the locations of
the various departments, workstations, and pieces of machinery. Based on the objective function and the
current temperature, the algorithm decides whether to accept the new solution, with a higher probability of
doing so at higher temperatures. When a satisfying outcome is achieved or a predefined ending requirement,
such as a maximum number of iterations or a minimal improvement in the objective function, is satisfied,
the algorithm stops. Simulated Annealing can be utilised as a robust optimisation algorithm to resolve
challenging facility layout design issues. We can use SA to create effective, productive layouts suited to the
particular demands of the facility by establishing suitable decision variables and objective functions.
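The swap-and-accept loop described above can be illustrated with a minimal SA sketch for a slot-assignment layout. The permutation encoding and the toy objective are illustrative assumptions:

```python
import math
import random

def sa_layout(layout_cost, n_slots, iters=2000, t0=10.0, alpha=0.995):
    """Minimal SA sketch for a slot-assignment layout.

    The layout is a permutation: layout[i] = department placed in slot i.
    layout_cost(perm) is an assumed user-supplied objective (e.g. total
    weighted distance between departments); any callable returning a
    float works here.
    """
    layout = list(range(n_slots))
    random.shuffle(layout)
    cost = layout_cost(layout)
    best, best_cost, t = layout[:], cost, t0
    for _ in range(iters):
        i, j = random.sample(range(n_slots), 2)
        cand = layout[:]
        cand[i], cand[j] = cand[j], cand[i]           # swap two departments
        c = layout_cost(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if c < cost or random.random() < math.exp((cost - c) / t):
            layout, cost = cand, c
            if cost < best_cost:
                best, best_cost = layout[:], cost
        t *= alpha                                     # geometric cooling
    return best, best_cost

# toy objective: each department d "wants" to sit in slot d
random.seed(0)
obj = lambda p: sum(abs(d - s) for s, d in enumerate(p))
sol, val = sa_layout(obj, 8)
print(val)
```

The cooling rate `alpha` and starting temperature `t0` are tuning parameters; slower cooling explores more but costs more iterations.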

17.11 Conclusion
The arrangement of offices, workstations, and equipment within a facility must be optimized, which is
a difficult challenge. The objective is to create a layout that is effective, productive, and suited to the
facility’s particular requirements. Numerous optimization strategies, such as genetic algorithms, particle
swarm optimization, ant colony optimization, simulated annealing, firefly algorithm, bat algorithm, cuckoo
search, and spider monkey optimization, can be used to tackle this problem. Each algorithm has advantages
and disadvantages, and the best one to use depends on the particulars of the issue. Generally speaking,
specifying the proper decision variables, objective functions, and the problem’s parameters and restrictions
is necessary for optimization methods. The algorithms iteratively produce new solutions, assess their quality
or fitness, and modify them according to some search strategy. Optimization algorithms in facility layout design
can help save expenses, boost production, and enhance a facility’s overall efficiency. Facility managers can
use these algorithms to design layouts optimized for their unique requirements and flexible enough to alter
when the production process or the business environment does.

18 Vehicle Routing Problem
The Vehicle Routing Problem (VRP) is a well-known optimisation problem in which a fleet of vehicles must
be routed to visit a set of customers as efficiently as possible. The challenge is to identify the best possible
set of routes for the vehicles so as to minimise the overall distance travelled and any associated costs, such as
time, fuel use, or vehicle capacity utilisation. VRP has many valuable applications, including logistics, waste
collection, package delivery, and public transit. Depending on the particular requirements of the situation, the
VRP can be formulated in various ways. In its most basic version, the VRP assumes that every customer must
be visited precisely once and that every vehicle has the same capacity and speed. The challenge is choosing the
best possible set of routes for the vehicles to minimise their overall mileage or other associated costs. The
Capacitated Vehicle Routing Problem (CVRP) is the name given to this issue. The VRP can be expanded to
include more restrictions and complexities, such as time windows (when customers must be visited), various
depots (where the vehicles start and end their routes), heterogeneous vehicles (with varying capacities or
speeds), and other pragmatic considerations. Numerous optimisation strategies, including heuristic and
metaheuristic ones like genetic algorithms, simulated annealing, ant colony optimization, and particle swarm
optimization, as well as accurate approaches like branch-and-bound and dynamic programming, can be used
to solve the VRP. The particulars of the problem, such as its size, the number of variables and constraints,
and the required quality and optimality of the solution, determine the algorithm to be used.
Exact approaches are typically better suited to small and medium-sized instances, while heuristic and
metaheuristic algorithms are better suited to larger, harder instances. Solving the VRP well can yield
significant cost reductions and efficiency gains in logistics and transportation. Businesses can
reduce fuel consumption, vehicle wear and tear, and labour costs by optimising the routing of their vehicles
while also improving customer satisfaction and overall productivity.
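Whatever algorithm is chosen, candidate solutions must be scored. A minimal cost function for the capacitated variant might look like the following sketch; the four-node instance is invented for illustration:

```python
def route_cost(route, depot, dist):
    """Cost of one route: depot -> customers in order -> back to depot."""
    stops = [depot] + route + [depot]
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

def solution_cost(routes, demands, capacity, depot, dist):
    """Total distance of a CVRP solution; a route that exceeds the vehicle
    capacity makes the whole solution infeasible (infinite cost)."""
    total = 0.0
    for route in routes:
        if sum(demands[c] for c in route) > capacity:
            return float("inf")              # capacity constraint violated
        total += route_cost(route, depot, dist)
    return total

# 4 nodes: node 0 is the depot; symmetric illustrative distance matrix
dist = [[0, 2, 4, 3],
        [2, 0, 5, 6],
        [4, 5, 0, 2],
        [3, 6, 2, 0]]
demands = {1: 1, 2: 1, 3: 1}
cost = solution_cost([[1], [2, 3]], demands, capacity=2, depot=0, dist=dist)
print(cost)   # (2+2) + (4+2+3) = 13
```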

18.1 Solution by the Artificial Bee Colony (ABC) Algorithms


A metaheuristic optimisation algorithm called the Artificial Bee Colony (ABC) algorithm is based on how
honey bees behave in a colony. The Vehicle Routing Problem (VRP) and many other optimisation issues
have been successfully resolved using this technique. A colony of fake bees is used in the ABC algorithm
to depict a set of potential solutions. The algorithm advances by producing new solutions iteratively and
assessing their quality or fitness using an objective function. The search procedure is modelled after bees’
foraging behaviour, in which bees investigate their surroundings for food sources and communicate to share
information about their location and quality. Three different kinds of bees are used in the ABC algorithm:
employed bees, onlooker bees, and scout bees. The employed bees perform local search around their current
solutions to explore the search space, while the onlooker bees choose promising solutions based on the
quality of the food sources reported by the employed bees. The scout bees randomly explore the
search space to find fresh solutions. To apply the ABC algorithm to the VRP, a solution is represented as a
set of vehicle routes that visit the customers. The total distance covered by the vehicles or some related cost measure is
typically used to define the objective function. The algorithm keeps updating the solutions based on the
quality of the food sources and executing local search operations around the current solutions. According to
experimental investigations, the ABC method can create high-quality VRP solutions with a low computing
cost. Additionally, the algorithm has been expanded to consider the VRP’s additional restrictions and
complications, including time windows, various depots, and heterogeneous vehicles. However, the ABC
algorithm’s performance may vary depending on the particular issue instance and parameter choices, and it
is not guaranteed to find the optimal solution.
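The employed-bee phase described above, local search around a current solution with greedy replacement, can be sketched as follows. The relocate move and the toy cost function are illustrative assumptions, not the only possible choices:

```python
import random

def neighbor(routes):
    """Employed-bee move: relocate one random customer to another route."""
    new = [r[:] for r in routes]
    src = random.choice([i for i, r in enumerate(new) if r])
    cust = new[src].pop(random.randrange(len(new[src])))
    dst = random.randrange(len(new))
    new[dst].insert(random.randrange(len(new[dst]) + 1), cust)
    return [r for r in new if r]            # drop any emptied route

def employed_phase(food_sources, cost):
    """Greedy replacement: keep the neighbor only if it is better."""
    out = []
    for sol in food_sources:
        cand = neighbor(sol)
        out.append(cand if cost(cand) < cost(sol) else sol)
    return out

random.seed(3)
cost = lambda routes: sum(len(r) ** 2 for r in routes)  # toy: prefer balance
pop = [[[1, 2, 3], [4]], [[1], [2, 3, 4]]]
pop = employed_phase(pop, cost)
# every solution still visits exactly customers 1..4
print(all(sorted(c for r in s for c in r) == [1, 2, 3, 4] for s in pop))
```

The onlooker phase would apply the same move to solutions chosen with probability proportional to fitness, and scouts would replace solutions that fail to improve for many trials.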

18.2 Solution by the Differential Evolution


The differential evolution of potential solutions is the foundation for the metaheuristic optimisation technique
known as Differential Evolution (DE). The Vehicle Routing Problem (VRP) is one of many optimisation is-
sues that DE has been successfully used to resolve. In the DE algorithm, candidate solutions form a
population, and each member is called an individual. The algorithm advances by iteratively producing new
solutions and assessing their quality or fitness using an objective function. The evolution of natural species, in which individuals struggle for
resources and reproduce to produce new offspring, serves as an inspiration for the search process. Three sig-
nificant operations make up the DE algorithm: mutation, crossover, and selection. A new solution is created
in the mutation operation by scaling the difference between two previously chosen solutions and adding it to
a third solution. The present and new solutions are joined to create a trial solution in the crossover process.
The trial solution is either accepted or rejected during the selection operation, depending on its suitability
or quality. To apply the DE algorithm to the VRP, a solution is expressed as a set of vehicle routes that
visit the customers. Typically, the total distance covered by the vehicles or some related cost measure is used
to define the objective function. The method continues by applying the mutation, crossover, and selection
operations to the existing solutions to generate new ones iteratively. According to experimental research, the
DE algorithm may generate high-quality solutions for the VRP with a low computational cost. Additionally,
the algorithm has been expanded to consider the VRP’s additional restrictions and complications, including
time windows, multiple depots, and heterogeneous vehicles. However, the DE algorithm’s performance may
vary depending on the particular problem instance and parameter choices, and it is not guaranteed to find
the optimal solution.
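The mutation, crossover, and selection operations can be sketched on real-valued vectors. Applying this to the VRP would additionally require a random-key decoder (sorting each vector's keys to obtain a visiting order), which is assumed rather than shown here:

```python
import random

def de_step(pop, fitness, F=0.8, CR=0.9):
    """One DE generation on real-valued vectors (random keys that a
    decoder would sort into a customer visiting order)."""
    out = []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        # mutation: donor = a + F * (b - c)
        donor = [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
        # binomial crossover between target x and the donor vector
        jrand = random.randrange(len(x))
        trial = [d if (random.random() < CR or j == jrand) else xi
                 for j, (xi, d) in enumerate(zip(x, donor))]
        # selection: keep whichever of target/trial has the lower cost
        out.append(trial if fitness(trial) <= fitness(x) else x)
    return out

random.seed(7)
sphere = lambda v: sum(t * t for t in v)   # toy cost standing in for route length
pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(6)]
before = min(sphere(p) for p in pop)
for _ in range(20):
    pop = de_step(pop, sphere)
after = min(sphere(p) for p in pop)
print(after <= before)   # greedy selection never loses the best individual
```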

18.3 Solution by the Genetic Algorithms


The concept of natural selection and genetics served as the basis for the metaheuristic optimisation algorithms
known as genetic algorithms (GAs). They have been used to address a variety of optimisation issues,
including the Vehicle Routing Problem (VRP), with success. The GA algorithm represents a set of potential
solutions as a population of individuals that evolve over several generations. Each member of the
population is a set of vehicle routes that visit the customers, and therefore represents a solution to the
VRP. The algorithm creates new individuals iteratively using genetic operators like crossover and mutation
and assesses their quality using an objective function. Three fundamental processes make up the GA algorithm:
selection, reproduction, and mutation. Individuals with higher fitness values, i.e., solutions nearer the
optimum, are more likely to be chosen during the selection phase. In reproduction, new
individuals are produced by combining the selected ones using crossover and mutation operators. Some
individuals are randomly modified during the mutation stage to add variation to the population. The
objective function is often the sum of the vehicle trip distances or other cost metric. The solution is typically
represented as a set of vehicle routes when applying the GA algorithm to the VRP. The method works by
using genetic operators and measuring the fitness of the individuals to produce new solutions iteratively.
Experimental research has demonstrated that even for big and complex issue instances, the GA algorithm
can generate high-quality solutions for the VRP. The individual issue instance and parameter settings, such
as population size, selection and reproduction operators, and mutation rate, may affect the GA algorithm’s
performance. Additional restrictions and complications of the VRP, such as time windows, different depots,
and heterogeneous vehicles, can be incorporated into the GA algorithm.
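One practical detail is a crossover operator that keeps routes valid. A common choice, an assumption here since the text names no specific operator, is a variant of order crossover (OX) applied to a "giant tour" permutation that a splitting step would later cut into capacity-feasible routes:

```python
import random

def order_crossover(p1, p2):
    """OX variant: copy a random slice from parent 1, then fill the
    remaining positions in the order the missing customers appear in
    parent 2. The child is always a valid permutation."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [c for c in p2 if c not in child]
    k = 0
    for pos in range(n):
        if child[pos] is None:
            child[pos] = fill[k]
            k += 1
    return child

random.seed(5)
kid = order_crossover([1, 2, 3, 4, 5, 6], [6, 5, 4, 3, 2, 1])
print(sorted(kid))   # always a permutation: [1, 2, 3, 4, 5, 6]
```

A mutation operator for the same encoding could simply swap two positions of the permutation.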

18.4 Solution by the Particle Swarm Optimization


A metaheuristic optimisation technique called Particle Swarm Optimization (PSO) is motivated by the social
behaviour of fish schools and flocks of birds. Numerous optimisation issues, like the Vehicle Routing Problem
(VRP), have been solved using PSO. A collection of particles is utilised in the PSO algorithm to represent the
potential solutions. In the search space, every particle has a position and a velocity. Based on each particle’s
individual best solution and the best solution the swarm has so far discovered, the algorithm iteratively
adjusts the position and speed of the particles. The search is propelled by the particles’ migration in the
direction of the swarm’s best discovery and their own best positions. When PSO is applied to the VRP,
each particle represents a collection of vehicle routes that visit the customers. The total distance covered by
the vehicles or some related cost parameter is typically used to define the objective function. Based on the
quality of each particle’s individual best solution and the best solution the swarm has so far discovered, the
method continues by iteratively updating the position and velocity of the particles. Exploring the search
space and utilising potential solutions must be balanced during the search process. Experimental studies
have demonstrated that the PSO algorithm can generate superior VRP solutions at a cost-effective level of
computation. Additionally, the algorithm has been expanded to consider the VRP’s additional restrictions
and complications, including time windows, various depots, and heterogeneous vehicles. However, the PSO
algorithm’s performance may vary depending on the particular problem instance and parameter choices, and it
is not guaranteed to find the optimal solution.
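The velocity and position update at the heart of PSO can be sketched as follows. Representing routes as sortable random keys is an illustrative assumption; the inertia and acceleration coefficients are conventional defaults, not values from the text:

```python
import random

def pso_step(xs, vs, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update per particle (random-key encoding:
    sorting a particle's keys would yield a customer visiting order)."""
    for x, v, pb in zip(xs, vs, pbest):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pb[d] - x[d])       # pull toward own best
                    + c2 * r2 * (gbest[d] - x[d]))   # pull toward swarm best
            x[d] += v[d]
    return xs, vs

random.seed(2)
xs = [[0.9, 0.1], [0.2, 0.8]]
vs = [[0.0, 0.0], [0.0, 0.0]]
pbest = [row[:] for row in xs]   # each particle's best position so far
gbest = [0.5, 0.5]               # swarm's best position so far
xs, vs = pso_step(xs, vs, pbest, gbest)
print(len(xs) == 2)
```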

18.5 Solution by the Firefly Algorithm


The behaviour of fireflies, which attract one another through bioluminescence, served as the basis for the
Firefly Algorithm, a metaheuristic optimisation algorithm. The Vehicle Routing Problem and other optimi-
sation issues have been resolved using this approach. The candidate solutions are shown as a collection of
fireflies in the FA algorithm. Based on how attractive they are to one another, the programme repeatedly
adjusts the positions of the fireflies. The attraction between fireflies, which is affected by their proximity
and brightness (fitness value), propels the search process. To apply FA to the VRP, each firefly represents
a set of vehicle routes that visit the customers. Typically, the total distance covered by the vehicles or
some related cost parameter is used to define the objective function. The method works by adjusting the
firefly positions iteratively according to how appealing they are to one another and how good their individual
best solution is. Exploring the search space and utilising potential solutions must be balanced during the
search process. According to experimental tests, the FA algorithm may generate high-quality solutions for
the VRP with a low computational cost. Additionally, the algorithm has been expanded to consider the
VRP’s additional restrictions and complications, including time windows, various depots, and heterogeneous
vehicles. However, the FA algorithm’s performance may vary depending on the particular issue instance and
parameter choices, and it may not always find the optimal solution.
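The attractiveness-driven move can be sketched from the standard firefly update, in which attractiveness decays with distance as beta = beta0 · exp(−gamma · r²). Interpreting the vectors as random keys for route decoding is an assumption:

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.1):
    """Move firefly i toward a brighter firefly j.

    beta0 is the attractiveness at zero distance, gamma controls how
    quickly it decays, and alpha scales a small random walk. The vectors
    are random keys a decoder would turn into vehicle routes."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]

random.seed(4)
moved = firefly_move([0.0, 0.0], [1.0, 1.0])
print(len(moved) == 2)
```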

18.6 Solution by the Cuckoo Search


A metaheuristic optimisation algorithm called Cuckoo Search (CS) was inspired by the brood-parasitic
breeding habits of cuckoo birds. The Vehicle Routing Problem (VRP) and other optimisation problems have been solved
using the technique. A population of cuckoo nests reflects the potential solutions in the CS algorithm.
Each cuckoo nest corresponds to a set of vehicle routes that visit the customers. Based on the quality of
the cuckoos’ solutions and their capacity to produce new ones, the algorithm iteratively updates the
cuckoo nests. The search process is driven by a balance between local search around promising solutions
and global exploration (typically via Lévy flights) to discover new regions of the search space. Experimental
studies have shown that the CS algorithm can produce high-quality solutions for the VRP at a reasonable
computational cost. The algorithm has also been extended to incorporate additional constraints and
complexities of the VRP, such as time windows, multiple depots, and heterogeneous vehicles. However, the
performance of the CS algorithm may depend on the specific problem instance and parameter settings, and
it is not guaranteed to find the optimal solution.
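The global exploration step in CS is usually implemented with Lévy flights. The sketch below uses Mantegna's method for the step length, an implementation choice not stated in the text:

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna's method for drawing a Lévy-distributed step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_move(nest, best, step_scale=0.01):
    """Generate a new nest by a Lévy flight biased toward the best nest;
    the vectors are random keys a decoder would turn into routes."""
    return [x + step_scale * levy_step() * (x - b)
            for x, b in zip(nest, best)]

random.seed(9)
new = cuckoo_move([0.4, 0.6, 0.2], [0.5, 0.5, 0.5])
print(len(new) == 3)
```

The heavy-tailed step distribution produces occasional long jumps, which is what gives CS its global exploration behaviour.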

18.7 Solution by the Bat Algorithm


A metaheuristic optimisation algorithm called the Bat Algorithm (BA) is based on how bats use echolocation.
The Vehicle Routing Problem (VRP) and other optimisation issues have been resolved using the technique.
The candidate solutions in the BA algorithm are shown as a collection of bats. Each bat in the search area
has a position and a speed, emitting ultrasonic pulses to find the best answer. Based on each bat’s best
solution and the best solution the swarm has so far discovered, the algorithm iteratively adjusts the position
and velocity of the bats. The frequency and intensity of the ultrasonic pulses that the bats generate control
the search process. Each bat represents a collection of vehicle routes that visit the customers when applying
BA to the VRP. The total distance covered by the vehicles or some related cost parameter is typically
used to define the objective function. Based on the quality of each bat’s individual best solution and the
best solution the swarm has so far discovered, the algorithm continues by repeatedly updating the position
and velocity of the bats. Exploring the search space and utilising potential solutions must be balanced
during the search process. According to experimental research, the BA algorithm can generate superior
VRP solutions with a reasonable computational cost. Additionally, the algorithm has been expanded to
consider the VRP’s additional restrictions and complications, including time windows, various depots, and
heterogeneous vehicles. However, the BA algorithm’s performance may vary depending on the particular
issue instance and parameter choices, and it may not always ensure the discovery of the best solution.

18.8 Solution by the Spider Monkey Optimization (SMO) Algorithm


The Vehicle Routing Problem (VRP) is a combinatorial optimization problem that entails determining the
optimum set of routes for vehicles to drive while reducing the overall distance or cost required to visit a group
of clients. A metaheuristic optimization method called the Spider Monkey Optimization (SMO) algorithm
was inspired by the social behaviour of spider monkeys. A general description of how the SMO method
can be used to solve the VRP is given below.
Step 1: Initialization. Randomly initialize a population of spider monkeys with feasible solutions that
respect the VRP’s constraints, where each solution corresponds to a set of vehicle routes. Evaluate each
solution’s fitness using the VRP’s objective function, usually minimizing the total cost or distance travelled.
Step 2: Selection. Based on their fitness values, select the best spider monkeys in the population; a
tournament selection process can be used to pick the individuals that pass into the next generation.
Step 3: Reproduction. Apply crossover and mutation operators to the selected individuals to produce
new spider monkeys. The crossover operator can exchange route segments between two parents to build
offspring, and the mutation operator can randomly perturb the offspring’s routes.
Step 4: Local search. Apply a local search method to improve the quality of the offspring; a local
optimization technique such as 2-opt or 3-opt can refine the routes and uncover better solutions.
Step 5: Replacement. Replace the worst spider monkeys in the population with the new offspring produced
in Step 3, after evaluating their fitness.
Step 6: Termination. Stop the algorithm when a termination criterion is reached, such as a maximum
number of iterations or a satisfactory solution.
By repeating Steps 2 through 5 until the termination criterion is satisfied, the SMO algorithm can search
the solution space and converge to a near-optimal solution for the VRP.
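The 2-opt local search named in Step 4 can be sketched as follows. The small distance matrix is an invented instance in which the initial route crosses itself:

```python
def two_opt(route, dist):
    """Repeatedly reverse route segments while that shortens the route.
    dist[a][b] is the distance between stops; index 0 is the depot."""
    tour = [0] + route + [0]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                # gain from replacing edges (i-1, i) and (j, j+1)
                before = dist[tour[i - 1]][tour[i]] + dist[tour[j]][tour[j + 1]]
                after = dist[tour[i - 1]][tour[j]] + dist[tour[i]][tour[j + 1]]
                if after < before:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour[1:-1]

# square instance: visiting 1,3,2 crosses itself; 1,2,3 does not
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
print(two_opt([1, 3, 2], dist))   # -> [1, 2, 3]
```

3-opt works the same way but reconnects three removed edges instead of two, at a higher cost per pass.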

18.9 Solution by the Ant Colony Algorithms


The Ant Colony Optimization (ACO) algorithm is a meta-heuristic optimization algorithm inspired by the
foraging behaviour of ants. It has been successfully applied to solve various combinatorial optimization
problems, including the Vehicle Routing Problem (VRP). Here is an overview of how to apply the ACO
algorithm to solve the VRP.
Step 1: Initialization. Place a colony of artificial ants at the depot and initialize a small, uniform amount
of pheromone on every edge of the network.
Step 2: Solution construction. Each ant builds a solution by visiting every customer once and returning to
the depot. An ant chooses its next customer probabilistically, based on the amount of pheromone on the
connecting edge and a heuristic value influenced by the distance between the current customer and the
candidate customer.
Step 3: Local search. After a solution is built, apply a local search algorithm (e.g., 2-opt) to improve it by
optimizing the ants’ routes.
Step 4: Pheromone update. After all ants have completed their tours, the amount of pheromone on each
edge is updated based on the quality of the solutions; better solutions deposit more pheromone, while
existing pheromone evaporates over time.
Step 5: Termination. Stop the algorithm when a termination criterion is met, e.g., a maximum number of
iterations or a satisfactory solution found.
By repeating Steps 2 through 4 until the termination criterion is met, the ACO algorithm can explore the
solution space and converge to a near-optimal solution for the VRP. Several variations of ACO, such as the
Max-Min Ant System (MMAS) and the Ant Colony System (ACS), use different pheromone update rules
and local search strategies to further improve performance.
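The probabilistic customer choice in Step 2 can be sketched as a roulette-wheel selection over pheromone and inverse-distance terms. The parameter values alpha and beta are conventional defaults, not values from the text:

```python
import random

def next_customer(current, unvisited, tau, dist, alpha=1.0, beta=2.0):
    """Roulette-wheel choice: P(j) is proportional to
    tau[current][j]^alpha * (1 / dist[current][j])^beta."""
    weights = [(j, tau[current][j] ** alpha * (1.0 / dist[current][j]) ** beta)
               for j in unvisited]
    total = sum(w for _, w in weights)
    r = random.uniform(0, total)
    for j, w in weights:
        r -= w
        if r <= 0:
            return j
    return weights[-1][0]   # guard against floating-point rounding

random.seed(6)
tau = [[1.0] * 3 for _ in range(3)]          # uniform pheromone to start
dist = [[1, 1, 10], [1, 1, 10], [10, 10, 1]]
picks = [next_customer(0, [1, 2], tau, dist) for _ in range(200)]
print(picks.count(1) > picks.count(2))   # the nearer customer dominates
```

As pheromone accumulates on edges used by good tours, the same rule gradually biases later ants toward those edges.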

18.10 Solution by the Simulated Annealing
Simulated Annealing (SA) is a metaheuristic optimization algorithm commonly used to solve combinatorial
optimization problems such as the vehicle routing problem (VRP). The annealing process inspires SA in
metallurgy. The annealing process involves heating and cooling the metal to reduce defects and improve its
properties. Here is an overview of how to apply the SA algorithm to solve the VRP.
Step 1: Initialization. Construct an initial solution, a set of routes for the vehicles, and evaluate its fitness
using the VRP objective function, usually the total distance travelled or cost incurred.
Step 2: Neighbourhood search. Generate a new solution by making a small change to the current one, e.g.,
swapping two customers between two routes or adding a new route to the solution, and evaluate its fitness.
Step 3: Acceptance. Decide whether to accept or reject the new solution using the Metropolis criterion,
which compares the fitness difference between the current and new solutions against a probability computed
from the current temperature. If the new solution is better, accept it unconditionally; if it is worse, accept
it with a probability that decreases as the temperature drops.
Step 4: Cooling. Lower the temperature according to a cooling schedule, which determines how quickly the
system cools. The schedule can be linear, geometric, or any other function that decreases the temperature
over time.
Step 5: Termination. Stop the algorithm when a termination criterion is met, e.g., a maximum number of
iterations or a satisfactory solution found.
By repeating Steps 2 through 4 until the termination criterion is met, the SA algorithm can explore the
solution space and converge to a near-optimal solution for the VRP. Its performance can be further improved
with techniques such as restarted (perturbed) cooling schedules, integrated local search algorithms, or
adaptive temperature-control strategies.
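The neighbourhood move from Step 2, swapping two customers between routes, can be sketched as:

```python
import random

def swap_between_routes(routes):
    """Exchange one customer between two different routes.
    routes is a list of customer lists; the input is left unchanged."""
    new = [r[:] for r in routes]
    r1, r2 = random.sample(range(len(new)), 2)
    if not new[r1] or not new[r2]:
        return new                        # nothing to swap with an empty route
    i = random.randrange(len(new[r1]))
    j = random.randrange(len(new[r2]))
    new[r1][i], new[r2][j] = new[r2][j], new[r1][i]
    return new

random.seed(8)
before = [[1, 2], [3, 4]]
after = swap_between_routes(before)
print(sorted(c for r in after for c in r))   # customers preserved: [1, 2, 3, 4]
```

A capacity check on the two modified routes, as in the CVRP, would be applied before the Metropolis acceptance test.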

18.11 Conclusion
In Conclusion, the Vehicle Routing Problem (VRP) is a challenging optimization issue that involves deter-
mining the most cost- or distance-effective routes for a fleet of vehicles to serve a group of clients. The
VRP can be solved using a variety of methods, including exact approaches such as branch-and-bound and dynamic
programming, and metaheuristic ones like Genetic Algorithms (GA), Particle Swarm Optimization (PSO),
Ant Colony Optimization (ACO), and Simulated Annealing (SA). Each method has pros and cons, and the
best solution depends on the scope of the issue, the available computing power, and the amount of optimality
required. Exact approaches are best suited to small instances and can guarantee an optimal answer, while
metaheuristic algorithms can handle larger instances and produce good solutions in reasonable time.
The VRP is a critical challenge in logistics and transportation, and
breakthroughs in optimization algorithms can result in considerable increases in effectiveness, cost savings,
and environmental impact.

19 Parallel Machine Scheduling


Scheduling jobs across several machines in parallel to achieve the fastest possible completion time for
all of them is known as parallel machine scheduling. It is a typical problem in manufacturing, logistics, and other
sectors where work must be done swiftly and effectively. The fundamental concept behind parallel machine
scheduling is to split up jobs among numerous machines to reduce the time needed to finish everything.
Each task requires a specific amount of processing time, and each device can only do a particular number
of jobs simultaneously. The objective is to determine how to assign work to machines best to reduce the
overall completion time. The challenges associated with parallel machine scheduling can be resolved using a
variety of algorithms and methods, including the following.
List scheduling: jobs are placed in a priority order (for example, longest or shortest processing time first),
and each job in turn is assigned to the machine that is currently least loaded.
Genetic algorithms: a population of candidate schedules is evolved over time using selection, crossover,
and mutation operators.

Simulated annealing: this approach is based on the physical annealing process, in which a material is heated
and cooled to reduce its energy. The schedule is treated as the state of a physical system, and a solution is
found by gradually cooling that system.
Ant colony optimisation: inspired by the pheromone trails that ants leave to communicate with one another,
a colony of virtual ants follows pheromone trails to find a satisfactory solution to the scheduling problem.
Tabu search: this algorithm avoids revisiting previously explored solutions by recording them in a memory
structure; a set of tabu rules restricting possible moves guides the search process.
Generally, parallel machine scheduling is a complex issue without a universally applicable answer. The
particular task and associated limitations determine the algorithm or technique to use.

19.1 Solution by the Artificial Bee Colony (ABC) Algorithms


The Artificial Bee Colony (ABC) algorithm is a swarm-based optimisation method that mimics the foraging behaviour of honey bees. It has been applied to various optimisation problems, including parallel machine scheduling.
The ABC algorithm first creates a population of candidate solutions, represented as vectors of decision variables. In the parallel machine scheduling problem, each candidate solution encodes a possible assignment of jobs to machines. The algorithm then evaluates each candidate using an objective function that measures the overall time needed to complete all jobs. Three types of artificial bees—employed bees, onlooker bees, and scout bees—are used to update the candidate solutions iteratively. Employed bees are associated with the current candidate solutions and search their neighbourhoods for better ones. Onlooker bees choose among the candidate solutions according to their fitness values and the information shared by the employed bees. Scout bees explore new areas of the search space to avoid becoming stuck in local optima. Three crucial parameters affect the bees' behaviour: the population size, the number of employed bees, and the number of onlooker bees. These parameters can be tuned to balance exploitation of promising solutions against exploration of the search space.
When the ABC algorithm is applied to parallel machine scheduling, the decision variables are the job assignments to machines, and the objective function is the overall completion time, determined as the maximum of the machines' completion times. Each candidate solution produced by the ABC algorithm represents a possible assignment of jobs to machines. The algorithm evaluates each solution with the objective function and then iteratively updates the population according to the behaviour of the artificial bees.
In conclusion, the ABC algorithm can be applied successfully to parallel machine scheduling. As with other optimisation methods, however, its effectiveness depends on the difficulty of the problem at hand, the parameter choices, and the quality of the initial population of candidate solutions.
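A much-simplified sketch of this scheme in Python (hypothetical data; the employed and onlooker phases are merged into a single perturbation step for brevity, which is a simplification of the full ABC algorithm):

```python
import random

def makespan(assign, times, machines):
    """Objective: load of the busiest machine."""
    loads = [0] * machines
    for job, mach in enumerate(assign):
        loads[mach] += times[job]
    return max(loads)

def abc_schedule(times, machines, colony=10, limit=5, iters=200, seed=0):
    rng = random.Random(seed)
    n = len(times)
    # each food source is a job -> machine assignment vector
    pop = [[rng.randrange(machines) for _ in range(n)] for _ in range(colony)]
    trials = [0] * colony
    best = min(pop, key=lambda s: makespan(s, times, machines))[:]
    for _ in range(iters):
        for i in range(colony):
            # employed/onlooker step (merged): move one job to another machine
            cand = pop[i][:]
            cand[rng.randrange(n)] = rng.randrange(machines)
            if makespan(cand, times, machines) < makespan(pop[i], times, machines):
                pop[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:  # scout step: abandon an exhausted source
                pop[i] = [rng.randrange(machines) for _ in range(n)]
                trials[i] = 0
            if makespan(pop[i], times, machines) < makespan(best, times, machines):
                best = pop[i][:]
    return best, makespan(best, times, machines)
```

The `limit` parameter controls when a stagnant food source is abandoned, which is the mechanism the text describes for escaping local optima.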

19.2 Solution by the Differential Evolution


Differential Evolution (DE) is a popular metaheuristic optimisation method that can be applied to the parallel machine scheduling problem. Starting from a population of candidate solutions, it gradually evolves the population using mutation, crossover, and selection operators.
In parallel machine scheduling, the goal is to find the job-to-machine assignment that minimises the overall completion time. The DE algorithm can be applied as follows:
1. Create an initial population of candidate solutions, each representing a possible assignment of jobs to machines.
2. Evaluate the fitness of each solution. The fitness function computes the overall completion time for a given assignment of jobs to machines.
3. Apply the mutation operator to each solution in the population. The mutation operator randomly perturbs the original solution to produce a new candidate solution.
4. Apply the crossover operator to each pair of parent solutions in the population. The crossover operator combines characteristics of the parents to create a new candidate solution.
5. Evaluate the fitness of each new candidate solution.
6. Select the best solutions from the parents and the new candidates to form the next generation.
7. Repeat steps 3-6 until a stopping condition is met (e.g., a maximum number of generations or a minimum fitness level is reached).
DE has been applied successfully to the parallel machine scheduling problem and can produce good results with relatively few function evaluations. However, its performance depends on the specific problem instance and on parameter choices such as the mutation factor and crossover rate.
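Since DE operates on continuous vectors, one common way to apply it to this discrete problem is to evolve a real-valued key per job and decode it to a machine index. A minimal sketch under that assumption (the data and parameter values are hypothetical):

```python
import random

def makespan(assign, times, machines):
    loads = [0] * machines
    for job, mach in enumerate(assign):
        loads[mach] += times[job]
    return max(loads)

def decode(vec, machines):
    """Map each continuous gene in [0, machines) to a machine index."""
    return [min(int(x), machines - 1) for x in vec]

def de_schedule(times, machines, pop_size=15, F=0.7, CR=0.9, iters=150, seed=1):
    rng = random.Random(seed)
    n = len(times)
    fit = lambda v: makespan(decode(v, machines), times, machines)
    pop = [[rng.uniform(0, machines) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial, jrand = pop[i][:], rng.randrange(n)
            for j in range(n):
                if rng.random() < CR or j == jrand:
                    # mutation: scaled difference of two vectors added to a third
                    trial[j] = min(max(a[j] + F * (b[j] - c[j]), 0.0),
                                   machines - 1e-9)
            if fit(trial) <= fit(pop[i]):  # greedy selection
                pop[i] = trial
    best = min(pop, key=fit)
    return decode(best, machines), fit(best)
```

`F` is the mutation (scaling) factor and `CR` the crossover rate mentioned in the text.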

19.3 Solution by the Genetic Algorithms


Genetic Algorithms (GAs) are a widely used class of metaheuristic optimisation techniques that can be applied to the parallel machine scheduling problem. The fundamental principle of a GA is to start with a population of candidate solutions and gradually evolve them over time using selection, crossover, and mutation operators.
In parallel machine scheduling, the goal is to find the job-to-machine assignment that minimises the overall completion time. The GA can be applied as follows:
1. Create an initial population of candidate solutions, each representing a possible assignment of jobs to machines.
2. Evaluate the fitness of each solution. The fitness function computes the overall completion time for a given assignment of jobs to machines.
3. Select the population's best solutions to act as parents for the next generation. This is done with a selection operator that favours solutions with higher fitness values.
4. Apply crossover to the parent pairs chosen in step 3 to produce new offspring solutions. Crossover combines components of two or more solutions into a new solution.
5. Apply mutation to the offspring solutions produced in step 4 to introduce new genetic material into the population.
6. Evaluate the fitness of each new candidate solution.
7. Replace the old population with the new one. Repeat steps 3-7 until a stopping condition is met (e.g., a maximum number of generations or a minimum fitness level is reached).
GAs have been applied successfully to the parallel machine scheduling problem and can produce good results with relatively few function evaluations. However, performance depends on the particular problem instance and on parameter choices such as population size, crossover rate, and mutation rate. The algorithm's success also depends on designing a sound fitness function.
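The steps above can be sketched in Python as follows (hypothetical data; truncation selection with elitism is a simplifying choice for illustration, not prescribed by the text):

```python
import random

def makespan(assign, times, machines):
    loads = [0] * machines
    for job, mach in enumerate(assign):
        loads[mach] += times[job]
    return max(loads)

def ga_schedule(times, machines, pop_size=20, gens=100, mut=0.1, seed=2):
    rng = random.Random(seed)
    n = len(times)
    fit = lambda s: makespan(s, times, machines)
    pop = [[rng.randrange(machines) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fit)
        nxt = [s[:] for s in pop[:2]]                    # elitism: keep two best
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)  # truncation selection
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]                  # one-point crossover
            for j in range(n):
                if rng.random() < mut:                   # mutation: reassign a job
                    child[j] = rng.randrange(machines)
            nxt.append(child)
        pop = nxt
    best = min(pop, key=fit)
    return best, fit(best)
```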

19.4 Solution by the Particle Swarm Optimization


Particle Swarm Optimization (PSO) is a popular metaheuristic optimisation method that can be applied to the parallel machine scheduling problem. Its fundamental concept is a swarm of particles that travel through the search space, adjusting their positions according to the best solutions found so far.
In parallel machine scheduling, the goal is to find the job-to-machine assignment that minimises the overall completion time. The PSO algorithm can be applied as follows:
1. Create a swarm of particles, each representing a possible assignment of jobs to machines.
2. Initialise each particle's position and velocity randomly.
3. Update each particle's position and velocity based on the best solution found by the particle itself and by the swarm as a whole.
4. Evaluate the fitness of each new candidate solution.
5. Update each particle's personal best solution and the swarm's global best solution.
6. Repeat steps 3-5 until a stopping condition is met (e.g., a maximum number of iterations or a minimum fitness level is reached).
PSO has been applied successfully to the parallel machine scheduling problem and can produce satisfactory results with a small number of function evaluations. However, performance depends on the particular problem instance and on parameter choices such as the number of particles, the maximum velocity, and the acceleration coefficients. The algorithm's success also depends on designing a sound fitness function.
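A minimal sketch of these steps, again using a continuous position per job decoded to a machine index (an assumption made for illustration; data and parameters are hypothetical):

```python
import random

def makespan(assign, times, machines):
    loads = [0] * machines
    for job, mach in enumerate(assign):
        loads[mach] += times[job]
    return max(loads)

def pso_schedule(times, machines, swarm=15, iters=150,
                 w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    n = len(times)
    def decode(x):
        return [min(int(v), machines - 1) for v in x]
    def fit(x):
        return makespan(decode(x), times, machines)
    X = [[rng.uniform(0, machines) for _ in range(n)] for _ in range(swarm)]
    V = [[0.0] * n for _ in range(swarm)]
    P = [x[:] for x in X]                  # personal best positions
    g = min(P, key=fit)[:]                 # global best position
    for _ in range(iters):
        for i in range(swarm):
            for j in range(n):
                # velocity update pulls towards personal and global bests
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (P[i][j] - X[i][j])
                           + c2 * rng.random() * (g[j] - X[i][j]))
                X[i][j] = min(max(X[i][j] + V[i][j], 0.0), machines - 1e-9)
            if fit(X[i]) < fit(P[i]):
                P[i] = X[i][:]
                if fit(P[i]) < fit(g):
                    g = P[i][:]
    return decode(g), fit(g)
```

`w` is the inertia weight, while `c1` and `c2` are the cognitive and social acceleration coefficients mentioned in the text.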

19.5 Solution by the Firefly Algorithm


The Firefly Algorithm (FA) is a metaheuristic optimisation method that can be applied to the parallel machine scheduling problem. Its central concept is to replicate the flashing behaviour of fireflies in order to find the best solution.
In parallel machine scheduling, the goal is to find the job-to-machine assignment that minimises the overall completion time. The FA can be applied as follows:
1. Create an initial population of fireflies, each representing a possible assignment of jobs to machines.
2. Evaluate each firefly's fitness. The fitness function computes the overall completion time for a given assignment of jobs to machines.
3. Initialise each firefly's light intensity randomly.
4. Move each firefly towards the brighter fireflies in the population. A firefly's movement is governed by the attractiveness function, which depends on the distance between the fireflies and the intensity of their lights.
5. Evaluate the fitness of each new candidate solution.
6. Update each firefly's light intensity according to the fitness of its solution.
7. Repeat steps 4-6 until a stopping condition is met (e.g., a maximum number of iterations or a minimum fitness level is reached).
FA can produce satisfactory results with only a few function evaluations and has been demonstrated to be effective on the parallel machine scheduling problem. However, performance may vary with the problem instance and with parameter choices such as the attractiveness coefficient and the maximum movement distance. The algorithm's success also depends on how well the fitness function is designed.

19.6 Solution by the Cuckoo Search


Cuckoo Search (CS) is a metaheuristic optimisation method that can be applied to the parallel machine scheduling problem. Its primary principle is to imitate the behaviour of cuckoo birds, which lay their eggs in the nests of other birds.
In parallel machine scheduling, the goal is to find the job-to-machine assignment that minimises the overall completion time. The CS algorithm can be applied as follows:
1. Create a population of cuckoo eggs, each representing a possible assignment of jobs to machines.
2. Evaluate each egg's fitness. The fitness function computes the overall completion time for a given assignment of jobs to machines.
3. Choose an egg at random and randomly perturb its values to create a new egg.
4. Evaluate the new egg's fitness.
5. If the new egg is fitter than a randomly chosen existing egg, replace that egg with the new one.
6. Replace a fraction of the population's worst eggs with fresh, randomly generated eggs.
7. Repeat steps 3-6 until a stopping condition is met (e.g., a maximum number of iterations or a minimum fitness level is reached).
CS has been applied successfully to the parallel machine scheduling problem and can produce satisfactory results with a small number of function evaluations. Performance may depend on parameters such as the number of cuckoo eggs and the probability of abandoning a nest, as well as on the particular problem. The algorithm's success also relies on designing a sound fitness function.
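A simplified sketch of this loop (hypothetical data; the Lévy-flight step of standard Cuckoo Search is replaced by a plain random perturbation for brevity):

```python
import random

def makespan(assign, times, machines):
    loads = [0] * machines
    for job, mach in enumerate(assign):
        loads[mach] += times[job]
    return max(loads)

def cuckoo_schedule(times, machines, nests=12, iters=300, pa=0.25, seed=4):
    rng = random.Random(seed)
    n = len(times)
    fit = lambda s: makespan(s, times, machines)
    pop = [[rng.randrange(machines) for _ in range(n)] for _ in range(nests)]
    best = min(pop, key=fit)[:]
    for _ in range(iters):
        # lay a new egg: perturb a random nest's solution
        egg = pop[rng.randrange(nests)][:]
        egg[rng.randrange(n)] = rng.randrange(machines)
        # drop it into a random nest; keep it if it is fitter
        j = rng.randrange(nests)
        if fit(egg) < fit(pop[j]):
            pop[j] = egg
        # abandon a fraction pa of the worst nests
        pop.sort(key=fit)
        for k in range(nests - int(pa * nests), nests):
            pop[k] = [rng.randrange(machines) for _ in range(n)]
        if fit(pop[0]) < fit(best):
            best = pop[0][:]
    return best, fit(best)
```

`pa` is the abandonment probability; the periodic random replacement is what keeps the search from stagnating.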

19.7 Solution by the Bat Algorithm
The Bat Algorithm (BA) is a metaheuristic optimisation method that can resolve the parallel machine scheduling problem. Its core concept is to mimic the echolocation behaviour of bats while searching for the best solution.
In parallel machine scheduling, the goal is to find the job-to-machine assignment that minimises the overall completion time. The BA can be applied as follows:
1. Create a population of bats, each representing a possible assignment of jobs to machines.
2. Evaluate each bat's fitness. The fitness function computes the overall completion time for a given assignment of jobs to machines.
3. Initialise each bat's frequency and velocity randomly.
4. Update each bat's frequency and velocity based on the bat's own best solution and the population's best solution so far.
5. Evaluate the fitness of each new candidate solution.
6. Update each bat's best solution and the population's best solution so far.
7. Repeat steps 4-6 until a stopping condition is met (e.g., a maximum number of iterations or a minimum fitness level is reached).
BA has been applied successfully to the parallel machine scheduling problem and can produce satisfactory results with few function evaluations. However, performance may vary with the particular problem and with parameter choices such as the bats' loudness and pulse rate. The algorithm's success also relies on designing a sound fitness function.

19.8 Solution by the Spider Monkey Optimization (SMO) Algorithm


The Spider Monkey Optimization (SMO) algorithm is a metaheuristic optimisation method inspired by the social behaviour of spider monkeys. Its core concept is to mimic how spider monkeys interact while foraging for food and scouting their surroundings in order to find the best solution.
In parallel machine scheduling, the goal is to find the job-to-machine assignment that minimises the overall completion time. The SMO algorithm can be applied as follows:
1. Create a population of spider monkeys, each representing a possible assignment of jobs to machines.
2. Evaluate each spider monkey's fitness. The fitness function computes the overall completion time for a given assignment of jobs to machines.
3. Choose a spider monkey as the group leader and update its position based on its current position and the positions of the other spider monkeys.
4. Evaluate the fitness of the new position.
5. Update the leader's and the population's best solutions so far.
6. Select a new leader and repeat steps 3-5 until a stopping condition is satisfied (such as a maximum number of iterations or a minimum fitness level).
SMO has been demonstrated to resolve the parallel machine scheduling problem successfully and can produce satisfactory outcomes with a small number of function evaluations. However, its effectiveness may vary with the particular case and with parameter choices such as the number of spider monkeys and the balance between exploitation and exploration. The algorithm's success also depends on designing a sound fitness function.

19.9 Solution by the Ant Colony Algorithms


Ant Colony Optimization (ACO) is a metaheuristic optimisation method that draws inspiration from the foraging behaviour of ants. Its main idea is to mimic how ants locate the shortest route to food sources by laying down and following pheromone trails.
In the context of parallel machine scheduling, the goal is to find the job-to-machine assignment that minimises the overall completion time. The ACO algorithm can be applied as follows:
1. Create an initial population of ants, each of which will construct a possible assignment of jobs to machines.
2. Initialise the pheromone trail level of every machine-job pairing to a small value.
3. Let each ant select a machine for each job using a probabilistic decision rule that combines the pheromone trail levels with heuristic information (such as the job's processing time and the machines' current loads).
4. Evaluate each ant's fitness. The fitness function computes the overall completion time for a given assignment of jobs to machines.
5. Update the pheromone trail levels according to each ant's fitness: the pheromone on machine-job pairings that lead to better solutions is increased, while the pheromone on pairings that lead to worse solutions evaporates.
6. Repeat steps 3-5 until a stopping condition is satisfied (such as a maximum number of iterations or a minimum fitness level).
ACO has been applied successfully to the parallel machine scheduling problem and can produce satisfactory results with a minimal number of function evaluations. The specific problem and the parameters used, such as the pheromone update rate and the balance between exploration and exploitation, could, however, affect how well the method performs. The success of the algorithm also depends on the creation of a strong fitness function.
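The steps above can be sketched as follows (hypothetical data; the heuristic term here simply favours lightly loaded machines, and only the best-so-far assignment deposits pheromone, which is one common variant):

```python
import random

def makespan(assign, times, machines):
    loads = [0] * machines
    for job, mach in enumerate(assign):
        loads[mach] += times[job]
    return max(loads)

def aco_schedule(times, machines, ants=10, iters=100, rho=0.1, seed=5):
    rng = random.Random(seed)
    n = len(times)
    tau = [[1.0] * machines for _ in range(n)]   # pheromone per job-machine pair
    best, best_val = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            loads, assign = [0.0] * machines, []
            for j in range(n):
                # weight = pheromone * heuristic (prefer lightly loaded machines)
                w = [tau[j][k] / (1.0 + loads[k]) for k in range(machines)]
                k = rng.choices(range(machines), weights=w)[0]
                assign.append(k)
                loads[k] += times[j]
            val = max(loads)
            if val < best_val:
                best, best_val = assign[:], val
        # evaporate all trails, then reinforce the best-so-far assignment
        for j in range(n):
            for k in range(machines):
                tau[j][k] *= 1.0 - rho
            tau[j][best[j]] += 1.0 / best_val
    return best, best_val
```

`rho` is the evaporation (pheromone update) rate mentioned in the text; the heuristic term balances exploration against exploitation.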

19.10 Solution by the Simulated Annealing


Simulated Annealing (SA) is a metaheuristic optimisation algorithm inspired by the physical annealing process used in metallurgy. Its fundamental concept is to replicate the slow cooling of a hot metal in order to settle its atoms into an ideal arrangement.
In the context of parallel machine scheduling, the goal is to find the job-to-machine assignment that minimises the overall completion time. The SA algorithm can be applied as follows:
1. Start from a random assignment of jobs to machines.
2. Set the initial temperature and the cooling rate.
3. Repeat until a stopping condition is satisfied:
a. Produce a neighbouring solution by randomly swapping two jobs assigned to different machines.
b. Compute the change in the objective function (the overall completion time) between the neighbouring solution and the current solution.
c. If the neighbouring solution is better, adopt it as the new current solution. Otherwise, accept it with a probability that depends on the change in the objective function and the current temperature.
d. Lower the temperature according to the cooling rate.
4. Return the best solution found over all iterations.
SA can produce satisfactory results with only a small number of function evaluations and has been demonstrated to be effective on the parallel machine scheduling problem. However, performance depends on the specific problem and on parameter choices such as the starting temperature and cooling rate. The algorithm's performance also depends on the design of a strong neighbourhood structure and cooling schedule.
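A minimal sketch with geometric cooling (hypothetical data; as a simplifying choice, the neighbourhood move here reassigns a single job rather than swapping two):

```python
import math
import random

def makespan(assign, times, machines):
    loads = [0] * machines
    for job, mach in enumerate(assign):
        loads[mach] += times[job]
    return max(loads)

def sa_schedule(times, machines, temp=10.0, cooling=0.995, iters=2000, seed=6):
    rng = random.Random(seed)
    n = len(times)
    cur = [rng.randrange(machines) for _ in range(n)]
    cur_val = makespan(cur, times, machines)
    best, best_val = cur[:], cur_val
    for _ in range(iters):
        nb = cur[:]
        nb[rng.randrange(n)] = rng.randrange(machines)  # neighbour: move one job
        nb_val = makespan(nb, times, machines)
        delta = nb_val - cur_val
        # always accept improvements; accept worse moves with Boltzmann probability
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur, cur_val = nb, nb_val
            if cur_val < best_val:
                best, best_val = cur[:], cur_val
        temp *= cooling  # geometric cooling schedule
    return best, best_val
```

Early on, the high temperature lets the search accept worse moves and roam widely; as `temp` decays, the acceptance rule hardens into hill climbing.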

19.11 Conclusion
Parallel machine scheduling, which entails distributing a set of jobs across several machines so as to minimise the overall completion time, is a challenging and crucial topic in manufacturing and other sectors. Metaheuristic optimisation algorithms have proven helpful for this problem because they explore the solution space efficiently and discover good solutions quickly.
A variety of metaheuristic algorithms have been applied to parallel machine scheduling, including Genetic Algorithms, Particle Swarm Optimization, Ant Colony Optimization, Simulated Annealing, Differential Evolution, the Firefly Algorithm, Cuckoo Search, and Spider Monkey Optimization. Each method has advantages and disadvantages, and the best choice depends on the particular problem and its constraints. Beyond picking the right algorithm, selecting suitable parameters and designing a solid fitness function are also essential for good performance. Combining these algorithms with other optimisation strategies, such as hybrid or local search algorithms, can further boost their performance.

Overall, research on efficient metaheuristic optimisation methods for scheduling parallel machines is still
ongoing, and future developments in this area will allow for more effective and efficient scheduling in various
industries.

20 Bin Packing Problem


The Bin Packing Problem (BPP), also known as the Container Loading Problem, is a well-known optimisation problem in which a set of objects must be packed into a minimum number of bins or containers, subject to constraints such as the weight or volume of the objects and the capacity of the bins. In its general form the problem is NP-hard, so heuristic or approximation techniques are needed to find workable solutions.
One of the BPP's many variations is the Bin Packing Problem with Discrete Items (referred to in this report as the Din Packing Problem). In this variation the items to be packed are discrete rather than continuously divisible, and they can occupy only a given set of positions in the bin. The objective of the Din Packing Problem is to fit the discrete items into the fewest possible bins while ensuring that each item is packed into a legal position and that the total weight or volume in each bin does not exceed that bin's capacity.
The Din Packing Problem can be expressed as an integer programming problem, but due to its computational difficulty it is usually solved via heuristic or approximation approaches. Some of the most popular algorithms for addressing it are the following.
First-Fit: each item is placed into the first bin that can accommodate it.
Best-Fit: each item is placed into the bin that will have the least space left over after packing it.
Worst-Fit: each item is placed into the bin that will have the most space left over after packing it.
Next-Fit: each item is placed into the current bin if it fits; otherwise a new bin is opened. Earlier bins are never revisited.
Genetic algorithms: evolutionary techniques that mimic the process of natural selection and can be used to search for a good packing arrangement.
Simulated annealing: an algorithm that mimics the slow cooling of a hot metal and can be used to search for a good packing arrangement.
Tabu search: an algorithm that keeps track of previously visited solutions and avoids returning to them, guiding the search towards good packing arrangements.
Overall, the Din Packing Problem is a difficult optimisation problem that calls for specialised methods to find efficient, near-optimal solutions. Although there is no one-size-fits-all answer, the algorithms above have been demonstrated to work well in practice and can be customised to a variety of conditions.
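For instance, the First-Fit heuristic is only a few lines (the item sizes and capacity here are hypothetical):

```python
def first_fit(items, capacity):
    """Place each item into the first open bin with enough room,
    opening a new bin when none fits."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:  # no existing bin had room
            bins.append([size])
    return bins

packed = first_fit([4, 8, 1, 4, 2, 1], capacity=10)
# packed == [[4, 1, 4, 1], [8, 2]] -> two bins, neither over capacity
```

Sorting the items in decreasing size before running this (First-Fit Decreasing) usually reduces the number of bins further.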

20.1 Solution by the Artificial Bee Colony (ABC) Algorithms


The Artificial Bee Colony (ABC) algorithm is a population-based metaheuristic optimisation method inspired by research on the foraging behaviour of honey bees. It has been applied to many optimisation problems, including the Din Packing Problem.
The ABC population comprises three categories of bees: employed bees, onlooker bees, and scout bees. Employed bees create new solutions by shifting the objects in the bins; onlooker bees choose among the best solutions generated by the employed bees; and scout bees investigate novel solutions by placing items at random in new positions. To apply the ABC algorithm to the Din Packing Problem, the positions of the items are represented as a binary string in which each bit denotes whether an item is present or absent at a specific position. The ABC algorithm then searches for the best configuration of items in the bins, subject to the bin-capacity and item-placement constraints.
The ABC algorithm's key steps for the Din Packing Problem are:
Initialization: create an initial population of bees, each representing a candidate solution to the Din Packing Problem.
Employed bee phase: for each employed bee, modify the arrangement of items in the bin by exchanging two items or moving one item to a different position. Evaluate the fitness of each modified solution and, if it is an improvement, update the bee's position.
Onlooker bee phase: for each onlooker bee, choose a solution created by an employed bee according to a probability distribution determined by the solutions' fitness values. Evaluate the chosen solution and, if it is better, update the onlooker bee's position.
Scout bee phase: for each scout bee, produce a new random solution and evaluate its fitness. If the new solution is superior, update the scout bee's position.
Repeat the employed, onlooker, and scout bee phases until a termination requirement is fulfilled, updating the best solution discovered so far after each cycle.
Din Packing Problem solutions produced by the ABC algorithm have been demonstrated to be competitive with those of other state-of-the-art algorithms. However, performance depends on the choice of algorithm parameters, such as the population size, the number of iterations, and the probability distribution used in the onlooker bee phase. Careful tuning of these parameters is required to obtain good performance.

20.2 Solution by the Differential Evolution


Differential Evolution (DE) is a population-based metaheuristic optimisation method that has been applied successfully to the Din Packing Problem. DE works by continually improving a population of candidate solutions, combining existing solutions to create new ones.
To apply DE to the Din Packing Problem, the positions of the items are represented as a binary string in which each bit denotes whether an item is present or absent at a specific position. The DE algorithm then searches for the best configuration of items in the bins, subject to the bin-capacity and item-placement constraints. Its primary steps are:
Initialization: produce a population of candidate solutions, each encoding positions for the items in the bins.
Mutation: for each candidate solution, select three other solutions at random and compute a new solution from the scaled difference between the chosen solutions.
Crossover: combine the original solution with the mutant to generate a trial solution.
Selection: evaluate the trial solution's fitness and, if it is superior, substitute it for the original solution.
Repeat the mutation, crossover, and selection steps, updating the best solution found so far, until a termination requirement is satisfied.
The Din Packing Problem has been solved successfully with the DE method, and its results are competitive with those of other state-of-the-art algorithms. However, performance depends on the choice of algorithm parameters, such as the population size, the scaling factor employed in the mutation step, and the crossover rate. Careful tuning of these parameters is required to obtain good performance.

20.3 Solution by the Genetic Algorithms


The Din Packing Problem (DPP) requires packing a group of rectangular objects, known as dins, into a rectangular container while minimising the amount of wasted space. The DPP arises in various real-world situations, including the manufacturing sector and the packing of cargo for transport on trucks, ships, and aeroplanes.
Applying genetic algorithms (GAs) is one well-liked technique for resolving the DPP. GAs are stochastic search algorithms that use biological evolution as their source of inspiration. They function by mimicking the natural selection process, whereby individuals with more advantageous features are more likely to survive and reproduce. In the context of the DPP, a GA first creates an initial population of candidate solutions, each representing a potential packing configuration for the dins. Each individual's fitness is then assessed, usually according to how well it reduces wasted space. The GA then carries out a sequence of selection, crossover, and mutation processes to generate a new population of individuals who inherit their parents' desirable qualities.
Higher-fitness individuals are more likely to be chosen as parents for the following generation by the selection procedure. The crossover procedure then merges portions of each parent's genetic makeup to create a new individual. To assist in uncovering new areas of the search space, the mutation procedure induces random alterations to an individual's genetic makeup.
The GA iteratively repeats this process until a stopping requirement is satisfied, such as reaching a maximum number of generations or finding a sufficiently good solution. Usually, the best individual in the final population is returned as the answer. GAs may not always find the best solution to the DPP, but they frequently find a reasonable one in a fair length of time, especially for larger instances of the problem. They can also be simply modified to deal with DPP variants involving numerous containers or restrictions on the orientation of the dins.

20.4 Solution by the Particle Swarm Optimization


The Din Packing Problem is a classical combinatorial optimisation problem that requires fitting rectangles of varying sizes into a large rectangular container while minimising the amount of unused space. Particle Swarm Optimization (PSO), a well-known metaheuristic algorithm that imitates the movement of a swarm of particles through a search space, can be used to obtain good answers to it.
We must characterise the problem in a form that enables us to represent potential solutions and evaluate their fitness before we can use PSO for the Din Packing Problem. In this instance, each packing solution may be considered a particle, and the collection of candidate solutions can be seen as a swarm of particles moving through the search space. The ratio of the total area of the packed rectangles to the area of the container can be used to quantify the fitness of a packing solution, with a higher ratio indicating a better solution. The PSO algorithm iteratively updates the positions and velocities of the particles based on each particle's personal best position and the swarm's global best position. In the Din Packing Problem, a rectangle's position is represented by the coordinates of its bottom-left corner in the container, and its velocity is the amount of movement it makes during each iteration.
The PSO algorithm works as follows:
1. Create the swarm by randomly placing each rectangle in the container, and calculate the fitness of each packing solution.
2. Initialise the position and velocity of each particle, and set the cognitive and social parameters that govern how the best positions influence the search.
3. Repeat the following until a termination condition is satisfied:
a. Update the position and velocity of each particle using the cognitive and social parameters.
b. Verify the validity of the new position (i.e., the rectangle does not overlap with other rectangles or go outside the container). Reset the particle to a random position if it is invalid.
c. Calculate the new solution's fitness and update the personal best and global best positions as necessary.
4. Return the swarm's best packing solution.
The cognitive and social parameters determine how much each particle relies on its personal best position and on the global best position when updating its velocity and position. A high cognitive parameter encourages each particle to explore around its own best position, while a high social parameter drives the swarm to converge towards the best solution found so far.
PSO can be improved further by using auxiliary methods like local search, which can refine the packing solutions discovered by the swarm, or by using multi-objective optimisation methods to take into account various goals, like minimising unused space and maximising the stability of the packed rectangles.

20.5 Solution by the Firefly Algorithm


The best way to pack objects into containers is the goal of the Din Packing Problem (DPP), a combinatorial
optimization problem. A swarm intelligence-based optimization technique called the Firefly Algorithm (FA)
was developed in response to the flashing patterns of fireflies.
We must first define the problem's fitness function before we can apply the FA to the BPP. Here, a candidate solution represents a packing of items into containers, and the fitness function should evaluate the quality of that packing. The packing density, i.e., the percentage of the container capacity that the objects occupy, is one possible fitness function. The following is a succinct summary of the FA algorithm:
Create a starting population of fireflies, each representing a candidate solution.
Use the specified fitness function to determine each firefly's fitness.
Based on their proximity and fitness, evaluate each firefly's attractiveness to its neighbours.

Move each firefly towards its brighter neighbours using a random walk that considers the attractiveness and randomness parameters.
Update the population and assess the fitness of the relocated fireflies.
Repeat steps 3-5 until a stopping requirement is satisfied.
We can adapt the procedure above to apply the FA to the BPP as follows: In the initial stage, create a population of packing arrangements, each representing a potential solution.
Using the specified fitness function, determine the packing density of each arrangement.
Based on their proximity and packing density, evaluate each arrangement's attractiveness to its neighbours.
Use a random walk that considers the attractiveness and randomness parameters to move each arrangement towards its brighter neighbours. Moving an arrangement in this context entails swapping or rotating objects within containers.
Update the population and determine the packing density of the relocated arrangements.
Repeat steps 3-5 until a stopping requirement is satisfied. Any criterion supplied by the user, such as reaching a specific packing density or a maximum number of iterations, can serve as the stopping condition. The FA has been shown to resolve many optimisation problems successfully, including the BPP.
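The packing-density fitness function described above can be sketched as follows, assuming a simplified one-dimensional formulation in which each item has a scalar size and every bin has the same capacity:

```python
def packing_density(bins, capacity):
    """Fraction of the used bins' total capacity occupied by the items.
    `bins` is a list of bins, each a list of item sizes; empty bins are ignored."""
    used = [b for b in bins if b]
    if not used:
        return 0.0
    occupied = sum(sum(b) for b in used)
    return occupied / (len(used) * capacity)
```

A denser packing uses fewer bins for the same items, so maximising this ratio pushes the search towards solutions with fewer bins.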

20.6 Solution by the Cuckoo Search


A metaheuristic algorithm called the cuckoo search was developed to mimic the behaviour of cuckoo birds,
which lay their eggs in other birds’ nests. The algorithm starts with a base set of solutions (nests) and creates
new solutions (eggs) iteratively by fusing components of previous solutions. The algorithm chooses the top
solutions to construct the next generation after evaluating each one’s quality using a fitness function. When
the procedure continues until a stopping requirement is satisfied. The cuckoo search method can determine the best way to pack a set of rectangular boxes in the Bin Packing Problem. The algorithm begins by creating an initial set of packing arrangements at random. It then assesses their fitness using a cost function that accounts for the overall amount of unused space and the number of boxes that cannot be packed. The system then
generates novel packing arrangements and assesses their fitness using a variety of cuckoo search operators
(such as mutation, crossover, and selection). The most effective packing method is chosen to create the
following generation. The cuckoo search method is adaptable and straightforward to modify for various problem formulations. It has effectively tackled numerous optimisation problems in engineering design, logistics, and finance. As with all metaheuristic algorithms, it may take multiple runs with various parameter settings to converge to a satisfactory solution.

20.7 Solution by the Bat Algorithm


In the well-known Bin Packing Problem, the objective is to fit a collection of items into the fewest possible bins. The problem is NP-hard, which means that computing an exact solution may be prohibitively expensive. However, the Bat Algorithm can solve it effectively. Based on bat behaviour, the Bat Algorithm is a swarm intelligence method. The method iterates over a population of bats, each a potential answer to the optimisation problem. The bats travel randomly in pursuit of a better solution, sensing their surroundings, flying towards a target, and regulating their velocity to avoid becoming stuck in local optima.
Defining the constraints and goals of the task is the first step in using the Bat Algorithm to solve the Bin Packing Problem. The size of each item and the capacity of each bin serve as the constraints in this scenario. The goal is to use the fewest possible bins. The next step is to create a colony of bats, each serving as a potential solution. Each solution is a vector of binary values indicating whether an item belongs in a particular bin. The number of bins used measures the fitness of the bats as they randomly wander across the search space.
The bats use echolocation during the search to sense their surroundings, which entails assessing nearby solutions to see whether they are superior. If a better solution is discovered, the bat changes its course and accelerates in that direction. The bat also has a chance of engaging in random exploration, which allows it to pass over superior answers in favour of discovering new regions of the search space. The best solution found so far is updated

after each iteration. The algorithm terminates and returns the best result once the allotted number of iterations has been reached or a predetermined threshold has been hit.
In conclusion, the Bat Algorithm can resolve the Bin Packing Problem, which is challenging to address optimally using conventional techniques. To find the optimum answer in a reasonable time, the method iteratively modifies a population of candidate solutions.
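The fitness evaluation described above (the number of bins used, subject to capacity limits) might look like the following sketch. For clarity it encodes a candidate solution as a direct mapping from items to bin indices rather than the binary vector described in the text; this encoding is an illustrative assumption:

```python
def bins_used(assignment, sizes, capacity):
    """Fitness of a candidate solution: the number of distinct bins used,
    or None if any bin's load exceeds its capacity (infeasible solution).
    `assignment[i]` is the bin index that item i is placed in."""
    load = {}
    for item, b in enumerate(assignment):
        load[b] = load.get(b, 0) + sizes[item]
        if load[b] > capacity:
            return None  # capacity violated
    return len(load)
```

Fewer bins means a fitter bat, so the search minimises this count over feasible assignments.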

20.8 Solution by the Spider Monkey Optimization (SMO) Algorithm


The Bin Packing Problem is a well-known combinatorial optimisation problem that concerns fitting items of various sizes into containers of fixed capacity. The goal is to use the fewest possible containers while ensuring each item is wholly contained within a container.
The Spider Monkey Optimization (SMO) algorithm is a metaheuristic optimisation algorithm modelled on the spider monkey's foraging behaviour. The algorithm is renowned for its capacity to explore the search space and exploit promising solutions, tackling challenging optimisation problems quickly and effectively. We can apply the SMO algorithm to the Bin Packing Problem with the following steps:
Step 1: Initialization. Create a population of spider monkeys, each standing in for a possible solution to the Bin Packing Problem. Generate a set of starting solutions at random that place the items into containers, and determine each solution's fitness, i.e., the number of containers used.
Step 2: Movement. Introduce a collection of movement operators that mimic how spider monkeys move through the search space. Apply these operators to randomly chosen solutions to create new ones, then assess the new solutions' fitness and compare it with that of the previous solutions.
Step 3: Selection. Choose the best solutions from the starting population and the new solutions produced in Step 2. Repeat Steps 2 and 3 until a suitable solution is reached.
Step 4: Termination. Stop the algorithm when a preset stopping criterion is satisfied, such as a maximum number of iterations or reaching a specific fitness level.
By continuously exploring the search space, producing fresh solutions, and selecting the best ones, the SMO algorithm can successfully solve the Bin Packing Problem. By employing movement operators motivated by the behaviour of spider monkeys, the algorithm navigates the search space effectively and discovers near-optimal solutions.

20.9 Solution by the Simulated Annealing


The Bin Packing Problem is a combinatorial optimisation problem in which a collection of rectangular objects of various sizes must be packed into the fewest possible fixed-size rectangular bins. The goal is to pack everything using as few bins as possible.
Simulated Annealing is a metaheuristic approach that effectively resolves combinatorial optimisation problems like the Bin Packing Problem. The algorithm is a probabilistic optimisation method based on an analogy to the physical annealing process. The main idea is first to explore the solution space using a probabilistic search, then gradually narrow the search as the algorithm progresses. The steps involved in using Simulated Annealing to solve the Bin Packing Problem are as follows:
Step 1: Initialization. The algorithm is initialised by producing an initial solution. An initial solution for the Bin Packing Problem can be created by randomly assigning items to bins. The objective function value for this solution, i.e., the number of bins used, is then determined.
Step 2: Neighbour generation. This step slightly alters the existing solution to create a new one, for instance by swapping two objects between bins or moving one object from one bin to another. The objective function value for this new solution is then determined.
Step 3: Acceptance of the new solution. A probability function based on the objective function value and the algorithm's current temperature determines whether the new solution is accepted. If the new solution uses fewer bins than the existing one, it is accepted as the current solution. Based on the probability function, even a new solution inferior to the existing one might still be accepted. Thanks to this probabilistic strategy,

the algorithm can explore different regions of the solution space and escape local optima. The probability function follows the Metropolis criterion, which strikes a balance between exploiting the existing solution and exploring novel ones.
Step 4: Temperature update. After a predetermined number of iterations, the temperature parameter is decreased. This helps the algorithm concentrate on the current region of the solution space and move closer to the best solution.
Step 5: Termination. The algorithm stops when a predetermined stopping criterion is satisfied, such as a maximum number of iterations or a minimal change in the objective function value.
In conclusion, Simulated Annealing is a powerful approach for resolving the Bin Packing Problem and other combinatorial optimisation problems. By balancing the exploration of new alternatives with the exploitation of the existing solution, the algorithm avoids local optima and converges towards the best solution.
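The five steps above can be sketched end-to-end for a simplified one-dimensional instance. The move operator (reassign one random item), the penalty weight for overfull bins, the geometric cooling schedule, and all parameter values are illustrative assumptions:

```python
import math
import random

def sa_bin_packing(sizes, capacity, iters=2000, t0=1.0, alpha=0.995, seed=0):
    rng = random.Random(seed)
    n = len(sizes)
    # Step 1: initial solution, one item per bin (always feasible).
    assign = list(range(n))

    def cost(a):
        load = {}
        for i, b in enumerate(a):
            load[b] = load.get(b, 0) + sizes[i]
        # Overfull bins are heavily penalised so infeasible moves are rare.
        penalty = sum(max(0, l - capacity) for l in load.values())
        return len(load) + 10 * penalty

    best = list(assign)
    cur_cost = best_cost = cost(assign)
    t = t0
    for _ in range(iters):
        # Step 2: neighbour generation - move one random item to a random bin.
        cand = list(assign)
        cand[rng.randrange(n)] = rng.randrange(n)
        c = cost(cand)
        # Step 3: Metropolis acceptance rule.
        if c <= cur_cost or rng.random() < math.exp((cur_cost - c) / t):
            assign, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        # Step 4: cool the temperature.
        t *= alpha
    # Step 5: return the best solution found.
    return best, best_cost
```

On small instances a few thousand iterations usually suffice; the returned cost is never worse than the one-bin-per-item starting point.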

20.10 Conclusion
The Bin Packing Problem is a well-known and deeply researched problem in computer science. The challenge is to pack various-sized items efficiently into a limited space. Since there are numerous potential arrangements, and it can frequently be difficult to locate the ideal packing, the problem is complex.
Heuristics and approximation algorithms are among the most popular methods for solving the Bin Packing Problem; while they offer good answers, they cannot guarantee optimality. The first-fit, best-fit, and worst-fit algorithms are the heuristics most frequently used for packing problems. These algorithms are simple to implement and typically yield good outcomes. The development of optimisation algorithms, including metaheuristics such as genetic algorithms and simulated annealing, has advanced significantly in recent years. These algorithms, which are more sophisticated than conventional heuristics, have been demonstrated to produce superior outcomes.
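The first-fit heuristic mentioned above places each item into the first bin that still has enough room, opening a new bin only when none fits. A minimal sketch for the one-dimensional case:

```python
def first_fit(sizes, capacity):
    """First-fit heuristic for the Bin Packing Problem.
    Returns a list of bins, each a list of item sizes."""
    bins = []
    for s in sizes:
        for b in bins:
            if sum(b) + s <= capacity:  # first bin with enough room
                b.append(s)
                break
        else:
            bins.append([s])            # no bin fits: open a new one
    return bins
```

Sorting the items in decreasing size first (first-fit decreasing) typically tightens the result further.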
Overall, the Bin Packing Problem continues to be a challenging and exciting problem in computer science. Future studies should focus on creating more sophisticated optimisation algorithms and heuristics that can effectively tackle the problem for larger and more complicated datasets.

21 Assignment problem
The assignment problem is a well-known optimisation problem that involves determining the best allocation of jobs or resources to maximise overall gain or minimise overall loss. The problem arises in various situations, including matching medical residents to hospital programmes, assigning individuals to jobs, allocating robots to tasks, and scheduling airline crew members.
The goal of the assignment problem is to choose the best assignment possible to minimise or maximise
the overall cost or profit of the project. The issue can be expressed mathematically as a linear programming
problem, where the objective function and constraints indicate the cost or benefit of each project, and
decision variables represent the job or resource assignments to agents. The Hungarian method, the auction
algorithm, and the branch-and-bound approach are just a few of the many algorithms and techniques that
can be used to solve the assignment problem. The computational sophistication, memory consumption, and
scalability of these approaches vary.
Numerous real-world applications of the assignment problem exist, including supply chain management,
logistics, sports analytics, and online dating. The solutions to this fundamental issue in operations research
and decision science have significant repercussions for effectiveness, equity, and competitiveness.
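Exact methods such as the Hungarian algorithm solve the problem in polynomial time; for very small instances, the optimum can also be verified by brute force over all n! permutations. A minimal sketch (exponential in n, so illustrative only):

```python
from itertools import permutations

def solve_assignment(cost):
    """Exact minimum-cost assignment by exhaustive search.
    cost[i][j] is the cost of assigning agent i to task j.
    Returns (best_total, perm) where perm[i] is the task given to agent i."""
    n = len(cost)
    best_total, best_perm = None, None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best_total is None or total < best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm
```

Such a brute-force baseline is useful for checking the answers produced by the metaheuristics discussed in the following subsections.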

21.1 Solution by the Artificial Bee Colony (ABC) Algorithms


The assignment problem is a well-known combinatorial optimisation problem in which a group of agents must be distributed among various activities, subject to several restrictions, to minimise total cost or maximise total profit. The Artificial Bee Colony (ABC) algorithm is one of many optimisation techniques that can be used to solve it.

The ABC algorithm is a metaheuristic optimisation algorithm inspired by the foraging strategy of honeybees. The algorithm comprises a swarm of artificial bees that iteratively update their positions while assessing the fitness of potential solutions to find the best one. To apply the ABC algorithm to the assignment problem, we can define the decision variables as a binary vector X = (x1, x2, ..., xn), where xi = 1 if agent i is assigned to a task and xi = 0 otherwise. The cost of the assignment is given by the objective function f(X), the sum of the costs of the agents assigned to their respective tasks.
The first step of the ABC algorithm is to generate a population of potential solutions at random, represented by the positions of the bees in the search space. Each bee receives a randomly chosen solution, and the fitness of the solution is assessed by calculating the assignment's cost. The best solution found so far is recorded as the global best. The algorithm then repeats the following steps:
Employed bee phase: Each employed bee chooses a nearby solution and assesses its fitness. The bee moves to the new solution if it is superior to the existing one; otherwise, the bee stays in its current place.
Onlooker bee phase: Each onlooker bee chooses a solution with probability proportional to the solutions' fitness. The chosen solution is then assessed, and if the new one is superior, the bee changes its location.
Scout bee phase: If an employed bee or an onlooker bee fails to discover a better solution after a predetermined number of iterations, the bee turns into a scout and generates a new solution at random.
Global update: The previous best solution is updated if a new global best solution is discovered.
The algorithm stops when a stopping requirement is satisfied, such as a set number of iterations or a satisfactory global best solution.
The ABC algorithm can resolve the assignment problem and other combinatorial optimisation problems. How well it performs depends on tuning the algorithm's parameters, such as the population size and the number of iterations.
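The onlooker bees' probabilistic choice described above can be sketched as roulette-wheel selection. Taking each candidate's fitness as the inverse of its assignment cost is an assumption for illustration:

```python
import random

def onlooker_probabilities(costs):
    """Selection probability for each candidate, proportional to 1/cost."""
    fitness = [1.0 / c for c in costs]
    total = sum(fitness)
    return [f / total for f in fitness]

def pick_source(costs, rng=random):
    """Roulette-wheel selection as used in the onlooker bee phase."""
    probs = onlooker_probabilities(costs)
    r, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(costs) - 1  # guard against floating-point round-off
```

Cheaper assignments receive proportionally more onlooker visits, concentrating the search around promising solutions.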

21.2 Solution by the Differential Evolution


A popular metaheuristic optimisation technique used to handle many different optimisation issues is the
Differential Evolution (DE) algorithm. It has been used to solve the assignment problem, which entails
allocating a group of agents to various activities cost-effectively. The DE algorithm can be applied to reduce
the assignment’s overall cost.
In a high-dimensional search space, the DE algorithm maintains a population of candidate solutions
represented as vectors. Every vector is a potential response to the optimisation issue. The method begins
by randomly initialising a population of possible solutions. In the assignment problem, each vector in the population represents a potential assignment of agents to tasks. Each assignment's cost is computed, and the goal is to minimise that cost. Therefore, each vector's fitness equals the inverse of the assignment's total cost.
The following steps are carried out iteratively by the DE algorithm:
Mutation: A new candidate solution, a linear combination of three randomly chosen vectors from the population, is produced using a mutation operator, defined as
Vi = Xr1 + F * (Xr2 - Xr3),
where F is a scaling factor that regulates the amplification of the difference between Xr2 and Xr3, Vi is the new candidate solution, and Xr1, Xr2, and Xr3 are three distinct vectors chosen at random from the population.
Crossover: The new candidate solution produced by the mutation operator is combined with the current vector using a crossover operator, defined as
Ui = Xi + CR * (Vi - Xi),
where CR is the crossover rate that regulates the likelihood of the new candidate solution entering the population, Ui is the resulting trial vector, and Xi is the current vector.
Selection: The trial vector Ui replaces Xi in the next generation if it has better fitness; otherwise Xi is retained.
Termination: The method ends when a stopping requirement is satisfied, such as a set number of iterations
or a suitable fitness level for the best vector in the population.
The DE method is a valuable optimisation algorithm for addressing the assignment problem. Its performance depends on the parameters chosen for it, such as the population size, mutation rate, crossover rate, and stopping criterion.
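The mutation, crossover, and selection operators above can be sketched on continuous vectors. This minimal DE generation uses per-component crossover (a common DE variant) and a stand-in sphere objective; the parameter values are illustrative:

```python
import random

def de_step(pop, fitness, F=0.5, CR=0.9, rng=random):
    """One Differential Evolution generation: for each target vector, build a
    mutant V = Xr1 + F*(Xr2 - Xr3), cross it with the target, and keep
    whichever of the two has the better (lower) fitness.
    Requires a population of at least 4 vectors."""
    n = len(pop)
    new_pop = []
    for i, x in enumerate(pop):
        r1, r2, r3 = rng.sample([j for j in range(n) if j != i], 3)
        v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(len(x))]
        u = [v[d] if rng.random() < CR else x[d] for d in range(len(x))]
        new_pop.append(u if fitness(u) < fitness(x) else x)
    return new_pop

# Stand-in objective: the sphere function, minimised at the origin.
sphere = lambda x: sum(c * c for c in x)
```

Because selection is greedy, the best fitness in the population can never get worse from one generation to the next.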

21.3 Solution by the Genetic Algorithms
The Genetic Algorithm (GA) is a well-known metaheuristic optimisation technique inspired by natural selection. The GA has successfully resolved many optimisation problems, the assignment problem among them. By identifying an optimal or near-optimal assignment of agents to tasks, the GA can be used to minimise the overall assignment cost.
The GA solves the assignment problem by maintaining a population of candidate solutions, each represented as a binary string of length n x n, where n is the number of agents (and of tasks). The bit at position (i, j) indicates whether agent i is assigned to task j (1) or not (0). The GA begins by initialising a population of candidate solutions at random.
The GA proceeds iteratively through the following steps:
Selection: Using a fitness function that assesses each candidate solution based on its overall cost, a subset of the population is chosen for reproduction. Since the algorithm minimises the overall cost, the fitness function can be taken as the negative of the assignment's total cost.
Crossover: A crossover operator combines two selected parent solutions to produce new offspring; uniform, one-point, or two-point crossover operators can be used.
Mutation: A mutation operator injects new genetic information into the population by randomly flipping bits in the offspring.
Replacement: If an offspring's fitness is higher than that of an existing candidate solution, it enters the population, taking the place of the least fit candidate.
Termination: When a stopping requirement is satisfied, such as a maximum number of iterations or when
the fitness of the best candidate solution reaches an acceptable level, the algorithm is said to have terminated.
The GA has successfully solved the assignment problem and can discover near-optimal solutions with modest computational effort. However, the choice of the algorithm's parameters, such as the population size, crossover rate, mutation rate, and stopping criterion, affects how well it performs. Numerous modifications, such as elitism or adaptive operators, can enhance the performance of the basic GA.
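Two of the building blocks above can be sketched directly. Note that this sketch uses a permutation encoding (agent i receives task perm[i]) rather than the binary-string encoding described in the text, because a permutation keeps every offspring feasible; that encoding choice is an assumption for illustration:

```python
import random

def assignment_cost(perm, cost):
    """Total cost of a permutation-encoded assignment (agent i -> task perm[i]).
    A permutation guarantees one task per agent and one agent per task."""
    return sum(cost[i][perm[i]] for i in range(len(perm)))

def swap_mutation(perm, rng=random):
    """Exchange the tasks of two random agents; the result is still a permutation."""
    a, b = rng.sample(range(len(perm)), 2)
    child = list(perm)
    child[a], child[b] = child[b], child[a]
    return child
```

With permutations, crossover needs an order-preserving operator (e.g. ordered crossover) instead of plain bit mixing, which is the usual trade-off for this encoding.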

21.4 Solution by the Particle Swarm Optimization


A population-based metaheuristic optimisation technique, the Particle Swarm Optimization (PSO) algorithm
is motivated by the cooperative behaviour of fish schools and bird flocks. PSO has been used to resolve several
optimisation issues, including the assignment problem, which entails allocating a group of agents to a group
of tasks cost-effectively. The PSO method can be applied to the assignment problem to reduce the overall
cost. The PSO algorithm functions by maintaining a swarm of particles, each representing a potential answer to the assignment problem. Each particle is defined as a matrix of binary values, with entry (i, j) indicating whether agent i is assigned to task j (1) or not (0). The process begins by initialising a swarm of particles at random.
The PSO algorithm advances iteratively through the following steps:
Evaluation: Each particle's fitness is assessed from the total cost of its assignment, which is the objective function to be minimised; the fitness can be taken as the negative of the assignment's total cost.
Personal best update: Each particle's personal best (pbest) is updated based on its current position and fitness value. The current position becomes the new pbest if the particle's fitness there is higher than at its previous pbest.
Global best update: The swarm's global best (gbest) is updated using the fitness values of all particles; the gbest is the position with the lowest total cost.
Velocity and position update: The velocity and position of each particle are updated based on its current position, its pbest, and the gbest. The following equation updates each particle's velocity:

v_ij(t+1) = w * v_ij(t) + c1 * r1 * (pbest_ij - x_ij(t)) + c2 * r2 * (gbest_j - x_ij(t)),
where v_ij(t) is the velocity of the ith particle in the jth dimension at time t, w is the inertia weight that controls the balance between exploration and exploitation, c1 and c2 are the cognitive and social learning factors that control the influence of pbest and gbest on the particle's movement, r1 and r2 are random numbers between 0 and 1, pbest_ij is the personal best of the ith particle in the jth dimension, and gbest_j is the global best in the jth dimension.
Each particle's new position is calculated by adding its updated velocity to its present position.
Termination: When a stopping requirement is satisfied, such as a maximum number of iterations or when
the fitness of the best particle reaches a desirable level, the algorithm ends.
In conclusion, the PSO method is a valuable optimisation approach for resolving the assignment problem. The parameters chosen for it, such as the number of particles, the inertia weight, the cognitive and social learning factors, and the stopping criterion, affect how well it performs. The performance of the basic PSO can also be enhanced with further modifications, such as different velocity update rules or adaptive parameters.
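A single velocity-and-position update in the standard form can be sketched as follows. The random factors r1 and r2 are fixed here so the step is reproducible, and the parameter values are illustrative (in practice r1 and r2 are redrawn uniformly from [0, 1] each step):

```python
def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, r1=0.5, r2=0.5):
    """One standard PSO update for a single particle in continuous space."""
    new_v = [w * v[d]
             + c1 * r1 * (pbest[d] - x[d])
             + c2 * r2 * (gbest[d] - x[d])
             for d in range(len(x))]
    # New position: current position plus updated velocity.
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

For the binary assignment encoding described above, a discretisation step (e.g. a sigmoid-based binary PSO) would be applied on top of this continuous update.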

21.5 Solution by the Firefly Algorithm


The assignment problem is a well-known optimization problem in mathematics and computer science. The challenge is to discover an assignment that minimizes the overall cost, given a collection of n jobs and n workers and a matrix of costs C, where Cij represents the cost of assigning task i to worker j. The problem can be approached as a linear programming problem, and several strategies, including the Hungarian, auction, and simplex algorithms, can be used to resolve it. In this section, we use the Firefly algorithm, an optimization technique inspired by the flashing behaviour of fireflies, to solve the problem.
The Firefly algorithm is a swarm-based optimization method that imitates the behaviour of fireflies. Each firefly in this method symbolizes a potential answer to the problem, and how brightly each firefly flashes depends on its fitness. The process iteratively moves fireflies towards brighter ones until a maximum or minimum is discovered. There are four critical steps in the algorithm:
Initialization: A population of fireflies is generated at random.
Attraction: Each firefly is drawn to brighter ones according to their distance and brightness.
Movement: Each firefly moves, with a random component and step size, towards the ones that have attracted it.
Update: Each firefly's brightness is updated based on its fitness value.
Solving the assignment problem with the Firefly algorithm
We must specify the fitness, movement, and updating functions to solve the assignment problem using
the Firefly algorithm. The fitness function should assess the assignment’s quality and be inversely correlated
to its overall cost. According to the movement function, every firefly should go towards the ones that
have attracted it. The updating function should update each firefly’s brightness based on its fitness value.
Initialization: Randomly produce a population of fireflies.
Attraction: Calculate each firefly's brightness from its fitness value; then, based on each firefly's distance from and brightness relative to the others, determine how attractive it is to them.
Movement: Compute each firefly's move towards the ones that have attracted it, using a random step size and direction, then update its position.
Update: Recompute each firefly's fitness value and adjust its brightness accordingly.
Termination: Repeat steps 2-4 until the allotted number of iterations has been reached or a satisfactory solution has been discovered. The fitness function is defined as follows:
f(A) = - sum_{i=1..n} sum_{j=1..n} Cij * Aij,
where Cij is the cost of assigning task i to worker j, and A is a binary matrix representing the assignment: Aij equals 1 if task i is assigned to worker j, and 0 otherwise.
The movement function can be defined as follows:
r = ||xj - xi||,
beta(r) = beta0 * exp(-gamma * r^2),
xi(t+1) = xi(t) + beta(r) * (xj(t) - xi(t)) + alpha * (rand() - 1/2),
where xi and xj are the positions of fireflies i and j, r is the distance between them, beta(r) is the attractiveness of firefly j as seen by firefly i, beta0 is the attractiveness at r = 0, gamma is the light-absorption coefficient, alpha is the random step size, and rand() is a random number generator producing values in [0, 1].
The updating function can be defined as follows:
alpha(t+1) = alpha0 * exp(-(t/T)^2),
where alpha0 is the initial step size, T is the maximum number of iterations, and t is the current iteration.
Conclusion
The assignment problem can be resolved using the Firefly method, a robust optimization algorithm applicable to many optimization problems. Even for complex problems, the process is simple and produces reliable answers. It is crucial to set the algorithm's parameters correctly to ensure convergence and good performance.
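The attractiveness and movement rules can be sketched deterministically (the random-walk term is omitted for clarity, and the beta0 and gamma values are illustrative):

```python
import math

def attractiveness(xi, xj, beta0=1.0, gamma=0.1):
    """beta = beta0 * exp(-gamma * r^2), with r the Euclidean distance."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return beta0 * math.exp(-gamma * r2)

def move_towards(xi, xj, beta0=1.0, gamma=0.1):
    """Move firefly i towards brighter firefly j (random-walk term omitted)."""
    beta = attractiveness(xi, xj, beta0, gamma)
    return [a + beta * (b - a) for a, b in zip(xi, xj)]
```

Because 0 < beta < 1 for distinct fireflies, each deterministic step strictly shrinks the distance to the brighter firefly; the random term then restores exploration.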

21.6 Solution by the Cuckoo Search


Finding the best worker assignments to reduce the overall cost or time taken is the goal of the assignment
problem. The assignment problem can be resolved using the Cuckoo Search algorithm, an optimisation
technique that takes inspiration from nature.
The cuckoo bird species, which lays its eggs in the nests of other bird species, is the basis for the Cuckoo
Search algorithm. The method employs a population of potential solutions, or nests, which are assessed
according to the fitness or value of the objective function. The algorithm continues by randomly creating
new candidate solutions and comparing them to the existing population. The best answers are retained, and
the worst ones are either abandoned or replaced with new ones.
The Cuckoo Search algorithm can determine the best work assignments in the assignment problem to
minimise total cost or processing time. The steps listed below can be used to implement the algorithm:
Initialization: Initialise a population of potential answers or worker assignments at random.
Evaluation: Determine the fitness or objective-function value of each solution in the population, which corresponds to the overall cost or time of the assignment.
Selection: Based on their fitness levels, choose the top solutions or assignments and keep them as elite
nests.
Replace the poorest solutions or nests with new ones produced by a random walk with a step size
corresponding to the fitness value of the solution.
For a specific number of iterations or until a stopping criterion is satisfied, repeat steps 2 through 4.
Termination: Provide the top choice made throughout the optimisation process.
We can acquire an ideal assignment of tasks to workers by using the Cuckoo Search algorithm to solve
the assignment problem. This optimal assignment minimises the overall cost or time required and has
applications in job scheduling, resource allocation, and project management.

21.7 Solution by the Bat Algorithm


The Assignment Problem is a well-known optimisation problem in which a set of N agents must be matched with a set of N tasks, ensuring that each task is matched with exactly one agent and each agent with exactly one task. The goal is to minimise the overall cost of the assignment (or maximise the overall profit).
A metaheuristic algorithm called the ”Bat Algorithm” was developed in response to bats’ use of echolo-
cation. It was created by Xin-She Yang in 2010, and since then, various optimisation issues, including the
Assignment Problem, have been effectively addressed with its use. The Assignment Problem can be resolved
by using the Bat Algorithm in the following manner:
Initialization: Create an initial population of N bats, each representing a potential assignment of agents to tasks. Each bat's position is represented by a vector of length N, where element j holds the index of the agent assigned to task j (0 denotes no assignment).
Evaluation: Calculate the total cost (or profit) of the associated assignment to determine each bat's fitness; this can be achieved by summing the costs (or gains) of the given pairings.

Ranking: Order the bats by fitness and record the best solution found so far. Update each bat's velocity and position using the following formulae:
vi(t+1) = vi(t) + A * (Xbest - xi(t)) + r * (Xjk - xi(t)),
xi(t+1) = xi(t) + vi(t+1),
where A and r are parameters that control the amplitude and randomness of the search, respectively, vi(t) and xi(t) are the velocity and position of bat i at time t, Xbest is the position of the best solution discovered so far, and Xjk is the position of a randomly chosen bat.
Local search: A local search strategy can refine each bat's position. This can be achieved by repeatedly swapping agent-task assignments and assessing the resulting fitness.
Output: Return the best solution found, i.e., the assignment with the lowest cost or highest profit.
Compared with other metaheuristic algorithms, the Bat Algorithm offers some advantages, such as quick convergence and the capacity to handle complex problems. It also has some disadvantages, such as the need to fine-tune several parameters and the risk of becoming stuck in local optima.
The Bat Algorithm can be implemented in any programming language, making it a helpful tool for solving the Assignment Problem and other optimisation problems.
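One update of the velocity and position formulae above can be sketched as follows; the parameter values and the deterministic treatment of the random pull are illustrative assumptions:

```python
def bat_step(x, v, x_best, x_rand, A=0.9, r=0.1):
    """One velocity/position update in the form described above: a pull
    towards the best solution found so far (weight A) plus a smaller pull
    towards a randomly chosen neighbour (weight r)."""
    new_v = [v[d] + A * (x_best[d] - x[d]) + r * (x_rand[d] - x[d])
             for d in range(len(x))]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

For the integer assignment encoding, the continuous result would then be rounded or repaired back to a valid agent-per-task vector.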

21.8 Solution by the Spider Monkey Optimization (SMO) Algorithm


The classic Assignment Problem's goal is to find the ideal pairing between N agents and N tasks, where each agent can be assigned to only one task and each task can be given to only one agent. The objective is to maximise total profit or minimise total cost.
A metaheuristic optimisation algorithm called the Spider Monkey Optimisation (SMO) algorithm was
developed after studying how spider monkeys forage for food. Exploration, exploitation, and convergence
are the three primary phases in which SMO imitates the actions of spider monkeys. Here is a step-by-step
answer to the Assignment Problem using the SMO algorithm:
Step 1: Initialise the spider monkey population at random. Every spider monkey represents a potential
solution to the Assignment Problem; each spider monkey’s location encodes how tasks are assigned to
agents.
Step 2: Assess each spider monkey’s fitness by determining the overall cost or profit of the assignment
represented by its position.
Step 3: Sort the spider monkeys based on their fitness level, then choose the best ones for the next phase.
Step 4: Carry out the exploration stage by randomly putting the chosen spider monkeys in new locations.
This increases population diversity and prevents the population from becoming trapped in local optimums.
Step 5: Reorder the spider monkeys after assessing their fitness.
Step 6: Carry out the exploitation stage, in which the best spider monkeys are selected and new ones are
produced through crossover and mutation. This is done to raise the quality of the solutions.
Step 7: Reorder the spider monkeys after assessing their fitness.
Step 8: Complete the convergence step by deciding which spider monkey is the best to use as the answer
to the assignment problem.
Step 9: Check the stopping criterion. Stop the algorithm if the maximum number of iterations has been
reached or the solution has converged; if not, return to Step 4. The SMO method can successfully solve the
Assignment Problem by following these stages. It is crucial to remember that the algorithm’s performance
is influenced by the selection of parameters, including population size, mutation rate, and crossover rate, as
well as the fitness function employed to assess the solutions. To identify the ideal setup for a given problem,
it is advisable to experiment with various parameter values and fitness functions.
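The exploration, exploitation, and convergence stages described above can be sketched as follows. The
permutation encoding, the crossover scheme, and all parameter values are illustrative assumptions rather
than a reference SMO implementation.

```python
import random

def smo_assignment(cost, pop=30, iters=150, elite=10, mut=0.2, seed=7):
    """Illustrative SMO-style search for the N x N assignment problem.

    Encoding: monkey[task] = agent.  Exploration scatters the weaker monkeys
    to random permutations; exploitation breeds children from the elite via a
    simple crossover plus a swap mutation; convergence keeps the best monkey.
    """
    rng = random.Random(seed)
    n = len(cost)
    fit = lambda p: sum(cost[t][p[t]] for t in range(n))

    def crossover(a, b):
        # keep a random prefix of parent a, fill the rest in parent b's order
        k = rng.randrange(1, n)
        head = a[:k]
        return head + [g for g in b if g not in head]

    monkeys = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(iters):
        monkeys.sort(key=fit)                       # rank by fitness
        best = monkeys[:elite]                      # select the best monkeys
        # exploration: scatter the non-elite to fresh random positions
        explore = [rng.sample(range(n), n) for _ in range(pop - elite)]
        # exploitation: produce children from the elite
        children = []
        for _ in range(pop - elite):
            child = crossover(*rng.sample(best, 2))
            if rng.random() < mut:                  # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        # next generation mixes elite, explorers, and children
        monkeys = sorted(best + explore + children, key=fit)[:pop]
    return monkeys[0], fit(monkeys[0])
```

The prefix-preserving crossover keeps every child a valid permutation, so no repair step is needed after
mutation.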

21.9 Solution by the Ant Colony Algorithms


Finding the ideal pairing between N agents and N tasks, where each agent can be assigned to only one task, and
each task can be given to only one agent, is the goal of the classic optimisation problem, the Assignment
Problem. The purpose of the assignment is to maximise total profit or minimise total expenditure.
A metaheuristic optimisation algorithm called the Ant Colony Optimisation (ACO) algorithm was devel-
oped after studying how ants find food. The two fundamental phases of ACO’s imitation of ant behaviour

are exploration and exploitation. Here is a step-by-step answer to the Assignment Problem using the ACO
algorithm:
Step 1: Set an initial pheromone value tau0 on each task-agent pairing. The pheromone trail indicates to
the ants how desirable each pairing is.
Step 2: Create an ant population. Every ant represents a potential solution to the Assignment Problem;
the ants move around the problem’s graph by following the pheromone trails.
Step 3: Randomly choose an initial task for each ant, then select each subsequent assignment based on
the pheromone trail and the desirability of the pairing, until the ant has built a complete tour.
Step 4: Determine each ant’s fitness by adding up the cost or profit of the assignment represented by its
tour.
Step 5: Update the pheromone trail based on the fitness of the ants that have visited each pairing. The
pheromone trail is updated by the following equation:
tau(i,j) = (1 - rho) * tau(i,j) + sum(delta_tau(i,j))
Where tau(i,j) denotes the pheromone trail between task i and agent j, rho is the rate at which
pheromones evaporate, and delta_tau(i,j) is the quantity of pheromone deposited on the trail by ants
that have assigned task i to agent j.
Step 6: Check the stopping criterion. Stop the algorithm if the maximum number of iterations has been
reached or the solution has converged; if not, return to Step 3.
Step 7: Return the best solution to the Assignment Problem found by the ants.
The ACO algorithm can successfully solve the Assignment Problem by following these stages. It is
crucial to remember that the algorithm’s performance depends on parameters such as the pheromone
evaporation rate, the initial pheromone concentration, and the way the ants’ probability distribution is
calculated. It is therefore advisable to try several parameter values to see which works best for a
particular problem.
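The steps above, including the pheromone update equation, can be sketched in Python. The heuristic
desirability term (1 / (1 + cost)), the deposit amount Q / cost, and all parameter values are illustrative
assumptions layered on the generic ACO scheme described in the text.

```python
import random

def aco_assignment(cost, n_ants=20, iters=100, rho=0.1, Q=1.0, seed=3):
    """ACO sketch for the N x N assignment problem.

    tau[t][a] is the pheromone on assigning task t to agent a; each ant
    builds a full assignment task by task, choosing among the still-free
    agents with probability proportional to pheromone times desirability.
    """
    rng = random.Random(seed)
    n = len(cost)
    tau = [[1.0] * n for _ in range(n)]            # initial pheromone tau0 = 1
    best, best_cost = None, float("inf")

    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            free = set(range(n))
            tour = []
            for t in range(n):
                agents = list(free)
                # desirability = pheromone * heuristic (cheaper is better)
                w = [tau[t][a] / (1.0 + cost[t][a]) for a in agents]
                a = rng.choices(agents, weights=w, k=1)[0]
                tour.append(a)
                free.remove(a)
            c = sum(cost[t][tour[t]] for t in range(n))
            tours.append((tour, c))
            if c < best_cost:
                best, best_cost = tour[:], c
        # pheromone update: tau(i,j) = (1 - rho) * tau(i,j) + sum(delta_tau)
        for t in range(n):
            for a in range(n):
                tau[t][a] *= (1.0 - rho)
        for tour, c in tours:
            for t in range(n):
                tau[t][tour[t]] += Q / c           # cheaper tours deposit more
    return best, best_cost
```

Building the tour over only the still-free agents guarantees each ant produces a feasible one-to-one
assignment without a repair step.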

21.10 Solution by the Simulated Annealing


Finding the ideal pairing between N agents and N tasks, where each agent can be assigned to only one task, and
each task can be given to only one agent, is the goal of the classic optimisation problem, the Assignment
Problem. The purpose of the assignment is to maximise total profit or minimise total expenditure.
The metallurgical annealing process inspired the metaheuristic optimisation technique known as Simu-
lated Annealing (SA). SA simulates the procedure of heating and cooling a material to achieve the ideal
atomic arrangement. SA is built on gradually lowering the chance of accepting worse solutions as the search
moves forward while initially taking worse options. Here is a step-by-step answer to the Assignment Problem
using the SA algorithm:
Step 1: Randomly generate an initial solution to the Assignment Problem.
Step 2: Evaluate the fitness of the initial solution by computing the assignment’s total cost or profit.
Step 3: Set the initial temperature, T0.
Step 4: Repeat the following until the stopping criterion is satisfied:
Generate a new candidate solution by randomly swapping the tasks of two agents.
Evaluate the fitness of the new solution.
If the new solution is better than the current one, accept it as the current solution.
If the new solution is worse than the current one, calculate the acceptance probability using the formula:
P = exp(-delta_f / T)
Where T is the current temperature and delta_f is the fitness difference between the new and current
solutions. Accept the new solution with probability P; if accepted, set it as the current solution.
Lower the temperature according to a cooling schedule, such as T = alpha * T, where alpha is the
cooling rate.
Step 5: Return the best solution to the Assignment Problem discovered during the search. The SA
algorithm can successfully solve the Assignment Problem by following these stages. It is crucial to
remember that the selection of parameters such as the initial temperature, cooling rate, and acceptance
probability function influences the algorithm’s performance. It is therefore advisable to try several
parameter values to see which works best for a particular problem.
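The SA procedure above, including the Metropolis acceptance rule P = exp(-delta_f / T) and the geometric
cooling schedule, can be sketched as follows. The starting temperature, cooling rate, and iteration count
are illustrative assumptions.

```python
import math
import random

def sa_assignment(cost, T0=10.0, alpha=0.95, iters=2000, seed=5):
    """Simulated Annealing sketch for the N x N assignment problem.

    A neighbour is produced by swapping the agents of two random tasks;
    worse moves are accepted with probability exp(-delta / T), and the
    temperature follows the geometric schedule T = alpha * T.
    """
    rng = random.Random(seed)
    n = len(cost)
    fit = lambda p: sum(cost[t][p[t]] for t in range(n))

    cur = rng.sample(range(n), n)                  # random initial assignment
    cur_f = fit(cur)
    best, best_f = cur[:], cur_f
    T = T0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)             # swap two tasks' agents
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = fit(cand) - cur_f
        # Metropolis rule: always accept improvements, sometimes accept worse
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            cur, cur_f = cand, fit(cand)
        if cur_f < best_f:
            best, best_f = cur[:], cur_f
        T = max(alpha * T, 1e-6)                   # geometric cooling schedule
    return best, best_f
```

Tracking the best-so-far solution separately from the current one ensures the occasional acceptance of
worse moves never loses the best assignment found during the search.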

21.11 Conclusion
In conclusion, the Assignment Problem is a well-known optimisation problem that entails distributing N agents
among N tasks with the aim of either maximising total profit or minimising total cost. It can be solved using
various methods and has practical applications in many disciplines, including operations research, computer
science, and engineering.
The Hungarian algorithm, branch and bound, genetic algorithms, particle swarm optimisation, ant colony
optimisation, and simulated annealing are just a few of the exact and heuristic techniques suggested for
addressing the Assignment Problem. Each method has advantages and disadvantages and is appropriate
for particular problems and environments. Overall, the size, complexity, and requirements of the particular
problem determine the choice of algorithm and parameter settings. The Assignment Problem is a significant
problem that has received much attention and has real-world implications in numerous fields. Its effective
solution can result in substantial advantages, including enhanced resource allocation, scheduling, and logistics.

Bibliography

[1] Maher GM Abdolrasol, SM Suhail Hussain, Taha Selim Ustun, Mahidur R Sarker, Mahammad A
Hannan, Ramizi Mohamed, Jamal Abd Ali, Saad Mekhilef, and Abdalrhman Milad. Artificial neural
networks based optimization techniques: A review. Electronics, 10(21):2689, 2021.

[2] Marco Dorigo, Vittorio Maniezzo, and Alberto Colorni. Ant system: optimization by a colony of
cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),
26(1):29–41, 1996.
[3] Russell Eberhart and James Kennedy. Particle swarm optimization. In Proceedings of the IEEE Inter-
national Conference on Neural Networks, volume 4, pages 1942–1948. Citeseer, 1995.

[4] Iztok Fister, Iztok Fister Jr, Xin-She Yang, and Janez Brest. A comprehensive review of firefly algo-
rithms. Swarm and Evolutionary Computation, 13:34–46, 2013.
[5] Scott Kirkpatrick, C Daniel Gelatt Jr, and Mario P Vecchi. Optimization by simulated annealing.
Science, 220(4598):671–680, 1983.

[6] Shuijia Li, Wenyin Gong, Ling Wang, Xuesong Yan, and Chengyu Hu. Optimal power flow by means
of improved adaptive differential evolution. Energy, 198:117314, 2020.
[7] Seyedali Mirjalili. Genetic algorithm. Evolutionary Algorithms and Neural Networks: Theory and
Applications, pages 43–55, 2019.

[8] Kenneth V Price. Differential evolution. Handbook of Optimization: From Classical to Modern Approach,
pages 187–214, 2013.
[9] Salah Mortada Shahen. Enhancement on the modified artificial bee colony algorithm to optimize
the vehicle routing problem with time windows.
[10] M Thirunavukkarasu, Yashwant Sawle, and Himadri Lala. A comprehensive review on optimization
of hybrid renewable energy systems using various optimization techniques. Renewable and Sustainable
Energy Reviews, 176:113192, 2023.
[11] Xin-She Yang and Suash Deb. Cuckoo search via Lévy flights. In 2009 World Congress on Nature &
Biologically Inspired Computing (NaBIC), pages 210–214. IEEE, 2009.

[12] Xin-She Yang and Xingshi He. Bat algorithm: literature review and applications. International Journal
of Bio-inspired computation, 5(3):141–149, 2013.

