
Iraqi Journal for Computer Science and Mathematics
Volume 5, Issue 2, Article 5 (2024)

A Review of Optimization Techniques: Applications and Comparative Analysis

Ahmed Hasan Alridha ([email protected]), Fouad H. Abd Alsharify, Zahir Al-Khafaji


Recommended Citation
Alridha, Ahmed Hasan; Abd Alsharify, Fouad H.; and Al-Khafaji, Zahir (2024) "A Review of Optimization Techniques: Applications and Comparative Analysis," Iraqi Journal for Computer Science and Mathematics: Vol. 5, Iss. 2, Article 5.
DOI: https://doi.org/10.52866/ijcsm.2024.05.02.011
Available at: https://ijcsm.researchcommons.org/ijcsm/vol5/iss2/5

Iraqi Journal for Computer Science and Mathematics
Journal Homepage: http://journal.esj.edu.iq/index.php/IJCM
e-ISSN: 2788-7421  p-ISSN: 2958-0544

A Review of Optimization Techniques: Applications and Comparative Analysis

Ahmed Hasan Alridha 1, Fouad H. Abd Alsharify 2, Zahir Al-Khafaji 3*

1 Department of Mathematics, General Directorate of Education, Ministry of Education, Babylon, IRAQ
2 Department of Physics, College of Science, University of Babylon, Babylon, IRAQ
3 Department of Mathematics, College of Education, University of Babylon, Babylon, IRAQ

*Corresponding Author: Ahmed Hasan Alridha

DOI: https://doi.org/10.52866/ijcsm.2024.05.02.011

Received January 2024; Accepted March 2024; Available online May 2024

ABSTRACT: Optimization algorithms are designed to search the solution space of a problem and identify optimal solutions. They aim to reach desired goals with high accuracy and low error, and to improve performance in fields including machine learning, operations research, physics, chemistry, and engineering. As technology continues to advance, optimization algorithms are increasingly needed to address complex real-world challenges and to drive innovation across disciplines. Substantial gains in the efficiency of optimization algorithms have been achieved by diversifying the sources of information feeding these algorithms according to the type of optimization problem, on systematic scientific foundations. The objectives of this paper are to discuss the most important optimization algorithms, classify the scientific fields involved in their application and the optimization problems that arise in them, and provide a brief comparative overview of these algorithms.
Keywords: Optimization algorithms, application fields, comparison approach, algorithm classification.

1. INTRODUCTION
Optimization algorithms have long been a suitable approach to solving difficult problems in real-world fields and systems, from engineering and the economic sciences to health care. Nowadays, optimization algorithms are widespread thanks to technological development. To keep pace with this development, there has been an urgent need to develop and improve a wide range of optimization algorithms, classified by their speed and robustness, so that they achieve the required optimization goals with high efficiency. These algorithms have played a pioneering role in supporting constructive decisions and finding optimal solutions to diverse problems. This paper takes an extensive journey through time to review the most important optimization algorithms from their origins to the present day. By delving into the historical development of these algorithms, insight can be gained into their design methods, basic principles, and notable applications. Among the highlights of this journey are the early developments in optimization, for example the gradient descent approach, which laid the foundation for many optimization techniques in machine learning and parameter optimization [1-3]. Further research traces the origins of genetic algorithms, inspired by the foundations of natural evolution, which have had a distinct impact on resource allocation and engineering design problems [4-8]. The journey also covers the emergence of simulated annealing, originally inspired by the physical process of annealing in metallurgy, and its role in combinatorial optimization [9-15]. Particle swarm optimization (PSO), which mimics the collective behavior of flocking and schooling organisms, addresses challenges in control engineering and parameter optimization [16-21], while the ant colony algorithm, inspired by the foraging behavior of ants, solves routing and scheduling problems [22-24]; these nature-inspired methods have been a motive for progress in various fields [25-27]. In addition, constraint programming, which is concerned with solving combinatorial problems under complex constraints, enables effective scheduling and resource allocation in various applications [28-29]. The emergence of interior point methods brought efficiency and accuracy to linear and nonlinear programming problems, leading to great strides in optimization techniques [30-31]. Moreover, the tabu search approach to non-convex objectives is discussed; it uses memory-based strategies to navigate complex search spaces and excels in combinatorial optimization problems [32], [33]. Convex optimization is also examined, with powerful applications in portfolio optimization, signal processing, and beyond [34-36].
By delving into the historical development of optimization algorithms, the reader can trace the evolution and transformation of optimization techniques over time, and a map can be drawn showing each algorithm's contribution to solving real-world problems and shaping developments in various fields. Our goal is to provide a comprehensive perspective on the most important optimization algorithms, their basic principles, and their impact on various fields. By understanding their strengths, limitations, and historical context, the current state of optimization algorithms can be better appreciated, and upcoming trends in this dynamically developing area can be anticipated. The following flowchart briefly captures the main types of optimization algorithms and serves as a navigation tool for understanding their multifaceted applications (see Fig. 1).

FIGURE 1. Flowchart depicting target types of optimization algorithms.

2. EXPLORING OPTIMIZATION ALGORITHMS FOR DIVERSE CHALLENGES


Optimization algorithms serve as guides for navigating the vast landscape of possible solutions to a problem. These algorithms have revolutionized our ability to tackle complex problems across many domains. This section examines some of the most important and notable optimization techniques, highlights their basic principles, and surveys the applications that have propelled them to the forefront of modern computational methods.

2.1 GRADIENT DESCENT


The gradient descent algorithm is one of the fundamental optimization algorithms, dating back to the mid-19th century. The method was first introduced by the French mathematician Augustin-Louis Cauchy, and it gained considerable attention for solving optimization problems during the 20th century [37]. With the introduction of machine learning and neural networks, gradient descent became a basic algorithm for optimizing model parameters. Its main goal is to reduce a cost or loss function, which it achieves by modifying the model parameters iteratively: the technique computes the gradient of the cost function with respect to the parameters, takes the steepest descent direction, and updates the parameters step by step accordingly. Up to the modern era, with the advent of deep learning, gradient descent has remained of distinct importance due to its ability to optimize complex models efficiently.
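As a minimal sketch of this update rule (the quadratic example function, step size, and iteration count below are illustrative choices, not taken from the paper), each iteration moves the parameters a small step against the gradient:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, iters=100):
    """Repeatedly step against the gradient of the cost function."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - lr * grad(x)  # steepest-descent update
    return x

# Illustrative example: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=[0.0])
print(x_min)  # approaches 3.0
```

In practice the step size (learning rate) governs the trade-off between convergence speed and stability, which is why adaptive variants are common in deep learning.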

2.2 GENETIC ALGORITHMS


Genetic algorithms are inspired by natural selection and genetics: a series of steps simulates an evolutionary process to obtain optimal solutions. Genetic algorithm procedures involve generating numerous candidate solutions, evaluating them against a predetermined objective function, and selecting the best individuals for reproduction. Population fitness is gradually improved over generations by producing new offspring through crossover and mutation. John Holland and his colleagues developed the genetic algorithm approach in the 1960s and 1970s [38]. Inspired by natural selection and evolutionary biology, genetic algorithms crystallized as a means of solving complex optimization problems, owing to their ability to search large spaces under complex constraints. Holland's book, "Adaptation in Natural and Artificial Systems," published in 1975, laid the foundation for this field.
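A toy sketch of the loop described above (the OneMax fitness function, tournament selection, and the population and mutation rates here are illustrative assumptions, not from the paper):

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=50, p_mut=0.02):
    """Toy GA: tournament selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = children                               # next generation
    return max(pop, key=fitness)

# OneMax: fitness is the number of 1-bits, so the optimum is all ones.
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```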


2.3 SIMULATED ANNEALING


Simulated annealing is a procedure inspired by the physical annealing process and comprises several steps for obtaining a solution to an optimization problem. The approach is particularly effective and valuable for discrete or combinatorial optimization problems. The algorithm starts with an initial guess of the solution and then explores the solution space iteratively; suboptimal moves are accepted according to a probability distribution, with a greater probability of accepting worse solutions early on, mirroring high-temperature annealing. As the temperature decreases over time, the probability of accepting worse solutions decreases, and the search converges toward the global optimum. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi introduced simulated annealing in 1983 [39]. The algorithm was inspired by the physical process of annealing in metallurgy, in which a material is gradually heated and cooled to reduce its defects and improve its crystal structure. From this principle, simulated annealing applies an analogous concept to optimization problems, progressively exploring the solution space with controlled acceptance of suboptimal moves, which yields near-optimal solutions for combinatorial optimization problems. Accepting worse solutions early in the search, when the temperature is highest, provides the opportunity to explore the search space; the algorithm becomes more selective and tends to converge toward better solutions as the temperature decreases. This balance between exploration and exploitation enables simulated annealing to escape local optima and find near-optimal solutions. The algorithm has been applied successfully to a wide range of optimization problems, including scheduling problems, the traveling salesman problem, combinatorial optimization, and circuit design. It is particularly useful when the objective function is non-convex or noisy and traditional optimization methods have great difficulty finding good solutions.
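A compact sketch of the acceptance rule and a geometric cooling schedule (the test function, neighborhood move, and cooling rate are illustrative assumptions):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, iters=5000):
    """Accept worse moves with probability exp(-delta / T); T decays geometrically."""
    x, t = x0, t0
    best = x
    for _ in range(iters):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y                    # accept improving or, probabilistically, worse moves
        if cost(x) < cost(best):
            best = x
        t *= cooling                 # cooling schedule
    return best

# Illustrative example: a 1-D multimodal function with Gaussian perturbation moves.
f = lambda x: x * x + 10 * math.sin(x)
print(simulated_annealing(f, lambda x: x + random.gauss(0, 0.5), x0=5.0))
```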

2.4 PARTICLE SWARM OPTIMIZATION (PSO)


PSO is a population-based metaheuristic that draws its inspiration from the social behavior of fish schools and bird flocks. In PSO, a collection of particles, each with a position and a velocity in the solution space, represents candidate solutions. Based on their collective experience and the current global best solution, the particles collaborate and communicate to move in the direction of superior solutions. PSO has applications in fields such as data clustering, image processing, and neural network training, and it is especially successful on continuous optimization problems. James Kennedy and Russell Eberhart proposed PSO in 1995 [40]. PSO simulates the movement of particles in a multidimensional search space, and the algorithm's success on continuous optimization problems helped it become widely used. The approach is based on a set of particles, each representing a candidate solution, that move around the search space. The particles interact with one another and adjust their positions in response to both their own experience and that of the swarm, exchanging knowledge about successful solutions discovered within the search space. A particle's velocity is updated according to both its personal best position and the best position found by the swarm; this cooperative behavior lets the particles explore the search space and converge over iterations to superior solutions. PSO is renowned for solving continuous and discrete optimization problems easily and effectively. It has been used for a variety of purposes, such as feature selection, data clustering, neural network training, and function optimization. PSO, however, is sensitive to parameter settings and can suffer from premature convergence to inferior solutions. As a result, careful parameter tuning and PSO variants, such as hybrid or adaptive techniques, are frequently applied to enhance performance on specific problem domains.
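A minimal sketch of the velocity and position updates (the inertia weight, acceleration coefficients, and sphere test function are common illustrative values, not prescribed by the paper):

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Each particle tracks its personal best; the swarm shares a global best."""
    pos = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

# Illustrative example: minimize the sphere function f(x) = sum(x_i^2).
print(pso(lambda x: sum(v * v for v in x)))
```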

2.5 ANT COLONY OPTIMIZATION (ACO)


Ant colony optimization (ACO) mimics the foraging behavior of ants to locate good paths through networks or graphs. The algorithm is applied to routing, scheduling, and logistics optimization problems. It mimics the pheromone trails left behind by ants, which draw other ants toward useful passageways: as more ants traverse a region of the problem space, the pheromone trail is reinforced, and the algorithm moves in a promising direction. ACO has demonstrated its efficacy in solving complex problems with many constraints and in dynamic situations. Marco Dorigo came up with the concept of ACO in the early 1990s [41]. Deriving the concept from the foraging activity of ants, he laid out an approach to ant colony optimization that remains an effective way to solve optimization problems involving graph and network traversal.
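A small sketch for the traveling salesman problem, where tour construction is biased by pheromone (tau) and inverse distance, with evaporation and deposit each iteration (the distance matrix and all parameters are illustrative assumptions):

```python
import random

def aco_tsp(dist, n_ants=20, iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Ants build tours probabilistically; pheromone reinforces short tours."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone matrix
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in choices]
                tour.append(random.choices(choices, weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau = [[t * (1 - rho) for t in row] for row in tau]   # evaporation
        for tour, length in tours:                            # deposit on used edges
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length
    return best_tour, best_len

# Illustrative 4-city symmetric distance matrix.
d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(aco_tsp(d))
```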

2.6 EVOLUTIONARY ALGORITHMS


Evolutionary algorithms (EAs) is a term that refers to a family of optimization algorithms whose designs come from the principles of natural selection and genetics. Central to these algorithms, which include genetic programming (GP) and evolution strategies (ES), are the iterative generation and assessment of feasible solutions. Their basic process simulates biological evolution by incorporating selection, crossover, and mutation to push a population of individuals toward optimal solutions. EAs are very useful for solving complicated, multidimensional optimization problems. John Holland, Ingo Rechenberg, and Hans-Paul Schwefel worked in the area of EAs in the 1960s and 1970s [42]; their contributions to genetic algorithms, evolution strategies, and evolutionary programming established EAs as a distinct family of algorithms.
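As one concrete member of this family, here is a (1+1) evolution strategy sketch whose step-size rule loosely follows the classical 1/5 success heuristic (the test function and adaptation constants are illustrative assumptions):

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=2000):
    """(1+1)-ES: mutate the parent with Gaussian noise and keep the child
    if it is no worse; the step size adapts after successes and failures."""
    x, fx = x0[:], f(x0)
    for _ in range(iters):
        child = [xi + random.gauss(0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:
            x, fx = child, fc
            sigma *= 1.22            # expand the step size after a success
        else:
            sigma *= 0.95            # contract it after a failure (1/5-rule flavor)
    return x, fx

# Illustrative example: minimize the 3-D sphere function.
print(one_plus_one_es(lambda v: sum(t * t for t in v), [2.0, -1.0, 3.0]))
```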

2.7 CONSTRAINT PROGRAMMING


Constraint programming (CP) is a pioneering problem-solving method in which problems are optimized while ensuring that a set of specified constraints is met. A problem is formulated by defining a collection of variables, their respective domains, and a set of constraints. Constraint propagation techniques are then employed to steadily reduce the search space through iterative strategies. Using intelligent search techniques, CP solvers efficiently navigate the solution space while maintaining adherence to the imposed constraints. CP is widely used in scheduling, resource allocation, and planning problems. Its beginnings can be traced back to the 1960s, when algorithms were first developed to address constraint satisfaction problems [43]. Over time, scholars have refined and extended the approach, establishing CP as a distinct and recognized field.
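A minimal backtracking sketch in the CP spirit (it checks constraints on partial assignments but omits full constraint propagation; the graph-coloring instance is an illustrative assumption):

```python
def backtrack(domains, ok, assignment=None):
    """Pick an unassigned variable, try each domain value, and backtrack
    as soon as the partial assignment violates a constraint."""
    assignment = dict(assignment or {})
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        return assignment
    var = unassigned[0]
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if ok(candidate):                 # constraint check on the partial assignment
            result = backtrack(domains, ok, candidate)
            if result is not None:
                return result
    return None

# Illustrative example: color a triangle graph so adjacent nodes differ.
edges = [("A", "B"), ("B", "C"), ("A", "C")]
domains = {v: ["red", "green", "blue"] for v in "ABC"}
ok = lambda a: all(a[u] != a[v] for u, v in edges if u in a and v in a)
print(backtrack(domains, ok))
```

Real CP solvers additionally propagate constraints to prune domains before branching, which is what makes them practical on industrial scheduling problems.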

2.8 INTERIOR POINT METHODS


Interior point methods (IPMs) are extraordinarily powerful optimization techniques for solving both linear and nonlinear programming problems. In contrast to conventional approaches that explore the boundary of the feasible region, IPMs move through the interior of the feasible region. Using barrier functions, IPMs can transform constrained optimization problems into unconstrained ones, allowing iterative approximation toward the optimal solution. IPMs have demonstrated their efficacy on optimization problems of considerable size, characterized by large numbers of variables and constraints. Interior point approaches for optimization emerged from the mid-1980s through the early 1990s [44]. Prominent scholars such as Narendra Karmarkar and Yurii Nesterov achieved noteworthy advancements in interior point techniques, bringing about a transformative impact on linear and nonlinear programming.
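A log-barrier sketch of the idea, using gradient descent with a feasibility-preserving backtracking line search as the inner solver (production IPMs use Newton steps; the one-dimensional example problem and all parameters are illustrative assumptions):

```python
import numpy as np

def barrier_method(f, grad_f, gs, grad_gs, x0, t=1.0, mu=10.0, outer=6, inner=50):
    """Log-barrier sketch for: minimize f(x) subject to g_i(x) <= 0.
    Each outer round minimizes t*f(x) - sum(log(-g_i(x))) and then sharpens t."""
    x = np.asarray(x0, dtype=float)

    def phi(z, t):
        if any(g(z) >= 0 for g in gs):      # outside the strictly feasible region
            return np.inf
        return t * f(z) - sum(np.log(-g(z)) for g in gs)

    def grad_phi(z, t):
        return t * grad_f(z) + sum(-dg(z) / g(z) for g, dg in zip(gs, grad_gs))

    for _ in range(outer):
        for _ in range(inner):              # inner solve: descent with backtracking
            d = -grad_phi(x, t)
            step = 1.0
            while phi(x + step * d, t) > phi(x, t) and step > 1e-12:
                step *= 0.5                 # shrink until feasible and decreasing
            x = x + step * d
        t *= mu                             # sharpen the barrier
    return x

# Illustrative example: minimize (x - 2)^2 subject to x <= 1.
x_opt = barrier_method(
    f=lambda x: (x[0] - 2) ** 2, grad_f=lambda x: np.array([2 * (x[0] - 2)]),
    gs=[lambda x: x[0] - 1], grad_gs=[lambda x: np.array([1.0])], x0=[0.0])
print(x_opt)  # approaches the constrained optimum x = 1
```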

2.9 TABU SEARCH


Tabu search (TS) is a heuristic algorithm used especially to optimize model parameters in combinatorial optimization problems. More broadly, it is a metaheuristic: a general strategy used to guide and control an underlying heuristic search. TS works by incorporating memory structures into local search strategies, because plain local search has many limitations, and TS is designed to address those common failure modes. It was first proposed by Glover and further developed by Hansen [45]. Nowadays, TS is a well-established search procedure, and it has been applied effectively and successfully to a wide range of optimization problems [46]. TS encourages the exploration of unvisited areas of the solution space by imposing restrictions on moves that would revisit recently encountered solutions. This mechanism helps the algorithm avoid being trapped in local optima and enhances its ability to discover optimal solutions. TS has shown successful results on combinatorial optimization problems. Fred W. Glover introduced TS during the late 1980s; the algorithm expands on local search approaches by integrating a memory mechanism to overcome the limitations of local optima. Glover's influential publication "Tabu Search: Part I" (1989) gave a thorough examination of the algorithm and its various uses.
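A bit-flip sketch of the core loop: the tabu tenure forbids recently flipped bits, with an aspiration override when a move beats the best cost found so far (the cost function, with a parity penalty that creates local optima, is an illustrative assumption):

```python
import random

def tabu_search(cost, x0, tenure=5, iters=200):
    """Take the best admissible bit-flip even if it worsens the cost; recently
    flipped bits are tabu, unless a move beats the best cost (aspiration)."""
    x = x0[:]
    best, best_cost = x[:], cost(x)
    tabu = {}                                # bit index -> iteration when tabu expires
    for it in range(iters):
        candidates = []
        for i in range(len(x)):
            y = x[:]
            y[i] = 1 - y[i]                  # flip one bit
            cy = cost(y)
            if tabu.get(i, -1) <= it or cy < best_cost:   # aspiration criterion
                candidates.append((cy, i, y))
        if not candidates:
            continue
        cy, i, y = min(candidates)           # best admissible neighbour
        x = y
        tabu[i] = it + tenure                # forbid re-flipping bit i for a while
        if cy < best_cost:
            best, best_cost = y[:], cy
    return best, best_cost

# Illustrative cost: a linear term plus a parity penalty.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(12)]
cost = lambda b: sum(wi * bi for wi, bi in zip(w, b)) + 3 * (sum(b) % 2)
print(tabu_search(cost, [0] * 12))
```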

2.10 CONVEX OPTIMIZATION


Convex optimization pertains to optimization problems in which both the objective function and the constraints exhibit convexity. Convexity guarantees that any local minimum discovered is, in fact, the global minimum. Convex optimization algorithms, such as interior-point methods and sequential quadratic programming, exploit the inherent properties of convex functions to address optimization problems effectively. Convex optimization has been extensively employed in machine learning and signal processing. The field has a significant historical background that can be traced back to the early decades of the 20th century. Prominent scholars, including R. L. Graves, D. G. Luenberger, and Stephen Boyd, have made noteworthy advancements in convex optimization algorithms, in both theoretical foundations and practical implementations [47-49].
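A small projected-gradient sketch for a convex portfolio-style problem, minimizing x^T C x over the probability simplex (the covariance matrix, step size, and the standard sort-based simplex projection are illustrative assumptions); because the problem is convex, the point the iteration settles on is the global optimum:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {x : x >= 0, sum(x) = 1} via the standard
    sort-based algorithm."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def min_variance_portfolio(cov, iters=500, lr=0.1):
    """Projected gradient for: minimize x^T C x over the probability simplex."""
    n = len(cov)
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        x = project_simplex(x - lr * 2.0 * cov @ x)   # gradient of x^T C x is 2 C x
    return x

# Illustrative 3-asset covariance matrix.
C = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.08, 0.03],
              [0.01, 0.03, 0.09]])
print(min_variance_portfolio(C))
```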


3. APPLICATIONS OF OPTIMIZATION ALGORITHMS


This extensive section aims to explore the exceptional adaptability of optimization methods in various sectors.
Tables 1 and 2 serve as evidence of the significant influence exerted by these algorithms. These tables provide valuable
insight into the capabilities of optimization algorithms to bypass practical limitations and deliver solutions and
workarounds across a wide range of contexts.

3.1 NAVIGATING OPTIMIZATION PROBLEM TYPES AND CORRESPONDING ALGORITHMS


The complex world of optimization presents difficulties that call for specialized techniques and for classifying solutions by problem type, which saves time and effort for stakeholders; Table 1 provides a comprehensive overview for this purpose. It serves as a navigational tool, emphasizing the relationships between algorithms and various problem scenarios across the vast terrain of optimization problem-solving.

Table 1. Optimization algorithms and their applications to corresponding optimization problems in the real world.

| Optimization Algorithm | Optimization Problems |
|---|---|
| Genetic Algorithms | Resource allocation; scheduling problems; engineering design problems |
| Simulated Annealing | Combinatorial optimization; energy system optimization |
| Particle Swarm Optimization | Parameter optimization in machine learning; control engineering problems; data clustering |
| Ant Colony Optimization | Routing and scheduling problems; graph problems and optimization |
| Evolutionary Algorithms | Multi-objective optimization problems; complex constraint optimization problems |
| Constraint Programming | Scheduling and resource allocation problems; combinatorial problems with complex constraints |
| Interior Point Methods | Linear programming problems; nonlinear programming problems |
| Tabu Search | Combinatorial optimization problems; non-convex and discontinuous objective functions |
| Convex Optimization | Portfolio optimization; signal processing applications |
| Gradient Descent | Machine learning model training |

Fig. 2 shows the mutually beneficial relationship between problem landscapes and optimization strategies. Examining the specifics, each row reveals how well an algorithm handles a particular problem class. The deliberate coupling of algorithms and their domains is strikingly highlighted by this organized arrangement, and each entry demonstrates the adaptability these algorithms bring to both practical and theoretical concerns.


FIGURE 2. A comprehensive mapping for optimization algorithms and corresponding problem types.

4. APPLICATIONS OF OPTIMIZATION ALGORITHMS IN THE APPLIED SCIENCES

Table 2 focuses on concrete applications in several sciences, with particular emphasis on the crucial roles that optimization algorithms play in chemistry, physics, and engineering. The main situations in which these algorithms have a significant impact are examined here, highlighting their importance and contributions in these dynamic areas.

Table 2. Optimization algorithm applications in the applied sciences.

| Optimization Algorithm | Optimization Problem | Chemistry | Physics | Engineering |
|---|---|---|---|---|
| Genetic Algorithms | Resource allocation | Optimal allocation of reagents, materials, and resources | Optimal resource allocation in experiments | Allocation of resources in production systems |
| | Scheduling problems | Lab experiment scheduling, reaction scheduling, process scheduling | Experimental setup scheduling | Task scheduling, project scheduling |
| | Engineering design problems | Molecular structure optimization, catalyst design | Material design, structure optimization | Optimal design of structures, systems, circuits |
| Simulated Annealing | Combinatorial optimization | Molecular conformation search, combinatorial library design | Spin glass models, Ising model | Optimal configuration of networks, circuits |
| | Energy system optimization | Optimal reaction conditions, energy landscape exploration | Ground state calculations | Energy-efficient systems, building optimization |
| Particle Swarm Optimization | Parameter optimization in machine learning | Molecular property prediction, molecular docking | Parameter estimation, fitting models | Optimization of control systems, system identification |
| | Control engineering problems | Optimal control of chemical processes, automation systems | Optimal control of physical systems | PID controller tuning, trajectory optimization |
| | Data clustering | Chemical data clustering, structure-activity relationship analysis | Cluster identification, phase transitions | Image segmentation, pattern recognition |
| Ant Colony Optimization | Routing and scheduling problems | Supply chain logistics, delivery route optimization | Network routing, traffic flow optimization | Production scheduling, vehicle routing |
| | Graph problems and optimization | Molecular graph analysis, molecular structure generation | Network analysis, optimization on graphs | Circuit design, graph-based optimization |
| Evolutionary Algorithms | Multi-objective optimization problems | Drug discovery with multiple objectives, molecular diversity | Optimization of physical systems with conflicting objectives | Optimization of complex engineering systems |
| | Complex constraint optimization problems | Molecular structure optimization with constraints | Optimization with physical and mathematical constraints | Optimization of engineering systems with complex constraints |
| Constraint Programming | Scheduling and resource allocation problems | Lab experiment scheduling, production scheduling | Experimental setup scheduling, equipment scheduling | Resource allocation, project scheduling |
| | Combinatorial problems with complex constraints | Design of molecules with specific properties, combinatorial library design | Constraint satisfaction problems in physics simulations | Design optimization with complex constraints, configuration problems |
| Interior Point Methods | Linear programming problems | Optimal resource allocation, mixture design | Optimization of experimental conditions | Production planning, supply chain optimization |
| | Nonlinear programming problems | Reaction optimization, parameter estimation | Quantum mechanical calculations, model fitting | Optimal control systems, process optimization |
| Tabu Search | Combinatorial optimization | Molecular conformation search, combinatorial library design | Spin glass models, optimization problems | Network design, optimization of processes |
| | Non-convex and discontinuous objective functions | Optimization of reaction conditions with complex objective functions | Optimization of physical systems with non-convex objectives | Optimization of engineering systems with non-convex objectives |
| Convex Optimization | Portfolio optimization | Optimal allocation of investments, risk management | Portfolio optimization, risk analysis | Optimal allocation of resources, investment planning |
| | Signal processing applications | Spectral analysis, signal denoising | Signal processing, image reconstruction | Image processing, audio signal enhancement |

5. A COMPARISON OF THE OPTIMIZATION ALGORITHMS


In light of the above, it was necessary to establish a comparative analysis of the characteristics exhibited by the algorithms under investigation. Table 3 summarizes this comparison of the fundamental characteristics of the selected optimization methodologies.

Table 3. An overview highlighting key attributes of optimization algorithms through comparison.

| Algorithm | Approach | Problem Types | Search Space | Memory Usage | Key Advantage |
|---|---|---|---|---|---|
| Gradient Descent | Iterative, gradient-based | Continuous, differentiable | Large | Low | Efficient for optimizing machine learning models and neural networks |
| Genetic Algorithms | Evolutionary, population-based | Combinatorial | Large, discrete | High | Effective for problems with large search spaces and complex constraints |
| Particle Swarm Optimization | Population-based, social behavior | Continuous, combinatorial | Large | Low | Efficient for continuous optimization problems; inspired by natural collective behavior |
| Ant Colony Optimization | Stochastic, trail-based, collective | Combinatorial | Large, discrete | Low | Suitable for problems involving routing, scheduling, and logistics; effective in dynamic and changing environments |
| Evolutionary Algorithms | Evolutionary, population-based | Combinatorial | Large | High | Effective for complex optimization problems with multidimensional search spaces |
| Constraint Programming | Constraint-based, intelligent search | Combinatorial | Large, discrete | Low | Suitable for problems with a set of constraints that need to be satisfied |
| Interior Point Methods | Convex optimization, interior traversal | Linear, nonlinear | Large | Low | Efficient for solving linear and nonlinear programming problems |
| Tabu Search | Metaheuristic, adaptive memory | Combinatorial | Large, discrete | Moderate | Escapes local optima, explores new regions in the solution space |
| Convex Optimization | Convex function-based, specific methods | Convex | Large | Low | Efficient for solving optimization problems with convex objectives and constraints |

Finally, Table 4 provides an effective comparison of the characteristics of optimization algorithms. The table
addresses key aspects, such as execution speed, computing cost, and compatibility with the real environment, in
addition to a comprehensive analysis that enables the user to understand the prominent differences between these
algorithms.

Table 4. Comprehensive comparative analysis of the performance and integration of optimization algorithms.

| Algorithm | Speed of Implementation | Computing Cost | Compatibility with the Real Environment | Analysis |
|---|---|---|---|---|
| Genetic Algorithms | Variable | Medium | Strong | Flexibility and strength in solving complex problems |
| Simulated Annealing | Medium to slow | Low to medium | Good | Average performance; efficient under change; balance between exploration and exploitation |
| Particle Swarm Optimization | Medium to slow | Low to medium | Good | Fast, versatile adaptation; suited to large-scale problems |
| Ant Colony Optimization | Medium to slow | Low to medium | Good | Excellent guidance and environmental adaptation; effective in improving distribution |
| Evolutionary Algorithms | Variable | Medium | Strong | Strength in optimization and design; attention to complexity |
| Constraint Programming | Variable | Medium | Good | Excellence in problem-solving; strength in resource planning; attention to memory and complexity |
| Interior Point Methods | Fast to medium | Low to medium | Strong | Fast; effective on specific problems; powerful in software optimization |
| Tabu Search | Medium to slow | Low to medium | Good | Average results; good in optimization and scheduling; attention to memory |

6. DISCUSSION AND CONCLUSION


Optimization algorithms are important to many scientific disciplines, including chemistry, physics, and engineering. Each algorithm has strengths and distinct features that make it better suited to specific applications. The gradient descent algorithm plays a crucial role in the training of machine learning models and the optimization of parameters, while genetic algorithms demonstrate exceptional performance on resource allocation and engineering design challenges. The simulated annealing algorithm has proven effective in combinatorial optimization and energy system optimization, while the particle swarm algorithm has proven effective in parameter optimization and control engineering. The study showed that ACO is useful for routing and scheduling difficulties, while EAs are particularly effective for multi-objective optimization and complex constraint problems. The comparison showed that constraint programming is characterized by high efficiency in solving scheduling and combinatorial problems with complex constraints, while interior point approaches excel in both linear and nonlinear programming. Furthermore, the tabu search algorithm is suitable for non-convex objectives, which enhances its role in combinatorial optimization. In portfolio optimization and signal processing applications, convex optimization has proven effective. Finally, the nature of the problem dictates the choice of optimization algorithm: a problem's inherent features and the specific requirements of the application are more compatible with certain types of algorithms than with others.

Funding
None
ACKNOWLEDGEMENT
The authors would like to thank the reviewers and journal staff for their valuable efforts in publishing this paper.
CONFLICTS OF INTEREST
The authors declare no conflict of interest.


REFERENCES

[1] T. Chen and M. C. Messner, "Training material models using gradient descent algorithms," Int. J. Plast., vol. 165, p. 103605, 2023. https://doi.org/10.1016/j.ijplas.2023.103605
[2] S. H. Haji and A. M. Abdulazeez, "Comparison of optimization techniques based on gradient descent algorithm: A review," PalArch's Journal of Archaeology of Egypt/Egyptology, vol. 18, no. 4, pp. 2715-2743, 2021. https://archives.palarch.nl/index.php/jae/article/view/6705
[3] D. Seeli and K. K. Thanammal, "Quantitative analysis of gradient descent algorithm using scaling methods for improving the prediction process based on artificial neural network," Multimedia Tools and Applications, pp. 1-15, 2023. https://doi.org/10.1007/s11042-023-16136-9
[4] A. S. Al-Jilawi and F. H. Abd Alsharify, "Review of mathematical modelling techniques with applications in biosciences," Iraqi Journal for Computer Science and Mathematics, vol. 3, no. 1, pp. 135-144, 2022. https://doi.org/10.52866/ijcsm.2022.01.01.015
[5] J. Alcaraz and C. Maroto, "A robust genetic algorithm for resource allocation in project scheduling," Annals of Operations Research, vol. 102, pp. 83-109, 2001. https://doi.org/10.1023/A:1010949931021
[6] C. Zhang and T. Yang, "Optimal maintenance planning and resource allocation for wind farms based on non-dominated sorting genetic algorithm-II," Renewable Energy, vol. 164, pp. 1540-1549, 2021. https://doi.org/10.1016/j.renene.2020.10.125
[7] A. Alridha, A. M. Salman, and A. S. Al-Jilawi, "The applications of NP-hardness optimizations problem," J. Phys. Conf. Ser., vol. 1818, no. 1, p. 012179, 2021. https://doi.org/10.1088/1742-6596/1818/1/012179
[8] S. Mirjalili, "Genetic algorithm," in Evolutionary Algorithms and Neural Networks: Theory and Applications, pp. 43-55, 2019. https://dl.acm.org/doi/abs/10.5555/3271472
[9] K. L. Du and M. N. S. Swamy, "Simulated annealing," in Search and Optimization by Metaheuristics: Techniques and Algorithms Inspired by Nature, pp. 29-36, 2016. https://link.springer.com/book/10.1007/978-3-319-41192-7
[10] B. Chopard and M. Tomassini, "Simulated annealing," in An Introduction to Metaheuristics for Optimization, pp. 59-79, 2018. https://doi.org/10.1007/978-3-319-93073-2_4
[11] R. Fadhil and Z. Hassan, "Improvement of network reliability by hybridization of the penalty technique based on metaheuristic algorithms," Iraqi Journal for Computer Science and Mathematics, vol. 5, no. 1, pp. 99-111, Jan. 2024. https://doi.org/10.52866/ijcsm.2024.05.01.007
[12] M. Lin et al., "Lithium-ion batteries health prognosis via differential thermal capacity with simulated annealing and support vector regression," Energy, vol. 250, p. 123829, 2022. https://doi.org/10.1016/j.energy.2022.123829
[13] K. Brezinski, M. Guevarra, and K. Ferens, "Population based equilibrium in hybrid SA/PSO for combinatorial optimization," International Journal of Software Science and Computational Intelligence (IJSSCI), vol. 12, no. 2, pp. 74-86, 2020.
[14] N. Sekkal and F. Belkaid, "A multi-objective simulated annealing to solve an identical parallel machine scheduling problem with deterioration effect and resources consumption constraints," Journal of Combinatorial Optimization, vol. 40, no. 3, pp. 660-696, 2020. https://doi.org/10.1007/s10878-020-00607-y
[15] B. Rabbouch, F. Saâdaoui, and R. Mraihi, "Empirical-type simulated annealing for solving the capacitated vehicle routing problem," Journal of Experimental & Theoretical Artificial Intelligence, vol. 32, no. 3, pp. 437-452, 2020. https://doi.org/10.1080/0952813X.2019.1652356
[16] H. Suwoyo et al., "The role of block particle swarm optimization to enhance the PID-WFR algorithm," International Journal of Engineering Continuity, vol. 1, no. 1, pp. 9-23, 2022. https://doi.org/10.58291/ijec.v1i1.37
[17] S. A. Abed, M. Ghassan, S. Qaes, M. S. Fiadh, and Z. A. Mohammed, "Structural reliability and optimization using differential geometric approaches," Iraqi Journal for Computer Science and Mathematics, vol. 5, no. 1, pp. 168-174, Jan. 2024. https://doi.org/10.52866/ijcsm.2024.05.01.012
[18] D. Wang, D. Tan, and L. Liu, "Particle swarm optimization algorithm: an overview," Soft Computing, vol. 22, pp. 387-408, 2018. https://doi.org/10.1007/s00500-016-2474-6
[19] J. Nayak et al., "25 years of particle swarm optimization: Flourishing voyage of two decades," Archives of Computational Methods in Engineering, vol. 30, no. 3, pp. 1663-1725, 2023. https://doi.org/10.1007/s11831-022-09849-x
[20] F. H. A. Alsharify, G. Abdullah, A. S. A. A. L. Razzak, and Z. Al-Khafaji, "Solving bi-objective reliability optimization problem of mixed system by firefly algorithm," in 2023 6th International Conference on Engineering Technology and its Applications (IICETA), 2023. doi: 10.1109/IICETA57613.2023.10351435
[21] A. Banks, J. Vincent, and C. Anyakoha, "A review of particle swarm optimization. Part II: hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications," Natural Computing, vol. 7, pp. 109-124, 2008. https://doi.org/10.1007/s11047-007-9050-z
[22] O. Engin and A. Güçlü, "A new hybrid ant colony optimization algorithm for solving the no-wait flow shop scheduling problems," Applied Soft Computing, vol. 72, pp. 166-176, 2018. https://doi.org/10.1016/j.asoc.2018.08.002
[23] Y. M. Huang and J. C. Lin, "A new bee colony optimization algorithm with idle-time-based filtering scheme for open shop-scheduling problems," Expert Systems with Applications, vol. 38, no. 5, pp. 5438-5447, 2011. https://doi.org/10.1016/j.eswa.2010.10.010
[24] G. F. Deng and W. T. Lin, "Ant colony optimization-based algorithm for airline crew scheduling problem," Expert Systems with Applications, vol. 38, no. 5, pp. 5787-5793, 2011. https://doi.org/10.1016/j.eswa.2010.10.053
[25] K. Deb, "Multi-objective optimisation using evolutionary algorithms: an introduction," in Multi-objective Evolutionary Optimisation for Product Design and Manufacturing, pp. 3-34, London: Springer London, 2011. https://doi.org/10.1007/978-0-85729-652-8_1
[26] R. Azzouz, S. Bechikh, and L. Ben Said, "Dynamic multi-objective optimization using evolutionary algorithms: a survey," in Recent Advances in Evolutionary Multi-objective Optimization, pp. 31-70, 2017. https://doi.org/10.1007/978-3-319-42978-6_2
[27] J. L. L. García et al., "COARSE-EMOA: An indicator-based evolutionary algorithm for solving equality constrained multi-objective optimization problems," Swarm and Evolutionary Computation, vol. 67, p. 100983, 2021. https://doi.org/10.1016/j.swevo.2021.100983
[28] L. R. de Abreu et al., "A new variable neighbourhood search with a constraint programming search strategy for the open shop scheduling problem with operation repetitions," Engineering Optimization, vol. 54, no. 9, pp. 1563-1582, 2022. https://doi.org/10.1080/0305215X.2021.1957101
[29] G. Da Col and E. C. Teppan, "Industrial-size job shop scheduling with constraint programming," Operations Research Perspectives, vol. 9, p. 100249, 2022. https://doi.org/10.1016/j.orp.2022.100249
[30] J. A. Momoh, M. E. El-Hawary, and R. Adapa, "A review of selected optimal power flow literature to 1993. II. Newton, linear programming and interior point methods," IEEE Transactions on Power Systems, vol. 14, no. 1, pp. 105-111, 1999. doi: 10.1109/59.744495
[31] M. Wright, "The interior-point revolution in optimization: history, recent developments, and lasting consequences," Bulletin of the American Mathematical Society, vol. 42, no. 1, pp. 39-56, 2005. https://doi.org/10.1090/S0273-0979-04-01040-7
[32] M. Hafsa, "New prediction and planning for digital learning based on optimization methods," Doctoral dissertation, Université de Lille, 2023.
[33] M. B. de Moraes and G. P. Coelho, "A diversity preservation method for expensive multi-objective combinatorial optimization problems using Novel-First Tabu Search and MOEA/D," Expert Systems with Applications, vol. 202, p. 117251, 2022. https://doi.org/10.1016/j.eswa.2022.117251
[34] L. Wu, Y. Feng, and D. P. Palomar, "General sparse risk parity portfolio design via successive convex optimization," Signal Processing, vol. 170, p. 107433, 2020. https://doi.org/10.1016/j.sigpro.2019.107433
[35] H. Guo et al., "Online convex optimization with hard constraints: Towards the best of two worlds and beyond," Advances in Neural Information Processing Systems, vol. 35, pp. 36426-36439, 2022.
[36] Z. Algamal, F. AL-Taie, and O. Qasim, "Kernel semi-parametric model improvement based on quasi-oppositional learning pelican optimization algorithm," Iraqi Journal for Computer Science and Mathematics. https://doi.org/10.52866/ijcsm.2023.02.02.013
[37] V. Ungureanu, "Steepest descent method in the Wolfram Language and Mathematica system," in The Fifth Conference of Mathematical Society of the Republic of Moldova, 2019.
[38] C. Karr and L. M. Freeman, Industrial Applications of Genetic Algorithms, vol. 5. CRC Press, 1998.
[39] A. M. Salman, A. Alridha, and A. H. Hussain, "Some topics on convex optimization," J. Phys. Conf. Ser., vol. 1818, no. 1, p. 012171, 2021. doi: 10.1088/1742-6596/1818/1/012171
[40] T. Blackwell and J. Kennedy, "Impact of communication topology in particle swarm optimization," IEEE Trans. Evol. Comput., vol. 23, no. 4, pp. 689-702, 2019. doi: 10.1109/TEVC.2018.2880894
[41] S. Li, Y. Wei, X. Liu, H. Zhu, and Z. Yu, "A new fast ant colony optimization algorithm: The saltatory evolution ant colony optimization algorithm," Mathematics, no. 6, 2022. doi: 10.3390/math10060925
[42] D. Veit, "Genetic algorithms and evolution strategy in textile engineering," in Advances in Modeling and Simulation in Textile Engineering, Woodhead Publishing, 2021, pp. 99-138. https://doi.org/10.1016/B978-0-12-822977-4.00012-1
[43] W. Tessaro Lunardi, "A real-world flexible job shop scheduling problem with sequencing flexibility: Mathematical programming, constraint programming, and metaheuristics," Doctoral dissertation, 2020.
[44] A. H. Alridha, A. M. Salman, and E. A. Mousa, "Numerical optimization software for solving stochastic optimal control," J. Interdiscip. Math., vol. 26, no. 5, pp. 889-895, 2023. doi: 10.47974/JIM-1525
[45] S. Rahdar, R. Ghanbari, and K. Ghorbani-Moghadam, "Tabu search and variable neighborhood search algorithms for solving interval bus terminal location problem," Appl. Soft Comput., vol. 116, p. 108367, 2022. https://doi.org/10.1016/j.asoc.2021.108367
[46] C. Venkateswarlu, "A metaheuristic tabu search optimization algorithm: Applications to chemical and environmental processes," in Engineering Problems - Uncertainties, Constraints and Optimization Techniques, 2021. doi: 10.5772/intechopen.98240
[47] S. Bubeck, "Convex optimization: Algorithms and complexity," Found. Trends Mach. Learn., vol. 8, no. 3-4, pp. 231-357, 2015. https://doi.org/10.1561/2200000050
[48] M. G. Younis, "Optimal control of dynamical systems using calculus of variations," Babylonian Journal of Mathematics, vol. 2023, pp. 1-6, 2023.
[49] M. Damak, "Numerical methods for fractional optimal control and estimation," Babylonian Journal of Mathematics, pp. 32-40, 2023.
