2022 - Solving Multiobjective Functions of Dynamics Optimization
Aljawad, R. A., & Al-Jilawi, A. S. (2022). Solving multiobjective functions of dynamics optimization
based on constraint and unconstraint non-linear programming. International Journal of Health
Sciences, 6(S1), 5236–5248. https://doi.org/10.53730/ijhs.v6nS1.6041
Introduction
Numerous DOP benchmarks covering various DOP domains have been suggested in the
literature [2], [5], including combinatorial [6], [7], continuous [4], [8],
multiobjective [9]-[11], and constrained DOPs [12]-[14]. This article discusses dynamic
continuous unconstrained and constrained optimization problems, both
single- and multiobjective. The most widely used and renowned
benchmark generators in this area are based on the notion of a landscape
made up of multiple components; in most published DOP benchmarks, the width,
height, and location of these components change over time [5].
Optimization
Optimization has evolved into a flexible technique with applications ranging from
industry and information technology to the social sciences. Methods for addressing
optimization problems are numerous, and together they constitute a vast pool of
problem-solving technology. There are so many different approaches that it is
impossible to exploit them all [28], [29]. They are specified in several technical
languages and implemented in various software programs, and many are never
implemented at all. It is difficult to determine which one is best for a specific
situation, and there are far too few opportunities to mix strategies with
complementary qualities. The ideal situation would be to combine the various
techniques under one roof so that they, and combinations of them, could all be
applied to a problem. As it turns out, many of them share a similar problem-solving
style at some level [16], [25], [26].
Differential Equations
Nonlinear programming
Dynamic Optimization
Numerous optimization problems encountered in the real world are dynamic [20].
Additional jobs must be included in the timetable, the underlying constituent
composition may vary, and new orders may arrive, all of which add to the
complexity of vehicle routing. Because of the unpredictability inherent in dynamic
problems, solving them is often more difficult than solving their static counterparts.
Moreover, solutions to dynamic problems must be found in real time as new
information is received. Even the optimization objective shifts from finding the
best solution to a static problem to tracking the dynamic problem's moving
optimum in real time.
\[
\text{subject to} \quad f\!\left(\frac{dx}{dt},\, x,\, y,\, p\right) = 0, \qquad 0 \le g\!\left(\frac{dx}{dt},\, x,\, y,\, p\right)
\]
APMonitor solves large-scale linear, quadratic, nonlinear, and mixed-integer
programming problems (LP, QP, NLP, MILP, MINLP). All modes of operation are
available, including parametric regression, data reconciliation, real-time
optimization, dynamic simulation, and nonlinear predictive control. GEKKO is an
object-oriented Python library for executing APMonitor on a local machine [26].
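As a minimal, hedged illustration of this workflow (an assumed example, not a script from the paper), GEKKO can solve a small constrained NLP in a few lines; the variable names, bounds, and objective here are chosen purely for demonstration:

# Minimal GEKKO usage sketch (illustrative example, not from the paper).
from gekko import GEKKO

m = GEKKO(remote=False)              # run the APMonitor engine locally
x = m.Var(value=0, lb=0)             # decision variable with a lower bound
y = m.Var(value=0)                   # unbounded decision variable
m.Equation(x + y == 3)               # equality constraint
m.Minimize((x - 2)**2 + (y - 1)**2)  # quadratic objective
m.options.IMODE = 3                  # steady-state optimization (NLP)
m.solve(disp=False)
print(x.value[0], y.value[0])        # expected: x = 2, y = 1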
Any computational procedure that receives a value or set of values as input and
produces a value or set of values as output is an algorithm. An algorithm is thus a
sequence of computations that transform the input into the output. It may also be
seen as a tool for solving a well-defined computational problem: the problem
statement describes the intended input/output relationship, and the algorithm
specifies the computational procedure that achieves it. For instance, we may need
to sort a sequence of integers into ascending order. This task arises frequently in
practice and offers an excellent opportunity to introduce a variety of standard
design methodologies and analytical tools [21], [30]. A simple sketch of such a
sorting procedure is given below.
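As a simple illustration of that sorting task (an assumed example, not code from the paper), the following Python function sorts a list of integers into ascending order by insertion sort:

def insertion_sort(values):
    # Sort a list of numbers in ascending order (illustrative example).
    for i in range(1, len(values)):
        key = values[i]
        j = i - 1
        # Shift larger elements one position to the right.
        while j >= 0 and values[j] > key:
            values[j + 1] = values[j]
            j -= 1
        values[j + 1] = key
    return values

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]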
The first case has an analytical, global optimum solution and involves two state
variables. As an example, consider the following mathematical model:
\[
\begin{aligned}
\min_{u(t)} \quad & x_2(t_f) \\
\text{subject to} \quad & \frac{dx_1}{dt} = u \\
& \frac{dx_2}{dt} = x_1^2 + u^2 \\
& x(0) = [\,2 \;\; 0\,]^T \\
& t_f = 1
\end{aligned}
\]
To solve this dynamic optimization problem, we use GEKKO Python code to find
the optimal solution by minimizing the final state of the nonlinear system with
unconstrained dynamics; the resulting control value is 𝑢 = −0.5. A hedged code
sketch follows.
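The sketch below follows the model stated above; the time grid, solver options, and variable names are assumptions for illustration, not the authors' exact script.

# GEKKO sketch of Case 1 (assumed discretization and options).
from gekko import GEKKO
import numpy as np

m = GEKKO(remote=False)
n = 101
m.time = np.linspace(0, 1, n)      # t in [0, tf] with tf = 1

u = m.MV(value=0)                  # unconstrained control u(t)
u.STATUS = 1                       # let the optimizer adjust u

x1 = m.Var(value=2)                # x1(0) = 2
x2 = m.Var(value=0)                # x2(0) = 0

m.Equation(x1.dt() == u)
m.Equation(x2.dt() == x1**2 + u**2)

final = m.Param(value=np.append(np.zeros(n - 1), 1))
m.Minimize(x2 * final)             # minimize x2 at the final time only

m.options.IMODE = 6                # simultaneous dynamic optimization
m.solve(disp=False)
print('x2(tf) =', x2.value[-1])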
Figure (3): Control and state profiles produced by the dynamic optimization
\[
\begin{aligned}
\min_{u(t)} \quad & x_4(t_f) \\
\text{subject to} \quad & \frac{dx_1}{dt} = x_2 \\
& \frac{dx_2}{dt} = -x_3 u + 16t - 8 \\
& \frac{dx_3}{dt} = u \\
& \frac{dx_4}{dt} = x_1^2 + x_2^2 + 0.005\,(x_2 + 16t - 8 - 0.1\,x_3 u^2)^2 \\
& x(0) = [\,0 \;\; -1 \;\; -\sqrt{5} \;\; 1\,]^T \\
& -3 \le u \le 11 \\
& t_f = 1
\end{aligned}
\]
Here 𝑥1(𝑡) to 𝑥4(𝑡) are the state variables, while 𝑢(𝑡) is the control variable.
To solve this constrained nonlinear system and minimize its final state, we again
use GEKKO in Python (a hedged sketch follows) and obtain:
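The sketch below mirrors the model stated above, while the time grid and solver settings are illustrative assumptions rather than the authors' exact script.

# GEKKO sketch of Case 2 (assumed discretization and options).
from gekko import GEKKO
import numpy as np

m = GEKKO(remote=False)
n = 101
m.time = np.linspace(0, 1, n)
t = m.Param(value=m.time)          # time parameter for the 16t - 8 terms

u = m.MV(value=0, lb=-3, ub=11)    # control bounds -3 <= u <= 11
u.STATUS = 1

x1 = m.Var(value=0)                # x(0) = [0, -1, -sqrt(5), 1]
x2 = m.Var(value=-1)
x3 = m.Var(value=-np.sqrt(5))
x4 = m.Var(value=1)

m.Equation(x1.dt() == x2)
m.Equation(x2.dt() == -x3*u + 16*t - 8)
m.Equation(x3.dt() == u)
m.Equation(x4.dt() == x1**2 + x2**2
           + 0.005*(x2 + 16*t - 8 - 0.1*x3*u**2)**2)

final = m.Param(value=np.append(np.zeros(n - 1), 1))
m.Minimize(x4 * final)             # minimize x4 at the final time

m.options.IMODE = 6
m.solve(disp=False)
print('x4(tf) =', x4.value[-1])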
Figure (4): Control and state profiles developed using dynamic optimization
Multiobjective Functions
So far, we have considered optimization problems with a single objective function,
both with and without constraints. Cost minimization, efficiency maximization, and
weight reduction are examples of conventional single-objective functions. In
multiobjective optimization problems, two or more objective functions must be
optimized concurrently; for example, the criteria may include cost minimization and
efficiency maximization throughout a product's manufacturing process. The formal
formulation of a multiobjective optimization problem is given in [24], [25], and [26].
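In its standard textbook form (reproduced here as commonly stated, rather than quoted verbatim from [24]-[26]), the problem reads:

\[
\begin{aligned}
\min_{x} \quad & F(x) = \left[\, f_1(x),\, f_2(x),\, \dots,\, f_k(x) \,\right]^T \\
\text{subject to} \quad & g_j(x) \le 0, \quad j = 1, \dots, m, \\
& h_l(x) = 0, \quad l = 1, \dots, e.
\end{aligned}
\]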
In an objective hierarchy, there are competing aims. The highest-level goals are
met first and, if extra degrees of freedom remain, lower-level objectives are
addressed next. The l1-norm objective is a natural way to rank goals explicitly and
to optimize many priorities concurrently within a single optimization problem. For
example, consider the constraints or objectives associated with safety, the
environment, and economics: which are the most critical, and why? A hedged
sketch of prioritizing objectives is given below.
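As one hedged illustration (a simple weighted-sum sketch rather than the exact l1-norm formulation), GEKKO sums every term passed to Minimize into a single objective, so relative weights can express the priority ranking of competing goals; the weights and expressions below are illustrative assumptions:

# Weighted combination of two competing objectives (illustrative sketch).
from gekko import GEKKO

m = GEKKO(remote=False)
x = m.Var(value=1, lb=0, ub=5)

safety = (x - 1)**2                # higher-priority goal (illustrative)
cost = (x - 4)**2                  # lower-priority goal (illustrative)

m.Minimize(10 * safety)            # larger weight = higher priority
m.Minimize(1 * cost)               # GEKKO sums all Minimize() terms
m.options.IMODE = 3
m.solve(disp=False)
print('x =', x.value[0])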
Case 3: Construct a probable optimal trajectory for the following multiobjective
optimization problem.
Each of the cases discussed in this study was solved numerically using Python
scripts, in particular the GEKKO library, and the final results of each case are
summarized below with a simple demonstration.
Case 1: For the two-state dynamic problem, we minimize the objective by adjusting
the control 𝑢, which in turn changes 𝑥1 and 𝑥2. The initial conditions are 𝑥1(0) = 2
and 𝑥2(0) = 0, and the states are computed forward from these values, as can be
seen from the inputs and the variables used to manipulate the state conditions; see
Figure (3). We then obtain:
Number of Iterations: 3

Table (1): Final results when solving Case 1

            (scaled)                   (unscaled)
Objective   3.0347753804326336e+00     3.0347753804326336e+00
Case 2: For the four-state dynamic problem, we again minimize the objective by
adjusting the control 𝑢, which changes 𝑥1, 𝑥2, 𝑥3, and 𝑥4. The initial conditions are
𝑥1(0) = 0, 𝑥2(0) = -1, 𝑥3(0) = -√5, and 𝑥4(0) = 1, and the states are computed
forward from these values, as can be seen from the inputs and the variables used
to manipulate the state conditions; see Figure (4).
Number of Iterations: 19

Table (2): Final results when solving Case 2

            (scaled)                   (unscaled)
Objective   2.5121002922896420e+00     2.5121002922896420e+00
Number of Iterations: 28

Table (3): Final results when solving Case 3

            (scaled)                   (unscaled)
Objective   1.2277417296878346e+04     3.6832251890635038e+04
Conclusion
References
[9] S. B. Gee, K. C. Tan, and H. A. Abbass, "A benchmark test suite for dynamic
evolutionary multiobjective optimization," IEEE Transactions on
Cybernetics, vol. 47, no. 2, pp. 461–472, 2017.
[10] S. Jiang, M. Kaiser, S. Yang, S. Kollias, and N. Krasnogor, "A scalable test
suite for continuous dynamic multiobjective optimization," IEEE
Transactions on Cybernetics, pp. 1–13, 2019.
[11] S. Jiang and S. Yang, "Evolutionary dynamic multiobjective optimization:
Benchmarks and algorithm comparisons," IEEE Transactions on
Cybernetics, vol. 47, no. 1, pp. 198–211, 2017.
[12] T. T. Nguyen and X. Yao, "Continuous dynamic constrained optimization—
the challenges," IEEE Transactions on Evolutionary Computation, vol. 16,
no. 6, pp. 769–786, 2012.
[13] C. Bu, W. Luo, and L. Yue, "Continuous dynamic constrained optimization
with ensemble of locating and tracking feasible regions strategies," IEEE
Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 14–33, 2017.
[14] Y. Wang, J. Yu, S. Yang, S. Jiang, and S. Zhao, "Evolutionary dynamic
constrained optimization: Test suite construction and algorithm
comparisons," Swarm and Evolutionary Computation, vol. 50, p. 100559,
2019.
[15] D. Yazdani, M. N. Omidvar, R. Cheng, et al., "Benchmarking continuous
     dynamic optimization: Survey and generalized test suite," IEEE Transactions
     on Cybernetics, ISSN 2168-2267, 2020.
     https://doi.org/10.1109/TCYB.2020.3011828.
[16] J. N. Hooker, Integrated Methods for Optimization.
[17] Introduction to Numerical Methods of Differential Equations.
[18] U. Diwekar, Introduction to Applied Optimization, 2nd ed., Vishwamitra
     Research Institute, Clarendon Hills, IL, USA.
[19] S. Boyd and L. Vandenberghe, Convex Optimization, Stanford University and
     University of California, Los Angeles.
[20] L. Bianchi, "Notes on dynamic vehicle routing - state of the art," Technical
     Report IDSIA 05-01, Italy, 2000.
[21] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to
     Algorithms, 3rd ed.
[22] J. Branke, Evolutionary Optimization in Dynamic Environments. Kluwer
     Academic Publishers, 2002.
[23] J. Grefenstette, "Evolvability in dynamic fitness landscapes: A genetic
     algorithm approach," in Proc. 1999 Congress on Evolutionary Computation
     (CEC 99), Washington, DC, IEEE Press, pp. 2031-2038.
[24] R. K. Arora, Optimization: Algorithms and Applications. Vikram Sarabhai
     Space Centre, Indian Space Research Organisation, Trivandrum, India.
[25] A. S. Al-Jilawi and F. H. Abd Alsharify, "Review of mathematical modelling
     techniques with applications in biosciences," Iraqi Journal for Computer
     Science and Mathematics, vol. 3, no. 1, pp. 135-144, 2022.
[26] A. Alridha, F. A. Wahbi, and M. K. Kadhim, "Training analysis of optimization
     models in machine learning," International Journal of Nonlinear Analysis and
     Applications, vol. 12, no. 2, pp. 1453-1461, 2021.
[27] M. K. Kadhim, F. A. Wahbi, and A. Hasan Alridha, "Mathematical optimization
     modeling for estimating the incidence of clinical diseases," 2022.