Constrained Optimization
Swagatam Das
Head, Electronics and Communication Sciences Unit,
Indian Statistical Institute, Kolkata – 700 108, India.
E-mail: [email protected]
Problem Statement: Real-Parameter Global Optimization
Quiz…
f(x_1, x_2, …, x_D)
Let D = 1000 (not really a big one!).
How much time will an ordinary PC take to compare all possible values of the function and pick the best?
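The question is rhetorical once the arithmetic is spelled out. A back-of-the-envelope sketch (the grid resolution of 10 values per variable and the 10⁹ evaluations per second are illustrative assumptions, not figures from the slides):

```python
# Brute-force grid search over D = 1000 variables, only 10 sample values each
D = 1000
points_per_dim = 10
evals_per_second = 10 ** 9            # optimistic for an ordinary PC
total_points = points_per_dim ** D    # 10^1000 candidate points
seconds = total_points // evals_per_second
print(f"grid points: 10^{D}")
print(f"seconds needed: ~10^{len(str(seconds)) - 1}")   # ~10^991 seconds
```

For comparison, the age of the universe is roughly 10¹⁷ seconds, which is why exhaustive comparison is hopeless and stochastic search heuristics are used instead.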
In Diverse Domains:
Transfer function of a 2-D IIR filter:
H(z_1, z_2) = H_0 / ∏_{k=1}^{N} (1 + q_k z_1 + r_k z_2 + s_k z_1 z_2)
Minimize:
J = ∑_{n_1=0}^{N_1} ∑_{n_2=0}^{N_2} [ M(ω_1, ω_2) − M_d(ω_1, ω_2) ]^b
Subject to the stability constraints:
|q_k + r_k| − 1 ≤ s_k ≤ 1 − |q_k − r_k|, k = 1, …, N,
where ω_1 = (π/N_1) n_1, ω_2 = (π/N_2) n_2, and M(ω_1, ω_2) = |H(z_1, z_2)| evaluated at z_1 = e^{jω_1}, z_2 = e^{jω_2}.
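The least-squares objective J above can be sketched numerically as follows. The single coefficient triple and the ideal low-pass desired response Md are illustrative assumptions, not the design from the cited paper:

```python
import cmath
import math

def H(z1, z2, H0, coeffs):
    """H(z1,z2) = H0 / prod_k (1 + q_k z1 + r_k z2 + s_k z1 z2)."""
    den = 1.0 + 0j
    for q, r, s in coeffs:
        den *= 1 + q * z1 + r * z2 + s * z1 * z2
    return H0 / den

def J(coeffs, H0, Md, N1=10, N2=10, b=2):
    """Sum of b-th power errors between |H| and the desired magnitude Md on the grid."""
    total = 0.0
    for n1 in range(N1 + 1):
        for n2 in range(N2 + 1):
            w1 = math.pi * n1 / N1
            w2 = math.pi * n2 / N2
            z1, z2 = cmath.exp(1j * w1), cmath.exp(1j * w2)
            M = abs(H(z1, z2, H0, coeffs))
            total += (M - Md(w1, w2)) ** b
    return total

# Ideal low-pass desired response (illustrative)
Md = lambda w1, w2: 1.0 if math.hypot(w1, w2) <= math.pi / 2 else 0.0
coeffs = [(0.1, 0.1, 0.05)]   # satisfies |q+r| - 1 <= s <= 1 - |q-r|
print(J(coeffs, H0=1.0, Md=Md))
```

A constrained optimizer would then search over (H_0, q_k, r_k, s_k) to minimize J subject to the stability bounds.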
Solution Representation for the above problem:
S. Das and A. Konar: A swarm intelligence approach to the synthesis of two-dimensional IIR
filters, Engineering Applications of Artificial Intelligence, Vol. 20(8), 1086-1096 (2007)
Constrained Optimization
Optimization is a numerical process that determines the decision variables minimizing or maximizing an objective function while satisfying the linear and/or non-linear constraints imposed on the decision space.
In a plethora of real-world applications, the problems involve non-convex, non-linear objective functions and constraints, with multiple local optima and a comparatively small feasible region.
A non-convex, non-linear, constrained optimization problem (COP) can be defined as:
Minimize f(x), x = (x_1, …, x_D),
subject to: g_i(x) ≤ 0, i = 1, …, p (inequality constraints),
h_j(x) = 0, j = 1, …, q (equality constraints),
with bounds l_d ≤ x_d ≤ u_d, d = 1, …, D.
Efrén Mezura-Montes, Carlos A. Coello Coello: Constraint-handling in nature-inspired numerical optimization: Past,
present and future. Swarm and Evolutionary Computation, 1(4): 173-194 (2011)
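The constraint-violation measure φ(x) used throughout this literature sums inequality violations and equality violations beyond a small tolerance δ. A minimal sketch (the toy objective and constraints are assumptions for illustration):

```python
def violation(x, gs, hs, delta=1e-4):
    """phi(x) = sum_i max(0, g_i(x)) + sum_j max(0, |h_j(x)| - delta)."""
    phi = sum(max(0.0, g(x)) for g in gs)
    phi += sum(max(0.0, abs(h(x)) - delta) for h in hs)
    return phi

# Toy COP: minimize f subject to one inequality and one equality constraint
f = lambda x: x[0] ** 2 + x[1] ** 2
gs = [lambda x: 1.0 - x[0] - x[1]]   # g(x) <= 0  <=>  x0 + x1 >= 1
hs = [lambda x: x[0] - x[1]]         # h(x) = 0   <=>  x0 == x1

print(violation([0.5, 0.5], gs, hs))  # 0.0 (feasible)
print(violation([0.0, 0.0], gs, hs))  # 1.0 (violates the inequality)
```

A solution x is feasible when φ(x) = 0; most of the constraint-handling techniques surveyed below compare solutions through the pair (f(x), φ(x)).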
How do the constraints modify the search space?
Christodoulos A. Floudas, Amy R. Ciric, and Ignacio E. Grossmann. "Automatic synthesis of optimum heat exchanger network configurations." AIChE
Journal 32, no. 2 (1986): 276-290.
Abhishek Kumar, Guohua Wu, Mostafa Z. Ali, Rammohan Mallipeddi, P. N. Suganthan, and Swagatam Das. “A test-suite of non-convex constrained
optimization problems from the real-world and some baseline results”, Swarm and Evolutionary Computation 56 (2020).
Heat exchanger network design
An example of a heat exchanger network design problem is given below.
Christodoulos A. Floudas, Amy R. Ciric, and Ignacio E. Grossmann. "Automatic synthesis of optimum heat exchanger network configurations." AIChE
Journal 32, no. 2 (1986): 276-290.
Real-world Constrained Optimization Problems: A test-suite of 57 problems
Real-world constrained optimization problems have been comparatively difficult to solve due to:
the complex nature of the objective and constraint functions,
the substantial number of non-linear constraints, and
the low volume of the feasible region.
A list of standard real-life problems is therefore needed for benchmarking new algorithms in an efficient and unbiased manner.
A set of 57 real-world Constrained Optimization Problems are described and
presented as a benchmark suite in
Abhishek Kumar, Guohua Wu, Mostafa Z. Ali, Rammohan Mallipeddi, P. N. Suganthan, and Swagatam Das. “A test-suite of
non-convex constrained optimization problems from the real-world and some baseline results”, Swarm and Evolutionary
Computation 56 (2020).
These problems are shown to capture a wide range of difficulties and challenges that arise in real-life optimization scenarios.
Real-world Constrained Optimization Problems: A test-suite of 57 problems
A set of 57 constrained problems selected from different real-world applications is considered as a benchmark suite.
In these problems, the number of decision variables varies from 2 to 158, the number of equality constraints varies from 0 to 148, and the number of inequality constraints varies from 0 to 91.
Abhishek Kumar, Guohua Wu, Mostafa Z. Ali, Rammohan Mallipeddi, P. N. Suganthan, and Swagatam Das. "A test-suite of non-convex constrained
optimization problems from the real-world and some baseline results”, Swarm and Evolutionary Computation 56 (2020).
Special Session & Competitions on Real-World Single Objective Constrained Optimization, at CEC-2020, Glasgow, UK, 19-24 July
2020 and GECCO 2020, Cancun, Mexico, 8-12 July 2020.
Self-Adaptive Spherical Search Algorithm: Winner
Spherical Search (SS) is a recently proposed population-based optimization technique for solving
continuous non-convex optimization problems.
It has shown adequate success in determining the global optimum for optimization problems with a wide variety of difficulties.
In non-convex real-world optimization problems, SS outperforms state-of-the-art algorithms because of the following characteristics:
fewer parameters to tune,
a better balance between exploration and exploitation during the search,
rotational invariance,
maintenance of high diversity during the search, and
mapping of the contour of the search space.
The following modifications are incorporated in SS to deal with the constrained problem-space:
A self-adaptive parameter tuning procedure is proposed and employed within the basic structure of SS.
A popular 𝜖-constrained selection scheme and a gradient-based repair method are integrated within the
framework of the algorithm to efficiently handle the constraints.
Abhishek Kumar, Swagatam Das, and Ivan Zelinka. "A self-adaptive spherical search algorithm for real-world constrained optimization problems." In
Proceedings of the 2020 Genetic and Evolutionary Computation Conference (GECCO) Companion, pp. 13-14. 2020.
Abhishek Kumar, Rakesh Kumar Misra, Devender Singh, Sujeet Mishra, and Swagatam Das. "The spherical search algorithm for
bound-constrained global optimization problems”, Applied Soft Computing 85 (2019): 105734.
Real-world Constrained Optimization Problems: A test-suite of 57 problems
Accepted Algorithms in the IEEE CEC 2020/GECCO 2020 competition:
Constraint Handling Techniques
Penalty Methods
Feasibility Criteria
Stochastic Ranking
ε-constrained Approach
Multi-objective Optimization-based Approach
Repair Methods
1. Powell, David, and Michael M. Skolnick. "Using genetic algorithms in engineering design optimization with non-linear constraints”,
In Proceedings of the 5th International conference on Genetic Algorithms, pp. 424-431. 1993.
2. Deb, Kalyanmoy. "An efficient constraint handling method for genetic algorithms”, Computer methods in applied mechanics and engineering 186,
no. 2-4 (2000): 311-338.
3. Runarsson, Thomas P., and Xin Yao. "Stochastic ranking for constrained evolutionary optimization”, IEEE Transactions on evolutionary
computation 4, no. 3 (2000): 284-294.
4. Takahama, Tetsuyuki, and Setsuko Sakai. "Solving constrained optimization problems by the ε constrained particle swarm optimizer with adaptive
velocity limit control”, In 2006 IEEE Conference on Cybernetics and Intelligent Systems, pp. 1-7. IEEE, 2006.
5. Coello, Carlos A. Coello. "Treating constraints as objectives for single-objective evolutionary optimization”, Engineering Optimization+ A35 32,
no. 3 (2000): 275-308.
6. Chootinan, Piya, and Anthony Chen. "Constraint handling in genetic algorithms using a gradient-based repair method”, Computers & operations
research 33, no. 8 (2006): 2263-2281.
Penalty Methods
Penalty methods are the most commonly used constraint-handling technique for evolutionary algorithms.
Penalty methods transform constrained optimization problem into unconstrained
optimization one by adding penalty terms based on the degree of infeasibility of the
solution.
The main advantage of these methods is that the optimization problem becomes
unconstrained and evolutionary operators can be easily applied.
However, these methods modify the original objective landscape, which may become
less smooth.
Moreover, parameters such as the penalty constants are introduced into the problem,
and their values need to be set or tuned properly beforehand.
Powell, David, and Michael M. Skolnick. "Using genetic algorithms in engineering design optimization with non-linear
constraints." In Proceedings of the 5th International conference on Genetic Algorithms, pp. 424-431. 1993.
Illustrating the penalty function method…
Simply put, the technique adds a term to the objective function that produces a high cost for violation of constraints. This is known as the penalty function method. Mathematically,
F(x) = f(x) + R [ ∑_{i=1}^{p} max(0, g_i(x))² + ∑_{j=1}^{q} h_j(x)² ],
where R is the penalty coefficient.
Deb, Kalyanmoy. "An efficient constraint handling method for genetic algorithms." Computer methods in applied mechanics and
engineering 186, no. 2-4 (2000): 311-338.
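A minimal sketch of the quadratic penalty transform described above; the penalty coefficient R and the toy problem are illustrative assumptions:

```python
def penalized(f, gs, hs, R=1e3):
    """Return an unconstrained objective F(x) = f(x) + R * quadratic violation."""
    def F(x):
        pen = sum(max(0.0, g(x)) ** 2 for g in gs)   # inequality violations
        pen += sum(h(x) ** 2 for h in hs)            # equality violations
        return f(x) + R * pen
    return F

f = lambda x: x[0] + x[1]              # toy objective
gs = [lambda x: -x[0]]                 # x0 >= 0
hs = [lambda x: x[0] + x[1] - 1.0]     # x0 + x1 == 1
F = penalized(f, gs, hs)
print(F([0.5, 0.5]))    # 1.0: feasible point, no penalty added
print(F([-0.1, 0.5]))   # ~370.4: infeasible point pays a large cost
```

The landscape change the slide warns about is visible here: F is no longer smooth at the constraint boundary, and results depend strongly on the choice of R.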
Stochastic Ranking
In stochastic ranking, a control parameter P_f is pre-defined by the user to balance feasibility and infeasibility, while no penalty parameter is used.
The choice and preference between two solutions are mainly based on their relative objective values and degrees of constraint violation.
Ranking of solutions can be done by any sorting algorithm, such as bubble sort.
The main step is to draw a uniformly distributed random number u and compare it with the pre-defined P_f:
If u < P_f, or if both solutions are feasible, then swap them if f(x_i) > f(x_{i+1});
else (at least one solution is infeasible), swap them if φ(x_i) > φ(x_{i+1}).
The aim is to select the minimum of the objective values and the lower degree of constraint violation.
Runarsson, Thomas P., and Xin Yao. "Stochastic ranking for constrained evolutionary optimization." IEEE Transactions on evolutionary
computation 4, no. 3 (2000): 284-294.
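The bubble-sort-style pass of stochastic ranking can be sketched as follows (f and phi are precomputed lists of objective values and constraint violations; P_f is the user-set probability):

```python
import random

def stochastic_rank(f, phi, Pf=0.45, sweeps=None):
    """Order solution indices by stochastic ranking (Runarsson & Yao, 2000)."""
    n = len(f)
    idx = list(range(n))
    for _ in range(sweeps or n):
        swapped = False
        for i in range(n - 1):
            a, b = idx[i], idx[i + 1]
            if (phi[a] == 0 and phi[b] == 0) or random.random() < Pf:
                swap = f[a] > f[b]        # compare by objective value
            else:
                swap = phi[a] > phi[b]    # compare by constraint violation
            if swap:
                idx[i], idx[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx

f   = [3.0, 1.0, 2.0, 0.5]   # objective values
phi = [0.0, 0.0, 1.0, 5.0]   # violations: the first two solutions are feasible
print(stochastic_rank(f, phi, Pf=0.0))   # [1, 0, 2, 3]: feasibility-first order
```

With P_f = 0 this reduces to the deterministic feasibility rule; a larger P_f occasionally lets a good infeasible solution rank ahead of a worse feasible one.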
ε-constrained Approach
Another technique for handling constraints, called the ε-constrained method, consists of two steps: relaxation limits (the ε-level) for the feasibility consideration, and lexicographical ordering.
Basically, two solutions x_1 and x_2 can be compared and ranked by their objective values f and degrees of constraint violation φ, i.e.
(f(x_1), φ(x_1)) <_ε (f(x_2), φ(x_2)) ⇔ f(x_1) < f(x_2), if φ(x_1), φ(x_2) ≤ ε or φ(x_1) = φ(x_2); otherwise, φ(x_1) < φ(x_2).
Takahama, Tetsuyuki, and Setsuko Sakai. "Solving constrained optimization problems by the ε constrained particle swarm optimizer
with adaptive velocity limit control”, In 2006 IEEE Conference on Cybernetics and Intelligent Systems, pp. 1-7. IEEE, 2006.
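The ε-comparison can be sketched as a small predicate (f1/phi1 and f2/phi2 denote the objective value and constraint violation of two solutions):

```python
def eps_less(f1, phi1, f2, phi2, eps):
    """True if solution 1 precedes solution 2 under the epsilon-lexicographic order."""
    if (phi1 <= eps and phi2 <= eps) or phi1 == phi2:
        return f1 < f2     # both within the relaxed feasibility limit: use objectives
    return phi1 < phi2     # otherwise: less violation wins

print(eps_less(1.0, 0.05, 2.0, 0.0, eps=0.1))  # True: both within eps, 1.0 < 2.0
print(eps_less(1.0, 0.50, 2.0, 0.0, eps=0.1))  # False: violation 0.5 loses
```

With ε = 0 this reduces to the feasibility rule; a schedule that shrinks ε toward 0 over the run steers the search back to a feasible final solution.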
Multi-objective Optimization-based Approach
During the last two decades, the notion of multi-objective optimization has been
successfully adopted to solve the nonconvex constrained optimization problems in their
most general forms.
In this approach, the constrained optimization problem is converted into a multi-objective optimization problem in which two objectives are considered:
the first is to optimize the original objective function, and
the second is to minimize the degree of constraint violation.
Although some researchers have suggested that multi-objective optimization techniques are
not suitable, this kind of technique has still attracted researchers in recent years.
The essential difference between constrained and multi-objective optimization is:
the aim of the former is to find the global optimal solution in the feasible region,
whereas the goal of the latter is to obtain a final population with a diversity of nondominated individuals.
Coello, Carlos A. Coello. "Treating constraints as objectives for single-objective evolutionary optimization." Engineering Optimization+ A35 32, no. 3
(2000): 275-308.
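The conversion can be sketched as a Pareto-dominance check on the pair (f(x), φ(x)); the sample population values are illustrative:

```python
def dominates(a, b):
    """a = (f, phi) dominates b: no worse in both objectives, strictly better in one."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

# Each solution summarized as (objective value, constraint violation)
pop = [(-5.0, 0.0), (-6.0, 2.0), (-4.0, 0.0), (-6.0, 3.0)]
front = [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]
print(front)   # [(-5.0, 0.0), (-6.0, 2.0)]
```

Note the difference stated above: the constrained optimizer ultimately wants only the φ = 0 end of this front, not the whole nondominated set.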
Multi-objectivization – a conceptual view
Gradient-based Repair Methods
The gradient information derived from the constraint set can be utilized to
systematically repair the infeasible solutions, i.e. the gradient information can be used
to direct the infeasible solutions toward the feasible region.
This repair method works effectively if the relationship between the decision variables and the constraints can be characterized.
Even if gradient-based repair can be considered a constraint-handling technique in itself, using it alone would be computationally expensive.
This procedure might require many iterations to reach the feasible region, and in extreme cases, a feasible solution could be impossible to obtain.
Therefore, this technique is usually coupled with another constraint-handling technique.
Chootinan, Piya, and Anthony Chen. "Constraint handling in genetic algorithms using a gradient-based repair method." Computers
& operations research 33, no. 8 (2006): 2263-2281.
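The idea can be sketched for a single violated inequality constraint; the full method in the cited paper uses the pseudoinverse of the Jacobian of all violated constraints, so this one-constraint Newton step is a simplification:

```python
def grad(fun, x, h=1e-6):
    """Forward-difference gradient of a scalar constraint function."""
    g0 = fun(x)
    return [(fun([xj + (h if k == j else 0.0) for k, xj in enumerate(x)]) - g0) / h
            for j in range(len(x))]

def gradient_repair(x, g, max_iter=20, tol=1e-8):
    """Move x along -g(x) * grad g / ||grad g||^2 until g(x) <= 0 (repaired)."""
    x = list(x)
    for _ in range(max_iter):
        v = g(x)
        if v <= tol:
            break
        d = grad(g, x)
        nrm2 = sum(dj * dj for dj in d)
        if nrm2 == 0.0:
            break                      # no gradient information: give up
        x = [xj - v * dj / nrm2 for xj, dj in zip(x, d)]
    return x

g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0   # feasible region: the unit disc
x = gradient_repair([2.0, 2.0], g)
print(g(x))   # ~0: the point has been pulled onto the constraint boundary
```

As the slide notes, such repair can need many iterations (each costing constraint evaluations), which is why it is normally paired with another constraint-handling scheme.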
υ-level Penalty Function
υ-level modification in the constraints:
Kumar, Abhishek, Swagatam Das, Rakesh Kumar Misra, and Devender Singh. "A υ-Constrained Matrix Adaptation Evolution
Strategy with Broyden-Based Mutation for Constrained Optimization”, IEEE Transactions on Cybernetics (2021).
υ-level Penalty Function
Constraint violation and the υ-level penalty function:
In each iteration, we can use the following equation to tune the value of υ:
Kumar, Abhishek, Swagatam Das, Rakesh Kumar Misra, and Devender Singh. "A υ-Constrained Matrix Adaptation Evolution Strategy With Broyden-Based Mutation for
Constrained Optimization." IEEE Transactions on Cybernetics (2021).
υ-level Penalty Function
Performance on IEEE CEC 2017 benchmark Suite
Kumar, Abhishek, Swagatam Das, Rakesh Kumar Misra, and Devender Singh. "A υ-Constrained Matrix Adaptation Evolution Strategy With Broyden-Based Mutation for
Constrained Optimization." IEEE Transactions on Cybernetics (2021).
Reference points and vectors in MOOP
Reference Vector-based Constrained Handling Technique
Reference Vector-based Ranking Scheme:
First, constrained optimization problem is transformed into bi-
objective problem.
For each solution, a reference vector is determined according to the minimum angle between the reference vector and the position vector of the solution in the objective space.
For each reference vector, the corresponding solutions are ranked according to the minimum value of the projection.
After ranking the solutions associated with each reference vector, an overall ranking is obtained by giving more preference to the solutions of reference vectors with a smaller angle.
Ranking of the solutions using feasible criteria: 1, 2, 3, 4, 5, 6,
7, 8, 9, 10, 11, 12, 13, 14, 15, 16, and 17.
Rankings calculated by the above-mentioned steps are as follows:
1, 3, 4, 2, 6, 5, 7, 9, 11, 16, 8, 10, 13, 14, 15, 12, and 17.
Kumar, Abhishek, Swagatam Das, and Rammohan Mallipeddi. "A Reference Vector-Based Simplified Covariance Matrix Adaptation Evolution
Strategy for Constrained Global Optimization." IEEE Transactions on Cybernetics (2020).
Reference Vector-based Constrained Handling Technique
Reference Vectors Adjustment Strategy:
Here, only feasible solutions (solutions on the y-axis of the objective space, i.e., with zero constraint violation) are needed, which differs from an actual bi-objective optimization problem.
To address this condition, we put forth an adjustment strategy that calculates the reference vectors so as to locate the feasible solutions.
To rank the non-dominated solutions according to the feasibility rule, an optimum reference vector is calculated.
Next, a uniformly spaced set of reference vectors is generated.
Note that if the number of non-dominated solutions in the current population is less than 2, then the whole population is used in place of the non-dominated solutions.
Kumar, Abhishek, Swagatam Das, and Rammohan Mallipeddi. "A Reference Vector-Based Simplified Covariance Matrix Adaptation Evolution Strategy for
Constrained Global Optimization." IEEE Transactions on Cybernetics (2020).
Reference Vector-based Constrained Handling Technique
Calculation of optimum Reference Vector:
Here, the optimum reference vector can be calculated by solving the following optimization problem:
Kumar, Abhishek, Swagatam Das, and Rammohan Mallipeddi. "A Reference Vector-Based Simplified Covariance Matrix Adaptation
Evolution Strategy for Constrained Global Optimization." IEEE Transactions on Cybernetics (2020).
Reference Vector-based Constrained Handling Technique
Performance on IEEE CEC 2017 benchmark Suite
Kumar, Abhishek, Swagatam Das, and Rammohan Mallipeddi. "A Reference Vector-Based Simplified Covariance Matrix Adaptation Evolution Strategy for Constrained Global
Optimization." IEEE Transactions on Cybernetics (2020).
Broyden-based Repair Method
Kumar, Abhishek, Swagatam Das, Rakesh Kumar Misra, and Devender Singh. "A υ-Constrained Matrix Adaptation Evolution Strategy With Broyden-Based Mutation for
Constrained Optimization." IEEE Transactions on Cybernetics (2021).
Broyden-based Repair Method
Comparison of Broyden-based with Gradient-based repair method on IEEE CEC 2017 Problems
Kumar, Abhishek, Swagatam Das, Rakesh Kumar Misra, and Devender Singh. "A υ-Constrained Matrix Adaptation Evolution Strategy With Broyden-Based Mutation for
Constrained Optimization." IEEE Transactions on Cybernetics (2021).
Broyden-based Repair Method
Performance on IEEE CEC 2017 benchmark Suite
Kumar, Abhishek, Swagatam Das, Rakesh Kumar Misra, and Devender Singh. "A υ-Constrained Matrix Adaptation Evolution Strategy With Broyden-Based Mutation for
Constrained Optimization." IEEE Transactions on Cybernetics (2021).
Levenberg-Marquardt with Broyden’s Update-based Repair Method
The previous repair method is not sufficiently robust because it is highly sensitive to the initial seed solutions.
If the initial seed is very far from the boundary of the feasible region, this method fails to repair the solution, and many function evaluations (FEs) are wasted.
To overcome these issues, we propose the Levenberg-Marquardt with Broyden's-update-based repair method.
Kumar, Abhishek, Swagatam Das, and Rammohan Mallipeddi. "A Reference Vector-Based Simplified Covariance Matrix Adaptation
Evolution Strategy for Constrained Global Optimization." IEEE Transactions on Cybernetics (2020).
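A single-constraint sketch of the idea (a damped Levenberg-Marquardt step on the constraint residual, with Broyden's rank-one secant update replacing repeated Jacobian evaluations); this is a simplified illustration, not the authors' exact procedure:

```python
def lm_broyden_repair(x, c, max_iter=50, lam=1e-3, tol=1e-10):
    """Drive the scalar constraint residual c(x) toward 0 with damped steps,
    updating the 1xD Jacobian row J by Broyden's secant rule."""
    x = list(x)
    h = 1e-6
    cx = c(x)
    # initial Jacobian row by forward differences (evaluated only once)
    J = [(c([xj + (h if k == j else 0.0) for k, xj in enumerate(x)]) - cx) / h
         for j in range(len(x))]
    for _ in range(max_iter):
        if abs(cx) < tol:
            break
        denom = sum(Jj * Jj for Jj in J) + lam            # J J^T + lambda damping
        dx = [-Jj * cx / denom for Jj in J]               # damped Gauss-Newton step
        x_new = [xj + dj for xj, dj in zip(x, dx)]
        c_new = c(x_new)
        dxdx = sum(dj * dj for dj in dx)
        if dxdx > 0.0:                                    # Broyden rank-one update
            resid = (c_new - cx) - sum(Jj * dj for Jj, dj in zip(J, dx))
            J = [Jj + resid * dj / dxdx for Jj, dj in zip(J, dx)]
        x, cx = x_new, c_new
    return x

c = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0   # repair onto the unit circle
x = lm_broyden_repair([3.0, 0.0], c)
print(c(x))   # ~0 even though c was differentiated only once at the start
```

The damping term λ keeps the step bounded when the Jacobian estimate is poor (the robustness issue raised above), while the Broyden update avoids spending extra FEs on fresh gradients.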
Levenberg-Marquardt with Broyden’s Update-based Repair Method
Kumar, Abhishek, Swagatam Das, and Rammohan Mallipeddi. "A Reference Vector-Based Simplified Covariance Matrix Adaptation Evolution
Strategy for Constrained Global Optimization." IEEE Transactions on Cybernetics (2020).
A sample run of the constrained optimizer…..
• Problem Description:
Minimize f(x, y) = −x − y,
subject to:
G1: −2x⁴ + 8x³ − 8x² + y − 2 ≤ 0,
G2: −4x⁴ + 32x³ − 88x² + 96x + y − 36 ≤ 0,
where the bounds are 0 ≤ x ≤ 3 and 0 ≤ y ≤ 4.
The feasible global minimum is at (x*, y*) = (2.3295, 3.1785), where f(x*, y*) = −5.5080. The feasible region consists of two disconnected sub-regions.
C. Floudas. Handbook of Test Problems in Local and Global Optimization. Nonconvex Optimization and its
Applications. Kluwer Academic Publishers, The Netherlands, 1999.
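Before running any optimizer, it helps to encode the problem and sanity-check the reported optimum. Both constraints turn out to be active there (values ≈ 0, up to rounding of the published x*, y*):

```python
f  = lambda x, y: -x - y
g1 = lambda x, y: -2 * x**4 + 8 * x**3 - 8 * x**2 + y - 2
g2 = lambda x, y: -4 * x**4 + 32 * x**3 - 88 * x**2 + 96 * x + y - 36

x_star, y_star = 2.3295, 3.1785
print(f(x_star, y_star))    # -5.508, matching the reported optimum
print(g1(x_star, y_star))   # ~0: G1 active
print(g2(x_star, y_star))   # ~0: G2 active
```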
sCMA-ES as a Core Optimizer
Covariance matrix adaptation evolution strategy (CMA-ES) is a distribution-based, stochastic, derivative-free algorithm for solving non-convex continuous optimization problems.
The main steps of distribution-based algorithms can be defined as follows:
Kumar, Abhishek, Swagatam Das, and Rammohan Mallipeddi. "A Reference Vector-Based Simplified Covariance Matrix Adaptation Evolution Strategy for Constrained Global
Optimization." IEEE Transactions on Cybernetics (2020).
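The main loop (sample from a distribution, evaluate, select, refit the distribution) can be sketched with a toy isotropic Gaussian model; real CMA-ES adapts a full covariance matrix and step size with evolution paths, so everything below is a deliberately simplified illustration:

```python
import math
import random

def distribution_search(f, dim, iters=200, popsize=20, elite=5, seed=1):
    """Toy distribution-based optimizer: sample candidates from a Gaussian,
    select the best, and refit the mean and step size from the elite set."""
    rng = random.Random(seed)
    mean = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma = 2.0
    for _ in range(iters):
        pop = [[m + sigma * rng.gauss(0, 1) for m in mean] for _ in range(popsize)]
        pop.sort(key=f)                      # evaluate and rank
        best = pop[:elite]                   # selection
        mean = [sum(x[d] for x in best) / elite for d in range(dim)]
        spread = math.sqrt(sum((x[d] - mean[d]) ** 2
                               for x in best for d in range(dim)) / (elite * dim))
        sigma = max(0.5 * sigma, spread)     # adapt step size, never collapse abruptly
    return mean

sphere = lambda x: sum(v * v for v in x)
m = distribution_search(sphere, dim=3)
print(sphere(m))   # close to 0
```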
sCMA-ES as a Core Optimizer
CMA-ES
Kumar, Abhishek, Swagatam Das, and Rammohan Mallipeddi. "A Reference Vector-Based Simplified Covariance Matrix Adaptation Evolution
Strategy for Constrained Global Optimization." IEEE Transactions on Cybernetics (2020).
sCMA-ES as a Core Optimizer
sCMA-ES is derived from the original CMA-ES by replacing the adaptation of the covariance matrix with a direct adaptation of the square root of the covariance matrix.
Therefore, there is no need to compute the square root of the covariance matrix via spectral or Cholesky decomposition at each iteration.
This reduces the algorithmic complexity and simplifies the steps of the algorithm, since only elementary operations of numerical linear algebra are used.
The proposed steps for adapting the matrix are presented as follows:
Kumar, Abhishek, Swagatam Das, and Rammohan Mallipeddi. "A Reference Vector-Based Simplified Covariance Matrix Adaptation Evolution
Strategy for Constrained Global Optimization”, IEEE Transactions on Cybernetics (2020).
sCMA-ES as a Core Optimizer
Kumar, Abhishek, Swagatam Das, and Rammohan Mallipeddi. "A Reference Vector-Based Simplified Covariance Matrix Adaptation Evolution Strategy for Constrained Global
Optimization." IEEE Transactions on Cybernetics (2020).
Conclusions…
Several research topics need to be tackled or revisited:
Further improvements of CMA-ES and similar algorithms could be attained with metamodeling:
constraint-boundary surrogates can be used to predict the feasibility of mutations and to repair infeasible solutions.
Use of machine learning techniques, such as rule-based reinforcement learning, to guide the population of an EA towards the feasible optimal basin of attraction.
Memetic algorithms for solving COPs are becoming more popular in the literature, even surpassing
other hybrid approaches. Thus, developing new local search strategies that are specifically designed for
constrained search spaces is a promising research topic.
…and that’s it!