
On the Use of Non-Stationary Penalty Functions to Solve

Nonlinear Constrained Optimization Problems with GA’s


Jeffrey A. Joines and Christopher R. Houck
jjoine@eos.ncsu.edu chouck@eos.ncsu.edu
Department of Industrial Engineering
North Carolina State University, Raleigh, NC 27695-7906
Abstract
In this paper we discuss the use of non-stationary penalty functions to solve general nonlinear programming
problems (NP) using real-valued GAs. The non-stationary penalty is a function of the generation number;
as the number of generations increases, so does the penalty. Therefore, as the penalty increases it puts more
and more selective pressure on the GA to find a feasible solution. The ideas presented in this paper come
from two basic areas: calculus-based nonlinear programming and simulated annealing. The non-stationary
penalty methods are tested on four NP test cases and the effectiveness of these methods is reported.

1 Introduction
Constrained function optimization is an extremely important tool used in almost every facet of engineering,
operations research, and mathematics. Constrained optimization can be represented as a nonlinear
programming problem. The general nonlinear programming (NP) problem is defined as follows:

minimize f(X)

subject to (nonlinear and linear) constraints:

g_i(X) >= 0.0,  i = 1, ..., m

h_j(X) = 0.0,  j = 1, ..., p

In the last few years, there has been a growing effort to apply genetic algorithms (GAs) to general con-
strained optimization problems [5, 6, 7]. GAs have been widely applied to unconstrained optimization, where
their appeal is their ability to solve ill-conditioned problems. Traditional calculus-based or deterministic
global search methods typically make strong assumptions regarding the objective function, i.e., continuity,
differentiability, satisfaction of the Lipschitz condition, etc., in order to make the search method justifiable.
These conditions also hold for any linear and nonlinear constraints of a constrained optimization problem.
It is our expectation that GAs can overcome these limitations as well.
This paper is organized as follows. Section 2 describes existing methods
used to solve constrained optimization problems with GAs. Section 3 introduces and explains the concept of
non-stationary penalty functions for solving general nonlinear programming problems. Section 4 details
the results obtained from applying these techniques to four test cases. The last section of this paper
states the conclusions developed from the experiments and the direction of future research.

2 Previous GA Constrained Optimization Methods


Traditional GAs have encompassed constraints in the form of bounds on the variables. However, only recently
have researchers begun to attack general constrained optimization problems. The difficulty of using GAs in
constrained optimization is that the genetic operators used to manipulate the chromosomes of the population
often produce solutions which are not feasible. Presently, there are four methods used to handle constraints
with GAs: rejection of the offspring, repair algorithms, modified genetic operators, and penalty functions.
When infeasible offspring have been created, these offspring can be rejected from entering the population.
This technique can spend a great deal of time in the evaluation and rejection of infeasible solutions.
When an infeasible solution has been created by an operator, special repair algorithms for that operator can

0-7803-1899-4/94 $4.00 © 1994 IEEE


be employed to restore feasibility. However, repair algorithms are problem specific, the children often do not
resemble their parents, and restoring feasibility may be as difficult as the original optimization problem [5].
Michalewicz has developed a system, GENOCOP, that can solve any linearly constrained problem using
a set of special genetic operators. The operators in the system are designed to exploit the convex
region produced by the linear constraints: any linear combination of two feasible points in a convex region
will produce another feasible point [5, 6]. The limitation of this system is that it cannot handle nonlinear
constraints because they do not necessarily produce convex regions.
Penalty function techniques transform the constrained problem into an unconstrained problem by pe-
nalizing those solutions which are infeasible. It has been shown that penalty functions based on the distance
from feasibility outperform those based upon the number of violated constraints [7]. The main difficulty
with penalty functions is choosing the degree to which each constraint is penalized. Researchers have noted that if
one imposes a high degree of penalty, more emphasis is placed on obtaining feasibility and the GA will move
very quickly towards a feasible solution; the system will then tend to converge to a feasible point even if it is far
from optimal. However, if one imposes a low degree of penalty, less emphasis is placed on feasibility, and the
system may never converge to a feasible solution [2, 5].
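The contrast between distance-based and count-based penalties noted in [7] can be sketched in a few lines. This is an illustrative example of ours, not code from the paper; the two constraints are hypothetical:

```python
# Illustrative only: two hypothetical constraints of the form g_i(x) >= 0.
def violations(x):
    g = [x[0] + x[1] - 1.0,   # g1: x0 + x1 >= 1
         4.0 - x[0] ** 2]     # g2: x0^2 <= 4
    return [max(0.0, -gi) for gi in g]  # amount each constraint is violated by

def count_penalty(x, rho=10.0):
    # penalize only the *number* of violated constraints
    return rho * sum(1 for v in violations(x) if v > 0.0)

def distance_penalty(x, rho=10.0):
    # penalize the *distance* from feasibility
    return rho * sum(violations(x))

nearly_feasible = (0.49, 0.49)  # violates g1 by just 0.02
far_infeasible = (-5.0, -5.0)   # violates both constraints badly
```

The count-based penalty charges the far-infeasible point only twice as much as the nearly feasible one, while the distance-based penalty separates them by three orders of magnitude, which is the behavior the comparison above favors.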

3 Non-Stationary (NS) Penalty Methods


The use of penalty methods to solve nonlinear programming problems originated in 1943 with Courant.
Courant, however, used these penalty methods only to obtain solutions to differential equations. It was
Fiacco and McCormick, and Zangwill, who started applying these penalty methods to general NP prob-
lems in the late 1960's [1, 2]. These penalty methods solve the general NP problem through a sequence of
unconstrained optimization problems. These methods force infeasible points toward the feasible region by
step-wise increasing the penalty parameter, ρ_k, used in the penalizing function, P(ρ_k, X). It has been shown that a
solution X which minimizes NP' also minimizes NP as k approaches infinity:

(NP')   minimize F(X, ρ_k) = f(X) + P(ρ_k, X)

where:

lim_{k→∞} ρ_k = ∞   and   lim_{X→feasibility} P(ρ_k, X) = 0
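As a concrete toy instance of this sequential scheme (our sketch, not from the paper): minimize f(x) = x² subject to x >= 1, with the quadratic penalty P(ρ_k, x) = ρ_k · max(0, 1 − x)². Each stage's unconstrained minimizer is ρ_k/(1 + ρ_k), which approaches the constrained optimum x* = 1 as ρ_k → ∞:

```python
def F(x, rho):
    # F(x, rho_k) = f(x) + P(rho_k, x) for f(x) = x^2 and constraint x >= 1
    return x * x + rho * max(0.0, 1.0 - x) ** 2

def minimize_stage(rho, lo=0.0, hi=2.0, steps=20001):
    # a dense grid search stands in for a real unconstrained optimizer
    xs = (lo + (hi - lo) * i / (steps - 1) for i in range(steps))
    return min(xs, key=lambda x: F(x, rho))

minimizers = [minimize_stage(rho) for rho in (1.0, 10.0, 100.0, 1000.0)]
# analytically the stage minimizers are rho/(1+rho): 0.5, 0.9090..., 0.9900..., 0.9990...
```

Each successive minimizer lies closer to the feasible region, mirroring the limit conditions above.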

3.1 Non-Stationary Methods with GA’s


This sequential unconstrained optimization approach can be used with genetic algorithms. Michalewicz
has used such an approach in his GENOCOP II system [6]. This system runs a genetic algorithm with
a constant penalty, then increases the penalty and runs the GA again. This procedure repeats until an
acceptable solution is found. The GENOCOP II system produces a series of solutions by initially optimizing
the unconstrained problem and then gradually optimizing the constrained function. We incorporate a step-
wise penalty increase by increasing the penalty, ρ_k, within a single run of the genetic algorithm:

ρ_k = C × k,   where C is a constant and k is the generation number

Our penalizing function, P(ρ_k, X), is based on the sum of the violated constraints, SVC(β, X):

D_i(X) = 0 if g_i(X) >= -ε, and |g_i(X)| otherwise,   1 <= i <= m

D_j(X) = 0 if -ε <= h_j(X) <= ε, and |h_j(X)| otherwise,   1 <= j <= p

SVC(β, X) = Σ_{i=1}^{m} D_i(X)^β + Σ_{j=1}^{p} D_j(X)^β

We introduce a family of penalty functions based upon traditional calculus-based methods:

P(α, β) = ρ_k^α × SVC(β, X)

Additionally, we tried a family of methods whose origin comes from simulated annealing:

R(α, β) = e^{P(α, β)}

Four variations of the penalty methods were tested. The first three penalty methods were based on the
traditional family: P(1,1), P(1,2), and P(2,2). The fourth variation was based on the simulated annealing
family, R(1,1).

4 Test Cases
To test the performance of the system, four test cases were selected and evaluated using several criteria.
The average and standard deviation of the best solution from each of the 10 replications is reported, as well
as the average of the sum of the violated constraints of the best solution for each run. The "% Feasible" is the
percentage of the runs in which the best solution was a feasible solution. The "Best" (Distance) is the minimum
solution of the 10 runs and its corresponding sum of constraint violations. The "Best Feasible" (Distance) is
the most feasible solution from the 10 runs and its corresponding sum of constraint violations.
A floating point representation was used along with geometric ranking selection with normalization.
The same six operators were used as in GENOCOP [5, 6]. However, for non-linear constraints, they
do not maintain feasibility. For each test case, we made 10 replications using the same random seed for
each method. All runs were performed with the following GA parameters: popsize = 80, k = 28 (number of
parents in each generation), b = 2 (coefficient for non-uniform mutation), a = 0.25 (parameter of arithmetical
crossover), and C = 0.5 and 0.05 (constant for the P family and R family, respectively).
We have been able to solve several problems similar to Test Case #1 involving two to three
variables and constraints. Therefore, the set of problems was chosen to illustrate the potential of the
proposed methods for solving difficult problems. These problems include nonlinear equalities, involving up
to 8 variables and 6 binding constraints.

4.1 Test Case #1


This problem (taken from [3]) is

minimize f(x) = (x1 - 10)^3 + (x2 - 20)^3

subject to the nonlinear constraints:

(x1 - 5)^2 + (x2 - 5)^2 - 100 >= 0.0

-(x1 - 6)^2 - (x2 - 5)^2 + 82.81 >= 0.0

and bounds: 13 <= x1 <= 100 and 0 <= x2 <= 100.


The known global solution is x* = (14.095, 0.84296) with f(x*) = -6961.81381. P(1,1), P(2,2), and
R(1,1) consistently returned the optimal answer. However, P(1,2) had problems due to the selective pressure
being too small.
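The exponents in the objective were lost in reproduction; the cubic form used here is implied by the reported optimum, at which both constraints are active. A quick sanity check of the problem as reconstructed:

```python
def f(x1, x2):
    # objective as reconstructed: cubic terms are implied by f(x*) = -6961.81381
    return (x1 - 10.0) ** 3 + (x2 - 20.0) ** 3

def g1(x1, x2):
    return (x1 - 5.0) ** 2 + (x2 - 5.0) ** 2 - 100.0

def g2(x1, x2):
    return -((x1 - 6.0) ** 2) - (x2 - 5.0) ** 2 + 82.81

x_star = (14.095, 0.84296)
# f(*x_star) is approximately -6961.81, and g1, g2 both sit at ~0 (binding)
```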

"
Std. Dev. 0.96693163 4.9265354 0.96693163 0.99235469
-
Avg Dist. 0.0 0.0481 0.0 1.68 x lo-'
% Feasible 100% 0% 100% 90%
Best -6961.7853 -7028.0926 -6961.2399 -6961.8253
(Distance) -(O.O) (0.05711) (0.0) (1.68 x
Best Fease. -6961.7853 -7011.2646 -6961.2399 -6961.7853
(Distance) (0.0) (0.04252) (0.0) (0.0)

Table 1: Results for Test Case #1

4.2 Test Case #2
This problem (problem 100 taken from [4]) is

minimize f(x) = (x1 - 10)^2 + 5(x2 - 12)^2 + x3^4 + 3(x4 - 11)^2 +
                10 x5^6 + 7 x6^2 + x7^4 - 4 x6 x7 - 10 x6 - 8 x7

subject to the nonlinear constraints:

127 - 2 x1^2 - 3 x2^4 - x3 - 4 x4^2 - 5 x5 >= 0.0

282 - 7 x1 - 3 x2 - 10 x3^2 - x4 + x5 >= 0.0

196 - 23 x1 - x2^2 - 6 x6^2 + 8 x7 >= 0.0

-4 x1^2 - x2^2 + 3 x1 x2 - 2 x3^2 - 5 x6 + 11 x7 >= 0.0

The best known solution to the problem has a value of f(x*) = 680.6300573. This point is slightly
infeasible: its sum of violated constraints was 0.90 × 10^-7, which is less than ε, the criterion used for
machine precision.

              P(1,1)      P(1,2)      P(2,2)      R(1,1)
Avg. Dist.                                        0.1322
Best
(Distance)    (0.0)       (0.0)       (0.0)       (0.2178)
Best Feas.    681.76981   682.23860   681.45717   680.8643
(Distance)    (0.0)       (0.0)       (0.0)       (0.04986)

Table 2: Results for Test Case #2

P(1,1), P(1,2), and P(2,2) found solutions which were completely feasible and only 0.12% higher than
the known best solution. The R(1,1) penalizing function encountered difficulties due to the initial sum of
the violated constraints being very large, thus leading to overflows.
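The overflow is easy to reproduce: Python's `math.exp` raises `OverflowError` once its argument exceeds roughly 709, so a huge initial violation sum blows up the R family immediately. A capped exponent (our workaround, not the paper's) keeps the evaluation finite:

```python
import math

def safe_exp(p, cap=700.0):
    # clamp the exponent so an enormous sum of violated constraints stays finite
    return math.exp(min(p, cap))

try:
    math.exp(1e10)       # a violation sum of 1e10 overflows the bare exponential
    overflowed = False
except OverflowError:
    overflowed = True
```

`safe_exp(1e10)` instead returns a large but finite float, at the cost of flattening the penalty above the cap.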

4.3 Test Case #3


This problem (taken from [4]) is

minimize f(x)

subject to the linear and nonlinear constraints:

x4 - x3 + 0.55 >= 0.0

x3 - x4 + 0.55 >= 0.0

1000 sin(-x3 - 0.25) + 1000 sin(-x4 - 0.25) + 894.8 - x1 = 0.0

1000 sin(x3 - 0.25) + 1000 sin(x3 - x4 - 0.25) + 894.8 - x2 = 0.0

1000 sin(x4 - 0.25) + 1000 sin(x4 - x3 - 0.25) + 1294.8 = 0.0

and bounds: 0 <= xi <= 1200, i = 1, 2 and -0.55 <= xi <= 0.55, i = 3, 4.
The best known optimum for Test Case #3 is f(x*) = 5126.4981. According to [8], this point is not
completely feasible because its sum of violated constraints was 0.75 × 10^-7, which is still less than ε.

              P(1,1)       P(1,2)      P(2,2)           R(1,1)
(Distance)    (0.0002236)  (0.000189)  (9.198 × 10^-5)  (0.0005527)

Table 3: Results for Test Case #3
No method returned a feasible solution, due to the three equality constraints. Had we manipulated the
constants used, or allowed the simulation to run longer, we might have been able to achieve a more feasible
point.

4.4 Test Case #4


This problem (problem 369 taken from [8]) is

minimize f(x) = x1 + x2 + x3

subject to the linear and nonlinear constraints:

              P(1,1)      P(1,2)      P(2,2)      R(1,1)
Avg.          7244.2786   4181.3247   7535.0518   7732.7845
Std. Dev.     107.75158   50.077184   257.78297   571.75755
Avg. Dist.    0.0         0.5079      0.0         0.0
% Feasible    100%        0%          100%        100%
Best          7068.6880   4112.324    7235.8714   7173.7845
(Distance)    (0.0)       (0.505)     (0.0)       (0.0)
Best Feas.    7068.6880   4245.2536   7235.8714   7173.7845
(Distance)    (0.0)       (0.4732)    (0.0)       (0.0)

Table 4: Results for Test Case #4

5 Conclusions
Penalty functions have been used to transform a constrained optimization problem into an unconstrained
optimization problem. Traditional calculus-based penalty methods gradually increase the penalty to obtain
the optimal feasible value. We have incorporated this concept of a non-stationary penalty function by
increasing the penalty proportionally to the generation number. The goal is to allow the GA to explore more
of the space before confining it to the feasible region. However, if enough pressure is not placed on the GA,
it may never converge to a feasible solution. Therefore, the constant C plays an important role in controlling
the convergence rate.
When comparing the two traditional methods, P(1,1) and P(1,2), the following observation was noted.
As the constraints become barely violated (i.e., distance from feasibility << 1), it takes a greater selective
pressure to force that solution to feasibility. Test Case #1 and Test Case #4 demonstrated this fact: none
of the points were feasible using just P(1,2). P(2,2) squares the generation number, putting a greater
selective pressure on the search and forcing it to converge to feasible solutions where P(1,2) did not. P(1,1)
on average did better than the other methods; however, it takes longer to converge to a feasible optimal
solution than the other three methods.
With all four methods, we encountered problems with constraint normalization. This was especially
apparent in Test Case #3 with the R(1,1) method. Initial points in the population violated one constraint
by 1 × 10^10, which would lead to overflow errors for the exponential. This also led to this constraint
dominating the GA for all methods. Therefore, we would like to extend this method to include some
constraint normalization techniques.
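One candidate normalization (our sketch; the paper leaves the technique open) is to scale each constraint's violation by the worst violation observed in the initial population, so that a constraint violated by 10^10 and one violated by 10^-2 contribute on the same order:

```python
def initial_scales(population, ineq):
    """Worst initial violation of each constraint g_i(x) >= 0
    (or 1.0 if the constraint starts satisfied everywhere)."""
    scales = []
    for g in ineq:
        worst = max(max(0.0, -g(x)) for x in population)
        scales.append(worst if worst > 0.0 else 1.0)
    return scales

def normalized_svc(x, ineq, scales):
    # each term is at most ~1 for points no worse than the initial population
    return sum(max(0.0, -g(x)) / s for g, s in zip(ineq, scales))

# example: one constraint violated by 1e10 initially, another by only 1e-2
ineq = [lambda x: x[0] - 1e10, lambda x: x[1] - 1e-2]
pop = [(0.0, 0.0)]
scales = initial_scales(pop, ineq)
# normalized_svc((0.0, 0.0), ineq, scales) gives 2.0: equal influence
```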
For each test case the parameters were fixed; there was no attempt to optimize any of the standard pa-
rameters associated with GAs, or the penalty methods. By simply running the GA longer, we are guaranteed
to obtain a more feasible solution. We would like to examine the effect the parameters (i.e., the constant C)
have on obtaining feasible optimal solutions.
In GENOCOP II and GENOCOP, linear constraints are handled through special genetic operators. We
would like to adopt this idea to see if it would help with solving problems with linear constraints, like Test
Case #3 and Test Case #4. As already stated, the linear constraints produce a convex hull, and the genetic
operators will force the GA to stay in this convex hull. This convex hull acts as a limiting covering convex
set of the nonlinear region. Confining the search space to this hull should help with convergence and
optimality. Other future directions include the use of a double elitist model, where not only the best individual
is saved from generation to generation but also the best feasible point.
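The proposed double elitist survival step might look like the following sketch. The names and structure here are ours, not the paper's, and `select_rest` stands for whatever selection scheme the GA already uses:

```python
def double_elitist(population, fitness, is_feasible, select_rest):
    """Keep the best individual overall AND the best feasible individual."""
    best = min(population, key=fitness)          # best regardless of feasibility
    elites = [best]
    feasible = [ind for ind in population if is_feasible(ind)]
    if feasible:
        best_feasible = min(feasible, key=fitness)
        if best_feasible != best:
            elites.append(best_feasible)
    # fill the remaining slots with the GA's usual selection scheme
    return elites + select_rest(population, len(population) - len(elites))

# toy usage: individuals are numbers, lower fitness is better,
# and "feasible" means the value is at least 2
pop = [1, 2, 3, 4]
survivors = double_elitist(pop, fitness=lambda v: v,
                           is_feasible=lambda v: v >= 2,
                           select_rest=lambda p, n: p[:n])
# survivors retains both 1 (best overall) and 2 (best feasible)
```

This guards against the situation seen with P(1,2), where the best individual overall is infeasible and an ordinary single-elite model would let the best feasible point be lost.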

References
[1] Avriel, M., Nonlinear Programming: Analysis and Methods, Prentice-Hall, Inc., Englewood Cliffs, 1976.
[2] Bazaraa, M.S., Nonlinear Programming: Theory and Algorithms, Wiley, New York, 1979.
[3] Floudas, C.A. and Pardalos, P.M., A Collection of Test Problems for Constrained Global Optimization
Algorithms, Springer-Verlag, Lecture Notes in Computer Science, Vol. 455, 1987.
[4] Hock, W. and Schittkowski, K., Test Examples for Nonlinear Programming Codes, Springer-Verlag,
Lecture Notes in Economics and Mathematical Systems, Vol. 187, 1981.
[5] Michalewicz, Z., Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, AI
Series, New York, 1992.
[6] Michalewicz, Z. and Attia, N., Genetic Algorithm + Simulated Annealing = GENOCOP II: A Tool for
Nonlinear Programming, submitted for publication.
[7] Richardson, J., Palmer, M., Liepins, G. and Hilliard, M., Some Guidelines for Genetic Algorithms with
Penalty Functions, Proceedings of the Third International Conference on Genetic Algorithms, 1989.
[8] Schittkowski, K., More Test Examples for Nonlinear Programming Codes, Springer-Verlag, Lecture Notes
in Economics and Mathematical Systems, Vol. 282, 1987.
