Genetic Algorithms and Fuzzy Multiobjective Optimization
Series Editors
Barth, Peter
Logic-Based 0-1 Constraint Programming
Jones, Christopher V.
Visualization and Optimization
Woodruff, David L.
Advances in Computational & Stochastic Optimization, Logic Programming, and Heuristic Search
Klein, Robert
Scheduling of Resource-Constrained Projects
Bierwirth, Christian
Adaptive Search and the Management of Logistics Systems
Stilman, Boris
Linguistic Geometry: From Search to Construction
GENETIC ALGORITHMS AND
FUZZY MULTIOBJECTIVE OPTIMIZATION
MASATOSHI SAKAWA
Department of Artificial Complex Systems Engineering
Graduate School of Engineering
Hiroshima University
Higashi-Hiroshima, 739-8527, Japan
Preface IX
1. INTRODUCTION 1
1.1 Introduction and historical remarks 1
1.2 Organization of the book 7
2. FOUNDATIONS OF GENETIC ALGORITHMS 11
2.1 Outline of genetic algorithms 11
2.2 Coding, fitness, and genetic operators 15
3. GENETIC ALGORITHMS FOR 0-1 PROGRAMMING 29
3.1 Introduction 29
3.2 Multidimensional 0-1 knapsack problems 30
3.3 0-1 programming 39
3.4 Conclusion 52
4. FUZZY MULTIOBJECTIVE 0-1 PROGRAMMING 53
4.1 Introduction 53
4.2 Fuzzy multiobjective 0-1 programming 54
4.3 Fuzzy multiobjective 0-1 programming with fuzzy numbers 70
4.4 Conclusion 81
5. GENETIC ALGORITHMS FOR INTEGER PROGRAMMING 83
5.1 Introduction 83
5.2 Multidimensional integer knapsack problems 84
5.3 Integer programming 98
5.4 Conclusion 104
6. FUZZY MULTIOBJECTIVE INTEGER PROGRAMMING 107
6.1 Introduction 107
6.2 Fuzzy multiobjective integer programming 108
6.3 Fuzzy multiobjective integer programming with fuzzy numbers 118
INTRODUCTION
revised and extended edition was issued in 1996, and nowadays, genetic
algorithms are considered to make a major contribution to optimiza-
tion, adaptation, and learning in a wide variety of unexpected fields
[11, 13, 20, 24, 32, 39, 40, 43, 60, 72, 74, 112, 127, 165, 189].
As we look at recent applications of genetic algorithms to optimization
problems, especially to various kind of discrete optimization problems,
global optimization problems, or other hard optimization problems, we
can see continuing advances [13, 55, 60, 61, 111, 136]. However, there
seemed to be no genetic algorithm approach to deal with multiobjective
programming problems until Schaffer [187] first proposed the so-called
Vector Evaluated Genetic Algorithm (VEGA) as a natural extension of
Grefenstette's GENESIS program [69] to include multiobjective nonlinear
functions. Although VEGA was implemented to find Pareto optimal
solutions of several multiobjective nonlinear optimization test problems,
the algorithm seems to have bias toward some Pareto optimal solutions.
In his famous book, Goldberg [66] suggested a nondominated sorting
procedure to overcome the weakness of VEGA. By extending the idea of
Goldberg [66], Fonseca and Fleming [62, 63] proposed the Multiple Objective
GA (MOGA). Horn, Nafpliotis, and Goldberg [76] introduced the
Niched Pareto GA (NPGA) as an algorithm for finding diverse Pareto
optimal solutions based on Pareto domination tournaments and sharing
on the nondominated surface. Similarly, to eliminate the bias in VEGA,
Srinivas and Deb [197] proposed the Nondominated Sorting GA (NSGA)
on the basis of Goldberg's idea of nondominated sorting together with
a niche and speciation method. However, these papers focused on multiobjective
nonlinear programming problems with continuous variables
and were mainly weighted toward finding Pareto optimal solutions, not
toward deriving a compromise or satisficing 1 solution for the decision
maker.
As a natural extension of single-objective 0-1 knapsack problems, in
the mid-1990s, Sakawa et al. [138, 144, 148] formulated multiobjective
multidimensional 0-1 knapsack problems by assuming that the decision
maker may have a fuzzy goal for each of the objective functions. Once
the linear membership functions that well-represent the fuzzy goals of
the decision maker have been elicited, the fuzzy decision of Bellman and
Zadeh [22] can be adopted for combining them. In order to derive a
compromise solution for the decision maker by solving the formulated
problem, genetic algorithms with double strings that decode an individual
represented by a double string to the corresponding feasible solution
were proposed [138, 144, 148]. Moreover, Sakawa et al. [145] proposed
genetic algorithms with double strings for multidimensional integer
knapsack problems through the use of information of
optimal solutions to the corresponding continuous relaxation problems.
Furthermore, Sakawa et al. extended genetic algorithms with double
strings based on reference solution updating for 0-1 programming prob-
lems into integer programming problems [146]. For dealing with general
integer programming problems involving positive and negative coeffi-
cients, Sakawa et al. [140, 155] further extended coding and decoding of
the genetic algorithms with double strings based on reference solution
updating for multiobjective multidimensional 0-1 knapsack problems.
Since De Jong [46] considered genetic algorithms in a function opti-
mization setting, genetic algorithms have attracted considerable atten-
tion as global methods for complex function optimization. However,
many of the test function minimization problems solved by researchers
during the past 20 years involve only specified domains of
variables. Only recently have several approaches been proposed for solving
general nonlinear programming problems through genetic algorithms
[60, 88, 112, 165].
For handling nonlinear constraints of general nonlinear programming
problems through genetic algorithms, most of them are based on the
concept of penalty functions, which penalize infeasible solutions [60, 88,
112, 114, 119]. Although several ideas have been proposed about how the
penalty function is designed and applied to infeasible solutions, penalty-
based methods have several drawbacks, and the experimental results on
many test cases have been disappointing [112, 119], as pointed out in
the field of nonlinear optimization.
In 1995, as a new constraint-handling method for avoiding many drawbacks
of these penalty methods, Michalewicz and Nazhiyath [118] and
Michalewicz and Schoenauer [119] proposed GENOCOP (GEnetic algo-
rithm for Numerical Optimization of COnstrained Problems) III for solv-
ing general nonlinear programming problems. GENOCOP III incorpo-
rates the original GENOCOP system for linear constraints [112, 113, 116]
but extends it by maintaining two separate populations in which a
development in one population influences evaluations of individuals in the
other population. The first population consists of so-called search points
that satisfy linear constraints of the problem as in the original GENO-
COP system. The second population consists of so-called reference
points that satisfy all constraints of the problem. Recent excellent sur-
vey papers of Michalewicz and Schoenauer [119] and Michalewicz and
associates [115] are devoted to reviewing and classifying the major tech-
niques for constrained optimization problems.
for solving the job-shop scheduling problems proposed through the mid-
1990s can be found in the invited review of Blazewicz et al. [26].
In 1997, by incorporating the concept of similarity among individuals
into the genetic algorithm that uses a set of completion times as in-
dividual representation and the Giffler and Thompson algorithm-based
crossover [219], Sakawa and Mori [157] proposed an efficient genetic al-
gorithm for job-shop scheduling problems.
However, when formulating job-shop scheduling problems that closely
describe and represent the real-world problems, various factors involved
in the problems are often only imprecisely or ambiguously known to
the analyst. This is particularly true in the real-world situations when
human-centered factors are incorporated into the problems. In such sit-
uations, it may be more appropriate to consider fuzzy processing time
because of man-made factors and fuzzy due date, tolerating a certain
amount of delay in the due date [81, 139, 206]. Recently, in order to
reflect such situations, a mathematical programming approach to a single
machine fuzzy scheduling problem with fuzzy precedence relation
[81] and job-shop scheduling incorporating fuzzy processing time using
genetic algorithms [206] have been proposed.
In order to more suitably model actual scheduling situations, Sakawa
and Mori [158, 159] formulated job-shop scheduling problems incorpo-
rating fuzzy processing time and fuzzy due date. On the basis of the
concept of an agreement index for fuzzy due date and fuzzy completion
time for each job, the formulated problem is interpreted as seeking a
schedule that maximizes the minimum agreement index. For solving the
formulated fuzzy job-shop scheduling problems, an efficient genetic al-
gorithm for job-shop scheduling problems proposed by Sakawa and Mori
[157] is extended to deal with the fuzzy due dates and fuzzy completion
time.
Unfortunately, however, in these fuzzy job-shop scheduling problems,
only a single objective function is considered, and extensions to multiobjective
job-shop scheduling problems are desired for reflecting real-world
situations more adequately. On the basis of the agreement index of fuzzy
due date and fuzzy completion time, multiobjective job-shop scheduling
problems with fuzzy due date and fuzzy processing time are formulated
as three-objective problems that not only maximize the minimum agree-
ment index but also maximize the average agreement index and mini-
mize the maximum fuzzy completion time. Moreover, by considering
the imprecise nature of human judgments, the fuzzy goals of the deci-
sion maker for the objective functions are introduced. After eliciting the
linear membership functions through the interaction with the decision
maker, the fuzzy decision of Bellman and Zadeh or minimum operator
FOUNDATIONS OF GENETIC
ALGORITHMS
converge to the best string s*, which hopefully represents the optimal
or approximate optimal solution x* to the optimization problem.
In genetic algorithms, the three main genetic operators-reproduction,
crossover, and mutation-are usually used to create the next generation.
Reproduction: According to the fitness values, increase or decrease
the number of offspring for each individual in the population P(t).
Crossover: Select two distinct individuals from the population at ran-
dom and exchange some portion of the strings between the strings
with a probability equal to the crossover rate Pc.
Mutation: Alter one or more genes of a selected individual with a
probability equal to the mutation rate Pm.
The crossover probability is chosen so that recombination of promising
strings (highly fit individuals) increases without excessive disruption.
Generally, the crossover rate lies between 0.6 and 0.9. Since mutation
occurs only occasionally, the probability of performing the mutation
operation is quite low. Typically, the mutation rate lies between 0.001
and 0.01.
After the preceding discussions, the fundamental procedure of genetic
algorithms can be summarized as shown in Figure 2.2.
Figure 2.2. Fundamental structure of genetic algorithms
(1) Genetic algorithms work with a coding of the solution set, not the
solutions themselves.
(2) Genetic algorithms search from a population of solutions, not a sin-
gle solution.
(3) Genetic algorithms use fitness information, not derivatives or other
auxiliary knowledge.
(4) Genetic algorithms use probabilistic transformation rules, not de-
terministic ones.
f' = af + b    (2.1)

where the coefficients a and b are chosen so that the average scaled fitness
equals the average raw fitness and the maximum scaled fitness satisfies
f'_max = c_mult · f_avg.

[Figure 2.3: linear scaling of raw fitness f to scaled fitness f', showing the
identity line f' = f and the scaled line reaching f'_max = c_mult · f_avg]

Note, however, that low fitness values can become negative after scaling, as shown in
Figure 2.4.
[Figure 2.4: scaled fitness f' against raw fitness f, with low raw fitness
values mapped below zero]
Figure 2.5 illustrates the roulette selection. Observe that each indi-
vidual is assigned a slice of a circular roulette wheel, the size of the slice
being proportional to the individual's fitness. Then, conceptually, the
wheel is spun N times, where N is the number of individuals in the
population. On each spin, an individual marked by the roulette wheel
pointer is selected as a parent for the next generation.
Step 2: Generate a real random number rand() in [0, 1], and set s =
rand() × f_sum.

Step 3: Obtain the minimal k such that f_1 + f_2 + ··· + f_k ≥ s, and select the
kth individual at generation t + 1.
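Steps 2 and 3 of the roulette selection above can be sketched as follows (the function name and signature are illustrative, not from the text):

```python
import random

def roulette_select(fitnesses):
    """Return the index k of the individual chosen by one spin of the
    wheel: the minimal k with f_1 + ... + f_k >= s (steps 2 and 3)."""
    f_sum = sum(fitnesses)
    s = random.random() * f_sum          # Step 2: s = rand() * f_sum
    partial = 0.0
    for k, f in enumerate(fitnesses):    # Step 3: first k whose prefix sum >= s
        partial += f
        if partial >= s:
            return k
    return len(fitnesses) - 1            # guard against floating-point rounding
```

An individual with zero fitness is never selected, and selection frequency grows in proportion to fitness, as the wheel analogy suggests.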
N_i = N f_i / ∑_{i=1}^{N} f_i    (2.9)

Then, the integer part of N_i denotes the deterministic number of the ith
individual preserved in the next population. The fractional part of N_i is
regarded as the probability for one more copy of individual i to survive; in
other words, N − ∑_{i=1}^{N} ⌊N_i⌋ individuals are determined on the basis
of this probability.
An example of reproduction by expected-value selection is shown in
Table 2.1.
fitness 6 1 10 11 17 32 4 12 5 2
expected value 0.6 0.1 1.0 1.1 1.7 3.2 0.4 1.2 0.5 0.2
number of offspring 1 0 1 1 2 3 0 1 1 0
Ranking selection means that only the rank order of the fitness of the
individuals within the current population determines the probability of
selection. In ranking selection, the population is sorted from the best
to the worst, the expected value of each individual depends on its rank
rather than on its absolute fitness, and the selection probability of each
individual is assigned according to the ranking rather than its raw fitness.
There is no need to scale fitness in ranking selection, because absolute
differences in fitness are obscured. There are many methods to assign a
selection probability to each individual on the basis of ranking, including
linear and nonlinear ranking methods.
In the linear ranking method proposed by Baker [18], each individual
in the population is ranked in decreasing order of fitness, and the selection
probability of each individual i in the population is determined by

p_i = (1/N) [η⁺ − (η⁺ − η⁻)(i − 1)/(N − 1)]    (2.10)
where the constants η⁺ and η⁻ denote the maximum and minimum
expected values, respectively, and determine the slope of the linear
function. The condition ∑_{i=1}^{N} p_i = 1 requires that 1 ≤ η⁺ ≤ 2 and
η⁻ = 2 − η⁺ are fulfilled. Normally, a value of η⁺ = 1.1 is recommended.
An example of reproduction by linear ranking selection with 7]+ = 2
and rounding is shown in Table 2.2.
fitness 32 17 12 11 10 6 5 4 2 1
rank 1 2 3 4 5 6 7 8 9 10
number of offspring 2 2 2 1 1 1 1 0 0 0
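Equation (2.10) can be transcribed directly; `linear_ranking_probs` is an illustrative helper that returns the selection probabilities for ranks 1 (best) through N.

```python
def linear_ranking_probs(n, eta_plus=1.1):
    """Selection probabilities p_i of Eq. (2.10) for ranks i = 1..n
    (rank 1 = best), with eta_minus = 2 - eta_plus."""
    eta_minus = 2.0 - eta_plus
    return [(eta_plus - (eta_plus - eta_minus) * (i - 1) / (n - 1)) / n
            for i in range(1, n + 1)]
```

With N = 10 and η⁺ = 2, multiplying each p_i by N and rounding reproduces the offspring counts of Table 2.2.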
2.2.3.2 Crossover
It is well-recognized that the main distinguishing feature of genetic
algorithms is the use of crossover. Crossover, also called recombination,
is an operator that creates new individuals from the current population.
The main role of this operator is to combine pieces of information coming
from different individuals in the population. Actually, it recombines
genetic material of two parent individuals to create offspring for the next
generation. The basic crossover operation, introduced by Holland [75],
is a three-step procedure. First, two individuals are selected at ran-
dom from the population of parent strings generated by the selection.
Second, one or more string locations are selected as crossover points de-
lineating the string segment to exchange. Finally, parent string segments
are exchanged and then combined to produce two resulting offspring in-
dividuals. The proportion of parent strings undergoing crossover during
a generation is controlled by the crossover rate Pc E [0,1], which deter-
mines how frequently the crossover operator is invoked.
In addition to the crossover rate Pc and the number of crossover points
CP, the generation gap G was introduced by De Jong [46] to permit
overlapping populations, where G = 1 and 0 < G < 1 imply
nonoverlapping and overlapping populations, respectively.
The general algorithm of crossover is summarized as follows:
(a) Set j := h.

(b) Find j' such that s_X'(j') = s_Y(j). Then, interchange s_X'(j) with
s_X'(j') and set j := j + 1.
2.2.3.3 Mutation
It is well-recognized that a mutation operator plays the role of local
random search in genetic algorithms. For bit strings, the following
algorithm of mutation of bit-reverse type is proposed.
Inversion
Step 0: Set r := 1.
Step 3: Invert the substring between the two inversion points h and k
(h ≤ k).
Step 4: If r < N, set r := r + 1 and return to step 1. Otherwise, stop.
An example of inversion with inversion points h = 4 and k = 6 is illustrated as

Parent: 111001110  ⇒  Offspring: 111100110
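Bit-reverse mutation and inversion can be sketched as follows (illustrative helpers; inversion points are 1-based as in the steps above):

```python
import random

def mutate_bit_reverse(bits, p_m):
    """Bit-reverse mutation: flip each gene independently with
    probability p_m."""
    return [1 - b if random.random() < p_m else b for b in bits]

def invert_substring(bits, h, k):
    """Inversion: reverse the substring between the two inversion
    points h and k (1-based, h <= k, both endpoints included)."""
    return bits[:h - 1] + bits[h - 1:k][::-1] + bits[k:]
```

With h = 4 and k = 6, `invert_substring` reproduces the parent/offspring example above.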
3.1 Introduction
Recently, as a natural extension of single-objective 0-1 knapsack prob-
lems, Sakawa et al. [144, 148] formulated multiobjective multidimen-
sional 0-1 knapsack problems by assuming that the decision maker (DM)
may have a fuzzy goal for each of the objective functions. Having elicited
the linear membership functions that represent the fuzzy goals of the
DM well, the fuzzy decision of Bellman and Zadeh [22] is adopted for
combining them. In order to derive a compromise solution for the DM
by solving the formulated problem, GADS that decode an individual
represented by a double string to the corresponding feasible solution
for treating the constraints of the knapsack type have been proposed
[144, 148]. Also, through the combination of the desirable features of
both the interactive fuzzy satisficing methods for continuous variables
[135] and the GADS [144], an interactive fuzzy satisficing method to de-
rive a satisficing solution for the DM to multiobjective multidimensional
0-1 knapsack problems has been proposed [160, 161]. These results are
immediately extended to multiobjective multidimensional 0-1 knapsack
problems involving fuzzy numbers [162].
Unfortunately, however, because these GADS are based mainly on the
decoding algorithm for treating the constraints of the knapsack type,
they cannot be applied to more general 0-1 programming problems in-
volving positive and negative coefficients in both sides of the constraints.
In this chapter, we first revisit GADS for multidimensional 0-1 knap-
sack problems [144, 148] with some modifications and examine their
computational efficiency and effectiveness through computational
experiments. Then we extend the GADS for 0-1 knapsack problems to deal
with more general 0-1 programming problems involving both positive
and negative coefficients in the constraints. New decoding algorithms
for double strings using reference solutions both without and with the
reference solution updating procedure are especially proposed so that
each individual will be decoded to the corresponding feasible solution
for the general 0-1 programming problems. Moreover, to demonstrate
the efficiency and effectiveness of the proposed genetic algorithms, the
proposed methods and the branch and bound method for several numer-
ical examples are compared with respect to the solution accuracy and
computational time.
f(s) = { −cx, if Ax ≤ b
       { 0,   if Ax > b        (3.2)

or

f(s) = −cx − θ · max_{i=1,...,m} {(a_i x − b_i)/b_i, 0}    (3.3)

where a_i and b_i denote the ith row of A and the ith element of b, respectively.
[Figure: double string representation, with the upper row s(j) giving the
index of a variable and the lower row g_s(j) its 0-1 value]
N_i = N f_i / ∑_{i=1}^{N} f_i

is calculated. Then, the integral part of N_i denotes the deterministic
number of the string s_i preserved in the next population. The decimal
part of N_i is regarded as the probability for one more copy of the string
s_i to survive; in other words, N − ∑_{i=1}^{N} ⌊N_i⌋ strings are determined
on the basis of this probability.
3.2.4.2 Crossover
If a single-point or multipoint crossover operator is applied to individuals
represented by double strings, an index s(j) in an offspring may
take the same number that an index s(j') (j ≠ j') takes. Recall that the
same violation occurs in solving TSPs or scheduling problems through
genetic algorithms. As one possible approach to circumvent such violation,
the crossover method called PMX (partially matched crossover) is useful.
It enables us to generate desirable offspring without changing the double
string structure, unlike the ordinal representation [73]. However, in order
to process each element g_s(j) in the double string structure efficiently,
it is necessary to modify some points of the procedures. The PMX for
double strings can be described as follows:
Step 0: Set r := 1.

Step 3: Choose two crossover points h, k (h ≠ k) from {1, 2, ..., n}
at random. Then, set j := h. First, perform operations in steps 4
through 6 for X' and Y.

Step 6: 1) If h < k, let g_{s_X'}(j) := g_{s_Y}(j) for all j such that h ≤ j ≤ k,
and go to step 7. 2) If h > k, let g_{s_X'}(j) := g_{s_Y}(j) for all j such that
1 ≤ j ≤ k or h ≤ j ≤ n, and go to step 7.

Step 7: Carry out the same operations as in steps 4 through 6 for Y'
and X.

It should be noted here that the original PMX for double strings is
extended to deal with the substrings not only between h and k but also
between k and h.
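As a reference point, standard PMX on a plain permutation can be sketched as below; the double-string variant described above additionally carries the lower-string values g_s(j) and handles the wrapped case h > k, both of which this sketch omits.

```python
def pmx(p1, p2, h, k):
    """Standard partially matched crossover (PMX) on permutations:
    the child takes p2's segment [h, k), keeps p1 elsewhere, and
    resolves duplicates through the mapping defined by the segment."""
    n = len(p1)
    mapping = {p2[j]: p1[j] for j in range(h, k)}
    child = list(p1)
    for j in range(h, k):
        child[j] = p2[j]
    for j in list(range(h)) + list(range(k, n)):
        v = p1[j]
        while v in mapping:      # follow the mapping chain out of the segment
            v = mapping[v]
        child[j] = v
    return child
```

The mapping chain guarantees the child is again a permutation, which is exactly the violation that naive one-point crossover on index strings would cause.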
3.2.4.3 Mutation
It is well-recognized that a mutation operator plays a role in local
random search in genetic algorithms. Here, for the lower string of a
double string, mutation of bit reverse type is adopted.
Step 0: Set r := 1.
Step 1: Set j := 1.
Step 0: Set r := 1.
Step 1: Generate a random number rand() in [0, 1]. For a given inversion
rate p_i, if rand() ≤ p_i, then go to step 2. Otherwise, go to step 4.

Step 2: Choose two points h, k (h ≠ k) from {1, 2, ..., n} at random.
Then, set l := h.
Table 3.1. Experimental results for 30 variables and 10 constraints (10 trials)

γ      method     best      average     worst     time          # optimal
0.50   GADS       -9661     -9661.0     -9661     5.97          10/10
       LP_SOLVE   -9661 (optimal)                 7.71 x 10^1   -
0.75   GADS       -12051    -12051.0    -12051    6.24          10/10
       LP_SOLVE   -12051 (optimal)                9.00 x 10^2   -

1 LP_SOLVE [23] solves (mixed integer) linear programming problems. The
implementation of the simplex kernel was mainly based on the text by
Orchard-Hays [125]. The mixed integer branch and bound part was inspired
by Dakin [37].
and 8 times out of 10 trials for γ = 0.50. Furthermore, for γ = 0.25 and
0.50, the required processing time of GADS is about 40 to 50% of that
of LP_SOLVE. As a result, for problems with 50 variables, GADS seems
to be more desirable than LP_SOLVE.
Table 3.2. Experimental results for 50 variables and 20 constraints (10 trials)

γ      method     best      average     worst     time          # optimal
0.50   GADS       -16485    -16483.2    -16476    1.37 x 10^1   8/10
       LP_SOLVE   -16485 (optimal)                3.34 x 10^1   -
0.75   GADS       -21931    -21931.0    -21931    1.47 x 10^1   10/10
       LP_SOLVE   -21931 (optimal)                8.91 x 10^1   -
Table 3.3. Experimental results for 100 variables and 30 constraints (10 trials)

γ      method     best      average     worst     time          # optimal
0.50   GADS       -36643    -36242.1    -35802    4.33 x 10^1   0/10
       LP_SOLVE   -36818 (optimal)                1.50 x 10^3   -
0.75   GADS       -46097    -45892.5    -45771    4.77 x 10^1   0/10
       LP_SOLVE   -46198 (optimal)                1.13 x 10^2   -
Table 3.4. Experimental results for 150 variables and 40 constraints (10 trials)

γ      method     best      average     worst     time          # optimal
0.50   GADS       -52768    -51700.0    -51283    8.57 x 10^1   -
0.75   GADS       -65989    -65285.8    -64708    9.30 x 10^1   0/10
       LP_SOLVE   -66543 (optimal)                4.44 x 10^3   -
Table 3.5. Experimental results for 200 variables and 50 constraints (10 trials)

γ      method     best      average     worst     time          # optimal
0.50   GADS       -67060    -65384.1    -64148    2.85 x 10^2   -
0.75   GADS       -85169    -84341.3    -83657    3.11 x 10^2   0/10
       LP_SOLVE   -86757 (optimal)                1.10 x 10^4   -
x_{s(j)} = { 0, if g_{s(j)} = 0, or if g_{s(j)} = 1 and the constraints are not satisfied
           { 1, if g_{s(j)} = 1 and the constraints are satisfied          (3.9)
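The decoding rule (3.9) for knapsack-type constraints can be sketched as a greedy scan over the double string; the matrix/vector representation and the function name here are illustrative assumptions, not the book's code.

```python
def decode_double_string(s, g, A, b):
    """Greedy decoding of a double string (s, g) in the spirit of
    Eq. (3.9): scanning j = 1..n, x_{s(j)} is set to 1 only if
    g_{s(j)} = 1 and every constraint a_i x <= b_i still holds.
    A is an m x n matrix of nonnegative coefficients (knapsack type),
    s a permutation of 0..n-1, g the 0-1 lower string."""
    m, n = len(A), len(s)
    x = [0] * n
    used = [0.0] * m                      # current left-hand sides a_i x
    for j in range(n):
        var = s[j]
        if g[j] == 1 and all(used[i] + A[i][var] <= b[i] for i in range(m)):
            x[var] = 1
            for i in range(m):
                used[i] += A[i][var]
    return x
```

Because items are admitted only while all constraints remain satisfied, every decoded phenotype is feasible by construction.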
R(ξ) = { ξ, ξ ≥ 0
       { 0, ξ < 0        (3.10)
[Figure: examples of decoding a double string into a feasible solution
using a reference solution x* = (1, 1, 0, 0, 1, 1), with feasibility checks
of the form sum − p_{s(j)} ≤ b and sum + p_{s(j)} ≤ b, backtracking, and
individual modification]
3.3.2.1 Fitness
For general 0-1 programming problems, it seems quite natural to define
the fitness function of each individual s by

f(s) = (cx − ∑_{j∈J_n^+} c_j) / (∑_{j∈J_n^−} c_j − ∑_{j∈J_n^+} c_j)    (3.12)

where J_n^+ = {j | c_j ≥ 0, 1 ≤ j ≤ n} and J_n^− = {j | c_j < 0, 1 ≤ j ≤ n}.
It should be noted here that the fitness f(s) satisfies 0 ≤ f(s) ≤ 1.
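Equation (3.12) can be transcribed directly as a sketch (the function name is mine, not the book's):

```python
def fitness_3_12(c, x):
    """Fitness in the spirit of Eq. (3.12):
    (cx - sum_{j in Jn+} c_j) / (sum_{j in Jn-} c_j - sum_{j in Jn+} c_j).
    The best attainable objective value sum_{j in Jn-} c_j maps to 1;
    the worst, sum_{j in Jn+} c_j, maps to 0."""
    cx = sum(cj * xj for cj, xj in zip(c, x))
    pos = sum(cj for cj in c if cj >= 0)   # sum over Jn+
    neg = sum(cj for cj in c if cj < 0)    # sum over Jn-
    return (cx - pos) / (neg - pos)
```

For c = (−2, 3, −1), the minimizing choice x = (1, 0, 1) gives fitness 1 and the worst choice x = (0, 1, 0) gives fitness 0, consistent with 0 ≤ f(s) ≤ 1.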
Step 12: If g_{s(j)} = x*_{s(j)}, let x_{s(j)} := g_{s(j)} and j := j + 1, and go to step
14. If g_{s(j)} ≠ x*_{s(j)}, go to step 13.

Step 13: 1) In the case of g_{s(j)} = 1: If sum + p_{s(j)} ≤ b, let x_{s(j)} := 1,
sum := sum + p_{s(j)} and j := j + 1, and go to step 14. If sum + p_{s(j)} ≰ b,
let x_{s(j)} := 0 and j := j + 1, and go to step 14. 2) In the case of
g_{s(j)} = 0: If sum − p_{s(j)} ≤ b, let x_{s(j)} := 0, sum := sum − p_{s(j)} and
j := j + 1, and go to step 14. If sum − p_{s(j)} ≰ b, let x_{s(j)} := 1 and
j := j + 1, and go to step 14.

Step 14: If j > n, stop. Otherwise, return to step 12.
In the decoding algorithm, from step 1 to step 4, the values of variables
satisfying only the constraints corresponding to the positive right-hand
side constants are decoded into q, and from step 5 to step 14, each
individual is decoded into the corresponding feasible solution x.
Examples of the decoding algorithm are illustrated in Figure 3.4.
First, in a), according to steps 1 through 4, decode the individual
satisfying the constraints corresponding to the positive right-hand side
constants into q. Then, in b), according to steps 7 through 11, l is found.
In this example, since l = 0, go to step 11. Then, in c), according to
steps 11 through 14, using a reference solution x*, decode the individual
satisfying the constraints into x.
As can be seen from the previous discussions, for general 0-1 programming
problems involving positive and negative coefficients in the
constraints, this newly developed decoding algorithm enables us to decode
the individuals represented by the double strings to the corresponding
feasible solution. However, the diversity of phenotypes x greatly
[Figure 3.4: a) the individual is decoded into q = (1, 0, 1, 1, 0, 0);
b) l = 0 is found; c) using the reference solution x* = (0, 0, 0, 1, 1, 0),
the individual is decoded step by step into a feasible solution x]
3.3.3.1 Fitness
In GADS based on reference solution updating (GADSRSU), two
kinds of fitness functions are defined as

f_1(s) = [(cx − ∑_{j∈J_n^+} c_j) / (∑_{j∈J_n^−} c_j − ∑_{j∈J_n^+} c_j)] · exp(−(1/n) ∑_{j=1}^{n} |g_{s(j)} − x_{s(j)}|)    (3.14)

f_2(s) = (cq − ∑_{j∈J_n^+} c_j) / (∑_{j∈J_n^−} c_j − ∑_{j∈J_n^+} c_j)    (3.15)

where J_n^+ = {j | c_j ≥ 0, 1 ≤ j ≤ n}, J_n^− = {j | c_j < 0, 1 ≤ j ≤ n}, and
the last term of f_1(s) is added so that the smaller the difference between
the genotype g and the phenotype x is, the larger the corresponding
fitness becomes. Observe that f_1(s) and f_2(s) indicate the goodness of
the phenotype x of an individual s and that of the phenotype q of an
individual s, respectively. Using these two kinds of fitness functions,
we attempt to reduce the reference solution dependence. For these two
kinds of fitness functions, the linear scaling technique is used.
Table 3.6. Experimental results for 30 variables and 10 constraints (10 trials)
Table 3.7. Experimental results for 50 variables and 20 constraints (10 trials)
Table 3.8. Experimental results for 100 variables and 30 constraints (10 trials)
3.4 Conclusion
In this chapter, GADS for multidimensional 0-1 knapsack problems
were first revisited with some modifications, and their computational
efficiency and effectiveness were examined through a number of compu-
tational experiments. Then, to deal with more general 0-1 programming
problems involving positive and negative coefficients in both sides of the
constraints, GADS were extended to GADSRS. By introducing a ref-
erence solution with backtracking and individual modification, a new
decoding algorithm for double strings was proposed so that individuals
could be decoded to the corresponding feasible solution for the general
0-1 programming problems. Moreover, to reduce computational time
required for backtracking and individual modification as well as to cir-
cumvent the dependence on the reference solution, through the introduc-
tion of a modified decoding algorithm using a reference solution without
backtracking and individual modification as well as the reference solu-
tion updating procedure, GADSRSU were proposed. From a number
of computational results for several numerical examples, the efficiency
and effectiveness of the proposed GADSRS and GADSRSU were examined.
As a result, the proposed genetic algorithms, especially GADSRSU,
were shown to be efficient and effective with respect to both the solution
accuracy and the processing time. Extensions of the proposed method
to more general cases such as multiobjective multidimensional 0-1 pro-
gramming problems with positive and negative coefficients are now un-
der investigation and will be reported elsewhere. Further extensions to
interactive decision making will also be reported elsewhere.
Chapter 4
4.1 Introduction
In the mid-1990s, as a natural extension of single-objective 0-1 knap-
sack problems, Sakawa et al. [138, 144, 148] formulated multiobjective
multidimensional 0-1 knapsack problems by assuming that the decision
maker may have a fuzzy goal for each of the objective functions. Having
elicited the linear membership functions that well-represent the fuzzy
goals of the decision maker, the fuzzy decision of Bellman and Zadeh
[22] is adopted for combining them. In order to derive a compromise so-
lution for the decision maker by solving the formulated problem, GADS
that decode an individual represented by a double string to the
corresponding feasible solution were proposed [138, 144, 148].

Consider a multiobjective 0-1 programming problem of the form

minimize    (c_1 x, c_2 x, ..., c_k x)
subject to  Ax ≤ b                        (4.1)
            x ∈ {0, 1}^n

where c_i = (c_i1, ..., c_in), i = 1, ..., k, are n-dimensional row vectors;
x = (x_1, ..., x_n)^T is an n-dimensional column vector of 0-1 decision
variables.
μ_i(c_i x) = { 0,                                c_i x ≥ z_i^0
            { (c_i x − z_i^0)/(z_i^1 − z_i^0),   z_i^1 ≤ c_i x ≤ z_i^0    (4.4)
            { 1,                                c_i x ≤ z_i^1

is assumed for representing the fuzzy goal of the DM [135, 225, 228],
where z_i^0 and z_i^1 denote the values of the objective function c_i x whose
degrees of membership are 0 and 1, respectively. These values
are subjectively determined through an interaction with the DM. Figure
4.1 illustrates the graph of the possible shape of the linear membership
function.
As one of the possible ways to help the DM determine z_i^0 and z_i^1, it
is convenient to calculate the individual minimum z_i^min = min_{x∈X} c_i x
and maximum z_i^max = max_{x∈X} c_i x of each objective function under the
given constraint set.
Then, by taking account of the calculated individual minimum and
maximum of each objective function, the DM is asked to assess z_i^0 and
z_i^1 in the closed interval [z_i^min, z_i^max], i = 1, ..., k.
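The linear membership function (4.4) is straightforward to transcribe; this sketch takes z_i^1 and z_i^0 as arguments (the name and signature are illustrative):

```python
def mu_linear(z, z1, z0):
    """Linear membership function in the spirit of Eq. (4.4): degree 1
    at or below z_i^1, degree 0 at or above z_i^0, linear in between
    (z1 < z0 for a minimized objective)."""
    if z >= z0:
        return 0.0
    if z <= z1:
        return 1.0
    return (z - z0) / (z1 - z0)
```

For example, with z^1 = 0 and z^0 = 10, an objective value of 5 receives membership degree 0.5, halfway between the fully satisfactory and fully unsatisfactory levels.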
Zimmermann [225] suggested a way to determine the linear membership
function μ_i(c_i x) by assuming the existence of an optimal solution
x^{i0} of the individual objective function minimization problem under the
[Figure 4.1: linear membership function μ_i(c_i x), equal to 1 for
c_i x ≤ z_i^1 and decreasing linearly to 0 at c_i x = z_i^0]
constraints defined by
together with
maximize_{x∈X}  μ_D(x)    (4.9)
Observe that the value of the aggregation function μ_D(x) can be
interpreted as representing an overall degree of satisfaction with the DM's
multiple fuzzy goals [135, 170, 176, 184, 225]. As the aggregation
function, if we adopt the well-known fuzzy decision of Bellman and Zadeh
or minimum operator

μ_D(x) = min_{i=1,...,k} μ_i(c_i x)    (4.10)
minimize    max_{i=1,...,k} {μ̄_i − μ_i(z_i(x))}
subject to  Ax ≤ b                                  (4.12)
            x ∈ {0, 1}^n
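The scalarized objective of (4.12) can be sketched as below; the augmented term ρ · ∑(μ̄_i − μ_i) is a standard form of the augmented minimax problem and is included here as an assumption, not quoted from the text.

```python
def augmented_minimax(mu_bar, mu, rho=0.0):
    """Scalarized objective of problem (4.12): max_i (mu_bar_i - mu_i),
    optionally augmented by rho * sum_i (mu_bar_i - mu_i).  The
    augmented term is an assumed standard form."""
    devs = [mb - m for mb, m in zip(mu_bar, mu)]
    return max(devs) + rho * sum(devs)
```

Minimizing this quantity over feasible x drives each membership value μ_i as close as possible to its reference level μ̄_i, with the largest shortfall penalized first.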
Table 4.1. Budget, manpower, profit, probability, and waste for each project
maximal search generation I_max = 500, ε = 0.02, and c_mult = 1.8. The
coefficient ρ of the augmented minimax problem is set as 0.005.
Concerning the fuzzy goals of the DM, following Zimmermann [225],
for i = 1, 2, after calculating the individual minimum z_i^min together with
z_i^m, each linear membership function μ_i(c_i x) is determined by choosing
z_i^1 = z_i^min and z_i^0 = z_i^m. For this numerical example, z_1^min = -9126.77,
z_2^min = 0, z_1^m = 0, and z_2^m = 8725 are obtained through GADS.
At each interaction with the DM, the corresponding augmented min-
imax problem is solved through 10 trials of GADS, as shown in Table
4.2.
The augmented minimax problem is solved for the initial reference
membership levels, and the DM is supplied with the corresponding
Pareto optimal solution and membership values, as is shown in the first
interaction of Table 4.2. On the basis of such information, because the
DM is not satisfied with the current membership values, the DM up-
dates the reference membership values to μ̂_1 = 0.8 and μ̂_2 = 1.0 for
improving the satisfaction level for μ_2(c_2 x) at the expense of μ_1(c_1 x).
For the updated reference membership values, the corresponding aug-
mented minimax problem yields the Pareto optimal solution and the
membership values, as is shown in the second interaction of Table 4.2.
The same procedure continues in this manner until the DM is satisfied
with the current values of the membership functions. In this example,
as shown in Table 4.2, at the third interaction, a satisficing solution for
the DM is derived.
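The interactive procedure described above can be sketched as the following skeleton (an illustrative sketch; the function names and the toy decision maker are ours, not the book's; `solve_minimax` stands in for solving the augmented minimax problem, e.g., through GADS):

```python
def interactive_satisficing(solve_minimax, memberships, ask_dm, max_iter=100):
    """Skeleton of the interactive fuzzy satisficing method: solve the
    augmented minimax problem for the current reference membership values,
    show the resulting membership values to the DM, and let the DM either
    accept the solution or update the reference values."""
    ref = [1.0] * len(memberships)          # initial reference levels
    for _ in range(max_iter):
        x = solve_minimax(ref)              # Pareto optimal candidate
        mu = [m(x) for m in memberships]    # current membership values
        satisfied, ref = ask_dm(mu, ref)    # DM inspects and reacts
        if satisfied:
            break
    return x, mu

# Toy run: a single fuzzy goal and a DM that accepts immediately.
x, mu = interactive_satisficing(
    solve_minimax=lambda ref: 0,
    memberships=[lambda x: 0.7],
    ask_dm=lambda mu, ref: (True, ref),
)
print(mu)  # [0.7]
```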
b_i = γ Σ_{j=1}^n a_ij                                       (4.16)
Cl -997 -909 -881 -151 -218 -87 -528 -365 -291 -224
-893 -241 -581 -232 -975 -189 -897 -458 -388 -64
-904 -57 -244 -982 -645 -258 -220 -169 -271 -329
-421 -210 -585 -97 -828 -77 -523 -722 -715 -610
-99 -355 -510 -291 -339 -470 -782 -134 -323 -315
C2 358 710 369 503 558 330 712 560 19 521
889 735 761 363 27 689 69 613 155 961
444 789 13 302 888 892 859 23 841 745
681 128 97 377 580 327 42 628 208 258
473 135 262 632 135 82 807 52 905 348
C3 -148 -243 98 342 470 161 397 109 -1 -214
-215 121 -342 -323 -140 -371 -339 245 58 -59
-416 -370 476 -24 -132 -342 461 -160 -216 -161
-158 -27 -44 370 343 416 336 -27 -352 -405
33 261 472 -122 72 -152 278 158 -349 -97
              z_i^1        z_i^0
μ_1(c_1 x)   -9700.0        0.0
μ_2(c_2 x)       0.0     9500.0
μ_3(c_3 x)   -4000.0     4000.0
(4.17)
where J_i^+ = {j | a_ij ≥ 0, 1 ≤ j ≤ n}, J_i^- = {j | a_ij < 0, 1 ≤ j ≤ n}, and
the value of γ = 0.50 is adopted in this example.
              z_i^1        z_i^0
μ_1(c_1 x)  -19000.0    -4000.0
μ_2(c_2 x)    6000.0    19000.0
μ_3(c_3 x)   -4000.0     3000.0
minimize
subject to (4.18)
Now suppose that the DM decides that the degree of all of the mem-
bership functions of the fuzzy numbers involved in the MOO-1-FN should
be greater than or equal to some value α. Then for such a degree
α, the MOO-1-FN can be interpreted as a nonfuzzy multiobjective 0-1
programming (MOO-1-FN(A, b, c)) problem that depends on the coeffi-
cient vector (A, b, c) ∈ (Ã, b̃, c̃)_α. Observe that there exists an infinite
number of such MOO-1-FN(A, b, c) depending on the coefficient vector
(A, b, c) ∈ (Ã, b̃, c̃)_α, and the values of (A, b, c) are arbitrary for any
(A, b, c) ∈ (Ã, b̃, c̃)_α in the sense that the degree of all of the member-
ship functions for the fuzzy numbers in the MOO-1-FN exceeds the level
α. However, if possible, it would be desirable for the DM to choose
(A, b, c) ∈ (Ã, b̃, c̃)_α in the MOO-1-FN(A, b, c) to minimize the objec-
tive functions under the constraints. From such a point of view, for a
certain degree α, it seems quite natural to interpret the MOO-1-FN
as the following nonfuzzy α-multiobjective 0-1 programming (α-MOO-1)
problem.
minimize   c_i x,  i = 1, ..., k
subject to Ax ≤ b                                            (4.19)
           x_j ∈ {0, 1}, j = 1, ..., n
           (A, b, c) ∈ (Ã, b̃, c̃)_α

(4.21)

[Figure: linear membership function μ_i(c_i x), equal to 1 for c_i x ≤ z_i^1 and 0 for c_i x ≥ z_i^0]
where P(α) is the set of α-Pareto optimal solutions and the correspond-
ing α-level optimal parameters to the problem (4.19). Observe that the
value of the aggregation function μ_D(·) can be interpreted as represent-
ing an overall degree of satisfaction with the DM's k fuzzy goals [135].
If μ_D(·) can be explicitly identified, then (4.26) reduces to a standard
mathematical programming problem. However, this rarely happens, and
as an alternative, an interaction with the DM is necessary for finding a
satisficing solution for the DM to (4.26).
To generate a candidate for the satisficing solution that is also α-
Pareto optimal, in our interactive decision-making method, the DM is
asked to specify the degree α of the α-level set and the reference mem-
bership values. Observe that the idea of the reference membership val-
ues can be viewed as an obvious extension of the idea of the reference
point in Wierzbicki [215]. To be more explicit, for the DM's degree α
and reference membership values μ̂_i, i = 1, ..., k, the corresponding α-
Pareto optimal solution, which is, in the minimax sense, nearest to the
requirement or better than that if the reference membership values are
attainable, is obtained by solving the following minimax problem.
minimize   max_{i=1,...,k} { (μ̂_i − μ_i(c_i x)) + ρ Σ_{i=1}^k (μ̂_i − μ_i(c_i x)) }
subject to Ax ≤ b                                            (4.28)
           x_j ∈ {0, 1}, j = 1, ..., n
           (A, b, c) ∈ (Ã, b̃, c̃)_α

where ρ is a sufficiently small positive number.
In this formulation, however, the constraints are nonlinear because the
parameters A, b, and c are treated as decision variables. To deal with
such nonlinearities, we introduce the set-valued functions S_i(·) and T(·,·)
for i = 1, ..., k.
Then it can easily be verified that the following relations hold for S_i(·)
and T(·,·) when x ≥ 0.
PROPOSITION 4.1
(1) c_i^1 ≤ c_i^2 ⇒ S_i(c_i^1) ⊇ S_i(c_i^2)
(2) b^1 ≤ b^2 ⇒ T(A, b^1) ⊆ T(A, b^2)
(3) A^1 ≤ A^2 ⇒ T(A^1, b) ⊇ T(A^2, b)
Now, from the properties of the α-level set for the vectors and/or
matrices of fuzzy numbers, it should be noted here that the feasible
regions for A, b, and c_i can be denoted by the closed intervals [A_α^L, A_α^R],
[b_α^L, b_α^R], and [c_iα^L, c_iα^R], respectively, where Y_α^L or Y_α^R represents the left or
right extreme point of the α-level set Y_α. Therefore, through the use of
Proposition 4.1, we can obtain an optimal solution of the problem (4.28)
by solving the following 0-1 programming problem.
x_j ∈ {0, 1}, j = 1, ..., n
Observe that this problem preserves the linearity of the constraints, and
hence it is quite natural to define the fitness function by

f(s) = (1.0 + kρ) − max_{i=1,...,k} { (μ̂_i − μ_i(c_iα^L x)) + ρ Σ_{i=1}^k (μ̂_i − μ_i(c_iα^L x)) },     (4.31)

where s and x respectively denote an individual represented by a double
string and the phenotype of s.
With this observation in mind, the augmented minimax problem (4.30)
can be effectively solved through GADS or GADSRSU, introduced in the
preceding sections.
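The fitness evaluation of the augmented minimax type (4.31) can be sketched as follows (an illustrative sketch; the function name is ours, and the membership values are assumed to be precomputed for the decoded phenotype):

```python
def minimax_fitness(mu_values, mu_ref, rho=0.005):
    """Fitness of type (4.31): (1 + k*rho) minus the augmented minimax
    deviation of the membership values mu_values = [mu_i(c_i x)] from
    the DM's reference membership values mu_ref; rho is a sufficiently
    small positive constant."""
    k = len(mu_values)
    devs = [r - m for r, m in zip(mu_ref, mu_values)]
    aug = rho * sum(devs)                     # augmentation term
    return (1.0 + k * rho) - max(d + aug for d in devs)

# All goals fully attained at their reference levels: maximal fitness.
print(minimax_fitness([1.0, 1.0], [1.0, 1.0]))  # 1.01
```

Larger fitness thus corresponds to membership values closer to (or better than) the reference levels, which is what the genetic search maximizes.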
We can now construct the interactive algorithm in order to derive a
satisficing solution for the DM from the a-Pareto optimal solution set.
The steps marked with an asterisk involve interaction with the DM.
              z_i^1        z_i^0
μ_1(c_1 x)  -20000.0    -3000.0
μ_2(c_2 x)    5000.0    20000.0
μ_3(c_3 x)   -4000.0     3000.0
4.4 Conclusion
In this chapter, as a natural extension of single-objective 0-1 program-
ming problems discussed in the previous chapter, multiobjective 0-1 pro-
gramming problems are formulated by assuming that the decision maker
Interaction   μ_1(c_1 x)   μ_2(c_2 x)   μ_3(c_3 x)   c_1 x   c_2 x   c_3 x   #
1st 0.624471 0.624267 0.685143 -13616.0 10636.0 -1796.0 1
(1.00,1.00,1.00) 0.628588 0.635667 0.639714 -13686.0 10465.0 -1478.0 1
α = 1.0 0.662412 0.643733 0.625571 -14261.0 10344.0 -1379.0 4
0.624059 0.630400 0.642286 -13609.0 10544.0 -1496.0 2
0.615000 0.593933 0.599000 -14162.0 11091.0 -1193.0 1
0.616118 0.628733 0.611571 -13474.0 10569.0 -1281.0 1
2nd 0.498235 0.706067 0.720286 -11470.0 9409.0 -2042.0 3
(0.80,1.00,1.00) 0.506471 0.696133 0.707429 -11610.0 9558.0 -1952.0 1
α = 1.0 0.505118 0.693467 0.693714 -11587.0 9598.0 -1856.0 1
0.510882 0.692467 0.739714 -11685.0 9613.0 -2178.0 3
0.492294 0.692333 0.735429 -11369.0 9615.0 -2148.0 1
0.483118 0.683667 0.709000 -11213.0 9745.0 -1963.0 1
3rd 0.615847 0.640480 0.743343 -13469.4 10392.8 -2203.4 1
(0.80,0.90,1.00) 0.558035 0.655480 0.747200 -12486.6 10167.8 -2230.4 1
α = 0.6 0.544682 0.652547 0.826486 -12259.6 10211.8 -2785.4 1
0.542624 0.643133 0.810543 -12224.6 10353.0 -2673.8 1
0.570800 0.652080 0.742343 -12703.6 10218.8 -2196.4 2
0.540647 0.671187 0.741543 -12191.0 9932.2 -2190.8 1
0.551153 0.639200 0.743343 -12369.6 10412.0 -2203.4 3
#: Number of solutions
may have a fuzzy goal for each of the objective functions. Through the
combination of the desirable features of both the interactive fuzzy sat-
isficing methods for continuous variables and the GADS discussed in
the previous chapter, an interactive fuzzy satisficing method to derive a
satisficing solution for the decision maker is presented. Furthermore, by
considering the experts' imprecise or fuzzy understanding of the nature
of the parameters in the problem-formulation process, the multiobjective
0-1 programming problems involving fuzzy parameters are formulated.
Through the introduction of extended Pareto optimality concepts, an in-
teractive decision-making method for deriving a satisficing solution of the
DM from among the extended Pareto optimal solution set is presented
together with detailed numerical examples. An integer generalization
along the same lines as in Chapter 3 will be found in Chapter 6.
Chapter 5

GENETIC ALGORITHMS FOR INTEGER PROGRAMMING
5.1 Introduction
As discussed in Chapter 3, GADS performed efficiently for not only
multidimensional 0-1 knapsack problems but also general 0-1 program-
ming problems involving positive and negative coefficients. To deal with
multidimensional integer knapsack problems, a direct generalization of
our previous results along this line is first performed. Unfortunately,
however, because integer problems have a vastly larger search space than
0-1 problems, the computational time for finding an accurate approximate
optimal solution becomes enormous.
Realizing this difficulty, information about an optimal solution to the
corresponding linear programming relaxation problems is incorporated
for improving the search efficiency and processing time, because it is
expected to be useful for searching an approximate optimal solution to
the integer programming problem. Furthermore, GADS based on refer-
ence solution updating (GADSRSU) for 0-1 programming problems are
extended to deal with integer programming problems [140, 146].
minimize   cx
subject to Ax ≤ b                                            (5.1)
           x_j ∈ {0, ..., ν_j}, j = 1, ..., n

where c = (c_1, ..., c_n) is an n-dimensional row vector, x = (x_1, ..., x_n)^T
is an n-dimensional column vector of integer decision variables, A =
[a_ij], i = 1, ..., m, j = 1, ..., n, is an m × n coefficient matrix, b =
(b_1, ..., b_m)^T is an m-dimensional column vector, and ν_j, j = 1, ..., n, are
nonnegative integers. It should be noted here that, in a multidimensional
integer knapsack problem, each element of c is assumed to be nonpositive
and each element of A and b is assumed to be nonnegative.
             x̄_j = x_j*   x̄_j ≠ x_j*
x_j* = 0         328            2
x_j* ≠ 0         585           85

[Figure: frequency distribution of the differences between the optimal integer solution x* and the optimal solution x̄ of the corresponding linear programming relaxation]
x_{s(j)} := min ( min_{i=1,...,m} ⌊ (b_i − sum_i) / a_{is(j)} ⌋, g_{s(j)} ),       (5.3)

where a_{is(j)} ≠ 0.

Step 3: Let sum_i := sum_i + a_{is(j)} x_{s(j)}, i = 1, ..., m, and j := j + 1.

Step 4: If j > n, stop. Otherwise, go to step 2.
In the preceding decoding algorithm for double strings depicted in
Figure 5.2, a double string is decoded from the left edge to the right
edge in order. Namely, a gene (s(j), g_{s(j)})^T located in the left part of a
double string tends to be decoded to a value around g_{s(j)}, whereas one
in the right part is apt to be decoded to nearly 0. By taking account
of the relationships between optimal solutions to integer knapsack prob-
lems and those to the corresponding continuous relaxation problems,
we propose a new decoding algorithm in which decision variables for
the corresponding solution to the continuous relaxation problem greater
than 0 are given priority in decoding. As a result, decoded solutions will
be closer to an optimal solution to the continuous relaxation problem.
However, some optimal solutions x* to integer programming problems
may not be very close to the optimal solutions x̄ of the corresponding
linear programming relaxation problems even if about 90% of the elements
of x* are equal to those of x̄, as shown in the previous section. In consider-
ation of the estrangement between x* and x̄, we introduce a constant
R that represents the degree of use of information about solutions to linear
programming relaxation problems. Now we are ready to introduce the
following decoding algorithm for double strings using linear program-
ming relaxation.
5.2. Multidimensional integer knapsack problems 87
x_{s(j)} := min ( min_{i=1,...,m} ⌊ (b_i − sum_i) / a_{is(j)} ⌋, g_{s(j)} ),       (5.4)

where a_{is(j)} ≠ 0.
Step 4: Let sumi := sumi + ais(j)xs(j) , i = 1, ... , m and j := j + 1.
Step 5: If j > n, proceed to step 6. Otherwise, return to step 2.
Step 6: Let j := 1.
Step 7: If xs(j) = 0, proceed to step 8. Otherwise, i.e., if xs(j) > 0, let
j := j + 1 and go to step 10.
Step 8: Let a_{is(j)} denote the (i, s(j)) element of the coefficient matrix
A. Then, x_{s(j)} is determined as in (5.4), where a_{is(j)} ≠ 0.
Step 9: Let sumi := sumi + ais(j)Xs(j), i = 1, ... ,m and j := j + 1.
Step 10: If j > n, stop. Otherwise, return to step 7.
In the previous algorithm, the optimal solution x to the linear pro-
gramming relaxation problem of the integer programming problem is
supposed to be obtained in advance.
Figure 5.3 shows an example of decoding a double string. When the
original decoding algorithm is used, the double string will be decoded
from left to right in order. On the other hand, when the proposed
decoding algorithm using linear programming relaxation is used, the
genes (s(j), g_{s(j)})^T with x̄_{s(j)} > 0 are decoded from left to right first, and
the remainder, in other words, the genes with x̄_{s(j)} = 0, are then decoded
from left to right.
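The two ingredients of this decoding scheme can be sketched as follows (an illustrative sketch under the stated assumptions: nonnegative A and b, every variable appearing with a nonzero coefficient in at least one constraint, and a relaxed optimum x̄ available in advance; the function names are ours):

```python
import math

def decode(order, g, A, b):
    """Greedy decoding of a double string: visit the variables in the given
    order and assign each x_{s(j)} the largest value not exceeding g_{s(j)}
    that keeps Ax <= b, in the spirit of (5.3)/(5.4)."""
    m, n = len(A), len(g)
    x = [0] * n
    sums = [0.0] * m
    for j in order:
        cap = min(math.floor(min((b[i] - sums[i]) / A[i][j]
                                 for i in range(m) if A[i][j] != 0)),
                  g[j])
        x[j] = max(cap, 0)
        for i in range(m):
            sums[i] += A[i][j] * x[j]
    return x

def lpr_order(s, x_relax):
    """Decoding order used with linear programming relaxation: genes whose
    variable is positive in the relaxed optimum x_relax come first."""
    return [j for j in s if x_relax[j] > 0] + [j for j in s if x_relax[j] == 0]

# Relaxed optimum (1.4, 0, 3.8, 2.2, 0) as in the text's example
print(lpr_order([4, 0, 2, 1, 3], [1.4, 0, 3.8, 2.2, 0]))  # [0, 2, 3, 4, 1]
```

Decoded solutions thus tend to agree with the relaxed optimum on the variables it sets positive, which is the intended bias of the proposed algorithm.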
[Figure 5.3: example of decoding a double string with x̄ = (1.4, 0, 3.8, 2.2, 0)]
that the whole genetic algorithm works better after repeated trial and
error at present.
The procedure of generation of initial population is summarized as
follows.
Step 1: Let r := 1.
Step 3: Let j := 1.
f(s) = cx / ( Σ_{j=1}^n c_j ν_j )                            (5.6)

minimum of the objective function, and hence the fitness f(s) satisfies
0 ≤ f(s) ≤ 1.
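For illustration, the normalized fitness (5.6) can be sketched as follows (a minimal sketch assuming every c_j ≤ 0, as in multidimensional integer knapsack problems; the function name is ours):

```python
def knapsack_fitness(c, x, nu):
    """Fitness (5.6): the objective value cx divided by sum_j c_j * nu_j.
    With every c_j <= 0 the denominator is the (unconstrained) minimum
    of cx, so the fitness lies in [0, 1] and larger values are better."""
    return (sum(cj * xj for cj, xj in zip(c, x))
            / sum(cj * nj for cj, nj in zip(c, nu)))

print(knapsack_fitness([-2, -3], [1, 1], [1, 1]))  # 1.0
```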
When the variance of fitness in a population is small, the ordinary
roulette wheel selection often does not work well because there is little
difference between the probability of a good individual surviving and
that of a bad one surviving. In order to overcome this problem, quite
similar to [138, 144, 147, 148, 160, 161, 163], the linear scaling technique
[66] is adopted.
Step 1: Calculate the mean fitness f_mean, the maximal fitness f_max, and
the minimal fitness f_min in the current population.

Step 2: If f_min > (c_mult · f_mean − f_max) / (c_mult − 1.0), let

    a := (c_mult − 1.0) · f_mean / (f_max − f_mean),
    b := f_mean · (f_max − c_mult · f_mean) / (f_max − f_mean),

and go to step 3. Otherwise, let

    a := f_mean / (f_mean − f_min),
    b := −f_min · f_mean / (f_mean − f_min),

and go to step 3.
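The scaling steps above can be sketched as follows (an illustrative sketch of the usual Goldberg-style linear scaling f' = a·f + b; the all-equal degenerate case, which the book's steps do not cover, is handled separately here as an assumption of ours):

```python
def linear_scaling(fits, c_mult=2.0):
    """Linear fitness scaling: keep the mean fitness unchanged and map the
    best fitness to c_mult * f_mean; if that would drive the worst scaled
    fitness negative, map the worst fitness to 0 instead."""
    f_mean = sum(fits) / len(fits)
    f_max, f_min = max(fits), min(fits)
    if f_max == f_mean:                     # degenerate population
        return list(fits)
    if f_min > (c_mult * f_mean - f_max) / (c_mult - 1.0):
        a = (c_mult - 1.0) * f_mean / (f_max - f_mean)
        b = f_mean * (f_max - c_mult * f_mean) / (f_max - f_mean)
    else:
        a = f_mean / (f_mean - f_min)
        b = -f_min * f_mean / (f_mean - f_min)
    return [a * f + b for f in fits]

print(linear_scaling([1.0, 2.0, 3.0]))  # [0.0, 2.0, 4.0]
```

Either branch keeps the mean fitness unchanged, so the expected number of offspring of an average individual is unaffected while selection pressure is controlled by c_mult.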
Elitist preserving selection: One or more individuals with the largest fit-
ness up to the current population are unconditionally preserved in the
next generation.
5.2. Multidimensional integer knapsack problems 91
Σ_{i=1}^N ( N_i − ⌊N_i⌋ )
5.2.6.2 Crossover
If a single-point crossover or multipoint crossover is directly applied
to individuals of double string type, the kth element of an offspring
may take the same number that the k'th element takes. Similar viola-
tion occurs in solving traveling salesman problems (TSPs) or scheduling
problems through genetic algorithms. In order to avoid this violation,
a crossover method called partially matched crossover (PMX) was pro-
posed [68J and was modified to be suitable for double strings [148]. The
PMX for double strings can be described as follows:
Step 6: 1) If h < k, let g_{s_X'(j)} := g_{s_Y(j)} for all j such that h ≤ j ≤ k,
and go to step 7. 2) If h > k, let g_{s_X'(j)} := g_{s_Y(j)} for all j such that
1 ≤ j ≤ k or h ≤ j ≤ n, and go to step 7.
Step 7: Carry out the same operations as in steps 4 through 6 for Y'
and X.
Step 8: Preserve X' and Y' as the offspring of X and Y.
Step 9: If r < N, set r := r + 1 and return to step 1. Otherwise, go to
step 10.
Step 10: Choose N . G individuals from 2 . N preserved individuals
randomly, and replace N . G individuals of the current population
consisting of N individuals with the N . G chosen individuals. Here,
G is a constant called a generation gap.
It should be noted here that the original PMX for double strings is
extended to deal with the substrings not only between h and k but also
between k and h.
An illustrative example of crossover is shown in Figure 5.5.
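The index-part operation of PMX can be sketched as follows (a simplified sketch of standard PMX on permutations, not the book's full double-string version, which also copies the value part g between the cut points as in step 6; the function name is ours):

```python
import random

def pmx(parent_x, parent_y, h=None, k=None):
    """Partially matched crossover (PMX) on a permutation: after the
    crossover, positions h..k of the child carry parent_y's genes, and
    the child remains a valid permutation throughout."""
    n = len(parent_x)
    if h is None:
        h, k = sorted(random.sample(range(n), 2))
    child = list(parent_x)
    for j in range(h, k + 1):
        # Swap so that position j receives parent_y's gene.
        ib = child.index(parent_y[j])
        child[j], child[ib] = child[ib], child[j]
    return child

print(pmx([1, 2, 3, 4, 5], [3, 4, 5, 1, 2], h=1, k=2))  # [1, 4, 5, 2, 3]
```

Because genes are exchanged by swapping rather than overwriting, no index ever appears twice, which is exactly the violation PMX was introduced to avoid.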
Step 0: Let r := 1.
Step 1: Let j := 1.
distribution with mean x̄_{s(j)} and variance τ², and go to step 4. Other-
wise, determine x_{s(j)} randomly according to the uniform distribution
in [0, ν_j], and go to step 4.
Step 0: Set r := 1.
Mutation
1 LP_SOLVE [23] solves (mixed integer) linear programming problems. The imple-
mentation of the simplex kernel was mainly based on the text by Orchard-Hays [125].
The mixed integer branch and bound part was inspired by Dakin [37].
Table 5.2. Experimental results for 50 variables and 20 constraints (10 trials)
Table 5.3. Experimental results for 80 variables and 25 constraints (10 trials)
Table 5.4. Experimental results for 100 variables and 30 constraints (10 trials)
From the results in Tables 5.2, 5.3, and 5.4, we can conclude that
GADSLPR is considerably effective as an approximate solution method
for multidimensional integer knapsack problems: in most cases it obtains
more accurate approximate optimal solutions in substantially shorter
computation time than GADS and LP_SOLVE, and the information
about solutions to linear programming relaxation problems is
indispensable for an efficient search.
depending on the value of 9s(j) and whether the constraints are satis-
fied. Unfortunately, however, this decoding algorithm cannot be applied
directly to integer programming problems with negative elements in b.
Realizing this difficulty, by introducing a reference solution, we pro-
pose a new decoding algorithm for double strings that is applicable to
more general integer programming problems with positive and negative
coefficients in the constraints.
For that purpose, a feasible solution x^0, obtained by some method, is
required for generating a reference solution. One possible way to obtain a
feasible solution to the integer programming problem (5.7) is to maximize
the exponential function for the violation of the constraints defined by
(5.8)
where a_i, i = 1, ..., m, denotes the n-dimensional ith row vector of the
coefficient matrix A; J_i^+ = {j | a_ij ≥ 0, 1 ≤ j ≤ n}; J_i^- = {j | a_ij <
0, 1 ≤ j ≤ n}; Σ_{j∈J_i^+} a_ij and Σ_{j∈J_i^-} a_ij are the maximum and minimum
of a_i x, respectively; θ is a positive parameter to adjust the severity of
Step 3: If psum + p_{s(j)} · g_{s(j)} ≤ b̃, set q_{s(j)} := g_{s(j)}, psum := psum +
p_{s(j)} · g_{s(j)}, and j := j + 1, and go to step 4. Otherwise, set q_{s(j)} := 0
and j := j + 1, and go to step 4.

Step 4: If j > n, go to step 5. Otherwise, return to step 2.
Step 10: For x_{s(j)} satisfying 1 ≤ j ≤ l, let x_{s(j)} := g_{s(j)}. For x_{s(j)}
satisfying l + 1 ≤ j ≤ n, let x_{s(j)} := 0, and stop.

Step 11: Let sum := Σ_k p_{s(k)} · x*_{s(k)} and j := 1.

Step 12: If g_{s(j)} = x*_{s(j)}, let x_{s(j)} := g_{s(j)} and j := j + 1, and go to step
16. If g_{s(j)} ≠ x*_{s(j)}, go to step 13.

Step 14: Let t_{s(j)} := ⌊0.5 · (x*_{s(j)} + g_{s(j)})⌋ and go to step 15.
As can be seen from the previous discussions, for general integer pro-
gramming problems involving positive and negative coefficients in the
constraints, this newly developed decoding algorithm enables us to de-
code each individual represented by the double strings to the correspond-
ing feasible solution. However, the diversity of phenotypes x greatly
depends on the reference solution used in the preceding decoding algo-
rithm. To overcome such situations, we propose the following reference
solution updating procedure in which the current reference solution is
updated by another feasible solution if the diversity of phenotypes seems
to be lost. To do so, for every generation, check the dependence on the
reference solution through the calculation of the mean of the L1 dis-
tance between all individuals of phenotypes and the reference solution,
and when the dependence on the reference solution is strong, replace the
reference solution by the individual of phenotype having maximum L1
distance.
Let N, x*, η (< 1.0), and x^r denote the number of individuals, the
reference solution, a parameter for reference solution updating, and a
feasible solution decoded by the rth individual, respectively; then the
reference solution updating procedure can be described as follows:
Step 2: Calculate d_r = Σ_{j=1}^n |x^r_{s(j)} − x*_{s(j)}| and let dsum := dsum + d_r. If
d_r > d_max and cx^r < cx*, let d_max := d_r, r_max := r, and r := r + 1,
and go to step 3. Otherwise, let r := r + 1 and go to step 3.

Step 3: If r > N, go to step 4. Otherwise, return to step 2.

Step 4: If dsum / (N · Σ_{j=1}^n ν_j) < η, then update the reference solution
as x* := x^{r_max}, and stop. Otherwise, stop without updating the
reference solution.
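The updating procedure above can be sketched as follows (an illustrative sketch for a minimized objective; the function and parameter names are ours):

```python
def update_reference(phenos, objs, x_ref, obj_ref, nu, eta=0.1):
    """Reference solution updating sketch: if the mean L1 distance between
    the decoded phenotypes and the reference solution is small relative to
    the size of the search box (strong dependence on the reference),
    replace the reference by the farthest phenotype that improves the
    objective value; otherwise keep the current reference."""
    d_max, r_max, d_sum = -1.0, None, 0.0
    for r, x in enumerate(phenos):
        d = sum(abs(a - b) for a, b in zip(x, x_ref))
        d_sum += d
        if d > d_max and objs[r] < obj_ref:   # farther away and better
            d_max, r_max = d, r
    if d_sum / (len(phenos) * sum(nu)) < eta and r_max is not None:
        return phenos[r_max]                  # update the reference
    return x_ref                              # keep the current reference

print(update_reference([[0, 0], [2, 2]], [-1, -5], [0, 0], 0, [2, 2], eta=0.9))
# [2, 2]
```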
It should be observed here that when the constraints of the problem
are strict, a possibility exists that all of the individuals are decoded into
the neighborhood of the reference solution. To avoid such a possibility,
in addition to the reference solution updating procedure, after every P
generations the reference solution is replaced by the feasible solution
obtained by solving the maximization problem (5.10) through GADS
(without using the decoding algorithm).
(5.11)
(5.12)
(5.13)
where β = max_{j=1,...,n} ν_j and the positive constant γ denotes the degree
of strictness of the constraints.
In these numerical experiments, GADSLPRRSU is applied 10 times
to every problem, in which the following parameter values are used in
both genetic algorithms: the population size N = 100, the generation
gap G = 0.9, the probability of crossover p_c = 0.9, the probability of
mutation p_m = 0.05, the probability of inversion p_i = 0.03, the minimal
search generation I_min = 100, the maximal search generation I_max (>
I_min) = 1000, the scaling constant c_mult = 2.0, the convergence criterion
ε = 0.001, the degree of use of information about solutions to linear
programming relaxation problems R = 0.9, a parameter for reproduction
Table 5.5. Experimental results for 50 variables and 20 constraints (10 trials)
0.55  GADSLPRRSU   -152968   -152796.9   -152703   6.00 × 10^1   553.7
      LP_SOLVE     -153053 (optimal)                1.10 × 10^3   -
5.4 Conclusion
In this chapter, GADS for multidimensional 0-1 knapsack problems
have been extended to deal with multidimensional integer knapsack
Table 5.6. Experimental results for 80 variables and 25 constraints (10 trials)
Table 5.7. Experimental results for 100 variables and 30 constraints (10 trials)
0.55  GADSLPRRSU   -380573   -379438.5   -377287   1.35 × 10^2   576.0
      LP_SOLVE     -381085 (optimal)                1.10 × 10^3   -
Chapter 6

FUZZY MULTIOBJECTIVE INTEGER PROGRAMMING

6.1 Introduction
(6.2)
is assumed for representing the fuzzy goal of the DM [135, 225, 228],
where z_i^0 and z_i^1 denote the values of the objective function c_i x whose
degrees of membership are 0 and 1, respectively. These values
are subjectively determined through an interaction with the DM. Figure
6.1 illustrates the graph of the possible shape of the linear membership
function.
[Figure 6.1. Linear membership function μ_i(c_i x)]
(6.4)
Observe that the value of the aggregation function μ_D(x) can be inter-
preted as representing an overall degree of satisfaction with the DM's
multiple fuzzy goals [135, 170, 176, 184, 225]. If we adopt the well-
known fuzzy decision of Bellman and Zadeh, or minimum operator, as
(6.6)
minimize   max_{i=1,...,k} { μ̂_i − μ_i(c_i x) }
subject to Ax ≤ b                                            (6.7)
           x_j ∈ {0, ..., ν_j}, j = 1, ..., n
minimum and the maximum of each objective function. Then, ask the
DM to select the initial reference membership levels μ̂_i, i = 1, ..., k
(if it is difficult to determine these values, set them to 1).
Step 2: For the reference membership values specified by the DM, solve
the corresponding augmented minimax problem.
Step 3*: If the DM is satisfied with the current values of the mem-
bership functions and/or objective functions given by the current
best solution, stop. Otherwise, ask the DM to update the reference
membership levels by taking into account the current values of the mem-
bership functions and/or objective functions, and return to step 2.
Cl -566 -589 -791 -438 -381 -604 -126 -442 -646 -525
-914 -602 -398 -206 -976 -877 -271 -186 -500 -392
-927 -94 -525 -905 -405 -971 -105 -553 -565 -740
C2 225 612 975 836 837 644 140 718 2 160
956 650 457 182 466 732 198 929 702 161
601 443 505 869 754 941 808 312 197 613
C3 -299 262 391 170 221 243 -202 -413 -4 328
85 -45 6 67 464 479 -8 38 -359 94
458 21 400 -308 -163 302 316 490 96 -472
al 597 453 763 487 657 362 827 171 586 95
464 74 134 591 748 617 772 161 343 598
925 3 350 978 432 710 4 40 727 833
a2 825 298 475 579 357 668 816 522 677 82
347 700 92 567 986 947 841 620 887 389
62 316 193 271 12 38 689 917 992 837
a3 323 93 693 716 670 480 125 954 117 466
352 344 568 401 707 753 529 579 975 615
554 305 964 964 652 222 423 437 320 947
a4 159 604 567 187 677 634 60 659 323 305
163 711 957 542 757 815 291 363 950 476
281 480 905 282 705 257 43 504 309 2
as 84 963 701 564 629 638 416 482 778 2
180 47 541 262 858 533 821 998 108 784
30 26 887 359 712 814 535 505 850 27
a6 596 302 965 460 325 196 330 127 307 419
622 213 187 100 893 207 856 93 631 958
349 312 93 232 570 55 572 776 823 907
a7 799 989 192 966 937 556 437 205 874 162
146 754 925 777 585 602 135 566 115 154
926 470 816 958 964 806 743 19 125 579
as 434 171 361 460 432 289 830 709 745 926
927 82 42 685 761 741 230 561 887 472
659 659 696 941 642 488 296 992 435 337
a9 428 155 920 677 469 839 391 172 934 598
351 796 41 412 290 372 542 583 257 947
311 176 581 315 874 368 864 954 178 309
alO 721 173 422 984 28 990 392 503 202 651
891 761 719 229 129 256 427 551 789 971
274 915 193 823 522 517 271 748 272 861
b 36255.0 40005.0 40620.0 34920.0 37835.0 33690.0 43205.0 42225.0 37760.0 40462.5
the maximal search generation I_max = 2000, ε = 0.02, c_mult = 1.8, a = 2.0, p = 3.0, and R = 0.9.
The coefficient ρ of the augmented minimax problem is set as 0.005.
First, the individual minimum z_i^min and maximum z_i^max of each of the
objective functions z_i(x) = c_i x, i = 1, 2, 3, are calculated by GADSLPR,
as shown in Table 6.2.
Table 6.2. Individual minimum and maximum of each of the objective functions
              z_i^min      z_i^max
μ_1(c_1 x)  -60000.0        0.0
μ_2(c_2 x)       0.0    65000.0
μ_3(c_3 x)  -20000.0    30000.0
improving the satisfaction levels for μ_1 and μ_3 at the expense of μ_2.
For the updated reference membership values, the corresponding aug-
mented minimax problem (6.9) is solved by the GADSLPR, and the
corresponding membership function values are calculated as shown in
the third interaction of Table 6.4. The same procedure continues in this
manner until the DM is satisfied with the current values of the member-
ship functions and the objective functions. In this example, a satisficing
solution for the DM is derived at the fourth interaction.
(6.12)
where β = max_{j=1,...,n} ν_j, and the value of γ = 0.40 is adopted in this
example.
As a numerical example generated in this way, we use the coefficients
as shown in Tables 6.5 and 6.6.
The parameter values of GADSLPRRSU are set as population size
N = 100, generation gap G = 0.9, probability of crossover p_c = 0.9,
probability of mutation p_m = 0.05, probability of inversion p_i = 0.03,
minimal search generation I_min = 100, maximal search generation I_max =
2000, ε = 0.02, c_mult = 1.8, a = 2.0, p = 3.0, R = 0.9, λ = 0.9, η = 0.1,
θ = 5.0, and P = 100.
First, the individual minimum z_i^min and maximum z_i^max of each of the
objective functions z_i(x) = c_i x, i = 1, 2, 3, were calculated by GADSL-
PRRSU, as shown in Table 6.7.
By considering these values, the DM subjectively determined the linear
membership functions μ_i(c_i x), i = 1, 2, 3, as shown in Table 6.8.
Having determined the linear membership functions in this way, the
augmented minimax problem (6.9) is solved by the GADSLPRRSU for
the initial reference membership levels (1.00,1.00,1.00), which can be
viewed as the ideal values, and the DM is supplied with the correspond-
ing membership function values as shown in the first interaction of Table
6.9. On the basis of such information, because the DM is not satisfied
with the current membership function values, the DM updates the ref-
erence membership values as μ̂_1 = 1.00, μ̂_2 = 0.70, and μ̂_3 = 1.00 for
improving the satisfaction levels for μ_1 and μ_3 at the expense of μ_2.
For the updated reference membership values, the corresponding aug-
mented minimax problem (6.9) is solved by the GADSLPRRSU and the
corresponding membership function values are calculated as shown in
the second interaction of Table 6.9. Because the DM is not satisfied
with the current membership function values, the DM updates the ref-
erence membership values as μ̂_1 = 0.80, μ̂_2 = 0.70, and μ̂_3 = 1.00 for
improving the satisfaction levels for μ_2 and μ_3 at the expense of μ_1. For
the updated reference membership values, the corresponding augmented
minimax problem (6.9) is solved by the GADSLPRRSU, and the cor-
responding membership function values are calculated as shown in the
third interaction of Table 6.9. The same procedure continues in this
manner until the DM is satisfied with the current values of the member-
Cl -529 -59 -629 -413 -306 -415 -608 -898 -584 -188
-167 -593 -236 -450 -599 -284 -534 -468 -195 -586
-223 -373 -393 -464 -451 -200 -55 -65 -360 -732
C2 32 37 15 794 126 634 30 685 123 666
253 632 688 918 854 61 884 981 206 414
82 787 469 84 877 785 206 747 863 66
C3 367 -215 217 5 -245 216 72 -66 -157 378
-290 302 -3 246 -366 -130 -222 283 -18 -159
445 112 457 23 127 -367 -332 -74 -242 40
al -306 258 150 79 400 122 243 -50 -412 -116
-386 462 384 190 194 -431 248 316 -191 -199
386 295 -176 151 -315 256 387 -5 -153 -290
a2 343 168 -206 -250 -209 337 175 -332 -268 317
-459 -307 43 14 -485 84 -278 106 357 -468
-398 432 261 352 -318 -387 -180 260 -36 -210
a3 -116 309 -387 108 356 -418 -63 -473 80 -213
-160 139 478 479 333 350 -154 -384 -170 147
-23 -416 -138 -336 242 186 -59 59 -103 -150
a4 31 225 428 -151 -178 -463 438 345 344 -252
-449 -350 -311 -83 -49 -1 16 147 -327 -29
-32 40 -251 75 -430 -264 -72 -406 -41 3
a5 271 -448 -330 -7 327 -412 306 223 -385 -66
287 -137 8 17 -297 -349 118 195 84 441
-215 27 242 -325 -172 232 -109 176 7 -36
a6 376 -247 -384 330 -64 -97 -294 114 -311 492
-463 137 284 439 8 210 289 150 -346 -360
-369 -353 444 225 -279 -40 -398 466 399 -186
a7 438 34 -420 142 283 -156 -241 -336 164 239
288 -474 -371 -177 327 263 139 10 379 -185
444 265 -231 -450 -313 -306 -373 189 71 463
a8 270 -440 -314 -193 27 68 -208 242 -280 203
-109 -127 -325 386 -276 -37 -406 -382 -427 212
-199 206 92 182 103 -353 -274 -198 357 225
ag -241 -296 156 0 209 24 217 -432 -125 453
-408 120 -224 431 136 -249 -90 -56 429 299
-6 -56 -216 -16 -26 -295 -301 -422 433 -118
alO 100 -250 217 -12 72 -18 -20 -148 -171 -256
379 -497 -382 -497 195 -365 53 242 -460 240
131 -264 281 405 -137 -445 268 -478 -201 -148
               z_i^1        z_i^0
μ_1(c_1 x)  -100000.0   -30000.0
μ_2(c_2 x)   -10000.0   130000.0
μ_3(c_3 x)   -20000.0    20000.0
minimize
subject to (6.13)
6.3. Fuzzy multiobjective integer programming with fuzzy numbers 119
Now suppose that the DM decides that the degree of all of the mem-
bership functions of the fuzzy numbers involved in the MOIP-FN should
be greater than or equal to some value α. Then, for such a degree α,
the MOIP-FN can be interpreted as a nonfuzzy multiobjective integer
programming problem MOIP-FN(A, b, c) that depends on the coeffi-
cient vector (A, b, c) ∈ (Ã, b̃, c̃)_α. Observe that there exist an infinite
number of such problems MOIP-FN(A, b, c) depending on the coefficient
vector (A, b, c) ∈ (Ã, b̃, c̃)_α, and the values of (A, b, c) are arbitrary for
any (A, b, c) ∈ (Ã, b̃, c̃)_α in the sense that the degree of all of the member-
ship functions for the fuzzy numbers in the MOIP-FN exceeds the level
α. However, if possible, it would be desirable for the DM to choose
(A, b, c) ∈ (Ã, b̃, c̃)_α in the MOIP-FN(A, b, c) to minimize the objective
functions under the constraints. From such a point of view, for a cer-
tain degree α, it seems quite natural to interpret the MOIP-FN as the
following nonfuzzy α-multiobjective integer programming (α-MOIP)
problem.
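For illustration, the α-level interval of a single fuzzy coefficient can be computed as follows (a sketch assuming triangular fuzzy numbers; the development above does not depend on any particular membership shape):

```python
def alpha_level_interval(left, mode, right, alpha):
    """alpha-level set of a triangular fuzzy number (left, mode, right).

    The membership function rises linearly from `left` to 1 at `mode` and
    falls linearly to 0 at `right`; the alpha-level set is then the closed
    interval of values whose membership degree is at least alpha."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (left + alpha * (mode - left),
            right - alpha * (right - mode))

# Any coefficient chosen from this interval has membership degree >= alpha,
# which is exactly the freedom exploited by the alpha-MOIP formulation.
lo, hi = alpha_level_interval(2.0, 5.0, 10.0, 0.5)  # -> (3.5, 7.5)
```

Choosing different coefficient values inside such intervals yields the infinitely many problems MOIP-FN(A, b, c) mentioned above.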
minimize    c_i x,  i = 1, ..., k                           (6.14)
subject to  Ax ≤ b,  x ≥ 0: integer vector,
            (A, b, c) ∈ (Ã, b̃, c̃)_α                         (6.15)
[Figure: linear membership function µ_i(c_i x)]   (6.16)
the problem to be solved is transformed into a fuzzy multiobjective
decision-making problem defined by

maximize    µ_D(µ_1(c_1 x), µ_2(c_2 x), ..., µ_k(c_k x), α)
subject to  (x, A, b, c) ∈ P(α),  α ∈ [0, 1],               (6.17)

where P(α) is the set of α-Pareto optimal solutions and the correspond-
ing α-level optimal parameters to the problem (6.14). Observe that the
value of the aggregation function µ_D(·) can be interpreted as represent-
ing an overall degree of satisfaction with the DM's k fuzzy goals [135].
If µ_D(·) can be explicitly identified, then (6.17) reduces to a standard
mathematical programming problem. However, this rarely happens, and
as an alternative, an interaction with the DM is necessary for finding a
satisficing solution for the DM to (6.17).
To generate a candidate for the satisficing solution that is also α-
Pareto optimal, in our interactive decision-making method, the DM is
asked to specify the degree α of the α-level set and the reference mem-
bership values. Observe that the idea of the reference membership val-
ues can be viewed as an obvious extension of the idea of the reference
point in Wierzbicki [215]. To be more explicit, for the DM's degree α
and reference membership values µ̄_i, i = 1, ..., k, the corresponding α-
Pareto optimal solution, which is, in the minimax sense, nearest to the
PROPOSITION 6.1
(1) c_i^1 ≤ c_i^2  ⇒  S_i(c_i^1) ⊇ S_i(c_i^2)
(2) b^1 ≤ b^2  ⇒  T(A, b^1) ⊆ T(A, b^2)
Now, from the properties of the α-level sets for the vectors and/or
matrices of fuzzy numbers, it should be noted here that the feasible
regions for A, b, c_i can be denoted by the closed intervals [A_α^L, A_α^R],
124 6. FUZZY MULTIOBJECTIVE INTEGER PROGRAMMING
minimize    max_{i=1,...,k} { (µ̄_i − µ_i(c_i x)) + ρ Σ_{j=1}^k (µ̄_j − µ_j(c_j x)) }
subject to  Ax ≤ b,  x ≥ 0: integer vector,  (A, b, c) ∈ (Ã, b̃, c̃)_α
Step 1*: Elicit a membership function µ_i(c_i x) from the DM for each
of the objective functions by considering the calculated individual
minimum and maximum of each objective function.
Step 3: For the degree α and the reference membership values µ̄_i,
i = 1, ..., k, specified by the DM, solve the corresponding augmented
minimax problem.
Step 4*: If the DM is satisfied with the current values of the member-
ship functions and/or objective functions given by the current best
solution, stop. Otherwise, ask the DM to update the reference mem-
bership levels and/or α by taking into account the current values of
the membership functions and/or objective functions, and return to
step 3.
              z_i^1        z_i^0
µ_1(c_1 x)   -60000.0         0.0
µ_2(c_2 x)        0.0     70000.0
µ_3(c_3 x)   -21000.0     32000.0
              z_i^1        z_i^0
µ_1(c_1 x)  -120000.0    -20000.0
µ_2(c_2 x)    20000.0    140000.0
µ_3(c_3 x)   -30000.0     30000.0
6.4 Conclusion
In this chapter, as the fuzzy multiobjective version of Chapter 5 and
an integer generalization along the same lines as Chapter 4, interactive
fuzzy multiobjective integer programming methods have been discussed.
Through the use of GADS for integer programming introduced in Chap-
ter 5, considerable effort has been devoted to the development of interactive
fuzzy multiobjective integer programming as well as fuzzy multiobjec-
tive integer programming with fuzzy numbers, together with several nu-
merical experiments.
In the next chapter, we will proceed to genetic algorithms for non-
linear programming. In Chapter 8, attention is focused on not only
multiobjective nonlinear programming problems but also multiobjective
nonlinear programming problems with fuzzy numbers as a generalized
version of this chapter.
Chapter 7

GENETIC ALGORITHMS FOR NONLINEAR PROGRAMMING
7.1 Introduction
Genetic algorithms (GAs), initiated by Holland [75], have attracted
considerable attention as global methods for complex function optimiza-
tion since De Jong considered GAs in a function optimization setting
[66]. However, many of the test function minimization problems solved
by a number of researchers during the past 20 years involve only spec-
ified domains of variables. Only recently have several approaches been
proposed for solving general nonlinear programming problems through
GAs [60, 88, 112, 165].
For handling the nonlinear constraints of general nonlinear programming
problems by GAs, most approaches are based on the concept of penalty
functions, which penalize infeasible solutions [60, 88, 112, 114, 115, 119].
Although several ideas have been proposed about how the penalty func-
134 7. GENETIC ALGORITHMS FOR NONLINEAR PROGRAMMING
1 A nonempty set C is called convex if the line segment joining any two points of the set also
belongs to the set, i.e., λx_1 + (1 − λ)x_2 ∈ C ∀x_1, x_2 ∈ C and ∀λ ∈ [0, 1]. A function f(x)
defined on a nonempty convex set C in R^n is said to be convex if f(λx_1 + (1 − λ)x_2) ≤ λf(x_1) +
where D is a feasible region defined by the lower and the upper bounds
of the decision variables (l_i ≤ x_i ≤ u_i, i = 1, ..., n) and by a set of
convex constraints C. Hence the feasible region D is a convex set.
From the convexity of the feasible region D, it follows that for each
point in the search space (x_1, ..., x_n) ∈ D there exists a feasible range
(l(i), u(i)) of a variable x_i, i = 1, ..., n, where the other variables x_j, j =
1, ..., i − 1, i + 1, ..., n, remain fixed. To be more explicit, for a given
(x_1, ..., x_i, ..., x_n) ∈ D, it holds that

y ∈ (l(i), u(i)) if and only if (x_1, ..., x_{i−1}, y, x_{i+1}, ..., x_n) ∈ D;  (7.4)

then for a given point (2, 5) ∈ D, l(i) and u(i), i = 1, 2, become as follows.
This means that the first element of the vector (2, 5) can vary from 1 to
√5 while x_2 = 5 remains constant, and the second element of the vector
(2, 5) can vary from 4 to 6 while x_1 = 2 remains constant. Naturally, if
the set of constraints C is empty, then the convex feasible region becomes
D = ∏_{i=1}^n ⟨l_i, u_i⟩ with l(i) = l_i and u(i) = u_i, i = 1, ..., n.
(1 − λ)f(x_2) ∀x_1, x_2 ∈ C and ∀λ ∈ [0, 1]. A function f(x) is said to be concave if −f(x) is
convex. It should be noted here that if all the functions f(x) and g_j(x), j = 1, ..., m_1, are
convex and all the functions h_j(x), j = m_1 + 1, ..., m, are linear, (7.1) becomes a convex
programming problem.
2 A point x* is said to be a local minimum point of (7.1) if there exists a real number δ > 0
such that f(x) ≥ f(x*) for all x ∈ X satisfying ‖x − x*‖ < δ. A point x* is called a global
minimum point of (7.1) if f(x) ≥ f(x*) for all x ∈ X.
7.2. Floating-point genetic algorithms 137
7.2.4 Crossover
For the floating-point genetic algorithms, several interesting types of
crossover operators have been proposed. Some of them are discussed in
turn following Michalewicz [112, 116J.
v′ = (v_1, ..., v′_i, ..., v_n) and w′ = (w_1, ..., w′_i, ..., w_n),  (7.7)
where
v′_i = a w_i + (1 − a) v_i and w′_i = a v_i + (1 − a) w_i.  (7.8)
Here, a is a parameter chosen so that the two resulting offspring v′ and
w′ are in the convex feasible region D. Actually, as can be seen from
simple calculations, the value of a is randomly chosen as follows:
the operator uses a random value a ∈ [0, 1], and the two newly generated
offspring v′ and w′ always become feasible when the feasible region D
is convex.
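As a sketch, operators (7.7) and (7.8) can be written down directly (the crossover position i and the two parents are assumed to be supplied by the surrounding GA):

```python
import random

def arithmetic_crossover(v, w, i, a=None, rng=random):
    """Arithmetic crossover (7.7)-(7.8): the i-th genes of the parents are
    replaced by the convex combinations
        v'_i = a * w_i + (1 - a) * v_i,
        w'_i = a * v_i + (1 - a) * w_i.
    With a drawn uniformly from [0, 1], both offspring remain inside a
    convex feasible region whenever both parents are feasible."""
    if a is None:
        a = rng.random()
    v_new, w_new = list(v), list(w)
    v_new[i] = a * w[i] + (1 - a) * v[i]
    w_new[i] = a * v[i] + (1 - a) * w[i]
    return v_new, w_new
```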
z=r(w-v)+w, (7.13)
7.2.5 Mutation
Mutation operators are somewhat different from the traditional ones.
Following Michalewicz [112], some of them are discussed in turn.
Boundary mutation
the space uniformly initially (when t is small) and very locally at later
stages. In Michalewicz et al. [117], the function Δ(t, y) = y(1 − r^{(1−t/T)^b})
is used, where r is a random number from [0, 1], T is the maximal
generation number, and b is a system parameter determining the degree
of nonuniformity.
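In code, the operator can be sketched as follows, using Δ(t, y) = y(1 − r^{(1−t/T)^b}) for the shrinking step; r is uniform on [0, 1], T is the maximal generation number, and b (a user parameter) controls the degree of nonuniformity:

```python
import random

def nonuniform_mutation(x, i, lower, upper, t, T, b=5.0, rng=random):
    """Nonuniform mutation of gene i.

    Early in the run (t small) the step covers the feasible range of the
    gene almost uniformly; as t approaches T, delta(t, y) shrinks toward 0,
    so the search becomes very local."""
    def delta(y):
        r = rng.random()
        return y * (1.0 - r ** ((1.0 - t / T) ** b))

    y = list(x)
    if rng.random() < 0.5:
        y[i] = x[i] + delta(upper[i] - x[i])   # move toward the upper bound
    else:
        y[i] = x[i] - delta(x[i] - lower[i])   # move toward the lower bound
    return y
```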
Such a repair process is performed for one reference point and all
search points. When a feasible region is nonconvex or very small,
it becomes very difficult to generate a feasible individual.
Thus, if a newly generated point z is infeasible, the process of generating
random numbers a is repeated until either a feasible point is found or
the prescribed number of iterations is reached.
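This repair process can be sketched as follows (`is_feasible` is a problem-specific predicate; falling back to the reference point after `max_tries` failures is one reasonable choice, not a detail fixed by the text):

```python
import random

def repair(search_point, reference_point, is_feasible,
           max_tries=100, rng=random):
    """GENOCOP III-style repair: sample z = a*s + (1-a)*r on the segment
    between an infeasible search point s and a feasible reference point r
    until a feasible z is found or max_tries is exhausted."""
    for _ in range(max_tries):
        a = rng.random()
        z = [a * si + (1 - a) * ri
             for si, ri in zip(search_point, reference_point)]
        if is_feasible(z):
            return z
    return list(reference_point)   # fall back to the feasible reference point
```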
In this way, two separate populations in GENOCOP III coevolve in
such a way that a development in one population influences evaluations
of individuals in the other population. Also, by evaluating reference
points directly via the objective function, fully feasible individuals can be
obtained. Although GENOCOP III can be applied to general nonconvex
7.4. Revised GENOCOP III 143
[Figure: generation of a new point z on the segment between a search point s ∈ S and a reference point r, shown (1) with the search point inside and (2) outside the feasible space]
minimize    Σ_j [max{0, g_j(x)}]²  (the sum of squares of the violated nonlinear constraints)
subject to  x ∈ S                                           (7.20)
and solves the formulated problem (7.20) for obtaining one initial refer-
ence point or yielding the information that none exists through the orig-
inal GENOCOP system [112, 113, 116] that uses the elementary opera-
tors consisting of simple crossover, whole arithmetic crossover, boundary
mutation, uniform mutation, and nonuniform mutation.
Then an initial population of reference points is created via multiple
copies of the initial reference point obtained in this way.
An initial population of search points is created randomly from indi-
viduals satisfying the lower and upper bounds determined by both the
linear constraints and the original lower and upper bounds.
In this way, two initial separate populations consisting of search points
and reference points, respectively, can be generated efficiently.
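The objective minimized in (7.20) can be sketched as follows (assuming the constraint convention g_j(x) ≤ 0 and h_j(x) = 0; the constraint functions themselves are problem specific):

```python
def violation_objective(x, ineq_constraints, eq_constraints=()):
    """Sum of squares of the violated constraints.

    The value is zero exactly on the feasible region, so driving it to zero
    yields a feasible reference point, while a strictly positive minimum
    signals that none exists."""
    total = 0.0
    for g in ineq_constraints:          # violated when g(x) > 0
        total += max(0.0, g(x)) ** 2
    for h in eq_constraints:            # violated when h(x) != 0
        total += h(x) ** 2
    return total
```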
Similar to GENOCOP III, our revised method searches for a new point
on the line segment between a search point and a reference point. To
overcome the difficulty of creating feasible points from a segment be-
tween a search point and a reference point in GENOCOP III, we propose
a bisection method for efficiently generating a new search point on the
line segment between a search point and a reference point.
In the proposed method, we consider two cases-(a) search points are
feasible individuals and (b) search points are infeasible individuals-and
present an efficient search method for generating feasible individuals.
If search points are feasible, we generate a new point on the line
segment between a search point and a reference point. In this case, if
the feasible space is convex, a newly generated point becomes feasible. If
the feasible space is nonconvex, a newly generated point does not always
become feasible. Thus we search for a feasible point using a bisection
method in the following way.
Let s ∈ S and r ∈ X be a search point and a reference point, respec-
tively, and set s̄ = s and r̄ = r.
Step 3: If the distance between s̄ and r̄ is less than the prescribed
sufficiently small value, set t = r̄ as a boundary point and go to step
4. Otherwise, return to step 1.
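The bisection search of the steps above can be sketched as follows (the midpoint test stands in for the steps omitted here; s is assumed infeasible and r feasible, so the segment crosses the boundary):

```python
def bisection_boundary(s, r, is_feasible, eps=0.001):
    """Bisection method for a (near-)boundary feasible point on the segment
    between an infeasible point s and a feasible point r.

    Invariant: s_bar stays infeasible and r_bar stays feasible, so each
    halving keeps the boundary inside the segment; the loop ends after
    O(log(1/eps)) steps."""
    s_bar, r_bar = list(s), list(r)

    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

    while dist(s_bar, r_bar) >= eps:
        mid = [(a + b) / 2.0 for a, b in zip(s_bar, r_bar)]
        if is_feasible(mid):
            r_bar = mid        # shrink from the feasible side
        else:
            s_bar = mid        # shrink from the infeasible side
    return r_bar               # feasible and within eps of the boundary
```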
[Figure: bisection method — (1) a boundary point t is located between the search point s and the reference point r, and (2) a new search point z is generated on the segment in the search space]
p_i = c (1 − c)^{i−1},  (7.21)
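Selection under (7.21) can be sketched as follows (since the p_i sum to 1 − (1 − c)^N rather than 1, the weights are renormalized here; giving the residual mass to the best-ranked individual is an equally common implementation choice):

```python
import random

def exponential_ranking_select(population, fitness, c=0.1, rng=random):
    """Exponential ranking selection with p_i = c * (1 - c)**(i - 1), where
    i = 1 is the rank of the best (highest-fitness) individual."""
    ranked = sorted(population, key=fitness, reverse=True)
    weights = [c * (1 - c) ** i for i in range(len(ranked))]
    u = rng.random() * sum(weights)
    acc = 0.0
    for individual, w in zip(ranked, weights):
        acc += w
        if u <= acc:
            return individual
    return ranked[-1]   # numerical safety net
```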
x* = (2.171996,2.363683,8.773926,5.095984,0.9906548,
1.430574,1.321644,9.828726,8.280092,8.375927)
f(x*) = 24.3062091,
where the first six constraints are active at the global minimum.
For this numerical example, the parameter values of the revised GENO-
COP III are set as follows: both population sizes are 70, the replacement
probability is p_r = 0.2, c = 0.1 in the exponential ranking selection, the
number of generations is 5000, and the trials are performed 10 times.
In all trials, the same initial populations, operators, and probabilities
of all operators are used.
The maximum number of searches for generating an initial reference
point in GENOCOP III is set to be 100, and the distance parameter for
judging a boundary point in a bisection method is set to be 0.001.
The obtained solutions with computational times are shown in Table
7.2.
From Table 7.2 it can be seen that the obtained optimal solutions
of the revised GENOCOP III are slightly better than those of GENO-
COP III. Furthermore, the required number of searches of the revised
GENOCOP III is about one-third of that of GENOCOP III.
that purpose, Sakawa and Yauchi [179, 180] used the following single-
objective nonconvex programming problem.
The parameter values of the revised GENOCOP III are set to the
same values as in the quadratic programming example.
The obtained numbers of searches and the obtained solutions are shown
in Table 7.3 and Table 7.4, respectively.
From Table 7.3, the required number of searches of the revised GENOCOP
III is much smaller than that of GENOCOP III; in the worst case,
GENOCOP III sometimes requires the maximum number of searches, 100,
which means that an initial feasible point cannot be located.
Naturally, the difference in the numbers of searches between GENOCOP
III and the revised GENOCOP III has a great influence on the computa-
tion time shown in Table 7.4.
As can be seen from Table 7.4, the revised GENOCOP III gives better
results than GENOCOP III does.
Furthermore, for comparing the generation methods of an initial fea-
sible point, 10 trials are performed for both the revised GENOCOP III
and the GENOCOP III. The experimental results show that the revised
7.5. Conclusion 151
7.5 Conclusion
In this chapter, we focused on general nonlinear programming prob-
lems and considered the applicability of the coevolutionary genetic al-
gorithm called GENOCOP III. Unfortunately, however, in GENOCOP
III, because an initial population is randomly generated, it is quite dif-
ficult to generate reference points. Furthermore, because a new search
point is randomly generated on the line segment between a search point
and a reference point, the effectiveness and speed of the search may be
quite low.
In order to overcome such drawbacks of GENOCOP III, we proposed
the revised GENOCOP III by introducing a method for generating a
reference point by minimizing the sum of squares of violated nonlinear
constraints and a bisection method for generating a new search point
on the line segment between a search point and a reference point. Il-
lustrative numerical examples demonstrated both the feasibility and the
effectiveness of the proposed method.
Chapter 8

FUZZY MULTIOBJECTIVE NONLINEAR PROGRAMMING
8.1 Introduction
In the late 1990s, Sakawa and Yauchi [181] formulated nonconvex
MONLP problems and presented an interactive fuzzy satisficing method
through the revised GENOCOP III introduced in the previous chap-
ter. Having determined the fuzzy goals of the decision maker (DM) for
the objective functions, if the DM specifies the reference membership
values, the corresponding Pareto optimal solutions can be obtained by
solving the augmented minimax problems for which the revised GENO-
COP III is effectively applicable. An interactive fuzzy satisficing method
for deriving a satisficing solution for the decision maker from a Pareto
optimal solution set is presented. Furthermore, by considering the ex-
perts' vague or fuzzy understanding of the nature of the parameters
in the problem-formulation process, multiobjective nonconvex program-
ming problems with fuzzy numbers are formulated. Using the α-level
sets of fuzzy numbers, the corresponding nonfuzzy α-multiobjective pro-
gramming problem and an extended Pareto optimality concept were introduced.
Sakawa and Yauchi [180, 182, 183] then presented interactive decision-
making methods through the revised GENOCOP III, both without and
with the fuzzy goals of the DM, to derive a satisficing solution for the
154 8. FUZZY MULTIOBJECTIVE NONLINEAR PROGRAMMING
fuzzy min    f_i(x),  i ∈ I_1
fuzzy max    f_i(x),  i ∈ I_2
fuzzy equal  f_i(x),  i ∈ I_3          (8.3)
subject to   x ∈ X
The term augmented is adopted because the term ρ Σ_{i=1}^k (µ̄_i − µ_i(f_i(x)))
is added to the standard minimax problem, where ρ is a sufficiently small
positive scalar.
Although, for the nonconvex MONLP, the augmented minimax prob-
lem (8.5) involves nonconvexity, if we define the fitness function

f(s) = 1.0 + kρ − max_{1≤i≤k} { (µ̄_i − µ_i(f_i(x))) + ρ Σ_{i=1}^k (µ̄_i − µ_i(f_i(x))) }   (8.6)

for each string s, the revised GENOCOP III [179, 181] proposed by
Sakawa and Yauchi is applicable.
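A direct transcription of (8.6), assuming the membership values µ_i(f_i(x)) have already been computed for the solution x decoded from the string s:

```python
def fitness(mu_values, mu_refs, rho=0.0001):
    """Fitness (8.6): with d_i = mu_ref_i - mu_i(f_i(x)),

        f(s) = 1.0 + k*rho - max_i { d_i + rho * sum_j d_j },

    so minimizing the augmented minimax objective maximizes the fitness,
    and the constants 1.0 and k*rho keep the fitness positive whenever
    every membership value reaches its reference level."""
    k = len(mu_values)
    d = [ref - mu for ref, mu in zip(mu_refs, mu_values)]
    penalty = rho * sum(d)
    return 1.0 + k * rho - max(di + penalty for di in d)
```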
8.2. Multiobjective nonlinear programming 157
The algorithm of the revised GENOCOP III for solving the augmented
minimax problem (8.5) can be summarized as follows.
Step 1*: Elicit a membership function µ_i(f_i(x)) from the DM for each
of the objective functions by considering the calculated individual
minimum and maximum of each objective function.
Step 6: Having evaluated the individuals via fitness function, apply the
selection operator for generating individuals of the next generation.
Step 8*: If the DM is satisfied with the current solution, stop. Oth-
erwise, ask the DM to update the reference membership values and
return to step 3.
The parameter values of the revised GENOCOP III are set to the
same values as in the nonconvex nonlinear programming example. The
coefficient ρ of the augmented minimax problem is set as 0.0001.
After calculating the individual minimum and maximum of the objec-
tive functions, assume that the DM subjectively determines the mem-
8.3. Multiobjective nonlinear programming problem with fuzzy numbers 159
µ_1(f_1(x)) = (1500 − f_1(x)) / 1410
µ_2(f_2(x)) = (3500 − f_2(x)) / 3150
µ_3(f_3(x)) = (3000 − f_3(x)) / 2950
For this numerical example, at each interaction with the DM, the
corresponding augmented minimax problem is solved through the revised
GENOCOP III for obtaining a Pareto optimal solution.
As shown in Table 8.1, in this example, the reference membership
values (µ̄_1, µ̄_2, µ̄_3) are updated from (1.0, 1.0, 1.0) to (0.8, 1.0, 1.0),
(0.8, 1.0, 0.9), and (0.8, 1.0, 0.95) sequentially.
In the whole interaction process, as shown in Table 8.1, the aug-
mented minimax problem is solved for the initial reference membership
levels, and the DM is supplied with the corresponding Pareto optimal
solution and membership values, as shown in interaction 1 of Table
8.1. On the basis of such information, because the DM is not satisfied
with the current membership values (0.81766, 0.81766, 0.81765), the
DM updates the reference membership values to µ̄_1 = 0.80, µ̄_2 = 1.0,
and µ̄_3 = 1.0 for improving the satisfaction levels for µ_2 and µ_3 at the
expense of µ_1. For the updated reference membership values, the cor-
responding augmented minimax problem yields the Pareto optimal so-
lution and membership values shown in interaction 2 of Table 8.1.
The same procedure continues in this manner until the DM is satisfied
with the current values of the membership functions. In this example,
after updating the reference membership values (µ̄_1, µ̄_2, µ̄_3) three times,
at the fourth interaction the satisficing solution of the DM is derived,
and the entire interactive process is summarized in Table 8.1.
where a_i = (a_{i1}, ..., a_{ik_i}), b_j = (b_{j1}, ..., b_{jm_j}), a = (a_1, ..., a_k), and
b = (b_1, ..., b_m).
a fuzzy goal for each of the objective functions f_i(x, a_i). In a mini-
mization problem, the fuzzy goal stated by the DM may be to achieve
"substantially less than or equal to some value p_i." This type of state-
ment can be quantified by eliciting a corresponding membership function
µ_i(f_i(x, a_i)) that is a strictly monotone decreasing function with respect
to f_i(x, a_i).
In the fuzzy approaches, however, we can further treat a more general
MONLP problem in which the DM has two types of fuzzy goals, namely
fuzzy goals expressed as "f_i(x, a_i) should be in the vicinity of r_i" (fuzzy
equal) as well as "f_i(x, a_i) should be substantially less than or equal to p_i
or greater than or equal to q_i" (fuzzy min or fuzzy max).
Such a generalized α-MONLP (Gα-MONLP) problem can be ex-
pressed as

fuzzy min    f_i(x, a_i),  i ∈ I_1
fuzzy max    f_i(x, a_i),  i ∈ I_2
fuzzy equal  f_i(x, a_i),  i ∈ I_3          (8.9)
subject to   x ∈ X(b)
             (a, b) ∈ (ã, b̃)_α

where I_1 ∪ I_2 ∪ I_3 = {1, 2, ..., p}, I_i ∩ I_j = ∅, i, j = 1, 2, 3, i ≠ j.
To elicit a membership function µ_i(f_i(x, a_i)) from the DM for a fuzzy
goal such as "f_i(x, a_i) should be in the vicinity of r_i," it is obvious that
we can use different functions for the left and right sides of r_i.
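Such an asymmetric membership function can be sketched as follows (the linear shape and the two spreads are illustrative assumptions, not values from the text):

```python
def fuzzy_equal_membership(value, r, left_spread, right_spread):
    """Membership function for the fuzzy goal "f_i should be in the
    vicinity of r", with different linear functions on the two sides:
    full membership at r, falling to 0 at r - left_spread on the left
    and at r + right_spread on the right."""
    if value <= r:
        return max(0.0, 1.0 - (r - value) / left_spread)
    return max(0.0, 1.0 - (value - r) / right_spread)
```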
When fuzzy equal is included in the fuzzy goals of the DM, it
is desirable that f_i(x, a_i) be as close to r_i as possible. Consequently,
the notion of α-Pareto optimal solutions defined in terms of objective
functions cannot be applied. For this reason, the concept of (local) M-
α-Pareto optimal solutions, which are defined in terms of membership
functions instead of objective functions, is introduced, where M refers
to membership.
Step 2*: Ask the DM to select the initial value of α (0 ≤ α ≤ 1) and
set the initial reference membership levels to 1.
Step 6: Having evaluated the individuals via the fitness function, apply the
selection operator for generating individuals of the next generation.
Step 8*: If the DM is satisfied with the current solution, stop. Other-
wise, ask the DM to update the reference membership values and/or
the degree a and return to step 3.
For this numerical example, at each interaction with the DM, the
corresponding augmented minimax problem is solved through the revised
GENOCOP III for obtaining an α-Pareto optimal solution.
As shown in Table 8.3, in this example, the values of (µ̄_1, µ̄_2, µ̄_3; α)
are updated from (1.0, 1.0, 1.0; 1.0) to (0.8, 1.0, 1.0; 0.9), (0.8, 1.0, 0.9;
0.9), and (0.8, 1.0, 0.95; 0.8) sequentially.
In the whole interaction process, as shown in Table 8.3, the aug-
mented minimax problem is solved for the initial reference membership
levels and the degree α, and the DM is supplied with the correspond-
ing α-Pareto optimal solution and membership values, as shown in
interaction 1 of Table 8.3. On the basis of such information, because
the DM is not satisfied with the current membership values (0.80406,
0.80406, 0.80407) and the degree α = 1, the DM updates the reference
membership values and the degree α to µ̄_1 = 0.80, µ̄_2 = 1.0, µ̄_3 = 1.0,
and α = 0.9 for improving the satisfaction levels for µ_2 and µ_3 at the
expense of µ_1 and the degree α. For the updated reference membership
values and the degree α, the corresponding augmented minimax prob-
lem yields the α-Pareto optimal solution and membership values, as is
shown in interaction 2 of Table 8.3. The same procedure continues in
this manner, and in this example session, after updating the reference
8.4. Conclusion 167
membership values (µ̄_1, µ̄_2, µ̄_3) three times and updating the degree α
two times, at the fourth interaction the satisficing solution for the DM
is derived; the whole interactive process is summarized in Table
8.3, and the example session takes about 5 minutes.
The satisficing values of the membership (objective) functions for the
compromised degree α = 0.8 can be interpreted as compromise values
for the DM between the conflicting membership (objective) functions.
8.4 Conclusion
In this chapter, nonconvex MONLP problems were formulated and
an interactive fuzzy satisficing method through the revised GENOCOP
III was presented. After determining the fuzzy goals of the DM, if the
DM specifies the reference membership values, the corresponding Pareto
optimal solutions can be obtained by solving the augmented minimax
problems, for which the revised GENOCOP III is effectively applicable.
Furthermore, by considering the experts' vague or fuzzy understanding
of the nature of the parameters in the problem-formulation process, non-
convex MONLP-FN problems were formulated. Using the α-level sets
of fuzzy numbers, the corresponding nonfuzzy α-MONLP problem was
introduced. Having determined the fuzzy goals of the DM, if the DM
specifies the degree α and the reference membership values, the cor-
Chapter 9

GENETIC ALGORITHMS FOR JOB-SHOP SCHEDULING

9.1 Introduction
Scheduling is the allocation of shared resources over time in order
to perform a number of tasks. In manufacturing systems, scheduling
allocates a set of jobs on a set of machines for processing. Scheduling
problems are found in diverse areas such as manufacturing, production
planning, computing, communications, transportation, logistics,
healthcare, and so on.
The classic job-shop scheduling problem (JSP) is generally described
as follows: There is a set of jobs to be processed through a set of ma-
chines. Each job must pass through each machine once and once only,
and each machine is capable of processing at most one job at a time.
The processing of a job on a machine is called an operation. Technolog-
ical constraints demand that each job should be processed through the
machines in a particular order. The problem is to determine a process-
ing order of operations on each machine in order to minimize the time
170 9. GENETIC ALGORITHMS FOR JOB-SHOP SCHEDULING
Jobs may be finished at any time, in other words, no due date exists.
There is no randomness.
Denoting the completion time of job J_j by C_j, j = 1, 2, ..., n, the time
required to complete all jobs, called the maximum completion time or
makespan, is defined by C_max = max{C_1, C_2, ..., C_n}. The problem is
to determine a processing order of operations on each machine in order
to minimize the maximum completion time C_max.
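To illustrate how completion times and C_max arise from a processing order, the following sketch evaluates a semiactive schedule; the operation-based encoding (the k-th occurrence of job j denotes its k-th operation) is only one convenient choice, not taken from the text:

```python
def evaluate_schedule(routes, sequence):
    """Completion times C_j and makespan C_max = max{C_1, ..., C_n} of a
    semiactive schedule: each operation starts as early as its job's route
    and its machine's previous work permit.

    routes[j]  -- list of (machine, processing_time) pairs for job j
    sequence   -- priority list of job indices, one entry per operation"""
    job_ready = {j: 0 for j in routes}      # completion of job j's last op
    machine_ready = {}                      # completion of machine's last op
    next_op = {j: 0 for j in routes}
    for j in sequence:
        machine, p = routes[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(machine, 0))
        job_ready[j] = start + p
        machine_ready[machine] = start + p
        next_op[j] += 1
    return job_ready, max(job_ready.values())

# Two jobs on two machines: J1 = M1(3) -> M2(2), J2 = M2(2) -> M1(4).
routes = {0: [("M1", 3), ("M2", 2)], 1: [("M2", 2), ("M1", 4)]}
completion, c_max = evaluate_schedule(routes, [0, 1, 0, 1])  # c_max == 7
```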
[Figure: Gantt charts of a semiactive, an active, and a nondelay schedule for a three-machine example]
Although the set of nondelay schedules is smaller than the set of active
schedules, the nondelay schedules will not necessarily contain an optimal
schedule.
Step 1 (Fig. 9.3): Find the set C of all the earliest operations in tech-
nological sequence among the operations that are not yet scheduled.
The set C is called "cut."
[Figure 9.3: the cut C]
Step 2 (Fig. 9.4): Disregarding that only one operation can be pro-
cessed on each machine at a time, create a schedule of earliest com-
pletion time for each operation, and denote the obtained earliest com-
pletion time of each operation O_{j,i,r} ∈ C by EC_{j,i,r}.
Step 3 (Fig. 9.5): Find the operation O_{j*,i*,r*} that has a minimum EC
in the set C: EC_{j*,i*,r*} = min{EC_{j,i,r} | O_{j,i,r} ∈ C}, where ties
are broken randomly. Find the set G of operations that consists of
9.3. Genetic algorithms for job-shop scheduling 175
the operations O_{j,i,r*} ∈ G sharing the same machine M_{r*} with the
operation O_{j*,i*,r*} such that the processing of O_{j,i,r*} and O_{j*,i*,r*} overlaps.
Because the operations in G overlap in time, G is called the conflict
set.
[Figure 9.5: the conflict set and the operation with the minimum earliest completion time]
Step 4 (Fig. 9.6): Randomly select one operation O_{j′,i′,r*} from the
conflict set G.
[Figure 9.6: the selected operation]
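Steps 1-4 above can be sketched as follows; steps 5 and 6, which schedule the selected operation and repeat the procedure until every operation is scheduled, are folded into the loop, and conflicts are broken at random as in step 4:

```python
import random

def giffler_thompson(routes, rng=random):
    """Active-schedule generation in the spirit of the Giffler and Thompson
    algorithm.  routes[j] is job j's list of (machine, time) pairs."""
    next_op = {j: 0 for j in routes}
    job_ready = {j: 0 for j in routes}
    machine_ready = {}
    start_times = {}
    while any(next_op[j] < len(routes[j]) for j in routes):
        # Step 1: the cut C -- the earliest unscheduled operation of each job.
        cut = [j for j in routes if next_op[j] < len(routes[j])]
        # Step 2: earliest completion time EC of every operation in the cut.
        ec = {}
        for j in cut:
            m, p = routes[j][next_op[j]]
            ec[j] = max(job_ready[j], machine_ready.get(m, 0)) + p
        # Step 3: minimum-EC operation and the conflict set G on its machine.
        j_star = min(ec, key=ec.get)
        m_star = routes[j_star][next_op[j_star]][0]
        conflict = [j for j in cut
                    if routes[j][next_op[j]][0] == m_star
                    and max(job_ready[j],
                            machine_ready.get(m_star, 0)) < ec[j_star]]
        # Step 4: randomly select one operation from the conflict set.
        j_sel = rng.choice(conflict)
        m, p = routes[j_sel][next_op[j_sel]]
        start = max(job_ready[j_sel], machine_ready.get(m, 0))
        start_times[(j_sel, next_op[j_sel])] = start
        job_ready[j_sel] = start + p
        machine_ready[m] = start + p
        next_op[j_sel] += 1
    return start_times, max(job_ready.values())
```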
machines, the answers are 12, 12, and 10, respectively, to obtain a total
of 44. Because 12 x 4 is 48 when the order of priority is the same for all,
the ratio becomes 44/48, and we can see that the degree of similarity is
0.917.
Job priority orders of two individuals and the per-machine counts:

             Individual 1                     Individual 2
Machine 1:  1 2 4 3    2 + 2 + 3 + 3 = 10    Machine 1:  2 1 4 3
Machine 2:  3 1 4 2    3 + 3 + 3 + 3 = 12    Machine 2:  3 1 4 2
Machine 3:  4 3 1 2    3 + 3 + 3 + 3 = 12    Machine 3:  4 3 1 2
Machine 4:  3 2 1 4    3 + 3 + 2 + 2 = 10    Machine 4:  3 2 4 1

(10 + 12 + 12 + 10)/48 = 44/48 ≈ 0.917
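The counting rule of the example can be sketched as follows (the per-job contribution (n − 1) − |shift| is inferred from the worked numbers above rather than stated explicitly in the excerpt):

```python
def degree_of_similarity(orders_a, orders_b):
    """Degree of similarity between two individuals, each given as a
    machine-by-machine list of job processing orders.

    On every machine, a job contributes (n - 1) - |shift|, where |shift|
    is how far its position differs between the two orders; the total is
    normalized by the maximum m * n * (n - 1), so identical individuals
    score 1.0."""
    total, maximum = 0, 0
    for order_a, order_b in zip(orders_a, orders_b):
        n = len(order_a)
        pos_b = {job: k for k, job in enumerate(order_b)}
        for k, job in enumerate(order_a):
            total += (n - 1) - abs(k - pos_b[job])
        maximum += n * (n - 1)
    return total / maximum
```

For the two individuals above this yields (10 + 12 + 12 + 10)/48 = 44/48 ≈ 0.917.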
Step 2: Select one of the two parent individuals with the same proba-
bility 1/2. From the conflict set G, choose the operation with
the minimum earliest completion time in the schedule represented by
the selected parent individual, and denote it by O_{j′,i′,r*}.
Step 3: Perform steps 5 and 6 of the Giffler and Thompson algo-
rithm for generating active schedules.
The Giffler and Thompson algorithm-based crossover is illustrated in
Figure 9.9. As shown in Figure 9.9, in offspring 1, assume that O_{1,·,r*} is
the operation having a minimum EC in the cut set C; then a conflict
exists among the operations O_{1,·,r*}, O_{2,·,r*}, and O_{3,·,r*}. If parent 1 is
selected with probability 1/2, then, because O_{2,·,r*} is processed
with the highest priority among the conflict set in parent 1, O_{2,·,r*} is
selected toward eliminating the conflict.
[Figure 9.9: Giffler and Thompson algorithm-based crossover — one of the two parents is selected with probability 1/2, and the operation processed with the highest priority in that parent is chosen from the conflict set to build the offspring]
9.3.6 Mutation
At the time of the Giffler and Thompson algorithm-based crossover,
without selecting from either parent, mutation is performed by selecting
an operation from the set G at random with a mutation ratio of p% [219].
Step 3: For each node under conflict at level L, calculate the lower
bound LB^L_{i,r}, which will be defined below.
Step 4: Among the unbranched nodes at level L, find the node with
the minimum lower bound (LB_L = min_{i,r} LB^L_{i,r}).
Step 5: Compare the minimum lower bound LB_L at level L with f*.
If LB_L < f*, go to step 6. If not, i.e., LB_L ≥ f*, go to step 8.
Step 6: Branch from an unexplored node with the minimum lower
bound and update the completion time table EC_L. If there are two or
more nodes having the minimum lower bound, select one node
by a particular rule and branch from the resulting active node. Oth-
erwise, branch from that node.
Step 7: Update the scheduling time interval as follows:
8.1 If there exist one or more unexplored nodes with a lower bound
such that LB^L_{i,r} < f*, revise the scheduling time interval by set-
ting T = min_{i,r ∈ {S_L}} EC^{L−1}_{i,r}, where {S_L} is the conflict set at
level L, and S by the next lower element in the completion time
table EC_{L−1}. Then return to step 4.
8.2 If all unexplored nodes have lower bounds such that LB^L_{i,r} ≥ f*,
go to step 9.
Step 9: Check for an optimal solution.
LB^L_{i,r} = max[ max_{i∈J} EC^L_{i,r},  max_{r∈R} { min_{i,r∈R} ES^L_{i,r} + Σ_{i,r∈R} p_{i,r} } ],   (9.1)
Simulated annealing
Step 1: Generate one solution x_c through the random selection in step
4 of the active-schedule generating algorithm; i.e., generate an initial
solution as in the GA of the previous section. Set an initial
temperature.
Step 2: Represent the job processing order for each machine of the solu-
tion x_c by the corresponding matrix.
Step 3: From the matrix, select a certain machine at random. Select
two job-processing orders of the machine and exchange them.
Step 4: Based on the job-processing order after the exchange, generate a
solution that becomes an active schedule and denote the solution by
x.
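The neighborhood move of steps 3 and 4 can be sketched as follows (regenerating an active schedule from the exchanged order is left to a schedule builder such as the Giffler and Thompson procedure):

```python
import random

def sa_neighbor(machine_orders, rng=random):
    """Pick one machine at random and exchange two positions in its job
    processing order, leaving all other machines unchanged."""
    orders = [list(order) for order in machine_orders]
    m = rng.randrange(len(orders))
    i, j = rng.sample(range(len(orders[m])), 2)   # two distinct positions
    orders[m][i], orders[m][j] = orders[m][j], orders[m][i]
    return orders
```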
Simulations are performed for the GAs both with and
without the degree of similarity. The parameter values of the GAs
shown in Table 9.6 were found through a number of experiments, and
these values are used in each of the GA trials. In the population
construction, three subpopulations are prepared, and at about two-thirds
of the specified generation number, the subpopulations are merged
into one population. The numbers of searches of SA are set as shown in Table
9.7 by considering the population sizes and the numbers of generations
of the GAs. Although it may be appropriate to set the numbers of searches of
SA as 30 × 50 = 1500 and 40 × 80 = 3200 for the 6 × 6 and 10 × 10 JSPs,
respectively, we set numbers larger than these, as shown in
Table 9.7, in order to compare the accuracy of the solutions and the state of
convergence of the GAs and SA.
All of the trials of the GAs and SA are performed 10 times for each problem using a Fujitsu S-4/10 workstation. The average time required for the simulations is shown in Table 9.8. Naturally, the computation time of SA is much larger than that of the GAs because of the predetermined search numbers shown in Table 9.7. The maximum completion times for the approximate optimal solutions obtained from these trials are shown
in Tables 9.9 and 9.10. Also, the maximum completion time for an optimal solution obtained from BAB is shown in Table 9.11.
Trial 1 2 3 4 5 6 7 8 9 10
GA/DS 55 55 55 55 55 55 55 55 55 55
GA 55 55 55 55 55 55 55 55 55 55
SA 56 57 57 57 58 56 55 55 55 57
BAB 55
GA, genetic algorithm; DS, degree of similarity
Trial 1 2 3 4 5 6 7 8 9 10
GA/DS 47 47 47 47 47 47 47 47 47 47
GA 47 48 47 48 48 47 49 47 49 48
SA 51 50 49 51 51 49 50 50 50 52
BAB 46
GA, genetic algorithm; DS, degree of similarity
Trial 1 2 3 4 5 6 7 8 9 10
GA/DS 73 73 73 73 73 73 73 73 73 73
GA 73 73 74 73 73 73 74 73 74 73
SA 81 77 80 76 79 81 81 80 78 79
BAB
GA, genetic algorithm; DS, degree of similarity
[Figure: (a) average maximum completion time and (b) minimum maximum completion time of the population versus generation]
As can be seen in Figures 9.11, 9.12, and 9.13, compared with the GAs, the solutions obtained through SA lack evenness, and the convergence trends vary widely during searches. This may be because the one-point search of SA is influenced by the initial solution and by the trend in solutions during searches.
As can be seen in Figures 9.11 and 9.12, compared with GA without
the degree of similarity, the solutions obtained through GA with the
degree of similarity are much more stable. This may be because of the
introduction of the degree of similarity.
[Figure: (a) average maximum completion time and (b) minimum maximum completion time versus generation]
[Figure: maximum completion time versus search number for SA]
10.1 Introduction
As discussed in the previous chapter, a significant number of successful
applications of genetic algorithms to job-shop scheduling problems have
appeared [16, 25, 42, 43, 48, 51, 60, 99, 123, 198, 203, 219]. Naturally, in
these job-shop scheduling problems, various factors, such as processing
time, due date, and so forth, have been precisely fixed at some crisp
values.
However, when formulating job-shop scheduling problems that closely describe and represent real-world problems, various factors involved in the problems are often only imprecisely or ambiguously known to the analyst. This is particularly true in real-world situations where human-centered factors are incorporated into the problems. In such situations, it may be more appropriate to consider fuzzy processing time because of man-made factors and a fuzzy due date tolerating a certain
190 10. FUZZY MULTIOBJECTIVE JOB-SHOP SCHEDULING
amount of delay in the due date [81, 139, 206]. To be more specific, a number of man-made factors exist in operations, planning and other arrangements are also a part of operations, and fuzziness in processing times undeniably exists. Concerning the due date, we can also imagine many situations in which it is desirable in principle to satisfy the due date strictly, while a certain amount of delay may be tolerated, with longer delays receiving lower evaluations.
Recently, in order to reflect such situations, a mathematical programming approach to a single machine fuzzy scheduling problem with fuzzy precedence relation [81] and job-shop scheduling incorporating fuzzy processing time using genetic algorithms [206] have been proposed.
In order to more suitably model actual scheduling situations, in 1999,
Sakawa and Mori [158, 159] formulated job-shop scheduling problems
with fuzzy processing time and fuzzy due date by incorporating fuzzy
processing time and fuzzy due date.
On the basis of the concept of an agreement index for fuzzy due date
and fuzzy completion time for each job, the formulated problem was in-
terpreted as seeking a schedule that maximizes the minimum agreement
index. For solving the formulated fuzzy job-shop scheduling problems,
an efficient genetic algorithm for job-shop scheduling problems intro-
duced in the previous chapter is extended to deal with the fuzzy due
dates and fuzzy completion time [157]. As illustrative numerical exam-
ples, both 6 x 6 and 10 x 10 job-shop scheduling problems with fuzzy
processing time and fuzzy due date were considered for demonstrating
the feasibility and efficiency of the proposed genetic algorithm.
Unfortunately, however, in these fuzzy job-shop scheduling problems, only a single objective function is considered, and extensions to multiobjective job-shop scheduling problems are desired for reflecting real-world situations more adequately. On the basis of the agreement index of fuzzy
due date and fuzzy completion time, in 2000, Sakawa and Kubota [156]
formulated multiobjective job-shop scheduling problems with fuzzy due
date and fuzzy processing time as three-objective problems that not
only maximize the minimum agreement index but also maximize the
average agreement index and minimize the maximum fuzzy completion
time. Moreover, by considering the imprecise nature of human judg-
ments, fuzzy goals of the DM for the objective functions are introduced.
After eliciting the linear membership functions through interaction with
the DM, the fuzzy decision of Bellman and Zadeh or minimum operator
[22] is adopted for combining them. Then a genetic algorithm that is
suitable for solving the formulated problems is proposed [156]. As il-
lustrative numerical examples, both 6 x 6 and 10 x 10 three-objective
job-shop scheduling problems with fuzzy due date and fuzzy process-
10.2. Job-shop scheduling with fuzzy processing time and fuzzy due date 191
ing time were considered, and the feasibility and effectiveness of the
proposed method were demonstrated by comparing with the simulated
annealing method.
AI = area(C_j ∩ D_j) / area(C_j),    (10.1)

where C_j is the fuzzy completion time and D_j the fuzzy due date of job j. The agreement index AI can be viewed as the portion of C_j that was completed by the due date.
It should be noted here that, although the area of the fuzzy completion time becomes zero for a conventional scheduling problem with nonfuzzy processing time and nonfuzzy due date, so that the definition of the agreement index AI itself becomes meaningless, it is possible to define the nonfuzzy case as an extreme limit. More specifically, we consider the fuzzification of the nonfuzzy processing time a as (a − δ, a, a + δ), δ > 0; regard the agreement index AI(δ) as a function of δ with respect to the fuzzified processing time and due date; and define the agreement index AI of the nonfuzzy processing time and due date as the extreme limit of AI(δ) such that
[Figure: (a) fuzzy processing time, (b) fuzzy due date, and the agreement index]
A + B = (a1 + b1, a2 + b2, a3 + b3)    (10.3)

Figure 10.3 depicts the addition of the TFNs A and B. The addition is used when calculating the fuzzy completion time of each operation.
max(A, B) ≅ (max(a1, b1), max(a2, b2), max(a3, b3)),    (10.4)

where

A = (a1, a2, a3),    (10.5)
B = (b1, b2, b3).    (10.6)
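These TFN operations and the agreement index of (10.1) can be sketched numerically as follows. The componentwise addition and maximum follow the text; the numerical integration and all function names are our own illustration, assuming a fuzzy due date (d1, d2) whose membership is 1 up to d1 and falls linearly to 0 at d2.

```python
import numpy as np

def tfn_add(a, b):
    """Addition of TFNs, componentwise."""
    return tuple(x + y for x, y in zip(a, b))

def tfn_max(a, b):
    """Approximate maximum of TFNs, componentwise."""
    return tuple(max(x, y) for x, y in zip(a, b))

def tfn_membership(c, x):
    """Membership function of a TFN c = (c1, c2, c3) at points x."""
    c1, c2, c3 = c
    up = (x - c1) / (c2 - c1)
    down = (c3 - x) / (c3 - c2)
    return np.clip(np.minimum(up, down), 0.0, 1.0)

def due_membership(d, x):
    """Fuzzy due date (d1, d2): degree 1 up to d1, falling to 0 at d2."""
    d1, d2 = d
    return np.clip((d2 - x) / (d2 - d1), 0.0, 1.0)

def agreement_index(c, d, n=200001):
    """AI = area(completion time ∩ due date) / area(completion time)."""
    x = np.linspace(c[0], c[2], n)
    mu_c = tfn_membership(c, x)
    inter = np.minimum(mu_c, due_membership(d, x))
    return inter.sum() / mu_c.sum()   # common grid spacing cancels
```

A completion time entirely before d1 gives AI = 1, and one entirely after d2 gives AI = 0, matching the interpretation of AI as the portion completed by the due date.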
Figure 10.5. Step 1 of the Giffier and Thompson algorithm for FJSPs
Figure 10.6. Step 2 of the Giffier and Thompson algorithm for FJSPs
Step 3 (Fig. 10.7): Find the operation O_{j*,i*,r*} that has a minimum EC in the set C: EC_{j*,i*,r*} = min{EC_{j,i,r} | O_{j,i,r} ∈ C}. Find the set G of operations that consists of the operations O_{j,i,r*} ∈ C sharing the same machine M_{r*} with the operation O_{j*,i*,r*} and whose processing overlaps that of O_{j*,i*,r*}. Because the operations in G overlap in time, G is called the conflict set.
Figure 10.7. Step 3 of the Giffier and Thompson algorithm for FJSPs
Step 4 (Fig. 10.8): Randomly select one operation O_{j',i',r*} from the conflict set G.
Figure 10.8. Step 4 of the Giffler and Thompson algorithm for FJSPs
tions with conflicts. Observe that the conflict is not considered here.
Remove the selected operation from the cut.
Figure 10.9. Step 5 of the Giffier and Thompson algorithm for FJSPs
[Figure: crossover example; an operation is selected from Parent 1 or Parent 2 to generate the offspring]
(1) Among c offspring individuals, select one individual with the great-
est minimum agreement index.
(2) Among (c+ 1) individuals consisting of (c-1) offspring not selected in
(1) plus two parents, select one individual with the greatest minimum
agreement index.
In this crossover method, because a larger value of c yields a larger number of offspring, the probability that excellent offspring are generated also becomes higher. However, because they are generated from the same parents, the degree of similarity among the offspring individuals becomes high. With this observation in mind, we set c = 3 in the numerical experiments.
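The two-stage selection (1)-(2) above can be sketched as follows. Individuals are opaque labels here, and min_ai stands in for the schedule-evaluation routine that returns the minimum agreement index; both are assumptions for illustration.

```python
def select_two(offspring, parents, min_ai):
    """Pick two survivors after crossover, as in substeps (1) and (2).

    min_ai(ind) returns the minimum agreement index of individual ind.
    """
    # (1) the best of the c offspring
    first = max(offspring, key=min_ai)
    # (2) the best of the remaining (c - 1) offspring plus the two parents
    rest = [o for o in offspring if o is not first] + list(parents)
    second = max(rest, key=min_ai)
    return first, second

# toy usage: individuals are labels, min_ai is a lookup table
ai = {"o1": 0.4, "o2": 0.7, "o3": 0.5, "p1": 0.6, "p2": 0.3}
print(select_two(["o1", "o2", "o3"], ["p1", "p2"], ai.get))  # ('o2', 'p1')
```

Note that a parent can survive in stage (2), which preserves good schedules when the offspring are all worse.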
10.2.3.5 Mutation
At the time of crossover, without selecting from either parent, ran-
domly select operations from the set G with a mutation ratio of p%
[219].
The algorithm of SA
Step 2: Represent the job processing order for each machine of a solu-
tion Xc by the corresponding matrix.
"-
Table 10.2. Numerical example of 10 x 10 FJSP ~
"'l
Processing machines (fuzzy processing time) ~
~
Job 1 8(2,3,4) 6(3,5,6) 5(2,4,5) 2( 4,5,6) 1(1,2,3) 3(3,5,6) 9(2,3,5) 4(1,2,3) 7(3,4,5) 10(2,3,4) ~
Job 2 10(2,3,4) 7(2,3,5) 4{2,4,5) 6(1,2,3) 8(4,5,6) 3(2,4,6) 2(2,3,4) 1{1,3,4) 5{2,3,4) 9{3,4,5) ~
Job 3 6{2,4,5) 9(1,2,3) 10(2,3,5) 8(1,2,4) 1(3,5,6) 7(1,3,4) 4(1,3,5) 2(1,2,4) 5(2,4,5) 3(1,3,5) ~
Job 4 1(1,2,3) 5(3,4,5) 8(1,3,5) 9(2,4,6) 10(2,4,5) 6(1,2,4) 7(3,4,5) 2(1,3,5) 4(1,3,6) 3(1,3,4) ~
3(1,3,4) 5(1,2,3) 8(1,3,5) 9(2,3,4) 10(3,4,5) 6{1,3,4) 1 (3,4,5) 4(1,3,4)
a
Job 5 2{2,3,4) 7(1,3,4) t:l:i
Job 6 4(2,3,4) 2(2,3,4) 3{1,2,3) 5(2,4,5) 6(1,3,4) 8(1,3,4) 7(3,4,5) 9(1,2,3) 10(2,4,5) 1(1,3,4) ~
C':l
Job 7 3(2,3,4) 5(1,4,5) 4(1,3,5) 1(3,4,5) 9(2,3,4) 7(3,4,5) 2(1,2,3) 10(3,5,6) 8(3,5,6) 6(1,2,3)
~
Job 8 7(3,4,5) 1(1,2,3) 9(3,4,5) 6(2,4,5) 10(1,3,4) 2(2,3,4) 5(1,2,3) 3(2,4,5) 4(3,4,5) 8(2,3,5)
~
Job 9 9(3,4,5) 4(1,3,4) 10(1,3,5) 2(2,3,4) 3(3,5,6) 6(2,4,5) 8(1,3,4) 1(3,4,5 ) 5{1,2,3) 7(3,4,5)
Job 10 7(2,4,5) 5(1,2,3) 2(3,4,5) 4(2,3,4) 1(1,2,3) 8(3,4,5) 10(2,4,5) 6(3,4,5 ) 3(1,2,3) 9(1,2,4) a'-<
t:l:i
Fuzzy Job 1 Job 2 Job 3 Job 4 Job 5 Job 6 Job 7 Job 8 Job 9 Job 10
50,65 50,65 50,65 45,60 45,60 50,60 45,60 50,60
~
a
due date 45,60 50,60
"ti
V:l
~
~
~
~
~
Now we are ready to apply both the genetic algorithm and the SA
presented in this section to the 6 x 6 and 10 x 10 F JSPs in Tables
10.1 and 10.2 to compare the accuracy of the solution and the state of
convergence.
Each of the parameter values of the genetic algorithm shown in Table 10.3 was found through a number of preliminary experiments, and these values are used in each of the trials of the genetic algorithm. The search numbers of SA are set as shown in Table 10.4 by considering the population sizes and the numbers of generations of the genetic algorithm. Although it may be appropriate to set the search numbers of SA as 30 x 50 = 1500 and 40 x 80 = 3200 for the 6 x 6 and 10 x 10 FJSPs, respectively, we set larger numbers, as shown in Table 10.4, to compare the accuracy of the solutions and the state of convergence of the genetic algorithm and SA.
All of the trials of GA and SA are performed 10 times for each problem using a Fujitsu S-4/10 workstation. The average time required for the simulations is shown in Table 10.5. Naturally, the computation time of SA is much larger than that of GA because of the predetermined search numbers shown in Table 10.4. The minimum agreement indices for the approximate optimal solutions obtained from these trials are shown in Tables 10.6 and 10.7.
Furthermore, to clarify the state of the trend toward convergence in
the 6 x 6 and 10 x 10 F JSPs, the changes occurring in each generation
of the average minimum agreement index and the maximum minimum
agreement index for all trial populations in GA are shown in Figures
10.12 and 10.14, respectively. Also, the changes occurring in each search
Table 10.5. Average computation time
Problem    GA         SA
6 x 6      17.6 sec   40.4 sec
10 x 10    170.4 sec  220.5 sec
Table 10.6. Minimum agreement index for each trial
Trial  GA    SA      Trial  GA    SA
1      0.69  0.59    6      0.69  0.38
2      0.69  0.39    7      0.69  0.38
3      0.69  0.36    8      0.69  0.38
4      0.69  0.36    9      0.69  0.36
5      0.69  0.39    10     0.69  0.36
Table 10.7. Minimum agreement index for each trial
Trial  GA    SA      Trial  GA    SA
1      0.94  0.76    6      0.93  0.75
2      0.94  0.68    7      0.94  0.67
3      0.94  0.64    8      0.94  0.69
4      0.94  0.76    9      0.93  0.78
5      0.94  0.75    10     0.93  0.60
number of the maximum minimum agreement index for all trial popula-
tions in SA are shown in Figures 10.13 and 10.15.
The fuzzy completion times for each operation for the solutions ob-
tained from GA are also shown in Tables 10.8 and 10.9.
[Figure: minimum agreement index versus search number for SA]
[Figure: (a) average minimum agreement index and (b) maximum minimum agreement index of the population versus generation for GA]
[Figure: minimum agreement index versus search number for SA]
Table 10.9. Approximate optimal solution using GA for 10 x 10 FJSP

Fuzzy completion time
Job 1: (2,3,4) (5,9,11) (7,14,18) (11,18,24) (12,20,27) (15,25,33) (17,28,38) (18,33,45) (25,40,53) (27,43,61)
Job 2: (2,3,4) (5,7,10) (8,17,23) (10,16,26) (14,21,34) (17,29,40) (19,32,44) (20,34,51) (22,37,55) (25,41,60)
Job 3: (2,4,5) (4,6,8) (7,13,19) (8,15,23) (15,25,33) (16,28,37) (17,31,42) (20,34,48) (24,41,60) (27,44,65)
Job 4: (1,2,3) (4,6,8) (5,9,13) (9,14,19) (12,21,28) (15,30,42) (22,36,48) (23,39,53) (24,42,59) (28,47,69)
Job 5: (2,3,4) (6,10,14) (7,13,18) (8,15,21) (9,18,28) (13,23,32) (16,27,37) (17,30,46) (23,38,56) (25,45,63)
Job 6: (2,3,4) (4,6,8) (5,8,11) (11,21,29) (12,24,33) (15,24,38) (19,32,43) (20,34,46) (24,39,57) (25,42,61)
Job 7: (2,3,4) (5,10,13) (6,13,18) (9,17,23) (11,20,27) (14,24,32) (17,27,36) (20,32,43) (23,36,53) (26,41,60)
Job 8: (3,4,5) (4,6,8) (7,10,13) (9,14,18) (10,17,23) (16,25,33) (17,27,36) (19,33,45) (22,37,50) (25,39,58)
Job 9: (3,4,5) (4,7,9) (5,10,14) (7,13,18) (10,18,24) (14,28,38) (16,27,42) (19,31,47) (20,33,50) (28,44,58)
Job 10: (8,14,19) (9,17,24) (14,22,29) (16,25,33) (17,27,36) (20,31,47) (22,35,52) (25,39,57) (26,41,60) (27,43,64)
where z_i^0 and z_i^1 denote the values of the objective function z_i at which the degree of the membership function is 0 and 1, respectively. Such a linear membership function is illustrated in Figure 10.16.
Figure 10.16. Linear membership function
Having elicited the linear membership functions from the DM for all of the objective functions, if we adopt the well-known fuzzy decision of Bellman and Zadeh or minimum operator [22], the problem to be solved can be transformed to find an optimal schedule so as to maximize

μ_D = min { μ_1(z_1), μ_2(z_2), μ_3(z_3) }    (10.11)
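The linear membership functions and the minimum operator of the fuzzy decision (10.11) combine straightforwardly in code. The parameterization by the pair (z_i^0, z_i^1) follows the text; the function names and the example values are illustrative assumptions.

```python
def linear_membership(z, z0, z1):
    """Linear membership: 0 at z0 and 1 at z1.

    Using z0 > z1 handles a minimized objective (membership rises as z falls).
    """
    t = (z - z0) / (z1 - z0)
    return max(0.0, min(1.0, t))

def fuzzy_decision(z_values, anchors):
    """Bellman-Zadeh minimum operator over all objectives."""
    return min(linear_membership(z, z0, z1)
               for z, (z0, z1) in zip(z_values, anchors))

# two agreement-index objectives (maximized) and a completion time (minimized)
print(fuzzy_decision([0.5, 0.6, 30.0], [(0.0, 1.0), (0.0, 1.0), (40.0, 20.0)]))
```

The minimum operator makes the worst-satisfied objective the overall satisfaction degree, which is why the GA then maximizes this single value.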
Criterion 2. If C1 does not rank the two TFNs, those that have the best maximal presumption (the mode)    (10.13)
10.3. Multiobjective job-shop scheduling under fuzziness 211
Step 2: Select one of the two parent individuals with the same probability 1/2. Among the conflict set G, choose the operation with the minimum ranking fuzzy completion time in the schedule represented by the selected parent individual and denote it by O_{j*,i*,r*}.
(1) Among c offspring individuals, select one individual with the great-
est value of the objective function.
In this crossover method, because a larger value of c yields a larger number of offspring, the probability that excellent offspring are generated also becomes higher. However, because they are generated from the same parents, the degree of similarity among the offspring individuals becomes high. As discussed previously, we set c = 3 in the numerical experiments.
Step 2: Represent the job processing sequence for each machine of a solution Xc by the corresponding matrix, and select one machine at random. Select two jobs of the machine and exchange them. For example, in a problem of 3 jobs and 3 machines, when the first job (J1) and the third job (J3) of machine 2 (M2) are selected as shown in Figure 10.19(a), the result after the exchange becomes as shown in Figure 10.19(b).
Figure 10.19. Example of job processing order and job exchange (3 x 3 F JSP)
Step 3: Based on the job processing sequence after job exchange, dis-
solve the conflict that occurred in step 4 of an active scheduling gener-
ating algorithm and generate a new solution. If the obtained solution
is different from the solution before job exchange, set the solution as
a neighborhood solution X and go to step 4. Otherwise, return to
step 2 and select a new exchange pair.
(1) Using the decrement ΔJ of the objective function value and the temperature T, calculate exp(−ΔJ/T).
(2) Generate a uniform random number on the open interval (0, 1) and compare it with the value of exp(−ΔJ/T).
(3) If the value of exp(−ΔJ/T) is greater than the random number, accept the exchange and set Xc = X. Otherwise, the exchange is not accepted.
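Substeps (1)-(3) are the standard Metropolis acceptance test; a minimal sketch (the seeded RNG default is only for reproducibility and is our own choice):

```python
import math
import random

def accept_exchange(delta_j, temperature, rng=random.Random(0)):
    """Accept a job exchange with probability exp(-ΔJ/T), substeps (1)-(3)."""
    # an improving exchange (ΔJ <= 0) gives exp(-ΔJ/T) >= 1: always accepted
    prob = math.exp(-delta_j / temperature) if delta_j > 0 else 1.0
    return prob > rng.random()
```

At high temperature even large deteriorations are frequently accepted, which lets the search escape local minima; as T decreases, the test gradually reduces to accepting only improvements.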
(1) Repeat the procedures from step 2 to step 4 until the number of exchanges performed reaches the epoch length. When the epoch number has been reached, perform the following substeps (2) to (4).
(2) Calculate the average value f_e of the objective function values during the current epoch and the average value f_a of the objective function values through the exchanges thus far.
(3) Check whether the relative error between the overall average value f_a and the average value f_e during the epoch is smaller than the prescribed tolerance value ε, i.e., check whether (|f_e − f_a| / f_a) < ε holds.
(4) When the relative error is smaller than the tolerance value, regard the equilibrium state as reached at this temperature and go to step 6 to decrease the temperature. Otherwise, clear the counter of the epoch and return to step 2 to repeat the job exchange process.
Table 10.13. Problem 1 of 10 x 10 MOFJSP

Processing machines (fuzzy processing time)
Job 1: 4(10,13,16) 6(4,7,9) 7(10,12,13) 8(5,6,7) 9(6,8,9) 2(7,8,12) 5(10,12,15) 3(5,6,7) 1(2,3,5) 10(10,14,18)
Job 2: 2(3,5,6) 1(9,10,13) 3(5,8,9) 10(9,12,16) 6(5,6,9) 7(7,11,12) 5(9,13,14) 9(8,12,16) 8(2,4,6) 4(4,7,10)
Job 3: 7(9,12,14) 10(10,13,14) 9(5,7,8) 4(3,4,6) 5(4,7,8) 2(3,5,7) 8(3,4,6) 3(1,2,4) 1(5,7,9) 6(9,11,13)

Fuzzy due date (Job 1 to Job 10): 169,184  123,134  100,110  102,105  121,136  167,174  120,130  163,176  79,94  160,163
Table 10.14. Problem 2 of 10 x 10 MOFJSP

Processing machines (fuzzy processing time)
Job 1: 7(3,4,5) 9(10,12,13) 2(4,7,10) 5(5,8,9) 4(10,12,16) 1(10,11,13) 6(4,7,10) 3(4,5,8) 8(4,6,7) 10(9,12,13)
Job 2: 10(10,11,14) 3(9,12,14) 5(7,10,14) 4(7,11,12) 8(2,4,6) 1(8,10,14) 2(7,11,12) 7(8,11,14) 6(6,9,10) 9(10,14,15)
Job 3: 5(5,7,10) 7(2,3,5) 4(1,3,4) 3(7,10,12) 1(8,11,12) 8(2,4,5) 6(4,5,7) 2(7,8,10) 9(9,13,14) 10(8,12,15)
Job 4: 5(5,8,10) 6(4,5,8) 4(7,10,14) 7(3,5,7) 8(4,5,6) 2(8,10,12) 1(2,3,4) 9(2,3,5) 3(6,8,11) 10(6,8,11)
Job 5: 2(4,7,10) 7(5,6,7) 4(9,10,14) 1(2,3,4) 3(9,12,13) 5(5,6,9) 9(5,7,8) 6(1,2,4) 8(3,4,6) 10(2,4,6)
Job 6: 7(1,3,4) 3(3,4,5) 8(3,5,6) 2(5,7,8) 4(8,9,13) 9(9,12,14) 10(4,7,8) 1(1,2,4) 6(2,4,5) 5(6,9,12)
Job 7: 10(10,11,14) 8(2,3,4) 6(9,10,12) 3(9,10,11) 7(4,5,6) 2(3,5,7) 9(1,3,4) 4(2,4,5) 1(8,10,13) 5(7,10,11)
Job 8: 6(7,11,15) 2(9,13,15) 8(5,6,9) 4(8,9,13) 7(6,9,12) 3(6,8,10) 9(6,9,11) 5(1,2,4) 10(8,12,14) 1(6,9,12)
Job 9: 7(4,7,10) 2(3,5,6) 8(6,9,10) 6(3,5,6) 9(8,11,12) 4(5,7,10) 10(4,6,9) 1(1,2,4) 3(3,5,7) 5(10,12,15)
Job 10: 4(1,2,3) 1(8,12,13) 9(7,8,9) 10(6,9,12) 5(9,11,15) 2(7,11,15) 6(10,14,18) 3(1,3,5) 8(1,2,4) 7(2,3,5)

Fuzzy due date (Job 1 to Job 10): 151,156  154,157  106,117  123,138  85,88  86,94  120,135  149,158  117,124  142,148
Table 10.15. Problem 3 of 10 x 10 MOFJSP

Processing machines (fuzzy processing time)
Job 1: 2(7,11,12) 9(5,7,9) 10(4,5,8) 8(7,11,15) 6(8,9,13) 3(7,9,12) 1(6,7,9) 5(2,3,4) 7(4,7,8) 4(2,4,5)
Job 2: 6(3,5,6) 1(1,2,3) 5(1,2,4) 8(10,11,14) 2(8,12,14) 4(2,4,5) 7(5,8,10) 10(8,9,11) 3(2,4,5) 9(4,6,8)
Job 3: 9(9,11,12) 7(8,9,13) 3(7,9,10) 6(3,5,7) 2(10,13,17) 5(2,4,6) 4(8,9,13) 8(3,4,5) 1(1,2,3) 10(4,5,7)
Job 4: 2(8,11,13) 1(2,3,5) 8(4,7,8) 4(4,6,7) 3(5,7,8) 9(6,8,10) 10(2,3,4) 7(8,12,14) 6(5,7,8) 5(4,6,9)
Job 5: 3(4,6,7) 9(4,7,9) 7(6,9,12) 10(2,4,5) 4(1,3,5) 6(9,10,12) 5(10,11,12) 1(5,7,9) 2(8,10,13) 8(4,6,9)
Job 6: 9(7,8,9) 4(4,5,8) 1(4,5,7) 3(4,6,8) 7(6,7,9) 10(3,5,7) 8(9,11,15) 2(7,10,12) 5(6,9,12) 6(5,7,9)
Job 7: 6(5,6,8) 2(5,6,9) 9(4,6,7) 8(8,9,10) 4(9,13,14) 3(4,5,8) 10(7,10,12) 5(3,4,6) 1(9,12,15) 7(5,8,10)
Job 8: 6(7,11,14) 4(3,5,6) 3(10,13,14) 10(7,9,11) 8(7,10,13) 1(5,8,10) 9(7,11,13) 5(10,12,16) 7(10,14,16) 2(6,7,10)
Job 9: 10(2,3,5) 6(1,3,4) 7(4,5,8) 3(10,14,16) 8(3,4,6) 5(1,3,4) 1(1,2,3) 9(3,5,6) 2(5,8,9) 4(8,11,14)
Job 10: 4(6,8,9) 7(5,7,9) 5(5,6,8) 9(3,5,7) 1(2,3,5) 2(1,2,3) 3(8,10,13) 8(9,13,16) 10(4,5,8) 6(2,4,5)

Fuzzy due date (Job 1 to Job 10): 124,128  81,95  92,99  91,103  109,115  102,107  118,128  170,178  75,86  94,107
Problem   T0   α   S   Epoch   ε
GA SA
Problem 1 44.13 53.50
Problem 2 43.97 52.62
Problem 3 40.79 51.29
GA SA
Problem 1 1205.46 1286.77
Problem 2 1158.79 1245.68
Problem 3 1205.51 1186.76
of the best solution obtained among 20 trials. Best, Average, and Worst represent the corresponding values of the minimum satisfaction degree.
SOME APPLICATIONS
11.1.1 Introduction
As a real-world application of genetic algorithms, we will consider
a practical scheduling problem of a machining center that produces a
variety of parts with a monthly processing plan. For such a problem,
however, as with several types of scheduling problems including job shop
scheduling, it is generally impossible to obtain an exact optimal solu-
tion because of the well-known combinatorial explosion. Realizing this
224 11. SOME APPLICATIONS
[Figure: a machining center; works are attached to blocks, with a tool box holding the tools]
Also, we can attach two works to a block and use two blocks a day, and
it takes 0.650 hours to process a work.
On the basis of Table 11.1, some numerical values required for schedul-
ing are calculated and shown in Table 11.2.
part  I   II  III  IV  V
P1    14  50  2    2   0.650
P2    23  50  4    2   1.050
P3    17  24  2    2   0.200
P4    11  10  2    2   0.900
P5    15  26  2    2   0.700
P6    17  40  1    4   0.550
P7    12   4  2    2   0.600
P8    20  40  2    4   0.500
P9     8  12  2    4   0.525
P10    5  12  1    4   0.450
I, due dates; II, total number of works processed up to the due date; III, maximal number of blocks processed in a day; IV, number of works attached to a block; V, time to process a work
As described in Table 11.2, columns I', II', III', IV', and V' repre-
sent time to process a block (hours), total processing time (hours), total
226 11. SOME APPLICATIONS
I', time to process a block; 11', total processing time; Ill', total number of blocks processed
up to the due date; IV', evaluated value of priority; V', order of priority
      P5  P2  P6  P4  P3  P1  P7  P8  P9  P10
P5    24
P2    10  24
P6    14  12  33
P4     8   8  10  34
P3    11   6  15   4  20
P1    15  10   7   4   6  27
P7     5   6   6   3  13  12  19
P8    14   7  10   7  15  13   6  25
P9     7   9   5   3   6   2   7   8  10
P10    4   6   7   3   6  10  11   4  24   5
Σ_{i=1}^{10} c_ij ≤ 99,  j = 1, ..., 23    (11.2)
These two criteria can be unified into the minimization of the following single objective function f, because the term (s_i − j)^2 represents the square of the number of days from the completion time to the due date for each part:

f = Σ_{i=1}^{10} Σ_{j∈K_i} (s_i − j)^2    (11.5)
subject to
Σ_{i=1}^{10} (b_ij x h_i) ≤ 11,  j = 1, ..., 23    (11.9)

Σ_{i=1}^{10} c_ij ≤ 99,  j = 1, ..., 23    (11.10)
Σ_{j=1}^{s_i} b_ij = r_i,  i = 1, ..., 10
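Objective (11.5) is easy to compute once each part's processing days are known. In the sketch below, K_i is assumed to be supplied as the set of days on which part P_i is processed (an assumption for illustration, since the original definition is not reproduced here).

```python
def objective_f(processing_days, due_dates):
    """f = sum_i sum_{j in K_i} (s_i - j)^2, as in (11.5).

    processing_days[i] -- the set K_i of days on which part i is processed
    due_dates[i]       -- the due date s_i of part i
    """
    return sum((due_dates[i] - j) ** 2
               for i, days in processing_days.items()
               for j in days)

# toy example: one part due on day 5, processed on days 3 and 4
print(objective_f({1: [3, 4]}, {1: 5}))  # (5-3)^2 + (5-4)^2 = 5
```

Minimizing f therefore pushes the processing of each part toward its due date, which is exactly the dispersion behavior discussed later for the parameters α_i.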
11.1.4.1 Coding
Each individual is represented as an m x n matrix A = (a_jk),
where a_jk represents the number of blocks used in processing a part P_j on a date k before the due date.
11.1.4.2 Initialization
In the generation of an initial populationin, if every element ajk of
A is assigned a random number, then A may become infeasible. To
circumvent such a phenomenon, we introduce the following method for
generating an initial population that consists of feasible solutions under
the constraints (11.9) through (11.12).
Step 2: Rearrange the rows of A so that a part with less leeway will
have higher priority.
Step 3: For the part of the highest priority among parts that have not
been scheduled yet, choose a processing day up to the due date at
random and add unity to the element ajk as long as the constraints
(11.9) through (11.11) are satisfied. If the constraint (11.12) is satis-
fied, go to the next step.
Step 4: If there are any parts which have not been scheduled yet, return
to the previous step. Otherwise, stop the procedure.
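Steps 2 through 4 can be sketched as follows. The feasibility predicate stands in for constraints (11.9)-(11.11), and the loop structure (retrying another random day after an infeasible increment) is our reading of step 3, not the book's exact procedure.

```python
import random

def initial_individual(parts, n_days, blocks_required, feasible,
                       rng=random.Random(0)):
    """Build one feasible individual A = (a_jk) for the initial population.

    parts            -- part ids already sorted by leeway (highest priority first)
    n_days           -- number of processing days before the due date
    blocks_required  -- blocks each part needs, as in constraint (11.12)
    feasible(A)      -- True iff A satisfies constraints (11.9)-(11.11)
    """
    A = {p: [0] * n_days for p in parts}
    for p in parts:                          # step 3, in priority order
        while sum(A[p]) < blocks_required[p]:
            k = rng.randrange(n_days)        # choose a processing day at random
            A[p][k] += 1                     # tentatively add one block
            if not feasible(A):
                A[p][k] -= 1                 # undo and retry another day
    return A
```

With a constraint such as "at most two blocks of any part per day", the loop terminates as long as the total daily capacity covers each part's requirement; with very tight constraints it can stall, which mirrors the conclusion's remark that a too-small solution space hinders initialization.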
11.1.4.3 Reproduction
Ranking selection is adopted for reproduction; that is, all individuals are arranged in order of the value of the objective function (11.7) or (11.8), used as the fitness of each one, the lowest-ranked percentage of the individuals is discarded, and the higher-ranked individuals are reproduced.
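Ranking selection as just described can be sketched as follows; the discard share mirrors the RSR = 0.20 parameter used in the experiments, while the function name and refill policy are illustrative assumptions.

```python
def ranking_selection(population, objective, discard_ratio=0.2):
    """Rank by objective value (lower is better), discard the worst share,
    and refill the population by copying the top-ranked individuals."""
    ranked = sorted(population, key=objective)
    n_keep = max(1, int(len(ranked) * (1.0 - discard_ratio)))
    keep = ranked[:n_keep]
    # copy from the best ranks to restore the original population size
    refill = [keep[i % len(keep)] for i in range(len(population) - n_keep)]
    return keep + refill

print(ranking_selection([5, 1, 4, 2, 3], objective=lambda x: x))  # [1, 2, 3, 4, 1]
```

Because only ranks matter, this scheme is insensitive to the scale of the objective values, which suits an objective like (11.5) whose magnitude varies widely between schedules.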
11.1.4.4 Crossover
Simple crossover of such matrix type individuals may generate a num-
ber of infeasible individuals. Thus, we modify the crossover operator so
that individuals after crossover become feasible.
[Figure: modified crossover of two matrix-type individuals generating feasible offspring]
11.1.4.5 Mutation
For ensuring that the individuals after mutation become feasible, the
following mutation operation is introduced.
Step 2: Input the data in connection with the processing plan for every
part.
Step 3: Arrange the data of parts in priority order.
Step 4: Generate an initial population including as many elements as
the total number of individuals given previously.
Step 5: Calculate the fitness of each individual and carry out ranking
selection.
Step 6: Select as many individuals as the crossover rate x the total number of individuals, pair them randomly, and carry out the introduced one-point crossover.
Step 7: Carry out mutation with the mutation rate.
Step 8: If the number of generations reaches one established in step 1,
stop the procedure to show the individual with the highest fitness.
Otherwise, return to step 5.
I    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23   II
P1   0 2 2 2 2 2 2 2 2 2 2 2 2 1                                    25
P2   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 4 3 4 3 4 4                  25
P3   0 0 0 0 0 0 0 0 0 0 2 2 0 2 2 2 2                              12
P4   0 0 0 0 0 0 0 2 1 1 1                                           5
P5   0 0 0 0 0 0 0 0 2 2 2 2 2 1 2                                  13
P6   0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1                              10
P7   0 0 0 0 0 0 0 2 0 0 0 0                                         2
P8   0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 2 1 2 1                        10
P9   0 0 0 0 0 2 1 0                                                 3
P10  0 0 1 1 1                                                       3
III  0.0 2.6 4.4 4.4 4.4 6.8 4.7 10.8 9.4 9.4 10.2 8.4 7.6 9.7 9.8 9.3 7.0 10.4 10.3 10.4 6.3 8.4 8.4
IV   0 27 37 37 37 29 29 56 61 61 72 64 53 78 63 42 58 31 31 31 24 24 24
I, processing date; II, total number of blocks processed up to the due date; III, total processing time in the day; IV, total number of tools used in the day; NG = 400, NI = 120, CR = 0.98, MR = 0.01, RSR = 0.20; min f = 1320
I    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23   II
P1   2 2 1 1 2 2 2 2 2 2 1 2 2 2                                    25
P2   0 2 0 0 0 0 0 0 0 2 0 2 0 2 0 2 1 3 1 2 2 3 3                  25
P3   0 2 0 1 1 0 2 2 0 1 1 0 0 2 0 0 0                              12
P4   1 0 0 0 0 1 1 0 0 0 2                                           5
P5   1 0 0 2 2 0 0 2 1 0 1 2 0 0 2                                  13
P6   0 1 0 0 1 1 0 0 1 1 1 0 1 1 1 1 0                              10
P7   0 1 0 0 0 0 1 0 0 0 0 0                                         2
P8   0 0 0 1 0 0 0 2 2 0 0 0 0 0 2 0 1 1 0 1                        10
P9   1 0 1 0 0 0 1 0                                                 3
P10  1 0 0 1 1                                                       3
III  9.7 11.0 3.4 8.3 9.8 6.6 8.5 10.2 10.2 9.4 8.9 9.6 4.8 9.8 9.0 6.4 4.1 8.3 2.1 6.2 4.2 6.3 6.3
IV   67 64 51 67 47 58 65 46 63 72 59 54 49 50 42 37 49 24 24 25 24 24 24
I, processing date; II, total number of blocks processed up to the due date; III, total processing time in the day; IV, total number of tools used in the day; min f = 4697
11.1.6 Conclusion
In this section, we formulated a practical scheduling problem of a machining center that adequately reflects the practical situation and proposed a genetic algorithm suitable for it, in which a matrix with constraints on both rows and columns was adopted as a gene. In the comparison of simulations through the proposed genetic algorithm and those through the proposed genetic algorithm with parameters α_i, it was found that the results of the latter reflected the DM's desires more than those of the former. This might be because the processing of parts concentrated on the due dates in the former case, whereas it was dispersed in the latter case. Although relatively satisfactory approximate solutions were obtained for the formulated scheduling problem, it must be observed here that the proposed genetic algorithm does not always work well when additional strict constraints are imposed. This is because the solution space is so small that initial populations could not be
11.1. Flexible scheduling in a machining center 235
Table 11.7. Result of simulation through the proposed genetic algorithm with parameter α_i (1)

I    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23   II  α_i
P1   0 2 2 2 2 1 2 2 2 2 2 2 2 2                                    25  1.0
P2   0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 2 2 3 3 4 4 4                    25  1.0
P3   2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0                              12  0.0
P4   0 0 0 0 0 0 0 0 1 2 2                                           5  1.0
P5   0 0 0 0 0 0 0 0 2 1 2 2 2 2 2                                  13  1.0
P6   1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0                              10  0.0
P7   0 0 0 0 0 0 0 0 0 0 1 1                                         2  1.0
P8   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 2                        10  1.0
P9   2 1 0 0 0 0 0 0                                                 3  0.0
P10  1 1 1 0 0                                                       3  0.0
III  9.0 9.5 7.4 5.6 5.6 4.3 4.8 4.8 9.4 9.8 10.2 6.6 5.4 5.4 9.1 4.0 8.2 8.2 10.3 10.3 8.4 8.4 8.4
IV   60 67 62 55 55 55 40 40 61 61 52 44 39 39 34 25 31 31 31 31 24 24 24
I, processing date; II, total number of blocks processed up to the due date; III, total processing time in the day; IV, total number of tools used in the day; NG = 400, NI = 120, CR = 0.98, MR = 0.01, RSR = 0.20; min f = 1278
Table 11.8. Result of simulation through the proposed genetic algorithm with parameter α_i (2)

I    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23   II  α_i
P1   0 2 2 2 1 2 2 2 2 2 2 2 2 2                                    25  1.0
P2   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 3 3 3 4 4 4                  25  1.0
P3   0 0 0 0 0 2 2 2 2 2 0 2 0 0 0 0 0                              12  0.5
P4   0 0 0 0 0 0 0 2 1 1 1                                           5  1.0
P5   0 0 0 0 0 0 0 0 2 2 2 2 1 2 2                                  13  1.0
P6   0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0                              10  0.5
P7   0 0 0 0 0 0 0 0 0 0 1 1                                         2  1.0
P8   0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 2 2 2                        10  1.0
P9   0 0 0 1 2 0 0 0                                                 3  0.5
P10  0 1 1 1 0                                                       3  0.5
III  0.0 4.4 4.4 8.7 7.7 5.6 5.6 9.2 10.2 10.2 10.6 9.6 6.2 5.4 6.8 4.0 8.4 10.3 10.3 10.3 8.4 8.4 8.4
IV   0 37 37 52 45 55 55 65 72 72 66 69 53 39 38 25 24 31 31 31 24 24 24
I, processing date; II, total number of blocks processed up to the due date; III, total processing time in the day; IV, total number of tools used in the day; NG = 400, NI = 120, CR = 0.98, MR = 0.01, RSR = 0.20; min f = 1005
generated well in this case. Hence, the solution space must be wide to
a certain degree for the proposed genetic algorithm to work well. The
unevenness of approximate optimal solutions was observed in some simu-
lations through the proposed genetic algorithm using the same values of
parameters. This would be based on the dependence of approximate op-
timal solutions on initial populations. Concerning this problem, further
improvement of reproduction, crossover, and mutation will be required.
11.2.1 Introduction
A DHC (district heating and cooling) system that aims at saving energy, saving space, inhibiting air pollution, or preventing city disaster has lately attracted considerable attention as an energy supply system in urban areas. In a DHC system, cold water and steam used at all facilities in a certain district are made and supplied by a DHC plant, as shown in Figure 11.4.
Figure 11.5. A DHC plant (labels in the original figure: electricity, cold water, pumps, absorbing freezers, turbo freezers, CP, CDP, to district)
(III) The freezer output load rate P = C_load^t / C_t must be greater than or equal to 0.2, that is,

    P ≥ 0.2.    (11.14)

(IV) The boiler output load rate Q = (S_t^AR + S_t^dist) / S_t, which means the ratio of the (predicted) amount of the demand for steam to the total output of running boilers S_t = Σ_{j=1}^{3} f_j y_j, must be less than or equal to 1.0, that is,

    Q ≤ 1.0,    (11.15)

where f_j denotes the rating output of the jth boiler and S_t^AR denotes the total amount of steam used by absorbing freezers at time t, defined as

    S_t^AR = Σ_{i=1}^{4} δ(P) · S_i^max · x_i.    (11.16)

This constraint means that the sum of the output of the running boilers must exceed the (predicted) amount of the demand for steam.

(V) The boiler output load rate Q = (S_t^AR + S_t^dist) / S_t must be greater than or equal to 0.2, that is,

    Q ≥ 0.2.    (11.18)

This constraint means that the sum of the output of the running boilers must not exceed five times the (predicted) amount of the demand for steam.
240 11. SOME APPLICATIONS
(VI) The objective function J(t) to be minimized is the energy cost, which is the sum of the gas bill C_t and the electricity bill E_t:

    J(t) = G_cost · C_t + E_cost · E_t,    (11.19)

where G_cost and E_cost denote the unit cost of gas and that of electricity, respectively.

The gas bill C_t is determined by the gas amounts g_j, j = 1, 2, 3, consumed in the rating running of the boilers and the boiler output load rate Q:

    C_t = (Σ_{j=1}^{3} g_j y_j) · Q,    (11.20)

where E_i^max denotes the maximal electricity amount used by the ith turbo freezer and c_i, d_i, and e_i are the electricity amounts of the cooling tower and two kinds of pumps.
In the previous equation, ξ(P) denotes the rate of use of electricity in a turbo freezer, which is a nonlinear function of the freezer output load rate P. For the sake of simplicity, in this section we use the following piecewise linear approximation, so that the electricity bill takes the form

    E_t = Σ_{i=5}^{10} ξ(P) · E_i^max · x_i + Σ_{i=1}^{10} c_i x_i + Σ_{i=1}^{10} d_i x_i + Σ_{i=1}^{10} e_i x_i.    (11.23)
11.2. Operation planning of district heating and cooling plants 241
    P = C_load^t / C_t    (11.34)

    δ_1(P) = Σ_{i=1}^{4} (0.8775 P + 0.0285) · S_i^max · x_i    (11.35)

    δ_2(P) = Σ_{i=1}^{4} (1.1125 P − 0.1125) · S_i^max · x_i    (11.36)

    ξ_1(P) = Σ_{i=5}^{10} (0.6 P + 0.2) · E_i^max · x_i    (11.37)

    ξ_2(P) = Σ_{i=5}^{10} (1.1 P − 0.1) · E_i^max · x_i    (11.38)

    minimize   J'(λ(t, K)) = Σ_{τ=t}^{t+K−1} [ J(λ^τ) + Σ_j |λ_j^τ − λ_j^{τ−1}| ]
    subject to λ(t, K) ∈ Λ(t, K)    (11.40)
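The multiperiod objective above can be sketched as follows; this is an illustrative implementation, not the book's code, and both the cost callback `single_period_cost` and the unit switching penalty are assumptions for the example.

```python
def multiperiod_cost(plans, prev_plan, single_period_cost, w=1.0):
    """Sum of single-period costs plus a penalty for switching
    equipment on or off between consecutive periods.

    plans: list of K 0-1 lists (one operation vector per period)
    prev_plan: the operation vector at period t-1
    w: assumed weight on each on/off switch
    """
    total = 0.0
    previous = prev_plan
    for lam in plans:
        # Number of 0-1 variables that changed since the last period.
        switching = sum(abs(a - b) for a, b in zip(lam, previous))
        total += single_period_cost(lam) + w * switching
        previous = lam
    return total
```

With a toy cost equal to the number of running machines and w = 0.5, two periods [1, 0] and [1, 1] starting from [0, 0] cost 1.5 and 2.5, respectively.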
It should be noted that not all phenotypes are feasible when the problem is constrained. From the viewpoint of search efficiency and feasibility, decoding such that all phenotypes are feasible seems desirable. As discussed previously, Sakawa et al. [148, 155] proposed GADS using double string representation and a decoding algorithm that decodes an individual to a feasible solution for multidimensional 0-1 knapsack problems and more general 0-1 programming problems involving positive and negative coefficients. Here, we propose a new decoding algorithm that decodes an individual to a feasible solution for the nonlinear 0-1 programming problems P'(t, K), based on the decoding algorithm using a reference solution in Sakawa et al. [155].

In Sakawa et al. [155], a feasible solution used as a template in decoding, called a reference solution, must be obtained in advance. In order to find a feasible solution to the extended problem P'(t, K), we solve a maximizing problem with an objective function (11.41) that is the negative sum of the constraint violations measured through

    R(ξ) = ξ, ξ ≥ 0;  R(ξ) = 0, ξ < 0.    (11.42)
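The role of R can be sketched as follows; this is a minimal illustration, and the input format (a list of constraint residuals with g ≤ 0 meaning satisfied) is an assumption. The total violation is zero exactly for feasible solutions, so maximizing its negative drives the search toward feasibility.

```python
def ramp(xi):
    # The function R of (11.42): pass through positive residuals,
    # map satisfied (non-positive) residuals to zero.
    return xi if xi >= 0 else 0.0

def total_violation(residuals):
    # Sum of R over all constraint residuals; zero iff feasible.
    return sum(ramp(g) for g in residuals)
```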
λ_j := 1 (5 ≤ j ≤ n_2 + 4), λ_j := 0 (n_2 + 5 ≤ j ≤ 10), λ_j := 1 (11 ≤ j ≤ n_3 + 10), λ_j := 0 (n_3 + 11 ≤ j ≤ 14), where n_1 = Σ_{j=1}^{4} λ_j, n_2 = Σ_{j=5}^{10} λ_j, n_3 = Σ_{j=11}^{14} λ_j. Then, go to step 12.

Step 12: If τ > t + K − 1, let λ := ((λ^t)^T, ..., (λ^{t+K−1})^T)^T and stop. Otherwise, return to step 2.

    d_r := Σ_{τ=t}^{t+K−1} Σ_{j=1}^{14} |λ_j(r) − λ̄_j|,

and let d_sum := d_sum + d_r. If d_r > d_max and J'(λ(r)) < J'(λ̄), let d_max := d_r, r_max := r, r := r + 1, and go to step 3. Otherwise, let r := r + 1 and go to step 3.
11.2.4.2 Evaluation

It is quite natural to define the fitness of an individual S by

    f(S) = (J̄ − J'(λ)) / (J̄ − J_),    (11.45)

where J_ and J̄ are the minimal and the maximal objective function values of the operation plan obtained by connecting K solutions to P(τ), τ = t, ..., t + K − 1, solved by complete enumeration; J'(λ) denotes the cost of the operation plan for the phenotype λ decoded from S; dis is the average Hamming distance between the individual S and its phenotype λ:

    dis = (1 / (14K)) Σ_{τ=t}^{t+K−1} Σ_{j=1}^{14} |λ_j − s_j|,    (11.46)

where I and I_max denote the generation counter and the maximal search generation.
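The normalized cost term and the average Hamming distance dis can be sketched as follows; the function names are illustrative, and flat 0-1 lists of 14K genes are an assumed encoding.

```python
def normalized_fitness(cost, j_min, j_max):
    # Maps the plan cost into [0, 1]: 1 at the minimal cost j_min,
    # 0 at the maximal cost j_max.
    return (j_max - cost) / (j_max - j_min)

def average_hamming(phenotype, individual, K, n_vars=14):
    # Mean Hamming distance between the decoded phenotype and the
    # raw individual over K periods of n_vars bits each.
    diff = sum(abs(a - b) for a, b in zip(phenotype, individual))
    return diff / (n_vars * K)
```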
11.2.4.3 Scaling

When the variance of fitness in a population is small, ordinary roulette wheel selection often does not work well because there is little difference between the probability of a good individual surviving and that of a bad one surviving. In order to overcome this problem, the following linear scaling technique, discussed in Chapter 2, has been adopted.

Linear scaling

Step 0: Calculate the mean fitness f_mean, the maximal fitness f_max, and the minimal fitness f_min in the current population.

Step 1: If

    f_min > (c_mult · f_mean − f_max) / (c_mult − 1.0),

let

    α := (c_mult − 1.0) · f_mean / (f_max − f_mean),  β := f_mean · (f_max − c_mult · f_mean) / (f_max − f_mean),

and go to step 2.

Step 2: Let r := 1.

Step 3: For the fitness f_r = f(S(r)) of the rth individual S(r), carry out the linear scaling f_r' := α · f_r + β and let r := r + 1.

Step 4: If r > N, stop. Otherwise, return to step 3.

In the procedure, c_mult denotes the expectation of the number of the best individuals that will survive in the next generation, usually set as 1.2 ≤ c_mult ≤ 2.0.
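The scaling procedure can be sketched as follows. This is an illustrative implementation: when the condition of step 1 fails, the sketch simply leaves the fitness values unchanged, whereas the full technique of Chapter 2 would rescale toward f_min.

```python
def linear_scale(fits, c_mult=1.8):
    # Linear scaling f' = alpha*f + beta: the mean fitness is kept
    # fixed and the best individual is stretched to c_mult times the
    # mean, applied only when f_min is large enough that all scaled
    # values stay non-negative (the condition of step 1).
    f_mean = sum(fits) / len(fits)
    f_max, f_min = max(fits), min(fits)
    if f_max == f_mean:          # all fitness values equal: nothing to scale
        return list(fits)
    if f_min > (c_mult * f_mean - f_max) / (c_mult - 1.0):
        alpha = (c_mult - 1.0) * f_mean / (f_max - f_mean)
        beta = f_mean * (f_max - c_mult * f_mean) / (f_max - f_mean)
        return [alpha * f + beta for f in fits]
    return list(fits)            # condition fails: left unchanged here
```

For example, [2, 3, 4] with c_mult = 1.5 scales to [1.5, 3.0, 4.5]: the mean 3.0 is preserved and the maximum becomes 1.5 times the mean.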
11.2.4.4 Reproduction

As discussed in Chapter 3, Sakawa et al. [148] suggested that elitist expected value selection is more effective than the other five reproduction operators (ranking selection, elitist ranking selection, expected value selection, roulette wheel selection, and elitist roulette wheel selection). Accordingly, we adopt elitist expected value selection, which is a combination of elitism and expected value selection, as described below.

Elitism: One or more individuals with the largest fitness up to the current population are unconditionally preserved in the next generation.

Expected value selection: Let N denote the number of individuals in the population. The expected value of the number of the rth individual S(r) in the next population is calculated as

    N_r = ( f(S(r)) / Σ_{i=1}^{N} f(S(i)) ) × N.    (11.48)

Each individual S(r) first receives ⌊N_r⌋ copies, and the remaining

    Σ_{i=1}^{N} (N_i − ⌊N_i⌋)    (11.49)

individuals are selected according to the fractional parts N_i − ⌊N_i⌋.
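A compact sketch of elitist expected value selection follows; it is illustrative rather than the book's code, and filling the remaining slots by weighted sampling on the fractional parts is one common way to realize (11.49).

```python
import math
import random

def elitist_expected_value_selection(population, fitness, n_elite=1):
    # Elitism: the best n_elite individuals survive unconditionally.
    # Expected value selection: individual r receives floor(N_r) copies,
    # N_r = f(S(r)) / sum f * N as in (11.48); the remaining slots are
    # filled by sampling in proportion to the fractional parts.
    N = len(population)
    ranked = sorted(population, key=fitness, reverse=True)
    next_pop = ranked[:n_elite]
    total = sum(fitness(s) for s in population)
    expected = [fitness(s) / total * N for s in population]
    for s, e in zip(population, expected):
        next_pop.extend([s] * int(math.floor(e)))   # guaranteed copies
    fracs = [e - math.floor(e) for e in expected]
    while len(next_pop) < N:
        next_pop.append(random.choices(population, weights=fracs)[0])
    return next_pop[:N]
```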
11.2.4.5 Crossover

In order to preserve the feasibility of offspring generated by crossover, we use one-point crossover in which the crossover point is chosen from among the K − 1 end points of the K subindividuals s^τ, τ = t, t + 1, ..., t + K − 1, as shown in Figure 11.8.

One-point crossover

Step 0: Let r := 1.

Step 1: Generate a random number R ∈ [0, 1].

Step 2: If Pc ≥ R holds for the given crossover rate Pc, let the rth individual and another one chosen randomly from the population be parent individuals. Otherwise, let r := r + 1 and go to step 5.

Step 3: Choose the crossover point randomly from among the K − 1 end points of the subindividuals and generate two offspring by exchanging the parts of the parents after the crossover point.

Step 4: Get rid of these parent individuals from the population and include the offspring in the population.
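The restricted one-point crossover can be sketched as follows; the helper is hypothetical, and individuals are assumed to be flat 0-1 lists made of K blocks of n_vars genes each.

```python
import random

def subindividual_crossover(parent1, parent2, K, n_vars=14):
    # One-point crossover whose cut point is restricted to the K-1
    # boundaries between the K subindividuals, so each period's 0-1
    # block is inherited as a whole.
    cut = random.randint(1, K - 1) * n_vars
    child1 = parent1[:cut] + parent2[cut:]
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2
```

With K = 2 the only admissible cut is the single boundary between the two periods, so the offspring simply swap their second blocks.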
11.2.4.6 Mutation

As the mutation operators, we use mutation of bit-reverse type and inversion.

Step 1: For each gene that takes a 0-1 value in an individual, generate a random number R ∈ [0, 1].

Step 2: If Pm ≥ R holds for the given mutation rate Pm, reverse the value of the gene, i.e., 0 → 1 or 1 → 0.

Figure 11.8. One-point crossover (the crossover point lies at a boundary between the subindividuals s^t, ..., s^{t+K−1})

Step 4: Carry out the procedure from step 1 to step 3 for all individuals in the population.
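Steps 1 and 2 of the bit-reverse mutation amount to the following sketch; here the strict comparison R < Pm is used so that Pm = 0 never flips a gene, a small illustrative simplification of the "Pm ≥ R" test above.

```python
import random

def bit_reverse_mutation(individual, p_m):
    # Flip each 0-1 gene independently with probability p_m.
    return [1 - g if random.random() < p_m else g for g in individual]
```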
11.2.6 Conclusion

In this section, for the operation planning of DHC plants, single-period operation planning problems P(t) and multiperiod operation planning problems P'(t, K) that take account of the continuity of operation of the instruments were formulated as nonlinear 0-1 programming problems. Realizing that the formulated multiperiod operation planning problems P'(t, K) involve hundreds of 0-1 variables, genetic algorithms for nonlinear 0-1 programming were proposed. The feasibility and effectiveness of the proposed methods were demonstrated.
11.3. Coal purchase planning in electric power plants 253
[Figure: daily load profiles (load versus time of day, 6–24 h) under the conventional method and the proposed method]
11.3.1 Introduction

In real-world programming problems, we may be faced with the difficulty of modeling them as traditional mathematical programming problems. The difficulty comes from the following two facts: (1) the problem is given by verbal descriptions and sometimes includes unclearly described objective(s) and/or constraints; (2) the problem is too complex to be modeled as a mathematical programming problem.

The problem we treat in this section involves both of these difficulties. It is a coal purchase planning problem in a real electric power plant. Through a series of interviews with the domain experts, the coal purchase problem was clarified. The purchase order of the coal is subject to the following conditions.
C3 Four kinds of coal cannot be used as fuel without being mixed with some others. For such a kind, we assume a one-to-one mixture.

C4 The load displacement of a ship and the lead time depend on the country. However, the load displacement is either 30,000, 60,000, or 80,000 tons.

C5 Only one ship can come alongside the pier in a day. However, when the weather is stormy, no ship can come alongside the pier.

C6 The coal from the ship docked at the pier is directly stored in 16 silos. In each silo, only one kind of coal with the same receipt date can be stored.

C8 The coal stored for more than 60 days should be moved to another empty silo in order to avoid spontaneous combustion.
[Figure: coal supply flow — ships of 30,000 to 80,000 tons arrive from 8 countries, at most one ship per day; the coal is carried into 16 silos of 33,000 tons capacity and fed to the plant; coal stored too long is moved to an empty silo; safety stock: 160,000 tons in summer, 290,000 tons in winter]
the cost, annual generation plan, and so on, but also the factors indirectly related to the coal purchase, such as coal inventory control, treatment of coal, and so on. Thus, we must consider the transition of coal stocks and complex conditions such as mixing coals, prevention of spontaneous combustion, and so on. Because of these, the problem becomes very complex and large in scale. For example, when the daily stock of the 16 silos is regarded as a part of the decision variables, the number of decision variables exceeds 5,000; the daily stocks alone account for 16 × 365 = 5,840 variables. Moreover, we should introduce a number of 0-1 variables to treat the complex conditions. We cannot apply mathematical programming techniques unless the problem is formulated as a mathematical programming problem. As described earlier, formulating the coal purchase problem will unfortunately require a formidable effort, and even if the problem is formulated as a mixed integer programming problem, solving it will consume a great deal of computation time. Thus, the traditional mathematical programming approach is not suitable for the problem.
In this section, using the desirable property of the given problem, we
demonstrate that, without formulating it as a mixed integer program-
ming problem, a good solution can be explored by a genetic algorithm
together with a fuzzy satisficing method in a practical amount of time.
Figure 11.12. Differences between the target stock m(t) and the real stock q(t)
The stock q(t) satisfies

    dq(t)/dt = u(t) − p(t).    (11.50)
Because the given purchase sequence specifies the kind and amount of the forthcoming coal successively, our current problem is to determine the receipt dates of the coals, in other words, the t_i's in (11.51), so as to minimize the shaded area Z_1 over the plan duration (1 year). Because the target stock m(t) changes seasonally, we have three possible cases: (a) m(t) does not change from one reception to the next, (b) m(t) decreases from one reception to the next, and (c) m(t) increases from one reception to the next.
First, in case (a), let us consider the optimal receipt date of the forthcoming coal. Let n be its amount and t* its receipt time, such that m(t*) − q(t*) = 0.5n. If we receive the coal at time t* − δ (δ > 0), that is, a little earlier, the area defined by m(t) and q(t) is larger than when we receive it at time t*. The difference is illustrated as A − B in Figure 11.13. On the other hand, if we receive the coal at time t* + ε (ε > 0), that is, a little later, the area defined by m(t) and q(t) is also larger than when we receive it at time t*. The difference is illustrated as D − C in Figure 11.13. Hence, the receipt at time t* is optimal.

Let us consider case (b). In the real-world situation, m(t) does not change very often, so that at most one change can occur between two receipt dates. Looking at Figure 11.14, where t1 is the time when m(t) decreases, it can be shown that the receipt at time t2 (< t1) or t3 (≥ t1) such that m(t_i) − q(t_i) = 0.5n, i = 2, 3, is optimal. Indeed, if we receive the coal before t1, t2 is the optimal receipt time by a discussion similar to that in case (a). Similarly, if we receive the coal after t1, t3 is the optimal receipt time.

In case (c), let t4 be the time when m(t) increases. It can be proved in the same way as in case (a) that the optimal receipt time is t4 if m(t4) − q(t4) ≥ 0.5n; otherwise, it is t5 (> t4) such that m(t5) − q(t5) = 0.5n (Fig. 11.15).
To sum up, we have the following two rules:

Rule 1. If m(t) does not decrease, the coal should be received whenever m(t) − q(t) ≥ 0.5n.

Rule 2. If m(t) decreases at time t1, the coal should be received at either t2 or t3, where t2 is the time such that m(t2) − q(t2) = 0.5n and t2 < t1, and t3 is the time such that m(t3) − q(t3) = 0.5n and t3 ≥ t1.

To apply one of these rules, we must know whether m(t) decreases. This can be done by the following four steps: (I) Set t̄ as a tentative receipt time if m(t̄) − q(t̄) ≥ 0.5n. (II) Assuming that the coal is received at t̄, calculate the next time t such that m(t) − q(t) ≥ 0.5n. (III) Check whether m(t) decreased between t̄ and t. (IV) If m(t) decreased, cancel the receipt at t̄ and apply Rule 2. Otherwise, fix the receipt at t̄.
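Rule 1 can be sketched as a simple day-by-day simulation; this is a toy model with a constant target stock m and constant daily consumption p, both illustrative assumptions, since in the real problem m(t) varies seasonally.

```python
def receipt_dates_rule1(m, q0, p, n, horizon):
    """Return the days on which coal of amount n is received.

    m: constant target stock, q0: initial stock,
    p: daily consumption, horizon: number of days simulated.
    """
    q, dates = q0, []
    for t in range(horizon):
        if m - q >= 0.5 * n:    # Rule 1: receive when m(t) - q(t) >= 0.5n
            q += n
            dates.append(t)
        q -= p                  # daily consumption
    return dates
```

Starting full (q0 = m = 100) with p = 10 and n = 40, the gap first reaches 0.5n = 20 on day 2, so the first receipt falls there.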
Now, let us introduce the conditions C5, C7, C8, and C9. First, we introduce the safety stock condition C9. Let s(t) be the safety stock level and t̂ the optimal receipt time of the coal. If we have q(t̂) < s(t̂), a safety stock violation, it can be remedied by changing the receipt time t̂ to a time t6 such that q(t6) = s(t6) and t6 < t̂. Similarly, we can introduce the stock limitation condition C7. Namely, if we have q(t̂) > q̄, this violation can be remedied by changing the receipt time t̂ to a time t7 such that q(t7) + n = q̄ and t7 > t̂, where q̄ is the stock limitation and n is the amount of the received coal.
Selection method. Introducing the elitist model [66, 75, 112, 165], the best two individuals survive unconditionally. The other individuals of the next population are chosen based on a ranking selection model [66, 75, 112, 165]. We assign the first-ranked individual a probability mass Ps(1) twice as large as the probability mass Ps(N) of the last (Nth)-ranked individual, where N is the population size. We form an arithmetic progression from Ps(1) = 2Ps(N) down to Ps(N), so that the jth-ranked individual has the probability mass Ps(j) = 2(2N − j − 1) / (3N(N − 1)).
Mutation operation. Every element of the repeated permutation is replaced with a random number in {1, 2, ..., 48} with a mutation rate Pm.
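The ranking probability masses above can be computed and checked as follows (the helper name is illustrative): the values form an arithmetic progression, sum to 1, and give the first-ranked individual twice the mass of the last.

```python
def ranking_probabilities(N):
    # Ps(j) = 2(2N - j - 1) / (3N(N - 1)) for j = 1, ..., N:
    # a decreasing arithmetic progression with Ps(1) = 2 * Ps(N).
    return [2 * (2 * N - j - 1) / (3 * N * (N - 1)) for j in range(1, N + 1)]
```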
μ_T is used as the fitness function. The μ_Gi's are assumed to be linear membership functions defined by two parameters z_i^0 (reservation level) and z_i^1 (aspiration level), as shown in Figure 11.17.

The processes of our approach to the coal purchase problem are illustrated in Figure 11.18.
[Figure 11.18. For each of the N individuals, the receipt times are determined by the two rules and the fitness is evaluated; selection, crossover, and mutation are then applied to form the next population]
Table 11.13 and the target and safety stocks shown in Table 11.14. All kinds of coal can be used as fuel without being mixed with some others. One hundred percent output is assigned to every day in the generating plan. The initial state of the fuel stock is shown in Table 11.15. The membership functions μ_Gi's are established using the parameters z_1^0 = 2000, z_1^1 = 0, z_2^0 = 150, z_2^1 = 0, z_3^0 = 3, z_3^1 = 1, z_4^0 = 2700, and z_4^1 = 1000. By complete enumeration, we obtain 0.684059 as the optimal fitness function value. There are 360,386 feasible solutions.
Table 11.15. The initial state of the fuel stock in the small-sized problem

    Silo number           1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
    Amount (×1000 tons)   0  5  5 33 33 33 33 33 33 33  0  0  0  0  0  0
    Coal number           -  1  1  1  2  2  3  3  4  5  -  -  -  -  -  -
    Oldness (days)        - 30 30 30 11 12 13 14 15 16  -  -  -  -  -  -
    P_t = min( exp( (μ_T' − μ_T) / T ), 1 ),    (11.54)

Cooling schedules. Given the initial temperature T_0, every 100 explorations the temperature parameter is updated according to the cooling schedule (11.55).
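The acceptance test (11.54) can be sketched as follows; μ_T' is taken here as the fitness of the candidate solution, an assumption from context. An improvement is always accepted, and a deterioration is accepted with probability exp((μ_T' − μ_T)/T), which shrinks as the temperature T decreases.

```python
import math

def acceptance_probability(mu_new, mu_current, T):
    # (11.54): probability of moving to the candidate solution when
    # maximizing the fitness mu under temperature T.
    return min(math.exp((mu_new - mu_current) / T), 1.0)
```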
[Figure 11.19. Frequency distributions of the solutions obtained by the SA and the GA for the small-sized problem]
and complete enumeration. The fact that the GA and SA solutions are
very good can also be confirmed in Figure 11.19. From Table 11.17 and
Figure 11.19, the GA seems to be better than the employed SA. We did
similar simulations for different small-sized problems and had similar
results. Hence, the proposed GA approach can be regarded as a suitable
technique for our coal purchase problem.
[Figure: frequency distributions of the solutions obtained by the GA, SA, and RS(20,000) for the real-world problem]
For the GA and SA, we saved the best solution of 100 generations and 10,000 explorations, respectively. They are shown in the GA(100) and SA(10,000) rows in Table 11.20. The GA(200), SA(20,000), and RS(20,000) rows in Table 11.20 show the results of the GA of 200 generations, the SA of 20,000 explorations, and the RS of 20,000 explorations, respectively.
    z_i^min = min_{j=1,2,3,4} z_i(r_j*), i = 1, 2, 3, 4.    (11.57)
Now let us see how we can reflect the DM's preference. To this end,
we use the real-world problem described in the previous section as an
example.
Table 11.21. The μ_Gi and z_i values under the equally important goals

    i        1             2         3        4
    μ_Gi     0.593652      0.609053  1        0.593652
    z_i      7,542.12      16.8575   1        99,779
             (×1000 tons)  (%)       (times)  (×1000 US$)

Table 11.22. The μ_Gi and z_i values under the minimum cost preference

    i        1             2         3        4
    μ_Gi     0.453331      0.498192  1        0.694574
    z_i      8,142.39      21.6377   1        98,950
             (×1000 tons)  (%)       (times)  (×1000 US$)
(11.60)

Using this fitness function, we obtain the solution of Table 11.22. From Tables 11.21 and 11.22, we can observe that the total cost is reduced from US$99,779,000 to US$98,950,000. However, the other two objective function values, except the third one (the number of movements), become worse. Because we are treating a multiobjective problem, we face a trade-off. If the DM's request is to make the total cost much smaller, even at the price of worsening the other objective function values, the membership function of the fourth objective can be replaced with a stricter one in (11.60). In such a way, we can reflect the DM's preference in the fitness function.
11.3.8 Conclusion

In this section, we showed how we tackled a complex real-world coal purchase planning problem and how we used the GA and fuzzy programming techniques. In the proposed approach, the coal purchase planning problem is treated as a two-level problem, taking advantage of a desirable property of the problem. In the upper-level problem, a good purchase sequence of the coal is explored by applying a genetic algorithm. In the lower-level problem, the reception dates of the sequentially arriving coal are determined by applying a few rules to minimize the total deviations from the target stocks. By numerical simulations of a small-sized problem, we examined how close the GA solutions are to the optimum and confirmed the soundness of the GA for this problem. Moreover, the proposed GA approach has been applied to a real-world problem and compared with the RS and SA approaches. Consequently, it is shown that a good solution can be obtained by the GA and SA approaches and that the GA approach produces a good solution more stably than the SA approach in our problem.
[1] E. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines, John Wiley & Sons, New York, 1989.
[2] N. Abboud, M. Inuiguchi, M. Sakawa, and Y. Uemura, Manpower allocation using genetic annealing, European Journal of Operational Research, Vol. 111, pp. 405-420, 1998.
[3] N. Abboud, M. Sakawa, and M. Inuiguchi, The mutate and spread metaheuristic, Journal of Advanced Computational Intelligence, Vol. 2, No. 2, pp. 43-46, 1998.
[4] N. Abboud, M. Sakawa, and M. Inuiguchi, A fuzzy programming approach to multiobjective multidimensional 0-1 knapsack problems, Fuzzy Sets and Systems, Vol. 86, No. 1, pp. 1-14, 1997.
[5] N. Abboud, M. Sakawa, and M. Inuiguchi, School scheduling using threshold accepting, Cybernetics and Systems: An International Journal, Vol. 29, No. 6, pp. 593-611, 1998.
[6] D. Applegate and W. Cook, A computational study of the job-shop scheduling problem, ORSA Journal on Computing, Vol. 3, pp. 149-156, 1991.
[7] M. Aramaki, K. Enjohji, M. Yoshimura, M. Sakawa, and K. Kato, HTS (High Throughput Screening) system scheduling through genetic algorithms, Proceedings of Fifth International Conference on Knowledge-Based Intelligent Information Engineering Systems & Allied Technologies (KES2001), Osaka (in press), 2001.
[8] S. Ashour and S.R. Hiremath, A branch-and-bound approach to the job-shop scheduling problem, International Journal of Production Research, Vol. 11, pp. 47-58, 1973.
[9] P.G. Backhouse, A.F. Fotheringham, and G. Allan, A comparison of a genetic algorithm with an experimental design technique in the optimisation of a production process, Journal of the Operational Research Society, Vol. 48, pp. 247-254, 1997.
[10] T. Bäck, Selective pressure in evolutionary algorithms: a characterization of selection mechanisms, in Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE Press, Orlando, FL, pp. 57-62, 1994.
[11] T. Bäck, Evolutionary Algorithms in Theory and Practice, Oxford University Press, New York, 1996.
274 References
[72] J.J. Grefenstette, Genetic Algorithms for Machine Learning, Kluwer Academic
Publishers, Norwell, MA, 1994.
[73] J.J. Grefenstette, R. Gopal, B. Rosmaita, and D. Van Gucht, Genetic algo-
rithms for the traveling salesman problem, in Proceedings of the First Inter-
national Conference on Genetic Algorithms and Their Applications, Lawrence
Erlbaum Associates, Hillsdale, NJ, 160-168, 1985.
[74] F. Herrera and J.L. Verdegay (eds.), Genetic Algorithms and Soft Computing,
Physica-Verlag, Heidelberg, 1996.
[75] J.H. Holland, Adaptation in Natural and Artificial Systems, University of
Michigan Press, Ann Arbor, MI, 1975; MIT Press, Cambridge, MA, 1992.
[76] J. Horn, N. Nafpliotis, and D.E. Goldberg, A niched Pareto genetic algorithm
for multiobjective optimization, in Proceedings of the First IEEE Conference
on Evolutionary Computation, IEEE Press, Orlando, FL, pp. 82-87, 1994.
[77] D. den Hertog, Interior Point Approach to Linear, Quadratic and Convex Programming, Kluwer Academic Publishers, Norwell, MA, 1994.
[78] E. Ignall and L. Schrage, Application of the branch and bound technique to some flow-shop scheduling problems, Operations Research, Vol. 13, pp. 400-412, 1965.
[79] M. Inuiguchi, H. Ichihashi, and H. Tanaka, Fuzzy programming: a survey of recent developments, in R. Slowinski and J. Teghem (eds.), Stochastic versus Fuzzy Approaches to Multiobjective Programming under Uncertainty, Kluwer Academic Publishers, Norwell, MA, 1990, pp. 45-68.
[80] H. Ishii, M. Sakawa, and S. Iwamoto (eds.), Fuzzy OR, Asakura Publishing, Tokyo, 2001 (in Japanese).
[81] H. Ishii and M. Tada, Single machine scheduling problem with fuzzy precedence relation, European Journal of Operational Research, Vol. 87, pp. 284-288, 1995.
[82] K. Ito and R. Yokoyama, Optimal Planning of Co-Generation Systems, Sangyo Tosho, Tokyo, 1990 (in Japanese).
[83] C. Janikow and Z. Michalewicz, An experimental comparison of binary and floating point representations in genetic algorithms, in Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo, CA, 1991, pp. 31-36.
[84] B. Jansen, Interior Point Techniques in Optimization, Kluwer Academic Publishers, Norwell, MA, 1997.
[85] Q. Ji and Y. Zhang, Camera calibration with genetic algorithms, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 31, pp. 120-130, 2001.
[86] S.M. Johnson, Optimal two- and three-stage production schedules with setup times included, Naval Research Logistics Quarterly, Vol. 1, pp. 61-68, 1954.
[87] J.A. Joines, C.T. Culbreth, and R.E. King, Manufacturing cell design: an integer programming model employing genetics, IIE Transactions, Vol. 28, pp. 69-85, 1996.
[88] J.A. Joines and C.R. Houck, On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GA's, in Proceedings of the First IEEE International Conference on Evolutionary Computation, IEEE Press, Orlando, FL, pp. 579-584, 1994.
[89] K. Kato and M. Sakawa, Genetic algorithms with decomposition procedures for fuzzy multiobjective 0-1 programming problems with block angular structure, Proceedings of 1996 IEEE International Conference on Evolutionary Computation, IEEE Press, Piscataway, NJ, pp. 706-709, 1996.
[90] K. Kato and M. Sakawa, An interactive fuzzy satisficing method for multi-
objective structured 0-1 programs through genetic algorithms, Proceedings of
mini-Symposium on Genetic Algorithms and Engineering Design, pp. 48-57,
1996.
[91] K. Kato and M. Sakawa, Interactive decision making for multiobjective block angular 0-1 programming problems with fuzzy parameters through genetic algorithms, Proceedings of the Sixth IEEE International Conference on Fuzzy Systems, Vol. 3, pp. 1645-1650, 1997.
[92] K. Kato and M. Sakawa, An interactive fuzzy satisficing method for multiobjective block angular 0-1 programming problems involving fuzzy parameters through genetic algorithms with decomposition procedures, Proceedings of the Seventh International Fuzzy Systems Association World Congress, Vol. 3, pp. 9-14, 1997.
[93] K. Kato and M. Sakawa, An interactive fuzzy satisficing method for large-scale multiobjective 0-1 programming problems with fuzzy parameters through genetic algorithms, European Journal of Operational Research, Vol. 107, No. 3, pp. 590-598, 1998.
[94] K. Kato and M. Sakawa, Large scale fuzzy multiobjective 0-1 programs through genetic algorithms with decomposition procedures, Proceedings of Second International Conference on Knowledge-Based Intelligent Electronic Systems, Vol. 1, pp. 278-284, 1998.
[95] K. Kato and M. Sakawa, Improvement of genetic algorithm by decomposition procedures for fuzzy block angular multiobjective knapsack problems, Proceedings of the Eighth International Fuzzy Systems Association World Congress, Vol. 1, pp. 349-353, 1999.
[96] A. Kaufmann and M. Gupta, Fuzzy Mathematical Models in Engineering and Management Science, North-Holland, Amsterdam, 1988.
[97] A. Kaufmann and M. Gupta, Introduction to Fuzzy Arithmetic, Van Nostrand Reinhold, New York, 1991.
[98] S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi, Optimization by simulated annealing, Science, Vol. 220, pp. 671-680, 1983.
[99] S. Kobayashi, I. Ono, and M. Yamamura, An efficient genetic algorithm for job shop scheduling problems, in Proceedings of the Sixth International Conference on Genetic Algorithms, pp. 506-511, 1995.
[100] S. Koziel and Z. Michalewicz, A decoder-based evolutionary algorithm for constrained parameter optimization problems, in Proceedings of the Fifth Parallel Problem Solving from Nature, Springer-Verlag, Berlin, pp. 231-240, 1998.
[101] S. Koziel and Z. Michalewicz, Evolutionary algorithms, homomorphous mapping, and constrained parameter optimization, Evolutionary Computation, Vol. 7, No. 1, pp. 19-44, 1999.
[102] N. Kubota, T. Fukuda, and K. Shimojima, Virus-evolutionary genetic algorithms for a self-organizing manufacturing system, Computers & Industrial Engineering, Vol. 30, No. 4, pp. 1015-1026, 1996.
[103] Y.J. Lai and C.L. Hwang, Fuzzy Multiple Objective Decision Making: Methods and Applications, Springer-Verlag, Berlin, 1994.
[104] L.S. Lasdon, R.L. Fox, and M.W. Ratner, Nonlinear optimization using the generalized reduced gradient method, Revue Française d'Automatique, Informatique et Recherche Opérationnelle, Vol. 3, pp. 73-103, 1974.
[105] L.S. Lasdon, A.D. Waren, and M.W. Ratner, GRG2 User's Guide, Technical memorandum, University of Texas, 1980.
[106] C.C. Lo and W.H. Chang, A multiobjective hybrid genetic algorithm for the
capacitated multipoint network design problem, IEEE Transactions on Sys-
tems, Man and Cybernetics, Part B: Cybernetics, Vol. 30, pp. 461-470, 2000.
[107] Z.A. Lomnicki, A branch-and-bound algorithm for the exact solution of the three-machine scheduling problem, Operational Research Quarterly, Vol. 16, pp. 89-100, 1965.
[108] J.G. March and H.A. Simon, Organizations, John Wiley, New York, 1958.
[109] R. Männer and B. Manderick (eds.), Parallel Problem Solving from Nature, 2, Proceedings of the Second International Conference on Parallel Problem Solving from Nature, Brussels, Belgium, North-Holland, Amsterdam, 1992.
[110] D.A. Manolas, C.A. Frangopoulos, T.P. Gialamas, and D.T. Tsahalis, Operation optimization of an industrial cogeneration system by a genetic algorithm, Energy Conversion and Management, Vol. 38, pp. 1625-1636, 1997.
[111] D.C. Mattfeld, Evolutionary Search and the Job Shop, Physica-Verlag, Heidel-
berg, 1996.
[112] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs,
Springer-Verlag, Berlin, 1992; 2nd extended edition, 1994; 3rd revised and
extended edition, Berlin, 1996.
[113] Z. Michalewicz, Genetic algorithms, numerical optimization and constraints,
in Proceedings of the Sixth International Conference on Genetic Algorithms,
pp. 151-158, 1995.
[114] Z. Michalewicz and N. Attia, Evolutionary optimization of constrained prob-
lems, in Proceedings of the Third Annual Conference on Evolutionary Program-
ming, World Scientific Publishers, River Edge, NJ, pp. 98-108, 1994.
[115] Z. Michalewicz, D. Dasgupta, R.G. Le Riche, and M. Schoenauer, Evolution-
ary algorithms for constrained engineering problems, Computers & Industrial
Engineering, Vol. 30, pp. 851-870, 1996.
[116] Z. Michalewicz and C.Z. Janikow, Handling constraints in genetic algorithms,
in Proceedings of the Fourth International Conference on Genetic Algorithms,
Morgan Kaufmann Publishers, San Mateo, CA, pp. 151-157, 1991.
[117] Z. Michalewicz, T. Logan, and S. Swaminathan, Evolutionary operators for
continuous convex parameter spaces, in Proceedings of the Third Annual Con-
ference on Evolutionary Programming, World Scientific Publishers, River Edge,
NJ, pp. 84-97, 1994.
[118] Z. Michalewicz and G. Nazhiyath, Genocop III: a co-evolutionary algorithm
for numerical optimization problems with nonlinear constraints, in Proceedings
of 1995 IEEE International Conference on Evolutionary Computation, IEEE
Press, Piscataway, NJ, pp. 647-651, 1995.
[119] Z. Michalewicz and M. Schoenauer, Evolutionary algorithms for constrained
parameter optimization problems, Evolutionary Computation, Vol. 4, pp. 1-32,
1996.
[120] C. Moon, C.K. Kim, and M. Gen, Genetic algorithm for maximizing the parts
flow within manufacturing cell design, Computers & Industrial Engineering,
Vol. 36, pp. 1730-1733, 1999.
[121] T.E. Morton and D.W. Pentico, Heuristic Scheduling Systems, John Wiley &
Sons, New York, 1993.
[122] I. Nabeshima, Theory of Scheduling, Morikita Publishing, Tokyo, 1974 (in
Japanese).
[123] R. Nakano and T. Yamada, Conventional genetic algorithm for job shop prob-
lems, in Proceedings of the Fourth International Conference on Genetic Algo-
rithms, Morgan Kaufmann Publishers, San Mateo, CA, 1991.
[155] M. Sakawa, K. Kato, S. Ushiro, and K. Ooura, Fuzzy programming for gen-
eral multiobjective 0-1 programming problems through genetic algorithms with
double strings, 1999 IEEE International Fuzzy Systems Conference Proceed-
ings, Vol. III, pp. 1522-1527, 1999.
[156] M. Sakawa and R. Kubota, Fuzzy programming for multiobjective job shop
scheduling with fuzzy processing time and fuzzy duedate through genetic al-
gorithms, European Journal of Operational Research, Vol. 120, pp. 393-407,
2000.
[157] M. Sakawa and T. Mori, Job shop scheduling through genetic algorithms incor-
porating similarity concepts, The Transactions of the Institute of Electronics,
Information and Communication Engineers A, Vol. J80-A (6), pp. 960-968,
1997 (in Japanese).
[158] M. Sakawa and T. Mori, Job shop scheduling with fuzzy duedate and fuzzy
processing time through genetic algorithms, Journal of Japan Society for Fuzzy
Theory and Systems, Vol. 9, pp. 231-238, 1997 (in Japanese).
[159] M. Sakawa and T. Mori, An efficient genetic algorithm for job-shop scheduling
problems with fuzzy processing time and fuzzy duedate, Computers & Indus-
trial Engineering: An International Journal, Vol. 36, pp. 325-341, 1999.
[160] M. Sakawa and T. Shibano, Interactive fuzzy programming for multiobjective
0-1 programming problems through genetic algorithms with double strings, in
Da Ruan (ed.) Fuzzy Logic Foundations and Industrial Applications, Kluwer
Academic Publishers, Norwell, MA, pp. 111-128, 1996.
[161] M. Sakawa and T. Shibano, Multiobjective fuzzy satisficing methods for 0-
1 knapsack problems through genetic algorithms, in W. Pedrycz (ed.) Fuzzy
Evolutionary Computation, Kluwer Academic Publishers, Norwell, MA, pp.
155-177, 1997.
[162] M. Sakawa and T. Shibano, An interactive fuzzy satisficing method for mul-
tiobjective 0-1 programming problems with fuzzy numbers through genetic
algorithms with double strings, European Journal of Operational Research,
Vol. 107, pp. 564-574, 1998.
[163] M. Sakawa and T. Shibano, An interactive approach to fuzzy multiobjective 0-1
programming problems using genetic algorithms, in M. Gen and Y. Tsujimura
(eds.), Evolutionary Computations and Intelligent Systems, Gordon & Breach,
New York (to appear).
[164] M. Sakawa, T. Shibano, and K. Kato, An interactive fuzzy satisficing method
for multiobjective integer programming problems with fuzzy numbers through
genetic algorithms, Journal of Japan Society for Fuzzy Theory and Systems,
Vol. 10, pp. 108-116, 1998 (in Japanese).
[165] M. Sakawa and M. Tanaka, Genetic Algorithms, Asakura Publishing, Tokyo,
1995 (in Japanese).
[166] M. Sakawa, S. Ushiro, K. Kato, and T. Inoue, Cooling load prediction through
radial basis function network and simplified robust filter, Journal of Japan So-
ciety for Fuzzy Theory and Systems, Vol. 11, pp. 112-120, 1999 (in Japanese).
[167] M. Sakawa, S. Ushiro, K. Kato, and T. Inoue, Cooling load prediction through
radial basis function network using a hybrid structural learning and simpli-
fied robust filter, Transactions of the Institute of Electronics, Information and
Communication Engineers, Vol. J82-A, pp. 31-39, 1999 (in Japanese).
[168] M. Sakawa, S. Ushiro, K. Kato, and K. Ohtsuka, Cooling load prediction
through simplified robust filter and three-layered neural network in a district
heating and cooling system, Transactions of the Institute of Electronics, Infor-
mation and Communication Engineers, Vol. J83-A, pp. 234-237, 2000 (in
Japanese).
[169] M. Sakawa and T. Yumine, Interactive fuzzy decision-making for multiobjec-
tive linear fractional programming problems, Large Scale Systems, Vol. 5, pp.
105-114, 1983.
[170] M. Sakawa and H. Yano, An interactive fuzzy satisficing method using aug-
mented minimax problems and its application to environmental systems, IEEE
Transactions on Systems, Man and Cybernetics, Vol. SMC-15, No. 6, pp.
720-729, 1985.
[171] M. Sakawa and H. Yano, Interactive decision making for multiobjective lin-
ear fractional programming problems with fuzzy parameters, Cybernetics and
Systems: An International Journal, Vol. 16, pp. 377-394, 1985.
[172] M. Sakawa and H. Yano, Interactive decision making for multiobjective linear
problems with fuzzy parameters, in G. Fandel, M. Grauer, A. Kurzhanski
and A. P. Wierzbicki (eds.), Large-Scale Modeling and Interactive Decision
Analysis, Springer-Verlag, Berlin, pp. 88-96, 1986.
[173] M. Sakawa and H. Yano, An interactive fuzzy satisficing method for multiob-
jective linear programming problems with fuzzy parameters, Large Scale Sys-
tems: Theory and Applications, Proceedings of the IFAC/IFORS Symposium,
1986.
[174] M. Sakawa and H. Yano, An interactive fuzzy satisficing method for multiob-
jective nonlinear programming problems with fuzzy parameters, in R. Trappl
(ed.) Cybernetics and Systems '86, D. Reidel Publishing, Dordrecht, pp. 607-
614, 1986.
[175] M. Sakawa and H. Yano, An interactive satisficing method for multiobjective
nonlinear programming problems with fuzzy parameters, in J. Kacprzyk and
S. A. Orlovski (eds.), Optimization Models Using Fuzzy Sets and Possibility
Theory, D. Reidel Publishing, Dordrecht, pp. 258-271, 1987.
[176] M. Sakawa and H. Yano, An interactive fuzzy satisficing method for multiob-
jective linear fractional programming problems, Fuzzy Sets and Systems, Vol.
28, pp. 129-144, 1988.
[177] M. Sakawa and H. Yano, Interactive decision making for multiobjective non-
linear programming problems with fuzzy parameters, Fuzzy Sets and Systems,
Vol. 29, pp. 315-326, 1989.
[178] M. Sakawa and H. Yano, An interactive fuzzy satisficing method for multi-
objective nonlinear programming problems with fuzzy parameters, Fuzzy Sets
and Systems, Vol. 30, pp. 221-238, 1989.
[179] M. Sakawa and K. Yauchi, Coevolutionary genetic algorithms for nonconvex
nonlinear programming problems: Revised GENOCOP III, Cybernetics and
Systems: An International Journal, Vol. 29, No. 8, pp. 885-899, 1998.
[180] M. Sakawa and K. Yauchi, An interactive fuzzy satisficing method for multiob-
jective nonconvex programming problems with fuzzy numbers through floating
point genetic algorithms, Journal of Japan Society for Fuzzy Theory and Sys-
tems, Vol. 10, pp. 89-97, 1998.
[181] M. Sakawa and K. Yauchi, An interactive fuzzy satisficing method for mul-
tiobjective nonconvex programming problems through floating point genetic
algorithms, European Journal of Operational Research, Vol. 117, pp. 113-124,
1999.
[182] M. Sakawa and K. Yauchi, Interactive decision making for multiobjective non-
convex programming problems with fuzzy parameters through coevolutionary
genetic algorithms, Fuzzy Sets and Systems, Vol. 114, pp. 151-165, 2000.
[183] M. Sakawa and K. Yauchi, An interactive fuzzy satisficing method for mul-
tiobjective nonconvex programming problems with fuzzy numbers through
coevolutionary genetic algorithms, IEEE Transactions on Systems, Man and
Cybernetics, Part B: Cybernetics, Vol. 31, No. 3, 2001.
[184] M. Sakawa, H. Yano, and T. Yumine, An interactive fuzzy satisficing method
for multiobjective linear programming problems and its application, IEEE
Transactions on Systems, Man and Cybernetics, Vol. SMC-17, No. 4, pp. 654-
661, 1987.
[185] H.G. Sandalidis, P.O. Stavroulakis, and J. Rodriguez-Tellez, An efficient evolu-
tionary algorithm for channel resource management in cellular mobile systems,
IEEE Transactions on Evolutionary Computation, Vol. 2, pp. 125-137, 1998.
[186] N. Sannomiya, H. Iima, E. Kako, and Y. Kobayashi, Genetic algorithm ap-
proach to a production ordering problem in acid rinsing of steelmaking plant,
Proceedings of Thirteenth IFAC World Congress, Vol. D, pp. 297-302, 1996.
[187] J.D. Schaffer, Multiple objective optimization with vector evaluated genetic
algorithms, in Proceedings of the First International Conference on Genetic
Algorithms and Their Applications, Lawrence Erlbaum Associates, Hillsdale,
NJ, pp. 93-100, 1985.
[188] J.D. Schaffer (ed.), Genetic Algorithms, Proceedings of the Third International
Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo,
CA, 1989.
[189] H.-P. Schwefel, Evolution and Optimum Seeking, John Wiley & Sons, New York,
1995.
[190] H.-P. Schwefel and R. Manner (eds.), Parallel Problem Solving from Nature,
Proceedings of the First International Conference on Parallel Problem Solving
from Nature (PPSN), Dortmund, Germany, Springer-Verlag, Berlin, 1990.
[191] F. Seo and M. Sakawa, Multiple Criteria Decision Analysis in Regional Plan-
ning: Concepts, Methods and Applications, D. Reidel Publishing, Dordrecht,
1988.
[192] I. Shiroumaru, M. Inuiguchi, and M. Sakawa, A fuzzy satisficing method for
electric power plant coal purchase using genetic algorithms, European Journal
of Operational Research, Vol. 126, pp. 218-230, 2000.
[193] R.L. Sisson, Methods of sequencing in job shops - a review, Operations Re-
search, Vol. 7, pp. 10-29, 1959.
[194] R. Slowinski and M. Hapke (eds.), Scheduling under Fuzziness, Physica-Verlag,
Heidelberg, 2000.
[195] R. Slowinski and J. Teghem (eds.), Stochastic versus Fuzzy Approaches to Mul-
tiobjective Mathematical Programming Problems under Uncertainty, Kluwer
Academic Publishers, Norwell, MA, 1990.
[196] R. Smierzchalski and Z. Michalewicz, Modeling of ship trajectory in collision
situations by an evolutionary algorithm, IEEE Transactions on Evolutionary
Computation, Vol. 4, pp. 227-241, 2000.
[197] N. Srinivas and K. Deb, Multiobjective optimization using nondominated sort-
ing in genetic algorithms, Evolutionary Computation, Vol. 2, pp. 221-248, 1995.
[198] T. Starkweather, S. McDaniel, K. Mathias, D. Whitley, and C. Whitley, Com-
parison of genetic sequencing operators, in Proceedings of the Fourth Interna-
tional Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San
Mateo, CA, pp. 69-76, 1991.
[199] R.E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Ap-
plication, John Wiley & Sons, New York, 1986.
[200] R.E. Steuer and E.U. Choo, An interactive weighted Tchebycheff procedure
for multiple objective programming, Mathematical Programming, Vol. 26, pp.
326-344, 1983.
[201] G. Sommer and M.A. Pollatschek, A fuzzy programming approach to an air
pollution regulation problem, in R. Trappl, G.J. Klir, and L. Ricciardi (eds.),
Progress in Cybernetics and Systems Research, Hemisphere, pp. 303-323, 1978.
[202] G. Syswerda, Uniform crossover in genetic algorithms, in Proceedings of the
Third International Conference on Genetic Algorithms, Morgan Kaufmann
Publishers, San Mateo, CA, pp. 2-9, 1989.
[203] H. Tamaki and Y. Nishikawa, A parallel genetic algorithm based on a neighbor-
hood model and its application to the jobshop scheduling, in Parallel Problem
Solving from Nature 2, North-Holland, Amsterdam, pp. 573-582, 1992.
[204] H. Tamaki and Y. Nishikawa, Maintenance of diversity in a genetic algorithm
and an application to the jobshop scheduling, Proceedings of the IMACS/SICE
International Symposium on Robotics, Mechatronics and Manufacturing Sys-
tems '92, pp. 869-874, 1992.
[205] H. Tamaki, M. Mori, M. Araki, Y. Mishima, and H. Ogai, Multi-criteria op-
timization by genetic algorithms: a case of scheduling in hot rolling process,
Proceedings of Third Conference of the Association of Asian-Pacific Opera-
tional Research Societies within IFORS, pp. 374-381, 1995.
[206] Y. Tsujimura, M. Gen, and E. Kubota, Solving job-shop scheduling problem
with fuzzy processing time using genetic algorithm, Journal of Japan Society
for Fuzzy Theory and Systems, Vol. 7, pp. 1073-1083, 1995.
[207] Y. Tsujimura and M. Gen, Genetic algorithms for solving multiprocessor
scheduling problems, in X. Yao, J.H. Kim, and T. Furuhashi (eds.) Simulated
Evolution and Learning, Springer-Verlag, Heidelberg, pp. 106-115, 1997.
[208] E.L. Ulungu and J. Teghem, Multi-objective combinatorial optimization prob-
lems: a survey, Journal of Multicriteria Decision Analysis, Vol. 3, pp. 83-104,
1994.
[209] P.J.M. van Laarhoven and E.H.L. Aarts, Simulated Annealing: Theory and
Applications, D. Reidel Publishing, Dordrecht, 1987.
[210] J.-L. Verdegay and M. Delgado (eds.), The Interface between Artificial Intelli-
gence and Operations Research in Fuzzy Environment, Verlag TUV Rheinland,
Köln, 1989.
[211] N. Viswanadhan, S.M. Sharma, and M. Taneja, Inspection allocation in man-
ufacturing systems using stochastic search techniques, IEEE Transactions on
Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 26, pp.
222-230, 1996.
[212] H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel (eds.), Parallel
Problem Solving from Nature - PPSN IV, Springer-Verlag, Berlin, 1996.
[213] G. Winter, J. Periaux, M. Galán, and P. Cuesta (eds.), Genetic Algorithms in
Engineering and Computer Science, John Wiley & Sons, New York, 1995.
[214] L.D. Whitley, T. Starkweather, and D'A. Fuquay, Scheduling problems and
traveling salesmen: The genetic edge recombination operator, Proceedings of
the Third International Conference on Genetic Algorithms, pp. 133-140, 1989.
[215] A.P. Wierzbicki, The use of reference objectives in multiobjective optimization,
in G. Fandel and T. Gal (eds.) Multiple Criteria Decision Making: Theory and
Application, Springer-Verlag, Berlin, pp. 468-486, 1980.
[216] A.P. Wierzbicki, A mathematical basis for satisficing decision making, Math-
ematical Modeling, Vol. 3, pp. 391-405, 1982.
[217] A. Wright, Genetic algorithms for real parameter optimization, in J.G. Rawlins
(ed.) Foundations of Genetic Algorithms, Morgan Kaufmann, San Francisco,
CA, pp. 205-218, 1991.
[218] J. Xiao, Z. Michalewicz, L. Zhang, and K. Trojanowski, Adaptive evolution-
ary planner/navigator for mobile robots, IEEE Transactions on Evolutionary
Computation, Vol. 1, pp. 18-28, 1997.
[219] T. Yamada and R. Nakano, A genetic algorithm applicable to large-scale job
shop problems, in Parallel Problem Solving from Nature 2, North-Holland,
Amsterdam, pp. 281-290, 1992.
[220] R. Yokoyama and K. Ito, A revised decomposition method for MILP problems
and its application to operational planning of thermal storage systems, Journal
of Energy Resources Technology, Vol. 118, pp. 277-284, 1996.
[221] L.A. Zadeh, Fuzzy sets, Information and Control, Vol. 8, pp. 338-353, 1965.
[222] M. Zeleny, Multiple Criteria Decision Making, McGraw-Hill, New York, 1982.
[223] Q. Zhang and Y.Y. Leung, An orthogonal genetic algorithm for multimedia
multicast routing, IEEE Transactions on Evolutionary Computation, Vol. 3,
pp. 53-62, 1999.
[224] H.-J. Zimmermann, Description and optimization of fuzzy systems, Interna-
tional Journal of General Systems, Vol. 2, pp. 209-215, 1976.
[225] H.-J. Zimmermann, Fuzzy programming and linear programming with several
objective functions, Fuzzy Sets and Systems, Vol. 1, pp. 45-55, 1978.
[226] H.-J. Zimmermann, Fuzzy mathematical programming, Computers & Opera-
tions Research, Vol. 10, pp. 291-298, 1983.
[227] H.-J. Zimmermann, Fuzzy Sets, Decision-Making and Expert Systems, Kluwer
Academic Publishers, Norwell, MA, 1987.
[228] H.-J. Zimmermann, Fuzzy Set Theory and Its Applications, Kluwer Academic
Publishers, Norwell, MA, 1985; 2nd edition, 1991; 3rd edition, 1996.
Index
genetic algorithm, 11, 133, 170, 229, 237, 261
GENOCOP, 142
GENOCOP III, 142
genotype, 12, 16
Giffler and Thompson algorithm, 174, 195
Giffler and Thompson algorithm-based crossover, 177, 198
greatest associate ordinary number, 210
heuristic crossover, 139
individual, 11
inequality constraint, 135, 154, 160
initial population, 88, 89, 137, 176, 197, 212, 230, 262
initial reference point, 143
inversion, 26, 35, 94, 248
job-shop scheduling problem, 169, 171, 191
JSP, 169, 171, 191
mutation, 13, 26, 35, 93, 179, 200, 213, 231, 248, 262
natural selection, 19
nondelay schedule, 173
noninferior solution, 55
nonlinear programming, 135
nonuniform mutation, 140
objective function, 135, 154, 160
one-point crossover, 23, 137, 248
optimal schedule, 174
ordered crossover: OX, 25
Pareto optimality, 55
Pareto optimality test, 58, 111, 156
Pareto optimal solution, 55, 154
partially matched crossover: PMX, 25
phenotype, 12, 16
PMX for double strings, 33, 91
population, 11, 12, 15
power law scaling, 19