Soft Computing: Overview and Recent Developments in Fuzzy Optimization
Jaroslav Ramík
Institute for Research and Applications of Fuzzy Modeling, University of Ostrava
November 2001
Abstract
Soft Computing (SC) represents a significant paradigm shift in the
aims of computing, which reflects the fact that the human mind, unlike
present day computers, possesses a remarkable ability to store and process
information which is pervasively imprecise, uncertain and lacking in cat-
egoricity. At this juncture, the principal constituents of Soft Computing
(SC) are: Fuzzy Systems (FS), including Fuzzy Logic (FL); Evolutionary
Computation (EC), including Genetic Algorithms (GA); Neural Networks
(NN), including Neural Computing (NC); Machine Learning (ML); and
Probabilistic Reasoning (PR). In this work, we focus on fuzzy method-
ologies and fuzzy systems, as they bring basic ideas to other SC method-
ologies. The other constituents of SC are also briefly surveyed here but
for details we refer to the existing vast literature. In Part 1 we present
an overview of developments in the individual parts of SC. For each con-
stituent of SC we overview its background, main problems, methodologies
and recent developments. We focus mainly on Fuzzy Systems, for which
the main literature, main professional journals and other relevant information is also supplied. The other constituents of SC are reviewed briefly.
In Part 2 we investigate some fuzzy optimization systems. First, we in-
vestigate Fuzzy Sets - we define fuzzy sets within the classical set theory
by nested families of sets, and discuss how this concept is related to the
usual definition by membership functions. Further, we present some important applications of the theory, based on generalizations of concave functions. We study a decision problem, i.e. the problem of finding a "best" decision in the set of feasible alternatives with respect to several (i.e. more
than one) criteria functions. Within the framework of such a decision sit-
uation, we deal with the existence and mutual relationships of three kinds
of ”optimal decisions”: Weak Pareto-Maximizers, Pareto-Maximizers and
Strong Pareto-Maximizers - particular alternatives satisfying some natural
and rational conditions. We also study the compromise decisions maxi-
mizing some aggregation of the criteria. The criteria considered here will
be functions defined on the set of feasible alternatives with the values in
the unit interval. In Fuzzy mathematical programming problems (FMP)
the values of the objective function describe effects from choices of the
alternatives. Among others we show that the class of all MP problems
with (crisp) parameters can be naturally embedded into the class of FMP
problems with fuzzy parameters. Finally, we deal with a class of fuzzy
linear programming problems. We show that the class of crisp (classical)
LP problems can be embedded into the class of FLP ones. Moreover, for
FLP problems we define the concept of duality and prove the weak and
strong duality theorems. Further, we investigate special classes of FLP -
interval LP problems, flexible LP problems, LP problems with interactive
coefficients and LP problems with centered coefficients. We present here
an original mathematically oriented and unified approach.
Contents
2 Fuzzy Systems 7
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Fuzzy Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Fuzzy Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Fuzzy Numbers and Fuzzy Arithmetic . . . . . . . . . . . . . . . 13
2.5 Determination of Membership Functions . . . . . . . . . . . . . . 13
2.5.1 Subjective evaluation and elicitation . . . . . . . . . . . . 14
2.5.2 Ad-hoc forms and methods . . . . . . . . . . . . . . . . . 14
2.5.3 Converted frequencies or probabilities . . . . . . . . . . . 14
2.5.4 Physical measurement . . . . . . . . . . . . . . . . . . . . 14
2.6 Membership Degrees Versus Probabilities . . . . . . . . . . . . . 14
2.7 Possibility Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.8 Fuzzy Expert Systems . . . . . . . . . . . . . . . . . . . . . . . . 17
2.9 Fuzzy Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.10 Fuzzy Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.11 Decision Making in Fuzzy Environment . . . . . . . . . . . . . . 23
2.12 Fuzzy Mathematical Programming . . . . . . . . . . . . . . . . . 24
2.13 Mailing Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.14 Main International Journals . . . . . . . . . . . . . . . . . . . . . 28
2.15 Web Pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.16 Fuzzy Researchers . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3 Evolutionary Computation 33
3.1 Genetic Algorithm (GA) . . . . . . . . . . . . . . . . . . . . . . . 34
3.2 Evolutionary Programming (EP) . . . . . . . . . . . . . . . . . . 36
3.3 Evolution Strategies (ES) . . . . . . . . . . . . . . . . . . . . . . 39
3.4 Classifier Systems (CS) . . . . . . . . . . . . . . . . . . . . . . . 40
3.5 Genetic Programming (GP) . . . . . . . . . . . . . . . . . . . . . 41
4 Neural Networks 43
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.2 Principles of Neural Networks . . . . . . . . . . . . . . . . . . . . 44
4.3 Learning Methods in NNs . . . . . . . . . . . . . . . . . . . . . . 46
4.4 Well-Known Kinds of NNs . . . . . . . . . . . . . . . . . . . . . . 47
5 Machine Learning 55
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.2 Three Basic Theories . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.3 Supervised Machine Learning . . . . . . . . . . . . . . . . . . 57
5.4 Reinforcement Machine Learning . . . . . . . . . . . . . . . . . . 57
5.5 Unsupervised Machine Learning . . . . . . . . . . . . . . . . . 57
6 Probabilistic Reasoning 59
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.2 Markov and Bayesian Networks . . . . . . . . . . . . . . . . . . . 60
6.3 Decision Analysis based on PR . . . . . . . . . . . . . . . . . . . 60
6.4 Learning Structure from Data . . . . . . . . . . . . . . . . . . . . 61
6.5 Dempster-Shafer Theory . . . . . . . . . . . . . . . . . . . . . 61
7 Conclusion 63
II Fuzzy Optimization 65
8 Fuzzy Sets 67
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8.2 Definition and Basic Properties . . . . . . . . . . . . . . . . . . . 68
8.3 Operations with Fuzzy Sets . . . . . . . . . . . . . . . . . . . . . 71
8.4 Extension Principle . . . . . . . . . . . . . . . . . . . . . . . . . . 72
8.5 Binary and Valued Relations . . . . . . . . . . . . . . . . . . . . 74
8.6 Fuzzy Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
8.7 Fuzzy Extensions of Valued Relations . . . . . . . . . . . . . . . 79
8.8 Fuzzy Quantities and Fuzzy Numbers . . . . . . . . . . . . . . . 84
8.9 Fuzzy Extensions of Real Functions . . . . . . . . . . . . . . . . . 87
8.10 Higher Dimensional Fuzzy Quantities . . . . . . . . . . . . . . . . 92
8.11 Fuzzy Extensions of Valued Relations . . . . . . . . . . . . . . . 97
12 Conclusion 183
Part I
Chapter 1
Introduction
Fuzzy theory plays a leading role in soft computing and this stems from the
fact that human reasoning is not crisp and admits degrees. What is important to
note is that soft computing is not a melange. Rather, it is a partnership in which
each of the partners contributes a distinct methodology for addressing problems
in its domain. In this perspective, the principal constituent methodologies in
Chapter 2
Fuzzy Systems
2.1 Introduction
Fuzzy systems are based on fuzzy logic, a generalization of conventional (Boolean)
logic that has been extended to handle the concept of partial truth: truth values between "completely true" and "completely false". It was introduced by L. A. Zadeh of the University of California, Berkeley, U.S.A., in the 1960s, as a means
to model the uncertainty of natural language. Zadeh himself says that rather
than regarding fuzzy theory as a single theory, we should regard the process of
“fuzzification” as a methodology to generalize any specific theory from a crisp
(discrete) to a continuous (fuzzy) form.
An ordinary (crisp) subset U of a set S can be identified with a mapping
U : S → {0, 1}.
This mapping may be represented as a set of ordered pairs, with exactly one
ordered pair present for each element of S. The first element of the ordered
pair is an element of the set S, and the second element is an element of the set
{0, 1}. The value zero is used to represent non-membership, and the value one
is used to represent membership. The truth or falsity of the statement:
x is in U
is determined by finding the ordered pair whose first element is x. The statement
is true if the second element of the ordered pair is 1, and the statement is false
if it is 0.
Similarly, a fuzzy subset (or fuzzy set) F of a set S can be defined as a set
of ordered pairs, each with the first element from S, and the second element
from the interval [0, 1], with exactly one ordered pair present for each element
of S. This defines a mapping between elements of the set S and values in the
interval [0, 1]. The value zero is used to represent complete non-membership,
the value one is used to represent complete membership, and values in-between
are used to represent intermediate degrees of membership. The set S is referred
to as the universe of discourse for the fuzzy subset F . Frequently, the mapping
is described as a function, the membership function of F . The ordinary sets are
considered as special cases of fuzzy sets with the membership functions equal
to the characteristic functions. They are called crisp sets.
The above definition of fuzzy set brings the equivalence between a fuzzy set
as such, intuitively a set-based concept, and its membership function, a mapping
from the universe of discourse to the unit interval [0, 1], or, more generally, to
some lattice L. Here, the operations with fuzzy sets are defined by the operations
with functions.
In Chapter 8, specially devoted to fuzzy sets, our approach is reversed: we
define a fuzzy set as a family of (crisp) sets, where each member of the family
corresponds to a specific grade of membership from the unit interval [0, 1]. In this way the corresponding membership function is easily defined. This approach is intuitively understandable and practically tractable, as it is natural to work with (crisp) sets of elements whose membership grade is greater than or equal to some level. Moreover, this approach appeals to those mathematicians who are reluctant to speak about "sets" while having "functions" in mind.
The fuzzy set based on the concept of a family of nested sets enjoys, among others, the following advantages:
• it makes it possible to create a consistent (mathematical) theory,
• no ”artificial” identification of a fuzzy set with its membership function is
necessary,
• nonfuzzy sets can be naturally embedded into the fuzzy sets,
• nonfuzzy concepts may be extended to represent fuzzy ones,
• any fuzzy problem can be viewed as a family of nonfuzzy ones,
• practical tractability is achieved.
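The advantages listed above can be illustrated by a small sketch (illustrative Python; the function names and the finite universe are our own assumptions, not notation from the text) showing how a fuzzy set given by a membership function generates a nested family of crisp level sets, and how the membership grades are recovered from that family:

```python
# Sketch: a fuzzy set as a nested family of crisp level sets.
# Assumes a finite universe; all names here are illustrative.

def alpha_cut(membership, universe, alpha):
    """Crisp set of all elements with membership grade >= alpha."""
    return {x for x in universe if membership(x) >= alpha}

def membership_from_family(family, x):
    """Recover the grade of x from the family {level: level set}:
    the supremum of the levels whose set contains x."""
    levels = [a for a, s in family.items() if x in s]
    return max(levels) if levels else 0.0

universe = {"Ann", "Bob", "Eve"}
tall = {"Ann": 1.0, "Bob": 0.6, "Eve": 0.2}.get

family = {a: alpha_cut(tall, universe, a) for a in (0.2, 0.6, 1.0)}
# The family is nested (each higher level set is contained in the
# lower ones) and the original grades are recovered as suprema.
assert family[1.0] <= family[0.6] <= family[0.2]
assert membership_from_family(family, "Bob") == 0.6
```

Working with the level sets directly is exactly the work with "(crisp) sets with the membership grade greater than or equal to some level" mentioned above.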
The degree to which the statement
x is in F
is true is determined by finding the ordered pair whose first element is x. The
degree of truth of the statement is the second element of the ordered pair. In
practice, the terms "membership function" and "fuzzy subset" are used interchangeably. That is a lot of mathematical baggage, so here is an example. Let
us talk about people and ”tallness” expressed as their HEIGHT . In this case
the set S (the universe of discourse) is the set of people. Let us define a fuzzy
subset tall, which will answer the question ”to what degree is person x tall?”
Zadeh describes HEIGHT as a linguistic variable, which represents our cognitive category of "tallness". The values of this linguistic variable are fuzzy subsets such as tall, very_tall, or short. To each person in the universe of discourse,
we assign a degree of membership in the fuzzy subset tall. The easiest way to
do this is with a membership function based on the real function h (”height of
a person in cm”) which is defined for each person x ∈ S :
Figure 2.1:
Membership functions used in practice rarely have so simple a shape as tall(x); they tend to be triangles pointing up, and they can be much more complex than that.
Also, the discussion characterizes membership functions as if they always are
based on a single criterion, but this is not always the case, although it is quite
common. One could, for example, want to have the membership function for
tall depend on both a person’s height and their age, e.g. ”somebody is tall for
his age”. This is perfectly legitimate, and occasionally used in practice. It is
referred to as a two-dimensional membership function, or a ”fuzzy relation”. It
is also possible to have even more criteria, or to have the membership function
depend on elements from two completely different universes of discourse.
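A two-dimensional membership function of the kind just described can be sketched as follows (an illustrative Python fragment; the formula for "tall for his age" is entirely made up for the example and is not taken from the text):

```python
# Illustrative fuzzy relation: the grade of "tall for his age"
# depends on both height and age. The toy model below assumes the
# expected height grows linearly until adulthood; all constants
# are hypothetical.

def tall_for_age(height_cm, age_years):
    expected = min(170.0, 100.0 + 4.0 * age_years)
    excess = height_cm - expected
    # map the excess over the age-dependent expectation onto [0, 1]
    return max(0.0, min(1.0, excess / 30.0 + 0.5))
```

A ten-year-old of 150 cm then gets a fairly high grade, while the same height for an adult gets a low one, which is the intended behavior of such a relation.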
Now that we know what a statement like ”x is LOW ” means in fuzzy logic,
how do we interpret a statement like:
Note that if you plug just the values zero and one into these definitions, you get the same truth tables as you would obtain from conventional predicate logic; in particular, from the fuzzy logic operations (2.1) we obtain the predicate logic operations provided all fuzzy membership grades are restricted to the traditional set {0, 1}. This effectively establishes fuzzy subsets and logic as a true
generalization of classical set theory and logic. In fact, by this reasoning all
crisp (traditional) subsets are fuzzy subsets of this very special type; and there
is no conflict between fuzzy and crisp methods.
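This reduction can be checked mechanically, assuming the standard Zadeh operations (complement 1 − a, intersection min, union max) for the fuzzy logic operations (2.1):

```python
# Standard Zadeh operations (assumed here for the operations (2.1)).
f_not = lambda a: 1 - a
f_and = min
f_or = max

# Restricted to grades in {0, 1}, they reproduce the classical
# Boolean truth tables:
for a in (0, 1):
    assert f_not(a) == (not a)
    for b in (0, 1):
        assert f_and(a, b) == (a and b)
        assert f_or(a, b) == (a or b)
```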
Example 2 Assume the same definition of tall as above, and in addition, as-
sume that we have a fuzzy subset old defined by the membership function:
old(x) = { 0 if a(x) < 18,
(a(x) − 18)/42 if 18 ≤ a(x) ≤ 60,
1 if a(x) > 60.
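The membership function old transcribes directly into code (Python sketch; an `age` argument plays the role of a(x)):

```python
# Membership function old(x) from the example above; `age` plays
# the role of a(x), the age of person x in years.
def old(age):
    if age < 18:
        return 0.0
    if age <= 60:
        return (age - 18) / 42
    return 1.0
```

A 39-year-old, for instance, is old to the degree (39 − 18)/42 = 0.5.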
Z.Q. Liu and S. Miyamoto, Eds.: Soft Computing and Human - Centered
Machines, Springer, Tokyo-Berlin-Heidelberg-New York, 2000.
Turksen, I.B., "Measurement of Fuzziness: Interpretation of the Axioms of Measure", in Proceedings of the Conference on Fuzzy Information and Knowledge Representation for Decision Analysis, IFAC, Oxford, 1984, 97-102.
Bezdek, J. C., ”Fuzzy Models – What Are They, and Why?”, IEEE Trans-
actions on Fuzzy Systems, 1:1,1-6.
Delgado, M., and Moral, S., ”On the Concept of Possibility-Probability Con-
sistency”, Fuzzy Sets and Systems 21,1987,311-318.
Dempster, A.P., ”Upper and Lower Probabilities Induced by a Multivalued
Mapping”, Annals of Math. Stat. 38,1967,325-339.
Henkind, S. J. and Harrison, M. C., ”Analysis of Four Uncertainty Calculi”,
IEEE Trans. Man Sys. Cyb. 18(5),1988,700-714.
Kampé de Fériet, J., "Interpretation of Membership Functions of Fuzzy Sets in Terms of Plausibility and Belief", in Fuzzy Information and Decision Processes, M.M. Gupta and E. Sanchez, Eds., North-Holland, Amsterdam, 1982, 93-98.
Klir, G., ”Is There More to Uncertainty than Some Probability Theorists
Would Have Us Believe?”, Int. J. Gen. Sys. 15(4),1989,347-378.
Klir, G., ”Generalized Information Theory”, Fuzzy Sets and Systems 40,
1991, 127-142.
Dubois, D. and Prade, H., ”Possibility Theory”, Plenum Press, New York,
1988.
Joslyn, C., ”Possibilistic Measurement and Set Statistics”, In: Proc. of the
1992 NAFIPS Conference, 2, NASA, 1992, 458-467.
Joslyn, C., ”Possibilistic Semantics and Measurement Methods in Complex
Systems”, In: Proc. of the 2nd International Symposium on Uncertainty Mod-
eling and Analysis, B. Ayyub, Ed., IEEE Computer Society 1993.
Wang, Z. and Klir, G., ”Fuzzy Measure Theory”, Plenum Press, New York,
1991.
Zadeh, L., ”Fuzzy Sets as the Basis for a Theory of Possibility”, Fuzzy Sets
and Systems 1:1978, 3-28.
In the maximum method, one of the variable values at which the fuzzy subset has its maximum truth value is chosen as the crisp value for the output variable.
low(t) = 1 − t/10, (2.2)
high(t) = t/10. (2.3)
Rule 1: IF x is LOW AND y is LOW THEN z is HIGH
Rule 2: IF x is LOW AND y is HIGH THEN z is LOW
Rule 3: IF x is HIGH AND y is LOW THEN z is LOW
Rule 4: IF x is HIGH AND y is HIGH THEN z is HIGH
Notice that instead of assigning a single value to the output variable z, each
rule assigns an entire fuzzy subset (LOW or HIGH).
Notice that we have:
In the inference subprocess, the truth value for the premise of each rule
is computed, and applied to the conclusion part of each rule. This results in
one fuzzy subset to be assigned to each output variable for each rule. As it
has been already mentioned min and · are two inference methods or inference
rules. In min inferencing, the output membership function is clipped off at
Rule1(z) = { z/10 if z ≤ 6.8,
0.68 if z > 6.8.
For the same conditions, · (product) inferencing will assign z the fuzzy subset
defined by the membership function:
Rule1(z) = 0.68 · z/10.
The terminology used here is slightly nonstandard. In most texts, the term
”inference method” is used to mean the combination of the things referred to
separately here as ”inference” and ”composition”. Thus, you can see such terms
as ”max-min inference” and ”sum-product inference” in the literature.
They are the combination of max composition and min inference, or Σ (sum) composition and · (product) inference, respectively. You'll also see the reverse terms "min-max" and "product-sum"; these mean the same things in the reverse order. It seems
clearer to describe the two processes separately.
In the composition subprocess, all of the fuzzy subsets assigned to each
output variable are combined together to form a single fuzzy subset for each
output variable. Max composition and sum composition are two composition
rules. In max composition, the combined output fuzzy subset is constructed
by taking the pointwise maximum over all of the fuzzy subsets assigned to the
output variable by the inference rule. In sum composition, the combined output
fuzzy subset is constructed by taking the pointwise sum over all of the fuzzy
subsets assigned to the output variable by the inference rule.
Note that this can result in truth values greater than one! For this rea-
son, sum composition is only used when it will be followed by a defuzzification
method, such as the centroid method, that doesn’t have a problem with this
odd case. Otherwise sum composition can be combined with normalization and
is therefore a general purpose method again. For example, assume x = 0.0 and
y = 3.2. Min inferencing would assign the following four fuzzy subsets to z:
Rule1(z) = { z/10 if z ≤ 6.8,
0.68 if z > 6.8,

Rule2(z) = { 0.32 if z ≤ 6.8,
1 − z/10 if z > 6.8,

Rule3(z) = 0.0,
Rule4(z) = 0.0.

Max composition would then combine these into the single fuzzy subset

Rules(z) = { 0.32 if z ≤ 3.2,
z/10 if 3.2 < z ≤ 6.8,
0.68 if z > 6.8.

Product inferencing would assign the following four fuzzy subsets to z:

Rule1(z) = 0.068 · z,
Rule2(z) = 0.32 − 0.032 · z,
Rule3(z) = 0.0,
Rule4(z) = 0.0.
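The worked example with x = 0.0 and y = 3.2 can be checked directly (a Python sketch using the four rules and the membership functions (2.2)-(2.3) as given above):

```python
low = lambda t: 1 - t / 10    # (2.2)
high = lambda t: t / 10       # (2.3)

def min_inference(x, y, z):
    """Fuzzy subsets assigned to z by the four rules, min inference."""
    w1 = min(low(x), low(y))    # Rule 1: LOW,  LOW  -> HIGH
    w2 = min(low(x), high(y))   # Rule 2: LOW,  HIGH -> LOW
    w3 = min(high(x), low(y))   # Rule 3: HIGH, LOW  -> LOW
    w4 = min(high(x), high(y))  # Rule 4: HIGH, HIGH -> HIGH
    # min inference clips each output subset at the premise's truth value
    return (min(w1, high(z)), min(w2, low(z)),
            min(w3, low(z)), min(w4, high(z)))

def max_composition(x, y, z):
    """Combined output subset: pointwise maximum over the rules."""
    return max(min_inference(x, y, z))

# Reproduces the piecewise expression for Rules(z) at x = 0.0, y = 3.2:
assert abs(max_composition(0.0, 3.2, 2.0) - 0.32) < 1e-9
assert abs(max_composition(0.0, 3.2, 5.0) - 0.5) < 1e-9   # z/10
assert abs(max_composition(0.0, 3.2, 8.0) - 0.68) < 1e-9
```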
There are several variations of the maximum method that differ only in what
they do when there is more than one variable value at which this maximum
truth value occurs. One of these, the average of maxima method, returns the
average of the variable values at which the maximum truth value occurs.
To compute the centroid of the function f(x), you divide the moment of the function by the area of the function. To compute the moment of f(x), you compute the integral ∫ x f(x) dx, and to compute the area of f(x), you compute the integral ∫ f(x) dx.
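Numerically, the centroid computation amounts to dividing two simple sums (a Python sketch approximating both integrals by a midpoint rule; the step count is an arbitrary choice):

```python
# Centroid defuzzification: moment / area, approximated numerically.
def centroid(f, lo, hi, steps=10_000):
    dz = (hi - lo) / steps
    zs = [lo + (i + 0.5) * dz for i in range(steps)]
    area = sum(f(z) for z in zs) * dz          # integral of f
    moment = sum(z * f(z) for z in zs) * dz    # integral of z*f(z)
    return moment / area

# A symmetric triangle peaking at z = 5 has centroid 5:
tri = lambda z: max(0.0, 1 - abs(z - 5) / 5)
assert abs(centroid(tri, 0, 10) - 5) < 1e-6
```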
Sometimes the composition and defuzzification processes are combined, tak-
ing advantage of mathematical relationships that simplify the process of com-
puting the final output variable values.
To date, fuzzy expert systems are the most common use of fuzzy logic. They
are used in several wide-ranging fields, including:
where A, B and C are constants, e is the error term, ∫ e dt is the integral of the error over time and de/dt is the rate of change of the error term. The major drawback of this system is that it usually assumes that the system being modelled is linear, or at least behaves as some monotonic function. As the complexity of the system increases, it becomes more difficult to formulate the mathematical model. Fuzzy control replaces the mathematical model in the picture above with one built from a number of smaller rules that in general only describe a small section of the whole system, with the process of inference binding them together to produce the desired outputs. That is, a fuzzy model has replaced the mathematical one. The inputs
and outputs of the system have remained unchanged. The Sendai subway is the
prototypical example application of fuzzy control.
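For comparison, the conventional controller described above (output A·e + B·∫e dt + C·de/dt) can be sketched in discrete time as follows (illustrative Python; the gains and the sampling step are arbitrary, and the integral and derivative are approximated from sampled errors):

```python
# Discrete sketch of the conventional (PID-style) controller:
# output = A*e + B*integral(e dt) + C*de/dt.
class Controller:
    def __init__(self, A, B, C, dt):
        self.A, self.B, self.C, self.dt = A, B, C, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt                   # ~ integral of e dt
        derivative = (error - self.prev_error) / self.dt   # ~ de/dt
        self.prev_error = error
        return self.A * error + self.B * self.integral + self.C * derivative
```

A fuzzy controller replaces the single formula in `step` by a rule base and an inference process, while the inputs (the error signals) and the output remain the same.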
computer, from the fact that some data are approximate solutions of other
problems or estimations by human experts, etc. In some of these situations,
the fuzzy set approach may be applicable. In the context of multicriteria de-
cision making, functions mapping the set of feasible alternatives into the unit
interval [0, 1] of real numbers representing normalized utility functions can be
interpreted as membership functions of fuzzy subsets of the underlying set of
alternatives. However, functions with the range [0, 1] arise in more contexts.
A decision problem in X, i.e. the problem to find a ”best” decision in
the set of feasible alternatives X with respect to several (i.e. more than one)
criteria functions is considered. Within the framework of such a decision sit-
uation, we deal with the existence and mutual relationships of three kinds of
”optimal decisions”: Weak Pareto-Maximizers, Pareto-Maximizers and Strong
Pareto-Maximizers - particular alternatives satisfying some natural and rational
conditions - commonly called Pareto-optimal decisions.
In Chapter 9 we study also the compromise decisions maximizing some ag-
gregation of the criteria. This problem was introduced originally by Bellman
and Zadeh for the minimum aggregation function. The criteria considered here
will be functions defined on the set X of feasible alternatives with the values
in the unit interval [0, 1]. Such functions can be interpreted as membership
functions of fuzzy subsets of X and will be called here fuzzy criteria.
The set X of feasible alternatives is a convex subset, or a generalized convex
subset of n-dimensional Euclidean space Rn , frequently we consider X = Rn .
The main subject of our interest in Chapter 9 is to derive some important rela-
tions between Pareto-optimal decisions and compromise decisions. The relevant
literature to the subject can be found also in Chapter 9.
maximize f̃(x; c̃)
subject to (2.4)
g̃i(x; ãi) R̃i b̃i, i ∈ M = {1, 2, ..., m},
where R̃i , i ∈ M, are fuzzy relations on F(R), the set of all fuzzy subsets of
maximize c̃1 x1 +̃ · · · +̃ c̃n xn
subject to
(2.5)
ãi1 x1 +̃· · ·+̃ãin xn R̃i b̃i , i ∈ M,
xj ≥ 0, j ∈ N = {1, 2, ..., n}.
interested in fuzzy systems and fuzzy logic. Frequently, the discussion about
the hot topics of theory and practice of FS is interesting, deep and stimulating.
Information about new conferences and seminars, journals and books are also of
practical use. The list is slightly moderated (only irrelevant mails are rejected)
and is two-way gatewayed to the aforementioned NAFIPS-L list and to the
comp.ai.fuzzy internet newsgroup. Messages should therefore be sent only to
one of the three media, although some mechanism for mail-loop avoidance and
duplicate-message avoidance is activated. In addition to the mailing list itself,
the list server gives access to some files, including archives and the ”Who is
Who in Fuzzy Logic” database. The name of the server is
[email protected]
and the name of the mailing list is
[email protected]
If you are not familiar with this type of mailing list server, the easiest way
to get started is to send the following message to
[email protected]: get fuzzy-mail info
You will receive a brief set of instructions by e-mail within a short time.
Once you have subscribed, you will begin receiving a copy of each message that
is sent by anyone to
[email protected]
and any message that you send to that address will be sent to all of the other
subscribers.
The journal is affiliated with the North American Fuzzy Information Process-
ing Society (NAFIPS).
ISSN 1076-9757
JAIR is an international electronic and printed journal covering all areas of artificial intelligence (AI), publishing refereed research articles, survey articles, and technical notes. Established in 1993 as one of the first electronic scientific journals, JAIR is indexed by INSPEC, Science Citation Index, and MathSciNet. JAIR reviews papers within approximately two months of submission and publishes accepted articles on the Internet immediately upon receiving the final versions. JAIR articles are published for free distribution on the Internet by the AI Access Foundation, and for purchase in bound volumes by Morgan Kaufmann Publishers, see:
http://www.cs.washington.edu/research/jair/home.html
Chapter 3
Evolutionary Computation
is fitness-proportionate selection.
Other implementations use a model in which certain randomly selected indi-
viduals in a subgroup compete and the fittest is selected. This is called tourna-
ment selection and is the form of selection we see in nature when stags rut to vie
for the privilege of mating with a herd of hinds. The two processes that most
contribute to evolution are crossover and fitness based selection/reproduction.
As it turns out, there are mathematical proofs that indicate that the process
of fitness proportionate reproduction is, in fact, near optimal in some senses.
Mutation also plays a role in this process, although how important its role is
continues to be a matter of debate (some refer to it as a background operator,
while others view it as playing the dominant role in the evolutionary process).
It cannot be stressed too strongly that the GA (as a simulation of a genetic
process) is not a random search for a solution to a problem. The genetic algo-
rithm uses stochastic processes, but the result is distinctly non-random. GAs
are used for a number of different application areas. An example of this would
be multidimensional optimization problems in which the character string of the
chromosome can be used to encode the values for the different parameters being
optimized. In practice, therefore, we can implement this genetic model of com-
putation by having arrays of bits or characters to represent the chromosomes.
Simple bit manipulation operations allow the implementation of crossover, mu-
tation and other operations.
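The bit-manipulation operations mentioned above can be sketched in a few lines (illustrative Python; the chromosome length and mutation rate are arbitrary):

```python
import random

def crossover(a, b, point):
    """One-point crossover: exchange the tails of two equal-length
    bit strings at the given cut point."""
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits, rate, rng=random):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ (rng.random() < rate) for bit in bits]

c1, c2 = crossover([0, 0, 0, 0], [1, 1, 1, 1], 2)
assert c1 == [0, 0, 1, 1] and c2 == [1, 1, 0, 0]
```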
Although a substantial amount of research has been performed on variable-length strings and other structures, the majority of work with GAs is focused on fixed-length character strings. Both the fixed length and the need to encode the representation of the solution being sought as a character string deserve emphasis, since these are the crucial aspects that distinguish GAs from GP, which has neither a fixed-length representation nor, typically, an explicit encoding of the problem.
When the GA is implemented it is usually done in a manner that involves the
following cycle: Evaluate the fitness of all of the individuals in the population.
Create a new population by performing operations such as crossover, fitness-
proportionate reproduction and mutation on the individuals whose fitness has
just been measured. Discard the old population and iterate using the new
population. One iteration of this loop is referred to as a generation. There
is no theoretical reason for this as an implementation model. Indeed, we do
not see this punctuated behavior in populations in nature as a whole, but it
is a convenient implementation model. The first generation (generation 0) of
this process operates on a population of randomly generated individuals. From
there on, the genetic operations, in concert with the fitness measure, operate to
improve the population.
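The generation cycle just described can be summarized as a minimal sketch (illustrative Python; fitness-proportionate selection is realized by weighted sampling, the `crossover` and `mutate` operators are passed in as parameters, and fitness values are assumed positive):

```python
import random

def next_generation(population, fitness, crossover, mutate, rng=random):
    """One GA generation: evaluate, select proportionally to fitness,
    recombine and mutate, then replace the old population."""
    scores = [fitness(ind) for ind in population]          # evaluate
    parents = rng.choices(population, weights=scores,
                          k=len(population))               # select
    new_pop = []
    for i in range(0, len(parents) - 1, 2):                # recombine
        a, b = crossover(parents[i], parents[i + 1])
        new_pop += [mutate(a), mutate(b)]
    return new_pop[:len(population)]                       # replace
```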
Algorithm GA is
start with an initial time
t := 0;
(or multiple such optima) in terms of those variables. For example, if one were
trying to find the shortest path in a Traveling Salesman Problem, each solution
would be a path. The length of the path could be expressed as a number, which
would serve as the solution’s fitness. The fitness landscape for this problem
could be characterized as a hypersurface proportional to the path lengths in a
space of possible paths. The goal would be to find the globally shortest path in
that space, or more practically, to find very short tours very quickly. The basic
EP method involves 3 steps (Repeat until a threshold for iteration is exceeded
or an adequate solution is obtained):
Step 1. Choose an initial population of trial solutions at random. The
number of solutions in a population is highly relevant to the speed of op-
timization, but no definite answers are available as to how many solutions are
appropriate (other than 1) and how many solutions are just wasteful.
Step 2. Each solution is replicated into a new population. Each of these offspring solutions is mutated according to a distribution of mutation types, ranging from minor to extreme, with a continuum of mutation types in between.
The severity of mutation is judged on the basis of the functional change imposed
on the parents.
Step 3. Each offspring solution is assessed by computing its fitness.
Typically, a stochastic tournament is held to determine n solutions to be
retained for the population of solutions, although this is occasionally performed
deterministically. There is no requirement that the population size be held constant, however, nor that only a single offspring be generated from each parent. It should be pointed out that EP typically does not use any crossover as a genetic operator.

EP and GAs
There are two important ways in which EP differs from GA. First, there
is no constraint on the representation. The typical GA approach involves en-
coding the problem solutions as a string of representative tokens, the genome.
In EP, the representation follows from the problem. A neural network can be
represented in the same manner as it is implemented, for example, because the
mutation operation does not demand a linear encoding. (In this case, for a
fixed topology, real-valued weights could be coded directly as their real values and mutation operates by perturbing a weight vector with a zero mean multivariate Gaussian perturbation. For variable topologies, the architecture is also
perturbed, often using Poisson distributed additions and deletions.)
Second, the mutation operation simply changes aspects of the solution according to a statistical distribution which weights minor variations in the behavior of the offspring as highly probable and substantial variations as increasingly unlikely. Further, the severity of mutations is often reduced as the global
optimum is approached. There is a certain tautology here: if the global opti-
mum is not already known, how can the spread of the mutation operation be
damped as the solutions approach it? Several techniques have been proposed
and implemented which address this difficulty, the most widely studied being
the ”Meta-Evolutionary” technique in which the variance of the mutation distri-
bution is subject to mutation by a fixed variance mutation operator and evolves
along with the solution.
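The meta-evolutionary scheme just mentioned can be sketched as follows (illustrative Python; the learning constant and the lower bound on the variance are arbitrary choices, not taken from a specific reference):

```python
import random

def meta_ep_mutate(solution, sigma, rng=random, tau=0.1):
    """Self-adaptive EP mutation: the strategy parameter sigma is
    itself perturbed, then used to perturb the solution vector."""
    new_sigma = max(1e-9, sigma * (1.0 + tau * rng.gauss(0.0, 1.0)))
    child = [x + rng.gauss(0.0, new_sigma) for x in solution]
    return child, new_sigma
```

Because the mutated sigma evolves along with the solution, the population itself damps the mutation severity as it converges, without needing to know the global optimum.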
Algorithm EP is
start with an initial time
t := 0;
initialize a usually random population of individuals
initpopulation P (t);
evaluate fitness of all initial individuals in population
evaluate P (t);
test for termination criterion (time, fitness, etc.)
while not done do
· perturb the whole population stochastically
mutate P ′(t);
· evaluate its new fitness
evaluate P ′(t);
· stochastically select the survivors from actual fitness
P := survive P, P ′(t);
od
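The loop above can be sketched in Python. The sphere fitness function, the parameter values, and the truncation variant of survivor selection are illustrative assumptions (EP's selection is usually a stochastic tournament); each individual carries its own mutation variance, in the spirit of the "Meta-Evolutionary" technique mentioned earlier.

```python
import random

def sphere(x):
    """Toy fitness: smaller is better (illustrative assumption)."""
    return sum(xi * xi for xi in x)

def mutate(ind):
    """Gaussian perturbation of both the solution and its step size."""
    x, sigma = ind
    new_sigma = max(1e-6, sigma * (1.0 + 0.2 * random.gauss(0, 1)))
    new_x = [xi + new_sigma * random.gauss(0, 1) for xi in x]
    return (new_x, new_sigma)

def survive(parents, offspring, size):
    """Truncation selection: a deterministic stand-in for EP's
    stochastic tournament selection."""
    pool = parents + offspring
    pool.sort(key=lambda ind: sphere(ind[0]))
    return pool[:size]

def ep(pop_size=20, dim=3, generations=200, seed=1):
    random.seed(seed)
    pop = [([random.uniform(-5, 5) for _ in range(dim)], 1.0)
           for _ in range(pop_size)]
    for _ in range(generations):        # termination: generation count
        offspring = [mutate(ind) for ind in pop]
        pop = survive(pop, offspring, pop_size)
    return min(sphere(ind[0]) for ind in pop)

best = ep()
```

Note that there is no crossover anywhere in the loop, in line with the remark above that EP typically uses mutation as its only genetic operator.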
3.3. EVOLUTION STRATEGIES (ES) 39
References:
Baeck, T. and Fogel, D., ”Handbook of Evolutionary Computation”. Insti-
tute of Physics Publ., Philadelphia, PA, 1997.
Kursawe, F. (1992) ”Evolution strategies for vector optimization”. National
Chiao Tung University, Taipei, 187-193.
(1) an environment;
(2) receptors that tell our system about the goings on;
(3) effectors, that let our system manipulate its environment; and
(4) the system itself, conveniently a ”black box” in this first approach, that
has (2) and (3) attached to it, and ”lives” in (1).
The most primitive ”black box” we can think of is a computer. It has inputs
(2), and outputs (3), and a message passing system in-between, that converts
(i.e., computes), certain input messages into output messages, according to a
set of rules, usually called the ”program” of that computer. From the theory of
computer science, we now borrow the simplest of all program structures, that is
something called ”production system” (PS). Although it merely consists of a set
of if-then rules, it still resembles a full-fledged computer. We now term a single
”if-then” rule a ”classifier”, and choose a representation that makes it easy to
manipulate these, for example by encoding them into binary strings. We then
term the set of classifiers, a ”classifier population”, and immediately know how
to breed new rules for our system: just use a GA to generate new rules/classifiers
3.5. GENETIC PROGRAMMING (GP) 41
from the current population. All that is left are the messages floating through
the black box. They should also be simple strings of zeroes and ones, and are to
be kept in a data structure, we call ”the message list”. With all this given, we
can imagine the goings on inside the black box as follows: The input interface
(2) generates messages, i.e., 0/1 strings, that are written on the message list.
Then these messages are matched against the condition-part of all classifiers, to
find out which actions are to be triggered. The message list is then emptied, and
the encoded actions, themselves just messages, are posted to the message list.
Then, the output interface (3) checks the message list for messages concerning
the effectors. And the cycle restarts. Note, that it is possible in this set-up
to have ”internal messages”, because the message list is not emptied after (3)
has checked; thus, the input interface messages are added to the initially empty
list. The general idea of the CS is to start from scratch, i.e., from a tabula rasa
(without any knowledge), using a randomly generated classifier population, and
let the system learn its program by induction; this reduces the input stream to
recurrent input patterns that must be repeated over and over again to enable
the animat to classify its current situation/context and react to the goings on
appropriately.
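The match-and-post cycle described above can be sketched as follows. The rule set, the message strings, and the use of '#' as a "don't care" symbol in conditions are illustrative assumptions; a full classifier system would also assign credit to rules and breed new ones with a GA.

```python
# Messages and rule conditions are fixed-length 0/1 strings; '#' in a
# condition matches either bit.

def matches(condition, message):
    """A condition matches a message bit-for-bit, '#' matching anything."""
    return all(c in ('#', m) for c, m in zip(condition, message))

def cycle(classifiers, input_messages):
    """One pass: match the posted messages against the condition-part of
    all classifiers, empty the list, and post the actions of the
    triggered rules as the new message list."""
    message_list = list(input_messages)       # (2) input interface posts
    triggered = [action for cond, action in classifiers
                 if any(matches(cond, m) for m in message_list)]
    return triggered                          # checked next by (3)

rules = [("1#0", "111"),   # a "classifier": if-then rule cond -> action
         ("00#", "010")]
out = cycle(rules, ["100"])
```

Here the input message "100" triggers only the first classifier, so the new message list contains its action.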
References:
Booker, L.B., Goldberg, D.E. and Holland, J.H., ”Classifier Systems and
Genetic Algorithms”, Artificial Intelligence, Vol.40, 1989, 235-282.
Braitenberg, V. , ”Vehicles: Experiments in Synthetic Psychology” Boston,
MA: MIT Press, 1984.
Browne, W.N.L., Holford, K.M., Moore, C.J. and Bullock, J., ”An Industrial
Learning Classifier System: The Importance of Pre-Processing Real Data and
Choice of Alphabet”, Artificial Intelligence, Vol. 13, No. 1, 2000.
Holland, J.H. (1986) ”Escaping Brittleness: The possibilities of general-
purpose learning algorithms applied to parallel rule-based systems”. In: R.S.
Michalski, J.G. Carbonell & T.M. Mitchell, Eds., Machine Learning: An Arti-
ficial Intelligence approach, Vol. II, Morgan Kaufman, Los Altos, CA,593-623.
Holland, J.H., et al. (1986) ”Induction: Processes of Inference, Learning,
and Discovery”. MIT Press, Cambridge, MA.
Holland, J.H. (1992) ”Adaptation in natural and artificial systems” Boston,
MA: MIT Press.
Holland, J.H. and Reitman, J.S. (1978) ”Cognitive Systems based on Adap-
tive Algorithms”. In: D.A. Waterman and F. Hayes-Roth, Eds., Pattern-directed
inference systems. Academic Press, New York.
Rich, E. (1983) ”Artificial Intelligence”. McGraw-Hill, New York.
hand, they are programs that, when executed, ”are” the candidate solutions to
the problem. These programs are expressed in genetic programming as parse
trees, rather than as lines of code. Because this is a very simple thing to do in
the programming language Lisp, many GP people tend to use Lisp. However,
this is simply an implementation detail. There are straightforward methods to
implement GP using a non-Lisp programming environment. The programs in
the population are composed of elements from the function set and the terminal
set, which are typically fixed sets of symbols selected to be appropriate to the
solution of problems in the domain of interest. In GP the crossover operation is
implemented by taking randomly selected subtrees in the individuals (selected
according to fitness) and exchanging them. It should be pointed out that GP
usually does not use any mutation as a genetic operator.
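The subtree-exchange crossover described above can be sketched as below. Nested Python lists stand in for Lisp S-expressions, and the crossover points are passed in explicitly, replacing the fitness-biased random choice of subtrees that a real GP system would make; all names are illustrative.

```python
import copy

def get_subtree(tree, path):
    """Follow a list of child indices down the parse tree."""
    for i in path:
        tree = tree[i]
    return tree

def set_subtree(tree, path, new):
    """Replace the subtree at `path` with `new`."""
    if not path:
        return new
    parent = get_subtree(tree, path[:-1])
    parent[path[-1]] = new
    return tree

def crossover(t1, t2, path1, path2):
    """Exchange the subtrees at the two given crossover points."""
    t1, t2 = copy.deepcopy(t1), copy.deepcopy(t2)
    s1 = copy.deepcopy(get_subtree(t1, path1))
    s2 = copy.deepcopy(get_subtree(t2, path2))
    return set_subtree(t1, path1, s2), set_subtree(t2, path2, s1)

# (+ x (* y 2)) and (- (+ a b) c), exchanging (* y 2) with (+ a b):
p1 = ["+", "x", ["*", "y", "2"]]
p2 = ["-", ["+", "a", "b"], "c"]
c1, c2 = crossover(p1, p2, [2], [1])
```

The two offspring are again well-formed parse trees, which is the point of operating on subtrees rather than on lines of code.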
Neural Networks
4.1 Introduction
There is no universally accepted definition of neural networks (NN); a com-
mon characterization says that an NN is a network of many simple processors
(”units”), each possibly having a small amount of local memory. The units
are connected by communication channels (”connections”) which usually carry
numeric (as opposed to symbolic) data, encoded by any of various means. The
units operate only on their local data and on the inputs they receive via the
connections. The restriction to local operations is often relaxed during training.
Some NNs are models of biological neural networks and some are not, but
historically, much of the inspiration for the field of NNs came from the desire to
produce artificial systems capable of sophisticated, perhaps ”intelligent”, com-
putations similar to those that the human brain routinely performs, and thereby
possibly to enhance our understanding of the human brain.
Most NNs have some sort of ”training” rule whereby the weights of con-
nections are adjusted on the basis of data. In other words, NNs ”learn” from
examples (as children learn to recognize dogs from examples of dogs) and exhibit
some capability for generalization beyond the training data.
NNs normally have great potential for parallelism, since the computations
of the components are largely independent of each other. Some people regard
massive parallelism and high connectivity to be defining characteristics of NNs,
but such requirements rule out various simple models, such as simple linear
regression (a minimal feedforward net with only two units plus bias), which are
usefully regarded as special cases of NNs.
Here are some definitions from the literature:
According to Haykin, S. (1994) ”Neural Networks: A Comprehensive Foun-
dation”. Macmillan, New York, p. 2:
”A neural network is a massively parallel distributed processor that has a
natural propensity for storing experiential knowledge and making it available
for use. It resembles the brain in two respects:
Feedforward NNs can be trained using a wide variety of efficient conventional
numerical methods in addition to algorithms invented by NN researchers.
· In a feedback or recurrent NN, there are cycles in the connections. In
some feedback NNs, each time an input is presented, the NN must iterate for a
potentially long time before it produces a response. Feedback NNs are usually
more difficult to train than feedforward NNs.
Some kinds of NNs (such as those with winner-take-all units) can be imple-
mented as either feedforward or feedback networks.
NNs also differ in the kinds of data they accept. Two major kinds of data
are categorical and quantitative.
· Categorical variables take only a finite (technically, countable) number of
possible values, and there are usually several or more cases falling into each
category. Categorical variables may have symbolic values (e.g., ”male” and ”fe-
male”, or ”red”, ”green” and ”blue”) that must be encoded into numbers before
being given to the network. Both supervised learning with categorical target
values and unsupervised learning with categorical outputs are called ”classifica-
tion.”
· Quantitative variables are numerical measurements of some attribute, such
as length in meters. The measurements must be made in such a way that at least
some arithmetic relations among the measurements reflect analogous relations
among the attributes of the objects that are measured.
Some variables can be treated as either categorical or quantitative, such as
number of children or any binary variable. Most regression algorithms can also
be used for supervised classification by encoding categorical target values as
0/1 binary variables and using those binary variables as target values for the
regression algorithm. The outputs of the network are posterior probabilities
when any of the most common training methods are used.
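The 0/1 encoding of categorical target values mentioned above can be sketched as follows; the category ordering chosen here is an arbitrary illustrative assumption.

```python
# Encode a symbolic categorical target as 0/1 indicator vectors
# ("one-of-c" coding), so that a regression algorithm can be used for
# supervised classification as described in the text.

def encode(values, categories):
    """Map each symbolic value to a 0/1 indicator vector."""
    return [[1 if v == c else 0 for c in categories] for v in values]

targets = encode(["red", "green", "blue", "green"],
                 categories=["red", "green", "blue"])
```

Each row can then serve directly as the target vector for a regression-style training method.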
1. Supervised
— Feedforward
∗ Linear
· Hebbian - Hebb (1949), Fausett (1994)
· Perceptron - Rosenblatt (1958), Minsky and Papert (1969/1988)
· Adaline - Widrow and Hoff (1960), Fausett (1994)
∗ MLP: Multilayer perceptron - Bishop (1995), Reed and Marks
(1999)
· Backprop - Rumelhart, Hinton, and Williams (1986)
· Cascade Correlation - Fahlman and Lebiere (1990), Fausett
(1994)
— Competitive
∗ Vector Quantization
· Grossberg - Grossberg (1976)
· Kohonen - Kohonen (1984)
· Conscience - Desieno (1988)
∗ Self-Organizing Map
· Kohonen - Kohonen (1995), Fausett (1994)
· GTM - Bishop, Svensén and Williams (1997)
· Local Linear - Mulier and Cherkassky (1995)
∗ Adaptive resonance theory
· ART 1 - Carpenter and Grossberg (1987a), Moore (1988),
Fausett (1994)
· ART 2 - Carpenter and Grossberg (1987b), Fausett (1994)
· ART 2-A - Carpenter, Grossberg and Rosen (1991a)
· ART 3 - Carpenter and Grossberg (1990)
· Fuzzy ART - Carpenter, Grossberg and Rosen (1991b)
∗ DCL: Differential competitive learning - Kosko (1992)
— Dimension Reduction - Diamantaras and Kung (1996)
∗ Hebbian - Hebb (1949), Fausett (1994)
∗ Oja - Oja (1989)
∗ Sanger - Sanger (1989)
∗ Differential Hebbian - Kosko (1992)
— Autoassociation
∗ Linear autoassociator - Anderson et al. (1977), Fausett (1994)
∗ BSB: Brain State in a Box - Anderson et al. (1977), Fausett
(1994)
∗ Hopfield - Hopfield (1982), Fausett (1994)
3. Nonlearning
References:
http://www.soft-computing.de.
Ackley, D.H., Hinton, G.F., and Sejnowski, T.J. (1985), ”A learning algo-
rithm for Boltzmann machines,” Cognitive Science, 9, 147-169.
Albus, J.S. (1975), ”New Approach to Manipulator Control: The Cerebellar
Model Articulation Controller (CMAC),” Transactions of the ASME Journal of
Dynamic Systems, Measurement, and Control, September 1975, 220-227.
Anderson, J.A., and Rosenfeld, E., Eds. (1988), ”Neurocomputing: Foun-
dations of Research”. The MIT Press, Cambridge, MA.
Anderson, J.A., Silverstein, J.W., Ritz, S.A., and Jones, R.S. (1977) ”Dis-
tinctive features, categorical perception, and probability learning: Some appli-
cations of a neural model”. Psychological Review, 84, 413-451.
Bishop, C.M. (1995), ”Neural Networks for Pattern Recognition”. Oxford
University Press, Oxford.
Bishop, C.M., Svensen, M. and Williams, C.K.I (1997), ”GTM: A principled
alternative to the self-organizing map”. In: Mozer, M.C., Jordan, M.I., and
Petsche, T., Eds., Advances in Neural Information Processing Systems 9, The
MIT Press, Cambridge, MA, 354-360.
Brown, M. and Harris, C. (1994), ”Neurofuzzy Adaptive Modelling and Con-
trol”. Prentice Hall, New York.
Carpenter, G.A., Grossberg, S. (1987a), ”A massively parallel architecture
for a self-organizing neural pattern recognition machine,” Computer Vision,
Graphics, and Image Processing, 37, 54-115.
Carpenter, G.A., Grossberg, S. (1987b), ”ART 2: Self-organization of stable
category recognition codes for analog input patterns,” Applied Optics, 26, 4919-
4930.
Carpenter, G.A., Grossberg, S. (1990), ”ART 3: Hierarchical search us-
ing chemical transmitters in self-organizing pattern recognition architectures.
Neural Networks, 3, 129-152.
Carpenter, G.A., Grossberg, S., Markuzon, N., Reynolds, J.H., and Rosen,
D.B. (1992), ”Fuzzy ARTMAP: A neural network architecture for incremental
supervised learning of analog multidimensional maps,” IEEE Transactions on
Neural Networks, 3, 698-713.
Carpenter, G.A., Grossberg, S., Reynolds, J.H. (1991), ”ARTMAP: Su-
pervised real-time learning and classification of nonstationary data by a self-
organizing neural network,” Neural Networks, 4, 565-588.
Carpenter, G.A., Grossberg, S., Rosen, D.B. (1991a), ”ART 2-A: An adap-
tive resonance algorithm for rapid category learning and recognition,” Neural
Networks, 4, 493-504.
Carpenter, G.A., Grossberg, S., Rosen, D.B. (1991b), ”Fuzzy ART: Fast
stable learning and categorization of analog patterns by an adaptive resonance
system,” Neural Networks, 4, 759-771.
Chen, S., Cowan, C.F.N., and Grant, P.M. (1991), ”Orthogonal least squares
learning for radial basis function networks,” IEEE Transactions on Neural Net-
works, 2, 302-309.
4.4. WELL-KNOWN KINDS OF NNS 51
Machine Learning
5.1 Introduction
The main goal of machine learning is to build computer systems that can adapt
and learn from experience. Different learning techniques have been devel-
oped for different performance tasks. The primary tasks that have been inves-
tigated are supervised learning for discrete decision making, supervised
learning for continuous prediction, reinforcement learning for sequential decision
making, and unsupervised learning. Here we consider a more general setting not
necessarily based on NNs, see the previous chapter.
The best understood task is one-shot decision making, see Willson (1999):
the computer is given a description of an object (event, situation, etc.) and it
must output a classification of that object. For example, an optical recognizer
must input a digitized image of a character and output the name of that char-
acter (”A” through ”Z”). A machine learning approach to constructing such a
system would begin by collecting training examples, each consisting of a digi-
tized image of a character and the correct name of the character. This would
be analyzed by a learning algorithm to produce an optical character recognizer
for classifying new images.
Machine learning algorithms search a space of candidate classifiers for one
that performs well on the training examples and is expected to generalize well
to new cases. Learning methods for classification problems include decision trees,
NNs, rule learning algorithms, clustering methods and Bayesian networks, see
the next chapter.
There exist four basic questions to answer when developing a new machine
learning system:
These three theories lead to different practical approaches. The first theory
encourages the development of large, flexible hypothesis spaces (such as deci-
sion trees and NNs) that can represent many different classifiers. The second
and third theories imply the development of representational systems that can
readily express prior knowledge, such as Bayesian and fuzzy networks and other
stochastic/fuzzy models.
system should group the objects into stars, planets, and galaxies and describe
each group by its electromagnetic spectrum, distance from earth, and so on.
Unsupervised learning can be understood in a much wider range of tasks
than cluster analysis, see the chapter devoted to fuzzy systems. One useful for-
mulation of unsupervised learning is density estimation. Define a probability
distribution P (X) to be the probability that object X will be observed. Then
the goal of unsupervised learning is to find this probability distribution on the
samples of objects. This may be typically accomplished by defining a family of
possible stochastic models and choosing the model that best fits the data.
Considering again the above example with astronomical objects, the proba-
bility distribution P (X) describing the whole collection of astronomical objects
could then be modeled as a mixture of normal distributions - one distribution
for each class of objects. The learning process determines the number of groups
and the mean and covariance matrix of each multivariate distribution. The
AUTOCLASS program discovered a new class of astronomical objects in just
this way, see Cheeseman et al. (1988). A similar method has been applied in
the HMM speech recognition model, see Rabiner (1989).
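Density estimation by fitting a mixture of normal distributions can be sketched with a small EM loop. This one-dimensional, two-component example with a fixed number of groups is a generic illustration of "choosing the stochastic model that best fits the data", not the AUTOCLASS procedure itself; the initialization and data are illustrative assumptions.

```python
import math
import random

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_mixture(data, iters=50):
    """Fit a two-component 1-D normal mixture by EM."""
    mu = [min(data), max(data)]          # crude illustrative initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
    return w, mu, var

random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(8.0, 1.0) for _ in range(200)])
w, mu, var = em_mixture(data)
```

On this well-separated sample the fitted means land close to the true group centers, which is the kind of structure discovery the text describes.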
Probabilistic Reasoning
6.1 Introduction
Probabilistic reasoning (PR) refers to the formation of probability judgements
and subjective beliefs about the likelihoods of outcomes and the frequencies of
events. The judgements that people make are often about things that are only
indirectly observable and only partly predictable. Whether it is the weather, a
game of sports, or a project at work, our willingness to
engage in an endeavor and the actions that we take depend on our estimated
likelihood of the relevant outcomes. How likely is our team to win? How
frequently have projects like this failed before? Like other areas of reasoning
and decision making, we distinguish normative, descriptive and prescriptive
approaches.
The normative approach to PR is constrained by the same mathematical
rules that govern the classical, set-theoretic concept of probability. In particular,
probability judgements are said to be coherent, if they satisfy Kolmogorov’s
axioms:
Whereas the first three axioms involve unconditional probabilities, the fourth
introduces conditional probabilities. When applied to hypotheses and data in
inferential contexts, simple arithmetic manipulation of rule 4 leads to the result
that the (posterior) probability of a hypothesis conditional on the data is equal
(1) Parameters learning - learning the numerical parameters (i.e. the condi-
tional probabilities) for a given network topology.
(2) Structure learning - identifying the network topology.
The subtasks are not mutually independent because the set of parameters
needed depends largely on the topology assumed, and conversely. The chief role
is played by structure learning. In PR a number of sophisticated methods and
techniques for discovering structures in empirical data have been developed, see
the literature; among them the tree-structuring method of Chow and Liu (1968)
is of significant importance. Dechter (1987) used a similar method to decompose
general n-ary relations into trees of binary relations. The polytree recovery
algorithm was developed by Rebane and Pearl (1987). A general probabilistic
account of causation was developed under the name of minimal causal model
by Pearl and Verma (1991).
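Subtask (1), parameter learning for a given topology, can be sketched for the smallest possible network, a single edge A → B: the conditional probabilities are estimated as relative frequencies from data. The two-node structure, the estimator, and the toy data are illustrative assumptions, not a method prescribed in the text.

```python
from collections import Counter

def learn_cpt(samples):
    """samples: list of (a, b) observations for the fixed topology A -> B.
    Returns relative-frequency estimates of P(A) and P(B | A)."""
    a_counts = Counter(a for a, _ in samples)
    ab_counts = Counter(samples)
    n = len(samples)
    p_a = {a: c / n for a, c in a_counts.items()}
    p_b_given_a = {(a, b): c / a_counts[a]
                   for (a, b), c in ab_counts.items()}
    return p_a, p_b_given_a

data = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1)]
p_a, p_b_given_a = learn_cpt(data)
```

Which conditional tables must be estimated at all is exactly what the structure-learning subtask (2) decides, which is why the two subtasks are not independent.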
References:
Arkes, H.R. and Hammond, K.R. (1986) ”Judgement and Decision Making”.
Cambridge Univ. Press, Cambridge.
Goldstein, W.M. and Hogarth, R.M. (1997) ”Research on Judgement and
Decision Making: Currents, Connections and Controversies”. Cambridge Univ.
Press, Cambridge.
Jaynes, E.T. (1998) ”Probability Theory: The Logic of Science” - electronic
book on: http://omega.albany.edu:8008/JaynesBook.html
Conclusion
The impact of soft computing has been felt increasingly strongly in recent
years. Soft computing is likely to play an especially important role in science
and engineering, but eventually its influence may extend much farther. Building
human-centered systems is an imperative task for scientists and engineers in the
new millennium.
In many ways, soft computing represents a significant paradigm shift in
the aims of computing - a shift which reflects the fact that the human mind,
unlike present day computers, possesses a remarkable ability to store and process
information which is pervasively imprecise, uncertain and lacking in categoricity.
In this overview, we have focused primarily on fuzzy methodologies and
fuzzy systems, as they bring basic ideas to other SC methodologies. The other
constituents of SC have also been surveyed here, but for details we refer to the
existing vast literature.
Part II
Fuzzy Optimization
Chapter 8
Fuzzy Sets
8.1 Introduction
A well known fact in the theory of sets is that properties of subsets of a given
set X and their mutual relations can be studied by means of their characteristic
functions, see e.g. [9] - [23] and [38]. While this may be advantageous in some
contexts, we should notice that the notion of a characteristic function is more
complex than the notion of a subset. Indeed, the characteristic function χA of
a subset A of X is defined by
χA (x) = { 1 if x ∈ A,
           0 if x ∉ A.
Since χA is a function we need not only the underlying set X and its subset A
but also one additional set, in this case the set {0, 1} or any other two-element
set. Moreover, we also need the notion of Cartesian product because functions
are specially structured binary relations, in this case special subsets of X×{0, 1}.
If we define fuzzy sets by means of their membership functions, that is, by
replacing the range {0, 1} of characteristic functions with a lattice, for example,
the naturally ordered unit interval [0, 1] of real numbers, then we should be aware
of the following fact. Such functions may be related to certain objects (built
from subsets of the underlying set) in a way analogous to how the characteristic
functions are related to subsets. This may explain why the fuzzy community
(rightly?) hesitates to accept the view that a fuzzy subset of a given set is
nothing else than its membership function. Then, a natural question arises.
Namely, what are those objects? Obviously, it can be expected that they are
more complex than just subsets because the class of functions mapping X into
a lattice can be much richer than the class of characteristic functions. In the
next section, we show that it is advantageous to define these objects as nested
families of subsets satisfying certain mild conditions.
Even if it is not the purpose of this chapter to deal with interpretations of the
concepts involved, it should be noted that fuzzy sets and membership functions
(i) A0 = X,
(ii) Aβ ⊂ Aα whenever 0 ≤ α < β ≤ 1, (8.1)
(iii) Aβ = ⋂{Aα | 0 ≤ α < β}.
for each x ∈ X.
The core of A, Core(A), is given by
For x ∈ X the function value µA (x) is called the membership degree of x in the
fuzzy set A. The class of all fuzzy subsets of X is denoted by F(X).
Proof. First, we prove that A = {Aα }α∈[0,1] , where Aα = U (µ, α) for all
α ∈ [0, 1], satisfies conditions (8.1). Indeed, conditions (i) and (ii) hold easily.
For condition (iii), we observe that by (ii) it follows that Aβ ⊂ ⋂{Aα | 0 ≤ α < β}.
To prove the opposite inclusion, let
x ∈ ⋂{Aα | 0 ≤ α < β}. (8.7)
Assume the contrary, that is, let x ∉ Aβ . Then µ(x) < β and there exists
α0 such that µ(x) < α0 < β. By (8.7) we have x ∈ Aα0 , thus µ(x) ≥ α0 , a
contradiction.
It remains to prove that µ = µA , where µA is a membership function of A.
For this purpose let x ∈ X and let us show that µ(x) = µA (x).
By definition (8.2) we have x ∈ Aα for all 0 ≤ α ≤ α0 . Hence,
x ∈ ⋂{Aα | 0 ≤ α < β}; however, applying (iii) in (8.1) we get
x ∈ Aβ . Consequently, [A]β ⊂ Aβ .
These results allow for introducing a natural one-to-one correspondence be-
tween fuzzy subsets of X and real functions mapping X to [0, 1]. Any fuzzy
subset A of X is given by its membership function µA and vice-versa, any func-
tion µ : X → [0, 1] uniquely determines a fuzzy subset A of X, with the property
that the membership function µA of A is µ.
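On a finite universe this one-to-one correspondence can be sketched numerically: a membership function determines its α-cuts, and the membership degree is recovered as the supremum of the levels whose cuts contain the point. The triangular membership function and the finite grid of α-levels are illustrative assumptions.

```python
# Universe and a triangular membership function with peak at x = 1
# (illustrative choices).
X = [0.0, 0.5, 1.0, 1.5, 2.0]
mu = {x: max(0.0, 1.0 - abs(x - 1.0)) for x in X}

def alpha_cut(mu, alpha):
    """Upper-level set U(mu, alpha) = {x | mu(x) >= alpha}, alpha > 0."""
    return {x for x, m in mu.items() if m >= alpha}

def recovered(mu, x, levels):
    """sup{alpha | x in A_alpha}, over a finite grid of alpha levels."""
    return max([a for a in levels if x in alpha_cut(mu, a)], default=0.0)

levels = [i / 100 for i in range(1, 101)]
ok = all(abs(recovered(mu, x, levels) - mu[x]) < 1e-9 for x in X)
```

Going from µ to the nested family of cuts and back returns the original membership degrees, mirroring the correspondence stated above.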
The notions of inclusion and equality extend to fuzzy subsets as follows. Let
A = {Aα }α∈[0,1] , B = {Bα }α∈[0,1] be fuzzy subsets of X. Then
Suppose that µA (x) ≤ µB (x) holds for all x ∈ X and let α ∈ [0, 1]. We have
to show that Aα ⊂ Bα . Indeed, for an arbitrary u ∈ Aα , we have
From here, sup{β|β ∈ [0, 1], u ∈ Bβ } ≥ α, therefore, for each β < α it follows
that u ∈ Bβ . Hence, by (iii) in Definition 3, we obtain u ∈ Bα .
Classical sets can be considered as special fuzzy sets where the families con-
tain the same elements. We obtain the following definition.
Let A′ = {A′α }α∈[0,1] , A′′ = {A′′α }α∈[0,1] be two families of subsets in R defined
as follows:
A′α = {x | x ∈ R, µ(x) > α},
A′′α = {x | x ∈ R, µ(x) ≥ α}.
A ∩T CN A = ∅, (8.13)
A ∪S CN A = X. (8.14)
If the t-norm T in the De Morgan triple (T, S, N ) does not have zero divisors,
e.g. T = min, then these properties never hold unless A is a crisp set. On the
other hand, for the De Morgan triple (TL , SL , N ) based on the Lukasiewicz
t-norm T = TL , properties (8.13) and (8.14) are satisfied.
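The contrast between the two De Morgan triples can be checked pointwise, since (8.13) and (8.14) hold exactly when T(a, N(a)) = 0 and S(a, N(a)) = 1 for every membership degree a. The sketch below uses exact rational arithmetic (`Fraction`) only to avoid floating-point rounding noise; the grid of degrees is illustrative.

```python
from fractions import Fraction

def t_min(a, b): return min(a, b)                 # T_M
def s_max(a, b): return max(a, b)                 # S_M
def t_luk(a, b): return max(Fraction(0), a + b - 1)   # Lukasiewicz T_L
def s_luk(a, b): return min(Fraction(1), a + b)       # Lukasiewicz S_L
def neg(a): return 1 - a                          # standard negation N

degrees = [Fraction(i, 10) for i in range(11)]

# (8.13)-(8.14) hold pointwise for the Lukasiewicz triple ...
luk_ok = all(t_luk(a, neg(a)) == 0 and s_luk(a, neg(a)) == 1
             for a in degrees)
# ... but fail for (min, max, N) at every properly fuzzy degree.
min_fails = any(t_min(a, neg(a)) > 0 or s_max(a, neg(a)) < 1
                for a in degrees if 0 < a < 1)
```

This is precisely the zero-divisor issue mentioned above: T_M has no zero divisors, so T_M(a, 1 − a) = 0 forces a ∈ {0, 1}.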
Given a t-norm T and fuzzy subsets A and B of X and Y , respectively,
the Cartesian product A ×T B is the fuzzy subset of X × Y with the following
membership function:
An interesting and natural question arises, whether the α-cuts of the inter-
section A ∩T B, union A ∪S B and Cartesian product A ×T B of A, B ∈ F(X),
coincide with the intersection, union and Cartesian product, respectively, of the
corresponding α-cuts [A]α and [B]α . We have the following result, see [38].
In particular, this result means that identities (8.16) hold for all α ∈ [0, 1]
and for all fuzzy sets A, B ∈ F(X) if and only if T = TM and S = SM .
f˜(x0 ) = y0 ,
and the membership function µf˜(x0 ) of the fuzzy set f˜(x0 ) is a characteristic
function of y0 , i.e.
µf˜(x0 ) = χy0 . (8.18)
Let y = y0 . Since y0 = f (x0 ) we obtain by (8.17) that µf˜(x0 ) (y) = χx0 (x0 ) = 1.
Moreover, by the definition of characteristic function we have χy0 (y0 ) = 1, thus
(8.19) is satisfied.
On the other hand, let y ≠ y0 . Again, by the definition of characteristic
function we have χy0 (y) = 0. As y ≠ f (x0 ) we obtain for all x ∈ X with
y = f (x) that x ≠ x0 . Clearly, χx0 (x) = 0 and by (8.17) it follows that
µf˜(x0 ) (y) = 0, which was required.
A more general form of Proposition 12 says that the image of a crisp set by
a fuzzy extension of a function is again crisp.
f˜(A) = f (A),
µf˜(A) (y) = max{0, sup{χA (t) | t ∈ X, f (t) = y}} = { 1 if y ∈ f (A),
                                                       0 otherwise.
The valued relations are sometimes called fuzzy relations, however, we re-
serve this name for valued relations defined on F(X) × F(Y ), which will be
defined later.
Any binary relation R, where R ⊂ X×Y , is embedded into the class of valued
relations by its characteristic function χR being understood as its membership
function µR . In this sense, any binary relation is valued.
Particularly, any function f : X → Y is considered as a binary relation, that
is, as a subset Rf of X × Y , where
Here, Rf may be identified with the valued relation by its characteristic function
µR (x, x) = 1;
(iv) separable if
µR (x, y) = 1 if and only if x = y;

µRs (x, y) = { µR (x, y) if µR (y, x) = 0,
               0 otherwise,

is a valued relation on R. If

ϕ(t) = { 1 if t ≤ 0,
         0 otherwise,
A′ ⊂ A′′ , B ′ ⊂ B ′′ ,
then
µR̃T (A′ , B ′ ) ≤ µR̃T (A′′ , B ′′ ). (8.28)
Observe that for all u ∈ X, v ∈ Y , µA′ (u) ≤ µA′′ (u) and µB ′ (v) ≤ µB ′′ (v).
Clearly, (8.28) follows by monotonicity of the t-norm T .
For any t-norm T , we obtain the following properties of a T -fuzzy extension
of the valued relation.
Observe that for all u ∈ X, v ∈ Y , µA′ (u) ≤ µA′′ (u) and µB ′ (v) ≤ µB ′′ (v).
Clearly, (8.33) follows by monotonicity of the t-norm T .
and by (8.22), (8.23), µRf (x, y) = χf (x) (y) = 1. It follows from (8.34) that
(iii) A fuzzy relation R̃T,S of F(X) × F(Y ) is defined for all fuzzy sets A ∈
F(X), B ∈ F(Y ) by
µR̃T,S (A, B) = sup{inf{T (µA (x) , S(1 − µB (y) , µR (x, y)))|y ∈ Y }|x ∈ X}.
(8.38)
(iv) A fuzzy relation R̃T,S of F(X) × F(Y ) is defined for all fuzzy sets A ∈
F(X), B ∈ F(Y ) by
µR̃T,S (A, B) = inf {sup{S(T (µA (x) , µR (x, y)), 1 − µB (y))|x ∈ X}|y ∈ Y } .
(8.39)
(v) A fuzzy relation R̃S,T of F(X) × F(Y ) is defined for all fuzzy sets A ∈
F(X), B ∈ F(Y ) by
µR̃S,T (A, B) = sup {inf{T (S(1 − µA (x) , µR (x, y)), µB (y))|x ∈ X}|y ∈ Y }
(8.40)
(vi) A fuzzy relation R̃S,T of F(X) × F(Y ) is defined for all fuzzy sets A ∈
F(X), B ∈ F(Y ) by
µR̃S,T (A, B) = inf {sup{S(1 − µA (x) , T (µB (y) , µR (x, y)))|y ∈ Y }|x ∈ X} .
(8.41)
Now, we shall study the above defined fuzzy extensions. First, we prove
that all fuzzy extensions of a valued relation defined in Definition 27 are fuzzy
extensions in the sense of Definition 19, see the first part of Proposition 22.
Then we present some monotonicity properties similar to that of the second
part of Proposition 22.
By (8.37) we obtain
µR̃S (x, y) = inf{S(S(1 − χx (u), 1 − χy (v)), µR (u, v)) | u ∈ X, v ∈ Y }
= S(S(1 − χx (x), 1 − χy (y)), µR (x, y)) = S(S(0, 0), µR (x, y)) = µR (x, y).
By (8.38) we obtain
µR̃T,S (x, y) = sup{inf{T (χx (u), S(1 − χy (v), µR (u, v))) | v ∈ Y } | u ∈ X}
= T (χx (x), S(1 − χy (y), µR (x, y))) = T (1, µR (x, y)) = µR (x, y).
By (8.39) we obtain
µR̃T,S (x, y) = inf{sup{S(T (χx (u), µR (u, v)), 1 − χy (v)) | u ∈ X} | v ∈ Y }
= S(T (χx (x), µR (x, y)), 1 − χy (y)) = S(µR (x, y), 0) = µR (x, y).
By (8.40) we obtain
µR̃S,T (x, y) = sup{inf{T (χy (v), S(1 − χx (u), µR (u, v))) | u ∈ X} | v ∈ Y }
= T (χy (y), S(1 − χx (x), µR (x, y))) = T (1, µR (x, y)) = µR (x, y).
By (8.41) we obtain
µR̃S,T (x, y) = inf{sup{S(1 − χx (u), T (χy (v), µR (u, v))) | v ∈ Y } | u ∈ X}
= S(1 − χx (x), T (χy (y), µR (x, y))) = S(0, µR (x, y)) = µR (x, y).
A′ ⊂ A′′ , B ′ ⊃ B ′′ , (8.47)
then
µR̃ (A′ , B ′ ) ≤ µR̃ (A′′ , B ′′ ). (8.48)
(iii) If R̃ ∈ {R̃S,T , R̃S,T } and
A′ ⊃ A′′ , B ′ ⊂ B ′′ , (8.49)
then
µR̃ (A′ , B ′ ) ≤ µR̃ (A′′ , B ′′ ). (8.50)
Proof. (i) Observe that by (8.44) for all u ∈ X, v ∈ Y , µA′ (u) ≤ µA′′ (u)
and µB ′ (v) ≤ µB ′′ (v). Clearly, (8.45) follows from (8.36) by monotonicity of the
t-norm T . Similarly, for all u ∈ X, v ∈ Y ,
if and only if there exists a ∈ A such that for every b ∈ B it holds µR (a, b) = 1;
(iv)
µR̃T,S (A, B) = 1
if and only if for every b ∈ B there exists a ∈ A such that µR (a, b) = 1;
(v)
µR̃S,T (A, B) = 1
if and only if there exists b ∈ B such that for every a ∈ A it holds µR (a, b) = 1;
(vi)
µR̃S,T (A, B) = 1
if and only if for every a ∈ A there exists b ∈ B such that µR (a, b) = 1.
µR̃S,T (A, B) = inf{sup{S(1 − χA (x), T (χB (y), µR (x, y))) | y ∈ Y } | x ∈ X}
= { 1 if ∀a ∈ A, ∃b ∈ B : µR (a, b) = 1,
    0 otherwise.
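For crisp arguments, the fuzzy extensions thus reduce to the quantifier readings listed above. A sketch with T = min and S = max on a small crisp valued relation (here "≤" on {1, 2, 3}, an illustrative choice) evaluates formulas (8.38) and (8.40) directly and confirms the ∃∀ readings of (iii) and (v).

```python
X = [1, 2, 3]
Y = [1, 2, 3]

def mu_R(x, y):
    """Crisp valued relation: the characteristic function of '<='."""
    return 1.0 if x <= y else 0.0

def chi(s):
    """Characteristic function of a crisp subset s."""
    return lambda u: 1.0 if u in s else 0.0

def ext_ts(A, B):
    """(8.38): sup_x inf_y T(chi_A(x), S(1 - chi_B(y), mu_R(x, y))),
    with T = min, S = max."""
    a, b = chi(A), chi(B)
    return max(min(min(a(x), max(1 - b(y), mu_R(x, y))) for y in Y)
               for x in X)

def ext_st(A, B):
    """(8.40): sup_y inf_x T(S(1 - chi_A(x), mu_R(x, y)), chi_B(y)),
    with T = min, S = max."""
    a, b = chi(A), chi(B)
    return max(min(min(b(y), max(1 - a(x), mu_R(x, y))) for x in X)
               for y in Y)

A, B = {1, 2}, {2, 3}
iii_holds = ext_ts(A, B)   # exists a in A with a <= b for all b in B
v_holds = ext_st(A, B)     # exists b in B with a <= b for all a in A
```

Both degrees come out as 1.0 here (a = 1 works for (iii), b = 3 works for (v)), while sets violating the quantified condition yield 0.0.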
Figure 8.1:
(i) 0 ∈ Core(A),
(8.54)
(ii) µA is quasiconcave on R.
Notice that the generator A is a special fuzzy interval that satisfies (i).
µA (x) = gA (x − aA ). (8.56)
The set of all B-fuzzy intervals will be denoted by FB (R). Any A ∈ FB (R) is
represented by a couple (aA , gA ), we write A = (aA , gA ).
An ordering relation ≤B is defined on FB (R) as follows. For A, B ∈ FB (R),
A = (aA , gA ), B = (aB , gB )
A ≤B B
if
(aA < aB ) or (aA = aB and gA ≤ gB ). (8.57)
µA (x) = T (µA1 (x1 ), µA2 (x2 ), ..., µAm (xm )). (8.58)
88 CHAPTER 8. FUZZY SETS
The fuzzy set A ∈ F(Rm ) given by the membership function (8.58) is called
the fuzzy vector of non-interactive fuzzy quantities, see [35]. Applying (8.17),
we obtain for all y ∈ R:
µf˜(A) (y) = { sup{T (µA1 (x1 ), ..., µAm (xm )) | x = (x1 , ..., xm ) ∈ Rm , f (x) = y} if f −1 (y) ≠ ∅,
              0 otherwise.
(8.59)
Let D = (d1 , d2 , . . . , dm ) be a nonsingular m × m matrix, where all
di ∈ Rm are column vectors, i = 1, 2, ..., m. Let a fuzzy set B ∈ F(Rm ) be given
by the membership function µB : Rm → [0, 1], for all x = (x1 , ..., xm ) ∈ Rm as
follows:
µB (x) = T (µA1 (hd1 , xi), µA2 (hd2 , xi), ..., µAm (hdm , xi)). (8.60)
The fuzzy set B ∈ F(Rm ) given by the membership function (8.60) is called
the fuzzy vector of interactive fuzzy quantities, or the oblique fuzzy vector, and
the matrix D is called the obliquity matrix, see [35].
Notice that if D is equal to the identity matrix E = (e1 , e2 , . . . , em ), ei =
(0, ..., 0, 1, 0, ..., 0), where 1 is only at the i-th position, then the corresponding
vector of interactive fuzzy quantities is a noninteractive one. Interactive fuzzy
numbers have been extensively studied e.g. in [32], [35], [69] and [70]. In this
study, we shall deal with them again in Chapter 11.
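Definition (8.60) can be illustrated numerically. The sketch below assumes T = min and invented triangular fuzzy quantities; it evaluates the membership of an oblique fuzzy vector for a given obliquity matrix D (rows playing the role of the vectors di ), and with D equal to the identity it reduces to the noninteractive case (8.58).

```python
def triangular(c, w):
    """Symmetric triangular fuzzy quantity centred at c with spread w."""
    return lambda t: max(0.0, 1.0 - abs(t - c) / w)

def oblique_membership(mus, D, x, T=min):
    """mu_B(x) = T(mu_A1(<d1,x>), ..., mu_Am(<dm,x>)); rows of D are the d_i."""
    dots = [sum(d_k * x_k for d_k, x_k in zip(d_i, x)) for d_i in D]
    return T(mu(t) for mu, t in zip(mus, dots))

mus = [triangular(1.0, 1.0), triangular(2.0, 1.0)]
identity = [[1.0, 0.0], [0.0, 1.0]]    # D = E: noninteractive case
oblique  = [[1.0, 1.0], [1.0, -1.0]]   # a nonsingular obliquity matrix

print(oblique_membership(mus, identity, (1.0, 2.0)))    # 1.0
print(oblique_membership(mus, oblique, (1.5, -0.5)))    # 1.0
print(oblique_membership(mus, identity, (1.5, 2.0)))    # 0.5
```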
Now, we shall continue our investigation of the non-interactive fuzzy quan-
tities.
Example 40 Let m = 2, f : R2 → R, be defined for all (x1 , x2 ) ∈ R2 as
follows: f (x1 , x2 ) = x1 ∗ x2 , where ∗ is a binary operation on R, e.g. one of the
four arithmetic operations (+, −, ·, /). Let A1 , A2 ∈ F(R) be fuzzy quantities
given by membership functions µAi : R → [0, 1], i = 1, 2. Then, for a given
t-norm T , the fuzzy extension f˜ : F(R) × F(R) → F(R) defined by (8.59) as
µA1 ~ T A2 (y) = max {0, sup{T (µA1 (x1 ), µA2 (x2 ))| x1 ∗ x2 = y}} (8.61)
corresponds to the operation ~T on F(R). It is obvious that ~T is an extension
of ∗, since for any two crisp subsets A1 , A2 ∈ P(R) we obtain
A1 ~T A2 = A1 ∗ A2 , (8.62)
and, as a special case thereof, for any two crisp numbers a, b ∈ R,
a ~T b = a ∗ b.
If A1 , A2 ∈ F(R) are fuzzy quantities, we obtain (8.62) in terms of α-cuts as
follows
[A1 ~T A2 ]α = [A1 ]α ∗ [A2 ]α , (8.63)
where α ∈ (0, 1], or in terms of mapping f , (8.63) can be written as
[f˜(A1 , A2 )]α = f ([A1 ]α , [A2 ]α ). (8.64)
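Relation (8.63) can be checked numerically for T = min and ∗ = +. For triangular fuzzy quantities the sup-min extended sum is again triangular (centres and spreads add), so its α-cut must coincide with the interval sum of the α-cuts; the snippet below is an illustrative sketch of exactly this identity.

```python
def alpha_cut_triangular(c, w, alpha):
    """[A]_alpha = [c - (1-alpha)w, c + (1-alpha)w] for triangular (c, w)."""
    return (c - (1 - alpha) * w, c + (1 - alpha) * w)

def extended_sum_cut(c1, w1, c2, w2, alpha):
    """Alpha-cut of the sup-min extended sum: centres and spreads add."""
    return alpha_cut_triangular(c1 + c2, w1 + w2, alpha)

a1 = alpha_cut_triangular(1, 1, 0.5)     # (0.5, 1.5)
a2 = alpha_cut_triangular(2, 2, 0.5)     # (1.0, 3.0)
lhs = extended_sum_cut(1, 1, 2, 2, 0.5)  # alpha-cut of the extended sum
rhs = (a1[0] + a2[0], a1[1] + a2[1])     # [A1]_alpha + [A2]_alpha
print(lhs, rhs)                          # (1.5, 4.5) (1.5, 4.5)
```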
8.9. FUZZY EXTENSIONS OF REAL FUNCTIONS 89
Figure 8.2:
A = A1 ×T A2 ×T · · · ×T Am . (8.66)
If the equality at the top of this diagram is satisfied, we say that the diagram
commutes. To begin, we derive several results concerning convexity properties
of the individual elements in the diagram. The first result is a simple
generalization of a similar result from [79] to more than two membership
functions. Notice that the membership functions in question are not assumed
to be normalized.
µA (x) = Gm (µA1 (x1 ), µA2 (x2 ), ..., µAm (xm )). (8.67)
Then µA is upper-starshaped on Rm .
Core(µA1 ) × · · · × Core(µAm ) ≠ ∅.
Figure 8.3:
the contours of some α-cuts of the fuzzy set A given by (8.70), are depicted.
This picture demonstrates that µA is not quasiconcave on X, as some of its
α-cuts are not convex. This fact can be verified by looking closely at the curves
µA (x1 , x2 ) = α for α ∈ (0, 1]. All α-cuts are, however, starshaped sets.
The next two results concern the α-cuts of the fuzzy quantities.
Proof. (i) Let x = (x1 , ..., xm ) ∈ [A]α , i.e. µA (x) = T (µA1 (x1 ), ..., µAm (xm ))
≥ α. Since min{µA1 (x1 ), ..., µAm (xm )} ≥ T (µA1 (x1 ), ..., µAm (xm )), we obtain
µAi (xi ) ≥ α for all i = 1, 2, ..., m. Consequently, for all i = 1, 2, ..., m we have
xi ∈ [Ai ]α and also x = (x1 , ..., xm ) ∈ [A1 ]α × [A2 ]α × · · · × [Am ]α .
(ii) Let T ≠ min. Then there exists x = (x1 , ..., xm ) ∈ Rm such that
min{µA1 (x1 ), ..., µAm (xm )} > T (µA1 (x1 ), ..., µAm (xm )).
Putting β = min{µA1 (x1 ), ..., µAm (xm )}, we have β > 0 and xi ∈ [Ai ]β for
all i = 1, 2, ..., m, i.e. x = (x1 , ..., xm ) ∈ [A1 ]β × [A2 ]β × · · · × [Am ]β . However,
µA (x) = T (µA1 (x1 ), ..., µAm (xm )) < β and therefore x ∉ [A]β , a
contradiction with (8.72). Thus, T = min.
On the other hand, if T = min, then
Figure 8.4:
Without loss of generality we assume that α > 0. Take a number β, 0 < β < α
such that α − β/k > 0 for all k = 1, 2, ..., and denote
Uk = {x ∈ Rm | µA (x) ≥ α − β/k}, k = 1, 2, .... (8.77)
By the compactness of [A]δ for all δ ∈ (0, 1] we know that all Uk are compact
and Uk+1 ⊂ Uk for all k = 1, 2, .... Putting Vk = Uk ∩Xy we obtain by (8.76) and
(8.77) that Vk is nonempty, compact and Vk+1 ⊂ Vk for all k = 1, 2, .... From
the well known property of compact spaces it follows that ⋂_{k=1}^∞ Vk is nonempty.
Hence, for any xy ∈ ⋂_{k=1}^∞ Vk it holds: f (xy ) = y and µA (xy ) ≥ α.
Clearly, the fuzzy set A is compact, if the α-cuts [A]α are compact for all
α ∈ (0, 1], or the α-cuts [A]α are bounded for all α ∈ (0, 1] and the membership
function µA is upper semicontinuous on Rm .
Returning back to the problem formulated at the end of the last section,
namely, the problem of the existence of sufficient conditions under which the
membership function of f˜(A) is quasiconcave on R with µA defined by (8.58),
we have the following result.
Proof. It is sufficient to show that µA (x) = T (µA1 (x1 ), µA2 (x2 ), ..., µAm (xm ))
is upper-quasiconnected and [A]α are compact for all α ∈ (0, 1]. Having this,
the result follows from Proposition 48 and Theorem 47.
By Proposition 42, µA is upper starshaped on Rm , hence µA is upper con-
nected.
As T is continuous, [A]α is closed for all α ∈ (0, 1].
It is also supposed that [Ai ]α are compact for all α ∈ (0, 1], i = 1, 2, ..., m,
therefore the same holds for a Cartesian product [A1 ]α × [A2 ]α × · · · × [Am ]α .
Applying (8.71), we obtain that all [A]α are bounded, hence compact.
8.10. HIGHER DIMENSIONAL FUZZY QUANTITIES 95
µ≥̃ (A, B) = sup{T (µ≥ (x, y), T (µ (x), ν (y))) | x, y ∈ Rm }
Then the valued relation Rd defined by the membership function µRd for all
x, y ∈ R as
µRd (x, y) = ϕd (x − y) (8.86)
is a ”generalized” inequality relation ≥ on R. By (8.27), the membership func-
tion µR̃d of the T -fuzzy extension R̃d of relation Rd is as follows
Further on, we shall deal with properties of fuzzy extensions of binary re-
lations on Rm . We start with investigation of m-dimensional intervals. Recall
that the notation A R B means µR (A, B) = 1. We consider the usual com-
ponentwise binary relations in Rm , namely ”less or equal” and ”equal”, i.e.
R ∈ {≤, =}.
≤̃ ∈ {≤̃_T , ≤̃_S , ≤̃^{T,S} , ≤̃_{T,S} , ≤̃^{S,T} , ≤̃_{S,T} },
=̃ ∈ {=̃_T , =̃_S , =̃^{T,S} , =̃_{T,S} , =̃^{S,T} , =̃_{S,T} }.
Then
(i)
(1) A ≤̃_T B if and only if a̲ ≤ b̄,
(2) A =̃_T B if and only if a̲ ≤ b̄ and ā ≥ b̲;
(ii)
(1) A ≤̃_S B if and only if ā ≤ b̲,
(2) A =̃_S B if and only if a̲ = ā = b̲ = b̄;
(iii)
(1) A ≤̃^{T,S} B if and only if a̲ ≤ b̲,
(2) A =̃^{T,S} B if and only if a̲ ≤ b̲ = b̄ ≤ ā;
(iv)
(1) A ≤̃_{T,S} B if and only if a̲ ≤ b̲,
(2) A =̃_{T,S} B if and only if a̲ ≤ b̲ ≤ b̄ ≤ ā;
(v)
(1) A ≤̃^{S,T} B if and only if ā ≤ b̄,
(2) A =̃^{S,T} B if and only if b̲ ≤ a̲ = ā ≤ b̄;
(vi)
(1) A ≤̃_{S,T} B if and only if ā ≤ b̄,
(2) A =̃_{S,T} B if and only if b̲ ≤ a̲ ≤ ā ≤ b̄.
8.11. FUZZY EXTENSIONS OF VALUED RELATIONS 99
(ii) 1. Let A ≤̃_S B. Then by (ii) in Proposition 30, for every a ∈ A and every
b ∈ B we have a ≤ b, thus ā ≤ b̲.
Conversely, let a̲ ≤ ā ≤ b̲ ≤ b̄. Then by Proposition 30, (ii), we easily obtain
A ≤̃_S B.
2. Let A =̃_S B. Then by (ii) in Proposition 30, for every a ∈ A and every
b ∈ B we have a = b, thus a̲ = ā = b̲ = b̄.
Conversely, let a̲ = ā = b̲ = b̄. Then by Proposition 30, (ii), we obtain
A =̃_S B.
(v) 1. Let A ≤̃^{S,T} B. Then by (v) in Proposition 30, there exists b ∈ B such
that for every a ∈ A we have a ≤ b, thus ā ≤ b ≤ b̄.
Conversely, let ā ≤ b̄. Then by Proposition 30 (v), we take b = b̄ and since
a ≤ ā for every a ∈ A, we easily obtain A ≤̃^{S,T} B.
2. Let A =̃^{S,T} B. Then by (v) in Proposition 30, there exists b ∈ B such that
for every a ∈ A we have a = b, thus b̲ ≤ a̲ = ā ≤ b̄.
Conversely, let b̲ ≤ a̲ = ā ≤ b̄. Then by Proposition 30 (v), we take b = a̲
and obtain A =̃^{S,T} B.
(vi) 1. Let A ≤̃_{S,T} B. Then by (vi) in Proposition 30, for every a ∈ A there
exists b ∈ B such that a ≤ b, thus a ≤ b ≤ b̄.
Likewise,
A ≤̃^{S,T} B if and only if A ≤̃_{S,T} B, (8.88)
as is clear from (v) and (vi) in the same proposition.
The following proposition shows that (8.87) and (8.88) can be presented in
a stronger form.
Proof. (i) Let α ∈ (0, 1), µR̃T (A, B) > α. Then by (8.27) we obtain
hence
µA (u0 ) ≥ α and µB (v 0 ) ≥ α, (8.91)
If we replace the strict inequality > in (i) by ≥, then the conclusion of (i) is
clearly no longer true. To prove the result with ≥ instead of >, we shall make
an assumption similar to condition (C) from the preceding section.
Proposition 62 Let X, Y be sets, let A ∈ F(X), B ∈ F(Y ) be fuzzy sets given
by the membership functions µA : X → [0, 1], µB : Y → [0, 1], respectively. Let
T be a t-norm and R be a binary relation on X × Y , R̃T be a T -fuzzy extension
of R, let α ∈ (0, 1].
Suppose that there exist u∗ ∈ X, v ∗ ∈ Y such that u∗ Rv ∗ and
T (µA (u∗ ), µB (v ∗ )) = sup{T (µA (u), µB (v)) |u ∈ X, v ∈ Y, uRv}. (8.93)
Proof. We show that ϕ(u, v) = T (µA (u), µB (v)) attains its maximum on
the set Z = {(u, v) ∈ R2m |uRv}. Since R is a closed binary relation, Z is a closed
set. Further, since for all β ∈ (0, 1], the upper level sets U (ϕ, β) are compact,
it follows that either U (ϕ, β) ∩ Z is empty for all β ∈ (0, 1], or there exists
β 0 ∈ (0, 1] such that U (ϕ, β 0 ) ∩ Z is nonempty.
In the former case, (8.93) holds for any u∗ , v ∗ ∈ Z with
T (µA (u∗ ), µB (v ∗ )) = 0.
In the latter case, there exists (u∗ , v ∗ ) ∈ R2m , such that ϕ attains its maximum
on U (ϕ, β 0 ) ∩ Z in (u∗ , v ∗ ), which is a global maximizer of ϕ on Z.
(iii) µ≤̃_{T,S} (A, B) ≥ α iff µ≤̃^{T,S} (A, B) ≥ α iff sup[A]_{1−α} ≤ sup[B]_α ,
(iv) µ≤̃_{S,T} (A, B) ≥ α iff µ≤̃^{S,T} (A, B) ≥ α iff inf[A]_α ≤ inf[B]_{1−α} .
From the practical point of view, the last theorem is important for calculating
the membership function of both fuzzy feasible solutions and fuzzy optimal
solutions of fuzzy mathematical programming problem in Chapter 10. Some
related results to this problem can be found also in [21].
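As a small illustration of conditions (iii) and (iv), for triangular fuzzy numbers the degree of the fuzzy inequality can be found as the largest α satisfying sup[A]_{1−α} ≤ sup[B]_α. The grid search below is only a sketch of this computation, not an algorithm from the text:

```python
def sup_cut(c, w, alpha):
    """sup[A]_alpha for a symmetric triangular number (centre c, spread w)."""
    return c + (1 - alpha) * w

def degree_leq_TS(cA, wA, cB, wB, steps=10000):
    """Largest alpha with sup[A]_{1-alpha} <= sup[B]_alpha (grid search)."""
    best = 0.0
    for k in range(steps + 1):
        alpha = k / steps
        if sup_cut(cA, wA, 1 - alpha) <= sup_cut(cB, wB, alpha):
            best = alpha
    return best

print(degree_leq_TS(1, 1, 2, 1))     # 1.0: A lies entirely to the left of B
print(degree_leq_TS(2, 1, 1.5, 1))   # 0.25: only partial overlap
```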
Chapter 9
Fuzzy Multi-Criteria
Decision Making
9.1 Introduction
When dealing with practical decision problems, we often have to take into con-
sideration uncertainty in the problem data. It may arise from errors in mea-
suring physical quantities, from errors caused by representing some data in a
computer, from the fact that some data are approximate solutions of other
problems or estimations by human experts, etc. In some of these situations,
the fuzzy set approach may be applicable. In the context of multicriteria de-
cision making, functions mapping the set of feasible alternatives into the unit
interval [0, 1] of real numbers representing normalized utility functions can be
interpreted as membership functions of fuzzy subsets of the underlying set of
alternatives. However, functions with the range in [0, 1] arise in more contexts.
In this chapter, we consider a decision problem in X, i.e. the problem to find
a ”best” decision in the set of feasible alternatives X with respect to several (i.e.
more than one) criteria functions, see [81], [90], [80], [76], [77], [78]. Within the
framework of such a decision situation, we deal with the existence and mutual
relationships of three kinds of ”optimal decisions”: Weak Pareto-Maximizers,
Pareto-Maximizers and Strong Pareto-Maximizers - particular alternatives sat-
isfying some natural and rational conditions. Here, they are commonly called
Pareto-optimal decisions.
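On a finite set of alternatives the notions can be compared directly. The following sketch, with invented criteria values, computes the weak Pareto and Pareto maximizers and shows that the Pareto maximizers form a subset of the weak Pareto maximizers:

```python
X = ["x1", "x2", "x3", "x4"]
mu = {                      # mu[x] = (mu_1(x), mu_2(x)), invented values
    "x1": (1.0, 0.2),
    "x2": (0.6, 0.6),
    "x3": (0.2, 1.0),
    "x4": (0.6, 0.5),       # dominated by x2, but not strictly
}

def dominates_strictly(u, v):
    """u_i > v_i for every i: v is then not even weakly Pareto-maximal."""
    return all(a > b for a, b in zip(u, v))

def dominates(u, v):
    """u >= v componentwise and u != v: v is then not Pareto-maximal."""
    return all(a >= b for a, b in zip(u, v)) and u != v

weak_pareto = [x for x in X
               if not any(dominates_strictly(mu[y], mu[x]) for y in X)]
pareto = [x for x in X if not any(dominates(mu[y], mu[x]) for y in X)]
print(weak_pareto)   # ['x1', 'x2', 'x3', 'x4']
print(pareto)        # ['x1', 'x2', 'x3']
```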
We study also the compromise decisions x∗ ∈ X maximizing some aggrega-
tion of the criteria µi , i ∈ I = {1, 2, ..., m}. The criteria µi considered here will
be functions defined on the set X of feasible alternatives with the values in the
unit interval [0, 1], i.e. µi : X → [0, 1], i ∈ I. Such functions can be interpreted
as membership functions of fuzzy subsets of X and will be called here fuzzy
criteria. Later on, in Chapters 10 and 11, each constraint or objective function
of the fuzzy mathematical programming problem will be naturally assigned to
a unique fuzzy criterion. From this point of view, this chapter should follow
103
Figure 9.1:
Later on, we shall take advantage of the above stated properties in case of
Rn for n > 1.
XSP ⊂ XP ⊂ XW P . (9.4)
Conv{Dj |j ∈ J} = {z ∈ Rn | z = Σ_{j∈J} λj xj , where xj ∈ Dj , λj ≥ 0 and Σ_{j∈J} λj = 1}.
Conv{Core(µi )|i ∈ I} ⊂ XW P .
9.3. PARETO-OPTIMAL DECISIONS 107
Figure 9.2:
Proof. Let x0i = inf Core(µi ), x00i = sup Core(µi ) and set x0 = min{x0i |i ∈
I}, x00 = max{x00i |i ∈ I}, then Cl(Conv{Core(µi )|i ∈ I}) = [x0 , x00 ].
Let x ∈ Conv{Core(µi )|i ∈ I} and suppose that x ∉ XW P . Then there exists
y with µi (y) > µi (x), for all i ∈ I.
Assume that y < x, then there exists k ∈ I and y 0 ∈ Core(µk ) with y <
x ≤ y 0 such that by Proposition 68 we obtain µk (y) ≤ µk (x), a contradiction.
Otherwise, if x < y, then again by Proposition 68 we have µj (x) ≥ µj (y), again
a contradiction.
On the other hand, µ1 (0.7, 0.4) = 0.7175, µ2 (0.7, 0.4) = 0.55. We obtain
µ1 (0.5, 0.5) < µ1 (0.7, 0.4) and µ2 (0.5, 0.5) < µ2 (0.7, 0.4).
As (0.5; 0.5) ∈ Conv{x̄1 , x̄2 } and (0.7; 0.4) ∉ Conv{x̄1 , x̄2 }, it follows that
Figure 9.3:
Proof. By Proposition 69, each Core(µi ) contains exactly one element, i.e.
xi = Core(µi ). Setting x0 = min{xi |i ∈ I}, x00 = max{xi |i ∈ I}, then
XW P = XP = XSP = Conv{Core(µi )| i ∈ I}. (9.6)
9.4. COMPROMISE DECISIONS 109
Proof. By Proposition 69, each Core(µi ) contains exactly one element, i.e.
xi = Core(µi ). Setting x0 = min{xi |i ∈ I}, x00 = max{xi |i ∈ I}, we obtain
Conv{Core(µi )|i ∈ I}) = [x0 , x00 ].
First, we prove that [x0 , x00 ] ⊂ XSP .
Let x ∈ [x0 , x00 ] and suppose to the contrary that x ∉ XSP . Then there exists y
with y ≠ x and µi (y) ≥ µi (x), for all i ∈ I.
Further, suppose that y < x, then by strict quasiconcavity of µk , for k satisfying
xk = x00 , and by Proposition 69, we get µk (y) < µk (x), a contradiction.
On the other hand, if x < y, then by strict quasiconcavity of µj , for j
satisfying xj = x0 , we get µj (x) > µj (y), again a contradiction. Hence
Conv{Core(µi )|i ∈ I}) ⊂ XSP .
Second, we prove that XW P ⊂ Conv{Core(µi )|i ∈ I}).
Suppose that y ∉ [x0 , x00 ]. Let y < x0 , then by (9.5) µi (x0 ) > 0 for all i ∈ I.
Applying strict monotonicity of µi , we get µi (y) < µi (x0 ), for all i ∈ I, hence
y ∉ XW P . Assuming y > x00 we obtain by analogy the same result.
Combining the first and the second result, we obtain the chain of inclusions
XW P ⊂ Conv{Core(µi )|i ∈ I}) ⊂ XSP .
However, by (9.4) we have XSP ⊂ XP ⊂ XW P , consequently, we obtain the
required equalities (9.6).
Notice that inclusion (9.5) is satisfied if all Supp(µi ), i ∈ I, are identical.
µ1 (x) = { 1 for x < 0; 0 for x ≥ 0 },
µ2 (x) = { e^x for x < 0; 1 for x ≥ 0 }.
It is not difficult to show that ϕ(x1 , x2 ) < 1 on R2 and ϕ(0, x2 ) = 1 − 1/(x2 + 1)2 for x2 > 1.
Since lim ϕ(0, x2 ) = 1 for x2 → +∞, ϕ does not attain its maximum on X, i.e.
XM (µ1 , µ2 ) = ∅.
XM ⊂ XSP .
Proof. Let ϕ(x) = min{µi (x)| i ∈ I}, where x ∈ Rn , and suppose that
there exists x0 ∈ XM , x∗ ≠ x0 . Then by (9.11)
Gm (µ1 (x∗ ), ..., µm (x∗ )) = max{Gm (µ1 (x), ..., µm (x))| x ∈ X}. (9.12)
The set of all max-G decisions on X is denoted by XG (µ1 , ..., µm ), or, shortly,
XG .
9.5. GENERALIZED COMPROMISE DECISIONS 113
Proof. Let x0 ∈ Rn and ε > 0. It is sufficient to prove that there exists δ > 0
such that ψ(x) ≤ ψ(x0 ) + ε for every x ∈ B(x0 , δ) = {x ∈ Rn | ‖x − x0 ‖ < δ}.
Let y0i = µi (x0 ) for i ∈ I, put y0 = (y01 , ..., y0m ). Since G is USC on [0, 1]m ,
there exists η > 0 such that y ∈ B(y0 , η) = {y ∈ [0, 1]m | ‖y − y0 ‖ < η} implies
By USC of all µi , i ∈ I, there exists δ > 0 such that µi (x) ≤ µi (x0 ) + η/2 for
every x ∈ B(x0 , δ) = {x ∈ Rn | ‖x − x0 ‖ < δ}. By the monotonicity property of G,
we obtain
G(µ1 (x), ..., µm (x)) ≤ G(z1 , ..., zm ), (9.15)
where zi = min{1, µi (x0 ) + η/2}, and also (z1 , ..., zm ) ∈ B(y0 , η). Moreover,
by (9.13) we have ψ(x0 ) = G(y01 , ..., y0m ). Combining inequalities (9.14) and
(9.15), we obtain the required result ψ(x) ≤ ψ(x0 ) + ε.
The next two propositions give some sufficient conditions for the existence of
max-G decisions.
Proof. Let α ∈ (0, 1], ψ(x) = G(µ1 (x), ..., µm (x)). We prove that [ψ]α =
{x ∈ Rn |G(µ1 (x), ..., µm (x)) ≥ α} is a compact subset of Rn . First, we prove
that [ψ]α is bounded. Assume the contrary; then there exist xk ∈ [ψ]α , k =
1, 2, ..., with lim ‖xk ‖ = +∞ for k → +∞. Take an arbitrary β, with 0 < β < α.
Since all [µi ]β are bounded, then there exists k0 such that for all i ∈ I and k > k0
we obtain xk ∈ / [µi ]β , i.e. µi (xk ) < β. By monotonicity and idempotency of A
it follows that for k > k0 we have
are covered and total shipping cost is minimal. The mathematical model of the
above location problem can be formulated as follows:
minimize Σ_{i=1}^{q} Σ_{j=1}^{p} fi (dij (xj , yj ), zij )
subject to Σ_{j=1}^{p} zij ≥ bi , i ∈ I, zij ≥ 0, (xj , yj ) ∈ R2 , i ∈ I, j ∈ J. (9.27)
where αi > 0 are constant coefficients, and distance function dij is defined as
dij (xj , yj ) = (|ui − xj |^β + |vi − yj |^β )^{1/β} ,
where i ∈ I, j ∈ J and β > 0. Problem (9.27), even in the above simple form, is
difficult to solve numerically because of its possible nonlinearities which bring
numerous local optima.
In order to transform problem (9.27) into a more tractable form, we consider
the objective function as a utility or satisfaction function µ, such that µ : R2p ×
Rq → [0, 1]. In such a case
µ ((x1 ; y1 ), . . . , (xp ; yp ) , b1 , . . . , bq ) = 0
denotes the total dissatisfaction (or, zero utility) with location (xj , yj ) ∈ R2 ,
j ∈ J, and supplied amounts bi , i ∈ I. On the other hand,
µ((x1 ; y1 ) , . . . , (xp ; yp ) , b1 , . . . , bq ) = 1
denotes the maximal total satisfaction (or, maximal utility) with location (xj ; yj )
∈ R2 , j ∈ J and supplied amounts.
Depending on the required amount bi , an individual consumer i ∈ I may
express his satisfaction with the supplier j ∈ J located at (xj , yj ) by membership
grade µij (xj , yj , bi ) , where membership function µij : R2 × R1 → [0, 1] satisfies
condition µij (ui , vi , bi ) = 1, i.e. the maximal satisfaction is equal to 1, provided
that the facility j ∈ J is located at the same place as the consumer i ∈ I.
The individual satisfaction expressed by the function µi : R2p × R1 →
[0, 1] of the consumer i ∈ I with the amount bi , and with suppliers located at
(x1 ; y1 ) , (x2 ; y2 ) , . . . , (xp ; yp ) , is defined by the satisfaction of the location of
the facility with maximal value, i.e.:
such suppliers, then they share amount bi equally. The above formula can be
generalized by using an aggregation operator A for all i ∈ I as follows
Moving the location point of a supplier along the path from a location (x, y)
toward the location site of consumer i at ci = (ui ; vi ), it is natural to assume
that satisfaction grade of consumer i is increasing, or, at least non-decreasing,
provided that bi is constant. This assumption results in (Φ,Ψ)-concavity (e.g.
T -quasiconcavity) requirement of membership functions µij on R2 .
On the other hand, with the given location (xj , yj ) of supplier j, satisfaction
grade µij (xj , yj , b) is nonincreasing in variable b.
We obtain ci = (ui ; vi ) ∈ ⋂_{j∈J} Core(µij ). If all µij are T -quasiconcave on
R2 , then by Proposition 96, individual satisfaction µi of consumer i, is upper
starshaped on R2p .
The total satisfaction with (or, utility of) locations (xj ; yj ) ∈ R2 , j ∈ J,
and required amounts bi , i ∈ I, is defined as an aggregation of the individual
satisfaction grades, e.g. a minimal satisfaction, or, more generally, the value of
an aggregation operator G, e.g. a t-norm T .
µ ((x1 ; y1 ), . . . , (xp ; yp ), b1 , . . . , bq ) = G(A(µ11 (x1 , y1 , b1 ), . . . , µ1p (xp , yp , b1 )),
. . . , A(µq1 (x1 , y1 , bq ), . . . , µqp (xp , yp , bq ))). (9.29)
Example 106
f (x, y, z1 , z2 , z3 ) = z1 (x² + (1 − y)²)^{1/2} + z2 (x² + (2 − y)²)^{1/2} + z3 ((x − 3)² + y²)^{1/2} ,
µM (x, y, 10, 20, 30) = min{µ1 (x, y, 10), µ2 (x, y, 20), µ3 (x, y, 30)},
subject to (x, y) ∈ R2 .
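The max-min problem of Example 106 can be reproduced numerically. The sketch below assumes the satisfaction form µij (x, y, bi ) = 1/(1 + bi · dist), as in (9.32), with consumers at (0, 1), (0, 2), (3, 0) and demands 10, 20, 30, and evaluates the min-aggregated satisfaction at the three candidate locations compared in the table below:

```python
from math import hypot

# (location, demand) of each consumer, read off the cost function above
consumers = [((0.0, 1.0), 10), ((0.0, 2.0), 20), ((3.0, 0.0), 30)]

def mu_min(x, y):
    """Min-aggregated satisfaction of all consumers with a supplier at (x, y)."""
    return min(1.0 / (1.0 + b * hypot(x - u, y - v))
               for (u, v), b in consumers)

for x, y in [(3.0, 0.0), (1.8, 0.8), (0.0, 1.0)]:
    print((x, y), round(mu_min(x, y), 3))
```

The aggregated value is largest at (1.8, 0.8), in line with the max-min optimum reported in the text.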
The optimal location of the supplier has been found as (xM , y M ) = (1.8, 0.8)
with the optimal membership function value
Figure 9.4:
Comparing with the optimal solution of the classical problem, here the mem-
bership function value is
µP (x, y, 10, 20, 30) = µ1 (x, y, 10) · µ2 (x, y, 20) · µ3 (x, y, 30),
subject to (x, y) ∈ R2 .
The optimal location of the supplier has been found as (xP , y P ) = (0, 1)
with the optimal membership function value
Figure 9.5:
Moreover, we get
and
µP (xM , y M , 10, 20, 30) = 0.00002.
The cost of this solution is
¡ ¢
f xP , y P , 10, 20, 30 = 114.87.
x y f µM µP
1. 3.0 0.0 103.7 0.01 0.00041
2. 1.8 0.8 104.6 0.02 0.00002
3. 0.0 1.0 114.9 0.01 0.00050
In the above table we can see differences between the results of the individ-
ual approaches. In Row 1., the results of solving the classical problem (9.27)
are displayed. In Row 2., the solution of maximum satisfaction problem with
the aggregation operators being minimum t-norm and maximum t-conorm is
presented. In Row 3., the results of the same problem with the aggregation
operators being product t-norm and t-conorm are given. Depending both on
input information and decision-making requirements, various locations of the
supplier may be optimal.
Example 107
Consider the same problem as in Example 106 with p = 2, i.e. with two
suppliers to be optimally located.
Again, we begin with the classical problem (9.27) with ai = 1, i = 1, 2, 3,
β = 2. Then the problem to solve is to minimize the cost function:
f (x1 , y1 , x2 , y2 , z11 , z12 , z21 , z22 , z31 , z32 ) (9.31)
= z11 (x1² + (1 − y1 )²)^{1/2} + z12 (x2² + (1 − y2 )²)^{1/2} + z21 (x1² + (2 − y1 )²)^{1/2}
+ z22 (x2² + (2 − y2 )²)^{1/2} + z31 ((x1 − 3)² + y1²)^{1/2} + z32 ((x2 − 3)² + y2²)^{1/2} ,
subject to z11 + z12 ≥ 10, z21 + z22 ≥ 20, z31 + z32 ≥ 30,
zij ≥ 0, (xj , yj ) ∈ R2 , i = 1, 2, 3, j = 1, 2.
The optimal solution, i.e. the locations of the facilities and shipment amounts
have been found as
(x1^C , y1^C , x2^C , y2^C , z11^C , z12^C , z21^C , z22^C , z31^C , z32^C ) = (0, 2, 3, 0, 10, 0, 20, 0, 0, 30),
whereas the minimal cost is
f (0, 2, 3, 0, 10, 0, 20, 0, 0, 30) = 10.0.
Applying our approach, the individual satisfaction of consumer i ∈ I located
at (ui , vi ) with location of the facility at (xj , yj ), j ∈ J = {1, 2}, and with
demand bi , is given by the membership (satisfaction) function:
µij (xj , yj , bi ) = 1 / (1 + bi ((xj − ui )² + (yj − vi )²)^{1/2} ). (9.32)
We investigate the problem again with two different aggregation operators, par-
ticularly, t-norms and t-conorms.
First, the aggregation operator SM = max is used for aggregating the sup-
pliers, TM = min is used for combining consumers. According to (9.30) we solve
the optimization problem:
maximize
subject to (x1 , y1 , x2 , y2 ) ∈ R4 .
(x1^M , y1^M , x2^M , y2^M ) = (3, 0, 0, 5/3)
µP ((x1 , y1 ), (x2 , y2 ), b1 , b2 , b3 )
= Π_{i∈I} (µi1 (x1 , y1 , bi ) + µi2 (x2 , y2 , bi ) − µi1 (x1 , y1 , bi ) · µi2 (x2 , y2 , bi )),
subject to (x1 , y1 , x2 , y2 ) ∈ R4 .
(x1^P , y1^P , x2^P , y2^P ) = (3, 0, 0, 2)
being the same as the optimal solution of classical problem (9.31). The results
are summarized in the following table.
x1 y1 x2 y2 f µM µP
1. 3.0 0.0 0.0 2.0 10.0 0.090 0.119
2. 3.0 0.0 0.0 1.67 13.3 0.130 0.022
3. 3.0 0.0 0.0 2.0 10.0 0.090 0.119
In the above table we can see the differences between the results of the
individual problems. Solving the classical problem we obtain the same results as
in the maximum satisfaction problem with the aggregation operators being the
product t-norm TP and t-conorm SP . Notice again, that membership functions
(9.32) are TP -quasiconcave on R4 and by Proposition 98 µP is TP -quasiconcave
on R4 .
9.9. APPLICATION IN ENGINEERING DESIGN 125
required that each aggregation operator A is monotone and satisfies the bound-
ary conditions A(0, 0, ..., 0) = 0, and A(1, 1, ..., 1) = 1. In engineering design
it is often required that aggregating operators are continuous and idempotent.
The last condition restricts the class of feasible aggregation operators to the
operators between the t-norm TM and t-conorm SM .
For some preferences of the system of engineering design, for example, where
the failure of one component results in the failure of the whole system, the non-
compensating aggregation operators such as minimum TM should be applied.
On the other hand, a better performance of some component can compensate
some worse performance of another one. In other words, a lower membership
value of some design variable can be compensated by a higher value of some
other one. Such preferences can be aggregated by compensative aggregation
operators, e.g. averaging operators, see [79]. Notice that, by definition, the
t-norm TM can also be considered a compensative operator, though in a
different sense.
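The contrast between non-compensating and compensative aggregation can be seen on two invented attribute memberships:

```python
def agg_min(values):     # non-compensating: the worst attribute decides
    return min(values)

def agg_avg(values):     # compensative: good attributes offset bad ones
    return sum(values) / len(values)

balanced   = [0.6, 0.6]
unbalanced = [0.2, 1.0]   # one poor component, one excellent component

print(agg_min(balanced), agg_min(unbalanced))   # 0.6 0.2 -- no compensation
print(agg_avg(balanced), agg_avg(unbalanced))   # 0.6 0.6 -- compensation
```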
The problem of engineering design is to find an optimal configuration of the
design parameters, i.e.:
Maximize
AO (AD (µ1 (x1 ), ..., µm (xm )), AP (µm+1 (xm+1 ), ..., µn (xn ))). (9.34)
Figure 9.6:
Figure 9.7:
µ3 (u) = { 1 for 0 ≤ u ≤ 8; 1/(1 + 0.2(u − 8)) for u > 8 },
µ4 (v) = { 1 for 0 ≤ v ≤ 4; 1/(1 + 0.1(v − 4)) for v > 4 }.
x ≤ 400/y + 120,
u = 0.03x − 0.3y + 175/y,
v = 0.05x − y/10 − 2.5.
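Assuming the piecewise forms of µ3 and µ4 above (thresholds 8 and 4, slopes 0.2 and 0.1), their evaluation and min-aggregation can be sketched as:

```python
def mu3(u):
    """Satisfaction with design attribute u: full up to 8, then decaying."""
    return 1.0 if 0 <= u <= 8 else 1.0 / (1.0 + 0.2 * (u - 8))

def mu4(v):
    """Satisfaction with design attribute v: full up to 4, then decaying."""
    return 1.0 if 0 <= v <= 4 else 1.0 / (1.0 + 0.1 * (v - 4))

for u, v in [(6.0, 3.0), (13.0, 9.0)]:
    print(u, v, mu3(u), mu4(v), min(mu3(u), mu4(v)))
```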
Figure 9.8:
AO = TM ,
Chapter 10
Fuzzy Mathematical Programming
10.1 Introduction
Mathematical programming problems (MP) form a subclass of decision-making
problems where preferences between alternatives are described by means of
objective function(s) defined on the set of alternatives in such a way that greater
values of the function(s) correspond to more preferable alternatives (if ”higher
value” is ”better”). The values of the objective function describe effects from
choices of the alternatives. In economic problems, for example, these values may
reflect profits obtained when using various means of production. The set of fea-
sible alternatives in MP problems is described implicitly by means of constraints
- equations or inequalities, or both - representing relevant relationships between
alternatives. In any case the results of the analysis using given formulation of
the MP problem depend largely upon how adequately various factors of the real
system are reflected in the description of the objective function(s) and of the
constraints.
Descriptions of the objective function and of the constraints in a MP problem
usually include some parameters. For example, in problems of resources alloca-
tion such parameters may represent economic quantities like costs of various
types of production, labor cost requirements, shipment costs, etc. The nature
of these parameters depends, of course, on the level of detail accepted for the
model representation, and their values are considered as data that should be
supplied exogenously for the analysis.
Clearly, the values of such parameters depend on multiple factors not in-
cluded in the formulation of the problem. Trying to make the model more
representative, we often include the corresponding complex relations in it,
making the model more cumbersome and analytically intractable.
Moreover, it can happen that such attempts to increase ”the precision” of the
model will be of no practical value due to the impossibility of measuring the
129
parameters accurately. On the other hand, the model with some fixed values of
its parameters may be too crude, since these values are often chosen in a quite
arbitrary way.
An intermediate approach is based on introducing into the model a more
adequate representation of the experts' understanding of the nature of the
parameters, in the form of fuzzy sets of their possible values. The resultant
model, although not taking into account many details of the real system in
question, could be a more adequate representation of reality than one with
more or less arbitrarily fixed values of the parameters. In this way we obtain
a new type of MP problems containing fuzzy parameters. Treating such prob-
lems requires the application of fuzzy-set-theoretic tools in a logically consistent
manner. Such treatment forms an essence of fuzzy mathematical programming
(FMP) investigated in this chapter.
FMP and related problems have been extensively analyzed and many pa-
pers have been published displaying a variety of formulations and approaches.
Most approaches to FMP problems are based on the straightforward use of the
intersection of fuzzy sets representing goals and constraints and on the sub-
sequent maximization of the resultant membership function. This approach
was already proposed by Bellman and Zadeh in their paper [9] published
in the early seventies. Later on, many papers have been devoted to the
problem of mathematical programming with fuzzy parameters, known under
different names, mostly as fuzzy mathematical programming, but sometimes as
possibilistic programming, flexible programming, vague programming, inexact
programming etc. For an extensive bibliography see the overview paper [33].
Here we present a general approach based on a systematic extension of the
traditional formulation of the MP problem. This approach is based on the
numerous former works of the author of this study, see [60] - [81] and also on
the works of many other authors, e.g. [38, 23, 19, 2, 43, 41, 69, 70], and others.
FMP is one of several possible approaches to treating uncertainty in MP
problems. Much space has been devoted to similarities and dissimilarities of
FMP and stochastic programming (SP), see e.g. [89] and [90]. In Chapters
10 and 11 we demonstrate that FMP (in particular, fuzzy linear programming
- FLP) essentially differs from SP; FMP has its own structure and tools for
investigating a broad class of optimization problems.
FMP is also different from parametric programming (PP). PP problems are
in essence deterministic optimization problems with a special variable called a
parameter. The main interest in PP is focused on finding relationships between
the values of parameters and optimal solutions of MP problem.
In FMP some methods and approaches motivated by SP and PP are utilized,
see e.g. [85, 13]. In this book, however, algorithms and solution procedures for
MP problems are not studied; they can be found elsewhere, see e.g. the overview
paper [83].
10.2. MODELLING REALITY BY FMP 131
maximize f (x)
subject to x ∈ X, (10.1)
Observe that
X ∗ = ⋂_{y∈X} {x ∈ X | f (x) ≥ f (y)}. (10.3)
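On a finite set X, identity (10.3) can be verified directly; the data below are invented for illustration:

```python
X = [0, 1, 2, 3, 4]
f = lambda x: -(x - 3) ** 2          # maximized at x = 3

X_star = set(X)
for y in X:
    X_star &= {x for x in X if f(x) >= f(y)}   # intersection over all y

print(X_star)   # {3}
```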
maximize f (x; c)
subject to
gi (x; ai ) = bi , i ∈ M1 , (10.4)
gi (x; ai ) ≤ bi , i ∈ M2 ,
gi (x; ai ) ≥ bi , i ∈ M3 .
In (10.4) f , gi are real functions, C and Pi are sets of parameters, f : Rn ×C →
R, gi : Rn × Pi → R, i ∈ M = M1 ∪ M2 ∪ M3 , c ∈ C, ai ∈ Pi , bi ∈ R,
x ∈ Rn .
The maximization in (10.4) is understood in the usual sense of finding a
maximizer of the objective function on the set of feasible solutions (10.4).
The sets of parameters C and Pi are some subsets of finite dimensional
vector spaces, depending on the specification of the problem. Particularly, C =
Pi = Rn for all i ∈ M, but here we consider also a more general case. The
right-hand sides bi ∈ R, for i ∈ M in (10.4) are also considered as parameters.
By parameters c ∈ C, ai ∈ Pi , bi ∈ R, taken from the parameter sets, a
flexible structure of MP problem (10.4) is modelled. The subject of parametric
programming is to investigate relations and dependences between parameters
and optimal solutions of MP problem (10.4). This problem is, however, not
studied here.
A linear programming problem (LP) is a particular case of the above formu-
lated MP problem (10.4), where c ∈ C ⊂ Rn , ai ∈ Pi ⊂ Rn and bi ∈ R for all
i ∈ M, that is
f (x; c) = cT x = c1 x1 + · · · + cn xn , (10.5)
gi (x; ai ) = aTi x = ai1 x1 + · · · + ain xn , i ∈ M. (10.6)
As a special case of this problem, we have the standard linear programming
problem:
maximize c1 x1 + · · · + cn xn
subject to
(10.7)
ai1 x1 + · · · + ain xn = bi , i ∈ M,
xj ≥ 0, j ∈ N .
Problem (10.7) in a more general setting will be investigated in Chapter 11.
10.4. FORMULATION OF FMP PROBLEM 133
maximize f˜(x; c̃)
subject to (10.8)
g̃i (x; ãi ) R̃i b̃i , i ∈ M = {1, 2, ..., m}.
Notice that the feasible solution of a FMP problem is a fuzzy set. For x ∈ Rn
the interpretation of µX̃ (x) depends on the interpretation of uncertain parame-
ters of the FMP problem. For instance, within the framework of possibility
theory, the membership functions of the parameters are explained as possibility
degrees and µX̃ (x) denotes the possibility that x ∈ Rn belongs to the set of fea-
sible solutions of the corresponding FMP problem. Some other interpretations
were also applied, see e.g. [14], [30] or [83].
On the other hand, an α-feasible solution is a vector belonging to an α-cut of
the feasible solution X̃, and the same holds for the max-feasible solution, which
is a special α-feasible solution with α = Hgt(X̃). If a decision maker specifies
the grade of feasibility α ∈ [0, 1] (the grade of possibility, satisfaction etc.), then
a vector x ∈ Rn with µX̃ (x) ≥ α is an α-feasible solution of the corresponding
FMP problem.
Considering the i-th constraint of problem (10.8), for given x, ãi and b̃i , the
value µR̃i (g̃i (x; ãi ), b̃i ) from interval [0, 1] can be interpreted as the degree of
satisfaction of this constraint.
For i ∈ M, we use the following notation: by X̃i we denote a fuzzy set given
by the membership function µX̃i , which is defined for all x ∈ Rn as
The fuzzy set X̃i can be interpreted as the i-th fuzzy constraint. All fuzzy
constraints are aggregated into the feasible solution (10.12) by the aggregation
operator A; usually, A is a t-norm, e.g. A = min. The aggregation operators have
been thoroughly investigated in [79].
not by R̃iT as it was originally introduced in Definition 27. The other five fuzzy
extensions of the usual binary relations on R defined earlier in Definition 27 shall
not be studied in this chapter. However, we shall use them again in Chapter 11.
Investigating the concept of the feasible solution (10.12) of the FMP problem
(10.8), we first show that in the case of crisp parameters ai and bi the feasible
solution is also crisp.
Observe first that by the extension principle (8.17) we obtain
for all i ∈ M.
Next, for all i ∈ M we obtain by (8.25)
Notice that X = {x ∈ Rn | gi (x; ai ) Ri bi , i ∈ M}, where we write gi (x; ai ) Ri bi
instead of (10.14). Applying the t-norm A to (10.14), we obtain
µX̃ (x) = A(µR̃1 (g1 (x; a1 ), b1 ), ..., µR̃m (gm (x; am ), bm )) = χX (x),
ã′i , b̃′i , and X̃″ is a feasible solution of the FMP problem with the collection of
parameters ã″i , b̃″i such that for all i ∈ M
ã′i ⊂ ã″i and b̃′i ⊂ b̃″i , (10.15)
then
X̃′ ⊂ X̃″. (10.16)
Proof. In order to prove X̃′ ⊂ X̃″, we first show that
g̃i (x; ã′i ) ⊂ g̃i (x; ã″i )
for all i ∈ M.
Indeed, by (8.17), for each u ∈ R and i ∈ M,
µg̃i (x;ã′i ) (u) = max{0, sup{µã′i (a) | a ∈ Pi , gi (x; a) = u}}
≤ max{0, sup{µã″i (a) | a ∈ Pi , gi (x; a) = u}} = µg̃i (x;ã″i ) (u).
By definition we obtain
µR̃i (g̃i (x; ãi ), b̃i ) = sup{min{µRi (u, v), min{µg̃i (x;ãi ) (u), µb̃i (v)}} | u, v ∈ R}.
As g̃i (x; ãi ) and b̃i are compact fuzzy sets and Ri is a closed valued relation,
there exist u∗ , v ∗ ∈ R such that
µR̃i (g̃i (x; ãi ), b̃i ) = min{µRi (u∗ , v∗ ), min{µg̃i (x;ãi ) (u∗ ), µb̃i (v∗ )}} ≥ α.
Hence,
µRi (u∗ , v ∗ ) ≥ α, µg̃i (x;ãi ) (u∗ ) ≥ α, µb̃i (v ∗ ) ≥ α. (10.23)
On the other hand, by definition,
µR̃i ([g̃i (x; ãi )]α , [b̃i ]α ) = sup{min{µRi (u, v), min{χ[g̃i (x;ãi )]α (u), χ[b̃i ]α (v)}} | u, v ∈ R}
= sup{µRi (u, v) | u ∈ [g̃i (x; ãi )]α , v ∈ [b̃i ]α }
for all i ∈ M. Using the arguments of the first part of the proof, the last
inequality is equivalent to x ∈ [X̃i ]α for all i ∈ M, or, in other words,
x ∈ ⋂_{i=1}^m [X̃i ]α .
Theorem 113 has some important computational aspects. Assume that in
a FMP problem, we can specify a possibility (satisfaction) level α ∈ (0, 1] and
determine the α-cuts [ãi ]α and [b̃i ]α of the fuzzy parameters. Then the formulae
(10.19) and (10.20) will allow us to compute all α-feasible solutions of FMP
problem without performing special computations of functions g̃i .
If the valued relations Ri are binary relations similar to those in Theorem
110, then the statement of Theorem 113 can be strengthened as follows.
In order to apply Corollary 64, for each x ∈ Rn and each α ∈ (0, 1], [g̃i (x; ãi )]α
should be compact. Indeed, this is true by Proposition 54. Then by Corol-
lary 64, we obtain the equivalence between (10.28) and (10.29). Moreover, by
Proposition 48 and Theorem 52, it follows that
Substituting (10.30) into (10.29), we obtain µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = 1, a desir-
able result.
For the rest of the proof we can repeat the arguments of the corresponding
part of the proof of Theorem 113.
Now, we shall see how the concept of generalized concavity introduced in
[79] is utilized in the FMP problem. Particularly, we show that all α-feasible
solutions (10.19), (10.20) are solutions of the system of inequalities on condi-
tion that the membership functions of fuzzy parameters ãi and b̃i are upper-
quasiconnected for all i ∈ M.
For given α ∈ (0, 1], i ∈ M, we introduce the following notation
Theorem 115 Let all assumptions of Theorem 114 be satisfied. Moreover, let
the membership functions of fuzzy parameters ãi and b̃i be upper-quasiconnected
for all i ∈ M.
Then for all α ∈ (0, 1], we have x ∈ [X̃]α if and only if
Theorem 116 Let all assumptions of Theorem 114 be satisfied. Moreover, let
gi be quasiconvex on Rn × Rk for i ∈ M1 ∪ M2 , and gi be quasiconcave on
Rn × Rk for i ∈ M1 ∪ M3 .
Then for all i ∈ M, X̃i are convex and therefore the feasible solution X̃ of FMP
problem (10.8) is also convex.
By (10.20) we get
µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = sup{min{χgi (x;[ãi ]α ) (u), χ[b̃i ]α (v)} | u ≤ v}
= { 1 if Gi (x; α) ≤ bi (α),
    0 otherwise. (10.40)
By (10.20) we get
µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = sup{min{χgi (x;[ãi ]α ) (u), χ[b̃i ]α (v)} | u ≥ v}
= { 1 if Gi (x; α) ≥ bi (α),
    0 otherwise.
Gi (xj ; α) ≥ bi (α), j = 1, 2,
or
min{Gi (x1 ; α), Gi (x2 ; α)} ≥ bi (α). (10.51)
Then by (10.43) and (10.51) we immediately obtain
Table 1.

Constraint      Parameters          Relations          t-norm /     Results                                Theorem
functions gi    ãi , b̃i             R/R̃i               agr. op.
----------------------------------------------------------------------------------------------------------------
–               crisp               fuzzy extension    T / T        X̃ = [X̃i ]α = X̄                        T110
                                    of =, ≤, ≥
–               ã′i ⊂ ã″i ,         valued relat. /    T / A        X̃′ ⊂ X̃″                               T111
                b̃′i ⊂ b̃″i          T-f. extension
continuous      compact             valued rel. /      min / min    [X̃]α = ⋂_{i=1}^m [X̃i ]α ,             T113
                                    T-f. extension                  [X̃i ]α = {x ∈ Rn |
                                                                    µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) ≥ α}
continuous      compact             =, ≤, ≥ /          min / min    [X̃]α = ⋂_{i=1}^m [X̃i ]α ,             T114
                                    T-f. extension                  [X̃i ]α = {x ∈ Rn |
                                                                    µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = 1}
continuous      compact,            =, ≤, ≥ /          min / min    Gi (x; α) ≤ bi (α), i ∈ M1 ∪ M2 ,      T115
                UQCN                T-f. extension                  Gi (x; α) ≥ bi (α), i ∈ M1 ∪ M3
continuous,     compact,            =, ≤, ≥ /          min / min    [X̃i ]α convex                          T116
QCV/QCA         UQCN                T-f. extension
maximize f̃(x; c̃)
subject to (10.55)
g̃i (x; ãi ) R̃i b̃i , i ∈ M = {1, 2, ..., m}.
where µX̃ (x) is the membership function of the feasible solution given by (10.12),
is called an optimal solution of the FMP problem (10.55).
For α ∈ (0, 1] a vector x ∈ [X̃ ∗ ]α is called an α-optimal solution of the FMP
problem (10.55).
A vector x∗ ∈ Rn with the property
with the fuzzy set ”of the objective” X̃0 defined by the membership function
µX̃0 (x) = µR̃0 (f̃(x; c̃), b̃0 ), (10.59)
where b̃0 is a given fuzzy goal. As a result, we obtain the membership function
of the optimal solution X̃∗ as
µ∗X̃ (x) = AG (µX̃0 (x), µX̃ (x)) (10.60)
be fuzzy relations such that for i ∈ M1 , R̃i is a fuzzy extension of the equality
relation ”=”, for i ∈ M2 , R̃i is a fuzzy extension of the inequality relation ”≤”,
and for i ∈ {0} ∪ M3 , R̃i is a fuzzy extension of the inequality relation ”≥”.
Let T, A and AG be t-norms.
Then the set of all max-optimal solutions coincides with the set of all optimal
solutions X ∗ of MP problem (10.4).
for all x ∈ Rn , where X is the set of all feasible solutions of the crisp MP
problem.
Moreover, by (10.59) we obtain for crisp c ∈ C
µX̃0 (x) = µR̃0 (f̃(x; c̃), b̃0 ) = sup{T (χf (x,c) (u), µb̃0 (v)) | u ≥ v}
= µb̃0 (f (x, c)). (10.63)
µ∗X̃ (x) = AG (µb̃0 (f (x, c)), χX (x)) = { µb̃0 (f (x, c)) if x ∈ X,
                                          0 otherwise. (10.64)
Since µb̃0 is a strictly increasing function, by (10.64) it follows that µ∗X̃ (x∗ ) =
Hgt(X̃ G ) if and only if µ∗X̃ (x∗ ) = sup{µb̃0 (f (x, c)) | x ∈ X}, which is the
desired result.
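The argument can be checked numerically: with a strictly increasing goal membership µb̃0 and a crisp feasible set X, maximizing µ∗X̃ is the same as maximizing f over X. A small sketch with an invented logistic goal and a toy feasible set:

```python
import math

def mu_goal(t):
    """A strictly increasing goal membership (logistic shape; an
    illustrative choice, not prescribed by the text)."""
    return 1.0 / (1.0 + math.exp(-t))

def mu_optimal(x, f, in_X):
    """mu*_X(x) per (10.64) with crisp feasible set X and AG = min."""
    return mu_goal(f(x)) if in_X(x) else 0.0

f = lambda x: 3 * x[0] + 2 * x[1]
in_X = lambda x: x[0] + x[1] <= 4 and x[0] >= 0 and x[1] >= 0

# on a grid over X, the maximizer of mu* coincides with the maximizer of f
grid = [(i * 0.5, j * 0.5) for i in range(9) for j in range(9)
        if in_X((i * 0.5, j * 0.5))]
best_mu = max(grid, key=lambda x: mu_optimal(x, f, in_X))
best_f = max(grid, key=f)
print(best_mu == best_f)   # True: a monotone goal preserves the argmax
```

This is the computational content of the statement: the fuzzy goal reshapes the objective values but not the location of the optimum.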
For fuzzy subsets ã′ , ã″ ∈ F(Rn ), we have ã′ ⊂ ã″ if and only if µã′ (x) ≤
µã″ (x) for all x ∈ Rn .
then
X̃∗′ ⊂ X̃∗″. (10.66)
Corollary 120 Let c̃, ãi , b̃i be a collection of fuzzy parameters, and let c ∈
C,ai ∈ Pi and bi ∈ R be a collection of crisp parameters such that for all
i∈M
µc̃ (c) = µãi (ai ) = µb̃i (bi ) = 1. (10.67)
If X ∗ is a nonempty set of all optimal solutions of MP problem (10.4) with the
parameters c, ai and bi , X̃ ∗ is an optimal solution of FMP problem (10.55) with
fuzzy parameters c̃, ãi and b̃i , then for all x ∈ X ∗
Proof. Observe that c ⊂ c̃, ai ⊂ ãi , bi ⊂ b̃i for all i ∈ M . Then by Theorem
119 we obtain X ∗ ⊂ X̃ ∗ , which is equivalent to (10.68).
Notice that the optimal solution X̃ ∗ of FMP problem (10.8) always exists,
even if the MP problem with crisp parameters has no crisp optimal solution.
Corollary 120 states that if the MP problem with crisp parameters has a crisp
optimal solution, then the membership grade of the optimal solution (of the
associated FMP problem with fuzzy parameters) is equal to one. This fact
enables a natural embedding of the class of (crisp) MP problems into the class
of FMP problems.
From now on, the space of parameters is supposed to be the k-dimensional
Euclidean vector space Rk , where k is a positive integer, i.e. C = Pi = Rk
for all i ∈ M. For the remaining part of this section we suppose that T , A
and AG all equal the minimum t-norm, that is, T = A = AG = TM . We shall derive
some formulae based on α-cuts of the parameters, analogous to those given by
Theorems 113 and 115, however, for α-optimal solutions of the FMP problem.
Theorem 124 Let all assumptions of Theorem 122 be satisfied. Moreover, let
gi be quasiconvex on Rn × Rk for i ∈ M1 ∪ M2 , f and gi be quasiconcave on
Rn × Rk for i ∈ M1 ∪ M3 .
Then for all i ∈ {0} ∪ M, X̃i are convex and the optimal solution X̃ ∗ of FMP
problem (10.55) is convex, too.
and
µX̃i (x) = µR̃i (g̃i (x; ãi ), b̃i ),
maximize t
subject to µX̃0 (x) ≥ t, (10.83)
µX̃i (x) ≥ t, i ∈ M,
Hence, x∗ is an optimal solution with the maximal height. The proof of the
opposite statement is straightforward.
Chapter 11

Fuzzy Linear Programming
11.1 Introduction
The most important mathematical programming problems (10.4) are those where
the functions f and gi are linear.
Let M = {1, 2, ..., m}, N = {1, 2, ..., n}, m, n be positive integers. Let f , gi
be linear functions, f : Rn × Rn → R, gi : Rn × Rn → R, c, ai ∈ Rn , i ∈ M,
be the parameters such that
maximize c1 x1 + · · · + cn xn
subject to
(11.3)
ai1 x1 + · · · + ain xn ≤ bi , i ∈ M,
xj ≥ 0, j ∈ N .
The set of all feasible solutions X of (11.3) is defined as follows
maximize c̃1 x1 +̃ · · · +̃ c̃n xn
subject to (11.5)
ãi1 x1 +̃ · · · +̃ ãin xn R̃i b̃i , i ∈ M,
xj ≥ 0, j ∈ N .
Let us clarify the elements of (11.5).
The objective function values and the left-hand side values of the constraints
of (11.5) have been obtained by the extension principle (8.17) as follows. A
membership function of g̃i (x; ãi1 , ..., ãin ) is defined for each t ∈ R by
µg̃i (t) = { sup{T (µãi1 (a1 ), ..., µãin (an )) | a1 , ..., an ∈ R, a1 x1 + · · · + an xn = t} if gi⁻¹(x; t) ≠ ∅,
            0 otherwise,
(11.6)
where gi⁻¹(x; t) = {(a1 , ..., an ) ∈ Rn | a1 x1 + · · · + an xn = t}. Here, the fuzzy set
g̃i (x; ãi1 , ..., ãin ) is denoted as ãi1 x1 +̃ · · · +̃ ãin xn , i.e.
In (11.7) the value ãi1 x1 +̃· · ·+̃ãin xn ∈ F(R) is “compared” with the fuzzy
quantity b̃i ∈ F(R) by a fuzzy relation R̃i , i ∈ M.
For x and y ∈ Rn we calculate f˜(x; c̃1 , ..., c̃n ) ∈ F(R) and f˜(y; c̃1 , ..., c̃n ) ∈
F(R), respectively. Such values of the objective function are not linearly ordered
and to maximize the objective function we have to define a suitable ordering on
F(R) which allows for “maximization” of the objective. Let d˜ ∈ F(R) be an
exogenously given fuzzy goal with an associated fuzzy relation R̃0 on R.
The fuzzy relations R̃i for comparing the constraints of (11.5) are usually
extensions of a valued relation on R, particularly, the usual inequality relations
“≤” and “≥”.
If R̃i is a fuzzy extension of relation Ri , then by (8.27) we obtain the mem-
bership function of the i-th constraint as
µR̃i (ãi1 x1 +̃ · · · +̃ ãin xn , b̃i ) = sup{T (µãi1 x1 +̃···+̃ãin xn (u), µb̃i (v)) | u Ri v}.
(11.10)
Apparently, the concepts of the feasible solution and the optimal solution,
already defined in the preceding chapter for the FMP problem, can be adopted
for FLP problems as well. Of course, for FLP problems they have some special
features. Let us begin with the concept of a feasible solution.
Definition 126 Let gi , i ∈ M, be linear functions defined by (11.2). Let µãij :
R → [0, 1] and µb̃i : R → [0, 1], i ∈ M = {1, 2, ..., m}, j ∈ N = {1, 2, ..., n},
be membership functions of fuzzy quantities ãij and b̃i , respectively. Let R̃i ,
i ∈ M, be fuzzy relations on F(R). Let GA be an aggregation operator and T
be a t-norm. Here T is used for extending arithmetic operations.
A fuzzy set X̃, the membership function µX̃ of which is defined for all x ∈ Rn by
µX̃ (x) = { GA (µR̃1 (ã11 x1 +̃ · · · +̃ ã1n xn , b̃1 ), . . . , µR̃m (ãm1 x1 +̃ · · · +̃ ãmn xn , b̃m )) if xj ≥ 0 for all j ∈ N ,
           0 otherwise,
(11.11)
is called the feasible solution of the FLP problem (11.5).
For α ∈ (0, 1], a vector x ∈ [X̃]α is called the α-feasible solution of the FLP
problem (11.5).
A vector x̄ ∈ Rn such that µX̃ (x̄) = Hgt(X̃) is called the max-feasible solution.
Notice that the feasible solution X̃ of a FLP problem is a fuzzy set. On the
other hand, an α-feasible solution is a vector belonging to the α-cut of the feasible
solution X̃, and the same holds for the max-feasible solution, which is a special
α-feasible solution with α = Hgt(X̃).
If a decision maker specifies the grade of membership α ∈ (0, 1] (the grade
of possibility, feasibility, satisfaction etc.), then a vector x ∈ Rn satisfying
µX̃ (x) ≥ α is an α-feasible solution of the corresponding FLP problem.
For i ∈ M we introduce the following notation: X̃i will denote a fuzzy subset
of Rn with the membership function µX̃i defined for all x ∈ Rn as
Fuzzy set (11.12) is interpreted as the i-th fuzzy constraint. All fuzzy con-
straints are aggregated into the feasible solution (11.11) by the aggregation
operator GA . In particular, GA = min; the t-norm T is used for extending arith-
metic operations. Notice that the feasible solution also depends on the fuzzy
relations used in the definitions of the constraints of the FLP problem.
Theorem 127 Let for all i ∈ M, j ∈ N , ãij and b̃i be compact, convex and
normal fuzzy quantities and let xj ≥ 0. Let T = min, S = max, and α ∈ (0, 1).
Then for i ∈ M
(i)
µ≤̃T (ãi1 x1 +̃ · · · +̃ ãin xn , b̃i ) ≥ α if and only if ∑_{j=1}^n aij (α)xj ≤ bi (α), (11.17)
(ii)
µ≤̃S (ãi1 x1 +̃ · · · +̃ ãin xn , b̃i ) ≥ α if and only if ∑_{j=1}^n aij (1 − α)xj ≤ bi (1 − α).
(11.18)
(iii) Moreover, if ãij and b̃i are strictly convex fuzzy quantities, then
µ≤̃T,S (ãi1 x1 +̃ · · · +̃ ãin xn , b̃i ) ≥ α if and only if ∑_{j=1}^n aij (1 − α)xj ≤ bi (α),
(11.19)
where µ≤̃T,S denotes the membership function of the fuzzy relation ≤̃T,S , and
(iv)
µ≤̃S,T (ãi1 x1 +̃ · · · +̃ ãin xn , b̃i ) ≥ α if and only if ∑_{j=1}^n aij (α)xj ≤ bi (1 − α),
(11.20)
where µ≤̃S,T denotes the membership function of the fuzzy relation ≤̃S,T .
Proof. We present here only the proof of part (i). The other parts follow
analogously from Theorem 65.
Let i ∈ M, µ≤̃T (ãi1 x1 +̃ · · · +̃ ãin xn , b̃i ) ≥ α. By Theorem 65, this is equiva-
lent to
inf [∑_{j=1}^n ãij xj ]α ≤ sup [b̃i ]α .
By the well-known result of Nguyen, see [41] or [64], it follows that
[∑_{j=1}^n ãij xj ]α = ∑_{j=1}^n [ãij ]α xj .
Let GA = min. By Theorem 127, the α-cuts [X̃]α of the feasible solution of (11.5)
can be computed by solving the systems of inequalities from (11.17) - (11.20).
Moreover, [X̃]α is an intersection of a finite number of halfspaces, hence a convex
polyhedral set.
c̃1 x1 +̃ · · · +̃ c̃n xn R̃0 d̃.
where µX̃ (x) is the membership function of the feasible solution, is called the
optimal solution of FLP problem (11.5).
For α ∈ (0, 1] a vector x ∈ [X̃ ∗ ]α is called the α-optimal solution of FLP problem
(11.5).
A vector x∗ ∈ Rn with the property
Notice that the optimal solution of the FLP problem is a fuzzy set. On the
other hand, the α-optimal solution is a vector belonging to the α-cut [X̃ ∗ ]α .
Likewise, the max-optimal solution is an α-optimal solution with α = Hgt(X̃ ∗ ).
Notice that in view of Chapter 9 a max-optimal solution is in fact a max-AG
decision on Rn .
In Definition 128, the t-norm T and the aggregation operators A and AG
have been used. The t-norm T has been used for extending arithmetic
operations, the aggregation operator A for aggregating the individual constraints
into the single feasible solution, and AG has been applied for aggregating the
fuzzy set of the feasible solution with the fuzzy set of the objective X̃0 defined
by the membership function
µX̃0 (x) = µR̃0 (c̃1 x1 +̃ · · · +̃ c̃n xn , d̃), (11.26)
for all x ∈ Rn , where X is the set of all feasible solutions (11.4) of the crisp LP
problem (11.3). Moreover, by (11.26) we obtain for crisp c ∈ Rn
µX̃0 (x) = µR̃0 (f (x; c), d̃) = µd̃ (c1 x1 + · · · + cn xn ).
Since µd˜ is strictly increasing, by (10.64) it follows that µX̃ ∗ (x∗ ) = Hgt(X̃ ∗ ) if
and only if
µX̃ ∗ (x∗ ) = sup{µd˜(c1 x1 + · · · + cn xn )|x ∈ X},
which is the desired result.
Proposition 130 Let c̃′j , ã′ij and b̃′i and c̃″j , ã″ij and b̃″i be two collections of
fuzzy parameters of FLP problem (11.5), i ∈ M, j ∈ N . Let T, A, AG be t-
norms. Let R̃i , i ∈ {0} ∪ M, be T -fuzzy extensions of valued relations Ri on R.
If X̃∗′ is the optimal solution of FLP problem (11.5) with the parameters c̃′j , ã′ij
and b̃′i , X̃∗″ is the optimal solution of the FLP problem with the parameters c̃″j ,
ã″ij and b̃″i such that for all i ∈ M, j ∈ N ,
We show that f̃(x; c̃′ ) ⊂ f̃(x; c̃″ ). Indeed, since for all j ∈ N , µc̃′j (c) ≤ µc̃″j (c)
for all c ∈ R, by (11.8) we obtain for all u ∈ R
µc̃′1 x1 +̃···+̃c̃′n xn (u) = sup{T (µc̃′1 (c1 ), ..., µc̃′n (cn )) | c1 x1 + · · · + cn xn = u}
≤ sup{T (µc̃″1 (c1 ), ..., µc̃″n (cn )) | c1 x1 + · · · + cn xn = u}
= µc̃″1 x1 +̃···+̃c̃″n xn (u).
Proposition 131 Let for all i ∈ M, j ∈ N , c̃j , ãij and b̃i be compact, convex
and normal fuzzy quantities, d˜ ∈ F(R) be a fuzzy goal with the membership
function µd̃ satisfying the following conditions
Proposition 132 Consider FLP problem (11.5), where for each x = (x1 , ..., xn )
∈ Rn and every i ∈ M
µX̃0 (x) = µR̃0 (c̃1 x1 +̃ · · · +̃ c̃n xn , d̃)
and
µX̃i (x) = µR̃i (ãi1 x1 +̃ · · · +̃ ãin xn , b̃i )
are the membership functions of the fuzzy objective and fuzzy constraints, re-
spectively. Let T = A = AG = min and let (11.34) hold for d̃.
A vector (t∗ , x∗ ) ∈ Rn+1 is an optimal solution of the optimization problem
maximize t
subject to µX̃i (x) ≥ t, i ∈ {0} ∪ M, (11.37)
xj ≥ 0, j ∈ N
and
g̃i (x; ãi1 , ..., ãin ) = ãi1 x1 +̃T · · ·+̃T ãin xn (11.39)
for each x ∈ Rn , where c̃j , ãij ∈ F(R), for all i ∈ M, j ∈ N . Formulae (11.38)
and (11.39) are defined by (11.8) and (8.20), respectively, that is, by using the
extension principle. Here +̃T denotes that the extended summation is performed
by the t-norm T . Note that for arbitrary t-norms T, exact formulae for (11.38)
and (11.39) can be either complicated or even unavailable. However, in some
special cases such formulae exist, some of which will be given below.
For the sake of brevity we shall deal only with (11.38); for (11.39) the results
can be obtained analogously.
Fγ (x) = F (x/γ),
Gδ (x) = G(x/δ), (11.40)
where x ∈ (0, +∞). Let lj , rj ∈ R be such that lj ≤ rj , let γj , δj ∈ (0, +∞) and
let
c̃j = (lj , rj , Fγj , Gδj ), j ∈ N , (11.41)
µc̃j (x) = { Fγj (lj − x) if x ∈ (−∞, lj ),
            1 if x ∈ [lj , rj ], (11.42)
            Gδj (x − rj ) if x ∈ (rj , +∞),
see also Chapter 8. In the following proposition we show that c̃1 x1 +̃ · · · +̃ c̃n xn
is also a closed fuzzy interval of the same type. The proof is a straightforward
application of the extension principle and is omitted; for references, see [38].
Proposition 133 Let c̃j = (lj , rj , Fγj , Gδj ), j ∈ N , be closed fuzzy intervals
with the membership functions given by (11.42), let x = (x1 , . . . , xn ) ∈ Rn ,
xj ≥ 0 for all j ∈ N , and denote
Ix = {j | xj > 0, j ∈ N }.
Then
c̃1 x1 +̃TM · · · +̃TM c̃n xn = (l, r, FlM , GrM ), (11.43)
c̃1 x1 +̃TD · · · +̃TD c̃n xn = (l, r, FlD , GrD ), (11.44)
where TM is the minimum t-norm, TD is the drastic product and
l = ∑_{j∈Ix} lj xj ,  r = ∑_{j∈Ix} rj xj , (11.45)
lM = ∑_{j∈Ix} γj /xj ,  rM = ∑_{j∈Ix} δj /xj , (11.46)
lD = max{γj /xj | j ∈ Ix },  rD = max{δj /xj | j ∈ Ix }. (11.47)
If all c̃j are (L, R)-fuzzy intervals, then we can obtain an analogous and more
specific result. Let lj , rj ∈ R with lj ≤ rj , let γ j , δ j ∈ [0, +∞) and let L, R
be non-increasing, non-constant, upper-semicontinuous functions mapping the
interval (0, 1] into [0, +∞), i.e. L, R : (0, 1] → [0, +∞). Moreover, assume that
L(1) = R(1) = 0, and define L(0) = lim_{x→0} L(x), R(0) = lim_{x→0} R(x).
Let for every j ∈ N
c̃j = (lj , rj , γj , δj )LR (11.48)
be an (L, R)-fuzzy interval given by the membership function defined for each
x ∈ R by
µc̃j (x) = { L(−1) ((lj − x)/γj ) if x ∈ (lj − γj , lj ), γj > 0,
            1 if x ∈ [lj , rj ],
            R(−1) ((x − rj )/δj ) if x ∈ (rj , rj + δj ), δj > 0, (11.49)
            0 otherwise.
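Evaluating (11.49) is straightforward once the pseudo-inverse shape functions are given; below they are hypothetical linear shapes, which makes c̃j a trapezoidal fuzzy interval:

```python
def lr_membership(x, l, r, gamma, delta, Linv, Rinv):
    """Membership of an (L,R)-fuzzy interval (l, r, gamma, delta)_LR
    per (11.49); Linv and Rinv play the role of L^(-1) and R^(-1)."""
    if l <= x <= r:
        return 1.0                      # core of the interval
    if gamma > 0 and l - gamma < x < l:
        return Linv((l - x) / gamma)    # left branch
    if delta > 0 and r < x < r + delta:
        return Rinv((x - r) / delta)    # right branch
    return 0.0                          # outside the support

lin = lambda t: 1.0 - t   # linear shape: a trapezoidal fuzzy interval
print(lr_membership(1.5, 2.0, 4.0, 1.0, 2.0, lin, lin))  # 0.5
print(lr_membership(3.0, 2.0, 4.0, 1.0, 2.0, lin, lin))  # 1.0
print(lr_membership(5.0, 2.0, 4.0, 1.0, 2.0, lin, lin))  # 0.5
```

Any other upper-semicontinuous non-increasing shapes can be plugged in for `Linv` and `Rinv` without changing the evaluator.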
The results (11.44) and (11.51) in Propositions 133 and 134, respectively, can
be extended as follows, see [38].
Let c̃j = (lj , rj , Fγj , Fδj ), j ∈ N , be closed fuzzy intervals with the membership
functions given by (11.42), let x = (x1 , . . . , xn ) ∈ Rn , xj ≥ 0 for all j ∈ N ,
and Ix = {j | xj > 0, j ∈ N }. Then
where
l = ∑_{j∈Ix} lj xj ,  r = ∑_{j∈Ix} rj xj ,
lD = max{γj /xj | j ∈ Ix },  rD = max{δj /xj | j ∈ Ix }.
Note that for a continuous Archimedean t-norm T and closed fuzzy intervals
c̃j satisfying the assumptions of Proposition 135, we have
which means that we obtain the same fuzzy linear function based on an arbitrary
t-norm T ′ such that T ′ ≤ T .
The following proposition generalizes several results concerning the addition
of closed fuzzy intervals based on continuous Archimedean t-norms, see [38].
where
l = ∑_{j∈Ix} lj xj ,  r = ∑_{j∈Ix} rj xj , (11.56)
lK = ∑_{j∈Ix} γj /xj ,  rK = ∑_{j∈Ix} δj /xj . (11.57)
This means that each piecewise linear fuzzy number (l, r, γ, δ) can be written as
(l, r, γ, δ) = (l, r, F^λ_{(λ−1)/γ} , F^λ_{(λ−1)/δ} ),
(l, r, γ, δ) ,
The extensions can be obtained also for some other t-norms, see e.g. [38],
[83].
An alternative approach based on centered fuzzy numbers will be mentioned
later in this chapter, see also [42], [43].
11.6 Duality
In this section we generalize the well-known concept of duality in LP to FLP
problems. The results of this section, in a more general setting, can be found in
[64]. We derive some weak and strong duality results which extend the known
results for LP problems.
Consider the following FLP problem
maximize c̃1 x1 +̃ · · · +̃ c̃n xn
subject to (11.58)
ãi1 x1 +̃ · · · +̃ ãin xn R̃ b̃i , i ∈ M,
xj ≥ 0, j ∈ N .
Here, the parameters c̃j , ãij and b̃i are considered as normal fuzzy quantities,
i.e. µc̃j : R → [0, 1], µãij : R → [0, 1] and µb̃i : R → [0, 1], i ∈ M, j ∈ N . Let
R̃ be a fuzzy extension of a valued relation R on R. FLP problem (11.58) will
be called the primal FLP problem (P).
The dual FLP problem (D) is defined as
minimize b̃1 y1 +̃ · · · +̃ b̃m ym
subject to (11.59)
ã1j y1 +̃ · · · +̃ ãmj ym R̃∗ c̃j , j ∈ N ,
yi ≥ 0, i ∈ M.
(D):
minimize b̃1 y1 +̃ · · · +̃ b̃m ym
subject to (11.61)
ã1j y1 +̃ · · · +̃ ãmj ym ≥̃S c̃j , j ∈ N ,
yi ≥ 0, i ∈ M.
Let the feasible solution of the primal FLP problem (P) be denoted by X̃,
the feasible solution of the dual FLP problem (D) by Ỹ . Clearly, X̃ is a fuzzy
subset of Rn , Ỹ is a fuzzy subset of Rm .
Notice that in the crisp case, i.e. when the parameters c̃j , ãij and b̃i are
crisp real numbers, the relation ≤̃T coincides with ≤ and ≥̃S coincides with ≥;
hence (P) and (D) form a primal-dual pair of LP problems in the usual sense.
In the following proposition we prove the weak form of the duality theorem
for FLP problems.
Proposition 137 Let for all i ∈ M, j ∈ N , c̃j , ãij and b̃i be compact, convex
and normal fuzzy quantities. Let ≤̃T be the T -fuzzy extension of the binary
relation ≤ on R defined by (8.27) and ≥̃S be the fuzzy extension of the relation
≥ on R defined by (8.37). Let A = T = min, S = max. Let X̃ be a feasible
solution of FLP problem (11.60), Ỹ be a feasible solution of FLP problem (11.61)
and let α ∈ [0.5, 1).
If a vector x = (x1 , ..., xn ) ≥ 0 belongs to [X̃]α and y = (y1 , ..., ym ) ≥ 0 belongs
to [Ỹ ]α , then
∑_{j∈N} c̄j (1 − α)xj ≤ ∑_{i∈M} b̄i (1 − α)yi . (11.62)
∑_{i=1}^m aij (1 − α)yi ≥ cj (1 − α). (11.63)
∑_{j=1}^n aij (1 − α)xj ≤ bi (1 − α). (11.64)
Let the optimal solution of the primal FLP problem (P), defined by Defini-
tion 128, be denoted by X̃ ∗ , the optimal solution of the dual FLP problem (D),
defined also by Definition 128, by Ỹ ∗ . Clearly, X̃ ∗ is a fuzzy subset of Rn , Ỹ ∗
is a fuzzy subset of Rm , moreover, X̃ ∗ ⊂ X̃ and Ỹ ∗ ⊂ Ỹ .
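In the crisp special case, the weak duality inequality (11.62) collapses to the classical c·x ≤ b·y for any primal-feasible x and dual-feasible y; a quick numeric sanity check with invented data:

```python
# Crisp weak duality: primal  max c.x s.t. A x <= b, x >= 0;
#                     dual    min b.y s.t. A^T y >= c, y >= 0.
c = [3.0, 2.0]
b = [4.0, 3.0]
A = [[1.0, 1.0], [1.0, 0.0]]
x = [2.0, 1.0]   # primal feasible: 2+1 <= 4 and 2 <= 3
y = [2.0, 1.0]   # dual feasible: 2+1 >= 3 and 2+0 >= 2

primal = sum(ci * xi for ci, xi in zip(c, x))   # c.x = 8.0
dual = sum(bi * yi for bi, yi in zip(b, y))     # b.y = 11.0
print(primal <= dual)   # True: weak duality holds
```

Inequality (11.62) states that this gap survives fuzzification, level by level, for the (1−α)-cut endpoints of the fuzzy parameters.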
Proposition 138 Let for all i ∈ M, j ∈ N , c̃j , ãij and b̃i be compact, convex
and normal fuzzy quantities. Let d̃, h̃ ∈ F(R) be fuzzy goals with the membership
functions µd̃ , µh̃ satisfying the following conditions:
(i) µd̃ , µh̃ are upper semicontinuous,
(ii) µd̃ is strictly increasing, µh̃ is strictly decreasing, (11.65)
(iii) lim_{t→−∞} µd̃ (t) = lim_{t→+∞} µh̃ (t) = 0.
maximize c̃1 x1 +̃ · · · +̃ c̃n xn
subject to (11.69)
ãi1 x1 +̃ · · · +̃ ãin xn R̃ b̃i , i ∈ M,
xj ≥ 0, j ∈ N .
Here, the parameters c̃j , ãij and b̃i are considered to be compact intervals in R,
i.e. c̃j = [cj , c̄j ], ãij = [aij , āij ] and b̃i = [bi , b̄i ], where cj , c̄j , aij , āij and bi , b̄i
are the lower and upper bounds of the corresponding intervals, respectively. The
membership functions of c̃j , ãij and b̃i are the characteristic functions of these
intervals, i.e. χ[cj ,c̄j ] : R → [0, 1], χ[aij ,āij ] : R → [0, 1] and χ[bi ,b̄i ] : R → [0, 1],
(ii)
X≤̃S = {x ∈ Rn | ∑_{j=1}^n aij xj ≤ bi , xj ≥ 0, j ∈ N }. (11.71)
(iii)
X≤̃T,S = X≤̃^{T,S} = {x ∈ Rn | ∑_{j=1}^n aij xj ≤ bi , xj ≥ 0, j ∈ N }. (11.72)
(iv)
X≤̃S,T = X≤̃^{S,T} = {x ∈ Rn | ∑_{j=1}^n aij xj ≤ bi , xj ≥ 0, j ∈ N }. (11.73)
maximize ∑_{j=1}^n cj xj (11.74)
subject to x ∈ X;
µ≥̃T (c̃1 x1 +̃ · · · +̃ c̃n xn , d̃) = sup{min{µc̃1 x1 +̃···+̃c̃n xn (u), µd̃ (v)} | u ≥ v}
= sup{min{χ[c,c̄] (u), µd̃ (v)} | u ≥ v}
= µd̃ (∑_{j=1}^n c̄j xj ).
Y≥̃S = {y ∈ Rm | ∑_{i=1}^m aij yi ≥ cj , yi ≥ 0, i ∈ M}. (11.76)
and
minimize ∑_{i=1}^m bi yi
subject to y ∈ Y≥̃S
are dual to each other in the usual (crisp) sense if and only if cj = c̄j and bi = b̄i
for all i ∈ M and j ∈ N .
For ILP problems our results correspond to those of [24], [49], [82].
maximize c1 x1 + · · · + cn xn
subject to
(11.77)
ai1 x1 + · · · + ain xn ≤ bi , i ∈ M,
xj ≥ 0, j ∈ N ,
see also [83]. In (11.77) the values of the parameters cj , aij and bi are known;
they are, however, uncertain or imprecise. That is why nonnegative values
pi , i ∈ {0} ∪ M, of admissible violations of the objective and constraints are
(subjectively) chosen and added to the original model (11.77).
For the objective function, an aspiration value d0 ∈ R is also (subjectively)
determined such that if the objective function attains this value, or if it is
greater, then the decision maker (DM) is fully satisfied. On the other hand, if
the objective function attains a value smaller than d0 − p0 , then (DM) is fully
dissatisfied. Within the interval (d0 − p0 , d0 ), the satisfaction of DM increases
linearly from 0 to 1. By these considerations a membership function µd̃ of the
fuzzy goal d̃ is defined as follows:
µd̃ (t) = { 1 if t ≥ d0 ,
           1 + (t − d0 )/p0 if d0 − p0 ≤ t < d0 , (11.78)
           0 otherwise.
Similarly, for the i-th constraint of (11.77), i ∈ M, a right-hand side bi ∈ R
is known such that if the left-hand side attains this value, or a smaller one, then
the decision maker (DM) is fully satisfied. On the other hand, if the left-hand
side attains a value greater than bi + pi , then the DM is fully dissatisfied.
Within the interval (bi , bi + pi ), the satisfaction of the DM decreases
linearly from 1 to 0. By these considerations a membership function µb̃i of the
fuzzy right-hand side b̃i is defined as
µb̃i (t) = { 1 if t ≤ bi ,
            1 − (t − bi )/pi if bi ≤ t < bi + pi , (11.79)
            0 otherwise.
The relationship between the objective function and the constraints in the flexi-
ble LP problem is considered fully symmetric, i.e. there is no longer a difference
between the former and the latter. ”Maximization” is then understood as finding
a vector x ∈ Rn such that the membership grade of the intersection of all
fuzzy sets (11.78) and (11.79) is maximal. In other words, we have to solve the
following optimization problem:
maximize λ
subject to µd̃ (∑_{j∈N} cj xj ) ≥ λ,
µb̃i (∑_{j∈N} aij xj ) ≥ λ, i ∈ M, (11.80)
0 ≤ λ ≤ 1, xj ≥ 0, j ∈ N .
maximize λ
subject to ∑_{j∈N} cj xj ≥ d0 − (1 − λ)p0 ,
∑_{j∈N} aij xj ≤ bi + (1 − λ)pi , i ∈ M, (11.81)
0 ≤ λ ≤ 1, xj ≥ 0, j ∈ N .
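Problem (11.80) can also be attacked directly from the membership functions (11.78) and (11.79). The one-variable sketch below (all data invented) grid-searches x and reports the best attainable grade λ:

```python
def mu_goal(t, d0, p0):
    """Fuzzy goal (11.78): satisfied above d0, dissatisfied below
    d0 - p0, linear in between."""
    return max(0.0, min(1.0, 1.0 + (t - d0) / p0))

def mu_rhs(t, b, p):
    """Fuzzy right-hand side (11.79): satisfied below b, dissatisfied
    above b + p, linear in between."""
    return max(0.0, min(1.0, 1.0 - (t - b) / p))

# flexible LP (11.80) with one variable: maximize the min of all grades
# (hypothetical data: c = 2, a = 1, d0 = 10, p0 = 4, b = 4, p = 2)
def grade(x):
    return min(mu_goal(2 * x, 10.0, 4.0), mu_rhs(x, 4.0, 2.0))

xs = [i / 1000.0 for i in range(8001)]   # grid search on [0, 8]
x_star = max(xs, key=grade)
print(round(grade(x_star), 3))   # -> 0.75
```

The optimum balances the increasing goal grade against the decreasing constraint grade, which is exactly the λ that the equivalent LP (11.81) maximizes.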
maximize c1 x1 + · · · + cn xn
subject to (11.82)
ai1 x1 + · · · + ain xn ≤̃T b̃i , i ∈ M,
xj ≥ 0, j ∈ N ,
where cj , aij and bi are the same as above, that is, crisp numbers, whereas d̃
and b̃i are fuzzy quantities defined by (11.78) and (11.79). Moreover, ≤̃T is a
T -fuzzy extension of the usual inequality relation ≤, with T = min. It turns out
that a vector x ∈ Rn is an optimal solution of flexible LP problem (11.81) if
and only if it is a max-optimal solution of FLP problem (11.82). This statement
follows directly from Proposition 132.
Notice that the piecewise linear membership functions (11.78) and (11.79) can
be replaced by more general nondecreasing and nonincreasing functions, respec-
tively. In general, problem (11.80) cannot then be equivalently transformed to
the LP problem (11.81). Such a transformation is, however, sometimes possible,
e.g. if all membership functions are generated by the same strictly monotone
function.
f (x; c1 , ..., cn ) = c1 x1 + · · · + cn xn ,
maximize c̃1 x1 +̃D0 · · · +̃D0 c̃n xn
subject to (11.83)
ãi1 x1 +̃Di · · · +̃Di ãin xn R̃i b̃i , i ∈ M,
xj ≥ 0, j ∈ N .
Let us clarify the elements of (11.83).
The objective function values and the left hand sides values of the constraints
of (11.83) have been obtained by the extension principle (8.17) as follows. By
(8.60) we obtain
µãi (a) = T (µãi1 (hdi1 , ai), µãi2 (hdi2 , ai), ..., µãin (hdin , ai)). (11.84)
µc̃ (c) = T (µc̃1 (hd01 , ci), µc̃2 (hd02 , ci), ..., µc̃n (hd0n , ci)). (11.87)
A membership function of f̃(x; c̃) is defined for each t ∈ R by
µf̃ (t) = { sup{µc̃ (c) | c = (c1 , ..., cn ) ∈ Rn , c1 x1 + · · · + cn xn = t} if f −1 (x; t) ≠ ∅,
           0 otherwise, (11.88)
where f −1 (x; t) = {(c1 , ..., cn ) ∈ Rn | c1 x1 + · · · + cn xn = t}. Here, the fuzzy set
f̃(x; c̃) is denoted as c̃1 x1 +̃D0 · · · +̃D0 c̃n xn , i.e.
f̃(x; c̃) = c̃1 x1 +̃D0 · · · +̃D0 c̃n xn (11.89)
for each x ∈ Rn .
The treatment of FLP problem (11.83) is analogous to that of (11.5). The
following proposition demonstrates how the α-cuts of (11.86) and (11.89) can
be calculated.
Let D0 be the non-singular obliquity matrix and denote D0⁻¹ = {d∗ij }, i, j =
1, ..., n. For x = (x1 , ..., xn ) ∈ Rn we denote
Proposition 140 Let c̃1 , ..., c̃n ∈ FI (R) be compact interactive fuzzy intervals
with an obliquity matrix D0 , x = (x1 , ..., xn ) ∈ Rn . Let T be a continuous
t-norm and f̃(x; c̃) = c̃1 x1 +̃D0 · · · +̃D0 c̃n xn be defined by (11.88), α ∈ (0, 1].
Then
[f̃(x; c̃)]α = [ ∑_{j∈Ix+∗} cj (α)x∗j + ∑_{j∈Ix−∗} c̄j (α)x∗j ,
               ∑_{j∈Ix+∗} c̄j (α)x∗j + ∑_{j∈Ix−∗} cj (α)x∗j ]. (11.93)
Proof. Observe that [c̃j ]α = [cj (α), c̄j (α)]. The proof follows directly from
(11.87), (11.88), (11.90) and Theorem 53.
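Ignoring the obliquity transformation (i.e. taking D0 as the identity, so that x∗ = x), formula (11.93) is plain interval arithmetic on the α-cuts [cj(α), c̄j(α)]; a minimal sketch:

```python
def fuzzy_linear_alpha_cut(cuts, x):
    """alpha-cut of c~1 x1 + ... + c~n xn in the spirit of (11.93),
    with the obliquity matrix taken as the identity: the lower endpoint
    uses lower cut bounds where x_j > 0 and upper bounds where x_j < 0
    (standard interval arithmetic)."""
    lo = sum(c[0] * xj if xj > 0 else c[1] * xj for c, xj in zip(cuts, x))
    hi = sum(c[1] * xj if xj > 0 else c[0] * xj for c, xj in zip(cuts, x))
    return (lo, hi)

# alpha-cuts [1, 2] and [3, 4], weights x = (2, -1)
print(fuzzy_linear_alpha_cut([(1.0, 2.0), (3.0, 4.0)], (2.0, -1.0)))
# -> (-2.0, 1.0)
```

The sign split over the index sets Ix+∗ and Ix−∗ in (11.93) is exactly this branching on the sign of each weight.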
An analogous result can be formulated and proved for interactive fuzzy para-
meters in the constraints of (11.83), i.e. if ãi1, ..., ãin ∈ FI(R) are compact
µc̃x(t) = sup{χ[c,c̄](c) | c ∈ R, cx = t} =
  1 if cx ≤ t ≤ c̄x,
  0 otherwise.   (11.102)

Similarly, we obtain the membership function of ãx as

µãx(t) = sup{χ[a,ā](a) | a ∈ R, ax = t} =
  1 if ax ≤ t ≤ āx,
  0 otherwise.   (11.103)
µ≥̃(c̃y, c̃x) = sup{min{µc̃y(u), µc̃x(v)} | u ≥ v},
or
X̃ = [0, +∞).

Case 3: ā < 0; then also a < 0. From (11.110) and (11.106) it follows that

or
X̃ = [0, +∞).
where β is a sufficiently small positive number, e.g. β ≤ a/b, to ensure that µd̃
is a strictly increasing function on a sufficiently large interval. By (11.26) and
(11.27) we obtain

µX̃∗(x) = min{µX̃0(x), µX̃(x)},   (11.111)

µX̃0(x) = µ≥̃(c̃x, d̃) = sup{min{χ[cx,c̄x](u), µd̃(v)} | u ≥ v} = µd̃(c̄x).   (11.112)
By Proposition 132 we obtain the unique optimal solution with maximal height

x∗ = b/a.
Again, by Proposition 131 we obtain the α-cut of the optimal solution

[X̃∗]α = [α/(c̄β), +∞).

The set of all optimal solutions with maximal height is the interval

[1/(c̄β), +∞).
Example 144 Consider the same FLP problem as in Example 143, but with
different fuzzy parameters. The problem is as follows:

maximize c̃x
subject to
ãx ≤̃ b̃,   (11.114)
x ≥ 0.

Here, the parameters c̃, ã and b̃ are supposed to be triangular fuzzy numbers. To
limit the number of particular cases, we suppose that
Figure 11.1:
Piecewise linear membership functions µc̃, µã and µb̃ are defined for each x ∈ R
as follows:

µc̃(x) = max{0, min{1 − (c − x)/γ, 1 + (c − x)/γ}},   (11.116)

µã(x) = max{0, min{1 − (a − x)/α, 1 + (a − x)/α}},   (11.117)

µb̃(x) = max{0, min{1 − (b − x)/β, 1 + (b − x)/β}},   (11.118)

see Fig. 11.1.
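Formulas (11.116)-(11.118) share one shape, a symmetric triangle with a given center and spread. A direct transcription (the sample values in the test are made up):

```python
def tri_membership(x, center, spread):
    """Triangular membership as in (11.116):
    max{0, min{1 - (center - x)/spread, 1 + (center - x)/spread}}."""
    return max(0.0, min(1.0 - (center - x) / spread, 1.0 + (center - x) / spread))
```

The function equals 1 at the center, decreases linearly to 0 at center ± spread, and is 0 outside that support.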
Let T = min. The fuzzy relation ≤̃ is assumed to be a T-fuzzy extension of the
binary relation ≤.
Figure 11.2:
Second, we calculate the membership function µ≤̃ of the fuzzy relation ≤̃.
Let x > 0. Then, see Fig. 11.2,

µ≤̃(ãx, b̃) = sup{min{µãx(u), µb̃(v)} | u ≤ v},
For x = 0 we calculate

µ≤̃(ãx, b̃) = sup{min{χ{0}(u), µb̃(v)} | u ≤ v} = 1.   (11.121)
µ≤̃(ãx, b̃) = sup{min{µãx(u), µb̃(v)} | u ≤ v}
  = 1 if 0 < x, ax ≤ b,
  = (b + β − (a − α)x)/(αx + β) if b < ax, (a − α)x ≤ b + β,
  = 0 otherwise.   (11.123)
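The closed form (11.123) can be checked against a brute-force evaluation of the supremum. The sketch below uses min as the t-norm and hypothetical parameter values a = 2, α = 1, b = 4, β = 1 (not taken from the example); for x > 0, ãx is triangular with center ax and spread αx:

```python
def tri(x, center, spread):
    """Triangular membership with given center and spread."""
    return max(0.0, min(1.0 - (center - x) / spread, 1.0 + (center - x) / spread))

def mu_leq_bruteforce(x, a=2.0, al=1.0, b=4.0, be=1.0, step=0.01):
    """sup{min(mu_ax(u), mu_b(v)) : u <= v} over a grid (x > 0 assumed)."""
    best = 0.0
    n = int(12.0 / step)                 # grid wide enough to cover both supports
    for i in range(n + 1):
        u = i * step
        if tri(u, a * x, al * x) == 0.0:
            continue
        v = max(u, b)                    # for fixed u, mu_b is maximal on [u, inf) here
        best = max(best, min(tri(u, a * x, al * x), tri(v, b, be)))
    return best

def mu_leq_closed(x, a=2.0, al=1.0, b=4.0, be=1.0):
    """Closed form (11.123)."""
    if 0 < x and a * x <= b:
        return 1.0
    if b < a * x and (a - al) * x <= b + be:
        return (b + be - (a - al) * x) / (al * x + be)
    return 0.0
```

For x = 3 both evaluations give 0.5: the rising left branch of µãx meets the falling right branch of µb̃ at u = v = 4.5.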
Figure 11.3:
µX̃(x) =
  1 if 0 ≤ x, ax ≤ b,
  (b + β − (a − α)x)/(αx + β) if b < ax, (a − α)x ≤ b + β,
  0 otherwise.   (11.124)
µX̃(x) ≥ ε

if and only if

(b + β − (a − α)x)/(αx + β) ≥ ε and x ≥ 0,

or equivalently,

0 ≤ x ≤ (b + (1 − ε)β)/(a − (1 − ε)α).

In other words,

[X̃]ε = [0, (b + (1 − ε)β)/(a − (1 − ε)α)].   (11.125)
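Formula (11.125) is easy to verify numerically: at the right endpoint x = (b + (1 − ε)β)/(a − (1 − ε)α) the membership (11.124) equals exactly ε, and it drops below ε just beyond. A sketch with hypothetical values a = 2, α = 1, b = 4, β = 1:

```python
def mu_X(x, a=2.0, al=1.0, b=4.0, be=1.0):
    """Membership of the fuzzy feasible solution, formula (11.124)."""
    if 0 <= x and a * x <= b:
        return 1.0
    if b < a * x and (a - al) * x <= b + be:
        return (b + be - (a - al) * x) / (al * x + be)
    return 0.0

def eps_cut_right_endpoint(eps, a=2.0, al=1.0, b=4.0, be=1.0):
    """Right endpoint of the eps-cut [X~]_eps per (11.125)."""
    return (b + (1 - eps) * be) / (a - (1 - eps) * al)
```

With ε = 0.5 the endpoint is (4 + 0.5)/(2 − 0.5) = 3, and indeed µX̃(3) = (4 + 1 − 3)/(3 + 1) = 0.5.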
Figure 11.4:
µX̃0(x) = µ≥̃(c̃x, d̃) = sup{min{µc̃x(u), µd̃(v)} | u ≥ v}.   (11.128)
By (11.119) and (11.126) we calculate for all x ≥ 0

where
δ = β(c + γ)/(1 + βγ).
The membership function of the optimal solution given by (11.127) is depicted in
Fig. 11.4. Combining (11.125) and (11.129) we obtain the set of all max-
optimal solutions X̄ from formula (11.127) as

X̄ = (√D − [βδ + (a − α)])/(2αδ)   if a/b < 1/δ,
X̄ = [1/δ, a/b]   otherwise,

where
D = [βδ + (a − α)]² + 4αδ(b + β),

see Fig. 11.4.
Chapter 12
Conclusion
as membership functions of fuzzy subsets and will be called here fuzzy criteria.
Each constraint or objective function of the fuzzy mathematical programming
problem is naturally associated with a unique fuzzy criterion.
Fuzzy mathematical programming problems form a subclass of decision-making
problems in which preferences between alternatives are described by means
of objective function(s) defined on the set of alternatives in such a way that
greater values of the function(s) correspond to more preferable alternatives (if
"higher value" is "better"). The values of the objective function describe the effects
of choosing the alternatives. First we presented a general formulation of the FMP
problem associated with the classical MP problem; then we defined a feasible
solution of the FMP problem and an optimal solution of the FMP problem as special fuzzy
sets. Among other results, we have shown that the class of all MP problems with
(crisp) parameters can be naturally embedded into the class of FMP problems
with fuzzy parameters.
We also dealt with a class of fuzzy linear programming problems, and again
investigated feasible and optimal solutions, the necessary tools for dealing with
such problems. In this way we showed that the class of crisp (classical) LP
problems can be embedded into the class of FLP ones. Moreover, for FLP
problems we defined the concept of duality and proved the weak and strong
duality theorems. Further, we investigated special classes of FLP: interval LP
problems, flexible LP problems, LP problems with interactive coefficients and
LP problems with centered coefficients.
In this study we introduced an original unified approach by which a number
of new and as yet unpublished results have been obtained.
Our approach to SC presented in this work is mathematically oriented, as
the author is a mathematician. There exist, however, other approaches to SC,
e.g. a human-science approach and a computer-science approach, putting more
stress on other aspects of the subject.
Bibliography
[12] J.J. Buckley, Possibilistic linear programming with triangular fuzzy num-
bers. Fuzzy Sets and Systems 26 (1988) 135-138.
[14] S. Chen and C. Hwang, Fuzzy multiple attribute decision making. Springer-
Verlag, Berlin, Heidelberg, New York, 1992.
[18] M. Delgado, J. Kacprzyk, J.-L. Verdegay and M.A. Vila, Eds., Fuzzy
optimization - Recent advances. Physica-Verlag, Heidelberg, New York,
1994.
[20] D. Dubois and H. Prade, Possibility theory. Plenum Press, N. York and
London, 1988.
[23] J.C. Fodor and M. Roubens, Fuzzy preference modelling and multi-criteria
decision support. Kluwer Acad. Publ., Dordrecht-Boston-London, 1994.
[35] M. Inuiguchi, J. Ramik and T. Tanino, Oblique fuzzy numbers and their use
in possibilistic linear programming. Fuzzy Sets and Systems, Special issue:
Interfaces between fuzzy sets and interval analysis, to appear.
[38] E.P. Klement, R. Mesiar and E. Pap, Triangular norms. Kluwer Acad.
Publ., Series Trends in Logic, Dordrecht-Boston-London, 2000.
[41] M. Kovacs, Fuzzy linear programming with centered fuzzy numbers. In:
Fuzzy Optimization - Recent Advance, Eds.: M. Delgado, J. Kacprzyk,
J.-L. Verdegay and M.A. Vila, Physica-Verlag, Heidelberg-N. York, 1994,
135-147.
[43] M. Kovacs and L.H. Tran, Algebraic structure of centered M- fuzzy num-
bers. Fuzzy Sets and Systems 39 (1), 1991, 91-99.
[46] S.R. Lay, Convex sets and their applications. John Wiley & Sons Inc.,
New York- Chichester- Brisbane- Toronto- Singapore, 1982.
[47] R. Lowen and M. Roubens, Eds., Fuzzy logic - State of the art. Theory
and Decision Library, Series D: System Theory, knowledge engineering
and problem solving, Kluwer Acad. Publ., Dordrecht, Boston, London,
1993.
[49] K. Menger, Statistical metrics. Proc. Nat. Acad. Sci. U.S.A., 28, 535-537,
1942.
[51] B. Mond and T. Weir, Generalized concavity and duality, In: S. Schaible
and W.T. Ziemba, Eds., Generalized concavity in optimization and eco-
nomics, Academic Press, New York, (1981) 253-289.
[52] C.V. Negoita and D.A. Ralescu, Applications of Fuzzy Sets to Systems
Analysis, J. Wiley & Sons, New York, 1975.
[53] H.T. Nguyen, A note on the extension principle for fuzzy sets. J. Math.
Anal. Appl., 64 (1978) 369-380.
[54] S.A. Orlovsky, Decision making with fuzzy preference relation. Fuzzy Sets
and Systems 1 (1978), 155-167.
[59] D. Ralescu, A survey of the representation of fuzzy concepts and its ap-
plications. In: Advances in Fuzzy Set Theory and Applications, M.M.
Gupta, R.K. Ragade and R. Yager, Eds., North-Holland, Amsterdam,
1979, 77-91.
[60] J. Ramik and J. Římánek, Inequality relation between fuzzy numbers and
its use in fuzzy optimization. Fuzzy Sets and Systems 16 (1985) 123-138.
[61] J. Ramik, Extension principle in fuzzy optimization. Fuzzy Sets and Sys-
tems 19, 1986, 29-37.
[64] J. Ramik and J. Římánek, The linear programming problem with vaguely
formulated relations between the coefficients. In: M. Fedrizzi, J. Kacprzyk
and S. Orlovski, Eds., Interfaces between Artificial Intelligence and Opera-
tions Research in Fuzzy Environment, D. Riedel Publ. Comp., Dordrecht-
Boston-Lancaster-Tokyo, 1989.
[67] J. Ramik, Inequality relations between fuzzy data. In: H. Bandemer, Ed.,
Modelling Uncertain Data, Akademie Verlag, Berlin, 1992, pp. 158-162.
[72] J. Ramik, New interpretation of the inequality relations in fuzzy goal pro-
gramming problems. Central European Journal for Operations Research
and Economics 4 (1996) 112-125.
[75] J. Ramik, Fuzzy goals and fuzzy alternatives in fuzzy goal programming
problems. Fuzzy Sets and Systems 111 (2000)1, 81-86.
[97] R.R. Yager, On a general class of fuzzy connectives. Fuzzy Sets and Sys-
tems 4 (1980) 235-242.
[98] L.A. Zadeh, Fuzzy sets. Inform. Control (8) 1965, 338-353.
[99] L.A. Zadeh, The concept of a linguistic variable and its application to
approximate reasoning, Information Sciences, Part I: 8, 1975, 199-249;
Part II: 8, 301-357; Part III: 9, 43-80.
[100] H.-J. Zimmermann, Fuzzy programming and linear programming with
several objective functions. Fuzzy Sets and Systems 1 (1978) 45-55.
[101] H.-J. Zimmermann and P. Zysno, Latent connectives in human decision
making. Fuzzy Sets and Systems 4 (1980), 37-51.