
Soft Computing: Overview and Recent

Developments in Fuzzy Optimization

Jaroslav Ramík
Ústav pro výzkum a aplikace fuzzy modelování, Ostravská univerzita

November 2001

Abstract
Soft Computing (SC) represents a significant paradigm shift in the
aims of computing, which reflects the fact that the human mind, unlike
present day computers, possesses a remarkable ability to store and process
information which is pervasively imprecise, uncertain and lacking in cat-
egoricity. At this juncture, the principal constituents of Soft Computing
(SC) are: Fuzzy Systems (FS), including Fuzzy Logic (FL); Evolutionary
Computation (EC), including Genetic Algorithms (GA); Neural Networks
(NN), including Neural Computing (NC); Machine Learning (ML); and
Probabilistic Reasoning (PR). In this work, we focus on fuzzy method-
ologies and fuzzy systems, as they bring basic ideas to other SC method-
ologies. The other constituents of SC are also briefly surveyed here but
for details we refer to the existing vast literature. In Part 1 we present
an overview of developments in the individual parts of SC. For each con-
stituent of SC we overview its background, main problems, methodologies
and recent developments. We focus mainly on Fuzzy Systems, for which
the main literature, main professional journals and other relevant informa-
tion is also supplied. The other constituents of SC are reviewed only briefly.
In Part 2 we investigate some fuzzy optimization systems. First, we in-
vestigate Fuzzy Sets - we define fuzzy sets within the classical set theory
by nested families of sets, and discuss how this concept is related to the
usual definition by membership functions. Further, we will bring some
important applications of the theory based on generalizations of concave
functions. We study a decision problem, i.e. the problem of finding a ”best”
decision in the set of feasible alternatives with respect to several (i.e. more
than one) criteria functions. Within the framework of such a decision sit-
uation, we deal with the existence and mutual relationships of three kinds
of ”optimal decisions”: Weak Pareto-Maximizers, Pareto-Maximizers and
Strong Pareto-Maximizers - particular alternatives satisfying some natural
and rational conditions. We also study the compromise decisions maxi-
mizing some aggregation of the criteria. The criteria considered here will
be functions defined on the set of feasible alternatives with the values in
the unit interval. In Fuzzy mathematical programming problems (FMP)
the values of the objective function describe effects from choices of the
alternatives. Among others we show that the class of all MP problems
with (crisp) parameters can be naturally embedded into the class of FMP
problems with fuzzy parameters. Finally, we deal with a class of fuzzy
linear programming problems. We show that the class of crisp (classical)
LP problems can be embedded into the class of FLP ones. Moreover, for
FLP problems we define the concept of duality and prove the weak and
strong duality theorems. Further, we investigate special classes of FLP -
interval LP problems, flexible LP problems, LP problems with interactive
coefficients and LP problems with centered coefficients. We present here
an original mathematically oriented and unified approach.
Contents

I Soft Computing - Overview

1 Introduction
1.1 Guiding Principle of Soft Computing
1.2 Importance of Soft Computing
1.3 The Contents of the Study

2 Fuzzy Systems
2.1 Introduction
2.2 Fuzzy Sets
2.3 Fuzzy Logic
2.4 Fuzzy Numbers and Fuzzy Arithmetic
2.5 Determination of Membership Functions
2.5.1 Subjective evaluation and elicitation
2.5.2 Ad-hoc forms and methods
2.5.3 Converted frequencies or probabilities
2.5.4 Physical measurement
2.6 Membership Degrees Versus Probabilities
2.7 Possibility Theory
2.8 Fuzzy Expert Systems
2.9 Fuzzy Control
2.10 Fuzzy Clustering
2.11 Decision Making in Fuzzy Environment
2.12 Fuzzy Mathematical Programming
2.13 Mailing Lists
2.14 Main International Journals
2.15 Web Pages
2.16 Fuzzy Researchers

3 Evolutionary Computation
3.1 Genetic Algorithm (GA)
3.2 Evolutionary Programming (EP)
3.3 Evolution Strategies (ES)
3.4 Classifier Systems (CS)
3.5 Genetic Programming (GP)

4 Neural Networks
4.1 Introduction
4.2 Principles of Neural Networks
4.3 Learning Methods in NNs
4.4 Well-Known Kinds of NNs

5 Machine Learning
5.1 Introduction
5.2 Three Basic Theories
5.3 Supervised Machine Learning
5.4 Reinforcement Machine Learning
5.5 Unsupervised Machine Learning

6 Probabilistic Reasoning
6.1 Introduction
6.2 Markov and Bayesian Networks
6.3 Decision Analysis based on PR
6.4 Learning Structure from Data
6.5 Dempster-Shafer Theory

7 Conclusion

II Fuzzy Optimization

8 Fuzzy Sets
8.1 Introduction
8.2 Definition and Basic Properties
8.3 Operations with Fuzzy Sets
8.4 Extension Principle
8.5 Binary and Valued Relations
8.6 Fuzzy Relations
8.7 Fuzzy Extensions of Valued Relations
8.8 Fuzzy Quantities and Fuzzy Numbers
8.9 Fuzzy Extensions of Real Functions
8.10 Higher Dimensional Fuzzy Quantities
8.11 Fuzzy Extensions of Valued Relations

9 Fuzzy Multi-Criteria Decision Making
9.1 Introduction
9.2 Fuzzy Criteria
9.3 Pareto-Optimal Decisions
9.4 Compromise Decisions
9.5 Generalized Compromise Decisions
9.6 Aggregation of Fuzzy Criteria
9.7 Extremal Properties
9.8 Application to Location Problem
9.9 Application in Engineering Design

10 Fuzzy Mathematical Programming
10.1 Introduction
10.2 Modelling Reality by FMP
10.3 MP Problem with Parameters
10.4 Formulation of FMP Problem
10.5 Feasible Solutions of the FMP Problem
10.6 Properties of Feasible Solution
10.7 Optimal Solutions of the FMP Problem

11 Fuzzy Linear Programming
11.1 Introduction
11.2 Formulation of FLP problem
11.3 Properties of Feasible Solution
11.4 Properties of Optimal Solutions
11.5 Extended Addition in FLP
11.6 Duality
11.7 Special Models of FLP
11.7.1 Interval Linear Programming
11.7.2 Flexible Linear Programming
11.7.3 FLP Problems with Interactive Fuzzy Parameters
11.7.4 FLP Problems with Centered Parameters
11.8 Illustrative Examples

12 Conclusion
Part I

Soft Computing - Overview

Chapter 1

Introduction

1.1 Guiding Principle of Soft Computing


Soft computing is tolerant of imprecision, uncertainty, partial truth, and ap-
proximation. In effect, the role model for soft computing is the human mind.
The guiding principle of soft computing is: Exploit the tolerance for imprecision,
uncertainty, partial truth, and approximation to achieve tractability, robustness
and low solution cost, and to solve the fundamental problem associated with
current technological development: the lack, in present-day information technology,
of the intelligence required to enable human-centered functionality. The
basic ideas underlying soft computing in its current incarnation have links to
many earlier influences, among them Zadeh’s 1965 paper on fuzzy sets; the 1975
paper on the analysis of complex systems and decision processes; and the 1979
report (1981 paper) on possibility theory and soft data analysis. The inclusion
of neural computing and genetic computing in soft computing came at a later
point.
At this juncture, the principal constituents of Soft Computing (SC) are:

• Fuzzy Systems (FS), including Fuzzy Logic (FL);

• Evolutionary Computation (EC), including Genetic Algorithms (GA);

• Neural Networks (NN), including Neural Computing (NC);

• Machine Learning (ML);

• Probabilistic Reasoning (PR).

Fuzzy theory plays a leading role in soft computing and this stems from the
fact that human reasoning is not crisp and admits degrees. What is important to
note is that soft computing is not a melange. Rather, it is a partnership in which
each of the partners contributes a distinct methodology for addressing problems
in its domain. In this perspective, the principal constituent methodologies in


SC are complementary rather than competitive. Furthermore, soft computing
may be viewed as a foundation component for the emerging field of conceptual
intelligence.

1.2 Importance of Soft Computing


The complementarity of FS, NN, EC, ML and PR has an important conse-
quence: in many cases a problem can be solved most effectively by using FS,
NN, EC, ML and PR in combination rather than exclusively. A striking example
of a particularly effective combination is what has come to be known as ”neu-
rofuzzy systems.” Such systems are becoming increasingly visible as consumer
products ranging from air conditioners and washing machines to photocopiers
and camcorders. Less visible but perhaps even more important are neurofuzzy
systems in industrial applications. What is particularly significant is that in
both consumer products and industrial systems, the employment of soft com-
puting techniques leads to systems which have high MIQ (Machine Intelligence
Quotient). In large measure, it is the high MIQ of SC-based systems that ac-
counts for the rapid growth in the number and variety of applications of soft
computing.
The conceptual structure of soft computing suggests that future university
students should be trained not just in fuzzy logic, neurocomputing, genetic
algorithms, or probabilistic reasoning but in all of the associated methodologies,
though not necessarily to the same degree.
For example, at present, the BISC Group (Berkeley Initiative on Soft Com-
puting) comprises close to 1000 students, professors, employees of private and
non-private organizations and, more generally, individuals who have interest or
are active in soft computing or related areas. Currently, BISC has over 50 Insti-
tutional Affiliates, with their ranks continuing to grow in number. At Berkeley,
U.S.A., BISC provides a supportive environment for visitors, postdocs and stu-
dents who are interested in soft computing and its applications. In the main,
support for BISC comes from member companies. More details on the web page:
http://www-bisc.cs.berkeley.edu

1.3 The Contents of the Study


The successful applications of soft computing suggest that the impact of soft
computing will be felt increasingly in coming years of the new millennium.
Soft computing is likely to play an especially important role in science and
engineering, but eventually its influence may extend much farther. Building
human-centered systems is an imperative task for scientists and engineers in
the new millennium.
In many ways, soft computing represents a significant paradigm shift in
the aims of computing - a shift which reflects the fact that the human mind,
unlike present day computers, possesses a remarkable ability to store and process
information which is pervasively imprecise, uncertain and lacking in categoricity.


In this work, we focus primarily on fuzzy methodologies and fuzzy systems,
as they bring basic ideas to other SC methodologies. The other constituents of
SC are also surveyed here but for details we refer to the existing vast literature.
In Part 1 we present an overview of developments in the individual parts
of SC. For each constituent of SC we briefly overview its background, main
problems, methodologies and recent developments. We deal mainly with Fuzzy
Systems in which area we have been researching for about 20 years. Here, the
main literature, main professional journals and other relevant information is
also supplied. The other constituents of SC are reviewed only briefly.
In Part 2 we investigate some fuzzy optimization systems. In a way, this
work is a continuation of the former research report: J. Ramik and M. Vlach,
Generalized concavity as a basis for optimization and decision analysis. Research
report IS-RR-2001-003, JAIST Hokuriku 2001, 116 p.
In Chapter 8 we deal with fuzzy sets. Already in the early stages of the
development of fuzzy set theory, it has been recognized that fuzzy sets can be
defined and represented in several different ways. Here we define fuzzy sets
within the classical set theory by nested families of sets, and then we discuss
how this concept is related to the usual definition by membership functions.
Binary and valued relations are extended to fuzzy relations and their properties
are extensively investigated. Moreover, fuzzy extensions of real functions are
studied, particularly the problem of the existence of sufficient conditions under
which the membership function of the function value is quasiconcave. Sufficient
conditions for the commutativity of the diagram ”mapping - α-cutting” are presented
in the form of the classical result of Nguyen.
In the second part - Applications we will bring some important applications
of the theory presented in the first part of the book, based on generalizations of
concave functions.
In Chapter 9, we consider a decision problem, i.e. the problem of finding a
”best” decision in the set of feasible alternatives with respect to several (i.e.
more than one) criteria functions. Within the framework of such a decision
situation, we deal with the existence and mutual relationships of three kinds of
”optimal decisions”: Weak Pareto-Maximizers, Pareto-Maximizers and Strong
Pareto-Maximizers - particular alternatives satisfying some natural and rational
conditions. We study also the compromise decisions maximizing some aggrega-
tion of the criteria. The criteria considered here will be functions defined on
the set of feasible alternatives with the values in the unit interval. The results
by Ramik and Vlach (2001) are extended and presented in the framework of
multi-criteria decision making.
Fuzzy mathematical programming problems (FMP) investigated in Chapter
10 form a subclass of decision - making problems where preferences between
alternatives are described by means of objective function(s) defined on the set
of alternatives in such a way that greater values of the function(s) correspond
to more preferable alternatives (if ”higher value” is ”better”). The values of
the objective function describe effects from choices of the alternatives. In this
chapter we begin with the formulation of an FMP problem associated with the
classical MP problem. After that we define a feasible solution of the FMP problem
and an optimal solution of the FMP problem as special fuzzy sets. From a practical point
of view, α-cuts of these fuzzy sets are important, particularly the α-cuts with
the maximal α. Among others we show that the class of all MP problems with
(crisp) parameters can be naturally embedded into the class of FMP problems
with fuzzy parameters.
In Chapter 11 we deal with a class of fuzzy linear programming problems and
again introduce the concepts of feasible and optimal solutions - the necessary
tools for dealing with such problems. In this way we show that the class of crisp
(classical) LP problems can be embedded into the class of FLP ones. Moreover,
for FLP problems we define the concept of duality and prove the weak and strong
duality theorems. Further, we investigate special classes of FLP - interval LP
problems, flexible LP problems, LP problems with interactive coefficients and
LP problems with centered coefficients.
In both of the last two chapters we take advantage of an original unified approach
by which a number of new and as yet unpublished results are obtained.
Our approach to SC presented in this work is mathematically oriented, as the
author is a mathematician. There exist, however, other approaches to SC, e.g.
the human-science and computer-science approaches, putting more stress on other
aspects of the subject.
Chapter 2

Fuzzy Systems

2.1 Introduction
Fuzzy systems are based on fuzzy logic, a generalization of conventional (Boolean)
logic that has been extended to handle the concept of partial truth — truth val-
ues between ”completely true” and ”completely false”. It was introduced by L.
A. Zadeh of University of California, Berkeley, U.S.A., in the 1960’s, as a means
to model the uncertainty of natural language. Zadeh himself says that rather
than regarding fuzzy theory as a single theory, we should regard the process of
“fuzzification” as a methodology to generalize any specific theory from a crisp
(discrete) to a continuous (fuzzy) form.

2.2 Fuzzy Sets


The theory of fuzzy sets now encompasses a well organized corpus of basic
notions including (and not restricted to) aggregation operations, a generalized
theory of relations, specific measures of information content, a calculus of fuzzy
numbers. Fuzzy sets are also the cornerstone of a non-additive uncertainty
theory, namely possibility theory, and of a versatile tool for both linguistic and
numerical modeling: fuzzy rule-based systems. Numerous works now combine
fuzzy concepts with other scientific disciplines as well as modern technologies.
In mathematics fuzzy sets have triggered new research topics in connec-
tion with category theory, topology, algebra, analysis. Fuzzy sets are also part
of a recent trend in the study of generalized measures and integrals, and are
combined with statistical methods. Furthermore, fuzzy sets have strong logical
underpinnings in the tradition of many-valued logics.
Fuzzy set-based techniques are also an important ingredient in the develop-
ment of information technologies. In the field of information processing fuzzy
sets are important in clustering, data analysis and data fusion, pattern recogni-
tion and computer vision. Fuzzy rule-based modeling has been combined with
other techniques such as neural nets and evolutionary computing and applied to


systems and control engineering, with applications to robotics, complex process
control and supervision. In the field of information systems, fuzzy sets play a
role in the development of intelligent and flexible man-machine interfaces and
the storage of imprecise linguistic information. In Artificial Intelligence various
forms of knowledge representation and automated reasoning frameworks benefit
from fuzzy set-based techniques, for instance in interpolative reasoning, non-
monotonic reasoning, diagnosis, logic programming, constraint-directed reason-
ing, etc. Fuzzy expert systems have been devised for fault diagnosis, and also
in medical science. In decision and organization sciences, fuzzy sets have had a
great impact on preference modeling and multicriteria evaluation, and have helped
bring optimization techniques closer to the users' needs.
found in many areas such as management, production research, and finance.
Moreover concepts and methods of fuzzy set theory have attracted scientists in
many other disciplines pertaining to human-oriented studies such as cognitive
psychology and some aspects of social sciences.
In classical set theory, a subset U of a set S can be considered as a mapping
from the elements of S to the elements of the set {0, 1}, consisting of the two
elements 0 and 1, i.e.:

U : S → {0, 1}.

This mapping may be represented as a set of ordered pairs, with exactly one
ordered pair present for each element of S. The first element of the ordered
pair is an element of the set S, and the second element is an element of the set
{0, 1}. The value zero is used to represent non-membership, and the value one
is used to represent membership. The truth or falsity of the statement:

x is in U
is determined by finding the ordered pair whose first element is x. The statement
is true if the second element of the ordered pair is 1, and the statement is false
if it is 0.
Similarly, a fuzzy subset (or fuzzy set) F of a set S can be defined as a set
of ordered pairs, each with the first element from S, and the second element
from the interval [0, 1], with exactly one ordered pair present for each element
of S. This defines a mapping between elements of the set S and values in the
interval [0, 1]. The value zero is used to represent complete non-membership,
the value one is used to represent complete membership, and values in-between
are used to represent intermediate degrees of membership. The set S is referred
to as the universe of discourse for the fuzzy subset F . Frequently, the mapping
is described as a function, the membership function of F . The ordinary sets are
considered as special cases of fuzzy sets with the membership functions equal
to the characteristic functions. They are called crisp sets.
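As a small illustration, the two kinds of mappings can be written directly in Python; the universe S and the grades below are invented for this sketch, not taken from the text:

```python
# Crisp subset U of S as a characteristic function S -> {0, 1}:
# exactly one ordered pair (x, U[x]) per element x of S.
S = ["a", "b", "c", "d"]
U = {"a": 1, "b": 0, "c": 1, "d": 0}

def is_in_U(x):
    """Truth of the statement 'x is in U' (0 = false, 1 = true)."""
    return U[x]

# Fuzzy subset F of S: the second component now ranges over [0, 1],
# so intermediate degrees of membership become possible.
F = {"a": 1.0, "b": 0.3, "c": 0.75, "d": 0.0}

def degree_in_F(x):
    """Degree of truth of 'x is in F' -- the membership function of F."""
    return F[x]

print(is_in_U("a"))      # 1   (crisp membership)
print(degree_in_F("b"))  # 0.3 (partial membership)
```

A crisp set is recovered as the special case in which the membership function takes only the values 0 and 1.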
The above definition of a fuzzy set establishes the equivalence between a fuzzy set
as such, intuitively a set-based concept, and its membership function, a mapping
from the universe of discourse to the unit interval [0, 1], or, more generally, to
some lattice L. Here, the operations with fuzzy sets are defined by the operations
with functions.
In Chapter 8, specially devoted to fuzzy sets, our approach is reversed: we
define a fuzzy set as a family of (crisp) sets, where each member of the family
corresponds to a specific grade of membership from the unit interval [0, 1]. In
doing so, we easily define the corresponding membership function. This approach is
intuitively understandable and practically tractable, as it is natural to
work with (crisp) sets whose membership grade is greater than or equal to some level.
Moreover, this approach seems to be more elegant to some mathematicians who
are rather reluctant to speak about ”sets” having in mind ”functions”.
The fuzzy set based on the concept of a family of nested sets enjoys, among
others, the following advantages:
• it makes it possible to create a consistent (mathematical) theory,
• no ”artificial” identification of a fuzzy set with its membership function is
necessary,
• nonfuzzy sets can be naturally embedded into the fuzzy sets,
• nonfuzzy concepts may be extended to represent fuzzy ones,
• any fuzzy problem can be viewed as a family of nonfuzzy ones,
• practical tractability is achieved.
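This nested-family view can be sketched in a few lines of Python; the particular family below is hypothetical and serves only to illustrate how a membership function and α-cuts are recovered from the levels:

```python
# A fuzzy set given as a nested family {alpha: crisp level set}:
# alpha1 <= alpha2 implies level_sets[alpha2] is a subset of level_sets[alpha1].
level_sets = {
    0.25: {"a", "b", "c"},
    0.50: {"a", "b"},
    1.00: {"a"},
}

def membership(x):
    """Recover the membership grade as the sup of levels whose set contains x."""
    grades = [alpha for alpha, cut in level_sets.items() if x in cut]
    return max(grades, default=0.0)

def alpha_cut(alpha):
    """Crisp set of all elements with membership grade >= alpha."""
    universe = set().union(*level_sets.values())
    return {x for x in universe if membership(x) >= alpha}

print(membership("a"))  # 1.0
print(membership("c"))  # 0.25
print(alpha_cut(0.5))   # {'a', 'b'}
```

Each crisp problem on a level set is then an ordinary (nonfuzzy) problem, which is exactly why any fuzzy problem can be viewed as a family of nonfuzzy ones.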

2.3 Fuzzy Logic


The degree to which the statement

x is in F
is true is determined by finding the ordered pair whose first element is x. The
degree of truth of the statement is the second element of the ordered pair. In
practice, the terms ”membership function” and fuzzy subset get used inter-
changeably. That is a lot of mathematical baggage, so here is an example. Let
us talk about people and ”tallness” expressed as their HEIGHT . In this case
the set S (the universe of discourse) is the set of people. Let us define a fuzzy
subset tall, which will answer the question ”to what degree is person x tall?”
Zadeh describes HEIGHT as a linguistic variable, which represents our cog-
nitive category of ”tallness”. The values of this linguistic variable are fuzzy
subsets such as tall, very_tall, or short. To each person in the universe of discourse,
we assign a degree of membership in the fuzzy subset tall. The easiest way to
do this is with a membership function based on the real function h (”height of
a person in cm”) which is defined for each person x ∈ S :

tall(x) = 0 if h(x) < 150,
        = (h(x) − 150)/50 if 150 < h(x) ≤ 200,
        = 1 if h(x) > 200.

Figure 2.1: Graphs of the membership functions tall, very_tall and short.

The fuzzy subset very_tall may be defined by a nonlinear function of h(x) :

very_tall(x) = 0 if h(x) < 170,
             = ((h(x) − 170)/50)^2 if 170 < h(x) ≤ 220,
             = 1 if h(x) > 220.
On the other hand, the fuzzy subset short is defined as follows:

short(x) = 1 if h(x) < 150,
         = (200 − h(x))/50 if 150 < h(x) ≤ 200,
         = 0 if h(x) > 200.
Graphs of these membership functions are shown in Figure 2.1.
Given this definition, here are some example values:

Person x    h(x)   tall(x)
Mikio       135    0.00
Hideki      173    0.46
Atsuko      155    0.10
Masato      195    0.90
Expressions like ”x is A” can be interpreted as degrees of truth, e.g., ”Hideki
is tall” = 0.46.
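A minimal Python sketch of the membership function tall reproduces the table above (heights taken from the example):

```python
def tall(h):
    """Membership grade of 'x is tall' computed from height h in cm."""
    if h < 150:
        return 0.0
    if h <= 200:
        return (h - 150) / 50
    return 1.0

# Heights from the example in the text.
people = {"Mikio": 135, "Hideki": 173, "Atsuko": 155, "Masato": 195}
for name, h in people.items():
    print(f"{name:7s} {h:3d} {tall(h):.2f}")
# Mikio   135 0.00
# Hideki  173 0.46
# Atsuko  155 0.10
# Masato  195 0.90
```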

Remark 1 Membership functions used in most applications almost never have
as simple a shape as tall(x) in the example stated above. At minimum, they
tend to be triangles pointing up, and they can be much more complex than that.
Also, the discussion characterizes membership functions as if they always are
based on a single criterion, but this is not always the case, although it is quite
common. One could, for example, want to have the membership function for
tall depend on both a person’s height and their age, e.g. ”somebody is tall for
his age”. This is perfectly legitimate, and occasionally used in practice. It is
referred to as a two-dimensional membership function, or a ”fuzzy relation”. It
is also possible to have even more criteria, or to have the membership function
depend on elements from two completely different universes of discourse.

Now that we know what a statement like ”x is LOW ” means in fuzzy logic,
how do we interpret a statement like:

(x is low) AND (y is high) OR (NOT z is medium).


The standard definitions in fuzzy logic are:

truth(NOT x) = 1.0 − truth(x),
truth(x AND y) = min{truth(x), truth(y)},          (2.1)
truth(x OR y) = max{truth(x), truth(y)}.

Note that if you plug just the values zero and one into these definitions,
you get the same truth tables as you would obtain from conventional predicate
logic; in particular, the fuzzy logic operations (2.1) yield the predicate logic
operations on condition that all fuzzy membership grades are restricted to the
traditional set {0, 1}. This effectively establishes fuzzy subsets and logic as a true
generalization of classical set theory and logic. In fact, by this reasoning all
crisp (traditional) subsets are fuzzy subsets of this very special type; and there
is no conflict between fuzzy and crisp methods.
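A few lines of Python (a sketch, not part of the original text) confirm that the operations (2.1) reduce to the Boolean truth tables when grades are restricted to {0, 1}:

```python
def f_not(x):
    return 1.0 - x

def f_and(x, y):
    return min(x, y)

def f_or(x, y):
    return max(x, y)

# Restricted to the crisp grades {0, 1}, the fuzzy connectives
# reproduce the Boolean truth tables of classical logic.
for x in (0, 1):
    assert f_not(x) == (not x)
    for y in (0, 1):
        assert f_and(x, y) == (x and y)
        assert f_or(x, y) == (x or y)

# With intermediate grades they behave as graded truth values:
print(f_and(0.46, 1.0))  # 0.46
print(f_not(0.46))
```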

Example 2 Assume the same definition of tall as above, and in addition, as-
sume that we have a fuzzy subset old defined by the membership function:
old(x) = 0 if a(x) < 18,
       = (a(x) − 18)/42 if 18 < a(x) ≤ 60,
       = 1 if a(x) > 60.

where a is a function defined on the set of all people S (”age of a person in
years”). Moreover, let

a = (x is tall) AND (x is old),
b = (x is tall) OR (x is old),
c = NOT(x is tall).

Then we can compute the following values:



Person x    h(x)   a(x)   x is tall   x is old   a      b      c
Mikio       135    10     0.00        0.00       0.00   0.00   1.00
Hideki      173    85     0.46        1.00       0.46   1.00   0.54
Atsuko      155    40     0.10        0.52       0.10   0.52   0.90
Masato      195    22     0.90        0.10       0.10   0.90   0.10
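The values in Example 2 can be recomputed with a short Python sketch; the compact definitions of tall and old below are equivalent to the piecewise formulas for the sample data:

```python
def tall(h):
    """Membership of 'x is tall' from height h in cm (piecewise linear)."""
    return 0.0 if h < 150 else min((h - 150) / 50, 1.0)

def old(a):
    """Membership of 'x is old' from age a in years (piecewise linear)."""
    return 0.0 if a < 18 else min((a - 18) / 42, 1.0)

# (height in cm, age in years) taken from Example 2.
people = {"Mikio": (135, 10), "Hideki": (173, 85),
          "Atsuko": (155, 40), "Masato": (195, 22)}

for name, (h, age) in people.items():
    t, o = tall(h), old(age)
    a = min(t, o)   # (x is tall) AND (x is old)
    b = max(t, o)   # (x is tall) OR  (x is old)
    c = 1.0 - t     # NOT (x is tall)
    print(f"{name:7s} {t:.2f} {o:.2f} {a:.2f} {b:.2f} {c:.2f}")
# Hideki's row prints: Hideki  0.46 1.00 0.46 1.00 0.54
```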

After these simple introductory examples, we now try to briefly explain what
fuzzy logic is about.
Generally speaking, logic, as a mathematical theory, studies the notions of
consequence. It deals with propositions (sentences), sets of propositions and
the relation of consequence among them, see Hájek (1998). The task of formal
logic is to present all this by means of well-defined logical calculi admitting
exact investigation. Various calculi differ in their definitions of sentences and
concepts of consequences, e.g. propositional/predicate logics, modal proposi-
tional/predicate logics, many-valued propositional/predicate logics etc. Often a
logical calculus has two notions of consequence: syntactical (based on a notion
of proof) and semantical (based on a notion of truth). The natural questions of
soundness (does provability imply truth?) pose themselves.
Fuzziness is imprecision or vagueness; a fuzzy proposition may be true to
some degree. Standard examples of fuzzy propositions use a linguistic variable
such as HEIGHT or AGE, with possible values short, very tall, young, old, etc.
Fuzzy logic is viewed as a formal mathematical theory for the representation
of uncertainty. Uncertainty is crucial for the management of real systems: if you
had to park your car precisely in one place, it would not be possible. Instead,
you work within, say, 10 cm tolerances. The presence of uncertainty is the
price you pay for handling a complex system. Nevertheless, fuzzy logic is a
mathematical formalism, and a membership grade is a precise number. What’s
crucial to realize is that fuzzy logic is a logic of fuzziness, not a logic which is
itself fuzzy. But that is natural: just as the laws of probability are not random,
so the laws of fuzziness are not vague.
Fuzzy logic is used directly in a number of applications, e.g. the Sony
PalmTop apparently uses a fuzzy logic decision tree algorithm to perform hand-
written (computer light-pen) Kanji character recognition. Most applications of
fuzzy logic use it as the underlying logic system for fuzzy expert systems, e.g.
cameras, video-cameras, washing machines, blood-pressure measuring devices,
rice-cookers, air-conditioners etc.

Relevant literature about fuzzy sets and fuzzy logic:

Kantrowitz, M., Horstkotte, E. and Joslyn, C., ”Answers to Frequently


Asked Questions about Fuzzy Logic and Fuzzy Expert Systems”, comp.ai.fuzzy,
<month, <year, ftp.cs.cmu.edu: /user/ai/pubs/faqs/fuzzy/ fuzzy.faq
Bezdek, J. C., ”Fuzzy Models – What Are They, and Why?”, IEEE Trans-
actions on Fuzzy Systems, 1993, 1:1, 1-6 .
Bandler, W. and Kohout, L. J., ”Fuzzy Power Sets and Fuzzy Implication
Operators”, Fuzzy Sets and Systems 4,1980, 13-30.

Dubois, D. and Prade, H., ”A Class of Fuzzy Measures Based on Triangle


Inequalities”, Internat. J. General Systems, 8, 36-48.
Gottwald, S., Fuzzy Sets and Fuzzy Logic, Vieweg, Wiesbaden, 1993.
P. Hájek, Metamathematics of fuzzy logic. Kluwer Acad. Publ., Series
Trends in Logic, Dordrecht /Boston /London, 1998.
Höhle, U. and Klement, P., Eds., Non-Classical Logics and their Applications
to Fuzzy Subsets, Kluwer Academic Publishers, Dordrecht /Boston /London
1995.
Novák, V., Perfilieva I. and Mockor, J., Mathematical Principles of Fuzzy
Logic. Kluwer Academic Publishers, Dordrecht /Boston /London, 1999.
Ramik, J. and Vlach, M., Generalized concavity as a basis for optimization
and decision analysis. Research report IS-RR-2001-003, JAIST Hokuriku 2001.
Zadeh, L.A. ”Fuzzy sets”. Inform. Control (8) 1965, 338-353.
Zadeh, L. A. ”The Calculus of Fuzzy Restrictions”, In: Fuzzy Sets and
Applications to Cognitive and Decision Making Processes, Eds. Zadeh, L.A. et.
al., Academic Press, New York, 1975, 1-39.
Zadeh, L.A. ”The concept of a linguistic variable and its application to
approximate reasoning”. Information Sciences, Part I: 8, 1975, 199-249; Part
II: 8, 301-357; Part III: 43-80.

2.4 Fuzzy Numbers and Fuzzy Arithmetic


Fuzzy numbers are fuzzy subsets of the real line. They have a peak or plateau
with membership grade 1, over which the members of the universe are com-
pletely in the set. The membership function is increasing towards the peak and
decreasing away from it. Fuzzy numbers are used very widely in fuzzy con-
trol applications. A typical case is the triangular fuzzy number which is one
form of the fuzzy number. Slope and trapezoidal functions are also used, as
well as exponential curves similar to Gaussian probability densities. For more
information, see Chapter 8 and also the extensive literature:

Relevant literature about fuzzy numbers:

Dubois, D. and Prade, H., ”Fuzzy Numbers: An Overview”, in Analysis of


Fuzzy Information 1:3-39, CRC Press, Boca Raton, 1987.
Dubois, D. and Prade, H., ”Mean Value of a Fuzzy Number”, Fuzzy Sets
and Systems 24(3):279-300, 1987.
Kaufmann, A., and Gupta, M., ”Introduction to Fuzzy Arithmetic”, Rein-
hold, New York, 1985.

2.5 Determination of Membership Functions


Methods for constructing membership functions of fuzzy sets break down broadly into the following categories:

2.5.1 Subjective evaluation and elicitation


As fuzzy sets are usually intended to model people’s cognitive states, they can
be determined from either simple or sophisticated elicitation procedures. At
the very least, subjects simply draw or otherwise specify different membership
curves appropriate to a given problem. These subjects are typically experts in
the problem area, or they are given a more constrained set of possible curves
from which they choose. Under more complex methods, users can be tested
using psychological methods.

2.5.2 Ad-hoc forms and methods


While there is a vast (infinite) array of possible membership function forms,
most actual fuzzy control operations draw from a very small set of different
curves, for example simple forms of fuzzy numbers. This simplifies the problem,
for example to choosing just the central value and the slope on either side, see
Chapter 8.

2.5.3 Converted frequencies or probabilities


Sometimes information taken in the form of frequency histograms or other probability curves is used as the basis for constructing a membership function. There
are a variety of possible conversion methods, each with its own mathematical
and methodological strengths and weaknesses. However, it should always be
remembered that membership functions are not (necessarily) probabilities.

2.5.4 Physical measurement


Many applications of fuzzy logic use physical measurement, but almost none
measure the membership grade directly. Instead, a membership function is
provided by another method, and then the individual membership grades of
data are calculated from it (see Turksen, below).

Relevant literature about membership functions:

Z.Q. Liu and S. Miyamoto, Eds.: Soft Computing and Human - Centered
Machines, Springer, Tokyo-Berlin-Heidelberg-New York, 2000.
Turksen, I.B., ”Measurement of Fuzziness: Interpretation of the Axioms of
Measure”, in Proceeding of the Conference on Fuzzy Information and Knowledge
Representation for Decision Analysis. IFAC, Oxford, 1984,97-102.

2.6 Membership Degrees Versus Probabilities


This question can be addressed in two ways:

• how does fuzzy theory differ from probability theory mathematically,



• how does it differ in interpretation and application.

At the mathematical level, fuzzy values are commonly misunderstood to be


probabilities, or fuzzy logic is interpreted as some new way of handling probabil-
ities. But this is not the case. A minimum requirement of probabilities is addi-
tivity, that is that they must add together to one, or the integral of their density
functions must be one. However, this does not hold in general with membership
grades. And while membership grades can be determined with probability den-
sities in mind, there are other methods as well which have nothing to do with
frequencies or probabilities. Because of this, fuzzy researchers have gone to great
pains to distance themselves from probability. But in so doing, many of them
have lost track of another point, which is that the converse in some sense does
hold: probability distributions can be converted to fuzzy sets. As fuzzy sets and
fuzzy logic generalize Boolean sets and logic, they also generalize probability. In
fact, from a mathematical perspective, fuzzy sets and probability exist as parts
of a greater Generalized Information Theory which includes many formalisms
for representing uncertainty (including random sets, Dempster-Shafer evidence
theory, probability intervals, possibility theory, general fuzzy measures, interval
analysis, etc.). Furthermore, one can also talk about random fuzzy events and
fuzzy random events. This whole issue is beyond the scope of this survey. We
refer to the books and papers cited below.
Semantically, the distinction between fuzzy logic and probability theory has
to do with the difference between the notions of probability and a degree of
membership. Probability statements are about the likelihoods of outcomes: an
event either occurs or does not, and you can bet on it. With fuzziness, one
cannot say unequivocally whether an event occurred or not, and instead you are
trying to model the extent to which an event occurs.

Relevant literature about fuzzy versus probability:

Bezdek, J. C., ”Fuzzy Models – What Are They, and Why?”, IEEE Trans-
actions on Fuzzy Systems, 1:1,1-6.
Delgado, M., and Moral, S., ”On the Concept of Possibility-Probability Con-
sistency”, Fuzzy Sets and Systems 21,1987,311-318.
Dempster, A.P., ”Upper and Lower Probabilities Induced by a Multivalued
Mapping”, Annals of Math. Stat. 38,1967,325-339.
Henkind, S. J. and Harrison, M. C., ”Analysis of Four Uncertainty Calculi”,
IEEE Trans. Man Sys. Cyb. 18(5),1988,700-714.
Kampe, D. and Feriet, J., ”Interpretation of Membership Functions of Fuzzy
Sets in Terms of Plausibility and Belief”, in Fuzzy Information and Decision
Process, M.M. Gupta and E. Sanchez, Eds., North-Holland, Amsterdam, 1982,
93-98.
Klir, G., ”Is There More to Uncertainty than Some Probability Theorists
Would Have Us Believe?”, Int. J. Gen. Sys. 15(4),1989,347-378.
Klir, G., ”Generalized Information Theory”, Fuzzy Sets and Systems 40,
1991, 127-142.

Klir, G., ”Probabilistic vs. Possibilistic Conceptualization of Uncertainty”.


In: Analysis and Management of Uncertainty, B.M. Ayub et. al. Eds., Elsevier,
1992, 13-25.
Klir, G. and Parviz, B., ”Probability-Possibility Transformations: A Com-
parison”. Int. J. Gen. Sys. 21(1),1992,291-310.
Kosko, B., ”Fuzziness vs. Probability”, Int. J. Gen. Sys. 17(2-3), 1990,211-
240.
Puri, M.L. and Ralescu, D.A., ”Fuzzy Random Variables”, J. Math. Analysis
and Applications, 114,1986,409-422.
Shafer, G., ”A Mathematical Theory of Evidence”, Princeton University,
Princeton, 1976.
Shanahan, J.G., Soft Computing for Knowledge Discovery, Kluwer Acad.
Publ., Boston /Dordrecht /London, 2000.

2.7 Possibility Theory


Possibility theory is another recent form of information theory which is related
to but independent of both fuzzy sets and probability theory. Technically, a
possibility distribution is a normal fuzzy set (at least one membership grade
equals 1). For example, all fuzzy numbers are possibility distributions. However,
possibility theory can also be derived without reference to fuzzy sets. The rules
of possibility theory are similar to probability theory, but the possibility calculus
differs from the calculus of probability theory. Also, possibilistic nonspecificity is
available as a measure of information similar to the stochastic entropy.
Possibility theory has a methodological advantage over probability theory as
a representation of nondeterminism in systems, because the ”plus/times” cal-
culus does not validly generalize nondeterministic processes, while ”max/min”
do.
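The contrast between the two calculi can be seen on a finite universe: for a possibility measure, the measure of a union of events is simply the maximum of their measures, while probability is additive. A small sketch; the two distributions below are purely illustrative:

```python
# a possibility distribution (normal: maximal grade is 1.0) and an
# additive probability distribution on the universe U = {1, 2, 3, 4, 5}
pos  = {1: 0.2, 2: 0.7, 3: 1.0, 4: 0.6, 5: 0.1}
prob = {1: 0.1, 2: 0.3, 3: 0.3, 4: 0.2, 5: 0.1}

def possibility(event):            # Pos(A) = max over the elements of A
    return max(pos[u] for u in event)

def probability(event):            # Pr(A) = sum over the elements of A
    return sum(prob[u] for u in event)

A, B = {1, 2}, {2, 3}
# "max" calculus: Pos(A ∪ B) = max(Pos(A), Pos(B)), no correction term
pos_union = possibility(A | B)
# additive calculus: Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B)
pr_union = probability(A) + probability(B) - probability(A & B)
```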

Relevant literature about possibility theory:

Dubois, D. and Prade, H., ”Possibility Theory”, Plenum Press, New York,
1988.
Joslyn, C., ”Possibilistic Measurement and Set Statistics”, In: Proc. of the
1992 NAFIPS Conference, 2, NASA, 1992, 458-467.
Joslyn, C., ”Possibilistic Semantics and Measurement Methods in Complex
Systems”, In: Proc. of the 2nd International Symposium on Uncertainty Mod-
eling and Analysis, B. Ayyub, Ed., IEEE Computer Society 1993.
Wang, Z. and Klir, G., ”Fuzzy Measure Theory”, Plenum Press, New York,
1991.
Zadeh, L., ”Fuzzy Sets as the Basis for a Theory of Possibility”, Fuzzy Sets
and Systems 1:1978, 3-28.

2.8 Fuzzy Expert Systems


A fuzzy expert system is an expert system that uses a collection of fuzzy mem-
bership functions and rules, instead of Boolean logic, to reason about data. The
rules in a fuzzy expert system are usually of a form similar to the following:

IF (x is LOW) AND (y is HIGH) THEN (z is MEDIUM),


where x and y are input variables (names for known data values), z is an output variable (a name for a data value to be computed), LOW is a membership function (fuzzy subset) defined on the set of values of x, HIGH is a membership function defined on the set of values of y, and MEDIUM is a membership function defined on the set of values of z. The antecedent (the rule’s premise, between IF and THEN) describes
to what degree the rule applies, while the consequent (the rule’s conclusion,
following THEN) assigns a membership function to each of one or more output
variables. Most tools for working with fuzzy expert systems allow more than
one conclusion per rule (a compound consequent). The set of rules in a fuzzy
expert system is known as the rule base or knowledge base. The general inference
process proceeds in 4 steps:

1. Under fuzzification, the membership functions defined on the input vari-


ables are applied to their actual values, to determine the degree of truth
for each rule premise.
2. Under inference, the truth value for the premise of each rule is computed,
and applied to the conclusion part of each rule. This results in one fuzzy
subset to be assigned to each output variable for each rule. Usually only
min or · (”product”) are used as inference rules. In min inferencing, the
output membership function is clipped off at a height corresponding to the
rule premise’s computed degree of truth (fuzzy logic AND). In ”product”
inferencing, the output membership function is scaled by the rule premise’s
computed degree of truth.
3. Under composition, all of the fuzzy subsets assigned to each output variable are combined together to form a single fuzzy subset for each output variable. Again, usually max or Σ (”sum”) are used. In max composition,
the combined output fuzzy subset is constructed by taking the pointwise
maximum over all of the fuzzy subsets assigned to variable by the infer-
ence rule (fuzzy logic OR). In ”sum” composition, the combined output
fuzzy subset is constructed by taking the pointwise sum over all of the
fuzzy subsets assigned to the output variable by the inference rule.
4. Finally, there is the (optional) defuzzification, which is used when it is useful to
convert the fuzzy output set to a crisp number. There are many defuzzi-
fication methods (at least 30). Two of the more common techniques are
the centroid and maximum methods. In the centroid method, the crisp
value of the output variable is computed by finding the variable value of
the center of gravity of the membership function for the fuzzy value. In

the maximum method, one of the variable values at which the fuzzy subset
has its maximum truth value is chosen as the crisp value for the output
variable.

Let us demonstrate the process on a simple example. Assume that the


variables x, y, and z all take on values in the interval [0, 10], and that the
membership functions and rules are defined as follows:

low(t) = 1 − t/10,    (2.2)
high(t) = t/10.    (2.3)
Rule 1: IF x is LOW AND y is LOW THEN z is HIGH
Rule 2: IF x is LOW AND y is HIGH THEN z is LOW
Rule 3: IF x is HIGH AND y is LOW THEN z is LOW
Rule 4: IF x is HIGH AND y is HIGH THEN z is HIGH
Notice that instead of assigning a single value to the output variable z, each
rule assigns an entire fuzzy subset (LOW or HIGH).
Notice that we have:

low(t) + high(t) = 1.0

for all t. This is not required, but it is common.


The value of t at which low(t) is maximal is the same as the value of t at
which high(t) is minimal, and vice-versa. This is also not required,
but fairly common.
The same membership functions are used for all variables. This is not required, but fairly common.
In the fuzzification subprocess, the membership functions defined on the
input variables are applied to their actual values, to determine the degree of
truth for each rule premise. The degree of truth for a rule’s premise is sometimes
referred to as its level alpha, a number from [0, 1]. If a rule’s premise has a
nonzero degree of truth, then the rule is said to fire. For example, let x = 3.2,
y = 3.3. By (2.2), (2.3) we compute:

low(x) = 0.68, high(x) = 0.32, low(y) = 0.67, high(y) = 0.33.

Each rule has its own alpha, namely:

alpha1 = 0.67, alpha2 = 0.33, alpha3 = 0.32, alpha4 = 0.32.
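The fuzzification step just described can be reproduced directly; a sketch using the membership functions (2.2) and (2.3):

```python
def low(t):  return 1 - t / 10   # membership function (2.2)
def high(t): return t / 10       # membership function (2.3)

x, y = 3.2, 3.3
# alpha = degree of truth of each rule premise, with AND read as minimum
alpha1 = min(low(x),  low(y))    # IF x is LOW  AND y is LOW  -> 0.67
alpha2 = min(low(x),  high(y))   # IF x is LOW  AND y is HIGH -> 0.33
alpha3 = min(high(x), low(y))    # IF x is HIGH AND y is LOW  -> 0.32
alpha4 = min(high(x), high(y))   # IF x is HIGH AND y is HIGH -> 0.32
```

All four alphas are nonzero, so all four rules fire for this input.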

In the inference subprocess, the truth value for the premise of each rule
is computed, and applied to the conclusion part of each rule. This results in
one fuzzy subset to be assigned to each output variable for each rule. As it
has been already mentioned min and · are two inference methods or inference
rules. In min inferencing, the output membership function is clipped off at

a height corresponding to the rule premise’s computed degree of truth. This


corresponds to the traditional interpretation of the fuzzy logic AND operation.
In · (product) inferencing, the output membership function is scaled by the rule
premise’s computed degree of truth.
More generally, minimum and product are two particular cases of triangular
norms, or t-norms. For an inference method any triangular norm can be applied,
see Klement et al. (2000).
Let’s look, for example, at Rule 1 for x = 0.0, y = 3.2. As can be easily
computed, the premise degree of truth is 0.68. For this rule, min inferencing
will assign z the fuzzy subset defined by the membership function:

Rule1(z) = { z/10 if z ≤ 6.8,
0.68 if z > 6.8.

For the same conditions, · (product) inferencing will assign z the fuzzy subset
defined by the membership function:

Rule1(z) = 0.68 · z/10.

The terminology used here is slightly nonstandard. In most texts, the term
”inference method” is used to mean the combination of the things referred to
separately here as ”inference” and ”composition”. Thus, you can see such terms
as ”max-min inference” and ”sum-product inference” in the literature. They are the combination of max composition and min inference, or Σ (”sum”) composition and · (”product”) inference, respectively. You’ll also see the reverse terms ”min-max” and
”product-sum” — these mean the same things in the reverse order. It seems
clearer to describe the two processes separately.
In the composition subprocess, all of the fuzzy subsets assigned to each
output variable are combined together to form a single fuzzy subset for each
output variable. Max composition and sum composition are two composition
rules. In max composition, the combined output fuzzy subset is constructed
by taking the pointwise maximum over all of the fuzzy subsets assigned to the
output variable by the inference rule. In sum composition, the combined output
fuzzy subset is constructed by taking the pointwise sum over all of the fuzzy
subsets assigned to the output variable by the inference rule.
Note that this can result in truth values greater than one! For this rea-
son, sum composition is only used when it will be followed by a defuzzification
method, such as the centroid method, that doesn’t have a problem with this
odd case. Otherwise sum composition can be combined with normalization and
is therefore a general purpose method again. For example, assume x = 0.0 and

y = 3.2. Min inferencing would assign the following four fuzzy subsets to z:

Rule1(z) = { z/10 if z ≤ 6.8,
0.68 if z > 6.8,
Rule2(z) = { 0.32 if z ≤ 6.8,
1 − z/10 if z > 6.8,
Rule3(z) = 0.0,
Rule4(z) = 0.0.

Max composition would result in the fuzzy subset:

Rules(z) = { 0.32 if z ≤ 3.2,
z/10 if 3.2 < z ≤ 6.8,
0.68 if z > 6.8.

Product inferencing would assign the following four fuzzy subsets to z:

Rule1(z) = 0.068 · z,
Rule2(z) = 0.32 − 0.032 · z,
Rule3(z) = 0.0,
Rule4(z) = 0.0.

Sum composition would result in the fuzzy subset:

Rules(z) = 0.32 + 0.036 · z.
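Both inference/composition pairs of the example can be evaluated pointwise; a sketch using the rule base and the input values x = 0.0, y = 3.2 of the example:

```python
low  = lambda t: 1 - t / 10      # membership function (2.2)
high = lambda t: t / 10          # membership function (2.3)

x, y = 0.0, 3.2
alphas = [min(low(x),  low(y)),   # Rule 1, consequent z is HIGH -> 0.68
          min(low(x),  high(y)),  # Rule 2, consequent z is LOW  -> 0.32
          min(high(x), low(y)),   # Rule 3, consequent z is LOW  -> 0.00
          min(high(x), high(y))]  # Rule 4, consequent z is HIGH -> 0.00
conseq = [high, low, low, high]   # output fuzzy sets of rules 1..4

def min_max(z):
    # min inference (clip each output set at alpha), max composition
    return max(min(a, f(z)) for a, f in zip(alphas, conseq))

def prod_sum(z):
    # product inference (scale each output set by alpha), sum composition
    return sum(a * f(z) for a, f in zip(alphas, conseq))
```

Evaluating prod_sum recovers Rules(z) = 0.32 + 0.036 · z, and min_max reproduces the three-branch fuzzy subset obtained above.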


More generally, maximum and sum are two particular cases of triangular conorms, or t-conorms. For a composition rule, any triangular conorm can be applied, see Ramik and Vlach (2001). Hence, ”inference” and ”composition” in
an expert system may be created by a couple (T, S), where T is a t-norm and
S is a t-conorm. In the ”best” couple, T and S are mutually dual. More about
this subject can be found in Klement et al. (2000).
Sometimes it is useful to just examine the fuzzy subsets that are the re-
sult of the composition process, but more often, this fuzzy value needs to be
converted to a single number — a crisp value. This is what the defuzzifica-
tion subprocess does. There exist many defuzzification methods; for example, Mizumoto (1989) published a paper comparing ten of them. Two of the more common techniques are the centroid
and maximum methods.
In the centroid method, the crisp value of the output variable is computed
by finding the variable value of the center of gravity of the membership function
for the fuzzy value.
In the maximum method, one of the variable values at which the fuzzy subset
has its maximum truth value is chosen as the crisp value for the output variable.

There are several variations of the maximum method that differ only in what
they do when there is more than one variable value at which this maximum
truth value occurs. One of these, the average of maxima method, returns the
average of the variable values at which the maximum truth value occurs.
To compute the centroid of the function f (x), you divide the moment of the function by the area of the function. To compute the moment of f (x), you compute the integral ∫ x f (x) dx, and to compute the area of f (x), you compute the integral ∫ f (x) dx.
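For the sum-composed output Rules(z) = 0.32 + 0.036 · z on [0, 10] computed above, the moment is 28 and the area is 5, so the centroid is 28/5 = 5.6. A numerical sketch using a midpoint Riemann sum:

```python
def rules(z):
    return 0.32 + 0.036 * z    # combined output from sum composition

# crisp output = moment / area, approximated on a midpoint grid over [0, 10]
n, a, b = 10000, 0.0, 10.0
h = (b - a) / n
zs = [a + (i + 0.5) * h for i in range(n)]
moment = sum(z * rules(z) for z in zs) * h   # ~ integral of z*f(z) dz = 28
area   = sum(rules(z)     for z in zs) * h   # ~ integral of f(z) dz   = 5
crisp  = moment / area                       # ~ 5.6
```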
Sometimes the composition and defuzzification processes are combined, tak-
ing advantage of mathematical relationships that simplify the process of com-
puting the final output variable values.
To date, fuzzy expert systems are the most common use of fuzzy logic. They
are used in several wide-ranging fields, including:

• Linear and Nonlinear Control,


• Pattern Recognition,
• Financial Systems,
• Operations Research,
• Data Analysis.

2.9 Fuzzy Control


The purpose of control is to influence the behavior of a system by changing an
input or inputs to that system according to a rule or set of rules that model
how the system operates. The system being controlled may be mechanical,
electrical, chemical or any combination of these. Classic control theory uses a
mathematical model to define a relationship that transforms the desired state
(requested) and observed state (measured) of the system into an input or inputs
that will alter the future state of that system, see the following figure:

reference →(+)→ ( SYSTEM ) → output
            ↑                  |
            +←───( MODEL )←────+   feedback

The most common example of a control model is the PID (proportional-


integral-derivative) controller. This takes the output of the system and com-
pares it with the desired state of the system. It adjusts the input value based
on the difference between the two values according to the following equation.

output = A·e + B·∫ e dt + C·de/dt



where A, B and C are constants, e is the error term, ∫ e dt is the integral of the error over time and de/dt is the change in the error term. The major drawback of this system is that it usually assumes that the system being modelled is linear, or at least behaves in some fashion that is a monotonic function. As the complexity of the system increases, it becomes more difficult to formulate that
mathematical model. In the picture above, fuzzy control replaces the mathematical model with one that is built from a number of smaller rules, each of which in general describes only a small section of the whole system. The process of inference binds them together to produce the desired outputs. That is, a fuzzy model has replaced the mathematical one. The inputs and outputs of the system remain unchanged. The Sendai subway is the prototypical example application of fuzzy control.
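For comparison, the classical PID law above takes only a few lines in discrete time; the gains and the setpoint below are purely illustrative:

```python
def make_pid(A, B, C, dt):
    # discrete version of: output = A*e + B*INT(e)dt + C*de/dt
    state = {"integral": 0.0, "prev_e": 0.0}
    def step(setpoint, measured):
        e = setpoint - measured               # error term
        state["integral"] += e * dt           # accumulated integral of e
        derivative = (e - state["prev_e"]) / dt
        state["prev_e"] = e
        return A * e + B * state["integral"] + C * derivative
    return step

pid = make_pid(A=2.0, B=0.5, C=0.1, dt=0.01)
u = pid(setpoint=1.0, measured=0.0)   # first control action
```

Each call produces one control action; the naive first-step derivative kick shows why practical controllers refine this basic form.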

Relevant literature about fuzzy expert systems and fuzzy control:

Driankov, D., Hellendoorn, H. and Reinfrank, M., ”An Introduction to Fuzzy


Control”, Springer-Verlag, New York, 1993.
Chen, G., Ying, M. and Cai, K.-Y., Fuzzy Logic and Soft Computing, Kluwer
Acad. Publ., Boston/Dordrecht/London, 1999.
Harris, C.J., Moore, C.G. and Brown, M., ”Intelligent Control, Aspects of
Fuzzy Logic and Neural Nets”, World Scientific, 1997.
Mizumoto, M., ”Improvement Methods of Fuzzy Controls”, In: Proceedings
of the 3rd IFSA Congress, Seattle, 1989, 60-62.
Terano, T., Asai, K. and Sugeno, M., ”Fuzzy Systems Theory and Applica-
tions”, Academic Press, 1992.
Yager, R.R., and Zadeh, L. A., ”An Introduction to Fuzzy Logic Applications
in Intelligent Systems”, Kluwer Academic Publishers, Boston / Dordrecht /
London, 1991.
Zimmermann, H.J., ”Fuzzy Set Theory”, Kluwer Acad. Publ., Boston /
Dordrecht / London, 1991.

2.10 Fuzzy Clustering


Clustering (also cluster analysis) refers to a group of classification methods for data analysis. Since the 1970s, fuzzy set theory has been applied to clustering, and a number of clustering methods and techniques have been developed, see the relevant literature below. One of the most frequent application areas is pattern recognition. Clustering methods can be, as usual, divided into two categories:

• nonhierarchical fuzzy clustering,


• hierarchical fuzzy clustering.

The most popular method of nonhierarchical clustering is the fuzzy c-means


method which is a direct extension of nonfuzzy (crisp) nonhierarchical clustering.

On the other hand, fuzzy hierarchical clustering is not a direct generalization of crisp methods; rather, it provides a direction for agglomerative clustering.
In general, clustering requires that the objects to be classified be put in one of a number of classes depending on some characterization, exemplified by the concept of distance. Objects that are close in the sense of the distance are placed in the same class, whereas those far from each other are put in different classes. More precisely, the quality of classification is measured by an objective function which is usually minimized so as to obtain the optimal solution, i.e. the resulting classification of the objects in question. Centers of the clusters are allocated, and the distance of objects is measured with respect to those centers. The underlying idea of fuzzy clustering is that an object may belong to more than one cluster, but with possibly different degrees of membership.
The standard fuzzy c-means method takes advantage of a rectangular matrix with elements from the unit interval [0, 1], serving as weighting parameters in the objective function and, at the same time, as membership degrees of the classified objects in the individual clusters. Numerous techniques have been proposed in the literature to improve the results of classification for numerous types of problem data. For more information about classification techniques and algorithms as well as applications of clustering, see the recommended literature.
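The fuzzy c-means scheme just described can be sketched in plain Python (one-dimensional data, Euclidean distance, fuzzifier m = 2; the data and initial centers are purely illustrative, and this is a toy sketch, not production code):

```python
def fuzzy_c_means(data, centers, m=2.0, iters=50):
    # alternately update the membership matrix U and the cluster centers
    c = len(centers)
    U = []
    for _ in range(iters):
        U = []
        for x in data:
            dist = [abs(x - v) + 1e-12 for v in centers]  # guard zero distance
            # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)); each row sums to 1
            row = [1.0 / sum((dist[i] / dist[j]) ** (2.0 / (m - 1.0))
                             for j in range(c))
                   for i in range(c)]
            U.append(row)
        # each center is the mean of the data weighted by u_ik^m
        centers = [sum(u[i] ** m * x for u, x in zip(U, data)) /
                   sum(u[i] ** m for u in U)
                   for i in range(c)]
    return centers, U

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
centers, U = fuzzy_c_means(data, centers=[0.0, 10.0])
# the two centers settle near the two groups, and every object carries
# a membership degree in both clusters
```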

Relevant literature about fuzzy clustering:

http://www.fuzzy-clustering.de - extensive bibliographical database, includes


papers, books and software, pdf, ps files to download, etc.
Anderberg, M.R., ”Cluster Analysis for Applications”, Academic Press, New
York, 1973.
Bezdek, J.C., Pattern Recognition with Fuzzy Objective Function Algorithms,
Plenum Press, New York, 1981.
Everitt, B.S., ”Cluster Analysis”, Edward Arnold, London, 1993.
Höppner, F., Klawonn, F., Kruse, R. and Runkler, T., ”Fuzzy Cluster Analysis”, J. Wiley, Chichester, 1999.
Miyamoto, S., ”Fuzzy Sets in Information Retrieval and Cluster Analysis”,
Kluwer Acad. Publ., Dordrecht, 1990.
Miyamoto, S. and Umayahara, K., ”Methods in Hard and Fuzzy Clustering”,
in: Soft Computing and Human-Centered Machines, Liu, Z. and Miyamoto, S.,
Eds., Springer, Tokyo, Berlin, 2000.
Nakamori, Y., Ryoke, M. and Umayahara, K., ”Multivariate analysis for
Fuzzy Modeling”, Proc. of the 7th World IFSA Congress, Prague, Academia,
Praha, 1997, 93-98.

2.11 Decision Making in Fuzzy Environment


When dealing with practical decision problems, we often have to take into con-
sideration uncertainty in the problem data. It may arise from errors in mea-
suring physical quantities, from errors caused by representing some data in a

computer, from the fact that some data are approximate solutions of other
problems or estimations by human experts, etc. In some of these situations,
the fuzzy set approach may be applicable. In the context of multicriteria de-
cision making, functions mapping the set of feasible alternatives into the unit
interval [0, 1] of real numbers representing normalized utility functions can be
interpreted as membership functions of fuzzy subsets of the underlying set of
alternatives. However, functions with the range [0, 1] arise in more contexts.
A decision problem in X, i.e. the problem to find a ”best” decision in
the set of feasible alternatives X with respect to several (i.e. more than one)
criteria functions is considered. Within the framework of such a decision sit-
uation, we deal with the existence and mutual relationships of three kinds of
”optimal decisions”: Weak Pareto-Maximizers, Pareto-Maximizers and Strong
Pareto-Maximizers - particular alternatives satisfying some natural and rational
conditions - commonly called Pareto-optimal decisions.
In Chapter 9 we study also the compromise decisions maximizing some ag-
gregation of the criteria. This problem was introduced originally by Bellman
and Zadeh for the minimum aggregation function. The criteria considered here
will be functions defined on the set X of feasible alternatives with the values
in the unit interval [0, 1]. Such functions can be interpreted as membership
functions of fuzzy subsets of X and will be called here fuzzy criteria.
The set X of feasible alternatives is a convex subset, or a generalized convex
subset of n-dimensional Euclidean space Rn , frequently we consider X = Rn .
The main subject of our interest in Chapter 9 is to derive some important rela-
tions between Pareto-optimal decisions and compromise decisions. The relevant
literature to the subject can be found also in Chapter 9.
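On a finite set of alternatives, Pareto-maximizers with respect to fuzzy criteria (values in [0, 1]) can be enumerated directly; a sketch with illustrative data, together with a compromise decision maximizing the minimum aggregation in the Bellman-Zadeh sense:

```python
def dominates(u, v):
    # u Pareto-dominates v: at least as good in every criterion,
    # strictly better in at least one
    return (all(a >= b for a, b in zip(u, v)) and
            any(a > b for a, b in zip(u, v)))

# alternatives scored by two fuzzy criteria (membership degrees in [0, 1])
X = {"x1": (0.9, 0.2), "x2": (0.6, 0.6), "x3": (0.3, 0.8), "x4": (0.5, 0.5)}

pareto = [x for x, u in X.items()
          if not any(dominates(v, u) for y, v in X.items() if y != x)]
# x4 is dominated by x2, so the Pareto-maximizers are x1, x2, x3

# compromise decision maximizing the minimum aggregation of the criteria
compromise = max(X, key=lambda x: min(X[x]))   # -> "x2"
```

Note that the compromise decision (the alternative with the best worst criterion) is itself one of the Pareto-maximizers, the kind of relation studied in Chapter 9.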

2.12 Fuzzy Mathematical Programming


Mathematical programming problems (MP) form a subclass of decision-making
problems where preferences between alternatives are described by means of ob-
jective function(s) defined on the set of alternatives in such a way that greater
values of the function(s) correspond to more preferable alternatives (if ”higher”
is ”better”). The values of the objective function describe effects from choices
of the alternatives. In economic problems, for example, these values may reflect
profits obtained when using various means of production. The set of feasible
alternatives in MP problems is described implicitly by means of constraints -
equations or inequalities, or both - representing relevant relationships between
alternatives. In any case the results of the analysis using given formulation of
the MP problem depend largely upon how adequately various factors of the real
system are reflected in the description of the objective function(s) and of the
constraint(s).
Descriptions of the objective function and of the constraints in a MP problem
usually include some parameters. For example, in problems of resources alloca-
tion such parameters may represent economic parameters like costs of various
types of production, labor costs requirements, shipment costs, etc. The nature

of those parameters depends, of course, on the detailization accepted for the


model representation, and their values are considered as data that should be
exogenously used for the analysis.
Clearly, the values of such parameters depend on multiple factors not in-
cluded into the formulation of the problem. Trying to make the model more
representative, we often include the corresponding complex relations into it,
causing that the model becomes more cumbersome and analytically unsolvable.
Moreover, it can happen that such attempts to increase ”the precision” of the
model will be of no practical value due to the impossibility of measuring the
parameters accurately. On the other hand, the model with some fixed values of
its parameters may be too crude, since these values are often chosen in a quite
arbitrary way.
An intermediate approach is based on introducing into the model a more adequate representation of the experts’ understanding of the nature of the parameters, in the form of fuzzy sets of their possible values. The resultant model, although not taking into account many details of the real system in question, could be a more adequate representation of the reality than one with more or less arbitrarily fixed values of the parameters. In this way we obtain
a new type of MP problems containing fuzzy parameters. Treating such prob-
lems requires the application of fuzzy-set-theoretic tools in a logically consistent
manner. Such treatment forms an essence of fuzzy mathematical programming
(FMP) investigated in this chapter.
FMP and related problems have been extensively analyzed and many papers
have been published displaying a variety of formulations and approaches. Most
approaches to FMP problems are based on the straightforward use of the inter-
section of fuzzy sets representing goals and constraints and on the subsequent
maximization of the resultant membership function. This approach was
already mentioned by Bellman and Zadeh in their paper published in the early
seventies. Later on, many papers have been devoted to the problem of mathemat-
ical programming with fuzzy parameters, known under different names, mostly
as fuzzy mathematical programming, but sometimes as possibilistic program-
ming, flexible programming, vague programming, inexact programming etc. For
an extensive bibliography see the overview paper Rommelfanger and Slowinski
(1998).
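The Bellman-Zadeh max-min approach mentioned above can be illustrated with a small numerical sketch (the membership functions and the grid of alternatives below are our illustrative assumptions, not taken from any particular model): the fuzzy goal and the fuzzy constraint are intersected via the min operator, and the resulting membership function is then maximized over the alternatives.

```python
# Illustrative sketch of the Bellman-Zadeh approach: the fuzzy decision is the
# intersection (min) of a fuzzy goal and a fuzzy constraint, and the "optimal"
# alternative maximizes the resulting membership function.
# The two membership functions below are made-up examples.

def mu_goal(x):
    # "x should be substantially larger than 2" (illustrative)
    return 0.0 if x <= 2 else min(1.0, (x - 2) / 3.0)

def mu_constraint(x):
    # "x should be roughly at most 4" (illustrative)
    return 1.0 if x <= 4 else max(0.0, 1.0 - (x - 4) / 2.0)

def mu_decision(x):
    # fuzzy decision = intersection of goal and constraint
    return min(mu_goal(x), mu_constraint(x))

# Maximize the decision membership over a simple grid of alternatives in [0, 8].
grid = [i / 100.0 for i in range(0, 801)]
best = max(grid, key=mu_decision)
print(best, mu_decision(best))
```

Here the maximizing alternative lies where the increasing goal membership meets the decreasing constraint membership.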
In Chapter 10 we present a general approach based on a systematic extension
of the traditional formulation of the MP problem. This approach is based on
the numerous former works of the author of this work, and also on the works of
many other authors, see the literature to Chapter 10.
The fuzzy mathematical programming problem (FMP problem) is denoted
by:
maximize    f̃(x; c̃)
subject to                                              (2.4)
            g̃i(x; ãi) R̃i b̃i,  i ∈ M = {1, 2, ..., m},
where R̃i, i ∈ M, are fuzzy relations on F(R), the set of all fuzzy subsets of
R. Formulation (2.4) is not an optimization problem in the classical sense, as
it is not yet defined how the objective function f̃(x; c̃) can be "maximized" and
how the constraints g̃i(x; ãi) R̃i b̃i can be treated. In fact, we need a concept
of a feasible solution and also that of an optimal solution.
The most important mathematical programming problems (2.4) are those where
the functions f and gi are linear. The fuzzy linear programming problem (FLP
problem) is denoted as
maximize    c̃1x1 +̃ · · · +̃ c̃nxn
subject to                                              (2.5)
            ãi1x1 +̃ · · · +̃ ãinxn R̃i b̃i,  i ∈ M,
            xj ≥ 0,  j ∈ N = {1, 2, ..., n}.
All these problems are thoroughly investigated in Chapters 10 and 11.
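To give the fuzzy objective in (2.5) a concrete reading, the sketch below (our illustration; the coefficient values are hypothetical) represents each fuzzy coefficient as a triangular fuzzy number (l, m, r). For a fixed crisp x ≥ 0, the extension principle reduces +̃ to componentwise addition, and multiplication by a non-negative scalar scales all three components, so the fuzzy objective value is again triangular.

```python
# Triangular fuzzy number (left, peak, right). For a crisp scalar x >= 0,
# scalar multiplication scales all three components; addition of triangular
# numbers is componentwise (both follow from Zadeh's extension principle).

class TFN:
    def __init__(self, l, m, r):
        assert l <= m <= r
        self.l, self.m, self.r = l, m, r

    def __add__(self, other):
        return TFN(self.l + other.l, self.m + other.m, self.r + other.r)

    def scale(self, x):
        # x is a crisp non-negative scalar
        assert x >= 0
        return TFN(self.l * x, self.m * x, self.r * x)

    def __repr__(self):
        return f"TFN({self.l}, {self.m}, {self.r})"

def fuzzy_objective(c_tilde, x):
    """Value of the fuzzy sum c1~x1 +~ ... +~ cn~xn for a fixed crisp x >= 0."""
    total = TFN(0, 0, 0)
    for c, xj in zip(c_tilde, x):
        total = total + c.scale(xj)
    return total

# Hypothetical coefficients "about 3" and "about 5", evaluated at x = (2, 1):
c = [TFN(2, 3, 4), TFN(4, 5, 6)]
print(fuzzy_objective(c, [2, 1]))   # triangular value peaked at 2*3 + 1*5 = 11
```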

2.13 Mailing Lists


In this section we present Internet-based mailing lists dealing with fuzzy
systems. The list is far from complete, as the development in this area is
very fast.

NAFIPS Fuzzy Logic Mailing List:


This is a mailing list for the discussion of fuzzy logic, NAFIPS and related
topics, located at Georgia State University. Currently, there are about 500
subscribers, located primarily in North America. Postings to the mailing list
are automatically archived. The mailing list server itself is like most of those in
use on the Internet. If you are already familiar with Internet mailing lists, the
only thing you will need to know is that the name of the server is
[email protected]
and the name of the mailing list itself is
[email protected]
If you are not familiar with this type of mailing list server, the easiest way to
get started is to send the message "help" to [email protected].
You will receive a brief set of instructions by e-mail within a short time.
Once you have subscribed, you will begin receiving a copy of each message that
is sent by anyone to
[email protected]
and any message that you send to that address will be sent to all of the other
subscribers.

Fuzzy-Mail Mailing List:


This is a mailing list for the discussion of fuzzy logic and related topics,
located at the Technical University of Vienna in Austria. Currently, there are
more than 1000 subscribers, located primarily in Europe. After more than 5
years of experience, the author would recommend this mailing list to any person
interested in fuzzy systems and fuzzy logic. Frequently, the discussion about
the hot topics of theory and practice of FS is interesting, deep and stimulating.
Information about new conferences and seminars, journals and books is also of
practical use. The list is slightly moderated (only irrelevant mails are rejected)
and is two-way gatewayed to the aforementioned NAFIPS-L list and to the
comp.ai.fuzzy internet newsgroup. Messages should therefore be sent only to
one of the three media, although some mechanism for mail-loop avoidance and
duplicate-message avoidance is activated. In addition to the mailing list itself,
the list server gives access to some files, including archives and the ”Who is
Who in Fuzzy Logic” database. The name of the server is
[email protected]
and the name of the mailing list is
[email protected]
If you are not familiar with this type of mailing list server, the easiest way
to get started is to send the message "get fuzzy-mail info" to
[email protected].
You will receive a brief set of instructions by e-mail within a short time.
Once you have subscribed, you will begin receiving a copy of each message that
is sent by anyone to
[email protected]
and any message that you send to that address will be sent to all of the other
subscribers.

Mailing lists for fuzzy systems in Japan:


We mention two mailing lists for fuzzy systems in Japan. Both forward
many articles from the international mailing lists, but the other direction is not
automatic.
Asian Fuzzy Mailing System (AFMS):
[email protected]
To subscribe, send a message to
[email protected]
with your name and email address. Membership is restricted to within Asia
as a general rule. The list is maintained by Prof. Mikio Nakatsuyama, Depart-
ment of Electronic Engineering, Yamagata University, 4-3-16 Jonan, Yonezawa
992 Japan, E-mail: [email protected].
All messages to the list have the Subject line replaced with ”AFMS”. The
language of the list is English.

Fuzzy Mailing List - Japan:


[email protected]
This is an unmoderated list, with mostly original contributions in Japanese
(JIS-code). To subscribe, send subscriptions to the listserver
[email protected]
If you need to speak to a human being, send mail to the list owners Itsuo
Hatono and Motohide Umano of Osaka University to
[email protected].

2.14 Main International Journals


FUZZY SETS AND SYSTEMS (FSS)
International Journal of Soft Computing and Intelligence. The official pub-
lication of the International Fuzzy Systems Association (IFSA). Subscription is
free to members of IFSA.
Published 24 times in 2001
Publisher: Elsevier Science
ISSN: 0165-0114
Since its launching in 1978, the journal Fuzzy Sets and Systems has been
devoted to the international advancement of the theory and application of fuzzy
sets and systems.
The scope of the journal Fuzzy Sets and Systems has expanded so as to
account for all facets of the field while emphasizing its specificity as bridging
the gap between the flexibility of human representations and the precision and
clarity of mathematical or computerized representations, be they numerical or
symbolic.
The journal welcomes original and significant contributions in the area of
Fuzzy Sets whether on empirical or mathematical foundations, or their applica-
tions to any domain of information technology, and more generally to any field
of investigation where fuzzy sets are relevant. Applied papers demonstrating the
usefulness of fuzzy methodology in practical problems are particularly welcome.
Fuzzy Sets and Systems publishes high-quality research articles, surveys as
well as case studies. Separate sections are Recent Literature, and the Bulletin,
which offers research reports, book reviews, conference announcements and var-
ious news items. Invited review articles on topics of general interest are included
and special issues are published regularly.

INTERNATIONAL JOURNAL OF APPROXIMATE REASONING (IJAR)


Fuzzy Logic in Recognition and Search.
Published 8 times annually.
Publisher: Elsevier Science
ISSN 0888-613X.
International Journal of Approximate Reasoning is dedicated to the dis-
semination of research results from the field of approximate reasoning and its
applications, with emphasis on the design and implementation of intelligent
systems for scientific and engineering applications. Approximate reasoning is
computational modeling of any part of the process used by humans to reason
about natural phenomena.
The journal welcomes archival research papers, surveys, short notes and
communications, and book reviews. Current areas of interest include, but are
not limited to, applications and/or theories pertaining to computer vision, en-
gineering and expert systems, fuzzy logic and control, information retrieval and
database design, machine learning, neurocomputing, pattern recognition and
robotics.
The journal is affiliated with the North American Fuzzy Information Process-
ing Society (NAFIPS).

IEEE TRANSACTIONS ON FUZZY SYSTEMS (TFS)


A publication of the IEEE Neural Network Council
Published 4 times annually.
ISSN 1063-6706
Transactions on Fuzzy Systems is published quarterly. TFS will consider
papers that deal with the theory, design or an application of fuzzy systems
ranging from hardware to software. Authors are encouraged to submit articles
which disclose significant technical achievements, exploratory developments, or
performance studies of fielded systems based on fuzzy models. Emphasis is given
to engineering applications.
TFS publishes three types of articles: papers, letters, and correspondence.
All contributions are handled in the same fashion. Review management is under
the direction of an associate editor, who will solicit four reviews for each sub-
mission. The associate editor ordinarily waits for at least three reports before
a decision is reached. Often, reviews take six to nine months to obtain, and the
publication process after acceptance can take an additional six months.

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS


Published monthly
Publisher: J. Wiley and Sons
ISSN 0884-8173
International Journal of Intelligent Systems is devoted to the systematic de-
velopment of the theory necessary for the construction of intelligent systems.
Its contents include research papers, tutorial reviews, and short communications
on theoretical as well as developmental issues. The journal presents peer-
reviewed work in such areas as: examination, analysis, creation and application
of expert systems; symbolic and quantitative approaches to knowledge repre-
sentation; management of uncertainty; man-computer interactions and the use
of language; and machine learning, information retrieval, and neural networks.
Readership includes computer scientists, engineers, cognitive scientists, knowl-
edge engineers, logicians, and information scientists. International Journal of
Intelligent Systems serves as a forum for individuals interested in tapping into
the vast body of theory on the construction of intelligent systems.

JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH (JAIR)


Published on Internet (free), printed 2 volumes a year
Publisher: Morgan Kaufmann Publishers
ISSN 1076-9757
JAIR is an international electronic and printed journal covering all areas of
artificial intelligence (AI), publishing refereed research articles, survey articles,
and technical notes. Established in 1993 as one of the first electronic scientific
journals, JAIR is indexed by INSPEC, Science Citation Index, and MathSciNet.
JAIR reviews papers within approximately two months of submission and publishes
accepted articles on the Internet immediately upon receiving the final versions.
JAIR articles are published for free distribution on the Internet by the AI Access
Foundation, and for purchase in bound volumes by Morgan Kaufmann Publishers,
see:
http://www.cs.washington.edu/research/jair/home.html

INTERNATIONAL JOURNAL OF UNCERTAINTY, FUZZINESS AND
KNOWLEDGE-BASED SYSTEMS (IJUFKS)
Published 6 issues per year
Publisher: World Scientific Journals
ISSN 0218-4885
IJUFKS is a forum for research on various methodologies for the management of
imprecise, vague, uncertain or incomplete information. The aim of the journal
is to promote theoretical, methodological or practical works dealing with all
kinds of methods to represent and manipulate imperfectly described pieces of
knowledge. It is published in print and also electronically on the Internet.

2.15 Web Pages


Recently, the number of Web pages relevant to the area of Fuzzy Systems or Soft
Computing has grown enormously; below we mention only a few. The interested
reader may visit these pages and, surfing the Internet, find almost any kind of
information.

• http://www-bisc.cs.berkeley.edu - official web page of the Berkeley Initiative
in Soft Computing (BISC), University of California at Berkeley, with the BISC
Mailing List, special interest groups and a lot of other information and activities.

• http://ic-www.arc.nasa.gov/ne.html - Neuro-Engineering and Soft Computing
web page of NASA research

• http://www.flll.uni-linz.ac.at - web page of the Fuzzy Logic Laboratory Linz,
Johannes Kepler University, Austria

• http://www.engineering.missouri.edu/academic/cecs/fuzzy - the University of
Missouri - Columbia web page of one of the largest US research groups in the
field of fuzzy solutions

• http://www.mitgmbh.de/mit - web page of Management Intelligent Technologies
GmbH in Aachen, Germany

• http://www.osu.cz/irafm - web page of the Institute for Research and
Application of Fuzzy Modeling, University of Ostrava, The Czech Republic

• http://lisp.vse.cz - web page of the Laboratory for Intelligent Systems,
University of Economics, Prague, The Czech Republic

• http://www.jaist.ac.jp/ks/index-e.html - web page of the Graduate School of
Knowledge Science, Japan Advanced Institute of Science and Technology.

2.16 Fuzzy Researchers


A list of ”Who’s Who in Fuzzy Logic” (researchers and research organizations
in the field of fuzzy logic and fuzzy expert systems) may be obtained by sending
a message to
[email protected]
with
GET LISTPROC WHOISWHOINFUZZY
in the message body.
Chapter 3

Evolutionary Computation

The universe of Evolutionary Computation (EC) is only a small footpath into a
voluminous scientific universe that, incorporating Fuzzy Systems and Artificial
Neural Networks, is sometimes referred to as Computational Intelligence (CI)
or Computational Science; that in turn is only part of an even more advanced
scientific universe of enormous complexity that, incorporating Artificial Life,
Fractal Geometry, and other Complex Systems Sciences, might someday be
referred to as Natural Computation (NC). Over the course of the
past years, global optimization algorithms imitating certain principles of na-
ture have proved their usefulness in various domains of applications. Especially
worth copying are those principles where nature has found ”stable islands” in
a ”turbulent ocean” of solution possibilities. Such phenomena can be found
in annealing processes, central nervous systems and biological evolution, which
in turn have lead to the following optimization methods: Simulated Annealing
(SA), Artificial Neural Networks (ANNs) and the broad field of Evolutionary
Computing (EC). EC may currently be characterized by the following pathways:
Genetic Algorithms (GA),
Evolutionary Programming (EP),
Evolution Strategies (ES),
Classifier Systems (CFS),
Genetic Programming (GP),
and several other problem solving strategies, that are based upon biological
observations, that date back to Charles Darwin’s discoveries in the 19th century:
the means of natural selection and the survival of the fittest, i.e. the theory of
evolution. The inspired algorithms are thus termed Evolutionary Algorithms
(EA).
"Evolutionary algorithm" is an umbrella term used to describe computer-
based problem solving systems which use computational models of evolution-
ary processes as key elements in their design and implementation. A variety of
evolutionary algorithms have been proposed. The major ones are:
Genetic Algorithms (GA),
Evolutionary Programming (EP),
Evolution Strategies (ES),
Classifier Systems (CFS), and
Genetic Programming (GP).
They all share a common conceptual base of simulating the evolution of in-
dividual structures via processes of selection, mutation, and reproduction. The
processes depend on the perceived performance of the individual structures as
defined by an environment. More precisely, EAs maintain a population of struc-
tures that evolve according to rules of selection and other operators referred to
as "search operators" (or genetic operators), such as recombination
and mutation. Each individual in the population receives a measure of its fitness
in the environment. Reproduction focuses attention on high fitness individuals,
thus exploiting the available fitness information. Recombination and mutation
perturb those individuals, providing general heuristics for exploration. Although
simplistic from a biologist’s viewpoint, these algorithms are sufficiently complex
to provide robust and powerful adaptive search mechanisms.

3.1 Genetic Algorithm (GA)


The Genetic Algorithm is a model of machine learning which derives its be-
havior from a metaphor of the processes of evolution in nature. This is done
by the creation within a machine of a population of individuals represented by
chromosomes, in essence a set of character strings that are analogous to the
base-4 chromosomes that we see in our own DNA. The individuals in the pop-
ulation then go through a process of evolution. We should note that evolution
(in nature or anywhere else) is not a purposive or directed process. That is,
there is no evidence to support the assertion that the goal of evolution is to pro-
duce Mankind. Indeed, the processes of nature seem to boil down to different
individuals competing for resources in the environment. Some are better than
others. Those that are better are more likely to survive and propagate their
genetic material.
In nature, we see that the encoding of our genetic information (the genome)
admits asexual reproduction (such as by budding), which typically results in
offspring that are genetically identical to the parent. Sexual reproduction
allows the creation of genetically radically different offspring that are
still of the same general flavor (species).
At the molecular level what occurs (wild oversimplification alert!) is that a
pair of chromosomes bump into one another, exchange chunks of genetic infor-
mation and drift apart. This is the recombination operation, which GA/GPers
generally refer to as crossover because of the way that genetic material crosses
over from one chromosome to another. The crossover operation happens in
an environment where the selection of who gets to mate is a function of the
fitness of the individual, i.e. how good the individual is at competing in its
environment. Some GAs use a simple function of the fitness measure to select
individuals (probabilistically) to undergo genetic operations such as crossover
or asexual reproduction (the propagation of genetic material unaltered). This
is fitness-proportionate selection.
Other implementations use a model in which certain randomly selected indi-
viduals in a subgroup compete and the fittest is selected. This is called tourna-
ment selection and is the form of selection we see in nature when stags rut to vie
for the privilege of mating with a herd of hinds. The two processes that most
contribute to evolution are crossover and fitness based selection/reproduction.
As it turns out, there are mathematical proofs that indicate that the process
of fitness proportionate reproduction is, in fact, near optimal in some senses.
Mutation also plays a role in this process, although how important its role is
continues to be a matter of debate (some refer to it as a background operator,
while others view it as playing the dominant role in the evolutionary process).
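The two selection schemes just described can be sketched as follows (a minimal illustration; the function names are ours, not taken from any particular GA library):

```python
import random

def roulette_select(population, fitness):
    """Fitness-proportionate (roulette-wheel) selection: the chance of being
    picked is proportional to the individual's share of total fitness."""
    total = sum(fitness)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, f in zip(population, fitness):
        running += f
        if running >= pick:
            return individual
    return population[-1]

def tournament_select(population, fitness, k=3):
    """Tournament selection: k randomly chosen individuals compete and the
    fittest of the subgroup wins."""
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitness[i])
    return population[best]

pop = ["a", "b", "c", "d"]
fit = [1.0, 2.0, 3.0, 10.0]
print(roulette_select(pop, fit), tournament_select(pop, fit))
```

With tournament selection the selection pressure is tuned by k; with roulette selection it depends directly on the spread of the fitness values.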
It cannot be stressed too strongly that the GA (as a simulation of a genetic
process) is not a random search for a solution to a problem. The genetic algo-
rithm uses stochastic processes, but the result is distinctly non-random. GAs
are used for a number of different application areas. An example of this would
be multidimensional optimization problems in which the character string of the
chromosome can be used to encode the values for the different parameters being
optimized. In practice, therefore, we can implement this genetic model of com-
putation by having arrays of bits or characters to represent the chromosomes.
Simple bit manipulation operations allow the implementation of crossover, mu-
tation and other operations.
Although a substantial amount of research has been performed on variable-
length strings and other structures, the majority of work with GAs is focused
on fixed-length character strings. We should note both this fixed length and
the need to encode the representation of the solution being sought as a character
string, since these are the crucial aspects that distinguish GA from GP, which
has no fixed-length representation and typically no encoding of the problem.
When the GA is implemented it is usually done in a manner that involves the
following cycle: Evaluate the fitness of all of the individuals in the population.
Create a new population by performing operations such as crossover, fitness-
proportionate reproduction and mutation on the individuals whose fitness has
just been measured. Discard the old population and iterate using the new
population. One iteration of this loop is referred to as a generation. There
is no theoretical reason for this as an implementation model. Indeed, we do
not see this punctuated behavior in populations in nature as a whole, but it
is a convenient implementation model. The first generation (generation 0) of
this process operates on a population of randomly generated individuals. From
there on, the genetic operations, in concert with the fitness measure, operate to
improve the population.

Pseudo Code of Genetic Algorithm:

Algorithm GA is
   start with an initial time
      t := 0;
   initialize a usually random population of individuals
      initpopulation P(t);
   evaluate fitness of all initial individuals in population
      evaluate P(t);
   test for termination criterion (time, fitness, etc.)
   while not done do
      increase the time counter
         t := t + 1;
      select sub-population for offspring production
         P' := selectparents P(t);
      recombine the "genes" of selected parents
         recombine P'(t);
      perturb the mated population stochastically
         mutate P'(t);
      evaluate its new fitness
         evaluate P'(t);
      select the survivors from actual fitness
         P := survive P(t), P'(t);
   od
end GA.
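The pseudocode above can be instantiated as a short runnable program. The sketch below is our illustration, using bit-string chromosomes and the number of 1-bits as fitness (the "one-max" toy problem), with binary tournament selection, one-point crossover and bit-flip mutation:

```python
import random

def genetic_algorithm(n_bits=20, pop_size=30, generations=60,
                      p_mut=0.02, seed=1):
    """Minimal GA maximizing the number of 1-bits (the one-max toy problem)."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    # generation 0: a population of randomly generated individuals
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def select(pop):
        # binary tournament selection
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(pop), select(pop)
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            new_pop.append(child)
        pop = new_pop                                # discard the old generation
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best))   # close to the optimum of 20
```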

3.2 Evolutionary Programming (EP)


Evolutionary programming, originally conceived by L. J. Fogel in 1960, is a
stochastic optimization strategy similar to GA, but instead places emphasis on
the behavioral linkage between parents and their offspring, rather than seeking
to emulate specific genetic operators as observed in nature. EP is similar to
evolution strategies, although the two approaches developed independently (see
below). Like both ES and GAs, EP is a useful method of optimization when
other techniques such as gradient descent or direct, analytical discovery are
not possible. Combinatoric and real-valued function optimization in which the
optimization surface or fitness landscape is ”rugged”, possessing many locally
optimal solutions, are well suited for EP.
The 1966 book, ”Artificial Intelligence Through Simulated Evolution” by
Fogel, Owens and Walsh is the landmark publication for EP applications, al-
though many other papers appear earlier in the literature. In the book, finite
state automata were evolved to predict symbol strings generated from Markov
processes and non-stationary time series. Such evolutionary prediction was
motivated by a recognition that prediction is a keystone to intelligent behavior
(defined in terms of adaptive behavior, in that the intelligent organism must
anticipate events in order to adapt behavior in light of a goal). Since then EP
attracted a diverse group of academic, commercial and military researchers en-
gaged in both developing the theory of the EP technique and in applying EP to
a wide range of optimization problems, both in engineering and biology.
For EP, like GAs, there is an underlying assumption that a fitness landscape
can be characterized in terms of variables, and that there is an optimum solution
(or multiple such optima) in terms of those variables. For example, if one were
trying to find the shortest path in a Traveling Salesman Problem, each solution
would be a path. The length of the path could be expressed as a number, which
would serve as the solution’s fitness. The fitness landscape for this problem
could be characterized as a hypersurface proportional to the path lengths in a
space of possible paths. The goal would be to find the globally shortest path in
that space, or more practically, to find very short tours very quickly. The basic
EP method involves 3 steps (Repeat until a threshold for iteration is exceeded
or an adequate solution is obtained):
Step 1. Choose an initial population of trial solutions at random. The
number of solutions in a population is highly relevant to the speed of op-
timization, but no definite answers are available as to how many solutions are
appropriate (other than 1) and how many solutions are just wasteful.
Step 2. Each solution is replicated into a new population. Each of these
offspring solutions are mutated according to a distribution of mutation types,
ranging from minor to extreme with a continuum of mutation types between.
The severity of mutation is judged on the basis of the functional change imposed
on the parents.
Step 3. Each offspring solution is assessed by computing its fitness.
Typically, a stochastic tournament is held to determine n solutions to be
retained for the population of solutions, although this is occasionally performed
deterministically. There is no requirement that the population size be held con-
stant, however, nor that only a single offspring be generated from each parent. It
should be pointed out that EP typically does not use any crossover as a genetic
operator.

EP and GAs:
There are two important ways in which EP differs from GA. First, there
is no constraint on the representation. The typical GA approach involves en-
coding the problem solutions as a string of representative tokens, the genome.
In EP, the representation follows from the problem. A neural network can be
represented in the same manner as it is implemented, for example, because the
mutation operation does not demand a linear encoding. (In this case, for a
fixed topology, real-valued weights could be coded directly as their real values
and mutation operates by perturbing a weight vector with a zero mean multi-
variate Gaussian perturbation. For variable topologies, the architecture is also
perturbed, often using Poisson distributed additions and deletions.)
Second, the mutation operation simply changes aspects of the solution ac-
cording to a statistical distribution which weights minor variations in the
behavior of the offspring as highly probable and substantial variations as
increasingly unlikely. Further, the severity of mutations is often reduced as the global
optimum is approached. There is a certain tautology here: if the global opti-
mum is not already known, how can the spread of the mutation operation be
damped as the solutions approach it? Several techniques have been proposed
and implemented which address this difficulty, the most widely studied being
the ”Meta-Evolutionary” technique in which the variance of the mutation distri-
bution is subject to mutation by a fixed variance mutation operator and evolves
along with the solution.
Despite their independent development over 30 years, EP and ES share many
similarities. When implemented to solve real-valued function optimization prob-
lems, both typically operate on the real values themselves (rather than any cod-
ing of the real values as is often done in GAs). Multivariate zero mean Gaussian
mutations are applied to each parent in a population and a selection mechanism
is applied to determine which solutions to remove from the population. The
similarities extend to the use of self-adaptive methods for determining the ap-
propriate mutations to use — methods in which each parent carries not only a
potential solution to the problem at hand, but also information on how it will
distribute new trials (offspring). Most of the theoretical results on convergence
(both asymptotic and velocity) developed for ES or EP also apply directly to
the other. The main differences between ES and EP are:
1. SELECTION: EP typically uses stochastic selection via a tournament. Each
trial solution in the population faces competition against a preselected number
of opponents and receives a "win" if it is at least as good as its opponent in
each encounter. Selection then eliminates those solutions with the least wins.
In contrast, ES typically uses deterministic selection, in which the worst
solutions are purged from the population based directly on their function
evaluation.
2. RECOMBINATION: EP is an abstraction of evolution at the level of the
reproductive population (i.e., species), and thus no recombination mechanisms
are typically used, because recombination does not occur between species (by
definition: see Mayr's biological species concept). In contrast, ES is an
abstraction of evolution at the level of individual behavior. When self-adaptive
information is incorporated, this is purely genetic information, and thus some
forms of recombination are reasonable; many forms of recombination have been
implemented within ES. Again, the effectiveness of such operators depends on
the problem at hand.

Pseudo Code of Evolutionary Programming:

Algorithm EP is
   start with an initial time
      t := 0;
   initialize a usually random population of individuals
      initpopulation P(t);
   evaluate fitness of all initial individuals in population
      evaluate P(t);
   test for termination criterion (time, fitness, etc.)
   while not done do
      perturb the whole population stochastically
         P' := mutate P(t);
      evaluate its new fitness
         evaluate P'(t);
      stochastically select the survivors from actual fitness
         P := survive P(t), P'(t);
      increase the time counter
         t := t + 1;
   od
end EP.
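A runnable sketch of this loop for a real-valued problem (our toy illustration: minimizing the sphere function with Gaussian mutation and no crossover; for simplicity, survivor selection here is deterministic truncation rather than the stochastic tournament of EP proper):

```python
import random

def evolutionary_programming(dim=3, pop_size=20, generations=200,
                             sigma=0.3, seed=2):
    """Toy EP: minimize f(x) = sum(x_i^2). Each parent creates one mutated
    offspring (Gaussian perturbation, no crossover); parents and offspring
    then compete for survival."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

    for _ in range(generations):
        # each parent is replicated and perturbed by a zero-mean Gaussian
        offspring = [[v + rng.gauss(0, sigma) for v in x] for x in pop]
        # survivor selection over parents + offspring (deterministic truncation
        # here; EP proper would use a stochastic tournament)
        pop = sorted(pop + offspring, key=f)[:pop_size]
    return min(pop, key=f)

best = evolutionary_programming()
print(best)   # near the optimum at the origin
```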

3.3 Evolution Strategies (ES)


Evolution strategies were invented to solve technical optimization problems
like, e.g., constructing an optimal flashing nozzle, and ES were primarily only
known in civil engineering as an alternative to standard methods. Usually no
closed-form analytical objective function is available for such optimization
problems and hence few applicable optimization methods exist. The first attempts
to imitate principles of organic evolution on a computer still resembled the
iterative optimization methods. In a two-membered ES, one parent generates
one offspring per generation by applying normally distributed mutations, i.e.
smaller steps occur more likely than big ones, until a child performs better than
its ancestor and takes its place. Because of this simple structure, theoretical
results for stepsize control and speed of convergence could be derived. The
ratio between successful and all mutations should come to 1/5: the so-called
1/5 success rule was discovered. This first algorithm, using mutation only, was
then enhanced to a (µ + 1) strategy, which incorporated recombination, since
several (namely µ) parents were available. The mutation scheme and the
exogenous stepsize control were taken over unchanged from the (1 + 1) ES.
Later, this scheme was generalized to the multimembered ES, now denoted by (µ + λ)
and (µ, λ), which imitates the following basic principles of organic evolution: a
population, leading to the possibility of recombination with random mating,
mutation and selection. These strategies are termed plus strategy and comma
strategy, respectively. In the plus case, the parental generation is taken into
account during selection, while in the comma case only the offspring undergoes
selection, and the parents die off. By µ the population size is denoted, and λ
denotes the number of offspring generated per generation.
ESs are capable of solving high dimensional, multimodal, nonlinear problems
subject to linear and/or nonlinear constraints. The objective function can, for
example, be the result of a simulation; it does not have to be given in closed
form. This also holds for the constraints, which may represent the outcome
of, e.g., a finite element method (FEM) computation. ESs have been adapted to vector
optimization problems, and they can also serve as a heuristic for NP-complete
combinatorial problems like the travelling salesman problem or problems with
a noisy or changing response surface.
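The two-membered (1 + 1) ES and the 1/5 success rule described above can be sketched in a few lines of Python (the adaptation factors 1.22 and 0.82 and the 20-trial window are illustrative choices on our part, not prescribed values):

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, generations=200, window=20, seed=0):
    """(1+1)-ES sketch: one parent produces one normally mutated child per
    generation; the child replaces the parent only if it performs no worse.
    Every `window` trials the step size sigma is adapted by the 1/5 success
    rule: widen the steps if more than 1/5 of the recent mutations
    succeeded, narrow them otherwise."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    successes = 0
    for g in range(1, generations + 1):
        y = x + rng.gauss(0.0, sigma)     # normally distributed mutation
        fy = f(y)
        if fy <= fx:                      # child at least as good as parent
            x, fx = y, fy
            successes += 1
        if g % window == 0:
            sigma *= 1.22 if successes / window > 0.2 else 0.82
            successes = 0
    return x, fx

# Illustrative run: minimize the sphere function from a distant start.
x_best, f_best = one_plus_one_es(lambda v: v * v, x0=5.0)
```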

References:
Baeck, T. and Fogel, D., ”Handbook of Evolutionary Computation”. Insti-
tute of Physics Publ., Philadelphia, PA, 1997.
Kursawe, F. (1992) ”Evolution strategies for vector optimization”. National
Chiao Tung University, Taipei, 187-193.
Kursawe, F. (1994) ”Evolution strategies: Simple models of natural processes?”,
Revue Internationale de Systemique, France (to appear).
Rechenberg, I. (1973) ”Evolutionsstrategie: Optimierung technischer Systeme
nach Prinzipien der biologischen Evolution”. Frommann-Holzboog, Stuttgart.
Schwefel, H.P. (1977) ”Numerische Optimierung von Computermodellen mit-
tels der Evolutionsstrategie”, Birkhaeuser, Basel.
Schwefel, H.-P. (1987) ”Collective Phaenomena in Evolutionary Systems”,
31st Annual Meeting, Internat. Soc. for General System Research, Budapest,
1025-1033.

3.4 Classifier Systems (CS)


No other paradigm of EC has undergone more changes to its name than
this one. Initially, Holland, see the references below, called his cognitive models
”Classifier Systems” (CS). When Riolo came into play in 1986, Holland
added a reinforcement component to the overall design of a CS that empha-
sized its ability to learn. So the word ”learning” was prepended to the name,
giving ”Learning Classifier Systems” (LCS). LCSs are sometimes subsumed
under a ”new” machine learning paradigm called ”Evolutionary Reinforcement
Learning” (ERL), see also the chapter Machine Learning below. Classifier sys-
tems are systems which take a set of inputs, and produce a set of outputs which
indicate some classification on the inputs. It is often regarded as artificial life
rather than EC. CS can be seen as one of the early applications of GAs, for
CSs use this evolutionary algorithm to adapt their behavior toward a changing
environment. A cognitive system is capable of classifying the goings on in its
environment, and then reacting to these goings on appropriately. We need

(1) an environment;
(2) receptors that tell our system about the goings on;
(3) effectors, that let our system manipulate its environment; and
(4) the system itself, conveniently a ”black box” in this first approach, that
has (2) and (3) attached to it, and ”lives” in (1).

The most primitive ”black box” we can think of is a computer. It has inputs
(2), and outputs (3), and a message passing system in-between, that converts
(i.e., computes), certain input messages into output messages, according to a
set of rules, usually called the ”program” of that computer. From the theory of
computer science, we now borrow the simplest of all program structures, that is
something called ”production system” (PS). Although it merely consists of a set
of if-then rules, it still resembles a full-fledged computer. We now term a single
”if-then” rule a ”classifier”, and choose a representation that makes it easy to
manipulate these, for example by encoding them into binary strings. We then
term the set of classifiers, a ”classifier population”, and immediately know how
to breed new rules for our system: just use a GA to generate new rules/classifiers
from the current population. All that is left are the messages floating through
the black box. They should also be simple strings of zeroes and ones, and are to
be kept in a data structure, we call ”the message list”. With all this given, we
can imagine the goings on inside the black box as follows: The input interface
(2) generates messages, i.e., 0/1 strings, that are written on the message list.
Then these messages are matched against the condition-part of all classifiers, to
find out which actions are to be triggered. The message list is then emptied, and
the encoded actions, themselves just messages, are posted to the message list.
Then, the output interface (3) checks the message list for messages concerning
the effectors. And the cycle restarts. Note that it is possible in this set-up
to have ”internal messages”, because the message list is not emptied after (3)
has checked it; thus, the input interface messages are added to the internal
messages that remain on the list. The general idea of the CS is to start from
scratch, i.e., from tabula rasa (without any knowledge), using a randomly
generated classifier population, and to let the system learn its program by
induction. This requires the input stream to contain recurrent input patterns,
repeated over and over again, to enable the animat to classify its current
situation/context and react to the goings on appropriately.
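The match-and-post cycle just described can be made concrete. In the sketch below (the function names are ours, and the wildcard symbol '#' follows the usual classifier convention, though the text above speaks only of 0/1 strings), one cycle matches every message against every condition, empties the list, and posts the triggered actions as the new message list:

```python
def matches(condition, message):
    """A condition matches a message of equal length if every position
    agrees; '#' in the condition is a wildcard ("don't care")."""
    return len(condition) == len(message) and all(
        c == '#' or c == m for c, m in zip(condition, message))

def cycle(classifiers, message_list):
    """One pass of the main loop: match all messages against the
    condition part of all classifiers, empty the message list, and post
    the encoded actions of the triggered classifiers as new messages."""
    return [action for condition, action in classifiers
            if any(matches(condition, m) for m in message_list)]

rules = [('1#0', '111'),   # if first bit is 1 and last bit is 0, post 111
         ('000', '010')]   # if the message is exactly 000, post 010
```

For example, `cycle(rules, ['110'])` posts `['111']`; feeding that back in yields an empty list, since no condition in `rules` matches '111'.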

References:
Booker, L.B., Goldberg, D.E. and Holland, J.H., ”Classifier Systems and
Genetic Algorithms”, Artificial Intelligence, Vol.40, 1989, 235-282.
Braitenberg, V. , ”Vehicles: Experiments in Synthetic Psychology” Boston,
MA: MIT Press, 1984.
Browne, W.N.L., Holford, K.M., Moore, C.J. and Bullock, J., ”An Industrial
Learning Classifier System: The Importance of Pre-Processing Real Data and
Choice of Alphabet”, Artificial Intelligence, Vol.13,1, 2000.
Holland, J.H. (1986) ”Escaping Brittleness: The possibilities of general-
purpose learning algorithms applied to parallel rule-based systems”. In: R.S.
Michalski, J.G. Carbonell & T.M. Mitchell, Eds., Machine Learning: An Arti-
ficial Intelligence approach, Vol. II, Morgan Kaufman, Los Altos, CA, 593-623.
Holland, J.H., et al. (1986) ”Induction: Processes of Inference, Learning,
and Discovery”. MIT Press, Cambridge, MA.
Holland, J.H. (1992) ”Adaptation in natural and artificial systems” Boston,
MA: MIT Press.
Holland, J.H. and Reitman, J.S. (1978) ”Cognitive Systems based on Adap-
tive Algorithms” In: D.A. Waterman and F.Hayes-Roth, Eds., Pattern- directed
inference systems. Academic Press, New York.
Rich, E. (1983) ”Artificial Intelligence”. McGraw-Hill, N. York.

3.5 Genetic Programming (GP)


Genetic programming is the extension of the genetic model of learning into the
space of programs. That is, the objects that constitute the population are not
fixed-length character strings that encode possible solutions to the problem at
hand, they are programs that, when executed, ”are” the candidate solutions to
the problem. These programs are expressed in genetic programming as parse
trees, rather than as lines of code. Because this is a very simple thing to do in
the programming language Lisp, many GP people tend to use Lisp. However,
this is simply an implementation detail. There are straightforward methods to
implement GP using a non-Lisp programming environment. The programs in
the population are composed of elements from the function set and the terminal
set, which are typically fixed sets of symbols selected to be appropriate to the
solution of problems in the domain of interest. In GP the crossover operation is
implemented by taking randomly selected subtrees in the individuals (selected
according to fitness) and exchanging them. It should be pointed out that GP
usually does not use any mutation as a genetic operator.
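Subtree crossover is easy to state precisely on parse trees represented as nested tuples; the sketch below (our own non-Lisp representation, not taken from Koza) selects a random subtree in each parent and exchanges the two:

```python
import random

# A program is a parse tree: either a terminal (a string) or a tuple
# (function-symbol, subtree, subtree, ...).

def subtrees(tree, path=()):
    """Yield (path, subtree) pairs; a path is a tuple of child indices."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    """Return a copy of tree with the subtree at `path` replaced by `new`."""
    if not path:
        return new
    i = path[0]
    return tree[:i] + (replace(tree[i], path[1:], new),) + tree[i + 1:]

def crossover(parent_a, parent_b, rng):
    """GP crossover: pick a random subtree in each parent and swap them."""
    path_a, sub_a = rng.choice(list(subtrees(parent_a)))
    path_b, sub_b = rng.choice(list(subtrees(parent_b)))
    return (replace(parent_a, path_a, sub_b),
            replace(parent_b, path_b, sub_a))

a = ('+', 'x', ('*', 'y', 'z'))
b = ('-', '1', 'x')
child_1, child_2 = crossover(a, b, random.Random(0))
```

Because whole subtrees are exchanged, the combined node count of the two offspring always equals that of the two parents.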

References:

Koza, J.R., ”Genetic Programming”, A Bradford Book, MIT Press, Cambridge, 1992.
Koza, J.R., ”Genetic Programming II”, A Bradford Book, MIT Press, Cam-
bridge, 1994.
Shanahan, J.G., Soft Computing for Knowledge Discovery, Kluwer Acad.
Publ., Boston /Dordrecht /London, 2000.
Chapter 4

Neural Networks

4.1 Introduction
There is no universally accepted definition of neural networks (NN), a com-
mon characterization says that an NN is a network of many simple processors
(”units”), each possibly having a small amount of local memory. The units
are connected by communication channels (”connections”) which usually carry
numeric (as opposed to symbolic) data, encoded by any of various means. The
units operate only on their local data and on the inputs they receive via the
connections. The restriction to local operations is often relaxed during training.
Some NNs are models of biological neural networks and some are not, but
historically, much of the inspiration for the field of NNs came from the desire to
produce artificial systems capable of sophisticated, perhaps ”intelligent”, com-
putations similar to those that the human brain routinely performs, and thereby
possibly to enhance our understanding of the human brain.
Most NNs have some sort of ”training” rule whereby the weights of con-
nections are adjusted on the basis of data. In other words, NNs ”learn” from
examples (as children learn to recognize dogs from examples of dogs) and exhibit
some capability for generalization beyond the training data.
NNs normally have great potential for parallelism, since the computations
of the components are largely independent of each other. Some people regard
massive parallelism and high connectivity to be defining characteristics of NNs,
but such requirements rule out various simple models, such as simple linear
regression (a minimal feedforward net with only two units plus bias), which are
usefully regarded as special cases of NNs.
Here are some definitions from the literature:
According to Haykin, S. (1994) ”Neural Networks: A Comprehensive Foun-
dation”. Macmillan, New York, p. 2:
”A neural network is a massively parallel distributed processor that has a
natural propensity for storing experiential knowledge and making it available
for use. It resembles the brain in two respects:


1. Knowledge is acquired by the network through a learning process.


2. Interneuron connection strengths known as synaptic weights are used to
store the knowledge”.
According to Nigrin, A. (1993) ”Neural Networks for Pattern Recognition”.
The MIT Press, Cambridge, MA, p. 11:
”A neural network is a circuit composed of a very large number of simple
processing elements that are neurally based. Each element operates only on local
information. Furthermore each element operates asynchronously; thus there is
no overall system clock”.
According to Zurada, J.M. (1992) ”Introduction to Artificial Neural Sys-
tems”. PWS Publishing Company, Boston, p. 15:
”Artificial neural systems, or neural networks, are physical cellular systems
which can acquire, store, and utilize experiential knowledge”.
Below we list some exposition texts about NN on Internet:
http://www.dontveter.com/bpr/bpr.html
http://gannoo.uce.ac.uk/bpr/bpr.html
http://www.shef.ac.uk/psychology/gurney/notes/index.html
http://www.statsoft.com/textbook/stathome.html
ftp://ftp.sas.com/pub/neural/FAQ.html

4.2 Principles of Neural Networks


In principle, NNs can compute any computable function, i.e., they can do everything a normal digital computer can do.
In practice, NNs are especially useful for classification and function approx-
imation/mapping problems which are tolerant of some imprecision, which have
lots of training data available, but to which hard and fast rules (such as those
that might be used in an expert system) cannot easily be applied. Almost any
finite-dimensional vector function on a compact set can be approximated to ar-
bitrary precision by feedforward NNs (which are the type most often used in
practical applications) if you have enough data and enough computing resources.
To be more precise, feedforward networks with a single hidden layer and
trained by least-squares are statistically consistent estimators of arbitrary square-
integrable regression functions under certain practically-satisfiable assumptions
regarding sampling, target noise, number of hidden units, size of weights, and
form of hidden-unit activation function. Such networks can also be trained as
statistically consistent estimators of derivatives of regression functions and quan-
tiles of the conditional noise distribution. Feedforward networks with a single
hidden layer using threshold or sigmoid activation functions are universally consistent
estimators of binary classifications under similar assumptions. Note that these
results are stronger than the universal approximation theorems that merely
show the existence of weights for arbitrarily accurate approximations, without
demonstrating that such weights can be obtained by learning.
Unfortunately, the above consistency results depend on one impractical as-
sumption: that the networks are trained by an error minimization technique that
comes arbitrarily close to the global minimum. Such minimization is computa-
tionally intractable except in small or simple problems. In practice, however,
you can usually get good results without doing a full-blown global optimization;
e.g., using multiple (say, 10 to 1000) random weight initializations is usually
sufficient.
One example of a function that a typical neural net cannot learn is f (x) =
1/x on the open interval (0, 1). An open interval is not a compact set. With any
bounded output activation function, the error will get arbitrarily large as the
input approaches zero. Of course, you could make the output activation function
a reciprocal function and easily get a perfect fit, but NNs are most often used in
situations where you do not have enough prior knowledge to set the activation
function in such a clever way. There are also many other important problems
that are so difficult that a neural network will be unable to learn them without
memorizing the entire training set, such as:
· Predicting random or pseudo-random numbers.
· Factoring large integers.
· Determining whether a large integer is prime or composite.
· Decrypting anything encrypted by a good algorithm.
It is important to understand that there are no methods for training NNs
that can magically create information that is not contained in the training data.
Feedforward NNs are restricted to finite-dimensional input and output spaces.
Recurrent NNs can in theory process arbitrarily long strings of numbers or sym-
bols. But training recurrent NNs has posed much more serious practical diffi-
culties than training feedforward networks. NNs are, at least today, difficult to
apply successfully to problems that concern manipulation of symbols and rules,
but much research is being done.
As for simulating human consciousness and emotion, that’s still in the realm
of science fiction. Consciousness is still one of the world’s great mysteries.
Artificial NNs may be useful for modeling some aspects of or prerequisites for
consciousness, such as perception and cognition, but NNs provide no insight so
far into what is called the ”hard problem”:
Many books and articles on consciousness have appeared in the past few
years, and one might think we are making progress. But on a closer look, most
of this work leaves the hardest problems about consciousness untouched. Often,
such work addresses what might be called the ”easy problems” of consciousness:
How does the brain process environmental stimulation? How does it integrate
information? How do we produce reports on internal states? These are impor-
tant questions, but to answer them is not to solve the hard problem: Why is all
this processing accompanied by an experienced inner life?
Neural Networks are interesting for quite a lot of very different people for
various reasons:
· Computer scientists want to find out about the properties of non-symbolic
information processing with neural nets and about learning systems in general.
· Statisticians use neural nets as flexible, nonlinear regression and classifica-
tion models.
· Engineers of many kinds exploit the capabilities of neural networks in many
areas, such as signal processing and automatic control.
· Cognitive scientists view neural networks as a possible apparatus to describe
models of thinking and consciousness (High-level brain function).
· Neuro-physiologists use neural networks to describe and explore medium-
level brain function (e.g. memory, sensory system, motorics).
· Physicists use neural networks to model phenomena in statistical mechanics
and for a lot of other tasks.
· Biologists use Neural Networks to interpret nucleotide sequences.
· Philosophers and some other people are also interested in neural networks for various reasons.
For world-wide lists of groups doing research on NNs, see the Foundation for
Neural Networks (SNN) web page:
http://www.mbfys.kun.nl/snn/pointers/groups.html
and Neural Networks Research on the IEEE Neural Network Council’s homepage:
http://www.ieee.org/nnc.
There are many kinds of NNs by now, new ones (or at least variations of old
ones) are invented continuously. Below is a collection of some of the most well
known methods, not claiming to be complete.

4.3 Learning Methods in NNs


The two main kinds of learning algorithms are supervised and unsupervised.
· In supervised learning, the correct results (target values, desired outputs)
are known and are given to the NN during training so that the NN can adjust
its weights to try to match its outputs to the target values. After training, the NN
is tested by giving it only input values, not target values, and seeing how close
it comes to outputting the correct target values.
· In unsupervised learning, the NN is not provided with the correct results
during training. Unsupervised NNs usually perform some kind of data compres-
sion, such as dimensionality reduction or clustering.
The distinction between supervised and unsupervised methods is not always
clear-cut. An unsupervised method can learn a summary of a probability dis-
tribution, then that summarized distribution can be used to make predictions.
Furthermore, supervised methods come in two subvarieties: auto-associative and
hetero-associative. In auto-associative learning, the target values are the same
as the inputs, whereas in hetero-associative learning, the targets are generally
different from the inputs. Many unsupervised methods are equivalent to auto-associative supervised methods.
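A minimal supervised-learning example (our own Python illustration, not drawn from the text above): the classical perceptron rule adjusts the weights whenever the network's output disagrees with the target value supplied with each training input, and for linearly separable data such as the logical AND function it is guaranteed to converge.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Supervised learning with the perceptron rule: nudge the weights
    toward the target whenever the predicted output is wrong."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(x)     # 0 when the output is correct
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return predict                          # the trained network

# Training data for logical AND: (input vector, target value) pairs.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
and_gate = train_perceptron(and_samples)
```

After training, the returned `and_gate` function is tested by giving it only input values and checking its outputs against the known targets.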
Two major kinds of network topology are feedforward and feedback.
· In a feedforward NN, the connections between units do not form cycles.
Feedforward NNs usually produce a response to an input quickly. Most feedfor-
ward NNs can be trained using a wide variety of efficient conventional numerical
methods in addition to algorithms invented by NN researchers.
· In a feedback or recurrent NN, there are cycles in the connections. In
some feedback NNs, each time an input is presented, the NN must iterate for a
potentially long time before it produces a response. Feedback NNs are usually
more difficult to train than feedforward NNs.
Some kinds of NNs (such as those with winner-take-all units) can be imple-
mented as either feedforward or feedback networks.
NNs also differ in the kinds of data they accept. Two major kinds of data
are categorical and quantitative.
· Categorical variables take only a finite (technically, countable) number of
possible values, and there are usually several or more cases falling into each
category. Categorical variables may have symbolic values (e.g., ”male” and ”fe-
male”, or ”red”, ”green” and ”blue”) that must be encoded into numbers before
being given to the network. Both supervised learning with categorical target
values and unsupervised learning with categorical outputs are called ”classifica-
tion.”
· Quantitative variables are numerical measurements of some attribute, such
as length in meters. The measurements must be made in such a way that at least
some arithmetic relations among the measurements reflect analogous relations
among the attributes of the objects that are measured.
Some variables can be treated as either categorical or quantitative, such as
number of children or any binary variable. Most regression algorithms can also
be used for supervised classification by encoding categorical target values as
0/1 binary variables and using those binary variables as target values for the
regression algorithm. The outputs of the network are posterior probabilities
when any of the most common training methods are used.
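The 0/1 encoding of symbolic values mentioned above is usually done with indicator ("one-hot") variables, one per category; a minimal illustration (the helper function is ours):

```python
def one_hot(value, categories):
    """Encode a symbolic value as one 0/1 indicator variable per category."""
    return [1 if value == c else 0 for c in categories]

# "green" out of three colour categories becomes three binary inputs.
encoded = one_hot('green', ['red', 'green', 'blue'])
```

Exactly one indicator is set per value, so the encoding preserves the categorical distinctions without imposing any spurious arithmetic ordering on them.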

4.4 Well-Known Kinds of NNs


Below we classify the well known kinds of NNs.

1. Supervised

— Feedforward
∗ Linear
· Hebbian - Hebb (1949), Fausett (1994)
· Perceptron - Rosenblatt (1958), Minsky and Papert (1969/1988)
· Adaline - Widrow and Hoff (1960), Fausett (1994)
∗ MLP: Multilayer perceptron - Bishop (1995), Reed and Marks
(1999)
· Backprop - Rumelhart, Hinton, and Williams (1986)
· Cascade Correlation - Fahlman and Lebiere (1990), Fausett
(1994)
· Quickprop - Fahlman (1989)
· RPROP - Riedmiller and Braun (1993)
∗ RBF networks - Bishop (1995), Moody and Darken (1989), Orr
(1996)
· OLS: Orthogonal least squares - Chen, Cowan and Grant
(1991)
∗ CMAC: Cerebellar Model Articulation Controller - Albus (1975),
Brown and Harris (1994)
∗ Classification only
· LVQ: Learning Vector Quantization - Kohonen (1988), Fausett
(1994)
· PNN: Probabilistic Neural Network - Specht (1990), Masters (1993), Hand (1982), Fausett (1994)
∗ Regression only
· GRNN: General Regression Neural Network - Specht (1991), Nadaraya (1964), Watson (1964)
— Feedback - Hertz, Krogh, and Palmer (1991), Medsker and Jain
(2000)
∗ BAM: Bidirectional associative memory - Kosko (1992), Fausett
(1994)
∗ Boltzman machine - Ackley et al. (1985), Fausett (1994)
∗ Recurrent time series
· Backpropagation through time - Werbos (1990)
· Elman - Elman (1990)
· FIR: Finite impulse response - Wan (1990)
· Jordan - Jordan (1986)
· Real-time recurrent network - Williams and Zipser (1989)
· Recurrent backpropagation - Pineda (1989), Fausett (1994)
· TDNN: Time delay NN - Lang, Waibel and Hinton (1990)
— Competitive
∗ ARTMAP - Carpenter, Grossberg and Reynolds (1991)
∗ Fuzzy ARTMAP - Carpenter, Grossberg, Markuzon, Reynolds
and Rosen (1992), Kasuba (1993)
∗ Gaussian ARTMAP - Williamson (1995)
∗ Counterpropagation - Hecht-Nielsen (1987; 1988; 1990), Fausett
(1994)
∗ Neocognitron - Fukushima, Miyake, and Ito (1983), Fukushima,
(1988), Fausett (1994)

2. Unsupervised - Hertz, Krogh, and Palmer (1991)


— Competitive
∗ Vector Quantization
· Grossberg - Grossberg (1976)
· Kohonen - Kohonen (1984)
· Conscience - Desieno (1988)
∗ Self-Organizing Map
· Kohonen - Kohonen (1995), Fausett (1994)
· GTM - Bishop, Svensén and Williams (1997)
· Local Linear - Mulier and Cherkassky (1995)
∗ Adaptive resonance theory
· ART 1 - Carpenter and Grossberg (1987a), Moore (1988),
Fausett (1994)
· ART 2 - Carpenter and Grossberg (1987b), Fausett (1994)
· ART 2-A - Carpenter, Grossberg and Rosen (1991a)
· ART 3 - Carpenter and Grossberg (1990)
· Fuzzy ART - Carpenter, Grossberg and Rosen (1991b)
∗ DCL: Differential competitive learning - Kosko (1992)
— Dimension Reduction - Diamantaras and Kung (1996)
∗ Hebbian - Hebb (1949), Fausett (1994)
∗ Oja - Oja (1989)
∗ Sanger - Sanger (1989)
∗ Differential Hebbian - Kosko (1992)
— Autoassociation
∗ Linear autoassociator - Anderson et al. (1977), Fausett (1994)
∗ BSB: Brain State in a Box - Anderson et al. (1977), Fausett
(1994)
∗ Hopfield - Hopfield (1982), Fausett (1994)

3. Nonlearning

— Hopfield - Hertz, Krogh, and Palmer (1991)


— Various networks for optimization - Cichocki and Unbehauen (1993),
etc.

References:

The amount of available references is enormous; below we cite only those
associated with the general problems of NNs and the kinds of NNs. For more
detailed references, see
ftp://ftp.sas.com/pub/neural/FAQ.html, or
http://www.soft-computing.de.

Ackley, D.H., Hinton, G.F., and Sejnowski, T.J. (1985), ”A learning algo-
rithm for Boltzman machines,” Cognitive Science, 9, 147-169.
Albus, J.S (1975), ”New Approach to Manipulator Control: The Cerebellar
Model Articulation Controller (CMAC),” Transactions of the ASME Journal of
Dynamic Systems, Measurement, and Control, September 1975, 220-27.
Anderson, J.A., and Rosenfeld, E., Eds. (1988), ”Neurocomputing: Foun-
dations of Research”. The MIT Press, Cambridge, MA.
Anderson, J.A., Silverstein, J.W., Ritz, S.A., and Jones, R.S. (1977) ”Dis-
tinctive features, categorical perception, and probability learning: Some appli-
cations of a neural model”. Psychological Review, 84, 413-451.
Bishop, C.M. (1995), ”Neural Networks for Pattern Recognition”. Oxford
University Press, Oxford.
Bishop, C.M., Svensen, M. and Williams, C.K.I (1997), ”GTM: A principled
alternative to the self-organizing map”. In: Mozer, M.C., Jordan, M.I., and
Petsche, T., Eds., Advances in Neural Information Processing Systems 9, The
MIT Press, Cambridge, MA, 354-360.
Brown, M. and Harris, C. (1994), ”Neurofuzzy Adaptive Modelling and Con-
trol”. Prentice Hall, N. York.
Carpenter, G.A., Grossberg, S. (1987a), ”A massively parallel architecture
for a self-organizing neural pattern recognition machine,” Computer Vision,
Graphics, and Image Processing, 37, 54-115.
Carpenter, G.A., Grossberg, S. (1987b), ”ART 2: Self-organization of stable
category recognition codes for analog input patterns,” Applied Optics, 26, 4919-
4930.
Carpenter, G.A., Grossberg, S. (1990), ”ART 3: Hierarchical search us-
ing chemical transmitters in self-organizing pattern recognition architectures.
Neural Networks, 3, 129-152.
Carpenter, G.A., Grossberg, S., Markuzon, N., Reynolds, J.H., and Rosen,
D.B. (1992), ”Fuzzy ARTMAP: A neural network architecture for incremental
supervised learning of analog multidimensional maps,” IEEE Transactions on
Neural Networks, 3, 698-713.
Carpenter, G.A., Grossberg, S., Reynolds, J.H. (1991), ”ARTMAP: Su-
pervised real-time learning and classification of nonstationary data by a self-
organizing neural network,” Neural Networks, 4, 565-588.
Carpenter, G.A., Grossberg, S., Rosen, D.B. (1991a), ”ART 2-A: An adap-
tive resonance algorithm for rapid category learning and recognition,” Neural
Networks, 4, 493-504.
Carpenter, G.A., Grossberg, S., Rosen, D.B. (1991b), ”Fuzzy ART: Fast
stable learning and categorization of analog patterns by an adaptive resonance
system,” Neural Networks, 4, 759-771.
Chen, S., Cowan, C.F.N., and Grant, P.M. (1991), ”Orthogonal least squares
learning for radial basis function networks,” IEEE Transactions on Neural Net-
works, 2, 302-309.
Cichocki, A. and Unbehauen, R. (1993) ”Neural Networks for Optimization
and Signal Processing”. John Wiley & Sons, N. York.
Desieno, D. (1988), ”Adding a conscience to competitive learning,” Proc.
Int. Conf. on Neural Networks I, IEEE Press, 117-124.
Diamantaras, K.I., and Kung, S.Y. (1996) ”Principal Component Neural
Networks: Theory and Applications”. John Wiley & Sons, N. York.
Elman, J.L. (1990), ”Finding structure in time,” Cognitive Science, 14, 179-
211.
Fahlman, S.E. (1989), ”Faster-Learning Variations on Back-Propagation: An
Empirical Study”, in Touretzky, D., Hinton, G, and Sejnowski, T., Eds., Pro-
ceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann,
38-51.
Fahlman, S.E., and Lebiere, C. (1990), ”The Cascade-Correlation Learn-
ing Architecture”. In: Touretzky, D. S., Ed., Advances in Neural Information
Processing Systems 2, Morgan Kaufmann Publishers, Los Altos, CA, 524-532.
Fausett, L. (1994), ”Fundamentals of Neural Networks”. Prentice Hall, Englewood Cliffs, NJ.
Fukushima, K., Miyake, S., and Ito, T. (1983), ”Neocognitron: A neural net-
work model for a mechanism of visual pattern recognition,” IEEE Transactions
on Systems, Man, and Cybernetics, 13, 826-834.
Fukushima, K. (1988), ”Neocognitron: A hierarchical neural network capa-
ble of visual pattern recognition,” Neural Networks, 1, 119-130.
Grossberg, S. (1976), ”Adaptive pattern classification and universal recod-
ing: I. Parallel development and coding of neural feature detectors,” Biological
Cybernetics, 23, 121-134.
Hecht-Nielsen, R. (1987), ”Counterpropagation networks,” Applied Optics,
26, 4979-4984.
Hecht-Nielsen, R. (1988), ”Applications of counterpropagation networks,”
Neural Networks, 1, 131-139.
Hecht-Nielsen, R. (1990), ”Neurocomputing, Reading”. Addison-Wesley, N.
York.
Hertz, J., Krogh, A., and Palmer, R. (1991) ”Introduction to the Theory of
Neural Computation”. Addison-Wesley, Redwood City, California.
Hopfield, J.J. (1982), ”Neural networks and physical systems with emer-
gent collective computational abilities,” Proceedings of the National Academy
of Sciences, 79, 2554-2558.
Jordan, M. I. (1986), ”Attractor dynamics and parallelism in a connectionist
sequential machine,” In: Proceedings of the Eighth Annual conference of the
Cognitive Science Society, Lawrence Erlbaum, 531-546.
Kasuba, T. (1993), ”Simplified Fuzzy ARTMAP”. AI Expert, 8, 18-25.
Kohonen, T. (1984), Self-Organization and Associative Memory, Springer,
Berlin /Heidelberg /N. York.
Kohonen, T. (1988), ”Learning Vector Quantization”. Neural Networks, 1,
303.
Kosko, B.(1992), ”Neural Networks and Fuzzy Systems”. Prentice-Hall, En-
glewood Cliffs, N.J.
Lang, K. J., Waibel, A. H., and Hinton, G. (1990), ”A time-delay neural
network architecture for isolated word recognition,” Neural Networks, 3, 23-44.
Masters, T. (1993). ”Practical Neural Network Recipes in C++”. Academic
Press, San Diego.
Medsker, L.R. and Jain, L.C., Eds. (2000) ”Recurrent Neural Networks:
Design and Applications”, CRC Press, Boca Raton, FL.
Moody, J. and Darken, C.J. (1989), ”Fast learning in networks of locally-
tuned processing units,” Neural Computation, 1, 281-294.
Oja, E. (1989), ”Neural networks, principal components, and subspaces,”
International Journal of Neural Systems, 1, 61-68.
Pineda, F.J. (1989), ”Recurrent back-propagation and the dynamical ap-
proach to neural computation,” Neural Computation, 1, 161-172.
Riedmiller, M. and Braun, H. (1993), ”A Direct Adaptive Method for Faster
Backpropagation Learning: The RPROP Algorithm”, Proceedings of the IEEE
International Conference on Neural Networks 1993, IEEE Press, San Francisco.
Rumelhart, D.E., Hinton, G.E., and Williams, R.J. (1986), ”Learning in-
ternal representations by error propagation”. In: Rumelhart, D.E. and Mc-
Clelland, J. L., Eds. (1986), Parallel Distributed Processing: Explorations in
the Microstructure of Cognition, Volume 1, The MIT Press, Cambridge, MA,
318-362.
Sanger, T.D. (1989), ”Optimal unsupervised learning in a single-layer linear
feedforward neural network,” Neural Networks, 2, 459-473.
Specht, D.F. (1990) ”Probabilistic neural networks,” Neural Networks, 3,
110-118.
Specht, D.F. (1991) ”A Generalized Regression Neural Network”, IEEE
Transactions on Neural Networks, 2, Nov. 1991, 568-576.
Werbos, P.J. (1990), ”Backpropagation through time: What it is and how to
do it”. Proceedings of the IEEE, 78, 1550-1560.
Williams, R.J., and Zipser, D., (1989), ”A learning algorithm for continually
running fully recurrent neural networks,” Neural Computation, 1, 270-280.
Blum, A., and Rivest, R.L. (1989), ”Training a 3-node neural network is
NP-complete”. In: Touretzky, D.S. (Ed.), Advances in Neural Information
Processing Systems 1, Morgan Kaufmann, San Mateo, CA, 494-501.
Chalmers, D.J. (1996), ”The Conscious Mind: In Search of a Fundamental
Theory”. Oxford University Press, Oxford.
Chrisman, L. (1991), ”Learning Recursive Distributed Representations for
Holistic Computation”. Connection Science, 3, 345-366.
Collier, R. (1994), ”An historical overview of natural language processing
systems that learn”. Artificial Intelligence Review, 8(1).
Devroye, L., Gyorfi, L., and Lugosi, G. (1996), ”A Probabilistic Theory of
Pattern Recognition”. Springer, Berlin /Heidelberg /New York .
Farago, A. and Lugosi, G. (1993), ”Strong Universal Consistency of Neural
Network Classifiers”. IEEE Transactions on Information Theory, 39, 1146-1151.
Lugosi, G., and Zeger, K. (1995), ”Nonparametric Estimation via Empirical
Risk Minimization”. IEEE Transactions on Information Theory, 41, 677-678.
Siegelmann, H.T., and Sontag, E.D. (1999), ”Turing Computability with
Neural Networks”. Applied Mathematics Letters, 4, 77-80.
Valiant, L. (1988), ”Functionality in Neural Nets”. Learning and Knowledge
Acquisition, Proc. AAAI, 629-634.
White, H. (1990), ”Connectionist Nonparametric Regression: Multilayer
Feedforward Networks Can Learn Arbitrary Mappings”. Neural Networks, 3,
535-550.
White, H. (1992b), ”Artificial Neural Networks: Approximation and Learn-
ing Theory”. Blackwell, London.
White, H., and Gallant, A.R. (1992), ”On Learning the Derivatives of an
Unknown Mapping with Multilayer Feedforward Networks,” Neural Networks,
5, 129-138.
Chapter 5

Machine Learning

5.1 Introduction
The main goal of machine learning is to build computer systems that can adapt
and learn from experience. Different learning techniques have been developed
for different performance tasks. The primary tasks that have been investigated
are supervised learning for discrete decision making, supervised learning for
continuous prediction, reinforcement learning for sequential decision making
and unsupervised learning. Here we consider a more general setting not
necessarily based on NNs, see the previous chapter.
necessarily based on NNs, see the previous chapter.
The best understood task is one-shot decision making, see Wilson (1999):
the computer is given a description of an object (event, situation, etc.) and it
must output a classification of that object. For example, an optical recognizer
must input a digitized image of a character and output the name of that char-
acter (”A” through ”Z”). A machine learning approach to constructing such a
system would begin by collecting training examples, each consisting of a digi-
tized image of a character and the correct name of the character. This would
be analyzed by a learning algorithm to produce an optical character recognizer
for classifying new images.
Machine learning algorithms search a space of candidate classifiers for one
that performs well on the training examples and is expected to work well on
new examples. Learning methods for classification problems include decision trees,
NNs, rule learning algorithms, clustering methods and Bayesian networks, see
the next chapter.
There exist four basic questions to answer when developing a new machine
learning system:

1. How is the classifier represented?


2. How are examples represented?
3. What objective function should be employed to evaluate candidate classi-
fiers?


4. What search algorithm should be used?


Let us illustrate these four questions using two of the most popular learning
algorithms: C4.5 (by Quinlan, 1993) and backpropagation.
The C4.5 algorithm represents a classifier as a decision tree. Each example
is represented by a vector of features, e.g. one feature describing a printed
character might be whether it has a long vertical line segment (such as the
letters B, D, E, etc.). Each node in the decision tree tests the value of one of
the features and branches to one of its children, depending on the results of
the test. A new example is classified by starting at the root of the tree and
applying the test at that node. If the test is true, it branches to the left child;
otherwise it branches to the right child. The test at the child node is then
applied recursively, until one of the leaf nodes of the tree is reached. The leaf
node gives the predicted classification of the example.
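The classification procedure just described can be sketched in Python; the tree layout, the feature names and the `Leaf`/`Node` classes below are illustrative inventions, not Quinlan's actual code.

```python
# Sketch of decision-tree classification as described above; the tree layout
# and feature names are invented for illustration (this is not C4.5 itself).

class Leaf:
    def __init__(self, label):
        self.label = label          # predicted class at this leaf

class Node:
    def __init__(self, feature, left, right):
        self.feature = feature      # boolean feature tested at this node
        self.left = left            # child followed when the test is true
        self.right = right          # child followed when the test is false

def classify(tree, example):
    """Start at the root and apply the tests until a leaf is reached."""
    while isinstance(tree, Node):
        tree = tree.left if example[tree.feature] else tree.right
    return tree.label

# Toy tree for a character recognizer.
tree = Node("has_long_vertical_segment",
            Node("has_closed_loop", Leaf("B"), Leaf("E")),
            Leaf("C"))

print(classify(tree, {"has_long_vertical_segment": True,
                      "has_closed_loop": True}))   # prints B
```

The recursion of the text is expressed here as a loop: each step replaces the current subtree by one of its children until a leaf remains.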
C4.5 searches the space of decision trees through a constructive search. It
first considers all trees consisting of only a single root node and chooses one
of those. Then it considers all trees having that root node and various left
children, and chooses one of those, and so on. This process constructs the tree
incrementally with the goal of finding the decision tree that minimizes the so-called
pessimistic error, which is an estimate of classification error of the tree on new
training examples. It is based on taking the upper endpoint of a confidence
interval for the error of the tree computed separately for each leaf. Although
C4.5 constructs its classifier, other learning algorithms begin with a complete
classifier and modify it. The backpropagation algorithm for learning NNs begins
with an initial NN and computes the classification error of that network on the
training data. It then makes small adjustments in the weights of the network
to reduce this error. This process is repeated until the error is minimized.
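The weight-adjustment loop can be illustrated on the simplest possible case, a single linear unit trained by gradient descent; full backpropagation applies the same kind of update to every weight of a multi-layer network via the chain rule. The training data and learning rate below are illustrative.

```python
# Gradient-descent weight updates for a single linear unit y = w*x + b:
# a minimal illustration of repeatedly adjusting weights to reduce the
# training error (backpropagation generalizes this to multi-layer networks).

def train(examples, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in examples:
            y = w * x + b            # forward pass
            err = y - target         # error on this training example
            w -= lr * err * x        # small adjustment of each weight
            b -= lr * err            # in the direction that reduces the error
    return w, b

# Training data generated by the target function y = 2x + 1.
w, b = train([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
print(round(w, 2), round(b, 2))      # converges to 2.0 1.0
```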

5.2 Three Basic Theories


There are three fundamentally different theories of machine learning. The
classical theory takes the view that, before analyzing the training examples, the
learning algorithm makes a ”guess” about an appropriate space of classifiers
to consider, e.g., it guesses that decision trees will be better than neural net-
works. The algorithm then searches the chosen space of classifiers hoping to
find a good fit to the data. The Bayesian theory takes the view that the designer
of a learning algorithm encodes all of his/her prior knowledge in the form of a
prior probability distribution over the space of candidate classifiers. The learn-
ing algorithm then analyzes the training examples and computes the posterior
probability distribution over the space of classifiers. In this view the training
data serve to reduce our remaining uncertainty about the unknown classifier.
Based on the development of fuzzy set theory, fuzzy theory has recently gained
popularity. Similarly to the Bayesian approach, learning algorithms encode the
prior knowledge in the form of prior possibility distributions. The resulting
algorithms output the possibility distributions and in this way the training data
again reduce our uncertainty about the unknown classifier.

These three theories lead to different practical approaches. The first theory
encourages the development of large, flexible hypothesis spaces (such as decision
trees and NNs) that can represent many different classifiers. The second
and third theories imply the development of representational systems that can
readily express prior knowledge, such as Bayesian and fuzzy networks and other
stochastic /fuzzy models.

5.3 Supervised Machine Learning


Here the discussion has focused on discrete classification, but the same issues
arise for the second learning task: supervised learning for continuous prediction,
or regression. In this task, the computer is given a description of an object and
it must output a real number. For example, given a description of a prospective
student (e.g. by the high-school grade-point average (GPA)), the system must
predict the student’s college GPA. The machine learning approach is the same: a
collection of training examples describing students and their GPAs is provided
to the learning algorithm, which outputs a predictor to predict college GPA.
Learning methods for continuous prediction include neural networks, regression
trees, linear and additive models, etc. Classification and prediction are often
called supervised learning tasks, because the training data include not only the
input objects but also the corresponding output values.
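A minimal sketch of such a continuous predictor is an ordinary least-squares fit of a linear model; the GPA figures below are invented for illustration.

```python
# Ordinary least-squares fit of a linear predictor y = a*x + c:
# a minimal sketch of supervised learning for continuous prediction.
# The GPA data are invented for illustration.

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx            # slope and intercept

high_school_gpa = [2.8, 3.2, 3.6, 4.0]     # training inputs
college_gpa     = [2.5, 2.9, 3.3, 3.7]     # training outputs

a, c = fit_linear(high_school_gpa, college_gpa)
print(round(a * 3.4 + c, 2))         # predicted college GPA: 3.1
```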

5.4 Reinforcement Machine Learning


In reinforcement learning tasks, each decision made by the computer affects
subsequent decisions. Consider, for example,
a computer controlled robot attempting to navigate from a hospital kitchen to
a patient’s room. At each point in time, the computer must decide whether to
move the robot forward, left, right or backward. Each decision changes the
location of the robot, so that the next decision will depend on previous
decisions. After each decision, the environment provides a real-valued reward. For
example, the robot may receive a positive reward for delivering a meal to the correct
patient and a negative reward for bumping into walls. The goal of the robot is
to choose sequences of actions to maximize its long-term reward. This is very
different from the standard supervised learning task, where each classification
decision is completely independent of other decisions.
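The chapter does not commit to a particular algorithm; tabular Q-learning is one standard technique for this setting, sketched here on a toy one-dimensional corridor in place of the hospital robot. All parameter values are illustrative.

```python
# Tabular Q-learning on a toy 1-D corridor: one standard algorithm for
# learning from delayed rewards (used here as an illustration; the text
# does not fix a particular method). States 0..4; entering state 4 pays +1.
import random

random.seed(0)
n_states, actions = 5, (-1, +1)          # actions: move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2        # step size, discount, exploration

for _ in range(500):                     # training episodes
    s = 0
    while s != n_states - 1:             # episode ends at the goal state
        a = (random.choice(actions) if random.random() < eps
             else max(actions, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

# The greedy policy learned: move right (+1) in every non-terminal state.
policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)]
print(policy)
```

Unlike the supervised examples above, no single step is labeled correct; the reward for reaching the goal propagates backward through the Q-values over repeated episodes.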

5.5 Unsupervised Machine Learning


The final learning task we discuss here is unsupervised learning, where the com-
puter is given a collection of objects and is asked to construct a model to explain
the observed properties of these objects. No teacher provides desired output or
rewards. For example, given a collection of astronomical objects, the learning
system should group the objects into stars, planets, and galaxies and describe
each group in terms of its electromagnetic spectrum, distance from earth, and so on.
Unsupervised learning can be understood in a much wider range of tasks, e.g.
in cluster analysis, see the chapter devoted to fuzzy systems. One useful
formulation of unsupervised learning is density estimation. Define a probability
distribution P (X) to be the probability that object X will be observed. Then
the goal of unsupervised learning is to find this probability distribution on the
samples of objects. This may be typically accomplished by defining a family of
possible stochastic models and choosing the model that best fits the data.
Considering again the above example with astronomical objects, the proba-
bility distribution P (X) describing the whole collection of astronomical objects
could then be modeled as a mixture of normal distributions - one distribution
for each group of objects. The learning process determines the number of groups and
the mean and covariance matrix of each multivariate distribution. The AUTO-
CLASS program discovered a new class of astronomical objects in just this way,
see Cheeseman et al. (1988). A similar method has been applied in the HMM speech
recognition model, see Rabiner (1989).
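A minimal one-dimensional sketch of this mixture-of-normals approach, fitted with the EM algorithm (the standard technique for this task, though the text does not name it); the data are synthetic and the initialization is ad hoc.

```python
# Fitting a two-component mixture of one-dimensional normal distributions by
# the EM algorithm: a minimal sketch of density estimation as unsupervised
# learning. The data are synthetic; initialization is ad hoc.
import math, random

random.seed(1)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(6.0, 1.0) for _ in range(200)])

def pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

w, m, s = [0.5, 0.5], [min(data), max(data)], [1.0, 1.0]
for _ in range(30):                      # EM iterations
    # E-step: responsibility of each component for each data point
    r = [[w[k] * pdf(x, m[k], s[k]) for k in (0, 1)] for x in data]
    r = [[p / (p + q), q / (p + q)] for p, q in r]
    # M-step: re-estimate weights, means and standard deviations
    for k in (0, 1):
        nk = sum(rx[k] for rx in r)
        w[k] = nk / len(data)
        m[k] = sum(rx[k] * x for rx, x in zip(r, data)) / nk
        s[k] = math.sqrt(sum(rx[k] * (x - m[k]) ** 2
                             for rx, x in zip(r, data)) / nk)

print(sorted(round(v, 1) for v in m))    # means recovered near 0 and 6
```

No labels are given; the algorithm discovers the two groups and their parameters from the data alone, which is exactly the density-estimation view described above.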

References:

Bishop, C.M., (1996) ”Neural Networks for Pattern Recognition”. Oxford
Univ. Press, Oxford.
Breiman, L.J. et. al.,(1984) ”Classification and Regression Trees”. Wadsworth,
Monterey.
Cheeseman, P.J. et al., (1988) ”AUTOCLASS: A Bayesian Classification Sys-
tem”. Proc. 5th Internat. Conference on Machine Learning, Kaufmann, San
Francisco, 54-64.
Cleveland, W.S. and Devlin, J.S. (1988) ”Locally-Weighted Regression: An
Approach to Regression Analysis by Local Fitting”. J. of the American Statist.
Assoc., 83, 596-610.
Cohen, W.W. (1995) ”Fast Effective Rule Induction”. Proc. 12th Internat.
Conference on Machine Learning. Kaufmann, San Francisco,115-123.
Everitt, B.S. (1993) ”Cluster Analysis”. Arnold Publ. House, London.
Hastie, T.J. and Tibshirani, R.J. (1990) ”Generalized Additive Models”.
Chapman and Hall, London.
Hojjat, A. and Shih-Lin H. (1995) ”Machine Learning.” J. Wiley, N. York.
Mitchell, T.M. (1997) ”Machine Learning”. McGraw-Hill, New York.
Quinlan, J.R. (1993) ”C4.5: Programs for Machine Learning”. Kaufmann,
San Francisco.
Rabiner, L.R. (1989) ”A tutorial on Hidden Markov Models and Selected
Applications in Speech Recognition”. Proc. IEEE 77(2), 257-286.
Wilson, R.A. and Keil, F.C. Eds. (1999) ”The MIT Encyclopedia of the
Cognitive Sciences”. MIT Press, Cambridge, MA, London.
Chapter 6

Probabilistic Reasoning

6.1 Introduction
Probabilistic reasoning (PR) refers to the formation of probability judgements
and subjective beliefs about the likelihoods of outcomes and the frequencies of
events. The judgements that people make are often about things that are only
indirectly observable and only partly predictable. For example, the weather, a
game of sports, a project at work, or whatever it could be, our willingness to
engage in an endeavor and the actions that we take depend on our estimated
likelihood of the relevant outcomes. How likely is our team to win? How
frequently have projects like this failed before? Like other areas of reasoning
and decision making, we distinguish normative, descriptive and prescriptive
approaches.
The normative approach to PR is constrained by the same mathematical
rules that govern the classical, set-theoretic concept of probability. In particular,
probability judgements are said to be coherent, if they satisfy Kolmogorov’s
axioms:

1. No probabilities are negative.

2. The probability of a tautology is 1.

3. The probability of a disjunction of two logically exclusive statements


equals the sum of their respective probabilities.

4. The probability of a conjunction of two statements equals the product of


probability of the first and the probability of the second, assuming that
the second statement is true.

Whereas the first three axioms involve unconditional probabilities, the fourth
introduces conditional probabilities. When applied to hypotheses and data in
inferential contexts, simple arithmetic manipulation of rule 4 leads to the result
that the (posterior) probability of a hypothesis conditional on the data is equal
to the probability of the data conditional on the hypothesis multiplied by the
(prior) probability of the hypothesis, all divided by the probability of the data.
Although mathematically trivial, this is of central importance in the context of
Bayesian inference, which underlies theories of belief updating and is considered
by many to be a normative requirement of probabilistic reasoning. This is the
main difference to reasoning that is based on logic, either classical or fuzzy logic
(or multi-valued logic), see the chapter dealing with fuzzy systems.
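The posterior formula obtained from axiom 4 is easy to check numerically; the prior and likelihood values below are illustrative.

```python
# Bayes' rule obtained from axiom 4: P(H|D) = P(D|H) * P(H) / P(D),
# where P(D) is computed by summing over the exclusive hypotheses (axiom 3).
# The prior and likelihood numbers are illustrative.

prior      = {"H": 0.01, "not_H": 0.99}        # P(hypothesis)
likelihood = {"H": 0.90, "not_H": 0.05}        # P(data | hypothesis)

p_data    = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / p_data for h in prior}

print(round(posterior["H"], 3))                # → 0.154
```

The data raise the probability of H from 0.01 to about 0.15: the training data, in the Bayesian view, reduce our remaining uncertainty about the hypothesis.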

6.2 Markov and Bayesian Networks


Using the above axiomatic basis, structural properties of probabilistic models
can be identified and captured by graphical representations, particularly Markov
networks and Bayesian networks. A Markov network is an undirected graph
whose links represent symmetrical probabilistic dependences, while a Bayesian
network is a directed acyclic graph whose arrows represent causal influences
or class-property relationships. A formal semantics of both network types has
been established, and their power and limitations as knowledge representation
schemes in inference systems have been explored.
The impact of each new piece of evidence is viewed as a perturbation that
propagates through the network via message-passing between neighboring vari-
ables, with minimal external supervision. Belief parameters, communication
messages and updating rules guarantee that an equilibrium can be reached in
time proportional to the longest path in the network.
In belief updating, the impact of each new piece of evidence is viewed as a
perturbation that propagates through the network; at equilibrium, each variable
should be bound to a fixed value that, together with all other value assignments,
is the best interpretation of the evidence. This approach is called distributed
computation.

6.3 Decision Analysis based on PR


In decision analysis and rational decision making, PR provides coherent
prescriptions for choosing actions and meaningful guarantees of the quality of
these choices. Whereas judgements about the likelihoods of events are quantified
by probabilities, judgements about the desirability of action consequences
are quantified by utilities. Bayesian methodologies regard the expected utilities
as a gauge of the merit of actions and therefore treat them as prescriptions for
choosing among alternatives. These attitudes are captured by what is known
as the axioms of utility theory introduced by von Neumann and Morgenstern in
their famous book from 1947.
While the arguments for the expected-value criterion are normally based
on long-run accumulation of payoffs from a long series of repetitive decisions,
e.g. gambling, the expected utility criterion is also justified in single-decision
situation, as the summation operator originates not with additive accumulation
of payoffs but with the additive axiom of probability theory.

6.4 Learning Structure from Data


Taking Bayesian belief networks as the basic scheme of knowledge representa-
tion, the learning task separates into two additional subtasks:

(1) Parameter learning - learning the numerical parameters (i.e. the conditional
probabilities) for a given network topology.
(2) Structure learning - identifying the network topology.

The subtasks are not mutually independent because the set of parameters
needed depends largely on the topology assumed, and conversely. The chief role is
played by structure learning. In PR a number of sophisticated methods and
techniques for discovering structures in empirical data have been developed, see
the literature; among them, the tree-structuring method of Chow and Liu (1968) is
of significant importance. Dechter (1987) used a similar method to decompose
the general n-ary relations into trees of binary relations. The polytree recovery
algorithm was developed by Rebane and Pearl (1987). A general probabilistic
account of causation was developed under the name of minimal causal model
by Pearl and Verma (1991).

6.5 Dempster-Shafer Theory


Belief functions were introduced by Dempster (1967) as a generalization of
Bayesian inference wherein probabilities are assigned to sets rather than to
individual points. In this interpretation, however, it is hard to justify the ex-
clusion of models that are consistent with the information available but that
take into account the possibility that the observation could have turned out
differently. Shafer (1976) has reinterpreted Dempster’s theory as a model of
evidential reasoning including two interactive frames:
- probabilistic frame representing the evidence,
- possibilistic frame where categorical compatibility relations are defined.
In this sense the Dempster-Shafer theory serves as a bridge between
probabilistic and possibilistic (fuzzy) reasoning, see also the chapter Fuzzy Systems.

References:

Arkes, H.R. and Hammond, K.R. (1986) ”Judgement and Decision Making”.
Cambridge Univ. Press, Cambridge.
Goldstein, W.M. and Hogarth, R.M. (1997) ”Research on Judgement and
Decision Making: Currents, Connections and Controversies”. Cambridge Univ.
Press, Cambridge.
Jaynes, E.T. (1998) ”Probability Theory: The Logic of Science”. Electronic
book at http://omega.albany.edu:8008/JaynesBook.html
Pearl, J. (1988) ”Probabilistic Reasoning in Intelligent Systems: Networks
of Plausible Inference”. Kaufmann, San Francisco.
Tversky, A. and Koehler, D.J. (1994) ”Support Theory: A nonextensional
Representation of Subjective Probability”. Psychological Review, 101, 547-567.
Wilson, R.A. and Keil, F.C. Eds. (1999) ”The MIT Encyclopedia of the
Cognitive Sciences”. MIT Press, Cambridge, MA, London.
Chapter 7

Conclusion

The impact of soft computing has been felt increasingly strongly in recent
years. Soft computing is likely to play an especially important role in science
and engineering, but eventually its influence may extend much farther. Building
human-centered systems is an imperative task for scientists and engineers in the
new millennium.
In many ways, soft computing represents a significant paradigm shift in
the aims of computing - a shift which reflects the fact that the human mind,
unlike present day computers, possesses a remarkable ability to store and process
information which is pervasively imprecise, uncertain and lacking in categoricity.
In this overview, we have focused primarily on fuzzy methodologies and
fuzzy systems, as they bring basic ideas to other SC methodologies. The other
constituents of SC have been also surveyed here but for details we refer to the
existing vast literature.

Part II

Fuzzy Optimization

Chapter 8

Fuzzy Sets

8.1 Introduction
A well known fact in the theory of sets is that properties of subsets of a given
set X and their mutual relations can be studied by means of their characteristic
functions, see e.g. [9] - [23] and [38]. While this may be advantageous in some
contexts, we should notice that the notion of a characteristic function is more
complex than the notion of a subset. Indeed, the characteristic function χA of
a subset A of X is defined by

χA (x) = 1 if x ∈ A, and χA (x) = 0 if x ∉ A.

Since χA is a function we need not only the underlying set X and its subset A
but also one additional set, in this case the set {0, 1} or any other two-element
set. Moreover, we also need the notion of Cartesian product because functions
are specially structured binary relations, in this case special subsets of X×{0, 1}.
If we define fuzzy sets by means of their membership functions, that is, by
replacing the range {0, 1} of characteristic functions with a lattice, for example,
the naturally ordered unit interval [0, 1] of real numbers, then we should be aware
of the following fact. Such functions may be related to certain objects (built
from subsets of the underlying set) in a way analogous to how the characteristic
functions are related to subsets. This may explain why the fuzzy community
(rightly?) hesitates to accept the view that a fuzzy subset of a given set is
nothing else than its membership function. Then, a natural question arises.
Namely, what are those objects? Obviously, it can be expected that they are
more complex than just subsets because the class of functions mapping X into
a lattice can be much richer than the class of characteristic functions. In the
next section, we show that it is advantageous to define these objects as nested
families of subsets satisfying certain mild conditions.
Even if it is not the purpose of this chapter to deal with interpretations of the
concepts involved, it should be noted that fuzzy sets and membership functions

are closely related to the inherent imprecision of linguistic expressions in natural
languages. Probability theory does not provide a way out, as it usually deals
with crisp events and the uncertainty is whether this event will occur or not.
However, in fuzzy logic, this is a matter of degree of truth rather than a simple
”yes or no” decision.
Such generalized characteristic functions have found numerous connotations
in different areas of mathematics, a variety of philosophical interpretations and
many real applications, see e.g. the books [9], [21], [23], [38] and [52].
In the context of multicriteria decision making, functions mapping the un-
derlying space into the unit interval [0, 1] and representing normalized utility
functions can be also interpreted as membership functions of fuzzy sets of the
underlying space, see [23].
The main purpose of this chapter is the investigation of some properties of fuzzy
sets, primarily with respect to generalized concave membership functions and
with the prospect of applications in optimization and decision analysis.

8.2 Definition and Basic Properties


In order to define the concept of a fuzzy subset of a given set X within the
framework of standard set theory we are motivated by the concept of upper
level set of a function introduced in [79], see also [59] and [35]. Throughout this
chapter, X is a nonempty set.

Definition 3 Let X be a set. A fuzzy subset A of X is a family of subsets
Aα ⊂ X, where α ∈ [0, 1], satisfying the following properties:

(i) A0 = X,
(ii) Aβ ⊂ Aα whenever 0 ≤ α < β ≤ 1, (8.1)
(iii) Aβ = ⋂_{0≤α<β} Aα .

A fuzzy subset A of X will be also called a fuzzy set.


A membership function µA : X → [0, 1] of the fuzzy set A is defined as follows:

µA (x) = sup{α|α ∈ [0, 1], x ∈ Aα }, (8.2)

for each x ∈ X.
The core of A, Core(A), is given by

Core(A) = {x ∈ X | µA (x) = 1}. (8.3)

A fuzzy subset A of X is called normalized if its core is nonempty.


The support of A, Supp(A), is given by

Supp(A) = Cl({x ∈ X | µA (x) > 0}). (8.4)

The height of A, Hgt(A), is given by

Hgt(A) = sup{µA (x)|x ∈ X}. (8.5)



For each α ∈ [0, 1] the α-cut of A, [A]α , is given by

[A]α = {x ∈ X | µA (x) ≥ α}. (8.6)

For x ∈ X the function value µA (x) is called the membership degree of x in the
fuzzy set A. The class of all fuzzy subsets of X is denoted by F(X).

If A is normalized, i.e. Core(A) ≠ ∅, then Hgt(A) = 1, but not vice versa.
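For a finite universe, definitions (8.2)-(8.6) can be computed directly from the membership function; the universe and the membership degrees below are illustrative, and on a finite set the closure in (8.4) reduces to the set itself.

```python
# A fuzzy subset of a finite universe given by its membership function,
# with the derived notions (8.3)-(8.6). Values are illustrative; on a
# finite universe the closure in (8.4) is the identity.

mu = {"a": 0.0, "b": 0.4, "c": 0.7, "d": 1.0}       # membership degrees

def alpha_cut(mu, alpha):
    """The alpha-cut [A]_alpha of formula (8.6)."""
    return {x for x, deg in mu.items() if deg >= alpha}

core    = alpha_cut(mu, 1.0)                        # Core(A), formula (8.3)
support = {x for x, deg in mu.items() if deg > 0}   # Supp(A), formula (8.4)
height  = max(mu.values())                          # Hgt(A), formula (8.5)

# This fuzzy set is normalized (nonempty core), hence its height is 1.
print(sorted(alpha_cut(mu, 0.5)), sorted(core), sorted(support), height)
```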


In the following two propositions, we show that the family generated by the
upper level sets of a function µ : X → [0, 1] satisfies conditions (8.1), thus, it
generates a fuzzy subset of X and the membership function µA defined by (8.2)
coincides with µ. Moreover, for a given fuzzy set A = {Aα }α∈[0,1] , every α-cut
[A]α given by (8.6) coincides with the corresponding Aα .

Proposition 4 Let µ : X → [0, 1] be a function and let A = {Aα }α∈[0,1] be a
family of its upper-level sets, i.e. Aα = U (µ, α) for all α ∈ [0, 1]. Then A is a
fuzzy subset of X and µ is the membership function of A.

Proof. First, we prove that A = {Aα }α∈[0,1] , where Aα = U (µ, α) for all
α ∈ [0, 1], satisfies conditions (8.1). Indeed, conditions (i) and (ii) hold easily.
For condition (iii), we observe that by (ii) it follows that Aβ ⊂ ⋂_{0≤α<β} Aα . To
prove the opposite inclusion, let

x ∈ ⋂_{0≤α<β} Aα . (8.7)

Assume the contrary, that is, let x ∉ Aβ . Then µ(x) < β and there exists
α′ such that µ(x) < α′ < β. By (8.7) we have x ∈ Aα′ , thus µ(x) ≥ α′ , a
contradiction.
It remains to prove that µ = µA , where µA is a membership function of A.
For this purpose let x ∈ X and let us show that µ(x) = µA (x).
By definition (8.2) we have

µA (x) = sup{α|α ∈ [0, 1], x ∈ Aα } = sup{α|α ∈ [0, 1], x ∈ U (µ, α)}


= sup{α|α ∈ [0, 1], µ(x) ≥ α},

therefore, µ(x) = µA (x).

Proposition 5 Let A = {Aα }α∈[0,1] be a fuzzy subset of X and let µA : X →
[0, 1] be the membership function of A. Then for all α ∈ [0, 1] the α-cuts [A]α
coincide with Aα , i.e. [A]α = Aα .

Proof. Let β ∈ [0, 1]. By definition (8.2), observe that Aβ ⊂ [A]β . It
suffices to prove the opposite inclusion.
Let x ∈ [A]β . Then µA (x) ≥ β, or, equivalently, sup{α|α ∈ [0, 1], x ∈ Aα } ≥
β. It follows that for every β′ with β′ < β there exists α′ with β′ ≤ α′ ≤ β,
such that x ∈ Aα′ . By monotonicity condition (ii) in (8.1) we have x ∈ Aα for
all 0 ≤ α ≤ α′ . Hence, x ∈ ⋂_{0≤α<β} Aα ; however, applying (iii) in (8.1) we get
x ∈ Aβ . Consequently, [A]β ⊂ Aβ .
These results allow for introducing a natural one-to-one correspondence be-
tween fuzzy subsets of X and real functions mapping X to [0, 1]. Any fuzzy
subset A of X is given by its membership function µA and vice-versa, any func-
tion µ : X → [0, 1] uniquely determines a fuzzy subset A of X, with the property
that the membership function µA of A is µ.
The notions of inclusion and equality extend to fuzzy subsets as follows. Let
A = {Aα }α∈[0,1] , B = {Bα }α∈[0,1] be fuzzy subsets of X. Then

A ⊂ B if Aα ⊂ Bα for each α ∈ [0, 1], (8.8)


A = B if Aα = Bα for each α ∈ [0, 1]. (8.9)

Proposition 6 Let A and B be fuzzy subsets of X, A = {Aα }α∈[0,1] , B =
{Bα }α∈[0,1] . Then the following holds:

A ⊂ B if and only if µA (x) ≤ µB (x) for all x ∈ X, (8.10)


A = B if and only if µA (x) = µB (x) for all x ∈ X. (8.11)

Proof. We prove only (8.10); the proof of statement (8.11) is analogous.
Let x ∈ X, A ⊂ B. Then by definition (8.8), Aα ⊂ Bα for each α ∈ [0, 1]. Using
(8.2) we obtain

µA (x) = sup{α|α ∈ [0, 1], x ∈ Aα }
≤ sup{α|α ∈ [0, 1], x ∈ Bα } = µB (x).

Suppose that µA (x) ≤ µB (x) holds for all x ∈ X and let α ∈ [0, 1]. We have
to show that Aα ⊂ Bα . Indeed, for an arbitrary u ∈ Aα , we have

sup{β|β ∈ [0, 1], u ∈ Aβ } ≤ sup{β|β ∈ [0, 1], u ∈ Bβ }.

From here, sup{β|β ∈ [0, 1], u ∈ Bβ } ≥ α, therefore, for each β < α it follows
that u ∈ Bβ . Hence, by (iii) in Definition 3, we obtain u ∈ Bα .
Classical sets can be considered as special fuzzy sets where the families con-
tain the same elements. We obtain the following definition.

Definition 7 Let A be a subset of X. The fuzzy subset {Aα }α∈[0,1] of X defined
by Aα = A for all α ∈ (0, 1] is called a crisp fuzzy subset of X generated by A.
A fuzzy subset of X generated by some A ⊂ X is called a crisp fuzzy subset of
X or briefly a crisp subset of X.

Proposition 8 Let A = {A}α∈[0,1] be a crisp subset of X generated by A. Then
the membership function of A is equal to the characteristic function of A.
Proof. Let µ be the membership function of A = {A}α∈[0,1] . We wish to
prove that µ(x) = 1 if x ∈ A, and µ(x) = 0 if x ∉ A.
Let x ∈ X, x ∉ A. Then by definition (8.2) and Definition 7, µ(x) =
sup{α|α ∈ [0, 1], x ∈ Aα } = 0.
Let x ∈ X, x ∈ A. Then again by definition (8.2) and Definition 7,

µ(x) = sup{α|α ∈ [0, 1], x ∈ Aα }
= sup{α|α ∈ [0, 1], x ∈ A} = 1.

By Definition 7, the set P(X) of all subsets of X can naturally be embedded
into the set of all fuzzy subsets of X and we can write A = {A}α∈[0,1] if {A}α∈[0,1]
is generated by A ⊂ X. According to Proposition 8, we have in this case µA =
χA . In particular, if A contains one element a of X, that is A = {a}, then we
write a ∈ F(X) instead of {a} ∈ F(X) and χa instead of χ{a} .

Example 9 Let µ : R → [0, 1] be such that

µ(x) = e−x² if x ∈ R.

Let A′ = {A′α }α∈[0,1] , A″ = {A″α }α∈[0,1] be two families of subsets in R defined
as follows:
A′α = {x|x ∈ R, µ(x) > α},
A″α = {x|x ∈ R, µ(x) ≥ α}.

Clearly, A″ is a fuzzy subset of R and A′ ≠ A″ . Observe that (i) and (ii) are
satisfied for A′ and A″ , however, A′1 = ∅ and ⋂_{0≤α<1} A′α = {0}, thus (iii) in
(8.1) is not satisfied. Hence A′ is not a fuzzy subset of R.

8.3 Operations with Fuzzy Sets


In order to generalize the set operations of intersection, union and complement
to fuzzy set operations, it is natural to use triangular norms, triangular conorms
and fuzzy negations, introduced in [79], respectively.
Given a De Morgan triple (T, S, N ), i.e. a t-norm T , a t-conorm S and a
fuzzy negation N , we can define the operations intersection ∩T , union ∪S and
complement CN on F(X), where for A, B ∈ F(X) given by the membership
functions µA , µB , the membership functions of the fuzzy subsets A∩T B, A∪S B
and CN A of X are defined for each x ∈ X as follows, see [38]:

µA∩T B (x) = T (µA (x), µB (x)),


µA∪S B (x) = S(µA (x), µB (x)), (8.12)
µCN A (x) = N (µA (x)).

The operations introduced by L. Zadeh in [38] were originally based on
T = TM = min, S = SM = max and the standard negation N . The properties of
the operations intersection ∩T , union ∪S and complement CN can be derived
directly from the corresponding properties of the t-norm T , t-conorm S and
fuzzy negation N . For brevity, in case of T = min and S = max, we write only
∩ and ∪, instead of ∩T and ∪S .
Notice that for all A ∈ F(X) we do not necessarily obtain properties which
hold for crisp sets, namely

A ∩T CN A = ∅, (8.13)
A ∪S CN A = X. (8.14)

If the t-norm T in the De Morgan triple (T, S, N ) does not have zero divisors,
e.g. T = min, then these properties never hold unless A is a crisp set. On the
other hand, for the De Morgan triple (TL , SL , N ) based on the Łukasiewicz t-norm
T = TL , properties (8.13) and (8.14) are satisfied.
Given a t-norm T and fuzzy subsets A and B of X and Y , respectively,
the Cartesian product A ×T B is the fuzzy subset of X × Y with the following
membership function:

µA×T B (x, y) = T (µA (x), µB (y)) for (x, y) ∈ X × Y . (8.15)

An interesting and natural question arises, whether the α-cuts of the inter-
section A ∩T B, union A ∪S B and Cartesian product A ×T B of A, B ∈ F(X),
coincide with the intersection, union and Cartesian product, respectively, of the
corresponding α-cuts [A]α and [B]α . We have the following result, see [38].

Proposition 10 Let T be a t-norm, S be a t-conorm, α ∈ [0, 1]. Then the


equalities
[A ∩T B]α = [A]α ∩ [B]α ,
[A ∪S B]α = [A]α ∪ [B]α , (8.16)
[A ×T B]α = [A]α × [B]α ,
hold for all fuzzy sets A, B ∈ F(X) if and only if α is an idempotent element of
both T and S.

In particular, this result means that identities (8.16) hold for all α ∈ [0, 1]
and for all fuzzy sets A, B ∈ F(X) if and only if T = TM and S = SM .
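Proposition 10 is easy to test on a discrete universe. In the sketch below (membership degrees are illustrative assumptions), the α-cut of the intersection coincides with the intersection of the α-cuts for T = min, but fails for the product t-norm, since α = 0.5 is not an idempotent of TP:

```python
# Sketch: alpha-cuts commute with cap_T for T = min but not for T = product.

mu_A = {0: 0.7, 1: 0.4, 2: 1.0}
mu_B = {0: 0.7, 1: 0.9, 2: 0.2}
alpha = 0.5

def cut(mu, a):
    # weak alpha-cut [A]_a = {x | mu(x) >= a}
    return {x for x, m in mu.items() if m >= a}

inter_min  = {x: min(mu_A[x], mu_B[x]) for x in mu_A}
inter_prod = {x: mu_A[x] * mu_B[x] for x in mu_A}

lhs_min  = cut(inter_min, alpha)
lhs_prod = cut(inter_prod, alpha)
rhs      = cut(mu_A, alpha) & cut(mu_B, alpha)

print(lhs_min == rhs)    # True:  every alpha is an idempotent of min
print(lhs_prod == rhs)   # False: 0.7 * 0.7 = 0.49 < 0.5 drops x = 0
```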

8.4 Extension Principle


The purpose of the extension principle proposed by L. Zadeh in [98], [99], is to
extend functions or operations having crisp arguments to functions or operations
with fuzzy set arguments. Zadeh’s methodology can be cast in the more general
setting of carrying a membership function via a mapping, see e.g. [21]. There
exist other generalizations for set-to-set mappings, see e.g. [21], [61]. From now
on, X and Y are nonempty sets.

Definition 11 (Extension Principle)


Let X, Y be sets, f : X → Y be a mapping. The mapping f˜ : F(X) → F(Y )
defined for all A ∈ F(X) with µA : X → [0, 1] and all y ∈ Y by

    µf˜(A) (y) = sup{µA (x) | x ∈ X, f (x) = y}  if f⁻¹(y) ≠ ∅,
    µf˜(A) (y) = 0                               otherwise,             (8.17)

is called a fuzzy extension of f .

By formula (8.17) we define the membership function of the image of the


fuzzy set A by fuzzy extension f˜. A justification of this concept is given in the
following theorem stating that the mapping f˜ is a true extension of the mapping
f when considering the natural embedding of P(X) into F(X) and P(Y ) into
F(Y ).
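On a finite universe the extension principle (8.17) can be computed directly; the following sketch (the universe, fuzzy set and function f(x) = x² are illustrative assumptions, not from the text) takes the supremum of µA over each preimage f⁻¹(y):

```python
# Minimal discrete version of the extension principle (8.17).

def fuzzy_extension(f, mu_A, universe):
    """Return the membership function of f~(A) as a dict over f(universe)."""
    mu_out = {}
    for x in universe:
        y = f(x)
        # sup over the preimage f^{-1}(y); points outside f(X) keep degree 0
        mu_out[y] = max(mu_out.get(y, 0.0), mu_A.get(x, 0.0))
    return mu_out

mu_A = {-2: 0.2, -1: 0.6, 0: 1.0, 1: 0.6, 2: 0.2}   # "approximately 0"
image = fuzzy_extension(lambda x: x * x, mu_A, list(mu_A))
print(image)   # {4: 0.2, 1: 0.6, 0: 1.0}
```

Note that the two preimages of y = 1 (namely x = −1 and x = 1) both have degree 0.6, and the supremum keeps that value, as (8.17) prescribes.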

Proposition 12 Let X, Y be sets, f : X → Y be a mapping, x0 ∈ X, y0 =


f (x0 ). If f˜ : F(X) → F(Y ) is defined by (8.17), then

f˜(x0 ) = y0 ,

and the membership function µf˜(x0 ) of the fuzzy set f˜(x0 ) is a characteristic
function of y0 , i.e.
µf˜(x0 ) = χy0 . (8.18)

Proof. To prove the theorem, it is sufficient to prove (8.18). Remember


that we identify subsets and points of X and Y with the corresponding crisp
fuzzy subsets.
Let y ∈ Y , we will show that

µf˜(x0 ) (y) = χy0 (y). (8.19)

Let y = y0 . Since y0 = f (x0 ) we obtain by (8.17) that µf˜(x0 ) (y) = χx0 (x0 ) = 1.
Moreover, by the definition of characteristic function we have χy0 (y0 ) = 1, thus
(8.19) is satisfied.
On the other hand, let y ≠ y0 . Again, by the definition of the characteristic
function we have χy0 (y) = 0. As y ≠ f (x0 ), we obtain for all x ∈ X with
y = f (x) that x ≠ x0 . Clearly, χx0 (x) = 0 and by (8.17) it follows that

A more general form of Proposition 12 says that the image of a crisp set by
a fuzzy extension of a function is again crisp.

Theorem 13 Let X, Y be sets, f : X → Y be a mapping, A ⊂ X. Then

f˜(A) = f (A)

and the membership function µf˜(A) of f˜(A) is a characteristic function of the


set f (A), i.e.
µf˜(A) = χf (A) . (8.20)

Proof. We prove only (8.20). Let y ∈ Y . Since A is crisp, µA = χA . By
(8.17) we obtain

    µf˜(A) (y) = sup{χA (t) | t ∈ X, f (t) = y}
              = 1 if y ∈ f (A),  and  µf˜(A) (y) = 0 otherwise.

Consequently, µf˜(A) (y) = χf (A) (y).


In the following sections the extension principle will be used in different
settings for various sets X and Y , and also for different classes of mappings.
The mathematics of fuzzy sets is, in a narrow sense, a mathematics of the
space of membership functions. In this chapter we shall deal with some proper-
ties of this space, primarily with respect to one of its subsets: the (generalized)
quasiconcave functions.

8.5 Binary and Valued Relations


In a classical set theory, a binary relation R between the elements of sets X and
Y is defined as a subset of the Cartesian product X × Y , that is, R ⊂ X × Y .
A valued relation on X × Y will be a fuzzy subset of X × Y .

Definition 14 A valued relation R on X × Y is a fuzzy subset of X × Y .

The valued relations are sometimes called fuzzy relations, however, we re-
serve this name for valued relations defined on F(X) × F(Y ), which will be
defined later.
Any binary relation R, where R ⊂ X×Y , is embedded into the class of valued
relations by its characteristic function χR being understood as its membership
function µR . In this sense, any binary relation is valued.
Particularly, any function f : X → Y is considered as a binary relation, that
is, as a subset Rf of X × Y , where

Rf = {(x, y) ∈ X × Y |y = f (x)}. (8.21)

Here, Rf may be identified with the valued relation by its characteristic function

µRf (x, y) = χRf (x, y) (8.22)

for all (x, y) ∈ X × Y , where

χRf (x, y) = χf (x) (y). (8.23)

In particular, if Y = X, then any valued relation R on X × X is a fuzzy


subset of X × X.

Definition 15 A valued relation R on X is a valued relation on X × X. A


valued relation R on X is

(i) reflexive if for each x ∈ X

µR (x, x) = 1;

(ii) symmetric if for each x, y ∈ X

µR (x, y) = µR (y, x);

(iii) T -transitive if for a t-norm T and each x, y, z ∈ X

T (µR (x, y), µR (y, z)) ≤ µR (x, z);

(iv) separable if
µR (x, y) = 1 if and only if x = y;

(v) T -equivalence if R is reflexive, symmetric and T -transitive, where T is a


t-norm;
(vi) T -equality if R is reflexive, symmetric, T -transitive and separable, where
T is a t-norm.

Definition 16 Let R be a valued relation on X.

(i) A valued relation R−1 on X is inverse to R if for each x, y ∈ X

µR−1 (x, y) = µR (y, x);

(ii) Let N be a negation, N : [0, 1] → [0, 1]. A valued relation CN R on X is


the complement relation to R if for each x, y ∈ X

µCN R (x, y) = N (µR (x, y)).

(iii) A valued relation Rs on X is the strict relation to R if for each x, y ∈ X

    µRs (x, y) = µR (x, y) if µR (y, x) = 0,
    µRs (x, y) = 0         otherwise.

A valued relation R on X is strict if R = Rs .


(iv) A valued relation R on X is closed if µR is USC on X × X.

For more information about valued relations see also [23].

Example 17 Let ϕ : R → [0, 1] be a function. Then R defined by the member-


ship function µR for all x, y ∈ R by

µR (x, y) = ϕ(x − y) (8.24)



is a valued relation on R. If

    ϕ(t) = 1 if t ≤ 0,  ϕ(t) = 0 otherwise,

then R defined by (8.24) is the usual binary relation ≤ on R. If

    ϕ(t) = 1 if t ≥ 0,  ϕ(t) = 0 otherwise,

then R defined by (8.24) is the usual binary relation ≥ on R. If

    ϕ(t) = 1 if t = 0,  ϕ(t) = 0 otherwise,

then R defined by (8.24) is the usual binary relation = on R.

8.6 Fuzzy Relations


Consider a valued relation R on X × Y given by the membership function
µR : X × Y → [0, 1]. In order to extend this function with crisp arguments
to a function with fuzzy arguments, we apply the extension principle (8.17) of
Definition 11. Then we obtain a mapping µ̃R : F(X × Y ) → F([0, 1]), that is,
the values of µ̃R are fuzzy subsets of [0, 1].
Since F([0, 1]) can be considered as a lattice, we can regard µ̃R as the
membership function of an L-fuzzy set.
However, we do not follow this way; instead, we take a more practical route
and define fuzzy relations as valued relations on F(X) × F(Y ).

Definition 18 Let X, Y be nonempty sets. A fuzzy subset of F(X) × F(Y ) is


called a fuzzy relation on F(X) × F(Y ).

Definition 19 Let X, Y be sets. Let R be a valued relation on X × Y . A fuzzy


relation R̃ on F(X) × F(Y ) given by the membership function µR̃ : F(X) ×
F(Y ) → [0, 1] is called a fuzzy extension of relation R, if for each x ∈ X,
y ∈ Y , it holds
µR̃ (x, y) = µR (x, y) . (8.25)

Let A ∈ F(X), B ∈ F(Y ) be fuzzy sets. When appropriate, we shall use


also the notation A R̃ B, instead of µR̃ (A, B) = 1.

Definition 20 Let X be a nonempty set. Let Rᶜ be a valued relation on X and
let R̃ᶜ be a fuzzy extension of relation Rᶜ. A fuzzy relation ∗R̃ on F(X) × F(X)
defined for all A, B ∈ F(X) by

    µ∗R̃ (A, B) = 1 − µR̃ᶜ (B, A)                                        (8.26)

is called dual to the fuzzy extension R̃ᶜ of relation Rᶜ.

Now, we define an important special fuzzy extension of a valued relation R.

Definition 21 Let X, Y be nonempty sets, T be a t-norm. Let R be a valued


relation on X × Y . A fuzzy relation R̃T on F(X) × F(Y ) defined for all fuzzy
sets A, B with the membership functions µA : X → [0, 1], µB : Y → [0, 1],
respectively, by

µR̃T (A, B) = sup{T (µR (x, y) , T (µA (x) , µB (y)))|x ∈ X, y ∈ Y }, (8.27)

is called a T -fuzzy extension of relation R.

It is easy to show that any T -fuzzy extension R̃T of relation R is a fuzzy


extension R̃ of relation R on F(X) × F(Y ) in the sense of Definition 19.
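On finite universes, formula (8.27) is a finite maximum. The following sketch (the universes, relation and fuzzy sets are illustrative assumptions, not from the text) extends the crisp relation x ≤ y to fuzzy arguments with T = min:

```python
# Discrete sketch of the T-fuzzy extension (8.27):
#   mu_{R~T}(A, B) = sup_{x,y} T(mu_R(x, y), T(mu_A(x), mu_B(y))).

def fuzzy_extension_T(mu_R, mu_A, mu_B, T):
    return max(T(mu_R[(x, y)], T(mu_A[x], mu_B[y]))
               for x in mu_A for y in mu_B)

X = [0, 1, 2]
mu_R = {(x, y): 1.0 if x <= y else 0.0 for x in X for y in X}  # crisp "x <= y"

mu_A = {0: 1.0, 1: 0.5, 2: 0.0}   # "approximately 0"
mu_B = {0: 0.0, 1: 0.5, 2: 1.0}   # "approximately 2"

t_min = min
print(fuzzy_extension_T(mu_R, mu_A, mu_B, t_min))   # 1.0
print(fuzzy_extension_T(mu_R, mu_B, mu_A, t_min))   # 0.5
```

Note the asymmetry: "approximately 0 ≤ approximately 2" holds in degree 1 (witnessed by the pair (0, 2)), while the reversed comparison holds only in degree 0.5, the best that the overlap at x = y = 1 allows.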

Proposition 22 Let X, Y be nonempty sets, T be a t-norm. Let R be a valued
relation on X × Y . If R̃T is a T -fuzzy extension of relation R, then R̃T is a
fuzzy extension of relation R. Moreover, if A′ , A″ ∈ F(X), B′ , B″ ∈ F(Y ) and

    A′ ⊂ A″ ,  B′ ⊂ B″ ,

then
    µR̃T (A′ , B′ ) ≤ µR̃T (A″ , B″ ).                                   (8.28)

Proof. Let x ∈ X, y ∈ Y . By (8.27) we obtain

    µR̃T (x, y) = sup{T (µR (u, v), T (χx (u), χy (v))) | u ∈ X, v ∈ Y }    (8.29)
               = T (µR (x, y), T (1, 1)) = µR (x, y).

Observe that for all u ∈ X, v ∈ Y , µA′ (u) ≤ µA″ (u) and µB′ (v) ≤ µB″ (v).
Clearly, (8.28) follows by monotonicity of the t-norm T .
For any t-norm T , we obtain the following properties of a T -fuzzy extension
of the valued relation.

Proposition 23 Let X, Y be nonempty sets, T be a t-norm. Let R be a valued


relation on X × Y . Let R̃T be a T -fuzzy extension of relation R. If A and B
are crisp sets, A ⊂ X, B ⊂ Y , then

sup{µR (x, y)|x ∈ A, y ∈ B} ≤ µR̃T (A, B). (8.30)

Furthermore, if R is a binary relation on X × Y and there exists a ∈ A and


b ∈ B with µR (a, b) = 1, then

µR̃T (A, B) = 1. (8.31)

Proof. Let x ∈ A, y ∈ B. By Definition 21 and (8.28) it follows that


µR (x, y) = µR̃T (x, y) ≤ µR̃T (A, B). Then

sup{µR (x, y)|x ∈ A, y ∈ B} ≤ µR̃T (A, B). (8.32)



If R is a binary relation on X × Y , then µR (u, v) ∈ {0, 1} for all u ∈ X, v ∈


Y . Following (8.32) and taking into account that µR (a, b) = 1, we obtain
µR̃T (A, B) = 1.
It is clear that any dual ∗R̃T to a T -fuzzy extension R̃ᶜT of a valued relation
Rᶜ is a fuzzy extension of a valued relation R′ defined as follows:

    µR′ (x, y) = 1 − µRᶜ (y, x)

for all x ∈ X, y ∈ Y . Moreover, ∗R̃T is monotone in the sense of the following
proposition.

Proposition 24 Let X, Y be nonempty sets, T be a t-norm. Let Rᶜ be a valued
relation on X × Y . If ∗R̃T is the dual fuzzy relation to the T -fuzzy extension
R̃ᶜT of Rᶜ , and A′ , A″ ∈ F(X), B′ , B″ ∈ F(Y ) with

    A′ ⊂ A″ ,  B′ ⊂ B″ ,

then
    µ∗R̃T (A′ , B′ ) ≥ µ∗R̃T (A″ , B″ ).                                 (8.33)

Proof. By (8.27) and (8.26) we obtain

    µ∗R̃T (A′ , B′ ) = 1 − µR̃ᶜT (B′ , A′ )
                    = 1 − sup{T (µRᶜ (v, u), T (µB′ (u), µA′ (v))) | u ∈ X, v ∈ Y }.

Observe that for all u ∈ X, v ∈ Y , µA′ (u) ≤ µA″ (u) and µB′ (v) ≤ µB″ (v).
Clearly, (8.33) follows by monotonicity of the t-norm T .

Example 25 Let X, Y be nonempty sets, f : X → Y be a function, let the
corresponding relation Rf be defined by (8.21) and (8.22). Let T be a t-norm,
let A and B be fuzzy subsets of X and Y given by the membership functions
µA : X → [0, 1], µB : Y → [0, 1], respectively. Let y ∈ Y and let B be defined
for all z ∈ Y as follows: µB (z) = χy (z). Then by (8.27) we get the T -fuzzy
extension R̃fT of relation Rf as

    µR̃fT (A, B) = max{0, sup{T (µRf (x, z), T (µA (x), χy (z))) | x ∈ X, z ∈ Y }}.
                                                                        (8.34)

The value µR̃fT (A, B) expresses the degree in which y ∈ Y is considered as the
image of A ∈ F(X) through the function f .

The following proposition says that extension principle (8.17) is a special


t-norm independent fuzzy extension of relation (8.21).

Proposition 26 Let X, Y be nonempty sets, f : X → Y be a function, let


the corresponding relation Rf be defined by (8.21) and (8.22). Let T be a t-
norm and A, B be fuzzy subsets with the corresponding membership functions

µA : X → [0, 1], µB : Y → [0, 1], respectively. Let y ∈ Y and let µB be defined
for all z ∈ Y by µB (z) = χy (z). Then for the membership function of the
T -fuzzy extension R̃fT of relation Rf , it holds that

    µR̃fT (A, B) = µf˜(A) (y),

where µf˜(A) (y) is defined by (8.17).

Proof. Let x ∈ f⁻¹(y). For z = y, we get

    T (µA (x), χy (z)) = T (µA (x), 1) = µA (x)

and by (8.22), (8.23), µRf (x, y) = χf (x) (y) = 1. It follows from (8.34) that

    µR̃fT (A, B) = sup{T (1, µA (x)) | x ∈ X, f (x) = y}
                = sup{µA (x) | x ∈ X, f (x) = y} = µf˜(A) (y).

Next, if f⁻¹(y) = ∅, then µRf (x, y) = 0 for all x ∈ X, while χy (z) = 0 for all
z ∈ Y with z ≠ y. Hence every term of the supremum

    µR̃fT (A, B) = sup{T (µRf (x, z), T (µA (x), χy (z))) | x ∈ X, z ∈ Y }

in (8.34) vanishes, so µR̃fT (A, B) = 0. However, by (8.17), µf˜(A) (y) = 0 as well.


A natural question may arise whether there exists some fuzzy extension of a
valued relation which is not a T -fuzzy extension. In the following section we
shall introduce other fuzzy extensions of valued relations.

8.7 Fuzzy Extensions of Valued Relations


In the preceding section, Definition 21, we have introduced a T -fuzzy extension
R̃T of a valued relation R, where T has been a t-norm. For arbitrary fuzzy
sets A, B given by the membership functions µA : X → [0, 1], µB : Y → [0, 1],
respectively, the T -fuzzy extension R̃T of a valued relation R has been defined
by

µR̃T (A, B) = sup{T (T (µA (x) , µB (y)) , µR (x, y))|x ∈ X, y ∈ Y }. (8.35)

The T -fuzzy extension of a valued relation is the most common in applications


in the area of decision making. However, in possibility theory the other fuzzy
relations based on t-norms, t-conorms, possibility and necessity measures are
well known, see e.g. [29].
In the following definition we introduce six fuzzy extensions of the valued
relation R, including the previously defined T -fuzzy extension R̃T . Later on,
these relations will be used for comparing left and right sides of the constraints
in mathematical programming problems.

Definition 27 Let X, Y be nonempty sets, T be a t-norm, S be a t-conorm.


Let R be a valued relation on X × Y .
(i) A fuzzy relation R̃T of F(X) × F(Y ) is defined for all fuzzy sets A ∈
F(X), B ∈ F(Y ) by

µR̃T (A, B) = sup{T (T (µA (x) , µB (y)) , µR (x, y))|x ∈ X, y ∈ Y }. (8.36)

(ii) A fuzzy relation R̃S of F(X) × F(Y ) is defined for all fuzzy sets A ∈
F(X), B ∈ F(Y ) by

µR̃S (A, B) = inf {S (S(1 − µA (x) , 1 − µB (y)), µR (x, y)) |x ∈ X, y ∈ Y } .


(8.37)
(iii) A fuzzy relation R̃T,S of F(X) × F(Y ) is defined for all fuzzy sets A ∈
F(X), B ∈ F(Y ) by

µR̃T,S (A, B) = sup{inf{T (µA (x) , S(1 − µB (y) , µR (x, y)))|y ∈ Y }|x ∈ X}.
(8.38)
(iv) A fuzzy relation R̃T,S of F(X) × F(Y ) is defined for all fuzzy sets A ∈
F(X), B ∈ F(Y ) by

µR̃T,S (A, B) = inf {sup{S(T (µA (x) , µR (x, y)), 1 − µB (y))|x ∈ X}|y ∈ Y } .
(8.39)
(v) A fuzzy relation R̃S,T of F(X) × F(Y ) is defined for all fuzzy sets A ∈
F(X), B ∈ F(Y ) by

µR̃S,T (A, B) = sup {inf{T (S(1 − µA (x) , µR (x, y)), µB (y))|x ∈ X}|y ∈ Y }
(8.40)
(vi) A fuzzy relation R̃S,T of F(X) × F(Y ) is defined for all fuzzy sets A ∈
F(X), B ∈ F(Y ) by

µR̃S,T (A, B) = inf {sup{S(1 − µA (x) , T (µB (y) , µR (x, y)))|y ∈ Y }|x ∈ X} .
(8.41)
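For crisp arguments, the six extensions reduce to combinations of the quantifiers ∃ and ∀, as Proposition 30 below makes precise. The following sketch (the crisp sets A = {1, 3}, B = {0, 3} and the relation x ≤ y on X = Y = {0, 1, 2, 3} are illustrative choices, not from the text) evaluates (8.36)–(8.41) with T = min, S = max:

```python
# Sketch: the six fuzzy extensions of Definition 27 on a discrete universe.

X = range(4)
R = {(x, y): 1.0 if x <= y else 0.0 for x in X for y in X}
A = {1: 1.0, 3: 1.0}                 # crisp A = {1, 3}
B = {0: 1.0, 3: 1.0}                 # crisp B = {0, 3}
mA = lambda x: A.get(x, 0.0)
mB = lambda y: B.get(y, 0.0)

e1 = max(min(mA(x), mB(y), R[x, y]) for x in X for y in X)                 # (8.36)
e2 = min(max(1 - mA(x), 1 - mB(y), R[x, y]) for x in X for y in X)         # (8.37)
e3 = max(min(min(mA(x), max(1 - mB(y), R[x, y])) for y in X) for x in X)   # (8.38)
e4 = min(max(max(min(mA(x), R[x, y]), 1 - mB(y)) for x in X) for y in X)   # (8.39)
e5 = max(min(min(max(1 - mA(x), R[x, y]), mB(y)) for x in X) for y in X)   # (8.40)
e6 = min(max(max(1 - mA(x), min(mB(y), R[x, y])) for y in X) for x in X)   # (8.41)

print([e1, e2, e3, e4, e5, e6])   # [1.0, 0.0, 0.0, 0.0, 1.0, 1.0]
```

The readings here are: some pair satisfies a ≤ b (e1 = 1); not all pairs do, since 1 ≤ 0 fails (e2 = 0); no single a ∈ A is below every b ∈ B and no a exists below b = 0 (e3 = e4 = 0); yet b = 3 dominates all of A, and every a has some b above it (e5 = e6 = 1).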

Now, we shall study the above defined fuzzy extensions. First, we prove
that all fuzzy extensions of a valued relation defined in Definition 27 are fuzzy
extensions in the sense of Definition 19, see the first part of Proposition 22.
Then we present some monotonicity properties similar to that of the second
part of Proposition 22.

Proposition 28 Let X, Y be sets, T be a t-norm, S be a t-conorm. Let R be a


valued relation on X × Y given by the membership function µR : X × Y → [0, 1].
If
R̃ ∈ {R̃T , R̃S , R̃T,S , R̃T,S , R̃S,T , R̃S,T }, (8.42)
then R̃ is a fuzzy extension, that is, for each x ∈ X, y ∈ Y

µR̃ (x, y) = µR (x, y) . (8.43)



Proof. Let x ∈ X, y ∈ Y . By (8.36) we obtain

    µR̃T (x, y) = sup{T (T (χx (u), χy (v)), µR (u, v)) | u ∈ X, v ∈ Y }
               = T (T (1, 1), µR (x, y)) = µR (x, y).

By (8.37) we obtain

    µR̃S (x, y) = inf{S(S(1 − χx (u), 1 − χy (v)), µR (u, v)) | u ∈ X, v ∈ Y }
               = S(S(1 − χx (x), 1 − χy (y)), µR (x, y)) = S(S(0, 0), µR (x, y)) = µR (x, y).

By (8.38) we obtain

    µR̃T,S (x, y) = sup{inf{T (χx (u), S(1 − χy (v), µR (u, v))) | v ∈ Y } | u ∈ X}
                 = T (χx (x), S(1 − χy (y), µR (x, y))) = T (1, µR (x, y)) = µR (x, y).

By (8.39) we obtain

    µR̃T,S (x, y) = inf{sup{S(T (χx (u), µR (u, v)), 1 − χy (v)) | u ∈ X} | v ∈ Y }
                 = S(T (χx (x), µR (x, y)), 1 − χy (y)) = S(µR (x, y), 0) = µR (x, y).

By (8.40) we obtain

    µR̃S,T (x, y) = sup{inf{T (χy (v), S(1 − χx (u), µR (u, v))) | u ∈ X} | v ∈ Y }
                 = T (χy (y), S(1 − χx (x), µR (x, y))) = T (1, µR (x, y)) = µR (x, y).

By (8.41) we obtain

    µR̃S,T (x, y) = inf{sup{S(1 − χx (u), T (χy (v), µR (u, v))) | v ∈ Y } | u ∈ X}
                 = S(1 − χx (x), T (χy (y), µR (x, y))) = S(0, µR (x, y)) = µR (x, y).

Proposition 29 Let X, Y be sets, T be a t-norm, S be a t-conorm. Let R be a
valued relation on X × Y given by the membership function µR : X × Y → [0, 1].
Let A′ , A″ ∈ F(X), B′ , B″ ∈ F(Y ).
(i) If
    A′ ⊂ A″ ,  B′ ⊂ B″ ,                                               (8.44)
then
    µR̃T (A′ , B′ ) ≤ µR̃T (A″ , B″ )                                    (8.45)
and
    µR̃S (A′ , B′ ) ≥ µR̃S (A″ , B″ ).                                   (8.46)

(ii) If R̃ ∈ {R̃T,S , R̃T,S } and

    A′ ⊂ A″ ,  B′ ⊃ B″ ,                                               (8.47)

then
    µR̃ (A′ , B′ ) ≤ µR̃ (A″ , B″ ).                                     (8.48)

(iii) If R̃ ∈ {R̃S,T , R̃S,T } and

    A′ ⊃ A″ ,  B′ ⊂ B″ ,                                               (8.49)
then
    µR̃ (A′ , B′ ) ≤ µR̃ (A″ , B″ ).                                     (8.50)

Proof. (i) Observe that by (8.44), for all u ∈ X, v ∈ Y , µA′ (u) ≤ µA″ (u)
and µB′ (v) ≤ µB″ (v). Clearly, (8.45) follows from (8.36) by monotonicity of the
t-norm T . Similarly, for all u ∈ X, v ∈ Y ,

    1 − µA′ (u) ≥ 1 − µA″ (u) and 1 − µB′ (v) ≥ 1 − µB″ (v).

Then (8.46) follows from (8.37) by monotonicity of the t-conorm S.
(ii) Let R̃ = R̃T,S , u ∈ X, v ∈ Y . Then by (8.47) we have

    µA′ (u) ≤ µA″ (u) and 1 − µB′ (v) ≤ 1 − µB″ (v).

Inequality (8.48) follows from (8.38) by monotonicity of the t-norm T and the
t-conorm S. Analogously, we can prove the case R̃ = R̃T,S .
(iii) Let R̃ = R̃S,T , u ∈ X, v ∈ Y . Then by (8.49) we have

    1 − µA′ (u) ≤ 1 − µA″ (u) and µB′ (v) ≤ µB″ (v).

Inequality (8.50) follows from (8.40) by monotonicity of the t-norm T and the
t-conorm S. Analogously, we can prove the case R̃ = R̃S,T ; we leave it to the
reader.
Further on, we shall deal with properties of fuzzy extensions of binary rela-
tions on X × Y .

Proposition 30 Let R be a binary relation on X × Y , A and B be nonempty


crisp subsets of X and Y , respectively. Let T be a t-norm, S be a t-conorm.
Then for the membership functions of fuzzy extensions of R it holds
(i)
µR̃T (A, B) = 1
if and only if there exist a ∈ A and b ∈ B such that µR (a, b) = 1;
(ii)
µR̃S (A, B) = 1
if and only if for every a ∈ A and every b ∈ B it holds µR (a, b) = 1;
(iii)
µR̃T,S (A, B) = 1

if and only if there exists a ∈ A such that for every b ∈ B it holds µR (a, b) = 1;
(iv)
µR̃T,S (A, B) = 1
if and only if for every b ∈ B there exists a ∈ A such that µR (a, b) = 1;
(v)
µR̃S,T (A, B) = 1
if and only if there exists b ∈ B such that for every a ∈ A it holds µR (a, b) = 1;
(vi)
µR̃S,T (A, B) = 1
if and only if for every a ∈ A there exists b ∈ B such that µR (a, b) = 1.

Proof. (i) By (8.36), we obtain

    µR̃T (A, B) = sup{T (T (µA (x), µB (y)), µR (x, y)) | x ∈ X, y ∈ Y }
               = sup{T (T (χA (x), χB (y)), µR (x, y)) | x ∈ X, y ∈ Y }
               = 1 if ∃a ∈ A, ∃b ∈ B : µR (a, b) = 1, and 0 otherwise.

(ii) By (8.37), we obtain

    µR̃S (A, B) = inf{S(S(1 − µA (x), 1 − µB (y)), µR (x, y)) | x ∈ X, y ∈ Y }
               = inf{S(S(1 − χA (x), 1 − χB (y)), µR (x, y)) | x ∈ X, y ∈ Y }
               = 1 if ∀a ∈ A, ∀b ∈ B : µR (a, b) = 1, and 0 otherwise.

(iii) By (8.38), we obtain

    µR̃T,S (A, B) = sup{inf{T (µA (x), S(1 − µB (y), µR (x, y))) | y ∈ Y } | x ∈ X}
                 = sup{inf{T (χA (x), S(1 − χB (y), µR (x, y))) | y ∈ Y } | x ∈ X}
                 = 1 if ∃a ∈ A, ∀b ∈ B : µR (a, b) = 1, and 0 otherwise.

(iv) By (8.39), we obtain

    µR̃T,S (A, B) = inf{sup{S(T (χA (x), µR (x, y)), 1 − χB (y)) | x ∈ X} | y ∈ Y }
                 = 1 if ∀b ∈ B, ∃a ∈ A : µR (a, b) = 1, and 0 otherwise.

(v) By (8.40), we obtain

    µR̃S,T (A, B) = sup{inf{T (χB (y), S(1 − χA (x), µR (x, y))) | x ∈ X} | y ∈ Y }
                 = 1 if ∃b ∈ B, ∀a ∈ A : µR (a, b) = 1, and 0 otherwise.

(vi) By (8.41), we obtain

    µR̃S,T (A, B) = inf{sup{S(1 − χA (x), T (χB (y), µR (x, y))) | y ∈ Y } | x ∈ X}
                 = 1 if ∀a ∈ A, ∃b ∈ B : µR (a, b) = 1, and 0 otherwise.

In the following proposition, some relationships between the fuzzy extensions
and their dual fuzzy extensions are listed. When T = min and S = max, the
same results can also be found in [29].

Proposition 31 Let X, Y be sets, T be a t-norm, S be a t-conorm dual to T .


Let R be a valued relation on X × Y . Then for the duals of fuzzy extensions of
R it holds
(i) µ∗ R̃T (A, B) = µR̃S (B, A),
(ii) µ∗ R̃S (A, B) = µR̃T (B, A),
(iii) µ∗ R̃T,S (A, B) = µR̃S,T (B, A),
(iv) µ∗ R̃T,S (A, B) = µR̃S,T (B, A),
(v) µ∗ R̃S,T (A, B) = µR̃T,S (B, A),
(vi) µ∗ R̃S,T (A, B) = µR̃T,S (B, A),
whenever A ∈ F(X) and B ∈ F(Y ).

A number of other properties in case of T = min and S = max can be found


in [29].
Some more properties of the fuzzy extensions of valued relations for the case
X = Y = Rm shall be derived in the last section of this chapter.

8.8 Fuzzy Quantities and Fuzzy Numbers


We start our investigation with the simplest case of fuzzy subsets of the real
line: one-dimensional space of real numbers R, therefore we have X = R and
F(X) = F(R).

Definition 32 (i) A fuzzy subset A = {Aα }α∈[0,1] of R is called a fuzzy quan-


tity. The set of all fuzzy quantities will be denoted by F(R).
(ii) A fuzzy quantity A = {Aα }α∈[0,1] is called a fuzzy interval if Aα is a non-
empty and convex subset of R for all α ∈ [0, 1]. The set of all fuzzy intervals
will be denoted by FI (R).
(iii) A fuzzy interval A is called a fuzzy number if its core Core(A) is a single-
ton. The set of all fuzzy numbers will be denoted by FN (R).

Notice that the membership function µA : R → [0, 1] of a fuzzy interval
A is quasiconcave on R, that is, for all x, y ∈ R, x ≠ y, λ ∈ (0, 1), the following
inequality holds:

    µA (λx + (1 − λ)y) ≥ min{µA (x), µA (y)}.                           (8.51)



By Definition 32, any fuzzy interval is normalized, since Core(A) = [A]1 is


nonempty, that is, there exists an element x0 ∈ R with µA (x0 ) = 1. Then
Hgt(A) = 1. Moreover, the restriction of the membership function µA to
(−∞, x0 ] is non-decreasing and the restriction of µA to [x0 , +∞) is a non-
increasing function.
From the point of view of applications, there are some subclasses of the class
of fuzzy intervals FI (R), we shall investigate them in the sequel.
A closed fuzzy interval A has an upper semicontinuous membership function
µA or, equivalently, for all α ∈ (0, 1] the α-cut [A]α is a closed subinterval of
R. Such a membership function µA , and correspondingly the fuzzy interval A,
can be fully described by a quadruple (l, r, F, G), where l, r ∈ R with l ≤ r, and
F, G are non-increasing left-continuous functions mapping (0, +∞) into [0, 1),
i.e. F, G : (0, +∞) → [0, 1). Moreover, for each x ∈ R let

    µA (x) = F (l − x) if x ∈ (−∞, l),
    µA (x) = 1         if x ∈ [l, r],                                   (8.52)
    µA (x) = G(x − r)  if x ∈ (r, +∞).

We shall briefly write A = (l, r, F, G) and the set of all closed fuzzy intervals will
be denoted by FCI (R). As the ranges of F and G are included in [0, 1), we have
Core(A) = [l, r]. We can see that the functions F, G describe the left and right
"shape" of µA , respectively. Observe also that a crisp number x0 ∈ R and a crisp
interval [a, b] ⊂ R belong to FCI (R), as they may be equivalently expressed by
the characteristic functions χ{x0 } and χ[a,b] , respectively. These characteristic
functions can also be described in the form (8.52) with F (x) = G(x) = 0 for all
x ∈ (0, +∞).
Example 33 (Gaussian fuzzy number) Let A = (a, a, G, G), where a ∈ R,
γ ∈ (0, +∞) and for all x ∈ (0, +∞)

    G(x) = e^(−x²/γ).

The membership function µA of A is then given for all x ∈ R by

    µA (x) = e^(−(x−a)²/γ),

see Figure 8.1, where γ = 2, a = 3.
A class of more specific fuzzy intervals of FCI (R) is obtained if the α-cuts
are considered to be bounded intervals. Let l, r ∈ R with l ≤ r, let γ, δ ∈
[0, +∞) and let L, R be non-increasing non-constant functions mapping the
interval (0, 1] into [0, +∞), i.e. L, R : (0, 1] → [0, +∞). Moreover, assume that
L(1) = R(1) = 0, define L(0) = lim_{x→0} L(x), R(0) = lim_{x→0} R(x), and for
each x ∈ R

    µA (x) = L^(−1)((l − x)/γ)  if x ∈ (l − γ, l), γ > 0,
    µA (x) = 1                  if x ∈ [l, r],                          (8.53)
    µA (x) = R^(−1)((x − r)/δ)  if x ∈ (r, r + δ), δ > 0,
    µA (x) = 0                  otherwise,

[Figure 8.1: membership function of a Gaussian fuzzy number with a = 3, γ = 2]

where L(−1) , R(−1) are pseudo-inverse functions of L, R, respectively. We shall


write A = (l, r, γ, δ)LR , the fuzzy interval A is called an (L,R)-fuzzy interval and
the set of all (L, R)-fuzzy intervals will be denoted by FLR (R), see also [38].
The values of γ, δ are called the left and the right spread of A, respectively.
Observe that Supp(A) = [l − γ, r + δ], Core(A) = [l, r] and [A]α is a compact
interval for every α ∈ (0, 1]. It is obvious that the class of fuzzy intervals extends
the class of crisp closed intervals [a, b] ⊂ R including the case a = b, i.e. crisp
numbers.
Particularly important fuzzy intervals are so called piecewise linear fuzzy
intervals where L(x) = R(x) = 1 − x for all x ∈ [0, 1]. In this case, the subscript
LR will be omitted in the notation, we simply write A = (l, r, γ, δ). If l = r, then
A = (r, r, γ, δ) is called a triangular fuzzy number and the notation is simplified
as follows: A = (r, γ, δ).
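The α-cuts of a piecewise linear fuzzy interval have a closed form. The following sketch (function and parameter names are our own) implements (8.53) with L(x) = R(x) = 1 − x, so that L^(−1)(t) = 1 − t, together with the resulting α-cut [A]α = [l − (1 − α)γ, r + (1 − α)δ]:

```python
# Sketch: piecewise linear fuzzy interval A = (l, r, gamma, delta)
# and its alpha-cuts; assumes gamma > 0 and delta > 0.

def membership(x, l, r, gamma, delta):
    if l <= x <= r:
        return 1.0
    if l - gamma < x < l:
        return 1.0 - (l - x) / gamma     # left linear shape L^(-1)((l - x)/gamma)
    if r < x < r + delta:
        return 1.0 - (x - r) / delta     # right linear shape
    return 0.0

def alpha_cut(alpha, l, r, gamma, delta):
    # [A]_alpha = [l - (1 - alpha) * gamma, r + (1 - alpha) * delta]
    return (l - (1 - alpha) * gamma, r + (1 - alpha) * delta)

# triangular fuzzy number A = (2, 1, 3): l = r = 2, left spread 1, right spread 3
print(membership(1.5, 2, 2, 1, 3))   # 0.5
print(alpha_cut(0.5, 2, 2, 1, 3))    # (1.5, 3.5)
```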
Interesting classes of fuzzy quantities are based on the concept of basis of
generators, see [42], [41].

Definition 34 A fuzzy quantity A of R given by the membership function


µA : R → [0, 1] is called a generator in R if

(i) 0 ∈ Core(A),
(8.54)
(ii) µA is quasiconcave on R.

Notice that the generator A is a special fuzzy interval that satisfies (i).

Definition 35 A set B = {g | g is a generator in R} is called a basis of
generators in R if
(i) χ{0} ∈ B , χR ∈ B ,                                                 (8.55)
(ii) max{f, g} ∈ B and min{f, g} ∈ B whenever f, g ∈ B .

Remember that by χA we denote the characteristic function of a set A.

Definition 36 Let B be a basis of generators. A fuzzy quantity A of R given


by the membership function µA : R → [0, 1] is called a B -fuzzy interval if there
exists aA ∈ R and gA ∈ B such that for each x ∈ R

µA (x) = gA (x − aA ). (8.56)

The set of all B -fuzzy intervals will be denoted by FB (R). Any A ∈ FB (R) is
represented by a couple (aA , gA ), we write A = (aA , gA ).
An ordering relation ≤B is defined on FB (R) as follows. For A, B ∈ FB (R),
A = (aA , gA ), B = (aB , gB ), we set A ≤B B if

    (aA < aB ) or (aA = aB and gA ≤ gB ).                               (8.57)

Notice that ≤B is a partial ordering on FB (R). The proof of the following


proposition follows directly from (8.55).

Proposition 37 A couple (B, ≤), where B is a basis of generators and ≤ is a


pointwise ordering of functions, is a lattice with the maximal element χR and
minimal element χ{0} .

Example 38 The following sets of functions form bases of generators in R:
(i) BD = {χ{0} , χR } - the discrete basis;
(ii) BI = {χ[a,b] | −∞ ≤ a ≤ b ≤ +∞} - the interval basis;
(iii) BG = {µ | µ(x) = g^(−1)(|x|/d) for each x ∈ R, d > 0} ∪ {χ{0} , χR }, where
g : (0, 1] → [0, +∞) is a non-increasing non-constant function with g(1) = 0
and g(0) = lim_{x→0} g(x). Evidently, the pointwise relation ≤ between function
values is a complete ordering on BG .

Example 39 FBG (R) = {µ|there exists a ∈ R and g ∈ BG , such that µ(x) =


g(x − a) for each x ∈ R}. Evidently, the relation ≤B is a complete ordering on
FBG (R).

8.9 Fuzzy Extensions of Real Functions


Now, we shall deal with the problem of fuzzy extension of a real function f ,
where f : Rm → R, m ≥ 1, to a function f˜ : F(R) × · · · × F(R) → F(R),
applying the extension principle from Definition 11. Let Ai ∈ F(R) be fuzzy
quantities given by the membership functions µAi : R → [0, 1], i = 1, 2, ..., m.
Let T be a t-norm and let a fuzzy set A ∈ F(Rm ) be given by the membership
function µA : Rm → [0, 1], for all x = (x1 , ..., xm ) ∈ Rm as follows:

µA (x) = T (µA1 (x1 ), µA2 (x2 ), ..., µAm (xm )). (8.58)

The fuzzy set A ∈ F(Rm ) given by the membership function (8.58) is called
the fuzzy vector of non-interactive fuzzy quantities, see [35]. Applying (8.17),
we obtain for all y ∈ R:

    µf˜(A) (y) = sup{T (µA1 (x1 ), ..., µAm (xm )) | x = (x1 , ..., xm ) ∈ Rm , f (x) = y}
                 if f⁻¹(y) ≠ ∅,
    µf˜(A) (y) = 0  otherwise.                                          (8.59)
Let D = (d1 , d2 , . . . , dm ) be a nonsingular m × m matrix, where all
di ∈ Rm are column vectors, i = 1, 2, ..., m. Let a fuzzy set B ∈ F(Rm ) be given
by the membership function µB : Rm → [0, 1], for all x = (x1 , ..., xm ) ∈ Rm as
follows:
µB (x) = T (µA1 (hd1 , xi), µA2 (hd2 , xi), ..., µAm (hdm , xi)). (8.60)
The fuzzy set B ∈ F(Rm ) given by the membership function (8.60) is called
the fuzzy vector of interactive fuzzy quantities, or the oblique fuzzy vector, and
the matrix D is called the obliquity matrix, see [35].
Notice that if D is equal to the identity matrix E = (e1 , e2 , . . . , em ), ei =
(0, ..., 0, 1, 0, ..., 0), where 1 is only at the i-th position, then the corresponding
vector of interactive fuzzy quantities is a noninteractive one. Interactive fuzzy
numbers have been extensively studied e.g. in [32], [35], [69] and [70]. In this
study, we shall deal with them again in Chapter 11.
Now, we shall continue our investigation of the non-interactive fuzzy quan-
tities.
Example 40 Let m = 2, f : R2 → R, be defined for all (x1 , x2 ) ∈ R2 as
follows: f (x1 , x2 ) = x1 ∗ x2 , where ∗ is a binary operation on R, e.g. one of the
four arithmetic operations (+, −, ·, /). Let A1 , A2 ∈ F(R) be fuzzy quantities
given by membership functions µAi : R → [0, 1], i = 1, 2. Then, for a given
t-norm T , the fuzzy extension f˜ : F(R) × F(R) → F(R) defined by (8.59) as
µA1 ~ T A2 (y) = max {0, sup{T (µA1 (x1 ), µA2 (x2 ))| x1 ∗ x2 = y}} (8.61)
corresponds to the operation ~T on F(R). It is obvious that ~T is an extension
of ∗, since for any two crisp subsets A1 , A2 ∈ P(R) we obtain
A1 ~T A2 = A1 ∗ A2 , (8.62)
and, as a special case thereof, for any two crisp numbers a, b ∈ R,
a ~T b = a ∗ b.
If A1 , A2 ∈ F(R) are fuzzy quantities, then, under conditions investigated below,
(8.62) can be expressed in terms of α-cuts as follows:

    [A1 ~T A2 ]α = [A1 ]α ∗ [A2 ]α ,                                    (8.63)

where α ∈ (0, 1], or, in terms of the mapping f , (8.63) can be written as

    [f˜(A1 , A2 )]α = f ([A1 ]α , [A2 ]α ).                             (8.64)
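For triangular fuzzy numbers and T = min, equality (8.63) can be checked numerically. The sketch below (grid, discretisation and data are illustrative assumptions, not from the text) computes A1 ⊛T A2 for ∗ = + both by the sup-min formula (8.61) on a grid and by α-cut interval arithmetic:

```python
# Sketch: addition of triangular fuzzy numbers A1 = (1,1,1), A2 = (2,1,2)
# via sup-min (8.61) on a grid and via alpha-cut arithmetic (8.63).

def tri(a, g, d):
    # membership of the triangular number (a, g, d): peak a, spreads g and d
    return lambda x: max(0.0, min((x - a + g) / g, (a + d - x) / d, 1.0))

A1, A2 = tri(1, 1, 1), tri(2, 1, 2)

def supmin_sum(y, steps=4000):
    # sup over x1 of min(A1(x1), A2(y - x1)) on a grid covering both supports
    grid = [-2 + 8 * k / steps for k in range(steps + 1)]
    return max(min(A1(x1), A2(y - x1)) for x1 in grid)

def cut_sum(alpha):
    # [A1 + A2]_alpha = [A1]_alpha + [A2]_alpha, with
    # [(a, g, d)]_alpha = [a - (1 - alpha) * g, a + (1 - alpha) * d]
    lo = (1 - (1 - alpha) * 1) + (2 - (1 - alpha) * 1)
    hi = (1 + (1 - alpha) * 1) + (2 + (1 - alpha) * 2)
    return lo, hi

print(round(supmin_sum(2.0), 3))   # 0.5: left slope of the sum (3, 2, 3)
print(cut_sum(0.5))                # (2.0, 4.5)
```

Both computations describe the triangular sum (3, 2, 3): its membership at y = 2 is 0.5, and its 0.5-cut is [2, 4.5].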

Further on, we shall investigate equality (8.64) in a more general setting, as the commutation of a diagram of two operations: mapping by f or f˜, and α-cutting of A or f˜(A). Considering (8.58) and (8.59), we are interested in the following equality:

[f˜(A)]α = f ([A1 ]α , ..., [Am ]α ). (8.65)

The process of forming the left-hand side and, in parallel, the right-hand side of (8.65) may be visualized by the diagram depicted in Fig. 8.2.

Figure 8.2:

Observe that by (8.58) and by definition (8.15) we obtain

A = A1 ×T A2 ×T · · · ×T Am . (8.66)

If the equality at the top of this diagram is satisfied, we say that the diagram commutes. To begin with, we derive several results concerning some convexity properties of the individual elements in the diagram. The first result is a simple generalization of a similar result from [79] to more than two membership functions. Notice that the membership functions in question are not assumed to be normalized.

Proposition 41 Let Ai ∈ F(R) be fuzzy quantities given by the membership functions µAi : R → [0, 1], i = 1, 2, ..., m. Let T be a t-norm and let a fuzzy quantity A ∈ F(Rm ) be given by the membership function µA : Rm → [0, 1] defined by (8.58). If µAi are T -quasiconcave on R for all i = 1, 2, ..., m, then µA is T -quasiconcave on Rm .

If we assume that Ai ∈ F(R) are normalized, then T -quasiconcavity on R is equivalent to quasiconcavity of µA on R. We have also the following proposition.
90 CHAPTER 8. FUZZY SETS

Proposition 42 Let Ai ∈ FI (R) be fuzzy intervals given by the membership functions µAi : R → [0, 1], i = 1, 2, ..., m. Let G = {Gk }∞k=1 be an aggregation operator and let A ∈ F(Rm ) be given by the membership function µA : Rm → [0, 1] for all x ∈ Rm by

µA (x) = Gm (µA1 (x1 ), µA2 (x2 ), ..., µAm (xm )). (8.67)

Then µA is upper-starshaped on Rm .

Proof. Let us define for i = 1, 2, ..., m

µi (x1 , ..., xm ) = µAi (xi ). (8.68)

Then µi is normalized and quasiconcave on Rm . First, we have to show that Core(µ1 ) ∩ · · · ∩ Core(µm ) ≠ ∅. We prove even more, namely

Core(µ1 ) ∩ · · · ∩ Core(µm ) = Core(µA1 ) × · · · × Core(µAm ) ≠ ∅. (8.69)

Indeed, if x = (x1 , ..., xm ) ∈ Core(µ1 ) ∩ · · · ∩ Core(µm ), then for all i = 1, 2, ..., m, µi (x) = 1 and by (8.68) we obtain µAi (xi ) = 1. Consequently, for all i = 1, 2, ..., m, xi ∈ Core(µAi ), therefore x = (x1 , ..., xm ) ∈ Core(µA1 ) × · · · × Core(µAm ).
Conversely, let x = (x1 , ..., xm ) ∈ Core(µA1 ) × · · · × Core(µAm ). Then xi ∈ Core(µAi ) for all i = 1, 2, ..., m, and by (8.68) it follows that µi (x) = 1 for all i; thus we obtain

x = (x1 , ..., xm ) ∈ Core(µ1 ) ∩ · · · ∩ Core(µm ).

Finally, since by assumption Ai ∈ FI (R), the core Core(µAi ) is nonempty for all i = 1, 2, ..., m, hence

Core(µA1 ) × · · · × Core(µAm ) ≠ ∅.

The rest of the proposition can be proven analogously to the corresponding proposition from [79], with G being an aggregation operator. Thus, µA = Gm (µA1 , ..., µAm ) is upper-starshaped on Rm .
The next example shows that Proposition 42 cannot be strengthened, e.g. in such a way that µA would be quasiconcave on Rm .

Example 43 Let X = R2 and let µAi : R → [0, 1], i = 1, 2, be defined as follows:

µA1 (x1 ) = max{0, 1 − √|x1 |}, µA2 (x2 ) = max{0, 1 − √|x2 |}.

Let T = TP be the product t-norm. Following (8.58), define for all (x1 , x2 ) ∈ X:

µA (x1 , x2 ) = max{0, 1 − √|x1 |} · max{0, 1 − √|x2 |}. (8.70)

It is evident that µAi are normalized quasiconcave functions on R for i = 1, 2. By Proposition 42, µA defined by (8.70) is upper-starshaped. In Fig. 8.3, the contours of some α-cuts of the fuzzy set A given by (8.70) are depicted.

Figure 8.3:

This picture demonstrates that µA is not quasiconcave on X, as some of its α-cuts are not convex. This fact can be verified by looking closely at the curves µA (x1 , x2 ) = α for α ∈ (0, 1]. All α-cuts are, however, starshaped sets.
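The failure of convexity in Example 43 can be verified by a direct computation. The sketch below is our own check (the particular points are our choice): two points of the 0.25-cut whose midpoint falls outside it, while shrinking a point toward the origin never decreases the membership, as starshapedness suggests.

```python
from math import sqrt

def mu(x1, x2):
    # mu_A from (8.70): product t-norm of the two one-dimensional memberships
    f = lambda t: max(0.0, 1.0 - sqrt(abs(t)))
    return f(x1) * f(x2)

a, b = (0.0, 0.5625), (0.5625, 0.0)   # both have membership exactly 0.25
mid = (0.28125, 0.28125)              # their midpoint

print(mu(*a), mu(*b))      # -> 0.25 0.25 (both lie in the 0.25-cut)
print(mu(*mid) < 0.25)     # -> True (the 0.25-cut is not convex)
print(mu(0.5 * mid[0], 0.5 * mid[1]) >= mu(*mid))  # -> True (moving toward 0 raises mu)
```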

The next two results concern the α-cuts of the fuzzy quantities.

Proposition 44 Let Ai ∈ F(R) be fuzzy quantities given by the membership functions µAi : R → [0, 1], i = 1, 2, ..., m. Let T be a t-norm and let a fuzzy quantity A ∈ F(Rm ) be given by the membership function µA : Rm → [0, 1] in (8.58).

(i) If α ∈ (0, 1], then
[A]α ⊂ [A1 ]α × [A2 ]α × · · · × [Am ]α . (8.71)

(ii) The equation
[A]α = [A1 ]α × [A2 ]α × · · · × [Am ]α (8.72)
holds for all α ∈ (0, 1] if and only if T = TM .

Proof. (i) Let x = (x1 , ..., xm ) ∈ [A]α , i.e. µA (x) = T (µA1 (x1 ), ..., µAm (xm )) ≥ α. Since min{µA1 (x1 ), ..., µAm (xm )} ≥ T (µA1 (x1 ), ..., µAm (xm )), we obtain µAi (xi ) ≥ α for all i = 1, 2, ..., m. Consequently, for all i = 1, 2, ..., m we have xi ∈ [Ai ]α and also x = (x1 , ..., xm ) ∈ [A1 ]α × [A2 ]α × · · · × [Am ]α .
(ii) Let T ≠ min. Then there exists x = (x1 , ..., xm ) ∈ Rm such that

min{µA1 (x1 ), ..., µAm (xm )} > T (µA1 (x1 ), ..., µAm (xm )).

Putting β = min{µA1 (x1 ), ..., µAm (xm )}, we have β > 0 and xi ∈ [Ai ]β for all i = 1, 2, ..., m, i.e. x = (x1 , ..., xm ) ∈ [A1 ]β × [A2 ]β × · · · × [Am ]β . However, µA (x) = T (µA1 (x1 ), ..., µAm (xm )) < β and therefore x = (x1 , ..., xm ) ∉ [A]β , a contradiction with (8.72). Thus, T = min.
On the other hand, if T = min, then

µA = min{µA1 , ..., µAm }. (8.73)

Let α ∈ (0, 1] and x = (x1 , ..., xm ) ∈ Rm be arbitrary with x ∈ [A1 ]α × [A2 ]α × · · · × [Am ]α . Then µAi (xi ) ≥ α for all i = 1, 2, ..., m and it follows that min{µA1 (x1 ), ..., µAm (xm )} ≥ α. Hence, by (8.73) we have x ∈ [A]α . We have just proven the inclusion [A]α ⊃ [A1 ]α × [A2 ]α × · · · × [Am ]α ; the opposite inclusion (8.71) is true by (i). Consequently, we have the required result (8.72).
Now, we shall deal with f˜ being a fuzzy extension of the mapping f by using the extension principle (8.59). Some sufficient conditions under which f˜(A) is quasiconcave on R will be given in the next section as a consequence of a more general result. The problem of the commuting of the diagram in Fig. 8.2 will also be resolved there.

8.10 Higher Dimensional Fuzzy Quantities


In the previous section we assumed that the fuzzy subset A ∈ F(Rm ) was given in the special form (8.58), or possibly (8.60). In this section, we shall investigate fuzzy subsets of the m-dimensional real vector space Rm , where m is a positive integer. The set of all fuzzy subsets of Rm , denoted by F(Rm ), is called the set of all m-dimensional fuzzy quantities; sometimes the attribute m-dimensional is omitted. We shall investigate the problem of extending a real function f : Rm → R, m ≥ 1, to a function f˜ : F(Rm ) → F(R). The commuting of the operations of mapping and α-cutting is depicted in the diagram in Fig. 8.4.
The following definition will be useful.

Definition 45 A fuzzy subset A = {Aα }α∈[0,1] of Rm is called closed, bounded, compact or convex if Aα is a closed, bounded, compact or convex subset of Rm for every α ∈ (0, 1], respectively.

If a fuzzy subset A of Rm given by the membership function µA : Rm → [0, 1] is closed, bounded, compact or convex, then [A]α is a closed, bounded, compact or convex subset of Rm for every α ∈ (0, 1], respectively. Notice that A is convex if and only if its membership function µA is quasiconcave on Rm .
In what follows we shall use an important condition requiring that a special class of optimization problems always possesses an optimal solution. Some sufficient conditions securing this requirement will be investigated later.

Figure 8.4:

Definition 46 Condition (C):
Let f : Rm → R and µ : Rm → [0, 1]. We say that condition (C) is satisfied for f and µ if for every y ∈ Ran(f ) there exists xy ∈ Rm such that f (xy ) = y and

µ(xy ) = sup{µ(x)|x ∈ Rm , f (x) = y}. (8.74)
Theorem 47 Let A ∈ F(Rm ) be a fuzzy quantity, let µA be upper-quasiconnected on Rm , let f : Rm → R be continuous on Rm and let condition (C) be satisfied for f and µA . Then the membership function of f˜(A) is quasiconcave on R.
Proof. Let α ∈ (0, 1]; we show that [f˜(A)]α is convex. Let yi ∈ [f˜(A)]α , i = 1, 2, with y1 < y2 , and let λ ∈ (0, 1). Putting y0 = λy1 + (1 − λ)y2 , we have y1 < y0 < y2 . (If y1 = y2 , then there is nothing to prove.)
By condition (C) there exist xi ∈ Rm , i = 1, 2, with f (xi ) = yi such that by (8.74) and (8.19) we get µA (xi ) = µf˜(A) (yi ) ≥ α, therefore xi ∈ [A]α . Since µA is upper-quasiconnected on Rm , [A]α is path-connected, therefore there exists a path P connecting x1 and x2 and belonging to [A]α , i.e.

P ⊂ [A]α . (8.75)

Since P is connected and f is continuous on P with f (xi ) = yi and y1 < y0 < y2 , f (P ) is also connected, y1 , y0 , y2 ∈ f (P ), and it follows that there exists x0 ∈ P such that f (x0 ) = y0 . By (8.75) we have x0 ∈ [A]α , i.e. µA (x0 ) ≥ α, which implies µf˜(A) (y0 ) = sup{µA (x)|x ∈ Rm , f (x) = y0 } ≥ µA (x0 ) ≥ α. Consequently, y0 ∈ [f˜(A)]α , thus [f˜(A)]α is convex.
Now, we return to the question of sufficient conditions under which condition (C) is satisfied.

Proposition 48 Let A ∈ F(Rm ) be a compact fuzzy quantity and let f : Rm → R be a continuous function. Then condition (C) is satisfied for f and µA .

Proof. Let y ∈ Ran(f ) and denote Xy = {x ∈ Rm |f (x) = y}. Then Xy is nonempty and closed. Put

α = sup{µA (x)|x ∈ Xy }. (8.76)

Without loss of generality we assume that α > 0. Take a number β with 0 < β < α; then α − β/k > 0 for all k = 1, 2, ..., and denote

Uk = {x ∈ Rm |µA (x) ≥ α − β/k}, k = 1, 2, .... (8.77)

By the compactness of [A]δ for all δ ∈ (0, 1] we know that all Uk are compact and Uk+1 ⊂ Uk for all k = 1, 2, .... Putting Vk = Uk ∩ Xy we obtain by (8.76) and (8.77) that Vk is nonempty, compact and Vk+1 ⊂ Vk for all k = 1, 2, .... From the well-known property of compact spaces it follows that ∩∞k=1 Vk is nonempty. Hence, for any xy ∈ ∩∞k=1 Vk it holds: f (xy ) = y and µA (xy ) ≥ α.
Clearly, the fuzzy set A is compact, if the α-cuts [A]α are compact for all
α ∈ (0, 1], or the α-cuts [A]α are bounded for all α ∈ (0, 1] and the membership
function µA is upper semicontinuous on Rm .
Returning to the problem formulated at the end of the last section, namely the existence of sufficient conditions under which the membership function of f˜(A) is quasiconcave on R, with µA defined by (8.58), we have the following result.

Theorem 49 Let Ai ∈ FI (R) be compact fuzzy intervals. Let T be a continuous t-norm and let a fuzzy quantity A ∈ F(Rm ) be given by the membership function µA : Rm → [0, 1] as

µA (x) = T (µA1 (x1 ), µA2 (x2 ), ..., µAm (xm )),

for all x = (x1 , ..., xm ) ∈ Rm . Moreover, let f : Rm → R be continuous on Rm . Then the membership function of f˜(A) given by (8.59) is quasiconcave on R.

Proof. It is sufficient to show that µA (x) = T (µA1 (x1 ), µA2 (x2 ), ..., µAm (xm )) is upper-quasiconnected and that [A]α is compact for all α ∈ (0, 1]. Having this, the result follows from Proposition 48 and Theorem 47.
By Proposition 42, µA is upper-starshaped on Rm , hence µA is upper-quasiconnected.
Since the µAi are upper semicontinuous (the Ai being compact) and T is continuous, [A]α is closed for all α ∈ (0, 1].
It is also supposed that [Ai ]α are compact for all α ∈ (0, 1], i = 1, 2, ..., m; therefore the same holds for the Cartesian product [A1 ]α × [A2 ]α × · · · × [Am ]α . Applying (8.71), we obtain that all [A]α are bounded, hence compact.

Finally, all assumptions of Proposition 48 are satisfied, thus condition (C) is satisfied, and by applying Theorem 47 we obtain the required result.
Now, we resolve the former problem of commuting of the diagrams in Fig.
8.2 and Fig. 8.4.

Proposition 50 Let A ∈ F(Rm ) be a fuzzy quantity, let µA be upper-quasiconnected on Rm , and let f : Rm → R be continuous on Rm . Then f ([A]α ) is convex for all α ∈ [0, 1].

Proof. Let α ∈ [0, 1]. As [A]α is path-connected and f is continuous, f ([A]α ) is a connected subset of R, thus it is a convex subset of R.

Proposition 51 Let A ∈ F(Rm ) be a fuzzy set and let f : Rm → R. Then

f ([A]α ) ⊂ [f˜(A)]α , (8.78)

for all α ∈ (0, 1].

Proof. Let α ∈ (0, 1] and y ∈ f ([A]α ). Then there exists xy ∈ [A]α such that f (xy ) = y. By (8.59) we obtain µf˜(A) (y) = sup{µA (x)|x ∈ Rm , f (x) = y} ≥ µA (xy ) ≥ α. Hence, y ∈ [f˜(A)]α .
The following theorem gives a necessary and sufficient condition for the
diagram in Fig. 8.4 to commute.

Theorem 52 Let A ∈ F(Rm ) be a fuzzy quantity. Condition (C) is satisfied if and only if

[f˜(A)]α = f ([A]α ), (8.79)

for all α ∈ (0, 1].


Proof. 1. Let condition (C) be satisfied. We have to prove only [f˜(A)]α ⊂ f ([A]α ); the opposite inclusion holds by Proposition 51.
Let α ∈ (0, 1] and y ∈ [f˜(A)]α . Then µf˜(A) (y) ≥ α and by condition (C) we have

µf˜(A) (y) = sup{µA (x)|x ∈ Rm , f (x) = y} = µA (xy )

for some xy ∈ Rm with f (xy ) = y. Combining these results we obtain xy ∈ [A]α , consequently f (xy ) = y ∈ f ([A]α ).
2. On the contrary, suppose that condition (C) does not hold. Then there exists y0 such that for all z ∈ Rm with f (z) = y0 we have

sup{µA (x)|x ∈ Rm , f (x) = y0 } > µA (z). (8.80)

Put β = sup{µA (x)|x ∈ Rm , f (x) = y0 }. Then µf˜(A) (y0 ) = β, i.e. y0 ∈ [f˜(A)]β . Suppose that (8.79) holds for α = β; then it follows that there exists x0 ∈ [A]β , i.e. µA (x0 ) ≥ β, with f (x0 ) = y0 . However, this is a contradiction with (8.80).
Theorem 52 is a reformulation of the well-known result of Nguyen, see [53]. As a consequence of Theorems 52 and 49 and Proposition 48, we resolve the problem of commuting of the diagram in Fig. 8.2.
Theorem 53 Let Ai ∈ FI (R) be compact fuzzy intervals, i = 1, 2, ..., m. Let f : Rm → R be a continuous function, let T be a continuous t-norm and let a fuzzy quantity A ∈ F(Rm ) be given by the membership function µA : Rm → [0, 1] by (8.58). Then

[f˜(A)]α = f ([A1 ]α , ..., [Am ]α ), (8.81)

for all α ∈ (0, 1].
Proof. It is sufficient to show that µA (x) = T (µA1 (x1 ), µA2 (x2 ), ..., µAm (xm )) is upper starshaped on Rm with [A]α compact for all α ∈ (0, 1]. Then the rest of the proof follows from Proposition 48 and Theorem 52.
Indeed, by Proposition 42, µA is upper starshaped on Rm .
Since µAi are upper semicontinuous, i = 1, 2, ..., m, and T is continuous, it follows that µA = T (µA1 , ..., µAm ) is upper semicontinuous on Rm . Hence, [A]α is closed for all α ∈ (0, 1].
It is supposed that [Ai ]α is compact for all α ∈ (0, 1], i = 1, 2, ..., m; the same holds for the Cartesian product [A1 ]α × [A2 ]α × · · · × [Am ]α , and applying (8.71), we obtain that [A]α is bounded, thus compact.
Now, all assumptions of Proposition 48 are satisfied, thus condition (C) holds and by Theorem 52 we obtain the required result.
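For T = min, Theorem 53 reduces the α-cuts of f˜(A) to interval arithmetic on the α-cuts of the arguments. The following sketch is our own numerical cross-check (the discretization, names and the choice f (x1 , x2 ) = x1 + x2 with triangular fuzzy intervals are ours): the endpoint computation agrees with a brute-force supremum.

```python
def tri(center):
    # triangular fuzzy interval "about center" with spread 1
    return lambda x: max(0.0, 1.0 - abs(x - center))

muA1, muA2 = tri(1.0), tri(2.0)

def cut(mu, alpha, lo=-5.0, hi=5.0, n=2001):
    # grid alpha-cut, returned as an interval (min, max)
    pts = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    sel = [x for x in pts if mu(x) >= alpha]
    return min(sel), max(sel)

def ext_membership(y, n=2001):
    # brute-force mu_{f~(A)}(y) = sup { min(muA1(x1), muA2(x2)) : x1 + x2 = y }
    pts = [-5.0 + 10.0 * k / (n - 1) for k in range(n)]
    return max(min(muA1(x1), muA2(y - x1)) for x1 in pts)

alpha = 0.5
(l1, u1), (l2, u2) = cut(muA1, alpha), cut(muA2, alpha)
print((l1 + l2, u1 + u2))                        # -> (2.0, 4.0): f([A1]_a, [A2]_a)
print(ext_membership(2.0), ext_membership(3.0))  # -> 0.5 1.0
```

The left endpoint 2.0 of the sum's 0.5-cut indeed carries membership exactly 0.5 under the extension principle, and the peak 3.0 carries membership 1, as (8.81) predicts.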
Proposition 54 Let A ∈ F(Rm ) be a compact fuzzy quantity. If f : Rm → R
is continuous then f˜(A) is compact.
Proof. Let α ∈ (0, 1]. Since [A]α is compact, by continuity of f it follows that f ([A]α ) is compact. By Proposition 48, condition (C) is satisfied, and by Theorem 52 we obtain

[f˜(A)]α = f ([A]α ), (8.82)

for all α ∈ (0, 1].


Corollary 55 If, in Proposition 54, µA is upper-quasiconnected, then [f˜(A)]α is a compact interval for each α ∈ (0, 1].

Proof. By Proposition 54, the sets [f˜(A)]α are compact and, by Proposition 50, the sets f ([A]α ) are convex for all α ∈ (0, 1]. Using equation (8.82), [f˜(A)]α are convex and compact, i.e. compact intervals in R.
Commuting of the diagram in Fig. 8.4 is important when calculating the ex-
tensions of aggregation operators in multi-criteria decision making, see Chapter
9.

8.11 Fuzzy Extensions of Valued Relations


In Section 8.6, Definition 27, we have introduced and investigated six types of
fuzzy extensions of valued relations on X × Y . In this section we shall deal
with fuzzy extensions of valued and binary relations on Rm , where m is a
positive integer. Binary relations can be viewed as special valued relations with
values from {0, 1}. The usual component-wise equality relation = and inequality
relations ≤, ≥, < and > on Rm are simple examples of binary relations. The
results derived in this section will be useful in the fuzzy mathematical programming we shall investigate in Chapter 11.
Now, let us turn our attention to fuzzy relations on Rm , i.e. consider X =
Y = Rm . We start with three important examples.
Example 56 Let us consider the usual binary relation = (”equal”) on Rm , given by the membership function µ= defined for all x, y ∈ Rm as

µ= (x, y) = { 1 if xi = yi for all i = 1, 2, ..., m,
             0 otherwise. (8.83)

Let T be a t-norm and let A, B be fuzzy subsets of Rm with the corresponding membership functions µ : Rm → [0, 1] and ν : Rm → [0, 1], respectively. Then by (8.27), the membership function µ=̃ of the T -fuzzy extension =̃ of the relation = can be derived as

µ=̃ (A, B) = sup {T (µ= (x, y), T (µ(x), ν(y))) | x, y ∈ Rm }
          = sup {T (1, T (µ(x), ν(y))) | x, y ∈ Rm , x = y}
          = sup {T (µ(x), ν(x)) | x ∈ Rm } = Hgt (A ∩T B).
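For the relation =̃, the double supremum thus collapses to a single one over the diagonal. A small numerical sketch of our own (product t-norm, grid discretization; names and sample memberships are illustrative):

```python
muA = lambda x: max(0.0, 1.0 - abs(x))         # "about 0"
muB = lambda x: max(0.0, 1.0 - abs(x - 1.0))   # "about 1"
T = lambda a, b: a * b                         # product t-norm

pts = [-2.0 + 4.0 * k / 400 for k in range(401)]
# supremum over the crisp relation x = y ...
two_var = max(T(muA(x), muB(y)) for x in pts for y in pts if x == y)
# ... equals the height of the T-intersection A ∩_T B:
height = max(T(muA(x), muB(x)) for x in pts)
print(two_var == height, height)   # -> True 0.25
```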
Example 57 Let us consider the usual binary relation ≥ (”greater or equal”) on Rm . The corresponding membership function is defined as

µ≥ (x, y) = { 1 if xi ≥ yi for all i = 1, 2, ..., m,
             0 otherwise. (8.84)

Let T be a t-norm and let A, B be fuzzy subsets of Rm with the corresponding membership functions µ : Rm → [0, 1] and ν : Rm → [0, 1], respectively. Then by (8.27), the membership function µ≥̃ of the T -fuzzy extension ≥̃ of the relation ≥ can be derived as follows:

µ≥̃ (A, B) = sup {T (µ≥ (x, y), T (µ(x), ν(y))) | x, y ∈ Rm }
          = sup {T (1, T (µ(x), ν(y))) | x, y ∈ Rm , x ≥ y}
          = sup {T (µ(x), ν(y)) | x, y ∈ Rm , x ≥ y} .
Example 58 Let d > 0 and let ϕd : R → [0, 1] be the function defined as follows:

ϕd (t) = { 1 if t ≥ 0,
           (d + t)/d if −d ≤ t < 0, (8.85)
           0 otherwise.

Then the valued relation Rd defined by the membership function µRd for all x, y ∈ R as

µRd (x, y) = ϕd (x − y) (8.86)

is a ”generalized” inequality relation ≥ on R. By (8.27), the membership function µR̃d of the T -fuzzy extension R̃d of the relation Rd is as follows:

µR̃d (A, B) = sup {T (ϕd (x − y), T (µ(x), ν(y))) | x, y ∈ R} .
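The degree µR̃d (A, B) can be approximated by a direct search over a grid. The snippet below is our own illustration (T = min, triangular A ≈ 1 and B ≈ 2, with our discretization): it shows how the tolerance d relaxes the comparison "A is at least B".

```python
def phi(d, t):
    # the membership (8.85) of the "generalized >=" relation R_d
    if t >= 0.0:
        return 1.0
    if t >= -d:
        return (d + t) / d
    return 0.0

muA = lambda x: max(0.0, 1.0 - abs(x - 1.0))   # "about 1"
muB = lambda x: max(0.0, 1.0 - abs(x - 2.0))   # "about 2"

def mu_ext(d, n=401):
    # mu_{R~d}(A, B) = sup_{x,y} min(phi_d(x - y), muA(x), muB(y)), on a grid
    pts = [-3.0 + 8.0 * k / (n - 1) for k in range(n)]
    return max(min(phi(d, x - y), muA(x), muB(y)) for x in pts for y in pts)

print(round(mu_ext(0.001), 2))  # nearly crisp >=: degree about 0.5
print(round(mu_ext(1.0), 2))    # d = 1 relaxes the relation: about 2/3
```

Increasing d can only increase ϕd pointwise, so the resulting degree is nondecreasing in d.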

Further on, we shall deal with properties of fuzzy extensions of binary relations on Rm . We start with an investigation of m-dimensional intervals. Recall that the notation A R B means µR (A, B) = 1. We consider the usual componentwise binary relations in Rm , namely ”less or equal” and ”equal”, i.e. R ∈ {≤, =}.

Theorem 59 Let A, B be nonempty closed intervals of Rm , A = {a ∈ Rm | a̲ ≤ a ≤ ā}, B = {b ∈ Rm | b̲ ≤ b ≤ b̄}. Let T be a t-norm and S a t-conorm. Let ≤ and = be the usual binary relations in Rm and let ≤̃ and =̃ be the respective fuzzy extensions of the relations ≤ and =, where

≤̃ ∈ {≤̃_T , ≤̃_S , ≤̃^{T,S} , ≤̃_{T,S} , ≤̃^{S,T} , ≤̃_{S,T} },
=̃ ∈ {=̃_T , =̃_S , =̃^{T,S} , =̃_{T,S} , =̃^{S,T} , =̃_{S,T} }.

Then
(i)
(1) A ≤̃_T B if and only if a̲ ≤ b̄,
(2) A =̃_T B if and only if a̲ ≤ b̄ and ā ≥ b̲;
(ii)
(1) A ≤̃_S B if and only if ā ≤ b̲,
(2) A =̃_S B if and only if a̲ = ā = b̲ = b̄;
(iii)
(1) A ≤̃^{T,S} B if and only if a̲ ≤ b̲,
(2) A =̃^{T,S} B if and only if a̲ ≤ b̲ = b̄ ≤ ā;
(iv)
(1) A ≤̃_{T,S} B if and only if a̲ ≤ b̲,
(2) A =̃_{T,S} B if and only if a̲ ≤ b̲ ≤ b̄ ≤ ā;
(v)
(1) A ≤̃^{S,T} B if and only if ā ≤ b̄,
(2) A =̃^{S,T} B if and only if b̲ ≤ a̲ = ā ≤ b̄;
(vi)
(1) A ≤̃_{S,T} B if and only if ā ≤ b̄,
(2) A =̃_{S,T} B if and only if b̲ ≤ a̲ ≤ ā ≤ b̄.

Proof. (i) 1. Let A ≤̃_T B. Then by (i) in Proposition 30 there exist a ∈ A and b ∈ B such that a ≤ b, thus a̲ ≤ a ≤ b ≤ b̄.
Conversely, let a̲ ≤ b̄. Since a̲ ∈ A and b̄ ∈ B, by Proposition 30 (i) we immediately obtain A ≤̃_T B.
2. Observe that a̲ ≤ b̄ and b̲ ≤ ā is equivalent to A ∩ B being nonempty, i.e. there is c such that c ∈ A ∩ B. However, by Proposition 30 (i), this is equivalent to A =̃_T B.
(ii) 1. Let A ≤̃_S B. Then by (ii) in Proposition 30, for every a ∈ A and every b ∈ B we have a ≤ b, thus ā ≤ b̲.
Conversely, let a̲ ≤ ā ≤ b̲ ≤ b̄. Then by Proposition 30 (ii) we easily obtain A ≤̃_S B.
2. Let A =̃_S B. Then by (ii) in Proposition 30, for every a ∈ A and every b ∈ B we have a = b, thus a̲ = ā = b̲ = b̄.
Conversely, let a̲ = ā = b̲ = b̄. Then by Proposition 30 (ii) we obtain A =̃_S B.
(iii) 1. Let A ≤̃^{T,S} B. Then by (iii) in Proposition 30, there exists a ∈ A such that for every b ∈ B we have a ≤ b, thus a̲ ≤ a ≤ b̲.
Conversely, let a̲ ≤ b̲. Then, by Proposition 30 (iii), we take a = a̲ and, since b̲ ≤ b for every b ∈ B, we easily obtain A ≤̃^{T,S} B.
2. Let A =̃^{T,S} B. Then by (iii) in Proposition 30, there exists a ∈ A such that for every b ∈ B we have a = b, thus a̲ ≤ b̲ = b̄ ≤ ā.
Conversely, let a̲ ≤ b̲ = b̄ ≤ ā. Then by Proposition 30 (iii) we take a = b̲ and immediately obtain A =̃^{T,S} B.
(iv) 1. Let A ≤̃_{T,S} B. Then by (iv) in Proposition 30, for every b ∈ B there exists a ∈ A such that a ≤ b, thus a̲ ≤ b̲.
Conversely, let a̲ ≤ b̲. Then, by Proposition 30 (iv), for each b ∈ B we take a = a̲ and, since b̲ ≤ b, we obtain A ≤̃_{T,S} B.
2. Let A =̃_{T,S} B. Then by (iv) in Proposition 30, for every b ∈ B there exists a ∈ A such that a = b, hence a̲ ≤ b̲ ≤ b̄ ≤ ā.
Conversely, let a̲ ≤ b̲ ≤ b̄ ≤ ā. Then by Proposition 30 (iv), for each b ∈ B we take a = b and obtain A =̃_{T,S} B.
(v) 1. Let A ≤̃^{S,T} B. Then by (v) in Proposition 30, there exists b ∈ B such that for every a ∈ A we have a ≤ b, thus ā ≤ b ≤ b̄.
Conversely, let ā ≤ b̄. Then by Proposition 30 (v) we take b = b̄ and, since a ≤ ā for every a ∈ A, we easily obtain A ≤̃^{S,T} B.
2. Let A =̃^{S,T} B. Then by (v) in Proposition 30, there exists b ∈ B such that for every a ∈ A we have a = b, thus b̲ ≤ a̲ = ā ≤ b̄.
Conversely, let b̲ ≤ a̲ = ā ≤ b̄. Then by Proposition 30 (v) we take b = a̲ and obtain A =̃^{S,T} B.
(vi) 1. Let A ≤̃_{S,T} B. Then by (vi) in Proposition 30, for every a ∈ A there exists b ∈ B such that a ≤ b, thus ā ≤ b̄.
Conversely, let ā ≤ b̄. Then by Proposition 30 (vi), for each a ∈ A we take b = b̄ and, since a ≤ ā, we obtain A ≤̃_{S,T} B.
2. Let A =̃_{S,T} B. Then by (vi) in Proposition 30, for every a ∈ A there exists b ∈ B such that a = b, thus b̲ ≤ a̲ ≤ ā ≤ b̄.
Conversely, let b̲ ≤ a̲ ≤ ā ≤ b̄. Then by Proposition 30 (vi), for each a ∈ A we take b = a and finally obtain A =̃_{S,T} B.
If A, B are nonempty closed intervals of Rm , then by comparing (iii) and (iv) in Theorem 59, we can see that

A ≤̃^{T,S} B if and only if A ≤̃_{T,S} B. (8.87)

Likewise,

A ≤̃^{S,T} B if and only if A ≤̃_{S,T} B, (8.88)

as is clear from (v) and (vi) of the same theorem.
The following proposition shows that (8.87) and (8.88) can be presented in a stronger form.

Proposition 60 Let A, B ∈ F(R) be compact fuzzy sets, T = min, S = max. Then

µ≤̃^{T,S} (A, B) = µ≤̃_{T,S} (A, B),   µ≤̃^{S,T} (A, B) = µ≤̃_{S,T} (A, B). (8.89)

The proof of Proposition 60 is given in [88] in a more generalized setting.
The following two propositions hold for a binary relation R on X × Y , where X, Y are arbitrary sets, and for the T -fuzzy extension R̃T of R.

Proposition 61 Let X, Y be sets and let A ∈ F(X), B ∈ F(Y ) be fuzzy sets given by the membership functions µA : X → [0, 1], µB : Y → [0, 1], respectively. Let T be a t-norm, let R be a binary relation on X × Y , let R̃T be the T -fuzzy extension of R, and let α ∈ (0, 1).
(i) If µR̃T (A, B) > α, then µR̃T ([A]α , [B]α ) = 1.
(ii) Let T = min. If µR̃T ([A]α , [B]α ) = 1, then µR̃T (A, B) ≥ α.

Proof. (i) Let α ∈ (0, 1) and µR̃T (A, B) > α. Then by (8.27) we obtain

µR̃T (A, B) = sup{T (µA (u), µB (v)) | u ∈ X, v ∈ Y, uRv}. (8.90)

Since sup{T (µA (u), µB (v)) | u ∈ X, v ∈ Y, uRv} > α, there exist u′ ∈ X, v ′ ∈ Y such that u′ Rv ′ and T (µA (u′ ), µB (v ′ )) ≥ α. The minimum is the maximal t-norm, therefore

min{µA (u′ ), µB (v ′ )} ≥ T (µA (u′ ), µB (v ′ )) ≥ α,

hence
µA (u′ ) ≥ α and µB (v ′ ) ≥ α, (8.91)

in other words, u′ ∈ [A]α , v ′ ∈ [B]α and u′ Rv ′ . By Proposition 30 (i), we obtain

µR̃T ([A]α , [B]α ) = 1. (8.92)

(ii) Let T = min and µR̃T ([A]α , [B]α ) = 1. By Proposition 30 there exist u′′ ∈ [A]α , v ′′ ∈ [B]α such that u′′ Rv ′′ . Then µA (u′′ ) ≥ α and µB (v ′′ ) ≥ α, therefore min{µA (u′′ ), µB (v ′′ )} ≥ α. Consequently,

µR̃T (A, B) = sup{min{µA (u), µB (v)} | u ∈ X, v ∈ Y, uRv} ≥ α.

If we replace the strict inequality > in (i) by ≥, then the conclusion of (i) is
clearly no longer true. To prove the result with ≥ instead of >, we shall make
an assumption similar to condition (C) from the preceding section.
Proposition 62 Let X, Y be sets, let A ∈ F(X), B ∈ F(Y ) be fuzzy sets given
by the membership functions µA : X → [0, 1], µB : Y → [0, 1], respectively. Let
T be a t-norm and R be a binary relation on X × Y , R̃T be a T -fuzzy extension
of R, let α ∈ (0, 1].
Suppose that there exist u∗ ∈ X, v ∗ ∈ Y such that u∗ Rv ∗ and
T (µA (u∗ ), µB (v ∗ )) = sup{T (µA (u), µB (v)) |u ∈ X, v ∈ Y, uRv}. (8.93)

If µR̃T (A, B) ≥ α, then µR̃T ([A]α , [B]α ) = 1.

Proof. Let α ∈ (0, 1] and µR̃T (A, B) ≥ α. Then by (8.27) we obtain

µR̃T (A, B) = sup{T (µA (u), µB (v)) | u ∈ X, v ∈ Y, uRv}. (8.94)

Applying (8.93) and (8.94), we obtain T (µA (u∗ ), µB (v ∗ )) ≥ α. Since min is the maximal t-norm, we have

min{µA (u∗ ), µB (v ∗ )} ≥ T (µA (u∗ ), µB (v ∗ )) ≥ α,

hence
µA (u∗ ) ≥ α and µB (v ∗ ) ≥ α,

in other words, u∗ Rv ∗ and u∗ ∈ [A]α , v ∗ ∈ [B]α . By Proposition 30 we obtain

µR̃T ([A]α , [B]α ) = 1.

The following proposition gives some sufficient conditions for (8.93).

Proposition 63 Let A, B be compact fuzzy quantities with the membership functions µA : Rm → [0, 1], µB : Rm → [0, 1]. Let T be a continuous t-norm and let R be a closed binary relation on Rm ; moreover, let R̃T be the T -fuzzy extension of R. Then there exist u∗ , v ∗ ∈ Rm such that u∗ Rv ∗ and

T (µA (u∗ ), µB (v ∗ )) = sup{T (µA (u), µB (v)) | u, v ∈ Rm , uRv}. (8.95)
102 CHAPTER 8. FUZZY SETS

Proof. We show that ϕ(u, v) = T (µA (u), µB (v)) attains its maximum on the set Z = {(u, v) ∈ R2m | uRv}. Since R is a closed binary relation, Z is a closed set. Further, since for all β ∈ (0, 1] the upper-level sets U (ϕ, β) are compact, it follows that either U (ϕ, β) ∩ Z is empty for all β ∈ (0, 1], or there exists β ′ ∈ (0, 1] such that U (ϕ, β ′ ) ∩ Z is nonempty.
In the former case, (8.93) holds for any (u∗ , v ∗ ) ∈ Z with

T (µA (u∗ ), µB (v ∗ )) = 0.

In the latter case, there exists (u∗ , v ∗ ) ∈ R2m such that ϕ attains its maximum on U (ϕ, β ′ ) ∩ Z at (u∗ , v ∗ ), which is then a global maximizer of ϕ on Z.

Corollary 64 Let A, B ∈ F(Rm ) be compact fuzzy quantities with the membership functions µA : Rm → [0, 1], µB : Rm → [0, 1]. Let T = min and let R be a closed binary relation on Rm , R̃T the T -fuzzy extension of R. For α ∈ (0, 1],

µR̃T (A, B) ≥ α if and only if µR̃T ([A]α , [B]α ) = 1.

Proof. By Proposition 63, condition (8.93) is satisfied. Then by Proposition 62, we obtain the ”only if” part of the statement. The ”if” part follows from Proposition 61.
Notice that the usual binary relations ”=”, ”≤” and ”≥” are closed binary relations.
Propositions 61, 62 and 63 hold for the T -fuzzy extension R̃T of the valued relation R. Similar results can be derived also for the other fuzzy extensions, particularly R̃S , R̃^{T,S} , R̃_{T,S} , R̃^{S,T} and R̃_{S,T} . Here, we present an important result for the particular case m = 1, i.e. Rm = R. The following theorem is a parallel to Theorem 59.

Theorem 65 Let A, B ∈ F(R) be strictly convex and compact fuzzy sets, T = min, S = max, α ∈ (0, 1). Then
(i) µ≤̃_T (A, B) ≥ α if and only if inf[A]α ≤ sup[B]α ,
(ii) µ≤̃_S (A, B) ≥ α if and only if sup[A]1−α ≤ inf[B]1−α ,
(iii) µ≤̃^{T,S} (A, B) ≥ α iff µ≤̃_{T,S} (A, B) ≥ α iff sup[A]1−α ≤ sup[B]α ,
(iv) µ≤̃^{S,T} (A, B) ≥ α iff µ≤̃_{S,T} (A, B) ≥ α iff inf[A]α ≤ inf[B]1−α .

From the practical point of view, the last theorem is important for calculating the membership functions of both fuzzy feasible solutions and fuzzy optimal solutions of the fuzzy mathematical programming problem in Chapter 10. Some results related to this problem can also be found in [21].
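Criterion (i) of Theorem 65 can be checked numerically. The sketch below is our own illustration (triangular fuzzy numbers, T = min, grid discretization; all names are ours): it compares the degree µ≤̃T (A, B) with the α-cut test inf[A]α ≤ sup[B]α on one pair where the relation holds and one where it fails.

```python
muA = lambda x: max(0.0, 1.0 - abs(x - 3.0))   # "about 3"
muB = lambda x: max(0.0, 1.0 - abs(x - 4.0))   # "about 4"
muC = lambda x: max(0.0, 1.0 - abs(x - 1.0))   # "about 1"

def cut(mu, alpha, lo=-1.0, hi=8.0, n=1801):
    # grid alpha-cut, returned as an interval (inf, sup)
    pts = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    sel = [x for x in pts if mu(x) >= alpha]
    return min(sel), max(sel)

def mu_le(mu1, mu2, lo=-1.0, hi=8.0, n=901):
    # mu_{<=~_T}(A, B) = sup { min(mu1(x), mu2(y)) : x <= y }, with T = min
    pts = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    return max(min(mu1(x), mu2(y)) for x in pts for y in pts if x <= y)

alpha = 0.5
# criterion (i) holds for "about 3" vs "about 4", and so does the degree test:
print(cut(muA, alpha)[0] <= cut(muB, alpha)[1], mu_le(muA, muB) >= alpha)  # -> True True
# both fail for "about 3" vs "about 1":
print(cut(muA, alpha)[0] <= cut(muC, alpha)[1], mu_le(muA, muC) >= alpha)  # -> False False
```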
Chapter 9

Fuzzy Multi-Criteria Decision Making

9.1 Introduction
When dealing with practical decision problems, we often have to take into con-
sideration uncertainty in the problem data. It may arise from errors in mea-
suring physical quantities, from errors caused by representing some data in a
computer, from the fact that some data are approximate solutions of other
problems or estimations by human experts, etc. In some of these situations,
the fuzzy set approach may be applicable. In the context of multicriteria de-
cision making, functions mapping the set of feasible alternatives into the unit
interval [0, 1] of real numbers representing normalized utility functions can be
interpreted as membership functions of fuzzy subsets of the underlying set of
alternatives. However, functions with the range in [0, 1] arise in more contexts.
In this chapter, we consider a decision problem in X, i.e. the problem to find
a ”best” decision in the set of feasible alternatives X with respect to several (i.e.
more than one) criteria functions, see [81], [90], [80], [76], [77], [78]. Within the
framework of such a decision situation, we deal with the existence and mutual
relationships of three kinds of ”optimal decisions”: Weak Pareto-Maximizers,
Pareto-Maximizers and Strong Pareto-Maximizers - particular alternatives sat-
isfying some natural and rational conditions. Here, they are commonly called
Pareto-optimal decisions.
We study also the compromise decisions x∗ ∈ X maximizing some aggrega-
tion of the criteria µi , i ∈ I = {1, 2, ..., m}. The criteria µi considered here will
be functions defined on the set X of feasible alternatives with the values in the
unit interval [0, 1], i.e. µi : X → [0, 1], i ∈ I. Such functions can be interpreted
as membership functions of fuzzy subsets of X and will be called here fuzzy
criteria. Later on, in Chapters 10 and 11, each constraint or objective function of the fuzzy mathematical programming problem will be naturally assigned to a unique fuzzy criterion. From this point of view, this chapter should follow


the chapters 10 and 11 dealing with fuzzy mathematical programming. Our approach here is, however, more general and can be adapted to a broader class of decision problems.
The set X of feasible alternatives is supposed to be a convex subset, or a
generalized convex subset of the n-dimensional Euclidean space Rn , frequently
we consider X = Rn . The main subject of our interest is to derive some
important relations between Pareto-optimal decisions and compromise decisions.
Moreover, we generalize the concept of the compromise decision by adopting
aggregation operators which were investigated in [79] and we also extend the
results derived for max-min decisions. The results will be derived for the n-
dimensional Euclidean vector space Rn with n ≥ 1. However, some results can
be derived only for R1 denoted here simply by R.

9.2 Fuzzy Criteria


Since no function mapping R into [0, 1] is strictly concave, and each concave
function mapping R into [0, 1] is constant on R, we take advantage of the
definition presented in Chapter 3, [79], where we defined (quasi)concave and
(quasi)convex functions on arbitrary subsets X of Rn . Now, for membership
functions of fuzzy subsets of Rn , the concavity concepts will be applied to the
supports of the membership functions, that is, for a fuzzy subset A of Rn with
the membership function µ : Rn → [0, 1], we consider X = Supp(A). Here, we
use the notation and nomenclature of Chapter 8.
It is evident that a function µ : Rn → [0, 1] quasiconcave on Rn is quasiconcave on Supp(µ), and vice versa: any function µ quasiconcave on Supp(µ) is quasiconcave on Rn . However, it is no longer true that the membership functions strictly (quasi)concave on Rn and the membership functions strictly (quasi)concave on their supports coincide.

Example 66 Let µ : R → [0, 1] be defined by µ(x) = max{0, 1 − x2 }, see Fig.


9.1(a). It can be easily shown that Supp(µ) = [−1, 1] and µ is strictly concave
and strictly quasiconcave on Supp(µ). However, µ is neither strictly concave nor
strictly quasiconcave on R. In Fig.9.1 (b), a semistrictly quasiconcave function
on Supp(µ) which is not semistrictly quasiconcave on R is depicted.
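The failure of strict quasiconcavity on R can be checked numerically. The following Python sketch (illustrative; function and variable names are ours, not from the text) tests the strict inequality µ(z) > min{µ(x), µ(y)} at the midpoint z of two distinct points:

```python
def mu(x):
    # Example 66: mu(x) = max(0, 1 - x^2), with Supp(mu) = [-1, 1]
    return max(0.0, 1.0 - x * x)

def strict_qc_holds_at_midpoint(x, y):
    """Strict quasiconcavity requires mu(z) > min(mu(x), mu(y)) for every
    z strictly between two distinct points x and y; we test the midpoint."""
    z = 0.5 * (x + y)
    return mu(z) > min(mu(x), mu(y))

# Outside Supp(mu) the function vanishes identically, so the strict
# inequality fails there: mu is not strictly quasiconcave on R.
fails_on_R = not strict_qc_holds_at_midpoint(-3.0, -2.0)

# For points inside the support the strict inequality does hold.
holds_on_support = strict_qc_holds_at_midpoint(-0.5, 0.5)
```

The same one-line check distinguishes the two notions of strictness discussed above.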

As mentioned in the introduction, we are interested in the properties of


solution concepts of optimization problems whose objectives are expressed in
the terms of fuzzy criteria. A particular interest will be given to fuzzy criteria
defined as follows.

Definition 67 A fuzzy subset of Rn given by its membership function µ : Rn →


[0, 1] is called a fuzzy criterion on Rn if µ is upper normalized.

By Definition 67 any fuzzy criterion is given by the membership function


attaining the maximal membership value 1. Sometimes in this chapter, the

Figure 9.1:

results will be derived for membership functions of fuzzy subsets of Rn not


necessarily upper normalized. This fact will be distinctly stressed if necessary.
Fuzzy criteria in one-dimensional Euclidean space R with additional concav-
ity properties can be characterized by the following simple propositions. The
corresponding proofs follow easily from the definition.

Proposition 68 If the membership function µ of a fuzzy criterion on R is


quasiconcave on R, then there exist α, β ∈ [0, 1] and a, b, c, d ∈ R ∪ {−∞, +∞},
such that a ≤ b ≤ c ≤ d and
µ(x) = α for x < a,
µ(x) is non-decreasing for a ≤ x ≤ b,
µ(x) = 1 for b < x < c,
µ(x) is non-increasing for c ≤ x ≤ d,
µ(x) = β for d < x.

Proposition 69 If the membership function µ of a fuzzy criterion on R is


strictly quasiconcave on Supp(µ) and Supp(µ) is convex, then there exist a, b ∈
R ∪ {−∞, +∞}, and x̄ ∈ R, such that a ≤ x̄ ≤ b and
µ(x) = 0 for x ≤ a or x ≥ b,
µ(x) is increasing for a ≤ x ≤ x̄,
µ(x̄) = 1,
µ(x) is decreasing for x̄ ≤ x ≤ b.

Later on, we shall take advantage of the above stated properties in case of
Rn for n > 1.

9.3 Pareto-Optimal Decisions


Throughout this chapter we suppose that I = {1, 2, ..., m}, m > 1, is an index
set of a given family F = {µi | i ∈ I} of membership functions of fuzzy subsets

of Rn . Let X be a subset of Rn such that Supp(µi ) ⊂ X for all i ∈ I. The


elements of X are called decisions.

Definition 70 (i) A decision xW P is said to be a Weak Pareto-Maximizer


(WPM), if there is no x ∈ X such that

µi (xW P ) < µi (x), for every i ∈ I. (9.1)

(ii) A decision xP is said to be a Pareto-Maximizer (PM), if there is no x ∈ X


such that
µi (xP ) ≤ µi (x), for every i ∈ I, (9.2)
µi (xP ) < µi (x), for some i ∈ I.

(iii) A decision xSP is said to be a Strong Pareto-Maximizer (SPM), if there


is no x ∈ X, x ≠ xSP , such that

µi (xSP ) ≤ µi (x), for every i ∈ I. (9.3)

Definition 71 The sets of all WPM, PM and SPM are denoted by XW P , XP ,


XSP , respectively. The elements of XW P ∪ XP ∪XSP are called Pareto-Optimal
Decisions.

The following property is evident.

Proposition 72 Any SPM is PM, and any PM is WPM, i.e.

XSP ⊂ XP ⊂ XW P . (9.4)
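For a finite set of candidate decisions, the three concepts of Definition 70 can be classified by exhaustive comparison. The following Python sketch (the criteria and the grid are illustrative, not taken from the text) also confirms the inclusions (9.4):

```python
def pareto_classes(decisions, criteria):
    """Classify each decision as WPM / PM / SPM per Definition 70."""
    vals = {x: tuple(mu(x) for mu in criteria) for x in decisions}
    wpm, pm, spm = [], [], []
    for x in decisions:
        vx = vals[x]
        # (i) WPM: no y improves every criterion strictly
        if not any(all(w > v for v, w in zip(vx, vals[y])) for y in decisions):
            wpm.append(x)
        # (ii) PM: no y is >= in every criterion and > in at least one
        if not any(all(w >= v for v, w in zip(vx, vals[y])) and
                   any(w > v for v, w in zip(vx, vals[y])) for y in decisions):
            pm.append(x)
        # (iii) SPM: no y != x is >= in every criterion
        if not any(y != x and all(w >= v for v, w in zip(vx, vals[y]))
                   for y in decisions):
            spm.append(x)
    return wpm, pm, spm

# Two triangular fuzzy criteria on a grid over [0, 3] (illustrative):
def mu1(x): return max(0.0, 1.0 - abs(x - 1.0))   # core {1}
def mu2(x): return max(0.0, 1.0 - abs(x - 2.0))   # core {2}

grid = [i / 10 for i in range(31)]
WPM, PM, SPM = pareto_classes(grid, (mu1, mu2))
```

On this grid the cores 1 and 2, as well as interior balance points such as 1.5, come out as Strong Pareto-Maximizers, while points outside both supports are not even weak maximizers.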

To illustrate the above concepts, let us inspect the following example.

Example 73 Let µ1 and µ2 be as in Fig.9.2. Then


XW P = [a,g], XP = [b,c] ∪ (d,e) ∪ {f}, XSP = (d,e) ∪ {f}.

Let Dj ⊂ Rn for all j ∈ J, where J is a finite index set. By Conv{Dj |


j ∈ J} we denote the convex hull of all sets Dj , i.e.

Conv{Dj |j ∈ J} = {z ∈ Rn | z = Σj∈J λj xj , where xj ∈ Dj , λj ≥ 0
for all j ∈ J, and Σj∈J λj = 1}.

In the following propositions, we obtain a transparent characterization of all


WPM, PM and SPM for strictly quasiconcave fuzzy criteria in one-dimensional
space R. Unfortunately, we cannot obtain parallel results in Rn for n > 1, as
is demonstrated by Example 75, see also [81].

Proposition 74 Let µi , i ∈ I, be membership functions of fuzzy criteria on R,


quasiconcave on R. Then

Conv{Core(µi )|i ∈ I} ⊂ XW P .

Figure 9.2:

Proof. Let x0i = inf Core(µi ), x00i = sup Core(µi ) and set x0 = min{x0i |i ∈
I}, x00 = max{x00i |i ∈ I}, then Cl(Conv{Core(µi )|i ∈ I}) = [x0 , x00 ].
Let x ∈ Conv{Core(µi )|i ∈ I} and suppose that x ∉ XW P . Then there exists
y with µi (y) > µi (x), for all i ∈ I.
Assume that y < x; then there exist k ∈ I and y 0 ∈ Core(µk ) with y <
x ≤ y 0 , so that by Proposition 68 we obtain µk (y) ≤ µk (x), a contradiction.
Otherwise, if x < y, then there exist j ∈ I and y 00 ∈ Core(µj ) with y 00 ≤ x < y,
and again by Proposition 68 we have µj (x) ≥ µj (y), again a contradiction.

Example 75 This example demonstrates that Proposition 74 is not true for


Rn , where n > 1, particularly for n = 2. Set

µ1 (x1 , x2 ) = max{0, 1 − (1/4)x1² − x2²},

µ2 (x1 , x2 ) = max{0, 1 − (x1 − 1)² − (x2 − 1)²}.

Notice that µ1 , µ2 are continuous membership functions of fuzzy criteria, strictly


concave on their supports, hence quasiconcave on R2 . Here, Core(µ1 ) = x̄1 =
(0; 0), Core(µ2 ) = x̄2 = (1; 1) are the end points of the segment Conv{x̄1 , x̄2 }
in R2 .
It is easy to calculate that

µ1 (0.5, 0.5) = 0.6875 and µ2 (0.5, 0.5) = 0.5.

On the other hand, µ1 (0.7, 0.4) = 0.7175, µ2 (0.7, 0.4) = 0.55. We obtain
µ1 (0.5, 0.5) < µ1 (0.7, 0.4) and µ2 (0.5, 0.5) < µ2 (0.7, 0.4).
As (0.5; 0.5) ∈ Conv{x̄1 , x̄2 } and (0.7; 0.4) ∉ Conv{x̄1 , x̄2 }, it follows that

Conv{Core(µi )|i = 1, 2} = Conv{x̄1 , x̄2 },


which is not a subset of XW P .
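The numbers in Example 75 can be verified directly; the following Python sketch (names are illustrative) reproduces the membership values used above:

```python
def mu1(x1, x2):
    # mu1(x1, x2) = max(0, 1 - (1/4)*x1^2 - x2^2)
    return max(0.0, 1.0 - 0.25 * x1**2 - x2**2)

def mu2(x1, x2):
    # mu2(x1, x2) = max(0, 1 - (x1-1)^2 - (x2-1)^2)
    return max(0.0, 1.0 - (x1 - 1.0)**2 - (x2 - 1.0)**2)

# (0.5, 0.5) lies on the segment Conv{(0,0), (1,1)}, but it is strictly
# dominated by (0.7, 0.4), which lies outside the segment:
a = (mu1(0.5, 0.5), mu2(0.5, 0.5))   # ≈ (0.6875, 0.5)
b = (mu1(0.7, 0.4), mu2(0.7, 0.4))   # ≈ (0.7175, 0.55)
dominated = a[0] < b[0] and a[1] < b[1]
```

Since both coordinates of `b` strictly exceed those of `a`, the point (0.5, 0.5) is not a Weak Pareto-Maximizer.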

Example 76 Let µ1 , µ2 be as in Fig. 9.3. Here, µ1 , µ2 are continuous and


evidently XW P = [a,d], XP = [b,c], however, XSP = ∅.

Figure 9.3:

Proposition 77 Let µi , i ∈ I, be membership functions of fuzzy criteria on R,


strictly quasiconcave on their convex supports. Then

XP ⊂ Conv{Core(µi )|i ∈ I}.

Proof. By Proposition 69, each Core(µi ) contains exactly one element, i.e.
xi = Core(µi ). Setting x0 = min{xi |i ∈ I}, x00 = max{xi |i ∈ I}, then

Conv{Core(µi )|i ∈ I} = [x0 , x00 ].

Let x ∉ Conv{Core(µi )|i ∈ I} and suppose that x < x0 . By monotonicity of µi
we get µi (x) ≤ µi (x0 ) for all i ∈ I. Moreover, by Proposition 69, µj (x) < µj (x0 )
for j satisfying xj = x0 , consequently, x ∉ XP .
On the other hand, suppose that x00 < x. Again by monotonicity of µi we
get µi (x00 ) ≥ µi (x) for all i ∈ I, and by Proposition 69, µk (x00 ) > µk (x) for k
satisfying xk = x00 . Hence, again x ∉ XP , which gives the required result.

Proposition 78 Let µi , i ∈ I, be membership functions of fuzzy criteria on R,


strictly quasiconcave on their convex supports. If
\
Conv{Core(µi )| i ∈ I} ⊂ Supp(µi ), (9.5)
i∈I

then
XW P = XP = XSP = Conv{Core(µi )| i ∈ I}. (9.6)

Proof. By Proposition 69, each Core(µi ) contains exactly one element, i.e.
xi = Core(µi ). Setting x0 = min{xi |i ∈ I}, x00 = max{xi |i ∈ I}, we obtain
Conv{Core(µi )|i ∈ I} = [x0 , x00 ].
First, we prove that [x0 , x00 ] ⊂ XSP .
Let x ∈ [x0 , x00 ] and suppose to the contrary that x ∉ XSP . Then there exists
y with y ≠ x and µi (y) ≥ µi (x), for all i ∈ I.
Further, suppose that y < x; then by strict quasiconcavity of µk , for k satisfying
xk = x00 , and by Proposition 69, we get µk (y) < µk (x), a contradiction.
On the other hand, if x < y, then by strict quasiconcavity of µj , for j
satisfying xj = x0 , we get µj (x) > µj (y), again a contradiction. Hence
Conv{Core(µi )|i ∈ I} ⊂ XSP .
Second, we prove that XW P ⊂ Conv{Core(µi )|i ∈ I}.
Suppose that y ∉ [x0 , x00 ]. Let y < x0 ; then by (9.5) µi (x0 ) > 0 for all i ∈ I.
Applying strict monotonicity of µi , we get µi (y) < µi (x0 ), for all i ∈ I, hence,
y ∉ XW P . Assuming y > x00 we obtain by analogy the same result.
Combining the first and the second result, we obtain the chain of inclusions
XW P ⊂ Conv{Core(µi )|i ∈ I} ⊂ XSP .
However, by (9.4) we have XSP ⊂ XP ⊂ XW P , consequently, we obtain the
required equalities (9.6).
Notice that inclusion (9.5) is satisfied if all Supp(µi ), i ∈ I, are identical.

9.4 Compromise Decisions


In the theory of multi-objective optimization, the "compromise decision", or
"compromise solution", is obtained as the solution of a single-objective problem
with the objective being a combination of all criteria in question, see e.g. [37].
In this section we investigate a concept and some properties of compromise
decision x∗ ∈ X, maximizing the aggregation of all criteria, e.g. min{µi (x)|
i ∈ I}, where X ⊂ Rn is a convex set, I = {1, 2, ..., m}, m > 1. The original
idea belongs to Bellman and Zadeh in [9], who proposed its use in decision
analysis by using the following definition.
Definition 79 Let µi , i ∈ I, be the membership functions of fuzzy subsets of X,
X be a convex subset of Rn . A decision x∗ ∈ X is called a max-min decision,
if
min{µi (x∗ )| i ∈ I} = max{min{µi (x)| i ∈ I}| x ∈ X}. (9.7)
The set of all max-min decisions in X is denoted by XM .
We start with two propositions that are concerned with the existence of a max-
min decision. In R, the requirement of compactness of the α-cuts of the criteria
is not necessary for the existence of a nonempty XM , whereas the same result is
no longer true in Rn with n > 1, as will be demonstrated by an example.

Proposition 80 Let µi , i ∈ I, be the membership functions of fuzzy criteria


on R, quasiconcave on R. If all µi are upper semicontinuous (USC) on R,
then XM ≠ ∅.

Proof. For each i ∈ I there exists xi ∈ Core(µi ). Put x0 = min{xi |i ∈ I}


and x00 = max{xi |i ∈ I}. By Proposition 68, µi are non-decreasing in (−∞, x0 ]
and non-increasing in [x00 , +∞) for all i ∈ I. Then ϕ = min{µi | i ∈ I} is also
non-decreasing in (−∞, x0 ] and non-increasing in [x00 , +∞). As µi , i ∈ I, are
upper semicontinuous, ϕ is also USC on R, particularly on the compact interval
[x0 , x00 ]. Hence ϕ = min{µi | i ∈ I} attains its maximum on [x0 , x00 ] at a point
which is then a global maximizer of ϕ on R. This maximizer is a max-min
decision, i.e. XM ≠ ∅.

Example 81 This example demonstrates that semicontinuity is essential in the


above proposition. Let

µ1 (x) = 1 for x < 0,   µ1 (x) = 0 for x ≥ 0;
µ2 (x) = e^x for x < 0,   µ2 (x) = 1 for x ≥ 0.

Here, µ1 , µ2 are the membership functions of fuzzy criteria, µ2 is contin-


uous, µ1 is not upper semicontinuous on R. It is easy to see that ψ(x) =
min{µ1 (x), µ2 (x)} does not attain its maximum on R , i.e. XM = ∅.
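A quick numerical illustration of Example 81 (a sketch with illustrative names): ψ(x) = e^x for x < 0 and ψ(x) = 0 for x ≥ 0, so the supremum 1 is approached but never attained:

```python
import math

def mu1(x):
    # not upper semicontinuous at x = 0
    return 1.0 if x < 0 else 0.0

def mu2(x):
    # continuous: e^x for x < 0, constant 1 afterwards
    return math.exp(x) if x < 0 else 1.0

def psi(x):
    return min(mu1(x), mu2(x))

# psi(-1/n) = exp(-1/n) -> 1 as n grows, while psi(x) = 0 for x >= 0.
values = [psi(-1.0 / n) for n in range(1, 200)]
```

Every sampled value stays strictly below 1, yet the values come arbitrarily close to 1, so ψ has no maximizer and XM = ∅.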

Example 82 This example demonstrates that Proposition 80 does not hold in


Rn with n > 1, particularly for n = 2. Set
µ1 (x1 , x2 ) = max{0, min{x2 , 1 − ((x1 + 1)/(x2 + 1))²}},

µ2 (x1 , x2 ) = max{0, min{x2 , 1 − ((x1 − 1)/(x2 + 1))²}}.

It can be easily verified that µ1 and µ2 are continuous fuzzy criteria on R2 ,


quasiconcave on R2 . Let

ϕ(x1 , x2 ) = min{µ1 (x1 , x2 ), µ2 (x1 , x2 )}.

It is not difficult to show that ϕ(x1 , x2 ) < 1 on R² and that ϕ(0, x2 ) =
1 − 1/(x2 + 1)² for x2 > 1.
Since ϕ(0, x2 ) → 1 as x2 → +∞, ϕ does not attain its maximum on R² , i.e.
XM (µ1 , µ2 ) = ∅.
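The behaviour of ϕ in Example 82 can be illustrated numerically (a sketch; the names are ours): along the axis x1 = 0 the value ϕ(0, x2) increases towards 1 but stays strictly below it:

```python
def mu1(x1, x2):
    return max(0.0, min(x2, 1.0 - ((x1 + 1.0) / (x2 + 1.0))**2))

def mu2(x1, x2):
    return max(0.0, min(x2, 1.0 - ((x1 - 1.0) / (x2 + 1.0))**2))

def phi(x1, x2):
    return min(mu1(x1, x2), mu2(x1, x2))

# phi(0, t) = 1 - 1/(t+1)^2 for t > 1: strictly increasing, bounded by 1
vals = [phi(0.0, t) for t in (2.0, 10.0, 100.0, 1000.0)]
```

The sampled values climb monotonically towards, but never reach, the supremum 1, so no max-min decision exists.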

Remember that a fuzzy subset A given by the membership function µ :


Rn → [0, 1] is compact if and only if µ is USC on Rn and [A]α is a bounded
subset of Rn for every α ∈ (0, 1], or, equivalently, [A]α is a compact subset of
Rn for every α ∈ (0, 1].

Proposition 83 Let A be a compact fuzzy subset of Rn given by the member-


ship function µ : Rn → [0, 1]. Then µ attains its maximum on Rn .

Proof. Let α∗ = sup{µ(x)|x ∈ Rn }. If α∗ = 0, then there is nothing to
prove, so assume α∗ > 0 and let {αk } be an increasing sequence of numbers
such that αk ∈ (0, 1) and αk → α∗ . Then [A]αk is compact and [A]αk+1 ⊂ [A]αk
for all k = 1, 2, ... By the well-known property of nested compact sets there
exists x∗ ∈ Rn with

x∗ ∈ ∩k≥1 [A]αk . (9.8)

It remains to show that


µ(x∗ ) = α∗ . (9.9)
On contrary, suppose that µ(x∗ ) < α∗ . Then there exists k0 such that

µ(x∗ ) < αk0 ≤ α∗ . (9.10)

By (9.8), x∗ ∈ [A]αk0 , hence, µ(x∗ ) ≥ αk0 , a contradiction to (9.10). Conse-


quently, (9.9) is true, which completes the proof.

Proposition 84 Let µi , i ∈ I, be membership functions of fuzzy subsets Ai of


Rn . If Ai is compact for every i ∈ I, then XM ≠ ∅.
Proof. Let ϕ = min{µi | i ∈ I}. Observe that [ϕ]α = ∩i∈I [µi ]α for all
α ∈ (0, 1]. Since all [µi ]α are compact, the same holds for [ϕ]α . By Proposition
83, ϕ attains its maximum on Rn , i.e. XM ≠ ∅.
In what follows we investigate some relationships between Pareto-Optimal
decisions and max-min decisions. In the following two propositions, normality
of µi is not required.

Proposition 85 Let µi , i ∈ I, be membership functions of fuzzy subsets of Rn .


Then
XM ⊂ XW P .

Proof. Let x∗ ∈ XM . Suppose that x∗ is not a Weak Pareto-Maximizer,


then by Definition 70 there exists x0 such that µi (x∗ ) < µi (x0 ), for all i ∈ I.
Then
min{µi (x∗ )| i ∈ I} < min{µi (x0 )| i ∈ I},
showing that x∗ is not a max-min decision, a contradiction.

Proposition 86 Let µi , i ∈ I, be the membership functions of fuzzy sets of Rn ,


let XM = {x∗ }, i.e. x∗ ∈ X be a unique max-min decision. Then

XM ⊂ XSP .
112 CHAPTER 9. FUZZY MULTI-CRITERIA DECISION MAKING

Proof. Suppose that x∗ is a unique max-min decision and suppose that x∗


is not a SPM. Then there exists x+ ∈ X, x∗ ≠ x+ , µi (x∗ ) ≤ µi (x+ ), for all
i ∈ I. Then
min{µi (x∗ )| i ∈ I} ≤ min{µi (x+ )| i ∈ I},
which is a contradiction with the uniqueness of x∗ ∈ XM .
In the following proposition sufficient conditions for the uniqueness of a
compromise decision are given.

Proposition 87 Let µi , i ∈ I, be membership functions of fuzzy subsets of Rn


strictly quasiconcave on their convex supports Supp(µi ). Let x∗ ∈ XM such
that
min{µi (x∗ )| i ∈ I} > 0. (9.11)
Then
XM = {x∗ },
i.e. the max-min decision x∗ is unique.

Proof. Let ϕ(x) = min{µi (x)| i ∈ I}, where x ∈ Rn , and suppose that
there exists x0 ∈ XM , x∗ ≠ x0 . Then by (9.11)

ϕ(x0 ) = ϕ(x∗ ) > 0.


By (9.11) we also have x∗ , x0 ∈ X = ∩i∈I Supp(µi ), where X is convex. By
strict quasiconcavity of ϕ on X we obtain for x+ = λx0 +(1−λ)x∗ and 0 < λ < 1:

ϕ(x+ ) > min{ϕ(x0 ), ϕ(x∗ )},

which contradicts the fact that ϕ(x∗ ) = max{ϕ(x)| x ∈ Rn }. Consequently,


XM consists of the unique element.

Corollary 88 If µi , i ∈ I, are membership functions of fuzzy subsets of Rn ,


strictly quasiconcave on their convex supports and x∗ ∈ XM satisfying (9.11),
then x∗ ∈ XSP .

9.5 Generalized Compromise Decisions


In this section we generalize the concept of the max-min decision by adopting
aggregation operators investigated in [79].

Definition 89 Let µi , i ∈ I, be the membership functions of fuzzy subsets of X,


X be a convex subset of Rn . Let G = {Gm }∞ m=1 be an aggregation operator. A
decision x∗ ∈ X is called a max-G decision, if

Gm (µ1 (x∗ ), ..., µm (x∗ )) = max{Gm (µ1 (x), ..., µm (x))| x ∈ X}. (9.12)

The set of all max-G decisions on X is denoted by XG (µ1 , ..., µm ), or, shortly,
XG .
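Different aggregation operators generally yield different max-G decisions. The following Python sketch (the criteria and the grid are illustrative, not from the text) compares, by brute force on a one-dimensional grid, the max-min decision (G = TM) with the max-G decision for the arithmetic mean, an idempotent aggregation operator:

```python
def mu1(x): return max(0.0, 1.0 - abs(x))            # fuzzy criterion, core {0}
def mu2(x): return max(0.0, 1.0 - 0.5 * abs(x - 1))  # fuzzy criterion, core {1}

grid = [i / 100 for i in range(-100, 201)]

def max_G_decision(G):
    """Brute-force max-G decision on the grid for a binary aggregation G."""
    return max(grid, key=lambda x: G(mu1(x), mu2(x)))

x_min = max_G_decision(min)                          # G = minimum t-norm T_M
x_mean = max_G_decision(lambda u, v: 0.5 * (u + v))  # G = arithmetic mean
```

Here the max-min decision balances the two criteria near x = 1/3, while the mean-based compromise is attracted to the core of the steeper criterion µ1 at x = 0: the choice of G matters.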

If there is no danger of misunderstanding, we omit the subscript m in the


aggregation mapping Gm , writing shortly G. In the previous section we have in-
vestigated some properties of the compromise decisions considering a particular
aggregation operator, namely the minimum t-norm TM . In what follows we extend the
results from the previous section to some more general aggregation operators.
The following propositions generalize Propositions 84, 85, 86 and 87.

Proposition 90 Let µi , i ∈ I, be the USC membership functions of fuzzy


subsets of Rn , G be an USC aggregation operator. Then ψ : Rn → [0, 1] defined
for x ∈ Rn by
ψ(x) = G(µ1 (x), ..., µm (x)) (9.13)
is USC on Rn .

Proof. Let x0 ∈ Rn and ε > 0. It is sufficient to prove that there exist δ > 0,
such that ψ(x) ≤ ψ(x0 ) + ε for every x ∈ B(x0 , δ) = {x ∈ Rn | kx − x0 k < δ}.
Let y0i = µi (x0 ) for i ∈ I, put y0 = (y01 , ..., y0m ). Since G is USC on [0, 1]m ,
there exists η > 0, such that y ∈ B(y0 , η) = {y ∈ [0, 1]m | ky − y0 k < η} implies

G(y1 , ..., ym ) ≤ G(y01 , ..., y0m ) + ε. (9.14)

By USC of all µi , i ∈ I, there exists δ > 0, such that µi (x) ≤ µi (x0 ) + η/2 for
every x ∈ B(x0 , δ) = {x ∈ Rn | kx − x0 k < δ}. By the monotonicity property of G,
we obtain
G(µ1 (x), ..., µm (x)) ≤ G(z1 , ..., zm ), (9.15)
where zi = min{1, µi (x0 ) + η/2}, and also (z1 , ..., zm ) ∈ B(y0 , η). Moreover,
by (9.13) we have ψ(x0 ) = G(y01 , ..., y0m ). Combining inequalities (9.14) and
(9.15), we obtain the required result ψ(x) ≤ ψ(x0 ) + ε.
The next two propositions give sufficient conditions for the existence of
max-G decisions.

Proposition 91 Let µi , i ∈ I, be membership functions of fuzzy subsets Ai of


Rn , G be an USC and idempotent aggregation operator. If Ai is compact for
every i ∈ I, then XG ≠ ∅.

Proof. Let α ∈ (0, 1], ψ(x) = G(µ1 (x), ..., µm (x)). We prove that [ψ]α =
{x ∈ Rn |G(µ1 (x), ..., µm (x)) ≥ α} is a compact subset of Rn . First, we prove
that [ψ]α is bounded. Assume the contrary; then there exist xk ∈ [ψ]α , k =
1, 2, ..., with lim kxk k = +∞ for k → +∞. Take an arbitrary β with 0 < β < α.
Since all [µi ]β are bounded, there exists k0 such that for all i ∈ I and k > k0
we have xk ∉ [µi ]β , i.e. µi (xk ) < β. By monotonicity and idempotency of G
it follows that for k > k0 we have

G(µ1 (xk ), ..., µm (xk )) ≤ G(β, ..., β) = β < α.

But this is a clear contradiction, consequently, [ψ]α is bounded.


By Proposition 90 ψ(x) = G(µ1 (x), ..., µm (x)) is USC on Rn , hence [ψ]α
is closed. Consequently, [ψ]α is compact. Then ψ is a membership function

of a compact fuzzy subset of Rn , therefore, by Proposition 83, ψ attains its


maximum on Rn , i.e. XG ≠ ∅.
Notice that the mean aggregation operators are idempotent. However, the
only idempotent t-norm is the minimum t-norm TM . The following proposition
extends the statement of Proposition 91 to all other USC t-norms.
Proposition 92 Let µi , i ∈ I, be membership functions of fuzzy subsets Ai of
Rn , T be an USC t-norm. If Ai is compact for every i ∈ I, then XT ≠ ∅.
Proof. Let φ : Rn → [0, 1] be defined for x ∈ Rn as
φ(x) = T (µ1 (x), ..., µm (x)). (9.16)
Let α ∈ (0, 1]; we prove that [φ]α is a bounded subset of Rn . By definition
we have [φ]α = {x ∈ Rn |T (µ1 (x), ..., µm (x)) ≥ α}. Since [µi ]α are bounded for
all i ∈ I, it follows that

∩i∈I [µi ]α = {x ∈ Rn | min{µ1 (x), ..., µm (x)} ≥ α}

is also bounded. We show that [φ]α ⊂ ∩i∈I [µi ]α .
Let x ∈ [φ]α . Then by definition of the α-cut we have

T (µ1 (x), ..., µm (x)) ≥ α. (9.17)

Since T is dominated by the minimum t-norm TM , it follows that

T (µ1 (x), ..., µm (x)) ≤ min{µ1 (x), ..., µm (x)}. (9.18)

From (9.17) and (9.18) we obtain

α ≤ min{µ1 (x), ..., µm (x)},

thus x ∈ ∩i∈I [µi ]α , proving that [φ]α ⊂ ∩i∈I [µi ]α . Consequently, [φ]α is bounded
for all α ∈ (0, 1].
By Proposition 90 φ(x) = T (µ1 (x), ..., µm (x)) is USC on Rn and by Propo-
sition 83, φ attains its maximum on Rn , i.e. XT ≠ ∅.
The following proposition is an extension of Proposition 85.
Proposition 93 Let µi , i ∈ I, be the membership functions of fuzzy subsets of
Rn , G be a strictly monotone aggregation operator. Then
XG ⊂ XW P .
Proof. Let x∗ ∈ XG . Suppose that x∗ is not a Weak Pareto-Maximizer,
then there exists x0 such that µi (x∗ ) < µi (x0 ), for all i ∈ I. Then by the strict
monotonicity of G we obtain
G(µ1 (x∗ ), ..., µm (x∗ )) < G(µ1 (x0 ), ..., µm (x0 )),
showing that x∗ is not a max-G decision, a contradiction.
The following proposition is a generalization of Proposition 86.

Proposition 94 Let µi , i ∈ I, be the membership functions of fuzzy subsets


of Rn , let XG = {x∗ }, i.e. x∗ ∈ Rn be a unique max-G decision, let G be an
aggregation operator. Then
XG ⊂ XSP .
Proof. Suppose that x∗ is a unique max-G decision and suppose that x∗ is
not a SPM. Then there exists x+ ∈ Rn , x∗ ≠ x+ , µi (x∗ ) ≤ µi (x+ ), for all i ∈ I.
Then by monotonicity of G we obtain
G(µ1 (x∗ ), ..., µm (x∗ )) ≤ G(µ1 (x+ ), ..., µm (x+ )),
which is a contradiction with the uniqueness of x∗ ∈ XG .
The proof of the following proposition is a slight adaptation of that of Propo-
sition 87 and requires that G dominates TM , i.e. G ≫ TM , see [79].
Proposition 95 Let µi , i ∈ I, be the membership functions of fuzzy criteria on
Rn , strictly quasiconcave on their convex supports Supp(µi ). Let G be a strictly
monotone aggregation operator such that G dominates TM and let x∗ ∈ XG with
min{µi (x∗ )| i ∈ I} > 0. (9.19)
Then
XG = {x∗ },
i.e. x∗ is a unique max-G decision.
Proof. Let ϕ(x) = G(µ1 (x), ..., µm (x)), where x ∈ Rn , and suppose that
there exists x0 ∈ XG , x∗ ≠ x0 . Then by (9.19)
ϕ(x0 ) = ϕ(x∗ ) > 0. (9.20)
By (9.19) we also have x∗ , x0 ∈ X = ∩i∈I Supp(µi ), where X is convex. Since G
is a strictly monotone aggregation operator and µi are strictly quasiconcave,
it follows that ϕ is strictly quasiconcave on X. Then we obtain for x+ =
λx0 + (1 − λ)x∗ and 0 < λ < 1:
ϕ(x+ ) = G(µ1 (λx0 + (1 − λ)x∗ ), ..., µm (λx0 + (1 − λ)x∗ ))
> G(min{µ1 (x0 ), µ1 (x∗ )}, ..., min{µm (x0 ), µm (x∗ )}).
As G dominates TM , we obtain
G(min{µ1 (x0 ), µ1 (x∗ )}, ..., min{µm (x0 ), µm (x∗ )})
≥ min{G(µ1 (x0 ), ..., µm (x0 )), G(µ1 (x∗ ), ..., µm (x∗ ))}.
Combining the last two inequalities with (9.20), we obtain
ϕ(x+ ) > min{ϕ(x0 ), ϕ(x∗ )} = ϕ(x∗ ),
which contradicts the fact that ϕ(x∗ ) = max{ϕ(x)| x ∈ X}. Consequently, XG
consists of the unique element.
For fuzzy criteria µi , i ∈ I, an aggregation operator G and x∗ ∈ XG
satisfying the assumptions of Proposition 95, it follows that x∗ ∈ XSP .

9.6 Aggregation of Fuzzy Criteria


In this section we shall investigate the problem of aggregating several fuzzy
criteria µi on Rn possessing some generalized concavity property (e.g. upper-
connectedness, T -quasiconcavity, or quasiconcavity). We look for sufficient con-
ditions which guarantee that the aggregated criterion inherits such properties.
The proofs of the following propositions are omitted, as they can be obtained
by slight modifications of the corresponding propositions in [79].

Proposition 96 Let µi : Rn → [0, 1], i ∈ I, be TD -quasiconcave membership


functions of fuzzy criteria on Rn such that
\
Core(µi ) ≠ ∅. (9.21)
i∈I

Let A : [0, 1]m → [0, 1] be an aggregation operator. Then ψ : Rn → [0, 1] defined


for x ∈ Rn by
ψ (x) = A (µ1 (x), ..., µm (x)) (9.22)
is upper-starshaped on Rn .

Proposition 97 Let µi : Rn → [0, 1], i ∈ I, be TD -quasiconcave membership


functions of fuzzy criteria on Rn such that

Core(µ1 ) = · · · = Core(µm ) ≠ ∅. (9.23)

Let A : [0, 1]m → [0, 1] be a strictly monotone aggregation operator. Then


ψ : Rn → [0, 1] defined for x ∈ Rn by (9.22) is TD -quasiconcave on Rn .

Conditions (9.21) and (9.23) are essential for the validity of the statements
of Propositions 96 and 97, as has been demonstrated by examples in [79].

Proposition 98 Let T be a t-norm, µi : Rn → [0, 1], i ∈ I, be T -quasiconcave


membership functions of fuzzy criteria. Let A be an aggregation operator, and
let A dominate T . Then ψ : Rn → [0, 1] defined for x ∈ Rn by (9.22) is
T -quasiconcave on Rn .

Obviously, any t-norm T dominates itself (dominance is reflexive), and the
minimum t-norm TM dominates any other t-norm T . Accordingly, we obtain
the following consequence of Proposition 98.

Corollary 99 Let T be a t-norm, µi : Rn → [0, 1], i ∈ I, be T -quasiconcave


membership functions of fuzzy criteria. Then ϕj : Rn → [0, 1], j = 1, 2, defined
by

ϕ1 (x) = T (µ1 (x), ..., µm (x)), x ∈ Rn , (9.24)


ϕ2 (x) = TM (µ1 (x), ..., µm (x)), x ∈ Rn , (9.25)

are also T -quasiconcave on Rn .



9.7 Extremal Properties


In this section we derive several results concerning relations between local and
global maximizers (i.e. max-A decisions) of some aggregations of fuzzy criteria.
For this purpose we apply the local-global properties of generalized concave
functions from [79].

Theorem 100 Let µi : Rn → [0, 1], i ∈ I, be T -quasiconcave membership


functions of fuzzy criteria on Rn such that
\
Core(µi ) ≠ ∅.
i∈I

Let A : [0, 1]m → [0, 1] be an aggregation operator. If ψ : Rn → [0, 1] defined for


x ∈ Rn by
ψ (x) = A (µ1 (x), ..., µm (x)) (9.26)
attains a strict local maximum at x̄ ∈ Rn , then x̄ is a strict global maximizer
of ψ on Rn .

Proof. By Proposition 96, ψ is upper-starshaped on Rn , and a strict local
maximizer of an upper-starshaped function on Rn is also its strict global max-
imizer, see [79]. This gives the required result.

Theorem 101 Let T be a t-norm, µi : Rn → [0, 1], i ∈ I, be T -quasiconcave


membership functions of fuzzy criteria on Rn . Let A be an aggregation operator
and let A dominate T . If ψ : Rn → [0, 1] defined for x ∈ Rn by (9.26) attains
its strict local maximum at x̄ ∈ Rn , then x̄ is a strict global maximizer of ψ on
Rn .

Proof. Obviously, ψ = A (µ1 , . . . , µm ) is T -quasiconcave on Rn .


Any T -quasiconcave function on Rn is upper-quasiconnected on Rn . Then
the statement follows from the corresponding theorem in [79].

Theorem 102 Let T be a t-norm, let µi : Rn → [0, 1], i ∈ I, be semistrictly T -


quasiconcave membership functions of fuzzy criteria on Rn . Let A be a strictly
monotone aggregation operator and let A dominate T . If ψ : Rn → [0, 1]
defined for x ∈ Rn by (9.26) attains its local maximizer at x̄ ∈ Rn , then x̄ is a
global maximizer of ψ on Rn .

Proof. Again, ψ is T -quasiconcave on Rn . Any T -quasiconcave function


on Rn is upper-quasiconnected on Rn . Then the statement follows from the
corresponding theorem in [79].
Since any t-norm T dominates itself and the minimum t-norm TM dominates
any other t-norm T , we obtain the following results.

Corollary 103 Let T be a t-norm, µi : Rn → [0, 1], i ∈ I, be T -quasiconcave


membership functions of fuzzy criteria on Rn . If ϕ1 : Rn → [0, 1] defined for
x ∈ Rn by (9.24) attains its strict local maximum at x̄1 ∈ Rn , then x̄1 is a
strict global maximizer of ϕ1 on Rn .

Corollary 104 Let T be a strict t-norm, µi : Rn → [0, 1], i ∈ I, be semistrictly


T -quasiconcave membership functions of fuzzy criteria on Rn . If ϕ1 : Rn →
[0, 1] defined for x ∈ Rn by

ϕ1 (x) = T (µ1 (x), . . . , µm (x)),

attains its local maximum at x̄1 ∈ Rn , then x̄1 is a global maximizer of ϕ1 on


Rn .

Corollary 105 Let T be a t-norm, µi : Rn → [0, 1], i ∈ I, be T -quasiconcave


membership functions of fuzzy criteria on Rn . If ϕ2 : Rn → [0, 1] defined for
x ∈ Rn by
ϕ2 (x) = TM (µ1 (x), . . . , µm (x)),
attains its strict local maximum at x̄2 ∈ Rn , then x̄2 is a strict global maximizer
of ϕ2 on Rn .

9.8 Application to Location Problem


A classical problem in location theory consists in locating p suppliers to cover
the given demands of q consumers in such a way that total shipping costs are min-
imized, see e.g. [19]. Consider the following mathematical model:
Let I = {1, 2, . . . , q} be a set of q consumers located on the plane R2 in the
points ci with coordinates ci = (ui ; vi ) ∈ R2 , i ∈ I. Each consumer i ∈ I is
characterized by a given nonnegative demand bi - an amount of products, goods,
services, etc. The demands of consumers are to be satisfied by a given set of p
suppliers denoted by J = {1, 2, . . . , p}. The distance of consumer i ∈ I located
at (ui ; vi ) and supplier j ∈ J at (xj ; yj ) is denoted by dij (xj , yj ) and defined by

dij (xj , yj ) = d ((ui ; vi ) , (xj ; yj )) ,

where d is an appropriate distance function (e.g. Euclidean distance). The


shipping cost between i and j depends on the location of consumer i, on the
distance dij (xj , yj ) of consumer i to supplier j, and, on the amount of goods
zij transported from j to i. These characteristics can be expressed by the value
fi (dij (xj , yj ) , zij ) of a cost function fi : [0, +∞)× [0, +∞) −→ R1 , i ∈ I, that
is nondecreasing in both variables. The total cost of the shipment of goods
from all producers to all consumers is defined as the sum of the individual cost
functions
q X
X p
f ((x1 ; y1 ), . . . , (xp ; yp ) , z11 , . . . , zqp ) = fi (dij (xj , yj ) , zij ) .
i=1 j=1

The problem is to find locations of suppliers (xj ; yj ) and transported amounts


zij for all consumers and suppliers such that the requirements of the consumers

are covered and total shipping cost is minimal. The mathematical model of the
above location problem can be formulated as follows:
minimize Σi∈I Σj∈J fi (dij (xj , yj ) , zij )
(9.27)
subject to Σj∈J zij ≥ bi , i ∈ I, zij ≥ 0, (xj , yj ) ∈ R² , i ∈ I, j ∈ J.

Problem (9.27) is a constrained nonlinear optimization problem with 2p + pq


variables xj , yj , zij . We assume that functions fi have the following simple form:

fi (dij (xj , yj ) , zij ) = αi · dij (xj , yj ) · zij ,

where αi > 0 are constant coefficients, and distance function dij is defined as
dij (xj , yj ) = ((ui − xj )^β + (vi − yj )^β )^{1/β} ,

where i ∈ I, j ∈ J and β > 0. Problem (9.27), even in the above simple form, is
difficult to solve numerically because of its possible nonlinearities which bring
numerous local optima.
In order to transform problem (9.27) into a more tractable form, we consider
the objective function as a utility or satisfaction function µ, such that µ : R2p ×
Rq → [0, 1]. In such a case

µ ((x1 ; y1 ), . . . , (xp ; yp ) , b1 , . . . , bq ) = 0

denotes the total dissatisfaction (or, zero utility) with location (xj , yj ) ∈ R2 ,
j ∈ J, and supplied amounts bi , i ∈ I. On the other hand,

µ((x1 ; y1 ) , . . . , (xp ; yp ) , b1 , . . . , bq ) = 1

denotes the maximal total satisfaction (or, maximal utility) with location (xj ; yj )
∈ R2 , j ∈ J and supplied amounts.
Depending on the required amount bi , an individual consumer i ∈ I may
express his satisfaction with the supplier j ∈ J located at (xj , yj ) by membership
grade µij (xj , yj , bi ) , where membership function µij : R2 × R1 → [0, 1] satisfies
condition µij (ui , vi , bi ) = 1, i.e. the maximal satisfaction is equal to 1, provided
that the facility j ∈ J is located at the same place as the consumer i ∈ I.
The individual satisfaction expressed by the function µi : R2p × R1 →
[0, 1] of the consumer i ∈ I with the amount bi , and with suppliers located at
(x1 ; y1 ) , (x2 ; y2 ) , . . . , (xp ; yp ) , is defined by the satisfaction of the location of
the facility with maximal value, i.e.:

µi ((x1 ; y1 ) , . . . , (xp ; yp ) , bi ) = max{µi1 (x1 , y1 , bi ) , . . . , µip (xp , yp , bi )}.

More generally, we can apply a compensative aggregation operator with
values in between max and min. The supplier j ∈ J with the maximal grade of
satisfaction µij (xj , yj , bi ) will cover the required amount bi . If there are more

such suppliers, then they share amount bi equally. The above formula can be
generalized by using an aggregation operator A for all i ∈ I as follows

µi ((x1 , y1 ) , . . . , (xp , yp ) , bi ) = A(µi1 (x1 , y1 , bi ) , . . . , µip (xp , yp , bi )). (9.28)

Moving the location point of a supplier along the path from a location (x, y)
toward the location site of consumer i at ci = (ui ; vi ), it is natural to assume
that satisfaction grade of consumer i is increasing, or, at least non-decreasing,
provided that bi is constant. This assumption results in (Φ,Ψ)-concavity (e.g.
T -quasiconcavity) requirement of membership functions µij on R2 .
On the other hand, with the given location (xj , yj ) of supplier j, satisfaction
grade µij (xj , yj , b) is nonincreasing in the variable b.
We obtain ci = (ui ; vi ) ∈ ∩j∈J Core(µij ). If all µij are T -quasiconcave on
R² , then by Proposition 96, the individual satisfaction µi of consumer i is
upper-starshaped on R2p .
The total satisfaction with (or, utility of) locations (xj ; yj ) ∈ R2 , j ∈ J,
and required amounts bi , i ∈ I, is defined as an aggregation of the individual
satisfaction grades, e.g. a minimal satisfaction, or, more generally, the value of
an aggregation operator G, e.g. a t-norm T .
µ ((x1 ; y1 ) , · · · , (xp ; yp ) , b1 , · · · , bq ) = G(A(µ11 (x1 , y1 , b1 ), . . . , µ1p (xp , yp , b1 )),
. . . , A(µq1 (x1 , y1 , bq ) , . . . , µqp (xp , yp , bq ))). (9.29)

Now, we have the transformed unconstrained problem of optimal location:

maximize (9.29), subject to (xj ; yj ) ∈ R2 , j ∈ J. (9.30)

As an unconstrained optimization problem, (9.30) can be numerically more
tractable than the original problem (9.27). Notice also that problem
(9.30) has only 2p variables, whereas problem (9.27) has 2p + pq variables.
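The evaluation of (9.29) is straightforward to code. A minimal Python sketch follows; the hyperbolic satisfaction function and the choice A = max over suppliers, G = min over consumers are taken from the examples below, and the data used in the usage line are those of Example 107:

```python
import math

def mu_ij(cx, cy, x, y, b):
    # Satisfaction of the consumer at (cx, cy) with demand b
    # for a supplier located at (x, y): 1 / (1 + b * distance).
    return 1.0 / (1.0 + b * math.hypot(x - cx, y - cy))

def total_satisfaction(suppliers, consumers, demands):
    # Formula (9.29) with A = max (each consumer is served by its best
    # supplier) and G = min (the least satisfied consumer decides).
    grades = [max(mu_ij(cx, cy, x, y, b) for (x, y) in suppliers)
              for (cx, cy), b in zip(consumers, demands)]
    return min(grades)

# Data of Example 107 below; reproduces the reported grade 0.130.
print(round(total_satisfaction([(3, 0), (0, 5 / 3)],
                               [(0, 1), (0, 2), (3, 0)],
                               [10, 20, 30]), 3))
```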
We illustrate the above approach in the following two numerical examples.

Example 106

Consider the location problem with q = 3 consumers and p = 1 supplier
given by (u1 , v1 ) = (0, 1) , (u2 , v2 ) = (0, 2) , (u3 , v3 ) = (3, 0) , b1 = 10, b2 = 20,
b3 = 30.
First, let us deal with the classical problem (9.27) with αi = 1, i = 1, 2, 3,
β = 2. Then the problem becomes that of minimizing:

f (x, y, z1, z2, z3) = z1 (x^2 + (1 − y)^2)^{1/2} + z2 (x^2 + (2 − y)^2)^{1/2} + z3 ((x − 3)^2 + y^2)^{1/2},

subject to z1 ≥ 10, z2 ≥ 20, z3 ≥ 30, (x, y) ∈ R2 .


The optimal location of the supplier has been found as (x^C, y^C, z1^C, z2^C, z3^C) =
(3, 0, 10, 20, 30), and the minimal cost is

f (x^C, y^C, z1^C, z2^C, z3^C) = 103.73.

Applying the alternative approach, we assume that the individual satisfaction
of consumer i, located at ci = (ui , vi ), with the location of the supplier at
(x, y) and with demand bi , is given by the following membership (satisfaction)
function:
µi (x, y, bi ) = 1 / (1 + bi ((x − ui)^2 + (y − vi)^2)^{1/2}).
We shall investigate the problem with two different aggregation operators,
particularly, t-norms.
If we consider the minimum t-norm, i.e. G(u, v) = TM (u, v) = min{u, v},
then for
µ1 (x, y, 10) = 1 / (1 + 10(x^2 + (y − 1)^2)^{1/2}),
µ2 (x, y, 20) = 1 / (1 + 20(x^2 + (y − 2)^2)^{1/2}),
µ3 (x, y, 30) = 1 / (1 + 30((x − 3)^2 + y^2)^{1/2}),

we solve the maximum satisfaction problem:


maximize

µM (x, y, 10, 20, 30) = min{µ1 (x, y, 10), µ2 (x, y, 20), µ3 (x, y, 30)},

subject to (x, y) ∈ R2 .

The optimal location of the supplier has been found as (x^M, y^M) = (1.8, 0.8)
with the optimal membership function value

µM (1.8, 0.8, 10, 20, 30) = 0.023,

see Fig. 9.4.

Figure 9.4:

Compared with the optimal solution of the classical problem, the membership
function value there is

µM (x^C, y^C, 10, 20, 30) = 0.013.


On the other hand, the cost is f (x^M, y^M, 10, 20, 30) = 104.64.
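Both membership values can be reproduced with a crude grid search. The following sketch uses ad hoc grid bounds and step (not the solver actually used in the text) and recovers the reported optimum:

```python
import math

def mu(x, y, u, v, b):
    # Consumer satisfaction 1 / (1 + b * distance), as defined above.
    return 1.0 / (1.0 + b * math.hypot(x - u, y - v))

def mu_M(x, y):
    # min-aggregated satisfaction of the three consumers of Example 106.
    return min(mu(x, y, 0, 1, 10), mu(x, y, 0, 2, 20), mu(x, y, 3, 0, 30))

# Crude grid search over [0, 3] x [0, 2] with step 0.01.
best = max(((mu_M(0.01 * i, 0.01 * j), 0.01 * i, 0.01 * j)
            for i in range(301) for j in range(201)), key=lambda t: t[0])
print(best)  # approximately (0.023, 1.8, 0.8)
```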
Now, as an aggregation operator we consider the product t-norm, i.e. G(u, v)
= TP (u, v) = u · v. We solve the maximum satisfaction problem:
maximize

µP (x, y, 10, 20, 30) = µ1 (x, y, 10) · µ2 (x, y, 20) · µ3 (x, y, 30),

subject to (x, y) ∈ R2 .
The optimal location of the supplier has been found as (x^P, y^P) = (0, 1)
with the optimal membership function value

µP (0, 1, 10, 20, 30) = 0.0005,

see Fig. 9.5.

Figure 9.5:

Moreover, we get

µP (x^C, y^C, 10, 20, 30) = 0.0004

and
µP (x^M, y^M, 10, 20, 30) = 0.00002.
The cost of this solution is
f (x^P, y^P, 10, 20, 30) = 114.87.

The obtained results are summarized in the following table.



      x     y     f       µM     µP
1.   3.0   0.0   103.7   0.01   0.00041
2.   1.8   0.8   104.6   0.02   0.00002
3.   0.0   1.0   114.9   0.01   0.00050
In the above table we can see the differences between the results of the individual
approaches. In Row 1, the results of solving the classical problem (9.27)
are displayed. In Row 2, the solution of the maximum satisfaction problem with
the aggregation operators being the minimum t-norm and maximum t-conorm is
presented. In Row 3, the results of the same problem with the aggregation
operators being the product t-norm and t-conorm are given. Depending both on
the input information and the decision-making requirements, various locations of the
supplier may be optimal.

Example 107
Consider the same problem as in Example 106 with p = 2, i.e. with two
suppliers to be optimally located.
Again, we begin with the classical problem (9.27) with αi = 1, i = 1, 2, 3,
β = 2. Then the problem to solve is to minimize the cost function:
f (x1 , y1 , x2 , y2 , z11 , z12 , z21 , z22 , z31 , z32 ) (9.31)
= z11 (x1^2 + (1 − y1)^2)^{1/2} + z12 (x2^2 + (1 − y2)^2)^{1/2} + z21 (x1^2 + (2 − y1)^2)^{1/2}
+ z22 (x2^2 + (2 − y2)^2)^{1/2} + z31 ((x1 − 3)^2 + y1^2)^{1/2} + z32 ((x2 − 3)^2 + y2^2)^{1/2},

subject to z11 + z12 ≥ 10, z21 + z22 ≥ 20, z31 + z32 ≥ 30,
zij ≥ 0, (xj , yj ) ∈ R2 , i = 1, 2, 3, j = 1, 2.
The optimal solution, i.e. the locations of the facilities and shipment amounts
have been found as
(x1^C, y1^C, x2^C, y2^C, z11^C, z12^C, z21^C, z22^C, z31^C, z32^C) = (0, 2, 3, 0, 10, 0, 20, 0, 0, 30),
whereas the minimal cost is
f (0, 2, 3, 0, 10, 0, 20, 0, 0, 30) = 10.0.
Applying our approach, the individual satisfaction of consumer i ∈ I located
at (ui , vi ) with location of the facility at (xj , yj ), j ∈ J = {1, 2}, and with
demand bi , is given by the membership (satisfaction) function:
µij (xj , yj , bi ) = 1 / (1 + bi ((xj − ui)^2 + (yj − vi)^2)^{1/2}). (9.32)

We investigate the problem again with two different aggregation operators, par-
ticularly, t-norms and t-conorms.
First, the aggregation operator SM = max is used for aggregating the sup-
pliers, TM = min is used for combining consumers. According to (9.30) we solve
the optimization problem:
maximize
µM ((x1 , y1 ), (x2 , y2 ), 10, 20, 30) = min{ maxj∈J {µ1j (xj , yj , 10)},
maxj∈J {µ2j (xj , yj , 20)}, maxj∈J {µ3j (xj , yj , 30)} },

subject to (x1 , y1 , x2 , y2 ) ∈ R4 .

The optimal locations of the facilities have been computed as

(x1^M, y1^M, x2^M, y2^M) = (3, 0, 0, 5/3)

with the optimal membership value

µM ((3, 0), (0, 5/3), 10, 20, 30) = 0.130.

Notice that membership functions (9.32) are all TM -quasiconcave on R4 and,
by Proposition 98, µM is also TM -quasiconcave on R4 .
Second, the operator SP (u, v) = u + v − u · v is used for aggregating the
suppliers, TP (u, v) = u · v is used for combining consumers. Again, by (9.30) we
solve the maximum satisfaction problem:
maximize

µP ((x1 , y1 ), (x2 , y2 ), b1 , b2 , b3 )
= ∏i=1,2,3 (µi1 (x1 , y1 , bi ) + µi2 (x2 , y2 , bi ) − µi1 (x1 , y1 , bi ) · µi2 (x2 , y2 , bi )),

subject to (x1 , y1 , x2 , y2 ) ∈ R4 .

The optimal locations of the facilities have been found as

(x1^P, y1^P, x2^P, y2^P) = (3, 0, 0, 2)

with the optimal membership value

µP ((3, 0), (0, 2), 10, 20, 30) = 0.119,

being the same as the optimal solution of classical problem (9.31). The results
are summarized in the following table.

      x1    y1    x2    y2     f      µM      µP
1.   3.0   0.0   0.0   2.0    10.0   0.090   0.119
2.   3.0   0.0   0.0   1.67   13.3   0.130   0.022
3.   3.0   0.0   0.0   2.0    10.0   0.090   0.119

In the above table we can see the differences between the results of the
individual problems. Solving the classical problem, we obtain the same results as
in the maximum satisfaction problem with the aggregation operators being the
product t-norm TP and t-conorm SP . Notice again that membership functions
(9.32) are TP -quasiconcave on R4 and, by Proposition 98, µP is TP -quasiconcave
on R4 .
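The grades in the table can be checked by direct evaluation of the two aggregated membership functions at the reported locations (evaluation only; the optimization itself is not reproduced here):

```python
import math

consumers = [((0, 1), 10), ((0, 2), 20), ((3, 0), 30)]

def mu(xj, yj, ui, vi, b):
    # Membership (9.32): satisfaction of one consumer for one supplier.
    return 1.0 / (1.0 + b * math.hypot(xj - ui, yj - vi))

def mu_M(s1, s2):
    # S_M = max over the two suppliers, T_M = min over the consumers.
    return min(max(mu(*s1, u, v, b), mu(*s2, u, v, b))
               for (u, v), b in consumers)

def mu_P(s1, s2):
    # S_P(a, b) = a + b - a*b over suppliers, T_P = product over consumers.
    out = 1.0
    for (u, v), b in consumers:
        a1, a2 = mu(*s1, u, v, b), mu(*s2, u, v, b)
        out *= a1 + a2 - a1 * a2
    return out

print(round(mu_M((3, 0), (0, 5 / 3)), 3))  # 0.13
print(round(mu_P((3, 0), (0, 2)), 3))      # 0.119
```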

9.9 Application in Engineering Design


Innovative product development requires high quality and resource-efficient en-
gineering design. Traditionally, it is common for engineers to evaluate promising
design alternatives one by one. Such an approach does not account for the imprecise
nature of the design process and leads to expensive design computations.
At the stage where technical solution concepts are being generated, the descrip-
tion of a design is largely vague or imprecise. The need for a methodology to
represent and manipulate imprecision is greatest in the early, preliminary phases
of engineering design, where the designer is most unsure of the final dimensions
and shapes, material and properties and performance of the completed design,
see [2].
Because design imprecision concerns the choice of design variable values
used to describe a product or process, the designer’s preference is used to
quantify the imprecision with which design variables are known. Preferences,
modelled here by quasiconcave membership functions, denote either subjective
or objective information that may be quantified and included in the evaluation
of design alternatives.
Each design variable is characterized by the membership function

µi : Rki → [0, 1], i ∈ I = {1, 2, ..., n},


where the value of the membership function µi (xi ) specifies the design preference
for the design parameter xi (possibly multi-dimensional), and n is the
number of variables. This preference function, which may arise objectively (e.g.
cost of the parameter), or subjectively (e.g. from experience), is used to quantify
the imprecision associated with the design variable. Thus the designer’s expe-
rience and judgement are incorporated into the design evaluations. In practice,
the design variable preferences are divided into two groups: individual design
preferences (D = {1, 2, ..., m}) and individual customers’ preferences, i.e. functional
requirements (P = {m + 1, ..., n}).
In order to evaluate a design, the various individual preferences must be
combined or aggregated to give a single, overall measure. This aggregation, in
practice, occurs in two stages, see [2].
First, the individual design preferences µi , i ∈ D, are aggregated into the
combined design preference µD by an aggregation operator AD , and the individual
customer preferences µi , i ∈ P , are aggregated by an aggregation operator
AP into the combined customer preference µP .
Second, once all the preferences have been aggregated according to their nature into
the two resulting preferences µD and µP , we combine them by an aggregation
operator AO to obtain the overall preference µO from the following formula:

µO = AO (µD , µP ) = AO (AD (µ1 , ..., µm ), AP (µm+1 , ..., µn )), (9.33)

where AD : R^{k1} × R^{k2} × · · · × R^{km} → [0, 1], AP : R^{k_{m+1}} × · · · × R^{kn} →
[0, 1], and AO : [0, 1] × [0, 1] → [0, 1] are aggregation operators. In the definition it is
required that each aggregation operator A is monotone and satisfies the bound-
ary conditions A(0, 0, ..., 0) = 0, and A(1, 1, ..., 1) = 1. In engineering design
it is often required that aggregating operators are continuous and idempotent.
The last condition restricts the class of feasible aggregation operators to the
operators between the t-norm TM and t-conorm SM .
For some preferences of the system of engineering design, for example, where
the failure of one component results in the failure of the whole system, the non-
compensating aggregation operators such as minimum TM should be applied.
On the other hand, a better performance of some component can compensate
some worse performance of another one. In other words, a lower membership
value of some design variable can be compensated by a higher value of some
other one. Such preferences can be aggregated by compensative aggregation
operators, e.g. averaging operators, see [79]. Notice that, by the definition, the
t-norm TM is also considered a compensative operator, although in a different
sense.
The problem of engineering design is to find an optimal configuration of the
design parameters, i.e.:
Maximize

AO (AD (µ1 (x1 ), ..., µm (xm )), AP (µm+1 (xm+1 ), ..., µn (xn ))). (9.34)

We illustrate this approach with a simple example.

Example 108 Car design

Consider 4 design variables, 2 of them are individual design preferences:


µ1 - maximal speed,
µ2 - time to reach 100km/hour,
defined as follows, see Fig. 9.6 and Fig. 9.7:

µ1 (x) = { 1 / (1 + 0.25(160 − x))   for 0 ≤ x ≤ 160,
         { 1                         for x > 160.

µ2 (y) = { 1                         for 0 ≤ y ≤ 7,
         { 1 / (1 + 0.3(y − 7))      for y > 7.

We consider a compensative aggregation operator, the product t-norm TP :

µD (x, y) = AD (µ1 (x), µ2 (y)) = TP (µ1 (x), µ2 (y)),

particularly,

µD (x, y) = { 1 / ((1 + 0.25(160 − x))(1 + 0.3(y − 7)))   for 0 ≤ x ≤ 160, y ≥ 7,
            { 1                                           for x ≥ 160, 0 ≤ y ≤ 7,

see Fig. 9.8.


Notice that µD is a starshaped function which is not quasiconcave on R2+ .
Further, we consider 2 customer preferences:

Figure 9.6:

Figure 9.7:

µ3 - price of the car in 1000 $,
µ4 - fuel consumption in liters/100 km,

defined as follows:

µ3 (u) = { 1                       for 0 ≤ u ≤ 8,
         { 1 / (1 + 0.2(u − 8))     for u > 8.

µ4 (v) = { 1                       for 0 ≤ v ≤ 4,
         { 1 / (1 + 0.1(v − 4))     for v > 4.

Moreover, technological and economic functional dependences are given by
the following (regression) model:

x ≤ 400/y + 120,
u = 0.03x − 0.3y + 175/y,
v = 0.05x − 10/y − 2.5.

Figure 9.8:

For the aggregation operator AP we apply a compensative operator, the geometric
average G, i.e.

µP (u, v) = AP (µ3 (u), µ4 (v)) = G(µ3 (u), µ4 (v)).

Finally, µD and µP , have been combined by an aggregating operator

AO = TM ,

to obtain the overall aggregation:

µO (x, y, u, v) = TM (µD (x, y), µP (u, v)).

The optimal configuration of the parameters has been found by the Conjugate
Gradient optimization method as follows:

x∗ = 159.8, y ∗ = 10.1, u∗ = 19.2, v ∗ = 6.5, A∗O = 0.497.
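The aggregation chain of this example is easy to evaluate directly. In the following sketch (evaluation only), small deviations from the reported optimal value 0.497 may come from rounding in the reported parameters:

```python
import math

def mu1(x):   # maximal speed
    return 1.0 if x > 160 else 1.0 / (1.0 + 0.25 * (160 - x))

def mu2(y):   # time to reach 100 km/h
    return 1.0 if y <= 7 else 1.0 / (1.0 + 0.3 * (y - 7))

def mu3(u):   # price in 1000 $
    return 1.0 if u <= 8 else 1.0 / (1.0 + 0.2 * (u - 8))

def mu4(v):   # fuel consumption in liters/100 km
    return 1.0 if v <= 4 else 1.0 / (1.0 + 0.1 * (v - 4))

def mu_O(x, y, u, v):
    mu_D = mu1(x) * mu2(y)              # A_D = T_P (product t-norm)
    mu_P = math.sqrt(mu3(u) * mu4(v))   # A_P = geometric average
    return min(mu_D, mu_P)              # A_O = T_M

print(round(mu_O(159.8, 10.1, 19.2, 6.5), 3))
```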


Chapter 10

Fuzzy Mathematical
Programming

10.1 Introduction
Mathematical programming problems (MP) form a subclass of decision-making
problems where preferences between alternatives are described by means of
objective function(s) defined on the set of alternatives in such a way that greater
values of the function(s) correspond to more preferable alternatives (if "higher
value" means "better"). The values of the objective function describe effects from
reflect profits obtained when using various means of production. The set of fea-
sible alternatives in MP problems is described implicitly by means of constraints
- equations or inequalities, or both - representing relevant relationships between
alternatives. In any case the results of the analysis using given formulation of
the MP problem depend largely upon how adequately various factors of the real
system are reflected in the description of the objective function(s) and of the
constraints.
Descriptions of the objective function and of the constraints in a MP problem
usually include some parameters. For example, in problems of resources alloca-
tion such parameters may represent economic parameters like costs of various
types of production, labor cost requirements, shipment costs, etc. The nature
of these parameters depends, of course, on the level of detail accepted for the
model representation, and their values are considered as data that should be
supplied exogenously for the analysis.
Clearly, the values of such parameters depend on multiple factors not in-
cluded into the formulation of the problem. Trying to make the model more
representative, we often include the corresponding complex relations into it,
making the model more cumbersome and analytically unsolvable.
Moreover, it can happen that such attempts to increase "the precision" of the
model will be of no practical value due to the impossibility of measuring the


parameters accurately. On the other hand, the model with some fixed values of
its parameters may be too crude, since these values are often chosen in a quite
arbitrary way.
An intermediate approach is based on introducing into the model a more
adequate representation of the experts’ understanding of the nature of the
parameters, in the form of fuzzy sets of their possible values. The resultant
model, although not taking into account many details of the real system in
question, could be a more adequate representation of reality than one with
more or less arbitrarily fixed values of the parameters. In this way we obtain
a new type of MP problems containing fuzzy parameters. Treating such prob-
lems requires the application of fuzzy-set-theoretic tools in a logically consistent
manner. Such treatment forms an essence of fuzzy mathematical programming
(FMP) investigated in this chapter.
FMP and related problems have been extensively analyzed and many pa-
pers have been published displaying a variety of formulations and approaches.
Most approaches to FMP problems are based on the straightforward use of the
intersection of fuzzy sets representing goals and constraints and on the sub-
sequent maximization of the resultant membership function. This approach
was mentioned by Bellman and Zadeh already in their paper [9] published
in the early seventies. Later on, many papers have been devoted to the
problem of mathematical programming with fuzzy parameters, known under
different names, mostly as fuzzy mathematical programming, but sometimes as
possibilistic programming, flexible programming, vague programming, inexact
programming etc. For an extensive bibliography see the overview paper [33].
Here we present a general approach based on a systematic extension of the
traditional formulation of the MP problem. This approach is based on the
numerous former works of the author of this study, see [60] - [81] and also on
the works of many other authors, e.g. [38, 23, 19, 2, 43, 41, 69, 70], and others.
FMP is one of more possible approaches how to treat uncertainty in MP
problems. Much space has been devoted to similarities and dissimilarities of
FMP and stochastic programming (SP), see e.g. [89] and [90]. In Chapters
10 and 11 we demonstrate that FMP (in particular, fuzzy linear programming
- FLP) essentially differs from SP; FMP has its own structure and tools for
investigating a broad class of optimization problems.
FMP is also different from parametric programming (PP). PP problems are
in essence deterministic optimization problems with a special variable called a
parameter. The main interest in PP is focused on finding relationships between
the values of parameters and optimal solutions of MP problem.
In FMP some methods and approaches motivated by SP and PP are utilized,
see e.g. [85, 13]. In this book, however, algorithms and solution procedures for
MP problems are not studied, they can be found elsewhere, see e.g. the overview
paper [83].

10.2 Modelling Reality by FMP


An alternative approach to classical MP, based on introducing into the
model a more adequate representation of the parameters in the form of fuzzy
sets, is called FMP. By applying FMP to real problems, one
obtains a mathematical model which, although not taking into account many
details of the real system in question, could be a more adequate representation of
reality than one with more or less arbitrarily chosen values of its parameters.
In this way we obtain a new type of MP problems containing fuzzy parameters.
As it was mentioned in [83], the use of FMP models does not only avoid
unrealistic modeling, but also offers a chance for reducing information costs.
Then, in the first step of the interactive solution process, the fuzzy system
is modeled by using only the information which the decision maker can provide
without any expensive acquisition, so as to obtain an initial compromise solution.
The decision maker can then perceive which further information would be required
and is able to justify additional information costs.
An appropriate treatment of such problems requires proper application of
fuzzy-set-theoretic tools in a logically consistent manner. An important role
in this treatment is played by generalized concave membership functions. Such
approach forms the essence of FMP investigated in this chapter. The follow-
ing explanation is based on the substance investigated formerly in [79] and,
particularly, in Chapter 8.
In this chapter we begin with the formulation of a FMP problem associated
with the classical MP problem. After that we define a feasible solution of FMP
problem and optimal solution of FMP problem as special fuzzy sets. From
practical point of view, α-cuts of these fuzzy sets are important, particularly
the α-cuts with the maximal α. The main result of this part says that the class
of all MP problems with (crisp) parameters can be naturally embedded into the
class of FMP problems with fuzzy parameters.

10.3 MP Problem with Parameters


The classical constrained optimization problem is given as follows

maximize f (x)
(10.1)
subject to x ∈ X,

where we assume that:


(i) The set X is a nonempty subset of Rn , n is a positive integer, and is
called the set of feasible solutions (set of alternatives).
(ii) The function f is a real function, f : Rn → R, called the objective
function (criterion function).
An optimal solution of problem (10.1) is a vector x∗ ∈ X which maximizes
the function f on X. The set of all optimal solutions is denoted by X ∗ . We have

X ∗ = {x∗ ∈ X|f (x∗ ) = sup{f (x)|x ∈ X}}. (10.2)



Observe that
X ∗ = ∩y∈X {x ∈ X | f (x) ≥ f (y)}. (10.3)
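For a finite set of alternatives, (10.2) and its equivalent form (10.3) reduce to a plain argmax computation; the data below are purely illustrative:

```python
X = [(0, 0), (1, 2), (3, 1)]        # a toy finite set of alternatives
f = lambda x: x[0] + 2 * x[1]       # a toy objective function

# (10.2): alternatives attaining the supremum of f over X.
best = max(f(x) for x in X)
X_star = [x for x in X if f(x) == best]

# (10.3): the same set as an intersection of "at least as good as y" sets.
X_star_2 = [x for x in X if all(f(x) >= f(y) for y in X)]

print(X_star)  # [(1, 2), (3, 1)]
assert X_star == X_star_2
```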

Usually, the set X has a structure specified by equalities and inequalities


including some parameters. We distinguish the individual constraints according
to the sense of inequality, i.e. we set M1 = {1, ..., m1 }, M2 = {m1 + 1, ..., m2 },
M3 = {m2 + 1, ..., m} with M = M1 ∪ M2 ∪ M3 . Here, m1 , m2 and m are
nonnegative integers with 0 ≤ m1 ≤ m2 ≤ m. If m1 = 0, then M1 = ∅.
Similarly, if m1 = m2 or m2 = m, then M2 = ∅ or M3 = ∅, respectively.
Moreover, denote N = {1, 2, ..., n}.
Consider the following MP problem:

maximize f (x; c)
subject to
gi (x; ai ) = bi , i ∈ M1 , (10.4)
gi (x; ai ) ≤ bi , i ∈ M2 ,
gi (x; ai ) ≥ bi , i ∈ M3 .
In (10.4) f , gi are real functions, C and Pi are sets of parameters, f : Rn ×C →
R, gi : Rn × Pi → R, i ∈ M = M1 ∪ M2 ∪ M3 , c ∈ C, ai ∈ Pi , bi ∈ R,
x ∈ Rn .
The maximization in (10.4) is understood in the usual sense of finding a
maximizer of the objective function on the set of feasible solutions (10.4).
The sets of parameters C and Pi are some subsets of finite dimensional
vector spaces, depending on the specification of the problem. Particularly, C =
Pi = Rn for all i ∈ M, but here we consider also a more general case. The
right-hand sides bi ∈ R, for i ∈ M in (10.4) are also considered as parameters.
By parameters c ∈ C, ai ∈ Pi , bi ∈ R, taken from the parameter sets, a
flexible structure of MP problem (10.4) is modelled. The subject of parametric
programming is to investigate relations and dependences between parameters
and optimal solutions of MP problem (10.4). This problem is, however, not
studied here.
A linear programming problem (LP) is a particular case of the above formu-
lated MP problem (10.4), where c ∈ C ⊂ Rn , ai ∈ Pi ⊂ Rn and bi ∈ R for all
i ∈ M, that is
f (x; c) = cT x = c1 x1 + · · · + cn xn , (10.5)
gi (x; ai ) = aTi x = ai1 x1 + · · · + ain xn , i ∈ M. (10.6)
As a special case of this problem, we have the standard linear programming
problem:

maximize c1 x1 + · · · + cn xn
subject to
(10.7)
ai1 x1 + · · · + ain xn = bi , i ∈ M,
xj ≥ 0, j ∈ N .
Problem (10.7) in a more general setting will be investigated in Chapter 11.

10.4 Formulation of FMP Problem


Before formulating a fuzzy mathematical programming problem as an optimiza-
tion problem associated with the MP problem (10.4), we make a few assumptions
and remarks. We use notation introduced in Chapter 8.
Let f , gi be functions, f : Rn × C → R, gi : Rn × Pi → R, where C and Pi
are sets of parameters. Let µc̃ : C → [0, 1], µãi : Pi → [0, 1] and µb̃i : R → [0, 1],
i ∈ M, be membership functions of fuzzy parameters c̃, ãi and b̃i , respectively.
Moreover, let R̃i , i ∈ M, be fuzzy relations with the corresponding membership
functions µR̃i : F(R) × F(R) → [0, 1]. They will be used for comparing the left
and right sides of the constraints in (10.7).
The maximization of objective function (10.5) needs, however, a special
treatment. Generally, fuzzy values of the objective function are not linearly
ordered and to maximize the objective function we have to define a suitable
ordering on F(R) which allows for “maximization” of the objective. In our ap-
proach it will be done by an exogenously given fuzzy goal b̃0 ∈ F(R) and another
fuzzy relation R̃0 on R. There exist some other approaches, see e.g. [18], [23],
[60].
The fuzzy mathematical programming problem (FMP problem) associated
with MP problem (10.4) is denoted by:

m̃aximize f̃ (x; c̃)
subject to (10.8)
g̃i (x; ãi ) R̃i b̃i , i ∈ M = {1, 2, ..., m}.

Here, in comparison with the classical MP problem (10.4), the fuzzy parameters c̃, ãi
and b̃i are denoted with a tilde. Formulation (10.8) is not an
optimization problem in the classical sense, as it is not yet defined how the
objective function f̃(x; c̃) is "maximized" and how the constraints g̃i (x; ãi ) R̃i
b̃i are satisfied. In fact, we need a new concept of a "feasible solution" and also
that of an "optimal solution", counterparts to the concepts used in classical
MP.
Let us clarify the elements of (10.8).
Remember that for given x ∈ Rn , ãi ∈ F(Pi ) by extension principle (8.17),
g̃i (x; ãi ) is a fuzzy extension of gi (x; ·) with the membership function defined by

µg̃i (x;ãi ) (t) = { sup{µãi (a) | a ∈ Pi , gi (x; a) = t}   if gi^{−1}(x; t) ≠ ∅,
                 { 0                                       otherwise,        (10.9)

for each t ∈ R, where gi^{−1}(x; t) = {a ∈ Pi | gi (x; a) = t}.


The fuzzy relations R̃i for comparing the constraints of (10.8) will be con-
sidered as extensions of valued relations on R, particularly, the usual inequality
relations “≤” and “≥”. In a special case, namely, if T is a t-norm and R̃i is
a T -fuzzy extension of relation Ri , then by (8.27) we obtain the membership
function of the i-th constraint of (10.8) as follows


µR̃i (g̃i (x; ãi ), b̃i ) = sup{ T(µRi (u, v), T(µg̃i (x;ãi ) (u), µb̃i (v))) | u, v ∈ R }
                       = sup{ T(µg̃i (x;ãi ) (u), µb̃i (v)) | u Ri v }.        (10.10)

Likewise, for a given x ∈ Rn , c̃ ∈ F(C), f˜(x; c̃) is a fuzzy extension of f (x; ·)


given by the membership function defined as follows:

µf̃(x;c̃) (t) = { sup{µc̃ (c) | c ∈ C, f (x; c) = t}   if f^{−1}(x; t) ≠ ∅,
              { 0                                     otherwise,          (10.11)

for each t ∈ R, where f^{−1}(x; t) = {c ∈ C | f (x; c) = t}.


Therefore, g̃i (x; ãi ) ∈ F(R) and this fuzzy quantity is "compared" with the
fuzzy quantity b̃i ∈ F(R) by a fuzzy relation R̃i , i ∈ M. For x, y ∈ Rn we ob-
tain f˜(x; c̃) ∈ F(R), f˜(y; c̃) ∈ F(R). However, as has been mentioned earlier,
fuzzy values are not linearly ordered. To deal with this problem, we assume
the existence of a given additional goal b̃0 ∈ F(R) - a fuzzy set of the real line,
which the fuzzy values f˜(x; c̃) of the objective function are compared to, by
means of a fuzzy relation R̃0 , which is also assumed to be given exogenously.
This approach is frequently used in the literature, see the overview paper [83].
The fuzzy objective is then treated as another constraint f̃(x; c̃) R̃0 b̃0 , and
to m̃aximize the objective function means finding the maximal membership
degree µR̃0 (f̃(x; c̃), b̃0 ).
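To make (10.10) concrete, the following sketch discretizes the sup-min degree of a single constraint ã·x ≤ b̃ for T = min, with triangular fuzzy parameters; all concrete numbers are illustrative:

```python
def tri(lo, mid, hi):
    # Triangular membership function with core {mid} and support (lo, hi).
    def f(t):
        if lo < t <= mid:
            return (t - lo) / (mid - lo)
        if mid < t < hi:
            return (hi - t) / (hi - mid)
        return 0.0
    return f

def degree_leq(mu_g, mu_b, grid):
    # sup { min(mu_g(u), mu_b(v)) : u <= v }, evaluated on a finite grid,
    # i.e. (10.10) for T = min and Ri being the inequality "<=".
    best = 0.0
    for u in grid:
        mg = mu_g(u)
        if mg <= best:
            continue
        for v in grid:
            if u <= v:
                best = max(best, min(mg, mu_b(v)))
    return best

# g(x; a) = a * x with x = 4 and triangular a~ = (1, 2, 3) gives
# g~(x; a~) triangular (4, 8, 12); compare with b~ = (3, 5, 7).
grid = [0.05 * i for i in range(301)]       # covers [0, 15]
print(round(degree_leq(tri(4, 8, 12), tri(3, 5, 7), grid), 2))  # 0.5
```

The degree 0.5 is attained where the rising branch of the left side meets the falling branch of b̃, which matches the analytical sup-min value for this pair.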
The concepts of the feasible solution and optimal solution of FLP problem
(10.8) need a more detailed explanation. Let us begin with the concept of the
feasible solution.

10.5 Feasible Solutions of the FMP Problem


The fuzzy relation R̃i is considered to be either an extension of the usual equal-
ity relation "=" or the inequality relations "≤" and "≥", see Examples 56, 57 in
Chapter 8. Many authors, however, pointed out some disadvantages of T -fuzzy
extensions of equality and inequality relations. Therefore, numerous special
fuzzy relations for comparing left and right sides of constraints (10.8) have been
proposed, see e.g. [2, 19, 23, 38, 43, 41, 69, 70, 54, 55, 44, 45]. Here, we shall use
some extensions of the usual equality and inequality relations in the constraints
of FMP (10.8), being not necessarily T -fuzzy extensions. In the following defin-
ition, for the sake of generality of the presentation, we consider fuzzy relations.

Definition 109 Let gi , i ∈ M = {1, 2, ..., m}, be functions, gi : Rn × Pi → R,


where Pi are given sets of parameters. Let µãi : Pi → [0, 1] and µb̃i : R → [0, 1]
be membership functions of fuzzy parameters ãi and b̃i , respectively. Moreover,
let R̃i , i ∈ M, be fuzzy relations with the corresponding membership functions
µR̃i : F(R) × F(R) → [0, 1], let A be an aggregation operator.
A fuzzy subset X̃ of Rn given by the membership function µX̃ , defined for all
x ∈ Rn as

µX̃ (x) = A( µR̃1 (g̃1 (x; ã1 ), b̃1 ), ..., µR̃m (g̃m (x; ãm ), b̃m ) )        (10.12)

is called the feasible solution of the FMP problem (10.8).


For α ∈ (0, 1], a vector x ∈ [X̃]α is called the α-feasible solution of the FMP
problem (10.8).
A vector x̄ ∈ Rn such that µX̃ (x̄) = Hgt(X̃) is called the max-feasible solution.

Notice that the feasible solution of a FMP problem is a fuzzy set. For x ∈ Rn
the interpretation of µX̃ (x) depends on the interpretation of uncertain parame-
ters of the FMP problem. For instance, within the framework of possibility
theory, the membership functions of the parameters are explained as possibility
degrees and µX̃ (x) denotes the possibility that x ∈ Rn belongs to the set of fea-
sible solutions of the corresponding FMP problem. Some other interpretations
were also applied, see e.g. [14], [30] or [83].
On the other hand, α-feasible solution is a vector belonging to an α-cut of
the feasible solution X̃ and the same holds for the max-feasible solution, which
is a special α-feasible solution with α = Hgt(X̃). If a decision maker specifies
the grade of feasibility α ∈ [0, 1] (the grade of possibility, satisfaction etc.), then
a vector x ∈ Rn with µX̃ (x) ≥ α is an α-feasible solution of the corresponding
FMP problem.
Considering the i-th constraint of problem (10.8), for given x, ãi and b̃i , the
value µR̃i (g̃i (x; ãi ), b̃i ) from interval [0, 1] can be interpreted as the degree of
satisfaction of this constraint.
For i ∈ M, we use the following notation: by X̃i we denote a fuzzy set given
by the membership function µX̃i , which is defined for all x ∈ Rn as

µX̃i (x) = µR̃i (g̃i (x; ãi ), b̃i ). (10.13)

The fuzzy set X̃i can be interpreted as an i-th fuzzy constraint. All fuzzy
constraints are aggregated into the feasible solution (10.12) by the aggregation
operator A; usually A is a t-norm, e.g. A = min. The aggregation operators have
been thoroughly investigated in [79].
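Definition 109 can be sketched for the simple special case in which the left-hand sides are crisp and only the right-hand sides b̃i are (triangular) fuzzy; then µR̃i(gi(x), b̃i) = sup{µb̃i(v) | gi(x) ≤ v}, and the degrees are aggregated by A = min. All concrete numbers below are illustrative:

```python
def deg_leq(t, lo, mid, hi):
    # Degree that the crisp value t satisfies t <= b~ for triangular
    # b~ = (lo, mid, hi): sup { mu_b(v) : v >= t }.
    if t <= mid:
        return 1.0
    if t < hi:
        return (hi - t) / (hi - mid)
    return 0.0

def mu_X(x1, x2):
    # Feasible solution (10.12) with A = min and two fuzzy constraints:
    #   2*x1 + x2 <= b1~ = (8, 10, 12),  x1 + 3*x2 <= b2~ = (9, 12, 15).
    return min(deg_leq(2 * x1 + x2, 8, 10, 12),
               deg_leq(x1 + 3 * x2, 9, 12, 15))

print(mu_X(4.0, 2.0))   # 1.0 -> a max-feasible point
print(mu_X(4.5, 2.0))   # 0.5 -> an alpha-feasible point for alpha <= 0.5
print(mu_X(5.0, 3.0))   # 0.0 -> infeasible even in the fuzzy sense
```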

10.6 Properties of Feasible Solution


In this section we suppose that T is a t-norm, A is an aggregation operator,
Pi are given sets of parameters, R̃i are fuzzy relations, i ∈ M. In Theorem
110, stated below, the fuzzy relations R̃i are supposed to be fuzzy extensions
of the usual binary relations on R, but in the other theorems and propositions
which will follow, we suppose a stronger condition, namely that R̃i are T -fuzzy
extensions. For the sake of simplicity, we denote the relations only by R̃i and
not by R̃i^T as it was originally introduced in Definition 27. The other five fuzzy
extensions of the usual binary relations on R defined earlier in Definition 27 shall
not be studied in this chapter. However, we shall use them again in Chapter 11.
Investigating the concept of the feasible solution (10.12) of the FMP problem
(10.8), we first show that in case of crisp parameters ai and bi , the feasible
solution is also crisp.

Theorem 110 Let gi be real functions, gi : Rn × Pi → R, let ai ∈ Pi and


bi ∈ R be crisp parameters. For i ∈ M1 , let R̃i be a fuzzy extension of the
equality relation "="; for i ∈ M2 , let R̃i be a fuzzy extension of the inequality
relation "≤"; and for i ∈ M3 , let R̃i be a fuzzy extension of the inequality
relation "≥". Let A be a t-norm.
Then the feasible solution X̃ is a crisp set and coincides with the set of feasible
solutions X of MP problem (10.4).

Proof. Let x ∈ Rn be an arbitrary vector; we show that

µX̃ (x) = χX (x).
Observe first that by extension principle (8.17) we obtain

µg̃i (x;ai ) = χgi (x;ai )

for all i ∈ M.
Next, for all i ∈ M we obtain by (8.25)

µR̃i (gi (x; ai ), bi ) = χRi (gi (x; ai ), bi ). (10.14)

Notice that X = {x ∈ Rn |gi (x; ai ) Ri bi , i ∈ M}, i.e., x ∈ X if and only if the
right-hand side of (10.14) equals 1 for all i ∈ M. Applying the t-norm A to the
values (10.14), we obtain

µX̃ (x) = A (µR̃1 (g1 (x; a1 ), b1 ) , ..., µR̃m (gm (x; am ), bm )) = χX (x),

which is the desired result.


Recall that for two fuzzy subsets ã′ , ã″ ∈ F(Rn ), ã′ ⊂ ã″ if and only
if µã′ (x) ≤ µã″ (x) for all x ∈ Rn ; see Proposition 6. The following theorem
establishes a monotonicity property of the feasible solution with respect to the
parameters of the FMP problem. In Theorem 110, we assumed that R̃i were fuzzy
extensions of the usual binary relations ”=”, ”≤” and ”≥”. Here, we allow R̃i
to be fuzzy extensions of more general valued relations.

Theorem 111 Let gi be real functions, gi : Rn × Pi → R, where Pi are sets
of parameters. Let ã′i , b̃′i and ã″i , b̃″i be two collections of fuzzy parameters of
the FMP problem. Let R̃i be T -fuzzy extensions of valued relations Ri on R,
i ∈ M. Let T be a t-norm and A an aggregation operator.
If X̃ ′ is a feasible solution of the FMP problem with the collection of parameters
ã′i , b̃′i , and X̃ ″ is a feasible solution of the FMP problem with the collection of
parameters ã″i , b̃″i such that for all i ∈ M

ã′i ⊂ ã″i and b̃′i ⊂ b̃″i , (10.15)

then
X̃ ′ ⊂ X̃ ″ . (10.16)
Proof. In order to prove X̃ ′ ⊂ X̃ ″ , we first show that

g̃i (x; ã′i ) ⊂ g̃i (x; ã″i )

for all i ∈ M. Indeed, by (8.17), for each u ∈ R and i ∈ M,

µg̃i (x;ã′i ) (u) = max{0, sup{µã′i (a) | a ∈ Pi , gi (x; a) = u}}
≤ max{0, sup{µã″i (a) | a ∈ Pi , gi (x; a) = u}} = µg̃i (x;ã″i ) (u).

Now, since b̃′i ⊂ b̃″i , using monotonicity of the T -fuzzy extension R̃i of Ri , it
follows that µR̃i (g̃i (x; ã′i ), b̃′i ) ≤ µR̃i (g̃i (x; ã″i ), b̃″i ). Then, applying monotonicity
of A in (10.12), we obtain X̃ ′ ⊂ X̃ ″ .
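The monotonicity of Theorem 111 can also be observed numerically. The sketch below is illustrative only: it approximates, on a grid, the min-t-norm degree of a single fuzzy constraint ã·x ≤ b with a crisp right-hand side b, for a narrower and a wider triangular parameter satisfying ã′ ⊂ ã″. All data and function names are hypothetical.

```python
def tri(l, m, r):
    """Triangular membership function with support [l, r] and peak at m."""
    def mu(t):
        return max(0.0, min((t - l) / (m - l), (r - t) / (r - m), 1.0))
    return mu

def pos_leq(mu_g, b, lo=0.0, hi=10.0, steps=10000):
    """sup{ mu_g(u) | u <= b }: the min-t-norm degree of 'g~ <= b' for crisp b,
    approximated on a uniform grid."""
    best = 0.0
    for k in range(steps + 1):
        u = lo + (hi - lo) * k / steps
        if u <= b:
            best = max(best, mu_g(u))
    return best

narrow = tri(2.0, 3.0, 4.0)   # a~'  (contained in a~'')
wide   = tri(1.0, 3.0, 5.0)   # a~''
x, b = 2.0, 5.0               # fuzzy constraint: a~ * x <= b
d1 = pos_leq(lambda t: narrow(t / x), b)
d2 = pos_leq(lambda t: wide(t / x), b)
print(round(d1, 3), round(d2, 3))   # 0.5 0.75
```

As Theorem 111 predicts, enlarging the fuzzy parameter (ã′ ⊂ ã″) can only increase the degree of feasibility: d1 ≤ d2.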
Corollary 112 Let ãi , b̃i be a collection of fuzzy parameters, and let ai ∈ Pi
and bi ∈ R be a collection of crisp parameters such that for all i ∈ M
µãi (ai ) = µb̃i (bi ) = 1. (10.17)
If the set X of all feasible solutions of MP problem (10.4) with the parameters
ai and bi is nonempty, and X̃ is a feasible solution of FMP problem (10.8) with
fuzzy parameters ãi and b̃i , then for all x ∈ X
µX̃ (x) = 1 . (10.18)
Proof. Observe that ai ⊂ ãi , bi ⊂ b̃i for all i ∈ M. Then by Theorem 111
we obtain X ⊂ X̃, which is nothing else than (10.18).
Corollary 112 says that if we ”fuzzify” the parameters of the original crisp
MP problem, then the feasible solution of the new FMP problem ”fuzzifies”
the original set of all feasible solutions in such a way that the membership grade
of any feasible solution of the MP problem is equal to 1.
So far, the parameters ãi of the constraint functions gi have been specified as
fuzzy subsets of arbitrary sets Pi , i ∈ M. From now on, the space of parameters
is supposed to be the k-dimensional Euclidean vector space Rk , i.e., Pi = Rk
for all i ∈ M, where k is a positive integer. Particularly, µãi : Rk → [0, 1] and
µb̃i : R → [0, 1] are the membership functions of fuzzy parameters ãi and b̃i . We
shall also require compactness of fuzzy parameters ãi and b̃i and closedness of
the valued relations Ri . For the rest of this section we suppose that both A and
T equal the minimum t-norm, i.e. A = T = min. As a result, we obtain formulae
for the α-feasible solutions of the FMP problem based on α-cuts of the parameters.
Recall that fuzzy parameters ãi and b̃i are compact if [ãi ]α and [b̃i ]α are
compact for all α ∈ (0, 1].

Theorem 113 Let gi be continuous functions, gi : Rn × Rk → R. Let ãi and


b̃i be compact fuzzy subsets of Rk and R, respectively. Let R̃i be T -fuzzy exten-
sions of closed valued relations Ri on R, i ∈ M.
Then for all α ∈ (0, 1]

[X̃]α = [X̃1 ]α ∩ · · · ∩ [X̃m ]α , (10.19)

and, moreover, for all i ∈ M

[X̃i ]α = {x ∈ Rn |µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) ≥ α}. (10.20)

Proof. 1. Let α ∈ (0, 1], i ∈ M, x ∈ [X̃i ]α . Then by (10.13) we have

µR̃i (g̃i (x; ãi ), b̃i ) ≥ α. (10.21)

We prove first that (10.21) is valid if and only if

µR̃i ([g̃i (x; ãi )]α , [b̃i ]α ) ≥ α. (10.22)

By definition we obtain

µR̃i (g̃i (x; ãi ), b̃i ) = sup{min{µRi (u, v) , min{µg̃i (x;ãi ) (u) , µb̃i (v)}} | u, v ∈ R}.

As g̃i (x; ãi ) and b̃i are compact fuzzy sets and Ri is a closed valued relation,
there exist u∗ , v ∗ ∈ R such that

µR̃i (g̃i (x; ãi ), b̃i ) = min{µRi (u∗ , v ∗ ) , min{µg̃i (x;ãi ) (u∗ ) , µb̃i (v ∗ )}} ≥ α.

Hence,
µRi (u∗ , v ∗ ) ≥ α, µg̃i (x;ãi ) (u∗ ) ≥ α, µb̃i (v ∗ ) ≥ α. (10.23)

On the other hand, by definition

µR̃i ([g̃i (x; ãi )]α , [b̃i ]α ) = sup{min{µRi (u, v) , min{χ[g̃i (x;ãi )]α (u) , χ[b̃i ]α (v)}} | u, v ∈ R}
= sup{µRi (u, v) | u ∈ [g̃i (x; ãi )]α , v ∈ [b̃i ]α }.

Therefore, by (10.23) we obtain

µR̃i ([g̃i (x; ãi )]α , [b̃i ]α ) ≥ α.

The opposite implication can be proved analogously. Hence, (10.21) is equivalent


to (10.22).
Now, by Proposition 48 and Theorem 52, it follows that

[g̃i (x; ãi )]α = gi (x; [ãi ]α ). (10.24)


10.6. PROPERTIES OF FEASIBLE SOLUTION 139

Substituting (10.24) into (10.22), we obtain

µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) ≥ α,

which is the desired result; this proves (10.20).


2. To prove (10.19), observe first that with A = min in (10.12), we have

µX̃ (x) = min{µR̃1 (g̃1 (x; ã1 ), b̃1 ), ..., µR̃m (g̃m (x; ãm ), b̃m )}. (10.25)

Let x ∈ [X̃]α , that is µX̃ (x) ≥ α. By (10.25) this is equivalent to

µR̃i (g̃i (x; ãi ), b̃i ) ≥ α

for all i ∈ M. Using the arguments of the first part of the proof, the last
inequality is equivalent to x ∈ [X̃i ]α for all i ∈ M, or, in other words, x ∈
[X̃1 ]α ∩ · · · ∩ [X̃m ]α .
Theorem 113 has important computational consequences. Assume that in
a FMP problem we can specify a possibility (satisfaction) level α ∈ (0, 1] and
determine the α-cuts [ãi ]α and [b̃i ]α of the fuzzy parameters. Then formulae
(10.19) and (10.20) allow us to compute all α-feasible solutions of the FMP
problem without any special computations with the functions g̃i .
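This recipe can be sketched in a few lines. The following is a minimal illustrative implementation, assuming triangular fuzzy parameters and x ≥ 0, so that for a linear ”≤” constraint with the min t-norm the test reduces to comparing the smallest attainable left-hand side over the α-cuts with the largest attainable right-hand side. All numeric data below are made up.

```python
def alpha_cut_tri(l, m, r, alpha):
    """Alpha-cut [l + alpha(m - l), r - alpha(r - m)] of a triangular number."""
    return (l + alpha * (m - l), r - alpha * (r - m))

def alpha_feasible_leq(x, a_params, b_param, alpha):
    """Test x in [X_i]_alpha for the constraint sum_j a~_ij x_j <= b~_i with
    x >= 0: the infimum of the left side over the cuts must not exceed the
    supremum of the right side over its cut."""
    lo = sum(alpha_cut_tri(*a, alpha)[0] * xj for a, xj in zip(a_params, x))
    b_hi = alpha_cut_tri(*b_param, alpha)[1]
    return lo <= b_hi

a_params = [(1, 2, 3), (0, 1, 2)]   # fuzzy coefficients a~_1, a~_2
b_param = (4, 5, 6)                 # fuzzy right-hand side b~
print(alpha_feasible_leq([1.0, 1.0], a_params, b_param, alpha=0.5))  # True
```

Raising α shrinks the α-cuts, so the test becomes harder to pass, in accordance with the nestedness of α-cuts.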
If the valued relations Ri are binary relations similar to those in Theorem
110, then the statement of Theorem 113 can be strengthened as follows.

Theorem 114 Let gi be continuous functions, gi : Rn × Rk → R. Let ãi and


b̃i be compact fuzzy subsets of Rk and R, respectively. For i ∈ M1 , let R̃i be a
T -fuzzy extension of the equality relation ”=”; for i ∈ M2 , let R̃i be a T -fuzzy
extension of the inequality relation ”≤”; and for i ∈ M3 , let R̃i be a T -fuzzy
extension of the inequality relation ”≥”.
Then for all α ∈ (0, 1]

[X̃]α = [X̃1 ]α ∩ · · · ∩ [X̃m ]α , (10.26)

and, moreover, for all i ∈ M

[X̃i ]α = {x ∈ Rn |µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = 1}. (10.27)

Proof. 1. Let α ∈ (0, 1], i ∈ M, x ∈ [X̃i ]α . Then by (10.10) and (10.13) we


have
µR̃i (g̃i (x; ãi ), b̃i ) ≥ α. (10.28)

We prove first that (10.28) is valid if and only if

µR̃i ([g̃i (x; ãi )]α , [b̃i ]α ) = 1. (10.29)



In order to apply Corollary 64, for each x ∈ Rn and each α ∈ (0, 1], [g̃i (x; ãi )]α
should be compact. Indeed, this is true by Proposition 54. Then by Corol-
lary 64, we obtain the equivalence between (10.28) and (10.29). Moreover, by
Proposition 48 and Theorem 52, it follows that

[g̃i (x; ãi )]α = gi (x; [ãi ]α ). (10.30)

Substituting (10.30) into (10.29), we obtain µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = 1, which
is the desired result.
For the rest of the proof we can repeat the arguments of the corresponding
part of the proof of Theorem 113.
Now we show how the concept of generalized concavity introduced in
[79] is utilized in the FMP problem. Particularly, we show that the α-feasible
solutions given by (10.19) and (10.20) are the solutions of a system of inequalities,
provided that the membership functions of the fuzzy parameters ãi and b̃i are
upper-quasiconnected for all i ∈ M.
For given α ∈ (0, 1], i ∈ M, we introduce the following notation:

G̲i (x; α) = inf {gi (x; a) | a ∈ [ãi ]α } , (10.31)
Ḡi (x; α) = sup {gi (x; a) | a ∈ [ãi ]α } , (10.32)
b̲i (α) = inf{b ∈ R | b ∈ [b̃i ]α }, (10.33)
b̄i (α) = sup{b ∈ R | b ∈ [b̃i ]α }. (10.34)

Theorem 115 Let all assumptions of Theorem 114 be satisfied. Moreover, let
the membership functions of fuzzy parameters ãi and b̃i be upper-quasiconnected
for all i ∈ M.
Then for all α ∈ (0, 1], we have x ∈ [X̃]α if and only if

G̲i (x; α) ≤ b̄i (α), i ∈ M1 ∪ M2 , (10.35)
Ḡi (x; α) ≥ b̲i (α), i ∈ M1 ∪ M3 . (10.36)

Proof. Let x ∈ [X̃]α . By Theorem 114, this is equivalent to

µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = 1, (10.37)

for all i ∈ M = M1 ∪ M2 ∪ M3 . Moreover, by Proposition 48 and Theorem


52, it follows that
[g̃i (x; ãi )]α = gi (x; [ãi ]α ).
Since the membership functions of fuzzy parameters ãi and b̃i , i ∈ M, are upper-
quasiconnected, by Propositions 50 and 54, [g̃i (x; ãi )]α is closed and convex, i.e.,
it is a closed interval in R. The rest of the proof follows from (10.31) - (10.34)
and Theorem 114.
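The inequalities (10.35) and (10.36) are easy to test numerically. The sketch below does so for a single-variable linear constraint with a triangular parameter; for x ≥ 0 the bounds over the α-cut are attained at the cut endpoints. The data and the helper names are illustrative, not from the text.

```python
def cut(l, m, r, alpha):
    """Alpha-cut of a triangular fuzzy number (l, m, r)."""
    return (l + alpha * (m - l), r - alpha * (r - m))

def in_alpha_cut(x, a, b, alpha, kind):
    """Theorem 115-style test for the constraint a~ * x (kind) b~, x >= 0."""
    a_lo, a_hi = cut(*a, alpha)
    b_lo, b_hi = cut(*b, alpha)
    G_low, G_up = a_lo * x, a_hi * x       # attained bounds, assuming x >= 0
    if kind == "<=":                       # (10.35): G_low <= b_up
        return G_low <= b_hi
    if kind == ">=":                       # (10.36): G_up >= b_low
        return G_up >= b_lo
    return G_low <= b_hi and G_up >= b_lo  # "=" requires both conditions

print(in_alpha_cut(2.0, (1, 2, 3), (3, 4, 5), 0.8, "<="))  # True
```

For an equality-type constraint (i ∈ M1) both inequalities must hold, matching the index sets M1 ∪ M2 and M1 ∪ M3 in (10.35) and (10.36).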
If we assume that the functions gi satisfy some convexity and concavity
requirements, then we can prove that the membership function µX̃ of the
feasible solution X̃ is quasiconcave, or, in other words, that X̃ is convex.

Theorem 116 Let all assumptions of Theorem 114 be satisfied. Moreover, let
gi be quasiconvex on Rn × Rk for i ∈ M1 ∪ M2 , and gi be quasiconcave on
Rn × Rk for i ∈ M1 ∪ M3 .
Then for all i ∈ M, X̃i are convex and therefore the feasible solution X̃ of FMP
problem (10.8) is also convex.

Proof. 1. Let i ∈ M1 ∪ M2 , α ∈ (0, 1]. We show that [X̃i ]α is convex.
Let x1 , x2 ∈ [X̃i ]α , λ ∈ (0, 1), and put y = λx1 + (1 − λ)x2 . Since gi is quasiconvex
on Rn × Rk , for all (x1 , a1 ) ∈ Rn × Rk , (x2 , a2 ) ∈ Rn × Rk we have

gi (λx1 + (1 − λ)x2 , λa1 + (1 − λ)a2 ) ≤ max{gi (x1 , a1 ), gi (x2 , a2 )}. (10.38)

By (10.20) we get

[X̃i ]α = {x ∈ Rn |µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = 1}.

Hence, it remains only to show that

µR̃i (gi (λx1 + (1 − λ)x2 ; [ãi ]α ), [b̃i ]α ) = 1. (10.39)

By Proposition 54 and Corollary 55, gi (λx1 + (1 − λ)x2 ; [ãi ]α ) is a compact
interval in R. However, [b̃i ]α is also a compact interval, therefore for x ∈ Rn

µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = sup{min{χgi (x;[ãi ]α ) (u), χ[b̃i ]α (v)} | u ≤ v}
= 1 if G̲i (x; α) ≤ b̄i (α), and 0 otherwise. (10.40)

To prove (10.39), we have to show that for y = λx1 + (1 − λ)x2

G̲i (y; α) ≤ b̄i (α). (10.41)

Observe that [ãi ]α is a convex and compact subset of Rk . By continuity of gi ,
there exists aj ∈ [ãi ]α , j = 1, 2, such that

G̲i (xj ; α) = gi (xj ; aj ). (10.42)

Since x1 , x2 ∈ [X̃i ]α , we get from (10.40) G̲i (xj ; α) ≤ b̄i (α), j = 1, 2, or

max{G̲i (x1 ; α), G̲i (x2 ; α)} ≤ b̄i (α). (10.43)

Then by (10.43) and (10.42) we immediately obtain

max{gi (x1 ; a1 ), gi (x2 ; a2 )} ≤ b̄i (α). (10.44)

Considering inequality (10.38), we get

gi (y; λa1 + (1 − λ)a2 ) ≤ max{gi (x1 ; a1 ), gi (x2 ; a2 )}. (10.45)

Since λa1 + (1 − λ)a2 ∈ [ãi ]α , we get

G̲i (y; α) ≤ gi (y; λa1 + (1 − λ)a2 ). (10.46)

Then inequalities (10.44) - (10.46) give the required result (10.41).


2. Let i ∈ M1 ∪ M3 , α ∈ (0, 1]. Again, we show that [X̃i ]α is convex.
Let x1 , x2 ∈ [X̃i ]α , λ ∈ (0, 1), and put y = λx1 + (1 − λ)x2 . Since gi is quasi-
concave on Rn × Rk , for all (x1 , a1 ) ∈ Rn × Rk , (x2 , a2 ) ∈ Rn × Rk we have

gi (λx1 + (1 − λ)x2 , λa1 + (1 − λ)a2 ) ≥ min{gi (x1 , a1 ), gi (x2 , a2 )}. (10.47)

By (10.20) we get

[X̃i ]α = {x ∈ Rn |µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = 1}.

Hence, it remains to show that

µR̃i (gi (λx1 + (1 − λ)x2 ; [ãi ]α ), [b̃i ]α ) = 1. (10.48)

By Proposition 54 and Corollary 55, gi (λx1 + (1 − λ)x2 ; [ãi ]α ) is a compact
interval in R. However, [b̃i ]α is also a compact interval, therefore for x ∈ Rn

µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = sup{min{χgi (x;[ãi ]α ) (u), χ[b̃i ]α (v)} | u ≥ v}
= 1 if Ḡi (x; α) ≥ b̲i (α), and 0 otherwise.

To prove (10.48), we have to show that for y = λx1 + (1 − λ)x2

Ḡi (y; α) ≥ b̲i (α). (10.49)

The proof of (10.49) is analogous to part 1.
Observe first that [ãi ]α is a convex and compact subset of Rk . By continuity
of gi , there exists a′j ∈ [ãi ]α , j = 1, 2, such that

Ḡi (xj ; α) = gi (xj ; a′j ). (10.50)

Since x1 , x2 ∈ [X̃i ]α , we get from the last displayed formula

Ḡi (xj ; α) ≥ b̲i (α), j = 1, 2,

or
min{Ḡi (x1 ; α), Ḡi (x2 ; α)} ≥ b̲i (α). (10.51)

Then by (10.51) and (10.50) we immediately obtain

min{gi (x1 ; a′1 ), gi (x2 ; a′2 )} ≥ b̲i (α). (10.52)

Considering inequality (10.47), we get

gi (y; λa′1 + (1 − λ)a′2 ) ≥ min{gi (x1 ; a′1 ), gi (x2 ; a′2 )}. (10.53)

Since λa′1 + (1 − λ)a′2 ∈ [ãi ]α , we have

Ḡi (y; α) ≥ gi (y; λa′1 + (1 − λ)a′2 ). (10.54)

Then inequalities (10.52) - (10.54) give the required result (10.49).


The main results of this section are schematically summarized in Table 1.

Table 1.

Constraint      Parameters        Relations            t-norm /     Results                                Theorem
functions gi    ãi , b̃i          R / R̃i              agr. op.
-----------------------------------------------------------------------------------------------------------------
--              crisp             =, ≤, ≥ /            T / T        X̃ = X (crisp)                         T110
                                  fuzzy extension
--              ã′i ⊂ ã″i ,      valued relations /   T / A        X̃ ′ ⊂ X̃ ″                            T111
                b̃′i ⊂ b̃″i       T -fuzzy extension
continuous      compact           valued relations /   min / min    [X̃]α = [X̃1 ]α ∩ · · · ∩ [X̃m ]α ,    T113
                                  T -fuzzy extension                [X̃i ]α = {x ∈ Rn |
                                                                    µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) ≥ α}
continuous      compact           =, ≤, ≥ /            min / min    [X̃]α = [X̃1 ]α ∩ · · · ∩ [X̃m ]α ,    T114
                                  T -fuzzy extension                [X̃i ]α = {x ∈ Rn |
                                                                    µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = 1}
continuous      compact,          =, ≤, ≥ /            min / min    G̲i (x; α) ≤ b̄i (α), i ∈ M1 ∪ M2 ,   T115
                UQCN              T -fuzzy extension                Ḡi (x; α) ≥ b̲i (α), i ∈ M1 ∪ M3
continuous,     compact,          =, ≤, ≥ /            min / min    [X̃i ]α convex                         T116
QCV/QCA         UQCN              T -fuzzy extension

(UQCN = upper-quasiconnected membership functions; QCV = quasiconvex; QCA = quasiconcave.)

10.7 Optimal Solutions of the FMP Problem


For convenience of the reader we first recall FMP problem (10.8). Let us consider
an optimization problem associated with the MP problem (10.4), particularly,
let f, gi , i ∈ M, be functions, f : Rn × C → R, gi : Rn × Pi → R, where
C and Pi are sets of parameters. Let µc̃ : C → [0, 1], µãi : Pi → [0, 1]
and µb̃i : R → [0, 1] be membership functions of fuzzy parameters c̃, ãi and
b̃i , respectively. Moreover, let R̃i , i ∈ {0} ∪ M, be fuzzy relations with the
corresponding membership functions µR̃i : F(R) × F(R) → [0, 1].
We assume the existence of an additional goal b̃0 ∈ F(R) - a fuzzy subset
of the real line, to which the fuzzy values of the objective function are compared
by means of a fuzzy relation R̃0 , which is also assumed to be given exogenously.
The fuzzy objective is then treated as another constraint f˜(x; c̃) R̃0 b̃0 , and
to ”maximize” the objective function means to find the maximal membership
degree µR̃0 (f˜(x; c̃), b̃0 ).
The FMP problem associated with MP problem (10.4) is formulated as fol-
lows:

m̃aximize f˜(x; c̃)
subject to (10.55)
g̃i (x; ãi ) R̃i b̃i , i ∈ M = {1, 2, ..., m},

where the tilde over ”maximize” indicates the fuzzy maximization just described.

We obtain a modification of Definition 109.

Definition 117 Let f , gi , i ∈ M, be functions, f : Rn × C → R, gi : Rn ×


Pi → R, where C, Pi are sets of parameters. Let µc̃ : C → [0, 1], µãi : Pi →
[0, 1] and µb̃i : R → [0, 1] be membership functions of fuzzy parameters c̃, ãi
and b̃i , respectively. Let b̃0 ∈ F(R) be a fuzzy subset of R called a fuzzy goal.
Furthermore, let R̃i , i ∈ {0} ∪ M, be fuzzy relations given by the membership
functions µR̃i : F(R) × F(R) → [0, 1], let A and AG be aggregation operators.
A fuzzy set X̃ ∗ given by the membership function µ∗X̃ defined for all x ∈ Rn as

µ∗X̃ (x) = AG (µR̃0 (f˜(x; c̃), b̃0 ), µX̃ (x)), (10.56)

where µX̃ (x) is the membership function of the feasible solution given by (10.12),
is called an optimal solution of the FMP problem (10.55).
For α ∈ (0, 1] a vector x ∈ [X̃ ∗ ]α is called an α-optimal solution of the FMP
problem (10.55).
A vector x∗ ∈ Rn with the property

µ∗X̃ (x∗ ) = Hgt(X̃ ∗ ) (10.57)

is called the max-optimal solution.



Notice that the optimal solution X̃ ∗ of a FMP problem is a fuzzy subset of
Rn . Moreover, X̃ ∗ ⊂ X̃, where X̃ is the feasible solution. On the other hand,
the α-optimal solution is a vector, as is the max-optimal solution, which is,
in fact, an α-optimal solution with α = Hgt(X̃ ∗ ). Notice that in view of
Chapter 9 a max-optimal solution is in fact a max-AG decision on Rn .
In Definition 117 of the optimal solution, two aggregation operators A and AG
are used. The former aggregates the individual constraints into the feasible
solution by Definition 109; the latter aggregates the fuzzy set of the feasible
solution, given by the membership function

µX̃ (x) = A (µR̃1 (g̃1 (x; ã1 ), b̃1 ), ..., µR̃m (g̃m (x; ãm ), b̃m )), (10.58)

with the fuzzy set ”of the objective” X̃0 defined by the membership function

µX̃0 (x) = µR̃0 (f˜(x; c̃), b̃0 ), (10.59)

where b̃0 is a given fuzzy goal. As a result, we obtain the membership function
of the optimal solution X̃ ∗ as

µ∗X̃ (x) = AG (µX̃0 (x), µX̃ (x)) (10.60)

for all x ∈ Rn . In particular, if A = AG , then by commutativity and associa-
tivity we obtain (10.56) in the simple form

µ∗X̃ (x) = A (µR̃0 (f˜(x; c̃), b̃0 ), µR̃1 (g̃1 (x; ã1 ), b̃1 ), ..., µR̃m (g̃m (x; ãm ), b̃m )).
(10.61)
Since problem (10.55) is a maximization problem, i.e. ”the higher value,
the better”, the membership function µb̃0 of b̃0 should be increasing, or at least
nondecreasing. For the same reason, the fuzzy relation R̃0 for comparing f˜(x; c̃)
and b̃0 should be of the ”greater or equal” type. In this section, we consider R̃0
to be a T -fuzzy extension of the usual binary relation ≥, where T is a t-norm.
Notice that the extension to a multi-objective MP problem, with several
objective functions, fuzzy goals and corresponding fuzzy relations, is straight-
forward; however, it is not pursued here.
Formally, in Definitions 109 and 117, the concepts of feasible solution and
optimal solution, α-feasible solution and α-optimal solution, respectively, are
similar to each other. Therefore, we can take advantage of the results already
derived in the preceding section for some characterization of the optimal solu-
tions of the FMP problem. We first show that in case of crisp parameters c, ai
and bi , the max-optimal solution given by (10.57) coincides with the optimal
solution of the crisp problem.
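Before turning to the theorems, the combination (10.60) can be sketched numerically. The fragment below assumes AG = min, a crisp objective value (so that the objective degree reduces to the goal membership evaluated at f(x, c)), and an illustrative nondecreasing goal membership; the parameters b0 and spread are hypothetical.

```python
def mu_goal(t, b0=10.0, spread=4.0):
    """Nondecreasing membership of an illustrative fuzzy goal b~_0:
    0 below b0 - spread, rising linearly to 1 at b0."""
    return min(1.0, max(0.0, 1.0 - (b0 - t) / spread))

def mu_optimal(f_value, feasibility_degree):
    """mu*_X(x) = AG(mu_X0(x), mu_X(x)) with AG = min; for a crisp
    objective value the objective degree is mu_goal(f_value)."""
    return min(mu_goal(f_value), feasibility_degree)

print(mu_optimal(9.0, 0.9))   # min(0.75, 0.9) = 0.75
```

Depending on x, either the goal satisfaction or the constraint satisfaction is the binding degree; the max-optimal solution maximizes this minimum.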

Theorem 118 Let f : Rn × C → R, gi : Rn × Pi → R, let c ∈ C, ai ∈ Pi


and bi ∈ R be crisp parameters, i ∈ M. Let b̃0 ∈ F(R) be a fuzzy goal with the
strictly increasing membership function µb̃0 : R → [0, 1], let R̃i , i ∈ {0} ∪ M,

be fuzzy relations such that for i ∈ M1 , R̃i is a fuzzy extension of the equality
relation ”=”, for i ∈ M2 , R̃i is a fuzzy extension of the inequality relation ”≤”,
and for i ∈ {0} ∪ M3 , R̃i is a fuzzy extension of the inequality relation ”≥”.
Let T, A and AG be t-norms.
Then the set of all max-optimal solutions coincides with the set of all optimal
solutions X ∗ of MP problem (10.4).

Proof. By Theorem 110, the feasible solution of (10.55) is crisp, i.e.,

µX̃ (x) = χX (x) (10.62)

for all x ∈ Rn , where X is the set of all feasible solutions of the crisp MP
problem.
Moreover, by (10.59) we obtain for crisp c ∈ C

µX̃0 (x) = µR̃0 (f˜(x; c̃), b̃0 ) = sup{T (χf (x,c) (u), µb̃0 (v)) | u ≥ v}
= µb̃0 (f (x, c)). (10.63)

Substituting (10.62) and (10.63) into (10.60) we obtain

µ∗X̃ (x) = AG (µb̃0 (f (x, c)), χX (x)) = µb̃0 (f (x, c)) if x ∈ X, and 0 otherwise.
(10.64)

Since µb̃0 is a strictly increasing function, by (10.64) it follows that µ∗X̃ (x∗ ) =
Hgt(X̃ ∗ ) if and only if µ∗X̃ (x∗ ) = sup{µb̃0 (f (x, c)) | x ∈ X}, which is the
desired result.
For fuzzy subsets ã0 , ã00 ∈ F(Rn ), we have ã0 ⊂ ã00 , if and only if µã0 (x) ≤
µã00 (x) for all x ∈ Rn .

Theorem 119 Let f , gi , i ∈ M, be real functions, f : Rn × C → R, gi :
Rn × Pi → R. Let c̃′ , ã′i , b̃′i and c̃″ , ã″i , b̃″i be two collections of fuzzy parameters
of the FMP problem. Let T, A and AG be t-norms. Let b̃0 ∈ F(R) be a fuzzy
goal, let R̃i , i ∈ {0} ∪ M, be T -fuzzy extensions of valued relations Ri on R.
If X̃ ∗′ is an optimal solution of FMP problem (10.55) with the parameters c̃′ , ã′i
and b̃′i , and X̃ ∗″ is an optimal solution of the FMP problem with the parameters
c̃″ , ã″i and b̃″i such that for all i ∈ M

c̃′ ⊂ c̃″ , ã′i ⊂ ã″i and b̃′i ⊂ b̃″i , (10.65)

then
X̃ ∗′ ⊂ X̃ ∗″ . (10.66)

Proof. By Theorem 111, for the corresponding feasible solutions it holds that
X̃ ′ ⊂ X̃ ″ . It remains to show that X̃0′ ⊂ X̃0″ , where

µX̃0′ (x) = µR̃0 (f˜(x; c̃′ ), b̃0 ), µX̃0″ (x) = µR̃0 (f˜(x; c̃″ ), b̃0 ).

First, we show that f˜(x; c̃′ ) ⊂ f˜(x; c̃″ ). Indeed, since µc̃′ (c) ≤ µc̃″ (c) for all
c ∈ C, by (10.11) we obtain for all u ∈ R

µf˜(x;c̃′ ) (u) = max{0, sup{µc̃′ (c) | c ∈ C, f (x; c) = u}}
≤ max{0, sup{µc̃″ (c) | c ∈ C, f (x; c) = u}} = µf˜(x;c̃″ ) (u).

Now, using monotonicity of the T -fuzzy extension R̃0 , it follows that

µR̃0 (f˜(x; c̃′ ), b̃0 ) ≤ µR̃0 (f˜(x; c̃″ ), b̃0 ).

Applying monotonicity of AG in (10.60), we obtain X̃ ∗′ ⊂ X̃ ∗″ .

Corollary 120 Let c̃, ãi , b̃i be a collection of fuzzy parameters, and let c ∈
C,ai ∈ Pi and bi ∈ R be a collection of crisp parameters such that for all
i∈M
µc̃ (c) = µãi (ai ) = µb̃i (bi ) = 1. (10.67)
If X ∗ is a nonempty set of all optimal solutions of MP problem (10.4) with the
parameters c, ai and bi , X̃ ∗ is an optimal solution of FMP problem (10.55) with
fuzzy parameters c̃, ãi and b̃i , then for all x ∈ X ∗

µ∗X̃ (x) = 1. (10.68)

Proof. Observe that c ⊂ c̃, ai ⊂ ãi , bi ⊂ b̃i for all i ∈ M . Then by Theorem
119 we obtain X ∗ ⊂ X̃ ∗ , which is equivalent to (10.68).
Notice that the optimal solution X̃ ∗ of FMP problem (10.8) always exists,
even if the MP problem with crisp parameters has no crisp optimal solution.
Corollary 120 states that if the MP problem with crisp parameters has a crisp
optimal solution, then the membership grade of the optimal solution (of the
associated FMP problem with fuzzy parameters) is equal to one. This fact
enables a natural embedding of the class of (crisp) MP problems into the class
of FMP problems.
From now on, the space of parameters is supposed to be the k-dimensional
Euclidean vector space Rk , where k is a positive integer, i.e. C = Pi = Rk
for all i ∈ M. For the remaining part of this section we suppose that T , A
and AG equal the minimum t-norm, that is, T = A = AG = TM . We shall
derive formulae based on α-cuts of the parameters, analogous to those given by
Theorems 113 and 115, but now for α-optimal solutions of the FMP problem.

Theorem 121 Let f , gi be continuous functions, f : Rn × Rk → R, gi :


Rn × Rk → R. Let c̃, ãi and b̃i be compact fuzzy parameters, i ∈ M, let
b̃0 ∈ F(R) be a fuzzy goal. Let T = A = AG = TM . Let R̃i be T -fuzzy
extensions of closed valued relations Ri on R, i ∈ {0} ∪ M.
Then for all α ∈ (0, 1]

[X̃ ∗ ]α = [X̃0 ]α ∩ [X̃1 ]α ∩ · · · ∩ [X̃m ]α , (10.69)

and, moreover, for all i ∈ M


[X̃0 ]α = {x ∈ Rn |µR̃0 (f (x; [c̃]α ), [b̃0 ]α ) ≥ α}, (10.70)
n
[X̃i ]α = {x ∈ R |µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) ≥ α}. (10.71)
Proof. The proof is omitted since it is analogous to the proof of Theorem
113.
If the valued relations Ri are the usual equality and inequality relations, then,
analogously to Theorem 114, the following stronger statement can be proven.
Theorem 122 Let f , gi , i ∈ M, be continuous functions, f : Rn × Rk → R,
gi : Rn × Rk → R. Let c̃, ãi and b̃i be compact fuzzy parameters. Let T = A =
AG = TM . Let R̃i be the same as in Theorem 118, i ∈ {0} ∪ M.
Then for all α ∈ (0, 1]

[X̃ ∗ ]α = [X̃0 ]α ∩ [X̃1 ]α ∩ · · · ∩ [X̃m ]α , (10.72)
and, moreover, for all i ∈ M
[X̃0 ]α = {x ∈ Rn |µR̃0 (f (x; [c̃]α ), [b̃0 ]α ) = 1}, (10.73)
n
[X̃i ]α = {x ∈ R |µR̃i (gi (x; [ãi ]α ), [b̃i ]α ) = 1}. (10.74)
Proof. The proof is omitted since it is analogous to the proof of Theorem
114.
Next, we shall derive an analogue of Theorem 115. For this purpose we extend
the notation of the previous section as follows. Given α ∈ (0, 1], let

F̄ (x; α) = sup {f (x; c) | c ∈ [c̃]α } , (10.75)
F̲ (x; α) = inf {f (x; c) | c ∈ [c̃]α } , (10.76)
b̲0 (α) = inf{b ∈ R | b ∈ [b̃0 ]α }, (10.77)
b̄0 (α) = sup{b ∈ R | b ∈ [b̃0 ]α }. (10.78)
Theorem 123 Let all assumptions of Theorem 122 be satisfied. Moreover, let
the membership functions of fuzzy parameters c̃, ãi and b̃i be upper-quasiconnected
for all i ∈ M. Let b̃0 ∈ F(R) be a fuzzy goal with the membership function µb̃0
satisfying the following conditions
µb̃0 is upper semicontinuous,
µb̃0 is strictly increasing, (10.79)
limt→−∞ µb̃0 (t) = 0.

Then for all α ∈ (0, 1], we have x ∈ [X̃ ∗ ]α if and only if

F̄ (x; α) ≥ b̲0 (α), (10.80)
G̲i (x; α) ≤ b̄i (α), i ∈ M1 ∪ M2 , (10.81)
Ḡi (x; α) ≥ b̲i (α), i ∈ M1 ∪ M3 . (10.82)

Proof. The proof is omitted since it is analogous to the proof of Theo-


rem 115 with the only modification that instead of compactness of b̃0 , we have
assumptions (10.79).
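The objective condition (10.80) admits the same α-cut test as the constraints. The sketch below checks it for a linear objective f(x; c) = c·x with a triangular fuzzy cost c̃ and x ≥ 0, so that the supremum over the cut is attained at its upper endpoint; all data are illustrative.

```python
def cut(l, m, r, alpha):
    """Alpha-cut of a triangular fuzzy number (l, m, r)."""
    return (l + alpha * (m - l), r - alpha * (r - m))

def objective_alpha_ok(x, c, b0, alpha):
    """(10.80): sup f(x; [c~]_alpha) >= inf [b~_0]_alpha for f(x; c) = c*x,
    assuming x >= 0."""
    F_up = cut(*c, alpha)[1] * x
    b0_low = cut(*b0, alpha)[0]
    return F_up >= b0_low

print(objective_alpha_ok(3.0, (1, 2, 3), (5, 6, 7), 0.5))  # True
```

Combined with the constraint tests (10.81)-(10.82), this gives a direct membership test for the α-cut of the optimal solution X̃*.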

Theorem 124 Let all assumptions of Theorem 122 be satisfied. Moreover, let
gi be quasiconvex on Rn × Rk for i ∈ M1 ∪ M2 , and let f and gi , i ∈ M1 ∪ M3 ,
be quasiconcave on Rn × Rk .
Then for all i ∈ {0} ∪ M, X̃i are convex and the optimal solution X̃ ∗ of FMP
problem (10.55) is convex, too.

Proof. Again, the proof is omitted since it can be performed in an analogous


way as the proof of Theorem 116.
If the individual membership functions of the fuzzy objective and fuzzy con-
straints can be expressed in an explicit form, then the max-optimal solution can
be found as the optimal solution of some crisp MP problem.

Theorem 125 Let

µX̃0 (x) = µR̃0 (f˜(x; c̃), b̃0 )

and
µX̃i (x) = µR̃i (g̃i (x; ãi ), b̃i ),

x ∈ Rn , i ∈ M, be the membership functions of the fuzzy objective and fuzzy
constraints of the FMP problem (10.55), respectively. Let T = A = AG = TM
and let (10.79) hold for b̃0 .
The vector (t∗ , x∗ ) ∈ Rn+1 is an optimal solution of the problem

maximize t
subject to µX̃0 (x) ≥ t, (10.83)
µX̃i (x) ≥ t, i ∈ M,

if and only if x∗ is a max-optimal solution of the problem (10.55).

Proof. Let (t∗ , x∗ ) ∈ Rn+1 be an optimal solution of the MP problem
(10.83). By (10.57) and (10.60) we obtain

µ∗X̃ (x∗ ) = sup{min{µX̃0 (x), µX̃1 (x), ..., µX̃m (x)} | x ∈ Rn } = Hgt(X̃ ∗ ).

Hence, x∗ attains the height of X̃ ∗ , i.e., it is a max-optimal solution. The proof
of the opposite implication is straightforward.
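For piecewise-linear membership functions, problem (10.83) can be solved by elementary means. The following sketch treats an illustrative one-variable instance with µX̃0(x) = clip((x − 4)/2) and µX̃1(x) = clip((7 − x)/2): for a fixed level t the system µX̃0(x) ≥ t, µX̃1(x) ≥ t is feasible iff 4 + 2t ≤ 7 − 2t, so the maximal level can be located by bisection. All data are made up for illustration.

```python
def feasible(t):
    """For this instance, mu_0(x) >= t and mu_1(x) >= t force x into
    [4 + 2t, 7 - 2t]; the level t is attainable iff this interval is nonempty."""
    lo, hi = 4.0 + 2.0 * t, 7.0 - 2.0 * t
    return lo <= hi

def max_optimal_level(tol=1e-9):
    """Bisection on t in [0, 1] for the maximal attainable level of (10.83)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

t_star = max_optimal_level()
x_star = 4.0 + 2.0 * t_star                  # the (here unique) max-optimal x
print(round(t_star, 6), round(x_star, 6))    # 0.75 5.5
```

For linear f, gi and piecewise-linear memberships, (10.83) is in fact a crisp linear program and can equally be handed to any LP solver.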
Chapter 11

Fuzzy Linear Programming

11.1 Introduction
The most important mathematical programming problems (10.4) are those where
the functions f and gi are linear.
Let M = {1, 2, ..., m}, N = {1, 2, ..., n}, m, n be positive integers. Let f , gi
be linear functions, f : Rn × Rn → R, gi : Rn × Rn → R, c, ai ∈ Rn , i ∈ M,
be the parameters such that

f (x; c1 , ..., cn ) = cT x = c1 x1 + · · · + cn xn , (11.1)

gi (x; ai1 , ..., ain ) = aTi x = ai1 x1 + · · · + ain xn , i ∈ M, (11.2)


where x ∈ Rn . We consider the following linear programming problem (LP
problem)

maximize c1 x1 + · · · + cn xn
subject to
(11.3)
ai1 x1 + · · · + ain xn ≤ bi , i ∈ M,
xj ≥ 0, j ∈ N .
The set of all feasible solutions X of (11.3) is defined as follows

X = {x ∈ Rn |ai1 x1 + · · · + ain xn ≤ bi , i ∈ M, xj ≥ 0, j ∈ N }. (11.4)
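As a crisp baseline for the fuzzification that follows, the set (11.4) is simply a characteristic-function test. The sketch below implements it for illustrative data A, b; the function name is hypothetical.

```python
def chi_X(x, A, b):
    """Characteristic function of X = {x >= 0 : A x <= b} from (11.4)."""
    if any(xj < 0 for xj in x):
        return 0
    for row, bi in zip(A, b):
        if sum(aij * xj for aij, xj in zip(row, x)) > bi:
            return 0
    return 1

A = [[1.0, 2.0], [3.0, 1.0]]
b = [4.0, 6.0]
print(chi_X([1.0, 1.0], A, b), chi_X([2.0, 2.0], A, b))  # 1 0
```

In the FLP problem this all-or-nothing membership is replaced by the graded membership µX̃ of Definition 126.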

11.2 Formulation of FLP problem


Before formulating a fuzzy linear problem as an optimization problem associated
with the LP problem (11.3), we make a few assumptions and remarks.
Let f , gi be linear functions defined by (11.1), (11.2), respectively. From now
on, the parameters cj , aij and bi will be considered as normal fuzzy quantities,
that is, normal fuzzy subsets of the Euclidean space R. The fuzzy quantities
will be denoted by symbols with tildes above. Let µc̃j : R → [0, 1],


µãij : R → [0, 1] and µb̃i : R → [0, 1], i ∈ M, j ∈ N , be membership functions


of the fuzzy parameters c̃j , ãij and b̃i , respectively.
Let R̃i , i ∈ M, be fuzzy relations on F(R). They will be used for comparing
the left and right sides of the constraints in (11.3).
The maximization of the objective function (11.1), however, needs a special
treatment, similar to that of the FMP problem. As stated in Chapter 8,
fuzzy values of the objective function are not linearly ordered, and to maximize
the objective function we have to define a suitable ordering on F(R) which
allows for “maximization” of the objective. Again, this is done by an exoge-
nously given fuzzy goal d˜ ∈ F(R) and another fuzzy relation R̃0 on R. There
exist other approaches as well, see [18], [23], [60].
The fuzzy linear programming problem (FLP problem) associated with LP
problem (11.3) is denoted as

m̃aximize c̃1 x1 +̃ · · · +̃ c̃n xn
subject to (11.5)
ãi1 x1 +̃ · · · +̃ ãin xn R̃i b̃i , i ∈ M,
xj ≥ 0, j ∈ N .
Let us clarify the elements of (11.5).
The objective function values and the left-hand side values of the constraints
of (11.5) are obtained by the extension principle (8.17) as follows. A
membership function of g̃i (x; ãi1 , ..., ãin ) is defined for each t ∈ R by

µg̃i (t) = sup{T (µãi1 (a1 ), ..., µãin (an )) | a1 , ..., an ∈ R, a1 x1 + · · · + an xn = t}
if gi−1 (x; t) ≠ ∅, and µg̃i (t) = 0 otherwise, (11.6)

where gi−1 (x; t) = {(a1 , ..., an ) ∈ Rn |a1 x1 + · · · + an xn = t}. Here, the fuzzy set
g̃i (x; ãi1 , ..., ãin ) is denoted by ãi1 x1 +̃ · · · +̃ ãin xn , i.e.

g̃i (x; ãi1 , ..., ãin ) = ãi1 x1 +̃ · · · +̃ ãin xn (11.7)

for every i ∈ M and each x ∈ Rn .
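For T = TM = min, the α-cuts of the extended sum combine by interval arithmetic, so for triangular fuzzy coefficients and xj ≥ 0 the fuzzy value ãi1x1 +̃ · · · +̃ ãinxn is again triangular, with its three defining parameters combined linearly. A small numeric sketch of this standard fact (illustrative data; the helper name is hypothetical):

```python
def fuzzy_linear_tri(a_params, x):
    """Triangular (l, m, r) of sum_j a~_j x_j, for triangular a~_j = (l, m, r)
    under the min t-norm, assuming all x_j >= 0."""
    l = sum(a[0] * xj for a, xj in zip(a_params, x))
    m = sum(a[1] * xj for a, xj in zip(a_params, x))
    r = sum(a[2] * xj for a, xj in zip(a_params, x))
    return (l, m, r)

print(fuzzy_linear_tri([(1, 2, 3), (0, 1, 2)], [2.0, 1.0]))  # (2.0, 5.0, 8.0)
```

For other t-norms T, or for xj of mixed sign, the resulting membership function must be computed from (11.6) directly.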


Also, for given c̃1 , ..., c̃n ∈ F(R), f˜(x; c̃1 , ..., c̃n ) is a fuzzy extension of
f (x; c1 , ..., cn ) with the membership function defined for each t ∈ R as

µf˜(t) = sup{T (µc̃1 (c1 ), ..., µc̃n (cn )) | c1 , ..., cn ∈ R, c1 x1 + · · · + cn xn = t}
if f −1 (x; t) ≠ ∅, and µf˜(t) = 0 otherwise, (11.8)

where f −1 (x; t) = {(c1 , ..., cn ) ∈ Rn |f (x; c1 , ..., cn ) = t}.
The fuzzy set f˜(x; c̃1 , ..., c̃n ) will be denoted by c̃1 x1 +̃ · · · +̃ c̃n xn , i.e.

f˜(x; c̃1 , ..., c̃n ) = c̃1 x1 +̃ · · · +̃ c̃n xn . (11.9)

In (11.7) the value ãi1 x1 +̃· · ·+̃ãin xn ∈ F(R) is “compared” with the fuzzy
quantity b̃i ∈ F(R) by a fuzzy relation R̃i , i ∈ M.

For x and y ∈ Rn we calculate f˜(x; c̃1 , ..., c̃n ) ∈ F(R) and f˜(y; c̃1 , ..., c̃n ) ∈
F(R), respectively. Such values of the objective function are not linearly ordered
and to maximize the objective function we have to define a suitable ordering on
F(R) which allows for “maximization” of the objective. Let d˜ ∈ F(R) be an
exogenously given fuzzy goal with an associated fuzzy relation R̃0 on R.
The fuzzy relations R̃i for comparing the constraints of (11.5) are usually
extensions of a valued relation on R, particularly, the usual inequality relations
“≤” and “≥”.
If R̃i is a fuzzy extension of relation Ri , then by (8.27) we obtain the mem-
bership function of the i-th constraint as

µR̃i (ãi1 x1 +̃ · · · +̃ ãin xn , b̃i ) = sup{T (µãi1 x1 +̃···+̃ãin xn (u), µb̃i (v)) | u Ri v}.
(11.10)
The concepts of feasible and optimal solutions already defined in the preceding
chapter for the FMP problem (10.4) can be adopted here for the FLP problem.
Of course, for FLP problems they have some special features. Let us begin with
the concept of feasible solution.
Definition 126 Let gi , i ∈ M, be linear functions defined by (11.2). Let µãij :
R → [0, 1] and µb̃i : R → [0, 1], i ∈ M = {1, 2, ..., m}, j ∈ N = {1, 2, ..., n},
be membership functions of fuzzy quantities ãij and b̃i , respectively. Let R̃i ,
i ∈ M, be fuzzy relations on F(R). Let GA be an aggregation operator and T
be a t-norm. Here T is used for extending arithmetic operations.
A fuzzy set X̃, whose membership function µX̃ is defined for all x ∈ Rn by

µX̃ (x) = GA (µR̃1 (ã11 x1 +̃ · · · +̃ ã1n xn , b̃1 ), . . . , µR̃m (ãm1 x1 +̃ · · · +̃ ãmn xn , b̃m ))
if xj ≥ 0 for all j ∈ N , and µX̃ (x) = 0 otherwise, (11.11)

is called the feasible solution of the FLP problem (11.5).
For α ∈ (0, 1], a vector x ∈ [X̃]α is called the α-feasible solution of the FLP
problem (11.5).
A vector x̄ ∈ Rn such that µX̃ (x̄) = Hgt(X̃) is called the max-feasible solution.

Notice that the feasible solution X̃ of an FLP problem is a fuzzy set. On the other hand, an α-feasible solution is a vector belonging to the α-cut of the feasible solution X̃, and the same holds for the max-feasible solution, which is a special α-feasible solution with α = Hgt(X̃).
If a decision maker specifies the grade of membership α ∈ (0, 1] (the grade of possibility, feasibility, satisfaction, etc.), then a vector x ∈ Rn satisfying µX̃(x) ≥ α is an α-feasible solution of the corresponding FLP problem.
For i ∈ M we introduce the following notation: X̃i will denote a fuzzy subset
of Rn with the membership function µX̃i defined for all x ∈ Rn as

µX̃i (x) = µR̃i (ãi1 x1 +̃· · ·+̃ãin xn , b̃i ). (11.12)



The fuzzy set (11.12) is interpreted as the i-th fuzzy constraint. All fuzzy constraints are aggregated into the feasible solution (11.11) by the aggregation operator GA; typically, GA = min. The t-norm T is used for extending the arithmetic operations. Notice that the feasible solution also depends on the fuzzy relations used in the definitions of the constraints of the FLP problem.
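As an illustration of how (11.11) and (11.12) work, the following sketch (all data hypothetical) evaluates the feasible-solution membership for triangular fuzzy parameters, with T = GA = min and each R̃i the T-fuzzy extension of "≤". For x ≥ 0 and T = min, the sum ãi1x1 +̃ ··· +̃ ãinxn is again triangular, so the constraint membership has a simple closed form.

```python
# Illustrative sketch (hypothetical data): feasible-solution membership (11.11)
# for triangular fuzzy parameters, T = G_A = min, each fuzzy relation the
# T-fuzzy extension of "<=".  For x >= 0 and T = min the sum
# a~_i1 x_1 +~ ... +~ a~_in x_n is triangular with peak sum_j a_ij x_j and
# left spread sum_j g_ij x_j, which yields a closed form for the grade.

def pos_leq(peak_a, lspread_a, peak_b, rspread_b):
    """Degree of the extended inequality a~ <=~T b~ for T = min: equal to 1
    when peak_a <= peak_b, otherwise the height where the left shoulder of a~
    meets the right shoulder of b~."""
    if peak_a <= peak_b:
        return 1.0
    denom = lspread_a + rspread_b
    return max(0.0, 1.0 - (peak_a - peak_b) / denom) if denom > 0 else 0.0

def feasibility(x, peaks, lspreads, b_peaks, b_rspreads):
    """Membership mu_X~(x) of (11.11): the minimum of the constraint grades."""
    if any(xj < 0 for xj in x):
        return 0.0
    grades = []
    for row, grow, bp_i, br_i in zip(peaks, lspreads, b_peaks, b_rspreads):
        pa = sum(a * xj for a, xj in zip(row, x))   # peak of the fuzzy lhs
        ga = sum(g * xj for g, xj in zip(grow, x))  # its left spread
        grades.append(pos_leq(pa, ga, bp_i, br_i))
    return min(grades)

A = [[2.0, 3.0], [1.0, 1.0]]        # peaks of a~_ij
G = [[0.5, 0.5], [0.2, 0.2]]        # left spreads of a~_ij
bp, br = [12.0, 5.0], [2.0, 1.0]    # peaks and right spreads of b~_i
print(feasibility([3.0, 2.0], A, G, bp, br))    # 1.0 (all peaks within bounds)
print(feasibility([4.0, 2.0], A, G, bp, br))    # about 0.545
```

The names and data above are invented for illustration; only the closed form of the T = min case is used.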

11.3 Properties of Feasible Solution


Considering crisp parameters aij and bi, clearly, the feasible solution is also crisp. Moreover, it is not difficult to show that if the fuzzy parameters of two FLP problems are ordered by fuzzy inclusion, that is, ã′ij ⊂ ã″ij and b̃′i ⊂ b̃″i, then the same inclusion holds for the feasible solutions, i.e. X̃′ ⊂ X̃″, on condition that the R̃i are T-fuzzy extensions of valued relations; see also Proposition 130 below.
Now, we derive special formulae which will allow for computing an α-feasible
solution x ∈ [X̃]α of the FLP problem (11.5). For this purpose, the following
notation will be useful. Given α ∈ (0, 1], i ∈ M, j ∈ N , let

a̲ij(α) = inf{a ∈ R | a ∈ [ãij]α},    (11.13)
āij(α) = sup{a ∈ R | a ∈ [ãij]α},    (11.14)
b̲i(α) = inf{b ∈ R | b ∈ [b̃i]α},    (11.15)
b̄i(α) = sup{b ∈ R | b ∈ [b̃i]α}.    (11.16)

Theorem 127 Let for all i ∈ M, j ∈ N, ãij and b̃i be compact, convex and normal fuzzy quantities, and let xj ≥ 0 for all j ∈ N. Let T = min, S = max, and α ∈ (0, 1). Then for i ∈ M:
(i)
µ≤̃T(ãi1x1 +̃ ··· +̃ ãinxn, b̃i) ≥ α if and only if ∑_{j=1}^n a̲ij(α)xj ≤ b̄i(α),    (11.17)
(ii)
µ≤̃S(ãi1x1 +̃ ··· +̃ ãinxn, b̃i) ≥ α if and only if ∑_{j=1}^n āij(1 − α)xj ≤ b̲i(1 − α).    (11.18)
(iii) Moreover, if ãij and b̃i are strictly convex fuzzy quantities, then
µ≤̃T,S(ãi1x1 +̃ ··· +̃ ãinxn, b̃i) ≥ α if and only if ∑_{j=1}^n āij(1 − α)xj ≤ b̄i(α),    (11.19)
where µ≤̃T,S denotes the common value µ≤̃^{T,S} = µ≤̃_{T,S}, and
(iv)
µ≤̃S,T(ãi1x1 +̃ ··· +̃ ãinxn, b̃i) ≥ α if and only if ∑_{j=1}^n a̲ij(α)xj ≤ b̲i(1 − α),    (11.20)
where µ≤̃S,T denotes the common value µ≤̃^{S,T} = µ≤̃_{S,T}.

Proof. We present here only the proof of part (i); the other parts follow analogously by Theorem 65.
Let i ∈ M and µ≤̃T(ãi1x1 +̃ ··· +̃ ãinxn, b̃i) ≥ α. By Theorem 65, this is equivalent to

inf [∑_{j=1}^n ãijxj]α ≤ sup [b̃i]α.

By the well-known result of Nguyen, see [41] or [64], it follows that

[∑_{j=1}^n ãijxj]α = ∑_{j=1}^n [ãij]α xj.

Since [ãij]α, i ∈ M, j ∈ N, are compact and convex intervals in R and xj ≥ 0, j ∈ N, the rest of the proof follows easily from definitions (11.13), (11.14) and Proposition 111.
Let l, r ∈ R with l ≤ r, let γ, δ ∈ [0, +∞) and let L, R be non-increasing, non-constant, upper-semicontinuous functions mapping the interval (0, 1] into [0, +∞), i.e. L, R : (0, 1] → [0, +∞). Moreover, assume that L(1) = R(1) = 0 and define L(0) = lim_{x→0} L(x), R(0) = lim_{x→0} R(x).
Let A be an (L, R)-fuzzy interval given by the membership function defined for each x ∈ R by

µA(x) = { L(−1)((l − x)/γ) if x ∈ (l − γ, l), γ > 0,
        { 1 if x ∈ [l, r],
        { R(−1)((x − r)/δ) if x ∈ (r, r + δ), δ > 0,
        { 0 otherwise,
(11.21)

where L(−1), R(−1) are pseudo-inverse functions of L, R, respectively. As already mentioned in Chapter 9, the class of (L, R)-fuzzy intervals extends the class of crisp closed intervals [a, b] ⊂ R, including the case a = b, i.e. crisp numbers. In particular, if the membership functions of ãij and b̃i are given analytically by

µãij(x) = { L(−1)((lij − x)/γij) if x ∈ [lij − γij, lij), γij > 0,
          { 1 if x ∈ [lij, rij],
          { R(−1)((x − rij)/δij) if x ∈ (rij, rij + δij], δij > 0,
          { 0 otherwise,
(11.22)

and

µb̃i(x) = { L(−1)((li − x)/γi) if x ∈ [li − γi, li), γi > 0,
         { 1 if x ∈ [li, ri],
         { R(−1)((x − ri)/δi) if x ∈ (ri, ri + δi], δi > 0,
         { 0 otherwise,
(11.23)

for each x ∈ R, i ∈ M, j ∈ N, then the values of (11.13)-(11.16) can be computed as

a̲ij(α) = lij − γij L(α),   āij(α) = rij + δij R(α),
b̲i(α) = li − γi L(α),   b̄i(α) = ri + δi R(α).

Let GA = min. By Theorem 127, the α-cuts [X̃]α of the feasible solution of (11.5) can be computed by solving the corresponding system of inequalities from (11.17)-(11.20). Moreover, [X̃]α is an intersection of a finite number of halfspaces, hence a convex polyhedral set.
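The endpoint formulas above and the α-cut test of Theorem 127(i) can be sketched as follows; linear shapes L = R (the triangular/trapezoidal case) and all numerical data are illustrative assumptions.

```python
# Sketch (hypothetical data): endpoint formulas a_lo(alpha) = l_ij - g_ij L(alpha)
# and b_up(alpha) = r_i + d_i R(alpha) for linear shapes L = R, combined with the
# alpha-cut test of Theorem 127(i): x lies in [X~]_alpha iff
# sum_j a_lo(alpha) x_j <= b_up(alpha) holds for every constraint.

def shape(alpha):            # L(alpha) = R(alpha) = 1 - alpha (linear case)
    return 1.0 - alpha

def alpha_feasible(x, alpha, l, gam, r_b, d_b):
    if any(xj < 0 for xj in x):
        return False
    for li, gi, rbi, dbi in zip(l, gam, r_b, d_b):
        lhs = sum((lij - gij * shape(alpha)) * xj
                  for lij, gij, xj in zip(li, gi, x))
        if lhs > rbi + dbi * shape(alpha):
            return False
    return True

l = [[2.0, 3.0]]       # left core endpoints l_ij of a~_ij
gam = [[0.5, 0.5]]     # left spreads gamma_ij
r_b = [12.0]           # right core endpoints r_i of b~_i
d_b = [2.0]            # right spreads delta_i
print(alpha_feasible([4.0, 2.0], 0.5, l, gam, r_b, d_b))   # True
print(alpha_feasible([4.0, 2.0], 0.9, l, gam, r_b, d_b))   # False
```

Raising α shrinks the α-cut: the same point that is 0.5-feasible fails the stricter 0.9-cut.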

11.4 Properties of Optimal Solutions


As the FLP problem is a particular case of the FMP problem, all properties
and results which have been derived in Chapter 10 are applicable to any FLP
problem.
We assume the existence of an exogenously given additional goal d̃ ∈ F(R), a fuzzy set of the real line. The fuzzy goal d̃ is compared with the fuzzy values c̃1x1 +̃ ··· +̃ c̃nxn of the objective function by a given fuzzy relation R̃0. In this way the fuzzy objective is treated as another constraint:

c̃1x1 +̃ ··· +̃ c̃nxn R̃0 d̃.

We obtain a modification of Definition 126.

Definition 128 Let f , gi be linear functions defined by (11.1), (11.2). Let


µc̃j : R → [0, 1], µãij : R → [0, 1] and let µb̃i : R → [0, 1], i ∈ M, j ∈ N ,
be membership functions of normal fuzzy quantities c̃j , ãij and b̃i , respectively.
Moreover, let d˜ ∈ F(R) be a fuzzy set of the real line called the fuzzy goal. Let
R̃i , i ∈ {0} ∪ M, be fuzzy relations on R and let T be a t-norm, let A and AG
be aggregation operators.
A fuzzy set X̃ ∗ with the membership function µX̃ ∗ defined for all x ∈ Rn by
µX̃∗(x) = AG(µR̃0(c̃1x1 +̃ ··· +̃ c̃nxn, d̃), µX̃(x)),    (11.24)

where µX̃ (x) is the membership function of the feasible solution, is called the
optimal solution of FLP problem (11.5).
For α ∈ (0, 1] a vector x ∈ [X̃ ∗ ]α is called the α-optimal solution of FLP problem
(11.5).
A vector x∗ ∈ Rn with the property

µX̃ ∗ (x∗ ) = Hgt(X̃ ∗ ) (11.25)

is called the max-optimal solution.



Notice that the optimal solution of the FLP problem is a fuzzy set. On the
other hand, the α-optimal solution is a vector belonging to the α-cut [X̃ ∗ ]α .
Likewise, the max-optimal solution is an α-optimal solution with α = Hgt(X̃ ∗ ).
Notice that in view of Chapter 9 a max-optimal solution is in fact a max-AG
decision on Rn .
In Definition 128, the t-norm T and the aggregation operators A and AG have been used: T for extending the arithmetic operations, A for aggregating the individual constraints into the single feasible solution, and AG for aggregating the fuzzy set of the feasible solution with the fuzzy set of the objective X̃0 defined by the membership function

µX̃0(x) = µR̃0(c̃1x1 +̃ ··· +̃ c̃nxn, d̃),    (11.26)

x ∈ Rn . As a result, we have obtained the membership function of optimal


solution X̃ ∗ defined for all x ∈ Rn by
µX̃∗(x) = AG(µX̃0(x), µX̃(x)).    (11.27)

Since problem (11.5) is a maximization problem ("the higher the value, the better"), the membership function µd̃ of d̃ is supposed to be increasing or non-decreasing. The fuzzy relation R̃0 for comparing c̃1x1 +̃ ··· +̃ c̃nxn and d̃ is supposed to be a fuzzy extension of ≥.
Formally, in Definitions 126 and 128 the concepts of feasible solution, optimal solution, etc., correspond to each other. Therefore, we can derive some properties of optimal solutions of the FLP problem from the case of the feasible solution studied in the preceding section.
We first observe that in case of crisp parameters cj , aij and bi , the set of
all max-optimal solutions given by (11.25) coincides with the set of all optimal
solutions of the crisp problem. We have the following theorem.

Proposition 129 Let cj, aij, bi ∈ R be crisp parameters of (11.5) for all i ∈ M, j ∈ N. Let d̃ ∈ F(R) be a fuzzy goal with a strictly increasing membership function µd̃. Let, for i ∈ M, R̃i be a fuzzy extension of the relation "≤" on R, and let R̃0 be a T-fuzzy extension of the relation "≥". Let T, A and AG be t-norms.
Then the set of all max-optimal solutions coincides with the set X∗ of all optimal solutions of LP problem (11.3).

Proof. Clearly, the feasible solution of (11.5) is crisp, i.e.

µX̃(x) = χX(x)

for all x ∈ Rn, where X is the set of all feasible solutions (11.4) of the crisp LP problem (11.3). Moreover, by (11.26) we obtain for crisp c ∈ Rn

µX̃0(x) = µR̃0(f(x; c), d̃) = µd̃(c1x1 + ··· + cnxn).

Substituting (10.62) and (10.63) into (11.27) we obtain

µX̃∗(x) = AG(µd̃(f(x; c)), χX(x)) = { µd̃(c1x1 + ··· + cnxn) if x ∈ X,
                                    { 0 otherwise.

Since µd˜ is strictly increasing, by (10.64) it follows that µX̃ ∗ (x∗ ) = Hgt(X̃ ∗ ) if
and only if
µX̃ ∗ (x∗ ) = sup{µd˜(c1 x1 + · · · + cn xn )|x ∈ X},
which is the desired result.
Proposition 130 Let c̃′j, ã′ij and b̃′i and c̃″j, ã″ij and b̃″i be two collections of fuzzy parameters of FLP problem (11.5), i ∈ M, j ∈ N. Let T, A, AG be t-norms. Let R̃i, i ∈ {0} ∪ M, be T-fuzzy extensions of valued relations Ri on R. If X̃∗′ is the optimal solution of FLP problem (11.5) with the parameters c̃′j, ã′ij and b̃′i, and X̃∗″ is the optimal solution of the FLP problem with the parameters c̃″j, ã″ij and b̃″i such that for all i ∈ M, j ∈ N,

c̃′j ⊂ c̃″j,   ã′ij ⊂ ã″ij   and   b̃′i ⊂ b̃″i,    (11.28)

then
X̃∗′ ⊂ X̃∗″.    (11.29)

Proof. First we show that the feasible solutions satisfy X̃′ ⊂ X̃″. Let x ∈ Rn and i ∈ M. We first show that

ã′i1x1 +̃ ··· +̃ ã′inxn ⊂ ã″i1x1 +̃ ··· +̃ ã″inxn.

Indeed, by (8.17), for each u ∈ R,

µã′i1x1+̃···+̃ã′inxn(u) = sup{T(µã′i1(a1), ..., µã′in(an)) | a1x1 + ··· + anxn = u}
  ≤ sup{T(µã″i1(a1), ..., µã″in(an)) | a1x1 + ··· + anxn = u}
  = µã″i1x1+̃···+̃ã″inxn(u).

Now, as b̃′i ⊂ b̃″i, using the monotonicity of the T-fuzzy extension R̃i of Ri, it follows that

µR̃i(ã′i1x1 +̃ ··· +̃ ã′inxn, b̃′i) ≤ µR̃i(ã″i1x1 +̃ ··· +̃ ã″inxn, b̃″i).

Then, applying again the monotonicity of A in (11.11), we obtain X̃′ ⊂ X̃″.
It remains to show that X̃′0 ⊂ X̃″0, where

µX̃′0(x) = µR̃0(f̃(x; c̃′), d̃),   µX̃″0(x) = µR̃0(f̃(x; c̃″), d̃).

We show that f̃(x; c̃′) ⊂ f̃(x; c̃″). Indeed, since µc̃′j(c) ≤ µc̃″j(c) for all j ∈ N and all c ∈ R, by (11.8) we obtain for all u ∈ R

µc̃′1x1+̃···+̃c̃′nxn(u) = sup{T(µc̃′1(c1), ..., µc̃′n(cn)) | c1x1 + ··· + cnxn = u}
  ≤ sup{T(µc̃″1(c1), ..., µc̃″n(cn)) | c1x1 + ··· + cnxn = u}
  = µc̃″1x1+̃···+̃c̃″nxn(u).

Again, using the monotonicity of R̃0, it follows that

µR̃0(c̃′1x1 +̃ ··· +̃ c̃′nxn, d̃) ≤ µR̃0(c̃″1x1 +̃ ··· +̃ c̃″nxn, d̃).

Applying the monotonicity of AG in (11.27), we obtain X̃∗′ ⊂ X̃∗″.


Next, we extend Theorem 127 to the case of an optimal solution of an FLP problem. For this purpose we add some new notation as follows. Given α ∈ (0, 1], j ∈ N, let

c̲j(α) = inf{c | c ∈ [c̃j]α},    (11.30)
c̄j(α) = sup{c | c ∈ [c̃j]α},    (11.31)
d̲(α) = inf{d | d ∈ [d̃]α},    (11.32)
d̄(α) = sup{d | d ∈ [d̃]α}.    (11.33)

Proposition 131 Let for all i ∈ M, j ∈ N, c̃j, ãij and b̃i be compact, convex and normal fuzzy quantities, and let d̃ ∈ F(R) be a fuzzy goal with the membership function µd̃ satisfying the following conditions:

µd̃ is upper semicontinuous,
µd̃ is strictly increasing,    (11.34)
lim_{t→−∞} µd̃(t) = 0.

Let R̃i = ≤̃T, i ∈ M, where ≤̃T is the T-fuzzy extension of the binary relation ≤ on R, and let R̃0 = ≥̃T be the T-fuzzy extension of the binary relation ≥ on R. Let T = A = AG = min. Let X̃∗ be an optimal solution of FLP problem (11.5) and α ∈ (0, 1).
A vector x = (x1, ..., xn) ≥ 0 belongs to [X̃∗]α if and only if

∑_{j=1}^n c̄j(α)xj ≥ d̲(α),    (11.35)

∑_{j=1}^n a̲ij(α)xj ≤ b̄i(α), i ∈ M.    (11.36)

Proof. The proof is omitted since it is analogous to the proof of Theorem 127, part (i), with the simple modification that instead of the compactness of d̃ we use assumptions (11.34).
For the sake of simplicity, in Proposition 131 we confined ourselves to the case of the T-fuzzy extensions of the valued relations ≤ and ≥ on R, i.e. R̃i = ≤̃T for i ∈ M and R̃0 = ≥̃T. Evidently, similar results can be obtained for the other fuzzy extensions, e.g. R̃i ∈ {≤̃T, ≤̃S, ≤̃^{T,S}, ≤̃^{S,T}, ≤̃_{T,S}, ≤̃_{S,T}}, as in [28].
If the membership functions of the fuzzy parameters c̃j , ãij and b̃i can be
formulated in an explicit form, e.g. similar to that of (8.18), (11.23), then we
can find an optimal solution with maximal height as the (crisp) optimal solution
of some optimization problem.

Proposition 132 Consider FLP problem (11.5), where for each x = (x1, ..., xn) ∈ Rn and every i ∈ M

µX̃0(x) = µR̃0(c̃1x1 +̃ ··· +̃ c̃nxn, d̃)

and

µX̃i(x) = µR̃i(ãi1x1 +̃ ··· +̃ ãinxn, b̃i)

are the membership functions of the fuzzy objective and the fuzzy constraints, respectively. Let T = A = AG = min and let (11.34) hold for d̃.
A vector (t∗, x∗) ∈ Rn+1 is an optimal solution of the optimization problem

maximize t
subject to µX̃i(x) ≥ t, i ∈ {0} ∪ M,    (11.37)
xj ≥ 0, j ∈ N,

if and only if x∗ ∈ Rn is a max-optimal solution of FLP problem (11.5).

Proof. Let (t∗, x∗) ∈ Rn+1 be an optimal solution of problem (11.37). By (11.24) and (11.25) we obtain

µX̃∗(x∗) = sup{min{µX̃0(x), µX̃(x)} | x ∈ Rn} = Hgt(X̃∗).

Hence, x∗ is a max-optimal solution. The proof of the opposite implication is omitted.
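A minimal numerical sketch of Proposition 132: since for fixed t the conditions µX̃i(x) ≥ t become crisp linear inequalities (cf. Proposition 131), the height Hgt(X̃∗) can be located by bisection on t with a feasibility test at each level. The one-variable triangular data below are purely illustrative.

```python
# Sketch of Proposition 132 (hypothetical one-variable data): for fixed t the
# conditions mu_X0(x) >= t and mu_X1(x) >= t are crisp linear inequalities,
# so Hgt(X~*) is found by bisection on t with a feasibility oracle.
# Data: a~ = (1, 2, 3) triangular, b~ with peak 10 and right spread 2,
# c~ = (2, 3, 4), and a goal d~ fully satisfied at 18 and not at all below 12.

def feasible_at(t):
    a_lo = 2.0 - (1.0 - t)           # lower end of the t-cut of a~
    b_up = 10.0 + 2.0 * (1.0 - t)    # upper end of the t-cut of b~
    c_up = 3.0 + (1.0 - t)           # upper end of the t-cut of c~
    d_lo = 18.0 - 6.0 * (1.0 - t)    # lower end of the t-cut of the goal
    x_max = b_up / a_lo              # largest x >= 0 with a_lo * x <= b_up
    return c_up * x_max >= d_lo      # can the goal level still be reached?

def height(tol=1e-9):
    """Hgt(X~*) by bisection; feasible_at is monotone (non-increasing) in t."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible_at(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(height())   # about 0.868
```

For larger problems the feasibility oracle would be an LP solver; the bisection scheme itself is unchanged.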

11.5 Extended Addition in FLP


In Theorem 127 and Proposition 131, formulae (11.17) and (11.36) hold only under the assumption that T = TM. This t-norm has been used not only for the T-fuzzy extensions of the binary relations on R, but also for extending the linear functions, that is, the objective function and the constraints of the FLP problem.
In this section we investigate the problem of computing the sums

f̃(x; c̃1, ..., c̃n) = c̃1x1 +̃T ··· +̃T c̃nxn,    (11.38)

and

g̃i(x; ãi1, ..., ãin) = ãi1x1 +̃T ··· +̃T ãinxn    (11.39)

for each x ∈ Rn, where c̃j, ãij ∈ F(R) for all i ∈ M, j ∈ N. Formulae (11.38) and (11.39) are defined by (11.8) and (8.20), respectively, that is, by the extension principle. Here +̃T denotes that the extended summation is performed with the t-norm T. Note that for an arbitrary t-norm T, exact formulae for (11.38) and (11.39) may be complicated or even unavailable in closed form. However, in some special cases such formulae exist; some of them are given below.
For the sake of brevity we deal only with (11.38); for (11.39) the results can be obtained analogously.

Let F, G : (0, +∞) → [0, 1] be non-increasing left-continuous functions. For γ, δ ∈ (0, +∞), define functions Fγ, Gδ : (0, +∞) → [0, 1] by

Fγ(x) = F(x/γ),
Gδ(x) = G(x/δ),    (11.40)

where x ∈ (0, +∞). Let lj, rj ∈ R with lj ≤ rj, let γj, δj ∈ (0, +∞) and let

c̃j = (lj, rj, Fγj, Gδj), j ∈ N,    (11.41)

be closed fuzzy intervals with the membership functions given by

µc̃j(x) = { Fγj(lj − x) if x ∈ (−∞, lj),
          { 1 if x ∈ [lj, rj],
          { Gδj(x − rj) if x ∈ (rj, +∞),
(11.42)

see also Chapter 8. In the following proposition we state that c̃1x1 +̃ ··· +̃ c̃nxn is again a closed fuzzy interval of the same type. The proof is omitted, as it is a straightforward application of the extension principle; for references, see [38].
Proposition 133 Let c̃j = (lj, rj, Fγj, Gδj), j ∈ N, be closed fuzzy intervals with the membership functions given by (11.42), let x = (x1, ..., xn) ∈ Rn with xj ≥ 0 for all j ∈ N, and denote

Ix = {j | xj > 0, j ∈ N}.

Then

c̃1x1 +̃TM ··· +̃TM c̃nxn = (l, r, F_{lM}, G_{rM}),    (11.43)

c̃1x1 +̃TD ··· +̃TD c̃nxn = (l, r, F_{lD}, G_{rD}),    (11.44)

where TM is the minimum t-norm, TD is the drastic product and

l = ∑_{j∈Ix} lj xj,   r = ∑_{j∈Ix} rj xj,    (11.45)

lM = ∑_{j∈Ix} γj/xj,   rM = ∑_{j∈Ix} δj/xj,    (11.46)

lD = max{γj/xj | j ∈ Ix},   rD = max{δj/xj | j ∈ Ix}.    (11.47)

If all c̃j are (L, R)-fuzzy intervals, then we can obtain an analogous and more specific result. Let lj, rj ∈ R with lj ≤ rj, let γj, δj ∈ [0, +∞) and let L, R be non-increasing, non-constant, upper-semicontinuous functions mapping the interval (0, 1] into [0, +∞), i.e. L, R : (0, 1] → [0, +∞). Moreover, assume that L(1) = R(1) = 0, and define L(0) = lim_{x→0} L(x), R(0) = lim_{x→0} R(x).
Let for every j ∈ N ,

c̃j = (lj, rj, γj, δj)LR    (11.48)

be an (L, R)-fuzzy interval given by the membership function defined for each x ∈ R by

µc̃j(x) = { L(−1)((lj − x)/γj) if x ∈ (lj − γj, lj), γj > 0,
          { 1 if x ∈ [lj, rj],
          { R(−1)((x − rj)/δj) if x ∈ (rj, rj + δj), δj > 0,
          { 0 otherwise,
(11.49)

where L(−1) , R(−1) are pseudo-inverse functions of L, R, respectively.


Proposition 134 Let c̃j = (lj, rj, γj, δj)LR, j ∈ N, be (L, R)-fuzzy intervals with the membership functions given by (11.49) and let x = (x1, ..., xn) ∈ Rn, xj ≥ 0 for all j ∈ N. Then

c̃1x1 +̃TM ··· +̃TM c̃nxn = (l, r, AM, BM)LR,    (11.50)

c̃1x1 +̃TD ··· +̃TD c̃nxn = (l, r, AD, BD)LR,    (11.51)

where TM is the minimum t-norm, TD is the drastic product and

l = ∑_{j∈N} lj xj,   r = ∑_{j∈N} rj xj,    (11.52)

AM = ∑_{j∈N} γj xj,   BM = ∑_{j∈N} δj xj,    (11.53)

AD = max{γj | j ∈ N},   BD = max{δj | j ∈ N}.    (11.54)
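The formulas of Proposition 134 translate directly into code; the sketch below (with made-up data) follows (11.52)-(11.54) as printed, representing an (L, R)-fuzzy interval by its tuple (l, r, γ, δ).

```python
# Sketch of Proposition 134's formulas (as printed): for (L,R)-fuzzy intervals
# c~_j = (l_j, r_j, gamma_j, delta_j)_LR and x >= 0, the T_M- and T_D-sums stay
# in the same family; only the spread rules differ.

def sum_TM(cs, x):
    """(11.50): cores and spreads are the weighted sums (11.52)-(11.53)."""
    l = sum(c[0] * xj for c, xj in zip(cs, x))
    r = sum(c[1] * xj for c, xj in zip(cs, x))
    g = sum(c[2] * xj for c, xj in zip(cs, x))
    d = sum(c[3] * xj for c, xj in zip(cs, x))
    return (l, r, g, d)

def sum_TD(cs, x):
    """(11.51): same core; the drastic-product spreads are the maxima (11.54)."""
    l, r, _, _ = sum_TM(cs, x)
    return (l, r, max(c[2] for c in cs), max(c[3] for c in cs))

cs = [(1.0, 2.0, 0.5, 0.5), (3.0, 4.0, 1.0, 2.0)]
x = [2.0, 1.0]
print(sum_TM(cs, x))   # (5.0, 8.0, 2.0, 3.0)
print(sum_TD(cs, x))   # (5.0, 8.0, 1.0, 2.0)
```

Note how the TM-sum accumulates the spreads while the TD-sum keeps only the largest one, reflecting that TD is the weakest t-norm.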

The results (11.44) and (11.51) in Propositions 133 and 134, respectively, can be extended as follows; see [38].

Proposition 135 Let T be a continuous Archimedean t-norm with an additive generator f. Let F : (0, +∞) → [0, 1] be defined for each x ∈ (0, +∞) by

F(x) = f(−1)(x).

Let c̃j = (lj, rj, Fγj, Fδj), j ∈ N, be closed fuzzy intervals with the membership functions given by (11.42), let x = (x1, ..., xn) ∈ Rn with xj ≥ 0 for all j ∈ N, and let Ix = {j | xj > 0, j ∈ N}. Then

c̃1x1 +̃T ··· +̃T c̃nxn = (l, r, F_{lD}, F_{rD}),

where
l = ∑_{j∈Ix} lj xj,   r = ∑_{j∈Ix} rj xj,

lD = max{γj/xj | j ∈ Ix},   rD = max{δj/xj | j ∈ Ix}.

Note that for a continuous Archimedean t-norm T and closed fuzzy intervals c̃j satisfying the assumptions of Proposition 135, we have

c̃1x1 +̃T ··· +̃T c̃nxn = c̃1x1 +̃TD ··· +̃TD c̃nxn,

which means that we obtain the same fuzzy linear function for an arbitrary t-norm T′ such that T′ ≤ T.
The following proposition generalizes several results concerning the addition of closed fuzzy intervals based on continuous Archimedean t-norms; see [38].

Proposition 136 Let T be a continuous Archimedean t-norm with an additive generator f. Let K : [0, +∞) → [0, +∞) be a continuous convex function with K(0) = 0. Let α ∈ (0, +∞) and

Fα(x) = f(−1)(αK(x/α))

for all x ∈ [0, +∞). Let c̃j = (lj, rj, Fγj, Fδj), j ∈ N, be closed fuzzy intervals with the membership functions given by (11.42), let x = (x1, ..., xn) ∈ Rn with xj ≥ 0 for all j ∈ N, and let Ix = {j | xj > 0, j ∈ N}. Then

c̃1x1 +̃T ··· +̃T c̃nxn = (l, r, F_{lK}, F_{rK}),    (11.55)

where
l = ∑_{j∈Ix} lj xj,   r = ∑_{j∈Ix} rj xj,    (11.56)

lK = ∑_{j∈Ix} γj/xj,   rK = ∑_{j∈Ix} δj/xj.    (11.57)

Two immediate consequences can be obtained from Proposition 136:
(i) The sum based on the product t-norm TP of Gaussian fuzzy numbers, see Example 33, is again a Gaussian fuzzy number. Indeed, the additive generator f of the product t-norm TP is given by f(x) = −log(x). Let K(x) = x². Then

Fα(x) = f(−1)(αK(x/α)) = e^(−x²/α).

By Proposition 136 we obtain the required result.
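Consequence (i) can be checked numerically: the sketch below approximates the sup-TP convolution of two Gaussian membership functions on a grid and compares it with the predicted Gaussian centred at m1 + m2 with parameter γ1 + γ2 (all numbers illustrative).

```python
import math

# Numerical check (illustrative) of consequence (i): the T_P-based sum of two
# Gaussian fuzzy numbers mu_j(t) = exp(-(t - m_j)^2 / g_j) is again Gaussian,
# centred at m_1 + m_2 with parameter g_1 + g_2.  We approximate the
# sup-T_P convolution  mu(t) = sup_u mu_1(u) * mu_2(t - u)  on a grid.

m1, g1, m2, g2 = 0.0, 2.0, 1.0, 3.0
step = 0.02
grid = [i * step - 10.0 for i in range(1001)]   # u-grid on [-10, 10]

def mu(t, m, g):
    return math.exp(-(t - m) ** 2 / g)

worst = 0.0
for k in range(-8, 12):                          # test points t in [-4.0, 5.5]
    t = 0.5 * k
    approx = max(mu(u, m1, g1) * mu(t - u, m2, g2) for u in grid)
    exact = mu(t, m1 + m2, g1 + g2)              # predicted Gaussian sum
    worst = max(worst, exact - approx)
print("max deviation:", worst)                   # small discretisation error
```

The grid maximum always underestimates the true supremum, so the deviation measures only the discretisation error, which is tiny for this step size.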
(ii) The sum based on the Yager t-norm TλY, see [38], of closed fuzzy intervals generated by the same K is again a closed fuzzy interval of the same type. Observe that an additive generator fλY of the Yager t-norm TλY is given by fλY(x) = (1 − x)^λ. For λ ∈ (0, +∞) we obtain

Fα(x) = max{0, 1 − x/α^((λ−1)/λ)}.

This means that each piecewise linear fuzzy number (l, r, γ, δ) can be written as

(l, r, γ, δ) = (l, r, F_{γ^{λ/(λ−1)}}, F_{δ^{λ/(λ−1)}}),

and the sum of piecewise linear fuzzy numbers c̃j = (lj, rj, γj, δj), j ∈ N, is again a piecewise linear fuzzy number

(l, r, γ, δ),

where l and r are given by (11.56), and γ and δ are given as

γ = ∑_{j∈N} γj^{λ/(λ−1)},   δ = ∑_{j∈N} δj^{λ/(λ−1)}.

The extensions can be obtained also for some other t-norms, see e.g. [38],
[83].
An alternative approach based on centered fuzzy numbers will be mentioned
later in this chapter, see also [42], [43].

11.6 Duality
In this section we generalize the well known concept of duality in LP for FLP
problems. The results of this section, in a more general setting, can be found in
[64]. We derive some weak and strong duality results which extend the known
results for LP problems.
Consider the following FLP problem:

maximize c̃1x1 +̃ ··· +̃ c̃nxn
subject to
ãi1x1 +̃ ··· +̃ ãinxn R̃ b̃i, i ∈ M,    (11.58)
xj ≥ 0, j ∈ N.

Here, the parameters c̃j , ãij and b̃i are considered as normal fuzzy quantities,
i.e. µc̃j : R → [0, 1], µãij : R → [0, 1] and µb̃i : R → [0, 1], i ∈ M, j ∈ N . Let
R̃ be a fuzzy extension of a valued relation R on R. FLP problem (11.58) will
be called the primal FLP problem (P).
The dual FLP problem (D) is defined as

minimize b̃1y1 +̃ ··· +̃ b̃mym
subject to
ã1jy1 +̃ ··· +̃ ãmjym ∗R̃ c̃j, j ∈ N,    (11.59)
yi ≥ 0, i ∈ M.

Here, ∗R̃ is the fuzzy relation dual to R̃ on R, as in Definition 20 in Part I. Recall the properties of the dual fuzzy extensions discussed in Proposition 31. Taking combinations of "primal" and dual fuzzy relations from (i)-(iv) of Proposition 31, we can create a number of primal-dual pairs of FLP problems. Further on, we shall investigate the primal-dual pair of FLP problems with the fuzzy relation ≤̃T and the corresponding dual fuzzy relation ≥̃S. Remember that S should be the t-conorm dual to the t-norm T, e.g. S = max for T = min.

Now, consider the following pair of FLP problems:

(P):
maximize c̃1x1 +̃ ··· +̃ c̃nxn
subject to
ãi1x1 +̃ ··· +̃ ãinxn ≤̃T b̃i, i ∈ M,    (11.60)
xj ≥ 0, j ∈ N.

(D):
minimize b̃1y1 +̃ ··· +̃ b̃mym
subject to
ã1jy1 +̃ ··· +̃ ãmjym ≥̃S c̃j, j ∈ N,    (11.61)
yi ≥ 0, i ∈ M.

Let the feasible solution of the primal FLP problem (P) be denoted by X̃,
the feasible solution of the dual FLP problem (D) by Ỹ . Clearly, X̃ is a fuzzy
subset of Rn , Ỹ is a fuzzy subset of Rm .
Notice that in the crisp case, i.e. when the parameters c̃j, ãij and b̃i are crisp real numbers, the relation ≤̃T coincides with ≤ and ≥̃S coincides with ≥; hence (P) and (D) form a primal-dual pair of LP problems in the usual sense.
In the following proposition we prove the weak form of the duality theorem
for FLP problems.

Proposition 137 Let for all i ∈ M, j ∈ N, c̃j, ãij and b̃i be compact, convex and normal fuzzy quantities. Let ≤̃T be the T-fuzzy extension of the binary relation ≤ on R defined by (8.27) and let ≥̃S be the fuzzy extension of the relation ≥ on R defined by (8.37). Let A = T = min, S = max. Let X̃ be a feasible solution of FLP problem (11.60), Ỹ a feasible solution of FLP problem (11.61), and let α ∈ [0.5, 1).
If a vector x = (x1, ..., xn) ≥ 0 belongs to [X̃]α and y = (y1, ..., ym) ≥ 0 belongs to [Ỹ]α, then

∑_{j∈N} c̄j(1 − α)xj ≤ ∑_{i∈M} b̄i(1 − α)yi.    (11.62)

Proof. Let x ∈ [X̃]α and y ∈ [Ỹ]α, xj ≥ 0, yi ≥ 0 for all i ∈ M, j ∈ N. Then, by Theorem 127 applied to the constraints of (11.61), we obtain for all j ∈ N

∑_{i=1}^m a̲ij(1 − α)yi ≥ c̄j(1 − α).    (11.63)

Since α ≥ 0.5, it follows that 1 − α ≤ α, hence [X̃]α ⊂ [X̃]1−α. Again by Theorem 127 we obtain for all i ∈ M

∑_{j=1}^n a̲ij(1 − α)xj ≤ b̄i(1 − α).    (11.64)

Multiplying both sides of (11.63) and (11.64) by xj and yi, respectively, and summing up the results, we obtain

∑_{j=1}^n c̄j(1 − α)xj ≤ ∑_{j=1}^n ∑_{i=1}^m a̲ij(1 − α)yixj ≤ ∑_{i=1}^m b̄i(1 − α)yi,

which is the desired result.
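A small numerical illustration of the weak duality inequality (11.62) for one primal variable and one constraint with triangular parameters; all data are hypothetical.

```python
# Illustration (hypothetical data) of weak duality (11.62): one primal variable,
# one constraint, triangular parameters a~ = (1.5, 2, 2.5), b~ = (9, 10, 11),
# c~ = (2.5, 3, 3.5), and alpha = 0.8.

alpha = 0.8
a, sa = 2.0, 0.5      # peak and (symmetric) spread of a~
b, sb = 10.0, 1.0     # ... of b~
c, sc = 3.0, 0.5      # ... of c~

def lo(peak, spread, t):  return peak - spread * (1.0 - t)  # lower end at level t
def up(peak, spread, t):  return peak + spread * (1.0 - t)  # upper end at level t

# largest alpha-feasible primal x, by (11.17): lo(a, alpha) * x <= up(b, alpha)
x = up(b, sb, alpha) / lo(a, sa, alpha)
# smallest alpha-feasible dual y (mirror of (11.18) for the dual constraint)
y = up(c, sc, 1.0 - alpha) / lo(a, sa, 1.0 - alpha)

lhs = up(c, sc, 1.0 - alpha) * x      # c_up(1 - alpha) * x
rhs = up(b, sb, 1.0 - alpha) * y      # b_up(1 - alpha) * y
print(lhs <= rhs)                     # True, as (11.62) asserts
```

Even at the extreme feasible points the primal value stays below the dual value, as the proposition guarantees.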


Notice that in the crisp case, (11.62) is nothing else than the standard weak duality. Let us now turn to strong duality.
For this purpose, we assume the existence of exogenously given additional goals d̃ ∈ F(R) and h̃ ∈ F(R), fuzzy sets of the real line. The fuzzy goal d̃ is compared with the fuzzy values c̃1x1 +̃ ··· +̃ c̃nxn of the objective function of the primal FLP problem (P) by a given fuzzy relation ≥̃T. On the other hand, the fuzzy goal h̃ is compared with the fuzzy values b̃1y1 +̃ ··· +̃ b̃mym of the objective function of the dual FLP problem (D) by a given fuzzy relation ≤̃S. In this way the fuzzy objectives are treated as constraints:

c̃1x1 +̃ ··· +̃ c̃nxn ≥̃T d̃,   b̃1y1 +̃ ··· +̃ b̃mym ≤̃S h̃.

Let the optimal solution of the primal FLP problem (P), defined by Defini-
tion 128, be denoted by X̃ ∗ , the optimal solution of the dual FLP problem (D),
defined also by Definition 128, by Ỹ ∗ . Clearly, X̃ ∗ is a fuzzy subset of Rn , Ỹ ∗
is a fuzzy subset of Rm , moreover, X̃ ∗ ⊂ X̃ and Ỹ ∗ ⊂ Ỹ .
Proposition 138 Let for all i ∈ M, j ∈ N, c̃j, ãij and b̃i be compact, convex and normal fuzzy quantities. Let d̃, h̃ ∈ F(R) be fuzzy goals with membership functions µd̃, µh̃ satisfying the following conditions:

(i) µd̃, µh̃ are upper semicontinuous,
(ii) µd̃ is strictly increasing, µh̃ is strictly decreasing,    (11.65)
(iii) lim_{t→−∞} µd̃(t) = lim_{t→+∞} µh̃(t) = 0.

Let ≤̃T be the T-fuzzy extension of the relation ≤ on R and let ≥̃S be the fuzzy extension of the relation ≥ on R. Let A = T = min, S = max. Let X̃∗ be an optimal solution of FLP problem (11.60), Ỹ∗ an optimal solution of FLP problem (11.61), and let α ∈ (0, 1).
If a vector x∗ = (x∗1, ..., x∗n) ≥ 0 belongs to [X̃∗]α, then there exists a vector y∗ = (y∗1, ..., y∗m) ≥ 0 which belongs to [Ỹ∗]1−α, and

∑_{j∈N} c̄j(α)x∗j = ∑_{i∈M} b̄i(α)y∗i.    (11.66)

Proof. Let x∗ = (x∗1, ..., x∗n) ≥ 0, x∗ ∈ [X̃∗]α. By Proposition 131,

∑_{j=1}^n c̄j(α)x∗j ≥ d̲(α),    (11.67)

∑_{j=1}^n a̲ij(α)x∗j ≤ b̄i(α), i ∈ M.    (11.68)

Consider the following LP problem:

(P1)  maximize ∑_{j=1}^n c̄j(α)xj
      subject to ∑_{j=1}^n a̲ij(α)xj ≤ b̄i(α), i ∈ M,
      xj ≥ 0, j ∈ N.

By conditions (11.65) concerning the fuzzy goal d̃, the system of inequalities (11.67), (11.68) is satisfied if and only if x∗ is an optimal solution of (P1). By the standard strong duality theorem for LP, there exists y∗ ∈ Rm which is an optimal solution of the dual problem

(D1)  minimize ∑_{i=1}^m b̄i(α)yi
      subject to ∑_{i=1}^m a̲ij(α)yi ≥ c̄j(α), j ∈ N,
      yi ≥ 0, i ∈ M,

such that (11.66) holds.
It remains only to prove that y∗ ∈ [Ỹ∗]1−α. This fact follows, however, from conditions (11.65) concerning the fuzzy goal h̃, and from (11.18).
Notice that in the crisp case, (11.66) is the standard strong duality result
for LP.

11.7 Special Models of FLP


Several models of FLP problem known from the literature are investigated in
this section.

11.7.1 Interval Linear Programming


In this subsection we apply the previous results to a special case of the FLP problem: the interval linear programming (ILP) problem. By an ILP problem we understand the following FLP problem:

maximize c̃1x1 +̃ ··· +̃ c̃nxn
subject to
ãi1x1 +̃ ··· +̃ ãinxn R̃ b̃i, i ∈ M,    (11.69)
xj ≥ 0, j ∈ N.

Here, the parameters c̃j, ãij and b̃i are considered to be compact intervals in R, i.e. c̃j = [c̲j, c̄j], ãij = [a̲ij, āij] and b̃i = [b̲i, b̄i], where c̲j, c̄j, a̲ij, āij and b̲i, b̄i are the lower and upper bounds of the corresponding intervals, respectively. The membership functions of c̃j, ãij and b̃i are the characteristic functions of the intervals, i.e. χ[c̲j, c̄j] : R → [0, 1], χ[a̲ij, āij] : R → [0, 1] and χ[b̲i, b̄i] : R → [0, 1], i ∈ M, j ∈ N. The fuzzy relation R̃ is considered as a fuzzy extension of

a valued relation R on R. We assume that R is the usual binary relation ≤, A = T = min, S = max, and

R̃ ∈ {≤̃T, ≤̃S, ≤̃^{T,S}, ≤̃^{S,T}, ≤̃_{T,S}, ≤̃_{S,T}}.

Then by Theorem 127 we obtain 6 types of feasible solutions of ILP problem (11.69), which reduce to the following four crisp sets:
(i)
X≤̃T = {x ∈ Rn | ∑_{j=1}^n a̲ij xj ≤ b̄i, i ∈ M, xj ≥ 0, j ∈ N}.    (11.70)
(ii)
X≤̃S = {x ∈ Rn | ∑_{j=1}^n āij xj ≤ b̲i, i ∈ M, xj ≥ 0, j ∈ N}.    (11.71)
(iii)
X≤̃^{T,S} = X≤̃_{T,S} = {x ∈ Rn | ∑_{j=1}^n āij xj ≤ b̄i, i ∈ M, xj ≥ 0, j ∈ N}.    (11.72)
(iv)
X≤̃^{S,T} = X≤̃_{S,T} = {x ∈ Rn | ∑_{j=1}^n a̲ij xj ≤ b̲i, i ∈ M, xj ≥ 0, j ∈ N}.    (11.73)

Clearly, the feasible solutions (11.70)-(11.73) are crisp subsets of Rn; moreover, they are all polyhedral.
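Membership in the four crisp feasible sets (11.70)-(11.73) requires only picking the right interval endpoints; a sketch with hypothetical interval data:

```python
# The four crisp feasible sets (11.70)-(11.73) of the ILP problem differ only
# in which endpoints of the interval data enter the inequalities; a membership
# test is a handful of comparisons.  Interval data below are hypothetical.

def in_feasible(x, A_lo, A_up, b_lo, b_up, kind):
    if any(xj < 0 for xj in x):
        return False
    for rlo, rup, bl, bu in zip(A_lo, A_up, b_lo, b_up):
        s_lo = sum(a * xj for a, xj in zip(rlo, x))   # lhs with lower endpoints
        s_up = sum(a * xj for a, xj in zip(rup, x))   # lhs with upper endpoints
        ok = {"T":  s_lo <= bu,       # (11.70)
              "S":  s_up <= bl,       # (11.71)
              "TS": s_up <= bu,       # (11.72)
              "ST": s_lo <= bl}[kind] # (11.73)
        if not ok:
            return False
    return True

A_lo, A_up = [[1.0, 2.0]], [[2.0, 3.0]]
b_lo, b_up = [8.0], [10.0]
x = [1.0, 3.0]
for kind in ("T", "S", "TS", "ST"):
    print(kind, in_feasible(x, A_lo, A_up, b_lo, b_up, kind))
```

The sets are nested: the set of type "S" is contained in the sets of types "TS" and "ST", which in turn are contained in the set of type "T", the most permissive one.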
In order to find an optimal solution of ILP problem (11.69), we consider a fuzzy goal d̃ ∈ F(R) and R̃0, a fuzzy extension of the usual binary relation ≥, for comparing the objective with the fuzzy goal.
In the following proposition we show that if the feasible solution of the ILP problem is crisp, then the set of its max-optimal solutions is the same as the set of all optimal solutions of the problem of maximizing a particular crisp objective function over the set of feasible solutions.

Proposition 139 Let X be a crisp feasible solution of ILP problem (11.69). Let d̃ ∈ F(R) be a fuzzy goal with the membership function µd̃ satisfying conditions (11.34). Let AG = A = T = min, S = max.
(i) If R̃0 is ≥̃T, then the set of all max-optimal solutions of ILP problem (11.69) coincides with the set of all optimal solutions of the problem

maximize ∑_{j=1}^n c̄j xj
subject to x ∈ X;    (11.74)

(ii) If R̃0 is ≥̃S, then the set of all max-optimal solutions of ILP problem (11.69) coincides with the set of all optimal solutions of the problem

maximize ∑_{j=1}^n c̲j xj
subject to x ∈ X.    (11.75)

Proof. (i) Let x ∈ X be a max-optimal solution of ILP problem (11.69) and write c̲ = ∑_{j=1}^n c̲j xj, c̄ = ∑_{j=1}^n c̄j xj. By our assumptions, (8.37) and (11.34) give

µ≥̃T(c̃1x1 +̃ ··· +̃ c̃nxn, d̃) = sup{min{µc̃1x1+̃···+̃c̃nxn(u), µd̃(v)} | u ≥ v}
  = sup{min{χ[c̲, c̄](u), µd̃(v)} | u ≥ v}
  = µd̃(∑_{j=1}^n c̄j xj).

Hence, x is an optimal solution of (11.74). Conversely, if x ∈ X is an optimal solution of (11.74), then by Definition 128 and by (11.34), x is a max-optimal solution of problem (11.69).
(ii) Analogously to the proof of (i), we have

µ≥̃S(c̃1x1 +̃ ··· +̃ c̃nxn, d̃) = inf{max{1 − µc̃1x1+̃···+̃c̃nxn(u), 1 − µd̃(v)} | u ≥ v}
  = inf{1 − min{χ[c̲, c̄](u), µd̃(v)} | u ≤ v}
  = µd̃(∑_{j=1}^n c̲j xj).

By the same arguments as in (i) we conclude the proof.


We close this section with several observations about duality in ILP problems.
Let the primal ILP problem (P) be problem (11.69) with R̃ = ≤̃T, i.e. (11.60). Then the dual ILP problem (D) is (11.61). Clearly, the feasible solution X≤̃T of (P) is defined by (11.70), and the feasible solution Y≥̃S of the dual problem (D) can be derived from (11.71) as

Y≥̃S = {y ∈ Rm | ∑_{i=1}^m a̲ij yi ≥ c̄j, j ∈ N, yi ≥ 0, i ∈ M}.    (11.76)
Notice that the problems

maximize ∑_{j=1}^n c̲j xj
subject to x ∈ X≤̃T

and

minimize ∑_{i=1}^m b̲i yi
subject to y ∈ Y≥̃S

are dual to each other in the usual (crisp) sense if and only if c̲j = c̄j and b̲i = b̄i for all i ∈ M and j ∈ N.
For ILP problems our results correspond to those of [24], [49], [82].

11.7.2 Flexible Linear Programming


The term flexible linear programming refers to an approach to LP problems, see e.g. [100], that allows for a kind of flexibility of the objective function and the constraints of the standard LP problem (11.3), that is,

maximize c1x1 + ··· + cnxn
subject to
ai1x1 + ··· + ainxn ≤ bi, i ∈ M,    (11.77)
xj ≥ 0, j ∈ N,

see also [83]. In (11.77) the values of the parameters cj, aij and bi are known; they are, however, uncertain or not fully reliable. That is why nonnegative values pi, i ∈ {0} ∪ M, of admissible violations of the objective and the constraints are (subjectively) chosen and added to the original model (11.77).
For the objective function, an aspiration value d0 ∈ R is also (subjectively)
determined such that if the objective function attains this value, or if it is
greater, then the decision maker (DM) is fully satisfied. On the other hand, if
the objective function attains a value smaller than d0 − p0 , then (DM) is fully
dissatisfied. Within the interval (d0 − p0 , d0 ), the satisfaction of DM increases
linearly from 0 to 1. By these considerations a membership function µd˜ of the
fuzzy goal d˜ is defined as follows

1 if t ≥ d0 ,
t−d0
µd˜(t) = { 1 + p0 if d0 − p0 ≤ t < d0 , (11.78)
0 otherwise.

Similarly, for the i-th constraint function of (11.77), i ∈ M, let a right-hand side bi ∈ R be known such that if the left-hand side attains this value, or a smaller one, then the decision maker (DM) is fully satisfied. On the other hand, if the left-hand side attains a value greater than bi + pi, then the DM is fully dissatisfied. Within the interval (bi, bi + pi), the satisfaction of the DM decreases linearly from 1 to 0. By these considerations a membership function µb̃i of the fuzzy right-hand side b̃i is defined as

µb̃i(t) = { 1                  if t ≤ bi,
         { 1 − (t − bi)/pi   if bi ≤ t < bi + pi,   (11.79)
         { 0                  otherwise.
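The two membership functions can be sketched directly. A minimal Python illustration of (11.78) and (11.79); the function names and the numeric values are ours, not from the text:

```python
def mu_goal(t, d0, p0):
    """Membership of the fuzzy goal d~, formula (11.78):
    1 above the aspiration level d0, linear on [d0 - p0, d0)."""
    if t >= d0:
        return 1.0
    if t >= d0 - p0:
        return 1.0 + (t - d0) / p0
    return 0.0

def mu_rhs(t, b, p):
    """Membership of the fuzzy right-hand side b~_i, formula (11.79):
    1 below b, linear on [b, b + p)."""
    if t <= b:
        return 1.0
    if t < b + p:
        return 1.0 - (t - b) / p
    return 0.0

print(mu_goal(9.0, 10.0, 4.0))   # → 0.75
print(mu_rhs(11.0, 10.0, 4.0))   # → 0.75
```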
11.7. SPECIAL MODELS OF FLP 171

The relationship between the objective function and the constraints in the flexible LP problem is considered fully symmetric; i.e. there is no longer a difference between the former and the latter. "Maximization" is then understood as finding a vector x ∈ Rn such that the membership grade of the intersection of all fuzzy sets (11.78) and (11.79) is maximal. In other words, we have to solve the following optimization problem:

maximize λ
subject to
µd̃(Σ_{j∈N} cj xj) ≥ λ,
µb̃i(Σ_{j∈N} aij xj) ≥ λ, i ∈ M,   (11.80)
0 ≤ λ ≤ 1, xj ≥ 0, j ∈ N.

Problem (11.80) can be easily transformed to the equivalent LP problem:

maximize λ
subject to
Σ_{j∈N} cj xj ≥ d0 − (1 − λ)p0,
Σ_{j∈N} aij xj ≤ bi + (1 − λ)pi, i ∈ M,   (11.81)
0 ≤ λ ≤ 1, xj ≥ 0, j ∈ N.
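Since the feasible region of (11.81) shrinks monotonically as λ grows, the maximal satisfaction degree can also be found numerically by bisection on λ. A minimal sketch for a single decision variable, assuming c, a > 0; all function names and numeric data are illustrative, not from the text:

```python
def feasible(lam, c, d0, p0, a, b, p):
    """For fixed lambda, the one-variable version of (11.81) reduces to
    an interval check:  (d0 - (1-lam)*p0)/c <= x <= (b + (1-lam)*p)/a."""
    lo = max(0.0, (d0 - (1.0 - lam) * p0) / c)
    hi = (b + (1.0 - lam) * p) / a
    return lo <= hi

def max_satisfaction(c, d0, p0, a, b, p, tol=1e-9):
    """Bisection on lambda in [0, 1]; returns None when even the fully
    relaxed problem (lambda = 0) is infeasible."""
    if not feasible(0.0, c, d0, p0, a, b, p):
        return None
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid, c, d0, p0, a, b, p):
            lo = mid
        else:
            hi = mid
    return lo

lam = max_satisfaction(c=3.0, d0=12.0, p0=6.0, a=2.0, b=6.0, p=2.0)
print(round(lam, 4))  # → 0.6667
```

For this instance the bound 3x ≥ 12 − 6(1 − λ) forces x ≥ 2 + 2λ while 2x ≤ 6 + 2(1 − λ) forces x ≤ 4 − λ, so the largest consistent λ is 2/3.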

In Section 11.2 we introduced FLP problem (11.5). Now, consider the following, more specific FLP problem:

maximize c1x1 + · · · + cnxn
subject to
ai1x1 + · · · + ainxn ≤̃T b̃i, i ∈ M,   (11.82)
xj ≥ 0, j ∈ N,

where cj, aij and bi are the same as above, that is, crisp numbers, whereas d̃ and b̃i are fuzzy quantities defined by (11.78) and (11.79). Moreover, ≤̃T is a T-fuzzy extension of the usual inequality relation ≤, with T = min. It turns out that a vector x ∈ Rn is an optimal solution of the flexible LP problem (11.81) if and only if it is a max-optimal solution of FLP problem (11.82). This statement follows directly from Theorem 132.
Notice that the piecewise linear membership functions (11.78) and (11.79) can be replaced by more general nondecreasing and nonincreasing functions, respectively. In general, problem (11.80) then cannot be equivalently transformed to the LP problem (11.81). Such a transformation is, however, sometimes possible, e.g. if all membership functions are generated by the same strictly monotone function.

11.7.3 FLP Problems with Interactive Fuzzy Parameters


In this subsection we shall deal with a fuzzy linear programming problem whose parameters are interactive fuzzy quantities as introduced in Chapter 8. Let f, gi be linear functions defined by (11.1), (11.2), i.e.

f(x; c1, ..., cn) = c1x1 + · · · + cnxn,
gi(x; ai1, ..., ain) = ai1x1 + · · · + ainxn, i ∈ M.


The parameters cj , aij and bi will be considered as normal fuzzy quantities,
that is normal fuzzy subsets of the Euclidean space R. Let µc̃j : R → [0, 1],
µãij : R → [0, 1] and µb̃i : R → [0, 1], i ∈ M, j ∈ N , be membership functions
of the fuzzy parameters c̃j , ãij and b̃i , respectively.
Let R̃i, i ∈ M, be fuzzy relations on F(R). Similarly to Section 11.2, we have an exogenously given fuzzy goal d̃ ∈ F(R) and another fuzzy relation R̃0 on R. Moreover, let Di = (di1, di2, . . . , din) be nonsingular n × n matrices (obliquity matrices), where the dij ∈ Rn are the columns of Di, i ∈ {0} ∪ M. The fuzzy linear programming problem with interactive parameters (FLP problem with IP) associated with LP problem (11.3) is denoted as

maximize c̃1x1 +̃D0 · · · +̃D0 c̃nxn
subject to
ãi1x1 +̃Di · · · +̃Di ãinxn R̃i b̃i, i ∈ M,   (11.83)
xj ≥ 0, j ∈ N.
Let us clarify the elements of (11.83).
The objective function values and the left hand sides values of the constraints
of (11.83) have been obtained by the extension principle (8.17) as follows. By
(8.60) we obtain

µãi(a) = T(µãi1(⟨di1, a⟩), µãi2(⟨di2, a⟩), ..., µãin(⟨din, a⟩)).   (11.84)

A membership function of g̃i(x; ãi) is defined for each t ∈ R by

µg̃i(t) = { sup{µãi(a) | a = (a1, ..., an) ∈ Rn, a1x1 + · · · + anxn = t}  if gi−1(x; t) ≠ ∅,   (11.85)
         { 0  otherwise,

where gi−1(x; t) = {(a1, ..., an) ∈ Rn | a1x1 + · · · + anxn = t}. Here, the fuzzy set g̃i(x; ãi) is denoted as ãi1x1 +̃Di · · · +̃Di ãinxn, i.e.

g̃i(x; ãi) = ãi1x1 +̃Di · · · +̃Di ãinxn   (11.86)

for every i ∈ M and for each x ∈ Rn.


Also, for given interactive c̃1, ..., c̃n ∈ F(R), by (8.60) we obtain

µc̃(c) = T(µc̃1(⟨d01, c⟩), µc̃2(⟨d02, c⟩), ..., µc̃n(⟨d0n, c⟩)).   (11.87)

A membership function of f̃(x; c̃) is defined for each t ∈ R by

µf̃(t) = { sup{µc̃(c) | c = (c1, ..., cn) ∈ Rn, c1x1 + · · · + cnxn = t}  if f−1(x; t) ≠ ∅,   (11.88)
        { 0  otherwise,

where f−1(x; t) = {(c1, ..., cn) ∈ Rn | c1x1 + · · · + cnxn = t}. Here, the fuzzy set f̃(x; c̃) is denoted as c̃1x1 +̃D0 · · · +̃D0 c̃nxn, i.e.

f̃(x; c̃) = c̃1x1 +̃D0 · · · +̃D0 c̃nxn   (11.89)

for each x ∈ Rn.
The treatment of FLP problem (11.83) is analogous to that of the FLP problems studied in the preceding sections. The following proposition demonstrates how the α-cuts of (11.86) and (11.89) can be calculated.
Let D0 be the nonsingular obliquity matrix and denote D0−1 = {d∗ij}, i, j = 1, ..., n. For x = (x1, · · · , xn) ∈ Rn we denote

I+x = {i ∈ N | xi ≥ 0},   I−x = {i ∈ N | xi < 0},

and for all i ∈ N

x∗i = Σ_{j=1}^n d∗ij xj.   (11.90)

Given α ∈ (0, 1], j ∈ N, let

cj(α) = inf{c | c ∈ [c̃j]α},   (11.91)
c̄j(α) = sup{c | c ∈ [c̃j]α}.   (11.92)

Proposition 140 Let c̃1, ..., c̃n ∈ FI(R) be compact interactive fuzzy intervals with an obliquity matrix D0, x = (x1, · · · , xn) ∈ Rn. Let T be a continuous t-norm and let f̃(x; c̃) = c̃1x1 +̃D0 · · · +̃D0 c̃nxn be defined by (11.88), α ∈ (0, 1]. Then

[f̃(x; c̃)]α = [ Σ_{j∈I+x∗} cj(α)x∗j + Σ_{j∈I−x∗} c̄j(α)x∗j ,  Σ_{j∈I+x∗} c̄j(α)x∗j + Σ_{j∈I−x∗} cj(α)x∗j ].   (11.93)

Proof. Observe that [c̃j]α = [cj(α), c̄j(α)]. The proof follows directly from (11.87), (11.88), (11.90) and Theorem 53.
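For a fixed α, formula (11.93) is ordinary interval arithmetic applied after the change of variables (11.90). A small pure-Python sketch; the names are ours, and the arrays `c_lo`, `c_hi` stand for the endpoints cj(α) and c̄j(α):

```python
def alpha_cut_objective(c_lo, c_hi, D0_inv, x):
    """α-cut [lower, upper] of the objective value per (11.93):
    transform x by the inverse obliquity matrix (11.90), then pick
    interval endpoints according to the sign of each x*_j."""
    n = len(x)
    x_star = [sum(D0_inv[i][j] * x[j] for j in range(n)) for i in range(n)]
    lo = sum((c_lo[j] if x_star[j] >= 0 else c_hi[j]) * x_star[j] for j in range(n))
    hi = sum((c_hi[j] if x_star[j] >= 0 else c_lo[j]) * x_star[j] for j in range(n))
    return lo, hi

# With D0 = identity the formula reduces to the usual interval dot product:
print(alpha_cut_objective([1, 2], [3, 4], [[1, 0], [0, 1]], [1, -1]))  # → (-3, 1)
```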
An analogous result can be formulated and proved for interactive fuzzy parameters in the constraints of (11.83), i.e. if ãi1, ..., ãin ∈ FI(R) are compact interactive fuzzy intervals with an obliquity matrix Di, i ∈ M. Then we can take advantage of Theorem 127.
The practical difficulty of FLP with interactive parameters is that the membership functions of the interactive parameters c̃j, ãij are not observable. Instead, marginal fuzzy parameters can be measured or estimated. The problem of a unique representation of interactive fuzzy parameters by their marginals has been resolved in [35] and [70].

11.7.4 FLP Problems with Centered Parameters


Interesting FLP models can be obtained if the parameters of the FLP prob-
lem are supposed to be centered fuzzy numbers called B-fuzzy intervals, see
Definition 36 in Chapter 8, or [41].
Let B be a basis of generators ordered by ⊂, and let ≤B be the partial ordering on the set FB(R) of all B-fuzzy intervals on R defined by (8.57) in Definition 36. Obviously, if B is completely ordered by ⊂, then FB(R) is completely ordered by ≤B. By Definition 36, each c̃ ∈ FB(R) can be uniquely represented by a couple (c, µ), where c ∈ R and µ ∈ B, such that

µc̃(t) = µ(c − t) for all t ∈ R;

we write c̃ = (c, µ).
Let ◦ be either the addition + or the multiplication · on R, and let ∗ be either the min or the max operation on B. Let us introduce on FB(R) the following four operations:

(a, f) ◦(∗) (b, g) = (a ◦ b, f ∗ g)   (11.94)

for all (a, f), (b, g) ∈ FB(R). It can be easily proved that the pairs of operations (+(min), ·(min)), (+(min), ·(max)), (+(max), ·(min)) and (+(max), ·(max)) are distributive. For more properties of these operations, see [43].
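The extended operations (11.94) act componentwise: on the centres by ordinary arithmetic, on the generators by min or max. A minimal Python sketch; as a simplifying assumption (not part of the text) generators are modelled by numbers whose usual order stands in for the ordering of the basis B:

```python
from dataclasses import dataclass
import operator

@dataclass(frozen=True)
class Centered:
    """A B-fuzzy interval c~ = (c, f): centre c and generator f."""
    center: float
    gen: float

def op(x, y, circ, star):
    """Extended operation (11.94): (a, f) o(*) (b, g) = (a o b, f * g)."""
    return Centered(circ(x.center, y.center), star(x.gen, y.gen))

a = Centered(2.0, 0.5)
b = Centered(3.0, 0.2)
print(op(a, b, operator.add, min))  # +(min) → Centered(center=5.0, gen=0.2)
print(op(a, b, operator.mul, max))  # ·(max) → Centered(center=6.0, gen=0.5)
```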
Now, let c̃j = (cj, fj), ãij = (aij, gij), b̃i = (bi, hi) ∈ FB(R) be B-fuzzy intervals, i ∈ M, j ∈ N. Let ¦ and ∗ be either the min or the max operation on B. Consider the following optimization problem:

maximize c̃1 ·(¦) x̃1 +(∗) · · · +(∗) c̃n ·(¦) x̃n
subject to
ãi1 ·(¦) x̃1 +(∗) · · · +(∗) ãin ·(¦) x̃n ≤B b̃i, i ∈ M,   (11.95)
x̃j ≥B 0̃, j ∈ N.
Here, maximization is performed with respect to the ordering ≤B. Moreover, x̃j = (xj, ξj), where xj ∈ R and ξj ∈ B, and 0̃ = (0, χ{0}). The constraints x̃j ≥B 0̃, j ∈ N, are equivalent to xj ≥ 0, j ∈ N. Compared with the previous approach, we consider a different concept of feasible and optimal solutions.
A feasible solution of the optimization problem (11.95) is a vector (x̃1, x̃2, · · · , x̃n) ∈ FB(R) × FB(R) × · · · × FB(R) satisfying the constraints

ãi1 ·(¦) x̃1 +(∗) · · · +(∗) ãin ·(¦) x̃n ≤B b̃i, i ∈ M,   (11.96)
x̃j ≥B 0̃, j ∈ N.   (11.97)

The set of all feasible solutions of (11.95) is denoted by XB.
An optimal solution of the optimization problem (11.95) is a vector (x̃∗1, x̃∗2, · · · , x̃∗n) ∈ FB(R) × FB(R) × · · · × FB(R) such that

z̃∗ = c̃1 ·(¦) x̃∗1 +(∗) · · · +(∗) c̃n ·(¦) x̃∗n   (11.98)

is the maximal element of the set

X∗B = {z̃ | z̃ = c̃1 ·(¦) x̃1 +(∗) · · · +(∗) c̃n ·(¦) x̃n, (x̃1, x̃2, · · · , x̃n) ∈ XB}.   (11.99)

Notice that for each of the four possible combinations of min and max in the operations ·(¦) and +(∗), (11.95) defines in fact an individual optimization problem.
Proposition 141 Let B be a completely ordered basis of generators. Let (x̃∗1, x̃∗2, · · · , x̃∗n) ∈ FB(R)n be an optimal solution of (11.95), where x̃∗j = (x∗j, ξ∗j), j ∈ N. Then the vector x∗ = (x∗1, · · · , x∗n) is an optimal solution of the following LP problem:

maximize c1x1 + · · · + cnxn
subject to
ai1x1 + · · · + ainxn ≤ bi, i ∈ M,   (11.100)
xj ≥ 0, j ∈ N.

Proof. The proof immediately follows from the definition (11.94) of the extended operations and from (8.57).
By Ax we denote the set of indices of all active constraints of (11.100) at x = (x1, · · · , xn), i.e.

Ax = {i ∈ M | ai1x1 + · · · + ainxn = bi}.

The following proposition gives a necessary condition for the existence of a feasible solution of (11.95). The proof can be found in [41].

Proposition 142 Let B be a completely ordered basis of generators. Let (x̃1, x̃2, · · · , x̃n) ∈ FB(R)n be a feasible solution of (11.95), where x̃j = (xj, ξj), j ∈ N. Then the vector x = (x1, · · · , xn) is a feasible solution of the LP problem (11.100) and it holds:
(i) if ¦ = max and ∗ = min, then min{ãij | j ∈ N} ≤B b̃i for all i ∈ Ax;
(ii) if ¦ = max and ∗ = max, then max{ãij | j ∈ N} ≤B b̃i for all i ∈ Ax.
In this subsection we have presented an alternative approach to LP problems with fuzzy parameters. In contrast to the approach presented in the previous sections, the decision variables xj have not been taken here as crisp numbers; they have been considered fuzzy intervals of the same type as the corresponding coefficients, i.e. the parameters of the optimization problem. From the computational point of view this approach is simple, as it requires solving only a classical LP problem.

11.8 Illustrative Examples


In this section we present two "one-dimensional" examples illustrating the basic concepts. The examples below could, in many aspects, be extended from R1 to Rn.

Example 143 Consider the following simple FLP problem in R1:

maximize c̃x
subject to
ãx ≤̃ b̃,   (11.101)
x ≥ 0.

Here, c̃, ã and b̃ are supposed to be crisp subsets of R, namely closed bounded intervals: c̃ = [c, c̄], ã = [a, ā], b̃ = [b, b̄], with c, b > 0. Let T = min. Remember that the membership functions of c̃, ã and b̃ are their characteristic functions. The fuzzy relations ≤̃ and ≥̃ are assumed to be T-fuzzy extensions of the binary relations ≤ and ≥, respectively.

(I) Membership functions of c̃x and ãx.

By (8.17) we obtain for every t ∈ R:

µc̃x(t) = sup{χ[c,c̄](c) | c ∈ R, cx = t} = { 1 if cx ≤ t ≤ c̄x,   (11.102)
                                           { 0 otherwise.

Similarly, we obtain the membership function of ãx as

µãx(t) = sup{χ[a,ā](a) | a ∈ R, ax = t} = { 1 if ax ≤ t ≤ āx,   (11.103)
                                          { 0 otherwise.

Now we derive the membership functions µ≤̃ and µ≥̃ of the fuzzy relations ≤̃ and ≥̃, respectively. By (8.37) we obtain

µ≥̃(c̃y, c̃x) = sup{min{µc̃y(u), µc̃x(v)} | u ≥ v},
µ≤̃(ãx, b̃) = sup{min{µãx(u), µb̃(v)} | u ≤ v}.   (11.104)
A feasible solution can be calculated as follows.
By (11.11) a feasible solution X̃ of the FLP problem (11.101) is given by the membership function

µX̃(x) = min{µ≤̃(ãx, b̃), χ[0,+∞)(x)}.   (11.105)

By (11.104) and (11.103), we get

µ≤̃(ãx, b̃) = sup{min{χ[ax,āx](u), χ[b,b̄](v)} | u ≤ v}   (11.106)
           = { 1 if ax ≤ b̄,
             { 0 otherwise.

Consider 3 cases of the value of a:

Case 1: a > 0. From (11.106) it follows that

µ≤̃(ãx, b̃) = χ[0, b̄/a](x).   (11.107)

By (11.105) and (11.107) we get

µX̃(x) = min{χ[0, b̄/a](x), χ[0,+∞)(x)} = χ[0, b̄/a](x),   (11.108)

or, in other words,

X̃ = [0, b̄/a].

Case 2: a = 0. Since b̄ > 0, by (11.105) and (11.106) we get

µX̃(x) = χ[0,+∞)(x),   (11.109)

or
X̃ = [0, +∞).

Case 3: a < 0; then b̄/a < 0. From (11.106) it follows that

µ≤̃(ãx, b̃) = χ[b̄/a,+∞)(x).   (11.110)

By (11.105) and (11.110) we get for all x ≥ 0

µX̃(x) = χ[0,+∞)(x),

or
X̃ = [0, +∞).
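The three cases reduce to one short computation. An illustrative Python sketch (the names are ours; `a_lo` and `b_hi` stand for the endpoints a and b̄, with b̄ > 0 as assumed in the example):

```python
import math

def feasible_set_upper(a_lo, b_hi):
    """Upper endpoint of the feasible set X~ = [0, upper] of (11.101),
    assuming b_hi > 0. Case 1 gives b_hi/a_lo; Cases 2 and 3 (a_lo <= 0)
    give the unbounded set [0, +inf)."""
    if a_lo > 0:
        return b_hi / a_lo       # Case 1
    return math.inf              # Cases 2 and 3

print(feasible_set_upper(2.0, 6.0))   # → 3.0
print(feasible_set_upper(0.0, 6.0))   # → inf
```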

(II) Optimal solution X̃∗ of FLP problem (11.101).

Consider a fuzzy goal d̃ given by the membership function

µd̃(t) = max{0, min{βt, 1}},

where β is a sufficiently small positive number, e.g. β ≤ a/b, chosen so that µd̃ is strictly increasing on a sufficiently large interval. By (11.26) and (11.27) we obtain

µX̃∗(x) = min{µX̃0(x), µX̃(x)},   (11.111)

µX̃0(x) = µ≥̃(c̃x, d̃) = sup{min{χ[cx,c̄x](u), µd̃(v)} | u ≥ v} = µd̃(c̄x).   (11.112)

Consider 2 cases corresponding to the value of a:

Case 1: a > 0. Then by (11.108), X̃ = [0, b̄/a], and by (11.111) and (11.112) we obtain

µX̃∗(x) = { µd̃(c̄x) if x ∈ [0, b̄/a],
          { 0 otherwise.

Let α ∈ (0, 1]. By Proposition 131, it is easy to verify that

[X̃∗]α = [α/(βc̄), b̄/a].   (11.113)

By Proposition 132 we obtain the unique optimal solution with maximal height

x∗ = b̄/a.

Case 2: a ≤ 0. Then by (11.109), X̃ = [0, +∞) and

µX̃∗(x) = µd̃(c̄x)

for all x ≥ 0.
Again, by Proposition 131 we obtain the α-cut of the optimal solution

[X̃∗]α = [α/(βc̄), +∞).

The set of all optimal solutions with maximal height is the interval

[1/(βc̄), +∞).

Example 144 Consider the same FLP problem as in Example 143, but with different fuzzy parameters. The problem is as follows:

maximize c̃x
subject to
ãx ≤̃ b̃,   (11.114)
x ≥ 0.

Here, the parameters c̃, ã and b̃ are supposed to be triangular fuzzy numbers. To avoid a large number of particular cases, we suppose that

0 < γ < c, 0 < α < a, 0 < β < b.   (11.115)



Figure 11.1:

Piecewise linear membership functions µc̃, µã and µb̃ are defined for each x ∈ R as follows:

µc̃(x) = max{0, min{1 − (c − x)/γ, 1 + (c − x)/γ}},   (11.116)
µã(x) = max{0, min{1 − (a − x)/α, 1 + (a − x)/α}},   (11.117)
µb̃(x) = max{0, min{1 − (b − x)/β, 1 + (b − x)/β}},   (11.118)

see Fig. 11.1.
Let T = min. The fuzzy relation ≤̃ is assumed to be a T-fuzzy extension of the binary relation ≤.

(I) Membership functions of c̃x and ãx.

Let x > 0. Then by (8.17) we obtain for every t ∈ R:

µc̃x(t) = sup{µc̃(c) | c ∈ R, cx = t} = µc̃(t/x)
        = max{0, min{1 − (cx − t)/(γx), 1 + (cx − t)/(γx)}}.   (11.119)

In the same way, we obtain the membership function of ãx as

µãx(t) = max{0, min{1 − (ax − t)/(αx), 1 + (ax − t)/(αx)}}.   (11.120)

Let x = 0. Then

µc̃x(t) = χ{0}(t) and µãx(t) = χ{0}(t)

for every t ∈ R.

Figure 11.2:

Second, we calculate the membership function µ≤̃ of the fuzzy relation ≤̃.
Let x > 0. Then (see Fig. 11.2)

µ≤̃(ãx, b̃) = sup{min{µãx(u), µb̃(v)} | u ≤ v}.

For x = 0 we calculate

µ≤̃(ãx, b̃) = sup{min{χ{0}(u), µb̃(v)} | u ≤ v} = 1.   (11.121)

(II) Feasible solution.

By (11.11) a feasible solution X̃ of the FLP problem (11.114) is given by the membership function

µX̃(x) = min{µ≤̃(ãx, b̃), χ[0,+∞)(x)}.   (11.122)

Suppose that x ≥ 0. Using (11.116), (11.117) and (11.120), we calculate

µ≤̃(ãx, b̃) = sup{min{µãx(u), µb̃(v)} | u ≤ v}   (11.123)
           = { 1 if 0 < x and ax ≤ b,
             { (b + β − (a − α)x)/(αx + β) if b < ax and (a − α)x ≤ b + β,
             { 0 otherwise.

Recalling (11.122) and (11.123), we summarize (see Fig. 11.3):



Figure 11.3:

µX̃(x) = { 1 if 0 ≤ x and ax ≤ b,
         { (b + β − (a − α)x)/(αx + β) if b < ax and (a − α)x ≤ b + β,   (11.124)
         { 0 otherwise.

Let ε ∈ (0, 1]. From (11.124) it follows that

µX̃(x) ≥ ε

if and only if

(b + β − (a − α)x)/(αx + β) ≥ ε and x ≥ 0,

or equivalently,

0 ≤ x ≤ (b + (1 − ε)β)/(a − (1 − ε)α).

In other words,

[X̃]ε = [0, (b + (1 − ε)β)/(a − (1 − ε)α)].   (11.125)
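The upper endpoint of the ε-cut (11.125) is immediate to compute. An illustrative Python sketch with hypothetical parameter values (the names are ours):

```python
def eps_cut_upper(eps, a, alpha, b, beta):
    """Upper endpoint of [X~]_eps per (11.125); assumes the triangular
    parameters satisfy (11.115), so the denominator is positive."""
    return (b + (1.0 - eps) * beta) / (a - (1.0 - eps) * alpha)

print(eps_cut_upper(1.0, 2.0, 1.0, 4.0, 1.0))  # → 2.0  (reduces to b/a)
print(eps_cut_upper(0.5, 2.0, 1.0, 4.0, 1.0))  # → 3.0
```

As ε decreases from 1 to 0 the cut widens monotonically, from the crisp bound b/a to (b + β)/(a − α).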

(III) Optimal solution of FLP problem (11.114).

Consider a fuzzy goal d̃ given by the membership function

µd̃(x) = min{βx, 1},   (11.126)

for all x ≥ 0, where β is a sufficiently small positive number, e.g. β ≤ a/b. By (11.26) and (11.27) we have

µX̃∗(x) = min{µX̃0(x), µX̃(x)},   (11.127)



Figure 11.4:

µX̃0(x) = µ≥̃(c̃x, d̃) = sup{min{µc̃x(u), µd̃(v)} | u ≥ v}.   (11.128)

By (11.119) and (11.126) we calculate, for all x ≥ 0,

µX̃0(x) = min{δx, 1},   (11.129)

where

δ = β(c + γ)/(1 + βγ).

The membership function of the optimal solution given by (11.127) is depicted in Fig. 11.4. Combining (11.125) and (11.129), we obtain the set X̄ of all max-optimal solutions from formula (11.127) as

X̄ = { (√D − [βδ + (a − α)])/(2αδ)   if b/a < 1/δ,
    { [1/δ, b/a]   otherwise,

where

D = [βδ + (a − α)]² + 4αδ(b + β),

see Fig. 11.4.
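The closed-form max-optimal solution can be checked numerically: at the computed point the goal membership δx and the feasibility membership (11.124) must coincide. A sketch with hypothetical parameter values; since the text reuses the symbol β, the goal slope of (11.126) is named `beta0` here:

```python
import math

# Hypothetical parameters: triangular numbers a~=(a, alpha), b~=(b, beta),
# c~=(c, gamma), satisfying (11.115); beta0 is the slope of the goal (11.126).
a, alpha = 2.0, 1.0
b, beta = 4.0, 1.0
c, gamma = 1.0, 0.5
beta0 = 0.3

delta = beta0 * (c + gamma) / (1.0 + beta0 * gamma)

def mu_feasible(x):
    """Membership of the feasible solution, formula (11.124)."""
    if x >= 0 and a * x <= b:
        return 1.0
    if b < a * x and (a - alpha) * x <= b + beta:
        return (b + beta - (a - alpha) * x) / (alpha * x + beta)
    return 0.0

# Crossing of delta*x with the decreasing branch of (11.124), i.e. the
# positive root of alpha*delta*x^2 + (beta*delta + a - alpha)*x - (b + beta) = 0.
D = (beta * delta + (a - alpha)) ** 2 + 4.0 * alpha * delta * (b + beta)
x_bar = (math.sqrt(D) - (beta * delta + (a - alpha))) / (2.0 * alpha * delta)

assert b / a < 1.0 / delta                          # singleton case applies
assert abs(delta * x_bar - mu_feasible(x_bar)) < 1e-9
print(round(x_bar, 4))  # prints the max-optimal solution x̄
```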
Chapter 12

Conclusion

In this work, we focused primarily on fuzzy methodologies and fuzzy systems, as they bring basic ideas to the area of Soft Computing. The other constituents of SC are also surveyed here, but for details we refer to the existing vast literature.
In the first part of this study we presented an overview of developments in the individual parts of SC. For each constituent of SC we briefly overviewed its background, main problems, methodologies and recent developments. The main literature, professional journals and technical newsletters, professional organizations and other relevant information are also mentioned.
In the second part of the study we extensively studied the subject of fuzzy optimization, an important part of SC. Here we presented original results of research conducted during the author's research stay at the School of Knowledge Science, JAIST, Hokuriku, Japan, in the first three months of 2001.
Already in the early stages of the development of fuzzy set theory, it has been
recognized that fuzzy sets can be defined and represented in several different
ways. Here we defined fuzzy sets within the classical set theory by nested
families of sets, and then we discussed how this concept is related to the usual
definition by membership functions. Binary and valued relations were extended
to fuzzy relations and their properties were extensively investigated. Moreover,
fuzzy extensions of real functions were studied, particularly the problem of the
existence of sufficient conditions under which the membership function of the
function value is quasiconcave.
We also presented some important applications of the theory; namely, we considered a decision problem, i.e. the problem of finding a "best" decision in the set of feasible alternatives with respect to several (i.e. more than one) criteria functions. Within the framework of such a decision situation, we dealt with the existence and mutual relationships of three kinds of "optimal decisions": Weak Pareto-Maximizers, Pareto-Maximizers and Strong Pareto-Maximizers, particular alternatives satisfying some natural and rational conditions. We also studied compromise decisions maximizing some aggregation of the criteria. The criteria considered here are functions defined on the set of feasible alternatives with values in the unit interval. Such functions can be interpreted as membership functions of fuzzy subsets and are called here fuzzy criteria. Each constraint or objective function of the fuzzy mathematical programming problem has been naturally assigned a unique fuzzy criterion.
Fuzzy mathematical programming problems form a subclass of decision -
making problems where preferences between alternatives are described by means
of objective function(s) defined on the set of alternatives in such a way that
greater values of the function(s) correspond to more preferable alternatives (if
”higher value” is ”better”). The values of the objective function describe effects
from choices of the alternatives. First we presented a general formulation of the FMP problem associated with the classical MP problem; then we defined a feasible solution and an optimal solution of the FMP problem as special fuzzy sets. Among other things, we have shown that the class of all MP problems with (crisp) parameters can be naturally embedded into the class of FMP problems with fuzzy parameters.
We also dealt with a class of fuzzy linear programming problems and again investigated feasible and optimal solutions, the necessary tools for dealing with such problems. In this way we showed that the class of crisp (classical) LP problems can be embedded into the class of FLP ones. Moreover, for FLP problems we defined the concept of duality and proved the weak and strong duality theorems. Further, we investigated special classes of FLP: interval LP problems, flexible LP problems, LP problems with interactive coefficients and LP problems with centered coefficients.
In this study we introduced an original unified approach, by which a number of new and as yet unpublished results have been acquired.
Our approach to SC presented in this work is mathematically oriented, as the author is a mathematician. There exist, however, other approaches to SC, e.g. a human-science approach or a computer-science approach, putting more stress on other aspects of the subject.
Bibliography

[1] G.I. Adamopoulos, and C.P. Pappis, A neighbourhood-based hybrid


method for scheduling with fuzzy due-dates. Preprint, Dept. of Industrial
Management University of Piraeus, Piraeus, Greece, 1995.
[2] E.K. Antonsson and H.-J. Sebastian, Fuzzy sets in engineering design. In:
Practical Applications in Fuzzy Technologies, H.-J. Zimmermann Ed., The
Handbooks of Fuzzy Sets Series, Kluwer Academic Publ., New York, 1999,
57-117.
[3] M. Avriel, W.E. Diewert, S. Schaible and I. Zang, Generalized concavity.
Plenum Press, N.Y., London, 1988.
[4] S. Barbera, P. J. Hammond and C. Seidl, Eds., Handbook of utility theory.
Vol.1: Principles, Kluwer Academic Publ., Boston-Dordrecht-London,
1998.
[5] C.R. Bector and C. Singh, B-vex functions, Journal of Optimization The-
ory and Applications, 71, (1991), 237-253.
[6] C.R. Bector, S.K. Suneja, and C.S. Lalitha, Generalized B-vex functions,
Journal of Optimization Theory and Applications, 76, (1993), 561-576.
[7] C.R. Bector, S.K. Suneja and C. Singh, Generalizations of Pre-invex
functions by B-vex functions, Journal of Optimization Theory and Appli-
cations, 76, (1993), 577-587.
[8] C.R. Bector, S. Chandra, S. Gupta and K. Suneja, Univex sets, func-
tions and univex nonlinear programming. In: S. Komlósi, T. Rapcsak, S.
Schaible, Eds., Generalized Convexity, Springer-Verlag, Berlin-Heidelberg-
New York, 1994, 3-18.
[9] R. Bellman and L.A. Zadeh, Decision making in a fuzzy environment. Management Science 17(4), 1970, 141-164.
[10] C. Bertoluzza and A. Bodini, A new proof of Nguyen’s compatibility the-
orem in a more general context. Fuzzy Sets and Systems 95 (1998) 99-102.
[11] D.P. Bertsekas, Nonlinear programming. Second Edition, Athena Scien-
tific, Belmont, Massachusetts, 1999.


[12] J.J. Buckley, Possibilistic linear programming with triangular fuzzy num-
bers. Fuzzy Sets and Systems 26 (1988) 135-138.

[13] S. Chanas, Fuzzy programming in multiobjective linear programming - a


parametric approach. Fuzzy Sets and Systems 29 (1989) 303-313.

[14] S. Chen and C. Hwang, Fuzzy multiple attribute decision making. Springer-
Verlag, Berlin, Heidelberg, New York, 1992.

[15] G. Choquet, Theory of capacities, Annales de l’Institut Fourier, 5, 1953,


131-295.

[16] P.R. Cromwell, Polyhedra. Cambridge University Press, Cambridge - New


York - Melbourne, 1997.

[17] J.-P. Crouzeix, J.-E. Martinez-Legaz and M. Volle, Generalized convex-


ity, generalized monotonicity. Kluwer Academic Publ., Boston-Dordrecht-
London, 1998.

[18] M. Delgado, J. Kacprzyk, J.-L. Verdegay and M.A. Vila, Eds., Fuzzy
optimization - Recent advances. Physica-Verlag, Heidelberg, New York,
1994.

[19] Z. Drezner and E. Zemel, Competitive location in the plane. Annals of


Operations Research, Vol. 40 (1992), 173-193.

[20] D. Dubois and H. Prade, Possibility theory. Plenum Press, N. York and
London, 1988.

[21] D. Dubois et al., Fuzzy interval analysis. In: Fundamentals of fuzzy


sets, Kluwer Acad. Publ., Series on fuzzy sets, Vol. 1, Dordrecht-Boston-
London, 2000.

[22] J.C. Fodor, On contrapositive symmetry of implications in fuzzy logic,


Proc. the First European Congress on Fuzzy and Intelligent Technologies,
Aachen, 1993, 1342—1348.

[23] J.C. Fodor and M. Roubens, Fuzzy preference modelling and multi-criteria
decision support. Kluwer Acad. Publ., Dordrecht-Boston-London, 1994.

[24] H.J. Frank, On the simultaneous associativity of F (x, y) and x + y −


F (x, y). Aequationes Math. 19, 194-226, 1979.

[25] M. Grabisch, T. Murofushi and M. Sugeno, Eds., Fuzzy measures and


integrals. Physica Verlag, Heidelberg, New York, 2000.

[26] M. Grabisch, H. Nguyen and E. Walker, Fundamentals of uncertainty


calculi with applications to fuzzy inference. Kluwer Acad. Publ., Dordrecht,
Boston, London, 1995.

[27] P. Hájek, Metamathematics of fuzzy logic. Kluwer Acad. Publ., Series


Trends in Logic, Dordrecht-Boston-London, 1998.

[28] U. Höhle and S. E. Rodabaugh, Eds., Mathematics of fuzzy sets. The


handbooks of fuzzy sets series, Kluwer Academic Publ., Boston-Dordrecht-
London, 1999.

[29] M. Inuiguchi, H. Ichihashi and Y. Kume, Some properties of extended


fuzzy preference relations using modalities. Information Sciences, Vol.61
(1992), 187-209.

[30] M. Inuiguchi, H. Ichihashi and Y. Kume, Modality constrained program-


ming problems: A unified approach to fuzzy mathematical programming
problems in the setting of possibility theory. Information Sciences, Vol.67
(1993), 93-126.

[31] M. Inuiguchi, H. Ichihashi and Y. Kume, Relationships between modal-


ity constrained programming problems and various fuzzy mathematical
programming problems. Fuzzy Sets and Systems, Vol.49 (1992), 243-259.

[32] M. Inuiguchi and T. Tanino, Scenario decomposition approach to interac-


tive fuzzy numbers in possibilistic linear programming problems. In: R.
Felix, Ed., Proceedings of EFDAN’99, FLS Fuzzy Logic Systeme GmbH
Dortmund 2000, 133-142.

[33] M. Inuiguchi and J. Ramík, Possibilistic linear programming: A brief re-


view of fuzzy mathematical programming and a comparison with stochas-
tic programming in portfolio selection problem. Fuzzy Sets and Systems,
111 (2000)1, 3-28.

[34] M. Inuiguchi, J. Ramik, T. Tanino and M. Vlach, Optimality and duality


in interval and fuzzy linear programming. Fuzzy Sets and Systems, Special
issue: Interfaces between fuzzy sets and interval analysis, to appear.

[35] M. Inuiguchi, J. Ramik, T. Tanino, Oblique fuzzy numbers and its use in
possiblistic linear programming. Fuzzy Sets and Systems, Special issue:
Interfaces between fuzzy sets and interval analysis, to appear.

[36] V. Jeyakumar, and B. Mond, On generalized convex mathematical pro-


gramming problem, Journal of Australian Math. Soc., B34 (1992), 43-53.

[37] R. L. Keeney and H. Raiffa, Decisions with multiple objectives - Prefer-


ences and value tradeoffs. Cambridge University Press, 1998.

[38] E.P. Klement, R. Mesiar and E. Pap, Triangular norms. Kluwer Acad.
Publ., Series Trends in Logic, Dordrecht-Boston-London, 2000.

[39] A. Kolesárová, Triangular norm-based addition of fuzzy numbers. Tatra


Mt. Math. Publ. 6, 1995, 75-81.

[40] S. Komlósi, T. Rapcsak, S. Schaible, Eds., Generalized Convexity, Springer-


Verlag, Berlin-Heidelberg-New York, 1994.

[41] M. Kovacs, Fuzzy linear programming with centered fuzzy numbers. In:
Fuzzy Optimization - Recent Advance, Eds.: M. Delgado, J. Kacprzyk,
J.-L. Verdegay and M.A. Vila, Physica-Verlag, Heidelberg-N. York, 1994,
135-147.

[42] M. Kovacs, Fuzzy linear programming with triangular fuzzy parameters.


In: Identification, modelling and simulation, Proc. of IASTED Conf. Paris,
1987, 447-451.

[43] M. Kovacs and L.H. Tran, Algebraic structure of centered M- fuzzy num-
bers. Fuzzy Sets and Systems 39 (1), 1991, 91-99.

[44] Y. J. Lai and C. L. Hwang, Fuzzy Mathematical Programming: Theory


and Applications. Lecture notes in economics and mathematical systems
394, Springer-Verlag, Berlin, Heidelberg, New York, London, Paris, Tokyo,
1992.

[45] Y. J. Lai and C. L. Hwang, Multi-Objective Fuzzy Mathematical Program-


ming: Theory and Applications. Lecture notes in economics and mathematical systems 404, Springer-Verlag, Berlin, Heidelberg, New York,
London, Paris, Tokyo, 1993.

[46] S.R. Lay, Convex sets and their applications. John Wiley & Sons Inc.,
New York- Chichester- Brisbane- Toronto- Singapore, 1982.

[47] R. Lowen and M. Roubens, Eds., Fuzzy logic - State of the art. Theory
and Decision Library, Series D: System Theory, knowledge engineering
and problem solving, Kluwer Acad. Publ., Dordrecht, Boston, London,
1993.

[48] B. Martos, Nonlinear programming. Akademia Kiado, Budapest, 1975.

[49] K. Menger, Statistical metrics. Proc. Nat. Acad. Sci. U.S.A., 28, 535-537,
1942.

[50] M. Mizumoto, Improvement methods of fuzzy controls. In: Proceedings of the 3rd IFSA Congress, Seattle, 1989, 60-62.

[51] B. Mond and T. Weir, Generalized concavity and duality, In: S. Schaible
and W.T. Ziemba, Eds., Generalized concavity in optimization and eco-
nomics, Academic Press, New York, (1981) 253-289.

[52] C.V. Negoita and D.A. Ralescu, Applications of Fuzzy Sets to Systems
Analysis, J. Wiley & Sons, New York, 1975.

[53] H.T. Nguyen, A note on the extension principle for fuzzy sets. J. Math.
Anal. Appl., 64 (1978) 369-380.

[54] S.A. Orlovsky, Decision making with fuzzy preference relation. Fuzzy Sets
and Systems 1 (1978), 155-167.

[55] S. A. Orlovsky, On formalization of a general fuzzy mathematical pro-


gramming problem. Fuzzy Sets and Systems 3 (1980) 311—321.

[56] D. Pallaschke and S. Rolewicz, Foundations of mathematical optimiza-


tion - Convex analysis without linearity. Kluwer Academic Publ., Boston-
Dordrecht-London, 1997.

[57] M. Pinedo, Scheduling - Theory, algorithms and systems. Prentice Hall


Englewood Cliffs, New Jersey, 1995.

[58] R. Pini and C. Singh, (Φ1, Φ2)-convexity. Optimization 40 (1997), 103-120.

[59] D. Ralescu, A survey of the representation of fuzzy concepts and its ap-
plications. In: Advances in Fuzzy Set Theory and Applications, M.M.
Gupta, R.K. Ragade and R. Yager, Eds., North Holland, Amsterdam,
1979, 77-91.

[60] J. Ramik and J. Římánek, Inequality relation between fuzzy numbers and
its use in fuzzy optimization. Fuzzy Sets and Systems 16 (1985) 123-138.

[61] J. Ramik, Extension principle in fuzzy optimization. Fuzzy Sets and Sys-
tems 19, 1986, 29-37.

[62] J. Ramik, An application of fuzzy optimization to optimum alloca-
tion of production. Proc. Internat. Workshop on fuzzy set applications,
Academia-Verlag, IIASA, Laxenburg, Berlin, 1987.

[63] J. Ramik, A unified approach to fuzzy optimization. In: Proceedings of
the 2nd IFSA Congress, Tokyo, 1987, 128-130.

[64] J. Ramik and J. Římánek, The linear programming problem with vaguely
formulated relations between the coefficients. In: M. Fedrizzi, J. Kacprzyk
and S. Orlovski, Eds., Interfaces between Artificial Intelligence and Opera-
tions Research in Fuzzy Environment, D. Riedel Publ. Comp., Dordrecht-
Boston-Lancaster-Tokyo, 1989.

[65] J. Ramik, Fuzzy preferences in linear programming. In: M. Fedrizzi and J.
Kacprzyk, Eds., Interactive Fuzzy Optimization and Mathematical Pro-
gramming, Springer-Verlag, Berlin-Heidelberg-New York, 1990.

[66] J. Ramik, Vaguely interrelated coefficients in LP as bicriterial optimiza-
tion problem. Internat. Journal on General Systems 20(1) (1991) 93-114.

[67] J. Ramik, Inequality relations between fuzzy data. In: H. Bandemer, Ed.,
Modelling Uncertain Data, Akademie Verlag, Berlin, 1992, 158-162.

[68] J. Ramik, Some problems of linear programming with fuzzy coefficients.
In: K.-W. Hansmann, A. Bachem, M. Jarke and A. Marusev, Eds., Opera-
tions Research Proceedings: Papers of the 21st Annual Meeting of DGOR
1992, Springer-Verlag, Heidelberg, (1993) 296-305.

[69] J. Ramik and K. Nakamura, Canonical fuzzy numbers of dimension two.
Fuzzy Sets and Systems 54 (1993) 167-180.

[70] J. Ramik, K. Nakamura, I. Rozenberg and I. Miyakawa, Joint canonical
fuzzy numbers. Fuzzy Sets and Systems 53 (1993) 29-47.

[71] J. Ramik and H. Rommelfanger, A single- and multi-valued order on fuzzy
numbers and its use in linear programming with fuzzy coefficients. Fuzzy
Sets and Systems 57 (1993) 203-208.

[72] J. Ramik, New interpretation of the inequality relations in fuzzy goal pro-
gramming problems. Central European Journal for Operations Research
and Economics 4 (1996) 112-125.

[73] J. Ramik and H. Rommelfanger, Fuzzy mathematical programming based
on some new inequality relations. Fuzzy Sets and Systems 81 (1996) 77-88.

[74] J. Ramik and H. Rommelfanger, A new algorithm for solving multi-
objective fuzzy linear programming problems. Foundations of Computing
and Decision Sciences 3 (1996) 145-157.

[75] J. Ramik, Fuzzy goals and fuzzy alternatives in fuzzy goal programming
problems. Fuzzy Sets and Systems 111 (2000), 81-86.

[76] J. Ramik and M. Vlach, Generalized quasiconcavity in location theory.
Proceedings of MME 2000, University of Economics-CSOR, Prague, 2000.

[77] J. Ramik and M. Vlach, Generalized concavity in location theory. Pro-
ceedings of the 2nd Japan Fuzzy Symposium, Akita, 2000.

[78] J. Ramik and M. Vlach, Triangular norms and generalized quasiconcavity.
Proceedings of the APORS Conference, Singapore, 2000.

[79] J. Ramik and M. Vlach, Generalized concavity as a basis for optimization
and decision analysis. Research report IS-RR-2001-003, JAIST Hokuriku,
2001, 116 p., ISSN 0918-7553.

[80] J. Ramik and M. Vlach, Concepts of generalized concavity based on trian-
gular norms. Journal of Statistics and Management Systems, to appear.

[81] J. Ramik and M. Vlach, Pareto-optimality of compromise decisions. Fuzzy
Sets and Systems, to appear.

[82] H. Rommelfanger, Entscheiden bei Unschärfe - Fuzzy Decision Support
Systeme. Springer-Verlag, Berlin-Heidelberg, 1988; Second Edition 1994.

[83] H. Rommelfanger and R. Slowinski, Fuzzy linear programming with single
or multiple objective functions. In: R. Slowinski, Ed., Fuzzy sets in deci-
sion analysis, operations research and statistics. The handbooks of fuzzy
sets series, Kluwer Academic Publ., Boston-Dordrecht-London, 1998, 179-
213.
[84] A. Rubinov, Abstract convexity and global optimization. Kluwer Academic
Publ., Boston-Dordrecht-London, 2000.
[85] M. Sakawa and H. Yano, Interactive decision making for multiobjec-
tive programming problems with fuzzy parameters. In: R. Slowinski and J.
Teghem, Eds., Stochastic versus fuzzy approaches to multiobjective math-
ematical programming under uncertainty, Kluwer Acad. Publ., Dordrecht,
1990, 191-22.
[86] A. Schveidel, Separability of star-shaped sets and its application to an
optimization problem, Optimization 40 (1997), 207-227.
[87] B. Schweizer and A. Sklar, Statistical metric spaces, Pacific J. Math. 10
(1960), 313-334.
[88] B. Schweizer and A. Sklar, Probabilistic metric spaces, North-Holland, N.
York, Amsterdam, Oxford, 1983.
[89] R. Slowinski, "FLIP": an interactive method for multiobjective linear
programming with fuzzy coefficients, In: R. Slowinski and J. Teghem,
Eds., Stochastic versus fuzzy approaches to multiobjective mathematical
programming under uncertainty, Kluwer Acad. Publ., Dordrecht, 1990,
249-262.
[90] R. Slowinski, Ed., Fuzzy sets in decision analysis, operations research and
statistics. The handbooks of fuzzy sets series, Kluwer Academic Publ.,
Boston-Dordrecht-London, 1998.
[91] M. Sugeno, Theory of fuzzy integrals and its applications. Ph.D. Thesis,
Tokyo Institute of Technology, 1974.
[92] H. Tanaka and K. Asai, Fuzzy linear programming with fuzzy numbers.
Fuzzy Sets and Systems 13 (1984) 1-10.
[93] H. Tanaka, T. Okuda and K. Asai, On fuzzy mathematical programming.
Journal of Cybernetics, 3, 4, (1974) 37-46.
[94] F.A. Valentine, Convex sets. R.E. Krieger Publ. Co., Huntington, New
York, 1976.
[95] M. Vlach, A concept of separation for families of sets. Ekonomicko-
matematicky obzor, 12 (3), 1976, 316-324.
[96] B. Werners, Interactive fuzzy programming system. Fuzzy Sets and Sys-
tems 23 (1987) 131-147.

[97] R.R. Yager, On a general class of fuzzy connectives. Fuzzy Sets and Sys-
tems 4 (1980) 235-242.
[98] L.A. Zadeh, Fuzzy sets. Inform. Control (8) 1965, 338-353.
[99] L.A. Zadeh, The concept of a linguistic variable and its application to
approximate reasoning. Information Sciences, Part I: 8, 1975, 199-249;
Part II: 8, 301-357; Part III: 9, 43-80.
[100] H.-J. Zimmermann, Fuzzy programming and linear programming with
several objective functions. Fuzzy Sets and Systems 1 (1978) 45-55.
[101] H.-J. Zimmermann and P. Zysno, Latent connectives in human decision
making. Fuzzy Sets and Systems 4 (1980), 37-51.
