Fractional Programming
Johannes B.G. Frenk
Econometric Institute,
Erasmus University,
3000 DR Rotterdam, The Netherlands
[email protected]
Siegfried Schaible
A.G. Anderson Graduate School of Management
University of California,
Riverside, CA 92521-0203, USA
[email protected]
Abstract
Keywords: Single-ratio fractional programs, min-max fractional programs, sum-of-ratios fractional programs, parametric approach.
1. Introduction.
In various applications of nonlinear programming a ratio of two functions is to be maximized or minimized. In other applications the objective function involves more than one ratio of functions. Ratio optimization problems are commonly called fractional programs. One of
the earliest fractional programs (though not called so) is an equilibrium
model for an expanding economy introduced by von Neumann (cf. [74])
2.
The problem
is called a single-ratio fractional program. In most
applications the nonempty feasible region B has more structure and is
given by
with
and
some set of real-valued continuous functions. So far, the functions in the numerator and denominator
were not specified. If the functions in the numerator, the denominator and the constraints are affine (linear
plus a constant) and the underlying set C
is the nonnegative orthant,
then the optimization problem
is called a single-ratio linear fractional program. Moreover, we call it
a single-ratio quadratic fractional
program if
the numerator and denominator are quadratic and the constraint functions
are affine. The minimization problem
is called a
single-ratio convex fractional program if C is a convex set, the numerator
and the constraint functions
are convex and the denominator is a positive concave function on B.
In addition it is assumed that the numerator is nonnegative on B if the denominator is not affine.
In case of a maximization problem the single-ratio fractional program
is called a single-ratio concave fractional program if the numerator is concave and
the denominator is convex. Under these restrictive convexity/concavity assumptions the
minimization problem
is in general a nonconvex problem.
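For the linear case a classical reduction to linear programming exists. The following sketch uses our own symbols (c, d, α, β, M, b; none of them appear in the text above) and records the well-known Charnes-Cooper transformation:

```latex
\min\Bigl\{\tfrac{c^{\top}x+\alpha}{d^{\top}x+\beta}\;:\;Mx\le b,\;x\ge 0\Bigr\}
\;=\;
\min\bigl\{\,c^{\top}y+\alpha t \;:\; My-tb\le 0,\; d^{\top}y+\beta t=1,\; y\ge 0,\; t\ge 0 \bigr\},
```

where $y=tx$ and $t=1/(d^{\top}x+\beta)$, valid when $d^{\top}x+\beta>0$ on the nonempty bounded feasible region; the right-hand side is an ordinary linear program.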
In some applications more than one ratio appears in the objective
function. One form of such an optimization problem is the nonlinear
programming problem
Let
and
be nonempty closed sets and
be a finite-valued function on A × B. In case
is a finite-valued positive function on A × B, consider the min-max nonlinear programming problem
with
for every
and
It is a more challenging
problem than
as recent studies have shown. We also encounter in
applications the so-called multi-objective fractional program
which is related to
and
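For orientation, the standard problem classes these labels refer to can be written out as follows (a hedged sketch in our own notation; the symbols f, g, f_i, g_i and the number of ratios p are ours):

```latex
% single-ratio            min-max (generalized)
\inf_{x\in B}\frac{f(x)}{g(x)},\qquad
\inf_{x\in B}\,\sup_{a\in A}\,\frac{f(a,x)}{g(a,x)},
% sum-of-ratios           multi-objective
\qquad
\inf_{x\in B}\sum_{i=1}^{p}\frac{f_i(x)}{g_i(x)},\qquad
\min_{x\in B}\Bigl(\frac{f_1(x)}{g_1(x)},\dots,\frac{f_p(x)}{g_p(x)}\Bigr).
```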
In Sections 3 and 4 we will review applications of fractional programs
and
respectively. Section 5 focuses on applications of the
fractional program
In addition we review here some of the solution
procedures for this rather challenging problem. Finally in Section 6 we
return to problems
and
In a joint treatment of both involving
the more general problem (P) a parametric approach is used for the
analysis and development of solution procedures of (P).
3.
Economic Applications.
The efficiency of a system is sometimes characterized by a ratio of
technical and/or economical terms. Maximizing the efficiency then leads
to a fractional program. Some applications are given below.
Maximization of Productivity.
Gilmore and Gomory [37] discuss a stock cutting problem in the
paper industry for which under the given circumstances it is more
appropriate to minimize the ratio of wasted and used amount of
raw material rather than just minimizing the amount of wasted
material. This stock cutting problem is formulated as a linear
fractional program. In a case study, Hoskins and Blom [43] use
fractional programming to optimize the allocation of warehouse
personnel. The objective is to minimize the ratio of labor cost to
the volume entering and leaving the warehouse.
Maximization of Return on Investment.
In some resource allocation problems the ratio profit/capital or
profit/revenue is to be maximized. A related objective is return
per cost maximization. Resource allocation problems with this
objective are discussed in more detail by Mjelde in [53]. In these
models the term cost may either be related to actual expenditure
or may stand, for example, for the amount of pollution or the probability of disaster in nuclear energy production. Depending on the
nature of the functions describing return, profit, cost or capital,
different types of fractional programs are encountered. For example, if the price per unit depends linearly on the output and cost
and capital are affine functions, then maximization of the return
on investment gives rise to a concave quadratic fractional program
(assuming linear constraints). In location analysis maximizing the
profitability index (rate of return) is in certain situations preferred
to maximizing the net present value, according to [5] and [6] and
the cited references.
Maximization of Return/Risk.
Some portfolio selection problems give rise to a concave nonquadratic fractional program of the form (8.3) below which expresses the
maximization of the ratio of expected return and risk. For related
concave and nonconcave fractional programs arising in financial
planning see [61]. Markov decision processes may also lead to the
Non-Economic Applications.
In information theory the capacity of a communication channel can be
defined as the maximal transmission rate over all probabilities. This is a
Indirect Applications.
There are a number of management science problems that indirectly
give rise to a concave fractional program. We begin with a recent study
which shows that the sensitivity analysis of general decision systems
leads to linear fractional programs (cf. [52]). The developed software
was used in the appraisal of Hungarian hotels. A concave quadratic fractional program arises in location theory as the dual of a Euclidean multifacility min-max problem. In large scale mathematical programming,
decomposition methods reduce the given linear program to a sequence of
smaller problems. In some of these methods the subproblems are linear
fractional programs. The ratio originates in the minimum-ratio rule of
the simplex method.
Fractional programs are also met indirectly in stochastic programming,
as first shown by Charnes and Cooper [19] and by Bereanu [14]. This
will be illustrated by two models below (cf. [65, 71]).
Consider the following stochastic mathematical program
where is the mean vector of the random vector and V its variance-covariance matrix. Hence the maximum probability model of the concave
program (8.2) gives rise to a fractional program. If in problem (8.2) the
where
are concave functions on the convex feasible region B,
and is a random variable with a continuous cumulative distribution
function. Then the maximum probability model for (8.4) gives rise to
the fractional program
4.
is used or with the help of prescribed ratio goals
the model
with variables
At the end of this section on applications of
we point out that in
case of infinitely many ratios
is related to a fractional semi-infinite
program (cf. [41]). Several applications in engineering give rise to such
a problem when a lower bound for the smallest eigenvalue of an elliptical
differential operator is to be determined (cf. [40]).
For further applications of
we refer to the very recent survey [59].
5.
Problem
arises naturally in decision making when several rates
are to be optimized simultaneously and a compromise is sought which
optimizes a weighted sum of these rates. In light of the applications of
single-ratio fractional programming numerators and denominators may
be representing output, input, profit, cost, capital, risk or time, for
example. A multitude of applications of the sum-of-ratios problem can
be envisioned in this way. Included is the case where some of the ratios
are not proper quotients. This describes situations where a compromise
is sought between absolute and relative terms like profit and return on
investment (profit/capital) or return and return/risk, for example.
Almogy and Levin (cf. [1]) analyze a multistage stochastic shipping
problem. A deterministic equivalent of this stochastic problem is formulated which turns out to be a sum-of-ratios problem.
Rao (cf. [57]) discusses various models in cluster analysis. The problem of optimal partitioning of a given set of entities into a number of
applications call for methods which can handle a large number of ratios;
e.g., fifty (cf. [1]). Currently such methods are not available.
For a special class of sum-of-ratios problems with up to about one
thousand ratios, but only very few variables an algorithm is given in
[20]. This method by Chen et al. is superior to the other algorithms on
the particular class of problems in manufacturing. These are geometric
optimization problems arising in layered manufacturing. In contrast to
general-purpose algorithms for
the method in [20] is rather robust
with regard to the number of ratios.
Focus of the remainder of this review of fractional programming will
be the min-max fractional program (P). It includes as special cases
and
For a very recent survey of applications, theoretical results
and solution methods for
and
since [61] was published we refer
to [59]. A corresponding survey for
since [61] appeared is given
in [60]. For a survey of some recent developments for multi-objective
fractional programs
we refer to [33].
6.
we have
program
Clearly
It is not assumed beforehand
that the optimization problems (P) and
have an optimal solution.
Therefore we cannot replace sup by max or inf by min. The simpler
optimization problem
is introduced since it will be part of the so-called primal Dinkelbach-type approach discussed in subsection 6.2 to
solve the (primal) min-max fractional program (P).
Another optimization problem is to consider for every
the
single-ratio fractional program
Analyzing the so-called dual Dinkelbach-type approach to solve problem (D), we need the following counterpart of Condition 8.1.
Condition 8.2 For every
we have
for every
and so this condition implies Condition 8.1. Moreover,
the single-ratio fractional program
has an optimal solution and
is finite for every
In case we also analyze the dual Dinkelbach-type approach, not all
results are valid under Condition 8.2, and so we sometimes need the
following counterpart of Condition 8.3.
for every
and so this condition implies Condition 8.2. Moreover,
the single-ratio fractional program
has an optimal solution, and
is finite for every
Before analyzing in the next subsection the parametric approach applied to (P), we will derive an alternative representation of a generalized
fractional program. This alternative representation automatically satisfies Condition 8.3. For a generalized fractional program the set A is
given by
and the functions and are replaced by
the functions
and
This means
for every
With this we have found another representation of a generalized fractional program. Using this representation, the corresponding (dual) generalized fractional program is given by
6.1
For every
the function
is now given by
Since
on
and
is the supremum of affine functions, it is
obvious that
is a decreasing lower semicontinuous convex function.
Its so-called effective domain
is defined by (cf. [58])
By the finiteness of
problem than
on
and
for every
is given by
For
and
the effective
if and only if
Assume
Suppose by contradiction that there exists
satisfying
This implies for every
that
Hence for a given
one can find some sequence
satisfying
Since
and
for every
assumption.
Conversely, if
then clearly
and so there
exists some
satisfying
Due to
it is easy to see that
and so
which completes
the proof of the first part. By identifying B with
the second part
follows immediately from the first part.
Using similar algebraic manipulations as in [22] applied to a generalized fractional program one can show the following important result for
the optimal value function
of a parametric min-max problem
The validity of the so-called parametric approach to solve problem (P)
is based on this result.
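In the single-ratio case (A a singleton) the parametric approach reduces to Dinkelbach's classical characterization, which we record in our own notation as orientation for the theorem below:

```latex
F(\lambda)=\inf_{x\in B}\bigl(f(x)-\lambda\,g(x)\bigr),
\qquad
F(\lambda)=0 \;\Longleftrightarrow\; \lambda=\lambda^{*}:=\inf_{x\in B}\frac{f(x)}{g(x)},
```

assuming $g>0$ on $B$ and that the infima are attained; $F$ is decreasing, positive for $\lambda<\lambda^{*}$ and negative for $\lambda>\lambda^{*}$.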
Theorem 8.1 Assume Condition 8.1 holds and
if and only if
Moreover, if
if and only if
Proof. If
and
and
satisfying
for every
Since
for every
It follows that
Conversely, if
satisfying
Then
then
this yields
and
for
every
that
Since
it follows from relation (8.14) that
and the proof of the first part is completed. By identifying B with
the second part follows from the first part.
A useful implication of Theorem 8.1 is given by the following result.
Lemma 8.2 Assume Condition 8.1 holds and
Then
for some
and
(P) reduces to
Moreover, the
equals B, and
If
for every
for every
and hence
and
that
Since
we obtain for every
that
By
relation (8.15),
and
this yields for every
that
Hence
and so
Applying Theorem 1.7 of [29] yields that
is upper semicontinuous.
By Theorem 8.2 and Lemma 1.30 of [29] we obtain
we know that
this yields
Definition 8.1 The set-valued mapping
where X is a
closed set, is called closed if its graph is a closed set.
By the definition of a closed set it is immediately clear that the set-valued mapping
is closed if and only if for any sequence
and
it follows that
and
The set
represents the set of optimal solutions of the optimization problem
while the set
denotes the set of optimal
solutions in B of the optimization problem
Also we consider the
set-valued mapping
given by
Since
is lower semicontinuous on
it follows that
Using
this shows that
Hence we have verified
that
is closed.
Finally, to show that
is closed, consider a sequence
satisfying
and
Since
it follows that
using the fact that
is closed. This shows
Moreover, since
we obtain
using the fact that
is closed. Hence
Therefore
is an optimal solution of the min-max fractional program (P). This completes the proof.
We will now consider for every
the decreasing convex function
introduced in relation (8.12). In the next result it is shown
for
finite that this function is Lipschitz continuous with Lipschitz
constant
Lemma 8.4 Assume Condition 8.1 holds and
is finite for
Then the function
is strictly decreasing and
Lipschitz continuous with Lipschitz constant
and this function
satisfies
and
Proof. If
is finite for some
then we know by Lemma 8.1 that
is finite for every
Selecting some
using
and the fact that
is finite, it is easy to verify that
for every
Hence
with Lipschitz constant
that
for every
This shows that
is strictly decreasing on
Again by relation (8.22) we obtain for a given
and
that
and for a given and
that
If
is finite, it follows from Lemma 8.4 and Theorem 1.13 of [29]
that the finite-valued convex function
has a nonempty subgradient
set
for every
Hence for every
and
the subgradient inequality
for every
for every
that
is strictly decreasing
and
and
it holds
and since
it follows that
for some
for
that
replaced by
given by
then without Condition 8.1 one can show, using similar techniques as
before, the following result. Note the vector
is an optimal solution
of the (primal) min-max problem (P) if and only if
and
Theorem 8.4 The (primal) min-max fractional program (P) has an
optimal solution if and only if
Moreover, if (P) has an
optimal solution, then the set
listed in relation (8.18) is nonempty
and
For the moment this concludes our discussion of some of the theoretical properties related to the parametric approach. In the next subsection
we will consider the (primal) Dinkelbach-type algorithm and use the previously derived properties to show its convergence.
6.2 The primal Dinkelbach-type algorithm.
Condition 8.5
Condition 8.1 holds and
If
is finite, then for every
while for
the set
Primal Dinkelbach-type algorithm.
1 Select
and
and compute
2 Determine
If
Otherwise compute
let
and
and go to step 1.
To determine
in steps 1 and 2 one has to solve a single-ratio
fractional program. If A is a finite set, then this is easy. Also in order
to select
one has to solve for A finite a finite min-max
problem. Algorithms for such a problem can be found in part 2 of
[55]. In case A is not finite, one needs to solve a much more difficult
problem, a semi-infinite min-max problem (cf. [27, 55]). Therefore to
apply the above generic primal Dinkelbach-type algorithm in practice
one needs to have an efficient algorithm to determine an element of the
set
and this is in most cases the bottleneck. In general one
cannot expect that an efficient and fast algorithm exists. But for special
cases this might be the case. Incorporating the construction of approximate
solutions of the problem
by using smooth approximations of the
max operator, thus speeding up the computations and at the same time
bounding the errors (cf. [16]), seems to be an important topic for future
research.
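For finite A and B every subproblem is indeed solvable by enumeration, and the generic scheme can then be sketched in a few lines of Python. This is a toy illustration in our own notation f(a, x), g(a, x) with g > 0 on A × B (Condition 8.1), not the authors' code:

```python
def primal_dinkelbach(f, g, A, B, tol=1e-10, max_iter=100):
    """Generic primal Dinkelbach-type iteration for the min-max fractional
    program  min_{x in B} max_{a in A} f(a, x) / g(a, x),  specialised to
    finite A and B so that every subproblem is solved by enumeration."""
    x = B[0]
    lam = max(f(a, x) / g(a, x) for a in A)        # initial parameter
    for _ in range(max_iter):
        # parametric subproblem: minimise  max_a [ f(a,x) - lam * g(a,x) ]
        x = min(B, key=lambda y: max(f(a, y) - lam * g(a, y) for a in A))
        val = max(f(a, x) - lam * g(a, x) for a in A)
        if abs(val) <= tol:                        # F(lam) = 0: lam is optimal
            return lam, x
        lam = max(f(a, x) / g(a, x) for a in A)    # strictly smaller ratio
    return lam, x


# single-ratio toy instance: minimise (x**2 + 9) / (2*x) over B = {1, 2, 3, 4}
lam, x = primal_dinkelbach(lambda a, x: x * x + 9,
                           lambda a, x: 2.0 * x,
                           A=[0], B=[1, 2, 3, 4])
print(lam, x)  # 3.0 3
```

The generated parameters decrease strictly towards the optimal value, in line with Lemma 8.7.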
By Lemma 8.6 it is sufficient to find in step 2 of the primal Dinkelbachtype algorithm the solution of the equation
As already
observed, we can give an easy geometrical interpretation of the above
algorithm (cf. [5, 16]). The next result shows that the sequence
generated by the primal Dinkelbach-type algorithm is strictly decreasing.
Lemma 8.7 If Condition 8.5 holds, then the sequence
generated by
the primal Dinkelbach-type algorithm is strictly decreasing and satisfies
for every
Proof. If the algorithm stops at
then by the stopping rule we
know that
This implies by Theorem 8.1 for
that
which shows that
If the algorithm does not
stop at the first step, then
Since
is nonempty, the
algorithm finds some
Hence
and so
we obtain
for every
This shows
To verify that
we assume by contradiction that
Since
this
yields by relation (8.29) and Lemma 8.6 that
In the remainder of this subsection we only consider the case that the
primal Dinkelbach-type algorithm generates an infinite sequence
By Lemma 8.7 it follows that
exists. Imposing
some additional condition it will be shown in Lemma 8.9 that this limit
equals
To simplify the notation in the following lemmas we introduce
for the sequence
generated by the
primal Dinkelbach-type algorithm the sequence
with
and for
with
By the observation after Lemma 8.4 these subgradient sets are nonempty. It is now possible to derive the next result.
Lemma 8.9 If Condition 8.5 holds and there exists a subsequence
satisfying
then
Moreover, for
finite it follows that
Proof. By Lemma 8.7 the sequence
is strictly decreasing,
and so
exists. If
we obtain using
for every
that
and so for
the result is proved. Therefore assume that
is finite. Since
and the function
and the sequence
are
decreasing, it follows that the sequence
is increasing
and
exists. If we assume that
then one can find some
satisfying
for every
By Lemma 8.2 we also know that
Applying the
subgradient inequality to the convex function
we obtain for every
that
with
and so
that
of [29] yields
it follows that
for every
satisfying
However, if the condition of Lemma 8.9
holds, we conjecture for
finite that the min-max fractional program
(P) might not have an optimal solution in B, and so
is not equal to
zero. Using a stronger condition than in Lemma 8.9, we show in the next
lemma for finite
that the sequence
generated by the
primal Dinkelbach-type algorithm satisfies
This sufficient condition implies the existence of an optimal solution of
the (primal) min-max fractional program (P) in B.
Lemma 8.10 If Condition 8.5 holds,
sequence
satisfying
and
Proof. By the convexity of the function
equality we obtain for every
that
with
Since
it follows by our assumption and
the monotonicity of the subgradient sets as shown in Lemma 8.5 that
one can find some finite M satisfying
for every
and every sequence
and
This
shows
for every
obtain
and so
and
Proof. Since
to the function
for some
we obtain by Lemma 8.6 that
Applying now the subgradient inequality
at the point
it follows for
that
Hence
Moreover, for every
again by Lemma 8.6 that
gradient inequality to the function
that
and
we obtain
Applying now the subat the point
yields for
(cf. [67]).
Before introducing convergence results for the primal Dinkelbach-type
algorithm, we need the following definition (cf. [54]).
The sequence
with limit
such that
converges
with
and
Since
is strictly
decreasing and
it follows by Lemma 8.5 that the sequence
is decreasing and satisfies
with
This shows that
exists. To identify
we observe in view of
that
for every
and
for every
and so
Therefore
and we have
identified this limit. Also by our assumption we obtain that there exists
some
and this shows
with
and
Since
is uniformly bounded by the compactness of A × B and the function
is continuous, there exists a converging subsequence
satisfying
To identify
we observe for every
that
with
Since B is compact and continuous, it follows
by Proposition 1.7 of [3] that
is upper semicontinuous, and
this implies by relation (8.40) that
Since
we obtain
by relation (8.41) that
solution and Lemma 8.5 we obtain
with
an optimal solution of this fractional programming problem. Replacing now in relation (8.38)
by
we obtain for a single-ratio fractional program with B compact and
continuous that the sequence
always converges Q-superlinearly. Clearly in practice the
(primal) Dinkelbach-type algorithm stops in a finite number of steps,
and so we need to derive a practical stopping rule. Such a rule is constructed in the next lemma. For other practical stopping rules yielding
so-called
solutions the reader may consult [16].
Lemma 8.12 If Condition 8.5 holds and there exists some subsequence
satisfying
and some
satisfying
then the sequence
is
decreasing and its limit equals 0. Moreover, it follows for every
that
Proof. By Lemma 8.7 the sequence
is strictly decreasing, and this
implies by Lemma 8.5 that the negative sequence
is decreasing. Also,
since
is decreasing, we obtain that the negative sequence
is increasing and so the positive sequence
is decreasing. Applying
now Lemma 8.9 it follows that
while the listed inequality is an immediate consequence of Lemma 8.9 and relation (8.35).
Using Lemma 8.12 a stopping rule for the (primal) Dinkelbach-type
algorithm is given by
for some predetermined
Finally we observe that the (primal) Dinkelbach-type algorithm applied to
a generalized fractional program can be regarded as a cutting plane algorithm (cf. [10]). This generalizes a similar observation by Sniedovich (cf.
[70]) who showed the result for the (primal) Dinkelbach-type algorithm
applied to a single-ratio fractional program.
In the next section we investigate the dual max-min fractional program (D) and its relation to the primal min-max fractional program
(P).
6.3 The dual max-min fractional program.
In this subsection we first investigate under which conditions the optimal objective function values of the primal min-max fractional program
(P) and the dual max-min fractional program (D) coincide. To start
with this analysis, we introduce the following class of bifunctions.
Definition 8.3 The function
is called a concave/convex bifunction on the convex set
with
and
if for every
the function
is concave on
and for every
the function
is convex on
Moreover, a function
is called a convex/concave
bifunction on
if
is a concave/convex bifunction on the same
set. It is called an affine/affine bifunction if it is both a concave/convex
and a convex/concave bifunction.
To guarantee that
equals
we introduce the following
condition.
for every
and so Condition 8.6 implies Condition 8.1. Also, since
for every
the function
is continuous on
A and the set A is compact, we obtain that
is finite for every
implying
For
we derive in Theorem 8.8 that
the optimal objective function value of the (primal) min-max fractional
program (P) equals the optimal objective function value of the (dual)
max-min fractional program (D). Contrary to the proof of the same
result in [5] for generalized fractional programs based on Sion's minimax
result (cf. [31, 69]), the present proof is an easy consequence of the easier-to-prove minimax result by Ky Fan (cf. [26, 27, 32]) and Theorem 8.1.
Note that we do not assume that there exists some
satisfying
This shows by Theorem 8.1 and the remark after Condition 8.6 that
for some
Since
for every
Hence
for every
we obtain
(cf. [28, 30]), the above result holds for a much larger class than the
class of concave/convex bifunctions. However, since the class of concave/convex bifunctions is the best known, we have restricted ourselves to
this well-known class. An easy consequence of Theorem 8.8 is given by
the next result.
Lemma 8.13 If Condition 8.6 holds and there exists some
satisfying
and some
satisfying
then the
vector
is an optimal solution of the (primal) min-max fractional
program (P) and an optimal solution of the (dual) max-min fractional
program (D).
Proof. By the definition of
vector
that
and
Hence
is an optimal solution of the (primal) min-max fractional program (P) and an optimal solution of the (dual) max-min fractional program (D).
If the (dual) max-min fractional program (D) has a unique optimal
solution and the optimal solution set of the (primal) min-max fractional
program (P) is nonempty, then by Lemma 8.13 the unique optimal solution of (D) is an optimal solution of (P). If Condition 8.6 holds and
we use the so-called dual Dinkelbach-type algorithm to be discussed in
subsection 6.4 for identifying
this observation will be useful. To analyze the properties of the optimization problem (D) and at the same
time construct some generic algorithm to solve problem (D), we introduce similar parametric optimization problems as done for problem (P)
at the beginning of subsection 6.1. For every
consider the
parametric optimization problem
For every
the function
is now given by
Since
on A B and
is the infimum of affine functions, it is
obvious that
is a decreasing upper semicontinuous concave function.
The so-called effective domain
is defined by
By the finiteness of on
that actually
optimization problem than problem
optimization problem
It should be clear to the reader that we actually apply the Dinkelbachtype approach to the (dual) max-min fractional program (D) while at
the beginning of subsection 6.1 we applied the same approach to the
(primal) min-max fractional program (P). It is easy to show that
and so we obtain
for every
If the optimization
problem (P) is a single-ratio fractional program, then the set A consists
of one element, and as already observed there is no difference in the
representation of the (primal) min-max fractional program (P) and the
(dual) max-min fractional program (D). Hence for A consisting of one
element it is not surprising that also the functional representations of the
functions
and
are the same. If the set A consists of more than one
element, then we want to know, despite the different functional representations of the functions
and
under which conditions
for some
It should come as no surprise that this equality holds under
the same conditions as used in Theorem 8.8. Note that in the next result
we do not assume that the set
is nonempty.
Theorem 8.9 Assume Condition 8.6 holds where
is a convex/concave
bifunction on A × B. Then it follows for every
that there exists
some
satisfying
Moreover, if
every
Proof. Since
we obtain by Lemma 8.1 that
for every
Also, for a convex/concave bifunction it follows by Condition
8.6 and
that the function
is a concave/convex
bifunction on A × B and
is continuous on
for every
A similar observation holds for
if
is an
affine/affine bifunction. Since A is compact, we can now apply Theorem
3.2 of [32]. This shows
if and
Clearly Lemma 8.14 can be compared with Lemma 8.1 while the next
result is the counterpart of Theorem 8.1.
Theorem 8.10 Assume Condition 8.2 holds and
Then
if and only if
Moreover, if
then
if
and only if
for every
Hence the first part follows. The second part can be
proved similarly, and its proof is therefore omitted.
If Condition 8.6 holds and hence also Condition 8.1 and
is finite, then it might happen (as shown in Example 8.1) that the value
is not equal to zero. If additionally there exists some
satisfying
then by Theorems 8.3 and 8.11 we know that
and we need this assumption in combination with Condition 8.6 to identify
by the so-called dual Dinkelbachtype algorithm to be discussed in the next subsection. The next result is
the counterpart of Theorem 8.2. It can be proved by similar techniques.
Theorem 8.12 Assume Condition 8.2 holds. Then the decreasing function
is lower semicontinuous.
Similarly as in Section 6.1 it follows by Theorem 8.12 that
and the function
is right-continuous with left-hand limits.
As in Section 6.1 we now introduce the following set-valued mappings
and
given by
and
The set
represents the set of optimal solutions of optimization problem
while the set
denotes the set of optimal solutions in A of optimization problem
Also we consider the set-valued
mapping
given by
for every
of Lemma 8.5.
Lemma 8.17 Assume Condition 8.4 holds. Then it follows for every
that
is finite,
is a nonempty compact set for every
and
and
and
it
given by
then without Condition 8.2 one can show the following counterpart of
Theorem 8.4. Recall that a vector
is an optimal solution of (D) if
and only if
and
Theorem 8.14 The (dual) max-min fractional program (D) has an optimal solution if and only if
Moreover, if (D) has an optimal
solution, then the set
is nonempty and
Finally we will consider in this section another dual max-min fractional program if the nonempty set B is given by (see also relation (8.1))
If
on
Proof. Since
and
we
obtain by the positivity of
on A × C that
for every
and
for every
This shows
for every
If
is finite and we want to ensure that
then the following so-called Slater-type condition on the nonempty set B should be considered. Before introducing this condition,
we assume throughout the remainder of this section that the (possibly
empty) set
denotes the set of indices for which
is affine. Note that
denotes the relative interior of the set C (cf.
[29, 58]).
Condition 8.8 There exists some
where C is a closed convex
set satisfying
for every
and
for every
Moreover, for every
the functions
are convex.
To show under which conditions the equality
and the finiteness of
hold, we first need to prove the following Lagrangean duality
result.
Lemma 8.20 Assume Condition 8.8 holds and for a given
the
function
is convex on C and
is concave on C.
Then it follows for every
that there exists some
satisfying
with B defined in relation (8.51). Moreover, the same result holds for
every
if
is convex and
is affine.
Proof. Using the definition of the set B and
Proof. For
we know by the remark after Lemma 8.19 that the
result holds. Hence we only need to verify the result for
finite. To
start we observe by relation (8.42) that
for some
satisfying
This shows
we obtain
Thus for this (Lagrangean) dual (cf. [66, 68]) the single-ratio fractional program and its dual have a different representation. If Theorem
8.15 holds, one can always apply a Dinkelbach-type algorithm to the partial dual
to find
This is discussed in detail in [6] and [9]. In the
next subsection we will introduce a similar Dinkelbach-type algorithm
applied to the (dual) max-min problem (D).
6.4 The dual Dinkelbach-type algorithm.
If
is finite, then for every
the set
is nonempty
while for
the set
is nonempty for every
If Condition 8.9 holds, then one can execute the following so-called
dual Dinkelbach-type algorithm. As for the (primal) Dinkelbach-type
algorithm introduced in Section 6.2 one can give a similar geometrical
interpretation of the next algorithm.
Dual Dinkelbach-type algorithm.
1 Select
and
and compute
2 Determine
If
Otherwise compute
let
and
and go to step 1.
Observe that in steps 1 and 2 one has to solve a single-ratio fractional program. If B is a finite set, then solving such a problem is easy. Moreover,
by Lemma 8.18 it is sufficient to find in step 2 of the dual Dinkelbach-type algorithm the solution of the equation
As already
observed, this yields an easy geometrical interpretation of the above
algorithm (see also [5]). The next result shows that the sequence
generated by the dual Dinkelbach-type algorithm is strictly increasing.
The proof of this result is similar to the proof of the corresponding result for the primal Dinkelbach-type algorithm in Lemma 8.7. This also
shows that the primal Dinkelbach-type algorithm approaches the optimal objective function value from above while the dual Dinkelbach-type
algorithm approaches it from below.
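The monotone behaviour just described can be illustrated for the primal scheme on a finite feasible set, where the recorded parameter sequence decreases to the optimal ratio from above. The problem data below are illustrative assumptions, not taken from the chapter:

```python
# Hedged sketch: primal Dinkelbach iteration for min f(x)/g(x) over a finite
# set B with g > 0 on B, recording the parameter sequence lambda_k, which
# decreases monotonically to the optimal ratio (approach "from above").

def dinkelbach_trace(B, f, g, tol=1e-12):
    x = B[0]
    lam, trace = f(x) / g(x), []
    while True:
        trace.append(lam)
        # parametric subproblem: minimize f(y) - lam * g(y) over B
        x = min(B, key=lambda y: f(y) - lam * g(y))
        if f(x) - lam * g(x) >= -tol:      # F(lam) = 0: lam is optimal
            return x, lam, trace
        lam = f(x) / g(x)                  # strictly smaller parameter
```

Started from a non-optimal point, the recorded sequence is strictly decreasing, mirroring the monotonicity established for the primal scheme in Lemma 8.7; for the dual scheme the analogous sequence increases instead.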
and for
given by
By the observation after Lemma 8.16 these subgradient sets are nonempty. Using a proof similar to that of Lemma 8.9, it is possible to verify the
next result.
Lemma 8.23 If Condition 8.9 holds and there exists a subsequence
satisfying
then
Moreover for
finite it follows that
By relation (8.49) it follows that
for every
and
also strong duality holds, then we know by the remark after Lemma 8.13
that this unique optimal solution of (D) is also an optimal solution of the
primal min-max fractional program (P), assuming this set is nonempty. By
the compactness of A × B, in the next result the set of optimal solutions
of (P) is nonempty.
Theorem 8.18 If Condition 8.9 holds, the functions and are continuous on some open set W containing the compact set A × B and the
max-min fractional program (D) has a unique optimal solution
then
and
and the sequence
converges Q-superlinearly.
If strong duality holds, then it is clear that one can also use the
dual Dinkelbach-type algorithm to determine the value
This is the
main use of this algorithm in the literature (cf. [8, 9]). One could also
combine the dual and primal approaches in case strong duality holds and
use both simultaneously. An example of such an approach, applied to a
generalized fractional program and with an easy geometrical interpretation,
is discussed by Gugat (cf. [39, 41]). In [39] it is shown under slightly
stronger conditions that a Q-superlinear convergence rate always holds.
This concludes our discussion of the parametric approach used in min-max fractional programming, which was a major emphasis in this chapter
on fractional programming.
References
[1] Almogy, Y. and O. Levin, Parametric analysis of a multistage stochastic shipping problem, in Operational Research 69, J.
Lawrence, ed., Tavistock Publications, London, 1970, 359-370.
[2] Asmussen, S., Applied Probability and Queues, Wiley, New York,
1987.
[3] Aubin, J.-P., Optima and Equilibria (An Introduction to Nonlinear
Analysis), Graduate Texts in Mathematics Vol. 140, Springer-Verlag, Berlin, 1993.
[4] Avriel, M., Diewert, W.E., Schaible, S. and I. Zang, Generalized
Concavity, Plenum Press, New York, 1988.
[5] Barros, A., Discrete and Fractional Programming Techniques for
Location Models, Kluwer Academic Publishers, Dordrecht, 1998.
[6] Barros, A.I., Frenk, J.B.G. and J. Gromicho, Fractional location
problems, Location Science 5, 1997, 47-58.
[7] Barros, A.I., Dekker, R., Frenk, J.B.G. and S. van Weeren, Optimizing a general optimal replacement model by fractional programming
techniques, J. Global Optim. 10, 1997, 405-423.
[8] Barros, A.I., Frenk, J.B.G., Schaible, S. and S. Zhang, A new algorithm for generalized fractional programming, Math. Program. 72,
1996, 147-175.
[9] Barros, A.I., Frenk, J.B.G., Schaible, S. and S. Zhang, Using duality to solve generalized fractional programming problems, J. Global
Optim. 8, 1996, 139-170.
[10] Barros, A.I. and J.B.G. Frenk, Generalized fractional programming
and cutting plane algorithms, J. Optim. Theory Appl. 87, 1995,
103-120.
[11] Bazaraa, M.S., Sherali, H.D. and C.M. Shetty, Nonlinear Programming (Theory and Applications), Wiley, New York, 1993.
[12] Bázsa, E., Decision Support for Inventory Systems with Complete
Backlogging, PhD Thesis, Tinbergen Institute Research Series No.
282, Econometric Institute, Erasmus University, Rotterdam, 2002.
[13] Bázsa, E., den Iseger, P.W. and J.B.G. Frenk, Modeling of inventory
control with regenerative processes, International Journal of Production Economics 71, 2001, 263-276.
[14] Bereanu, B., Decision regions and minimum risk solutions in linear
programming, in: A. Prekopa (ed.), Colloquium on Applications of
Mathematics to Economics, Budapest 1963 (Publication House of
the Hungarian Academy of Sciences, Budapest 1965), 37-42.
[15] Frenk, J.B.G. and S. Zhang, A progressive finite representation approach to minimax optimization, 2004, in preparation.
[16] Frenk, J.B.G. and S. Zhang, Generalized fractional programming with user interaction, 2004, submitted.
[17] Charnes, A., Cooper, W.W., Lewin, A.Y. and L.M. Seiford, eds.,
Data Envelopment Analysis: Theory, Methodology and Applications,
Kluwer Academic Publishers, Dordrecht, 1994.
[18] Charnes, A. and W.W. Cooper, Programming with linear fractional
functionals, Naval Research Logistics Quarterly 9, 1962, 181-186.
[19] Charnes, A. and W.W. Cooper, Deterministic equivalents for optimizing and satisficing under chance constraints, Operations Research 11, 1963, 18-39.
[20] Chen, D.Z., Daescu, O., Dai, Y., Katoh, N., Wu, X. and J. Xu,
Optimizing the sum of linear fractional functions and applications,
in Proceedings of the Eleventh Annual ACM-SIAM Symposium on
Discrete Algorithms, ACM, New York, 2000, 707-716.
[66] Schaible, S., Fractional programming, I, Duality, Management Science 22, 1976, 858-867.
[67] Schaible, S., Fractional programming, II, On Dinkelbach's algorithm, Management Science 22, 1976, 868-873.
[68] Schaible, S., Duality in fractional programming: a unified approach,
Oper. Res. 24, 1976, 452-461.
[69] Sion, M., On general minimax theorems, Pacific J. Math. 8, 1958,
171-176.
[70] Sniedovich, M., Fractional programming revisited, Eur. J. Oper. Res. 33, 1988, 334-341.
[71] Stancu-Minasian, I.M., Fractional Programming: Theory, Methods
and Applications, Kluwer Academic Publishers, Dordrecht, 1997.
[72] On some procedures for solving fractional max-min problems, Mathematica-Revue d'Analyse Numérique et de Théorie de
l'Approximation 17, 1988, 73-91.
[73] A parametrical method for max-min nonlinear fractional
problems, Seminarul Itinerant de Aproximare şi Convexitate, Cluj-Napoca, 1983, 175-184.
[74] Von Neumann, J., Über ein ökonomisches Gleichungssystem und
eine Verallgemeinerung des Brouwerschen Fixpunktsatzes, in Ergebnisse eines mathematischen Kolloquiums (8), Menger, K., ed.,
Leipzig und Wien, 1937, 73-83.
[75] Zhang, S., Stochastic Queue Location Problems, Tinbergen Institute
Research Series No. 14, Econometric Institute, Erasmus University,
Rotterdam, 1991.