
On the ε-perturbation Method for Avoiding Degeneracy

Nimrod Megiddo and R. Chandrasekaran†

Abstract. Although it is NP-complete to decide whether a linear programming problem is degenerate, the ε-perturbation method can be used to reduce, in polynomial time, any linear programming problem with rational coefficients to a nondegenerate problem. The perturbed problem has the same status as the given one in terms of feasibility and unboundedness, and optimal bases of the perturbed problem are optimal in the given problem.
Keywords: linear programming, polynomial-time, perturbation, degeneracy.
OR/MS Index: 650

1. Introduction
Degenerate problems cause some inconvenience in practice as well as in the theory of linear programming. However, in this note we are interested only in the theoretical side. Many methods are known for avoiding the evils caused by degeneracy in the context of the simplex method (see, for example, Murty [3]). When a new algorithm is proposed, the analysis is often complicated by the need to address degeneracy, and results are sometimes proved under a nondegeneracy assumption.
Our aim here is to point out that for theoretical purposes degeneracy can easily be dispensed with in polynomial time. The basic idea is an old one and is known as the "ε-perturbation" method due to Charnes [2]. In the context of simplex-type methods there is no need to determine a precise value for $\varepsilon$. However, for a more general application, it is interesting to point out that an $\varepsilon$ whose size is bounded by a polynomial in the input size can be determined in polynomial time.
IBM Almaden Research Center, 650 Harry Road, San Jose, California 95120-6099, and School of Mathematical Sciences, Tel Aviv University, Tel Aviv, Israel.
† The University of Texas at Dallas, Box 830688, Richardson, TX 75083-0688.

2. Preliminaries
Consider the linear programming problem:

Maximize $c^T x$
(P)    subject to $Ax = b$
       $x \ge 0$,

where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$ and $c \in \mathbb{Z}^n$ are integral, and $A$ is of rank $m$. Assuming $A$ has no zero row or column, the "size" of $A$ is at least
$$\max(m, n) + \lceil \log_2(M + 1) \rceil,$$
where $M$ is the maximum absolute value of any entry of $A$.
The problem (P) is said to be primal degenerate if $b$ is a linear combination of fewer than $m$ columns of $A$. The problem of recognizing primal degeneracy is NP-complete [1]. Denote
$$\bar\varepsilon = (\varepsilon, \varepsilon^2, \ldots, \varepsilon^m)^T.$$
It is well known that there exists $\varepsilon_0 > 0$ such that for every $\varepsilon$, $0 < \varepsilon \le \varepsilon_0$, the perturbed problem, with the vector
$$b(\varepsilon) = b + \bar\varepsilon$$
replacing $b$, is primal-nondegenerate. The proof follows by observing that, for any basis of $A$ (i.e., a nonsingular submatrix $B \in \mathbb{Z}^{m \times m}$), the coordinates of the vector $B^{-1} b(\varepsilon)$ are nonzero polynomials in $\varepsilon$ of degree not greater than $m$, and hence do not vanish in some open interval $(0, \varepsilon_0)$.
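As a concrete illustration (not part of the original paper), the following NumPy sketch applies the perturbation to a deliberately degenerate toy instance and shows that the basic solution of the perturbed system has no zero components; the matrix $A$, the vector $b$, and the chosen basis are illustrative assumptions.

```python
# Illustrative sketch: perturb the right-hand side by (eps, eps^2, ..., eps^m)^T
# and inspect the resulting basic solution for one basis of a degenerate instance.
import numpy as np

# Hypothetical data: (P) is primal degenerate because b is a multiple of a single column of A.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 0.0])
m = A.shape[0]

eps = 1e-3
eps_bar = np.array([eps ** j for j in range(1, m + 1)])   # (eps, eps^2, ..., eps^m)

B = A[:, [0, 1]]                        # a basis: a nonsingular m-by-m submatrix of A
x_B = np.linalg.solve(B, b + eps_bar)   # basic solution for the perturbed right-hand side
print(x_B)                              # [1 + eps, eps^2]: no component vanishes
```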
The dual problem of (P) is

Minimize $b^T y$
(D)    subject to $A^T y - v = c$
       $v \ge 0$.

Dual nondegeneracy means that in every solution of
$$A^T y - v = c$$
the vector $v$ has at least $n - m$ nonzeros. Dual degeneracy can be handled by an ε-perturbation of the vector $c$. In fact, the resolution of primal degeneracy and dual degeneracy can be accomplished independently, so we concentrate here on the primal one.

3. The perturbation and its consequences
We first determine a valid value for $\varepsilon_0$.

Proposition 3.1. For every $\varepsilon > 0$ such that
$$\varepsilon < \varepsilon_0 = \frac{1}{(m!)^2 M^{2m-1}},$$
the problem

Maximize $c^T x$
(P(ε))    subject to $Ax = b + \bar\varepsilon$
          $x \ge 0$

is primal-nondegenerate.
Proof: Let $B$ be any basis and let $B^{-1}_i$ denote the $i$th row of $B^{-1}$. Consider the polynomial
$$p(\varepsilon) = a_0 + \sum_{j=1}^m a_j \varepsilon^j = B^{-1}_i b(\varepsilon) = B^{-1}_i b + \sum_{j=1}^m B^{-1}_{ij} \varepsilon^j.$$
Now, $p(\varepsilon)$ is not identically zero since $B^{-1}$ is of course nonsingular. For every $j$ ($j = 0, 1, \ldots, m$), if $a_j \ne 0$, then
$$|a_j| \ge \frac{1}{|\det(B)|} \ge \frac{1}{m!\, M^m}.$$
On the other hand, for $j = 1, \ldots, m$,
$$|a_j| \le (m-1)!\, M^{m-1}.$$
Let $k$ be the smallest index such that $a_k \ne 0$. For $0 < \varepsilon < \varepsilon_0 \le 1$, we have
$$|p(\varepsilon)| \ge |a_k|\, \varepsilon^k - \sum_{j=k+1}^m |a_j|\, \varepsilon^j \ge |a_k|\, \varepsilon^k - \varepsilon^{k+1} \sum_{j=k+1}^m |a_j| \ge \frac{\varepsilon^k}{m!\, M^m} - \varepsilon^{k+1}\, m!\, M^{m-1} > 0,$$
where the last inequality holds because $\varepsilon < \varepsilon_0$. This estimate implies our claim.
Note that the number $\varepsilon_0$ can be computed from the data in polynomial time.
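Since $\varepsilon_0$ depends only on $m$ and $M$, it can be written down exactly with rational arithmetic; the short sketch below (an illustration, not code from the paper) does so for a small integer matrix.

```python
# Compute eps_0 = 1 / ((m!)^2 * M^(2m-1)) exactly from an integer matrix A.
from fractions import Fraction
from math import factorial

def eps_zero(A):
    """Bound of Proposition 3.1 for an integer matrix A given as a list of rows."""
    m = len(A)
    M = max(abs(a) for row in A for a in row)   # largest absolute value of an entry
    return Fraction(1, factorial(m) ** 2 * M ** (2 * m - 1))

A = [[1, 0, 1],
     [0, 1, 1]]                                  # illustrative data (m = 2, M = 1)
print(eps_zero(A))                               # 1/4
```

The number of bits needed to write $\varepsilon_0$ is $O(m \log m + m \log M)$, i.e., polynomial in the size of the input.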
Corollary 3.2.
(i) A basis $B$ is feasible in (P(ε)) for some $\varepsilon \in (0, \varepsilon_0)$ if and only if it is feasible in (P(ε)) for every $\varepsilon \in (0, \varepsilon_0)$.
(ii) If a basis $B$ is feasible in (P(ε)) for some $\varepsilon \in (0, \varepsilon_0)$, then it is feasible in (P).
Remark 3.3. A basis $B$ may be feasible in (P) but infeasible in (P(ε)) for all $\varepsilon > 0$. A trivial example is the system $-x_1 = 0$, $x_1 \ge 0$. Thus, the result of Corollary 3.2 is not yet satisfactory, since by solving (P(ε)) instead of (P) we may reach the wrong conclusion that (P) is infeasible. This difficulty will be resolved below.
Denote $e = (1, \ldots, 1)^T$ and let
$$K = (m!)^2 M^{2m-1}.$$
Consider the problem

Maximize $c^T x$
subject to $Ax = b + \bar\varepsilon$
           $x \ge -\varepsilon K e$,

or, equivalently,

Maximize $c^T x$
(P̃(ε))    subject to $Ax = b + \varepsilon K A e + \bar\varepsilon$
           $x \ge 0$.
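The data of (P̃(ε)) can likewise be assembled exactly from $A$, $b$ and a rational $\varepsilon$; the following sketch (illustrative, not from the paper) builds the modified right-hand side $b + \varepsilon K A e + \bar\varepsilon$.

```python
# Assemble the right-hand side b + eps*K*A*e + (eps, eps^2, ..., eps^m)^T of (P~(eps)).
from fractions import Fraction
from math import factorial

def perturbed_rhs(A, b, eps):
    m = len(A)
    M = max(abs(a) for row in A for a in row)
    K = factorial(m) ** 2 * M ** (2 * m - 1)      # K = (m!)^2 * M^(2m-1)
    Ae = [sum(row) for row in A]                  # A e, where e = (1, ..., 1)^T
    eps = Fraction(eps)
    return [b[i] + eps * K * Ae[i] + eps ** (i + 1) for i in range(m)]

A = [[1, 0, 1],
     [0, 1, 1]]                                   # illustrative data
b = [1, 0]
print(perturbed_rhs(A, b, Fraction(1, 100)))      # [109/100, 801/10000]
```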
Proposition 3.4. The problem (P) is feasible if and only if there exists a basis $B$ which is feasible in (P̃(ε)) for every $\varepsilon$ such that
$$0 < \varepsilon \le \varepsilon_1 = \frac{1}{m^2 (m!)^4 M^{4m}}.$$

Proof: The 'if' part is obvious. Suppose (P) is feasible. Then there exists a basis $B$ such that $B^{-1} b \ge 0$. Obviously, for every $i$ ($i = 1, \ldots, m$),
$$B^{-1}_i (b + \bar\varepsilon) \ge -m\,(m-1)!\, M^{m-1} \varepsilon > -\varepsilon K.$$
This implies that for every $\varepsilon \in (0, 1]$, the problem (P̃(ε)) is feasible. Thus, for every such $\varepsilon$ there exists a basis $B$ such that
$$B^{-1}(b + \varepsilon K A e + \bar\varepsilon) \ge 0.$$
Let $B$ be any basis, let $i$ be any index ($1 \le i \le m$), and consider the polynomial
$$p(\varepsilon) = a_0 + \sum_{j=1}^m a_j \varepsilon^j = B^{-1}_i (b + \varepsilon K A e + \bar\varepsilon) = B^{-1}_i b + \varepsilon\,(K B^{-1}_i A e + B^{-1}_{i1}) + \sum_{j=2}^m B^{-1}_{ij} \varepsilon^j.$$
We first claim that $p(\varepsilon)$ is not identically zero. The proof of this claim is as follows. There exists $j$ ($1 \le j \le m$) such that $B^{-1}_{ij} \ne 0$. If $j \ne 1$ then we are done. If $j = 1$ then if $B^{-1}_i A e = 0$ we are done, and otherwise
$$|a_1| = |K B^{-1}_i A e + B^{-1}_{i1}| \ge K\, |B^{-1}_i A e| - |B^{-1}_{i1}| \ge \frac{K}{m!\, M^m} - (m-1)!\, M^{m-1} > 1.$$
As in the proof of Proposition 3.1, if $a_j \ne 0$, then
$$|a_j| \ge \frac{1}{m!\, M^m}.$$
Also, in this case, for $j = 2, \ldots, m$,
$$|a_j| \le (m-1)!\, M^{m-1},$$
and
$$|a_1| \le K m\, m!\, M^m + (m-1)!\, M^{m-1} < m\,(m!)^3 M^{3m}.$$
It follows that for every $\varepsilon > 0$ such that $\varepsilon \le \varepsilon_1$, we have $|p(\varepsilon)| > 0$. Since there are only finitely many bases and each such polynomial keeps a constant sign on $(0, \varepsilon_1]$, this implies that there exists a basis $B$ such that for every $\varepsilon$ ($0 < \varepsilon \le \varepsilon_1$) and every $i$,
$$B^{-1}_i (b + \varepsilon K A e + \bar\varepsilon) \ge 0.$$

Corollary 3.5. If a basis $B$ is feasible in (P̃(ε)) for any $\varepsilon \in (0, \varepsilon_1)$, then $B$ is feasible in (P).
Proof: It follows from the proof of Proposition 3.4 that if $B$ is feasible in (P̃(ε)) for some $\varepsilon \in (0, \varepsilon_1)$ then it is feasible in (P̃(ε)) for all such $\varepsilon$, and hence also in (P).
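Like $\varepsilon_0$, the bound $\varepsilon_1$ is an explicit rational number computable from $m$ and $M$ alone; a small illustrative sketch (not from the paper):

```python
# Compute eps_1 = 1 / (m^2 * (m!)^4 * M^(4m)) exactly from an integer matrix A.
from fractions import Fraction
from math import factorial

def eps_one(A):
    """Bound of Proposition 3.4; note that eps_1 <= eps_0 for every integer matrix A."""
    m = len(A)
    M = max(abs(a) for row in A for a in row)
    return Fraction(1, m * m * factorial(m) ** 4 * M ** (4 * m))

A = [[1, 0, 1],
     [0, 1, 1]]                                   # illustrative data (m = 2, M = 1)
print(eps_one(A))                                 # 1/64
```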

Corollary 3.6. For any $\varepsilon \in (0, \varepsilon_1)$, the problem (P̃(ε)) is primal-nondegenerate.


Given a basis $B$, denote, as usual, by $c_B$ the restriction of $c$ to the coordinates corresponding to the columns of $B$. A basis $B$ is called primal-optimal in (P) if $c_B^T B^{-1} b$ equals the maximum of (P). Without loss of generality, assume $c \ne 0$.
Proposition 3.7. If $B$ is a primal-optimal basis in (P̃(ε)) for any $\varepsilon > 0$ such that
$$\varepsilon < \varepsilon_2 = \frac{1}{2 m^2 (m!)^4 M^{4m} \max_j \{|c_j|\}},$$
then it is also primal-optimal in (P).
Proof: If (P̃(ε)) has an optimal solution for any $\varepsilon \in (0, \varepsilon_1)$, then by Corollary 3.5, (P) has a feasible solution. Moreover, by the duality theorem applied to (P̃(ε)), the problem (D) is feasible and hence (P) has an optimal solution. Furthermore, (P) has a basic optimal solution. Suppose $B$ is not optimal in (P) and let $C$ be an optimal basis. Consider the polynomials
$$p(\varepsilon) = c_B^T B^{-1}(b + \varepsilon K A e + \bar\varepsilon),$$
$$q(\varepsilon) = c_C^T C^{-1}(b + \varepsilon K A e + \bar\varepsilon).$$
Since $q(0) > p(0)$, it follows that
$$q(0) \ge p(0) + \frac{1}{m!\, M^m}.$$
Moreover, for $\varepsilon < \varepsilon_1$,
$$q(\varepsilon) - p(\varepsilon) \ge \frac{1}{m!\, M^m} - 2 m^2 (m!)^3 M^{3m} \max_j \{|c_j|\}\, \varepsilon.$$
It follows that for $\varepsilon \in (0, \varepsilon_2)$, $q(\varepsilon) > p(\varepsilon)$, which contradicts our assumptions.
Proposition 3.8. If (P̃(ε)) is feasible and unbounded for any $\varepsilon < \varepsilon_1$, then (P) is also feasible and unbounded.
Proof: Feasibility of (P) is claimed in Corollary 3.5. Unboundedness follows by the duality theorem, since the dual of (P̃(ε)) is feasible if and only if (D) is.
To conclude:

Theorem 3.9. The problem (P̃(ε₂/2)) is nondegenerate, has polynomial size in terms of the size of (P), and is equivalent to (P) in the sense that both have the same status in terms of feasibility and boundedness. Moreover, every optimal basis of (P̃(ε₂/2)) is an optimal basis of (P).

Note that (P) may have an optimal basis which is not optimal in (P̃(ε₂/2)).
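To make the reduction of Theorem 3.9 concrete, here is an end-to-end sketch of one reading of the procedure (not code from the paper): compute $\varepsilon_2$ from the data, form (P̃(ε₂/2)), and hand the perturbed problem to an LP solver. It assumes SciPy is available; for realistically sized data the perturbation is far below floating-point resolution, so an exact rational LP solver would be required in practice. The matrix $A$ and vectors $b$, $c$ are illustrative.

```python
# Sketch of the reduction in Theorem 3.9 on a toy instance: build (P~(eps_2/2)) and solve it.
from fractions import Fraction
from math import factorial
from scipy.optimize import linprog   # assumes SciPy is installed

A = [[1, 0, 1],
     [0, 1, 1]]                      # illustrative integer data of rank m = 2
b = [1, 0]
c = [1, 1, 0]

m = len(A)
M = max(abs(a) for row in A for a in row)
K = factorial(m) ** 2 * M ** (2 * m - 1)                      # K = (m!)^2 * M^(2m-1)
eps2 = Fraction(1, 2 * m * m * factorial(m) ** 4 * M ** (4 * m) * max(abs(v) for v in c))
eps = eps2 / 2                                                # the value used in Theorem 3.9

# Right-hand side of (P~(eps)): b + eps*K*A*e + (eps, eps^2, ..., eps^m)^T.
rhs = [b[i] + eps * K * sum(A[i]) + eps ** (i + 1) for i in range(m)]

# Solve the perturbed (nondegenerate) problem; linprog minimizes, so negate c.
# By Theorem 3.9, an optimal basis found here is also optimal for (P).
res = linprog([-v for v in c], A_eq=A, b_eq=[float(r) for r in rhs], bounds=(0, None))
print(res.x)
```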
Remark 3.10. It is easy to see that, using the ideas presented above, a perturbation can be applied to the objective function vector $c$ so that the dual problem becomes nondegenerate. The perturbation itself depends only on the matrix $A$. Note that in the case of the primal problem the estimate of an upper bound on $\varepsilon$, which guarantees that an optimal basis for the perturbed problem is optimal in the original problem, depends on the vector $c$. When we perturb both $b$ and $c$, we get a problem which is primal- and dual-nondegenerate, and we would like to compute a suitable bound for $\varepsilon$. Such a nondegenerate problem has a unique basis which is both primal- and dual-optimal. As in Proposition 3.7, it is easy to find a value $\varepsilon_3$, depending on $b$ and $c$, such that if a basis $B$ is primal- and dual-optimal in (P̃(ε)) for any $\varepsilon \in (0, \varepsilon_3)$, then $B$ is primal- and dual-optimal in (P).
Acknowledgement. We thank K. G. Murty for a conversation which drew our attention to the problem discussed in this paper. We also thank M. Kojima for pointing out some errors in an earlier draft.

References
[1] R. Chandrasekaran, S. N. Kabadi and K. G. Murty, "Some NP-complete problems in linear programming," Operations Research Letters 1 (1982) 101-104.
[2] A. Charnes, "Optimality and degeneracy in linear programming," Econometrica 20 (1952) 160-170.
[3] K. G. Murty, Linear Programming, John Wiley & Sons, New York, 1983.
