An Efficient Algorithm For Linear Programming (1990)
V Ch Venkaiah
Abstract. A simple but efficient algorithm is presented for linear programming. The algorithm computes the projection matrix exactly once throughout the computation, unlike Karmarkar's algorithm, wherein the projection matrix is computed at each and every iteration. The algorithm is best suited for implementation on a parallel architecture. The complexity of the algorithm is being studied.

Keywords. Direction vector; Karmarkar's algorithm; Moore-Penrose inverse; orthogonal projection; projection matrix; projective transformation.
1. Introduction
In 1979 Khachiyan gave an algorithm, called the ellipsoid algorithm, for linear programming [1]. It has polynomial-time complexity, but it has been found not to be superior to the simplex algorithm in practice. A new polynomial-time algorithm based on a projective transformation technique was published by Karmarkar in 1984 [2]. This algorithm is reported to be superior to the simplex algorithm even in practice. In this paper an algorithm is presented [3]. We feel that this algorithm is more efficient than the existing algorithms because

1. it computes the projection matrix P = I - A^+ A, which requires O(mn^2) operations, exactly once throughout the computation and needs only O(n^2) operations per iteration;
2. it makes use of the information that is available from the most recently computed point.

Note that Karmarkar's algorithm computes the projection matrix at each iteration and hence requires O(n^2.5) operations per iteration.
This problem is trivial because the solution is obtained by setting those components of X to zero that correspond to positive components of C and the other components of X to 'infinity'. We depart from the usual practice and solve the above problem by an iterative method with an initial feasible solution X^0 > 0. The method is described in the following algorithm.
Algorithm A1

Step 1. Compute an initial feasible solution X^0 > 0.
Step 2. Set Y = C and K = 0.
Step 3. Compute
    λ = min { x_i^k / Y_i : Y_i > 0 }.
Step 4. Compute
    X^{k+1} = X^k - ελY,  where 0 < ε < 1.
Step 5. If the optimum is achieved then stop.
Step 6. Set K = K + 1, D_k = diag(x_1^k, x_2^k, ..., x_n^k).
Step 7. Set Y = D_k C and go to step 3.
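As a concrete illustration, the iteration above can be sketched in NumPy. The problem instance, the numerical stopping test in step 5, and the iteration cap are our own illustrative choices, not from the paper:

```python
import numpy as np

def algorithm_a1(c, x0, eps=0.5, tol=1e-10, max_iter=10_000):
    """Illustrative sketch of Algorithm A1 for min c'x subject to x >= 0.
    Components with c_i > 0 are driven toward zero; the others grow."""
    x = x0.astype(float).copy()
    y = c.astype(float).copy()            # step 2: Y = C
    for _ in range(max_iter):
        pos = y > 0
        if not pos.any():
            break
        lam = np.min(x[pos] / y[pos])     # step 3: step length lambda
        x = x - eps * lam * y             # step 4: move, 0 < eps < 1
        if np.max(x[c > 0]) < tol:        # step 5: optimum reached (numerically)
            break
        y = x * c                         # steps 6-7: Y = D_k C, D_k = diag(x)
    return x

c = np.array([2.0, -1.0, 3.0])
x = algorithm_a1(c, np.ones(3))
print(np.round(x[[0, 2]], 8))             # the c_i > 0 components vanish
```

Note that each component with C_i > 0 shrinks by a fixed factor per iteration, which is exactly the geometric decay used later in the complexity argument of §3.3.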
Let A^+ = A^t (A A^t)^-, where ^- denotes a generalized inverse; A^+ is called the Moore-Penrose inverse of A. Also, let P = I - A^+ A. It can be proved that the columns of P span the null space of A and are normals to the hyperplanes X_i = 0 in the null space of A. P is the projection operator that projects every vector in R^n orthogonally onto the null space of A. To use Algorithm A1 to solve P2, we need an initial feasible solution X^0 such that A X^0 = b, and the direction vector Y must lie not in R^n but in the null space of A. This can be achieved by operating P on this vector. With this explanation we now give an algorithm to solve P2.
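The projector P can be formed once from a pseudoinverse routine; a minimal NumPy sketch (the constraint matrix and test vector are illustrative, not from the paper):

```python
import numpy as np

# Illustrative constraint matrix A (m = 2 constraints, n = 4 variables).
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0, 4.0]])
n = A.shape[1]

# Moore-Penrose inverse A^+ and the projector P = I - A^+ A,
# computed exactly once (O(m n^2) work).
P = np.eye(n) - np.linalg.pinv(A) @ A

# P sends any vector orthogonally onto the null space of A:
v = np.array([1.0, -2.0, 0.5, 3.0])
w = P @ v
print(np.allclose(A @ w, 0))   # w lies in the null space of A
print(np.allclose(P @ P, P))   # P is idempotent: P^2 = P
```

Symmetry and idempotence of P (P^t = P, P^2 = P) are what the later distance formula and correctness proof rely on.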
Step 2. Compute the projection operator
    P = I - A^+ A.
Step 3. Compute
    C_p = P C,   Y = C_p / ||C_p||.
Step 4. Set K = 0.
Step 5. Compute
    λ = min { x_i^k / Y_i : Y_i > 0 }.
The problem has an unbounded solution if all Y_i <= 0 and at least one Y_i < 0. If all Y_i = 0 then X^k is a solution.
Step 6. Compute
    X^{k+1} = X^k - ελY,  where 0 < ε < 1.
Step 7. If
    ||X^{k+1} - X^k|| < 2^{-L}
then stop.
Step 8. Set
    K = K + 1,   D_k = diag(d_ii),
where d_ii = x_i^k / sqrt(p_ii) is the distance between the point X^k and the hyperplane X_i = 0; p_ii is positive because P^2 = P.
Step 9. Compute
    Y = P D_k C_p / ||P D_k C_p||
and go to step 5.

The distance δ between the point X^k and the hyperplane X_i = 0 is calculated as follows. Consider the point
    X^k - δ P_i / ||P_i||,
which lies on the hyperplane X_i = 0, so its i-th component vanishes:
    x_i^k - δ p_ii / ||P_i|| = 0.
Since P^2 = P and P is symmetric, ||P_i||^2 = p_ii. This implies
    δ = x_i^k / sqrt(p_ii).
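The steps above can be sketched as follows; this is a minimal NumPy rendering on our own illustrative problem data, with the projector built once as in step 2. The small guards against a vanishing direction and the specific tolerances are our additions, not the paper's:

```python
import numpy as np

def algorithm_a2(A, b, c, x0, eps=0.5, L=30, max_iter=10_000):
    """Illustrative sketch of Algorithm A2 for min c'x s.t. Ax = b, x >= 0."""
    n = A.shape[1]
    P = np.eye(n) - np.linalg.pinv(A) @ A          # step 2: projector, built once
    cp = P @ c                                     # step 3: projected cost C_p
    y = cp / np.linalg.norm(cp)
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        pos = y > 1e-12
        if not pos.any():                          # step 5: unbounded or optimal
            break
        lam = np.min(x[pos] / y[pos])              # step 5: step length
        x_new = x - eps * lam * y                  # step 6: 0 < eps < 1
        if np.linalg.norm(x_new - x) < 2.0 ** (-L):
            x = x_new                              # step 7: converged numerically
            break
        x = x_new
        d = x / np.sqrt(np.diag(P).clip(min=1e-15))   # step 8: D_k = diag(d_ii)
        z = P @ (d * cp)                           # step 9: only O(n^2) per iteration
        nz = np.linalg.norm(z)
        if nz < 1e-15:
            break
        y = z / nz
    return x

# Illustrative instance: min x1 + 2 x2 + 3 x3  s.t.  x1 + x2 + x3 = 3, x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])
c = np.array([1.0, 2.0, 3.0])
x = algorithm_a2(A, b, c, np.ones(3))
print(np.allclose(A @ x, b), c @ x < c @ np.ones(3))
```

On this instance the iterates keep A x = b and x >= 0 while the objective decreases, matching parts (i)-(iii) of Theorem 1 below; since step 9 reuses the fixed P, each iteration costs only the stated O(n^2).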
Step 3 computes the orthogonal projection of C onto the null space of A and the initial direction vector; step 5 computes the step length; step 6 computes a new feasible solution at which the objective function value is less than the previous value; and step 9 computes the new direction vector as a projection onto the null space of A. It can easily be seen that the computation of the algorithm is dominated by the computation of the new direction vector, which requires only O(n^2) operations, whereas in Karmarkar's algorithm it requires O(n^2.5) operations.

3.2 Correctness of algorithm A2

Theorem 1. Let P2 have a bounded solution. Then algorithm A2 converges to a solution of P2.
Proof.

(i) Nonnegativity conditions. Since X^0 > 0,
    λ = min { x_i^k / Y_i : Y_i > 0 },
and ε < 1, it follows that X^{k+1} > 0 for each k and hence
    lim_{k → ∞} X^k >= 0.

(ii) Feasibility. For k = 0,
    A Y = A C_p / ||C_p|| = A P C / ||C_p|| = 0,  since C_p = P C and A P = 0.
For k >= 1,
    A Y = A P D_k C_p / ||P D_k C_p|| = 0,
and hence A X^k = b for every k.

(iii) Optimality of the objective function. For k = 0, we have
    C^t X^0 - C^t X^1 = ελ C^t C_p / ||C_p||
                      = (ελ / ||C_p||) (P C)^t C_p
                      = (ελ / ||C_p||) C_p^t C_p,  since P C = C_p,
                      = ελ ||C_p|| > 0.
For k >= 1,
    C^t X^k - C^t X^{k+1} = ελ C^t Y
                          = (ελ / ||P D_k C_p||) C^t P D_k C_p
                          = (ελ / ||P D_k C_p||) C_p^t D_k C_p > 0.
Therefore C^t X^k is a decreasing sequence. Since P2 is assumed to have a bounded solution, it follows that C^t X^k is bounded below. Hence C^t X^k converges.

3.3 Complexity of algorithm A1

Let L be such that 2^L is numerical infinity and 2^{-L} is numerical zero.

Theorem 2. Algorithm A1 converges to the solution of P1 in O(L) steps.
Proof. From step 7 of algorithm A1, Y = D_k C, so that Y_i = x_i^k C_i and, for k >= 1,
    λ = min { x_i^k / Y_i : Y_i > 0 } = min { 1 / C_i : C_i > 0 },  since x_i^k > 0.
Therefore, for each i with C_i > 0,
    x_i^{k+1} = x_i^k - ελ x_i^k C_i = x_i^k (1 - ελ C_i).
In general,
    x_i^k = x_i^0 (1 - ελ C_i)^k.
Such a component reaches numerical zero, i.e. (1 - ελ C_i)^k <= 2^{-L}, because
    k log_2 (1 - ελ C_i) <= -L
    ⟺ k >= L / ( -log_2 (1 - ελ C_i) ).
Hence algorithm A1 converges in O(L) steps.
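The counting argument above can be checked numerically; a small sketch with illustrative values (the choices of ε, C_i, and L are ours, and λ is taken as 1/C_i for simplicity):

```python
import math

# Illustrative values: cost coefficient C_i > 0, step fraction eps,
# and lambda = min{1/C_j : C_j > 0}, here simply 1/C_i.
eps, C_i = 0.5, 2.0
lam = 1.0 / C_i
r = 1.0 - eps * lam * C_i        # per-iteration shrink factor, here 0.5

L = 30                            # 2^{-L} plays the role of numerical zero
# Smallest k with r**k <= 2**-L, i.e. k >= L / (-log2(r)):
k = math.ceil(L / -math.log2(r))
print(k, r ** k <= 2 ** -L)      # prints: 30 True
```

With r fixed independently of k, the required k grows linearly in L, which is the O(L) bound of Theorem 2.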
4. Conclusions
The complexity of Algorithm A2 is being studied. Observe that λ_1 = λ_2 = ... in Algorithm A1. This observation may be useful in establishing the complexity of Algorithm A2. We feel that the path followed by our algorithm is the same as that of Karmarkar's algorithm. Therefore, Algorithm A2 takes at most O(n) steps to reach the optimum and hence the complexity of Algorithm A2 is O(n^3). In practice the calculations need to be done in high precision. Modifying the direction vector Y = (P D_k C_p) / ||P D_k C_p|| in Step 9 of Algorithm A2 to Y = (P D_k^{γ_k} C_p) / ||P D_k^{γ_k} C_p||, by introducing a real parameter γ_k, yields another algorithm. The resulting algorithm will be better in performance and can handle practical problems with the existing precision if the optimal values of γ_k can be computed. Further details on computing γ_k will be discussed in a future correspondence.
The author thanks Dr Eric A Lord for the discussions and Dr S K Sen for getting him a research associateship in SERC for pursuing this research work.
References
[1] Khachiyan L G, A polynomial algorithm in linear programming, Doklady Akademii Nauk SSSR 244:5 (1979), pp. 1093-1096; translated in Soviet Mathematics Doklady 20:1 (1979), pp. 191-194
[2] Karmarkar N, A new polynomial-time algorithm for linear programming, Tech. Rep., AT&T Bell Labs., New Jersey (1984)
[3] Venkaiah V Ch, An efficient algorithm for linear programming, Tech. Rep. TR\SERC\KBCS\89-003, SERC, IISc, Bangalore 560012 (1989)