Karmarkar’s Method
E.1 Introduction
In 1984 Narendra Karmarkar introduced a new and innovative polynomial-time algorithm for linear
programming. The polynomial running-time of this algorithm combined with its
promising performance created tremendous excitement (as well as some initial skep-
ticism) and spawned a flurry of research activity in interior-point methods for linear
programming that eventually transformed the entire field of optimization.
Despite its momentous impact on the field, Karmarkar’s method has been
superseded by algorithms that have better computational complexity and better
practical performance. Chapter 10 presents an overview of some of the leading
interior-point methods for linear programming. Karmarkar’s method nonetheless remains
interesting because of its historical impact and, possibly, because of its projective
scaling approach. This appendix outlines the main concepts of the method.
E.2 Karmarkar’s Projective Scaling Method
Karmarkar’s method makes three assumptions about the linear program to be solved. The first is that the problem is given in the special canonical form
\[
\begin{array}{ll}
\mbox{minimize} & z = c^T x \\
\mbox{subject to} & Ax = 0, \\
& a^T x = 1, \\
& x \ge 0.
\end{array}
\]
This assumption is not restrictive, and any linear program can be written in such
form. Consider for example a problem in standard form
\[
\begin{array}{ll}
\mbox{minimize} & \tilde{c}^T \tilde{x} \\
\mbox{subject to} & \tilde{A}\tilde{x} = \tilde{b}, \\
& \tilde{x} \ge 0.
\end{array}
\]
By introducing an additional variable $x_n$, this problem can be rewritten as
\[
\begin{array}{ll}
\mbox{minimize} & z = \tilde{c}^T \tilde{x} \\
\mbox{subject to} & \tilde{A}\tilde{x} - \tilde{b}\, x_n = 0, \\
& x_n = 1, \\
& \tilde{x}, x_n \ge 0.
\end{array}
\]
The problem is now in the special canonical form, with $A = [\tilde{A}, -\tilde{b}]$, $x = (\tilde{x}, x_n)^T$,
$a = e_n$ (the $n$-th unit vector), and $c = (\tilde{c}, 0)^T$.
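As an illustration of this embedding, here is a minimal Python/NumPy sketch; the function name and the sample data are ours, not the text’s.

```python
import numpy as np

def standard_to_canonical(A_tilde, b_tilde, c_tilde):
    """Embed a standard-form LP (min c~'x~ s.t. A~x~ = b~, x~ >= 0)
    into Karmarkar's canonical form (min c'x s.t. Ax = 0, a'x = 1, x >= 0)."""
    m, n_tilde = A_tilde.shape
    A = np.hstack([A_tilde, -b_tilde.reshape(m, 1)])   # A = [A~, -b~]
    a = np.zeros(n_tilde + 1)
    a[-1] = 1.0                                        # a = e_n (the n-th unit vector)
    c = np.append(c_tilde, 0.0)                        # c = (c~, 0)
    return A, a, c

# Illustrative data only
A_tilde = np.array([[1.0, 2.0]])
b_tilde = np.array([3.0])
c_tilde = np.array([1.0, -1.0])
A, a, c = standard_to_canonical(A_tilde, b_tilde, c_tilde)
```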
The second assumption is that the point $e/n = (1/n, \ldots, 1/n)^T$ is a feasible point of
the problem. The third assumption is that the value of the objective at the optimum is
known and is equal to zero. Although this assumption appears unlikely to hold in practice,
it is possible, nevertheless, to adapt the method to solve problems where the optimal
objective value is unknown. We address this issue later in the section.
An example of a program that satisfies the three assumptions is
The first two assumptions are clearly satisfied. The optimal solution is $x^* = (3/4, 1/4, 0)^T$
with corresponding objective value $z^* = 0$, and so the third assumption is also satisfied.
Karmarkar’s algorithm starts at an interior feasible point. At each iteration
of the algorithm: (i) the problem is transformed via a projective transformation to
obtain an equivalent problem in transformed space, (ii) a projected steepest-descent
direction is computed, (iii) a step is taken along this direction, and (iv) the resulting
point is mapped back to the original space. We discuss each of these steps in turn.
The projective transformation in Karmarkar’s method can be split into two
operations: the first scales the variables as in the affine scaling, so that the current
point xk goes to e; the second scales each resulting point by the sum of its variables.
The end result is that $x_k$ is transformed to $e/n = (1/n, \ldots, 1/n)^T$, and the sum of
the components of any transformed point is equal to one. The transformation is given by
\[
\bar{x} = \frac{X^{-1}x}{e^T X^{-1} x},
\]
where $X = \mbox{diag}(x_k)$. The corresponding inverse transformation is
\[
x = \frac{X\bar{x}}{a^T X \bar{x}}.
\]
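Both transformations are straightforward to express in code. The following Python/NumPy sketch (with function names of our choosing) implements them:

```python
import numpy as np

def projective_transform(x, x_k):
    """Map x to x_bar = X^{-1} x / (e^T X^{-1} x), where X = diag(x_k)."""
    y = x / x_k                 # X^{-1} x  (componentwise division)
    return y / y.sum()          # scale by the sum of the components

def inverse_transform(x_bar, x_k, a):
    """Map x_bar back to x = X x_bar / (a^T X x_bar)."""
    y = x_k * x_bar             # X x_bar
    return y / (a @ y)          # rescale so that a^T x = 1
```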
Example E.1 Projective Transformation. Consider the constraints
\[
\begin{array}{c}
x_1 - 3x_2 + 2x_3 = 0 \\
x_1 + x_2 + x_3 = 1 \\
x_1, x_2, x_3 \ge 0.
\end{array}
\]
The feasible region is the line segment between xa = (3/4, 1/4, 0)T and xb =
(0, 2/5, 3/5)T. Notice that e/n = e/3 = (1/3, 1/3, 1/3)T is indeed feasible. Let
xk = (1/8, 3/8, 1/2)T. The projective transformation that takes xk to e/3 can be
performed in two steps. First the affine transformation takes x to
\[
X^{-1}x = \begin{pmatrix} 8 & 0 & 0 \\ 0 & 8/3 & 0 \\ 0 & 0 & 2 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= \begin{pmatrix} 8x_1 \\ (8/3)x_2 \\ 2x_3 \end{pmatrix}
\]
and next this point is scaled by the sum of its variables. The final image of $x$ under
this transformation is
\[
\bar{x} = \frac{1}{8x_1 + (8/3)x_2 + 2x_3}
\begin{pmatrix} 8x_1 \\ (8/3)x_2 \\ 2x_3 \end{pmatrix}.
\]
Thus $x_a$ is sent to $\bar{x}_a = (\frac{9}{10}, \frac{1}{10}, 0)^T$, $x_b$ is sent to
$\bar{x}_b = (0, \frac{8}{17}, \frac{9}{17})^T$, and $e/3$ is sent to
$(\frac{12}{19}, \frac{4}{19}, \frac{3}{19})^T$, which lies on the line segment between $\bar{x}_a$ and $\bar{x}_b$.
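These images can be checked numerically; the short script below, our own illustration, reproduces the three transformed points.

```python
import numpy as np

def projective_transform(x, x_k):
    """x_bar = X^{-1} x / (e^T X^{-1} x), with X = diag(x_k)."""
    y = x / x_k
    return y / y.sum()

x_k = np.array([1/8, 3/8, 1/2])
x_a = np.array([3/4, 1/4, 0.0])
x_b = np.array([0.0, 2/5, 3/5])
e3  = np.full(3, 1/3)

print(projective_transform(x_a, x_k))   # [0.9, 0.1, 0.0]
print(projective_transform(x_b, x_k))   # [0.0, 8/17, 9/17] ~ [0.0, 0.4706, 0.5294]
print(projective_transform(e3,  x_k))   # [12/19, 4/19, 3/19] ~ [0.6316, 0.2105, 0.1579]
```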
Under the projective transformation, the problem in canonical form becomes
\[
\begin{array}{ll}
\displaystyle \mathop{\mbox{minimize}}_{\bar{x}} & \displaystyle \frac{c^T X\bar{x}}{a^T X\bar{x}} \\
\mbox{subject to} & AX\bar{x} = 0, \\
& e^T\bar{x} = 1, \\
& \bar{x} \ge 0.
\end{array}
\]
(See the Exercises.) The new objective function is not a linear function. However,
since the optimal objective value in the original problem is assumed to be zero, the
optimal objective in the new problem will also be zero. The denominator is positive
and bounded, so the optimal value of the numerator will be zero as well. As a
result, we will ignore the denominator, and consider the problem
\[
(\bar{P}) \qquad
\begin{array}{ll}
\mbox{minimize} & \bar{c}^T\bar{x} \\
\mbox{subject to} & \bar{A}\bar{x} = 0, \\
& e^T\bar{x} = 1, \\
& \bar{x} \ge 0,
\end{array}
\]
where $\bar{c} = Xc$ and $\bar{A} = AX$.
Let $\bar{P}$ denote the orthogonal projection matrix onto the null space of $\bar{A}$. Projecting the
negative gradient $-\bar{c} = -Xc$ onto the null space of the constraints of $(\bar{P})$ yields the
projected steepest-descent direction
\[
\Delta\bar{x} = -\bar{P}Xc + \frac{c^T x_k}{n}\, e.
\]
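A minimal sketch of this computation is given below. It assumes that $\bar{P}$ is the orthogonal projector onto the null space of $\bar{A} = AX$, which is consistent with the exercises at the end of this appendix; the function name is ours.

```python
import numpy as np

def projected_direction(A, c, x_k):
    """Projected steepest-descent direction in transformed space:
    Delta x_bar = -P_bar (X c) + (c^T x_k / n) e,
    where X = diag(x_k) and P_bar projects onto the null space of A_bar = A X."""
    n = len(x_k)
    X = np.diag(x_k)
    A_bar = A @ X
    # Orthogonal projector onto the null space of A_bar (assumes full row rank)
    P_bar = np.eye(n) - A_bar.T @ np.linalg.solve(A_bar @ A_bar.T, A_bar)
    return -P_bar @ (X @ c) + (c @ x_k / n) * np.ones(n)
```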
Starting from $e/n$, we now take a step of length $\alpha$ along the projected steepest-descent direction:
\[
\bar{x}_{k+1} = \frac{e}{n} + \alpha\, \Delta\bar{x}.
\]
The first requirement on $\alpha$ is that the new point satisfy the nonnegativity constraints.
Any step length less than $\alpha_{\max}$, the step to the boundary, will fulfill this requirement.
This requirement alone, however, does not guarantee polynomial complexity. Karmarkar
proposed a suitable step length by inscribing a sphere with center $e/n$ in the set
$\{\bar{x} : e^T\bar{x} = 1,\ \bar{x} \ge 0\}$. He showed that the largest such inscribed sphere
has radius $r = (n(n-1))^{-1/2}$. By taking a step $\theta r$ along the normalized direction
$\Delta\bar{x}/\|\Delta\bar{x}\|$ with $0 < \theta \le 1$, feasibility is always maintained. However, to guarantee
polynomial complexity, $\theta$ must be restricted to a rather small interval, with $\theta = 1/3$
being an acceptable choice. In practice, better progress can often be made by taking
a much larger step, slightly less than $\alpha_{\max}$, even though the polynomial complexity
guarantee may then be lost.
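The step-length rule can be sketched as follows (our own illustration):

```python
import numpy as np

def karmarkar_step(delta_x_bar, n, theta=1/3):
    """Step from e/n along the normalized direction, staying inside the
    sphere of radius r = 1/sqrt(n(n-1)) inscribed in the simplex."""
    r = 1.0 / np.sqrt(n * (n - 1))
    alpha = theta * r / np.linalg.norm(delta_x_bar)   # so that ||alpha * dx|| = theta * r
    return np.ones(n) / n + alpha * delta_x_bar
```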
The final step of an iteration is to map $\bar{x}_{k+1}$ back to $x$-space. This gives the
new solution estimate
\[
x_{k+1} = \frac{X \bar{x}_{k+1}}{a^T X \bar{x}_{k+1}}.
\]
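Putting the four steps together gives the following sketch of one complete iteration. It reuses the pieces above and, like them, assumes that $\bar{P}$ projects onto the null space of $AX$; the function name is ours, and $\theta = 1/3$ is the choice suggested above.

```python
import numpy as np

def karmarkar_iteration(A, a, c, x_k, theta=1/3):
    """One iteration of the projective-scaling method (illustrative sketch)."""
    n = len(x_k)
    X = np.diag(x_k)
    A_bar = A @ X

    # (i)-(ii) projected steepest-descent direction in transformed space
    P_bar = np.eye(n) - A_bar.T @ np.linalg.solve(A_bar @ A_bar.T, A_bar)
    d = -P_bar @ (X @ c) + (c @ x_k / n) * np.ones(n)

    # (iii) step of length theta*r along the normalized direction
    r = 1.0 / np.sqrt(n * (n - 1))
    x_bar_next = np.ones(n) / n + theta * r * d / np.linalg.norm(d)

    # (iv) map back to the original space
    y = X @ x_bar_next
    return y / (a @ y)
```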
For the current iterate $x_k = (1/8, 3/8, 1/2)^T$ we obtain
\[
\bar{P}Xc = \begin{pmatrix} 0.1113 \\ 1.2483 \\ 1.3904 \end{pmatrix}, \qquad
\frac{c^T x_k}{n}\, e = \frac{1}{6}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},
\]
and the projected steepest-descent direction is
\[
\Delta\bar{x} = -\bar{P}Xc + \frac{c^T x_k}{n}\, e = \begin{pmatrix} 0.0554 \\ -1.0816 \\ -1.2237 \end{pmatrix}.
\]
The resulting step length is $\alpha = 0.9 \times 0.2724 = 0.2451$. The new point in transformed
space is
\[
\bar{x}_{k+1} = \begin{pmatrix} 1/3 \\ 1/3 \\ 1/3 \end{pmatrix}
+ 0.2451 \begin{pmatrix} 0.0554 \\ -1.0816 \\ -1.2237 \end{pmatrix}
= \begin{pmatrix} 0.3469 \\ 0.0682 \\ 0.0333 \end{pmatrix}.
\]
Transforming back to the original space we obtain
\[
x_{k+1} = \frac{X\bar{x}_{k+1}}{a^T X \bar{x}_{k+1}}
= \frac{1}{0.0856}\, X \begin{pmatrix} 0.3469 \\ 0.0682 \\ 0.0333 \end{pmatrix}
= \begin{pmatrix} 0.5066 \\ 0.2987 \\ 0.1947 \end{pmatrix}.
\]
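The back-transformation can be checked directly; the snippet below (ours) assumes, as in Example E.1, that the normalizing constraint is $e^Tx = 1$.

```python
import numpy as np

x_k        = np.array([1/8, 3/8, 1/2])         # current iterate, X = diag(x_k)
x_bar_next = np.array([0.3469, 0.0682, 0.0333])
a          = np.ones(3)                        # normalizing constraint a = e here

y = x_k * x_bar_next                           # X x_bar_{k+1}
print(a @ y)                                   # ~0.0856
print(y / (a @ y))                             # ~[0.5066, 0.2988, 0.1946], matching the text up to rounding
```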
To handle problems where the optimal objective value is unknown, the method maintains a
lower bound $w_k$ on the optimal value $z^*$ and considers the subproblem
\[
\begin{array}{ll}
\mbox{maximize} & w \\
\mbox{subject to} & \bar{A}^T y + e\, w \le \bar{c}(w_k).
\end{array}
\]
If wk were indeed equal to z∗ , the optimal value for w would be zero. An estimate
for the optimal vector y is
\[
y_k = (\bar{A}\bar{A}^T)^{-1}\bar{A}\,\bar{c}(w_k).
\]
The maximal value of $w$ for which $(y_k, w)$ is dual feasible is equal to the minimum
component $\bar{w}$ of $\bar{c}(w_k) - \bar{A}^T y_k$. If $\bar{w} \ge w_k$, then the new lower bound $w_{k+1}$ is set to
$\bar{w}$; otherwise $w_{k+1} = w_k$.
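A sketch of this bound update is shown below. The vector $\bar{c}(w_k)$ is passed in as data, since its construction is not restated here; the function name is ours.

```python
import numpy as np

def update_lower_bound(A_bar, c_bar_wk, w_k):
    """Estimate the dual vector y_k and update the lower bound on the optimal
    objective.  c_bar_wk stands for the vector written c_bar(w_k) in the text."""
    y_k = np.linalg.solve(A_bar @ A_bar.T, A_bar @ c_bar_wk)   # least-squares dual estimate
    w_bar = np.min(c_bar_wk - A_bar.T @ y_k)                   # largest w keeping (y_k, w) dual feasible
    return max(w_bar, w_k)                                     # keep the better (larger) lower bound
```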
To prove polynomiality of the algorithm, Karmarkar introduced a potential
function
\[
f(x) = n \log c^T x - \sum_{j=1}^{n} \log x_j = \sum_{j=1}^{n} \log\!\left(\frac{c^T x}{x_j}\right).
\]
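In code, the potential function is simply (a small sketch of ours):

```python
import numpy as np

def karmarkar_potential(c, x):
    """Karmarkar's potential function f(x) = n log(c^T x) - sum_j log(x_j),
    defined for strictly positive x with c^T x > 0."""
    n = len(x)
    return n * np.log(c @ x) - np.sum(np.log(x))
```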
To express the iteration as a step in the original space, substitute the expression for $\Delta\bar{x}$:
the new point in transformed space is
\[
\bar{x} = \frac{e}{n} - \alpha \bar{P}Xc + \alpha\,\frac{c^T x_k}{n}\, e
= \frac{1 + \alpha c^T x_k}{n}\, e - \alpha \bar{P}Xc.
\]
From this it follows that
\[
X\bar{x} = \frac{1 + \alpha c^T x_k}{n}\, x_k - \alpha X \bar{P} X c.
\]
The new point $x_{k+1}$ is obtained by scaling $X\bar{x}$ by $a^T X\bar{x}$. Denoting the latter by $\tau$,
\[
\tau = a^T X \bar{x} = \frac{1 + \alpha c^T x_k}{n} - \alpha\, a^T X^T \bar{P} X c,
\]
since $a^T x_k = 1$. Denoting $\mu_k = a^T X^T \bar{P} X c$, this gives
\[
\tau = \frac{1 + \alpha c^T x_k}{n} - \alpha \mu_k.
\]
We now obtain
\begin{eqnarray*}
x_{k+1} = \frac{X\bar{x}}{\tau} & = & \frac{1 + \alpha c^T x_k}{\tau n}\, x_k - \frac{\alpha}{\tau}\, X\bar{P}Xc \\
& = & \left(1 + \frac{\alpha\mu_k}{\tau}\right) x_k - \frac{\alpha}{\tau}\, X\bar{P}Xc \\
& = & x_k + \frac{\alpha\mu_k}{\tau}\left(x_k - \frac{1}{\mu_k}\, X\bar{P}Xc\right).
\end{eqnarray*}
If we define
\[
\alpha_k = \frac{\alpha\mu_k}{\tau}, \qquad
\Delta x = x_k - \frac{1}{\mu_k}\, X\bar{P}Xc,
\]
then
\[
x_{k+1} = x_k + \alpha_k\, \Delta x,
\]
so the iteration is now written as a regular search algorithm. The last technicality
is to observe that $x_k = X\bar{P}e$. Therefore we can write
\[
\Delta x_k = -\frac{1}{\mu_k}\, X\bar{P}Xc + X\bar{P}e, \qquad \mbox{where } \mu_k = x_k^T\bar{P}Xc.
\]
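The equivalence of the two forms of the update, as well as the identity $x_k = X\bar{P}e$, can be verified numerically. The script below is our own check; it uses the data of Example E.1 with an illustrative objective, and assumes $\bar{P}$ projects onto the null space of $AX$.

```python
import numpy as np

# Data from Example E.1 (the objective c is illustrative)
A     = np.array([[1.0, -3.0, 2.0]])
a     = np.ones(3)
c     = np.array([0.0, 0.0, 1.0])
x_k   = np.array([1/8, 3/8, 1/2])
alpha = 0.1

n = len(x_k)
X = np.diag(x_k)
A_bar = A @ X
P_bar = np.eye(n) - A_bar.T @ np.linalg.solve(A_bar @ A_bar.T, A_bar)

# Projective-scaling update
d_bar  = -P_bar @ (X @ c) + (c @ x_k / n) * np.ones(n)
x_bar  = np.ones(n) / n + alpha * d_bar
x_next = X @ x_bar / (a @ X @ x_bar)

# The same update rewritten as a regular search step
mu_k  = a @ X.T @ P_bar @ X @ c
tau   = (1 + alpha * (c @ x_k)) / n - alpha * mu_k
dx    = x_k - (1 / mu_k) * (X @ P_bar @ X @ c)
x_alt = x_k + (alpha * mu_k / tau) * dx

print(np.allclose(x_next, x_alt))                  # True
print(np.allclose(X @ P_bar @ np.ones(n), x_k))    # True: x_k = X P_bar e
```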
This is exactly the search direction prescribed by the primal logarithmic-barrier
path-following algorithm for problems in standard form (see Section 4). It is also
the search direction for the logarithmic barrier function for problems that are given
in the canonical form (see the Exercises).
From a practical point of view, Karmarkar’s method and its variants have not
been as successful as other interior-point methods. In numerical tests they appear
to be slower and less robust than leading methods. A possible explanation for this
less satisfactory performance is the need to generate lower bounds w on the optimal
objective. These bounds can be poor, in particular if the dual feasible region has
no interior, and this can cause the method to converge slowly.
Exercises
2.1. In Karmarkar’s method prove that AXe = 0.
2.2. Let
\[
B = \begin{pmatrix} AX \\ e^T \end{pmatrix}.
\]
Prove that $P_B = \bar{P} - \frac{1}{n} ee^T$. Hint: use the result of the previous exercise.
Prove also that $P_B = P_{e^T}\bar{P}$.
2.3. Consider the logarithmic barrier function for a problem in Karmarkar’s special
form. Show that if $\mu_k = x_k^T\bar{P}Xc$, then the search direction in Karmarkar’s
method coincides with the projected Newton direction for the barrier function
at $x_k$. (Hint: Prove that the Karmarkar search direction satisfies the projected
Newton equations.)
E.3 Notes
In his original paper Karmarkar used eTx = 1 as the normalizing constraint in
the canonical form. He showed that all linear programs can be transformed to
this specific form, but the proposed transformation results in a much larger linear
program. Variants of the method that are suitable for problems in standard form
were proposed by Anstreicher (1986), Gay (1987), and Ye and Kojima (1987).