Karmarkar's Method: Appendix E

Karmarkar introduced the first polynomial-time algorithm for linear programming in 1984. His projective scaling method transforms the linear program using a projective transformation so that the current point maps to (1/n,...,1/n). It then takes a step along the projected steepest descent direction in the transformed space before mapping the new point back to the original space. The method assumes the linear program is in a special canonical form and that the optimal objective value is known to be zero. It helped spawn significant research in interior point methods and transformed the field of optimization, though newer methods now surpass it.


Appendix E

Karmarkar’s Method

E.1 Introduction
In 1984 Narendra Karmarkar introduced a new and innovative polynomial-time algorithm for linear
programming. The polynomial running-time of this algorithm combined with its
promising performance created tremendous excitement (as well as some initial skep-
ticism) and spawned a flurry of research activity in interior-point methods for linear
programming that eventually transformed the entire field of optimization.
Despite its momentous impact on the field, Karmarkar’s method has been
superseded by algorithms that have better computational complexity and better
practical performance. Chapter 10 presents an overview of some of the leading
interior point methods for linear programming. Karmarkar's method remains
interesting because of its historical impact and because of its distinctive projective
scaling approach. This appendix outlines the main concepts of the method.

E.2 Karmarkar’s Projective Scaling Method


In Section 10.5 we introduced the primal affine-scaling method for solving linear
programs. The basic idea of an iteration is to transform the linear problem via an
affine scaling, so that the current point xk is transformed to the “central point”
e = (1, 1, . . . , 1)T, then to take a step along the steepest-descent direction in the
transformed space, and finally to map the resulting point back to its corresponding
position in the original space. Conceptually, Karmarkar’s algorithm may be de-
scribed in similar terms, except that a “projective scaling” transformation is used.
Some special assumptions must be made, however.
The first, which is standard, is that the linear program has a strictly feasible
point, and that the set of optimal points is bounded. The second assumption is


that the linear program has a special “canonical” form:

minimize z = cTx
subject to Ax = 0
aTx = 1,
x ≥ 0.

This assumption is not restrictive, since any linear program can be written in such
a form. Consider for example a problem in standard form

minimize c̃Tx̃
subject to Ãx̃ = b̃,
x̃ ≥ 0

and assume for convenience that x̃ has dimension n − 1. By introducing a new


variable xn that is always equal to 1, we can write the problem as

minimize z = c̃Tx̃
subject to Ãx̃ − b̃xn = 0,
xn = 1,
x̃, xn ≥ 0.

The problem is now in the special canonical form, with A = [Ã, −b̃], x = (x̃, xn )T,
a = en (the n-th unit vector), and c = (c̃, 0)T.
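As a quick illustration (not from the text), the rewriting above can be carried out mechanically. The NumPy sketch below uses illustrative names and a made-up one-constraint standard-form problem.

```python
import numpy as np

def to_canonical(A_t, b_t, c_t):
    """Embed a standard-form LP (min c̃ᵀx̃ s.t. Ãx̃ = b̃, x̃ ≥ 0)
    into Karmarkar's canonical form by appending a variable x_n ≡ 1."""
    m, n1 = A_t.shape                           # x̃ has dimension n - 1
    A = np.hstack([A_t, -b_t.reshape(-1, 1)])   # A = [Ã, -b̃]
    c = np.append(c_t, 0.0)                     # c = (c̃, 0)
    a = np.zeros(n1 + 1); a[-1] = 1.0           # a = e_n
    return A, a, c

# Any point x̃ feasible for the standard form extends to (x̃, 1),
# which is feasible for the canonical form:
A_t = np.array([[1.0, 2.0]]); b_t = np.array([3.0]); c_t = np.array([1.0, 1.0])
A, a, c = to_canonical(A_t, b_t, c_t)
x = np.array([1.0, 1.0, 1.0])    # x̃ = (1, 1) satisfies Ãx̃ = b̃
assert np.allclose(A @ x, 0) and a @ x == 1
```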
The third assumption is that the optimal value of the objective is known, and
is equal to zero. Although this assumption rarely holds in practice, it is possible,
nevertheless, to adapt the method to solve problems where the optimal objective
value is unknown. We address this issue later in this section.
An example of a program that satisfies the three assumptions is

minimize z = x1 − 3x2 + 3x3


subject to x1 − 3x2 + 2x3 = 0
x1 + x2 + x3 = 1
x1 , x2 , x3 ≥ 0.

The first two assumptions are clearly satisfied. The optimal solution is x∗ =
(3/4, 1/4, 0)T with corresponding objective z∗ = 0, and so the third assumption
is satisfied also.
Karmarkar’s algorithm starts at an interior feasible point. At each iteration
of the algorithm: (i) the problem is transformed via a projective transformation, to
obtain an equivalent problem in transformed space, (ii) a projected steepest-descent
direction is computed, (iii) a step is taken along this direction, and (iv) the resulting
point is mapped back to the original space. We discuss each of these steps in turn.
The projective transformation in Karmarkar’s method can be split into two
operations: the first scales the variables as in the affine scaling, so that the current
point xk goes to e; the second scales each resulting point by the sum of its variables.
The end result is that xk is transformed to e/n = (1/n, . . . , 1/n)T, and the sum of

the components at any point in the transformed space is equal to 1. Mathematically


the projective transformation sends x to

x̄ = X⁻¹x / (eᵀX⁻¹x),

where X = diag(xk). The corresponding inverse transformation is

x = X x̄ / (aᵀX x̄).
Example E.1 Projective Transformation. Consider the constraints

x1 − 3x2 + 2x3 = 0
x1 + x2 + x3 = 1
x1 , x2 , x3 ≥ 0.

The feasible region is the line segment between xa = (3/4, 1/4, 0)T and xb =
(0, 2/5, 3/5)T. Notice that e/n = e/3 = (1/3, 1/3, 1/3)T is indeed feasible. Let
xk = (1/8, 3/8, 1/2)T. The projective transformation that takes xk to e/3 can be
performed in two steps. First the affine transformation takes x to
X⁻¹x = diag(8, 8/3, 2) x = (8x1, (8/3)x2, 2x3)ᵀ

and next this point is scaled by the sum of its variables. The final image of x under
this transformation is
x̄ = (8x1, (8/3)x2, 2x3)ᵀ / (8x1 + (8/3)x2 + 2x3).

Thus xa is sent to x̄a = (9/10, 1/10, 0)ᵀ, xb is sent to x̄b = (0, 8/17, 9/17)ᵀ, and e/3 is
sent to (12/19, 4/19, 3/19)ᵀ, which lies on the line segment between x̄a and x̄b.
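The two-step transformation of Example E.1 is easy to reproduce numerically. This sketch (illustrative helper name, assuming NumPy) applies both steps and checks the images of xa, xb, and e/3.

```python
import numpy as np

def project(x, xk):
    """Projective transformation: x̄ = X⁻¹x / (eᵀX⁻¹x), with X = diag(xk)."""
    y = x / xk                 # X⁻¹x, componentwise scaling
    return y / y.sum()         # then scale by the sum of the components

xk = np.array([1/8, 3/8, 1/2])
xa = np.array([3/4, 1/4, 0.0])
xb = np.array([0.0, 2/5, 3/5])
e3 = np.full(3, 1/3)

assert np.allclose(project(xa, xk), [9/10, 1/10, 0])
assert np.allclose(project(xb, xk), [0, 8/17, 9/17])
assert np.allclose(project(e3, xk), [12/19, 4/19, 3/19])
```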

The projective transformation transforms the original problem into

minimize (cᵀX x̄) / (aᵀX x̄)
subject to AX x̄ = 0,
eᵀx̄ = 1,
x̄ ≥ 0.

(See the Exercises.) The new objective function is not a linear function. However,
since the optimal objective value in the original problem is assumed to be zero, the
optimal objective in the new problem will also be zero. The denominator is positive
and bounded, so the optimal value of the numerator will be zero as well. As a
result, we will ignore the denominator, and consider the problem

(P̄)  minimize c̄ᵀx̄
subject to Āx̄ = 0,
eᵀx̄ = 1,
x̄ ≥ 0,

where c̄ = Xc and Ā = AX.


We now compute the projected steepest-descent direction for problem P̄. Denoting
the constraint matrix by

B = [Ā; eᵀ] = [AX; eᵀ]

(the rows of Ā stacked above the row eᵀ), the corresponding orthogonal projection
matrix is PB = I − Bᵀ(BBᵀ)⁻¹B. Since (AX)e = Axk = 0, this simplifies to

PB = P̄ − (1/n) eeᵀ,

where P̄ = I − Āᵀ(ĀĀᵀ)⁻¹Ā. (See the Exercises.) The projected steepest-descent
direction is Δx̄ = −PB c̄ = −PB Xc. Using the previous equation and the relation
eᵀXc = xkᵀc = cᵀxk, we obtain the following expression for the direction in
transformed space:

Δx̄ = −P̄Xc + (cᵀxk/n) e.

Starting from e/n, we now take a step of length α along the projected steepest-
descent direction:
x̄k+1 = e/n + αΔx̄.
The first requirement on α is that the new point satisfy the nonnegativity con-
straints. Any step length less than αmax , the step to the boundary, will fulfill this
requirement. This requirement alone does not guarantee polynomial complexity.
Karmarkar proposed a suitable step length by inscribing a sphere with center e/n
in the set {x̄ : eᵀx̄ = 1, x̄ ≥ 0}. He showed that the largest such inscribed sphere
has radius r = 1/√(n(n − 1)). By taking a step θr along the normalized direction
Δx̄/‖Δx̄‖ with 0 < θ ≤ 1, feasibility is always maintained. However, to guarantee
polynomial complexity, θ must be restricted to a rather small interval, with θ = 1/3
being an acceptable choice. In practice, better progress can often be made by taking
a much larger step, slightly less than αmax , even though the polynomial complexity
may be lost.
The final step of an iteration is to map x̄k+1 back to x-space. This gives the
new solution estimate
xk+1 = X x̄k+1 / (aᵀX x̄k+1).

Example E.2 Iteration of Karmarkar’s method. Consider the previous example,


and suppose as before that the current point is xk = (1/8, 3/8, 1/2)ᵀ. Then

AX = (1/8, −9/8, 1)  and  c̄ = Xc = (1/8, −9/8, 3/2)ᵀ,

hence

P̄Xc = (−0.0274, 0.2466, 0.2808)ᵀ  and  (cᵀxk/n) e = (1/6)(1, 1, 1)ᵀ,

and the projected steepest-descent direction is

Δx̄ = −P̄Xc + (cᵀxk/n) e = (0.1941, −0.0799, −0.1142)ᵀ.

Suppose we decide to use a step size of 0.9αmax. Here

αmax = min { (1/3)/0.0799, (1/3)/0.1142 } = min { 4.1714, 2.9200 } = 2.9200.

The resulting step length is α = 0.9 × 2.9200 = 2.6280. The new point in transformed
space is

x̄k+1 = (1/3, 1/3, 1/3)ᵀ + 2.6280 (0.1941, −0.0799, −0.1142)ᵀ = (0.8433, 0.1233, 0.0333)ᵀ.

Transforming back to the original space we obtain

xk+1 = X x̄k+1 / (aᵀX x̄k+1) = (1/0.1683) (0.1054, 0.0462, 0.0167)ᵀ = (0.6263, 0.2747, 0.0990)ᵀ.

The objective value at the new point is cᵀxk+1 = 0.0990, reduced from cᵀxk = 0.5.
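One full iteration can be replayed directly from the formulas of this section. The sketch below (illustrative names, assuming NumPy) builds P̄ and the direction Δx̄ for this instance, takes the step, maps back, and checks the invariants every iterate must satisfy.

```python
import numpy as np

# Data of the example: min cᵀx s.t. Ax = 0, eᵀx = 1, x ≥ 0
A = np.array([[1.0, -3.0, 2.0]])
c = np.array([1.0, -3.0, 3.0])
xk = np.array([1/8, 3/8, 1/2])
n = len(xk)
e = np.ones(n)

X = np.diag(xk)
Abar = A @ X
# P̄ = I - Āᵀ(ĀĀᵀ)⁻¹Ā projects onto the null space of Ā = AX
P = np.eye(n) - Abar.T @ np.linalg.solve(Abar @ Abar.T, Abar)
dxbar = -P @ (X @ c) + (c @ xk / n) * e        # projected steepest descent
alpha_max = np.min((1/n) / -dxbar[dxbar < 0])  # step to the boundary from e/n
xbar = e / n + 0.9 * alpha_max * dxbar
xnew = X @ xbar / (e @ (X @ xbar))             # map back (a = e in this example)

# Invariants every iterate must satisfy: feasibility and objective decrease
assert np.allclose(A @ xnew, 0) and np.isclose(e @ xnew, 1)
assert np.all(xnew > 0) and c @ xnew < c @ xk
```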

A difficulty with Karmarkar’s method is the assumption that the optimal


objective value must be zero. If the optimal objective value z∗ were known, we
could replace the objective function cᵀx by (c − z∗a)ᵀx, since this is equal to cᵀx − z∗
for any feasible x, and then the optimal objective value would be zero. In practice
z∗ is not known. However it is possible to adapt this approach, using lower bounds
on z∗ obtained from dual solutions.
Given a lower bound wk , the vector of objective coefficients is replaced by
(c − wk a) and accordingly, the vector of objective coefficients in P̄ is replaced by
c̄(wk ) = X(c − wk a). The dual to P̄ is then

maximize w
subject to Āᵀy + ew ≤ c̄(wk).

If wk were indeed equal to z∗ , the optimal value for w would be zero. An estimate
for the optimal vector y is
yk = (ĀĀᵀ)⁻¹Ā c̄(wk).

The maximal value of w for which (yk, w) is dual feasible is the minimum component
w̄ of c̄(wk) − Āᵀyk. If w̄ ≥ wk, then the new lower bound wk+1 is set to w̄;
otherwise wk+1 = wk.
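The bound update can be sketched for the example problem of this appendix with wk = 0 (illustrative names; note a = e for that problem):

```python
import numpy as np

A = np.array([[1.0, -3.0, 2.0]])
c = np.array([1.0, -3.0, 3.0])
a = np.ones(3)                     # normalizing constraint eᵀx = 1
xk = np.array([1/8, 3/8, 1/2])
wk = 0.0                           # current lower bound on z* = 0

X = np.diag(xk)
Abar = A @ X
cbar = X @ (c - wk * a)            # c̄(wk) = X(c - wk a)
yk = np.linalg.solve(Abar @ Abar.T, Abar @ cbar)  # yk = (ĀĀᵀ)⁻¹Ā c̄(wk)
wbar = np.min(cbar - Abar.T @ yk)  # largest w keeping (yk, w) dual feasible
wk1 = wbar if wbar >= wk else wk   # lower-bound update rule

# The bound never decreases and never exceeds z* = 0 for this problem
assert wk1 >= wk and wk1 <= 0.0
```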
To prove polynomiality of the algorithm, Karmarkar introduced a potential
function
f(x) = n log cᵀx − ∑_{j=1..n} log xj = ∑_{j=1..n} log(cᵀx / xj).

He proved that this function is reduced by a constant at each iteration. (The


constant, which we denote by γ, depends on the stepsize parameter θ.) This implies
that, after k iterations,
cᵀxk ≤ e^{−k/(γn)} cᵀx0.

Since the algorithm terminates if cᵀxk < 2^{−2L} cᵀx0, where L is the number of bits
needed to encode the problem data, it is possible to show that the number of
iterations is at most O(nL). Although the potential function is
guaranteed to decrease at each iteration, there is no such guarantee for the objective
cᵀx and, in fact, it may occasionally increase.
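A property worth noting, and easy to check numerically, is that this potential function is invariant under positive rescaling of x, which is what makes it compatible with the projective transformation. A minimal sketch (illustrative name, assuming NumPy):

```python
import numpy as np

def karmarkar_potential(c, x):
    """f(x) = n log(cᵀx) - Σ_j log x_j, defined for x > 0 with cᵀx > 0."""
    return len(x) * np.log(c @ x) - np.sum(np.log(x))

c = np.array([1.0, -3.0, 3.0])
x = np.array([1/8, 3/8, 1/2])
# Scaling x by t > 0 adds n log t to the first term and subtracts
# n log t from the sum, so f(tx) = f(x):
assert np.isclose(karmarkar_potential(c, x), karmarkar_potential(c, 2.5 * x))
```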
In a 1986 paper, Gill, Murray, Saunders, Tomlin, and Wright showed that
Karmarkar’s method is equivalent to the logarithmic barrier method with a partic-
ular choice of the barrier parameter (which, atypically for barrier methods, may be
negative). To verify this, we obtain an explicit expression for the search direction
in the original space. First,

x̄ = e/n − αP̄Xc + α(cᵀxk/n) e = ((1 + αcᵀxk)/n) e − αP̄Xc.

From this it follows that

X x̄ = ((1 + αcᵀxk)/n) xk − αX P̄Xc.
The new point xk+1 is obtained by scaling X x̄ by aTX x̄. Denoting the latter by τ ,

τ = aᵀX x̄ = (1 + αcᵀxk)/n − α aᵀXᵀP̄Xc,

since aᵀxk = 1. Denoting μk = aᵀXᵀP̄Xc, this gives

τ = (1 + αcᵀxk)/n − αμk.
We now obtain

xk+1 = X x̄ / τ = ((1 + αcᵀxk)/(τn)) xk − (α/τ) X P̄Xc
     = (1 + αμk/τ) xk − (α/τ) X P̄Xc
     = xk + (αμk/τ) (xk − (1/μk) X P̄Xc).
If we define

αk = αμk/τ,   Δx = xk − (1/μk) X P̄Xc,

then

xk+1 = xk + αk Δx,

so the iteration is now written as a regular search algorithm. The last technicality
is to observe that xk = X P̄e. Therefore we can write

Δx = −(1/μk) X P̄Xc + X P̄e,   where μk = xkᵀP̄Xc.
This is exactly the search direction prescribed by the primal logarithmic-barrier
path-following algorithm for problems in standard form (see Section 4). It is also
the search direction for the logarithmic barrier function for problems that are given
in the canonical form (see the Exercises).
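The identities used in this derivation (xk = X P̄e, and feasibility of the direction Δx) can be verified numerically on the example problem of this appendix. The sketch below uses illustrative names and assumes NumPy.

```python
import numpy as np

A = np.array([[1.0, -3.0, 2.0]])
c = np.array([1.0, -3.0, 3.0])
xk = np.array([1/8, 3/8, 1/2])
n = len(xk)
e = np.ones(n)

X = np.diag(xk)
Abar = A @ X
P = np.eye(n) - Abar.T @ np.linalg.solve(Abar @ Abar.T, Abar)  # P̄

# xk = X P̄e: since Āe = Axk = 0, we have P̄e = e, hence X P̄e = Xe = xk
assert np.allclose(X @ (P @ e), xk)

mu = xk @ (P @ (X @ c))                        # μk = xkᵀ P̄ X c
dx = -(X @ P @ (X @ c)) / mu + X @ (P @ e)     # Δx = -(1/μk) X P̄Xc + X P̄e
# Δx keeps the iterates feasible: A Δx = 0 and aᵀΔx = 0 (a = e here)
assert np.allclose(A @ dx, 0) and np.isclose(e @ dx, 0)
```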
From a practical point of view, Karmarkar’s method and its variants have not
been as successful as other interior-point methods. In numerical tests they appear
to be slower and less robust than leading methods. A possible explanation for this
less satisfactory performance is the need to generate lower bounds w on the optimal
objective. These bounds can be poor, in particular if the dual feasible region has
no interior, and this can cause the method to converge slowly.

Exercises
2.1. In Karmarkar’s method prove that AXe = 0.
2.2. Let B = [AX; eᵀ] (the rows of AX stacked above the row eᵀ).
     Prove that PB = P̄ − (1/n) eeᵀ. Hint: Use the result of the previous exercise.
     Prove also that PB = P_{eᵀ} P̄.
2.3. Consider the logarithmic barrier function for a problem in Karmarkar’s spe-
cial form. Show that if μk = xTkP̄ Xc, then the search direction in Karmarkar’s
method coincides with the projected Newton direction for the barrier func-
tion at xk . (Hint: Prove that the Karmarkar search direction satisfies the
projected Newton equations.)

E.3 Notes
In his original paper Karmarkar used eTx = 1 as the normalizing constraint in
the canonical form. He showed that all linear programs can be transformed to
this specific form, but the proposed transformation results in a much larger linear
program. Variants of the method that are suitable for problems in standard form
were proposed by Anstreicher (1986), Gay (1987), and Ye and Kojima (1987).
