Introduction to Approximation Algorithms
To date, thousands of natural optimization problems have been shown to be NP-hard [8, 18]. To deal
with these problems, two approaches are commonly adopted: (a) approximation algorithms, and (b) randomized algorithms. Roughly speaking, approximation algorithms aim to find, in polynomial time, solutions whose costs are as close to optimal as possible. Randomized algorithms can be looked at from different angles: we can design algorithms giving optimal solutions in expected polynomial time, or algorithms giving solutions that are good in expectation.
Many different techniques are used to devise these kinds of algorithms. They can be broadly classified as: (a) combinatorial algorithms, (b) linear programming based algorithms, (c) semi-definite programming based algorithms, (d) randomized (and derandomized) algorithms, etc. Within each class, standard algorithmic design techniques such as divide and conquer, the greedy method, and dynamic programming are often adopted with a great deal of ingenuity. Designing algorithms, after all, is as much an art as it is a science.
There is no point approximating anything unless the approximation algorithm runs efficiently, i.e. in polynomial time. Hence, when we say approximation algorithm we implicitly mean a polynomial time algorithm.
To clearly understand the notion of an approximation algorithm, we first define the so-called approximation ratio. Informally, for a minimization problem such as the VERTEX COVER problem, a polynomial time algorithm A is said to be an approximation algorithm with approximation ratio ρ if and only if, for every instance of the problem, A gives a solution which is at most ρ times the optimal value for that instance. This way, ρ is always at least 1. As we do not expect to have an approximation algorithm with ρ = 1, we would like ρ to be as close to 1 as possible. Conversely, for maximization problems the algorithm A must produce, for each input instance, a solution which is at least ρ times the optimal value for that instance. (In this case, ρ ≤ 1.)
In both cases, the algorithm A is said to be a ρ-approximation algorithm. Sometimes, for maximization problems, people use the term approximation ratio to refer to 1/ρ instead; this ensures that the ratio is at least 1 in both the min and the max case. The terms approximation ratio, approximation factor, performance guarantee, worst case ratio, and absolute worst case ratio are more or less equivalent, except for the 1/ρ confusion we just mentioned.
Let us now be a little bit more formal. Consider an optimization problem Π in which we try to minimize a certain objective function. For example, when Π is VERTEX COVER, the objective function is the size of a vertex cover. For each instance I, let OPT(I) be the optimal value of the objective function for I. In VERTEX COVER, I is a graph G and OPT(G) depends on the structure of G. Given a polynomial time algorithm A which returns some feasible solution for Π, let A(I) denote the objective value returned by A on input I. Define

    R_A(I) := A(I) / OPT(I)    (minimization case),    (1)

called the performance ratio of A on input I. When Π is a maximization problem, we agree that

    R_A(I) := OPT(I) / A(I)    (maximization case).    (2)
Combinatorial Algorithms
The approximation algorithm APPROX-VERTEX-COVER is a perfect introduction to the combinatorial methods of designing approximation algorithms. It is difficult to pin down exactly what we mean by combinatorial methods; basically, algorithms which make use of discrete structures and ideas (mostly graph theoretic) are often referred to as combinatorial algorithms.
One of the best examples of combinatorial approximation algorithms is a greedy algorithm approximating SET COVER. An instance of the SET COVER problem consists of a universe set U = {1, . . . , n} and a family S = {S_1, . . . , S_m} of subsets of U. We want to find a sub-family of S with as few sets as possible, such that the union of the sub-family is U (i.e. covers U).
An obvious greedy algorithm for this problem is as follows.
GREEDY-SET-COVER(U, S)
  C ← ∅
  while U ≠ ∅ do
    pick the S ∈ S which covers the most elements of U
    U ← U \ S
    C ← C ∪ {S}
  end while
  return C
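For concreteness, here is a short Python sketch of this greedy procedure (the function name and the representation of S as a list of Python sets are our own choices):

def greedy_set_cover(universe, family):
    # Greedy set cover: repeatedly pick a set covering the most
    # still-uncovered elements, until everything is covered.
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(family, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("family does not cover the universe")
        uncovered -= best
        cover.append(best)
    return cover

# Example: the greedy cover uses at most H_3 times the optimal
# number of sets here, since d = 3.
print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))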
Theorem 2.1. Let d = max{|S| : S ∈ S}; then GREEDY-SET-COVER is an H_d-approximation algorithm, where H_d = 1 + 1/2 + · · · + 1/d is the d-th harmonic number.
Proof. Suppose GREEDY-SET-COVER returns a cover of size k. For 1 ≤ i ≤ k, let X_i be the set of newly covered elements of U after the i-th iteration. Note that X_1 ∪ · · · ∪ X_k = U. Let x_i = |X_i|; then x_i is the maximum number of uncovered elements which can be covered by any set after the (i−1)-th step.
For each element u ∈ X_i, assign to u a cost c(u) = 1/x_i. Then the cost of the solution returned by GREEDY-SET-COVER is k = Σ_{u∈U} c(u). Let T ⊆ S be any optimal solution. Since T covers U, every u ∈ U lies in at least one member of T, so

    k = Σ_{u∈U} c(u) ≤ Σ_{T∈T} Σ_{u∈T} c(u).    (3)

Now consider any set S ∈ S, and let a_i = |S ∩ X_i| be the number of elements of S newly covered at step i. Just before step i, at least a_i + · · · + a_k elements of S are still uncovered, while the greedy choice covers x_i new elements; hence x_i ≥ a_i + · · · + a_k, and

    Σ_{u∈S} c(u) = Σ_{i=1}^{k} a_i/x_i ≤ Σ_{i=1}^{k} a_i/(a_i + · · · + a_k) ≤ H_{|S|},

where the last inequality uses the fact that a/m ≤ H_m − H_{m−a} for integers 1 ≤ a ≤ m. Applying this bound to each T ∈ T in (3), we conclude that

    k ≤ Σ_{T∈T} Σ_{u∈T} c(u) ≤ |T| · H_d.
Exercise 1. The WEIGHTED SET COVER problem is similar to the SET COVER problem, except that every set has a weight defined by a weight function w : S → Z⁺. The objective is to find a set cover with minimum total weight.
1. State the decision version of WEIGHTED SET COVER and show that it is NP-complete.
2. Consider the following greedy algorithm (a runnable sketch is given after this exercise):

GREEDY-WEIGHTED-SET-COVER(U, S, w)
  C ← ∅
  while U ≠ ∅ do
    pick the S ∈ S with the least cost per uncovered element, i.e. the S for which w(S)/|S ∩ U| is minimized
    U ← U \ S
    C ← C ∪ {S}
  end while
  return C
Let X_i be the set of newly covered elements of U after the i-th step. Let x_i = |X_i|, and let w_i be the weight of the i-th set picked by the algorithm. Assign a cost c(u) = w_i/x_i to each element u ∈ X_i, for all i ≤ k. Show that, for any set S ∈ S,

    Σ_{u∈S} c(u) ≤ w(S) · H_{|S|}.
3. Show that GREEDY-WEIGHTED-SET-COVER has approximation ratio H_d, too.
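The following Python sketch implements the weighted greedy rule above; the representation of the input as (set, weight) pairs is our own choice:

def greedy_weighted_set_cover(universe, family):
    # family: list of (set, weight) pairs. Repeatedly pick the pair
    # minimizing weight per newly covered element: w(S) / |S ∩ U|.
    uncovered = set(universe)
    cover = []
    while uncovered:
        candidates = [(s, w) for (s, w) in family if s & uncovered]
        if not candidates:
            raise ValueError("family does not cover the universe")
        s, w = min(candidates, key=lambda sw: sw[1] / len(sw[0] & uncovered))
        uncovered -= s
        cover.append((s, w))
    return cover

# With unit weights this reduces to GREEDY-SET-COVER.
print(greedy_weighted_set_cover({1, 2, 3, 4},
                                [({1, 2, 3}, 5), ({1, 2}, 1), ({3, 4}, 1)]))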
What is amazing is that H_d ≈ ln d is basically the best approximation ratio we can hope for, as was shown by Feige [13].
Exercise 2 (BIN PACKING). Suppose we are given a set of n objects, where the size s_i of the i-th object satisfies 0 < s_i < 1. We wish to pack all the objects into the minimum number of unit-size bins. Each bin can hold any subset of the objects whose total size does not exceed 1.
1. Prove that the problem of determining the minimum number of bins required is NP-hard. (Hint: use SUBSET SUM.)
The first-fit heuristic takes each object in turn and places it into the first bin that can accommodate it. Let S = Σ_{i=1}^{n} s_i.
2. Show that the optimal number of bins required is at least ⌈S⌉.
3. Show that the first-fit heuristic leaves at most one bin less than half full.
4. Show that the number of bins used by the first-fit heuristic is never more than ⌈2S⌉.
5. Prove that the first-fit heuristic has approximation ratio 2.
6. Give an efficient implementation of the first-fit heuristic, and analyze its running time.
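As a concrete illustration of the heuristic in this exercise, here is a simple quadratic-time first-fit sketch in Python (a faster implementation is the point of part 6):

def first_fit(sizes):
    # First-fit bin packing: put each item into the first bin with
    # enough remaining capacity; open a new bin if none fits.
    bins = []  # bins[j] = total size currently packed into bin j
    for s in sizes:
        for j, load in enumerate(bins):
            if load + s <= 1.0:
                bins[j] += s
                break
        else:
            bins.append(s)  # no existing bin fits: open a new one
    return len(bins)

print(first_fit([0.5, 0.7, 0.5, 0.3]))  # uses 2 bins, which is optimal here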
A linear program consists of a linear objective function which we are trying to maximize or minimize
subject to a set of linear equalities and inequalities. For instance, the following is a linear program:
    min   x_1 − x_2 + 4x_3
    s.t.  3x_1 − x_2          =  3
              x_2       + 2x_4 ≥  4
          x_1      + x_3       ≤  3
          x_1, x_2             ≥  0
Linear programs can be solved in polynomial time; the outcome is either an optimal solution or an indication that the program is unbounded or infeasible.
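For illustration, small LPs of this kind can be solved with an off-the-shelf solver such as SciPy's linprog; the toy program below (our own example, not the one above) minimizes x1 + 2*x2 subject to x1 + x2 ≥ 3, x1 − x2 ≤ 1, and x1, x2 ≥ 0:

from scipy.optimize import linprog

# linprog takes upper-bound constraints A_ub @ x <= b_ub, so the
# ">=" row is negated: x1 + x2 >= 3 becomes -x1 - x2 <= -3.
res = linprog(c=[1, 2],
              A_ub=[[-1, -1], [1, -1]],
              b_ub=[-3, 1],
              bounds=[(0, None), (0, None)])
print(res.status, res.x, res.fun)  # status 0 = optimal; here x = (2, 1), value 4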
An integer (linear) program is similar to a linear program, with the additional requirement that the variables be integers. The INTEGER PROGRAMMING problem is the problem of determining if a given integer program has a feasible solution; it is known to be NP-hard. Hence, we cannot hope to solve general integer programs efficiently. Integer programs can nevertheless be used to formulate many discrete optimization problems.
Let us see how we can formulate VERTEX COVER as an integer program. Suppose we are given a graph G = (V, E) with n vertices and m edges. For each i ∈ V = {1, . . . , n}, let x_i ∈ {0, 1} be a variable which is 1 if i belongs to the vertex cover, and 0 otherwise; then the problem is equivalent to solving the following (linear) integer program:

    min   x_1 + x_2 + · · · + x_n
    s.t.  x_i + x_j ≥ 1,  ∀ ij ∈ E,
          x_i ∈ {0, 1},   ∀ i ∈ V.       (4)

The objective function counts the number of vertices in the vertex cover. Each inequality x_i + x_j ≥ 1, ij ∈ E, requires each edge to have at least one of its end points in the vertex cover. Actually,
the formulation above is somewhat too strict. Suppose we relax it a little bit:

    min   x_1 + x_2 + · · · + x_n
    s.t.  x_i + x_j ≥ 1,       ∀ ij ∈ E,
          x_i ≥ 0, x_i ∈ Z,    ∀ i ∈ V.

This would still be equivalent to solving the vertex cover problem, since in an optimal solution to the integer program above none of the x_i can be more than 1 (why?).
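Since solving general integer programs is NP-hard, no efficient solver is available; for intuition, though, one can enumerate all 0/1 assignments on a tiny graph. The brute-force sketch below (our own illustration) solves (4) exactly:

from itertools import product

def vertex_cover_ip_brute_force(n, edges):
    # Solve the 01-IP (4) by trying all 2^n assignments; exponential
    # time, for illustration on tiny graphs only.
    best = None
    for x in product((0, 1), repeat=n):
        if all(x[i] + x[j] >= 1 for (i, j) in edges):  # every edge covered
            if best is None or sum(x) < sum(best):
                best = x
    return best

# Path 0-1-2-3: every optimal cover has 2 vertices, e.g. {1, 3}.
print(vertex_cover_ip_brute_force(4, [(0, 1), (1, 2), (2, 3)]))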
The next problem is a generalized version of the VERTEX COVER problem.

WEIGHTED VERTEX COVER
  Given a graph G = (V, E), |V| = n, |E| = m, and a weight function w : V → R, find a vertex cover C ⊆ V for which Σ_{i∈C} w(i) is minimized.
Note that when w ≡ 1, the weighted version is the same as the unweighted version. An equivalent linear integer program can be written as

    min   w_1 x_1 + w_2 x_2 + · · · + w_n x_n
    s.t.  x_i + x_j ≥ 1,  ∀ ij ∈ E,
          x_i ∈ {0, 1},   ∀ i ∈ V.

Note that if the weights were all non-negative, then we would only have to require the variables to be non-negative integers, just as in the case of normal vertex cover. An integer program (IP) as above is also referred to as a 01-integer program.
The next two problems are more general versions of the VERTEX COVER and the WEIGHTED VERTEX COVER problems. Recall that we use [n] to denote the set {1, . . . , n} for every positive integer n, with [0] naturally denoting ∅.

SET COVER
  Given a collection S = {S_1, . . . , S_n} of subsets of [m] = {1, . . . , m}, find a sub-collection C = {S_i | i ∈ J} with as few members as possible (i.e. |J| as small as possible) such that ∪_{i∈J} S_i = [m].
Similar to VERTEX COVER, we use a 01-variable x_j to indicate the inclusion of S_j in the cover. For each i ∈ {1, . . . , m}, we need at least one of the S_j containing i to be picked. The integer program is then

    min   x_1 + · · · + x_n
    s.t.  Σ_{j : S_j ∋ i} x_j ≥ 1,  ∀ i ∈ [m],
          x_j ∈ {0, 1},             ∀ j ∈ [n].       (5)
The WEIGHTED SET COVER problem, in which set S_j has weight w_j, is entirely similar:

    min   w_1 x_1 + · · · + w_n x_n
    s.t.  Σ_{j : S_j ∋ i} x_j ≥ 1,  ∀ i ∈ [m],
          x_j ∈ {0, 1},             ∀ j ∈ [n].       (6)
(i) Given an integer program of the form (5), where A is any 01-matrix, formulate a (weighted) SET COVER instance which is equivalent to the program.
(ii) Given an integer program of the form (6), where A is any 01-matrix, formulate an INDEPENDENT SET instance which is equivalent to the program.
In both questions, show the equivalence. For example, in (i) you must show that a minimum weighted set cover of the constructed set family corresponds to an optimal solution of the integer program, and vice versa.
Exercise 7 (BIN PACKING). Formulate the following problem as an IP problem:

  Given a set of n items {1, . . . , n} with sizes s(i) ∈ (0, 1], find a way to partition the set of items into a minimum number m of bins B_1, . . . , B_m, such that

      Σ_{i∈B_j} s(i) ≤ 1,  ∀ j ∈ [m].
In general, relaxation refers to the action of relaxing the integrality requirement of a linear IP to turn it into an LP. For example, the LP corresponding to the IP (4) of VERTEX COVER is

    min   x_1 + x_2 + · · · + x_n
    s.t.  x_i + x_j ≥ 1,  ∀ ij ∈ E,
          0 ≤ x_i ≤ 1,    ∀ i ∈ V.       (7)
Obviously, if the LP version is infeasible, then the IP version is also infeasible. This is the first good reason to do relaxation. Now, suppose x* is an optimal solution to the LP problem. We know that x* can be found in polynomial time. We shall construct a feasible solution x^A to the IP problem as follows. Let

    x^A_i = 1 if x*_i ≥ 1/2,   and   x^A_i = 0 if x*_i < 1/2.
You should check that x^A is indeed feasible for the IP. This technique of constructing a feasible solution for the IP from the LP is called rounding. We have just seen the second advantage of doing relaxation. The third is that the optimal value of the LP provides a lower bound for the optimal value of the IP (why?). Using this fact, one can derive the approximation ratio of the feasible solution x^A. Let OPT(IP) be the optimal value for the IP instance, OPT(LP) the optimal value for the LP, and Cost(x^A) the cost of the feasible solution x^A for the IP; then we have
    OPT(IP) ≥ OPT(LP) = x*_1 + · · · + x*_n ≥ (1/2) x^A_1 + · · · + (1/2) x^A_n = (1/2) Cost(x^A),

where the middle inequality holds because x^A_i ≤ 2 x*_i for every i.
In other words, the cost of the approximation x^A is at most twice the optimal. We thus have a 2-approximation algorithm to solve the VERTEX COVER problem. Since it is impossible, unless P = NP, to have an exact polynomial time algorithm to solve VERTEX COVER, an algorithm giving a feasible solution within twice the optimal is quite satisfactory. The exact same technique works for the WEIGHTED VERTEX COVER problem when the weights are non-negative. Thus, we also have a 2-approximation algorithm for the WEIGHTED VERTEX COVER problem. (Note, again, that when we say approximation algorithm, it automatically means a polynomial-time approximation algorithm. There would be no point approximating a solution if it took exponentially long.)
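A minimal sketch of this relax-and-round scheme for VERTEX COVER, again using SciPy's linprog (our own illustration):

from scipy.optimize import linprog

def lp_round_vertex_cover(n, edges):
    # Relax IP (4) to LP (7), solve the LP, and round each coordinate
    # at threshold 1/2. The result is a cover of cost at most 2 * OPT.
    c = [1.0] * n
    # Edge constraint x_i + x_j >= 1 becomes -x_i - x_j <= -1.
    A_ub = [[-1.0 if v in (i, j) else 0.0 for v in range(n)] for (i, j) in edges]
    b_ub = [-1.0] * len(edges)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * n)
    # Small tolerance guards against floating-point round-off.
    return [i for i in range(n) if res.x[i] >= 0.5 - 1e-9]

# Triangle plus a pendant edge; the rounded cover has size <= 2 * OPT.
print(lp_round_vertex_cover(4, [(0, 1), (1, 2), (0, 2), (2, 3)]))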
Theorem 4.1. There is an approximation algorithm to solve the WEIGHTED VERTEX COVER problem with approximation ratio 2.
Obviously, one would like to reduce the ratio 2 to be as close to 1 as possible.
To this end, let us attempt to use the relaxation and rounding idea to find approximation algorithms for the WEIGHTED VERTEX COVER problem. In fact, we shall deal with the following much more general problem, called the GENERAL COVER problem:

    min   c_1 x_1 + · · · + c_n x_n
    s.t.  a_{i1} x_1 + · · · + a_{in} x_n ≥ b_i,  ∀ i ∈ [m],
          x_j ∈ {0, 1},                           ∀ j ∈ [n],       (8)

where the a_{ij}, b_i, c_j are all non-negative integers. Since we can remove an inequality with b_i = 0, we may assume b_i > 0 for all i ∈ [m]. Moreover, if c_j = 0 then we can set x_j = 1 and remove the column corresponding to j without affecting the objective function or the feasible solutions; thus, we may also assume c_j > 0 for all j ∈ [n]. Lastly, each row i should have at least one positive a_{ij}, else the i-th inequality cannot be satisfied; this is our last natural assumption.
The relaxed LP version of (8) is

    min   c_1 x_1 + · · · + c_n x_n
    s.t.  a_{i1} x_1 + · · · + a_{in} x_n ≥ b_i,  ∀ i ∈ [m],
          0 ≤ x_j ≤ 1,                            ∀ j ∈ [n].       (9)
Let x* be an optimal solution to the LP version. How would we round x* to get x^A, as in the VERTEX COVER case? Firstly, the rounding must ensure that x^A is feasible, namely that it satisfies each of the inequalities a_{i1} x_1 + · · · + a_{in} x_n ≥ b_i. Secondly, we do not want to over-round, e.g. by assigning everything to 1, which would give a feasible solution but not a very good approximation. Consider an inequality such as

    3x_1 + 4x_2 + x_3 + 2x_4 ≥ 4,       (10)

which x* satisfies. If we were to round some of the x*_i up to 1 and the rest down to 0, we must pick ones whose coefficients sum up to 4 or more; for instance, we could round x_2 up to 1 and the rest down to 0, or x_1 and x_3 up to 1 and the rest down to 0. The difficulty with this idea is that there might be an exponential number of ways to do this, and we also have to do it consistently across all inequalities a_{i1} x_1 + · · · + a_{in} x_n ≥ b_i: we cannot round x_1 to 0 in one inequality and to 1 in another. Fortunately, some information about which x_j to round is contained in the actual values of the x*_j. Consider inequality (10) again. The sum of all coefficients of the x_j is 10. If all x*_j were at most 1/10, then the left hand side would be at most 1. Hence, there must be some x*_j which is ≥ 1/10. If x*_2 ≥ 1/10 and we round it up to 1, then we'd be fine. However, if x*_3 and x*_4 are the only ones which are ≥ 1/10, then rounding them up to 1 is not sufficient. Fortunately, that cannot happen, since if x*_1, x*_2 < 1/10 and x*_3, x*_4 ≤ 1, then

    3x*_1 + 4x*_2 + x*_3 + 2x*_4 < 3/10 + 4/10 + 1 + 2 < 4.

Thus, the sum of the coefficients of the x_j for which x*_j ≥ 1/(a_{i1} + · · · + a_{in}) has to be at least b_i. This is an informal proof; I will leave the rigorous proof as an exercise.
In general, let

    f = max_{i ∈ [m]} Σ_{j=1}^{n} a_{ij},

and set

    x^A_j = 1 if x*_j ≥ 1/f,   and   x^A_j = 0 if x*_j < 1/f;

then we have an approximation ratio of f. You should map this rounding back to the rounding of the VERTEX COVER problem to see the analogy.
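The same rounding is easy to express in code; the sketch below (our own illustration, reusing SciPy's linprog) rounds an optimal solution of the LP (9) at threshold 1/f:

from scipy.optimize import linprog

def lp_round_general_cover(c, a, b):
    # Solve the LP relaxation (9) of the GENERAL COVER IP (8), then set
    # x^A_j = 1 exactly when x*_j >= 1/f, where f = max row sum of a.
    n = len(c)
    f = max(sum(row) for row in a)
    res = linprog(c,
                  A_ub=[[-v for v in row] for row in a],  # a_i . x >= b_i, negated
                  b_ub=[-v for v in b],
                  bounds=[(0.0, 1.0)] * n)
    return [1 if res.x[j] >= 1.0 / f - 1e-9 else 0 for j in range(n)]

# Single constraint 3x1 + 4x2 + x3 + 2x4 >= 4 with unit costs: f = 10.
print(lp_round_general_cover([1, 1, 1, 1], [[3, 4, 1, 2]], [4]))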
Theorem 4.2. The rounding strategy above gives an approximation algorithm with approximation ratio at most

    f = max_{i ∈ [m]} Σ_{j=1}^{n} a_{ij}

for the GENERAL COVER problem; namely, for every instance of the GENERAL COVER problem, the relaxation and rounding algorithm yields a solution of cost at most f times the optimal.
Proof. We first show that x^A is indeed feasible for IP-GC (the integer program (8) for the GENERAL COVER problem). Suppose x^A is not feasible; then there is some row i for which

    a_{i1} x^A_1 + · · · + a_{in} x^A_n < b_i.

Since the a_{ij} and b_i are integers, this means

    Σ_{j : x*_j ≥ 1/f} a_{ij} ≤ b_i − 1.

If Σ_{j : x*_j < 1/f} a_{ij} > 0, then, since each such x*_j < 1/f and Σ_j a_{ij} ≤ f,

    Σ_{j=1}^{n} a_{ij} x*_j = Σ_{j : x*_j ≥ 1/f} a_{ij} x*_j + Σ_{j : x*_j < 1/f} a_{ij} x*_j < Σ_{j : x*_j ≥ 1/f} a_{ij} + 1 ≤ b_i,

contradicting the feasibility of x*. If Σ_{j : x*_j < 1/f} a_{ij} = 0, we get

    Σ_{j=1}^{n} a_{ij} x*_j = Σ_{j : x*_j ≥ 1/f} a_{ij} x*_j ≤ Σ_{j : x*_j ≥ 1/f} a_{ij} ≤ b_i − 1 < b_i,

again a contradiction. Hence x^A is feasible.
For the ratio, note that x^A_j ≤ f x*_j for every j: if x^A_j = 1 then x*_j ≥ 1/f. Consequently,

    Cost(x^A) = Σ_{j=1}^{n} c_j x^A_j ≤ Σ_{j=1}^{n} c_j (f x*_j) = f · OPT(LP-GC) ≤ f · OPT(IP-GC).
Exercise 8. Describe the ratio f for the WEIGHTED SET COVER problem in terms of m, n, and the set collection S.
Randomized algorithms
For MAX-E3SAT, in which each of the m clauses has exactly three literals over distinct variables, let S be the number of clauses satisfied by a uniformly random truth assignment to x_1, . . . , x_n; each clause is satisfied with probability 7/8, so E[S] = 7m/8. The method of conditional expectations turns this into a deterministic algorithm. Having fixed x_i = a_i for 1 ≤ i ≤ k − 1, we have

    E[S | x_i = a_i, 1 ≤ i ≤ k−1] = (1/2) E[S | x_i = a_i, 1 ≤ i ≤ k−1, x_k = TRUE]
                                  + (1/2) E[S | x_i = a_i, 1 ≤ i ≤ k−1, x_k = FALSE].
The larger of the two expectations on the right hand side is at least E[S | x_i = a_i, 1 ≤ i ≤ k−1]. Hence, we can set the x_i to TRUE or FALSE one by one, always following the branch that leads to the larger conditional expectation, and eventually obtain a truth assignment which satisfies at least E[S] = 7m/8 clauses.
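A compact Python sketch of this derandomization (the clause representation and helper names are our own): a clause is a tuple of nonzero integers, where literal +v means variable v and −v means its negation.

def expected_sat(clauses, assignment):
    # E[number of satisfied clauses] when the variables in `assignment`
    # (a dict var -> bool) are fixed and the rest are uniformly random.
    total = 0.0
    for clause in clauses:
        unfixed, satisfied = 0, False
        for lit in clause:
            v = abs(lit)
            if v in assignment:
                satisfied = satisfied or (assignment[v] == (lit > 0))
            else:
                unfixed += 1
        # A not-yet-satisfied clause fails only if all unfixed literals fail.
        total += 1.0 if satisfied else 1.0 - 0.5 ** unfixed
    return total

def derandomized_max_e3sat(clauses, num_vars):
    # Fix variables one by one, keeping the branch with the larger
    # conditional expectation; the expectation never decreases.
    assignment = {}
    for v in range(1, num_vars + 1):
        assignment[v] = True
        e_true = expected_sat(clauses, assignment)
        assignment[v] = False
        if e_true > expected_sat(clauses, assignment):
            assignment[v] = True
    return assignment  # satisfies at least 7m/8 clauses on E3SAT instances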
An instance of the SUBSET SUM problem consists of a set X of n positive integers and a target t. The objective is to find a subset of X whose sum is at most t and as close to t as possible. In this section we give a fully polynomial time approximation scheme (FPTAS) for this problem.
Let X = {x_1, . . . , x_n}. Suppose we have a set S_i holding all sums of subsets of {x_1, . . . , x_i}; then S_{i+1} = S_i ∪ (S_i + x_{i+1}), where S + x denotes the set {s + x : s ∈ S}, for any set S and number x. This idea yields an exact algorithm for the SUBSET SUM problem. Unfortunately, that algorithm takes exponential time in the worst case: the lists S_i keep growing, roughly doubling after each step.
The idea of our approximation scheme is to trim down S_i so that it does not grow too large. Suppose S_i is sorted in increasing order. Given a parameter ε > 0, we can scan S_i from left to right. Suppose b is the current element being examined and a is the previous element retained; then b is removed from S_i whenever a(1 + ε) ≥ b (note that b ≥ a). Basically, when ε is small enough, b can be represented by a, since they are close to each other. Let TRIM(S, ε) be a procedure which trims a sorted set S with parameter ε; TRIM can certainly be implemented in polynomial time. We are now ready to describe our FPTAS for SUBSET SUM.
FPTAS-SUBSET-SUM(X, t, ε)
  n ← |X|
  S_0 ← ⟨0⟩
  for i = 1 to n do
    S_i ← S_{i−1} ∪ (S_{i−1} + x_i)
    S_i ← TRIM(S_i, ε/2n)
    remove from S_i every element greater than t
  end for
  return a*, the largest element of S_n
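A direct Python translation of this scheme (our own sketch):

def trim(sorted_sums, delta):
    # Keep a sublist in which consecutive survivors differ by a factor
    # of more than 1 + delta; every removed b is within (1 + delta) of a survivor.
    trimmed = [sorted_sums[0]]
    for b in sorted_sums[1:]:
        if b > trimmed[-1] * (1 + delta):
            trimmed.append(b)
    return trimmed

def fptas_subset_sum(xs, t, eps):
    # Returns a subset sum a* <= t with t* <= (1 + eps) * a*,
    # where t* is the optimal (largest) subset sum not exceeding t.
    n = len(xs)
    sums = [0]
    for x in xs:
        merged = sorted(set(sums) | {s + x for s in sums})
        sums = [s for s in trim(merged, eps / (2 * n)) if s <= t]
    return max(sums)

# Prints 302; the exact optimum is 307 = 104 + 102 + 101.
print(fptas_subset_sum([104, 102, 201, 101], 308, 0.4))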
Theorem 6.1. FPTAS-SUBSET-SUM is a fully polynomial time approximation scheme for SUBSET SUM.
Proof. Let P_i be the set of all sums of subsets of {x_1, . . . , x_i}. Then it is easy to see that for any b ∈ P_i with b ≤ t, there is an a ∈ S_i such that

    b ≤ (1 + ε/2n)^i · a.

In particular, if t* is the optimal sum, then

    t* ≤ (1 + ε/2n)^n · a* ≤ (1 + ε) a*,

where the last inequality uses (1 + ε/2n)^n ≤ e^{ε/2} ≤ 1 + ε for 0 < ε ≤ 1.
It remains to show that the algorithm actually runs in time polynomial in n and 1/ε. Since the i-th iteration of the loop takes time polynomial in the size of S_i, it is sufficient to show that |S_i| is polynomial in n and 1/ε. Note that if 1 < a < b are both in S_i (after trimming), then b/a > 1 + ε/2n. Hence, the number of elements in S_i is at most

    2 + log_{1+ε/2n} t = 2 + ln t / ln(1 + ε/2n) ≤ 2 + 2n(1 + ε/2n)(ln t)/ε = 2 + (2n ln t)/ε + ln t,

which is clearly polynomial in n, 1/ε, and the input size. (We used the fact that ln(1 + x) ≥ x/(1 + x).)
Inapproximability
Suppose we have designed an r-approximation algorithm for some problem. How do we know that r is really the best we can do? Is there a better approximation algorithm? One way to show that r is the best possible ratio is to show that it is NP-hard to approximate the problem to within any ratio smaller than r. We give several results of this flavor in this section.
Historical Notes
Texts on Linear Programming are numerous, of which I recommend [7] and [34]. For Integer Programming, [39] and [34] are suggested. Recent books on approximation algorithms include [5, 23, 32, 37]. For linear algebra, see [24, 35]. See [1, 33] for randomized algorithms, derandomization, and the probabilistic method.
The notion of an approximation algorithm dates back to the seminal works of Garey, Graham, and Ullman [17] and Johnson [25]. Interestingly enough, approximation algorithms were designed in the works of Graham [19], Vizing [38], and Erdős [11] before the notion of NP-completeness came to life.
The greedy approximation algorithm for SET COVER is due to Johnson [25], Lovász [30], and Chvátal [6]. Feige [13] showed that approximating SET COVER to an asymptotically better ratio than ln n is NP-hard.
The most popular method of solving a linear program is the simplex method, whose idea is to move along edges of the feasible polyhedron from vertex to vertex. This idea dates back to Fourier (1826), and was mechanized algebraically by George Dantzig in 1947 (published in 1951 [9]), who also acknowledged fruitful conversations with von Neumann. This worst-case exponential algorithm has proved to work very well for most practical problems. Even now, when we know of many other polynomial time algorithms [27, 28, 41] for solving LPs, the simplex method is still among the best in practice. The worst-case complexity of the simplex method was determined to be exponential when Klee and Minty [29] found an example where the method actually visits all vertices of the feasible polyhedron. The quest for a provably good algorithm continued until Khachian [28] devised the ellipsoid method in 1979. The method performs poorly in practice, however. A breakthrough was made by Karmarkar in 1984 [27], when he found a method which works in provably polynomial time and was also 50 times faster than the simplex method in his experiments. Karmarkar's method was of the interior-point type.
The 8/7-approximation algorithm for MAX-E3SAT follows the line of Yannakakis [40], who gave the first 4/3-approximation for MAX-SAT. A 2-approximation for MAX-SAT was given in the seminal early work of Johnson [25]. Johnson's algorithm can also be interpreted as a derandomized algorithm, much the same as the one we presented. Later, Karloff and Zwick [26] gave an 8/7-approximation algorithm for MAX-3SAT based on semidefinite programming. This approximation ratio is optimal, as shown by Håstad [22]. The conditional expectation method was implicit in Erdős and Selfridge [12].
Until 1990, few inapproximability results were known. To prove a typical inapproximability result, such as that MAX-CLIQUE is not approximable to within some ratio r (unless P = NP), a natural direction is to find a reduction from some NP-complete problem, say 3SAT, to MAX-CLIQUE which satisfies the following properties:
- given a 3CNF formula φ, the reduction constructs, in polynomial time, a graph G_φ;
- there is some poly-time computable threshold t such that if φ is satisfiable, then G_φ has a clique of size at least t, and if φ is not satisfiable, then G_φ does not have any clique of size t/r or more.
If MAX-CLIQUE is r-approximable, then one can use this r-approximation algorithm, along with the reduction above, to decide whether a 3CNF formula φ is satisfiable. The strategy is to run the algorithm on G_φ: if the answer is t/r or more, then φ is satisfiable; otherwise it is not.
Techniques for proving NP-hardness seem inadequate for this kind of gap-producing reduction. Intuitively, the reason is that non-deterministic Turing machines are sensitive to small changes: the accepting computations and rejecting computations are not very far from one another (no gap). In 1990, the landmark work of Feige, Goldwasser, Lovász, Safra, and Szegedy [15] connected probabilistic proof systems and the inapproximability of NP-hard problems. This has become known as the PCP connection. A year later, the PCP theorem, a very strong characterization of NP, was proved in the works of Arora and Safra [4] and Arora, Lund, Motwani, Sudan, and Szegedy [3]. A plethora of inapproximability results using the PCP connection followed, some of them optimal [2, 10, 13, 16, 20–22, 31]. The reader is referred to the recent surveys by Feige [14] and Trevisan [36] for good discussions of this point and the related history.
References
[1] N. Alon and J. H. Spencer, The Probabilistic Method, Wiley-Interscience Series in Discrete Mathematics and Optimization, Wiley-Interscience [John Wiley & Sons], New York, second ed., 2000. With an appendix on the life and work of Paul Erdős.
[2] S. Arora, L. Babai, J. Stern, and Z. Sweedyk, The hardness of approximate optima in lattices, codes, and systems of linear equations, J. Comput. System Sci., 54 (1997), pp. 317–331. 34th Annual Symposium on Foundations of Computer Science (Palo Alto, CA, 1993).
[3] S. Arora, C. Lund, R. Motwani, M. Sudan, and M. Szegedy, Proof verification and the hardness of approximation problems, J. ACM, 45 (1998), pp. 501–555. Prelim. version in FOCS '92.
[4] S. Arora and S. Safra, Probabilistic checking of proofs: a new characterization of NP, J. ACM, 45 (1998), pp. 70–122. Prelim. version in FOCS '92.
[5] G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi, Complexity and Approximation, Springer-Verlag, Berlin, 1999. Combinatorial optimization problems and their approximability properties. With 1 CD-ROM (Windows and UNIX).
[6] V. Chvátal, A greedy heuristic for the set-covering problem, Math. Oper. Res., 4 (1979), pp. 233–235.
[7] V. Chvátal, Linear Programming, A Series of Books in the Mathematical Sciences, W. H. Freeman and Company, New York, 1983.
[8] P. Crescenzi and V. Kann (eds.), A compendium of NP-optimization problems. http://www.nada.kth.se/
[9] G. B. Dantzig, Maximization of a linear function of variables subject to linear inequalities, in Activity Analysis of Production and Allocation, Cowles Commission Monograph No. 13, John Wiley & Sons Inc., New York, N.Y., 1951, pp. 339–347.
[10] I. Dinur and S. Safra, On the hardness of approximating label-cover, Inform. Process. Lett., 89 (2004), pp. 247–254.
[11] P. Erdős, On even subgraphs of graphs, Mat. Lapok, 18 (1967), pp. 283–288.
[12] P. Erdős and J. L. Selfridge, On a combinatorial game, J. Combinatorial Theory Ser. A, 14 (1973), pp. 298–301.
[13] U. Feige, A threshold of ln n for approximating set cover (preliminary version), in Proceedings of the Twenty-eighth Annual ACM Symposium on the Theory of Computing (Philadelphia, PA, 1996), New York, 1996, ACM, pp. 314–318.
[14] U. Feige, Approximation thresholds for combinatorial optimization problems, in Proceedings of the International Congress of Mathematicians, Vol. III (Beijing, 2002), Beijing, 2002, Higher Ed. Press, pp. 649–658.
[15] U. Feige, S. Goldwasser, L. Lovász, S. Safra, and M. Szegedy, Interactive proofs and the hardness of approximating cliques, J. ACM, 43 (1996), pp. 268–292. Prelim. version in FOCS '91.
[16] U. Feige and J. Kilian, Zero knowledge and the chromatic number, J. Comput. System Sci., 57 (1998), pp. 187–199. Complexity 96: The Eleventh Annual IEEE Conference on Computational Complexity (Philadelphia, PA).
[17] M. R. Garey, R. L. Graham, and J. D. Ullman, Worst case analysis of memory allocation algorithms, in Proceedings of the Fourth Annual ACM Symposium on Theory of Computing (STOC), New York, 1972, ACM, pp. 143–150.
[18] M. R. Garey and D. S. Johnson, Computers and Intractability, W. H. Freeman and Co., San Francisco, Calif., 1979. A guide to the theory of NP-completeness. A Series of Books in the Mathematical Sciences.
[19] R. L. Graham, Bounds for certain multiprocessing anomalies, Bell System Tech. J., 45 (1966), pp. 1563–1581.
[21] J. Håstad, Clique is hard to approximate within n^{1−ε}, Acta Math., 182 (1999), pp. 105–142.
[22] J. Håstad, Some optimal inapproximability results, in STOC '97 (El Paso, TX), ACM, New York, 1999, pp. 1–10 (electronic).
[23] D. S. Hochbaum, ed., Approximation Algorithms for NP-Hard Problems, PWS Publishing Company, Boston, MA, 1997.
[24] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[25] D. S. Johnson, Approximation algorithms for combinatorial problems, J. Comput. System Sci., 9 (1974), pp. 256–278. Fifth Annual ACM Symposium on the Theory of Computing (Austin, Tex., 1973).
[26] H. Karloff and U. Zwick, A 7/8-approximation algorithm for MAX 3SAT?, in Proceedings of the 38th Annual IEEE Symposium on Foundations of Computer Science, Miami Beach, FL, USA, IEEE Press, 1997.
[27] N. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984), pp. 373–395.
[28] L. G. Khachian, A polynomial algorithm for linear programming, Dokl. Akad. Nauk SSSR, 244 (1979), pp. 1093–1096. English translation in Soviet Math. Dokl. 20, 191–194, 1979.
[29] V. Klee and G. J. Minty, How good is the simplex algorithm?, in Inequalities, III (Proc. Third Sympos., Univ. California, Los Angeles, Calif., 1969; dedicated to the memory of Theodore S. Motzkin), Academic Press, New York, 1972, pp. 159–175.
[30] L. Lovász, On the ratio of optimal integral and fractional covers, Discrete Math., 13 (1975), pp. 383–390.
[31] C. Lund and M. Yannakakis, On the hardness of approximating minimization problems, J. Assoc. Comput. Mach., 41 (1994), pp. 960–981.
[36] L. Trevisan, Inapproximability of combinatorial optimization problems, Tech. Rep. 65, The Electronic Colloquium on Computational Complexity, 2004.
[37] V. V. Vazirani, Approximation Algorithms, Springer-Verlag, Berlin, 2001.
[38] V. G. Vizing, On an estimate of the chromatic class of a p-graph, Diskret. Analiz No. 3 (1964), pp. 25–30.
[39] L. A. Wolsey, Integer Programming, Wiley-Interscience Series in Discrete Mathematics and Optimization, John Wiley & Sons Inc., New York, 1998. A Wiley-Interscience Publication.
[40] M. Yannakakis, On the approximation of maximum satisfiability, J. Algorithms, 17 (1994), pp. 475–502. Third Annual ACM-SIAM Symposium on Discrete Algorithms (Orlando, FL, 1992).
[41] Y. Ye, Extensions of the potential reduction algorithm for linear programming, J. Optim. Theory Appl., 72 (1992), pp. 487–498.