
CS787: Advanced Algorithms Lecture 10: LP Relaxation and Rounding

In this lecture we will design approximation algorithms using linear programming. The key insight
behind this approach is that the closely related integer programming problem is NP-hard (a proof
is left to the reader). We can therefore reduce any NP-complete optimization problem to an integer
program, “relax” it to a linear program by removing the integrality constraints, solve the linear
program, and then “round” the LP solution to a solution to the original problem. We first describe
the integer programming problem in more detail.

10.1 Integer Programming and LP relaxation


Definition 10.1.1 An integer program is a linear program in which all variables must be integers.
As in a linear program, the constraints in an integer program form a polytope. However, the
feasible set is given by the set of all integer-valued points within the polytope, and not the entire
polytope. Therefore, the feasible region is not a convex set. Moreover, the optimal solution may
not be achieved at an extreme point of the polytope; it is found at an extreme point of the convex
hull of all feasible integral points. (See Figure 10.1.1.)

Figure 10.1.1: The feasible region of an integer linear program.

Usually, in designing LP-based approximation algorithms we follow this general approach:

1. Reduce the problem to an integer program.

2. Relax the integrality constraint, that is, allow variables to take on non-integral values.

3. Solve the resulting linear program to obtain a fractional optimal solution.

4. “Round” the fractional solution to obtain an integral feasible solution.

Note that the optimal solution to the LP is not necessarily integral. However, since the feasible
region of the LP contains the feasible region of the IP, the optimal value of the former is no
worse than that of the latter. For a minimization problem, this implies that the optimal LP value is
a lower bound on OPT, the optimal value of the problem we started out with. While the rounded
solution is not necessarily optimal for the original problem, since we start from the optimal LP
solution, we aim to show that the rounded solution is not too far from optimal.
These relationships between the different values are illustrated in Figure 10.1.2 below. The
worst-case ratio between the optimal integral value and the optimal LP value is called the
integrality gap of the linear program.

Figure 10.1.2: The relationship between the optimal LP and ILP values for minimization problems.

We now apply the linear programming approach to two problems: vertex cover and facility location.

10.2 Vertex Cover revisited


We have already seen a factor of 2 approximation using maximum matchings for the lower bound.
Today we will see another factor of 2 approximation based on Linear Programming. This time we
consider a weighted version of the problem. Recall that we are given a graph G = (V, E) with
weights w : V → ℝ+, and our goal is to find the minimum weight subset of vertices such that every
edge is incident on some vertex in that subset.
We first reduce this problem to an integer program. We have one {0, 1} variable for each vertex,
denoting whether or not that vertex is picked in the vertex cover. Call this variable x_v for vertex
v. Then we get the following integer program. The first constraint states that for each
edge we must pick at least one of its endpoints.
Minimize ∑_{v∈V} w_v x_v subject to
x_u + x_v ≥ 1 ∀(u, v) ∈ E
x_v ∈ {0, 1} ∀v ∈ V
To obtain a linear program, we relax the last constraint to the following:
x_v ∈ [0, 1] ∀v ∈ V
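In the standard form accepted by LP solvers (minimize cᵀx subject to A_ub·x ≤ b_ub), each edge constraint x_u + x_v ≥ 1 is rewritten as −x_u − x_v ≤ −1. A minimal sketch of building this data in Python (the graph and weights are illustrative; arrays in this shape can be handed to a solver such as scipy.optimize.linprog):

```python
def vertex_cover_lp_data(num_vertices, edges, weights):
    """Build (c, A_ub, b_ub, bounds) for the vertex cover LP relaxation.

    Minimize sum_v w_v x_v subject to x_u + x_v >= 1 for each edge and
    0 <= x_v <= 1. Constraints are encoded as A_ub @ x <= b_ub, so each
    edge contributes the row -x_u - x_v <= -1.
    """
    c = list(weights)                      # objective coefficients w_v
    A_ub, b_ub = [], []
    for (u, v) in edges:
        row = [0.0] * num_vertices
        row[u] = row[v] = -1.0             # -x_u - x_v <= -1
        A_ub.append(row)
        b_ub.append(-1.0)
    bounds = [(0.0, 1.0)] * num_vertices   # relaxed integrality: x_v in [0, 1]
    return c, A_ub, b_ub, bounds

# Example: a triangle with unit weights.
c, A, b, bnds = vertex_cover_lp_data(3, [(0, 1), (1, 2), (0, 2)], [1, 1, 1])
```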
Next we use a standard LP solver to find the optimal solution to this LP. Let x*_v denote this optimal
fractional solution. By our previous argument, the following holds:
Proposition 10.2.1 Val(x*) ≤ OPT, where OPT is the value of the optimal solution to the vertex
cover instance.
The example below illustrates that the optimal solution to the LP is not necessarily integral.

Figure 10.2.3: An example where the vertex cover LP has an integrality gap of 4/3. The optimal
fractional solution sets x_v = 1/2 for all vertices, for a total cost of 3/2, while the optimal integral
solution has cost 2.
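The figure's instance appears to be a triangle (three vertices at x_v = 1/2 gives cost 3/2); the gap on a triangle can be verified by brute force. A minimal sketch, unit weights assumed:

```python
from itertools import product

# Triangle: the fractional solution x_v = 1/2 for all v is feasible,
# since every edge gets 1/2 + 1/2 >= 1, with total cost 3/2.
edges = [(0, 1), (1, 2), (0, 2)]
frac_cost = 3 * 0.5

# Brute-force the cheapest *integral* cover over all {0,1} assignments.
int_cost = min(
    sum(x)
    for x in product([0, 1], repeat=3)
    if all(x[u] + x[v] >= 1 for (u, v) in edges)
)

gap = int_cost / frac_cost  # integrality gap on this instance
```

Any integral cover of a triangle needs two vertices, so the ratio is 2 / (3/2) = 4/3.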

It remains to round the fractional solution. For vertex cover, the obvious rounding works: for each
v with x*_v ≥ 1/2, set x_v = 1 and include v in the vertex cover; for each v with x*_v < 1/2, set
x_v = 0 and leave v out of the vertex cover.
It is easy to see that this forms a vertex cover. Consider any edge (u, v) ∈ E. By feasibility of
the LP solution, x*_u + x*_v ≥ 1. Therefore, at least one of x*_u and x*_v is at least 1/2, and the
corresponding vertex is picked in the vertex cover. It remains to prove that the solution we obtain
has small weight.
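A minimal sketch of this threshold rounding, run on the triangle instance with x*_v = 1/2 everywhere (unit weights assumed):

```python
def round_vertex_cover(x_frac):
    """Round a fractional LP solution: pick v exactly when x*_v >= 1/2."""
    return {v for v, xv in enumerate(x_frac) if xv >= 0.5}

def is_cover(cover, edges):
    """Check that every edge has at least one endpoint in the cover."""
    return all(u in cover or v in cover for (u, v) in edges)

edges = [(0, 1), (1, 2), (0, 2)]
weights = [1, 1, 1]
x_star = [0.5, 0.5, 0.5]          # optimal fractional solution on the triangle

cover = round_vertex_cover(x_star)
cost = sum(weights[v] for v in cover)
lp_value = sum(w * x for w, x in zip(weights, x_star))

# Feasible, and within a factor of 2 of the LP lower bound.
assert is_cover(cover, edges) and cost <= 2 * lp_value
```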

Lemma 10.2.2 ∑_{v∈V} w_v x_v ≤ 2 ∑_{v∈V} w_v x*_v.
Proof: Recall that we set x_v to 1 if and only if x*_v ≥ 1/2, and to 0 otherwise. The lemma then
follows by noting that x_v ≤ 2x*_v for all v.
Finally, the weight of our vertex cover is exactly ∑_{v∈V} w_v x_v, because by definition x_v = 1 if and
only if v is included in our vertex cover, and 0 otherwise. We therefore have the following theorem.

Theorem 10.2.3 The above algorithm is a 2-approximation to weighted vertex cover.

10.3 Facility location


The story behind the facility location problem is that a company wants to locate a number of
warehouses so that the cost of opening these warehouses plus the cost of shipping its product to
its retail stores is minimized. Formally, in the facility location problem we are given a collection
of facilities and a collection of customers, and must decide which facilities to open so as to
minimize the total cost. We incur an opening cost f_i if we open facility i, and a routing cost
c(i, j) if we route customer j to facility i. Furthermore, the routing costs form a metric; that is,
they are distances satisfying the triangle inequality.
First, we reduce this problem to an integer program. We let the variable x_i denote whether facility
i is open, and let y_ij denote whether customer j is assigned to facility i. The following program
then expresses the problem. The first constraint says that each customer must be assigned to at
least one facility. The second says that if a customer is assigned to a facility, then that facility
must be open.

Minimize ∑_i f_i x_i + ∑_{i,j} c(i, j) y_ij subject to

∑_i y_ij ≥ 1 ∀j
x_i ≥ y_ij ∀i, j
x_i, y_ij ∈ {0, 1} ∀i, j

To obtain a linear program, we relax the last constraint to x_i, y_ij ∈ [0, 1].
For convenience, let C_f(x) denote the total facility cost induced by x, i.e., ∑_i f_i x_i. Similarly, let
C_r(y) denote the total routing cost induced by y, i.e., ∑_{i,j} c(i, j) y_ij.
Let (x*, y*) be the optimal solution to this linear program. Since every feasible solution to the original
ILP lies in the feasible region of this LP, the cost C(x*, y*) is at most the optimal value of the
ILP. Since x* and y* are almost certainly non-integral, we need a way to round this solution to a
feasible, integral solution without increasing the cost much.
Note that at optimality the variables y*_ij for a fixed j sum to 1, and thus form a probability
distribution over the facilities. The LP pays the expected routing cost under this distribution. If
we could route j to the closest facility among all those that j is fractionally connected to, our
routing cost would be no more than in the LP solution. However, the closest facility is not
necessarily the cheapest to open, and this is what makes the rounding process complicated.
To get around this problem, we first use a filtering technique that ensures that every facility that j
is connected to has small routing cost, and we pick the cheapest of these in our solution.


1. For each customer j, compute the average routing cost c̃_j = ∑_i c(i, j) y*_ij.

2. For each customer j, let S_j denote the set {i : c(i, j) ≤ 2c̃_j}.

3. For all i and j: if i ∉ S_j, set ỹ_ij = 0; otherwise, set ỹ_ij = y*_ij / ∑_{i′∈S_j} y*_{i′j}.

4. For each facility i, let x̃_i = min(2x*_i, 1).
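A sketch of these filtering steps on a toy fractional solution (the two-facility instance, its costs, and the fractional values below are all hypothetical, chosen only to exercise the formulas):

```python
def filter_solution(x_star, y_star, cost):
    """Apply the filtering steps to a fractional solution (x*, y*).

    cost[i][j] is the routing cost c(i, j); y_star[i][j] is customer j's
    fractional assignment to facility i.
    """
    n_fac, n_cust = len(x_star), len(y_star[0])
    # Step 1: average routing cost of each customer.
    c_avg = [sum(cost[i][j] * y_star[i][j] for i in range(n_fac))
             for j in range(n_cust)]
    # Step 2: S_j = facilities within twice the average cost.
    S = [{i for i in range(n_fac) if cost[i][j] <= 2 * c_avg[j]}
         for j in range(n_cust)]
    # Step 3: zero out assignments outside S_j and renormalize the rest.
    y_tilde = [[0.0] * n_cust for _ in range(n_fac)]
    for j in range(n_cust):
        total = sum(y_star[i][j] for i in S[j])
        for i in S[j]:
            y_tilde[i][j] = y_star[i][j] / total
    # Step 4: scale up the facility variables, capped at 1.
    x_tilde = [min(2 * xi, 1.0) for xi in x_star]
    return c_avg, S, y_tilde, x_tilde

# Two facilities, one customer: facility 0 is close, facility 1 is far.
cost = [[1.0], [10.0]]
x_star = [0.6, 0.4]
y_star = [[0.6], [0.4]]
c_avg, S, y_tilde, x_tilde = filter_solution(x_star, y_star, cost)

# Lemma 10.3.1: the filtered assignments grow by at most a factor of 2.
assert all(y_tilde[i][0] <= 2 * y_star[i][0] for i in range(2))
```

Here the far facility (cost 10 > 2·4.6) is filtered out, and all of customer 0's assignment moves to the near facility.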


Lemma 10.3.1 For all i and j, ỹ_ij ≤ 2y*_ij.

Proof: If we fix j and treat y*_ij as a probability distribution over i, this follows from Markov's
inequality. However, the proof of Markov's inequality is simple enough to show precisely how it
applies here:

c̃_j = ∑_i c(i, j) y*_ij ≥ ∑_{i∉S_j} c(i, j) y*_ij > 2c̃_j ∑_{i∉S_j} y*_ij.

So ∑_{i∉S_j} y*_ij ≤ 1/2. For any fixed j, y*_ij is a probability distribution over i, so
∑_{i∈S_j} y*_ij ≥ 1/2. Therefore,

ỹ_ij = y*_ij / ∑_{i′∈S_j} y*_{i′j} ≤ 2y*_ij.

Lemma 10.3.2 (x̃, ỹ) is feasible, and C(x̃, ỹ) ≤ 2C(x*, y*).

Proof: For any fixed j, the values ỹ_ij form a probability distribution over i, so ∑_i ỹ_ij = 1. For
every i and j, Lemma 10.3.1 gives ỹ_ij ≤ 2y*_ij ≤ 2x*_i, and also ỹ_ij ≤ 1, so x̃_i = min(2x*_i, 1) ≥ ỹ_ij.
Since in addition 0 ≤ x̃_i, ỹ_ij ≤ 1 for all i and j, x̃ and ỹ form a feasible solution to the LP. The
cost bound follows because x̃_i ≤ 2x*_i and ỹ_ij ≤ 2y*_ij term by term.
Now, given x̃ and ỹ, we perform the following algorithm:
1. Pick the unassigned customer j that minimizes c̃_j.

2. Open the facility i = argmin_{i∈S_j} f_i.

3. Assign customer j to facility i.

4. For all j′ such that S_j ∩ S_{j′} ≠ ∅, assign customer j′ to facility i.

5. Repeat steps 1-4 until all customers have been assigned to a facility.
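A sketch of this clustering loop on a toy filtered solution (the customers, precomputed c̃_j values, sets S_j, and facility costs below are all hypothetical):

```python
def cluster_and_open(customers, c_avg, S, fac_cost):
    """Greedy clustering: repeatedly take the unassigned customer j with
    smallest average cost c_avg[j], open the cheapest facility in S[j], and
    route to it every unassigned customer j' whose set S[j'] meets S[j]."""
    assigned = {}                       # customer -> facility it is routed to
    opened = set()
    while len(assigned) < len(customers):
        # Step 1: unassigned customer with minimum average cost.
        j = min((jj for jj in customers if jj not in assigned),
                key=lambda jj: c_avg[jj])
        # Step 2: cheapest facility in S_j.
        i = min(S[j], key=lambda f: fac_cost[f])
        opened.add(i)
        # Steps 3-4: assign j and every overlapping customer to facility i.
        for jp in customers:
            if jp not in assigned and S[jp] & S[j]:
                assigned[jp] = i
    return opened, assigned

# Three customers: customers 0 and 1 share facility 1; customer 2 is separate.
customers = [0, 1, 2]
c_avg = [1.0, 3.0, 2.0]
S = [{0, 1}, {1, 2}, {3}]
fac_cost = [5.0, 2.0, 7.0, 4.0]

opened, assigned = cluster_and_open(customers, c_avg, S, fac_cost)
```

Customer 0 is picked first (smallest c̃), opening the cheap facility 1 and absorbing customer 1; customer 2's disjoint set forces a second facility.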
Let L be the set of facilities that we open in this way. We now show that the solution this
algorithm picks has reasonably limited cost.
Lemma 10.3.3 C_f(L) ≤ 2C_f(x*) and C_r(L) ≤ 6C_r(y*).
Proof: First observe that for any two customers j1 and j2 picked in Step 1, S_{j1} ∩ S_{j2} = ∅.
Consider the facility cost incurred by one execution of Steps 1 through 4. Let j be the customer
chosen in Step 1, and let i be the facility chosen in Step 2. Since x̃ is part of a feasible solution,
1 ≤ ∑_{k∈S_j} x̃_k. So f_i ≤ f_i ∑_{k∈S_j} x̃_k, and since i is chosen to minimize f_i over S_j,
f_i ≤ ∑_{k∈S_j} f_k x̃_k. Note that facility i is the only member of S_j that the algorithm opens.
Let J be the set of all customers selected in Step 1. Since the sets S_j for j ∈ J are disjoint,
summing the above over the algorithm's whole execution yields

C_f(L) ≤ ∑_{j∈J} ∑_{k∈S_j} f_k x̃_k ≤ ∑_i f_i x̃_i = C_f(x̃) ≤ 2C_f(x*).

Consider now the routing cost C_r, and let C_r(j) denote the cost of routing customer j in L.
If j was picked in Step 1, then its routing cost is c(i, j) for some facility i ∈ S_j, so C_r(j) ≤ 2c̃_j.
Now suppose instead that j′ was not picked in Step 1. By the algorithm, there is some j that was
picked in Step 1 such that S_j ∩ S_{j′} ≠ ∅. Let i′ be a facility in this intersection, and let i be the
facility to which customers j and j′ are routed. Now, at long last, we use the fact that c forms a
metric: C_r(j′) ≤ c(i′, j′) + c(i′, j) + c(i, j). Because i′ ∈ S_{j′}, we have c(i′, j′) ≤ 2c̃_{j′}; and
because both i′ and i lie in S_j, we have c(i′, j) + c(i, j) ≤ 4c̃_j. Customer j′ was not picked in
Step 1 while j was, so c̃_j ≤ c̃_{j′}, and thus C_r(j′) ≤ 6c̃_{j′}.
Finally, c̃_j is exactly the routing cost of customer j in the LP solution y*. Summing over all
customers, C_r(L) ≤ 6C_r(y*).
This lemma yields the following as a corollary:
Theorem 10.3.4 This algorithm is a 6-approximation to Facility Location.
Notice that, in the preceding construction, we picked S_j to be all i such that c(i, j) ≤ 2c̃_j.
The constant 2 is not actually optimal for this algorithm. Suppose we instead take
S_j = {i : c(i, j) ≤ (1/α)c̃_j} for a parameter α ∈ (0, 1); the construction above corresponds to
α = 1/2. Redoing the above arithmetic, we find that C_f(L) ≤ (1/(1 − α))C_f(x*) and that
C_r(L) ≤ (3/α)C_r(y*). Thus, setting α = 3/4 instead of 1/2 yields a 4-approximation. Choosing α
based on the actual computed values of C_f(x*) and C_r(y*) would give a somewhat better
approximation.
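The trade-off between the two bounds can be checked numerically; a quick sketch assuming the bounds 1/(1 − α) and 3/α derived above:

```python
# Facility bound 1/(1 - alpha) and routing bound 3/alpha: the worse of the
# two is minimized where they cross, which happens at alpha = 3/4.
def worst_bound(alpha):
    return max(1.0 / (1.0 - alpha), 3.0 / alpha)

# Grid search over alpha in (0, 1).
best_alpha = min((a / 1000.0 for a in range(1, 1000)), key=worst_bound)

# At alpha = 3/4 both bounds equal 4, giving the 4-approximation.
assert worst_bound(best_alpha) == 4.0
```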
Note that the integrality gap of the facility location LP is actually quite a bit smaller than 4. There
are several better rounding algorithms known based on the same lower bound that lead to improved
approximations.
