
CS787: Advanced Algorithms

Lecture 5 addendum: LP Duality

5.5.1  LP Duality

The motivation behind using an LP dual is that it provides a lower bound on the value of the optimal LP solution (for minimization problems, and an upper bound for maximization problems). For instance, consider the following LP.

    Minimize 7x + 3y
    subject to  x + y ≥ 2
                3x + y ≥ 4
                x, y ≥ 0

By inspection we can obtain a solution x = 1, y = 1, with an objective function value of 10. Can we prove that this is optimal? Consider the following approach. If we multiply the constraints by some values and add them such that the coefficients of x and y are at most 7 and 3 respectively, we get a lower bound on the solution. For instance, if we add the two constraints, we get 4x + 2y ≥ 6, and since x, y ≥ 0, we have 7x + 3y ≥ 4x + 2y ≥ 6. So, 6 is a lower bound on the optimal solution. Likewise, if we multiply the first constraint by 1 and the second constraint by 2, we get 7x + 3y = (x + y) + 2(3x + y) ≥ 2 + 2(4) = 10. Hence, 10 is a lower bound on the solution, and (x = 1, y = 1), with an objective function value of 10, is an optimal solution.

LP duality generalizes this approach to arbitrary problems. The idea is to find the optimal multipliers for the constraints so as to obtain the tightest bound possible. We can express this problem of finding the best multipliers as another LP; this is called the dual LP. The variables in this LP correspond to the constraints of the original (primal) LP. Each constraint in the dual LP refers to one variable of the primal LP and states that the weighted sum of the coefficients corresponding to that variable should be no more than the coefficient of the variable in the objective function. In the above example, suppose that the multipliers for the two constraints are α and β respectively; then we obtain the following dual constraints corresponding to the variables x and y respectively.

    α + 3β ≤ 7
    α + β ≤ 3

Subject to these constraints, our goal is to maximize the sum 2α + 4β so as to obtain the best lower bound on the optimal value of the primal LP. This gives us the dual of the above LP.
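The certificate argument above can be checked mechanically. Here is a minimal sketch in plain Python; the helper names are illustrative, not from the lecture:

```python
# Primal LP: minimize 7x + 3y subject to x + y >= 2, 3x + y >= 4, x, y >= 0.

def is_feasible(x, y):
    """Check that (x, y) satisfies all primal constraints."""
    return x + y >= 2 and 3 * x + y >= 4 and x >= 0 and y >= 0

def objective(x, y):
    return 7 * x + 3 * y

# The candidate solution from the text.
assert is_feasible(1, 1)
assert objective(1, 1) == 10

# Multipliers (alpha, beta) = (1, 2) combine the constraints into
# 1*(x + y) + 2*(3x + y) = 7x + 3y >= 1*2 + 2*4 = 10.
# The combination is valid because the multipliers are nonnegative and
# the combined coefficient of each variable is at most its objective
# coefficient.
alpha, beta = 1, 2
assert alpha * 1 + beta * 3 <= 7   # combined coefficient of x
assert alpha * 1 + beta * 1 <= 3   # combined coefficient of y
lower_bound = alpha * 2 + beta * 4
assert lower_bound == 10           # matches objective(1, 1): optimal
```

Since the lower bound from the multipliers meets the value of a feasible point, that point must be optimal.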

5.5.2  Writing a dual LP given the primal

In the remainder we focus on primal LPs with a minimization objective, although all of our claims hold in the maximization case as well. To obtain the dual LP given a primal, we do the following.

1. For each constraint in the primal, we have a corresponding variable in the dual. (This variable is the multiplier.)

2. For each variable in the primal, we have a corresponding constraint in the dual. These constraints say that when we multiply the primal constraints by the dual variables and add them, the resulting coefficient of any primal variable should be less than or equal to the coefficient of that variable in the primal objective function.

3. The dual objective function is to maximize the sum of products of the right-hand sides of the primal constraints and the corresponding dual variables. (This maximizes the lower bound on the primal LP solutions.)

Formally, consider a minimization LP in the standard form:

    Minimize    Σ_i c_i x_i
    subject to  Σ_i A_ij x_i ≥ b_j   ∀j
                x_i ≥ 0              ∀i

Let y_j denote the multiplier corresponding to the j-th constraint. Then the dual LP is given by:

    Maximize    Σ_j b_j y_j
    subject to  Σ_j A_ij y_j ≤ c_i   ∀i
                y_j ≥ 0              ∀j

Expressed in matrix form, the primal and dual are given as below.

    Minimize c^T x            Maximize b^T y
    subject to  Ax ≥ b        subject to  A^T y ≤ c
                x ≥ 0                     y ≥ 0

Maximize bT y subject to AT y c y0

Note that the dual of a dual LP is the original primal LP. We now note the following two theorems. The proof of the strong duality theorem is beyond the scope of this class.

Theorem 5.5.2.1 (Weak LP duality theorem) If x is any primal feasible solution and y is any dual feasible solution, then c^T x ≥ b^T y.

Proof:

    Σ_i c_i x_i ≥ Σ_i (Σ_j A_ij y_j) x_i = Σ_j (Σ_i A_ij x_i) y_j ≥ Σ_j b_j y_j

The first inequality uses the dual constraints together with x_i ≥ 0, the equality swaps the order of summation, and the last inequality uses the primal constraints together with y_j ≥ 0.

Theorem 5.5.2.2 (Strong LP duality theorem) If the primal has an optimal solution x and the dual has an optimal solution y, then c^T x = b^T y; i.e., the primal and the dual have the same optimal objective function value.

In general, if the primal is unbounded (the objective function value can be made arbitrarily good), the dual is infeasible (there is no point which satisfies all its constraints). Similarly, if the dual is unbounded, the primal is infeasible. However, if both the primal and the dual are feasible (have at least one feasible point), the strong LP duality theorem says that the optimal solutions to the primal and the dual have the exact same objective function value.
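Both theorems can be illustrated on the example from Section 5.5.1. The checks below are a sketch in plain Python; the particular feasible points are chosen for illustration:

```python
# Example pair from Section 5.5.1:
#   primal: minimize 7x + 3y  s.t.  x + y >= 2, 3x + y >= 4, x, y >= 0
#   dual:   maximize 2a + 4b  s.t.  a + 3b <= 7, a + b <= 3, a, b >= 0

def primal_value(x, y):
    """Objective value at (x, y), after checking primal feasibility."""
    assert x + y >= 2 and 3 * x + y >= 4 and x >= 0 and y >= 0
    return 7 * x + 3 * y

def dual_value(a, b):
    """Objective value at (a, b), after checking dual feasibility."""
    assert a + 3 * b <= 7 and a + b <= 3 and a >= 0 and b >= 0
    return 2 * a + 4 * b

# Weak duality: every primal feasible value dominates every dual one.
assert primal_value(2, 1) >= dual_value(1, 1)   # 17 >= 6
assert primal_value(1, 1) >= dual_value(0, 2)   # 10 >= 8

# Strong duality: at the optima the two values coincide.
assert primal_value(1, 1) == dual_value(1, 2) == 10
```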

5.5.3  Max-flow min-cut theorem revisited

The max-flow min-cut theorem can be derived as a corollary of LP duality. Recall the LP for the max-flow problem:

    maximize    Σ_{v:(s,v)∈E} f_{s,v}
    subject to  Σ_{u:(u,v)∈E} f_{u,v} = Σ_{u:(v,u)∈E} f_{v,u}   ∀v ∈ V, v ≠ s, t   (Conservation)
                f_e ≤ c_e                                        ∀e ∈ E             (Capacity constraints)
                f_e ≥ 0                                          ∀e ∈ E

We can rewrite this in standard form as follows:

    maximize    Σ_{v:(s,v)∈E} f_{s,v}
    subject to  Σ_{u:(u,v)∈E} f_{u,v} − Σ_{u:(v,u)∈E} f_{v,u} = 0   ∀v ∈ V, v ≠ s, t   (Conservation)
                f_e ≤ c_e                                            ∀e ∈ E             (Capacity constraints)
                f_e ≥ 0                                              ∀e ∈ E
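Before dualizing, it may help to see the constraints concretely. Below is a sketch in Python that checks a flow against the conservation and capacity constraints; the example network and flow values are made up for illustration:

```python
# Check a flow against the constraints of the max-flow LP on a small
# example network with source 's' and sink 't'.

# Edge capacities c_e (illustrative).
capacity = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
            ('a', 't'): 2, ('b', 't'): 3}

def is_feasible_flow(flow, capacity, s='s', t='t'):
    # Capacity constraints: 0 <= f_e <= c_e for every edge e.
    if any(not 0 <= flow[e] <= capacity[e] for e in capacity):
        return False
    # Conservation: inflow equals outflow at every v other than s, t.
    nodes = {v for e in capacity for v in e} - {s, t}
    for v in nodes:
        inflow = sum(flow[(u, w)] for (u, w) in capacity if w == v)
        outflow = sum(flow[(u, w)] for (u, w) in capacity if u == v)
        if inflow != outflow:
            return False
    return True

def flow_value(flow, s='s'):
    # The LP objective: total flow leaving the source.
    return sum(f for (u, _), f in flow.items() if u == s)

flow = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
        ('a', 't'): 2, ('b', 't'): 3}
assert is_feasible_flow(flow, capacity)
assert flow_value(flow) == 5
```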

Now let us associate multipliers d_v with the first set of constraints and multipliers ℓ_e with the second set of constraints. Then, for an edge (u, v) ∈ E with u, v ≠ s, t, we get the following constraint:

    d_v − d_u + ℓ_(u,v) ≥ 0

Likewise, for edges (s, v) ∈ E and (u, t) ∈ E, we get:

    d_v + ℓ_(s,v) ≥ 1
    −d_u + ℓ_(u,t) ≥ 0

For uniformity, we set d_s = 1 and d_t = 0, so that all three kinds of constraints take the same form. Then, we get the following dual LP:

    minimize    Σ_{e∈E} c_e ℓ_e
    subject to  d_v − d_u + ℓ_(u,v) ≥ 0   ∀(u, v) ∈ E
                d_s = 1
                d_t = 0
                ℓ_e ≥ 0                   ∀e ∈ E
One way to interpret this dual LP is to think of the ℓ_e variables as defining lengths on edges. Then the dual constraints are triangle inequalities, and the d_v variables represent the distance of vertex v from the sink t under the lengths ℓ. The LP asks for lengths on edges such that the distance between s and t is at least 1 and the cost is minimized. If all the lengths ℓ_e in some feasible solution are either 0 or 1, then the solution forms an s-t cut and its cost denotes the capacity of the cut. So an optimal integral solution to this LP represents the minimum s-t cut in the graph. It turns out that all basic solutions to this LP are integral (that is, all edge lengths ℓ_e are either 0 or 1). This, along with the above discussion and the strong LP duality theorem, gives another proof of the max-flow min-cut theorem. We will prove the integrality of the min-cut LP in a future lecture or homework.
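This interpretation can be checked on a small network: take an s-t cut, set ℓ_e = 1 exactly on the edges crossing it and d_v = 1 for the vertices on the source side, and verify that this is a feasible dual solution whose cost is the cut capacity. A sketch in Python; the network and the cut are made up for illustration:

```python
# Interpret an s-t cut as a 0/1 solution of the min-cut dual LP and
# check that it satisfies all the dual constraints.

capacity = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
            ('a', 't'): 2, ('b', 't'): 3}

# An s-t cut, given as the set of vertices on the source side.
S = {'s', 'a'}

# Distance labels: d_v = 1 on the source side, 0 on the sink side,
# so d_s = 1 and d_t = 0 as the dual LP requires.
d = {v: 1 if v in S else 0 for e in capacity for v in e}

# Lengths: ell_e = 1 exactly on the edges crossing the cut.
ell = {(u, v): 1 if u in S and v not in S else 0 for (u, v) in capacity}

# Dual feasibility: d_v - d_u + ell_(u,v) >= 0 for every edge (u, v).
assert all(d[v] - d[u] + ell[(u, v)] >= 0 for (u, v) in capacity)
assert d['s'] == 1 and d['t'] == 0

# The dual objective equals the capacity of the cut: edges (s,b),
# (a,b), and (a,t) cross it, with capacities 2 + 1 + 2.
cut_capacity = sum(capacity[e] * ell[e] for e in capacity)
assert cut_capacity == 5
```

By strong duality and the integrality of basic solutions noted above, the smallest such cut capacity equals the maximum flow value.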
