Max Flow Min Cut

The document discusses linear programming and algorithms for solving the maximum flow problem in networks. It describes how the maximum flow problem can be formulated as a linear program and solved using the simplex algorithm. It also presents a direct maximum flow algorithm and proves that it finds an optimal flow by relating the maximum flow to minimum cut capacities in a network.

S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani    211

The standard-form LP on the left is converted to equational form on the right by introducing slack variables s1, s2, s3:

    max x1 + 6x2              min −x1 − 6x2
    x1 ≤ 200                  x1 + s1 = 200
    x2 ≤ 300                  x2 + s2 = 300
    x1 + x2 ≤ 400             x1 + x2 + s3 = 400
    x1, x2 ≥ 0                x1, x2, s1, s2, s3 ≥ 0

The original was also in a useful form: maximize an objective subject to certain inequalities. Any LP can likewise be recast in this way, using the reductions given earlier.

Matrix-vector notation
A linear function like x1 + 6x2 can be written as the dot product of two vectors

    c = (1, 6)  and  x = (x1, x2),

denoted c · x or c^T x. Similarly, linear constraints can be compiled into matrix-vector form:

    [ 1  0 ] [x1]     [200]
    [ 0  1 ] [x2]  ≤  [300]
    [ 1  1 ]          [400]
       A      x         b

Here each row of matrix A corresponds to one constraint: its dot product with x is at most the value in the corresponding row of b. In other words, if the rows of A are the vectors a1, . . . , am, then the statement Ax ≤ b is equivalent to ai · x ≤ bi for all i = 1, . . . , m. With these notational conveniences, a generic LP can be expressed simply as

    max c^T x
    Ax ≤ b
    x ≥ 0.
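For intuition, the small LP above can be solved by brute force: the optimum of an LP (when it exists) is attained at a vertex of the feasible region, so for two variables it suffices to enumerate intersections of constraint boundaries. This is a minimal sketch of that idea, not the simplex algorithm, and it is workable only for tiny instances:

```python
# Sketch: solve the two-variable LP  max c.x  s.t.  Ax <= b, x >= 0
# by enumerating vertices of the feasible region (intersections of
# pairs of constraint boundaries). Not simplex; illustration only.
from itertools import combinations

c = [1, 6]                       # objective: maximize x1 + 6*x2
A = [[1, 0], [0, 1], [1, 1],     # x1 <= 200, x2 <= 300, x1 + x2 <= 400,
     [-1, 0], [0, -1]]           # nonnegativity as -x1 <= 0, -x2 <= 0
b = [200, 300, 400, 0, 0]

def feasible(x):
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi + 1e-9
               for row, bi in zip(A, b))

best, best_val = None, float("-inf")
for (r1, b1), (r2, b2) in combinations(zip(A, b), 2):
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        continue                 # parallel boundaries: no vertex
    # Solve the 2x2 system r1.x = b1, r2.x = b2 by Cramer's rule.
    x = [(b1 * r2[1] - r1[1] * b2) / det,
         (r1[0] * b2 - b1 * r2[0]) / det]
    if feasible(x):
        val = c[0] * x[0] + c[1] * x[1]
        if val > best_val:
            best, best_val = x, val

print(best, best_val)            # optimum: x = (100, 300), value 1900
```

The enumeration confirms that the optimum sits at the vertex where x2 = 300 meets x1 + x2 = 400.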

7.2 Flows in networks


7.2.1 Shipping oil
Figure 7.4(a) shows a directed graph representing a network of pipelines along which oil can be sent. The goal is to ship as much oil as possible from the source s to the sink t. Each pipeline has a maximum capacity it can handle, and there are no opportunities for storing oil

Figure 7.4 (a) A network with edge capacities. (b) A flow in the network.
en route. Figure 7.4(b) shows a possible flow from s to t, which ships 7 units in all. Is this the best that can be done?

7.2.2 Maximizing flow

The networks we are dealing with consist of a directed graph G = (V, E); two special nodes s, t ∈ V, which are, respectively, a source and sink of G; and capacities c_e > 0 on the edges. We would like to send as much oil as possible from s to t without exceeding the capacities of any of the edges. A particular shipping scheme is called a flow and consists of a variable f_e for each edge e of the network, satisfying the following two properties:

1. It doesn't violate edge capacities: 0 ≤ f_e ≤ c_e for all e ∈ E.

2. For all nodes u except s and t, the amount of flow entering u equals the amount leaving u:

       Σ_{(w,u)∈E} f_wu = Σ_{(u,z)∈E} f_uz.

   In other words, flow is conserved.

The size of a flow is the total quantity sent from s to t and, by the conservation principle, is equal to the quantity leaving s:

    size(f) = Σ_{(s,u)∈E} f_su.

In short, our goal is to assign values to {f_e : e ∈ E} that will satisfy a set of linear constraints and maximize a linear objective function. But this is a linear program! The maximum-flow problem reduces to linear programming. For example, for the network of Figure 7.4 the LP has 11 variables, one per edge. It seeks to maximize f_sa + f_sb + f_sc subject to a total of 27 constraints: 11 for nonnegativity (such as f_sa ≥ 0), 11 for capacity (such as f_sa ≤ 3), and 5 for flow conservation (one for each node of the graph other than s and t, such as f_sc + f_dc = f_ce). Simplex would take no time at all to correctly solve the problem and to confirm that, in our example, a flow of 7 is in fact optimal.
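The two defining properties of a flow are easy to check mechanically. The sketch below verifies capacity and conservation constraints and computes the size of a flow; the small network used here is hypothetical, not the oil network of Figure 7.4:

```python
# Sketch: check that an assignment f is a valid flow (capacity respected
# on every edge, flow conserved at every node other than s and t) and
# compute its size. The example network is hypothetical.
def is_flow(capacity, f, s, t):
    # capacity, f: dicts mapping an edge (u, v) to a number
    if any(not (0 <= f[e] <= capacity[e]) for e in capacity):
        return False
    internal = {u for e in capacity for u in e} - {s, t}
    for u in internal:
        inflow = sum(f[(w, v)] for (w, v) in f if v == u)
        outflow = sum(f[(v, z)] for (v, z) in f if v == u)
        if inflow != outflow:
            return False                 # conservation violated at u
    return True

def size(f, s):
    return sum(f[(u, v)] for (u, v) in f if u == s)

cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 't'): 2, ('b', 't'): 3,
       ('a', 'b'): 1}
flow = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 't'): 2, ('b', 't'): 3,
        ('a', 'b'): 1}
print(is_flow(cap, flow, 's', 't'), size(flow, 's'))   # True 5
```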


Figure 7.5 An illustration of the max-flow algorithm. (a) A toy network. (b) The first path chosen. (c) The second path chosen. (d) The final flow. (e) We could have chosen this path first. (f) In which case, we would have to allow this second path.
7.2.3 A closer look at the algorithm

All we know so far of the simplex algorithm is the vague geometric intuition that it keeps making local moves on the surface of a convex feasible region, successively improving the objective function until it finally reaches the optimal solution. Once we have studied it in more detail (Section 7.6), we will be in a position to understand exactly how it handles flow LPs, which is useful as a source of inspiration for designing direct max-flow algorithms. It turns out that in fact the behavior of simplex has an elementary interpretation:

    Start with zero flow.
    Repeat: choose an appropriate path from s to t, and increase flow along the edges of this path as much as possible.

Figure 7.5(a)-(d) shows a small example in which simplex halts after two iterations. The final flow has size 2, which is easily seen to be optimal.


There is just one complication. What if we had initially chosen a different path, the one in Figure 7.5(e)? This gives only one unit of flow and yet seems to block all other paths. Simplex gets around this problem by also allowing paths to cancel existing flow. In this particular case, it would subsequently choose the path of Figure 7.5(f). Edge (b, a) of this path isn't in the original network and has the effect of canceling flow previously assigned to edge (a, b). To summarize, in each iteration simplex looks for an s-t path whose edges (u, v) can be of two types:

1. (u, v) is in the original network, and is not yet at full capacity.

2. The reverse edge (v, u) is in the original network, and there is some flow along it.

If the current flow is f, then in the first case, edge (u, v) can handle up to c_uv − f_uv additional units of flow, and in the second case, up to f_vu additional units (canceling all or part of the existing flow on (v, u)). These flow-increasing opportunities can be captured in a residual network G^f = (V, E^f), which has exactly the two types of edges listed, with residual capacities c^f:

    c^f_uv = c_uv − f_uv   if (u, v) ∈ E and f_uv < c_uv
             f_vu          if (v, u) ∈ E and f_vu > 0

Thus we can equivalently think of simplex as choosing an s-t path in the residual network. By simulating the behavior of simplex, we get a direct algorithm for solving max-flow. It proceeds in iterations, each time explicitly constructing G^f, finding a suitable s-t path in G^f by using, say, a linear-time breadth-first search, and halting if there is no longer any such path along which flow can be increased. Figure 7.6 illustrates the algorithm on our oil example.
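The iteration just described can be sketched directly in code: repeatedly run a breadth-first search in the residual network, and augment along the path found (choosing shortest paths is the Edmonds-Karp rule mentioned later in the efficiency discussion). This is a minimal sketch, with networks represented as edge-to-capacity dicts; the toy example at the bottom is a reconstruction of Figure 7.5's network with all capacities 1:

```python
# Sketch of the direct max-flow algorithm: repeatedly find an s-t path
# in the residual network by breadth-first search and push flow along it.
from collections import deque

def max_flow(capacity, s, t):
    """capacity: dict (u, v) -> c_uv > 0. Returns (flow value, flow dict)."""
    f = {e: 0 for e in capacity}
    nodes = {u for e in capacity for u in e}

    def residual(u, v):
        # leftover forward capacity plus cancellable reverse flow
        return (capacity.get((u, v), 0) - f.get((u, v), 0)
                + f.get((v, u), 0))

    while True:
        # breadth-first search for an s-t path in the residual network
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break                          # no augmenting path left
        path, v = [], t
        while parent[v] is not None:       # walk back from t to s
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual(u, v) for (u, v) in path)
        for (u, v) in path:
            # fill the forward edge first, then cancel reverse flow
            forward = min(push, capacity.get((u, v), 0) - f.get((u, v), 0))
            if forward:
                f[(u, v)] += forward
            if push - forward:
                f[(v, u)] -= push - forward
    value = (sum(f[e] for e in capacity if e[0] == s)
             - sum(f[e] for e in capacity if e[1] == s))
    return value, f

# Toy network in the spirit of Figure 7.5 (reconstructed; capacities 1):
cap = {('s', 'a'): 1, ('s', 'b'): 1, ('a', 'b'): 1,
       ('a', 't'): 1, ('b', 't'): 1}
value, flow = max_flow(cap, 's', 't')
print(value)                               # 2, as in Figure 7.5(d)
```

Note how the `residual` helper realizes exactly the two edge types above: leftover capacity on a forward edge, plus existing flow on a reverse edge that can be canceled.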

7.2.4 A certificate of optimality

Now for a truly remarkable fact: not only does simplex correctly compute a maximum flow, but it also generates a short proof of the optimality of this flow! Let's see an example of what this means. Partition the nodes of the oil network (Figure 7.4) into two groups, L = {s, a, b} and R = {c, d, e, t}:

[Figure: the oil network redrawn with L = {s, a, b} on the left and R = {c, d, e, t} on the right; the edges crossing from L to R have total capacity 7.]
Any oil transmitted must pass from L to R. Therefore, no ow can possibly exceed the total capacity of the edges from L to R, which is 7. But this means that the ow we found earlier, of size 7, must be optimal!

S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani

215

More generally, an (s, t)-cut partitions the vertices into two disjoint groups L and R such that s is in L and t is in R. Its capacity is the total capacity of the edges from L to R, and as argued previously, is an upper bound on any flow:

    Pick any flow f and any (s, t)-cut (L, R). Then size(f) ≤ capacity(L, R).
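Computing the capacity of a cut is a one-liner, which makes the bound easy to experiment with. In this sketch the small network is hypothetical (not the oil network); its maximum flow has value 5, so the cut {s, a} is tight while the cut {s} is loose:

```python
# Sketch: capacity of an (s, t)-cut (L, R) is the total capacity of the
# edges crossing from L to R. Any flow value is at most this quantity.
def cut_capacity(capacity, L):
    return sum(c for (u, v), c in capacity.items()
               if u in L and v not in L)

# Hypothetical network; its maximum flow happens to have value 5.
cap = {('s', 'a'): 4, ('s', 'b'): 2, ('a', 't'): 2, ('b', 't'): 3,
       ('a', 'b'): 1}
print(cut_capacity(cap, {'s'}))            # 6: loose upper bound
print(cut_capacity(cap, {'s', 'a'}))       # 5: tight -- a minimum cut
print(cut_capacity(cap, {'s', 'a', 'b'}))  # 5: also tight
```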

Some cuts are large and give loose upper bounds: the cut ({s, b, c}, {a, d, e, t}) has a capacity of 19. But there is also a cut of capacity 7, which is effectively a certificate of optimality of the maximum flow. This isn't just a lucky property of our oil network; such a cut always exists.

Max-flow min-cut theorem  The size of the maximum flow in a network equals the capacity of the smallest (s, t)-cut.

Moreover, our algorithm automatically finds this cut as a by-product! Let's see why this is true. Suppose f is the final flow when the algorithm terminates. We know that node t is no longer reachable from s in the residual network G^f. Let L be the nodes that are reachable from s in G^f, and let R = V − L be the rest of the nodes. Then (L, R) is a cut in the graph G:
[Figure: the cut (L, R), with s inside L, an edge e crossing from L to R, and an edge e′ crossing from R to L.]

We claim that

    size(f) = capacity(L, R).

To see this, observe that by the way L is defined, any edge going from L to R must be at full capacity (in the current flow f), and any edge from R to L must have zero flow. (So, in the figure, f_e = c_e and f_e′ = 0.) Therefore the net flow across (L, R) is exactly the capacity of the cut.
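The argument above is constructive: given a maximum flow, the set L of nodes reachable from s in the residual network is one side of a minimum cut. A minimal sketch, reusing the dict representation from before (the example network and flow are hypothetical):

```python
# Sketch: recover a minimum cut from a maximum flow f by taking L to be
# the nodes reachable from s in the residual network.
from collections import deque

def min_cut_side(capacity, f, s):
    nodes = {u for e in capacity for u in e}

    def residual(u, v):
        # leftover forward capacity plus cancellable reverse flow
        return (capacity.get((u, v), 0) - f.get((u, v), 0)
                + f.get((v, u), 0))

    L, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v in nodes - L:
            if residual(u, v) > 0:
                L.add(v)
                queue.append(v)
    return L

# Hypothetical network with a maximum flow of value 5 already in hand:
cap = {('s', 'a'): 4, ('s', 'b'): 2, ('a', 't'): 2, ('b', 't'): 3,
       ('a', 'b'): 1}
flow = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 't'): 2, ('b', 't'): 3,
        ('a', 'b'): 1}
print(min_cut_side(cap, flow, 's'))   # {'s', 'a'}: its out-edges are full
```

Here the edges leaving L = {s, a} carry flow 2 + 2 + 1 = 5, matching both the flow value and the cut capacity, exactly as the proof requires.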

7.2.5 Efficiency

Each iteration of our maximum-flow algorithm is efficient, requiring O(|E|) time if a depth-first or breadth-first search is used to find an s-t path. But how many iterations are there? Suppose all edges in the original network have integer capacities ≤ C. Then an inductive argument shows that on each iteration of the algorithm, the flow is always an integer and increases by an integer amount. Therefore, since the maximum flow is at most C|E| (why?), it follows that the number of iterations is at most this much. But this is hardly a reassuring bound: what if C is in the millions? We examine this issue further in Exercise 7.31. It turns out that it is indeed possible to construct bad examples in which the number of iterations is proportional to C, if s-t paths are not carefully chosen. However, if paths are chosen in a sensible manner, in particular by


using a breadth-first search, which finds the path with the fewest edges, then the number of iterations is at most O(|V| · |E|), no matter what the capacities are. This latter bound gives an overall running time of O(|V| · |E|^2) for maximum flow.


Figure 7.6 The max-flow algorithm applied to the network of Figure 7.4. At each iteration, the current flow is shown on the left and the residual network on the right. The paths chosen are shown in bold.
Figure 7.6, continued.