Unit IV

The document discusses iterative improvement as an algorithm design technique for optimization problems, emphasizing the importance of local search and the distinction between local and global optima. It covers linear programming (LP), including the simplex method for solving LP problems, and the maximum flow problem in transportation networks, detailing the Ford-Fulkerson method for finding maximum flow. Key concepts such as basic feasible solutions, flow-augmenting paths, and the performance of the simplex method are also highlighted.

Iterative Improvement

Algorithm design technique for solving optimization problems

 Start with a feasible solution

 Repeat the following step until no improvement can be found:
• change the current feasible solution to a feasible solution with a better
value of the objective function

 Return the last feasible solution as optimal

Note: Typically, a change in a current solution is “small” (local search)

Major difficulty: local optimum vs. global optimum

1
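A minimal Python sketch of this template, under the assumption that a neighbors(s) function generates the feasible solutions reachable by a "small" change of s and value(s) is the objective function to maximize; the toy objective at the end is only for illustration.

```python
# A minimal sketch of the iterative-improvement template above (assumptions:
# neighbors(s) returns the feasible solutions reachable by a "small" change of s,
# and value(s) is the objective function to be maximized).
def iterative_improvement(start, neighbors, value):
    current = start                                   # start with a feasible solution
    while True:
        better = [s for s in neighbors(current) if value(s) > value(current)]
        if not better:                                # no improving change can be found
            return current                            # possibly only a local optimum
        current = better[0]                           # move to a better feasible solution

# toy illustration: maximize f(x) = -(x - 3)^2 over the integers with +/-1 moves
print(iterative_improvement(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2))  # 3
```

Note that the sketch stops as soon as no neighbor improves the objective, which is exactly where the local-vs-global difficulty arises.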
Important Examples
 simplex method

 Ford-Fulkerson algorithm for maximum flow problem

 maximum matching of graph vertices

 Gale-Shapley algorithm for the stable marriage problem

 local search heuristics

2
Linear Programming

A linear programming (LP) problem is to optimize a linear function
of several variables subject to linear constraints:

maximize (or minimize)  c1 x1 + ... + cn xn
subject to              ai1 x1 + ... + ain xn ≤ (or ≥ or =) bi ,  i = 1,...,m
                        x1 ≥ 0, ... , xn ≥ 0

The function z = c1 x1 + ... + cn xn is called the objective function;
the constraints x1 ≥ 0, ... , xn ≥ 0 are called the nonnegativity constraints

3
Example
maximize   3x + 5y
subject to x + y ≤ 4
           x + 3y ≤ 6
           x ≥ 0, y ≥ 0

The feasible region is the set of points defined by the constraints.

[Figure: the feasible region bounded by the lines x + y = 4 and x + 3y = 6 and the
coordinate axes, with extreme points (0, 0), (4, 0), (3, 1), and (0, 2)]
4
Geometric solution
maximize   3x + 5y
subject to x + y ≤ 4
           x + 3y ≤ 6
           x ≥ 0, y ≥ 0

[Figure: level lines 3x + 5y = 10, 3x + 5y = 14, and 3x + 5y = 20 drawn over the
feasible region with extreme points (0, 0), (4, 0), (3, 1), and (0, 2)]

Optimal solution: x = 3, y = 1

Extreme Point Theorem: Any LP problem with a nonempty bounded feasible region has
an optimal solution; moreover, an optimal solution can always be found at an
extreme point of the problem's feasible region.
5
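A quick numerical cross-check of this example, assuming SciPy is available; scipy.optimize.linprog minimizes, so the objective 3x + 5y is negated.

```python
# A quick numerical cross-check of the example, assuming SciPy is installed.
# linprog minimizes, so the objective 3x + 5y is negated.
from scipy.optimize import linprog

c = [-3, -5]                       # maximize 3x + 5y  <=>  minimize -3x - 5y
A_ub = [[1, 1], [1, 3]]            # x + y <= 4,  x + 3y <= 6
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)             # expected: approximately [3. 1.] and 14.0
```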
3 possible outcomes in solving an LP problem

 has a finite optimal solution, which may not be unique

 unbounded: the objective function of a maximization (minimization)
LP problem is unbounded from above (below) on its feasible region

 infeasible: there are no points satisfying all the constraints,
i.e., the constraints are contradictory

6
The Simplex Method

 The classic method for solving LP problems;


one of the most important algorithms ever invented

 Invented by George Dantzig in 1947

 Based on the iterative improvement idea:


Generates a sequence of adjacent points of the
problem’s feasible region with improving values of the
objective function until no further improvement is
possible

7
Standard form of LP problem
 must be a maximization problem
 all constraints (except the nonnegativity constraints) must be
in the form of linear equations
 all the variables must be required to be nonnegative

Thus, the general linear programming problem in standard form with m
constraints and n unknowns (n ≥ m) is

maximize   c1 x1 + ... + cn xn
subject to ai1 x1 + ... + ain xn = bi ,  i = 1,...,m
           x1 ≥ 0, ... , xn ≥ 0

Every LP problem can be represented in such form


8
Example

Original problem:                     Standard form:
maximize   3x + 5y                    maximize   3x + 5y + 0u + 0v
subject to x + y  ≤ 4                 subject to x + y  + u     = 4
           x + 3y ≤ 6                            x + 3y     + v = 6
           x ≥ 0, y ≥ 0                          x ≥ 0, y ≥ 0, u ≥ 0, v ≥ 0

Variables u and v, transforming inequality constraints into equality
constraints, are called slack variables

9
Basic feasible solutions
A basic solution to a system of m linear equations in n unknowns
(n ≥ m) is obtained by setting n – m variables to 0 and solving the
resulting system to get the values of the other m variables. The
variables set to 0 are called nonbasic; the variables obtained by
solving the system are called basic.
A basic solution is called feasible if all its (basic) variables are
nonnegative.

Example    x + y  + u     = 4
           x + 3y     + v = 6

(0, 0, 4, 6) is a basic feasible solution
(x, y are nonbasic; u, v are basic)

There is a 1-1 correspondence between extreme points of LP’s


feasible region and its basic feasible solutions.
10
Simplex Tableau
maximize z = 3x + 5y + 0u + 0v
subject to x+ y+ u =4
x + 3y + v =6
x≥0, y≥0, u≥0, v≥0

                        x    y    u    v
  basic        u        1    1    1    0  |  4        basic feasible solution
  variables    v        1    3    0    1  |  6        (0, 0, 4, 6)
  objective row        -3   -5    0    0  |  0        value of z at (0, 0, 4, 6)

(the objective row holds the objective coefficients with their signs reversed;
its rightmost entry is the value of z at the current basic feasible solution)

11
Outline of the Simplex Method
Step 0 [Initialization] Present a given LP problem in standard form and
set up initial tableau.
Step 1 [Optimality test] If all entries in the objective row are nonnegative —
stop: the tableau represents an optimal
solution.
Step 2 [Find entering variable] Select (the most) negative entry in the
objective row. Mark its column to indicate the
entering variable and the pivot column.

Step 3 [Find departing variable] For each positive entry in the pivot column,
calculate the θ-ratio by dividing that row's entry in the rightmost column by
its entry in the pivot column. (If there are no positive entries in the pivot
column — stop: the problem is unbounded.) Find the row with the smallest
θ-ratio, mark this row to indicate the departing variable and the pivot row.

Step 4 [Form the next tableau] Divide all the entries in the pivot row by its
entry in the pivot column. Subtract from each of the other rows, including the
objective row, the new pivot row multiplied by the entry in the pivot column of
the row in question. Replace the label of the pivot row by the name of the
entering variable and go back to Step 1.
12
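A compact Python sketch of the outlined tableau steps, under the assumption that the problem is already in standard form, the slack variables form the initial basis, and no anti-cycling rule is applied; exact fractions are used to avoid rounding, and the example on the next slide serves as a check.

```python
# A compact sketch of the tableau steps outlined above (assumptions: the problem
# is already in standard form, the slack variables form the initial basis, and no
# anti-cycling rule is applied). Exact fractions avoid rounding issues.
from fractions import Fraction

def simplex(tableau, basis):
    """tableau: list of rows [coefficients..., rhs]; the last row is the objective
    row (objective coefficients with reversed signs, value of z in the last cell).
    basis: column index of the basic variable of each constraint row."""
    while True:
        obj = tableau[-1]
        # Steps 1-2: optimality test and entering variable (most negative entry)
        pivot_col = min(range(len(obj) - 1), key=lambda j: obj[j])
        if obj[pivot_col] >= 0:
            return tableau, basis
        # Step 3: departing variable via the smallest theta-ratio
        rows = [i for i in range(len(tableau) - 1) if tableau[i][pivot_col] > 0]
        if not rows:
            raise ValueError("the problem is unbounded")
        pivot_row = min(rows, key=lambda i: tableau[i][-1] / tableau[i][pivot_col])
        # Step 4: form the next tableau
        pr = [e / tableau[pivot_row][pivot_col] for e in tableau[pivot_row]]
        tableau[pivot_row] = pr
        for i, row in enumerate(tableau):
            if i != pivot_row:
                factor = row[pivot_col]
                tableau[i] = [e - factor * p for e, p in zip(row, pr)]
        basis[pivot_row] = pivot_col

# the example that follows: maximize 3x + 5y with slack variables u, v
F = Fraction
t = [[F(1), F(1), F(1), F(0), F(4)],
     [F(1), F(3), F(0), F(1), F(6)],
     [F(-3), F(-5), F(0), F(0), F(0)]]
final, basis = simplex(t, [2, 3])
print(basis, final[-1][-1])        # expected: basis [0, 1] (x and y) and z = 14
```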
Example of Simplex Method Application

maximize z = 3x + 5y + 0u + 0v
subject to x+ y+ u =4
x + 3y + v =6
x≥0, y≥0, u≥0, v≥0

Tableau 1 (basic feasible solution (0, 0, 4, 6), z = 0):
         x     y     u     v
  u      1     1     1     0   |  4
  v      1     3     0     1   |  6
        -3    -5     0     0   |  0

Tableau 2 (basic feasible solution (0, 2, 2, 0), z = 10):
         x     y     u     v
  u     2/3    0     1   -1/3  |  2
  y     1/3    1     0    1/3  |  2
       -4/3    0     0    5/3  | 10

Tableau 3 (basic feasible solution (3, 1, 0, 0), z = 14):
         x     y     u     v
  x      1     0    3/2  -1/2  |  3
  y      0     1   -1/2   1/2  |  1
         0     0     2     1   | 14
13
Notes on the Simplex Method
 Finding an initial basic feasible solution may pose a problem

 Theoretical possibility of cycling

 Typical number of iterations is between m and 3m, where m is

the number of equality constraints in the standard form

 Worst-case efficiency is exponential

 More recent interior-point algorithms such as Karmarkar’s


algorithm (1984) have polynomial worst-case efficiency and
have performed competitively with the simplex method in
empirical tests
14
Maximum Flow Problem
Problem of maximizing the flow of a material through a
transportation network (e.g., pipeline system, communications
or transportation networks)

Formally represented by a connected weighted digraph with n


vertices numbered from 1 to n with the following properties:
• contains exactly one vertex with no entering edges, called
the source (numbered 1)
• contains exactly one vertex with no leaving edges, called
the sink (numbered n)
• has a positive integer weight uij on each directed edge (i,j),
called the edge capacity, indicating the upper bound on
the amount of the material that can be sent from i to j
through this edge

15
Example of Flow Network

[Figure: a flow network with source 1 and sink 6 and edge capacities
 1→2: 2,  1→4: 3,  2→3: 5,  2→5: 3,  4→3: 1,  3→6: 2,  5→6: 4]
16
Definition of a Flow

A flow is an assignment of real numbers xij to edges (i,j) of a


given network that satisfy the following:
 flow-conservation requirements
The total amount of material entering an intermediate
vertex must be equal to the total amount of the material
leaving the vertex
∑_{j: (j,i) ∈ E} xji  =  ∑_{j: (i,j) ∈ E} xij      for i = 2, 3, …, n-1

 capacity constraints
0 ≤ xij ≤ uij   for every edge (i,j) ∈ E

17
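A small Python sketch of these two requirements, assuming the network is given as a dict mapping each edge (i, j) to its capacity and the flow as a dict of the same shape; this representation is an assumption, not part of the slides.

```python
# A small sketch of the flow requirements above (assumption: the network is a dict
# mapping each edge (i, j) to its capacity u_ij, and the flow is a dict (i, j) -> x_ij).
def is_feasible_flow(capacity, flow, source, sink):
    # capacity constraints: 0 <= x_ij <= u_ij on every edge
    if any(not (0 <= flow.get(e, 0) <= u) for e, u in capacity.items()):
        return False
    # flow conservation: inflow equals outflow at every intermediate vertex
    vertices = {v for edge in capacity for v in edge}
    for i in vertices - {source, sink}:
        inflow = sum(x for (a, b), x in flow.items() if b == i)
        outflow = sum(x for (a, b), x in flow.items() if a == i)
        if inflow != outflow:
            return False
    return True
```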
Flow value and Maximum Flow Problem

Since no material can be lost or added while going through the
intermediate vertices of the network, the total amount of the
material leaving the source must end up at the sink:
∑_{j: (1,j) ∈ E} x1j  =  ∑_{j: (j,n) ∈ E} xjn

The value of the flow is defined as the total outflow from the
source (= the total inflow into the sink).

The maximum flow problem is to find a flow of the largest


value (maximum flow) for a given network.

18
Maximum-Flow Problem as LP problem

Maximize   v = ∑_{j: (1,j) ∈ E} x1j

subject to ∑_{j: (j,i) ∈ E} xji − ∑_{j: (i,j) ∈ E} xij = 0    for i = 2, 3, …, n-1

           0 ≤ xij ≤ uij    for every edge (i,j) ∈ E

19
Augmenting Path (Ford-Fulkerson) Method
 Start with the zero flow (xij = 0 for every edge)

 On each iteration, try to find a flow-augmenting path from
source to sink, which is a path along which some additional
flow can be sent

 If a flow-augmenting path is found, adjust the flow along


the edges of this path to get a flow of increased value and
try again

 If no flow-augmenting path is found, the current flow is


maximum

20
Example 1

[Figure: the network with zero flow on every edge; labels show xij/uij:
 1→2: 0/2,  1→4: 0/3,  2→3: 0/5,  2→5: 0/3,  4→3: 0/1,  3→6: 0/2,  5→6: 0/4]

Augmenting path: 1→2→3→6
21
Example 1 (cont.)

[Figure: the flow after the first augmentation:
 1→2: 2/2,  2→3: 2/5,  3→6: 2/2,  1→4: 0/3,  2→5: 0/3,  4→3: 0/1,  5→6: 0/4]

Augmenting path: 1→4→3←2→5→6
22
Example 1 (maximum flow)

[Figure: the maximum flow:
 1→2: 2/2,  1→4: 1/3,  2→3: 1/5,  2→5: 1/3,  4→3: 1/1,  3→6: 2/2,  5→6: 1/4]

Maximum flow value = 3
23
Finding a flow-augmenting path
To find a flow-augmenting path for a flow x, consider paths from
source to sink in the underlying undirected graph in which any
two consecutive vertices i,j are either:
• connected by a directed edge (i to j) with some positive
unused capacity rij = uij – xij
– known as forward edge ( → )
OR
• connected by a directed edge (j to i) with positive flow xji
– known as backward edge ( ← )
If a flow-augmenting path is found, the current flow can be
increased by r units by increasing xij by r on each forward edge
and decreasing xji by r on each backward edge, where

r = min {rij on all forward edges, xji on all backward edges}

24
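A Python sketch of this augmentation step, assuming the path is given as a vertex list with a parallel list of edge directions ('+' for forward, '-' for backward); the dict-based flow and capacity representation is an assumption, not part of the slides.

```python
# A sketch of the augmentation step above (assumptions: flow and capacity are dicts
# keyed by edge (i, j); the path is a vertex list plus a parallel list of directions,
# '+' for a forward edge and '-' for a backward edge).
def augment(flow, capacity, path, directions):
    # r = min{ r_ij on all forward edges, x_ji on all backward edges }
    residuals = []
    for (i, j), d in zip(zip(path, path[1:]), directions):
        if d == '+':                                   # forward edge i -> j
            residuals.append(capacity[(i, j)] - flow.get((i, j), 0))
        else:                                          # backward edge j -> i
            residuals.append(flow[(j, i)])
    r = min(residuals)
    # increase x_ij on forward edges, decrease x_ji on backward edges
    for (i, j), d in zip(zip(path, path[1:]), directions):
        if d == '+':
            flow[(i, j)] = flow.get((i, j), 0) + r
        else:
            flow[(j, i)] -= r
    return r

# e.g., the second augmentation of Example 1, along 1 -> 4 -> 3 <- 2 -> 5 -> 6:
# augment(flow, capacity, [1, 4, 3, 2, 5, 6], ['+', '+', '-', '+', '+'])
```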
Finding a flow-augmenting path (cont.)

 Assuming the edge capacities are integers, r is a positive


integer

 On each iteration, the flow value increases by at least 1

 Maximum value is bounded by the sum of the capacities of


the edges leaving the source; hence the augmenting-path
method has to stop after a finite number of iterations

 The final flow is always maximum; its value doesn't depend
on the sequence of augmenting paths used

25
Performance degeneration of the method
 The augmenting-path method doesn’t prescribe a specific
way for generating flow-augmenting paths
 Selecting a bad sequence of augmenting paths could impact
the method’s efficiency

Example 2

[Figure: source 1, sink 3; edges 1→2, 1→4, 2→3, and 4→3 each have capacity U,
 and edge 2→4 has capacity 1, where U is a large positive integer]
26
Example 2 (cont.)

[Figure: alternating between the augmenting paths 1→2→4→3 and 1→4←2→3
 increases the flow value by only 1 per iteration: v = 1, v = 2, …, v = 2U]

Requires 2U iterations to reach the maximum flow of value 2U
27
Shortest-Augmenting-Path Algorithm

Generate augmenting path with the least number of edges by


BFS as follows.
Starting at the source, perform BFS traversal by marking new
(unlabeled) vertices with two labels:
• first label – indicates the amount of additional flow that
can be brought from the source to the vertex being labeled
• second label – indicates the vertex from which the vertex
being labeled was reached, with “+” or “–” added to the
second label to indicate whether the vertex was reached
via a forward or backward edge

28
Vertex labeling
 The source is always labeled with ∞,-

 All other vertices are labeled as follows:


• If unlabeled vertex j is connected to the front vertex i of
the traversal queue by a directed edge from i to j with
positive unused capacity rij = uij –xij (forward edge),
vertex j is labeled with lj,i+, where lj = min{li, rij}

• If unlabeled vertex j is connected to the front vertex i of


the traversal queue by a directed edge from j to i with
positive flow xji (backward edge), vertex j is labeled lj,i-,
where lj = min{li, xji}

29
Vertex labeling (cont.)
 If the sink ends up being labeled, the current flow can be
augmented by the amount indicated by the sink’s first label

 The augmentation of the current flow is performed along the


augmenting path traced by following the vertex second labels
from sink to source; the current flow quantities are increased
on the forward edges and decreased on the backward edges
of this path

 If the sink remains unlabeled after the traversal queue


becomes empty, the algorithm returns the current flow as
maximum and stops

30
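A Python sketch of the shortest-augmenting-path algorithm of slides 28 to 30, using the two-part labels and a BFS queue; the dict-based network representation is an assumption, and the run on the Example 1 network is included as a check.

```python
# A sketch of the shortest-augmenting-path algorithm of slides 28-30, using the
# two-part labels and a BFS queue (assumption: the network is a dict mapping each
# edge (i, j) to its capacity; the flow starts at zero).
from collections import deque
from math import inf

def shortest_augmenting_path_maxflow(capacity, source, sink):
    flow = {e: 0 for e in capacity}
    while True:
        # BFS labeling: label[v] = (l_v, parent, '+'/'-'); the source gets (inf, None, None)
        label = {source: (inf, None, None)}
        queue = deque([source])
        while queue and sink not in label:
            i = queue.popleft()
            li = label[i][0]
            for (a, b), u in capacity.items():
                if a == i and b not in label and u - flow[(a, b)] > 0:   # forward edge
                    label[b] = (min(li, u - flow[(a, b)]), i, '+')
                    queue.append(b)
                elif b == i and a not in label and flow[(a, b)] > 0:     # backward edge
                    label[a] = (min(li, flow[(a, b)]), i, '-')
                    queue.append(a)
        if sink not in label:              # sink unlabeled: the current flow is maximum
            return flow
        # trace the augmenting path back from the sink and adjust the flow by l_sink
        r, j = label[sink][0], sink
        while j != source:
            _, i, d = label[j]
            if d == '+':
                flow[(i, j)] += r
            else:
                flow[(j, i)] -= r
            j = i

# the network of Example 1 (source 1, sink 6):
cap = {(1, 2): 2, (1, 4): 3, (2, 3): 5, (2, 5): 3, (4, 3): 1, (3, 6): 2, (5, 6): 4}
f = shortest_augmenting_path_maxflow(cap, 1, 6)
print(sum(f[(i, j)] for (i, j) in cap if i == 1))    # expected maximum flow value: 3
```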
Example: Shortest-Augmenting-Path Algorithm

[Figure: first iteration on the network of Example 1 with zero flow.
 Queue: 1 2 4 3 5 6
 Labels: 1: ∞,-   2: 2,1+   4: 3,1+   3: 2,2+   5: 2,2+   6: 2,3+]

Augment the flow by 2 (the sink's first label) along the path 1→2→3→6
31
Example (cont.)

[Figure: second iteration; current flow: 1→2: 2/2, 2→3: 2/5, 3→6: 2/2, all other edges 0.
 Queue: 1 4 3 2 5 6
 Labels: 1: ∞,-   4: 3,1+   3: 1,4+   2: 1,3-   5: 1,2+   6: 1,5+]

Augment the flow by 1 (the sink's first label) along the path 1→4→3←2→5→6
32
Example (cont.)

[Figure: third iteration; current flow: 1→2: 2/2, 1→4: 1/3, 2→3: 1/5, 2→5: 1/3,
 4→3: 1/1, 3→6: 2/2, 5→6: 1/4.
 Queue: 1 4
 Labels: 1: ∞,-   4: 2,1+]

No augmenting path (the sink is unlabeled):
the current flow is maximum
33
34
Time Efficiency
 The number of augmenting paths needed by the shortest-
augmenting-path algorithm never exceeds nm/2, where n
and m are the number of vertices and edges, respectively

 Since the time required to find shortest augmenting path by


breadth-first search is in O(n+m)=O(m) for networks
represented by their adjacency lists, the time efficiency of
the shortest-augmenting-path algorithm is in O(nm²) for
this representation
 More efficient algorithms have been found that can run in
close to O(nm) time, but these algorithms don’t fall into the
iterative-improvement paradigm

35
Definition of a Cut
Let X be a set of vertices in a network that includes its source but does
not include its sink, and let X̄, the complement of X, be the rest of the
vertices, including the sink. The cut induced by this partition of the
vertices is the set of all the edges with a tail in X and a head in X̄.
The capacity of a cut is defined as the sum of the capacities of the edges
that compose the cut.

 We’ll denote a cut and its capacity by C(X,X̄) and c(X,X̄)

 Note that if all the edges of a cut were deleted from the
network, there would be no directed path from source to sink

 Minimum cut is a cut of the smallest capacity in a given
network

36
Examples of network cuts
[Figure: the flow network of slide 16 (source 1, sink 6)]

If X = {1} and X̄ = {2,3,4,5,6},     C(X,X̄) = {(1,2), (1,4)},        c(X,X̄) = 5
If X = {1,2,3,4,5} and X̄ = {6},     C(X,X̄) = {(3,6), (5,6)},        c(X,X̄) = 6
If X = {1,2,4} and X̄ = {3,5,6},     C(X,X̄) = {(2,3), (2,5), (4,3)}, c(X,X̄) = 9

37
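A tiny Python sketch that reproduces these three cut capacities, assuming the same dict-based representation of the example network used in the earlier sketches.

```python
# A tiny sketch reproducing these cut capacities (assumption: the same dict-based
# representation of the example network as before).
def cut_capacity(capacity, X):
    # sum the capacities of the edges with a tail in X and a head outside X
    return sum(u for (i, j), u in capacity.items() if i in X and j not in X)

cap = {(1, 2): 2, (1, 4): 3, (2, 3): 5, (2, 5): 3, (4, 3): 1, (3, 6): 2, (5, 6): 4}
print(cut_capacity(cap, {1}))               # 5
print(cut_capacity(cap, {1, 2, 3, 4, 5}))   # 6
print(cut_capacity(cap, {1, 2, 4}))         # 9
```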
Max-Flow Min-Cut Theorem
 The value of maximum flow in a network is equal to the
capacity of its minimum cut

 The shortest augmenting path algorithm yields both a


maximum flow and a minimum cut:
• maximum flow is the final flow produced by the
algorithm
• minimum cut is formed by all the edges from the labeled
vertices to unlabeled vertices on the last iteration of the
algorithm
• all the edges from the labeled to unlabeled vertices are
full, i.e., their flow amounts are equal to the edge
capacities, while all the edges from the unlabeled to
labeled vertices, if any, have zero flow amounts on them
38
Bipartite Graphs

Bipartite graph: a graph whose vertices can be partitioned into


two disjoint sets V and U, not necessarily of the same size, so
that every edge connects a vertex in V to a vertex in U
A graph is bipartite if and only if it does not have a cycle of odd length

[Figure: a bipartite graph with V = {1, 2, 3, 4, 5} and U = {6, 7, 8, 9, 10}]

39
Bipartite Graphs (cont.)
A bipartite graph is 2-colorable: the vertices can be colored
in two colors so that every edge has its vertices colored
differently

[Figure: a bipartite graph with V = {1, 2, 3, 4, 5} and U = {6, 7, 8, 9, 10}, 2-colored]

40
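A minimal BFS 2-coloring sketch of this test, assuming the graph is given as an adjacency dict; it returns False exactly when some odd cycle forces two adjacent vertices into the same color. This is an illustration, not an algorithm from the slides.

```python
# A minimal BFS 2-coloring test for bipartiteness (assumption: the graph is given
# as an adjacency dict); it returns False exactly when an odd cycle forces two
# adjacent vertices into the same color.
from collections import deque

def is_bipartite(adj):
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in color:
                    color[w] = 1 - color[v]       # alternate colors along each edge
                    queue.append(w)
                elif color[w] == color[v]:        # both endpoints the same color: odd cycle
                    return False
    return True
```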
Matching in a Graph
A matching in a graph is a subset of its edges with the property
that no two edges share a vertex

[Figure: a bipartite graph with V = {1, 2, 3, 4, 5} and U = {6, 7, 8, 9, 10};
 the edges M = {(4,8), (5,9)} form a matching in this graph]

A maximum (or maximum cardinality) matching is a matching


with the largest number of edges
• always exists
• not always unique
41
Free Vertices and Maximum Matching

[Figure: the matching M = {(4,8), (5,9)}; vertices 4, 5, 8, 9 are matched,
 and the remaining vertices are free]

For a given matching M, a vertex is called free (or unmatched) if


it is not an endpoint of any edge in M; otherwise, a vertex is said
to be matched
• If every vertex is matched, then M is a maximum matching
• If there are unmatched (free) vertices, it may be possible to improve M
• We can immediately increase a matching by adding an edge connecting two
free vertices (e.g., (1,6) above)
42
Augmenting Paths and Augmentation
An augmenting path for a matching M is a path from a free vertex
in V to a free vertex in U whose edges alternate between edges
not in M and edges in M
 The length of an augmenting path is always odd
 Adding to M the odd numbered path edges and deleting from it the even
numbered path edges increases the matching size by 1 ( augmentation)
 A one-edge path between two free vertices is a special case of an augmenting path

[Figure: the matching before and after augmentation along the path 2, 6, 1, 7]

43
Augmenting Paths (another example)

[Figure: the matching before and after augmentation along the path 3, 8, 4, 9, 5, 10]

• Matching on the right is maximum (perfect matching)


• Theorem A matching M is maximum if and only if there exists
no augmenting path with respect to M
44
Augmenting Path Method (template)
 Start with some initial matching
• e.g., the empty set

 Find an augmenting path and augment the current


matching along that path
• e.g., using breadth-first search like method

 When no augmenting path can be found, terminate and


return the last matching, which is maximum

45
BFS-based Augmenting Path Algorithm
 Initialize queue Q with all free vertices in one of the sets (say V)

 While Q is not empty, delete front vertex w and label every


unlabeled vertex u adjacent to w as follows:
Case 1 (w is in V)
If u is free, augment the matching along the path ending at u
by moving backwards until a free vertex in V is reached.
After that, erase all labels and reinitialize Q with all the
vertices in V that are still free
If u is matched (not with w), label u with w and enqueue u
Case 2 (w is in U) Label its matching mate v with w and
enqueue v
 After Q becomes empty, return the last matching, which is
maximum

46
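A Python sketch of the BFS-based procedure just outlined, assuming the graph is given by its vertex sets V and U and an adjacency dict, with the matching stored as a mate dict; the small instance at the end is hypothetical, since the edges of the slides' figure are not reproduced here.

```python
# A sketch of the BFS-based augmenting-path algorithm outlined above (assumptions:
# V and U are the two vertex sets, adj is an adjacency dict for the vertices of V,
# and the matching is stored as mate[v] = matched partner or None).
from collections import deque

def max_bipartite_matching(V, U, adj):
    mate = {v: None for v in list(V) + list(U)}
    label, queue = {}, deque()

    def reinit():
        label.clear()
        queue.clear()
        queue.extend(v for v in V if mate[v] is None)   # all free vertices of V

    reinit()
    while queue:
        w = queue.popleft()
        if w in V:                                       # Case 1: w is in V
            for u in adj[w]:
                if u in label:
                    continue
                if mate[u] is None:                      # u is free: augment from u
                    v, x = w, u
                    while True:                          # move backwards, flipping edges
                        nxt = mate[v]
                        mate[v], mate[x] = x, v
                        if nxt is None:                  # reached a free vertex in V
                            break
                        x, v = nxt, label[nxt]
                    reinit()                             # erase labels, refill the queue
                    break
                elif mate[u] != w:                       # u is matched, but not with w
                    label[u] = w
                    queue.append(u)
        else:                                            # Case 2: w is in U
            v = mate[w]
            if v is not None and v not in label:         # label w's matching mate
                label[v] = w
                queue.append(v)
    return {(v, mate[v]) for v in V if mate[v] is not None}

# hypothetical small instance (the edges of the slides' figure are not reproduced here):
V, U = {1, 2, 3}, {6, 7, 8}
adj = {1: [6, 7], 2: [6], 3: [7, 8]}
print(max_bipartite_matching(V, U, adj))     # e.g., {(1, 7), (2, 6), (3, 8)}
```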
Example (revisited)
Initial Graph → Resulting Graph

[Figure: the graph with its initial matching (free vertices 1, 2, 3 in V) and the
 resulting graph after the first augmentation.
 Queue: 1 2 3; the free vertex 6 is found in U, and the matching is augmented from 6]

Each vertex is labeled with the vertex it was reached from. Queue deletions are
indicated by arrows. The free vertex found in U is shaded and labeled for clarity;
the new matching obtained by the augmentation is shown on the next slide.

47
Example (cont.)
Initial Graph → Resulting Graph

[Figure: the graphs before and after the second augmentation.
 Queue: 2 3 6 8 1 4; labels shown: 6, 8 on V vertices and 2, 1, 3 on U vertices.
 The free vertex 7 is found, and the matching is augmented from 7]

48
Example (cont.)
Initial Graph → Resulting Graph

[Figure: the graphs before and after the third augmentation.
 Queue: 3 6 8 2 4 9; labels shown: 6, 8 on V vertices and 3, 3, 4, 4 on U vertices.
 The free vertex 10 is found, and the matching is augmented from 10]

49
Example: maximum matching found

[Figure: the resulting maximum matching, in which every vertex of V is matched]

 This matching is maximum since there are no remaining


free vertices in V (the queue is empty)
 Note that this matching differs from the maximum
matching found earlier
50
51
Notes on Maximum Matching Algorithm
 Each iteration (except the last) matches two free vertices (one
each from V and U). Therefore, the number of iterations
cannot exceed n/2 + 1, where n is the number of vertices in
the graph. The time spent on each iteration is in O(n+m),
where m is the number of edges in the graph. Hence, the time
efficiency is in O(n(n+m))

 This can be improved to O(sqrt(n)(n+m)) by combining


multiple iterations to maximize the number of edges added to
matching M in each search

 Finding a maximum matching in an arbitrary graph is much


more difficult, but the problem was solved in 1965 by Jack
Edmonds

52
Conversion to Max-Flow Problem
[Figure: the bipartite graph augmented with a source s and a sink t; every edge,
 including the new edges from s to each vertex of V and from each vertex of U to t,
 has capacity 1]

 Add a source and a sink, direct edges (with unit capacity)


from the source to the vertices of V and from the vertices
of U to the sink
 Direct all edges from V to U with unit capacity
53
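A short Python sketch of this construction, assuming the bipartite graph is given as vertex sets and an edge list; the labels 's' and 't' for the added source and sink are assumptions.

```python
# A short sketch of the conversion above (assumptions: the bipartite graph is given
# as vertex sets V, U and a list of (v, u) edges; 's' and 't' are the added source
# and sink labels).
def bipartite_to_flow_network(V, U, edges):
    capacity = {('s', v): 1 for v in V}                 # source -> every vertex of V
    capacity.update({(u, 't'): 1 for u in U})           # every vertex of U -> sink
    capacity.update({(v, u): 1 for (v, u) in edges})    # original edges, directed V -> U
    return capacity

# A maximum flow in this network (e.g., via the shortest-augmenting-path sketch above)
# has value equal to the size of a maximum matching in the original bipartite graph.
```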
Stable Marriage Problem
There is a set Y = {m1,…,mn} of n men and a set X = {w1,…,wn} of
n women. Each man has a ranking list of the women, and
each woman has a ranking list of the men (with no ties in these
lists).
A marriage matching M is a set of n pairs (mi, wj).
A pair (m, w) is said to be a blocking pair for matching M if man
m and woman w are not matched in M but prefer each other
to their mates in M.
A marriage matching M is called stable if there is no blocking
pair for it; otherwise, it’s called unstable.
The stable marriage problem is to find a stable marriage
matching for men’s and women’s given preferences.

54
Instance of the Stable Marriage Problem
An instance of the stable marriage problem can be specified
either by two sets of preference lists or by a ranking matrix, as in
the example below.
men’s preferences                     women’s preferences
       1st   2nd   3rd                       1st   2nd   3rd
Bob:   Lea   Ann   Sue                Ann:   Jim   Tom   Bob
Jim:   Lea   Sue   Ann                Lea:   Tom   Bob   Jim
Tom:   Sue   Lea   Ann                Sue:   Jim   Tom   Bob

ranking matrix
        Ann    Lea    Sue
Bob     2,3    1,2    3,3
Jim     3,1    1,3    2,1
Tom     3,2    2,1    1,2

{(Bob, Ann), (Jim, Lea), (Tom, Sue)} is unstable
{(Bob, Ann), (Jim, Sue), (Tom, Lea)} is stable
55
Stable Marriage Algorithm (Gale-Shapley)

Step 0 Start with all the men and women being free

Step 1 While there are free men, arbitrarily select one of them
and do the following:
Proposal The selected free man m proposes to w, the
next woman on his preference list
Response If w is free, she accepts the proposal to be
matched with m. If she is not free, she compares m with
her current mate. If she prefers m to him, she accepts
m’s proposal, making her former mate free; otherwise,
she simply rejects m’s proposal, leaving m free

Step 2 Return the set of n matched pairs

56
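A Python sketch of the algorithm, assuming complete preference lists given as dicts mapping each person to a ranking list (best first); run on the instance from slide 55, it should produce the man-optimal matching obtained in the walk-through that follows.

```python
# A sketch of the Gale-Shapley algorithm above (assumptions: complete preference
# lists given as dicts mapping each person to a ranking list, best first).
def gale_shapley(men_prefs, women_prefs):
    # rank[w][m] = position of man m on woman w's list (lower means preferred)
    rank = {w: {m: i for i, m in enumerate(lst)} for w, lst in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}     # index of the next woman to propose to
    mate = {}                                   # woman -> her current fiance
    free_men = list(men_prefs)
    while free_men:
        m = free_men.pop()                      # arbitrarily select a free man
        w = men_prefs[m][next_choice[m]]        # the next woman on his preference list
        next_choice[m] += 1
        if w not in mate:                       # w is free: she accepts
            mate[w] = m
        elif rank[w][m] < rank[w][mate[w]]:     # w prefers m to her current mate
            free_men.append(mate[w])            # her former mate becomes free
            mate[w] = m
        else:
            free_men.append(m)                  # proposal rejected; m stays free
    return {(m, w) for w, m in mate.items()}

# the instance from slide 55:
men = {'Bob': ['Lea', 'Ann', 'Sue'], 'Jim': ['Lea', 'Sue', 'Ann'], 'Tom': ['Sue', 'Lea', 'Ann']}
women = {'Ann': ['Jim', 'Tom', 'Bob'], 'Lea': ['Tom', 'Bob', 'Jim'], 'Sue': ['Jim', 'Tom', 'Bob']}
print(gale_shapley(men, women))   # expected: {('Bob', 'Ann'), ('Jim', 'Sue'), ('Tom', 'Lea')}
```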
Example
Free men: Bob, Jim, Tom    Bob proposed to Lea; Lea accepted
Free men: Jim, Tom         Jim proposed to Lea; Lea rejected

(the ranking matrix of slide 55 is shown alongside each step)
57
Example (cont.)
Free men: Jim, Tom    Jim proposed to Sue; Sue accepted
Free men: Tom         Tom proposed to Sue; Sue rejected
58
Example (cont.)
Free men: Tom    Tom proposed to Lea; Lea replaced Bob with Tom
Free men: Bob    Bob proposed to Ann; Ann accepted
59
Analysis of the Gale-Shapley Algorithm
 The algorithm terminates after no more than n² iterations with
a stable marriage output

 The stable matching produced by the algorithm is always
man-optimal: each man gets the highest-ranked woman possible for him
under any stable marriage. One can obtain the woman-optimal matching
by making the women propose to the men

 A man (woman) optimal matching is unique for a given set of


participant preferences

 The stable marriage problem has practical applications such


as matching medical-school graduates with hospitals for
residency training

60
