04GreedyAlgorithmsII

The document discusses greedy algorithms, focusing on Dijkstra's algorithm for finding shortest paths in directed graphs. It covers the single-pair and single-source shortest path problems, various applications of shortest paths, and the implementation details of Dijkstra's algorithm, including optimizations and performance considerations. Additionally, it touches on related topics such as minimum spanning trees and clustering methods.


4.

GREEDY ALGORITHMS II

‣ Dijkstra′s algorithm
‣ minimum spanning trees
‣ Prim, Kruskal, Boruvka
‣ single-link clustering
‣ min-cost arborescences

Lecture slides by Kevin Wayne


Copyright © 2005 Pearson-Addison Wesley
http://www.cs.princeton.edu/~wayne/kleinberg-tardos

Last updated on 12.02.25 16:04


4. GREEDY ALGORITHMS II

‣ Dijkstra′s algorithm
‣ minimum spanning trees
‣ Prim, Kruskal, Boruvka
‣ single-link clustering
‣ min-cost arborescences

SECTION 4.4
Single-pair shortest path problem

Problem. Given a digraph G = (V, E), edge lengths ℓe ≥ 0, source s ∈ V, and destination t ∈ V, find a shortest directed path from s to t.

[figure: digraph with nonnegative edge lengths; highlighted s↝t path]

length of path = 9 + 4 + 1 + 11 = 25
Single-source shortest paths problem

Problem. Given a digraph G = (V, E), edge lengths ℓe ≥ 0, and source s ∈ V, find a shortest directed path from s to every node.

[figure: digraph with edge lengths; shortest-paths tree from source s highlighted]
Shortest paths: quiz 1

Suppose that you change the length of every edge of G as follows.


For which is every shortest path in G a shortest path in G′?

A. Add 17.

B. Multiply by 17.

C. Either A or B.

D. Neither A nor B.

[figure: digraph with source s, intermediate nodes 1–3, and sink t; edge lengths include 7, 18, 19, 20, 24]
Shortest paths: quiz 2

Which variant in car GPS?

A. Single source: from one node s to every other node.

B. Single sink: from every node to one node t.

C. Source–sink: from one node s to another node t.

D. All pairs: between all pairs of nodes.

6
Shortest path applications

・PERT/CPM.
・Map routing.
・Seam carving.
・Robot navigation.
・Texture mapping.
・Typesetting in LaTeX.
・Urban traffic planning.
・Telemarketer operator scheduling.
・Routing of telecommunications messages.
・Network routing protocols (OSPF, BGP, RIP).
・Optimal truck routing through given traffic congestion pattern.

Network Flows: Theory, Algorithms, and Applications,


by Ahuja, Magnanti, and Orlin, Prentice Hall, 1993.

7
Dijkstra′s algorithm (for single-source shortest paths problem)

Greedy approach. Maintain a set of explored nodes S for which


algorithm has determined d[u] = length of a shortest s↝u path.
・Initialize S ← { s }, d[s] ← 0.
・Repeatedly choose unexplored node v ∉ S which minimizes
π(v) = min { d[u] + ℓe : e = (u, v), u ∈ S }

(the length of a shortest path from s to some node u in the explored part S, followed by a single edge e = (u, v))

[figure: explored set S containing s and u (labeled d[u]); edge e of length ℓe from u to v ∉ S]
Dijkstra′s algorithm (for single-source shortest paths problem)

Greedy approach. Maintain a set of explored nodes S for which


algorithm has determined d[u] = length of a shortest s↝u path.
・Initialize S ← { s }, d[s] ← 0.
・Repeatedly choose unexplored node v ∉ S which minimizes
π(v) = min { d[u] + ℓe : e = (u, v), u ∈ S },

(the length of a shortest path from s to some node u in the explored part S, followed by a single edge e = (u, v))

add v to S, and set d[v] ← π(v).
・To recover path, set pred[v] ← e that achieves the min.

[figure: explored set S; edge e of length ℓe from u ∈ S (labeled d[u]) to v (labeled d[v])]
Dijkstra′s algorithm: proof of correctness

Invariant. For each node u ∈ S : d[u] = length of a shortest s↝u path.


Pf. [ by induction on ⎜S⎟ ]
Base case: ⎜S⎟ = 1 is easy since S = { s } and d[s] = 0.
Inductive hypothesis: Assume true for ⎜S⎟ ≥ 1.
・Let v be next node added to S, and let (u, v) be the final edge.
・A shortest s↝u path plus (u, v) is an s↝v path of length π(v).
・Consider any other s↝v path P. We show that it is no shorter than π(v).
・Let e = (x, y) be the first edge in P that leaves S, and let P′ be the subpath of P from s to x.
・The length of P is already ≥ π(v) as soon as it reaches y:

ℓ(P) ≥ ℓ(P′) + ℓe ≥ d[x] + ℓe ≥ π(y) ≥ π(v)  ▪

(the four inequalities: non-negative lengths; inductive hypothesis; definition of π(y); Dijkstra chose v instead of y)

[figure: set S containing s, u, x; edge e = (x, y) leaving S; path P crossing the cut at e]
Dijkstra′s algorithm: efficient implementation

Critical optimization 1. For each unexplored node v ∉ S :


explicitly maintain π[v] instead of computing directly from the definition

π(v) = min { d[u] + ℓe : e = (u, v), u ∈ S }

・For each v ∉ S : π(v) can only decrease (because set S increases).
・More specifically, suppose u is added to S and there is an edge e = (u, v) leaving u. Then, it suffices to update:

π[v] ← min { π[v], π[u] + ℓe }

recall: for each u ∈ S,


π[u] = d [u] = length of shortest s↝u path

Critical optimization 2. Use a min-oriented priority queue (PQ)


to choose an unexplored node that minimizes π[v].

11
Dijkstra′s algorithm: efficient implementation

Implementation.
・Algorithm maintains π[v] for each node v.
・Priority queue stores unexplored nodes, using π[⋅] as priorities.
・Once u is deleted from the PQ, π[u] = length of a shortest s↝u path.

DIJKSTRA (V, E, ℓ, s)
________________________________________

FOREACH v ≠ s : π[v] ← ∞, pred[v] ← null; π[s] ← 0.


Create an empty priority queue pq.
FOREACH v ∈ V : INSERT(pq, v, π[v]).
WHILE (IS-NOT-EMPTY(pq))
u ← DEL-MIN(pq).
FOREACH edge e = (u, v) ∈ E leaving u:
IF (π[v] > π[u] + ℓe)
DECREASE-KEY(pq, v, π[u] + ℓe).
π[v] ← π[u] + ℓe ; pred[v] ← e.

12
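The pseudocode above can be sketched directly in Python. A minimal sketch, not the slides' implementation: since the standard-library `heapq` has no DECREASE-KEY, this version uses lazy deletion (push a fresh entry and skip stale ones when popped), which preserves the O(m log n) bound up to constants.

```python
import heapq

def dijkstra(n, edges, s):
    """Single-source shortest paths on a digraph with nonnegative lengths.

    n: number of nodes (0..n-1); edges: list of (u, v, length) arcs.
    Returns (dist, pred): dist[v] is the shortest s->v distance
    (None if unreachable); pred[v] is the last edge on one such path.
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = [None] * n                # finalized once a node leaves the PQ
    pred = [None] * n
    pi = [float('inf')] * n          # pi[v]: best known s->v length
    pi[s] = 0
    pq = [(0, s)]                    # min-oriented priority queue
    while pq:
        d, u = heapq.heappop(pq)
        if dist[u] is not None:      # stale entry: u already explored
            continue
        dist[u] = d                  # u joins the explored set S
        for v, w in adj[u]:
            if dist[v] is None and d + w < pi[v]:
                pi[v] = d + w        # relax edge (u, v)
                pred[v] = (u, v)
                heapq.heappush(pq, (pi[v], v))
    return dist, pred
```

The lazy-deletion trick pushes up to m entries overall instead of keeping one per node, trading a slightly larger heap for not needing DECREASE-KEY.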

Dijkstra′s algorithm: which priority queue?

Performance. Depends on PQ: n INSERT, n DELETE-MIN, ≤ m DECREASE-KEY.


・Array implementation optimal for dense graphs. 2
Θ(n ) edges

・Binary heap much faster for sparse graphs. Θ(n) edges

・4-way heap worth the trouble in performance-critical situations.

priority queue INSERT DELETE-MIN DECREASE-KEY total

node-indexed array
(A[i] = priority of i)
O(1) O(n) O(1) O(n2)

binary heap O(log n) O(log n) O(log n) O(m log n)

d-way heap
(Johnson 1975)
O(d logd n) O(d logd n) O(logd n) O(m logm/n n)

Fibonacci heap
(Fredman–Tarjan 1984)
O(1) O(log n) † O(1) † O(m + n log n)

integer priority queue


(Thorup 2004)
O(1) O(log log n) O(1) O(m + n log log n)

assumes m ≥ n † amortized
13
Shortest paths: quiz 3

How to solve the single-source shortest paths problem in


undirected graphs with positive edge lengths?

A. Replace each undirected edge with two antiparallel edges of same


length. Run Dijkstra’s algorithm in the resulting digraph.

B. Modify Dijkstra’s algorithms so that when it processes node u,


it consider all edges incident to u (instead of edges leaving u).

C. Either A or B.

D. Neither A nor B.

14
Shortest paths: quiz 3

Theorem. [Thorup 1999] Can solve single-source shortest paths problem in


undirected graphs with positive integer edge lengths in O(m) time.

Remark. Does not explore nodes in increasing order of distance from s.

15
Extensions of Dijkstra′s algorithm

Dijkstra′s algorithm and proof extend to several related problems:
・Shortest paths in undirected graphs: π[v] ≤ π[u] + ℓ(u, v).
・Maximum capacity paths: π[v] ≥ min { π[u], c(u, v) }.
・Maximum reliability paths: π[v] ≥ π[u] × γ(u, v).
・…

Key algebraic structure. Closed semiring (min-plus, bottleneck, Viterbi, …).

[figure: explored set S with edge e of length ℓe from u to v; background: excerpt from a paper on closed semirings and linear algebra over them]
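The maximum-capacity rule π[v] ≥ min { π[u], c(u, v) } can be seen in action by swapping Dijkstra's (+, min) for the bottleneck semiring's (min, max). A sketch under that substitution (`max_capacity` is a name chosen here; a max-oriented queue is simulated by pushing negated keys):

```python
import heapq

def max_capacity(n, edges, s):
    """Maximum-capacity (widest) s->v path values on a digraph.

    n: number of nodes (0..n-1); edges: list of (u, v, capacity).
    cap[v] is the largest, over all s->v paths, of the minimum
    edge capacity along the path (the bottleneck semiring).
    """
    adj = [[] for _ in range(n)]
    for u, v, c in edges:
        adj[u].append((v, c))
    cap = [0] * n
    cap[s] = float('inf')            # empty path has unbounded capacity
    done = [False] * n
    pq = [(-cap[s], s)]              # negate keys: heapq is min-oriented
    while pq:
        c, u = heapq.heappop(pq)
        if done[u]:
            continue                 # stale entry
        done[u] = True
        for v, w in adj[u]:
            bottleneck = min(-c, w)  # capacity of best-known path via u
            if bottleneck > cap[v]:
                cap[v] = bottleneck
                heapq.heappush(pq, (-bottleneck, v))
    return cap
```

Only the relaxation step and the orientation of the queue change; the greedy structure and its proof carry over unchanged.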
GOOGLE’S FOO.BAR CHALLENGE

You have maps of parts of the space station, each starting at a prison exit and
ending at the door to an escape pod. The map is represented as a matrix of 0s
and 1s, where 0s are passable space and 1s are impassable walls. The door out
of the prison is at the top left (0, 0) and the door into an escape pod is at the
bottom right (w−1, h−1).

Write a function that generates the length of a shortest path from the prison door
to the escape pod, where you are allowed to remove one wall as part of your

17
GOOGLE’S FOO.BAR CHALLENGE

Idea 1. Model maze as a subgraph of grid graph.


・Node for each cell.
・Bidirectional edge between two adjacent open cells (length = 1).
Idea 2. Use “graph doubling” trick.
・An edge from G1 to G2 corresponds to removing a wall (length = 2).
・No edges from G2 to G1.

[figure: two copies of the maze graph, G1 (target t1) and G2 (target t2), side by side]
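Because every state change in the doubled graph is a single cell move, a plain BFS over states (row, col, walls used) suffices in place of Dijkstra. A sketch (`escape_length` is a name chosen here; it counts path length as number of cells, as the puzzle does, and models the G1→G2 crossing as stepping into the one removed wall):

```python
from collections import deque

def escape_length(maze):
    """Shortest path length (number of cells) from top-left to bottom-right
    in a 0/1 grid, allowed to pass through at most one wall.

    BFS over states (row, col, used): used = 0 is copy G1 of the grid
    graph, used = 1 is copy G2; stepping into a wall cell crosses over.
    Returns -1 if unreachable even after removing one wall.
    """
    h, w = len(maze), len(maze[0])
    seen = {(0, 0, 0)}
    q = deque([(0, 0, 0, 1)])        # row, col, walls used, cells so far
    while q:
        r, c, used, d = q.popleft()
        if (r, c) == (h - 1, w - 1):
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nu = used + maze[nr][nc]   # entering a wall spends the removal
                if nu <= 1 and (nr, nc, nu) not in seen:
                    seen.add((nr, nc, nu))
                    q.append((nr, nc, nu, d + 1))
    return -1
```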
Edsger Dijkstra

“ What’s the shortest way to travel from Rotterdam to Groningen?


It is the algorithm for the shortest path, which I designed in
about 20 minutes. One morning I was shopping in Amsterdam
with my young fiancée, and tired, we sat down on the café
terrace to drink a cup of coffee and I was just thinking about
whether I could do this, and I then designed the algorithm for
the shortest path. ” — Edsger Dijkstra

19
The moral implications of implementing shortest-path algorithms

https://www.facebook.com/pg/npcompleteteens
20
4. GREEDY ALGORITHMS II

‣ Dijkstra′s algorithm
‣ minimum spanning trees
‣ Prim, Kruskal, Boruvka
‣ single-link clustering
‣ min-cost arborescences

SECTION 6.1
Cycles

Def. A path is a sequence of edges which connects a sequence of nodes.

Def. A cycle is a path with no repeated nodes or edges other than the starting and
ending nodes.

[figure: graph on nodes 1–8]

path P = { (1, 2), (2, 3), (3, 4), (4, 5), (5, 6) }

cycle C = { (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1) }

22
Cuts

Def. A cut is a partition of the nodes into two nonempty subsets S and V – S.

Def. The cutset of a cut S is the set of edges with exactly one endpoint in S.

[figure: graph on nodes 1–8]

cut S = { 4, 5, 8 }

cutset D = { (3, 4), (3, 5), (5, 6), (5, 7), (8, 7) }

23
Minimum spanning trees: quiz 1

Consider the cut S = { 1, 4, 6, 7 }. Which edge is in the cutset of S?

A. S is not a cut (not connected)

B. 1–7

C. 5–7

D. 2–3

[figure: graph on nodes 1–7 with cut S = { 1, 4, 6, 7 }]
Minimum spanning trees: quiz 2

Let C be a cycle and let D be a cutset. How many edges do C and D


have in common? Choose the best answer.

A. 0

B. 2

C. not 1

D. an even number

25
Cycle–cut intersection

Proposition. A cycle and a cutset intersect in an even number of edges.

[figure: graph on nodes 1–8]

cycle C = { (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1) }
cutset D = { (3, 4), (3, 5), (5, 6), (5, 7), (8, 7) }
intersection C ∩ D = { (3, 4), (5, 6) }

26
Cycle–cut intersection

Proposition. A cycle and a cutset intersect in an even number of edges.


Pf. [by picture]

[figure: cycle C crossing the cut S an even number of times]
Spanning tree definition

Def. Let H = (V, T ) be a subgraph of an undirected graph G = (V, E).


H is a spanning tree of G if H is both acyclic and connected.

graph G = (V, E)
spanning tree H = (V, T)

28
Minimum spanning trees: quiz 3

Which of the following properties are true for all spanning trees H?

A. Contains exactly ⎜V⎟ – 1 edges.

B. The removal of any edge disconnects it.

C. The addition of any edge creates a cycle.

D. All of the above.

graph G = (V, E)
spanning tree H = (V, T) 29
Spanning tree properties

Proposition. Let H = (V, T ) be a subgraph of an undirected graph G = (V, E). Then,


the following are equivalent:
・H is a spanning tree of G.
・H is acyclic and connected.
・H is connected and has ⎜V⎟ – 1 edges.
・H is acyclic and has ⎜V⎟ – 1 edges.
・H is minimally connected: removal of any edge disconnects it.
・H is maximally acyclic: addition of any edge creates a cycle.

graph G = (V, E)
spanning tree H = (V, T) 30
A tree containing a cycle

https://maps.roadtrippers.com/places/46955/photos/374771356 31
Minimum spanning tree (MST)

Def. Given a connected, undirected graph G = (V, E) with edge costs ce,
a minimum spanning tree (V, T ) is a spanning tree of G such that the sum
of the edge costs in T is minimized.

[figure: graph with edge costs; MST edges highlighted]
MST cost = 50 = 4 + 6 + 8 + 5 + 11 + 9 + 7

Cayley’s theorem. The complete graph on n nodes has n^(n–2) spanning trees.

can’t solve by brute force


32
Minimum spanning trees: quiz 4

Suppose that you change the cost of every edge in G as follows.


For which is every MST in G an MST in G′ (and vice versa)?
Assume c(e) > 0 for each e.

A. c′(e) = c(e) + 17.

B. c′(e) = 17 c(e).

C. c′(e) = log17 c(e).

D. All of the above.

33



Applications

MST is fundamental problem with diverse applications.


・Dithering.
・Cluster analysis.
・Max bottleneck paths.
・Real-time face verification.
・LDPC codes for error correction.
・Image registration with Renyi entropy.
・Find road networks in satellite and aerial imagery.
・Model locality of particle interactions in turbulent fluid flows.
・Reducing data storage in sequencing amino acids in a protein.
・Autoconfig protocol for Ethernet bridging to avoid cycles in a network.
・Approximation algorithms for NP-hard problems (e.g., TSP, Steiner tree).
・Network design (communication, electrical, hydraulic, computer, road).
Network Flows: Theory, Algorithms, and Applications,
by Ahuja, Magnanti, and Orlin, Prentice Hall, 1993.

34
Fundamental cycle

Fundamental cycle. Let H = (V, T ) be a spanning tree of G = (V, E).


・For any non-tree edge e ∈ E : T ∪ { e } contains a unique cycle, say C.
・For any edge f ∈ C : (V, T ∪ { e } – { f }) is a spanning tree.

graph G = (V, E)
spanning tree H = (V, T)

Observation. If ce < cf, then (V, T) is not an MST. 35


Fundamental cutset

Fundamental cutset. Let H = (V, T ) be a spanning tree of G = (V, E).


・For any tree edge f ∈ T : (V, T – { f }) has two connected components.
・Let D denote corresponding cutset.
・For any edge e ∈ D : (V, T – { f } ∪ { e }) is a spanning tree.

graph G = (V, E)
spanning tree H = (V, T)

Observation. If ce < cf, then (V, T) is not an MST. 36


The greedy algorithm

Red rule.
・Let C be a cycle with no red edges.
・Select an uncolored edge of C of max cost and color it red.
Blue rule.
・Let D be a cutset with no blue edges.
・Select an uncolored edge in D of min cost and color it blue.
Greedy algorithm.
・Apply the red and blue rules (nondeterministically!) until all edges
are colored. The blue edges form an MST.
・Note: can stop once n – 1 edges colored blue.

37
Greedy algorithm: proof of correctness

Color invariant. There exists an MST (V, T*) containing every blue edge
and no red edge.
Pf. [ by induction on number of iterations ]

Base case. No edges colored ⇒ every MST satisfies invariant.

38
Greedy algorithm: proof of correctness

Color invariant. There exists an MST (V, T*) containing every blue edge
and no red edge.
Pf. [ by induction on number of iterations ]

Induction step (blue rule). Suppose color invariant true before blue rule.
・let D be chosen cutset, and let f be edge colored blue.
・if f ∈ T*, then T* still satisfies invariant.
・Otherwise, consider fundamental cycle C by adding f to T*.
・let e ∈ C be another edge in D.
・e is uncolored and ce ≥ cf since
- e ∈ T* ⇒ e not red
- blue rule ⇒ e not blue and ce ≥ cf
・Thus, T* ∪ { f } – { e } satisfies invariant.

[figure: MST T*; cut with blue edge f and uncolored edge e crossing it]
Greedy algorithm: proof of correctness

Color invariant. There exists an MST (V, T*) containing every blue edge
and no red edge.
Pf. [ by induction on number of iterations ]

Induction step (red rule). Suppose color invariant true before red rule.
・let C be chosen cycle, and let e be edge colored red.
・if e ∉ T*, then T* still satisfies invariant.
・Otherwise, consider fundamental cutset D by deleting e from T*.
・let f ∈ D be another edge in C.
・f is uncolored and ce ≥ cf since
- f ∉ T* ⇒ f not blue
- red rule ⇒ f not red and ce ≥ cf
・Thus, T* ∪ { f } – { e } satisfies invariant. ▪

[figure: MST T*; fundamental cutset with red edge e and uncolored edge f]
Greedy algorithm: proof of correctness

Theorem. The greedy algorithm terminates. Blue edges form an MST.


Pf. We need to show that either the red or blue rule (or both) applies.
・Suppose edge e is left uncolored.
・Blue edges form a forest.
・Case 1: both endpoints of e are in same blue tree.
⇒ apply red rule to cycle formed by adding e to blue forest.

Case 1

41
Greedy algorithm: proof of correctness

Theorem. The greedy algorithm terminates. Blue edges form an MST.


Pf. We need to show that either the red or blue rule (or both) applies.
・Suppose edge e is left uncolored.
・Blue edges form a forest.
・Case 1: both endpoints of e are in same blue tree.
⇒ apply red rule to cycle formed by adding e to blue forest.
・Case 2: both endpoints of e are in different blue trees.
⇒ apply blue rule to cutset induced by either of two blue trees. ▪

Case 2

42
4. GREEDY ALGORITHMS II

‣ Dijkstra′s algorithm
‣ minimum spanning trees
‣ Prim, Kruskal, Boruvka
‣ single-link clustering
‣ min-cost arborescences

SECTION 6.2
Prim′s algorithm

Initialize S = { s } for any node s, T = ∅.


Repeat n – 1 times:
・Add to T a min-cost edge with exactly one endpoint in S.
・Add the other endpoint to S.
by construction, edges in
cutset are uncolored

Theorem. Prim’s algorithm computes an MST.


Pf. Special case of greedy algorithm (blue rule repeatedly applied to S). ▪

44
Prim′s algorithm: implementation

Theorem. Prim’s algorithm can be implemented to run in O(m log n) time.


Pf. Implementation almost identical to Dijkstra’s algorithm.

PRIM (V, E, c)
________________________________________

S ← ∅, T ← ∅.
s ← any node in V.
FOREACH v ≠ s : π[v] ← ∞, pred[v] ← null; π[s] ← 0.
Create an empty priority queue pq.
FOREACH v ∈ V : INSERT(pq, v, π[v]).
WHILE (IS-NOT-EMPTY(pq))      (π[v] = cost of cheapest known edge between v and S)
u ← DEL-MIN(pq).
S ← S ∪ { u }, T ← T ∪ { pred[u] }.
FOREACH edge e = (u, v) ∈ E with v ∉ S :
IF (ce < π[v])
DECREASE-KEY(pq, v, ce).
π[v] ← ce; pred[v] ← e.
45
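The pseudocode admits the same lazy-deletion sketch as Dijkstra's algorithm. A minimal version, not the slides' implementation, on an undirected graph given as an edge list (entries in the heap are all-integer tuples so ties compare safely):

```python
import heapq

def prim_mst(n, edges):
    """MST of a connected undirected graph via Prim's algorithm.

    n: number of nodes (0..n-1); edges: list of (u, v, cost).
    Returns (total cost, list of tree edges). Lazy deletion of
    stale queue entries stands in for DECREASE-KEY.
    """
    adj = [[] for _ in range(n)]
    for u, v, c in edges:
        adj[u].append((c, v))
        adj[v].append((c, u))
    in_tree = [False] * n
    total, tree = 0, []
    pq = [(0, 0, 0)]                 # (cost, node, parent); start at node 0
    while pq:
        c, u, parent = heapq.heappop(pq)
        if in_tree[u]:
            continue                 # stale entry: u already in S
        in_tree[u] = True
        if u != parent:              # skip the artificial start entry
            total += c
            tree.append((parent, u))
        for cost, v in adj[u]:
            if not in_tree[v]:
                heapq.heappush(pq, (cost, v, u))
    return total, tree
```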
Kruskal′s algorithm

Consider edges in ascending order of cost:


・Add to tree unless it would create a cycle.
Theorem. Kruskal’s algorithm computes an MST.
Pf. Special case of greedy algorithm.
・Case 1:
all other edges in cycle are blue
both endpoints of e in same blue tree.
⇒ color e red by applying red rule to unique cycle.
・Case 2: both endpoints of e in different blue trees.
⇒ color e blue by applying blue rule to cutset defined by either tree. ▪

no edge in cutset has smaller cost


(since Kruskal chose it first)

46
Kruskal′s algorithm: implementation

Theorem. Kruskal’s algorithm can be implemented to run in O(m log m) time.


・Sort edges by cost.
・Use union–find data structure to dynamically maintain connected components.

KRUSKAL (V, E, c)
________________________________________

SORT m edges by cost and renumber so that c(e1) ≤ c(e2) ≤ … ≤ c(em).


T ← ∅.
FOREACH v ∈ V : MAKE-SET(v).
FOR i = 1 TO m
(u, v) ← ei.
IF (FIND-SET(u) ≠ FIND-SET(v))      (are u and v in the same component?)
T ← T ∪ { ei }.
UNION(u, v).      (put u and v in the same component)
RETURN T.
47
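A compact Python sketch of the same algorithm, with union–find using path compression and union by size (a sketch, not the slides' exact data structure):

```python
def kruskal_mst(n, edges):
    """MST via Kruskal's algorithm with a union-find data structure.

    n: number of nodes (0..n-1); edges: list of (u, v, cost).
    Returns (total cost, list of tree edges).
    """
    parent = list(range(n))
    size = [1] * n

    def find(x):                     # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):                 # UNION by size
        x, y = find(x), find(y)
        if size[x] < size[y]:
            x, y = y, x
        parent[y] = x
        size[x] += size[y]

    total, tree = 0, []
    for u, v, c in sorted(edges, key=lambda e: e[2]):
        if find(u) != find(v):       # adding (u, v) creates no cycle
            union(u, v)
            total += c
            tree.append((u, v))
    return total, tree
```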
Reverse-delete algorithm

Start with all edges in T and consider them in descending order of cost:
・Delete edge from T unless it would disconnect T.
Theorem. The reverse-delete algorithm computes an MST.
Pf. Special case of greedy algorithm.
・Case 1. [ deleting edge e does not disconnect T ]
⇒ apply red rule to cycle C formed by adding e to another path
in T between its two endpoints no edge in C is more expensive
(it would have already been considered and deleted)

・Case 2. [ deleting edge e disconnects T ]


⇒ apply blue rule to cutset D induced by either component ▪

e is the only remaining edge in the cutset


(all other edges in D must have been colored red / deleted)

Fact. [Thorup 2000] Can be implemented to run in O(m log n (log log n)3) time.

48
Review: the greedy MST algorithm

Red rule.
・Let C be a cycle with no red edges.
・Select an uncolored edge of C of max cost and color it red.
Blue rule.
・Let D be a cutset with no blue edges.
・Select an uncolored edge in D of min cost and color it blue.
Greedy algorithm.
・Apply the red and blue rules (nondeterministically!) until all edges
are colored. The blue edges form an MST.
・Note: can stop once n – 1 edges colored blue.

Theorem. The greedy algorithm is correct.


Special cases. Prim, Kruskal, reverse-delete, …

49
Borůvka′s algorithm

Repeat until only one tree.


・Apply blue rule to cutset corresponding to each blue tree.
・Color all selected edges blue.
Theorem. Borůvka’s algorithm computes the MST (assume edge costs are distinct).
Pf. Special case of greedy algorithm (repeatedly apply blue rule). ▪

[figure: blue trees and the cheapest edge leaving each, with costs 5, 7, 8, 11, 12, 13]

50
Borůvka′s algorithm: implementation

Theorem. Borůvka’s algorithm can be implemented to run in O(m log n) time.
Pf.
・To implement a phase in O(m) time:
  - compute connected components of blue edges
  - for each edge (u, v) ∈ E, check if u and v are in different components;
    if so, update each component’s best edge in cutset
・≤ log2 n phases since each phase (at least) halves total # components. ▪

[figure: example graph with edge costs 5, 7, 8, 11, 12, 13]
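The phase structure above can be sketched with a union-find in place of explicit component recomputation (Python; names are mine, not from the slides). It assumes distinct edge costs, per the theorem's side condition.

```python
# Boruvka sketch: each phase finds the cheapest edge leaving every
# component, then merges along those edges. With <= log2(n) phases and
# O(m) work per phase, this runs in O(m log n).
def boruvka_mst(n, edges):
    """edges: list of (cost, u, v) on nodes 0..n-1; returns total MST cost."""
    parent = list(range(n))

    def find(x):                              # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst_cost, components = 0, n
    while components > 1:
        best = [None] * n                     # cheapest edge leaving each component
        for e in edges:
            c, u, v = e
            ru, rv = find(u), find(v)
            if ru != rv:
                if best[ru] is None or c < best[ru][0]:
                    best[ru] = e
                if best[rv] is None or c < best[rv][0]:
                    best[rv] = e
        progressed = False
        for e in best:
            if e is not None:
                c, u, v = e
                ru, rv = find(u), find(v)
                if ru != rv:                  # edge may be selected by both sides
                    parent[ru] = rv
                    mst_cost += c
                    components -= 1
                    progressed = True
        if not progressed:
            break                             # disconnected input: stop safely
    return mst_cost
```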
Borůvka′s algorithm: implementation

Contraction version.
・After each phase, contract each blue tree to a single supernode.
・Delete self-loops and parallel edges (keeping only the cheapest one).
・Borůvka phase becomes: take cheapest edge incident to each node.

[figure: graph G; contracting edge 2–5 merges nodes 2 and 5 into a supernode;
self-loops and parallel edges are then deleted]

Q. How to contract a set of edges?
CONTRACT A SET OF EDGES

Problem. Given a graph G = (V, E) and a set of edges F, contract all edges
in F, removing any self-loops or parallel edges.

Goal. O(m + n) time.

[figure: graph G and its contracted graph G′]

Solution.
・Compute the n′ connected components in (V, F).
・Suppose id[u] = i means node u is in connected component i.
・The contracted graph G′ has n′ nodes.
・For each edge u–v ∈ E, add an edge i–j to G′, where i = id[u] and j = id[v].

Removing self-loops. Easy.

Removing parallel edges.
・Create a list of edges i–j with the convention that i < j.
・Sort the edges lexicographically via LSD radix sort.
・Add the edges to the graph G′, removing parallel edges.
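A sketch of the component-labeling solution (Python; names are mine). For brevity a set stands in for the LSD radix sort when deduplicating parallel edges, which gives up the strict O(m + n) bound but keeps the idea intact.

```python
# Contract an edge set F: label each node with its component id in (V, F)
# via DFS, then rebuild edges between component ids, dropping self-loops
# and parallel edges.
from collections import defaultdict

def contract(n, edges, F):
    """edges, F: lists of (u, v) pairs on nodes 0..n-1.
    Returns (number of components, contracted edge list)."""
    adj = defaultdict(list)
    for u, v in F:
        adj[u].append(v)
        adj[v].append(u)
    comp_id, c = [-1] * n, 0
    for s in range(n):                        # label components of (V, F)
        if comp_id[s] == -1:
            comp_id[s], stack = c, [s]
            while stack:
                x = stack.pop()
                for y in adj[x]:
                    if comp_id[y] == -1:
                        comp_id[y] = c
                        stack.append(y)
            c += 1
    contracted = set()
    for u, v in edges:
        i, j = comp_id[u], comp_id[v]
        if i != j:                            # drop self-loops
            contracted.add((min(i, j), max(i, j)))   # i < j kills parallels
    return c, sorted(contracted)
```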





Borůvka′s algorithm on planar graphs

Theorem. Borůvka’s algorithm (contraction version) can be implemented to run
in O(n) time on planar graphs.
Pf.
・Each Borůvka phase takes O(n) time:
  - Fact 1: m ≤ 3n for simple planar graphs.
  - Fact 2: planar graphs remain planar after edge contractions/deletions.
・Number of nodes (at least) halves in each phase.
・Thus, overall running time ≤ cn + cn / 2 + cn / 4 + cn / 8 + … = O(n). ▪

[figure: a planar graph; K3,3 is not planar]
A hybrid algorithm

Borůvka–Prim algorithm.
・Run Borůvka (contraction version) for log2 log2 n phases.
・Run Prim on resulting, contracted graph.

Theorem. Borůvka–Prim computes an MST.
Pf. Special case of the greedy algorithm.

Theorem. Borůvka–Prim can be implemented to run in O(m log log n) time.
Pf.
・The log2 log2 n phases of Borůvka’s algorithm take O(m log log n) time;
  resulting graph has ≤ n / log2 n nodes and ≤ m edges.
・Prim’s algorithm (using Fibonacci heaps) takes
  O(m + (n / log n) log (n / log n)) = O(m + n)
  time on a graph with n / log2 n nodes and m edges. ▪
Does a linear-time compare-based MST algorithm exist?

year    worst case                      discovered by
1975    O(m log log n)                  Yao
1976    O(m log log n)                  Cheriton–Tarjan
1984    O(m log* n), O(m + n log n)     Fredman–Tarjan
1986    O(m log (log* n))               Gabow–Galil–Spencer–Tarjan
1997    O(m α(n) log α(n))              Chazelle
2000    O(m α(n))                       Chazelle
2002    asymptotically optimal          Pettie–Ramachandran
20xx    O(m)

deterministic compare-based MST algorithms

iterated logarithm function: lg* n = 0 if n ≤ 1; 1 + lg* (lg n) if n > 1

n                lg* n
(−∞, 1]          0
(1, 2]           1
(2, 4]           2
(4, 16]          3
(16, 2^16]       4
(2^16, 2^65536]  5

Theorem. [Fredman–Willard 1990] O(m) in word RAM model.
Theorem. [Dixon–Rauch–Tarjan 1992] O(m) MST verification algorithm.
Theorem. [Karger–Klein–Tarjan 1995] O(m) randomized MST algorithm.
MINIMUM BOTTLENECK SPANNING TREE

Problem. Given a connected graph G with positive edge costs, find a spanning
tree that minimizes the most expensive edge.

Goal. O(m log m) time or better.

[figure: example graph with edge costs 3–24; minimum bottleneck spanning tree T
highlighted (bottleneck = 9); note: T is not necessarily an MST]
MINIMUM BOTTLENECK SPANNING TREE

Solution 1. Compute an MST T*. It is also an MBST.


Pf. Suppose for sake of contradiction that T* is not a MBST.
・Let e ∈ T* have cost strictly larger than min bottleneck cost.
・Consider cut formed by deleting e from T*.
・MBST contains at least one edge f in cutset.
・T* ∪ { f } – { e } yields better MST. ※

[figure: minimum spanning tree T*]
MINIMUM BOTTLENECK SPANNING TREE

Solution 2.
・Sort the edges in increasing order of cost: e1 ≤ e2 ≤ … ≤ em.
・Define Ex = { e ∈ E : ce ≤ x } and Gx = (V, Ex).
・Use binary search to find the smallest value of x for which Gx is connected.

Note. This algorithm can be improved to O(m) time using linear-time median
finding and edge contractions.

[image: first page of P.M. Camerini, “The min-max spanning tree problem and
some extensions,” Information Processing Letters 7(1), January 1978]
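Solution 2 as runnable Python (names mine, not from the slides): binary search over the sorted distinct costs, with a union-find connectivity check per probe, for O(m log m) overall.

```python
# MBST threshold search: binary-search for the smallest x such that
# G_x = (V, { e : c_e <= x }) is connected; each probe is a linear-time
# union-find pass over the edges.
def min_bottleneck(n, edges):
    """edges: list of (cost, u, v) on nodes 0..n-1; returns min bottleneck cost."""
    costs = sorted({c for c, _, _ in edges})

    def connected(x):
        parent = list(range(n))

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        comps = n
        for c, u, v in edges:
            if c <= x:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    comps -= 1
        return comps == 1

    lo, hi = 0, len(costs) - 1
    while lo < hi:                    # invariant: answer lies in costs[lo..hi]
        mid = (lo + hi) // 2
        if connected(costs[mid]):
            hi = mid
        else:
            lo = mid + 1
    return costs[lo]
```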
4. G REEDY A LGORITHMS II

‣ Dijkstra′s algorithm
‣ minimum spanning trees
‣ Prim, Kruskal, Boruvka
‣ single-link clustering
‣ min-cost arborescences

SECTION 4.7
Clustering

Goal. Given a set U of n objects labeled p1, …, pn, partition into clusters so that
objects in different clusters are far apart.

[image: outbreak of cholera deaths in London in 1850s (Nina Mishra)]

Applications.
・Routing in mobile ad-hoc networks.
・Document categorization for web search.
・Similarity searching in medical image databases.
・…
Clustering of maximum spacing

k-clustering. Divide objects into k non-empty groups.

Distance function. Numeric value specifying “closeness” of two objects.


・d(pi, pj) = 0 iff pi = pj [ identity of indiscernibles ]
・d(pi, pj) ≥ 0 [ non-negativity ]
・d(pi, pj) = d(pj, pi) [ symmetry ]

Spacing. Min distance between any pair of points in different clusters.

Goal. Given an integer k, find a k-clustering of maximum spacing.

[figure: a 4-clustering; spacing = min distance between the two closest clusters]
Greedy clustering algorithm

“Well-known” algorithm in science literature for single-linkage k-clustering:


・Form a graph on the node set U, corresponding to n clusters.
・Find the closest pair of objects such that each object is in a different cluster,
and add an edge between them.
・Repeat n – k times (until there are exactly k clusters).

Key observation. This procedure is precisely Kruskal’s algorithm


(except we stop when there are k connected components).

Alternative. Find an MST and delete the k – 1 longest edges.
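A sketch of the stopped-Kruskal procedure (Python; names mine, not from the slides). The first inter-cluster distance skipped after reaching k clusters is exactly the spacing, matching the max-spacing analysis.

```python
# Single-link k-clustering: repeatedly merge the two closest clusters
# (Kruskal on the distance graph) until exactly k clusters remain.
def single_link(n, d, k):
    """d: list of (dist, i, j) triples on objects 0..n-1.
    Returns (list of clusters as frozensets, spacing)."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    comps, spacing = n, None
    for dist, i, j in sorted(d):
        ri, rj = find(i), find(j)
        if ri != rj:
            if comps == k:            # next merge would drop below k clusters:
                spacing = dist        # this distance is the spacing
                break
            parent[ri] = rj
            comps -= 1
    groups = {}
    for v in range(n):
        groups.setdefault(find(v), set()).add(v)
    return [frozenset(g) for g in groups.values()], spacing
```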


Greedy clustering algorithm: analysis

Theorem. Let C* denote the clustering C*1, …, C*k formed by deleting the
k – 1 longest edges of an MST. Then, C* is a k-clustering of max spacing.
Pf.
・Let C denote any other clustering C1, …, Ck.
・Let pi and pj be in the same cluster in C*, say C*r, but different clusters
  in C, say Cs and Ct.
・Some edge (p, q) on the pi–pj path in C*r spans two different clusters in C.
・Spacing of C* = length d* of the (k – 1)st longest edge in the MST.
・Edge (p, q) has length ≤ d* since it was added by Kruskal.
・Spacing of C is ≤ d* since p and q are in different clusters. ▪
[figure: clusters Cs, Ct, and cluster C*r containing pi and pj; edges left after
deleting the k – 1 longest edges from an MST; (p, q) is the edge Kruskal would
have added next had we not stopped it]
Dendrogram of cancers in human

Tumors in similar tissues cluster together.

[figure: gene-expression heat map over genes 1 … n (expressed vs. not expressed).
Reference: Botstein & Brown group]
Minimum spanning trees: quiz 5

Which MST algorithm should you use for single-link clustering?
(number of objects n can be very large)

A. Kruskal (stop when there are k components).
B. Prim (delete k – 1 longest edges).
C. Either A or B.
D. Neither A nor B.
4. G REEDY A LGORITHMS II

‣ Dijkstra′s algorithm
‣ minimum spanning trees
‣ Prim, Kruskal, Boruvka
‣ single-link clustering
‣ min-cost arborescences

SECTION 4.9
Arborescences

Def. Given a digraph G = (V, E) and a root r ∈ V, an arborescence (rooted at r) is a


subgraph T = (V, F) such that
・T is a spanning tree of G if we ignore the direction of edges.
・There is a (unique) directed path in T from r to each other node v ∈ V.

Minimum spanning arborescence: quiz 1

Which of the following are properties of arborescences rooted at r?

A. No directed cycles.

B. Exactly n − 1 edges.

C. For each v ≠ r : indegree(v) = 1.

D. All of the above.

Arborescences

Proposition. A subgraph T = (V, F) of G is an arborescence rooted at r iff


T has no directed cycles and each node v ≠ r has exactly one entering edge.
Pf.
⇒ If T is an arborescence, then no (directed) cycles and every node v ≠ r
has exactly one entering edge—the last edge on the unique r↝v path.

⇐ Suppose T has no cycles and each node v ≠ r has one entering edge.
・To construct an r↝v path, start at v and repeatedly follow edges in the backward
direction.
・Since T has no directed cycles, the process must terminate.
・It must terminate at r since r is the only node with no entering edge. ▪

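The proposition suggests a direct arborescence check (Python; names mine, not from the slides): verify the indegree condition, then follow entering edges backward from each node and confirm the walk reaches r.

```python
# Arborescence check: T = (V, F) is an arborescence rooted at r iff every
# v != r has exactly one entering edge and the backward walk from any node
# terminates at r (i.e., no directed cycles).
def is_arborescence(n, F, r):
    """F: list of directed edges (u, v) on nodes 0..n-1."""
    enter = [None] * n
    for u, v in F:
        if v == r or enter[v] is not None:
            return False                  # edge enters r, or indegree(v) > 1
        enter[v] = u
    if any(v != r and enter[v] is None for v in range(n)):
        return False                      # some v != r has no entering edge
    for v in range(n):                    # follow entering edges backward
        x, steps = v, 0
        while x != r:
            x = enter[x]
            steps += 1
            if steps > n:                 # walked too long: must be a cycle
                return False
    return True
```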
Minimum spanning arborescence: quiz 2

Given a digraph G, how to find an arborescence rooted at r?

A. Breadth-first search from r.
B. Depth-first search from r.
C. Either A or B.
D. Neither A nor B.
Min-cost arborescence problem

Problem. Given a digraph G with a root node r and edge costs ce ≥ 0,
find an arborescence rooted at r of minimum cost.

[figure: example digraph with edge costs 1–9]

Assumption 1. All nodes reachable from r.
Assumption 2. No edge enters r (safe to delete since they won’t help).
Minimum spanning arborescence: quiz 3

A min-cost arborescence must…

A. Include the cheapest edge.
B. Exclude the most expensive edge.
C. Be a shortest-paths tree from r.
D. None of the above.

[figure: counterexample digraph with edge costs 3, 4, 6, 10, 12]
A sufficient optimality condition

Property. For each node v ≠ r, choose a cheapest edge entering v
and let F* denote this set of n – 1 edges. If (V, F*) is an arborescence,
then it is a min-cost arborescence.

Pf. An arborescence needs exactly one edge entering each node v ≠ r
and (V, F*) is the cheapest way to make each of these choices. ▪

[figure: example digraph with edge costs; F* = thick black edges]
A sufficient optimality condition

Property. For each node v ≠ r, choose a cheapest edge entering v
and let F* denote this set of n – 1 edges. If (V, F*) is an arborescence,
then it is a min-cost arborescence.

Note. F* may not be an arborescence (since it may have directed cycles).

[figure: example digraph in which F* (thick black edges) contains a directed cycle]
Reduced costs

Def. For each v ≠ r, let y(v) denote the min cost of any edge entering v.
Define the reduced cost of an edge (u, v) as c′(u, v) = c(u, v) – y(v) ≥ 0.

Observation. T is a min-cost arborescence in G using costs c iff
T is a min-cost arborescence in G using reduced costs c′.
Pf. For each v ≠ r : each arborescence has exactly one edge entering v. ▪

[figure: example digraph with costs c, the values y(v), and reduced costs c′]


Edmonds branching algorithm: intuition

Intuition. Recall F* = set of cheapest edges entering v for each v ≠ r.


・Now, all edges in F* have 0 cost with respect to reduced costs c (u, v).
・If F* does not contain a cycle, then it is a min-cost arborescence.
・If F* contains a cycle C, can afford to use as many edges in C as desired.
・Contract edges in C to a supernode (removing any self-loops).
・Recursively solve problem in contracted network G with costs c (u, v).

F* = thick edges

4 0 0

1 3 0 4 0 1 0

0 0 7

78



Edmonds branching algorithm: intuition

Intuition. Recall F* = set of cheapest edges entering v for each v ≠ r.


・Now, all edges in F* have 0 cost with respect to reduced costs c (u, v).
・If F* does not contain a cycle, then it is a min-cost arborescence.
・If F* contains a cycle C, can afford to use as many edges in C as desired.
・Contract edges in C to a supernode (removing any self-loops).
・Recursively solve problem in contracted network G with costs c (u, v).

4 0

1 0

3 1

0
7

79



Edmonds branching algorithm

EDMONDS-BRANCHING (G, r, c)
--------------------------------------------------------------------
FOREACH v ≠ r :
    y(v) ← min cost of any edge entering v.
    c′(u, v) ← c(u, v) – y(v) for each edge (u, v) entering v.
FOREACH v ≠ r : choose one 0-cost edge entering v and let F*
    be the resulting set of edges.
IF (F* forms an arborescence) RETURN T = (V, F*).
ELSE
    C ← directed cycle in F*.
    Contract C to a single supernode, yielding G′ = (V′, E′).
    T′ ← EDMONDS-BRANCHING(G′, r′, c′).
    Extend T′ to an arborescence T in G by adding all but one edge of C.
    RETURN T.
--------------------------------------------------------------------
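A compact iterative rendering of the recursion (Chu–Liu/Edmonds style; Python, names and representation mine, not from the slides). It follows the reduced-cost accounting: charge y(v) for every node, keep only reduced costs, contract the 0-cost cycles, and repeat. It returns just the optimal cost, not the tree itself.

```python
INF = float('inf')

def min_arborescence(n, edges, root):
    """edges: list of (u, v, w) on nodes 0..n-1; returns min cost or None."""
    res = 0
    while True:
        # y(v) = cost of cheapest edge entering each v != root
        ine = [INF] * n
        pre = [-1] * n
        for u, v, w in edges:
            if u != v and w < ine[v]:
                ine[v], pre[v] = w, u
        if any(ine[v] == INF for v in range(n) if v != root):
            return None                      # some node not reachable from root
        ine[root] = 0
        res += sum(ine)                      # charge y(v) for every node
        # detect cycles among the chosen cheapest entering edges
        cid = [-1] * n                       # supernode ids for contraction
        mark = [-1] * n
        cnt = 0
        for i in range(n):
            v = i
            while v != root and mark[v] != i and cid[v] == -1:
                mark[v] = i
                v = pre[v]
            if v != root and cid[v] == -1:   # walk revisited v: 0-cost cycle
                cid[v] = cnt
                u = pre[v]
                while u != v:
                    cid[u] = cnt
                    u = pre[u]
                cnt += 1
        if cnt == 0:
            return res                       # F* is already an arborescence
        for i in range(n):                   # nodes outside cycles keep own id
            if cid[i] == -1:
                cid[i] = cnt
                cnt += 1
        # contract: keep reduced costs w - y(v), drop self-loops
        edges = [(cid[u], cid[v], w - ine[v])
                 for u, v, w in edges if cid[u] != cid[v]]
        n, root = cnt, cid[root]
```

Each round reduces the node count, so there are at most n contractions of O(m) work each, matching the O(m n) bound proved below.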









Edmonds branching algorithm

Q. What could go wrong?
A. Contracting cycle C places extra constraint on arborescence.
・Min-cost arborescence in G′ must have exactly one edge entering
  a node in C (since C is contracted to a single node).
・But min-cost arborescence in G might have several edges entering C.

[figure: min-cost arborescence in G with several edges entering cycle C]
Edmonds branching algorithm: key lemma

Lemma. Let C be a cycle in G containing only 0-cost edges. There exists a
min-cost arborescence T rooted at r that has exactly one edge entering C.

Pf.

Case 0. T has no edges entering C.
Since T is an arborescence, there is an r↝v path for each node v ⇒
at least one edge enters C. ※

Case 1. T has exactly one edge entering C.
T satisfies the lemma.

Case 2. T has two (or more) edges entering C.
We construct another min-cost arborescence T* that has exactly one edge
entering C.
Edmonds branching algorithm: key lemma

Case 2 construction of T*.
・Let (a, b) be an edge in T entering C that lies on a shortest path from r.
・We delete all edges of T that enter a node in C except (a, b).
・We add in all edges of C except the one that enters b.

[figure: arborescence T and cycle C; this path from r to C uses only one node in C]
Edmonds branching algorithm: key lemma

Case 2 construction of T*.
・Let (a, b) be an edge in T entering C that lies on a shortest path from r.
・We delete all edges of T that enter a node in C except (a, b).
・We add in all edges of C except the one that enters b.

Claim. T* is a min-cost arborescence.
・The cost of T* is at most that of T since we add only 0-cost edges.
・T* has exactly one edge entering each node v ≠ r.
・T* has no directed cycles.
  (T had no cycles before; no cycles within C; now only (a, b) enters C)
⇒ T* is an arborescence rooted at r.

[figure: T* and cycle C; the only path in T* to a is the path from r to a
(since any path must follow the unique entering edge back toward r)]
Edmonds branching algorithm: analysis

Theorem. [Chu–Liu 1965, Edmonds 1967] The greedy algorithm finds a min-cost
arborescence.
Pf. [ by strong induction on number of nodes ]
・If the edges of F* form an arborescence, then min-cost arborescence.
・Otherwise, we use reduced costs, which is equivalent.
・After contracting a 0-cost cycle C to obtain a smaller graph G′,
  the algorithm finds a min-cost arborescence T′ in G′ (by induction).
・Key lemma: there exists a min-cost arborescence T in G that corresponds to T′.

Theorem. The greedy algorithm can be implemented to run in O(m n) time.
Pf.
・At most n contractions (since each reduces the number of nodes).
・Finding and contracting the cycle C takes O(m) time.
・Transforming T′ into T takes O(m) time. ▪
  (un-contracting cycle C, removing all but one edge entering C,
  taking all but one edge in C)
Min-cost arborescence

Theorem. [Gabow–Galil–Spencer–Tarjan 1985] There exists an O(m + n log n) time
algorithm to compute a min-cost arborescence.