
CSE 421: Introduction to Algorithms
Greedy Algorithms
Paul Beame

Greedy Algorithms

Hard to define exactly, but we can give general properties:
- The solution is built in small steps
- Decisions on how to build the solution are made to maximize some criterion without looking to the future
  - Want the best current partial solution, as if the current step were the last step
- There may be more than one greedy algorithm, using different criteria, to solve a given problem

Greedy algorithms:
- Easy to produce
- Fast running times
- Work only on certain classes of problems

Interval Scheduling

- Single resource
- Reservation requests of the form "Can I reserve it from start time s to finish time f?" with s < f
- Find: the maximum number of requests that can be scheduled so that no two reservations have the resource at the same time




Two methods for proving that greedy algorithms do work:
- Greedy algorithm stays ahead
  - At each step, any other algorithm will have a worse value for the criterion
- Exchange argument
  - Can transform any other solution into the greedy solution at no loss in quality

Interval Scheduling

Formally:
- Requests 1, 2, …, n; request i has start time s_i and finish time f_i > s_i
- Requests i and j are compatible iff either
  - request i is for a time entirely before request j (f_i ≤ s_j), or
  - request j is for a time entirely before request i (f_j ≤ s_i)
- A set A of requests is compatible iff every pair of requests i, j ∈ A with i ≠ j is compatible
- Goal: find a maximum-size subset A of compatible requests

In other words:
- Job j starts at s_j and finishes at f_j.
- Two jobs are compatible if they don't overlap.
- Goal: find a maximum subset of mutually compatible jobs.

[Figure: eight jobs a-h drawn as intervals on a time axis from 0 to 11.]

Greedy Algorithms for Interval Scheduling

What criterion should we try?
- Earliest start time s_i
  - Doesn't work
- Shortest request time f_i - s_i
  - Doesn't work
- Fewest conflicts
  - Even this doesn't work
- Earliest finish time f_i
  - Works

Greedy Algorithm for Interval Scheduling

R ← set of all requests
A ← ∅
While R ≠ ∅ do
    Choose request i ∈ R with smallest finishing time f_i
    Add request i to A
    Delete all requests in R that are not compatible with request i
Return A

Claim: A is a compatible set of requests, and requests are added to A in order of finish time
- When we add a request to A we delete all incompatible ones from R

Claim: For any other set O ⊆ R of compatible requests, if we order the requests in A and O by finish time, then for each k: if O contains a k-th request then so does A, and a_k ≤ o_k, where a_k and o_k are the finish times of the k-th requests of A and O respectively
- Proving this claim is enough to prove that A is optimal

Inductive Proof of Claim: a_k ≤ o_k

- Base case: This is true for the first request in A, since that is the one with the smallest finish time.
- Inductive step: Suppose a_k ≤ o_k.
  - By definition of compatibility, if O contains a (k+1)-st request r, then the start time of r must be after o_k, and thus after a_k.
  - Thus r is compatible with the first k requests in A.
  - Therefore A has at least k+1 requests, since a compatible one is available after the first k are chosen.
  - r was among those considered by the greedy algorithm for that (k+1)-st request in A.
  - Therefore, by the greedy choice, the finish time of r, which is o_{k+1}, is at least the finish time of that (k+1)-st request in A, which is a_{k+1}.

Interval Scheduling: Analysis

Therefore we have:
Theorem. The greedy algorithm is optimal.

Alternative proof (by contradiction):
- Assume greedy is not optimal, and let's see what happens.
- Let a_1, a_2, …, a_k denote the set of jobs selected by greedy.
- Let o_1, o_2, …, o_m denote the set of jobs in the optimal solution, with a_1 = o_1, a_2 = o_2, …, a_k = o_k for the largest possible value of k.
- Job a_{k+1} finishes before o_{k+1}.

[Figure: the greedy schedule a_1, …, a_k, a_{k+1} aligned above the optimal schedule o_1, …, o_k, o_{k+1}: why not replace job o_{k+1} with job a_{k+1}?]

Interval Scheduling: Greedy Algorithm Implementation

Implementing the greedy algorithm:
- Sort the requests by finish time: O(n log n) time
- Maintain the current latest finish time scheduled
- Keep an array of start times indexed by request number
- Only eliminate incompatible requests as needed
- Walk along the array of requests sorted by finish times, skipping those whose start time is before the current latest finish time scheduled
- O(n) additional time for the greedy algorithm

Sort jobs by finish times so that 0 ≤ f_1 ≤ f_2 ≤ … ≤ f_n.    O(n log n)

A ← ∅
last ← 0
for j = 1 to n {
    if (last ≤ s_j)
        A ← A ∪ {j}
        last ← f_j
}
return A                                                      O(n)
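This translates directly to a few lines of Python. A minimal sketch, assuming requests are given as (start, finish) pairs; the names are illustrative:

def interval_schedule(requests):
    # Sort request indices by finish time: O(n log n)
    order = sorted(range(len(requests)), key=lambda j: requests[j][1])
    selected = []
    last = float('-inf')   # latest finish time scheduled so far
    for j in order:        # single O(n) pass
        s, f = requests[j]
        if s >= last:      # compatible with everything chosen so far
            selected.append(j)
            last = f
    return selected

# Example: picks a maximum compatible subset (here requests 1, 3, 5)
print(interval_schedule([(0, 6), (1, 4), (3, 5), (5, 7), (5, 9), (8, 10)]))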

Scheduling All Intervals: Interval Partitioning

Interval partitioning:
- Lecture j starts at s_j and finishes at f_j.
- Goal: find the minimum number of classrooms needed to schedule all lectures so that no two occur at the same time in the same room.

Example: One schedule uses 4 classrooms to schedule 10 lectures.
[Figure: lectures a-j placed in 4 rooms on a time axis from 9:00 to 4:30.]

Example: Another schedule fits the same 10 lectures into only 3 classrooms.
[Figure: the same lectures repacked into 3 rooms on the same time axis.]

Interval Partitioning: Lower Bound on Optimal Solution

Scheduling all intervals (the interval partitioning problem): we have resources that can serve more than one request at once, and we want to schedule all the intervals using as few of our resources as possible.

Obvious requirement: we need at least the depth of the set of requests.

Definition. The depth of a set of open intervals is the maximum number that contain any given time.

Key observation. Number of classrooms needed ≥ depth.

Ex: The depth of the 3-classroom schedule above is 3 (a, b, c all contain 9:30), so that schedule is optimal.

Q. Does there always exist a schedule equal to the depth of the intervals?

A simple greedy algorithm

Sort requests in increasing order of start times (s_1,f_1), …, (s_n,f_n)
For i = 1 to n
    j ← 1
    While (request i not scheduled)
        last_j ← finish time of the last request currently scheduled on resource j
        if s_i ≥ last_j then schedule request i on resource j
        j ← j+1
    End While
End For

Interval Partitioning: Greedy Analysis

Observation. The greedy algorithm never schedules two incompatible lectures in the same classroom.

Theorem. The greedy algorithm is optimal.
Proof.
- Let d = number of classrooms that the greedy algorithm allocates.
- Classroom d is opened because we needed to schedule a job, say j, that is incompatible with all d-1 other classrooms.
- Since we sorted by start time, all these incompatibilities are caused by lectures that start no later than s_j.
- Thus, we have d lectures overlapping at time s_j + ε.
- By the key observation, all schedules use ≥ d classrooms.

A more efficient implementation

The simple algorithm may be slow: O(nd), which may be Ω(n²). Using a priority queue gives O(n log n) time:

Sort requests in increasing order of start times (s_1,f_1), …, (s_n,f_n)    O(n log n)
d ← 1
Schedule request 1 on resource 1
last_1 ← f_1
Insert 1 into priority queue Q with key = last_1
For i = 2 to n
    j ← findmin(Q)
    if s_i ≥ last_j then
        schedule request i on resource j
        last_j ← f_i
        Increasekey(j,Q) to last_j
    else
        d ← d+1
        schedule request i on resource d
        last_d ← f_i
        Insert d into priority queue Q with key = last_d
End For                                                                      O(n log d)
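A sketch of this in Python, using heapq as the priority queue; since binary heaps have no increase-key, we pop the minimum room and push it back with its new key. Requests are (start, finish) pairs and the names are illustrative:

import heapq

def interval_partition(requests):
    """Assign each (start, finish) request a room index, using few rooms."""
    order = sorted(range(len(requests)), key=lambda i: requests[i][0])  # by start
    rooms = []                             # heap of (latest finish, room index)
    assignment = [None] * len(requests)
    for i in order:
        s, f = requests[i]
        if rooms and rooms[0][0] <= s:     # earliest-finishing room is free
            _, j = heapq.heappop(rooms)    # reuse it (pop + re-push = increase-key)
        else:
            j = len(rooms)                 # all rooms busy: open a new room
        assignment[i] = j
        heapq.heappush(rooms, (f, j))
    return assignment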

Greedy Analysis Strategies

- Greedy algorithm stays ahead. Show that after each step of the greedy algorithm, its solution is at least as good as any other algorithm's.
- Exchange argument. Gradually transform any solution into the one found by the greedy algorithm without hurting its quality.
- Structural. Discover a simple "structural" bound asserting that every possible solution must have a certain value. Then show that your algorithm always achieves this bound.

Scheduling to Minimize Lateness

- Single resource, as in interval scheduling, but instead of start and finish times, request i has
  - a time requirement t_i, which must be scheduled in a contiguous block
  - a target deadline d_i, by which time the request would like to be finished
  - an overall start time s
- Requests are scheduled by the algorithm into time intervals [s_i, f_i] such that t_i = f_i - s_i
- Lateness of the schedule for request i:
  - if d_i < f_i then request i is late by L_i = f_i - d_i; otherwise its lateness is L_i = 0
- Maximum lateness: L = max_i L_i
- Goal: find a schedule for all requests (values of s_i and f_i for each request i) that minimizes the maximum lateness L

Minimizing Lateness: Greedy Algorithms

Greedy template. Consider jobs in some order.
- [Earliest deadline first] Consider jobs in ascending order of deadline d_j.
- [Shortest processing time first] Consider jobs in ascending order of processing time t_j.
- [Smallest slack] Consider jobs in ascending order of slack d_j - t_j.

[Figure: an example schedule of six jobs with deadlines d_1=6, d_2=8, d_3=9, d_4=9, d_5=14, d_6=15 on a time axis from 0 to 15; the schedule shown has max lateness = 6.]

Minimizing Lateness: Greedy Algorithms

- [Shortest processing time first] Consider jobs in ascending order of processing time t_j.
  Counterexample (two jobs): t = (1, 10), d = (100, 10). SPT runs the short job first, so the tight-deadline job finishes at 11 and is late; running the long job first makes both jobs on time.
- [Smallest slack] Consider jobs in ascending order of slack d_j - t_j.
  Counterexample (two jobs): t = (1, 10), d = (2, 10). Smallest slack runs the long job first, so the short job finishes at 11 with lateness 9; running the short job first gives maximum lateness 1.

Greedy Algorithm: Earliest Deadline First

- Order the requests in increasing order of deadlines
- Schedule the request with the earliest deadline as soon as the resource becomes available

Minimizing Lateness: Greedy Algorithm

Greedy algorithm. Earliest deadline first.

Sort deadlines in increasing order (d_1 ≤ d_2 ≤ … ≤ d_n)
f ← s
for i ← 1 to n
    s_i ← f
    f_i ← s_i + t_i
    f ← f_i
end for

[Figure: the EDF schedule for the six-job example above, achieving max lateness = 1.]

Proof for Greedy Algorithm: Exchange Argument

We will show that if there is another schedule O (think: an optimal schedule), then we can gradually change O so that
- at each step the maximum lateness of O never gets worse, and
- it eventually has the same cost as A.
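A minimal EDF sketch in Python, assuming jobs are (t, d) pairs and the resource is free from time s; the variable names mirror the pseudocode above and are illustrative:

def edf_schedule(jobs, s=0):
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][1])  # by deadline
    intervals = [None] * len(jobs)
    f = s
    max_lateness = 0
    for i in order:
        t, d = jobs[i]
        intervals[i] = (f, f + t)                  # contiguous block, no idle time
        f += t
        max_lateness = max(max_lateness, f - d)    # contributes 0 if f <= d
    return intervals, max_lateness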

Minimizing Lateness: No Idle Time

Observation. There exists an optimal schedule with no idle time.
Observation. The greedy schedule has no idle time.

Minimizing Lateness: Inversions

Definition. An inversion in schedule S is a pair of jobs i and j such that d_i < d_j but j is scheduled before i.

[Figure: a schedule containing an inversion (jobs with deadlines 4, 6, and 12), before the swap.]

Observation. The greedy schedule has no inversions.
Observation. If a schedule (with no idle time) has an inversion, it has one with a pair of inverted jobs scheduled consecutively (by transitivity of <).

Minimizing Lateness: Inversions

Definition. An inversion in schedule S is a pair of jobs i and j such that d_i < d_j but j is scheduled before i.

[Figure: the schedules before and after swapping two adjacent inverted jobs i and j; after the swap, j finishes where i used to, i.e. f'_j = f_i.]

Claim. Swapping two adjacent, inverted jobs reduces the number of inversions by one and does not increase the max lateness.

If d_j > d_i but j is scheduled in O immediately before i, then swapping requests i and j to get schedule O' does not increase the maximum lateness:
- Lateness L'_i ≤ L_i, since i is scheduled earlier in O' than in O
- Requests i and j together occupy the same total time slot in both schedules
  - All other requests k ≠ i, j have L'_k = L_k
  - f'_j = f_i, so L'_j = f'_j - d_j = f_i - d_j < f_i - d_i = L_i
- The maximum lateness has not increased!

Optimal schedules and inversions

Claim: There is an optimal schedule with no idle time and no inversions.
Proof:
- By the previous argument there is an optimal schedule O with no idle time.
- If O has an inversion, then it has a consecutive pair of requests in its schedule that are inverted, and these can be swapped without increasing lateness.
- Each swap decreases the number of inversions by 1.
- There is a bounded number of inversions (at most n(n-1)/2; we only care that this is finite).
- Eventually these swaps produce an optimal schedule with no inversions.
QED

Idleness and Inversions are the only issue

Claim: All schedules with no inversions and no idle time have the same maximum lateness.
Proof:
- Such schedules can differ only in how they order requests with equal deadlines.
- Consider all requests having some common deadline d.
- The maximum lateness of these jobs depends only on the finish time of the last of them, but the set of these requests occupies the same time segment in both schedules.
  - So the last of these requests finishes at the same time in any such schedule.

Earliest Deadline First is optimal

We know that:
- There is an optimal schedule with no idle time or inversions.
- All schedules with no idle time or inversions have the same maximum lateness.
- EDF produces a schedule with no idle time or inversions.

Therefore EDF produces an optimal schedule.

Optimal Caching/Paging

Memory systems:
- many levels of storage with different access times
- smaller storage has shorter access time
- to access an item, it must be brought to the lowest level of the memory system

Consider the management problem between adjacent levels:
- Main memory with n data items from a set U
- Cache can hold k << n items
- Simplest version, with no direct-mapping or other restrictions on where items can be
- Suppose the cache is full initially (it holds k data items to start with)

Given a memory request d from U:
- If d is stored in the cache, we can access it quickly
- If not, we call it a cache miss, and (since the cache is full)
  - we must bring d into the cache and evict some other data item from the cache
  - which one should we evict?

Given a sequence D = d_1, d_2, …, d_m of elements from U corresponding to memory requests:
Find a sequence of evictions (an eviction schedule) that has as few cache misses as possible.

Caching Example

n=3, k=2, U={a,b,c}; the cache initially contains {a,b}.

    Requests  D:  a    b    c    b    c    a    b
    Evictions S:            a              c
    Cache     C:  ab   ab   bc   bc   bc   ab   ab

This is optimal: 2 cache misses.

A Note on Optimal Caching

- In real operating conditions one typically needs an on-line algorithm: make each eviction decision as the memory request arrives.
- However, to design and analyze these algorithms, it is also important to understand how the best possible decisions could be made if one did know the future.
- The field of on-line algorithms compares the quality of on-line decisions to that of the optimal schedule.
- What does an optimal schedule look like?

Belady's Greedy Algorithm: Farthest-In-Future

- Given the sequence D = d_1, d_2, …, d_m
- When d_i needs to be brought into the cache, evict the item that is needed farthest in the future
- Let NextAccess_i(d) = min{ j ≥ i : d_j = d } be the next point in D at which item d will be requested
- Evict the cached item d for which NextAccess_i(d) is largest

Other Algorithms

Often there is flexibility. E.g., with k=3, cache C={a,b,c}, and D = a b c d a d e a d b c:
    S_FIF evicts c, b, e, d
    another schedule S evicts b, c, d, e
and both incur the same number of cache misses.

Why aren't other algorithms better? Least-Frequently-Used-In-Future?

Exchange argument: we can swap choices to convert any other schedule to Farthest-In-Future without losing quality.
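A sketch of farthest-in-future eviction in Python (an O(m·k) scan per miss is fine for illustration); it assumes the cache starts full, and the names are illustrative:

def fif_misses(requests, cache):
    cache = set(cache)
    misses = 0
    for i, d in enumerate(requests):
        if d in cache:
            continue                       # cache hit
        misses += 1
        def next_access(x):                # next request of x after time i
            for j in range(i + 1, len(requests)):
                if requests[j] == x:
                    return j
            return float('inf')            # never requested again
        cache.remove(max(cache, key=next_access))  # evict farthest-in-future
        cache.add(d)
    return misses

# The earlier example: 2 misses on D = a b c b c a b with initial cache {a,b}
print(fif_misses("abcbcab", {"a", "b"}))   # -> 2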

Optimal Offline Caching

Caching:
- Cache with the capacity to store k items.
- Sequence of m item requests d_1, d_2, …, d_m.
- Cache hit: the item is already in the cache when requested.
- Cache miss: the item is not in the cache when requested; we must bring the requested item into the cache and evict some existing item, if the cache is full.

Goal. An eviction schedule that minimizes the number of cache misses (actually, the number of evictions).
Example: k = 2, initial cache = ab, requests: a, b, c, b, c, a, a, b. Optimal eviction schedule: 2 cache misses.

Optimal Offline Caching: Farthest-In-Future

Farthest-in-future. Evict the item in the cache that is not requested until farthest in the future.

[Figure: a current cache with future queries g a b c e d a b b a c d e a f a d e f g h …; on the miss for g, eject the cached item whose next request is farthest in the future.]

Theorem. [Belady, 1960s] Farthest-in-future is an optimal eviction schedule.
Proof. The algorithm and theorem are intuitive; the proof is subtle.

Reduced Eviction Schedules

Definition. A reduced schedule is a schedule that only inserts an item into the cache in a step in which that item is requested.

Intuition. We can transform an unreduced schedule into a reduced one with no more cache misses.
[Figure: an unreduced schedule next to a reduced schedule.]

Claim. Given any unreduced schedule S, we can transform it into a reduced schedule S' with no more cache misses.
Proof. (by induction on the number of unreduced items)
- Suppose S brings d into the cache at time t, without a request.
- Let c be the item S evicts when it brings d into the cache.
- Case 1: d is evicted at some time t', before the next request for d.
- Case 2: d is requested at time t' before d is evicted.
[Figure: timelines for Case 1 and Case 2.]

Farthest-In-Future: Analysis

Theorem. FIF is an optimal eviction algorithm.
Proof. (by induction on the number of requests j)

Invariant: There exists an optimal reduced schedule S that makes the same eviction decisions as S_FIF through the first j+1 requests.

- Let S be a reduced schedule that satisfies the invariant through j requests. We produce S' that satisfies the invariant after j+1 requests.
- Consider the (j+1)-st request d = d_{j+1}.
- Since S and S_FIF have agreed up until now, they have the same cache contents before request j+1.
- Case 1: (d is already in the cache). S' = S satisfies the invariant.
- Case 2: (d is not in the cache, and S and S_FIF evict the same element). S' = S satisfies the invariant.
- Case 3: (d is not in the cache; S_FIF evicts e; S evicts f ≠ e).
  - Begin the construction of S' from S by evicting e instead of f.
  - Now S' agrees with S_FIF on the first j+1 requests; we show that having element f in the cache is no worse than having element e.
  - Let j' be the first time after j+1 that S and S' take a different action, and let g be the item requested at time j'. The difference must involve e or f (or both); otherwise S' would take the same action.
  - Case 3a: g = e. Can't happen with Farthest-In-Future, since there must be a request for f before e.
  - Case 3b: g = f. Element f can't be in the cache of S, so let e' be the element that S evicts.
    - If e' = e, S' accesses f from the cache; now S and S' have the same cache.
    - If e' ≠ e, S' evicts e' and brings e into the cache; now S and S' have the same cache.
  - Case 3c: g ≠ e, f. S must evict e. Make S' evict f; now S and S' have the same cache.
- Note: S' is no longer reduced, but it can be transformed into a reduced schedule that agrees with S_FIF through step j+1.

Caching Perspective

Online vs. offline algorithms:
- Offline: the full sequence of requests is known a priori.
- Online (reality): requests are not known in advance.
- Caching is among the most fundamental online problems in CS.

- LIFO. Evict the page brought in most recently.
- LRU. Evict the page whose most recent access was earliest. (FIF with the direction of time reversed!)

Theorem. FIF is the optimal offline eviction algorithm.
- Provides a basis for understanding and analyzing online algorithms.
- LRU is k-competitive. [Section 13.8]
- LIFO is arbitrarily bad.

Single-source shortest paths

- Given an (un)directed graph G=(V,E) with each edge e having a non-negative weight w(e), and a vertex v
- Find the length of the shortest path from v to each vertex in G

A greedy algorithm: Dijkstra's Algorithm

- Maintain a set S of vertices whose shortest-path lengths are known
  - initially S = {s}
- Maintain the current best lengths of paths that only go through S to each of the vertices of G
  - path lengths to elements of S will be right; for V-S they might not be right
- Repeatedly add to S the vertex v of V-S that has the shortest such path length
  - update path lengths based on new paths through v

Dijkstra(G,w,s)
    S ← {s}
    d[s] ← 0
    while S ≠ V do
        of all edges e=(u,v) s.t. v ∉ S and u ∈ S, select* one
            with the minimum value of d[u]+w(e)
        S ← S ∪ {v}
        d[v] ← d[u]+w(e)
        pred[v] ← u

*For each v ∉ S maintain d'[v] = the minimum value of d[u]+w(e) over all vertices u ∈ S such that e=(u,v) is an edge of G
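A heap-based sketch in Python; the graph is an adjacency dict {u: [(v, w), …]} with non-negative weights, and the names are illustrative. Python's heapq has no decrease-key, so we push duplicate entries and skip stale ones:

import heapq

def dijkstra(graph, s):
    dist = {s: 0}
    pred = {}
    done = set()                        # the set S of finished vertices
    pq = [(0, s)]                       # (current best distance, vertex)
    while pq:
        d_u, u = heapq.heappop(pq)
        if u in done:
            continue                    # stale entry: a shorter path was found
        done.add(u)
        for v, w in graph.get(u, []):
            if v not in done and d_u + w < dist.get(v, float('inf')):
                dist[v] = d_u + w       # relax edge (u, v)
                pred[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, pred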

Dijkstra's Algorithm: Example Run

[A sequence of slides steps through Dijkstra's algorithm on an example weighted graph, alternating "Update distances" and "Add to S" until every vertex is in S.]

Dijkstra's Algorithm Correctness

Suppose all distances to vertices in S are correct and v has the smallest current value in V-S.
- The distance value of a vertex in V-S = the length of the shortest path from s with only the last edge leaving S.
- Suppose some other path to v exists; let x be the first vertex on this path not in S.
- Then d(v) ≤ d(x), and the x-to-v portion of the path has length ≥ 0, so the other path is no shorter.
- Therefore adding v to S keeps the distances correct.

Dijkstra's Algorithm

- The algorithm also produces a tree of shortest paths, following the pred links:
  - from any vertex w, follow its ancestors in the tree back to the source.
- If all you care about is the shortest path to a single vertex w, simply stop the algorithm when w is added to S.

Implementing Dijkstra's Algorithm

Need to:
- keep current distance values for nodes in V-S
- find the minimum current distance value
- reduce distances when a vertex is moved to S

Data Structure Review

Priority queue: elements, each with an associated key.
Operations:
- Insert
- Find-min: return the element with the smallest key
- Delete-min: return the element with the smallest key and delete it from the data structure
- Decrease-key: decrease the key value of some element

Implementations:
- Arrays: O(n) time find/delete-min, O(1) time insert/decrease-key
- Heaps: O(log n) time insert/decrease-key/delete-min, O(1) time find-min

Dijkstra's Algorithm with Priority Queues

- For each vertex u not in the tree, maintain the cost of the current cheapest path through the tree to u
  - store u in a priority queue with key = length of this path
- Operations:
  - n-1 insertions (each vertex is added once)
  - n-1 delete-mins (each vertex is deleted once): pick the vertex of smallest key, remove it from the priority queue, and add its edge to the graph
  - < m decrease-keys (each edge updates at most one vertex)

Priority queue implementations:
- Array: insert O(1), delete-min O(n), decrease-key O(1); total O(n + n² + m) = O(n²)
- Heap: insert, delete-min, decrease-key all O(log n); total O(m log n)
- d-Heap (d = m/n): insert and decrease-key O(log_{m/n} n); delete-min O((m/n) log_{m/n} n); total O(m log_{m/n} n)

Minimum Spanning Trees (Forests)

- Given an undirected graph G=(V,E) with each edge e having a weight w(e)
- Find a subgraph T of G of minimum total weight such that every pair of vertices connected in G is also connected in T
- If G is connected then T is a tree; otherwise it is a forest

[Figure: an example weighted undirected graph with edge weights including -1, 3, 4, 6, 7, 8, 10, 11, 12, 13.]

Greedy Algorithm

Prim's Algorithm:
- start at a vertex s
- add the cheapest edge adjacent to s
- repeatedly add the cheapest edge that joins the vertices explored so far to the rest of the graph
- Exactly like Dijkstra's Algorithm, but with a different metric. For comparison:

Dijkstra(G,w,s)
    S ← {s}
    d[s] ← 0
    while S ≠ V do
        of all edges e=(u,v) s.t. v ∉ S and u ∈ S, select* one
            with the minimum value of d[u]+w(e)
        S ← S ∪ {v}
        d[v] ← d[u]+w(e)
        pred[v] ← u

*For each v ∉ S maintain d'[v] = the minimum value of d[u]+w(e) over all vertices u ∈ S such that e=(u,v) is an edge of G

Prim's Algorithm

(A Python sketch follows the Kruskal description below.)

Prim(G,w,s)
    S ← {s}
    while S ≠ V do
        of all edges e=(u,v) s.t. v ∉ S and u ∈ S, select* one
            with the minimum value of w(e)
        S ← S ∪ {v}
        pred[v] ← u

*For each v ∉ S maintain small[v] = the minimum value of w(e) over all vertices u ∈ S such that e=(u,v) is an edge of G

Second Greedy Algorithm

Kruskal's Algorithm:
- Start with the vertices and no edges
- Repeatedly add the cheapest edge that joins two different components, i.e. that doesn't create a cycle
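A heap-based Prim sketch in Python over the same adjacency-dict graph representation as the Dijkstra sketch above; the names are illustrative. Note the key is the edge weight alone, not d[u]+w(e):

import heapq

def prim(graph, s):
    in_tree = {s}
    tree_edges = []
    pq = [(w, s, v) for v, w in graph.get(s, [])]   # (weight, u, v)
    heapq.heapify(pq)
    while pq:
        w, u, v = heapq.heappop(pq)
        if v in in_tree:
            continue                    # stale entry for an absorbed vertex
        in_tree.add(v)
        tree_edges.append((u, v, w))    # cheapest edge leaving the tree
        for x, wx in graph.get(v, []):
            if x not in in_tree:
                heapq.heappush(pq, (wx, v, x))
    return tree_edges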

Why greed is good: Cuts and Spanning Trees

Definition: Given a graph G=(V,E), a cut of G is a partition of V into two non-empty pieces, S and V-S.

Lemma: For every cut (S, V-S) of G, there is a minimum spanning tree (or forest) containing any cheapest edge crossing the cut, i.e. connecting some node in S with some node in V-S.
- Call such an edge safe.

[Figure: a cut of the example weighted graph, with its cheapest crossing edge highlighted.]

The greedy algorithms always choose safe edges

Prim's Algorithm:
- always chooses the cheapest edge from the current tree to the rest of the graph
- this is the cheapest edge across the cut that has the vertices of that tree on one side

Kruskal's Algorithm:
- always chooses the cheapest edge connecting two pieces of the graph that aren't yet connected
- this is the cheapest edge across any cut that has those two pieces on different sides and doesn't split any current pieces

[Figures: the example weighted graph showing Prim's current tree and Kruskal's current components, each with its chosen safe edge.]

Proof of Lemma: An Exchange Argument

- Suppose you have an MST T not using the cheapest edge e crossing the cut.
- The endpoints of e, u and v, must be connected in T, so the u-v path in T must cross the cut on some other edge h.
- w(e) ≤ w(h), so replacing h by e does not increase the weight of T.
- All the same points are connected by the new tree, so it is a spanning tree of no greater weight.

[Figure: the cut, the cheapest crossing edge e=(u,v), and the u-v path in T crossing the cut at edge h.]

Kruskal's Algorithm: Implementation & Analysis

- First sort the edges by weight: O(m log m)
- Go through the edges from smallest to largest:
  - if the endpoints of edge e are currently in different components, then add e to the graph
  - else skip e
- A union-find data structure handles the component maintenance
  - total cost of that part: O(m α(n)), where α(n) << log m
- Overall: O(m log n)

Union-find disjoint sets data structure

Maintaining components:
- start with n different components, one per vertex
- find the components of the two endpoints of e: 2m finds
- union two components when the edge connecting them is added: n-1 unions
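A Kruskal sketch in Python with a small union-find (path compression plus union by rank); edges are (w, u, v) triples over vertices 0..n-1, and the names are illustrative:

def kruskal(n, edges):
    parent = list(range(n))
    rank = [0] * n

    def find(x):                          # find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):         # the O(m log m) sort dominates
        ru, rv = find(u), find(v)
        if ru != rv:                      # different components: a safe edge
            if rank[ru] < rank[rv]:
                ru, rv = rv, ru
            parent[rv] = ru               # union by rank
            if rank[ru] == rank[rv]:
                rank[ru] += 1
            tree.append((u, v, w))
    return tree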

Prim's Algorithm with Priority Queues

- For each vertex u not in the tree, maintain the current cheapest edge from the tree to u
  - store u in a priority queue with key = the weight of this edge
- Operations:
  - n-1 insertions (each vertex is added once)
  - n-1 delete-mins (each vertex is deleted once): pick the vertex of smallest key, remove it from the priority queue, and add its edge to the graph
  - < m decrease-keys (each edge updates at most one vertex)

Priority queue implementations:
- Array: insert O(1), delete-min O(n), decrease-key O(1); total O(n + n² + m) = O(n²)
- Heap: insert, delete-min, decrease-key all O(log n); total O(m log n)
- d-Heap (d = m/n): insert and decrease-key O(log_{m/n} n); delete-min O((m/n) log_{m/n} n); total O(m log_{m/n} n)

Many other minimum spanning tree algorithms, most of them greedy

Boruvka's Algorithm (1927):
- a bit like Kruskal's Algorithm
- start with n components consisting of a single vertex each
- at each step, each component chooses its cheapest outgoing edge to add to the spanning forest
  - two components may choose to add the same edge
- useful for parallel algorithms, since components may be processed (almost) independently
- (a sketch follows after this list)

Cheriton & Tarjan: O(m loglog n) time, using a queue of components

Chazelle: O(m α(m) log α(m)) time
- an incredibly hairy algorithm

Karger, Klein & Tarjan: O(m+n) time randomized algorithm that works most of the time
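A Boruvka sketch in Python reusing the union-find idea; it assumes distinct edge weights, as in the classical analysis (the find checks also keep duplicate picks from forming cycles). Edges are (w, u, v) triples and the names are illustrative:

def boruvka(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree, components = [], n
    while components > 1:
        cheapest = {}                          # component root -> best outgoing edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, u, v)
        if not cheapest:
            break                              # disconnected graph: a forest
        for w, u, v in cheapest.values():      # two components may pick the same edge
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree.append((u, v, w))
                components -= 1
    return tree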

Applications of Minimum Spanning Tree Algorithms

Minimum cost network design:
- build a network to connect all locations {v_1,…,v_n}
- the cost of connecting v_i to v_j is w(v_i,v_j) > 0
- choose a collection of links to create that is as cheap as possible
- any minimum cost solution is an MST
  - if there were a solution containing a cycle, we could remove any edge of the cycle and get a cheaper solution

Maximum Spacing Clustering:
- Given:
  - a collection U of n objects {p_1,…,p_n}
  - a distance measure d(p_i,p_j) satisfying d(p_i,p_i)=0, d(p_i,p_j)>0 for i ≠ j, and d(p_i,p_j)=d(p_j,p_i)
  - a positive integer k ≤ n
- Find a k-clustering, i.e. a partition of U into k clusters C_1,…,C_k, such that the spacing between the clusters is as large as possible, where
  spacing = min{ d(p_i,p_j) : p_i and p_j are in different clusters }

Greedy Algorithm

- Start with n clusters, each consisting of a single point.
- Repeatedly find the closest pair of points in different clusters under distance d and merge their clusters, until only k clusters remain.

This produces the same components as Kruskal's Algorithm does! The sequence of closest pairs is exactly the MST.
Alternatively, we could run Kruskal's algorithm once and, for any k, get the maximum-spacing k-clustering by deleting the k-1 most expensive edges (see the sketch after the proof below).

Proof that this works

- Removing the k-1 most expensive edges from an MST yields k components C_1,…,C_k, and their spacing is precisely the cost d* of the (k-1)-st most expensive edge in the tree.
- Consider any other k-clustering C'_1,…,C'_k:
  - Since the two clusterings are different and cover the same set of points, there is some pair of points p_i, p_j that are in one cluster C_r of the first clustering but in different clusters C'_s and C'_t of the second.
  - Since p_i, p_j ∈ C_r, p_i and p_j have a path between them in the MST all of whose edges have distance at most d*.
  - This path must cross between clusters of the C' clustering, so the spacing of C' is at most d*.
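A sketch of the Kruskal-based k-clustering in Python: run the union-find merges in order of increasing distance and stop after n-k of them (equivalently, delete the k-1 most expensive MST edges). The function takes a distance function and the names are illustrative:

def k_clustering(points, dist, k):
    n = len(points)
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    merges = 0
    for w, i, j in edges:                 # closest pairs first, as in Kruskal
        if merges == n - k:
            break                         # exactly k clusters remain
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            merges += 1
    clusters = {}                         # group point indices by component root
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())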
