
Chapter 4

Greedy
Algorithms

Slides by Kevin Wayne.


Copyright © 2005 Pearson-Addison Wesley.
All rights reserved.

1
4.1 Interval Scheduling
Interval Scheduling

Interval scheduling.
 Job j starts at sj and finishes at fj.
 Two jobs compatible if they don't overlap.
 Goal: find maximum subset of mutually compatible jobs.

[Figure: candidate jobs (…, g, h) drawn as intervals on a time axis from 0 to 11.]

3
Interval Scheduling: Greedy Algorithms

Greedy template. Consider jobs in some natural order.
Take each job provided it's compatible with the ones already taken.

 [Earliest start time] Consider jobs in ascending order of sj.

 [Earliest finish time] Consider jobs in ascending order of fj.

 [Shortest interval] Consider jobs in ascending order of fj - sj.

 [Fewest conflicts] For each job j, count the number of conflicting jobs cj. Schedule in ascending order of cj.

4
Interval Scheduling: Greedy Algorithms

Greedy template. Consider jobs in some natural order.
Take each job provided it's compatible with the ones already taken.

counterexample for earliest start time

counterexample for shortest interval

counterexample for fewest conflicts

5
Interval Scheduling: Greedy Algorithm

Greedy algorithm. Consider jobs in increasing order of finish time.
Take each job provided it's compatible with the ones already taken.

Sort jobs by finish times so that f1 ≤ f2 ≤ ... ≤ fn.

A ← ∅                          (set of jobs selected)
for j = 1 to n {
if (job j compatible with A)
A ← A ∪ {j}
}
return A

Implementation. O(n log n).


Remember job j* that was added last to A.
Job j is compatible with A if sj ≥ fj*.
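A minimal runnable sketch of this rule, for reference (plain Python; representing a job as a (start, finish) pair is an assumption made for illustration):

# Sketch of the earliest-finish-time greedy rule described above.
def interval_scheduling(jobs):
    """Return a maximum set of mutually compatible (start, finish) jobs."""
    selected = []
    last_finish = float("-inf")          # finish time fj* of the job added last
    for start, finish in sorted(jobs, key=lambda job: job[1]):   # ascending fj
        if start >= last_finish:         # job j compatible with A iff sj >= fj*
            selected.append((start, finish))
            last_finish = finish
    return selected

# Example: the long job (0, 6) is skipped in favor of compatible shorter ones.
print(interval_scheduling([(0, 6), (1, 4), (3, 5), (5, 7), (8, 9)]))
# -> [(1, 4), (5, 7), (8, 9)]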

6
Interval Scheduling: Analysis

Theorem. Greedy algorithm is optimal.

Pf. (by contradiction)


 Assume greedy is not optimal, and let's see what happens.
 Let i1, i2, ... ik denote set of jobs selected by greedy.
 Let j1, j2, ... jm denote set of jobs in the optimal solution with
i1 = j1, i2 = j2, ..., ir = jr for the largest possible value of r.

job ir+1 finishes before jr+1

Greedy: i1 i2 ir ir+1

OPT: j1 j2 jr jr+1 ...

why not replace job jr+1 with job ir+1?

7
Interval Scheduling: Analysis

Theorem. Greedy algorithm is optimal.

Pf. (by contradiction)


 Assume greedy is not optimal, and let's see what happens.
 Let i1, i2, ... ik denote set of jobs selected by greedy.
 Let j1, j2, ... jm denote set of jobs in the optimal solution with
i1 = j1, i2 = j2, ..., ir = jr for the largest possible value of r.

job ir+1 finishes before jr+1

Greedy: i1 i2 ir ir+1

OPT: j1 j2 jr ir+1 ...

solution still feasible and optimal, but contradicts maximality of r.

8
4.1 Interval Partitioning
Interval Partitioning

Interval partitioning.
 Lecture j starts at sj and finishes at fj.
 Goal: find minimum number of classrooms to schedule all lectures
so that no two occur at the same time in the same room.

Ex: This schedule uses 4 classrooms to schedule 10 lectures.

[Figure: 4 classrooms (room 4: e, j; room 3: c, d, g; room 2: b, h; room 1: a, f, i) on a time axis from 9:00 to 4:30.]

10
Interval Partitioning

Interval partitioning.
 Lecture j starts at sj and finishes at fj.
 Goal: find minimum number of classrooms to schedule all lectures
so that no two occur at the same time in the same room.

Ex: This schedule uses only 3.

[Figure: 3 classrooms (room 3: c, d, f, j; room 2: b, g, i; room 1: a, e, h) on a time axis from 9:00 to 4:30.]

11
Interval Partitioning: Lower Bound on Optimal Solution

Def. The depth of a set of open intervals is the maximum number that
contain any given time.

Key observation. Number of classrooms needed ≥ depth.

Ex: Depth of schedule below = 3 ⇒ schedule below is optimal.


a, b, c all contain 9:30

Q. Does there always exist a schedule equal to depth of intervals?

[Figure: the 3-classroom schedule from the previous slide, with a, b, c overlapping at 9:30.]

12
Interval Partitioning: Greedy Algorithm

Greedy algorithm. Consider lectures in increasing order of start time:
assign lecture to any compatible classroom.

Sort intervals by starting time so that s1 ≤ s2 ≤ ... ≤ sn.

d ← 0                          (number of allocated classrooms)

for j = 1 to n {
if (lecture j is compatible with some classroom k)
schedule lecture j in classroom k
else
allocate a new classroom d + 1
schedule lecture j in classroom d + 1
d ← d + 1
}

Implementation. O(n log n).


For each classroom k, maintain the finish time of the last job added.
Keep the classrooms in a priority queue.
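A minimal sketch of this greedy with the classrooms kept in a priority queue keyed by last finish time (plain Python using heapq; the (start, finish) lecture representation is an assumption made for illustration):

import heapq

# Sketch of the interval-partitioning greedy: lectures in ascending start time,
# classrooms in a min-heap keyed by the finish time of their last lecture.
def interval_partitioning(lectures):
    """Return the number of classrooms used for (start, finish) lectures."""
    rooms = []                               # heap of last finish times
    for start, finish in sorted(lectures):   # ascending start time
        if rooms and rooms[0] <= start:      # some classroom is compatible
            heapq.heapreplace(rooms, finish) # reuse it
        else:
            heapq.heappush(rooms, finish)    # allocate a new classroom
    return len(rooms)

# Example: three lectures overlap at one time (depth 3), so 3 classrooms are used.
print(interval_partitioning([(9, 11), (9, 13), (10, 12), (11, 14), (13, 15)]))   # -> 3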

13
Interval Partitioning: Greedy Analysis

Observation. Greedy algorithm never schedules two incompatible lectures in the same classroom.

Theorem. Greedy algorithm is optimal.


Pf.
 Let d = number of classrooms that the greedy algorithm allocates.
 Classroom d is opened because we needed to schedule a job, say j,
that is incompatible with all d-1 other classrooms.
 These d jobs each end after sj.
 Since we sorted by start time, all these incompatibilities are caused
by lectures that start no later than sj.
 Thus, we have d lectures overlapping at time sj + ε.
 Key observation ⇒ all schedules use ≥ d classrooms. ▪

14
4.2 Scheduling to Minimize Lateness
Scheduling to Minimize Lateness

Minimizing lateness problem.


 Single resource processes one job at a time.
 Job j requires tj units of processing time and is due at time dj.
 If j starts at time sj, it finishes at time fj = sj + tj.
 Lateness: lj = max { 0, fj - dj }.
 Goal: schedule all jobs to minimize maximum lateness L = max lj.

Ex:
   job    1   2   3   4   5   6
   tj     3   2   1   4   3   2
   dj     6   8   9   9  14  15

[Figure: the schedule d3, d2, d6, d1, d5, d4 over time 0 to 15; job 1 has lateness 2, job 4 has lateness 6, so max lateness = 6.]

16
Minimizing Lateness: Greedy Algorithms

Greedy template. Consider jobs in some order.

 [Shortest processing time first] Consider jobs in ascending order of processing time tj.

 [Earliest deadline first] Consider jobs in ascending order of deadline dj.

 [Smallest slack] Consider jobs in ascending order of slack dj - tj.

17
Minimizing Lateness: Greedy Algorithms

Greedy template. Consider jobs in some order.

 [Shortest processing time first] Consider jobs in ascending order of processing time tj.

   counterexample:    job    1    2
                      tj     1   10
                      dj   100   10

 [Smallest slack] Consider jobs in ascending order of slack dj - tj.

   counterexample:    job    1    2
                      tj     1   10
                      dj     2   10

18
Minimizing Lateness: Greedy Algorithm

Greedy algorithm. Earliest deadline first.

Sort n jobs by deadline so that d1 ≤ d2 ≤ … ≤ dn

t ← 0
for j = 1 to n
Assign job j to interval [t, t + tj]
sj ← t, fj ← t + tj
t ← t + tj
output intervals [sj, fj]

max lateness = 1

[Figure: the EDF schedule d1 = 6, d2 = 8, d3 = 9, d4 = 9, d5 = 14, d6 = 15 over time 0 to 15.]
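A runnable sketch of the earliest-deadline-first schedule above (plain Python; representing a job as a (processing time, deadline) pair is an assumption made for illustration):

# Sketch of the earliest-deadline-first greedy: sort by deadline, run back to back.
def edf_schedule(jobs):
    """jobs: list of (tj, dj). Return the intervals [sj, fj] and the max lateness."""
    t, intervals, max_lateness = 0, [], 0
    for proc, due in sorted(jobs, key=lambda job: job[1]):   # d1 <= d2 <= ... <= dn
        start, finish = t, t + proc
        intervals.append((start, finish))
        max_lateness = max(max_lateness, finish - due)       # lj = max{0, fj - dj}
        t = finish
    return intervals, max_lateness

# The example above: tj = 3, 2, 1, 4, 3, 2 and dj = 6, 8, 9, 9, 14, 15.
print(edf_schedule([(3, 6), (2, 8), (1, 9), (4, 9), (3, 14), (2, 15)]))   # max lateness = 1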

19
Minimizing Lateness: No Idle Time

Observation. There exists an optimal schedule with no idle time.

[Figure: a schedule with idle time vs. the same jobs (deadlines 4, 6, 12) shifted left to fill the gap, with no idle time.]

Observation. The greedy schedule has no idle time.

20
Minimizing Lateness: Inversions

Def. Given a schedule S, an inversion is a pair of jobs i and j such that:
i < j but j is scheduled before i.
[Figure: before swap, job j is scheduled immediately before job i; fi marks where i finishes.]

[ as before, we assume jobs are numbered so that d1 ≤ d2 ≤ … ≤ dn ]

Observation. Greedy schedule has no inversions.

Observation. If a schedule (with no idle time) has an inversion, it has one with a pair of inverted jobs scheduled consecutively.

21
Minimizing Lateness: Inversions

Def. Given a schedule S, an inversion is a pair of jobs i and j such that:
i < j but j is scheduled before i.
[Figure: before swap, j immediately precedes i and i finishes at fi; after swap, i precedes j and j finishes at f'j = fi.]

Claim. Swapping two consecutive, inverted jobs reduces the number of inversions by one and does not increase the max lateness.

Pf. Let l be the lateness before the swap, and let l' be it afterwards.
 l'k = lk for all k ≠ i, j
 l'i ≤ li
 If job j is late:
      l'j = f'j - dj       (definition)
          = fi - dj        (j now finishes at time fi)
          ≤ fi - di        (i < j, so di ≤ dj)
          ≤ li             (definition)   ▪

22
Minimizing Lateness: Analysis of Greedy Algorithm

Theorem. Greedy schedule S is optimal.


Pf. Define S* to be an optimal schedule that has the fewest number of
inversions, and let's see what happens.
 Can assume S* has no idle time.
 If S* has no inversions, then S = S*.
 If S* has an inversion, let i-j be an adjacent inversion.
– swapping i and j does not increase the maximum lateness and
strictly decreases the number of inversions
– this contradicts definition of S* ▪

23
Greedy Analysis Strategies

Greedy algorithm stays ahead. Show that after each step of the greedy
algorithm, its solution is at least as good as any other algorithm's.

Structural. Discover a simple "structural" bound asserting that every possible solution must have a certain value. Then show that your algorithm always achieves this bound.

Exchange argument. Gradually transform any solution to the one found by the greedy algorithm without hurting its quality.

Other greedy algorithms. Kruskal, Prim, Dijkstra, Huffman, …

24
4.3 Optimal Caching
Optimal Offline Caching

Caching.
 Cache with capacity to store k items.
 Sequence of m item requests d1, d2, …, dm.
 Cache hit: item already in cache when requested.
 Cache miss: item not already in cache when requested: must bring
requested item into cache, and evict some existing item, if full.

Goal. Eviction schedule that minimizes number of cache misses.

Ex: k = 2, initial cache = ab, requests: a, b, c, b, c, a, a, b.
Optimal eviction schedule: 2 cache misses (marked *).

   request   cache
      a       a b
      b       a b
      c       c b   *
      b       c b
      c       c b
      a       a b   *
      a       a b
      b       a b
26
Optimal Offline Caching: Farthest-In-Future

Farthest-in-future. Evict item in the cache that is not requested until farthest in the future.

current cache:   a b c d e f
future queries:  g a b c e d a b b a c d e a f a d e f g h ...

cache miss on g: eject the cached item whose next request is farthest in the future (here, f).

Theorem. [Belady, 1960s] FF is optimal eviction schedule.
Pf. Algorithm and theorem are intuitive; proof is subtle.
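A small simulation of farthest-in-future eviction, for reference (plain Python; starting from an empty cache and breaking ties arbitrarily are assumptions, since the slide does not specify them):

# Sketch of Belady's farthest-in-future rule: on a miss with a full cache,
# evict the item whose next request lies farthest in the future (or never occurs).
def farthest_in_future(requests, k):
    """Return the number of cache misses for a cache of capacity k."""
    cache, misses = set(), 0
    for i, item in enumerate(requests):
        if item in cache:
            continue                                   # cache hit
        misses += 1                                    # cache miss
        if len(cache) == k:
            def next_use(x):                           # position of x's next request
                future = requests[i + 1:]
                return future.index(x) if x in future else float("inf")
            cache.remove(max(cache, key=next_use))     # evict farthest-in-future item
        cache.add(item)
    return misses

# Slide example: k = 2, requests a, b, c, b, c, a, a, b.
# Starting empty adds 2 compulsory misses, so FF incurs 2 + 2 = 4 misses here.
print(farthest_in_future(list("abcbcaab"), 2))         # -> 4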

27
Reduced Eviction Schedules

Def. A reduced schedule is a schedule that only inserts an item into the cache in a step in which that item is requested.

Intuition. Can transform an unreduced schedule into a reduced one with no more cache misses.

   an unreduced schedule            a reduced schedule
   request   cache                  request   cache
      a      a b c                     a      a b c
      a      a x c                     a      a b c
      c      a d c                     c      a b c
      d      a d b                     d      a d c
      a      a c b                     a      a d c
      b      a x b                     b      a d b
      c      a c b                     c      a c b
      a      a b c                     a      a c b
      a      a b c                     a      a c b

28
Reduced Eviction Schedules

Claim. Given any unreduced schedule S, can transform it into a reduced
schedule S' with no more cache misses.

Pf. (by induction on number of unreduced items)
 Suppose S brings d into the cache at time t, without a request
   (d doesn't enter the cache at a requested time).
 Let c be the item S evicts when it brings d into the cache.
 Case 1: d evicted at time t', before next request for d.
 Case 2: d requested at time t' before d is evicted. ▪

[Figure: caches of S and S' between times t and t' in Case 1 (d evicted at time t', before the next request for d) and Case 2 (d requested at time t' before d is evicted).]
29
Farthest-In-Future: Analysis

Theorem. FF is optimal eviction algorithm.

Pf. (by induction on number of requests j)

Invariant: There exists an optimal reduced schedule S that makes the same eviction schedule as SFF through the first j+1 requests.

Let S be a reduced schedule that satisfies the invariant through j requests.
We produce S' that satisfies the invariant after j+1 requests.
 Consider (j+1)st request d = dj+1.
 Since S and SFF have agreed up until now, they have the same cache
contents before request j+1.
 Case 1: (d is already in the cache). S' = S satisfies invariant.
 Case 2: (d is not in the cache and S and SFF evict the same element).
S' = S satisfies invariant.

30
Farthest-In-Future: Analysis

Pf. (continued)
 Case 3: (d is not in the cache; SFF evicts e; S evicts f ≠ e).
– begin construction of S' from S by evicting e instead of f

[Figure: before request j+1, S and S' have identical caches containing both e and f; after request j+1, S's cache holds e and d (f evicted), while S''s cache holds d and f (e evicted).]

– now S' agrees with SFF on first j+1 requests; we show that having
element f in cache is no worse than having element e

31
Farthest-In-Future: Analysis

Let j' be the first time after j+1 that S and S' take a different action, and let g be the item requested at time j'. (The differing action must involve e or f, or both.)

[Figure: at time j', S's cache contains e where S''s cache contains f; otherwise the caches agree.]

 Case 3a: g = e. Can't happen with Farthest-In-Future since there must be a request for f before e.

 Case 3b: g = f. Element f can't be in cache of S, so let e' be the element that S evicts.
– if e' = e, S' accesses f from cache; now S and S' have same cache
– if e' ≠ e, S' evicts e' and brings e into the cache; now S and S'
have the same cache

Note: S' is no longer reduced, but can be transformed into a reduced schedule that agrees with SFF through step j+1.

32
Farthest-In-Future: Analysis

Let j' be the first time after j+1 that S and S' take a different action, and let g be the item requested at time j'. (The differing action must involve e or f, or both.)

[Figure: at time j', S's cache contains e where S''s cache contains f; otherwise the caches agree.]

 Case 3c: g ≠ e, f. S must evict e (otherwise S' would take the same action).
   Make S' evict f; now S and S' have the same cache. ▪


33
Caching Perspective

Online vs. offline algorithms.


 Offline: full sequence of requests is known a priori.
 Online (reality): requests are not known in advance.
 Caching is among most fundamental online problems in CS.

LIFO. Evict page brought in most recently.

LRU. Evict page whose most recent access was earliest.   (FF with direction of time reversed!)

Theorem. FF is optimal offline eviction algorithm.


Provides basis for understanding and analyzing online algorithms.
LRU is k-competitive. [Section 13.8]
LIFO is arbitrarily bad.

34
4.4 Shortest Paths in a Graph

shortest path from Princeton CS department to Einstein's house


Shortest Path Problem

Shortest path network.


Directed graph G = (V, E).
Source s, destination t.
Length le = length of edge e.

Shortest path problem: find shortest directed path from s to t.

cost of path = sum of edge costs in path

[Figure: example digraph with source s, destination t, and edge lengths.]

Cost of path s-2-3-5-t = 9 + 23 + 2 + 16 = 50.

36
Dijkstra's Algorithm

Dijkstra's algorithm.
 Maintain a set of explored nodes S for which we have determined
   the shortest path distance d(u) from s to u.
 Initialize S = { s }, d(s) = 0.
 Repeatedly choose unexplored node v which minimizes

      π(v) =     min      ( d(u) + le ),
             e = (u,v) : u ∈ S

   add v to S, and set d(v) = π(v).

   (π(v) = length of a shortest path to some u in the explored part, followed by a single edge (u, v))

[Figure: explored set S containing s and u; the shortest s-u path of length d(u) is followed by a single edge (u, v) of length le to the unexplored node v.]

37
Dijkstra's Algorithm: Proof of Correctness

Invariant. For each node u ∈ S, d(u) is the length of the shortest s-u path.
Pf. (by induction on |S|)
Base case: |S| = 1 is trivial.
Inductive hypothesis: Assume true for |S| = k ≥ 1.
 Let v be next node added to S, and let u-v be the chosen edge.
 The shortest s-u path plus (u, v) is an s-v path of length π(v).
 Consider any s-v path P. We'll see that it's no shorter than π(v).
 Let x-y be the first edge in P that leaves S, and let P' be the subpath to x.
 P is already too long as soon as it leaves S:

      l(P) ≥ l(P') + l(x, y) ≥ d(x) + l(x, y) ≥ π(y) ≥ π(v)

   (the inequalities follow from nonnegative weights, the inductive hypothesis,
    the definition of π(y), and Dijkstra choosing v instead of y, respectively)

[Figure: path P leaves the explored set S along edge (x, y) before reaching v.]

39
Dijkstra's Algorithm: Implementation

For each unexplored node v, explicitly maintain

      π(v) =     min      ( d(u) + le ).
             e = (u,v) : u ∈ S

 Next node to explore = node with minimum π(v).
 When exploring v, for each incident edge e = (v, w), update

      π(w) = min { π(w), π(v) + le }.

Efficient implementation. Maintain a priority queue of unexplored nodes, prioritized by π(v).

Priority Queue

   PQ Operation    Dijkstra    Array     Binary heap    d-way heap       Fib heap †
   Insert             n          n         log n         d log_d n           1
   ExtractMin         n          n         log n         d log_d n         log n
   ChangeKey          m          1         log n          log_d n            1
   IsEmpty            n          1           1               1               1
   Total                        n^2       m log n      m log_{m/n} n    m + n log n

† Individual ops are amortized bounds
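A compact sketch of Dijkstra's algorithm with a binary heap (plain Python using heapq; since heapq has no ChangeKey, this sketch uses the common lazy-deletion variant, and the adjacency-dict graph format is an assumption made for illustration):

import heapq

# Sketch of Dijkstra's algorithm; graph maps u -> list of (v, le) with le >= 0.
def dijkstra(graph, s):
    """Return dict d of shortest-path distances from s to every reachable node."""
    d = {}                                   # explored nodes S and their distances
    pq = [(0, s)]                            # priority queue of (pi(v), v)
    while pq:
        dist, u = heapq.heappop(pq)
        if u in d:                           # stale entry: u already explored
            continue
        d[u] = dist                          # add u to S and set d(u) = pi(u)
        for v, length in graph.get(u, []):
            if v not in d:
                heapq.heappush(pq, (dist + length, v))   # relax edge (u, v)
    return d

# Tiny example.
graph = {"s": [("a", 9), ("b", 14)], "a": [("b", 2), ("t", 23)], "b": [("t", 7)]}
print(dijkstra(graph, "s"))                  # {'s': 0, 'a': 9, 'b': 11, 't': 18}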

40
Edsger W. Dijkstra

The question of whether computers can think is like the question of whether submarines can swim.

Do only what only you can do.

In their capacity as a tool, computers will be but a ripple on the surface of our culture. In their capacity as intellectual challenge, they are without precedent in the cultural history of mankind.

The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offence.

APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums.

41
Extra Slides
Coin Changing

Greed is good. Greed is right. Greed works.
Greed clarifies, cuts through, and captures the essence of the evolutionary spirit.
   - Gordon Gekko (Michael Douglas)
Coin Changing

Goal. Given currency denominations: 1, 5, 10, 25, 100, devise a method to pay amount to customer using fewest number of coins.

Ex: 34¢.

Cashier's algorithm. At each iteration, add coin of the largest value that does not take us past the amount to be paid.

Ex: $2.89.

44
Coin-Changing: Greedy Algorithm

Cashier's algorithm. At each iteration, add coin of the largest value that does not take us past the amount to be paid.

Sort coin denominations by value: c1 < c2 < … < cn.

S ← ∅                          (S = coins selected; x = amount to pay)
while (x ≠ 0) {
let k be largest integer such that ck ≤ x
if (k = 0)
return "no solution found"
x ← x - ck
S ← S ∪ {k}
}
return S
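A runnable sketch of the cashier's algorithm above (plain Python; returning the multiset of coin values rather than denomination indices is a small presentational choice):

# Sketch of the cashier's greedy: repeatedly take the largest coin that fits.
def cashiers_algorithm(x, denominations=(1, 5, 10, 25, 100)):
    """Return a list of coin values that pay amount x, largest coins first."""
    coins = []
    for c in sorted(denominations, reverse=True):
        while c <= x:                        # take coin c while it does not overshoot
            coins.append(c)
            x -= c
    if x != 0:
        raise ValueError("no solution found")
    return coins

print(cashiers_algorithm(34))                # [25, 5, 1, 1, 1, 1]: 6 coins for 34 cents
print(cashiers_algorithm(289))               # $2.89 -> [100, 100, 25, 25, 25, 10, 1, 1, 1, 1]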

Q. Is cashier's algorithm optimal?

45
Coin-Changing: Analysis of Greedy Algorithm

Theorem. Greedy algorithm is optimal for U.S. coinage: 1, 5, 10, 25, 100.
Pf. (by induction on x)
 Consider optimal way to change ck ≤ x < ck+1 : greedy takes coin k.
 We claim that any optimal solution must also take coin k.
– if not, it needs enough coins of type c1, …, ck-1 to add up to x
– table below indicates no optimal solution can do this
 Problem reduces to coin-changing x - ck cents, which, by induction, is
optimally solved by greedy algorithm. ▪

   k     ck    All optimal solutions must satisfy    Max value of coins 1, 2, …, k-1 in any OPT
   1      1    P ≤ 4                                 -
   2      5    N ≤ 1                                 4
   3     10    N + D ≤ 2                             4 + 5 = 9
   4     25    Q ≤ 3                                 20 + 4 = 24
   5    100    no limit                              75 + 24 = 99

46
Coin-Changing: Analysis of Greedy Algorithm

Observation. Greedy algorithm is sub-optimal for US postal denominations: 1, 10, 21, 34, 70, 100, 350, 1225, 1500.

Counterexample. 140¢.
 Greedy: 100, 34, 1, 1, 1, 1, 1, 1.
 Optimal: 70, 70.

47
Selecting Breakpoints
Selecting Breakpoints

Selecting breakpoints.
 Road trip from Princeton to Palo Alto along fixed route.
 Refueling stations at certain points along the way.
 Fuel capacity = C.
 Goal: make as few refueling stops as possible.

Greedy algorithm. Go as far as you can before refueling.

[Figure: route from Princeton to Palo Alto with refueling stations 1 through 7; arcs of length at most C show how far a full tank reaches.]

49
Selecting Breakpoints: Greedy Algorithm

Truck driver's algorithm.

Sort breakpoints so that: 0 = b0 < b1 < b2 < ... < bn = L

S ← {0}                        (breakpoints selected)
x ← 0                          (current location)

while (x ≠ bn)
let p be largest integer such that bp ≤ x + C
if (bp = x)
return "no solution"
x ← bp
S ← S ∪ {p}
return S

Implementation. O(n log n).
Use binary search to select each breakpoint p.
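A sketch of the truck driver's greedy, using binary search (bisect) to pick each breakpoint as suggested above (plain Python; the sorted breakpoint list with b[0] = 0 and b[-1] = L matches the pseudocode's assumptions):

from bisect import bisect_right

# Sketch of the breakpoint-selection greedy: from x, jump to the farthest
# breakpoint reachable on a full tank of capacity C.
def select_breakpoints(b, C):
    """b: sorted breakpoints with b[0] = 0 and b[-1] = L. Return the chosen stops."""
    stops, x = [0], 0
    while x != b[-1]:
        p = bisect_right(b, x + C) - 1       # largest p such that b[p] <= x + C
        if b[p] == x:                        # cannot reach the next breakpoint
            raise ValueError("no solution")
        x = b[p]
        stops.append(x)
    return stops

# Example: capacity C = 4, destination L = 10.
print(select_breakpoints([0, 2, 3, 5, 7, 8, 10], 4))   # [0, 3, 7, 10]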

50
Selecting Breakpoints: Correctness

Theorem. Greedy algorithm is optimal.

Pf. (by contradiction)


 Assume greedy is not optimal, and let's see what happens.
 Let 0 = g0 < g1 < . . . < gp = L denote set of breakpoints chosen by greedy.
 Let 0 = f0 < f1 < . . . < fq = L denote set of breakpoints in an optimal
solution with f0 = g0, f1= g1 , . . . , fr = gr for largest possible value of r.
 Note: gr+1 > fr+1 by greedy choice of algorithm.

Greedy:  g0   g1   g2   . . .   gr   gr+1
OPT:     f0   f1   f2   . . .   fr   fr+1   . . .   fq

why doesn't the optimal solution drive a little further?

51
Selecting Breakpoints: Correctness

Theorem. Greedy algorithm is optimal.

Pf. (by contradiction)


 Assume greedy is not optimal, and let's see what happens.
 Let 0 = g0 < g1 < . . . < gp = L denote set of breakpoints chosen by greedy.
 Let 0 = f0 < f1 < . . . < fq = L denote set of breakpoints in an optimal
solution with f0 = g0, f1= g1 , . . . , fr = gr for largest possible value of r.
 Note: gr+1 > fr+1 by greedy choice of algorithm.

Greedy:  g0   g1   g2   . . .   gr   gr+1
OPT:     f0   f1   f2   . . .   fr   . . .   fq

another optimal solution has one more breakpoint in common ⇒ contradiction

52
