3. Greedy Method
• A greedy algorithm always makes the choice
that looks best at the moment.
Optimization problems
• An optimization problem is one in which you want
to find, not just a solution, but the best solution
• A “greedy algorithm” sometimes works well for
optimization problems
• A greedy algorithm works in phases. At each
phase:
– You take the best you can get right now, without regard
for future consequences
– You hope that by choosing a local optimum at each
step, you will end up at a global optimum
A scheduling problem
• You have to run nine jobs, with running times of 3, 5, 6,
10, 11, 14, 15, 18, and 20 minutes
• You have three processors on which you can run these jobs
• You decide to do the longest-running jobs first, on
whatever processor is available
• The greedy (longest-job-first) schedule:
  P1: 20, 10, 3   (33 minutes)
  P2: 18, 11, 6   (35 minutes)
  P3: 15, 14, 5   (34 minutes)
  Time to completion is 18 + 11 + 6 = 35 minutes
• Running the shortest jobs first instead gives:
  P1: 3, 10, 15
  P2: 5, 11, 18
  P3: 6, 14, 20
• That wasn’t such a good idea; time to completion is now
  6 + 14 + 20 = 40 minutes
• Note, however, that the greedy algorithm itself is fast
  – All we had to do at each stage was pick the minimum or maximum
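A minimal sketch of this greedy rule, assuming the job list from the example; the name greedy_schedule and the heap-based bookkeeping are illustrative, not from the slides. Each job, taken in the chosen order, is handed to whichever processor becomes free first.

```python
import heapq

def greedy_schedule(jobs, processors, longest_first=True):
    """Assign each job to the processor that becomes free earliest.

    Returns the per-processor job lists and the completion time (makespan).
    """
    order = sorted(jobs, reverse=longest_first)
    # Heap of (time when this processor becomes free, processor index).
    free_at = [(0, p) for p in range(processors)]
    heapq.heapify(free_at)
    assignment = [[] for _ in range(processors)]
    for job in order:
        t, p = heapq.heappop(free_at)          # earliest-free processor
        assignment[p].append(job)
        heapq.heappush(free_at, (t + job, p))
    makespan = max(t for t, _ in free_at)
    return assignment, makespan

jobs = [3, 5, 6, 10, 11, 14, 15, 18, 20]
print(greedy_schedule(jobs, 3, longest_first=True))   # makespan 35, as above
print(greedy_schedule(jobs, 3, longest_first=False))  # makespan 40, as above
```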
An optimum solution
• Better solutions do exist:
P1: 20, 14         (34 minutes)
P2: 18, 11, 5      (34 minutes)
P3: 15, 10, 6, 3   (34 minutes)
• This solution is clearly optimal (why?)
• Clearly, there are other optimal solutions (why?)
• How do we find such a solution?
– One way: Try all possible assignments of jobs to processors
– Unfortunately, this approach can take exponential time
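For contrast, here is a brute-force sketch (best_assignment is an illustrative name) that tries every assignment of jobs to processors; with 9 jobs and 3 processors that is 3^9 = 19,683 cases, and the count grows exponentially with the number of jobs.

```python
from itertools import product

def best_assignment(jobs, processors):
    """Exhaustively try every job-to-processor assignment (p**n cases)."""
    best_makespan, best = None, None
    for choice in product(range(processors), repeat=len(jobs)):
        loads = [0] * processors
        for job, p in zip(jobs, choice):
            loads[p] += job                      # total running time on processor p
        makespan = max(loads)
        if best_makespan is None or makespan < best_makespan:
            best_makespan, best = makespan, choice
    return best_makespan, best

jobs = [3, 5, 6, 10, 11, 14, 15, 18, 20]
print(best_assignment(jobs, 3))   # optimal makespan is 34 minutes
```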
Fractional Knapsack Problem
We are given n objects and a bag or knapsack. Object i has a weight w_i, and the knapsack has a capacity m. If a fraction x_i, 0 ≤ x_i ≤ 1, of object i is placed into the knapsack, then a profit of p_i x_i is earned. The objective is to

    maximize    ∑_{1 ≤ i ≤ n} p_i x_i
    subject to  ∑_{1 ≤ i ≤ n} w_i x_i ≤ m
    and         0 ≤ x_i ≤ 1,  1 ≤ i ≤ n.

A feasible solution is any vector (x_1, …, x_n) satisfying the above inequalities. An optimal solution is a feasible solution for which the objective function is maximized.
0-1 Knapsack Problem

    maximize    ∑_{1 ≤ i ≤ n} p_i x_i
    subject to  ∑_{1 ≤ i ≤ n} w_i x_i ≤ m
    and         x_i ∈ {0, 1},  1 ≤ i ≤ n.

The profits and weights are positive numbers; here each x_i must be 0 or 1.
Greedy Algorithms for the Knapsack Problem
Several simple greedy strategies suggest themselves.
• Example: n = 3, m = 20,
• (p1, p2, p3) = (25, 24, 15)
• (w1, w2, w3) = (18, 15, 10)
Being greedy with respect to profit (largest p_i first) fills object 1 completely and then a 2/15 fraction of object 2, giving x = (1, 2/15, 0) with profit 28.2. But this solution is not the best one: (0, 1, 1/2) is a feasible solution having profit 31.5.
Another greedy approach
Be greedy with respect to capacity (smallest weight first): take object 3 completely and then 2/3 of object 2, giving x = (0, 2/3, 1) with profit 31. Still not optimal.
New greedy approach
Be greedy with respect to the profit-to-weight ratio p_i/w_i, taking objects in non-increasing ratio order: take object 2 completely and then 1/2 of object 3, giving x = (0, 1, 1/2) with profit 31.5.
So this strategy gave the optimal solution for this data.
There seem to be 3 obvious greedy strategies:
(Max value) Sort the objects from the highest value to the lowest, then pick them in that order.
(Min weight) Sort the objects from the lowest weight to the highest, then pick them in that order.
(Max value per unit weight) Sort the objects from the highest value-to-weight ratio to the lowest, then pick them in that order.
Example: Given n = 5 objects and a knapsack capacity W = 100 as in Table I. The three greedy solutions are given in Table II.

Table I
  w:    10   20   30   40   50
  v:    20   30   66   40   60
  v/w:  2.0  1.5  2.2  1.0  1.2

Table II
  Strategy       selected x_i               value
  Max v_i        0    0    1    0.5   1     146
  Min w_i        1    1    1    1     0     156
  Max v_i/w_i    1    1    1    0     0.8   164
The Optimal Knapsack Algorithm (time complexity O(n log n)): sort the objects by profit-to-weight ratio and fill the knapsack greedily in that order.
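The algorithm itself is not reproduced on the slide, so the following is a minimal sketch of the greedy procedure it refers to, under the assumption that it sorts objects by p_i/w_i and fills in that order; the O(n log n) cost comes from the sort, and fractional_knapsack is an illustrative name.

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: fill objects in order of decreasing p/w."""
    n = len(profits)
    # Sort object indices by profit-to-weight ratio, largest first: O(n log n).
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * n
    remaining = capacity
    for i in order:
        if remaining <= 0:
            break
        take = min(weights[i], remaining)   # whole object if it fits, else a fraction
        x[i] = take / weights[i]
        remaining -= take
    profit = sum(p * xi for p, xi in zip(profits, x))
    return x, profit

# Example from the slides: n = 3, m = 20.
print(fractional_knapsack([25, 24, 15], [18, 15, 10], 20))
# -> ([0.0, 1.0, 0.5], 31.5)
```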
Theorem: If p1/w1 ≥ p2/w2 ≥ … ≥ pn/wn, then filling the knapsack with objects in this order produces an optimal solution.

Proof: Let x = (x1, …, xn) be the greedy solution. If all xi = 1, the solution is clearly optimal, so let j be the least index such that xj ≠ 1. Then xi = 1 for all 1 ≤ i < j, xi = 0 for j < i ≤ n, and 0 ≤ xj < 1.

Let y = (y1, …, yn) be an optimal solution, and let k be the least index at which x and y differ; then yk < xk. Increase yk to xk and decrease as many of yk+1, …, yn as necessary so that the total weight used is unchanged; call the resulting feasible solution z. Then

∑_{1≤i≤n} pi zi = ∑_{1≤i≤n} pi yi + (zk − yk) wk (pk/wk) − ∑_{k<i≤n} (yi − zi) wi (pi/wi)
               ≥ ∑_{1≤i≤n} pi yi,

since pk/wk ≥ pi/wi for every i > k and the weight added at position k equals the weight removed from later positions.

If ∑ pi zi > ∑ pi yi, then y could not have been an optimal solution. If these sums are equal, then either z = x and x is optimal, or z ≠ x, in which case the argument is repeated until y has been transformed into x without any loss of profit.
Optimal 2-way Merge patterns and Huffman Codes:
Example. Suppose there are 3 sorted lists L1, L2, and L3, of
sizes 30, 20, and 10, respectively, which need to be merged
into a combined sorted list, but we can merge only two at a
time.
We intend to find an optimal merge pattern which minimizes
the total number of comparisons.
For example, we can merge L1 and L2, which uses 30 + 20 =
50 comparisons resulting in a list of size 50.
We can then merge this list with list L3, using another 50 + 10 = 60 comparisons, so the total number of comparisons is 50 + 60 = 110.
Alternatively, we can first merge lists L2 and L3, using 20 + 10 = 30 comparisons; the resulting list (of size 30) can then be merged with list L1, using another 30 + 30 = 60 comparisons.
So the total number of comparisons is 30 + 60 = 90.
It doesn’t take long to see that this latter merge pattern is the
optimal one.
Binary Merge Trees:
We can depict the merge patterns using a binary tree, built from
the leaf nodes (the initial lists) towards the root in which each
merge of two nodes creates a parent node whose size is the sum of
the sizes of the two children. For example, the two previous merge
patterns are depicted in the following two figures:
[Figures: the two binary merge trees. First pattern: cost = 30*2 + 20*2 + 10*1 = 110. Second pattern: cost = 30*1 + 20*2 + 10*2 = 90.]
Greedy merge example with lists of sizes 2, 3, 5, 7, 9:
Iteration 1: merge 2 and 3 into 5.
Iteration 2: merge 5 and 5 into 10.
Iteration 3: merge 7 and 9 (chosen among 7, 9, and 10) into 16.
Iteration 4: merge 10 and 16 into 26.
Cost = 2*3 + 3*3 + 5*2 + 7*2 + 9*2 = 57.
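A minimal sketch of this greedy merge procedure using a min-heap (merge_cost is an illustrative name); it reproduces the costs computed above.

```python
import heapq

def merge_cost(sizes):
    """Total comparison count of the greedy 2-way merge pattern:
    repeatedly merge the two currently smallest lists."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b              # cost of merging two lists of sizes a and b
        heapq.heappush(heap, a + b)
    return total

print(merge_cost([2, 3, 5, 7, 9]))   # 57, as in the example
print(merge_cost([30, 20, 10]))      # 90, the optimal pattern for L1, L2, L3
```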
Proof of optimality of the binary merge tree algorithm:
We use induction on n ≥ 2 to show that the greedy binary merge tree is optimal, i.e., it gives the minimum total weighted external path length among all possible ways to merge the given leaf nodes into a binary tree.

(Basis) When n = 2, there is only one way to merge two nodes.

(Induction hypothesis) Suppose the greedy merge tree is optimal whenever there are k leaf nodes, for some k ≥ 2.

(Induction) Consider k + 1 leaf nodes a1, a2, …, ak+1. We may assume nodes a1 and a2 have the smallest values; they are merged in the first step of the algorithm into a node b. We call the greedy merge tree T, and the part excluding a1 and a2 (with b as a leaf) T’ (see figure). Suppose S is an optimal binary merge tree. We make two observations.
(1) If node x of S is a deepest internal node, we may swap its two children with nodes a1 and a2 without increasing the total weighted external path length. Thus we may assume a1 and a2 are siblings at maximum depth in S; replacing their parent by a leaf x of weight a1 + a2 yields a tree S’ with leaf nodes x, a3, …, ak+1.
(2) The tree S’ must be an optimal merge tree for these k nodes, and by the induction hypothesis T’ is also optimal for them, so T’ and S’ have equal total weighted external path lengths. Since T costs a1 + a2 more than T’, and S costs a1 + a2 more than S’, the total weighted external path length of T equals that of S, proving the optimality of T.

[Figure: tree T with subtree T’ and merged node b over leaves a1, a2; tree S with subtree S’ and node x over leaves a1, a2.]
Minimum Spanning Tree problem
• Kruskal’s Algorithm
• Prim’s algorithm
Both are based on the greedy approach.
We give the algorithms and proofs that they produce optimal solutions.
Minimum Spanning Tree
Spanning subgraph
– Subgraph of a graph G containing all the vertices of G
Spanning tree
– Spanning subgraph that is itself a (free) tree
Minimum spanning tree (MST)
– Spanning tree of a weighted graph with minimum total edge weight
• Applications
– Communications networks
– Transportation networks
[Figure: a weighted graph on the airports ORD, PIT, DEN, STL, DCA, DFW, ATL.]
Greedy Algorithms
• We are trying to solve a problem in an optimal
way.
• We start with a set of candidates
• As the algorithm proceeds, we keep two sets,
one for candidates already chosen, and one for
candidates rejected.
• Functions exist for choosing the next element
and testing for solutions and feasibility.
• Let’s identify these ingredients for the Making Change problem (a sketch follows).
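As an illustration of these ingredients, here is a minimal greedy change-making sketch: the candidates are the coin denominations (assumed here to be 25, 10, 5, 1), the selection function picks the largest coin that still fits, and feasibility means not exceeding the amount. The greedy rule is only guaranteed to be optimal for "canonical" coin systems such as this one.

```python
def make_change(amount, denominations=(25, 10, 5, 1)):
    """Greedy change-making: repeatedly take the largest coin that fits."""
    chosen = []                          # candidates already chosen
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:            # feasibility test: coin still fits
            chosen.append(coin)          # selection: largest available coin
            amount -= coin
    return chosen                        # solution when amount reaches 0

print(make_change(63))   # [25, 25, 10, 1, 1, 1]
```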
Minimum Spanning Trees
• Given G={N,A }, G is undirected and connected
• Find T, a subset of A, such that the nodes are connected
and the sum of the edge weights is minimized. T is a
minimum spanning tree for G.
[Example: a weighted graph on vertices 1–7.]
Minimum Spanning Trees
• How many edges in T?
• Let’s come up with some algorithms to
compute MSTs.
Algorithm #1
• Start with an empty set.
• From all the unchosen and unrejected edges, select the shortest edge.
Algorithm #1
• Note we actually have multiple “correct” answers.
• Sometimes there is not just one MST.
• We keep two sets, one for candidates already chosen (the growing MST), and one for candidates rejected.
• Functions exist for choosing the next element and testing for solutions and feasibility.
Kruskal’s Algorithm
• Each node is in its own set
• Sort edges in increasing order
• Add shortest edge that connects two sets
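A minimal sketch of Kruskal’s rule; the “two sets” test is realized here with a simple union-find structure, and the small edge list at the bottom is hypothetical, not the graph from the slides.

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v) with vertices 0..n-1.
    Returns the MST edges and total weight."""
    parent = list(range(n))

    def find(x):
        # Find the set representative, with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):       # consider edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                    # edge connects two different sets: keep it
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Hypothetical 4-vertex example.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # MST weight 1 + 2 + 3 = 6
```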
Kruskal’s Algorithm
S = {1, 2, 3, 4, 5} {6, 7}
E = (1-2) (2-3) (6-7) (4-5) (1-4) (2-5) (4-7) (3-5) (2-4) (3-
6) (6-7)
Kruskal’s Algorithm
S = {1, 2, 3, 4, 5, 6, 7}
E = (1-2) (2-3) (6-7) (4-5) (1-4) (2-5) (4-7) (3-5) (2-4) (3-
6) (6-7)
Kruskal’s Algorithm
S = {1, 2, 3, 4, 5, 6, 7}
E = (1-2) (2-3) (6-7) (4-5) (1-4) (4-7)
Total = 1 + 2 + 3 + 3 + 4 + 4 = 17
Kruskal’s Algorithm
[A second example graph.]
Prim-Jarnik’s Algorithm
• We pick an arbitrary vertex s and we grow the MST as a cloud
of vertices, starting from s
• We store with each vertex v a label d(v) = the smallest weight
of an edge connecting v to a vertex in the cloud
• At each step:
– We add to the cloud the vertex u outside the cloud with the smallest distance label
– We update the labels of the vertices adjacent to u
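A minimal sketch of this idea; instead of updating the labels d(v) in place it keeps a lazy priority queue of candidate edges, which makes the same greedy choices. The example graph is hypothetical.

```python
import heapq

def prim(adj, start=0):
    """adj: dict vertex -> list of (neighbor, weight).
    Returns the MST edges and total weight."""
    in_cloud = {start}
    # Heap of candidate edges (weight, u_in_cloud, v_outside).
    frontier = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(frontier)
    mst, total = [], 0
    while frontier and len(in_cloud) < len(adj):
        w, u, v = heapq.heappop(frontier)
        if v in in_cloud:               # stale entry: v was added earlier
            continue
        in_cloud.add(v)                 # add the closest outside vertex to the cloud
        mst.append((u, v, w))
        total += w
        for x, wx in adj[v]:
            if x not in in_cloud:
                heapq.heappush(frontier, (wx, v, x))
    return mst, total

# Hypothetical example.
adj = {0: [(1, 2), (2, 8)], 1: [(0, 2), (2, 3)], 2: [(0, 8), (1, 3)]}
print(prim(adj))   # MST weight 2 + 3 = 5
```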
Example
[Figure: Prim-Jarnik on a graph with vertices A–F. Starting from A (label 0), the cloud grows one vertex at a time while the distance labels of the remaining vertices are updated.]
Kruskal’s Algorithm generates an optimal spanning tree
Let
T: the spanning tree obtained by Kruskal’s algorithm
T’: an optimal (minimum) spanning tree
If E(T) = E(T’) we are done, so assume E(T) ≠ E(T’).
Let e be a minimum-cost edge such that e is in E(T) but not in E(T’). This implies that all edges of the spanning tree T of lesser weight than e are also in T’.
Add e to T’. A cycle containing e is formed in T’.
Let e, e1, e2, …, ek be the edges of this cycle. At least one of these edges, say ej, is not in T (otherwise T itself would contain a cycle).
Moreover, w(ej) ≥ w(e): if w(ej) < w(e), Kruskal’s algorithm must have rejected ej because it closed a cycle with already-selected edges of T, all of weight less than w(e) and hence all in T’, so T’ would contain a cycle, a contradiction.
Therefore T’ − {ej} + {e} is a spanning tree of weight no more than that of T’, i.e. another minimum spanning tree, and it agrees with T on one more edge than T’ does. Repeating this exchange transforms T’ into T, so T is also a minimum spanning tree.
Fixed length code
• If only six different characters are used in a
text then we need 3 bits to represent six
characters.
• a = 000; b = 001; …f = 101.
• This method thus requires 300,000 bits to
code the entire file having 100,000
characters.
Variable Length Code
We can do considerably better by giving frequent characters short code words and infrequent characters long code words.
character:                 a    b    c    d    e    f
frequency (in thousands):  45   13   12   16   9    5
Prefix code: no code word is also a prefix of
some other codeword.
These prefix codes are desirable because they
simplify decoding.
Huffman Encoding
• Compression
– Typically, in files and messages,
• Each character requires 1 byte or 8 bits
• Already wasting 1 bit for most purposes!
• Question
– What’s the smallest number of bits that can be used to
store an arbitrary piece of text?
• Idea
– Find the frequency of occurrence of each character
– Encode frequent characters with short bit strings
– Encode rarer characters with longer bit strings
Huffman Encoding
• Encoding
– Use a tree
– Encode by following
tree from root to leaf
– e.g.
• E is 00
• S is 011
– Frequent characters (E, T) get 2-bit encodings
– Others (A, S, N, O) get 3-bit encodings
Huffman Encoding
• Encoding
– Following the tree from root to leaf for every character is inefficient in practice
– Use a direct-addressed lookup table instead (e.g. A → 010, E → 00, N → 110, …)
Huffman Encoding - Operation
[Figures: the forest of sub-trees after each combining step]
• Start from the initial sequence of characters, sorted by frequency
• Combine the two smallest trees into a new sub-tree and move it to its correct (sorted) place
• Repeat, moving each newly formed tree to its correct place, until the last two trees are combined
Huffman Encoding - Time Complexity
• Sort keys: O(n log n)
• Repeat n times:
– Form new sub-tree: O(1)
– Move sub-tree into place (binary search): O(log n)
– Total: O(n log n)
• Overall: O(n log n)
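A minimal sketch of the construction, assuming characters a–f with the frequencies from the earlier example; it uses a binary heap rather than the sorted sequence shown in the figures, but the greedy choices and the O(n log n) bound are the same. huffman_codes is an illustrative name.

```python
import heapq
from itertools import count

def huffman_codes(freq):
    """freq: dict char -> frequency. Returns dict char -> prefix code."""
    tie = count()                       # tie-breaker so heap tuples never compare subtrees
    heap = [(f, next(tie), (ch,)) for ch, f in freq.items()]
    heapq.heapify(heap)
    codes = {ch: "" for ch in freq}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        for ch in left:                      # left subtree gets bit 0, right gets bit 1
            codes[ch] = "0" + codes[ch]
        for ch in right:
            codes[ch] = "1" + codes[ch]
        heapq.heappush(heap, (f1 + f2, next(tie), left + right))
    return codes

freq = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
codes = huffman_codes(freq)
print(codes)
print(sum(freq[ch] * len(codes[ch]) for ch in freq))   # cost B(T) = 224 (thousand bits)
```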
Theorem
• Let C be an alphabet in which each character c of C has frequency f(c).
• Let x and y be two characters in C having the lowest frequencies.
• Then there exists an optimal prefix code for C in which the code words for x and y have the same length and differ only in the last bit.
[Figure: tree T with characters x and y at arbitrary positions and a, b as sibling leaves of maximum depth; T’ is obtained from T by exchanging a and x, and T’’ from T’ by exchanging b and y.]
• The number of bits required to encode a file is
B(T) = ∑_{c∈C} f(c) dT(c);
we call this the cost of the tree T.
• B(T) − B(T’) = ∑_c f(c) dT(c) − ∑_c f(c) dT’(c)
= f(x) dT(x) + f(a) dT(a) − f(x) dT’(x) − f(a) dT’(a)
= f(x) dT(x) + f(a) dT(a) − f(x) dT(a) − f(a) dT(x)
= (f(a) − f(x)) (dT(a) − dT(x))
≥ 0,
because both f(a) − f(x) and dT(a) − dT(x) are nonnegative.
• Similarly, exchanging y and b does not increase the cost, so B(T’) − B(T’’) is nonnegative.
Therefore B(T’’) ≤ B(T).
• And since T was taken to be optimal, B(T) ≤ B(T’’),
which implies B(T’’) = B(T).
• Thus T’’ is an optimal tree in which x and y appear as sibling leaves of maximum depth.
Theorem - Let C be a given alphabet with frequencies f(c). Let x and y be two characters in C with minimum frequencies. Let C’ be the alphabet obtained from C as
C’ = (C − {x, y}) ∪ {z}.
The frequencies for the new set are the same as for C, except that f(z) = f(x) + f(y).
Let T’ be any tree representing an optimal prefix code for C’. Then the tree T obtained from T’ by replacing the leaf node for z with an internal node having x and y as children represents an optimal prefix code for the alphabet C.
Proof: For any c ∈ C − {x, y},
dT(c) = dT’(c), but for x and y,
dT(x) = dT(y) = dT’(z) + 1.
We have
f(x) dT(x) + f(y) dT(y) = (f(x) + f(y)) (dT’(z) + 1)
= f(z) dT’(z) + (f(x) + f(y)),
so B(T) = B(T’) + f(x) + f(y).
From this we conclude that
B(T) = B(T’) + f(x) + f(y), or
B(T’) = B(T) − f(x) − f(y).
We now prove optimality by contradiction. Suppose that T is not optimal for C; then there is another prefix code tree T’’ such that B(T’’) < B(T). Without loss of generality (by the previous theorem), we can assume that x and y are siblings in T’’. Let T’’’ be the tree obtained from T’’ by replacing the common parent of x and y with a leaf z of frequency f(z) = f(x) + f(y). Then
• B(T’’’) = B(T’’) − f(x) − f(y)
< B(T) − f(x) − f(y)
= B(T’).
This contradicts the assumption that T’ represents an optimal prefix code for C’. Thus T must represent an optimal prefix code for the alphabet C.
Job Sequencing with Deadlines
We are given n jobs. Associated with each job i there is an (integer) deadline di ≥ 0 and a profit pi > 0. The profit pi is earned only if job i is completed by its deadline. Each job needs one unit of processing time on the single machine available.
A feasible solution is a subset of jobs each of which can be completed without missing its deadline. An optimal solution is a feasible solution with maximum profit.
Example: n = 4, p = (100, 10, 15, 27) and d = (2, 1, 2, 1).
Possible feasible solutions:
Subset   Sequence     Value
{1,2}    2,1          110
{1,3}    1,3 or 3,1   115
{1,4}    4,1          127
{2,3}    2,3          25
{3,4}    4,3          42
{1}      1            100
Greedy approach
The objective is to maximize ∑pi, so consider the jobs in non-increasing order of profit and add a job to the current solution whenever the enlarged set of jobs can still be sequenced without missing a deadline (a sketch follows).
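A minimal sketch of that greedy rule (job_sequencing is an illustrative name): jobs are considered in non-increasing profit order, and each job is placed in the latest still-free time slot at or before its deadline, which is one standard way to realize the feasibility test.

```python
def job_sequencing(profits, deadlines):
    """Greedy job sequencing: jobs are 0..n-1, each takes one unit of time."""
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i], reverse=True)
    max_d = max(deadlines)
    slot = [None] * (max_d + 1)          # slot[t] holds the job run in interval [t-1, t]
    for i in order:
        # Try the latest free slot no later than the job's deadline.
        for t in range(deadlines[i], 0, -1):
            if slot[t] is None:
                slot[t] = i
                break                    # job i is scheduled; otherwise it is dropped
    chosen = [j for j in slot if j is not None]
    return chosen, sum(profits[j] for j in chosen)

# Example from the slides: p = (100, 10, 15, 27), d = (2, 1, 2, 1).
print(job_sequencing([100, 10, 15, 27], [2, 1, 2, 1]))
# -> ([3, 0], 127): job 4 runs first, then job 1 (0-based indices 3 and 0)
```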
Theorem: The greedy approach always obtains an optimal solution for job sequencing with deadlines.
Proof: Let I be the set of jobs obtained by the greedy method, and let J be the set of jobs in an optimal solution. We will show that I and J have the same profit value.
Assume I ≠ J. J cannot be a subset of I, since J is optimal; nor can I be a subset of J, by the way the algorithm works.
So there exists a job a in I such that a is not in J, and a job b in J which is not in I. Take a to be a highest-profit job that is in I but not in J. Clearly pa ≥ pb for all jobs b that are in J but not in I: if pb > pa, the greedy approach would have considered job b before job a and included it in I.
Let SI and SJ be feasible sequences (schedules) for the feasible sets I and J.
We may assume that the jobs common to I and J are processed in the same time intervals in both schedules. Indeed, let i be a common job scheduled in [t, t+1] in SI and in [t', t'+1] in SJ. If t < t', interchange the job (if any) scheduled in [t', t'+1] in SI with job i; if t' < t, carry out the analogous interchange in SJ.
Now consider the interval [ta, ta+1] in SI in which job a is scheduled, and let b be the job (if any) scheduled in SJ in this interval. By the choice of a, pa ≥ pb. So scheduling a from ta to ta+1 in SJ and discarding job b gives a feasible schedule for the set J' = J − {b} ∪ {a}. Clearly J' has profit no less than that of J and differs from I in one job fewer than J does. By repeatedly applying this transformation, J can be changed into I without decreasing the profit value.