Unit 2: Greedy Method/Technique
Lecture Content
•The greedy approach suggests constructing a
solution through a sequence of steps, each
expanding a partially constructed solution
obtained so far, until a complete solution to the
problem is reached. On each step, the choice made
must be:
- feasible (it has to satisfy the problem’s constraints),
- locally optimal (the best local choice among all feasible choices available on that step),
- irrevocable (once made, it cannot be changed on subsequent steps).
Greedy Technique
Approximations/heuristics:
• traveling salesman problem (TSP)
• knapsack problem
• other combinatorial optimization problems
Change-Making Problem
Give change for a given amount using the fewest coins of denominations 25c, 10c, 5c, 1c.
Example: amount = 48c.
Greedy solution: 25c × 1, 10c × 2, 5c × 0, 1c × 3, i.e., <1, 2, 0, 3>,
since 25*1 + 10*2 + 1*3 = 48c (6 coins in all).
The greedy solution is optimal for any amount and a “normal” set of denominations.
Ex: Prove the greedy algorithm is optimal for the above denominations.
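The change-making rule above can be sketched in Python (a minimal illustration; the function and variable names are my own):

```python
def make_change(amount, denominations=(25, 10, 5, 1)):
    """Greedy change-making: repeatedly take the largest coin that fits."""
    counts = []
    for d in denominations:          # denominations in decreasing order
        counts.append(amount // d)   # take as many of this coin as possible
        amount %= d                  # remainder still to be changed
    return counts

# 48c -> one quarter, two dimes, no nickels, three pennies
print(make_change(48))  # [1, 2, 0, 3]
```

For U.S.-style denominations this greedy choice is optimal; for an arbitrary coin set it may not be, which is the point of the exercise above.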
• The greedy approach is based on the principle:
- a sequence of locally optimal choices will
yield a globally optimal solution to the entire
problem.
• However, a sequence of locally optimal choices
does not always yield a globally optimal solution.
1. The Knapsack Problem
Knapsack Problem-
• A knapsack (a kind of shoulder bag) with limited weight capacity.
• A few items, each having some weight and value.
The problem states-
Which items should be placed into the knapsack such that-
• The value or profit obtained by putting the items into the knapsack is maximum.
• And the total weight of the items does not exceed the weight limit of the knapsack.
1. The Knapsack Problem
The fractional knapsack problem is solved using the greedy method in the following steps-
Step-01:
For each item, compute its value / weight ratio.
Step-02:
Arrange all the items in decreasing order of their value / weight ratio.
Step-03:
Start putting the items into the knapsack beginning from the item with the highest ratio.
Put as many items as you can into the knapsack.
1. The Knapsack Problem
With xi denoting the fraction taken of item i (0 ≤ xi ≤ 1), the total weight of the
selected items can be expressed by the sum Σ(i=1..n) wi·xi,
and their total value by the sum Σ(i=1..n) vi·xi.
The knapsack algorithm
Problem-
For the given set of items and knapsack capacity = 60 kg, find the optimal solution for
the fractional knapsack problem making use of greedy approach.
The knapsack example
OR
Find the optimal solution for the fractional knapsack problem making use of greedy
approach. Consider-
n = 5
W = 60 kg
(w1, w2, w3, w4, w5) = (5, 10, 15, 22, 25)
(b1, b2, b3, b4, b5) = (30, 40, 45, 77, 90)
OR
A thief enters a house to rob it. He can carry a maximum weight of 60 kg in his
bag. There are 5 items in the house with the following weights and values. Which items
should the thief take if he can take even a fraction of any item with him?
The knapsack example
Solution-
Step-01:
Compute the value / weight ratio of each item:
I1 = 30/5 = 6, I2 = 40/10 = 4, I3 = 45/15 = 3, I4 = 77/22 = 3.5, I5 = 90/25 = 3.6
Step-02:
Sort all the items in decreasing order of their value / weight ratio-
I1 (6), I2 (4), I5 (3.6), I4 (3.5), I3 (3)
Step-03:
Start filling the knapsack by putting the items into it one by one.
The knapsack example
Step-03 (continued):
Now, the knapsack weight left to be filled is 20 kg, but item I4 has a weight of 22 kg.
Since in the fractional knapsack problem even a fraction of any item can be taken,
the knapsack will contain the following items-
< I1 , I2 , I5 , (20/22) I4 >
Total value = 30 + 40 + 90 + (20/22)·77 = 230 units.
Knapsack algorithm
Algorithm fractionalKnapsack(S, W)
Input: set S of items w/ benefit bi and weight wi; max. weight W
Output: amount xi of each item i to maximize benefit w/ weight at most W
for each item i in S
    xi ← 0
    vi ← bi / wi          {value per unit weight}
w ← 0                     {total weight}
while w < W
    remove item i w/ highest vi
    xi ← min{wi , W - w}
    w ← w + min{wi , W - w}
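The pseudocode above can be realized in Python (a sketch under my own naming; items are (benefit, weight) pairs, and the function returns the maximum total benefit):

```python
def fractional_knapsack(items, capacity):
    """items: list of (benefit, weight) pairs; returns the max total benefit
    when fractions of items may be taken."""
    # Sort by benefit/weight ratio, highest first (the greedy criterion).
    items = sorted(items, key=lambda bw: bw[0] / bw[1], reverse=True)
    total = 0.0
    for benefit, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)        # whole item, or the fraction that fits
        total += benefit * (take / weight)
        capacity -= take
    return total

# Example from the slides: W = 60, weights (5,10,15,22,25), benefits (30,40,45,77,90)
items = [(30, 5), (40, 10), (45, 15), (77, 22), (90, 25)]
print(fractional_knapsack(items, 60))  # ~230 (= 160 + 77 * 20/22)
```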
0/1 Knapsack Problem - Example 1
0/1 Knapsack Problem - Example 2
0/1 Knapsack Problem - Example 2 - Brute Force
1. The Knapsack Problem
0/1 Knapsack Problem - Example 2 - Greedy
1. The Knapsack Problem
Remarks:
- The greedy strategy does not always generate the (globally) optimal solution.
- Some optimization problems cannot be solved by the greedy strategy.
Job sequencing with deadlines
Approach to Solution-
A feasible solution would be a subset of jobs where each job of the subset
gets completed within its deadline.
Value of the feasible solution would be the sum of profit of all the jobs
contained in the subset.
An optimal solution of the problem would be a feasible solution which
gives the maximum profit.
Greedy Algorithm-
A greedy algorithm is adopted to determine how the next job is selected for
an optimal solution.
The greedy algorithm described below always gives an optimal solution to
the job sequencing problem-
Job sequencing with deadlines
Step-01:
Sort all the given jobs in decreasing order of their profit.
Step-02:
Check the value of the maximum deadline; draw a Gantt chart with that many time slots.
Step-03:
Pick the jobs one by one and place each job on the Gantt chart as far from 0 as
possible, ensuring it completes before its deadline.
Job sequencing with deadlines
Problem-
Job sequencing with deadlines
Solution-
Step-01:
Step-02:
Job sequencing with deadlines
Now,
We take each job one by one in the order they appear in Step-01.
We place the job on Gantt chart as far as possible from 0.
Step-03:
Step-04:
Job sequencing with deadlines
Step-05:
Step-06:
Job sequencing with deadlines
Step-07:
Now,
The only job left is job J6, whose deadline is 2.
All the slots before deadline 2 are already occupied.
Thus, job J6 cannot be completed.
Now, the given questions may be answered as-
Part-01:
The optimal schedule is-
J2 , J4 , J3 , J5 , J1
This is the required order in which the jobs must be completed in order to obtain
the maximum profit.
Job sequencing with deadlines
Part-02:
Part-03:
Job sequencing with deadlines
 i    1    2    3    4    5
 pi   20   15   10   5    1
 di   2    2    1    3    3
Algorithm:
Step 1: Sort pi into nonincreasing order. After sorting, p1 ≥ p2 ≥ p3 ≥ … ≥ pn.
Step 2: Add the next job i to the solution set if i can be
completed by its deadline. Assign i to time slot [r-1, r], where r
is the largest integer such that 1 ≤ r ≤ di and [r-1, r] is free.
Step 3: Stop if all jobs are examined. Otherwise, go to step 2.
e.g.
 i    pi    di
 1    20    2     assign to [1, 2]
 2    15    2     assign to [0, 1]
 3    10    1     reject
 4    5     3     assign to [2, 3]
 5    1     3     reject
solution = {1, 2, 4}
total profit = 20 + 15 + 5 = 40
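The slot-assignment rule above can be sketched in Python (a minimal version; function and variable names are my own, and jobs are 0-indexed):

```python
def job_sequencing(jobs):
    """jobs: list of (profit, deadline) pairs.
    Returns (selected job indices, total profit) using the greedy rule:
    consider jobs by nonincreasing profit and place each in the latest
    free slot [r-1, r] with r <= deadline."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0], reverse=True)
    max_d = max(d for _, d in jobs)
    slot = [None] * max_d                 # slot[r-1] holds the job run in [r-1, r]
    for i in order:
        profit, deadline = jobs[i]
        for r in range(min(deadline, max_d), 0, -1):   # latest free slot first
            if slot[r - 1] is None:
                slot[r - 1] = i
                break
    chosen = [i for i in slot if i is not None]
    return chosen, sum(jobs[i][0] for i in chosen)

# Example from the slides: profits (20,15,10,5,1), deadlines (2,2,1,3,3)
jobs = [(20, 2), (15, 2), (10, 1), (5, 3), (1, 3)]
chosen, profit = job_sequencing(jobs)
print(sorted(chosen), profit)  # [0, 1, 3] 40  (jobs 1, 2, 4 in 1-indexed terms)
```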
A simple example
The greedy method
The activity selection problem
Algorithm
Steps for Activity Selection Problem
Following are the steps we will be following to solve the activity selection problem,
Step 1: Sort the given activities in ascending order according to their finishing time.
Step 2: Select the first activity from sorted array act[] and add it to sol[] array.
Step 3: Repeat steps 4 and 5 for the remaining activities in act[].
Step 4: If the start time of the currently selected activity is greater than or equal to
the finish time of the previously selected activity, then add it to the sol[] array.
Step 5: Select the next activity in act[] array.
Step 6: Print the sol[] array.
Activity Selection Problem
Example
A possible solution would be:
Step 1: Sort the given activities in ascending order according to their finishing time.
Sorted table:
 Start Time (s)   Finish Time (f)   Activity Name
 1                2                 a2
 3                4                 a3
 0                6                 a4
 5                7                 a5
 5                9                 a1
 8                9                 a6
Activity Selection Problem
Example
Step 2: Select the first activity from sorted array act[] and
add it to the sol[] array, thus sol = {a2}.
Step 3: Repeat the steps 4 and 5 for the remaining activities
in act[].
Step 4: If the start time of the currently selected activity is
greater than or equal to the finish time of the previously
selected activity, then add it to sol[].
Step 5: Select the next activity in act[]
Activity Selection Problem
Example
The activity selection problem
Activity-selection problem
Greedy_Activity_Selector(s, f)
{  n = length(s)
   A = {1}
   j = 1
   for i = 2 to n
       if (s[i] >= f[j])
           A = A ∪ {i}
           j = i
       end if
   end for
   return A
}
Assume the array f is already sorted.
Complexity: T(n) = O(n)
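A Python rendering of the selector above (it assumes the activities are already sorted by finish time; 0-based indices and the function name are my own):

```python
def greedy_activity_selector(s, f):
    """s, f: start and finish times, with f already sorted ascending.
    Returns 0-based indices of a maximum-size set of compatible activities."""
    n = len(s)
    selected = [0]            # always take the first (earliest-finishing) activity
    j = 0                     # index of the last selected activity
    for i in range(1, n):
        if s[i] >= f[j]:      # starts after the previous one finishes
            selected.append(i)
            j = i
    return selected

# Sorted example from the slides: a2(1,2), a3(3,4), a4(0,6), a5(5,7), a1(5,9), a6(8,9)
s = [1, 3, 0, 5, 5, 8]
f = [2, 4, 6, 7, 9, 9]
print(greedy_activity_selector(s, f))  # [0, 1, 3, 5] -> a2, a3, a5, a6
```

The single pass over the sorted arrays is what gives the O(n) bound stated above (plus O(n log n) if the sort is counted).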
5. Huffman Code
Any binary tree with edges labeled with 0’s and 1’s yields a prefix-free code for the
characters assigned to its leaves.
Huffman’s algorithm:
Initialize n one-node trees with the alphabet characters and set the tree weights to
their frequencies.
Repeat the following step n-1 times: join the two binary trees with the smallest
weights into one (as left and right subtrees) and make its weight equal to the
sum of the weights of the two trees.
Mark edges leading to left and right subtrees with 0’s and 1’s, respectively.
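The n-1 join steps can be sketched with a min-heap in Python (a minimal version; names are my own, and ties are broken by insertion order, so the exact bit patterns may differ from a hand-built tree while the code lengths remain optimal):

```python
import heapq

def huffman_codes(freq):
    """freq: dict mapping character -> frequency.
    Returns dict mapping character -> prefix-free code string."""
    # Each heap entry: (weight, tiebreak, subtree); a subtree is a char or a pair.
    heap = [(w, i, ch) for i, (ch, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:                      # n-1 join steps
        w1, _, left = heapq.heappop(heap)     # two smallest-weight trees
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (left, right)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):           # internal node
            walk(tree[0], prefix + "0")       # 0 on the left edge
            walk(tree[1], prefix + "1")       # 1 on the right edge
        else:                                 # leaf: record the code
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

# Frequencies from the worked example later in this unit
codes = huffman_codes({"A": 2, "B": 3, "C": 5, "D": 8, "E": 13, "F": 15, "G": 18})
print(sorted((ch, len(code)) for ch, code in codes.items()))
# [('A', 5), ('B', 5), ('C', 4), ('D', 3), ('E', 2), ('F', 2), ('G', 2)]
```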
Huffman codes
It ensures that the code assigned to any character is not a prefix of the code
assigned to any other character (the prefix property).
Constructed Huffman Tree
5. Huffman Code - Example
Huffman Tree Construction Time Complexity
Time Complexity-
Using a min-heap of the tree weights, constructing the Huffman tree for n characters
takes O(n log n) time.
Important Formulas-
The following 2 formulas are important to solve the problems based on Huffman Coding-
Formula-02:
Example
Huffman Tree
Huffman Tree after assigning weight
Huffman Code For Characters-
To write Huffman Code for any character, traverse the Huffman Tree from
root node to the leaf node of that character.
Following this rule, the Huffman Code for each character is-
a = 111
e = 10
i = 00
o = 11001
u = 1101
s = 01
t = 11000
An example of Huffman algorithm
Symbols: A, B, C, D, E, F, G
freq. :  2, 3, 5, 8, 13, 15, 18
Huffman codes:
A: 10100   B: 10101   C: 1011   D: 100
E: 00      F: 01      G: 11
Exercises
Example: [figure: a weighted graph on vertices a, b, c, d (edge weights 1, 2, 3, 4, 6)
and the successive stages of constructing its minimum spanning tree]
Another greedy algorithm for MST: Kruskal’s
On each iteration, add the next edge on the sorted list unless this
would create a cycle. (If it would, skip the edge.)
Example: [figure: Kruskal’s algorithm run on a weighted graph with vertices a, b, c, d
(edge weights 1, 2, 3, 4, 6), shown as a sequence of six snapshots]
Notes about Kruskal’s algorithm
Runs in O(m log m) time, with m = |E|. The time is mostly spent
on sorting.
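Kruskal's rule above can be sketched in Python with a union-find structure doing the cycle check (a minimal version; names and the small example graph are my own):

```python
def kruskal(n, edges):
    """n: number of vertices (0..n-1); edges: list of (weight, u, v).
    Returns the MST edge list, skipping any edge that would create a cycle."""
    parent = list(range(n))
    def find(x):                      # root of x's component, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):     # edges in nondecreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                  # different components: no cycle, keep the edge
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# Small example on vertices 0..3
edges = [(1, 0, 1), (2, 1, 3), (3, 1, 2), (4, 0, 2), (6, 0, 3)]
mst = kruskal(4, edges)
print(mst, sum(w for w, _, _, in [e + (0,) for e in []] ) if False else sum(w for w, _, _ in mst))
```

With union-find nearly constant per operation, the sort dominates, matching the O(m log m) bound above.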
Minimum Spanning Tree (MST) Problem
[R. C. T. Lee]
Algorithm: Kruskal’s algorithm to find a minimum spanning tree
Input: A weighted, connected, undirected graph G = (V, E).
Output: A minimum spanning tree for G.
// n = |V|: number of vertices
// m = |E|: number of edges
2. Kruskal’s Algorithm
[Anany Levitin]
ALGORITHM Kruskal(G)
//Kruskal’s algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
2. Kruskal’s Algorithm - Example 1
Edges sorted by weight: [ 50 60 70 75 80 90 200 300 ]
2. Kruskal’s Algorithm
• Remarks:
- If |E| is small, then Kruskal’s algorithm (O(m log m)) is preferred.
- If |E| is large (i.e., |E| = Θ(|V|²)), then Prim’s algorithm (O(m log n)) is preferred.
Prim’s MST algorithm
Start with tree T1 consisting of one (any) vertex and “grow” the tree
one vertex at a time to produce the MST through a series of
expanding subtrees T1, T2, …, Tn.
[figure: Prim’s algorithm run on a weighted graph with vertices a, b, c, d
(edge weights 1, 2, 3, 4, 6), shown as a sequence of expanding subtrees]
Notes about Prim’s algorithm
Efficiency:
O(n²) for weight-matrix representation of the graph and array
implementation of the priority queue
O(m log n) for adjacency-lists representation of a graph with n
vertices and m edges and min-heap implementation of the priority queue
The Crucial Property behind Prim’s Algorithm
[figure: a cut between the tree vertices X and the remaining vertices Y; the
minimum-weight edge crossing the cut, ending at a vertex y in Y, can safely be
added to the tree]
3. Prim’s Algorithm
[R. C. T. Lee]
Algorithm: The basic Prim’s algorithm to find a minimum spanning tree
Input: A weighted, connected, undirected graph G = (V, E).
Output: A minimum spanning tree for G.
Step 1: Let x be any vertex in V. Let X = {x} and Y = V - {x}.
3. Prim’s Algorithm
[Anany Levitin]
ALGORITHM Prim(G)
//Prim’s algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
3. Prim’s Algorithm
VT ← {v0}    // start from an arbitrary vertex v0
ET ← ∅
for i ← 1 to |V| - 1 do
    find a minimum-weight edge e* = (u*, v*)
    among all the edges (u, v) such that
    u is in VT and v is in V - VT
    VT ← VT ∪ {v*}    // add v* to VT
    ET ← ET ∪ {e*}    // add (u*, v*) to ET
return ET    // all vertices are included in the constructed tree
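A Python sketch of Prim's greedy growth, using an adjacency-map graph and a plain linear scan for the minimum crossing edge (names and the example graph are my own; a heap would give the O(m log n) variant):

```python
def prim(graph, start):
    """graph: dict vertex -> {neighbor: weight}; returns a list of MST edges."""
    in_tree = {start}                 # VT: vertices already in the tree
    mst = []                          # ET: chosen edges
    while len(in_tree) < len(graph):
        # Find a minimum-weight edge (u, v) with u inside the tree, v outside.
        best = None
        for u in in_tree:
            for v, w in graph[u].items():
                if v not in in_tree and (best is None or w < best[0]):
                    best = (w, u, v)
        w, u, v = best
        in_tree.add(v)                # grow the tree by one vertex
        mst.append((u, v, w))
    return mst

graph = {
    "a": {"b": 1, "c": 4, "d": 6},
    "b": {"a": 1, "c": 3, "d": 2},
    "c": {"a": 4, "b": 3},
    "d": {"a": 6, "b": 2},
}
mst = prim(graph, "a")
print(mst, sum(w for _, _, w in mst))  # total weight 6
```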
3. Prim’s Algorithm - Example 1
3. Prim’s Algorithm
• Remarks:
- If |E| is small, then Kruskal’s algorithm (O(m log m)) is preferred.
- If |E| is large (i.e., |E| = Θ(|V|²)), then Prim’s algorithm (O(m log n)) is preferred.
3. Prim’s Algorithm - Example 2
Single-Source Shortest-Paths Problem
4. Dijkstra’s Algorithm
[R. C. T. Lee]
Algorithm: Dijkstra’s algorithm to generate single-source shortest paths
Input: A weighted connected graph G = (V, E) and a source vertex v0. For each edge
(u, v) ∈ E, there is a non-negative number c(u, v) associated with it. |V| = n + 1.
Output: For each v ∈ V, the length of a shortest path from v0 to v.
4. Dijkstra’s Algorithm
S = {v0}                                         // O(1)
for i = 1 to n do    // consider v1, v2, …, vn   // n times
{
    if ((v0, vi) ∈ E) L(vi) = c(v0, vi);  // vi is adjacent to v0, a fringe vertex  // O(1)
    else L(vi) = ∞;                       // vi is not adjacent to v0               // O(1)
}                                                // O(n)
4. Dijkstra’s Algorithm
for i = 1 to n do                                // n times
{
    Choose u from V - S such that L(u) is the smallest  // greedy choice  // O(n)
    S = S ∪ {u}          // put u into S, i.e., move u from V - S to S
    for all w in V - S do                        // O(m) throughout the algorithm
        if (u, w) ∈ E then                       // u is the latest vertex added to S
            L(w) = min{L(w), L(u) + c(u, w)}     // update L(w)
}                                                // O(n²)
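The two phases above can be rendered in Python (an array-based O(n²) sketch matching the pseudocode rather than the heap variant; names are my own, and the example graph is an assumption chosen to reproduce the path lengths reported later in Example 2):

```python
import math

def dijkstra(graph, source):
    """graph: dict u -> {v: cost}; returns a dict of shortest-path lengths from source."""
    # Initialization phase: L(v) = c(source, v) for neighbors, infinity otherwise.
    L = {v: math.inf for v in graph}
    L[source] = 0
    for v, c in graph[source].items():
        L[v] = c
    S = {source}
    # Main loop: n-1 greedy choices of the closest vertex outside S.
    while len(S) < len(graph):
        u = min((v for v in graph if v not in S), key=lambda v: L[v])
        S.add(u)
        for w, c in graph[u].items():      # relax the edges leaving u
            if w not in S:
                L[w] = min(L[w], L[u] + c)
    return L

graph = {
    "a": {"b": 3, "c": 8},
    "b": {"a": 3, "c": 4, "d": 2},
    "c": {"a": 8, "b": 4, "e": 3},
    "d": {"b": 2, "e": 4},
    "e": {"c": 3, "d": 4},
}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 3, 'c': 7, 'd': 5, 'e': 9}
```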
4. Dijkstra’s Algorithm - Example 1
4. Dijkstra’s Algorithm - Example 2
4. Dijkstra’s Algorithm - Example 2
from a to b : a - b of length 3
from a to d : a - b - d of length 5
from a to c : a - b - c of length 7
from a to e : a - b - d - e of length 9
4. Dijkstra’s Algorithm
Remarks:
• Dijkstra’s algorithm does not always work correctly if edge weights are negative.
• The algorithm used to solve the negative-weighted single-source shortest-paths
problem is known as the Bellman-Ford algorithm (using dynamic programming).
Exercises