
2.2.1 Greedy Algorithms 2

Greedy algorithms make locally optimal choices at each step to arrive at a global optimum. They work by selecting the best choice at each iteration without considering future consequences. Examples discussed include activity scheduling problems which can be solved optimally using earliest finish time, and the fractional knapsack problem which has a greedy solution of taking items in order of highest value per unit weight.

GREEDY ALGORITHMS
GREEDY
 A greedy algorithm always makes the choice that looks best at the moment
 The algorithm is usually iterative
 The solution is usually composed of elements: for example, a path is composed of edges, a set is composed of elements, etc.
 In each iteration or step, the algorithm adds one component/element to the solution set. After the final step, the solution set is a complete solution of the problem.
GREEDY ALGORITHM: PRINCIPLES
 A greedy algorithm works in phases. At each phase:
 The algorithm chooses the best option available right now, without regard for future consequences
 The algorithm hopes that by choosing a local optimum at each step, it will end up at a global optimum
 For some problems, the global optimum can indeed be achieved this way
ACTIVITY SCHEDULING PROBLEM
 Input: a set of n activities a1, a2, ..., an
o Each activity ai has a start time si and a finish time fi, where si < fi
o By doing each job, you get a constant amount of reward
o You want to maximize your reward in a day
o This means you want to do as many jobs (e.g., classes) as possible
o Surely, we would do all jobs if it were possible. But job times overlap, so it is not possible to do all jobs
o Some jobs have to be sacrificed for other jobs
o Your task is to find the maximum number of doable jobs

 Two activities are compatible if and only if their intervals do not overlap
 Output: a maximum-size subset of mutually compatible activities
ACTIVITY SCHEDULING PROBLEM
 The greedy choice can be applied to find the solution (the maximum number of activities that can be performed) by using:

o earliest finish time: always gives an optimal solution
o earliest start time: does not always give an optimal solution
o shortest interval: does not always give an optimal solution
ACTIVITY SCHEDULING: EARLIEST FINISH TIME
 Select the activity with the earliest finish time from the consideration list
 Eliminate that activity from the consideration list (and add it to the solution)
 Eliminate all non-compatible activities from the consideration list
 Repeat until the consideration list is empty
ACTIVITY SCHEDULING PROBLEM
[Figure: eleven activities a1 .. a11 drawn as intervals on a time axis from 0 to 14]

 Activity i:   1   2   3   4   5   6   7   8   9   10  11
 Start si:     1   3   0   5   3   5   6   8   8   2   12
 Finish fi:    4   5   6   7   8   9   10  11  12  13  14

 Applying the earliest-finish-time rule selects A = {a1, a4, a8, a11}
ACTIVITY SCHEDULING: EARLIEST FINISH TIME
Greedy-Activity-Scheduling (s, f)

1 sort arrays s and f based on finish times f
2 A = {a1}
3 k = 1
4 for i = 2 to n
5     if s[i] >= f[k]
6         A = A ∪ {ai}
7         k = i
8 return A

Running time is O(n log n)  from the sorting operation
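The pseudocode above can be sketched directly in Python (a minimal illustration; the function and variable names are mine, not from the slides). Activities are (start, finish) pairs, and the loop keeps an activity whenever its start is no earlier than the last selected finish time:

```python
# Earliest-finish-time greedy for activity scheduling (illustrative sketch).
def greedy_activity_scheduling(activities):
    selected = []
    last_finish = float("-inf")
    # Sort by finish time, then greedily take each compatible activity.
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:        # compatible with all chosen so far
            selected.append((start, finish))
            last_finish = finish
    return selected

# The 11-activity example from the slides:
starts   = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
finishes = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(greedy_activity_scheduling(list(zip(starts, finishes))))
# → [(1, 4), (5, 7), (8, 11), (12, 14)], i.e. A = {a1, a4, a8, a11}
```

The sort dominates, giving the O(n log n) running time stated above.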
ACTIVITY SCHEDULING: EARLIEST FINISH TIME
Theorem 1. The job with the earliest finish time is in an optimal solution.

 We will prove this by contradiction!

 Suppose that the job with the earliest finish time (say job a1) is not in any of the optimal solutions (there can be multiple optimal solutions)
 Consider arbitrarily any one of these optimal solution sets (say A*)
 Select the job that has the earliest finish time in this solution set A*. Let this selected job be ak.
 Surely f1 <= fk, because job a1 has the earliest finish time among all jobs initially
 Observe that we can replace job ak with job a1 without creating any conflict with the other jobs in A*, and A* still remains optimal [same size]  A contradiction to our assumption that job a1 is not in any optimal solution!
ACTIVITY SCHEDULING: EARLIEST FINISH TIME
Prove that Greedy-Activity-Scheduling generates an optimal solution.

 According to Theorem 1, we select the first job (earliest finish time), since it is in an optimal solution.
 Then, we remove all conflicting jobs from the job set.
 The original problem is then transformed into a similar problem of smaller size.
 So, we continue selecting jobs following Theorem 1, and we always remain optimal.
THE FAMOUS KNAPSACK
PROBLEM
A thief breaks into a museum. Fabulous paintings, sculptures, and jewels are everywhere. The
thief has a good eye for the value of these objects and knows that each will fetch hundreds or
thousands of dollars on the clandestine art collector’s market. But the thief has only brought a
single knapsack to the scene of the robbery and can take away only what he can carry. What
items should the thief take to maximize the haul?

THE FAMOUS KNAPSACK PROBLEM
There are two versions of the problem:

(1) "0-1 knapsack problem"
Items are indivisible: you either take an item or not.
Solved with dynamic programming.

(2) "Fractional knapsack problem"
Items are divisible: you can take any fraction of an item.
Solved with a greedy algorithm.
THE FAMOUS KNAPSACK PROBLEM
 More formally, the 0-1 knapsack problem:

 The thief must choose among n items, where the i-th item is worth vi dollars and weighs wi pounds
 Carrying at most W pounds, maximize the total value
 Note: assume vi, wi, and W are all integers
 "0-1" because each item must be taken or left in its entirety

 A variation, the fractional knapsack problem:

 The thief can take fractions of items

 Think of the items in the 0-1 problem as gold ingots, and in the fractional problem as buckets of gold dust.
FRACTIONAL KNAPSACK PROBLEM
 Knapsack capacity: W
 There are n items: item i has a value of vi and weight wi
 Goal:
 Find fractions xi with 0 <= xi <= 1 for all i, such that
o Σ xi wi <= W
o Σ xi vi is maximum
FRACTIONAL KNAPSACK PROBLEM
Greedy strategy:
 Pick the item with the maximum value per pound vi/wi

 If the supply of that item is exhausted and the thief can carry more, take as much as possible from the item with the next greatest value per pound
 It is good to order the items based on their value per pound
FRACTIONAL KNAPSACK PROBLEM
Example: knapsack capacity 50 pounds.

 Item i   Weight (wi)   Value (vi)   vi/wi       Fraction (xi)
 1        10            $60          $6/pound    1
 2        20            $100         $5/pound    1
 3        30            $120         $4/pound    2/3

[Figure: step-by-step animation of filling the knapsack]

 Take all of item 1 ($60), all of item 2 ($100), and 2/3 of item 3 ($80), for a total value of $240.
FRACTIONAL KNAPSACK PROBLEM
Greedy-Fractional-Knapsack (w, v, W)
1  sort arrays w and v in decreasing order of v[i]/w[i]
2  for i = 1 to n
3      x[i] = 0
4  weight = 0, i = 1
5  while weight < W and i <= n
6      if weight + w[i] <= W
7          x[i] = 1
8          weight = weight + w[i]
9      else
10         x[i] = (W - weight) / w[i]
11         weight = W
12     i = i + 1
13 return x

Running time is O(n log n)  from the sorting operation
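The pseudocode above can be sketched in Python as follows (an illustrative implementation; names are mine, not from the slides). It returns the total value obtained and the fraction taken of each item:

```python
# Greedy fractional knapsack (illustrative sketch of the slides' pseudocode).
def greedy_fractional_knapsack(weights, values, capacity):
    # Consider items in decreasing value-per-pound order.
    items = sorted(zip(weights, values), key=lambda it: it[1] / it[0], reverse=True)
    total = 0.0
    remaining = capacity
    fractions = []
    for w, v in items:
        take = min(w, remaining)        # whole item if it fits, else a fraction
        fractions.append(take / w)
        total += v * (take / w)
        remaining -= take
        if remaining == 0:
            break
    return total, fractions

# The slides' example: capacity 50, items (10 lb, $60), (20 lb, $100), (30 lb, $120)
total, fractions = greedy_fractional_knapsack([10, 20, 30], [60, 100, 120], 50)
print(total, fractions)   # → 240.0 [1.0, 1.0, 0.666...]
```

As in the example slides, items 1 and 2 are taken whole and 2/3 of item 3 fills the remaining 20 pounds, for $240 total.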
Let’s get back a bit to Graph Theory
TREES AND FORESTS
 Tree
 undirected, connected and acyclic graph
 a connected graph that does not contain any cycle is called a tree
 the vertices of a tree are called its nodes, and the edges of a tree are called branches
 a tree with n vertices has (n - 1) edges
 a leaf in a tree is a vertex of degree 1
TREES AND FORESTS
 Forest
 undirected, acyclic graph (possibly disconnected)
 a collection of disjoint trees
SPANNING TREE
Given an undirected and connected graph G = (V, E), a spanning tree of the graph G is a tree that spans G (includes every vertex of G) and is a subgraph of G (every edge in the tree belongs to G)
MINIMUM SPANNING TREE
 A graph can have multiple spanning trees
 The cost of a spanning tree is the sum of the weights of all the edges in the tree
 A minimum spanning tree is a spanning tree whose cost is minimum among all the spanning trees
 There can also be many minimum spanning trees
FINDING MINIMUM SPANNING TREE
 Use greedy techniques
 Food for thought

[Figure: weighted undirected graph on 9 vertices (0-8); this example graph is used in the following slides]
FINDING MST: KRUSKAL'S ALGORITHM
 Scan the edges in increasing order of weight; add an edge whenever it connects two different components (i.e., it does not create a cycle)

 Edges of the example graph, sorted by weight (source, dest):
 weight 1:  (7, 6)
 weight 2:  (8, 2)
 weight 2:  (6, 5)
 weight 4:  (0, 1)
 weight 4:  (2, 5)
 weight 6:  (8, 6)
 weight 7:  (2, 3)
 weight 7:  (7, 8)
 weight 8:  (0, 7)
 weight 8:  (1, 2)
 weight 9:  (3, 4)
 weight 10: (5, 4)
 weight 11: (1, 7)
 weight 14: (3, 5)

[Figure: step-by-step animation on the example graph; edges (7,6), (8,2), (6,5), (0,1), (2,5), (2,3), (0,7), (3,4) are accepted, the rest are rejected because each would form a cycle]

 MST with cost = 37
FINDING MST: KRUSKAL'S ALGORITHM
MST-Kruskal (G, w)
1 A = {}
2 for each vertex v in G.V
3     Make-Set(v)
4 sort the edges of G.E into nondecreasing order by weight w
5 for each edge (u, v) in G.E, taken in nondecreasing order by weight
6     if Find-Set(u) != Find-Set(v)
7         A = A ∪ {(u, v)}
8         Union(u, v)
9 return A

Line 4 sorts the edges in O(E log E) time

Running time can be written as O(E log E) = O(E log V)
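A compact Python sketch of Kruskal's algorithm follows, run on the slides' example graph. The union-find structure uses path compression (the slides do not specify the disjoint-set implementation, so this detail is an assumption on my part):

```python
# Kruskal's MST with a simple union-find (illustrative sketch).
def kruskal(n, edges):
    parent = list(range(n))

    def find(x):                      # find the component root, compressing paths
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, cost = [], 0
    for w, u, v in sorted(edges):     # examine edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # (u, v) joins two components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            cost += w
    return mst, cost

# The 9-vertex example graph from the slides, as (weight, source, dest):
edges = [(4, 0, 1), (8, 0, 7), (8, 1, 2), (11, 1, 7), (7, 2, 3), (2, 2, 8),
         (4, 2, 5), (9, 3, 4), (14, 3, 5), (10, 4, 5), (2, 5, 6), (1, 6, 7),
         (6, 6, 8), (7, 7, 8)]
mst, cost = kruskal(9, edges)
print(cost)   # → 37, matching the slides
```

The 8 accepted edges span all 9 vertices, and the total cost agrees with the slides' result of 37.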
FINDING MST: PRIM'S ALGORITHM
 Grow a single tree from a source vertex (here vertex 0): at each step, move the unexplored vertex reachable by the minimum-weight edge into the explored set

 The explored set grows as:
 {} → {0} → {0,1} → {0,1,2} → {0,1,2,8} → {0,1,2,8,5} → {0,1,2,8,5,6} → {0,1,2,8,5,6,7} → {0,1,2,8,5,6,7,3} → {0,1,2,8,5,6,7,3,4}

[Figure: step-by-step animation on the example graph, unexplored vs. explored vertices]

 MST with cost = 37
FINDING MST: PRIM'S ALGORITHM
Running time with a binary min-heap:
 BUILD-MIN-HEAP takes O(V) time
 EXTRACT-MIN is performed |V| times, and each EXTRACT-MIN takes O(log V) time  O(V log V)
 The sum of the lengths of all adjacency lists is 2|E|, so the inner loop runs O(E) times in total
 Testing whether a vertex is already in the tree takes constant time  keep a set bit for each vertex that tells whether or not it is in the tree
 Each DECREASE-KEY operation takes O(log V) time  O(E log V) in total

 Overall running time: O(V log V + E log V) = O(E log V)  similar to Kruskal's
FINDING MST: PRIM'S ALGORITHM
Running time with a Fibonacci heap:
 BUILD-MIN-HEAP takes O(V) time
 EXTRACT-MIN is performed |V| times, each taking O(log V) amortized time  O(V log V)
 The sum of the lengths of all adjacency lists is 2|E|
 Each DECREASE-KEY operation takes O(1) amortized time  O(|E|) in total

 Overall running time: O(E + V log V)  improved running time
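Prim's algorithm can be sketched in Python on the same example graph. Python's heapq has no DECREASE-KEY, so this sketch uses the common lazy-deletion variant instead: stale heap entries are simply skipped, using the per-vertex "set bit" described in the analysis above (names are mine, not from the slides):

```python
# Prim's MST with heapq and lazy deletion (illustrative sketch).
import heapq

def prim(n, adj, source=0):
    in_tree = [False] * n             # the per-vertex "set bit" from the slides
    heap = [(0, source)]              # (edge weight into the tree, vertex)
    cost = 0
    while heap:
        w, u = heapq.heappop(heap)    # EXTRACT-MIN
        if in_tree[u]:
            continue                  # stale entry: u was already extracted
        in_tree[u] = True
        cost += w
        for v, wv in adj[u]:          # scan u's adjacency list
            if not in_tree[v]:
                heapq.heappush(heap, (wv, v))
    return cost

# The 9-vertex example graph from the slides, as (u, v, weight):
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7), (2, 8, 2),
         (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10), (5, 6, 2), (6, 7, 1),
         (6, 8, 6), (7, 8, 7)]
adj = [[] for _ in range(9)]
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))
print(prim(9, adj))   # → 37, the same MST cost Kruskal's algorithm finds
```

With lazy deletion the heap can hold up to O(E) entries, so this variant runs in O(E log E) = O(E log V), matching the binary-heap bound above.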
CONTENT COURTESY
 Dr. Abul Kashem Mia, Professor, CSE, BUET
 Sukarna Barua, Assistant Professor, CSE, BUET
