323 Lecture Notes 8 Part 1
Greedy Technique
Greedy Technique
On each step, the choice made must be:
Feasible: it has to satisfy the problem’s constraints
Locally optimal: has to be the best local choice among all feasible
choices available at the current step
Irrevocable: Once it is made, it cannot be changed on subsequent
steps
Greedy Technique
Question: Does the Greedy strategy work or not?
Answer: It depends on the problem. For some problems, it can
only be an approximation to the optimal solution!
Greedy algorithms are simple and appealing.
But proving that a greedy algorithm yields an optimal solution for every
problem instance can be difficult.
Greedy Technique
Ways to prove:
Show that a partially constructed solution obtained on each iteration
can be extended to an optimal solution to the problem (by induction)
Show that on each step it does at least as well as any other algorithm
could in advancing toward the problem’s goal
Show that the result is optimal based on the algorithm’s output rather
than the way it operates
There is a sophisticated theory behind the Greedy technique, based on an
abstract combinatorial structure called a matroid.
Activity Selection Problem
Activities use a common resource (only one activity can use it at a time)
S = {a_1, a_2, ..., a_n}: set of n proposed activities
s_i: start time of activity a_i
f_i: finish time of activity a_i, where 0 ≤ s_i < f_i < ∞
a_i happens in the half-open interval [s_i, f_i)
Definition: a_i and a_j are compatible if [s_i, f_i) and [s_j, f_j) do not
overlap, i.e. s_i ≥ f_j or s_j ≥ f_i
Definition (Activity Selection (AS) Problem):
Select a maximum-size subset of mutually compatible activities
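For concreteness, the compatibility test can be written in one line (a Python
sketch; the function name is mine, not from the notes):

def compatible(s_i, f_i, s_j, f_j):
    # Activities occupy the half-open intervals [s_i, f_i) and [s_j, f_j);
    # they are compatible exactly when one finishes no later than the other starts.
    return s_i >= f_j or s_j >= f_i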
Activity Selection Problem
Example: Given 11 activities with their start and end times
i   :   1   2   3   4   5   6   7   8   9  10  11
s_i :   1   3   0   5   3   5   6   8   8   2  12
f_i :   4   5   6   7   8   9  10  11  12  13  14
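For this instance, one maximum-size subset of mutually compatible activities
is {a_1, a_4, a_8, a_11}, containing 4 activities.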
Example: [Figure: activities a_i, a_p, a_j, a_t, a_q, a_z drawn as intervals
on a time axis starting at 0.]
Activity Selection Problem
Step #1 (cntd.): Optimal Substructure
How do we define the given problem S, having n activities, in terms of
sub-problems S_ij = {a_k ∈ S : f_i ≤ s_k and f_k ≤ s_j}, i.e. the activities
that start after a_i finishes and finish before a_j starts?
Answer: Add two new dummy activities a_0 and a_(n+1) that define the
boundaries, where f_0 = 0 and s_(n+1) = ∞.
So, S = S_{0,n+1}, where 0 ≤ i, j ≤ n+1.
Assume activities are sorted by their finish times at O(n log n) cost,
so f_0 ≤ f_1 ≤ f_2 ≤ ... ≤ f_n < f_(n+1).
S_ij = ∅ when i ≥ j.
Activity Selection Problem
Step #1 (cntd.): Optimal Substructure
Let S_ij contain an activity a_k. Then, we can divide the problem of
finding an optimal solution for S_ij into two sub-problems: S_ik and S_kj.
So, we can write A_ij = A_ik ∪ {a_k} ∪ A_kj.
Assume that an optimal solution A_ij to S_ij includes activity a_k.
Then, the solutions A_ik to S_ik and A_kj to S_kj must also be optimal
(otherwise a larger solution to a sub-problem could be cut and pasted into
A_ij, contradicting its optimality).
We do not know A_ij and we do not know a_k in advance. But, at every
decision step we must be able to select the correct activity a_k that
leads us to obtain the set A_ij at the end.
Activity Selection Problem
Step #2: Construction of a recursive solution
S_{0,n+1} should be solved for an optimal solution.
Let c[i, j] be the number of activities in a maximum-size subset of
mutually compatible activities in S_ij.
c[i, j] = 0 when S_ij = ∅; in particular, c[i, j] = 0 when i ≥ j.
c[i, j] = 0                                            if S_ij = ∅
c[i, j] = max { c[i, k] + c[k, j] + 1 : a_k ∈ S_ij }   if S_ij ≠ ∅
Time complexity: O(n^3) for the resulting dynamic-programming algorithm
(O(n^2) sub-problems c[i, j], each examining O(n) candidate activities a_k).
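A minimal top-down (memoized) sketch of this recurrence, assuming 0-based
Python lists of start and finish times already sorted by finish time
(function and variable names are illustrative, not from the notes):

from functools import lru_cache

def count_max_compatible(starts, finishes):
    # starts[k], finishes[k] for k = 0..n-1, assumed sorted by finish time.
    n = len(starts)
    # Dummy boundary activities: a_0 with f_0 = 0 and a_(n+1) with s_(n+1) = infinity.
    s = [0.0] + list(starts) + [float("inf")]
    f = [0.0] + list(finishes) + [float("inf")]

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): size of a maximum set of mutually compatible activities in
        # S_ij = { a_k : f_i <= s_k and f_k <= s_j }.
        best = 0
        for k in range(i + 1, j):
            if f[i] <= s[k] and f[k] <= s[j]:
                best = max(best, c(i, k) + c(k, j) + 1)
        return best

    return c(0, n + 1)   # the original problem is S = S_{0,n+1}

On the 11-activity example above this returns 4. There are O(n^2) pairs
(i, j) and each evaluation scans O(n) candidate activities, which is where
the O(n^3) bound comes from.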
Elements of Greedy Strategy
Steps of the greedy strategy:
1. Determine the optimal substructure of the problem (from DP)
2. Develop a recursive algorithm (formula) (from DP)
3. Prove that one of the optimal choices is the Greedy choice. In other
words, it is always safe to make the Greedy choice (e.g. Theorem 1
of the AS problem)
4. Show that all but one of the sub-problems induced by having made
the Greedy choice are empty (e.g. Theorem 2 of the AS problem)
5. Develop a recursive algorithm that implements the Greedy strategy
6. Convert the recursive algorithm to an iterative algorithm
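As an illustration of steps 5-6, the iterative greedy selector for the AS
problem can be sketched as follows (Python; assumes the activities are
already sorted by finish time, and the function name is mine):

def greedy_activity_selector(starts, finishes):
    # Repeatedly take the compatible activity with the earliest finish time.
    if not starts:
        return []
    selected = [0]     # the activity that finishes first is always a safe choice
    last = 0
    for k in range(1, len(starts)):
        if starts[k] >= finishes[last]:   # compatible with the last selected activity
            selected.append(k)
            last = k
    return selected    # 0-based indices of the chosen activities

On the 11-activity example it selects activities a_1, a_4, a_8 and a_11,
and it runs in O(n) time after the O(n log n) sort.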
Elements of Greedy Strategy
Two keys to solve an optimization problem by Greedy approach:
1. Greedy choice property
2. Optimal substructure (also for DP)
In the greedy approach: first make the (locally best) choice, then solve
the resulting sub-problems (top-down)
Do not care about future choices!
Before solving the problem, we may arrange/transform the inputs
without damaging the problem definition (e.g. Sorting the activities by
their finish time in AS problem)
Greedy vs Dynamic Programming
0-1 Knapsack problem: Given a set of items with their sizes and
values, and a knapsack with a fixed capacity to be filled with items.
Problem: Which items should be selected such that their total value
is maximum while the knapsack capacity is not exceeded ?
0: do not take the item
1: take the item
Greedy vs Dynamic Programming
A greedy algorithm:
1. Sort the items by value-per-size in descending order.
2. Go through the items in that order, taking each item as long as the
remaining capacity of the knapsack allows it
For the 0-1 Knapsack problem, the above algorithm does not
guarantee the optimal solution to be found.
0-1 Knapsack problem can be solved by DP.
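A small counterexample: with capacity 50 and items of (size, value) =
(10, 60), (20, 100), (30, 120), the value-per-size order is item 1, item 2,
item 3; the greedy 0-1 choice takes items 1 and 2 for a total value of 160,
while the optimal 0-1 solution takes items 2 and 3 for 220. A minimal
bottom-up DP sketch for the 0-1 problem (Python; the function name is mine
and item sizes are assumed to be non-negative integers):

def knapsack_01(sizes, values, capacity):
    # best[c] = maximum total value achievable with capacity c, items seen so far.
    best = [0] * (capacity + 1)
    for size, value in zip(sizes, values):
        # Traverse capacities downwards so each item is taken at most once.
        for c in range(capacity, size - 1, -1):
            best[c] = max(best[c], best[c - size] + value)
    return best[capacity]

For the counterexample above, knapsack_01([10, 20, 30], [60, 100, 120], 50)
returns 220.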
Greedy vs Dynamic Programming
If we relax the problem and allow fractions of items to be taken
rather than binary (0-1) choices, it is called Fractional Knapsack
problem
The above Greedy algorithm guarantees an optimal solution for the
fractional version of the problem (taking a fraction of the first item
that does not fully fit).
Both the 0-1 and the Fractional Knapsack problems show the optimal
substructure property.
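A sketch of the greedy algorithm adapted to the fractional version (Python;
the function name is mine):

def fractional_knapsack(sizes, values, capacity):
    # Process items in descending value-per-size order; take a fraction of
    # the first item that no longer fits completely.
    order = sorted(range(len(sizes)), key=lambda i: values[i] / sizes[i], reverse=True)
    total = 0.0
    remaining = capacity
    for i in order:
        if remaining <= 0:
            break
        take = min(sizes[i], remaining)   # whole item if possible, otherwise a fraction
        total += values[i] * (take / sizes[i])
        remaining -= take
    return total

On the counterexample above it returns 60 + 100 + (20/30)*120 = 240, which
is optimal for the fractional problem.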