
LECTURE 8: (Part 1)

Greedy Technique

CMPE 323 Algorithms


Based on:
Levitin, "Introduction to the Design and Analysis of Algorithms", Pearson, 2012
Cormen, Leiserson, Rivest, Stein, "Introduction to Algorithms", The MIT Press, 2009
Greedy Technique
 A general algorithm design technique applicable to optimization
problems only
 Construct a solution through a sequence of steps, each
expanding a partially constructed solution obtained so far, until
a complete solution to the problem is reached

2
Greedy Technique
 On each step, the choice made must be:
 Feasible: it has to satisfy the problem’s constraints
 Locally optimal: has to be the best local choice among all feasible
choices available at the current step
 Irrevocable: Once it is made, it cannot be changed on subsequent
steps

3
Greedy Technique
 Question: Does the greedy strategy work or not?
Answer: It depends on the problem. For some problems, a greedy algorithm can
only be an approximation to the optimal solution!
 Greedy algorithms are simple and appealing.
 But proving that a greedy algorithm yields an optimal solution for every
problem instance can be difficult.

4
Greedy Technique
 Ways to prove that a greedy algorithm yields an optimal solution:
 Show that a partially constructed solution obtained on each iteration
can be extended to an optimal solution to the problem (by induction)
 Show that on each step it does at least as well as any other algorithm
could in advancing toward the problem’s goal
 Show that the result is optimal based on the algorithm’s output rather
than the way it operates
 There is a sophisticated theory behind the greedy technique, based on an
abstract combinatorial structure called a matroid
5
Activity Selection Problem
 A set S = {a_1, a_2, ..., a_n} of proposed activities that wish to use a
common resource
s_i: start time of activity a_i
f_i: finish time of activity a_i, where 0 ≤ s_i < f_i < ∞
Activity a_i happens in the half-open interval [s_i, f_i)
 Definition: a_i and a_j are compatible if [s_i, f_i) and [s_j, f_j) do not overlap
 Definition: (Activity Selection (AS) Problem)
Select a maximum-size subset of mutually compatible activities

6
Activity Selection Problem
 Example: Given 11 activities with their start and finish times:

 i   : 1  2  3  4  5  6  7  8  9  10 11
 s_i : 1  3  0  5  3  5  6  8  8  2  12
 f_i : 4  5  6  7  8  9  10 11 12 13 14

An infeasible solution - overlapping activities!
A feasible but not optimal solution
A feasible and optimal solution
Another feasible and optimal solution
An infeasible solution
An "optimal but not feasible" solution ??? (not possible - a solution must be feasible to be optimal)
(Concrete candidate subsets are checked in the sketch below.)
7
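To make the example concrete, here is a small Python sketch (not part of the original slides) that encodes the table above and checks whether a candidate subset is mutually compatible; the candidate subsets below are chosen here for illustration.

```python
# Activities from the table above, indexed 1..11 (index 0 unused).
s = [None, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]      # start times
f = [None, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]  # finish times

def compatible(i, j):
    """a_i and a_j are compatible if [s_i, f_i) and [s_j, f_j) do not overlap."""
    return f[i] <= s[j] or f[j] <= s[i]

def mutually_compatible(subset):
    """A subset is feasible if every pair of its activities is compatible."""
    return all(compatible(a, b) for a in subset for b in subset if a < b)

# Illustrative candidates (chosen for demonstration):
print(mutually_compatible([4, 5]))          # False: a_4 = [5,7) and a_5 = [3,8) overlap
print(mutually_compatible([3, 9, 11]))      # True, but only 3 activities (not optimal)
print(mutually_compatible([1, 4, 8, 11]))   # True, 4 activities (an optimal solution)
print(mutually_compatible([2, 4, 9, 11]))   # True, 4 activities (another optimal solution)
```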
Activity Selection Problem
 Start with an application of DP
 Step #1: Optimal Substructure
Define a suitable space of subproblems:

S_ij = { a_k ∈ S : f_i ≤ s_k < f_k ≤ s_j }

S_ij is the set of all activities that start after a_i finishes and finish
before a_j starts, so they are compatible with a_i and a_j.
They are also compatible with all activities that finish no later
than a_i finishes and with all activities that start no earlier than a_j
starts.
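As a concrete illustration (not from the slides), S_ij can be computed directly from this definition; the pair i = 1, j = 11 below is an assumption chosen for the example data of Slide 7.

```python
# Example data from Slide 7 (1-indexed; index 0 unused).
s = [None, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [None, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]

def S(i, j):
    """S_ij = { a_k : f_i <= s_k and f_k <= s_j } (s_k < f_k always holds)."""
    return [k for k in range(1, len(s)) if f[i] <= s[k] and f[k] <= s[j]]

print(S(1, 11))   # [4, 6, 7, 8, 9]: activities that fit between a_1 and a_11
```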
Activity Selection Problem
 Step #1 (cntd.): Optimal Substructure
Example: Consider the activities on Slide-7

[Figure: a time axis starting at 0, showing activities labelled a_i, a_p, a_j
and a_t, a_q, a_z, illustrating the subproblem structure]
Activity Selection Problem
 Step #1 (cntd.): Optimal Substructure
How do we define the given problem with n activities in terms of the subproblems S_ij?
Answer: Add two dummy activities a_0 and a_{n+1} that define the
boundaries, where f_0 = 0 and s_{n+1} = ∞.
So, S = S_{0,n+1}, where 0 ≤ i, j ≤ n + 1.
Assume the activities are sorted by their finish times at O(n log n) cost,
so f_0 ≤ f_1 ≤ f_2 ≤ ... ≤ f_n < f_{n+1}.
Then S_ij = ∅ when i ≥ j.
Activity Selection Problem
 Step #1 (cntd.): Optimal Substructure
Let S_ij contain an activity a_k. Then we can divide the problem of
finding an optimal solution A_ij for S_ij into two sub-problems: S_ik and S_kj.
So, we can write A_ij = A_ik ∪ {a_k} ∪ A_kj, hence |A_ij| = |A_ik| + 1 + |A_kj|.
Assume that the optimal solution A_ij to S_ij includes activity a_k.
Then the solutions A_ik to S_ik and A_kj to S_kj must also be optimal.
We do not know A_ij and we do not know a_k. But in every step
of decision we must be able to select the correct activity a_k that
leads us to obtain the optimal set A_ij at the end.
Activity Selection Problem
 Step #2: Construction of recursive solution
S = S_{0,n+1} should be solved for an optimal solution.
Let c[i, j] be the number of activities in a maximum-size subset of
mutually compatible activities in S_ij.
c[i, j] = 0 when S_ij = ∅. Also, c[i, j] = 0 when i ≥ j.
If an optimal solution to S_ij includes a_k, then c[i, j] = c[i, k] + c[k, j] + 1.
Since we do not know which a_k this is, we take the maximum:

c[i, j] = 0                                              if S_ij = ∅
c[i, j] = max { c[i, k] + c[k, j] + 1 : a_k ∈ S_ij }     if S_ij ≠ ∅

Find the k such that a_k ∈ S_ij maximizes the total # of compatible activities.
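As an illustration (not part of the original slides), here is a minimal memoized Python sketch of this recurrence, using the dummy activities a_0 and a_{n+1} and the example data from Slide 7, assumed already sorted by finish time:

```python
from functools import lru_cache

# Activities from Slide 7, plus dummies a_0 (f_0 = 0) and a_12 (s_12 = infinity).
INF = float("inf")
s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12, INF]
f = [0, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, INF]
n = 11

def in_S(k, i, j):
    """a_k is in S_ij if it starts after a_i finishes and finishes before a_j starts."""
    return f[i] <= s[k] and f[k] <= s[j]

@lru_cache(maxsize=None)
def c(i, j):
    """Maximum number of mutually compatible activities in S_ij."""
    best = 0
    for k in range(i + 1, j):
        if in_S(k, i, j):
            best = max(best, c(i, k) + 1 + c(k, j))
    return best

print(c(0, n + 1))   # 4 for the example data
```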


Activity Selection Problem
 Step #3 & #4: Prove that Greedy choice exists and all but one of
the sub-problems are empty
Theorem: If S_ij ≠ ∅, let a_m be the activity in S_ij with the
earliest finish time, i.e. f_m = min { f_k : a_k ∈ S_ij }. Then,
1. a_m can be used in some maximum-size subset of mutually
compatible activities of S_ij
2. The sub-problem S_im is empty (since a_m is the first activity in S_ij to finish)
Activity Selection Problem
 Step #3 & #4 (cntd.):
Proof (1):
Let A_ij be a maximum-size subset of mutually compatible activities of S_ij,
and let a_k be the activity in A_ij with the earliest finish time.
If a_k = a_m, then a_m is already part of this optimal solution.
If a_k ≠ a_m, then let A'_ij = (A_ij − {a_k}) ∪ {a_m}.
The activities in A'_ij are again disjoint, since a_k is the first
activity in A_ij to finish and f_m ≤ f_k. Since |A'_ij| = |A_ij|,
A'_ij is also a maximum-size subset of mutually compatible activities of S_ij,
and it includes a_m.
Activity Selection Problem
 Step #3 & #4 (cntd.):
Proof (2):
Suppose S_im is non-empty and let a_k be a member of S_im. Then
f_i ≤ s_k < f_k ≤ s_m < f_m, so a_k is also in S_ij and it finishes
earlier than a_m. This contradicts a_m being the activity in S_ij with the
minimum finish time. Hence S_im = ∅.
Activity Selection Problem
 Step #3 & #4 (cntd.):
As a result, instead of solving two sub-problems and examining j − i − 1
choices for k as in the DP approach, we solve only 1 sub-problem (the
other sub-problem is always empty).
In solving the only sub-problem, we need to consider only one
choice: the activity with the earliest finish time in S_ij (the Greedy choice!).
No need for a bottom-up, table-driven approach.
Activity Selection Problem
 Step #5: Develop a recursive algorithm that implements the
Greedy strategy
Time complexity: Θ(n) (assuming the activities are already sorted by their finish times)
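The pseudocode itself did not survive the conversion; the following Python sketch reconstructs a recursive greedy activity selector in the spirit of CLRS's RECURSIVE-ACTIVITY-SELECTOR, assuming the activities are sorted by finish time and a dummy activity a_0 with f_0 = 0 is prepended:

```python
def recursive_activity_selector(s, f, k, n):
    """Return a maximum-size set of mutually compatible activities among
    a_{k+1}..a_n that start after a_k finishes.
    Assumes f is nondecreasing and f[0] = 0 (dummy activity a_0)."""
    m = k + 1
    # Skip activities that start before a_k finishes.
    while m <= n and s[m] < f[k]:
        m += 1
    if m <= n:
        # Greedy choice: a_m is the compatible activity with the earliest finish time.
        return [m] + recursive_activity_selector(s, f, m, n)
    return []

# Example data from Slide 7 (index 0 is the dummy activity a_0):
s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(recursive_activity_selector(s, f, 0, 11))   # [1, 4, 8, 11]
```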
Activity Selection Problem
 Step #5 (cntd.):
Example: [Figure: trace of the Recursive Activity Selector on the activities of Slide 7]
Activity Selection Problem
 Step #6: Convert the recursive algorithm to an iterative one

Time complexity: Θ(n) (assuming the activities are already sorted by their finish times)
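Again, the iterative pseudocode was lost in conversion; a minimal Python sketch of an iterative greedy activity selector (in the spirit of CLRS's GREEDY-ACTIVITY-SELECTOR, same assumptions as above) is:

```python
def greedy_activity_selector(s, f):
    """Iterative greedy selection.
    s, f: start/finish times of a_1..a_n at indices 1..n, with a dummy a_0
    at index 0 (f[0] = 0); f must be nondecreasing."""
    n = len(s) - 1
    selected = [1]   # a_1 has the earliest finish time, so it is chosen first
    k = 1            # index of the most recently selected activity
    for m in range(2, n + 1):
        if s[m] >= f[k]:          # a_m starts after a_k finishes -> compatible
            selected.append(m)
            k = m
    return selected

s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(greedy_activity_selector(s, f))   # [1, 4, 8, 11]
```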
Elements of Greedy Strategy
 Steps of the greedy strategy:
1. Determine the optimal substructure of the problem (from DP)
2. Develop a recursive algorithm (formula) (from DP)
3. Prove that one of the optimal choices is the Greedy choice. In other
words, it is always safe to make the Greedy choice (e.g. part 1 of the
theorem for the AS problem)
4. Show that all but one of the sub-problems induced by having made
the Greedy choice are empty (e.g. part 2 of the theorem for the AS problem)
5. Develop a recursive algorithm that implements the Greedy strategy
6. Convert the recursive algorithm to an iterative algorithm
Elements of Greedy Strategy
 Two keys to solve an optimization problem by Greedy approach:
1. Greedy choice property
2. Optimal substructure (also for DP)
 In the greedy approach: first make the (locally best) choice, then solve
the resulting sub-problems (top-down)
 The choice does not depend on future choices or sub-problem solutions!
 Before solving the problem, we may arrange/transform the inputs
without changing the problem definition (e.g. sorting the activities by
their finish times in the AS problem)
Greedy vs Dynamic Programming
 0-1 Knapsack problem: Given a set of items, each with a size and a value,
and a knapsack of a given capacity to be filled with items.
 Problem: Which items should be selected so that their total value is
maximized while the knapsack capacity is not exceeded?
 0: do not take the item, 1: take the item
Greedy vs Dynamic Programming
 A greedy algorithm:
1. Sort the items by value-per-size (value density) in descending order.
2. Take the items in that order, selecting each item only if it fits within the
remaining capacity of the knapsack.
 For the 0-1 Knapsack problem, this algorithm does not guarantee that an
optimal solution is found.
 The 0-1 Knapsack problem can be solved by DP (a sketch follows below).
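The slides do not include code for this; as an illustration, a standard bottom-up DP sketch in Python for the 0-1 Knapsack problem (integer sizes and capacity assumed) might look like this. The instance at the bottom is chosen here for demonstration, not taken from the slides:

```python
def knapsack_01(sizes, values, capacity):
    """Bottom-up DP for 0-1 Knapsack.
    dp[c] = best total value achievable with capacity c using the items seen so far."""
    dp = [0] * (capacity + 1)
    for size, value in zip(sizes, values):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, size - 1, -1):
            dp[c] = max(dp[c], dp[c - size] + value)
    return dp[capacity]

# Illustrative instance: greedy by value-per-size takes the size-10 item first
# and ends with total value 160, while the optimum is 220.
sizes = [10, 20, 30]
values = [60, 100, 120]
print(knapsack_01(sizes, values, 50))   # 220 (items of size 20 and 30)
```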
Greedy vs Dynamic Programming
 If we relax the problem and allow fractions of items to be taken
rather than binary (0-1) choices, it is called the Fractional Knapsack
problem
 The greedy algorithm above guarantees an optimal solution for the
fractional version of the problem (a sketch follows below)
 Both the 0-1 and the Fractional Knapsack problems show the optimal
substructure property.
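For illustration (not from the slides), a minimal greedy sketch for the Fractional Knapsack problem, reusing the same example instance as above:

```python
def fractional_knapsack(sizes, values, capacity):
    """Greedy by value-per-size; optimal for the fractional problem."""
    # Sort item indices by value density in descending order.
    order = sorted(range(len(sizes)), key=lambda i: values[i] / sizes[i], reverse=True)
    total = 0.0
    remaining = capacity
    for i in order:
        if remaining <= 0:
            break
        take = min(sizes[i], remaining)          # take as much of the item as fits
        total += values[i] * (take / sizes[i])   # proportional value of the taken fraction
        remaining -= take
    return total

sizes = [10, 20, 30]
values = [60, 100, 120]
print(fractional_knapsack(sizes, values, 50))   # 240.0 (all of items 1 and 2, 2/3 of item 3)
```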
