DAA Unit3 Greedy Methods
Uploaded by RN Patmase

ITU424 DESIGN AND ANALYSIS OF ALGORITHMS
Unit III

Greedy Method
Greedy Algorithms
• Greedy is an algorithmic paradigm that builds
up a solution piece by piece, always choosing
the next piece that offers the most obvious and
immediate benefit.
• Typically used to solve optimization problems.
• So the problems where choosing locally
optimal also leads to global solution are best fit
for Greedy.
• There is no need to evaluate alternative choices at each step.
Example
• Devise an algorithm for paying a given amount using the smallest
possible number of coins
• Available coins:
– Dollars (100 cents)
– Quarters (25 cents)
– Dimes (10 cents)
– Nickels (5 cents)
– Pennies (1 cent)
• If we have to pay 289 cents
• Best solution:-
– 10 coins (2 dollars, 3 quarters, 1 dime and 4 pennies)
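The coin-paying strategy above can be sketched in Python (a minimal illustration; the function name `greedy_change` is ours, not from the slides):

```python
def greedy_change(amount, coins=(100, 25, 10, 5, 1)):
    """Pay `amount` (in cents) greedily, always taking as many of the
    largest remaining denomination as still fits."""
    counts = {}
    for coin in coins:  # coins are assumed sorted in decreasing order
        counts[coin], amount = divmod(amount, coin)
    return counts

# 289 cents -> 2 dollars, 3 quarters, 1 dime, 0 nickels, 4 pennies
print(greedy_change(289))
```

For this coin system the greedy choice is in fact optimal; for arbitrary denominations it need not be.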
General characteristics of Greedy
• A candidate set, from which a solution is created
• A selection function, which chooses the best
candidate to be added to the solution
• A feasibility function, that is used to determine if a
candidate can be used to contribute to a solution
• An objective function, which assigns a value to a
solution, or a partial solution, and
• A solution function, which will indicate when we
have discovered a complete solution
Function greedy
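The five components above fit a generic greedy schema, which might be sketched as follows (the parameters `select`, `feasible`, and `is_solution` are hypothetical stand-ins for the selection, feasibility, and solution functions; they are not names from the slides):

```python
def greedy(candidates, select, feasible, is_solution):
    """Generic greedy schema: repeatedly pick the best remaining
    candidate and keep it only if it stays feasible, until a
    complete solution is reached (or candidates run out)."""
    solution = []
    remaining = list(candidates)
    while remaining and not is_solution(solution):
        best = select(remaining)          # selection function
        remaining.remove(best)
        if feasible(solution + [best]):   # feasibility function
            solution.append(best)
    return solution if is_solution(solution) else None

# Example: coin paying as an instance of the schema
coins = [100] * 3 + [25] * 4 + [10] * 2 + [5] * 2 + [1] * 6
target = 289
result = greedy(
    coins,
    select=max,                               # largest coin first
    feasible=lambda s: sum(s) <= target,
    is_solution=lambda s: sum(s) == target,
)
print(result)  # 10 coins summing to 289
```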
Elements of greedy technique
• A greedy algorithm always makes the choice
that looks best at the moment. That is, it
makes a locally optimal choice in the hope
that this choice will lead to a globally optimal
solution. This chapter explores optimization
problems that are solvable by greedy
algorithms.
• Greedy algorithms do not always yield optimal
solutions, but for many problems they do.
Developing a greedy algorithm can be a bit more involved than is
typical. We go through the following steps:
• 1. Determine the optimal substructure of the problem.
• 2. Develop a recursive solution.
• 3. Prove that at any stage of the recursion, one of the optimal
choices is the greedy choice. Thus, it is always safe to make the
greedy choice.
• 4. Show that all but one of the subproblems induced by having
made the greedy choice are empty.
• 5. Develop a recursive algorithm that implements the greedy
strategy.
• 6. Convert the recursive algorithm to an iterative algorithm.
An activity-selection problem
• scheduling several competing activities that require
exclusive use of a common resource, with a goal of selecting
a maximum-size set of mutually compatible activities.
• Suppose we have a set S = {a1, a2, ..., an} of n proposed
activities that wish to use a resource, such as a lecture hall,
which can be used by only one activity at a time.
• Each activity ai has a start time si and a finish time fi, where
0 ≤ si < fi < ∞. If selected, activity ai takes place during the
half-open time interval [si, fi). Activities ai and aj are
compatible if the intervals [si, fi) and [sj, fj) do not overlap
(i.e., ai and aj are compatible if si ≥ fj or sj ≥ fi).
activity-selection problem
The final activity schedule is (A1,A3,A4,A6,A7,A9,A10)
example
• Given 10 activities along with their start and
finish time as
Activity:    1  2  3  4  5  6  7  8  9  10
Start time:  1  2  3  4  5  6  7  8  9  10
Finish time: 5  3  4  6  7  8  11 10 12 13

• Compute a schedule where the largest number
of activities take place
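A minimal Python sketch of greedy activity selection applied to the table above (sort by finish time, keep each activity compatible with the last one chosen; `activity_selection` is an illustrative name):

```python
def activity_selection(activities):
    """Greedy activity selection: sort by finish time, then take
    each activity whose start is no earlier than the finish time
    of the last activity selected."""
    selected = []
    last_finish = 0
    for name, start, finish in sorted(activities, key=lambda a: a[2]):
        if start >= last_finish:
            selected.append(name)
            last_finish = finish
    return selected

# The 10 activities from the table above: (activity, start, finish)
acts = [(1, 1, 5), (2, 2, 3), (3, 3, 4), (4, 4, 6), (5, 5, 7),
        (6, 6, 8), (7, 7, 11), (8, 8, 10), (9, 9, 12), (10, 10, 13)]
print(activity_selection(acts))  # → [2, 3, 4, 6, 8, 10]
```

For this data the greedy schedule contains 6 mutually compatible activities.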
KNAPSACK PROBLEM
• We want to pack n items in our luggage
– The ith item is worth vi dollars and weighs wi pounds
– Take as valuable a load as possible, but cannot exceed
W pounds
– vi, wi and W are integers
• 0/1 knapsack
– Each item is taken or not taken
– Cannot take fractional amount of an item or take an
item more than once
• Fractional knapsack
– Fraction of items can be taken
Fractional Knapsack Problem
• Steps
– Compute the value/weight ratio vi/wi for each item
– Select the items in descending order of this ratio.
– Start filling the knapsack by putting the items into
it one by one
– The total weight of knapsack should not exceed
W(capacity of knapsack)
• Hence, the objective of this algorithm is to maximize the
total value of the items placed in the knapsack.
Fractional Knapsack
Algorithm
function knapsack(w[1..n], v[1..n], W): array [1..n]
    for i = 1 to n do x[i] = 0
    weight = 0
    {greedy loop}
    while weight < W do
        i = the best remaining object
        if weight + w[i] <= W then
            x[i] = 1
            weight = weight + w[i]
        else
            x[i] = (W - weight) / w[i]
            weight = W
    return x
examples
Analysis
• If the provided items are already sorted into decreasing
order of vi/wi, then the while loop takes time in O(n).
Therefore, the total time, including the sort, is in O(n log n).
Fractional Knapsack example
• Consider 5 items along with their respective
weights and values
Item:   1  2  3  4  5
Weight: 10 20 30 40 50
Value:  20 30 66 40 60

• The capacity of the knapsack is W = 100; find the
optimal solution
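The steps above can be sketched in Python (an illustrative version, not the textbook pseudocode; it returns only the total value):

```python
def fractional_knapsack(items, capacity):
    """items: list of (weight, value) pairs. Greedily take items in
    decreasing value/weight order, splitting the last one if needed."""
    total = 0.0
    for weight, value in sorted(items, key=lambda it: it[1] / it[0],
                                reverse=True):
        if capacity >= weight:
            total += value            # take the whole item
            capacity -= weight
        else:
            total += value * capacity / weight  # take a fraction
            break
    return total

# The 5 items from the table above, capacity W = 100
items = [(10, 20), (20, 30), (30, 66), (40, 40), (50, 60)]
print(fractional_knapsack(items, 100))  # → 164.0
```

Here items 3, 1, and 2 are taken whole (total weight 60, value 116) and 4/5 of item 5 fills the remaining 40 pounds, adding 48, for an optimal value of 164.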
Huffman Coding
• Huffman Coding is a famous Greedy Algorithm.
• It is used for the lossless compression of data.
• It uses variable length encoding.
• It assigns variable length code to all the characters.
• The code length of a character depends on how frequently it
occurs in the given text.
• The character which occurs most frequently gets the
smallest code.
• The character which occurs least frequently gets the largest
code.
• It is also known as Huffman Encoding.
Prefix Rule-

• Huffman Coding implements a rule known as the prefix rule.
• This is to prevent the ambiguities while
decoding.
• It ensures that the code assigned to any
character is not a prefix of the code assigned
to any other character.
• The algorithm builds the tree T corresponding
to the optimal code in a bottom up manner.
• It begins with a set of |C| leaves and performs a
sequence of |C| − 1 merging operations to create the
final tree.
Algorithm
Example
Example
• Since there are 8 letters in the alphabet, the
initial queue size is n = 8, and 7 merge steps are
required to build the tree. The final tree
represents the optimal prefix code. The
codeword for a letter is the sequence of the edge
labels on the path from the root to the letter.
Thus, the optimal Huffman code is as follows
Solution:
Frequencies: 2, 2, 3, 5, 5, 5
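The merge sequence for these frequencies can be checked with a small Python sketch that computes the total weighted code length of the optimal code (the `huffman_cost` helper is ours, not from the slides):

```python
import heapq

def huffman_cost(freqs):
    """Total weighted code length of an optimal Huffman code:
    repeatedly merge the two smallest frequencies. Each merge's sum
    is added to the cost, which charges every symbol one bit per
    level it sits below the root."""
    heap = list(freqs)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        cost += a + b
        heapq.heappush(heap, a + b)
    return cost

print(huffman_cost([2, 2, 3, 5, 5, 5]))  # → 55
```

The five merges produce internal nodes 4, 7, 10, 12, 22, whose sum (55) is the total number of bits needed to encode the text.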
Finding Single Source Shortest path in graph: Dijkstra's
Algorithm
• We want to compute the shortest path distance from a source node S to all other
nodes. We associate lengths or costs on edges and find the shortest path.
• We can’t use edges with a negative cost: if the graph had a negative-cost
cycle, we could take endless loops, reducing the cost each time we travel
along a negative-cost edge.
• Finding a path from vertex S to vertex T has the same cost as finding a path from
vertex S to all other vertices in the graph (within a constant factor).
• If all edge lengths are equal, then the Shortest Path algorithm is equivalent to the
breadth-first search algorithm. Breadth first search will expand the nodes of a
graph in the minimum cost order from a specified starting vertex (assuming equal
edge weights everywhere in the graph).
• Dijkstra’s Algorithm: This is a greedy algorithm to find the minimum distance from
a node to all other nodes.
• At each iteration of the algorithm, we choose the minimum distance vertex from
all unvisited vertices in the graph
• It computes the shortest path from one particular source node to all other
remaining nodes of the graph
function Dijkstra(L[1..n, 1..n]): array [2..n]
    array D[2..n]
    C = {2, 3, ..., n}    {S = N \ C exists only implicitly}
    for i = 2 to n do D[i] = L[1, i]
    {greedy loop}
    repeat n - 2 times
        v = some element of C minimizing D[v]
        C = C \ {v}    {and implicitly S = S ∪ {v}}
        for each w ∈ C do
            D[w] = min(D[w], D[v] + L[v, w])
    return D
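A runnable Python version using a min-priority queue (a common heap-based variant of the array-based pseudocode above; the example graph and names are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> {neighbor: edge_length}.
    Returns shortest distances from source; edge lengths must be
    non-negative."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float('inf')):
            continue  # stale queue entry; v already finalized
        for w, length in graph[v].items():
            nd = d + length
            if nd < dist.get(w, float('inf')):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist

# A small illustrative graph (not the one from the slides)
g = {'S': {'A': 1, 'B': 4}, 'A': {'B': 2, 'C': 6},
     'B': {'C': 3}, 'C': {}}
print(dijkstra(g, 'S'))  # → {'S': 0, 'A': 1, 'B': 3, 'C': 6}
```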
Example
Using Dijkstra’s Algorithm, find the shortest distance from source vertex ‘S’
to remaining vertices in the following graph-
Solution
Example
Consider the directed graph shown in the figure below. There are multiple
shortest paths between vertices S and T. Which one will be reported by Dijkstra's
shortest path algorithm? Assume that, in any iteration, the shortest path to a
vertex v is updated only when a strictly shorter path to v is discovered.

1)SDT 2)SBDT 3)SACDT 4)SACET


Spanning Tree
• A spanning tree is a subgraph of graph G that
covers all the vertices with the minimum
possible number of edges. Hence, a spanning
tree does not have cycles and cannot be
disconnected.
Minimum Spanning Trees
• Electronic circuit designs often need to make
the pins of several components electrically
equivalent by wiring them together. To
interconnect a set of n pins, we can use an
arrangement of n − 1 wires, each connecting
two pins. Of all such arrangements, the one
that uses the least amount of wire is usually
the most desirable.
The algorithms of Kruskal and Prim
• In Kruskal’s algorithm, the set A is a forest whose
vertices are all those of the given graph.
• The safe edge added to A is always a least-weight
edge in the graph that connects two distinct
components.
• In Prim’s algorithm, the set A forms a single tree.
The safe edge added to A is always a least-weight
edge connecting the tree to a vertex not in the
tree.
Kruskal’s algorithm
• Kruskal’s algorithm finds a safe edge to add to the
growing forest by finding, of all the edges that
connect any two trees in the forest, an edge (u,v)
of least weight.
• Let C1 and C2 denote the two trees that are
connected by (u,v). Since (u,v) must be a light edge
connecting C1 to some other tree, it is a safe edge for C1.
• Kruskal’s algorithm qualifies as a greedy algorithm
because at each step it adds to the forest an edge
of least possible weight
Kruskal’s algorithm
• The operation FIND-SET (u) returns a representative element from the set
that contains u. Thus, we can determine whether two vertices u and v
belong to the same tree by testing whether FIND-SET (u) equals FIND-
SET(v)
• To combine trees, Kruskal’s algorithm calls the UNION procedure.
Kruskal’s algorithm
• Figure shows how Kruskal’s algorithm works. Lines 1–3 initialize the set A to the
empty set and create |V | trees, one containing each vertex.
• The for loop in lines 5–8 examines edges in order of weight, from lowest to
highest. The loop checks, for each edge (u,v), whether the endpoints u and v
belong to the same tree. If they do, then the edge (u,v) cannot be added to the
forest without creating a cycle, and the edge is discarded. Otherwise, the two
vertices belong to different trees.
• In this case, line 7 adds the edge (u,v) to A, and line 8 merges the vertices in
the two trees.
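A minimal Python sketch of Kruskal's algorithm with a simple union-find for the FIND-SET / UNION operations (the example graph is ours, not the figure from the slides):

```python
def kruskal(vertices, edges):
    """edges: list of (weight, u, v). Returns the MST edge list."""
    parent = {v: v for v in vertices}

    def find(x):  # FIND-SET with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):  # edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:          # endpoints in different trees: safe edge
            parent[ru] = rv   # UNION the two trees
            mst.append((weight, u, v))
    return mst

# A small illustrative graph on vertices A, B, C, D
edges = [(1, 'A', 'B'), (2, 'B', 'C'), (3, 'A', 'C'), (4, 'C', 'D')]
mst = kruskal('ABCD', edges)
print(sum(w for w, _, _ in mst))  # → 7
```

Edge (3, A, C) is discarded because A and C already lie in the same tree, so the MST keeps the three edges of total weight 7.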
Kruskal’s algorithm
Prim’s algorithm
• Prim’s algorithm has the property that the
edges in the set A always form a single tree.
• The tree starts from an arbitrary root vertex r
and grows until the tree spans all the vertices
in V
• Each step adds to the tree A a light edge that
connects A to an isolated vertex—one on
which no edge of A is incident.
Prim’s algorithm
• This strategy qualifies as greedy since at each step
it adds to the tree an edge that contributes the
minimum amount possible to the tree’s weight.
• During execution of the algorithm, all vertices that
are not in the tree reside in a min-priority queue
Q based on a key attribute. For each vertex v, the
attribute v.key is the minimum weight of any edge
connecting v to a vertex in the tree; by
convention, v.key = ∞ if there is no such edge.
• The attribute v.π names the parent of v in the
tree.
• When the algorithm terminates, the min-
priority queue Q is empty; the minimum
spanning tree A for G is thus A = {(v, v.π) : v ∈ V − {r}}.
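A minimal Python sketch of Prim's algorithm using a min-priority queue (a simplified variant that accumulates only the total weight rather than the π parent attributes; the example graph is illustrative):

```python
import heapq

def prim(graph, root):
    """graph: dict vertex -> {neighbor: weight}, with each undirected
    edge listed in both directions. Grows a single tree from root,
    always adding the lightest edge leaving the tree; returns the
    total MST weight."""
    visited = set()
    heap = [(0, root)]
    total = 0
    while heap:
        weight, v = heapq.heappop(heap)
        if v in visited:
            continue              # stale entry: v already in the tree
        visited.add(v)
        total += weight
        for w, edge_weight in graph[v].items():
            if w not in visited:
                heapq.heappush(heap, (edge_weight, w))
    return total

# A small illustrative graph on vertices A, B, C, D
g = {'A': {'B': 1, 'C': 3}, 'B': {'A': 1, 'C': 2},
     'C': {'A': 3, 'B': 2, 'D': 4}, 'D': {'C': 4}}
print(prim(g, 'A'))  # → 7
```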
