3 Greedy-Lec
The greedy method
• An optimization problem is one in which you want to find, not just a
solution, but the best solution
– A “greedy algorithm” works well for many (though not all) optimization problems
• The greedy method suggests that one can devise an algorithm that works
in phases:
– At each phase, take one input at a time (from the input ordered
according to the selection criterion) and decide whether to include
it in the solution being built.
Feasible vs. optimal solution
• The greedy method solves a problem by making a sequence of
decisions.
• Decisions are made one by one in some order.
• Each decision is made using a greedy criterion.
• A decision, once made, is (usually) not changed later.
• Given n inputs we are required to obtain a subset that satisfies
some constraints
– Any subset that satisfies the given constraints is called a
feasible solution
– A feasible solution that either maximizes or minimizes a given
objective function is called an optimal solution.
Greedy algorithm
To apply the greedy algorithm:
• Decide optimization measure (maximization of profit or
minimization of cost)
– Sort the input in increasing or decreasing order based on the
optimization measure selected for the given problem
• Formulate a multistage solution
– Take one input at a time as per the selection criteria
• Select an input that is feasible and part of the optimal solution
– from the ordered list pick one input at a time and include it in
the solution set if it fulfills the criteria
Greedy Choice Property
• Greedy algorithm always makes the choice that looks best at
the moment
– With the hope that a locally optimal choice will lead to a
globally optimal solution
• Greedy choice property says that a globally optimal solution
can be arrived at by making a locally optimal choice
– Locally optimal choice → globally optimal solution
The Problem of Making Coin Change
• Assume the coin denominations are: 50, 25, 10, 5, and 1.
• Problem: Make a change of a given amount using the smallest
possible number of coins
• Example: make a change for x = 92.
–Mathematically this is written as
x = 50a + 25b + 10c + 5d + 1e
so that a + b + c + d + e is minimum and a, b, c, d, e ≥ 0.
• Greedy algorithm for coin changing
–Order coins in decreasing order
–Select coins one at a time (divide x by denomination)
–Solution: a = 1, b = 1, c = 1, d = 1, e = 2.
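A minimal Python sketch of this greedy coin-change procedure (the function name greedy_change and the use of divmod are illustrative choices, not from the slides):

    def greedy_change(amount, denominations=(50, 25, 10, 5, 1)):
        """Greedy change-making: repeatedly take as many coins of the
        largest remaining denomination as possible."""
        counts = {}
        for d in sorted(denominations, reverse=True):   # decreasing order of value
            counts[d], amount = divmod(amount, d)       # "divide x by denomination"
        return counts

    # Example: greedy_change(92) returns {50: 1, 25: 1, 10: 1, 5: 1, 1: 2},
    # i.e. a = b = c = d = 1 and e = 2, six coins in total.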
Algorithm
procedure greedy (A, n)
Solution ← { }; // set that will hold the solution set.
FOR i = 1 to n DO
x = SELECT (A)
IF FEASIBLE (Solution, x) THEN
Solution = UNION (Solution, x)
end if
end FOR
RETURN Solution
end procedure
• SELECT function:
–selects the most promising candidate from A[ ] and removes it
from the list.
• FEASIBLE function:
–a Boolean-valued function that determines whether x can be
included in the solution vector.
• UNION function:
–combines x with the solution set.
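A Python sketch of this skeleton, with SELECT and FEASIBLE supplied as functions (the parameter names mirror the pseudocode; this is a generic template under those assumptions, not a specific algorithm from the slides):

    def greedy(candidates, select, feasible):
        """Generic greedy skeleton: repeatedly pick the most promising
        candidate and keep it only if the partial solution stays feasible."""
        solution = set()                    # set that will hold the solution
        remaining = list(candidates)
        while remaining:
            x = select(remaining)           # SELECT: most promising candidate ...
            remaining.remove(x)             # ... which is also removed from the list
            if feasible(solution, x):       # FEASIBLE: can x be included?
                solution = solution | {x}   # UNION: combine x with the solution
        return solution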
A failure of the greedy algorithm
In some (fictional) monetary system, “krons”
come in 1 kron, 7 kron, and 10 kron coins
Using a greedy algorithm to count out 15
krons, you would get
A 10 kron piece
Five 1 kron pieces, for a total of 15 krons
This requires six coins
Is there any better solution?
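A short Python check of this example (the helper name greedy_coins is illustrative):

    def greedy_coins(amount, denoms=(10, 7, 1)):
        """Return the coins chosen greedily, largest denomination first."""
        used = []
        for d in sorted(denoms, reverse=True):
            while amount >= d:
                used.append(d)
                amount -= d
        return used

    # greedy_coins(15) -> [10, 1, 1, 1, 1, 1], six coins.
    # But 7 + 7 + 1 = 15 uses only three coins, so the greedy choice
    # is not optimal for these denominations.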
Minimum Spanning Trees
Problem: Laying Telephone Wire
(Figure: a central office and the locations to be connected by wire)
Minimum Spanning Tree (MST)
• Assume you have an undirected graph G = (V,E) with weights
assigned to edges.
•The objective is to “use the smallest set of edges of the given graph to
connect everything together”. How?
•A minimum spanning tree is a least-cost subset of the edges of a
graph that connects all the nodes
• MST is a sub-graph of an undirected weighted graph G, such that:
– It is a tree (i.e., it is acyclic)
– It covers all the vertices V
– It contains |V| - 1 edges
– The total cost associated with tree edges is the minimum among all
possible spanning trees
Applications of MST
• Network design, road planning, hydraulic, electric, and
telecommunication networks, etc.
How can we generate a MST?
• A MST is a least-cost subset of the edges of a graph that connects all
the nodes
• A greedy method to obtain a minimum-cost spanning tree builds this
tree edge by edge.
–The next edge to include is chosen according to some optimization
criterion.
• Criterion: choose an edge that results in a minimum increase in the
sum of the costs of the edges included so far.
• General procedure:
– Start by picking any node and adding it to the tree
– Repeatedly: pick any least-cost edge from a node in the tree to a
node not in the tree, and add the edge and new node to the tree
– Stop when all nodes have been added to the tree
(Figure: a small weighted example graph)
• Two techniques: Prim and Kruskal algorithms
Kruskal’s algorithm
• Kruskal algorithm: Always tries the lowest-cost remaining edge
• It considers the edges of the graph in increasing order of cost.
• In this approach, the set T of edges so far selected for the spanning
tree may not be a tree at all stages in the algorithm. But it is possible
to complete T into a tree.
• Create a forest of trees from the vertices
• Repeatedly merge trees by adding “safe edges” until only one tree
remains. A “safe edge” is an edge of minimum weight which does
not create a cycle
• Example:
(Figure: a weighted graph on the vertices a, b, c, d, e)
– Initially there is a forest: {a}, {b}, {c}, {d}, {e}
– Edges in increasing order of cost: E = {(a,d), (c,d), (d,e), (a,c),
(b,e), (c,e), (b,d), (a,b)}
– MST with cost = 2 + 4 + 4 + 5 = 15
Kruskal Algorithm
procedure kruskalMST(G, cost, n, T)
  i = mincost = 0
  while i < n - 1 do
    delete a minimum cost edge (u,v)
    j = find(u)
    k = find(v)
    if j ≠ k then
      i = i + 1
      T(i,1) = u; T(i,2) = v
      mincost = mincost + cost(u,v)
      union(j,k)
    end if
  end while
  if i ≠ n-1 then return “no spanning tree”
  return mincost
end procedure
• After each iteration, every tree in the forest is a MST of the
vertices it connects
• Algorithm terminates when all vertices are connected into one tree
• Running time is bounded by sorting (or findMin): O(n²)
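A compact Python version of Kruskal’s algorithm with a simple union-find (the edge format (cost, u, v) and vertex numbering 0..n-1 are assumptions made for this sketch):

    def kruskal(n, edges):
        """edges: list of (cost, u, v). Returns (mincost, tree_edges),
        or None if the graph is not connected."""
        parent = list(range(n))

        def find(x):                           # root of the tree containing x
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        tree, mincost = [], 0
        for cost, u, v in sorted(edges):       # edges in increasing order of cost
            ru, rv = find(u), find(v)
            if ru != rv:                       # safe edge: joins two different trees
                parent[ru] = rv                # union(j, k)
                tree.append((u, v))
                mincost += cost
        if len(tree) != n - 1:
            return None                        # "no spanning tree"
        return mincost, tree

    # Example (hypothetical weights):
    # kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3)])
    #   -> (6, [(0, 1), (1, 2), (2, 3)])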
Correctness of Kruskal
• If the algorithm is correct it halts with the right
answer or optimal solution.
• Optimal solution is obtained if:
• Kruskal algorithm adds n-1 edges (with minimum
cost) to the spanning tree without creating a cycle
• Proof that the Kruskal algorithm creates a minimum
spanning tree of any connected graph:
• Prove by contradiction
• Suppose the tree it constructs is not a minimum spanning
tree; the standard edge-exchange argument then yields a
contradiction.
Prim’s algorithm
• Prim’s: Always takes the lowest-cost edge between nodes in the
spanning tree and nodes not yet in the spanning tree
• If A is the set of edges selected so far, then A forms a tree.
–The next edge (u,v) to be included in A is a minimum cost edge not
in A such that A ∪ {(u,v)} is also a tree, where u is in the tree and v is
not.
• Property: At each step, we add the edge (u,v) s.t. the weight of (u,v)
is minimum among all edges
–This spanning tree grows by one new node and edge at each
iteration.
•Each step maintains a minimum spanning tree of the vertices that
have been included thus far
–When all vertices have been included, we have a minimum cost
spanning tree for the graph
Prim’s algorithm
•Example: find the minimum spanning tree using
Prim algorithm
(Figure: an example weighted graph and the spanning tree that Prim’s
algorithm builds on it, step by step)
Prim’s Algorithm
procedure primMST(G, cost, n, T)
  Pick vertex 1 to be the root of the spanning tree T
  mincost = 0
  for i = 2 to n do near(i) = 1
  near(1) = 0
  for i = 1 to n-1 do
    find j such that near(j) ≠ 0 and cost(j, near(j)) is min
    T(i,1) = j; T(i,2) = near(j)
    mincost = mincost + cost(j, near(j))
    near(j) = 0
    for k = 1 to n do
      if near(k) ≠ 0 and cost(k, near(k)) > cost(k, j) then
        near(k) = j
    end for
  end for
  return mincost
end procedure
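A Python sketch of this cost-matrix version of Prim’s algorithm; vertex 0 plays the role of the root (vertex 1 on the slide), and math.inf stands for a missing edge (both are assumptions of this sketch):

    import math

    def prim(cost):
        """cost: n x n symmetric matrix; cost[i][j] is the edge weight
        or math.inf if there is no edge. Returns (mincost, tree_edges)."""
        n = len(cost)
        near = [0] * n              # near(k): tree vertex currently closest to k
        in_tree = [False] * n
        in_tree[0] = True           # vertex 0 is the root of the spanning tree
        tree, mincost = [], 0
        for _ in range(n - 1):
            # find j not in the tree with cost(j, near(j)) minimal
            j = min((k for k in range(n) if not in_tree[k]),
                    key=lambda k: cost[k][near[k]])
            tree.append((j, near[j]))
            mincost += cost[j][near[j]]
            in_tree[j] = True       # corresponds to near(j) = 0 in the pseudocode
            for k in range(n):      # update near() now that j is in the tree
                if not in_tree[k] and cost[k][j] < cost[k][near[k]]:
                    near[k] = j
        return mincost, tree

    # Example (hypothetical 4-vertex graph):
    # INF = math.inf
    # prim([[0, 1, 4, INF],
    #       [1, 0, 2, 6],
    #       [4, 2, 0, 3],
    #       [INF, 6, 3, 0]])   -> (6, [(1, 0), (2, 1), (3, 2)])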
Correctness of Prim’s
• If the algorithm is correct it halts with the right answer or
optimal solution.
• Optimal solution is obtained if:
• Prim algorithm adds n-1 edges (with minimum cost) to the
spanning tree without creating a cycle
• Proof that PRIM algorithm creates a minimum spanning tree
of any connected graph.
• Prove by contradiction
• Suppose the tree it constructs is not a minimum spanning tree;
the standard edge-exchange argument then yields a contradiction.
Single source shortest path
• The path problem asks questions such as: given a weighted directed graph G = (V,E)
with a cost associated with every edge,
–Is there a path from vi to vj?
–If there is more than one path from vi to vj, which is the shortest path?
• The length of a path is defined by the sum of the weights of the edges on that path.
–The starting vertex of the path is referred as the source
–The last vertex is the destination
• To formulate a greedy approach that generates shortest paths from a starting source
vertex v0, think of building the solution one vertex at a time: repeatedly add the
vertex not yet selected whose current distance from v0 is smallest (this is
Dijkstra’s algorithm, next).
Dijkstra’s shortest-path algorithm
PROCEDURE shortestPath(v, COST, DIST, n)
  // v is the source vertex
  FOR i = 1 to n DO // initialize S and DIST
    S(i) = 0; DIST(i) = COST(v,i);
  END FOR
  S(v) = 1 // put the source in S
  FOR num = 2 to n-1 DO
    choose vertex u such that S(u) = 0 and DIST(u) is min
    S(u) = 1 // put u in S
    FOR each w adjacent to u with S(w) = 0 DO
      IF DIST(w) > DIST(u) + COST(u,w) THEN
        DIST(w) = DIST(u) + COST(u,w);
    END FOR
  END FOR
END PROCEDURE
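A Python sketch of this array-based Dijkstra procedure (O(n²), in the spirit of the pseudocode; the cost-matrix representation with math.inf for missing edges is an assumption of this sketch):

    import math

    def dijkstra(cost, source):
        """cost: n x n matrix; cost[u][w] is the edge weight or math.inf.
        Returns dist, the shortest-path length from source to every vertex."""
        n = len(cost)
        dist = list(cost[source])        # DIST(i) = COST(v, i)
        dist[source] = 0
        in_s = [False] * n               # S(i) = 0
        in_s[source] = True              # put the source in S
        for _ in range(n - 1):
            # choose vertex u with S(u) = 0 and DIST(u) minimal
            u = min((k for k in range(n) if not in_s[k]), key=lambda k: dist[k])
            in_s[u] = True
            for w in range(n):           # relax edges out of u
                if not in_s[w] and dist[w] > dist[u] + cost[u][w]:
                    dist[w] = dist[u] + cost[u][w]
        return dist

    # Example (hypothetical 4-vertex graph):
    # INF = math.inf
    # dijkstra([[0, 4, 1, INF],
    #           [4, 0, 2, 5],
    #           [1, 2, 0, 8],
    #           [INF, 5, 8, 0]], 0)   -> [0, 3, 1, 8]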