Greedy Model: Analysis of Algorithms
Chapter 3
Greedy Model
General Method
Job Sequencing with Deadlines
Optimal Merge Pattern
Minimum Spanning Trees
Single Source Shortest Path
Knapsack Problem
The greedy method is perhaps the most straightforward design technique. Most, though not all, of these
problems have n inputs and require us to obtain a subset that satisfies some constraints. Any subset that satisfies
those constraints is called a feasible solution. We need to find a feasible solution that either maximizes or
minimizes a given objective function. A feasible solution that does this is called an optimal solution. There is
usually an obvious way to determine a feasible solution but not necessarily an optimal solution.
The greedy method suggests that one can design an algorithm that works in stages, considering one input
at a time. At each stage, a decision is made regarding whether a particular input is in an optimal solution. This is
done by considering the inputs in an order determined by some selection procedure. If the inclusion of the next
input into the partially constructed optimal solution will result in an infeasible solution, then this input is not
added to the partial solution. Otherwise, it is added. The selection procedure itself is based on some optimization
measure. This measure may be the objective function.
The function Select selects an input from a[ ] and removes it. The selected input's value is assigned to x.
Feasible is a Boolean-valued function that determines whether x can be included into the solution vector. The
function Union combines x with the solution and updates the objective function. The function Greedy describes
the essential way that a greedy algorithm will look, once a particular problem is chosen and the functions Select,
Feasible, and Union are properly implemented.
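Put together, the control structure just described can be sketched in Python. The names greedy, select, feasible, and union follow the text; the toy instantiation at the end (choose the largest values whose running sum stays within a capacity of 10) is an illustrative assumption, not part of the text.

```python
def greedy(a, select, feasible, union, solution):
    """Generic greedy control structure.

    a        -- the n inputs (a copy is consumed as selections are made)
    select   -- picks and removes the next input by the optimization measure
    feasible -- tests whether adding x keeps the solution feasible
    union    -- combines x with the solution
    """
    a = list(a)
    while a:
        x = select(a)                 # consider inputs one at a time
        if feasible(solution, x):     # would including x stay feasible?
            solution = union(solution, x)
    return solution

# Toy instantiation (assumption): pick the largest values while the sum stays <= 10.
select = lambda a: a.pop(a.index(max(a)))
feasible = lambda s, x: sum(s) + x <= 10
union = lambda s, x: s + [x]
print(greedy([7, 5, 3, 2], select, feasible, union, []))  # prints [7, 3]
```

Once Select, Feasible, and Union are fixed for a concrete problem, only these three functions change; the loop stays the same.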
We are given a set of n jobs. Associated with job i is an integer deadline di ≥0 and a profit pi>0. For any
job i the profit pi is earned iff the job is completed by its deadline. To complete a job, one has to process the job
on a machine for one unit of time. Only one machine is available for processing jobs. A feasible solution for this
problem is a subset J of jobs such that each job in this subset can be completed by its deadline. The value of a
feasible solution J is the sum of the profits of the jobs in J, that is, ∑i∈J pi. An optimal solution is a feasible
solution with maximum value.
Example: Let n = 4, (p1,p2,p3,p4) = (100,10,15,27) and (d1,d2,d3,d4) = (2,1,2,1). The feasible solutions and
their values are:

No.  J        Processing sequence  Value
1    {1, 2}   2, 1                 110
2    {1, 3}   1, 3 or 3, 1         115
3    {1, 4}   4, 1                 127
4    {2, 3}   2, 3                 25
5    {3, 4}   4, 3                 42
6    {1}      1                    100
7    {2}      2                    10
8    {3}      3                    15
9    {4}      4                    27

Solution 3 is optimal. In this solution only jobs 1 and 4 are processed and the value is 127. These jobs must be
processed in the order job 4 followed by job 1. Thus the processing of job 4 begins at time zero and that of job 1
is completed at time 2.
Step 3: Assign each high-profit job to the latest free time slot at or before its deadline.

    | J4 | J1 |
    0    1    2
Step 4: Find the profit. Total Profit = Profit of J4 + Profit of J1 = 100 + 27 = 127.
Prepared by: Mujeeb Rahman
Example: Number of jobs, n = 5. Find the maximum profit scheduling.

Process  Profit  Deadline
J1       10      3
J2       20      3
J3       15      3
J4       5       4
J5       80      4

Soln: Sort the jobs by decreasing profit:

Process  Profit  Deadline
J5       80      4
J2       20      3
J3       15      3
J1       10      3
J4       5       4

Maximum Deadline = 4, so there are four time slots.

    | J1 | J3 | J2 | J5 |
    0    1    2    3    4

Maximum Profit = 80 + 20 + 15 + 10 = 125.
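The rule used in both examples (take jobs in decreasing order of profit and place each one in the latest free unit slot at or before its deadline) can be sketched as follows; the function name and tuple layout are illustrative choices, not from the text.

```python
def job_sequencing(jobs):
    """jobs: list of (name, profit, deadline). Returns (schedule, total_profit).

    Greedy rule: consider jobs in decreasing profit order and place each
    in the latest still-free unit slot at or before its deadline.
    """
    max_d = max(d for _, _, d in jobs)
    slots = [None] * max_d                       # slots[t] covers time [t, t+1)
    total = 0
    for name, profit, deadline in sorted(jobs, key=lambda j: -j[1]):
        for t in range(deadline - 1, -1, -1):    # latest slot first
            if slots[t] is None:
                slots[t] = name
                total += profit
                break                            # job placed; otherwise rejected
    return slots, total

schedule, profit = job_sequencing(
    [("J1", 10, 3), ("J2", 20, 3), ("J3", 15, 3), ("J4", 5, 4), ("J5", 80, 4)])
print(schedule, profit)   # ['J1', 'J3', 'J2', 'J5'] 125
```

Note that J4 is rejected: by the time it is considered, all four slots are taken.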
Example: The files x1, x2 and x3 are three sorted files of length 30, 20, and 10 records each. Merging x1 and x2
requires 50 record moves. Merging the result with x3 requires another 60 moves. The total number of record
moves required to merge the three files this way is 110. If, instead, we first merge x2 and x3 (taking 30 moves)
and then x1 (taking 60 moves), the total record moves made is only 90. Hence, the second merge pattern is faster
than the first.
A greedy attempt to obtain an optimal merge pattern is easy to formulate. Since merging an n-record file
and an m-record file requires possibly n+m record moves, the obvious choice for a selection criterion is: at each
step merge the two smallest size files together. Thus, if we have five files (x1, x2, x3, x4, x5) with sizes (20, 30,
10, 5, 30), our greedy rule would generate the following merge pattern: merge x4 and x3 to get z1 (|z1| = 15),
merge z1 and x1 to get z2 (|z2| = 35), merge x2 and x5 to get z3 (|z3| = 60), and merge z2 and z3 to get the
answer z4. The total number of record moves is 205. One can verify that this is an optimal merge pattern for the
given problem instance.
If each file xi of length qi appears as an external node at distance di from the root of the merge tree, the total
number of record moves is ∑ di qi. This sum is called the weighted external path length of the tree.
Example: Find the optimal merge tree and optimal number of comparisons of given data.
File x1 x2 x3 x4 x5 x6
Size 2 3 5 7 9 13
Soln: Repeatedly merge the two smallest files:
2 + 3 = 5, 5 + 5 = 10, 7 + 9 = 16, 10 + 13 = 23, 16 + 23 = 39.
Total = 5 + 10 + 16 + 23 + 39 = 93.
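The two-smallest-first rule is naturally implemented with a min-heap. The sketch below computes only the total number of record moves (it does not build the merge tree itself); heapq is Python's standard min-heap module.

```python
import heapq

def optimal_merge_cost(sizes):
    """Total record moves for the greedy (two-smallest-first) merge pattern."""
    heap = list(sizes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)      # smallest file
        b = heapq.heappop(heap)      # second smallest
        total += a + b               # merging an a-record and a b-record file
        heapq.heappush(heap, a + b)  # the merged file re-enters the pool
    return total

print(optimal_merge_cost([2, 3, 5, 7, 9, 13]))   # prints 93
```

The same function reproduces the earlier instances: [30, 20, 10] gives 90 and (20, 30, 10, 5, 30) gives 205.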
In the design of electronic circuitry, it is often necessary to make the pins of several components
electrically equivalent by wiring them together. To interconnect a set of n pins, we can use an arrangement of n −
1 wires, each connecting two pins. Of all such arrangements, the one that uses the least amount of wire is usually
the most desirable.
We can model this wiring problem with a connected, undirected graph G = (V, E), where V is the set of
pins, E is the set of possible interconnections between pairs of pins, and for each edge (u, v) ∈ E, we have a
weight w(u, v) specifying the cost (amount of wire needed) to connect u and v. We then wish to find an acyclic
subset T ⊆ E that connects all of the vertices and whose total weight w(T) = ∑(u,v)∈T w(u, v)
is minimized. Since T is acyclic and connects all of the vertices, it must form a tree, which we call a spanning
tree since it “spans” the graph G. We call the problem of determining the tree T the minimum-spanning-tree
problem. We shall examine two algorithms for solving the minimum spanning- tree problem: Kruskal’s
algorithm and Prim’s algorithm.
A spanning tree whose weight is minimum over all spanning trees is called a minimum spanning tree, or MST.
Prim’s Algorithm
This algorithm continuously increases the size of a tree starting with a single vertex until it spans all the
vertices.
1. Input: A connected weighted graph with vertices V and edges E.
2. Initialize: Vnew = {x}, where x is an arbitrary node (starting point) from V, Enew = {}
3. Repeat until Vnew = V:
3.1 Choose edge (u,v) with minimal weight such that u is in Vnew and v is not (if there are
multiple edges with the same weight, choose arbitrarily but consistently)
3.2 Add v to Vnew, add (u, v) to Enew
4. Output: Vnew and Enew which describes a minimal spanning tree.
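Steps 1-4 above can be sketched with a binary heap of candidate edges; the adjacency-list representation, vertex labelling 0..n-1, and function name are illustrative choices.

```python
import heapq

def prim_mst(n, edges, start=0):
    """Prim's algorithm on a connected, undirected, weighted graph.

    n     -- number of vertices, labelled 0..n-1
    edges -- list of (u, v, weight)
    Returns (mst_edges, total_weight).
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    in_tree = [False] * n
    in_tree[start] = True                # Vnew = {start}
    heap = list(adj[start])              # edges leaving the tree
    heapq.heapify(heap)
    mst, total = [], 0
    while heap:
        w, u, v = heapq.heappop(heap)    # minimal-weight edge (u in tree)
        if in_tree[v]:
            continue                     # both ends already in Vnew
        in_tree[v] = True                # add v to Vnew
        mst.append((u, v, w))            # add (u, v) to Enew
        total += w
        for e in adj[v]:
            if not in_tree[e[2]]:
                heapq.heappush(heap, e)
    return mst, total

# Small example (assumed graph): a 4-cycle with one chord.
mst, total = prim_mst(4, [(0, 1, 1), (1, 2, 2), (2, 3, 3), (3, 0, 4), (0, 2, 5)])
print(total)   # prints 6
```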
Here, we shall focus on the single-source shortest-paths problem: given a graph G = (V, E), we want to find a
shortest path from a given source vertex v0 ∈ V to each vertex v ∈ V. The starting vertex of the path is referred to
as the source, and the last vertex the destination. The graphs are digraphs to allow for one-way streets. In the
problem we consider, we are given a directed graph G = (V,E), a weighting function cost for the edges of G, and
a source vertex v0. The problem is to determine the shortest paths from v0 to all the remaining vertices of G. It is
assumed that all the weights are positive. The shortest path between v0 and some other node v is an ordering
among a subset of the edges.
Consider the directed graph of Figure(a). The numbers on the edges are the weights. If node 1 is the source
vertex, then the shortest path from 1 to 2 is 1, 4, 5, 2. The length of this path is 10 + 15 + 20 = 45. Even though
there are three edges on this path, it is shorter than the path 1, 2 which is of length 50. There is no path from 1 to
6. Figure(b) lists the shortest paths from node 1 to nodes 4, 5, 2, and 3, respectively. The paths have been listed in
non-decreasing order of path length. The algorithm which we are using to find single source shortest path is
Dijkstra’s algorithm.
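Dijkstra's algorithm can be sketched with a min-heap of tentative distances. In the usage example, the edges 1→4 (10), 4→5 (15), 5→2 (20) and 1→2 (50) follow the text's description; the edge 1→3 (45) and the edge leaving node 6 are assumptions, since the figure itself is not reproduced here.

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths for non-negative edge weights.

    adj -- dict: vertex -> list of (neighbor, weight) directed edges
    Returns a dict of shortest distances; unreachable vertices are absent.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                  # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist

# Assumed edge list loosely matching the text's figure.
adj = {1: [(2, 50), (4, 10), (3, 45)], 4: [(5, 15)], 5: [(2, 20)], 6: [(5, 3)]}
print(dijkstra(adj, 1))   # node 6 is unreachable from node 1, so it is absent
```

As in the text, the computed distance to node 2 is 45 (via 1, 4, 5, 2), shorter than the direct edge of length 50.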
Knapsack Problem
Let us try to apply the greedy method to solve the knapsack problem. We are given n objects and a
knapsack or bag. Object i has a weight wi and the knapsack has a capacity m. If a fraction xi, 0 ≤xi ≤1, of object i
is placed into the knapsack, then a profit of pixi is earned. The objective is to obtain a filling of the knapsack that
maximizes the total profit earned. Since the knapsack capacity is m, we require the total weight of all chosen
objects to be at most m. Formally, the problem can be stated as

    maximize   ∑ pi xi   (1 ≤ i ≤ n)
    subject to ∑ wi xi ≤ m  and  0 ≤ xi ≤ 1, 1 ≤ i ≤ n
Example: Consider the following instance of the knapsack problem: n = 3,m= 20, (p1,p2,p3)=(25,24,15) and,
(w1,w2,w3)= (18,15,10).
Soln.
Find the profit per weight ratio of each item:

Items  Profit  Weight  p/w
x1     25      18      1.38
x2     24      15      1.6
x3     15      10      1.5

Sort according to p/w:

Items  Profit  Weight  p/w
x2     24      15      1.6
x3     15      10      1.5
x1     25      18      1.38

Now fill the knapsack according to the decreasing value of p/w. First we choose item x2, whose weight is 15.
The remaining capacity of the knapsack is 20 - 15 = 5. Next we can take only a fractional part of item x3, since
its weight is greater than the remaining capacity of the knapsack: we choose the 5/10 fraction of x3.

Maximum profit = (15/15)*24 + (5/10)*15 = 31.5
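The procedure in this example (sort by p/w, take whole items while they fit, then a fraction of the next) can be sketched as follows; the function name and tuple layout are illustrative choices.

```python
def fractional_knapsack(items, capacity):
    """items: list of (name, profit, weight). Returns (fractions, total_profit)."""
    remaining = capacity
    total = 0.0
    fractions = {}
    # Consider items in decreasing profit/weight order.
    for name, profit, weight in sorted(items, key=lambda i: i[1] / i[2],
                                       reverse=True):
        if remaining <= 0:
            break                              # knapsack is full
        frac = min(1.0, remaining / weight)    # whole item, or what still fits
        fractions[name] = frac
        total += frac * profit
        remaining -= frac * weight
    return fractions, total

fr, profit = fractional_knapsack(
    [("x1", 25, 18), ("x2", 24, 15), ("x3", 15, 10)], 20)
print(fr, profit)   # {'x2': 1.0, 'x3': 0.5} 31.5
```

Item x1, despite its largest profit, is never taken: its ratio is the smallest and the knapsack is full before it is considered.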
***