
Analysis of Algorithms

Chapter 3
Greedy Model
 General Method
 Job Sequencing with Deadlines
 Optimal Merge Pattern
 Minimum Spanning Trees
 Single Source Shortest Path
 Knapsack Problem

The greedy method is perhaps the most straightforward design technique. Most, though not all, of these
problems have n inputs and require us to obtain a subset that satisfies some constraints. Any subset that satisfies
those constraints is called a feasible solution. We need to find a feasible solution that either maximizes or
minimizes a given objective function. A feasible solution that does this is called an optimal solution. There is
usually an obvious way to determine a feasible solution but not necessarily an optimal solution.
The greedy method suggests that one can design an algorithm that works in stages, considering one input
at a time. At each stage, a decision is made regarding whether a particular input is in an optimal solution. This is
done by considering the inputs in an order determined by some selection procedure. If the inclusion of the next
input into the partially constructed optimal solution will result in an infeasible solution, then this input is not
added to the partial solution. Otherwise, it is added. The selection procedure itself is based on some optimization
measure. This measure may be the objective function.
The function Select selects an input from a[ ] and removes it. The selected input's value is assigned to x.
Feasible is a Boolean-valued function that determines whether x can be included into the solution vector. The
function Union combines x with the solution and updates the objective function. The function Greedy describes
the essential way that a greedy algorithm will look, once a particular problem is chosen and the functions Select,
Feasible, and Union are properly implemented.
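
The following Python sketch shows this essential shape. It is a minimal sketch, not a fixed API: the text does not pin down the signatures of Select, Feasible, and Union, so here they are passed in as parameters, and the stubs in the usage example are illustrative assumptions.

import functools  # not required; standard library only

def greedy(inputs, select, feasible, union):
    """Generic greedy skeleton: repeatedly Select the best remaining
    input, test Feasibility, and Union it into the partial solution."""
    candidates = list(inputs)          # inputs not yet considered
    solution = []                      # partially constructed solution
    while candidates:
        x = select(candidates)         # pick by the optimization measure
        candidates.remove(x)           # Select also removes the input
        if feasible(solution, x):      # does x keep the solution feasible?
            solution = union(solution, x)
    return solution

# Toy usage (assumed stubs): pick numbers, largest first, whose sum stays <= 10.
print(greedy([7, 5, 4, 2],
             select=max,
             feasible=lambda sol, x: sum(sol) + x <= 10,
             union=lambda sol, x: sol + [x]))   # [7, 2]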

 Greedy design technique is primarily used in optimization problems.
 Optimization problems are problems where we would like to find the best of all possible
solutions.
 In other words, we need to find the solution that has the optimal (maximum or minimum)
value while satisfying the given constraints.
 The greedy approach constructs a solution to a problem through a sequence of steps:
 each step is considered a partial solution;
 the partial solution is extended progressively to get the complete solution.
 It constructs a solution to an optimization problem step by step through a sequence of choices that are:
 feasible, i.e., satisfying the constraints;
 locally optimal (with respect to some neighborhood definition);
 irrevocable, i.e., not altered once made.

Job Sequencing with Deadlines

We are given a set of n jobs. Associated with job i is an integer deadline di ≥ 0 and a profit pi > 0. For any
job i the profit pi is earned iff the job is completed by its deadline. To complete a job, one has to process the job
on a machine for one unit of time. Only one machine is available for processing jobs. A feasible solution for this
problem is a subset J of jobs such that each job in this subset can be completed by its deadline. The value of a
feasible solution J is the sum of the profits of the jobs in J, i.e., ∑i∈J pi. An optimal solution is a feasible solution
with maximum value.

Example: Let n = 4, (p1,p2,p3,p4) = (100,10,15,27) and (d1,d2,d3,d4) = (2,1,2,1). The feasible solutions and
their values are:

No.  Feasible solution  Processing sequence  Value
1    {1,2}              2,1                  110
2    {1,3}              1,3 or 3,1           115
3    {1,4}              4,1                  127
4    {2,3}              2,3                  25
5    {3,4}              4,3                  42
6    {1}                1                    100
7    {2}                2                    10
8    {3}                3                    15
9    {4}                4                    27

Solution 3 is optimal. In this solution only jobs 1 and 4 are processed and the value is 127. These jobs must be
processed in the order job 4 followed by job 1. Thus the processing of job 4 begins at time zero and that of job 1
is completed at time 2.

The above example can be solved as follows:


Step 1: Arrange the jobs in descending order of their profit.

Process Profit Deadline


J1 100 2
J4 27 1
J3 15 2
J2 10 1

Step 2: Find the maximum deadline. (The maximum deadline is 2 in this example, giving two time slots:
[0, 1] and [1, 2].)

Step 3: Taking the jobs in decreasing order of profit, assign each job to the latest free slot on or before its
deadline:

[0, 1] → J4, [1, 2] → J1

Step 4: Find the profit. Total Profit = Profit of J1 + Profit of J4
= 100 + 27 = 127.
Example: Number of jobs, n = 5. Find the maximum profit scheduling.

Process Profit Deadline
J1      10     3
J2      20     3
J3      15     3
J4      5      4
J5      80     4

Soln: Arrange the jobs in descending order of profit:

Process Profit Deadline
J5      80     4
J2      20     3
J3      15     3
J1      10     3
J4      5      4

Maximum Deadline = 4, giving slots [0, 1], [1, 2], [2, 3] and [3, 4]. Assigning each job to the latest free slot
on or before its deadline:

[0, 1] → J1, [1, 2] → J3, [2, 3] → J2, [3, 4] → J5

Maximum Profit = 80 + 20 + 15 + 10 = 125.
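
The whole procedure (Steps 1 through 4) can be sketched in Python as follows; the function name and the (name, profit, deadline) job representation are illustrative assumptions, not from the text.

def job_sequencing(jobs):
    """jobs: list of (name, profit, deadline) tuples.
    Sort by profit (descending) and place each job in the latest
    free slot on or before its deadline; return (slots, total profit)."""
    max_deadline = max(d for _, _, d in jobs)
    slots = [None] * max_deadline              # slots[t] covers time [t, t+1)
    total = 0
    for name, profit, deadline in sorted(jobs, key=lambda j: -j[1]):
        # scan from the job's deadline backwards, looking for a free slot
        for t in range(min(deadline, max_deadline) - 1, -1, -1):
            if slots[t] is None:
                slots[t] = name
                total += profit
                break                          # job scheduled; otherwise rejected
    return slots, total

# The first example: profits (100, 10, 15, 27), deadlines (2, 1, 2, 1).
jobs = [("J1", 100, 2), ("J2", 10, 1), ("J3", 15, 2), ("J4", 27, 1)]
print(job_sequencing(jobs))   # (['J4', 'J1'], 127)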

Optimal Merge Pattern


When more than two sorted files are to be merged together, the merge can be accomplished by
repeatedly merging sorted files in pairs. Thus, if files x1, x2, x3 and x4 are to be merged, we could first merge x1
and x2 to get a file y1. Then we could merge y1 and x3 to get y2. Finally, we could merge y2 and x4 to get the
desired sorted file. Alternatively, we could first merge x1 and x2 getting y1, then merge x3 and x4 and get y2,
and finally merge y1 and y2 and get the desired sorted file. Given n sorted files, there are many ways in which to
pair wise merge them into a single sorted file. Different pairings require differing amounts of computing time.
The problem we address ourselves to now is that of determining an optimal way (one requiring the fewest
comparisons) to pairwise merge n sorted files. Since this problem calls for an ordering among the pairs to be
merged, it fits the ordering paradigm.

Example: The files x1, x2 and x3 are three sorted files of lengths 30, 20, and 10 records, respectively. Merging x1
and x2 requires 50 record moves. Merging the result with x3 requires another 60 moves. The total number of record
moves required to merge the three files this way is 110. If, instead, we first merge x2 and x3 (taking 30 moves)
and then x1 (taking 60 moves), the total record moves made is only 90. Hence, the second merge pattern is faster
than the first.
A greedy attempt to obtain an optimal merge pattern is easy to formulate. Since merging an n-record file
and an m-record file requires possibly n+m record moves, the obvious choice for a selection criterion is: at each
step merge the two smallest size files together. Thus, if we have five files (x1, x2, x3, x4, x5) with sizes (20, 30,
10, 5, 30), our greedy rule would generate the following merge pattern: merge x4 and x3 to get z1 (|z1| = 15),
merge z1 and x1 to get z2 (|z2| = 35), merge x2 and x5 to get z3 (|z3| = 60), and merge z2 and z3 to get the
answer z4. The total number of record moves is 205. One can verify that this is an optimal merge pattern for the
given problem instance.
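
This selection rule is naturally implemented with a min-heap, which always yields the two smallest remaining files. A minimal sketch in Python (the helper name is an assumption, not from the text):

import heapq

def optimal_merge_cost(sizes):
    """Return the minimum total number of record moves needed to
    pairwise merge files of the given sizes into one file."""
    heap = list(sizes)
    heapq.heapify(heap)
    total_moves = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)      # smallest file
        b = heapq.heappop(heap)      # second smallest file
        total_moves += a + b         # merging them costs a + b record moves
        heapq.heappush(heap, a + b)  # the merged file re-enters the pool
    return total_moves

print(optimal_merge_cost([20, 30, 10, 5, 30]))   # 205, as in the text
print(optimal_merge_cost([2, 3, 5, 7, 9, 13]))   # 93, the example below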



The merge pattern such as the one just described will be referred to as a two-way merge pattern (each
merge step involves the merging of two files). The two-way merge patterns can be represented by binary merge
trees. Figure shows a binary merge tree representing the optimal merge pattern obtained for the above five files.
The leaf nodes are drawn as squares and represent the given five files. These nodes are called external nodes.
The remaining nodes are drawn as circles and are called internal nodes. Each internal node has exactly two
children, and it represents the file obtained by merging the files represented by its two children. The number in
each node is the length (i.e., the number of records) of the file represented by that node.
The external node x4 is at a distance of 3 from the root node z4 (a node at level i is at a distance of i − 1
from the root). Hence, the records of file x4 are moved three times, once to get z1, once again to get z2, and
finally one more time to get z4. If di is the distance from the root to the external node for file xi, and qi is the
length of xi, then the total number of record moves for this binary merge tree is

∑i=1..n di qi

This sum is called the weighted external path length of the tree.

Example: Find the optimal merge tree and the optimal number of comparisons for the given data.

File x1 x2 x3 x4 x5 x6
Size 2  3  5  7  9  13

Soln: Repeatedly merge the two smallest files: 2 + 3 = 5; then 5 + 5 = 10; then 7 + 9 = 16; then
10 + 13 = 23; and finally 16 + 23 = 39. In the resulting merge tree, x1 and x2 are at distance 4 from the
root, x3 is at distance 3, and x4, x5 and x6 are at distance 2.

Total number of comparisons:
= 7*2 + 9*2 + 2*4 + 3*4 + 5*3 + 13*2
= 14 + 18 + 8 + 12 + 15 + 26
= 93



Minimum Spanning Trees

In the design of electronic circuitry, it is often necessary to make the pins of several components
electrically equivalent by wiring them together. To interconnect a set of n pins, we can use an arrangement of n −
1 wires, each connecting two pins. Of all such arrangements, the one that uses the least amount of wire is usually
the most desirable.
We can model this wiring problem with a connected, undirected graph G = (V, E), where V is the set of
pins, E is the set of possible interconnections between pairs of pins, and for each edge (u, v) ∈ E, we have a
weight w(u, v) specifying the cost (amount of wire needed) to connect u and v. We then wish to find an acyclic
subset T ⊆ E that connects all of the vertices and whose total weight

w(T) = ∑(u,v)∈T w(u,v)

is minimized. Since T is acyclic and connects all of the vertices, it must form a tree, which we call a spanning
tree since it “spans” the graph G. We call the problem of determining the tree T the minimum-spanning-tree
problem. We shall examine two algorithms for solving the minimum spanning- tree problem: Kruskal’s
algorithm and Prim’s algorithm.
A spanning tree whose weight is minimum over all spanning trees is called a minimum spanning tree, or MST.

Figure: A graph and its minimum cost spanning tree


Kruskal’s algorithm
The edges of the graph are considered in non-decreasing order of cost. The interpretation is that the set t
of edges selected so far for the spanning tree must be such that it is still possible to complete t into a tree. Thus t
need not be a tree at all stages of the algorithm. In fact, it will generally be a forest, since the set of edges t can be
completed into a tree iff there are no cycles in t.
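
A sketch of this edge-by-edge construction in Python, using a simple union-find structure to detect cycles; the (weight, u, v) edge-list representation and the function names are assumptions for illustration.

def kruskal(n, edges):
    """Return (mst_edges, total_cost) for a connected undirected graph
    with vertices 0..n-1 and edges given as (weight, u, v) tuples."""
    parent = list(range(n))               # union-find forest

    def find(v):                          # find root, with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):         # non-decreasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                      # edge joins two trees: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Toy usage on a small 4-vertex graph (an assumed example).
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # ([(0, 1, 1), (1, 3, 2), (1, 2, 3)], 6)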



Figure: Kruskal's algorithm applied to an example graph (Total Cost = 99)

Prim’s Algorithm

This algorithm continuously increases the size of a tree starting with a single vertex until it spans all the
vertices.
1. Input: A connected weighted graph with vertices V and edges E.
2. Initialize: Vnew = {x}, where x is an arbitrary node (starting point) from V, Enew = {}
3. Repeat until Vnew = V:
3.1 Choose edge (u,v) with minimal weight such that u is in Vnew and v is not (if there are
multiple edges with the same weight, choose arbitrarily but consistently)
3.2 Add v to Vnew, add (u, v) to Enew
4. Output: Vnew and Enew, which describe a minimal spanning tree.
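
A sketch of these steps in Python, using a min-heap to choose the cheapest edge leaving Vnew at each repetition; the adjacency-map representation (vertex -> {neighbor: weight}) is an assumption for illustration.

import heapq

def prim(graph, start):
    """Return (mst_edges, total_cost). graph: dict of dicts, undirected."""
    visited = {start}                      # Vnew, seeded with the start vertex
    mst, total = [], 0                     # Enew and its total weight
    heap = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(heap)
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)      # minimal-weight edge (u, v)
        if v in visited:                   # v already in Vnew: skip
            continue
        visited.add(v)                     # add v to Vnew
        mst.append((u, v, w))              # add (u, v) to Enew
        total += w
        for nxt, w2 in graph[v].items():   # new candidate edges leaving Vnew
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))
    return mst, total

# Toy usage on the same assumed 4-vertex graph as in the Kruskal sketch.
g = {0: {1: 1, 2: 4}, 1: {0: 1, 2: 3, 3: 2},
     2: {0: 4, 1: 3, 3: 5}, 3: {1: 2, 2: 5}}
print(prim(g, 0))   # ([(0, 1, 1), (1, 3, 2), (1, 2, 3)], 6)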



Figure: Prim's algorithm applied to the same example graph (Total Cost = 99)

Single Source Shortest Path

Here, we shall focus on the single-source shortest-paths problem: given a graph G = (V, E), we want to find a
shortest path from a given source vertex v0 ∈ V to each vertex v ∈ V. The starting vertex of the path is referred to
as the source, and the last vertex the destination. The graphs are digraphs to allow for one-way streets. In the
problem we consider, we are given a directed graph G = (V,E), a weighting function cost for the edges of G, and
a source vertex v0. The problem is to determine the shortest paths from v0 to all the remaining vertices of G. It is
assumed that all the weights are positive. The shortest path between v0 and some other node v is an ordering
among a subset of the edges.

Figure: Graph and shortest paths from vertex 1 to all destinations

Consider the directed graph of Figure(a). The numbers on the edges are the weights. If node 1 is the source
vertex, then the shortest path from 1 to 2 is 1, 4, 5, 2. The length of this path is 10 + 15 + 20 = 45. Even though
there are three edges on this path, it is shorter than the path 1, 2, which is of length 50. There is no path from 1
to 6. Figure(b) lists the shortest paths from node 1 to nodes 4, 5, 2, and 3, respectively. The paths have been
listed in non-decreasing order of path length. The algorithm we use to find the single-source shortest paths is
Dijkstra's algorithm.



Example: Find the shortest path from source s to all other nodes. (The columns s, t, x, y, z give the current
distance estimates from the source.)

Item no.  Vertex selected  Set            s  t   x   y  z
-         -                {s}            0  10  ∞   5  ∞
1         y                {s,y}          0  8   14  5  7
2         z                {s,y,z}        0  8   13  5  7
3         t                {s,y,z,t}      0  8   9   5  7
4         x                {s,y,z,t,x}    0  8   9   5  7
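
A sketch of Dijkstra's algorithm in Python using a min-heap; the adjacency-map representation is an assumption, and the edge weights in the usage example are an assumption chosen to be consistent with the trace above.

import heapq

def dijkstra(graph, source):
    """Return the dict of shortest distances from source.
    Assumes all edge weights are positive, as stated above."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)         # closest unsettled vertex
        if d > dist[u]:                    # stale heap entry: already improved
            continue
        for v, w in graph[u].items():
            if d + w < dist[v]:            # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Assumed weights, consistent with the trace table above.
g = {'s': {'t': 10, 'y': 5}, 't': {'x': 1, 'y': 2},
     'y': {'t': 3, 'x': 9, 'z': 2}, 'x': {'z': 4},
     'z': {'x': 6, 's': 7}}
print(dijkstra(g, 's'))   # {'s': 0, 't': 8, 'y': 5, 'x': 9, 'z': 7}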

Knapsack Problem
Let us try to apply the greedy method to solve the knapsack problem. We are given n objects and a
knapsack or bag. Object i has a weight wi and a profit pi, and the knapsack has a capacity m. If a fraction xi,
0 ≤ xi ≤ 1, of object i is placed into the knapsack, then a profit of pi·xi is earned. The objective is to obtain a
filling of the knapsack that maximizes the total profit earned. Since the knapsack capacity is m, we require the
total weight of all chosen objects to be at most m. Formally, the problem can be stated as

maximize ∑i=1..n pi xi
subject to ∑i=1..n wi xi ≤ m and 0 ≤ xi ≤ 1, 1 ≤ i ≤ n.

The profits and weights are positive numbers.

Example: Consider the following instance of the knapsack problem: n = 3, m = 20, (p1,p2,p3) = (25,24,15)
and (w1,w2,w3) = (18,15,10).

Soln.
Find the profit-per-weight ratio of each item:

Items Profit Weight p/w
x1    25     18     1.38
x2    24     15     1.6
x3    15     10     1.5

Sort according to p/w:

Items Profit Weight p/w
x2    24     15     1.6
x3    15     10     1.5
x1    25     18     1.38

Now fill the knapsack according to the decreasing value of p/w. First we choose item x2, whose weight is 15;
the remaining capacity of the knapsack is 20 − 15 = 5. Next we can choose only a fractional part of item x3,
since its weight is greater than the remaining capacity of the knapsack: we take the fraction 5/10 of x3.

Maximum profit = (15/15)*24 + (5/10)*15 = 24 + 7.5 = 31.5
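
A sketch of this greedy in Python (the function and variable names are illustrative assumptions):

def fractional_knapsack(profits, weights, capacity):
    """Return the maximum total profit using fractions 0 <= xi <= 1."""
    # Sort items by profit-per-weight ratio, largest first.
    items = sorted(zip(profits, weights),
                   key=lambda pw: pw[0] / pw[1], reverse=True)
    total = 0.0
    for p, w in items:
        if capacity <= 0:
            break
        take = min(w, capacity)        # whole item, or the fraction that fits
        total += p * (take / w)        # profit is proportional to the fraction
        capacity -= take
    return total

# The instance above: n = 3, m = 20.
print(fractional_knapsack([25, 24, 15], [18, 15, 10], 20))   # 31.5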

***
