
Greedy Method:

 The greedy method is the most straightforward design technique for finding a solution to a
given problem.
 As the name suggests, greedy algorithms are short-sighted in their approach: they take decisions on the basis of
the information immediately at hand, without worrying about the effect these decisions
may have in the future.

DEFINITION:

 A problem with n inputs will have some constraints, and any subset of the inputs that satisfies these
constraints is called a feasible solution.
 A feasible solution that either maximizes or minimizes a given objective function is called
an optimal solution.
 An algorithm that uses the greedy method works in stages, considering one input at a time.
Based on this input, it decides whether the particular input should be included in the solution being
built or not.

Control abstraction for Greedy Method:


Algorithm Greedy(a, n)
// a[1:n] contains the n inputs.
{
    solution = 0;                      // initialize the solution
    for i = 1 to n do
    {
        x = Select(a);
        if (Feasible(solution, x)) then
            solution = Union(solution, x);
    }
    return solution;
}

 The function Select selects an input from a[] and removes it. The selected input's value is
assigned to x.
 Feasible is a Boolean-valued function that determines whether x can be included in the
solution vector.
 The function Union combines x with the solution and updates the objective function.
 The function Greedy describes the essential way a greedy algorithm will look, once a
particular problem is chosen and the functions Select, Feasible and Union are properly
implemented.
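
The same control abstraction can be written as a small Python sketch. The callbacks select, feasible and union below are hypothetical placeholders that a concrete problem would supply; for the knapsack problem discussed next, select would simply return the remaining object with the largest profit/weight ratio.

# A minimal Python sketch of the greedy control abstraction (assumed helper
# callbacks select, feasible, union; they are not part of the pseudocode above
# beyond the roles it describes).
def greedy(inputs, select, feasible, union):
    solution = []                      # start with an empty solution
    candidates = list(inputs)          # work on a copy of the inputs
    while candidates:
        x = select(candidates)         # pick the most promising remaining input
        candidates.remove(x)           # ...and remove it, as Select does above
        if feasible(solution, x):      # can x join the solution without violating constraints?
            solution = union(solution, x)
    return solution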

KNAPSACK PROBLEM

 We are given n objects and a knapsack (bag) of capacity M. Object i has a weight Wi and
a profit Pi, where i varies from 1 to n.

 The problem is to fill the bag using these n objects such that the resulting profit is
maximum.
 Formally, the problem can be stated as:

Maximize Σ PiXi subject to the constraint Σ WiXi ≤ M,

where Xi is the fraction of object i placed in the bag and 0 ≤ Xi ≤ 1.

 There are many ways to fill the bag, each giving a feasible solution; among these we have
to find an optimal solution.

 The greedy algorithm below generates only one solution, which is both feasible and
optimal.

 First, we compute the profit/weight ratio of every object and sort the objects in
descending order of these ratios.

 Select the object with the highest p/w ratio and check whether its weight is less than the
remaining capacity of the bag.

 If so, place the whole object (Xi = 1) and decrement the capacity of the bag by the weight of
the object just placed.

 Repeat the above steps until the remaining capacity of the bag becomes less than the weight of the
currently selected object. In that case place a fraction of that object in the bag and come out of
the loop.

ALGORITHM:

Algorithm GreedyKnapsack(m, n)
// p[1:n] and w[1:n] contain the profits and weights, respectively, of the
// n objects, ordered such that p[i]/w[i] >= p[i+1]/w[i+1].
// m is the knapsack capacity and x[1:n] is the solution vector.
{
    for i = 1 to n do x[i] = 0.0;
    U = m;
    for i = 1 to n do
    {
        if (w[i] > U) then break;
        x[i] = 1.0; U = U - w[i];
    }
    if (i <= n) then x[i] = U/w[i];
}
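
For reference, a runnable Python sketch of the same fractional-knapsack procedure is given below. The function name and argument layout are illustrative only; unlike the pseudocode, the sketch sorts the objects by profit/weight ratio itself instead of assuming pre-sorted input.

# Fractional knapsack sketch: profits and weights are parallel lists,
# capacity is the knapsack size m. Returns (solution vector x, total profit).
def greedy_knapsack(profits, weights, capacity):
    n = len(profits)
    # consider objects in descending order of profit/weight ratio
    order = sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * n
    remaining = capacity
    for i in order:
        if weights[i] <= remaining:        # the whole object fits
            x[i] = 1.0
            remaining -= weights[i]
        else:                              # place a fraction of the object and stop
            x[i] = remaining / weights[i]
            break
    total_profit = sum(p * xi for p, xi in zip(profits, x))
    return x, total_profit

# Example below in the text: expected total profit 31.5
# print(greedy_knapsack([25, 24, 15], [18, 15, 10], 20))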

Example:

Capacity=20
N=3, M=20
Wi=18, 15, 10
Pi=25, 24, 15

Pi/Wi = 25/18 = 1.39, 24/15 = 1.6, 15/10 = 1.5

Arranging the objects in descending order of Pi/Wi:

Pi/Wi =  1.6    1.5    1.39
Pi    =  24     15     25
Wi    =  15     10     18
Xi    =  1      5/10   0

ΣPiXi = 1*24 + 0.5*15 = 31.5

The optimal solution is 31.5.

X1     X2     X3     ΣWiXi    ΣPiXi
1/2    1/3    1/4    16.5     24.25
1      2/15   0      20       28.2
0      2/3    1      20       31
0      1      1/2    20       31.5

Of these feasible solutions, solution 4 yields the maximum profit, so it is optimal for the
given problem instance.

JOB SEQUENCING WITH DEADLINES

We are given a set of n jobs. Each job i is associated with a profit Pi (> 0) and a deadline di (>= 0).
We have to find a sequence of jobs, each completed within its deadline, that yields the
maximum profit.

Points to remember:
 To complete a job, one has to process the job on a machine for one unit of time.
 Only one machine is available for processing the jobs.
 A feasible solution for this problem is a subset J of jobs such that each job in this subset
can be completed by its deadline.
 The value of a feasible solution J is the sum of the profits of the jobs in J, i.e.
∑i∈J Pi.

Since only one job can be processed at a time on the single machine, the other jobs have to wait until
the current job is completed and the machine becomes free.

So, for each scheduled job, its waiting time plus its processing time must be less than or equal to the deadline of the
job.

ALGORITHM:

Algorithm JS(d, J, n)
// The jobs are ordered such that p[1] >= p[2] >= ... >= p[n].
// J[i] is the ith job in the optimal solution, 1 <= i <= k.
// Also, at termination, d[J[i]] <= d[J[i+1]], 1 <= i < k.
{
    d[0] = J[0] = 0;      // initialize
    J[1] = 1; k = 1;      // include job 1
    for i = 2 to n do
    {
        // Consider jobs in non-increasing order of p[i];
        // find a position for i and check the feasibility of insertion.
        r = k;
        while ((d[J[r]] > d[i]) and (d[J[r]] != r)) do r = r - 1;
        if ((d[J[r]] <= d[i]) and (d[i] > r)) then
        {
            // insert i into J[] at position r+1
            for q = k to (r+1) step -1 do J[q+1] = J[q];
            J[r+1] = i; k = k + 1;
        }
    }
    return k;
}
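
A small Python sketch of the same greedy idea is shown below. It uses an explicit array of time slots, which is equivalent to the insertion scheme of the pseudocode; the function name and the (profit, deadline) input format are assumptions for illustration.

# Greedy job sequencing with deadlines: jobs is a list of (profit, deadline)
# pairs. Returns the scheduled job indices (0-based) and the total profit.
def job_sequencing(jobs):
    n = len(jobs)
    # consider jobs in non-increasing order of profit
    order = sorted(range(n), key=lambda i: jobs[i][0], reverse=True)
    max_deadline = max(d for _, d in jobs)
    slot = [None] * (max_deadline + 1)       # slot[t] holds the job finishing at time t
    for i in order:
        profit, deadline = jobs[i]
        # place job i in the latest free slot at or before its deadline
        for t in range(min(deadline, max_deadline), 0, -1):
            if slot[t] is None:
                slot[t] = i
                break
    scheduled = [i for i in slot[1:] if i is not None]
    return scheduled, sum(jobs[i][0] for i in scheduled)

# Example 2 below: expected value 127, from jobs 1 and 4
# print(job_sequencing([(100, 2), (10, 1), (15, 2), (27, 1)]))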

Example :

1. n = 5, (P1, P2, ..., P5) = (20, 15, 10, 5, 1), (d1, d2, ..., d5) = (2, 2, 1, 3, 3)

Feasible solution    Processing sequence    Value
(1)                  (1)                    20
(2)                  (2)                    15
(3)                  (3)                    10
(4)                  (4)                    5
(5)                  (5)                    1
(1,2)                (2,1)                  35
(1,3)                (3,1)                  30
(1,4)                (1,4)                  25
(1,5)                (1,5)                  21
(2,3)                (3,2)                  25
(2,4)                (2,4)                  20
(2,5)                (2,5)                  16
(1,2,3)              (3,2,1)                45
(1,2,4)              (1,2,4)                40

Solution 13 is optimal.

2. n = 4, (P1, P2, ..., P4) = (100, 10, 15, 27), (d1, d2, ..., d4) = (2, 1, 2, 1)

Feasible solution    Processing sequence    Value
(1,2)                (2,1)                  110
(1,3)                (1,3)                  115
(1,4)                (4,1)                  127
(2,3)                (2,3)                  25
(2,4)                (4,2)                  37
(3,4)                (4,3)                  42
(1)                  (1)                    100
(2)                  (2)                    10
(3)                  (3)                    15
(4)                  (4)                    27

Solution 3 is optimal.

MINIMUM COST SPANNING TREE

 Let G = (V, E) be an undirected connected graph with vertex set V and edge set E.
 A sub-graph t = (V, E') of G is a spanning tree of G iff t is a tree.
 The problem is to generate a sub-graph G' = (V, E'), where E' is a subset of E, such that G' is a
minimum cost spanning tree.
 Each edge has a given non-negative length (cost). We must connect all the nodes using the
edges in E' so that the total weight is minimum.

NOTE:
 We have to visit all the nodes.
 Any connected graph with N vertices must have at least N-1 edges; the spanning tree itself
has exactly N-1 edges and does not form a cycle.

Definition:
 A spanning tree of a graph is an undirected tree consisting of only those edges that are
necessary to connect all the vertices in the original graph.
 A spanning tree has the property that for any pair of vertices there exists exactly one path
between them, and the insertion of any additional edge into a spanning tree forms a unique cycle.

Application of the spanning tree:


1. Analysis of electrical circuits.
2. Shortest route problems.

Minimum cost spanning tree:


 The cost of a spanning tree is the sum of the costs of the edges in that tree.
 There are two methods to determine a minimum cost spanning tree:

1. Kruskal's Algorithm
2. Prim's Algorithm

KRUSKAL’S ALGORITHM:

In Kruskal's algorithm the selection function chooses edges in increasing order of length,
without worrying about their connection to previously chosen edges, except that it must never
form a cycle. The result is a forest of trees that grows until all the trees in the forest (all the
components) merge into a single tree.
 In this algorithm, a minimum cost spanning tree T is built edge by edge.
 Edges are considered for inclusion in T in increasing order of their cost.
 An edge is included in T if it does not form a cycle with the edges already in T.
 Thus, to find the minimum cost spanning tree, the edges are inserted into the tree in increasing
order of their cost.

Algorithm:

Algorithm Kruskal(E, cost, n, t)
// E is the set of edges in G; G has n vertices. cost[u,v] is the cost of edge (u,v).
// t is the set of edges in the minimum cost spanning tree; the final cost is returned.
{
    Construct a heap out of the edge costs;
    for i = 1 to n do parent[i] = -1;    // each vertex is in a different set
    i = 0; mincost = 0.0;
    while ((i < n-1) and (heap not empty)) do
    {
        Delete a minimum cost edge (u,v) from the heap and re-heapify;
        j = Find(u); k = Find(v);
        if (j != k) then
        {
            i = i + 1;
            t[i,1] = u; t[i,2] = v;
            mincost = mincost + cost[u,v];
            Union(j,k);
        }
    }
    if (i != n-1) then write("No spanning tree");
    else return mincost;
}

Analysis
 The time complexity of minimum cost spanning tree algorithm in worst case is O(|E|log|E|),
where E is the edge set of G.
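
A compact Python sketch of Kruskal's algorithm, using a simple union-find structure in place of the Find/Union calls of the pseudocode, is given below; the edge-list representation is an assumption for illustration.

# Kruskal sketch: n vertices labelled 0..n-1, edges is a list of (cost, u, v)
# tuples. Returns (tree edges, mincost), or None if the graph is not connected.
def kruskal(n, edges):
    parent = list(range(n))

    def find(x):                        # find the set representative (with path compression)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree, mincost = [], 0
    for cost, u, v in sorted(edges):    # consider edges in increasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                    # (u,v) does not create a cycle
            parent[ru] = rv             # union the two components
            tree.append((u, v))
            mincost += cost
            if len(tree) == n - 1:
                break
    return (tree, mincost) if len(tree) == n - 1 else None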

Example: Step-by-step operation of Kruskal's algorithm.

Step 1. In the graph, the edge (g, h) is the shortest. Either vertex g or vertex h could be the representative;
let us choose vertex g arbitrarily.

Step 2. The edge (c, i) creates a second tree. Choose vertex c as the representative for this second tree.

Step 3. Edge (f, g) is the next shortest edge. Add this edge and choose vertex g as the representative.

Step 4. Edge (a, b) creates a third tree.

Step 5. Add edge (c, f) and merge the two trees. Vertex c is chosen as the representative.

Step 6. Edge (g, i) is the next cheapest, but if we added this edge a cycle would be created,
since vertex c is already the representative of both g and i.

Step 7. Instead, add edge (c, d).

Step 8. If we added edge (h, i), it would make a cycle.

Step 9. Instead of adding edge (h, i), add edge (a, h).

Step 10. Again, if we added edge (b, c), it would create a cycle. Add edge (d, e) instead to complete
the spanning tree. In this spanning tree all trees are joined and vertex c is the sole representative.
PRIM'S ALGORITHM

Start from an arbitrary vertex (root). At each stage, add a new branch (edge) to the tree
already constructed; the algorithm halts when all the vertices in the graph have been reached.

Algorithm Prim(E, cost, n, t)
// E is the set of edges in G. cost[1:n, 1:n] is the cost adjacency matrix.
// t is the set of edges in the minimum cost spanning tree; the final cost is returned.
{
    Let (k,l) be an edge of minimum cost in E;
    mincost := cost[k,l];
    t[1,1] := k; t[1,2] := l;
    for i := 1 to n do            // initialize near[]
        if (cost[i,l] < cost[i,k]) then near[i] := l;
        else near[i] := k;
    near[k] := near[l] := 0;
    for i := 2 to n-1 do          // find n-2 additional edges for t
    {
        Let j be an index such that near[j] != 0 and cost[j, near[j]] is minimum;
        t[i,1] := j; t[i,2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do        // update near[]
            if ((near[k] != 0) and (cost[k, near[k]] > cost[k,j])) then near[k] := j;
    }
    return mincost;
}

 Prim's algorithm starts with a tree that includes only a minimum cost edge of G.
 Then, edges are added to the tree one by one. The next edge (i, j) to be added is such that i is
a vertex already included in the tree, j is a vertex not yet included, and the cost of (i, j), cost[i, j], is
minimum among all such edges.

 The working of Prim's algorithm is illustrated step by step in the following diagrams.
[Diagrams for Steps 1 to 6 of Prim's algorithm are not reproduced here.]
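
A short Python sketch of Prim's algorithm over a cost adjacency matrix is shown below. It assumes cost[i][j] holds the edge weight with float('inf') for missing edges, and for simplicity it starts from vertex 0 rather than from a minimum cost edge as the pseudocode does.

# Prim sketch: cost is an n x n adjacency matrix, float('inf') means "no edge".
# Returns (tree edges, mincost), or None if the graph is not connected.
def prim(cost):
    n = len(cost)
    INF = float('inf')
    in_tree = [False] * n
    in_tree[0] = True                   # start from vertex 0
    tree, mincost = [], 0
    for _ in range(n - 1):
        best = (INF, None, None)
        for i in range(n):              # i: a vertex already in the tree
            if not in_tree[i]:
                continue
            for j in range(n):          # j: a vertex not yet in the tree
                if not in_tree[j] and cost[i][j] < best[0]:
                    best = (cost[i][j], i, j)
        w, i, j = best
        if j is None:                   # no connecting edge: graph not connected
            return None
        in_tree[j] = True
        tree.append((i, j))
        mincost += w
    return tree, mincost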
SINGLE SOURCE SHORTEST PATH

Graphs can be used to represent the highway structure of a state or country with vertices
representing cities and edges representing sections of highway. The edges can then be assigned
weights which may be either the distance between the two cities connected by the edge or the
average time to drive along that section of highway. A motorist wishing to drive from city A to B
would be interested in answers to the following questions:
1. Is there a path from A to B?
2. If there is more than one path from A to B, which is the shortest path?

The problems defined by these questions are special case of the path problem we study in this
section. The length of a path is now defined to be the sum of the weights of the edges on that path.
The starting vertex of the path is referred to as the source and the last vertex the destination. The
graphs are digraphs representing streets. Consider a digraph G=(V,E), with the distance to be
traveled as weights on the edges. The problem is to determine the shortest path from v0 to all the
remaining vertices of G. It is assumed that all the weights associated with the edges are positive.
The shortest path between v0 and some other node v is an ordering among a subset of the edges.
Hence this problem fits the ordering paradigm.

Example:
Consider the digraph of fig 7-1. Let the numbers on the edges be the costs of travelling along that
route. If a person wants to travel from v1 to v2, then he encounters many paths. Some of
them are:
1. v1-> v2 = 50 units
2. v1-> v3->v4->v2 = 10+15+20=45 units
3. v1->v5-> v4-> v2 = 45+30+20= 95 units
4. v1->v3-> v4-> v5-> v4-> v2 = 10+15+35+30+20=110 units

The cheapest path among these is the path v1 -> v3 -> v4 -> v2. The cost of this path is
10+15+20 = 45 units. Even though there are three edges on this path, it is cheaper than travelling
along the path connecting v1 and v2 directly, i.e. the path v1 -> v2 that costs 50 units. One can also
notice that it is not possible to travel to v6 from any other node.

To formulate a greedy-based algorithm to generate the cheapest paths, we must conceive a
multistage solution to the problem and also an optimization measure. One possibility is to build
the shortest paths one by one. As an optimization measure we can use the sum of the lengths of all
paths generated so far. For this measure to be minimized, each individual path must be of
minimum length. If we have already constructed i shortest paths, then using this optimization
measure, the next path to be constructed should be the next shortest (minimum length) path. The
greedy way is to generate these paths in non-decreasing order of path length: first, a shortest path to
the nearest vertex is generated; then a shortest path to the second nearest vertex is generated, and
so on.

A much simpler method is to solve it using a matrix representation. The steps to be
followed are as follows:

Step 1: find the adjacency matrix for the given graph. The adjacency matrix for fig 7.1 is given
below

       V1    V2    V3    V4    V5    V6
V1     -     50    10    Inf   45    Inf
V2     Inf   -     15    Inf   10    Inf
V3     20    Inf   -     15    Inf   Inf
V4     Inf   20    Inf   -     35    Inf
V5     Inf   Inf   Inf   30    -     Inf
V6     Inf   Inf   Inf   3     Inf   -

Step 2: consider v1 to be the source and choose the minimum entry in the row v1. In the above
table the minimum in row v1 is 10.

Step 3: find out the column in which the minimum is present, for the above example it is column
v3. Hence, this is the node that has to be next visited.

Step 4: compute a matrix by eliminating v1 and v3 columns. Initially retain only row v1. The
second row is computed by adding 10 to all values of row v3.
The resulting matrix is
                     V2        V4        V5        V6
V1 -> Vw             50        Inf       45        Inf
V1 -> V3 -> Vw       10+Inf    10+15     10+Inf    10+Inf
Minimum              50        25        45        Inf
Step 5: find the minimum in each column. Now select the minimum from the resulting row. In the
above example the minimum is 25. Repeat step 3 followed by step 4 till all vertices are covered or
single column is left.

The solution for the fig 7.1 can be continued as follows

                               V2        V5        V6
V1 -> Vw                       50        45        Inf
V1 -> V3 -> V4 -> Vw           25+20     25+35     25+Inf
Minimum                        45        45        Inf

                                    V5        V6
V1 -> Vw                            45        Inf
V1 -> V3 -> V4 -> V2 -> Vw          45+10     45+Inf
Minimum                             45        Inf

                                          V6
V1 -> Vw                                  Inf
V1 -> V3 -> V4 -> V2 -> V5 -> Vw          45+Inf
Minimum                                   Inf

Finally, the cheapest paths from v1 to the other vertices are obtained along the order V1 -> V3 -> V4 -> V2 -> V5 (v6 remains unreachable).

1. Algorithm ShortestPaths(v, cost, dist, n)
2. // dist[j], 1<=j<=n, is set to the length of the shortest path from vertex v to vertex j
3. // in a digraph G with n vertices.
4. // dist[v] is set to zero. The cost adjacency matrix is cost[1:n, 1:n].
5. {
6.     for i := 1 to n do
7.     {   // initialize S
8.         S[i] := false;
9.         dist[i] := cost[v,i];
10.    }
11.    S[v] := true;
12.    dist[v] := 0.0;    // put v in S
13.    for num := 2 to n-1 do
14.    {
15.        // Determine n-1 paths from v.
16.        Choose u from among those vertices not in S such that dist[u] is minimum;
17.        S[u] := true;   // put u in S
18.        for (each w adjacent to u with S[w] = false) do
19.        {   // update distances
20.            if (dist[w] > dist[u] + cost[u,w]) then
21.                dist[w] := dist[u] + cost[u,w];
22.        }
23.    }
24. }

The analysis of the algorithm is as follows


 The for loop at line 6 takes O(n) time.
 The for loop at line 13 is executed n-2 times. Each execution of this loop requires O(n)
time to select the vertex at lines 16 and 17.
 The innermost for loop at line 18 takes O(n) time.
 So the for loop of line 13 takes O(n²) time.

 The time complexity of the algorithm is O(n²).
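
For comparison, a straightforward Python sketch of this single-source shortest path procedure over a cost adjacency matrix is given below; it assumes 0 on the diagonal and float('inf') where the table above shows "Inf", and the function name is illustrative.

# Single-source shortest paths sketch: cost is an n x n matrix with
# cost[i][j] = edge weight or float('inf') if there is no edge; v is the source.
# Returns dist, where dist[j] is the length of the shortest path from v to j.
def shortest_paths(v, cost):
    n = len(cost)
    INF = float('inf')
    S = [False] * n                     # S[i] is True once vertex i is finalized
    dist = [cost[v][i] for i in range(n)]
    S[v] = True
    dist[v] = 0.0
    for _ in range(n - 1):
        # choose u not in S with minimum dist[u]
        u = min((i for i in range(n) if not S[i]), key=lambda i: dist[i], default=None)
        if u is None or dist[u] == INF:
            break                       # remaining vertices are unreachable
        S[u] = True
        for w in range(n):              # update distances that improve through u
            if not S[w] and dist[w] > dist[u] + cost[u][w]:
                dist[w] = dist[u] + cost[u][w]
    return dist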

Optimal storage problem:


Input: We are given n programs that are to be stored on a computer tape of length L. The
length of program i is li,
such that 1 ≤ i ≤ n and Σ1≤i≤n li ≤ L.
Output: A permutation of the n programs, chosen from among all n! permutations, such that when they are stored on the tape in
that order the mean retrieval time (MRT) is minimized.
Example:
Let n = 3 and (l1, l2, l3) = (8, 12, 2). As n = 3, there are 3! = 6 possible orderings.
All these orderings and their respective d(I) values are given below:

Ordering    d(I)                         Value
1, 2, 3     8 + (8+12) + (8+12+2)        50
1, 3, 2     8 + (8+2) + (8+2+12)         40
2, 1, 3     12 + (12+8) + (12+8+2)       54
2, 3, 1     12 + (12+2) + (12+2+8)       48
3, 1, 2     2 + (2+8) + (2+8+12)         34
3, 2, 1     2 + (2+12) + (2+12+8)        38

The optimal ordering is 3, 1, 2.


The greedy method is now applied to solve this problem. It requires that the programs be
sorted in non-decreasing order of their lengths, which can be done in O(n log n) time.
Greedy solution:
i. Make the tape empty.
ii. For i = 1 to n do:
iii. Grab the next shortest program.
iv. Put it on the next tape.
The algorithm takes the best short-term choice without checking whether it will turn out to be a good
long-term decision.
Example:
Find an optimal placement for 13 programs on 3 tapes T0, T1 and T2, where the
programs are of lengths 12, 5, 8, 32, 7, 5, 18, 26, 4, 3, 11, 10 and 6.
Given problem:

Length     12   5    8    32   7    5    18   26   4    3    11   10   6
Program    [0]  [1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]  [10] [11] [12]

We arrange the programs in non-decreasing order of length:

Length     3    4    5    5    6    7    8    10   11   12   18   26   32
Program    [9]  [8]  [1]  [5]  [12] [4]  [2]  [11] [10] [0]  [6]  [7]  [3]
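
A small Python sketch of this greedy placement is given below: sort the programs by non-decreasing length and deal them to the tapes in round-robin order. The function name and the round-robin reading of "put it on the next tape" are assumptions for illustration.

# Greedy storage on multiple tapes: sort programs by non-decreasing length,
# then assign them to tapes T0, T1, ... in round-robin order.
def store_on_tapes(lengths, num_tapes):
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    tapes = [[] for _ in range(num_tapes)]
    for pos, prog in enumerate(order):
        tapes[pos % num_tapes].append(prog)   # next shortest program goes to the next tape
    return tapes

# Example above (13 programs, 3 tapes):
# lengths = [12, 5, 8, 32, 7, 5, 18, 26, 4, 3, 11, 10, 6]
# print(store_on_tapes(lengths, 3))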

Optimal merge patterns


Introduction:
• Merging two files that have n and m elements respectively takes O(n+m) time.
• Given n files,
what is the minimum time needed to merge all n files?
Example:
(F1, F2, F3, F4, F5) = (20, 30, 10, 5, 30).
M1 = F1 & F2 => 20+30 = 50
M2 = M1 & F3 => 50+10 = 60
M3 = M2 & F4 => 60+5 = 65
M4 = M3 & F5 => 65+30 = 95
Total = 50+60+65+95 = 270

Optimal merge pattern (greedy method): Sort the list of files:
(5, 10, 20, 30, 30) = (F4, F3, F1, F2, F5)
Merge the files using a two-way merge:
1. Merge two files at a time.
2. Add the merged file into the list of files in sorted order.

Merge the first two files:
(5, 10, 20, 30, 30) => (15, 20, 30, 30)
Merge the next two files:
(15, 20, 30, 30) => (30, 30, 35)
Merge the next two files:
(30, 30, 35) => (35, 60)
Merge the next two files:
(35, 60) => (95)
Total time = 15+35+60+95 = 205.

Problem:
• Input: Given n sorted files
• Output: Merge n files in a minimum amount of time.

Working of the algorithm with an example:

Consider a pool of files L which contains n files.
Step 1: Find the first two minimum-length files.
Step 2: Merge them and add the new merged file to the pool L.
Step 3: Repeat steps 1 and 2 until only one file remains in the pool L.
e.g.

L = (F1, F2, F3, F4, F5) = (20, 30, 10, 5, 30).

Find the first two minimum files, which are F4 (5) and F3 (10).

Next, merge them and add the new file M1 to the pool L.

[Merge tree: M1 (15) with children F4 (5) and F3 (10).]
L = (F1, F2, M1, F5) = (20, 30, 15, 30).

Now find the next two minimum files (which are F1 and M1), merge them, and add the new
merged file M2 to L.

[Merge tree: M2 (35) with children M1 (15) and F1 (20); M1 has children F4 (5) and F3 (10).]
L = (F2, M2, F5) = (30, 35, 30).
Now find the next two minimum files (which are F2 and F5), merge them, and add the new
merged file M3 to L. Then L = (M2, M3) = (35, 60).

[Merge trees: M2 (35) with children M1 (15) and F1 (20), where M1 has children F4 (5) and F3 (10);
M3 (60) with children F2 (30) and F5 (30).]

Now merge the last two files. L = (M4) = {95}.

[Final merge tree: M4 (95) with children M2 (35) and M3 (60); M2 has children M1 (15) and F1 (20),
M1 has children F4 (5) and F3 (10), and M3 has children F2 (30) and F5 (30).]

Analysis:

1. Case 1: L is not sorted.
   O(find minimum) = O(n): the first two passes of selection sort give the first two minima at a cost of about 2n, i.e. O(n).
   O(insertion into L) = O(1): L is not sorted, so the merged file can simply be placed at the end of the array.
   Time complexity: T = O(n²).

2. Case 2: L is sorted.
   Case 2.1:
   O(find minimum) = O(1): because L is sorted, the two minima are in the first two places.
   O(insertion into L) = O(n): after a merge, the new file must be inserted at the correct position so that L remains sorted.
   Time complexity: T = O(n²).

   Case 2.2: L is represented as a min-heap; the value in the root is less than or equal to the values of its children.
   O(find minimum) = O(log n): cost of deleting the minimum from the min-heap.
   O(insertion into L) = O(log n): cost of inserting the merged file into the min-heap.
   Time complexity: T = O(n log n).
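
The min-heap case (2.2) corresponds directly to Python's heapq module. A short sketch that reproduces the example's total merge cost of 205 is given below; the function name is illustrative.

import heapq

# Optimal merge pattern via a min-heap: repeatedly merge the two smallest
# files; the sum of the intermediate merge costs is the total merge time.
def optimal_merge_cost(file_sizes):
    heap = list(file_sizes)
    heapq.heapify(heap)                  # O(n) heap construction
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)          # two smallest files, O(log n) each
        b = heapq.heappop(heap)
        total += a + b                   # cost of this merge
        heapq.heappush(heap, a + b)      # the merged file goes back into the pool
    return total

# Example above: expected total 205
# print(optimal_merge_cost([20, 30, 10, 5, 30]))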
