Daa Endsem Paper Sol Unit III


B. Tech V Sem
Design and Analysis of Algorithms

Unit-III
(2017-18)

1. Difference between Greedy Technique and Dynamic programming.

A greedy algorithm obtains an optimal solution to a problem by making a
sequence of choices. At each decision point in the algorithm, the choice that
seems best at the moment is made. This choice is referred to as the greedy choice.
Greedy-choice property – “A globally optimal solution can be arrived at by
making a locally optimal (greedy) choice”.

In dynamic programming, we also make a choice at each step, but the choice usually
depends on the solutions to the subproblems. Consequently, we solve dynamic
programming problems in a bottom-up manner, progressing from smaller
subproblems to larger subproblems.

In a greedy algorithm, we make whatever choice seems best at the moment and
then solve the subproblem arising after the choice is made. Thus, unlike
dynamic programming, which solves the subproblems bottom-up, a greedy
strategy usually progresses in a top-down fashion, making one greedy choice
after another.

Not all optimization problems can be solved with the greedy approach. For
example, the Fractional Knapsack problem can be solved using the greedy
approach, whereas the 0-1 Knapsack problem cannot.

For n = 3 items with Pi = (60, 100, 120), Wi = (10, 20, 30) and knapsack capacity = 50:

Pi/Wi = (6, 5, 4)
Using the greedy (profit/weight ratio) approach on the 0-1 problem, we select I1 and
I2 for a profit of 160, which is not optimal. The optimal solution is I2 and I3, with
profit 220.
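To make the contrast concrete, here is a small sketch (illustrative code, not part of the original answer; the function names are assumptions) that runs greedy-by-ratio and an exhaustive search on this 0-1 instance:

```c
#include <assert.h>

/* 0-1 instance from the text: P = (60,100,120), W = (10,20,30), capacity 50. */
#define N 3

static const int P[N] = {60, 100, 120};
static const int W[N] = {10, 20, 30};

/* Greedy by profit/weight ratio: the items are already in decreasing
 * ratio order (6, 5, 4), so just take whole items while they fit. */
int greedy_01(int capacity) {
    int profit = 0;
    for (int i = 0; i < N; i++)
        if (W[i] <= capacity) { capacity -= W[i]; profit += P[i]; }
    return profit;
}

/* True 0-1 optimum by trying all 2^N subsets (fine for tiny N). */
int optimal_01(int capacity) {
    int best = 0;
    for (int mask = 0; mask < (1 << N); mask++) {
        int w = 0, p = 0;
        for (int i = 0; i < N; i++)
            if (mask & (1 << i)) { w += W[i]; p += P[i]; }
        if (w <= capacity && p > best) best = p;
    }
    return best;
}
```

Running both confirms the gap described above: greedy yields 160 while the optimum is 220.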

2. Explain Single source shortest path.

The Single-Source Shortest Path (SSSP) problem consists of finding the shortest
paths between a given vertex v and all other vertices in the graph. Dijkstra’s
algorithm is commonly used to solve this problem. The Single-Pair Shortest Path
(SPSP) problem consists of finding the shortest path between a single pair of
vertices.

Swapna Singh KCS-503 DAA


 The time complexity of the array-based implementation is O(V2). If the input
graph is represented using an adjacency list, this can be reduced to O(E log V)
with the help of a binary heap.
 Dijkstra’s algorithm does not work for graphs with negative edge weights. For
graphs with negative weight edges (but no negative cycles reachable from the
source), the Bellman-Ford algorithm (O(VE)) can be used.

3. What is Minimum Cost Spanning Tree? Explain Kruskal’s Algorithm and


Find MST of the Graph. Also write its Time-Complexity

 Given an undirected and connected graph G=(V,E), a spanning tree of the


graph G is a tree that spans G (that is, it includes every vertex of G) and is a
subgraph of G (every edge in the tree belongs to G).
 The cost of the spanning tree is the sum of the weights of all the edges in the
tree. There can be many spanning trees. Minimum spanning tree is the
spanning tree where the cost is minimum among all the spanning trees. There
also can be many minimum spanning trees.
 Minimum spanning tree has direct application in the design of networks. It is
used in algorithms approximating the travelling salesman problem, multi-
terminal minimum cut problem and minimum-cost weighted perfect
matching.
 Given a connected, undirected graph G = (V, E), a spanning tree is an acyclic
subset of edges T ⊆ E that connects all the vertices together.
 Assuming G is weighted, we define the cost of a spanning tree T to be the sum
of edge weights in the spanning tree:

w(T) = Σ(u,v)∈T w(u,v)

 A minimum spanning tree (MST) is a spanning tree of minimum weight.

Kruskal’s Algorithm : Running Time O(E log E)

A ← ∅ // initially A is empty
for each vertex v ∈ V[G] // lines 2-3 take O(V) time
    do MAKE-SET(v) // create a set for each vertex
sort the edges of E by nondecreasing weight w
for each edge (u,v) ∈ E, in order by nondecreasing weight
    do if FIND-SET(u) ≠ FIND-SET(v) // u and v are in different trees
        then A ← A ∪ {(u,v)}
             UNION(u,v)
return A

A = {1,3}

A = {1,3} {4,6}

A = {1,3} {4,6} {2,5}

A = {1,3} {4,6} {2,5} {3,6} → components {1,3,4,6} {2,5}

A = {1,3,4,6} {2,5} {2,3} → component {1,2,3,4,5,6}

MST cost = 20. (The figure showing the graph and its minimum spanning tree is not reproduced here.)
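As a sketch of the algorithm above in C with union-find: the edge weights below are assumed for illustration (the original figure is not reproduced), chosen so that edges are accepted in the order traced above and the MST cost is 20.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { int u, v, w; } Edge;

static int parent[7], rank_[7];          /* vertices numbered 1..6 */

static void make_set(int v) { parent[v] = v; rank_[v] = 0; }

static int find_set(int v) {
    if (parent[v] != v) parent[v] = find_set(parent[v]); /* path compression */
    return parent[v];
}

static void union_set(int a, int b) {    /* union by rank */
    a = find_set(a); b = find_set(b);
    if (rank_[a] < rank_[b]) { int t = a; a = b; b = t; }
    parent[b] = a;
    if (rank_[a] == rank_[b]) rank_[a]++;
}

static int cmp_edge(const void *x, const void *y) {
    return ((const Edge *)x)->w - ((const Edge *)y)->w;
}

/* Returns the total MST weight for vertices 1..n. */
int kruskal(Edge *edges, int m, int n) {
    int cost = 0;
    for (int v = 1; v <= n; v++) make_set(v);
    qsort(edges, m, sizeof(Edge), cmp_edge);  /* nondecreasing weight */
    for (int i = 0; i < m; i++)
        if (find_set(edges[i].u) != find_set(edges[i].v)) {
            union_set(edges[i].u, edges[i].v);
            cost += edges[i].w;
        }
    return cost;
}
```

With the assumed weights, the edges (1,3), (4,6), (2,5), (3,6), (2,3) are accepted in that order and the remaining edges are rejected as cycle-forming.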

4. Explain Convex –Hull problem. (2019-20)

 The convex hull of a set S of points in the plane is defined to be the smallest
convex polygon containing all the points of S.
 A polygon is said to be convex if, for any two points P1 and P2 inside the polygon,
the directed segment from P1 to P2, <P1,P2>, is fully contained in the polygon.
 The vertices of the convex hull of a set S of points form a (not necessarily
proper) subset of S. Given a finite set of points S = {P1,P2,…,Pn}, the convex hull
of S is the smallest convex set C such that S ⊂ 𝑪.
 There are two variants of the convex hull problem :
 Obtain the vertices of the convex hull (also referred to as extreme points)
 Obtain the vertices of the convex hull in some order.
 The convex hull of Q is denoted by CH(Q).
 Algorithms used to compute the convex hull of a set of n points:
 GRAHAM-SCAN
 DIVIDE-AND-CONQUER METHOD
 Both algorithms run in O(n lg n) time.

Swapna Singh KCS-503 DAA


(Figure omitted: an example of a convex hull and a polygon that is not convex.)

5. Find the shortest path in the below graph from the source vertex 1 to all other
vertices by using Dijkstra’s algorithm.

 Single-source shortest path problem:

o No negative-weight edges: w(u, v) ≥ 0 for every (u, v) ∈ E
 Maintains two sets of vertices:
o S = vertices whose final shortest-path weights have already been determined
o Q = vertices in V – S, kept in a min-priority queue
 Keys in Q are estimates of shortest-path weights (d[v])
o Repeatedly select the vertex u ∈ V – S with the minimum shortest-path estimate d[u]

Swapna Singh KCS-503 DAA


Distance estimates from source vertex 1 (each row is one iteration; “-” marks a
vertex whose shortest path has been finalized):

Vertex:    1    2    3    4    5
Initial:   0    ∞    ∞    ∞    ∞
           -   10    ∞   30  100
           -    -   60   30  100
           -    -   50    -  100
           -    -    -    -   90

Final shortest distances from vertex 1: (0, 10, 50, 30, 90).

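The iterations above can be reproduced with the O(V2) array implementation of Dijkstra’s algorithm. The edge weights below are reconstructed from the distance table (an assumption, since the original figure is not shown); vertices are 0-indexed here.

```c
#include <assert.h>
#include <limits.h>

#define V 5
#define INF INT_MAX

/* w[i][j] = weight of directed edge i->j; 0 means no edge.
 * Reconstructed: 1->2 (10), 1->4 (30), 1->5 (100), 2->3 (50),
 * 3->5 (40), 4->3 (20), with vertices shifted to 0..4. */
static const int w[V][V] = {
    {0, 10,  0, 30, 100},
    {0,  0, 50,  0,   0},
    {0,  0,  0,  0,  40},
    {0,  0, 20,  0,   0},
    {0,  0,  0,  0,   0},
};

void dijkstra(int src, int d[V]) {
    int done[V] = {0};
    for (int i = 0; i < V; i++) d[i] = INF;
    d[src] = 0;
    for (int iter = 0; iter < V; iter++) {
        int u = -1;
        for (int i = 0; i < V; i++)          /* extract-min over V - S */
            if (!done[i] && (u == -1 || d[i] < d[u])) u = i;
        if (d[u] == INF) break;              /* remaining vertices unreachable */
        done[u] = 1;                         /* move u into S */
        for (int v = 0; v < V; v++)          /* relax the edges leaving u */
            if (w[u][v] && d[u] + w[u][v] < d[v]) d[v] = d[u] + w[u][v];
    }
}
```

The resulting distances match the final row of the trace: (0, 10, 50, 30, 90).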

6. Given the six items in the table below and a knapsack of capacity 100, what is
the solution to the Knapsack problem under each approach? That is, explain the
greedy approaches and find the optimal solution.

(The item table is not reproduced here; from the computations below, the items are
A(w=100, v=40), B(50, 35), C(40, 20), D(20, 4), E(10, 10), F(10, 6).)

0-1 Knapsack

Optimal Solution :-

Item Selected Weight Value


A 100 40
BCE 100 65
BCF 100 61
CDEF 80 40

So Optimal Solution = 65

Greedy Solution :-

Item Selected Weight Value


E 10 10
B 10+50=60 10+35=45
F 60+10=70 45+6=51
C exceeds 100 (70+40) --
D 70+20=90 51+4=55
So Greedy Solution = 55

Swapna Singh KCS-503 DAA


Fractional Knapsack

Greedy Solution :-

Item Selected Weight Value


E 10 10
B 10+50=60 10+35=45
F 60+10=70 45+6=51
C (30/40 fraction) 70+30=100 51 + (30/40)(20) = 66

So Greedy Solution = 66

2018-19
7. Compare adjacency list and adjacency matrix representations of a graph with
suitable example and diagram.

A graph is a data structure that consists of the following two components:


 A finite set of vertices, also called nodes.
 A finite set of ordered pairs of the form (u, v), called edges. (u, v) may not be the
same as (v, u) in the case of a directed graph.
 The pair (u, v) indicates that there is an edge from vertex u to vertex v.
 The edges may carry a weight/value/cost.
Adjacency Matrix and Adjacency List are the most commonly used representations of a
graph.

i. Adjacency Matrix:

Adjacency Matrix is a 2D array of size V x V where V is the number of vertices in a


graph. Let the 2D array be adj[][], a slot adj[i][j] = 1 indicates that there is an edge from
vertex i to vertex j. Adjacency matrix for undirected graph is always symmetric.
Adjacency Matrix is also used to represent weighted graphs. If adj[i][j] = w, then there
is an edge from vertex i to vertex j with weight w.

Pros: The representation is easy to implement and follow. Removing an edge takes O(1)
time. Queries such as whether there is an edge from vertex ‘u’ to vertex ‘v’ are efficient
and can be answered in O(1).

Cons: Consumes more space, O(V^2). Even if the graph is sparse (has few edges), it
consumes the same space. Adding a vertex takes O(V^2) time. Computing all
neighbors of a vertex takes O(V) time (not efficient).

ii. Adjacency List:

An array of lists is used. The size of the array is equal to the number of vertices. Let the
array be an array[]. An entry array[i] represents the list of vertices adjacent to the ith
vertex. This representation can also be used to represent a weighted graph. The weights
of edges can be represented as lists of pairs.



Pros: Saves space, O(|V|+|E|). In the worst case there can be C(V, 2) edges
in a graph, consuming O(V^2) space, but sparse graphs use far less. Adding a vertex is
easier. Computing all neighbors of a vertex takes optimal time.
Cons: Queries such as whether there is an edge from vertex u to vertex v are not
efficient; they can take O(V).

In real-life problems, graphs are usually sparse (|E| << |V|2). That is why the adjacency
list is the most commonly used data structure for storing graphs.
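To ground the comparison, here is a minimal sketch (illustrative names; a small undirected graph is assumed) that stores the same graph both ways and shows the two edge-query costs:

```c
#include <assert.h>
#include <stdlib.h>

#define V 4

/* Adjacency matrix: O(V^2) space, O(1) edge query. */
static int adj[V][V];

/* Adjacency list: O(V + E) space, O(degree) edge query. */
typedef struct Node { int v; struct Node *next; } Node;
static Node *list[V];

static void add_edge(int u, int v) {         /* undirected edge */
    adj[u][v] = adj[v][u] = 1;               /* matrix is symmetric */
    Node *a = malloc(sizeof(Node)); a->v = v; a->next = list[u]; list[u] = a;
    Node *b = malloc(sizeof(Node)); b->v = u; b->next = list[v]; list[v] = b;
}

static int has_edge_list(int u, int v) {     /* O(degree(u)) scan */
    for (Node *p = list[u]; p; p = p->next)
        if (p->v == v) return 1;
    return 0;
}
```

The matrix answers `adj[u][v]` in one lookup, while the list must scan the neighbors of u; conversely, iterating over all neighbors of a vertex touches only its list nodes rather than a whole matrix row.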

8. Consider the weights and values of the item listed below. Note that there is only one unit
of each item. The task is to pick a subset of these items such that their total weight is no
more than 11. And their total value is maximized. Moreover, no item can be split. The
total value of items picked by an optimal algorithm is Vopt. A greedy algorithm gives the
maximum value as Vgreedy. Find the value of Vopt – Vgreedy, where max. capacity of
knapsack is 11.
Item ID    A    B    C    D
Weight    10    7    4    2
Value     60   28   20   24
Vi/Wi      6    4    5   12

Optimal Solution for 0-1 Knapsack

Item Selected Weight Value


A 10 60
BC 11 48
BD 9 52
CD 6 44
So Optimal Solution =Vopt = 60

Greedy Solution :-

Item Selected Weight Value


D 2 24
A exceeds 11 (2+10) --
C 2+4=6 24+20=44
B exceeds 11 (6+7) --
So Greedy Solution = Vgreedy = 44

Vopt – Vgreedy = 60 – 44 = 16



9. Prove that if the weights on the edge of the connected undirected graph are distinct
then there is a unique Minimum Spanning Tree. Give an example in this regard. Also,
discuss Prim’s Minimum Spanning Tree Algorithm in detail.

 Let G = (V, E) be a connected undirected graph with distinct edge weights, and
suppose, for contradiction, that it has two different minimum spanning trees A and B.
 Among all edges that belong to exactly one of A and B, let (u,v) be the one of minimum
weight; say (u,v) is in A but not in B.
 Adding (u,v) to B creates a cycle, and this cycle must contain some edge (x,y) that is
not in A.
 Since the weights are distinct and (x,y) also belongs to exactly one of the trees, the
choice of (u,v) gives w(u,v) < w(x,y).
 Replacing (x,y) with (u,v) in B yields a spanning tree with smaller weight than B.
 This contradicts the assumption that B is a minimum spanning tree; hence the MST is
unique.

MST-PRIM(G, w, r)
1. Q ← ∅
2. for each u ∈ V
3.     do key[u] ← ∞
4.        π[u] ← NIL
5.        INSERT(Q, u)
6. DECREASE-KEY(Q, r, 0) ► key[r] ← 0
7. while Q ≠ ∅
8.     do u ← EXTRACT-MIN(Q)
9.        for each v ∈ Adj[u]
10.           do if v ∈ Q and w(u, v) < key[v]
11.              then π[v] ← u
12.                   DECREASE-KEY(Q, v, w(u, v))

• The time complexity of the above program is O(V^2) when the graph is
represented using an adjacency matrix (or the queue is a simple array).
• The time complexity of Prim’s algorithm can be reduced to O(E log V) with the
help of a binary heap and an adjacency-list representation.
• The time complexity of Prim’s algorithm can be reduced to O(E + V log V) with
the help of a Fibonacci heap.
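A compact O(V2) sketch of the pseudocode above, with the priority queue replaced by a linear scan over key[]; the 6-vertex graph and its weights are assumed for illustration:

```c
#include <assert.h>
#include <limits.h>

#define V 6
#define INF INT_MAX

static const int w[V][V] = {   /* 0 = no edge; symmetric (undirected) */
    {0, 7, 2, 0, 0, 0},
    {7, 0, 6, 0, 4, 0},
    {2, 6, 0, 0, 0, 5},
    {0, 0, 0, 0, 0, 3},
    {0, 4, 0, 0, 0, 8},
    {0, 0, 5, 3, 0, 8},
};

/* Returns the total MST weight, growing the tree from root r. */
int prim(int r) {
    int key[V], in_tree[V] = {0}, cost = 0;
    for (int v = 0; v < V; v++) key[v] = INF;  /* key[u] <- infinity */
    key[r] = 0;                                /* key[r] <- 0 */
    for (int iter = 0; iter < V; iter++) {
        int u = -1;
        for (int v = 0; v < V; v++)            /* EXTRACT-MIN by linear scan */
            if (!in_tree[v] && (u == -1 || key[v] < key[u])) u = v;
        in_tree[u] = 1;
        cost += key[u];                        /* edge that attached u */
        for (int v = 0; v < V; v++)            /* DECREASE-KEY for neighbours */
            if (w[u][v] && !in_tree[v] && w[u][v] < key[v]) key[v] = w[u][v];
    }
    return cost;
}
```

On this assumed graph the tree grows one cheapest frontier edge at a time, for a total weight of 20.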

10. Find the shortest path in the below graph from the source vertex A to other vertices
by using Bellman Ford algorithm. When do Dijkstra’s and the Bellman-Ford
algorithm both fail to find the shortest path.

Distance estimates from source vertex A after each full pass over the edges:

Vertex:   A  B  C  D   E  F  G  H
Initial:  0  ∞  ∞  ∞   ∞  ∞  ∞  ∞
Pass 1:   -  2  1  4  12  4  8  7
Pass 2:   -  2  1  4  11  4  7  7

(The table marking which edges are relaxed in each pass is not reproduced here.)



Dijkstra’s algorithm can fail whenever the graph has negative edge weights; Bellman-Ford
fails to produce shortest paths when a negative-weight cycle is reachable from the source.
So both algorithms fail to find shortest paths when the graph contains such negative cycles.
The Bellman-Ford algorithm at least detects the presence of negative cycles in a given graph.

11. Given an integer x and a nonnegative integer n, use the divide and conquer approach
to write a function that computes xn with complexity O(lg n).

int power(int x, unsigned int n)
{
    int temp;
    if (n == 0)
        return 1;
    temp = power(x, n / 2);   /* one half-size subproblem */
    if (n % 2 == 0)
        return temp * temp;
    else
        return x * temp * temp;
}

2019-20
12. What are greedy algorithms? Explain their characteristics.

 Greedy algorithm: an algorithmic technique to solve optimization problems


o Always makes the choice that looks best at the moment.
o Makes a locally optimal choice in the hope that this choice will lead to a globally
optimal solution
 A greedy algorithm is an algorithmic paradigm that follows the problem-solving heuristic of
making the locally optimal choice at each stage with the intent of finding a global optimum.
 In many problems, a greedy strategy does not produce an optimal solution, but
nonetheless a greedy heuristic may yield locally optimal solutions that approximate a
globally optimal solution in a reasonable amount of time.
 Greedy is a strategy that works well on optimization problems with the following
characteristics:
o Greedy-choice property: A global optimum can be arrived at by selecting a local
optimum. We make the choice that looks best in the current problem, without
considering results for the subproblems.
o Optimal substructure: An optimal solution to the problem contains an optimal
solution to subproblems. The second property may make greedy algorithms look
like dynamic programming. This property is essential for both dynamic
programming and greedy approach.
 In Dynamic programming, we make a choice at each step, but the choice usually depends
on the solutions of the subproblems. It solves the problem in a bottom up manner. Whereas
Greedy strategy usually progresses in a top-down manner.
 Design/Correctness
1. Cast the problem as one where you make choices and each choice results in a
smaller subproblem to solve.
2. Prove that the greedy choice property holds.



3. Demonstrate that the solution to the subproblem can be combined with the greedy
choice to get an optimal solution for the original problem.
 A greedy algorithm obtains an optimal solution to a problem by making a sequence of
choices. For each decision point in the algorithm, the choice that seems best at the
moment is chosen. This heuristic strategy does not always produce an optimal solution,
but, sometimes it does.
 Following steps are performed in a greedy algorithm:
 Determine the optimal substructure of the problem.
 Develop a recursive solution.
 Prove that at any stage of the recursion, one of the optimal choices is the greedy
choice. Thus, it is always safe to make the greedy choice.
 Show that all but one of the subproblems induced by having made the greedy
choice are empty.
 Develop a recursive algorithm that implements the greedy strategy.
 Convert the recursive algorithm to an iterative algorithm.

13. Define Spanning Tree. Write Kruskal’s Algorithm for finding minimum cost
spanning tree. Describe how Kruskal’s algorithm is different from Prim’s Algorithm
for finding minimum cost spanning tree.

Answer 3 (2017-18)
Both Prim’s and Kruskal’s algorithm finds the Minimum Spanning Tree and follow the
Greedy approach of problem-solving. However, there are few differences :

 Prim’s algorithm starts building the minimum spanning tree from any vertex in the
graph, whereas Kruskal’s builds it starting from the edge carrying minimum weight in
the graph.
 Prim’s may revisit a vertex more than once while updating minimum distances;
Kruskal’s processes each edge only once.
 Prim’s algorithm has a time complexity of O(V2), V being the number of vertices, which
can be improved to O(E log V) using binary heaps (and O(E + V log V) using Fibonacci
heaps). Kruskal’s algorithm’s time complexity is O(E log E) = O(E log V).
 Prim’s algorithm gives connected component as well as it works only on connected
graph. Kruskal’s algorithm can generate forest(disconnected components) at any
instant as well as it can work on disconnected components
 Prim’s algorithm runs faster in dense graphs. Kruskal’s algorithm runs faster in sparse
graphs.
 Prim’s generates the minimum spanning tree starting from the root vertex. Kruskal’s
generates the minimum spanning tree starting from the least weighted edge.

14. Compare the various programming paradigms such as Divide and Conquer, Dynamic
Programming and Greedy approach.

Greedy Algorithm vs Divide and Conquer Algorithm vs Dynamic Algorithm:



 Approach: Greedy follows a top-down approach; divide and conquer follows a top-down
approach; dynamic programming follows a bottom-up approach.
 Problem type: Greedy and dynamic programming are used to solve optimization
problems; divide and conquer is used to solve decision problems.
 Subproblem handling: In greedy, the optimal solution is generated without revisiting
previously generated solutions, so re-computation is avoided. In divide and conquer, the
solution of a subproblem may be computed recursively more than once. In dynamic
programming, the solution of each subproblem is computed once and stored in a table
for later use.
 Optimality: Greedy may or may not generate an optimal solution. Divide and conquer is
used to obtain a solution to the given problem; it does not aim for the optimal solution.
Dynamic programming always generates an optimal solution.
 Nature: Greedy is iterative in nature; divide and conquer and dynamic programming are
recursive in nature.
 Efficiency: Greedy is efficient and faster than divide and conquer; divide and conquer is
less efficient and slower; dynamic programming is more efficient and faster than greedy.
 Memory: Greedy requires no extra memory; divide and conquer requires some memory;
dynamic programming requires more memory to store subproblems for later use.
 Examples: Greedy – Fractional Knapsack problem, Activity Selection problem, Job
Sequencing problem. Divide and conquer – Merge Sort, Quick Sort, Strassen’s matrix
multiplication. Dynamic programming – 0/1 Knapsack, All-Pairs Shortest Path,
Matrix-Chain Multiplication.


15. Divide and Conquer and Greedy

Divide and Conquer Algorithm: an algorithmic paradigm in which the problem is solved
using the Divide, Conquer, and Combine strategy. It solves a problem using the following
three steps:
Divide: Divide the problem into smaller sub-problems.
Conquer: Solve the sub-problems by calling recursively until solved.
Combine: Combine the sub-problem solutions to get the final solution of the whole problem.

Greedy Algorithm: a method for solving optimization problems by taking decisions that
yield the most evident and immediate benefit, irrespective of the final outcome. It is a
simple, intuitive technique used in optimization problems.

Difference between the Greedy Algorithm and the Divide and Conquer Algorithm:

 Divide and conquer is used to obtain a solution to the given problem and does not aim
for the optimal solution, whereas the greedy method is used to obtain an optimal
solution to the given problem.
 In divide and conquer, the problem is divided into small subproblems, which are solved
independently; finally, all the subproblem solutions are combined to get the solution to
the given problem. In the greedy method, a set of feasible solutions is generated, and
one feasible solution is picked as the optimal solution.
 Divide and conquer is less efficient and slower because it is recursive in nature,
whereas the greedy method is comparatively efficient and faster as it is iterative in
nature.
 Divide and conquer may generate duplicate solutions, whereas in the greedy method
the optimal solution is generated without revisiting previously generated solutions,
thus avoiding re-computation.
 Divide and conquer algorithms mostly run in polynomial time; greedy algorithms also
run in polynomial time but take less time than divide and conquer.
 Examples – Divide and conquer: Merge Sort, Quick Sort, Strassen’s matrix
multiplication. Greedy: Fractional Knapsack problem, Activity Selection problem, Job
Sequencing problem.



16. Algorithm For Convex Hull

Divide and Conquer Algorithm for finding Convex-Hull

 Each recursive invocation of the algorithm takes as input a subset P ⊆ Q and arrays X
and Y, each of which contains all the points of the input subset P.
 The points in array X are sorted so that their x-coordinates are monotonically
increasing.
 Similarly, array Y is sorted by monotonically increasing y-coordinate.
 Suppose we know the convex hull of the left half points and the right half points, then
the problem now is to merge these two convex hulls and determine the convex hull
for the complete set.
 This can be done by finding the upper and lower tangent to the right and left convex
hulls.
 Let the left convex hull be A and the right convex hull be B; the lower and upper
tangents join them. (The original figure, in which the red outline shows the final
convex hull, is not reproduced here.)

 Divide:
o It finds a vertical line l that bisects the point set P into two sets PL and PR such
that |PL| = ⌈|P|/2⌉, |PR| = ⌊|P|/2⌋, all points in PL are on or to the left of line l,
and all points in PR are on or to the right of l.
o The array X is divided into arrays XL and XR, which contain the points of PL
and PR respectively, sorted by monotonically increasing x-coordinate.
o Similarly, the array Y is divided into arrays YL and YR, which contain the
points of PL and PR respectively, sorted by monotonically increasing y-
coordinate.
o The division terminates if |P| <= 3.
 Conquer:
o Having divided P into PL and PR, it makes two recursive calls, one to compute
the convex hull of PL and the other to compute the convex hull of PR.
o The inputs to the first call are the subset PL and arrays XL and YL; the second
call receives the inputs PR, XR, and YR.
o Let the hulls returned be CH(PL) (the left hull A) and CH(PR) (the right hull B).
 Combine:
o The two hulls A and B are merged by finding their upper and lower common
tangents.
o Starting from the rightmost point of A and the leftmost point of B, the candidate
line is repeatedly advanced along one hull or the other until it lies on or above
(for the upper tangent), respectively on or below (for the lower tangent), both
hulls.
o The hull vertices lying between the two tangents on the inner sides of A and B
are discarded; the remaining vertices, together with the two tangent edges,
form CH(P). The tangent finding and merging take O(n) time.
 Analysis :
o In Θ(n) time the set of n points is divided into two subsets, one containing the
leftmost ⌈n/2⌉ points and one containing the rightmost ⌊n/2⌋ points; the convex
hulls of the subsets are computed recursively, and it then takes O(n) time to
combine the hulls.
o T(n) = 2T(n/2) + O(n) = O(n lg n)

17. Explain Searching technique using divide and conquer.

Binary Search is a searching algorithm used on a sorted array; it repeatedly divides the
search interval in half. The idea of binary search is to use the information that the array is
sorted to reduce the time complexity to O(log n).
The basic steps to perform Binary Search are:
1. Begin with the middle element of the whole array, comparing it with the search key.
2. If the search key equals the middle element, return the index of that element.
3. If the search key is less than the middle element, narrow the interval to the lower half.
4. Otherwise, narrow it to the upper half.
5. Repeat from step 1 on the narrowed interval until the value is found or the interval is empty.
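The steps above can be sketched as an iterative C function (illustrative, not from the original answer):

```c
#include <assert.h>

/* Returns the index of key in the sorted array a[0..n-1], or -1 if absent. */
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {                        /* interval [lo, hi] is nonempty */
        int mid = lo + (hi - lo) / 2;         /* avoids overflow of lo + hi */
        if (a[mid] == key) return mid;        /* step 2: found */
        else if (key < a[mid]) hi = mid - 1;  /* step 3: lower half */
        else lo = mid + 1;                    /* step 4: upper half */
    }
    return -1;                                /* interval became empty */
}
```

Each iteration halves the interval, giving the O(log n) bound stated above.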

18. Discuss greedy approach to an activity selection problem of scheduling several competing
activities. Solve the following activity selection problem
S = {A1, A2, A3, A4, A5, A6, A7, A8, A9, A10}
Si = { 1, 2, 3, 4, 7, 8, 9, 9, 11, 12 }
Fi = { 3, 5, 4, 7, 10, 9, 11, 13, 12, 14 }

Scheduling several competing activities that require exclusive use of a common resource,
with the goal of selecting a maximum-size set of mutually compatible activities.
 Definition –
o S = {1, 2,…, n} – activities that wish to use a resource
o Each activity has a start time si and a finish time fi, where si ≤ fi
o If selected, activity i takes place during the half-open interval [si, fi)
o Activities i and j are compatible if [si, fi) and [sj, fj) do not overlap, i.e., si ≥ fj or sj ≥ fi
o The activity-selection problem is to select a maximum-size set of mutually
compatible (non-interfering) activities.
 Greedy-Activity-Selector Algorithm
o Suppose f1 ≤ f2 ≤ f3 ≤ … ≤ fn
o Greedy choice: always pick the compatible activity with the earliest finish
time; the selection takes Θ(n) if the activities are already sorted by their
finish times.

 GREEDY-ACTIVITY-SELECTOR(s, f)
o n ← length[s]
o A ← {a1}
o i ← 1
o for m ← 2 to n
o     do if sm ≥ fi
o         then A ← A ∪ {am}
o              i ← m
o return A

Sorted by finish time:
S = {A1, A3, A2, A4, A6, A5, A7, A9, A8, A10}
Si = { 1, 3, 2, 4, 8, 7, 9, 11, 9, 12 }
Fi = { 3, 4, 5, 7, 9, 10, 11, 12, 13, 14 }

i    n    m       Output
0    10   1       {a1}
1    10   3       {a1, a3}
2    10   2, 4    {a1, a3, a4}
4    10   6       {a1, a3, a4, a6}
5    10   5, 7    {a1, a3, a4, a6, a7}
8    10   8, 9    {a1, a3, a4, a6, a7, a9}
9    10   10      {a1, a3, a4, a6, a7, a9, a10}
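The selector can be sketched in C on this instance, with the activities pre-sorted by finish time as in the trace (illustrative function and array names):

```c
#include <assert.h>

#define N 10

/* Greedy activity selection on activities pre-sorted by finish time.
 * Marks chosen activities in sel[] and returns how many were taken. */
int select_activities(const int s[], const int f[], int sel[]) {
    int count = 1, i = 0;          /* always take the first activity */
    sel[0] = 1;
    for (int m = 1; m < N; m++) {
        sel[m] = 0;
        if (s[m] >= f[i]) {        /* compatible with the last one taken */
            sel[m] = 1;
            i = m;
            count++;
        }
    }
    return count;
}
```

On the sorted instance above this selects 7 activities, matching the final output row of the trace.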

19. Define Minimum Spanning Tree. Write Prim’s Minimum Spanning Tree Algorithm
in detail. (2018-19)



V1 V2 V3 V4 V5 V6 V7 V8
0       
- 12  4    2
- 12  4 11 1 -
- 12  3 11 5 - -
- 12 6 - 11 5 - -
- 12 6 - 8 - - -
- 12 - - 8 - - -
- 9 - - - - - -

Total cost of MST = 34

20. Explain Dijkstra’s algorithm to solve single source shortest path problem with
suitable example.

Answer 5 2017-18

2021-22

21. Explain Greedy algorithm in brief. 2019-20


22. What do you mean by convex hull. (2017-18)
23. Write and explain Kruskal’s algorithm to find minimum spanning tree of a
graph with suitable example. (2017-18)
24. What is the Knapsack problem? Solve the Fractional Knapsack problem using the
greedy approach for the following four items with their weights



W= (3,5,9,5) P=(45,30,45,10) with knapsack capacity = 16

 A thief robbing a store finds n items: the i-th item is worth vi dollars and weighs
wi pounds, where both vi and wi are integers.
 The thief wants to take as valuable a load as possible, but he can carry only W
pounds in his knapsack, where W is some integer.
 Problem: which items should the thief take?
 Two versions of the problem:
o 0-1: For each item, the thief must either take it or leave it; in other
words, he cannot take a fraction of an item.
o Fractional: The thief can take fractions of items.

Item I1 I2 I3 I4

Profit 45 30 45 10

Weight 3 5 9 5

Pi/Wi 15 6 5 2

Item Selected    Remaining Load    Profit Accumulated

–                16                0

I1               16-3=13           45

I2               13-5=8            45+30=75

I3 (8/9)         8-8=0             75 + (45 × (8/9)) = 115

So items selected (I1, I2, 8/9 of I3)
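This computation can be sketched directly in C (illustrative function name; the item data are those of the question, already in decreasing profit/weight order 15, 6, 5, 2):

```c
#include <assert.h>

#define N 4

/* Fractional-knapsack greedy: W = (3,5,9,5), P = (45,30,45,10). */
double fractional_knapsack(double capacity) {
    const double w[N] = {3, 5, 9, 5};
    const double p[N] = {45, 30, 45, 10};
    double profit = 0.0;
    for (int i = 0; i < N && capacity > 0; i++) {
        if (w[i] <= capacity) {            /* take the whole item */
            capacity -= w[i];
            profit += p[i];
        } else {                           /* take the fraction that fits */
            profit += p[i] * (capacity / w[i]);
            capacity = 0;
        }
    }
    return profit;
}
```

For capacity 16 this takes I1 and I2 whole and 8/9 of I3, giving a profit of 115, as in the table above.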

25. Bellman ford algorithm.

 Single-source shortest paths problem
o Computes d[v] and π[v] for all v ∈ V
 Allows negative edge weights and detects negative cycles
 Returns:
o TRUE if no negative-weight cycle is reachable from the source s
o FALSE otherwise (no solution exists)
 Idea:
o Traverse all the edges |V| – 1 times, each time performing a relaxation
step on every edge
 Time complexity is O(VE).



BELLMAN-FORD(G, w, s)

1. INITIALIZE-SINGLE-SOURCE(G, s)
2. for i = 1 to |V[G]| – 1
3.     for each edge (u, v) ∈ E[G]
4.         RELAX(u, v, w)
5. for each edge (u, v) ∈ E[G]
6.     if d[v] > d[u] + w(u, v)
7.         return FALSE
8. return TRUE
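The pseudocode above can be sketched in C; the small test graph below, with one negative edge but no negative cycle, is an assumption for illustration:

```c
#include <assert.h>
#include <limits.h>

#define V 5
#define E 6
#define INF INT_MAX

typedef struct { int u, v, w; } Edge;

/* Assumed graph: one negative edge (2->3, weight -3), no negative cycle. */
static const Edge edges[E] = {
    {0, 1, 6}, {0, 2, 7}, {1, 3, 5}, {2, 3, -3}, {1, 2, 8}, {3, 4, 2},
};

/* Returns 1 (TRUE) if no negative cycle is reachable from s. */
int bellman_ford(int s, int d[V]) {
    for (int i = 0; i < V; i++) d[i] = INF;   /* INITIALIZE-SINGLE-SOURCE */
    d[s] = 0;
    for (int i = 1; i <= V - 1; i++)          /* |V| - 1 passes */
        for (int j = 0; j < E; j++) {         /* relax every edge */
            const Edge *e = &edges[j];
            if (d[e->u] != INF && d[e->u] + e->w < d[e->v])
                d[e->v] = d[e->u] + e->w;
        }
    for (int j = 0; j < E; j++)               /* negative-cycle check */
        if (d[edges[j].u] != INF &&
            d[edges[j].u] + edges[j].w < d[edges[j].v])
            return 0;
    return 1;
}
```

The final pass mirrors lines 5-7 of the pseudocode: if any edge can still be relaxed after |V| – 1 passes, a reachable negative cycle must exist.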
