
Department of Computer Engineering

Lab Manual
CSBS-Second Year Semester-IV
Subject: Design and Analysis of Algorithms

Even Semester

1
Index

Sr. No. Contents Page No.


1. List of Experiments 3
2. Course objective and Course outcome 4
3. Experiment Plan 5
4. Study and Evaluation Scheme 6
5. Experiment No. 1 7
6. Experiment No. 2 10
7. Experiment No. 3 13
8. Experiment No. 4 16
9. Experiment No. 5 19
10. Experiment No. 6 22
11. Experiment No. 7 25
12. Experiment No. 8 28
13. Experiment No. 9 32
14. Experiment No. 10 36

2
List of Experiments

Sr. No.   Experiment Name
1. Implementation of iterative and recursive algorithms for the Binary
Search.
2. Implementation of 8 Queen’s Problem using backtracking.

3. Implementation of Travelling Salesman Problem– Branch and


Bound Approach.
4. Implementation of Huffman Coding Algorithm using Greedy
method.
5. Implementation of Knapsack Problem using Dynamic
Programming.
6. Implementation of Single Source Shortest Path Algorithm
(Dijkstra).
7. Implementation of Single Source Shortest Path Algorithm
(Bellman-Ford).
8. Implementation of Prims and Kruskal’s minimum spanning tree
algorithms
9. Implementation of Approximation Algorithm for Travelling
Salesman Problem.
10. Implementation of Randomized Quick sort algorithm.

3
Course Objective, Course Outcome
&Experiment Plan
Course Objective:
1 To introduce the methods of designing and analyzing algorithms
2 Design and implement efficient algorithms for a specified application
3 Strengthen the ability to identify and apply the suitable algorithm for the given
real-world problem
4 Analyze worst-case running time of algorithms and understand fundamental
algorithmic problems

Lab Outcomes: At the end of the course students will be able to,

CO1  Analyze the running time and space complexity of algorithms.

CO2  Implement and analyze the complexity of algorithms using Backtracking strategy.

CO3  Implement and analyze the complexity of algorithms using Branch and Bound strategy.

CO4  Implement and analyze the complexity of algorithms using Greedy strategy.

CO5  Implement and analyze the complexity of algorithms using Dynamic Programming strategy.

CO6  Implement and analyze the complexity of approximation and randomized algorithms.

4
Experiment Plan:
Module No. | Week No. | Experiment Name | Course Outcome | Weightage
1  | W1  | Implementation of iterative and recursive algorithms for the Binary Search. | CO1 | 10
2  | W2  | Implementation of 8 Queen's Problem using backtracking. | CO2 | 10
3  | W3  | Implementation of Travelling Salesman Problem – Branch and Bound Approach. | CO3 | 10
4  | W4  | Implementation of Huffman Coding Algorithm using Greedy method. | CO4 | 03
5  | W5  | Implementation of Knapsack Problem using Dynamic Programming. | CO5 | 05
6  | W6  | Implementation of Single Source Shortest Path Algorithm (Dijkstra). | CO4 | 03
7  | W7  | Implementation of Single Source Shortest Path Algorithm (Bellman-Ford). | CO5 | 05
8  | W8  | Implementation of Prims and Kruskal's minimum spanning tree algorithms. | CO4 | 04
9  | W9  | Implementation of Approximation Algorithm for Travelling Salesman Problem. | CO6 | 05
10 | W10 | Implementation of Randomized Quick sort algorithm. | CO6 | 05

5
Study and Evaluation Scheme
Course Code: CSL403
Course Name: Design and Analysis of Algorithms Lab
Teaching Scheme: Theory --, Practical 2, Tutorial --
Credits Assigned: Theory --, Practical 1, Tutorial --, Total 1

Examination Scheme (CSL403, Design and Analysis of Algorithms Lab):
Term Work 25, Practical & Oral 25, Total 50

Term Work:

1. Term work should consist of at least 11 experiments.


2. The final certification and acceptance of term work ensures satisfactory performance of laboratory work and minimum passing marks in term work.
3. Term Work: Total 25 Marks = (Experiments: 20 marks + Attendance (Theory and Practical): 05 marks)

Oral and Practical Exam:

Based on the entire syllabus of CSC403 and CSL403.

6
Design and Analysis of Algorithms Lab

Experiment No.: 1
Implementation of iterative and recursive algorithms for
the Binary Search.

7
Experiment No. 1
1. Aim: Implementation of iterative and recursive algorithms for the Binary Search.

2. Objectives:
• To provide mathematical approach for Analysis of Algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.

3. Outcomes:
• Students will be able to prove the correctness and analyze the running time of the basic
algorithms for those classic problems in various domains using divide and conquer
strategy.

4. Hardware / Software Required: Turbo C

5. Theory:
A binary search algorithm is a technique for finding a particular value in a sorted array by ruling out half of the remaining data at each step; it is widely, but not exclusively, used in computer science.
A binary search examines the middle element, makes a comparison to determine whether the desired value comes before or after it, and then searches the remaining half in the same manner. Another way to describe it: search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half; otherwise, narrow it to the upper half.

6. Algorithm

Iteration Method
do until the pointers low and high meet each other.
mid = (low + high)/2
if (x == arr[mid])
return mid
else if (x > arr[mid]) // x is on the right side
low = mid + 1
else // x is on the left side
high = mid - 1
Recursive Method
binarySearch(arr, x, low, high)
if low > high
return False
else
mid = (low + high) / 2
if x == arr[mid]
return mid
else if x > arr[mid] // x is on the right side
return binarySearch(arr, x, mid + 1, high)
else // x is on the left side
return binarySearch(arr, x, low, mid - 1)
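
The following C program is a minimal reference sketch of both versions (the sorted array and the search key in main are assumed test data, not prescribed by the experiment); students may adapt it for their own test cases.

#include <stdio.h>

/* Iterative binary search: returns index of x in arr[0..n-1], or -1. */
int binarySearchIterative(int arr[], int n, int x) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* avoids overflow of (low + high) */
        if (arr[mid] == x)
            return mid;
        else if (arr[mid] < x)
            low = mid + 1;                  /* x lies in the right half */
        else
            high = mid - 1;                 /* x lies in the left half */
    }
    return -1;
}

/* Recursive binary search on arr[low..high]. */
int binarySearchRecursive(int arr[], int low, int high, int x) {
    if (low > high)
        return -1;
    int mid = low + (high - low) / 2;
    if (arr[mid] == x)
        return mid;
    else if (arr[mid] < x)
        return binarySearchRecursive(arr, mid + 1, high, x);
    else
        return binarySearchRecursive(arr, low, mid - 1, x);
}

int main(void) {
    int arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};   /* must be sorted */
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 23;
    printf("Iterative: index %d\n", binarySearchIterative(arr, n, x));
    printf("Recursive: index %d\n", binarySearchRecursive(arr, 0, n - 1, x));
    return 0;
}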

7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
Test Case No. | Test Case Description | Test Data | Expected Result | Actual Result
1
2

8. Additional Learning: Any additional information provided by the faculty should be written
here.

9. Conclusion: The method is called binary search because at each step we reduce the length of the table to be searched by half, dividing it into two (nearly) equal parts. Binary search can be accomplished in logarithmic time in the worst case, i.e., T(n) = Θ(log n). In the best case, when the middle element matches the search key on the first comparison, this version takes constant time.

10. Viva Questions:


• Which strategy does binary search make use of?
• What are the best-case and worst-case complexities of binary search?
• Differentiate between the recursive and the iterative approach.

11. References:
• http://interactivepython.org/runestone/static/pythonds/SortSearch/TheBinarySearch.html
• https://www.khanacademy.org/computing/computer-science/algorithms/binary-search/a/implementing-binary-search-of-an-array
• http://www.tutorialspoint.com/data_structures_algorithms/binary_search_algorithm.htm
9
Design and Analysis of Algorithms Lab

Experiment No.: 2
Implementation of 8 Queen’s Problem using backtracking.

10
Experiment No. 2
1. Aim: Implementation of 8 Queen’s Problem using backtracking.

2. Objectives:
• To provide mathematical approach for Analysis of Algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.

3. Outcomes:
• Students will be able to create and apply backtracking technique to deal with some hard
problems.

4. Hardware / Software Required: Turbo C

5. Theory:
N Queen's problem: The n-queens problem consists of placing n queens on an n x n
chessboard so that no two queens attack each other by being in the same row or in the same
column or on the same diagonal. The queens are arranged in such a way that they do not
threaten each other, according to the rules of the game of chess. A queen on any square can reach every other square located on the same horizontal, vertical, or diagonal line. So, there can be at most one queen on each horizontal line, at most one queen on
each vertical line, and at most one queen at each of the diagonal lines. This problem can be
solved by the backtracking technique. The concept behind backtracking algorithm used to
solve this problem is to successively place the queens in columns. When it is impossible to
place a queen in a column, the algorithm backtracks and adjusts a preceding queen.

6. Algorithm

1 Algorithm NQueens (k,n)


2 // Using backtracking, this procedure prints all
3 // possible placements of n queens on an n x n
4 // chessboard so that they are non-attacking.
5{
6 for i: =1 to n do
7{
8 if Place (k,i) then
9{
10 x[k]:=i;
11 if (k = n) then write (x[1 : n]);
12 else NQueens(k+1, n);
13 }
14 }
15}
1 Algorithm Place (k,i)
2 // Returns true if a queen can be placed in kth row and
3 // ith column. Otherwise it returns false. x [ ] is a
4 // global array whose first (k-1) values have been set.
5 // Abs(r) returns the absolute value of r.
6{
7 for j := 1 to k-1 do
8 if ((x[j] = i) // Two in the same column
9 or (Abs(x[j] - i) = Abs (j - k)))
10 // or in the same diagonal
11 then return false;
12 return true;
13}
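
The following C program is one possible realization of the above pseudocode for N = 8 (the board size macro and the solution counter are illustrative choices); it prints each solution as the column position of the queen in every row.

#include <stdio.h>
#include <stdlib.h>

#define N 8          /* board size; 8 for the 8-queens problem (assumption) */

int x[N + 1];        /* x[k] = column of the queen placed in row k (1-based) */
int count = 0;       /* number of solutions found */

/* Returns 1 if a queen can be placed in row k, column i. */
int place(int k, int i) {
    for (int j = 1; j <= k - 1; j++)
        if (x[j] == i || abs(x[j] - i) == abs(j - k))   /* same column or diagonal */
            return 0;
    return 1;
}

/* Places queens in rows k..N by backtracking and prints every solution. */
void nqueens(int k) {
    for (int i = 1; i <= N; i++) {
        if (place(k, i)) {
            x[k] = i;
            if (k == N) {
                count++;
                printf("Solution %d:", count);
                for (int j = 1; j <= N; j++) printf(" %d", x[j]);
                printf("\n");
            } else {
                nqueens(k + 1);
            }
        }
    }
}

int main(void) {
    nqueens(1);
    printf("Total solutions: %d\n", count);   /* 92 for N = 8 */
    return 0;
}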

7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.

8. Additional Learning: Any additional information provided by the faculty should be written
here.

9. Conclusion:
Backtracking is a refinement of the brute force approach, which systematically searches for a solution to a problem among all available options. This technique is characterized by the ability to undo ("backtrack") when a partial solution is found to be invalid.

Complexity Analysis: The set of all candidate solutions of the n-queens problem has size n!, and the bounding function takes a linear amount of time to evaluate; therefore the running time of the n-queens problem is O(n!).

10. Viva Questions:

• Which strategy does the N-queens problem use to find a solution?
• How many solutions exist for the 8-queens problem?
• Draw a 4x4 board and place 4 queens using backtracking, step by step.

11. References:
• http://www.geeksforgeeks.org/backtracking-set-3-n-queen-problem/
• http://algorithms.tutorialhorizon.com/backtracking-n-queens-problem/
• https://developers.google.com/optimization/puzzles/queens

12
Design and Analysis of Algorithms Lab

Experiment No.: 3
Implementation of Travelling Salesman Problem– Branch
and Bound Approach.

13
Experiment No. 3
1. Aim: Implementation of Travelling Salesman Problem– Branch and Bound Approach.

2. Objectives:
• To provide mathematical approach for Analysis of Algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.

3. Outcomes:
• Students will be able to apply branch and bound technique to deal with some hard
problems like TSP.

4. Hardware / Software Required: Turbo C

5. Theory:
Travelling Salesperson problem: Given n cities, a salesperson starts at a specified city (the source), visits each of the other n-1 cities exactly once, and returns to the city from which he started. The objective is to find a route through the cities that minimizes the cost and thereby maximizes the profit. Let G = (V, E) be a directed graph with edge costs cij, defined such that cij > 0 for all i and j, and cij = ∞ if (i, j) does not belong to E. Let |V| = n and assume n > 1. A tour of G is a directed simple cycle that includes every vertex in V. The cost of a tour is the sum of the costs of the edges on the tour. The travelling salesperson problem is to find a tour of minimum cost.

The Branch and Bound approach works by preparing cost matrices step by step. At each step the cost matrix is calculated, and from the initial reduction we obtain an estimate of the minimum possible cost of the tour. The cost computed in the early stages is not the exact cost, but this approximation indicates, at each step, which node should be expanded next and which should not, in terms of the cost of expanding a particular node.

6. Algorithm

Branch and Bound:


The Branch and Bound strategy divides a problem to be solved into a number of sub-problems. It is a system for solving a sequence of sub-problems, each of which may have multiple possible solutions and where the solution chosen for one sub-problem may affect the possible solutions of later sub-problems.

A high-level outline of the basic TSP routine (exhaustive search), which branch and bound improves upon, is:

Algorithm TSP (start city, next city, path)
// Purpose: To find the solution of the TSP problem using exhaustive search
// Input: The TSP problem and the distance between the cities
// Output: The minimum distance path covered for travelling along all the cities
Begin
Step 1: Check for disconnection between the current city and the next city.
Step 2: Check whether the travelling sales person has visited all the cities.
Step 3: Find the next city to be visited.
Step 4: Find the minimum solution path and terminate.
End

The branch and bound procedure itself is:

Step 1: Choose a start node.


Step 2: Set bound to a very large value, let’s say infinity.
Step 3: Choose the cheapest arc between the current and unvisited node and add the
distance to the current distance and repeat while the current distance is less than the
bound.
Step 4: If current distance is less than bound, then we are done

Step 5: Add up the distance and bound will be equal to the current distance.

Step 6: Repeat step 5 until all the arcs have been covered.
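
A simplified C sketch is given below. It uses the cost of the partial tour as the bound (rather than the reduced cost matrix discussed in the theory), pruning any branch whose partial cost already reaches the best complete tour found so far; the 4-city cost matrix is assumed test data.

#include <stdio.h>

#define N 4                        /* number of cities (illustrative assumption) */
#define INF 999999

/* Illustrative symmetric cost matrix; replace with the input instance. */
int cost[N][N] = {
    {  0, 10, 15, 20 },
    { 10,  0, 35, 25 },
    { 15, 35,  0, 30 },
    { 20, 25, 30,  0 }
};

int visited[N];
int bestCost = INF;

/* Explore tours city by city; prune a branch as soon as its partial cost
   already reaches the best complete tour found so far (the bound). */
void tsp(int current, int level, int currentCost) {
    if (currentCost >= bestCost)          /* bound: cannot improve, backtrack */
        return;
    if (level == N) {                     /* all cities placed: close the tour */
        if (cost[current][0] > 0 && currentCost + cost[current][0] < bestCost)
            bestCost = currentCost + cost[current][0];
        return;
    }
    for (int next = 0; next < N; next++) {
        if (!visited[next] && cost[current][next] > 0) {
            visited[next] = 1;
            tsp(next, level + 1, currentCost + cost[current][next]);
            visited[next] = 0;            /* backtrack */
        }
    }
}

int main(void) {
    visited[0] = 1;                       /* city 0 is the start (root) city */
    tsp(0, 1, 0);
    printf("Minimum tour cost: %d\n", bestCost);
    return 0;
}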

7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.

8. Additional Learning: Any additional information provided by the faculty should be written
here.

9. Conclusion:
We are actually creating all the possible extensions of E-nodes in terms of tree nodes. Which
is nothing but a permutation. Suppose we have N cities, then we need to generate all the
permutations of the (N-1) cities, excluding the root city. Hence the time complexity for
generating the permutation is O((n-1)!), which is equal to O(2^(n-1)).

Hence the final time complexity of the algorithm can be O (n^2 * 2^n).

10. Viva Questions:


• Define Optimal Solution.
• Explain Travelling Sales Person Problem.
• What is the time complexity of Travelling Sales Person Problem?

11. References:
• https://iq.opengenus.org/travelling-salesman-branch-bound/
• https://www.geeksforgeeks.org/traveling-salesman-problem-using-branch-and-bound-2/

15
Design and Analysis of Algorithms Lab

Experiment No.: 4
Implementation of Huffman Coding Algorithm using
Greedy method.

16
Experiment No. 4
1. Aim: Implementation of Huffman Coding Algorithm using Greedy method.

2. Objectives:
• To provide mathematical approach for Analysis of Algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.

3. Outcomes:
• Students will be able to create and apply the efficient algorithms for the effective problem
solving with the help of different strategies like greedy method.

4. Hardware / Software Required: Turbo C

5. Theory:
Huffman coding is a lossless data compression algorithm. The idea is to assign variable-length codes to input characters; the lengths of the assigned codes are based on the frequencies of the corresponding characters. The most frequent character gets the smallest code and the least frequent character gets the largest code. The variable-length codes assigned to input characters are prefix codes, meaning the codes (bit sequences) are assigned in such a way that the code assigned to one character is never a prefix of the code assigned to any other character. This is how Huffman coding makes sure that there is no ambiguity when decoding the generated bit stream. Huffman's greedy algorithm uses a table giving how often each character occurs (i.e., its frequency) to build up an optimal way of representing each character as a binary string.

It is important for an encoding scheme to be unambiguous. Since variable-length encodings


are susceptible to ambiguity, care must be taken to generate a scheme where ambiguity is
avoided. Huffman coding uses a greedy algorithm to build a prefix tree that optimizes the
encoding scheme so that the most frequently used symbols have the shortest encoding. The
prefix tree describing the encoding ensures that the code for any particular symbol is never a
prefix of the bit string representing any other symbol. To determine the binary assignment for
a symbol, make the leaves of the tree correspond to the symbols, and the assignment will be
the path it takes to get from the root of the tree to that leaf.

The Huffman Coding algorithm takes in information about the frequencies or probabilities of
a particular symbol occurring. It begins to build the prefix tree from the bottom up, starting
with the two least probable symbols in the list. It takes those symbols and forms a sub tree
containing them, and then removes the individual symbols from the list. The algorithm sums
the probabilities of elements in a sub tree
and adds the sub tree and its probability to the list. Next, the algorithm searches the list and
selects the two symbols or sub trees with the smallest probabilities. It uses those to make a
new sub tree, removes the original sub trees/symbols from the list, and then adds the new sub
tree and its combined probability to the list. This repeats until there is one tree and all
elements have been added.

6. Algorithm

1. n = |C|
2. Q=C
3. for i = 1 to n - 1
4. allocate a new node z
5. z. left = x = EXTRACT-MIN(Q)
6. z. right = y = EXTRACT-MIN(Q)
7. z. freq = x.freq + y.freq
8. INSERT(Q,z)
9. return EXTRACT-MIN(Q)
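
A compact C sketch of the greedy construction is given below, using a small array-based min-heap as the priority queue; the symbol set and frequencies in main are assumed test data.

#include <stdio.h>
#include <stdlib.h>

/* Node of the Huffman tree. */
struct Node {
    char ch;
    int freq;
    struct Node *left, *right;
};

struct Node *newNode(char ch, int freq, struct Node *l, struct Node *r) {
    struct Node *n = malloc(sizeof(struct Node));
    n->ch = ch; n->freq = freq; n->left = l; n->right = r;
    return n;
}

/* A tiny array-based min-priority queue ordered by frequency. */
struct Node *heap[100];
int heapSize = 0;

void push(struct Node *n) {
    int i = heapSize++;
    heap[i] = n;
    while (i > 0 && heap[(i - 1) / 2]->freq > heap[i]->freq) {   /* sift up */
        struct Node *t = heap[i]; heap[i] = heap[(i - 1) / 2]; heap[(i - 1) / 2] = t;
        i = (i - 1) / 2;
    }
}

struct Node *pop(void) {
    struct Node *top = heap[0];
    heap[0] = heap[--heapSize];
    int i = 0;
    for (;;) {                                                   /* sift down */
        int s = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < heapSize && heap[l]->freq < heap[s]->freq) s = l;
        if (r < heapSize && heap[r]->freq < heap[s]->freq) s = r;
        if (s == i) break;
        struct Node *t = heap[i]; heap[i] = heap[s]; heap[s] = t;
        i = s;
    }
    return top;
}

/* Print the code of every leaf by walking the tree (0 = left, 1 = right). */
void printCodes(struct Node *root, char *code, int depth) {
    if (!root->left && !root->right) {
        code[depth] = '\0';
        printf("%c : %s\n", root->ch, code);
        return;
    }
    code[depth] = '0'; printCodes(root->left, code, depth + 1);
    code[depth] = '1'; printCodes(root->right, code, depth + 1);
}

int main(void) {
    /* Illustrative symbol frequencies (assumed test data). */
    char symbols[] = { 'a', 'b', 'c', 'd', 'e', 'f' };
    int freqs[]    = {  45,  13,  12,  16,   9,   5 };
    int n = 6;
    char code[64];

    for (int i = 0; i < n; i++)
        push(newNode(symbols[i], freqs[i], NULL, NULL));

    /* Greedy step: repeatedly merge the two least frequent trees. */
    while (heapSize > 1) {
        struct Node *x = pop();
        struct Node *y = pop();
        push(newNode('$', x->freq + y->freq, x, y));
    }
    printCodes(pop(), code, 0);
    return 0;
}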

7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.

8. Additional Learning: Any additional information provided by the faculty should be written
here.

9. Conclusion:
Huffman coding is a technique used to compress files for transmission. Time complexity is
O(n log n) where n is the number of unique characters.

10. Viva Questions:


• Explain Huffman coding and its usage?
• What is the time complexity of Huffman Coding?

11. References:
• http://www.personal.kent.edu/~rmuhamma/Algorithms/ Huffman Coding.htm
• http://math.mit.edu/~rothvoss/18.304.3PM/Presentations/1-Melissa.pdf
• https://brilliant.org/wiki/huffman-encoding/
• https://www.geeksforgeeks.org/greedy-algorithms-set-3-huffman-coding/

18
Design and Analysis of Algorithms Lab

Experiment No.: 5
Implementation of Knapsack Problem using Dynamic
Programming.

19
Experiment No. 5
1. Aim: Implementation of Knapsack Problem using Dynamic Programming.

2. Objectives:
• To provide mathematical approach for Analysis of Algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.

3. Outcomes:
• Students will be able to create and apply the efficient algorithms for the effective
problem solving with the help of different strategies like Dynamic Programming.

4. Hardware / Software Required: Turbo C

5. Theory:
Given n objects and a knapsack or bag. Object i has a weight wi and the knapsack has a capacity m. If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed into the knapsack, then a profit of pixi is earned. The objective is to obtain a filling of the knapsack that maximizes the total profit earned. Since the knapsack capacity is m, we require the total weight of all chosen objects to be at most m.
Dynamic programming approach: To design a dynamic programming algorithm, derive a
recurrence relation that expresses a solution to an instance of the knapsack problem in terms
of solutions to its smaller sub instances. Consider an instance defined by the first i items,
1<=i<=n, with weights w1,…..,wi, values v1,……vi and knapsack capacity j, 1<=j<=W. Let
V [i,j] be the value of an optimal solution to this instance. Divide all the subsets in to two
categories: those that include the ith item and those that do not.
1) Among the subsets that do not include the ith item, the value of an optimal subset is, V [i-
1,j].
2) Among the subsets that do include the ith item, an optimal subset is made up of this item and an optimal subset of the first i-1 items that fits into the knapsack of capacity j-wi. The value of such an optimal subset is vi + V[i-1, j-wi]. Thus, the value of an optimal solution among all feasible subsets of the first i items is the maximum of these two values.
The goal is to find V[n, W], the maximal value of a subset of the n given items that fits into the knapsack of capacity W, and an optimal subset itself.

6. Algorithm
1 Algorithm 0/1DKnapsack(v, w, n, W)
2 // Input: set of items with weight wi and value vi; maximum capacity of knapsack W.
3 // Output: maximum value of a subset with weight at most W.
4 {
5   for w := 0 to W do
6     V[0, w] := 0;
7   for i := 1 to n do
8     for w := 0 to W do
9       if (w[i] <= w) then
10        V[i, w] := max{V[i-1, w], v[i] + V[i-1, w-w[i]]};
11      else
12        V[i, w] := V[i-1, w];
13  return V[n, W];
14 }
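
A C sketch of the bottom-up table computation is shown below (MAXN, MAXW and the instance in main are illustrative assumptions, not prescribed data).

#include <stdio.h>

#define MAXN 20
#define MAXW 100

int max(int a, int b) { return a > b ? a : b; }

/* Bottom-up dynamic programming table V[i][w]: best value using the
   first i items with capacity w, exactly as in the recurrence above. */
int knapsack(int v[], int w[], int n, int W) {
    static int V[MAXN + 1][MAXW + 1];
    for (int cap = 0; cap <= W; cap++)
        V[0][cap] = 0;                            /* no items -> value 0 */
    for (int i = 1; i <= n; i++) {
        for (int cap = 0; cap <= W; cap++) {
            if (w[i - 1] <= cap)
                V[i][cap] = max(V[i - 1][cap],
                                v[i - 1] + V[i - 1][cap - w[i - 1]]);
            else
                V[i][cap] = V[i - 1][cap];        /* item i does not fit */
        }
    }
    return V[n][W];
}

int main(void) {
    /* Illustrative instance (assumed test data). */
    int values[]  = { 60, 100, 120 };
    int weights[] = { 10, 20, 30 };
    int n = 3, W = 50;
    printf("Maximum value = %d\n", knapsack(values, weights, n, W));  /* 220 */
    return 0;
}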

7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.

8. Additional Learning: Any additional information provided by the faculty should be written
here.

9. Conclusion:
The time efficiency and space efficiency of the 0/1 Knapsack algorithm are Θ(nW).

10. Viva Questions:


• Define knapsack problem.
• Define principle of optimality.
• What is the optimal solution for knapsack problem?
• What is the time complexity of knapsack problem?

11. References:
• http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Dynamic/knapsackdyn.htm
• http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Greedy/knapscakFrac.htm
• http://www.geeksforgeeks.org/dynamic-programming-set-10-0-1-knapsack-problem/

21
Design and Analysis of Algorithms Lab

Experiment No.: 6
Implementation of Single Source Shortest Path Algorithm
(Dijkstra).

22
Experiment No. 6
1. Aim: Implementation of Single Source Shortest Path Algorithm (Dijkstra).

2. Objectives:
• To provide mathematical approach for Analysis of Algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.

3. Outcomes:
• Students will be able to create and apply the efficient algorithms for the effective
problem solving with the help of different strategies like Greedy Method.

4. Hardware / Software Required: Turbo C

5. Theory:
Single Source Shortest Paths Problem: For a given vertex called the source in a
weighted connected graph, find the shortest paths to all its other vertices. Dijkstra’s algorithm
is the best-known algorithm for the single source shortest paths problem. This algorithm is
applicable to graphs with nonnegative weights only and finds the shortest paths to a graph’s
vertices in order of their distance from a given source. It finds the shortest path from the
source to a vertex nearest to it, then to a second nearest, and so on. It is applicable to both
undirected and directed graphs.

6. Algorithm:

DIJKSTRA (G, w, s)
1. INITIALIZE SINGLE-SOURCE (G, s)
2. S ← { } // S will ultimately contain the vertices whose final shortest-path weights from s are determined
3. Initialize priority queue Q i.e., Q ← V[G]
4. while priority queue Q is not empty do
5. u ← EXTRACT_MIN(Q) // Pull out new vertex
6. S ← S ∪ {u} // Perform relaxation for each vertex v adjacent to u
7. for each vertex v in Adj[u] do
8. Relax (u, v, w)
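
A C sketch is given below. Instead of a heap-based priority queue it uses a simple linear scan to pick the minimum-distance vertex, so this variant runs in O(V^2) time; the adjacency matrix in main is assumed test data.

#include <stdio.h>

#define V 5
#define INF 999999

/* Dijkstra's algorithm on an adjacency matrix (0 means no edge). */
void dijkstra(int graph[V][V], int src, int dist[V]) {
    int visited[V] = { 0 };
    for (int i = 0; i < V; i++)
        dist[i] = INF;
    dist[src] = 0;

    for (int count = 0; count < V - 1; count++) {
        /* EXTRACT-MIN: pick the unvisited vertex with smallest distance. */
        int u = -1;
        for (int i = 0; i < V; i++)
            if (!visited[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        visited[u] = 1;

        /* RELAX every edge (u, v). */
        for (int v = 0; v < V; v++)
            if (graph[u][v] && !visited[v] && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
}

int main(void) {
    /* Illustrative graph with non-negative weights (assumed test data). */
    int graph[V][V] = {
        { 0, 10, 0, 5, 0 },
        { 0, 0, 1, 2, 0 },
        { 0, 0, 0, 0, 4 },
        { 0, 3, 9, 0, 2 },
        { 7, 0, 6, 0, 0 }
    };
    int dist[V];
    dijkstra(graph, 0, dist);
    for (int i = 0; i < V; i++)
        printf("Distance from 0 to %d = %d\n", i, dist[i]);
    return 0;
}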

7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.

8. Additional Learning: Any additional information provided by the faculty should be written
here.

9. Conclusion:
Time Complexity: O (E Log V) where, E is the number of edges and V is the number of
vertices.

Space Complexity: O(V)


10. Viva Questions:
• What is meant by Shortest Path?
• What is time and space complexity of algorithm?
• Compare Dijkstra's and Bellman-Ford algorithms.

11. References:
• https://www.geeksforgeeks.org/dijkstras-shortest-path-algorithm-greedy-algo-7/
• https://www.javatpoint.com/dijkstras-algorithm

24
Design and Analysis of Algorithms Lab

Experiment No.: 7
Implementation of Single Source Shortest Path Algorithm
(Bellman-Ford).

25
Experiment No. 7
1. Aim: Implementation of Single Source Shortest Path Algorithm (Bellman-Ford).

2. Objectives:
• To provide mathematical approach for Analysis of Algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.

3. Outcomes:
• Students will be able to create and apply the efficient algorithms for the effective
problem solving with the help of different strategies like Dynamic Programming.

4. Hardware / Software Required: Turbo C

5. Theory:
The Bellman-Ford algorithm is a graph search algorithm that finds the shortest path
between a given source vertex and all other vertices in the graph. This algorithm can be used
on both weighted and unweighted graphs. Like Dijkstra's shortest path algorithm, the
Bellman-Ford algorithm is guaranteed to find the shortest path in a graph. Though it is slower
than Dijkstra's algorithm, Bellman-Ford is capable of handling graphs that contain negative
edge weights, so it is more versatile. Going around the negative cycle an infinite number of
times would continue to decrease the cost of the path (even though the path length is
increasing). Because of this, Bellman-Ford can also detect negative cycles, which is a useful feature.
Bellman-Ford algorithm returns a Boolean value indicating whether or not there is a
negative-weight cycle that is reachable from the source. If there is such a cycle, the algorithm
indicates that no solution exists. If there is no such cycle, the algorithm produces the shortest
paths and their weights. The algorithm relaxes edges, progressively decreasing an estimate of the weight of a shortest path from the source to each vertex until it achieves the actual
shortest-path weight. The algorithm returns TRUE if and only if the graph contains no
negative-weight cycles that are reachable from the source.

6. Algorithm:
BELLMAN-FORD (G,w,s)

1. INITIALIZE-SINGLE-SOURCE (G, s)

2. for i = 1 to |V| - 1

3. for each edge (u, v) ∈ G

4. RELAX (u, v, w)
5. for each edge (u, v) ∈ G
6. if v.d > u.d + w(u, v)
7. return FALSE
8. return TRUE

INITIALIZE-SINGLE-SOURCE (G, s)
1. for each vertex v ∈ G.V
2. v.d = ∞
3. v.pi = NIL
4. s.d = 0

RELAX (u, v, w)
1. if v.d > u.d + w(u, v)
2. v.d = u.d + w(u, v)
3. v.pi = u
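
A C sketch of the edge-list implementation is given below (the edge list in main, containing one negative edge and no negative cycle, is assumed test data).

#include <stdio.h>

#define INF 999999

struct Edge { int u, v, w; };

/* Bellman-Ford: relax every edge |V|-1 times, then check for a
   negative-weight cycle reachable from the source. Returns 1 (TRUE)
   if no such cycle exists, 0 (FALSE) otherwise. */
int bellmanFord(struct Edge edges[], int E, int Vn, int src, int dist[]) {
    for (int i = 0; i < Vn; i++)
        dist[i] = INF;
    dist[src] = 0;

    for (int i = 1; i <= Vn - 1; i++)
        for (int j = 0; j < E; j++)            /* RELAX(u, v, w) */
            if (dist[edges[j].u] != INF &&
                dist[edges[j].u] + edges[j].w < dist[edges[j].v])
                dist[edges[j].v] = dist[edges[j].u] + edges[j].w;

    for (int j = 0; j < E; j++)                /* negative-cycle check */
        if (dist[edges[j].u] != INF &&
            dist[edges[j].u] + edges[j].w < dist[edges[j].v])
            return 0;
    return 1;
}

int main(void) {
    /* Illustrative graph (assumed test data). */
    struct Edge edges[] = {
        {0, 1, 4}, {0, 2, 5}, {1, 3, 7}, {2, 1, -2}, {3, 4, 3}, {2, 4, 9}
    };
    int E = 6, Vn = 5, dist[5];
    if (bellmanFord(edges, E, Vn, 0, dist)) {
        for (int i = 0; i < Vn; i++)
            printf("Distance from 0 to %d = %d\n", i, dist[i]);
    } else {
        printf("Graph contains a negative-weight cycle\n");
    }
    return 0;
}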

7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.

8. Additional Learning: Any additional information provided by the faculty should be written
here.

9. Conclusion:
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single
source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's
algorithm for the same problem, but more versatile, as it is capable of handling graphs in
which some of the edge weights are negative numbers.
Time Complexity: Since we relax all the E edges V-1 times, the time complexity is O(V·E).

10. Viva Questions:


• What is meant by Shortest Path?
• What is time and space complexity of algorithm?
• Compare Dijkstra’s and Bellman-Ford algorithm.

11. References:
• http://www.personal.kent.edu/~rmuhamma/Algorithms/Single Source Shortest Path Algorithm/Bellman-Ford.htm
• http://quiz.geeksforgeeks.org/Single Source Shortest Path Algorithm/Bellman-Ford
• http://interactivepython.org/runestone/static/pythonds/Single Source Shortest Path Algorithm/Bellman-Ford.html

27
Design and Analysis of Algorithms Lab

Experiment No.: 8
Implementation of Prims and Kruskal’s minimum
spanning tree algorithms

28
Experiment No. 8
1. Aim: Implementation of Prims and Kruskal’s minimum spanning tree algorithms
2. Objectives:
• To provide mathematical approach for Analysis of Algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.
3. Outcomes:
• Students will be able to create and apply the efficient algorithms for the effective
problem solving with the help of different strategies like Greedy Method.

4. Hardware / Software Required: Turbo C

5. Theory:
Prims minimum spanning tree algorithms
Prim’s algorithm finds the minimum spanning tree for a weighted connected graph G=
(V, E) to get an acyclic sub graph with |V|-1 edges for which the sum of edge weights is the
smallest. Consequently, the algorithm constructs the minimum spanning tree as expanding sub-
trees. The initial sub tree in such a sequence consists of a single vertex selected arbitrarily from
the set V of the graph’s vertices. On each iteration, expand the current tree in the greedy manner
by simply attaching to it the nearest vertex not in that tree. The algorithm stops after all the
graph’s vertices have been included in the tree being constructed.

6. Algorithm:

1 Algorithm Prim(E, cost, n, t)
2 // E is the set of edges in G. cost[1:n, 1:n] is the cost
3 // adjacency matrix of an n-vertex graph such that cost[i,j] is
4 // either a positive real number or ∞ if no edge (i,j) exists.
5 // A minimum spanning tree is computed and stored as a set of
6 // edges in the array t[1:n-1, 1:2]. (t[i,1], t[i,2]) is an edge in
7 // the minimum-cost spanning tree. The final cost is returned.
8 {
9   Let (k,l) be an edge of minimum cost in E;
10  mincost := cost[k,l];
11  t[1,1] := k; t[1,2] := l;
12  for i := 1 to n do // Initialize near.
13    if (cost[i,l] < cost[i,k]) then near[i] := l;
14    else near[i] := k;
15  near[k] := near[l] := 0;
16  for i := 2 to n-1 do
17  { // Find n-2 additional edges for t.
18    Let j be an index such that near[j] ≠ 0 and
19    cost[j, near[j]] is minimum;
20    t[i,1] := j; t[i,2] := near[j];
21    mincost := mincost + cost[j, near[j]];
22    near[j] := 0;
23    for k := 1 to n do // Update near[].
24      if ((near[k] ≠ 0) and (cost[k, near[k]] > cost[k,j]))
25        then near[k] := j;
26  }
27  return mincost;
28 }
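
A C sketch of Prim's algorithm on a cost adjacency matrix is given below (the 5-vertex matrix is assumed test data); it follows the same idea of a near[] array that always attaches the cheapest outside vertex to the growing tree.

#include <stdio.h>

#define V 5
#define INF 999999

/* Prim's algorithm: grow the tree one vertex at a time, always adding the
   cheapest edge that connects a vertex not yet in the tree. */
int prim(int cost[V][V]) {
    int inTree[V] = { 0 };
    int near[V];                 /* near[v] = cheapest edge weight to the tree */
    int mincost = 0;

    inTree[0] = 1;               /* start the tree from vertex 0 */
    for (int v = 0; v < V; v++)
        near[v] = cost[0][v];

    for (int e = 1; e <= V - 1; e++) {
        int j = -1;
        for (int v = 0; v < V; v++)           /* nearest vertex not in tree */
            if (!inTree[v] && (j == -1 || near[v] < near[j]))
                j = v;
        mincost += near[j];
        inTree[j] = 1;
        for (int v = 0; v < V; v++)           /* update near[] */
            if (!inTree[v] && cost[j][v] < near[v])
                near[v] = cost[j][v];
    }
    return mincost;
}

int main(void) {
    /* Illustrative symmetric cost matrix (INF = no edge; assumed test data). */
    int cost[V][V] = {
        { INF, 2, INF, 6, INF },
        { 2, INF, 3, 8, 5 },
        { INF, 3, INF, INF, 7 },
        { 6, 8, INF, INF, 9 },
        { INF, 5, 7, 9, INF }
    };
    printf("Cost of minimum spanning tree = %d\n", prim(cost));   /* 16 */
    return 0;
}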

Kruskal’s minimum spanning tree algorithms

Kruskal’s algorithm finds the minimum spanning tree for a weighted connected graph
G=(V,E) to get an acyclic sub graph with |V|-1 edges for which the sum of edge weights is the
smallest. Consequently, the algorithm constructs the minimum spanning tree as an expanding
sequence of sub graphs, which are always acyclic but are not necessarily connected on the
intermediate stages of algorithm. The algorithm begins by sorting the graph’s edges in non-
decreasing order of their weights. Then starting with the empty sub graph, it scans the sorted
list adding the next edge on the list to the current sub graph if such an inclusion does not
create a cycle and simply skipping the edge otherwise.

Algorithm:

1 Algorithm Kruskal(E, cost, n, t)
2 // E is the set of edges in G. G has n vertices. cost[u,v] is the
3 // cost of edge (u,v). t is the set of edges in the minimum-cost
4 // spanning tree. The final cost is returned.
5 {
6   Construct a heap out of the edge costs using Heapify;
7   for i := 1 to n do parent[i] := -1;
8   // Each vertex is in a different set.
9   i := 0; mincost := 0.0;
10  while ((i < n-1) and (heap not empty)) do
11  {
12    Delete a minimum cost edge (u,v) from the heap
13    and reheapify using Adjust;
14    j := Find(u); k := Find(v);
15    if (j ≠ k) then
16    {
17      i := i + 1;
18      t[i,1] := u; t[i,2] := v;
19      mincost := mincost + cost[u,v];
20      Union(j,k);
21    }
22  }
23  if (i ≠ n-1) then write ("No spanning tree");
24  else return mincost;
25 }
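
A C sketch of Kruskal's algorithm is given below; it sorts the edge list with qsort instead of a heap and uses a simple disjoint-set (Find/Union) array, with the 4-vertex graph as assumed test data.

#include <stdio.h>
#include <stdlib.h>

struct Edge { int u, v, w; };

int parent[100];                         /* disjoint-set forest */

int find(int x) {                        /* find set representative */
    while (parent[x] != x)
        x = parent[x];
    return x;
}

int cmp(const void *a, const void *b) {  /* sort edges by weight */
    return ((struct Edge *)a)->w - ((struct Edge *)b)->w;
}

int main(void) {
    /* Illustrative undirected weighted graph (assumed test data). */
    struct Edge edges[] = {
        {0, 1, 10}, {0, 2, 6}, {0, 3, 5}, {1, 3, 15}, {2, 3, 4}
    };
    int E = 5, V = 4, mincost = 0, taken = 0;

    for (int i = 0; i < V; i++)
        parent[i] = i;
    qsort(edges, E, sizeof(struct Edge), cmp);

    /* Scan edges in non-decreasing order of weight; accept an edge only
       if its endpoints are in different sets (it creates no cycle). */
    for (int i = 0; i < E && taken < V - 1; i++) {
        int a = find(edges[i].u), b = find(edges[i].v);
        if (a != b) {
            parent[a] = b;               /* union the two sets */
            mincost += edges[i].w;
            taken++;
            printf("Edge (%d, %d) with weight %d\n",
                   edges[i].u, edges[i].v, edges[i].w);
        }
    }
    if (taken != V - 1)
        printf("No spanning tree\n");
    else
        printf("Cost of minimum spanning tree = %d\n", mincost);
    return 0;
}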

7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.

8. Additional Learning: Any additional information provided by the faculty should be written
here.

9. Conclusion:
Complexity: The time complexity of Prim’s algorithm will be in O (|E| log |V|).

Complexity: With an efficient sorting algorithm, the time efficiency of Kruskal's algorithm will be in O(|E| log |E|).

10. Viva Questions


1. What is the time complexity of Prim’s algorithm?
2. What is the time complexity of Kruskal’s algorithm?
3. Define spanning tree.
4. Define minimum cost spanning tree

11. References
• https://www.studocu.com/in/document/rajiv-gandhi-proudyogiki-vishwavidyalaya/engineering-graphics/prism-algo-just-new-here/11268343
• https://www.softwaretestinghelp.com/minimum-spanning-tree-tutorial/
• https://ccsuniversity.ac.in/bridge-library/pdf/MCA-Spanning-Tree-CODE-212.pdf

31
Design and Analysis of Algorithms Lab

Experiment No.: 9
Implementation of Approximation Algorithm for
Travelling Salesman Problem.

32
Experiment No. 9
1. Aim: Implementation of Approximation Algorithm for Travelling Salesman Problem.
2. Objectives:
• To provide mathematical approach for Analysis of Algorithms.
• To analyze strategies for solving problems not solvable in polynomial time.

3. Outcomes:
• Students will be able to understand and prove that a certain problem is NP- Complete.

4. Hardware / Software Required: Turbo C

5. Theory:
Travelling Salesman Problem – Travelling Salesman Problem is based on a real life
scenario, where a salesman from a company has to start from his own city, visit all the assigned cities exactly once, and return home by the end of the day. The exact problem statement goes like this:
"Given a set of cities and distance between every pair of cities, the problem is to find the
shortest possible route that visits every city exactly once and returns to the starting point."

There are two important things to be cleared about in this problem statement,
• Visit every city exactly once

• Cover the shortest path


The Travelling Salesman Problem can be solved with naive (brute force) and dynamic programming approaches, but both are infeasible for large inputs. In fact, there is no polynomial time solution available for this problem, as it is a known NP-Hard problem. There are, however, approximate algorithms to solve it. These approximate algorithms work only if the problem instance satisfies the Triangle-Inequality.
Triangle-Inequality: The least distant path to reach a vertex j from i is always to reach j directly
from i, rather than through some other vertex k (or vertices), i.e., dist(i, j) is always less than or equal to dist(i, k) + dist(k, j). The Triangle-Inequality holds in many practical situations.
When the cost function satisfies the triangle inequality, we can design an approximate algorithm
for TSP that returns a tour whose cost is never more than twice the cost of an optimal tour. The
idea is to use Minimum Spanning Tree (MST). Following is the MST based algorithm.

Algorithm:
1) Let 1 be the starting and ending point for salesman.

2) Construct an MST of the graph with 1 as the root using Prim's Algorithm.

3) List vertices visited in preorder walk of the constructed MST and add 1 at the end.
Let us consider the following example. The first diagram is the given graph. The second diagram
shows MST constructed with 1 as root. The preorder traversal of MST is 1-2-4-3. Adding 1 at the
end gives 1-2-4-3-1 which is the output of this algorithm.
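
A C sketch of the MST-based 2-approximation is given below (the 4-city cost matrix is assumed test data): it builds an MST rooted at the first city with Prim's algorithm, lists the vertices in a preorder walk, and closes the tour by returning to the start city.

#include <stdio.h>

#define V 4
#define INF 999999

int graph[V][V] = {                     /* illustrative complete graph */
    {  0, 10, 15, 20 },
    { 10,  0, 35, 25 },
    { 15, 35,  0, 30 },
    { 20, 25, 30,  0 }
};

int mstParent[V];                       /* MST as a parent array, root = vertex 0 */

/* Step 2: construct an MST rooted at vertex 0 using Prim's algorithm. */
void buildMST(void) {
    int key[V], inMST[V] = { 0 };
    for (int i = 0; i < V; i++) { key[i] = INF; mstParent[i] = -1; }
    key[0] = 0;
    for (int c = 0; c < V; c++) {
        int u = -1;
        for (int i = 0; i < V; i++)
            if (!inMST[i] && (u == -1 || key[i] < key[u]))
                u = i;
        inMST[u] = 1;
        for (int v = 0; v < V; v++)
            if (!inMST[v] && graph[u][v] < key[v]) {
                key[v] = graph[u][v];
                mstParent[v] = u;
            }
    }
}

/* Step 3: list vertices in a preorder walk of the MST. */
int tour[V + 1], tourLen = 0;
void preorder(int u) {
    tour[tourLen++] = u;
    for (int v = 0; v < V; v++)
        if (mstParent[v] == u)
            preorder(v);
}

int main(void) {
    buildMST();
    preorder(0);
    tour[tourLen] = 0;                  /* return to the start city */
    int cost = 0;
    printf("Approximate tour:");
    for (int i = 0; i <= V; i++) {
        printf(" %d", tour[i] + 1);     /* print 1-based city numbers */
        if (i < V) cost += graph[tour[i]][tour[i + 1]];
    }
    printf("\nTour cost = %d\n", cost);
    return 0;
}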

Now our problem is approximated, as we have tweaked the cost function/condition to the triangle inequality.

Why 2 approximate?
Following are some important points that may be taken into account:
1. The cost of best possible Travelling Salesman tour is never less than the cost of MST.
(The definition of MST says, it is a minimum cost tree that connects all vertices).

2. The total cost of full walk is at most twice the cost of MST (Every edge of MST is visited
at-most twice)

3. The output of the above algorithm is less than the cost of full walk.

6. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.

7. Additional Learning: Any additional information provided by the faculty should be written
here.

8. Conclusion:
We have designed an approximate algorithm for the Travelling Salesman Problem that
returns a tour whose cost is never more than twice the cost of an optimal tour.
The time complexity for obtaining MST from the given graph is O(V^2) where V is the
number of nodes. The worst case space complexity for the same is O(V^2). So, the overall time
complexity is O(V^2) and the worst case space complexity of this algorithm is O(V^2).

9. Viva Questions
• Define Optimal Solution.
• Explain Travelling Sales Person Problem.

• What is the time complexity of Travelling Sales Person Problem?
10. References
• Introduction to Algorithms, 3rd Edition, by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein
• http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/AproxAlgor/TSP/tsp.htm
• https://www.geeksforgeeks.org/travelling-salesman-problem-set-2-approximate-using-mst/

35
Design and Analysis of Algorithms Lab

Experiment No.: 10
Implementation of Randomized Quick sort algorithm.

36
Experiment No. 10
1. Aim: Implementation of Randomized Quick sort algorithm.
2. Objectives:
• To provide mathematical approach for Analysis of Algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.

3. Outcomes:
• Students will be able to analyze the complexities of various problems in different
domains.

4. Hardware / Software Required: Turbo C

5. Theory:
Quicksort is a divide and conquer algorithm. Quicksort first divides a large array into
two smaller sub-arrays: the low elements and the high elements. Quicksort can then
recursively sort the sub-arrays. Randomized quick sort is also similar to quick sort, but here
the pivot element is randomly chosen. The steps are:

1. Pick an element, called a pivot, from the array.


2. Reorder the array so that all elements with values less than the pivot come before the pivot,
while all elements with values greater than the pivot come after it (equal values can go either
way). After this partitioning, the pivot is in its final position. This is called the partition
operation.
3. Recursively apply the above steps to the sub-array of elements with smaller values and
separately to the sub-array of elements with greater values.

6. Algorithm:

RANDOMIZED-Quicksort(A, p, q)
1: if (p ≥ q) then
2: return
3: else
4: Choose a number, say r, uniformly and at random from the set {p,p +1,...,q}.
5: Swap A[p] and A[r].
6: j = PARTITION(A, p, q).
7: RANDOMIZED-Quicksort(A, p, j − 1).
8: RANDOMIZED-Quicksort(A, j + 1, q).
9: end if
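
A C sketch is given below; it uses the Lomuto partition with the randomly chosen element swapped to the last position (a minor variation of the pseudocode above), and the input array in main is assumed test data.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Standard Lomuto partition around A[high] as pivot. */
int partition(int A[], int low, int high) {
    int pivot = A[high], i = low - 1;
    for (int j = low; j < high; j++)
        if (A[j] <= pivot)
            swap(&A[++i], &A[j]);
    swap(&A[i + 1], &A[high]);
    return i + 1;
}

/* Randomized quicksort: move a randomly chosen element into the pivot
   position before partitioning, then recurse on both halves. */
void randomizedQuicksort(int A[], int low, int high) {
    if (low >= high)
        return;
    int r = low + rand() % (high - low + 1);   /* random index in [low, high] */
    swap(&A[r], &A[high]);
    int j = partition(A, low, high);
    randomizedQuicksort(A, low, j - 1);
    randomizedQuicksort(A, j + 1, high);
}

int main(void) {
    int A[] = { 24, 5, 17, 93, 42, 8, 61, 3 };   /* assumed test data */
    int n = sizeof(A) / sizeof(A[0]);
    srand((unsigned)time(NULL));                 /* seed the random generator */
    randomizedQuicksort(A, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);
    printf("\n");
    return 0;
}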

7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.

8. Additional Learning: Any additional information provided by the faculty should be written
here.

9. Conclusion:
The worst-case time complexity of a typical implementation of Quick Sort is O(n^2). The worst case occurs when the picked pivot is always an extreme (smallest or largest) element, which happens when the input array is already sorted or reverse sorted and either the first or the last element is picked as the pivot.
In the randomized version of Quick Sort, we impose a distribution on the input by picking the pivot element randomly. Randomized Quick Sort works well even when the array is sorted or reverse sorted, and the expected complexity is O(n log n). (Yet, there is still a small possibility that the randomly picked element is always an extreme.)

10. Viva Questions


• Define Randomized Algorithms.
• What is the average case time complexity of quick sort?
• Explain divide and conquer.
• List different ways of selecting pivot element.
11. References
• https://community.wvu.edu/~krsubramani/courses/sp12/rand/lecnotes/Quicksort.pdf
• https://geekfactorial.blogspot.com/2016/08/randomized-quick-sort-algorithm.html
• https://slideplayer.com/slide/4774028/

38
