DAA Labmanual-Updated
Lab Manual
CSBS-Second Year Semester-IV
Subject: Design and Analysis of Algorithms
Even Semester
Index
List of Experiments
Course Objective, Course Outcome & Experiment Plan
Course Objective:
1. To introduce the methods of designing and analyzing algorithms.
2. To design and implement efficient algorithms for a specified application.
3. To strengthen the ability to identify and apply the suitable algorithm for a given real-world problem.
4. To analyze the worst-case running time of algorithms and understand fundamental algorithmic problems.
Lab Outcomes: At the end of the course, students will be able to:
CO4 Implement and analyze the complexity of algorithms using Greedy strategy
Experiment Plan:
Module No. | Week No. | Experiment Name                                                              | Course Outcome | Weightage
1          | W1       | Implementation of iterative and recursive algorithms for the Binary Search. | CO1            | 10
2          | W2       | Implementation of 8 Queen's Problem using backtracking.                     | CO2            | 10
3          | W3       | Implementation of Travelling Salesman Problem - Branch and Bound Approach.  | CO3            | 10
4          | W4       | Implementation of Huffman Coding Algorithm using Greedy method.             | CO4            | 03
5          | W5       | Implementation of Knapsack Problem using Dynamic Programming.               | CO5            | 05
6          | W6       | Implementation of Single Source Shortest Path Algorithm (Dijkstra).         | CO4            | 03
Study and Evaluation Scheme
Course Code: CSL403
Course Name: Design and Analysis of Algorithms Lab
Teaching Scheme: Theory --, Practical 2, Tutorial --
Credits Assigned: Theory --, Practical 1, Tutorial --, Total 1
Term Work:
Design and Analysis of Algorithms Lab
Experiment No.: 1
Implementation of iterative and recursive algorithms for
the Binary Search.
Experiment No. 1
1. Aim: Implementation of iterative and recursive algorithms for the Binary Search.
2. Objectives:
• To provide a mathematical approach for the analysis of algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.
3. Outcomes:
• Students will be able to prove the correctness and analyze the running time of the basic
algorithms for those classic problems in various domains using divide and conquer
strategy.
5. Theory:
A binary search algorithm is a technique for finding a particular value in a sorted linear
array by ruling out half of the data at each step; it is widely, but not exclusively, used in
computer science.
A binary search finds the median, makes a comparison to determine whether the
desired value comes before or after it, and then searches the remaining half in the same
manner. Another explanation would be: search a sorted array by repeatedly dividing the
search interval in half. Begin with an interval covering the whole array. If the value of the
search key is less than the item in the middle of the interval, narrow the interval to the lower
half; otherwise, narrow it to the upper half.
6. Algorithm
Iterative Method
do until the pointers low and high meet each other
    mid = (low + high) / 2
    if (x == arr[mid])
        return mid
    else if (x > arr[mid])    // x is on the right side
        low = mid + 1
    else                      // x is on the left side
        high = mid - 1

Recursive Method
binarySearch(arr, x, low, high)
    if low > high
        return False
    else
        mid = (low + high) / 2
        if x == arr[mid]
            return mid
        else if x > arr[mid]    // x is on the right side
            return binarySearch(arr, x, mid + 1, high)
        else                    // x is on the left side
            return binarySearch(arr, x, low, mid - 1)
7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
Test Case No. | Test Case Description | Test Data | Expected Result | Actual Result
1             |                       |           |                 |
2             |                       |           |                 |
8. Additional Learning: Any additional information provided by the faculty should be written
here.
9. Conclusion: The method is called binary search because at each step we reduce the
length of the table to be searched by half, dividing it into two equal parts. Binary search
runs in logarithmic time in the worst case, i.e., T(n) = Θ(log n). In the best case the target
is found at the first midpoint, in constant time.
11. References:
• http://interactivepython.org/runestone/static/pythonds/SortSearch/TheBinarySearch.html
• https://www.khanacademy.org/computing/computer-science/algorithms/binary-search/a/implementing-binary-search-of-an-array
• http://www.tutorialspoint.com/data_structures_algorithms/binary_search_algorithm.htm
Design and Analysis of Algorithms Lab
Experiment No.: 2
Implementation of 8 Queen’s Problem using backtracking.
Experiment No. 2
1. Aim: Implementation of 8 Queen’s Problem using backtracking.
2. Objectives:
• To provide a mathematical approach for the analysis of algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.
3. Outcomes:
• Students will be able to create and apply backtracking technique to deal with some hard
problems.
5. Theory:
N Queens problem: The n-queens problem consists of placing n queens on an n x n
chessboard so that no two queens attack each other by being in the same row, in the same
column, or on the same diagonal. The queens are arranged in such a way that they do not
threaten each other, according to the rules of the game of chess. Every queen on a chessboard
square can reach the other squares that lie on the same horizontal, vertical, or diagonal
line. So there can be at most one queen on each horizontal line, at most one queen on
each vertical line, and at most one queen on each of the diagonal lines. This problem can be
solved by the backtracking technique. The concept behind the backtracking algorithm used to
solve this problem is to place the queens successively, one per column. When it is impossible to
place a queen in a column, the algorithm backtracks and adjusts a preceding queen.
6. Algorithm
7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
8. Additional Learning: Any additional information provided by the faculty should be written
here.
9. Conclusion:
Backtracking is a refinement of the brute-force approach, which systematically searches for a
solution to a problem among all available options. This technique is characterized by the
ability to undo ("backtrack") when a partial solution is found to be invalid.
Complexity Analysis: The size of the set of all possible solutions of the n-queens problem
is n!, and the bounding function takes a linear amount of time to evaluate; therefore the
running time of the n-queens problem is O(n!).
11. References:
• http://www.geeksforgeeks.org/backtracking-set-3-n-queen-problem/
• http://algorithms.tutorialhorizon.com/backtracking-n-queens-problem/
• https://developers.google.com/optimization/puzzles/queens
Design and Analysis of Algorithms Lab
Experiment No.: 3
Implementation of Travelling Salesman Problem– Branch
and Bound Approach.
Experiment No. 3
1. Aim: Implementation of Travelling Salesman Problem– Branch and Bound Approach.
2. Objectives:
• To provide a mathematical approach for the analysis of algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.
3. Outcomes:
• Students will be able to apply branch and bound technique to deal with some hard
problems like TSP.
5. Theory:
Travelling Salesperson problem: Given n cities, a salesperson starts at a specified city
(the source), visits the other n-1 cities exactly once, and returns to the city from where he
started. The objective of this problem is to find a route through the cities that minimizes the
cost and thereby maximizes the profit. Let G = (V, E) be a directed graph with edge costs cij.
The variable cij is defined such that cij > 0 for all i and j, and cij = ∞ if (i, j) does not belong
to E. Let |V| = n and assume n > 1. A tour of G is a directed simple cycle that includes every
vertex in V. The cost of a tour is the sum of the costs of the edges on the tour. The travelling
salesperson problem is to find a tour of minimum cost.
The Branch & Bound algorithm works by preparing a cost matrix at each step. From the
initial reduction we obtain a lower bound on the cost of the tour; the cost in the early stages
is not the exact cost, but an approximation that guides the search. At each step this bound
gives a strong indication of which node should be expanded next and which should not,
expressed in terms of the cost of expanding a particular node.
6. Algorithm
// Purpose: To find the solution for the travelling salesperson problem using exhaustive search with branch and bound
// Input: the TSP instance (the start city and the cities to be visited)
// Output: the minimum-distance path covering all the cities
Begin
Step 1: Check whether the next city has been visited and whether a path exists between the current city and the next city; terminate the search for a disconnection.
Step 2: Find the minimum distance between the current city and the next city.
Step 3: Travel to the next city along the minimum-cost path (start city and next city).
Step 4: Bound the search for all the cities not yet covered using the solutions of later sub-problems.
Step 5: Add up the distance and the bound will be equal to the current distance.
Step 6: Repeat step 5 until all the arcs have been covered.
End
7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
8. Additional Learning: Any additional information provided by the faculty should be written
here.
9. Conclusion:
We are actually creating all the possible extensions of the E-nodes as tree nodes, which
is nothing but a permutation. Suppose we have N cities; then we need to generate all the
permutations of the (N-1) cities, excluding the root city. Hence the time complexity for
generating the permutations is O((N-1)!). Branch and bound prunes this search; a commonly
quoted worst-case time complexity for the algorithm is O(n^2 * 2^n).
11. References:
• https://iq.opengenus.org/travelling-salesman-branch-bound/
• https://www.geeksforgeeks.org/traveling-salesman-problem-using-branch-and-bound-2/
Design and Analysis of Algorithms Lab
Experiment No.: 4
Implementation of Huffman Coding Algorithm using
Greedy method.
Experiment No. 4
1. Aim: Implementation of Huffman Coding Algorithm using Greedy method.
2. Objectives:
• To provide a mathematical approach for the analysis of algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.
3. Outcomes:
• Students will be able to create and apply the efficient algorithms for the effective problem
solving with the help of different strategies like greedy method.
5. Theory:
Huffman coding is a lossless data compression algorithm. The idea is to assign variable-
length codes to input characters; the lengths of the assigned codes are based on the frequencies
of the corresponding characters. The most frequent character gets the smallest code and the
least frequent character gets the largest code. The variable-length codes assigned to input
characters are prefix codes, meaning the codes (bit sequences) are assigned in such a way
that the code assigned to one character is not a prefix of the code assigned to any other
character. This is how Huffman coding makes sure that there is no ambiguity when decoding
the generated bit stream. Huffman's greedy algorithm uses a table giving how often each
character occurs (i.e., its frequency) to build up an optimal way of representing each
character as a binary string.
The Huffman Coding algorithm takes in information about the frequencies or probabilities of
a particular symbol occurring. It begins to build the prefix tree from the bottom up, starting
with the two least probable symbols in the list. It takes those symbols and forms a sub tree
containing them, and then removes the individual symbols from the list. The algorithm sums
the probabilities of elements in a sub tree
and adds the sub tree and its probability to the list. Next, the algorithm searches the list and
selects the two symbols or sub trees with the smallest probabilities. It uses those to make a
new sub tree, removes the original sub trees/symbols from the list, and then adds the new sub
tree and its combined probability to the list. This repeats until there is one tree and all
elements have been added.
6. Algorithm
1. n = |C|
2. Q=C
3. for i = 1 to n - 1
4. allocate a new node z
5. z. left = x = EXTRACT-MIN(Q)
6. z. right = y = EXTRACT-MIN(Q)
7. z. freq = x.freq + y.freq
8. INSERT(Q,z)
9. return EXTRACT-MIN(Q)
7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
8. Additional Learning: Any additional information provided by the faculty should be written
here.
9. Conclusion:
Huffman coding is a technique used to compress files for transmission. Time complexity is
O(n log n) where n is the number of unique characters.
11. References:
• http://www.personal.kent.edu/~rmuhamma/Algorithms/HuffmanCoding.htm
• http://math.mit.edu/~rothvoss/18.304.3PM/Presentations/1-Melissa.pdf
• https://brilliant.org/wiki/huffman-encoding/
• https://www.geeksforgeeks.org/greedy-algorithms-set-3-huffman-coding/
Design and Analysis of Algorithms Lab
Experiment No.: 5
Implementation of Knapsack Problem using Dynamic
Programming.
Experiment No. 5
1. Aim: Implementation of Knapsack Problem using Dynamic Programming.
2. Objectives:
• To provide a mathematical approach for the analysis of algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.
3. Outcomes:
• Students will be able to create and apply the efficient algorithms for the effective
problem solving with the help of different strategies like Dynamic Programming.
5. Theory:
Given n objects and a knapsack or bag, object i has a weight wi and the knapsack has
a capacity m. If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed into the knapsack, then a profit
of pixi is earned. The objective is to obtain a filling of the knapsack that maximizes the total
profit earned. Since the knapsack capacity is m, the total weight of all chosen objects is
required to be at most m.
Dynamic programming approach: To design a dynamic programming algorithm, derive a
recurrence relation that expresses a solution to an instance of the knapsack problem in terms
of solutions to its smaller sub-instances. Consider an instance defined by the first i items,
1 ≤ i ≤ n, with weights w1, ..., wi, values v1, ..., vi, and knapsack capacity j, 1 ≤ j ≤ W. Let
V[i,j] be the value of an optimal solution to this instance. Divide all the subsets into two
categories: those that include the ith item and those that do not.
1) Among the subsets that do not include the ith item, the value of an optimal subset is
V[i-1,j].
2) Among the subsets that do include the ith item, an optimal subset is made up of this item
and an optimal subset of the first i-1 items that fits into the knapsack of capacity j-wi. The
value of such an optimal subset is vi + V[i-1,j-wi]. Thus, the value of an optimal solution
among all feasible subsets of the first i items is the maximum of these two values.
The goal is to find V[n,W], the maximal value of a subset of the n given items that fits into
the knapsack of capacity W, and an optimal subset itself.
6. Algorithm
Algorithm 0/1Knapsack(v, w, n, W)
// Input: set of items with weights w[i] and values v[i]; maximum capacity of knapsack W
// Output: maximum value of a subset with weight at most W
{
    for j := 0 to W do
        V[0, j] := 0;
    for i := 1 to n do
        for j := 0 to W do
            if (w[i] <= j) then
                V[i, j] := max{V[i-1, j], v[i] + V[i-1, j - w[i]]};
            else
                V[i, j] := V[i-1, j];
    return V[n, W];
}
7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
8. Additional Learning: Any additional information provided by the faculty should be written
here.
9. Conclusion:
The time efficiency and space efficiency of the 0/1 Knapsack algorithm are both Θ(nW).
11. References:
• http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Dynamic/knapsackdyn.htm
• http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Greedy/knapscakFrac.htm
• http://www.geeksforgeeks.org/dynamic-programming-set-10-0-1-knapsack-problem/
Design and Analysis of Algorithms Lab
Experiment No.: 6
Implementation of Single Source Shortest Path Algorithm
(Dijkstra).
Experiment No. 6
1. Aim: Implementation of Single Source Shortest Path Algorithm (Dijkstra).
2. Objectives:
• To provide a mathematical approach for the analysis of algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.
3. Outcomes:
• Students will be able to create and apply the efficient algorithms for the effective
problem solving with the help of different strategies like Greedy Method.
5. Theory:
Single Source Shortest Paths Problem: For a given vertex called the source in a
weighted connected graph, find the shortest paths to all its other vertices. Dijkstra’s algorithm
is the best-known algorithm for the single source shortest paths problem. This algorithm is
applicable to graphs with nonnegative weights only and finds the shortest paths to a graph’s
vertices in order of their distance from a given source. It finds the shortest path from the
source to a vertex nearest to it, then to a second nearest, and so on. It is applicable to both
undirected and directed graphs.
6. Algorithm:
DIJKSTRA (G, w, s)
1. INITIALIZE-SINGLE-SOURCE (G, s)
2. S ← { }  // S will ultimately contain the vertices with final shortest-path weights from s
3. Initialize priority queue Q, i.e., Q ← V[G]
4. while priority queue Q is not empty do
5.     u ← EXTRACT-MIN(Q)  // Pull out a new vertex
6.     S ← S ∪ {u}
7.     for each vertex v in Adj[u] do  // Perform relaxation for each vertex v adjacent to u
8.         RELAX (u, v, w)
7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
8. Additional Learning: Any additional information provided by the faculty should be written
here.
9. Conclusion:
Time Complexity: O(E log V), where E is the number of edges and V is the number of
vertices.
11. References:
• https://www.geeksforgeeks.org/dijkstras-shortest-path-algorithm-greedy-algo-7/
• https://www.javatpoint.com/dijkstras-algorithm
Design and Analysis of Algorithms Lab
Experiment No.: 7
Implementation of Single Source Shortest Path Algorithm
(Bellman-Ford).
Experiment No. 7
1. Aim: Implementation of Single Source Shortest Path Algorithm (Bellman-Ford).
2. Objectives:
• To provide a mathematical approach for the analysis of algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.
3. Outcomes:
• Students will be able to create and apply the efficient algorithms for the effective
problem solving with the help of different strategies like Dynamic Programming.
5. Theory:
The Bellman-Ford algorithm is a graph search algorithm that finds the shortest path
between a given source vertex and all other vertices in the graph. This algorithm can be used
on both weighted and unweighted graphs. Like Dijkstra's shortest path algorithm, the
Bellman-Ford algorithm is guaranteed to find the shortest path in a graph. Though it is slower
than Dijkstra's algorithm, Bellman-Ford is capable of handling graphs that contain negative
edge weights, so it is more versatile. Going around a negative cycle an infinite number of
times would continue to decrease the cost of the path (even though the path length is
increasing). Because of this, Bellman-Ford can also detect negative cycles, which is a useful
feature.
Bellman-Ford algorithm returns a Boolean value indicating whether or not there is a
negative-weight cycle that is reachable from the source. If there is such a cycle, the algorithm
indicates that no solution exists. If there is no such cycle, the algorithm produces the shortest
paths and their weights. The algorithm relaxes edges, progressively decreasing an estimate
of the weight of a shortest path from the source to each vertex until it reaches the actual
shortest-path weight. The algorithm returns TRUE if and only if the graph contains no
negative-weight cycles that are reachable from the source.
6. Algorithm:
BELLMAN-FORD (G,w,s)
1. INITIALIZE-SINGLE-SOURCE (G, s)
2. for i = 1 to |V| - 1
3.     for each edge (u, v) ∈ G.E
4.         RELAX (u, v, w)
5. for each edge (u, v) ∈ G.E
6. if v.d > u.d + w(u, v)
7. return FALSE
8. return TRUE
INITIALIZE-SINGLE-SOURCE (G, s)
1. for each vertex v ∈ G.V
2. v.d = ∞
3. v.pi = NIL
4. s.d = 0
RELAX (u, v, w)
1. if v.d > u.d + w(u, v)
2. v.d = u.d + w(u, v)
3. v.pi = u
7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
8. Additional Learning: Any additional information provided by the faculty should be written
here.
9. Conclusion:
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single
source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra's
algorithm for the same problem, but more versatile, as it is capable of handling graphs in
which some of the edge weights are negative numbers.
Time Complexity:
Since we traverse all the E edges V-1 times, the time complexity is O(V·E).
11. References:
• http://www.personal.kent.edu/~rmuhamma/Algorithms/Single Source Shortest Path
Algorithm /Bellman-Ford.htm
• http://quiz.geeksforgeeks.org/Single Source Shortest Path Algorithm /Bellman- Ford
• http://interactivepython.org/runestone/static/pythonds/ Single Source Shortest Path
Algorithm /Bellman-Ford.html
Design and Analysis of Algorithms Lab
Experiment No.: 8
Implementation of Prim's and Kruskal's minimum spanning tree algorithms.
Experiment No. 8
1. Aim: Implementation of Prim's and Kruskal's minimum spanning tree algorithms.
2. Objectives:
• To provide a mathematical approach for the analysis of algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.
3. Outcomes:
• Students will be able to create and apply the efficient algorithms for the effective
problem solving with the help of different strategies like Greedy Method.
5. Theory:
Prim's minimum spanning tree algorithm
Prim's algorithm finds the minimum spanning tree for a weighted connected graph G =
(V, E), i.e., an acyclic subgraph with |V|-1 edges for which the sum of the edge weights is the
smallest. The algorithm constructs the minimum spanning tree as a sequence of expanding
subtrees. The initial subtree in such a sequence consists of a single vertex selected arbitrarily
from the set V of the graph's vertices. On each iteration, the current tree is expanded in the
greedy manner by simply attaching to it the nearest vertex not in that tree. The algorithm stops
after all the graph's vertices have been included in the tree being constructed.
6. Algorithm:
Algorithm Prim(E, cost, n, t)
// E is the set of edges in G. cost[1:n, 1:n] is the cost
// adjacency matrix of an n-vertex graph such that cost[i,j] is
// either a positive real number or ∞ if no edge (i,j) exists.
// A minimum spanning tree is computed and stored as a set of
// edges in the array t[1:n-1, 1:2]. (t[i,1], t[i,2]) is an edge in
// the minimum-cost spanning tree. The final cost is returned.
{
    Let (k, l) be an edge of minimum cost in E;
    mincost := cost[k, l];
    t[1,1] := k; t[1,2] := l;
    for i := 1 to n do  // Initialize near.
        if (cost[i, l] < cost[i, k]) then near[i] := l;
        else near[i] := k;
    near[k] := near[l] := 0;
    for i := 2 to n-1 do
    {   // Find n-2 additional edges for t.
        Let j be an index such that near[j] ≠ 0 and
            cost[j, near[j]] is minimum;
        t[i,1] := j; t[i,2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do  // Update near[].
            if ((near[k] ≠ 0) and (cost[k, near[k]] > cost[k, j]))
                then near[k] := j;
    }
    return mincost;
}
Kruskal’s algorithm finds the minimum spanning tree for a weighted connected graph
G=(V,E) to get an acyclic sub graph with |V|-1 edges for which the sum of edge weights is the
smallest. Consequently, the algorithm constructs the minimum spanning tree as an expanding
sequence of sub graphs, which are always acyclic but are not necessarily connected on the
intermediate stages of algorithm. The algorithm begins by sorting the graph’s edges in non-
decreasing order of their weights. Then starting with the empty sub graph, it scans the sorted
list adding the next edge on the list to the current sub graph if such an inclusion does not
create a cycle and simply skipping the edge otherwise.
Algorithm:
7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
8. Additional Learning: Any additional information provided by the faculty should be written
here.
9. Conclusion:
Complexity: The time complexity of Prim's algorithm is in O(|E| log |V|).
Complexity: With an efficient sorting algorithm, the time efficiency of Kruskal's algorithm
is in O(|E| log |E|).
11. References
• https://www.studocu.com/in/document/rajiv-gandhi-proudyogiki-vishwavidyalaya/engineering-graphics/prism-algo-just-new-here/11268343
• https://www.softwaretestinghelp.com/minimum-spanning-tree-tutorial/
• https://ccsuniversity.ac.in/bridge-library/pdf/MCA-Spanning-Tree-CODE-212.pdf
Design and Analysis of Algorithms Lab
Experiment No.: 9
Implementation of Approximation Algorithm for
Travelling Salesman Problem.
Experiment No. 9
1. Aim: Implementation of Approximation Algorithm for Travelling Salesman Problem.
2. Objectives:
• To provide a mathematical approach for the analysis of algorithms.
• To analyze strategies for solving problems not solvable in polynomial time.
3. Outcomes:
• Students will be able to understand and prove that a certain problem is NP-Complete.
5. Theory:
Travelling Salesman Problem – The Travelling Salesman Problem is based on a real-life
scenario, where a salesman from a company has to start from his own city, visit all the
assigned cities exactly once, and return home by the end of the day. The exact problem
statement goes like this:
"Given a set of cities and the distance between every pair of cities, the problem is to find the
shortest possible route that visits every city exactly once and returns to the starting point."
There are two important things to be clear about in this problem statement:
• Visit every city exactly once
• Return to the starting point
Algorithm:
1) Let 1 be the starting and ending point for the salesman.
2) Construct a minimum spanning tree (MST) of the graph with 1 as the root.
3) List the vertices visited in a preorder walk of the constructed MST and add 1 at the end.
Let us consider the following example. The first diagram is the given graph. The second diagram
shows the MST constructed with 1 as root. The preorder traversal of the MST is 1-2-4-3. Adding 1
at the end gives 1-2-4-3-1, which is the output of this algorithm.
The problem can now be approximated because the cost function is assumed to satisfy the
triangle inequality.
Why is the algorithm 2-approximate?
The following important points may be taken into account:
1. The cost of the best possible Travelling Salesman tour is never less than the cost of the MST.
(By definition, the MST is a minimum-cost tree that connects all vertices.)
2. The total cost of a full walk is at most twice the cost of the MST (every edge of the MST is
visited at most twice).
3. The output of the above algorithm costs no more than the full walk.
6. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
7. Additional Learning: Any additional information provided by the faculty should be written
here.
8. Conclusion:
We have designed an approximation algorithm for the Travelling Salesman Problem that
returns a tour whose cost is never more than twice the cost of an optimal tour.
The time complexity for obtaining MST from the given graph is O(V^2) where V is the
number of nodes. The worst case space complexity for the same is O(V^2). So, the overall time
complexity is O(V^2) and the worst case space complexity of this algorithm is O(V^2).
9. Viva Questions
• Define Optimal Solution.
• Explain Travelling Sales Person Problem.
• What is the time complexity of Travelling Sales Person Problem?
10. References
• Introduction to Algorithms 3rd Edition by Clifford Stein, Thomas H. Cormen, Charles E.
Leiserson, Ronald L. Rivest
• http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/AproxAlgor/TSP/tsp.htm
• https://www.geeksforgeeks.org/travelling-salesman-problem-set-2-approximate-using-mst/
Design and Analysis of Algorithms Lab
Experiment No.: 10
Implementation of Randomized Quick sort algorithm.
Experiment No. 10
1. Aim: Implementation of Randomized Quick sort algorithm.
2. Objectives:
• To provide a mathematical approach for the analysis of algorithms.
• To calculate time complexity and space complexity.
• To solve problems using various strategies.
3. Outcomes:
• Students will be able to analyze the complexities of various problems in different
domains.
4. Hardware / Software Required: Turbo C
5. Theory:
Quicksort is a divide and conquer algorithm. Quicksort first divides a large array into
two smaller sub-arrays: the low elements and the high elements. Quicksort can then
recursively sort the sub-arrays. Randomized quicksort is similar to ordinary quicksort, but
the pivot element is chosen uniformly at random. The steps are given below.
6. Algorithm:
RANDOMIZED-QUICKSORT(A, p, q)
1: if (p ≥ q) then
2:     return
3: else
4:     Choose a number r uniformly at random from the set {p, p+1, ..., q}.
5:     Swap A[p] and A[r].
6:     j = PARTITION(A, p, q).
7:     RANDOMIZED-QUICKSORT(A, p, j - 1).
8:     RANDOMIZED-QUICKSORT(A, j + 1, q).
9: end if
7. Output Observation/Analysis: Students should perform experiments with all different cases,
do the analysis and should write in their own language.
8. Additional Learning: Any additional information provided by the faculty should be written
here.
9. Conclusion:
The worst-case time complexity of a typical implementation of Quick Sort is O(n^2). The
worst case occurs when the picked pivot is always an extreme (smallest or largest) element,
which happens when the input array is already sorted or reverse-sorted and either the first or
the last element is picked as the pivot.
In the randomized version of Quick Sort, we impose a distribution on the input by picking
the pivot element randomly. Randomized Quick Sort works well even when the array is
sorted or reverse-sorted, and its expected complexity is O(n log n). (There is still a small
possibility that the randomly picked element is always an extreme.)