Daa - 21-22
B. TECH.
(SEM V) THEORY EXAMINATION 2021-22
DESIGN AND ANALYSIS OF ALGORITHM
Time: 3 Hours Total Marks: 100
Note: 1. Attempt all Sections. If any data is missing, then choose it suitably.
2. Any special paper specific instruction.
SECTION A
1. Attempt all questions in brief. 2 x 10 = 20
a. How do you analyze the performance of an algorithm in different cases?
Analysis of algorithms is the determination of the amount of time and space resources required to
execute an algorithm. Usually, the efficiency or running time of an algorithm is stated as a function
relating the input length to the number of steps (time complexity) or to the volume of memory used
(space complexity). Asymptotic analysis studies the behaviour of an algorithm as the size of the
input grows indefinitely. It allows us to determine the best-case, worst-case, and average-case
performance of an algorithm, but it does not provide information about the algorithm's performance
on specific inputs. Although analysis considers both the time and the memory required, the main
concern is usually the running time. Generally, we perform the following types of analysis −
Worst case − the maximum number of steps taken on any instance of size n.
Best case − the minimum number of steps taken on any instance of size n.
Average case − the average number of steps taken over all instances of size n.
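For example, linear search illustrates the three cases: the best case is 1 comparison (the key is at the
first position), the worst case is n comparisons (the key is at the last position or absent), and the
average case is about n/2 comparisons. A minimal C sketch (the function name is illustrative):

#include <stdio.h>

/* Linear search: returns the index of key in a[0..n-1], or -1 if it is absent.
   Best case: 1 comparison; worst case: n comparisons; average case: about n/2. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {7, 3, 9, 1, 5};
    printf("%d\n", linear_search(a, 5, 9));   /* prints 2 */
    return 0;
}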
[Figure: Left Rotate algorithm applied to the initial tree — diagrams omitted]
Properties of a Fibonacci Heap:
1. It can have multiple trees of equal degree, and each tree need not have 2^k nodes.
2. All the trees in the Fibonacci Heap are rooted but not ordered.
3. All the roots and siblings are stored in a separate circular doubly linked list.
4. The degree of a node is the number of its children. Node X -> degree = Number of X's
children.
5. Each node has a mark attribute that is either TRUE or FALSE. FALSE indicates that the node has
not lost any of its children since it last became a child of another node; TRUE indicates that it has
lost one child. A newly created node is marked FALSE.
6. The potential function of the Fibonacci heap is F(FH) = t[FH] + 2 * m[FH]
7. The Fibonacci Heap (FH) has some important technicalities listed below:
1. min[FH] - Pointer points to the minimum node in the Fibonacci Heap
2. n[FH] - Determines the number of nodes
3. t[FH] - Determines the number of rooted trees
4. m[FH] - Determines the number of marked nodes
5. F(FH) - Potential Function.
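For example, a Fibonacci heap consisting of t[FH] = 5 rooted trees with m[FH] = 3 marked nodes has
potential F(FH) = 5 + 2 × 3 = 11.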
The Greedy method is the simplest and most straightforward approach. It is not an algorithm but a
technique. In this approach, a decision is taken on the basis of the currently available information,
without worrying about the effect of that decision in the future.
This technique is used to determine a feasible solution, which may or may not be optimal. A feasible
solution is one that satisfies the given constraints; the optimal solution is the best and most
favourable of the feasible solutions. If more than one solution satisfies the given criteria, all of them
are feasible, whereas the optimal solution is the best solution among them.
To construct the solution, the algorithm maintains two sets: one containing the chosen items and
another containing the rejected items.
A greedy algorithm makes locally good choices in the hope that the resulting solution is feasible or
optimal.
A convex hull of a set of points is defined as the smallest convex polygon containing all the points.
In other words, it is the outer boundary of the set of points that forms a shape with no indentations or
concave portions.
The convex hull can be represented as a polygon formed by connecting the outermost points
counterclockwise or clockwise. It can also be described as the intersection of all convex sets
containing the given points.
Applications
Computational geometry: It is used in algorithms for solving problems like finding the
closest pair of points or solving linear programming problems.
Image processing: Convex hulls can be used to analyze and recognize image shapes,
particularly for object recognition or tracking.
Robotics: Convex hulls are useful for collision detection and path planning in robotics
applications.
Game development: Convex hulls are employed in physics engines for collision detection
and response between objects in games.
2. In the Floyd-Warshall algorithm, the output matrix is initially the same as the given cost matrix of
the graph. After that, the output matrix is updated by considering every vertex k as an intermediate
vertex.
3. The time complexity of this algorithm is O(V^3), where V is the number of vertices in the graph.
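A minimal C sketch of the algorithm described above (the vertex count V, the INF sentinel and the
matrix layout are assumptions of the sketch):

#include <stdio.h>

#define V 4                  /* number of vertices (assumed) */
#define INF 99999            /* stands for "no direct edge"   */

/* Floyd-Warshall: dist starts as the cost matrix and is updated by trying
   every vertex k as an intermediate vertex of every pair (i, j). */
void floyd_warshall(int dist[V][V])
{
    for (int k = 0; k < V; k++)
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] != INF && dist[k][j] != INF &&
                    dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}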
NP is the set of problems whose solutions are hard to find but easy to verify; they can be solved by a
non-deterministic machine in polynomial time.
NP-Hard-Problem:
A Problem X is NP-Hard if there is an NP-Complete problem Y, such that Y is reducible to X in
polynomial time. NP-Hard problems are as hard as NP-Complete problems. NP-Hard Problem
need not be in NP class.
Examples:
1. Hamiltonian cycle problem
2. Optimization version of the travelling salesman problem
3. Halting problem
NP-Complete Problem:
A problem X is NP-Complete if it is in NP and every problem Y in NP is reducible to X in
polynomial time. NP-Complete problems are the hardest problems in NP. A problem is NP-Complete
if it belongs to both the NP and NP-Hard classes. A non-deterministic Turing machine can solve an
NP-Complete problem in polynomial time.
Examples:
1. Boolean satisfiability (SAT)
2. Vertex cover (decision version)
SECTION B
4. Attempt any three of the following: 10 x 3 = 30
a. Solve the recurrence
i) T (n) =3T (n/4) + cn2 using recursion tree method.
ii) T (n) = n + 2T (n/2) using Iteration method. (Given T(1)=1)
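Solution sketch:
i) Recursion tree for T(n) = 3T(n/4) + cn^2: the root costs cn^2, level 1 costs 3·c(n/4)^2 = (3/16)cn^2,
level 2 costs (3/16)^2·cn^2, and so on. Summing the geometric series,
T(n) = cn^2 [1 + 3/16 + (3/16)^2 + ...] ≤ cn^2 · 1/(1 − 3/16) = (16/13)cn^2,
and the leaves contribute only Θ(n^(log_4 3)) = o(n^2), so T(n) = Θ(n^2).
ii) Iteration for T(n) = n + 2T(n/2) with T(1) = 1:
T(n) = n + 2T(n/2) = 2n + 4T(n/4) = 3n + 8T(n/8) = ... = kn + 2^k T(n/2^k).
Putting n/2^k = 1, i.e. k = log_2 n, gives T(n) = n log_2 n + n·T(1) = n log_2 n + n = Θ(n log n).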
b. What is Binomial Heap? Write down the algorithm for Decrease key
operation in Binomial Heap also write its time complexity.
A binomial heap is a collection of binomial trees that satisfies the heap property, i.e., the min-heap
property: each node has a value less than or equal to the values of its child nodes. A binomial heap is
mainly used to implement a priority queue. It is an extension of the binary heap that gives faster
merge (union) operations along with the other operations provided by a binary heap.
Decreasing a key
Now, let's move on to another operation performed on a binomial heap. Once the value of a key is
decreased, it might become smaller than its parent's key, which violates the min-heap property. If
this happens after decreasing the key, exchange the element with its parent, then its grandparent, and
so on, until the min-heap property is satisfied.
Let's understand the process of decreasing a key in a binomial heap using an example. Consider a
heap given below -
Decrease the key 45 to 7 in the above heap. After decreasing 45 to 7, the heap will be -
After decreasing the key, the min-heap property of the above heap is violated. Now, compare 7 with
its parent 30; as it is less than the parent, swap 7 with 30. After swapping, the heap will be -
Again compare the element 7 with its parent 8; it is again less than the parent, so swap the element 7
with its parent 8. After swapping, the heap will be -
Now, the min-heap property of the above heap is satisfied. So, the above heap is the final heap after
decreasing a key.
Time Complexity:
The decrease-key operation reduces the value of a key. Its time complexity is O(log N). If the
decreased key of a node is still greater than or equal to its parent's key, nothing more needs to be
done; otherwise, we traverse up the tree to fix the violated heap property.
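The question also asks for the algorithm. A minimal C sketch of the decrease-key operation (the node
layout with key, degree, parent, child and sibling pointers is an assumption; in a full implementation
the satellite data would be exchanged along with the keys):

/* Assumed binomial-heap node layout. */
struct bh_node {
    int key, degree;
    struct bh_node *parent, *child, *sibling;
};

/* BINOMIAL-HEAP-DECREASE-KEY: set node x to the smaller value k and
   bubble the key up until the min-heap property holds again.  O(log N). */
void binomial_heap_decrease_key(struct bh_node *x, int k)
{
    if (k > x->key)
        return;                      /* new key must not be larger than the current key */
    x->key = k;
    struct bh_node *y = x;
    struct bh_node *z = y->parent;
    while (z != NULL && y->key < z->key) {
        int tmp = y->key;            /* exchange keys with the parent */
        y->key = z->key;
        z->key = tmp;
        y = z;
        z = y->parent;
    }
}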
c. Write and explain the Kruskal algorithm to find the Minimum Spanning
Tree of a graph with suitable example.
A spanning tree is a sub-graph of an undirected connected graph, which includes all the vertices
of the graph with a minimum possible number of edges. If a vertex is missed, then it is not a
spanning tree. The edges may or may not have weights assigned to them.
Methods of Minimum Spanning Tree
1. Kruskal's Algorithm
2. Prim's Algorithm
Kruskal's Algorithm:
An algorithm to construct a Minimum Spanning Tree for a connected weighted graph. It is a greedy
algorithm. The greedy choice is to add the smallest-weight edge that does not cause a cycle in the
MST constructed so far.
Analysis: Where E is the number of edges in the graph and V is the number of vertices, Kruskal's
algorithm can be shown to run in O(E log E) time, or simply O(E log V) time, all with simple data
structures. These running times are equivalent because E is at most V^2, so log E = O(2 log V) = O(log V).
For Example: Find the Minimum Spanning Tree of the following graph using Kruskal's algorithm.
First we initialize the set A to the empty set and create |v| trees, one containing each vertex with
MAKE-SET procedure. Then sort the edges in E into order by non-decreasing weight.
Step 3: then (a, b) and (i, g) edges are considered, and the forest becomes
Step 4: Now, edge (h, i). Both h and i are in the same set, so it would create a cycle and this edge is
discarded. Then the edges (c, d), (b, c), (a, h), (d, e), (e, f) are considered, and the forest becomes:
Step 5: In edge (e, f), both endpoints e and f exist in the same tree, so this edge is discarded. Then
edge (b, h) is considered; it also creates a cycle, so it is discarded.
Step 7: The result is the required Minimum Spanning Tree because it contains all the 9 vertices and
(9 - 1) = 8 edges.
Both Prim's and Kruskal's algorithms are developed for finding the minimum spanning tree of a
graph. Both algorithms are popular and follow different steps to solve the same kind of problem.
Prim's algorithm selects a root vertex in the beginning and then grows the tree from vertex to
adjacent vertex. On the other hand, Kruskal's algorithm generates the minimum spanning tree
starting from the smallest-weight edge.
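A minimal C sketch of Kruskal's algorithm using a union-find (disjoint-set) structure; the edge-list
representation, fixed array size and helper names are assumptions of the sketch:

#include <stdio.h>
#include <stdlib.h>

struct edge { int u, v, w; };

static int parent[100];                 /* union-find structure (assumes at most 100 vertices) */

static int find_set(int x)              /* FIND-SET with path compression */
{
    if (parent[x] != x)
        parent[x] = find_set(parent[x]);
    return parent[x];
}

static int cmp_edge(const void *a, const void *b)
{
    return ((const struct edge *)a)->w - ((const struct edge *)b)->w;
}

/* Returns the total weight of the MST of a connected graph with n vertices and m edges. */
int kruskal(struct edge e[], int m, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        parent[i] = i;                              /* MAKE-SET for every vertex */
    qsort(e, m, sizeof(struct edge), cmp_edge);     /* sort edges by non-decreasing weight */
    for (int i = 0; i < m; i++) {
        int ru = find_set(e[i].u), rv = find_set(e[i].v);
        if (ru != rv) {                             /* edge joins two different trees */
            printf("(%d, %d) taken\n", e[i].u, e[i].v);
            total += e[i].w;
            parent[ru] = rv;                        /* UNION of the two trees */
        }                                           /* otherwise it would create a cycle */
    }
    return total;
}

Sorting the edges dominates the running time, which gives the O(E log E) = O(E log V) bound
stated above.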
d. What is N queens problem? Draw a state space tree for 4 queens problem
using backtracking.
N-Queens Problem
We have to place 4 queens q1, q2, q3 and q4 on the chessboard such that no two queens attack each
other. In such a condition, each queen must be placed on a different row, i.e., we put queen "i" on
row "i".
Now, we place queen q1 in the very first acceptable position (1, 1). Next, we put queen q2 so that
both these queens do not attack each other. We find that if we place q2 in column 1 or 2, then a
dead end is encountered. Thus the first acceptable position for q2 is column 3, i.e. (2, 3), but then no
position is left for placing queen 'q3' safely. So we backtrack one step and place the queen 'q2' in (2,
4), the next best possible solution. Then we obtain the position for placing 'q3' which is (3, 2). But
later this position also leads to a dead end, and no place is found where 'q4' can be placed safely.
Then we have to backtrack till 'q1' and place it to (1, 2) and then all other queens are placed safely by
moving q2 to (2, 4), q3 to (3, 1) and q4 to (4, 3). That is, we get the solution (2, 4, 1, 3). This is one
possible solution for the 4-queens problem. For another possible solution, the whole method is
repeated for all partial solutions. The other solution for the 4-queens problem is (3, 1, 4, 2).
The implicit tree for the 4-queens problem for the solution (2, 4, 1, 3) is as follows.
The figure shows the complete state space for the 4-queens problem, but we can use the backtracking
method to generate only the necessary nodes and stop if the next node violates the rule, i.e., if two
queens are attacking each other.
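A minimal C backtracking sketch for the 4-queens instance (function names are assumptions); it
prints the first solution 2 4 1 3 described above:

#include <stdio.h>
#include <stdlib.h>

#define N 4                                   /* board size for the 4-queens instance */

int col[N];                                   /* col[i] = column of the queen in row i */

/* A queen can be placed in (row, c) if no earlier queen shares the column or a diagonal. */
int is_safe(int row, int c)
{
    for (int i = 0; i < row; i++)
        if (col[i] == c || abs(col[i] - c) == row - i)
            return 0;
    return 1;
}

/* Backtracking: place one queen per row, undoing a choice when it leads to a dead end,
   exactly as in the state space tree described above. */
int place_queen(int row)
{
    if (row == N) {
        for (int i = 0; i < N; i++)
            printf("%d ", col[i] + 1);        /* print 1-based columns, e.g. 2 4 1 3 */
        printf("\n");
        return 1;
    }
    for (int c = 0; c < N; c++) {
        if (is_safe(row, c)) {
            col[row] = c;
            if (place_queen(row + 1))
                return 1;                     /* stop at the first solution */
        }
    }
    return 0;                                 /* backtrack */
}

int main(void)
{
    place_queen(0);
    return 0;
}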
e. Write the Rabin-Karp string matching algorithm. Working modulo q = 11, how
many spurious hits does the Rabin-Karp matcher encounter in the text
T = 3141592653589793 when looking for the pattern P = 26?
The Rabin-Karp string matching algorithm calculates a hash value for the pattern, as well as for each
M-character subsequence of the text to be compared. If the hash values are unequal, the algorithm
computes the hash value for the next M-character subsequence. If the hash values are equal, the
algorithm compares the pattern and the M-character subsequence character by character. In this way,
there is only one comparison per text subsequence, and character matching is required only when the
hash values match.
RABIN-KARP-MATCHER (T, P, d, q)
1. n ← length[T]
2. m ← length[P]
3. h ← d^(m-1) mod q
4. p ← 0
5. t_0 ← 0
6. for i ← 1 to m
7.     do p ← (d·p + P[i]) mod q
8.        t_0 ← (d·t_0 + T[i]) mod q
9. for s ← 0 to n - m
10.    do if p = t_s
11.        then if P[1 .. m] = T[s+1 .. s+m]
12.            then print "Pattern occurs with shift" s
13.        if s < n - m
14.            then t_(s+1) ← (d·(t_s - T[s+1]·h) + T[s+m+1]) mod q
Example: For string matching, working modulo q = 11, how many spurious hits does the
Rabin-Karp matcher encounter in the text T = 31415926535.......?
T = 31415926535.......
P = 26
Here q = 11 (given).
P mod q = 26 mod 11 = 4
Now find every length-2 window of the text whose value mod q equals 4; windows other than 26
that hash to 4 are spurious hits.
Solution:
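The length-2 windows of T = 3141592653589793 and their values mod 11 are:
31 → 9, 14 → 3, 41 → 8, 15 → 4, 59 → 4, 92 → 4, 26 → 4, 65 → 10, 53 → 9, 35 → 2, 58 → 3,
89 → 1, 97 → 9, 79 → 2, 93 → 5.
The windows 15, 59, 92 and 26 all hash to 4 = P mod 11. Of these, 26 is the one valid match, so the
Rabin-Karp matcher encounters 3 spurious hits (at the windows 15, 59 and 92).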
Complexity:
The running time of RABIN-KARP-MATCHER in the worst case is O((n - m + 1)·m), but it has a
good average-case running time. If the expected number of valid shifts is small, say O(1), and the
prime q is chosen to be quite large, then the Rabin-Karp algorithm can be expected to run in
O(n + m) time, plus the time required to process spurious hits.
SECTION C
Attempt any one part of the following: 10 x 1 = 10
(a) Write Merge sort algorithm and sort the following sequence {23, 11, 5, 15,
68, 31, 4, 17} using merge sort.
Merge sort is similar to the quick sort algorithm in that it uses the divide and conquer
approach to sort the elements. It is one of the most popular and efficient sorting
algorithms. It divides the given list into two equal halves, calls itself for the two
halves and then merges the two sorted halves. We have to define
the merge() function to perform the merging. The sub-lists are divided again and
again into halves until each list contains a single element. Then we combine the pairs
of one-element lists into two-element lists, sorting them in the process. The sorted
two-element lists are merged into four-element lists, and so on, until we get the
sorted list.
Algorithm
In the following algorithm, arr is the given array, beg is the starting element,
and end is the last element of the array.

MERGE_SORT(arr, beg, end)
if beg < end
    set mid = (beg + end)/2
    MERGE_SORT(arr, beg, mid)
    MERGE_SORT(arr, mid + 1, end)
    MERGE (arr, beg, mid, end)
end of if
END MERGE_SORT
The important part of the merge sort is the MERGE function. This function
performs the merging of two sorted sub-arrays,
A[beg…mid] and A[mid+1…end], to build one sorted array A[beg…end]. So
the inputs of the MERGE function are A[], beg, mid, and end. The implementation
of the MERGE function is given as follows -
/* Function to merge the subarrays of a[] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;

    int LeftArray[n1], RightArray[n2];  /* temporary arrays */

    /* copy data to temp arrays */
    for (i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0;   /* initial index of first sub-array */
    j = 0;   /* initial index of second sub-array */
    k = beg; /* initial index of merged sub-array */

    while (i < n1 && j < n2)
    {
        if (LeftArray[i] <= RightArray[j])
        {
            a[k] = LeftArray[i];
            i++;
        }
        else
        {
            a[k] = RightArray[j];
            j++;
        }
        k++;
    }
    while (i < n1)
    {
        a[k] = LeftArray[i];
        i++;
        k++;
    }
    while (j < n2)
    {
        a[k] = RightArray[j];
        j++;
        k++;
    }
}
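A recursive driver corresponding to the MERGE_SORT pseudocode is not given in C above; a
minimal sketch (the function name mergeSort is assumed), compiled together with the merge()
function, is:

#include <stdio.h>

/* Recursive driver: split the range, sort both halves, then merge them. */
void mergeSort(int a[], int beg, int end)
{
    if (beg < end) {
        int mid = (beg + end) / 2;
        mergeSort(a, beg, mid);
        mergeSort(a, mid + 1, end);
        merge(a, beg, mid, end);      /* merge() is defined above */
    }
}

int main(void)
{
    int a[] = {23, 11, 5, 15, 68, 31, 4, 17};
    mergeSort(a, 0, 7);
    for (int i = 0; i < 8; i++)
        printf("%d ", a[i]);          /* prints 4 5 11 15 17 23 31 68 */
    printf("\n");
    return 0;
}

Sorting the given sequence:
Divide: {23, 11, 5, 15, 68, 31, 4, 17} → {23, 11, 5, 15} and {68, 31, 4, 17} → {23, 11}, {5, 15},
{68, 31}, {4, 17} → single elements.
Merge: {11, 23}, {5, 15}, {31, 68}, {4, 17} → {5, 11, 15, 23}, {4, 17, 31, 68} →
{4, 5, 11, 15, 17, 23, 31, 68}.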
What do you understand by stable and unstable sorting? Sort the following sequence {25,
57, 48, 36, 12, 91, 86, 32} using heap sort.
Stable sorting algorithms preserve the relative order of equal elements, while unstable sorting
algorithms do not. In other words, a stable sort maintains the position of two equal elements relative
to one another. For example, merge sort, insertion sort and bubble sort are stable, whereas heap sort,
quick sort and selection sort are not stable.
Algorithm:
createheap(x,n) /*-------------------- Function to create heap */
int x[],n;
{
int i,ele,s,f;
for (i=1;i<n;i++)
{
ele = x[i];
s = i;
f = (s-1) / 2;
while (s>0 && x[f]<ele)
{
x[s] = x[f];
s = f;
f = (s-1) / 2;
}
x[s] = ele;
}
}
swap(x,i,j)
int x[],i,j;
{
int temp;
temp=x[i];
x[i]=x[j];
x[j]=temp;
}
heapsort(x,n) /* Repeatedly remove x[0] and insert it in proper */
int x[],n; /* position and readjust the remaining heap */
{
int i;
for (i=n-1;i>0;i--)
{
display(x,n);
swap(x,0,i);
createheap(x,i);
}
}
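The createheap/heapsort routines above follow pre-ANSI (K&R) C and call a display routine that is
not shown. A minimal driver sketch for the given sequence, with display and main assumed and
written in the same style (note that the initial max-heap must be built with createheap before
heapsort is called):

#include <stdio.h>

display(x, n)            /* assumed helper: prints the current contents of the array */
int x[], n;
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", x[i]);
    printf("\n");
}

main()
{
    int a[] = {25, 57, 48, 36, 12, 91, 86, 32};
    createheap(a, 8);    /* build the initial max-heap */
    heapsort(a, 8);      /* repeatedly move the current maximum to the end */
    display(a, 8);       /* final output: 12 25 32 36 48 57 86 91 */
    return 0;
}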
1. Insert the new node the way it is done in Binary Search Trees.
2. Color the node red
3. If an inconsistency arises for the red-black tree, fix the tree according to the type of
discrepancy.
4. If an imbalance occurs in the red-black tree, two methods are used to remove it:
a) Recoloring
b) Rotation
(b) What is a skip list? Explain the Search operation in a skip list with a suitable example.
A skip list is a probabilistic data structure. It is used to store a sorted list of elements using a
hierarchy of linked lists, which allows the elements to be accessed efficiently. In a single step it can
skip several elements of the list, which is why it is known as a skip list.
The skip list is an extended version of the linked list. It allows the user to search, remove, and insert
elements very quickly. It consists of a base list containing all the elements, together with higher-level
lists that maintain a link hierarchy over the subsequent elements.
Insertion operation: It is used to add a new node to a particular location in a specific situation.
Search Operation: The search operation is used to search a particular node in a skip list.
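A minimal C sketch of the search operation (the node layout, with a key and an array of forward
pointers per level, is an assumption of the sketch):

#include <stddef.h>

struct sl_node {
    int key;
    struct sl_node **forward;   /* forward[i] = next node at level i */
};

struct skiplist {
    int level;                  /* index of the highest level currently in use */
    struct sl_node *header;     /* header node with forward pointers at every level */
};

/* Returns the node containing 'key', or NULL if it is not present. */
struct sl_node *skiplist_search(struct skiplist *list, int key)
{
    struct sl_node *x = list->header;
    for (int i = list->level; i >= 0; i--) {          /* start at the top level            */
        while (x->forward[i] != NULL && x->forward[i]->key < key)
            x = x->forward[i];                        /* move right while keys are smaller */
    }                                                 /* then drop down one level          */
    x = x->forward[0];                                /* candidate at the bottom level     */
    if (x != NULL && x->key == key)
        return x;
    return NULL;
}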
The fractional knapsack problem is also one of the techniques which are used to solve the knapsack
problem. In fractional knapsack, the items are broken in order to maximize the profit. The problem
in which we break the item is known as a Fractional knapsack problem.
This problem can be solved with the help of using two techniques:
o Brute-force approach: The brute-force approach tries all the possible solutions with all the
different fractions but it is a time-consuming approach.
o Greedy approach: In Greedy approach, we calculate the ratio of profit/weight, and
accordingly, we will select the item. The item with the highest ratio would be selected first.
o The first approach is to select the item based on the maximum profit.
o The second approach is to select the item based on the minimum weight.
o The third approach is to calculate the ratio of profit/weight.
Objects:      1   2   3   4   5   6   7
Profit (P):   5   10  15  7   8   9   4
Weight (w):   1   3   5   4   1   3   2
n (number of items): 7
Knapsack capacity (W): 15
In the third approach, we select objects by their profit/weight ratio:
Objects:      1   2   3   4   5   6   7
Profit (P):   5   10  15  7   8   9   4
Weight (w):   1   3   5   4   1   3   2
Object 1: 5/1 = 5
Object 2: 10/3 = 3.33
Object 3: 15/5 = 3
Object 4: 7/4 = 1.75
Object 5: 8/1 = 8
Object 6: 9/3 = 3
Object 7: 4/2 = 2
In this approach, we select the objects based on the maximum profit/weight ratio. Since the P/W of
object 5 is maximum, we select object 5 first.
Object   Profit   Weight   Remaining capacity
5        8        1        15 - 1 = 14
After object 5, object 1 has the maximum profit/weight ratio, i.e., 5. So, we select object 1 shown in
the below table:
5 8 1 15 - 1 = 14
1 5 1 14 - 1 = 13
After object 1, object 2 has the maximum profit/weight ratio, i.e., 3.3. So, we select object 2 having
profit/weight ratio as 3.3.
5 8 1 15 - 1 = 14
1 5 1 14 - 1 = 13
2 10 3 13 - 3 = 10
After object 2, object 3 has the maximum profit/weight ratio, i.e., 3. So, we select object 3 having
profit/weight ratio as 3.
5 8 1 15 - 1 = 14
1 5 1 14 - 1 = 13
2 10 3 13 - 3 = 10
3 15 5 10 - 5 = 5
After object 3, object 6 has the maximum profit/weight ratio, i.e., 3. So we select object 6 having
profit/weight ratio as 3.
5 8 1 15 - 1 = 14
1 5 1 14 - 1 = 13
2 10 3 13 - 3 = 10
3 15 5 10 - 5 = 5
After object 6, object 7 has the maximum profit/weight ratio, i.e., 2. So we select object 7 having
profit/weight ratio as 2.
5   8    1   15 - 1 = 14
1   5    1   14 - 1 = 13
2   10   3   13 - 3 = 10
3   15   5   10 - 5 = 5
6   9    3   5 - 3 = 2
7   4    2   2 - 2 = 0
As we can observe in the above table, the remaining weight is zero, which means that the knapsack
is full. We cannot add more objects to the knapsack. Therefore, the total profit is
(8 + 5 + 10 + 15 + 9 + 4) = 51.
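A minimal C sketch of the greedy selection for this example (the array layout and the linear scan for
the best ratio are implementation choices of the sketch):

#include <stdio.h>

int main(void)
{
    double p[] = {5, 10, 15, 7, 8, 9, 4};    /* profits of objects 1..7            */
    double w[] = {1, 3, 5, 4, 1, 3, 2};      /* weights of objects 1..7            */
    int n = 7;
    double W = 15.0, profit = 0.0;           /* knapsack capacity and total profit */
    int used[7] = {0};

    while (W > 0) {
        int best = -1;
        for (int i = 0; i < n; i++)          /* pick the unused item with the      */
            if (!used[i] && (best < 0 ||     /* largest profit/weight ratio        */
                p[i] / w[i] > p[best] / w[best]))
                best = i;
        if (best < 0)
            break;                           /* no items left */
        used[best] = 1;
        if (w[best] <= W) {                  /* whole item fits */
            profit += p[best];
            W -= w[best];
        } else {                             /* take only a fraction of it */
            profit += p[best] * (W / w[best]);
            W = 0;
        }
    }
    printf("Maximum profit = %.1f\n", profit);   /* prints 51.0 for this data */
    return 0;
}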
Write down the Bellman Ford algorithm to solve the single source shortest path problem
also write its time complexity.
The Bellman-Ford algorithm is a single-source shortest path algorithm. It is used to find the shortest
distance from a single vertex to all the other vertices of a weighted graph. There are various other
algorithms used to find the shortest path, such as Dijkstra's algorithm. If the weighted graph contains
negative weight values, Dijkstra's algorithm is not guaranteed to produce the correct answer. In
contrast, the Bellman-Ford algorithm guarantees the correct answer even if the weighted graph
contains negative weight values (as long as there is no negative-weight cycle reachable from the source).
As we can observe in the above graph, some of the weights are negative. The above graph contains
6 vertices, so we will relax all the edges (6 - 1) = 5 times. The loop iterates 5 times to get the correct
answer. If the loop is iterated more than 5 times, the answer remains the same, i.e., there is no further
change in the distances between the vertices.
Relaxing means: for an edge (u, v) with cost c(u, v), if d(u) + c(u, v) < d(v), then update
d(v) = d(u) + c(u, v).
To find the shortest path of the above graph, the first step is note down all the edges which are given
below:
(A, B), (A, C), (A, D), (B, E), (C, E), (D, C), (D, F), (E, F), (C, B)
Let's consider the source vertex as 'A'; therefore, the distance value at vertex A is 0 and the distance
value at all the other vertices is infinity, as shown below:
First iteration
Consider the edge (A, B). Denote vertex 'A' as 'u' and vertex 'B' as 'v'. Now use the relaxing formula:
d(u) = 0
d(v) = ∞
c(u , v) = 6
d(v) = 0 + 6 = 6
Consider the edge (A, C). Denote vertex 'A' as 'u' and vertex 'C' as 'v'. Now use the relaxing formula:
d(u) = 0
d(v) = ∞
c(u , v) = 4
d(v) = 0 + 4 = 4
Consider the edge (A, D). Denote vertex 'A' as 'u' and vertex 'D' as 'v'. Now use the relaxing formula:
d(u) = 0
d(v) = ∞
c(u , v) = 5
d(v) = 0 + 5 = 5
Consider the edge (B, E). Denote vertex 'B' as 'u' and vertex 'E' as 'v'. Now use the relaxing formula:
d(u) = 6
d(v) = ∞
c(u , v) = -1
d(v) = 6 - 1 = 5
Consider the edge (C, E). Denote vertex 'C' as 'u' and vertex 'E' as 'v'. Now use the relaxing formula:
d(u) = 4
d(v) = 5
c(u , v) = 3
Since (4 + 3) = 7 is greater than 5, there would be no updation on the distance value of vertex E.
Consider the edge (D, C). Denote vertex 'D' as 'u' and vertex 'C' as 'v'. Now use the relaxing formula:
d(u) = 5
d(v) = 4
c(u , v) = -2
d(v) = 5 - 2 = 3
Consider the edge (D, F). Denote vertex 'D' as 'u' and vertex 'F' as 'v'. Now use the relaxing formula:
d(u) = 5
d(v) = ∞
c(u , v) = -1
d(v) = 5 - 1 = 4
Consider the edge (E, F). Denote vertex 'E' as 'u' and vertex 'F' as 'v'. Now use the relaxing formula:
d(u) = 5
d(v) = 4
c(u , v) = 3
Since (5 + 3) = 8 is greater than 4, there would be no updation on the distance value of vertex F.
Consider the edge (C, B). Denote vertex 'C' as 'u' and vertex 'B' as 'v'. Now use the relaxing formula:
d(u) = 3
d(v) = 6
c(u , v) = -2
d(v) = 3 - 2 = 1
Second iteration:
In the second iteration, we again check all the edges. The first edge is (A, B). Since (0 + 6) is greater
than 1 so there would be no updation in the vertex B.
The next edge is (A, C). Since (0 + 4) is greater than 3 so there would be no updation in the vertex C.
The next edge is (A, D). Since (0 + 5) equals to 5 so there would be no updation in the vertex D.
The next edge is (B, E). Since (1 - 1) equals 0, which is less than 5, update:
d(E) = d(B) + c(B, E) = 1 - 1 = 0
The next edge is (C, E). Since (3 + 3) equals to 6 which is greater than 5 so there would be no
updation in the vertex E.
The next edge is (D, C). Since (5 - 2) equals to 3 so there would be no updation in the vertex C.
The next edge is (D, F). Since (5 - 1) equals to 4 so there would be no updation in the vertex F.
The next edge is (E, F). Since (5 + 3) equals to 8 which is greater than 4 so there would be no
updation in the vertex F.
The next edge is (C, B). Since (3 - 2) equals 1, there would be no updation in the vertex B.
Third iteration
We will perform the same steps as we did in the previous iterations. We will observe that there will
be no updation in the distance of vertices.
Time Complexity: The Bellman-Ford algorithm relaxes all |E| edges (|V| - 1) times, so its time
complexity is O(V · E).
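A minimal C sketch of the algorithm (the edge-list representation and the INF sentinel are
assumptions of the sketch):

#include <stdio.h>

#define INF 1000000

struct edge { int u, v, w; };

/* Computes dist[] from source s over a graph with n vertices and m edges.
   Returns 0 if a negative-weight cycle reachable from s is detected, 1 otherwise. */
int bellman_ford(struct edge e[], int m, int n, int s, int dist[])
{
    for (int i = 0; i < n; i++)
        dist[i] = INF;
    dist[s] = 0;

    for (int pass = 1; pass <= n - 1; pass++)        /* relax every edge |V| - 1 times */
        for (int j = 0; j < m; j++)
            if (dist[e[j].u] != INF &&
                dist[e[j].u] + e[j].w < dist[e[j].v])
                dist[e[j].v] = dist[e[j].u] + e[j].w;

    for (int j = 0; j < m; j++)                      /* one extra pass: any further    */
        if (dist[e[j].u] != INF &&                   /* improvement means a negative-  */
            dist[e[j].u] + e[j].w < dist[e[j].v])    /* weight cycle exists            */
            return 0;
    return 1;
}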
Explain the method of finding Hamiltonian cycles in a graph using backtracking method
with suitable example.
A Hamiltonian cycle in a graph is a closed path that visits each vertex of the graph exactly once. The
problem of determining whether a given graph contains a Hamiltonian cycle is called the
Hamiltonian cycle problem. This problem is of great importance in computer science, particularly in
the field of optimization and graph theory.
1. Traveling Salesman Problem (TSP): A salesman has to visit a number of cities and return to
the starting city while minimizing the total distance traveled.
2. Sequencing problems: In DNA sequencing, Hamiltonian cycles can be used to find the
shortest common superstring of a set of strings.
3. Networking: In network routing, Hamiltonian cycles can be used to find optimal routes that
minimize the total cost of visiting all nodes in a network.
Let's consider the Traveling Salesman Problem (TSP). In this problem, a salesman has to visit a
number of cities and return to the starting city while minimizing the total distance traveled. The TSP
can be represented as a graph where the cities are the vertices, and the edges represent the distances
between the cities.
Problem Statement
Given a graph with N vertices and a starting vertex, determine if there exists a Hamiltonian cycle in
the graph, and if so, find one such cycle.
We can solve the Hamiltonian cycle problem using a backtracking algorithm. The basic idea of the
backtracking algorithm is to construct a solution incrementally and backtrack whenever the current
solution cannot be extended to a complete solution.
Here's the step-by-step process to find a Hamiltonian cycle in a graph using the backtracking
algorithm:
1. Start with an empty path and push the starting vertex into it.
2. Add vertices to the path one by one, ensuring that each added vertex is adjacent to the
previously added vertex and not already in the path.
3. If the path contains all vertices and the last added vertex is adjacent to the starting vertex, a
Hamiltonian cycle is found.
4. If the path cannot be extended further, backtrack by removing the last added vertex and
trying the next adjacent vertex.
Example
Let's consider the following graph representing the distances between 5 cities:
graph = [
[0, 1, 1, 0, 1],
[1, 0, 1, 1, 1],
[1, 1, 0, 1, 0],
[0, 1, 1, 0, 1],
[1, 1, 0, 1, 0]
]
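The hamiltonian_cycle routine used below is not written out in the text; a minimal C sketch of the
backtracking search (the adjacency-matrix type, helper names and printing format are assumptions of
the sketch) is:

#include <stdio.h>

#define N 5                                    /* number of vertices in the example graph */

int path[N];                                   /* path[i] = i-th vertex on the cycle */
int visited[N];

/* Backtracking: extend the partial path one vertex at a time and undo the
   choice (backtrack) whenever it cannot lead to a Hamiltonian cycle. */
static int extend(int g[N][N], int pos)
{
    if (pos == N)                              /* all vertices used: check the closing edge */
        return g[path[pos - 1]][path[0]];
    for (int v = 0; v < N; v++) {
        if (!visited[v] && g[path[pos - 1]][v]) {
            path[pos] = v;
            visited[v] = 1;
            if (extend(g, pos + 1))
                return 1;
            visited[v] = 0;                    /* backtrack */
        }
    }
    return 0;
}

int hamiltonian_cycle(int g[N][N], int start_vertex)
{
    path[0] = start_vertex;
    visited[start_vertex] = 1;
    if (extend(g, 1)) {
        for (int i = 0; i < N; i++)
            printf("%d -> ", path[i]);
        printf("%d\n", path[0]);               /* prints 0 -> 1 -> 2 -> 3 -> 4 -> 0 */
        return 1;
    }
    printf("No Hamiltonian cycle exists\n");
    return 0;
}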
We can call the hamiltonian_cycle function with this graph and the starting city as follows:
start_vertex = 0
hamiltonian_cycle(graph, start_vertex)
Output: a Hamiltonian cycle exists: 0 -> 1 -> 2 -> 3 -> 4 -> 0
The decision vertex-cover problem was proven NPC. Now, we want to solve the optimal version of
the vertex cover problem, i.e., we want to find a minimum size vertex cover of a given graph. We
call such vertex cover an optimal vertex cover C*.
The idea is to take an edge (u, v), put both of its vertices into C, and remove all the edges incident on
u or v. We carry on until all edges have been removed. C is a vertex cover. How good is C? It can be
shown that |C| is at most twice the size of an optimal vertex cover C*, so this is a 2-approximation
algorithm.
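A minimal C sketch of this approximation (the adjacency-matrix representation and vertex count are
assumptions of the sketch):

#include <stdio.h>

#define V 6                                   /* number of vertices (assumed) */

/* APPROX-VERTEX-COVER: repeatedly pick a remaining edge (u, v) whose endpoints are
   both still uncovered, add both endpoints to the cover, and thereby "delete" every
   edge incident on u or v.  The cover found is at most twice the optimum. */
void approx_vertex_cover(int g[V][V])
{
    int in_cover[V] = {0};
    for (int u = 0; u < V; u++)
        for (int v = u + 1; v < V; v++)
            if (g[u][v] && !in_cover[u] && !in_cover[v]) {
                in_cover[u] = 1;              /* take both endpoints of the edge */
                in_cover[v] = 1;
            }
    for (int u = 0; u < V; u++)
        if (in_cover[u])
            printf("%d ", u);                 /* vertices of the cover C */
    printf("\n");
}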
Knuth-Morris-Pratt (KMP) is a string matching algorithm that scans the text from left to right. When
the pattern contains a sub-pattern that appears more than once within the pattern, KMP uses that
property (recorded in a prefix array) to avoid re-examining text characters, which improves the time
complexity to O(n + m) even in the worst case.
findPrefix(pattern, m, prefArray)
Input − The pattern, the length of the pattern and an array to store prefix locations
Begin
   length := 0
   prefArray[0] := 0
   i := 1
   while i < m, do
      if pattern[i] = pattern[length], then
         increase length by 1
         prefArray[i] := length
         increase i by 1
      else if length ≠ 0, then
         length := prefArray[length - 1]
      else
         prefArray[i] := 0
         increase i by 1
   done
End

kmpAlgorithm(text, pattern)
Input − The main text and the pattern, which will be searched
Output − The locations where the pattern is found
Begin
   n := size of text
   m := size of pattern
   call findPrefix(pattern, m, prefArray)
   i := 0, j := 0
   while i < n, do
      if text[i] = pattern[j], then
         increase i and j by 1
      if j = m, then
         print the location (i - j) as there is the pattern
         j := prefArray[j - 1]
      else if i < n AND pattern[j] ≠ text[i], then
         if j ≠ 0, then
            j := prefArray[j - 1]
         else
            increase i by 1
   done
End