Algorithm Final Question Bank
1. What is an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem.
i.e., for obtaining a required output for any legitimate input in a finite amount of time
3. What are important problem types? (or) Enumerate some important types of problems.
1. Sorting
2. Searching
3. Numerical problems
4. Geometric problems
5. Combinatorial problems
6. Graph problems
7. String processing problems
15. What do you mean by time complexity and space complexity of an algorithm?
Time complexity indicates how fast the algorithm runs; space complexity deals with the
extra memory it requires. Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input size. The basic operation is the
operation that contributes most towards the running time of the algorithm. The running
time of an algorithm is the function defined by the number of steps (or amount of
memory) required to solve input instances of size n.
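As an illustration (not part of the original answer), counting the basic operation of a simple maximum-finding algorithm shows how time efficiency becomes a function of the input size n: the comparison inside the loop executes exactly n - 1 times.

```python
def find_max_counted(a):
    """Return the maximum of a non-empty list together with the number of
    executions of the basic operation (the comparison)."""
    comparisons = 0
    largest = a[0]
    for x in a[1:]:
        comparisons += 1   # basic operation: one comparison per iteration
        if x > largest:
            largest = x
    return largest, comparisons
```

For an input of size n the comparison runs n - 1 times, so the time efficiency is in O(n).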
17. What are the different criteria used to improve the effectiveness of algorithm?
(i) The effectiveness of algorithm is improved, when the design, satisfies the following
constraints to be minimum.
Time efficiency - how fast an algorithm in question runs.
Space efficiency – an extra space the algorithm requires
(ii) The algorithm has to provide result for all valid inputs.
UNIT III
DYNAMIC PROGRAMMING
The General method- All pairs shortest path- Optimal binary tree-
Multistage graphs
1. Write the difference between the Greedy method and Dynamic programming.
Greedy method:
1. Only one sequence of decisions is generated.
2. It does not guarantee to give an optimal solution always.
Dynamic programming:
1. Many sequences of decisions are generated.
2. It definitely gives an optimal solution always.
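The contrast can be sketched with the classic coin-change problem (an illustrative example, not taken from this question bank): with coin denominations 1, 3 and 4, the greedy method's single sequence of decisions is not optimal for amount 6, while dynamic programming finds the optimum.

```python
def greedy_coin_change(coins, amount):
    """One sequence of decisions: always take the largest coin that fits."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coin_change(coins, amount):
    """Consider every subproblem once; the result is guaranteed optimal."""
    INF = float("inf")
    best = [0] + [INF] * amount   # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] < INF else None

print(greedy_coin_change([1, 3, 4], 6))  # 3 coins: 4 + 1 + 1
print(dp_coin_change([1, 3, 4], 6))      # 2 coins: 3 + 3
```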
2. Define dynamic programming.
Dynamic programming is an algorithm design method that can be used when the
solution to a problem is viewed as the result of a sequence of decisions. It is a technique
for solving problems with overlapping subproblems.
3. What are the features of dynamic programming?
• Optimal solutions to subproblems are retained so as to avoid recomputing their
values.
• Decision sequences containing subsequences that are suboptimal are not
considered.
• It definitely gives the optimal solution always.
4. What are the drawbacks of dynamic programming?
• Time and space requirements are high, since storage is needed at all levels.
• Optimality should be checked at all levels.
5. Write the general procedure of dynamic programming.
The development of dynamic programming algorithm can be broken into a
sequence of 4 steps.
1. Characterize the structure of an optimal solution.
2. Recursively define the value of the optimal solution.
3. Compute the value of an optimal solution in the bottom-up fashion.
4. Construct an optimal solution from the computed information.
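The four steps can be illustrated with a bottom-up Fibonacci computation (a standard textbook illustration, used here as an assumed example):

```python
def fib(n):
    """Bottom-up dynamic programming for Fibonacci numbers.
    Step 1: structure - F(n) is built from F(n-1) and F(n-2).
    Step 2: recurrence - F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1.
    Step 3: compute bottom-up, solving each subproblem exactly once.
    Step 4: the final computed value is the answer."""
    if n < 2:
        return n
    prev, cur = 0, 1
    for _ in range(2, n + 1):
        prev, cur = cur, prev + cur
    return cur
```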
UNIT 4 AND 5
BACKTRACKING BRANCH AND BOUND
1. What are the requirements that are needed for performing Backtracking?
To solve any problem using backtracking, all the solutions are required to satisfy a
set of constraints, which are divided into two categories:
i. Explicit constraints.
ii. Implicit constraints.
2. Define explicit constraint.
Explicit constraints are rules that restrict each xi to take on values only from a given
set. They depend on the particular instance I of the problem being solved. All tuples that
satisfy the explicit constraints define a possible solution space for I.
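A minimal backtracking sketch of the n-queens problem (a standard illustration, not taken from this bank) makes the two kinds of constraints concrete: the explicit constraint restricts each solution component to a column index in 0..n-1, while the implicit constraint requires that no two queens attack each other.

```python
def n_queens(n):
    """Backtracking search; cols[r] is the column of the queen in row r."""
    solutions = []

    def place(cols):
        row = len(cols)
        if row == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):  # explicit constraint: col is in 0..n-1
            # implicit constraint: no shared column and no shared diagonal
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)
                place(cols)   # extend the partial solution
                cols.pop()    # backtrack and try the next column

    place([])
    return solutions
```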
Quicksort
Quicksort is a fast sorting algorithm that works by splitting a large array of data into smaller
sub-arrays. Each iteration splits the input into two components around a pivot, sorts them,
and then recombines them; for big datasets the technique is particularly efficient.
It was created by Tony Hoare in 1961 and remains one of the most effective general-purpose
sorting algorithms available today. It works by recursively sorting the sub-lists to either side of
a given pivot and dynamically shifting elements inside the list around that pivot.
Divide: Split the problem set; move smaller items to the left of the pivot and larger
items to the right.
Repeat and combine: Repeat the steps on each part and combine the sub-arrays that have
already been sorted.
Benefits of Quicksort
It has an average-case time complexity of O(n log n) and is among the fastest sorting
algorithms in practice.
Quicksort has a space complexity of O(log n), making it an excellent choice for
situations where space is limited.
Limitations of Quicksort
Despite being one of the fastest algorithms, quicksort has a few drawbacks. Its worst
case, O(n^2), occurs when the pivot element is always the largest or smallest, or when
all of the elements are equal. The performance of quicksort is significantly impacted by
these worst-case scenarios.
Let's take a look at an example to get a better understanding of the quicksort algorithm. In
this example, the array (7, 2, 1, 6, 8, 5, 3, 4 in this walkthrough) contains unsorted values,
which we will sort using quicksort.
The process starts by selecting one element (known as the pivot) from the list; this can be
the first, last or middle element, or one chosen at random.
For this example, we'll use the last element, 4, as our pivot.
Now, the goal here is to rearrange the list such that all the elements less than the pivot are
towards the left of it, and all the elements greater than the pivot are towards the right of
it.
The pivot element is compared to all of the items, starting with the first index. If
an element is greater than the pivot element, a second pointer is set at that element.
When an element smaller than the pivot is found later, that smaller element is
swapped with the larger element identified before.
Every element, starting with 7, is compared to the pivot (4). A second pointer
is placed at 7 because 7 is bigger than 4.
The next element, 2, is then compared to the pivot. As 2 is less
than 4, it is swapped with the bigger number 7 found earlier.
The numbers 7 and 2 are swapped. Now, the pivot is compared to the next
element, 1, which is smaller than 4, so it is swapped in the same way.
The procedure continues until the next-to-last element is reached, and at the
end the pivot element is swapped with the element at the second pointer. Here,
the pivot 4 is swapped with the number 6.
As elements 2, 1, and 3 are less than 4, they are on the pivot's left side. These elements can
be in any order: '1','2','3', or '3','1','2', or '2','3','1'. The only requirement is that all of the
elements must be less than the pivot. Similarly, on the right side, regardless of their sequence,
all of the elements must be greater than the pivot.
In simple words, the algorithm searches for every value that is smaller than the pivot. Values
smaller than the pivot are placed on the left, while values larger than the pivot are placed on
the right. Once the values are rearranged, the pivot is set in its sorted position.
Once we have partitioned the array, we can break this problem into two sub-problems.
First, sort the segment of the array to the left of the pivot, and then sort the segment of the
array to the right of the pivot.
In the same way that we rearranged elements in step 2, we will pick a pivot
element for each of the left and right sub-parts individually.
Now, we will rearrange the sub-list such that all the elements less than the
pivot are towards its left. For example, element 3 is the largest among
the three elements, which satisfies the condition, hence the element 3 is in its
sorted position.
In a similar manner, we will again work on the sub-list and sort the
elements 2 and 1. We will stop the process when we get a single element at the
end.
Repeat the same process for the right-side sub-list. The subarrays are
subdivided until each subarray consists of only one element.
The sub-arrays are rearranged in a certain order using the partition method. You will find
various ways to partition. Here we will see one of the most used methods.
partition (array, start, end)
{
    // Setting rightmost index as pivot
    pivot = array[end];
    i = start - 1;   // boundary of elements smaller than the pivot
    for (j = start; j < end; j++)
        if (array[j] < pivot)
            swap(array[++i], array[j]);
    swap(array[i + 1], array[end]);  // put pivot in its sorted position
    return i + 1;
}
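The partition method can also be sketched as a complete, runnable program; the input values follow the worked example above (reconstructed as 7, 2, 1, 6, 8, 5, 3, 4 from the walkthrough):

```python
def partition(a, start, end):
    """Lomuto partition: the rightmost element is the pivot."""
    pivot = a[end]
    i = start - 1                    # boundary of the "less than pivot" region
    for j in range(start, end):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[end] = a[end], a[i + 1]  # pivot moves to its sorted slot
    return i + 1

def quicksort(a, start, end):
    """Recursively sort the segments on either side of the pivot."""
    if start < end:
        p = partition(a, start, end)
        quicksort(a, start, p - 1)
        quicksort(a, p + 1, end)

data = [7, 2, 1, 6, 8, 5, 3, 4]
quicksort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 4, 5, 6, 7, 8]
```

After the first partition call the array becomes [2, 1, 3, 4, 8, 5, 7, 6], matching the swaps described in the walkthrough.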
11.Explain Dijkstra’s Algorithm in detail with example and analyze its efficiency
Dijkstra's Algorithm
Dijkstra's algorithm allows us to find the shortest path between a starting vertex and each of
the other vertices of a graph. It differs from the minimum spanning tree because the shortest
path between two vertices might not include all the vertices of the graph.
Dijkstra's Algorithm works on the basis that any subpath B -> D of the shortest path A ->
D between vertices A and D is also the shortest path between vertices B and D.
Each subpath of a shortest path is itself a shortest path. Dijkstra used this property in the
opposite direction: we overestimate the distance of each vertex from the starting vertex,
then visit each node and its neighbours to find the shortest subpath to those neighbours.
The algorithm uses a greedy approach in the sense that we find the next best solution hoping
that the end result is the best solution for the whole problem.
In outline, starting from the source vertex, we repeatedly:
• pick the unvisited vertex with the least path length and visit it;
• update the path length of each unvisited neighbour if the new path through the
current vertex is shorter (if the neighbour's path length is already smaller, don't update it);
• avoid updating path lengths of already visited vertices.
A vertex's path length may be updated more than once as shorter routes are discovered.
Repeat until all the vertices have been visited.
We need to maintain the path distance of every vertex. We can store that in an array of size v,
where v is the number of vertices.
We also want to be able to get the shortest path, not only know the length of the shortest path.
For this, we map each vertex to the vertex that last updated its path length.
Once the algorithm is over, we can backtrack from the destination vertex to the source vertex
to find the path.
A minimum priority queue can be used to efficiently retrieve the vertex with the least path
distance.
function dijkstra(G, S)
    for each vertex V in G
        distance[V] <- infinite
        previous[V] <- NULL
        if V != S, add V to priority queue Q
    distance[S] <- 0

    while Q IS NOT EMPTY
        U <- Extract MIN from Q
        for each unvisited neighbour V of U
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
    return distance[], previous[]
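The pseudocode can be turned into a runnable sketch using Python's heapq module as the minimum priority queue (the adjacency-dict graph representation is an assumption for this illustration):

```python
import heapq

def dijkstra(graph, source):
    """graph: {vertex: [(neighbour, edge_weight), ...]}"""
    distance = {v: float("inf") for v in graph}
    previous = {v: None for v in graph}
    distance[source] = 0
    queue = [(0, source)]            # min-priority queue of (distance, vertex)
    visited = set()
    while queue:
        d, u = heapq.heappop(queue)  # extract the vertex with least distance
        if u in visited:
            continue                 # skip stale queue entries
        visited.add(u)
        for v, w in graph[u]:
            if d + w < distance[v]:  # a shorter path to v was found
                distance[v] = d + w
                previous[v] = u
                heapq.heappush(queue, (distance[v], v))
    return distance, previous
```

Backtracking through previous[] from the destination vertex recovers the shortest path itself.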
Dijkstra's algorithm has many real-world applications, for example finding the shortest
route in a telephone network.
12.Explain in detail about prims algorithm with example and analyze its efficiency
Minimum Spanning tree - Minimum spanning tree can be defined as the spanning tree in
which the sum of the weights of the edge is minimum. The weight of the spanning tree is the
sum of the weights given to the edges of the spanning tree.
Prim's Algorithm is a greedy algorithm that is used to find the minimum spanning tree from
a graph. Prim's algorithm finds the subset of edges that includes every vertex of the graph
such that the sum of the weights of the edges can be minimized.
Prim's algorithm starts with a single node and, at every step, explores all the adjacent nodes
through the connecting edges. The edge with the minimal weight that causes no cycle in the
graph gets selected.
Now, let's see the working of Prim's algorithm using an example. Consider a graph with
vertices A, B, C, D and E and weighted edges A-C (3), B-C (10), B-D (4), C-D (2), C-E (6)
and D-E (1); this matches the adjacency matrix used in the program below.
Step 1 - First, we have to choose a vertex from the graph. Let's choose B.
Step 2 - Now, we have to choose and add the shortest edge from vertex B. There are two
edges from vertex B that are B to C with weight 10 and edge B to D with weight 4. Among
the edges, the edge BD has the minimum weight. So, add it to the MST.
Step 3 - Now, again, choose the edge with the minimum weight among the edges leaving
the tree {B, D}. The candidates are BC (weight 10), DC (weight 2) and DE (weight 1), so
select the edge DE and add it to the MST.
Step 4 - Now, select the edge CD, and add it to the MST.
Step 5 - Now, choose the edge CA. Here, we cannot select the edge CE as it would create a
cycle in the graph. So, choose the edge CA and add it to the MST.
So, the graph produced in step 5 is the minimum spanning tree of the given graph. The cost of
the MST is 4 + 1 + 2 + 3 = 10.
Time Complexity
Now, let's see the time complexity of Prim's algorithm. The running time of Prim's
algorithm depends on the data structure used for the graph and for ordering the edges.
Prim's algorithm can be implemented simply by using the adjacency matrix or adjacency list
graph representation; adding the edge with the minimum weight then requires linearly
searching an array of weights, which takes O(V^2) running time. It can be improved further
by using a binary heap to find the minimum-weight edge in the inner loop of the algorithm.
With a binary heap and an adjacency list, the time complexity of Prim's algorithm is
O(E log V), where E is the number of edges and V is the number of vertices.
#include <stdio.h>
#include <limits.h>
#define vertices 5 /* number of vertices in the graph */

/* minimum_key(): find the vertex that has minimum key-value and
   that is not added to the MST yet */
int minimum_key(int k[], int mst[])
{
    int minimum = INT_MAX, min, i;

    /* iterate over all vertices to find the vertex with minimum key-value */
    for (i = 0; i < vertices; i++)
        if (mst[i] == 0 && k[i] < minimum)
            minimum = k[i], min = i;
    return min;
}

/* prim(): construct and print the MST.
   g[vertices][vertices] is an adjacency matrix that defines the graph. */
void prim(int g[vertices][vertices])
{
    int parent[vertices]; /* stores the constructed MST */
    int k[vertices];      /* key values used to select the minimum-weight edge */
    int mst[vertices];    /* 1 if the vertex is already included in the MST */
    int i, count, edge, v; /* here 'v' is the vertex */

    for (i = 0; i < vertices; i++)
    {
        k[i] = INT_MAX;
        mst[i] = 0;
    }
    k[0] = 0;       /* select the first vertex first */
    parent[0] = -1; /* set the first value of parent[] to -1 to make it the root */

    for (count = 0; count < vertices - 1; count++)
    {
        /* select the vertex having minimum key that is not yet in the MST */
        edge = minimum_key(k, mst);
        mst[edge] = 1;
        for (v = 0; v < vertices; v++)
        {
            if (g[edge][v] && mst[v] == 0 && g[edge][v] < k[v])
            {
                parent[v] = edge, k[v] = g[edge][v];
            }
        }
    }

    /* print the constructed minimum spanning tree */
    printf("\n Edge \t Weight\n");
    for (i = 1; i < vertices; i++)
        printf(" %d <-> %d %d \n", parent[i], i, g[i][parent[i]]);
}

int main()
{
    int g[vertices][vertices] = {{0, 0, 3, 0, 0},
                                 {0, 0, 10, 4, 0},
                                 {3, 10, 0, 2, 6},
                                 {0, 4, 2, 0, 1},
                                 {0, 0, 6, 1, 0}};
    prim(g);
    return 0;
}
Output

 Edge 	 Weight
 3 <-> 1 4
 0 <-> 2 3
 2 <-> 3 2
 3 <-> 4 1