DAA Question Bank
The study of algorithms is the cornerstone of computer science; it can be regarded as the core
of the discipline. Computer programs would not exist without algorithms. With computers
becoming an essential part of our professional and personal lives, studying algorithms becomes a
necessity, more so for computer science engineers. Another reason for studying algorithms is
that knowing a standard set of important algorithms furthers our analytical skills and helps
us develop new algorithms for required applications.
Algorithm: An algorithm is a finite set of instructions that, if followed, accomplishes a
particular task. In addition, all algorithms must satisfy the following criteria:
1. Input. Zero or more quantities are externally supplied.
2. Output. At least one quantity is produced.
3. Definiteness. Each instruction is clear and unambiguous.
4. Finiteness. If we trace out the instructions of an algorithm, then for all cases, the algorithm
terminates after a finite number of steps.
5. Effectiveness. Every instruction must be very basic so that it can be carried out, in principle,
by a person using only pencil and paper. It is not enough that each operation be definite as in
criterion 3; it must also be feasible.
• The total time taken by the algorithm is given as a function of its input size
• Logical units of work are identified as one step
Input size: The time required by an algorithm is proportional to the size of the problem instance.
For example, more time is required to sort 20 elements than to sort 10 elements.
Units for Measuring Running Time: Count the number of times an algorithm's basic operation is
executed. (Basic operation: the most important operation of the algorithm, the operation
contributing the most to the total running time.) The basic operation is usually the most
time-consuming operation in the algorithm's innermost loop.
Asymptotic notation is a way of comparing functions that ignores constant factors and small
input sizes. Three notations used to compare orders of growth of an algorithm's basic operation
count are the O, Ω, and Θ notations.
Big Oh (O) notation
Definition: A function t(n) is said to be in O(g(n)), denoted t(n) = O(g(n)), if t(n) is bounded above
by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some nonnegative integer n₀ such that t(n) ≤ c·g(n) for all n ≥ n₀.
6. List down the steps involved in mathematical analysis of Non-Recursive Algorithms 6 Marks
General plan for analyzing efficiency of non-recursive algorithms:
1. Decide on parameter n indicating input size
2. Identify the algorithm's basic operation
3. Check whether the number of times the basic operation is executed depends only on the
input size n. If it also depends on the type of input, investigate worst, average, and best case
efficiency separately.
4. Set up a summation for C(n) reflecting the number of times the algorithm's basic operation is
executed.
5. Simplify summation using standard formulas
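For example (a standard illustration): to find the largest element in a list of n numbers, the basic operation is the comparison A[i] > max. Its count does not depend on the input type; it is executed once for each i from 1 to n-1, so
C(n) = ∑ (i = 1 to n-1) 1 = n - 1 ∈ Θ(n).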
7. List down the steps involved in mathematical analysis of Recursive Algorithms 6 Marks
General plan for analyzing efficiency of recursive algorithms:
1. Decide on parameter n indicating input size
2. Identify the algorithm's basic operation
3. Check whether the number of times the basic operation is executed depends only on the
input size n. If it also depends on the type of input, investigate worst, average, and best case
efficiency separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times the
algorithm's basic operation is executed.
5. Solve the recurrence.
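For example (a standard illustration): for the recursive factorial F(n) = F(n-1) · n with F(0) = 1, the basic operation is multiplication, giving the recurrence M(n) = M(n-1) + 1 with initial condition M(0) = 0; backward substitution yields M(n) = n ∈ Θ(n).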
8. Identify the time complexity (upper bound) for the below iterative functions 6 Marks
A(int n)
{
    int i = 1, s = 1;
    while (s <= n)
    {
        i++;
        s = s + i;
        printf("Ravi");
    }
}
Sol:
s takes the values 1, 3, 6, 10, …; after k iterations s = k(k+1)/2, while i takes the values 1, 2, 3, …, k.
The loop stops when s > n, i.e., when
k(k+1)/2 > n, i.e., (k² + k)/2 > n,
so k = O(√n)
∴ the time complexity is O(√n)
9. Identify the time complexity (upper bound) for the below iterative functions 6 Marks
A()
{
    int i, j, k, n;
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= i; j++)
        {
            for (k = 1; k <= 100; k++)
            {
                printf("Ravi");
            }
        }
    }
}
Sol:
i: 1        2        3        4        5        …  n
j: 1 time   2 times  3 times  4 times  5 times  …  n times
k: 100      200      300      400      500      …  n·100
Total = 100 + 2·100 + 3·100 + 4·100 + 5·100 + … + n·100
      = 100(1 + 2 + 3 + … + n)
      = 100 · n(n+1)/2
∴ O(n²)
10. Find the time complexity (upper bound) for the below iterative functions 10 Marks
A()
{
    for (i = 1; i < n; i = i*2)
    {
        printf("Ravi");
    }
}
Sol:
i = 1, 2, 4, …, n, i.e., 2⁰, 2¹, 2², …, 2ᵏ
The loop ends when 2ᵏ = n, i.e., k = log₂ n
∴ O(log₂ n)
11. Find the time complexity (upper bound) for the below recursive functions 10 Marks
T(n) = 1 + T(n-1)    ; n > 1
T(n) = 1             ; n = 1
Sol:
T(n) = 1 + T(n-1)………..(1)
T(n-1) = 1 + T(n-2)……..(2)
T(n-2) = 1 + T(n-3)……..(3)
Substituting (2) in (1):
T(n) = 1 + T(n-1)
     = 2 + T(n-2)………..(4)
Substituting (3) in (4):
     = 3 + T(n-3)
     = k + T(n-k)……….(5)
Let n - k = 1, so
k = n - 1……….(6)
Substituting (6) in (5):
T(n) = (n-1) + T(n-(n-1))
     = (n-1) + T(1)
     = n - 1 + 1
T(n) = n
∴ T(n) = O(n)
12. Explain with an example how a new variable count introduced in a program can be used to
find the number of steps needed by a program to solve a problem instance.
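As a minimal illustrative sketch (an assumed example in the style of the classic step-count method, not from the original text): a global variable count is incremented next to every program step of a simple array-summing function.

#include <stdio.h>

int count = 0;   /* global step counter */

int sum(int a[], int n)
{
    int s = 0;
    count++;                       /* step: the assignment s = 0 */
    for (int i = 0; i < n; i++) {
        count++;                   /* step: one loop test per iteration */
        s += a[i];
        count++;                   /* step: the addition/assignment */
    }
    count++;                       /* step: the final, failing loop test */
    count++;                       /* step: the return statement */
    return s;
}

int main(void)
{
    int a[] = {1, 2, 3, 4, 5};
    sum(a, 5);
    printf("steps = %d\n", count); /* prints 13, i.e. 2n + 3 for n = 5 */
    return 0;
}

For an array of n elements the program reports 2n + 3 steps, so the step count grows linearly with the input size.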
3. Define Knapsack problem and apply on the following set of data having bag capacity m = 15.
6 Marks
Knapsack problem: Given n objects, each with a profit pᵢ and a weight wᵢ, and a knapsack of
capacity m, choose fractions 0 ≤ xᵢ ≤ 1 of the objects so that ∑ xᵢwᵢ ≤ m and ∑ xᵢpᵢ is maximized.
Objects: 1  2  3  4  5  6  7
Profits: 10 5  15 7  6  18 3
Weights: 2  3  5  7  1  4  1
Solution:
Objects: 1  2    3  4  5  6   7
Profits: 10 5    15 7  6  18  3
Weights: 2  3    5  7  1  4   1
p/w:     5  1.67 3  1  6  4.5 3
Take objects in decreasing p/w order: objects 5, 1, 6, 3 and 7 fit completely (total weight 13),
then 2/3 of object 2 fills the remaining capacity of 2.
∑ xᵢwᵢ = 15
∑ xᵢpᵢ = 6 + 10 + 18 + 15 + 3 + (2/3)·5 = 55.33
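A runnable C sketch of this greedy procedure (an assumed implementation; the function name and the sort-by-ratio step are our own, not from the original text):

#include <stdio.h>

/* Fractional knapsack: take items in decreasing profit/weight order,
   taking a fraction of the last item if it does not fit completely. */
double fractionalKnapsack(double p[], double w[], int n, double m)
{
    int idx[n];                                  /* item indices, C99 VLA */
    for (int i = 0; i < n; i++) idx[i] = i;
    for (int i = 0; i < n - 1; i++)              /* sort indices by p/w ratio */
        for (int j = i + 1; j < n; j++)
            if (p[idx[j]] / w[idx[j]] > p[idx[i]] / w[idx[i]]) {
                int t = idx[i]; idx[i] = idx[j]; idx[j] = t;
            }

    double profit = 0.0, remaining = m;
    for (int i = 0; i < n && remaining > 0; i++) {
        int k = idx[i];
        if (w[k] <= remaining) {                 /* take the whole item */
            profit += p[k];
            remaining -= w[k];
        } else {                                 /* take the fitting fraction */
            profit += p[k] * (remaining / w[k]);
            remaining = 0;
        }
    }
    return profit;
}

int main(void)
{
    double p[] = {10, 5, 15, 7, 6, 18, 3};
    double w[] = {2, 3, 5, 7, 1, 4, 1};
    printf("Max profit = %.2f\n", fractionalKnapsack(p, w, 7, 15)); /* 55.33 */
    return 0;
}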
4. Write and explain selection sort algorithm with example of your choice. 10 Marks
for i ← 1 to n-1 do
    min j ← i
    min x ← A[i]
    for j ← i + 1 to n do
        if A[j] < min x then
            min j ← j
            min x ← A[j]
    A[min j] ← A[i]
    A[i] ← min x
For the first position in the sorted list, the whole list is scanned sequentially. The first
position is where 14 is stored presently; we search the whole list and find that 10 is the
lowest value. So we replace 14 with 10. After one iteration 10, which happens to be the minimum
value in the list, appears in the first position of the sorted list.
For the second position, where 33 is residing, we start scanning the rest of the list in a
linear manner.
We find that 14 is the second lowest value in the list and it should appear at the second
place. We swap these values.
After two iterations, two least values are positioned at the beginning in a sorted manner.
The same process is applied to the rest of the items in the array.
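A runnable C version of the pseudocode above (illustrative; the sample array 14, 33, 27, 10, 35, 19, 42, 44 is assumed from the walkthrough):

#include <stdio.h>

void selectionSort(int A[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;                     /* index of smallest element so far */
        for (int j = i + 1; j < n; j++)
            if (A[j] < A[min])
                min = j;
        int tmp = A[min];                /* swap minimum into position i */
        A[min] = A[i];
        A[i] = tmp;
    }
}

int main(void)
{
    int A[] = {14, 33, 27, 10, 35, 19, 42, 44};
    int n = sizeof(A) / sizeof(A[0]);
    selectionSort(A, n);
    for (int i = 0; i < n; i++) printf("%d ", A[i]); /* 10 14 19 27 33 35 42 44 */
    return 0;
}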
5. Briefly explain Traveling Salesman Problem (TSP) using brute force strategy with example
10 Marks
Travelling Salesman Problem (TSP) : Given a set of cities and distances between every pair of
cities, the problem is to find the shortest possible route that visits every city exactly once and
returns to the starting point.
Note the difference between Hamiltonian Cycle and TSP. The Hamiltonian cycle problem is to
find if there exists a tour that visits every city exactly once. Here we know that Hamiltonian Tour
exists (because the graph is complete) and in fact, many such tours exist, the problem is to find a
minimum weight Hamiltonian Cycle.
For example, consider a complete graph on four cities (the original figure is not reproduced here).
A TSP tour in the graph is 1-2-4-3-1, and the cost of the tour is 10 + 25 + 30 + 15 = 80.
The problem is a famous NP-hard problem; there is no known polynomial-time solution for it.
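A brute-force C sketch (an assumed implementation, not from the original): it recursively enumerates every permutation of the remaining cities and keeps the cheapest closed tour. The cost matrix below is one consistent with the quoted tour 1-2-4-3-1 of cost 80 (cities renumbered 0-3):

#include <stdio.h>
#include <limits.h>

#define V 4

int graph[V][V] = {
    { 0, 10, 15, 20},
    {10,  0, 35, 25},
    {15, 35,  0, 30},
    {20, 25, 30,  0}
};

int visited[V];
int best;

/* Try every order of the remaining cities (brute force over permutations). */
void tsp(int pos, int count, int cost)
{
    if (count == V) {                         /* all cities visited */
        int total = cost + graph[pos][0];     /* close the tour back to city 0 */
        if (total < best) best = total;
        return;
    }
    for (int next = 0; next < V; next++)
        if (!visited[next]) {
            visited[next] = 1;
            tsp(next, count + 1, cost + graph[pos][next]);
            visited[next] = 0;                /* backtrack: try the next order */
        }
}

int main(void)
{
    best = INT_MAX;
    visited[0] = 1;                           /* start the tour at city 0 */
    tsp(0, 1, 0);
    printf("Minimum tour cost = %d\n", best); /* prints 80 for this matrix */
    return 0;
}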
7. Apply Traveling Salesman Problem (TSP) for below graph using A as source vertex. 10 Marks
Sol:
The shortest path that originates and ends at A is A → B → C → D → E → F → A
8. Write selection sort algorithm and apply on following set of integers 64, 25, 12, 22, 11
6 Marks
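Solution (the algorithm is given in Question 4 above; below is a pass-by-pass trace, where each pass swaps the minimum of the unsorted part into the front position):
64 25 12 22 11 → 11 25 12 22 64 → 11 12 25 22 64 → 11 12 22 25 64 → 11 12 22 25 64
The array is sorted: 11 12 22 25 64.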
9. List down the steps for linear search and mention its best case, worst case and average case 4
Marks
And apply linear search on 10,15,30,70,80,60,20,90,40 to search key element 20.
Algorithm for Linear Search:
The algorithm for linear search can be broken down into the following steps:
Start: Begin at the first element of the collection of elements.
Compare: Compare the current element with the desired element.
Found: If the current element is equal to the desired element, return true or index to the current
element.
Move: Otherwise, move to the next element in the collection.
Repeat: Repeat the compare-and-move steps until the end of the collection is reached.
Not found: If the end of the collection is reached without finding the desired element, return that
the desired element is not in the array.
Time and Space Complexity of Linear Search:
Time Complexity:
Best Case: In the best case, the key might be present at the first index. So the best case complexity
is O(1)
Worst Case: In the worst case, the key might be present at the last index, i.e., at the end
opposite to the one from which the search started. So the worst-case complexity is O(N), where N
is the size of the list.
Average Case: O(N)
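A minimal C sketch of linear search (illustrative), applied to the question's data:

#include <stdio.h>

/* Returns the 0-based index of key in a[0..n-1], or -1 if absent. */
int linearSearch(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {10, 15, 30, 70, 80, 60, 20, 90, 40};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Key 20 found at index %d\n", linearSearch(a, n, 20)); /* index 6 */
    return 0;
}

For key 20 the scan makes 7 comparisons (against 10, 15, 30, 70, 80, 60, 20) and succeeds at index 6 (0-based), i.e., the 7th position.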
10. List down the steps involved for bubble sort and apply the same to sort 6 0 3 5
4 Marks
Bubble Sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent
elements if they are in the wrong order. This algorithm is not suitable for large data sets as its
average and worst-case time complexity is quite high, O(n²).
Bubble Sort Algorithm
In the Bubble Sort algorithm:
Traverse from the left and compare adjacent elements; the higher one is placed on the right side.
In this way, the largest element is moved to the rightmost end first.
The process is then continued to find the second largest and place it, and so on, until the data
is sorted. (A worked trace of 6 0 3 5 follows below.)
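Worked trace on 6 0 3 5:
Pass 1: (6,0) swap → 0 6 3 5; (6,3) swap → 0 3 6 5; (6,5) swap → 0 3 5 6
Pass 2: (0,3) in order; (3,5) in order → 0 3 5 6 (sorted)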
11. Write C Program or algorithm to Print all Distinct (Unique ) Elements in given Array 6 Marks
#include <stdio.h>

void printDistinct(int arr[], int n)
{
    for (int i = 0; i < n; i++)
    {
        /* Check whether arr[i] already appeared at an earlier index. */
        int j;
        for (j = 0; j < i; j++)
            if (arr[i] == arr[j])
                break;
        if (i == j)                 /* no earlier occurrence: print it */
            printf("%d ", arr[i]);
    }
}

int main()
{
    int arr[] = {6, 10, 5, 4, 9, 120, 4, 6, 10};
    int n = sizeof(arr)/sizeof(arr[0]);
    printDistinct(arr, n);          /* prints: 6 10 5 4 9 120 */
    return 0;
}
2. List down the advantages and limitations of divide & conquer technique 4 Marks
Advantages:
• For solving conceptually difficult problems like the Tower of Hanoi, divide & conquer is a
powerful tool
• Divide & conquer algorithms are well adapted for execution on multi-processor machines
Limitations:
• Recursion is slow
• For very simple problems it may be more complicated than an iterative approach, e.g., adding n
numbers
Assume the problem size n is a power of b. The recurrence for the running time T(n) is:
T(n) = a·T(n/b) + f(n)
where f(n) is a function that accounts for the time spent on dividing the problem into smaller
ones and on combining their solutions.
Therefore, the order of growth of T(n) depends on the values of the constants a & b and the
order of growth of the function f(n).
4. State master theorem and apply the same for recurrence relation T(n) = 2T(n/2) + 1 10 Marks
Theorem: If f(n) ∈ Θ(nᵈ) with d ≥ 0 in the recurrence equation
T(n) = a·T(n/b) + f(n),
then
T(n) = Θ(nᵈ)          if a < bᵈ
T(n) = Θ(nᵈ log n)    if a = bᵈ
T(n) = Θ(n^(log_b a)) if a > bᵈ
Let T(n) = 2T(n/2) + 1, solve using master theorem.
Solution:
Here: a=2
b=2
f(n) = Θ(1)
d=0
Therefore:
a > bᵈ, i.e., 2 > 2⁰ = 1
Case 3 of the master theorem holds. Therefore:
T(n) ∈ Θ(n^(log_b a)) = Θ(n^(log₂ 2)) = Θ(n)
5. Write and explain binary search algorithm with an example. 10 Marks
Binary Search, also known as half-interval search, is one of the most popular search techniques
for finding elements in a sorted array. Here, you have to make sure that the array is sorted.
The algorithm follows the divide and conquer approach, where the complete array is divided into two
halves and the element to be searched is compared with the middle element of the list. If the
element to be searched is less than the middle element, then the search is narrowed down to 1st half
of the array. Else, the search continues to the second half of the list.
Consider the following array and the search element to understand the Binary Search techniques.
Array considered: 09 17 25 34 49
Element to be searched: 34
Step 1: Start the Binary Search algorithm by using the formula middle = (left + right )/2 Here, left = 0
and right = 4. So the middle will be 2. This means 25 is the middle element of the array.
Step 2: Now, you have to compare 25 with our search element, i.e. 34. Since 25 < 34, left = middle + 1
and right = 4.
Step 3: So, the new middle = (3 + 4)/2 = 3.5, which is taken as 3.
Step 4: Now, If you observe, the element to be searched = middle found in the previous step. This
implies that the element is found at a[3].
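An iterative C sketch of binary search (illustrative), using the array from the steps above:

#include <stdio.h>

/* Iterative binary search on a sorted array; returns index or -1. */
int binarySearch(const int a[], int n, int key)
{
    int left = 0, right = n - 1;
    while (left <= right) {
        int middle = (left + right) / 2;
        if (a[middle] == key)
            return middle;           /* found */
        else if (a[middle] < key)
            left = middle + 1;       /* search the right half */
        else
            right = middle - 1;      /* search the left half */
    }
    return -1;                       /* not present */
}

int main(void)
{
    int a[] = {9, 17, 25, 34, 49};
    printf("%d\n", binarySearch(a, 5, 34));  /* prints 3, matching the steps above */
    return 0;
}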
6. Write and explain quick sort algorithm with an example. 10 Marks
QuickSort(A, p, r)
{
    if (p < r)
    {
        q = Partition(A, p, r);
        QuickSort(A, p, q-1);
        QuickSort(A, q+1, r);
    }
}

Partition(A, p, r)
{
    x = A[r];                  // last element is the pivot
    i = p - 1;
    for (j = p; j <= r-1; j++)
    {
        if (A[j] <= x)
        {
            i = i + 1;
            exchange A[i] and A[j];
        }
    }
    exchange A[i+1] and A[r];  // place the pivot in its final position
    return i + 1;
}
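A brief worked example (an assumed instance, not from the original): for A = [5, 3, 8, 1] with p = 0, r = 3, Partition picks pivot x = 1; no element is ≤ 1, so the pivot is swapped to the front, giving [1, 3, 8, 5] and returning index 0. QuickSort then recurses on the right part [3, 8, 5]: pivot 5 partitions it into [3, 5, 8]. All remaining parts have one element, so the array is sorted: [1, 3, 5, 8].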
7. Briefly explain working of insertion sort algorithm with an example. 6 Marks
Solution:
To understand the working of the insertion sort algorithm, let's take an unsorted array. It
will be easier to understand insertion sort via an example.
Here, 31 is greater than 12. That means both elements are already in ascending order. So,
for now, 12 is stored in a sorted sub-array.
Now, move to the next two elements and compare them.
Here, 25 is smaller than 31. So, 31 is not at the correct position. Now, swap 31 with 25. Along
with swapping, insertion sort will also check it with all elements in the sorted array.
For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the
sorted array remains sorted after swapping.
Now, two elements in the sorted array are 12 and 25. Move forward to the next elements,
which are 31 and 8.
Now, the sorted array has three items: 8, 12 and 25. Move to the next items, which are
31 and 32.
Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.
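A runnable C sketch of insertion sort (illustrative; the sample array 12, 31, 25, 8, 32, 17 is assumed from the walkthrough above):

#include <stdio.h>

void insertionSort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];              /* element to insert into the sorted prefix */
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];         /* shift larger elements to the right */
            j--;
        }
        a[j + 1] = key;
    }
}

int main(void)
{
    int a[] = {12, 31, 25, 8, 32, 17};
    int n = sizeof(a) / sizeof(a[0]);
    insertionSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]); /* 8 12 17 25 31 32 */
    return 0;
}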
8. Give an analysis of the merge sort algorithm. What types of datasets work best for Merge
Sort? How does the Divide and Conquer strategy work with Merge Sort? 6 Marks
What types of datasets work best for Merge Sort?
Merge sort works well on any type of dataset, be it large or small. Quicksort, by contrast, is
generally more efficient for small datasets or for datasets where the elements are more or less
evenly distributed over the range.
How does the Divide and Conquer Strategy work with Merge Sort ?
The Divide and Conquer strategy divides the problem into smaller parts, solves them, and
combines the small solved sub problems to get the final solution. The same happens with
the Merge Sort algorithm. It keeps on dividing the array into two halves until their lengths
become 1. Then it starts combining them two at a time. First, the unit cells are combined
into sorted arrays of length 2 and these sorted subarrays are combined into another bigger
sorted subarrays and so on until the whole sorted array is formed.
Merge sort is one of the most efficient sorting algorithms. It works on the principle of Divide
and Conquer. Merge sort repeatedly breaks down a list into several sublists until each sub
list consists of a single element and merging those sublists in a manner that results into a
sorted list.
The top-down merge sort approach is the methodology that uses recursion. It starts at the top
and proceeds downwards, with each recursive turn asking the same question, "What is required to
be done to sort the array?", and having the answer, "Split the array into two, make a recursive
call, and merge the results", until one gets to the bottom of the array-tree.
Top-down Implementation
1. Divide the unsorted list into n sublists, each comprising 1 element (a list of 1 element is
considered sorted).
2. Repeatedly merge sublists to produce newly sorted sublists until there is only 1 sublist
remaining. This will be the sorted list.
The first element of both lists is compared. If sorting in ascending order, the smaller element
among two becomes a new element of the sorted list. This procedure is repeated until both
the smaller sublists are empty and the newly combined sublist covers all the elements of
both the sublists.
Merging of two lists
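For the analysis part of the question: merge sort satisfies the recurrence T(n) = 2T(n/2) + Θ(n) (two half-size sorts plus a linear merge), which solves to Θ(n log n) in the best, average, and worst cases. A top-down C sketch (illustrative; the sample array is arbitrary):

#include <stdio.h>

/* Merge the two sorted halves a[l..m] and a[m+1..r]. */
void merge(int a[], int l, int m, int r)
{
    int n1 = m - l + 1, n2 = r - m;
    int L[n1], R[n2];                         /* temporary halves (C99 VLAs) */
    for (int i = 0; i < n1; i++) L[i] = a[l + i];
    for (int j = 0; j < n2; j++) R[j] = a[m + 1 + j];

    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2)                  /* take the smaller head element */
        a[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) a[k++] = L[i++];           /* copy any leftovers */
    while (j < n2) a[k++] = R[j++];
}

void mergeSort(int a[], int l, int r)
{
    if (l < r) {
        int m = l + (r - l) / 2;
        mergeSort(a, l, m);                   /* sort the left half */
        mergeSort(a, m + 1, r);               /* sort the right half */
        merge(a, l, m, r);                    /* combine the sorted halves */
    }
}

int main(void)
{
    int a[] = {38, 27, 43, 3, 9, 82, 10};
    int n = sizeof(a) / sizeof(a[0]);
    mergeSort(a, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]); /* 3 9 10 27 38 43 82 */
    return 0;
}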
11. Briefly explain decrease and conquer with two advantages and disadvantages 6 Marks
Decrease or reduce problem instance to smaller instance of the same problem and extend
solution. Conquer the problem by solving smaller instance of the problem. Extend solution
of smaller instance to obtain solution to original problem. Basic idea of the decrease-and-
conquer technique is based on exploiting the relationship between a solution to a given
instance of a problem and a solution to its smaller instance. This approach is also known as
incremental or inductive approach. Decrease and conquer is a technique used to solve
problems by reducing the size of the input data at each step of the solution process. This
technique is similar to divide-and-conquer, in that it breaks down a problem into smaller sub
problems, but the difference is that in decrease-and-conquer, the size of the input data is
reduced at each step. The technique is used when it’s easier to solve a smaller version of the
problem, and the solution to the smaller problem can be used to find the solution to the
original problem.
Disadvantages of Decrease and Conquer:
1. Problem-specific: the technique is not applicable to all problems and may not be
suitable for more complex problems.
2. Implementation complexity: the technique can be more complex to implement than
other techniques like divide-and-conquer, and may require more careful planning.
Variations of decrease and conquer:
1. Decrease by a constant
2. Decrease by a constant factor
3. Variable size decrease
Decrease by a Constant: In this variation, the size of an instance is reduced by the same
constant on each iteration of the algorithm. Typically, this constant is equal to one,
although other constant-size reductions do happen occasionally. Example problems:
Insertion sort
Graph search algorithms: DFS, BFS
Topological sorting
Algorithms for generating permutations, subsets
Decrease by a Constant Factor: This technique suggests reducing a problem instance by the
same constant factor on each iteration of the algorithm. In most applications, this constant
factor is equal to two; a reduction by a factor other than two is rare. Decrease-by-a-constant-factor
algorithms are very efficient, especially when the factor is greater than 2, as in the
fake-coin problem. Example problems:
Binary search
Fake-coin problems
Russian peasant multiplication
Variable-Size-Decrease: In this variation, the size-reduction pattern varies from one
iteration of the algorithm to another. For example, in the problem of finding the gcd of two
numbers, although the value of the second argument is always smaller on the right-hand side
than on the left-hand side, it decreases neither by a constant nor by a constant factor.
Example problems:
Computing median and selection problem.
Interpolation Search
Euclid’s algorithm
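As a quick sketch of the Euclid example mentioned above (illustrative), in C:

#include <stdio.h>

/* Euclid's algorithm: a variable-size-decrease example; the second
   argument shrinks by a varying amount on each call. */
int gcd(int m, int n)
{
    return (n == 0) ? m : gcd(n, m % n);
}

int main(void)
{
    printf("%d\n", gcd(60, 24));   /* prints 12 */
    return 0;
}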
A problem may sometimes be solved by both the decrease-by-a-constant and the
decrease-by-a-factor variations, and the implementations can be either recursive or iterative.
Iterative implementations may require more coding effort, but they avoid the overhead that
accompanies recursion.
Module 4 - Dynamic Programming and Greedy technique
1. Define Dynamic programming and briefly list down its properties 4 Marks
Dynamic programming (DP) is a general algorithm design technique for solving problems with
overlapping sub-problems.
Dynamic Programming Properties
• An instance is solved using the solutions for smaller instances.
• The solutions for a smaller instance might be needed multiple times, so store their results in a
table.
• Thus each smaller instance is solved only once.
• Additional space is used to save time.
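A minimal C sketch illustrating these properties (an assumed example, not from the original text): memoized Fibonacci solves each smaller instance once, stores it, and reuses it, trading extra space for time.

#include <stdio.h>

long long memo[91];   /* stores solutions of smaller instances */

long long fib(int n)
{
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];        /* reuse a stored sub-solution */
    return memo[n] = fib(n - 1) + fib(n - 2); /* solve once, then store */
}

int main(void)
{
    printf("%lld\n", fib(50));               /* prints 12586269025 */
    return 0;
}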
2. Bring out at least three differences between divide & conquer and dynamic programming.
4 Marks
Divide & Conquer vs Dynamic Programming:
1. Divide & conquer partitions a problem into independent smaller sub-problems; dynamic
programming partitions a problem into overlapping sub-problems.
2. Divide & conquer does not store solutions of sub-problems (identical sub-problems may arise,
so the same computations are performed repeatedly); dynamic programming stores solutions of
sub-problems and thus avoids calculating the same quantity twice.
3. Divide & conquer algorithms are top-down: they logically progress from the initial instance
down to the smallest sub-instances via intermediate sub-instances. Dynamic programming
algorithms are bottom-up: the smallest sub-problems are explicitly solved first and their
results are used to construct solutions to progressively larger sub-instances.
3. Compare and contrast between greedy method and dynamic programming method 4 Marks
• LIKE dynamic programming, greedy method solves optimization problems.
• LIKE dynamic programming, greedy method problems exhibit optimal substructure
• UNLIKE dynamic programming, greedy method problems exhibit the greedy choice property,
which avoids backtracking
4. List down the applications of the greedy strategy 4 Marks
• Optimal solutions:
Change making
Minimum Spanning Tree (MST)
Single-source shortest paths
Huffman codes
• Approximations:
Traveling Salesman Problem (TSP)
Fractional Knapsack problem
5. Explain the working of the Floyd-Warshall (all pairs shortest path) algorithm.
Suppose we have a graph G[][] with V vertices numbered 1 to N. We have to compute a
shortestPathMatrix[][] where shortestPathMatrix[i][j] represents the shortest path
between vertices i and j. Obviously the shortest path from i to j will have
some k intermediate nodes. The idea behind the Floyd-Warshall algorithm is to
treat each vertex from 1 to N as an intermediate node, one by one. This is the
optimal substructure property the algorithm exploits (the original illustration is omitted):
1. Initialize the solution matrix to be the same as the input graph matrix.
2. Then update the solution matrix by considering all vertices as intermediate vertices, one by
one: pick each vertex in turn and update all shortest paths that include the picked vertex as
an intermediate vertex.
3. When we pick vertex k as an intermediate vertex, we have already considered
vertices {0, 1, 2, …, k-1} as intermediate vertices.
4. For every pair (i, j) of source and destination vertices, there are two possible cases:
   • k is not an intermediate vertex on the shortest path from i to j: keep the value of
     dist[i][j] as it is.
   • k is an intermediate vertex on the shortest path from i to j: update dist[i][j] to
     dist[i][k] + dist[k][j], if dist[i][j] > dist[i][k] + dist[k][j].
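A minimal C sketch of this triple-loop update (illustrative; the cost matrix below is a hypothetical example, since the question's graph is not reproduced here):

#include <stdio.h>

#define V 4
#define INF 99999

/* Improve dist[i][j] by routing through each intermediate vertex k in turn. */
void floydWarshall(int dist[V][V])
{
    for (int k = 0; k < V; k++)
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main(void)
{
    int dist[V][V] = {                /* hypothetical edge costs */
        {0,   5,   INF, 10},
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1},
        {INF, INF, INF, 0}
    };
    floydWarshall(dist);
    for (int i = 0; i < V; i++) {     /* print the all-pairs distance matrix */
        for (int j = 0; j < V; j++)
            if (dist[i][j] == INF) printf("%7s", "INF");
            else printf("%7d", dist[i][j]);
        printf("\n");
    }
    return 0;
}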
6. Apply all pair shortest path algorithm (Floyd Warshall) for the below graph
10 Marks
7. Apply Bellman-Ford algorithm for the below graph 10 Marks
The Bellman-Ford algorithm is guaranteed to find the shortest path in a graph, similar
to Dijkstra's algorithm. Although Bellman-Ford is slower than Dijkstra's algorithm, it is capable
of handling graphs with negative edge weights, which makes it more versatile. The shortest
path cannot be found if there exists a negative cycle in the graph: if we continue to go around
the negative cycle an infinite number of times, the cost of the path will continue to
decrease (even though the length of the path is increasing). As a result, Bellman-Ford is also
capable of detecting negative cycles, which is an important feature.
Step 1: Initialize a distance array Dist[] to store the shortest distance of each vertex from the
source vertex. Initially the distance of the source will be 0 and the distance of every other
vertex will be INFINITY.
Why Relaxing Edges N-1 times, gives us Single Source Shortest Path?
In the worst-case scenario, a shortest path between two vertices can have at most N-1 edges,
where N is the number of vertices. This is because a simple path in a graph with N vertices can
have at most N-1 edges, as it’s impossible to form a closed loop without revisiting a vertex. By
relaxing edges N-1 times, the Bellman-Ford algorithm ensures that the distance estimates for
all vertices have been updated to their optimal values, assuming the graph doesn’t contain any
negative-weight cycles reachable from the source vertex. If a graph contains a negative-weight
cycle reachable from the source vertex, the algorithm can detect it after N-1 iterations, since
the negative cycle disrupts the shortest path lengths. In summary, relaxing edges N-1 times in
the Bellman-Ford algorithm guarantees that the algorithm has explored all possible paths of
length up to N-1, which is the maximum possible length of a shortest path in a graph
with N vertices. This allows the algorithm to correctly calculate the shortest paths from the
source vertex to all other vertices, given that there are no negative-weight cycles.
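A C sketch of the relaxation loop described above (illustrative; the edge list is a hypothetical example, since the question's figure is not reproduced here): relax all edges V-1 times, then make one extra pass to detect a negative cycle.

#include <stdio.h>
#include <limits.h>

struct Edge { int u, v, w; };

int main(void)
{
    struct Edge e[] = {{0,1,4}, {0,2,5}, {1,3,7}, {2,1,-3}, {3,2,2}};
    int V = 4, E = 5, src = 0;
    int dist[4];

    for (int i = 0; i < V; i++) dist[i] = INT_MAX;
    dist[src] = 0;

    for (int pass = 1; pass <= V - 1; pass++)        /* relax all edges V-1 times */
        for (int j = 0; j < E; j++)
            if (dist[e[j].u] != INT_MAX && dist[e[j].u] + e[j].w < dist[e[j].v])
                dist[e[j].v] = dist[e[j].u] + e[j].w;

    for (int j = 0; j < E; j++)                      /* extra pass: negative cycle? */
        if (dist[e[j].u] != INT_MAX && dist[e[j].u] + e[j].w < dist[e[j].v]) {
            printf("Negative-weight cycle detected\n");
            return 0;
        }

    for (int i = 0; i < V; i++) printf("dist[%d] = %d\n", i, dist[i]);
    return 0;
}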
9. Why Dijkstra’s Algorithms fails for the Graphs having Negative Edges ? 6 Marks
The problem with negative weights arises from the fact that Dijkstra’s algorithm assumes that
once a node is added to the set of visited nodes, its distance is finalized and will not change.
However, in the presence of negative weights, this assumption can lead to incorrect results.
Consider the following graph for the example:
In the above graph, A is the source node; among the edges A to B and A to C, A to B has the
smaller weight, so Dijkstra assigns the shortest distance of B as 2. But because of the
existence of a negative edge from C to B, the actual shortest distance reduces to 1, which
Dijkstra fails to detect.
10. List down the steps involved in Dijkstra's algorithm 6 Marks
Algorithm
1. Create a set sptSet (shortest path tree set) that keeps track of vertices included in the
shortest path tree, i.e., whose minimum distance from the source is calculated and
finalized. Initially, this set is empty.
2. Assign a distance value to all vertices in the input graph. Initialize all distance values
as INFINITE. Assign the distance value as 0 for the source vertex so that it is picked
first.
3. While sptSet doesn’t include all vertices
1. Pick a vertex u that is not there in sptSet and has a minimum distance value.
2. Include u to sptSet.
3. Then update the distance value of all adjacent vertices of u.
1. To update the distance values, iterate through all adjacent vertices.
2. For every adjacent vertex v, if the sum of the distance value of u (from source)
and weight of edge u-v, is less than the distance value of v, then update the
distance value of v.
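A C sketch of these steps using an adjacency matrix, O(V²) (illustrative; the sample graph is a hypothetical example):

#include <stdio.h>
#include <limits.h>

#define V 5

/* Step 3.1: pick the unvisited vertex with the smallest tentative distance. */
int minDistance(const int dist[], const int inSpt[])
{
    int min = INT_MAX, minIndex = -1;
    for (int v = 0; v < V; v++)
        if (!inSpt[v] && dist[v] < min) { min = dist[v]; minIndex = v; }
    return minIndex;
}

void dijkstra(int graph[V][V], int src, int dist[])
{
    int inSpt[V] = {0};                            /* shortest path tree set */
    for (int i = 0; i < V; i++) dist[i] = INT_MAX; /* step 2: all INFINITE */
    dist[src] = 0;                                 /* source picked first */

    for (int count = 0; count < V - 1; count++) {
        int u = minDistance(dist, inSpt);
        if (u == -1) break;                        /* rest are unreachable */
        inSpt[u] = 1;                              /* step 3.2: include u */
        for (int v = 0; v < V; v++)                /* step 3.3: relax neighbours */
            if (!inSpt[v] && graph[u][v] &&
                dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
}

int main(void)
{
    int graph[V][V] = {                            /* 0 means "no edge" */
        {0, 4, 1, 0, 0},
        {4, 0, 2, 5, 0},
        {1, 2, 0, 8, 0},
        {0, 5, 8, 0, 3},
        {0, 0, 0, 3, 0}
    };
    int dist[V];
    dijkstra(graph, 0, dist);
    for (int i = 0; i < V; i++) printf("dist[%d] = %d\n", i, dist[i]);
    return 0;
}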
11. Why choose the greedy approach? Explain the greedy choice property.
The greedy approach has a few tradeoffs that can make it suitable for optimization
problems. One prominent reason to choose it is that it reaches a feasible solution
immediately. In the activity selection problem (explained below), if more activities can be
done before finishing the current activity, these activities can be performed within the same
time. Another reason is that it divides a problem recursively based on a condition, with no
need to combine all the solutions. In the activity selection problem, the "recursive
division" step is achieved by scanning the list of items only once and considering certain
activities.
Greedy choice property:
This property says that the globally optimal solution can be obtained by making a
locally optimal solution (Greedy). The choice made by a Greedy algorithm may depend
on earlier choices but not on the future. It iteratively makes one Greedy choice after
another and reduces the given problem to a smaller one.
12. List down any five characteristic components of greedy algorithm 10 Marks
1. The feasible solution: A subset of given inputs that satisfies all specified
constraints of a problem is known as a “feasible solution”.
2. Optimal solution: The feasible solution that achieves the desired extremum is
called an “optimal solution”. In other words, the feasible solution that either
minimizes or maximizes the objective function specified in a problem is known as
an “optimal solution”.
3. Feasibility check: It investigates whether the selected input fulfils all constraints
mentioned in a problem or not. If it fulfils all the constraints then it is added to a
set of feasible solutions; otherwise, it is rejected.
4. Optimality check: It investigates whether a selected input produces either a
minimum or maximum value of the objective function while fulfilling all the specified
constraints. If an element in a solution set produces the desired extremum, then it is
added to a set of optimal solutions.
5. Optimal substructure property: The globally optimal solution to a problem
includes the optimal sub solutions within it.
6. Greedy choice property: The globally optimal solution is assembled by selecting
locally optimal choices. The greedy approach applies some locally optimal criteria
to obtain a partial solution that seems to be the best at that moment and then find
out the solution for the remaining sub-problem.
13. List down any five the advantages and disadvantages of greedy approach. 10 Marks
14. Characteristics of Greedy approach and any five Applications of Greedy Algorithms
10 Marks
Characteristics of the Greedy approach:
1. There is an ordered list of resources (profit, cost, value, etc.).
2. The maximum of all the resources (max profit, max value, etc.) is taken.
3. For example, in the fractional knapsack problem, the maximum value/weight ratio is
taken first, according to the available capacity.
Module 5 - Backtracking
1. In this case, S represents the problem's starting point. You start at S and work your
way to solution S1 via the midway point M1. However, you discovered that solution
S1 is not a viable solution to our problem. As a result, you backtrack (return) from
S1, return to M1, return to S, and then look for the feasible solution S2. This process
is repeated until you arrive at a workable solution.
2. S1 and S2 are not viable options in this case. According to this example, only S3 is a
viable solution. Looking at this example, you can see that we go through all
possible combinations until we find a viable solution. This is why backtracking is
described as a brute-force algorithmic technique.
3. A "space state tree" is the above tree representation of a problem. It represents all
possible states of a given problem (solution or non-solution).
Step 3: If the current point is not an endpoint, backtrack and explore other points,
then repeat the preceding steps.
There are the following scenarios in which you can use the backtracking:
It is used to solve a variety of problems. You can use it, for example, to find a
feasible solution to a decision problem.
In some cases, it is used to find all feasible solutions to the enumeration problem.
3. Maze Solving:
Backtracking algorithms are applied to solve maze problems, where the objective is to
find a path from the starting point to the destination. By exploring different paths and
backtracking when reaching dead ends, the algorithm determines a valid solution.
4. Knight's Tour Problem:
The backtracking algorithm is utilized to solve the Knight’s Tour problem, which involves
finding a sequence of moves for a knight on a chessboard to visit every square exactly
once. It helps in identifying all possible tours or a single valid tour.
6. How does the backtracking algorithm differ from other search algorithms? Can the
backtracking algorithm handle problems with a large search space? 6Marks
8. Draw state space tree for N queens problem with 4 *4 chess board having 4 queens
Q1,Q2,Q3,Q4. 10 Marks
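The state space tree itself is a figure and cannot be reproduced in text. As an illustrative companion (an assumed sketch, not from the original), the following C program explores exactly that tree: it places queens row by row, prunes unsafe columns, backtracks on dead ends, and prints the two solutions of the 4-queens instance.

#include <stdio.h>

#define N 4

int col[N];   /* col[r] = column of the queen placed in row r */

/* Check whether a queen at (row, c) conflicts with the earlier rows. */
int safe(int row, int c)
{
    for (int r = 0; r < row; r++)
        if (col[r] == c || row - r == c - col[r] || row - r == col[r] - c)
            return 0;     /* same column or same diagonal */
    return 1;
}

/* Explore the state space tree row by row, backtracking on dead ends. */
void solve(int row)
{
    if (row == N) {                       /* all queens placed: a leaf solution */
        for (int r = 0; r < N; r++)
            printf("Q%d -> column %d  ", r + 1, col[r] + 1);
        printf("\n");
        return;
    }
    for (int c = 0; c < N; c++)
        if (safe(row, c)) {
            col[row] = c;
            solve(row + 1);               /* descend into this branch */
        }                                 /* returning here = backtracking */
}

int main(void)
{
    solve(0);                             /* prints the two 4-queens solutions */
    return 0;
}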
10. For a given set {3, 34, 4, 12, 5, 2} and the target sum = 9. Define a function
and use recursive method to check whether there exists a subset with the
given sum or not. 6 Marks
#include <stdio.h>

// Returns 1 if some subset of set[0..n-1] sums exactly to target.
int isSubsetSum(int set[], int n, int target)
{
    // Base cases
    if (target == 0) return 1;     // the empty subset achieves sum 0
    if (n == 0) return 0;          // no items left, target not reached
    if (set[n-1] > target)         // last item too large: it must be excluded
        return isSubsetSum(set, n-1, target);
    // Otherwise: exclude the last item, or include it and reduce the target.
    return isSubsetSum(set, n-1, target)
        || isSubsetSum(set, n-1, target - set[n-1]);
}

int main()
{
    int set[] = {3, 34, 4, 12, 5, 2};
    int n = sizeof(set)/sizeof(set[0]);
    int sum = 9;
    if (isSubsetSum(set, n, sum))
        printf("Found a subset with the given sum\n");  // e.g. {4, 5}
    else
        printf("No subset with the given sum\n");
    return 0;
}
11. Find the minimum spanning tree (MST) by applying Prim's algorithm with B as
source vertex. 10 Marks
(Steps 1 to 5 are shown as figures in the original and are omitted here.)
12. Define minimum spanning tree (MST) and explain the working principle of Prim's
algorithm. 10 Marks
Minimum Spanning Tree: A minimum spanning tree can be defined as the spanning tree
in which the sum of the weights of the edges is minimum. The weight of the spanning
tree is the sum of the weights given to the edges of the spanning tree.
Prim's Algorithm is a greedy algorithm that is used to find the minimum spanning tree
of a graph. Prim's algorithm finds the subset of edges that includes every vertex of
the graph such that the sum of the weights of the edges is minimized.
Prim's algorithm starts with a single node and explores all the adjacent nodes with all
the connecting edges at every step. The edges with minimal weights that cause no
cycles in the graph get selected.
The steps to implement Prim's algorithm are given as follows:
o First, initialize the MST with a chosen starting vertex.
o Now, we have to find all the edges that connect the tree in the above step with the
new vertices. From the edges found, select the minimum edge and add it to the tree.
o Repeat the second step until the MST includes all vertices of the graph.
1. Define an algorithm.
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining
a required output for any legitimate input in a finite amount of time.
2. Write the algorithm to find the nth Fibonacci number using recursion.
The nth Fibonacci number is defined as F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1.
algorithm F(n):
    // INPUT
    //    n = some non-negative integer
    // OUTPUT
    //    the nth number in the Fibonacci sequence
    if n <= 1: return n
    else: return F(n - 1) + F(n - 2)
Analyzing the time complexity of the iterative version is a lot more straightforward than for its
recursive counterpart. In this case, the most costly operation is assignment. Firstly, the
assignments of F[0] and F[1] cost O(1) each. Secondly, the loop performs one assignment per
iteration and executes n - 1 times, costing a total of O(n).
Therefore, the iterative algorithm has a time complexity of O(n) + O(1) + O(1) = O(n).
• Constant: 1
• Logarithmic: log n
• Linear: n
• N-log-n: n log n
• Quadratic: n²
• Cubic: n³
• Exponential: 2ⁿ
• Factorial: n!
5. Explain briefly any two asymptotic notations.
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some
positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some nonnegative integer n₀ such that t(n) ≥ c·g(n) for all n ≥ n₀.
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and
below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive
constants c₁ and c₂ and some nonnegative integer n₀ such that c₁·g(n) ≤ t(n) ≤ c₂·g(n) for all
n ≥ n₀.
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n₀ such that t(n) ≤ c·g(n) for all n ≥ n₀.
6. Find the time complexity (upper bound) for the below iterative functions
1. A(int n)
{
    int i = 1, s = 1;
    while (s <= n)
    {
        i++;
        s = s + i;
        printf("Ravi");
    }
}
Sol:
s takes the values 1, 3, 6, 10, …; after k iterations s = k(k+1)/2, while i takes the values 1, 2, 3, …, k.
The loop stops when s > n, i.e., when
k(k+1)/2 > n, i.e., (k² + k)/2 > n,
so k = O(√n)
∴ the time complexity is O(√n)
2. A()
{
    int i, j, k, n;
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= i; j++)
        {
            for (k = 1; k <= 100; k++)
            {
                printf("Ravi");
            }
        }
    }
}
Sol:
i: 1        2        3        4        5        …  n
j: 1 time   2 times  3 times  4 times  5 times  …  n times
k: 100      200      300      400      500      …  n·100
Total = 100 + 2·100 + 3·100 + 4·100 + 5·100 + … + n·100
      = 100(1 + 2 + 3 + … + n)
      = 100 · n(n+1)/2
∴ O(n²)
3. A()
{
    for (i = 1; i < n; i = i*2)
    {
        printf("Ravi");
    }
}
Sol:
i = 1, 2, 4, …, n, i.e., 2⁰, 2¹, 2², …, 2ᵏ
The loop ends when 2ᵏ = n, i.e., k = log₂ n
∴ O(log₂ n)
4. A()
{
    int i, j, k;
    for (i = n/2; i <= n; i++)          // executes n/2 times
        for (j = 1; j <= n; j = 2*j)    // executes log₂ n times
            for (k = 1; k <= n; k++)    // executes n times
                printf("Ravi");
}
Sol:
Total count = (n/2) · log₂ n · n
∴ O(n² log₂ n)
5. Find the time complexity (upper bound) for the below recursive functions
1. A(n)
{
    if (n > 1)
        return A(n - 1);
}
Sol:
T(n) = 1 + T(n-1)    ; n > 1
T(n) = 1             ; n = 1
T(n)=1+T(n-1)…………………….(1)
T(n-1)=1+T(n-2)………………….(2)
T(n-2)=1+T(n-3)………………….(3)
Substitute (2) in (1)
T(n)=1+T(n-1)
=2+T(n-2)……(4)
Substitute (3) in (4)
=3+T(n-3)
….
….
….
….
=K+T(n-k)…………….(5)
Then n-k=1
K=n-1……………..(6)
Substituting 6 in 5
=(n-1)+T(n-(n-1))
=(n-1)+T(1)
=n-1+1
=n
T(n)=n
∴O(n)
2. T(n) = 1 + T(n-1)    ; n > 1
   T(n) = 1             ; n = 1
Sol:
T(n)=1+T(n-1)………..(1)
T(n-1)=1+T(n-2)……..(2)
T(n-2)=1+T(n-3)……..(3)
Substituting 2 in 1
T(n)=1+T(n-1)
=2+T(n-2)………..(4)
Substituting 3 in 4
=3+ T(n-3)
=k+T(n-k)……….(5)
Let n-k=1, so
k=n-1……….(6)
Substituting 6 in 5
= (n-1)+T(n-(n-1))
=(n-1)+T(1)
=n-1+1
T(n)=n
∴ T(n) = O(n)
Module 2
13. Apply bubble sort algorithm on the following set of integers: 8, 5, 7, 3, 2
Pass-1: 8 5 7 3 2 → 5 8 7 3 2 → 5 7 8 3 2 → 5 7 3 8 2 → 5 7 3 2 8
Pass-2: 5 7 3 2 8 → 5 7 3 2 8 → 5 3 7 2 8 → 5 3 2 7 8
Pass-3: 5 3 2 7 8 → 3 5 2 7 8 → 3 2 5 7 8 → 3 2 5 7 8
Pass-4: 3 2 5 7 8 → 2 3 5 7 8 (sorted)
14. Write an algorithm to find the uniqueness of elements in an array and give the mathematical
analysis of this non-recursive algorithm with all steps.
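A standard answer sketch, based on the usual textbook UniqueElements algorithm:

Algorithm UniqueElements(A[0..n-1])
// Determines whether all the elements in a given array are distinct
for i ← 0 to n-2 do
    for j ← i+1 to n-1 do
        if A[i] = A[j] return false
return true

Mathematical analysis steps: (1) the input size is n; (2) the basic operation is the comparison A[i] = A[j]; (3) the count depends not only on n but also on the input type, so consider the worst case (arrays with no equal elements, or whose only equal pair is the last one compared); (4) set up the summation C_worst(n) = ∑ (i = 0 to n-2) ∑ (j = i+1 to n-1) 1; (5) simplify:
C_worst(n) = ∑ (i = 0 to n-2) (n-1-i) = (n-1) + (n-2) + … + 1 = n(n-1)/2 ∈ Θ(n²).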
16. Apply the greedy knapsack method on the following data with bag capacity m = 15.
Objects:    1   2   3   4   5   6   7
Profit (P): 5   10  15  7   8   9   4
Weight (w): 1   3   5   4   1   3   2
n (no of items): 7
Solution (take objects in decreasing p/w order):
Object   Profit   Weight   Remaining capacity
5        8        1        15 - 1 = 14
1        5        1        14 - 1 = 13
2        10       3        13 - 3 = 10
3        15       5        10 - 5 = 5
6        9        3        5 - 3 = 2
7        4        2        2 - 2 = 0
The maximum profit is 51.
17. Write and explain selection sort algorithm with example of your choice.
for i ← 1 to n-1 do
    min j ← i
    min x ← A[i]
    for j ← i + 1 to n do
        if A[j] < min x then
            min j ← j
            min x ← A[j]
    A[min j] ← A[i]
    A[i] ← min x
For the first position in the sorted list, the whole list is scanned sequentially. The first
position where 14 is stored presently, we search the whole list and find that 10 is the lowest
value.
So we replace 14 with 10. After one iteration 10, which happens to be the minimum value in
the list, appears in the first position of the sorted list.
For the second position, where 33 is residing, we start scanning the rest of the list in a
linear manner.
We find that 14 is the second lowest value in the list and it should appear at the second
place. We swap these values.
After two iterations, two least values are positioned at the beginning in a sorted manner.
The same process is applied to the rest of the items in the array.
Module -3
18. Write and explain binary search algorithm with an example
Binary Search, also known as half-interval search, is one of the most popular search
techniques for finding elements in a sorted array. Here, you have to make sure that the array is
sorted.
The algorithm follows the divide and conquer approach, where the complete array is divided
into two halves and the element to be searched is compared with the middle element of the
list. If the element to be searched is less than the middle element, then the search is
narrowed down to 1st half of the array. Else, the search continues to the second half of the
list.
Consider the following array and the search element to understand the Binary Search
techniques.
Array considered: 09 17 25 34 49
Element to be searched: 34
Step 1: Start the Binary Search algorithm by using the formula middle = (left + right )/2 Here,
left = 0 and right = 4. So the middle will be 2. This means 25 is the middle element of the
array.
Step 2: Now, you have to compare 25 with our search element, i.e. 34. Since 25 < 34, left =
middle + 1 and right = 4.
Step 3: So, the new middle = (3 + 4)/2 = 3.5, which is taken as 3.
Step 4: Now, If you observe, the element to be searched = middle found in the previous
step. This implies that the element is found at a[3].
19. State the master theorem (general form) and apply the same for the recurrence relation
T(n) = 3T(n/2) + n².
For recurrences of the form T(n) = a·T(n/b) + Θ(nᵏ logᵖ n), with a ≥ 1 and b > 1:
Case-01: If a > bᵏ, then T(n) = Θ(n^(log_b a)).
Case-02: If a = bᵏ, then
    (a) if p > -1, T(n) = Θ(n^(log_b a) · log^(p+1) n)
    (b) if p = -1, T(n) = Θ(n^(log_b a) · log log n)
    (c) if p < -1, T(n) = Θ(n^(log_b a))
Case-03: If a < bᵏ, then
    (a) if p ≥ 0, T(n) = Θ(nᵏ logᵖ n)
    (b) if p < 0, T(n) = Θ(nᵏ)
Apply to T(n) = 3T(n/2) + n².
Solution:
Comparing with T(n) = a·T(n/b) + Θ(nᵏ logᵖ n), we have
a = 3, b = 2, k = 2, p = 0.
Now, a = 3 and bᵏ = 2² = 4, so a < bᵏ (Case 3). Since p = 0 ≥ 0, we have
T(n) = Θ(nᵏ logᵖ n) = Θ(n² log⁰ n)
Thus,
T(n) = Θ(n²)
20. Briefly explain the working of insertion sort algorithm with an example.
Solution:
To understand the working of the insertion sort algorithm, let's take an unsorted array. It will be easier
to understand the insertion sort via an example.
Here, 31 is greater than 12. That means both elements are already in ascending order. So, for now, 12 is
stored in a sorted sub-array.
Now, move to the next two elements and compare them.
Here, 25 is smaller than 31. So, 31 is not at the correct position. Now, swap 31 with 25. Along with
swapping, insertion sort will also check it with all elements in the sorted array.
For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the sorted
array remains sorted after swapping.
Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that are 31
and 8.
Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31 and 32.
Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.