
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY

RAMAPURAM, CHENNAI – 89

FACULTY OF ENGINEERING AND TECHNOLOGY

SCHOOL OF COMPUTER SCIENCE AND ENGINEERING

DEPARTMENT OF INFORMATION TECHNOLOGY

21CSC204J DESIGN AND ANALYSIS OF ALGORITHMS QUESTION BANK - UNIT 1

PART A

1. Which notation describes the tightest bound for an algorithm's growth rate?
a) Big O (O)
b) Omega (Ω)
c) Theta (Θ)
d) Little o (o)
2. What does O(log n) time complexity indicate?
a) Linear growth
b) Logarithmic growth
c) Quadratic growth
d) Constant growth
3. If an algorithm's time complexity is O(n), doubling the input size will:
a) Double the runtime
b) Quadruple the runtime
c) Have no effect on the runtime
d) Halve the runtime
4. Which notation represents the best-case scenario of an algorithm's runtime?
a) Big O (O)
b) Omega (Ω)
c) Theta (Θ)
d) Little o (o)
5. An algorithm with O(1) time complexity has:
a) Linear time
b) Logarithmic time
c) Constant time
d) Exponential time
6. What is the primary purpose of asymptotic analysis?
a) To measure the exact runtime of an algorithm
b) To describe how an algorithm's runtime scales with input size
c) To optimize code for specific hardware
d) To count the number of lines of code
7. If an algorithm has O(n²) complexity, what happens to the runtime if the input size triples?
a) It triples
b) It doubles
c) It increases by a factor of nine
d) It stays the same
8. Which of the following is the fastest growing time complexity?
a) O(n)
b) O(log n)
c) O(n log n)
d) O(2^n)
9. What does "n" typically represent in time complexity analysis?
a) The number of operations
b) The input size
c) The execution time
d) The memory usage
10. When comparing algorithms, asymptotic notation helps us focus on:
a) The exact runtime
b) The performance on small inputs
c) The growth rate for large inputs
d) The programming language used
11. An algorithm with a time complexity of O(n log n) is generally considered:
a) Very inefficient
b) Constant
c) Efficient for many sorting algorithms
d) Exponential
12. What does it mean for an algorithm to have a time complexity of O(n)?
a) The execution time increases linearly with the input size.
b) The execution time remains constant.
c) The execution time increases quadratically.
d) The execution time decreases as the input size increases.
13. Which of these complexities is better than O(n)?
a) O(n²)
b) O(2^n)
c) O(log n)
d) O(n*n)
14. When analyzing algorithms, we are typically concerned with:
a) Best case time complexity only.
b) Average case time complexity only.
c) Worst case time complexity.
d) Machine specific run times.
15. If a piece of code has two nested loops, each iterating "n" times, what is the time complexity?
a) O(n)
b) O(log n)
c) O(n²)
d) O(n log n)
16. Which analysis provides the upper bound on the runtime of an algorithm?
a) Best-case analysis
b) Worst-case analysis
c) Average-case analysis
d) All of the above
17. Which analysis provides the lower bound on the runtime of an algorithm?
a) Best-case analysis
b) Worst-case analysis
c) Average-case analysis
d) None of the above
18. Which analysis considers the typical or expected runtime of an algorithm?
a) Best-case analysis
b) Worst-case analysis
c) Average-case analysis
d) None of the above
19. In a linear search, the best-case scenario occurs when:
a) The target element is at the end of the array.
b) The target element is at the beginning of the array.
c) The target element is not in the array.
d) The array is empty.
20. In a linear search, the worst-case scenario occurs when:
a) The target element is at the beginning of the array.
b) The target element is in the middle of the array.
c) The target element is at the end of the array or not present.
d) The array is empty.
21. Analyzing recursive algorithms often involves using:
a) Linear equations.
b) Recurrence relations.
c) Statistical methods.
d) Graphical analysis.
22. Analyzing non-recursive algorithms typically involves:
a) Counting the number of basic operations.
b) Solving recurrence relations.
c) Probabilistic analysis.
d) Graph theory.
23. The time complexity of a recursive algorithm can be found using the:
a) Master Theorem.
b) Substitution method.
c) Recursion tree method.
d) All of the above.
24. Which analysis is most useful when an algorithm's runtime varies significantly with different inputs?
a) Best-case analysis
b) Worst-case analysis
c) Average-case analysis
d) None of the above
25. For a sorting algorithm like merge sort, the best-case, worst-case, and average-case time complexities
are:
a) Different for each case.
b) The same for all cases.
c) Best case is O(n), and others are O(n log n).
d) Best case is O(1), and others are O(n log n).
26. When analyzing a recursive function, the base case is crucial because:
a) It determines the worst-case scenario.
b) It prevents infinite recursion.
c) It improves the best-case performance.
d) It is not important.
27. In an algorithm that searches a sorted array using binary search, the best-case time complexity is:
a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)
28. In an algorithm that searches a sorted array using binary search, the worst-case time complexity is:
a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)
29. When analyzing iterative algorithms, you primarily examine:
a) The depth of the recursion.
b) The number of loops and their iterations.
c) The probability of certain inputs.
d) The size of the call stack.
30. The average case of an algorithm is often harder to calculate than the worst or best case because:
a) It requires knowledge of the best-case scenario.
b) It ignores the input size.
c) It requires knowledge of the probability distribution of inputs.
d) It is always the same as the worst-case.
31. When analyzing a non-recursive algorithm, the primary focus is on:
a) Identifying base cases.
b) Counting the number of basic operations.
c) Solving recurrence relations.
d) Determining the depth of recursion.
32. In a simple iterative loop that runs 'n' times, the time complexity is typically:
a) O(log n)
b) O(n)
c) O(n^2)
d) O(1)
33. Nested loops in a non-recursive algorithm often lead to:
a) Logarithmic time complexity.
b) Linear time complexity.
c) Polynomial time complexity.
d) Constant time complexity.
34. To analyze the time complexity of a non-recursive algorithm, you should:
a) Use the Master Theorem.
b) Construct a recursion tree.
c) Determine the dominant operation and its frequency.
d) Apply substitution methods.
35. When analyzing a non-recursive algorithm with a fixed number of operations, regardless of input
size, the time complexity is:
a) O(n)
b) O(log n)
c) O(n log n)
d) O(1)
36. In non-recursive algorithms, the analysis of best-case, worst-case, and average-case scenarios:
a) Is always the same.
b) Can vary depending on the input data.
c) Is irrelevant.
d) Only considers the best case.
37. For a non-recursive algorithm that searches through an unsorted array, the worst-case time
complexity is:
a) O(log n)
b) O(n)
c) O(n log n)
d) O(1)
38. Analyzing non-recursive algorithms primarily involves examining:
a) The call stack.
b) The flow of control within loops.
c) The depth of recursive calls.
d) The memory allocation of recursive functions.
39. If a non-recursive algorithm has two sequential loops, one running 'n' times and the other 'm' times,
the time complexity is:
a) O(n * m)
b) O(n + m)
c) O(max(n, m))
d) O(log(n + m))
40. When analyzing a non-recursive algorithm, constant factors in the number of operations are:
a) Always included in the final complexity.
b) Typically ignored in asymptotic notation.
c) Only considered in best-case analysis.
d) Used to determine the exact runtime.
41. An iterative algorithm that processes each element of a 2D array of size n x m has a time complexity
of:
a) O(n + m)
b) O(n * m)
c) O(max(n, m))
d) O(log(n * m))
42. When analyzing a non-recursive sorting algorithm like bubble sort, the number of comparisons made
will determine:
a) The space complexity.
b) The time complexity.
c) The memory allocation.
d) The algorithm's stability.
43. In a non-recursive algorithm, when multiple independent code blocks are executed sequentially, the
overall time complexity is determined by:
a) The product of their individual complexities.
b) The maximum of their individual complexities.
c) The minimum of their individual complexities.
d) The average of their individual complexities.
44. When analyzing a non-recursive algorithm that uses a single for loop, what factor most affects the
time complexity?
a) The amount of memory used in the loop.
b) The number of iterations of the loop.
c) The programming language used.
d) The computer's operating system.
45. What is the first step in analyzing a non-recursive algorithm?
a) Solve the recurrence relation.
b) Create a recursion tree.
c) Identify the algorithm's basic operations.
d) Use the Master Theorem.
46. What is the first step in solving a recurrence using the backward substitution method?
a) Convert the recurrence into an iterative form
b) Draw a recursion tree
c) Expand the recurrence relation step by step
d) Apply the Master Theorem
47. In backward substitution, how do you determine the time complexity?
a) By counting the number of recursive calls
b) By solving the recurrence using the Master Theorem
c) By expanding the recurrence until a base case is reached and summing the terms
d) By using the divide and conquer approach
48. What is the recurrence relation for the recursive algorithm of binary search?
a) T(n) = T(n-1) + O(1)
b) T(n) = 2T(n/2) + O(n)
c) T(n) = T(n/2) + O(1)
d) T(n) = T(n-1) + O(n)
49. What does each level of a recursion tree represent?
a) A recursive call and its corresponding work
b) The base case of the recursion
c) The iterative form of the recurrence
d) The input size doubling at each step
50. When solving a recurrence using the recursion tree method, what do you do after expanding the
recurrence?
a) Stop at the first level and count the calls
b) Identify the total work done at each level and sum up all levels
c) Convert it into an iterative formula
d) Use dynamic programming to optimize it
51. What is the time complexity of the recurrence T(n) = 2T(n/2) + O(n) using the recursion tree
method?
a) O(n)
b) O(n²)
c) O(n log n)
d) O(log n)
52. What is the base case in a recursion tree method analysis?
a) When the recursive calls double in each level
b) When the input size reduces to a constant (e.g., T(1))
c) When the total number of recursive calls equals n
d) When we find the largest subproblem
53. How many levels are there in a recursion tree for T(n) = T(n/2) + O(n)?
a) O(n)
b) O(log n)
c) O(n²)
d) O(1)
54. What is the total time complexity of T(n) = 3T(n/3) + O(n) using the recursion tree method?
a) O(n)
b) O(n²)
c) O(n log n)
d) O(log n)
55. If a recursion tree has logarithmic depth and each level contributes constant work, what is the overall
complexity?
a) O(log n)
b) O(n log n)
c) O(n²)
d) O(n)

PART B AND C

1. What are asymptotic notations?

Asymptotic Notations:
 Asymptotic Notations are mathematical tools used to analyze the performance of algorithms by
understanding how their efficiency changes as the input size grows.
 These notations provide a concise way to express the behavior of an algorithm’s time or space
complexity as the input size approaches infinity.
 Rather than comparing algorithms directly, asymptotic analysis focuses on understanding the relative
growth rates of algorithms’ complexities.
 It enables comparisons of algorithms’ efficiency by abstracting away machine-specific constants and
implementation details, focusing instead on fundamental trends.
 Asymptotic analysis allows for the comparison of algorithms’ space and time complexities by
examining their performance characteristics as the input size varies.
 By using asymptotic notations, such as Big O, Big Omega, and Big Theta, we can categorize
algorithms based on their worst-case, best-case, or average-case time or space complexities,
providing valuable insights into their efficiency.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation):
Theta notation bounds a function from both above and below. Since it represents both the upper and the
lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of
an algorithm.
Theta (average case): you add the running times for each possible input combination and take the
average.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be Θ(g) if
there are constants c1, c2 > 0 and a natural number n0 such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥
n0.

[Figure: Theta notation]

Mathematical Representation of Theta notation:


Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n)
for all n ≥ n0}
Note: Θ(g) is a set
The above expression can be read as: if f(n) is Θ(g(n)), then the value of f(n) is always between
c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition of Theta also requires that f(n) must
be non-negative for values of n greater than n0.
The execution time serves as both a lower and an upper bound on the algorithm's time complexity:
it gives both the maximum and the minimum bound for a given input.
A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the
leading constants. For example, consider the expression 3n^3 + 6n^2 + 6000 = Θ(n^3). Dropping the
lower-order terms is always fine because there will always be a value of n after which n^3 has higher
values than n^2, irrespective of the constants involved. For a given function g(n), Θ(g(n)) denotes the
following set of functions.
Examples :
{ 100 , log (2000) , 10^4 } belongs to Θ(1)
{ (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Θ(n^2)
Note: Θ provides exact bounds.
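As a quick worked check of the definition, consider f(n) = 3n^3 + 6n^2 + 6000 again. For all n ≥ 1 we
have 3n^3 ≤ f(n), since the remaining terms are non-negative, and f(n) ≤ 3n^3 + 6n^3 + 6000n^3 =
6009n^3. The definition is therefore satisfied with c1 = 3, c2 = 6009, and n0 = 1, confirming
f(n) = Θ(n^3).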
2. Big-O Notation (O-notation):
Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives the
worst-case complexity of an algorithm.
 It is the most widely used notation for asymptotic analysis.
 It specifies the upper bound of a function.
 It gives the maximum time required by an algorithm, i.e., the worst-case time complexity.
 It returns the highest possible output value (big-O) for a given input.
 Big-O (worst case): it is defined as the condition that allows an algorithm to complete statement
execution in the longest amount of time possible.

If f(n) describes the running time of an algorithm, then f(n) is O(g(n)) if there exist positive constants c
and n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0.
The execution time serves as an upper bound on the algorithm's time complexity.

Mathematical Representation of Big-O Notation:


O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time
in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2).
Note: O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements
for the best and worst cases:
 The worst-case time complexity of Insertion Sort is Θ(n^2).
 The best-case time complexity of Insertion Sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the time complexity of an
algorithm. We can often find an upper bound simply by looking at the algorithm.
Examples :
{ 100 , log (2000) , 10^4 } belongs to O(1)
∪ { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to O(n)
∪ { (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to O(n^2)
Note: Here, ∪ represents set union; we can chain the sets in this manner because O provides exact or
upper bounds.
3. Omega Notation (Ω-Notation):
Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the
best case complexity of an algorithm.
The execution time serves as a lower bound on the algorithm’s time complexity.
It is defined as the condition that allows an algorithm to complete statement execution in the
shortest amount of time.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if
there is a constant c > 0 and a natural number n0 such that c * g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation :


Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be
written as Ω(n), but this is not very useful information about Insertion Sort, as we are generally
interested in the worst case and sometimes in the average case.
Examples :
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Ω(n^2)
∪ { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Ω(n)
∪ { 100 , log (2000) , 10^4 } belongs to Ω(1)
Note: Here, ∪ represents set union; we can chain the sets in this manner because Ω provides exact or
lower bounds.
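To connect these notations to code, the following small C++ sketch (an illustration added here, not part
of the original question bank) shows how common growth rates can be read directly off loop structure:

int firstElement(const int arr[]) {          // O(1): constant work, independent of n
    return arr[0];
}

long long sumArray(const int arr[], int n) { // O(n): a single loop over n elements
    long long sum = 0;
    for (int i = 0; i < n; i++) sum += arr[i];
    return sum;
}

long long countPairs(int n) {                // O(n^2): two nested loops over n
    long long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            count++;                         // executed n * n times
    return count;
}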

2. Explain various design paradigms


https://medium.com/@saiesh.prabhu17/algorithm-design-techniques-406922dd3047

3. Non-Recursive Algorithm Analysis for Finding Maximum in an Array

Algorithm:
To find the maximum number in an array of n elements, we use a simple iterative approach:
int findMax(int arr[], int n) {
    int maxVal = arr[0];              // Assume first element is max
    for (int i = 1; i < n; i++) {     // Loop through the array
        if (arr[i] > maxVal) {
            maxVal = arr[i];          // Update max if a larger value is found
        }
    }
    return maxVal;
}

Step-by-Step Analysis
1. Identifying Basic Operations
The key operations in the loop are:
 Comparison: arr[i] > maxVal
For each element in the array, the loop runs once, performing one comparison and at most one
assignment. Counting the comparison as the basic operation:
T(n) = ∑ (i = 1 to n-1) 1 = (n-1) - 1 + 1 = n - 1 ∈ O(n)
4. Non-Recursive Algorithm Analysis for Matrix Addition
Algorithm:
Matrix addition involves adding corresponding elements of two matrices, A and B, to produce a result
matrix C. Given two n × m matrices, the operation follows:
void matrixAddition(int A[][M], int B[][M], int C[][M], int n, int m) {
    for (int i = 0; i < n; i++) {         // Iterate over rows
        for (int j = 0; j < m; j++) {     // Iterate over columns
            C[i][j] = A[i][j] + B[i][j];  // Add corresponding elements
        }
    }
}
Step-by-Step Analysis
1. Identifying Basic Operations
The primary operation inside the nested loops is:
 Addition: C[i][j] = A[i][j] + B[i][j] (1 operation per element)
Each element in A and B is accessed once, added, and stored in C.
2. Time Complexity Analysis
 Outer loop runs n times (rows)
 Inner loop runs m times (columns)
 Total operations: n * m additions
Thus, the time complexity is O(n * m).

If the matrix is a square matrix of size n × n, then T(n) = O(n^2).
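Formally, counting the addition as the basic operation with a double summation:
T(n, m) = ∑ (i = 0 to n-1) ∑ (j = 0 to m-1) 1 = ∑ (i = 0 to n-1) m = n * m ∈ O(n * m)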

5. Non-Recursive Algorithm Analysis for Finding Matrix Transpose


Algorithm:
The transpose of an n × m matrix A results in an m × n matrix T, where T[j][i] = A[i][j].
void transposeMatrix(int A[][M], int T[][N], int n, int m) {
    for (int i = 0; i < n; i++) {       // Iterate over rows
        for (int j = 0; j < m; j++) {   // Iterate over columns
            T[j][i] = A[i][j];          // Assign transposed value
        }
    }
}
Step-by-Step Analysis
1. Identifying Basic Operations
The core operation inside the nested loops is:
 Assignment: T[j][i] = A[i][j] (1 operation per element)
Each element of A is accessed once and stored in T.

2. Time Complexity Analysis


 Outer loop runs n times (rows)
 Inner loop runs m times (columns)
 Total operations: n * m assignments
Thus, the time complexity is O(n * m). For a square matrix of size n × n, T(n) = O(n^2).

6. Best Case, Worst Case, and Average Case Analysis for Linear Search
Algorithm:
Linear Search is a simple searching algorithm that sequentially checks each element in an
array until the target element is found.
int linearSearch(int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key) {   // Check if the current element matches the key
            return i;          // Return the index if found
        }
    }
    return -1;                 // Return -1 if the key is not found
}

Step-by-Step Complexity Analysis


1. Best Case (Ω(1))
 Scenario: The key is found at the first position (arr[0] == key).
 Operations: Only one comparison is made.
 Time Complexity: Ω(1) (constant time)
2. Worst Case (O(n))
 Scenario: The key is at the last position or not present in the array.
 Operations: The loop runs for all n elements.
 Time Complexity: O(n) (linear time)
3. Average Case (Θ(n))
 Scenario: The key is randomly positioned in the array.
 Probability Assumption: The key is equally likely to be in any of the n
positions.
 Expected Comparisons: averaging over all n equally likely positions, the expected
number of checks is:
(1 + 2 + 3 + ... + n) / n = (n+1)/2
Time Complexity: Θ(n) (still linear)

Final Complexity Summary


Case Time Complexity
Best Case Ω(1) (Key found at the first position)
Average Case Θ(n) (Key found at a random position)
Worst Case O(n) (Key is the last element or not present)
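The three cases can be made concrete with a small instrumented sketch (the comparison counter is an
illustrative addition, not part of the original algorithm):

#include <iostream>

int linearSearchCount(const int arr[], int n, int key, int &comparisons) {
    comparisons = 0;
    for (int i = 0; i < n; i++) {
        comparisons++;                 // one comparison per element examined
        if (arr[i] == key) return i;
    }
    return -1;
}

int main() {
    int arr[] = {7, 3, 9, 1, 5};
    int c;
    linearSearchCount(arr, 5, 7, c);   // best case: key at index 0
    std::cout << "Best case comparisons: " << c << '\n';    // prints 1
    linearSearchCount(arr, 5, 4, c);   // worst case: key not present
    std::cout << "Worst case comparisons: " << c << '\n';   // prints 5 (= n)
    return 0;
}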

7. Recursive Algorithm Analysis


Backward Substitution
Backward substitution is a method used to solve recurrence relations, particularly those arising in
recursive algorithms. It involves expanding the recurrence step by step until a pattern emerges, allowing
us to express the recurrence in a closed form.
Steps for Backward Substitution
1. Expand the recurrence relation by substituting the function's definition multiple times.
2. Identify a pattern and generalize it.
3. Express it in terms of a base case (when recursion stops).
4. Solve for the closed-form expression.

Factorial
n! = 1•2•3...n and 0! = 1 (called the initial case)
So the recursive definition is n! = n•(n-1)!
Algorithm F(n)
if n = 0 then return 1 // base case
else return F(n-1)•n // recursive call
Basic operation: multiplication
The formula for the number of multiplications,
M(n) = M(n-1) + 1,
is a recursive formula too. This is typical.

We need the initial condition, which corresponds to the base case:

M(0) = 0
(there are no multiplications when n = 0)

Solve by the method of backward substitutions


M(n) = M(n-1) + 1
= [M(n-2) + 1] + 1 = M(n-2) + 2 (substituted M(n-2) + 1 for M(n-1))
= [M(n-3) + 1] + 2 = M(n-3) + 3 (substituted M(n-3) + 1 for M(n-2))
... a pattern emerges
= M(0) + n
= n
Not surprising!
Therefore M(n) ∈ Θ(n)
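A direct C++ rendering of Algorithm F(n) (a sketch; note that long long overflows for n > 20):

long long factorial(int n) {
    if (n == 0) return 1;            // base case: 0! = 1, no multiplication
    return factorial(n - 1) * n;     // one multiplication per call: M(n) = M(n-1) + 1
}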

Tower of Hanoi
Explain the problem using a figure.

Demo and show recursion

1. Problem size is n, the number of discs


2. The basic operation is moving a disc from one rod to another
3. There is no worst or best case
4. Recurrence relation for moving n discs:
M(n) = M(n-1) + 1 + M(n-1) = 2M(n-1) + 1
IC: M(1) = 1
5. Solve using backward substitution
M(n) = 2M(n-1) + 1
= 2[2M(n-2) + 1] + 1 = 2^2 M(n-2) + 2 + 1
= 2^2 [2M(n-3) + 1] + 2 + 1 = 2^3 M(n-3) + 2^2 + 2 + 1
...
After i substitutions: M(n) = 2^i M(n-i) + ∑ (j = 0 to i-1) 2^j = 2^i M(n-i) + 2^i - 1
...
With i = n-1: M(n) = 2^(n-1) M(n-(n-1)) + 2^(n-1) - 1 = 2^(n-1) M(1) + 2^(n-1) - 1 = 2^(n-1) + 2^(n-1) - 1 = 2^n - 1

M(n) ∈ Θ(2^n)
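A minimal C++ sketch of the recursive solution (rod labels are assumed for illustration); each call makes
two recursive calls on n-1 discs plus one move, which is exactly M(n) = 2M(n-1) + 1:

#include <iostream>

void hanoi(int n, char from, char to, char aux) {
    if (n == 1) {                               // base case: M(1) = 1
        std::cout << "Move disc 1: " << from << " -> " << to << '\n';
        return;
    }
    hanoi(n - 1, from, aux, to);                // M(n-1) moves
    std::cout << "Move disc " << n << ": " << from << " -> " << to << '\n';  // +1 move
    hanoi(n - 1, aux, to, from);                // M(n-1) moves
}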

Binary Representation
The following algorithm counts the number of digits in the binary representation of n.
Examples: 10 = 1010 (4 digits), 5 = 101 (3 digits)

Algorithm BinRec(n)
if n = 1 then return 1
else return BinRec(floor(n/2)) + 1
Substitute n = 2^k (so k = lg n):
A(2^k) = A(2^(k-1)) + 1 and IC A(2^0) = 0

A(2^k) = [A(2^(k-2)) + 1] + 1 = A(2^(k-2)) + 2
= [A(2^(k-3)) + 1] + 2 = A(2^(k-3)) + 3
...
= A(2^(k-i)) + i
...
= A(2^(k-k)) + k
A(2^k) = k

Substitute back k = lg n:

A(n) = lg n ∈ Θ(lg n)
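A C++ sketch of BinRec (assumes n ≥ 1; integer division plays the role of floor(n/2)):

int binRec(int n) {
    if (n == 1) return 1;         // base case: one binary digit
    return binRec(n / 2) + 1;     // one addition per call: A(n) = A(floor(n/2)) + 1
}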

Recursion Tree

When solving recurrences with this method, we divide the problem into sub-problems of equal
size, e.g., T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1, and f(n) is a given function representing
the cost of splitting the problem or combining the sub-problems. In a recursion tree, each
node represents the cost of a single sub-problem somewhere in the set of recursive
invocations. We sum the costs within each level of the tree to obtain the per-level
costs, and then we sum all the per-level costs to determine the total cost of all levels
of the recursion.
Solve T(n) = 3T(n/4) + cn^2 (a worked sketch follows below).
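A sketch of the standard recursion-tree argument for T(n) = 3T(n/4) + cn^2:
- The root costs cn^2.
- Level 1 has 3 sub-problems of size n/4, costing 3 * c(n/4)^2 = (3/16) cn^2 in total.
- Level i has 3^i sub-problems of size n/4^i, costing (3/16)^i cn^2 in total.
- The tree has log_4 n levels, and the leaves contribute Θ(n^(log_4 3)), which is o(n^2).
Summing the per-level costs gives a decreasing geometric series:
T(n) ≤ cn^2 ∑ (i = 0 to ∞) (3/16)^i + Θ(n^(log_4 3)) = (16/13) cn^2 + o(n^2) = O(n^2)
Since the root alone already costs cn^2, T(n) = Ω(n^2) as well, so T(n) = Θ(n^2).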

You might also like