Daa 1
RAMAPURAM, CHENNAI – 89
PART A
1. Which notation describes the tightest bound for an algorithm's growth rate?
a) Big O (O)
b) Omega (Ω)
c) Theta (Θ)
d) Little o (o)
2. What does O(log n) time complexity indicate?
a) Linear growth
b) Logarithmic growth
c) Quadratic growth
d) Constant growth
3. If an algorithm's time complexity is O(n), doubling the input size will:
a) Double the runtime
b) Quadruple the runtime
c) Have no effect on the runtime
d) Halve the runtime
4. Which notation represents the best-case scenario of an algorithm's runtime?
a) Big O (O)
b) Omega (Ω)
c) Theta (Θ)
d) Little o (o)
5. An algorithm with O(1) time complexity has:
a) Linear time
b) Logarithmic time
c) Constant time
d) Exponential time
6. What is the primary purpose of asymptotic analysis?
a) To measure the exact runtime of an algorithm
b) To describe how an algorithm's runtime scales with input size
c) To optimize code for specific hardware
d) To count the number of lines of code
7. If an algorithm has O(n²) complexity, what happens to the runtime if the input size triples?
a) It triples
b) It doubles
c) It increases by a factor of nine
d) It stays the same
8. Which of the following is the fastest growing time complexity?
a) O(n)
b) O(log n)
c) O(n log n)
d) O(2^n)
9. What does "n" typically represent in time complexity analysis?
a) The number of operations
b) The input size
c) The execution time
d) The memory usage
10. When comparing algorithms, asymptotic notation helps us focus on:
a) The exact runtime
b) The performance on small inputs
c) The growth rate for large inputs
d) The programming language used
11. An algorithm with a time complexity of O(n log n) is generally considered:
a) Very inefficient
b) Constant
c) Efficient for many sorting algorithms
d) Exponential
12. What does it mean for an algorithm to have a time complexity of O(n)?
a) The execution time increases linearly with the input size.
b) The execution time remains constant.
c) The execution time increases quadratically.
d) The execution time decreases as the input size increases.
13. Which of these complexities is better than O(n)?
a) O(n²)
b) O(2^n)
c) O(log n)
d) O(n*n)
14. When analyzing algorithms, we are typically concerned with:
a) Best case time complexity only.
b) Average case time complexity only.
c) Worst case time complexity.
d) Machine specific run times.
15. If a piece of code has two nested loops, each iterating "n" times, what is the time complexity?
a) O(n)
b) O(log n)
c) O(n²)
d) O(n log n)
16. Which analysis provides the upper bound on the runtime of an algorithm?
a) Best-case analysis
b) Worst-case analysis
c) Average-case analysis
d) All of the above
17. Which analysis provides the lower bound on the runtime of an algorithm?
a) Best-case analysis
b) Worst-case analysis
c) Average-case analysis
d) None of the above
18. Which analysis considers the typical or expected runtime of an algorithm?
a) Best-case analysis
b) Worst-case analysis
c) Average-case analysis
d) None of the above
19. In a linear search, the best-case scenario occurs when:
a) The target element is at the end of the array.
b) The target element is at the beginning of the array.
c) The target element is not in the array.
d) The array is empty.
20. In a linear search, the worst-case scenario occurs when:
a) The target element is at the beginning of the array.
b) The target element is in the middle of the array.
c) The target element is at the end of the array or not present.
d) The array is empty.
21. Analyzing recursive algorithms often involves using:
a) Linear equations.
b) Recurrence relations.
c) Statistical methods.
d) Graphical analysis.
22. Analyzing non-recursive algorithms typically involves:
a) Counting the number of basic operations.
b) Solving recurrence relations.
c) Probabilistic analysis.
d) Graph theory.
23. The time complexity of a recursive algorithm can be found using the:
a) Master Theorem.
b) Substitution method.
c) Recursion tree method.
d) All of the above.
24. Which analysis is most useful when an algorithm's runtime varies significantly with different inputs?
a) Best-case analysis
b) Worst-case analysis
c) Average-case analysis
d) None of the above
25. For a sorting algorithm like merge sort, the best-case, worst-case, and average-case time complexities
are:
a) Different for each case.
b) The same for all cases.
c) Best case is O(n), and others are O(n log n).
d) Best case is O(1), and others are O(n log n).
26. When analyzing a recursive function, the base case is crucial because:
a) It determines the worst-case scenario.
b) It prevents infinite recursion.
c) It improves the best-case performance.
d) It is not important.
27. In an algorithm that searches a sorted array using binary search, the best-case time complexity is:
a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)
28. In an algorithm that searches a sorted array using binary search, the worst-case time complexity is:
a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)
29. When analyzing iterative algorithms, you primarily examine:
a) The depth of the recursion.
b) The number of loops and their iterations.
c) The probability of certain inputs.
d) The size of the call stack.
30. The average case of an algorithm is often harder to calculate than the worst or best case because:
a) It requires knowledge of the best-case scenario.
b) It ignores the input size.
c) It requires knowledge of the probability distribution of inputs.
d) It is always the same as the worst-case.
31. When analyzing a non-recursive algorithm, the primary focus is on:
a) Identifying base cases.
b) Counting the number of basic operations.
c) Solving recurrence relations.
d) Determining the depth of recursion.
32. In a simple iterative loop that runs 'n' times, the time complexity is typically:
a) O(log n)
b) O(n)
c) O(n²)
d) O(1)
33. Nested loops in a non-recursive algorithm often lead to:
a) Logarithmic time complexity.
b) Linear time complexity.
c) Polynomial time complexity.
d) Constant time complexity.
34. To analyze the time complexity of a non-recursive algorithm, you should:
a) Use the Master Theorem.
b) Construct a recursion tree.
c) Determine the dominant operation and its frequency.
d) Apply substitution methods.
35. When analyzing a non-recursive algorithm with a fixed number of operations, regardless of input
size, the time complexity is:
a) O(n)
b) O(log n)
c) O(n log n)
d) O(1)
36. In non-recursive algorithms, the analysis of best-case, worst-case, and average-case scenarios:
a) Is always the same.
b) Can vary depending on the input data.
c) Is irrelevant.
d) Only considers the best case.
37. For a non-recursive algorithm that searches through an unsorted array, the worst-case time
complexity is:
a) O(log n)
b) O(n)
c) O(n log n)
d) O(1)
38. Analyzing non-recursive algorithms primarily involves examining:
a) The call stack.
b) The flow of control within loops.
c) The depth of recursive calls.
d) The memory allocation of recursive functions.
39. If a non-recursive algorithm has two sequential loops, one running 'n' times and the other 'm' times,
the time complexity is:
a) O(n * m)
b) O(n + m)
c) O(max(n, m))
d) O(log(n + m))
40. When analyzing a non-recursive algorithm, constant factors in the number of operations are:
a) Always included in the final complexity.
b) Typically ignored in asymptotic notation.
c) Only considered in best-case analysis.
d) Used to determine the exact runtime.
41. An iterative algorithm that processes each element of a 2D array of size n x m has a time complexity
of:
a) O(n + m)
b) O(n * m)
c) O(max(n, m))
d) O(log(n * m))
42. When analyzing a non-recursive sorting algorithm like bubble sort, the number of comparisons made
will determine:
a) The space complexity.
b) The time complexity.
c) The memory allocation.
d) The algorithm's stability.
43. In a non-recursive algorithm, when multiple independent code blocks are executed sequentially, the
overall time complexity is determined by:
a) The product of their individual complexities.
b) The maximum of their individual complexities.
c) The minimum of their individual complexities.
d) The average of their individual complexities.
44. When analyzing a non-recursive algorithm that uses a single for loop, what factor most affects the
time complexity?
a) The amount of memory used in the loop.
b) The number of iterations of the loop.
c) The programming language used.
d) The computer's operating system.
45. What is the first step in analyzing a non-recursive algorithm?
a) Solve the recurrence relation.
b) Create a recursion tree.
c) Identify the algorithm's basic operations.
d) Use the Master Theorem.
46. What is the first step in solving a recurrence using the backward substitution method?
a) Convert the recurrence into an iterative form
b) Draw a recursion tree
c) Expand the recurrence relation step by step
d) Apply the Master Theorem
47. In backward substitution, how do you determine the time complexity?
a) By counting the number of recursive calls
b) By solving the recurrence using the Master Theorem
c) By expanding the recurrence until a base case is reached and summing the terms
d) By using the divide and conquer approach
48. What is the recurrence relation for the recursive algorithm of binary search?
a) T(n) = T(n-1) + O(1)
b) T(n) = 2T(n/2) + O(n)
c) T(n) = T(n/2) + O(1)
d) T(n) = T(n-1) + O(n)
49. What does each level of a recursion tree represent?
a) A recursive call and its corresponding work
b) The base case of the recursion
c) The iterative form of the recurrence
d) The input size doubling at each step
50. When solving a recurrence using the recursion tree method, what do you do after expanding the
recurrence?
a) Stop at the first level and count the calls
b) Identify the total work done at each level and sum up all levels
c) Convert it into an iterative formula
d) Use dynamic programming to optimize it
51. What is the time complexity of the recurrence T(n) = 2T(n/2) + O(n) using the recursion tree
method?
a) O(n)
b) O(n²)
c) O(n log n)
d) O(log n)
52. What is the base case in a recursion tree method analysis?
a) When the recursive calls double in each level
b) When the input size reduces to a constant (e.g., T(1))
c) When the total number of recursive calls equals n
d) When we find the largest subproblem
53. How many levels are there in a recursion tree for T(n) = T(n/2) + O(n)?
a) O(n)
b) O(log n)
c) O(n²)
d) O(1)
54. What is the total time complexity of T(n) = 3T(n/3) + O(n) using the recursion tree method?
a) O(n)
b) O(n²)
c) O(n log n)
d) O(log n)
55. If a recursion tree has logarithmic depth and each level contributes constant work, what is the overall
complexity?
a) O(log n)
b) O(n log n)
c) O(n²)
d) O(n)
PART B AND C
Asymptotic Notations:
Asymptotic notations are mathematical tools used to analyze the performance of algorithms by
describing how their efficiency changes as the input size grows.
These notations provide a concise way to express the behavior of an algorithm’s time or space
complexity as the input size approaches infinity.
Rather than comparing exact running times, asymptotic analysis focuses on the relative growth
rates of algorithms’ complexities.
It enables comparisons of algorithms’ efficiency by abstracting away machine-specific constants and
implementation details, focusing instead on fundamental trends.
By using asymptotic notations such as Big O, Big Omega, and Big Theta, we can categorize
algorithms based on their worst-case, best-case, or average-case time or space complexities,
providing valuable insight into their efficiency.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation):
Theta notation bounds a function from both above and below. Since it represents both the upper and the
lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of
an algorithm.
Theta (average case): you add the running times for each possible input combination and take their
average.
Let f and g be functions from the set of natural numbers to itself. The function f is said to be Θ(g) if
there exist constants c1, c2 > 0 and a natural number n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥
n0.
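A small worked instance (added for illustration, not part of the original notes): take f(n) = 3n + 2 and g(n) = n. Then

```latex
3n \;\le\; 3n + 2 \;\le\; 5n \quad \text{for all } n \ge 1,
```

so the definition is satisfied with c1 = 3, c2 = 5 and n0 = 1, and hence 3n + 2 ∈ Θ(n).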
2. Big-O Notation (O-Notation):
If f(n) describes the running time of an algorithm, then f(n) is O(g(n)) if there exist a positive constant c
and a natural number n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
Big-O gives the highest possible growth of the running time for a given input, so the bound serves as an
upper bound on the algorithm’s time complexity. For example, 2n + 3 is O(n), since 2n + 3 ≤ 5n for all
n ≥ 1.
Algorithm:
To find the maximum number in an array of n elements, we use a simple iterative approach:

int findMax(int arr[], int n) {
    int maxVal = arr[0];             // Assume the first element is the maximum
    for (int i = 1; i < n; i++) {    // Loop through the rest of the array
        if (arr[i] > maxVal) {
            maxVal = arr[i];         // Update max if a larger value is found
        }
    }
    return maxVal;
}
Step-by-Step Analysis
1. Identifying Basic Operations
The key operation in the loop is the comparison arr[i] > maxVal.
For each element after the first, the loop runs once, performing one comparison and at most one
assignment.
2. Time Complexity Analysis
T(n) = Σ_{i=1}^{n−1} 1 = (n − 1) − 1 + 1 = n − 1 ∈ O(n)
4. Non-Recursive Algorithm Analysis for Matrix Addition
Algorithm:
Matrix addition involves adding corresponding elements of two matrices, A and B, to produce a result
matrix C. Given two n × m matrices, the operation follows:

void matrixAddition(int A[][M], int B[][M], int C[][M], int n, int m) {
    for (int i = 0; i < n; i++) {         // Iterate over rows
        for (int j = 0; j < m; j++) {     // Iterate over columns
            C[i][j] = A[i][j] + B[i][j];  // Add corresponding elements
        }
    }
}
Step-by-Step Analysis
1. Identifying Basic Operations
The primary operation inside the nested loops is:
Addition: C[i][j] = A[i][j] + B[i][j] (1 operation per element)
Each element in A and B is accessed once, added, and stored in C.
2. Time Complexity Analysis
Outer loop runs n times (rows)
Inner loop runs m times (columns)
Total operations: n * m additions
Thus, the time complexity is O(n * m).
6. Best Case, Worst Case, and Average Case Analysis for Linear Search
Algorithm:
Linear search is a simple searching algorithm that sequentially checks each element in an
array until the target element is found.

int linearSearch(int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key) {  // Check if the current element matches the key
            return i;         // Return the index if found
        }
    }
    return -1;                // Return -1 if the key is not found
}

Best case: the key is at index 0, so only one comparison is made: O(1).
Worst case: the key is at the last index or not present at all, so all n elements are compared: O(n).
Average case: assuming the key is equally likely to be at any of the n positions, the expected number of
comparisons is (n + 1)/2, which is O(n).
Factorial
n! = 1•2•3...n and 0! = 1 (called the initial case)
So the recursive definition is n! = n•(n-1)!
Algorithm F(n)
if n = 0 then return 1 // base case
else return F(n-1)•n // recursive call
Basic operation: multiplication
The number of multiplications satisfies
M(n) = M(n-1) + 1, with M(0) = 0,
which is a recursive formula too. This is typical.
Tower of Hanoi
Explain the problem using a figure.
Basic operation: moving one disk. The recurrence is M(n) = 2M(n-1) + 1 with M(1) = 1, which solves
to M(n) = 2^n - 1, so M(n) ∈ Θ(2^n).
Binary Representation
Example: 10 = 1010 and 5 = 101 in binary
Algorithm BinRec(n)
if n = 1 then return 1
else return BinRec(floor(n/2)) + 1
The number of additions satisfies A(n) = A(floor(n/2)) + 1 with A(1) = 0.
Substitute n = 2^k (so that k = lg(n)):
A(2^k) = A(2^(k-1)) + 1, with initial condition A(2^0) = 0,
which telescopes to A(2^k) = k, i.e., A(n) = lg(n).
Recursion Tree
While solving recurrences by this method, we divide the problem into sub-problems of equal size, for
example T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1, and f(n) is the cost of splitting the problem or
combining the sub-problems. In a recursion tree, each node represents the cost of a single subproblem
somewhere in the set of recursive invocations. We sum the costs within each level of the tree to obtain
the per-level costs, and then we sum all the per-level costs to determine the total cost of all levels of the
recursion.
Solve T(n) = 3T(n/4) + cn² using the recursion tree method.
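A worked solution along the lines just described (a sketch; c is the constant from the statement). Each level i of the tree has 3^i nodes, each of cost c(n/4^i)²:

```latex
\begin{aligned}
\text{cost of level } i &= 3^i \cdot c\left(\frac{n}{4^i}\right)^2 = \left(\tfrac{3}{16}\right)^i c n^2,\\
\text{depth} &= \log_4 n, \quad \text{with } 3^{\log_4 n} = n^{\log_4 3} \text{ leaves of cost } \Theta(1),\\
T(n) &\le c n^2 \sum_{i=0}^{\infty} \left(\tfrac{3}{16}\right)^i + \Theta\!\left(n^{\log_4 3}\right)
      = \tfrac{16}{13}\, c n^2 + o(n^2).
\end{aligned}
```

Since the root level alone costs cn², we conclude T(n) = Θ(n²): the geometric series is dominated by the work at the root.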