Unit 4

The document outlines key concepts and algorithms related to Divide and Conquer, emphasizing their importance in algorithm analysis and problem-solving. It details various algorithms such as Binary Search, Merge Sort, and methods for finding maximum and minimum values, along with their time and space complexities. Additionally, it includes a summary table and multiple-choice questions to reinforce understanding of the material.

| Topic | Importance (Stars) | Reason / Past Trend |
|---|---|---|
| Algorithm Analysis, Time-Space Tradeoff, Asymptotic Notations, Conditional asymptotic notation | ⭐⭐⭐⭐ (4 Stars) | Fundamental concepts in algorithm analysis; complexity often asked; understanding big-O is critical but not always very deep. |
| Removing condition from conditional asymptotic notation, Properties of big-Oh notation | ⭐⭐⭐⭐ (4 Stars) | Important to understand nuances of asymptotic notation; usually comes with theoretical questions. |
| Recurrence equations, Solving recurrence equations | ⭐⭐⭐⭐ (4 Stars) | Very important for analyzing recursive algorithms; often appears in exams with solving examples. |
| Analysis of linear search | ⭐⭐⭐ (3 Stars) | Basic algorithm with simple complexity; expected as a foundational concept. |
| Divide and Conquer: General Method, Binary Search, Finding Maximum and Minimum, Merge Sort | ⭐⭐⭐⭐⭐ (5 Stars) | Classic algorithms, very frequently asked; merge sort and binary search are must-know. |
| General Method, Multistage Graphs, All-Pair shortest paths, Optimal binary search trees | ⭐⭐⭐⭐ (4 Stars) | Important graph algorithms and dynamic programming applications; relevant for problem-solving. |
| General Method, 8-Queens problem, Hamiltonian problem | ⭐⭐⭐ (3 Stars) | Classical backtracking and NP-completeness related problems; sometimes asked but less frequently than others. |
| Connected Components, Spanning Trees, Biconnected components | ⭐⭐⭐⭐ (4 Stars) | Important graph theory concepts; spanning trees like MST often appear. |
| Introduction to NP Hard and NP-Completeness | ⭐⭐⭐⭐⭐ (5 Stars) | Hot topic in theory and complexity; frequently asked in some form due to its conceptual importance. |

Divide and Conquer Algorithms — Detailed Notes

What is Divide and Conquer?


Divide and Conquer is a fundamental algorithm design paradigm that solves a problem by:

1. Dividing the problem into smaller subproblems of the same type.

2. Conquering (solving) each subproblem recursively.

3. Combining the solutions of the subproblems to get the final solution.

Why Divide and Conquer?

 Helps solve problems efficiently by breaking them down.

 Often leads to algorithms with better time complexity than naive solutions.

 Especially useful in sorting, searching, and optimization problems.

General Method of Divide and Conquer

For a problem of size n:

 Divide the problem into a subproblems, each of size n/b.

 Solve each subproblem recursively.

 Combine the solutions in O(f(n)) time.

The time complexity recurrence is generally:

T(n) = a × T(n/b) + f(n)
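As a concrete toy instance of this template (illustrative only, not part of the original notes), summing an array splits the problem into a = 2 subproblems of size n/2 with an O(1) combine step, giving the recurrence T(n) = 2T(n/2) + O(1):

```python
def dc_sum(arr, low, high):
    """Toy divide-and-conquer example: sum arr[low..high].
    Here a = 2, b = 2, and f(n) = O(1) (one addition to combine)."""
    if low == high:                      # base case: a single element
        return arr[low]
    mid = (low + high) // 2              # divide
    left = dc_sum(arr, low, mid)         # conquer left half
    right = dc_sum(arr, mid + 1, high)   # conquer right half
    return left + right                  # combine in O(1)
```
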

1. Binary Search

Problem Statement:

Given a sorted array of size n, find the position of a target element x or determine that it does
not exist.

Algorithm:

 Compare the target x with the middle element of the array.

 If they are equal, return the index.

 If x is smaller, recursively search the left half.


 If x is larger, recursively search the right half.

 Repeat until the element is found or the search space is empty.

Pseudocode:

BinarySearch(arr, low, high, x):
    if low > high:
        return -1    # element not found
    mid = (low + high) // 2
    if arr[mid] == x:
        return mid
    else if arr[mid] > x:
        return BinarySearch(arr, low, mid - 1, x)
    else:
        return BinarySearch(arr, mid + 1, high, x)

Time Complexity:

 At each step, search space is halved.

 Recurrence: T(n) = T(n/2) + O(1)

 Solution: T(n) = O(log n)
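The pseudocode above maps directly onto a runnable iterative version; the following Python sketch is illustrative (the function name is our own):

```python
def binary_search(arr, x):
    """Iterative binary search; assumes arr is sorted in ascending order.
    Returns the index of x, or -1 if x is not present."""
    low, high = 0, len(arr) - 1
    while low <= high:              # search space is non-empty
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid
        elif arr[mid] > x:          # x can only be in the left half
            high = mid - 1
        else:                       # x can only be in the right half
            low = mid + 1
    return -1                       # element not found
```

The loop halves the search space each iteration, so at most about log₂ n iterations run, matching the O(log n) bound.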

2. Finding Maximum and Minimum

Problem Statement:

Find both the maximum and minimum elements in an array of size n with minimum number of
comparisons.

Naive Approach:

 Traverse array, keep track of max and min.


 Comparisons: 2(n − 1)

Divide and Conquer Approach:

 Divide the array into two halves.

 Recursively find max and min of each half.

 Compare max of left and right halves to find overall max. Similarly for min.

Algorithm:

 Divide array into halves until base case (one or two elements).

 Return max and min from each half.

 Combine results with two comparisons.

Time Complexity:

 Number of comparisons in the worst case: 3n/2 − 2, which is better than the naive 2n − 2.
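A sketch of the comparison-counting version (illustrative; the exact count 3n/2 − 2 holds when n is a power of two):

```python
def max_min(arr, low, high):
    """Return (maximum, minimum, comparisons) for arr[low..high]
    using divide and conquer."""
    if low == high:                       # one element: no comparison
        return arr[low], arr[low], 0
    if high == low + 1:                   # two elements: one comparison
        if arr[low] < arr[high]:
            return arr[high], arr[low], 1
        return arr[low], arr[high], 1
    mid = (low + high) // 2
    max1, min1, c1 = max_min(arr, low, mid)        # left half
    max2, min2, c2 = max_min(arr, mid + 1, high)   # right half
    # combine: one comparison for the max, one for the min
    return max(max1, max2), min(min1, min2), c1 + c2 + 2
```

For n = 8 the count is 3·8/2 − 2 = 10, versus 2·8 − 2 = 14 for the naive scan.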

3. Merge Sort

Problem Statement:

Sort an array of size n in ascending order.

Idea:

 Divide the array into two halves.

 Recursively sort both halves.

 Merge the two sorted halves to produce a fully sorted array.

Merge Process:

 Create two temporary arrays for left and right halves.

 Use two pointers to compare elements and copy the smaller one to the main array.

Pseudocode:

MergeSort(arr, left, right):
    if left < right:
        mid = (left + right) // 2
        MergeSort(arr, left, mid)
        MergeSort(arr, mid + 1, right)
        Merge(arr, left, mid, right)

Merge(arr, left, mid, right):
    # merge the two sorted halves arr[left..mid] and arr[mid+1..right]

Time Complexity:

 Recurrence: T(n) = 2T(n/2) + O(n) (for merging)

 Using the Master theorem: T(n) = O(n log n)

Space Complexity:

 Requires O(n) auxiliary space for merging.
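A minimal Python sketch of the algorithm above (out-of-place, so the O(n) auxiliary space is explicit):

```python
def merge_sort(arr):
    """Return a new sorted list; uses O(n) auxiliary space."""
    if len(arr) <= 1:                  # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # sort left half
    right = merge_sort(arr[mid:])      # sort right half
    return merge(left, right)          # combine in O(n)

def merge(left, right):
    """Merge two sorted lists; <= keeps equal elements in order (stable)."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])               # copy any leftovers
    out.extend(right[j:])
    return out
```
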

Summary Table

| Algorithm | Time Complexity | Space Complexity | Approach Type |
|---|---|---|---|
| Binary Search | O(log n) | O(1) | Divide and Conquer |
| Finding Max & Min | O(n), fewer comparisons | O(log n) due to recursion stack | Divide and Conquer |
| Merge Sort | O(n log n) | O(n) | Divide and Conquer |

30 MCQs on Divide and Conquer (Binary Search, Max & Min, Merge Sort)

1. What is the time complexity of Binary Search?


A) O(n)
B) O(log n)
C) O(n log n)
D) O(1)
Answer: B
Explanation: Binary Search divides the array in half each step, so the complexity is logarithmic.

2. Binary Search requires the input array to be:


A) Sorted
B) Unsorted
C) Random
D) None of these
Answer: A
Explanation: Binary Search only works on sorted arrays.

3. What is the base case in the recursive Binary Search?


A) When array size is 0
B) When element is found or low > high
C) When element is larger than mid
D) None of the above
Answer: B
Explanation: Search stops when the element is found or the search space is empty (low > high).

4. Which of the following is NOT true about Merge Sort?


A) It uses divide and conquer
B) Time complexity is O(n log n)
C) It is an in-place sort
D) It uses recursion
Answer: C
Explanation: Merge Sort requires additional space for merging and is not in-place.

5. What is the space complexity of Merge Sort?


A) O(1)
B) O(log n)
C) O(n)
D) O(n log n)
Answer: C
Explanation: Merge Sort needs extra space proportional to the array size for merging.

6. The recurrence relation for Merge Sort is:


A) T(n) = T(n−1) + O(1)
B) T(n) = 2T(n/2) + O(n)
C) T(n) = T(n/2) + O(1)
D) T(n) = 2T(n−1) + O(n)
Answer: B
Explanation: Merge Sort divides array into two halves and merges in linear time.

7. How many comparisons does the naive method for finding max and min in an array of size
n take?
A) n
B) 2n
C) 2n − 2
D) 3n/2
Answer: C
Explanation: The naive method compares every element with the current max and min, leading to
2(n − 1) comparisons.

8. The divide and conquer method for finding max and min requires how many comparisons?
A) n
B) 2n
C) 2n − 2
D) 3n/2 − 2
Answer: D
Explanation: It reduces comparisons to approximately 1.5n.

9. Which step is NOT part of divide and conquer?


A) Divide
B) Sort
C) Conquer
D) Combine
Answer: B
Explanation: Sort may be part of some algorithms but is not a generic step of divide and
conquer.

10. Binary Search algorithm fails if the array is:


A) Sorted ascending
B) Sorted descending
C) Unsorted
D) All of the above
Answer: C
Explanation: Binary Search only works on sorted arrays.

11. Which of the following sorting algorithms is NOT based on divide and conquer?
A) Quick Sort
B) Merge Sort
C) Heap Sort
D) Binary Search
Answer: C
Explanation: Heap Sort is based on heap data structure, not divide and conquer.

12. Merge operation in Merge Sort takes:


A) O(n) time
B) O(log n) time
C) O(n log n) time
D) O(1) time
Answer: A
Explanation: Merging two sorted arrays takes linear time proportional to combined size.

13. Which is the correct recurrence for Binary Search?


A) T(n) = 2T(n/2) + O(1)
B) T(n) = T(n/2) + O(1)
C) T(n) = T(n−1) + O(1)
D) T(n) = T(n/2) + O(n)
Answer: B
Explanation: Binary Search reduces problem size by half and does constant work at each step.

14. What happens in the "combine" step of divide and conquer for Merge Sort?
A) Split the array into halves
B) Merge two sorted halves
C) Recursively sort each half
D) Find the middle element
Answer: B
Explanation: Combine step merges two sorted halves into one sorted array.

15. Which of the following is TRUE about divide and conquer?


A) It never uses recursion
B) It divides problem into subproblems of smaller size
C) It solves all subproblems iteratively
D) It combines results by multiplication
Answer: B
Explanation: Divide and conquer breaks problems into smaller subproblems.

16. The worst-case time complexity of Binary Search is:


A) O(n)
B) O(log n)
C) O(n log n)
D) O(1)
Answer: B
Explanation: Even in worst case, Binary Search halves the search space each step.

17. In Merge Sort, the splitting of the array stops when the size becomes:
A) 0
B) 1
C) 2
D) n/2
Answer: B
Explanation: Recursion stops when sub-array size is 1 (single element arrays are sorted).
18. Which sorting algorithm uses divide and conquer and has average time complexity
O(n log n) but is NOT stable?
A) Merge Sort
B) Quick Sort
C) Binary Search
D) Bubble Sort
Answer: B
Explanation: Quick Sort is divide and conquer with O(n log n) average time, but it is not stable.

19. The recurrence relation T(n) = 2T(n/2) + O(n) is solved using:
A) Substitution method
B) Master theorem
C) Recursion tree method
D) All of the above
Answer: D
Explanation: All are valid methods to solve recurrence relations.

20. How many times is the array divided in Merge Sort?


A) log n times
B) n times
C) n log n times
D) 1 time
Answer: A
Explanation: The array is divided until subarrays of size 1 remain, so division happens log n times.

21. Which algorithm is best suited for searching in a sorted array?


A) Linear Search
B) Binary Search
C) Merge Sort
D) Quick Sort
Answer: B
Explanation: Binary Search is optimized for sorted arrays.
22. What is the maximum depth of recursion in Binary Search on array of size n?
A) n
B) log n
C) n/2
D) 1
Answer: B
Explanation: Each recursive call halves the array size, so the depth is log n.

23. Which of these is NOT a benefit of Divide and Conquer?


A) Parallelism
B) Easier to solve large problems
C) High memory efficiency always
D) Reusable subproblem solutions
Answer: C
Explanation: Divide and conquer can sometimes use more memory, e.g., Merge Sort.

24. Binary Search can be implemented:


A) Only recursively
B) Only iteratively
C) Both recursively and iteratively
D) None of these
Answer: C
Explanation: Both implementations are possible.

25. What is the minimum number of comparisons needed to find both max and min in an
array of size n using divide and conquer?
A) 2n
B) 1.5n − 2
C) n − 1
D) n
Answer: B
Explanation: Divide and conquer reduces comparisons to about 1.5n − 2.
26. Which step in divide and conquer algorithms can often be parallelized?
A) Divide
B) Conquer
C) Combine
D) All of the above
Answer: B
Explanation: Subproblems can be solved independently in parallel during conquer.

27. Merge Sort is a _______ sorting algorithm.


A) Stable
B) Unstable
C) In-place
D) None of these
Answer: A
Explanation: Merge Sort preserves the relative order of equal elements.

28. Binary Search algorithm works correctly on:


A) Any array
B) Only sorted arrays
C) Only arrays sorted in ascending order
D) Arrays sorted in ascending or descending order
Answer: C
Explanation: Classical binary search assumes ascending order. For descending, algorithm needs
adjustment.

29. The merging process in Merge Sort compares:


A) Elements from same half
B) Elements from left and right halves
C) All elements simultaneously
D) None of these
Answer: B
Explanation: Merge compares elements from two halves to build sorted array.
30. The efficiency of Divide and Conquer depends mainly on:
A) Number of subproblems
B) Size reduction factor
C) Time to combine solutions
D) All of the above
Answer: D
Explanation: All factors affect overall efficiency.

1. Background: Computational Complexity

 Computational complexity theory studies the efficiency of algorithms and classifies


problems based on how hard they are to solve.

 Problems are grouped into complexity classes like P, NP, NP-hard, and NP-complete.

2. Key Complexity Classes

| Class | Description | Examples |
|---|---|---|
| P | Problems solvable in polynomial time by a deterministic Turing machine (efficiently solvable problems). | Sorting, Searching, Shortest path |
| NP | Problems for which a solution can be verified in polynomial time by a deterministic Turing machine, or equivalently solvable in polynomial time by a non-deterministic Turing machine. | Boolean satisfiability (SAT), Hamiltonian Path |
| NP-hard | Problems at least as hard as the hardest problems in NP. May or may not be in NP (may not be verifiable quickly). | Halting problem (undecidable), Optimization problems |
| NP-complete | Problems that are both in NP and NP-hard. If one NP-complete problem can be solved in polynomial time, all NP problems can be solved in polynomial time (P = NP). | SAT, 3-SAT, Hamiltonian Cycle |

3. What does NP mean?

 NP stands for Nondeterministic Polynomial time.


 A problem is in NP if a candidate solution can be verified in polynomial time.

 Example: Given a path in a graph, verifying if it is Hamiltonian is easy (polynomial), but


finding it may be hard.
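This verify-versus-find asymmetry can be made concrete: checking a proposed Hamiltonian cycle is simple polynomial-time work, even though no polynomial algorithm is known for finding one. A minimal sketch (function and variable names are our own):

```python
def verify_hamiltonian_cycle(n, edges, cycle):
    """Polynomial-time verifier: does `cycle` (a list of vertex ids)
    form a Hamiltonian cycle of the undirected graph (n vertices, edges)?"""
    if sorted(cycle) != list(range(n)):        # must visit every vertex exactly once
        return False
    edge_set = {frozenset(e) for e in edges}   # O(E) preprocessing
    # every consecutive pair, wrapping around, must be an edge: O(n) checks
    return all(frozenset((cycle[i], cycle[(i + 1) % n])) in edge_set
               for i in range(n))
```

The verifier runs in O(n + E) time; the search problem, by contrast, has only exponential-time exact algorithms known.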

4. NP-hard Problems

 NP-hard problems are as difficult as NP problems but not necessarily in NP.

 They may not have a polynomial time verifier.

 They are used in optimization and decision problems where no known polynomial
algorithm exists.

5. NP-complete Problems

 NP-complete problems are the hardest problems in NP.

 They have two properties:

o They belong to NP (solutions verifiable in polynomial time).

o Every problem in NP can be reduced to them in polynomial time.

 If a polynomial-time algorithm exists for one NP-complete problem, then P = NP.

6. Polynomial Time Reduction

 To prove that a problem A is NP-complete, we reduce a known NP-complete problem B to A in polynomial time.

 This shows A is at least as hard as B.

7. Examples of NP-complete Problems

 SAT (Boolean satisfiability problem): Given a Boolean formula, determine if there's an


assignment of variables that makes it true.

 3-SAT: Special case of SAT with clauses of size 3.

 Hamiltonian Cycle: Is there a cycle visiting every vertex exactly once?


 Subset Sum: Find if a subset of numbers sums to a target value.

 Clique Problem: Finding a complete subgraph of given size.

8. Importance of NP-hard and NP-completeness

 Understanding which problems are NP-complete helps in deciding whether to look for
exact algorithms or approximate/heuristic solutions.

 It guides algorithm design and practical problem solving.

9. Summary Table

| Term | Meaning | Polynomial Verifier? | Polynomial Reducible? |
|---|---|---|---|
| P | Problems solvable in polynomial time | Yes | — |
| NP | Solutions verifiable in polynomial time | Yes | — |
| NP-hard | At least as hard as NP problems (not necessarily verifiable) | Not necessarily | — |
| NP-complete | NP problems that are also NP-hard | Yes | Yes (from every NP problem) |

10. Visualizing the Classes

 P ⊆ NP, and every NP-complete problem is NP-hard; NP-hard problems may lie outside NP.

 NP-complete is the intersection of NP and NP-hard.


30 MCQs on NP Hard and NP-Completeness

1. What does NP stand for?


A) Non-deterministic Polynomial time
B) Non-Polynomial time
C) Non-deterministic Problem
D) None of the above
Answer: A
Explanation: NP means problems verifiable in polynomial time by a non-deterministic Turing
machine.

2. A problem is NP-hard if:


A) It is in NP
B) It is at least as hard as the hardest problems in NP
C) It can be solved in polynomial time
D) It can be verified in polynomial time
Answer: B
Explanation: NP-hard problems are at least as difficult as NP problems, may or may not be in NP.

3. NP-complete problems are:


A) Only NP problems
B) Only NP-hard problems
C) Both NP and NP-hard
D) Neither NP nor NP-hard
Answer: C
Explanation: NP-complete problems belong to both NP and NP-hard.

4. Which of these is an example of an NP-complete problem?


A) Sorting
B) Hamiltonian Cycle
C) Binary Search
D) Fibonacci Calculation
Answer: B
Explanation: Hamiltonian Cycle problem is a classic NP-complete problem.

5. What does it mean if a problem is in class P?


A) It cannot be solved efficiently
B) It can be solved in polynomial time
C) It can be verified in polynomial time
D) It is undecidable
Answer: B
Explanation: Problems in P can be solved efficiently in polynomial time.

6. Which of the following statements is true?


A) P = NP is proven
B) P ≠ NP is proven
C) P = NP is an open problem
D) P and NP are the same classes
Answer: C
Explanation: Whether P = NP or not is an unsolved problem in computer science.

7. Polynomial time reduction is used to:


A) Prove a problem is NP-hard
B) Solve a problem faster
C) Reduce input size
D) None of the above
Answer: A
Explanation: Polynomial-time reduction shows one problem is at least as hard as another.

8. Which of the following problems is NOT known to be NP-complete?


A) SAT
B) 3-SAT
C) Linear Search
D) Clique Problem
Answer: C
Explanation: Linear Search is solvable in linear time, not NP-complete.

9. If an NP-complete problem is solved in polynomial time, then:


A) Only that problem is solved faster
B) All NP problems can be solved in polynomial time
C) P ≠ NP
D) NP-hard problems become easy
Answer: B
Explanation: Solving one NP-complete problem in polynomial time implies P=NP.

10. Verifying a solution to an NP problem takes:


A) Exponential time
B) Polynomial time
C) Constant time
D) Logarithmic time
Answer: B
Explanation: By definition, NP problems have polynomial-time verifiers.

11. Which of the following is true about NP-hard problems?


A) All NP-hard problems are in NP
B) NP-hard problems may not be verifiable in polynomial time
C) NP-hard problems are easier than NP problems
D) NP-hard problems can be solved in polynomial time
Answer: B
Explanation: NP-hard problems may not be in NP or verifiable efficiently.

12. The Subset Sum problem is:


A) In P
B) NP-complete
C) NP-hard but not in NP
D) None of these
Answer: B
Explanation: Subset Sum is a classic NP-complete problem.
13. Which of the following is a characteristic of NP-complete problems?
A) Easy to solve but hard to verify
B) Hard to solve and hard to verify
C) Hard to solve but easy to verify
D) Easy to solve and easy to verify
Answer: C
Explanation: Solutions can be verified quickly but finding solutions is believed to be hard.

14. Which of the following problems is undecidable (not in NP)?


A) SAT
B) Halting Problem
C) 3-SAT
D) Hamiltonian Cycle
Answer: B
Explanation: Halting problem is undecidable and not in NP.

15. The class P is a subset of:


A) NP
B) NP-hard
C) NP-complete
D) None of these
Answer: A
Explanation: All problems solvable in polynomial time are also verifiable in polynomial time.

16. What is the importance of polynomial time reductions?


A) To prove that a problem is easy
B) To prove equivalence in hardness between problems
C) To find exact solutions
D) To reduce input size
Answer: B
Explanation: Reductions show one problem is at least as hard as another.
17. If a problem is NP-complete, what can be inferred about its complement?
A) It is also NP-complete
B) It is always in P
C) It is in co-NP
D) Nothing can be inferred without further proof
Answer: D
Explanation: Relationship between NP-complete problems and their complements requires
specific proofs.

18. Which of the following problems is a decision problem?


A) Sorting an array
B) Is there a Hamiltonian path?
C) Finding the maximum element
D) Calculating Fibonacci number
Answer: B
Explanation: Decision problems have yes/no answers.

19. Which of these statements is FALSE?


A) Every problem in P is in NP
B) Every problem in NP is in P
C) NP-complete problems are in NP
D) NP-hard problems can be harder than NP-complete
Answer: B
Explanation: It is unknown whether every NP problem is also in P.

20. The SAT problem asks:


A) Can a Boolean formula be satisfied?
B) Is a graph connected?
C) What is the shortest path?
D) How many subsets sum to zero?
Answer: A
Explanation: SAT asks if a Boolean formula is satisfiable.
21. The famous open problem in computer science related to these classes is:
A) Sorting complexity
B) P vs NP problem
C) Graph connectivity
D) Traveling Salesman Problem
Answer: B
Explanation: P vs NP problem asks if every NP problem is solvable in polynomial time.

22. What is a characteristic of NP-complete problems?


A) Solutions can only be found by brute force
B) Solutions can be found efficiently for all cases
C) Verifying solutions is easy but finding them is hard
D) They are not verifiable
Answer: C
Explanation: Verification is polynomial, solution search believed hard.

23. Which of the following is NOT a known NP-complete problem?


A) 3-SAT
B) Traveling Salesman Problem (decision version)
C) Linear Search
D) Vertex Cover
Answer: C
Explanation: Linear Search is solvable in linear time, not NP-complete.

24. What does it mean if a problem is in co-NP?


A) Its complement is in NP
B) It is NP-hard
C) It is undecidable
D) It is solvable in polynomial time
Answer: A
Explanation: co-NP contains complements of NP problems.

25. The Clique problem asks:


A) Does a graph contain a complete subgraph of size k?
B) What is the largest independent set?
C) Is a graph bipartite?
D) Is a graph connected?
Answer: A
Explanation: Clique problem looks for a complete subgraph.

26. Which of the following statements about NP problems is correct?


A) All NP problems can be solved in polynomial time
B) All NP problems can be verified in polynomial time
C) All NP problems are undecidable
D) All NP problems are solvable in constant time
Answer: B
Explanation: By definition, NP problems have polynomial-time verifiers.

27. Which technique is commonly used to prove NP-completeness?


A) Greedy algorithm
B) Polynomial-time reduction
C) Dynamic programming
D) Divide and conquer
Answer: B
Explanation: Polynomial-time reductions prove NP-completeness.

28. The Halting Problem is:


A) In P
B) NP-complete
C) NP-hard but undecidable
D) None of the above
Answer: C
Explanation: Halting problem is undecidable and NP-hard.

29. The decision version of an optimization problem:


A) Always easier than optimization version
B) Usually NP-complete if optimization is hard
C) Not related to complexity classes
D) None of these
Answer: B
Explanation: Decision versions of hard optimization problems are often NP-complete.

30. Which of these is true about the relationship between P and NP?
A) P ⊆ NP
B) NP ⊆ P
C) P = NP is proven true
D) P = NP is proven false
Answer: A
Explanation: Problems solvable in polynomial time are also verifiable in polynomial time.

1. Algorithm Analysis

 Algorithm analysis evaluates the efficiency of an algorithm in terms of time (execution


steps) and space (memory used).

 It helps in comparing algorithms and predicting their performance on large inputs.

 Two major types:

o Time Complexity: Number of operations executed

o Space Complexity: Memory consumed during execution

2. Time-Space Tradeoff

 Sometimes improving time complexity increases space usage and vice versa.

 Example: Using memoization or caching increases space but reduces recomputation


time.

 Developers must balance time and space based on constraints.
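A standard illustration of this tradeoff (not from the notes themselves) is memoized Fibonacci, which buys linear time at the cost of a linear-size cache:

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential time, O(n) stack space: recomputes subproblems."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """O(n) time, bought with O(n) extra space for cached results."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```

Both compute the same values, but the cached version trades memory for a dramatic reduction in repeated work.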

3. Asymptotic Notations

Used to describe the limiting behavior of an algorithm’s complexity as the input size n grows large.

 Big-O Notation O(f(n))
Describes an upper bound on the time/space complexity, i.e., the worst-case scenario.
Example: T(n) = O(n^2) means the time grows at most proportionally to n^2.

 Omega Notation Ω(f(n))
Describes a lower bound, i.e., the best-case growth rate.
Example: T(n) = Ω(n) means time grows at least as fast as linear time.

 Theta Notation Θ(f(n))
Describes a tight bound: both an upper and a lower bound.
Example: T(n) = Θ(n log n) means time grows exactly proportionally to n log n.

4. Conditional Asymptotic Notation

 Conditional asymptotic notation involves cases where the complexity depends on


specific conditions.

 For example, quicksort has average-case O(n log n) but worst-case O(n^2).

5. Removing Condition from Conditional Asymptotic Notation

 To simplify analysis, often worst-case asymptotic notations are used (e.g., Big-O),
ignoring average/best cases.

 This helps in making generalized guarantees about algorithm performance.

6. Properties of Big-O Notation

 Reflexive: f(n) = O(f(n))

 Transitive: if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n))

 Addition: O(f(n)) + O(g(n)) = O(max(f(n), g(n)))

 Multiplication: O(f(n)) × O(g(n)) = O(f(n) × g(n))

 Constants are ignored: O(3n^2) = O(n^2)

7. Examples of Big-O Common Complexities

| Complexity | Example | Explanation |
|---|---|---|
| O(1) | Accessing an array element | Constant time |
| O(log n) | Binary search | Halves input each step |
| O(n) | Linear search | Checks each element |
| O(n log n) | Merge sort, Heap sort | Divide and conquer sorting |
| O(n^2) | Bubble sort, Selection sort | Nested loops over input |
| O(2^n) | Recursive Fibonacci (naive) | Exponential growth |

8. Importance in Competitive Exams & Practical Use

 Understanding these concepts is crucial for algorithm design, optimizing solutions, and
writing efficient code.

 Most coding and competitive programming questions require quick estimation of


algorithm complexity.


30 MCQs on Algorithm Analysis, Time-Space Tradeoff, Asymptotic Notations

1. What does Big-O notation represent?


A) Best-case complexity
B) Average-case complexity
C) Upper bound on time complexity
D) Exact time complexity
Answer: C
Explanation: Big-O gives the worst-case upper bound of an algorithm’s time/space complexity.

2. Which of the following is an example of constant time complexity?


A) Linear search
B) Accessing an array element by index
C) Merge sort
D) Binary search
Answer: B
Explanation: Accessing an element of an array by index takes constant time, O(1).

3. Time-space tradeoff means:


A) Using more time reduces space used
B) Using more space can reduce time taken
C) Time and space always increase together
D) None of the above
Answer: B
Explanation: Sometimes using extra memory (space) can reduce the time needed, like in
memoization.

4. Which sorting algorithm has an average time complexity of O(n log n)?
A) Bubble sort
B) Merge sort
C) Selection sort
D) Insertion sort
Answer: B
Explanation: Merge sort uses divide and conquer with average and worst-case time complexity O(n log n).

5. Which asymptotic notation represents a lower bound on an algorithm’s complexity?


A) Big-O
B) Omega
C) Theta
D) Sigma
Answer: B
Explanation: Omega notation provides a lower bound (best-case scenario).

6. Which of the following is TRUE about Big-O notation?


A) O(3n) = O(n)
B) O(n^2 + n) = O(n^3)
C) O(log n) > O(n)
D) O(n^2) = O(n)
Answer: A
Explanation: Constants are ignored, so O(3n) = O(n).

7. Which of these describes the worst-case time complexity?


A) Omega notation
B) Big-O notation
C) Theta notation
D) Average-case analysis
Answer: B
Explanation: Big-O is used to denote worst-case upper bound.

8. What is the time complexity of binary search?


A) O(n)
B) O(log n)
C) O(n log n)
D) O(1)
Answer: B
Explanation: Binary search halves the input in each step, so logarithmic time.

9. Which algorithm uses the divide and conquer method?


A) Bubble sort
B) Merge sort
C) Linear search
D) Selection sort
Answer: B
Explanation: Merge sort divides input into halves recursively.
10. The complexity O(n^2) is commonly associated with:
A) Merge sort
B) Quick sort
C) Bubble sort
D) Binary search
Answer: C
Explanation: Bubble sort has nested loops resulting in quadratic time complexity.

11. Conditional asymptotic notation considers:


A) Only the best case
B) Only the worst case
C) Different cases based on conditions
D) Average of best and worst cases
Answer: C
Explanation: It takes into account different scenarios like best, average, and worst.

12. What does Θ(f(n)) notation imply?


A) An upper bound only
B) A lower bound only
C) Both upper and lower bound
D) None of the above
Answer: C
Explanation: Theta notation tightly bounds the function from above and below.

13. Which of the following is NOT true about space complexity?


A) It measures memory used
B) It is independent of input size
C) It is analyzed like time complexity
D) Sometimes time-space tradeoff affects it
Answer: B
Explanation: Space complexity depends on input size.
14. Which property of Big-O notation states O(f(n)) + O(g(n)) = O(max(f(n), g(n)))?
A) Transitive
B) Reflexive
C) Addition property
D) Multiplication property
Answer: C
Explanation: Big-O addition property simplifies sum of functions to max term.

15. Which of the following is an example of exponential time complexity?


A) O(n²)
B) O(log n)
C) O(2ⁿ)
D) O(n log n)
Answer: C
Explanation: Exponential time grows as powers of 2.

16. What does removing condition from conditional asymptotic notation usually imply?
A) Considering only average case
B) Considering worst-case only
C) Considering best-case only
D) Ignoring all cases
Answer: B
Explanation: Usually, worst-case Big-O is used for simplicity.

17. Which of the following is a correct property of Big-O notation?


A) O(c·f(n)) = O(f(n)), where c is a constant
B) O(f(n) + g(n)) = O(f(n) × g(n))
C) O(f(n) × g(n)) = O(f(n) + g(n))
D) O(f(n)) = O(g(n)) for all functions
Answer: A
Explanation: Constants are ignored in Big-O notation.
18. What is the space complexity of an algorithm using recursion with depth n?
A) O(1)
B) O(n)
C) O(log n)
D) O(n²)
Answer: B
Explanation: Recursive call stack uses space proportional to recursion depth.

19. Which of the following algorithms has a worst-case time complexity of O(n²)?
A) Quick sort
B) Merge sort
C) Selection sort
D) Binary search
Answer: C
Explanation: Selection sort is O(n²) in all cases.

20. Which notation describes the exact asymptotic behavior?


A) Big-O
B) Big-Theta
C) Big-Omega
D) Big-Sigma
Answer: B
Explanation: Theta notation gives exact asymptotic bounds.

21. What does the “time-space tradeoff” mean in algorithm design?


A) Increasing time increases space
B) Reducing time increases space and vice versa
C) Time and space are independent
D) None of the above
Answer: B
Explanation: To reduce computation time, more memory may be used.
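The tradeoff in Q21 can be seen concretely with memoization: spending O(n) extra memory on a cache cuts the naive Fibonacci recursion from exponential to linear time. A sketch using Python's standard lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # the cache is the "space" side of the tradeoff
def fib(n):
    # Without caching this recursion takes exponentially many calls;
    # with it, each value 0..n is computed exactly once.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))   # → 832040
```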

22. Which of these sorting algorithms is NOT based on divide and conquer?
A) Merge sort
B) Quick sort
C) Bubble sort
D) Heap sort
Answer: C
Explanation: Bubble sort is a simple comparison-based sort without divide and conquer.

23. The worst-case time complexity of linear search is:


A) O(1)
B) O(n)
C) O(log n)
D) O(n²)
Answer: B
Explanation: Linear search checks each element in the worst case.

24. Which is true about average-case complexity?


A) It always equals worst-case complexity
B) It is the expected time over all inputs
C) It ignores all input cases
D) It is always better than best-case
Answer: B
Explanation: Average-case is expected running time over all inputs.

25. Which of the following best describes Big-Omega notation?


A) Upper bound
B) Lower bound
C) Exact bound
D) Average bound
Answer: B
Explanation: Omega notation provides a lower bound on complexity.

26. Which of the following is an example of logarithmic time complexity?


A) Binary search
B) Bubble sort
C) Merge sort
D) Linear search
Answer: A
Explanation: Binary search runs in logarithmic time.

27. What is the worst-case complexity of quicksort?


A) O(n log n)
B) O(n²)
C) O(n)
D) O(log n)
Answer: B
Explanation: Worst-case quicksort is quadratic if pivot selection is poor.

28. The best case time complexity of insertion sort is:


A) O(n²)
B) O(n)
C) O(log n)
D) O(1)
Answer: B
Explanation: If the array is already sorted, insertion sort runs in linear time.

29. The “divide and conquer” technique:


A) Splits a problem into subproblems
B) Solves subproblems independently
C) Combines results of subproblems
D) All of the above
Answer: D
Explanation: Divide and conquer splits, solves, and merges subproblems.

30. Which of the following is NOT true for Big-O notation?


A) It ignores constant multipliers
B) It describes worst-case complexity
C) It provides exact running time
D) It helps compare algorithms
Answer: C
Explanation: Big-O gives an upper bound, not an exact running time.
📘 Detailed Notes: Recurrence Equations, Solving Recurrence Equations, Linear Search
Analysis

1. Recurrence Equations

 A recurrence equation expresses the time complexity of a recursive algorithm in terms
of its input size n and smaller inputs.

 It is a mathematical expression defining a sequence based on previous terms.

General form:

T(n) = a·T(n/b) + f(n)

Where:

 a: number of subproblems,

 n/b: size of each subproblem,

 f(n): cost outside recursion (combine/divide).

2. Methods to Solve Recurrence Equations

a) Substitution Method

 Guess the solution.

 Prove it using mathematical induction.

b) Recursion Tree Method

 Draw a tree of recursive calls.

 Sum the cost at each level.

c) Master Theorem

For equations of the form:

T(n) = aT(n/b) + f(n)

Use the following rules:

Case   Condition                                          Complexity

1      f(n) = O(n^(log_b a − ε))                          T(n) = Θ(n^(log_b a))

2      f(n) = Θ(n^(log_b a))                              T(n) = Θ(n^(log_b a) · log n)

3      f(n) = Ω(n^(log_b a + ε)) and the regularity       T(n) = Θ(f(n))
       condition holds

✅ Example 1:

T(n) = 2T(n/2) + n ⇒ a = 2, b = 2, f(n) = n

log_b a = log₂ 2 = 1, and f(n) = n = n^(log_b a) → Case 2 ⇒

T(n) = Θ(n log n)

✅ Example 2:

T(n) = T(n−1) + c ⇒ linear recursion

T(n) = T(n−1) + c = T(n−2) + 2c = ⋯ = T(1) + (n−1)c = O(n)
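Example 1 can be checked numerically by unrolling the recurrence for powers of 2 and comparing against the closed form n·log₂ n + n (a small sketch; the base case T(1) = 1 is assumed):

```python
def T(n):
    # Exact evaluation of T(n) = 2*T(n/2) + n with base case T(1) = 1
    return 1 if n == 1 else 2 * T(n // 2) + n

# For n = 2^k the closed form is T(n) = n*k + n, consistent with Θ(n log n)
for k in (1, 4, 10):
    n = 2 ** k
    print(n, T(n), n * k + n)   # the last two columns agree
```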

3. Analysis of Linear Search

 Linear Search: Sequentially checks every element in the list until a match is found or the
list ends.

Time Complexity:

 Best case: O(1) (element found at the first position)

 Worst case: O(n) (element not found or at the last position)

 Average case: O(n)

Space Complexity:

 O(1), since no extra space is used.

Recurrence Relation (for educational purposes):

T(n) = T(n−1) + c ⇒ T(n) = O(n)
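A straightforward Python sketch of linear search matching the analysis above:

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if not present."""
    for i, value in enumerate(arr):
        if value == target:
            return i        # best case: found at the first position, O(1)
    return -1               # worst case: all n elements checked, O(n)

print(linear_search([4, 2, 7, 1], 7))   # → 2
print(linear_search([4, 2, 7, 1], 9))   # → -1
```

Only a constant number of extra variables is used, so the space complexity is O(1).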

🧠 Topic: Recurrence Equations, Solving Recurrence Equations, Linear Search Analysis

🔢 30 MCQs with Answers and Explanations

1. What does a recurrence relation represent in algorithm analysis?


A) Space required
B) Number of inputs
C) Time complexity in terms of input size
D) Memory allocation
Answer: C
Explanation: A recurrence relation expresses the running time of a recursive function in terms
of its input size.

2. Which of the following is the recurrence relation for Merge Sort?


A) T(n) = T(n−1) + 1
B) T(n) = 2T(n/2) + n
C) T(n) = T(n) + 1
D) T(n) = T(n−1) + n
Answer: B
Explanation: Merge sort splits the array into two halves and merges in linear time.

3. What is the solution to T(n) = T(n−1) + c, where c is a constant?
A) O(log n)
B) O(1)
C) O(n)
D) O(n log n)
Answer: C
Explanation: This recurrence relation grows linearly.
4. In the recurrence T(n) = 2T(n/2) + n, what are a and b?
A) a = 2, b = 2
B) a = 1, b = 2
C) a = 2, b = 1
D) a = 2, b = n
Answer: A
Explanation: There are 2 subproblems of size n/2 each.

5. Which method is NOT used to solve recurrence relations?


A) Substitution
B) Master Theorem
C) Recursion Tree
D) Kruskal’s algorithm
Answer: D
Explanation: Kruskal’s is a graph algorithm.

6. What is the time complexity of linear search in the worst case?


A) O(1)
B) O(n)
C) O(log n)
D) O(n log n)
Answer: B
Explanation: It checks each element one by one.

7. What is the average case time complexity of linear search?


A) O(1)
B) O(n)
C) O(n²)
D) O(n log n)
Answer: B
Explanation: On average, it checks n/2 elements, which is still O(n).
8. Which technique visualizes recursive calls and adds their costs?
A) Master Theorem
B) Substitution
C) Recursion Tree
D) Tree traversal
Answer: C
Explanation: Recursion tree method shows how cost adds at each level.

9. What is the solution of T(n) = 2T(n/2) + n using the Master Theorem?
A) O(n)
B) O(n log n)
C) O(n²)
D) O(log n)
Answer: B
Explanation: It matches case 2 of Master Theorem.

10. What is the time complexity of the recurrence T(n) = T(n/2) + 1?
A) O(n)
B) O(log n)
C) O(n log n)
D) O(1)
Answer: B
Explanation: Each call divides input size by 2 and does constant work.

11. The Master Theorem is applicable to which form of recurrence?


A) T(n) = T(n−1) + n
B) T(n) = aT(n/b) + f(n)
C) T(n) = T(n) + 1
D) T(n) = nT(n)
Answer: B
Explanation: Master Theorem solves divide-and-conquer recurrences.
12. What is the time complexity of the recurrence T(n) = T(n−1) + n?
A) O(n)
B) O(n²)
C) O(n log n)
D) O(log n)
Answer: B
Explanation: It expands to T(n) = n + (n−1) + ... + 1 = O(n²).

13. The recurrence T(n) = 3T(n/2) + n solves to:
A) O(n log n)
B) O(n^(log₂ 3))
C) O(n)
D) O(n²)
Answer: B
Explanation: Using the Master Theorem, Case 1, since n^(log₂ 3) ≈ n^1.585 dominates n.

14. What does the base case in recurrence relations represent?


A) Worst case
B) Best case
C) Termination point
D) Space complexity
Answer: C
Explanation: Base case defines where recursion ends.

15. Which recurrence gives a logarithmic solution?


A) T(n) = T(n/2) + 1
B) T(n) = T(n) + 1
C) T(n) = T(n) + n
D) T(n) = T(n−1) + n
Answer: A
Explanation: Divides input by 2 each time with constant work.
16. Which of the following is true about linear search?
A) Best-case is O(n)
B) Worst-case is O(1)
C) Best-case is O(1)
D) Worst-case is O(log n)
Answer: C
Explanation: If the element is at the first position.

17. Recursion tree method helps us to:


A) Visualize recursive function calls
B) Replace recursion
C) Implement binary tree
D) Reduce time complexity
Answer: A
Explanation: It helps to compute total time cost.

18. In linear search, if item is not present, how many comparisons are made?
A) 1
B) n/2
C) n
D) n log n
Answer: C
Explanation: In the worst case, it checks all n elements.

19. Which is NOT true for recurrence relations?


A) They model recursive time complexity
B) They can be solved using induction
C) They are only used for sorting algorithms
D) They can be solved using Master Theorem
Answer: C
Explanation: Used in many algorithms, not just sorting.

20. What’s the space complexity of linear search?


A) O(n)
B) O(n²)
C) O(1)
D) O(log n)
Answer: C
Explanation: No additional space is needed.

21. Which of the following is solved using recurrence relations?


A) Iterative programs
B) Recursive algorithms
C) Linear search
D) Hashing
Answer: B
Explanation: Recurrence relations model recursive calls.

22. If each recursive call halves the input, how many calls occur?
A) log n
B) n
C) n²
D) n log n
Answer: A
Explanation: Halving means logarithmic depth.

23. What is a characteristic of divide and conquer algorithms?


A) Use iteration only
B) Use recursion
C) Use only linear search
D) Constant time
Answer: B
Explanation: Divide and conquer is often recursive.

24. What is the recurrence for binary search?


A) T(n) = T(n−1) + 1
B) T(n) = T(n/2) + 1
C) T(n) = 2T(n) + n
D) T(n) = n
Answer: B
Explanation: Binary search splits array in half every time.

25. If T(n) = 4T(n/2) + n, what is its time complexity?
A) O(n²)
B) O(n log n)
C) O(n)
D) O(log n)
Answer: A
Explanation: Case 1 of the Master Theorem: n^(log₂ 4) = n².

26. The recurrence T(n) = T(n/2) + T(n/2) + n simplifies to:
A) T(n) = 2T(n/2) + n
B) T(n) = T(n/4) + n
C) T(n) = n
D) T(n) = T(n) + 1
Answer: A
Explanation: Two calls on half-sized problems.

27. If the base case is T(1) = 1, how many calls does T(n) = T(n−1) + 1 make?
A) 1
B) n
C) n²
D) log n
Answer: B
Explanation: It recurses n times.

28. The recurrence T(n) = T(n/2) + log n solves to:
A) O(n)
B) O(log² n)
C) O(n log n)
D) O(1)
Answer: B
Explanation: Summing log n over the log n recursion levels gives log² n.

29. Which method is best when solving recurrences with irregular terms?
A) Recursion tree
B) Master theorem
C) Substitution
D) Trial and error
Answer: C
Explanation: Substitution helps guess and verify solutions.

30. Linear search performs poorly on:


A) Large unsorted arrays
B) Small arrays
C) Sorted arrays
D) Hash tables
Answer: A
Explanation: It must scan every item in worst-case.

📘 Detailed Notes: Backtracking (General Method, 8-Queens, Hamiltonian Problem)

(★ ★ ★ ★ – High Importance for Bihar STET Paper 2)

🔷 1. What is Backtracking?

Backtracking is a general algorithmic technique for solving combinatorial problems


incrementally by building a solution piece by piece and removing those solutions that fail to
satisfy constraints (called pruning).

It is a depth-first search (DFS) technique applied to state space trees.

🔷 2. Characteristics of Backtracking

 Recursive in nature.
 Follows DFS traversal.

 Works best when:

o The solution space is exponentially large.

o Solutions need to be verified incrementally.

 Eliminates infeasible paths early using pruning conditions.

🔷 3. Backtracking Algorithm – General Method

def backtrack(solution, constraints):
    if is_complete(solution):            # problem-specific completeness test
        output(solution)                 # record or print a full solution
    else:
        for candidate in candidates:     # problem-specific choice set
            if is_valid(candidate, solution):
                solution.add(candidate)
                backtrack(solution, constraints)
                solution.remove(candidate)   # undo the choice (backtrack)

Steps:

1. Start with an empty solution.

2. Add choices incrementally.

3. If constraints are violated, backtrack.

4. Continue until a valid solution is found or all possibilities are exhausted.

🔷 4. Example: N-Queens Problem (8-Queens)

🧠 Problem:
Place 8 queens on an 8×8 chessboard such that no two queens attack each other (no same row,
column, or diagonal).

🎯 Approach:

 Use backtracking to place queens row by row.

 At each step, check:

o No two queens are in the same column.

o No two queens share the same diagonal.

Python Pseudocode:

def solve(board, row):
    if row == 8:                          # all 8 queens placed
        print(board)
        return
    for col in range(8):
        if is_safe(board, row, col):      # no clash in column or diagonals
            board[row][col] = 1
            solve(board, row + 1)
            board[row][col] = 0           # undo and try the next column
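The pseudocode above leaves is_safe undefined. One possible version, assuming board is an N×N grid of 0/1 values and queens are placed row by row (so only the rows already filled need checking):

```python
def is_safe(board, row, col):
    """True if a queen at (row, col) clashes with no queen in rows 0..row-1."""
    n = len(board)
    for r in range(row):
        d = row - r                                    # vertical distance
        if board[r][col] == 1:                         # same column
            return False
        if col - d >= 0 and board[r][col - d] == 1:    # upper-left diagonal
            return False
        if col + d < n and board[r][col + d] == 1:     # upper-right diagonal
            return False
    return True
```

Row conflicts never arise because the solver places exactly one queen per row.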

🔷 5. Example: Hamiltonian Cycle Problem

🧠 Problem:

Given a graph G(V, E), determine if there is a cycle that visits every vertex exactly once and
returns to the starting point.

🎯 Approach:

 Start at a vertex.
 Try to add one vertex at a time to the path.

 At each step, check:

o The vertex is adjacent to the previous one.

o It hasn’t been included in the path before.

o If the path is complete, check for a return edge to the start.

Pseudocode:

def hamiltonian(graph, path, pos):
    if pos == len(graph):
        # all vertices placed: a cycle exists only if the last vertex
        # connects back to the starting vertex
        return graph[path[pos - 1]][path[0]] == 1
    for v in range(1, len(graph)):
        if is_safe(v, graph, path, pos):  # v adjacent to path[pos-1] and unused
            path[pos] = v
            if hamiltonian(graph, path, pos + 1):
                return True
            path[pos] = -1                # undo and try another vertex
    return False
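Here too is_safe is left unspecified. A possible version for an adjacency-matrix graph, assuming vertex 0 is the fixed starting vertex and path is initialized as [0, -1, -1, ...]:

```python
def is_safe(v, graph, path, pos):
    # v must be adjacent to the previously placed vertex...
    if graph[path[pos - 1]][v] == 0:
        return False
    # ...and must not appear earlier in the path
    return v not in path[:pos]
```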

🔷 6. Key Applications of Backtracking

 Constraint Satisfaction Problems (CSP)

o N-Queens

o Sudoku

o Crossword Fillers

 Graph problems

o Hamiltonian cycle
o Graph Coloring

 Combinatorial problems

o Subset generation

o Permutations and combinations

🔷 7. Time Complexity

 Worst-case time complexity: O(k^n)

o Where k = number of choices at each step

o n = number of steps or decisions

Efficient backtracking reduces time through:

 Pruning

 Heuristics

 Forward checking

✅ Summary:

Concept Description

Technique Recursive + DFS + pruning

Best use Combinatorial problems, CSPs, graph cycles

Important Examples N-Queens, Hamiltonian Cycle, Sudoku

Advantage Avoids unnecessary computation via early backtracking

Limitation Can still be exponential in worst cases

🔟 Level: Basic to Moderate


1. Which algorithmic technique does Backtracking use?
A) Breadth-First Search
B) Greedy
C) Depth-First Search
D) Dynamic Programming

✅ Answer: C
📘 Explanation: Backtracking uses Depth-First Search to explore all possible solutions
recursively.

2. What is the goal of Backtracking?


A) To find the shortest path
B) To generate all combinations
C) To eliminate infeasible solutions early
D) To perform sorting

✅ Answer: C
📘 Backtracking eliminates non-viable solutions through pruning.

3. The worst-case time complexity of backtracking is usually:


A) O(n log n)
B) O(1)
C) O(kⁿ)
D) O(n²)

✅ Answer: C
📘 Backtracking may have exponential complexity where k is the number of options and n is
depth.

4. Which problem is NOT typically solved by backtracking?


A) Graph coloring
B) 8-Queens
C) Binary Search
D) Sudoku
✅ Answer: C
📘 Binary Search is a divide-and-conquer algorithm, not backtracking.

5. What is the key property of a valid solution in backtracking?


A) It must be sorted
B) It must pass all constraints
C) It must visit all elements
D) It must be the shortest

✅ Answer: B
📘 Backtracking ensures constraints are met at each decision point.

6. What does pruning mean in backtracking?


A) Eliminating duplicate elements
B) Sorting the solution
C) Cutting off paths that violate constraints
D) Generating permutations

✅ Answer: C
📘 Pruning helps avoid exploring invalid or unproductive branches.

7. Which chess problem is classically solved using backtracking?


A) Knight's Tour
B) Rook Problem
C) 8-Queens
D) Bishop Path

✅ Answer: C
📘 The 8-Queens problem is a famous backtracking example.

8. The number of solutions for the 8-Queens problem is:


A) 1
B) 8
C) 92
D) Infinite
✅ Answer: C
📘 There are 92 valid placements where no two queens attack each other.

9. In the N-Queens problem, how many queens are placed on an N×N board?
A) N
B) N/2
C) 2N
D) 1

✅ Answer: A
📘 N queens are to be placed on an N×N board with no mutual attack.

10. What data structure is typically used in backtracking recursion?


A) Queue
B) Stack
C) Heap
D) Linked List

✅ Answer: B
📘 Backtracking relies on the call stack for recursive calls (LIFO).

🔟 Level: Moderate to Advanced

11. Which one of these problems is a graph-based backtracking application?


A) Binary Search
B) Fibonacci Sequence
C) Hamiltonian Cycle
D) Quick Sort

✅ Answer: C
📘 Hamiltonian Cycle is solved using backtracking over graphs.

12. The base condition in the N-Queens backtracking solution is:


A) Column count exceeds N
B) No empty cell found
C) All queens are placed
D) Board is filled

✅ Answer: C
📘 The solution is complete when N queens are placed successfully.

13. In Hamiltonian cycle, which of the following must be true?


A) All vertices are visited twice
B) One vertex is revisited
C) All vertices are visited once and return to start
D) At least half the vertices are visited

✅ Answer: C
📘 Hamiltonian cycle must include each vertex exactly once and return.

14. What ensures the path in Hamiltonian cycle is valid?


A) All nodes are colored
B) No two nodes are connected
C) Each vertex is adjacent to the next and unused
D) All vertices are in the same row

✅ Answer: C
📘 Valid path must use only unvisited, adjacent vertices.

15. What happens after an invalid solution is reached?


A) Proceed further
B) Call dynamic programming
C) Backtrack to previous step
D) Restart entire process

✅ Answer: C
📘 Backtracking means reverting to the last decision point.
16. In backtracking, we proceed only if:
A) All elements are sorted
B) Solution meets constraints
C) There are cycles
D) We have completed recursion

✅ Answer: B
📘 At every recursive step, solution must meet the given constraints.

17. Graph coloring is solved using:


A) Binary Tree
B) Greedy Algorithm
C) Backtracking
D) Sorting

✅ Answer: C
📘 Graph coloring is a classic constraint problem, solved using backtracking.

18. The search space for N-Queens problem is of size:


A) N!
B) 2^N
C) N^2
D) O(N*N)

✅ Answer: A
📘 Placing one queen per row with no repeated columns gives N! candidate arrangements.

19. Which step occurs after a valid placement in N-Queens?


A) Backtrack
B) Continue to next row
C) Stop immediately
D) Swap with other queen

✅ Answer: B
📘 After placing a queen, move to the next row.
20. Which operation is essential to remove a wrong decision in backtracking?
A) Stack pop
B) Return
C) Undo the choice
D) Break

✅ Answer: C
📘 If a choice leads to failure, it is undone to explore new paths.

🔟 Mixed Application & Understanding

21. Which problem is not best suited for backtracking?


A) Sudoku Solver
B) Quick Sort
C) Crossword Puzzle
D) Graph Coloring

✅ Answer: B
📘 Quick Sort is a divide-and-conquer problem, not backtracking.

22. In the recursive tree of backtracking, each level represents:


A) An input
B) A failed solution
C) A decision step
D) A constraint

✅ Answer: C
📘 Each level represents a decision point in the recursive solution tree.

23. What type of problems are solved using backtracking?


A) Linear Problems
B) Polynomial Problems
C) Combinatorial Problems
D) Sequential Problems
✅ Answer: C
📘 Backtracking is ideal for solving combinatorial explosion problems.

24. If we use no pruning in backtracking, what happens?


A) Solution becomes more efficient
B) Time complexity decreases
C) All possibilities are explored
D) We get only optimal solution

✅ Answer: C
📘 Without pruning, backtracking degenerates into brute-force search.

25. Backtracking provides:


A) Single solution only
B) Optimal solution only
C) All feasible solutions (if required)
D) Greedy solution

✅ Answer: C
📘 Backtracking can explore all feasible solutions if not stopped early.

📘 Detailed Notes:

⭐⭐⭐ Topic 6 – Connected Components, Spanning Trees, Biconnected Components

Importance: Frequently appears in graph-related questions. Moderate priority for Bihar STET
Paper 2.

🔷 1. Connected Components

✅ Definition:

In an undirected graph, a connected component is a maximal set of vertices such that:

 Every pair of vertices in the set is reachable from each other.

🔹 Example:
For a graph with vertices:

 {A, B, C} connected

 {D, E} connected
Then it has 2 connected components.

🔹 Algorithms to Find:

 Use DFS or BFS starting from an unvisited vertex.

 All reachable nodes form one component.

 Repeat until all vertices are visited.

🔹 Time Complexity:

 O(V + E) (V = vertices, E = edges)
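The procedure above can be sketched as an iterative DFS over an adjacency list (function name is illustrative):

```python
def count_components(n, adj):
    """n vertices labeled 0..n-1, adj = adjacency list. Runs in O(V + E)."""
    seen = [False] * n
    components = 0
    for start in range(n):
        if not seen[start]:
            components += 1            # unvisited vertex starts a new component
            stack = [start]            # iterative DFS
            while stack:
                u = stack.pop()
                if seen[u]:
                    continue
                seen[u] = True
                stack.extend(adj[u])
    return components

# Two components: {0, 1, 2} and {3, 4}
print(count_components(5, [[1], [0, 2], [1], [4], [3]]))   # → 2
```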

🔷 2. Spanning Tree

✅ Definition:

A spanning tree of a connected undirected graph is a subgraph that:

 Includes all vertices

 Is a tree (no cycles)

 Is connected

🔹 Properties:

 A graph with V vertices has exactly (V - 1) edges in its spanning tree.

 A graph can have multiple spanning trees.

🔹 Minimum Spanning Tree (MST):

A spanning tree where the sum of edge weights is minimized.

🔹 MST Algorithms:

 Prim’s Algorithm

 Kruskal’s Algorithm

🔹 Applications:
 Network design (minimum wiring cost, etc.)
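A compact sketch of Kruskal's algorithm with union-find, assuming the graph is given as a weighted edge list (names are illustrative):

```python
def kruskal(n, edges):
    """edges = [(weight, u, v)]; returns total MST weight for a connected graph."""
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):     # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # skip edges that would form a cycle
            parent[ru] = rv
            total += w
    return total

# Triangle 0-1-2 with weights 1, 2, 3 → MST keeps the two cheapest edges
print(kruskal(3, [(1, 0, 1), (2, 1, 2), (3, 0, 2)]))   # → 3
```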

🔷 3. Biconnected Components

✅ Definition:

A biconnected component (block) in a graph is a maximal subgraph such that:

 Removing any single vertex does not disconnect the subgraph.

🔹 Importance:

 Helps identify articulation points – nodes whose removal increases the number of
connected components.

🔹 How to Find:

 Use Tarjan’s Algorithm (based on DFS)

 Maintain:

o Discovery Time

o Low Value

o Parent node

🔹 Key Concept:

 A graph is biconnected if there are no articulation points.
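A sketch of the DFS-based approach above, tracking discovery times and low values (recursive form; names are illustrative):

```python
def articulation_points(n, adj):
    """Tarjan-style DFS over adjacency list adj; returns the set of cut vertices."""
    disc = [0] * n          # discovery time (0 = not yet visited)
    low = [0] * n           # lowest discovery time reachable from the subtree
    timer = [1]
    cut = set()

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if disc[v] == 0:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # non-root u is a cut vertex if some child cannot reach above u
                if parent != -1 and low[v] >= disc[u]:
                    cut.add(u)
            elif v != parent:
                low[u] = min(low[u], disc[v])
        # the DFS root is a cut vertex iff it has two or more DFS children
        if parent == -1 and children > 1:
            cut.add(u)

    for s in range(n):
        if disc[s] == 0:
            dfs(s, -1)
    return cut

# Path 0-1-2: the middle vertex is the only articulation point
print(articulation_points(3, [[1], [0, 2], [1]]))   # → {1}
```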

🔷 Summary Table

Concept                  Key Idea                                     Algorithm          Time Complexity

Connected Components     Maximal set of mutually reachable vertices   DFS/BFS            O(V + E)

Spanning Tree            Tree including all vertices                  Prim's/Kruskal's   O(E log V)

Biconnected Component    No single vertex removal disconnects it      Tarjan's           O(V + E)


✅ 20 MCQs on

⭐⭐⭐ Topic: Connected Components, Spanning Trees, Biconnected Components

For Bihar STET Paper 2 – Computer Science

Each question includes:

 ✅ Correct Answer

 📘 Explanation

🔟 MCQs: Connected Components

1. What is a connected component in a graph?


A) A cycle
B) A complete subgraph
C) A set of vertices where each is reachable from the others
D) A subgraph with no edges
✅ Answer: C
📘 Explanation: A connected component contains all vertices that are mutually reachable.

2. Which traversal helps in finding connected components?


A) Inorder Traversal
B) DFS/BFS
C) Binary Search
D) Prim's Algorithm

✅ Answer: B
📘 Explanation: DFS or BFS explores all connected nodes, helping identify components.

3. How many connected components are there in a graph with no edges and 5 nodes?
A) 0
B) 1
C) 5
D) 10

✅ Answer: C
📘 Explanation: Each isolated node is a separate component.

4. Time complexity of finding connected components using DFS is:


A) O(V)
B) O(E)
C) O(V + E)
D) O(V²)

✅ Answer: C
📘 Explanation: DFS visits each vertex and edge once.

5. Connected components are only defined in:


A) Directed graphs
B) Undirected graphs
C) Binary Trees
D) AVL Trees
✅ Answer: B
📘 Explanation: In undirected graphs, connectedness is symmetric and simpler to define.

6. Which data structure is typically used in DFS?


A) Queue
B) Heap
C) Stack
D) Hash Table

✅ Answer: C
📘 Explanation: DFS uses a stack (explicit or via recursion).

7. How does BFS differ from DFS in component detection?


A) BFS is faster
B) DFS is faster
C) Both give same result
D) DFS does not work

✅ Answer: C
📘 Explanation: Both explore full components effectively.

8. In a graph with 10 vertices and no edges, how many components are there?
A) 0
B) 1
C) 10
D) 100

✅ Answer: C
📘 Explanation: Each vertex is disconnected, so 10 separate components.

9. What do you do after completing DFS from a node in component detection?


A) Exit
B) Reset the graph
C) Start DFS from another unvisited node
D) Remove edges
✅ Answer: C
📘 Explanation: This finds the next component.

10. If a graph has only 1 connected component, it is called:


A) Complete graph
B) Disconnected graph
C) Connected graph
D) Bipartite graph

✅ Answer: C
📘 Explanation: A connected graph has only one connected component.

🔟 MCQs: Spanning Trees and Biconnected Components

11. What is a spanning tree of a graph?


A) A tree that spans some of the vertices
B) A cycle
C) A tree that includes all vertices
D) A tree with max edges

✅ Answer: C
📘 Explanation: Spanning tree must include all vertices.

12. How many edges does a spanning tree of a graph with V vertices have?
A) V
B) V – 1
C) V + 1
D) 2V

✅ Answer: B
📘 Explanation: Always V – 1 edges in a spanning tree.

13. Which algorithm is commonly used to find a Minimum Spanning Tree?


A) DFS
B) BFS
C) Prim's or Kruskal's
D) Dijkstra

✅ Answer: C
📘 Explanation: Both Prim’s and Kruskal’s are standard MST algorithms.

14. A graph has more than one spanning tree if:


A) It’s disconnected
B) It’s connected
C) It has only one edge
D) It has loops

✅ Answer: B
📘 Explanation: A connected graph may have many spanning trees.

15. Which is NOT a property of a spanning tree?


A) No cycles
B) Connected
C) Includes all vertices
D) Includes all edges

✅ Answer: D
📘 Explanation: Spanning tree does not include all edges.

16. A biconnected graph is a graph that:


A) Is disconnected
B) Has at most one articulation point
C) Has no articulation point
D) Is cyclic only

✅ Answer: C
📘 Explanation: Biconnected graph has no vertex whose removal disconnects it.
17. Which algorithm is used to find articulation points and biconnected components?
A) Kruskal
B) Dijkstra
C) Tarjan’s Algorithm
D) Floyd-Warshall

✅ Answer: C
📘 Explanation: Tarjan’s algorithm detects articulation points using DFS.

18. What is an articulation point?


A) A point that joins two cycles
B) A vertex whose removal increases components
C) A vertex in MST
D) A leaf node

✅ Answer: B
📘 Explanation: Removing this vertex disconnects part of the graph.

19. What is the minimum number of articulation points in a tree with more than 2 vertices?
A) 0
B) 1
C) ≥ 1
D) Cannot say

✅ Answer: C
📘 Explanation: Every tree with more than 2 vertices has at least one internal (non-leaf) vertex, and every internal vertex of a tree is an articulation point — even a star's center.

20. What is the time complexity of Tarjan’s algorithm?


A) O(n²)
B) O(V + E)
C) O(log V)
D) O(E log V)

✅ Answer: B
📘 Explanation: Tarjan’s algorithm runs in linear time relative to graph size.

⭐⭐⭐⭐ Topic 6:
General Method, Multistage Graphs, All-Pair Shortest Paths, Optimal Binary Search Trees

Detailed Notes:

1. General Method (Dynamic Programming)

 Dynamic Programming (DP) is an optimization method that solves complex problems by


breaking them into simpler overlapping subproblems and storing the results of
subproblems to avoid recomputation.

 The key idea: solve each subproblem once and use its result multiple times.

 DP uses recursion with memoization or a bottom-up tabulation approach.

2. Multistage Graphs

 A multistage graph is a directed graph where vertices are divided into stages, and edges
only go from one stage to the next.

 The goal is to find the shortest path from the source node in the first stage to the sink
node in the last stage.

 Dynamic Programming can solve this by computing shortest distances from the last
stage backward to the first.
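The backward computation can be sketched as follows, assuming the graph is given as a list of stages and a dict of weighted edges (names are illustrative):

```python
def multistage_shortest(stages, edges):
    """
    stages: list of vertex lists; stage 0 holds the source, the last the sink.
    edges: dict {(u, v): weight}, edges only between consecutive stages.
    Returns the shortest source-to-sink distance, computed backward.
    """
    INF = float('inf')
    sink = stages[-1][0]
    dist = {sink: 0}                       # cost from each vertex to the sink
    for stage in reversed(stages[:-1]):    # work backward, stage by stage
        for u in stage:
            dist[u] = min(
                (w + dist[v] for (a, v), w in edges.items() if a == u),
                default=INF,
            )
    return dist[stages[0][0]]

stages = [[0], [1, 2], [3]]
edges = {(0, 1): 2, (0, 2): 5, (1, 3): 4, (2, 3): 1}
print(multistage_shortest(stages, edges))   # → 6
```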

3. All-Pair Shortest Paths

 The problem involves finding shortest paths between all pairs of vertices in a weighted
graph.

 Floyd-Warshall algorithm is a classic DP-based solution.

 Time Complexity: O(V³), where V is the number of vertices.

 The algorithm updates distances by considering intermediate vertices step by step.
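The update step described above translates directly into the triple loop of Floyd-Warshall (infinity marks a missing edge):

```python
def floyd_warshall(dist):
    """dist: V×V matrix with edge weights, inf for missing edges, 0 on the diagonal.
    Updates distances in place by trying every vertex k as an intermediate. O(V³)."""
    V = len(dist)
    for k in range(V):                # intermediate vertex under consideration
        for i in range(V):
            for j in range(V):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float('inf')
d = [[0, 3, INF],
     [INF, 0, 1],
     [7, INF, 0]]
print(floyd_warshall(d)[0][2])   # → 4  (path 0 → 1 → 2)
```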

4. Optimal Binary Search Trees (OBST)

 Given a set of sorted keys and probabilities of searching for each key, OBST finds a BST
that minimizes the expected search cost.
 It uses DP to consider all possible roots and combines costs of left and right subtrees to
find the minimal total cost.

 Cost matrix and root matrix are computed to build the OBST structure.
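A sketch of the OBST cost computation, assuming only successful-search probabilities p[i] for the sorted keys (the root matrix is omitted for brevity; names are illustrative):

```python
def obst_cost(p):
    """Minimum expected search cost for keys with search probabilities p.
    Cost of a key = probability × depth, with the root at depth 1."""
    n = len(p)
    # prefix sums so that weight(i, j) = sum of p[i..j] is O(1)
    pre = [0] * (n + 1)
    for i in range(n):
        pre[i + 1] = pre[i] + p[i]
    # cost[i][j] = minimal cost of a BST over keys i..j (inclusive)
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = p[i]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            weight = pre[j + 1] - pre[i]       # every key gains one level of depth
            cost[i][j] = weight + min(
                (cost[i][r - 1] if r > i else 0) + (cost[r + 1][j] if r < j else 0)
                for r in range(i, j + 1)       # try every key r as the root
            )
    return cost[0][n - 1]

print(obst_cost([0.5, 0.25, 0.25]))   # → 1.75
```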

25 MCQs on Topic 6 (Answers + Explanations)

1. What is the main advantage of Dynamic Programming?


A) Solves problems by brute force
B) Avoids recomputation by storing subproblem results
C) Always uses recursion only
D) Works only on trees

✅ Answer: B
📘 Explanation: DP stores results of subproblems to reuse them.

2. In multistage graphs, edges connect vertices between:


A) Same stage
B) Consecutive stages only
C) Random stages
D) No edges

✅ Answer: B
📘 Explanation: Edges only go from one stage to the next.

3. The Floyd-Warshall algorithm solves:


A) Single-source shortest path
B) All-pair shortest path
C) Minimum spanning tree
D) Topological sort

✅ Answer: B
📘 Explanation: Floyd-Warshall finds shortest paths between all pairs.

4. Time complexity of Floyd-Warshall is:

A) O(V + E)
B) O(V²)
C) O(V³)
D) O(log V)

✅ Answer: C
📘 Explanation: It uses triple nested loops over vertices.

5. Optimal Binary Search Tree aims to:


A) Minimize height of BST
B) Maximize nodes in BST
C) Minimize expected search cost
D) Maximize key values

✅ Answer: C
📘 Explanation: OBST minimizes weighted path length based on search frequencies.

6. Which technique does OBST use?


A) Divide and Conquer
B) Greedy
C) Dynamic Programming
D) Backtracking

✅ Answer: C
📘 Explanation: OBST uses DP to find minimal cost structures.

7. Which of the following is NOT true about DP?


A) Works on overlapping subproblems
B) Uses memoization or tabulation
C) Is always top-down approach
D) Avoids recomputation

✅ Answer: C
📘 Explanation: DP can be top-down or bottom-up.

8. Multistage graphs must be:

A) Undirected
B) Weighted and directed with stages
C) Cyclic graphs
D) Unweighted

✅ Answer: B
📘 Explanation: Multistage graphs are directed and weighted.

9. How does Floyd-Warshall algorithm improve distances?


A) By relaxing edges once
B) By considering intermediate vertices
C) By sorting edges
D) By DFS traversal

✅ Answer: B
📘 Explanation: It tries all intermediate vertices for shorter paths.

10. In OBST, the cost includes:


A) Frequency of keys only
B) Tree height only
C) Sum of frequency * depth for keys
D) Number of nodes

✅ Answer: C
📘 Explanation: Cost is weighted sum of key frequencies by their depth.

11. DP is suitable when subproblems are:


A) Independent
B) Overlapping
C) Mutually exclusive
D) Unrelated

✅ Answer: B
📘 Explanation: DP handles overlapping subproblems efficiently.

12. Multistage graph shortest path can be solved:

A) Using DFS only
B) Using DP backward from destination
C) Using BFS only
D) Using brute force only

✅ Answer: B
📘 Explanation: DP backward approach calculates shortest paths.

13. Floyd-Warshall algorithm can handle:


A) Negative weight cycles
B) Negative weights but no negative cycles
C) Only positive weights
D) Only unweighted graphs

✅ Answer: B
📘 Explanation: Negative edge weights are allowed as long as the graph contains no negative-weight cycle; such cycles show up as negative diagonal entries.

14. OBST uses which matrices for solution?


A) Cost and root matrices
B) Distance and path matrices
C) Frequency and weight matrices
D) Adjacency and incidence matrices

✅ Answer: A
📘 Explanation: Cost and root matrices store minimal costs and subtree roots.

15. Which is true about stages in multistage graph?


A) Each stage has exactly one node
B) Edges exist between stages i and i+1 only
C) Graph is always cyclic
D) Stages are irrelevant for DP

✅ Answer: B
📘 Explanation: Edges only connect consecutive stages.

16. Which method can solve OBST problem efficiently?

A) Brute force recursion
B) Greedy method
C) Dynamic Programming
D) Backtracking

✅ Answer: C
📘 Explanation: DP optimally computes minimal expected cost.

17. Floyd-Warshall updates distance matrix:


A) Row-wise only
B) Column-wise only
C) Considering all pairs via intermediate vertices
D) Using only edges from source

✅ Answer: C
📘 Explanation: All pairs distances are updated iteratively.

18. DP approach is used in:


A) Multistage graphs
B) OBST
C) Floyd-Warshall
D) All of the above

✅ Answer: D
📘 Explanation: All use DP techniques.

19. The result of Floyd-Warshall is:


A) Shortest paths between source and all other nodes
B) Shortest path between two nodes only
C) Shortest paths between all pairs of nodes
D) Minimum spanning tree

✅ Answer: C
📘 Explanation: It's all-pairs shortest path algorithm.

20. What is the main difference between DP and Divide & Conquer?

A) DP solves overlapping subproblems, divide and conquer solves independent subproblems
B) DP uses recursion only
C) Divide and conquer stores subproblems
D) No difference

✅ Answer: A
📘 Explanation: DP stores solutions to overlapping problems, divide and conquer does not.

21. OBST minimizes:


A) Number of nodes
B) Average search time
C) Tree height
D) Number of leaves

✅ Answer: B
📘 Explanation: OBST optimizes expected search time based on key frequencies.

22. Multistage graph’s shortest path can be found in:


A) O(V²) time
B) O(V + E) time
C) O(VE) time
D) O(2^V) time

✅ Answer: A
📘 Explanation: The backward DP examines each edge once, O(E), which is O(V²) in the worst case for a dense multistage graph.

23. Floyd-Warshall algorithm is:


A) Greedy
B) DP based
C) Backtracking
D) Divide and conquer

✅ Answer: B
📘 Explanation: It uses DP to iteratively update paths.

24. In OBST, if all keys have equal frequency, the optimal tree is:

A) Balanced BST
B) Linked list
C) Random BST
D) Skewed tree

✅ Answer: A
📘 Explanation: Balanced BST minimizes search cost when frequencies are equal.

25. Which problem cannot be solved by Floyd-Warshall?


A) All pairs shortest path with negative weights but no cycles
B) Detecting negative weight cycles
C) Single source shortest path
D) None of above

✅ Answer: C
📘 Explanation: Floyd-Warshall is designed for all-pairs shortest paths; single-source problems are the domain of algorithms like Dijkstra or Bellman-Ford.
