AOA Viva Q & A

The document provides an overview of algorithm performance analysis, including time and space complexity using Big-O, Omega, and Theta notations. It discusses complexity classes such as P, NP, NP-Hard, and NP-Complete, as well as various algorithmic strategies including sorting algorithms, divide-and-conquer, greedy methods, dynamic programming, and backtracking techniques. Additionally, it covers methods for solving recurrences and specific algorithms like Dijkstra’s and the Travelling Salesman Problem.

1. Performance Analysis

➤ Growth of Function, Big-O, Omega, Theta Notation

●​ Big-O (O): Worst-case time complexity. It gives the upper bound of an algorithm’s
running time.​

●​ Omega (Ω): Best-case time complexity. It gives the lower bound.​

●​ Theta (Θ): Tight bound. It indicates both upper and lower bounds, and is often used to describe the average case.​

➤ Mathematical Background for Algorithm Analysis

●​ Analyzing how functions grow (polynomial, exponential, logarithmic) helps compare algorithms.​
●​ Understanding summations, recurrence relations, and limits is crucial here.​

2. Complexity Classes

➤ P, NP, NP-Hard, NP-Complete

●​ P: Problems that can be solved in polynomial time.​

●​ NP: Problems whose solutions can be verified in polynomial time.​

●​ NP-Complete: The hardest problems in NP. If any NP-Complete problem is solved in P time, all NP problems can be solved in P.​

●​ NP-Hard: As hard as the hardest problems in NP, but may not be in NP (solution
verifiability isn’t required).​

3. Analysis of Sorting

➤ Selection Sort, Insertion Sort

●​ Selection Sort: Repeatedly finds the minimum and places it at the beginning. Time:
O(n²).​

●​ Insertion Sort: Builds sorted array one element at a time. Best: O(n), Worst: O(n²).​

4. Recurrences

➤ The Substitution Method: Guess the solution, prove by induction.

➤ Recursion Tree Method: Breaks the recurrence into a tree, sums the costs at each
level.

➤ Master Method: Solves divide-and-conquer recurrences of the form:

T(n) = aT(n/b) + f(n)

5. Divide and Conquer Approach

●​ General Method: Break → Solve → Combine.​

●​ Merge Sort: Divides array into halves, sorts, then merges.​

●​ Quick Sort: Picks pivot, partitions array, sorts partitions.​

●​ Finding Min/Max: Using pairwise comparison.​

●​ Analysis of Binary Search: O(log n) time, very efficient.​

6. Greedy Method Approach

●​ General Method: Choose local optimal hoping it leads to global optimal.​

●​ Dijkstra’s Algorithm: Shortest path in weighted graphs.​

●​ Fractional Knapsack: Can take fractions of items, greedy works best.​

●​ Job Sequencing with Deadlines: Schedule jobs to maximize profit.​

●​ Kruskal's & Prim's Algorithm: Find Minimum Spanning Tree.​

7. Dynamic Programming

●​ General Method: Solve subproblems, store results, reuse them.​


●​ Multistage Graphs: DP solves shortest/longest paths.​

●​ Bellman-Ford Algorithm: Shortest path (supports negative weights).​

●​ Floyd Warshall: All-pairs shortest paths.​

●​ Assembly Line Scheduling: Minimize time through multiple lines.​

●​ 0/1 Knapsack: Maximize value within weight limit, items indivisible.​

●​ Travelling Salesman Problem (TSP): Visit all cities with minimal cost (DP with
bitmasking).​

●​ Longest Common Subsequence (LCS): Longest subsequence common in two sequences.​

8. Backtracking and Branch and Bound

➤ Backtracking

●​ Try all possibilities and backtrack if constraints break.​

●​ N-Queen: Place queens such that none attack each other.​

●​ Subset Sum: Find subsets that add up to a sum.​

●​ Graph Coloring: Color graph such that no adjacent nodes share color.​

➤ Branch and Bound

●​ Uses bounding function to prune non-promising paths.​

●​ TSP (B&B): Prunes paths with cost exceeding current minimum.​

●​ 15 Puzzle Problem: Solves sliding puzzle using state space tree.​

9. String Matching Algorithms

●​ Naïve Algorithm: Check all positions — O(n*m).​

●​ Rabin-Karp: Uses hashing, good for multiple pattern matching.


●​ Knuth-Morris-Pratt (KMP): Preprocesses pattern to avoid rechecking. Time: O(n +
m).​

🔍 1. Performance Analysis
This is the foundation of understanding how efficient an algorithm is.

➤ Time Complexity

●​ Measures how execution time grows with input size.​

●​ Expressed in terms of Big-O, Ω (Omega), and Θ (Theta).​

➤ Space Complexity

●​ Measures how much memory is used by the algorithm based on input size.​

●​ Includes space for variables, input, output, and function calls (stack space).​

➤ Growth of Function

Used to compare how algorithms scale. For example:

Function Growth Type

O(1) Constant

O(log n) Logarithmic

O(n) Linear

O(n log n) Log-linear

O(n²) Quadratic

O(2ⁿ) Exponential

O(n!) Factorial
➤ Mathematical Background

Includes:

●​ Summation formulas:​
1 + 2 + ⋯ + n = n(n+1)/2
●​ Limits and asymptotic analysis.​

●​ Understanding logarithms, exponents, etc.​

⚙️ 2. Asymptotic Notations
Used to describe time/space complexity:

Notation Meaning

Big-O (O) Upper bound (Worst-case)

Omega (Ω) Lower bound (Best-case)

Theta (Θ) Tight bound (Average-case or Exact growth)

Example:

●​ Binary search: Time = O(log n), Ω(1), Θ(log n)​
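
To make the binary search example concrete, here is a minimal Python sketch (the function name and test values are our own illustration, not from the syllabus). Each loop iteration halves the remaining interval, which is exactly where the O(log n) bound comes from; finding the target on the first probe is the Ω(1) best case.

def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2            # midpoint of the current interval
        if arr[mid] == target:
            return mid                  # best case: found on the first probe
        elif arr[mid] < target:
            lo = mid + 1                # discard the left half
        else:
            hi = mid - 1                # discard the right half
    return -1                           # interval empty: target not present

print(binary_search([1, 3, 5, 7, 9, 11, 13, 15], 7))   # -> 3 (at most ~log2(8) = 3 halvings)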

💡 3. Complexity Classes
These define problem difficulty and computability.

➤ P (Polynomial Time)

●​ Problems solvable in polynomial time, e.g., O(n), O(n²).​

●​ Example: Binary search, bubble sort.​

➤ NP (Nondeterministic Polynomial Time)

●​ Problems where solutions can be verified in polynomial time.​


●​ Not all NP problems are known to be solvable in P time.​

➤ NP-Hard

●​ At least as hard as the hardest NP problems.​

●​ May not even be in NP (no polynomial-time verification).​

●​ Example: Halting problem.​

➤ NP-Complete

●​ Problems that are both in NP and NP-Hard.​

●​ If any NP-complete problem can be solved in polynomial time, then P = NP.​

●​ Example: 3-SAT, Traveling Salesman (decision version).​

🔁 4. Analysis of Sorting Algorithms


➤ Selection Sort

●​ Repeatedly selects the minimum element and moves it to the front.​

●​ Time Complexity:​

○​ Best: O(n²)​

○​ Worst: O(n²)​

○​ Space: O(1)​

●​ Not stable, not adaptive.​

➤ Insertion Sort

●​ Builds the sorted array by inserting one element at a time.​

●​ Time Complexity:​
○​ Best: O(n) (already sorted)​

○​ Worst: O(n²)​

○​ Space: O(1)​

●​ Stable, adaptive.​

🔄 5. Recurrence Relations
Used to describe the time complexity of recursive algorithms.

➤ Substitution Method

●​ Guess the solution and prove it via mathematical induction.​

●​ Example:​
T(n) = 2T(n/2) + n ⇒ T(n) = O(n log n)

➤ Recursion Tree Method

●​ Draw the tree of recursive calls and sum the work at each level.​

●​ Visual way to solve recurrence relations.​

➤ Master Theorem

Used for divide-and-conquer recurrences of the form:

T(n) = aT(n/b) + f(n)

Three cases:

Case  Condition                                             Result
1     f(n) = O(n^{log_b a - ε})                             T(n) = Θ(n^{log_b a})
2     f(n) = Θ(n^{log_b a})                                 T(n) = Θ(n^{log_b a} * log n)
3     f(n) = Ω(n^{log_b a + ε}) and satisfies regularity    T(n) = Θ(f(n))
      condition
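
As a worked instance (our own, chosen to match the substitution example above): merge sort's recurrence T(n) = 2T(n/2) + n has a = 2 and b = 2, so n^{log_b a} = n^{log_2 2} = n. Since f(n) = n = Θ(n^{log_b a}), Case 2 applies and T(n) = Θ(n log n).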

Here is a more formal and detailed explanation of each topic, written in complete, professional sentences, suitable for answering in front of an external examiner or during a viva.

📌 1. Performance Analysis
When we study algorithms, it's important to understand how efficiently they perform in terms
of both time and space.

●​ Time complexity refers to the amount of time an algorithm takes to complete as a function of the input size. It helps us estimate how fast an algorithm will run.​

●​ Space complexity, on the other hand, measures the total memory space required by
the algorithm during execution, including the input, output, and auxiliary variables.​

To compare the performance of algorithms, we examine the growth of functions, which helps us understand how the running time increases as the input size grows. This analysis is done using asymptotic notations like:

●​ Big-O (O), which describes the upper bound or worst-case scenario of an algorithm's
time or space usage.​

●​ Omega (Ω), which describes the best-case or minimum resource usage.​

●​ Theta (Θ) gives us a tight bound — it defines both the upper and lower limits and is
often used to describe the average case.​

In addition to these notations, a solid mathematical foundation involving limits, summations, logarithmic growth, and basic algebra is crucial for analyzing algorithms effectively.

📌 2. Complexity Classes: P, NP, NP-Hard, NP-Complete


In computational theory, problems are grouped into complexity classes based on how difficult
they are to solve or verify.

●​ Class P contains all decision problems that can be solved in polynomial time by a
deterministic machine. These problems are considered efficiently solvable.​

●​ Class NP includes problems for which a given solution can be verified in polynomial
time by a deterministic machine, though finding the solution may not be as
straightforward.​

●​ A problem is NP-Hard if it is at least as hard as the hardest problems in NP. These problems do not have to be in NP themselves, meaning they may not even have verifiable solutions in polynomial time.​

●​ A problem is NP-Complete if it lies in both NP and NP-Hard. These are the most
difficult problems in NP, and solving any one of them efficiently would mean that all
NP problems can be solved efficiently — a question that leads to the famous open
problem in computer science: “Is P equal to NP?”​

📌 3. Analysis of Sorting Algorithms


➤ Selection Sort

Selection sort is a simple comparison-based sorting algorithm. It works by repeatedly selecting the smallest (or largest, depending on sorting order) element from the unsorted portion of the list and placing it at the correct position in the sorted portion. Despite its simplicity, selection sort is inefficient for large datasets because its time complexity is always O(n²), regardless of the initial order of the elements. It is not a stable algorithm, and it does not adapt to already sorted input.

➤ Insertion Sort

Insertion sort builds the final sorted array one element at a time. For each iteration, it takes
the next element and places it in its correct position relative to the already sorted portion of
the array. It performs well on small datasets and is particularly efficient when the input is
nearly sorted, with a best-case time complexity of O(n) and a worst-case of O(n²). Unlike
selection sort, insertion sort is stable and adaptive, making it more suitable for real-world
applications where partial order exists.
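
A minimal Python sketch of insertion sort (our own illustration) makes these properties visible: the inner while loop shifts larger elements one slot to the right, so on an already sorted input it never executes and the whole pass is O(n); the strict comparison keeps equal elements in their original order, which is what makes the sort stable.

def insertion_sort(arr):
    """Sort arr in place, one element at a time."""
    for i in range(1, len(arr)):
        key = arr[i]                         # next element to insert
        j = i - 1
        while j >= 0 and arr[j] > key:       # strict '>' preserves stability
            arr[j + 1] = arr[j]              # shift larger element right
            j -= 1
        arr[j + 1] = key                     # drop key into its correct slot

data = [5, 2, 4, 6, 1, 3]
insertion_sort(data)
print(data)   # -> [1, 2, 3, 4, 5, 6]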

📌 4. Recurrences and Methods of Solving Them


Many algorithms, especially those that use recursion, can be analyzed using recurrence
relations. A recurrence relation expresses the overall time complexity of an algorithm in
terms of the complexity of smaller instances of the same problem.

➤ Substitution Method

This method involves guessing the form of the solution and then using mathematical
induction to prove that the guess is correct. It’s a rigorous but flexible way of solving
recurrences.

➤ Recursion Tree Method

This technique visualizes the recurrence as a tree, where each node represents the cost of a recursive call. The total cost is calculated by adding up the costs at all levels of the tree. This method gives an intuitive understanding of how work is distributed across recursive calls.
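
For example (a standard illustration, not taken from the source): for T(n) = 2T(n/2) + n, level 0 of the tree costs n, level 1 has two nodes costing n/2 each (total n), level 2 has four nodes costing n/4 each (total n), and so on. With about log₂ n levels each contributing n, the total work is Θ(n log n).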

➤ Master Theorem

The Master Theorem provides a direct way to solve recurrence relations of the form:

T(n) = aT(n/b) + f(n)

where a ≥ 1, b > 1, and f(n) is an asymptotically positive function. It categorizes the solution into three cases based on the relationship between f(n) and n^{log_b a}. This theorem is widely used in analyzing divide and conquer algorithms like merge sort, quicksort, and binary search.

✅ Short & Concise Answers


📘 Divide and Conquer Approach
●​ General Method: Divide the problem into subproblems, solve them recursively, and
combine results.​

●​ Merge Sort: Divides array, recursively sorts halves, and merges them. Time: O(n log
n).​

●​ Quick Sort: Selects pivot, partitions array, sorts recursively. Avg Time: O(n log n),
Worst: O(n²).​

●​ Min/Max Finding: Uses pairwise comparison recursively; needs about 3n/2 comparisons instead of the roughly 2n of two separate linear scans.​

●​ Binary Search: Divides sorted array to find element in O(log n) time.​

📘 Greedy Method Approach


●​ General Method: Makes optimal choice at each step hoping for overall optimality.​

●​ Dijkstra’s Algorithm: Finds shortest path from source in weighted graphs (non-negative weights).​

●​ Fractional Knapsack: Items can be broken; pick by highest value/weight ratio.​

●​ Job Sequencing: Schedule jobs with deadlines to maximize profit.​

●​ MST (Kruskal & Prim): Finds tree with minimum weight covering all vertices.​

📘 Dynamic Programming Approach


●​ General Method: Solve overlapping subproblems, store results to avoid
recomputation.​
●​ Bellman-Ford: Works with negative weights; slower than Dijkstra.​

●​ Floyd-Warshall: All-pairs shortest paths in O(n³) time.​

●​ Assembly-Line Scheduling: Choose fastest path through stages of 2 assembly lines.​

●​ 0/1 Knapsack: Maximize value with capacity constraint (no fractions allowed).​

●​ TSP (DP Version): Solve using bitmasking and memoization.​

●​ LCS: Longest common subsequence in two strings, solved via DP.​

📘 Backtracking and Branch and Bound


●​ Backtracking: Tries all possibilities, abandons paths that don’t meet criteria.​

○​ Examples: N-Queens, Sum of Subsets, Graph Coloring.​

●​ Branch and Bound: Improves brute-force using bounds to prune branches.​

○​ Examples: TSP, 15 Puzzle.​

📘 String Matching Algorithms


●​ Naïve: Brute-force match from each position; time: O(mn).​

●​ Rabin-Karp: Uses hashing for substring comparison; avg time: O(n), worst O(nm).​

●​ Knuth-Morris-Pratt: Preprocess pattern with LPS array to skip comparisons. Time: O(n + m).​

🧠 Detailed Explanations for Each Topic


🔹 Divide and Conquer Approach
This method solves a problem by dividing it into smaller subproblems, solving each
recursively, and then combining their results. It is especially useful in problems like sorting,
searching, and computational geometry.

●​ Merge Sort recursively divides the array into two halves, sorts them, and then merges the sorted halves. It has a consistent time complexity of O(n log n) and is stable (a sketch follows this list).​

●​ Quick Sort selects a pivot element, partitions the array into two halves around the
pivot, and recursively sorts the subarrays. It is faster in practice but has a worst-case
time of O(n²).​

●​ Finding Min/Max Using D&C reduces the number of comparisons to roughly 3n/2 -
2 instead of the usual 2n.​

●​ Binary Search is an efficient method for searching a sorted array, reducing the
problem size by half each time. Its time complexity is O(log n).​
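
A compact merge sort sketch in Python (our own illustration; the helper merge() is an assumed name, not from the source). The recursion halves the array, and the merge step combines two sorted halves in linear time, which is where the O(n log n) total comes from.

def merge_sort(arr):
    """Return a new sorted list (stable, O(n log n))."""
    if len(arr) <= 1:
        return arr                           # base case: trivially sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])             # divide: sort each half
    right = merge_sort(arr[mid:])
    return merge(left, right)                # combine in O(n)

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:              # '<=' keeps the sort stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])                     # at most one of these is non-empty
    out.extend(right[j:])
    return out

print(merge_sort([5, 2, 4, 6, 1, 3]))   # -> [1, 2, 3, 4, 5, 6]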

🔹 Greedy Method Approach


Greedy algorithms build up a solution step-by-step, always choosing the most beneficial
option at each step. It does not always guarantee a globally optimal solution, but it works
well for specific problems.

●​ Dijkstra’s Algorithm finds the shortest path from a single source to all other nodes
using a priority queue. It doesn’t work correctly with negative weights.​

●​ Fractional Knapsack problem allows breaking items and choosing the fraction with the best value-to-weight ratio to maximize value. Time complexity: O(n log n) due to sorting (see the sketch after this list).​

●​ Job Sequencing with Deadlines schedules jobs within their deadlines to maximize
total profit using greedy ordering.​

●​ Minimum Spanning Trees:​

○​ Kruskal’s Algorithm selects edges in increasing order of weights and uses disjoint sets.​

○​ Prim’s Algorithm grows the MST from a starting vertex, choosing the
minimum edge at each step.​
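
The fractional knapsack bullet above can be captured in a few lines of Python (a sketch under our own assumptions: items are (value, weight) pairs with positive weights). Sorting by value-to-weight ratio and taking greedily, splitting only the last item, is optimal for the fractional version.

def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the maximum total value."""
    # Greedy choice: best value-to-weight ratio first (O(n log n) for the sort).
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break                                   # knapsack is full
        take = min(weight, capacity)                # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # -> 240.0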
🔹 Dynamic Programming Approach
Dynamic Programming (DP) is a powerful technique for solving problems with overlapping
subproblems and optimal substructure. It stores intermediate results to avoid redundant
computations.

●​ Bellman-Ford Algorithm handles graphs with negative weights and finds shortest
paths in O(VE) time.​

●​ Floyd-Warshall Algorithm computes all-pairs shortest paths using a 3D DP table. Time: O(n³).​

●​ Assembly-Line Scheduling finds the fastest route through two factory assembly
lines by dynamic programming. It accounts for time at each station and switch times.​

●​ 0/1 Knapsack Problem solves the selection of items without breaking them, maximizing value without exceeding capacity. Time: O(nW), where W is the capacity (see the sketch after this list).​

●​ TSP (Dynamic Version) uses bitmasking + memoization to find shortest path visiting
all cities once. Time: O(n²·2ⁿ).​

●​ Longest Common Subsequence (LCS) finds the longest sequence that appears in
the same relative order in both strings, used in DNA matching and diff utilities.​
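
As a concrete instance of the 0/1 knapsack bullet above, here is the standard bottom-up DP in Python (our own sketch, using the common one-dimensional table optimization). dp[w] holds the best value achievable with capacity w; iterating w downwards ensures each item is used at most once, and the two nested loops give the O(nW) time quoted above.

def knapsack_01(values, weights, capacity):
    """Maximize total value subject to total weight <= capacity; items indivisible."""
    dp = [0] * (capacity + 1)                    # dp[w] = best value at capacity w
    for v, wt in zip(values, weights):
        for w in range(capacity, wt - 1, -1):    # descend so each item is taken once
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))   # -> 220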

🔹 Backtracking and Branch and Bound


Backtracking

This is a general algorithmic technique that incrementally builds candidates to the solution
and abandons them (backtracks) as soon as it determines they can’t lead to a valid solution.

●​ N-Queens: Places queens on a chessboard so that no two queens threaten each other (a sketch follows this list).​

●​ Sum of Subsets: Finds subsets of numbers that sum up to a target value.​

●​ Graph Coloring: Assigns colors to graph vertices such that no two adjacent ones
share the same color.​
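
The N-Queens sketch below (our own illustration in Python) shows the backtracking pattern directly: queens are placed row by row, a placement that breaks a constraint is skipped, and after exploring a row's choices the queen is removed again (the backtrack step).

def n_queens(n):
    """Count the placements of n non-attacking queens on an n x n board."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()     # occupied columns and diagonals

    def place(row):
        nonlocal count
        if row == n:                             # every row filled: one full solution
            count += 1
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                         # constraint broken: prune this choice
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            place(row + 1)                       # extend the partial solution
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)  # backtrack

    place(0)
    return count

print(n_queens(8))   # -> 92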

Branch and Bound

This technique is used for optimization problems. It avoids exploring branches that cannot
yield better results than already found ones by calculating upper and lower bounds.
●​ Travelling Salesman Problem: Prune tours that exceed the current best-known solution (see the sketch after this list).​

●​ 15 Puzzle: Solved using heuristic functions to estimate the shortest path to the
solution.​
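
To illustrate the bounding idea for TSP (a deliberately simplified sketch of our own, not the full algorithm from the source): the lower bound below is the cost so far plus, for the current city and every unvisited city, its cheapest outgoing edge. Any partial tour whose bound already meets or exceeds the best complete tour found so far is pruned.

import math

def tsp_branch_and_bound(cost):
    """cost[i][j] = travel cost between cities; returns the cheapest tour from city 0."""
    n = len(cost)
    # Cheapest edge out of each city, used in the lower bound.
    cheapest = [min(cost[i][j] for j in range(n) if j != i) for i in range(n)]
    best = math.inf

    def search(city, visited, so_far):
        nonlocal best
        if len(visited) == n:                    # all cities visited: close the cycle
            best = min(best, so_far + cost[city][0])
            return
        # Lower bound: the current city and every unvisited city must still be left once.
        bound = so_far + cheapest[city] + sum(cheapest[j] for j in range(n) if j not in visited)
        if bound >= best:
            return                               # cannot beat the incumbent: prune
        for j in range(n):
            if j not in visited:
                search(j, visited | {j}, so_far + cost[city][j])

    search(0, {0}, 0)
    return best

m = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(tsp_branch_and_bound(m))   # -> 80 (tour 0 -> 1 -> 3 -> 2 -> 0)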

🔹 String Matching Algorithms


These algorithms are used to find occurrences of a pattern within a text.

●​ Naïve Algorithm: Compares the pattern with all possible substrings of the text. Time:
O(mn).​

●​ Rabin-Karp: Uses hashing to match patterns. If hashes match, it does a character-by-character comparison to verify. Best case: O(n), worst case: O(nm).​

●​ Knuth-Morris-Pratt (KMP): Uses a preprocessed LPS (Longest Prefix Suffix) array to avoid unnecessary rechecking. Time complexity is O(n + m).
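
Finally, a Python sketch of KMP (our own illustration): build_lps computes the LPS array in O(m), and the search pointer over the text never moves backwards, so the total is the O(n + m) stated above.

def build_lps(pattern):
    """lps[i] = length of the longest proper prefix of pattern[:i+1] that is also a suffix."""
    lps = [0] * len(pattern)
    length = 0                                   # length of the current matched border
    for i in range(1, len(pattern)):
        while length > 0 and pattern[i] != pattern[length]:
            length = lps[length - 1]             # fall back to the next shorter border
        if pattern[i] == pattern[length]:
            length += 1
        lps[i] = length
    return lps

def kmp_search(text, pattern):
    """Return the start index of every occurrence of pattern in text."""
    lps, hits, j = build_lps(pattern), [], 0
    for i, ch in enumerate(text):
        while j > 0 and ch != pattern[j]:
            j = lps[j - 1]                       # reuse earlier matches; never recheck text
        if ch == pattern[j]:
            j += 1
        if j == len(pattern):                    # full match ending at position i
            hits.append(i - j + 1)
            j = lps[j - 1]
    return hits

print(kmp_search("ababcabcabababd", "ababd"))   # -> [10]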
