AOA Viva Q & A
1. Performance Analysis
● Big-O (O): Worst-case time complexity. It gives the upper bound of an algorithm's running time.
● Omega (Ω): Best-case time complexity. It gives the lower bound of the running time.
● Theta (Θ): Tight bound. It indicates both upper and lower bounds and is often used for the average case.
2. Complexity Classes
● NP-Hard: As hard as the hardest problems in NP, but may not be in NP (solution
verifiability isn’t required).
3. Analysis of Sorting
● Selection Sort: Repeatedly finds the minimum and places it at the beginning. Time:
O(n²).
● Insertion Sort: Builds sorted array one element at a time. Best: O(n), Worst: O(n²).
4. Recurrences
➤ Recursion Tree Method: Breaks the recurrence into a tree, sums the costs at each
level.
7. Dynamic Programming
● Travelling Salesman Problem (TSP): Visit all cities with minimal cost (DP with
bitmasking).
➤ Backtracking
● Graph Coloring: Color graph such that no adjacent nodes share color.
🔍 1. Performance Analysis
This is the foundation of understanding how efficient an algorithm is.
➤ Time Complexity
● Measures how execution time grows with input size.
➤ Space Complexity
● Measures how much memory is used by the algorithm based on input size.
● Includes space for variables, input, output, and function calls (stack space).
➤ Growth of Function
O(1) Constant
O(log n) Logarithmic
O(n) Linear
O(n²) Quadratic
O(2ⁿ) Exponential
O(n!) Factorial
➤ Mathematical Background
Includes:
● Summation formulas:
∑_{i=1}^{n} i = n(n+1)/2
● Limits and asymptotic analysis.
⚙️ 2. Asymptotic Notations
Used to describe time/space complexity:
Notation Meaning
O (Big-O) Upper bound (worst case)
Ω (Omega) Lower bound (best case)
Θ (Theta) Tight bound (both upper and lower)
Example: 3n² + 2n = Θ(n²), since it is both O(n²) and Ω(n²).
💡 3. Complexity Classes
These define problem difficulty and computability.
➤ P (Polynomial Time)
● Decision problems solvable in polynomial time by a deterministic machine.
➤ NP (Nondeterministic Polynomial Time)
● Problems whose solutions can be verified in polynomial time.
➤ NP-Hard
● At least as hard as every problem in NP; need not itself be in NP.
➤ NP-Complete
● Problems that lie in both NP and NP-Hard.
📊 4. Analysis of Sorting
➤ Selection Sort
● Time Complexity:
○ Best: O(n²)
○ Worst: O(n²)
○ Space: O(1)
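As an illustrative sketch (not taken from the notes), selection sort with exactly these bounds — O(n²) comparisons, O(1) extra space:

```python
def selection_sort(a):
    """Sort list a in place by repeatedly selecting the minimum element."""
    n = len(a)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted suffix a[i:].
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        # Swap it to the front of the unsorted region.
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```

The two nested loops make the O(n²) bound visible directly; only one temporary index is used, hence O(1) space.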
➤ Insertion Sort
● Time Complexity:
○ Best: O(n) (already sorted)
○ Worst: O(n²)
○ Space: O(1)
● Stable, adaptive.
🔄 5. Recurrence Relations
Used to describe the time complexity of recursive algorithms.
➤ Substitution Method
● Example:
T(n) = 2T(n/2) + n ⇒ T(n) = O(n log n)
➤ Recursion Tree Method
● Draw the tree of recursive calls and sum the work at each level.
➤ Master Theorem
For T(n) = aT(n/b) + f(n), compare f(n) with n^(log_b a). Three cases:
● Case 1: f(n) = O(n^(log_b a − ε)) ⇒ T(n) = Θ(n^(log_b a)).
● Case 2: f(n) = Θ(n^(log_b a)) ⇒ T(n) = Θ(n^(log_b a) · log n).
● Case 3: f(n) = Ω(n^(log_b a + ε)) and the regularity condition holds ⇒ T(n) = Θ(f(n)).
📌 1. Performance Analysis
When we study algorithms, it's important to understand how efficiently they perform in terms
of both time and space.
● Time complexity measures how the execution time of an algorithm grows as the size of the input increases.
● Space complexity, on the other hand, measures the total memory space required by the algorithm during execution, including the input, output, and auxiliary variables.
The three principal asymptotic notations are:
● Big-O (O), which describes the upper bound or worst-case scenario of an algorithm's time or space usage.
● Omega (Ω), which describes the best-case or minimum resource usage.
● Theta (Θ), which gives us a tight bound: it defines both the upper and lower limits and is often used to describe the average case.
📌 2. Complexity Classes
● Class P contains all decision problems that can be solved in polynomial time by a deterministic machine. These problems are considered efficiently solvable.
● Class NP includes problems for which a given solution can be verified in polynomial
time by a deterministic machine, though finding the solution may not be as
straightforward.
● A problem is NP-Hard if it is at least as hard as the hardest problems in NP; an NP-Hard problem need not itself belong to NP, since verifiability of solutions is not required.
● A problem is NP-Complete if it lies in both NP and NP-Hard. These are the most difficult problems in NP, and solving any one of them efficiently would mean that all NP problems can be solved efficiently, a question that leads to the famous open problem in computer science: "Is P equal to NP?"
➤ Insertion Sort
Insertion sort builds the final sorted array one element at a time. For each iteration, it takes
the next element and places it in its correct position relative to the already sorted portion of
the array. It performs well on small datasets and is particularly efficient when the input is
nearly sorted, with a best-case time complexity of O(n) and a worst-case of O(n²). Unlike
selection sort, insertion sort is stable and adaptive, making it more suitable for real-world
applications where partial order exists.
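As a sketch of the procedure described above (illustrative, not from the notes), insertion sort's shift-and-insert step in Python:

```python
def insertion_sort(a):
    """Sort list a in place; stable and adaptive (O(n) on already-sorted input)."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift elements of the sorted prefix that are larger than key one slot right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key  # Insert key into its correct position in the sorted prefix.
    return a
```

On nearly sorted input the inner while loop barely runs, which is exactly where the best-case O(n) behaviour comes from.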
➤ Substitution Method
This method involves guessing the form of the solution and then using mathematical
induction to prove that the guess is correct. It’s a rigorous but flexible way of solving
recurrences.
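As a worked instance of the method, one can take the recurrence T(n) = 2T(n/2) + n used earlier, guess T(n) = O(n log n), and verify the guess by induction:

```latex
% Guess: T(n) \le c\,n \log_2 n for some constant c > 0.
\begin{aligned}
T(n) &= 2T(n/2) + n \\
     &\le 2 \cdot c \tfrac{n}{2} \log_2 \tfrac{n}{2} + n \\
     &= c\,n(\log_2 n - 1) + n \\
     &= c\,n \log_2 n - (c - 1)\,n \\
     &\le c\,n \log_2 n \qquad \text{for } c \ge 1,
\end{aligned}
```

which confirms T(n) = O(n log n).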
➤ Recursion Tree Method
This technique visualizes the recurrence as a tree, where each node represents the cost of a
recursive call. The total cost is calculated by adding up the costs at all levels of the tree. This
method gives an intuitive understanding of how work is distributed across recursive calls.
➤ Master Theorem
The Master Theorem provides a direct way to solve recurrence relations of the form:
T(n) = aT(n/b) + f(n)
where a ≥ 1, b > 1, and f(n) is an asymptotically positive function. It categorizes the solution into three cases based on the relationship between f(n) and n^(log_b a). This theorem is widely used in analyzing divide and conquer algorithms like merge sort, quicksort, and binary search.
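Merge sort is a convenient worked example, falling under the second case of the theorem:

```latex
T(n) = 2T(n/2) + n:\quad a = 2,\; b = 2,\; f(n) = n,\; n^{\log_b a} = n^{\log_2 2} = n.
% Since f(n) = \Theta(n^{\log_b a}), Case 2 applies:
T(n) = \Theta\!\left(n^{\log_b a} \log n\right) = \Theta(n \log n).
```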
● Merge Sort: Divides array, recursively sorts halves, and merges them. Time: O(n log
n).
● Quick Sort: Selects pivot, partitions array, sorts recursively. Avg Time: O(n log n),
Worst: O(n²).
● MST (Kruskal & Prim): Finds tree with minimum weight covering all vertices.
● 0/1 Knapsack: Maximize value with capacity constraint (no fractions allowed).
● Rabin-Karp: Uses hashing for substring comparison; average time O(n + m), worst O(nm).
🔹 Divide and Conquer Approach
● Merge Sort recursively divides the array into two halves, sorts them, and then merges the sorted halves. It has a consistent time complexity of O(n log n) and is stable.
● Quick Sort selects a pivot element, partitions the array into two halves around the
pivot, and recursively sorts the subarrays. It is faster in practice but has a worst-case
time of O(n²).
● Finding Min/Max Using D&C reduces the number of comparisons to roughly 3n/2 -
2 instead of the usual 2n.
● Binary Search is an efficient method for searching a sorted array, reducing the
problem size by half each time. Its time complexity is O(log n).
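The halve-the-search-space logic described above can be sketched as (illustrative, not from the notes):

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent. O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1   # Discard the left half.
        else:
            hi = mid - 1   # Discard the right half.
    return -1
```

Each iteration discards half of the remaining range, giving the O(log n) bound.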
🔹 Greedy Approach
● Dijkstra’s Algorithm finds the shortest path from a single source to all other nodes using a priority queue. It doesn’t work correctly with negative weights.
● Fractional Knapsack problem allows breaking items and choosing the fraction with
the best value-to-weight ratio to maximize value. Time complexity: O(n log n) (due to
sorting).
● Job Sequencing with Deadlines schedules jobs within their deadlines to maximize
total profit using greedy ordering.
● Minimum Spanning Tree (MST) algorithms find a tree of minimum total weight covering all vertices:
○ Kruskal’s Algorithm sorts all edges by weight and adds them one by one, skipping any edge that would form a cycle.
○ Prim’s Algorithm grows the MST from a starting vertex, choosing the minimum edge at each step.
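The fractional knapsack strategy described above — sort by value-to-weight ratio, then take greedily — can be sketched as (the `(value, weight)` tuple representation is an assumption for illustration):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Returns the maximum total value
    achievable when fractions of items may be taken. O(n log n) due to sorting."""
    # Greedy order: best value-to-weight ratio first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # Take as much of this item as fits.
        total += value * (take / weight)   # Possibly only a fraction of it.
        capacity -= take
    return total
```

On the classic instance with items (60, 10), (100, 20), (120, 30) and capacity 50, the greedy choice takes the first two items whole and two-thirds of the third.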
🔹 Dynamic Programming Approach
Dynamic Programming (DP) is a powerful technique for solving problems with overlapping
subproblems and optimal substructure. It stores intermediate results to avoid redundant
computations.
● Bellman-Ford Algorithm handles graphs with negative weights and finds shortest
paths in O(VE) time.
● Assembly-Line Scheduling finds the fastest route through two factory assembly
lines by dynamic programming. It accounts for time at each station and switch times.
● 0/1 Knapsack Problem solves the selection of items without breaking them,
maximizing value without exceeding capacity. Time: O(nW) where W is the capacity.
● TSP (Dynamic Version) uses bitmasking + memoization to find shortest path visiting
all cities once. Time: O(n²·2ⁿ).
● Longest Common Subsequence (LCS) finds the longest sequence that appears in
the same relative order in both strings, used in DNA matching and diff utilities.
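The LCS problem above admits a short O(mn) tabulation (a sketch, not from the notes):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of strings x and y. O(mn) DP."""
    m, n = len(x), len(y)
    # dp[i][j] = LCS length of the prefixes x[:i] and y[:j].
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # Matching characters extend the LCS.
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

The overlapping subproblems (shared prefixes) and optimal substructure are exactly what make this a textbook DP example.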
🔹 Backtracking
This is a general algorithmic technique that incrementally builds candidates to the solution and abandons them (backtracks) as soon as it determines they can’t lead to a valid solution.
● Graph Coloring: Assigns colors to graph vertices such that no two adjacent ones
share the same color.
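A minimal backtracking sketch for graph coloring, assuming a hypothetical adjacency-list representation (a dict mapping each vertex to its neighbours; not part of the notes):

```python
def graph_coloring(adj, k):
    """Try to color the graph with k colors so that no adjacent vertices match.

    adj: dict mapping vertex -> list of neighbouring vertices.
    Returns a dict vertex -> color index, or None if no valid k-coloring exists."""
    nodes = list(adj)
    colors = {}

    def backtrack(i):
        if i == len(nodes):
            return True                      # All vertices colored successfully.
        v = nodes[i]
        for c in range(k):
            # Only try colors not already used by a colored neighbour.
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                if backtrack(i + 1):
                    return True
                del colors[v]                # Undo the choice and try the next color.
        return False                         # No color works: backtrack further.

    return colors if backtrack(0) else None
```

A triangle, for example, has no valid 2-coloring but does have a 3-coloring, which the sketch finds by abandoning partial assignments as soon as a conflict appears.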
🔹 Branch and Bound
This technique is used for optimization problems. It avoids exploring branches that cannot yield better results than already found ones by calculating upper and lower bounds.
● Travelling Salesman Problem: Prune tours that exceed the current best-known
solution.
● 15 Puzzle: Solved using heuristic functions to estimate the shortest path to the
solution.
🔹 String Matching
● Naïve Algorithm: Compares the pattern with all possible substrings of the text. Time: O(mn).
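The naïve matcher can be written directly (a sketch, not from the notes):

```python
def naive_match(text, pattern):
    """Return every start index where pattern occurs in text. O(mn) worst case."""
    n, m = len(text), len(pattern)
    hits = []
    for i in range(n - m + 1):
        # Compare the pattern against the length-m substring starting at i.
        if text[i:i + m] == pattern:
            hits.append(i)
    return hits
```

Each of the n − m + 1 alignments can cost up to m character comparisons, which is where the O(mn) bound comes from; Rabin-Karp improves the average case by comparing hashes first.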