
Viva Questions and Answers on Design and

Analysis of Algorithms
1. What is an algorithm?
Answer: An algorithm is a finite sequence of well-defined steps that
provides a solution to a given problem. It takes input, processes it, and
produces an output.
2. What are the characteristics of a good algorithm?
Answer:
1. Correctness: Produces correct output for all valid inputs.
2. Efficiency: Runs in minimum time and space.
3. Finiteness: Completes execution in a finite number of steps.
4. Definiteness: Each step is precisely defined.
5. Generality: Can solve a range of problems.
3. What is time complexity?
Answer: Time complexity is a function that describes the amount of time
an algorithm takes in terms of the input size. It is often expressed using
Big-O notation.
4. What is Big-O notation?
Answer: Big-O notation expresses the upper bound of an algorithm's
running time, helping to classify algorithms according to their worst-case
performance.
5. What are the different types of time complexity?
Answer:
 Constant Time: O(1)
 Logarithmic Time: O(log n)
 Linear Time: O(n)
 Linearithmic Time: O(n log n)
 Quadratic Time: O(n^2)
 Cubic Time: O(n^3)
 Exponential Time: O(2^n)
 Factorial Time: O(n!)
6. What is space complexity?
Answer: Space complexity measures the amount of memory an
algorithm uses in relation to input size.
7. What is the difference between an iterative and recursive
algorithm?
Answer:
 Iterative: Uses loops (for, while) to repeat steps.
 Recursive: Calls itself to break the problem into smaller
subproblems.
8. What is Divide and Conquer? Give an example.
Answer: A paradigm where a problem is divided into smaller
subproblems, solved independently, and then combined. Example: Merge
Sort.
9. Explain Greedy Algorithm with an example.
Answer: A greedy algorithm makes the locally optimal choice at each
step. Example: Dijkstra’s Algorithm for shortest path.
10. What is Dynamic Programming?
Answer: Dynamic programming solves problems by breaking them into
overlapping subproblems and storing results to avoid redundant
calculations. Example: Fibonacci sequence.
11. What is an NP-complete problem?
Answer: An NP-complete problem is a problem in NP that is at least as hard as
every other problem in NP. No polynomial-time algorithm is known for any
NP-complete problem, but if one NP-complete problem were solved in
polynomial time, every problem in NP could be solved in polynomial time.
12. What is the difference between P, NP, and NP-complete
problems?
Answer:
 P (Polynomial Time): Problems solvable in polynomial time.
 NP (Nondeterministic Polynomial Time): Problems verifiable in
polynomial time.
 NP-Complete: NP problems that are at least as hard as any other
NP problem.
13. Explain Breadth-First Search (BFS) and Depth-First Search
(DFS).
Answer:
 BFS: Explores all neighbors before moving to the next level (Queue-
based).
 DFS: Explores as deep as possible before backtracking (Stack-based
or Recursive).
14. What is the difference between Prim’s and Kruskal’s
algorithms?
Answer:
 Prim’s: Grows a spanning tree by adding the minimum edge from
the current tree.
 Kruskal’s: Sorts edges and adds the smallest edge without forming
a cycle.
15. What is the purpose of a hash table?
Answer: A hash table stores key-value pairs and allows for fast retrieval
using a hash function.
16. What is a binary search algorithm?
Answer: A searching technique that divides the dataset in half at each
step, reducing the search space logarithmically. Time complexity: O(log
n).
17. What is a heap data structure?
Answer: A complete binary tree where the parent node is either greater
(max heap) or smaller (min heap) than its children. Used in heap sort and
priority queues.
18. What is amortized analysis?
Answer: A technique that averages the cost of operations over a sequence,
showing that occasional expensive operations are offset by many cheap
ones (e.g., dynamic array resizing is O(1) amortized per append).
19. What is backtracking?
Answer: A method for solving constraint satisfaction problems by trying
different options recursively and undoing incorrect choices.
20. What is a Red-Black Tree?
Answer: A self-balancing binary search tree where nodes follow specific
color rules to maintain balance and ensure O(log n) operations.
21. What is the master theorem?
Answer: A formula used to determine the time complexity of recurrence
relations in divide-and-conquer algorithms.
22. What are the key differences between Floyd-Warshall and
Dijkstra’s algorithms?
Answer:
 Floyd-Warshall: Solves all-pairs shortest paths.
 Dijkstra’s: Solves single-source shortest paths.
23. What is a Trie data structure?
Answer: A tree used for efficient retrieval of strings, commonly used in
autocomplete and dictionary applications.
24. What is a Skip List?
Answer: A linked list with multiple levels, allowing fast searching similar
to a binary search tree.
25. Explain the concept of memoization.
Answer: Storing previously computed results to optimize recursive
algorithms, commonly used in dynamic programming.
26. What is a B-Tree?
Answer: A self-balancing tree used in databases and file systems for
efficient searching and insertion.
27. What is the difference between quicksort and mergesort?
Answer:
 Quicksort: In-place, partition-based sorting.
 Mergesort: Divide-and-conquer sorting, requiring extra space.
28. What is the significance of Huffman coding?
Answer: Used in data compression, where frequent symbols have shorter
codes.
29. What is an AVL Tree?
Answer: A self-balancing binary search tree where height differences
between left and right subtrees are at most 1.
30. What is the purpose of the Bellman-Ford algorithm?
Answer: Finds the shortest path in a graph, even with negative weights.
31. What are the main applications of algorithm analysis in real-
world scenarios?
Answer: Algorithm analysis is used in databases, AI, networking,
cryptography, data compression, and various optimization problems.
Searching and Sorting
1. What is searching in the context of algorithms?
Answer: Searching is the process of finding a specific element in a
collection of elements, such as an array or linked list.
2. What are the types of searching algorithms?
Answer:
 Linear Search
 Binary Search
 Jump Search
 Interpolation Search
 Exponential Search
3. How does linear search work?
Answer: Linear search checks each element in the list sequentially until
the desired element is found or the list ends. Its time complexity is O(n).
4. What is the best and worst case complexity of linear search?
Answer:
 Best case: O(1) (if the element is at the beginning)
 Worst case: O(n) (if the element is at the end or not present)
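The sequential scan described above can be sketched in Python (an illustrative version; the function name `linear_search` is my own):

```python
def linear_search(arr, key):
    """Check each element left to right; return its index, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == key:
            return i  # best case: key at index 0 -> O(1)
    return -1         # worst case: key at the end or missing -> O(n)
```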
5. How does binary search work?
Answer: Binary search divides the sorted array into halves and checks
the middle element. If the key is smaller, it searches the left half;
otherwise, it searches the right half.
6. What is the time complexity of binary search?
Answer:
 Best case: O(1)
 Average and worst case: O(log n)
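The halving process above can be sketched as an iterative Python function (an illustration, assuming a sorted list):

```python
def binary_search(arr, key):
    """Repeatedly halve the search range of a sorted list; O(log n)."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == key:
            return mid
        elif arr[mid] < key:
            low = mid + 1    # key is in the right half
        else:
            high = mid - 1   # key is in the left half
    return -1                # key not present
```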
7. What is the difference between linear search and binary search?
Answer:
 Linear search works on unsorted data, whereas binary search
requires sorted data.
 Linear search is O(n), while binary search is O(log n).
8. What is interpolation search?
Answer: It is an improved version of binary search that works well with
uniformly distributed data. Instead of dividing the array into equal halves,
it estimates the position using the formula:
pos = low + ((key − arr[low]) × (high − low)) / (arr[high] − arr[low])
Time complexity: O(log log n) for uniformly distributed data.
9. When should we use jump search instead of binary search?
Answer: When the dataset is sorted and large, but random access is
costly. Jump search works in O(√n) time.
10. What is exponential search?
Answer: It is used for searching in unbounded or large arrays. It first
finds a range by exponentially increasing the index, then applies binary
search within that range.

Sorting Algorithms
11. What is sorting in algorithms?
Answer: Sorting is the process of arranging data in a specific order
(ascending or descending) to improve search efficiency and readability.
12. Name different types of sorting algorithms.
Answer:
 Comparison-based sorting: Bubble Sort, Selection Sort, Merge
Sort, Quick Sort, Heap Sort
 Non-comparison-based sorting: Counting Sort, Radix Sort,
Bucket Sort
13. What is bubble sort?
Answer: Bubble sort repeatedly compares adjacent elements and swaps
them if they are in the wrong order. This process continues until the array
is sorted.
14. What is the time complexity of bubble sort?
Answer:
 Best case (already sorted): O(n)
 Worst and average case: O(n²)
15. How does selection sort work?
Answer: Selection sort repeatedly selects the smallest element and
swaps it with the first unsorted element.
16. What is the time complexity of selection sort?
Answer:
 Best, worst, and average case: O(n²)
17. How does insertion sort work?
Answer: It builds the sorted list one element at a time by inserting each
element into its correct position.
18. What is the best-case complexity of insertion sort?
Answer: O(n) (if the array is already sorted).
19. How does merge sort work?
Answer: Merge sort follows a divide and conquer approach:
1. Divides the array into two halves.
2. Recursively sorts each half.
3. Merges the sorted halves.
Time complexity: O(n log n).
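The three steps above can be sketched in Python (a minimal, non-in-place version for illustration):

```python
def merge_sort(arr):
    """Divide, recursively sort the halves, then merge; O(n log n) time."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```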
20. What is quicksort, and how does it work?
Answer: Quicksort picks a pivot, partitions the array into elements
smaller and larger than the pivot, then recursively sorts each part.
21. What is the worst-case time complexity of quicksort?
Answer: O(n²) (when the pivot is always the smallest or largest
element).
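The pivot-and-partition idea can be sketched in Python (one common variant, Lomuto partitioning with the last element as pivot; the scheme choice is my own):

```python
def quicksort(arr, low=0, high=None):
    """In-place quicksort: partition around a pivot, then recurse on each side."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        pivot, i = arr[high], low - 1
        for j in range(low, high):       # move smaller elements left of the pivot
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        p = i + 1                        # pivot's final position
        quicksort(arr, low, p - 1)
        quicksort(arr, p + 1, high)
    return arr
```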
22. How does heap sort work?
Answer:
1. Builds a max heap from the array.
2. Repeatedly removes the largest element and restores the heap.
Time complexity: O(n log n).
23. What is counting sort, and when should we use it?
Answer: Counting sort is a non-comparison sorting algorithm used
when the range of input values is small. Time complexity: O(n + k),
where k is the range of numbers.
24. What is radix sort?
Answer: Radix sort sorts numbers digit by digit using a stable sorting
algorithm like counting sort. Time complexity: O(nk), where k is the
number of digits.
25. What is bucket sort?
Answer: It distributes elements into buckets, sorts each bucket
separately (usually with insertion sort), and then merges them.
26. What is the main difference between merge sort and quicksort?
Answer:
 Merge sort is stable, quicksort is not always stable.
 Merge sort requires O(n) extra space, quicksort is in-place.
27. What is the stability of a sorting algorithm?
Answer: A sorting algorithm is stable if it maintains the relative order of
equal elements. Example: Merge Sort is stable, but Quick Sort is not.
28. Which sorting algorithms are in-place?
Answer: Quick Sort, Bubble Sort, Selection Sort, and Heap Sort.
29. Which sorting algorithms are best for large datasets?
Answer: Merge Sort, Quick Sort, and Heap Sort (O(n log n) complexity).
30. How does sorting help in searching?
Answer:
 Binary search requires sorted data.
 Sorting makes duplicate detection, merging, and range
queries faster.
Greedy techniques
1. What is the greedy technique in algorithms?
Answer: The greedy technique makes locally optimal choices at each step in the hope that
they lead to a globally optimal solution.
2. What are the characteristics of a greedy algorithm?
Answer:
 Greedy choice property: A globally optimal solution can be arrived at by choosing locally
optimal solutions.
 Optimal substructure: An optimal solution to a problem contains optimal solutions to its
subproblems.
3. How does the greedy method differ from dynamic programming?
Answer:
 Greedy algorithms make decisions without revisiting previous choices.
 Dynamic programming solves subproblems and stores results to avoid recomputation.
4. When should we use a greedy approach?
Answer: When the problem has optimal substructure and greedy choice property, such as
in scheduling and Huffman coding.
5. What is an example of a problem that cannot be solved using a greedy
algorithm?
Answer: The 0/1 Knapsack Problem, because a locally optimal choice (taking the item with
the highest value-to-weight ratio) does not always lead to a globally optimal solution.

Greedy Algorithms and Their Applications


6. What is Huffman coding, and how does it work?
Answer: Huffman coding is a lossless data compression algorithm that assigns shorter
binary codes to more frequent characters using a greedy approach. It builds a Huffman tree
using a priority queue.
7. What is the time complexity of Huffman coding?
Answer: O(n log n), where n is the number of unique characters.
8. What is Kruskal’s algorithm used for?
Answer: Kruskal’s algorithm is used to find the Minimum Spanning Tree (MST) of a
graph by sorting edges and picking the smallest one while avoiding cycles.
9. What is the time complexity of Kruskal’s algorithm?
Answer: O(E log E), where E is the number of edges.
10. What data structure is used in Kruskal’s algorithm?
Answer: The Disjoint Set Union (DSU) or Union-Find data structure is used to check for
cycles efficiently.
11. What is Prim’s algorithm?
Answer: Prim’s algorithm finds the Minimum Spanning Tree (MST) by growing the tree
from a starting node and adding the smallest edge that connects an unseen node.
12. What is the time complexity of Prim’s algorithm?
Answer: O(V²) using an adjacency matrix, O(E log V) using a binary-heap priority
queue, and O(E + V log V) with a Fibonacci heap.
13. What is Dijkstra’s algorithm used for?
Answer: Dijkstra’s algorithm finds the shortest path from a source node to all other nodes
in a weighted graph.
14. How does Dijkstra’s algorithm work?
Answer: It uses a priority queue to pick the nearest unvisited node and updates distances of
adjacent nodes.
15. What is the time complexity of Dijkstra’s algorithm?
Answer:
 O(V²) using an adjacency matrix.
 O(E log V) using a priority queue.
16. Can Dijkstra’s algorithm handle negative weights?
Answer: No, Dijkstra’s algorithm fails with negative weights because it assumes that
extending a path never decreases its total length.
17. What is the Greedy approach in the Activity Selection Problem?
Answer: The activity that finishes the earliest is always selected first to maximize the
number of activities.
18. What is the time complexity of the Activity Selection Problem?
Answer: O(n log n) (due to sorting).
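The earliest-finish-first rule can be sketched in Python (an illustration; activities are assumed to be (start, finish) pairs):

```python
def select_activities(activities):
    """Greedy: sort by finish time, keep each activity that starts after
    the previously selected one finishes."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:       # compatible with the last chosen activity
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```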
19. How does the Fractional Knapsack problem differ from the 0/1 Knapsack
problem?
Answer: In the Fractional Knapsack problem, we can take fractional parts of items,
whereas in 0/1 Knapsack, we can take the entire item or none.
20. What is the time complexity of the Fractional Knapsack problem?
Answer: O(n log n) (sorting items by value/weight ratio).
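The ratio-sorting approach can be sketched in Python (items as (value, weight) pairs is my own representation for illustration):

```python
def fractional_knapsack(items, capacity):
    """Greedy: take items by descending value/weight ratio,
    splitting the last item to fill the remaining capacity."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total
```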

Advanced Greedy Problems


21. How is the Huffman tree constructed in Huffman encoding?
Answer:
1. Insert all characters into a priority queue (min-heap).
2. Extract two smallest nodes, create a parent node with their sum, and insert back.
3. Repeat until one node remains (the root).
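These three steps can be sketched in Python with a min-heap (a simplified illustration that carries partial code tables in the heap; the counter is only a tie-breaker so tuples compare):

```python
import heapq
import itertools

def huffman_codes(freq):
    """Greedy Huffman construction; freq maps symbol -> count.
    Returns a {symbol: bitstring} code table."""
    counter = itertools.count()
    heap = [(f, next(counter), {sym: ""}) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)    # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(counter), merged))
    return heap[0][2]
```

More frequent symbols end up with shorter codes, as the answer to question 6 describes.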
22. What is the job sequencing problem in Greedy algorithms?
Answer: The problem schedules jobs with deadlines and profits to maximize total profit.
23. What is the time complexity of the Job Sequencing problem?
Answer: O(n log n) (sorting by profit).
24. What is a greedy coloring algorithm for graphs?
Answer: It assigns colors to graph nodes such that no two adjacent nodes have the same
color using the fewest colors possible.
25. What is the best case and worst case of a greedy graph coloring algorithm?
Answer:
 Best case: O(V) (if optimal coloring is found).
 Worst case: O(V²).
26. What is the greedy approach for coin change?
Answer: Pick the largest coin possible at each step to minimize the number of coins.
27. Can the greedy method always solve the Coin Change problem optimally?
Answer: No, it works only when the coin denominations are canonical (e.g., US coins).
28. How does the greedy method solve the Egyptian Fraction problem?
Answer: It expresses a fraction as a sum of unique unit fractions using ceil division.
29. What is a real-world application of greedy algorithms?
Answer:
 Network Routing (Dijkstra’s Algorithm)
 Data Compression (Huffman Encoding)
 Scheduling (Activity Selection, Job Sequencing)
30. What are the advantages and disadvantages of greedy algorithms?
Answer:
Advantages:
 Faster and simpler than dynamic programming.
 Often provides an optimal or near-optimal solution.
Disadvantages:
 Doesn’t always yield the globally optimal solution.
 Requires a problem to have greedy choice property and optimal substructure.
Dynamic programming
1. What is Dynamic Programming (DP)?
Answer: Dynamic Programming is an optimization technique used to solve problems by
breaking them into smaller subproblems, solving each subproblem once, and storing the
results to avoid redundant computations.
2. What are the two main properties of problems that can be solved using DP?
Answer:
1. Optimal Substructure: An optimal solution to a problem contains optimal solutions to its
subproblems.
2. Overlapping Subproblems: The problem can be divided into subproblems that are solved
multiple times.
3. How does DP differ from the greedy method?
Answer:
 Greedy algorithms make locally optimal choices without revisiting previous decisions.
 DP algorithms store and reuse previous computations to make globally optimal decisions.
4. What are the two approaches to implementing DP?
Answer:
1. Top-Down (Memoization): Solves problems recursively and stores results in a table to avoid
recomputation.
2. Bottom-Up (Tabulation): Iteratively solves and stores solutions to all subproblems in a table.
5. What is the advantage of the bottom-up approach over memoization?
Answer: The bottom-up approach avoids recursion overhead, making it more memory-
efficient in some cases.

Dynamic Programming Techniques & Concepts


6. What is the time complexity of DP solutions?
Answer: Typically O(n) or O(n²), but it varies depending on the problem.
7. What type of problems can be solved using DP?
Answer:
 Shortest Paths (Floyd-Warshall, Bellman-Ford)
 Knapsack Problems
 Sequence Alignment (LCS, Edit Distance)
 Matrix Chain Multiplication
8. Why is DP not always the best approach?
Answer: DP can be inefficient if the problem does not have overlapping subproblems,
leading to excessive memory usage.
9. What is the difference between DP and recursion?
Answer:
 Recursion may solve the same subproblem multiple times, causing redundant calculations.
 DP solves each subproblem only once and stores the results for reuse.
10. How does memoization help in DP?
Answer: Memoization caches results of subproblems to prevent recomputation, improving
efficiency.

Famous DP Problems
11. What is the Fibonacci sequence, and how can it be solved using DP?
Answer:
The Fibonacci sequence is defined as:
F(n)=F(n−1)+F(n−2)
Using memoization or tabulation, we store previous values to compute Fibonacci numbers
efficiently in O(n) time.
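The memoized version can be sketched in Python using the standard-library cache decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down DP: each F(k) is computed once and cached, so fib runs in O(n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```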
12. What is the 0/1 Knapsack problem, and how is it solved using DP?
Answer:
The 0/1 Knapsack problem selects items with weights and values to maximize value without
exceeding weight capacity. DP constructs a table of subproblems using the recurrence:
dp[i][w] = max(dp[i−1][w], value[i] + dp[i−1][w − weight[i]]) if weight[i] ≤ w, else dp[i−1][w]
13. What is the time complexity of the 0/1 Knapsack problem?


Answer: O(nW), where n is the number of items and W is the weight capacity.
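The table-filling approach can be sketched in Python (a bottom-up version for illustration):

```python
def knapsack_01(values, weights, capacity):
    """Bottom-up 0/1 knapsack: dp[i][w] is the best value using the
    first i items with weight budget w; O(nW) time."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]              # option 1: skip item i
            if weights[i - 1] <= w:              # option 2: take item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]
```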
14. How does DP solve the Longest Common Subsequence (LCS) problem?
Answer:
LCS finds the longest sequence present in both strings. It uses the recurrence:
dp[i][j] = 1 + dp[i−1][j−1] if X[i] = Y[j], otherwise max(dp[i−1][j], dp[i][j−1])
15. What is the time complexity of LCS?
Answer: O(mn), where m and n are the lengths of the input strings.
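The standard LCS recurrence can be sketched as a Python table fill (returning only the length for brevity):

```python
def lcs_length(x, y):
    """dp[i][j] = LCS length of x[:i] and y[:j]; O(mn) time and space."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]          # characters match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```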
16. What is the Edit Distance problem in DP?
Answer: It calculates the minimum operations (insertion, deletion, substitution) to convert
one string to another using DP.
17. What is the Bellman-Ford algorithm, and how does it use DP?
Answer: It finds the shortest path from a source vertex to all others in O(VE) using DP by
iteratively updating distances.
18. What is the Floyd-Warshall algorithm?
Answer: It finds the shortest paths between all pairs of vertices using a DP approach in
O(V³) time.
19. How does DP solve the Matrix Chain Multiplication problem?
Answer:
It determines the best way to parenthesize matrices to minimize multiplication cost using:
dp[i][j] = min over i ≤ k < j of (dp[i][k] + dp[k+1][j] + p[i−1] × p[k] × p[j])
20. What is the time complexity of Matrix Chain Multiplication?


Answer: O(n³)

Advanced DP Problems
21. What is the Coin Change problem, and how does DP solve it?
Answer:
It finds the number of ways to make a given amount using coins of different denominations
using:
dp[j] = dp[j] + dp[j − coin], iterated for each coin and each amount j ≥ coin
22. What is the time complexity of the Coin Change problem?


Answer: O(n * m), where n is the amount and m is the number of coins.
23. How does DP solve the Rod Cutting problem?
Answer:
It finds the best way to cut a rod to maximize profit using a recursive DP relation similar to
Knapsack.
24. What is the Travelling Salesman Problem (TSP) using DP?
Answer:
It finds the shortest route covering all cities using Bitmask DP with a time complexity of
O(n² * 2ⁿ).
25. What is the Subset Sum problem, and how is it solved using DP?
Answer:
It checks if a subset exists with a given sum using a boolean DP table.
26. How does DP solve the Longest Increasing Subsequence (LIS) problem?
Answer:
It computes the length of the longest increasing subsequence using:
dp[i] = 1 + max(dp[j] for all j < i with arr[j] < arr[i]), or dp[i] = 1 if no such j exists
27. What is Kadane’s algorithm, and how does it relate to DP?


Answer:
Kadane’s algorithm finds the maximum subarray sum in O(n) using DP.
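Kadane's DP idea (the best sum ending at each index either extends the previous run or starts fresh) can be sketched as:

```python
def max_subarray_sum(arr):
    """Kadane's algorithm: single pass, O(n) time, O(1) extra space."""
    best = current = arr[0]
    for x in arr[1:]:
        current = max(x, current + x)  # extend the run or restart at x
        best = max(best, current)
    return best
```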
28. How does DP solve the Egg Dropping problem?
Answer:
It determines the minimum number of trials needed to find the highest floor from which an
egg can be dropped without breaking using:
dp[e][f] = 1 + min over 1 ≤ k ≤ f of max(dp[e−1][k−1], dp[e][f−k])
29. What is Bitmask DP?


Answer:
It uses bit manipulation to solve subset-based problems efficiently, such as TSP.
30. What are the limitations of DP?
Answer:
 High memory usage (requires storing intermediate results).
 Not useful for problems without overlapping subproblems.
 Can be slower than greedy algorithms in some cases.
Graph Traversals
1. What are the two main graph traversal techniques?
Answer:
 Breadth-First Search (BFS)
 Depth-First Search (DFS)
2. How does BFS work?
Answer: BFS explores all neighbors of a node before moving to the next
level, using a queue (FIFO structure).
3. What is the time complexity of BFS?
Answer: O(V + E), where V is the number of vertices and E is the
number of edges.
4. How does DFS work?
Answer: DFS explores as far as possible along each branch before
backtracking, using a stack (or recursion).
5. What is the time complexity of DFS?
Answer: O(V + E)
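Both traversals can be sketched in Python over an adjacency-list graph (the dict-of-lists representation here is an assumption for illustration):

```python
from collections import deque

def bfs(graph, start):
    """Level-order traversal using a FIFO queue; O(V + E)."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

def dfs(graph, start, order=None):
    """Recursive depth-first traversal; O(V + E)."""
    if order is None:
        order = []
    order.append(start)
    for nbr in graph[start]:
        if nbr not in order:
            dfs(graph, nbr, order)
    return order
```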
6. What is the difference between BFS and DFS?
Answer:
 BFS is level-order traversal and uses a queue.
 DFS is depth-wise traversal and uses a stack or recursion.
7. Which traversal is better for finding the shortest path in an
unweighted graph?
Answer: BFS, because it finds the shortest path in O(V + E).
8. How can DFS be used to detect cycles in a graph?
Answer:
 In a directed graph, a cycle is detected if a node is visited and still
in the recursion stack.
 In an undirected graph, a cycle is detected if a visited node is
reached again (excluding the parent node).
9. What is the main application of BFS in AI and networking?
Answer: Used in shortest path algorithms, web crawlers, and network
broadcasting.
10. How can DFS be used in topological sorting?
Answer: Perform DFS and store nodes in a stack after visiting all their
neighbors; the stack gives the topological order.

Spanning Trees
11. What is a spanning tree?
Answer: A spanning tree of a graph is a subgraph that contains all
vertices and is a tree (connected and acyclic).
12. How many edges does a spanning tree of a graph with V
vertices have?
Answer: V - 1 edges.
13. What are the two main algorithms for finding a Minimum
Spanning Tree (MST)?
Answer:
1. Kruskal’s Algorithm (uses sorting and Disjoint Set Union - DSU).
2. Prim’s Algorithm (uses a priority queue/Min-Heap).
14. How does Kruskal’s algorithm work?
Answer:
1. Sort all edges by weight.
2. Pick the smallest edge that doesn’t form a cycle (using DSU).
3. Continue until all V - 1 edges are included.
15. What is the time complexity of Kruskal’s Algorithm?
Answer: O(E log E) due to edge sorting.
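The three steps above can be sketched in Python (a compact illustration; vertices numbered 0..V−1 and edges given as (weight, u, v) tuples are my own conventions):

```python
def kruskal_mst(num_vertices, edges):
    """Kruskal's algorithm with a Union-Find parent array.
    Returns the list of MST edges as (weight, u, v) tuples."""
    parent = list(range(num_vertices))

    def find(x):                       # root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):            # smallest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                              # different trees: no cycle
            parent[ru] = rv
            mst.append((weight, u, v))
            if len(mst) == num_vertices - 1:
                break
    return mst
```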
16. How does Prim’s algorithm work?
Answer:
1. Start from any node and add the smallest edge that connects to an
unvisited node.
2. Continue until all nodes are included.
17. What is the time complexity of Prim’s Algorithm?
Answer: O(E log V) using a Min-Heap.
18. Which algorithm is better for dense graphs, Kruskal or Prim?
Answer: Prim’s Algorithm performs better in dense graphs due to its
O(E log V) complexity.
19. What is the difference between MST and a normal spanning
tree?
Answer: An MST has the minimum possible total edge weight.
20. Can a graph have more than one MST?
Answer: Yes, if multiple spanning trees have the same total weight.

Shortest Path Algorithms


21. What are the main shortest path algorithms?
Answer:
 Dijkstra’s Algorithm (single-source shortest path, no negative
weights).
 Bellman-Ford Algorithm (handles negative weights).
 Floyd-Warshall Algorithm (all-pairs shortest path).
22. How does Dijkstra’s Algorithm work?
Answer:
1. Assign 0 to the source and infinity to other nodes.
2. Use a priority queue to pick the smallest distance node.
3. Relax edges and update distances.
4. Repeat until all nodes are processed.
23. What is the time complexity of Dijkstra’s Algorithm?
Answer: O((V + E) log V) using a Min-Heap (Priority Queue).
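The four steps above can be sketched in Python with the standard-library heap (graph as a dict mapping node to (neighbor, weight) pairs is an assumed representation):

```python
import heapq

def dijkstra(graph, source):
    """Min-heap Dijkstra; returns shortest distances from source.
    O((V + E) log V) with a binary heap."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:              # stale entry: a shorter path was found
            continue
        for nbr, weight in graph[node]:
            nd = d + weight
            if nd < dist[nbr]:          # relax the edge
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist
```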
24. Can Dijkstra’s Algorithm handle negative weights?
Answer: No, it can give incorrect results with negative edge weights, and no
shortest-path algorithm can handle negative-weight cycles.
25. How does the Bellman-Ford Algorithm work?
Answer:
1. Initialize distances (like Dijkstra).
2. Relax each edge V - 1 times.
3. Check for negative-weight cycles.
26. What is the time complexity of Bellman-Ford?
Answer: O(VE)
27. What is the main difference between Dijkstra’s and Bellman-
Ford?
Answer:
 Dijkstra is faster (O((V + E) log V)) but fails with negative
weights.
 Bellman-Ford works with negative weights but is slower (O(VE)).
28. How does Floyd-Warshall Algorithm work?
Answer: It finds all-pairs shortest paths using the DP update:
dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]), applied for every intermediate vertex k
29. What is the time complexity of Floyd-Warshall?


Answer: O(V³)
30. When should we use Floyd-Warshall instead of Dijkstra?
Answer:
 Floyd-Warshall is better for all-pairs shortest paths in dense
graphs.
 Dijkstra is better for single-source shortest path in sparse
graphs.
