Design and Analysis of Algorithms -
Descriptive Answers
1) Binomial Coefficients using Dynamic Programming
Binomial Coefficients (C(n, k)) count the number of ways to choose k elements from a set of
n elements. It is defined as:
C(n, k) = C(n-1, k-1) + C(n-1, k), with base conditions:
- C(n, 0) = C(n, n) = 1
This recursive relation arises from Pascal's Triangle. Using Dynamic Programming, we store
previously computed values in a 2D array C[n+1][k+1] to avoid redundant computations.
DP Algorithm:
for i from 0 to n:
    for j from 0 to min(i, k):
        if j == 0 or j == i:
            C[i][j] = 1
        else:
            C[i][j] = C[i-1][j-1] + C[i-1][j]
Time Complexity: O(n*k)
Space Complexity: O(n*k) (reducible to O(k), since each row depends only on the previous row)
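The table fill above can be sketched directly in Python (function name and signature are illustrative):

```python
def binomial(n, k):
    """Bottom-up DP over Pascal's triangle: C[i][j] holds C(i, j)."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:          # base cases C(i, 0) = C(i, i) = 1
                C[i][j] = 1
            else:                          # Pascal's rule
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```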
2) Floyd’s Algorithm
Floyd’s Algorithm finds shortest paths between all pairs of vertices in a weighted graph. It
uses a dynamic programming approach.
Recursive Formula:
Let D[i][j][k] be the shortest path from i to j using intermediate vertices from 1 to k.
D[i][j][k] = min(D[i][j][k-1], D[i][k][k-1] + D[k][j][k-1])
Final Iterative Form:
for k from 1 to n:
    for i from 1 to n:
        for j from 1 to n:
            D[i][j] = min(D[i][j], D[i][k] + D[k][j])
Example:
For a graph with 3 vertices (INF = no direct edge):
0    5    INF
INF  0    3
2    INF  0
After applying Floyd's algorithm, the all-pairs shortest-distance matrix is:
0  5  8
5  0  3
2  7  0
Time Complexity: O(n^3)
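A minimal Python sketch of the triple loop, applied to the 3-vertex example above (the function name is illustrative):

```python
INF = float('inf')

def floyd(dist):
    """All-pairs shortest paths; dist is an n x n matrix, returns a new matrix."""
    n = len(dist)
    D = [row[:] for row in dist]          # work on a copy
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```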
3) Matrix Chain Multiplication using DP
Given matrices A1, A2, ..., An, determine the most efficient way to multiply them. The cost
depends on the order of multiplication.
Let m[i][j] be the minimum number of scalar multiplications needed to compute Ai...Aj.
Recursive Formula:
m[i][j] = 0 if i == j
m[i][j] = min over i <= k < j of {m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j]}
Where p[] is the dimensions array.
DP reduces redundant computations by memoizing subproblems.
Time Complexity: O(n^3)
Space Complexity: O(n^2)
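The recurrence can be sketched in Python, filling the table by increasing chain length (a minimal version that returns only the cost, not the parenthesization):

```python
def matrix_chain(p):
    """p is the dimensions array: matrix Ai has size p[i-1] x p[i].
    Returns the minimum number of scalar multiplications for A1...An."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][i] = 0 by default
    for length in range(2, n + 1):               # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))  # split point k
    return m[1][n]
```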
4) Optimal Binary Search Tree (OBST)
Given keys k1 < k2 < ... < kn with search probabilities p1, p2, ..., pn, the goal is to build a BST
with minimal expected search cost.
Let e[i][j] be the minimum cost of OBST containing keys ki to kj.
Let w[i][j] be the sum of probabilities from pi to pj.
Cost Function:
e[i][j] = min over i <= r <= j of {e[i][r-1] + e[r+1][j] + w[i][j]}, with base case e[i][i-1] = 0 (empty subtree)
Time Complexity: O(n^3)
Space Complexity: O(n^2)
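A Python sketch of the cost computation (assuming, as a simplification, that only successful-search probabilities p[1..n] are given, with p[0] unused so indices match the formula):

```python
def obst_cost(p):
    """Minimum expected search cost; e[i][j] covers keys i..j, e[i][i-1] = 0."""
    n = len(p) - 1
    e = [[0.0] * (n + 2) for _ in range(n + 2)]
    w = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):                    # prefix sums of probabilities
        for j in range(i, n + 1):
            w[i][j] = w[i][j - 1] + p[j]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every key r as the root of the subtree over keys i..j
            e[i][j] = min(e[i][r - 1] + e[r + 1][j]
                          for r in range(i, j + 1)) + w[i][j]
    return e[1][n]
```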
5) Traveling Salesperson Problem (TSP)
Given n cities and the distances between them, find the shortest route that visits every city exactly once and returns to the origin.
Dynamic Programming Solution:
Let dp[mask][i] be the minimum cost to reach node i having visited nodes in mask.
dp[mask][j] = min(dp[mask ^ (1 << j)][i] + dist[i][j]) for all i != j
Start with dp[1<<0][0] = 0 (start at city 0)
Time Complexity: O(n^2 * 2^n)
Space Complexity: O(n * 2^n)
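A sketch of the bitmask DP (the Held-Karp formulation); dist is assumed to be a complete n x n distance matrix:

```python
def tsp(dist):
    """dp[mask][i] = min cost of starting at city 0, visiting exactly the
    cities in mask, and ending at city i (bit i must be set in mask)."""
    n = len(dist)
    INF = float('inf')
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0                              # start at city 0
    for mask in range(1 << n):
        for i in range(n):
            if dp[mask][i] == INF or not (mask >> i) & 1:
                continue
            for j in range(n):                # extend the tour to city j
                if (mask >> j) & 1:
                    continue
                new = mask | (1 << j)
                dp[new][j] = min(dp[new][j], dp[mask][i] + dist[i][j])
    full = (1 << n) - 1                       # close the tour back to city 0
    return min(dp[full][i] + dist[i][0] for i in range(1, n))
```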
6) 0-1 Knapsack Problem
Given weights and values of n items, and capacity W, find maximum value that fits into the
knapsack.
Recursive Formula (for the subproblem with capacity w):
if wt[i-1] <= w:
    dp[i][w] = max(val[i-1] + dp[i-1][w-wt[i-1]], dp[i-1][w])
else:
    dp[i][w] = dp[i-1][w]
Iterative DP:
for i from 0 to n:
    for w from 0 to W:
        if i == 0 or w == 0:
            dp[i][w] = 0
        else if wt[i-1] <= w:
            dp[i][w] = max(val[i-1] + dp[i-1][w-wt[i-1]], dp[i-1][w])
        else:
            dp[i][w] = dp[i-1][w]
Time Complexity: O(nW)
Space Complexity: O(nW)
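The iterative DP above translates directly into Python (function name and argument order are illustrative):

```python
def knapsack(wt, val, W):
    """dp[i][w] = best value using the first i items within capacity w."""
    n = len(wt)
    dp = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 / column 0 stay 0
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if wt[i - 1] <= w:                    # item i fits: take it or skip it
                dp[i][w] = max(val[i - 1] + dp[i - 1][w - wt[i - 1]],
                               dp[i - 1][w])
            else:                                 # item i does not fit
                dp[i][w] = dp[i - 1][w]
    return dp[n][W]
```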
7) Prim’s vs Kruskal’s Algorithm
Prim’s Algorithm:
- Starts from a node and grows the MST by adding the minimum weight edge from the tree.
- Uses Priority Queue or Min-Heap.
- Time: O(E log V)
Kruskal’s Algorithm:
- Sorts all edges and adds the smallest edge without forming a cycle.
- Uses Disjoint Set (Union-Find).
- Time: O(E log E)
Greedy Nature:
Both are greedy algorithms as they make locally optimal choices to find the global optimum.
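As an illustration of the greedy choice, a minimal Kruskal's sketch using union-find (with path compression only; the edge-list format and vertex numbering are assumptions):

```python
def kruskal(n, edges):
    """Total MST weight. edges: list of (weight, u, v); vertices are 0..n-1."""
    parent = list(range(n))

    def find(x):                          # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):         # greedy: smallest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                      # endpoints not yet connected: no cycle
            parent[ru] = rv
            total += w
    return total
```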
8) Pseudocode for Floyd’s Algorithm
for k from 1 to n:
    for i from 1 to n:
        for j from 1 to n:
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]
                path[i][j] = k
Tracking Intermediate Nodes:
Use a 2D array path[i][j] that records the intermediate vertex k which last improved the i-to-j distance. The actual path is recovered by recursively expanding path[i][j] into the paths from i to k and from k to j.
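The pseudocode plus path recovery can be sketched as follows (0-indexed vertices; path[i][j] is None when the direct edge is used):

```python
def floyd_with_path(dist):
    """Returns (D, path): shortest distances and intermediate-vertex table."""
    n = len(dist)
    D = [row[:] for row in dist]
    path = [[None] * n for _ in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    path[i][j] = k            # k lies on the shortest i -> j path
    return D, path

def intermediates(path, i, j):
    """Recursively expand path[i][j] into the intermediate vertices on i -> j."""
    k = path[i][j]
    if k is None:
        return []
    return intermediates(path, i, k) + [k] + intermediates(path, k, j)
```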
9) Greedy vs Dynamic Programming Example
Problem: Coin Change (minimum coins to make value V)
Greedy: repeatedly pick the largest denomination <= the remaining amount. This can fail for non-canonical coin systems.
DP: Try all denominations and take minimum:
dp[i] = min(dp[i], dp[i - coin] + 1)
DP guarantees optimality, greedy doesn’t.
Example:
Coins = [1, 3, 4], Amount = 6
- Greedy: 4+1+1 = 3 coins
- DP: 3+3 = 2 coins (optimal)
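Both strategies can be sketched in a few lines of Python to reproduce the example (function names are illustrative):

```python
def min_coins_dp(coins, amount):
    """dp[v] = fewest coins summing to v; dp[0] = 0."""
    INF = float('inf')
    dp = [0] + [INF] * amount
    for v in range(1, amount + 1):
        for coin in coins:                # try every denomination, keep the best
            if coin <= v and dp[v - coin] + 1 < dp[v]:
                dp[v] = dp[v - coin] + 1
    return dp[amount]

def min_coins_greedy(coins, amount):
    """Always take the largest coin that fits; not always optimal."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count
```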