Principle of Dynamic Programming
Dynamic Programming (DP) is an optimization technique that solves a problem by breaking it into simpler subproblems.
Unlike divide and conquer, DP solves overlapping subproblems only once and stores their solutions
for reuse.
Key Principles:
1. Overlapping Subproblems: The problem can be divided into smaller, repeated subproblems.
2. Optimal Substructure: An optimal solution to the problem can be constructed from optimal solutions to its subproblems.
Approach:
- Formulate the problem as a recurrence relation.
- Store intermediate results in a table (memoization or tabulation).
Example: Fibonacci Sequence
F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1.
Table:
|n |0|1|2|3|4|5|6|
|------|---|---|---|---|---|---|---|
| F(n) | 0 | 1 | 1 | 2 | 3 | 5 | 8 |
By storing results, we avoid recalculating values, reducing time complexity from O(2^n) (recursive)
to O(n) (DP).
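A minimal tabulation sketch in Python (the function name fib and the driver line are illustrative, not part of the notes):

```python
def fib(n):
    """Bottom-up (tabulation) Fibonacci: O(n) time, O(n) space. Illustrative sketch."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # F(i) = F(i-1) + F(i-2)
    return table[n]

print([fib(i) for i in range(7)])  # [0, 1, 1, 2, 3, 5, 8], matching the table above
```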
Control Abstraction in Dynamic Programming
Control abstraction defines the general framework for solving problems using DP; a generic sketch follows the steps below.
Steps:
1. Problem Identification: Check for overlapping subproblems and optimal substructure.
2. State Definition: Identify what information represents a subproblem.
3. Transition Relation: Define how the solution for one state is derived from others.
4. Base Case: Define initial values for the smallest subproblems.
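A minimal top-down sketch of this framework, assuming Python; the names dp_solve, base_cases, and transition are placeholders chosen here, not part of the notes:

```python
from functools import lru_cache

def dp_solve(base_cases, transition, target_state):
    """Generic top-down DP: memoize each state and derive it from smaller states (illustrative sketch)."""
    @lru_cache(maxsize=None)
    def solve(state):                      # Step 2: a state identifies one subproblem
        if state in base_cases:            # Step 4: base case
            return base_cases[state]
        return transition(state, solve)    # Step 3: transition relation
    return solve(target_state)

# Example: Fibonacci expressed through the abstraction (state = n).
print(dp_solve({0: 0, 1: 1}, lambda n, f: f(n - 1) + f(n - 2), 6))  # 8
```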
Example: 0/1 Knapsack
Given the weights and values of items, determine the maximum total value that can be carried within a given capacity.
Items:
| Item | Weight | Value |
|------|--------|-------|
|1 |2 |3 |
|2 |3 |4 |
|3 |4 |5 |
Capacity = 5.
DP Formula:
dp[i][w] = max(dp[i-1][w], dp[i-1][w-weight[i]] + value[i]), taking only dp[i-1][w] when weight[i] > w.
Table:
|i\w|0|1|2|3|4|5|
|--------|---|---|---|---|---|---|
|0 |0|0|0|0|0|0|
|1 |0|0|3|3|3|3|
|2 |0|0|3|4|4|7|
|3 |0|0|3|4|5|7|
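A tabulation sketch for this example, assuming Python; the function name knapsack is illustrative:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: dp[i][w] = best value using the first i items within weight w (illustrative sketch)."""
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                 # rows are 1-based; lists are 0-based
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]           # skip item i
            if weights[i - 1] <= w:           # take item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([2, 3, 4], [3, 4, 5], 5))  # 7, as in the last cell of the table above
```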
Time Analysis of Control Abstraction
The efficiency of a DP algorithm depends on:
1. Number of Subproblems: Total distinct states (e.g., cells in a table).
2. Transition Time: Time to compute each state from its predecessors.
General Complexity:
- Time Complexity: O(Number of States * Time per Transition).
- Space Complexity: O(Size of the Table).
Example: Matrix Chain Multiplication
Goal: Minimize scalar multiplications for matrices A1: 10x20, A2: 20x30, A3: 30x40.
Table:
|i\j|1 |2 |3 |
|--------|------|-------|--------|
|1 |0 | 6000 | 18000 |
|2 |- |0 | 24000 |
|3 |- |- |0 |
Recursive Formula:
m[i][j] = min_{k=i}^{j-1} (m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j]), where p is the dimension array and A_i has dimensions p[i-1] x p[i].
Time Complexity: O(n^3)
Space Complexity: O(n^2)
Binomial Coefficients
Binomial coefficients count the number of ways to choose k items from n, given by C(n, k) = n! / (k! * (n-k)!).
Recursive Relation:
C(n, k) = C(n-1, k-1) + C(n-1, k)
Base Cases:
- C(n, 0) = C(n, n) = 1
Example: Compute C(5, 2)
|n\k|0|1|2|3|4|5|
|--------|---|---|---|---|---|---|
|0 |1| | | | | |
|1 |1|1| | | | |
|2 |1|2|1| | | |
|3 |1|3|3|1| | |
|4 |1|4|6|4|1| |
|5 | 1 | 5 | 10| 10| 5 | 1 |
By using DP, we compute coefficients efficiently in O(n*k) time.
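A tabulation sketch, assuming Python; the function name binomial is illustrative:

```python
def binomial(n, k):
    """Pascal's triangle by tabulation: C(n, k) = C(n-1, k-1) + C(n-1, k), O(n*k) time (illustrative sketch)."""
    c = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:              # base cases: C(i, 0) = C(i, i) = 1
                c[i][j] = 1
            else:
                c[i][j] = c[i - 1][j - 1] + c[i - 1][j]
    return c[n][k]

print(binomial(5, 2))  # 10, matching row n = 5, column k = 2 of the table above
```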
Optimal Binary Search Tree (OBST)
An OBST minimizes the expected cost of searching for keys.
Definition:
Given keys {k1, k2, ..., kn} with search probabilities {p1, p2, ..., pn}, construct a binary search tree
with minimal expected search cost.
Recursive Relation:
cost[i][j] = min_{r=i}^{j} (cost[i][r-1] + cost[r+1][j] + sum[i][j]), where sum[i][j] = p_i + ... + p_j and cost[i][i-1] = 0.
Example:
Keys = {10, 20, 30}
Probabilities = {0.2, 0.5, 0.3}
|i\j|1 |2 |3 |
|--------|-----|------|-------|
|1 | 0.2 | 0.9 | 1.5 |
|2 | - | 0.5 | 1.1 |
|3 |- |- | 0.3 |
The OBST places frequently searched keys nearer the root, minimizing the expected search cost.
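A sketch of the cost computation, assuming Python and the 1-based convention of the formula above; the function name obst_cost is illustrative:

```python
def obst_cost(p):
    """Minimum expected search cost of an optimal BST for keys 1..n with probabilities p (illustrative sketch)."""
    n = len(p)
    p = [0.0] + list(p)                       # shift to 1-based indexing to match cost[i][j]
    # cost[i][j] = optimal cost for keys i..j; cost[i][i-1] = 0 (empty subtree)
    cost = [[0.0] * (n + 2) for _ in range(n + 2)]
    for length in range(1, n + 1):            # subproblems in order of increasing size
        for i in range(1, n - length + 2):
            j = i + length - 1
            total = sum(p[i:j + 1])           # sum[i][j] = p[i] + ... + p[j]
            cost[i][j] = min(cost[i][r - 1] + cost[r + 1][j]
                             for r in range(i, j + 1)) + total
    return cost[1][n]

print(round(obst_cost([0.2, 0.5, 0.3]), 1))  # 1.5, with key 20 at the root
```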
Chain Matrix Multiplication
Matrix chain multiplication finds the parenthesization of a sequence of matrices that minimizes the number of scalar multiplications.
Recursive Formula:
m[i][j] = min_{k=i}^{j-1} (m[i][k] + m[k+1][j] + p[i-1]*p[k]*p[j])
Example:
Matrices A1: 10x20, A2: 20x30, A3: 30x40.
|i\j|1 |2 |3 |
|--------|------|-------|--------|
|1 |0 | 6000 | 18000 |
|2 |- |0 | 24000 |
|3 |- |- |0 |
The DP table stores the cost of every contiguous subchain so that each subproblem is computed only once.
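A sketch of the table computation, assuming Python; the function name matrix_chain and the dimension array p = [10, 20, 30, 40] follow the example above:

```python
def matrix_chain(p):
    """Minimum scalar multiplications for A1..An, where Ai is p[i-1] x p[i] (illustrative sketch)."""
    n = len(p) - 1
    # m[i][j] = cost of multiplying A_i ... A_j (1-based); m[i][i] = 0
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # increasing chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

print(matrix_chain([10, 20, 30, 40]))  # 18000, as in cell m[1][3] of the table above
```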