Dynamic Programming
Dynamic Programming (DP) is a method for solving problems by breaking them into smaller
overlapping subproblems, solving each subproblem just once, and storing its result to avoid
redundant computations. It is particularly useful for optimization problems and problems with
optimal substructure and overlapping subproblems.
1. Optimal Substructure:
A problem has an optimal substructure if its solution can be constructed efficiently
using the solutions of its subproblems.
o Example: In the shortest path problem, the shortest path to a destination can
be computed using the shortest paths to intermediate points.
2. Overlapping Subproblems:
A problem has overlapping subproblems if it can be divided into subproblems that are
reused multiple times.
o Example: In computing the Fibonacci sequence, many intermediate Fibonacci
numbers are calculated repeatedly if solved naively.
3. Memoization and Tabulation:
o Memoization: Top-down approach where results of solved subproblems are
stored in a lookup table (usually a dictionary or array).
o Tabulation: Bottom-up approach where a table is constructed iteratively from
smaller to larger subproblems.
o Example (Fibonacci): tabulation fills the table using the recurrence
dp[n] = dp[n−1] + dp[n−2]
4. Base Cases:
Define the simplest instances of the problem, which seed the table.
o Example (Fibonacci):
dp[0] = 0, dp[1] = 1
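The two approaches can be sketched on the Fibonacci recurrence above; this is a minimal Python illustration (function names are ours, not from the notes):

```python
from functools import lru_cache

# Top-down (memoization): each fib(k) is computed once, then cached.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:                # base cases: dp[0] = 0, dp[1] = 1
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill dp[] from the base cases upward.
def fib_tab(n):
    dp = [0] * (n + 1)
    if n > 0:
        dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]   # dp[n] = dp[n-1] + dp[n-2]
    return dp[n]
```

Both run in O(n) time; the naive recursion without caching is exponential because the same subproblems are recomputed.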
Time Complexity: Typically, O(n×m), where n and m depend on the dimensions of the
problem (e.g., lengths of strings, weight capacity, or number of items).
Given two sequences X and Y, we say that a sequence Z is a common subsequence of X and
Y if Z is a subsequence of both X and Y. For example, if X = (A, B, C, B, D, A, B) and
Y = (B, D, C, A, B, A), the sequence (B, C, A) is a common subsequence of both X and Y.
The sequence (B, C, A) is not a longest common subsequence (LCS) of X and Y, however,
since it has length 3 and the sequence (B, C, B, A), which is also common to both sequences
X and Y, has length 4. The sequence (B, C, B, A) is an LCS of X and Y, as is the sequence
(B, D, A, B), since X and Y have no common subsequence of length 5 or greater.
In the longest-common-subsequence problem, the input is two sequences X = (x1, x2, …, xm) and Y = (y1, y2, …, yn), and the goal is to find a maximum-length common subsequence of X and Y.
Optimal-Substructure property
Given a sequence X = (x1, x2, …, xm), we define the ith prefix of X, for i = 0, 1, …, m, as Xi = (x1, x2, …, xi). For example, if X = (A, B, C, B, D, A, B), then X4 = (A, B, C, B) and X0 is the empty sequence.
A recursive solution
Let's define c[i, j] to be the length of an LCS of the sequences Xi and Yj. If either i = 0 or j = 0, one of the sequences has length 0, and so the LCS has length 0. The optimal substructure of the LCS problem gives the recursive formula
c[i, j] = 0 if i = 0 or j = 0,
c[i, j] = c[i−1, j−1] + 1 if i, j > 0 and xi = yj,
c[i, j] = max(c[i, j−1], c[i−1, j]) if i, j > 0 and xi ≠ yj.
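This recurrence translates directly into a bottom-up table; a minimal Python sketch (the function name is ours; c[i][j] corresponds to the prefixes Xi and Yj):

```python
def lcs_length(X, Y):
    m, n = len(X), len(Y)
    # c[i][j] = length of an LCS of the prefixes X[:i] and Y[:j];
    # row 0 and column 0 stay 0 (an empty prefix has an empty LCS).
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:          # xi = yj: extend the diagonal LCS
                c[i][j] = c[i - 1][j - 1] + 1
            else:                              # drop the last symbol of X or of Y
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]
```

For the sequences from the example above, lcs_length("ABCBDAB", "BDCABA") returns 4, matching the LCS (B, C, B, A). Time and space are O(mn).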
Given:
o n items, where item i has weight wi and value vi
o A knapsack with a maximum weight capacity W
You must determine the maximum value of items that can fit in the knapsack without exceeding the capacity W.
In the 0/1 version, you either take an item completely (1) or leave it (0); you cannot take
fractional parts.
The dynamic programming solution builds a table to compute the solution iteratively.
Here's the process:
1. Table Definition
Let dp[i][j] represent the maximum value achievable using the first i items with a knapsack capacity of j.
2. Recurrence Relation
If weights[i−1] ≤ j:
dp[i][j] = max(dp[i−1][j], values[i−1] + dp[i−1][j − weights[i−1]])
Otherwise:
dp[i][j] = dp[i−1][j]
3. Base Case
dp[0][j] = 0 for all j, and dp[i][0] = 0 for all i (with no items or no capacity, the achievable value is 0).
4. Final Solution
The value of the optimal solution is stored in dp[n][W] where n is the total number of items,
and W is the capacity of the knapsack.
1. Function Definition:
o Knapsack(weights, values, capacity): A function that takes an array of
weights, an array of values, and the maximum capacity of the knapsack.
2. Initialization:
o n is the number of items.
o A 2D array dp is created where dp[i][w] represents the maximum value that
can be obtained with the first i items and a maximum weight w.
3. Dynamic Programming Table Filling:
o The outer loop iterates through each item.
o The inner loop iterates through each possible weight from 0 to capacity.
o If the weight of the current item is less than or equal to w, it decides whether to include the item by comparing the value obtained from including it (values[i-1] + dp[i-1][w - weights[i-1]]) to the value obtained from excluding it (dp[i-1][w]).
o If the weight of the current item exceeds w, it simply carries forward the value from the previous row.
4. Result:
o The maximum value that can be obtained with the given capacity is stored in
dp[n][capacity], which is returned.
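The walkthrough above can be sketched as a short Python function (a minimal version of the tabulated 0/1 knapsack; the name matches the function described above):

```python
def knapsack(weights, values, capacity):
    n = len(weights)
    # dp[i][w] = maximum value using the first i items with capacity w
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            if weights[i - 1] <= w:
                # take the better of including or excluding item i
                dp[i][w] = max(dp[i - 1][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]])
            else:
                dp[i][w] = dp[i - 1][w]   # item i does not fit at capacity w
    return dp[n][capacity]
```

For example, knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7) returns 9, achieved by taking the items of weight 3 and 4.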
Complexity
Time complexity is O(n × W): the table has n × W entries, each filled in constant time. Space complexity is likewise O(n × W).
Matrix-Chain Multiplication
Problem Statement
Given a chain (A1, A2, …, An) of n matrices, where for i = 1, 2, …, n, matrix Ai has dimension pi−1 × pi, fully parenthesize the product A1 A2 … An in a way that minimizes the number of scalar multiplications. The input is the sequence of dimensions (p0, p1, p2, …, pn).
Structure of an optimal parenthesization
Let Ai:j, where i ≤ j, denote the matrix that results from evaluating the product Ai Ai+1 … Aj. If the problem is nontrivial, that is, i < j, then to parenthesize the product Ai Ai+1 … Aj, the product must split between Ak and Ak+1 for some integer k in the range i ≤ k < j. That is, for some value of k, first compute the matrices Ai:k and Ak+1:j, and then multiply them together to produce the final product Ai:j. The cost of parenthesizing this way is the cost of computing the matrix Ai:k, plus the cost of computing Ak+1:j, plus the cost of multiplying them together.
The optimal substructure of this problem is as follows. Suppose that to optimally parenthesize Ai Ai+1 … Aj, you split the product between Ak and Ak+1. Then the way you parenthesize the "prefix" subchain Ai Ai+1 … Ak within this optimal parenthesization of Ai Ai+1 … Aj must be an optimal parenthesization of Ai Ai+1 … Ak.
A recursive solution
Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix Ai:j. For the full problem, the lowest-cost way to compute A1:n is thus m[1, n].
We can define m[i, j] recursively as follows. If i = j, the problem is trivial: the chain consists of just one matrix Ai:i = Ai, so no scalar multiplications are necessary to compute the product. Thus, m[i, i] = 0 for i = 1, 2, …, n. To compute m[i, j] when i < j, we take advantage of the structure of an optimal solution from step 1. Suppose that an optimal parenthesization splits the product Ai Ai+1 … Aj between Ak and Ak+1, where i ≤ k < j. Then m[i, j] equals the minimum cost m[i, k] for computing the subproduct Ai:k, plus the minimum cost m[k+1, j] for computing the subproduct Ak+1:j, plus the cost of multiplying these two matrices together. Because each matrix Ai is pi−1 × pi, computing the matrix product Ai:k Ak+1:j takes pi−1 pk pj scalar multiplications. Thus, we obtain
m[i, j] = m[i, k] + m[k+1, j] + pi−1 pk pj.
Since the value of k is not known in advance, we try every k in the range i ≤ k < j and take the cheapest. Thus, the recursive definition for the minimum cost of parenthesizing the product Ai Ai+1 … Aj becomes
m[i, j] = 0 if i = j,
m[i, j] = min{ m[i, k] + m[k+1, j] + pi−1 pk pj : i ≤ k < j } if i < j.
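The recursive definition maps directly onto a bottom-up table, filling m[i, j] in order of increasing chain length. A minimal Python sketch (function and variable names are illustrative; s[i][j] records the split point k, which is also what allows the optimal parenthesization to be reconstructed):

```python
import math

def matrix_chain_order(p):
    # p lists the dimensions: matrix A_i is p[i-1] x p[i], for i = 1..n.
    n = len(p) - 1
    # m[i][j] = minimum scalar multiplications to compute A_i..A_j (1-indexed);
    # the diagonal m[i][i] stays 0 (a single matrix costs nothing).
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j] = optimal split k
    for length in range(2, n + 1):              # chain length being solved
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):               # try every split point
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k
    return m, s
```

The three nested loops give O(n^3) time and the tables take O(n^2) space.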
Constructing an optimal solution
Alongside m[i, j], we record in a table s[i, j] the index k at which an optimal split of the product Ai Ai+1 … Aj occurs; the s table then lets us reconstruct the optimal parenthesization recursively.
Example:
For a chain of 6 matrices, the m and s tables are computed bottom-up. (In the standard presentation the tables are rotated so that the main diagonal runs horizontally; the m table uses only the main diagonal and upper triangle, and the s table uses only the upper triangle.) The minimum number of scalar multiplications needed to multiply the 6 matrices is m[1, 6] = 15,125.