Dynamic Programming
• Dynamic programming, like the divide-and-conquer method, solves
problems by combining the solutions to sub-problems.
• “Programming” in this context refers to a tabular method, not to
writing computer code.
• We typically apply dynamic programming to optimization problems.
• Such problems can have many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value.
Dynamic Programming
• When developing a dynamic-programming algorithm, we follow a
sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.
• If we need only the value of an optimal solution, and not the solution
itself, then we can omit step 4.
Rod cutting
• An enterprise buys long steel rods and cuts them into shorter rods, which it then sells. Each cut is free. The management of the enterprise wants to know the best way to cut up the rods.
• We assume that we know the price pi of a rod of length i inches, for each i.
• The rod-cutting problem is the following. Given a rod of length n inches and a table of prices pi for i = 1, 2, ..., n, determine the maximum revenue rn obtainable by cutting up the rod and selling the pieces.
• Note that if the price pn for a rod of length n is large enough, an
optimal solution may require no cutting at all.
Rod cutting
• We can cut up a rod of length n in 2^(n-1) different ways, since we have an independent option of cutting, or not cutting, at each of the n-1 positions 1, 2, ..., n-1 inches from the left end.
• If an optimal solution cuts the rod into k pieces, for some 1 <= k <= n, then an optimal decomposition n = i1 + i2 + ... + ik of the rod into pieces of lengths i1, i2, ..., ik provides maximum corresponding revenue rn = pi1 + pi2 + ... + pik.
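To make the bottom-up computation concrete, here is a minimal Python sketch of the standard dynamic program for rod cutting; the function name, variable names, and price table are illustrative, not taken from the slides. It fills a revenue table using the recurrence rn = max over 1 <= i <= n of (pi + r(n-i)).

```python
def cut_rod_bottom_up(prices, n):
    """Maximum revenue for a rod of length n, plus the size of the first piece
    in an optimal cut for every length (so the cuts can be reconstructed)."""
    r = [0] * (n + 1)            # r[j] = best revenue for a rod of length j
    first_cut = [0] * (n + 1)    # first_cut[j] = length of the first piece in an optimal cut
    for j in range(1, n + 1):
        best = float("-inf")
        for i in range(1, j + 1):                  # try every length i for the first piece
            if prices[i] + r[j - i] > best:
                best = prices[i] + r[j - i]
                first_cut[j] = i
        r[j] = best
    return r[n], first_cut


# Illustrative price table for lengths 1..10 (prices[0] is a placeholder)
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
revenue, first_cut = cut_rod_bottom_up(prices, 7)
print(revenue)                   # 18, e.g. pieces of lengths 1 and 6
```

The table is filled in O(n^2) time, compared with examining all 2^(n-1) ways to cut the rod.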
Longest common subsequence
• To be precise, given a sequence X = {x1, x2, ..., xm}, we define the ith prefix of X, for i = 0, 1, ..., m, as Xi = {x1, x2, ..., xi}.
• For example, if X = {A, B, C, B, D, A, B}, then X4 = {A, B, C, B} and X0 is the empty sequence.
Step 2: A recursive solution
• We can readily see the overlapping-subproblems property in the LCS
problem.
• To find an LCS of X and Y, we may need to find the LCSs of X and Yn-1 and of Xm-1 and Y. But each of these subproblems has the subsubproblem of finding an LCS of Xm-1 and Yn-1. Many other subproblems share subsubproblems.
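As a hedged illustration of this structure, the sketch below memoizes the standard LCS recurrence in Python (names are illustrative): c(i, j) = 0 if i = 0 or j = 0; c(i-1, j-1) + 1 if xi = yj; and max(c(i-1, j), c(i, j-1)) otherwise.

```python
from functools import lru_cache

def lcs_length_memo(X, Y):
    """Length of an LCS of X and Y via the standard recurrence, memoized."""
    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 or j == 0:          # an empty prefix has an empty LCS
            return 0
        if X[i - 1] == Y[j - 1]:      # last symbols match: extend the LCS of the shorter prefixes
            return c(i - 1, j - 1) + 1
        return max(c(i - 1, j), c(i, j - 1))
    return c(len(X), len(Y))

print(lcs_length_memo("ABCBDAB", "BDCABA"))   # 4, e.g. "BCBA"
```

Memoization ensures each of the (m+1)(n+1) subproblems is solved once, even though many of them are shared.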
Step 3: Computing the length of an LCS
• Procedure LCS-LENGTH takes two sequences X = {x1, x2, ..., xm} and Y = {y1, y2, ..., yn} as inputs. It stores the c[i, j] values in a table c[0..m, 0..n].
• It computes the entries in row-major order.
• The procedure also maintains the table b[1..m, 1..n] to help us construct an optimal solution.
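A minimal bottom-up Python sketch of this table-filling step, assuming the same row-major order; the arrow symbols stored in b are just one possible encoding of which subproblem supplied each value.

```python
def lcs_length(X, Y):
    """Fill the c (lengths) and b (directions) tables bottom-up, row by row."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]     # c[i][j] = LCS length of Xi and Yj
    b = [[""] * (n + 1) for _ in range(m + 1)]    # "↖", "↑", or "←"
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "↖"                     # xi = yj: extend the diagonal subproblem
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "↑"                     # value came from the row above
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "←"                     # value came from the column to the left
    return c, b
```

Each entry takes O(1) time to compute, so the tables are filled in O(mn) time.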
Step 4: Constructing an LCS
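Using the b table returned by the sketch above, the LCS itself can be recovered by walking backwards from b[m][n]; a short illustrative sketch:

```python
def print_lcs(b, X, i, j):
    """Print an LCS of X and Y by following the arrows in b from entry (i, j)."""
    if i == 0 or j == 0:
        return
    if b[i][j] == "↖":                 # xi belongs to the LCS
        print_lcs(b, X, i - 1, j - 1)
        print(X[i - 1], end="")
    elif b[i][j] == "↑":
        print_lcs(b, X, i - 1, j)
    else:
        print_lcs(b, X, i, j - 1)

X, Y = "ABCBDAB", "BDCABA"
c, b = lcs_length(X, Y)
print_lcs(b, X, len(X), len(Y))        # prints BCBA
print()
```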
Optimal binary search trees
• Given a sequence K = k1, k2, ..., kn of n distinct keys, sorted (k1 < k2 < ... < kn).
• Want to build a binary search tree from the keys.
• For ki, have probability pi that a search is for ki.
• Want BST with minimum expected search cost.
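The quantity being minimized is the expected search cost E = sum over i of (depth(ki) + 1) * pi, with the root at depth 0. The small sketch below evaluates this cost for a candidate tree; the nested-tuple encoding and the probability values are illustrative assumptions, not from the slides.

```python
def expected_search_cost(tree, p, depth=0):
    """E[search cost] = sum over keys of (depth(ki) + 1) * pi.

    tree is (left, i, right) with i the 1-based key index, or None for an empty subtree.
    """
    if tree is None:
        return 0.0
    left, i, right = tree
    return ((depth + 1) * p[i]
            + expected_search_cost(left, p, depth + 1)
            + expected_search_cost(right, p, depth + 1))

# Illustrative probabilities for keys k1..k5 (p[0] is a placeholder)
p = [0.0, 0.25, 0.20, 0.05, 0.20, 0.30]
tree = ((None, 1, None), 2, ((None, 3, None), 4, (None, 5, None)))   # k2 at the root
print(expected_search_cost(tree, p))   # ≈ 2.15 for this particular tree
```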
Optimal binary search trees: Example
Optimal substructure
Recursive solution
Computing an optimal solution
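A minimal Python sketch of the usual O(n^3) dynamic program for this setting, using only the key probabilities pi (no dummy keys): with w(i, j) = pi + ... + pj, the recurrence is e[i, j] = 0 for j = i - 1 and e[i, j] = min over i <= r <= j of (e[i, r-1] + e[r+1, j] + w(i, j)) otherwise, with root[i, j] recording a minimizing r. All names and the sample probabilities are illustrative.

```python
def optimal_bst(p, n):
    """Minimum expected search cost over all BSTs on keys k1..kn.

    p[i] is the probability that a search is for key ki (p[0] unused).
    e[i][j]    = minimum expected cost of a BST on keys ki..kj
    w[i][j]    = p[i] + ... + p[j], the probability mass of those keys
    root[i][j] = index of a root achieving e[i][j]
    """
    e = [[0.0] * (n + 2) for _ in range(n + 2)]
    w = [[0.0] * (n + 2) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):                 # subproblem size j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j]
            e[i][j] = float("inf")
            for r in range(i, j + 1):              # try every key kr as the root
                cost = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if cost < e[i][j]:
                    e[i][j] = cost
                    root[i][j] = r
    return e, root

p = [0.0, 0.25, 0.20, 0.05, 0.20, 0.30]
e, root = optimal_bst(p, 5)
print(e[1][5], root[1][5])       # minimum expected cost and an optimal root index
```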
Construct an optimal solution
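Continuing the sketch above, an optimal tree can then be read off the root table recursively (again illustrative, not the slides' procedure):

```python
def build_tree(root, i, j):
    """Return an optimal BST on keys ki..kj as nested tuples (left, r, right)."""
    if i > j:
        return None
    r = root[i][j]
    return (build_tree(root, i, r - 1), r, build_tree(root, r + 1, j))

print(build_tree(root, 1, 5))    # nested-tuple form of an optimal tree for p above
```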