Elements of Dynamic Programming
Dr. Raghavendra Mishra
Assistant Professor, VIT Bhopal University
Introduction
• Dynamic programming is an approach that solves a complex problem by dividing it into smaller subproblems and then using their solutions to build up the final solution.
• For dynamic programming to apply, a problem must have the two main components (elements) of DP:
• Optimal substructure
• Overlapping subproblems
Optimal substructure
• Show that a solution to a problem consists of making a choice, which leaves one or
more subproblems to solve.
• Suppose that you are given the choice that leads to an optimal solution.
• Given this choice, determine which subproblems arise and how to characterize the
resulting space of subproblems.
• Show that the solutions to the subproblems used within the optimal solution must themselves be optimal, usually via a cut-and-paste argument.
• Ensure that a wide enough range of choices and subproblems is considered.
Optimal substructure
• Optimal substructure varies across problem domains:
• 1. How many subproblems are used in an optimal solution.
• 2. How many choices there are in determining which subproblem(s) to use.
• Informally, running time depends on (# of subproblems overall) × (# of
choices).
• How many subproblems and choices arise in the examples we consider?
• Dynamic programming uses optimal substructure bottom up.
• First find optimal solutions to subproblems.
• Then choose which of them to use in an optimal solution to the problem.
Optimal substructure
• Does optimal substructure apply to all optimization problems? No.
• It applies to finding a shortest path, but NOT a longest simple path, in an unweighted directed graph.
• Why?
• Shortest path has independent subproblems.
• Solution to one subproblem does not affect solution to another subproblem of the same
problem.
• Subproblems are not independent in longest simple path.
• Solution to one subproblem affects the solutions to other subproblems.
• Example: matrix-chain multiplication (next).
Matrix-Chain Multiplication
Problem: given a sequence of matrices 〈A1, A2, …, An〉, compute the product:
A1 ⋅ A2 ⋅⋅⋅ An

• Matrix compatibility:
• C = A ⋅ B: colA = rowB, rowC = rowA, colC = colB
• C = A1 ⋅ A2 ⋅⋅⋅ Ai ⋅ Ai+1 ⋅⋅⋅ An: coli = rowi+1, rowC = rowA1, colC = colAn
Matrix-Chain Multiplication
• In what order should we multiply the matrices?
A1 ⋅ A2 ⋅⋅⋅ An
Parenthesize the product to get the order in which matrices are multiplied

• E.g.: A1 ⋅ A2 ⋅ A3 = ((A1 ⋅ A2) ⋅ A3) = (A1 ⋅ (A2 ⋅ A3))

• Which one of these orderings should we choose?


• The order in which we multiply the matrices has a significant impact on the cost of
evaluating the product
Ex-1
A1 ⋅ A2 ⋅ A3

• A1: 10 x 100
• A2: 100 x 5
• A3: 5 x 50
1. ((A1 ⋅ A2) ⋅ A3): A1 ⋅ A2 = 10 x 100 x 5 = 5,000 (10 x 5)
((A1 ⋅ A2) ⋅ A3) = 10 x 5 x 50 = 2,500
Total: 7,500 scalar multiplications
2. (A1 ⋅ (A2 ⋅ A3)): A2 ⋅ A3 = 100 x 5 x 50 = 25,000 (100 x 50)
(A1 ⋅ (A2 ⋅ A3)) = 10 x 100 x 50 = 50,000
Total: 75,000 scalar multiplications
one order of magnitude difference!!
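
A quick way to sanity-check these counts is to compute the cost of each parenthesization directly. The short Python sketch below (the helper name mult_cost is illustrative, not from the slides) reproduces the 7,500 vs. 75,000 figures.

```python
# Multiplying a (p x q) matrix by a (q x r) matrix costs p*q*r scalar multiplications.
def mult_cost(p, q, r):
    return p * q * r

# Dimensions from Ex-1: A1 is 10x100, A2 is 100x5, A3 is 5x50.
order1 = mult_cost(10, 100, 5) + mult_cost(10, 5, 50)    # ((A1.A2).A3): 5,000 + 2,500
order2 = mult_cost(100, 5, 50) + mult_cost(10, 100, 50)  # (A1.(A2.A3)): 25,000 + 50,000

print(order1, order2)  # 7500 75000
```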
Overlapping Subproblems
• The space of subproblems must be “small”.
• The total number of distinct subproblems is a polynomial in the
input size.
• A naive recursive algorithm may take exponential time because it solves the same subproblems repeatedly.
• If divide-and-conquer is applicable, then each problem solved will be
brand new.
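
To see the overlap concretely, here is a minimal sketch assuming the standard recursive formulation of the matrix-chain cost, where matrix Ak has dimensions p[k-1] × p[k] (this code is not from the slides): the same (i, j) subproblems are recomputed over and over, which is what makes plain recursion exponential.

```python
def rec_matrix_chain(p, i, j):
    """Minimum scalar multiplications for Ai..Aj, where Ak has dimensions p[k-1] x p[k].
    Plain recursion: the same (i, j) subproblems are solved repeatedly."""
    if i == j:
        return 0
    best = float("inf")
    for k in range(i, j):  # j - i choices of split point
        cost = (rec_matrix_chain(p, i, k)
                + rec_matrix_chain(p, k + 1, j)
                + p[i - 1] * p[k] * p[j])
        best = min(best, cost)
    return best

print(rec_matrix_chain([10, 100, 5, 50], 1, 3))  # 7500, the cheaper order from Ex-1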
Memoization
• A top-down approach with the efficiency of the usual bottom-up dynamic programming approach
• Maintain a table with an entry for the solution to each subproblem
• "Memoize" the inefficient recursive algorithm:
• When a subproblem is first encountered, its solution is computed and stored in the table
• Subsequent "calls" to that subproblem simply look up the stored value

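A minimal memoized version of the recursion shown earlier (a sketch under the same assumptions; the table here is a plain dictionary keyed by (i, j)): each entry is computed on the first call and simply looked up afterwards.

```python
def memoized_matrix_chain(p, i, j, table=None):
    """Top-down matrix-chain cost with a table of already-solved subproblems."""
    if table is None:
        table = {}
    if (i, j) in table:          # subsequent "calls" just look the value up
        return table[(i, j)]
    if i == j:
        best = 0
    else:
        best = min(memoized_matrix_chain(p, i, k, table)
                   + memoized_matrix_chain(p, k + 1, j, table)
                   + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))
    table[(i, j)] = best         # computed and stored on first encounter
    return best

print(memoized_matrix_chain([10, 100, 5, 50], 1, 3))  # 7500
```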

Elements of Dynamic Programming
• Optimal Substructure
• An optimal solution to a problem contains within it optimal solutions to subproblems
• The optimal solution to the entire problem is built in a bottom-up manner from optimal solutions to subproblems

• Overlapping Subproblems
• If a recursive algorithm revisits the same subproblems over and over ⇒ the problem
has overlapping subproblems
Parameters of Optimal Substructure
• How many subproblems are used in an optimal solution for the original problem?
• Matrix multiplication: two subproblems (the subproducts Ai..k and Ak+1..j)
• How many choices do we have in determining which subproblem(s) to use in an optimal solution?
• Matrix multiplication: j − i choices for k (where to split the product)
Parameters of Optimal Substructure
• Intuitively, the running time of a dynamic programming algorithm depends on two
factors:
• Number of subproblems overall
• How many choices we look at for each subproblem

• Matrix multiplication:
• Θ(n²) subproblems (1 ≤ i ≤ j ≤ n)
• At most n − 1 choices for each subproblem
• ⇒ Θ(n³) overall

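Putting the two parameters together, here is a bottom-up sketch of the standard matrix-chain recurrence (assumed, not quoted from the slides): it fills the Θ(n²) table entries and tries at most n − 1 split points k for each, which is where the Θ(n³) bound comes from.

```python
def matrix_chain_order(p):
    """Bottom-up DP: m[i][j] = minimum cost of computing Ai..Aj, with Ak of size p[k-1] x p[k]."""
    n = len(p) - 1
    # Theta(n^2) subproblems: one table entry for every pair 1 <= i <= j <= n.
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                 # chain length of the subproduct
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))    # at most n - 1 choices per entry
    return m[1][n]

print(matrix_chain_order([10, 100, 5, 50]))  # 7500
```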