Dynamic Programming



Dr. Raghavendra Mishra


Introduction
• Dynamic Programming is mainly an optimization over plain recursion.
• When a recursive solution makes repeated calls for the same inputs, we can
optimize it using Dynamic Programming.
• The idea is simply to store the results of subproblems so that we do not
have to re-compute them when they are needed later.
1. Optimal Substructure
2. Overlapping subproblems
• DP, like the divide-and-conquer method, solves problems by combining the
solutions to subproblems.
• Dynamic programming is a very powerful, general tool for solving
optimization problems.
• Once understood it is relatively easy to apply, but many people have trouble
understanding it.
Greedy Algorithms
• Greedy algorithms focus on making the best local choice at each decision
point.
• For example, a natural way to compute a shortest path from x to y might be
to walk out of x, repeatedly following the cheapest edge until we get to y.
WRONG! The cheapest outgoing edge may lead into a region from which every
remaining path to y is expensive.
• In the absence of a correctness proof, greedy algorithms are very likely to fail.
Problem: 1
Dynamic Programming: Example
• Consider the problem of finding a shortest path between a pair of vertices in
an acyclic graph.
• An edge connecting node i to node j has cost c(i,j).
• The graph contains n nodes numbered 0, 1, …, n-1, and has an edge from
node i to node j only if i < j. Node 0 is the source and node n-1 is the
destination.
• Let f(x) be the cost of the shortest path from node 0 to node x.
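As a sketch of this idea: the definition of f gives the recurrence f(x) = min over edges (i, x) of f(i) + c(i, x), with f(0) = 0, and since edges only go from lower to higher-numbered nodes we can fill the table in increasing node order. The example graph and its costs below are hypothetical:

```python
def shortest_path_dag(n, cost):
    # f[x] = cost of the shortest path from node 0 to node x.
    # cost is a dict {(i, j): edge cost}, with edges only from i to j where i < j.
    INF = float("inf")
    f = [INF] * n
    f[0] = 0  # reaching the source costs nothing
    for x in range(1, n):
        # recurrence: f(x) = min over edges (i, x) of f(i) + c(i, x)
        for i in range(x):
            if (i, x) in cost and f[i] + cost[(i, x)] < f[x]:
                f[x] = f[i] + cost[(i, x)]
    return f[n - 1]

# Hypothetical example: edges 0->1 (2), 0->2 (5), 1->2 (1), 2->3 (3)
edges = {(0, 1): 2, (0, 2): 5, (1, 2): 1, (2, 3): 3}
print(shortest_path_dag(4, edges))  # 6, via 0 -> 1 -> 2 -> 3
```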
Dynamic programming
• Dynamic Programming is an algorithm design technique for optimization problems: often
minimizing or maximizing.
• Like divide and conquer, DP solves problems by combining solutions to subproblems.
• Unlike divide and conquer, the subproblems are not independent:
• subproblems may share subsubproblems;
• however, the solution to one subproblem must not affect the solutions to other subproblems of the same problem.
(More on this later.)
• DP reduces computation by:
• solving subproblems in a bottom-up fashion;
• storing the solution to a subproblem the first time it is solved;
• looking up the stored solution when the subproblem is encountered again.
• Key: determine the structure of optimal solutions
Elements of Dynamic Programming
• Optimal substructure
• Overlapping subproblems
Optimal Substructure
• Show that a solution to a problem consists of making a choice, which leaves
one or more subproblems to solve.
• Suppose that you are given this last choice that leads to an optimal solution.
• Given this choice, determine which subproblems arise and how to
characterize the resulting space of subproblems.
• Show that the solutions to the subproblems used within the optimal solution
must themselves be optimal, usually via a cut-and-paste argument.
• Need to ensure that a wide enough range of choices and subproblems are
considered.
Optimal Substructure
• Optimal substructure varies across problem domains:
• 1. How many subproblems are used in an optimal solution.
• 2. How many choices in determining which subproblem(s) to use.
• Informally, running time depends on (# of subproblems overall) × (# of
choices).
• How many subproblems and choices do the examples considered contain?
• Dynamic programming uses optimal substructure bottom up.
• First find optimal solutions to subproblems.
• Then choose which to use in optimal solution to the problem.
Optimal Substructure
• Does optimal substructure apply to all optimization problems? No.
• Applies to determining the shortest path but NOT the longest simple path of an unweighted directed
graph.
• Why?
• Shortest path has independent subproblems.
• Solution to one subproblem does not affect solution to another subproblem of the same problem.
• Subproblems are not independent in longest simple path.
• Solution to one subproblem affects the solutions to other subproblems.
• Example: concatenating the longest simple path from u to w with the longest
simple path from w to v need not yield a simple path from u to v, because the
two subpaths may share intermediate vertices.
Overlapping Subproblems

• The space of subproblems must be “small”:
• the total number of distinct subproblems is polynomial in the input size.
• A plain recursive algorithm can be exponential because it solves the same
subproblems repeatedly.
• If divide-and-conquer is applicable instead, then each subproblem solved is
brand new.
Problem: 2
Let’s consider the calculation of Fibonacci numbers:
• F(n) = F(n-2) + F(n-1)
• with seed values F(1) = 1, F(2) = 1
• or F(0) = 0, F(1) = 1

• What would the series look like?

• 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
Recursive Algorithm:

Fib(n)
{
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    return Fib(n-1) + Fib(n-2);
}
• This recursive algorithm has a serious issue!
Recursion tree
What’s the problem?
Memoization:

Fib(n)
{
    if (n == 0)
        return M[0];
    if (n == 1)
        return M[1];
    if (M[n-2] is not already calculated)
        M[n-2] = Fib(n-2);
    if (M[n-1] is not already calculated)
        M[n-1] = Fib(n-1);
    // Store the nth Fibonacci number in memory and reuse previous results
    M[n] = M[n-1] + M[n-2];
    return M[n];
}
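The memoized pseudocode above can be turned into a runnable sketch; the dictionary-based memo here is one possible implementation choice:

```python
def fib(n, memo=None):
    # Top-down memoization: store each F(k) the first time it is computed,
    # so every subproblem is solved exactly once (O(n) instead of exponential).
    if memo is None:
        memo = {0: 0, 1: 1}  # seed values F(0) = 0, F(1) = 1
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(10))  # 55
```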
Dynamic programming

• Main approach: recursive, but hold the answers to subproblems in a table,
so they can be reused without recomputing.
• Can be formulated both via recursion and by saving results in a
table (memoization). Typically, we first formulate the recursive
solution and then turn it into recursion plus dynamic
programming via memoization, or into a bottom-up table.
• “Programming” here means tabular (as in filling a table), not writing
program code.
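As a sketch of the bottom-up (tabulation) alternative, again using Fibonacci:

```python
def fib_bottom_up(n):
    # Bottom-up: fill the table from the base cases upward; no recursion needed.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1  # base cases F(0) = 0, F(1) = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(12))  # 144
```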
Problem-3
The 0-1 knapsack problem
• A thief breaks into a house, carrying a knapsack...
• He can carry up to 25 pounds of loot
• He has to choose which of N items to steal
• Each item has some weight and some value
• “0-1” because each item is stolen (1) or not stolen (0)
• He has to select the items to steal in order to maximize the value of his
loot, but cannot exceed 25 pounds
The 0-1 knapsack problem
• A greedy algorithm does not find an optimal solution
• A dynamic programming algorithm works well.
The 0-1 knapsack problem
• This is similar to, but not identical to, the coins problem
• In the coins problem, we had to make an exact amount of change
• In the 0-1 knapsack problem, we can’t exceed the weight limit, but the
optimal solution may be less than the weight limit
• The dynamic programming solution is similar to that of the coins
problem
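A minimal sketch of that table-based solution, assuming hypothetical item weights and values and the 25-pound limit from the problem statement (dp[w] holds the best value achievable within total weight w):

```python
def knapsack_01(weights, values, capacity):
    # dp[w] = best value achievable with total weight <= w.
    # Items go in the outer loop and weights run downward in the inner loop,
    # so each item is taken at most once (the "0-1" constraint).
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

# Hypothetical loot: four items as (weight, value) pairs, 25-pound limit.
weights = [10, 8, 12, 5]
values = [60, 45, 80, 30]
print(knapsack_01(weights, values, 25))  # 155, taking the 12-, 8-, and 5-pound items
```

Note that the optimal load here weighs exactly 25 pounds, but in general the best solution may come in under the limit.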
Steps for Solving DP Problems
• Define sub-problems
• Write down the recurrence that relates sub-problems
• Recognize and solve the base cases

• Each step is very important.


Comments

• Dynamic programming relies on working “from the bottom up” and
saving the results of solving simpler problems.
• These solutions to simpler problems are then used to compute the solutions to more
complex problems.
• Dynamic programming solutions can often be quite complex and tricky.
Comments

• Dynamic programming is used for optimization problems, especially
ones that would otherwise take exponential time.
• Only problems that satisfy the principle of optimality are suitable for dynamic
programming solutions.
• Since exponential time is unacceptable for all but the smallest problems,
dynamic programming is sometimes essential.
Steps in Dynamic Programming
1. Characterize structure of an optimal solution.
2. Define value of optimal solution recursively.
3. Compute optimal solution values either top-down with caching or
bottom-up in a table.
4. Construct an optimal solution from computed values.
We’ll study these with the help of examples.
