
Dynamic Programming



Plan

• Paradigm
• 0/1 Knapsack
• Matrix chain multiplication
• Bellman-Ford algorithm

2
Dynamic Programming - DP

• Powerful algorithm design technique

• A large class of seemingly exponential problems has a polynomial-time solution, often only via DP
• Particularly suited to optimization problems (min/max), e.g., shortest paths
• DP ~ "careful brute force"

3
Plan

• Paradigm
• Fibonacci
• Shortest path in multistage graph
• 0/1 Knapsack
• Matrix chain multiplication
• Bellman-Ford algorithm

4
Fibonacci number

• F(1) = F(2) = 1
• F(n) = F(n-1) + F(n-2)

• Goal: compute F(n)

5
Naive Algorithm

• Follow the recursive definition directly

int F(int n) {
    if (n <= 2) return 1;                /* base cases: F(1) = F(2) = 1 */
    else return F(n - 1) + F(n - 2);
}

• T(n) = T(n-1) + T(n-2) + O(1)
• T(n) ≥ 2T(n-2) + O(1), so T(n) = Ω(2^(n/2)): exponential, just as F(n) itself grows like φ^n

6
Memoized DP algorithm

• Pseudocode

memo = {}
Fibo(n) {
    if (n in memo) return memo[n]
    if (n <= 2) f = 1
    else f = Fibo(n-1) + Fibo(n-2)
    memo[n] = f
    return f
}

• Only n distinct subproblems, each taking O(1) time outside its recursive calls ⇒ T(n) = O(n) (a runnable C version follows)
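A minimal runnable C version of the sketch above; the fixed table size and the use of 0 as the "not yet computed" marker are implementation choices, not part of the slides:

#include <stdio.h>

#define MAXN 92                     /* F(92) still fits in a signed 64-bit int */
static long long memo[MAXN + 1];    /* zero-initialized: 0 means "not computed" */

long long fibo(int n) {
    if (n <= 2) return 1;
    if (memo[n] != 0) return memo[n];      /* re-use a stored subproblem */
    memo[n] = fibo(n - 1) + fibo(n - 2);   /* compute once, remember */
    return memo[n];
}

int main(void) {
    printf("F(50) = %lld\n", fibo(50));    /* prints F(50) = 12586269025 */
    return 0;
}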

7
DP ~ recursion + memoization

• Memoize (remember) and re-use solutions to subproblems that help solve the problem

• Time = # of subproblems × time per subproblem

8
Bottom-up DP Algorithm

• Bottom-up solution: the tabulation method

int F(int n) {
    int fib[n + 1], f;                  /* entries 1..n are used */
    for (int i = 1; i <= n; i++) {
        if (i <= 2) f = 1;              /* base cases */
        else f = fib[i - 1] + fib[i - 2];
        fib[i] = f;
    }
    return fib[n];
}

• T(n) = O(n)

9
Bottom-up DP Algorithm

• Performs exactly the same computation as the memoized version

• Equivalent to processing subproblems in topologically sorted order of the subproblem dependency graph, a directed acyclic graph (DAG)

10
Plan

• Paradigm
• Fibonacci
• Shortest path in multistage graph
• 0/1 Knapsack
• Matrix chain multiplication
• Bellman-Ford algorithm

11
Shortest path in multistage graph

• A directed graph whose vertices are partitioned into stages, such that edges only connect vertices in one stage to vertices in the next stage
• The first and last stages each contain exactly one vertex: the source and the sink of the graph
• Models resource allocation problems
• Problem: find the path from source to sink with the minimum cost (a DP sketch follows)
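
A minimal C sketch of the backward DP for this problem; the vertex numbering (edges only go from lower to higher numbers, as in a multistage graph), the adjacency-matrix representation, and all names are assumptions, since the slides give the graph only as a figure:

#define MAXV 12               /* illustrative bound on the number of vertices */
#define INF  1000000000       /* INF in c[][] marks "no edge" */

/* Backward DP over a multistage DAG.
   cost[i] = cheapest cost from vertex i to the sink n;
   next[i] = vertex chosen after i on that cheapest path. */
void multistage(int n, int c[][MAXV + 1], int cost[], int next[]) {
    cost[n] = 0;                              /* base case: the sink */
    for (int i = n - 1; i >= 1; i--) {        /* from the sink back to the source */
        cost[i] = INF;
        for (int j = i + 1; j <= n; j++)      /* edges only go to later stages */
            if (c[i][j] < INF && c[i][j] + cost[j] < cost[i]) {
                cost[i] = c[i][j] + cost[j];
                next[i] = j;                  /* remember the decision */
            }
    }
}

Following next[] from the source to the sink reads off an optimal path, such as the paths 1 - 2 - 7 - 10 - 12 and 1 - 3 - 6 - 10 - 12 traced on the later slides.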

12
Guessing

13
Multistage graph

14
Tables

15
Tables

16
Forward pass to read off the optimal paths

• 1 - 2 - 7 - 10 - 12
• 1 - 3 - 6 - 10 - 12

17
Summary of DP

• DP ~ "careful brute force"
• DP ~ guessing + recursion + memoization
• DP ~ shortest paths in some DAGs
• Memoize (remember) and re-use solutions to subproblems that help solve the problem

• Time = # of subproblems × time per subproblem

18
5 steps for DP

• Define subproblems
• Guess (part of the solution)
• Relate subproblem solutions
• Recurse and memoize, or build a DP table bottom-up
• Solve the original problem

19
Divide and Conquer vs. Dynamic Programming

20
Principle of optimality
• In an optimal sequence of decisions or choices, each subsequence must also be optimal
• For some problems, an optimal sequence can be found by making decisions one at a time and never making a mistake
  • True for greedy algorithms (except label correctors)
• For many problems, it is not possible to make stepwise decisions based only on local information such that the resulting sequence of decisions is optimal
  • One way to solve such problems is to enumerate all possible decision sequences and choose the best
  • Dynamic programming can drastically reduce the amount of computation by avoiding sequences that cannot be optimal, by the "principle of optimality"

21
Plan

• Paradigm
• 0/1 Knapsack
• Matrix chain multiplication
• Bellman-Ford algorithm

22
0/1 Knapsack problem

• There are n objects, each with weight wi and profit pi. The knapsack has capacity M.

maximize    Σ (1 ≤ i ≤ n) pi·xi
subject to  Σ (1 ≤ i ≤ n) wi·xi ≤ M
            xi ∈ {0, 1},  pi > 0,  wi > 0,  1 ≤ i ≤ n

23
0/1 Knapsack example

• p = {1, 2, 5, 6}
• w = {2, 3, 4, 5}

• M = 8; n = 4

• Let V[i, w] denote the maximum total profit achievable using only the first i items when the knapsack capacity is w
• How do we calculate V[i, w]?

24
Filling the table
 i  pi  wi |  w=0   1   2   3   4   5   6   7   8
 0   -   - |    0   0   0   0   0   0   0   0   0
 1   1   2 |    0   0   1   1   1   1   1   1   1
 2   2   3 |    0   0   1   2   2   3   3   3   3
 3   5   4 |    0   0   1   2   5   5   6   7   7
 4   6   5 |    0   0   1   2   5   6   6   7   8

• The table has n+1 rows and M+1 columns
• V[i, w] = max{ V[i-1, w], V[i-1, w-wi] + pi } when wi ≤ w; otherwise V[i, w] = V[i-1, w] (a runnable sketch follows)
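
A minimal bottom-up C sketch of this recurrence, filled with the example data from the previous slide (array names are illustrative):

#include <stdio.h>

#define N 4   /* number of items   */
#define M 8   /* knapsack capacity */

int main(void) {
    int p[N + 1] = {0, 1, 2, 5, 6};    /* profits, 1-indexed */
    int w[N + 1] = {0, 2, 3, 4, 5};    /* weights, 1-indexed */
    int V[N + 1][M + 1] = {{0}};       /* row 0 and column 0 stay 0 */

    for (int i = 1; i <= N; i++)
        for (int c = 0; c <= M; c++) {
            V[i][c] = V[i - 1][c];                       /* skip item i */
            if (w[i] <= c && V[i - 1][c - w[i]] + p[i] > V[i][c])
                V[i][c] = V[i - 1][c - w[i]] + p[i];     /* take item i */
        }

    printf("V[%d][%d] = %d\n", N, M, V[N][M]);   /* prints V[4][8] = 8 */
    return 0;
}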

25
Trace result
 i  pi  wi |  w=0   1   2   3   4   5   6   7   8
 0   -   - |    0   0   0   0   0   0   0   0   0
 1   1   2 |    0   0   1   1   1   1   1   1   1
 2   2   3 |    0   0   1   2   2   3   3   3   3
 3   5   4 |    0   0   1   2   5   5   6   7   7
 4   6   5 |    0   0   1   2   5   6   6   7   8

• Trace {x1, x2, x3, x4} backwards from V[4, 8] = 8:
  V[4, 8] ≠ V[3, 8] ⇒ x4 = 1: column 8 - 5 = 3, remaining profit 8 - 6 = 2
  V[3, 3] = V[2, 3] ⇒ x3 = 0
  V[2, 3] ≠ V[1, 3] ⇒ x2 = 1: column 3 - 3 = 0, remaining profit 2 - 2 = 0
  V[1, 0] = V[0, 0] ⇒ x1 = 0

• ⇒ x = {0, 1, 0, 1}; total profit: 8

26
Plan

• Paradigm
• 0/1 Knapsack
• Matrix chain multiplication

27
Matrix chain multiplication

• Given dimensions d0, d1, …, dn corresponding to the matrix sequence A1, A2, …, An, where Ai is a d(i-1) × d(i) matrix
• Determine the "multiplication sequence" that minimizes the number of scalar multiplications in computing A1A2…An; that is, determine how to parenthesize the multiplication:
  A1A2A3A4 = (A1A2)(A3A4)
           = A1(A2(A3A4)) = A1((A2A3)A4)
           = ((A1A2)A3)A4 = (A1(A2A3))A4

• Exhaustive search: Ω(4^n / n^(3/2)) parenthesizations

• Any better solution?

28
Developing algorithm

• Given A1, A2, A3 of sizes 2×3, 3×4, 4×2, respectively
• Goal: build a recurrence for the problem

• Let C[i, j] be the minimum cost of multiplying the chain from matrix i through matrix j
• Find C[1, 3]

• Starting point: how many ways are there to multiply this matrix chain, and how many scalar multiplications does each way take?

29
Developing algorithm

30
Formula
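
• C[i, i] = 0 (a single matrix needs no multiplication)
• C[i, j] = min over i ≤ k < j of { C[i, k] + C[k+1, j] + d(i-1)·d(k)·d(j) }
  where k is the position of the outermost split: (Ai…Ak)(Ak+1…Aj)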

31
Example

A1  ×  A2  ×  A3  ×  A4
3×2    2×4    4×2    2×5
d0×d1  d1×d2  d2×d3  d3×d4

• Find the minimum cost of A1×A2×A3×A4
• That is, find C[1, 4]

32
Formula
Example
 C | 1   2   3   4        k | 1   2   3   4
 1 | 0  24                1 |     1
 2 |     0  16            2 |         2
 3 |         0  40        3 |             3
 4 |             0        4 |

33
Formula
Example
 C | 1   2   3   4        k | 1   2   3   4
 1 | 0  24  28  58        1 |     1   1   3
 2 |     0  16  36        2 |         2   3
 3 |         0  40        3 |             3
 4 |             0        4 |

34
Result

• (A1(A2A3))A4

• Number of scalar multiplications: 58 (a C sketch of the table fill follows)
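
A compact C sketch of this table-filling order, with the example's dimensions hard-coded (names are illustrative):

#include <stdio.h>
#include <limits.h>

#define N 4   /* number of matrices in the example */

int main(void) {
    int d[N + 1] = {3, 2, 4, 2, 5};   /* A_i is d[i-1] x d[i] */
    int C[N + 1][N + 1] = {{0}};      /* C[i][j]: min scalar multiplications */
    int K[N + 1][N + 1] = {{0}};      /* K[i][j]: best outermost split point */

    for (int len = 2; len <= N; len++)            /* grow chain length */
        for (int i = 1; i + len - 1 <= N; i++) {
            int j = i + len - 1;
            C[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {         /* try every split */
                int cost = C[i][k] + C[k + 1][j] + d[i - 1] * d[k] * d[j];
                if (cost < C[i][j]) { C[i][j] = cost; K[i][j] = k; }
            }
        }

    /* prints C[1][4] = 58, best split k = 3 */
    printf("C[1][%d] = %d, best split k = %d\n", N, C[1][N], K[1][N]);
    return 0;
}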

35
Plan

• Paradigm
• 0/1 Knapsack
• Matrix chain multiplication
• Bellman-Ford algorithm

36
Single-source shortest path problem

• Given a graph and a source vertex start, find the shortest paths from start to all vertices. The graph may contain negative-weight edges.
• Algorithms:
  • Dijkstra: greedy approach, but may not work on graphs with negative-weight edges
  • Bellman-Ford: dynamic programming approach; works on graphs with negative-weight edges

37
Building DP algorithm
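
With dist_k(v) the shortest-path estimate for v after k relaxation passes, a recurrence consistent with the tables on the next slides is:

dist_0(src) = 0;  dist_0(v) = ∞ for every other vertex v
dist_k(v) = min( dist_(k-1)(v),  min over edges (u, v) of dist_(k-1)(u) + w(u, v) )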

38
Algorithm
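
A minimal C sketch, assuming the graph arrives as an edge list (the struct layout and names are illustrative). It performs n - 1 relaxation passes and uses one extra pass to detect a negative-weight cycle, matching the tables in the examples that follow:

#define INF 1000000000

typedef struct { int u, v, w; } Edge;    /* directed edge u -> v with weight w */

/* Vertices are numbered 1..n; dist[] must hold n + 1 ints.
   Returns 1 on success, 0 if a negative-weight cycle is reachable from src. */
int bellman_ford(int n, int m, const Edge e[], int src, int dist[]) {
    for (int v = 1; v <= n; v++) dist[v] = INF;
    dist[src] = 0;
    for (int pass = 1; pass <= n - 1; pass++)      /* n - 1 relaxation passes */
        for (int j = 0; j < m; j++)
            if (dist[e[j].u] < INF && dist[e[j].u] + e[j].w < dist[e[j].v])
                dist[e[j].v] = dist[e[j].u] + e[j].w;
    for (int j = 0; j < m; j++)                    /* one extra pass: */
        if (dist[e[j].u] < INF && dist[e[j].u] + e[j].w < dist[e[j].v])
            return 0;                              /* still improving => negative cycle */
    return 1;
}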

39
Example 1

• Find the shortest paths from vertex 1 to all other vertices

E = {(1, 2), (1, 3), (2, 3), (2, 4), (2, 5),
     (3, 2), (3, 4), (3, 5), (5, 4)}
n = 5; m = 9

 pass | 1   2   3   4   5
   0  | 0   ∞   ∞   ∞   ∞
   1  | 0   3   2   2   7
   2  | 0   3   2   1   6
   3  | 0   3   2   1   6
   4  | 0   3   2   1   6
   5  | 0   3   2   1   6

No negative-weight cycle found

40
Example 2

• Find the shortest paths from vertex 1 to all other vertices

E = {(1, 2), (2, 3), (3, 4), (3, 6), (4, 2), (4, 5)}
n = 6; m = 6

 pass | 1    2    3    4    5    6
   0  | 0    ∞    ∞    ∞    ∞    ∞
   1  | 0    4    9   13   16   14
   2  | 0    2    7   11   14   12
   3  | 0    0    5    9   12   10
   4  | 0   -2    3    7   10    8
   5  | 0   -4    1    5    8    6
   6  | 0   -6   -1    3    6    4

The estimates still improve after pass n-1, so there exists a negative-weight cycle

41
Thanks for your attention!

42
