Dynamic Programming: Introduction (AD2)

The document discusses algorithm design techniques, specifically focusing on dynamic programming and its application in solving problems like Fibonacci numbers and combinations. It explains how dynamic programming optimizes recursive algorithms by storing previously computed results to avoid redundant calculations. The document also highlights the importance of optimal substructure and overlapping subproblems in applying dynamic programming effectively.


Presented by
Dr. Rajesh Purkait
Asst. Professor, Dept. of CSE, ITER
(S'O'A Deemed To Be University)
E-mail: [email protected]
 Dynamic programming is an algorithm design technique (like divide and conquer)
 Divide and conquer:
◦ Partition the problem into independent subproblems
◦ Solve the subproblems recursively
◦ Combine the solutions to solve the original problem

Rajesh Purkait 10/18/2021 2


• Fibonacci numbers:
– F0 = 0
– F1 = 1
– Fn = Fn-1 + Fn-2 for n > 1
• Sequence is 0, 1, 1, 2, 3, 5, 8, 13, …
• Obvious recursive algorithm:
• Fib(n):
– if n = 0 or 1 then return n
– else return (Fib(n-1) + Fib(n-2))
Fib(5)

Fib(4) Fib(3)

Fib(3) Fib(2) Fib(2) Fib(1)

Fib(2) Fib(1) Fib(1) Fib(0) Fib(1) Fib(0)

Fib(1) Fib(0)
• If all leaves had the same depth, then there would be about 2^n recursive calls.
• But this is over-counting.
• However, with more careful counting it can be shown that the number of calls is Θ((1.6)^n).
• Exponential!
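As a concrete check, the naive recursion above can be written out in Java (the class and method names here are just for illustration):

```java
// Naive recursive Fibonacci, matching the pseudocode above.
// Exponential time: the two recursive calls recompute the same
// subproblems over and over (e.g. Fib(2) three times for Fib(5)).
public class NaiveFib {
    static long fib(int n) {
        if (n == 0 || n == 1) return n;   // base cases F0 = 0, F1 = 1
        return fib(n - 1) + fib(n - 2);   // branching -> exponential call tree
    }

    public static void main(String[] args) {
        System.out.println(fib(10));  // 55
    }
}
```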
• Wasteful approach - repeats work unnecessarily
– Fib(2) is computed three times
• Instead, compute Fib(2) once, store the result in a table, and access it when needed
• Initialize: F[0] := 0; F[1] := 1; F[2..n] := NIL; the answer is Fib(n)
• Fib(n):
– if n = 0 or 1 then return F[n]
– if F[n-1] = NIL then F[n-1] := Fib(n-1)
– if F[n-2] = NIL then F[n-2] := Fib(n-2)
– return (F[n-1] + F[n-2])

• computes each F[i] only once


Table F, with the order in which Fib(5) fills it in:

F[0] = 0
F[1] = 1
F[2] = NIL → 1   Fib(2) returns 0 + 1 = 1
F[3] = NIL → 2   Fib(3) fills in F[2] with 1, returns 1 + 1 = 2
F[4] = NIL → 3   Fib(4) fills in F[3] with 2, returns 2 + 1 = 3
F[5] = NIL → 5   Fib(5) fills in F[4] with 3, returns 3 + 2 = 5
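The memoized pseudocode can be sketched in Java as follows; since Java arrays have no NIL, the sketch assumes -1 as the "not yet computed" marker (safe because Fibonacci numbers are non-negative):

```java
import java.util.Arrays;

// Top-down (memoized) Fibonacci, mirroring the pseudocode above.
// -1 plays the role of NIL in the table F.
public class MemoFib {
    static long[] F;

    static long fib(int n) {
        if (n == 0 || n == 1) return F[n];
        if (F[n - 1] == -1) F[n - 1] = fib(n - 1);  // compute once, reuse afterward
        if (F[n - 2] == -1) F[n - 2] = fib(n - 2);
        return F[n - 1] + F[n - 2];
    }

    static long fibonacci(int n) {
        F = new long[Math.max(n + 1, 2)];
        Arrays.fill(F, -1);          // F[2..n] := NIL
        F[0] = 0;
        F[1] = 1;
        return fib(n);
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(5));  // 5
    }
}
```

Each F[i] is computed at most once, so the running time drops from exponential to linear.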


• Recursion adds overhead
– extra time for function calls
– extra space to store information on the runtime stack about each currently active function call
• Avoid the recursion overhead by filling in the table entries bottom up, instead of top down.
• Figure out which subproblems rely on which other subproblems
• Example:

F0  F1  F2  F3  …  Fn-2  Fn-1  Fn   (each Fi depends on the two entries before it)
• Then figure out an order for computing the
subproblems that respects the dependencies:
– when you are solving a subproblem, you
have already solved all the subproblems on
which it depends
• Example: Just solve them in the order
F0, F1, F2, F3,…
• Fib(n):
– F[0] := 0; F[1] := 1;
– for i := 2 to n do
• F[i] := F[i-1] + F[i-2]
– return F[n]
• Can perform application-specific
optimizations
– e.g., save space by only keeping last two
numbers computed
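The bottom-up loop, with the space optimization just mentioned (keeping only the last two numbers), can be sketched as:

```java
// Bottom-up Fibonacci: solve subproblems in dependency order
// F0, F1, F2, ..., keeping only the last two values -> O(1) extra space.
public class BottomUpFib {
    static long fib(int n) {
        if (n < 2) return n;
        long prev = 0, curr = 1;          // F[0] and F[1]
        for (int i = 2; i <= n; i++) {
            long next = prev + curr;      // F[i] = F[i-1] + F[i-2]
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(fib(50));  // 12586269025
    }
}
```

No recursion, no table of size n: one pass in dependency order suffices because each subproblem needs only its two predecessors.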
 (x + y)^2 = x^2 + 2xy + y^2, coefficients are 1, 2, 1
 (x + y)^3 = x^3 + 3x^2y + 3xy^2 + y^3, coefficients are 1, 3, 3, 1
 (x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4, coefficients are 1, 4, 6, 4, 1
 (x + y)^5 = x^5 + 5x^4y + 10x^3y^2 + 10x^2y^3 + 5xy^4 + y^5, coefficients are 1, 5, 10, 10, 5, 1
 The n+1 coefficients of (x + y)^n can be computed according to the formula C(n, i) = n! / (i! * (n - i)!) for each of i = 0..n
 The repeated computation of all the factorials gets to be expensive
 We can use dynamic programming to save the intermediate results as we go
 Applicable when subproblems are not independent
◦ Subproblems share subsubproblems
E.g.: Combinations:

C(n, k) = C(n-1, k) + C(n-1, k-1)
with base cases C(n, 1) = n and C(n, n) = 1

◦ A divide and conquer approach would repeatedly solve the common subproblems
◦ Dynamic programming solves every subproblem just once and stores the answer in a table
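A memoized version of this recurrence might look like the sketch below (the class name and the use of 0 as the "not yet computed" marker are assumptions; 0 is safe here because every binomial coefficient in range is positive):

```java
// Memoized C(n, k) via the recurrence C(n,k) = C(n-1,k) + C(n-1,k-1).
// Each table entry is computed once and then reused, so the shared
// subproblems (e.g. Comb(3,2) below) are not recomputed.
public class Comb {
    static long[][] memo;

    static long comb(int n, int k) {
        if (k == 0 || k == n) return 1;   // trivial cases
        if (memo[n][k] == 0)
            memo[n][k] = comb(n - 1, k) + comb(n - 1, k - 1);
        return memo[n][k];
    }

    static long binomial(int n, int k) {
        memo = new long[n + 1][n + 1];    // all entries start at 0 = "unknown"
        return comb(n, k);
    }

    public static void main(String[] args) {
        System.out.println(binomial(6, 4));  // 15
    }
}
```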



Comb(6, 4)

= Comb(5, 3) + Comb(5, 4)

= Comb(4, 2) + Comb(4, 3) + Comb(4, 3) + Comb(4, 4)

= [Comb(3, 1) + Comb(3, 2)] + [Comb(3, 2) + Comb(3, 3)] + [Comb(3, 2) + Comb(3, 3)] + 1

= 3 + [Comb(2, 1) + Comb(2, 2)] + [Comb(2, 1) + Comb(2, 2)] + 1 + [Comb(2, 1) + Comb(2, 2)] + 1 + 1

= 3 + 2 + 1 + 2 + 1 + 1 + 2 + 1 + 1 + 1 = 15

Note how Comb(4, 3) is expanded twice and Comb(3, 2) three times.

C(n, k) = C(n-1, k) + C(n-1, k-1)

 n c(n,0) c(n,1) c(n,2) c(n,3) c(n,4) c(n,5) c(n,6)
 0 1
 1 1 1
 2 1 2 1
 3 1 3 3 1
 4 1 4 6 4 1
 5 1 5 10 10 5 1
 6 1 6 15 20 15 6 1
 Each row depends only on the preceding row
 Only linear space and quadratic time are needed
 This construction is known as Pascal’s Triangle

 public static int binom(int n, int m) {
     int[] b = new int[n + 1];
     b[0] = 1;
     for (int i = 1; i <= n; i++) {
         b[i] = 1;                     // C(i, i) = 1
         for (int j = i - 1; j > 0; j--) {
             b[j] += b[j - 1];         // C(i, j) = C(i-1, j) + C(i-1, j-1)
         }
     }
     return b[m];                      // b now holds row n of Pascal's Triangle
 }
 Source: Data Structures and Algorithms with Object-Oriented Design Patterns in Java by Bruno R. Preiss

• DP is typically applied to optimization problems.
• DP can be applied when a problem exhibits:
• Optimal substructure:
– An optimal solution to the problem contains within it optimal solutions to subproblems.
• Overlapping subproblems:
– A recursive algorithm revisits the same subproblems over and over again.
• DP can be applied when the solution of a problem includes solutions to subproblems
• We need to find a recursive formula for the solution
• We can then solve the subproblems, starting from the trivial cases, and save their solutions in memory
• In the end we get the solution to the whole problem
Rajesh Purkait 10/18/2021 22
