
Dynamic Programming (Unit III)

Dynamic Programming (DP) is a method for solving problems by breaking them into overlapping sub-problems and storing their solutions to avoid redundant calculations, a process known as memoization. It contrasts with Divide and Conquer by solving sub-problems that overlap rather than disjoint ones. DP can be implemented using either a Top-Down approach with recursion and memoization or a Bottom-Up approach that builds solutions iteratively, and it is applicable in optimization and combinatorial problems.


Dynamic Programming

1. Introduction
Consider a problem that can be broken down into smaller sub-problems, which in
turn can be broken down into still smaller ones. If some of these sub-problems
overlap, the problem can be solved using Dynamic Programming (DP).
In Dynamic Programming, we solve each sub-problem just once and then save its
answer in a table, thereby avoiding the work of re-computing the answer every
time the same sub-problem appears again. This process is known as memoization.
This approach differs from Divide and Conquer (D&C): D&C partitions a problem
into disjoint sub-problems, solves the sub-problems recursively, and then
combines their solutions to solve the original problem. When the sub-problems
overlap, a D&C algorithm does more work than necessary, repeatedly solving the
common sub-problems.
2. Methodology
Following is a code snippet which finds the nth Fibonacci number using pure recursion.
int fib(int n) {
    if (n < 2)
        return n;    /* base cases: fib(0) = 0, fib(1) = 1 */
    return fib(n - 1) + fib(n - 2);
}
The recursion tree generated by this code snippet for n=5 is shown in figure 1.1.
In figure 1.1 we can observe that recursive calls for fib(3) are made twice and
those for fib(2) are made three times. This shows that the D&C algorithm does
more work than necessary by repeatedly solving the common sub-problems.

Figure 1.1 Recursion tree generated for fib(5). Nodes colored orange and black indicate overlapping sub-problems.

In DP, the basic idea is to remember answers to the sub-problems that have
already been solved. The intuition is that we trade space for time: instead of
recomputing the same states over and over (which costs a lot of time but no
extra space), we use extra space to store the results of the sub-problems and
save time later.
The following code snippet finds the nth Fibonacci number using Dynamic Programming.
Here, we use an array fibresult[] where we initialize the base cases i.e. fibresult[0] to 0
and fibresult[1] to 1. We use these base cases to build the solution of fibresult[n]
iteratively by using the values that have already been saved/computed in the fibresult[]
array during these iterations.
void fib(int n) {
    fibresult[0] = 0;
    fibresult[1] = 1;
    for (int i = 2; i <= n; i++)
        fibresult[i] = fibresult[i - 1] + fibresult[i - 2];
}
The code snippet shown above uses a bottom-up approach to solve the Dynamic
Programming Problem. Figure 1.2 shows how the values of fibresult[] array are
computed during each iteration for fib(5).
Iteration 1: fibresult[2] = fibresult[1] + fibresult[0] = 1
Iteration 2: fibresult[3] = fibresult[2] + fibresult[1] = 2
Iteration 3: fibresult[4] = fibresult[3] + fibresult[2] = 3
Iteration 4: fibresult[5] = fibresult[4] + fibresult[3] = 5

Contents of fibresult[] after each iteration (blank = not yet computed):

index   Iter 1   Iter 2   Iter 3   Iter 4
  0       0        0        0        0
  1       1        1        1        1
  2       1        1        1        1
  3                2        2        2
  4                         3        3
  5                                  5

3. Dynamic Programming Schema


Every Dynamic Programming problem solution has a schema to be followed which
includes the following steps:
i. Show that the problem exhibits optimal substructure, i.e. an optimal solution
can be built from optimal solutions to its sub-problems.
ii. Recursively define the value of the solution by expressing it in terms of optimal
solutions for smaller sub-problems.
iii. Compute the value of the optimal solution, typically in bottom-up fashion.
iv. Construct an optimal solution from the computed information.
Steps i-iii form the basis of a dynamic-programming solution to a problem. If we need
only the value of an optimal solution, and not the solution itself, then we can omit step
iv. When we do perform step iv, we sometimes maintain additional information during
step iii so that we can easily construct an optimal solution.
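As a hypothetical illustration of all four steps (this example is not from the text), consider a small coin-change problem: given coin denominations, find the minimum number of coins summing to a target amount. Step i: an optimal solution for amount a uses some coin c together with an optimal solution for amount a-c. Step ii: this gives the recurrence dp[a] = 1 + min over c of dp[a-c]. Step iii fills the dp[] table bottom-up, and the choice[] array is the additional information kept during step iii so that step iv can reconstruct which coins were actually used. The names min_coins and reconstruct are illustrative only.

```c
#include <limits.h>

#define MAXA 101   /* assumption: amount stays below MAXA */

/* Steps ii and iii: compute dp[a] for a = 0..amount in size order,
   recording in choice[a] the coin chosen at each amount. */
int min_coins(const int *coins, int ncoins, int amount, int *choice) {
    int dp[MAXA];
    dp[0] = 0;
    choice[0] = -1;
    for (int a = 1; a <= amount; a++) {
        dp[a] = INT_MAX;    /* INT_MAX marks "not reachable yet" */
        choice[a] = -1;
        for (int j = 0; j < ncoins; j++) {
            int c = coins[j];
            if (c <= a && dp[a - c] != INT_MAX && dp[a - c] + 1 < dp[a]) {
                dp[a] = dp[a - c] + 1;
                choice[a] = c;      /* extra info kept for step iv */
            }
        }
    }
    return dp[amount];
}

/* Step iv: walk choice[] back from amount to 0, emitting one coin of
   an optimal solution per step; returns how many coins were written. */
int reconstruct(const int *choice, int amount, int *out) {
    int k = 0;
    while (amount > 0 && choice[amount] != -1) {
        out[k++] = choice[amount];
        amount -= choice[amount];
    }
    return k;
}
```

For coins {1, 3, 4} and amount 6, min_coins returns 2 and reconstruct yields the coins 3 and 3.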
4. Top-Down vs Bottom-Up approach
A Dynamic Programming solution can be implemented using one of the following two
approaches.
4.1 Top-Down approach: In this approach, we write the procedure recursively in a
natural manner, but modified to save the result of each sub-problem (usually in
an array or hash table). The procedure first checks to see whether it has
previously solved this sub-problem. If so, it returns the saved value, saving
further computation at this level; if not, the procedure computes the value in the
usual manner. We say that the recursive procedure has been memoized; it
“remembers” what results it has computed previously.
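The Top-Down approach can be sketched for the same Fibonacci problem as follows; this is one possible memoized version, assuming n stays below MAXN, with -1 marking a table entry that has not been computed yet.

```c
#include <string.h>

#define MAXN 64            /* assumption: n < MAXN */

static long long memo[MAXN];

/* Reset the table: the all-ones byte pattern written by memset
   yields the value -1 in every long long entry. */
void init_memo(void) {
    memset(memo, -1, sizeof memo);
}

long long fib_memo(int n) {
    if (n < 2)
        return n;          /* base cases: fib(0) = 0, fib(1) = 1 */
    if (memo[n] != -1)
        return memo[n];    /* previously solved: return the saved value */
    /* not yet solved: compute in the usual recursive manner, then save */
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return memo[n];
}
```

Each fib_memo(k) is computed at most once; every later call for the same k returns the saved value, so the exponential recursion tree of the pure recursive version collapses to linear work.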
4.2 Bottom-Up approach: This approach typically depends on some natural notion
of the “size” of a sub-problem, such that solving any particular sub-problem
depends only on solving “smaller” sub-problems. We sort the sub-problems by
size and solve them in size order, smallest first. When solving a particular sub-
problem, we have already solved all of the smaller sub-problems its solution
depends upon, and we have saved their solutions. We solve each sub-problem
only once, and when we first see it, we have already solved all of its prerequisite
sub-problems. This is the same approach we used in the above example for nth
Fibonacci number using Dynamic Programming.

5. Applications
The majority of Dynamic Programming problems can be categorized into two types:
1) optimization problems and 2) combinatorial problems. Optimization problems
ask us to select a feasible solution so that the value of a required function is
minimized or maximized. Combinatorial problems, on the other hand, ask us to
figure out the number of ways to do something, or the probability of some event
happening.
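A minimal sketch of a combinatorial DP problem (this example is not from the text): counting the number of ways to climb n stairs taking 1 or 2 steps at a time. Each step i is reached either from step i-1 or from step i-2, so the counts for those two sub-problems add.

```c
/* Number of ways to climb n stairs using 1-steps and 2-steps.
   Assumption: n < 64, so the counts fit in a long long. */
long long count_ways(int n) {
    long long ways[64];
    ways[0] = 1;           /* one way to stay at the bottom: do nothing */
    ways[1] = 1;           /* one way to reach step 1: a single 1-step */
    for (int i = 2; i <= n; i++)
        ways[i] = ways[i - 1] + ways[i - 2];
    return ways[n];
}
```

Note that the recurrence is the Fibonacci recurrence again; the difference from the optimization setting is that the sub-problem values are added (counting ways) rather than minimized or maximized.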
