
Dynamic programming

• Invented by the U.S. mathematician Richard Bellman in the 1950s.
• Dynamic programming is a technique for solving problems with
overlapping subproblems.
• It relates a given problem's solution to the solutions of its smaller
subproblems.
• Rather than solving overlapping subproblems again and again,
dynamic programming suggests solving each of the smaller
subproblems only once and recording the results in a table from
which a solution to the original problem can then be obtained.
• E.g.: The Fibonacci numbers are the elements of the sequence 0, 1,
1, 2, 3, 5, 8, 13, 21, 34, . . . ,
which can be defined by the simple recurrence
F(n) = F(n − 1) + F(n − 2) for n > 1
with the two initial conditions F(0) = 0, F(1) = 1.

• Dynamic programming is a method for solving a complex problem by
breaking it down into a collection of simpler subproblems,
• solving each of those subproblems just once, and storing their solutions
in a memory-based data structure.
• The next time the same subproblem occurs, instead of recomputing its
solution, one simply looks up the previously computed solution, thereby
saving computation time at the expense of a modest expenditure in storage
space.
• The technique of storing solutions to subproblems instead of recomputing
them is called "memoization".
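As a minimal Python sketch of memoization (the function name and cache choice are illustrative, not from the slides), the Fibonacci recurrence above can be cached so each value is computed only once:

from functools import lru_cache

@lru_cache(maxsize=None)          # the table of stored results
def fib(n):
    if n < 2:                     # initial conditions F(0) = 0, F(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(9))   # 34: each F(k) was computed only once and then looked up

Without the cache this recursion recomputes the same subproblems exponentially many times; with it, the running time drops to linear in n.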
DIFFERENCE BETWEEN DC & DP

DIVIDE AND CONQUER
1. Divide the given problem into many subproblems. Find the individual
solutions and combine them to get the solution for the main problem.
2. Follows a top-down technique (recursive).
3. Splits the input only at specific points (e.g., the midpoint).
4. Each subproblem is independent.

DYNAMIC PROGRAMMING
1. Many decision sequences are generated and all the overlapping
subinstances are considered.
2. Follows a bottom-up technique (iterative).
3. Splits the input at every possible point rather than at one particular
point; after trying all split points it determines which split point is
optimal.
4. Subproblems are dependent on the main problem.
DIFFERENCE BETWEEN DC & DP (Cont.…)

DIVIDE AND CONQUER
Less efficient because of rework on solutions.
Duplicate sub-solutions may be obtained (duplications are neglected).

DYNAMIC PROGRAMMING
More efficient than divide and conquer.
Duplication in solutions is avoided totally.

Principle of optimality

• An optimal solution to any instance of an optimization problem is
composed of optimal solutions to its subinstances.
• An optimization problem is the problem of finding the best solution
from all feasible solutions. There are two major types of optimization
problems: minimization and maximization.
Warshall's algorithm

• Computes the transitive closure of a directed graph (digraph).
• Warshall's algorithm constructs the transitive closure through a series
of n × n boolean matrices:
R(0), . . . , R(k−1), R(k), . . . , R(n).
• The element R(k)[i, j] is 1 if and only if there is a directed path from
vertex i to vertex j using only vertices numbered no higher than k as
intermediate vertices, which gives the recurrence
R(k)[i, j] = R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j]).
• Now we compute R(1): with k = 1, apply the recurrence for
i = 1 to 4 and j = 1 to 4.
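Since the slides' example matrices are not reproduced here, the following is a Python sketch of the full algorithm applied to an illustrative 4-vertex digraph (the adjacency matrix below is an assumption, not the slides' example):

def warshall(A):
    """Transitive closure of a digraph given by a boolean adjacency matrix A."""
    n = len(A)
    R = [row[:] for row in A]            # R(0) is the adjacency matrix itself
    for k in range(n):                   # build R(k) from R(k-1)
        for i in range(n):
            for j in range(n):
                # a path i -> j exists if it already did, or if it passes through k
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

A = [[0, 1, 0, 0],                       # sample 4-vertex digraph
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
print(warshall(A))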
Computing a Binomial Coefficient

• Computing a binomial coefficient is a typical example of applying
dynamic programming. In mathematics, particularly in
combinatorics, a binomial coefficient C(n, k) is the coefficient of one
of the terms in the expansion of (a + b)^n.
• The coefficients satisfy the recurrence
C(n, k) = C(n − 1, k − 1) + C(n − 1, k) for n > k > 0,
with C(n, 0) = C(n, n) = 1,
which can be evaluated bottom-up by filling Pascal's triangle row by row.
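A bottom-up Python sketch of the table fill implied by this recurrence:

def binomial(n, k):
    """C(n, k) by dynamic programming, filling Pascal's triangle row by row."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                      # base cases C(i, 0) = C(i, i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(5, 2))   # 10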
Floyd's algorithm

• Floyd's algorithm finds the shortest path between every pair of
vertices of a graph (the all-pairs shortest-paths problem). The
algorithm works for both directed and undirected graphs.
• The graph may contain negative edge weights, but it must not
contain negative cycles.
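A Python sketch of Floyd's algorithm; the 3-vertex weight matrix is an illustrative assumption, with float('inf') marking absent edges:

INF = float('inf')

def floyd(W):
    """All-pairs shortest-path distances from weight matrix W (INF = no edge)."""
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):                   # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

W = [[0,   3,   INF],                    # sample 3-vertex digraph
     [INF, 0,   1],
     [4,   INF, 0]]
print(floyd(W))   # [[0, 3, 4], [5, 0, 1], [4, 7, 0]]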
Optimal Binary Search Trees

• Elements with higher search probabilities should be placed closer to
the root, and the element with the least probability should be placed
away from the root. A binary search tree arranged this way, so that
the average search cost is minimized, is called an optimal binary
search tree.
Optimal Binary Search Trees

ALGORITHM OptimalBST(P[1..n])
//Finds an optimal binary search tree by dynamic programming
//Input: An array P[1..n] of search probabilities for a sorted list of n keys
//Output: Average number of comparisons in successful searches in the
//        optimal BST and table R of subtrees' roots in the optimal BST
for i ← 1 to n do
    C[i, i − 1] ← 0
    C[i, i] ← P[i]
    R[i, i] ← i
C[n + 1, n] ← 0
Optimal Binary Search Trees

for d ← 1 to n − 1 do        //diagonal count
    for i ← 1 to n − d do
        j ← i + d
        minval ← ∞
        for k ← i to j do
            if C[i, k − 1] + C[k + 1, j] < minval
                minval ← C[i, k − 1] + C[k + 1, j]; kmin ← k
        R[i, j] ← kmin
        sum ← P[i]; for s ← i + 1 to j do sum ← sum + P[s]
        C[i, j] ← minval + sum
return C[1, n], R
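For readers who want to run the algorithm, here is a direct Python transcription of the pseudocode (a sketch; the only addition is padding the arrays so that the 1-based references C[i, i − 1] and C[k + 1, j] stay in range):

import math

def optimal_bst(P):
    """Python rendering of the OptimalBST pseudocode above.
    P[1..n] holds the search probabilities (P[0] is a dummy placeholder)."""
    n = len(P) - 1
    C = [[0.0] * (n + 1) for _ in range(n + 2)]   # C[i][i-1] entries stay 0
    R = [[0] * (n + 1) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = P[i]
        R[i][i] = i
    for d in range(1, n):                          # diagonal count
        for i in range(1, n - d + 1):
            j = i + d
            minval, kmin = math.inf, i
            for k in range(i, j + 1):              # try each key as the root
                if C[i][k - 1] + C[k + 1][j] < minval:
                    minval = C[i][k - 1] + C[k + 1][j]
                    kmin = k
            R[i][j] = kmin
            C[i][j] = minval + sum(P[i:j + 1])     # add the probability total
    return C[1][n], R

# The trace on the following slides uses P[1..4] = 0.1, 0.2, 0.4, 0.3:
cost, R = optimal_bst([0, 0.1, 0.2, 0.4, 0.3])
print(cost)   # 1.7 (up to float rounding), matching C[1, 4] in the table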
Example: n = 4 keys with probabilities P[1..4] = 0.1, 0.2, 0.4, 0.3.
Initialization: C[i, i − 1] ← 0, C[i, i] ← P[i], R[i, i] ← i, C[n + 1, n] ← 0.

The completed table C[i, j]:

 i\j    0      1      2      3      4
 1      0     0.1    0.4    1.1    1.7
 2             0     0.2    0.8    1.4
 3                    0     0.4    1.0
 4                           0     0.3
 5                                  0
Tracing the main loop with n = 4, so d runs from 1 to n − 1 = 3:

d = 1:  i runs from 1 to n − d = 3.

    i = 1:  j ← i + d = 2.  Now d = 1, i = 1, j = 2 (that is, C(1, 2)):
            minval ← ∞
            for k ← i to j: substitute k = 1, 2
For each such k the inner-loop body is applied:

    if C[i, k − 1] + C[k + 1, j] < minval
        minval ← C[i, k − 1] + C[k + 1, j]; kmin ← k
    R[i, j] ← kmin
    sum ← P[i]; for s ← i + 1 to j do sum ← sum + P[s]
    C[i, j] ← minval + sum
return C[1, n], R
d = 1 (continued):

    i = 2:  j ← i + d = 3.  Now d = 1, i = 2, j = 3 (that is, C(2, 3)):
            minval ← ∞
            for k ← i to j: substitute k = 2, 3
    i = 3:  j ← i + d = 4.  Now d = 1, i = 3, j = 4 (that is, C(3, 4)):
            minval ← ∞
            for k ← i to j: substitute k = 3, 4
d = 2:  i runs from 1 to n − d = 2.

    i = 1:  j ← i + d = 3.  Now d = 2, i = 1, j = 3 (that is, C(1, 3)):
            minval ← ∞
            for k ← i to j: substitute k = 1, 2, 3
    i = 2:  j ← i + d = 4.  Now d = 2, i = 2, j = 4 (that is, C(2, 4)):
            minval ← ∞
            for k ← i to j: substitute k = 2, 3, 4
d = 3:  i runs from 1 to n − d = 1.

    i = 1:  j ← i + d = 4.  Now d = 3, i = 1, j = 4 (that is, C(1, 4)):
            minval ← ∞
            for k ← i to j: substitute k = 1, 2, 3, 4
Knapsack Problem and Memory Functions

• Dynamic programming algorithm for the knapsack problem:
• Given n items of known weights w1, . . . , wn and values
v1, . . . , vn and a knapsack of capacity W, find the most
valuable subset of the items that fits into the knapsack.
• Consider an instance defined by the first i items,
1 ≤ i ≤ n, with weights w1, . . . , wi, values v1, . . . , vi,
and knapsack capacity j, 1 ≤ j ≤ W.
• Let F(i, j) be the value of an optimal solution to this
instance, i.e., the value of the most valuable subset of
the first i items that fits into the knapsack of capacity j.
We can divide all the subsets of the first i items that fit the knapsack of
capacity j into two categories: those that do not include the ith item
and those that do.
Note the following:
1. Among the subsets that do not include the ith item, the value of
an optimal subset is, by definition, F(i − 1, j).
2. Among the subsets that do include the ith item (hence, j − wi ≥ 0),
an optimal subset is made up of this item and an optimal subset of
the first i − 1 items that fits into the knapsack of capacity j − wi. The
value of such an optimal subset is vi + F(i − 1, j − wi).
These two observations give the recurrence
F(i, j) = max{F(i − 1, j), vi + F(i − 1, j − wi)}   if j − wi ≥ 0,
F(i, j) = F(i − 1, j)                               if j − wi < 0,
with the initial conditions F(0, j) = 0 for j ≥ 0 and F(i, 0) = 0 for i ≥ 0.
Initial table for the example instance (n = 4 items, capacity W = 5);
row 0 and column 0 are filled with zeros by the initial conditions:

 i\j   0   1   2   3   4   5
 0     0   0   0   0   0   0
 1     0
 2     0
 3     0
 4     0
Knapsack Problem and Memory Functions

Steps to select the actual knapsack items (backtracking through the table):

Let i = n and k = W
while (i > 0 and k > 0)
{
    if (table[i, k] ≠ table[i − 1, k]) then
    {
        mark the ith item as in the knapsack
        i = i − 1 and k = k − wi      // the ith item was selected
    }
    else
        i = i − 1                     // the ith item was not selected
}
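A runnable Python sketch combining the table fill (using the recurrence above) with these item-selection steps; the instance at the bottom is a hypothetical example, since the slides' worked figure is not reproduced here:

def knapsack(weights, values, W):
    """Bottom-up knapsack: F[i][j] = best value using the first i items, capacity j.
    Returns the optimal value and the list of selected item indices (1-based)."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            F[i][j] = F[i - 1][j]                  # case 1: skip the ith item
            if weights[i - 1] <= j:                # case 2: take the ith item
                F[i][j] = max(F[i][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    # Backtrack as in the steps above: a changed entry means item i was taken.
    items, i, k = [], n, W
    while i > 0 and k > 0:
        if F[i][k] != F[i - 1][k]:
            items.append(i)
            k -= weights[i - 1]
        i -= 1
    return F[n][W], sorted(items)

# Hypothetical instance with n = 4 items and capacity W = 5:
print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))   # (37, [1, 2, 4])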
Greedy Technique

• The greedy method uses the subset paradigm or the ordering
paradigm to obtain the solution.
• In the subset paradigm, at each stage the decision is made based on
whether a particular input is in the optimal solution or not.
• Prim's Algorithm
Minimum spanning tree:
• A minimum spanning tree of a weighted connected graph G is
a spanning tree of G with the minimum (smallest) total edge weight.
Spanning tree:
• A spanning tree of a graph G is a subgraph that is a tree and
contains all the vertices of G, with no circuits.
Prim's Algorithm (Cont.…)

ALGORITHM Prim(G)
//Prim's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
VT ← {v0}      //the set of tree vertices can be initialized with any vertex
ET ← ∅
for i ← 1 to |V| − 1 do
    find a minimum-weight edge e* = (v*, u*) among all the edges (v, u)
        such that v is in VT and u is in V − VT
    VT ← VT ∪ {u*}
    ET ← ET ∪ {e*}
return ET
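A heap-based Python sketch of Prim's algorithm (the adjacency-map representation and the sample graph are illustrative assumptions):

import heapq

def prim(graph, start):
    """Prim's MST on an adjacency map {vertex: [(weight, neighbor), ...]}.
    Returns the list of tree edges (v, u, weight)."""
    in_tree = {start}
    edges = [(w, start, u) for w, u in graph[start]]
    heapq.heapify(edges)
    mst = []
    while edges and len(in_tree) < len(graph):
        w, v, u = heapq.heappop(edges)       # minimum-weight crossing edge
        if u in in_tree:
            continue
        in_tree.add(u)
        mst.append((v, u, w))
        for w2, x in graph[u]:
            if x not in in_tree:
                heapq.heappush(edges, (w2, u, x))
    return mst

# Hypothetical undirected graph (each edge listed in both directions):
g = {'a': [(3, 'b'), (5, 'c')],
     'b': [(3, 'a'), (1, 'c')],
     'c': [(5, 'a'), (1, 'b')]}
print(prim(g, 'a'))   # [('a', 'b', 3), ('b', 'c', 1)]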
Kruskal's Algorithm

Kruskal's algorithm looks at a minimum spanning tree of a weighted
connected graph G = (V, E) as an acyclic subgraph with |V| − 1 edges
for which the sum of the edge weights is the smallest.

Kruskal's Algorithm:
∙ This algorithm obtains a minimum spanning tree, but it is not
necessary to choose adjacent vertices of already selected vertices.

Prim's algorithm:
∙ This algorithm obtains a minimum spanning tree by selecting the
adjacent vertices of already selected vertices.
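A Python sketch of Kruskal's algorithm with a simple union-find structure to detect cycles (the sample graph is an illustrative assumption):

def kruskal(n, edges):
    """Kruskal's MST: scan edges by increasing weight, skip those forming a cycle.
    Vertices are 0..n-1; edges are (weight, v, u) tuples."""
    parent = list(range(n))

    def find(x):                       # root of x's component (path compression)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, v, u in sorted(edges):
        rv, ru = find(v), find(u)
        if rv != ru:                   # accepting (v, u) creates no cycle
            parent[ru] = rv
            mst.append((v, u, w))
    return mst

# Hypothetical 4-vertex graph:
print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3)]))
# [(0, 1, 1), (2, 3, 2), (1, 2, 3)]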
Dijkstra's Algorithm

• Dijkstra's algorithm is used to find shortest paths. It is also known as
the single-source shortest-path algorithm: for a given vertex, called
the source, the shortest path to every other vertex is obtained.
• The main focus is not to find only one single path but to find the
shortest paths from the source to all other remaining vertices. The
algorithm is applicable only to graphs with non-negative weights.
• The related pseudocode below solves the same single-source problem
on a dag (directed acyclic graph) by relaxing edges in topological order.

ALGORITHM DagShortestPaths(G, s)
//Solves the single-source shortest paths problem for a dag
//Input: A weighted dag G = (V, E) and its vertex s
//Output: The length dv of a shortest path from s to v and
//        its penultimate vertex pv for every vertex v in V
topologically sort the vertices of G
Dijkstra's Algorithm (Cont…)

for every vertex v do
    dv ← ∞; pv ← null
ds ← 0
for every vertex v taken in topological order do
    for every vertex u adjacent to v do
        if dv + w(v, u) < du
            du ← dv + w(v, u); pu ← v
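The dag pseudocode above relies on a topological order; Dijkstra's algorithm proper replaces that with a priority queue of tentative distances. A heap-based Python sketch (the graph layout and sample digraph are illustrative assumptions):

import heapq

def dijkstra(graph, s):
    """Dijkstra's single-source shortest paths for non-negative edge weights.
    graph: {vertex: [(weight, neighbor), ...]}; returns distances from s."""
    dist = {v: float('inf') for v in graph}
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:                 # stale entry, distance already improved
            continue
        for w, u in graph[v]:
            if d + w < dist[u]:         # the relaxation test from the pseudocode
                dist[u] = d + w
                heapq.heappush(heap, (dist[u], u))
    return dist

g = {'s': [(3, 'a'), (5, 'b')],         # hypothetical digraph
     'a': [(1, 'b')],
     'b': []}
print(dijkstra(g, 's'))   # {'s': 0, 'a': 3, 'b': 4}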
Huffman Trees

• Huffman trees are constructed for encoding a given text of
n characters. While encoding a given text, each character is
associated with some bit sequence, called a code word. The
encoding can be of two types:
• Fixed-length encoding
• Variable-length encoding
Huffman Trees (Cont.…)

• Huffman encoding is used in file-compression algorithms.
• Huffman codes are used in the transmission of data in an encoded form.
• This encoding is also used in game-playing methods in which decision
trees need to be formed.
Fixed-length encoding
• In fixed-length encoding, each character is associated with a bit string
of the same fixed length.
Variable-length encoding
• In variable-length encoding, characters may be associated with code
words of different lengths.
• Step 1: Initialize n one-node trees and label them with the symbols
of the alphabet given. Record the frequency of each symbol in its
tree's root to indicate the tree's weight.
Huffman Trees (Cont.…)

Step 2: Repeat the following operation until a single tree is obtained.
Find two trees with the smallest weight (ties can be broken arbitrarily,
but see Problem 2 in this section's exercises). Make them the left and
right subtrees of a new tree and record the sum of their weights in the
root of the new tree as its weight.
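A Python sketch of Steps 1 and 2 using a min-heap, with tree merging represented by prefixing code bits. Because ties and the left/right (0/1) labeling are broken arbitrarily, the exact code words may differ from the example that follows, though the code-word lengths agree:

import heapq

def huffman_codes(freq):
    """Build Huffman code words from {symbol: frequency} by repeatedly
    merging the two lightest trees, as in Steps 1 and 2 above."""
    # Each heap entry: (weight, unique tie-breaker, {symbol: code-so-far})
    heap = [(w, i, {sym: ''}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)        # two trees of smallest weight
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in t1.items()}    # left subtree gets 0
        merged.update({s: '1' + c for s, c in t2.items()})  # right gets 1
        count += 1
        heapq.heappush(heap, (w1 + w2, count, merged))
    return heap[0][2]

codes = huffman_codes({'A': 0.40, 'B': 0.1, 'C': 0.25, 'D': 0.2, 'E': 0.15})
print(codes)   # {'D': '00', 'C': '01', 'B': '100', 'E': '101', 'A': '11'}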
• Formula for the expected number of bits per character:
• Expected bits per character = Σ (length of a character's code word
× probability of that character), summed over all characters.
• For example, with the code words and probabilities
• A - 01    Probability = 0.40
• B - 000   Probability = 0.1
• C - 11    Probability = 0.25
• D - 10    Probability = 0.2
• E - 001   Probability = 0.15
• Expected bits per character = 2(0.40) + 3(0.1) + 2(0.25) + 2(0.2) + 3(0.15) = 2.45
• For comparison, a fixed-length encoding of these 5 characters would
require ⌈log2 5⌉ = 3 bits per character.
