UNIT 3 Dynamic Programming
Dynamic Programming:
• Dynamic Programming is an algorithm design technique.
• Dynamic Programming is a technique for solving problems with
overlapping sub-problems.
• Rather than solving overlapping subproblems again and again, dynamic
programming suggests solving each of the smaller subproblems only once
and recording the results in a table from which we can then obtain a
solution to the original problem.
• Invented by American mathematician Richard Bellman in the 1950s to
solve optimization problems.
• Here, “programming” means “planning”.
Dynamic Programming: Examples
➢ Computing the nth Fibonacci number
➢ Computing a binomial coefficient
➢ Warshall’s Algorithm
➢ Floyd’s Algorithm
➢ Optimal Binary Search Tree (OBST)
➢ Knapsack Problem
Example - 1: Fibonacci Numbers
F(n) = F(n − 1) + F(n − 2) for n > 1, with F(0) = 0 and F(1) = 1.
Solving this recurrence naively recomputes the same values many times; recording each F(i) in a table computes each value only once.
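As a concrete illustration, here is a minimal bottom-up sketch of this recurrence in Python; the function name fib_dp is ours, not from the slides. Each value is computed once and recorded in a table, which is exactly the dynamic programming idea described above.

```python
def fib_dp(n: int) -> int:
    """Compute F(n) by filling a table of the first n+1 Fibonacci numbers."""
    if n < 2:
        return n
    table = [0] * (n + 1)          # table[i] will hold F(i)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # F(i) = F(i-1) + F(i-2)
    return table[n]

print(fib_dp(10))  # 55
```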
Example - 2: Binomial Coefficient
• A binomial coefficient is the coefficient of any of the terms in the expansion of (a + b)^n. It is denoted by C(n, k), where 0 ≤ k ≤ n.
• It satisfies the recurrence C(n, k) = C(n − 1, k − 1) + C(n − 1, k) for 0 < k < n, with C(n, 0) = C(n, n) = 1, which a dynamic programming algorithm uses to fill a table row by row.
Example - 2: Binomial Coefficient (Cont…)
• Here the first k + 1 rows of the table form a triangle, and the remaining n − k rows form a rectangle.
• Therefore, the sum expressing the number of additions A(n, k) is split into two parts.
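A minimal sketch of this table-filling algorithm, assuming the Pascal's-triangle recurrence stated above; the function name binomial is ours.

```python
def binomial(n: int, k: int) -> int:
    """Fill the first k+1 columns of Pascal's triangle row by row."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        # Rows i <= k form the triangle; rows i > k form the rectangle.
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                      # base cases C(i,0) = C(i,i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(5, 2))  # 10
```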
Example - 3: Warshall’s Algorithm
• This algorithm constructs the transitive closure of a given digraph with n vertices through a series of n-by-n Boolean matrices:
R(0), …, R(k−1), R(k), …, R(n)
• Adjacency Matrix: The adjacency matrix A = {aij} of a given directed graph G = (V, E) is a Boolean matrix that has 1 in its ith row and jth column if and only if there is a directed edge from the ith vertex to the jth vertex.
Example - 3: Warshall’s Algorithm
[Figure: a four-vertex digraph, its adjacency matrix, and its transitive closure; the matrices appear in the worked example below.]
Warshall’s Algorithm
Constructs the transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), …, R(k), …, R(n), where
R(k)[i, j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate vertices.
Note that R(0) = A (the adjacency matrix) and R(n) = T (the transitive closure).
Warshall’s Algorithm (recurrence)
On the kth iteration, the algorithm determines for every pair of vertices i, j whether a path exists from i to j with just the vertices 1, …, k allowed as intermediate vertices:
R(k)[i, j] = R(k−1)[i, j]  (a path using just 1, …, k−1)
or
R(k−1)[i, k] and R(k−1)[k, j]  (a path from i to k and a path from k to j, each using just 1, …, k−1)
Warshall’s Algorithm (matrix generation)
The recurrence relating the elements of R(k) to the elements of R(k−1) is:
R(k)[i, j] = R(k−1)[i, j] or (R(k−1)[i, k] and R(k−1)[k, j])
Warshall’s Algorithm (example)
The digraph has vertices 1, 2, 3, 4 and edges 1→3, 2→1, 2→4, and 4→2.

R(0) =
0 0 1 0
1 0 0 1
0 0 0 0
0 1 0 0

R(1) =
0 0 1 0
1 0 1 1
0 0 0 0
0 1 0 0
(vertex 1 as an intermediate adds the path 2→1→3)

R(2) =
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1
(vertex 2 as an intermediate lets 4 reach 1, 3, and 4)

R(3) = R(2)
(vertex 3 has no outgoing edges, so nothing changes)

R(4) = T =
0 0 1 0
1 1 1 1
0 0 0 0
1 1 1 1
(vertex 4 as an intermediate lets 2 reach 2 itself)
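A minimal Python sketch of Warshall's algorithm as given by the recurrence above; the function name warshall is ours, and the test matrix is our reading of the example digraph.

```python
def warshall(A):
    """Transitive closure by Warshall's algorithm.
    A is an n-by-n 0/1 adjacency matrix; returns R(n) = T."""
    n = len(A)
    R = [row[:] for row in A]            # R(0) = A
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

A = [[0, 0, 1, 0],      # adjacency matrix of the example above
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
for row in warshall(A):
    print(row)           # prints the rows of T = R(4) shown above
```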
Floyd’s Algorithm: All pairs shortest paths
Problem: In a weighted (di)graph, find shortest paths between
every pair of vertices
Example:
[Figure: a small weighted digraph; a full worked example appears later in this section.]
Floyd’s Algorithm (matrix generation)
On the k-th iteration, the algorithm determines shortest paths
between every pair of vertices i, j that use only vertices among
1,…,k as intermediate
D(k)[i, j] = min{ D(k−1)[i, j], D(k−1)[i, k] + D(k−1)[k, j] } for k ≥ 1, with D(0) = W (the weight matrix).
That is, the shortest i–j path either avoids vertex k (length D(k−1)[i, j]) or goes through k (length D(k−1)[i, k] + D(k−1)[k, j]).
Floyd's Algorithm:
Objective:
• Find the shortest paths from each vertex to all other vertices of a given weighted connected graph (directed or undirected); this is known as the all-pairs shortest-path problem.
• Anany Levitin's textbook presents this all-pairs shortest-path algorithm under the name Floyd's algorithm.
• If there is no direct edge between two vertices, the corresponding matrix entry is ∞.
• The distance from every vertex to itself is 0.
Floyd’s Algorithm (pseudocode and analysis)
With three nested loops over the n vertices, the algorithm runs in Θ(n³) time; computed in place, it needs only Θ(n²) space. A Python sketch is given after the worked example below.
Floyd’s Algorithm (example)
The weighted digraph has vertices 1, 2, 3, 4 and edges 1→3 (weight 3), 2→1 (weight 2), 3→2 (weight 7), 3→4 (weight 1), and 4→1 (weight 6).

D(0) =
0 ∞ 3 ∞
2 0 ∞ ∞
∞ 7 0 1
6 ∞ ∞ 0

D(1) =
0 ∞ 3 ∞
2 0 5 ∞
∞ 7 0 1
6 ∞ 9 0

D(2) =
0 ∞ 3 ∞
2 0 5 ∞
9 7 0 1
6 ∞ 9 0

D(3) =
0 10 3 4
2 0 5 6
9 7 0 1
6 16 9 0

D(4) =
0 10 3 4
2 0 5 6
7 7 0 1
6 16 9 0
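A minimal Python sketch of Floyd's algorithm following the recurrence above; the names floyd and INF are ours, and the test matrix is the D(0) of this example.

```python
INF = float("inf")

def floyd(W):
    """All-pairs shortest paths: W is the weight matrix (INF = no edge,
    0 on the diagonal); returns the distance matrix D(n)."""
    n = len(W)
    D = [row[:] for row in W]                # D(0) = W
    for k in range(n):                       # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # D(k)[i,j] = min(D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j])
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
for row in floyd(W):
    print(row)        # prints the rows of D(4) shown above
```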
Dynamic Programming for Optimal BST Problem
[Figure: a binary search tree with root ak; its left subtree is an optimal BST for ai, …, ak−1 and its right subtree is an optimal BST for ak+1, …, aj]
The principle of optimality yields the recurrence
C(i, j) = min{ C(i, k−1) + C(k+1, j) : i ≤ k ≤ j } + Σ s=i..j ps, with C(i, i−1) = 0 and C(i, i) = pi,
where C(i, j) is the smallest average number of comparisons in a BST for the keys ai, …, aj and ps is the probability of searching for key as.
[Figure: table of the dynamic programming algorithm for constructing an optimal binary search tree]
Example
Solution
Analysis:
• The algorithm’s space efficiency is quadratic.
• The time efficiency of this algorithm is cubic.
• A more careful analysis shows that entries in the root table are always
nondecreasing along each row and column.
• This limits the values for R(i, j) to the range R(i, j − 1), …, R(i + 1, j) and makes it possible to reduce the running time of the algorithm to Θ(n²).
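A minimal sketch of this dynamic program, filling the cost table C and the root table R diagonal by diagonal per the recurrence above; the function name and the sample probabilities are ours, not necessarily the slide's example.

```python
def optimal_bst(p):
    """p[1..n] are the key probabilities (p[0] is unused).
    Returns the cost table C and the root table R."""
    n = len(p) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]                       # one-key trees
        R[i][i] = i
    for d in range(1, n):                    # d = j - i, the diagonal index
        for i in range(1, n - d + 1):
            j = i + d
            best_cost, best_root = float("inf"), i
            for k in range(i, j + 1):        # try each key ak as the root
                cost = C[i][k - 1] + C[k + 1][j]
                if cost < best_cost:
                    best_cost, best_root = cost, k
            C[i][j] = best_cost + sum(p[i:j + 1])
            R[i][j] = best_root
    return C, R

C, R = optimal_bst([0, 0.1, 0.2, 0.4, 0.3])
print(round(C[1][4], 2), R[1][4])   # 1.7 3 for these sample probabilities
```

The cubic time and quadratic space mentioned in the analysis are visible directly: three nested loops over at most n values each, and two (n + 2)-by-(n + 2) tables.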
Knapsack Problem
Dynamic Programming
Problem Statement
Given n items with
integer weights: w1, w2, …, wn
values: v1, v2, …, vn
and a knapsack of integer capacity W,
find the most valuable subset of the items that fits into the knapsack.
Recurrence relation
• A recurrence relation expresses a solution to an instance of the knapsack problem in terms of solutions to its smaller subinstances.
• Let F(i, j) be the value of an optimal solution to such a subinstance, i.e., the value of the most valuable subset of the first i items that fits into a knapsack of capacity j. Then
F(i, j) = max{ F(i − 1, j), vi + F(i − 1, j − wi) } if j − wi ≥ 0,
F(i, j) = F(i − 1, j) if j − wi < 0.
Initial conditions: F(0, j) = 0 for j ≥ 0 and F(i, 0) = 0 for i ≥ 0.
Solution to knapsack instance
The subsets of the first i items that fit into a knapsack of capacity j can be divided into two categories: those that do not include the ith item and those that do.
1. Among the subsets that do not include the ith item, the value of an
optimal subset is, by definition, F(i − 1, j).
2. Among the subsets that do include the ith item (hence, j − wi ≥ 0),
an optimal subset is made up of this item and an optimal subset of
the first i − 1 items that fits into the knapsack of capacity j − wi . The
value of such an optimal subset is vi + F(i − 1, j − wi).
Table for solving the knapsack problem by
dynamic programming
Our goal is to find F(n, W), the maximal value of a subset of the n given items that fits into the knapsack of capacity W, and an optimal subset itself.
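To make the table-filling concrete, here is a minimal bottom-up sketch of the recurrence above. The function name and the sample instance are ours; the instance is chosen to be consistent with the F values quoted in the backtracing walk-through at the end of this section.

```python
def knapsack(weights, values, W):
    """F[i][j] = value of the best subset of the first i items
    that fits into a knapsack of capacity j."""
    n = len(weights)
    F = [[0] * (W + 1) for _ in range(n + 1)]   # F(0, j) = F(i, 0) = 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            F[i][j] = F[i - 1][j]               # without item i
            if j >= weights[i - 1]:             # with item i, if it fits
                F[i][j] = max(F[i][j],
                              values[i - 1] + F[i - 1][j - weights[i - 1]])
    return F

# Sample instance (ours): weights 2, 1, 3, 2; values 12, 10, 20, 15; W = 5.
F = knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5)
print(F[4][5])   # 37 for this sample instance
```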
Example
Solution
Memory functions
• Classical approach works bottom up: it fills a table with solutions to
all smaller subproblems.
• An unsatisfying aspect of this approach is that solutions to some of
these smaller subproblems are often not necessary for getting a
solution to the problem given.
• A method that solves only the subproblems that are necessary, and solves each of them only once, is the method of memory functions.
Memory function method for Knapsack
• This method solves a given problem in the top-down manner but, in
addition, maintains a table of the kind that would have been used by
a bottom-up dynamic programming algorithm.
• Initially, all the table’s entries are initialized with a special “null”
symbol to indicate that they have not yet been calculated.
• Thereafter, whenever a new value needs to be calculated, the method
checks the corresponding entry in the table first: if this entry is not
“null,” it is simply retrieved from the table; otherwise, it is computed
by the recursive call whose result is then recorded in the table.
Solving the knapsack problem by the memory function algorithm
Optimal solution
• Optimal subset is found by backtracing the computations in the table.
• Since F(4, 5) > F(3, 5), item 4 has to be included in an optimal solution
along with an optimal subset for filling 5 − 2 = 3 remaining units of the
knapsack capacity.
• The value of the latter is F(3, 3). Since F(3, 3) = F(2, 3), item 3 need
not be in an optimal subset.
• Since F(2, 3) > F(1, 3), item 2 is a part of an optimal selection, which
leaves element F(1, 3 − 1) to specify its remaining composition.
• Since F(1, 2) > F(0, 2), item 1 is the final part of the optimal solution
{item 1, item 2, item 4}.
Algorithm
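A minimal sketch of the memory-function scheme just described: top-down recursion plus a table of already-computed entries, with −1 playing the role of the “null” symbol. The names and the sample instance are ours.

```python
def mf_knapsack(weights, values, W):
    """Memory-function knapsack: compute F(n, W) top-down,
    recording each computed entry in the table F."""
    n = len(weights)
    # Row 0 and column 0 hold the initial conditions (0); -1 marks "null".
    F = [[0] * (W + 1)] + [[0] + [-1] * W for _ in range(n)]

    def solve(i, j):
        if F[i][j] < 0:                          # entry still "null": compute it
            without_i = solve(i - 1, j)
            if j < weights[i - 1]:               # item i does not fit
                F[i][j] = without_i
            else:
                with_i = values[i - 1] + solve(i - 1, j - weights[i - 1])
                F[i][j] = max(without_i, with_i)
        return F[i][j]                           # otherwise simply retrieve it

    return solve(n, W)

print(mf_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # 37
```

Unlike the bottom-up version, this computes only the entries actually needed to obtain F(n, W), which is the advantage the memory-function approach is designed to deliver.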