UNIT 3: Dynamic Programming

Dynamic Programming

Dynamic Programming:
• Dynamic Programming is an algorithm design technique.
• Dynamic Programming is a technique for solving problems with
overlapping sub-problems.
• Rather than solving overlapping subproblems again and again, dynamic
programming suggests solving each of the smaller subproblems only once
and recording the results in a table from which we can then obtain a
solution to the original problem.
• Invented by American mathematician Richard Bellman in the 1950s to
solve optimization problems.
• Here, “programming” means “planning”.
Dynamic Programming:
Examples:
➢ Computing the nth Fibonacci number
➢ Computing a binomial coefficient
➢ Warshall’s Algorithm
➢ Floyd’s Algorithm
➢ Optimal Binary Search Tree (OBST)
➢ Knapsack Problem
Example - 1: Fibonacci Numbers

Fibonacci number series: 0, 1, 1, 2, 3, 5, 8, …


Defined by the simple recurrence:

F(n) = F(n-1) + F(n-2) for n > 1,

with two initial conditions: F(0) = 0, F(1) = 1.

Computing the nth Fibonacci number directly from this recurrence leads to an exponential number of recursive calls.


Example - 1: Fibonacci Numbers (cont.)
Computing the nth Fibonacci number recursively (top-down approach) expands into the call tree:

F(n)
F(n-1)            +            F(n-2)
F(n-2) + F(n-3)         F(n-3) + F(n-4)
...
Example - 1: Fibonacci Numbers (cont.)
▪ If n = 5, the recursive call structure computes F(2) three times. To overcome this, the solution of each subproblem is stored when it is first computed and reused thereafter.
▪ The ith Fibonacci value is computed by adding the values in the last two table locations.
▪ Dynamic programming thus avoids recomputing the same function many times.
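A minimal Python sketch of both approaches (the names fib_naive and fib_dp are illustrative, not from the slides):

def fib_naive(n):
    # Direct use of the recurrence: subproblems such as F(2) are recomputed many times.
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n):
    # Bottom-up dynamic programming: each F(i) is computed exactly once,
    # by adding the last two stored locations of the table.
    if n <= 1:
        return n
    f = [0] * (n + 1)
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]

print(fib_dp(5))   # 5, matching the series 0, 1, 1, 2, 3, 5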
Example - 2: Binomial Coefficient
Computing the binomial coefficient C(n, k)

• The binomial coefficient is the coefficient of a term in the expansion of (a + b)^n. It is denoted by C(n, k) or "n choose k", where 0 ≤ k ≤ n.
• It satisfies the recurrence C(n, k) = C(n−1, k−1) + C(n−1, k) for n > k > 0, with C(n, 0) = C(n, n) = 1.
Example - 2: Binomial Coefficient (Cont…)

[Figure: recursion tree for computing C(5, 2) = 10]
Example - 2: Binomial Coefficient (Cont…)

• To compute C(n, k), fill the table row by row, from row 0 to row n. Each row i is filled left to right, starting with 1, because C(i, 0) = 1.
• The diagonal entries C(i, i) = 1 for 0 ≤ i ≤ k.
• Each remaining entry is computed by adding the entry in the previous row, same column, C[i−1, j], and the entry in the previous row, previous column, C[i−1, j−1].
• Example: C(5, 2) = 10.
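A Python sketch of this row-by-row table fill (binomial is an illustrative name):

def binomial(n, k):
    # C[i][j] holds the binomial coefficient C(i, j) for 0 <= j <= min(i, k).
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                              # C(i, 0) = C(i, i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]  # one addition per entry
    return C[n][k]

print(binomial(5, 2))   # 10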
Example - 2: Binomial Coefficient (Cont…)

ANALYSIS:
• The basic operation is addition; let A(n, k) be the total number of additions made in computing C(n, k).
• The first k + 1 rows of the table form a triangle, while the remaining n − k rows form a rectangle.
• Therefore, the sum expressing A(n, k) is split into two parts.
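Carrying out this split: row i of the triangle (1 ≤ i ≤ k) contributes i − 1 additions (entries j = 1, …, i − 1), and each of the n − k rectangle rows contributes k additions, so

A(n, k) = Σ (i = 1 to k) (i − 1) + Σ (i = k+1 to n) k = (k − 1)k/2 + k(n − k) ∈ Θ(nk).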
Example - 3: Warshall’s Algorithm
• This algorithm constructs the transitive closure of a given digraph with n vertices through a series of n-by-n Boolean matrices:
R(0), …, R(k-1), R(k), …, R(n)
• Adjacency Matrix: The adjacency matrix A = {aij} of a given directed graph G = (V, E) is a Boolean matrix that has 1 in its ith row and jth column if and only if there is a directed edge from the ith vertex to the jth vertex.
Example - 3: Warshall’s Algorithm
• It is used to compute the transitive closure of a directed graph.
• Transitive Closure: The transitive closure of a directed graph with n vertices can be defined as an n x n Boolean matrix T = {tij} in which the element in the ith row and jth column is 1 if there exists a nontrivial directed path from the ith vertex to the jth vertex; otherwise, it is 0.
Example - 3: Warshall’s Algorithm (Cont…)

[Figure: a sample digraph with its adjacency matrix and its transitive closure]
Example - 3: Warshall’s Algorithm: ANALYSIS
• The basic operation of this algorithm is the computation of R(k)[i, j]. It is located within three nested for loops.
• Time efficiency: Θ(n³)
• Space efficiency: matrices can be written over their predecessors.
Warshall’s Algorithm: Transitive Closure
• Computes the transitive closure of a relation
• Alternatively: existence of all nontrivial paths in a digraph
• Example of transitive closure (digraph on vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, 4→2):

Adjacency matrix A      Transitive closure T
0 0 1 0                 0 0 1 0
1 0 0 1                 1 1 1 1
0 0 0 0                 0 0 0 0
0 1 0 0                 1 1 1 1
Warshall’s Algorithm
Constructs the transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), …, R(k), …, R(n), where
R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate.
Note that R(0) = A (the adjacency matrix) and R(n) = T (the transitive closure).

R(0)       R(1)       R(2)       R(3)       R(4)
0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0
1 0 0 1    1 0 1 1    1 0 1 1    1 0 1 1    1 1 1 1
0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0
0 1 0 0    0 1 0 0    1 1 1 1    1 1 1 1    1 1 1 1
Warshall’s Algorithm (recurrence)
On the kth iteration, the algorithm determines, for every pair of vertices i, j, whether a path exists from i to j with just vertices 1, …, k allowed as intermediate:

R(k)[i,j] = R(k-1)[i,j]                               (path using just 1, …, k-1)
            or (R(k-1)[i,k] and R(k-1)[k,j])          (path from i to k and from k to j using just 1, …, k-1)
Warshall’s Algorithm (matrix generation)
The recurrence relating elements of R(k) to elements of R(k-1) is:

R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])

It implies the following rules for generating R(k) from R(k-1):

Rule 1: If an element in row i and column j is 1 in R(k-1), it remains 1 in R(k).

Rule 2: If an element in row i and column j is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1’s in R(k-1).
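A compact Python sketch of these two rules applied in place (the function name warshall and the driver code are illustrative, not from the slides):

def warshall(adj):
    # adj: n-by-n Boolean (0/1) adjacency matrix of a digraph.
    # Returns the transitive closure T = R(n), overwriting predecessors in place.
    n = len(adj)
    R = [row[:] for row in adj]          # R(0) = A
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # Rule 1: a 1 stays 1; Rule 2: a 0 becomes 1 iff R[i][k] and R[k][j]
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
for row in warshall(A):
    print(row)   # yields the closure T = R(4) shown in the example below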
Warshall’s Algorithm (example)
For the digraph with edges 1→3, 2→1, 2→4, 4→2:

R(0) =  0 0 1 0    R(1) =  0 0 1 0
        1 0 0 1            1 0 1 1
        0 0 0 0            0 0 0 0
        0 1 0 0            0 1 0 0

R(2) =  0 0 1 0    R(3) =  0 0 1 0    R(4) =  0 0 1 0
        1 0 1 1            1 0 1 1            1 1 1 1
        0 0 0 0            0 0 0 0            0 0 0 0
        1 1 1 1            1 1 1 1            1 1 1 1
Floyd’s Algorithm: All-Pairs Shortest Paths
Problem: In a weighted (di)graph, find the shortest paths between every pair of vertices.

Same idea: construct the solution through a series of matrices D(0), …, D(n) using increasing subsets of the vertices allowed as intermediate.

[Figure: a weighted digraph on four vertices used as the running example]
Floyd’s Algorithm (matrix generation)
On the kth iteration, the algorithm determines the shortest paths between every pair of vertices i, j that use only vertices among 1, …, k as intermediate:

D(k)[i,j] = min { D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j] }

i.e., the shorter of the old path from i to j and the path from i to k followed by the path from k to j.
Floyd's Algorithm:
Objective:
• Find the shortest path from each vertex to all other vertices of a given weighted connected graph (directed or undirected); this is known as the All-Pairs Shortest Paths problem.
• Anany Levitin refers to this all-pairs shortest path algorithm as Floyd's algorithm.

Input: A weighted graph.

Output: A graph/matrix with the shortest paths among all the vertices.
Floyd’s Algorithm (example)
For the weighted digraph with edges a→c = 3, b→a = 2, c→b = 7, c→d = 1, d→a = 6:

D(0) =   a  b  c  d
    a    0  ∞  3  ∞
    b    2  0  ∞  ∞
    c    ∞  7  0  1
    d    6  ∞  ∞  0

• If there is no direct (edge) connection between two vertices, the entry is ∞ (infinity).
• The distance from each vertex to itself is 0.
Floyd’s Algorithm (pseudocode and analysis)

Time efficiency: Θ(n³)

Space efficiency: Matrices can be written over their predecessors.
Note: The shortest paths themselves can be found, too.
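A minimal Python sketch consistent with the recurrence above (the name floyd and the driver code are illustrative, not from the slides):

from math import inf

def floyd(W):
    # W: n-by-n weight matrix with 0 on the diagonal and inf where no edge exists.
    # Returns the matrix of shortest-path distances, overwriting D in place.
    n = len(W)
    D = [row[:] for row in W]            # D(0) = W
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # keep the shorter of the current path and the path through k
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# vertices a, b, c, d from the example above
W = [[0, inf, 3, inf],
     [2, 0, inf, inf],
     [inf, 7, 0, 1],
     [6, inf, inf, 0]]
for row in floyd(W):
    print(row)   # the row for c ends up [7, 7, 0, 1], matching D(4) below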
Floyd’s Algorithm (example)
For the same digraph, relabeled with vertices 1–4 (edges 1→3 = 3, 2→1 = 2, 3→2 = 7, 3→4 = 1, 4→1 = 6):

D(0) =  0  ∞  3  ∞    D(1) =  0  ∞  3  ∞
        2  0  ∞  ∞            2  0  5  ∞
        ∞  7  0  1            ∞  7  0  1
        6  ∞  ∞  0            6  ∞  9  0

D(2) =  0  ∞  3  ∞    D(3) =  0 10  3  4    D(4) =  0 10  3  4
        2  0  5  ∞            2  0  5  6            2  0  5  6
        9  7  0  1            9  7  0  1            7  7  0  1
        6  ∞  9  0            6 16  9  0            6 16  9  0

In D(3), the distance from vertex 3 to vertex 1 without allowing vertex 4 as an intermediate is 9; in D(4), allowing vertex 4 reduces it to 7 (the path 3→4→1 of length 1 + 6).
Optimal Binary Search Tree
Dynamic Programming
Binary Search Tree:
• A binary search tree is a data structure whose principal application is to implement a dictionary.
• To increase search efficiency, more frequently sought keys are placed nearer to the root and less frequently sought keys farther from the root, using the probability of searching for each key. This arrangement is called an optimal binary search tree.
Binary Search Tree:
OBST Definition:
• Let a1, a2, …, an be distinct keys ordered from the smallest to the largest value, and let p1, p2, …, pn be the probabilities of searching for the corresponding keys.
• Let p(i) be the probability of a successful search and q(i) the probability of an unsuccessful search.
• A tree built with optimum cost from Σ (i = 1 to n) p(i) and Σ (i = 1 to n) q(i) is called an optimal binary search tree.
• The cost of the tree is denoted by C[i, j] and its corresponding root by R[i, j].
Problem Statement
Given n keys a1 < … < an and probabilities p1, …, pn of searching for them, find a BST with a minimum average number of comparisons in a successful search.

Since the total number of BSTs with n nodes is given by the Catalan number

Cn = (2n)! / ((n + 1)! n!),

which grows exponentially (for example, C4 = 14 and C10 = 16,796), brute force is hopeless.
Example
What is an optimal BST for keys A, B, C, and D with search probabilities 0.1, 0.2, 0.4, and 0.3, respectively?

Consider two candidate trees: the first with A at the root and B, C, D forming a chain below it, the second with B at the root, A and C as its children, and D below C. The average number of comparisons in a successful search is

for the first tree: (0.1 × 1) + (0.2 × 2) + (0.4 × 3) + (0.3 × 4) = 2.9, and
for the second tree: (0.1 × 2) + (0.2 × 1) + (0.4 × 2) + (0.3 × 3) = 2.1.
Dynamic Programming for Optimal BST Problem
• Let T[i,j] be an optimal BST for keys ai < … < aj, and let C[i,j] be the minimum average number of comparisons made in T[i,j], for 1 ≤ i ≤ j ≤ n.
• Consider the optimal BSTs among all BSTs with some ak (i ≤ k ≤ j) as their root; T[i,j] is the best among them. Its left subtree is an optimal BST for ai, …, ak-1 and its right subtree is an optimal BST for ak+1, …, aj.
Dynamic Programming for Optimal BST Problem
The resulting recurrence for the minimum average number of comparisons is

C[i, j] = min over i ≤ k ≤ j of { C[i, k−1] + C[k+1, j] } + Σ (s = i to j) ps,  for 1 ≤ i ≤ j ≤ n,

with C[i, i] = pi and C[i, i−1] = 0. The dynamic programming algorithm fills the cost table C and the root table R diagonal by diagonal to construct an optimal binary search tree.
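A Python sketch of this recurrence and table fill (obst is an illustrative name; probabilities are passed 1-indexed with a dummy p[0], an assumed convention):

def obst(p):
    # p[1..n]: search probabilities of keys a1 < ... < an (p[0] unused).
    n = len(p) - 1
    # C[i][j]: minimum average comparisons for keys ai..aj; R[i][j]: root index.
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]
        R[i][i] = i
    for d in range(1, n):                # d = j - i, the diagonal number
        for i in range(1, n - d + 1):
            j = i + d
            best, best_k = float("inf"), i
            for k in range(i, j + 1):    # try each ak as the root
                val = C[i][k - 1] + C[k + 1][j]
                if val < best:
                    best, best_k = val, k
            C[i][j] = best + sum(p[i:j + 1])
            R[i][j] = best_k
    return C[1][n], R

avg, R = obst([0, 0.1, 0.2, 0.4, 0.3])   # keys A, B, C, D
print(avg)        # 1.7 (up to floating-point rounding)
print(R[1][4])    # 3: key C is the root of the optimal tree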
Example
[Tables: the cost table C and root table R computed for the keys A, B, C, D with probabilities 0.1, 0.2, 0.4, 0.3; the minimum average cost is C[1, 4] = 1.7, with key C as the root of the optimal tree.]
Analysis:
• The algorithm’s space efficiency is quadratic.
• The time efficiency of this algorithm is cubic.
• A more careful analysis shows that entries in the root table are always nondecreasing along each row and column.
• This limits the values for R(i, j) to the range R(i, j − 1), …, R(i + 1, j) and makes it possible to reduce the running time of the algorithm to Θ(n²).
Knapsack Problem
Dynamic Programming
Problem Statement
Given n items with
integer weights: w1, w2, …, wn
values: v1, v2, …, vn
and a knapsack of integer capacity W,
find the most valuable subset of the items that fits into the knapsack.
Recurrence relation
• A recurrence relation expresses a solution to an instance of the knapsack problem in terms of solutions to its smaller subinstances.

F(i, j) is the value of an optimal solution to this instance, i.e., the value of the most valuable subset of the first i items that fit into the knapsack of capacity j:

F(i, j) = max { F(i − 1, j), vi + F(i − 1, j − wi) }   if j − wi ≥ 0,
F(i, j) = F(i − 1, j)                                  if j − wi < 0.

Initial conditions: F(0, j) = 0 for j ≥ 0 and F(i, 0) = 0 for i ≥ 0.
Solution to a knapsack instance
The subsets of the first i items that fit into the knapsack of capacity j are divided into two categories: those that do not include the ith item and those that do.
1. Among the subsets that do not include the ith item, the value of an optimal subset is, by definition, F(i − 1, j).
2. Among the subsets that do include the ith item (hence, j − wi ≥ 0), an optimal subset is made up of this item and an optimal subset of the first i − 1 items that fits into the knapsack of capacity j − wi. The value of such an optimal subset is vi + F(i − 1, j − wi).
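A bottom-up Python sketch of this recurrence (knapsack is an illustrative name). The sample instance is Levitin's textbook instance, which is consistent with the backtrace in the Optimal solution section below:

def knapsack(w, v, W):
    # w[1..n], v[1..n]: weights and values (index 0 unused); W: capacity.
    n = len(w) - 1
    # F[i][j]: best value using the first i items with capacity j.
    F = [[0] * (W + 1) for _ in range(n + 1)]   # initial conditions: row 0, column 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j >= w[i]:
                # either skip item i, or take it and fill the remaining j - w[i]
                F[i][j] = max(F[i - 1][j], v[i] + F[i - 1][j - w[i]])
            else:
                F[i][j] = F[i - 1][j]
    return F

# weights 2, 1, 3, 2; values 12, 10, 20, 15; capacity 5
F = knapsack([0, 2, 1, 3, 2], [0, 12, 10, 20, 15], 5)
print(F[4][5])   # 37, the value of the optimal subset {item 1, item 2, item 4}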
Table for solving the knapsack problem by
dynamic programming
Our goal is to find F(n, W), the maximal value of a subset of the n given items that fits into the knapsack of capacity W, and an optimal subset itself.
Example
[Table: the dynamic programming table for the four-item instance with capacity W = 5; the goal entry is F(4, 5) = 37.]
Memory functions
• The classical approach works bottom up: it fills a table with solutions to all smaller subproblems.
• An unsatisfying aspect of this approach is that solutions to some of these smaller subproblems are often not necessary for solving the given problem.
• A method that solves only the necessary subproblems, and solves each of them only once, is called the method of memory functions.
Memory function method for Knapsack
• This method solves a given problem in the top-down manner but, in
addition, maintains a table of the kind that would have been used by
a bottom-up dynamic programming algorithm.
• All the table’s entries are initialized with a special “null” symbol to indicate that they have not yet been calculated.
• Thereafter, whenever a new value needs to be calculated, the method
checks the corresponding entry in the table first: if this entry is not
“null,” it is simply retrieved from the table; otherwise, it is computed
by the recursive call whose result is then recorded in the table.
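A top-down Python sketch of this method on the same instance (mf_knapsack_solver is an illustrative name; −1 plays the role of the “null” symbol):

def mf_knapsack_solver(w, v, W):
    n = len(w) - 1
    F = [[-1] * (W + 1) for _ in range(n + 1)]   # -1 means "not yet calculated"
    for j in range(W + 1):
        F[0][j] = 0          # no items: value 0
    for i in range(n + 1):
        F[i][0] = 0          # no capacity: value 0

    def mf(i, j):
        if F[i][j] < 0:      # null entry: compute it recursively and record it
            if j < w[i]:
                F[i][j] = mf(i - 1, j)
            else:
                F[i][j] = max(mf(i - 1, j), v[i] + mf(i - 1, j - w[i]))
        return F[i][j]       # otherwise simply retrieve it from the table

    return mf(n, W), F

best, F = mf_knapsack_solver([0, 2, 1, 3, 2], [0, 12, 10, 20, 15], 5)
print(best)   # 37; entries never needed remain -1 ("null")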
Solving the knapsack problem by the memory function algorithm
[Table: entries computed by the memory function algorithm; entries that were never needed remain “null”.]
Optimal solution
• The optimal subset is found by tracing back the computations in the table.
• Since F(4, 5) > F(3, 5), item 4 has to be included in an optimal solution, along with an optimal subset for filling the 5 − 2 = 3 remaining units of the knapsack capacity.
• The value of the latter is F(3, 3). Since F(3, 3) = F(2, 3), item 3 need not be in an optimal subset.
• Since F(2, 3) > F(1, 3), item 2 is a part of an optimal selection, which leaves F(1, 3 − 1) = F(1, 2) to specify its remaining composition.
• Since F(1, 2) > F(0, 2), item 1 is the final part of the optimal solution {item 1, item 2, item 4}.
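A Python sketch of this trace-back (optimal_subset is an illustrative name; it rebuilds the bottom-up table so the snippet is self-contained):

def optimal_subset(w, v, W):
    # Rebuild the bottom-up table, then trace back from F[n][W]:
    # item i is in the optimal subset iff taking it changed the value.
    n = len(w) - 1
    F = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            F[i][j] = F[i - 1][j]
            if j >= w[i]:
                F[i][j] = max(F[i][j], v[i] + F[i - 1][j - w[i]])
    items, j = [], W
    for i in range(n, 0, -1):
        if F[i][j] != F[i - 1][j]:   # F(i, j) > F(i-1, j): item i was included
            items.append(i)
            j -= w[i]                # fill the remaining capacity
    return sorted(items)

print(optimal_subset([0, 2, 1, 3, 2], [0, 12, 10, 20, 15], 5))   # [1, 2, 4]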
Algorithm
[Pseudocode for the memory function knapsack algorithm.]
