5. Dynamic Programming
(Figure: recursion tree of fib(4) — fib(4) branches into fib(3) and fib(2), down to fib(1) and fib(0); the subtree for fib(2) appears repeatedly.)
Dynamic Programming 73
- In the above case, fib(2) was evaluated three times (overlapping of subproblems).
- If n is large, then the other values of fib (subproblems) are re-evaluated many times, which leads to an exponential-time algorithm.
- Instead of solving the same problem again and again, it is better to store the result once and reuse it; this is a way to reduce the complexity.
- Memoization works like this: begin with a recursive function and add a table that maps the parameters of the function to the results calculated by the function.
- When the function is called with parameter values that have already been seen once, the stored value is returned from the table.

Improving:
- Now, we see how DP brings down this problem's complexity from exponential to polynomial.
- We have two methods for this:
- 1. Bottom-up approach: Start with the lower values of the input and slowly calculate the higher values.

    int fib[n];
    int fib(int n)
    {
        if (n <= 1)
            return n;
        fib[0] ← 0
        fib[1] ← 1
        for i ← 2 to n
            fib[i] ← fib[i-1] + fib[i-2];
        return fib[n];
    }

- 2. Top-down approach: We just save the values of the recursive calls and use them in the future.
- The implementation:

    int fib[n];
    for (i ← 0 to n)
        fib[i] ← 0;   /* initialisation */

    int fib(int n)
    {
        if (n == 0) return 0;
        if (n == 1) return 1;
        if (fib[n] != 0)   /* check if already calculated */
            return fib[n];
        return fib[n] ← fib(n-1) + fib(n-2);
    }

- In both methods, the complexity of computing the Fibonacci series is reduced to O(n), because we reuse already computed values from the table.
TC: O(n).
SC: O(n).

Note:
Both the approaches can be used for dynamic programming problems.

Matrix Chain Multiplication

Problem:
Let's say a series of matrices is given, A1 × A2 × A3 × ... × An, with their dimension values; what is the right way to parenthesize them so that the total number of multiplications is minimum? Assume that we are using standard matrix multiplication and not Strassen's matrix multiplication algorithm.

Input:
Chain of matrices A1 × A2 × A3 × ... × An, where Ai has dimensions (size) Pi-1 × Pi. The values are specified in an array P.

Goal:
Finding a way of multiplying the matrices (in parenthesized form) such that it gives the optimal (minimum) number of multiplications.

Solution:
- We know from mathematics that matrix multiplication is associative.
- Parenthesizing does not have any effect on the result.
- For example, with 4 matrices A, B, C, and D, the possible parenthesizations are:
  (A(BC))D, ((AB)C)D, (AB)(CD), A(B(CD)), A((BC)D)
- The number of ways we can parenthesize the matrix chain = (2n)! / ((n + 1)! n!), where n = number of matrices − 1.
- Multiplying an A(p×q) matrix with a B(q×r) matrix requires p*q*r multiplications, i.e., that many scalar multiplications.
- Different ways to parenthesize the matrices produce different numbers of scalar multiplications.
- Choosing the best parenthesization by the brute force method takes O(2^n) time.
- This time complexity can be improved using dynamic programming.
- Let M[i, j] represent the minimum number of scalar multiplications required to multiply Ai…Aj:

  M[i, j] = 0, if i = j
  M[i, j] = min over i ≤ k < j of { M[i, k] + M[k+1, j] + p[i-1] × p[k] × p[j] }, if i < j

- By using the above recursive equation, we find the point k that minimises the number of scalar multiplications.
- After calculating all the possible values of k, the value which gives the minimum number of scalar multiplications is selected.
- For reconstructing the optimal parenthesization, we use a table (say, S[i, j]).
- By using the bottom-up technique, we can evaluate M[i, j] and S[i, j].

/* p holds the sizes of the matrices; matrix i has dimension p[i-1] × p[i].
   M[i, j] is the least cost of multiplying matrices i through j.
   S[i, j] saves the multiplication (split) point, which we use for
   backtracking; length is the size of the p array. */
void MatrixChainOrder(int p[], int length)
{
    int n = length - 1, M[n][n], S[n][n];
    for i ← 1 to n
        M[i][i] ← 0;
    // fill in the matrix by diagonals
    for l ← 2 to n   // l is the chain length
    {
        for i ← 1 to n - l + 1
        {
            int j ← i + l - 1;
            M[i][j] ← MAX_VALUE;
            /* try all possible division points i..k and k+1..j */
            for k ← i to j - 1
            {
                int thisCost ← M[i][k] + M[k+1][j] + p[i-1] * p[k] * p[j];
                if (thisCost < M[i][j])
                {
                    M[i][j] ← thisCost;
                    S[i][j] ← k;
                }
            }
        }
    }
}
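The same bottom-up procedure can be written as a runnable sketch in Python (the function name and the sample dimension array are illustrative, not from the text):

```python
import sys

def matrix_chain_order(p):
    """Bottom-up matrix-chain order. p[i-1] x p[i] is the dimension of
    matrix i, so there are n = len(p) - 1 matrices. Returns (M, S):
    M[i][j] = least scalar-multiplication cost of A_i..A_j, and
    S[i][j] = split point k chosen for that range (1-based, as in the text)."""
    n = len(p) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]
    S = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):            # l is the chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            M[i][j] = sys.maxsize
            for k in range(i, j):        # try every split A_i..A_k | A_{k+1}..A_j
                cost = M[i][k] + M[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < M[i][j]:
                    M[i][j] = cost
                    S[i][j] = k
    return M, S

M, S = matrix_chain_order([10, 20, 30, 40])
print(M[1][3])   # 18000: (A1 A2) A3 costs 10*20*30 + 10*30*40
```

Here 18000 beats the alternative A1 (A2 A3), which would cost 20*30*40 + 10*20*40 = 32000.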
Top-Down Dynamic Programming Matrix Chain Multiplication

- The top-down method is also called memoization (or the memoized method).
- Both the top-down method and the bottom-up method solve the same unique subproblems.
- The basic similarity between top-down and bottom-up dynamic programming is that the number of function calls in the top-down method and the number of cells filled in the bottom-up method are almost the same.
- In the top-down method, we do not compute any function twice. Whenever we compute a function for the 1st time, we save its value in the table; the next time we want that value, we take it from the table.
MEMOIZED_MATRIX_CHAIN(P)
{
1. n = p.length – 1   /* p is a sequence of all dimensions, and n is equal to the number of matrices */
2. Let m[1….n, 1….n] be a new table
/* m[i, j] represents the least number of scalar
multiplication needed to multiply Ai….Aj */
3. for i = 1 to n
4. for j = 1 to n
5. m[i, j] = ∞
6. return LOOKUP_CHAIN (m, p, 1, n)
}
LOOKUP_CHAIN (m, p, i, j)
{
7. if m[i, j] < ∞   /* these two lines check whether the entry is visited for the 1st time or not */
8. return m[i, j]
9. if i == j
10. m[i, j] = 0
11. else for k = i to j – 1
12. q = LOOKUP_CHAIN(m, p, i, k) + LOOKUP_CHAIN (m, p, k+1, j) + pi-1 pk pj
13. if q < m[i, j]
14. m[i, j] = q
15. return m[i, j]
}
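The memoized scheme above can be sketched in Python with a cache standing in for the explicit m[1..n, 1..n] table (names are illustrative):

```python
from functools import lru_cache

def memoized_matrix_chain(p):
    """Top-down version of the matrix-chain computation: LOOKUP_CHAIN
    realised with a memo cache instead of a table initialised to infinity."""
    n = len(p) - 1  # number of matrices

    @lru_cache(maxsize=None)
    def lookup(i, j):
        if i == j:
            return 0
        # try every split point, exactly as in lines 11-14 above
        return min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return lookup(1, n)

print(memoized_matrix_chain([10, 20, 30, 40]))   # 18000, same as bottom-up
```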
Analysis:
- In lines 3 to 5, the procedure initially places infinity in the entire table.
- This is done to know whether the program is visiting an entry for the 1st time or the entry has already been computed.
- If an entry is ∞, then the program is visiting that cell for the 1st time.

Time complexity:
- The number of distinct sub-problems is O(n²), and whenever the program calls any distinct sub-problem, the for-loop in the worst case runs 'n' times.
- Therefore, the time complexity is O(n³), the same as the bottom-up method.

Space complexity:
- The actual space used is more because of recursion, but the order of the space complexity is not going to change.
- The depth of the recursion tree is O(n). When the depth of the recursion tree is O(n), the stack size required is O(n), because at any time the maximum number of elements present in the stack equals the depth of the recursion tree; the table m[n, n] takes O(n²).
- Therefore, space complexity = O(n²) + O(n) = O(n²), which is also the same as the bottom-up method.

Recursive solution:
- Before the DP solution, we form a recursive solution for the LCS problem, and examine it later.
- Given two strings, "ABCBDAB" and "BDCABA", draw lines from the letters in the first string to the matching letters in the second; no two lines should cross each other.
(Figure: the letters of "ABCBDAB" joined to their matches in "BDCABA" by non-crossing lines.)
- If xm = yn, then we solve 1 subproblem: evaluating the longest common subsequence of X(m-1) and Y(n-1).
- If xm ≠ yn, then we must solve two subproblems: finding an LCS of X(m-1) and Y, and finding an LCS of X and Y(n-1).
- Whichever of these two LCSs is longer is the LCS of X and Y; these cases exhaust all possibilities.
- One of the optimal subproblem solutions appears inside a longest common subsequence of X and Y.
- As in the matrix chain multiplication problem, a recursive solution to the LCS problem requires us to set up a recurrence for the value of an optimal solution.
- Let c[i, j] = length of the LCS of the sequences <x1, x2, ...xi> and <y1, y2, ...yj>.
- If i = 0 or j = 0, then one of the sequences has length 0, so the LCS has length 0.
- The recursive formula comes from the optimal substructure of the LCS problem:

  c[i, j] = 0, if i = 0 or j = 0
  c[i, j] = c[i − 1, j − 1] + 1, if i, j > 0 and xi = yj
  c[i, j] = max(c[i, j − 1], c[i − 1, j]), if i, j > 0 and xi ≠ yj

- When xi = yj, we consider the subproblem of evaluating an LCS of X(i-1) and Y(j-1); otherwise, we evaluate two subproblems: an LCS of Xi and Y(j-1), and an LCS of X(i-1) and Yj.
- In the LCS problem we have only O(mn) different subproblems, so we can use DP to evaluate the problem.
- Method LCS_LENGTH takes 2 sequences X = <x1, x2, …, xm> and Y = <y1, y2, …, yn> as inputs.
- We store the c[i, j] values in a table c[0..m, 0..n] and evaluate the entries in row-major order (the procedure fills in the first row of c from left to right, then the second row, and so on).
- The table b[1...m, 1...n] helps us in constructing an optimal solution.
- b[i, j] points to the table entry corresponding to the optimal subproblem solution selected when computing c[i, j].
- The procedure returns the tables b and c; c[m, n] contains the length of an LCS of X and Y.

LCS_LENGTH(X, Y)
1.  m ← X.length
2.  n ← Y.length
3.  let b[1…m, 1…n] and c[0…m, 0…n] be new tables
4.  for i ← 1 to m
5.      c[i, 0] ← 0
6.  for j ← 0 to n
7.      c[0, j] ← 0
8.  for i ← 1 to m
9.      for j ← 1 to n
10.         if xi == yj
11.             c[i, j] ← c[i-1, j-1] + 1
12.             b[i, j] ← "↖"
13.         elseif c[i-1, j] ≥ c[i, j-1]
14.             c[i, j] ← c[i-1, j]
15.             b[i, j] ← "↑"
16.         else c[i, j] ← c[i, j-1]
17.             b[i, j] ← "←"
18. return c and b

- The running time of the procedure is θ(mn), since each table entry takes θ(1) time to compute.
- Space complexity is θ(mn) because of the tables.

Example:
Let X = <P, Q, R, Q, S, P, Q> and Y = <Q, S, R, P, Q, P> be two sequences.
PRINT_LCS(b, X, i, j)
1. if i == 0 or j == 0
2.     return
3. if b[i, j] == "↖"
4.     PRINT_LCS(b, X, i-1, j-1)
5.     print xi
6. elseif b[i, j] == "↑"
7.     PRINT_LCS(b, X, i-1, j)
8. else PRINT_LCS(b, X, i, j-1)
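The two procedures can be combined into one runnable Python sketch; here the reconstruction re-examines the c table instead of storing the b arrows, and the names are illustrative:

```python
def lcs(X, Y):
    """Bottom-up LCS_LENGTH plus a PRINT_LCS-style walk-back.
    Returns (length, one longest common subsequence)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # walk back from c[m][n], mirroring the b-table arrows
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))

length, seq = lcs("ABCBDAB", "BDCABA")
print(length)   # 4 (one LCS is "BCBA")
```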
Previous Years' Question

Consider the data given in the above question. The value of l(i, j) could be obtained by dynamic programming based on the correct recursive definition of l(i, j) of the form given above, using an array L[M, N], where M = m + 1 and N = n + 1, such that L[i, j] = l(i, j).
Which one of the following statements would be true regarding the dynamic programming solution for the recursive definition of l(i, j)?
(A) All elements of L should be initialized to 0 for the value of l(i, j) to be properly computed.
(B) The values of l(i, j) may be computed in a row major order or column major order of L[M, N].
(C) The values of l(i, j) cannot be computed in either row major order or column major order of L[M, N].
(D) L[p, q] needs to be computed before L[r, s] if either p < r or q < s.
Solution: (B) [GATE 2009]

Previous Years' Question

Consider two strings A = "qpqrr" and B = "pqprqrp". Let X be the length of the longest common subsequence (not necessarily contiguous) between A and B, and let Y be the number of such longest common subsequences between A and B. Then X + 10Y = ____
Solution: 34 [GATE 2014 (Set-2)]

Multistage Graph:
- Assume that there are 'k' stages in a graph.
- A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k.
- In addition, if <u, v> is an edge in E, then u is in Vi and v belongs to Vi+1 for some i, 1 ≤ i < k.
- Let 's' and 't' be the source and the destination, respectively.
- The sum of the costs of the edges on a path from the source (s) to the destination (t) is the path's cost.
- The objective of the MULTISTAGE GRAPH problem is to find a minimum-cost path from 's' to 't'.
- Each set Vi defines a stage in the graph. Every path from 's' to 't' starts in stage-1, goes to stage-2, then to stage-3, then to stage-4, and so on, and terminates in stage-k.
- This MULTISTAGE GRAPH problem can be solved in 2 ways:
  ο Forward Method
  ο Backward Method

Forward method:
Algorithm FGraph(G, k, n, p)
// The input is a k-stage graph G = (V, E) with 'n' vertices
// indexed in order of stages. E is a set of edges,
// c[i, j] is the cost of edge <i, j>, and p[k] is a minimum-cost path.
{
    cost[n] = 0.0;
    for j = n-1 to 1 do
    {
        // compute cost[j]:
        // let 'r' be a vertex such that <j, r> is an edge of 'G' and
        // c[j, r] + cost[r] is minimum.
        cost[j] = c[j, r] + cost[r];
        d[j] = r;
    }
    // find a minimum-cost path.
    p[1] = 1;
    p[k] = n;
    for j = 2 to k-1 do
        p[j] = d[p[j-1]];
}

- In this FORWARD approach, we find the cost of each and every node, starting from the 'k'th stage and moving towards the 1st stage.
- We then find the minimum-cost path from the source to the destination, i.e., from stage-1 to stage-k.
- Maintain a cost array cost[n] which stores the distance from any vertex to the destination.
- If a vertex has more than one outgoing path, then we choose the minimum-distance path, and the intermediate vertex which gives the minimum-distance path is stored in the distance array 'd'.
- Thus, we can find the minimum-cost path from each and every vertex.
- Finally, cost(1) gives the shortest distance from the source to the destination.
- For finding the path, start from vertex-1; the distance array D(1) gives the minimum-cost neighbour vertex, which in turn gives the next nearest vertex; proceed in this way till we reach the destination.
- For a 'k'-stage graph, there will be 'k' vertices in the path.

Example:
(Figure: a 5-stage graph on vertices 1 to 12, with stages V1…V5.)
- In the above graph, V1…V5 represent the stages. This 5-stage graph can be solved by using the forward approach as follows.

STEPS: DESTINATION, D
- Cost(12) = 0    D(12) = 0
- Cost(11) = 5    D(11) = 12
- Cost(10) = 2    D(10) = 12
- Cost(9) = 4     D(9) = 12

For the forward approach,
Cost(i, j) = min { c(j, l) + Cost(i+1, l) },  l ∈ Vi+1, (j, l) ∈ E

cost(8) = min(c(8,10) + cost(10), c(8,11) + cost(11)) = min(5+2, 6+5) = 7
cost(8) = 7  => D(8) = 10
cost(7) = min(c(7,9) + cost(9), c(7,10) + cost(10)) = min(4+4, 3+2) = min(8, 5) = 5
cost(7) = 5  => D(7) = 10
cost(6) = min(c(6,9) + cost(9), c(6,10) + cost(10)) = min(6+4, 5+2) = min(10, 7) = 7
cost(6) = 7  => D(6) = 10
cost(5) = min(c(5,7) + cost(7), c(5,8) + cost(8)) = min(11+5, 8+7) = min(16, 15) = 15
cost(5) = 15 => D(5) = 8
cost(4) = min(c(4,8) + cost(8)) = min(11+7) = 18
cost(4) = 18 => D(4) = 8
cost(3) = min(c(3,6) + cost(6), c(3,7) + cost(7)) = min(2+7, 7+5) = min(9, 12) = 9
cost(3) = 9  => D(3) = 6
cost(2) = min(c(2,6) + cost(6), c(2,7) + cost(7), c(2,8) + cost(8)) = min(4+7, 2+5, 1+7) = min(11, 7, 8) = 7
cost(2) = 7  => D(2) = 7
cost(1) = min(c(1,2) + cost(2), c(1,3) + cost(3), c(1,4) + cost(4), c(1,5) + cost(5))
        = min(9+7, 7+9, 3+18, 2+15) = min(16, 16, 21, 17) = 16
cost(1) = 16 => D(1) = 2

Start from vertex-1:
D(1) = 2
D(2) = 7
D (7) = 10
D (10) = 12
So, the minimum-cost path is 1 → 2 → 7 → 10 → 12.
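The forward sweep above can be checked with a small Python sketch. The edge-cost dictionary below is read off the worked example and is otherwise an assumption:

```python
import math

# Edge costs of the worked 5-stage example (12 vertices, 1 = source, 12 = sink).
c = {(1, 2): 9, (1, 3): 7, (1, 4): 3, (1, 5): 2,
     (2, 6): 4, (2, 7): 2, (2, 8): 1, (3, 6): 2, (3, 7): 7,
     (4, 8): 11, (5, 7): 11, (5, 8): 8,
     (6, 9): 6, (6, 10): 5, (7, 9): 4, (7, 10): 3,
     (8, 10): 5, (8, 11): 6, (9, 12): 4, (10, 12): 2, (11, 12): 5}

n = 12
cost = [math.inf] * (n + 1)
d = [0] * (n + 1)
cost[n] = 0
# forward approach: sweep from vertex n-1 down to 1
for j in range(n - 1, 0, -1):
    for (u, v), w in c.items():
        if u == j and w + cost[v] < cost[j]:
            cost[j] = w + cost[v]
            d[j] = v                      # remember the best successor

path = [1]
while path[-1] != n:
    path.append(d[path[-1]])

print(cost[1], path)   # 16 [1, 2, 7, 10, 12]
```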
Backward method:
- It is similar to the forward approach, but differs in only two or three ways.
- Maintain a cost array to store the cost of every vertex and a distance array to store the minimum-distance vertex.
- Find the cost of each and every vertex, starting from vertex-1 up to vertex-k.
- To find the path, start from vertex 'k'; the distance array D(k) gives the minimum-cost neighbour vertex, which in turn gives the next nearest neighbour vertex; proceed till we reach the source vertex.

Algorithm: backward method:
Algorithm BGraph(G, k, n, p)
// The input is a k-stage graph G = (V, E) with 'n' vertices
// indexed in order of stages. E is a set of edges,
// c[i, j] is the cost of edge <i, j> (i, j are the vertex numbers),
// and p[k] is a minimum-cost path.
{
    bcost[1] = 0.0;
    for j = 2 to n do
    {
        // compute bcost[j]:
        // let 'r' be a vertex such that <r, j> is an edge of 'G' and
        // bcost[r] + c[r, j] is minimum.
        bcost[j] = bcost[r] + c[r, j];
        d[j] = r;
    }
    // find a minimum-cost path.
    p[1] = 1;
    p[k] = n;
    for j = k-1 to 2 do
        p[j] = d[p[j+1]];
}

(Fig. 5.1: the same 5-stage graph, now processed from the source side.)

Cost(1) = 0  => D(1) = 0
Cost(2) = 9  => D(2) = 1
Cost(3) = 7  => D(3) = 1
Cost(4) = 3  => D(4) = 1
Cost(5) = 2  => D(5) = 1
Cost(6) = min(c(2,6) + cost(2), c(3,6) + cost(3)) = min(13, 9) = 9  => D(6) = 3
Cost(7) = min(c(3,7) + cost(3), c(5,7) + cost(5), c(2,7) + cost(2)) = min(14, 13, 11) = 11 => D(7) = 2
Cost(8) = min(c(2,8) + cost(2), c(4,8) + cost(4), c(5,8) + cost(5)) = min(10, 14, 10) = 10 => D(8) = 2
Cost(9) = min(c(6,9) + cost(6), c(7,9) + cost(7)) = min(15, 15) = 15 => D(9) = 6
Cost(10) = min(c(6,10) + cost(6), c(7,10) + cost(7), c(8,10) + cost(8)) = min(14, 14, 15) = 14 => D(10) = 6
Cost(11) = min(c(8,11) + cost(8)) = 16 => D(11) = 8
Cost(12) = min(c(9,12) + cost(9), c(10,12) + cost(10), c(11,12) + cost(11)) = min(19, 16, 21) = 16 => D(12) = 10

Start from vertex-12:
D(12) = 10
D(10) = 6
D(6) = 3
D(3) = 1
So, the minimum-cost path is 1 → 3 → 6 → 10 → 12 (edge costs 7, 2, 5, 2).
The cost is 16.

Travelling Salesman Problem

- The travelling salesman problem (TSP) is to find the shortest possible route that visits each city exactly once and returns to the starting point, given a set of cities and the distance between each pair of cities.
- Take note of the distinction between the Hamiltonian cycle problem and the TSP. The Hamiltonian cycle problem entails determining whether a tour exists that visits each city exactly once. The TSP is to find a minimum-weight Hamiltonian cycle. We know that Hamiltonian tours exist (because the graph is complete), and there are many of them.
- Let the vertices in the given set be numbered 1, 2, 3, 4, ..., n. Let's use 1 as the starting and ending point for the output. We find the minimum-cost path with 1 as the starting point, i as the ending point, and all vertices appearing exactly once.
- Let's say the cost of this path is cost(i); then the cost of the corresponding cycle is cost(i) + dist(i, 1), where dist(i, 1) is the distance from i to 1. Finally, we return the smallest of all the values [cost(i) + dist(i, 1)]. So far, this appears to be straightforward. The question now is how to obtain cost(i).
- We need a recursive relation in terms of sub-problems to calculate cost(i) using dynamic programming.
- Let's say C(S, i) is the cost of the minimum-cost path visiting each vertex in set S exactly once, starting at 1 and ending at i.
- We begin with all subsets of size 2 and calculate C(S, i) for each of them, then calculate C(S, i) for all subsets of size 3, and so on. It's worth noting that 1 must appear in each subset.

For a subset of cities S ⊆ {1, 2, ..., n} that includes 1, and j ∈ S, let C(S, j) be the length of the shortest path visiting each node in S exactly once, starting at 1 and ending at j.
When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.
Now, let's express C(S, j) in terms of smaller subproblems. We need to start at 1 and end at j; what should we pick as the second-to-last city? It has to be some i ∈ S, so the overall path length is the distance from 1 to i, namely C(S − {j}, i), plus the length of the final edge, dij. We must pick the best such i:

  C(S, j) = min over i ∈ S, i ≠ j of { C(S − {j}, i) + dij }

The subproblems are ordered by |S|. Here's the code:

C({1}, 1) = 0
for s = 2 to n:
    for all subsets S ⊆ {1, 2, ..., n} of size s and containing 1:
        C(S, 1) = ∞
        for all j ∈ S, j ≠ 1:
            C(S, j) = min{ C(S − {j}, i) + dij : i ∈ S, i ≠ j }
return min over j of C({1, ..., n}, j) + dj1

There are at most 2^n · n subproblems, and each one takes linear time to solve. The total running time is therefore O(n² 2^n).

Example:
- Consider the graph below.
(Figure: a complete graph on vertices 1-4 with edge weights d(1,2) = 10, d(1,3) = 15, d(1,4) = 20, d(2,4) = 25, d(3,4) = 30, d(2,3) = 35.)
- Matrix representation of the above graph:

         1    2    3    4
    1    0   10   15   20
    2   10    0   35   25
    3   15   35    0   30
    4   20   25   30    0
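Under the assumption that this matrix is the input, the recurrence can be run directly in Python (a Held-Karp sketch; names are illustrative, and the result agrees with the hand computation that follows):

```python
from itertools import combinations

# Distance matrix of the 4-city example above (cities 1..4).
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]

n = 4
INF = float("inf")
C = {(frozenset([1]), 1): 0}

def get(S, j):
    # C(S, 1) = INF for |S| > 1: a path cannot both start and end at 1
    return C.get((S, j), INF)

for s in range(2, n + 1):
    for rest in combinations(range(2, n + 1), s - 1):
        S = frozenset((1,) + rest)
        for j in rest:
            C[(S, j)] = min(get(S - {j}, i) + dist[i - 1][j - 1]
                            for i in S if i != j)

full = frozenset(range(1, n + 1))
tour = min(C[(full, j)] + dist[j - 1][0] for j in range(2, n + 1))
print(tour)   # 80
```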
- Let's start from node 1:
C(4,1) = 20,  C(3,1) = 15,  C(2,1) = 10
C({3},2) = d(2,3) + C(3,1) = 50
C({4},2) = d(2,4) + C(4,1) = 45
C({2},3) = d(3,2) + C(2,1) = 45
C({4},3) = d(3,4) + C(4,1) = 50
C({2},4) = d(4,2) + C(2,1) = 35
C({3},4) = d(4,3) + C(3,1) = 45
C({3,4},2) = min(d(2,3) + C({4},3), d(2,4) + C({3},4)) = min(85, 70) = 70
C({2,4},3) = min(d(3,2) + C({4},2), d(3,4) + C({2},4)) = min(80, 65) = 65
C({2,3},4) = min(d(4,2) + C({3},2), d(4,3) + C({2},3)) = min(75, 75) = 75
- Finally,
C({2,3,4},1) = min(d(1,2) + C({3,4},2), d(1,3) + C({2,4},3), d(1,4) + C({2,3},4))
             = min(80, 80, 95)
             = 80
∴ The optimal tour length is 80.
Optimal tour: 1-2-4-3-1.

0-1 Knapsack problem:

The 0-1 knapsack problem is as follows. A thief robbing a store finds n items. The ith item is worth vi dollars and weighs wi pounds, where vi and wi are integers. The thief wants to take as valuable a load as possible, but he can carry at most W pounds in his knapsack, for some integer W. Which items should he take? (We call this the 0-1 knapsack problem because, for each item, the thief must either take it or leave it behind; he cannot take a fractional amount of an item or take an item more than once.)

Dynamic Programming Approach

- Suppose we know that a particular item of weight w is in the solution. Then we must solve the subproblem on n − 1 items with maximum weight W − w.
- Thus, to take a bottom-up approach, we must solve the 0-1 knapsack problem for all items and all possible weights smaller than W.
- We'll build an (n + 1) by (W + 1) table of values, where the rows are indexed by item and the columns are indexed by total weight.
- For row i, column j, we decide whether or not it would be advantageous to include item i in the knapsack by comparing the total value of a knapsack including items 1 through i − 1 with max weight j against the total value of including items 1 through i − 1 with max weight j − (weight of item i) and also item i itself.
- To solve the problem, we simply examine the [n, W] entry of the table to determine the maximum value we can achieve.
- To read off the items we include, start with entry [n, W]. In general, proceed as follows:
- If entry [i, j] equals entry [i − 1, j], don't include item i, and examine entry [i − 1, j] next.
- If entry [i, j] doesn't equal entry [i − 1, j], include item i and examine entry [i − 1, j − (weight of item i)] next.

Algorithm 0-1 Knapsack(n, W)
1. Initialize a 2D matrix KS of size (n+1) × (W+1)
2. Initialize the 1st row and 1st column with zero
3. For itr ← 1 to n
4.     For j ← 1 to W
5.         If (j < w[itr])
6.             KS[itr, j] ← KS[itr-1, j]
7.         Else
8.             KS[itr, j] ← max(KS[itr-1, j], P[itr] + KS[itr-1, j − w[itr]])
9.     End for
10. End for
End

Analysis:
- Since the time to calculate each entry in the table KS[i, j] is constant, the time complexity is Θ(n × W), where n is the number of items and W is the capacity of the knapsack.
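A minimal Python version of the algorithm above (variable names are illustrative; weights and values play the roles of w and P):

```python
def knapsack(weights, values, W):
    """Bottom-up 0-1 knapsack table, as in the algorithm above.
    Row 0 and column 0 are the zero base cases; KS[n][W] is the answer."""
    n = len(weights)
    KS = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if j < weights[i - 1]:
                KS[i][j] = KS[i - 1][j]          # item i cannot fit
            else:
                KS[i][j] = max(KS[i - 1][j],
                               values[i - 1] + KS[i - 1][j - weights[i - 1]])
    return KS

KS = knapsack([1, 2, 3], [10, 15, 40], 6)
print(KS[3][6])   # 65: take all three items (total weight 6)
```

The sample data is the worked example that follows (weights 1, 2, 3 kg; values 10, 15, 40 rupees; W = 6 kg).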
Recurrence relation:
- In a knapsack problem, we are given a set of n items, where each item i is specified by a size/weight wi and a value Pi. We are also given the size bound W, the size (capacity) of our knapsack.

  KS(i, w) = 0, if i = 0 or w = 0
  KS(i, w) = KS(i − 1, w), if wi > w
  KS(i, w) = max(Pi + KS(i − 1, w − wi), KS(i − 1, w)), if wi ≤ w

where KS(i, w) is the best value that can be achieved with only the first i items and capacity w.

Example:

  Item                 1    2    3
  Weight (in kgs)      1    2    3
  Values (in rupees)  10   15   40

Capacity of bag = W = 6 kgs

        0    1    2    3    4    5    6
   0    0    0    0    0    0    0    0
   1    0   10   10   10   10   10   10
   2    0   10   15   25   25   25   25
   3    0   10   15   40   50   55   65

KS(1,1) = max(P1 + KS(0,0), KS(0,1)) = max(10+0, 0) = 10
KS(1,2) = max(P1 + KS(0,1), KS(0,2)) = max(10+0, 0) = 10
KS(1,3) = max(P1 + KS(0,2), KS(0,3)) = max(10+0, 0) = 10
KS(1,4) = max(P1 + KS(0,3), KS(0,4)) = max(10+0, 0) = 10
KS(1,5) = max(P1 + KS(0,4), KS(0,5)) = max(10+0, 0) = 10
KS(1,6) = max(P1 + KS(0,5), KS(0,6)) = max(10+0, 0) = 10
KS(2,1) = KS(1,1) = 10
KS(2,2) = max(P2 + KS(1,0), KS(1,2)) = max(15+0, 10) = 15
KS(2,3) = max(P2 + KS(1,1), KS(1,3)) = max(15+10, 10) = 25
KS(2,4) = max(P2 + KS(1,2), KS(1,4)) = max(15+10, 10) = 25
KS(2,5) = max(P2 + KS(1,3), KS(1,5)) = max(15+10, 10) = 25
KS(2,6) = max(P2 + KS(1,4), KS(1,6)) = max(15+10, 10) = 25
KS(3,1) = KS(2,1) = 10
KS(3,2) = KS(2,2) = 15
KS(3,3) = max(P3 + KS(2,0), KS(2,3)) = max(40+0, 25) = 40
KS(3,4) = max(P3 + KS(2,1), KS(2,4)) = max(40+10, 25) = 50
KS(3,5) = max(P3 + KS(2,2), KS(2,5)) = max(40+15, 25) = 55
KS(3,6) = max(P3 + KS(2,3), KS(2,6)) = max(40+25, 25) = 65

Subset Sum Problem

Problem:
Given a sequence of n positive numbers A1, A2, …, An, give an algorithm which checks whether there exists a subset of A whose sum of all numbers is T.
- This is a variation of the knapsack problem. As an example, consider the following array: A = [3, 2, 4, 19, 3, 7, 13, 10, 6, 11]. Suppose we want to check whether there is any subset whose sum is 17. The answer is yes, because 4 + 13 = 17, and therefore {4, 13} is such a subset.
- The greedy method is not applicable to subset-sum. If we try to be greedy, we don't know whether to take the smallest number or the largest number. So, there is no way we could go for greedy.
Example:
Let set = {6, 2, 3, 1}. Check whether there is a subset whose sum is 5.

Solution:
Here, there is a subset {3, 2} whose sum is 5.
Here the greedy method fails: if we try to be greedy on the smaller numbers, it will take 1, and once we take 1, the remaining sum to be made is 4 — and no number or subset of the rest makes 4.
So, greedy doesn't work for subset-sum.

Brute-force method:
- If there are 'n' numbers, then any number can be present or absent in a subset. So, every number has 2 options. Hence, the number of subsets is equal to 2^n.
- The brute force method examines every subset, i.e., all 2^n of them. To examine each subset, it takes O(n) time.
- Therefore, time complexity = number of subsets × time taken for each subset = O(2^n) × O(n) = O(n·2^n).

Recursive equation:
- Let us assume that SS(i, S) denotes whether there is a subset of a1 to ai whose sum is equal to some number 'S'.
- First, we check the base conditions. If there are no elements, i.e., i = 0, and we want to produce some sum S ≠ 0, then it is not possible. So, it is False.
- If there are no elements, i.e., i = 0, and the required sum is S = 0, it is possible. So, it is True, because sum = 0 is always possible (with the empty subset).
- These two are the base conditions.
- If this sum 'S' has to be possible using the first i elements, then there are two cases:
Case 1: If we include the ith element in our subset, then a sum of (S − ai) should be possible with the remaining (i − 1) elements.
Case 2: If we don't include the ith element in our subset, then a sum of S should be possible with the remaining (i − 1) elements.
So, the recursive equation looks as shown below:

  SS(i, S) = True, if S = 0
  SS(i, S) = False, if i = 0 and S ≠ 0
  SS(i, S) = SS(i − 1, S), if S < ai
  SS(i, S) = SS(i − 1, S − ai) ∨ SS(i − 1, S), if S ≥ ai

- If the problem is the subset-sum SS(n, w) (where n positive numbers A1, A2, …, An are given and w is the target sum), then the number of unique sub-problems is O(nw).
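The recurrence can be tabulated directly; a small Python sketch (names illustrative):

```python
def subset_sum(a, T):
    """SS(i, S) recurrence, tabulated bottom-up: SS[i][s] is True when
    some subset of the first i numbers of a sums to s."""
    n = len(a)
    SS = [[False] * (T + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        SS[i][0] = True                 # the empty subset always gives sum 0
    for i in range(1, n + 1):
        for s in range(1, T + 1):
            if s < a[i - 1]:
                SS[i][s] = SS[i - 1][s]
            else:
                SS[i][s] = SS[i - 1][s - a[i - 1]] or SS[i - 1][s]
    return SS[n][T]

print(subset_sum([6, 3, 2, 1], 5))   # True ({3, 2})
```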
Solved Examples
1. Given the set S = {6, 3, 2, 1}, is there any subset whose sum is equal to 5?
Solution:
Since here the number of elements = 4 and the target sum is 5, the number of subproblems = 4 × 5 = 20.
                         Sum
                  0   1   2   3   4   5
             0    T   F   F   F   F   F
  number     1    T   F   F   F   F   F
  of         2    T   F   F   T   F   F
  elements   3    T   F   T   T   F   T
             4    T   T   T   T   T   T
- Whenever we want the sum = 0, it is always possible, whatever the elements, because the null set gives sum = 0.
- The entry at index (i, j) indicates whether sum j is possible with the first i elements.
SS(1,1) = SS(0,1) = False
[Since the 1st element is '6', it can't lead to a sum of 1, so we have to go for SS(i-1, S), i.e., SS(0,1).]
Similarly,
SS(2,1) = SS(1,1) = False
SS(2,2) = SS(1,2) = False
SS(2,3) = SS(1,0) ∨ SS(1,3) = True ∨ False = True
SS(2,4) = SS(1,1) ∨ SS(1,4) = False ∨ False = False
SS(2,5) = SS(1,2) ∨ SS(1,5) = False ∨ False = False
SS(3,1) = SS(2,1) (here, S < ai) = False
SS(3,2) = SS(2,0) ∨ SS(2,2) = True ∨ False = True
SS(3,3) = SS(2,1) ∨ SS(2,3) = False ∨ True = True
SS(3,4) = SS(2,2) ∨ SS(2,4) = False ∨ False = False
SS(3,5) = SS(2,3) ∨ SS(2,5) = True ∨ False = True
SS(4,1) = SS(3,0) ∨ SS(3,1) = True ∨ False = True
SS(4,2) = SS(3,1) ∨ SS(3,2) = False ∨ True = True
SS(4,3) = SS(3,2) ∨ SS(3,3) = True ∨ True = True
SS(4,4) = SS(3,3) ∨ SS(3,4) = True ∨ False = True
SS(4,5) = SS(3,4) ∨ SS(3,5) = False ∨ True = True
- Since the final answer is in cell (4,5), the final answer is True. The final answer will always be present in cell (n, w).
- The subset-sum table can be computed either in row-major order or in column-major order.

Time complexity:
- Here, the number of subproblems is (nw), and the time required to calculate each subproblem is O(1). Hence, time complexity = (nw) × O(1) = O(nw).

Note:
- Whether or not to use dynamic programming depends on the value of w.
- If 'w' is a big number, then the brute force method gives the better time complexity, i.e., O(2^n); otherwise, dynamic programming does.

Conclusion:
Time complexity = min( O(2^n), O(nw) )
- If w is n!, then 2^n is going to be better than O(nw).

Space complexity:
Space is required for the table. So, O(nw) is the space complexity.

All Pairs Shortest Path: Floyd-Warshall

Problem:
Given a weighted directed graph G = (V, E), where V = {1, 2, …, n}, find the shortest path between every pair of nodes in the graph.
- We can solve an all-pairs shortest-paths problem by running a single-source shortest-paths algorithm |V| times, once for each vertex as the source.
- If all the edges of the graph have positive weight, we can apply Dijkstra's algorithm.
- If we use the linear-array implementation of the min-priority queue, the running time is O(V³ + VE); since E = O(V²), this is O(V³).
- The binary min-heap implementation of the min-priority queue yields a running time of O((V + E) log V), which is an improvement if the graph is sparse.
- Alternatively, we can implement the min-priority queue with a Fibonacci heap, yielding a running time of O(V² log V + VE).
- With negative-weight edges we must instead run the slower Bellman-Ford algorithm once from each vertex; the resulting running time is O(V²E), which for a dense graph is O(V⁴).
- In the Floyd-Warshall algorithm, negative-weight edges are allowed.
y The Floyd-Warshall algorithm considers stronger statement. Because vertex k is
the intermediate vertices of a shortest path, where an intermediate vertex of a simple path P = <v1, v2, …, vl> is any vertex of P other than v1 or vl, that is, any vertex in the set {v2, v3, …, vl-1}.
y The Floyd-Warshall algorithm relies on the following observation.
y Under our assumption that the vertices of G are V = {1, 2, …, n}, let us consider a subset {1, 2, …, k} of vertices for some k.
y For any pair of vertices i, j ∈ V, consider all paths from i to j whose intermediate vertices are all drawn from {1, 2, …, k}, and let p be a minimum-weight path from among them. (Path p is simple.)
y The Floyd-Warshall algorithm exploits a relationship between path p and shortest paths from i to j with all intermediate vertices in the set {1, 2, …, k-1}.
y The relationship depends on whether or not k is an intermediate vertex of path p.
y If k is not an intermediate vertex of path p, then all intermediate vertices of path p are in the set {1, 2, …, k-1}. Thus, a shortest path from vertex i to vertex j with all intermediate vertices in the set {1, 2, …, k-1} is also a shortest path from i to j with all intermediate vertices in the set {1, 2, …, k}.
y If k is an intermediate vertex of path p, then we decompose p into i --p1--> k --p2--> j, as shown below:

Fig. 5.2: Path p split at k into p1 (from i to k) and p2 (from k to j); all intermediate vertices of p1 and p2 are in {1, 2, …, k-1}.

y p1 is a shortest path from vertex i to vertex k with all intermediate vertices in the set {1, 2, …, k}. In fact, we can make a slightly stronger statement: because k is not an intermediate vertex of path p1, all intermediate vertices of p1 are in the set {1, 2, …, k-1}. Therefore, p1 is a shortest path from i to k with all intermediate vertices in the set {1, 2, …, k-1}.
y Similarly, p2 is a shortest path from vertex k to vertex j with all intermediate vertices in the set {1, 2, …, k-1}.

Recurrence relation:
y Let dij(k) be the weight of a shortest path from vertex i to vertex j for which all intermediate vertices are in the set {1, 2, …, k}.
y When k = 0, a path from vertex i to vertex j has no intermediate vertex at all. Such a path has at most one edge, and hence dij(0) = wij.
y We define dij(k) recursively by:

  dij(k) = wij,                                      if k = 0
  dij(k) = min( dij(k-1), dik(k-1) + dkj(k-1) ),     if k ≥ 1

y Because for any path, all intermediate vertices are in the set {1, 2, …, n}, the matrix D(n) = ( dij(n) ) gives the final answer: dij(n) = δ(i, j) for all i, j ∈ V.
y Based on the recurrence relation, we can use the following bottom-up procedure to compute the values dij(k) in order of increasing values of k.

Floyd-Warshall(W)
1. n = W.rows
2. D(0) = W
3. for k = 1 to n
4.     let D(k) = ( dij(k) ) be a new n × n matrix
5.     for i = 1 to n
6.         for j = 1 to n
7.             dij(k) = min( dij(k-1), dik(k-1) + dkj(k-1) )
8. return D(n)
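The pseudocode translates almost line for line into C. The sketch below (names are illustrative, not from the text) uses the common in-place variant: a single matrix is updated, which is safe because row k and column k do not change during iteration k.

```c
#include <limits.h>

/* Half of INT_MAX so that INF + INF does not overflow a signed int. */
#define INF (INT_MAX / 2)

/* Floyd-Warshall on an n x n matrix, flattened row-major.
 * dist starts as the weight matrix W (INF = no edge) and ends as D(n). */
void floyd_warshall(int n, int *dist)
{
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (dist[i * n + k] + dist[k * n + j] < dist[i * n + j])
                    dist[i * n + j] = dist[i * n + k] + dist[k * n + j];
}
```

The three nested loops over n vertices give the θ(n³) running time, and keeping only one matrix gives O(n²) space.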
y Its input is an n × n matrix W. The procedure returns the matrix D(n) of shortest-path weights.
y The running time of the Floyd-Warshall algorithm is determined by the triply nested for loops of lines 3-7. Because each execution of line 7 takes O(1) time, the algorithm runs in time θ(n³).

eg:

D(4) =
      1    2    3    4    5
 1    0    3   -1    4   -4
 2    3    0   -4    1   -1
 3    7    4    0    5    3
 4    2   -1   -5    0   -2
 5    8    5    1    6    0

D(5) =
      1    2    3    4    5
 1    0    1   -3    2   -4
 2    3    0   -4    1   -1
 3    7    4    0    5    3
 4    2   -1   -5    0   -2
 5    8    5    1    6    0

Space complexity:
Here, the space complexity is O(n³), but we can reduce it to O(n²) by updating a single n × n matrix in place.
y The search time for an element depends on the level at which the node is present.
y The average number of comparisons for the first tree is:

  (1 + 2 + 2 + 3 + 3) / 5 = 11/5

  and for the second tree, the average number of comparisons is:

  (1 + 2 + 3 + 3 + 4) / 5 = 13/5

  Among the two, the first tree gives the better result.
y Here, frequencies are not given, and if we want to search all elements, then the above simple calculation is enough for deciding the best tree.
y If we separate the left-subtree time and the right-subtree time, then the above expression can be written as:

  S(root) = Σ(i = 1 to r-1) (depth(root, i) + 1) × F[i] + F[r] + Σ(i = r+1 to n) (depth(root, i) + 1) × F[i]

  where “r” indicates the position of the root element in the array.
y If we replace the left-subtree and right-subtree times with their corresponding recursive calls, then the expression becomes:
// Recursive cost of the optimal BST over keys[i..j], memoised in cost[][].
// The signature is reconstructed from the calls below; it assumes cost[][]
// starts out filled with 0 (no subproblem solved yet).
int optimalCost(int i, int j, int freq[], int cost[][], int root[][]){
    if(i>j) return 0;                      // empty subtree
    if(cost[i][j]!=0) return cost[i][j];   // already computed
    int minCost = Integer.MAX_VALUE;
    int minRoot = i;
    for(int r=i; r<=j; r++){
        // root can be any key from i to j
        int c = optimalCost(i,r-1,freq,cost,root) + optimalCost(r+1,j,freq,cost,root);
        if(c<minCost){
            minCost=c;
            minRoot = r;
        }
    }
    int freqSum = 0;   // every key in [i, j] adds one comparison at this level
    for(int k=i;k<=j;k++)
        freqSum+=freq[k];
    cost[i][j] = minCost + freqSum;
    root[i][j] = minRoot;
    return cost[i][j];
}
Node buildOptimalTree(int i,int j,int keys[], int root[][]){
// base conditions
if(i>j) return null;
if(i==j) return new Node(keys[i]);
// getting the index of optimal root of subtree[i,j] stored in the matrix
int rindex = root[i][j];
Node node = new Node(keys[rindex]);
node.left = buildOptimalTree(i,rindex-1,keys,root);
node.right = buildOptimalTree(rindex+1,j,keys,root);
return node;
}
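The same cost recurrence can be sanity-checked with a tiny standalone C sketch (plain recursion, no memo table; the function name and the three-key frequency example in the test are illustrative, not from the text):

```c
/* Cost of the optimal BST over keys i..j with search frequencies freq[].
 * Every key in [i, j] sits one level below the chosen root, so the whole
 * frequency sum of the range is added once per level of recursion. */
int obst_cost(int i, int j, const int freq[])
{
    if (i > j) return 0;                 /* empty subtree */
    int best = -1;
    for (int r = i; r <= j; r++) {       /* try every key as the root */
        int c = obst_cost(i, r - 1, freq) + obst_cost(r + 1, j, freq);
        if (best < 0 || c < best) best = c;
    }
    int sum = 0;
    for (int k = i; k <= j; k++) sum += freq[k];
    return best + sum;
}
```

Unlike the memoised Java version, this sketch recomputes subproblems and is exponential; it is meant only to check small answers.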
Conclusion:
Solved Examples
1. Let A1, A2, A3 and A4 be four matrices of dimensions 1 × 2, 2 × 1, 1 × 4 and 4 × 1 respectively. The minimum number of scalar multiplications required to find the product A1A2A3A4 using the basic matrix multiplication method is ________

Solution: 7

The unique function calls made in computing m[1, 4] are given below (each entry (i, j) holds the optimal cost of multiplying Ai…Aj):

(1, 1) = 0   (2, 2) = 0   (3, 3) = 0   (4, 4) = 0
(1, 2) = 2   (2, 3) = 8   (3, 4) = 4
(1, 3) = 6   (2, 4) = 6
(1, 4) = 7

M[i, j] = 0, if i = j
M[i, j] = min over i ≤ k < j of { M[i, k] + M[k + 1, j] + p(i-1) p(k) p(j) }, if i < j

Number of scalar multiplications:
A1A2 = 1 × 2 × 1 = 2
A2A3 = 2 × 1 × 4 = 8
A3A4 = 1 × 4 × 1 = 4

A1A2A3 = m(1, 3) = min( m(1, 1) + m(2, 3) + 1 × 2 × 4, m(1, 2) + m(3, 3) + 1 × 1 × 4 )
       = min( 0 + 8 + 8, 2 + 0 + 4 ) = 6

A2A3A4 = m(2, 4) = min( m(2, 2) + m(3, 4) + 2 × 1 × 1, m(2, 3) + m(4, 4) + 2 × 4 × 1 )
       = min( 0 + 4 + 2, 8 + 0 + 8 ) = 6

A1A2A3A4 = m(1, 4) = min( m(1, 1) + m(2, 4) + 1 × 2 × 1,
                          m(1, 2) + m(3, 4) + 1 × 1 × 1,
                          m(1, 3) + m(4, 4) + 1 × 4 × 1 )
         = min( 0 + 6 + 2, 2 + 4 + 1, 6 + 0 + 4 ) = min( 8, 7, 10 ) = 7

Hence, 7 is the minimum number of scalar multiplications required to find the product A1A2A3A4.

2. Let us define AiAi+1 as an explicitly computed pair for a given parenthesization if they are directly multiplied. For example, in the matrix multiplication chain A1A2A3A4 with parenthesization A1((A2A3)A4), A2A3 is the only explicitly computed pair.
Consider the matrices given in question 1. For the parenthesization that minimises the total number of scalar multiplications, the explicitly computed pair(s) is/are
(A) A1A2 and A3A4 only
(B) A2A3 only
(C) A1A2 only
(D) A3A4 only

Solution: (A)

In question 1, we got the optimal cost of 7 with the parenthesization (A1A2)(A3A4). So A1A2 and A3A4 are the explicitly computed pairs.

3. Consider two strings X = “ABCBDAB” and Y = “BDCABA”. Let u be the length of the longest common subsequence (not necessarily contiguous) between X and Y and let v be the number of such longest common subsequences between X and Y. Then v + 10u is ______

Solution: 43
Here, the subsequences are shown below (index positions in X and Y):

X(2 3 4 6)    X(4 5 6 7)    X(2 3 6 7)
Y(1 3 5 6)    Y(1 2 4 5)    Y(1 3 4 5)
 BCBA          BDAB          BCAB

The length of the longest common subsequence is 4, and there are 3 subsequences of length 4.
So, u = 4 and v = 3, and v + 10u = 3 + 40 = 43.

Solution: 9

The path 1 → 4 → 7 → 8 incurs the minimum cost of 9.
T[i] denotes the minimum cost from the ith node to node 8.
So, T[i] = minimum over j = (i + 1) to n of { cost(i, j) + T[j] }
T[8] = 0
T[7] = minimum { cost[7, 8] + T[8] }
T[2] = minimum of
  cost[2, 3] + T[3] = ∞ + 18
  cost[2, 4] + T[4] = ∞ + 4
  cost[2, 5] + T[5] = 4 + 18
  cost[2, 6] + T[6] = 11 + 13
  cost[2, 7] + T[7] = ∞ + 2
  cost[2, 8] + T[8] = ∞ + 0
     = 22

T[1] = minimum of
  cost[1, 2] + T[2] = 1 + 22
  cost[1, 3] + T[3] = 2 + 18
  cost[1, 4] + T[4] = 5 + 4
  cost[1, 5] + T[5] = ∞ + 18
  cost[1, 6] + T[6] = ∞ + 13
  cost[1, 7] + T[7] = ∞ + 2
  cost[1, 8] + T[8] = ∞ + 0
     = 9

7. Consider the weights and values of the items listed below.

Item Number   Weight (in kgs)   Value (in Rupees)
1             1                 10
2             2                 12
3             4                 28

The task is to pick a subset of these items such that their total weight is no more than 6 kg and their total value is maximised. Moreover, no item may be split. The total value of the items picked by the 0/1 knapsack is ______

Solution: 40

                weight
         0   1   2   3   4   5   6
item 0   0   0   0   0   0   0   0
item 1   0  10  10  10  10  10  10
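The remaining rows of the table follow the ks(i, w) recursion; as a cross-check, here is a small recursive C sketch of the same recurrence (array names are illustrative):

```c
/* ks(i, w): best value using the first i items with remaining capacity w.
 * wt[i-1] and val[i-1] are the weight and value of item i. */
int ks(int i, int w, const int wt[], const int val[])
{
    if (i == 0 || w == 0) return 0;      /* no items left or no capacity */
    if (wt[i - 1] > w)                   /* item i does not fit: skip it */
        return ks(i - 1, w, wt, val);
    int take = val[i - 1] + ks(i - 1, w - wt[i - 1], wt, val);
    int skip = ks(i - 1, w, wt, val);
    return take > skip ? take : skip;
}
```

For the items above (weights 1, 2, 4 kg; values 10, 12, 28) and capacity 6 kg, the best subset is items 2 and 3, worth 40.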
ks(1, 4) = max( p1 + ks(0, 3), ks(0, 4) ) = max( 10 + 0, 0 ) = 10
ks(1, 5) = max( p1 + ks(0, 4), ks(0, 5) ) = max( 10 + 0, 0 ) = 10
ks(1, 6) = max( p1 + ks(0, 5), ks(0, 6) ) = max( 10 + 0, 0 ) = 10
ks(2, 1) = ks(1, 1) = 10
ks(2, 2) = max( p2 + ks(1, 0), ks(1, 2) ) = max( 12 + 0, 10 ) = 12
ks(2, 3) = max( p2 + ks(1, 1), ks(1, 3) ) = max( 12 + 10, 10 ) = 22
ks(2, 4) = max( p2 + ks(1, 2), ks(1, 4) ) = max( 12 + 10, 10 ) = 22
ks(2, 5) = max( p2 + ks(1, 3), ks(1, 5) ) = max( 12 + 10, 10 ) = 22
ks(2, 6) = max( p2 + ks(1, 4), ks(1, 6) ) = max( 12 + 10, 10 ) = 22
ks(3, 1) = ks(2, 1) = 10
ks(3, 2) = ks(2, 2) = 12
ks(3, 3) = ks(2, 3) = 22

Solution: 5

Here, T(i, S) denotes the minimum cost of starting at vertex i, visiting every vertex in S, and ending at vertex 1:

T(1, {2, 3, 4}) = min( (1, 2) + T(2, {3, 4}),
                       (1, 3) + T(3, {2, 4}),
                       (1, 4) + T(4, {2, 3}) )
                = min( 1 + 4, 2 + 7, 3 + 4 ) = 5

T(2, {3, 4}) = min( (2, 3) + T(3, {4}), (2, 4) + T(4, {3}) ) = min( 4 + 8, 2 + 2 ) = 4
T(3, {2, 4}) = min( (3, 4) + T(4, {2}), (3, 2) + T(2, {4}) ) = min( 5 + 5, 2 + 5 ) = 7
T(4, {2, 3}) = min( (4, 2) + T(2, {3}), (4, 3) + T(3, {2}) ) = min( 4 + 5, 1 + 3 ) = 4
T(3, {4}) = (3, 4) + T(4, { }) = 5 + 3 = 8
(A)
      1    2    3    4
 1    0    6    1    3
 2    6    0    5    3
 3    1    5    0    2
 4    3    3    2    0

(B)
      1    2    3    4
 1    0   11    1    3
 2   11    0    7    3
 3    1    7    0    2
 4    3    3    2    0

(C)
      1    2    3    4
 1    0   11    1    6
 2   11    0    7    3
 3    1    7    0    2
 4    6    3    2    0

(D) None of the above

Solution: (A)

Let Di be the matrix of shortest-path distances between every pair of vertices when a path is allowed to pass through only vertices 1 to i as intermediate vertices. So,

D0 =
      1    2    3    4
 1    0   11    1    6
 2   11    0    7    3
 3    1    7    0    2
 4    6    3    2    0

D1 =
      1    2    3    4
 1    0   11    1    6
 2   11    0    7    3
 3    1    7    0    2
 4    6    3    2    0

D2 =
      1    2    3    4
 1    0   11    1    6
 2   11    0    7    3
 3    1    7    0    2
 4    6    3    2    0

D3 =
      1    2    3    4
 1    0    8    1    3
 2    8    0    7    3
 3    1    7    0    2
 4    3    3    2    0

D4 =
      1    2    3    4
 1    0    6    1    3
 2    6    0    5    3
 3    1    5    0    2
 4    3    3    2    0

D4 matches option (A).
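The matrix sequence can also be verified mechanically: running the same relaxation on D0 must reproduce option (A). A minimal C check (the function name is illustrative):

```c
/* Relax a 4 x 4 distance matrix over every intermediate vertex in turn
 * (Floyd-Warshall on a graph with no missing edges). */
void apsp4(int d[4][4])
{
    for (int k = 0; k < 4; k++)
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
}
```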
Chapter Summary
Matrix Chain Multiplication
Problem: Given a series of matrices A1 × A2 × A3 × … × An with their dimensions, what is the best way to parenthesize them so that the product needs the minimum number of scalar multiplications?

Number of ways we can parenthesize the matrices = (2n)! / ((n + 1)! n!)
where n = number of matrices − 1

Let M[i, j] represent the least number of multiplications needed to multiply Ai…Aj.

M[i, j] = 0, if i = j
M[i, j] = min over i ≤ k < j of { M[i, k] + M[k + 1, j] + P(i-1) P(k) P(j) }, if i < j
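A direct, unmemoised C rendering of this recurrence (the function name is illustrative); the test below uses the four-matrix chain 1 × 2, 2 × 1, 1 × 4, 4 × 1 from the solved examples, whose optimal cost is 7:

```c
/* mcm(i, j): least number of scalar multiplications to compute Ai...Aj,
 * where matrix Ai has dimensions p[i-1] x p[i]. */
int mcm(int i, int j, const int p[])
{
    if (i == j) return 0;                /* a single matrix costs nothing */
    int best = -1;
    for (int k = i; k < j; k++) {        /* split between Ak and Ak+1 */
        int c = mcm(i, k, p) + mcm(k + 1, j, p) + p[i - 1] * p[k] * p[j];
        if (best < 0 || c < best) best = c;
    }
    return best;
}
```

Adding a memo table indexed by (i, j) turns this exponential recursion into the O(n³)-time, O(n²)-space versions noted below.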
y Bottom-up matrix chain multiplication:
  Time complexity = O(n³)
  Space complexity = O(n²)
y Top-down dynamic programming of matrix chain multiplication:
  Time complexity = O(n³)
  Space complexity = O(n²)
y Longest common subsequence: Given two strings, string X of length m [X(1…m)] and string Y of length n [Y(1…n)], find the longest common subsequence: the longest sequence of characters that appears left-to-right (but not necessarily in a contiguous block) in both strings. For example, for X = “ABCBDAB” and Y = “BDCABA”, LCS(X, Y) = {“BCBA”, “BDAB”, “BCAB”}.
y The brute-force approach to the longest common subsequence takes O(n · 2^m) time.
y The recursive equation of the LCS is:

  C[i, j] = 0,                               if i = 0 or j = 0
  C[i, j] = C[i-1, j-1] + 1,                 if i, j > 0 and xi = yj
  C[i, j] = max( C[i, j-1], C[i-1, j] ),     if i, j > 0 and xi ≠ yj
y Dynamic programming approach of longest common subsequence takes θ(mn) as both
time complexity and space complexity.
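The C[i, j] recurrence tabulates directly; a short C sketch (the fixed 64-character bound is an assumption for illustration):

```c
#include <string.h>

/* Length of the longest common subsequence of x and y (both shorter than 64). */
int lcs_len(const char *x, const char *y)
{
    int m = (int)strlen(x), n = (int)strlen(y);
    int c[64][64] = { 0 };               /* c[i][j]: LCS of x[0..i) and y[0..j) */
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++) {
            if (x[i - 1] == y[j - 1])
                c[i][j] = c[i - 1][j - 1] + 1;
            else
                c[i][j] = c[i - 1][j] > c[i][j - 1] ? c[i - 1][j] : c[i][j - 1];
        }
    return c[m][n];
}
```

For the example strings X = “ABCBDAB” and Y = “BDCABA” it returns 4, matching the solved example.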
y Multistage graph: It is a directed graph in which the nodes can be divided into a set of stages such that all edges go from one stage to the next stage only, and there are no edges between vertices of the same stage.
  Greedy methods fail to find the shortest path from source to destination in a multistage graph.
y Let ‘S’ be the source node, ‘T’ the target node, and let stage 2 have the nodes A, B and C. The recursive equations then look as shown below:

  M(N - 1, i) = cost(i → T)
  M(1, S) = minimum { S → A + M(2, A), S → B + M(2, B), S → C + M(2, C) }
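The backward recursion can be sketched on a tiny hypothetical instance. The graph and its edge costs below are made up for illustration (S to {A, B, C} to T); only the recursion itself follows the equations above.

```c
#include <limits.h>

#define INF (INT_MAX / 2)     /* "no edge" marker; halved to avoid overflow */

enum { S, A, B, C, T, N };    /* stage order: S | A, B, C | T */

/* m[i] = minimum cost from node i to T, filled in reverse topological order. */
int multistage_min(int cost[N][N])
{
    int m[N];
    m[T] = 0;
    for (int i = T - 1; i >= 0; i--) {
        m[i] = INF;
        for (int j = i + 1; j < N; j++)
            if (cost[i][j] < INF && cost[i][j] + m[j] < m[i])
                m[i] = cost[i][j] + m[j];
    }
    return m[S];
}
```

Each edge is examined once, so the recursion runs in time linear in the number of edges.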
0/1 knapsack problem: In this problem, there are ‘n’ distinct objects, each associated with two integer values s and v, where s represents the size of the object and v represents the value of the object. We need to fill a knapsack of total capacity w with items of maximum total value. We are not allowed to take part of an item.
Let us say KS(i, w) represents the knapsack subproblem in which the first i elements are considered and the remaining capacity of the knapsack is w.
The recurrence equation is:

  KS(i, w) = KS(i-1, w),                                  if si > w
  KS(i, w) = max( vi + KS(i-1, w - si), KS(i-1, w) ),     otherwise

Time complexity of 0/1 knapsack = minimum( O(2^n), O(nw) )
(n is the number of objects and w is the total capacity of the knapsack.)
y Subset-sum problem: Given a sequence of n positive numbers A1, A2, …, An, give an algorithm which checks whether there exists a subset of A whose numbers sum to T.
y The brute-force method for the subset-sum problem has time complexity O(2^n).
y Let us assume that SS(i, S) denotes whether some subset of A1, …, Ai has a sum equal to the number ‘S’.
y The recursive equation of subset-sum is given below:

  SS(i, S) = SS(i-1, S) OR SS(i-1, S - Ai)

y Time complexity of subset-sum = minimum( O(2^n), O(nT) ), where T is the target sum.
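A minimal recursive C sketch of the SS(i, S) recurrence (the sample numbers in the test are illustrative, not from the text):

```c
#include <stdbool.h>

/* Is there a subset of the first i elements of a[] (all positive) summing to s? */
bool subset_sum(int i, int s, const int a[])
{
    if (s == 0) return true;             /* the empty subset reaches sum 0 */
    if (i == 0) return false;            /* nothing left, s still positive */
    if (a[i - 1] > s)                    /* element too large: must skip it */
        return subset_sum(i - 1, s, a);
    return subset_sum(i - 1, s, a) || subset_sum(i - 1, s - a[i - 1], a);
}
```

Memoising on (i, s) gives the pseudo-polynomial O(nT) table version.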
All Pair Shortest Path (Floyd-Warshall)
Problem: Given a weighted directed graph G = (V, E), where V = {1, 2, …, n}, find the shortest path between every pair of nodes in the graph.
y If we use Dijkstra’s algorithm to find all-pairs shortest paths, the running time is O(VE log V).
y If we use the Bellman-Ford algorithm to find all-pairs shortest paths, the running time is O(V²E).
y Let dij(k) be the weight of a shortest path from vertex i to vertex j for which all intermediate vertices are in the set {1, 2, …, k}.
  The recurrence equation is:

  dij(k) = wij,                                      if k = 0
  dij(k) = min( dij(k-1), dik(k-1) + dkj(k-1) ),     if k ≥ 1
y The time complexity of all-pairs shortest path (Floyd-Warshall) is O(V³). (where V = number of vertices)
y The space complexity of all-pairs shortest path (Floyd-Warshall) is O(V³), but it can be reduced to O(V²). (where V = number of vertices)