Dynamic Programming
Algorithms - Unit III
Dr. R. Bhuvaneswari
Assistant Professor
Department of Computer Science
Periyar Govt. Arts College, Cuddalore
Dynamic Programming
Syllabus
UNIT-III
Dynamic Programming: General Method – Multistage Graphs –
All-Pair shortest paths –Optimal binary search trees – 0/1 Knapsack
– Travelling salesperson problem.
Text Book:
Ellis Horowitz, Sartaj Sahni and Sanguthevar Rajasekaran,
Computer Algorithms C++, Second Edition, Universities Press,
2007. (For Units II to V)
General Method:
• It is an algorithm design method that can be used when the solution to
a problem can be viewed as a sequence of decisions.
• It obtains the solution using the "Principle of Optimality".
• The principle states that "in an optimal sequence of decisions or choices, each
  subsequence must also be optimal", i.e., whatever the initial state and decision
  are, the remaining decisions must constitute an optimal decision sequence.
• The difference between the greedy method and dynamic programming
is that in the greedy method only one decision sequence is ever
generated.
• In dynamic programming, many decision sequences may be generated.
• Sequences containing suboptimal subsequences cannot be optimal and
so will not be generated.
Multistage Graphs
Forward Approach
• In the forward approach, the cost of each node is computed starting from the kth
  stage back to the 1st stage.
• The minimum cost path from the source to the destination, i.e., from stage 1 to
  stage k, is then obtained.
• For the forward approach,
  cost(i, j) = min{ c(j, l) + cost(i+1, l) : l ∈ V(i+1), ⟨j, l⟩ ∈ E }
  where i is the stage (level) number.
• Time complexity: O(|V| + |E|)
Example (5-stage graph with stages V1 to V5; the graph figure is not reproduced):

cost(2,2) = min{c(2,6)+cost(3,6), c(2,7)+cost(3,7), c(2,8)+cost(3,8)}
          = min{4+7, 2+5, 1+7} = 7
cost(2,3) = min{c(3,6)+cost(3,6), c(3,7)+cost(3,7)}
          = min{2+7, 7+5} = 9
cost(2,4) = min{c(4,8)+cost(3,8)}
          = min{11+7} = 18
cost(2,5) = min{c(5,7)+cost(3,7), c(5,8)+cost(3,8)}
          = min{11+5, 8+7} = 15
cost(1,1) = min{c(1,2)+cost(2,2), c(1,3)+cost(2,3), c(1,4)+cost(2,4), c(1,5)+cost(2,5)}
          = min{9+7, 7+9, 3+18, 2+15} = 16

The minimum cost is 16, achieved by the two paths 1 → 2 → 7 → 10 → 12 and
1 → 3 → 6 → 10 → 12.
Multistage Graphs
Algorithm FGraph(G, k, n, p)
//p[1:k] is a minimum cost path
{
cost[n] = 0.0;
for j = n-1 to 1 step -1 do
{
Let r be a vertex such that ⟨j, r⟩ is an edge of G and c[j, r] + cost[r] is
minimum;
cost[j] = c[j, r] + cost[r];
d[j] = r;
}
p[1] = 1; p[k] = n;
for j = 2 to k-1 do
p[j] = d[p[j-1]];
}
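The pseudocode above can be made concrete. Below is a minimal C++ sketch of the
forward approach, under assumptions not stated in the pseudocode: the graph is given
as a cost adjacency matrix c[1..n][1..n], a large INF value marks absent edges, and
vertices are numbered in stage order, as in the example above.

// A minimal C++ sketch of the forward approach (FGraph); see the assumptions above.
#include <vector>
#include <limits>

std::vector<int> fgraph(const std::vector<std::vector<int>>& c, int k, int n) {
    const int INF = std::numeric_limits<int>::max() / 2;
    std::vector<int> cost(n + 1, INF), d(n + 1, 0), p(k + 1, 0);
    cost[n] = 0;                              // the destination costs nothing
    for (int j = n - 1; j >= 1; --j)          // stages k-1 down to 1
        for (int r = j + 1; r <= n; ++r)      // candidate successors of j
            if (c[j][r] < INF && c[j][r] + cost[r] < cost[j]) {
                cost[j] = c[j][r] + cost[r];
                d[j] = r;                     // remember the decision taken at j
            }
    p[1] = 1; p[k] = n;                       // reconstruct the path from d[]
    for (int j = 2; j <= k - 1; ++j)
        p[j] = d[p[j - 1]];
    return std::vector<int>(p.begin() + 1, p.end());   // p[1..k]
}

For the example above, fgraph(c, 5, 12) returns one of the two minimum cost paths of
cost 16.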
Multistage Graphs
Backward Approach
• In the backward approach, the cost of each node is computed starting from the
  1st stage forward to the kth stage.
• The minimum cost path from the source to the destination is then found by
  tracing back from stage k to stage 1.
• For the backward approach,
  bcost(i, j) = min{ bcost(i-1, l) + c(l, j) : l ∈ V(i-1), ⟨l, j⟩ ∈ E }
  where i is the stage (level) number.
Example (same 5-stage graph, stages V1 to V5):

bcost(3,8)  = min{bcost(2,2)+c(2,8), bcost(2,4)+c(4,8), bcost(2,5)+c(5,8)}
            = min{9+1, 3+11, 2+8} = 10
bcost(4,9)  = min{bcost(3,6)+c(6,9), bcost(3,7)+c(7,9)}
            = min{9+6, 11+4} = 15
bcost(4,10) = min{bcost(3,6)+c(6,10), bcost(3,7)+c(7,10), bcost(3,8)+c(8,10)}
            = min{9+5, 11+3, 10+5} = 14
bcost(4,11) = min{bcost(3,8)+c(8,11)} = min{10+6} = 16
bcost(5,12) = min{bcost(4,9)+c(9,12), bcost(4,10)+c(10,12), bcost(4,11)+c(11,12)}
            = min{15+4, 14+2, 16+5} = 16

Tracing back 12 ← 10 ← 7 ← 2 ← 1 and 12 ← 10 ← 6 ← 3 ← 1 gives the same two minimum
cost paths, 1 → 2 → 7 → 10 → 12 and 1 → 3 → 6 → 10 → 12, each of cost 16.
Multistage Graphs
Algorithm BGraph(G, k, n, p)
{
bcost[1] = 0.0;
for j = 2 to n do
{
Let r be such that ⟨r, j⟩ is an edge of G and bcost[r] + c[r, j] is minimum;
bcost[j] = bcost[r] + c[r, j];
d[j] = r;
}
p[1] = 1; p[k] = n;
for j = k-1 to 2 step -1 do
p[j] = d[p[j+1]];
}
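Analogously, here is a minimal C++ sketch of the backward approach, under the same
assumed cost-matrix representation as the forward sketch above.

// A minimal C++ sketch of the backward approach (BGraph), same assumptions as FGraph.
#include <vector>
#include <limits>

std::vector<int> bgraph(const std::vector<std::vector<int>>& c, int k, int n) {
    const int INF = std::numeric_limits<int>::max() / 2;
    std::vector<int> bcost(n + 1, INF), d(n + 1, 0), p(k + 1, 0);
    bcost[1] = 0;                             // the source costs nothing
    for (int j = 2; j <= n; ++j)              // stages 2 up to k
        for (int r = 1; r < j; ++r)           // candidate predecessors of j
            if (c[r][j] < INF && bcost[r] + c[r][j] < bcost[j]) {
                bcost[j] = bcost[r] + c[r][j];
                d[j] = r;                     // remember the decision taken at j
            }
    p[1] = 1; p[k] = n;                       // reconstruct the path from d[]
    for (int j = k - 1; j >= 2; --j)
        p[j] = d[p[j + 1]];
    return std::vector<int>(p.begin() + 1, p.end());   // p[1..k]
}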
All Pairs Shortest Paths
• The all pairs shortest path problem is the determination of the shortest
  distances between every pair of vertices in a given directed graph G.
• That is, for every pair of vertices (i, j), we are to find a shortest path from
i to j as well as from j to i. These two paths are the same when G is
undirected.
• Let G = (V, E) be a directed graph with n vertices.
• Let cost be a cost adjacency matrix for G such that cost(i, i) = 0, 1 ≤ i ≤ n.
• cost(i, j) is the length of edge ⟨i, j⟩ if ⟨i, j⟩ ∈ E(G), and cost(i, j) = ∞ if
  i ≠ j and ⟨i, j⟩ ∉ E(G).
• The all pairs shortest path problem is to determine a matrix A such that A(i, j)
  is the length of a shortest path from i to j.
• Let A^k(i, j) be the length of a shortest path from i to j going through no
  intermediate vertex of index greater than k. Then A^0(i, j) = cost(i, j),
  A^k(i, j) = min{A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j)}, k ≥ 1,
  and A(i, j) = A^n(i, j).
• Since each application of this recurrence over all n² pairs requires O(n²) time,
  the matrix A can be obtained in O(n³) time.
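A minimal C++ sketch of this computation is given below; the recurrence is applied
in place, so a single n x n matrix A (initialized to the cost matrix) is updated for
k = 1..n. Using INF as a "no edge" marker is an assumption of this sketch.

// A minimal C++ sketch of the all pairs shortest paths computation (in-place form).
#include <vector>
#include <algorithm>
#include <limits>

void all_pairs(std::vector<std::vector<int>>& A) {    // A starts as the cost matrix
    const int INF = std::numeric_limits<int>::max() / 2;
    int n = A.size();
    for (int k = 0; k < n; ++k)                       // allow vertex k as an intermediate
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (A[i][k] < INF && A[k][j] < INF)
                    A[i][j] = std::min(A[i][j], A[i][k] + A[k][j]);
}

Each of the n passes over the matrix costs O(n²), which gives the O(n³) total noted
above.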
Optimal Binary Search Trees
• A Binary Search Tree (BST) is a tree in which every node satisfies the following
  properties:
   Every key in the left sub-tree is less than the key of its parent (root) node.
   Every key in the right sub-tree is greater than or equal to the key of its
   parent (root) node.
• The problem is to construct an optimal binary search tree (in terms of search
time) for a set of integer keys, given the frequencies with which each key
will be accessed.
Keys 20 30 40
Frequencies 2 1 6
• As there are three different keys, we can get a total of 5 different BSTs by
  changing the order of the keys; in general, C(2n, n)/(n+1) trees can be generated
  for n keys.
• The cost of a BST is computed by multiplying each node's frequency by the level of
  the tree at which the node appears (here we assume the tree starts from level 1)
  and then adding these products to obtain the overall cost of the BST (see the
  worked computation after this list).
• In the above example, the 4th BST has the least cost among all, so it is the
  optimal binary search tree for the given data.
• If the number of nodes is small, we can find the optimal BST by checking all
  possible arrangements, but for 4, 5, 6, ... nodes there are respectively
  14, 42, 132, ... different BSTs, so checking all arrangements to find the optimal
  cost leads to extra overhead.
• So a dynamic programming approach can be used to find the optimal binary search
  tree.
• The search time is improved in an optimal binary search tree by placing the most
  frequently used keys at the root or close to it, while placing the least
  frequently used keys near or in the leaves.
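The figure showing the five possible BSTs for these keys is not reproduced here, but
the least-cost arrangement can be checked directly with the rule above: placing the
most frequent key 40 at the root, 20 as its left child and 30 as the right child of
20 gives cost 6×1 + 2×2 + 1×3 = 13, whereas, for example, keeping the keys in
increasing order down a right chain (20, then 30, then 40) gives cost
2×1 + 1×2 + 6×3 = 22. No other arrangement of these three keys costs less than 13.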
Optimal Binary Search Trees
• The dummy keys are leaves (external nodes), and the actual keys are internal
  nodes.
• For n internal nodes, n + 1 external nodes are present.
• The terminal node that is the left successor of k1 can be interpreted as
representing all key values that are not stored and are less than k1.
Similarly, the terminal node that is the right successor of kn, represents all
key values not stored in the tree that are greater than kn.
Using the Dynamic Programming Approach
c(i, j) = min{ c(i, k-1) + c(k, j) : i < k ≤ j } + w(i, j)
w(i, j) = w(i, j-1) + pj + qj
where r(i, j) records the value of k that minimizes c(i, j), i.e., the root of the
optimal subtree for the keys k(i+1), ..., kj.
Example:
Keys = {10, 20, 30, 40}
p(1:4) = {3, 3, 1, 1}
q(0:4) = {2, 3, 1, 1, 1}

Initially, w(i, i) = q(i); c(i, i) = 0; r(i, i) = 0.

j-i = 0:   w00 = 2    w11 = 3    w22 = 1    w33 = 1    w44 = 1
           c00 = 0    c11 = 0    c22 = 0    c33 = 0    c44 = 0
           r00 = 0    r11 = 0    r22 = 0    r33 = 0    r44 = 0

w(0,1) = w(0,0) + p1 + q1 = 2 + 3 + 3 = 8
w(1,2) = w(1,1) + p2 + q2 = 3 + 3 + 1 = 7
w(2,3) = w(2,2) + p3 + q3 = 1 + 1 + 1 = 3
w(3,4) = w(3,3) + p4 + q4 = 1 + 1 + 1 = 3

j-i = 1:   w01 = 8    w12 = 7    w23 = 3    w34 = 3
           c01 = 8    c12 = 7    c23 = 3    c34 = 3
           r01 = 1    r12 = 2    r23 = 3    r34 = 4

w(0,2) = w(0,1) + p2 + q2 = 8 + 3 + 1 = 12
w(1,3) = w(1,2) + p3 + q3 = 7 + 1 + 1 = 9
w(2,4) = w(2,3) + p4 + q4 = 3 + 1 + 1 = 5

j-i = 2:   w02 = 12   w13 = 9    w24 = 5
           c02 = 19   c13 = 12   c24 = 8
           r02 = 1    r13 = 2    r24 = 3

w(0,3) = w(0,2) + p3 + q3 = 12 + 1 + 1 = 14
w(1,4) = w(1,3) + p4 + q4 = 9 + 1 + 1 = 11

j-i = 3:   w03 = 14   w14 = 11
           c03 = 25   c14 = 19
           r03 = 2    r14 = 2

w(0,4) = w(0,3) + p4 + q4 = 14 + 1 + 1 = 16

j-i = 4:   w04 = 16
           c04 = 32
           r04 = 2
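A minimal C++ sketch of this table construction follows, using the example's p and q
values; the array names w, c and r mirror the tables above, and the 0/1-indexed
layout is an assumption of this sketch.

// A minimal C++ sketch of the optimal BST tables w, c, r for the example above.
#include <iostream>
#include <vector>

int main() {
    std::vector<int> p = {0, 3, 3, 1, 1};          // p[1..4]; p[0] unused
    std::vector<int> q = {2, 3, 1, 1, 1};          // q[0..4]
    int n = 4;
    std::vector<std::vector<int>> w(n + 1, std::vector<int>(n + 1, 0)),
                                  c(n + 1, std::vector<int>(n + 1, 0)),
                                  r(n + 1, std::vector<int>(n + 1, 0));
    for (int i = 0; i <= n; ++i) w[i][i] = q[i];   // w(i,i) = q(i); c and r stay 0
    for (int len = 1; len <= n; ++len)             // len = j - i
        for (int i = 0; i + len <= n; ++i) {
            int j = i + len;
            w[i][j] = w[i][j - 1] + p[j] + q[j];   // w(i,j) = w(i,j-1) + pj + qj
            int best = 1000000000, root = 0;
            for (int k = i + 1; k <= j; ++k) {     // try each key k as the root
                int t = c[i][k - 1] + c[k][j];
                if (t < best) { best = t; root = k; }
            }
            c[i][j] = best + w[i][j];
            r[i][j] = root;
        }
    std::cout << "c(0," << n << ") = " << c[0][n]
              << ", root r(0," << n << ") = " << r[0][n] << "\n";
    return 0;
}

For this data it prints c(0,4) = 32 with root r(0,4) = 2, matching the tables above.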
0/1 Knapsack Problem
• Given n items with profits pi and weights wi and a knapsack of capacity m, the
  problem is to

      maximize   Σ pi xi,   1 ≤ i ≤ n

      subject to Σ wi xi ≤ m,   1 ≤ i ≤ n

                 xi = 0 or 1,   1 ≤ i ≤ n
0/1 Knapsack Problem
• To identify the items that must be put into the knapsack to obtain the maximum
  profit (a complete sketch of this traceback follows the code below):
   Start from the entry in the last row and in the column for capacity m, and
   scan the rows from bottom to top.
   On encountering an entry whose value is not the same as the value stored in
   the entry immediately above it, mark the row label of that entry (the item is
   included), subtract its weight from the remaining capacity, and continue the
   scan in the column for the reduced capacity.
   After all the rows are scanned, the marked labels represent the items that
   must be put into the knapsack.
• O(nm) time is taken to solve the 0/1 knapsack problem using dynamic programming,
  where n is the number of items and m is the knapsack capacity.
Example:
pi = {1, 2, 5, 6}
wi = {2, 3, 4, 5}
m = 8, n = 4
Solution: x1 = 0, x2 = 1, x3 = 0, x4 = 1 (maximum profit = 2 + 6 = 8)
// Build the DP table: k[i][w] = best profit using items 1..i with capacity w.
for (i = 0; i <= n; i++)
{
    for (w = 0; w <= m; w++)
    {
        if (i == 0 || w == 0)           // no items or no capacity
            k[i][w] = 0;
        else if (wt[i] <= w)            // item i fits: include it or not, whichever is better
            k[i][w] = max(p[i] + k[i-1][w - wt[i]], k[i-1][w]);
        else                            // item i does not fit
            k[i][w] = k[i-1][w];
    }
}
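A self-contained C++ sketch combining the table construction above with the traceback
step described earlier is given below; it uses the example data pi = {1, 2, 5, 6},
wi = {2, 3, 4, 5}, m = 8, and the variable names k, p, wt follow the fragment above.

// A self-contained C++ sketch of the 0/1 knapsack DP plus the traceback step.
#include <iostream>
#include <vector>
#include <algorithm>

int main() {
    int n = 4, m = 8;
    std::vector<int> p  = {0, 1, 2, 5, 6};    // profits, 1-indexed
    std::vector<int> wt = {0, 2, 3, 4, 5};    // weights, 1-indexed
    std::vector<std::vector<int>> k(n + 1, std::vector<int>(m + 1, 0));

    for (int i = 1; i <= n; ++i)              // build the table row by row
        for (int w = 0; w <= m; ++w)
            if (wt[i] <= w)
                k[i][w] = std::max(p[i] + k[i - 1][w - wt[i]], k[i - 1][w]);
            else
                k[i][w] = k[i - 1][w];

    // Traceback: scan upwards; a change from the row above means item i was
    // included, so mark it and move to the column for the reduced capacity.
    std::vector<int> x(n + 1, 0);
    int w = m;
    for (int i = n; i >= 1; --i)
        if (k[i][w] != k[i - 1][w]) { x[i] = 1; w -= wt[i]; }

    std::cout << "max profit = " << k[n][m] << "\nx =";
    for (int i = 1; i <= n; ++i) std::cout << " " << x[i];
    std::cout << "\n";                        // prints: max profit = 8, x = 0 1 0 1
    return 0;
}

It prints the maximum profit 8 and x = (0, 1, 0, 1), the solution stated above.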
Travelling Salesperson Problem
• The dynamic programming recurrence (equation 1) is
  g(i, S) = min{ cij + g(j, S - {j}) : j ∈ S }
  with the base case g(i, ∅) = ci1, 1 ≤ i ≤ n.

|S| = 0:
g(2, ∅) = c21 = 5
g(3, ∅) = c31 = 6
g(4, ∅) = c41 = 8

|S| = 1:
g(2, {3}) = c23 + g(3, ∅) = 9 + 6 = 15      g(2, {4}) = c24 + g(4, ∅) = 10 + 8 = 18
g(3, {2}) = c32 + g(2, ∅) = 13 + 5 = 18     g(3, {4}) = c34 + g(4, ∅) = 12 + 8 = 20
g(4, {2}) = c42 + g(2, ∅) = 8 + 5 = 13      g(4, {3}) = c43 + g(3, ∅) = 9 + 6 = 15

|S| = 2:
g(2,{3,4}) = min{c23 + g(3,{4}), c24 + g(4,{3})}
           = min{9+20, 10+15} = 25
g(3,{2,4}) = min{c32 + g(2,{4}), c34 + g(4,{2})}
           = min{13+18, 12+13} = 25
g(4,{2,3}) = min{c42 + g(2,{3}), c43 + g(3,{2})}
           = min{8+15, 9+18} = 23
Using equation 1, we obtain
g(1,{2,3,4}) = min{c12 + g(2,{3,4}), c13 + g(3,{2,4}), c14 + g(4,{2,3})}
             = min{10+25, 15+25, 20+23} = 35
The minimum is attained for j = 2; from 2 the minimum over {3, 4} is attained for
j = 4, and from 4 the only remaining city is 3.
The optimal tour is therefore 1 → 2 → 4 → 3 → 1, of length 35.
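A minimal C++ sketch of the g(i, S) computation for this 4-city example follows. The
set S is represented as a bitmask over cities 2..n, and the cost matrix is assembled
from the c values appearing in the computations above.

// A minimal C++ sketch of the TSP dynamic program g(i, S) for the 4-city example.
#include <iostream>
#include <vector>
#include <algorithm>

int main() {
    const int n = 4, INF = 1000000000;
    int c[n + 1][n + 1] = {{0, 0,  0,  0,  0},
                           {0, 0, 10, 15, 20},
                           {0, 5,  0,  9, 10},
                           {0, 6, 13,  0, 12},
                           {0, 8,  8,  9,  0}};
    int nsets = 1 << (n - 1);                 // subsets of {2, ..., n}
    // g[i][S] = length of a shortest path from i through every city in S, ending at 1.
    std::vector<std::vector<int>> g(n + 1, std::vector<int>(nsets, INF));
    for (int i = 2; i <= n; ++i) g[i][0] = c[i][1];        // g(i, empty) = c(i, 1)

    for (int S = 1; S < nsets; ++S)                        // smaller subsets come first
        for (int i = 2; i <= n; ++i) {
            if (S & (1 << (i - 2))) continue;              // require i not in S
            for (int j = 2; j <= n; ++j)                   // choose the next city j in S
                if (S & (1 << (j - 2)))
                    g[i][S] = std::min(g[i][S], c[i][j] + g[j][S ^ (1 << (j - 2))]);
        }

    int full = nsets - 1, best = INF;                      // full = {2, ..., n}
    for (int j = 2; j <= n; ++j)                           // equation 1 at the start city
        best = std::min(best, c[1][j] + g[j][full ^ (1 << (j - 2))]);
    std::cout << "optimal tour length = " << best << "\n"; // prints 35
    return 0;
}

For this data it prints 35, the optimal tour length obtained above.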