DAA Unit 4
Dynamic Programming: All pairs shortest path problem, Single-source shortest paths with general weights, String Editing,
0/1 Knapsack problem, Reliability design.
Dynamic Programming is an algorithm design method that can be used when the solution to a
problem can be viewed as the result of a sequence of decisions.
Dynamic programming (DP) solves every subproblem exactly once, and is therefore more
efficient in those cases where the subproblems are not independent, i.e., where they share common subsubproblems.
An optimal sequence of decisions is obtained by making use of the Principle of Optimality.
The principle of optimality states that an optimal sequence of decisions has the property that
whatever the initial state and decisions are, the remaining decisions must constitute an optimal
decision sequence with regard to the state resulting from the first decision.
The Principle of Optimality is also referred to as Optimal Substructure. A problem has optimal
substructure if an optimal solution can be constructed efficiently from optimal solutions of its
subproblems. In other words, we can solve larger problems given the solutions to their smaller
subproblems.
The idea of dynamic programming is thus quite simple: avoid calculating the same thing twice,
usually by keeping a table of known results that is filled in as subinstances are solved.
Dynamic programming is a method for solving optimization problems. The idea: compute the
solution to each subproblem once and store it in a table, so that it can be reused
(repeatedly) later.
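As a small illustration of this tabulation idea (a sketch added here for clarity, not part of the original notes), the Fibonacci numbers can be computed in Python by filling a table of known results so that no value is computed twice:

# Tabulation: each subproblem (a Fibonacci number) is computed once and stored.
def fib(n):
    table = [0, 1] + [None] * max(0, n - 1)      # table of known results
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # reuse stored subproblem values
    return table[n]

print(fib(10))   # prints 55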
All Pairs Shortest Path Problem
The all pairs shortest path problem is to determine a matrix A such that A(i, j) is the length of a shortest path from vertex i to vertex j.
An alternate O(n^3) solution (alternate to running a single-source algorithm from each vertex) can be obtained by using the principle of
optimality, but it requires that G have no cycles of negative length.
Let A^k(i, j) denote the length of a shortest path from i to j going through no intermediate vertex of index greater than k, so that A^0(i, j) = cost(i, j) and A^n(i, j) is the length of an unrestricted shortest path from i to j.
A shortest path from i to j going through no vertex higher than k either goes through vertex k or it does not.
If it goes through vertex k, then A^k(i, j) = A^(k-1)(i, k) + A^(k-1)(k, j).
If it does not go through vertex k, then A^k(i, j) = A^(k-1)(i, j).
Combining these two cases, we get A^k(i, j) = min{ A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j) }, k ≥ 1.
Example:
Find all pairs shortest path for the following graph.
The algorithm for the all pairs shortest path problem is given below.
Algorithm AllPaths(cost, A, n)
// cost[1:n, 1:n] is the cost adjacency matrix of a graph with n vertices;
// A[i, j] is the cost of a shortest path from vertex i to vertex j.
{
    for i := 1 to n do
        for j := 1 to n do
            A[i, j] := cost[i, j];    // copy cost into A
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := min(A[i, j], A[i, k] + A[k, j]);
}
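The same computation can be written directly in Python. The sketch below is only an illustration of AllPaths; it assumes the cost matrix is given as a list of lists with float('inf') marking missing edges, and the small 3-vertex instance at the end is an arbitrary example rather than the graph referred to above.

# All-pairs shortest paths (Floyd-Warshall), a direct translation of AllPaths.
# cost is an n x n matrix; cost[i][j] = float('inf') if there is no edge i -> j.
def all_paths(cost):
    n = len(cost)
    A = [row[:] for row in cost]                 # copy cost into A
    for k in range(n):                           # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

INF = float('inf')
cost = [[0, 4, 11],
        [6, 0, 2],
        [3, INF, 0]]
for row in all_paths(cost):
    print(row)                                   # [0, 4, 6], [5, 0, 2], [3, 7, 0]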
Single-Source Shortest Paths: General Weights
Problem: Given a directed graph G = (V, E) with edge costs that may be negative (but with no cycles of negative length) and a source vertex v, find a shortest path from v to every other vertex of G.
Solution:
o Let dist^l[u] be the length of a shortest path from the source vertex v to vertex u under the
constraint that the path contains at most l edges.
o Then dist^1[u] = cost[v, u], 1 ≤ u ≤ n.
o When there are no cycles of negative length, we can limit our search for shortest paths to
paths with at most n - 1 edges. Hence dist^(n-1)[u] is the length of an unrestricted shortest path
from v to u.
o Our goal is then to compute dist^(n-1)[u] for all u. This can be done using the recurrence
dist^k[u] = min{ dist^(k-1)[u], min over all i of { dist^(k-1)[i] + cost[i, u] } }, 2 ≤ k ≤ n - 1.
For instance, dist^k[1] = 0 for all k, since vertex 1 is the source node. Also, dist^1[2] = 6, dist^1[3] = 5 and dist^1[4] = 5,
since there are edges from 1 to these vertices. The distance to the remaining vertices is infinity, since there are no
edges to them from 1.
The resulting algorithm, which computes the length of a shortest path from v to every other vertex of the graph, is
referred to as the Bellman-Ford algorithm.
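As an illustrative sketch only (not the textbook pseudocode), the recurrence above can be implemented in Python as follows, assuming the graph is given as a cost matrix with float('inf') for missing edges:

# Bellman-Ford: computes dist^(n-1)[u] for every vertex u, given source v.
def bellman_ford(cost, v):
    n = len(cost)
    INF = float('inf')
    dist = [cost[v][u] for u in range(n)]        # dist^1[u] = cost[v, u]
    dist[v] = 0
    for k in range(2, n):                        # compute dist^2 .. dist^(n-1)
        prev = dist[:]                           # dist^(k-1)
        for u in range(n):
            best = prev[u]
            for i in range(n):
                if prev[i] != INF and cost[i][u] != INF:
                    best = min(best, prev[i] + cost[i][u])
            dist[u] = best                       # dist^k[u]
    return dist

With a cost matrix, each of the n - 2 passes costs O(n^2), which matches the analysis that follows.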
Analysis:
A graph can be represented using an adjacency matrix or an adjacency list.
If an adjacency matrix is used:
o Each iteration of the outer for loop takes O(n^2) time, where n is the number of vertices in the graph.
o The overall complexity is O(n^3).
If an adjacency list is used:
o Each iteration of the outer for loop takes O(e) time, where e is the number of edges in the graph.
o The overall complexity is O(ne).
String Editing
Problem: We are given two strings X = x1, x2, . . ., xn and Y =y1, y2, . . . , ym, where xi, 1 ≤ i ≤ n, and
yj, 1 ≤ j ≤ m, are members of a finite set of symbols known as the alphabet. We want to transform X into Y
using a sequence of edit operations on X.
The permissible edit operations are
o insert,
o delete and
o change (a symbol of X into another symbol)
A cost is associated with each operation.
The cost of a sequence of operations is the sum of the costs of the individual operations in the sequence.
The problem of string editing is to identify a minimum-cost sequence of edit operations that will
transform X into Y.
Let
o D(xi) be the cost of deleting the symbol xi from X,
o I(yj) be the cost of inserting a symbol yj into X,
o And C(xi, yj) be the cost of changing the symbol xi of X into yj.
Solution:
A solution to the string editing problem consists of a sequence of decisions, one for each edit
operation. Let ε be a minimum-cost edit sequence for transforming X into Y. The first operation O
in ε is either a delete, an insert, or a change.
If ε' = ε - {O} and X' is the string that results from applying O to X, then ε' must be a minimum-cost edit sequence
that transforms X' into Y. Thus the principle of optimality holds for this problem.
Let cost(i, j) denote the minimum cost of transforming the prefix x_1, ..., x_i of X into the prefix y_1, ..., y_j of Y.
For i = j = 0, cost(i, j) = 0, since both prefixes are empty. If j = 0 and i > 0, only deletions are needed; if i = 0 and
j > 0, only insertions are needed. If i > 0 and j > 0, the transformation can end in one of three ways (delete x_i,
change x_i into y_j, or insert y_j), as shown in the equation below.

cost(i, j) = 0                               if i = 0 and j = 0
cost(i, j) = cost(i - 1, 0) + D(x_i)         if j = 0 and i > 0
cost(i, j) = cost(0, j - 1) + I(y_j)         if i = 0 and j > 0
cost(i, j) = cost'(i, j)                     if i > 0 and j > 0

where cost'(i, j) = min{ cost(i - 1, j) + D(x_i), cost(i - 1, j - 1) + C(x_i, y_j), cost(i, j - 1) + I(y_j) }.
We have to compute cost(i,j) for all possible values of i and j. These values will be computed in the
form of a table, M.
M(i,j) stores the cost(i,j) value.
Each entry of M can be computed in O(1) time.
Therefore the whole algorithm takes O(mn) time.
The value cost(n,m) is the final answer we are interested in.
A minimum edit sequence can be obtained by a simple backward trace from cost(n,m). The
backward trace is enabled by recording which of the three options for i>0, j>0 yielded the minimum
cost for each i and j.
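A hedged Python sketch of this table computation is given below; the cost functions D, I and C are passed in as parameters, and the unit insert/delete cost and change cost of 2 are only defaults assumed for the sketch:

# String editing: M[i][j] holds cost(i, j) for prefixes of length i of X and j of Y.
def edit_cost(X, Y,
              D=lambda x: 1,                       # deletion cost
              I=lambda y: 1,                       # insertion cost
              C=lambda x, y: 0 if x == y else 2):  # change cost
    n, m = len(X), len(Y)
    M = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):                      # cost(i, 0) = cost(i-1, 0) + D(x_i)
        M[i][0] = M[i - 1][0] + D(X[i - 1])
    for j in range(1, m + 1):                      # cost(0, j) = cost(0, j-1) + I(y_j)
        M[0][j] = M[0][j - 1] + I(Y[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):                  # cost'(i, j): delete, change, or insert
            M[i][j] = min(M[i - 1][j] + D(X[i - 1]),
                          M[i - 1][j - 1] + C(X[i - 1], Y[j - 1]),
                          M[i][j - 1] + I(Y[j - 1]))
    return M[n][m]                                 # cost(n, m)

print(edit_cost("aabab", "baba"))                  # prints 3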
Exercise: Transform the string "this" into "there" and find out how many edit operations are required.
Exercise 2: Find the minimum edit cost to transform the string X = x1, x2, x3, x4, x5 = a, a, b, a, b into
Y = y1, y2, y3, y4 = b, a, b, a. Let the cost associated with each insertion and deletion be 1, and the cost of changing
any symbol to any other symbol be 2.
Exercise 4: Design a pseudocode algorithm that implements the string editing algorithm.
0/1 Knapsack Problem
Given n objects with profits p_i and weights w_i, and a knapsack of capacity m, the problem is to
maximize Σ_{1≤i≤n} p_i x_i
subject to Σ_{1≤i≤n} w_i x_i ≤ m,
x_i = 0 or 1, 1 ≤ i ≤ n.
A solution to the knapsack problem can be obtained by making a sequence of decisions on the variables x_1, x_2, ..., x_n.
Let us assume that the decisions on the x_i are made in the order x_1, x_2, ..., x_n.
Let S^i denote the set of all pairs (p, w), where p is the total profit and w is the total weight of some feasible
assignment of 0/1 values to x_1, ..., x_i. The set S^(i+1) is obtained by merging S^i with
S^i_1 = { (p + p_(i+1), w + w_(i+1)) : (p, w) ∈ S^i }.
Rules of merging/purging:
1. If S^(i+1) contains two pairs (p_j, w_j) and (p_k, w_k) with the property that p_j ≤ p_k and w_j ≥ w_k, then the pair
(p_j, w_j) is discarded; it is dominated by (p_k, w_k). Pairs whose weight exceeds m are also purged.
2. The last pair in S^n is either the last pair in S^(n-1) or it is (p_j + p_n, w_j + w_n), where (p_j, w_j) ∈ S^(n-1)
is such that w_j + w_n ≤ m and w_j is maximum.
3. Setting each x_i to 0 or 1 is then determined by carrying out a search through the S^i's: if the selected pair of S^i
does not appear in S^(i-1), then x_i = 1; otherwise x_i = 0.
Example: Solve the 0/1 knapsack problem using DP with m = 6, w = (2, 3, 4), p = (1, 2, 5).
Number of objects n = 3. Arranging the objects as (profit, weight) pairs: (1, 2), (2, 3), (5, 4).
S^0 = {(0, 0)},  S^0_1 = {(1, 2)}
S^1 = {(0, 0), (1, 2)},  S^1_1 = {(2, 3), (3, 5)}
S^2 = {(0, 0), (1, 2), (2, 3), (3, 5)},  S^2_1 = {(5, 4), (6, 6)}   (the pairs (7, 7) and (8, 9) are purged since their weights exceed m = 6)
S^3 = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)}   (the pair (3, 5) is purged because (5, 4) dominates it)
Tracing back from the last pair (6, 6): (6, 6) is not in S^2, so x_3 = 1; the remaining pair (6 - 5, 6 - 4) = (1, 2) is in
S^2 and in S^1, so x_2 = 0; and (1, 2) is not in S^0, so x_1 = 1.
The optimal solution is (x_1, x_2, x_3) = (1, 0, 1) with profit 6.
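The following Python sketch (an illustration only; all names are chosen here and not taken from the notes) builds the S^i sets using the merging and purging rules above, and can be run on this example:

# 0/1 knapsack via pair sets S^i of (profit, weight) tuples, with purging of
# dominated pairs and of pairs whose weight exceeds the capacity m.
def knapsack_sets(p, w, m):
    S = [[(0, 0)]]                                         # S^0
    for i in range(len(p)):
        S1 = [(pp + p[i], ww + w[i]) for (pp, ww) in S[i] if ww + w[i] <= m]
        merged = sorted(S[i] + S1, key=lambda t: (t[1], -t[0]))
        nxt = []
        for pp, ww in merged:                              # keep only undominated pairs
            if not nxt or pp > nxt[-1][0]:
                nxt.append((pp, ww))
        S.append(nxt)
    return S

S = knapsack_sets(p=[1, 2, 5], w=[2, 3, 4], m=6)
for i, s in enumerate(S):
    print("S^%d =" % i, s)
# The last pair of S^3, (6, 6), gives the optimal profit 6.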
Reliability Design
It is a problem with a multiplicative optimization function.
The problem is to design a system that is composed of several devices connected in series.
If r_i is the reliability of device D_i (the probability that it functions properly), then the reliability of the entire
series system is the product of the r_i, which can be quite small even when the individual reliabilities are high.
Hence, it is desirable to duplicate devices: multiple copies of the same device type are
connected in parallel through the use of switching circuits.
The switching circuits determine which devices in any given group are functioning properly.
They then make use of one such device at each stage.
This problem is an example of how to use dynamic programming to solve a problem with a
multiplicative optimization function.
If stage i contains m_i copies of device D_i, then the probability that all m_i copies malfunction
is (1 - r_i)^(m_i). Hence the reliability of stage i becomes 1 - (1 - r_i)^(m_i); denote this by φ_i(m_i).
The reliability design problem is then to
maximize Π_{1≤i≤n} φ_i(m_i)
subject to Σ_{1≤i≤n} c_i m_i ≤ c,
m_i ≥ 1 and integer, 1 ≤ i ≤ n,
where c_i is the cost of one copy of device D_i and c is the total amount of money available for designing the system.
A dynamic programming solution can be obtained by following the approach used for the knapsack problem.
Since c_i > 0, each m_i must be in the range 1 ≤ m_i ≤ u_i, where
u_i = ⌊ (c + c_i - Σ_{1≤j≤n} c_j) / c_i ⌋.
An optimal solution m1, m2, …, mn is the result of a sequence of decisions, one decision for
each mi.
Let f_i(x) denote the maximum value of Π_{1≤j≤i} φ_j(m_j) subject to the constraints Σ_{1≤j≤i} c_j m_j ≤ x and
1 ≤ m_j ≤ u_j, 1 ≤ j ≤ i. Then f_n(c) is the reliability of an optimal design, and
f_i(x) = max over 1 ≤ m_i ≤ u_i of { φ_i(m_i) * f_(i-1)(x - c_i m_i) }.
Clearly f_0(x) = 1 for all x, 0 ≤ x ≤ c.
To solve this, let S^i consist of tuples of the form (f, x), where f = f_i(x).
There is at most one tuple for each different x that can result from a sequence of decisions on m_1, m_2, ..., m_i.
The dominance rule: (f_1, x_1) dominates (f_2, x_2) iff f_1 ≥ f_2 and x_1 ≤ x_2. Dominated tuples can be discarded
from S^i.
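A hedged Python sketch of this tuple-based computation follows. The instance at the end (three devices with costs 30, 15, 20, reliabilities 0.9, 0.8, 0.5 and a budget of 105) is only an illustrative example chosen here, not data from the notes.

# Reliability design: S holds (f, x) tuples, where f is the best reliability of the
# stages decided so far at total cost x; dominated tuples are discarded.
def reliability_design(c, r, budget):
    n = len(c)
    u = [(budget + c[i] - sum(c)) // c[i] for i in range(n)]   # upper bound u_i on m_i
    S = [(1.0, 0)]                                             # f_0(x) = 1
    for i in range(n):
        cand = []
        for m in range(1, u[i] + 1):
            phi = 1 - (1 - r[i]) ** m                          # stage reliability phi_i(m)
            for f, x in S:
                if x + m * c[i] <= budget - sum(c[i + 1:]):    # leave money for later stages
                    cand.append((f * phi, x + m * c[i]))
        cand.sort(key=lambda t: (t[1], -t[0]))                 # by cost, then reliability
        S = []
        for f, x in cand:                                      # purge dominated tuples
            if not S or f > S[-1][0]:
                S.append((f, x))
    return max(S)[0] if S else 0.0

print(round(reliability_design(c=[30, 15, 20], r=[0.9, 0.8, 0.5], budget=105), 3))   # prints 0.648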