DAA Unit 4

This document summarizes algorithms for dynamic programming, including all pairs shortest path problems, single-source shortest paths problems, string editing, and 0/1 knapsack problems. It describes dynamic programming as a method that solves subproblems only once and stores their solutions in a table to be reused. The principle of optimality states that optimal solutions can be constructed from optimal subsolutions. Dynamic programming involves defining a mathematical notation, proving the principle of optimality holds, developing a recurrence relation, and writing an algorithm for the relation. Algorithms are presented for the all pairs shortest path and single-source shortest path problems.


Design and Analysis of Algorithms Unit-I

Dynamic Programming: All pairs shortest path problem, Single-source shortest paths with general weights, String Editing, 0/1 Knapsack problem, Reliability design.

 Dynamic Programming is an algorithm design method that can be used when the solution to a
problem can be viewed as the result of a sequence of decisions.
 Dynamic programming (DP) solves every subproblem exactly once, and is therefore more
efficient in those cases where the subproblems are not independent.
 An optimal sequence of decisions is obtained by making use of the principle of optimality.
 The principle of optimality states that an optimal sequence of decisions has the property that
whatever the initial state and decisions are, the remaining decisions must constitute an optimal
decision sequence with regard to the state resulting from the first decision.
 The principle of optimality can also be referred to as optimal substructure. A problem has optimal
substructure if an optimal solution can be constructed efficiently from optimal solutions of its sub-
problems. In other words, we can solve larger problems given the solutions to their smaller sub-
problems.
 The idea of dynamic programming is thus quite simple: avoid calculating the same thing twice,
usually by keeping a table of known results that fills up as subinstances are solved.
 Dynamic programming is a method for solving optimization problems. The idea: compute the
solutions to the sub-problems once and store them in a table, so that they can be reused
(repeatedly) later.
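The table idea can be illustrated with a toy example (not from this unit; a minimal Python sketch): computing Fibonacci numbers with a table of known results turns an exponential recursion into a linear one.

```python
def fib(n, table={0: 0, 1: 1}):
    """Each subproblem fib(k) is solved only once; its result is stored
    in the table of known results and reused on every later call."""
    if n not in table:
        table[n] = fib(n - 1) + fib(n - 2)
    return table[n]
```

Without the table, fib(n) recomputes the same subinstances exponentially often; with it, each value is computed exactly once.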

Greedy vs. Dynamic Programming :


 Both techniques are optimization techniques, and both build solutions from a collection of choices of
individual elements.
 The greedy method computes its solution by making its choices in a serial forward fashion, never
looking back or revising previous choices.
 Dynamic programming computes its solution bottom up by synthesizing them from smaller
subsolutions, and by trying many possibilities and choices before it arrives at the optimal set of
choices.
 There is no a priori litmus test by which one can tell if the greedy method will lead to an optimal
solution. By contrast, there is a litmus test for dynamic programming, called the principle of
optimality.

Divide and Conquer vs. Dynamic Programming:


 Divide and conquer is a top-down method. Dynamic programming on the other hand is a bottom-up
technique.
 Both techniques split their input into parts, find subsolutions to the parts, and synthesize larger
solutions from smaller ones.
 Divide and Conquer splits its input at prespecified, deterministic points (e.g., always in the middle).
Dynamic Programming splits its input at every possible split point rather than at prespecified
points. After trying all split points, it determines which split point is optimal.


Steps of Dynamic Programming


Dynamic programming design involves 4 major steps:
1. Develop a mathematical notation that can express any solution and subsolution for the problem at
hand.
2. Prove that the Principle of Optimality holds.
3. Develop a recurrence relation that relates a solution to its subsolutions, using the mathematical
notation of step 1. Indicate what the initial values are for that recurrence relation, and which term
signifies the final solution.
4. Write an algorithm to compute the recurrence relation.

All pairs Shortest Path Problem


Let G=(V,E) be a directed graph with n vertices.
Let cost be a cost adjacency matrix for G such that cost(i, i) = 0, and cost(i, j) is the length of edge <i, j>
if <i, j> exists; otherwise cost(i, j) = ∞.
 The all-pairs shortest path problem is to determine a matrix A such that A(i, j) is the length of a
shortest path from i to j.
The matrix A can be obtained by solving n single-source problems using the single-source shortest path
algorithm. Since that algorithm requires O(n²) time per source, matrix A can be obtained in O(n³) time.

An alternate O(n3) solution can be obtained by using principle of optimality. But it requires that G should
not have cycles with negative length.

Examining shortest i to j path in G:


 A shortest i to j path in G, originates at vertex i and goes through zero or more intermediate vertices
and terminates at vertex j.
 If k is an intermediate vertex on this shortest path, then the subpaths from i to k and from k to j must
be shortest paths from i to k and k to j, respectively. Otherwise, the i to j path is not of minimum
length. So, the principle of optimality holds.
 If k is the intermediate vertex with highest index, then the i to k path is a shortest i to k path in G
going through no vertex of index greater than k-1. Similarly, the k to j path is a shortest k to j path
in G going through no vertex of index greater than k-1.
 So the construction of an i to j path first requires a decision as to which is the highest-indexed
intermediate vertex k. After deciding k, find two shortest paths, from i to k and from k to j. These
paths should not go through a vertex with index greater than k-1.
 Using A^k(i, j) to represent the length of a shortest path from i to j going through no vertex of index
greater than k, we obtain

A(i, j) = min{ min over 1 ≤ k ≤ n { A^(k-1)(i, k) + A^(k-1)(k, j) }, cost(i, j) }

Clearly, A^0(i, j) = cost(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ n.

 From the recurrence above, a shortest path realizing A^k(i, j) either goes through vertex k or it does not.
 If it goes through vertex k, then A^k(i, j) = A^(k-1)(i, k) + A^(k-1)(k, j).
 If it does not go through vertex k, then A^k(i, j) = A^(k-1)(i, j).
 Combining these two, we get A^k(i, j) = min{ A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j) }, k ≥ 1.


Example:
Find all pairs shortest path for the following graph.

The initial matrix A^0, plus its values after the three iterations A^1, A^2, A^3, are shown in the figure below.

The algorithm for all pairs shortest path problem is given below

Algorithm AllPaths(cost, A, n)
// cost[1:n, 1:n] is the cost adjacency matrix of a graph with n vertices;
// A[i, j] is the cost of a shortest path from vertex i to vertex j.
{
    for i := 1 to n do
        for j := 1 to n do
            A[i, j] := cost[i, j]; // copy cost into A
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := min(A[i, j], A[i, k] + A[k, j]);
}

The time needed by the AllPaths algorithm is Θ(n³).
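The AllPaths pseudocode can be sketched as runnable Python (a minimal illustration, not the textbook's code; INF stands for the ∞ entries of the cost matrix):

```python
INF = float('inf')  # stands for cost(i, j) when edge <i, j> is absent

def all_paths(cost):
    """All-pairs shortest path lengths for an n x n cost adjacency matrix."""
    n = len(cost)
    A = [row[:] for row in cost]  # A^0 = cost
    # A^k(i, j) = min(A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j))
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A
```

For example, for a 3-vertex graph with cost matrix [[0, 4, 11], [6, 0, 2], [3, INF, 0]], all_paths returns [[0, 4, 6], [5, 0, 2], [3, 7, 0]].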

Single Source Shortest Path Problem


 Problem: Given a directed graph G = (V, E), a weighting function cost for the edges of G, and a
source vertex v0, the problem is to determine the shortest paths from v0 to all the remaining vertices
of G.
 This problem can be solved using the greedy method (Dijkstra's algorithm) if the edge weights are
positive. That algorithm does not necessarily give correct answers on graphs that have
negative edge weights.


 Dynamic programming suggests a technique to find a solution to this problem.

o Here negative edge weights are permitted, but the graph should not have cycles of negative
length. This is necessary to ensure that shortest paths consist of a finite number of edges.
o In an n-vertex graph with no cycles of negative length, the shortest path between any two
vertices has at most (n - 1) edges on it.

 Solution:
o Let dist^l[u] be the length of a shortest path from the source vertex v to vertex u under the
constraint that the shortest path contains at most l edges.
o Then dist^1[u] = cost[v, u], 1 ≤ u ≤ n.
o When there are no cycles of negative length, we can limit our search for shortest paths to
paths with at most n - 1 edges. Hence dist^(n-1)[u] is the length of an unrestricted shortest path
from v to u.
o Our goal is then to compute dist^(n-1)[u] for all u.

 This can be done using Dynamic Programming methodology


o If the shortest path from v to u with at most k, k > 1, edges has no more than k - 1 edges, then
dist^k[u] = dist^(k-1)[u].
o If the shortest path from v to u with at most k, k > 1, edges has exactly k edges, then it is
made up of a shortest path from v to some vertex j followed by the edge <j, u>. The path
from v to j has k - 1 edges, and its length is dist^(k-1)[j]. All vertices i such that the edge
<i, u> is in the graph are candidates for j. Since we are interested in a shortest path, the i that
minimizes dist^(k-1)[i] + cost[i, u] is the correct value for j.

This will result in the following recurrence:

dist^k[u] = min{ dist^(k-1)[u], min over i { dist^(k-1)[i] + cost[i, u] } }

This recurrence can be used to compute dist^k from dist^(k-1), for k = 2, 3, ..., n-1.

Example: The following graph has 7 vertices.

The table has arrays dist^k, for k = 1, 2, ..., 6.

These values are computed using the above equation.


For instance, dist^k[1] = 0 for all k, since 1 is the source node. Also, dist^1[2] = 6, dist^1[3] = 5, and
dist^1[4] = 5, since there are edges from 1 to these nodes. The distance to the remaining vertices is
infinity, since there are no edges to these from 1.

Pseudocode to compute the length of the shortest path from v to each other vertex of the graph is given.
This algorithm is referred to as the Bellman-Ford algorithm.

Analysis :
 A graph can be represented using adjacency matrix or adjacency list.
 If an adjacency matrix is used
o Each iteration of the for loop takes O(n²) time. Here n is the number of vertices in the graph.
o Overall complexity is O(n³).
 If an adjacency list is used
o Each iteration of the for loop takes O(e) time. Here e is the number of edges in the graph.
o Overall complexity is O(ne).
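A minimal Python sketch of this recurrence (illustrative, not the textbook's pseudocode; the graph is assumed to be given as a list of weighted edges):

```python
INF = float('inf')  # no edge

def bellman_ford(n, edges, v):
    """Shortest path lengths from source v in a graph with no
    negative-length cycles.

    n     -- number of vertices, labelled 0 .. n-1
    edges -- list of (i, u, w) triples: an edge <i, u> of weight w
    v     -- the source vertex
    """
    dist = [INF] * n
    dist[v] = 0
    # dist^k[u] = min(dist^(k-1)[u], min_i(dist^(k-1)[i] + cost[i, u]));
    # n-1 relaxation passes suffice, since a shortest path has at most
    # n-1 edges when there are no negative-length cycles.
    for _ in range(n - 1):
        for i, u, w in edges:
            if dist[i] + w < dist[u]:
                dist[u] = dist[i] + w
    return dist
```

Note that negative edge weights are allowed here, unlike in Dijkstra's algorithm.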

String Editing
Problem: We are given two strings X = x1, x2, . . ., xn and Y =y1, y2, . . . , ym, where xi, 1 ≤ i ≤ n, and
yj, 1 ≤ j ≤ m, are members of a finite set of symbols known as the alphabet. We want to transform X into Y
using a sequence of edit operations on X.
 The permissible edit operations are
o insert,
o delete and
o change (A symbol of X into another)
 A cost is associated with each operation.
 The cost of a sequence of operations is the sum of the costs of the individual operations in the sequence.
 The problem of string editing is to identify a minimum-cost sequence of edit operations that will
transform X into Y.
 Let
o D(xi) be the cost of deleting the symbol xi from X,
o I(yj) be the cost of inserting a symbol yj into X,
o And C(xi, yj) be the cost of changing the symbol xi of X into yj.


Solution :
 A solution to the string editing problem consists of a sequence of decisions, one for each edit
operation. Let ε be a minimum-cost edit sequence for transforming X into Y. The first operation O
in ε is a delete, insert, or change.
 If ε′ = ε - {O} and X′ is the result of applying O to X, then ε′ must be a minimum-cost edit sequence
that transforms X′ into Y. Thus the principle of optimality holds for this problem.

 A Dynamic programming solution for this problem can be obtained as follows


 Define cost(i,j) to be the minimum cost of any edit sequence for transforming x1, x2, . . ., xi into y1,
y2, . . , yj. Compute cost(i,j) for each i and j.
 The cost(n,m) is the cost of an optimal edit sequence.

 For i = j = 0, cost(i, j) = 0, since the two sequences are identical and empty.

 If j=0, and i>0, we can transform X into Y by a sequence of deletes.


Thus cost (i,0) = cost(i-1,0)+D(xi)

 If i=0 and j>0, cost(0, j) = cost(0, j-1) + I(yj).

 If i>0 and j>0, the transformation can happen in three ways, as shown in the equations below.

 0 i j 0
cos t(i 1,0)  D(x ) j  0, i  0

cos t(i, j)  i


cos t(0, j 1)  I ( y j ) i  0, j  0
 cos t'(i, j) i  0, j  0

 cos t(i 1, j 1) if xi  y j


  cos t(i 1, j)  D(x ) 
cos t'(i, j)    i 
mincos t(i 1, j 1)  C(xi , y j ) if xi  y j

  cos t(i, j 1)  I ( y ) 
 j 

 We have to compute cost(i, j) for all possible values of i and j. These values are computed in the
form of a table, M.
 M(i, j) stores the cost(i, j) value.
 Each entry of M is computed in O(1) time.
 Therefore the whole algorithm takes O(mn) time.
 The value cost(n, m) is the final answer we are interested in.

 A minimum edit sequence can be obtained by a simple backward trace from cost(n,m). The
backward trace is enabled by recording which of the three options for i>0, j>0 yielded the minimum
cost for each i and j.
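The table M can be filled row by row. A minimal Python sketch (the cost functions D, I, C are parameters; the defaults shown, unit insert/delete cost and change cost 2, are only an illustrative assumption):

```python
def edit_cost(X, Y, D=lambda x: 1, I=lambda y: 1, C=lambda x, y: 2):
    """Minimum cost of transforming string X into string Y.

    D(x)    -- cost of deleting symbol x
    I(y)    -- cost of inserting symbol y
    C(x, y) -- cost of changing symbol x into y
    """
    n, m = len(X), len(Y)
    M = [[0] * (m + 1) for _ in range(n + 1)]  # M[i][j] stores cost(i, j)
    for i in range(1, n + 1):                  # cost(i, 0) = cost(i-1, 0) + D(x_i)
        M[i][0] = M[i - 1][0] + D(X[i - 1])
    for j in range(1, m + 1):                  # cost(0, j) = cost(0, j-1) + I(y_j)
        M[0][j] = M[0][j - 1] + I(Y[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:           # x_i = y_j: no cost
                M[i][j] = M[i - 1][j - 1]
            else:                              # delete, change, or insert
                M[i][j] = min(M[i - 1][j] + D(X[i - 1]),
                              M[i - 1][j - 1] + C(X[i - 1], Y[j - 1]),
                              M[i][j - 1] + I(Y[j - 1]))
    return M[n][m]
```

With the default costs, edit_cost("aabab", "baba") returns 3 (the answer to Exercise 2 below).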


Exercise : Transform the string this into there and find out how many edit operations are required.

Exercise 2: Find the minimum edit cost to transform the string X = x1, x2, x3, x4, x5 = a, a, b, a, b into
Y = y1, y2, y3, y4 = b, a, b, a. Let the cost associated with each insertion and deletion be 1, and the cost
of changing any symbol to any other symbol be 2.

Exercise 3: Let X = a, a, b, a, a, b, a, b, a, a and Y = b, a, b, a, a, b, a, b. Find a minimum-cost edit
sequence that transforms X into Y.

Exercise 4 : Design a pseudo code algorithm that implements the string editing algorithm.


0/1 Knapsack problem


We are given n objects with weights wi and profits pi, where i varies from 1 to n, and a knapsack with
capacity m. The problem is to fill the knapsack with a subset of the n objects so that the resulting profit
is maximum.
 Formally, the knapsack problem can be stated as: KNAP(1, n, m) is

maximize    Σ (1≤i≤n) pi·xi
subject to  Σ (1≤i≤n) wi·xi ≤ m
            xi = 0 or 1, 1 ≤ i ≤ n.

A solution to the knapsack problem can be obtained by making a sequence of decisions on the variables.
Let us assume that the decisions on the xi are made in the order x1, x2, ..., xn.

Looking backward on the sequence of decisions xn, xn-1, ..., x1:

f_n(m) = max{ f_(n-1)(m), f_(n-1)(m - wn) + pn }
Generalizing: f_i(y) = max{ f_(i-1)(y), f_(i-1)(y - wi) + pi }   --- (1)

where f_i(y) is the value of an optimal solution to KNAP(1, i, y).
 f_n(m) is the value of an optimal solution to KNAP(1, n, m).
Clearly f_0(y) = 0.

f_i(y) is an ascending step function: there are a finite number of y's, y1 < y2 < ... < yk, such that
f_i(y1) < f_i(y2) < ... < f_i(yk).
To solve this, use the ordered set S^i = {(f_i(yj), yj) | 1 ≤ j ≤ k} to represent f_i(y).
Each member of S^i is a pair (P, W), where P = f_i(yj) and W = yj.

Therefore f_0(y) is represented by S^0 = {(0, 0)}.

To compute S^(i+1) from S^i, first compute S^i_1 = {(P + p_(i+1), W + w_(i+1)) | (P, W) ∈ S^i}.
S^(i+1) is computed by merging the pairs in S^i and S^i_1 together.

Rules of merging/purging:
1. If S^(i+1) contains two pairs (pj, wj) and (pk, wk) with the property pj ≤ pk and wj ≥ wk, then the
pair (pj, wj) can be discarded because of equation (1).
2. Purge all pairs (P, W) with W > m while generating the S^i's.
This discarding or purging rule is also called the dominance rule.

After purging, the P value of the last pair in S^n gives f_n(m).

We can dispense with the full computation of S^n, since we only want the solution to KNAP(1, n, m):
the last pair in S^n is either the last pair in S^(n-1), or it is (pj + pn, wj + wn), where (pj, wj) ∈ S^(n-1)
is such that wj + wn ≤ m and wj is maximum.

Setting each xi to 0 or 1 is then determined by carrying out a search through the S^i's.


If the last pair (p1, w1) of S^n is in S^(n-1), then set xn = 0.
If (p1, w1) ∉ S^(n-1), then (p1 - pn, w1 - wn) ∈ S^(n-1), and set xn = 1.

Eg: solve the 0/1 knapsack problem using DP with m = 6, w = (2, 3, 4), p = (1, 2, 5).
Number of objects = 3.
Arranging them as (profit, weight) pairs: (1, 2), (2, 3), (5, 4).

S^0 = {(0, 0)}                          S^0_1 = {(1, 2)}
S^1 = {(0, 0), (1, 2)}                  S^1_1 = {(2, 3), (3, 5)}
S^2 = {(0, 0), (1, 2), (2, 3), (3, 5)}  S^2_1 = {(5, 4), (6, 6), (7, 7), (8, 9)}
S^3 = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)}

The pair (3, 5) is discarded since (5, 4) dominates it.
The pairs (7, 7) and (8, 9) are discarded since W > m.
The last pair in S^3 gives f_3(6) = 6. Last pair = (6, 6).

(6, 6) ∈ S^3, (6, 6) ∉ S^2  ⇒  x3 = 1.   (6, 6) - (5, 4) = (1, 2).
(1, 2) ∈ S^2, (1, 2) ∈ S^1  ⇒  x2 = 0.
(1, 2) ∈ S^1, (1, 2) ∉ S^0  ⇒  x1 = 1.

Hence, the optimal solution is (x1, x2, x3) = (1, 0, 1).
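The set-merging computation in this example can be sketched in Python (a minimal illustration; pairs are (P, W) tuples, pairs with W > m are purged, and the dominance rule is applied after each merge):

```python
def knapsack_pairs(p, w, m):
    """0/1 knapsack by merging the ordered sets S^i; returns f_n(m)."""
    S = [(0, 0)]                                   # S^0 = {(0, 0)}
    for pi, wi in zip(p, w):
        # S^i_1: shift every pair of S^i by (p_i, w_i), purging pairs with W > m
        S1 = [(P + pi, W + wi) for (P, W) in S if W + wi <= m]
        # merge S^i and S^i_1 by weight, then apply the dominance rule:
        # a pair is kept only if its P exceeds every P at smaller or equal W
        S_next = []
        for P, W in sorted(S + S1, key=lambda t: (t[1], -t[0])):
            if not S_next or P > S_next[-1][0]:
                S_next.append((P, W))
        S = S_next
    return S[-1][0]                                # P value of the last pair
```

On the example above (p = (1, 2, 5), w = (2, 3, 4), m = 6), this returns f_3(6) = 6.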

Reliability Design
 It is a problem with a multiplicative optimization function.
 The problem is to design a system that is composed of several devices connected in series.

Figure: n devices Di, 1 ≤ i ≤ n, connected in series.


 Let ri be the reliability of device Di
 (i.e., ri is the probability that device i will function properly).
 Then the reliability of the entire system is Π ri.
 The reliability of the system may not be good, even though the individual devices are very good.

 Eg: if n = 10 and ri = 0.99, 1 ≤ i ≤ 10, then Π ri = (0.99)^10 ≈ 0.904382.

 (i.e., the reliability of each device is 99%, yet that of the system is only about 90%.)

 Hence, it is desirable to duplicate devices. Multiple copies of the same device type are
connected in parallel through the use of switching circuits.


 The switching circuits determine which devices in any given group are functioning properly.
They then make use of one such device at each stage.
 This problem is an example for how to use Dynamic Programming to solve a problem with a
multiplicative optimization function.
 If stage i contains mi copies of device Di, then the probability that all mi have a malfunction
is (1 - ri)^mi.

Eg: if ri = 0.99 and mi = 2,

Stage reliability = 1 - (1 - 0.99)²
                  = 0.9999

 Let ϕi(mi) be the reliability of stage i; thus ϕi(mi) = 1 - (1 - ri)^mi.

 The reliability of the system of stages is Π (1≤i≤n) ϕi(mi).
 Our problem is to use device duplication to maximize reliability. This maximization is to be carried
out under a cost constraint.
 Let ci be the cost of each unit of device i, and let c be the maximum allowable cost of the system
being designed.
We wish to solve the following maximization problem:

maximize    Π (1≤i≤n) ϕi(mi)
subject to  Σ (1≤i≤n) ci·mi ≤ c
            mi ≥ 1 and integer, 1 ≤ i ≤ n.

 A dynamic programming solution can be obtained by following the approach of the knapsack
problem. Since ci > 0, each mi must be in the range 1 ≤ mi ≤ ui, where

ui = ⌊ (c + ci - Σ (1≤j≤n) cj) / ci ⌋
 An optimal solution m1, m2, …, mn is the result of a sequence of decisions, one decision for
each mi.


 Let fi(x) represent the maximum value of Π (1≤j≤i) ϕj(mj) subject to the constraints

Σ (1≤j≤i) cj·mj ≤ x  and  1 ≤ mj ≤ uj, 1 ≤ j ≤ i.

 Then the value of the optimal solution is fn(c).


 The last decision mn is to choose one value from {1, 2, ..., un}.
 Once a value for mn has been chosen, the remaining decisions must use the remaining funds
(c - cn·mn) in an optimal way.
 Therefore the principle of optimality holds.

fn(c) = max (1≤mn≤un) { ϕn(mn) · f(n-1)(c - cn·mn) }

Generalizing:

fi(x) = max (1≤mi≤ui) { ϕi(mi) · f(i-1)(x - ci·mi) }

Clearly f0(x) = 1 for all x, 0 ≤ x ≤ c.
To solve this,
let S^i consist of tuples of the form (f, x), where f = fi(x).
There is at most one tuple for each different x that results from a sequence of decisions m1, m2, ..., mi.
The dominance rule: (f1, x1) dominates (f2, x2) iff f1 ≥ f2 and x1 ≤ x2. Dominated tuples can be discarded
from S^i.
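A minimal Python sketch of this recurrence (illustrative; integer device costs are assumed, and the S^i tuples are pruned with the dominance rule above):

```python
def reliability_design(r, cost, c):
    """Maximize prod(1 - (1 - r_i)^m_i) subject to sum(c_i * m_i) <= c, m_i >= 1.

    r    -- device reliabilities r_i
    cost -- integer device costs c_i
    c    -- maximum allowable total cost
    """
    n = len(r)
    total = sum(cost)
    # u_i = floor((c + c_i - sum_j c_j) / c_i)
    u = [(c + cost[i] - total) // cost[i] for i in range(n)]
    S = [(1.0, 0)]                       # S^0: f_0(x) = 1, at cost 0
    for i in range(n):
        best_at = {}                     # best reliability found for each cost x
        for f, x in S:
            for m in range(1, u[i] + 1):
                x2 = x + cost[i] * m
                if x2 > c:
                    break
                f2 = f * (1 - (1 - r[i]) ** m)   # phi_i(m) = 1 - (1 - r_i)^m
                if x2 not in best_at or f2 > best_at[x2]:
                    best_at[x2] = f2
        # dominance rule: keep (f, x) only if no tuple has f' >= f at x' <= x
        S, best = [], 0.0
        for x2 in sorted(best_at):
            if best_at[x2] > best:
                best = best_at[x2]
                S.append((best_at[x2], x2))
    return S[-1][0]                      # f value of the last tuple = f_n(c)
```

For instance, with r = (0.9, 0.8, 0.5), costs (30, 15, 20), and c = 105, the maximum system reliability works out to 0.648, attained at m = (1, 2, 2).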

Department of CSE, Vishnu Institute of Technology
