DAA Unit-3

Dynamic Programming

Dynamic programming is used to solve optimization problems. It:

1) Breaks the problem down into similar subproblems
2) Finds the optimal solution for these subproblems
3) Stores the results of the subproblems (memoization)
4) Reuses them so that the same subproblem is not computed more than once
5) Finally combines them into the result of the overall problem

Dynamic programming is mainly applicable to problems that have
overlapping subproblems and optimal substructure.
Optimal substructure: A problem
has optimal substructure if its optimal
solution can be constructed efficiently
from the optimal solutions of its
subproblems.
Overlapping subproblems: A
problem has overlapping subproblems
if the same subproblem is solved
multiple times while solving a larger
problem.
Recursion:

int fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n-2) + fib(n-1);
}

Iteration:

int fib(int n)
{
    int f[n+1], i;
    if (n <= 1)
        return n;
    f[0] = 0; f[1] = 1;
    for (i = 2; i <= n; i++)
        f[i] = f[i-2] + f[i-1];
    return f[n];
}
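The memoization mentioned above can also be applied to the recursive version directly. Below is a minimal C sketch (the memo array, its size MAXN, and the use of long long are choices made for this sketch, not part of the slides): each fib(n) is computed once, stored, and reused.

#include <stdio.h>
#include <string.h>

#define MAXN 64
long long memo[MAXN];            /* memo[n] = fib(n), or -1 if not yet computed */

long long fib(int n)
{
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];         /* reuse the stored subproblem  */
    return memo[n] = fib(n - 2) + fib(n - 1);  /* compute once and memoize     */
}

int main(void)
{
    memset(memo, -1, sizeof memo);             /* -1 marks "not yet computed"  */
    printf("%lld\n", fib(40));                 /* prints 102334155             */
    return 0;
}

This keeps the recursive structure of the left-hand version while matching the O(n) running time of the iterative one.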
1. Matrix Chain Multiplication
• Matrix-chain multiplication problem
  – Given a chain A1, A2, …, An of n matrices, where for i = 1, 2, …, n,
    matrix Ai has dimension p(i-1) × p(i)
  – Parenthesize the product A1A2…An such that the total number of
    scalar multiplications is minimized.
• A brute-force exhaustive search over all parenthesizations takes time
  exponential in n (on the order of 2^n).
Matrix Multiplication

Multiplying a p×q matrix by a q×r matrix gives a p×r matrix.

Cost: number of scalar multiplications = p·q·r
Example

Matrix   Dimensions
A1       3 × 2
A2       2 × 4
A3       4 × 2
A4       2 × 5

    Parenthesization        Scalar multiplications
1   ((A1 A2) A3) A4         78
2   (A1 A2) (A3 A4)         124
3   (A1 (A2 A3)) A4         58
4   A1 ((A2 A3) A4)         66
5   A1 (A2 (A3 A4))         110

For parenthesization 1:
3 × 2 × 4 = 24 scalar multiplications to get (A1 A2), a 3 × 4 result
3 × 4 × 2 = 24 scalar multiplications to get ((A1 A2) A3), a 3 × 2 result
3 × 2 × 5 = 30 scalar multiplications to get (((A1 A2) A3) A4), a 3 × 5 result
Total = 24 + 24 + 30 = 78
Dynamic Programming Approach
• The structure of an optimal solution
  – Let us use the notation Ai..j for the matrix that results from the
    product Ai Ai+1 … Aj
  – An optimal parenthesization of the product A1A2…An splits the
    product between Ak and Ak+1 for some integer k where 1 ≤ k < n
  – First compute matrices A1..k and Ak+1..n ; then multiply them to get
    the final matrix A1..n
Dynamic Programming Approach …contd
  – Key observation: the parenthesizations of the subchains A1A2…Ak and
    Ak+1Ak+2…An must also be optimal if the parenthesization of the
    chain A1A2…An is optimal.
  – That is, the optimal solution to the problem contains within it the
    optimal solutions to subproblems.
Dynamic Programming Approach …contd
• Recursive definition of the value of an optimal solution
  – Let m[i, j] be the minimum number of scalar multiplications
    necessary to compute Ai..j
  – The minimum cost to compute A1..n is m[1, n]
  – Suppose the optimal parenthesization of Ai..j splits the product
    between Ak and Ak+1 for some integer k where i ≤ k < j
  – Then Ai..j = (Ai Ai+1…Ak)·(Ak+1Ak+2…Aj) = Ai..k · Ak+1..j
  – Cost of computing Ai..j = cost of computing Ai..k
      + cost of computing Ak+1..j + cost of multiplying Ai..k and Ak+1..j
  – Cost of multiplying Ai..k and Ak+1..j is p(i-1)·p(k)·p(j)
  – m[i, j] = m[i, k] + m[k+1, j] + p(i-1)·p(k)·p(j) for i ≤ k < j
  – m[i, i] = 0 for i = 1, 2, …, n
Dynamic Programming Approach …contd
  – But the optimal parenthesization occurs at one value of k among all
    possible i ≤ k < j
  – Check all of them and select the best one

    m[i, j] = 0                                                          if i = j
    m[i, j] = min over i ≤ k < j { m[i, k] + m[k+1, j] + p(i-1)·p(k)·p(j) }   if i < j
Dynamic Programming Approach …contd
• To keep track of how to construct an optimal solution, we use a table s
• s[i, j] = value of k at which Ai Ai+1 … Aj is split for an optimal
  parenthesization.
Ex:-
[A1]5×4  [A2]4×6  [A3]6×2  [A4]2×7

p0=5, p1=4, p2=6, p3=2, p4=7

Computed bottom-up by sequence size j − i:

j−i = 0:  m[1,1]=0         m[2,2]=0         m[3,3]=0         m[4,4]=0
j−i = 1:  m[1,2]=120, k=1  m[2,3]=48, k=2   m[3,4]=84, k=3
j−i = 2:  m[1,3]=88,  k=1  m[2,4]=104, k=3
j−i = 3:  m[1,4]=158, k=3
Matrix Chain Multiplication Algorithm

  – First computes costs for chains of length l = 1
  – Then for chains of length l = 2, 3, … and so on
  – Computes the optimal cost bottom-up.

Input: Array p[0…n] containing the matrix dimensions, and n
Result: Minimum-cost table m and split table s

Algorithm Matrix_Chain_Mul(p[], n)
{
    for i := 1 to n do
        m[i, i] := 0;

    for len := 2 to n do   // for chain lengths 2, 3, and so on
    {
        for i := 1 to (n - len + 1) do
        {
            j := i + len - 1;
            m[i, j] := ∞;

            for k := i to j - 1 do
            {
                q := m[i, k] + m[k+1, j] + p[i-1] * p[k] * p[j];
                if q < m[i, j] then
                {
                    m[i, j] := q;
                    s[i, j] := k;
                }
            }
        }
    }
    return m and s;
}

Time complexity of the above algorithm is O(n^3).
Constructing Optimal Solution
• Our algorithm computes the minimum-cost table m and the split table s
• The optimal solution can be constructed from the split table s
  – Each entry s[i, j] = k shows where to split the product Ai Ai+1 … Aj
    for the minimum cost.

Example
• Copy the table of the previous example and then construct the optimal
  parenthesization (see the runnable sketch below).
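As a worked illustration, the following runnable C sketch tabulates m and s exactly as in Matrix_Chain_Mul above and then prints an optimal parenthesization recursively from the split table s. It uses the chain [A1]5×4 [A2]4×6 [A3]6×2 [A4]2×7 of the previous example; the helper names (matrix_chain_mul, print_optimal) are chosen for this sketch, not taken from the slides.

#include <stdio.h>
#include <limits.h>

#define N 4

int m[N + 1][N + 1];   /* m[i][j] = minimum scalar multiplications for Ai..Aj */
int s[N + 1][N + 1];   /* s[i][j] = split point k achieving that minimum      */

void matrix_chain_mul(int p[], int n)
{
    for (int i = 1; i <= n; i++) m[i][i] = 0;

    for (int len = 2; len <= n; len++)             /* chain lengths 2, 3, ... */
        for (int i = 1; i <= n - len + 1; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {
                int q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (q < m[i][j]) { m[i][j] = q; s[i][j] = k; }
            }
        }
}

void print_optimal(int i, int j)       /* print a parenthesization of Ai..Aj */
{
    if (i == j) { printf("A%d", i); return; }
    printf("(");
    print_optimal(i, s[i][j]);         /* left part  Ai..Ak   */
    print_optimal(s[i][j] + 1, j);     /* right part Ak+1..Aj */
    printf(")");
}

int main(void)
{
    int p[N + 1] = {5, 4, 6, 2, 7};    /* dimensions of [A1]5x4 ... [A4]2x7 */
    matrix_chain_mul(p, N);
    printf("minimum cost m[1][%d] = %d\n", N, m[1][N]);
    printf("optimal parenthesization: ");
    print_optimal(1, N);
    printf("\n");
    return 0;
}

For this chain it should report m[1][4] = 158 and print ((A1(A2A3))A4), matching the table above.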
Optimal Binary Search Tree(OBST)

• A binary search tree T is a binary tree, either it is


empty or each node in the tree contains an
identifier and,
– All identifiers in the left subtree of T are less than the
identifier in the root node T.
– All identifiers in the right subtree are greater than the
identifier in the root node T.
– The left and right subtree of T are also binary search
trees.
• Ex:- (a1, a2, a3) = (do, if, stop). Here n = 3.
• The number of possible binary search trees = C(2n, n)/(n+1)
    = C(6, 3)/4
    = 5
• One of the five trees: stop at the root, if as its left child, and do
  as the left child of if.
Algorithm search(x)
{
found:=false;
t:=tree;
while( (t≠0) and not found ) do
{
if( x=t->data ) then found:=true;
else if( x<t->data ) then t:=t->lchild;
else t:=t->rchild;
}
if( not found ) then return 0;
else return 1;
}
Optimal Binary Search Trees
• Problem
– Given sequence of identifiers (a1,a2 …., an) with
a1<a2 <··· < an.
– Let p(i) be the probability with which we search for ai
– Let q(i) be the probability with which we search for an
identifier x such that ai< x <ai+1 .
– Want to build a binary search tree (BST)
with minimum expected search cost.
• Identifiers: stop, if, do
• In the corresponding tree, each internal node represents a successful
  search (probability p(i)) and each external node Ei represents an
  unsuccessful search (probability q(i)); for this example the external
  nodes are E0, E1, E2, E3.

The expected cost of a binary search tree:

    ∑ (1 ≤ i ≤ n) p(i) · level(ai)  +  ∑ (0 ≤ i ≤ n) q(i) · (level(Ei) − 1)
The dynamic programming approach
• Make a decision as which of the ai’s should be assigned
to the root node of the tree.
• If we choose ak, then it is clear that the internal nodes for
a1,a2,……,ak-1 as well as the external nodes for the
classes E0, E1,….,Ek-1 will lie in the left subtree l of the
root. The remaining nodes will be in the right subtree r.

(Figure: ak at the root; the left subtree l contains a1 … ak-1, with
probabilities p(1) … p(k-1) and q(0) … q(k-1), and has cost c(0, k-1);
the right subtree r contains ak+1 … an, with probabilities p(k+1) … p(n)
and q(k) … q(n), and has cost c(k, n).)
cost(l) = ∑ (1 ≤ i < k) p(i)·level(ai)  +  ∑ (0 ≤ i < k) q(i)·(level(Ei) − 1)

cost(r) = ∑ (k < i ≤ n) p(i)·level(ai)  +  ∑ (k ≤ i ≤ n) q(i)·(level(Ei) − 1)

• In both cases the level is measured by considering the root of the
  respective subtree to be at level 1.
• Using w(i, j) to represent the sum q(i) + ∑ (i < l ≤ j) ( q(l) + p(l) ),
  we obtain the following as the expected cost of the above search tree:

      p(k) + cost(l) + cost(r) + w(0, k-1) + w(k, n)


• If we use c(i, j) to represent the cost of an optimal binary search
  tree tij containing ai+1, …, aj and Ei, …, Ej, then cost(l) = c(0, k-1)
  and cost(r) = c(k, n).

• For the tree to be optimal, we must choose k such that
  p(k) + c(0, k-1) + c(k, n) + w(0, k-1) + w(k, n) is minimum.

  Hence, for c(0, n) we obtain

      c(0, n) = min over 0 < k ≤ n { c(0, k-1) + c(k, n) + p(k) + w(0, k-1) + w(k, n) }

  We can generalize the above formula for any c(i, j) as shown below:

      c(i, j) = min over i < k ≤ j { c(i, k-1) + c(k, j) + p(k) + w(i, k-1) + w(k, j) }

              = min over i < k ≤ j { c(i, k-1) + c(k, j) } + w(i, j)

  – Therefore, c(0, n) can be solved by first computing all c(i, j) such
    that j − i = 1, next computing all c(i, j) such that j − i = 2, then
    all c(i, j) with j − i = 3, and so on.

  – During this computation we record the root r(i, j) of each tree tij;
    an optimal binary search tree can then be constructed from these
    r(i, j).

  – r(i, j) is the value of k that minimizes the cost value.

Note: 1. c(i, i) = 0, w(i, i) = q(i), and r(i, i) = 0 for all 0 ≤ i ≤ n
      2. w(i, j) = p(j) + q(j) + w(i, j-1)
Ex 1: Let n = 4 and (a1, a2, a3, a4) = (do, if, int, while).
Let p(1:4) = (3, 3, 1, 1) and q(0:4) = (2, 3, 1, 1, 1).
(The p's and q's have been multiplied by 16 for convenience.)

Then we get the following table, row by row in order of increasing j − i:

j−i=0:  w00=2  c00=0  r00=0 | w11=3  c11=0  r11=0 | w22=1  c22=0  r22=0 |
        w33=1  c33=0  r33=0 | w44=1  c44=0  r44=0
j−i=1:  w01=8  c01=8  r01=1 | w12=7  c12=7  r12=2 | w23=3  c23=3  r23=3 |
        w34=3  c34=3  r34=4
j−i=2:  w02=12 c02=19 r02=1 | w13=9  c13=12 r13=2 | w24=5  c24=8  r24=3
j−i=3:  w03=14 c03=25 r03=2 | w14=11 c14=19 r14=2
j−i=4:  w04=16 c04=32 r04=2

Computation of c(0,4), w(0,4), and r(0,4)


• From the table we can see that c(0,4)=32 is the minimum
cost of a binary search tree for ( a1, a2, a3, a4 ).
• The root of tree t04 is a2.
• The left subtree is t01 and the right subtree t24.
• Tree t01 has root a1; its left subtree is t00 and right
subtree t11.
• Tree t24 has root a3; its left subtree is t22 and right
subtree t34.
• Thus we can construct OBST.
The resulting OBST: if is the root; do is its left child; int is its
right child; while is the right child of int.
Ex 2: Let n = 4 and (a1, a2, a3, a4) = (count, float, int, while).
Let p(1:4) = (1/20, 1/5, 1/10, 1/20) and
q(0:4) = (1/5, 1/10, 1/5, 1/20, 1/20).

Using the r(i, j)’s construct an optimal binary search tree.


Time complexity of the above procedure to evaluate the c's and r's
• The procedure computes c(i, j) for (j − i) = 1, 2, …, n.

• When j − i = m, there are n − m + 1 values c(i, j) to compute.

• The computation of each such c(i, j) requires finding the minimum of
  m quantities, so each c(i, j) can be computed in time O(m).

• The total time for all c(i, j) with j − i = m is therefore
  m(n − m + 1) = nm − m² + m = O(nm − m²).

• Therefore, the total time to evaluate all the c(i, j)'s and r(i, j)'s is

      ∑ (1 ≤ m ≤ n) (nm − m²) = O(n³)

• We can reduce the time complexity by using an observation due to
  D. E. Knuth.

• Observation: the optimal k can be found by limiting the search to the
  range r(i, j−1) ≤ k ≤ r(i+1, j).

• In this case the computing time is O(n²).


OBST Algorithm

Algorithm OBST(p, q, n)
{
    for i := 0 to n-1 do
    {   // initialize
        w[i, i] := q[i]; r[i, i] := 0; c[i, i] := 0;
        // optimal trees with one node
        w[i, i+1] := p[i+1] + q[i+1] + q[i];
        c[i, i+1] := p[i+1] + q[i+1] + q[i];
        r[i, i+1] := i + 1;
    }
    w[n, n] := q[n]; r[n, n] := 0; c[n, n] := 0;

    // Find optimal trees with m nodes.
    for m := 2 to n do
    {
        for i := 0 to n - m do
        {
            j := i + m;
            w[i, j] := p[j] + q[j] + w[i, j-1];

            // Solve using Knuth's result.
            x := Find(c, r, i, j);

            c[i, j] := w[i, j] + c[i, x-1] + c[x, j];
            r[i, j] := x;
        }
    }
}

Algorithm Find(c, r, i, j)
{
    min := ∞;
    for k := r[i, j-1] to r[i+1, j] do
    {
        if ( c[i, k-1] + c[k, j] < min ) then
        {
            min := c[i, k-1] + c[k, j]; y := k;
        }
    }
    return y;
}
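The following runnable C sketch implements the OBST algorithm above, including Knuth's range restriction in Find, wired to the data of Example 1 (p(1:4) = (3, 3, 1, 1), q(0:4) = (2, 3, 1, 1, 1), scaled by 16). The sentinel 1000000 and the lower-case name find are choices of this sketch, not part of the slides.

#include <stdio.h>

#define N 4

int p[N + 1] = {0, 3, 3, 1, 1};                 /* p(1:4), scaled by 16 */
int q[N + 1] = {2, 3, 1, 1, 1};                 /* q(0:4), scaled by 16 */
int w[N + 1][N + 1], c[N + 1][N + 1], r[N + 1][N + 1];

int find(int i, int j)          /* best root k, searched in r[i][j-1] .. r[i+1][j] */
{
    int min = 1000000, y = 0;
    for (int k = r[i][j - 1]; k <= r[i + 1][j]; k++)
        if (c[i][k - 1] + c[k][j] < min) { min = c[i][k - 1] + c[k][j]; y = k; }
    return y;
}

int main(void)
{
    for (int i = 0; i < N; i++) {               /* trees with 0 and 1 nodes */
        w[i][i] = q[i]; c[i][i] = 0; r[i][i] = 0;
        w[i][i + 1] = q[i] + q[i + 1] + p[i + 1];
        c[i][i + 1] = w[i][i + 1];
        r[i][i + 1] = i + 1;
    }
    w[N][N] = q[N]; c[N][N] = 0; r[N][N] = 0;

    for (int m = 2; m <= N; m++)                /* trees with m nodes */
        for (int i = 0; i <= N - m; i++) {
            int j = i + m;
            w[i][j] = p[j] + q[j] + w[i][j - 1];
            int k = find(i, j);
            c[i][j] = w[i][j] + c[i][k - 1] + c[k][j];
            r[i][j] = k;
        }

    printf("c(0,%d) = %d, root r(0,%d) = a%d\n", N, c[0][N], N, r[0][N]);
    return 0;
}

Running it should reproduce the table of Example 1, in particular c(0,4) = 32 with root r(0,4) = 2, i.e. a2 at the root.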
Traveling Salesperson Problem (TSP)

Problem:-

• You are given a set of n cities.


• You are given the distances between the cities.
• You start and terminate your tour at your home city.
• You must visit each other city exactly once.
• Your objective is to determine the shortest tour, i.e., to minimize
  the total distance traveled.
• e.g. a directed graph on 4 vertices, with edge weights as in the cost
  matrix below.

• Cost matrix (∞ means no edge):

          1    2    3    4
    1     0    2   10    5
    2     2    0    9    ∞
    3     4    3    0    4
    4     6    8    7    0
The dynamic programming approach

• Let g(i, S) be the length of a shortest path starting at vertex i,
  going through all vertices in S, and terminating at vertex 1.

• The length of an optimal tour:

      g(1, V − {1}) = min over 2 ≤ k ≤ n { c1k + g(k, V − {1, k}) }      … (1)

• The general form:

      g(i, S) = min over j ∈ S { cij + g(j, S − {j}) }                   … (2)

• Equation (1) can be solved for g(1, V − {1}) if we know g(k, V − {1, k})
  for all choices of k.

• These g values can be obtained by using equation (2). Clearly,
  g(i, Ø) = ci1, 1 ≤ i ≤ n.

• Hence we can use equation (2) to obtain g(i, S) for all S of size 1,
  then g(i, S) for all S of size 2, and so on.
Thus,
g(2, Ø)=C21=2 , g(3, Ø)=C31=4

g(4, Ø)=C41=6
We can obtain
g(2, {3})=C23 + g(3, Ø)=9+4=13
g(2, {4})=C24 + g(4, Ø)=∞

g(3, {2})=C32 + g(2, Ø)=3+2=5


g(3, {4})=C34 + g(4, Ø)=4+6=10
g(4, {2})=C42 + g(2, Ø)=8+2=10
g(4, {3})=C43 + g(3, Ø)=7+4=11

Next, we compute g(i,S) with |S | =2,


g( 2,{3,4} )=min { c23+g(3,{4}), c24+g(4,{3}) }
=min {19, ∞}=19
g( 3,{2,4} )=min { c32+g(2,{4}), c34+g(4,{2}) }
=min {∞,14}=14
g(4,{2,3} )=min {c42+g(2,{3}), c43+g(3,{2}) }
=min {21,12}=12
Finally,
We obtain

g(1,{2,3,4})=min { c12+ g( 2,{3,4} ),


c13+ g( 3,{2,4} ),
c14+ g(4,{2,3} ) }
=min{ 2+19,10+14,5+12}
=min{21,24,17}
=17.
• A tour can be constructed if we retain, with each g(i, S), the value
  of j that minimizes the right-hand side.
• Let J(i, S) be this value; then J(1, {2, 3, 4}) = 4.
• Thus the tour starts from 1 and goes to 4.

• The remaining tour can be obtained from g(4, {2, 3}), and
  J(4, {2, 3}) = 3.

• Thus the next edge is <4, 3>. The remaining tour comes from
  g(3, {2}), and J(3, {2}) = 2.

The optimal tour is: (1, 4, 3, 2, 1)


Tour distance is 5+7+3+2 = 17
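The recurrence g(i, S) can be tabulated directly by encoding each subset S of {2, 3, 4} as a bitmask. A minimal C sketch for the 4-city cost matrix above follows; the large sentinel INF stands in for ∞, and the bit encoding (bit j−2 set means city j is in S) is a choice of this sketch, not part of the slides.

#include <stdio.h>

#define N   4
#define INF 1000000

int c[N + 1][N + 1] = {              /* cost matrix from the example (row/col 0 unused) */
    {0, 0, 0,  0, 0},
    {0, 0, 2, 10, 5},
    {0, 2, 0,  9, INF},
    {0, 4, 3,  0, 4},
    {0, 6, 8,  7, 0},
};

int g[N + 1][1 << N];                /* g[i][mask]: shortest path from i through mask, ending at 1 */

int main(void)
{
    int nsets = 1 << (N - 1);        /* subsets over cities 2..4 */
    for (int i = 2; i <= N; i++) g[i][0] = c[i][1];          /* g(i, O/) = c[i][1] */

    for (int mask = 1; mask < nsets; mask++)
        for (int i = 2; i <= N; i++) {
            if (mask & (1 << (i - 2))) continue;             /* i must not be in S */
            g[i][mask] = INF;
            for (int j = 2; j <= N; j++)
                if (mask & (1 << (j - 2))) {
                    int val = c[i][j] + g[j][mask ^ (1 << (j - 2))];
                    if (val < g[i][mask]) g[i][mask] = val;
                }
        }

    int best = INF, full = nsets - 1;                        /* S = {2, 3, 4} */
    for (int k = 2; k <= N; k++) {
        int val = c[1][k] + g[k][full ^ (1 << (k - 2))];
        if (val < best) best = val;
    }
    printf("optimal tour length g(1, V - {1}) = %d\n", best);
    return 0;
}

It reproduces the intermediate values above (g(2,{3,4}) = 19, g(3,{2,4}) = 14, g(4,{2,3}) = 12) and should print an optimal tour length of 17.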
All pairs shortest path problem

Floyd’s Algorithm
All-Pairs Shortest Path Problem

• Let G=( V,E ) be a directed graph consisting of n


vertices.
• Weight is associated with each edge.

• The problem is to find a shortest path between every pair


of nodes.
Ex:- a directed graph on 5 vertices v1, …, v5, with the cost matrix
below (∞ means no edge):

          1    2    3    4    5
    1     0    1    ∞    1    5
    2     9    0    3    2    ∞
    3     ∞    ∞    0    4    ∞
    4     ∞    ∞    2    0    3
    5     3    ∞    ∞    ∞    0
Idea of the Floyd-Warshall Algorithm
• Assume the vertices are {1, 2, …, n}.

• Let d^k(i, j) be the length of a shortest path from i to j whose
  intermediate vertices are numbered not higher than k, where 0 ≤ k ≤ n.
  Then:

• d^0(i, j) = c(i, j)   (no intermediate vertices at all)

• d^k(i, j) = min { d^(k-1)(i, j), d^(k-1)(i, k) + d^(k-1)(k, j) }

  – and d^n(i, j) is the length of a shortest path from i to j.

• In summary, we need to find d^n starting from d^0 = cost matrix.

• General formula:

      d^k[i, j] = min { d^(k-1)[i, j], d^(k-1)[i, k] + d^(k-1)[k, j] }

(Figure: a shortest path from Vi to Vj using intermediate vertices
{V1, …, Vk} either avoids Vk, with length d^(k-1)[i, j], or passes
through Vk, with length d^(k-1)[i, k] + d^(k-1)[k, j]; both pieces use
only intermediate vertices {V1, …, Vk-1}.)
(The worked matrices d^0 through d^5 for the example graph are obtained
by applying this recurrence for k = 1, …, 5, starting from d^0 = the
cost matrix above; the runnable sketch after the algorithm below
computes the final matrix.)
Algorithm
Algorithm AllPaths( c, d, n )
// c[1:n,1:n] cost matrix
// d[i,j] is the length of a shortest path from i to j
{
for i := 1 to n do
for j := 1 to n do
d [ i, j ] := c [ i, j ] ; // copy c into d

for k := 1 to n do
for i := 1 to n do
for j := 1 to n do
d [ i, j ] := min ( d [ i, j ] , d [ i, k ] + d [ k, j ] );
}
Time complexity is O(n³).
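A runnable C version of AllPaths, using the 5-vertex cost matrix reconstructed above; the sentinel INF stands in for ∞, and the vertices are 0-indexed in the code (both are choices of this sketch).

#include <stdio.h>

#define N   5
#define INF 10000

int main(void)
{
    int c[N][N] = {                       /* cost matrix of the example graph */
        {0,   1,   INF, 1,   5},
        {9,   0,   3,   2,   INF},
        {INF, INF, 0,   4,   INF},
        {INF, INF, 2,   0,   3},
        {3,   INF, INF, INF, 0},
    };
    int d[N][N];

    for (int i = 0; i < N; i++)           /* copy c into d (this is d^0) */
        for (int j = 0; j < N; j++)
            d[i][j] = c[i][j];

    for (int k = 0; k < N; k++)           /* allow vertex k+1 as an intermediate */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];

    for (int i = 0; i < N; i++) {         /* print the final matrix d^n */
        for (int j = 0; j < N; j++) printf("%6d", d[i][j]);
        printf("\n");
    }
    return 0;
}

The matrix printed at the end is d^5, the table of all-pairs shortest path lengths; the intermediate matrices d^1 … d^4 can be inspected by moving the print loop inside the k loop.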
0/1 Knapsack Problem

Let xi = 1 when item i is selected and xi = 0 when item i is not selected.

    maximize    ∑ (i = 1 to n) pi·xi

    subject to  ∑ (i = 1 to n) wi·xi ≤ m

    and xi = 0 or 1 for all i

All profits and weights are positive; m is the knapsack capacity.
Sequence Of Decisions

• Decide the xi values in the order x1, x2, x3, …, xn.
OR
• Decide the xi values in the order xn, xn-1, xn-2, …, x1.
Problem State
• The state of the 0/1 knapsack problem is
given by
 the weights and profits of the available items
 the capacity of the knapsack
• When a decision on one of the xi values is
made, the problem state changes.
 item i is no longer available
 the remaining knapsack capacity may be less
Problem State
• Suppose that decisions are made in the order
x1, x2, x3, …, xn.
• The initial state of the problem is described
by the pair (1, m).
 Items 1 through n are available
 The available knapsack capacity is m.
• Following the first decision the state becomes
one of the following:
 (2, m) … when the decision is to set x1= 0.
 (2, m-w1) … when the decision is to set x1= 1.
Problem State
• Suppose that decisions are made in the order
xn, xn-1, xn-2, …, x1.
• The initial state of the problem is described
by the pair (n, m).
 Items 1 through n are available
 The available knapsack capacity is m.
• Following the first decision the state becomes
one of the following:
 (n-1, m) … when the decision is to set xn= 0.
 (n-1, m-wn) … when the decision is to set xn= 1.
Dynamic programming approach

• Let fn(m) be the value of an optimal solution. Then

      fn(m) = max { fn-1(m), fn-1(m − wn) + pn }

• General formula:

      fi(y) = max { fi-1(y), fi-1(y − wi) + pi }

• We use sets Si of pairs (P, W), where P = fi(y) and W = y.

• Note that S0 = { (0, 0) }.

• We can compute Si+1 from Si by first computing

      S1i = { (P, W) : (P − pi+1, W − wi+1) ∈ Si }

  i.e. S1i is obtained by adding (pi+1, wi+1) to every pair of Si.

Merging:  Si+1 can be computed by merging the pairs in Si and S1i.

Purging:  if Si+1 contains two pairs (Pj, Wj) and (Pk, Wk) with the
property that Pj ≤ Pk and Wj ≥ Wk, then the dominated pair (Pj, Wj) is
discarded.

• When generating the Si's, we can also purge all pairs (P, W) with
  W > m, as these pairs determine the value of fn(x) only for x > m.

• The optimal solution fn(m) is given by the highest-profit pair in Sn.
Set of 0/1 values for the xi's

• The set of 0/1 values for the xi's can be determined by a search
  through the Si's.
  – Let (P, W) be the highest-profit pair in Sn.
Step 1: if (P, W) ∈ Sn but (P, W) ∉ Sn-1, then xn = 1;
        otherwise xn = 0.
This leaves us to determine how (P, W) (when xn = 0) or
(P − pn, W − wn) (when xn = 1) was obtained in Sn-1.
This can be done recursively (repeat Step 1 for Sn-1, Sn-2, …).
A table-based version of this traceback is sketched below.
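The same recurrence fi(y) = max{ fi-1(y), fi-1(y − wi) + pi } can also be tabulated over integer capacities, with the traceback of Step 1 applied to the finished table. Below is a small C sketch; the data (n = 3, p = (1, 2, 5), w = (2, 3, 4), m = 6) is an assumed example, not taken from the slides.

#include <stdio.h>

#define N 3
#define M 6

int p[N + 1] = {0, 1, 2, 5};    /* profits p[1..n] (assumed example) */
int w[N + 1] = {0, 2, 3, 4};    /* weights w[1..n] (assumed example) */

int main(void)
{
    int f[N + 1][M + 1];
    int x[N + 1];               /* x[i] = 1 if item i is selected */

    for (int y = 0; y <= M; y++) f[0][y] = 0;          /* f_0(y) = 0 */

    for (int i = 1; i <= N; i++)
        for (int y = 0; y <= M; y++) {
            f[i][y] = f[i - 1][y];                     /* choice x_i = 0 */
            if (y >= w[i] && f[i - 1][y - w[i]] + p[i] > f[i][y])
                f[i][y] = f[i - 1][y - w[i]] + p[i];   /* choice x_i = 1 */
        }

    /* Traceback: decide x_n, x_{n-1}, ..., x_1 as in Step 1 above. */
    int y = M;
    for (int i = N; i >= 1; i--) {
        if (f[i][y] == f[i - 1][y]) x[i] = 0;
        else { x[i] = 1; y -= w[i]; }
    }

    printf("f_%d(%d) = %d, x = (", N, M, f[N][M]);
    for (int i = 1; i <= N; i++) printf("%d%s", x[i], i < N ? ", " : ")\n");
    return 0;
}

For this data it should report f3(6) = 6 with x = (1, 0, 1). The pair-set (Si) formulation above is preferable when m is very large, since it stores only the dominant (P, W) pairs.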
Reliability Design
• The problem is to design a system that is composed of several devices
  connected in series.
• The probability that a device is in working condition is called its
  reliability.

    D1 → D2 → D3 → … → Dn
    (n devices Di, 1 ≤ i ≤ n, connected in series)

• Let ri be the reliability of device Di (that is, ri is the probability
  that device i will function properly).

• Then the reliability of the entire system is ∏ ri.

• Even if the individual devices are very reliable, the reliability of
  the entire system may not be very good.

• Ex. If n = 10 and ri = 0.99 for 1 ≤ i ≤ 10, then ∏ ri ≈ 0.904.
• Hence, it is desirable to duplicate devices.


• Multiple copies of the same device type are connected in parallel in
  each stage, as shown below.

    (Figure: stage 1 contains several copies of D1 in parallel, stage 2
    several copies of D2, …, stage n several copies of Dn; the stages
    themselves are connected in series.)

• If stage i contains mi copies of device Di, then the probability that
  all mi copies malfunction is (1 − ri)^mi. Hence the reliability of
  stage i becomes 1 − (1 − ri)^mi.

  Ex:- If ri = 0.99 and mi = 2, the stage reliability becomes 0.9999.

• Let Фi(mi) be the reliability of stage i, 1 ≤ i ≤ n.
• Then the reliability of the system of n stages is ∏ (1 ≤ i ≤ n) Фi(mi).
• Our problem is to use device duplication to maximize reliability.
  This maximization is to be carried out under a cost constraint.
• Let ci be the cost of each device i and c be the maximum allowable
  cost of the system being designed.

• We wish to solve the following maximization problem:

      maximize    ∏ (1 ≤ i ≤ n) Фi(mi)

      subject to  ∑ (1 ≤ i ≤ n) ci·mi ≤ c

      mi ≥ 1 and integer, 1 ≤ i ≤ n
Dynamic programming approach

• Since each ci > 0, each mi must be in the range 1 ≤ mi ≤ ui, where

      ui = ⌊ ( c + ci − ∑ (1 ≤ j ≤ n) cj ) / ci ⌋

• The upper bound ui follows from the observation that mj ≥ 1 for every
  stage j.
• The optimal solution m1, m2, …, mn is the result of a sequence of
  decisions, one decision for each mi.
• Let fn(c) be the reliability of an optimal solution. Then

      fn(c) = max over 1 ≤ mn ≤ un { Фn(mn) · fn-1(c − cn·mn) }

• General formula:

      fi(x) = max over 1 ≤ mi ≤ ui { Фi(mi) · fi-1(x − ci·mi) }

• Clearly, f0(x) = 1 for all x, 0 ≤ x ≤ c.

• Let Si consist of tuples of the form (f, x), where f = fi(x).

Purging rule :- if si+1 contains two pairs ( f j, x j ) and


( fk, x k ) with the property that
fj ≤ f k and x j ≥ w k , then we can
purge ( f j, x j )
• When generating s i’s, we can also purge all pairs ( f, x )
with c - x < ∑ ck as such pairs will not leave sufficient
i +1≤ k ≤ n

funds to complete the system.

• The optimal solution f n (c ) is given by the


highest reliability pair.

• Start wit S0 =(1, 0 )
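A minimal C sketch of the same recurrence, tabulated over integer budgets x = 0 … c instead of over (f, x) pairs. The device data (n = 3, costs 30, 15, 20, reliabilities 0.9, 0.8, 0.5, budget c = 105) is an assumed example, not taken from the slides.

#include <stdio.h>

#define N 3
#define C 105

int    cc[N + 1] = {0, 30, 15, 20};    /* cc[i] = cost of one copy of device i (assumed) */
double r[N + 1]  = {0, 0.9, 0.8, 0.5}; /* r[i]  = reliability of device i (assumed)      */

double phi(int i, int m)               /* stage reliability with m parallel copies */
{
    double all_fail = 1.0;
    for (int k = 0; k < m; k++) all_fail *= (1.0 - r[i]);
    return 1.0 - all_fail;
}

int main(void)
{
    static double f[N + 1][C + 1];
    for (int x = 0; x <= C; x++) f[0][x] = 1.0;        /* f_0(x) = 1 */

    for (int i = 1; i <= N; i++)
        for (int x = 0; x <= C; x++) {
            f[i][x] = 0.0;                             /* 0 = stage i not affordable within x */
            for (int m = 1; m * cc[i] <= x; m++) {     /* try m = 1, 2, ... copies of device i */
                double val = phi(i, m) * f[i - 1][x - m * cc[i]];
                if (val > f[i][x]) f[i][x] = val;
            }
        }

    printf("maximum system reliability f_%d(%d) = %.4f\n", N, C, f[N][C]);
    return 0;
}

For these numbers the best choice is (m1, m2, m3) = (1, 2, 2), giving 0.9 × 0.96 × 0.75 = 0.648, which is what the sketch should print.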
