
Dynamic Programming

Longest Common Subsequence

• Problem: Given 2 sequences, X = x1, ..., xm and Y = y1, ..., yn, find a common subsequence whose length is maximum.

  springtime    ncaa tournament    basketball
  printing      north carolina     krzyzewski

• Subsequence need not be consecutive, but must be in order.
Other sequence questions

• Edit distance: Given 2 sequences, X = x1, ..., xm and Y = y1, ..., yn, what is the minimum number of deletions, insertions, and changes that you must do to change one into the other?
• Protein sequence alignment: Given a score matrix on amino acid pairs, s(a, b) for a, b ∈ A, and 2 amino acid sequences, X = x1, ..., xm ∈ A^m and Y = y1, ..., yn ∈ A^n, find the alignment with lowest score…
More problems

Optimal BST: Given sequence K = k1 < k2 <··· < kn of n sorted keys, with a
search probability pi for each key ki, build a binary search tree (BST) with
minimum expected search cost.
Matrix chain multiplication: Given a sequence of matrices A1 A2 … An, with Ai of dimension mi × ni, insert parentheses to minimize the total number of scalar multiplications.
Minimum convex decomposition of a polygon,
Hydrogen placement in protein structures, …
Dynamic Programming

• Dynamic Programming is an algorithm design technique for optimization problems: often minimizing or maximizing.
• Like divide and conquer, DP solves problems by combining solutions to
subproblems.
• Unlike divide and conquer, subproblems are not independent.
• Subproblems may share subsubproblems,
• However, solution to one subproblem may not affect the solutions to
other subproblems of the same problem. (More on this later.)
• DP reduces computation by
• Solving subproblems in a bottom-up fashion.
• Storing solution to a subproblem the first time it is solved.
• Looking up the solution when subproblem is encountered again.
• Key: determine structure of optimal solutions
Steps in Dynamic Programming

1. Characterize structure of an optimal solution.
2. Define value of optimal solution recursively.
3. Compute optimal solution values either top-down with caching or bottom-up in a table.
4. Construct an optimal solution from computed values.
We'll study these with the help of examples.
Longest Common Subsequence

• Problem: Given 2 sequences, X = x1, ..., xm and Y = y1, ..., yn, find a common subsequence whose length is maximum.

  springtime    ncaa tournament    basketball
  printing      north carolina     snoeyink

• Subsequence need not be consecutive, but must be in order.
Naïve Algorithm

• For every subsequence of X, check whether it's a subsequence of Y.
• Time: Θ(n · 2^m).
  • 2^m subsequences of X to check.
  • Each subsequence takes Θ(n) time to check: scan Y for first letter, for second, and so on.
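
For concreteness, here is a minimal Python sketch of this brute-force check (the names lcs_naive and is_subsequence are my own, not from the slides); it tries subsequences of X from longest to shortest and stops at the first one that is also a subsequence of Y:

from itertools import combinations

def lcs_naive(X, Y):
    """Brute force: try every subsequence of X, longest first (Theta(n * 2^m))."""
    def is_subsequence(s, Y):
        it = iter(Y)
        return all(ch in it for ch in s)   # one left-to-right scan of Y
    for k in range(len(X), -1, -1):        # 2^m candidate subsequences in total
        for idxs in combinations(range(len(X)), k):
            s = "".join(X[i] for i in idxs)
            if is_subsequence(s, Y):
                return s                   # first hit is a longest one
    return ""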
Optimal Substructure
Theorem
Let Z = z1, ..., zk be any LCS of X and Y.
1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
2. If xm ≠ yn, then either zk ≠ xm and Z is an LCS of Xm-1 and Y,
3. or zk ≠ yn and Z is an LCS of X and Yn-1.

Notation:
prefix Xi = x1, ..., xi is the first i letters of X.

This says what any longest common subsequence must look like;
do you believe it?
Optimal Substructure
Theorem
Let Z = z1, ..., zk be any LCS of X and Y.
1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
2. If xm ≠ yn, then either zk ≠ xm and Z is an LCS of Xm-1 and Y,
3. or zk ≠ yn and Z is an LCS of X and Yn-1.

Proof (case 1: xm = yn):
Any common sequence Z' that does not end in xm = yn can be made longer by adding xm = yn to the end.
Therefore,
(1) any longest common subsequence (LCS) Z must end in xm = yn,
(2) Zk-1 is a common subsequence of Xm-1 and Yn-1, and
(3) there is no longer CS of Xm-1 and Yn-1, or Z would not be an LCS.
Optimal Substructure
Theorem
Let Z = z1, ..., zk be any LCS of X and Y.
1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
2. If xm ≠ yn, then either zk ≠ xm and Z is an LCS of Xm-1 and Y,
3. or zk ≠ yn and Z is an LCS of X and Yn-1.

Proof (case 2: xm ≠ yn, and zk ≠ xm):
Since Z does not end in xm,
(1) Z is a common subsequence of Xm-1 and Y, and
(2) there is no longer CS of Xm-1 and Y, or Z would not be an LCS.
Recursive Solution

• Define c[i, j] = length of LCS of Xi and Yj.
• We want c[m, n].

  c[i, j] = 0                            if i = 0 or j = 0,
          = c[i-1, j-1] + 1              if i, j > 0 and xi = yj,
          = max(c[i-1, j], c[i, j-1])    if i, j > 0 and xi ≠ yj.

This gives a recursive algorithm and solves the problem.
But does it solve it well?
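
The recurrence can be translated directly into a recursive function. This is only a sketch (Python; the name lcs_length_recursive is my own), and it is exponential in the worst case, which is exactly the question raised above:

def lcs_length_recursive(X, Y, i=None, j=None):
    """Length of an LCS of X[:i] and Y[:j], read straight off the recurrence."""
    if i is None:
        i, j = len(X), len(Y)
    if i == 0 or j == 0:                       # c[i, j] = 0 on an empty prefix
        return 0
    if X[i - 1] == Y[j - 1]:                   # the x_i = y_j case
        return lcs_length_recursive(X, Y, i - 1, j - 1) + 1
    return max(lcs_length_recursive(X, Y, i - 1, j),
               lcs_length_recursive(X, Y, i, j - 1))

# e.g. lcs_length_recursive("springtime", "printing") == 6   (one LCS is "printi")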
Recursive Solution

  c[X, Y] = 0                                      if X is empty or Y is empty,
          = c[prefix(X), prefix(Y)] + 1            if end(X) = end(Y),
          = max(c[prefix(X), Y], c[X, prefix(Y)])  if end(X) ≠ end(Y).

c[springtime, printing]

c[springtim, printing]                        c[springtime, printin]

c[springti, printing]  c[springtim, printin]  c[springtim, printin]  c[springtime, printi]

c[springt, printing]   c[springti, printin]   c[springtim, printi]   c[springtime, print]

Notice that the same subproblem (e.g. c[springtim, printin]) already appears more than once in the first few levels of this recursion tree.

Recursive Solution
  c[X, Y] = 0                                      if X is empty or Y is empty,
          = c[prefix(X), prefix(Y)] + 1            if end(X) = end(Y),
          = max(c[prefix(X), Y], c[X, prefix(Y)])  if end(X) ≠ end(Y).

• Keep track of c[·,·] in a table of n·m entries:
  • top/down
  • bottom/up
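
Filled bottom-up, the table takes Θ(mn) time and space. A minimal Python sketch (the function name lcs_table is my own):

def lcs_table(X, Y):
    """Bottom-up fill of the LCS length table c[0..m][0..n]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 and column 0 stay 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

# lcs_table("springtime", "printing")[-1][-1] == 6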
Constructing an LCS
PRINT-LCS(b, X, i, j)
1.  if i = 0 or j = 0
2.     then return
3.  if b[i, j] = "↖"
4.     then PRINT-LCS(b, X, i-1, j-1)
5.          print xi
6.  elseif b[i, j] = "↑"
7.     then PRINT-LCS(b, X, i-1, j)
8.  else PRINT-LCS(b, X, i, j-1)

• Initial call is PRINT-LCS(b, X, m, n).
• When b[i, j] = "↖", we have extended the LCS by one character. So the LCS consists of the entries with "↖" in them.
• Time: O(m+n)
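
In Python the same idea looks roughly like the sketch below (the name lcs and the arrow labels are my own; PRINT-LCS above prints characters during the recursion, while this version collects them while walking the b table iteratively):

def lcs(X, Y):
    """Build c and b bottom-up, then read one LCS back off the b arrows."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "diag"                 # the "↖" case
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "up"                   # the "↑" case
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "left"
    out, i, j = [], m, n                         # walk back from (m, n)
    while i > 0 and j > 0:
        if b[i][j] == "diag":                    # each diagonal step adds one LCS char
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif b[i][j] == "up":
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

# len(lcs("springtime", "printing")) == 6; one such LCS is "printi"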
Steps in Dynamic Programming
1. Characterize structure of an optimal solution.
2. Define value of optimal solution recursively.
3. Compute optimal solution values either top-down with caching or
bottom-up in a table.
4. Construct an optimal solution from computed values.
We’ll study these with the help of examples.
Optimal Binary Search Trees

• Problem
• Given sequence K = k1 < k2 <··· < kn of n sorted keys,
with a search probability pi for each key ki.
• Want to build a binary search tree (BST)
with minimum expected search cost.
• Actual cost = # of items examined.
• For key ki, cost = depthT(ki)+1, where depthT(ki) = depth of
ki in BST T .
Expected Search Cost

E[search cost in T]
  = sum_{i=1}^{n} (depth_T(ki) + 1) · pi
  = sum_{i=1}^{n} depth_T(ki) · pi + sum_{i=1}^{n} pi
  = 1 + sum_{i=1}^{n} depth_T(ki) · pi          (15.16)

(The probabilities sum to 1.)
Example

• Consider 5 keys with these search probabilities:
  p1 = 0.25, p2 = 0.2, p3 = 0.05, p4 = 0.2, p5 = 0.3.

• Tree: k2 at the root, with children k1 and k4; k4 has children k3 and k5.

  i    depth_T(ki)    depth_T(ki) · pi
  1        1               0.25
  2        0               0
  3        2               0.1
  4        1               0.2
  5        2               0.6
                    total: 1.15

Therefore, E[search cost] = 2.15.
Example

• p1 = 0.25, p2 = 0.2, p3 = 0.05, p4 = 0.2, p5 = 0.3.

• Tree: k2 at the root, with children k1 and k5; k4 is the left child of k5, and k3 is the left child of k4.

  i    depth_T(ki)    depth_T(ki) · pi
  1        1               0.25
  2        0               0
  3        3               0.15
  4        2               0.4
  5        1               0.3
                    total: 1.10

Therefore, E[search cost] = 2.10.

This tree turns out to be optimal for this set of keys.
Example

• Observations:
• Optimal BST may not have smallest height.
• Optimal BST may not have highest-probability key at root.
• Build by exhaustive checking?
• Construct each n-node BST.
• For each,
assign keys and compute expected search cost.
• But there are Ω(4^n / n^{3/2}) different BSTs with n nodes.
Optimal Substructure

• Any subtree of a BST contains keys in a contiguous range ki, ..., kj for some 1 ≤ i ≤ j ≤ n.

• If T is an optimal BST and T contains a subtree T′ with keys ki, ..., kj, then T′ must be an optimal BST for keys ki, ..., kj.

• Proof: Cut and paste.
Optimal Substructure

• One of the keys in ki, ..., kj, say kr, where i ≤ r ≤ j, must be the root of an optimal subtree for these keys.
  • Left subtree of kr contains ki, ..., kr-1.
  • Right subtree of kr contains kr+1, ..., kj.

• To find an optimal BST:
  • Examine all candidate roots kr, for i ≤ r ≤ j.
  • Determine all optimal BSTs containing ki, ..., kr-1 and containing kr+1, ..., kj.
Recursive Solution

• Find optimal BST for ki, ..., kj, where i ≥ 1, j ≤ n, j ≥ i-1.
  When j = i-1, the tree is empty.

• Define e[i, j] = expected search cost of optimal BST for ki, ..., kj.

• If j = i-1, then e[i, j] = 0.

• If j ≥ i,
  • Select a root kr, for some i ≤ r ≤ j.
  • Recursively build optimal BSTs
    • for ki, ..., kr-1 as the left subtree, and
    • for kr+1, ..., kj as the right subtree.
Recursive Solution

• When the OPT subtree becomes a subtree of a node:
  • Depth of every node in the OPT subtree goes up by 1.
  • Expected search cost increases by

      w(i, j) = sum_{l=i}^{j} pl          (from (15.16))

• If kr is the root of an optimal BST for ki, ..., kj:
  • e[i, j] = pr + (e[i, r-1] + w(i, r-1)) + (e[r+1, j] + w(r+1, j))
            = e[i, r-1] + e[r+1, j] + w(i, j)
    (because w(i, j) = w(i, r-1) + pr + w(r+1, j))

• But we don't know kr. Hence,

  e[i, j] = 0                                                        if j = i-1,
          = min_{i ≤ r ≤ j} { e[i, r-1] + e[r+1, j] + w(i, j) }      if i ≤ j.
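
This recurrence can also be evaluated "top-down with caching" (step 3 of the recipe). A minimal memoized Python sketch (the names obst_cost, w, and e are my own; p is 1-indexed with a padding zero at p[0]):

from functools import lru_cache

def obst_cost(p):
    """Top-down (memoized) evaluation of the e[i, j] recurrence."""
    n = len(p) - 1
    prefix = [0.0] * (n + 1)                 # prefix sums of the probabilities
    for i in range(1, n + 1):
        prefix[i] = prefix[i - 1] + p[i]

    def w(i, j):                             # w(i, j) = p_i + ... + p_j
        return prefix[j] - prefix[i - 1] if j >= i else 0.0

    @lru_cache(maxsize=None)
    def e(i, j):
        if j == i - 1:                       # empty range of keys
            return 0.0
        return min(e(i, r - 1) + e(r + 1, j) + w(i, j)
                   for r in range(i, j + 1))

    return e(1, n)

# obst_cost([0.0, 0.25, 0.2, 0.05, 0.2, 0.3]) is about 2.10, matching the example above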
Computing an Optimal Solution

For each subproblem (i, j), store:
• expected search cost in a table e[1..n+1, 0..n]
  • Will use only entries e[i, j], where j ≥ i-1.
• root[i, j] = root of subtree with keys ki, ..., kj, for 1 ≤ i ≤ j ≤ n.
• w[1..n+1, 0..n] = sum of probabilities
  • w[i, i-1] = 0 for 1 ≤ i ≤ n.
  • w[i, j] = w[i, j-1] + pj for 1 ≤ i ≤ j ≤ n.
Pseudo-code
OPTIMAL-BST(p, q, n)
1.  for i ← 1 to n + 1
2.      do e[i, i-1] ← 0
3.         w[i, i-1] ← 0
4.  for l ← 1 to n                             Consider all trees with l keys.
5.      do for i ← 1 to n - l + 1              Fix the first key.
6.          do j ← i + l - 1                   Fix the last key.
7.             e[i, j] ← ∞
8.             w[i, j] ← w[i, j-1] + pj
9.             for r ← i to j                  Determine the root of the
10.                do t ← e[i, r-1] + e[r+1, j] + w[i, j]    optimal (sub)tree.
11.                   if t < e[i, j]
12.                      then e[i, j] ← t
13.                           root[i, j] ← r
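
A line-by-line Python translation of this pseudocode might look as follows (a sketch; the dummy-key probabilities q are unused in the slide's version and are omitted, and p is 1-indexed with a padding zero at p[0]):

def optimal_bst(p):
    """Bottom-up optimal BST, mirroring OPTIMAL-BST above.

    Returns e and root, where e[i][j] is the expected search cost of an
    optimal BST for keys k_i..k_j and root[i][j] is the root chosen there.
    """
    n = len(p) - 1
    e = [[0.0] * (n + 2) for _ in range(n + 2)]     # e[i][i-1] = 0
    w = [[0.0] * (n + 2) for _ in range(n + 2)]     # w[i][i-1] = 0
    root = [[0] * (n + 2) for _ in range(n + 2)]
    for l in range(1, n + 1):                       # consider all trees with l keys
        for i in range(1, n - l + 2):               # fix the first key
            j = i + l - 1                           # fix the last key
            e[i][j] = float("inf")
            w[i][j] = w[i][j - 1] + p[j]
            for r in range(i, j + 1):               # determine the root of the optimal (sub)tree
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

# e, root = optimal_bst([0.0, 0.25, 0.2, 0.05, 0.2, 0.3])
# e[1][5] is about 2.10, matching the optimal expected search cost in the example above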
Elements of Dynamic Programming

• Optimal substructure
• Overlapping subproblems
Optimal Substructure

• Show that a solution to a problem consists of making a choice, which leaves one or more subproblems to solve.
• Suppose that you are given this last choice that leads to an optimal
solution.
• Given this choice, determine which subproblems arise and how to
characterize the resulting space of subproblems.
• Show that the solutions to the subproblems used within the optimal
solution must themselves be optimal. Usually use cut-and-paste.
• Need to ensure that a wide enough range of choices and subproblems
are considered.
Optimal Substructure

• Optimal substructure varies across problem domains:
  1. How many subproblems are used in an optimal solution.
  2. How many choices there are in determining which subproblem(s) to use.
• Informally, running time depends on (# of subproblems overall) × (# of choices).
• How many subproblems and choices do the examples we have considered contain?
• Dynamic programming uses optimal substructure bottom up.
• First find optimal solutions to subproblems.
• Then choose which to use in optimal solution to the problem.
Optimal Substructure

• Does optimal substructure apply to all optimization problems? No.
• It applies to determining the shortest path, but NOT the longest simple path, of an unweighted directed graph.
• Why?
• Shortest path has independent subproblems.
• Solution to one subproblem does not affect solution to another subproblem
of the same problem.
• Subproblems are not independent in longest simple path.
• Solution to one subproblem affects the solutions to other subproblems.
• Example:
Overlapping Subproblems

• The space of subproblems must be "small":
  • the total number of distinct subproblems is a polynomial in the input size.
• A naive recursive algorithm can be exponential because it solves the same subproblems repeatedly.
• If divide-and-conquer is applicable, then each subproblem solved will be brand new.
