
Unit-IV Dynamic Programming

Dynamic programming is a method used to solve complex problems by breaking them down into simpler subproblems, storing their solutions to avoid redundant calculations, and ensuring optimal results through the principle of optimality. It can be implemented using two main approaches: top-down (memoization) and bottom-up (tabulation); for the Fibonacci example both achieve O(n) time complexity. Applications of dynamic programming include the Longest Common Subsequence, Bellman-Ford Shortest Path, and Knapsack Problem, among others.


Dynamic Programming

Dr Pradosh Kumar
Department of Artificial Intelligence and Data Science
Dynamic Programming

Dynamic programming involves dividing a problem into smaller subproblems, storing the solutions to these subproblems to avoid repetitive computation, and using these solutions to construct the overall/optimal solution.
Dynamic Programming
• Breaking down a problem into smaller subproblems.
• Solving each subproblem independently.
• Storing the solutions to subproblems to avoid redundant computation.
• Using the solutions to the subproblems to construct the overall solution.
• Using the principle of optimality to ensure that the solution is optimal.
• This optimization reduces time complexities from exponential to polynomial.
• Example: Fibonacci numbers.
Example (Recursive Method)
Recurrence Relation (Fibonacci Series)

       | 0                   if n = 0
F(n) = | 1                   if n = 1
       | F(n-1) + F(n-2)     if n > 1

The total number of function calls grows roughly as 2^n.

The time complexity of this approach is exponential: O(2^n).
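
For concreteness, here is a minimal C sketch of this recursive definition (the function name fib and the driver are our own, not from the slides):

#include <stdio.h>

/* Direct translation of the recurrence: F(0) = 0, F(1) = 1,
   F(n) = F(n-1) + F(n-2) for n > 1. Each call spawns two further
   calls, so the call tree grows exponentially in n. */
long long fib(int n) {
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void) {
    printf("F(10) = %lld\n", fib(10));   /* prints F(10) = 55 */
    return 0;
}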
Example (Dynamic Programming)

The time complexity of this approach is linear, upper bounded by O(n), since each subproblem is solved only once.
Greedy Method vs Dynamic Programming

• Both dynamic programming and greedy methods are used for optimization problems.
• Dynamic programming breaks down a problem into smaller subproblems and combines their solutions, while greedy algorithms make a locally optimal choice at each step in the hope of finding a globally optimal solution.
Dynamic Programming (DP)

Use Dynamic Programming (DP) when the problem has:

• Optimal Substructure:
  The optimal results of subproblems can be used to achieve the optimal result of the bigger problem.
• Overlapping Subproblems:
  The same subproblems are solved repeatedly in different parts of the problem.
Dynamic Programming (DP)

Approaches of Dynamic Programming (DP)

• Top-Down Approach (Memoization)

• Bottom-Up Approach (Tabulation)


Top-Down Approach (Memoization):

• In the top-down approach, also known as memoization,
• we keep the solution recursive and add a memoization table to avoid repeated calls for the same subproblems.
• Before making any recursive call, we first check if the memoization table already has a solution for it.
• After the recursive call is over, we store the solution in the memoization table.
Top-Down Approach (Memoization)

f(n)
{
    if (n ≤ 1)
        return n;
    if (A[n] != -1)                      // already memoized: reuse it
        return A[n];
    return A[n] = f(n-1) + f(n-2);       // solve, store, then return
}
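
A runnable C version of this memoized pseudocode might look as follows (a sketch; the table size MAX and the initialization in main are our own assumptions):

#include <stdio.h>
#include <string.h>

#define MAX 93                  /* F(92) is the largest Fibonacci number
                                   that fits in a long long */
long long A[MAX];               /* memoization table */

long long f(int n) {
    if (n <= 1)
        return n;               /* base cases */
    if (A[n] != -1)
        return A[n];            /* reuse the stored solution */
    return A[n] = f(n - 1) + f(n - 2);   /* solve, store, return */
}

int main(void) {
    memset(A, -1, sizeof A);    /* -1 marks "not yet computed" */
    printf("%lld\n", f(50));    /* prints 12586269025 */
    return 0;
}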
Bottom-Up Approach (Tabulation)

• In the bottom-up approach, also known as tabulation,
• we start with the smallest subproblems and gradually build up to the final solution.
• Write an iterative solution and build the solution in a bottom-up manner.
• First fill in the solutions for the base cases, then fill the remaining entries of the table using the recursive formula.

Bottom-Up Approach (Tabulation)

f(n)
{
    A[0] = 0;
    A[1] = 1;
    for (i = 2; i ≤ n; i++)
    {
        A[i] = A[i-1] + A[i-2];
    }
    return A[n];
}
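
The same tabulated solution as a runnable C sketch (again, the names and the driver are illustrative):

#include <stdio.h>

/* Bottom-up: fill the table from the base cases upward. */
long long f(int n) {
    if (n <= 1)
        return n;
    long long A[n + 1];              /* A[i] holds F(i) */
    A[0] = 0;                        /* base cases first */
    A[1] = 1;
    for (int i = 2; i <= n; i++)
        A[i] = A[i - 1] + A[i - 2];  /* recursive formula, applied iteratively */
    return A[n];
}

int main(void) {
    printf("%lld\n", f(50));         /* prints 12586269025 */
    return 0;
}

Since each entry depends only on the previous two, the table could be replaced by two variables, reducing space to O(1); the array form is kept here to mirror the tabulation idea.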
• Using Memoization Approach –
– Time Complexity : O(n)
– Space Complexity: O(n)
• Using Tabulation Approach
– Time Complexity : O(n)
– Space Complexity: O(n)
Advantages of Dynamic Programming (DP)

Dynamic programming has a wide range of advantages, including:
• Avoids re-computing the same subproblems multiple times, leading to significant time savings.
• Ensures that the optimal solution is found by considering all possible combinations.
Applications
• Longest Common Subsequence (LCS):
  – Used in day-to-day life, e.g., to find the difference between two files.
• Bellman–Ford Shortest Path:
  – Finds the shortest paths from a given source to all other vertices.
• Floyd–Warshall:
  – Finds the shortest paths between every pair of vertices.
• Knapsack Problem:
  – Determines the maximum value of items that can be placed in a knapsack with a given capacity.
• Matrix Chain Multiplication:
  – Optimizes the order of matrix multiplication to minimize the number of operations.
• Travelling Salesman Problem.
Longest Common Subsequence
A subsequence of a string is a list of characters of the string where some characters are deleted (or none at all) and the remaining characters appear in the same order as in the original string.
Example: string "abc"
Some subsequences: "a", "ab", "bc", "ac"

Note: For a string of length n, the number of subsequences is 2^n.
Example
• Example 1:
Input: text1 = "abcde",
text2 = "ace"
Output: 3
Explanation: The longest common subsequence is "ace" and its length is 3.
Example
• S1 = {B, C, D, A, A, C, D}
• S2 = {A, C, D, B, A, C}

Common subsequences include {B, C}, {C, D, A, C}, {D, A, C}, {A, A, C}, {A, C}, {C, D}.

{C, D, A, C} is the longest common subsequence.

Dynamic Programming to find the LCS

• Create a table of dimension (m+1) × (n+1), where m and n are the lengths of X and Y respectively.
• The first row and the first column are filled with zeros.
• Fill each remaining cell of the table using the following logic:
• If the characters corresponding to the current row and current column match, fill the current cell by adding one to the diagonal element, and point an arrow to the diagonal cell.
• Otherwise, take the maximum of the values in the previous column and previous row to fill the current cell, and point an arrow to the cell with the maximum value. If they are equal, point to either of them.
LCS Example
• X = ABCB
• Y = BDCAB

What is the Longest Common Subsequence of X and Y?

LCS(X, Y) = BCB
X = ABCB; m = |X| = 4
Y = BDCAB; n = |Y| = 5
Allocate array c[0..4, 0..5].

Initialize the first row and first column with zeros:
for i = 1 to m: c[i,0] = 0
for j = 1 to n: c[0,j] = 0

Then fill the table cell by cell, row by row, using:
if ( Xi == Yj )
    c[i,j] = c[i-1,j-1] + 1
else
    c[i,j] = max( c[i-1,j], c[i,j-1] )

The completed table:

      j   0   1   2   3   4   5
  i  Yj       B   D   C   A   B
  0  Xi   0   0   0   0   0   0
  1  A    0   0   0   0   1   1
  2  B    0   1   1   1   1   2
  3  C    0   1   1   2   2   2
  4  B    0   1   1   2   2   3

c[4,5] = 3, so the length of the LCS of X and Y is 3.
How to find actual LCS
• So far, we have just found the length of the LCS, but not the LCS itself.
• We want to modify this algorithm to make it output the Longest Common Subsequence of X and Y.

Each c[i,j] depends on c[i-1,j] and c[i,j-1], or on c[i-1,j-1].
For each c[i,j] we can say how it was acquired: for example, a cell holding 3 whose diagonal neighbour holds 2 was acquired as c[i,j] = c[i-1,j-1] + 1 = 2 + 1 = 3.
How to find actual LCS - continued
• Remember that

            | c[i-1, j-1] + 1              if x[i] = y[j]
  c[i, j] = |
            | max( c[i, j-1], c[i-1, j] )  otherwise

• So we can start from c[m,n] and go backwards.
• Whenever c[i,j] = c[i-1,j-1] + 1, remember x[i] (because x[i] is a part of the LCS).
• When i = 0 or j = 0 (i.e., we reached the beginning), output the remembered letters in reverse order.
Finding LCS

Starting at c[4,5] = 3 in the completed table above and walking backwards, the matched characters are collected in reverse order.

LCS (reversed order): B C B
LCS (straight order): B C B
(this string turned out to be a palindrome)
LCS Algorithm
LCS-Length(X, Y)
1. m = length(X) // get the # of symbols in X
2. n = length(Y) // get the # of symbols in Y
3. for i = 1 to m c[i,0] = 0 // special case: Y0
4. for j = 1 to n c[0,j] = 0 // special case: X0
5. for i = 1 to m // for all Xi
6. for j = 1 to n // for all Yj
7. if ( Xi == Yj )
8. c[i,j] = c[i-1,j-1] + 1
9. else c[i,j] = max( c[i-1,j], c[i,j-1] )
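
A self-contained C sketch combining LCS-Length with the backward walk described earlier (the helper names lcsLength and printLCS, and the fixed MAXN bound, are our own):

#include <stdio.h>
#include <string.h>

#define MAXN 100
int c[MAXN][MAXN];              /* c[i][j] = LCS length of X[1..i], Y[1..j] */

/* Fill the table exactly as in LCS-Length above. */
void lcsLength(const char *X, const char *Y, int m, int n) {
    for (int i = 0; i <= m; i++) c[i][0] = 0;
    for (int j = 0; j <= n; j++) c[0][j] = 0;
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            if (X[i - 1] == Y[j - 1])
                c[i][j] = c[i - 1][j - 1] + 1;
            else
                c[i][j] = c[i][j - 1] > c[i - 1][j] ? c[i][j - 1] : c[i - 1][j];
}

/* Walk backwards from c[m][n]; the recursion unwinds so the matched
   characters are printed in straight (not reversed) order. */
void printLCS(const char *X, const char *Y, int i, int j) {
    if (i == 0 || j == 0)
        return;
    if (X[i - 1] == Y[j - 1]) {              /* this character is in the LCS */
        printLCS(X, Y, i - 1, j - 1);
        putchar(X[i - 1]);
    } else if (c[i - 1][j] >= c[i][j - 1])
        printLCS(X, Y, i - 1, j);
    else
        printLCS(X, Y, i, j - 1);
}

int main(void) {
    const char *X = "ABCB", *Y = "BDCAB";
    int m = strlen(X), n = strlen(Y);
    lcsLength(X, Y, m, n);
    printf("length = %d, LCS = ", c[m][n]);  /* length = 3, LCS = BCB */
    printLCS(X, Y, m, n);
    putchar('\n');
    return 0;
}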
Time Complexity of LCS Algorithm

• The LCS algorithm calculates the value of each entry of the array c[m,n].
• So what is the running time?

Time Complexity: O(m*n),

since each c[i,j] is calculated in constant time, and there are m*n elements in the array.
Examples
X= ABAABA
Y= BABBAB
X= STONE
Y= LONGEST
X=ABCBDAB
Y= BDCABA
X=ABCB
Y= BDCAB
Dijkstra Algorithm (Recap)

Update/Relaxation
if (dist[v] > dist[u] + cost[u, v]) then
    dist[v] := dist[u] + cost[u, v]
Dijkstra Algorithm (Recap)
Limitation of Dijkstra's Algorithm:

Dijkstra is not suitable when the graph contains negative edges.
The reason is that it does not revisit nodes that have already been marked as visited.
If a shorter path exists through a longer route with negative edges, Dijkstra's algorithm will fail to find it. For example, with edges s→a of weight 2, s→b of weight 4 and b→a of weight -3, Dijkstra finalizes dist[a] = 2 and never revisits a, even though the route s→b→a costs only 1.
Bellman–Ford Algorithm
• Bellman-Ford is a single-source shortest path algorithm.
• It works correctly even when the graph has negative edge weights.
• It is able to detect negative cycles.
• It works on the principle of relaxation of the edges.
Bellman–Ford Algorithm
• Problem: Given a weighted graph with V vertices and E edges, and a source vertex src,
• Find: the shortest path from the source vertex to all vertices in the given graph.
• We need V - 1 rounds of relaxation of all the edges, where V is the number of vertices, to achieve the single-source shortest paths.
• If one additional round of relaxation (a V-th round) still changes the distance of any vertex, it indicates the presence of a negative weight cycle in the graph.
Bellman-Ford Algorithm
Set of edges: (DB), (CD), (AB), (AC)

Update/Relaxation:
if (dist[v] > dist[u] + cost[u, v]) then
    dist[v] := dist[u] + cost[u, v]

Resulting distances:
A = 0
B = -2
C = 8
D = 5
Bellman-Ford Example

(Graph figure: source S with vertices A, B, C, D, E, F, G and several negative edge weights. The slides trace the distance estimates over Iterations 1 through 7; since |V| - 1 = 7, the estimates converge after at most seven rounds of relaxation.)
Bellman-Ford

Bellman-Ford Algorithm: O(V*E) time complexity and O(V) space complexity.
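
A compact C sketch of Bellman-Ford with the extra negative-cycle check (the edge-list representation, INF sentinel, and sample graph are our own assumptions):

#include <stdio.h>

#define INF 1000000000

struct Edge { int u, v, w; };

/* Relax every edge V-1 times; if a V-th pass can still relax some
   edge, a negative-weight cycle is reachable from the source. */
int bellmanFord(const struct Edge *e, int E, int V, int src, int dist[]) {
    for (int i = 0; i < V; i++)
        dist[i] = INF;
    dist[src] = 0;
    for (int pass = 1; pass <= V - 1; pass++)
        for (int j = 0; j < E; j++)
            if (dist[e[j].u] != INF && dist[e[j].v] > dist[e[j].u] + e[j].w)
                dist[e[j].v] = dist[e[j].u] + e[j].w;
    for (int j = 0; j < E; j++)          /* one extra pass: cycle check */
        if (dist[e[j].u] != INF && dist[e[j].v] > dist[e[j].u] + e[j].w)
            return 0;                    /* negative cycle detected */
    return 1;
}

int main(void) {
    /* Small illustrative graph (our own weights), source vertex 0;
       note the negative edge 1 -> 2. */
    struct Edge e[] = { {0, 1, 4}, {0, 2, 5}, {1, 2, -3}, {2, 3, 2} };
    int dist[4];
    if (bellmanFord(e, 4, 4, 0, dist))
        for (int i = 0; i < 4; i++)
            printf("dist[%d] = %d\n", i, dist[i]);   /* 0, 4, 1, 3 */
    else
        printf("graph contains a negative-weight cycle\n");
    return 0;
}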


All pairs shortest path
• The problem: find the shortest path between every pair of vertices of a graph.

• The graph: may contain negative edges but no negative cycles.

• A representation: a weight matrix where
  W(i,j) = 0 if i = j,
  W(i,j) = ∞ if there is no edge between i and j,
  W(i,j) = weight of the edge otherwise.

• Note: we have shown that the principle of optimality applies to shortest path problems.
The weight matrix and the graph

         1    2    3    4    5
    1    0    1    ∞    1    5
    2    9    0    3    2    ∞
W = 3    ∞    ∞    0    4    ∞
    4    ∞    ∞    2    0    3
    5    3    ∞    ∞    ∞    0

(Graph figure: vertices v1..v5 with the directed edges and weights listed in W.)

Floyd's Algorithm
The subproblems
• Let D(k)[i,j] = weight of a shortest path from vi to vj using only vertices from {v1, v2, ..., vk} as intermediate vertices in the path.

  – D(0) = W
  – D(n) = D, which is the goal matrix

• How do we compute D(k) from D(k-1)?

The Recursive Definition:
Case 1: A shortest path from vi to vj restricted to using only vertices from {v1, v2, ..., vk} as intermediate vertices does not use vk. Then D(k)[i,j] = D(k-1)[i,j].

Case 2: A shortest path from vi to vj restricted to using only vertices from {v1, v2, ..., vk} as intermediate vertices does use vk. Then D(k)[i,j] = D(k-1)[i,k] + D(k-1)[k,j].

(Figure: the path from vi to vj passes through vk; the segment from vi to vk and the segment from vk to vj each use only intermediate vertices from {v1, ..., vk-1}.)


The recursive definition
Since
D(k)[i,j] = D(k-1)[i,j] or
D(k)[i,j] = D(k-1)[i,k] + D(k-1)[k,j],

we conclude:

D(k)[i,j] = min{ D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j] }


Example

Graph with three vertices: edges 1→2 of weight 4, 1→3 of weight 5, 2→1 of weight 2, and 3→2 of weight -3.

W = D(0):
         1    2    3
    1    0    4    5
    2    2    0    ∞
    3    ∞   -3    0

k = 1 (vertex 1 can be an intermediate node):

D(1)[2,3] = min( D(0)[2,3], D(0)[2,1] + D(0)[1,3] ) = min(∞, 2 + 5) = 7
D(1)[3,2] = min( D(0)[3,2], D(0)[3,1] + D(0)[1,2] ) = min(-3, ∞) = -3

D(1):
         1    2    3
    1    0    4    5
    2    2    0    7
    3    ∞   -3    0

k = 2 (vertex 2 can be an intermediate node):

D(2)[1,3] = min( D(1)[1,3], D(1)[1,2] + D(1)[2,3] ) = min(5, 4 + 7) = 5
D(2)[3,1] = min( D(1)[3,1], D(1)[3,2] + D(1)[2,1] ) = min(∞, -3 + 2) = -1

D(2):
         1    2    3
    1    0    4    5
    2    2    0    7
    3   -1   -3    0

k = 3 (vertex 3 can be an intermediate node):

D(3)[1,2] = min( D(2)[1,2], D(2)[1,3] + D(2)[3,2] ) = min(4, 5 + (-3)) = 2
D(3)[2,1] = min( D(2)[2,1], D(2)[2,3] + D(2)[3,1] ) = min(2, 7 + (-1)) = 2

D(3):
         1    2    3
    1    0    2    5
    2    2    0    7
    3   -1   -3    0

Floyd-Warshall's Algorithm
Algorithm AllPaths(cost, A, n)
// cost[1 : n, 1 : n] is the cost adjacency matrix of a graph with n
vertices.
// A[i, j] is the cost of the shortest path from vertex i to vertex j.
// cost[i, i] = 0.0, for 1 ≤ i ≤ n.
{
for i := 1 to n do
for j := 1 to n do
A[i, j] := cost[i, j];
for k := 1 to n do
for i := 1 to n do
for j := 1 to n do
A[i, j] := min(A[i, j], A[i, k] + A[k, j]);
}

The time complexity of the above algorithm is O(n^3).
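
A runnable C rendering of AllPaths on the three-vertex example above (0-based indices, an INF sentinel for ∞, and the guard against INF overflow are our own adjustments):

#include <stdio.h>

#define N 3
#define INF 1000000000

/* Floyd-Warshall: successively allow vertex k as an intermediate. */
void allPaths(int cost[N][N], int A[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            A[i][j] = cost[i][j];        /* A starts as the cost matrix */
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (A[i][k] != INF && A[k][j] != INF &&
                    A[i][j] > A[i][k] + A[k][j])
                    A[i][j] = A[i][k] + A[k][j];
}

int main(void) {
    /* W = D(0) from the worked example (vertices renumbered 0..2). */
    int cost[N][N] = { {0, 4, 5}, {2, 0, INF}, {INF, -3, 0} };
    int A[N][N];
    allPaths(cost, A);
    for (int i = 0; i < N; i++) {        /* prints D(3):           */
        for (int j = 0; j < N; j++)      /*    0   2   5           */
            printf("%5d", A[i][j]);      /*    2   0   7           */
        printf("\n");                    /*   -1  -3   0           */
    }
    return 0;
}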

