Chapter 3 DAA

Chapter 3 discusses Dynamic Programming (DP), an algorithm design method that optimizes decision-making processes by breaking problems into smaller subproblems and storing their solutions. Key principles include Optimal Substructure and Overlapping Subproblems, which allow for efficient computation of solutions such as the Longest Common Subsequence and All-Pairs Shortest Path problems. The chapter also highlights the Travelling Salesperson Problem, illustrating how DP can be applied to find optimal solutions in complex scenarios.


CHAPTER 3

DYNAMIC PROGRAMMING

“SOLVE ONCE, USE IT FOREVER”

02/15/2025 OBU 1
Introduction
Dynamic Programming:
• an algorithm design method that can be used when the solution to
a problem may be viewed as the result of a sequence of decisions.
• One way to solve problems for which it is not possible to make a
sequence of stepwise decisions leading to an optimal decision
sequence is to try out all possible decision sequences.
• We could enumerate all decision sequences and then pick out the
best.
• “Dynamic programming “often drastically reduces the amount of
enumeration by avoiding the enumeration of some decision
sequences that cannot possibly be optimal.
• In dynamic programming an optimal sequence of decisions is
arrived at by making explicit appeal to the Principle of Optimality.
Principle of Optimality
This principle states that an optimal sequence of decisions has the

property that whatever the initial state and decision are, the remaining
decisions must constitute an optimal decision sequence with regard to
the state resulting from the first decision.
Dynamic programming:
• Used when problem breaks down into recurring small subproblem
• It is typically applied to optimization problem
• In such problem there can be many solutions.
• Each solution has a value, and we wish to find a solution with the
optimal value.
Key Principles
Optimal Substructure
 Optimal substructure means that the optimal solution to a problem can be
constructed from the optimal solutions of its subproblems.
 For example, in the shortest path problem, the shortest path from a source to a
destination can be constructed from the shortest paths to intermediate nodes.


Overlapping Subproblems
 Overlapping subproblems occur when the same subproblems are solved multiple
times.
 In such cases, storing the results of subproblems (memoization) can save
computation time.
 For example, in the Fibonacci sequence, the same Fibonacci numbers are
computed multiple times.
DP Approach: Fibonacci Numbers
We can calculate Fn in linear time by remembering
solutions to the solved subproblems – dynamic
programming
Compute solution in a bottom-up fashion
In this case, only two values need to be remembered at
any time

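The bottom-up idea above can be sketched in Python (a minimal illustration; the function name is ours):

```python
def fib(n):
    """Compute F(n) bottom-up, remembering only the last two values."""
    if n < 2:
        return n
    prev, curr = 0, 1               # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

This runs in O(n) time and O(1) space, in contrast to the exponential naive recursion.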
Cont…
DP will solve each of them once and their answers are stored in a

table for future use.


Dynamic programming differs from the greedy method: the greedy
method produces only one feasible solution, which may or may not be
optimal,
while dynamic programming solves every sub-problem at most once,
and one of the resulting solutions is guaranteed to be optimal.


Optimal solutions to sub-problems are retained in a table, thereby
avoiding the work of recomputing the answer every time a sub-problem
is encountered.
Applications of Dynamic Programming
Longest common subsequence
Shortest path problems
 Multistage Graphs
 All-Pairs Shortest Path
 Travelling Salesperson Problem
0/1 Knapsack

Cont…
The longest common subsequence (LCS) problem is the
problem of finding the longest subsequence common to all
sequences in a set of sequences
Subsequences
• A subsequence is a sequence that appears in the same relative
order, but not necessarily contiguous.
• In LCS ,we have to find Longest Common Subsequence that is in
the same relative order.
• A string of length n has 2ⁿ different possible subsequences.
• E.g. subsequences of “ABCDEFG”:
• “ABC”, “ABG”, “BDF”, “AEG”, “ACEFG”, …

Common Subsequences
Suppose that X and Y are two sequences over a set S.
X: ABCBDAB
Y: BDCABA
Z: BCBA
We say that Z is a common subsequence of X and Y if and only if
• Z is a subsequence of X
• Z is a subsequence of Y
The Longest Common Subsequence Problem
Given two sequences X and Y over a set S, the longest common
subsequence problem asks to find a common subsequence of X
and Y that is of maximal length.

Example
Let X: ABCBDAB
Y: BDCABA
Z= (B,C,A) Length 3
Z= (B,C,A,B) Length 4 longest
Z= (B,D,A,B) Length 4 longest

A Poor Approach to the LCS Problem
A Brute-force solution:
• Enumerate all subsequences of X
• Test which ones are also subsequences of Y
• Pick the longest one.

Analysis:
• If X is of length n, then it has 2ⁿ subsequences
• This is an exponential-time algorithm!

Dynamic Programming for LCS
 Define L[i,j] to be the length of the longest common subsequence
of X[0..i] and Y[0..j].
 L[i,0] = 0 and L[0,j] = 0, to indicate that the null part of X or Y
has no match with the other.
 Then we can define L[i,j] in the general case as follows:

1. If xi=yj, then L[i,j] = L[i-1,j-1] + 1 (we can add this match)


2. If xi≠yj, then L[i,j] = max{L[i-1,j], L[i,j-1]} (no match here)

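The recurrence above translates directly into a bottom-up table fill; a minimal Python sketch (the function name is ours):

```python
def lcs_length(X, Y):
    """L[i][j] = length of an LCS of X[0..i] and Y[0..j], filled bottom-up."""
    n, m = len(X), len(Y)
    L = [[0] * (m + 1) for _ in range(n + 1)]   # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1           # match: extend diagonal
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1]) # no match here
    return L[n][m]
```

On the sequences X = ABCBDAB and Y = BDCABA from the earlier example this yields 4.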
Example: LCS
Let us take two sequences, X and Y (the first and second sequences are
shown in the slide figure).

Cont…
The following steps are followed for finding the longest
common subsequence.
STEP 1: Initialise a table
Create a table of dimension (n+1) × (m+1), where n and m are the
lengths of X and Y respectively.
The first row and the first column are filled with zeros.

Cont…
STEP 2:Fill each cell of the table using the following logic.
 If the character corresponding to the current row and current
column are matching, then fill the current cell by adding one to the
diagonal element.
 Point an arrow to the diagonal cell.

 Else take the maximum value from the previous column and
previous row element for filling the current cell.
 Point an arrow to the cell with maximum value. If they are

equal, point to any of them.


STEP 3: STEP 2 is repeated until the table is filled.
 The value in the last row and the last column is the length of the
longest common subsequence.

Cont…
In order to find the longest common subsequence:
Start from the last element and follow the direction of

the arrow.
The elements corresponding to diagonal arrows form
the longest common subsequence.

Cont…
Solution for the above
example.
STEP 1: Initialise a
table

Cont..
Fill the table using this algorithm:
Compare X[i] and Y[j]
If X[i] = Y[j]
    LCS[i][j] = 1 + LCS[i-1][j-1]
    Point an arrow to the diagonal cell LCS[i-1][j-1]
Else
    LCS[i][j] = max(LCS[i-1][j], LCS[i][j-1])
    Point an arrow to the cell with the maximum value

Cont…

Thus, the longest common subsequence is CA


Longest Common Subsequence Algorithm
Let X and Y be the two given sequences:
Initialize a table LCS of dimension (X.length + 1) × (Y.length + 1)
LCS[0][j] = 0 for all j
LCS[i][0] = 0 for all i
Start from LCS[1][1]
Compare X[i] and Y[j]
If X[i] = Y[j]
    LCS[i][j] = 1 + LCS[i-1][j-1]
    Point an arrow to the diagonal cell LCS[i-1][j-1]
Else
    LCS[i][j] = max(LCS[i-1][j], LCS[i][j-1])
    Point an arrow to the cell with the maximum value
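The table fill plus the arrow-following traceback can be sketched in Python; instead of storing arrows explicitly, this sketch re-derives each arrow while walking back from the last cell (names are ours):

```python
def lcs_string(X, Y):
    """Fill the LCS table, then trace back from the last cell:
    diagonal moves (character matches) contribute the letters of the LCS."""
    n, m = len(X), len(Y)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                L[i][j] = 1 + L[i - 1][j - 1]
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # Start from the last element and follow the direction of the arrows.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:           # diagonal arrow: part of the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif L[i - 1][j] >= L[i][j - 1]:   # arrow to the larger neighbour
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```

When arrows tie, either neighbour may be followed, so different valid longest common subsequences can be returned.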

All Pair Shortest Path
The all-pairs shortest path problem is the determination of the shortest
distance between every pair of vertices in a given graph.
We have to calculate the minimum cost to find the shortest path.
• The all-pairs shortest path algorithm, also known as Floyd’s
algorithm, is used to find the shortest distances between every
pair of vertices in a given graph.

• As a result of this algorithm, it will generate a matrix, which will


represent the minimum distance from any node to all other nodes in
the graph.
Procedure to find the all pairs shortest path:
First we consider “G” as a directed graph.

The cost of the graph is the length or cost of each edges and

cost(i , i)=0
If there is an edge between i and j then cost(i,j)=cost of the

edge from i to j and if there is no edge then cost(i, j)= ∞


Need to calculate the shortest path/ cost between any two
nodes using intermediary nodes.
The following equation is used to calculate the minimum cost:
Aᵏ(i, j) = min{Aᵏ⁻¹(i, j), Aᵏ⁻¹(i, k) + Aᵏ⁻¹(k, j)}
Algorithm of All Pairs Shortest Path
Algorithm AllPaths(cost, A, n)
// cost[1:n, 1:n] is the cost adjacency matrix of a graph with
// n vertices; A[i, j] is the cost of a shortest path from vertex
// i to vertex j. cost[i, i] = 0.0, for 1 ≤ i ≤ n.
{
    for i := 1 to n do
        for j := 1 to n do
            A[i, j] := cost[i, j]; // copy cost into A
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := min(A[i, j], A[i, k] + A[k, j]);
}
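The pseudocode above translates directly to Python; a sketch in which INF marks a missing edge (the function name is ours):

```python
INF = float('inf')

def all_pairs_shortest_paths(cost):
    """Floyd's algorithm. cost is an n x n matrix with cost[i][i] == 0
    and INF where there is no edge. Returns the shortest-distance matrix."""
    n = len(cost)
    A = [row[:] for row in cost]          # copy cost into A
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A
```

Running it on the three-vertex cost matrix of the worked example below reproduces the final matrix A³.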
Cont…
Complexity Analysis:
A Dynamic programming algorithm based on this
recurrence involves in calculating n+1 matrices, each of
size n x n.
 Therefore, the algorithm has a complexity of O(n³).

Example:find shortest path between each node
Let us take graph G with three vertices and
find the shortest distance between each pair.

Cont…
Here, A⁰ = Cost =
0   4   11
6   0   2
3   ∞   0
When we calculate A¹, row 1 and column 1 stay unchanged; we recompute
the remaining four elements using vertex 1 as an intermediate.
A¹(2,3) = min{A⁰(2,3), A⁰(2,1) + A⁰(1,3)} = min{2, 17} = 2
A¹(3,2) = min{A⁰(3,2), A⁰(3,1) + A⁰(1,2)} = min{∞, 7} = 7
CONT…
A¹ =
0   4   11
6   0   2
3   7   0
Now we will calculate A²:
A²(1,3) = min{A¹(1,3), A¹(1,2) + A¹(2,3)} = min{11, 6} = 6
A²(3,1) = min{A¹(3,1), A¹(3,2) + A¹(2,1)} = min{3, 13} = 3
A² =
0   4   6
6   0   2
3   7   0
Cont…
Now we will calculate A³:
A³(1,2) = min{A²(1,2), A²(1,3) + A²(3,2)} = min{4, 13} = 4
A³(2,1) = min{A²(2,1), A²(2,3) + A²(3,1)} = min{6, 5} = 5
A³ =
0   4   6
5   0   2
3   7   0
Cont….
The final matrix A³ gives the minimum distance between each pair of nodes.
TRAVELLING SALESPERSON PROBLEM:
Let G = (V, E) be a directed graph with edge costs Cij.
 The variable cij is defined such that cij > 0 for all i and j, and
cij = ∞ if ⟨i, j⟩ ∉ E. Let |V| = n and assume n > 1.


 A tour of G is a directed simple cycle that includes every vertex in V.

 The cost of a tour is the sum of the cost of the edges on the tour.

 The traveling sales person problem is to find a tour of minimum cost.

 The tour is to be a simple path that starts and ends at vertex 1 .

Cont…
Let g (i, S) be the length of shortest path starting at vertex i,

going through all vertices in S, and terminating at vertex 1.


The cost of a tour is the sum of the cost of the edges on the

tour.
From the principle of optimality it follows that:
g (1, V – {1}) = min {c1k + g (k, V – {1, k})}, 2 ≤ k ≤ n

EXAMPLE
For the following graph find minimum cost tour for the
traveling sales person problem:

Cont…
More generally:
g (i, S) = min {cij + g (j, S – {j})}, j ∈ S
Clearly, g (i, 0) = ci1, 1 ≤ i ≤ n.
g (2, 0) = c21 = 5
g (3, 0) = c31 = 6
g (4, 0) = c41 = 8
g (1, {2, 3, 4}) = min {c12 + g (2, {3, 4}), c13 + g (3, {2, 4}), c14 + g (4, {2, 3})}
g (2, {3, 4}) = min {c23 + g (3, {4}), c24 + g (4, {3})}
= min {9 + g (3, {4}), 10 + g (4, {3})}
Cont…
g (3, {4}) = min {c34 + g (4, 0)} = 12 + 8 =20
g (4, {3}) = min {c43 + g (3, 0)} = 9 + 6 =15
Therefore, g (2, {3, 4}) = min {9 + 20, 10 + 15} = min {29, 25} =25

g (3, {2, 4}) = min {(c32 + g (2, {4}), (c34 + g (4,{2})}


g (2, {4}) = min {c24 + g (4, 0)} = 10 + 8 =18
g (4, {2}) = min {c42 + g (2, 0)} = 8 + 5 =13
Therefore, g (3, {2, 4}) = min {13 + 18, 12 + 13} = min {31, 25} = 25
Cont…
g (4, {2, 3}) = min {c42 + g (2, {3}), c43 + g (3,{2})}
g (2, {3}) = min {c23 + g (3, 0)} = 9 + 6 = 15
g (3, {2}) = min {c32 + g (2, 0)} = 13 + 5 = 18
Therefore, g (4, {2, 3}) = min {8 + 15, 9 + 18} = min {23, 27} = 23
g (1, {2, 3, 4}) = min {c12 + g (2, {3, 4}), c13 + g (3, {2, 4}), c14 + g (4,
{2,3})} = min {10 + 25, 15 + 25, 20 + 23}
=min {35, 40, 43} =35
The optimal tour for the graph has length = 35
The optimal tour is: 1, 2, 4, 3,1.
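The g(i, S) computation above is the Held-Karp dynamic program; a Python sketch using the cost matrix implied by the worked example (vertex 1 of the slides is index 0 here; the function name is ours):

```python
from itertools import combinations

def tsp_held_karp(c):
    """Held-Karp DP for the travelling salesperson problem.
    c[i][j] is the edge cost; the tour starts and ends at vertex 0.
    g[(i, S)] = length of the shortest path from i through all of S to 0."""
    n = len(c)
    g = {}
    for i in range(1, n):
        g[(i, frozenset())] = c[i][0]          # base case: g(i, {}) = c_i1
    for size in range(1, n - 1):               # grow the subsets S
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for i in range(1, n):
                if i in S:
                    continue
                g[(i, S)] = min(c[i][j] + g[(j, S - {j})] for j in S)
    full = frozenset(range(1, n))
    return min(c[0][k] + g[(k, full - {k})] for k in full)
```

On the example's cost matrix this returns 35, matching the optimal tour 1, 2, 4, 3, 1.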
Multi-stage Graph
A multistage graph is a directed graph in which the vertices are
partitioned into k ≥ 2 disjoint sets Vi, 1≤i ≤k.

If ⟨u, v⟩ is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k.

The sets V1 and Vk are such that |V1 | = |Vk |=1

s and t are the vertices in V1 and Vk respectively.

The vertex s is the source and t is the sink

The multistage graph problem is to find a minimum cost path from s to t.

Cont…
Approaches to Multistage Graph
Forward Approach
Backward Approach

Forward approach algorithm
Algorithm Fgraph(G, k, n, p)
// The input is a k-stage graph G = (V, E) with n vertices
// indexed in order of stages. E is a set of edges and c[i, j]
// is the cost of (i, j). p[1:k] is a minimum cost path.
{
    cost[n] := 0.0;
    for j := n - 1 to 1 step -1 do
    {
        // Compute cost[j]. Let r be a vertex such that (j, r) is an
        // edge of G and c[j, r] + cost[r] is minimum;
        cost[j] := c[j, r] + cost[r];
        d[j] := r;
    }
    p[1] := 1; p[k] := n; // Find a minimum cost path.
    for j := 2 to k - 1 do
        p[j] := d[p[j - 1]];
}
Example
Find the minimum cost path from s to t in the multistage graph
of five stages shown below.
Do this first using the forward approach and then using the backward
approach.
(The five-stage graph figure is omitted here; its edge costs c(j, l) can be
read from the computation that follows.)
Forward approach
 We use the following equation to find the minimum cost path from s to

t:
cost (i, j) = min {c (j, l) + cost (i + 1, l)}, l ∈ Vi+1, ⟨j, l⟩ ∈ E
cost (1, 1) = min {c (1, 2) + cost (2, 2), c (1, 3) + cost (2, 3), c (1, 4) +
cost (2,4), c (1, 5) + cost (2,5)}
= min {9 + cost (2, 2), 7 + cost (2, 3), 3 + cost (2, 4), 2 + cost (2,5)}
Now first starting with,
cost (2, 2) = min{c (2, 6) + cost (3, 6), c (2, 7) + cost (3, 7), c (2, 8) +cost
(3,8)} = min {4 + cost (3, 6), 2 + cost (3, 7), 1 + cost (3,8)}

Cont…
cost(3,6) = min {c (6, 9) + cost (4, 9), c (6, 10) + cost (4,10)}
= min {6 + cost (4, 9), 5 + cost (4,10)}
cost(4,9) = min {c (9, 12) + cost (5, 12)} = min {4 + 0) =4
cost (4, 10) = min {c (10, 12) + cost (5, 12)} =2
Therefore, cost (3, 6) = min {6 + 4, 5 + 2} =7
cost(3,7) = min {c (7, 9) + cost (4, 9) , c (7, 10) + cost (4,10)}
= min {4 + cost (4, 9), 3 + cost (4,10)}
cost(4,9) = min {c (9, 12) + cost (5, 12)} = min {4 + 0} =4
Cost (4, 10) = min {c (10, 12) + cost (5, 12)} = min {2 + 0} =2
Therefore, cost (3, 7) = min {4 + 4, 3 + 2} = min {8, 5} =5

Cont…
cost(3,8) = min {c (8, 10) + cost (4, 10), c (8, 11) + cost (4, 11)}
= min {5 + cost (4, 10), 6 + cost (4, 11)}
cost (4, 11) = min {c (11, 12) + cost (5, 12)} =5
Therefore, cost (3, 8) = min {5 + 2, 6 + 5} = min {7, 11} =7

Cont…
Therefore, cost (2, 2) = min {4 + 7, 2 + 5, 1 + 7}
= min {11, 7, 8} =7
Therefore, cost (2, 3) = min {c (3, 6) + cost (3, 6), c (3, 7) + cost (3,7)}
= min {2 + cost (3, 6), 7 + cost (3,7)}
= min {2 + 7, 7 + 5} = min {9, 12} =9
cost (2, 4) = min {c (4, 8) + cost (3, 8)} = min {11 + 7} =18
cost (2, 5) = min {c (5, 7) + cost (3, 7), c (5, 8) + cost (3, 8)}
= min {11 + 5, 8 + 7} = min {16, 15} =15
Therefore, cost (1, 1) = min {9 + 7, 7 + 9, 3 + 18, 2 +15}
= min {16, 16, 21, 17} =16
The minimum cost path is 16.
Cont…
The path is 1 → 2 → 7 → 10 → 12
or
1 → 3 → 6 → 10 → 12

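The forward-approach computation can be sketched in Python; the edge list below is taken from the worked example (function and variable names are ours):

```python
def multistage_forward(n, edges):
    """Forward approach on a multistage graph. cost[j] = minimum cost from
    vertex j to the sink n. edges maps j -> list of (r, c) pairs, where c is
    the cost of edge (j, r). Vertices are numbered 1..n in stage order,
    with source 1 and sink n."""
    cost = {n: 0}
    d = {}
    for j in range(n - 1, 0, -1):            # work backwards from the sink
        cost[j], d[j] = min((c + cost[r], r) for r, c in edges[j])
    # Recover one minimum-cost path by following the decisions d[j].
    path, j = [1], 1
    while j != n:
        j = d[j]
        path.append(j)
    return cost[1], path
```

With the example's edge costs this yields cost 16 and one of the two optimal paths.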
Reading assignment:
Backward approach

0/1 Knapsack Problem
 We are given n objects and a knapsack. Items are indivisible; you either
take an item or not. The problem is solved with dynamic programming.


 Each object i has a positive weight wi and a positive value Vi.

 The knapsack can carry a weight not exceeding W.

 Fill the knapsack so that the total value of the objects in the knapsack is
maximized.
 A solution to the knapsack problem can be obtained by making a

sequence of decisions on the variables x1, x2, . . . . , xn.


 A decision on variable xi involves determining which of the values 0
or 1 is to be assigned to it.
0/1 Knapsack problem: the brute-force approach

Let’s first solve this problem with a straightforward algorithm:

• Since there are n items, there are 2ⁿ possible combinations of items.

• We go through all combinations and find the one with the
maximum value and with total weight less than or equal to W.

• Running time will be O(2ⁿ).
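The sequence of 0/1 decisions on x1, …, xn can instead be organized as the standard knapsack DP table, which runs in O(nW) time; a sketch (the slides do not spell this table out, and the names are ours):

```python
def knapsack_01(weights, values, W):
    """Bottom-up 0/1 knapsack: V[i][w] = best value achievable using the
    first i items with remaining capacity w."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            V[i][w] = V[i - 1][w]                  # decision x_i = 0
            if weights[i - 1] <= w:                # decision x_i = 1, if it fits
                V[i][w] = max(V[i][w],
                              V[i - 1][w - weights[i - 1]] + values[i - 1])
    return V[n][W]
```

Each cell encodes one decision on xi, so the table solves every sub-problem at most once instead of enumerating all 2ⁿ combinations.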
