Dynamic Programming: Fundamentals of Computer Algorithms

1) Dynamic programming is an algorithm design technique that solves problems by breaking them down into smaller subproblems and storing the results of subproblems to avoid recomputing them.
2) The Fibonacci sequence problem can be solved efficiently using dynamic programming by storing previously computed values in a table rather than recomputing them via recursion.
3) Dynamic programming can be used to find shortest paths in multistage graphs by considering all possible paths as sequences of decisions and applying the principle of optimality.


FUNDAMENTALS OF COMPUTER ALGORITHMS

Dynamic Programming

Ms. R. Shanthi Prabha, M.Sc., M.Phil.
Assistant Professor
Department of Computer Science
SACWC
FIBONACCI SEQUENCE

Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, …
Fi = i                if i ≤ 1
Fi = Fi-1 + Fi-2      if i ≥ 2
Solved by a recursive program:

[Figure: the recursion tree for f5 — f5 calls f4 and f3; f4 calls f3 and f2; f3 calls f2 and f1; and so on down to f1 and f0. The same subproblems appear many times.]

Much replicated computation is done.


It should be solved by a simple loop.
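
For illustration, a minimal Python sketch of such a table-driven loop (the function name fib is our own):

def fib(n):
    # f[i] holds F(i); each value is computed exactly once.
    if n <= 1:
        return n
    f = [0] * (n + 1)
    f[1] = 1
    for i in range(2, n + 1):   # a simple loop instead of exponential recursion
        f[i] = f[i - 1] + f[i - 2]
    return f[n]

# fib(8) == 21, matching the sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, ...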
DYNAMIC PROGRAMMING
Dynamic Programming is an algorithm design method that can be used when the solution to a problem may be viewed as the result of a sequence of decisions.
THE SHORTEST PATH

To find a shortest path in a multi-stage graph


[Figure: a small multistage graph from S to T, with intermediate vertices A and B.]

Apply the greedy method:
the shortest path from S to T: 1 + 2 + 5 = 8
THE SHORTEST PATH IN MULTISTAGE GRAPHS
e.g.

[Figure: a multistage graph with edges S→A = 1, S→B = 2, S→C = 5, A→D = 4, A→E = 11, B→D = 9, B→E = 5, B→F = 16, C→F = 2, D→T = 18, E→T = 13, F→T = 2.]

The greedy method cannot be applied to this case: (S, A, D, T), 1 + 4 + 18 = 23.
The real shortest path is: (S, C, F, T), 5 + 2 + 2 = 9.
DYNAMIC PROGRAMMING APPROACH

Dynamic programming approach (forward approach):

[Figure: from S, edges of cost 1, 2, and 5 lead to A, B, and C; the remaining distances to T are d(A, T), d(B, T), and d(C, T).]

d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)}

d(A, T) = min{4 + d(D, T), 11 + d(E, T)}
        = min{4 + 18, 11 + 13} = 22
DYNAMIC PROGRAMMING

d(B, T) = min{9 + d(D, T), 5 + d(E, T), 16 + d(F, T)}
        = min{9 + 18, 5 + 13, 16 + 2} = 18.
d(C, T) = min{2 + d(F, T)} = 2 + 2 = 4.
d(S, T) = min{1 + d(A, T), 2 + d(B, T), 5 + d(C, T)}
        = min{1 + 22, 2 + 18, 5 + 4} = 9.

The above way of reasoning is called backward reasoning.
BACKWARD APPROACH (FORWARD REASONING)

d(S, A) = 1
d(S, B) = 2
d(S, C) = 5
d(S, D) = min{d(S, A) + d(A, D), d(S, B) + d(B, D)}
        = min{1 + 4, 2 + 9} = 5
d(S, E) = min{d(S, A) + d(A, E), d(S, B) + d(B, E)}
        = min{1 + 11, 2 + 5} = 7
d(S, F) = min{d(S, B) + d(B, F), d(S, C) + d(C, F)}
        = min{2 + 16, 5 + 2} = 7
d(S, T) = min{d(S, D) + d(D, T), d(S, E) + d(E, T), d(S, F) + d(F, T)}
        = min{5 + 18, 7 + 13, 7 + 2}
        = 9
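
For illustration, a minimal Python sketch of this forward-reasoning computation; the edge dictionary below encodes the example graph, and names such as edges and dist are our own:

# Edges of the example multistage graph, grouped by source vertex.
edges = {
    'S': {'A': 1, 'B': 2, 'C': 5},
    'A': {'D': 4, 'E': 11},
    'B': {'D': 9, 'E': 5, 'F': 16},
    'C': {'F': 2},
    'D': {'T': 18},
    'E': {'T': 13},
    'F': {'T': 2},
    'T': {},
}
order = ['S', 'A', 'B', 'C', 'D', 'E', 'F', 'T']   # vertices in stage order

# Forward reasoning: d(S, v) = min over predecessors u of d(S, u) + cost(u, v).
dist = {v: float('inf') for v in order}
dist['S'] = 0
for u in order:                      # process the vertices stage by stage
    for v, w in edges[u].items():
        dist[v] = min(dist[v], dist[u] + w)

print(dist['T'])                     # 9, the length of the shortest S-T path

Processing the vertices in stage order guarantees that d(S, u) is final before any edge leaving u is used.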

PRINCIPLE OF OPTIMALITY

Principle of optimality: Suppose that in solving a problem, we have to make a sequence of decisions D1, D2, …, Dn. If this sequence is optimal, then the last k decisions, 1 ≤ k ≤ n, must also be optimal.

e.g. the shortest path problem:
If i, i1, i2, …, j is a shortest path from i to j, then i1, i2, …, j must be a shortest path from i1 to j.

In summary, if a problem can be described by a multistage graph, then it can be solved by dynamic programming.
DYNAMIC PROGRAMMING

Forward approach and backward approach:

Note that if the recurrence relations are formulated using the forward approach, then the relations are solved backwards, i.e., beginning with the last decision.
On the other hand, if the relations are formulated using the backward approach, they are solved forwards.

To solve a problem by using dynamic programming:
  Find out the recurrence relations.
  Represent the problem by a multistage graph.
THE RESOURCE ALLOCATION PROBLEM
m resources, n projects
profit p(i, j): the profit obtained when j resources are allocated to project i.
Goal: maximize the total profit.

             Resources
Project      1    2    3
   1         2    8    9
   2         5    6    7
   3         4    4    4
   4         2    4    5
THE MULTISTAGE GRAPH SOLUTION

[Figure: a multistage graph with source S, sink T, and one stage per project; the nodes A, B, …, L are labelled (0,1), (1,1), (2,1), (3,1), (0,2), …, (3,3), and each edge carries the profit of the corresponding allocation, e.g. the edge S→C of weight 8 gives 2 resources to project 1.]

The resource allocation problem can be described as a multistage graph.
(i, j): i resources allocated to projects 1, 2, …, j
e.g. node H = (3, 2): 3 resources allocated to projects 1 and 2.
Find the longest path from S to T: (S, C, H, L, T), 8 + 5 + 0 + 0 = 13.
2 resources allocated to project 1.
1 resource allocated to project 2.
0 resources allocated to projects 3 and 4.
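
The same answer can be computed directly from the profit table; a minimal Python sketch over (projects processed, resources used), where the function name best_profit is our own:

# profit[i][j]: profit when j resources (j = 0, 1, 2, 3) are given to project i+1,
# taken from the table above (column 0 means "no resources, no profit").
profit = [
    [0, 2, 8, 9],   # project 1
    [0, 5, 6, 7],   # project 2
    [0, 4, 4, 4],   # project 3
    [0, 2, 4, 5],   # project 4
]

def best_profit(profit, m):
    # best[r] = best total profit for the projects seen so far using at most r resources
    best = [0] * (m + 1)
    for row in profit:                       # one stage per project
        best = [max(row[j] + best[r - j] for j in range(min(r, len(row) - 1) + 1))
                for r in range(m + 1)]
    return best[m]

print(best_profit(profit, 3))   # 13: 2 resources to project 1, 1 to project 2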

THE TRAVELING SALESPERSON (TSP) PROBLEM

e.g. a directed graph:

[Figure: a directed graph on the vertices 1, 2, 3, 4, whose edge costs are given by the matrix below.]

Cost matrix:
         1    2    3    4
    1    ∞    2   10    5
    2    2    ∞    9    ∞
    3    4    3    ∞    4
    4    6    8    7    ∞
THE MULTISTAGE GRAPH SOLUTION

[Figure: a multistage graph whose nodes are the partial tours (1), (1,2), (1,3), (1,4), (1,2,3), (1,2,4), (1,3,2), (1,3,4), (1,4,2), (1,4,3), (1,2,3,4), …, (1,4,3,2); edge costs are taken from the cost matrix.]

A multistage graph can describe all possible tours of a directed graph.
Find the shortest path:
(1, 4, 3, 2, 1), 5 + 7 + 3 + 2 = 17.
THE REPRESENTATION OF A NODE
Suppose that we have 6 vertices in the graph.
We can combine {1, 2, 3, 4} and {1, 3, 2, 4} into one node.

[Figure (a): the partial tours (1,2,3,4) and (1,3,2,4) are combined into a single node. Figure (b): nodes labelled (2),(4,5,6), (3),(4,5,6), and (4),(5,6).]

(3), (4,5,6) means that the last vertex visited is 3 and the remaining vertices to be visited are (4, 5, 6).
THE DYNAMIC PROGRAMMING APPROACH

Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1.

The length of an optimal tour:

  g(1, V − {1}) = min_{2 ≤ k ≤ n} { c1k + g(k, V − {1, k}) }

The general form:

  g(i, S) = min_{j ∈ S} { cij + g(j, S − {j}) }

Time complexity: there are O(n·2ⁿ) values g(i, S) to compute, and each takes O(n) time to evaluate, so the total is O(n²·2ⁿ).
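
For illustration, a minimal Python sketch of the g(i, S) recurrence on the example cost matrix, with vertices renumbered 0–3 and S stored as a frozenset; names such as g and INF are our own:

from itertools import combinations

INF = float('inf')
# Cost matrix of the example; INF marks the diagonal and the missing edge 2 -> 4.
c = [
    [INF, 2, 10, 5],
    [2, INF, 9, INF],
    [4, 3, INF, 4],
    [6, 8, 7, INF],
]
n = 4

# g[(i, S)] = length of a shortest path from vertex i through all vertices in S,
# ending at vertex 0 (vertex 1 in the slides).
g = {}
for i in range(1, n):
    g[(i, frozenset())] = c[i][0]

others = list(range(1, n))
for size in range(1, n - 1):                 # grow S one vertex at a time
    for S in map(frozenset, combinations(others, size)):
        for i in others:
            if i in S:
                continue
            g[(i, S)] = min(c[i][j] + g[(j, S - {j})] for j in S)

tour = min(c[0][k] + g[(k, frozenset(others) - {k})] for k in others)
print(tour)   # 17, the length of the optimal tour (1, 4, 3, 2, 1)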
THE LONGEST COMMON SUBSEQUENCE (LCS) PROBLEM

A string: A = b a c a d
A subsequence of A is obtained by deleting 0 or more symbols from A (not necessarily consecutive).
e.g. ad, ac, bac, acad, bacad, bcd.
Common subsequences of A = b a c a d and B = a c c b a d c b: ad, ac, bac, acad.
The longest common subsequence (LCS) of A and B: a c a d.
THE LCS ALGORITHM

Let A = a1 a2 … am and B = b1 b2 … bn.
Let Li,j denote the length of the longest common subsequence of a1 a2 … ai and b1 b2 … bj.

  Li,j = Li-1,j-1 + 1            if ai = bj
  Li,j = max{ Li-1,j, Li,j-1 }   if ai ≠ bj
  L0,0 = L0,j = Li,0 = 0         for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
The dynamic programming approach for solving the LCS problem:

[Figure: the order in which the table entries are computed — L1,1; then L1,2 and L2,1; then L1,3, L2,2, L3,1; …; finishing with Lm,n.]

Time complexity: O(mn)
TRACING BACK IN THE LCS ALGORITHM
e.g. A = b a c a d, B = a c c b a d c b
             B
         a  c  c  b  a  d  c  b
      0  0  0  0  0  0  0  0  0
   b  0  0  0  0  1  1  1  1  1
   a  0  1  1  1  1  2  2  2  2
A  c  0  1  2  2  2  2  2  3  3
   a  0  1  2  2  2  3  3  3  3
   d  0  1  2  2  2  3  4  4  4

After all Li,j's have been found, we can trace back to find the longest common subsequence of A and B.
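
For illustration, a minimal Python sketch of the table filling and the trace-back on this example (the function name lcs is our own):

def lcs(A, B):
    # Fill the L table exactly as in the recurrence above.
    m, n = len(A), len(B)
    L = [[0] * (n + 1) for _ in range(m + 1)]      # L[0][j] = L[i][0] = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # Trace back from L[m][n] to recover one LCS.
    result, i, j = [], m, n
    while i > 0 and j > 0:
        if A[i - 1] == B[j - 1]:
            result.append(A[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(result))

print(lcs("bacad", "accbadcb"))   # 'acad'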
OPTIMAL BINARY SEARCH TREES
e.g. binary search trees for 3, 7, 9, 12:

[Figure: four different binary search trees (a), (b), (c), (d) on the keys 3, 7, 9, 12.]
OPTIMAL BINARY SEARCH TREES

n identifiers: a1 < a2 < a3 < … < an
Pi, 1 ≤ i ≤ n: the probability that ai is searched.
Qi, 0 ≤ i ≤ n: the probability that x is searched, where ai < x < ai+1 (a0 = −∞, an+1 = +∞).

  Σ_{i=1}^{n} Pi + Σ_{i=0}^{n} Qi = 1
[Figure: a binary search tree on the identifiers 4, 5, 8, 10, 11, 12, 14 (root 10), with external nodes E0, E1, …, E6 attached at the failure positions.]

Internal node: successful search, Pi
External node: unsuccessful search, Qi

The expected cost of a binary search tree:

  Σ_{i=1}^{n} Pi · level(ai) + Σ_{i=0}^{n} Qi · (level(Ei) − 1)

The level of the root: 1
THE DYNAMIC PROGRAMMING APPROACH
Let C(i, j) denote the cost of an optimal binary search tree containing ai, …, aj.
The cost of the optimal binary search tree with ak as its root:

  C(1, n) = min_{1 ≤ k ≤ n} { Pk + [Q0 + Σ_{i=1}^{k-1} (Pi + Qi) + C(1, k-1)] + [Qk + Σ_{i=k+1}^{n} (Pi + Qi) + C(k+1, n)] }

[Figure: a tree with root ak; the left subtree holds a1 … ak-1 (weights P1 … Pk-1, Q0 … Qk-1, cost C(1, k-1)), and the right subtree holds ak+1 … an (weights Pk+1 … Pn, Qk … Qn, cost C(k+1, n)).]
GENERAL FORMULA

  C(i, j) = min_{i ≤ k ≤ j} { Pk + Qi-1 + Σ_{m=i}^{k-1} (Pm + Qm) + C(i, k-1)
                                 + Qk + Σ_{m=k+1}^{j} (Pm + Qm) + C(k+1, j) }

          = min_{i ≤ k ≤ j} { C(i, k-1) + C(k+1, j) } + Qi-1 + Σ_{m=i}^{j} (Pm + Qm)

[Figure: a tree with root ak; the left subtree has cost C(i, k-1) and the right subtree has cost C(k+1, j).]
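
For illustration, a minimal Python sketch of this general formula, using 1-indexed arrays with a dummy P[0]; the function name obst_cost and the weight table W are our own:

def obst_cost(P, Q):
    # P[1..n]: success probabilities for the keys a1 < ... < an (P[0] is a dummy entry).
    # Q[0..n]: failure probabilities for the n+1 gaps between the keys.
    n = len(P) - 1
    # W[i][j] = Q[i-1] + sum of (P[m] + Q[m]) for m = i..j.
    # C[i][j] = cost of the optimal tree on a_i..a_j, with C[i][i-1] = 0 (empty range).
    W = [[0.0] * (n + 2) for _ in range(n + 2)]
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 2):
        W[i][i - 1] = Q[i - 1]
    for length in range(1, n + 1):           # j - i + 1 = length
        for i in range(1, n - length + 2):
            j = i + length - 1
            W[i][j] = W[i][j - 1] + P[j] + Q[j]
            C[i][j] = W[i][j] + min(C[i][k - 1] + C[k + 1][j] for k in range(i, j + 1))
    return C[1][n]

# e.g. obst_cost([0, 0.5, 0.1, 0.05], [0.15, 0.1, 0.05, 0.05])  # hypothetical probabilities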
COMPUTATION RELATIONSHIPS OF SUBTREES
e.g. n = 4

[Figure: the C(i, j) values are computed by increasing length j − i: C(1,2), C(2,3), C(3,4); then C(1,3), C(2,4); then C(1,4).]

When j − i = m, there are (n − m) values C(i, j) to compute.
Each C(i, j) with j − i = m can be computed in O(m) time.

  Time complexity: O( Σ_{1 ≤ m ≤ n} m(n − m) ) = O(n³)
MATRIX-CHAIN MULTIPLICATION

n matrices A1, A2, …, An with sizes
  p0 × p1, p1 × p2, p2 × p3, …, pn-1 × pn
To determine the multiplication order such that the number of scalar multiplications is minimized.
To compute Ai × Ai+1, we need pi-1 · pi · pi+1 scalar multiplications.

e.g. n = 4, A1: 3 × 5, A2: 5 × 4, A3: 4 × 2, A4: 2 × 5

((A1 × A2) × A3) × A4, # of scalar multiplications:
  3 * 5 * 4 + 3 * 4 * 2 + 3 * 2 * 5 = 114
(A1 × (A2 × A3)) × A4, # of scalar multiplications:
  3 * 5 * 2 + 5 * 4 * 2 + 3 * 2 * 5 = 100
(A1 × A2) × (A3 × A4), # of scalar multiplications:
  3 * 5 * 4 + 3 * 4 * 5 + 4 * 2 * 5 = 160
Let m(i, j) denote the minimum cost of computing Ai × Ai+1 × … × Aj.

  m(i, j) = 0                                                          if i = j
  m(i, j) = min_{i ≤ k < j} { m(i, k) + m(k+1, j) + pi-1 · pk · pj }   if i < j

Computation sequence: the m(i, j) are computed in order of increasing j − i.

Time complexity: O(n³)
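
For illustration, a minimal Python sketch of this recurrence on the example dimensions (the function name matrix_chain is our own):

def matrix_chain(p):
    # p[0..n] are the dimensions: matrix A_i is p[i-1] x p[i].
    # Returns the minimum number of scalar multiplications.
    n = len(p) - 1
    # m[i][j] = minimum cost of computing A_i x ... x A_j (1-indexed, m[i][i] = 0).
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):             # chain length j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

print(matrix_chain([3, 5, 4, 2, 5]))   # 100, from the order (A1 x (A2 x A3)) x A4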


