
UNIT-IV DYNAMIC PROGRAMMING

Dynamic Programming:- Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share sub-subproblems. A dynamic-programming algorithm solves every sub-subproblem just once and saves its answer in a table, thereby avoiding the work of recomputing the answer every time the sub-subproblem is encountered. Dynamic programming is typically applied to optimization problems. The development of a dynamic-programming algorithm can be broken into a sequence of four steps.
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.
The dynamic-programming technique was developed by Bellman and is based upon the principle of optimality. Dynamic programming uses optimal substructure in a bottom-up fashion: we first find optimal solutions to the subproblems and, having solved them, we combine those solutions into an optimal solution to the original problem.
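To make the table-reuse idea concrete, here is a minimal Python sketch (an illustration, not part of the original notes) that contrasts naive recursion with a bottom-up table, using Fibonacci numbers as a stand-in problem; the function names are only illustrative.

    # Naive recursion recomputes the same subproblems exponentially many times.
    def fib_naive(n):
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    # Dynamic programming: solve each subproblem once, store it, and reuse it.
    def fib_dp(n):
        table = [0, 1]
        for i in range(2, n + 1):
            table.append(table[i - 1] + table[i - 2])   # reuse stored answers
        return table[n]

    print(fib_dp(10))   # 55, using only n - 1 additions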
Application-I: Matrix chain multiplication:- Matrix chain multiplication is an example of dynamic
programming. We are given a sequence (chain) A1, A2, ..., An of n matrices to be multiplied, and we wish to
compute the product A1 x A2 x A3 x ... x An. We can evaluate the expression using the standard algorithm for
multiplying pairs of matrices as a subroutine once we have parenthesized it to resolve all ambiguities in how
the matrices are multiplied together. A product of matrices is fully parenthesized if it is either a single matrix
or the product of two fully parenthesized matrix products, surrounded by parentheses. Matrix multiplication
is associative, so all parenthesizations yield the same product. For example, if the chain of matrices is A1, A2,
A3, A4, the product A1 x A2 x A3 x A4 can be fully parenthesized in five distinct ways:
(A1 (A2 (A3 A4))) ,
(A1 ((A2 A3) A4)) ,
((A1 A2) (A3 A4)) ,
((A1 (A2 A3)) A4) ,
(((A1 A2) A3) A4).
The way we parenthesize a chain of matrices can have a dramatic impact on the cost of evaluating the
product. Consider first the cost of multiplying two matrices. We can multiply two matrices A and B only if
they are compatible: the number of columns of A must equal the number of rows of B. If A is an m × n matrix
and B is a p × q matrix with p = n, the resulting matrix C is an m × q matrix. The standard algorithm is given below.
Algorithm Matrix_Mul(A, B)
// A is an m × n matrix, B is a p × q matrix; they are compatible only when n = p
{
    if (n ≠ p) then Error "incompatible dimensions"
    else
        for i ← 1 to m do
            for j ← 1 to q do
            {
                C[i, j] ← 0
                for k ← 1 to n do
                    C[i, j] ← C[i, j] + A[i, k] * B[k, j]
            }
    return C
}
The time to compute C is dominated by the number of scalar multiplications, which is mnq (equivalently mpq, since n = p).


Example 1:- To illustrate the different costs incurred by different parenthesizations of a matrix product, consider
the product of three matrices A1, A2, A3, i.e., A1 * A2 * A3. Suppose that the dimensions of the matrices are
10 × 100, 100 × 5, and 5 × 50, respectively. The two possible parenthesizations cost:
((A1 A2) A3) = 10 * 100 * 5 + 10 * 5 * 50 = 7,500
(A1 (A2 A3)) = 10 * 100 * 50 + 100 * 5 * 50 = 75,000
Thus, computing the product according to the first parenthesization is 10 times faster.
Definition:- The matrix-chain multiplication problem can be stated as follows: given a chain A1, A2, ..., An
of n matrices, where for i = 1, 2, ..., n, matrix Ai has dimension Pi-1 × Pi, fully parenthesize the product
A1 A2 ... An in a way that minimizes the number of scalar multiplications.
Note:- In the matrix-chain multiplication problem, we are not actually multiplying matrices. Our goal is only
to determine an order for multiplying matrices that has the lowest cost.
Solving the matrix-chain multiplication problem by dynamic programming
Step 1: The structure of an optimal parenthesization. Our first step in the dynamic-programming paradigm is
to find the optimal substructure and then use it to construct an optimal solution to the problem from optimal
solutions to subproblems. For the matrix-chain multiplication problem, we can perform this step as follows.
Any parenthesization of the product Ai Ai+1 ... Aj must split the product between Ak and Ak+1 for
some integer k in the range i ≤ k < j. That is, for some value of k, we first compute the matrices Ai,k and Ak+1,j and
then multiply them together to produce the final product Ai,j. The cost of this parenthesization is thus the cost
of computing the matrix Ai,k, plus the cost of computing Ak+1,j, plus the cost of multiplying them together.
Step 2: A recursive solution. Next, we define the cost of an optimal solution recursively in terms of the
optimal solutions to subproblems. For the matrix-chain multiplication problem, we can define Mi,j, the minimum
number of scalar multiplications needed to compute the product Ai ... Aj, recursively as follows. If i = j, the
chain consists of the single matrix Ai, so no scalar multiplications are necessary. Thus,
    Mi,j = 0                                          for i = j
    Mi,j = min { Mi,k + Mk+1,j + Pi-1 Pk Pj }          for i < j, taking the minimum over i ≤ k < j
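To see the recurrence in runnable form, the following Python sketch (an illustration, not the notes' code) evaluates Mi,j top-down with memoization; the list p holds the dimensions P0, P1, ..., Pn, so matrix Ai is p[i-1] × p[i].

    from functools import lru_cache

    def matrix_chain_cost(p):
        # Minimum number of scalar multiplications for the chain A1..An,
        # where Ai has dimension p[i-1] x p[i].
        n = len(p) - 1

        @lru_cache(maxsize=None)
        def M(i, j):
            if i == j:                 # a single matrix needs no multiplications
                return 0
            # try every split point k and keep the cheapest
            return min(M(i, k) + M(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))

        return M(1, n)

    print(matrix_chain_cost([10, 100, 5, 50]))   # 7500, matching Example 1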
Step 3: Computing the optimal costs. We perform the third step of the dynamic-programming paradigm and
compute the optimal cost by using a tabular, bottom-up approach.

Step 4: Constructing an optimal solution. At the first level we compare M12 and M23. When M12 < M23, we parenthesize
A1A2 in the product A1A2A3, i.e., (A1A2)A3; when M12 > M23, we parenthesize A2A3, i.e., A1(A2A3).
This process is repeated until the whole product is parenthesized. The top entry of the table,
i.e., M13, gives the optimum cost of the matrix-chain multiplication.
Example:- Find an optimal parenthesization of a matrix-chain product whose dimension sequence is
P0 = 5, P1 = 4, P2 = 6, P3 = 2, P4 = 7 (that is, the four matrices are 5 × 4, 4 × 6, 6 × 2, and 2 × 7).


Solution:- Given
P0 = 5, P1 = 4, P2 = 6, P3 = 2, P4 = 7
The bottom level of the table is initialized first:
Mi,j = 0 where i = j, i.e., M11 = M22 = M33 = M44 = 0.

To compute Mi,j when i < j,
    Mi,j = min { Mi,k + Mk+1,j + Pi-1 Pk Pj }, taking the minimum over i ≤ k < j.
Thus M12 = min{ M11+M22+P0P1P2} = 0 + 0 + 5 * 4 * 6 = 120
M23 = min{ M22+M33+P1P2P3} = 0 + 0 + 4 * 6 * 2 = 48
M34 = min{ M33+M44+P2P3P4} = 0 + 0 + 6 * 2 * 7 = 84
M13 = min{ M11+M23+P0P1P3 , M12+M33+P0P2P3 }
= min{0 + 48 + 5 * 4 * 2 , 120+ 0+ 5 * 6 * 2} = min{ 88, 180} = 88
M24 = min{ M22+M34+P1P2P4 , M23+M44+ P1P3P4 }
= min{0 + 84 + 4 * 6 * 7 , 48+ 0+ 4 * 2 * 7} = min{ 252, 104} = 104
M14 = min{ M11+M24+P0P1P4 , M12+M34+P0P2P4 , M13+M44+P0P3P4 }
= min{0 + 104 + 5 * 4 * 7 , 120+ 84+ 5 * 6 * 7 , 88 + 0 + 5 * 2 * 7}
= min{ 244,414, 158} = 158

At the first level we compare M12, M23 and M34 (writing the chain as P, Q, R, T, where P = A1, Q = A2, R = A3, T = A4).
As M23 = 48 is the minimum of the three, we parenthesize QR in the product PQRT, i.e., P(QR)T. At the second level we
compare M13 and M24. As M13 = 88 is the minimum of the two, we parenthesize P and (QR), i.e., (P(QR))T. Finally we
parenthesize the whole product, i.e., ((P(QR))T). The top entry of the table, i.e., M14 = 158, gives the optimum cost of ((P(QR))T).
Verification:- The chain of matrices is P, Q, R, T, the product P x Q x R x T can be fully parenthesized in
five distinct ways:
1. (P(Q(RT))) 2. (P((QR)T)) 3. ((PQ)(RT)) 4. ((P(QR))T) 5. (((PQ) R)T)
Cost of (P(Q(RT))) = 5*4*7 + 4*6*7 + 6*2*7= 392
Cost of (P((QR)T)) = 5*4*7 + 4*6*2 + 4*2*7= 244
Cost of ((PQ)(RT)) = 5*4*6 + 6*2*7 + 5*6*7 = 414
Cost of ((P(QR))T) = 5*4*2 + 4*6*2 + 5*2*7= 158
Cost of (((PQ) R)T) = 5*4*6 + 5*6*2 + 5*2*7 = 250


The manual verification above also gives an optimal cost of 158, with the multiplication order ((P(QR))T).
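This can also be checked mechanically. The short Python sketch below (an illustration, not from the notes) enumerates every full parenthesization of the chain together with its cost; for the dimensions 5, 4, 6, 2, 7 it lists exactly the five costs above, with ((P(QR))T) = 158 as the minimum.

    def all_parenthesizations(p, names, i, j):
        # Yield (expression, cost) for every full parenthesization of Ai..Aj,
        # where names[i] is the display name of Ai and Ai is p[i-1] x p[i].
        if i == j:
            yield names[i], 0
            return
        for k in range(i, j):                        # split between Ak and Ak+1
            for left, lc in all_parenthesizations(p, names, i, k):
                for right, rc in all_parenthesizations(p, names, k + 1, j):
                    yield "(" + left + right + ")", lc + rc + p[i - 1] * p[k] * p[j]

    p = [5, 4, 6, 2, 7]
    names = {1: "P", 2: "Q", 3: "R", 4: "T"}
    for expr, cost in sorted(all_parenthesizations(p, names, 1, 4), key=lambda x: x[1]):
        print(cost, expr)    # 158 ((P(QR))T) first, then 244, 250, 392, 414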

Algorithm Matrix_Chain_Mul(P)
// P[0..n] holds the dimensions; matrix Ai is P[i-1] × P[i]
{
    for i = 1 to n do
        M[i, i] = 0                      // a chain of one matrix costs nothing
    for len = 2 to n do                  // len is the length of the chain
    {
        for i = 1 to n - len + 1 do
        {
            j = i + len - 1
            M[i, j] = ∞
            for k = i to j - 1 do        // try every split point
            {
                q = M[i, k] + M[k + 1, j] + P[i-1] * P[k] * P[j]
                if (q < M[i, j]) then
                    M[i, j] = q
            }
        }
    }
    return M
}
Time Complexity:- Algorithm Matrix_Chain_Mul uses the first for loop to initialize M[i, i], which takes O(n) time.
The remaining M[i, j] values are computed by three nested for loops, which take O(n³) time. Thus the overall time
complexity of Matrix_Chain_Mul is O(n³).
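For completeness, here is a runnable Python version of the same bottom-up procedure (a sketch, not the notes' official listing) that also records the best split point s[i][j], from which the optimal parenthesization of Step 4 can be printed directly.

    import math

    def matrix_chain_order(p):
        # p[0..n]: matrix Ai is p[i-1] x p[i]. Returns the cost table M and the split table s.
        n = len(p) - 1
        M = [[0] * (n + 1) for _ in range(n + 1)]
        s = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):               # chain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                M[i][j] = math.inf
                for k in range(i, j):                # try every split point
                    q = M[i][k] + M[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < M[i][j]:
                        M[i][j], s[i][j] = q, k
        return M, s

    def parenthesize(s, i, j):
        # Rebuild the optimal parenthesization from the split table.
        if i == j:
            return "A%d" % i
        k = s[i][j]
        return "(" + parenthesize(s, i, k) + parenthesize(s, k + 1, j) + ")"

    M, s = matrix_chain_order([5, 4, 6, 2, 7])
    print(M[1][4], parenthesize(s, 1, 4))   # 158 ((A1(A2A3))A4)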
Application-II: OBST
Binary Search Trees:- A binary search tree T is a binary tree; either it is empty, or each node in the tree
contains an identifier and:
1. All identifiers in the left subtree of T are less than the identifier in the root node of T.
2. All identifiers in the right subtree are greater than the identifier in the root node of T.
3. The left and right subtrees of T are also binary search trees.
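A small Python check (illustrative only, using tuples (identifier, left, right) for nodes) makes these three conditions concrete:

    def is_bst(node, low=float("-inf"), high=float("inf")):
        # node is None (empty tree) or a tuple (identifier, left, right)
        if node is None:
            return True
        key, left, right = node
        return (low < key < high
                and is_bst(left, low, key)      # everything on the left is smaller
                and is_bst(right, key, high))   # everything on the right is larger

    print(is_bst((2, (1, None, None), (3, None, (4, None, None)))))   # True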
Optimal Binary Search Tree problem:- Given a sequence K = {k1, k2, ..., kn} of n distinct keys in sorted
order (so that k1 < k2 < ··· < kn), we wish to build a binary search tree from these keys such that the cost
of the binary search tree is minimum. For each key ki, we have a probability pi that a search will be for ki.
Some searches may be for values not in K, and so we also have n + 1 "dummy keys" E0, E1, E2, ..., En
representing values not in K. In particular, E0 represents all values less than k1, En represents all values
greater than kn, and for i = 1, 2, ..., n -1, the dummy key Ei represents all values between ki and ki+1. For
each dummy key Ei, we have a probability qi that a search will correspond to Ei.
The number of possible binary search trees for n keys is the nth Catalan number, given by the following formula:
    2nCn / (n + 1)
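As a quick illustration (not part of the original notes), these counts can be computed directly in Python:

    from math import comb

    def num_bsts(n):
        # Number of structurally distinct binary search trees on n keys (Catalan number).
        return comb(2 * n, n) // (n + 1)

    print([num_bsts(n) for n in range(1, 6)])   # [1, 2, 5, 14, 42]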
The cost of a binary search tree is calculated as follows, where the level of the root is taken as 1:
    cost(T) = Σ (1 ≤ i ≤ n) pi · level(ki) + Σ (0 ≤ i ≤ n) qi · (level(Ei) − 1)
That is, a successful search for ki costs one comparison for each node on the path from the root to ki, and an
unsuccessful search ends at the dummy (external) node Ei.

Solving the Optimal Binary Search Tree problem by dynamic programming


Step 1. The structure of an optimal binary search tree
Construct an optimal solution to the problem from optimal solutions to subproblems. Given keys ki,
..., kj, one of these keys, say kr (i ≤ r ≤ j), will be the root of an optimal subtree containing these keys. The
left subtree of the root kr will contain the keys ki, ..., kr-1 (and dummy keys Ei-1, ..., Er-1), and the right
subtree will contain the keys kr+1, ..., kj (and dummy keys Er, ..., Ej).
Step 2. A recursive solution
Next, we define the cost of an optimal solution recursively in terms of the optimal solutions to
subproblems. For the optimal binary search tree problem, we can define W[i, j], C[i, j], and r[i, j] recursively as
follows:
    W[i, i] = qi,   C[i, i] = 0,   r[i, i] = 0             for i = j
    W[i, j] = W[i, j-1] + qj + pj                           for i < j
    C[i, j] = min { C[i, k-1] + C[k, j] } + W[i, j]         for i < j, minimum taken over i < k ≤ j
    r[i, j] = the value of k that attains this minimum      for i < j
Step 3. Computing the expected search cost of an optimal binary search tree
We perform the third step of the dynamic-programming paradigm and compute the optimal cost by
using a tabular, bottom-up approach.

Step 4. Constructing an optimal solution.
Now consider the top entry of the table, i.e., the 0th row and nth column (r0n; in an example with n = 3
this is r03). If r[i, j] = k, the kth key is the root of the tree Tij. If Tij is the tree and the kth key is its root,
then the tree is subdivided into a left subtree Ti,k-1 and a right subtree Tk,j. The process is repeated until a
tree Tij with i = j is reached; such a tree is empty and corresponds to the dummy (external) node Ei.
Algorithm OBST(p, q, n)
// p[1..n]: key probabilities, q[0..n]: dummy-key probabilities
{
    for i = 0 to n-1 do
    {
        // initialize: empty trees and optimal trees with one node
        w[i, i] = q[i]; c[i, i] = 0; r[i, i] = 0;
        w[i, i+1] = q[i] + q[i+1] + p[i+1];
        c[i, i+1] = q[i] + q[i+1] + p[i+1];
        r[i, i+1] = i+1;
    }
    w[n, n] = q[n]; c[n, n] = 0; r[n, n] = 0;
    for m = 2 to n do                    // find optimal trees with m nodes
        for i = 0 to n - m do
        {
            j = i + m;
            w[i, j] = w[i, j-1] + p[j] + q[j];
            // pick the root k (i < k ≤ j) that minimizes c[i, k-1] + c[k, j]
            c[i, j] = min { c[i, k-1] + c[k, j] : i < k ≤ j } + w[i, j];
            r[i, j] = the k that attains this minimum;
        }
    return (c[0, n], w[0, n], r[0, n]);
}