
Dynamic Programming

(CLRS 15.2-15.3)

Today we discuss a technique called dynamic programming. It is neither especially dynamic
nor especially related to programming. We will introduce dynamic programming by looking at an
example.

Matrix-chain multiplication
Problem: Given a sequence of matrices A_1, A_2, A_3, ..., A_n, find the best way (using the minimal
number of multiplications) to compute their product.
Isn't there only one way? ((...((A_1 · A_2) · A_3) · ... ) · A_n)
No, matrix multiplication is associative.
e.g. A_1 · (A_2 · (A_3 · ( ... (A_{n-1} · A_n) ... ))) yields the same matrix.
Different multiplication orders do not cost the same:
Multiplying a p × q matrix A and a q × r matrix B takes p · q · r multiplications; the result
is a p × r matrix.
Consider multiplying a 10 × 100 matrix A_1 with a 100 × 5 matrix A_2 and a 5 × 50 matrix
A_3.
(A_1 · A_2) · A_3 takes 10 · 100 · 5 + 10 · 5 · 50 = 7500 multiplications.
A_1 · (A_2 · A_3) takes 100 · 5 · 50 + 10 · 100 · 50 = 75000 multiplications.
In general, let A_i be a p_{i-1} × p_i matrix.
The chain A_1, A_2, A_3, ..., A_n can then be represented by the dimensions p_0, p_1, p_2, p_3, ..., p_n.
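As a quick check of the arithmetic above, the two parenthesization costs can be computed directly (a minimal Python sketch; the helper name mult_cost is ours, not from the notes):

```python
def mult_cost(p, q, r):
    """Scalar multiplications needed to multiply a p x q matrix by a q x r matrix."""
    return p * q * r

# A1 is 10 x 100, A2 is 100 x 5, A3 is 5 x 50.
# (A1 A2) A3: a 10x100 by 100x5 product, then a 10x5 by 5x50 product.
cost1 = mult_cost(10, 100, 5) + mult_cost(10, 5, 50)    # 5000 + 2500 = 7500
# A1 (A2 A3): a 100x5 by 5x50 product, then a 10x100 by 100x50 product.
cost2 = mult_cost(100, 5, 50) + mult_cost(10, 100, 50)  # 25000 + 50000 = 75000
print(cost1, cost2)  # 7500 75000
```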
Let m(i, j) denote the minimal number of multiplications needed to compute A_i · A_{i+1} · ... · A_j.
We want to compute m(1, n).
Divide-and-conquer solution/recursive algorithm:
Divide into j − i subproblems by trying to place the outermost parenthesis in each of the j − i possible positions.
(e.g. (A_i · A_{i+1} · ... · A_k) · (A_{k+1} · ... · A_j) corresponds to multiplying a p_{i-1} × p_k and a p_k × p_j
matrix.)
Recursively find the best way of solving the sub-problems. (e.g. the best way of computing A_i ·
A_{i+1} · ... · A_k and A_{k+1} · A_{k+2} · ... · A_j)
Pick the best overall solution.

Algorithm expressed in terms of m(i, j):

m(i, j) = 0                                                      if i = j
m(i, j) = min_{i ≤ k < j} { m(i, k) + m(k + 1, j) + p_{i-1} · p_k · p_j }   if i < j

Program:
Matrix-chain(i, j)
   IF i = j THEN return 0
   m(i, j) = ∞
   FOR k = i TO j − 1 DO
      q = Matrix-chain(i, k) + Matrix-chain(k + 1, j) + p_{i-1} · p_k · p_j
      IF q < m(i, j) THEN m(i, j) = q
   OD
   Return m(i, j)
END Matrix-chain
Return Matrix-chain(1, n)
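The recursive algorithm above can be sketched in Python as follows (the function name matrix_chain and the 1-indexed convention, A_k being a p[k-1] × p[k] matrix, are ours):

```python
import sys

def matrix_chain(p, i, j):
    """Minimum scalar multiplications to compute A_i ... A_j,
    where A_k is a p[k-1] x p[k] matrix (1-indexed, as in the notes)."""
    if i == j:
        return 0
    best = sys.maxsize  # plays the role of infinity
    for k in range(i, j):  # try every split (A_i..A_k)(A_{k+1}..A_j)
        q = (matrix_chain(p, i, k) + matrix_chain(p, k + 1, j)
             + p[i - 1] * p[k] * p[j])
        best = min(best, q)
    return best

# The three-matrix example: dimensions 10x100, 100x5, 5x50
print(matrix_chain([10, 100, 5, 50], 1, 3))  # 7500
```

Note that this version recomputes subproblems, so it runs in exponential time, as the recurrence below shows.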
Running time:

T(n) = Σ_{k=1}^{n-1} ( T(k) + T(n − k) + O(1) )
     = 2 · Σ_{k=1}^{n-1} T(k) + O(n)
     ≥ 2 · T(n − 1)
     ≥ 2 · 2 · T(n − 2)
     ≥ 2 · 2 · 2 · ...
     = Ω(2^n)
Problem is that we compute the same result over and over again.
Example: Recursion tree for Matrix-chain(1, 4)

[Recursion tree: the root (1,4) branches into the splits (1,1)(2,4), (1,2)(3,4), and (1,3)(4,4); each non-trivial node branches further in the same way. Subproblems such as (1,2), (2,3), (3,4), and the leaves (i,i) occur in several branches.]
For example, Matrix-chain(3, 4) is computed twice.


The solution is to remember the values we have already computed in a table (memoization):
Matrix-chain(i, j)
   IF i = j THEN return 0
   IF m(i, j) < ∞ THEN return m(i, j)    /* This line has changed */
   FOR k = i TO j − 1 DO
      q = Matrix-chain(i, k) + Matrix-chain(k + 1, j) + p_{i-1} · p_k · p_j
      IF q < m(i, j) THEN m(i, j) = q
   OD
   return m(i, j)
END Matrix-chain

FOR i = 1 TO n DO
   FOR j = i TO n DO
      m(i, j) = ∞
   OD
OD
return Matrix-chain(1, n)
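A runnable version of the memoized algorithm (a Python sketch; we use a dictionary for the table m, so "m(i, j) < ∞" becomes a key lookup):

```python
import sys

def matrix_chain_memo(p):
    """Memoized matrix-chain: each subproblem is solved at most once
    and cached in the table m."""
    n = len(p) - 1
    m = {}  # m[(i, j)] = best cost for A_i ... A_j, once computed

    def solve(i, j):
        if i == j:
            return 0
        if (i, j) in m:          # already computed: answer in O(1)
            return m[(i, j)]
        best = sys.maxsize
        for k in range(i, j):
            q = solve(i, k) + solve(k + 1, j) + p[i - 1] * p[k] * p[j]
            best = min(best, q)
        m[(i, j)] = best
        return best

    return solve(1, n)

print(matrix_chain_memo([10, 100, 5, 50]))  # 7500
```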
Running time:
Θ(n²) different calls to Matrix-chain(i, j).
The first time a call is made it takes O(n) time, not counting recursive calls.
Once a call has been made, making it again costs only O(1) time.

⇒ O(n³) time.

Another way of thinking about it: there are Θ(n²) total entries to fill, and it takes O(n) time to fill one.

Alternative view of Dynamic Programming

Often (including in the book) dynamic programming is presented in a different way: as filling
up a table from the bottom.
Matrix-chain example: The key is that m(i, j) only depends on m(i, k) and m(k + 1, j) where
i ≤ k < j; once we have computed those, we can compute m(i, j).
We can easily compute m(i, i) for all 1 ≤ i ≤ n (m(i, i) = 0).
Then we can easily compute m(i, i + 1) for all 1 ≤ i ≤ n − 1:
   m(i, i + 1) = m(i, i) + m(i + 1, i + 1) + p_{i-1} · p_i · p_{i+1}
Then we can compute m(i, i + 2) for all 1 ≤ i ≤ n − 2:
   m(i, i + 2) = min{ m(i, i) + m(i + 1, i + 2) + p_{i-1} · p_i · p_{i+2},
                      m(i, i + 1) + m(i + 2, i + 2) + p_{i-1} · p_{i+1} · p_{i+2} }
...
until we compute m(1, n).
Computation order:

[Table diagram: the entries m(i, j) for 1 ≤ i ≤ j ≤ n are filled diagonal by diagonal, starting with the main diagonal m(i, i) and moving up and to the right toward the corner entry m(1, n).]

Program:

FOR i = 1 TO n DO
   m(i, i) = 0
OD
FOR l = 1 TO n − 1 DO
   FOR i = 1 TO n − l DO
      j = i + l
      m(i, j) = ∞
      FOR k = i TO j − 1 DO
         q = m(i, k) + m(k + 1, j) + p_{i-1} · p_k · p_j
         IF q < m(i, j) THEN m(i, j) = q
      OD
   OD
OD
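The bottom-up table-filling order can be sketched in Python as follows (a sketch; l is the chain "width" j − i, and the inner loop runs k = i, ..., j − 1):

```python
import sys

def matrix_chain_bottom_up(p):
    """Fill m[i][j] diagonal by diagonal: entries for chains of width l
    only need entries for strictly shorter chains."""
    n = len(p) - 1
    # m[i][j] = min multiplications for A_i ... A_j (1-indexed; row/col 0 unused)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(1, n):                 # l = j - i
        for i in range(1, n - l + 1):
            j = i + l
            m[i][j] = sys.maxsize
            for k in range(i, j):         # k = i, ..., j - 1
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                m[i][j] = min(m[i][j], q)
    return m[1][n]

print(matrix_chain_bottom_up([10, 100, 5, 50]))  # 7500
```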
Analysis:
Θ(n²) entries, O(n) time to compute each ⇒ O(n³).
Note:
I like the recursive (divide-and-conquer) way of thinking.
The book seems to like the table method better.
