
Unit 10 Dynamic Programming – 1


Structure:
10.1 Introduction
Objectives
10.2 Overview of Dynamic Programming
Dynamic programming approach
Concept of memoization
Dynamic programming Vs divide and conquer
10.3 Fibonacci Numbers
Introduction to Fibonacci numbers
Algorithms to find nth Fibonacci number
10.4 Binomial Coefficient
Definition of binomial coefficients
Computation of binomial coefficients
10.5 Warshall’s and Floyd’s Algorithms
Overview of Warshall’s and Floyd’s algorithm
Warshall’s algorithm
Floyd’s algorithm
10.6 Summary
10.7 Glossary
10.8 Terminal Questions
10.9 Answers

10.1 Introduction
By now you must be familiar with different algorithms, the various concepts behind them, and their space and time trade-offs. This unit introduces you to the concepts of dynamic programming.
Dynamic programming is a general algorithm design technique. It was introduced in the 1950s by the American mathematician Richard Bellman to solve optimization problems.
Objectives:
After studying this unit you should be able to:
 calculate the nth Fibonacci number
 explain the dynamic programming approach to compute binomial
coefficients
 explain Warshall’s and Floyd’s algorithms

10.2 Overview of Dynamic programming


Dynamic programming is an optimization technique for particular classes of
backtracking algorithms which repeatedly solve sub-problems. When
dealing with algorithms, dynamic programming always refers to the
technique of filling in a table with values computed from the sub-problems.
The central idea of dynamic programming is to solve several smaller
(overlapping) sub-problems and record their solutions in a table so that each
sub-problem is only solved once. Hence the final state of the table will be (or
contain) the solution.
It sometimes happens that the natural way of dividing an instance
suggested by the structure of the problem leads us to consider several
overlapping sub-instances. If we solve each of these independently, they will
in turn create a large number of identical sub-instances. If, on the other
hand, we take advantage of the duplication and solve each sub-instance
only once, saving the solution for later use, then we get a more efficient
algorithm.
10.2.1 Dynamic programming approach
Dynamic programming has the following two approaches:
 Top-down approach – In this approach, if the solution to any problem
can be formed recursively using the solution to its sub-problems, and if
its sub-problems are overlapping, then one can easily memoize or store
the solutions to the sub-problems in a table. Whenever we begin solving
a new sub-problem, we first check the table to see if it has been already
solved. If a solution has been recorded, then we can use it directly,
otherwise we solve the sub-problem first and add its solution to the
table.
 Bottom-up approach – In this approach, once we formulate the solution to a
problem recursively in terms of its sub-problems, we can try reformulating
the problem in a bottom-up fashion, i.e. try solving the sub-problems first
and use their solutions to arrive at solutions to bigger sub-problems. This is
also usually done in a table, where the results of the sub-problems are
stored in a table and are iteratively used to generate solutions to bigger sub-
problems.
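To make the contrast concrete, here is a small illustrative sketch in Python (ours, not part of the original unit) that counts the ways to climb a staircase of n steps taking 1 or 2 steps at a time, a problem with overlapping sub-problems since ways(n) = ways(n - 1) + ways(n - 2); the function names climb_top_down and climb_bottom_up are our own.

# Illustrative sketch (assumed example): counting the ways to climb n stairs
# taking 1 or 2 steps at a time, once top-down and once bottom-up.

def climb_top_down(n, memo=None):
    # Top-down: recurse, but store each sub-problem's answer in a table.
    if memo is None:
        memo = {}
    if n <= 1:                 # base cases: one way to climb 0 or 1 steps
        return 1
    if n not in memo:          # solve the sub-problem only if it is not in the table
        memo[n] = climb_top_down(n - 1, memo) + climb_top_down(n - 2, memo)
    return memo[n]

def climb_bottom_up(n):
    # Bottom-up: fill a table from the smallest sub-problems upwards.
    ways = [1] * (n + 1)       # ways[0] = ways[1] = 1
    for i in range(2, n + 1):
        ways[i] = ways[i - 1] + ways[i - 2]
    return ways[n]

print(climb_top_down(10), climb_bottom_up(10))   # both print 89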
Let us now analyze the concept of memoization.


10.2.2 Concept of memoization


Memoization has a specialized meaning in computing. Memoization is the
technique by which we make a function perform faster by trading space for
time. It can be achieved by storing the return values of the function in a
table. Hence when the function is called again, the values stored in the table
are used to further compute the values, instead of computing the value all
over again. The set of remembered associations can be a fixed-size set
controlled by a replacement algorithm or a fixed set, depending on the
nature of the function and its use. A function can be memoized only if it is
referentially transparent; that is, only if calling the function is exactly the
same as replacing that function call with its return value.
Memoization methodology
1) Start with a backtracking algorithm
2) Look up the problem in a table; if table has a valid entry for it, return that
value
3) Else, compute the problem recursively, and then store the result in the
table before returning the value
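As an illustration of this methodology (our own sketch, not from the unit), a generic memoization helper in Python might look as follows; the table is a dictionary keyed by the function's argument, and the approach works only for referentially transparent functions.

# Generic memoization helper (illustrative sketch): trades space for time by
# recording each return value in a table keyed by the argument.

def memoize(func):
    table = {}                      # remembered associations: argument -> result
    def wrapper(arg):
        if arg not in table:        # step 2: look the problem up in the table
            table[arg] = func(arg)  # step 3: compute and store before returning
        return table[arg]
    return wrapper

@memoize
def square(x):
    print("computing", x)           # printed only on the first call for each x
    return x * x

square(4)   # computes and stores 16
square(4)   # returned from the table, nothing recomputed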
10.2.3 Dynamic programming Vs divide and conquer
At a glance the dynamic programming approach might look exactly the same as the divide-and-conquer technique, but the two differ substantially in their problem solving approach.
Divide-and-conquer algorithms divide the problem into independent sub-problems, solve the sub-problems recursively, and then combine their solutions to solve the original problem. In contrast, the dynamic programming approach is applicable when the sub-problems are not independent, that is, when sub-problems share sub-problems. A divide-and-conquer algorithm therefore does more work than necessary by repeatedly solving the common sub-problems. A dynamic programming algorithm, on the other hand, solves every sub-problem just once and then saves its answer in a table, thereby avoiding the need to re-compute the answer every time the sub-problem is encountered.

Self Assessment Questions


1. The ____________ and _____________ are the two approaches to
dynamic programming.


2. ____________ is a technique to store answers to sub-problems in a table.
3. The ___________________ algorithm is similar to the dynamic
approach.

10.3 Fibonacci Numbers


In the previous section we got an overview of dynamic programming. In this section we will learn about Fibonacci numbers.
Before we analyze the dynamic programming technique further, it will be beneficial to first discuss the Fibonacci numbers and the memoization technique used to compute the nth Fibonacci number.
10.3.1 Introduction to Fibonacci numbers
In the Fibonacci sequence, after the first two numbers i.e. 0 and 1 in the
sequence, each subsequent number in the series is equal to the sum of the
previous two numbers. The sequence is named after Leonardo of Pisa, also
known as Fibonacci. These fascinating numbers can be found in the
branching of trees, the patterns on a pineapple, the florets of a sunflower,
the spirals of a pine cone, and even in the placement of leaves on the stems
of many plants. Fibonacci numbers are one example of patterns that have
intrigued mathematicians through the ages. A real life example of a
Fibonacci sequence is given below.
Beginning with a single pair of rabbits, if every month each productive pair
bears a new pair, which becomes productive when they are 1 month old,
how many rabbits will there be after n months?
Imagine that there are x_n pairs of rabbits after n months. The number of pairs in month n+1 will be x_n (in this problem, rabbits never die) plus the number of new pairs born. But new pairs are only born to pairs at least 1 month old, so there will be x_{n-1} new pairs:

x_{n+1} = x_n + x_{n-1}    Eq: 10.1

Equation Eq: 10.1 is simply the rule for generating the Fibonacci numbers.
10.3.2 Algorithms to find nth Fibonacci number
Let us now try to write an algorithm to calculate the nth Fibonacci number. By definition, the nth Fibonacci number, denoted by F_n, is given by Eq: 10.2:


F_0 = 0
F_1 = 1
F_n = F_{n-1} + F_{n-2}    Eq: 10.2
We will now try to create an algorithm for finding the nth Fibonacci number. Let us begin with the naive algorithm that directly codes the mathematical definition:
Naive algorithm to calculate the nth Fibonacci number:
// fib -- compute Fibonacci(n)
function fib(integer n): integer
assert (n ≥ 0)
if n == 0: return 0 fi
if n == 1: return 1 fi
return fib(n - 1) + fib(n - 2)
end
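A direct Python transcription of this pseudocode might look like the following sketch (ours); it mirrors the recursive definition exactly and therefore shares its exponential running time.

# Naive recursive Fibonacci (illustrative transcription of the pseudocode above).
# It recomputes the same sub-problems over and over.

def fib(n: int) -> int:
    assert n >= 0
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(6))   # prints 8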

Let us now trace the naive algorithm to calculate the nth Fibonacci number
Trace of the naive algorithm to calculate the nth Fibonacci number
n=2
function fib( integer 2) //n should be an integer
assert (2≥0)
if n == 0: return 0 fi
if n == 1: return 1 fi
return fib(2 - 1) + fib(2 - 2)=1 // the 2nd Fibonacci number
end

It should be noted that this is just an example, because there is already a mathematical closed form for F_n, as shown in Eq: 10.3:

F(n) = (φ^n − (1 − φ)^n) / √5    Eq: 10.3

where

φ = (1 + √5) / 2  (the Golden Ratio)    Eq: 10.4


Hence, using the equation for F(n), we can calculate the nth Fibonacci number efficiently when n is small. However, when n is large, this method is very inefficient.
In equation Eq: 10.4, φ is known as the Golden Ratio. It is an irrational mathematical constant which is approximately 1.6180339887. This unique ratio has served as an inspiration to thinkers of all disciplines, be it art, mathematics, architecture, physiology or biology.
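As a small illustration (our own sketch, not from the unit), the closed form of Eq: 10.3 can be evaluated directly in floating point and rounded to the nearest integer; rounding error is what limits this approach to moderate values of n.

# Closed-form (Binet) calculation of F(n), illustrating Eq: 10.3 and Eq: 10.4.
# Floating-point rounding makes this reliable only for fairly small n.

import math

PHI = (1 + math.sqrt(5)) / 2          # the Golden Ratio of Eq: 10.4

def fib_closed_form(n: int) -> int:
    return round((PHI ** n - (1 - PHI) ** n) / math.sqrt(5))

print([fib_closed_form(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]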

[Call tree for fib(6): sub-calls such as fib(4), fib(3) and fib(2) appear repeatedly]
Figure 10.1: Fibonacci Tree

To analyze the running time of the algorithm we will look at the call tree for the sixth Fibonacci number.
In Figure 10.1 every leaf of the tree has the value 0 or 1, and the sum of these values is the final result. Thus, for any n, the number of leaves in the call tree is Fib(n) itself. The closed form therefore tells us that the number of leaves in fib(n) is approximately equal to equation Eq: 10.5:

((1 + √5)/2)^n ≈ 1.618^n = 2^(log₂(1.618^n)) = 2^(n·log₂ 1.618) ≈ 2^(0.69n)    Eq: 10.5

This means that there are far too many leaves, considering the repeated patterns found in the Figure 10.1 tree. (The algebraic manipulation used in equation Eq: 10.5 to make the base of the exponent 2 should be duly noted.)
We can use a recursive memoization algorithm, and it can be turned bottom-up into an iterative algorithm that fills in a table of solutions to sub-problems. In the bottom-up version some of the sub-problems solved might not be needed for the final result (and this is where dynamic programming differs from memoization), but dynamic programming can still be very efficient because it uses the stored results in a more systematic manner and has less call overhead.
The pseudocode to compute the nth Fibonacci number is given as follows:
Pseudocode to compute the nth Fibonacci number:
function fib(integer n): integer
if n == 0 or n == 1:
return n
else-if f[n] != -1:
return f[n]
else
f[n] = fib(n - 1) + fib(n - 2)
return f[n]
fi
end

In the above code, if the value of fib(n) has already been calculated it is stored in f[n] and returned instead of being calculated again. That means all the copies of the repeated sub-call trees can be skipped.
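A runnable Python version of the memoized pseudocode might look like the sketch below (our transcription); here the table f is a dictionary rather than an array initialized to -1, but it plays exactly the same role.

# Memoized (top-down) Fibonacci, an illustrative transcription of the pseudocode.
# Each fib(k) for k <= n is computed once and then read back from the table.

f = {}                               # table of already computed Fibonacci numbers

def fib(n: int) -> int:
    if n == 0 or n == 1:
        return n
    if n in f:                       # already solved: reuse the stored value
        return f[n]
    f[n] = fib(n - 1) + fib(n - 2)   # solve once and record in the table
    return f[n]

print(fib(6))    # prints 8; fib(2) to fib(5) are each computed only once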

[Call tree for fib(6) with memoization: the boxed values are already stored in the table, so those sub-calls are skipped]
Figure 10.2: Fibonacci Tree Dynamically Approached

In the Figure 10.2 the values in the boxes are values that already have been
calculated and the calls can thus be skipped. Hence it is a lot faster than the
straightforward recursive algorithm. Here every value less than n is
calculated once only. Therefore the first time you execute it, the asymptotic


running time is O(n). Any other calls to it will take O(1) time since the values
have been pre-calculated (assuming each subsequent call's argument is
less than n).
However, the algorithm does consume a lot of memory: when we calculate fib(n), the values of fib(0) to fib(n) are all stored in main memory. This can be improved, though there is often no need to, as memory costs have fallen drastically; moreover, if we do improve it, the O(1) running time of subsequent calls is lost, since the values are no longer stored. The value of fib(n) only depends on fib(n-1) and fib(n-2), hence we can discard the other values by going bottom-up. If we want to calculate fib(n), we first have to calculate fib(2) = fib(0) + fib(1). We then need to calculate fib(3) by adding fib(1) and fib(2). After this we can discard the values of fib(0) and fib(1), since we no longer need them to calculate any further values. Now we can calculate fib(4) from fib(2) and fib(3) and discard fib(2), then we can calculate fib(5) and discard fib(3), and so on. The pseudocode to do this is given below.
Pseudocode to calculate nth Fibonacci number (Dynamic
programming approach):
function fib(integer n): integer
if n == 0 or n == 1:
return n
fi

let u := 0
let v := 1

for i := 2 to n:
let t := u + v
u := v
v := t
repeat

return v
end
We can rework the code to store the values in an array for subsequent calls, but we don't have to. This method is typical of dynamic programming: we first identify the sub-problems that we need to solve in order to solve the entire problem, and then, using an iterative bottom-up process, we calculate their values.
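A Python rendering of this bottom-up pseudocode might look like the following sketch (our transcription); it keeps only the two most recent values, so it runs in O(n) time and O(1) extra space.

# Bottom-up Fibonacci (illustrative transcription of the dynamic programming
# pseudocode): only the two most recent values are kept at any moment.

def fib(n: int) -> int:
    if n == 0 or n == 1:
        return n
    u, v = 0, 1                 # u = fib(i - 2), v = fib(i - 1)
    for _ in range(2, n + 1):
        u, v = v, u + v         # slide the pair one position up the sequence
    return v

print([fib(i) for i in range(7)])   # [0, 1, 1, 2, 3, 5, 8]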

Activity 1
Find the various instances in the natural world where you find the
presence of the Fibonacci number and discuss it with your friends.

Self Assessment Questions


4. The formula to calculate the nth Fibonacci number is _______________.
5. The asymptotic running time when we first run to calculate the nth
Fibonacci number is _______.
6. To compute the nth Fibonacci number we followed the __________
dynamic approach.

10.4 Binomial Coefficients


In the previous section we discussed Fibonacci numbers and how we can calculate the nth Fibonacci number using the memoization technique. In this section we will learn about binomial coefficients.
10.4.1 Definition of binomial coefficient
The binomial coefficient is the number of ways of picking k unordered outcomes from n possibilities. It is also known as a combination or combinatorial number. The binomial coefficient is represented as C(n, k), with the binomial symbol (n above k in parentheses), or as nCk, and can be read as “n choose k”.
The value of the binomial coefficient is given by equation Eq: 10.6:

nCk = C(n, k) = n! / ((n − k)! k!)  for 0 ≤ k ≤ n    Eq: 10.6

Writing the factorial as a gamma function, z! = Γ(z + 1), allows the binomial coefficient to be generalized to non-integer arguments, including complex n and k.
10.4.2 Computation of binomial coefficients
There are several ways by which we can calculate the binomial coefficients.
Here we will learn to calculate binomial coefficients using Pascal’s triangle
and the dynamic programming approach.

Pascal’s triangle
Pascal’s triangle helps you determine the coefficients which arise in
binomial expansion. For example consider equation Eq: 10.7
(x + y)^2 = x^2 + 2xy + y^2 = 1·x^2·y^0 + 2·x^1·y^1 + 1·x^0·y^2    Eq: 10.7

You might notice that these coefficients are the numbers in row 2 of the Pascal's triangle shown in Figure 10.3.
0: 1
1: 1 1
2: 1 2 1
3: 1 3 3 1
4: 1 4 6 4 1
5: 1 5 10 10 5 1
6: 1 6 15 20 15 6 1
7: 1 7 21 35 35 21 7 1
8: 1 8 28 56 70 56 28 8 1

Figure 10.3: Pascal’s Triangle

Hence in general when x + y is raised to a positive integer we can give the following equation Eq: 10.8:

(x + y)^n = a_0 x^n + a_1 x^(n-1) y + a_2 x^(n-2) y^2 + … + a_(n-1) x y^(n-1) + a_n y^n    Eq: 10.8

where the coefficients a_i in this expansion are exactly the numbers on row n of Pascal's triangle and are written as equation Eq: 10.9, which is the binomial theorem:

a_i = C(n, i)    Eq: 10.9

You will notice that the entire right diagonal of Pascal's triangle is the coefficient of y^n in the binomial expansions, while the next diagonal is the coefficient of x·y^(n-1) and so on.
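As a quick illustration (our own sketch, not part of the unit), the rows of Pascal's triangle can be generated directly from the rule that each inner entry is the sum of the two entries above it.

# Illustrative sketch: generating rows of Pascal's triangle.
# Each inner entry is the sum of the two entries directly above it.

def pascal_row(n):
    row = [1]
    for _ in range(n):
        # next row: 1, the pairwise sums of the current row, 1
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return row

for n in range(5):
    print(n, pascal_row(n))
# 0 [1]
# 1 [1, 1]
# 2 [1, 2, 1]
# 3 [1, 3, 3, 1]
# 4 [1, 4, 6, 4, 1]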
Dynamic programming approach
Computing a binomial coefficient is a standard example of applying dynamic
programming to a non-optimization problem. You may recall from your
studies of elementary combinatorics that the binomial coefficient, denoted


by C(n, k) or nCk, is the number of combinations (subsets) of k elements from an n-element set (0 ≤ k ≤ n). The name “binomial coefficients” comes from the participation of these numbers in the binomial formula given in equation Eq: 10.10:

(a + b)^n = C(n, 0) a^n + … + C(n, k) a^(n−k) b^k + … + C(n, n) b^n    Eq: 10.10

Of the many properties of binomial coefficients, we concentrate on the two given by equation Eq: 10.11:

C(n, k) = C(n − 1, k − 1) + C(n − 1, k)  for n > k > 0
and C(n, 0) = C(n, n) = 1    Eq: 10.11
The nature of the recurrence, which expresses the problem of computing C(n, k) in terms of the smaller and overlapping problems of computing C(n−1, k−1) and C(n−1, k), lends itself to solving by the dynamic programming technique. To do this, we record the values of the binomial coefficients in a table of n+1 rows and k+1 columns, numbered from 0 to n and from 0 to k respectively.
To compute C(n, k), we fill the table in Figure 10.4 row by row, starting with row 0 and ending with row n. Each row i (0 ≤ i ≤ n) is filled left to right, starting with 1 because C(i, 0) = 1. Rows 0 through k also end with 1 on the table's main diagonal: C(i, i) = 1 for 0 ≤ i ≤ k. We compute each other entry from the recurrence for C(n, k), by adding the contents of the cell in the preceding row and the previous column and of the cell in the preceding row and the same column. This is precisely the computation behind Pascal's triangle.
Algorithm to find binomial coefficient (n, k)
//Computes C(n,k) by the dynamic programming algorithm
//Input: A pair of nonnegative integers n ≥ k ≥ 0
for i ← 0 to n do
for j ← 0 to min (i,k) do
if j = 0 or j = i
C[i,j] ← 1
else C[i,j] ← C[i-1 , j-1 ] + C[i-1 , j]
return C[n , k]


Let us now trace the algorithm of the binomial coefficient.


Tracing of the algorithm of binomial coefficient (4,3)
n=4,k=3
for i = 0 to 4 do //this loop will run from i = 0 to i = 4
for j = 0 to min (0,3) do //this loop will run from j = 0 to j = 0
if j = 0 or j = 0
C[0,0] = 1
else C[0,0] = C[0-1 , 0-1 ] + C[0-1 , 0] //will not perform this step
as the if condition is satisfied
return C[4 , 3]
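A runnable Python version of this algorithm might look like the sketch below (our transcription); it fills the same (n+1) × (k+1) table row by row.

# Dynamic programming computation of C(n, k), an illustrative transcription of
# the pseudocode above: each inner entry is the sum of the two entries above it.

def binomial(n: int, k: int) -> int:
    C = [[0] * (k + 1) for _ in range(n + 1)]    # (n+1) x (k+1) table
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                      # C(i, 0) = C(i, i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(4, 3))    # prints 4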

We can clearly see that the algorithm's basic operation is addition, so let A(n, k) be the total number of additions made by this algorithm in computing C(n, k). Note that computing each entry by the formula for C(n, k) requires just one addition. Also note that because the first k+1 rows of the table form a triangle while the remaining n−k rows form a rectangle, we have to split the sum expressing A(n, k) into two parts, as shown by equation Eq: 10.12:

A(n, k) = Σ_{i=1..k} Σ_{j=1..i−1} 1 + Σ_{i=k+1..n} Σ_{j=1..k} 1 = Σ_{i=1..k} (i − 1) + Σ_{i=k+1..n} k
        = (k − 1)k/2 + k(n − k) ∈ Θ(nk)    Eq: 10.12

        0   1   2   …   k-1           k
  0     1
  1     1   1
  2     1   2   1
  …
  k     1                             1
  …
  n-1   1               C(n-1, k-1)   C(n-1, k)
  n     1                             C(n, k)
Figure 10.4: Table for Computing Binomial Coefficient C(n,k)

Figure 10.4 shows the table used for computing the binomial coefficient C(n, k).

Self Assessment Questions


7. What formula can we use to find the value of the binomial coefficient?


8. The gamma function z! = Γ(z + 1) allows the binomial coefficient to be generalized to non-integer arguments like _________________.
9. Binomial coefficients are a study of _____________.

10.5 Warshall’s and Floyd’s Algorithm


In the previous section we discussed the dynamic programming approach to calculating binomial coefficients. In this section, to solve the ‘all-pairs shortest-paths problem’ on a directed graph G = (V, E), we shall use Warshall's and Floyd's algorithms. This approach runs in Θ(|V|^3) time. Negative-weight edges may be present here, but we assume that there are no negative-weight cycles. We shall follow the dynamic programming process here to develop the algorithms.
Transitive closure
The transitive closure of a directed graph (digraph) with n vertices is an n × n matrix T such that T[i, j] is true if and only if j is reachable from i by some path. Our aim is to find the transitive closure of the directed graph shown in Figure 10.5, where 10.5 (a) is the digraph, 10.5 (b) the adjacency matrix, and 10.5 (c) the transitive closure. The adjacency matrix is the restriction of matrix T obtained if we only allow paths of length 1.

[Figure 10.5 (a): digraph with vertices a, b, c, d and edges a → b, b → d, d → a, d → c]

         a b c d               a b c d
     a   0 1 0 0           a   1 1 1 1
 A = b   0 0 0 1       T = b   1 1 1 1
     c   0 0 0 0           c   0 0 0 0
     d   1 0 1 0           d   1 1 1 1

        (b)                   (c)
Figure 10.5: (a) Digraph (b) Its Adjacency Matrix (c) Its Transitive Closure

We can generate the transitive closure of the digraph with the help of depth-first search or breadth-first search. Performing either traversal starting at the ith vertex gives the information about the vertices reachable from the ith vertex, and hence the columns that contain ones in the ith row of the transitive closure. Thus, by doing such a traversal for every vertex as a starting point, we obtain the transitive closure in its entirety.
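A sketch of this traversal-based approach (our illustration, using the digraph of Figure 10.5 as sample data) is given below: a depth-first search is run from every vertex, and each vertex reached sets a 1 in the corresponding row of T.

# Traversal-based transitive closure (illustrative sketch): run a DFS from
# every vertex; row i of T gets a 1 for every vertex reachable from vertex i.

def transitive_closure(adj):
    n = len(adj)
    T = [[0] * n for _ in range(n)]

    def dfs(start, v):
        for w in range(n):
            if adj[v][w] and not T[start][w]:
                T[start][w] = 1          # w is reachable from start by some path
                dfs(start, w)

    for i in range(n):
        dfs(i, i)
    return T

# Adjacency matrix of the digraph in Figure 10.5 (vertices a, b, c, d)
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
for row in transitive_closure(A):
    print(row)   # [1,1,1,1], [1,1,1,1], [0,0,0,0], [1,1,1,1]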


10.5.1 Overview of Warshall’s and Floyd’s algorithm


The sub-problems here are connectivity/distance information where only a subset of the vertices is allowed as intermediate vertices in the paths. These algorithms are very useful for obtaining information about all pairs of vertices, even though they have complexity Θ(n^3).
There are other algorithms if we only want the information about one pair (e.g. DFS for reachability), and for sparse graphs there may be better algorithms. Moreover, you have to be careful to use ∞ to represent the absence of an edge in the weight matrix.
10.5.2 Warshall’s algorithm
Since the above mentioned method traverses the same digraph several
times, we use a better algorithm called the Warshall’s algorithm named after
S. Warshall.
R(0),…….,R(k-1), R(k),……..,R(n) Eq: 10.13
Warshall’s algorithm constructs the transitive closure of a given digraph with
n vertices through a series of n x n Boolean matrices. This series is shown
in equation Eq: 10.13
[Figure 10.6: Rule for Changing Zeroes in Warshall's Algorithm – the element R(k)[i, j] is changed from 0 to 1 whenever R(k-1)[i, k] and R(k-1)[k, j] are both equal to 1]
Each of these matrices provides certain information about directed paths in the digraph (refer to Figure 10.6). Specifically, the element r_ij^(k) in the ith row and jth column of matrix R(k) (k = 0, 1, …, n) is equal to 1 if and only if there exists a directed path (of a positive length) from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
Let us consider the example in Figure 10.7, where the series consists of R(k) for k = 0, 1, 2, 3, 4. The series starts with R(0), which does not allow any intermediate vertices in its paths; hence R(0) is nothing but the adjacency matrix of the digraph. R(1) contains information about paths that can use the first vertex as an intermediate vertex, so it may contain more ones than R(0). In general, each subsequent matrix in the series


has one more vertex to use as intermediate for its paths than its
predecessor and hence may, but does not necessarily have to, contain
more ones. R(4), being the final matrix in the series, reflects paths that can use all n vertices of the digraph as intermediate vertices and hence is the digraph's transitive closure.

[Figure 10.7 shows the application of Warshall's algorithm to the digraph of Figure 10.5 (a); only the final matrix of the series is reproduced here.]

           a b c d
       a   1 1 1 1
R(4) = b   1 1 1 1
       c   0 0 0 0
       d   1 1 1 1

Figure 10.7: Application of Warshall's Algorithm to the Digraph

The central part of the algorithm is that we can compute all the elements of each matrix R(k) from its immediate predecessor R(k-1) in the series; in the example, for instance, we can compute R(4) from R(3). Let r_ij^(k), the element in the ith row and the jth column of matrix R(k), be equal to 1. This means that there exists a path from the ith vertex vi to the jth vertex vj with each intermediate vertex numbered not higher than k:


vi, a list of intermediate vertices each numbered not higher than k, vj.
Two situations regarding this path are possible. In the first, the list of its intermediate vertices does not contain the kth vertex. Then this path from vi to vj has intermediate vertices numbered not higher than k-1, and therefore r_ij^(k-1) is equal to 1 as well. In the second, the list does contain the kth vertex vk; the portion of the path from vi to vk and the portion from vk to vj then each have intermediate vertices numbered not higher than k-1, so r_ik^(k-1) and r_kj^(k-1) are both equal to 1. Together the two cases give the rule used in the algorithm below: R(k)[i, j] ← R(k-1)[i, j] or (R(k-1)[i, k] and R(k-1)[k, j]).

Warshall’s Algorithm (A[1…n,1…n])


//Implements Warshall’s algorithm for computing the transitive closure
//Input: The adjacency matrix A of a digraph with n vertices
//Output: The transitive closure of the digraph
R(0) ← A
for k ← 1 to n do
for i ← 1 to n do
for j ← 1 to n do
R(k) [i, j] ← R(k-1) [i, j] or (R(k-1) [i, k] and R(k-1) [k, j])
return R(n)
Let us now trace Warshall’s algorithm.
Tracing of Warshall’s algorithm for A[3,3]
R(0) = A // A is an adjacency matrix of 3x3 which is assigned to R(0)
for k = 1 to 3 do //this loop will run from k = 1 to k = 3
for i = 1 to 3 do //this loop will run from i = 1 to i = 3
for j = 1 to 3 do //this loop will run from j = 1 to j = 3
R(1) [1, 1] ← R(1-1) [1, 1] or (R(1-1) [1, 1] and R(1-1) [1, 1]) //first iteration: k = i = j = 1
return R(3)
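A runnable Python rendering of Warshall's algorithm might look like the sketch below (our transcription), applied here to the adjacency matrix of Figure 10.5; a fresh matrix is built for each k to mirror the series R(0), …, R(n), although updating a single matrix in place would also work.

# Warshall's algorithm (illustrative transcription of the pseudocode above):
# computes the transitive closure through the series R(0), ..., R(n).

def warshall(adjacency):
    n = len(adjacency)
    R = [row[:] for row in adjacency]            # R(0) is the adjacency matrix
    for k in range(n):                           # allow vertex k as an intermediate
        R = [[R[i][j] or (R[i][k] and R[k][j]) for j in range(n)]
             for i in range(n)]                  # R now equals R(k+1)
    return R

A = [[0, 1, 0, 0],     # adjacency matrix of the digraph in Figure 10.5
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
for row in warshall(A):
    print(row)         # [1,1,1,1], [1,1,1,1], [0,0,0,0], [1,1,1,1]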
Several observations need to be made about Warshall's algorithm. First, it is very concise. Still, its time efficiency is only Θ(n^3). In fact, for sparse graphs represented by their adjacency lists, the traversal-based algorithm has a better asymptotic efficiency than Warshall's algorithm.
We can speed up the implementation of Warshall's algorithm for some inputs by restructuring its innermost loop. It can also be made to work faster by treating matrix rows as bit strings and applying the bitwise or operation.
As to the space efficiency of Warshall's algorithm, the situation is similar to that of the two earlier examples of computing Fibonacci numbers and


binomial coefficients. Although we used separate matrices for recording intermediate results of the algorithm, it is in fact unnecessary.
10.5.3 Floyd’s algorithm
We can generate the distance matrix with an algorithm that is very similar to
Warshall’s algorithm. It is called Floyd’s algorithm, after its inventor R. Floyd.
It is applicable to both undirected and directed weighted graphs provided
that they do not contain a cycle of negative length.
[Figure 10.8 (a): weighted digraph with vertices a, b, c, d and edges b→a (weight 2), a→c (weight 3), c→b (weight 7), c→d (weight 1), d→a (weight 6)]

         a  b  c  d                 a  b  c  d
     a   0  ∞  3  ∞             a   0 10  3  4
 W = b   2  0  ∞  ∞         D = b   2  0  5  6
     c   ∞  7  0  1             c   7  7  0  1
     d   6  ∞  ∞  0             d   6 16  9  0

        (b)                        (c)
Figure 10.8: (a) Digraph (b) Its Weight Matrix (c) Its Distance Matrix

The all-pairs shortest-paths problem asks us to find the distances from each vertex to all other vertices in a weighted connected graph. To record the lengths of the shortest paths conveniently, we use an n × n matrix D called the distance matrix. The element d_ij in the ith row and the jth column of this matrix indicates the length of the shortest path from the ith vertex to the jth vertex (1 ≤ i, j ≤ n). An example of this is shown in Figure 10.8, where 10.8 (a) is the digraph, 10.8 (b) the weight matrix, and 10.8 (c) the distance matrix.
Floyd's algorithm computes the distance matrix of a weighted graph with a series of n × n matrices as given in equation Eq: 10.14:

D(0), …, D(k-1), D(k), …, D(n)    Eq: 10.14
Each of these matrices contains the length of the shortest paths with certain
constraints on the paths considered for the matrix in question. Specifically,

the element d_ij^(k) in the ith row and the jth column of matrix D(k) (k = 0, 1, …, n) is equal to the length of the shortest path among all paths from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k. In particular, the series starts with D(0), which does not allow any intermediate vertices in its paths; hence D(0) is nothing but the weight matrix of the graph, as in the example in Figure 10.9. The last matrix in the series, D(n), contains the lengths of the shortest paths among all the paths that can use all n vertices as intermediate vertices; this is nothing but the distance matrix being sought.
As in Warshall's algorithm, we can compute all the elements of each matrix D(k) from its immediate predecessor D(k-1) in the series. Let d_ij^(k) be the element in the ith row and the jth column of matrix D(k). Hence d_ij^(k) is the length of the shortest path among all the paths from the ith vertex vi to the jth vertex vj with their intermediate vertices numbered not higher than k:
vi, a list of intermediate vertices each numbered not higher than k, vj.

Figure 10.9: Application of Floyd’s Algorithm


We can partition all such paths into two disjoint subsets: those that do not use the kth vertex vk as an intermediate vertex and those that do. Since the paths of the first subset have their intermediate vertices numbered not higher than k-1, the shortest of them is, by the definition of our matrices, of length d_ij^(k-1).
If the graph does not contain a cycle of negative length, we can limit our attention in the second subset to the paths that use vertex vk as an intermediate vertex exactly once. All such paths have the following form:
vi, vertices numbered ≤ k-1, vk, vertices numbered ≤ k-1, vj.
In other words, each of the paths is made up of a path from vi to vk with each
intermediate vertex numbered not higher than k-1 and a path from vk to vj
with each intermediate vertex numbered not higher than k-1.
Since the length of the shortest path from vi to vk among the paths that use intermediate vertices numbered not higher than k-1 is equal to d_ik^(k-1), and the length of the shortest path from vk to vj among such paths is equal to d_kj^(k-1), the length of the shortest path in the second subset is equal to d_ik^(k-1) + d_kj^(k-1). Taking into account the lengths of the shortest paths in both subsets leads to the recurrence shown by equation Eq: 10.15:

d_ij^(k) = min { d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1) } for k ≥ 1,  d_ij^(0) = w_ij    Eq: 10.15

Putting it another way, the element in the ith row and the jth column of the current distance matrix D(k-1) is replaced by the sum of the element in the same row i and the kth column and the element in the kth row and the same column j, if and only if that sum is smaller than its existing value.
Floyd’s algorithm: W[1…n,1…n]
//Implements Floyd’s algorithm for all-pairs shortest-path problem
//Input: The weight matrix W of a graph with no negative-length cycle
//Output: The distance matrix of the shortest paths’ lengths
D ← W //this copy is not necessary if W can be overwritten
for k ← 1 to n do
for i ← 1 to n do
for j ← 1 to n do
D[i, j] ← min{ D[i, j], D[i, k] + D[k, j] }
return D


Let us now trace Floyd's algorithm.


Tracing of Floyd’s algorithm

D = W //W is a weight matrix of 3x3 assigned to D, the distance matrix
for k =1 to 3 do //this loop will run from k = 1 to k = 3
for i = 1 to 3 do //this loop will run from i = 1 to i = 3
for j = 1 to 3 do //this loop will run from j = 1 to j = 3
D[1 ,1] = min{ D[1,1], D[1,1]+D[1,1]}
return D
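A runnable Python version of Floyd's algorithm might look like the sketch below (our transcription), applied here to the weight matrix of Figure 10.8; float('inf') plays the role of ∞ for the absence of an edge.

# Floyd's algorithm (illustrative transcription of the pseudocode above):
# computes all-pairs shortest path lengths from the weight matrix.

INF = float('inf')       # stands for the absence of an edge

def floyd(W):
    n = len(W)
    D = [row[:] for row in W]                # D starts as a copy of W
    for k in range(n):                       # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

W = [[0,   INF, 3,   INF],    # weight matrix of the digraph in Figure 10.8
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
for row in floyd(W):
    print(row)   # [0, 10, 3, 4], [2, 0, 5, 6], [7, 7, 0, 1], [6, 16, 9, 0]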

Activity 2
Write a pseudocode to find the weight matrix and the distance matrix for
a digraph.

Self Assessment Questions


10. Both Warshall's and Floyd's algorithms have a running time of ________.
11. Warshall's algorithm is used to solve _______________ problems.
12. Floyd's algorithm is used to solve ________________ problems.

10.6 Summary
In this unit we have learned about the dynamic programming technique, a widely acclaimed tool used in applied mathematics and in computer science, where it is regarded as a general algorithm design technique.
In dynamic programming we have learned the technique of solving problems with overlapping sub-problems by solving each sub-problem only once and storing the result in a table, as we have seen in the case of Fibonacci numbers. Similarly, we have also learnt to apply the concept to find the binomial coefficient and to find solutions to the transitive closure and shortest path problems using Warshall's and Floyd's algorithms respectively.
The remaining concepts, like the principle of optimality, optimal binary search, the Knapsack problem and memory functions, will be covered in the next unit, which will help you broaden your horizon of dynamic programming.


10.7 Glossary
Term – Description
Adjacency matrix – A matrix representation recording which vertices of a graph are adjacent to which other vertices.
Combinatorics – A branch of mathematics concerning the study of finite or countable discrete structures.
Digraph – Short for directed graph; a diagram composed of points called vertices (nodes) and arrows called arcs going from a vertex to a vertex.

10.8 Terminal Questions


1. Differentiate between dynamic programming and Divide and Conquer
techniques.
2. Write the algorithm to find the nth Fibonacci number.
3. Explain the dynamic programming approach to find binomial coefficients.
4. What is Warshall’s algorithm to find the transitive closure?
5. Explain Floyd's algorithm to find the shortest path.

10.9 Answers
Self Assessment Questions
1. Top-down, bottom-up
2. Memoization
3. Divide and conquer
4. F(n) = (φ^n − (1 − φ)^n) / √5
5. O(n)
6. Bottom-up
n n!
7. Ck    
n
 k  (n  k )!k!
 
8. Complex n and k
9. Combinatorics
10. Θ(n^3)
11. Transitive closure
12. Shortest path


Terminal Questions
1. Refer to 10.2.3 – Dynamic programming Vs divide and conquer
2. Refer to 10.3.2 – Algorithms to find nth Fibonacci number
3. Refer to 10.4.2 – Computation of binomial coefficients
4. Refer to 10.5.2 – Warshall’s algorithm
5. Refer to 10.5.3 – Floyd’s algorithm

References
 Anany V. Levitin (2002). Introduction to the Design and Analysis of Algorithms. Addison-Wesley Longman Publishing Co.
 Cormen, Thomas H., Charles E. Leiserson, Ronald L. Rivest, & Clifford Stein (2006). Introduction to Algorithms, 2nd Edition, PHI.
E-References
 http://mathworld.wolfram.com/BinomialCoefficient.html
 http://students.ceid.upatras.gr/%7Epapagel/project/kef5_6.htm
