DAA Module 4.1
Module-4
1. Introduction to Dynamic Programming
   1.1. General Method with Examples
   1.2. Multistage Graphs
2. Transitive Closure:
   2.1. Warshall’s Algorithm
3. All Pairs Shortest Paths:
   3.1. Floyd’s Algorithm
4. Travelling Salesman Problem
5. Bellman-Ford Algorithm
6. Space-Time Tradeoffs
   6.1. Comparison Counting Sort
   6.2. Distribution Counting Sort
7. Input Enhancement in String Matching: Horspool’s Algorithm
1. Introduction to Dynamic Programming
Dynamic programming is a technique for solving problems with overlapping subproblems.
Typically, these subproblems arise from a recurrence relating a given problem’s solution to
solutions of its smaller subproblems. Rather than solving overlapping subproblems again and
again, dynamic programming suggests solving each of the smaller subproblems only once
and recording the results in a table from which a solution to the original problem can then be
obtained. [From T1]
Dynamic programming can also be used when the solution to a problem can be viewed as the result of a sequence of decisions. [From T2]. Here are some examples.
Example 1
Example 2
Example 3
Example 4
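As a minimal code-level sketch of the table idea described above (a sketch in Python; the Fibonacci recurrence is an assumed illustration and not necessarily one of the examples listed):

# A minimal sketch of the dynamic programming idea: the nth Fibonacci number
# F(n) = F(n-1) + F(n-2) has overlapping subproblems, so each F(i) is computed
# once and stored in a table instead of being recomputed by plain recursion.
def fib(n):
    if n < 2:
        return n
    table = [0] * (n + 1)          # table[i] will hold F(i)
    table[1] = 1
    for i in range(2, n + 1):      # solve each smaller subproblem exactly once
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))  # 55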
1.2 Multistage Graphs
Figure: Five stage graph
Backward Approach
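Since the five-stage graph from the figure is not reproduced here, the following Python sketch only illustrates the backward approach in general terms: bcost(j), the cheapest cost of reaching vertex j from the source, is computed stage by stage using bcost(j) = min over edges (l, j) of { bcost(l) + c(l, j) }. The small three-stage graph in the code is an assumption for illustration, not the example from the figure.

import math

# Backward approach on a multistage graph: process the stages from the source
# towards the sink, computing the cheapest cost of reaching each vertex.
def multistage_backward(stages, cost):
    # stages: list of lists of vertices, stages[0] = [source], stages[-1] = [sink]
    # cost: dict mapping an edge (u, v) to its weight
    bcost = {stages[0][0]: 0}
    for stage in stages[1:]:
        for v in stage:
            bcost[v] = min(
                (bcost[u] + w for (u, x), w in cost.items() if x == v and u in bcost),
                default=math.inf,
            )
    return bcost[stages[-1][0]]

# Assumed three-stage example: source 1, stage {2, 3}, sink 4.
stages = [[1], [2, 3], [4]]
cost = {(1, 2): 2, (1, 3): 5, (2, 4): 3, (3, 4): 1}
print(multistage_backward(stages, cost))  # 5, via the path 1 -> 2 -> 4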
2. Transitive Closure using Warshall’s Algorithm
Definition: The transitive closure of a directed graph with n vertices can be defined as the n × n boolean matrix T = {tij}, in which the element in the ith row and the jth column is 1 if there exists a nontrivial path (i.e., a directed path of positive length) from the ith vertex to the jth vertex; otherwise, tij is 0.
Example: An example of a digraph, its adjacency matrix, and its transitive closure is given
below.
(a) Digraph. (b) Its adjacency matrix. (c) Its transitive closure.
We can generate the transitive closure of a digraph with the help of depth-first search or breadth-first search. Performing either traversal starting at the ith vertex gives the information about the vertices reachable from it and hence the columns that contain 1’s in the ith row of the transitive closure. Thus, doing such a traversal for every vertex as a starting point yields the transitive closure in its entirety.
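A Python sketch of this traversal-based method (an adjacency-list representation and a small four-vertex digraph are assumed for illustration):

# Transitive closure by running a depth-first search from every vertex:
# row i of the closure gets a 1 for every vertex reachable from vertex i.
def transitive_closure_dfs(adj):
    n = len(adj)                     # adj[i] is the list of neighbours of vertex i
    closure = [[0] * n for _ in range(n)]

    def dfs(start, v):
        for w in adj[v]:
            if not closure[start][w]:
                closure[start][w] = 1
                dfs(start, w)

    for i in range(n):
        dfs(i, i)                    # closure[i][i] becomes 1 only if i lies on a cycle
    return closure

# Assumed digraph with edges 0->1, 1->3, 3->0, 3->2
adj = [[1], [3], [], [0, 2]]
for row in transitive_closure_dfs(adj):
    print(row)   # [1,1,1,1], [1,1,1,1], [0,0,0,0], [1,1,1,1]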
Since this method traverses the same digraph several times, we can use a better algorithm called Warshall’s algorithm. Warshall’s algorithm constructs the transitive closure through a series of n × n boolean matrices:
R(0), . . . , R(k−1), R(k), . . . , R(n).
Each of these matrices provides certain information about directed paths in the digraph. Specifically, the element r_ij^(k) in the ith row and jth column of matrix R(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . , n) is equal to 1 if and only if there exists a directed path of a positive length from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
Thus, the series starts with R(0) , which does not allow any intermediate vertices in its paths;
hence, R(0) is nothing other than the adjacency matrix of the digraph. R(1) contains the
information about paths that can use the first vertex as intermediate. The last matrix in the
series, R(n) , reflects paths that can use all n vertices of the digraph as intermediate and hence
is nothing other than the digraph’s transitive closure.
This means that there exists a path from the ith vertex vi to the jth vertex vj with each intermediate vertex numbered not higher than k:
vi, a list of intermediate vertices each numbered not higher than k, vj. --- (*)
Two situations regarding this path are possible.
1. In the first, the list of its intermediate vertices does not contain the kth vertex. Then this path from vi to vj has all intermediate vertices numbered not higher than k − 1, i.e., r_ij^(k−1) = 1.
2. The second possibility is that path (*) does contain the kth vertex vk among its intermediate vertices. Then path (*) splits into a path from vi to vk and a path from vk to vj, each with intermediate vertices numbered not higher than k − 1, i.e., r_ik^(k−1) = 1 and r_kj^(k−1) = 1.
Thus, we have the following formula for generating the elements of matrix R(k) from the elements of matrix R(k−1):
r_ij^(k) = r_ij^(k−1) or (r_ik^(k−1) and r_kj^(k−1)).
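This recurrence translates directly into code; a Python sketch (operating on a 0/1 adjacency matrix and reusing the same assumed four-vertex digraph as the DFS sketch above; a single matrix is updated in place, which is the space saving noted in the analysis below):

# Warshall's algorithm: r[i][j] becomes 1 if there is a path from i to j whose
# intermediate vertices are all numbered not higher than k, for k = 0, 1, ..., n-1.
def warshall(adjacency):
    n = len(adjacency)
    r = [row[:] for row in adjacency]   # R(0) is the adjacency matrix
    for k in range(n):                  # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Assumed digraph with edges 0->1, 1->3, 3->0, 3->2
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
for row in warshall(A):
    print(row)   # [1,1,1,1], [1,1,1,1], [0,0,0,0], [1,1,1,1]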
As an example, the application of Warshall’s algorithm to the digraph is shown below. New
1’s are in bold.
Analysis
Its time efficiency is Θ(n³). We can make the algorithm run faster by treating matrix rows as bit strings and employing the bitwise or operation available in most modern computer languages.
Space efficiency: Although separate matrices for recording intermediate results of the algorithm are used above, this can be avoided; the computation can be carried out in a single matrix that is updated in place.
3. All Pairs Shortest Paths using Floyd’s Algorithm
The lengths of the shortest paths between all pairs of vertices of a weighted graph can be recorded in an n × n matrix D called the distance matrix: the element dij in the ith row and the jth column indicates the length of the shortest path from the ith vertex to the jth vertex.
(a) Digraph. (b) Its weight matrix. (c) Its distance matrix.
We can generate the distance matrix with an algorithm that is very similar to Warshall’s
algorithm. It is called Floyd’s algorithm.
Floyd’s algorithm computes the distance matrix of a weighted graph with n vertices through a series of n × n matrices:
D(0), . . . , D(k−1), D(k), . . . , D(n).
The element d_ij^(k) in the ith row and the jth column of matrix D(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . , n) is equal to the length of the shortest path among all paths from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
As in Warshall’s algorithm, we can compute all the elements of each matrix D(k) from its immediate predecessor D(k−1). If d_ij^(k) is finite, it means that there is a path of the form: vi, a list of intermediate vertices each numbered not higher than k, vj.
We can partition all such paths into two disjoint subsets: those that do not use the kth vertex vk
as intermediate and those that do.
i. Since the paths of the first subset have their intermediate vertices numbered not higher than k − 1, the shortest of them is, by definition of our matrices, of length d_ij^(k−1).
ii. In the second subset the paths are of the form
vi, vertices numbered ≤ k − 1, vk, vertices numbered ≤ k − 1, vj .
Taking into account the lengths of the shortest paths in both subsets leads to the following recurrence:
d_ij^(k) = min{ d_ij^(k−1), d_ik^(k−1) + d_kj^(k−1) } for k ≥ 1, with d_ij^(0) = w_ij.
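A Python sketch of Floyd’s algorithm implementing this recurrence (math.inf marks missing edges; the small weight matrix below is an assumption for illustration, not necessarily the one from the figure):

import math

# Floyd's algorithm: d[i][j] = min(d[i][j], d[i][k] + d[k][j]),
# allowing vertex k as an intermediate vertex in increasing order of k.
def floyd(weight):
    n = len(weight)
    d = [row[:] for row in weight]      # D(0) is the weight matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

INF = math.inf
# Assumed 4-vertex weighted digraph
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
for row in floyd(W):
    print(row)   # [0,10,3,4], [2,0,5,6], [7,7,0,1], [6,16,9,0]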