Daa Module 4.1

Design and Analysis of Algorithms (21ISI42)

Module-4
1. Introduction to Dynamic Programming
   1.1. General method with Examples
   1.2. Multistage Graphs
2. Transitive Closure
   2.1. Warshall's Algorithm
3. All Pairs Shortest Paths
   3.1. Floyd's Algorithm
4. Travelling Salesman Problem
5. Bellman-Ford Algorithm
6. Space-Time Tradeoffs
   6.1. Comparison Counting Sort
   6.2. Distribution Counting Sort
7. Input Enhancement in String Matching: Horspool's Algorithm
1. Introduction to Dynamic Programming
Dynamic programming is a technique for solving problems with overlapping subproblems. Typically, these subproblems arise from a recurrence relating a given problem's solution to solutions of its smaller subproblems. Rather than solving overlapping subproblems again and again, dynamic programming suggests solving each of the smaller subproblems only once and recording the results in a table from which a solution to the original problem can then be obtained. [From T1]
Dynamic programming can also be used when the solution to a problem can be viewed as the result of a sequence of decisions. [From T2] Here are some examples.
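The overlapping-subproblems idea can be illustrated with a minimal sketch (an assumed illustration using Fibonacci numbers, separate from the worked examples in these notes): naive recursion recomputes the same subproblems exponentially many times, while a table makes each subproblem cost constant after its first computation.

```python
# Dynamic programming on Fibonacci numbers: each subproblem fib(k)
# is solved once and its result recorded in a table (memoization).
def fib(n, table={0: 0, 1: 1}):
    if n not in table:
        table[n] = fib(n - 1, table) + fib(n - 2, table)
    return table[n]

print(fib(30))  # 832040
```

Without the table, the same call tree would evaluate fib on the order of 2^n times; with it, each value from 0 to n is computed exactly once.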

Example 1

Example 2

Example 3

Example 4
1.2 Multistage Graphs
Figure: Five stage graph
Backward Approach
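The five-stage figure and its worked example are not reproduced here. As a hedged sketch of the backward approach (building bcost, the cheapest cost from the source, stage by stage), assuming vertices 1..n are numbered in stage order so every edge goes from a lower to a higher number:

```python
import math

def multistage_bcost(graph, n):
    # Backward approach: bcost[v] = cheapest cost of reaching v from the
    # source (vertex 1), computed stage by stage.
    # graph: dict u -> list of (v, weight); vertices 1..n in stage order.
    bcost = [math.inf] * (n + 1)
    pred = [0] * (n + 1)
    bcost[1] = 0
    for u in range(1, n):                 # settle vertices in stage order
        for v, w in graph.get(u, []):
            if bcost[u] + w < bcost[v]:
                bcost[v] = bcost[u] + w
                pred[v] = u
    # recover the source-to-sink path from the predecessor links
    path, v = [n], n
    while v != 1:
        v = pred[v]
        path.append(v)
    return bcost[n], path[::-1]

# small 3-stage example (assumed data, not the figure's five-stage graph)
g = {1: [(2, 2), (3, 1)], 2: [(4, 3)], 3: [(4, 5)]}
print(multistage_bcost(g, 4))  # (5, [1, 2, 4])
```

Because edges only cross from one stage to the next, a single sweep in stage order settles every vertex, giving O(V + E) time.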
2. Transitive Closure using Warshall's Algorithm
Definition: The transitive closure of a directed graph with n vertices can be defined as the n × n boolean matrix T = {tij}, in which the element in the ith row and the jth column is 1 if there exists a nontrivial path (i.e., a directed path of positive length) from the ith vertex to the jth vertex; otherwise, tij is 0.
Example: An example of a digraph, its adjacency matrix, and its transitive closure is given
below.

(a) Digraph. (b) Its adjacency matrix. (c) Its transitive closure.

We can generate the transitive closure of a digraph with the help of depth-first search or breadth-first search. Performing either traversal starting at the ith vertex gives the information about the vertices reachable from it and hence the columns that contain 1's in the ith row of the transitive closure. Thus, doing such a traversal for every vertex as a starting point yields the transitive closure in its entirety.
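The traversal-based method can be sketched as follows (a minimal depth-first version; the adjacency matrix shown is an assumed sample with edges a→b, b→d, d→a, d→c):

```python
def transitive_closure_dfs(adj):
    # adj: n x n adjacency matrix (0/1) of a digraph.
    # Run a DFS from every vertex i; row i of T marks every vertex
    # reachable from i by a path of positive length.
    n = len(adj)
    T = [[0] * n for _ in range(n)]

    def dfs(start, v):
        for w in range(n):
            if adj[v][w] and not T[start][w]:
                T[start][w] = 1
                dfs(start, w)

    for i in range(n):
        dfs(i, i)
    return T

# assumed sample digraph: vertices a, b, c, d = 0, 1, 2, 3
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
print(transitive_closure_dfs(A))
```

Each of the n traversals costs O(n^2) on an adjacency matrix, so the whole method is O(n^3); Warshall's algorithm below achieves the same bound with a much simpler triple loop.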
Since this method traverses the same digraph several times, we can use a better algorithm called Warshall's algorithm. Warshall's algorithm constructs the transitive closure through a series of n × n boolean matrices:

R(0), . . . , R(k−1), R(k), . . . , R(n)
Each of these matrices provides certain information about directed paths in the digraph. Specifically, the element rij(k) in the ith row and jth column of matrix R(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . , n) is equal to 1 if and only if there exists a directed path of a positive length from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
Thus, the series starts with R(0) , which does not allow any intermediate vertices in its paths;
hence, R(0) is nothing other than the adjacency matrix of the digraph. R(1) contains the
information about paths that can use the first vertex as intermediate. The last matrix in the
series, R(n) , reflects paths that can use all n vertices of the digraph as intermediate and hence
is nothing other than the digraph’s transitive closure.
This means that there exists a path from the ith vertex vi to the jth vertex vj with each intermediate vertex numbered not higher than k:
vi, a list of intermediate vertices each numbered not higher than k, vj. --- (*)
Two situations regarding this path are possible.
1. In the first, the list of its intermediate vertices does not contain the kth vertex. Then this path from vi to vj has intermediate vertices numbered not higher than k − 1, i.e., rij(k−1) = 1.
2. The second possibility is that path (*) does contain the kth vertex vk among the intermediate vertices. Then path (*) can be rewritten as
vi, vertices numbered ≤ k − 1, vk, vertices numbered ≤ k − 1, vj,
i.e., rik(k−1) = 1 and rkj(k−1) = 1.

Thus, we have the following formula for generating the elements of matrix R(k) from the elements of matrix R(k−1):

rij(k) = rij(k−1) or (rik(k−1) and rkj(k−1))

Warshall's algorithm works based on the above formula.

As an example, the application of Warshall’s algorithm to the digraph is shown below. New
1’s are in bold.
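The triple loop implied by the formula can be sketched directly (the sample matrix is the same assumed digraph with edges a→b, b→d, d→a, d→c):

```python
def warshall(adj):
    # Warshall's transitive closure. R starts as the adjacency matrix
    # (R(0)); after the k-th pass it equals R(k), i.e. it accounts for
    # paths whose intermediate vertices are numbered up to k.
    n = len(adj)
    R = [row[:] for row in adj]   # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# assumed sample digraph: vertices a, b, c, d = 0, 1, 2, 3
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
print(warshall(A))
```

Note that a single matrix updated in place suffices, since an element set to 1 in pass k never reverts to 0.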
Analysis
Time efficiency: Θ(n3). We can make the algorithm run faster by treating matrix rows as bit strings and employing the bitwise or operation available in most modern computer languages.
Space efficiency: Although the algorithm as described uses a separate matrix for each intermediate result, this extra space can be avoided by updating a single matrix in place.

3. All Pairs Shortest Paths using Floyd's Algorithm,


Problem definition: Given a weighted connected graph (undirected or directed), the all-pairs shortest paths problem asks to find the distances, i.e., the lengths of the shortest paths, from each vertex to all other vertices.
Applications: Solution to this problem finds applications in communications, transportation
networks, and operations research. Among recent applications of the all-pairs shortest-path
problem is pre-computing distances for motion planning in computer games.
We store the lengths of shortest paths in an n x n matrix D called the distance matrix: the
element dij in the ith row and the jth column of this matrix indicates the length of the shortest
path from the ith vertex to the jth vertex.

(a) Digraph. (b) Its weight matrix. (c) Its distance matrix
We can generate the distance matrix with an algorithm that is very similar to Warshall’s
algorithm. It is called Floyd’s algorithm.
Floyd's algorithm computes the distance matrix of a weighted graph with n vertices through a series of n × n matrices:

D(0), . . . , D(k−1), D(k), . . . , D(n)
The element dij(k) in the ith row and the jth column of matrix D(k) (i, j = 1, 2, . . . , n, k = 0, 1, . . . , n) is equal to the length of the shortest path among all paths from the ith vertex to the jth vertex with each intermediate vertex, if any, numbered not higher than k.
As in Warshall's algorithm, we can compute all the elements of each matrix D(k) from its immediate predecessor D(k−1).
If dij(k) is finite, it means that there is a path: vi, a list of intermediate vertices each numbered not higher than k, vj.
We can partition all such paths into two disjoint subsets: those that do not use the kth vertex vk
as intermediate and those that do.
i. Since the paths of the first subset have their intermediate vertices numbered not higher than k − 1, the shortest of them has, by the definition of our matrices, length dij(k−1).
ii. In the second subset the paths are of the form
vi, vertices numbered ≤ k − 1, vk, vertices numbered ≤ k − 1, vj,
and the shortest of them has length dik(k−1) + dkj(k−1).

The situation is depicted symbolically in the figure, which shows the underlying idea of Floyd's algorithm.

Taking into account the lengths of the shortest paths in both subsets leads to the following recurrence:

dij(k) = min{ dij(k−1), dik(k−1) + dkj(k−1) } for k ≥ 1, with dij(0) = wij
Analysis: Its time efficiency is Θ(n3), the same as Warshall's algorithm.


Application of Floyd’s algorithm to the digraph is shown below. Updated elements are shown
in bold.
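The recurrence translates into a triple loop over k, i, j; a minimal sketch follows (the sample weight matrix is an assumed four-vertex digraph, with math.inf marking absent edges):

```python
import math

def floyd(W):
    # Floyd's all-pairs shortest paths. D starts as the weight matrix
    # (D(0)); after the k-th pass it equals D(k), allowing intermediate
    # vertices numbered up to k.
    n = len(W)
    D = [row[:] for row in W]   # work on a copy
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

INF = math.inf
# assumed sample digraph: a->c (3), b->a (2), c->b (7), c->d (1), d->a (6)
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
print(floyd(W))
```

As with Warshall's algorithm, updating a single matrix in place is safe, and the running time is Θ(n3). Note this version assumes no negative cycles.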
4. Travelling Salesman Problem
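The worked example for this section is not reproduced here. As a hedged sketch of the standard dynamic-programming formulation (the Held-Karp recurrence, with an assumed four-city cost matrix): g(S, j) is the cheapest cost of a path that starts at city 0, visits every city in set S exactly once, and ends at j.

```python
import math
from itertools import combinations

def tsp_held_karp(dist):
    # Held-Karp DP over subsets: g[(S, j)] = min cost of starting at 0,
    # visiting exactly the cities in S, and ending at j (j in S).
    n = len(dist)
    g = {}
    for j in range(1, n):
        g[(frozenset([j]), j)] = dist[0][j]       # base case: S = {j}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                g[(S, j)] = min(g[(S - {j}, i)] + dist[i][j]
                                for i in S - {j})
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(g[(full, j)] + dist[j][0] for j in full)

# assumed symmetric 4-city cost matrix
D = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(tsp_held_karp(D))  # 80
```

This runs in O(2^n n^2) time, a large improvement over the O(n!) of brute force, though still exponential.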
5. Bellman-Ford Algorithm (Single source shortest path with –ve weights)
Problem definition
Single source shortest path - Given a graph and a source vertex s in graph, find shortest paths
from s to all vertices in the given graph. The graph may contain negative weight edges.
Note that we have already discussed Dijkstra's algorithm for the single source shortest path problem. Dijkstra's algorithm is a greedy algorithm with time complexity O(E log V) when implemented with a binary-heap priority queue, but it does not work for graphs with negative weight edges.
Bellman-Ford works for such graphs. Bellman-Ford is also simpler than Dijkstra and suits distributed systems well. However, the time complexity of Bellman-Ford is O(VE), which is higher than Dijkstra's.
How does it work?
Like other dynamic programming problems, the algorithm calculates shortest paths in a bottom-up manner. It first calculates the shortest distances for paths with at most one edge, then for paths with at most 2 edges, and so on. After the ith iteration of the outer loop, the shortest paths with at most i edges are calculated. Since any simple path has at most |V| − 1 edges, the outer loop runs |V| − 1 times. The idea is: assuming there is no negative weight cycle, if we have calculated shortest paths with at most i edges, then one more iteration over all edges is guaranteed to give the shortest paths with at most (i + 1) edges.
Bellman-Ford algorithm to compute shortest path
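The scheme above can be sketched as follows (edge-list representation and the sample graph, which includes a negative-weight edge, are assumptions):

```python
import math

def bellman_ford(n, edges, src):
    # n vertices numbered 0..n-1; edges is a list of (u, v, w) directed
    # edges. Returns shortest distances from src, or raises if a
    # negative-weight cycle is reachable.
    dist = [math.inf] * n
    dist[src] = 0
    for _ in range(n - 1):            # |V| - 1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one extra pass: any further improvement implies a negative cycle
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

# assumed sample graph with one negative edge (2 -> 1, weight -3)
E = [(0, 1, 4), (0, 2, 5), (1, 3, 7), (2, 1, -3), (3, 2, 2)]
print(bellman_ford(4, E, 0))  # [0, 2, 5, 9]
```

Each of the |V| − 1 passes scans all |E| edges, giving the O(VE) bound stated above; the final pass doubles as a negative-cycle detector.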
