
CS 3510 - Design and Analysis of Algorithms

Maria-Florina Balcan
Lectures 27 and 28: March 18th and 28th, 2011

Dynamic-Programming algorithms for shortest path problems [1]


We are now going to look at a basic graph problem: finding shortest paths in a weighted graph, and we will look at several algorithms based on Dynamic Programming. For an edge (i, j) in our graph, let's use len(i, j) to denote its length. The basic shortest-path problem is as follows:

Definition 1 Given a weighted, directed graph G, a start node s, and a destination node t, the s-t shortest path problem is to output the shortest path from s to t. The single-source shortest path problem is to find shortest paths from s to every node in G. The (algorithmically equivalent) single-sink shortest path problem is to find shortest paths from every node in G to t.

We will allow negative-weight edges (we'll later see some problems where this comes up when using shortest-path algorithms as a subroutine) but will assume there are no negative-weight cycles (else the shortest path can wrap around such a cycle infinitely often and has length negative infinity). As a shorthand, if there is an edge of length L from i to j and also an edge of length L from j to i, we will often just draw them together as a single undirected edge. So, all such edges must have nonnegative weight (otherwise the two directed edges together would form a negative-weight cycle).

0.1 The Bellman-Ford Algorithm

We will now look at a Dynamic Programming algorithm called the Bellman-Ford Algorithm for the single-sink (or single-source) shortest path problem.[2] Let us develop the algorithm using the following example:
[1] These lecture notes are due to Avrim Blum.
[2] Bellman is credited for inventing Dynamic Programming, and even if the technique can be said to exist inside some algorithms before him, he was the first to distill it as an important technique.

How can we use Dynamic Programming to find the shortest path from all nodes to t? First of all, as usual for Dynamic Programming, let's just compute the lengths of the shortest paths first; afterwards we can easily reconstruct the paths themselves. The idea for the algorithm is as follows:

1. For each node v, find the length of the shortest path to t that uses at most 1 edge, or write down infinity if there is no such path. This is easy: if v = t we get 0; if (v, t) is in E then we get len(v, t); else just put down infinity.

2. Now, suppose for all v we have solved for the length of the shortest path to t that uses i - 1 or fewer edges. How can we use this to solve for the shortest path that uses i or fewer edges? Answer: the shortest path from v to t that uses i or fewer edges will first go to some neighbor x of v, and then take the shortest path from x to t that uses i - 1 or fewer edges, which we've already solved for! So, we just need to take the min over all neighbors x of v.

3. How far do we need to go? Answer: at most i = n - 1 edges (since there are no negative cycles, some shortest path never repeats a vertex, so it has at most n - 1 edges).

Specifically, here is pseudocode for the algorithm. We will use d[v][i] to denote the length of the shortest path from v to t that uses i or fewer edges (if it exists) and infinity otherwise (d for distance). Also, for convenience we will use a base case of i = 0 rather than i = 1.

Bellman-Ford pseudocode:

  initialize d[v][0] = infinity for v != t; d[t][i] = 0 for all i.
  for i = 1 to n-1:
    for each v != t:
      d[v][i] = min over edges (v,x) in E of ( len(v,x) + d[x][i-1] )
  for each v, output d[v][n-1].

Try it on the above graph! We already argued for correctness of the algorithm. What about running time? The min operation takes time proportional to the out-degree of v. So, the inner for-loop takes time proportional to the sum of the out-degrees of all the nodes, which is O(m). Therefore, the total time is O(mn).

So far we have only calculated the lengths of the shortest paths; how can we reconstruct the paths themselves? One easy way is (as usual for DP) to work backwards: if you're at vertex v at distance d[v] from t, move to the neighbor x such that d[v] = d[x] + len(v, x). This allows us to reconstruct the path in time O(m + n), which is just a low-order term in the overall running time.
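To make the DP concrete, here is a minimal Python sketch of the algorithm together with the backwards path reconstruction. It assumes the graph is given as a list of directed edges (u, v, length); the 4-node example graph and the function names are hypothetical illustrations, not taken from the lecture.

INF = float("inf")

def bellman_ford_to_sink(n, edges, t):
    """Single-sink shortest paths via the DP above.
    n: number of nodes, labeled 0..n-1
    edges: list of directed edges (u, v, length)
    t: the sink node
    Returns (dist, succ): dist[v] is the length of the shortest v->t path
    (infinity if none exists); succ[v] is the next node on such a path."""
    dist = [INF] * n
    dist[t] = 0
    succ = [None] * n
    # A single array dist[] plays the role of d[.][i]; overwriting it in place
    # is the usual space-saving variant and gives the same values after n-1 rounds.
    for _ in range(n - 1):              # i = 1, ..., n-1
        for (u, v, length) in edges:    # the "min over neighbors x of v", one edge at a time
            if dist[v] + length < dist[u]:
                dist[u] = dist[v] + length
                succ[u] = v
    return dist, succ

def reconstruct_path(succ, v, t):
    """Work backwards from v, always moving to the recorded next node."""
    path = [v]
    while v != t:
        v = succ[v]
        path.append(v)
    return path

# Hypothetical 4-node example graph (node 3 plays the role of t).
edges = [(0, 1, 2), (1, 3, 2), (0, 2, 5), (2, 3, -1), (1, 2, 1)]
dist, succ = bellman_ford_to_sink(4, edges, t=3)
print(dist)                          # [2, 0, -1, 0]
print(reconstruct_path(succ, 0, 3))  # [0, 1, 2, 3]

Iterating over the edge list is just the pseudocode's min over neighbors unrolled edge by edge, so each round still costs O(m) and the whole run O(mn).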

1 All-pairs Shortest Paths

Say we want to compute the length of the shortest path between every pair of vertices. This is called the all-pairs shortest path problem. If we use Bellman-Ford for all n possible destinations t, this would take time O(mn^2). We will now see two alternative Dynamic-Programming algorithms for this problem: the first uses the matrix representation of graphs and runs in time O(n^3 log n); the second, called the Floyd-Warshall algorithm, uses a different way of breaking into subproblems and runs in time O(n^3).

1.1 All-pairs Shortest Paths via Matrix Products

Given a weighted graph G, define the matrix A = A(G) as follows: A[i, i] = 0 for all i; if there is an edge from i to j, then A[i, j] = len(i, j); otherwise, A[i, j] = infinity. I.e., A[i, j] is the length of the shortest path from i to j using 1 or fewer edges.

Now, following the basic Dynamic Programming idea, can we use this to produce a new matrix B where B[i, j] is the length of the shortest path from i to j using 2 or fewer edges? Answer: yes. B[i, j] = min_k (A[i, k] + A[k, j]). Think about why this is true! I.e., what we want to do is compute the matrix product B = A * A, except we change "*" to "+" and we change "+" to "min" in the definition. In other words, instead of computing the sum of products, we compute the min of sums.

What if we now want to get the shortest paths that use 4 or fewer edges? To do this, we just need to compute C = B * B (using our new definition of matrix product). I.e., to get from i to j using 4 or fewer edges, we need to go from i to some intermediate node k using 2 or fewer edges, and then from k to j using 2 or fewer edges. So, to solve for all-pairs shortest paths we just need to keep squaring O(log n) times. Each matrix multiplication takes time O(n^3), so the overall running time is O(n^3 log n).
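As a sketch, using the same hypothetical edge-list representation as before and function names of my own choosing, the min-of-sums product and the repeated squaring look like this in Python:

INF = float("inf")

def min_plus_product(A, B):
    """Matrix 'product' where * is replaced by + and + is replaced by min."""
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            C[i][j] = min(A[i][k] + B[k][j] for k in range(n))
    return C

def all_pairs_by_squaring(n, edges):
    """All-pairs shortest path lengths in O(n^3 log n) by repeated squaring."""
    # A[i][j] = length of the shortest i->j path using 1 or fewer edges.
    A = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v, length) in edges:
        A[u][v] = min(A[u][v], length)   # keep the shortest parallel edge
    hops = 1
    while hops < n - 1:                  # square O(log n) times
        A = min_plus_product(A, A)       # doubles the number of allowed edges
        hops *= 2
    return A

# Same hypothetical 4-node graph as in the earlier sketch.
edges = [(0, 1, 2), (1, 3, 2), (0, 2, 5), (2, 3, -1), (1, 2, 1)]
D = all_pairs_by_squaring(4, edges)
print(D[0][3])   # 2, matching the single-sink answer above

Each call to min_plus_product is one O(n^3) "multiplication", and the while loop squares O(log n) times, giving O(n^3 log n) overall.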

1.2 All-pairs shortest paths via Floyd-Warshall

Here is an algorithm that shaves off the O(log n) factor and runs in time O(n^3). The idea is that instead of increasing the number of edges in the path, we'll increase the set of vertices we allow as intermediate nodes in the path. In other words, starting from the same base case (the shortest paths that use no intermediate nodes), we'll then go on to considering the shortest path that's allowed to use node 1 as an intermediate node, the shortest path that's allowed to use {1, 2} as intermediate nodes, and so on.

  // After each iteration of the outside loop, A[i][j] = length of the
  // shortest i->j path that's allowed to use vertices in the set 1..k
  for k = 1 to n do:
    for each i,j do:
      A[i][j] = min( A[i][j], A[i][k] + A[k][j] )

I.e., you either go through node k or you don't. The total time for this algorithm is O(n^3). What's amazing here is how compact and simple the code is!
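Here is the same idea as runnable Python, again on the hypothetical edge list used in the earlier sketches (a sketch of the technique, not code from the notes):

INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest path lengths in O(n^3) time.
    dist[i][j] starts as the direct-edge length (the base case with no
    intermediate nodes); after iteration k of the outer loop it equals the
    length of the shortest i->j path allowed to use intermediates in {0..k}."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v, length) in edges:
        dist[u][v] = min(dist[u][v], length)
    for k in range(n):                        # grow the allowed intermediate set
        for i in range(n):
            for j in range(n):
                # Either avoid node k, or go i -> k -> j.
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Same hypothetical 4-node graph as before.
edges = [(0, 1, 2), (1, 3, 2), (0, 2, 5), (2, 3, -1), (1, 2, 1)]
print(floyd_warshall(4, edges)[0][3])   # 2

As in the pseudocode above, the whole algorithm is the triple loop; everything else is just setting up the base-case matrix (the sketch numbers vertices 0..n-1 where the notes use 1..n).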
