Design and Analysis of Algorithms
Dynamic Programming Algorithms:
Graph-Related Problems
Negative Weights in Graphs: Significance
• Daily Life: Think of a hypothetical cab driver who receives a random
payment for driving a passenger from a source to a destination, but pays
another random amount out of his own pocket for fuel, tolls, etc. along
the path
• Chemistry: The weights can represent the heat produced during a
chemical reaction, with nodes representing the compounds
• Gaming World: Suppose a number of gamers play games for
money; the loser pays money to the winner
• Business Transactions: Profit or loss from selling a product after
incurring the cost of transporting it to the client
Optimal Substructure Property: Negative Weights
Definition:
• δ(u,v) = weight of the shortest path(s) from u to v
Well-Definedness:
• Negative-weight cycle in the graph: some shortest paths may not be defined
• Argument: One can always get a shorter path by going around the cycle again
[Figure: a path from s to v that passes through a cycle of total weight < 0]
Dealing With Negative-weight Edges
• No problem, as long as no negative-weight cycles are reachable from the source
• Otherwise, we can keep going around the cycle and get δ(s, v) = −∞ for all v on the
cycle
Graph Algorithms: Single Source Shortest Path
With Negative Edge Weights
Single-Source All-Destinations Shortest Paths
With Possible Negative Weights
• Premises:
• Directed weighted graph
• Edges may have negative weight/cost
• No cycle whose cost is less than zero
• Objective: To find a shortest path from a given source vertex s to each of the
vertices of a directed graph that follows the above premises
• Solution: To apply the Bellman-Ford Algorithm
• The algorithm was first proposed by Alfonso Shimbel (1955), but is instead
named after Richard Bellman and Lester Ford Jr., who published it in 1958
and 1956, respectively
• It is also sometimes called the Bellman–Ford–Moore algorithm as Edward F.
Moore also published a variation of the algorithm in 1959
• It is slower than Dijkstra's algorithm for the same problem, but more versatile
Bellman-Ford Algorithm: A Few Key Points
• Bellman-Ford Algorithm assumes:
• Directed graph
• Possible presence of negative edge weights
• No cycle having negative weight
• Every undirected graph can be viewed as a directed graph, too (each
undirected edge is replaced by two opposite directed edges)
• However, if the undirected graph contains a negative-weight edge,
this implies a negative-weight cycle in the resulting digraph!
Bellman-Ford Algorithm: Notations
For each vertex v ∈ V:
• δ(s, v): shortest-path weight
• d[v]: shortest-path weight estimate
  • Initially, d[v] = ∞
  • d[v] ≥ δ(s, v) as the algorithm progresses
• π[v] = predecessor of v on a shortest path from s
  • If no predecessor, π[v] = NIL
• π induces a tree: the shortest-path tree
[Figure: an example graph with source s (d[s] = 0) and vertices t, x, y, z, annotated with d values and shortest-path-tree edges]
Bellman-Ford Algorithm
• Idea:
  • Each edge is relaxed |V| - 1 times by making |V| - 1 passes over the
    whole edge set
  • To make sure that each edge is relaxed exactly |V| - 1 times, the
    algorithm puts the edges in an unordered list and goes over the list |V| - 1 times
• Allows negative edge weights and can detect negative-weight cycles
  • Returns TRUE if no negative-weight cycles are reachable from the
    source s
  • Returns FALSE otherwise; in that case no solution exists
  • This extra test is performed after the |V| - 1 passes
Detecting Negative Cycles
for each edge (u, v) ∈ E
    do if d[v] > d[u] + w(u, v)
        then return FALSE
return TRUE

Example: vertices s, b, c with edges (s, b) of weight 2, (b, c) of weight -8,
and (c, s) of weight 3; the cycle s → b → c → s has weight -3 < 0.
Relaxing the edges in the order (s, b), (b, c), (c, s):
• After the 1st pass: d[s] = -3, d[b] = 2, d[c] = -6
• After the 2nd pass: d[s] = -6, d[b] = -1, d[c] = -9
Now look at edge (s, b): d[s] + w(s, b) = -4, while d[b] = -1,
so d[b] > d[s] + w(s, b) and the test returns FALSE.
Bellman-Ford Algorithm
Bellman-Ford(G, w, s)
1. Initialize-Single-Source(G, s)
2. for i := 1 to |V| - 1 do
3.     for each edge (u, v) ∈ E do
4.         Relax(u, v, w)
5. for each edge (u, v) ∈ E do
6.     if d[v] > d[u] + w(u, v)
7.         then return False   // there is a negative cycle
8. return True

Initialize-Single-Source(G, s)
    for each v ∈ V
        do d[v] ← ∞
           π[v] ← NIL
    d[s] ← 0

Relax(u, v, w)
    if d[v] > d[u] + w(u, v)
        then d[v] := d[u] + w(u, v)
             π[v] := u
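For readers who prefer runnable code, here is a minimal Python sketch of the procedure above; the function name bellman_ford, the edge-list representation, and the small example graph are illustrative assumptions rather than part of the original slides.

import math

def bellman_ford(num_vertices, edges, source):
    """edges is a list of (u, v, w) triples; vertices are 0 .. num_vertices-1.
    Returns (True, d, parent) if no negative cycle is reachable from source,
    otherwise (False, d, parent)."""
    # Initialize-Single-Source
    d = [math.inf] * num_vertices
    parent = [None] * num_vertices
    d[source] = 0

    # |V| - 1 passes over the whole edge set
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:          # Relax(u, v, w)
                d[v] = d[u] + w
                parent[v] = u

    # Extra pass: any further improvement means a reachable negative cycle
    for u, v, w in edges:
        if d[u] + w < d[v]:
            return False, d, parent
    return True, d, parent

# Hypothetical usage on a 3-vertex graph containing a negative edge
ok, dist, parent = bellman_ford(3, [(0, 1, 4), (0, 2, 5), (2, 1, -3)], 0)
print(ok, dist)   # True [0, 2, 5]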
Bellman-Ford Algorithm: Example 1
[Figure: a directed graph with source z and vertices u, v, x, y; the edge weights are 6, 7, 5, 8, -4, -2, -3, 9, 2 and 7. The slides show the shortest-path estimates after each pass: initially d[z] = 0 and all other estimates are ∞; after the 1st pass d[u] = 6 and d[x] = 7; after the 2nd pass d[v] = 4 and d[y] = 2; after the 3rd pass d[u] improves to 2; after the 4th pass d[y] improves to -2, giving the final values d[z] = 0, d[u] = 2, d[v] = 4, d[x] = 7, d[y] = -2.]
Bellman-Ford Algorithm: Another Look
Note: This is essentially a Dynamic Programming algorithm!!!
Let d(i, j) = cost of the shortest path from s to i using at most j edges

d(i, j) = 0,   if i = s and j = 0
        = ∞,   if i ≠ s and j = 0
        = min( {d(k, j-1) + w(k, i) : i ∈ Adj(k)} ∪ {d(i, j-1)} ),   if j > 0
            i
  j         z    u    v    x    y
  0         0    ∞    ∞    ∞    ∞
  1         0    6    ∞    7    ∞
  2         0    6    4    7    2
  3         0    2    4    7    2
  4         0    2    4    7   -2
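To connect the recurrence with runnable code, here is a minimal Python sketch that fills the same d(i, j) table bottom-up; the function name shortest_paths_dp, the dictionary-based table, and the small three-vertex example are illustrative assumptions, not taken from the slides.

import math

def shortest_paths_dp(vertices, edges, s):
    """Compute d(i, j) = weight of a shortest path from s to i using at most
    j edges, for j = 0 .. |V| - 1, directly from the recurrence above.
    vertices is a list of labels, edges a list of (u, v, w) triples."""
    n = len(vertices)
    d = {(i, 0): (0 if i == s else math.inf) for i in vertices}
    for j in range(1, n):
        for i in vertices:
            candidates = [d[(i, j - 1)]]
            candidates += [d[(k, j - 1)] + w for (k, v, w) in edges if v == i]
            d[(i, j)] = min(candidates)
    return {i: d[(i, n - 1)] for i in vertices}

# Hypothetical 3-vertex example (labels and weights are illustrative)
print(shortest_paths_dp(['s', 'a', 'b'],
                        [('s', 'a', 4), ('s', 'b', 5), ('b', 'a', -3)], 's'))
# {'s': 0, 'a': 2, 'b': 5}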
Bellman-Ford Algorithm: Example 2
Bellman-Ford Algorithm: Time Complexity
Bellman-Ford(G, w, s)
1. Initialize-Single-Source(G, s)              -- O(|V|)
2. for i := 1 to |V| - 1 do
3.     for each edge (u, v) ∈ E do             -- O(|V||E|) for lines 2-4
4.         Relax(u, v, w)
5. for each edge (u, v) ∈ E do                 -- O(|E|)
6.     if d[v] > d[u] + w(u, v)
7.         then return False   // there is a negative cycle
8. return True
Time complexity: O(|V||E|)
Correctness of Bellman-Ford Algorithm
• Theorem: d[v] = δ(s, v) for every v after |V| - 1 passes.
Case 1: G does not contain negative cycles that are reachable from s
• Assume that the shortest path from s to v is
  p = v0, v1, . . . , vk, where s = v0 and v = vk, k ≤ |V| - 1
• Use mathematical induction on the number of passes i to show that:
  d[vi] = δ(s, vi), i = 0, 1, …, k
Correctness of Bellman-Ford Algorithm ...
Base Case: i = 0: d[v0] = δ(s, v0) = δ(s, s) = 0
Inductive Hypothesis: d[vi-1] = δ(s, vi-1)
Inductive Step: show d[vi] = δ(s, vi)
[Figure: the path from s to vi-1 followed by the edge (vi-1, vi) of weight w]
After relaxing (vi-1, vi) in the i-th pass, using d[vi-1] = δ(s, vi-1):
  d[vi] ≤ d[vi-1] + w = δ(s, vi-1) + w = δ(s, vi)
From the upper-bound property: d[vi] ≥ δ(s, vi)
Therefore, d[vi] = δ(s, vi)
Correctness of Bellman-Ford Algorithm ...
• Case 2: G contains a negative cycle that is reachable from s
• Proof by contradiction: suppose the algorithm returns TRUE, i.e., reports a
  solution. Then d[v] ≤ d[u] + w(u, v) holds for every edge (u, v) on the cycle;
  summing these inequalities around the cycle shows that the total weight of the
  cycle is ≥ 0, contradicting the assumption that the cycle has negative weight.
  Contradiction!
[Figure: a reachable cycle of total weight < 0 with the d values of its vertices]
Graph Algorithms: All-Pairs Shortest Path
All-Pairs Shortest-Paths Problem
• Problem: Given a directed graph G = (V, E) and a weight function w:
E → R, for each pair of vertices u, v, compute the shortest-path weight
δ(u, v), and a shortest path if one exists.
• Output:
  • A |V| × |V| matrix D = (dij), where dij contains the shortest-path weight
    from vertex i to vertex j
  • A |V| × |V| matrix Π = (πij), where πij is NIL if either i = j or there is no
    path from i to j; otherwise πij is the predecessor of j on some
    shortest path from i
Structure of a Shortest Path
Suppose W = (wij) is the adjacency matrix such that
  wij = 0,                          if i = j
      = the weight of edge (i, j),  if i ≠ j and (i, j) ∈ E
      = ∞,                          if i ≠ j and (i, j) ∉ E
Consider a shortest path P from i to j, and suppose that P has at most
m edges. Then,
• If i = j, P has weight 0 and no edges.
• If i ≠ j, we can decompose P into a subpath P' from i to k followed by the edge (k, j), where
  P' is a shortest path from i to k and contains at most m - 1 edges.
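For concreteness, here is a minimal Python sketch of this input representation; the function name build_weight_matrix, the 0-indexed vertices, and the tiny example are illustrative assumptions.

import math

def build_weight_matrix(n, edges):
    """Build the matrix W = (wij) from an edge list, following the definition above:
    wij = 0 if i = j, the edge weight if (i, j) is an edge, and infinity otherwise.
    Vertices are 0-indexed; edges is a list of (i, j, weight) triples."""
    W = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, w in edges:
        W[i][j] = w
    return W

# Hypothetical 3-vertex example
print(build_weight_matrix(3, [(0, 1, 4), (1, 2, -2), (2, 0, 1)]))
# [[0, 4, inf], [inf, 0, -2], [1, inf, 0]]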
Matrix Multiplication: Min-Plus Product
Let lij(m) be the minimum weight of any path from i to j that contains at
most m edges
• For m = 0:
  • lij(0) = 0 if i = j, and ∞ otherwise
• For m ≥ 1:
  • lij(m) = min{ lik(m-1) + wkj : 1 ≤ k ≤ n }
• The desired solution: lij(n-1)
Matrix Multiplication: A Recursive Solution
• Solve the problem stage by stage (dynamic programming)
• L(1) = W
• L(2)
• …
• L(n-1)
• where L(m) contains the shortest-path weights for paths containing at most m edges.
Matrix multiplication: Pseudo-code
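The pseudocode slide is not reproduced in these notes. As a stand-in, here is a minimal Python sketch of one min-plus product L(m) = L(m-1) · W and of the straightforward O(n^4) procedure built on it; the function names extend_shortest_paths and slow_all_pairs_shortest_paths and the small example are illustrative assumptions.

import math

def extend_shortest_paths(L, W):
    """One min-plus product: new_L[i][j] = min over k of (L[i][k] + W[k][j])."""
    n = len(W)
    new_L = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                new_L[i][j] = min(new_L[i][j], L[i][k] + W[k][j])
    return new_L

def slow_all_pairs_shortest_paths(W):
    """Compute L(n-1) by n-2 successive min-plus products: O(n^4) overall."""
    n = len(W)
    L = W
    for _ in range(2, n):          # builds L(2), L(3), ..., L(n-1)
        L = extend_shortest_paths(L, W)
    return L

# Hypothetical 3-vertex example: edges 0->1 (4), 1->2 (-2), 2->0 (1)
INF = math.inf
W = [[0, 4, INF],
     [INF, 0, -2],
     [1, INF, 0]]
print(slow_all_pairs_shortest_paths(W))   # [[0, 4, 2], [-1, 0, -2], [1, 5, 0]]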
Matrix multiplication: Running Time
• O(n^4)
Improving the running time:
• There is no need to compute all the L(m) matrices for 1 ≤ m ≤ n-1
• We are interested only in L(n-1), which is equal to L(m) for all integers
  m ≥ n-1, assuming that there are no negative-weight cycles.
Improving the Running Time
Compute the sequence
  L(1) = W,
  L(2) = W^2 = W · W,
  L(4) = W^4 = W^2 · W^2,
  L(8) = W^8 = W^4 · W^4,
  ...
  (each product is the min-plus product)
We need only ⌈lg(n-1)⌉ matrix products
• Time complexity: O(n^3 lg n)
Improving the Running Time ...
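A minimal Python sketch of the repeated-squaring idea, reusing the extend_shortest_paths helper from the previous sketch; the function name and loop structure are illustrative assumptions.

def faster_all_pairs_shortest_paths(W):
    """Repeated squaring: compute L(1), L(2), L(4), ... until L(m) with m >= n-1.
    Uses extend_shortest_paths() from the previous sketch; O(n^3 lg n) overall."""
    n = len(W)
    L = W
    m = 1
    while m < n - 1:
        L = extend_shortest_paths(L, L)   # L(2m) = min-plus "square" of L(m)
        m *= 2
    return L

# Applied to the same W as before, it returns the same distance matrix.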
All-Pairs Shortest Path Problem: Floyd-Warshall's Algorithm
• The Problem: To find the shortest paths between every pair of vertices of the
graph; the output of the algorithm is an n × n matrix D = (dij), where entry dij contains
the weight of a shortest path from vertex i to vertex j
• Restriction: The graph may contain negative edges, but no negative cycles
• Input Representation: We use an adjacency-matrix representation for the
weights; an n × n matrix W representing the edge weights, defined exactly as in
the structure-of-a-shortest-path slide above
• Note: We noticed earlier that the principle of optimality applies to shortest-path
problems: if vertices i and j are distinct and we decompose a shortest path p
from i to j into a shortest path p' from i to k followed by the edge (k, j), then δ(i, j) = δ(i, k) + wkj
The Graph and The Weight Matrix
[Figure: a directed graph on vertices v1, ..., v5 whose edge weights are given by the matrix W below]

          1    2    3    4    5
   1      0    1    ∞    1    5
   2      9    0    3    2    ∞
   3      ∞    ∞    0    4    ∞
   4      ∞    ∞    2    0    3
   5      3    ∞    ∞    ∞    0
All-Pairs Shortest Paths: Dynamic-Programming Algorithm
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up fashion
4. Construct an optimal solution from computed information
The Subproblems
• How can we define the shortest distance dij in terms of "smaller"
problems?
• One way is to restrict the paths to only include vertices from a
restricted subset
• Initially, the subset is empty
• Then, it is incrementally increased until it includes all the vertices
The Subproblems
• Let D(k)[i,j] = Weight of a shortest path from vi to vj using only
vertices from {v1,v2,…,vk} as intermediate vertices in the path
• D(0) = W
• D(n) = D, which is the goal matrix
• How do we compute D(k) from D(k-1) ?
The Subproblems: Recursive Definition
Case 1: A shortest path from vi to vj restricted to using only vertices
from {v1, v2, …, vk} as intermediate vertices does not use vk. Then
D(k)[i,j] = D(k-1)[i,j].
Case 2: A shortest path from vi to vj restricted to using only vertices
from {v1, v2, …, vk} as intermediate vertices does use vk. Then
D(k)[i,j] = D(k-1)[i,k] + D(k-1)[k,j].
[Figure: a path from vi to vj through vk; each of the two subpaths (vi to vk and vk to vj) is a shortest path using only intermediate vertices from {v1, …, vk-1}]
The Subproblems: Recursive Definition
• Since
  D(k)[i,j] = D(k-1)[i,j], or
  D(k)[i,j] = D(k-1)[i,k] + D(k-1)[k,j],
  we conclude:
  D(k)[i,j] = min{ D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j] }
The Pointer Matrix P
• Used to enable finding a shortest path
• Initially the matrix contains all zeros
• Each time a shorter path from i to j is found, the k that provided
  the minimum is saved (the highest-index intermediate node on the path from i to j)
• To print the intermediate nodes on the shortest path, a recursive
  procedure that prints the shortest paths from i to k and from k to j
  can be used, as sketched below
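Below is a minimal Python sketch of that recursive printing procedure; it assumes vertices are numbered 1..n (so row and column 0 of P are unused) and that P[i][j] = 0 means there is no intermediate vertex, as described above. The function name is an illustrative choice.

def print_intermediate(P, i, j):
    """Recursively print the intermediate vertices on a shortest path from i to j,
    using the pointer matrix P (P[i][j] = highest-index intermediate vertex, or 0)."""
    k = P[i][j]
    if k != 0:                       # 0 means no intermediate vertex between i and j
        print_intermediate(P, i, k)
        print(k)
        print_intermediate(P, k, j)

Calling print_intermediate(P, i, j) after the algorithm has filled P prints, in order, the intermediate vertices of one shortest path from i to j.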
Example: Floyd-Warshall's Algorithm
[Figure: a directed graph on vertices 1, 2, 3 with edges 1→2 (weight 4), 1→3 (weight 5), 2→1 (weight 2), and 3→2 (weight -3)]

               1    2    3
W = D0 =   1   0    4    5
           2   2    0    ∞
           3   ∞   -3    0

               1    2    3
P =        1   0    0    0
           2   0    0    0
           3   0    0    0
Example: Floyd-Warshall's Algorithm

               1    2    3
D0 =       1   0    4    5
           2   2    0    ∞
           3   ∞   -3    0

D1[2,3] = min( D0[2,3], D0[2,1]+D0[1,3] ) = min(∞, 7) = 7
D1[3,2] = min( D0[3,2], D0[3,1]+D0[1,2] ) = min(-3, ∞) = -3

               1    2    3
D1 =       1   0    4    5
           2   2    0    7
           3   ∞   -3    0

               1    2    3
P =        1   0    0    0
           2   0    0    1
           3   0    0    0
Example: Floyd-Warshall's Algorithm

               1    2    3
D1 =       1   0    4    5
           2   2    0    7
           3   ∞   -3    0

D2[1,3] = min( D1[1,3], D1[1,2]+D1[2,3] ) = min(5, 4+7) = 5
D2[3,1] = min( D1[3,1], D1[3,2]+D1[2,1] ) = min(∞, -3+2) = -1

               1    2    3
D2 =       1   0    4    5
           2   2    0    7
           3  -1   -3    0

               1    2    3
P =        1   0    0    0
           2   0    0    1
           3   2    0    0
Example: Floyd-Warshall's Algorithm

               1    2    3
D2 =       1   0    4    5
           2   2    0    7
           3  -1   -3    0

D3[1,2] = min( D2[1,2], D2[1,3]+D2[3,2] ) = min(4, 5+(-3)) = 2
D3[2,1] = min( D2[2,1], D2[2,3]+D2[3,1] ) = min(2, 7+(-1)) = 2

               1    2    3
D3 =       1   0    2    5
           2   2    0    7
           3  -1   -3    0

               1    2    3
P =        1   0    3    0
           2   0    0    1
           3   2    0    0
Floyd-Warshall's Algorithm Using n+1 D matrices
Floyd-Warshall   // Computes shortest distances between all pairs of
                 // nodes, and saves matrix P to enable finding shortest paths
1. D0 ← W                 // initialize D array to W[ ]
2. P ← 0                  // initialize P array to [0]
3. for k ← 1 to n
4.     do for i ← 1 to n
5.         do for j ← 1 to n
6.             if ( Dk-1[ i, j ] > Dk-1[ i, k ] + Dk-1[ k, j ] )
7.                 then Dk[ i, j ] ← Dk-1[ i, k ] + Dk-1[ k, j ]
8.                      P[ i, j ] ← k
9.             else Dk[ i, j ] ← Dk-1[ i, j ]
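For reference, a minimal Python version of the same algorithm; unlike the pseudocode above it updates a single D matrix in place (a standard space-saving choice) and uses 0-indexed vertices with P[i][j] = -1 meaning "no intermediate vertex". The function name and the example run are illustrative assumptions.

import math

def floyd_warshall(W):
    """Floyd-Warshall with a pointer matrix P, following the pseudocode above
    but using a single D matrix updated in place.
    W is an n x n list of lists with W[i][i] = 0 and math.inf for missing edges."""
    n = len(W)
    D = [row[:] for row in W]                  # D <- W
    P = [[-1] * n for _ in range(n)]           # initialize P
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k                # highest-index intermediate vertex so far
    return D, P

# Hypothetical run on the 3-vertex example above (0-indexed instead of 1-indexed)
INF = math.inf
W = [[0, 4, 5],
     [2, 0, INF],
     [INF, -3, 0]]
D, P = floyd_warshall(W)
print(D)   # [[0, 2, 5], [2, 0, 7], [-1, -3, 0]]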