
Design and Analysis of Algorithms

CSE 5311
Lecture 22 All-Pairs Shortest Paths

Junzhou Huang, Ph.D.


Department of Computer Science and Engineering



All Pairs Shortest Paths (APSP)

• given: a directed graph G = (V, E) with weight function ω : E → R, |V| = n

• goal: create an n × n matrix D = (d_ij) of shortest-path distances,
  i.e., d_ij = δ(v_i, v_j)

• trivial solution: run an SSSP algorithm n times, once with each vertex as the source.



All Pairs Shortest Paths (APSP)
► all edge weights are nonnegative: run Dijkstra's algorithm from each vertex — see the sketch below
– PQ = linear array: O(V³ + VE) = O(V³)
– PQ = binary heap: O(V²lg V + EVlg V) = O(V³lg V) for dense graphs;
  better only for sparse graphs
– PQ = Fibonacci heap: O(V²lg V + EV) = O(V³) for dense graphs;
  better only for sparse graphs
► negative edge weights: run the Bellman-Ford algorithm from each vertex
– O(V²E) = O(V⁴) on dense graphs
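
To make the trivial solution concrete, here is a minimal Python sketch, assuming nonnegative edge weights and a graph stored as a dict mapping each vertex to a list of (neighbor, weight) pairs; the names dijkstra and apsp_by_dijkstra are illustrative, not part of the lecture.

import heapq

def dijkstra(graph, source):
    # Lazy-deletion Dijkstra with a binary heap: O((V + E) lg V) per source.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                      # (distance, vertex) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                     # stale heap entry, skip it
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def apsp_by_dijkstra(graph):
    # The trivial APSP solution: one SSSP run per source vertex.
    return {s: dijkstra(graph, s) for s in graph}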



Adjacency Matrix Representation of Graphs

► n × n matrix W = (ω_ij) of edge weights:

  ω_ij = ω(v_i, v_j) if (v_i, v_j) ∈ E
         ∞           if (v_i, v_j) ∉ E

► assume ω_ii = 0 for all v_i ∈ V, because
– no negative-weight cycle
  ⇒ the shortest path from a vertex to itself has no edges,
  i.e., δ(v_i, v_i) = 0



Dynamic Programming

(1) Characterize the structure of an optimal solution.
(2) Recursively define the value of an optimal solution.
(3) Compute the value of an optimal solution in a bottom-up manner.
(4) Construct an optimal solution from the information computed in (3).



Shortest Paths and Matrix Multiplication
Assumption: negative edge weights may be present, but no negative-weight cycles.

(1) Structure of a Shortest Path:

• Consider a shortest path p_ij^(m) from v_i to v_j such that |p_ij^(m)| ≤ m
► i.e., path p_ij^(m) has at most m edges
• no negative-weight cycle ⇒ all shortest paths are simple
  ⇒ m is finite ⇒ m ≤ n - 1

• i = j ⇒ |p_ii| = 0 and ω(p_ii) = 0

• i ≠ j ⇒ decompose path p_ij^(m) into p_ik^(m-1) followed by edge v_k → v_j,
  where |p_ik^(m-1)| ≤ m - 1
► p_ik^(m-1) must be a shortest path from v_i to v_k, by the optimal-substructure
  property.
► Therefore, δ(v_i, v_j) = δ(v_i, v_k) + ω_kj



Shortest Paths and Matrix Multiplication
(2) A Recursive Solution to the All-Pairs Shortest Paths Problem:

• d_ij^(m) = the minimum weight of any path from v_i to v_j that contains
  at most m edges.

• m = 0: there exists a shortest path from v_i to v_j with no edges ⟺ i = j

  d_ij^(0) = 0 if i = j
             ∞ if i ≠ j

• m ≥ 1: d_ij^(m) = min{ d_ij^(m-1), min_{1 ≤ k ≤ n, k ≠ j} { d_ik^(m-1) + ω_kj } }
                  = min_{1 ≤ k ≤ n} { d_ik^(m-1) + ω_kj },
  since ω_jj = 0 for all v_j ∈ V.



Shortest Paths and Matrix Multiplication
• to consider all possible shortest paths with ≤ m edges from v_i to v_j:
► consider each shortest path with ≤ m - 1 edges from v_i to some v_k,
  where v_k is reachable from v_i and (v_k, v_j) ∈ E

[figure: candidate predecessors v_k on paths from v_i to v_j]

• note: δ(v_i, v_j) = d_ij^(n-1) = d_ij^(n) = d_ij^(n+1) = ..., since m ≤ n - 1 = |V| - 1



Shortest Paths and Matrix Multiplication

(3) Computing the Shortest-Path Weights Bottom-Up:

• given W = D^(1), compute a series of matrices D^(2), D^(3), ..., D^(n-1),
  where D^(m) = (d_ij^(m)) for m = 1, 2, ..., n - 1
► the final matrix D^(n-1) contains the actual shortest-path weights,
  i.e., d_ij^(n-1) = δ(v_i, v_j)

• SLOW-APSP(W)
      D^(1) ← W
      for m ← 2 to n - 1 do
          D^(m) ← EXTEND(D^(m-1), W)
      return D^(n-1)



Shortest Paths and Matrix Multiplication

EXTEND(D, W)
► D = (d_ij) is an n × n matrix; computes the min-plus product D' = D × W
  into a fresh matrix, so entries of D are not overwritten while still needed
    for i ← 1 to n do
        for j ← 1 to n do
            d'_ij ← ∞
            for k ← 1 to n do
                d'_ij ← min{ d'_ij, d_ik + ω_kj }
    return D'

MATRIX-MULT(A, B)
► C = (c_ij) is an n × n result matrix
    for i ← 1 to n do
        for j ← 1 to n do
            c_ij ← 0
            for k ← 1 to n do
                c_ij ← c_ij + a_ik × b_kj
    return C
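
A runnable Python version of EXTEND and SLOW-APSP may help; this is a sketch, assuming matrices are lists of lists with float('inf') standing in for ∞.

INF = float('inf')

def extend(D, W):
    # One min-plus "multiplication": allow every path one more edge.
    n = len(D)
    D_new = [[INF] * n for _ in range(n)]   # write into a fresh matrix
    for i in range(n):
        for j in range(n):
            for k in range(n):
                D_new[i][j] = min(D_new[i][j], D[i][k] + W[k][j])
    return D_new

def slow_apsp(W):
    # Compute D^(n-1) by repeated EXTEND steps: Θ(n^4) overall.
    n = len(W)
    D = W
    for _ in range(2, n):                   # m = 2, ..., n-1
        D = extend(D, W)
    return D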



Shortest Paths and Matrix Multiplication
• relation to matrix multiplication C = A × B: c_ij = Σ_{1 ≤ k ≤ n} a_ik × b_kj
► D^(m-1) ↔ A, W ↔ B, D^(m) ↔ C
► "min" ↔ "+", "+" ↔ "×", "∞" ↔ "0"

• Thus, we compute the sequence of matrix products
  D^(1) = D^(0) × W = W    (D^(0) = the min-plus identity matrix:
  D^(2) = D^(1) × W = W²    d_ij^(0) = 0 if i = j, ∞ if i ≠ j)
  D^(3) = D^(2) × W = W³
  ...
  D^(n-1) = D^(n-2) × W = W^(n-1)

• running time: Θ(n⁴) = Θ(V⁴)
► each matrix product: Θ(n³)
► number of matrix products: n - 2



Shortest Paths and Matrix Multiplication

• Example: a directed graph on vertices {1, 2, 3, 4, 5} with edge weights
  ω(1,2) = 3, ω(1,3) = 8, ω(1,5) = -4, ω(2,4) = 1, ω(2,5) = 7,
  ω(3,2) = 4, ω(4,1) = 2, ω(4,3) = -5, ω(5,4) = 6

[figure: the example graph]



Shortest Paths and Matrix Multiplication

D^(1) = D^(0)W = W :

       1    2    3    4    5
  1    0    3    8    ∞   -4
  2    ∞    0    ∞    1    7
  3    ∞    4    0    ∞    ∞
  4    2    ∞   -5    0    ∞
  5    ∞    ∞    ∞    6    0



Shortest Paths and Matrix Multiplication

D^(2) = D^(1)W :

       1    2    3    4    5
  1    0    3    8    2   -4
  2    3    0   -4    1    7
  3    ∞    4    0    5   11
  4    2   -1   -5    0   -2
  5    8    ∞    1    6    0



Shortest Paths and Matrix Multiplication

D^(3) = D^(2)W :

       1    2    3    4    5
  1    0    3   -3    2   -4
  2    3    0   -4    1   -1
  3    7    4    0    5   11
  4    2   -1   -5    0   -2
  5    8    5    1    6    0



Shortest Paths and Matrix Multiplication

D^(4) = D^(3)W :

       1    2    3    4    5
  1    0    1   -3    2   -4
  2    3    0   -4    1   -1
  3    7    4    0    5    3
  4    2   -1   -5    0   -2
  5    8    5    1    6    0
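
As a check, running the slow_apsp sketch given earlier on the example's weight matrix W = D^(1) reproduces the matrix D^(4) shown above:

INF = float('inf')
W = [[0,   3,   8,   INF, -4],
     [INF, 0,   INF, 1,   7],
     [INF, 4,   0,   INF, INF],
     [2,   INF, -5,  0,   INF],
     [INF, INF, INF, 6,   0]]
D = slow_apsp(W)                    # from the sketch given earlier
assert D[0] == [0, 1, -3, 2, -4]    # row for vertex 1, matching D^(4)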



SSSP and Matrix-Vector Multiplication

• relation of APSP to one step of matrix multiplication:

[figure: entry d_ij^(m) of C is the min-plus product of row r_i of A with
column c_j of B, where D^(m) ↔ C, D^(m-1) ↔ A, W ↔ B]



SSSP and Matrix-Vector Multiplication

• d_ij^(n-1) at row r_i and column c_j of the product matrix
  = δ(v_i = s, v_j) for j = 1, 2, 3, ..., n

• row r_i of the product matrix = the solution to the single-source
  shortest path problem for s = v_i

► row r_i of C = (row r_i of A) min-plus multiplied by matrix B
⇒ d_i^(m) = d_i^(m-1) × W



SSSP and Matrix-Vector Multiplication

• let d_i^(0) be the n-vector with d_j^(0) = 0 if i = j, ∞ otherwise

• we compute a sequence of n - 1 "vector-matrix" products
  d_i^(1) = d_i^(0) × W
  d_i^(2) = d_i^(1) × W
  d_i^(3) = d_i^(2) × W
  ...
  d_i^(n-1) = d_i^(n-2) × W



SSSP and Matrix-Vector Multiplication

• this sequence of vector-matrix products is
► the same as the Bellman-Ford algorithm
► vector d_i^(m) ⇒ the d values of the Bellman-Ford algorithm
  after the m-th relaxation pass
► d_i^(m) ← d_i^(m-1) × W
  ⇒ the m-th relaxation pass over all edges



SSSP and Matrix-Vector Multiplication

BELLMAN-FORD(G, v_i)
► one pass: performs RELAX(u, v) for every edge (u, v) ∈ E
    for j ← 1 to n do
        for k ← 1 to n do
            RELAX(v_k, v_j)

RELAX(u, v)
    d_v ← min{ d_v, d_u + ω_uv }

EXTEND(d, W)
► d is an n-vector; computes the min-plus product d' = d × W
  into a fresh vector, so d_k is not overwritten while still needed
    for j ← 1 to n do
        d'_j ← ∞
        for k ← 1 to n do
            d'_j ← min{ d'_j, d_k + ω_kj }
    return d'
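
The correspondence can be stated in a few lines of Python; this is a sketch, assuming the same list-of-lists weight matrix as before, with extend_vector and sssp_by_products as illustrative names.

INF = float('inf')

def extend_vector(d, W):
    # One min-plus vector-matrix product = one relaxation pass over all edges.
    n = len(d)
    return [min(d[k] + W[k][j] for k in range(n)) for j in range(n)]

def sssp_by_products(W, s):
    # Bellman-Ford for source s, phrased as n-1 vector-matrix products.
    n = len(W)
    d = [INF] * n
    d[s] = 0
    for _ in range(n - 1):
        d = extend_vector(d, W)
    return d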



Improving Running Time Through
Repeated Squaring
• idea: the goal is not to compute all D^(m) matrices
► we are interested only in the matrix D^(n-1)
• recall: no negative-weight cycles ⇒ D^(m) = D^(n-1) for all m ≥ n - 1
• we can compute D^(n-1) with only ⌈lg(n-1)⌉ matrix products:
  D^(1) = W
  D^(2) = W² = W × W
  D^(4) = W⁴ = W² × W²
  D^(8) = W⁸ = W⁴ × W⁴
  ...
  D^(2^⌈lg(n-1)⌉) = W^(2^⌈lg(n-1)⌉) = W^(2^(⌈lg(n-1)⌉-1)) × W^(2^(⌈lg(n-1)⌉-1))

• This technique is called repeated squaring.



Improving Running Time Through
Repeated Squaring
• FASTER-APSP(W)
      D^(1) ← W
      m ← 1
      while m < n - 1 do
          D^(2m) ← EXTEND(D^(m), D^(m))
          m ← 2m
      return D^(m)

• the final iteration computes D^(2m) for some n - 1 ≤ 2m ≤ 2n - 2 ⇒ D^(2m) = D^(n-1)
• running time: Θ(n³lg n) = Θ(V³lg V) — see the sketch below

► each matrix product: Θ(n³)
► number of matrix products: ⌈lg(n-1)⌉
► simple code, no complex data structures, small hidden
  constants in the Θ-notation.
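
A sketch of FASTER-APSP in Python, reusing the extend function given earlier; repeated squaring replaces n - 2 products with about lg(n - 1) of them.

def faster_apsp(W):
    # Square the current matrix under min-plus until it covers n-1 edges.
    n = len(W)
    D = W
    m = 1
    while m < n - 1:
        D = extend(D, D)        # D^(2m) = D^(m) min-plus D^(m)
        m *= 2
    return D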



Idea Behind Repeated Squaring

• decompose p_ij^(2m) as p_ik^(m) followed by p_kj^(m), where
  p_ij^(2m) : v_i ⇝ v_j
  p_ik^(m) : v_i ⇝ v_k
  p_kj^(m) : v_k ⇝ v_j

[figure: candidate midpoints v_k on paths from v_i to v_j]



Floyd-Warshall Algorithm
• assumption: negative-weight edges may be present, but no negative-weight cycles

(1) The Structure of a Shortest Path:

• Definition: an intermediate vertex of a path p = < v_1, v_2, v_3, ..., v_k >
  is any vertex of p other than v_1 or v_k.
• p_ij^(m): a shortest path from v_i to v_j with all intermediate vertices
  drawn from V_m = { v_1, v_2, ..., v_m }

• relationship between p_ij^(m) and p_ij^(m-1)
► depends on whether v_m is an intermediate vertex of p_ij^(m)
– case 1: v_m is not an intermediate vertex of p_ij^(m)
  ⇒ all intermediate vertices of p_ij^(m) are in V_{m-1}
  ⇒ p_ij^(m) = p_ij^(m-1)



Floyd-Warshall Algorithm
– case 2: v_m is an intermediate vertex of p_ij^(m)
  – decompose the path as v_i ⇝ v_m ⇝ v_j, i.e., p_1 : v_i ⇝ v_m and p_2 : v_m ⇝ v_j
  – by the optimal-substructure property, both p_1 and p_2 are shortest paths
  – v_m is not an intermediate vertex of p_1 or p_2
    ⇒ p_1 = p_im^(m-1) and p_2 = p_mj^(m-1)

[figure: path from v_i to v_j through v_m, with subpaths p_1 and p_2 inside V_m]



Floyd-Warshall Algorithm

(2) A Recursive Solution to the APSP Problem:

• d_ij^(m) = ω(p_ij^(m)): the weight of a shortest path from v_i to v_j
  with all intermediate vertices drawn from
  V_m = { v_1, v_2, ..., v_m }.

• note: d_ij^(n) = δ(v_i, v_j), since V_n = V
► i.e., all vertices are considered as potential intermediate
  vertices of p_ij^(n).



Floyd-Warshall Algorithm

• compute d_ij^(m) in terms of d_ij^(k) with smaller k < m

• m = 0: V_0 = ∅ (the empty set)
  ⇒ a path from v_i to v_j with no intermediate vertices,
  i.e., a v_i-to-v_j path with at most one edge
  ⇒ d_ij^(0) = ω_ij

• m ≥ 1: d_ij^(m) = min{ d_ij^(m-1), d_im^(m-1) + d_mj^(m-1) }



Floyd-Warshall Algorithm
(3) Computing Shortest-Path Weights Bottom-Up:

FLOYD-WARSHALL(W)
► D^(0), D^(1), ..., D^(n) are n × n matrices
    D^(0) ← W
    for m ← 1 to n do
        for i ← 1 to n do
            for j ← 1 to n do
                d_ij^(m) ← min{ d_ij^(m-1), d_im^(m-1) + d_mj^(m-1) }
    return D^(n)



Floyd-Warshall Algorithm

FLOYD-WARSHALL(W)
► D is an n × n matrix
    D ← W
    for m ← 1 to n do
        for i ← 1 to n do
            for j ← 1 to n do
                if d_ij > d_im + d_mj then
                    d_ij ← d_im + d_mj
    return D



Floyd-Warshall Algorithm

• maintaining n + 1 D matrices can be avoided by dropping all superscripts:

– the m-th iteration of the outermost for-loop
  begins with D = D^(m-1) and ends with D = D^(m)
– the computation of d_ij^(m) depends on d_im^(m-1) and d_mj^(m-1);
  there is no problem if d_im and d_mj have already been updated to
  d_im^(m) and d_mj^(m), since d_im^(m) = d_im^(m-1) and d_mj^(m) = d_mj^(m-1).

• running time: Θ(n³) = Θ(V³)
  simple code, no complex data structures, small hidden constants — see the sketch below
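
A compact Python sketch of the in-place version above, assuming the usual list-of-lists matrix with float('inf') for missing edges:

def floyd_warshall(W):
    # Single working matrix D; superscripts dropped as argued above.
    n = len(W)
    D = [row[:] for row in W]               # copy W so it is not clobbered
    for m in range(n):                      # intermediate vertex index
        for i in range(n):
            for j in range(n):
                if D[i][m] + D[m][j] < D[i][j]:
                    D[i][j] = D[i][m] + D[m][j]
    return D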



Transitive Closure of a Directed Graph

• G' = (V, E'): the transitive closure of G = (V, E), where
► E' = { (v_i, v_j) : there exists a path from v_i to v_j in G }

• trivial solution: assign W such that

  ω_ij = 1 if (v_i, v_j) ∈ E
         ∞ otherwise

► run the Floyd-Warshall algorithm on W
► d_ij^(n) < n ⇒ there exists a path from v_i to v_j,
  i.e., (v_i, v_j) ∈ E'
► d_ij^(n) = ∞ ⇒ there is no path from v_i to v_j,
  i.e., (v_i, v_j) ∉ E'
► running time: Θ(n³) = Θ(V³)



Transitive Closure of a Directed Graph

• A better Θ(V³) algorithm saves time and space:

► W = adjacency matrix:
  ω_ij = 1 if i = j or (v_i, v_j) ∈ E
         0 otherwise
► run the Floyd-Warshall algorithm after replacing "min" → "∨" (logical OR)
  and "+" → "∧" (logical AND)

• define t_ij^(m) = 1 if there is a path from v_i to v_j with all intermediate
  vertices from V_m, 0 otherwise
► t_ij^(n) = 1 ⇒ (v_i, v_j) ∈ E' and t_ij^(n) = 0 ⇒ (v_i, v_j) ∉ E'

• recursive definition: t_ij^(m) = t_ij^(m-1) ∨ ( t_im^(m-1) ∧ t_mj^(m-1) ),
  with t_ij^(0) = ω_ij



Transitive Closure of a Directed Graph

T-CLOSURE(G)
► T = (t_ij) is an n × n boolean matrix
    for i ← 1 to n do
        for j ← 1 to n do
            if i = j or (v_i, v_j) ∈ E then
                t_ij ← 1
            else
                t_ij ← 0
    for m ← 1 to n do
        for i ← 1 to n do
            for j ← 1 to n do
                t_ij ← t_ij ∨ ( t_im ∧ t_mj )
    return T
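
A Python sketch of T-CLOSURE, assuming vertices 0..n-1 and the edge set given as a set of (i, j) pairs; ∨ and ∧ become Python's or and and.

def transitive_closure(n, edges):
    # T[i][j] starts as True iff i = j or (i, j) is an edge.
    T = [[i == j or (i, j) in edges for j in range(n)] for i in range(n)]
    for m in range(n):
        for i in range(n):
            for j in range(n):
                T[i][j] = T[i][j] or (T[i][m] and T[m][j])
    return T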
Johnson’s Algorithm for Sparse Graphs

(1) Preserving shortest paths by edge reweighting:

• L1: given G = (V, E) with ω : E → R
► let h : V → R be any weighting function on the vertex set
► define the reweighted function ω̂ : E → R as ω̂(u, v) = ω(u, v) + h(u) - h(v)
► let p_0k = < v_0, v_1, ..., v_k > be a path from v_0 to v_k

(a) ω̂(p_0k) = ω(p_0k) + h(v_0) - h(v_k)

(b) ω(p_0k) = δ(v_0, v_k) in (G, ω) ⟺ ω̂(p_0k) = δ̂(v_0, v_k) in (G, ω̂)

(c) (G, ω) has a negative-weight cycle ⟺ (G, ω̂) has a negative-weight cycle



Johnson’s Algorithm for Sparse Graphs
(2) Producing nonnegative edge weights by reweighting:

• given (G, ω) with G = (V, E) and ω : E → R,
  construct a new graph (G', ω') with G' = (V', E') and
  ω' : E' → R
► V' = V ∪ { s } for some new vertex s ∉ V
► E' = E ∪ { (s, v) : v ∈ V }
► ω'(u, v) = ω(u, v) for all (u, v) ∈ E, and ω'(s, v) = 0 for all v ∈ V

• vertex s has no incoming edges ⇒ s is not reachable from any v ∈ V

► no shortest path from u ≠ s to v in G' contains vertex s
► (G', ω') has no negative-weight cycle ⟺ (G, ω) has no negative-weight cycle



Johnson’s Algorithm for Sparse Graphs

• suppose that G and G' have no negative-weight cycles

• L2: if we define h(v) = δ(s, v) for all v ∈ V, computed in G',
  and define ω̂ according to L1,
► then ω̂(u, v) = ω(u, v) + h(u) - h(v) ≥ 0 for all (u, v) ∈ E

• proof: for every edge (u, v) ∈ E,
  δ(s, v) ≤ δ(s, u) + ω(u, v) in G' due to the triangle inequality
  ⇒ h(v) ≤ h(u) + ω(u, v) ⇒ 0 ≤ ω(u, v) + h(u) - h(v) = ω̂(u, v)



Johnson’s Algorithm for Sparse Graphs

Computing All-Pairs Shortest Paths:

• uses the adjacency-list representation of G

• returns the n × n matrix D = (d_ij) with d_ij = δ(v_i, v_j),
  or reports the existence of a negative-weight cycle



Johnson’s Algorithm for Sparse Graphs
• JOHNSON(G, ω)
► D = (d_ij) is an n × n matrix
► construct (G' = (V', E'), ω') s.t. V' = V ∪ {s} and E' = E ∪ { (s, v) : v ∈ V }
► ω'(u, v) = ω(u, v) for (u, v) ∈ E, and ω'(s, v) = 0 for all v ∈ V
    if BELLMAN-FORD(G', ω', s) = FALSE then
        return "negative-weight cycle"
    else
        for each vertex v ∈ V' - {s} = V do
            h[v] ← d'[v]    ► d'[v] = δ'(s, v) computed by BELLMAN-FORD(G', ω', s)
        for each edge (u, v) ∈ E do
            ω̂(u, v) ← ω(u, v) + h[u] - h[v]    ► edge reweighting
        for each vertex u ∈ V do
            run DIJKSTRA(G, ω̂, u) to compute d̂[v] = δ̂(u, v) in (G, ω̂) for all v ∈ V
            for each vertex v ∈ V do
                d_uv ← d̂[v] - ( h[u] - h[v] )
        return D
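
A Python sketch of the whole pipeline, reusing the dijkstra sketch from earlier; bellman_ford and johnson are illustrative names, and the graph is again a dict of (neighbor, weight) lists.

def bellman_ford(graph, source):
    # Returns shortest-path estimates, or None if a negative cycle exists.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    for _ in range(len(graph) - 1):          # n-1 relaxation passes
        for u in graph:
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    for u in graph:                          # one extra pass: any further
        for v, w in graph[u]:                # improvement means a neg cycle
            if dist[u] + w < dist[v]:
                return None
    return dist

def johnson(graph):
    s = object()                             # fresh vertex s not in V
    g2 = {v: list(adj) for v, adj in graph.items()}
    g2[s] = [(v, 0) for v in graph]          # zero-weight edges s -> v
    h = bellman_ford(g2, s)                  # h(v) = delta(s, v) in G'
    if h is None:
        return None                          # negative-weight cycle
    rew = {u: [(v, w + h[u] - h[v]) for v, w in adj]
           for u, adj in graph.items()}      # reweighted: all weights >= 0
    D = {}
    for u in graph:
        d_hat = dijkstra(rew, u)             # dijkstra sketch from earlier
        D[u] = {v: d_hat[v] + h[v] - h[u] for v in graph}
    return D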



Johnson’s Algorithm for Sparse Graphs

• running time: O(V²lg V + EV)

► edge reweighting:
  BELLMAN-FORD(G', ω', s): O(EV)
  computing the ω̂ values: O(E)
► |V| runs of DIJKSTRA: |V| × O(V lg V + E)
  = O(V²lg V + EV),
  with PQ = Fibonacci heap

