
Shortest Path Algorithms

The document discusses algorithms for finding the shortest path between two vertices in a graph, including Dijkstra's algorithm and Bellman-Ford algorithm. It provides an example application of Dijkstra's algorithm to find the shortest path from vertex A to other vertices in a weighted graph. The algorithm works by initially marking the source vertex A with a distance of 0 and marking all other vertices with infinity. It then iteratively updates the distances of neighboring vertices if a shorter path is found through the source vertex.

Uploaded by

Zryan Muhammed
Copyright © All Rights Reserved

SHORTEST PATH

SHORTEST PATH
Introduction

 Shortest path problem: finding a path between two vertices in a graph such that
the sum of the weights of its edges is minimized
 Dijkstra's algorithm
 Bellman-Ford algorithm
 SP in Directed Acyclic Graph
 (APSP) Floyd-Warshall algorithm
Dijkstra algorithm

 It was conceived by computer scientist Edsger W. Dijkstra in 1956
 Dijkstra's algorithm can handle non-negative edge weights only
 Dijkstra's algorithm is a greedy algorithm; its time complexity is O(E + V log V) with a Fibonacci heap (O((V + E) log V) with a binary heap)
 This is asymptotically the fastest known single-source shortest-path algorithm for
arbitrary directed graphs with unbounded non-negative weights
Initialize: the source vertex A gets distance 0; all the other vertices get distance
infinity from the source.

[Each slide shows the same weighted graph on vertices A, B, C, D, E, F, G, H, with the
current distance written next to each vertex; the captions and distance updates are
reproduced below.]

Relax the edges leaving A:
 Node B: decide what is smaller, 0+5 or inf ... 5 is smaller, so UPDATE. Predecessor of B is A
(we have to track the predecessor when we update; if we do not update, we don't).
 Node H: decide what is smaller, 0+8 or inf ... 8 is smaller, so UPDATE. Predecessor of H is A.
 Node E: decide what is smaller, 0+9 or inf ... 9 is smaller, so UPDATE. Predecessor of E is A.

Queue: B – 5 ; H – 8 ; E – 9. Take B (the smallest) and relax its edges:
 Node D: decide what is smaller, 5+15 or inf ... 20 is smaller, so UPDATE. Predecessor of D is B.
 Node C: decide what is smaller, 5+12 or inf ... 17 is smaller, so UPDATE. Predecessor of C is B.
 Node H: decide what is smaller, 5+4 or 8 ... 8 is smaller, so DO NOT UPDATE. The predecessor
of H remains A because we do not update!

Queue: H – 8 ; E – 9 ; C – 17 ; D – 20. Take H:
 Node C: decide what is smaller, 8+7 or 17 ... 15 is smaller, so UPDATE. Predecessor of C is now H.
 Node F: decide what is smaller, 8+6 or inf ... 14 is smaller, so UPDATE. Predecessor of F is H.

Queue: E – 9 ; C – 15 ; D – 20 ; F – 14. Take E:
 Node H: decide what is smaller, 9+5 or 8 ... 8 is smaller, so DO NOT UPDATE.
 Node F: decide what is smaller, 9+4 or 14 ... 13 is smaller, so UPDATE. Predecessor of F is now E.
 Node G: decide what is smaller, 9+20 or inf ... 29 is smaller, so UPDATE. Predecessor of G is E.

Queue: C – 15 ; D – 20 ; F – 13 ; G – 29. Take F:
 Node C: decide what is smaller, 13+1 or 15 ... 14 is smaller, so UPDATE. Predecessor of C is now F.
 Node G: decide what is smaller, 13+13 or 29 ... 26 is smaller, so UPDATE. Predecessor of G is now F.

Queue: C – 14 ; D – 20 ; G – 26. Take C:
 Node D: decide what is smaller, 14+3 or 20 ... 17 is smaller, so UPDATE. Predecessor of D is now C.
 Node G: decide what is smaller, 14+11 or 26 ... 25 is smaller, so UPDATE. Predecessor of G is now C.

Queue: D – 17 ; G – 25. Take D:
 Node G: decide what is smaller, 15+17 or 25 ... 25 is smaller, so DO NOT UPDATE.

Queue: G – 25. Take G: no further updates are possible, so the algorithm terminates.

We have constructed the shortest path tree: we only have to calculate it once, then we can
reuse it as many times as we want! The final distances from A are:
B – 5 ; H – 8 ; E – 9 ; F – 13 ; C – 14 ; D – 17 ; G – 25.
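The walkthrough above follows the standard priority-queue formulation of Dijkstra's algorithm. A minimal sketch in Python, using a lazy-deletion heap; the edge list is reconstructed from the slides, so the D–G weight of 15 is an assumption read off the last decision step:

```python
import heapq

def dijkstra(graph, source):
    """Return (dist, pred) maps; graph maps vertex -> list of (neighbor, weight)."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pred = {source: None}
    pq = [(0, source)]                      # (distance, vertex) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                     # stale queue entry, skip it
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:             # shorter path found: UPDATE
                dist[v] = d + w
                pred[v] = u                 # track predecessor only on update
                heapq.heappush(pq, (dist[v], v))
    return dist, pred

# Undirected example graph reconstructed from the slides (D-G weight assumed 15).
edges = [("A","B",5), ("A","H",8), ("A","E",9), ("B","D",15), ("B","C",12),
         ("B","H",4), ("H","C",7), ("H","F",6), ("H","E",5), ("E","F",4),
         ("E","G",20), ("F","C",1), ("F","G",13), ("C","D",3), ("C","G",11),
         ("D","G",15)]
graph = {v: [] for v in "ABCDEFGH"}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

dist, pred = dijkstra(graph, "A")
print(dist["G"], dist["D"], dist["C"])  # 25 17 14, matching the walkthrough
```

The lazy-deletion trick (skipping stale heap entries) avoids needing a decrease-key operation, which Python's `heapq` does not provide.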
SHORTEST PATH
BELLMAN-FORD ALGORITHM
Bellman-Ford Algorithm

 Invented in 1958 by Bellman and Ford independently
 Slower than Dijkstra's but more robust: it can handle negative edge weights too
 Dijkstra's algorithm chooses the edge greedily, with the lowest cost; Bellman-Ford
relaxes all edges at the same time, for V-1 iterations
 Running time is O(V*E)
 Does V-1 iterations + 1 more to detect cycles: if a cost decreases in the V-th iteration, then
there is a negative cycle, because all paths have already been traversed by the V-1-th
iteration!
Negative cycle:

What is the problem?

 If we would like to find a path with the minimum cost, we have to go A -> B -> C -> A
to decrease the overall cost
 And the next cycle decreases the cost again
 And again ...

[Triangle graph: vertices A, B, C with edge weights 5, 1 and -10, forming a negative cycle.]

 Real-life scenarios: no negative cycles at all ... but sometimes we transform a problem into a graph
with positive / negative edge weights and look for negative cycles!
1970: Yen optimization

 Yen's algorithm: it is the Bellman-Ford algorithm with some optimization.
 We can terminate the algorithm if there is no change in the distances between two
iterations!
 (we use the same technique in bubble sort)
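A minimal sketch combining the V-1 relaxation rounds, the extra detection pass, and Yen's early exit, assuming a directed edge-list representation; the triangle graph below follows the slides, with the edge directions and weights assumed:

```python
def bellman_ford(vertices, edges, source):
    """Relax every edge V-1 times; one extra improving pass proves a negative cycle."""
    dist = {v: float("inf") for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relaxation step
                dist[v] = dist[u] + w
                changed = True
        if not changed:                 # Yen: no change between iterations -> done
            break
    # V-th pass: any further decrease means a reachable negative cycle.
    negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, negative_cycle

# The negative triangle from the slides: going A -> B -> C -> A lowers the cost.
dist, neg = bellman_ford("ABC", [("A", "B", 1), ("B", "C", -10), ("C", "A", 5)], "A")
print(neg)  # True: each trip around the cycle decreases the cost further
```

On a graph without negative cycles, the early exit often stops the algorithm well before all V-1 iterations, which is exactly the bubble-sort-style optimization described above.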
Applications

 Cycle detection can prove to be very important
 For negative cycles we have to run the Bellman-Ford algorithm, which can
handle negative edge weights by default
 On the FOREX market it can detect arbitrage situations!
Let all edges be processed in the following order: (B, E), (D, B), (B, D), (A, B), (A, C),
(D, C), (B, C), (E, D).
The first iteration is guaranteed to give all shortest paths which are at most 1 edge long. We
get the following distances when all edges are processed a second time (the last row
shows the final values).
The second iteration is guaranteed to give all shortest paths which are at most 2 edges long.
The algorithm processes all edges 2 more times; the distances are already minimized after the
second iteration, so the third and fourth iterations don't update the distances.
SHORTEST PATH
SHORTEST PATH IN DIRECTED ACYCLIC GRAPH
DAG shortest path

 If the graph is a DAG, i.e. there are no directed cycles, it is easier to find the shortest
path
 We sort the vertices into topological order, then iterate through that order,
relaxing all edges out of the current vertex
 The topological sort approach computes the shortest path tree in any edge-weighted (the
weights can even be negative!) DAG in O(E+V) time
 It is much faster than Bellman-Ford or Dijkstra
 Applications: solving the Knapsack problem
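The two phases (topological sort, then a single relaxation sweep) can be sketched as follows, assuming vertices numbered 0..n-1 and a directed edge list; note the negative edge weight is handled without trouble:

```python
from collections import defaultdict

def dag_shortest_paths(n, edges, source):
    """Shortest paths in a DAG: Kahn's topological sort + one relaxation sweep.
    Vertices are 0..n-1; edges are (u, v, w) and weights may be negative."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    order, stack = [], [v for v in range(n) if indeg[v] == 0]
    while stack:                       # Kahn's algorithm for topological order
        u = stack.pop()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    dist = [float("inf")] * n
    dist[source] = 0
    for u in order:                    # relax all outgoing edges, in topological order
        if dist[u] == float("inf"):
            continue                   # u is unreachable from the source
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Illustrative 4-vertex DAG with one negative edge.
print(dag_shortest_paths(4, [(0, 1, 2), (0, 2, 6), (1, 2, -3), (2, 3, 1)], 0))
```

Because every edge is relaxed exactly once, after its tail's position in the topological order, the whole computation is O(E+V) as stated above.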
Example
SHORTEST PATH
ALL PAIRS SHORTEST PATH(FLOYD WARSHALL ALGORITHM)
Floyd-Warshall Algorithm

 The all-pairs shortest path algorithm is also known as the Floyd-Warshall algorithm.
 It is used to solve the all-pairs shortest path problem on a given weighted graph.
 As a result, the algorithm generates a matrix that represents the minimum
distance from any node to all other nodes in the graph.
Time Complexity: O(n³)
Space Complexity: O(n²)
 Let's discuss the algorithm with an example
Example
1- Create a matrix A0 of dimension n*n, where n is the number of vertices. The rows and
columns are indexed as i and j respectively; i and j are the vertices of the graph.

Each cell A[i][j] is filled with the distance from the i-th vertex to the j-th vertex. If there is
no path from the i-th vertex to the j-th vertex, the cell is left as infinity.
2- Now, create a matrix A1 using matrix A0. The elements in the first column and the first
row are left as they are. The remaining cells are filled in the following way.

Let k be the intermediate vertex in the shortest path from source to destination. In this
step, k is the first vertex. A[i][j] is filled with (A[i][k] + A[k][j]) if (A[i][j] > A[i][k] +
A[k][j]).

That is, if the direct distance from the source to the destination is greater than the path
through vertex k, then the cell is filled with A[i][k] + A[k][j].

In this step, k is vertex 1, and we calculate the distance from the source vertex to the
destination vertex through this vertex k.
For example: for A1[2, 4], the direct distance from vertex 2 to 4 is 4, and the sum of the
distances from vertex 2 to 4 through vertex 1 (i.e. from vertex 2 to 1 and from vertex 1 to 4)
is 7. Since 4 < 7, A1[2, 4] is filled with 4.
3- Similarly, A2 is created using A1. The elements in the second column and the second
row are left as they are.
In this step, k is the second vertex (i.e. vertex 2). The remaining steps are the same as in
step 2.
4- Similarly, A3 is also created.
5- Similarly, A4 is also created.
6- A4 gives the shortest path between each pair of vertices.
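The matrices A0 ... A4 correspond to successive values of k in a triple loop. A minimal sketch; the 4-vertex input matrix here is illustrative, not the slides' exact example:

```python
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths, in place. After iteration k, dist[i][j] is the
    shortest i -> j distance using only the first k vertices as intermediates
    (i.e. the matrix A_k from the steps above)."""
    n = len(dist)
    for k in range(n):                     # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Illustrative A0: edge weights, INF where there is no edge, 0 on the diagonal.
A0 = [[0, 3, INF, 5],
      [2, 0, INF, 4],
      [INF, 1, 0, INF],
      [INF, INF, 2, 0]]
A4 = floyd_warshall(A0)
print(A4)  # [[0, 3, 7, 5], [2, 0, 6, 4], [3, 1, 0, 5], [5, 3, 2, 0]]
```

The triple loop makes the O(n³) time bound visible directly, and the single matrix updated in place gives the O(n²) space bound.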
TRAVELLING
SALESMAN
PROBLEM
TSP
 Given a list of cities and the distances between each pair of cities, what is the
shortest possible route that visits each city exactly once and returns to the origin
city?
 NP-hard problem in combinatorial optimization
 N cities -> N! permutations, so brute-force methods cannot be used
 It is possible that the worst-case running time of any algorithm for the TSP
increases exponentially with the number of cities
 Hamiltonian path: a path in an undirected or directed graph that visits
each vertex exactly once
 Hamiltonian cycle: a Hamiltonian path that is a cycle
 HERE: we are basically looking for the shortest Hamiltonian cycle!
We are looking for a Hamiltonian cycle, but with the shortest overall cost possible!

[Example graph: vertices A, B, C, D with edge weights 1, 2, 2, 5, 3, 1.]

One Hamiltonian cycle has total cost 10; another has total cost 6 ~ so the latter is the
solution of the TSP!

Algorithm

 The most direct solution would be to try all permutations; this is brute-force
search
 O(n!) running time: impractical even for 20 cities
 The dynamic programming approach, the Held-Karp algorithm, solves the problem exactly
 But it is not very fast either
 Principle: we just want to get an approximate solution, it does not need to be the
best
 It can be done with Monte-Carlo based simulations
Methods for solving TSP
 Choose a random tour: not very efficient
 Greedy search: we always choose the nearest next city
 2-opt solution: generate a random tour, then take a route that crosses over itself and
reorder it so that it does not
 If the overall tour distance cannot be improved by swapping edges, terminate
 Simulated annealing: it can be good!
 It can avoid local minima
 It is like the Metropolis-Hastings algorithm: we allow non-optimal sub-solutions too, with a
given probability (in Metropolis-Hastings this probability is constant!)
 Nearest Neighbor Method
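The 2-opt idea from the list above can be sketched as follows: keep reversing a segment of the tour while doing so shortens it, and terminate when no swap of edges helps. The distance matrix is a made-up example:

```python
def tour_length(dist, tour):
    """Total length of the closed tour, including the edge back to the start."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(dist, tour):
    """Repeat: reverse the segment between positions i+1 and j (a 2-opt swap)
    whenever that shortens the tour; stop when no swap improves it."""
    n, improved = len(tour), True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(dist, candidate) < tour_length(dist, tour):
                    tour, improved = candidate, True
    return tour

# Hypothetical symmetric 4-city distance matrix; the tour [0, 2, 1, 3] crosses itself.
D = [[0, 1, 2, 5], [1, 0, 3, 2], [2, 3, 0, 1], [5, 2, 1, 0]]
best = two_opt(D, [0, 2, 1, 3])
print(best, tour_length(D, best))  # uncrossed tour of length 6
```

Reversing the segment is exactly the "reorder it so that it does not cross" step: a crossing pair of edges is replaced by the two non-crossing edges between the same four cities.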
TRAVELLING
SALESMAN
PROBLEM
NEAREST NEIGHBOR METHOD
This procedure gives reasonably good results for the travelling salesman problem. The
method is as follows:

 Step 1: Select an arbitrary vertex and find the vertex that is nearest to this starting
vertex to form an initial path of one edge.

 Step 2: Let v denote the latest vertex that was added to the path. Now, among the
rest of the vertices that are not in the path, select the one closest to v and add to the
path the edge connecting v and this vertex. Repeat this step until all the vertices
of graph G are included in the path.

 Step 3: Join the starting vertex and the last vertex added by an edge to form the
circuit.
Let the problem be the given graph:
1- Select edge V1-V3
2- Select edge V3-V2
3- Select edge V2-V4
4- Select edge V4-V5
5- Select edge V5-V1

The total cost is 18
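The three steps can be sketched as follows; the 4-city distance matrix is a made-up stand-in for the slides' V1..V5 graph:

```python
def nearest_neighbor_tour(dist, start=0):
    """Steps 1-2: repeatedly add the closest unvisited city to the path;
    Step 3: close the circuit back to the start and report the total cost."""
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        v = tour[-1]                         # latest vertex added to the path
        nxt = min((u for u in range(n) if u not in visited),
                  key=lambda u: dist[v][u])  # closest remaining city to v
        tour.append(nxt)
        visited.add(nxt)
    cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, cost

# Hypothetical symmetric distance matrix for 4 cities.
D = [[0, 1, 2, 5], [1, 0, 3, 2], [2, 3, 0, 1], [5, 2, 1, 0]]
print(nearest_neighbor_tour(D))  # ([0, 1, 3, 2], 6)
```

Like any greedy heuristic, the result depends on the starting city and is not guaranteed optimal; running it from every start and keeping the best tour is a common refinement.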
CONCLUSION
CONCLUSION
 Shortest path algorithms have real-life use cases; as mentioned, we have SSSP and APSP
 In SSSP we have to find the shortest path from one source vertex to all other vertices, while in
APSP we have to find the shortest paths between all pairs of vertices
 SSSP has several algorithms: for instance BFS for unweighted graphs, and Dijkstra's algorithm for
positively weighted graphs (it doesn't work with negative weights: it may give an answer,
but not a correct one); another SSSP algorithm is Bellman-Ford, which works with negative and
positive weighted graphs, uses dynamic programming, and is useful for detecting negative
cycles
 The last SSSP topic mentioned is the shortest path in directed acyclic graphs (DAG)
 For all-pairs shortest paths we have the Floyd-Warshall algorithm. It computes the shortest
path between every pair of vertices of the given graph. The Floyd-Warshall algorithm is an
example of the dynamic programming approach.