
Habib University

Dhanani School of Science and Engineering


CS412 Algorithms Design and Analysis

Shortest Path Problem


Dr. Shah Jamal Alam

Muhammad Hammad Maqdoom


Alisha Momin
Fatima Nasir Khan
Lama Imam

Project Report
Spring 2022
Contents

1 Introduction
  1.1 What is the shortest path problem?
  1.2 Cyclic and acyclic graphs
  1.3 Directed and undirected graphs
  1.4 Negative cycles and negative weight cycles
  1.5 Algorithms for determining the shortest path

2 Algorithms
  2.1 Breadth-First Search (BFS)
    2.1.1 Basic Intuition
    2.1.2 Algorithm
    2.1.3 Complexity - Theoretical Analysis
    2.1.4 Assumptions and Limitations
  2.2 Dijkstra’s Algorithm
    2.2.1 Basic Intuition
    2.2.2 Algorithm
    2.2.3 Complexity - Theoretical Analysis
    2.2.4 Assumptions and Limitations
  2.3 Bellman-Ford Algorithm
    2.3.1 Basic Intuition
    2.3.2 Algorithm
    2.3.3 Complexity - Theoretical Analysis
    2.3.4 Assumptions and Limitations
  2.4 Floyd–Warshall Algorithm
    2.4.1 Basic Intuition
    2.4.2 Algorithm
    2.4.3 Complexity - Theoretical Analysis
    2.4.4 Assumptions and Limitations
  2.5 Fibonacci Heap Algorithm
    2.5.1 Basic Intuition
    2.5.2 Algorithm
    2.5.3 Complexity - Theoretical Analysis
    2.5.4 Assumptions and Limitations

3 Empirical Analysis
  3.1 Results
  3.2 Limitations

4 Theoretical Comparison

5 Task Distribution
Chapter 1

Introduction

1.1 What is the shortest path problem?


The shortest path problem is a classic graph theory problem: given two vertices (nodes)
in a graph, find a path between them that minimizes the total cost of traversal, i.e. the
sum of the weights of the edges along the path.

1.2 Cyclic and acyclic graphs


A cyclic graph is a graph containing at least one graph cycle. A graph that is not cyclic
is said to be acyclic.

1.3 Directed and undirected graphs


A directed graph is a graph whose edges are ordered pairs of vertices, while an undirected
graph is a graph whose edges are unordered pairs of vertices.

1.4 Negative cycles and negative weight cycles


A negative cycle, or negative weight cycle, is a cycle whose edge weights sum to a negative
number. Its presence makes shortest paths through it undefined, since repeating the cycle
lowers the total cost indefinitely.

1.5 Algorithms for determining the shortest path


• Viterbi algorithm
• Dijkstra algorithm
• Bellman–Ford
• Johnson’s algorithm
• Floyd–Warshall algorithm
• A* search algorithm
The main focus of this report will be on the BFS, Dijkstra, Bellman–Ford and
Floyd–Warshall algorithms.

Chapter 2

Algorithms

2.1 Breadth-First Search (BFS)


Breadth-first search is a graph traversal algorithm that starts traversing the graph from
the root node and explores all the neighboring nodes. Then, it selects the nearest node
and explores all the unexplored nodes. While using BFS for traversal, any node in the
graph can be considered as the root node. [5]
BFS and its application in finding connected components of graphs were invented in 1945
by Konrad Zuse, in his (rejected) Ph.D. thesis on the Plankalkül programming language,
but this was not published until 1972. It was reinvented in 1959 by Edward F. Moore,
who used it to find the shortest path out of a maze, and later developed by C. Y. Lee into
a wire routing algorithm (published 1961). [6]

2.1.1 Basic Intuition


The following steps give the basic intuition [1]:
1. Choose any one node randomly, to start traversing.
2. Visit its adjacent unvisited node.
3. Mark it as visited in the boolean array and display it.
4. Insert the visited node into the queue.
5. If there is no adjacent node, remove the first node from the queue.
6. Repeat the above steps until the queue is empty.

2.1.2 Algorithm
The pseudocode for the BFS algorithm is given below: [2]

procedure BFS(G, root) is
    let Q be a queue
    label root as explored
    Q.enqueue(root)
    while Q is not empty do
        v := Q.dequeue()
        if v is the goal then
            return v
        for all edges from v to w in G.adjacentEdges(v) do
            if w is not labeled as explored then
                label w as explored
                Q.enqueue(w)

The implementation to determine the time analysis is given below:

import time
import matplotlib.pyplot as plt

# This class represents a directed graph
# using an adjacency list representation
class Graph:

    # Constructor
    def __init__(self):
        self.graph = {}

    def addNodes(self, u):
        self.graph[u] = []

    def addEdge(self, u, v):
        self.graph[u].append(v)

    def BFS(self, s):
        visited = [False] * (max(self.graph) + 1)
        queue = []
        queue.append(s)
        visited[s] = True

        while queue:
            v = queue.pop(0)
            for i in self.graph[v]:
                if visited[i] == False:
                    queue.append(i)
                    visited[i] = True


x = []
y = []

# Build complete graphs of increasing size and time a BFS traversal on each
for i in range(5, 225, 25):
    g = Graph()

    nodez = []
    for z in range(i):
        nodez.append(z)
        g.addNodes(z)

    # connect every pair of distinct vertices
    for k in range(len(nodez)):
        for j in range(len(nodez)):
            if k != j:
                g.addEdge(nodez[k], nodez[j])

    start_time = time.time()
    g.BFS(0)
    end_time = time.time()
    x.append(i)
    y.append(end_time - start_time)

print(x)
print(y)

plt.plot(x, y, label="Time Taken Per n Vertices", color="red",
         marker="*")

# y-axis label
plt.ylabel('Time Taken By BFS (in seconds)')
# x-axis label
plt.xlabel('Number of Vertices In The Graph')
# plot title
plt.title('BFS Algorithm - Time Analysis')
# showing legend
plt.legend()

# function to show the plot
plt.show()

2.1.3 Complexity - Theoretical Analysis


The time complexity of BFS is O(V + E) when an adjacency list is used and O(V²) when
an adjacency matrix is used, where V stands for vertices and E stands for edges.

2.1.4 Assumptions and Limitations


BFS finds the shortest path in undirected and directed graphs, but it will not work on
weighted graphs, since the path with the fewest edges may not be the shortest if the edges
it contains are expensive.
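
To make this concrete, here is a minimal sketch (our own illustration, separate from the implementation above) of how BFS yields shortest hop counts in an unweighted graph: because vertices are dequeued in order of distance, the level at which a vertex is first discovered is its shortest distance from the source. The bfs_distances helper and its dictionary-based adjacency format are assumptions made for this example.

import collections

def bfs_distances(adj, source):
    # adj: dict mapping each vertex to a list of neighbours (illustrative format)
    dist = {source: 0}
    queue = collections.deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:  # first discovery = shortest hop count
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Example: a 0-1-2 chain plus a direct 0-3 edge
adj = {0: [1, 3], 1: [0, 2], 2: [1], 3: [0]}
print(bfs_distances(adj, 0))  # {0: 0, 1: 1, 3: 1, 2: 2}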

2.2 Dijkstra’s Algorithm
Dijkstra’s algorithm is a method for determining the shortest paths between nodes in a
graph, which can be used to represent, for instance, road networks. It was conceived
in 1956 and published three years later by computer scientist Edsger W. Dijkstra.
Dijkstra’s algorithm is a single-source shortest path (SSSP) algorithm: it determines the
shortest route from a source node to every other node in the graph. The shortest path is
determined using a greedy strategy.

2.2.1 Basic Intuition


1. The Dijkstra algorithm is parameterized by the graph and the source vertex.
2. All nodes are initialized with a distance of "infinity" and the starting node (source
   node) is initialized with zero.
3. It creates a new, empty priority queue. Each item in the queue is stored as a tuple
   (weight, vertex), where weight denotes the distance from the source to that vertex.
4. The source vertex is placed in the priority queue with its distance set to zero.
5. The priority queue is iterated until it is empty.
6. Within the loop, it first obtains the vertex with the shortest distance from the
   priority queue, which we’ll refer to as vertex u. It then loops through all of u’s
   adjacent vertices and performs the following for each adjacent vertex v: if a shorter
   path to v is found through u, vertex v’s distance is updated and v is added to the
   priority queue.

2.2.2 Algorithm
The pseudocode for Dijkstra’s algorithm is given below: [3]

function Dijkstra(Graph, source):
    # Initialization
    for each vertex v in Graph:
        # initial distance from source to vertex v is set to infinity
        dist[v] := infinity
        # previous node in optimal path from source
        previous[v] := undefined
    # distance from source to source
    dist[source] := 0
    # all nodes in the graph are unoptimized - thus are in Q
    Q := the set of all nodes in Graph
    # main loop
    while Q is not empty:
        u := node in Q with smallest dist[]
        remove u from Q
        # where v has not yet been removed from Q
        for each neighbor v of u:
            alt := dist[u] + dist_between(u, v)
            if alt < dist[v]:    # Relax(u, v)
                dist[v] := alt
                previous[v] := u
    return previous[]

The implementation to determine the time analysis is given below:

import sys
import time
import numpy as np
import matplotlib.pyplot as plt


class Graph():

    def __init__(self, vertices):
        self.V = vertices
        self.graph = [[0 for column in range(vertices)]
                      for row in range(vertices)]

    def printSolution(self, dist):
        print("Vertex \tDistance from Source")
        for node in range(self.V):
            print(node, "\t", dist[node])

    # pick the unvisited vertex with the smallest tentative distance
    def minDistance(self, dist, sptSet):
        min = sys.maxsize
        for v in range(self.V):
            if dist[v] < min and sptSet[v] == False:
                min = dist[v]
                min_index = v
        return min_index

    def dijkstra(self, src):
        dist = [sys.maxsize] * self.V
        dist[src] = 0
        sptSet = [False] * self.V
        for _ in range(self.V):
            u = self.minDistance(dist, sptSet)
            sptSet[u] = True
            for v in range(self.V):
                if self.graph[u][v] > 0 and sptSet[v] == False and \
                        dist[v] > dist[u] + self.graph[u][v]:
                    dist[v] = dist[u] + self.graph[u][v]
        self.printSolution(dist)

# Example timing run (commented out):
# g = Graph(5000)
# g.graph = np.random.randint(5, size=(5000, 5000))
# start_time = time.time()
# g.dijkstra(0)
# end_time = time.time()
# print("Total time =", end_time - start_time)

# Measured results for increasing vertex counts
x = [25, 50, 100, 500, 1000, 2000, 3000, 4000, 5000]
y = [0.013002634048461914, 0.03902769088745117, 0.11808395385742188,
     1.317941665649414, 4.752390623092651, 28.34447145462036,
     69.9988374710083, 124.48891639709473, 146.03596019744873]

plt.plot(x, y, label="Time Taken Per n Vertices", color="red", marker="*")
plt.ylabel('Time Taken By Dijkstra (in seconds)')
plt.xlabel('Number of Vertices In The Graph')
plt.title('Dijkstra Algorithm - Time Analysis')
plt.legend()
plt.show()

2.2.3 Complexity - Theoretical Analysis


Dijkstra’s algorithm’s theoretical complexity varies with the underlying data structure;
the theoretical complexities for each combination are:

1. Priority queue and adjacency list:

O((V + E) log V)

2. Fibonacci heap and adjacency list:

O(E + V log V)

3. Priority queue and matrix:

O(V² + E log V)
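
For reference, here is a compact sketch of the priority-queue variant behind case 1, using Python's standard heapq module as the priority queue. The dijkstra_pq function name and the dict-of-lists adjacency format are our own illustrative choices, not the implementation used for the measurements in this report.

import heapq

def dijkstra_pq(adj, source):
    # adj: dict mapping vertex -> list of (neighbour, weight) pairs
    # (illustrative format; weights assumed non-negative)
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    pq = [(0, source)]                 # (distance, vertex) tuples
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                # stale entry, skip
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:        # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

adj = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra_pq(adj, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}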

2.2.4 Assumptions and Limitations


Dijkstra’s algorithm solves the shortest-path problem for any weighted, directed graph
with non-negative edge weights. It can handle graphs containing cycles, but not graphs
containing negative-weight edges, and hence no negative weight cycles.

2.3 Bellman-Ford Algorithm
The Bellman–Ford algorithm is an algorithm that computes shortest paths from a single
source vertex to all of the other vertices in a weighted digraph. It is slower than Dijkstra’s
algorithm for the same problem, but more versatile, as it is capable of handling graphs in
which some of the edge weights are negative numbers. The algorithm was first proposed
by Alfonso Shimbel (1955), but is instead named after Richard Bellman and Lester Ford
Jr., who published it in 1958 and 1956, respectively.

2.3.1 Basic Intuition


1. Initialize the distance of every vertex in the graph to infinity, each vertex’s
   predecessor to null, and the source node’s distance to 0.
2. For each vertex in the graph, relax every edge: compute a temporary distance by
   adding the stored distance of the edge’s start node and the edge’s weight.
3. For nodes where the computed temporary distance is less than the distance currently
   stored for that node, update the stored distance to the temporary distance and set
   the node’s predecessor to the edge’s start node.
4. If, after these relaxation rounds, any edge (u, v) still satisfies
   distance[u] + weight(u, v) < distance[v], the graph contains a negative cycle.

2.3.2 Algorithm
The pseudocode for the Bellman-Ford algorithm is given below: [4]

function bellmanFord(G, S)
    for each vertex V in G
        distance[V] <- infinite
        previous[V] <- NULL
    distance[S] <- 0

    for each vertex V in G
        for each edge (U, V) in G
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U

    for each edge (U, V) in G
        if distance[U] + edge_weight(U, V) < distance[V]
            Error: Negative Cycle Exists

    return distance[], previous[]

Our implementation to analyze the time is given below:

import time
import numpy as np
import matplotlib.pyplot as plt


class Graph:

    def __init__(self, vertices):
        self.V = vertices
        self.graph = []

    def addEdge(self, u, v, w):
        self.graph.append([u, v, w])

    def printArr(self, dist):
        print("Vertex Distance from Source")
        for i in range(self.V):
            print("{0}\t\t{1}".format(i, dist[i]))

    def BellmanFord(self, src):
        dist = [float("Inf")] * self.V
        dist[src] = 0
        # relax every edge |V| - 1 times
        for _ in range(self.V - 1):
            for u, v, w in self.graph:
                if dist[u] != float("Inf") and dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        # one more pass: any further improvement means a negative cycle
        for u, v, w in self.graph:
            if dist[u] != float("Inf") and dist[u] + w < dist[v]:
                print("Graph contains negative weight cycle")
                return
        self.printArr(dist)

g = Graph(500)
matrix = np.random.randint(5, size=(500, 500))
i = 0
for node in matrix:
    x = 0
    for index in node:
        g.addEdge(i, x, index)
        x += 1
    i += 1

start_time = time.time()
g.BellmanFord(0)
end_time = time.time()
print("Total time =", end_time - start_time)

# Measured results for increasing vertex counts
x = [25, 50, 100, 150, 200]
y = [0.15509796142578125, 0.7905609607696533, 5.7084784507751465,
     23.45373249053955, 72.62042880058289]

plt.plot(x, y, label="Time Taken Per n Vertices", color="red", marker="*")
plt.ylabel('Time Taken By BellmanFord Algorithm (in seconds)')
plt.xlabel('Number of Vertices In Graph')
plt.title('BellmanFord Algorithm - Time Analysis')
plt.legend()
plt.show()

2.3.3 Complexity - Theoretical Analysis
The Bellman-Ford algorithm calculates the shortest paths in a weighted digraph from one
source vertex to all other vertices. Bellman-Ford is also simpler than Dijkstra and suits
distributed systems well. However, the time complexity of Bellman-Ford is O(V*E), which
is higher than Dijkstra’s.

2.3.4 Assumptions and Limitations


Bellman-Ford does not work on undirected graphs with negative edges, since any such edge
traversed in both directions forms a negative cycle, which the Bellman-Ford algorithm
does not support.
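
To illustrate this limitation, the following is a small self-contained sketch (our own example, not part of the implementation above): modelling a single undirected edge of weight -2 as two directed edges creates a two-vertex negative cycle that the detection pass flags.

def bellman_ford_detects_negative_cycle(n, edges, src=0):
    # edges: list of (u, v, w) directed edges; n: number of vertices
    dist = [float('inf')] * n
    dist[src] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] != float('inf') and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # any edge that can still be relaxed lies on a negative cycle
    return any(dist[u] != float('inf') and dist[u] + w < dist[v]
               for u, v, w in edges)

# An undirected edge {0, 1} of weight -2 becomes two directed edges,
# so 0 -> 1 -> 0 is a cycle of total weight -4.
edges = [(0, 1, -2), (1, 0, -2)]
print(bellman_ford_detects_negative_cycle(2, edges))  # True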

2.4 Floyd–Warshall Algorithm
The Floyd-Warshall algorithm finds the shortest paths between all pairs of vertices in a
weighted graph. It works for both directed and undirected weighted graphs, but it does
not work for graphs with negative cycles (where the sum of the edges in a cycle is
negative).

The Floyd-Warshall algorithm is also known as Floyd’s algorithm, the Roy-Floyd algorithm,
the Roy-Warshall algorithm, or the WFI algorithm. It follows the dynamic programming
approach to find the shortest paths.

2.4.1 Basic Intuition


1. Create a matrix A^0 of dimension n*n, where n is the number of vertices. Rows are
   indexed by i and columns by j, where i and j are vertices of the graph. Each cell
   A[i][j] is filled with the distance from the i-th vertex to the j-th vertex. If there
   is no edge from the i-th vertex to the j-th vertex, the cell is left as infinity.
2. Now create a matrix A^1 using matrix A^0. The elements in the first column and the
   first row are left as they are. The remaining cells are filled in the following way:
   A[i][j] is replaced with (A[i][k] + A[k][j]) if (A[i][j] > A[i][k] + A[k][j]).
   Let k be the intermediate vertex in the shortest path from source to destination.
   In this step, k is the first vertex. If the direct distance from the source to the
   destination is greater than the path through vertex k, then the cell is filled with
   A[i][k] + A[k][j].
3. Similarly, A^2 is created using A^1. The elements in the second column and the second
   row are left as they are. In this step, k is the second vertex. The remaining steps
   are the same as in step 2.
4. Similarly, A^3 and A^4 are also created (this example assumes a graph with n = 4
   vertices).
5. A^n (here A^4) gives the shortest path between each pair of vertices; a concrete
   sketch follows below.
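
As a quick check of these steps, here is a small self-contained run (our own example graph, using float('inf') for missing edges) that builds A^1 through A^4 in place for a 4-vertex directed graph:

INF = float('inf')

# A^0: direct-edge distances for a 4-vertex directed graph (own example)
A = [[0,   3, INF, 7],
     [8,   0,   2, INF],
     [5, INF,   0, 1],
     [2, INF, INF, 0]]

n = 4
for k in range(n):                  # produces A^1 ... A^4
    for i in range(n):
        for j in range(n):
            # keep the direct route unless going through k is shorter
            A[i][j] = min(A[i][j], A[i][k] + A[k][j])

for row in A:
    print(row)
# [0, 3, 5, 6]
# [5, 0, 2, 3]
# [3, 6, 0, 1]
# [2, 5, 7, 0]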

2.4.2 Algorithm
The pseudocode for Floyd-Warshall algorithm is given below:

n = number of vertices
A = matrix of dimension n * n
for k = 1 to n
    for i = 1 to n
        for j = 1 to n
            A^k[i, j] = min(A^(k-1)[i, j], A^(k-1)[i, k] + A^(k-1)[k, j])
return A

The implementation to find the time is given below:

import timeit
import random

from matplotlib import pyplot as plt


# create a random weighted graph as an adjacency matrix
# (999 stands in for "no edge")
def make_graph(novertices):
    x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 999]
    G = []
    for i in range(novertices):
        row = []
        for j in range(novertices):
            row.append(random.choice(x))
        G.append(row)
    return G

# Algorithm implementation
def floyd_warshall(G, nV):
    # copy the input matrix into the distance table
    distance = list(map(lambda i: list(map(lambda j: j, i)), G))
    # adding vertices individually
    for k in range(nV):
        for i in range(nV):
            for j in range(nV):
                distance[i][j] = min(distance[i][j],
                                     distance[i][k] + distance[k][j])


x_list = []
floyd_warshall_list = []
for i in range(25, 200 + 1, 25):
    G = make_graph(i)

    start = timeit.default_timer()
    floyd_warshall(G, i)
    end = timeit.default_timer()
    floyd_warshall_list.append(end - start)

    x_list.append(i)


plt.plot(x_list, floyd_warshall_list, linestyle="solid", marker="*", c="blue",
         label="Floyd-Warshall - Time Analysis")
plt.title("Floyd-Warshall - Time Analysis")
plt.legend(['Time Taken Per n Nodes'], loc='upper left')
plt.xlabel("Number of nodes in Graph")
plt.ylabel("Time taken by Floyd-Warshall Algorithm (in seconds)")
plt.show()

2.4.3 Complexity - Theoretical Analysis
The time complexity of the Floyd-Warshall algorithm is O(n³), or O(|V|³), where V is the
set of vertices (nodes) in the graph. The running time is cubic because there are three
nested loops in the algorithm. This makes it less efficient than the Bellman-Ford
algorithm, which has a running time of O(|V| * |E|).

2.4.4 Assumptions and Limitations


The Floyd-Warshall algorithm is a weighted-graph algorithm that finds the shortest path
between all pairs of vertices. This approach works for weighted graphs that are both
directed and undirected. It does not, however, work on graphs with negative cycles
(where the sum of the edges in a cycle is negative).

2.5 Fibonacci Heap Algorithm
In computer science, a Fibonacci heap is a data structure for priority queue operations,
consisting of a collection of heap-ordered trees. It has a better amortized running time
than many other priority queue data structures including the binary heap and binomial
heap. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984
and published them in a scientific journal in 1987. Fibonacci heaps are named after
the Fibonacci numbers, which are used in their running time analysis. Using Fibonacci
heaps for priority queues improves the asymptotic running time of important algorithms,
such as Dijkstra’s algorithm for computing the shortest path between two nodes in a
graph, compared to the same algorithm using other slower priority queue data structures.
[8]

2.5.1 Basic Intuition


Here is how Fibonacci heaps implement the basic functionalities of heaps and the time
complexity of each operation.

Insert: Insertion into a Fibonacci heap is similar to the insert operation of a binomial
heap. A heap of one element is created and the two heaps are merged with the merge
function. The minimum element pointer is updated if necessary. The total number of
nodes in the tree increases by one.

Find Minimum: The linked list has pointers and keeps track of the minimum node, so
finding the minimum is simple and can be done in constant time.
Union: The union of two Fibonacci heaps consists of the following steps: concatenate the
root lists of both heaps, then update min by selecting the minimum key from the new root list.

Extract Min: It is the most important operation on a Fibonacci heap. In this operation,
the node with minimum value is removed from the heap and the tree is re-adjusted.
Deleting the minimum element is done in three steps. The node is removed from the root
list and the node’s children are added to the root list. Next, the minimum element is
updated if needed. Finally, consolidate the trees so that there are no repeated orders. If
any consolidation occurred, make sure to update the minimum element if needed. Delaying
consolidation saves time. [9]

2.5.2 Algorithm
The algorithm below is adapted from [9]:

Make-Fibonacci-Heap()
    n[H] := 0
    min[H] := NIL
    return H

Fibonacci-Heap-Minimum(H)
    return min[H]

Fibonacci-Heap-Link(H, y, x)
    remove y from the root list of H
    make y a child of x
    degree[x] := degree[x] + 1
    mark[y] := FALSE

CONSOLIDATE(H)
    for i := 0 to D(n[H])
        do A[i] := NIL
    for each node w in the root list of H
        do x := w
           d := degree[x]
           while A[d] <> NIL
               do y := A[d]
                  if key[x] > key[y]
                      then exchange x <-> y
                  Fibonacci-Heap-Link(H, y, x)
                  A[d] := NIL
                  d := d + 1
           A[d] := x
    min[H] := NIL
    for i := 0 to D(n[H])
        do if A[i] <> NIL
               then add A[i] to the root list of H
                    if min[H] = NIL or key[A[i]] < key[min[H]]
                        then min[H] := A[i]

Fibonacci-Heap-Union(H1, H2)
    H := Make-Fibonacci-Heap()
    min[H] := min[H1]
    concatenate the root list of H2 with the root list of H
    if (min[H1] = NIL) or (min[H2] <> NIL and min[H2] < min[H1])
        then min[H] := min[H2]
    n[H] := n[H1] + n[H2]
    free the objects H1 and H2
    return H

Fibonacci-Heap-Insert(H, x)
    degree[x] := 0
    p[x] := NIL
    child[x] := NIL
    left[x] := x
    right[x] := x
    mark[x] := FALSE
    concatenate the root list containing x with root list H
    if min[H] = NIL or key[x] < key[min[H]]
        then min[H] := x
    n[H] := n[H] + 1

Fibonacci-Heap-Extract-Min(H)
    z := min[H]
    if z <> NIL
        then for each child x of z
                 do add x to the root list of H
                    p[x] := NIL
             remove z from the root list of H
             if z = right[z]
                 then min[H] := NIL
                 else min[H] := right[z]
             CONSOLIDATE(H)
    n[H] := n[H] - 1
    return z

Fibonacci-Heap-Decrease-Key(H, x, k)
    if k > key[x]
        then error "new key is greater than current key"
    key[x] := k
    y := p[x]
    if y <> NIL and key[x] < key[y]
        then CUT(H, x, y)
             CASCADING-CUT(H, y)
    if key[x] < key[min[H]]
        then min[H] := x

CUT(H, x, y)
    remove x from the child list of y, decrementing degree[y]
    add x to the root list of H
    p[x] := NIL
    mark[x] := FALSE

CASCADING-CUT(H, y)
    z := p[y]
    if z <> NIL
        then if mark[y] = FALSE
                 then mark[y] := TRUE
                 else CUT(H, y, z)
                      CASCADING-CUT(H, z)

Fibonacci-Heap-Delete(H, x)
    Fibonacci-Heap-Decrease-Key(H, x, -infinity)
    Fibonacci-Heap-Extract-Min(H)

Our implementation to analyze the time is given below.

The imported fibonnaciheap module contains the following code:

import math

# Creating a Fibonacci tree (one heap-ordered tree in the root list)
class FibonacciTree:
    def __init__(self, value):
        self.value = value
        self.child = []
        self.order = 0

    # Adding a tree as a child of this tree
    def add_at_end(self, t):
        self.child.append(t)
        self.order = self.order + 1


# Creating a Fibonacci heap
class FibonacciHeap:
    def __init__(self):
        self.trees = []
        self.least = None
        self.count = 0

    # Insert a node
    def insert_node(self, value):
        new_tree = FibonacciTree(value)
        self.trees.append(new_tree)
        if (self.least is None or value < self.least.value):
            self.least = new_tree
        self.count = self.count + 1

    # Get minimum value
    def get_min(self):
        if self.least is None:
            return None
        return self.least.value

    # Extract the minimum value
    def extract_min(self):
        smallest = self.least
        if smallest is not None:
            for child in smallest.child:
                self.trees.append(child)
            self.trees.remove(smallest)
            if self.trees == []:
                self.least = None
            else:
                self.least = self.trees[0]
                self.consolidate()
            self.count = self.count - 1
            return smallest.value

    # Consolidate the trees so that no two roots share an order
    def consolidate(self):
        aux = (floor_log(self.count) + 1) * [None]

        while self.trees != []:
            x = self.trees[0]
            order = x.order
            self.trees.remove(x)
            while aux[order] is not None:
                y = aux[order]
                if x.value > y.value:
                    x, y = y, x
                x.add_at_end(y)
                aux[order] = None
                order = order + 1
            aux[order] = x

        self.least = None
        for k in aux:
            if k is not None:
                self.trees.append(k)
                if (self.least is None
                        or k.value < self.least.value):
                    self.least = k


def floor_log(x):
    return math.frexp(x)[1] - 1


fibonacci_heap = FibonacciHeap()

fibonacci_heap.insert_node(7)
fibonacci_heap.insert_node(3)
fibonacci_heap.insert_node(17)
fibonacci_heap.insert_node(24)

print('the minimum value of the fibonacci heap: {}'.format(fibonacci_heap.get_min()))

print('the minimum value removed: {}'.format(fibonacci_heap.extract_min()))

The main file is given below:

import fibonnaciheap
import time
import numpy as np
import matplotlib.pyplot as plt

def addNodes(G, nodes):
    for i in nodes:
        G[i] = []
    return G

def addEdges(G, edges, directed=False):
    for u, v, w in edges:
        if directed == True:
            G[u].append((v, w))
        else:
            G[u].append((v, w))
            G[v].append((u, w))
    return G

def adj_matrix_to_Graph(M):
    nodes = []
    for i in range(len(M)):
        nodes.append(i)
    edges = []
    for i in range(len(M)):
        for j in range(len(M[i])):
            if M[i][j] != 0:
                edges.append((i, j, M[i][j]))
    G = {}
    G = addNodes(G, nodes)
    G = addEdges(G, edges)
    return G

class Graph():
    def __init__(self, vertices):
        self.V = vertices
        self.graph = {}

    def dijkstra(self, src, to):
        inf = float('inf')
        cost = {}
        cost[src] = 0
        # the heap stores (distance, vertex) tuples, so tuple comparison
        # orders entries by tentative distance
        Q = fibonnaciheap.FibonacciHeap()
        Q.insert_node((0, src))
        for node in self.graph:
            if node != src:
                cost[node] = inf
        while Q.least != None:
            d, v = Q.extract_min()
            if d > cost[v]:
                continue  # stale entry, skip
            for u in self.graph[v]:
                distance = cost[v] + u[1]
                if distance < cost[u[0]]:
                    cost[u[0]] = distance
                    Q.insert_node((distance, u[0]))
        return cost[to]

x = []
y = []

for i in range(25, 200 + 1, 25):
    g = Graph(i)
    res = np.random.randint(5, size=(i, i))
    g.graph = adj_matrix_to_Graph(res)
    start_time = time.time()
    g.dijkstra(0, i - 1)
    end_time = time.time()
    x.append(i)
    y.append(end_time - start_time)


# plotting points
plt.plot(x, y, label="Time Taken Per n Vertices", color="red",
         marker="*")

# y-axis label
plt.ylabel('Time Taken By Dijkstra (in seconds)')
# x-axis label
plt.xlabel('Number of Vertices In The Graph')
# plot title
plt.title('Dijkstra Algorithm using Fibonnaci Heap - Time Analysis')
# showing legend
plt.legend()

# function to show the plot
plt.show()

2.5.3 Complexity - Theoretical Analysis


The amortized time complexities of the Fibonacci heap operations are given below:

• Insert: O(1)
• Find Minimum: O(1)
• Union: O(1)
• Extract Min: O(log n)
• Decrease Key: O(1)
• Delete: O(log n)

2.5.4 Assumptions and Limitations


Fibonacci heaps have a reputation for being slow in practice due to the large memory
consumption per node and high constant factors on all operations. Recent experimental
results suggest that Fibonacci heaps are more efficient in practice than most of their
later derivatives, including quake heaps, violation heaps, strict Fibonacci heaps, and
rank-pairing heaps, but less efficient than either pairing heaps or array-based heaps.

Chapter 3

Empirical Analysis

To implement BFS we used an adjacency matrix approach and monitored its performance
on graphs with varying numbers of nodes to determine its time complexity.

With regards to the implementations of Dijkstra and Bellman-Ford, we created graphs
in Python using the NumPy library, generating random n by n matrices with the built-in
function numpy.random.randint(range of values, size of matrix). We kept all of these
randomly generated values non-negative in order to ensure compatibility with the Dijkstra
and Bellman-Ford algorithms and the absence of negative cycles. We used Python’s native
time library to record the time it took for Dijkstra’s and Bellman-Ford’s algorithms to
find the shortest path from the selected node to all other nodes in the graphs. We then
analyzed the time complexities by comparing the empirical values to theoretical values,
plotting the results for a range of node counts, n, from 25 to 5000.

For Floyd-Warshall we created an adjacency matrix whose size gradually increased as the
number of nodes grew. The matrix was initialized with non-negative weights attached to
each edge. Using the aforementioned libraries we ran the algorithm and plotted the run
time against the number of nodes to analyze the time complexity and compare theoretical
and empirical values.
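
The pattern shared by all of these experiments can be summarized in a short sketch (a simplification of the scripts listed in Chapter 2; the run_trials helper and the placeholder algorithm are our own illustrative choices):

import time
import numpy as np

def run_trials(algorithm, sizes, weight_range=5):
    # time algorithm(matrix) on random non-negative adjacency matrices;
    # returns parallel lists of sizes and elapsed seconds for plotting
    xs, ys = [], []
    for n in sizes:
        matrix = np.random.randint(weight_range, size=(n, n))  # no negatives
        start = time.time()
        algorithm(matrix)
        ys.append(time.time() - start)
        xs.append(n)
    return xs, ys

# Example with a trivial placeholder "algorithm":
xs, ys = run_trials(lambda m: m.sum(), sizes=range(25, 201, 25))
print(xs, ys)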

Figure 3.1: Graph of BFS Algorithm time analysis

Figure 3.2: Graph of Dijkstra Algorithm time analysis

Figure 3.3: Graph of Dijkstra Algorithm time analysis

Figure 3.4: Graph of Dijkstra Algorithm using Fibonacci Heap time analysis

Figure 3.5: Graph of Bellman-Ford Algorithm time analysis

Figure 3.6: Graph of Floyd-Warshall Algorithm time analysis

Figure 3.7: Comparison of all Algorithm time analysis

3.1 Results
BFS calculates the shortest paths in unweighted graphs. On the other hand, Dijkstra’s
algorithm calculates the same thing in weighted graphs. BFS runs in O(E + V), while
Dijkstra’s runs in O((V + E) log V). As is clear from the performance graphs above,
Dijkstra performs significantly better than BFS.

We found in our empirical time analysis that Dijkstra performed flawlessly, taking roughly
linear time to identify the shortest paths among n nodes, as depicted in the graph.

We obtained a nearly quadratic curve for the Bellman-Ford algorithm due to the constraints
we imposed on our input matrices, which ensured that each node was linked to at least
one edge, implying that the number of edges E is at least V + 1. Thus, we can say that
this graph empirically satisfies our theoretical complexity, which is reduced from O(V*E)
to O(V²).

The Floyd-Warshall algorithm comes close to O(V³) in the practical model. The matrices
contain no negative weighted edges, and the weights are randomly assigned.

Hence we conclude that the Dijkstra algorithm is significantly better than the
Bellman-Ford algorithm with regards to time complexity. However, the Dijkstra algorithm
doesn’t work for negatively weighted graphs, so we use the Bellman-Ford algorithm to
perform the computation for negatively weighted graphs. The Bellman-Ford algorithm is
much better in terms of run time compared with the Floyd-Warshall algorithm.
Floyd-Warshall works with undirected graphs, unlike Bellman-Ford, so it is used for
undirected graphs over Bellman-Ford despite Bellman-Ford having a better runtime.

Additionally, we implemented a priority queue and compared Dijkstra’s performance.
Dijkstra’s original shortest path algorithm does not use a priority queue and runs in
O(V²) time. When using a Fibonacci heap as a priority queue, it runs in O(E + V log V)
time, which is asymptotically the fastest known time complexity for this problem.
However, due to their programming complexity, and for some practical purposes, Fibonacci
heaps are not always a necessary or significant improvement. This is evident from the
runtime graphs plotted above for both implementations.

We finally plotted the run time graphs of all the algorithms chosen for our project to
analyse and compare their overall trends. As we can see in Figure 3.7, Dijkstra and BFS
have completely overlapping lines, indicating how close in performance they are, though
Dijkstra is significantly better than BFS according to our time complexity analysis.

Next, we see that Dijkstra’s implementation with a Fibonacci heap falls just above
Dijkstra’s performance, indicating that despite having the fastest asymptotic time
complexity it doesn’t yield a significant improvement on the shortest path problem.

Floyd-Warshall, on the other hand, takes much longer than Dijkstra, BFS, and the
Fibonacci heap implementation combined. Bellman-Ford takes even longer than
Floyd-Warshall in solving the shortest path problem.

3.2 Limitations
For our empirical analysis we tried to increase the number of vertices in the graph
beyond 200, but due to machine limitations we were unable to execute runs with more
than 200 vertices.

Chapter 4

Theoretical Comparison

BFS finds the shortest path in undirected and directed graphs, but it will not work on
weighted graphs, since the path with the fewest edges may not be the shortest if the edges
it contains are expensive. The time complexity of BFS is O(V + E) when an adjacency
list is used and O(V²) when an adjacency matrix is used.

Dijkstra’s algorithm addresses this limitation. Its main advantage is its considerably
low, almost linear, complexity. It works correctly for directed and undirected graphs;
however, Dijkstra’s algorithm can’t be used when working with negative weights. Also,
when working with dense graphs, where E is close to V², if we need to calculate the
shortest path between any pair of nodes, Dijkstra’s algorithm is not a good option, as
the complexity will then be O(V² log V).

The main advantage of the Bellman-Ford algorithm is its capability to handle negative
weights, in addition to directed and undirected graphs. However, the Bellman-Ford
algorithm has a considerably larger complexity than Dijkstra’s algorithm. Therefore,
Dijkstra’s algorithm has more applications, because graphs with negative weights are
usually considered a rare case. As mentioned earlier, the Bellman-Ford algorithm can
handle directed and undirected graphs with negative weights, but only as long as there
are no negative cycles. To avoid this limitation, we turn to Floyd-Warshall.

Floyd-Warshall Algorithm is an algorithm for finding the shortest path between all the
pairs of vertices in a weighted graph. This algorithm works for both the directed and
undirected weighted graphs. But, it does not work for the graphs with negative cycles.

Chapter 5

Task Distribution

• Muhammad Hammad Maqdoom worked on BFS

• Alisha Momin worked on Dijkstra And Bellman Ford

• Lama Imam and Fatima Nasir Khan worked on Floyd Warshall and Fibonacci heap
for Dijkstra

• Lama Imam and Fatima Nasir Khan worked on the presentation and poster for Demo

Bibliography

[1] https://techvidvan.com/tutorials/breadth-first-search/
[2] https://en.wikipedia.org/wiki/Breadth-first_search
[3] http://www.gitta.info/Accessibiliti/en/html/Dijkstra_learningObject1.html
[4] https://www.programiz.com/dsa/bellman-ford-algorithm
[5] https://www.javatpoint.com/breadth-first-search-algorithm
[6] https://en.wikipedia.org/wiki/Breadth-first_search
[7] https://www.programiz.com/dsa/floyd-warshall-algorithm
[8] https://en.wikipedia.org/wiki/Fibonacci_heap
[9] https://brilliant.org/wiki/fibonacci-heap/
[10] https://kbaile03.github.io/projects/fibo_dijk/fibo_dijk.html
[11] https://www.programiz.com/dsa/fibonacci-heap
