PRACTICAL FILE
OF
DESIGN AND ANALYSIS OF ALGORITHMS (PC-CS214AL)
SESSION: 2023-24
Submitted To:
Mrs. Anjali Chaudhary
Assistant Professor (CSE)
Submitted By:
Tushar Mandhan
2022027566 (CSE)
INDEX
S.No Title of the Practical Practical Date Signature
Practical No. 1(a)
Aim: Write a program to implement linear search.
Theory:
A linear search sequentially checks each element of the array until the key is found or the end of the array is reached.
For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30.
Step 1: Start from the first element (index 0) and compare key with each element (arr[i])
• Comparing key with first element arr[0]. Since not equal, the iterator moves to the next element
as a potential match.
• Comparing key with next element arr[1]. Since not equal, the iterator moves to the next element
as a potential match.
Step 2: Now when comparing arr[2] with key, the value matches. So the Linear Search
Algorithm will yield a successful message and return the index of the element when key is found
(here 2).
The time complexity of linear search is O(n) because, in the worst case, each of the n elements is compared once.
Algorithm:
LinearSearch(arr[], target):
1. For each index i from 0 to length of the array - 1:
a. If arr[i] equals target:
- Return i.
2. If the loop finishes without a match:
- Return -1 to indicate that the target is not present in the array.
Applications of Linear Search:
• Phonebook Search: Linear search can be used to search through a phonebook to find a person’s
name, given their phone number.
• Spell Checkers: The algorithm compares each word in the document to a dictionary of correctly
spelled words until a match is found.
• Finding Minimum and Maximum Values: Linear search can be used to find the minimum and
maximum values in an array or list.
• Searching through unsorted data: Linear search is useful for searching through unsorted data.
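Implementation: (the original listing was lost in extraction; the following is a minimal C++ sketch of the algorithm above, reusing the example array — the function name linearSearch is an assumption)
#include <iostream>
using namespace std;

// Return the index of target in arr[0..n-1], or -1 if absent
int linearSearch(int arr[], int n, int target)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == target)
            return i;
    return -1;
}

int main()
{
    int arr[] = { 10, 50, 30, 70, 80, 20, 90, 40 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int key = 30;
    int index = linearSearch(arr, n, key);
    if (index != -1)
        cout << "Element " << key << " found at index " << index << endl;
    else
        cout << "Element " << key << " not found" << endl;
    return 0;
}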
Output:
Short Questions with Answers:
1. What is a linear search?
A linear search is a method for finding an element in an array. It works by sequentially checking each element in the array until it finds the desired element or until it reaches the end of the array.
2. Can you explain the process used to implement a linear search algorithm?
A linear search algorithm searches through a list of items, one by one, until it finds the target
item. It does this by comparing the target item to each item in the list until it finds a match. If the
target item is not in the list, then the search will fail.
3. How would you define a linear search?
A linear search is a method for finding an element within a data structure, such as an array, that consists of sequentially checking each element in the data structure until the desired element is found or the end of the data structure is reached.
4. Can you give me some examples of where linear searches are used?
Linear searches are used in a variety of places, but they are especially common in situations
where the data being searched is not sorted in any particular order. For example, if you were
looking for a specific word in a book, you would likely use a linear search, since the pages are
not sorted in any particular way. Another common use for linear searches is when searching
through unsorted data in a database.
Any sorting algorithm can be used in conjunction with linear search, but some will be more effective than others. Linear search itself does not require sorted data; however, if the data is already sorted, the search can stop early once it passes the position where the target would appear. Sorting the data also opens the door to faster alternatives such as binary search, so in general any sorting algorithm that puts the data in a specific order makes subsequent searches more effective.
7. Can you explain what data locality means in the context of linear search?
Data locality describes how close together in memory the data being accessed is. In the context of linear search, the array elements are stored contiguously and are scanned sequentially, which gives good spatial locality: once a block of the array is loaded into the processor's cache, the next several comparisons are served from the cache and the search is faster. Data scattered far from the processor's cache is slower to search.
The average time complexity of linear search is O(n), and the worst-case time complexity is also O(n). This means that, on average, the algorithm takes on the order of n steps to find the desired element in the array. In the worst case, it takes n steps, which happens when the desired element is the last one in the array or is not present at all.
The space complexity of linear search is O(1), because it only needs a constant amount of extra storage: the key being searched for and the current index.
10. Can you explain the difference between unsorted and sorted lists when it
comes to implementing a linear search?
When you are looking through an unsorted list, you can simply start at the beginning of the list
and check each element until you find the one you are looking for (or reach the end of the list).
With a sorted list, you can take advantage of the fact that the list is in order to speed up the
search. You can start in the middle of the list and then only search the half of the list that could
contain the element you are looking for.
Practical No. 1(b)
Aim: Write a program to implement binary search.
Theory:
— Binary Search is a searching algorithm used in a sorted array by repeatedly dividing the
search interval in half.
— The idea of binary search is to use the information that the array is sorted and reduce the time
complexity to O(Log n).
— It is also known as half-interval search, logarithmic search, or binary chop.
— It works for both negative and duplicate valued elements.
— It is one of the divide and conquer algorithms.
— Binary search compares the target value to the middle of the array; if they are unequal, the half in which the target cannot lie is eliminated and the search continues in the other half.
— There are two necessary conditions in binary search: 1. The list must be sorted. 2. One must have direct access to the middle element in the sublist.
— There are two stopping conditions in the binary search program:
1. when the element to be searched is found, along with its location.
2. when the element is not found in the whole list.
— Binary Search Algorithm can be implemented in the following two ways
1. Iterative Method: Iterative refers to a sequence of instructions or code being repeated until a specific end result is achieved. Iterative development is sometimes called circular or evolutionary development. The iterative implementation of binary search uses a loop with two index pointers (low and high).
2.Recursive Method: A method or algorithm that invokes itself one or more times with different
arguments.Recursion is the technique of making a function call itself. This technique provides a
way to break complicated problems down into simple problems which are easier to solve.
Algorithm:
BinarySearch(arr[], target):
1. Initialize two pointers, low = 0 and high = length of the array - 1.
2. While low is less than or equal to high:
a. Find the middle index of the array:
mid = (low + high) // 2
b. If the middle element equals the target: - Return the index of the middle element.
c. If the middle element is greater than the target:
- Update high to mid - 1.
d. If the middle element is less than the target:
- Update low to mid + 1.
3. If the target is not found in the array:
- Return -1 to indicate that the target is not present in the array.
When the element being searched for is found, the algorithm returns the index of the matching element.
Complexity :
Now, let's see the time complexity of Binary search in the best case, average case, and worst
case.
We will also see the space complexity of Binary search.
1. Time Complexity
o Best Case Complexity - In Binary search, the best case occurs when the element to search is found in the first comparison, i.e., when the first middle element itself is the element to be searched. The best-case time complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search is O(log n).
o Worst Case Complexity - In Binary search, the worst case occurs when we have to keep reducing the search space till it has only one element. The worst-case time complexity of Binary search is O(log n).
2. Space Complexity
The space complexity of the iterative binary search is O(1), since it only maintains the low, high, and mid indices; the recursive implementation uses O(log n) space on the call stack.
Applications of Binary Search:
Binary search finds application in various fields and scenarios where efficient searching of sorted
data is required. Some common applications include:
Searching in Databases : Binary search is widely used in database systems to quickly locate
records based on key values. Index structures like B-trees and binary search trees rely on binary
search for efficient data retrieval.
Searching in Arrays : Binary search efficiently locates elements in sorted arrays. It's used in
programming tasks such as searching for a specific value in a sorted list, determining the position
of an element, or finding the nearest value.
Text Processing : In text processing tasks, binary search helps in searching for keywords or
phrases in sorted lists of words or documents, enabling faster information retrieval and text
indexing.
Finding Smallest or Largest Value : Binary search can be applied to find the smallest or largest
value that satisfies a certain condition within a sorted dataset, such as the smallest number greater
than a given threshold.
Searching in Graphs and Trees : Binary search is used in graph and tree algorithms, such as
finding the lowest common ancestor in a binary tree or searching for elements in a sorted binary
search tree.
Approximate Matching : Binary search can be used for approximate matching or fuzzy
searching, where it efficiently locates entries that are close to a target value within a certain
tolerance.
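Implementation: (the original listing was lost in extraction; the following is a minimal iterative C++ sketch of the algorithm above — the sample array is illustrative)
#include <iostream>
using namespace std;

// Return the index of target in the sorted array arr[0..n-1], or -1 if absent
int binarySearch(int arr[], int n, int target)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // avoids overflow of (low + high)
        if (arr[mid] == target)
            return mid;
        else if (arr[mid] > target)
            high = mid - 1; // target can only lie in the left half
        else
            low = mid + 1;  // target can only lie in the right half
    }
    return -1;
}

int main()
{
    int arr[] = { 10, 20, 30, 40, 50, 60, 70 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int key = 40;
    int index = binarySearch(arr, n, key);
    if (index != -1)
        cout << "Element " << key << " found at index " << index << endl;
    else
        cout << "Element " << key << " not found" << endl;
    return 0;
}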
Output
Q: What is the time complexity of binary search?
A: The time complexity of binary search is O(log n), where n is the size of the sorted input array.
Q: How does binary search compare to linear search in terms of time complexity?
A: Binary search is more efficient than linear search, as it reduces the search space by half in
each step.
Q: What happens if the target element is not present in the sorted array during binary search?
A: Binary search returns -1 (indicating not found) or an appropriate value based on the problem
context.
Q: If the array contains duplicate elements, which occurrence does binary search find?
A: Binary search may return any occurrence of the target element, but it doesn’t guarantee finding the first or last occurrence.
Q: What is the space complexity of iterative binary search?
A: The space complexity is O(1) (constant) because it doesn’t use additional data structures.
PRACTICAL NUMBER :- 2
Aim : Sort a given set of elements using the Quick sort method and determine the time required
to sort the elements. Repeat the experiment for different values of n, the number of elements in
the list to be sorted and plot a graph of the time taken versus n. The elements can be read from a
file or can be generated using the random number generator.
Theory :
Divide and Conquer:
➢ Quick-Sort divides the array into smaller sub-arrays.
➢ It recursively sorts these sub-arrays.
Pivot Element:
➢ Choose a pivot element from the array. The pivot is used for partitioning.
➢ Common choices include the first, last, or a random element.
Partitioning:
➢ Rearrange the elements in the array so that elements less than the pivot are on the left, and elements greater than the pivot are on the right.
➢ The pivot itself is in its final sorted position.
Recursive Call:
➢ Apply Quick-Sort recursively to the sub-arrays on the left and right of the pivot.
Base Case:
➢ The base case of the recursion is when the sub-array has zero or one element, as it is already sorted.
In-Place Sorting:
➢ Quick-Sort often operates in-place, meaning it doesn't require additional memory for a new array.
Efficiency:
➢ On average, Quick-Sort has O(n log n) time complexity, making it efficient for large datasets.
➢ However, in the worst case (rare), it can have O(n^2) time complexity.
Not Stable:
➢ Quick-Sort is not a stable sorting algorithm, meaning the relative order of equal elements may change.
In the given array, we consider the leftmost element as pivot. So, in this case, a[left] = 24,
a[right] = 27 and a[pivot] = 24.
Since pivot is at left, the algorithm starts from right and moves towards left.
Now, a[pivot] < a[right], so the algorithm moves forward one position towards left, i.e. -
Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since pivot is at right, the algorithm starts from left and moves to right.
As a[pivot] > a[left], the algorithm moves one position to right as -
Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], the algorithm moves one position to right as -
Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], swap a[pivot] and a[left]; now pivot is at left, i.e. -
Since pivot is at left, the algorithm starts from right and moves to left. Now, a[left] = 24, a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], the algorithm moves one position to left, as -
Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], swap a[pivot] and a[right]; now pivot is at right, i.e. -
Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. Pivot is at right, so the algorithm starts from left and moves to right.
Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So pivot, left and right are all pointing to the same element. This represents the termination of the procedure.
Element 24, which is the pivot element is placed at its exact position.
Elements to the right side of element 24 are greater than it, and the elements to the left side of element 24 are smaller than it.
Now, in a similar manner, quick sort algorithm is separately applied to the left and right
subarrays. After sorting gets done, the array will be -
Algorithm:
QuickSort(arr[], low, high):
1. If low < high:
a. Partition the array into two halves using a pivot element:
pivot_index = Partition(arr, low, high)
b. Call QuickSort recursively for the left half:
QuickSort(arr, low, pivot_index - 1)
c. Call QuickSort recursively for the right half:
QuickSort(arr, pivot_index + 1, high)
Space Complexity :
❖ The space complexity is generally O(log n) due to the recursive call stack. In the worst case, it
can be O(n) if the recursion depth becomes the same as the input size (unbalanced partitions).
❖ In summary, Quick-Sort is efficient on average with a time complexity of O(n log n), but its
worst-case time complexity is O(n^2). The space complexity is typically O(log n), making it a
good choice for sorting large datasets in-place.
❖ Sorting Algorithms:
Quick-Sort is primarily used for sorting elements in an array or list efficiently. It is a popular choice for sorting large datasets due to its average-case time complexity of O(n log n).
❖ File Systems:
Quick-Sort is applied in file systems for sorting and managing files. It helps organize and retrieve data more efficiently, especially in scenarios where quick access to sorted information is crucial.
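Implementation: (only the first include of the original listing survived extraction; the following is a minimal C++ sketch matching the aim — random inputs of size n timed with clock(); the Lomuto partition scheme and the range of n are illustrative choices)
#include <iostream>
#include <algorithm>
#include <cstdlib>
#include <ctime>
using namespace std;

// Place the last element as pivot at its sorted position and
// return its index (Lomuto partition scheme)
int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++)
        if (arr[j] < pivot)
            swap(arr[++i], arr[j]);
    swap(arr[i + 1], arr[high]);
    return i + 1;
}

void quickSort(int arr[], int low, int high)
{
    if (low < high) {
        int pivot_index = partition(arr, low, high);
        quickSort(arr, low, pivot_index - 1);
        quickSort(arr, pivot_index + 1, high);
    }
}

int main()
{
    srand(time(0));
    // Repeat for different n and note the time taken, to plot time versus n
    for (int n = 1000; n <= 10000; n += 1000) {
        int* arr = new int[n];
        for (int i = 0; i < n; i++)
            arr[i] = rand(); // random input, as the aim allows
        clock_t start = clock();
        quickSort(arr, 0, n - 1);
        clock_t end = clock();
        cout << "n = " << n << ", time = "
             << double(end - start) / CLOCKS_PER_SEC << " s\n";
        delete[] arr;
    }
    return 0;
}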
Q: What is Quick-Sort?
A: Quick-Sort is a sorting algorithm that uses a divide-and-conquer strategy to efficiently sort an
array or list.
PRACTICAL NUMBER :- 3
Aim : Using OpenMP, implement a parallelized Merge Sort algorithm to sort a given set of elements and determine the time required to sort the elements. Repeat the experiment for
different values of n, the number of elements in the list to be sorted and plot a graph of the time
taken versus n.
The elements can be read from a file or can be generated using the random number generator.
Theory :
❖ Algorithm Type: Merge sort is a well-known sorting algorithm that efficiently sorts
arrays or lists by breaking them down into smaller sublists, sorting these sublists, and then
merging them back together.
❖ Approach: It employs the divide-and-conquer strategy, dividing the unsorted list into smaller
sublists until each sublist contains only one element, making them inherently sorted.
❖ Divide Phase: During the divide phase, merge sort recursively divides the unsorted list into
halves until each sublist consists of one element, forming the base case for sorting.
❖ Recursive Sorting: Through recursive sorting, merge sort sorts the sublists by continuously
dividing them into smaller halves and sorting them individually, ensuring that each sublist is
sorted before proceeding to merge them.
❖ Merge Operation: After sorting the sublists individually, merge sort merges them back
together by comparing elements from each sublist and arranging them in the correct order,
ultimately producing a single sorted list.
❖ Time Complexity: Merge sort boasts a time complexity of O(n log n), making it highly
efficient for sorting large datasets, thanks to its balanced division of the input list and optimal
merging process.
❖ Efficiency: Due to its efficient divide-and-conquer strategy and optimal merging process,
merge sort is well-suited for handling large datasets, outperforming many other sorting
algorithms in terms of speed and performance.
❖ Stability: One of the advantages of merge sort is its stability, meaning it maintains the
relative order of equal elements, ensuring that elements with the same value remain in the
same order as they were initially.
According to the merge sort, first divide the given array into two equal halves. Merge sort keeps
dividing the list into equal parts until it cannot be further divided.
As there are eight elements in the given array, so it is divided into two arrays of size 4.
Now, again divide these two arrays into halves. As they are of size 4, so divide them into new
arrays of size 2.
Now, again divide these arrays to get the atomic value that cannot be further divided.
In the next iteration of combining, now compare the arrays with two data values and merge them into an array of four values in sorted order.
Now, there is a final merging of the arrays. After the final merging of the above arrays, the array will look like -
Now, the array is completely sorted.
Algorithm:
MergeSort(arr[], left, right):
1. If left < right:
a. Find the middle point to divide the
array into two halves:
middle = (left + right) // 2
b. Call MergeSort recursively for the left half:
MergeSort(arr, left, middle)
c. Call MergeSort recursively for the right half:
MergeSort(arr, middle + 1, right)
d. Merge the two sorted halves using a temporary array:
Merge(arr, left, middle, right)
Now, let's see the time complexity of merge sort in best case, average case, and in worst case.
We will also see the space complexity of the merge sort.
1. Time Complexity
Case            Time Complexity
Best Case       O(n log n)
Average Case    O(n log n)
Worst Case      O(n log n)
❖ Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. The best-case time complexity of merge sort is O(n log n).
❖ Average Case Complexity - It occurs when the array elements are in a jumbled order that is not properly ascending and not properly descending. The average case time complexity of merge sort is O(n log n).
❖ Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That means suppose you have to sort the array elements in ascending order, but its elements are in descending order. The worst-case time complexity of merge sort is O(n log n).
2. Space Complexity
Space Complexity   O(n)
Stable             Yes
o The space complexity of merge sort is O(n), because merging requires a temporary array proportional to the size of the input to combine the sorted halves.
❖ Sorting Algorithms:
Merge sort is primarily used for sorting elements in an array or list efficiently. It is a popular choice for sorting large datasets due to its guaranteed time complexity of O(n log n).
❖ File Systems:
Merge sort is applied in file systems for sorting and managing files. It helps organize and retrieve data more efficiently, especially in scenarios where quick access to sorted information is crucial.
❖ Database Management Systems:
Merge sort is employed in database systems to sort and organize records. It enhances the performance of query operations that require sorted data, such as searching for specific values or generating reports.
❖ Network Routing:
Merge sort can be utilized in network routing algorithms where a quick arrangement of data is needed. This is beneficial for optimizing the transmission of data packets in networking applications.
❖ Compiler Optimizations:
Compilers use merge sort in various optimization tasks, such as sorting symbol tables or optimizing code generation. It helps in managing and organizing information within the compiler efficiently.
#include <iostream>
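// (everything between the include above and the tail of merge() below was
// lost in extraction; the following is a standard reconstruction, with
// helper names — mergeSort, merge, printArray — taken from the calls in
// main() below)
using namespace std;

void merge(int a[], int beg, int mid, int end); // completed below

// print the array elements
void printArray(int a[], int n)
{
    for (int i = 0; i < n; i++)
        cout << a[i] << " ";
}

// recursively split a[beg..end] and merge the sorted halves
void mergeSort(int a[], int beg, int end)
{
    if (beg < end) {
        int mid = (beg + end) / 2;
        mergeSort(a, beg, mid);
        mergeSort(a, mid + 1, end);
        merge(a, beg, mid, end);
    }
}

// merge the sorted runs a[beg..mid] and a[mid+1..end]
void merge(int a[], int beg, int mid, int end)
{
    int n1 = mid - beg + 1;
    int n2 = end - mid;
    int LeftArray[n1], RightArray[n2]; // temporaries (g++ variable-length arrays)

    for (int x = 0; x < n1; x++)
        LeftArray[x] = a[beg + x];
    for (int y = 0; y < n2; y++)
        RightArray[y] = a[mid + 1 + y];

    int i = 0, j = 0, k = beg;
    // copy the smaller front element of the two runs at each step
    while (i < n1 && j < n2) {
        if (LeftArray[i] <= RightArray[j])
            a[k++] = LeftArray[i++];
        else
            a[k++] = RightArray[j++];
    }
    // copy any leftovers of the left run...
    while (i < n1) {
        a[k] = LeftArray[i];
        i++;
        k++;
    }
    // ...and of the right run (the loop below survived extraction):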
while (j<n2)
{
a[k] = RightArray[j];
j++;
k++;
}
}
int main() {
    cout << "Tushar Mandhan \n" << "Roll no. 2022027566" << endl;
    int a[] = { 11, 30, 24, 7, 31, 16, 39, 41 };
    int n = sizeof(a) / sizeof(a[0]);
    cout << "Before sorting array elements are - \n";
printArray(a, n);
mergeSort(a, 0, n - 1);
cout<<"\nAfter sorting array elements are - \n";
printArray(a, n);
return 0;
}
Output :
Q: What is the main advantage of merge sort over other sorting algorithms?
A: One main advantage of merge sort is its consistent performance and efficiency for large
datasets, making it suitable for various applications.
PRACTICAL NUMBER:-4
AIM:- Write a program to implement 0/1 knapsack problem using dynamic programming.
EXAMPLE:-
1. First, we will be provided weights and values of n items, in this case, six items.
2. We will then put these items in a knapsack of capacity W or, in our case, 10kg to get the
maximum total value in the knapsack.
3. After putting the items, we have to fill this knapsack, but we can't break the item. So, we
must either pick the entire item or skip it.
4. Sometimes this may even lead to a knapsack with some spare space left with it.
Note: It should be noted that the above function using recursion computes the same subproblems again and again. In the following recursion tree, K() refers to knapSack() and K(1, 1) is evaluated twice. The two parameters indicated in the recursion tree are n and W.
The recursion tree is for the following sample inputs: weight[] = {1, 1, 1}, W = 2, profit[] = {10, 20, 30}.
K(3, 2)
/ \
/ \
K(2, 2) K(2, 1)
/ \ / \
/ \ / \
K(1, 2) K(1, 1) K(1, 1) K(1, 0)
/ \ / \ / \
/ \ / \ / \
K(0, 2) K(0, 1) K(0, 1) K(0, 0) K(0, 1) K(0, 0)
Recursion tree for Knapsack capacity 2 units and 3 items of 1 unit weight.
As there are repetitions of the same subproblem again and again we can implement the following
idea to solve the problem.
If we encounter a subproblem for the first time, we can solve it and store its result in a 2-D array indexed by the state (n, w). If we come across the same state (n, w) again, instead of recomputing it in exponential time, we can directly return its result stored in the table in constant time.
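The note above describes top-down memoization, while the implementation below fills the table bottom-up. For comparison, a minimal memoized (top-down) C++ sketch, assuming small illustrative bounds for the memo table:
#include <iostream>
#include <cstring>
#include <algorithm>
using namespace std;

int memo[101][1001]; // memo[n][W]; -1 means "not computed yet" (assumed bounds)

// Best value achievable from the first n items with remaining capacity W
int knapSack(int w[], int v[], int n, int W)
{
    if (n == 0 || W == 0)
        return 0;
    if (memo[n][W] != -1)
        return memo[n][W]; // reuse a previously solved state
    int result;
    if (w[n - 1] > W)
        result = knapSack(w, v, n - 1, W); // item doesn't fit: skip it
    else
        result = max(knapSack(w, v, n - 1, W),                        // skip
                     v[n - 1] + knapSack(w, v, n - 1, W - w[n - 1])); // take
    return memo[n][W] = result;
}

int main()
{
    int w[] = { 1, 1, 1 }, v[] = { 10, 20, 30 }; // the sample inputs above
    memset(memo, -1, sizeof(memo));
    cout << knapSack(w, v, 3, 2) << endl; // prints 50 for this sample
    return 0;
}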
IMPLEMENTATION :-
#include <iostream>
using namespace std;

int max(int x, int y)
{
    return (x > y) ? x : y;
}

// Returns the maximum value that fits in a knapsack of capacity W
int knapSack(int W, int w[], int v[], int n)
{
    int i, wt;
    int K[n + 1][W + 1]; // K[i][wt] = best value using the first i items with capacity wt
    for (i = 0; i <= n; i++) {
        for (wt = 0; wt <= W; wt++) {
            if (i == 0 || wt == 0)
                K[i][wt] = 0; // no items or no capacity
            else if (w[i - 1] <= wt)
                // take item i-1 or skip it, whichever is better
                K[i][wt] = max(v[i - 1] + K[i - 1][wt - w[i - 1]], K[i - 1][wt]);
            else
                K[i][wt] = K[i - 1][wt]; // item doesn't fit: skip it
        }
    }
    return K[n][W];
}

int main()
{
    cout << "Enter the number of items in a Knapsack:";
    int n, W;
    cin >> n;
    int v[n], w[n];
    for (int i = 0; i < n; i++) {
        cout << "Enter value and weight for item " << i << ":";
        cin >> v[i] >> w[i];
    }
    cout << "Enter the capacity of knapsack";
    cin >> W;
    cout << knapSack(W, w, v, n);
    return 0;
}
❖ What is the greedy approach for solving the fractional knapsack problem?
➢ The greedy approach selects items based on their value-to-weight ratio, adding the most valuable items first.
❖ How do you calculate the value-to-weight ratio for items in the fractional knapsack problem?
➢ Divide the value of an item by its weight.
❖ What is the time complexity of the dynamic programming solution for the 0/1 knapsack problem?
➢ The dynamic programming solution has a time complexity of O(nW), where n is the number of items and W is the knapsack capacity.
❖ What is the time complexity of the greedy solution for the fractional knapsack problem?
➢ The greedy solution has a time complexity of O(n log n), where n is the number of items.
PRACTICAL NUMBER- 5(a)
AIM:- Write a program to implement traversing in a digraph using Breadth First Search.
THEORY: Breadth First Search (BFS) is a fundamental graph traversal algorithm. It involves visiting all the connected nodes of a graph in a level-by-level manner. Here, we will look into the concept of BFS and how it can be applied to graphs effectively.
Breadth First Search (BFS) algorithm starts at the tree root and explores all nodes at the present
depth prior to moving on to the nodes at the next depth level.
As in the example given above, BFS algorithm traverses from A to B to E to F first then to C and
G lastly to D. It employs the following rules.
• Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Insert it in a queue.
• Rule 2 − If no adjacent vertex is found, remove the first vertex from the queue.
• Rule 3 − Repeat Rule 1 and Rule 2 until the queue is empty.
WORKING:-
Consider a graph with starting node S, where S is adjacent to A, B and C, and A is adjacent to D.
1. Initialize the queue.
2. Start by visiting S (the starting node), mark it as visited and enqueue it.
3. S has three unvisited adjacent nodes; alphabetically we choose A, mark it as visited and enqueue it.
4. Next, the unvisited adjacent node from S is B. We mark it as visited and enqueue it.
5. Next, the unvisited adjacent node from S is C. We mark it as visited and enqueue it.
6. Now, S is left with no unvisited adjacent nodes. So, we dequeue and find A.
7. From A we have D as an unvisited adjacent node. We mark it as visited and enqueue it.
At this stage, we are left with no unmarked (unvisited) nodes. But as per the algorithm we keep
on dequeuing in order to get all unvisited nodes. When the queue gets emptied, the program is
over.
Time Complexity
The time complexity of the BFS algorithm is represented in the form of O(V + E), where V is the
number of nodes and E is the number of edges.
Space Complexity
The space complexity of the BFS algorithm is O(V), since in the worst case the queue and the visited markers together hold all V vertices.
BFS Algorithm
BFS(Graph, start_vertex):
1. Initialize an empty queue and a set to keep track of visited vertices.
2. Enqueue the start_vertex and mark it as visited.
3. While the queue is not empty:
a. Dequeue a vertex from the queue.
b. Process the vertex (e.g., print it).
c. For each neighbor of the dequeued vertex:
i. If the neighbor has not been visited:
- Enqueue the neighbor.
- Mark the neighbor as visited.
4. Repeat step 3 until the queue is empty.
1.Shortest Path and Minimum Spanning Tree for unweighted graph: In an unweighted
graph, the shortest path is the path with the least number of edges. With Breadth First, we always
reach a vertex from a given source using the minimum number of edges. Also, in the case of
unweighted graphs, any spanning tree is Minimum Spanning Tree and we can use either Depth or
Breadth first traversal for finding a spanning tree.
2. Minimum Spanning Tree for weighted graphs: We can also find Minimum Spanning Tree
for weighted graphs using BFT, but the condition is that the weight should be non-negative and
the same for each pair of vertices.
4.When we need to print or analyze data by level in the graph or tree: BFS is also sometimes
referred to as "level-order traversal", since we can track all the nodes at a given level. It's useful
when we need to batch together all nodes that are at a given level in a tree, or at a given level in a
graph relative to some starting node.
IMPLEMENTATION:-
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define MAX 5

struct Vertex {
   char label;
   bool visited;
};

//queue variables
int queue[MAX];
int rear = -1;
int front = 0;
int queueItemCount = 0;

//graph variables
//array of vertices
struct Vertex* lstVertices[MAX];

//adjacency matrix
int adjMatrix[MAX][MAX];

//vertex count
int vertexCount = 0;

//queue functions
void insert(int data) {
   queue[++rear] = data;
   queueItemCount++;
}

int removeData() {
   queueItemCount--;
   return queue[front++];
}

bool isQueueEmpty() {
   return queueItemCount == 0;
}

//graph functions
//add vertex to the vertex list
void addVertex(char label) {
   struct Vertex* vertex = (struct Vertex*) malloc(sizeof(struct Vertex));
   vertex->label = label;
   vertex->visited = false;
   lstVertices[vertexCount++] = vertex;
}

//add edge to edge array
void addEdge(int start, int end) {
   adjMatrix[start][end] = 1;
   adjMatrix[end][start] = 1;
}

//display the vertex
void displayVertex(int vertexIndex) {
   printf("%c ", lstVertices[vertexIndex]->label);
}

//get the adjacent unvisited vertex
int getAdjUnvisitedVertex(int vertexIndex) {
   int i;
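   // (the extracted listing ends here; the rest is a standard completion
   // consistent with the helpers above)
   for (i = 0; i < vertexCount; i++)
      if (adjMatrix[vertexIndex][i] == 1 && lstVertices[i]->visited == false)
         return i;
   return -1;
}

//breadth-first search
void breadthFirstSearch() {
   //mark the first node as visited, display it and insert it into the queue
   lstVertices[0]->visited = true;
   displayVertex(0);
   insert(0);

   while (!isQueueEmpty()) {
      //get the vertex at the front of the queue
      int tempVertex = removeData();
      int unvisitedVertex;
      //visit, display and enqueue all its unvisited adjacent vertices
      while ((unvisitedVertex = getAdjUnvisitedVertex(tempVertex)) != -1) {
         lstVertices[unvisitedVertex]->visited = true;
         displayVertex(unvisitedVertex);
         insert(unvisitedVertex);
      }
   }
}

int main() {
   //sample graph from the working above: S adjacent to A, B, C; A adjacent to D
   addVertex('S'); //0
   addVertex('A'); //1
   addVertex('B'); //2
   addVertex('C'); //3
   addVertex('D'); //4
   addEdge(0, 1);
   addEdge(0, 2);
   addEdge(0, 3);
   addEdge(1, 4);
   printf("Breadth First Search: ");
   breadthFirstSearch();
   return 0;
}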
IMPORTANT QUESTIONS:-
❖ What is BFS?
BFS is an algorithm for traversing or searching tree or graph data structures, starting at the
root (or some arbitrary node) and exploring neighbors before moving to the next level
neighbors.
PRACTICAL NUMBER- 5(b)
AIM:- Write a program to implement traversing in a digraph using Depth First Search.
THEORY: Depth First Search (DFS) traverses a graph by going as deep as possible along each branch before backtracking. It visits an adjacent unvisited vertex, marks it as visited, and pushes it onto a stack; when no adjacent unvisited vertex is found, it pops a vertex from the stack.
WORKING:-
• Mark A as visited and put it onto the stack.
• We choose B, mark it as visited and put it onto the stack. Here B does not have any unvisited adjacent node. So, we pop B from the stack.
Time Complexity:
The time complexity of the DFS algorithm is represented in the form of O(V + E), where V is
the number of nodes and E is the number of edges.
Space Complexity:
The space complexity of the DFS algorithm is O(V).
1. Detecting cycle in a graph: A graph has a cycle if and only if we see a back edge during
DFS. So we can run DFS for the graph and check for back edges.
2. Path Finding: We can specialize the DFS algorithm to find a path between two given
vertices u and z.
• Call DFS(G, u) with u as the start vertex.
• Use a stack S to keep track of the path between the start vertex and the current vertex.
• As soon as destination vertex z is encountered, return the path as the contents of the stack.
3. Model checking: Depth-first search can be used in model checking, which is the process of
checking that a model of a system meets a certain set of properties.
4. Back-tracking: Depth-first search can be used in backtracking algorithms.
DFS ALGORITHM:
DFS(Graph, start_vertex):
1. Initialize an empty stack and a set to keep track of visited vertices.
2. Push the start_vertex onto the stack and mark it as visited.
3. While the stack is not empty:
a. Pop a vertex from the stack.
b. Process the vertex (e.g., print it).
c. For each neighbor of the popped vertex:
i. If the neighbor has not been visited:
- Push the neighbor onto the stack.
- Mark the neighbor as visited.
4. Repeat step 3 until the stack is empty.
Source Code:
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define MAX 5

struct Vertex {
   char label;
   bool visited;
};

//stack variables
int stack[MAX];
int top = -1;

//graph variables
//array of vertices
struct Vertex* lstVertices[MAX];

//adjacency matrix
int adjMatrix[MAX][MAX];

//vertex count
int vertexCount = 0;

//stack functions
void push(int item) {
   stack[++top] = item;
}

int pop() {
   return stack[top--];
}

int peek() {
   return stack[top];
}

bool isStackEmpty() {
   return top == -1;
}
//graph functions
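// (the remainder of the program was lost in extraction; the following is a
// standard completion consistent with the stack helpers above)

//add vertex to the vertex list
void addVertex(char label) {
   struct Vertex* vertex = (struct Vertex*) malloc(sizeof(struct Vertex));
   vertex->label = label;
   vertex->visited = false;
   lstVertices[vertexCount++] = vertex;
}

//add edge to edge array
void addEdge(int start, int end) {
   adjMatrix[start][end] = 1;
   adjMatrix[end][start] = 1;
}

//display the vertex
void displayVertex(int vertexIndex) {
   printf("%c ", lstVertices[vertexIndex]->label);
}

//get the adjacent unvisited vertex
int getAdjUnvisitedVertex(int vertexIndex) {
   int i;
   for (i = 0; i < vertexCount; i++)
      if (adjMatrix[vertexIndex][i] == 1 && lstVertices[i]->visited == false)
         return i;
   return -1;
}

//depth-first search
void depthFirstSearch() {
   //mark the first node as visited, display it and push it onto the stack
   lstVertices[0]->visited = true;
   displayVertex(0);
   push(0);

   while (!isStackEmpty()) {
      //get the unvisited vertex adjacent to the vertex on top of the stack
      int unvisitedVertex = getAdjUnvisitedVertex(peek());
      if (unvisitedVertex == -1) {
         pop(); //dead end: backtrack
      } else {
         lstVertices[unvisitedVertex]->visited = true;
         displayVertex(unvisitedVertex);
         push(unvisitedVertex);
      }
   }
}

int main() {
   //illustrative sample graph: A adjacent to B, C, D; B adjacent to E
   addVertex('A'); //0
   addVertex('B'); //1
   addVertex('C'); //2
   addVertex('D'); //3
   addVertex('E'); //4
   addEdge(0, 1);
   addEdge(0, 2);
   addEdge(0, 3);
   addEdge(1, 4);
   printf("Depth First Search: ");
   depthFirstSearch();
   return 0;
}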
Output:
2. Question: Can DFS be used to find the shortest path in an unweighted graph? Answer: No,
DFS does not guarantee the shortest path. It may find a longer path if it explores deeper levels
first.
4. Question: Does DFS work for both directed and undirected graphs?
Answer: Yes, DFS works for both types of graphs. In an undirected graph, it explores all
connected components. In a directed graph, it explores the entire component reachable from the
starting node.
Practical -6
Aim: Find Minimum Cost Spanning Tree of a given undirected graph using Prim’s algorithm.
Theory: This algorithm always starts with a single node and moves through several adjacent nodes, in
order to explore all of the connected edges along the way.
The algorithm starts with an empty spanning tree. The idea is to maintain two sets of vertices. The first set
contains the vertices already included in the MST, and the other set contains the vertices not yet included.
At every step, it considers all the edges that connect the two sets and picks the minimum weight edge
from these edges. After picking the edge, it moves the other endpoint of the edge to the set containing
MST.
A group of edges that connects two sets of vertices in a graph is called a cut in graph theory. So, at every
step of Prim’s algorithm, find a cut, pick the minimum weight edge from the cut, and include that edge's outside endpoint
in the MST set (the set that contains the already included vertices).
How does Prim’s Algorithm Work?
The working of Prim’s algorithm can be described by using the following steps:
Step 1: Determine an arbitrary vertex as the starting vertex of the MST.
Step 2: Follow steps 3 to 5 till there are vertices that are not included in the MST (known as fringe
vertex).
Step 3: Find edges connecting any tree vertex with the fringe vertices.
Step 4: Find the minimum among these edges.
Step 5: Add the chosen edge to the MST if it does not form any cycle.
Step 6: Return the MST and exit.
Step 2: All the edges connecting the incomplete MST and other vertices are the edges {0, 1} and {0, 7}.
Between these two the edge with minimum weight is {0, 1}. So include the edge and vertex 1 in the MST.
Step 3: The edges connecting the incomplete MST to other vertices are {0, 7}, {1, 7} and {1, 2}. Among
these edges the minimum weight is 8 which is of the edges {0, 7} and {1, 2}. Let us here include the edge
{0, 7} and the vertex 7 in the MST
Step 4: The edges that connect the incomplete MST with the fringe vertices are {1, 2}, {7, 6} and {7, 8}.
Add the edge {7, 6} and the vertex 6 in the MST as it has the least weight (i.e., 1).
Step 5: The connecting edges now are {7, 8}, {1, 2}, {6, 8} and {6, 5}. Include edge {6, 5} and vertex 5
in the MST as the edge has the minimum weight (i.e., 2) among them.
Step 6: Among the current connecting edges, the edge {5, 2} has the minimum weight. So include that
edge and the vertex 2 in the MST.
Step 7: The connecting edges between the incomplete MST and the other edges are {2, 8}, {2, 3}, {5, 3}
and {5, 4}. The edge with minimum weight is edge {2, 8} which has weight 2. So include this edge and
the vertex 8 in the MST.
Step 8: See here that the edges {7, 8} and {2, 3} both have the same minimum weight. But both endpoints of {7, 8} (vertices 7
and 8) are already part of the MST, so that edge would form a cycle. So we will consider the edge {2, 3} and include that edge and vertex 3 in the MST.
Step 9: Only the vertex 4 remains to be included. The minimum weighted edge from the incomplete MST
to 4 is {3, 4}, having weight 9.
The final structure of the MST is as follows and the weight of the edges of the MST is (4 + 8 + 1 + 2 + 4 +
2 + 7 + 9) = 37.
Prim's algorithm:
1. Initialize an empty set to store the vertices that have been included in the MST.
2. Initialize an empty list to store the edges of the MST.
3. Choose an arbitrary vertex to start the MST.
4. Add the chosen vertex to the set of included vertices.
5. While the set of included vertices does not contain all vertices:
a. Find the minimum-weight edge that connects a vertex in the set to a vertex outside the set.
b. Add the edge to the MST.
c. Add the vertex connected by the edge to the set of included vertices.
6. Return the list of edges of the MST.
Implementation:
#include <iostream>
#include <limits.h>
using namespace std;

// Number of vertices in the graph
#define V 5

// Find the vertex with the minimum key value, from the set of
// vertices not yet included in the MST
int minKey(int key[], bool mstSet[])
{
    // Initialize min value
    int min = INT_MAX, min_index;
    for (int v = 0; v < V; v++)
        if (mstSet[v] == false && key[v] < min)
            min = key[v], min_index = v;
    return min_index;
}

void printMST(int parent[], int n, int graph[V][V])
{
    cout << "Edge\tWeight\n";
    for (int i = 1; i < V; i++)
        cout << parent[i] << "--" << i << "\t" << graph[i][parent[i]] << "\n";
}

// Function to construct and print MST for a graph represented using adjacency
// matrix representation
void prims(int graph[V][V])
{
    int parent[V];  // Array to store constructed MST
    int key[V];     // Key values used to pick minimum weight edge in cut
    bool mstSet[V]; // To represent set of vertices not yet included in MST

    // Initialize all keys as INFINITE
    for (int i = 0; i < V; i++)
        key[i] = INT_MAX, mstSet[i] = false;

    key[0] = 0;     // Start from vertex 0
    parent[0] = -1; // First node is always the root of the MST

    for (int count = 0; count < V - 1; count++) {
        int u = minKey(key, mstSet);
        mstSet[u] = true;
        // Update the keys of the vertices adjacent to the picked vertex
        for (int v = 0; v < V; v++)
            if (graph[u][v] && mstSet[v] == false && graph[u][v] < key[v])
                parent[v] = u, key[v] = graph[u][v];
    }
    printMST(parent, V, graph);
}

int main() {
    cout << "Tushar Mandhan \n " << "Roll no. 2022027566\n";
    // Adjacency matrix of the undirected graph; it must be symmetric, so the
    // asymmetric matrix in the original listing has been symmetrized here
    // (0 means "no edge")
    int graph[V][V] = { { 0, 1, 5, 6, 0 },
                        { 1, 0, 0, 8, 5 },
                        { 5, 0, 0, 9, 7 },
                        { 6, 8, 9, 0, 9 },
                        { 0, 5, 7, 9, 0 } };
    // Print the solution
    prims(graph);
    return 0;
}
Output:
Question: What is Prim’s algorithm?
Answer: Prim’s algorithm finds the minimum spanning tree (MST) in a weighted, connected graph. It connects all vertices with the minimum total edge weight.
Question: How does Prim’s algorithm work?
Answer: Prim’s starts with an arbitrary vertex and repeatedly adds the nearest unvisited vertex to the MST. It maintains a set of visited vertices and a priority queue (min heap) of edges.
Question: How does Prim’s algorithm differ from Kruskal’s algorithm?
Answer: Both find MSTs, but Kruskal’s processes edges in ascending order of weight, while Prim’s focuses on vertices. Kruskal’s can handle disconnected graphs, while Prim’s requires a single connected component.
Question: Is Prim’s algorithm a greedy algorithm?
Answer: Yes, Prim’s is a greedy algorithm. At each step, it chooses the locally optimal edge with the minimum weight.
Question: Can Prim’s algorithm handle graphs with negative edge weights?
Answer: No, Prim’s assumes non-negative edge weights. If a graph has negative weights, consider using
Dijkstra’s algorithm instead.
PRACTICAL NUMBER :- 7
Aim : Find the minimum cost spanning tree of a given undirected graph using Kruskal’s algorithm.
Theory :
Kruskal's Algorithm
Spanning tree - A spanning tree is a subgraph of an undirected connected graph that includes all of its vertices and is itself a tree (connected, with no cycles).
Minimum Spanning tree - Minimum spanning tree can be defined as the spanning tree in which the sum of the weights of the edges is minimum. The weight of the spanning tree is the sum of the weights given to the edges of the spanning tree.
Now, let's start with the main topic.
Kruskal's Algorithm is used to find the minimum spanning tree for a connected weighted graph.
The main target of the algorithm is to find the subset of edges by using which we can traverse
every vertex of the graph. It follows the greedy approach that finds an optimum solution at every
stage instead of focusing on a global optimum.
How does Kruskal's algorithm work?
In Kruskal's algorithm, we start from the edges with the lowest weight and keep adding edges until the goal is reached. The steps to implement Kruskal's algorithm are listed as follows -
• First, sort all the edges from low weight to high.
• Now, take the edge with the lowest weight and add it to the spanning tree. If the edge to be added creates a cycle, then reject the edge.
• Continue to add the edges until we reach all vertices, and a minimum spanning tree is created.
The applications of Kruskal's algorithm are -
• Kruskal's algorithm can be used to layout electrical wiring among cities.
• It can be used to lay down LAN connections.
Example of Kruskal's algorithm
Now, let's see the working of Kruskal's algorithm using an example. It will be easier to
understand Kruskal's algorithm using an example.
Suppose a weighted graph is -
Edge     AB  DE  BC  CD  AE  AC  AD
Weight    1   2   3   4   5   7  10
Now, let's start constructing the minimum spanning tree.
Step 1 - First, add the edge AB with weight 1 to the MST.
Step 2 - Add the edge DE with weight 2 to the MST as it is not creating the cycle.
Step 3 - Add the edge BC with weight 3 to the MST, as it is not creating any cycle or loop.
Step 4 - Now, pick the edge CD with weight 4 to the MST, as it is not forming the cycle.
Step 5 - After that, pick the edge AE with weight 5. Including this edge will create the cycle, so
discard it.
Step 6 - Pick the edge AC with weight 7. Including this edge will create the cycle, so discard it.
Step 7 - Pick the edge AD with weight 10. Including this edge will also create the cycle, so
discard it.
So, the final minimum spanning tree obtained from the given weighted graph by using Kruskal's
algorithm is -
The cost of the MST is = AB + DE + BC + CD = 1 + 2 + 3 + 4 = 10.
Now, the number of edges in the above tree equals the number of vertices minus 1. So, the
algorithm stops here.
Algorithm:
Step 1: Create a forest F in such a way that every vertex of the graph is a separate tree.
Step 2: Create a set E that contains all the edges of the graph.
Step 3: Repeat Steps 4 and 5 while E is NOT EMPTY and F is not spanning
Step 4: Remove an edge from E with minimum weight
Step 5: IF the edge obtained in Step 4 connects two different trees, then add it to the forest F
(for combining two trees into one tree).
ELSE
Discard the edge
Step 6: END
Complexity of Kruskal's algorithm
Now, let's see the time complexity of Kruskal's algorithm.
Time Complexity
The time complexity of Kruskal's algorithm is O(E log E) or, equivalently, O(E log V), where E is the number of edges and V is the number of vertices; sorting the edges dominates the running time.
Implementation of Kruskal's algorithm :
#include <iostream>
#include <algorithm>
using namespace std;

const int MAX = 1e4 + 5;
int id[MAX], nodes, edges;
pair<long long, pair<int, int> > p[MAX];

// initially every node is the root of its own tree
void init()
{
    for (int i = 0; i < MAX; ++i)
        id[i] = i;
}

// find the root of x, compressing the path along the way
int root(int x)
{
    while (id[x] != x) {
        id[x] = id[id[x]];
        x = id[x];
    }
    return x;
}

// merge the trees containing x and y
void union1(int x, int y)
{
    int p = root(x);
    int q = root(y);
    id[p] = id[q];
}

// p[] must already be sorted by weight
long long kruskal(pair<long long, pair<int, int> > p[])
{
    int x, y;
    long long cost, minimumCost = 0;
    for (int i = 0; i < edges; ++i) {
        x = p[i].second.first;
        y = p[i].second.second;
        cost = p[i].first;
        if (root(x) != root(y)) { // accept only edges that join two different trees
            minimumCost += cost;
            union1(x, y);
        }
    }
    return minimumCost;
}

int main()
{
    cout << "Tushar Mandhan \n" << "Roll no. 2022027566\n";
    int x, y;
    long long weight, minimumCost;
    init();
    cout << "Enter Nodes and edges\n";
    cin >> nodes >> edges;
    for (int i = 0; i < edges; ++i) {
        cout << "Enter the value of X, Y and edges\n";
        cin >> x >> y >> weight;
        p[i] = make_pair(weight, make_pair(x, y));
    }
    sort(p, p + edges); // sort the edges by weight
    minimumCost = kruskal(p);
    cout << "Minimum cost is " << minimumCost << endl;
    return 0;
}
Output :
Short Ques Answer
Q: What is Kruskal's algorithm?
A: Kruskal's algorithm is a greedy algorithm used to find the minimum spanning tree of a
connected weighted graph.
Q: How does Kruskal's algorithm ensure connectivity of the minimum spanning tree?
A: Kruskal's algorithm ensures connectivity by adding edges in ascending order of weights, connecting vertices from disjoint sets.
Q: Does Kruskal's algorithm guarantee the uniqueness of the minimum spanning tree?
A: Yes, Kruskal's algorithm guarantees the uniqueness of the minimum spanning tree if all edge weights are unique.
PRACTICAL NUMBER :- 8
Aim : From a given vertex in a weighted connected graph, find shortest paths to other vertices
using Dijkstra’s algorithm.
Theory :
An Introduction to Dijkstra's Algorithm
Ever wondered how Google Maps finds the shortest and fastest route between two places?
Well, the answer is Dijkstra's Algorithm. Dijkstra's Algorithm is a graph algorithm that finds the shortest path from a source vertex to all other vertices in the graph (single source shortest path). It is a type of greedy algorithm that only works on weighted graphs having positive weights. The time complexity of Dijkstra's Algorithm is O(V^2) with the adjacency matrix representation of the graph. This time complexity can be reduced to O((V + E) log V) with an adjacency list representation and a priority queue, where V is the number of vertices and E is the number of edges in the graph.
Dijkstra's Algorithm was designed and published by Dr. Edsger W. Dijkstra, a Dutch Computer
Scientist, Software Engineer, Programmer, Science Essayist, and Systems Scientist.
1. Dijkstra's Algorithm begins at the node we select (the source node), and it examines the
graph to find the shortest path between that node and all the other nodes in the graph.
2. The Algorithm keeps records of the presently acknowledged shortest distance from each
node to the source node, and it updates these values if it finds any shorter path.
3. Once the Algorithm has retrieved the shortest path between the source and another node,
that node is marked as 'visited' and included in the path.
4. The procedure continues until all the nodes in the graph have been included in the path.
In this manner, we have a path connecting the source node to all other nodes, following
the shortest possible path to reach each node.
The following are the steps that we will follow to implement Dijkstra's Algorithm:
Step 1: First, we will mark the source node with a current distance of 0 and set the rest of the
nodes to INFINITY.
Step 2: We will then set the unvisited node with the smallest current distance as the current node,
suppose X.
Step 3: For each neighbor N of the current node X: We will then add the current distance of X
with the weight of the edge joining X-N. If it is smaller than the current distance of N, set it as
the new current distance of N.
Step 4: We will then mark the current node X as visited.
Step 5: We will repeat the process from 'Step 2' if there is any node unvisited left in the graph.
Let us now understand the implementation of the algorithm with the help of an example:
1. We will use the above graph as the input, with node A as the source.
2. First, we will mark all the nodes as unvisited.
3. We will set the path to 0 at node A and INFINITY for all the other nodes.
4. We will now mark source node A as visited and access its neighboring nodes. Note: We
have only accessed the neighboring nodes, not visited them.
5. We will now update the path to node B by 4 with the help of relaxation because the path
to node A is 0 and the path from node A to B is 4, and the minimum((0 + 4),
INFINITY) is 4.
6. We will also update the path to node C by 5 with the help of relaxation because the path
to node A is 0 and the path from node A to C is 5, and the minimum((0 + 5),
INFINITY) is 5. Both the neighbors of node A are now relaxed; therefore, we can move
ahead.
7. We will now select the next unvisited node with the least path and visit it. Hence, we will
visit node B and perform relaxation on its unvisited neighbors. After performing
relaxation, the path to node C will remain 5, whereas the path to node E will become 11,
and the path to node D will become 13.
8. We will now visit node E and perform relaxation on its neighboring nodes B, D, and F.
Since only node F is unvisited, it will be relaxed. Thus, the path to node B will remain as
it is, i.e., 4, the path to node D will also remain 13, and the path to node F will become 14
(8 + 6).
9. Now we will visit node D, and only node F will be relaxed. However, the path to node F
will remain unchanged, i.e., 14.
10. Since only node F is remaining, we will visit it but not perform any relaxation as all its
neighboring nodes are already visited.
11. Once all the nodes of the graphs are visited, the program will end.
Complexity :
• The time complexity of Dijkstra's algorithm depends on the data structure used to
implement the priority queue. Using a binary heap, the time complexity is O((V + E) log
V), where V is the number of vertices and E is the number of edges in the graph. With
Fibonacci heaps, it can be reduced to O(V log V + E).
• Dijkstra's algorithm typically requires O(V) space for storing distances and O(V) space
for maintaining the priority queue, resulting in a total space complexity of O(V).
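Implementation: (only a fragment of the original listing survived extraction; the following is a minimal adjacency-matrix C++ sketch of the algorithm above — the V = 5 sample graph and function names are illustrative)
#include <iostream>
#include <limits.h>
using namespace std;

#define V 5 // number of vertices

// Find the unvisited vertex with the minimum distance value
int minDistance(int dist[], bool sptSet[])
{
    int min = INT_MAX, min_index;
    for (int v = 0; v < V; v++)
        if (sptSet[v] == false && dist[v] <= min)
            min = dist[v], min_index = v;
    return min_index;
}

// Compute shortest paths from src to all vertices
void dijkstra(int graph[V][V], int src)
{
    int dist[V];    // shortest known distance from src to i
    bool sptSet[V]; // true once a vertex's distance is finalized

    for (int i = 0; i < V; i++)
        dist[i] = INT_MAX, sptSet[i] = false;
    dist[src] = 0;

    for (int count = 0; count < V - 1; count++) {
        int u = minDistance(dist, sptSet); // visit the nearest unvisited vertex
        sptSet[u] = true;
        // relax the edges leaving u
        for (int v = 0; v < V; v++)
            if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
                && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }

    cout << "Vertex\tDistance from Source\n";
    for (int i = 0; i < V; i++)
        cout << i << "\t" << dist[i] << "\n";
}

int main()
{
    int graph[V][V] = { { 0, 4, 0, 0, 8 },
                        { 4, 0, 8, 0, 11 },
                        { 0, 8, 0, 7, 0 },
                        { 0, 0, 7, 0, 2 },
                        { 8, 11, 0, 2, 0 } };
    dijkstra(graph, 0);
    return 0;
}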
Output :
Practical number:9
Aim: Write a program to implement the Bellman-Ford algorithm.
Theory:
Bellman-Ford is a single source shortest path algorithm that determines the shortest path between a
given source vertex and every other vertex in a graph. This algorithm can be used on both weighted
and unweighted graphs.
A Bellman-Ford algorithm is also guaranteed to find the shortest path in a graph, similar to Dijkstra’s algorithm. Although Bellman-Ford is slower than Dijkstra’s algorithm, it is capable of handling graphs
with negative edge weights, which makes it more versatile. The shortest path cannot be found if there
exists a negative cycle in the graph. If we continue to go around the negative cycle an infinite number of
times, then the cost of the path will continue to decrease (even though the length of the path is increasing).
As a result, Bellman-Ford is also capable of detecting negative cycles, which is an important feature.
Working of Bellman-Ford Algorithm to Detect the Negative cycle in the graph:
1. Initialization: Initialize the distance from the source vertex to all other vertices as infinity, except
the distance from the source vertex to itself, which is initialized to 0. Also, initialize the
predecessor of all vertices as null.
2. Relaxation: Iterate through all the edges of the graph (|V| - 1) times, where |V| is the number of
vertices. In each iteration, relax all the edges. Relaxing an edge (u, v) means updating the distance
to vertex v if a shorter path from the source vertex to v through u is found.
3. Check for Negative Cycles: After the (|V| - 1) iterations, check for negative cycles in the graph.
A negative cycle is a cycle whose total weight is negative. If there is any negative cycle, it means
that there is no shortest path, as the negative cycle can be traversed repeatedly to decrease the
path length indefinitely.
4. Output: If there are no negative cycles, the algorithm outputs the shortest paths from the source
vertex to all other vertices. Each vertex will have its distance from the source vertex and its
predecessor on the shortest path.
Algorithm of Bellman ford algorithm
The Bellman-Ford algorithm is used to find the shortest path in a weighted directed graph with negative
edge weights. Here is a step-by-step guide on how to design and analyze the Bellman-Ford algorithm:
1. Initialize the graph:
- Create a graph representation with vertices and edges.
- Set the initial distance of the source vertex to 0.
- Set the distances of all the other vertices as infinite.
2. Relax the edges:
- Repeat the following steps |V| - 1 times, where |V| is the number of vertices in the graph:
- For each edge (u, v) with weight w:
- If the distance of u + w is smaller than the current distance of v, update the distance of v with the new distance.
- Update the predecessor of v with u.
3. Check for negative cycles:
- Repeat step 2 for one more iteration.
- If any distances are updated, it means a negative cycle exists in the graph.
4. Output the shortest path:
- Start with the destination vertex.
- Traverse back using the predecessor of each vertex until you reach the source vertex.
The time complexity of the Bellman-Ford algorithm is O(|V| * |E|), where |V| is the number of vertices and |E| is the number of edges in the graph.
By analyzing the algorithm, we can see that it guarantees finding the shortest path even in the presence of
negative edge weights. However, if the graph contains a negative cycle, the algorithm may not terminate,
or the distances may be incorrectly updated.
Source code:
#include <bits/stdc++.h>
using namespace std;
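// (the body of the program was lost in extraction; the following is a
// minimal sketch of the algorithm described above — an edge list, |V| - 1
// rounds of relaxation, then a negative-cycle check; the sample graph is
// illustrative)
struct Edge {
    int u, v, w;
};

void bellmanFord(int Vn, vector<Edge>& edgeList, int src)
{
    vector<int> dist(Vn, INT_MAX);
    dist[src] = 0;

    // relax all edges |V| - 1 times
    for (int i = 1; i <= Vn - 1; i++)
        for (auto& e : edgeList)
            if (dist[e.u] != INT_MAX && dist[e.u] + e.w < dist[e.v])
                dist[e.v] = dist[e.u] + e.w;

    // one more pass: any further improvement means a negative cycle
    for (auto& e : edgeList)
        if (dist[e.u] != INT_MAX && dist[e.u] + e.w < dist[e.v]) {
            cout << "Graph contains a negative weight cycle\n";
            return;
        }

    cout << "Vertex\tDistance from Source\n";
    for (int i = 0; i < Vn; i++)
        cout << i << "\t" << dist[i] << "\n";
}

int main()
{
    // sample graph with a negative edge but no negative cycle
    vector<Edge> edgeList = { { 0, 1, -1 }, { 0, 2, 4 }, { 1, 2, 3 },
                              { 1, 3, 2 },  { 1, 4, 2 }, { 3, 2, 5 },
                              { 3, 1, 1 },  { 4, 3, -3 } };
    bellmanFord(5, edgeList, 0);
    return 0;
}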
Output:
Practical -10
Aim: Implement any scheme to find the optimal solution for the Travelling salesman problem.
Theory:
Given a set of cities and the distance between every pair of cities, the problem is to find the shortest
possible route that visits every city exactly once and returns to the starting point. Note the difference
between Hamiltonian Cycle and TSP. The Hamiltonian cycle problem is to find if there exists a tour that
visits every city exactly once. Here we know that Hamiltonian Tour exists (because the graph is complete)
and in fact, many such tours exist, the problem is to find a minimum weight Hamiltonian Cycle.
For example, consider the graph shown in the figure on the right side. A TSP tour in the graph is 1-2-4-3-1.
The cost of the tour is 10+25+30+15 which is 80. The problem is a famous NP-hard problem. There is no
polynomial-time known solution for this problem. The following are different solutions for the traveling
salesman problem.
Time Complexity : O(n^2 * 2^n), where O(n * 2^n) is the maximum number of unique subproblems/states and O(n) is the cost of the transition (the inner for loop in the code) in every state.
Auxiliary Space: O(n * 2^n), where n is the number of nodes/cities.
Algorithm of travelling salesman problem:
1. Start from an arbitrary city as the starting point.
2. Generate all (n-1)! permutations of cities to visit, excluding the starting city.
3. For each permutation:
a. Compute the total distance/cost of the tour.
4. Select the permutation with the minimum total distance/cost as the optimal tour.
5. Return the optimal tour.
Source code:
#include <iostream>
using namespace std;

// there are four nodes in example graph (graph is 1-based)
const int n = 4;
// give appropriate maximum to avoid overflow
const int MAX = 1000000;

// dist[i][j] represents shortest distance to go from i to j
// this matrix can be calculated for any given graph using
// all-pair shortest path algorithms
int dist[n + 1][n + 1] = {
    { 0, 0, 0, 0, 0 },
    { 0, 0, 10, 15, 20 },
    { 0, 10, 0, 25, 25 },
    { 0, 15, 25, 0, 30 },
    { 0, 20, 25, 30, 0 },
};

// memoization for top down recursion
int memo[n + 1][1 << (n + 1)];

int fun(int i, int mask)
{
    // base case
    // if only ith bit and 1st bit is set in our mask,
    // it implies we have visited all other nodes already
    if (mask == ((1 << i) | 3))
        return dist[1][i];
    // memoization
    if (memo[i][mask] != 0)
        return memo[i][mask];

    int res = MAX; // result of this sub-problem

    // we have to travel all nodes j in mask and end the
    // path at ith node so for every node j in mask,
    // recursively calculate cost of travelling all nodes in
    // mask except i and then travel back from node j to
    // node i taking the shortest path; take the minimum of
    // all possible j nodes
    for (int j = 1; j <= n; j++)
        if ((mask & (1 << j)) && j != i && j != 1)
            res = std::min(res, fun(j, mask & (~(1 << i))) + dist[j][i]);
    return memo[i][mask] = res;
}

int main()
{
    cout << "Tushar Mandhan \n" << "Roll no. 2022027566\n";
    int ans = MAX;
    // (the extracted file ends mid-loop here; the remainder below follows
    // the memoized recurrence above: try every possible final node i)
    for (int i = 1; i <= n; i++)
        ans = std::min(ans, fun(i, (1 << (n + 1)) - 1) + dist[i][1]);
    cout << "The cost of most efficient tour = " << ans;
    return 0;
}
Output:
PRACTICAL NUMBER :- 11
Aim : Write a program to implement the Floyd Warshall algorithm for all-pairs shortest paths.
Theory:
The Floyd Warshall Algorithm is an all pair shortest path algorithm unlike Dijkstra and Bellman Ford
which are single source shortest path algorithms. This algorithm works for both
the directed and undirected weighted graphs. But, it does not work for the graphs with negative cycles
(where the sum of the edges in a cycle is negative). It follows Dynamic Programming approach to check
every possible path going via every possible node in order to calculate shortest distance between every
pair of nodes.
Idea Behind Floyd Warshall Algorithm:
Suppose we have a graph G[][] with N vertices numbered from 1 to N. Now we have to evaluate a
shortestPathMatrix[][] where shortestPathMatrix[i][j] represents the shortest path between
vertices i and j.
Obviously the shortest path from i to j may pass through some k intermediate nodes. The idea
behind the Floyd Warshall algorithm is to treat each and every vertex from 1 to N as an intermediate node one
by one.
The optimal substructure property used by the Floyd Warshall algorithm is: dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]), applied for each intermediate vertex k in turn.
Source code:
#include <bits/stdc++.h>
using namespace std;

// Number of vertices in the graph
#define V 4

/* Define Infinite as a large enough value. This value will be
   used for vertices not connected to each other */
#define INF 99999

// A function to print the solution matrix
void printSolution(int dist[][V]);

void floydWarshall(int dist[][V])
{
    int i, j, k;
    for (k = 0; k < V; k++) {
        // Pick all vertices as source one by one
        for (i = 0; i < V; i++) {
            // Pick all vertices as destination for the
            // above picked source
            for (j = 0; j < V; j++) {
                // If vertex k is on the shortest path from
                // i to j, then update the value of dist[i][j]
                if (dist[i][j] > (dist[i][k] + dist[k][j])
                    && (dist[k][j] != INF && dist[i][k] != INF))
                    dist[i][j] = dist[i][k] + dist[k][j];
            }
        }
    }
    // Print the shortest distance matrix
    printSolution(dist);
}

/* A utility function to print solution */
void printSolution(int dist[][V])
{
    cout << "The following matrix shows the shortest distances"
            " between every pair of vertices \n";
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++) {
            if (dist[i][j] == INF)
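                // (the extracted listing ends just above; the following is a
                // standard completion of the printing loops plus a sample main)
                cout << "INF" << "\t";
            else
                cout << dist[i][j] << "\t";
        }
        cout << endl;
    }
}

int main()
{
    /* sample 4-vertex graph; INF marks absent edges (values are illustrative) */
    int graph[V][V] = { { 0, 5, INF, 10 },
                        { INF, 0, 3, INF },
                        { INF, INF, 0, 1 },
                        { INF, INF, INF, 0 } };
    floydWarshall(graph);
    return 0;
}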
Output: