Practical File OF Design and Analysis of Algorithms (Pc-Cs214Al)

This document is a practical file for the Design and Analysis of Algorithms course, detailing various algorithm implementations such as Linear Search, Binary Search, Quick Sort, and others. It includes theoretical explanations, algorithms, time complexities, and sample code for each algorithm. The practical file is submitted by a student to an assistant professor at the State Institute of Engineering and Technology, Nilokheri.


PRACTICAL FILE

OF
DESIGN AND ANALYSIS OF ALGORITHMS (PC-CS214AL)

SESSION: 2023-24

Submitted To:
Mrs. Anjali Chaudhary
Assistant Professor (CSE)

Submitted By:
Tushar Mandhan
2022027566 (CSE)

Department of Computer Science and Engineering,


State Institute of Engineering and Technology, Nilokheri (Karnal)

(Affiliated to Kurukshetra University, Kurukshetra)

INDEX
S.No  Title of the Practical  Date  Signature

1(a)  To perform Linear Search

1(b)  To perform Binary Search

2  Sort a given set of elements using the Quick sort method and determine the time required to sort the elements. Repeat the experiment for different values of n, the number of elements in the list to be sorted, and plot a graph of the time taken versus n. The elements can be read from a file or can be generated using the random number generator.

3  Using OpenMP, implement a parallelized Merge Sort algorithm to sort a given set of elements and determine the time required to sort the elements. Repeat the experiment for different values of n, the number of elements in the list to be sorted, and plot a graph of the time taken versus n. The elements can be read from a file or can be generated using the random number generator.

4  Write a program to implement the 0/1 knapsack problem using dynamic programming.

5  a. Print all the nodes reachable from a given starting node in a digraph using the BFS method.
   b. Check whether a given graph is connected or not using the DFS method.

6  Find the Minimum Cost Spanning Tree of a given undirected graph using Prim's algorithm.

7  Find the Minimum Cost Spanning Tree of a given undirected graph using Kruskal's algorithm.

8  From a given vertex in a weighted connected graph, find shortest paths to other vertices using Dijkstra's algorithm.

9  Write a program to implement the Bellman-Ford algorithm.

10  Implement any scheme to find the optimal solution for the Travelling Salesman problem.

11  Implement the Floyd-Warshall algorithm for shortest paths.


PRACTICAL NUMBER:-1(a)

Aim: To write a program to implement linear search.


Theory:
Linear search, also known as sequential search, is a straightforward searching algorithm that
traverses the elements of a list one by one until it finds the desired element or reaches the end of
the list. Here's a basic outline of the linear search algorithm:

❖ Start from the first element of the list.


❖ Compare the target element with the current element.
❖ If the current element matches the target element, return its index.
❖ If the current element does not match, move to the next element in the list.
❖ Repeat steps 2-4 until either the target element is found or the end of the list is reached.
❖ If the target element is not found after traversing the entire list, return a "not found"
indication.

The time complexity of linear search is O(n), where n is the number of elements in the list.
The worst-case scenario occurs when the target element is either the last element in the list
or is not present in the list at all, requiring the algorithm to traverse all elements.

Working of linear search:


In the Linear Search Algorithm:
❖ Every element is considered as a potential match for the key and checked for the same.
❖ If any element is found equal to the key, the search is successful and the index of that element
is returned.
❖ If no element is found equal to the key, the search yields "No match found".

For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30.

Step 1: Start from the first element (index 0) and compare key with each element (arr[i])
• Comparing key with first element arr[0]. Since not equal, the iterator moves to the next element
as a potential match.

• Comparing key with next element arr[1]. Since not equal, the iterator moves to the next element
as a potential match.

Step 2: Now when comparing arr[2] with key, the value matches. So the Linear Search
Algorithm will yield a successful message and return the index of the element when key is found
(here 2).

Time complexity of linear search:


o Best Case Complexity - In linear search, the best case occurs when the element we are finding is at
the first position of the array. The best-case time complexity of linear search is O(1).
o Average Case Complexity - The average case time complexity of linear search is O(n).
o Worst Case Complexity - In linear search, the worst case occurs when the element we are
looking for is present at the end of the array, or is not present in the given array at all, so that
we have to traverse the entire array. The worst-case time complexity of linear search is O(n).

The time complexity of linear search is O(n) because every element in the array is compared only
once.

Algorithm:

LinearSearch(arr[], target):

1. Iterate through each element of the array:

a. If the current element equals the target:

- Return the index of the current element.

2. If the target is not found in the array:

- Return -1 to indicate that the target is not present in the array.

Space complexity of linear search:


o The space complexity of linear search is O(1).

Applications of linear search:


Linear search has several practical applications in computer science and beyond. Here are some
examples:

• Phonebook Search: Linear search can be used to search through a phonebook to find a person’s
name, given their phone number.
• Spell Checkers: The algorithm compares each word in the document to a dictionary of correctly
spelled words until a match is found.
• Finding Minimum and Maximum Values: Linear search can be used to find the minimum and
maximum values in an array or list.
• Searching through unsorted data: Linear search is useful for searching through unsorted data.

Pseudocode of Linear Search Algorithm:


• Start.
• linear_search ( array, value)
• For each element in the array   // loop over the elements up to the array size
• If (current element == value)
• return index.
• end if.
• end for.
• return -1.
• end.

Implementation of Linear Search Algorithm:


#include <iostream>
#include <vector>
using namespace std;

// Linear search function
int linearSearch(const vector<int>& arr, int target)
{
    for (int i = 0; i < (int)arr.size(); ++i) {
        if (arr[i] == target)
            return i; // Element found at index i
    }
    return -1; // Element not found
}

int main()
{
    cout << "Tushar Mandhan\n" << "Roll no. 2022027566" << endl;
    vector<int> array = {10, 25, 7, 42, 15, 30};
    int key = 15;
    int result = linearSearch(array, key);
    if (result != -1)
        cout << "Element " << key << " found at index " << result << endl;
    else
        cout << "Element " << key << " not found in the array." << endl;
    return 0;
}

Output:

Short question answers:

1. What is a linear search?

A linear search is a method for finding an element in an array. It works by sequentially checking
each element in the array until it finds the desired element or until it reaches the end of the array.

2. Can you explain the process used to implement a linear search algorithm?

A linear search algorithm searches through a list of items, one by one, until it finds the target
item. It does this by comparing the target item to each item in the list until it finds a match. If the
target item is not in the list, then the search will fail.

3. How does a linear search work in data structures?

A linear search is a method for finding an element within a data structure, such as an array, that
consists of sequentially checking each element in the data structure until the desired element is
found or the end of the data structure is reached

4. Can you give me some examples of where linear searches are used?

Linear searches are used in a variety of places, but they are especially common in situations
where the data being searched is not sorted in any particular order. For example, if you were
looking for a specific word in a book, you would likely use a linear search, since the pages are
not sorted in any particular way. Another common use for linear searches is when searching
through unsorted data in a database.

5. Which sorting algorithms can be used in conjunction with linear search?

Any sorting algorithm can be used before a linear search, but sorting does not change linear
search's O(n) time complexity. On sorted data, a linear search can stop early as soon as the
current element exceeds the target, which helps in unsuccessful searches. If the data will be
searched repeatedly, it is usually better to sort it once and switch to binary search instead.

6. When should you use linear search instead of binary search?


Linear search is best used when the data set is small, or when the data set is unsorted. If the data
set is large and sorted, then binary search will be more efficient.

7. Can you explain what data locality means in the context of linear search?

Data locality is a term used to describe how close data is to the processor. In the context of linear
search, data locality refers to how close the data being searched is to the processor. If the data is
close to the processor, then the search will be faster. If the data is far from the processor, then the
search will be slower.

8. What’s the average and worst-case time complexity of linear search?

The average time complexity of linear search is O(n), and the worst-case time complexity is also
O(n). This means that, on average, the algorithm will take n steps to find the desired element in
the array. However, in the worst case, it could take up to n steps if the desired element is the last
one in the array.

9. What’s the best space complexity of linear search?

The best space complexity of linear search is O(1), because it only needs a constant number of
variables (the loop index and the target value) regardless of the input size.

10. Can you explain the difference between unsorted and sorted lists when it
comes to implementing a linear search?

When you are looking through an unsorted list, you can simply start at the beginning of the list
and check each element until you find the one you are looking for (or reach the end of the list).
With a sorted list, you can take advantage of the fact that the list is in order to speed up the
search. You can start in the middle of the list and then only search the half of the list that could
contain the element you are looking for.
Practical No. 1(b)
Aim: To write a program to implement binary search.
Theory :
— Binary Search is a searching algorithm used on a sorted array that repeatedly divides the
search interval in half.
— The idea of binary search is to use the information that the array is sorted to reduce the time
complexity to O(log n).
— It is also known as half-interval search, logarithmic search, or binary chop.
— It works for both negative and duplicate valued elements.
— It is one of the divide and conquer algorithms.
— Binary search compares the target value to the middle element of the array; if they are unequal, the
half in which the target cannot lie is eliminated and the search continues in the other half.
— There are two necessary conditions for binary search:
1. The list must be sorted.
2. One must have direct access to the middle element of any sublist.
— There are two stopping conditions in the binary search program:
1. When the element to be searched is found, along with its location.
2. When the element is not found anywhere in the list.
— The Binary Search Algorithm can be implemented in the following two ways:
1. Iterative Method: a sequence of instructions is repeated, typically with a loop, until the
search space is exhausted or the element is found.
2. Recursive Method: a method or algorithm that invokes itself one or more times with different
arguments. Recursion is the technique of making a function call itself; it provides a way to
break complicated problems down into simpler subproblems which are easier to solve.
Algorithm:
BinarySearch(arr[], target):
1. Initialize two pointers, low = 0 and high = length of the array - 1.
2. While low is less than or equal to high:
a. Find the middle index of the array:
mid = (low + high) // 2
b. If the middle element equals the target: - Return the index of the middle element.
c. If the middle element is greater than the target:
- Update high to mid - 1.
d. If the middle element is less than the target:
- Update low to mid + 1.
3. If the target is not found in the array:
- Return -1 to indicate that the target is not present in the array.

Working of Binary Search Algorithm :


Now, let's see the working of the Binary Search Algorithm.
To understand the working of the Binary search algorithm, let's take a sorted array. It will be easy
to understand the working of Binary search with an example.
There are two methods to implement the binary search algorithm -
o Iterative method
o Recursive method
The recursive method of binary search follows the divide and conquer approach. Let
the elements of the array be -

Let the element to be searched be K = 56.


We have to use the below formula to calculate the mid of the array -
mid = (beg + end)/2
So, in the given array: beg = 0, end = 8, and mid = (0 + 8)/2 = 4. So, 4 is the mid
of the array.

Now, the element to search is found. So algorithm will return the index of the element matched.

Complexity :
Now, let's see the time complexity of Binary search in the best case, average case, and worst
case.
We will also see the space complexity of Binary search.
1. Time Complexity
Case Time Complexity

Best Case O(1)

Average Case O(logn)

Worst Case O(logn)

o Best Case Complexity - In Binary search, best case occurs when the element to search is
found in first comparison, i.e., when the first middle element itself is the element to be
searched. The best-case time complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search is
O(logn).
o Worst Case Complexity - In Binary search, the worst case occurs, when we have to keep
reducing the search space till it has only one element. The worst-case time complexity of
Binary search is O(logn).
2. Space Complexity

Space Complexity O(1)


o The space complexity of binary search is O(1).
Applications Of Binary Search :

Binary search finds application in various fields and scenarios where efficient searching of sorted
data is required. Some common applications include:
Searching in Databases : Binary search is widely used in database systems to quickly locate
records based on key values. Index structures like B-trees and binary search trees rely on binary
search for efficient data retrieval.
Searching in Arrays : Binary search efficiently locates elements in sorted arrays. It's used in
programming tasks such as searching for a specific value in a sorted list, determining the position
of an element, or finding the nearest value.
Text Processing : In text processing tasks, binary search helps in searching for keywords or
phrases in sorted lists of words or documents, enabling faster information retrieval and text
indexing.
Finding Smallest or Largest Value : Binary search can be applied to find the smallest or largest
value that satisfies a certain condition within a sorted dataset, such as the smallest number greater
than a given threshold.
Searching in Graphs and Trees : Binary search is used in graph and tree algorithms, such as
finding the lowest common ancestor in a binary tree or searching for elements in a sorted binary
search tree.
Approximate Matching : Binary search can be used for approximate matching or fuzzy
searching, where it efficiently locates entries that are close to a target value within a certain
tolerance.

SOURCE CODE OF BINARY SEARCH:

// C++ program to implement iterative Binary Search
#include <iostream>
using namespace std;

// An iterative binary search function.
int binarySearch(int arr[], int l, int r, int x)
{
    while (l <= r) {
        int m = l + (r - l) / 2;

        // Check if x is present at mid
        if (arr[m] == x)
            return m;

        // If x greater, ignore left half
        if (arr[m] < x)
            l = m + 1;

        // If x is smaller, ignore right half
        else
            r = m - 1;
    }

    // If we reach here, then element was not present
    return -1;
}

int main(void)
{
    cout << "Tushar Mandhan\n" << "Roll no. 2022027566" << endl;
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, x);
    (result == -1)
        ? cout << "Element is not present in array"
        : cout << "Element is present at index " << result;
    return 0;
}

Output

Important question for Binary search:


Q: What is the time complexity of a standard binary search algorithm?

A: The time complexity of binary search is O(log n), where n is the size of the sorted input
array.

Q: How does binary search compare to linear search in terms of time complexity?

A: Binary search is more efficient than linear search, as it reduces the search space by half in
each step.

Q: What is the key assumption for binary search to work correctly?

A: The input array must be sorted in ascending or descending order.

Q: Can binary search be applied to an unsorted array?

A: No, binary search requires a sorted input array.

Q: What happens if the target element is not present in the sorted array during binary search?

A: Binary search returns -1 (indicating not found) or an appropriate value based on the problem
context.

Q: How does binary search handle duplicate elements?

A: Binary search may return any occurrence of the target element, but it doesn’t guarantee
finding the first or last occurrence.

Q: What is the space complexity of the iterative binary search algorithm?

A: The space complexity is O(1) (constant) because it doesn’t use additional data structures.

PRACTICAL NUMBER :- 2
Aim : Sort a given set of elements using the Quick sort method and determine the time required
to sort the elements. Repeat the experiment for different values of n, the number of elements in
the list to be sorted and plot a graph of the time taken versus n. The elements can be read from a
file or can be generated using the random number generator.

Theory :
Divide and Conquer:
➢ Quick-Sort divides the array into smaller sub-arrays.
➢ It recursively sorts these sub-arrays.
Pivot Element:
➢ Choose a pivot element from the array. The pivot is used for partitioning.
➢ Common choices include the first, last, or a random element.
Partitioning:
➢ Rearrange the elements in the array so that elements less than the pivot are on the left,
and elements greater than the pivot are on the right.
➢ The pivot itself is in its final sorted position.
Recursive Call:
➢ Apply Quick-Sort recursively to the sub-arrays on the left and right of the pivot.
Base Case:
➢ The base case of the recursion is when the sub-array has zero or one element, as it is
already sorted.
In-Place Sorting:
➢ Quick-Sort often operates in-place, meaning it doesn't require additional memory for
a new array.
Efficiency:
➢ On average, Quick-Sort has O(n log n) time complexity, making it efficient for large
datasets.
➢ However, in the worst case (rare), it can have O(n^2) time complexity.
Not Stable:
➢ Quick-Sort is not a stable sorting algorithm, meaning the relative order of equal
elements may change.

Choice of Pivot :
1. There are many different choices for picking pivots.
2. Always pick the first element as a pivot.
3. Always pick the last element as a pivot.
4. Pick a random element as a pivot.
5. Pick the middle as the pivot.

Working of Quick Sort Algorithm :

Now, let's see the working of the Quicksort Algorithm.


To understand the working of quick sort, let's take an unsorted array. It will make the concept
more clear and understandable. Let the elements of the array be -

In the given array, we consider the leftmost element as pivot. So, in this case, a[left] = 24,
a[right] = 27 and a[pivot] = 24.
Since, pivot is at left, so algorithm starts from right and move towards left.

Now, a[pivot] < a[right], so algorithm moves forward one position towards left, i.e. -

Now, a[left] = 24, a[right] = 19, and a[pivot] = 24.


Because, a[pivot] > a[right], so, algorithm will swap a[pivot] with a[right], and pivot moves to
right, as -

Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since, pivot is at right, so algorithm starts
from left and moves to right.
As a[pivot] > a[left], so algorithm moves one position to right as -

Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], so algorithm moves one
position to right as -

Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], so, swap a[pivot] and
a[left], now pivot is at left, i.e. -
Since, pivot is at left, so algorithm starts from right, and move to left. Now, a[left] = 24, a[right]
= 29, and a[pivot] = 24. As a[pivot] < a[right], so algorithm moves one position to left, as -

Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], so, swap a[pivot] and
a[right], now pivot is at right, i.e. -

Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. Pivot is at right, so the algorithm starts from
left and move to right.

Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So, pivot, left and right are pointing the same
element. It represents the termination of procedure.
Element 24, which is the pivot element is placed at its exact position.
Elements that are right side of element 24 are greater than it, and the elements that are left side of
element 24 are smaller than it.

Now, in a similar manner, quick sort algorithm is separately applied to the left and right
subarrays. After sorting gets done, the array will be -

Algorithm:
QuickSort(arr[], low, high):
1. If low < high:
a. Partition the array into two halves using a pivot element:
pivot_index = Partition(arr, low, high)
b. Call QuickSort recursively for the left half:
QuickSort(arr, low, pivot_index - 1)
c. Call QuickSort recursively for the right half:
QuickSort(arr, pivot_index + 1, high)

Partition(arr[], low, high):


1. Choose a pivot element from the array (e.g., arr[high]).
2. Initialize two pointers: i = low - 1 and j = low.
3. Iterate through the array from low to high-1:
a. If arr[j] is less than or equal to the pivot element:
- Increment i.
- Swap arr[i] and arr[j].
4. Swap arr[i+1] and arr[high] (the pivot element) to place the pivot element in its correct
position.
5. Return i+1 as the index of the pivot element.
Time Complexity :

❖ Average Case: O(n log n)


Quick-Sort has an average time complexity of O(n log n) when the partitioning is
well-balanced.
❖ Worst Case: O(n^2)
In the worst-case scenario, when the pivot selection consistently results in unbalanced
partitions, the time complexity can degrade to O(n^2).
❖ Best Case: O(n log n)
The best case occurs when the partitioning consistently produces balanced sub-arrays,
resulting in optimal time complexity.

Space Complexity :

❖ Quick-Sort is often implemented as an in-place sorting algorithm, meaning it doesn't require


additional space proportional to the input size.

❖ The space complexity is generally O(log n) due to the recursive call stack. In the worst case, it
can be O(n) if the recursion depth becomes the same as the input size (unbalanced partitions).

❖ In summary, Quick-Sort is efficient on average with a time complexity of O(n log n), but its
worst-case time complexity is O(n^2). The space complexity is typically O(log n), making it a
good choice for sorting large datasets in-place.

Applications Of Quick Sort :

❖ Sorting Algorithms:
Quick-Sort is primarily used for sorting elements in an array or list efficiently. It is a popular
choice for sorting large datasets due to its average-case time complexity of O(n log n).
❖ File Systems:
Quick-Sort is applied in file systems for sorting and managing files. It helps organize and
retrieve data more efficiently, especially in scenarios where quick access to sorted information is
crucial.

❖ Database Management Systems:


Quick-Sort is employed in database systems to sort and organize records. It enhances the
performance of query operations that require sorted data, such as searching for specific
values or generating reports.
❖ Network Routing:
Quick-Sort can be utilized in network routing algorithms where a quick arrangement of data
is needed. This is beneficial for optimizing the transmission of data packets in networking
applications.
❖ Compiler Optimizations:
Compilers use Quick-Sort in various optimization tasks, such as sorting symbol tables or
optimizing code generation. It helps in managing and organizing information within the
compiler efficiently.

Implementation of quicksort in C++ :

#include <iostream>
using namespace std;

/* Function that considers the last element as pivot,
   places the pivot at its exact position, and places
   smaller elements to the left of the pivot and greater
   elements to the right of the pivot. */
int partition(int a[], int start, int end)
{
    int pivot = a[end]; // pivot element
    int i = (start - 1);

    for (int j = start; j <= end - 1; j++)
    {
        // If current element is smaller than the pivot
        if (a[j] < pivot)
        {
            i++; // increment index of smaller element
            int t = a[i];
            a[i] = a[j];
            a[j] = t;
        }
    }
    int t = a[i + 1];
    a[i + 1] = a[end];
    a[end] = t;
    return (i + 1);
}

/* Function to implement quick sort.
   a[] = array to be sorted, start = starting index, end = ending index */
void quick(int a[], int start, int end)
{
    if (start < end)
    {
        int p = partition(a, start, end); // p is the partitioning index
        quick(a, start, p - 1);
        quick(a, p + 1, end);
    }
}

/* Function to print an array */
void printArr(int a[], int n)
{
    for (int i = 0; i < n; i++)
        cout << a[i] << " ";
}

int main()
{
    cout << "Tushar Mandhan\n" << "Roll no. 2022027566" << endl;
    int a[] = { 23, 8, 28, 13, 18, 26 };
    int n = sizeof(a) / sizeof(a[0]);
    cout << "Before sorting array elements are - \n";
    printArr(a, n);
    quick(a, 0, n - 1);
    cout << "\nAfter sorting array elements are - \n";
    printArr(a, n);
    return 0;
}
Output :

Short Answer Questions :

Q: What is Quick-Sort?
A: Quick-Sort is a sorting algorithm that uses a divide-and-conquer strategy to efficiently sort an
array or list.

Q: How does Quick-Sort work?


A: It selects a pivot element, partitions the array, and recursively sorts the sub-arrays on either
side of the pivot.
Q: What is the time complexity of Quick-Sort on average?
A: O(n log n).

Q: When does Quick-Sort have its worst-case time complexity?


A: When the pivot selection consistently results in unbalanced partitions, leading to O(n^2) time
complexity.

Q: Is Quick-Sort a stable sorting algorithm?


A: No, Quick-Sort is not stable; the relative order of equal elements may change during the
sorting process.

Q: What is the space complexity of Quick-Sort?


A: Typically O(log n) due to the recursive call stack, but it can be O(n) in the worst case for an
unbalanced partition.

Q: Why is randomization used in Quick-Sort?


A: Randomization helps reduce the likelihood of encountering the worst-case scenario and
improves average-case performance.

Q: In which applications is Quick-Sort commonly used?


A: Quick-Sort is widely used in general-purpose sorting, file systems, database management,
networking, and compiler optimizations.

Q: How does the choice of pivot element impact Quick-Sort's performance?


A: The choice of a good pivot element influences partitioning, affecting the algorithm's
efficiency. Ideally, a pivot that divides the array into roughly equal halves leads to better
performance.

PRACTICAL NUMBER :- 3
Aim : Using OpenMP, implement a parallelized Merge Sort algorithm to sort a given set of
elements and determine the time required to sort the elements. Repeat the experiment for
different values of n, the number of elements in the list to be sorted and plot a graph of the time
taken versus n.
The elements can be read from a file or can be generated using the random number generator.

Theory :
❖ Algorithm Type: Merge sort is a well-known sorting algorithm that efficiently sorts
arrays or lists by breaking them down into smaller sublists, sorting these sublists, and then
merging them back together.
❖ Approach: It employs the divide-and-conquer strategy, dividing the unsorted list into smaller
sublists until each sublist contains only one element, making them inherently sorted.
❖ Divide Phase: During the divide phase, merge sort recursively divides the unsorted list into
halves until each sublist consists of one element, forming the base case for sorting.
❖ Recursive Sorting: Through recursive sorting, merge sort sorts the sublists by continuously
dividing them into smaller halves and sorting them individually, ensuring that each sublist is
sorted before proceeding to merge them.
❖ Merge Operation: After sorting the sublists individually, merge sort merges them back
together by comparing elements from each sublist and arranging them in the correct order,
ultimately producing a single sorted list.
❖ Time Complexity: Merge sort boasts a time complexity of O(n log n), making it highly
efficient for sorting large datasets, thanks to its balanced division of the input list and optimal
merging process.
❖ Efficiency: Due to its efficient divide-and-conquer strategy and optimal merging process,
merge sort is well-suited for handling large datasets, outperforming many other sorting
algorithms in terms of speed and performance.
❖ Stability: One of the advantages of merge sort is its stability, meaning it maintains the
relative order of equal elements, ensuring that elements with the same value remain in the
same order as they were initially.

Working of Merge Sort Algorithm :

Now, let's see the working of merge sort Algorithm.


To understand the working of the merge sort algorithm, let's take an unsorted array. It will be
easier to understand the merge sort via an example.
Let the elements of the array be -

According to the merge sort, first divide the given array into two equal halves. Merge sort keeps
dividing the list into equal parts until it cannot be further divided.
As there are eight elements in the given array, so it is divided into two arrays of size 4.

Now, again divide these two arrays into halves. As they are of size 4, so divide them into new
arrays of size 2.
Now, again divide these arrays to get the atomic value that cannot be further divided.

Now, combine them in the same manner they were broken.


In combining, first compare the element of each array and then combine them into another array
in sorted order.
So, first compare 12 and 31, both are in sorted positions. Then compare 25 and 8, and in the list
of two values, put 8 first followed by 25. Then compare 32 and 17, sort them and put 17 first
followed by 32. After that, compare 40 and 42, and place them sequentially.

In the next iteration of combining, compare the arrays with two data values each and merge them into arrays of four values in sorted order.

Now, there is a final merging of the arrays. After the final merging of the above arrays, the array will look like -
Now, the array is completely sorted.

Algorithm:
MergeSort(arr[], left, right):
1. If left < right:
a. Find the middle point to divide the
array into two halves:
middle = (left + right) // 2
b. Call MergeSort recursively for the left half:
MergeSort(arr, left, middle)
c. Call MergeSort recursively for the right half:
MergeSort(arr, middle + 1, right)
d. Merge the two sorted halves using a temporary array:
Merge(arr, left, middle, right)

Merge(arr[], left, middle, right):


1. Initialize temporary arrays L[] and R[].
2. Copy data from arr[] into L[] and R[]:
- L[] contains elements from arr[left] to arr[middle].
- R[] contains elements from arr[middle+1] to arr[right].
3. Initialize pointers i, j, and k to merge the two halves:
- i is the index for L[].
- j is the index for R[].
- k is the index for the merged array arr[].
4. Compare elements of L[] and R[]:
- If L[i] <= R[j], copy L[i] into arr[k] and increment i and k.
- If R[j] < L[i], copy R[j] into arr[k] and increment j and k.
5. Copy any remaining elements of L[] and R[] into arr[] if there are any.

Merge sort complexity

Now, let's see the time complexity of merge sort in best case, average case, and in worst case.
We will also see the space complexity of the merge sort.
1. Time Complexity

Case Time Complexity

Best Case O(n*logn)

Average Case O(n*logn)

Worst Case O(n*logn)
❖ Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already
sorted. The best-case time complexity of merge sort is O(n*logn).
❖ Average Case Complexity - It occurs when the array elements are in jumbled order that is
not properly ascending and not properly descending. The average case time complexity of
merge sort is O(n*logn).
❖ Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means suppose you have to sort the array elements in ascending order, but
its elements are in descending order. The worst-case time complexity of merge sort is
O(n*logn).

2. Space Complexity
Space Complexity O(n)

Stable YES

o The space complexity of merge sort is O(n). This is because the merge step requires temporary arrays proportional to the size of the input to hold the two halves while merging them.

Applications Of Merge Sort :

❖ -Sorting Large Datasets:
❖ Merge sort is used for sorting elements in an array or list efficiently. Its guaranteed O(n log n)
time complexity in every case makes it a reliable choice for large datasets.
❖ -External Sorting:
❖ Merge sort is the standard algorithm for external sorting, where the data is too large to fit in
main memory. Sorted runs are written to disk and then merged, which matches merge sort's
merging process exactly.
❖ -Database Management Systems:
❖ Merge sort is employed in database systems to sort and organize records. Its stability
preserves the relative order of records with equal keys, which is important for multi-key
sorting and report generation.
❖ -Linked List Sorting:
❖ Merge sort works well on linked lists because merging needs only sequential access and
pointer manipulation, giving O(n log n) time without the random access that many other sorts
require.
❖ -Counting Inversions:
❖ A slightly modified merge step can count the number of inversions in an array, a standard
measure of how far a list is from being sorted.

Implementation of merge sort in C++ :

#include <iostream>

using namespace std;

/* Function to merge the subarrays of a[] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;

    int LeftArray[n1], RightArray[n2]; // temporary arrays

    /* copy data to temp arrays */
    for (i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0;   /* initial index of first sub-array */
    j = 0;   /* initial index of second sub-array */
    k = beg; /* initial index of merged sub-array */

    while (i < n1 && j < n2)
    {
        if (LeftArray[i] <= RightArray[j])
        {
            a[k] = LeftArray[i];
            i++;
        }
        else
        {
            a[k] = RightArray[j];
            j++;
        }
        k++;
    }

    /* copy any remaining elements of the left half */
    while (i < n1)
    {
        a[k] = LeftArray[i];
        i++;
        k++;
    }

    /* copy any remaining elements of the right half */
    while (j < n2)
    {
        a[k] = RightArray[j];
        j++;
        k++;
    }
}

void mergeSort(int a[], int beg, int end)
{
    if (beg < end)
    {
        int mid = (beg + end) / 2;
        mergeSort(a, beg, mid);
        mergeSort(a, mid + 1, end);
        merge(a, beg, mid, end);
    }
}

/* Function to print the array */
void printArray(int a[], int n)
{
    for (int i = 0; i < n; i++)
        cout << a[i] << " ";
}

int main() {
    cout << "Tushar Mandhan\n" << "Roll no. 2022027566" << endl;
    int a[] = { 11, 30, 24, 7, 31, 16, 39, 41 };
    int n = sizeof(a) / sizeof(a[0]);
    cout << "Before sorting array elements are - \n";
    printArray(a, n);
    mergeSort(a, 0, n - 1);
    cout << "\nAfter sorting array elements are - \n";
    printArray(a, n);
    return 0;
}
Output :

Short Answer Questions :

Q: What is merge sort?


A: Merge sort is a sorting algorithm that follows the divide-and-conquer strategy, breaking down
a list into smaller sublists, sorting them, and then merging them to obtain a sorted list.

Q: How does merge sort work?


A: Merge sort works by recursively dividing the unsorted list into halves until each sublist
contains only one element. Then, it merges these sublists in a sorted manner until the entire list is
sorted.

Q: What is the time complexity of merge sort?


A: The time complexity of merge sort is O(n log n), where n is the number of elements in the
list.
Q: Can merge sort handle large datasets efficiently?
A: Yes, merge sort is efficient for sorting large datasets due to its O(n log n) time complexity.

Q: Is merge sort a stable sorting algorithm?


A: Yes, merge sort is a stable sorting algorithm, meaning it preserves the relative order of equal
elements.

Q: Does merge sort require additional space during sorting?


A: Yes, merge sort requires additional space proportional to the size of the input list for its
auxiliary arrays during the merge phase.

Q: What is the main advantage of merge sort over other sorting algorithms?
A: One main advantage of merge sort is its consistent performance and efficiency for large
datasets, making it suitable for various applications.

Q: What are some applications of merge sort?


A: Merge sort is commonly used in external sorting, where data is too large to fit into memory,
and in programming languages or libraries for sorting large collections efficiently.

PRACTICAL NUMBER:-4

AIM:- Write a program to implement 0/1 knapsack problem using dynamic programming.

THEORY:- The Knapsack problem is an example of the combinational optimization


problem.
This problem is also commonly known as the “Rucksack Problem“. The name comes from the maximization problem described below:
Given a bag with maximum weight capacity of W and a set of items, each having a weight and a
value associated with it. Decide the number of each item to take in a collection such that the total
weight is less than the capacity and the total value is maximized.
Algorithm:

def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            if weights[i - 1] <= w:
                dp[i][w] = max(values[i - 1] + dp[i - 1][w - weights[i - 1]],
                               dp[i - 1][w])
            else:
                dp[i][w] = dp[i - 1][w]
    return dp[n][capacity]

# Example usage:
weights = [2, 3, 4, 5]
values = [3, 4, 5, 6]
capacity = 8
print("Maximum value that can be obtained:", knapsack(weights, values, capacity))

0/1 Knapsack Problem

The 0/1 Knapsack problem can be defined as follows:


We are given N items where each item has some weight (wi) and value (vi) associated with it. We
are also given a bag with capacity W. The target is to put the items into the bag such that the sum
of values associated with them is the maximum possible.
Note that here we can either put an item completely into the bag or cannot put it at all.
-The 0/1 knapsack problem is solved using the dynamic programming approach.
-The 0/1 knapsack problem has optimal substructure: an optimal solution is built from optimal
solutions to its subproblems.
-In the 0/1 knapsack problem, we are not allowed to break items.
-The 0/1 knapsack problem finds the most valuable subset of items with a total weight less than
or equal to the capacity.
-In the 0/1 knapsack problem, each item is either taken whole (1) or left out (0).

EXAMPLE:-

Input: N = 3, W = 4, profit[] = {1, 2, 3}, weight[] = {4, 5, 1}


Output: 3
Explanation: There are two items which have weight less than or equal to 4. If we select the item
with weight 4, the possible profit is 1. And if we select the item with weight 1, the possible profit
is 3. So the maximum possible profit is 3. Note that we cannot put both the items with weight 4
and 1 together as the capacity of the bag is 4.
Dynamic Programming Approach for 0/1 Knapsack Problem:-

We must follow the below given steps:

1. First, we will be provided the weights and values of n items.

2. We will then put these items in a knapsack of capacity W to get the maximum total value
in the knapsack.

3. After putting the items, we have to fill this knapsack, but we can't break the item. So, we
must either pick the entire item or skip it.

4. Sometimes this may even lead to a knapsack with some spare space left with it.

Memoization Approach for 0/1 Knapsack Problem:

Note: It should be noted that the above function using recursion computes the same sub problems
again and again. See the following recursion tree, K(1, 1) is being evaluated twice. In the
following recursion tree, K() refers to knapSack(). The two parameters indicated in the
following recursion tree are n and W.
The recursion tree is for following sample inputs. weight[]
= {1, 1, 1}, W = 2, profit[] = {10, 20, 30}
K(3, 2)
/ \
/ \
K(2, 2) K(2, 1)
/ \ / \
/ \ / \
K(1, 2) K(1, 1) K(1, 1) K(1, 0)
/ \ / \ / \
/ \ / \ / \
K(0, 2) K(0, 1) K(0, 1) K(0, 0) K(0, 1) K(0, 0)
Recursion tree for Knapsack capacity 2 units and 3 items of 1 unit weight.
As there are repetitions of the same subproblem again and again we can implement the following
idea to solve the problem.

When we encounter a subproblem for the first time, we solve it and store its result in a 2-D array
indexed by the state (n, w). If we come across the same state (n, w) again, instead of
recomputing it in exponential time we directly return its result stored in the table in
constant time.

IMPLEMENTATION :-
#include <iostream>
using namespace std;

int max(int x, int y)
{
    return (x > y) ? x : y;
}

int knapSack(int W, int w[], int v[], int n)
{
    int i, wt;
    int K[n + 1][W + 1];
    for (i = 0; i <= n; i++) {
        for (wt = 0; wt <= W; wt++) {
            if (i == 0 || wt == 0)
                K[i][wt] = 0;
            else if (w[i - 1] <= wt)
                K[i][wt] = max(v[i - 1] + K[i - 1][wt - w[i - 1]], K[i - 1][wt]);
            else
                K[i][wt] = K[i - 1][wt];
        }
    }
    return K[n][W];
}

int main()
{
    cout << "Enter the number of items in a Knapsack:";
    int n, W;
    cin >> n;
    int v[n], w[n];
    for (int i = 0; i < n; i++) {
        cout << "Enter value and weight for item " << i << ":";
        cin >> v[i];
        cin >> w[i];
    }
    cout << "Enter the capacity of knapsack:";
    cin >> W;
    cout << knapSack(W, w, v, n);
    return 0;
}

Output of the source code

Some questions on knapsack problem


❖ What is the knapsack problem?
➢ The knapsack problem is a classic optimization problem where we aim to maximize the total
value of items selected, subject to a constraint on the total weight (or capacity) of the knapsack.

❖ What are the two main variants of the knapsack problem?
➢ The two main variants are the 0/1 knapsack problem (where items can be either included or
excluded) and the fractional knapsack problem (where items can be partially included).

❖ What is the difference between 0/1 knapsack and fractional knapsack?
➢ In the 0/1 knapsack, items must be taken entirely or not at all, whereas in the fractional
knapsack, items can be taken partially (fractionally).

❖ What is the goal of the knapsack problem?
➢ The goal is to maximize the total value of selected items while ensuring that their total weight
does not exceed the knapsack's capacity.

❖ What are the input parameters for the knapsack problem?
➢ The input includes a list of items (each with a weight and value) and the knapsack's capacity.

❖ What is the dynamic programming approach to solving the knapsack problem?
➢ Dynamic programming involves constructing a table to store intermediate results and solving
subproblems to find the optimal solution.

❖ What is the greedy approach for solving the fractional knapsack problem?
➢ The greedy approach selects items based on their value-to-weight ratio, adding the most
valuable items first.

❖ How do you calculate the value-to-weight ratio for items in the fractional knapsack problem?
➢ Divide the value of an item by its weight.

❖ What is the time complexity of the dynamic programming solution for the 0/1 knapsack
problem?
➢ The dynamic programming solution has a time complexity of O(nW), where n is the number
of items and W is the knapsack capacity.

❖ What is the time complexity of the greedy solution for the fractional knapsack problem?
➢ The greedy solution has a time complexity of O(n log n), where n is the number of items.
PRACTICAL NUMBER- 5(a)
AIM:- Write a program to implement traversing in a diagraph using Breadth First Search.
THEORY: Breadth First Search (BFS) is a fundamental graph traversal algorithm. It
involves visiting all the connected nodes of a graph in a level-by-level manner. In this article, we
will look into the concept of BFS and how it can be applied to graphs effectively.
Breadth First Search (BFS) algorithm starts at the tree root and explores all nodes at the present
depth prior to moving on to the nodes at the next depth level.

As in the example given above, BFS algorithm traverses from A to B to E to F first then to C and
G lastly to D. It employs the following rules.

• Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Insert it in a
queue.
• Rule 2 − If no adjacent vertex is found, remove the first vertex from the queue.
• Rule 3 − Repeat Rule 1 and Rule 2 until the queue is empty.
WORKING:-

Step 1: Initialize the queue.

Step 2: We start from visiting S (the starting node), and mark it as visited.

Step 3: We then see an unvisited adjacent node from S. In this example, we have three nodes but
alphabetically we choose A, mark it as visited and enqueue it.

Step 4: Next, the unvisited adjacent node from S is B. We mark it as visited and enqueue it.

Step 5: Next, the unvisited adjacent node from S is C. We mark it as visited and enqueue it.

Step 6: Now, S is left with no unvisited adjacent nodes. So, we dequeue and find A.

Step 7: From A we have D as an unvisited adjacent node. We mark it as visited and enqueue it.

At this stage, we are left with no unmarked (unvisited) nodes. But as per the algorithm we keep
on dequeuing in order to get all unvisited nodes. When the queue gets emptied, the program is
over.

Complexity of BFS Algorithm

Time Complexity

The time complexity of the BFS algorithm is represented in the form of O(V + E), where V is the
number of nodes and E is the number of edges.

Space Complexity

The space complexity of the BFS algorithm is O(V).

BFS Algorithm

BFS(Graph, start_vertex):

1. Initialize a queue and a set to keep track of visited vertices.

2. Enqueue the start_vertex into the queue and mark it as visited.

3. While the queue is not empty:

a. Dequeue a vertex from the queue.

b. Process the vertex (e.g., print it).

c. For each neighbor of the dequeued vertex:

i. If the neighbor has not been visited:

- Enqueue the neighbor into the queue.


- Mark the neighbor as visited.

4. Repeat step 3 until the queue is empty.


Applications of Breadth First Search:

1.Shortest Path and Minimum Spanning Tree for unweighted graph: In an unweighted
graph, the shortest path is the path with the least number of edges. With Breadth First, we always
reach a vertex from a given source using the minimum number of edges. Also, in the case of
unweighted graphs, any spanning tree is Minimum Spanning Tree and we can use either Depth or
Breadth first traversal for finding a spanning tree.

2. Minimum Spanning Tree for weighted graphs: We can also find Minimum Spanning Tree
for weighted graphs using BFT, but the condition is that the weight should be non-negative and
the same for each pair of vertices.

3. Peer-to-Peer Networks: In Peer-to-Peer Networks like BitTorrent, Breadth First Search is


used to find all neighbour nodes.

4.When we need to print or analyze data by level in the graph or tree: BFS is also sometimes
referred to as "level-order traversal", since we can track all the nodes at a given level. It's useful
when we need to batch together all nodes that are at a given level in a tree, or at a given level in a
graph relative to some starting node.

IMPLEMENTATION:-
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define MAX 5

struct Vertex {
    char label;
    bool visited;
};

// queue variables
int queue[MAX];
int rear = -1;
int front = 0;
int queueItemCount = 0;

// graph variables
// array of vertices
struct Vertex* lstVertices[MAX];

// adjacency matrix
int adjMatrix[MAX][MAX];

// vertex count
int vertexCount = 0;

// queue functions
void insert(int data) {
    queue[++rear] = data;
    queueItemCount++;
}

int removeData() {
    queueItemCount--;
    return queue[front++];
}

bool isQueueEmpty() {
    return queueItemCount == 0;
}

// graph functions
// add vertex to the vertex list
void addVertex(char label) {
    struct Vertex* vertex = (struct Vertex*) malloc(sizeof(struct Vertex));
    vertex->label = label;
    vertex->visited = false;
    lstVertices[vertexCount++] = vertex;
}

// add edge to the adjacency matrix
void addEdge(int start, int end) {
    adjMatrix[start][end] = 1;
    adjMatrix[end][start] = 1;
}

// display the vertex
void displayVertex(int vertexIndex) {
    printf("%c ", lstVertices[vertexIndex]->label);
}

// get the adjacent unvisited vertex
int getAdjUnvisitedVertex(int vertexIndex) {
    int i;
    for (i = 0; i < vertexCount; i++) {
        if (adjMatrix[vertexIndex][i] == 1 && lstVertices[i]->visited == false)
            return i;
    }
    return -1;
}

void breadthFirstSearch() {
    int i;

    // mark first node as visited
    lstVertices[0]->visited = true;

    // display the vertex
    displayVertex(0);

    // insert vertex index in queue
    insert(0);

    int unvisitedVertex;
    while (!isQueueEmpty()) {
        // get the vertex at the front of the queue
        int tempVertex = removeData();

        // visit all adjacent unvisited vertices of that vertex
        while ((unvisitedVertex = getAdjUnvisitedVertex(tempVertex)) != -1) {
            lstVertices[unvisitedVertex]->visited = true;
            displayVertex(unvisitedVertex);
            insert(unvisitedVertex);
        }
    }

    // queue is empty, search is complete, reset the visited flag
    for (i = 0; i < vertexCount; i++) {
        lstVertices[i]->visited = false;
    }
}

int main() {
    printf("Tushar Mandhan\n");
    printf("Roll no. 2022027566\n");
    int i, j;

    for (i = 0; i < MAX; i++) {     // set adjacency
        for (j = 0; j < MAX; j++)   // matrix to 0
            adjMatrix[i][j] = 0;
    }

    addVertex('S'); // 0
    addVertex('A'); // 1
    addVertex('B'); // 2
    addVertex('C'); // 3
    addVertex('D'); // 4

    addEdge(0, 1); // S - A
    addEdge(0, 2); // S - B
    addEdge(0, 3); // S - C
    addEdge(1, 4); // A - D
    addEdge(2, 4); // B - D
    addEdge(3, 4); // C - D

    printf("\nBreadth First Search: ");
    breadthFirstSearch();
    return 0;
}
Output:

IMPORTANT QUESTIONS:-

❖ What is BFS?
BFS is an algorithm for traversing or searching tree or graph data structures, starting at the
root (or some arbitrary node) and exploring neighbors before moving to the next level
neighbors.

❖ How does BFS work?


BFS starts at the tree root and explores all neighbor nodes at the present depth prior to
moving on to nodes at the next depth level.

❖ Which data structure is used for implementing BFS?


A queue is used to implement BFS.

❖ Can BFS be used to detect cycles in a graph?


Yes, BFS can be used to detect cycles in an undirected graph.

❖ Is BFS a greedy algorithm?


No, BFS is not a greedy algorithm; it is a traversal algorithm that uses a queue.

❖ What is the time complexity of BFS?


The time complexity of BFS is (O(V + E)), where (V) is the number of vertices and (E) is the
number of edges in the graph.

❖ Can BFS find the shortest path in an unweighted graph?


Yes, BFS can find the shortest path in an unweighted graph.

PRACTICAL NO: 5(b)

AIM: To write a program to implement a program depth first search .


THEORY :
Depth First Search (DFS) algorithm is a recursive algorithm for searching all the vertices of a
graph or tree data structure. This algorithm traverses a graph in a depthward motion and uses a
stack to remember to get the next vertex to start a search, when a dead end occurs in any
iteration.

As in the example given above, DFS algorithm traverses from S to A to D to G to E to B first,
then to F and lastly to C. It employs the following rules.
• Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Push it in a
stack.
• Rule 2 − If no adjacent vertex is found, pop up a vertex from the stack. (It will pop up all
the vertices from the stack, which do not have adjacent vertices.)
• Rule 3 − Repeat Rule 1 and Rule 2 until the stack is empty.

Step 1: Initialize the stack.

Step 2: Mark S as visited and put it onto the stack. Explore any unvisited adjacent node from S.
We have three nodes and we can pick any of them. For this example, we shall take the nodes in
alphabetical order.

Step 3: Mark A as visited and put it onto the stack. Explore any unvisited adjacent node from A.
Both S and D are adjacent to A but we are concerned with unvisited nodes only.

Step 4: Visit D, mark it as visited and put it onto the stack. Here, we have B and C nodes, which
are adjacent to D and both are unvisited. However, we shall again choose in alphabetical order.

Step 5: We choose B, mark it as visited and put it onto the stack. Here B does not have any
unvisited adjacent node. So, we pop B from the stack.

Step 6: We check the stack top to return to the previous node and check if it has any unvisited
nodes. Here, we find D to be on the top of the stack.

Step 7: The only unvisited adjacent node from D is C now. So we visit C, mark it as visited and
put it onto the stack.
Complexity of DFS Algorithm

Time Complexity:
The time complexity of the DFS algorithm is represented in the form of O(V + E), where V is
the number of nodes and E is the number of edges.

Space Complexity:
The space complexity of the DFS algorithm is O(V).

Applications of Depth First Search:

1. Detecting cycle in a graph: A graph has a cycle if and only if we see a back edge during
DFS. So we can run DFS for the graph and check for back edges.
2. Path Finding: We can specialize the DFS algorithm to find a path between two given
vertices u and z.
• Call DFS(G, u) with u as the start vertex.
• Use a stack S to keep track of the path between the start vertex and the current vertex.
• As soon as destination vertex z is encountered, return the path as the contents of the stack.
3. Model checking: Depth-first search can be used in model checking, which is the process of
checking that a model of a system meets a certain set of properties.
4. Back-tracking: Depth-first search can be used in backtracking algorithms.

DFS ALGORITHM:
DFS(Graph, start_vertex):
1. Initialize an empty stack and a set to keep track of visited vertices.
2. Push the start_vertex onto the stack and mark it as visited.
3. While the stack is not empty:
a. Pop a vertex from the stack.
b. Process the vertex (e.g., print it).
c. For each neighbor of the popped vertex:
i. If the neighbor has not been visited:
- Push the neighbor onto the stack.
- Mark the neighbor as visited.
4. Repeat step 3 until the stack is empty.

IMPLEMENTATION OF DEPTH FIRST SEARCH :

Source Code:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define MAX 5

struct Vertex {
    char label;
    bool visited;
};

// stack variables
int stack[MAX];
int top = -1;

// graph variables
// array of vertices
struct Vertex* lstVertices[MAX];

// adjacency matrix
int adjMatrix[MAX][MAX];

// vertex count
int vertexCount = 0;

// stack functions
void push(int item) {
    stack[++top] = item;
}

int pop() {
    return stack[top--];
}

int peek() {
    return stack[top];
}

bool isStackEmpty() {
    return top == -1;
}

// graph functions
// add vertex to the vertex list
void addVertex(char label) {
    struct Vertex* vertex = (struct Vertex*) malloc(sizeof(struct Vertex));
    vertex->label = label;
    vertex->visited = false;
    lstVertices[vertexCount++] = vertex;
}

// add edge to the adjacency matrix
void addEdge(int start, int end) {
    adjMatrix[start][end] = 1;
    adjMatrix[end][start] = 1;
}

// display the vertex
void displayVertex(int vertexIndex) {
    printf("%c ", lstVertices[vertexIndex]->label);
}

// get the adjacent unvisited vertex
int getAdjUnvisitedVertex(int vertexIndex) {
    int i;
    for (i = 0; i < vertexCount; i++) {
        if (adjMatrix[vertexIndex][i] == 1 && lstVertices[i]->visited == false) {
            return i;
        }
    }
    return -1;
}

void depthFirstSearch() {
    int i;

    // mark first node as visited
    lstVertices[0]->visited = true;

    // display the vertex
    displayVertex(0);

    // push vertex index in stack
    push(0);

    while (!isStackEmpty()) {
        // get the unvisited vertex adjacent to the vertex at the top of the stack
        int unvisitedVertex = getAdjUnvisitedVertex(peek());

        // no adjacent vertex found
        if (unvisitedVertex == -1) {
            pop();
        } else {
            lstVertices[unvisitedVertex]->visited = true;
            displayVertex(unvisitedVertex);
            push(unvisitedVertex);
        }
    }

    // stack is empty, search is complete, reset the visited flag
    for (i = 0; i < vertexCount; i++) {
        lstVertices[i]->visited = false;
    }
}

int main() {
    int i, j;
    printf("Tushar Mandhan\n");
    printf("Roll no. 2022027566\n");

    for (i = 0; i < MAX; i++) {     // set adjacency
        for (j = 0; j < MAX; j++)   // matrix to 0
            adjMatrix[i][j] = 0;
    }

    addVertex('S'); // 0
    addVertex('A'); // 1
    addVertex('B'); // 2
    addVertex('C'); // 3
    addVertex('D'); // 4

    addEdge(0, 1); // S - A
    addEdge(0, 2); // S - B
    addEdge(0, 3); // S - C
    addEdge(1, 4); // A - D
    addEdge(2, 4); // B - D
    addEdge(3, 4); // C - D

    printf("Depth First Search: ");
    depthFirstSearch();
    return 0;
}

Output:

Important question of Depth first search:

1. Question: What is the primary difference between DFS and BFS?


Answer: The key difference lies in the order of exploration. DFS explores as far as possible
along a branch before backtracking, while BFS explores all neighbors at the current level before
moving deeper.

2. Question: Can DFS be used to find the shortest path in an unweighted graph?

Answer: No, DFS does not guarantee the shortest path. It may find a longer path if it explores
deeper levels first.

3. Question: How can you implement DFS iteratively (without recursion)?


Answer: Use a stack data structure. Push the starting node onto the stack, then repeatedly pop a
node, mark it as visited, and push its unvisited neighbors onto the stack until the stack is empty.

4. Question: Does DFS work for both directed and undirected graphs?
Answer: Yes, DFS works for both types of graphs. In an undirected graph, it explores all
connected components. In a directed graph, it explores the entire component reachable from the
starting node.

5. Question: What is the concept of backtracking in DFS?


Answer: Backtracking occurs when DFS reaches a dead end (i.e., no unvisited neighbors) and
needs to backtrack to explore other branches. It pops nodes from the stack until it finds a valid
next node.

6. Question: Can DFS be used to detect cycles in a graph?


Answer: Yes, DFS can detect cycles. If during traversal, you encounter an already visited node
(other than the parent), it indicates a cycle in the graph.

Practical -6

Aim: Find Minimum Cost Spanning Tree of a given undirected graph using Prim’s algorithm.

Theory: This algorithm always starts with a single node and moves through several adjacent nodes, in
order to explore all of the connected edges along the way.

The algorithm starts with an empty spanning tree. The idea is to maintain two sets of vertices. The first set
contains the vertices already included in the MST, and the other set contains the vertices not yet included.
At every step, it considers all the edges that connect the two sets and picks the minimum weight edge
from these edges. After picking the edge, it moves the other endpoint of the edge to the set containing
MST.
A group of edges that connects two sets of vertices in a graph is called cut in graph theory. So, at every
step of Prim’s algorithm, find a cut, pick the minimum weight edge from the cut, and include this vertex
in MST Set (the set that contains already included vertices).
How does Prim’s Algorithm Work?

The working of Prim’s algorithm can be described by using the following steps:
Step 1: Determine an arbitrary vertex as the starting vertex of the MST.
Step 2: Follow steps 3 to 5 till there are vertices that are not included in the MST (known as fringe
vertex).
Step 3: Find edges connecting any tree vertex with the fringe vertices.
Step 4: Find the minimum among these edges.
Step 5: Add the chosen edge to the MST if it does not form any cycle.
Step 6: Return the MST and exit.

Illustration of Prim’s Algorithm:


Consider the following graph as an example for which we need to find the Minimum Spanning Tree
(MST).
Step 1: Firstly, we select an arbitrary vertex that acts as the starting vertex of the Minimum Spanning
Tree. Here we have selected vertex 0 as the starting vertex.

Step 2: All the edges connecting the incomplete MST and other vertices are the edges {0, 1} and {0, 7}.
Between these two the edge with minimum weight is {0, 1}. So include the edge and vertex 1 in the MST.

Step 3: The edges connecting the incomplete MST to other vertices are {0, 7}, {1, 7} and {1, 2}. Among
these edges the minimum weight is 8 which is of the edges {0, 7} and {1, 2}. Let us here include the edge
{0, 7} and the vertex 7 in the MST
Step 4: The edges that connect the incomplete MST with the fringe vertices are {1, 2}, {7, 6} and {7, 8}.
Add the edge {7, 6} and the vertex 6 in the MST as it has the least weight (i.e., 1).

Step 5: The connecting edges now are {7, 8}, {1, 2}, {6, 8} and {6, 5}. Include edge {6, 5} and vertex 5
in the MST as the edge has the minimum weight (i.e., 2) among them.

Step 6: Among the current connecting edges, the edge {5, 2} has the minimum weight. So include that
edge and the vertex 2 in the MST.
Step 7: The connecting edges between the incomplete MST and the other edges are {2, 8}, {2, 3}, {5, 3}
and {5, 4}. The edge with minimum weight is edge {2, 8} which has weight 2. So include this edge and
the vertex 8 in the MST.

Step 8: See here that the edges {7, 8} and {2, 3} both have same weight which are minimum. But 7 is
already part of MST. So we will consider the edge {2, 3} and include that edge and vertex 3 in the MST.

Step 9: Only the vertex 4 remains to be included. The minimum weighted edge from the incomplete MST
to 4 is {3, 4} with weight 9, so include this edge and the vertex 4 in the MST.
The final structure of the MST is as follows and the weight of the edges of the MST is (4 + 8 + 1 + 2 + 4 +
2 + 7 + 9) = 37.

Prims algorithm:
1. Initialize an empty set to store the vertices that have been included in the MST.
2. Initialize an empty list to store the edges of the MST.
3. Choose an arbitrary vertex to start the MST.
4. Add the chosen vertex to the set of included vertices.
5. While the set of included vertices does not contain all vertices:
a. Find the minimum-weight edge that connects a vertex in the set to a vertex outside the set.
b. Add the edge to the MST.
c. Add the vertex connected by the edge to the set of included vertices.
6. Return the list of edges of the MST.
Implementation:
#include <iostream>
#include <limits.h>
using namespace std;

// Number of vertices in the graph
#define V 5

// Find the vertex with the minimum key value among the vertices not yet in the MST
int minKey(int key[], bool mstSet[])
{
    // Initialize min value
    int min = INT_MAX, min_index;
    for (int v = 0; v < V; v++)
        if (mstSet[v] == false && key[v] < min)
            min = key[v], min_index = v;
    return min_index;
}

void printMST(int parent[], int n, int graph[V][V])
{
    cout << "Edge\tWeight\n";
    for (int i = 1; i < n; i++)
        cout << parent[i] << "--" << i << "\t" << graph[i][parent[i]] << "\n";
}

// Function to construct and print MST for a graph represented using adjacency
// matrix representation
void prims(int graph[V][V])
{
    int parent[V];  // Array to store constructed MST
    int key[V];     // Key values used to pick minimum weight edge in cut
    bool mstSet[V]; // To represent set of vertices not yet included in MST

    // Initialize all keys as INFINITE
    for (int i = 0; i < V; i++)
        key[i] = INT_MAX, mstSet[i] = false;

    key[0] = 0;     // Pick the first vertex first
    parent[0] = -1; // First node is always the root of the MST

    for (int count = 0; count < V - 1; count++)
    {
        int u = minKey(key, mstSet);
        mstSet[u] = true;
        for (int v = 0; v < V; v++)
            if (graph[u][v] && mstSet[v] == false && graph[u][v] < key[v])
                parent[v] = u, key[v] = graph[u][v];
    }

    printMST(parent, V, graph);
}

int main() {
    cout << "Tushar Mandhan\n" << "Roll no. 2022027566\n";
    // Adjacency matrix of the undirected weighted graph (must be symmetric)
    int graph[V][V] = { { 0, 2, 0, 6, 0 },
                        { 2, 0, 3, 8, 5 },
                        { 0, 3, 0, 0, 7 },
                        { 6, 8, 0, 0, 9 },
                        { 0, 5, 7, 9, 0 } };
    // Print the solution
    prims(graph);
    return 0;
}

Output:

Some important questions about Prim's algorithm:

Question: What is the purpose of Prim’s algorithm?

Answer: Prim’s algorithm finds the minimum spanning tree (MST) in a weighted, connected
graph. It connects all vertices with the minimum total edge weight.

Question: How does Prim’s algorithm work?

Answer: Prim’s starts with an arbitrary vertex and repeatedly adds the nearest unvisited vertex to the
MST. It maintains a set of visited vertices and a priority queue (min heap) of edges.

Question: What is the difference between Prim’s and Kruskal’s algorithms?

Answer: Both find MSTs, but Kruskal’s processes edges in ascending order of weight, while Prim’s
focuses on vertices. Kruskal’s can handle disconnected graphs, while Prim’s requires a single connected
component.

Question: Is Prim’s algorithm greedy?

Answer: Yes, Prim’s is a greedy algorithm. At each step, it chooses the locally optimal edge with the
minimum weight.

Question: Can Prim’s algorithm handle graphs with negative edge weights?

Answer: No, Prim’s assumes non-negative edge weights. If a graph has negative weights, consider using
Dijkstra’s algorithm instead.
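The O(E log V) behaviour mentioned in the answers above comes from replacing the linear scan for the minimum key with a min-heap. Below is a minimal sketch of that variant over an adjacency list; the function name `primMST` and the list layout are illustrative assumptions, not part of this practical's program:

```cpp
#include <vector>
#include <queue>
#include <utility>

// Prim's MST using a min-heap over an adjacency list: O(E log V).
// adj[u] holds pairs {v, w} for each edge u--v of weight w.
// Returns the total weight of the MST (assumes the graph is connected).
long long primMST(const std::vector<std::vector<std::pair<int,int>>>& adj)
{
    int n = adj.size();
    std::vector<bool> inMST(n, false);
    // Min-heap ordered by edge weight: entries are {weight, vertex}
    std::priority_queue<std::pair<int,int>,
                        std::vector<std::pair<int,int>>,
                        std::greater<std::pair<int,int>>> pq;
    pq.push({0, 0}); // start from vertex 0 with cost 0
    long long total = 0;
    while (!pq.empty()) {
        auto [w, u] = pq.top();
        pq.pop();
        if (inMST[u]) continue; // stale heap entry, skip it
        inMST[u] = true;
        total += w;
        for (auto [v, wt] : adj[u])
            if (!inMST[v])
                pq.push({wt, v});
    }
    return total;
}
```

Each vertex is finalized the first time it is popped; outdated heap entries for already-included vertices are simply skipped.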
PRACTICAL NUMBER :- 7

Aim : Find minimum cost spanning tree of a given undirected graph using kruskal’s algorithm.

Theory :
Kruskal's Algorithm
Spanning tree - A spanning tree is the subgraph of an undirected connected graph.
Minimum Spanning tree - Minimum spanning tree can be defined as the spanning tree in which
the sum of the weights of the edge is minimum. The weight of the spanning tree is the sum of the
weights given to the edges of the spanning tree.
Now, let's start with the main topic.
Kruskal's Algorithm is used to find the minimum spanning tree for a connected weighted graph.
The main target of the algorithm is to find the subset of edges by using which we can traverse
every vertex of the graph. It follows the greedy approach that finds an optimum solution at every
stage instead of focusing on a global optimum.
How does Kruskal's algorithm work?
In Kruskal's algorithm, we start from edges with the lowest weight and keep adding the edges
until the goal is reached. The steps to implement Kruskal's algorithm are listed as follows - First,
sort all the edges from low weight to high.
Now, take the edge with the lowest weight and add it to the spanning tree. If the edge to be added
creates a cycle, then reject the edge.
Continue to add the edges until we reach all vertices, and a minimum spanning tree is created.
The applications of Kruskal's algorithm are -
• Kruskal's algorithm can be used to layout electrical wiring among cities.
• It can be used to lay down LAN connections.
Example of Kruskal's algorithm
Now, let's see the working of Kruskal's algorithm using an example. It will be easier to
understand Kruskal's algorithm using an example.
Suppose a weighted graph is -

The weight of the edges of the above graph is given in the below table -

Edge:   AB  AC  AD  AE  BC  CD  DE
Weight:  1   7  10   5   3   4   2

Now, sort the edges given above in the ascending order of their weights.

Edge:   AB  DE  BC  CD  AE  AC  AD
Weight:  1   2   3   4   5   7  10
Now, let's start constructing the minimum spanning tree.
Step 1 - First, add the edge AB with weight 1 to the MST.

Step 2 - Add the edge DE with weight 2 to the MST as it is not creating the cycle.

Step 3 - Add the edge BC with weight 3 to the MST, as it is not creating any cycle or loop.

Step 4 - Now, pick the edge CD with weight 4 to the MST, as it is not forming the cycle.

Step 5 - After that, pick the edge AE with weight 5. Including this edge will create the cycle, so
discard it.
Step 6 - Pick the edge AC with weight 7. Including this edge will create the cycle, so discard it.
Step 7 - Pick the edge AD with weight 10. Including this edge will also create the cycle, so
discard it.
So, the final minimum spanning tree obtained from the given weighted graph by using Kruskal's
algorithm is -
The cost of the MST is = AB + DE + BC + CD = 1 + 2 + 3 + 4 = 10.
Now, the number of edges in the above tree equals the number of vertices minus 1. So, the
algorithm stops here.

Algorithm:
Step 1: Create a forest F in such a way that every vertex of the graph is a separate tree.
Step 2: Create a set E that contains all the edges of the graph.
Step 3: Repeat Steps 4 and 5 while E is NOT EMPTY and F is not spanning
Step 4: Remove an edge from E with minimum weight
Step 5: IF the edge obtained in Step 4 connects two different trees, then add it to the forest F
(for combining two trees into one tree).
ELSE
Discard the edge
Step 6: END
Complexity of Kruskal's algorithm
Now, let's see the time complexity of Kruskal's algorithm.
Time Complexity
The time complexity of Kruskal's algorithm is O(E log E) or, equivalently, O(E log V), where E is the number of edges and V is the number of vertices.
Implementation of Kruskal's algorithm :
#include <iostream>
#include <algorithm>
using namespace std;

const int MAX = 1e4 + 5;
int id[MAX], nodes, edges;
pair<long long, pair<int, int> > p[MAX];

// Initially every vertex is its own parent (a separate tree)
void init()
{
    for (int i = 0; i < MAX; ++i)
        id[i] = i;
}

// Find the root of x, compressing the path along the way
int root(int x)
{
    while (id[x] != x) {
        id[x] = id[id[x]];
        x = id[x];
    }
    return x;
}

// Merge the trees containing x and y
void union1(int x, int y)
{
    int p = root(x);
    int q = root(y);
    id[p] = id[q];
}

// Process the edges in ascending order of weight, adding every
// edge that connects two different trees
long long kruskal(pair<long long, pair<int, int> > p[])
{
    int x, y;
    long long cost, minimumCost = 0;
    for (int i = 0; i < edges; ++i) {
        x = p[i].second.first;
        y = p[i].second.second;
        cost = p[i].first;
        if (root(x) != root(y)) {
            minimumCost += cost;
            union1(x, y);
        }
    }
    return minimumCost;
}

int main()
{
    cout << "Tushar Mandhan\n" << "Roll no. 2022027566\n";
    int x, y;
    long long weight, minimumCost;
    init();
    cout << "Enter Nodes and edges\n";
    cin >> nodes >> edges;
    for (int i = 0; i < edges; ++i) {
        cout << "Enter the value of X, Y and edges\n";
        cin >> x >> y >> weight;
        p[i] = make_pair(weight, make_pair(x, y));
    }
    sort(p, p + edges);
    minimumCost = kruskal(p);
    cout << "Minimum cost is " << minimumCost << endl;
    return 0;
}
Output :
Short Questions and Answers
Q: What is Kruskal's algorithm?
A: Kruskal's algorithm is a greedy algorithm used to find the minimum spanning tree of a
connected weighted graph.

Q: How does Kruskal's algorithm work?


A: Kruskal's algorithm grows the minimum spanning tree by adding the shortest edge that doesn't
form a cycle until all vertices are included.

Q: What is the main idea behind Kruskal's algorithm?


A: The main idea is to sort all the edges by their weights and then add them to the spanning tree
one by one, ensuring no cycles are formed.

Q: What data structure does Kruskal's algorithm primarily use?


A: Kruskal's algorithm primarily uses a disjoint-set data structure to efficiently check for cycles.

Q: What is the time complexity of Kruskal's algorithm?


A: The time complexity of Kruskal's algorithm is O(E log E) with sorting and O(E log V) with a
suitable data structure implementation, where E is the number of edges and V is the number of
vertices.

Q: Can Kruskal's algorithm handle graphs with negative edge weights?


A: Yes, Kruskal's algorithm can handle graphs with negative edge weights, as long as there are no
negative cycles.

Q: How does Kruskal's algorithm ensure connectivity of the minimum spanning tree?

A: Kruskal's algorithm ensures connectivity by adding edges in ascending order of weights, connecting vertices from disjoint sets.

Q: Is Kruskal's algorithm more suitable for dense or sparse graphs?


A: Kruskal's algorithm is generally more suitable for sparse graphs due to its time complexity
being dependent on the number of edges.
Q: Can Kruskal's algorithm handle disconnected graphs?
A: Yes, Kruskal's algorithm can handle disconnected graphs by separately applying it to each
connected component.

Q: Does Kruskal's algorithm guarantee the uniqueness of the minimum spanning tree?

A: Yes, Kruskal's algorithm guarantees a unique minimum spanning tree if all edge weights are unique.
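The answers above mention the disjoint-set (union-find) structure that Kruskal's algorithm relies on. The program in this practical uses path halving; a sketch with full path compression plus union by rank (the names `DSU`, `find`, and `unite` are illustrative) looks like this:

```cpp
#include <vector>
#include <utility>

// Disjoint-set (union-find) with path compression and union by rank,
// the structure Kruskal's algorithm uses to detect cycles in
// near-constant amortized time per operation.
struct DSU {
    std::vector<int> parent, rank_;
    explicit DSU(int n) : parent(n), rank_(n, 0) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }
    int find(int x) {
        if (parent[x] != x)
            parent[x] = find(parent[x]); // path compression
        return parent[x];
    }
    // Returns false if x and y were already in the same set
    // (i.e., the edge x--y would close a cycle).
    bool unite(int x, int y) {
        int rx = find(x), ry = find(y);
        if (rx == ry) return false;
        if (rank_[rx] < rank_[ry]) std::swap(rx, ry);
        parent[ry] = rx; // attach the shorter tree under the taller one
        if (rank_[rx] == rank_[ry]) ++rank_[rx];
        return true;
    }
};
```

Kruskal's loop then reduces to: sort the edges by weight, and for each edge (u, v) add it to the MST exactly when `unite(u, v)` returns true.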

PRACTICAL NUMBER :- 8

Aim : From a given vertex in a weighted connected graph, find shortest paths to other vertices
using Dijkstra’s algorithm.

Theory :
An Introduction to Dijkstra's Algorithm

Ever wondered how does Google Maps finds the shortest and fastest route between two places?

Well, the answer is Dijkstra's Algorithm. Dijkstra's Algorithm is a Graph algorithm that finds
the shortest path from a source vertex to all other vertices in the Graph (single source shortest
path). It is a type of Greedy Algorithm that only works on Weighted Graphs having positive
weights. The time complexity of Dijkstra's Algorithm is O(V2) with the help of the adjacency
matrix representation of the graph. This time complexity can be reduced to O((V + E) log V)
with the help of an adjacency list representation of the graph, where V is the number of vertices
and E is the number of edges in the graph.

History of Dijkstra's Algorithm

Dijkstra's Algorithm was designed and published by Dr. Edsger W. Dijkstra, a Dutch Computer
Scientist, Software Engineer, Programmer, Science Essayist, and Systems Scientist.

Fundamentals of Dijkstra's Algorithm

The following are the basic concepts of Dijkstra's Algorithm:

1. Dijkstra's Algorithm begins at the node we select (the source node), and it examines the
graph to find the shortest path between that node and all the other nodes in the graph.
2. The Algorithm keeps records of the presently acknowledged shortest distance from each
node to the source node, and it updates these values if it finds any shorter path.
3. Once the Algorithm has retrieved the shortest path between the source and another node,
that node is marked as 'visited' and included in the path.
4. The procedure continues until all the nodes in the graph have been included in the path.
In this manner, we have a path connecting the source node to all other nodes, following
the shortest possible path to reach each node.

Working of Dijkstra's Algorithm:

The following is the step that we will follow to implement Dijkstra's Algorithm:

Step 1: First, we will mark the source node with a current distance of 0 and set the rest of the
nodes to INFINITY.

Step 2: We will then set the unvisited node with the smallest current distance as the current node,
suppose X.

Step 3: For each neighbor N of the current node X: We will then add the current distance of X
with the weight of the edge joining X-N. If it is smaller than the current distance of N, set it as
the new current distance of N.
Step 4: We will then mark the current node X as visited.

Step 5: We will repeat the process from 'Step 2' if there is any node unvisited left in the graph.

Let us now understand the implementation of the algorithm with the help of an example:

Figure 6: The Given Graph

1. We will use the above graph as the input, with node A as the source.
2. First, we will mark all the nodes as unvisited.
3. We will set the path to 0 at node A and INFINITY for all the other nodes.
4. We will now mark source node A as visited and access its neighboring nodes. Note: We
have only accessed the neighboring nodes, not visited them.
5. We will now update the path to node B by 4 with the help of relaxation because the path
to node A is 0 and the path from node A to B is 4, and the minimum((0 + 4),
INFINITY) is 4.
6. We will also update the path to node C by 5 with the help of relaxation because the path
to node A is 0 and the path from node A to C is 5, and the minimum((0 + 5),
INFINITY) is 5. Both the neighbors of node A are now relaxed; therefore, we can move
ahead.
7. We will now select the next unvisited node with the least path and visit it. Hence, we will
visit node B and perform relaxation on its unvisited neighbors. After performing
relaxation, the path to node C will remain 5, whereas the path to node E will become 11,
and the path to node D will become 13.
8. We will now visit node E and perform relaxation on its neighboring nodes B, D, and F.
Since only node F is unvisited, it will be relaxed. Thus, the path to node B will remain as
it is, i.e., 4, the path to node D will also remain 13, and the path to node F will become 14
(8 + 6).
9. Now we will visit node D, and only node F will be relaxed. However, the path to node F
will remain unchanged, i.e., 14.
10. Since only node F is remaining, we will visit it but not perform any relaxation as all its
neighboring nodes are already visited.
11. Once all the nodes of the graphs are visited, the program will end.

Complexity :
• The time complexity of Dijkstra's algorithm depends on the data structure used to
implement the priority queue. Using a binary heap, the time complexity is O((V + E) log
V), where V is the number of vertices and E is the number of edges in the graph. With
Fibonacci heaps, it can be reduced to O(V log V + E).

• Dijkstra's algorithm typically requires O(V) space for storing distances and O(V) space
for maintaining the priority queue, resulting in a total space complexity of O(V).

Implementation of Dijkstra’s Algorithm in C++ :


#include <iostream>
#include <limits.h>
using namespace std;

// Number of vertices in the graph
#define V 9

// Find the vertex with the minimum distance value among the
// vertices not yet included in the shortest path tree
int minDistance(int dist[], bool sptSet[])
{
    // Initialize min value
    int min = INT_MAX, min_index;
    for (int v = 0; v < V; v++)
        if (sptSet[v] == false && dist[v] <= min)
            min = dist[v], min_index = v;
    return min_index;
}

// A utility function to print the constructed distance array
void printSolution(int dist[])
{
    cout << "Vertex \t Distance from Source" << endl;
    for (int i = 0; i < V; i++)
        cout << i << " \t\t\t\t" << dist[i] << endl;
}

// Function that implements Dijkstra's single source shortest path
// algorithm for a graph represented using an adjacency matrix
void dijkstra(int graph[V][V], int src)
{
    int dist[V];    // dist[i] holds the shortest distance from src to i
    bool sptSet[V]; // sptSet[i] is true once vertex i is included in the
                    // shortest path tree, i.e. its distance is finalized

    // Initialize all distances as INFINITE and sptSet[] as false
    for (int i = 0; i < V; i++)
        dist[i] = INT_MAX, sptSet[i] = false;

    // Distance of the source vertex from itself is always 0
    dist[src] = 0;

    // Find the shortest path for all vertices
    for (int count = 0; count < V - 1; count++) {
        int u = minDistance(dist, sptSet);

        // Mark the picked vertex as processed
        sptSet[u] = true;

        // Update dist[v] only if v is not in sptSet, there is an edge
        // from u to v, and the total weight of the path from src to v
        // through u is smaller than the current value of dist[v]
        for (int v = 0; v < V; v++)
            if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
                && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }

    // Print the constructed distance array
    printSolution(dist);
}

int main()
{
    cout << "Tushar Mandhan\n" << "Roll no. 2022027566\n";
    /* Let us create the example graph discussed above */
    int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
                        { 4, 0, 8, 0, 0, 0, 0, 11, 0 },
                        { 0, 8, 0, 7, 0, 4, 0, 0, 2 },
                        { 0, 0, 7, 0, 9, 14, 0, 0, 0 },
                        { 0, 0, 0, 9, 0, 10, 0, 0, 0 },
                        { 0, 0, 4, 14, 10, 0, 2, 0, 0 },
                        { 0, 0, 0, 0, 0, 2, 0, 1, 6 },
                        { 8, 11, 0, 0, 0, 0, 1, 0, 7 },
                        { 0, 0, 2, 0, 0, 0, 6, 7, 0 } };
    // Function call
    dijkstra(graph, 0);
    return 0;
}

Output :

Short Answer Questions :

1. What is Dijkstra's Algorithm?


• Dijkstra's Algorithm is a graph search algorithm used to find the shortest path from a
source node to all other nodes in a weighted graph.
2. Who developed Dijkstra's Algorithm?
• Dijkstra's Algorithm was developed by Dutch computer scientist Edsger W. Dijkstra in
1956.
3. What type of graph does Dijkstra's Algorithm work on?
• Dijkstra's Algorithm works on graphs with non-negative edge weights.
4. How does Dijkstra's Algorithm work?
• Dijkstra's Algorithm iteratively selects the node with the smallest tentative distance from
the source, updates the distances to its neighboring nodes, and continues until all nodes
have been visited.
5. What is the time complexity of Dijkstra's Algorithm?
• The time complexity of Dijkstra's Algorithm is O(V^2) for implementations using arrays
and O((V+E)logV) for implementations using priority queues, where V is the number of
vertices and E is the number of edges in the graph.
6. Does Dijkstra's Algorithm work for graphs with negative edge weights?
• No, Dijkstra's Algorithm does not work for graphs with negative edge weights. It may
produce incorrect results or enter into an infinite loop.
7. What is the data structure commonly used to implement Dijkstra's Algorithm
efficiently?
• Priority queues or min-heaps are commonly used data structures to implement Dijkstra's
Algorithm efficiently.
8. What is the main advantage of Dijkstra's Algorithm?
• The main advantage of Dijkstra's Algorithm is its ability to find the shortest path from a
source node to all other nodes in a graph efficiently.
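As noted in answers 5 and 7 above, a priority queue brings Dijkstra's algorithm down to O((V + E) log V) on an adjacency list. A minimal sketch of that variant follows; the function name `dijkstraPQ` and the list layout are assumptions for illustration, not part of this practical's program:

```cpp
#include <vector>
#include <queue>
#include <limits>
#include <utility>

// Dijkstra with a min-heap over an adjacency list: O((V + E) log V).
// adj[u] holds pairs {v, w}; returns the distance array from src.
std::vector<long long> dijkstraPQ(
    const std::vector<std::vector<std::pair<int,int>>>& adj, int src)
{
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    dist[src] = 0;
    // Min-heap of {distance so far, vertex}
    std::priority_queue<std::pair<long long,int>,
                        std::vector<std::pair<long long,int>>,
                        std::greater<std::pair<long long,int>>> pq;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue; // stale entry, skip
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```

Instead of a decrease-key operation, the sketch pushes a fresh entry on every relaxation and discards outdated entries when they are popped.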

Practical number: 9
Aim: Write a program to implement the Bellman-Ford algorithm.
Theory:
Bellman-Ford is a single source shortest path algorithm that determines the shortest path between a
given source vertex and every other vertex in a graph. This algorithm can be used on both weighted
and unweighted graphs.
The Bellman-Ford algorithm is also guaranteed to find the shortest path in a graph, similar to Dijkstra's algorithm. Although Bellman-Ford is slower than Dijkstra's algorithm, it is capable of handling graphs with negative edge weights, which makes it more versatile. The shortest path cannot be found if there exists a negative cycle in the graph: if we continue to go around the negative cycle an infinite number of times, the cost of the path will continue to decrease (even though the length of the path is increasing). As a result, Bellman-Ford is also capable of detecting negative cycles, which is an important feature.
Working of Bellman-Ford Algorithm to Detect the Negative cycle in the graph:
1. Initialization: Initialize the distance from the source vertex to all other vertices as infinity, except
the distance from the source vertex to itself, which is initialized to 0. Also, initialize the
predecessor of all vertices as null.
2. Relaxation: Iterate through all the edges of the graph (|V| - 1) times, where |V| is the number of
vertices. In each iteration, relax all the edges. Relaxing an edge (u, v) means updating the distance
to vertex v if a shorter path from the source vertex to v through u is found.
3. Check for Negative Cycles: After the (|V| - 1) iterations, check for negative cycles in the graph.
A negative cycle is a cycle whose total weight is negative. If there is any negative cycle, it means
that there is no shortest path, as the negative cycle can be traversed repeatedly to decrease the
path length indefinitely.
4. Output: If there are no negative cycles, the algorithm outputs the shortest paths from the source
vertex to all other vertices. Each vertex will have its distance from the source vertex and its
predecessor on the shortest path.
Algorithm of Bellman ford algorithm
The Bellman-Ford algorithm is used to find the shortest path in a weighted directed graph with negative
edge weights. Here is a step-by-step guide on how to design and analyze the Bellman-Ford algorithm:
1. Initialize the graph:
- Create a graph representation with vertices and edges.
- Assign an initial distance value to all vertices except the source vertex (set it to 0).
- Set all the distances for the other vertices as infinite.
2. Relax the edges:
- Repeat the following steps |V| - 1 times, where |V| is the number of vertices in the graph:
- For each edge (u, v) with weight w:
- If the distance to the source vertex + w is smaller than the current distance of v, update the distance of v
with the new distance.
- Update the predecessor of v with u.
3. Check for negative cycles:
- Repeat step 2 for one more iteration.
- If any distances are updated, it means a negative cycle exists in the graph.
4. Output the shortest path:
- Start with the destination vertex.
- Traverse back using the predecessor of each vertex until you reach the source vertex.

The time complexity of the Bellman-Ford algorithm is O(|V| * |E|), where |V| is the number of vertices and |E| is the number of edges in the graph.
By analyzing the algorithm, we can see that it guarantees finding the shortest path even in the presence of
negative edge weights. However, if the graph contains a negative cycle, the algorithm may not terminate,
or the distances may be incorrectly updated.

Source code:
#include <bits/stdc++.h>
using namespace std;

// A weighted directed edge
struct Edge {
    int src, dest, weight;
};

struct Graph {
    int V, E;
    struct Edge* edge;
};

struct Graph* createGraph(int V, int E)
{
    struct Graph* graph = new Graph;
    graph->V = V;
    graph->E = E;
    graph->edge = new Edge[E];
    return graph;
}

// A utility function to print the distance array
void printArr(int dist[], int n)
{
    printf("Vertex Distance from Source\n");
    for (int i = 0; i < n; ++i)
        printf("%d \t\t %d\n", i, dist[i]);
}

void BellmanFord(struct Graph* graph, int src)
{
    int V = graph->V;
    int E = graph->E;
    int dist[V];

    // Step 1: initialize distances from src to all other
    // vertices as INFINITE
    for (int i = 0; i < V; i++)
        dist[i] = INT_MAX;
    dist[src] = 0;

    // Step 2: relax all edges |V| - 1 times
    for (int i = 1; i <= V - 1; i++) {
        for (int j = 0; j < E; j++) {
            int u = graph->edge[j].src;
            int v = graph->edge[j].dest;
            int weight = graph->edge[j].weight;
            if (dist[u] != INT_MAX && dist[u] + weight < dist[v])
                dist[v] = dist[u] + weight;
        }
    }

    // Step 3: check for negative-weight cycles; if any edge can
    // still be relaxed, such a cycle exists
    for (int i = 0; i < E; i++) {
        int u = graph->edge[i].src;
        int v = graph->edge[i].dest;
        int weight = graph->edge[i].weight;
        if (dist[u] != INT_MAX && dist[u] + weight < dist[v]) {
            printf("Graph contains negative weight cycle");
            return;
        }
    }
    printArr(dist, V);
}

int main()
{
    cout << "Tushar Mandhan\n" << "Roll no. 2022027566\n";
    int V = 5;
    int E = 8;
    struct Graph* graph = createGraph(V, E);
    graph->edge[0].src = 0; graph->edge[0].dest = 1; graph->edge[0].weight = -1;
    graph->edge[1].src = 0; graph->edge[1].dest = 2; graph->edge[1].weight = 4;
    graph->edge[2].src = 1; graph->edge[2].dest = 2; graph->edge[2].weight = 3;
    graph->edge[3].src = 1; graph->edge[3].dest = 3; graph->edge[3].weight = 2;
    graph->edge[4].src = 1; graph->edge[4].dest = 4; graph->edge[4].weight = 2;
    graph->edge[5].src = 3; graph->edge[5].dest = 2; graph->edge[5].weight = 5;
    graph->edge[6].src = 3; graph->edge[6].dest = 1; graph->edge[6].weight = 1;
    graph->edge[7].src = 4; graph->edge[7].dest = 3; graph->edge[7].weight = -3;
    BellmanFord(graph, 0);
    return 0;
}

Output:

Some important question of Bellman ford algorithm:


1. What is the primary purpose of the Bellman-Ford algorithm?
Answer: The Bellman-Ford algorithm is used to find the shortest paths from a single source vertex to all
other vertices in a weighted graph, even in the presence of negative edge weights.
2. What is the significance of the initialization step in the Bellman-Ford algorithm?
Answer: In the initialization step, the distance from the source vertex to all other vertices is set to
infinity, except for the distance from the source to itself, which is set to 0. This sets the groundwork for
finding the shortest paths.
3. Why does the Bellman-Ford algorithm iterate through all the edges (|V| - 1) times?
Answer: The algorithm iterates through all the edges (|V| - 1) times to ensure that it finds the shortest
paths from the source vertex to all other vertices, gradually improving the distance estimates with each
iteration.
4. What is the purpose of checking for negative cycles in the Bellman-Ford algorithm?
Answer: Checking for negative cycles is crucial because if a negative cycle exists in the graph, it means
that there is no shortest path. The negative cycle can be traversed repeatedly to decrease the path length
indefinitely, making it impossible to determine a shortest path.
5. What is the time complexity of the Bellman-Ford algorithm, and how does it compare to Dijkstra's
algorithm?
Answer: The time complexity of the Bellman-Ford algorithm is O(V * E), where V is the number of
vertices and E is the number of edges. This makes it less efficient than Dijkstra's algorithm, which has a
time complexity of O((V + E) * log V) using a binary heap or O(V^2) using an array implementation.
However, Bellman-Ford can handle graphs with negative edge weights and detect negative cycles, which
Dijkstra's algorithm cannot do without modifications.
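One practical refinement of the (|V| - 1)-pass relaxation discussed above is to stop as soon as a full pass changes no distance. The sketch below uses an edge list; the function name `bellmanFord` and the tuple layout are illustrative assumptions, not part of this practical's program:

```cpp
#include <vector>
#include <tuple>
#include <utility>
#include <climits>

// Bellman-Ford over an edge list {src, dest, weight} with an early-exit
// optimization: if a full pass relaxes no edge, the distances are final
// and the remaining passes can be skipped.
// Returns {distances, hasNegativeCycle}.
std::pair<std::vector<long long>, bool>
bellmanFord(int V, const std::vector<std::tuple<int,int,int>>& edges, int src)
{
    const long long INF = LLONG_MAX;
    std::vector<long long> dist(V, INF);
    dist[src] = 0;
    bool changed = true;
    for (int pass = 0; pass < V - 1 && changed; ++pass) {
        changed = false;
        for (auto [u, v, w] : edges)
            if (dist[u] != INF && dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                changed = true;
            }
    }
    // One more scan: any further improvement means a negative cycle
    // is reachable from the source.
    bool negCycle = false;
    for (auto [u, v, w] : edges)
        if (dist[u] != INF && dist[u] + w < dist[v])
            negCycle = true;
    return {dist, negCycle};
}
```

On graphs whose shortest paths are found in a few passes, the early exit avoids the remaining |V| - 2 scans without changing the result.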

Practical -10
Aim: Implement any scheme to find the optimal solution for the Travelling salesman problem.
Theory:
Given a set of cities and the distance between every pair of cities, the problem is to find the shortest
possible route that visits every city exactly once and returns to the starting point. Note the difference
between Hamiltonian Cycle and TSP. The Hamiltonian cycle problem is to find if there exists a tour that
visits every city exactly once. Here we know that Hamiltonian Tour exists (because the graph is complete)
and in fact, many such tours exist, the problem is to find a minimum weight Hamiltonian Cycle.

For example, consider the graph shown in the figure on the right side. A TSP tour in the graph is 1-2-4-3-1. The cost of the tour is 10 + 25 + 30 + 15, which is 80. The problem is a famous NP-hard problem; there is no known polynomial-time solution for it. The following are different solutions for the traveling salesman problem.
Time Complexity: O(n^2 * 2^n), where O(n * 2^n) is the maximum number of unique subproblems/states and O(n) is the work done per state (the inner loop in the code).
Auxiliary Space: O(n * 2^n), where n is the number of nodes/cities.
Algorithm of the travelling salesman problem:
1. Start from an arbitrary city as the starting point.
2. Generate all (n-1)! permutations of cities to visit, excluding the starting city.
3. For each permutation:
a. Compute the total distance/cost of the tour.
4. Select the permutation with the minimum total distance/cost as the optimal tour.
5. Return the optimal tour.
Source code:
#include <iostream>
using namespace std;

// There are four nodes in the example graph (the graph is 1-based)
const int n = 4;

// Give an appropriate maximum to avoid overflow
const int MAX = 1000000;

// dist[i][j] represents the shortest distance from i to j; for an
// arbitrary graph this matrix can be calculated with an all-pairs
// shortest path algorithm
int dist[n + 1][n + 1] = {
    { 0, 0, 0, 0, 0 },
    { 0, 0, 10, 15, 20 },
    { 0, 10, 0, 25, 25 },
    { 0, 15, 25, 0, 30 },
    { 0, 20, 25, 30, 0 },
};

// Memoization table for the top-down recursion
int memo[n + 1][1 << (n + 1)];

// Minimum cost of a path that visits every node in mask and ends at node i
int fun(int i, int mask)
{
    // Base case: if only the ith bit and the 1st bit are set in the
    // mask, all other nodes have already been visited
    if (mask == ((1 << i) | 3))
        return dist[1][i];

    // Memoization
    if (memo[i][mask] != 0)
        return memo[i][mask];

    int res = MAX; // result of this sub-problem

    // We have to travel all nodes j in mask and end the path at node i;
    // so for every node j in mask (except i and the start node 1),
    // recursively calculate the cost of travelling all nodes in mask
    // except i and then travel from node j to node i taking the shortest
    // path; take the minimum over all possible j
    for (int j = 1; j <= n; j++)
        if ((mask & (1 << j)) && j != i && j != 1)
            res = std::min(res, fun(j, mask & (~(1 << i))) + dist[j][i]);

    return memo[i][mask] = res;
}

int main()
{
    cout << "Tushar Mandhan\n" << "Roll no. 2022027566\n";
    int ans = MAX;
    for (int i = 1; i <= n; i++)
        ans = std::min(ans, fun(i, (1 << (n + 1)) - 1) + dist[i][1]);
    printf("The cost of most efficient tour = %d", ans);
    return 0;
}

Output:

Some Important question of traveling salesman problem:


1. What is the objective of the Traveling Salesman Problem (TSP)?
Answer: The objective of the TSP is to find the shortest possible tour that visits each city exactly once and
returns to the original city, minimizing the total distance traveled.
2. What are some common techniques used to solve the Traveling Salesman Problem?
Answer: Common techniques to solve the TSP include brute-force search, dynamic programming, branch
and bound, approximation algorithms like nearest neighbor and minimum spanning tree-based algorithms,
as well as metaheuristic approaches such as genetic algorithms, simulated annealing, and ant colony
optimization.
3. Why is the Traveling Salesman Problem considered a combinatorial optimization problem?
Answer: The TSP involves finding an optimal arrangement of a finite set of elements (cities) among all
possible permutations, while optimizing a specific objective function (total distance traveled). This makes
it a combinatorial optimization problem, as it seeks to find the best solution from a finite set of candidate
solutions.
4. What are some real-world applications of the Traveling Salesman Problem?
Answer: The TSP has practical applications in various fields such as logistics, transportation,
manufacturing, circuit board drilling, vehicle routing, DNA sequencing, and network design, where
minimizing travel costs or distances is essential.
5. What are the challenges associated with solving large instances of the Traveling Salesman Problem
optimally?
Answer: As the number of cities increases, the number of possible permutations grows exponentially,
making it computationally infeasible to solve large instances of the TSP optimally using exact algorithms.
Approximation algorithms and heuristics are often employed to find near-optimal solutions within a
reasonable amount of time.
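As a concrete instance of the heuristics mentioned in answers 2 and 5, here is a sketch of the nearest-neighbour heuristic; the function name is illustrative. It runs in O(n^2) but only approximates the optimum: on the 4-city matrix used in this practical it produces a tour of cost 85 versus the optimal 80.

```cpp
#include <vector>

// Nearest-neighbour heuristic for TSP: from the current city, always
// travel to the closest unvisited city, then return to the start.
// O(n^2) time; the tour is generally near-optimal, not optimal.
long long nearestNeighbourTour(const std::vector<std::vector<int>>& dist,
                               int start)
{
    int n = dist.size();
    std::vector<bool> visited(n, false);
    visited[start] = true;
    int cur = start;
    long long total = 0;
    for (int step = 1; step < n; ++step) {
        int next = -1;
        // Pick the closest city that has not been visited yet
        for (int v = 0; v < n; ++v)
            if (!visited[v] && (next == -1 || dist[cur][v] < dist[cur][next]))
                next = v;
        total += dist[cur][next];
        visited[next] = true;
        cur = next;
    }
    return total + dist[cur][start]; // close the tour
}
```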
Practical number- 11
Aim: Implement Floyd Warshall algorithm for shortest path.

Theory:
The Floyd Warshall Algorithm is an all pair shortest path algorithm unlike Dijkstra and Bellman Ford
which are single source shortest path algorithms. This algorithm works for both
the directed and undirected weighted graphs. But, it does not work for the graphs with negative cycles
(where the sum of the edges in a cycle is negative). It follows Dynamic Programming approach to check
every possible path going via every possible node in order to calculate shortest distance between every
pair of nodes.
Idea Behind Floyd Warshall Algorithm:
Suppose we have a graph G[][] with vertices numbered from 1 to N. Now we have to evaluate a shortestPathMatrix[][] where shortestPathMatrix[i][j] represents the shortest path between vertices i and j.
The shortest path from i to j may pass through some number of intermediate nodes. The idea behind the Floyd Warshall algorithm is to treat each vertex from 1 to N as an intermediate node, one by one.
The following figure shows the above optimal substructure property of the Floyd Warshall algorithm:

Floyd Warshall Algorithm:


• Initialize the solution matrix same as the input graph matrix as a first step.
• Then update the solution matrix by considering all vertices as an intermediate vertex.
• The idea is to pick all vertices one by one and updates all shortest paths which include the picked
vertex as an intermediate vertex in the shortest path.
• When we pick vertex number k as an intermediate vertex, we already have considered vertices {0,
1, 2, .. k-1} as intermediate vertices.
• For every pair (i, j) of the source and destination vertices respectively, there are two possible
cases.
• k is not an intermediate vertex in shortest path from i to j. We keep the value of dist[i][j] as it is.
• k is an intermediate vertex in shortest path from i to j. We update the value of dist[i][j] as dist[i]
[k] + dist[k][j], if dist[i][j] > dist[i][k] + dist[k][j]

Source code:
#include <bits/stdc++.h>
using namespace std;

// Number of vertices in the graph
#define V 4

/* Define Infinite as a large enough value.
   This value will be used for vertices
   not connected to each other */
#define INF 99999

// A function to print the solution matrix
void printSolution(int dist[][V]);

void floydWarshall(int dist[][V])
{
    int i, j, k;
    // Pick all vertices as intermediate one by one
    for (k = 0; k < V; k++) {
        // Pick all vertices as source one by one
        for (i = 0; i < V; i++) {
            // Pick all vertices as destination for the
            // above picked source
            for (j = 0; j < V; j++) {
                // If vertex k is on the shortest path from
                // i to j, then update the value of dist[i][j]
                if (dist[i][k] != INF && dist[k][j] != INF
                    && dist[i][j] > dist[i][k] + dist[k][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
            }
        }
    }
    // Print the shortest distance matrix
    printSolution(dist);
}

/* A utility function to print the solution */
void printSolution(int dist[][V])
{
    cout << "The following matrix shows the shortest distances"
            " between every pair of vertices \n";
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++) {
            if (dist[i][j] == INF)
                cout << "INF" << " ";
            else
                cout << dist[i][j] << " ";
        }
        cout << endl;
    }
}

int main()
{
    cout << "Tushar Mandhan \n" << "Roll no. 2022027566\n";
    int graph[V][V] = { { 0, 5, INF, 10 },
                        { INF, 0, 3, INF },
                        { INF, INF, 0, 1 },
                        { INF, INF, INF, 0 } };
    floydWarshall(graph);
    return 0;
}

Output:

Some important questions on the Floyd-Warshall algorithm:


1. Question: What is the Floyd-Warshall algorithm used for?
Answer: The Floyd-Warshall algorithm is used for finding the shortest paths in a weighted graph, where
edges can have positive or negative weights.
2. Question: What is the time complexity of the Floyd-Warshall algorithm?
Answer: The time complexity of the Floyd-Warshall algorithm is O(n^3), where n is the number of
vertices in the graph.
3. Question: How does the Floyd-Warshall algorithm handle negative cycles?
Answer: The Floyd-Warshall algorithm can detect the presence of negative cycles in a graph. If there are
negative cycles, the algorithm won't necessarily produce correct results for shortest paths.
4. Question: What is the main advantage of the Floyd-Warshall algorithm over other shortest path
algorithms?
Answer: Unlike algorithms like Dijkstra's or Bellman-Ford, the Floyd-Warshall algorithm computes the
shortest paths between all pairs of vertices in a single execution, making it efficient for dense graphs.
