Algorithm Lab Report

The document outlines multiple experiments implementing various algorithms including Binary Search, Quick Sort, Merge Sort, Fractional Knapsack, and Job Scheduling with Deadlines. Each experiment includes objectives, pseudocode, implementation details, results, and conclusions, highlighting the efficiency and limitations of the algorithms. The algorithms are designed to solve specific problems efficiently, with time complexities ranging from O(log n) to O(n log n) and considerations for memory usage.

Experiment No: 01

Experiment Name: Write a program to implement Binary Search

Objective:
The objective of Binary Search is to efficiently find the position of a target element in a sorted
array or list. It achieves this by repeatedly dividing the search range in half, rather than checking
each element one by one.

Pseudocode: Pseudocode for Binary Search, recursive method:

int BinarySearch(A[], l, h, key)
{
    if (l > h)
        return -1;                  // empty range: key not present
    if (l == h)
    {
        if (A[l] == key)
            return l;
        else
            return -1;
    }
    else
    {
        mid = (l + h) / 2;
        if (key == A[mid])
            return mid;
        else if (key < A[mid])
            return BinarySearch(A, l, mid - 1, key);
        else
            return BinarySearch(A, mid + 1, h, key);
    }
}

Implementation:

Fig-01

Description of the code:
The code implements a recursive Binary Search algorithm in C to search for a given key in a sorted
array.
• Key Components:
1. BinarySearch Function:
o Parameters:
▪ A[]: Sorted array.
▪ l: Lower index of the search range.
▪ h: Higher index of the search range.
▪ key: Element to search for.
o Logic:
▪ If l == h, checks if the single element is the key.
▪ Calculates the midpoint mid = (l + h) / 2.
▪ Compares the key with A[mid]:
▪ If A[mid] == key, returns mid.
▪ If A[mid] > key, recursively searches in the left subarray.
▪ Otherwise, recursively searches in the right subarray.
▪ Returns -1 if the key is not found.
2. main Function:
o Initializes a sorted array A and its size.
o Defines the key to search.
o Calls BinarySearch to find the key's index.
o Prints whether the element is found and its index or states it's not present.
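Since Fig-01 shows the implementation only as a screenshot, the following is a minimal C sketch consistent with the description above, using the array and key from the Results section; the exact code in the figure may differ slightly.

#include <stdio.h>

/* Recursive binary search: returns the index of key in A[l..h], or -1 if absent. */
int BinarySearch(int A[], int l, int h, int key) {
    if (l > h)
        return -1;                                 /* empty range: key not present */
    int mid = (l + h) / 2;
    if (A[mid] == key)
        return mid;
    else if (key < A[mid])
        return BinarySearch(A, l, mid - 1, key);   /* search left half */
    else
        return BinarySearch(A, mid + 1, h, key);   /* search right half */
}

int main(void) {
    int A[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
    int n = sizeof(A) / sizeof(A[0]);
    int key = 38;
    int index = BinarySearch(A, 0, n - 1, key);
    if (index != -1)
        printf("Element is present at index %d\n", index);
    else
        printf("Element is not present in array\n");
    return 0;
}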

Output:

Fig-02

Results and Discussions:


The program performs a binary search on the sorted array A[] = {2, 5, 8, 12, 16, 23, 38, 56, 72,
91} to find the key element 38.
It outputs:
Element is present at index 6
The code implements a binary search algorithm, which efficiently finds an element in a sorted
array by repeatedly halving the search range. The time complexity is O(log n), making it
faster than linear search for large datasets. One pitfall to avoid in the base case is returning 0
when the element is not found, which is misleading since 0 is a valid index; the function should
return -1 (as in the pseudocode above) to indicate that the element is absent from the array.

Conclusion:
The code effectively demonstrates a binary search using recursion. This method efficiently finds
an element in a sorted array with a time complexity of O(log n) by repeatedly dividing the search
range in half.
Key Points:
• Fast and efficient for large datasets.
• Correctly identifies if the key is present and returns its index.
• If the key is not found, it returns -1.
Limitations:
• Works only on sorted arrays.
• The recursive approach may use more memory due to function call overhead. An iterative
method could be more memory-efficient for large arrays.

Experiment No: 02

Experiment Name: Write a program to implement Quick Sort

Objective:
The objective of Quick Sort is to sort an array or list of elements by dividing the problem into
smaller sub-problems and sorting them efficiently. Quick sort uses a divide-and-conquer strategy
to achieve this.

Pseudocode: Pseudocode for Quick Sort:

QuickSort(A, low, high)
    if low < high then
        pivotIndex = Partition(A, low, high)
        QuickSort(A, low, pivotIndex - 1)
        QuickSort(A, pivotIndex + 1, high)

Partition(A, low, high)
    pivot = A[high]
    i = low - 1

    for j = low to high - 1 do
        if A[j] <= pivot then
            i = i + 1
            Swap(A[i], A[j])

    Swap(A[i + 1], A[high])
    return i + 1

Implementation:

Fig-03

Description of the code:


Quick Sort is an efficient sorting algorithm that uses a divide-and-conquer approach to organize
elements in ascending or descending order.
How Quick Sort Works
1. Choosing a Pivot:
o A pivot element is chosen from the array. In this implementation, the pivot is the
last element of the array.

o The pivot helps in dividing the array into two parts:
▪ Elements smaller than or equal to the pivot.
▪ Elements greater than the pivot.
2. Partitioning the Array:
o The array is rearranged so that all elements smaller than or equal to the pivot are on
the left side.
o All elements greater than the pivot are on the right side.
o The pivot is placed in its correct position in the sorted array.
3. Recursive Sorting:
o Quick Sort is then applied recursively to the two subarrays:
▪ The subarray to the left of the pivot.
▪ The subarray to the right of the pivot.
o This process continues until the subarrays are small enough (contain one or no
elements), at which point they are considered sorted.
4. Combining Results:
o The algorithm combines the sorted subarrays, resulting in a fully sorted array.
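Fig-03 shows the implementation as a screenshot; a minimal C sketch of the last-element-pivot scheme described above, using the input array from the Result section, might be:

#include <stdio.h>

void Swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: places the pivot (last element) in its final sorted position. */
int Partition(int A[], int low, int high) {
    int pivot = A[high];
    int i = low - 1;
    for (int j = low; j < high; j++)
        if (A[j] <= pivot)
            Swap(&A[++i], &A[j]);
    Swap(&A[i + 1], &A[high]);
    return i + 1;
}

void QuickSort(int A[], int low, int high) {
    if (low < high) {
        int p = Partition(A, low, high);
        QuickSort(A, low, p - 1);      /* sort elements left of the pivot */
        QuickSort(A, p + 1, high);     /* sort elements right of the pivot */
    }
}

int main(void) {
    int A[] = {10, 80, 30, 90, 40, 50, 70};
    int n = sizeof(A) / sizeof(A[0]);
    printf("Given array: ");
    for (int i = 0; i < n; i++) printf("%d ", A[i]);
    QuickSort(A, 0, n - 1);
    printf("\nSorted array: ");
    for (int i = 0; i < n; i++) printf("%d ", A[i]);
    printf("\n");
    return 0;
}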

Output:

Fig-04

Result and Discussion:
When the code is run, the output will be:
Given array: 10 80 30 90 40 50 70
Sorted array: 10 30 40 50 70 80 90
The given code implements the Quick Sort algorithm, which is a highly efficient sorting technique.
It uses the divide-and-conquer approach, where it selects a pivot (here, the last element in the
subarray) and partitions the array into two sections: elements smaller than the pivot on one side,
and larger elements on the other side. After partitioning, it recursively sorts the two subarrays. The
time complexity is O(n log n) on average, making Quick Sort ideal for large datasets. The program
efficiently sorts the input array and prints both the original and sorted arrays.

Conclusion:
The code implements the Quick Sort algorithm, an efficient sorting technique using the divide-
and-conquer approach. It includes functions for swapping elements, partitioning the array, and
recursively sorting subarrays.
Key Points:
• Efficiency: best/average case O(n log n); worst case O(n^2).
• Strengths: fast, in-place sorting with minimal memory usage.
• Limitations: poor pivot selection can lead to O(n^2) behavior, and it is not a stable sort.

Experiment No: 03

Experiment Name: Write a program to implement Merge Sort

Objective:
The objective of Merge Sort is to efficiently sort an array or list of elements using a divide-and-
conquer strategy. It splits the array into smaller subarrays, recursively sorts them, and then merges
the sorted subarrays back together.

Pseudocode: Pseudocode for Merge Sort:

MergeSort(A, left, right)
    if left < right then
        mid = (left + right) / 2
        MergeSort(A, left, mid)
        MergeSort(A, mid + 1, right)
        Merge(A, left, mid, right)

Merge(A, left, mid, right)
    n1 = mid - left + 1
    n2 = right - mid

    Create Left[n1]
    Create Right[n2]

    for i = 0 to n1 - 1 do
        Left[i] = A[left + i]
    for j = 0 to n2 - 1 do
        Right[j] = A[mid + 1 + j]

    i = 0
    j = 0
    k = left
    while i < n1 and j < n2 do
        if Left[i] <= Right[j] then
            A[k] = Left[i]
            i++
        else
            A[k] = Right[j]
            j++
        k++

    while i < n1 do
        A[k] = Left[i]
        i++
        k++

    while j < n2 do
        A[k] = Right[j]
        j++
        k++

Implementation:

Fig-05

Description of the code:
Merge Sort is a divide-and-conquer sorting algorithm that breaks an array into smaller parts, sorts
them, and then merges them back together in sorted order. It is one of the most efficient sorting
algorithms for large datasets.

01. Splitting the Array:
• The array is split into two halves: left (A[left...mid]) and right (A[mid+1...right]).
• This splitting continues recursively until each subarray has only one element.
02. Merging:
• Two sorted subarrays are merged using the merge function.
• During merging, elements from both subarrays are compared, and the smaller element is placed in the original array.
03. Recursive Sorting:
• The merge process ensures that the array is sorted as the recursion unwinds.
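Fig-05 is a screenshot; a minimal C sketch consistent with the pseudocode and description, with the input array taken from the Result section, might be:

#include <stdio.h>

/* Merge the two sorted halves A[left..mid] and A[mid+1..right]. */
void Merge(int A[], int left, int mid, int right) {
    int n1 = mid - left + 1, n2 = right - mid;
    int L[n1], R[n2];                       /* temporary arrays (C99 VLAs) */
    for (int i = 0; i < n1; i++) L[i] = A[left + i];
    for (int j = 0; j < n2; j++) R[j] = A[mid + 1 + j];

    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2)                /* copy the smaller element back */
        A[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) A[k++] = L[i++];         /* copy any leftover elements */
    while (j < n2) A[k++] = R[j++];
}

void MergeSort(int A[], int left, int right) {
    if (left < right) {
        int mid = (left + right) / 2;
        MergeSort(A, left, mid);
        MergeSort(A, mid + 1, right);
        Merge(A, left, mid, right);
    }
}

int main(void) {
    int A[] = {38, 27, 43, 3, 9, 82, 10};
    int n = sizeof(A) / sizeof(A[0]);
    printf("Unsorted array: ");
    for (int i = 0; i < n; i++) printf("%d ", A[i]);
    MergeSort(A, 0, n - 1);
    printf("\nSorted array: ");
    for (int i = 0; i < n; i++) printf("%d ", A[i]);
    printf("\n");
    return 0;
}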

Output:

Fig-06

Result and Discussion:
When the code is executed, the output will be:
Unsorted array: 38 27 43 3 9 82 10
Sorted array: 3 9 10 27 38 43 82
The given C program implements the Merge Sort algorithm, which is a well-known sorting
technique utilizing the divide-and-conquer approach. The Merge Sort function recursively divides
the array into smaller parts until each subarray has one element (the base case for sorting). The
merge function creates two temporary arrays, iterates through them, compares elements, and
copies the smaller one back into the original array, ensuring sorting. The time complexity of Merge
Sort is O(n log n), which makes it efficient for large datasets. It is a stable sorting algorithm that
preserves the relative order of equal elements.

Conclusion:
In conclusion, Merge Sort is an efficient and reliable sorting algorithm with a time complexity of
O(n log n), making it well-suited for large datasets. Its divide-and-conquer approach ensures
stable sorting, which preserves the relative order of equal elements. While it offers consistent
performance even in the worst-case scenario, its space complexity of O(n) may be a limitation in
memory-constrained environments. Despite this, Merge Sort remains a strong choice for
applications requiring stable and predictable sorting, especially when handling large or external
datasets.

Experiment No: 04

Experiment Name: Write a program to implement the Knapsack Problem (for divisible objects)

Objective:
The objective of the Fractional Knapsack Problem is to maximize the total value of items placed
into a knapsack with a fixed weight capacity, where:
• Each item has a given weight and value.
• Items can be divided into fractions.
• The goal is to determine the fraction of each item to include in the knapsack such that the
total value is maximized while ensuring that the total weight of the selected items does not
exceed the given weight limit of the knapsack.

Pseudocode:

function FractionalKnapsack(capacity, items)
    sort items by value/weight ratio in descending order
    totalValue = 0

    for each item in sorted items do
        if item.weight <= capacity then
            capacity -= item.weight
            totalValue += item.value
        else
            totalValue += item.value * (capacity / item.weight)
            break

    return totalValue

Implementation:

Fig-07

Description of the code:
The Knapsack Problem is a classic optimization problem in computer science and mathematics,
where the goal is to select a subset of items to maximize the total value, subject to a weight
constraint. It is commonly used in scenarios where resources are limited, such as packing, budget
allocation, and resource management.

Given:
• A set of items, each with a weight and a value.
• A knapsack (container) with a maximum weight capacity.
The objective is to determine the optimal subset of items to include in the knapsack such that:
• The total weight of the selected items does not exceed the knapsack's capacity.
• The total value of the selected items is maximized.
The Knapsack Problem is a foundational problem in optimization, with real-world applications
across various industries. The solution method depends on the problem variant and the trade-off
between time and space complexity.
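The implementation in Fig-07 appears only as a screenshot, and its item data is not reproduced in this report. The sketch below is a minimal C version of the greedy approach described above, using placeholder items, so its output (240.00) will not match the 190.60 figure reported later.

#include <stdio.h>
#include <stdlib.h>

struct Item { int value; int weight; };

/* qsort comparator: descending value/weight ratio. */
int cmpRatio(const void *a, const void *b) {
    const struct Item *x = a, *y = b;
    double rx = (double)x->value / x->weight;
    double ry = (double)y->value / y->weight;
    return (ry > rx) - (ry < rx);
}

double fractionalKnapsack(int capacity, struct Item items[], int n) {
    qsort(items, n, sizeof(struct Item), cmpRatio);
    double totalValue = 0.0;
    for (int i = 0; i < n && capacity > 0; i++) {
        if (items[i].weight <= capacity) {           /* take the whole item */
            capacity -= items[i].weight;
            totalValue += items[i].value;
        } else {                                     /* take only a fraction */
            totalValue += items[i].value * ((double)capacity / items[i].weight);
            capacity = 0;
        }
    }
    return totalValue;
}

int main(void) {
    /* Placeholder data; the item set used in Fig-07 is not reproduced here. */
    struct Item items[] = {{60, 10}, {100, 20}, {120, 30}};
    int n = sizeof(items) / sizeof(items[0]);
    int capacity = 50;
    printf("Maximum value in Knapsack = %.2f\n", fractionalKnapsack(capacity, items, n));
    return 0;
}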

Output:

Fig-08

Result and Discussion:
When the code is executed, the output will be:
Maximum value in Knapsack = 190.60
The program solves the fractional knapsack problem using a greedy method. It defines an item
structure and a swap helper, uses sortItemsByRatio to sort items by their value-to-weight ratio,
and iterates through the sorted items in the fractionalKnapsack function. The main function
initializes the knapsack capacity and items, and the algorithm computes the maximum value
attainable within the given capacity, highlighting the efficiency and simplicity of greedy algorithms.

Conclusion:
In conclusion, the Knapsack Problem is a fundamental optimization problem with a wide range of
real-world applications, from resource allocation to logistics. The problem involves selecting a
subset of items to maximize value while adhering to a weight constraint. The solution approach
varies based on the problem type—dynamic programming is commonly used for the 0/1 Knapsack,
while greedy algorithms are efficient for the fractional variant. Despite its complexity, the
Knapsack Problem provides valuable insights into optimization strategies and remains a key
challenge in both theoretical and applied computer science.

Experiment No: 05

Experiment Name: Write a program to implement Job Scheduling with Deadlines (Greedy Method)

Objective:
Job Scheduling with Deadlines is a classical optimization problem where we aim to maximize
profit by scheduling a set of jobs, each with a deadline and profit. Each job must be completed
within its deadline, and only one job can be scheduled at a time.

• Greedy Approach:
The greedy method solves the problem by first sorting the jobs by profit (in descending order) and
then trying to schedule the highest-profit job in the latest available time slot before its deadline.
This way, it ensures that the most profitable jobs are prioritized while adhering to their respective
deadlines.

Pseudocode: Pseudocode for Job Scheduling with Deadlines (Greedy Method):

struct Job {
    int id;
    int deadline;
    int profit;
};

function compareJobs(Job a, Job b) {
    return b.profit - a.profit
}

function findMaxDeadline(jobs[], n) {
    maxDeadline = 0
    for i = 0 to n-1 {
        if jobs[i].deadline > maxDeadline {
            maxDeadline = jobs[i].deadline
        }
    }
    return maxDeadline
}

function jobScheduling(jobs[], n) {
    Sort jobs[] in descending order of profit using compareJobs
    maxDeadline = findMaxDeadline(jobs, n)
    Create a timeSlots[] array of size maxDeadline and initialize all slots to -1
    totalProfit = 0
    countJobs = 0

    for i = 0 to n-1 {
        for j = min(maxDeadline, jobs[i].deadline) - 1 down to 0 {
            if timeSlots[j] == -1 {
                timeSlots[j] = jobs[i].id
                totalProfit += jobs[i].profit
                countJobs += 1
                break
            }
        }
    }

    Print countJobs, totalProfit
    Print scheduled jobs in timeSlots[]
}

Implementation:

Fig-09

Description of the code:
Job Scheduling with Deadlines (Greedy Method) is an optimization problem where the objective
is to schedule jobs in such a way that the total profit is maximized, subject to certain constraints,
such as job deadlines and available time slots.
Given:
• A set of jobs, each with:
o A deadline by which the job must be completed.
o A profit that is earned if the job is completed before or on its deadline.
• A set of time slots (usually limited in number), and each job takes one unit of time to
complete.
The goal is to schedule jobs in such a way that:
• Each job is completed by its respective deadline.
• The total profit from the scheduled jobs is maximized.
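Fig-09 likewise appears only as a screenshot. Below is a minimal C sketch of the greedy scheduler described above, assuming a placeholder job set (the jobs used in Fig-09 are not reproduced here, so these totals differ from the output reported later).

#include <stdio.h>
#include <stdlib.h>

struct Job { int id; int deadline; int profit; };

/* qsort comparator: descending profit. */
int compareJobs(const void *a, const void *b) {
    return ((const struct Job *)b)->profit - ((const struct Job *)a)->profit;
}

void jobScheduling(struct Job jobs[], int n) {
    qsort(jobs, n, sizeof(struct Job), compareJobs);

    int maxDeadline = 0;
    for (int i = 0; i < n; i++)
        if (jobs[i].deadline > maxDeadline)
            maxDeadline = jobs[i].deadline;

    int *timeSlots = malloc(maxDeadline * sizeof(int));
    for (int i = 0; i < maxDeadline; i++) timeSlots[i] = -1;

    int totalProfit = 0, countJobs = 0;
    for (int i = 0; i < n; i++) {
        /* place each job in the latest free slot before its deadline */
        for (int j = jobs[i].deadline - 1; j >= 0; j--) {
            if (timeSlots[j] == -1) {
                timeSlots[j] = jobs[i].id;
                totalProfit += jobs[i].profit;
                countJobs++;
                break;
            }
        }
    }

    printf("Number of jobs scheduled: %d\n", countJobs);
    printf("Total profit: %d\n", totalProfit);
    printf("Scheduled jobs in time slots: ");
    for (int j = 0; j < maxDeadline; j++)
        if (timeSlots[j] != -1) printf("%d ", timeSlots[j]);
    printf("\n");
    free(timeSlots);
}

int main(void) {
    /* Placeholder jobs; the data set used in Fig-09 is not reproduced here. */
    struct Job jobs[] = {{1, 2, 40}, {2, 1, 35}, {3, 3, 30}, {4, 2, 25}, {5, 3, 20}};
    jobScheduling(jobs, sizeof(jobs) / sizeof(jobs[0]));
    return 0;
}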

Output:

Fig-10

Result and Discussion:
When the code is executed, the output will be:
Number of jobs scheduled: 4
Total profit: 86
Scheduled jobs in time slots: 9 4 2 5
This C program uses a greedy algorithm to schedule jobs based on their deadlines and profits. It
defines a Job structure with three fields: ID, deadline, and profit. The compareJobs function sorts
jobs by profit, findMaxDeadline finds the largest deadline, and the jobScheduling function assigns
jobs to slots. Jobs are considered in decreasing order of profit, each is placed in the latest free slot
before its deadline, and the process continues until all jobs have been checked or all slots are filled.
The output shows the total number of scheduled jobs, the total profit, and the job assigned to each
time slot.

Conclusion:
In conclusion, the Job Scheduling with Deadlines problem, when solved using the Greedy Method,
offers an efficient approach to maximize profit while respecting job deadlines. By prioritizing
high-profit jobs and scheduling them in the latest available time slots before their deadlines, the
greedy algorithm provides a good solution with a time complexity of O(n log n). While it may not
always guarantee the absolute optimal solution in all cases, the greedy approach is effective and
widely applicable in real-world scenarios such as task scheduling, project management, and
resource allocation.

Experiment No: 06-(a)

Experiment Name: Write a program to implement the Fibonacci series using the Recursive Method

Objective:
The objective of implementing the Fibonacci series using the recursive method is to compute the
sequence of numbers where each number is the sum of the two preceding ones, starting from 0 and
1. The recursive approach aims to break down the problem into smaller subproblems by using the
following recurrence relation:
F(n) = F(n-1) + F(n-2)
Where:
• F(0)=0 (Base case)
• F(1)=1 (Base case)

Pseudocode: Pseudocode for Fibonacci series using the recursive method:

function Fibonacci(n)
    if n == 0
        return 0
    else if n == 1
        return 1
    else
        return Fibonacci(n-1) + Fibonacci(n-2)

main
    define n
    for i from 0 to n-1
        print Fibonacci(i)

Implementation:

Fig-11

Description of the code:


The Fibonacci Series is a sequence of numbers where each number is the sum of the two preceding ones,
typically starting with 0 and 1. It is a fundamental concept in mathematics and computer science, often used
in algorithms and dynamic programming.

Recursive Method:
The Fibonacci sequence can be computed using a recursive approach, where each Fibonacci number is
calculated by calling the function recursively to compute the previous two Fibonacci numbers.

Explanation:
• The function Fibonacci(n) checks whether n is 0 or 1; if so, it returns n, because F(0)=0 and F(1)=1.
• For any n>1, the function calls itself recursively to compute the two preceding Fibonacci numbers
F(n−1) and F(n−2), and then returns their sum, F(n)=F(n−1)+F(n−2).
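Fig-11 is a screenshot; a minimal C sketch matching the description is below. Here n is fixed at 12 for brevity, whereas the figure's program reads the number of terms from the user.

#include <stdio.h>

/* Naive recursion: directly mirrors F(n) = F(n-1) + F(n-2). */
int Fibonacci(int n) {
    if (n == 0) return 0;   /* base case F(0) */
    if (n == 1) return 1;   /* base case F(1) */
    return Fibonacci(n - 1) + Fibonacci(n - 2);
}

int main(void) {
    int n = 12;   /* number of terms; Fig-11 reads this from the user */
    printf("Fibonacci series up to %d terms: ", n);
    for (int i = 0; i < n; i++)
        printf("%d ", Fibonacci(i));
    printf("\n");
    return 0;
}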

Output:

Fig-12

Result and Discussion:


When the code is executed, the output will be:
Fibonacci series up to 12 terms: 0 1 1 2 3 5 8 13 21 34 55 89.
This C program computes the Fibonacci series using a recursive function. The function returns n
when n is 0 or 1, and otherwise returns the sum of the two preceding Fibonacci numbers. In the
program, the user enters the number of terms; the loop iterates from 0 to n-1 and prints each term.
However, the method has exponential time complexity, making it inefficient for large n.

Conclusion:
In conclusion, the recursive method for calculating Fibonacci numbers is a straightforward
approach that mirrors the mathematical definition of the sequence. While it is easy to understand
and implement, its time complexity of O(2^n) makes it inefficient for larger values of n, as it
involves redundant calculations. Although it serves as a good introduction to recursion, for
practical applications with larger inputs, more efficient methods like iteration or dynamic
programming are preferred to improve performance.

Experiment No: 06-(b)

Experiment Name: Write a program to implement the Fibonacci series using the Tabulation Method

Objective:
The objective of implementing the Fibonacci series using the tabulation method (a bottom-up
dynamic programming approach) is to efficiently compute the Fibonacci sequence by building the
solution iteratively from the base cases, avoiding redundant calculations seen in the recursive
method.

Pseudocode: Pseudocode for Fibonacci series using the tabulation method:

function FibonacciTabulation(n)
    if n <= 0
        return 0
    else if n == 1
        return 1
    create fibArray[n+1]
    fibArray[0] = 0
    fibArray[1] = 1

    for i from 2 to n do
        fibArray[i] = fibArray[i-1] + fibArray[i-2]
    return fibArray[n]

main
    define n
    for i from 0 to n-1
        print FibonacciTabulation(i)

Implementation:

Fig-13

Description of the code:


The Tabulation Method is an iterative approach to solving problems that can be broken down into
simpler subproblems, such as the Fibonacci series. Unlike the recursive approach, which involves
repeated function calls, the tabulation method uses an array (or table) to store intermediate results,
thus avoiding redundant calculations and improving efficiency.
Explanation:
• Initialization: We create an array fib of size n+1 to store Fibonacci numbers. The base
cases F(0)=0 and F(1)=1 are set at the beginning.
• Iterative Calculation: Starting from F(2) we calculate each Fibonacci number using the
recurrence relation F(n)=F(n−1)+F(n−2) storing each result in the array.
• Final Result: After the loop completes, fib[n] holds the nth Fibonacci number.
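Fig-13 is a screenshot; a minimal C sketch of the tabulation method is shown below, with n fixed at 15 for brevity (the figure's program reads it from the user). Note that calling FibonacciTabulation(i) separately for every term rebuilds the table each time; filling the table once and printing fibArray[0..n-1] would keep the overall cost at O(n).

#include <stdio.h>

/* Tabulation: fill the table bottom-up and return the nth Fibonacci number. */
int FibonacciTabulation(int n) {
    if (n <= 0) return 0;
    if (n == 1) return 1;
    int fibArray[n + 1];          /* table holding F(0)..F(n) (C99 VLA) */
    fibArray[0] = 0;
    fibArray[1] = 1;
    for (int i = 2; i <= n; i++)
        fibArray[i] = fibArray[i - 1] + fibArray[i - 2];
    return fibArray[n];
}

int main(void) {
    int n = 15;   /* number of terms; Fig-13 reads this from the user */
    printf("Fibonacci series up to %d terms: ", n);
    for (int i = 0; i < n; i++)
        printf("%d ", FibonacciTabulation(i));
    printf("\n");
    return 0;
}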

Output:

Fig-14

Result and Discussion:


When the code is executed, the output will be:
Fibonacci series up to 15 terms: 0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
The C program calculates the Fibonacci series using dynamic programming's tabulation method.
It initializes an array fibArray with the first two Fibonacci numbers and fills in the rest with a
for loop. In the main function, the user enters the number of terms, and the program prints the
series up to that count. This method has a time complexity of O(n) and uses O(n) space to store
the Fibonacci numbers.

Conclusion:
The tabulation method for calculating Fibonacci numbers is a more efficient solution compared
to the recursive approach. With a time complexity of O(n) it avoids redundant calculations and
ensures that each Fibonacci number is computed only once. This method is particularly useful
when working with large values of n, and it can be further optimized to use constant space by
keeping track of only the last two computed values.

Experiment No: 07

Experiment Name: Write a program to find the shortest paths from a given vertex to all other vertices in a weighted connected graph using Dijkstra's Algorithm

Objective:
To implement Dijkstra’s Algorithm for finding the shortest path from a given source vertex to all
other vertices in a weighted, connected graph, ensuring efficient computation and accurate results.

Pseudocode: Pseudocode for finding the shortest paths to other vertices using Dijkstra's Algorithm:

function Dijkstra(Graph, source):
    for each vertex v in Graph:
        dist[v] := infinity
        prev[v] := undefined
    dist[source] := 0

    Q := priority queue containing all vertices in Graph, keyed by dist[]

    while Q is not empty:
        u := vertex in Q with the smallest distance in dist[]
        remove u from Q

        for each neighbor v of u in Graph.Adjacent[u]:
            alt := dist[u] + length(u, v)
            if alt < dist[v]:
                dist[v] := alt
                prev[v] := u
                decrease priority of v in Q

    return dist[], prev[]

Implementation:

Fig-15

Description of the code:


Dijkstra's Algorithm is a widely used algorithm for finding the shortest paths between nodes in a
graph, which may represent, for example, road networks, communication networks, or any other
structure with weighted edges. The algorithm is efficient and guarantees the shortest path in
graphs with non-negative edge weights.
Given:
• A graph G=(V,E) where V is the set of vertices (nodes) and E is the set of edges
(connections between nodes).
• Each edge has a weight (or cost), which represents the distance, time, or cost associated
with traveling between two nodes.
• A starting node s, and the goal is to find the shortest path from s to all other nodes in the
graph.
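Fig-15 is a screenshot; below is a minimal C sketch of Dijkstra's Algorithm over an adjacency matrix. The graph literal is an assumption chosen so that the program reproduces the distances reported in the Results section; the actual graph in Fig-15 may be encoded differently.

#include <stdio.h>
#include <limits.h>

#define V 5   /* number of vertices */

/* Index of the unvisited vertex with the smallest tentative distance. */
int minDistance(int dist[], int visited[]) {
    int min = INT_MAX, minIndex = -1;
    for (int v = 0; v < V; v++)
        if (!visited[v] && dist[v] < min) { min = dist[v]; minIndex = v; }
    return minIndex;
}

void dijkstra(int graph[V][V], int src) {
    int dist[V], visited[V];
    for (int i = 0; i < V; i++) { dist[i] = INT_MAX; visited[i] = 0; }
    dist[src] = 0;

    for (int count = 0; count < V - 1; count++) {
        int u = minDistance(dist, visited);
        visited[u] = 1;
        /* relax every edge leaving u */
        for (int v = 0; v < V; v++)
            if (!visited[v] && graph[u][v] && dist[u] != INT_MAX
                && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }

    printf("Vertex\tDistance from source\n");
    for (int i = 0; i < V; i++)
        printf("%d\t%d\n", i, dist[i]);
}

int main(void) {
    /* Example graph chosen to reproduce the distances 0, 10, 50, 30, 60. */
    int graph[V][V] = {
        {0, 10, 0, 30, 100},
        {10, 0, 50, 0, 0},
        {0, 50, 0, 20, 10},
        {30, 0, 20, 0, 60},
        {100, 0, 10, 60, 0}
    };
    dijkstra(graph, 0);
    return 0;
}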

Output:

Fig-16

Result and Discussion:


The output from the program indicates the shortest distances from the source vertex to all other
vertices. Summary of the result:
• Vertex 0: Distance 0
• Vertex 1: Distance 10
• Vertex 2: Distance 50
• Vertex 3: Distance 30
• Vertex 4: Distance 60
The experiment demonstrates the implementation of Dijkstra's Algorithm for finding the shortest
paths in a weighted, connected graph. Key observations include:
01. Correctness
02. Efficiency
03. Limitations
04. Practical Application
The experiment successfully validates Dijkstra's Algorithm in calculating shortest paths.

Conclusion:
Dijkstra's Algorithm is a powerful and efficient method for solving the single-source shortest path
problem in graphs with non-negative edge weights. Its greedy approach ensures that the shortest
path is found iteratively, and its efficiency makes it suitable for a wide range of practical
applications in fields like networking, routing, and logistics. However, it does not handle graphs
with negative edge weights, for which other algorithms like Bellman-Ford are used.

Experiment No: 08

Experiment Name: Write a program to implement Graph traversal using the Breadth First Search technique

Objective:
To implement the Breadth First Search (BFS) algorithm for traversing or searching a graph,
starting from a given vertex, to systematically visit all vertices and edges in a breadth-first manner.
The goal is to explore the graph layer by layer and ensure that all reachable vertices are visited
while maintaining correct traversal order.

Pseudocode: Pseudocode for traversing a graph using the breadth first search technique:

function BFS(Graph, start_vertex):
    for each vertex v in Graph:
        visited[v] := false
    queue := empty queue
    visited[start_vertex] := true
    enqueue(queue, start_vertex)

    while queue is not empty:
        current_vertex := dequeue(queue)
        print current_vertex
        for each adjacent_vertex of current_vertex in Graph:
            if not visited[adjacent_vertex]:
                visited[adjacent_vertex] := true
                enqueue(queue, adjacent_vertex)

Implementation:

Fig-17

Description of the code:
Breadth First Search (BFS) is a graph traversal algorithm that explores the vertices of a graph
level by level, meaning it visits all the neighbors of a vertex before moving on to their
neighbors. It is a powerful technique for exploring graphs and is particularly useful for finding
the shortest path in an unweighted graph.
Explanation:
• Queue Initialization: We initialize a queue with the starting node and mark it as visited.
• Traversal Loop: In each iteration, we dequeue the front node from the queue, visit it, and
enqueue all its unvisited neighbors.
• Termination: The process continues until the queue is empty, meaning all reachable nodes
have been visited.
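Fig-17 is a screenshot; a minimal C sketch of BFS over an adjacency matrix is shown below. The 5-vertex graph is an assumption chosen to reproduce the visiting order in the Results section.

#include <stdio.h>

#define V 5   /* number of vertices */

void BFS(int graph[V][V], int start) {
    int visited[V] = {0};
    int queue[V], front = 0, rear = 0;   /* simple array-backed queue */

    visited[start] = 1;
    queue[rear++] = start;               /* enqueue the start vertex */

    while (front < rear) {
        int u = queue[front++];          /* dequeue the next vertex */
        printf("Visited %d\n", u);
        for (int v = 0; v < V; v++)      /* enqueue all unvisited neighbors */
            if (graph[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[rear++] = v;
            }
    }
}

int main(void) {
    /* A small example graph; the adjacency data in Fig-17 is not reproduced here. */
    int graph[V][V] = {
        {0, 1, 1, 0, 0},
        {1, 0, 0, 1, 0},
        {1, 0, 0, 0, 1},
        {0, 1, 0, 0, 0},
        {0, 0, 1, 0, 0}
    };
    BFS(graph, 0);
    return 0;
}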

Output:

Fig-18

Result and Discussion:
The result from the Breadth First Search (BFS) traversal is as follows:
• The algorithm has successfully visited all vertices starting from vertex 0, in the order:
o Visited 0
o Visited 1
o Visited 2
o Visited 3
o Visited 4
This output indicates that the BFS algorithm has correctly traversed all vertices in the graph in a
breadth-first manner, visiting each level before moving to the next.
The experiment demonstrates that BFS efficiently traverses a graph, visiting each vertex in the
correct order, which is essential for various applications like finding the shortest path in
unweighted graphs.

Conclusion:
Breadth First Search is an essential graph traversal technique that explores nodes in layers, making it ideal
for tasks like finding the shortest path in unweighted graphs, exploring connected components, and many
other applications. With its clear and systematic approach using a queue, BFS ensures that all nodes are
visited in the correct order, and it operates efficiently with a time complexity of O(V+E).

Experiment No: 09

Experiment Name: Write a program to implement Graph traversal using the Depth First Search technique

Objective:
To implement the Depth First Search (DFS) algorithm for traversing or searching a graph, starting
from a given vertex, to explore as far as possible along each branch before backtracking. The goal
is to visit all reachable vertices in a graph, ensuring that each vertex is explored deeply before
moving to the next one, and effectively discovering paths in a depth-first manner.

Pseudocode: Pseudocode for traversing a graph using the depth first search technique:

function DFS(Graph, start_vertex):
    for each vertex v in Graph:
        visited[v] := false

    function DFS_Visit(v):
        visited[v] := true
        print v
        for each neighbor u of v in Graph:
            if not visited[u]:
                DFS_Visit(u)

    DFS_Visit(start_vertex)

Implementation:

Fig-19

Description of the code:
Depth First Search (DFS) is a graph traversal technique that explores as far as possible along each
branch before backtracking. It is a fundamental algorithm used for searching and exploring all
nodes in a graph, and it works by starting at a source node and exploring deeply along each branch
before moving on to other branches.
Explanation:
• Recursive DFS: In the recursive version, we explore each neighbor of the current node by
calling dfs() recursively for each unvisited neighbor.
• Iterative DFS: In the iterative version, we use an explicit stack to keep track of nodes to
be visited. We pop nodes from the stack, process them, and push their unvisited neighbors
back onto the stack.
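Fig-19 is a screenshot; a minimal recursive C sketch consistent with the description is below. The adjacency data is an assumption, not the graph from the figure.

#include <stdio.h>

#define V 5   /* number of vertices */

int graph[V][V] = {          /* example adjacency matrix (Fig-19's data not shown) */
    {0, 1, 1, 0, 0},
    {1, 0, 0, 1, 0},
    {1, 0, 0, 0, 1},
    {0, 1, 0, 0, 0},
    {0, 0, 1, 0, 0}
};
int visited[V];

/* Recursive DFS: visit v, then recurse into each unvisited neighbor. */
void dfs(int v) {
    visited[v] = 1;
    printf("Visited %d\n", v);
    for (int u = 0; u < V; u++)
        if (graph[v][u] && !visited[u])
            dfs(u);
}

int main(void) {
    dfs(0);   /* start the traversal from vertex 0 */
    return 0;
}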

Output:

Fig-20

Result and Discussion:


• The DFS traversal successfully visited all vertices reachable from the start vertex.
• The order of visiting vertices followed the depth-first approach, exploring each branch
completely before backtracking.
The experiment demonstrated the effectiveness of DFS in traversing a graph and visiting all
reachable vertices. The recursive approach provided a clear and straightforward implementation,
suitable for understanding the fundamental concepts of graph traversal.

Conclusion:
Depth First Search is a powerful graph traversal technique that explores nodes deeply along each
branch before backtracking. It is versatile and can be used in various applications, including
pathfinding, cycle detection, and connected component discovery. DFS is efficient with a time
complexity of O(V+E) making it suitable for large graphs. The algorithm can be implemented
using either recursion or an explicit stack, depending on the problem's requirements.

Experiment No: 10

Experiment Name: Write a program to check whether a given vertex is connected or not using the Depth First Search (DFS) method

Objective:
To implement the Depth First Search (DFS) algorithm to determine whether a given vertex is
connected to a specified source vertex in an undirected or directed graph. The goal is to traverse
the graph starting from the source vertex and check if the target vertex is reachable, thus confirming
if the vertex is connected.

Pseudocode: Pseudocode to check whether a given vertex is connected or not using the Depth First
Search (DFS) method:

function DFS_CheckConnectivity(Graph, start_vertex, target_vertex):
    for each vertex v in Graph:
        visited[v] := false

    function DFS_Visit(v):
        visited[v] := true
        if v == target_vertex:
            return true
        for each neighbor u of v in Graph:
            if not visited[u]:
                if DFS_Visit(u):
                    return true
        return false

    return DFS_Visit(start_vertex)

Implementation:

Fig-21

Description of the code:
To check whether a given vertex is connected to other vertices in a graph, Depth First Search (DFS)
can be used to explore the graph starting from the given vertex. If DFS is able to visit all reachable
vertices from the given vertex, then the graph is connected (at least from the starting vertex to all
reachable nodes). If DFS cannot visit all vertices, then the vertex is not connected to all others.
Explanation:
• DFS Function: The dfs function is a recursive function that visits all the neighbors of the
current node. It marks each visited node to avoid revisiting it.
• Connectivity check: This function starts the DFS from the given start vertex. After the DFS traversal,
it checks whether the size of the visited set equals the total number of vertices in the graph.
If all vertices are visited, the graph is connected; otherwise, it is not.
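Fig-21 is a screenshot; the following minimal C sketch implements the reachability check described above. The edges are assumptions chosen so that DFS from vertex 0 reaches vertex 3 via vertex 2, matching the Results section.

#include <stdio.h>

#define V 4   /* number of vertices */

int graph[V][V];   /* adjacency matrix */
int visited[V];

/* Returns 1 if target is reachable from v by depth-first exploration. */
int dfsCheck(int v, int target) {
    visited[v] = 1;
    printf("Visited %d\n", v);
    if (v == target)
        return 1;
    for (int u = 0; u < V; u++)
        if (graph[v][u] && !visited[u] && dfsCheck(u, target))
            return 1;
    return 0;
}

int main(void) {
    /* Assumed edges: 0-2 and 2-3, with vertex 1 left isolated;
       the graph used in Fig-21 is not reproduced in this report. */
    graph[0][2] = graph[2][0] = 1;
    graph[2][3] = graph[3][2] = 1;

    int start = 0, target = 3;
    if (dfsCheck(start, target))
        printf("Vertex %d is connected to vertex %d\n", start, target);
    else
        printf("Vertex %d is NOT connected to vertex %d\n", start, target);
    return 0;
}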

Output:

Fig-22

Result and Discussion:


DFS started from vertex 0, visited vertex 2, and then reached vertex 3, confirming the connectivity
between vertex 0 and vertex 3.
• Start Vertex: 0
• Target Vertex: 3
The experiment demonstrated that DFS is an effective method for checking connectivity in a graph.
The approach ensures that all reachable vertices from the start vertex are explored, providing a
clear indication of whether the target vertex is connected.

Conclusion:
Using Depth First Search (DFS), we can efficiently check whether a given vertex is connected to
all other vertices in the graph. If DFS from the given vertex visits all vertices, then the graph is
connected from that vertex; otherwise, it is not. This method is simple and effective, with a time
complexity of O(V+E).

