Updated Algorithm Lab Report
Experiment No: 01
Objective:
The objective of Binary Search is to efficiently find the position of a target element in a sorted
array or list. It achieves this by repeatedly dividing the search range in half, rather than checking
each element one by one.
Implementation:
Fig-01
Description of the code:
The code implements a recursive Binary Search algorithm in C to search for a given key in a sorted
array.
• Key Components:
1. BinarySearch Function:
o Parameters:
▪ A[]: Sorted array.
▪ l: Lower index of the search range.
▪ h: Higher index of the search range.
▪ key: Element to search for.
o Logic:
▪ If l == h, checks if the single element is the key.
▪ Calculates the midpoint mid = (l + h) / 2.
▪ Compares the key with A[mid]:
▪ If A[mid] == key, returns mid.
▪ If A[mid] > key, recursively searches in the left subarray.
▪ Otherwise, recursively searches in the right subarray.
▪ Returns -1 if the key is not found.
2. main Function:
o Initializes a sorted array A and its size.
o Defines the key to search.
o Calls BinarySearch to find the key's index.
o Prints whether the element is found and its index or states it's not present.
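Since the implementation itself appears only as a screenshot (Fig-01), the following is a minimal C sketch of the routine described above. The sample array and key are illustrative assumptions, and the base case is written as the slightly more general check l > h (an empty range) rather than l == h.

#include <stdio.h>

/* Recursively search for key in the sorted range A[l..h];
   returns the index of key, or -1 if it is not present. */
int BinarySearch(int A[], int l, int h, int key) {
    if (l > h)
        return -1;                /* empty range: key not found */
    int mid = l + (h - l) / 2;    /* midpoint, written to avoid overflow of l + h */
    if (A[mid] == key)
        return mid;
    if (A[mid] > key)
        return BinarySearch(A, l, mid - 1, key);  /* search left subarray */
    return BinarySearch(A, mid + 1, h, key);      /* search right subarray */
}

int main(void) {
    int A[] = {2, 5, 8, 12, 16, 23, 38};          /* sample sorted array */
    int n = sizeof(A) / sizeof(A[0]);
    int key = 23;
    int idx = BinarySearch(A, 0, n - 1, key);
    if (idx != -1)
        printf("Element %d found at index %d\n", key, idx);
    else
        printf("Element %d is not present\n", key);
    return 0;
}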
Output:
Fig-02
Conclusion:
The code effectively demonstrates a binary search using recursion. This method efficiently finds
an element in a sorted array with a time complexity of O(log n) by repeatedly dividing the search
range in half.
Key Points:
• Fast and efficient for large datasets.
• Correctly identifies if the key is present and returns its index.
• If the key is not found, it returns -1.
Limitations:
• Works only on sorted arrays.
• The recursive approach may use more memory due to function call overhead. An iterative
method could be more memory-efficient for large arrays.
Experiment No: 02
Objective:
The objective of Quick Sort is to sort an array or list of elements by dividing the problem into
smaller sub-problems and sorting them efficiently. Quick Sort uses a divide-and-conquer strategy
to achieve this.
Implementation:
Fig-03
Description of the code:
1. Choosing a Pivot:
o A pivot element is selected; here, the last element of the subarray serves as the pivot.
o The pivot helps in dividing the array into two parts:
▪ Elements smaller than or equal to the pivot.
▪ Elements greater than the pivot.
2. Partitioning the Array:
o The array is rearranged so that all elements smaller than or equal to the pivot are on
the left side.
o All elements greater than the pivot are on the right side.
o The pivot is placed in its correct position in the sorted array.
3. Recursive Sorting:
o Quick Sort is then applied recursively to the two subarrays:
▪ The subarray to the left of the pivot.
▪ The subarray to the right of the pivot.
o This process continues until the subarrays are small enough (contain one or no
elements), at which point they are considered sorted.
4. Combining Results:
o The algorithm combines the sorted subarrays, resulting in a fully sorted array.
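For reference alongside Fig-03, here is a minimal C sketch of the steps described above, using a Lomuto-style partition with the last element as pivot; the sample array is the one shown in the Result and Discussion section.

#include <stdio.h>

void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Partition A[low..high] around the last element (the pivot)
   and return the pivot's final index. */
int partition(int A[], int low, int high) {
    int pivot = A[high];
    int i = low - 1;                   /* boundary of the "<= pivot" region */
    for (int j = low; j < high; j++)
        if (A[j] <= pivot)
            swap(&A[++i], &A[j]);
    swap(&A[i + 1], &A[high]);         /* place pivot in its final position */
    return i + 1;
}

void quickSort(int A[], int low, int high) {
    if (low < high) {
        int p = partition(A, low, high);
        quickSort(A, low, p - 1);      /* sort the subarray left of the pivot */
        quickSort(A, p + 1, high);     /* sort the subarray right of the pivot */
    }
}

int main(void) {
    int A[] = {10, 80, 30, 90, 40, 50, 70};
    int n = sizeof(A) / sizeof(A[0]);
    quickSort(A, 0, n - 1);
    printf("Sorted array: ");
    for (int i = 0; i < n; i++) printf("%d ", A[i]);
    printf("\n");
    return 0;
}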
Output:
Fig-04
Result and Discussion:
When the code is run, the output will be:
Given array: 10 80 30 90 40 50 70
Sorted array: 10 30 40 50 70 80 90
The given code implements the Quick Sort algorithm, which is a highly efficient sorting technique.
It uses the divide-and-conquer approach, where it selects a pivot (here, the last element in the
subarray) and partitions the array into two sections: elements smaller than the pivot on one side,
and larger elements on the other side. After partitioning, it recursively sorts the two subarrays. The
time complexity is O(n log n) on average, making Quick Sort ideal for large datasets. The program
efficiently sorts the input array and prints both the original and sorted arrays.
Conclusion:
The code implements the Quick Sort algorithm, an efficient sorting technique using the divide-
and-conquer approach. It includes functions for swapping elements, partitioning the array, and
recursively sorting subarrays.
Key Points:
• Efficiency: Best/Average Case: O(n log n); Worst Case: O(n^2).
• Strengths: Fast, in-place sorting with minimal memory usage.
• Limitations: Poor pivot selection can lead to O(n^2), and it’s not a stable sort.
Experiment No: 03
Objective:
The objective of Merge Sort is to efficiently sort an array or list of elements using a divide-and-
conquer strategy. It splits the array into smaller subarrays, recursively sorts them, and then merges
the sorted subarrays back together.
Pseudocode:
function Merge(A, left, mid, right)
    n1 = mid - left + 1
    n2 = right - mid
    Create Left[n1]
    Create Right[n2]
    for i = 0 to n1 - 1 do
        Left[i] = A[left + i]
    for j = 0 to n2 - 1 do
        Right[j] = A[mid + 1 + j]
    i = 0
    j = 0
    k = left
    while i < n1 and j < n2 do
        if Left[i] <= Right[j] then
            A[k] = Left[i]
            i++
        else
            A[k] = Right[j]
            j++
        k++
    while i < n1 do
        A[k] = Left[i]
        i++
        k++
    while j < n2 do
        A[k] = Right[j]
        j++
        k++
Implementation:
Fig-05
Description of the code:
Merge Sort is a divide-and-conquer sorting algorithm that breaks an array into smaller parts, sorts
them, and then merges them back together in sorted order. It is one of the most efficient sorting
algorithms for large datasets.
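Since the implementation is shown only in Fig-05, a C sketch that follows the pseudocode above is included here; the sample array is the one from the Result and Discussion section.

#include <stdio.h>

/* Merge the sorted halves A[left..mid] and A[mid+1..right],
   following the pseudocode above. */
void merge(int A[], int left, int mid, int right) {
    int n1 = mid - left + 1, n2 = right - mid;
    int Left[n1], Right[n2];                   /* temporary arrays (C99 VLAs) */
    for (int i = 0; i < n1; i++) Left[i]  = A[left + i];
    for (int j = 0; j < n2; j++) Right[j] = A[mid + 1 + j];
    int i = 0, j = 0, k = left;
    while (i < n1 && j < n2)                   /* copy the smaller front element */
        A[k++] = (Left[i] <= Right[j]) ? Left[i++] : Right[j++];
    while (i < n1) A[k++] = Left[i++];         /* drain whichever half remains */
    while (j < n2) A[k++] = Right[j++];
}

void mergeSort(int A[], int left, int right) {
    if (left < right) {
        int mid = left + (right - left) / 2;
        mergeSort(A, left, mid);
        mergeSort(A, mid + 1, right);
        merge(A, left, mid, right);
    }
}

int main(void) {
    int A[] = {38, 27, 43, 3, 9, 82, 10};
    int n = sizeof(A) / sizeof(A[0]);
    mergeSort(A, 0, n - 1);
    printf("Sorted array: ");
    for (int i = 0; i < n; i++) printf("%d ", A[i]);
    printf("\n");
    return 0;
}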
Output:
Fig-06
Result and Discussion:
When the code is executed, the output will be:
Unsorted array: 38 27 43 3 9 82 10
Sorted array: 3 9 10 27 38 43 82
The given C program implements the Merge Sort algorithm, which is a well-known sorting
technique utilizing the divide-and-conquer approach. The Merge Sort function recursively divides
the array into smaller parts until each subarray has one element (the base case for sorting). The
merge function creates two temporary arrays, iterates through them, compares elements, and
copies the smaller one back into the original array, ensuring sorting. The time complexity of Merge
Sort is O(n log n), which makes it efficient for large datasets. It is a stable sorting algorithm that preserves the relative order of equal elements.
Conclusion:
In conclusion, Merge Sort is an efficient and reliable sorting algorithm with a time complexity of
O(n log n), making it well-suited for large datasets. Its divide-and-conquer approach ensures
stable sorting, which preserves the relative order of equal elements. While it offers consistent
performance even in the worst-case scenario, its space complexity of O(n) may be a limitation in
memory-constrained environments. Despite this, Merge Sort remains a strong choice for
applications requiring stable and predictable sorting, especially when handling large or external
datasets.
Experiment No: 04
Objective:
The objective of the Fractional Knapsack Problem is to maximize the total value of items placed
into a knapsack with a fixed weight capacity, where:
• Each item has a given weight and value.
• Items can be divided into fractions.
• The goal is to determine the fraction of each item to include in the knapsack such that the
total value is maximized while ensuring that the total weight of the selected items does not
exceed the given weight limit of the knapsack.
Pseudocode:
function FractionalKnapsack(capacity, items)
    sort items by value/weight ratio in descending order
    totalValue = 0
    for each item in items do
        if item.weight <= capacity then
            capacity -= item.weight; totalValue += item.value
        else
            totalValue += item.value * (capacity / item.weight); break
    return totalValue
Implementation:
Fig-07
Description of the code:
The Knapsack Problem is a classic optimization problem in computer science and mathematics,
where the goal is to select a subset of items to maximize the total value, subject to a weight
constraint. It is commonly used in scenarios where resources are limited, such as packing, budget
allocation, and resource management.
Given:
• A set of items, each with a weight and a value.
• A knapsack (container) with a maximum weight capacity.
The objective is to determine the optimal subset of items to include in the knapsack such that:
• The total weight of the selected items does not exceed the knapsack's capacity.
• The total value of the selected items is maximized.
The Knapsack Problem is a foundational problem in optimization, with real-world applications
across various industries. The solution method depends on the problem variant and the trade-off
between time and space complexity.
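The program itself appears only in Fig-07; the sketch below illustrates the same greedy strategy under stated assumptions: it uses the standard library's qsort in place of the report's swap-based sortItemsByRatio, and its sample items are hypothetical textbook values (they do not reproduce the 190.60 result shown below).

#include <stdio.h>
#include <stdlib.h>

typedef struct { int value, weight; } Item;

/* qsort comparator: sort items by value/weight ratio, highest first. */
int byRatioDesc(const void *a, const void *b) {
    const Item *x = a, *y = b;
    double rx = (double)x->value / x->weight;
    double ry = (double)y->value / y->weight;
    return (rx < ry) - (rx > ry);
}

double fractionalKnapsack(Item items[], int n, double capacity) {
    qsort(items, n, sizeof(Item), byRatioDesc);
    double total = 0.0;
    for (int i = 0; i < n && capacity > 0; i++) {
        if (items[i].weight <= capacity) {      /* take the whole item */
            capacity -= items[i].weight;
            total += items[i].value;
        } else {                                /* take the fraction that fits */
            total += items[i].value * (capacity / items[i].weight);
            capacity = 0;
        }
    }
    return total;
}

int main(void) {
    Item items[] = {{60, 10}, {100, 20}, {120, 30}};  /* hypothetical sample data */
    double capacity = 50.0;
    printf("Maximum value in Knapsack = %.2f\n",
           fractionalKnapsack(items, 3, capacity));   /* prints 240.00 for this data */
    return 0;
}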
Output:
Fig-08
Result and Discussion:
When the code is executed, the output will be:
Maximum value in Knapsack = 190.60
The program solves the fractional knapsack problem using a greedy method. It uses an item
structure and swap function to sort items, sortItemsByRatio to sort items based on their value-to-
weight ratio, and the fractionalKnapsack function to iterate through sorted items. The main
function initializes the knapsack capacity and items, and the algorithm calculates the maximum
value within the given capacity, highlighting the efficiency and simplicity of greedy algorithms.
Conclusion:
In conclusion, the Knapsack Problem is a fundamental optimization problem with a wide range of
real-world applications, from resource allocation to logistics. The problem involves selecting a
subset of items to maximize value while adhering to a weight constraint. The solution approach
varies based on the problem type—dynamic programming is commonly used for the 0/1 Knapsack,
while greedy algorithms are efficient for the fractional variant. Despite its complexity, the
Knapsack Problem provides valuable insights into optimization strategies and remains a key
challenge in both theoretical and applied computer science.
Experiment No: 05
Objective:
Job Scheduling with Deadlines is a classical optimization problem where we aim to maximize
profit by scheduling a set of jobs, each with a deadline and profit. Each job must be completed
within its deadline, and only one job can be scheduled at a time.
• Greedy Approach:
The greedy method solves the problem by first sorting the jobs by profit (in descending order) and
then trying to schedule the highest-profit job in the latest available time slot before its deadline.
This way, it ensures that the most profitable jobs are prioritized while adhering to their respective
deadlines.
Pseudocode:
function findMaxDeadline(jobs[], n)
{
    maxDeadline = jobs[0].deadline
    for i = 1 to n-1
        if jobs[i].deadline > maxDeadline then maxDeadline = jobs[i].deadline
    return maxDeadline
}
function jobScheduling(jobs[], n)
{
    Sort jobs[] by profit in descending order using the compareJobs function
    maxDeadline = findMaxDeadline(jobs, n)
    Create a timeSlots[] array of size maxDeadline and initialize all slots to -1
    totalProfit = 0
    countJobs = 0
    for i = 0 to n-1
    {
        for j = min(maxDeadline, jobs[i].deadline) - 1 down to 0
        {
            if timeSlots[j] == -1
            {
                timeSlots[j] = jobs[i].id
                totalProfit += jobs[i].profit
                countJobs += 1
                break
            }
        }
    }
}
Implementation:
Fig-09
Description of the code:
Job Scheduling with Deadlines (Greedy Method) is an optimization problem where the objective
is to schedule jobs in such a way that the total profit is maximized, subject to certain constraints,
such as job deadlines and available time slots.
Given:
• A set of jobs, each with:
o A deadline by which the job must be completed.
o A profit that is earned if the job is completed before or on its deadline.
• A set of time slots (usually limited in number), and each job takes one unit of time to
complete.
The goal is to schedule jobs in such a way that:
• Each job is completed by its respective deadline.
• The total profit from the scheduled jobs is maximized.
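As the implementation appears only in Fig-09, the following C sketch follows the pseudocode above; the job set is hypothetical (it does not reproduce the exact output shown below), and the standard library's qsort stands in for a hand-written sort.

#include <stdio.h>
#include <stdlib.h>

typedef struct { int id, deadline, profit; } Job;

int compareJobs(const void *a, const void *b) {   /* sort by profit, descending */
    return ((const Job *)b)->profit - ((const Job *)a)->profit;
}

void jobScheduling(Job jobs[], int n) {
    qsort(jobs, n, sizeof(Job), compareJobs);
    int maxDeadline = 0;
    for (int i = 0; i < n; i++)
        if (jobs[i].deadline > maxDeadline) maxDeadline = jobs[i].deadline;
    int *slot = malloc(maxDeadline * sizeof(int));
    for (int j = 0; j < maxDeadline; j++) slot[j] = -1;
    int totalProfit = 0, count = 0;
    for (int i = 0; i < n; i++)                   /* try the latest free slot first */
        for (int j = jobs[i].deadline - 1; j >= 0; j--)
            if (slot[j] == -1) {
                slot[j] = jobs[i].id;
                totalProfit += jobs[i].profit;
                count++;
                break;
            }
    printf("Number of jobs scheduled: %d\nTotal profit: %d\n", count, totalProfit);
    printf("Scheduled jobs in time slots:");
    for (int j = 0; j < maxDeadline; j++)
        if (slot[j] != -1) printf(" %d", slot[j]);
    printf("\n");
    free(slot);
}

int main(void) {
    Job jobs[] = {{1, 2, 40}, {2, 4, 15}, {3, 3, 60}, {4, 2, 20}, {5, 3, 10}};
    jobScheduling(jobs, 5);                       /* hypothetical job set */
    return 0;
}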
Output:
Fig-10
Result and Discussion:
When the code is executed, the output will be:
Number of jobs scheduled: 4
Total profit: 86
Scheduled jobs in time slots: 9 4 2 5
This C program uses a greedy algorithm to schedule jobs based on their deadlines and profits. It
defines a Job structure with three properties: ID, deadline, and profit. The program uses the
compareJobs function to sort jobs based on profits, findMaxDeadline for maximum deadlines, and
the jobScheduling function to solve the problem. Jobs are sorted by profit, time slots are allocated,
and the process continues until all jobs are checked or slots are filled. The output shows the total
number of scheduled jobs, total profit, and jobs assigned to each time slot.
Conclusion:
In conclusion, the Job Scheduling with Deadlines problem, when solved using the Greedy Method,
offers an efficient approach to maximize profit while respecting job deadlines. By prioritizing
high-profit jobs and scheduling them in the latest available time slots before their deadlines, the
greedy algorithm provides a good solution with a time complexity of O(n log n). While it may not
always guarantee the absolute optimal solution in all cases, the greedy approach is effective and
widely applicable in real-world scenarios such as task scheduling, project management, and
resource allocation.
Experiment No: 06-(a)
Objective:
The objective of implementing the Fibonacci series using the recursive method is to compute the
sequence of numbers where each number is the sum of the two preceding ones, starting from 0 and
1. The recursive approach aims to break down the problem into smaller subproblems by using the
following recurrence relation:
F(n) = F(n-1) + F(n-2)
Where:
• F(0)=0 (Base case)
• F(1)=1 (Base case)
Pseudocode:
function Fibonacci(n)
    if n <= 1 then return n
    return Fibonacci(n-1) + Fibonacci(n-2)
main
    define n
    for i from 0 to n-1
        print Fibonacci(i)
Implementation:
Fig-11
Recursive Method:
The Fibonacci sequence can be computed using a recursive approach, where each Fibonacci number is
calculated by calling the function recursively to compute the previous two Fibonacci numbers.
Explanation:
• The function Fibonacci(n) checks whether n is 0 or 1. If so, it returns n, because F(0)=0 and F(1)=1.
• For any n>1, the function calls itself recursively to compute the two preceding Fibonacci numbers F(n−1) and F(n−2), and then returns their sum, F(n)=F(n−1)+F(n−2).
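A minimal C version of this recursive definition (the full code is shown in Fig-11) might look as follows; n = 10 is an illustrative choice.

#include <stdio.h>

/* F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2) */
int Fibonacci(int n) {
    if (n <= 1)
        return n;                       /* base cases */
    return Fibonacci(n - 1) + Fibonacci(n - 2);
}

int main(void) {
    int n = 10;                         /* number of terms to print */
    for (int i = 0; i < n; i++)
        printf("%d ", Fibonacci(i));
    printf("\n");                       /* 0 1 1 2 3 5 8 13 21 34 */
    return 0;
}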
Output:
Fig-12
Conclusion:
In conclusion, the recursive method for calculating Fibonacci numbers is a straightforward
approach that mirrors the mathematical definition of the sequence. While it is easy to understand
and implement, its time complexity of O(2^n) makes it inefficient for larger values of n, as it
involves redundant calculations. Although it serves as a good introduction to recursion, for
practical applications with larger inputs, more efficient methods like iteration or dynamic
programming are preferred to improve performance.
Experiment No: 06-(b)
Objective:
The objective of implementing the Fibonacci series using the tabulation method (a bottom-up
dynamic programming approach) is to efficiently compute the Fibonacci sequence by building the
solution iteratively from the base cases, avoiding redundant calculations seen in the recursive
method.
Pseudocode:
function FibonacciTabulation(n)
    if n <= 1 then return n
    Create fibArray[0..n]
    fibArray[0] = 0
    fibArray[1] = 1
    for i from 2 to n do
        fibArray[i] = fibArray[i-1] + fibArray[i-2]
    return fibArray[n]
main
    define n
    for i from 0 to n-1
        print FibonacciTabulation(i)
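The pseudocode translates directly into C; a sketch follows (the report's own implementation appears in Fig-13 below), with n = 10 as an illustrative choice and a C99 variable-length array as the table.

#include <stdio.h>

/* Bottom-up (tabulation) Fibonacci: fill a table from the base cases up,
   so each value is computed exactly once. */
int FibonacciTabulation(int n) {
    if (n <= 1) return n;
    int fib[n + 1];                     /* table of results (C99 VLA) */
    fib[0] = 0;
    fib[1] = 1;
    for (int i = 2; i <= n; i++)
        fib[i] = fib[i - 1] + fib[i - 2];
    return fib[n];
}

int main(void) {
    int n = 10;
    for (int i = 0; i < n; i++)
        printf("%d ", FibonacciTabulation(i));
    printf("\n");                       /* 0 1 1 2 3 5 8 13 21 34 */
    return 0;
}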
Implementation:
Fig-13
Output:
Fig-14
Conclusion:
The tabulation method for calculating Fibonacci numbers is a more efficient solution compared
to the recursive approach. With a time complexity of O(n), it avoids redundant calculations and
ensures that each Fibonacci number is computed only once. This method is particularly useful
when working with large values of n, and it can be further optimized to use constant space by
keeping track of only the last two computed values.
Experiment No: 07
Objective:
To implement Dijkstra’s Algorithm for finding the shortest path from a given source vertex to all
other vertices in a weighted, connected graph, ensuring efficient computation and accurate results.
Pseudocode for finding the shortest paths to all other vertices using Dijkstra's Algorithm:
function Dijkstra(Graph, source):
    for each vertex v in Graph:
        dist[v] := infinity
        prev[v] := undefined
        add v to Q
    dist[source] := 0
    while Q is not empty:
        u := vertex in Q with minimum dist[u]
        remove u from Q
        for each neighbor v of u:
            alt := dist[u] + length(u, v)
            if alt < dist[v]:
                dist[v] := alt
                prev[v] := u
                decrease priority of v in Q
    return dist[], prev[]
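The pseudocode translates into the compact adjacency-matrix C sketch below (the report's own implementation appears in Fig-15); the five-vertex graph and its weights are illustrative assumptions, and a linear scan over unvisited vertices stands in for the priority queue.

#include <stdio.h>
#include <limits.h>

#define V 5   /* number of vertices (illustrative) */

/* Index of the unvisited vertex with the smallest tentative distance. */
int minDistance(const int dist[], const int visited[]) {
    int min = INT_MAX, idx = -1;
    for (int v = 0; v < V; v++)
        if (!visited[v] && dist[v] < min) { min = dist[v]; idx = v; }
    return idx;
}

void dijkstra(int graph[V][V], int src) {
    int dist[V], visited[V] = {0};
    for (int i = 0; i < V; i++) dist[i] = INT_MAX;
    dist[src] = 0;
    for (int count = 0; count < V - 1; count++) {
        int u = minDistance(dist, visited);
        if (u == -1) break;                    /* remaining vertices unreachable */
        visited[u] = 1;
        for (int v = 0; v < V; v++)            /* relax every edge out of u */
            if (!visited[v] && graph[u][v] && dist[u] != INT_MAX
                && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
    for (int i = 0; i < V; i++)
        printf("Vertex %d: distance %d\n", i, dist[i]);
}

int main(void) {
    int graph[V][V] = {                        /* 0 means no edge (sample weights) */
        {0, 4, 1, 0, 0},
        {4, 0, 2, 5, 0},
        {1, 2, 0, 8, 10},
        {0, 5, 8, 0, 2},
        {0, 0, 10, 2, 0}
    };
    dijkstra(graph, 0);
    return 0;
}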
Implementation:
Fig-15
Output:
Fig-16
Conclusion:
Dijkstra's Algorithm is a powerful and efficient method for solving the single-source shortest path
problem in graphs with non-negative edge weights. Its greedy approach ensures that the shortest
path is found iteratively, and its efficiency makes it suitable for a wide range of practical
applications in fields like networking, routing, and logistics. However, it does not handle graphs
with negative edge weights, for which other algorithms like Bellman-Ford are used.
Experiment No: 08
Objective:
To implement the Breadth First Search (BFS) algorithm for traversing or searching a graph,
starting from a given vertex, to systematically visit all vertices and edges in a breadth-first manner.
The goal is to explore the graph layer by layer and ensure that all reachable vertices are visited
while maintaining correct traversal order.
Pseudocode for traversing a graph using the breadth-first search technique:
function BFS(Graph, start):
    create an empty queue Q
    mark start as visited
    enqueue start into Q
    while Q is not empty:
        u := dequeue from Q
        visit u
        for each neighbor v of u:
            if v is not visited:
                mark v as visited
                enqueue v into Q
Implementation:
Fig-17
Description of the code:
Breadth First Search (BFS) is a graph traversal algorithm that explores the vertices of a graph level by level, visiting all the neighbors of a vertex before moving on to their neighbors. It is a powerful technique for exploring graphs and is particularly useful for finding the shortest path in an unweighted graph.
Explanation:
• Queue Initialization: We initialize a queue with the starting node and mark it as visited.
• Traversal Loop: In each iteration, we dequeue the front node from the queue, visit it, and
enqueue all its unvisited neighbors.
• Termination: The process continues until the queue is empty, meaning all reachable nodes
have been visited.
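For reference alongside Fig-17, here is a minimal C sketch of the queue-based traversal just described; the five-vertex adjacency matrix is a hypothetical example, chosen so that the visit order matches the output discussed below.

#include <stdio.h>

#define V 5   /* number of vertices (illustrative) */

void bfs(int adj[V][V], int start) {
    int visited[V] = {0};
    int queue[V], front = 0, rear = 0;     /* each vertex is enqueued at most once */
    queue[rear++] = start;                 /* enqueue the start vertex */
    visited[start] = 1;
    while (front < rear) {
        int u = queue[front++];            /* dequeue the front vertex */
        printf("Visited %d\n", u);
        for (int v = 0; v < V; v++)        /* enqueue all unvisited neighbors */
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[rear++] = v;
            }
    }
}

int main(void) {
    int adj[V][V] = {                      /* sample undirected graph */
        {0, 1, 1, 0, 0},
        {1, 0, 0, 1, 0},
        {1, 0, 0, 0, 1},
        {0, 1, 0, 0, 1},
        {0, 0, 1, 1, 0}
    };
    bfs(adj, 0);                           /* prints Visited 0..4 in order */
    return 0;
}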
Output:
Fig-18
Result and Discussion:
The result from the Breadth First Search (BFS) traversal is as follows:
• The algorithm has successfully visited all vertices starting from vertex 0, in the order:
o Visited 0
o Visited 1
o Visited 2
o Visited 3
o Visited 4
This output indicates that the BFS algorithm has correctly traversed all vertices in the graph in a
breadth-first manner, visiting each level before moving to the next.
The experiment demonstrates that BFS efficiently traverses a graph, visiting each vertex in the
correct order, which is essential for various applications like finding the shortest path in
unweighted graphs.
Conclusion:
Breadth First Search is an essential graph traversal technique that explores nodes in layers, making it ideal
for tasks like finding the shortest path in unweighted graphs, exploring connected components, and many
other applications. With its clear and systematic approach using a queue, BFS ensures that all nodes are
visited in the correct order, and it operates efficiently with a time complexity of O(V+E).
Experiment No: 09
Objective:
To implement the Depth First Search (DFS) algorithm for traversing or searching a graph, starting
from a given vertex, to explore as far as possible along each branch before backtracking. The goal
is to visit all reachable vertices in a graph, ensuring that each vertex is explored deeply before
moving to the next one, and effectively discovering paths in a depth-first manner.
Pseudocode for traversing a graph using the depth-first search technique:
function DFS(Graph, u):
    mark u as visited
    visit u
    for each neighbor v of u:
        if v is not visited:
            DFS(Graph, v)
Implementation:
Fig-19
Description of the code:
Depth First Search (DFS) is a graph traversal technique that explores as far as possible along each
branch before backtracking. It is a fundamental algorithm used for searching and exploring all
nodes in a graph, and it works by starting at a source node and exploring deeply along each branch
before moving on to other branches.
Explanation:
• Recursive DFS: In the recursive version, we explore each neighbor of the current node by
calling dfs() recursively for each unvisited neighbor.
• Iterative DFS: In the iterative version, we use an explicit stack to keep track of nodes to
be visited. We pop nodes from the stack, process them, and push their unvisited neighbors
back onto the stack.
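For reference alongside Fig-19, a minimal recursive C sketch of the traversal just described follows; the five-vertex graph is an illustrative assumption.

#include <stdio.h>

#define V 5   /* number of vertices (illustrative) */

/* Recursive DFS: visit u, then explore each unvisited neighbor deeply
   before backtracking. */
void dfs(int adj[V][V], int visited[V], int u) {
    visited[u] = 1;
    printf("Visited %d\n", u);
    for (int v = 0; v < V; v++)
        if (adj[u][v] && !visited[v])
            dfs(adj, visited, v);
}

int main(void) {
    int adj[V][V] = {                  /* sample undirected graph */
        {0, 1, 1, 0, 0},
        {1, 0, 0, 1, 0},
        {1, 0, 0, 0, 1},
        {0, 1, 0, 0, 1},
        {0, 0, 1, 1, 0}
    };
    int visited[V] = {0};
    dfs(adj, visited, 0);
    return 0;
}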
Output:
Fig-20
Conclusion:
Depth First Search is a powerful graph traversal technique that explores nodes deeply along each
branch before backtracking. It is versatile and can be used in various applications, including
pathfinding, cycle detection, and connected component discovery. DFS is efficient with a time
complexity of O(V+E), making it suitable for large graphs. The algorithm can be implemented
using either recursion or an explicit stack, depending on the problem's requirements.
Experiment No: 10
Objective:
To implement the Depth First Search (DFS) algorithm to determine whether a given vertex is
connected to a specified source vertex in an undirected or directed graph. The goal is to traverse
the graph starting from the source vertex and check if the target vertex is reachable, thus confirming
if the vertex is connected.
Pseudocode for checking whether a given vertex is connected to all other vertices using the Depth First Search (DFS) method:
function checkConnected(Graph, start):
    mark all vertices as unvisited
    DFS(Graph, start)    // marks every vertex reachable from start as visited
    for each vertex v in Graph:
        if v is not visited:
            return false
    return true
Implementation:
Fig-21
Description of the code:
To check whether a given vertex is connected to other vertices in a graph, Depth First Search (DFS)
can be used to explore the graph starting from the given vertex. If DFS is able to visit all reachable
vertices from the given vertex, then the graph is connected (at least from the starting vertex to all
reachable nodes). If DFS cannot visit all vertices, then the vertex is not connected to all others.
Explanation:
• DFS Function: The dfs function is a recursive function that visits all the neighbors of the
current node. It marks each visited node to avoid revisiting it.
• Connectivity check function: This function starts the DFS from the given start node. After the DFS traversal,
it checks whether the size of the visited set equals the total number of vertices in the graph.
If all vertices are visited, the graph is connected; otherwise, it is not.
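For reference alongside Fig-21, here is a minimal C sketch of the check just described; the function name isConnectedFrom and the five-vertex path graph are hypothetical.

#include <stdio.h>

#define V 5   /* number of vertices (illustrative) */

/* Mark every vertex reachable from u. */
void dfs(int adj[V][V], int visited[V], int u) {
    visited[u] = 1;
    for (int v = 0; v < V; v++)
        if (adj[u][v] && !visited[v])
            dfs(adj, visited, v);
}

/* Returns 1 if every vertex is reachable from start, 0 otherwise. */
int isConnectedFrom(int adj[V][V], int start) {
    int visited[V] = {0};
    dfs(adj, visited, start);
    for (int v = 0; v < V; v++)
        if (!visited[v])
            return 0;                  /* some vertex was never reached */
    return 1;
}

int main(void) {
    int adj[V][V] = {                  /* sample undirected path graph 0-1-2-3-4 */
        {0, 1, 0, 0, 0},
        {1, 0, 1, 0, 0},
        {0, 1, 0, 1, 0},
        {0, 0, 1, 0, 1},
        {0, 0, 0, 1, 0}
    };
    puts(isConnectedFrom(adj, 0) ? "Graph is connected" : "Graph is not connected");
    return 0;
}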
Output:
Fig-22
Conclusion:
Using Depth First Search (DFS), we can efficiently check whether a given vertex is connected to
all other vertices in the graph. If DFS from the given vertex visits all vertices, then the graph is
connected from that vertex; otherwise, it is not. This method is simple and effective, with a time
complexity of O(V+E).