
DESIGN AND ANALYSIS OF ALGORITHMS

Unit 1

Introduction to Algorithms
Algorithms are step-by-step procedures or formulas for solving problems. They are essential
in computer science for developing software and systems. Key aspects of studying algorithms
include their efficiency and the methods used to analyze and improve them.
Fundamentals of Algorithmic Solving
Algorithmic solving involves systematically breaking down a problem into smaller,
manageable parts, devising a plan to solve each part, and combining these solutions to solve
the overall problem. This process typically includes problem understanding, designing an
algorithm, implementing it, and analyzing its correctness and efficiency. Key steps include:
 Problem Definition: Clearly defining the problem and its constraints.
 Algorithm Design: Creating a step-by-step plan or flowchart to solve the problem.
 Implementation: Writing the algorithm in a programming language.
 Testing and Debugging: Ensuring the algorithm works correctly for various inputs.
 Optimization: Improving the algorithm for better performance.
Important Problem Types
Several common types of problems are frequently addressed by algorithms, including:
 Sorting: Arranging data in a particular order (e.g., quicksort, mergesort).
 Searching: Finding specific data within a dataset (e.g., binary search).
 Graph Problems: Analyzing graphs and networks (e.g., Dijkstra's algorithm for
shortest paths).
 Dynamic Programming: Solving complex problems by breaking them down into
simpler subproblems (e.g., Fibonacci sequence).
 Greedy Algorithms: Making a series of choices that are locally optimal (e.g.,
Kruskal's algorithm for minimum spanning tree).

Analysing Algorithms
Algorithm Analysis is the process of determining the computational complexity of an
algorithm. It involves measuring both time and space complexity to evaluate performance.
Complexity Analysis involves evaluating how the runtime or space requirements of an
algorithm grow as the input size increases.
1. Big O Notation (O): Represents the upper bound of the time complexity, providing an
asymptotic analysis.
2. Big Ω Notation (Ω): Represents the lower bound of the time complexity.
3. Big Θ Notation (Θ): Represents the tight bound, providing both the upper and lower
bounds.
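For example, if an algorithm performs f(n) = 3n^2 + 5n + 2 basic operations, then f(n) = O(n^2), f(n) = Ω(n^2), and hence f(n) = Θ(n^2): for every n ≥ 1 we have 3n^2 ≤ f(n) ≤ 10n^2, so the same quadratic function bounds f(n) from below and above up to constant factors.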
Complexity of Algorithms
1. Time Complexity: Describes how the execution time of an algorithm changes with the
size of the input. Common notations include O(n), O(log n), O(n^2), etc.
2. Space Complexity: Measures the amount of memory an algorithm uses relative to the
input size.

Growth of Functions
Growth of Functions helps us understand how algorithms perform as the input size grows.
Common growth rates include constant time (O(1)), logarithmic time (O(log n)), linear time
(O(n)), and polynomial time (O(n^k)).
Example
 O(1): Accessing an element in an array.
 O(log n): Binary search in a sorted array.
 O(n): Iterating through an array.
 O(n^2): Nested loops.
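To see how quickly these rates diverge, take n = 1,000,000: log n (base 2) is about 20, n is 10^6, and n^2 is 10^12, so an O(log n) algorithm performs roughly twenty steps where an O(n^2) algorithm performs about a trillion.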
Analysis of a Simple For Loop
Consider the following simple for loop:
for (i = 1; i <= n; i++)
{
    v[i] = v[i] + 1;
}
Analysis
1. Loop Execution Count
The loop variable i starts at 1 and increments by 1 until it exceeds n. Therefore, the
loop executes exactly n times. Each iteration of the loop performs a constant amount
of work, specifically updating the value of v[i].
2. Time Complexity
o Constant Time Operations: The operations inside the loop (i.e., v[i] = v[i] + 1)
are constant time operations, denoted as O(1), because they do not depend on
the size of n.
o Total Running Time: Since the loop runs n times and each iteration takes
constant time, the total running time of the loop is proportional to n.
Therefore, we express the time complexity of the loop as O(n). This notation
provides an upper bound on the running time, abstracting away constant
factors and lower-order terms.
Big-O Notation
 Multiplicative Factor: The actual time taken might be expressed as a multiple of n.
For example, if the loop execution takes 100n instructions or 34n microseconds, the
Big-O notation abstracts away these constants. Thus, O(n) remains the appropriate
notation despite the specific constants.
 Additive Factor: The loop may incur a constant startup time, such as an additional 3
microseconds. In Big-O notation, additive constants are also disregarded. So, a
running time of 34n + 3 microseconds still falls under O(n).
 The time complexity of the loop is O(n).
 The Big-O notation reflects that the loop’s running time grows linearly with n,
ignoring constant multiplicative and additive factors.
 O(n) indicates linear growth in the number of operations relative to the input
size n.
The loop's linear running time can be simplified to O(n) in Big-O notation, which describes
its performance accurately for large input sizes.

Analysis of a Nested For Loop


Consider the following nested for loop:
for (i = 1; i <= n; i++)
{
    for (j = 1; j <= n; j++)
    {
        a[i, j] = b[i, j] * x;
    }
}
Analysis
1. Loop Execution Count
o Outer Loop: The outer loop runs from i = 1 to i <= n, executing n times.
o Inner Loop: For each iteration of the outer loop, the inner loop runs from j = 1
to j <= n, executing n times.
Therefore, the total number of iterations of the inner loop is:
n × n = n^2
The inner loop runs n times for each of the n iterations of the outer loop, resulting in a
total of n^2 iterations.
2. Time Complexity
o Constant Time Operations: The assignment statement a[i, j] = b[i, j] * x in
the inner loop is a constant time operation, denoted as O(1).
o Total Running Time: Since the inner loop executes n^2 times and each
iteration performs a constant time operation, the total running time of the code
is proportional to n^2. Hence, the time complexity of the nested loops is
O(n^2).

Quadratic Running Time


 The code has quadratic running time because the time complexity is proportional to
the square of the input size n. This is represented as O(n^2) in Big-O notation.
 The time complexity of the nested for loop is O(n^2).
 The nested loops result in a total of n^2 iterations, each involving a
constant time operation.
 This quadratic time complexity indicates that the running time grows
proportionally to the square of the input size n.
 Thus, the code is said to have quadratic running time, reflecting the significant increase in operations as the input size n grows.

Analysis of Matrix Multiplication


Consider the code for multiplying two n×n matrices A and B to compute their product matrix C. The code is as follows:
for (i = 1; i <= n; i++) {
    for (j = 1; j <= n; j++) {
        C[i, j] = 0;
        for (k = 1; k <= n; k++) {
            C[i, j] = C[i, j] + A[i, k] * B[k, j];
        }
    }
}
Analysis
Initialization of Matrix C
Outer Loops: The initialization of matrix C requires two nested loops:
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        C[i, j] = 0;
Each loop runs n times, so this part executes n*n = n^2 times. Each iteration
of these loops involves a constant time operation (setting C[i, j] to 0). Thus,
this part of the code has a time complexity of O(n^2).
Matrix Multiplication
Nested Loops for Computation:
for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        for (k = 1; k <= n; k++)
            C[i, j] = C[i, j] + A[i, k] * B[k, j];
Here, we have three nested loops:
 The outer two loops (over i and j) iterate n times each.
 The innermost loop (over k) also iterates n times for each combination
of i and j.
Therefore, the total number of iterations for the innermost loop is:
n × n × n = n^3
Each iteration of the innermost loop involves a constant time operation: computing
the product A[i, k] * B[k, j] and updating C[i, j]. Thus, the matrix multiplication part
of the code has a time complexity of O(n^3).
Overall Time Complexity

Combining the complexities of the initialization and the multiplication:


 Initialization: O(n^2)
 Multiplication: O(n^3)
Since O(n^3) dominates O(n^2), the overall time complexity of the matrix multiplication algorithm is O(n^3).
 The matrix multiplication algorithm involves three nested loops, leading to a
time complexity of O(n^3).
 The initialization step contributes O(n^2) to the time complexity, but this is
overshadowed by the O(n^3) complexity of the multiplication step.
 Hence, the matrix multiplication algorithm is said to have cubic running time,
reflecting its proportional growth with the cube of the input size n.

Common Problem Types


1. Sorting:
 Algorithms: QuickSort, MergeSort, Bubble Sort
 Purpose: Arrange data in a specific order.
2. Searching:
 Algorithms: Binary Search, Linear Search
 Purpose: Find specific elements within a dataset.
3. Graph Problems:
 Algorithms: Dijkstra's, Bellman-Ford, Prim's
 Purpose: Analyze graphs and networks.
4. Dynamic Programming:
 Algorithms: Fibonacci sequence, Knapsack problem
 Purpose: Solve complex problems by breaking them into simpler subproblems.
5. Greedy Algorithms:
 Algorithms: Kruskal's, Huffman coding
 Purpose: Make locally optimal choices at each step.
Recurrences
Recurrence Relations: A recurrence relation is an equation that defines a function in terms of its own value on smaller inputs. There is no single technique or algorithm that can be used to solve all recurrence relations; in fact, some recurrence relations cannot be solved in closed form. Most of the recurrence relations we encounter are linear recurrence relations with constant coefficients. A recurrence describes the time complexity of a recursive algorithm, so solving it reveals the algorithm's overall complexity.

Solving Recurrences
1. Substitution Method: The substitution method is a technique used to solve recurrence
relations, which are often encountered in analyzing the time complexity of recursive
algorithms. The method involves guessing the form of the solution and then using
mathematical induction to verify that the guess is correct.
Steps for the Substitution Method
1. Guess the Form of the Solution: Make an educated guess about the asymptotic form
of the solution to the recurrence relation. This guess is often based on the form of the
recurrence relation.
2. Prove the Guess by Induction: Use mathematical induction to prove that your guess
is correct. This involves showing that the guess holds for the base case and then
proving that if it holds for a certain value, it also holds for the next value.

Example 1: Simple Recurrence Relation


Consider the recurrence relation:
T(n) = 2T(n/2) + n
Step 1: Guess the Form of the Solution
A reasonable guess for this recurrence relation is T(n) = O(n log n). We'll use this guess to prove our result.
Step 2: Prove the Guess by Induction
 Base Case:
For n = 1 we assume T(1) = c, where c is a constant. Since log 1 = 0, the bound Cn log n is vacuous at n = 1, so we take n = 2 as the base case: T(2) is a constant and satisfies T(2) ≤ C · 2 log 2 for a sufficiently large C. The base case holds.
 Inductive Step:
Assume that the guess T(k) ≤ Ck log k is true for all k < n. We need to show that it holds for n.

Substituting the guess into the recurrence relation:


T(n) = 2T(n/2) + n

Using the induction hypothesis:

T(n/2) ≤ C(n/2) log(n/2)

Thus:

T(n) ≤ 2[C(n/2) log(n/2)] + n

Simplify:

T(n) ≤ Cn log(n/2) + n
T(n) ≤ Cn(log n − log 2) + n
T(n) ≤ Cn log n − Cn log 2 + n

Taking logarithms base 2 (so log 2 = 1), the −Cn term absorbs the +n term whenever C ≥ 1, giving T(n) ≤ Cn log n for sufficiently large n.
Thus, the guess T(n) = O(n log n) is correct.
The solution to the recurrence relation T(n) = 2T(n/2) + n is T(n) = O(n log n).

Example 2: Another Recurrence Relation


Consider the recurrence relation:
T(n) = 3T(n/4) + n

Step 1: Guess the Form of the Solution


A reasonable guess is T(n) = O(n). We'll use this guess to verify.
Step 2: Prove the Guess by Induction
 Base Case:
For n = 1, we assume T(1) = c. Our guess is T(n) = O(n), which simplifies to T(1) = O(1). The base case holds.
 Inductive Step:
Assume that T(k) ≤ Ck is true for all k < n. We need to show it holds for n.
Substituting the guess into the recurrence relation:
T(n) = 3T(n/4) + n
Using the induction hypothesis:
T(n/4) ≤ C(n/4)
Thus:
T(n) ≤ 3[C(n/4)] + n = 3Cn/4 + n
To satisfy T(n) ≤ Cn, we need:
3Cn/4 + n ≤ Cn
Simplifying:
n ≤ Cn − 3Cn/4
n ≤ Cn/4
This holds for all n ≥ 1 provided C ≥ 4. Thus, the guess T(n) = O(n) is correct.
Conclusion: The solution to the recurrence relation T(n) = 3T(n/4) + n is T(n) = O(n).

Iteration Method for Solving Recurrences


The iteration method, also known as the expansion method or unfolding method, is a
technique used to solve recurrence relations. The method involves expanding the recurrence
relation step by step to find a pattern or derive a closed-form solution.
Steps for Using the Iteration Method
1. Expand the Recurrence: Start with the recurrence relation and iteratively substitute
the recurrence into itself to find a pattern.
2. Identify the Pattern: Look for a general pattern or formula after several iterations.
3. Solve for the Pattern: Express the general pattern in a closed-form solution.

Example 1: Simple Recurrence Relation


Consider the recurrence relation:
T(n) = T(n−1) + 1 with the base case T(1) = 1
Step 1: Expand the Recurrence
Expand the recurrence by substituting T(n−1) into the relation:
T(n) = T(n−1) + 1
T(n−1) = T(n−2) + 1
T(n−2) = T(n−3) + 1
Substitute these into the original recurrence:
T(n) = (T(n−2) + 1) + 1
T(n) = T(n−2) + 2
T(n) = (T(n−3) + 1) + 2
T(n) = T(n−3) + 3

Continue this process until you reach the base case T(1):
T(n) = T(n−k) + k

Step 2: Identify the Pattern


Notice that when k = n−1, you reach the base case T(1):
T(n) = T(1) + (n−1)
Step 3: Solve for the Pattern
Given T(1) = 1:
T(n) = 1 + (n−1)
T(n) = n
The closed-form solution for the recurrence T(n) = T(n−1) + 1 with T(1) = 1 is T(n) = n.

Master Method for Solving Recurrences


The Master Method provides a straightforward way to solve recurrences of the form:
T(n) = aT(n/b) + f(n)
where a ≥ 1, b > 1, and f(n) is an asymptotically positive function. This method is applicable to divide-and-conquer recurrences and helps determine the asymptotic behavior of T(n).
Master Theorem
The Master Theorem solves such recurrences by comparing f(n) with n^(log_b a). It involves three cases:
1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
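As a worked example, the Merge Sort recurrence from the next section, T(n) = 2T(n/2) + n, has a = 2 and b = 2, so n^(log_b a) = n^(log_2 2) = n. Since f(n) = n = Θ(n^(log_2 2)), Case 2 applies and T(n) = Θ(n log n), agreeing with the result derived earlier by the substitution method.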
Merge Sort
 Merge Sort is a divide and conquer algorithm that recursively divides a list into two
halves, sorts each half, and then merges them to produce the sorted result.
 It operates by repeatedly splitting the list until each sublist contains a single element, and then merging the sublists back together in sorted order.
Steps in Merge Sort:
1. Divide: The array is divided into two halves until the base case of an array with a
single element is reached.
2. Conquer: Recursively sort the two halves.
3. Combine: Merge the two sorted halves to produce a single sorted array.
Merge Sort Algorithm:
 The array is recursively divided into two halves.
 Once divided down to individual elements (considered sorted), the algorithm merges
these elements back together.
 While merging, elements from both halves are compared, and the smaller element is
added to the final sorted array.
Pseudo Code for Merge Sort:
MergeSort(arr[], left, right)
    if left >= right
        return
    middle = (left + right) / 2
    MergeSort(arr, left, middle)
    MergeSort(arr, middle + 1, right)
    Merge(arr, left, middle, right)

Merge(arr[], left, middle, right)
    Create temporary arrays leftArr[] and rightArr[]
    Copy data into leftArr[] and rightArr[]
    Merge the two arrays back into arr[], picking the smaller element each time
    Copy any remaining elements from leftArr[]
    Copy any remaining elements from rightArr[]
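The following is a minimal C sketch of this pseudo code; the helper names (mergeSort, merge) and the heap-allocated temporary arrays are illustrative choices, not code from the original notes:

#include <stdio.h>
#include <stdlib.h>

// Merge the two sorted halves arr[left..middle] and arr[middle+1..right]
void merge(int arr[], int left, int middle, int right) {
    int n1 = middle - left + 1;
    int n2 = right - middle;
    int *leftArr = (int *)malloc(n1 * sizeof(int));
    int *rightArr = (int *)malloc(n2 * sizeof(int));

    for (int i = 0; i < n1; i++) leftArr[i] = arr[left + i];
    for (int j = 0; j < n2; j++) rightArr[j] = arr[middle + 1 + j];

    int i = 0, j = 0, k = left;
    // Pick the smaller element from either half; <= keeps the sort stable
    while (i < n1 && j < n2)
        arr[k++] = (leftArr[i] <= rightArr[j]) ? leftArr[i++] : rightArr[j++];
    // Copy any remaining elements from either half
    while (i < n1) arr[k++] = leftArr[i++];
    while (j < n2) arr[k++] = rightArr[j++];

    free(leftArr);
    free(rightArr);
}

void mergeSort(int arr[], int left, int right) {
    if (left >= right) return;
    int middle = left + (right - left) / 2; // avoids overflow for large indices
    mergeSort(arr, left, middle);
    mergeSort(arr, middle + 1, right);
    merge(arr, left, middle, right);
}

int main(void) {
    int arr[] = {38, 27, 43, 3, 9, 82, 10};
    int n = sizeof(arr) / sizeof(arr[0]);
    mergeSort(arr, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    printf("\n");
    return 0;
}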

Key Properties:
1. Time Complexity: O(n log n) – Merge Sort divides the list into halves at each level,
which takes log n steps, and each level requires O(n) operations for merging.
2. Space Complexity: O(n) – Due to the need for additional storage for the temporary
arrays created during merging.
3. Stability: Merge Sort is a stable sorting algorithm, meaning it preserves the relative
order of elements with equal keys.
4. Recursion: Merge Sort is a recursive algorithm, breaking the array into progressively
smaller sub-arrays.
Case Study: Application of Merge Sort
Scenario: Imagine an e-commerce company that processes thousands of customer orders each
day. These orders must be sorted by delivery time to ensure efficient dispatching and
delivery. Sorting the orders rapidly and efficiently is critical to maintaining customer
satisfaction and operational efficiency.
Problem: The company receives bulk orders at the start of each day. Each order contains
several attributes such as order ID, customer name, and most importantly, delivery time. To
optimize the delivery process, the orders need to be sorted by the delivery time before
dispatch.
Solution: The company uses Merge Sort to handle the sorting of orders, as it can efficiently
handle large datasets due to its O(n log n) time complexity.
Steps for Implementation:
1. Dividing the Orders: The list of orders is divided into two halves recursively until each
list contains only one order.
2. Sorting and Merging: The smaller sublists are then merged back together. While
merging, orders are compared based on delivery time, ensuring that the earlier
deliveries are processed first.
3. Result: The final merged list is a sorted list of orders by delivery time, ready for
dispatch.
Why Merge Sort was chosen:
 Efficient for large datasets: Merge Sort’s time complexity of O(n log n) is optimal for
sorting large volumes of orders.
 Stability: Merge Sort preserves the relative order of orders with the same delivery
time, which is crucial for maintaining consistent service.
 Predictable performance: Merge Sort does not degrade to O(n^2) like some other
algorithms, making it a reliable choice for high-demand situations.
Outcome: The e-commerce company successfully improved its order processing efficiency,
leading to faster deliveries and higher customer satisfaction. The use of Merge Sort allowed
the company to sort large volumes of orders quickly and reliably, without compromising on
performance or accuracy.
Quick Sort
Quick Sort is an efficient, in-place, and comparison-based sorting algorithm. It follows the
divide-and-conquer paradigm by dividing the array into sub-arrays around a pivot element
and recursively sorting the sub-arrays. Quick Sort is known for its average-case time
complexity of O(n log n), making it one of the fastest sorting algorithms in practice,
especially for large datasets.
Key Concepts:
 Divide-and-Conquer: Quick Sort divides the array into smaller sub-arrays, solves each
sub-array, and then combines the results.
 Pivot: A key element around which the partitioning of the array is done. Various
strategies can be used to select the pivot:
o First element
o Last element
o Random element
o Median-of-three
 Partitioning: The process of rearranging the array such that elements smaller than the
pivot go to the left of it, and elements larger go to the right.
Quick Sort Algorithm Steps:
1. Choose a Pivot: Select a pivot element from the array.
2. Partitioning: Reorder the array so that all elements with values less than the pivot
come before the pivot, and all elements with values greater come after it. The pivot
element is now in its correct sorted position.
3. Recursion: Recursively apply the above steps to the sub-arrays of elements with
smaller and greater values than the pivot.
4. Base Case: When the size of the sub-array becomes 1 or 0, recursion stops.
Pseudo Code:
def quick_sort(arr, low, high):
    if low < high:
        # pi is partitioning index, arr[pi] is now at the right place
        pi = partition(arr, low, high)

        # Recursively sort elements before and after partition
        quick_sort(arr, low, pi - 1)
        quick_sort(arr, pi + 1, high)

def partition(arr, low, high):
    pivot = arr[high]  # Select the pivot as the last element
    i = low - 1        # Index of smaller element

    for j in range(low, high):
        if arr[j] < pivot:
            i = i + 1
            arr[i], arr[j] = arr[j], arr[i]  # Swap elements

    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # Swap pivot element into place
    return i + 1
Time Complexity:
 Best Case: O(n log n), occurs when the partitioning is balanced.
 Average Case: O(n log n), due to the average behavior of partitioning.
 Worst Case: O(n^2), occurs when the pivot selection is poor (e.g., already sorted or
reverse-sorted data) and results in highly unbalanced partitions.
Space Complexity:
 In-place Sorting: Uses O(log n) additional space for the recursion stack.
Optimizations:
 Randomized Pivot Selection: Randomly selecting a pivot minimizes the chances of encountering the worst case (see the sketch after this list).
 Median-of-Three Partitioning: Choosing the median of the first, middle, and last
elements as the pivot improves performance on certain datasets.
 Switching to Insertion Sort: For small sub-arrays (10 elements or fewer), switching to
insertion sort can reduce overhead and improve performance.
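A minimal C sketch of the randomized pivot optimization above (same last-element partition scheme as the pseudo code; rand() is from <stdlib.h> and would be seeded once with srand() in a real program):

#include <stdlib.h> // rand()

// Last-element partition, as in the pseudo code above
static int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
        }
    }
    int t = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = t;
    return i + 1;
}

// Swap a uniformly random element into the pivot slot before partitioning,
// so already-sorted or reverse-sorted inputs no longer force the O(n^2)
// worst case with any regularity
static int randomized_partition(int arr[], int low, int high) {
    int r = low + rand() % (high - low + 1);
    int t = arr[r]; arr[r] = arr[high]; arr[high] = t;
    return partition(arr, low, high);
}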

Heap Sort
Heap Sort is a popular comparison-based sorting algorithm that uses the binary heap data
structure. It works by building a max-heap (or min-heap) from the input data and then
repeatedly extracting the maximum (or minimum) element to sort the array. Heap Sort has a
time complexity of O(n log n) and is an efficient in-place sorting algorithm.

Binary Heap
A binary heap is a complete binary tree where:
 Max-Heap: The value of each node is greater than or equal to the values of its
children, with the largest element at the root.
 Min-Heap: The value of each node is smaller than or equal to the values of its
children, with the smallest element at the root.
Heap Sort uses the max-heap property to sort elements in ascending order (or min-heap for
descending order).
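Binary heaps are stored implicitly in an array: the root sits at index 0, and for the node at index i the left child is at 2i + 1, the right child at 2i + 2, and the parent at (i − 1) / 2 (integer division). For example, the array [90, 15, 10, 7, 12] is a valid max-heap, since 90 ≥ 15 and 90 ≥ 10, while 15 ≥ 7 and 15 ≥ 12. These index formulas are exactly the ones used by the heapify pseudocode below.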
Heap Sort Algorithm Steps
1. Build a Max-Heap: Transform the input array into a max-heap using the heapify
process.
2. Extract Elements: Remove the largest element (root of the heap), swap it with the last
item in the heap, and reduce the size of the heap.
3. Heapify the Root: Rebalance the heap by performing heapify on the root.
4. Repeat: Continue this process until the entire array is sorted.

Heapify Process
Heapify is the process of ensuring that the subtree rooted at a given node maintains the heap
property. In a max-heap, this means making sure that a parent node is larger than both of its
children, swapping nodes if necessary, and recursively applying this process to affected
subtrees.

Heapify Pseudocode:
heapify(arr[], n, i):
    largest = i        // Initialize largest as root
    left = 2*i + 1     // Left child
    right = 2*i + 2    // Right child

    // If left child is larger than root
    if left < n and arr[left] > arr[largest]:
        largest = left

    // If right child is larger than the largest so far
    if right < n and arr[right] > arr[largest]:
        largest = right

    // If largest is not root
    if largest != i:
        swap arr[i] and arr[largest]
        heapify(arr, n, largest)
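Putting the pieces together, the following is a compact C sketch of the full Heap Sort built on this heapify routine (an illustrative implementation consistent with the steps above, not code from the original notes):

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

// Sift the element at index i down until the subtree rooted at i
// satisfies the max-heap property; n is the current heap size
static void heapify(int arr[], int n, int i) {
    int largest = i;
    int left = 2 * i + 1, right = 2 * i + 2;
    if (left < n && arr[left] > arr[largest]) largest = left;
    if (right < n && arr[right] > arr[largest]) largest = right;
    if (largest != i) {
        swap(&arr[i], &arr[largest]);
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n) {
    // Build a max-heap by heapifying every internal node, bottom-up
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    // Repeatedly move the maximum (root) to the end and shrink the heap
    for (int i = n - 1; i > 0; i--) {
        swap(&arr[0], &arr[i]);
        heapify(arr, i, 0);
    }
}

int main(void) {
    int arr[] = {12, 11, 13, 5, 6, 7};
    int n = sizeof(arr) / sizeof(arr[0]);
    heapSort(arr, n);
    for (int i = 0; i < n; i++) printf("%d ", arr[i]);
    printf("\n");
    return 0;
}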
Time Complexity
 Building the Heap: O(n)
 Heapify Process: O(log n) for each element
 Overall Time Complexity: O(n log n)

Space Complexity
Heap Sort is an in-place sorting algorithm, which means it requires a constant space
overhead, resulting in O(1) additional space.
Advantages
 In-place sorting (O(1) space complexity).
 Consistent O(n log n) time complexity.
 No recursive function calls are required (unlike Merge Sort).
Disadvantages
 Not a stable sort (relative order of equal elements may change).
 Performance can be slower compared to algorithms like Quick Sort for smaller
datasets due to the complexity of the heapify process.
Case Study: Applying Heap Sort in Task Scheduling

Background
A software development team faces the challenge of efficiently scheduling a set of tasks with
varying priorities. Each task has a priority value, and higher priority tasks must be executed
first. The team aims to minimize idle time while ensuring that tasks are completed in the
correct order.

Problem
The team has a list of tasks, each represented as a tuple (task_name, priority_value). They
need an algorithm that can efficiently order tasks so that the highest-priority task is executed
first. Additionally, the process of updating priorities and reordering tasks should be efficient.

Solution: Heap Sort


Heap Sort is well-suited for this problem because of its ability to quickly order elements
based on priority. The team can model the tasks as a max-heap, where the root of the heap is
always the task with the highest priority. The following steps outline the solution:
1. Building the Heap: The team constructs a max-heap using the priority values of the
tasks. Each task is added to the heap in O(n) time, ensuring that the highest-priority
task is always at the top.
2. Extracting and Executing Tasks: The team repeatedly removes the root of the heap
(the highest-priority task), executes it, and replaces it with the last task in the heap.
They then apply heapify to restore the max-heap property in O(log n) time.
3. Rebalancing the Heap: After each task is completed, if the priorities of remaining
tasks change (e.g., due to dynamic scheduling), the heap can be updated using the
heapify process to ensure tasks remain sorted.

Shell Sort
Shell Sort is an in-place comparison-based sorting algorithm that generalizes the Insertion
Sort algorithm by allowing the exchange of items that are far apart. It is named after its
inventor, Donald Shell, who introduced it in 1959. Shell Sort improves the efficiency of
Insertion Sort by breaking the array into smaller subarrays and then sorting these subarrays.
Working of Shell Sort:
 Gap Sequence: Shell Sort starts by sorting pairs of elements far apart from each other,
then progressively reducing the gap between elements to be compared. A common
gap sequence is to halve the gap size each time, but other sequences like Knuth’s
sequence can also be used.
 Subarray Sorting: The main idea is that the elements that are far apart can be
swapped, leading to faster elimination of disorder. Once the gap becomes 1, the
algorithm performs a regular insertion sort on the entire array, but at this point, the
array is already partially sorted, making this pass very efficient.
Algorithm Steps:
1. Initialize a gap sequence: Start with a gap larger than 1, typically half the length of
the list, and then reduce the gap in subsequent passes.
2. Sort the elements using Insertion Sort: For each gap, perform a gapped insertion sort.
3. Repeat until the gap becomes 1: When the gap is reduced to 1, the list is fully sorted.
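As a small trace, consider sorting [12, 34, 54, 2, 3] with gaps 2 and then 1: the gap-2 pass performs insertion sort on the elements two positions apart and yields [3, 2, 12, 34, 54]; the final gap-1 pass is an ordinary insertion sort on this nearly sorted array and produces [2, 3, 12, 34, 54] with only one more swap.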
Time Complexity:
 The time complexity of Shell Sort depends heavily on the gap sequence used. The
average time complexity is typically better than O(n²), but for some gap sequences, it
can approach O(n log n). The worst-case time complexity is O(n²) for the gap
sequence proposed by Shell.
Advantages of Shell Sort:
 Adaptive: Performs well when the input is already partially sorted.
 In-place: Does not require any additional memory.
 Versatile: Works well for small or medium-sized datasets.
Disadvantages of Shell Sort:
 The performance of Shell Sort heavily depends on the choice of gap sequence, which
can be tricky to optimize.

Case Study: Shell Sort in Practice


Consider a warehouse inventory system where the goal is to sort product IDs for quick
retrieval. A simple insertion sort might be too slow for larger inventories. By using Shell
Sort, the system can sort IDs more efficiently by first sorting distant elements and then
refining the order with closer comparisons.
For instance, if there are 10,000 product IDs, Shell Sort can start by sorting elements that are
5,000 positions apart, then 2,500 apart, and so on. This reduces the number of swaps needed
in later stages, making the sorting process significantly faster than if the system had used
traditional insertion sort.

Pseudo Code: Shell Sort

ShellSort(array, n)
    // array: the array to be sorted
    // n: the number of elements in the array

    // Initialize the gap
    gap = n / 2

    // Reduce the gap until it becomes 0
    while gap > 0
        // Perform insertion sort for this gap
        for i = gap to n - 1
            // Store the current element in a temporary variable
            temp = array[i]
            j = i

            // Shift elements to find the correct position for temp
            while j >= gap and array[j - gap] > temp
                array[j] = array[j - gap]
                j = j - gap

            // Place temp in the correct position
            array[j] = temp

        // Update the gap value
        gap = gap / 2

End ShellSort
Sorting in Linear Time
Sorting in linear time refers to algorithms that sort data with a time complexity of O(n),
where n is the number of elements. These algorithms are efficient for specific types of data
and constraints. Common linear time sorting algorithms include Counting Sort, Radix Sort,
and Bucket Sort.

Key Linear Time Sorting Algorithms:


1. Counting Sort:
o Concept: Counting Sort works by counting the number of occurrences of each
distinct element and using these counts to place elements in their correct
position.
o Time Complexity: O(n + k), where n is the number of elements and k is the
range of the input values.
o Space Complexity: O(k) for the count array.
o Use Case: Suitable for integers within a small range. Not suitable for large
ranges or non-integer data.
2. Radix Sort:
o Concept: Radix Sort processes integers digit by digit, starting from the least
significant digit to the most significant digit. It uses Counting Sort as a
subroutine to sort the digits.
o Time Complexity: O(d * (n + k)), where d is the number of digits, n is the
number of elements, and k is the base of the number system (usually 10 for
decimal).
o Space Complexity: O(n + k).
o Use Case: Suitable for integers and fixed-length strings.
3. Bucket Sort:
o Concept: Bucket Sort distributes elements into buckets based on a range of
values, sorts the buckets individually, and then concatenates the sorted
buckets.
o Time Complexity: O(n + k), where k is the number of buckets.
o Space Complexity: O(n + k).
o Use Case: Suitable for uniformly distributed data within a range.

Case Study: Sorting Exam Scores


Consider a scenario where a school needs to sort exam scores of 1,000 students ranging from
0 to 100. Using a comparison-based sorting algorithm would be inefficient for such a large
dataset. Instead, Counting Sort is a perfect choice because the range of scores (0 to 100) is
relatively small compared to the number of students.
By using Counting Sort, the school can sort the scores efficiently in linear time, ensuring that
students receive their results quickly and accurately.

Counting Sort
Counting Sort is a non-comparison-based sorting algorithm that operates on the principle of
counting the occurrences of each unique value in the input array. It is particularly efficient
when the range of input values is known and relatively small compared to the number of
elements.

How Counting Sort Works


1. Find the Range: Determine the minimum and maximum values in the input array.
2. Count Occurrences: Create a count array where each index represents a value from the input array, and the value at each index represents the number of occurrences of that value.
3. Accumulate Counts: Modify the count array such that each element at index i stores the sum of counts up to index i. This step helps in placing the elements in their correct positions.
4. Build the Output Array: Place the elements from the input array into their correct positions in the output array using the accumulated counts.
5. Copy the Output Array: Copy the sorted elements from the output array back to the input array.
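As a small worked example, sorting A = [4, 2, 2, 8, 3] with maximum value k = 8: the counts for the values 0 through 8 are [0, 0, 2, 1, 1, 0, 0, 0, 1]; after accumulation they become [0, 0, 2, 3, 4, 4, 4, 4, 5], meaning for instance that the values up to 3 occupy the first three output slots. Walking A from right to left and placing each element at position C[A[i]] − 1 yields the sorted output B = [2, 2, 3, 4, 8].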

Pseudo Code
CountingSort(A, B, k)
Input: Array A of size n, output array B of size n, maximum value k
Output: Array B sorted in non-decreasing order

// Step 1: Initialize the count array


Create an array C of size k + 1 and initialize all elements to 0

// Step 2: Count occurrences


for i = 0 to length(A) - 1 do
C[A[i]] = C[A[i]] + 1

// Step 3: Accumulate counts


for i = 1 to k do
C[i] = C[i] + C[i - 1]

// Step 4: Place elements in the correct position


for i = length(A) - 1 to 0 do
B[C[A[i]] - 1] = A[i]
C[A[i]] = C[A[i]] - 1

// Step 5: Copy the output array to the input array


for i = 0 to length(A) - 1 do
A[i] = B[i]

Radix Sort
Radix Sort is a non-comparative integer sorting algorithm that sorts numbers by processing
individual digits. It is particularly efficient when dealing with large numbers or datasets with
a fixed range of digits. Radix Sort processes numbers from the least significant digit (LSD) to
the most significant digit (MSD) or vice versa.

Key Concepts
Radix: The base of the number system. For example, base-10 for decimal numbers.
Stable Sort: Radix Sort uses a stable sorting algorithm (often Counting Sort) to sort digits,
which ensures that the relative order of elements with the same digit is preserved.
Steps of Radix Sort
1. Find Maximum Value: Determine the maximum number to find the number of digits in the largest number.
2. Iterate Over Each Digit: Sort the numbers based on each digit, starting from the least significant digit to the most significant digit (LSD to MSD) or vice versa.
3. Use a Stable Sorting Algorithm: Apply a stable sort (like Counting Sort) to each digit to ensure that digits are sorted correctly while maintaining the relative order of numbers with the same digit.

Example
Let's sort the following array of integers: [170, 45, 75, 90, 802, 24, 2, 66]

Step-by-Step Execution:

1. Find the Maximum Value: the maximum value is 802, which has 3 digits, so three passes are needed.

2. Sort by Each Digit:
Units place (LSD): using Counting Sort on the units place gives [170, 90, 802, 2, 24, 45, 75, 66]
Tens place: using Counting Sort on the tens place gives [802, 2, 24, 45, 66, 170, 75, 90]
Hundreds place: using Counting Sort on the hundreds place gives [2, 24, 45, 66, 75, 90, 170, 802]

Pseudo Code

RADIX_SORT(A, n)
    // A is the array of integers to be sorted
    // n is the number of elements in A

    max_value = FIND_MAX_VALUE(A, n)
    exp = 1   // Initialize the exponent for the LSD (1 for the units place)

    // Loop while the maximum value still has digits at this place
    WHILE max_value / exp > 0
        // Perform Counting Sort based on the digit represented by exp
        COUNTING_SORT_BY_DIGIT(A, n, exp)
        // Move to the next digit place
        exp = exp * 10

// Function to find the maximum value in the array
FIND_MAX_VALUE(A, n)
    max_value = A[0]
    FOR i = 1 TO n - 1
        IF A[i] > max_value
            max_value = A[i]
    RETURN max_value

// Function to perform Counting Sort based on the digit place given by exp
COUNTING_SORT_BY_DIGIT(A, n, exp)
    // Create a count array and an output array
    count = ARRAY_OF_SIZE(10)   // Count array for digits 0-9
    output = ARRAY_OF_SIZE(n)   // Output array

    // Initialize count array with 0
    FOR i = 0 TO 9
        count[i] = 0

    // Store count of occurrences of each digit in count array
    FOR i = 0 TO n - 1
        index = (A[i] / exp) % 10
        count[index] = count[index] + 1

    // Change count[i] so that count[i] contains the position of this digit in output[]
    FOR i = 1 TO 9
        count[i] = count[i] + count[i - 1]

    // Build the output array
    FOR i = n - 1 DOWNTO 0
        index = (A[i] / exp) % 10
        output[count[index] - 1] = A[i]
        count[index] = count[index] - 1

    // Copy the sorted elements from output array back to A[]
    FOR i = 0 TO n - 1
        A[i] = output[i]

Time Complexity: O(d * (n + k)), where d is the number of digits, n is the number of
elements, and k is the base of the number system.

Space Complexity: O(n + k), where k is the base of the number system.

Bucket Sort
Concept: Bucket Sort is a distribution sort that divides the elements into several buckets based on their value range, sorts each bucket individually (often with Insertion Sort), and then concatenates the sorted buckets. It is particularly efficient when the input is uniformly distributed over a range.
Steps of Bucket Sort
1. Initialization: Create k empty buckets. The number of buckets is usually determined
based on the range of input values and their distribution.
2. Distribution: Distribute the input elements into these buckets. Each bucket
corresponds to a specific range of values.
3. Sorting Buckets: Sort each bucket individually. This can be done using any sorting
algorithm, but Insertion Sort is commonly used for its simplicity when the bucket size
is small.
4. Concatenation: Merge the sorted buckets to form the final sorted array.
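As a small worked example (see the pseudo code below), distributing [29, 25, 3, 49, 9, 37] into 5 buckets with max = 49 using index = (value × 5) / 50 puts 3 and 9 in bucket 0, 25 and 29 in bucket 2, 37 in bucket 3, and 49 in bucket 4. Sorting each bucket and concatenating gives [3, 9, 25, 29, 37, 49].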
Pseudo Code

Procedure BucketSort(array, n):
    Input:
        array - the array of elements to be sorted
        n - the number of elements in the array

    1. Find the maximum value in the array

    2. Initialize an array of empty buckets (buckets)
       - The number of buckets should be chosen based on the range of the data and the desired granularity
       - Each bucket is usually represented as a list or an array

    3. Distribute the elements into the buckets
       For each element in array:
           - Calculate the index of the bucket for this element
           - Insert the element into the corresponding bucket

    4. Sort each bucket
       For each bucket in buckets:
           - Sort the bucket using a suitable sorting algorithm (e.g., Insertion Sort)
           - This sorting is efficient for small lists

    5. Concatenate the sorted buckets
       Initialize an empty array for the sorted output
       For each bucket in buckets:
           - Append the sorted elements of the bucket to the sorted output array

    6. Copy the sorted output array back to the original array
       For each index from 0 to n-1:
           - Set array[index] = sortedOutput[index]
Time Complexity
 Best Case: O(n + k), where n is the number of elements and k is the number of
buckets. This happens when the elements are uniformly distributed.
 Average Case: O(n + k), but performance depends on the distribution of the input.
 Worst Case: O(n^2) if the elements are not uniformly distributed and many elements
fall into a single bucket.
Space Complexity
 Space Complexity: O(n + k) for the buckets and the output.
IMPLEMENTATION OF ALGORITHMS IN C LANGUAGE
C Code: Shell Sort
#include <stdio.h>
#include <stdlib.h>

// Function to perform Shell Sort
void shellSort(int arr[], int n) {
    // Start with a large gap, then reduce the gap
    for (int gap = n / 2; gap > 0; gap /= 2) {
        // Perform gapped insertion sort
        for (int i = gap; i < n; i++) {
            int temp = arr[i];
            int j;
            // Shift earlier gap-sorted elements up until the correct location for arr[i] is found
            for (j = i; j >= gap && arr[j - gap] > temp; j -= gap) {
                arr[j] = arr[j - gap];
            }
            // Place temp (the original arr[i]) in its correct location
            arr[j] = temp;
        }
    }
}

// Function to print an array
void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

// Main function to accept user input and perform Shell Sort
int main() {
    int n;

    // Prompt the user to enter the size of the array
    printf("Enter the number of elements: ");
    if (scanf("%d", &n) != 1 || n <= 0) {
        printf("Invalid input. Please enter a positive integer.\n");
        return 1;
    }

    // Dynamically allocate memory for the array
    int *arr = (int *)malloc(n * sizeof(int));

    // Check if memory allocation was successful
    if (arr == NULL) {
        printf("Memory allocation failed.\n");
        return 1;
    }

    // Prompt the user to enter the array elements
    printf("Enter %d integers:\n", n);
    for (int i = 0; i < n; i++) {
        while (1) {
            printf("Element %d: ", i + 1);
            if (scanf("%d", &arr[i]) == 1) {
                break;
            } else {
                printf("Invalid input. Please enter an integer.\n");
                while (getchar() != '\n'); // Clear invalid input from buffer
            }
        }
    }

    printf("Array before sorting:\n");
    printArray(arr, n);

    // Perform Shell Sort
    shellSort(arr, n);

    printf("Array after sorting:\n");
    printArray(arr, n);

    // Free the dynamically allocated memory
    free(arr);

    return 0;
}
C Code: Counting Sort:

#include <stdio.h>
#include <stdlib.h>

// Function to perform Counting Sort
void countingSort(int arr[], int n, int range) {
    // Create a count array and initialize it with zeros
    int *count = (int *)calloc(range + 1, sizeof(int));
    int *output = (int *)malloc(n * sizeof(int));

    // Check if memory allocation was successful
    if (count == NULL || output == NULL) {
        printf("Memory allocation failed.\n");
        free(count);
        free(output);
        exit(1);
    }

    // Count the occurrences of each element
    for (int i = 0; i < n; i++) {
        count[arr[i]]++;
    }

    // Calculate cumulative count
    for (int i = 1; i <= range; i++) {
        count[i] += count[i - 1];
    }

    // Build the output array (iterate backwards to keep the sort stable)
    for (int i = n - 1; i >= 0; i--) {
        output[count[arr[i]] - 1] = arr[i];
        count[arr[i]]--;
    }

    // Copy the output array to the original array
    for (int i = 0; i < n; i++) {
        arr[i] = output[i];
    }

    // Free dynamically allocated memory
    free(count);
    free(output);
}

// Function to print an array
void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

// Main function to accept user input and perform Counting Sort
int main() {
    int n, max_value;

    // Prompt the user to enter the size of the array
    printf("Enter the number of elements: ");
    if (scanf("%d", &n) != 1 || n <= 0) {
        printf("Invalid input. Please enter a positive integer.\n");
        return 1;
    }

    // Prompt the user to enter the maximum value in the array
    printf("Enter the maximum value in the array: ");
    if (scanf("%d", &max_value) != 1 || max_value <= 0) {
        printf("Invalid input. Please enter a positive integer.\n");
        return 1;
    }

    // Dynamically allocate memory for the array
    int *arr = (int *)malloc(n * sizeof(int));

    // Check if memory allocation was successful
    if (arr == NULL) {
        printf("Memory allocation failed.\n");
        return 1;
    }

    // Prompt the user to enter the array elements
    printf("Enter %d integers (between 0 and %d):\n", n, max_value);
    for (int i = 0; i < n; i++) {
        while (1) {
            printf("Element %d: ", i + 1);
            if (scanf("%d", &arr[i]) == 1 && arr[i] >= 0 && arr[i] <= max_value) {
                break;
            } else {
                printf("Invalid input. Please enter an integer between 0 and %d.\n", max_value);
                while (getchar() != '\n'); // Clear invalid input from buffer
            }
        }
    }

    printf("Array before sorting:\n");
    printArray(arr, n);

    // Perform Counting Sort
    countingSort(arr, n, max_value);

    printf("Array after sorting:\n");
    printArray(arr, n);

    // Free the dynamically allocated memory
    free(arr);

    return 0;
}

Explanation of the Code:


1. Counting Sort Function:
 Initialization: count array is used to store the count of each element, and
output array is used to store the sorted elements.
 Counting: Count occurrences of each element.
 Cumulative Count: Modify the count array to store the cumulative count.
 Building Output: Place elements into the correct position in the output array.
 Copy Output: Copy the sorted elements from output back to the original arr.
2. Dynamic Memory Allocation:
 Use malloc to allocate memory for the array and calloc for the count array.
3. Input Validation:
 Ensure valid integer inputs for array elements and maximum value. Clear the
input buffer if invalid data is entered.
C Code: Radix Sort
#include <stdio.h>
#include <stdlib.h>

// Function to perform Counting Sort based on a specific digit
void countingSort(int arr[], int n, int exp) {
    int *output = (int *)malloc(n * sizeof(int));
    int count[10] = {0};

    // Count occurrences of each digit
    for (int i = 0; i < n; i++) {
        count[(arr[i] / exp) % 10]++;
    }

    // Update count array to hold actual positions
    for (int i = 1; i < 10; i++) {
        count[i] += count[i - 1];
    }

    // Build the output array (backwards, to keep the sort stable)
    for (int i = n - 1; i >= 0; i--) {
        output[count[(arr[i] / exp) % 10] - 1] = arr[i];
        count[(arr[i] / exp) % 10]--;
    }

    // Copy the output array to arr[]
    for (int i = 0; i < n; i++) {
        arr[i] = output[i];
    }

    // Free dynamically allocated memory
    free(output);
}

// Function to perform Radix Sort
void radixSort(int arr[], int n) {
    // Find the maximum number to determine the number of digits
    int max = arr[0];
    for (int i = 1; i < n; i++) {
        if (arr[i] > max) {
            max = arr[i];
        }
    }

    // Perform Counting Sort for each digit
    for (int exp = 1; max / exp > 0; exp *= 10) {
        countingSort(arr, n, exp);
    }
}

// Function to print an array
void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

// Main function to accept user input and perform Radix Sort
int main() {
    int n;

    // Prompt the user to enter the size of the array
    printf("Enter the number of elements: ");
    if (scanf("%d", &n) != 1 || n <= 0) {
        printf("Invalid input. Please enter a positive integer.\n");
        return 1;
    }

    // Dynamically allocate memory for the array
    int *arr = (int *)malloc(n * sizeof(int));

    // Check if memory allocation was successful
    if (arr == NULL) {
        printf("Memory allocation failed.\n");
        return 1;
    }

    // Prompt the user to enter the array elements
    printf("Enter %d integers:\n", n);
    for (int i = 0; i < n; i++) {
        while (1) {
            printf("Element %d: ", i + 1);
            if (scanf("%d", &arr[i]) == 1 && arr[i] >= 0) {
                break;
            } else {
                printf("Invalid input. Please enter a non-negative integer.\n");
                while (getchar() != '\n'); // Clear invalid input from buffer
            }
        }
    }

    printf("Array before sorting:\n");
    printArray(arr, n);

    // Perform Radix Sort
    radixSort(arr, n);

    printf("Array after sorting:\n");
    printArray(arr, n);

    // Free the dynamically allocated memory
    free(arr);

    return 0;
}
C Code: Bucket Sort
#include <stdio.h>
#include <stdlib.h>

// Function to perform Insertion Sort on a bucket
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

// Function to perform Bucket Sort
void bucketSort(int arr[], int n, int numBuckets) {
    if (n <= 0) return;

    // Find the maximum value to determine the bucket range
    int max = arr[0];
    for (int i = 1; i < n; i++) {
        if (arr[i] > max) {
            max = arr[i];
        }
    }

    // Create buckets
    int **buckets = (int **)malloc(numBuckets * sizeof(int *));
    int *bucketSizes = (int *)calloc(numBuckets, sizeof(int));

    // Check if memory allocation was successful
    if (buckets == NULL || bucketSizes == NULL) {
        printf("Memory allocation failed.\n");
        free(buckets);
        free(bucketSizes);
        exit(1);
    }

    for (int i = 0; i < numBuckets; i++) {
        buckets[i] = (int *)malloc(n * sizeof(int));
        if (buckets[i] == NULL) {
            printf("Memory allocation failed.\n");
            for (int j = 0; j < i; j++) free(buckets[j]);
            free(buckets);
            free(bucketSizes);
            exit(1);
        }
    }

    // Distribute elements into buckets
    for (int i = 0; i < n; i++) {
        int index = (arr[i] * numBuckets) / (max + 1);
        buckets[index][bucketSizes[index]++] = arr[i];
    }

    // Sort each bucket and concatenate
    int k = 0;
    for (int i = 0; i < numBuckets; i++) {
        if (bucketSizes[i] > 0) {
            insertionSort(buckets[i], bucketSizes[i]);
            for (int j = 0; j < bucketSizes[i]; j++) {
                arr[k++] = buckets[i][j];
            }
        }
        free(buckets[i]);
    }

    // Free dynamically allocated memory
    free(buckets);
    free(bucketSizes);
}

// Function to print an array
void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

// Main function to accept user input and perform Bucket Sort
int main() {
    int n, numBuckets;

    // Prompt the user to enter the size of the array
    printf("Enter the number of elements: ");
    if (scanf("%d", &n) != 1 || n <= 0) {
        printf("Invalid input. Please enter a positive integer.\n");
        return 1;
    }

    // Prompt the user to enter the number of buckets
    printf("Enter the number of buckets: ");
    if (scanf("%d", &numBuckets) != 1 || numBuckets <= 0) {
        printf("Invalid input. Please enter a positive integer.\n");
        return 1;
    }

    // Dynamically allocate memory for the array
    int *arr = (int *)malloc(n * sizeof(int));

    // Check if memory allocation was successful
    if (arr == NULL) {
        printf("Memory allocation failed.\n");
        return 1;
    }

    // Prompt the user to enter the array elements
    printf("Enter %d integers:\n", n);
    for (int i = 0; i < n; i++) {
        while (1) {
            printf("Element %d: ", i + 1);
            if (scanf("%d", &arr[i]) == 1 && arr[i] >= 0) {
                break;
            } else {
                printf("Invalid input. Please enter a non-negative integer.\n");
                while (getchar() != '\n'); // Clear invalid input from buffer
            }
        }
    }

    printf("Array before sorting:\n");
    printArray(arr, n);

    // Perform Bucket Sort
    bucketSort(arr, n, numBuckets);

    printf("Array after sorting:\n");
    printArray(arr, n);

    // Free the dynamically allocated memory
    free(arr);

    return 0;
}
C Code: Max Heap
#include <stdio.h>
#include <stdlib.h>

int capacity = 10; // Initial capacity for heap
int size = 0;      // Current size of heap

// Function to swap two elements
void swap(int *a, int *b) {
    int temp = *a;
    *a = *b;
    *b = temp;
}

// Function to heapify down a subtree rooted at index i
void heapifyDown(int arr[], int i) {
    int largest = i;        // Initialize largest as root
    int left = 2 * i + 1;   // Left child
    int right = 2 * i + 2;  // Right child

    // Check if left child exists and is larger than root
    if (left < size && arr[left] > arr[largest])
        largest = left;

    // Check if right child exists and is larger than the largest so far
    if (right < size && arr[right] > arr[largest])
        largest = right;

    // If largest is not root, swap and continue heapifying
    if (largest != i) {
        swap(&arr[i], &arr[largest]);
        heapifyDown(arr, largest);
    }
}

// Function to heapify up for insertion
void heapifyUp(int arr[], int i) {
    int parent = (i - 1) / 2;
    if (i && arr[i] > arr[parent]) {
        swap(&arr[i], &arr[parent]);
        heapifyUp(arr, parent);
    }
}

// Function to insert a new element into the heap.
// Returns the (possibly reallocated) array pointer; the caller must
// store it, because realloc may move the block.
int *insert(int arr[], int value) {
    if (size == capacity) {
        // Double the array size if capacity is reached
        capacity *= 2;
        int *tmp = (int *)realloc(arr, capacity * sizeof(int));
        if (tmp == NULL) {
            printf("Memory allocation failed.\n");
            exit(1);
        }
        arr = tmp;
    }

    // Insert the new value at the end of the heap
    arr[size] = value;
    size++;

    // Heapify up to maintain max heap property
    heapifyUp(arr, size - 1);
    return arr;
}

// Function to delete the root element (maximum element) from the heap
void deleteRoot(int arr[]) {
    if (size <= 0) {
        printf("Heap is empty!\n");
        return;
    }

    // Replace root with the last element
    arr[0] = arr[size - 1];
    size--;

    // Heapify down to maintain max heap property
    heapifyDown(arr, 0);
}

// Function to print the heap
void printHeap(int arr[]) {
    if (size == 0) {
        printf("Heap is empty!\n");
        return;
    }
    for (int i = 0; i < size; i++) {
        printf("%d ", arr[i]);
    }
    printf("\n");
}

// Menu-driven program to perform heap operations
int main() {
    int *arr = (int *)malloc(capacity * sizeof(int));
    int choice, value;

    while (1) {
        printf("\nMax Heap Operations:\n");
        printf("1. Insert\n");
        printf("2. Delete Root\n");
        printf("3. Print Heap\n");
        printf("4. Exit\n");
        printf("Enter your choice: ");
        scanf("%d", &choice);

        switch (choice) {
        case 1:
            printf("Enter value to insert: ");
            scanf("%d", &value);
            arr = insert(arr, value); // insert may realloc, so reassign
            printf("Value inserted.\n");
            break;
        case 2:
            deleteRoot(arr);
            printf("Root deleted.\n");
            break;
        case 3:
            printf("Current Heap: ");
            printHeap(arr);
            break;
        case 4:
            free(arr);
            exit(0);
        default:
            printf("Invalid choice! Please try again.\n");
        }
    }

    return 0;
}

Explanation
Heap Operations:
 Insertion: Adds a new element to the heap by placing it at the end of the heap and
then heapifying up to maintain the max-heap property.
 Deletion: Removes the root element (maximum value in the heap) by replacing it
with the last element and heapifying down to maintain the max-heap property.

Heapify Functions:
 heapifyUp(): Used after insertion to maintain the heap property by comparing the
newly inserted element with its parent and swapping if necessary.
 heapifyDown(): Used after deletion to restore the heap property by comparing
the current node with its children and swapping it with the largest child if
necessary.
Menu Options:
The user is prompted to choose from the following operations:
 Insert: Allows the user to insert a new value into the heap.
 Delete Root: Removes the maximum element (root) from the heap.
 Print Heap: Displays the current state of the heap.
 Exit: Exits the program.
