Unit 5 Sorting

The document provides an overview of common sorting and searching algorithms (Bubble Sort, Quick Sort, Selection Sort, Heap Sort, Insertion Sort, Shell Sort, Merge Sort, Radix Sort, and Binary Search), together with hashing and the applications of data structures. For each algorithm it describes the working mechanism, advantages and disadvantages, and time and space complexities, highlighting practical use cases and limitations.


Bubble Sort Algorithm


Bubble Sort is the simplest sorting algorithm that works by repeatedly
swapping the adjacent elements if they are in the wrong order. This algorithm
is not suitable for large data sets as its average and worst-case time
complexity are quite high.
o We sort the array using multiple passes. After the first pass, the maximum element moves to the end (its correct position). In the same way, after the second pass, the second largest element moves to the second-last position, and so on.
o In every pass, we process only those elements that have not already moved to their correct positions. After k passes, the largest k elements must have been moved to the last k positions.
o In a pass, we consider the remaining elements, compare all adjacent pairs, and swap them whenever a larger element comes before a smaller one. Repeating this places the largest of the remaining elements at its correct position.

Advantages of Bubble Sort:


o Bubble sort is easy to understand and implement.
o It does not require any additional memory space.
o It is a stable sorting algorithm, meaning that elements with the same key value maintain their relative order in the sorted output.

Disadvantages of Bubble Sort:


o Bubble sort has a time complexity of O(n²), which makes it very slow for large data sets.
o Bubble sort is a comparison-based sorting algorithm, which means that it requires a comparison operator to determine the relative order of elements in the input data set. This can limit the efficiency of the algorithm in certain cases.

Algorithm
In the algorithm given below, suppose arr is an array of n elements. The
assumed swap function in the algorithm will swap the values of given array
elements.

begin BubbleSort(arr)
    for pass = 1 to n-1
        for i = 0 to n-pass-1
            if arr[i] > arr[i+1]
                swap(arr[i], arr[i+1])
            end if
        end for
    end for
    return arr
end BubbleSort
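
To complement the pseudocode, here is a small C sketch of bubble sort (the function and variable names and the sample values are illustrative, not taken from the text). It also includes the "optimized" variant mentioned later, which stops early when a pass performs no swaps:

#include <stdio.h>

/* Illustrative bubble sort: repeatedly compare adjacent elements
   and swap them when they are out of order. */
void bubble_sort(int arr[], int n) {
    for (int pass = 0; pass < n - 1; pass++) {
        int swapped = 0;                 /* flag used by the optimized variant */
        for (int i = 0; i < n - pass - 1; i++) {
            if (arr[i] > arr[i + 1]) {   /* adjacent pair out of order */
                int tmp = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = tmp;
                swapped = 1;
            }
        }
        if (!swapped)                    /* no swap in this pass: already sorted */
            break;
    }
}

int main(void) {
    int a[] = {13, 32, 26, 35, 10};
    int n = sizeof(a) / sizeof(a[0]);
    bubble_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);             /* prints: 10 13 26 32 35 */
    printf("\n");
    return 0;
}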

Working of Bubble sort Algorithm


Now, let's see the working of Bubble sort Algorithm.

To understand the working of the bubble sort algorithm, let's take an unsorted array. We are taking a short array, since the complexity of bubble sort is O(n²) and a longer example would take many passes.

Let the elements of the array be -

13, 32, 26, 35, 10

First Pass
Sorting will start from the initial two elements. Let us compare them to check which one is greater.

Here, 32 is greater than 13 (32 > 13), so these two elements are already in order. Now, compare 32 with 26.

Here, 26 is smaller than 32. So, swapping is required. After swapping, the new array will look like -

Now, compare 32 and 35.

Here, 35 is greater than 32. So, there is no swapping required as they are already
sorted.
Now, the comparison will be in between 35 and 10.

Here, 10 is smaller than 35, so these two are not in order and swapping is required. Now, we reach the end of the array. After the first pass, the array will be -

Now, move to the second iteration.

Second Pass
The same process will be followed for second iteration.

Here, 10 is smaller than 32. So, swapping is required. After swapping, the array will
be -

Now, move to the third iteration.

Third Pass
The same process will be followed for third iteration.
Here, 10 is smaller than 26. So, swapping is required. After swapping, the array will

be -

Now, move to the fourth iteration.

Fourth pass
Similarly, after the fourth iteration, the array will be -

Hence, there is no swapping required, so the array is completely sorted.

Bubble sort complexity


Now, let's see the time complexity of bubble sort in the best case, average case, and
worst case. We will also see the space complexity of bubble sort.

1. Time Complexity
Case Time Complexity

Best Case O(n)

Average Case O(n²)

Worst Case O(n²)

o Best Case Complexity - It occurs when there is no sorting required, i.e.


the array is already sorted. The best-case time complexity of bubble
sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly ascending and not properly descending. The average case time complexity of bubble sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That means suppose you have to sort the array elements in ascending order, but its elements are in descending order. The worst-case time complexity of bubble sort is O(n²).

2. Space Complexity
Space Complexity O(1)

Stable YES

o The space complexity of bubble sort is O(1). It is because, in bubble sort, only a single extra variable is required for swapping.
o The space complexity of optimized bubble sort is also O(1). It uses two extra variables (one for swapping and one flag that records whether any swap occurred in a pass), which is still a constant amount of space.

Quick Sort

QuickSort is a sorting algorithm based on the Divide and Conquer that picks
an element as a pivot and partitions the given array around the picked pivot
by placing the pivot in its correct position in the sorted array.
How does QuickSort Algorithm work?
QuickSort works on the principle of divide and conquer, breaking
down the problem into smaller sub-problems.
There are mainly three steps in the algorithm:
1. Choose a Pivot: Select an element from the array as the pivot.
The choice of pivot can vary (e.g., first element, last element,
random element, or median).
2. Partition the Array: Rearrange the array around the pivot. After
partitioning, all elements smaller than the pivot will be on its left,
and all elements greater than the pivot will be on its right. The
pivot is then in its correct position, and we obtain the index of the
pivot.
3. Recursively Call: Recursively apply the same process to the two
partitioned sub-arrays (left and right of the pivot).
4. Base Case: The recursion stops when there is only one element
left in the sub-array, as a single element is already sorted.
Here’s a basic overview of how the QuickSort algorithm works.

Choice of Pivot
There are many different choices for picking pivots.
o Always pick the first (or last) element as the pivot. The implementation below picks the last element as the pivot. The problem with this approach is that it ends up in the worst case when the array is already sorted.
o Pick a random element as the pivot. This is a preferred approach because there is no input pattern for which the worst case systematically happens.
o Pick the median element as the pivot. This is an ideal approach in terms of time complexity, as the median can be found in linear time and the partition function will always divide the input array into two halves. But it is slower on average because median finding has high constant factors.

Choosing the pivot


Picking a good pivot is necessary for a fast implementation of quicksort. However, it is not trivial to determine a good pivot in advance. Some of the ways of choosing a pivot are as follows -

o Pivot can be random, i.e. select a random element of the given array as the pivot.
o Pivot can be either the rightmost element or the leftmost element of the given array.
o Select the median as the pivot element.
Algorithm:

QUICKSORT (array A, start, end)
{
    if (start < end)
    {
        p = partition(A, start, end)
        QUICKSORT (A, start, p - 1)
        QUICKSORT (A, p + 1, end)
    }
}
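
The pseudocode leaves partition() unspecified. Below is a hedged C sketch using the widely used Lomuto partition scheme, which picks the last element as the pivot; note that the step-by-step walkthrough that follows instead uses a two-pointer scheme with the leftmost element as the pivot, so the intermediate states differ even though the final result is the same. Names and sample values are illustrative.

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: place the pivot (last element) at its final
   position and return that index. */
static int partition(int a[], int start, int end) {
    int pivot = a[end];
    int i = start - 1;                 /* boundary of the "smaller than pivot" region */
    for (int j = start; j < end; j++) {
        if (a[j] < pivot)
            swap(&a[++i], &a[j]);
    }
    swap(&a[i + 1], &a[end]);
    return i + 1;
}

void quicksort(int a[], int start, int end) {
    if (start < end) {
        int p = partition(a, start, end);
        quicksort(a, start, p - 1);    /* sort elements left of the pivot  */
        quicksort(a, p + 1, end);      /* sort elements right of the pivot */
    }
}

int main(void) {
    int a[] = {24, 9, 29, 14, 19, 27};
    int n = sizeof(a) / sizeof(a[0]);
    quicksort(a, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   /* 9 14 19 24 27 29 */
    printf("\n");
    return 0;
}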

Working of Quick Sort Algorithm


Now, let's see the working of the Quicksort Algorithm.

To understand the working of quick sort, let's take an unsorted array. It will make the concept more clear and understandable.

Let the elements of the array be -

24, 9, 29, 14, 19, 27

In the given array, we consider the leftmost element as the pivot. So, in this case, a[left] = 24, a[right] = 27 and a[pivot] = 24.

Since, pivot is at left, so algorithm starts from right and move towards left.

Now, a[pivot] < a[right], so algorithm moves forward one position towards left, i.e. -

Now, a[left] = 24, a[right] = 19, and a[pivot] = 24.

Because, a[pivot] > a[right], so, algorithm will swap a[pivot] with a[right], and pivot
moves to right, as -
Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since, pivot is at right, so algorithm
starts from left and moves to right.

As a[pivot] > a[left], so algorithm moves one position to right as -

Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], so algorithm
moves one position to right as -

Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], so, swap
a[pivot] and a[left], now pivot is at left, i.e. -
Since, pivot is at left, so algorithm starts from right, and move to left. Now, a[left] =
24, a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], so algorithm moves one
position to left, as -

Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], so, swap
a[pivot] and a[right], now pivot is at right, i.e. -

Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. Pivot is at right, so the algorithm
starts from left and move to right.

Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So, pivot, left and right are pointing
the same element. It represents the termination of procedure.
Element 24, which is the pivot element is placed at its exact position.

Elements to the right of element 24 are greater than it, and the elements to its left are smaller than it.

Now, in a similar manner, quick sort algorithm is separately applied to the left and
right sub-arrays. After sorting gets done, the array will be -

Quicksort complexity
Now, let's see the time complexity of quicksort in best case, average case, and in
worst case. We will also see the space complexity of quicksort.

1. Time Complexity
Case Time Complexity

Best Case O(n*logn)

Average Case O(n*logn)

Worst Case O(n²)

o Best Case Complexity - In Quicksort, the best-case occurs when the


pivot element is the middle element or near to the middle element. The
best-case time complexity of quicksort is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in
jumbled order that is not properly ascending and not properly
descending. The average case time complexity of quicksort
is O(n*logn).
o Worst Case Complexity - In quick sort, the worst case occurs when the pivot element is either the greatest or the smallest element. Suppose the pivot element is always the last element of the array; then the worst case occurs when the given array is already sorted in ascending or descending order. The worst-case time complexity of quicksort is O(n²).
Though the worst-case complexity of quicksort is higher than that of other sorting algorithms such as Merge sort and Heap sort, it is still faster in practice. The worst case rarely occurs, because the choice of pivot can be varied (for example, chosen at random), so the worst case can largely be avoided by choosing the right pivot element.

2. Space Complexity
Space Complexity O(logn) average, O(n) worst case

Stable NO

o The space complexity of quicksort is O(logn) on average, due to the recursion stack; in the worst case (highly unbalanced partitions) the recursion depth, and therefore the space used, can grow to O(n).

SELECTION SORT
The selection sort algorithm sorts an array by repeatedly finding the minimum element (assuming ascending order) from the unsorted part and placing it at the beginning of that part. The algorithm is very simple and popular. However, it is not appropriate for large data sets. In selection sort, the smallest value among the unsorted elements of the array is selected in every pass and moved to its appropriate position in the array. It is an in-place comparison sorting algorithm. In this algorithm, the array is divided into two parts: the first is the sorted part, and the other is the unsorted part. Initially, the sorted part of the array is empty, and the unsorted part is the given array. The sorted part is placed at the left, while the unsorted part is placed at the right.

In selection sort, the smallest element is selected from the unsorted array and placed at the first position. After that, the second smallest element is selected and placed at the second position. The process continues until the array is entirely sorted.

The average and worst-case complexity of selection sort is O(n²), where n is the number of items. Due to this, it is not suitable for large data sets.

Selection sort is generally used when -

o A small array is to be sorted


o Swapping cost doesn't matter
o It is compulsory to check all elements
Now, let's see the algorithm of selection sort.

Algorithm
1. SELECTION SORT(arr, n)
2.
3. Step 1: Repeat Steps 2 and 3 for i = 0 to n-1
4. Step 2: CALL SMALLEST(arr, i, n, pos)
5. Step 3: SWAP arr[i] with arr[pos]
6. [END OF LOOP]
7. Step 4: EXIT
8.
9. SMALLEST (arr, i, n, pos)
10. Step 1: [INITIALIZE] SET SMALL = arr[i]
11. Step 2: [INITIALIZE] SET pos = i
12. Step 3: Repeat for j = i+1 to n-1
13. if (SMALL > arr[j])
14. SET SMALL = arr[j]
15. SET pos = j
16. [END OF if]
17. [END OF LOOP]
18. Step 4: RETURN pos
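
As a concrete counterpart to the SELECTION SORT / SMALLEST pseudocode above, here is a minimal C sketch (0-based indexing; names and sample values are illustrative):

#include <stdio.h>

/* Illustrative selection sort: in each pass, find the smallest element
   of the unsorted part and swap it to the front of that part. */
void selection_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int pos = i;                      /* index of the smallest seen so far */
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[pos])
                pos = j;
        }
        if (pos != i) {                   /* SWAP arr[i] with arr[pos] */
            int tmp = arr[i];
            arr[i] = arr[pos];
            arr[pos] = tmp;
        }
    }
}

int main(void) {
    int a[] = {12, 29, 25, 8, 32, 17, 40};
    int n = sizeof(a) / sizeof(a[0]);
    selection_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}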

Working of Selection sort Algorithm


Now, let's see the working of the Selection sort Algorithm.

To understand the working of the Selection sort algorithm, let's take an unsorted
array. It will be easier to understand the Selection sort via an example.

Let the elements of the array be -

Now, for the first position in the sorted array, the entire array is to be scanned
sequentially.

At present, 12 is stored at the first position. After searching the entire array, it is found that 8 is the smallest value.

So, swap 12 with 8. After the first iteration, 8 will appear at the first position in the
sorted array.
For the second position, where 29 is currently stored, we again sequentially scan the rest of the items of the unsorted array. After scanning, we find that 12 is the second lowest element in the array and that it should appear at the second position.

Now, swap 29 with 12. After the second iteration, 12 will appear at the second
position in the sorted array. So, after two iterations, the two smallest values are
placed at the beginning in a sorted way.

The same process is applied to the rest of the array elements. Now, we are showing
a pictorial representation of the entire sorting process.

Now, the array is completely sorted.

Selection sort complexity


Now, let's see the time complexity of selection sort in best case, average case, and
in worst case. We will also see the space complexity of the selection sort.

1. Time Complexity
Case Time Complexity

Best Case O(n²)

Average Case O(n²)

Worst Case O(n²)

o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already sorted. Even then, selection sort scans the whole unsorted part in every pass, so the best-case time complexity of selection sort is still O(n²).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly ascending and not properly descending. The average case time complexity of selection sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That means suppose you have to sort the array elements in ascending order, but its elements are in descending order. The worst-case time complexity of selection sort is O(n²).
2. Space Complexity
Space Complexity O(1)

Stable NO

o The space complexity of selection sort is O(1). It is because, in selection sort, only a single extra variable is required for swapping.
o Note that the typical swap-based implementation of selection sort is not stable, because a long-range swap can carry an element past other elements with equal keys.

Heap Sort Algorithm


In this article, we will discuss the Heapsort Algorithm. Heap sort processes the
elements by creating the min-heap or max-heap using the elements of the given
array. Min-heap or max-heap represents the ordering of array in which the root
element represents the minimum or maximum element of the array.

Heap sort basically performs two main operations -

o Build a heap H, using the elements of the array.
o Repeatedly delete the root element of the heap formed in the first phase, restoring the heap with the remaining elements after each deletion.
Before knowing more about the heap sort, let's first see a brief description of Heap.

What is a heap?
A heap is a complete binary tree, and a binary tree is a tree in which each node can have at most two children. A complete binary tree is a binary tree in which all the levels except the last level (i.e., the leaf level) are completely filled, and all the nodes in the last level are as far left as possible.

What is heap sort?


Heapsort is a popular and efficient sorting algorithm. The concept of heap sort is to
eliminate the elements one by one from the heap part of the list, and then insert
them into the sorted part of the list.

Heapsort is an in-place sorting algorithm.

Now, let's see the algorithm of heap sort.

Algorithm
1. HeapSort(arr)
2. BuildMaxHeap(arr)
3. for i = length(arr) to 2
4. swap arr[1] with arr[i]
5. heap_size[arr] = heap_size[arr] - 1
6. MaxHeapify(arr,1)
7. End
BuildMaxHeap(arr)

1. BuildMaxHeap(arr)
2. heap_size(arr) = length(arr)
3. for i = length(arr)/2 to 1
4. MaxHeapify(arr,i)
5. End
MaxHeapify(arr,i)

1. MaxHeapify(arr,i)
2. L = left(i)
3. R = right(i)
4. if L <= heap_size[arr] and arr[L] > arr[i]
5. largest = L
6. else
7. largest = i
8. if R <= heap_size[arr] and arr[R] > arr[largest]
9. largest = R
10. if largest != i
11. swap arr[i] with arr[largest]
12. MaxHeapify(arr,largest)
13. End
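
The pseudocode above assumes 1-based heap indexing. The following C sketch implements the same two phases (build a max heap, then repeatedly move the root to the end) using 0-based indexing; names and sample values are illustrative:

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Sift the element at index i down until the subtree rooted at i
   satisfies the max-heap property (0-based indexing). */
static void max_heapify(int a[], int heap_size, int i) {
    int largest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;
    if (left < heap_size && a[left] > a[largest])   largest = left;
    if (right < heap_size && a[right] > a[largest]) largest = right;
    if (largest != i) {
        swap(&a[i], &a[largest]);
        max_heapify(a, heap_size, largest);
    }
}

void heap_sort(int a[], int n) {
    /* Phase 1: build a max heap from the unsorted array. */
    for (int i = n / 2 - 1; i >= 0; i--)
        max_heapify(a, n, i);
    /* Phase 2: repeatedly move the root (maximum) to the end
       and restore the heap on the remaining elements. */
    for (int i = n - 1; i > 0; i--) {
        swap(&a[0], &a[i]);
        max_heapify(a, i, 0);
    }
}

int main(void) {
    int a[] = {89, 81, 76, 22, 14, 9, 54, 11};
    int n = sizeof(a) / sizeof(a[0]);
    heap_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}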

Working of Heap sort Algorithm


Now, let's see the working of the Heapsort Algorithm.

In heap sort, basically, there are two phases involved in the sorting of elements. By
using the heap sort algorithm, they are as follows -

o The first step includes the creation of a heap by adjusting the elements of the array.
o After the creation of the heap, now remove the root element of the heap repeatedly by shifting it to the end of the array, and then restore the heap structure with the remaining elements.
Now let's see the working of heap sort in detail by using an example. To understand
it more clearly, let's take an unsorted array and try to sort it using heap sort. It will
make the explanation clearer and easier.

First, we have to construct a heap from the given array and convert it into max heap.
After converting the given heap into max heap, the array elements are -

Next, we have to delete the root element (89) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (11). After deleting the root element,
we again have to heapify it to convert it into max heap.

After swapping the array element 89 with 11, and converting the heap into max-
heap, the elements of array are -

In the next step, again, we have to delete the root element (81) from the max heap.
To delete this node, we have to swap it with the last node, i.e. (54). After deleting the
root element, we again have to heapify it to convert it into max heap.
After swapping the array element 81 with 54 and converting the heap into max-heap,
the elements of array are -

In the next step, we have to delete the root element (76) from the max heap again.
To delete this node, we have to swap it with the last node, i.e. (9). After deleting the
root element, we again have to heapify it to convert it into max heap.

After swapping the array element 76 with 9 and converting the heap into max-heap,
the elements of array are -

In the next step, again we have to delete the root element (54) from the max heap.
To delete this node, we have to swap it with the last node, i.e. (14). After deleting the
root element, we again have to heapify it to convert it into max heap.
After swapping the array element 54 with 14 and converting the heap into max-heap,
the elements of array are -

In the next step, again we have to delete the root element (22) from the max heap.
To delete this node, we have to swap it with the last node, i.e. (11). After deleting the
root element, we again have to heapify it to convert it into max heap.

After swapping the array element 22 with 11 and converting the heap into max-heap,
the elements of array are -

In the next step, again we have to delete the root element (14) from the max heap.
To delete this node, we have to swap it with the last node, i.e. (9). After deleting the
root element, we again have to heapify it to convert it into max heap.
After swapping the array element 14 with 9 and converting the heap into max-heap,
the elements of array are -

In the next step, again we have to delete the root element (11) from the max heap.
To delete this node, we have to swap it with the last node, i.e. (9). After deleting the
root element, we again have to heapify it to convert it into max heap.

After swapping the array element 11 with 9, the elements of array are -

Now, heap has only one element left. After deleting it, heap will be empty.

After completion of sorting, the array elements are -

Now, the array is completely sorted.


Heap sort complexity
Now, let's see the time complexity of Heap sort in the best case, average case, and
worst case. We will also see the space complexity of Heapsort.

1. Time Complexity
Case Time Complexity

Best Case O(n logn)

Average Case O(n log n)

Worst Case O(n log n)

o Best Case Complexity - It occurs when there is no sorting required, i.e.


the array is already sorted. The best-case time complexity of heap sort
is O(n logn).
o Average Case Complexity - It occurs when the array elements are in
jumbled order that is not properly ascending and not properly
descending. The average case time complexity of heap sort is O(n log
n).
o Worst Case Complexity - It occurs when the array elements are
required to be sorted in reverse order. That means suppose you have to
sort the array elements in ascending order, but its elements are in
descending order. The worst-case time complexity of heap sort is O(n
log n).
The time complexity of heap sort is O(n logn) in all three cases (best case, average case, and worst case), because each of the n deletions costs at most the height of the heap, and the height of a complete binary tree having n elements is logn.

2. Space Complexity
Space Complexity O(1)

Stable NO

o The space complexity of Heap sort is O(1).


Insertion Sort Algorithm
In this article, we will discuss the Insertion sort Algorithm. The working procedure of
insertion sort is also simple. This article will be very helpful and interesting to
students as they might face insertion sort as a question in their examinations. So, it
is important to discuss the topic.

Insertion sort works similar to the sorting of playing cards in hands. It is assumed
that the first card is already sorted in the card game, and then we select an unsorted
card. If the selected unsorted card is greater than the first card, it will be placed at
the right side; otherwise, it will be placed at the left side. Similarly, all unsorted cards
are taken and put in their exact place.

The same approach is applied in insertion sort. The idea behind insertion sort is to take one element at a time and insert it into its correct position within the already-sorted part of the array. Although it is simple to use, it is not appropriate for large data sets, as the time complexity of insertion sort in the average case and worst case is O(n²), where n is the number of items. Insertion sort is less efficient than other sorting algorithms like heap sort, quick sort, merge sort, etc.

Insertion sort has various advantages such as -

o Simple implementation
o Efficient for small data sets
o Adaptive, i.e., it is appropriate for data sets that are already
substantially sorted.
Now, let's see the algorithm of insertion sort.

Algorithm
The simple steps of achieving the insertion sort are listed as follows -

Step 1 - If the element is the first element, assume that it is already sorted. Return 1.

Step2 - Pick the next element, and store it separately in a key.

Step3 - Now, compare the key with all elements in the sorted array.

Step 4 - If the element in the sorted array is smaller than the current element, then
move to the next element. Else, shift greater elements in the array towards the right.

Step 5 - Insert the value.

Step 6 - Repeat until the array is sorted.
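
Following the steps listed above, a minimal C sketch of insertion sort might look like this (names are illustrative; the sample values match the walkthrough below):

#include <stdio.h>

/* Illustrative insertion sort: take each element (the "key") and
   shift larger elements of the sorted part one slot right until
   the key's correct position is found. */
void insertion_sort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {  /* shift greater elements right */
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;                 /* insert the key at its position */
    }
}

int main(void) {
    int a[] = {12, 31, 25, 8, 32, 17};
    int n = sizeof(a) / sizeof(a[0]);
    insertion_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   /* 8 12 17 25 31 32 */
    printf("\n");
    return 0;
}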


Working of Insertion sort Algorithm
Now, let's see the working of the insertion sort Algorithm.

To understand the working of the insertion sort algorithm, let's take an unsorted
array. It will be easier to understand the insertion sort via an example.

Let the elements of the array be -

12, 31, 25, 8, 32, 17

Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending
order. So, for now, 12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.

Here, 25 is smaller than 31. So, 31 is not at the correct position. Now, swap 31 with 25. Along with swapping, insertion sort will also check the element against all elements in the sorted array.

For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12.
Hence, the sorted array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next
elements that are 31 and 8.
Both 31 and 8 are not sorted. So, swap them.

After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items
that are 31 and 32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.


Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.

Insertion sort complexity


Now, let's see the time complexity of insertion sort in best case, average case, and in
worst case. We will also see the space complexity of insertion sort.

1. Time Complexity
Case Time Complexity

Best Case O(n)

Average Case O(n²)

Worst Case O(n²)

o Best Case Complexity - It occurs when there is no sorting required, i.e.


the array is already sorted. The best-case time complexity of insertion
sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not properly ascending and not properly descending. The average case time complexity of insertion sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That means suppose you have to sort the array elements in ascending order, but its elements are in descending order. The worst-case time complexity of insertion sort is O(n²).
2. Space Complexity
Space Complexity O(1)

Stable YES

o The space complexity of insertion sort is O(1). It is because, in insertion


sort, an extra variable is required for swapping.

Shell Sort Algorithm


In this article, we will discuss the shell sort algorithm. Shell sort is the generalization
of insertion sort, which overcomes the drawbacks of insertion sort by comparing
elements separated by a gap of several positions.

It is a sorting algorithm that is an extended version of insertion sort. Shell sort has
improved the average time complexity of insertion sort. As similar to insertion sort, it
is a comparison-based and in-place sorting algorithm. Shell sort is efficient for
medium-sized data sets.

In insertion sort, at a time, elements can be moved ahead by one position only. To
move an element to a far-away position, many movements are required that increase
the algorithm's execution time. But shell sort overcomes this drawback of insertion
sort. It allows the movement and swapping of far-away elements as well.

This algorithm first sorts the elements that are far away from each other, then it subsequently reduces the gap between them. This gap is called the interval. The interval can be calculated by using Knuth's formula given below -

h = 3 * h + 1
where 'h' is the interval, starting from an initial value of 1 (giving the gap sequence 1, 4, 13, 40, ...).
Now, let's see the algorithm of shell sort.

Algorithm
The simple steps of achieving the shell sort are listed as follows -

1. ShellSort(a, n) // 'a' is the given array, 'n' is the size of array


2. for (interval = n/2; interval > 0; interval /= 2)
3. for ( i = interval; i < n; i += 1)
4. temp = a[i];
5. for (j = i; j >= interval && a[j - interval] > temp; j -= interval)
6. a[j] = a[j - interval];
7. a[j] = temp;
8. End ShellSort
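
The pseudocode above is already close to C. A runnable sketch with the same n/2, n/4, ..., 1 gap sequence might look like this (names are illustrative; the sample values match the walkthrough below):

#include <stdio.h>

/* Illustrative shell sort using the gap sequence n/2, n/4, ..., 1,
   matching the pseudocode above. */
void shell_sort(int a[], int n) {
    for (int interval = n / 2; interval > 0; interval /= 2) {
        /* Gapped insertion sort for this interval. */
        for (int i = interval; i < n; i++) {
            int temp = a[i];
            int j;
            for (j = i; j >= interval && a[j - interval] > temp; j -= interval)
                a[j] = a[j - interval];
            a[j] = temp;
        }
    }
}

int main(void) {
    int a[] = {33, 31, 40, 8, 12, 17, 25, 42};
    int n = sizeof(a) / sizeof(a[0]);
    shell_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
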
Working of Shell sort Algorithm
Now, let's see the working of the shell sort Algorithm.

To understand the working of the shell sort algorithm, let's take an unsorted array. It
will be easier to understand the shell sort via an example.

Let the elements of the array be -

33, 31, 40, 8, 12, 17, 25, 42

We will use the original gap sequence of shell sort, i.e., N/2, N/4, ..., 1, as the intervals.

In the first loop, n is equal to 8 (size of the array), so the elements are lying at the
interval of 4 (n/2 = 4). Elements will be compared and swapped if they are not in
order.

Here, in the first loop, the element at the 0th position will be compared with the
element at 4th position. If the 0th element is greater, it will be swapped with the
element at 4th position. Otherwise, it remains the same. This process will continue for
the remaining elements.

At the interval of 4, the sublists are {33, 12}, {31, 17}, {40, 25}, {8, 42}.

Now, we have to compare the values in every sub-list. After comparing, we have to
swap them if required in the original array. After comparing and swapping, the
updated array will look as follows -
In the second loop, elements are lying at the interval of 2 (n/4 = 2), where n = 8.

Now, we are taking the interval of 2 to sort the rest of the array. With an interval of 2,
two sublists will be generated - {12, 25, 33, 40}, and {17, 8, 31, 42}.

Now, we again have to compare the values in every sub-list. After comparing, we
have to swap them if required in the original array. After comparing and swapping,
the updated array will look as follows -

In the third loop, elements are lying at the interval of 1 (n/8 = 1), where n = 8. At last,
we use the interval of value 1 to sort the rest of the array elements. In this step, shell
sort uses insertion sort to sort the array elements.
Now, the array is sorted in ascending order.

Shell sort complexity


Now, let's see the time complexity of Shell sort in the best case, average case, and
worst case. We will also see the space complexity of the Shell sort.

1. Time Complexity
Case Time Complexity

Best Case O(n*logn)

Average Case O(n*(logn)²)

Worst Case O(n²)

o Best Case Complexity - It occurs when there is no sorting required,


i.e., the array is already sorted. The best-case time complexity of Shell
sort is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in
jumbled order that is not properly ascending and not properly
descending. The average case time complexity of Shell sort
is O(n*logn).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse order. That means suppose you have to sort the array elements in ascending order, but its elements are in descending order. The worst-case time complexity of Shell sort is O(n²).

2. Space Complexity
Space Complexity O(1)

Stable NO

o The space complexity of Shell sort is O(1).

Merge Sort Algorithm


In this article, we will discuss the merge sort Algorithm. Merge sort is the sorting
technique that follows the divide and conquer approach. This article will be very
helpful and interesting to students as they might face merge sort as a question in
their examinations. In coding or technical interviews for software engineers, sorting
algorithms are widely asked. So, it is important to discuss the topic.

Merge sort is similar to the quick sort algorithm as it uses the divide and conquer approach to sort the elements. It is one of the most popular and efficient sorting algorithms. It divides the given list into two equal halves, calls itself for the two halves and then merges the two sorted halves. We have to define the merge() function to perform the merging.

The sub-lists are divided again and again into halves until each list can no longer be divided. Then we combine the pairs of one-element lists into two-element lists, sorting them in the process. The sorted two-element lists are merged into four-element lists, and so on, until we get the fully sorted list.

Now, let's see the algorithm of merge sort.


Algorithm
In the following algorithm, arr is the given array, beg is the starting element,
and end is the last element of the array.

1. MERGE_SORT(arr, beg, end)


2.
3. if beg < end
4. set mid = (beg + end)/2
5. MERGE_SORT(arr, beg, mid)
6. MERGE_SORT(arr, mid + 1, end)
7. MERGE (arr, beg, mid, end)
8. end of if
9.
10. END MERGE_SORT
The important part of the merge sort is the MERGE function. This function performs
the merging of two sorted sub-arrays that are A[beg…mid] and A[mid+1…end], to
build one sorted array A[beg…end]. So, the inputs of the MERGE function are A[],
beg, mid, and end.
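
A C sketch of MERGE_SORT and the MERGE function described above might look like the following (it uses a temporary buffer for merging; names are illustrative, and the sample values match the walkthrough below):

#include <stdio.h>

/* Merge the two sorted sub-arrays arr[beg..mid] and arr[mid+1..end]
   into one sorted range, using a temporary buffer. */
static void merge(int arr[], int beg, int mid, int end) {
    int temp[end - beg + 1];
    int i = beg, j = mid + 1, k = 0;
    while (i <= mid && j <= end)
        temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid) temp[k++] = arr[i++];   /* copy leftovers from the left half  */
    while (j <= end) temp[k++] = arr[j++];   /* copy leftovers from the right half */
    for (k = 0; k < end - beg + 1; k++)      /* copy the merged result back */
        arr[beg + k] = temp[k];
}

void merge_sort(int arr[], int beg, int end) {
    if (beg < end) {
        int mid = (beg + end) / 2;
        merge_sort(arr, beg, mid);       /* sort the left half  */
        merge_sort(arr, mid + 1, end);   /* sort the right half */
        merge(arr, beg, mid, end);       /* merge the two sorted halves */
    }
}

int main(void) {
    int a[] = {12, 31, 25, 8, 32, 17, 40, 42};
    int n = sizeof(a) / sizeof(a[0]);
    merge_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}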

Working of Merge sort Algorithm


Now, let's see the working of merge sort Algorithm.

To understand the working of the merge sort algorithm, let's take an unsorted array. It will be easier to understand merge sort via an example.

Let the elements of the array be -

12, 31, 25, 8, 32, 17, 40, 42

According to the merge sort, first divide the given array into two equal halves. Merge
sort keeps dividing the list into equal parts until it cannot be further divided.

As there are eight elements in the given array, so it is divided into two arrays of size
4.

Now, again divide these two arrays into halves. As they are of size 4, so divide them
into new arrays of size 2.
Now, again divide these arrays to get the atomic value that cannot be further divided.

Now, combine them in the same manner they were broken.

In combining, first compare the element of each array and then combine them into
another array in sorted order.

So, first compare 12 and 31, both are in sorted positions. Then compare 25 and 8,
and in the list of two values, put 8 first followed by 25. Then compare 32 and 17, sort
them and put 17 first followed by 32. After that, compare 40 and 42, and place them
sequentially.

In the next iteration of combining, we now compare the arrays with two data values and merge them into arrays of four values, in sorted order.

Now, there is a final merging of the arrays. After the final merging of above arrays,
the array will look like -

Now, the array is completely sorted.


Merge sort complexity
Now, let's see the time complexity of merge sort in best case, average case, and in
worst case. We will also see the space complexity of the merge sort.

1. Time Complexity
Case Time Complexity

Best Case O(n*logn)

Average Case O(n*logn)

Worst Case O(n*logn)

o Best Case Complexity - It occurs when there is no sorting required, i.e.


the array is already sorted. The best-case time complexity of merge sort
is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in
jumbled order that is not properly ascending and not properly
descending. The average case time complexity of merge sort
is O(n*logn).
o Worst Case Complexity - It occurs when the array elements are
required to be sorted in reverse order. That means suppose you have to
sort the array elements in ascending order, but its elements are in
descending order. The worst-case time complexity of merge sort
is O(n*logn).

2. Space Complexity
Space Complexity O(n)

Stable YES

o The space complexity of merge sort is O(n). It is because merge sort requires an additional auxiliary array of the same size as the input to merge the two sorted halves.

Radix Sort Algorithm


In this article, we will discuss the Radix sort Algorithm. Radix sort is a linear sorting algorithm that is used for integers. In radix sort, digit-by-digit sorting is performed, starting from the least significant digit and moving to the most significant digit.

The process of radix sort works similarly to sorting students' names alphabetically. In that case, there are 26 radixes (buckets), one for each letter of the English alphabet. In the first pass, the names of the students are grouped according to the ascending order of the first letter of their names. After that, in the second pass, their names are grouped according to the ascending order of the second letter of their names. The process continues until we get the sorted list.

Now, let's see the algorithm of Radix sort.

Algorithm
radixSort(arr)
    max = largest element in the given array
    d = number of digits in the largest element (max)
    create 10 buckets, one for each digit 0 - 9
    for i -> 0 to d-1
        sort the array elements using counting sort (or any stable sort)
        according to the digits at the ith place
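
A C sketch of this procedure, using counting sort as the stable per-digit sort, might look like this (names and sample values are illustrative; the largest sample value, 736, matches the walkthrough below):

#include <stdio.h>
#include <string.h>

/* Stable counting sort of arr[] by the digit at 'place'
   (place = 1 for units, 10 for tens, 100 for hundreds, ...). */
static void counting_sort_by_digit(int arr[], int n, int place) {
    int output[n];
    int count[10] = {0};
    for (int i = 0; i < n; i++)
        count[(arr[i] / place) % 10]++;
    for (int d = 1; d < 10; d++)                 /* prefix sums give final positions */
        count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--)             /* iterate backwards to keep it stable */
        output[--count[(arr[i] / place) % 10]] = arr[i];
    memcpy(arr, output, n * sizeof(int));
}

void radix_sort(int arr[], int n) {
    int max = arr[0];
    for (int i = 1; i < n; i++)                  /* find the largest element */
        if (arr[i] > max) max = arr[i];
    for (int place = 1; max / place > 0; place *= 10)
        counting_sort_by_digit(arr, n, place);   /* one pass per digit */
}

int main(void) {
    int a[] = {181, 289, 390, 121, 145, 736, 514, 212};
    int n = sizeof(a) / sizeof(a[0]);
    radix_sort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}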

Working of Radix sort Algorithm


Now, let's see the working of Radix sort Algorithm.

The steps used in the sorting of radix sort are listed as follows -

o First, we have to find the largest element (suppose max) from the given
array. Suppose 'x' be the number of digits in max. The 'x' is calculated
because we need to go through the significant places of all elements.
o After that, go through one by one each significant place. Here, we have
to use any stable sorting algorithm to sort the digits of each significant
place.
Now let's see the working of radix sort in detail by using an example. To understand
it more clearly, let's take an unsorted array and try to sort it using radix sort. It will
make the explanation clearer and easier.

In the given array, the largest element is 736, which has 3 digits. So, the loop will run up to three times (i.e., up to the hundreds place). That means three passes are required to sort the array.
Now, first sort the elements on the basis of unit place digits (i.e., x = 0). Here, we are
using the counting sort algorithm to sort the elements.

Pass 1:
In the first pass, the list is sorted on the basis of the digits at 0's place.

After the first pass, the array elements are -

Pass 2:
In this pass, the list is sorted on the basis of the next significant digits (i.e., digits at
10th place).
After the second pass, the array elements are -

Pass 3:
In this pass, the list is sorted on the basis of the next significant digits (i.e., digits at
100th place).
After the third pass, the array elements are -

Now, the array is sorted in ascending order.

Radix sort complexity


Now, let's see the time complexity of Radix sort in best case, average case, and
worst case. We will also see the space complexity of Radix sort.

1. Time Complexity
Case Time Complexity

Best Case Ω(n+k)

Average Case θ(nk)

Worst Case O(nk)

o Best Case Complexity - It occurs when there is no sorting required, i.e.


the array is already sorted. The best-case time complexity of Radix sort
is Ω(n+k).
o Average Case Complexity - It occurs when the array elements are in
jumbled order that is not properly ascending and not properly
descending. The average case time complexity of Radix sort is θ(nk).
o Worst Case Complexity - It occurs when the array elements are
required to be sorted in reverse order. That means suppose you have to
sort the array elements in ascending order, but its elements are in
descending order. The worst-case time complexity of Radix sort
is O(nk).
Radix sort is a non-comparative sorting algorithm. When the number of digits k is small compared to log n, its linear O(nk) running time is better than the O(n logn) of comparison-based sorting algorithms.
2. Space Complexity
Space Complexity O(n + k)

Stable YES

o The space complexity of Radix sort is O(n + k).

Binary Search Algorithm


In this article, we will discuss the Binary Search Algorithm. Searching is the process
of finding some particular element in the list. If the element is present in the list, then
the process is called successful, and the process returns the location of that
element. Otherwise, the search is called unsuccessful.

Linear Search and Binary Search are the two popular searching techniques. Here we
will discuss the Binary Search Algorithm.

Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an element in some list using the binary search technique, we must ensure that the list is sorted.

Binary search follows the divide and conquer approach in which the list is divided
into two halves, and the item is compared with the middle element of the list. If the
match is found then, the location of the middle element is returned. Otherwise, we
search into either of the halves depending upon the result produced through the
match.

NOTE: Binary search can be implemented on sorted array elements. If the


list elements are not arranged in a sorted manner, we have first to sort
them.
Now, let's see the algorithm of Binary Search.

Algorithm
1. Binary_Search(a, lower_bound, upper_bound, val)
   // 'a' is the given array, 'lower_bound' is the index of the first array element,
   // 'upper_bound' is the index of the last array element, 'val' is the value to search
2. Step 1: set beg = lower_bound, end = upper_bound, pos = - 1
3. Step 2: repeat steps 3 and 4 while beg <=end
4. Step 3: set mid = (beg + end)/2
5. Step 4: if a[mid] = val
6. set pos = mid
7. print pos
8. go to step 6
9. else if a[mid] > val
10. set end = mid - 1
11. else
12. set beg = mid + 1
13. [end of if]
14. [end of loop]
15. Step 5: if pos = -1
16. print "value is not present in the array"
17. [end of if]
18. Step 6: exit
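
An iterative C sketch of this algorithm might look like the following (names are illustrative; the sample array is an assumed sorted array containing the value 56 used in the walkthrough below):

#include <stdio.h>

/* Iterative binary search on a sorted array.
   Returns the index of val, or -1 if it is not present. */
int binary_search(const int a[], int n, int val) {
    int beg = 0, end = n - 1;
    while (beg <= end) {
        int mid = beg + (end - beg) / 2;   /* same as (beg+end)/2, but avoids overflow */
        if (a[mid] == val)
            return mid;
        else if (a[mid] > val)
            end = mid - 1;                 /* search the left half  */
        else
            beg = mid + 1;                 /* search the right half */
    }
    return -1;
}

int main(void) {
    int a[] = {10, 12, 24, 29, 39, 40, 51, 56, 69};
    int n = sizeof(a) / sizeof(a[0]);
    printf("%d\n", binary_search(a, n, 56));   /* prints the index of 56 */
    return 0;
}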

Working of Binary search


Now, let's see the working of the Binary Search Algorithm.

To understand the working of the Binary search algorithm, let's take a sorted array. It
will be easy to understand the working of Binary search with an example.

There are two methods to implement the binary search algorithm -

o Iterative method
o Recursive method
The recursive method of binary search follows the divide and conquer approach.

Let the elements of the array be -

Let the element to search is, K = 56

We have to use the below formula to calculate the mid of the array -

1. mid = (beg + end)/2


So, in the given array -

beg = 0

end = 8

mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.

The element at the mid index is compared with 56; if it does not match, the search interval is halved in the same way (beg or end is moved past mid and a new mid is computed) until the element to search is found. The algorithm then returns the index of the matched element.

Binary Search complexity


Now, let's see the time complexity of Binary search in the best case, average case,
and worst case. We will also see the space complexity of Binary search.

1. Time Complexity
Case Time Complexity

Best Case O(1)


Average Case O(logn)

Worst Case O(logn)

o Best Case Complexity - In Binary search, best case occurs when the
element to search is found in first comparison, i.e., when the first middle
element itself is the element to be searched. The best-case time
complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of
Binary search is O(logn).
o Worst Case Complexity - In Binary search, the worst case occurs,
when we have to keep reducing the search space till it has only one
element. The worst-case time complexity of Binary search is O(logn).

2. Space Complexity
Space Complexity O(1)

o The space complexity of binary search is O(1).

Hashing in Data Structure


Introduction to Hashing in Data Structure:
Hashing is a popular technique in computer science that involves mapping large data
sets to fixed-length values. It is a process of converting a data set of variable size
into a data set of a fixed size. The ability to perform efficient lookup operations
makes hashing an essential concept in data structures.

What is Hashing?
A hashing algorithm is used to convert an input (such as a string or integer) into a
fixed-size output (referred to as a hash code or hash value). The data is then stored
and retrieved using this hash value as an index in an array or hash table. The hash
function must be deterministic, which guarantees that it will always yield the same
result for a given input.

Hashing is commonly used to create a unique identifier for a piece of data, which can
be used to quickly look up that data in a large dataset. For example, a web browser
may use hashing to store website passwords securely. When a user enters their
password, the browser converts it into a hash value and compares it to the stored
hash value to authenticate the user.
What is a hash Key?
In the context of hashing, a hash key (also known as a hash value or hash code) is a
fixed-size numerical or alphanumeric representation generated by a hashing
algorithm. It is derived from the input data, such as a text string or a file, through a
process known as hashing.

Hashing involves applying a specific mathematical function to the input data, which
produces a unique hash key that is typically of fixed length, regardless of the size of
the input. The resulting hash key is essentially a digital fingerprint of the original
data.

The hash key serves several purposes. It is commonly used for data integrity
checks, as even a small change in the input data will produce a significantly different
hash key. Hash keys are also used for efficient data retrieval and storage in hash
tables or data structures, as they allow quick look-up and comparison operations.

How Hashing Works?


The process of hashing can be broken down into three steps:

o Input: The data to be hashed is input into the hashing algorithm.


o Hash Function: The hashing algorithm takes the input data and applies
a mathematical function to generate a fixed-size hash value. The hash
function should be designed so that different input values produce
different hash values, and small changes in the input produce large
changes in the output.
o Output: The hash value is returned, which is used as an index to store
or retrieve data in a data structure.

Hashing Algorithms:
There are numerous hashing algorithms, each with distinct advantages and
disadvantages. The most popular algorithms include the following:

o MD5: A widely used hashing algorithm that produces a 128-bit hash


value.
o SHA-1: A popular hashing algorithm that produces a 160-bit hash value.
o SHA-256: A more secure hashing algorithm that produces a 256-bit
hash value.
Hash Function:
Hash Function: A hash function is a type of mathematical operation that takes an
input (or key) and outputs a fixed-size result known as a hash code or hash value.
The hash function must always yield the same hash code for the same input in order to be deterministic. Additionally, a good hash function should, as far as possible, produce different hash codes for different inputs, i.e., it should distribute keys uniformly so that collisions are rare.

There are different types of hash functions, including:

o Division method:
This method involves dividing the key by the table size and taking the remainder as
the hash value. For example, if the table size is 10 and the key is 23, the hash value
would be 3 (23 % 10 = 3).

o Multiplication method:
This method involves multiplying the key by a constant A (0 < A < 1), taking the fractional part of the product, and scaling it by the table size. For example, if the key is 23, the table size is 10, and the constant is A = 0.618, the hash value would be floor(10 * frac(23 * 0.618)) = floor(10 * 0.214) = floor(2.14) = 2. (A small C sketch of the division and multiplication methods appears after this list.)

o Universal hashing:
This method involves using a random hash function from a family of hash functions.
This ensures that the hash function is not biased towards any particular input and is
resistant to attacks.
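
As referenced above, here is a small C sketch of the division and multiplication methods (TABLE_SIZE and the constant A = 0.618 are illustrative assumptions):

#include <stdio.h>
#include <math.h>

#define TABLE_SIZE 10

/* Division method: key modulo table size. */
unsigned int hash_division(unsigned int key) {
    return key % TABLE_SIZE;
}

/* Multiplication method: take the fractional part of key * A
   and scale it by the table size. */
unsigned int hash_multiplication(unsigned int key) {
    const double A = 0.618;                      /* constant in (0, 1) */
    double frac = key * A - floor(key * A);      /* fractional part of key * A */
    return (unsigned int)(TABLE_SIZE * frac);
}

int main(void) {
    printf("%u\n", hash_division(23));        /* 23 % 10 = 3, as in the text */
    printf("%u\n", hash_multiplication(23));  /* floor(10 * 0.214) = 2 */
    return 0;
}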
Collision Resolution
One of the main challenges in hashing is handling collisions, which occur when two
or more input values produce the same hash value. There are various techniques
used to resolve collisions, including:

o Chaining: In this technique, each hash table slot contains a linked list of
all the values that have the same hash value. This technique is simple
and easy to implement, but it can lead to poor performance when the
linked lists become too long.
o Open addressing: In this technique, when a collision occurs, the
algorithm searches for an empty slot in the hash table by probing
successive slots until an empty slot is found. This technique can be
more efficient than chaining when the load factor is low, but it can lead
to clustering and poor performance when the load factor is high.
o Double hashing: This is a variation of open addressing that uses a
second hash function to determine the next slot to probe when a
collision occurs. This technique can help to reduce clustering and
improve performance.

Example of Collision Resolution


Let's continue with our example of a hash table with a size of 5. We want to store the
key-value pairs "John: 123456" and "Mary: 987654" in the hash table. Both keys
produce the same hash code of 4, so a collision occurs.

We can use chaining to resolve the collision. We create a linked list at index 4 and
add the key-value pairs to the list. The hash table now looks like this:

0:

1:

2:

3:

4: John: 123456 -> Mary: 987654
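
A minimal C sketch of chaining might look like the following. The toy hash function (sum of character codes modulo the table size) is an illustrative assumption, but it happens to send both "John" and "Mary" to slot 4, reproducing the collision in the example above:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 5

/* One node in the chain stored at a table slot. */
struct entry {
    char key[32];
    int value;
    struct entry *next;
};

struct entry *table[TABLE_SIZE];   /* each slot is the head of a linked list */

/* Toy hash: sum of character codes modulo the table size (illustrative). */
static unsigned int hash(const char *key) {
    unsigned int h = 0;
    while (*key) h += (unsigned char)*key++;
    return h % TABLE_SIZE;
}

/* Insert by pushing a new node onto the front of the slot's chain. */
void put(const char *key, int value) {
    unsigned int i = hash(key);
    struct entry *e = malloc(sizeof *e);
    strncpy(e->key, key, sizeof e->key - 1);
    e->key[sizeof e->key - 1] = '\0';
    e->value = value;
    e->next = table[i];            /* colliding keys share the same list */
    table[i] = e;
}

/* Look a key up by walking the chain at its slot. */
int *get(const char *key) {
    for (struct entry *e = table[hash(key)]; e != NULL; e = e->next)
        if (strcmp(e->key, key) == 0)
            return &e->value;
    return NULL;
}

int main(void) {
    put("John", 123456);
    put("Mary", 987654);
    int *v = get("Mary");
    if (v) printf("Mary -> %d\n", *v);
    return 0;
}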

Hash Table:
A hash table is a data structure that stores data in an array. Typically, a size for the array is selected that is greater than the number of elements that need to fit in the hash table. A key is mapped to an index in the array using the hash function.

The hash function is used to locate the index where an element needs to be inserted in the hash table in order to add a new element. The element gets added at that index if there isn't a collision. If there is a collision, the collision resolution method is used to find the next available slot in the array.

The hash function is used to locate the index where the element is stored in order to retrieve it from the hash table. If the element is not found at that index, the collision resolution method is used to search for the element in the linked list (if chaining is used) or in the next available slots (if open addressing is used).

Hash Table Operations


There are several operations that can be performed on a hash table, including:

o Insertion: Inserting a new key-value pair into the hash table.


o Deletion: Removing a key-value pair from the hash table.
o Search: Searching for a key-value pair in the hash table.

Creating a Hash Table:


Hashing is frequently used to build hash tables, which are data structures that
enable quick data insertion, deletion, and retrieval. One or more key-value pairs can
be stored in each of the arrays of buckets that make up a hash table.

To create a hash table, we first need to define a hash function that maps each key to
a unique index in the array. A simple hash function might be to take the sum of the
ASCII values of the characters in the key and use the remainder when divided by the
size of the array. However, this simple hash function distributes keys poorly and can lead to many collisions (two keys that map to the same index).

To avoid collisions, we can use more advanced hash functions that produce a more
even distribution of hash values across the array. One popular algorithm is the djb2
hash function, which uses bitwise operations to generate a hash value:

unsigned long hash(char* str) {
    unsigned long hash = 5381;
    int c;

    while ((c = *str++)) {
        hash = ((hash << 5) + hash) + c;   /* hash * 33 + c */
    }

    return hash;
}
This hash function takes a string as input and returns an unsigned long integer hash
value. The function initializes a hash value of 5381 and then iterates over each
character in the string, using bitwise operations to generate a new hash value. The
final hash value is returned.
Application of Data Structure
Introduction:
Data structures are integral components of computer science and software
development, offering efficient ways to organize, store, and manipulate data. These
structures serve as the building blocks for designing algorithms and data storage
systems. From simple arrays to sophisticated tree structures and graphs, data
structures play a vital role in various domains, enhancing performance, scalability,
and overall system efficiency.

Arrays:
Arrays are collections of elements stored in contiguous memory locations. They
provide direct access to elements based on their indices. Arrays find applications in
numerous scenarios, including:

o Dynamic Programming: Arrays are extensively used in dynamic


programming to store intermediate results and optimize recursive
algorithms. Dynamic programming algorithms like the Fibonacci series,
matrix chain multiplication, and the knapsack problem rely on arrays to
store and retrieve previously calculated values efficiently.
o Searching and Sorting: Arrays provide a foundation for searching and
sorting algorithms. Common searching algorithms like binary search
and sorting algorithms like quicksort, mergesort, and heapsort utilize
arrays for efficient data manipulation.
o Implementing Other Data Structures: Arrays serve as the underlying
data structure for implementing more complex structures such as
stacks, queues, and hash tables.

Linked Lists:
Linked lists are dynamic data structures composed of nodes, where each node
contains data and a pointer to the next node. Linked lists are useful in scenarios that
involve frequent insertion and deletion of elements, such as:

o Memory Management: Linked lists play a vital role in memory


management systems. They enable efficient allocation and deallocation
of memory blocks by maintaining a linked structure that allows for easy
insertion and deletion.
o Implementing Other Data Structures: Linked lists are fundamental in
implementing other dynamic data structures such as stacks, queues,
and hash tables.
o Polynomial Manipulation: In algebraic calculations, linked lists are
used to represent and manipulate polynomials efficiently. Each node in
the linked list represents a term in the polynomial, with its coefficient
and exponent stored as data.
Stacks:
Stacks follow the Last-In-First-Out (LIFO) principle, where the last element inserted
is the first one to be removed. Stacks find applications in several areas, including:

o Expression Evaluation and Conversion: Stacks are extensively used


in evaluating and converting expressions. Infix to postfix conversion,
postfix evaluation, and balancing parentheses are common applications
of stacks in expression manipulation.
o Function Call Stack: Stacks are essential for managing function calls
in programming languages. When a function is called, the function's
local variables and return address are pushed onto the stack, allowing
for proper execution and return flow.
o Backtracking Algorithms: Backtracking algorithms, such as depth-first
search (DFS), rely on stacks to keep track of visited nodes and potential
paths. The stack stores the state information required to backtrack and
explore alternative paths.

Queues:
Queues adhere to the First-In-First-Out (FIFO) principle, where elements are
inserted at the rear and removed from the front. Queues have various applications,
including:

o Job Scheduling: Queues are used in operating systems and task


management systems for job scheduling. The first-in-first-out (FIFO)
nature of queues ensures fairness and proper execution order.
o Breadth-First Search (BFS) Algorithms: BFS algorithms explore
graphs in a level-by-level manner, making queues an ideal data
structure for maintaining the order of traversal.
o Printers' Job Management: In spooling systems, queues are
employed to manage print jobs, ensuring that they are processed in the
order they were received.

Trees:
Trees are hierarchical data structures consisting of nodes connected by edges. They
enable efficient searching, insertion, and deletion operations, and are utilized in
numerous applications:

o File Systems: File systems utilize tree structures to represent directory


hierarchies. Each node in the tree represents a directory, with child
nodes representing subdirectories and files.
o Database Indexing: Trees are extensively used in database indexing
for efficient searching and retrieval of records. B-tree and B+-tree
structures are commonly employed to organize and store large volumes
of data.
o Hierarchical Relationships: Trees are useful for representing
hierarchical relationships in organizations, XML, and JSON data. They
allow for efficient navigation and management of hierarchical data.
o Decision-Making Processes: Decision trees and game trees are
employed in decision-making processes, such as machine learning
algorithms and game AI, to model choices and outcomes.

Graphs:
Graphs are versatile data structures comprising vertices (nodes) interconnected by
edges. Graphs have broad applications in areas such as:

o Social Network Analysis: Graphs are used to model and analyze


social networks, enabling applications such as friend recommendations,
community detection, and influence analysis.
o Network Routing Algorithms: Graphs are essential in network routing
algorithms, determining the shortest or optimal path between nodes.
Dijkstra's algorithm and Bellman-Ford algorithm rely on graphs for
efficient routing.
o Web Page Ranking: Graph-based algorithms like Google's PageRank
employ graphs to rank web pages based on their importance and
connectivity within the web graph.
o Bioinformatics and Computational Biology: Graphs are utilized to
model and analyze biological networks, such as protein-protein
interaction networks and gene regulatory networks.

Hash Tables:
Hash tables (hash maps) use a hash function to store and retrieve data efficiently.
They find applications in a wide range of scenarios, including:

o Database Indexing and Searching: Hash tables provide fast retrieval


of data, making them suitable for indexing and searching in databases.
Hash functions distribute data evenly across the table, allowing for
efficient access.
o Caching Mechanisms: Hash tables are employed in caching
mechanisms to store frequently accessed data, reducing the need for
expensive computations or database queries.
o Symbol Tables: Compilers and interpreters utilize hash tables as
symbol tables to store identifiers, keywords, and their associated
attributes during the compilation and execution process.
o Key-Value Stores: Hash tables are the foundation for implementing
key-value stores, where data is stored and retrieved based on unique
keys.
Conclusion:
In conclusion, the applications of data structures are vast and critical in computer
science and software development. These structures serve as fundamental tools for
organizing, storing, and manipulating data efficiently.

By understanding the strengths and weaknesses of different data structures, developers can choose the most suitable one for specific tasks, leading to optimized algorithms, improved system performance, and enhanced data management.

Arrays find applications in sorting algorithms, dynamic programming, and implementing other data structures.

Linked lists excel in memory management, implementing dynamic structures, and polynomial manipulation.

Stacks are essential for expression evaluation, function call management, and backtracking algorithms.

Queues play a crucial role in job scheduling, breadth-first search algorithms, and print job management.

Trees are utilized in file systems, database indexing, representing hierarchical relationships, and decision-making processes.

Graphs are versatile structures used in social network analysis, network routing, web page ranking, and bioinformatics.

Hash tables offer fast retrieval in database indexing, caching mechanisms, symbol tables, and key-value stores.
