FDS Unit 3 Notes
Unit – III
Searching and Sorting
1. Searching
Linear search is a method for searching for an element in a collection of elements. In linear
search, each element of the collection is visited one by one in a sequential fashion to find the
desired element. Linear search is also known as sequential search.
It is widely used to search an element from the unordered list, i.e., the list in which items are
not sorted. The worst-case time complexity of linear search is O(n).
The steps used in the implementation of Linear Search are listed as follows -
o In each iteration of the loop, compare the search element with the current array element,
and -
o If the element matches, then return the index of the corresponding array
element.
o If the element does not match, then move to the next element.
o If there is no match or the search element is not present in the given array, return -1.
For example, suppose we are searching for a key K = 41 in an array. Start from the first element
and compare K with each element of the array. The value of K, i.e., 41, does not match the first
element of the array, so move to the next element, and repeat the same process until the element
is found. Once the element being searched is found, the algorithm returns the index of the
matched element.
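The steps above can be expressed as a short Python sketch. The function name linear_search and the example array are illustrative only, since the array used in the walkthrough appears in the original figure rather than in the text:

def linear_search(arr, key):
    # Visit each element in sequence and compare it with the key.
    for index, value in enumerate(arr):
        if value == key:
            return index          # match found: return its index
    return -1                     # key not present in the list

# Example usage with an assumed array containing K = 41:
print(linear_search([10, 15, 26, 41, 52], 41))   # prints 3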
Time Complexity:
Best Case: In the best case, the key might be present at the first index. So the best case
complexity is O(1).
Worst Case: In the worst case, the key might be present at the last index (the end opposite
to the one from which the search started) or not present at all. So, the worst-case complexity
is O(n), where n is the size of the list.
Space Complexity:
O(1), since apart from the variable used to iterate through the list, no other extra space is used.
Sentinel Search is a variation of the linear search algorithm that slightly optimizes
performance by reducing the number of comparisons in the search loop. It does this by placing
a special "sentinel" element at the end of the array, which guarantees that the search will always
terminate without needing to check for the end of the list inside the loop.
A sentinel is placed at the end of the list, which is usually the element you're searching
for.
The algorithm searches for the target by moving sequentially through the list.
Because of the sentinel, the search loop doesn't need to check if the end of the list is
reached—it will always find the sentinel, allowing it to stop without extra checks.
After finding the sentinel, the algorithm checks if the actual target was found in the
original list (and not just the sentinel).
lastElement = arr[n - 1]          // save the original last element
arr[n - 1] = target               // place the sentinel at the end
i = 0
while arr[i] != target:           // no end-of-list check needed inside the loop
    i = i + 1
arr[n - 1] = lastElement          // restore the original last element
if i < n - 1 or lastElement == target:
    return i                      // target found in the original list
else:
    return -1                     // only the sentinel matched; target not present
Time Complexity
Worst case: O(n) as the target is found at the last position or not found at all, so the
entire array is scanned.
Space Complexity
The space complexity is O(1) because the algorithm only uses a few extra variables
regardless of the size of the input list.
1. Append the target element (8) at the end of the array [3, 5, 1, 8, 2] as a sentinel. The array
becomes: [3, 5, 1, 8, 2, 8].
2. Initialize the loop variable i = 0.
3. Compare the element at index 0 with the target element (8). No match (3 ≠ 8).
4. Increment the loop variable i to 1 and compare the element at index 1 with the target
element (8). No match (5 ≠ 8).
5. Increment the loop variable i to 2 and compare the element at index 2 with the target
element (8). No match (1 ≠ 8).
6. Increment the loop variable i to 3 and compare the element at index 3 with the target
element (8). A match is found (8 = 8).
7. The search is successful, and the index (3) where the match was found is returned.
Binary search is a fast search algorithm with run-time complexity of Ο(log n). This search
algorithm works on the principle of divide and conquer, since it divides the array into half
before searching. For this algorithm to work properly, the data collection should be in the sorted
form.
Binary search looks for a particular key value by comparing it with the middle-most item of the
collection. If a match occurs, then the index of the item is returned. If the middle item has a
value greater than the key value, the left sub-array of the middle item is searched; otherwise,
the right sub-array is searched. This process continues until the key is found or the size of the
sub-array reduces to zero.
1. Start with the entire array and define two pointers: low (the start of the array) and high
(the end of the array).
2. Calculate the middle index: mid = (low + high) / 2.
3. Compare the middle element with the target:
o If the middle element equals the target, the search is complete, and the index is
returned.
o If the target is less than the middle element, discard the right half (i.e., adjust
high).
o If the target is greater than the middle element, discard the left half (i.e., adjust
low).
4. Repeat the process with the new half until the target is found or the search interval is
empty (i.e., low exceeds high).
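These steps correspond to the following minimal iterative Python sketch (the function name binary_search is illustrative); it is applied here to the sorted array used in the example below:

def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:                    # search interval is not yet empty
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid                    # match found
        elif target < arr[mid]:
            high = mid - 1                # discard the right half
        else:
            low = mid + 1                 # discard the left half
    return -1                             # target not present

# Example usage with the sorted array from the example below:
print(binary_search([10, 14, 19, 26, 27, 31, 33, 35, 42, 44], 31))   # prints 5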
Time Complexity
Best case: O(1) The target is found at the middle on the first comparison.
Worst case: O(log n) Each step halves the search space, so it takes about log2(n) steps
to reduce the array to one element, where n is the number of elements.
Average case: O(log n) The target is typically found after a few divisions of the array.
Space Complexity
O(1) for the iterative version, since only a few index variables are needed; the recursive
version uses O(log n) space for the call stack.
Example: Consider an array with elements [10, 14, 19, 26, 27, 31, 33, 35, 42, 44]
For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the
process of binary search with a pictorial example. The following is our sorted array and let us
assume that we need to search the location of value 31 using binary search.
Here it is, 0 + (9 - 0) / 2 = 4 (integer value of 4.5). So, 4 is the mid of the array.
Now we compare the value stored at location 4 with the value being searched, i.e. 31. The value
at location 4 is 27, which is not a match. Since the target value 31 is greater than 27 and the
array is sorted, the target value must be in the upper portion of the array.
We change our low to mid + 1 and find the new mid value again.
low = mid + 1
Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.
The value stored at location 7 (which is 35) is not a match; rather, it is greater than what we are
looking for. So, the value must be in the lower part from this location, i.e. we set high = mid - 1
and compute the new mid, which is 5.
We compare the value stored at location 5 with our target value. We find that it is a match.
Binary search halves the set of searchable items at each step and thus greatly reduces the
number of comparisons required.
The Fibonacci series is a series of numbers that starts with the two seed values 0 and 1. Each
successive number is the sum of the preceding two numbers in the series. The series is infinite
and its values are fixed. The first few numbers in the Fibonacci series are 0, 1, 1, 2, 3, 5, 8, 13,
21, 34, ...
The main idea behind Fibonacci search is also to eliminate the least possible places where the
element could be found. In a way, it acts like a divide & conquer algorithm (its logic is closest to
the binary search algorithm). Like jump search and exponential search, this algorithm also skips
through the indices of the input array while searching.
1. Initialization: Find the smallest Fibonacci number Fm that is greater than or equal to the
size of the array.
2. Partitioning: The algorithm divides the array into sections that are Fibonacci-number
sized. We keep track of the selected Fibonacci number and its two predecessors, that is,
Fm, Fm-1 and Fm-2 from the Fibonacci series.
3. Comparison: The algorithm compares the target value with the value at the index
calculated using Fibonacci numbers, and based on the comparison, it decides whether
to search in the left or right segment or if the target is found.
FibonacciSearch(arr, x):
    n = length of arr
    fm2 = 0                       // (m-2)'th Fibonacci Number
    fm1 = 1                       // (m-1)'th Fibonacci Number
    fm = fm2 + fm1                // m'th Fibonacci Number
    while fm < n:                 // find the smallest Fibonacci number >= n
        fm2 = fm1; fm1 = fm; fm = fm2 + fm1
    offset = -1                   // range already eliminated from the front
    while fm > 1:
        i = min(offset + fm2, n - 1)
        if arr[i] < x:            // target lies in the right section
            fm = fm1; fm1 = fm2; fm2 = fm - fm1; offset = i
        elif arr[i] > x:          // target lies in the left section
            fm = fm2; fm1 = fm1 - fm2; fm2 = fm - fm1
        else:
            return i              // element found
    if fm1 == 1 and offset + 1 < n and arr[offset + 1] == x: return offset + 1
    return -1                     // element not present
Time Complexity
Best Case: O(1) – when the element is found at the first comparison.
Average Case: O(log n) – on average, the search reduces the size of the problem
logarithmically.
Worst Case: O(log n) – the search space shrinks by a Fibonacci ratio at each step.
Space Complexity
O(1) – the algorithm uses a constant amount of space regardless of the input size.
Example:
Suppose we have a sorted array of elements {12, 14, 16, 17, 20, 24, 31, 43, 50, 62} and need to
identify the location of element 24 in it using Fibonacci Search.
Step 1
The size of the input array is 10. The smallest Fibonacci number greater than 10 is 13.
We initialize offset = -1
Step 2
In the first iteration, compare it with the element at index = minimum (offset + Fm-2, n – 1) =
minimum (-1 + 5, 9) = minimum (4, 9) = 4.
The element at index 4 of the array is 20, which is not a match and is less than the key element.
Step 3
In the second iteration, update the offset value and the Fibonacci numbers.
Since the key is greater, the offset value will become the index of the element, i.e. 4. Fibonacci
numbers are updated as Fm = Fm-1 = 8.
Fm-1 = 5, Fm-2 = 3.
Now, compare it with the element at index = minimum (offset + Fm-2, n – 1) = minimum (4 +
3, 9) = minimum (7, 9) = 7.
The element at the 7th index of the array is 43, which is not a match and is greater than the key.
Step 4
We discard the elements after the 7th index, so n = 7 and offset value remains 4.
Fm-1 = 2, Fm-2 = 1.
Now, compare it with the element at index = minimum (offset + Fm-2, n – 1) = minimum (4 +
1, 6) = minimum (5, 6) = 5.
The element at index 5 in the array is 24, which is our key element. 5th index is returned as the
output for this example array.
Indexed Sequential Search is a search algorithm that combines sequential search with
indexing to make searching faster in large datasets. It is especially useful when the data is too
large to be efficiently searched using just sequential search but doesn't require the complexity
of more advanced algorithms like binary search.
Key Concepts:
1. Indexing:
o The data is divided into blocks (also called segments or partitions), and an index
is created to store pointers or references to the start of each block. This index
typically holds key values and the position of the corresponding block in the
array or dataset.
2. Two Steps:
o Step 1: First, search through the index to determine which block might contain
the target value.
o Step 2: Then, perform a sequential search within that block to locate the target
value.
Time Complexity:
O(√n) when the data is divided into blocks of about √n elements each: one scan of the
index plus one sequential scan of a single block.
Space Complexity:
O(√n) for storing the index. The index takes up additional space but typically is much
smaller than the full dataset.
o Example: to search for the value 85, start by searching the first index (Index 1) to
determine the range of the array where the element might be located.
o In this case, 85 is greater than 80 but less than 100, so it falls in the group starting
from 80.
o Now that we've narrowed down to the range starting from 80, we perform a
search in Index 2.
o In Index 2, 80 points to the array block starting from the value 80 and ending at
100.
o Once the block is identified, we perform a sequential search within that block.
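A minimal Python sketch of indexed sequential search is given below. The choice of a block size of about √n, the single-level index and the function names are assumptions made for illustration; the notes above describe the general two-step idea rather than one fixed implementation:

import math

def indexed_sequential_search(arr, target):
    n = len(arr)
    block = int(math.sqrt(n)) or 1
    # Build the index: (first key of each block, starting position of the block).
    index = [(arr[i], i) for i in range(0, n, block)]
    # Step 1: scan the index to find the block that may contain the target.
    start = 0
    for key, pos in index:
        if key > target:
            break
        start = pos
    # Step 2: sequential search inside the selected block.
    for i in range(start, min(start + block, n)):
        if arr[i] == target:
            return i
    return -1

# Example usage with an assumed sorted array containing the value 85:
print(indexed_sequential_search([10, 20, 35, 40, 55, 60, 75, 80, 85, 100], 85))   # prints 8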
2. Sorting
In data management and handling, sorting plays an important role in efficiently organising and
arranging data. Sorting is required in many places and must be handled properly to be used
efficiently. Sorting is the process of arranging a collection of elements in a particular order,
either ascending or descending. Sorting techniques are broadly classified into two categories:
o Internal Sorting
o External Sorting
Internal Sorting is the type of sorting that takes place inside the main memory of the computer
and is possible only when the data to be sorted is small enough to be managed entirely by the
main memory. If the data instead has to be read from and written to slower external media, the
sorting process slows down significantly; different sorting methods exist to handle that
situation.
External sorting is a term for a class of sorting algorithms that can handle massive amounts of
data. External sorting is required when the data being sorted does not fit into the main memory
of a computing device (usually RAM) and instead, must reside in the slower external memory
(usually a hard drive).
External sorting typically uses a hybrid sort-merge strategy. In the sorting phase, chunks of
data small enough to fit in the main memory are read, sorted, and written out to a temporary
file. In the merge phase, the sorted sub-files are combined into a single larger file.
2.2.1 Sort Order
Sort order refers to the way in which elements are arranged after sorting. There are two main
types:
Ascending order: Elements are arranged from the smallest to the largest (e.g., for
numbers: 1, 2, 3, or for letters: A, B, C).
Example:
Before: [3, 1, 4, 2]  →  After: [1, 2, 3, 4]
Descending order: Elements are arranged from the largest to the smallest (e.g., for
numbers: 9, 8, 7, or for letters: Z, Y, X).
Example:
Before: [3, 1, 4, 2]  →  After: [4, 3, 2, 1]
2.2.2 Stability
A sorting algorithm is considered stable if it preserves the relative order of elements with equal
values. In other words, if two elements are equal in value, they will appear in the same order
in the sorted list as they did in the original list.
Example: Suppose you are sorting a list of employees by salary. If two employees have the
same salary, a stable sort will ensure they remain in the same order they were originally in.
Some Sorting Algorithms are stable by nature, such as Bubble Sort, Insertion Sort, Merge Sort,
Count Sort, etc. Comparison-based stable sorts such as Merge Sort and Insertion Sort maintain
stability by never moving an element A[j] ahead of A[i] unless A[j] < A[i], where i and j are
indices with i < j. In other words, if A[i] ≡ A[j], the relative order is preserved and A[i] still
comes before A[j].
Other non-comparison-based sorts such as Counting Sort maintain stability by ensuring that
the Sorted Array is filled in reverse order so that elements with equivalent keys have the same
relative position. Some sorts such as Radix Sort depend on another sort, with the only
requirement that the other sort should be stable.
Quick Sort, Heap Sort etc., can be made stable by also taking the position of the elements into
consideration. This change may be done in a way that does not compromise a lot on the
performance and takes some extra space, possibly θ(n).
2.2.3 Efficiency
Efficiency refers to how fast and resource-efficient a sorting algorithm is. Efficiency can be
measured in two main ways:
Time complexity: The number of comparisons or operations required to sort a list. This
is typically measured in Big-O notation.
o O(n2): Bubble sort, selection sort, insertion sort (slow for large datasets).
o O(n log n): Quick sort, merge sort, heap sort (faster for large datasets).
Example:
o A sorting algorithm with O(n log n) is generally more efficient than one with
O(n2) for large datasets.
Space complexity: How much additional memory is required by the sorting algorithm.
Some algorithms sort in-place, meaning they don’t need extra memory, while others
require additional memory for temporary storage.
Example:
o In-place sorting: Quick sort and bubble sort require little to no extra space.
2.2.4 Number of Passes
The number of passes refers to how many times the sorting algorithm goes through the data to
sort it.
Multiple-pass algorithms: Algorithms like bubble sort and selection sort require
several passes over the data, refining the order in each pass.
Example:
o In Bubble Sort, each pass compares adjacent elements and swaps them if needed. The
algorithm requires multiple passes until no more swaps are needed.
Bubble Sort is a simple comparison-based sorting algorithm. It repeatedly steps through the
list, compares adjacent elements, and swaps them if they are in the wrong order. The process
continues until no more swaps are needed, meaning the list is sorted.
Although it is simple to use, it is primarily used as an educational tool because the performance
of bubble sort is poor in the real world. It is not suitable for large data sets. The average and
worst-case complexity of Bubble sort is O(n2), where n is the number of items.
In Bubble Sort, the number of passes required to fully sort an array of N elements is at most
(N - 1).
First Pass
Second Pass
Third Pass
Fourth Pass
When all of the unsorted elements are placed in their correct positions, the array is sorted.
Step 1: Start.
Step 2: Initialize the array arr[] and variable n as the size of the array.
Step 3: Initialize a boolean flag swapped.
Step 4: For each pass over the array:
Step 4.1: Set swapped = false (reset swap flag for each pass).
Step 4.2: Compare each pair of adjacent elements arr[j] and arr[j + 1]; if arr[j] > arr[j + 1],
swap them and set swapped = true.
Step 5: Repeat Step 4 until swapped == false (i.e., no more swaps are needed).
Step 6: Stop.
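These steps translate into the following minimal Python sketch with the early-exit swapped flag (the function name and the sample array are illustrative):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):                 # at most n - 1 passes
        swapped = False                    # reset swap flag for this pass
        for j in range(n - 1 - i):         # last i elements are already in place
            if arr[j] > arr[j + 1]:        # adjacent elements out of order
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                    # no swaps means the array is sorted
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))        # prints [1, 2, 4, 5, 8]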
Time Complexity:
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of bubble sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of bubble sort is O(n2).
o Worst Case Complexity - It occurs when the array elements are required to be sorted
in reverse order. That means suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
bubble sort is O(n2).
Space Complexity
o The space complexity of bubble sort is O(1), because only a single extra variable is
required for swapping.
Insertion sort is a simple sorting algorithm that works by iteratively inserting each element of
an unsorted list into its correct position in a sorted portion of the list. It is like sorting playing
cards in your hands. You split the cards into two groups: the sorted cards and the unsorted cards.
Then, you pick a card from the unsorted group and put it in the right place in the sorted group.
We start with the second element of the array, as the first element is assumed to be already
sorted.
Compare the second element with the first element; if the second element is smaller, swap
them.
Move to the third element, compare it with the first two elements, and put it at its correct
position.
Here, 31 is greater than 12. That means both elements are already in ascending order. So, for
now, 12 is stored in a sorted sub-array.
Here, 25 is smaller than 31. So, 31 is not at correct position. Now, swap 31 with 25. Along with
swapping, insertion sort will also check it with all elements in the sorted array.
For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the
sorted array remains sorted after swapping.
Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that
are 31 and 8.
Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31
and 32.
Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.
Step 1: Start.
Step 2: For each element i from index 1 to n-1 (where n is the size of the array):
Step 2.1: Store the value of the element at index i in a temporary variable key.
Step 2.2: Set j = i - 1.
Step 2.3: While j >= 0 and the value at index j is greater than key, do the following:
Step 2.3.1: Shift the element at index j one position to the right (to index j + 1).
Step 2.3.2: Decrement j by 1.
Step 2.4: Place key at index j + 1.
Step 3: Repeat Step 2 until all elements are processed and the array is sorted.
Step 4: Stop.
Time Complexity:
In the worst case (when the array is in reverse order), each element has to be compared
with all the previously sorted elements, resulting in O(n²) comparisons and shifts.
In the best case (when the array is already sorted), the algorithm only needs to make n-
1 comparisons with no shifts, resulting in a linear time complexity of O(n).
On average, each element will need to be compared with about half of the sorted
elements, leading to an average-case complexity of O(n²).
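The procedure described above can be written as a minimal Python sketch; the sample array is assembled from the values mentioned in the walkthrough, and the function name is illustrative:

def insertion_sort(arr):
    for i in range(1, len(arr)):           # element 0 is treated as already sorted
        key = arr[i]                       # element to be inserted
        j = i - 1
        while j >= 0 and arr[j] > key:     # shift larger sorted elements to the right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                   # insert key at its correct position
    return arr

print(insertion_sort([12, 31, 25, 8, 32, 17]))   # prints [8, 12, 17, 25, 31, 32]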
Selection Sort is a comparison-based sorting algorithm that repeatedly finds the smallest
element from the unsorted part of the list and moves it to its correct position. It works as
follows:
1. First we find the smallest element and swap it with the first element. This way we get
the smallest element at its correct position.
2. Then we find the smallest among remaining elements (or second smallest) and move it
to its correct position by swapping.
3. We keep doing this until we get all elements moved to correct position.
Example: Consider an array of elements [12, 29, 25, 8, 32, 17, 40]
Now, for the first position in the sorted array, the entire array is to be scanned sequentially.
At present, 12 is stored at the first position, after searching the entire array, it is found that 8 is
the smallest value.
So, swap 12 with 8. After the first iteration, 8 will appear at the first position in the sorted array.
For the second position, where 29 is currently stored, we again sequentially scan the rest of the
items of the unsorted array. After scanning, we find that 12 is the second lowest element in the
array and that it should appear at the second position.
Now, swap 29 with 12. After the second iteration, 12 will appear at the second position in the
sorted array. So, after two iterations, the two smallest values are placed at the beginning in a
sorted way.
The same process is applied to the rest of the array elements. Now, we are showing a pictorial
representation of the entire sorting process.
Step 1: Start.
Step 2: For each element i from index 0 to n-2 (where n is the size of the array):
Step 2.1: Set min_index = i (assume the element at index i is the smallest in the
unsorted part).
Step 2.2: For each element j from index i + 1 to n - 1:
Step 2.2.1: If the element at index j is smaller than the element at min_index, then set
min_index = j.
Step 2.3: If min_index is not equal to i, swap the element at index i with the element
at min_index.
Step 3: Repeat steps 2.1 to 2.3 for all elements until the array is sorted.
Step 4: Stop.
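The steps above correspond to the following minimal Python sketch, applied to the example array from this section (the function name is illustrative):

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        min_index = i                      # assume the current element is the smallest
        for j in range(i + 1, n):          # scan the unsorted part
            if arr[j] < arr[min_index]:
                min_index = j
        if min_index != i:                 # swap only if a smaller element was found
            arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

print(selection_sort([12, 29, 25, 8, 32, 17, 40]))   # prints [8, 12, 17, 25, 29, 32, 40]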
Time Complexity:
In the worst case, the algorithm needs to compare every element to find the minimum
for each pass, resulting in n(n−1)/2 comparisons, which simplifies to O(n2).
Even if the array is already sorted, Selection Sort still goes through the same number
of comparisons to determine the minimum, so the best-case scenario also results in
O(n2).
Similar to the worst case, on average, it will still perform O(n2) comparisons.
Quicksort is a widely used sorting algorithm that makes about n log n comparisons on average
when sorting an array of n elements. It is a fast and highly efficient sorting algorithm. This
algorithm follows the divide and conquer approach. Divide and conquer is a technique of
breaking down the algorithms into subproblems, then solving the subproblems, and combining
the results back together to solve the original problem.
Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into
two sub-arrays such that each element in the left sub-array is less than or equal to the pivot
element and each element in the right sub-array is larger than the pivot element.
Quicksort picks an element as pivot, and then it partitions the given array around the picked
pivot element. In quick sort, a large array is divided into two arrays in which one holds values
that are smaller than the specified value (Pivot), and another array holds the values that are
greater than the pivot.
After that, left and right sub-arrays are also partitioned using the same approach. It will
continue until the single element remains in the sub-array.
Example: Consider an array having elements [24, 9, 29, 14, 19, 27]
In the given array, we consider the leftmost element as pivot. So, in this case, a[left] = 24,
a[right] = 27 and a[pivot] = 24.
Since the pivot is at the left, the algorithm starts from the right and moves towards the left.
Now, a[pivot] < a[right], so the algorithm moves the right pointer one position towards the left,
i.e. -
Because a[pivot] > a[right], the algorithm swaps a[pivot] with a[right], and the pivot moves to
the right, as -
Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since the pivot is at the right, the algorithm
starts from the left and moves to the right.
Now, a[left] = 9, a[right] = 24, and a[pivot] = 24. As a[pivot] > a[left], the algorithm moves one
position to the right, as -
Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], swap a[pivot] and
a[left]; the pivot is now at the left, i.e. -
Since the pivot is at the left, the algorithm starts from the right and moves to the left. Now,
a[left] = 24, a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], the algorithm moves one
position to the left, as -
Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], so, swap a[pivot] and
a[right], now pivot is at right, i.e. -
Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. The pivot is at the right, so the algorithm
starts from the left and moves to the right.
Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So, pivot, left and right are all pointing to the
same element. This represents the termination of the procedure.
Element 24, which is the pivot element, is now placed at its exact position.
Elements to the right of element 24 are greater than it, and elements to the left of element 24
are smaller than it.
Now, in a similar manner, the quick sort algorithm is applied separately to the left and right
sub-arrays. After sorting is done, the array will be -
Time Complexity:
o Best Case Complexity - In Quicksort, the best-case occurs when the pivot element is
the middle element or near to the middle element. The best-case time complexity of
quicksort is O(n logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of quicksort is O(n logn).
o Worst Case Complexity - In quick sort, worst case occurs when the pivot element is
either greatest or smallest element. Suppose, if the pivot element is always the last
element of the array, the worst case would occur when the given array is sorted already
in ascending or descending order. The worst-case time complexity of quicksort
is O(n2).
Space Complexity:
o The space complexity of quicksort is O(log n) on average for the recursion stack, and
O(n) in the worst case.
Step 1: Start
Step 2: If low < high, partition the array around a pivot and recursively apply quicksort to the
left and right sub-arrays.
Step 3: Partition(arr, low, high):
Step 3.1: Set pivot = arr[high] (Choose the last element as pivot).
Step 3.2: Set i = low - 1.
Step 3.3: For each j from low to high - 1, if arr[j] <= pivot, increment i and swap arr[i]
with arr[j].
Step 3.4: Swap arr[i + 1] and arr[high] (Place the pivot in its correct position).
Step 3.5: Return i + 1 as the partition index.
Step 4: Stop.
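A minimal Python sketch of quicksort using Lomuto partitioning (last element as pivot, as in the steps above) is given below. Note that the worked example earlier in this section uses the leftmost element as pivot instead; the function names are illustrative:

def quick_sort(arr, low, high):
    if low < high:
        p = partition(arr, low, high)          # pivot ends up at index p
        quick_sort(arr, low, p - 1)            # sort the left sub-array
        quick_sort(arr, p + 1, high)           # sort the right sub-array

def partition(arr, low, high):
    pivot = arr[high]                          # choose the last element as pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:                    # element belongs in the left part
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # place the pivot correctly
    return i + 1

data = [24, 9, 29, 14, 19, 27]
quick_sort(data, 0, len(data) - 1)
print(data)                                    # prints [9, 14, 19, 24, 27, 29]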
It is a sorting algorithm that is an extended version of insertion sort. Shell sort has improved
the average time complexity of insertion sort. As similar to insertion sort, it is a comparison-
based and in-place sorting algorithm. Shell sort is efficient for medium-sized data sets.
In insertion sort, at a time, elements can be moved ahead by one position only. To move an
element to a far-away position, many movements are required that increase the algorithm's
execution time. But shell sort overcomes this drawback of insertion sort. It allows the
movement and swapping of far-away elements as well.
This algorithm first sorts the elements that are far away from each other and then successively
reduces the gap between them. This gap is called the interval. The interval can be calculated
by using Knuth's formula given below -
h = h * 3 + 1
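A minimal Python sketch of shell sort is shown below. It uses the simple n/2, n/4, ..., 1 interval sequence from the example that follows rather than Knuth's formula; the function name is illustrative:

def shell_sort(arr):
    n = len(arr)
    gap = n // 2                           # initial interval
    while gap > 0:
        # Gapped insertion sort: elements gap positions apart form one sublist.
        for i in range(gap, n):
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]      # move far-away elements in a single step
                j -= gap
            arr[j] = temp
        gap //= 2                          # reduce the interval
    return arr

print(shell_sort([33, 31, 40, 8, 12, 17, 25, 42]))   # prints [8, 12, 17, 25, 31, 33, 40, 42]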
Example: Consider an array with elements: [33, 31, 40, 8, 12, 17, 25, 42]
We will use the original sequence of shell sort, i.e., N/2, N/4,....,1 as the intervals.
In the first loop, n is equal to 8 (size of the array), so the elements are lying at the interval of 4
(n/2 = 4). Elements will be compared and swapped if they are not in order.
Here, in the first loop, the element at the 0th position will be compared with the element at
4th position. If the 0th element is greater, it will be swapped with the element at 4th position.
Otherwise, it remains the same. This process will continue for the remaining elements.
At the interval of 4, the sublists are {33, 12}, {31, 17}, {40, 25}, {8, 42}.
Now, we have to compare the values in every sub-list. After comparing, we have to swap them
if required in the original array. After comparing and swapping, the updated array will look as
follows -
In the second loop, elements are lying at the interval of 2 (n/4 = 2), where n = 8.
Now, we are taking the interval of 2 to sort the rest of the array. With an interval of 2, two
sublists will be generated - {12, 25, 33, 40}, and {17, 8, 31, 42}.
Now, we again have to compare the values in every sub-list. After comparing, we have to swap
them if required in the original array. After comparing and swapping, the updated array will
look as follows -
In the third loop, elements are lying at the interval of 1 (n/8 = 1), where n = 8. At last, we use
the interval of value 1 to sort the rest of the array elements. In this step, shell sort uses insertion
sort to sort the array elements.
Time Complexity:
o Best Case Complexity - It occurs when there is no sorting required, i.e., the array is
already sorted. The best-case time complexity of Shell sort is O(n logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of Shell sort is O(n logn).
o Worst Case Complexity - It occurs when the array elements are required to be sorted
in reverse order. That means suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of Shell
sort is O(n2).
Radix sort is a linear-time sorting algorithm used for integers. In Radix sort, digit-by-digit
sorting is performed, starting from the least significant digit and moving to the most significant digit.
The process of radix sort works similar to the sorting of students’ names, according to the
alphabetical order. In this case, there are 26 radix formed due to the 26 letters in the English
alphabet. In the first pass, the names of students are grouped according to the ascending order
of the first letter of their names. After that, in the second pass, their names are grouped
according to the ascending order of the second letter of their name. And the process continues
until we find the sorted list.
Radix sort is typically preferred when:
1. The range of input numbers is large but the number of digits (k) in the numbers is
relatively small.
2. The data is non-negative integers, fixed-length strings, or items that can be mapped to
integers.
3. You need a stable, linear time sorting algorithm, and the additional space required is
acceptable.
Step 1: Start
Step 2: Initialize the array arr[] and set n as the size of the array.
Step 3: Find the maximum element in arr[] to determine the number of digits d.
Step 4: For each digit position, starting from the least significant digit (LSD) to the most
significant digit (MSD), do the following:
Step 4.1: Initialize a counting array count[0..9] to store the count of occurrences
of each digit (0 to 9).
Step 4.2: For i = 0 to n - 1:
Step 4.2.1: Extract the digit at the current place value from arr[i].
Step 4.2.2: Increment the count of this digit in the count[] array.
Step 4.3: Modify the counting array count[] to represent cumulative counts for each
digit.
Step 4.4: Traverse arr[] from the last element to the first (to keep the sort stable):
Step 4.4.1: Place each element arr[i] into its correct position in a temporary
output array output[] based on the current digit.
Step 4.5: Copy the elements from the temporary output[] back into the original arr[].
Step 5: Repeat steps 4.1 to 4.5 for all digits (from LSD to MSD).
Step 6: Stop.
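These steps can be sketched in Python as follows (non-negative integers are assumed, and counting sort is used as the stable per-digit sort; the function names are illustrative):

def radix_sort(arr):
    if not arr:
        return arr
    max_val = max(arr)                     # determines the number of digit passes
    place = 1                              # 1's, 10's, 100's, ... place
    while max_val // place > 0:
        counting_sort_by_digit(arr, place)
        place *= 10
    return arr

def counting_sort_by_digit(arr, place):
    output = [0] * len(arr)
    count = [0] * 10                       # one counter per digit 0..9
    for value in arr:
        count[(value // place) % 10] += 1
    for d in range(1, 10):                 # cumulative counts give final positions
        count[d] += count[d - 1]
    for value in reversed(arr):            # reverse traversal keeps the sort stable
        digit = (value // place) % 10
        count[digit] -= 1
        output[count[digit]] = value
    arr[:] = output                        # copy back into the original array

print(radix_sort([181, 289, 390, 121, 145, 736, 514, 212]))
# prints [121, 145, 181, 212, 289, 390, 514, 736]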
Example: Consider an array with elements [181, 289, 390, 121, 145, 736, 514, 212]
The steps used in the sorting of radix sort are listed as follows -
o First, we have to find the largest element (suppose max) from the given array.
Suppose 'x' be the number of digits in max. The 'x' is calculated because we need to
go through the significant places of all elements.
o After that, go through one by one each significant place. Here, we have to use any stable
sorting algorithm to sort the digits of each significant place.
Now let's see the working of radix sort in detail by using an example. To understand it more
clearly, let's take an unsorted array and try to sort it using radix sort.
In the given array, the largest element is 736, which has 3 digits. So, the loop will run up to
three times (i.e., to the hundreds place). That means three passes are required to sort the array.
Now, first sort the elements on the basis of unit place digits (i.e., x = 0). Here, we are using the
counting sort algorithm to sort the elements.
Pass 1:
In the first pass, the list is sorted on the basis of the digits at 0's place.
Pass 2:
In this pass, the list is sorted on the basis of the next significant digits (i.e., digits at the 10th place).
Pass 3:
In this pass, the list is sorted on the basis of the next significant digits (i.e., digits at the 100th place).
Time Complexity:
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of Radix sort is O(n+k).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of Radix sort is O(nk).
o Worst Case Complexity - It occurs when the array elements are required to be sorted
in reverse order. That means suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
Radix sort is O(nk).
Space Complexity:
o The space complexity of Radix sort is O(n + k), since it needs a temporary output array
of size n and a counting array of size k (the radix, 10 for decimal digits).
Counting sort is a sorting technique that is based on the keys between specific ranges.
This sorting technique doesn't perform sorting by comparing elements. It performs sorting by
counting objects having distinct key values like hashing. After that, it performs some arithmetic
operations to calculate each object's index position in the output sequence. Counting sort is not
used as a general-purpose sorting algorithm.
Counting sort is effective when the range of keys is not much greater than the number of objects
to be sorted. With a simple offset, it can also be used to sort negative input values.
Step 1: Start
Step 2: Initialize the array arr[] and set n as the size of the array.
Step 3: Find the maximum element in arr[] (let's call it max) to determine the range of the
counting array.
Step 4: Create a counting array count[0..max] with all entries set to 0.
Step 5: For i = 0 to n - 1:
Step 5.1: Increment count[arr[i]] by 1.
Step 6: Modify the count[] array to contain the cumulative sum of the counts.
Step 7: Create an output[] array of size n.
Step 8: Traverse arr[] from the last element to the first (to keep the sort stable):
Step 8.1: Place each element arr[i] in its correct position in the output[] array
using the count[] array, and decrement the corresponding count.
Step 9: Copy the elements from output[] back into arr[].
Step 10: Stop.
1. Find the maximum element from the given array. Let max be the maximum element.
2. Now, initialize a count array of length max + 1 with all elements set to 0. This array will be
used to store the count of the elements in the given array.
3. Now, we have to store the count of each array element at their corresponding index in the
count array.
The count of an element is stored as follows: suppose the array element '4' appears two times,
then the count of element 4 is 2, and 2 is stored at the 4th position of the count array. If an
element is not present in the array, 0 is stored; for example, if element '3' is not present in the
array, 0 is stored at the 3rd position.
Now, store the cumulative sum of count array elements. It will help to place the elements at
the correct index of the sorted array.
After placing element at its place, decrease its count by one. Before placing element 2, its count
was 2, but after placing it at its correct position, the new count for element 2 is 1.
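The same procedure can be written as a minimal Python sketch (non-negative integer keys are assumed; the function name is illustrative):

def counting_sort(arr):
    if not arr:
        return arr
    max_val = max(arr)
    count = [0] * (max_val + 1)            # one counter per possible key value
    for value in arr:
        count[value] += 1                  # store the count of each element
    for k in range(1, max_val + 1):
        count[k] += count[k - 1]           # cumulative counts give final positions
    output = [0] * len(arr)
    for value in reversed(arr):            # reverse traversal keeps the sort stable
        count[value] -= 1
        output[count[value]] = value
    return output

print(counting_sort([2, 5, 4, 2, 1, 0, 4]))   # prints [0, 1, 2, 2, 4, 4, 5]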
Time Complexity:
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of counting sort is O(n + k).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of counting sort is O(n + k).
o Worst Case Complexity - It occurs when the array elements are required to be sorted
in reverse order. That means suppose you have to sort the array elements in ascending
order, but its elements are in descending order. The worst-case time complexity of
counting sort is O(n + k).
Space Complexity:
o The space complexity of counting sort is O(k); the larger the range of the elements, the
larger the space required.
Bucket sort is a sorting algorithm that separates the elements into multiple groups called
buckets. In bucket sort, the elements are first uniformly distributed into the buckets, each
bucket is then sorted by any other sorting algorithm, and finally the elements are gathered back
in sorted order.
Step 1: Start
Step 2: Initialize the array arr[] and set n as the size of the array.
Step 3: Find the maximum element (let's call it max) and minimum element (let's call it min)
in arr[] to determine the range of the buckets.
Step 4: Create n empty buckets (or decide on the number of buckets based on the data range).
Step 5: For i = 0 to n - 1:
Step 5.1: Determine the appropriate bucket for each element arr[i].
Step 6: Sort each bucket individually using a simple sorting algorithm (e.g., Insertion Sort).
Step 7: Concatenate all sorted buckets to form the final sorted array.
Example: Consider an array having elements [10, 8, 20, 7, 16, 18, 12, 1, 23, 11]
Now, create buckets covering the range from 0 to 25. The bucket ranges are 0-5, 5-10, 10-15,
15-20 and 20-25. Elements are inserted into the buckets according to the bucket range. For
example, the value 16 is inserted into the bucket with the range 15-20. Similarly, every item
of the array is inserted into its bucket accordingly.
Now, sort each bucket individually. The elements of each bucket can be sorted by using any of
the stable sorting algorithms.
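A minimal Python sketch of bucket sort using the same bucket width of 5 as the example above is given below; the bucket width, the use of Python's built-in sort inside each bucket, and the function name are assumptions made for illustration:

def bucket_sort(arr, bucket_size=5):
    if not arr:
        return arr
    min_val, max_val = min(arr), max(arr)
    bucket_count = (max_val - min_val) // bucket_size + 1
    buckets = [[] for _ in range(bucket_count)]
    for value in arr:                      # scatter values into their buckets
        buckets[(value - min_val) // bucket_size].append(value)
    result = []
    for bucket in buckets:                 # sort each bucket, then concatenate
        result.extend(sorted(bucket))      # any stable sort (e.g., insertion sort) works here
    return result

print(bucket_sort([10, 8, 20, 7, 16, 18, 12, 1, 23, 11]))
# prints [1, 7, 8, 10, 11, 12, 16, 18, 20, 23]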
Time Complexity:
o Best Case Complexity - If we use the insertion sort to sort the bucket elements, the
overall complexity will be linear, i.e., O(n + k), where O(n) is for making the buckets,
and O(k) is for sorting the bucket elements using algorithms with linear time complexity
at best case. The best-case time complexity of bucket sort is O(n + k).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. Bucket sort runs in the
linear time, even when the elements are uniformly distributed. The average case time
complexity of bucket sort is O(n + k).
o Worst Case Complexity - In bucket sort, the worst case occurs when the elements are
close to each other in value, so many of them are placed in the same bucket and some
buckets hold far more elements than others. The complexity gets worse when the
elements are in reverse order. The worst-case time complexity of bucket sort is O(n2).
Space Complexity:
o The space complexity of bucket sort is O(n + k), where n is the number of elements
and k is the number of buckets.