Advanced Sorting Algorithm
Today’s checklist:
Practice questions
Merge Sort:
Merge sort is defined as a sorting algorithm that works by dividing an array into smaller subarrays, sorting each
subarray, and then merging the sorted subarrays back together to form the final sorted array.
Algorithm Overview:
Divide: The input array is divided into halves until individual elements are obtained.
Merge: Subsequently, the algorithm merges these smaller sorted arrays back together in sorted order. This
merging process is where the sorted arrays are combined to form larger sorted arrays until the entire array is
sorted.
Imagine we have two sorted arrays and we want to build a new sorted array containing all the elements of array1 and array2 combined. We can do this easily by placing one pointer at the start of array1 and another at the start of array2, then repeatedly comparing the elements they point to: whichever is smaller is pushed into the new array and that pointer moves forward. Once one array is exhausted, the remaining elements of the other are appended, and the new sorted array is ready.
vector<int> ans;
int i = 0, j = 0;
while (i < arr1.size() && j < arr2.size()) {
    if (arr1[i] < arr2[j]) ans.push_back(arr1[i]), i++;
    else ans.push_back(arr2[j]), j++;
}
while (i < arr1.size()) {
    ans.push_back(arr1[i]), i++;
}
while (j < arr2.size()) {
    ans.push_back(arr2[j]), j++;
}
return ans;
Java
C++ &+ DSA
DSA
Explanation: Divide and conquer is a problem-solving technique that Merge Sort adopts. It divides a problem
into smaller, more manageable sub-problems, solves them independently, and combines the solutions to solve
the original problem.
Application in Merge Sort: Merge Sort implements this technique by dividing the array into smaller sub-arrays
until they become trivially sorted (single-element arrays). It then merges these sorted sub-arrays together.
Algorithm Steps:
1. Divide: Recursively divide the array into smaller halves until single elements remain.
2. Merge: Combine the sorted halves back together, producing larger sorted runs until the whole array is merged.
Code Example:
void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1;
    int n2 = r - m;
    int L[n1], R[n2];
    for (int i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];
    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}

void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}
Explanation:
The merge() function merges two sorted sub-arrays into a single sorted array. It creates temporary arrays `L[]` and `R[]`, copies data from the original array into them, and then merges them back into the original array in sorted order.
The mergeSort() function is the main driver of the Merge Sort algorithm. It recursively divides the array into halves and merges them using the merge() function to achieve the final sorted array.
Time Complexity: Merge Sort runs in O(n log n).
Divide-and-Conquer: By repeatedly dividing the array into halves until single elements remain, Merge Sort
creates a binary tree-like structure. This logarithmic division contributes to its improved time complexity.
Efficient Merging: When merging the divided halves, Merge Sort efficiently compares and merges two
already sorted arrays in linear time, resulting in overall faster sorting.
Advantage Over Others: Compared to quadratic time complexity algorithms, Merge Sort's O(n log n) time
complexity ensures more efficient performance, especially as the dataset size grows.
Visual Representation: Visualize Merge Sort's time complexity as a balanced binary tree, with each level halving the sub-array size and the merge work across any one level totaling O(n).
Real-world Applicability: This efficiency makes Merge Sort preferable for stable and efficient sorting in
applications handling substantial datasets or requiring consistent time performance.
Explanation: Merge Sort, with its efficient and stable sorting technique, finds practical applications across
various domains.
Sorting Linked Lists: Merge Sort's divide-and-conquer approach makes it well-suited for efficiently sorting
linked lists.
External Sorting: Its ability to handle large datasets by using external memory efficiently makes Merge Sort well-suited for external sorting.
General-purpose Sorting: It is widely used for its reliability, stability, and consistent performance in sorting arrays and collections.
Space Complexity: Merge Sort requires additional memory proportional to the input size for its merging
process, impacting its suitability in memory-constrained environments.
Memory Overhead: The need for extra space during sorting might pose challenges when dealing with very large datasets in memory-constrained systems.
The Merge Sort algorithm sorts an array by dividing it into smaller halves, sorting these smaller halves, and
merging them back together in the desired order. To achieve a decreasing order, the comparison during
merging needs to be altered, ensuring larger elements precede smaller ones during the merge step.
void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1;
    int n2 = r - m;
    int L[n1], R[n2];
    for (int i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];
    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2) {
        if (L[i] >= R[j]) {  // '>=' instead of '<=' yields decreasing order
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}

void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}
Complexity Analysis:
Time Complexity: O(n log n) - Merge Sort's time complexity remains unchanged in both ascending and
descending order sorting as it requires the same number of comparisons and divisions.
Space Complexity: O(n) - Merge Sort uses additional memory for temporary arrays during the merging
process, resulting in linear space usage.
Quick Sort:
QuickSort is a sorting algorithm based on the Divide and Conquer algorithm that picks an element as a pivot
and partitions the given array around the picked pivot by placing the pivot in its correct position in the sorted
array.
The key process in QuickSort is the partition() function. The goal of partitioning is to place the pivot (any element can be chosen as the pivot) at its correct position in the sorted array, with all smaller elements to the left of the pivot and all greater elements to the right.
Partition is done recursively on each side of the pivot after the pivot is placed in its correct position and this
finally sorts the array.
Pivot selection: you can pick the first element, a random element, or the median, or you might always go for the last item (as shown in the partition algorithm).
Partition Algorithm:
The logic is simple, we start from the leftmost element and keep track of the index of smaller (or equal)
elements as i. While traversing, if we find a smaller element, we swap the current element with arr[i]. Otherwise,
we ignore the current element.
Let us understand the working of partition and the Quick Sort algorithm with the help of the following example:
Consider: arr[] = {10, 80, 30, 90, 40}, with the last element, 40, as the pivot.
Compare 10 with the pivot; as it is less than the pivot, arrange it accordingly.
Compare 30 with the pivot. It is less than the pivot, so arrange it accordingly.
As the partition process is done recursively, it keeps on putting the pivot in its actual position in the sorted array.
Repeatedly putting pivots in their actual position makes the array sorted.
Partitioning of the subarrays:
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = (low - 1);
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[high]);
    return (i + 1);
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
Best Case: When the pivot consistently splits the array evenly, leading to balanced partitions, resulting in a fast
sorting time of Ω(N log N).
Average Case: Quicksort usually performs really well on average, taking θ(N log N) time. It's considered one of
the fastest sorting methods in practice.
Worst Case: If the pivot consistently causes highly unbalanced partitions, like when the array is already sorted,
it could take O(N^2) time. Techniques like choosing better pivots (e.g., median of three) or using randomized
algorithms (Randomized Quicksort) help avoid this.
Auxiliary Space: Normally O(1), but considering the space used by the recursive stack, it could go up to O(N) in
the worst case.
Randomized QuickSort: Randomized quick sort is designed to decrease the chance of hitting the worst-case time complexity of O(n²). The worst case of quick sort arises when the input is an already sorted list, leading to n(n − 1)/2 comparisons.
int randomIndex = low + rand() % (high - low + 1);
return randomIndex;
QuickSort, in its standard implementation, is not a stable sorting algorithm. Stability in sorting algorithms refers
to the preservation of the relative order of equal elements.
In QuickSort, the partitioning step does not guarantee the preservation of the relative order of equal elements.
When two elements are considered equal by the comparison function and are swapped during the partitioning
step, their original order may not be maintained in the sorted output.
For example, if you have two elements A and B in the input array such that A appears before B and they are
considered equal, during the partitioning step, A and B might be swapped, potentially changing their order in
the final sorted array. This lack of stability can affect certain applications where maintaining the original order
of equal elements is crucial.
However, there are variations and modifications of QuickSort, like using a different pivot selection strategy or
tweaking the partitioning step, that can be designed to maintain stability by considering the original order of
equal elements. These modifications may impact the efficiency or general implementation of the algorithm.
Application of Quicksort:
Commercial computing: it is used in various government and private organizations for sorting data, such as sorting files by name/date/price, students by roll number, or account profiles by a given id
Sorting enables efficient information searching, and since Quicksort is among the fastest sorting algorithms in practice, it is widely used as a preprocessing step for searching
It is used wherever a stable sort is not needed
Quicksort is a cache-friendly algorithm, as it has good locality of reference when used on arrays
It is tail-recursive, so tail-call optimizations can be applied
It is an in-place sort that needs no extra storage memory beyond the recursion stack
It is used in operational research and event-driven simulation
Numerical computations and scientific research: many efficiently implemented algorithms rely on sorted data (often via priority queues), and quick sort is used for the sorting
Variants of Quicksort are used to separate the Kth smallest or largest elements
It is used to implement primitive type methods
If data is sorted, then searching for information becomes easy and efficient.
Merge Sort (in comparison with Quick Sort):
Partitioning Method: Always divides the array into two equal halves, ensuring balanced splitting
Worst Case Complexity: O(n log n) consistently, regardless of the input
Suitable for: Works efficiently on any size of datasets, offering predictable performance
Storage Requirement: Needs additional memory space for auxiliary arrays during merging
Efficiency: More consistent and efficient for larger datasets due to its reliable performance
Sorting Method: Merge Sort is an external sorting algorithm
Stability: It maintains stability, preserving the order of equal elements
Preferred for: Linked Lists and scenarios where consistent performance on any dataset size is essential.
This comparison illustrates their distinct characteristics, aiding in choosing the suitable algorithm based on the dataset size, stability requirements, and expected performance.
Q. Given an array arr[] of size N and a number K, where K is smaller than the size of the array. Find the K’th
smallest element in the given array. Given that all array elements are distinct.
Input: arr[] = {7, 10, 4, 3, 20, 15}, K = 3
Output: 7 (the 3rd smallest element is 7)
Code:
void swap(int* a, int* b) { int temp = *a; *a = *b; *b = temp; }

int partition(int arr[], int l, int r) {
    int x = arr[r], i = l;
    for (int j = l; j < r; j++)
        if (arr[j] <= x) { swap(&arr[i], &arr[j]); i++; }
    swap(&arr[i], &arr[r]);
    return i;
}

// Returns the K'th smallest element (1-indexed) in arr[l..r].
int kthSmallest(int arr[], int l, int r, int K) {
    if (K > 0 && K <= r - l + 1) {
        int pos = partition(arr, l, r);
        if (pos - l == K - 1) return arr[pos];
        if (pos - l > K - 1) return kthSmallest(arr, l, pos - 1, K);
        return kthSmallest(arr, pos + 1, r, K - pos + l - 1);
    }
    return INT_MAX;
}
Explanation: QuickSort's partitioning is reused here. In QuickSort, we pick a pivot element, move it to its correct position, and partition the surrounding array. The idea is not to do a complete QuickSort, but to stop at the point where the pivot itself is the K'th smallest element. Also, we do not recurse on both sides of the pivot, only on one of them, according to the position of the pivot.
In this algorithm, pick a pivot element and move it to its correct position.
Now, if the index of the pivot (within the current range) equals K − 1, return that value; if it is greater than K − 1, recur for the left subarray; otherwise recur for the right subarray.
Q. Inversion Count
Two elements of an array a, a[i] and a[j] form an inversion if a[i] > a[j] and i < j. Given an array of integers. Find
the Inversion Count in the array.
Input: N = 4, arr[] = {8, 4, 2, 1}
Output: 6
Explanation: The given array has six inversions: (8, 4), (4, 2), (8, 2), (8, 1), (4, 1), (2, 1).
Naive Approach: Traverse through the array, and for every index, find the number of smaller elements on the
right side of the array. This can be done using a nested loop. Sum up the counts for all indices in the array and
print the sum.
Naive Approach Code:
int getInvCount(int arr[], int n) {
    int inv_count = 0;
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (arr[i] > arr[j]) inv_count++;
    return inv_count;
}
Time Complexity: O(N^2), Two nested loops are needed to traverse the array from start to end.
Q. What if we have an array made up of two sorted subarrays? What can be said about the inversions involving a given element?
Compare Elements Efficiently: For each element in one array, find the count of elements smaller than it in the
other array. This is easier due to the sorted nature of the arrays.
Use Merge Approach: Employ a merge-like technique to efficiently count inversions while merging the two
arrays. As you merge, track inversions when elements from the second array are smaller than elements from
the first array.
The sorted nature makes it easier to navigate and count inversions by using techniques like merging or efficient
comparisons between the arrays' elements.
// Merge two sorted halves arr[left..mid-1] and arr[mid..right],
// counting inversions as we go.
int merge(int arr[], int temp[], int left, int mid, int right) {
    int i, j, k;
    int inv_count = 0;
    i = left;
    j = mid;
    k = left;
    while ((i <= mid - 1) && (j <= right)) {
        if (arr[i] <= arr[j])
            temp[k++] = arr[i++];
        else {
            temp[k++] = arr[j++];
            // arr[i..mid-1] are all greater than arr[j], so each forms an inversion.
            inv_count += (mid - i);
        }
    }
    while (i <= mid - 1)
        temp[k++] = arr[i++];
    while (j <= right)
        temp[k++] = arr[j++];
    for (i = left; i <= right; i++)
        arr[i] = temp[i];
    return inv_count;
}

int _mergeSortAndCountInv(int arr[], int temp[], int left, int right) {
    int inv_count = 0;
    if (right > left) {
        int mid = (right + left) / 2;
        inv_count += _mergeSortAndCountInv(arr, temp, left, mid);
        inv_count += _mergeSortAndCountInv(arr, temp, mid + 1, right);
        inv_count += merge(arr, temp, left, mid + 1, right);
    }
    return inv_count;
}

// This function sorts the input array and returns the number of
// inversions in the array
int mergeSort(int arr[], int array_size) {
    int temp[array_size];
    return _mergeSortAndCountInv(arr, temp, 0, array_size - 1);
}
Time Complexity: O(N * log N), The algorithm used is divide and conquer i.e. merge sort whose complexity is
O(n log n).
Explanation:
Main Idea: Use Merge Sort with the modification that every time an unsorted pair is found, the count is incremented, and the count is returned at the end.
Steps:
The idea is similar to merge sort, divide the array into two equal or almost equal halves in each step until the
base case is reached
Create a function merge that counts the number of inversions when two halves of the array are merged.
Create two indices i and j, i is the index for the first half, and j is an index of the second half.
If a[i] is greater than a[j], then there are (mid − i) inversions, because the left and right subarrays are sorted, so all the remaining elements in the left subarray (a[i], a[i+1], …, a[mid−1]) will be greater than a[j]
Create a recursive function to divide the array into halves and find the answer by summing the number of
inversions in the first half, the number of inversions in the second half and the number of inversions by
merging the two.
The base case of recursion is when there is only one element in the given half
Print the answer.
Cycle Sort
Specialty:
In-place Sorting: Cycle Sort is an in-place sorting algorithm, meaning it minimizes the use of additional
memory space during sorting, making it memory-efficient.
Minimal Writes: It minimizes the number of writes or swaps to sort the array, making it preferable for scenarios
where write operations are costly or restricted.
Where to Use:
Range-bound Values: useful for problems involving arrays containing numbers in a given range (for example, 1 to N).
Memory Constraints: Suitable for memory-constrained environments or scenarios where additional memory
usage needs to be minimized.
Reduced Writes: Effective when minimizing write operations to the dataset is crucial, like with non-volatile
memory or flash memory devices where write endurance is a concern.
The basic idea behind cycle sort is to divide the input array into cycles, where each cycle consists of elements
that belong to the same position in the sorted output array. The algorithm then performs a series of swaps to
place each element in its correct position within its cycle, until all cycles are complete and the array is sorted.
Step-wise algorithm:
Start with an unsorted array of n elements
Initialize a variable, cycleStart, to 0
For each cycleStart, take item = arr[cycleStart] and count how many elements to its right are smaller than it; that count added to cycleStart gives the item's correct position, pos
If pos equals cycleStart, the item is already in place; advance cycleStart to the next index and repeat
Otherwise, place the item at pos (skipping past any equal elements) and take the displaced element as the new item. Continue rotating the cycle this way until an item lands back at cycleStart
Repeat these steps for each cycleStart until all cycles have been completed.
Code:
void cycleSort(int arr[], int n) {
    int writes = 0;
    for (int cycle_start = 0; cycle_start <= n - 2; cycle_start++) {
        int item = arr[cycle_start];
        // Count smaller elements to the right to find the item's position.
        int pos = cycle_start;
        for (int i = cycle_start + 1; i < n; i++)
            if (arr[i] < item) pos++;
        if (pos == cycle_start)
            continue;
        while (item == arr[pos])  // skip duplicates
            pos += 1;
        if (pos != cycle_start) {
            swap(item, arr[pos]);
            writes++;
        }
        // Rotate the rest of the cycle.
        while (pos != cycle_start) {
            pos = cycle_start;
            for (int i = cycle_start + 1; i < n; i++)
                if (arr[i] < item) pos += 1;
            while (item == arr[pos])
                pos += 1;
            if (item != arr[pos]) {
                swap(item, arr[pos]);
                writes++;
            }
        }
    }
}
Time Complexity Analysis: O(n²). The algorithm's two nested loops iterate through the array elements, leading to a quadratic time complexity.
Method 2: This method applies only when the given array values are in the range 1 to N or 0 to N. In this method, we do not need to rotate whole cycles explicitly.
Approach: All the given array values must be in the range 1 to N or 0 to N. If the range is 1 to N, then every element's correct position satisfies index == value − 1, i.e., at index 0 the value will be 1, at index 1 the value will be 2, and so on up to the nth value.
Similarly, for values 0 to N, the correct index of each element is the same as its value, i.e., 0 belongs at index 0, 1 at index 1, and so on.
Code:
void sortRange(vector<int>& arr) {  // works when values are 1 to N
    int n = arr.size();
    int i = 0;
    while (i < n) {
        int correct = arr[i] - 1;  // correct index for the value arr[i]
        if (arr[i] != arr[correct]) {
            swap(arr[i], arr[correct]);
        } else {
            i++;
        }
    }
}
int cycleSort(vector<int>& arr) {
    int n = arr.size();
    int swaps = 0;
    for (int cycleStart = 0; cycleStart <= n - 2; cycleStart++) {
        int item = arr[cycleStart];
        int pos = cycleStart;
        for (int i = cycleStart + 1; i < n; i++)
            if (arr[i] < item) pos++;
        if (pos == cycleStart)
            continue;
        while (item == arr[pos])
            pos++;
        if (pos != cycleStart) {
            swap(item, arr[pos]);
            swaps++;
        }
        while (pos != cycleStart) {
            pos = cycleStart;
            for (int i = cycleStart + 1; i < n; i++)
                if (arr[i] < item) pos++;
            while (item == arr[pos])
                pos++;
            if (item != arr[pos]) {
                swap(item, arr[pos]);
                swaps++;
            }
        }
    }
    return swaps;
}

int main() {
    vector<int> arr = {20, 40, 50, 10, 30};  // sample input (my own)
    int numSwaps = cycleSort(arr);
    cout << "Array sorted using Cycle Sort with " << numSwaps << " swaps: ";
    for (int x : arr)
        cout << x << " ";
    return 0;
}
Q. Missing Number: Given an array nums containing n distinct numbers taken from the range [0, n], return the only number in the range that is missing from the array.
Example 1:
Input: nums = [3,0,1]
Output: 2
Explanation: n = 3 since there are 3 numbers, so all numbers are in the range [0,3]. 2 is the missing number in
the range since it does not appear in nums.
Example 2:
Input: nums = [0,1]
Output: 2
Explanation: n = 2 since there are 2 numbers, so all numbers are in the range [0,2]. 2 is the missing number in
the range since it does not appear in nums.
Code:
#include <vector>
using namespace std;

class Solution {
public:
    int missingNumber(vector<int>& nums) {
        int n = nums.size();
        int start = 0;
        while (start < n) {
            int num = nums[start];
            if (num < n && num != start)
                swap(nums[start], nums[num]);
            else
                start++;
        }
        for (int i = 0; i < n; i++)
            if (nums[i] != i)
                return i;
        return nums.size();
    }
};
Explanation: Use cyclic sort, then traverse the array; at the first index where nums[i] != i, that index i is the missing number. If every index matches, the missing number is n.
Time: O(n), for cyclic sort in a given range
Q. Find the Duplicate Number: Given an array of integers nums containing n + 1 integers, where each integer is in the range [1, n] inclusive. There is only one repeated number in nums; return this repeated number.
Example 1:
Input: nums = [1,3,4,2,2]
Output: 2
Example 2:
Input: nums = [3,1,3,4,2]
Output: 3
Code:
class Solution {
public:
    // Approach 1: cyclic sort (modifies the input array).
    int findDuplicateBySorting(vector<int>& nums) {
        int n = nums.size();
        for (int i = 0; i < n;) {
            if (nums[i] != nums[nums[i] - 1])
                swap(nums[i], nums[nums[i] - 1]);
            else
                i++;
        }
        for (int i = 0; i < n; i++)
            if (nums[i] != i + 1)
                return nums[i];
        return -1;
    }

    // Approach 2: Floyd's cycle detection (no modification, O(1) space).
    int findDuplicate(vector<int>& nums) {
        int slow = nums[0], fast = nums[0];
        while (1) {
            slow = nums[slow];
            fast = nums[nums[fast]];
            if (slow == fast) break;
        }
        slow = nums[0];
        while (slow != fast) {
            slow = nums[slow];
            fast = nums[fast];
        }
        return slow;
    }
};
Explanation: We can use cyclic sort; after sorting, whichever element does not match its corresponding index is the duplicate number. But the question states that we can't modify the given array and can't use extra space either, which is why the slow/fast-pointer (Floyd's cycle detection) approach is used instead.
Q. Given an array nums of n integers where nums[i] is in the range [1, n], return an array of all the integers in
the range [1, n] that do not appear in nums.
Example 1:
Input: nums = [4,3,2,7,8,2,3,1]
Output: [5,6]
Example 2:
Input: nums = [1,1]
Output: [2]
Code:
vector<int> findDisappearedNumbers(vector<int>& nums) {
    int i = 0;
    vector<int> v;
    while (i < (int)nums.size()) {
        if (nums[i] != nums[nums[i] - 1]) {
            swap(nums[i], nums[nums[i] - 1]);
        } else {
            i++;
        }
    }
    for (int i = 0; i < (int)nums.size(); i++) {
        if (nums[i] != i + 1) {
            v.push_back(i + 1);
        }
    }
    return v;
}
Explanation: Use cycle sort, then check each index: if nums[i] != i + 1, it means i + 1 is one of the disappeared numbers.
Q. First Missing Positive: Given an unsorted integer array nums, return the smallest missing positive integer.
You must implement an algorithm that runs in O(n) time and uses O(1) auxiliary space.
Example 1:
Input: nums = [1,2,0]
Output: 3
Explanation: The numbers in the range [1,2] are all in the array.
Example 2:
Input: nums = [3,4,-1,1]
Output: 2
Example 3:
Input: nums = [7,8,9,11,12]
Output: 1
class Solution {
public:
    int firstMissingPositive(vector<int>& A) {
        int N = A.size();
        if (N == 0)
            return 1;
        for (int i = 0; i < N; i++) {
            if (A[i] < 0)
                continue;
            int corr_idx = A[i] - 1;
            while (corr_idx != i) {
                // Stop if the value is out of 1..N, or its slot already holds it.
                if (corr_idx < 0 || corr_idx >= N || A[i] == A[corr_idx])
                    break;
                swap(A[i], A[corr_idx]);
                corr_idx = A[i] - 1;
            }
        }
        int i = 0;
        for (; i < N; i++)
            if (A[i] - 1 != i)
                return i + 1;
        return i + 1;
    }
};
Explanation:
For an array of size N, if there are no missing positives, all numbers will exist from 1...N. Thinking this way, we
can find the correct position of each number in O(1) time, as the correct index for A[i] is A[i]-1.
Do a cyclic sort in O(N) time to put numbers in their correct places. For numbers that are out of 1...N bounds
just ignore them.
In the second iteration, check for the number, not in its correct position that will be the missing one.
THANK YOU!