
Lesson Plan

Advanced Sorting Algorithm
Today’s checklist:

Merge Sort Algorithm

Quick Sort Algorithm

Cycle Sort Algorithm

Practice questions

Merge Sort:

Merge sort is defined as a sorting algorithm that works by dividing an array into smaller subarrays, sorting each

subarray, and then merging the sorted subarrays back together to form the final sorted array.

Algorithm Overview:

The Merge Sort algorithm is composed of two primary steps:

Divide: The input array is divided into halves until individual elements are obtained.

Merge: The algorithm then merges these smaller sorted arrays back together in sorted order, combining them into larger and larger sorted arrays until the entire array is sorted.

What if we had two sorted arrays?

Imagine we had two sorted arrays and wanted to make a new sorted array containing all the elements of array1 and array2 combined. We could easily do this by taking one pointer at the start of array1 and another at the start of array2, then repeatedly comparing their elements; whichever is smaller, push it into the new array and move that pointer forward. This way we can easily build the new sorted array.

Let’s look at the code

vector<int> merge(vector<int> arr1, vector<int> arr2) {
    vector<int> ans;  // result array (missing in the original listing)
    int i = 0, j = 0;
    while (i < (int)arr1.size() && j < (int)arr2.size()) {
        if (arr1[i] < arr2[j]) ans.push_back(arr1[i]), i++;
        else ans.push_back(arr2[j]), j++;
    }
    // Copy any remaining elements of arr1
    while (i < (int)arr1.size()) ans.push_back(arr1[i]), i++;
    // Copy any remaining elements of arr2
    while (j < (int)arr2.size()) ans.push_back(arr2[j]), j++;
    return ans;
}

Introducing Divide and Conquer

Explanation: Divide and conquer is a problem-solving technique that Merge Sort adopts. It divides a problem
into smaller, more manageable sub-problems, solves them independently, and combines the solutions to solve
the original problem.

Application in Merge Sort: Merge Sort implements this technique by dividing the array into smaller sub-arrays
until they become trivially sorted (single-element arrays). It then merges these sorted sub-arrays together.

Merge Sort Algorithm


Explanation: Merge Sort operates with a recursive approach, combining the steps of dividing and merging to
sort an array efficiently.

Algorithm Steps:
1. Divide: Recursively divide the array into smaller halves until single elements remain.

2. Merge: Merge the sorted halves together in a sorted manner.

Code Example:
void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1;
    int n2 = r - m;
    int L[n1], R[n2];

    // Copy data to temp arrays L[] and R[]
    for (int i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    // Merge the temp arrays back into arr[l..r]
    int i = 0; // Initial index of first subarray
    int j = 0; // Initial index of second subarray
    int k = l; // Initial index of merged subarray
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }

    // Copy the remaining elements of L[], if any
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }

    // Copy the remaining elements of R[], if any
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}

void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;

        // Sort first and second halves
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);

        // Merge the sorted halves
        merge(arr, l, m, r);
    }
}

Explanation:

The merge() function merges two sorted sub-arrays into a single sorted array. It creates temporary arrays L[] and R[], copies data from the original array into these arrays, and then merges them back into the original array arr[].

The mergeSort() function is the main driver function for the Merge Sort algorithm. It recursively divides the array into halves and merges them using the merge() function to achieve the final sorted array.

Improvement in Time Complexity


Explanation: Merge Sort achieves its efficient O(n log n) time complexity through its divide-and-conquer strategy: repeatedly splitting the array in half produces log n levels, and merging at each level takes O(n) work, so the overall cost is O(n log n).
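The informal argument above corresponds to the standard recurrence (added here for reference; it is not part of the original lesson):

```latex
T(n) = 2\,T(n/2) + cn
     = 4\,T(n/4) + 2cn
     = \cdots
     = n\,T(1) + cn\log_2 n
     = O(n \log n)
```

Each of the $\log_2 n$ levels of halving contributes about $cn$ work for merging, which is where the $n \log n$ total comes from.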

Divide-and-Conquer: By repeatedly dividing the array into halves until single elements remain, Merge Sort
creates a binary tree-like structure. This logarithmic division contributes to its improved time complexity.
Efficient Merging: When merging the divided halves, Merge Sort efficiently compares and merges two
already sorted arrays in linear time, resulting in overall faster sorting.
Advantage Over Others: Compared to quadratic time complexity algorithms, Merge Sort's O(n log n) time
complexity ensures more efficient performance, especially as the dataset size grows.
Visual Representation: Visualize Merge Sort's time complexity as a balanced binary tree, with each level representing a division step and linear merging at each level.

Real-world Applicability: This efficiency makes Merge Sort preferable for stable and efficient sorting in
applications handling substantial datasets or requiring consistent time performance.

Stability of Merge Sort


Explanation: Merge Sort is categorized as a stable sorting algorithm, ensuring the preservation of the relative order of equal elements during the sorting process.
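A quick self-contained sketch (my illustration, not from the lesson) that makes stability visible: records comparing equal by key keep their original left-to-right order. std::stable_sort is used here as a stand-in for a merge-sort-based stable sort.

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Sort (key, label) records by key only; a stable sort preserves
// the input order of records whose keys compare equal.
std::vector<std::pair<int, std::string>>
sortByKeyStable(std::vector<std::pair<int, std::string>> v) {
    std::stable_sort(v.begin(), v.end(),
                     [](const auto& a, const auto& b) { return a.first < b.first; });
    return v;
}
```

After sorting {(2,"first"), (1,"x"), (2,"second")} by key, "first" still precedes "second" among the equal keys; an unstable sort gives no such guarantee.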

Applications of Merge Sort

Explanation: Merge Sort, with its efficient and stable sorting technique, finds practical applications across
various domains.

Sorting Linked Lists: Merge Sort's divide-and-conquer approach makes it well-suited for efficiently sorting
linked lists.
External Sorting: Its ability to handle large datasets by utilizing external memory efficiently makes Merge Sort a preferred choice for external sorting algorithms.


Programming Language Libraries: Merge Sort is often integrated into programming language libraries for its reliability, stability, and consistent performance in sorting arrays and collections.

Drawbacks of Merge Sort


Explanation: While Merge Sort excels in efficiency, it's not without its limitations.

Space Complexity: Merge Sort requires additional memory proportional to the input size for its merging
process, impacting its suitability in memory-constrained environments.
Memory Overhead: The need for extra space during sorting might pose challenges when dealing with exceptionally large datasets, potentially affecting its practicality.


In-place Merging Challenges: Merge Sort's typical implementation involves extra memory allocation for merging, which might limit its application in memory-sensitive scenarios.


Implement a merge sort algorithm to sort an array of elements in decreasing order.

The Merge Sort algorithm sorts an array by dividing it into smaller halves, sorting these smaller halves, and
merging them back together in the desired order. To achieve a decreasing order, the comparison during
merging needs to be altered, ensuring larger elements precede smaller ones during the merge step.

void merge(vector<int>& arr, int l, int m, int r) {
    int n1 = m - l + 1;
    int n2 = r - m;
    vector<int> L(n1), R(n2);

    for (int i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (int j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2) {
        if (L[i] >= R[j]) { // Comparison flipped to sort in descending order
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}

void mergeSort(vector<int>& arr, int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}

Complexity Analysis:

Time Complexity: O(n log n) - Merge Sort's time complexity remains unchanged in both ascending and
descending order sorting as it requires the same number of comparisons and divisions.
Space Complexity: O(n) - Merge Sort uses additional memory for temporary arrays during the merging
process, resulting in linear space usage.

Quick Sort:
QuickSort is a sorting algorithm based on the Divide and Conquer algorithm that picks an element as a pivot
and partitions the given array around the picked pivot by placing the pivot in its correct position in the sorted
array.

The key process in quickSort is a partition(). The target of partitions is to place the pivot (any element can be
chosen to be a pivot) at its correct position in the sorted array and put all smaller elements to the left of the
pivot, and all greater elements to the right of the pivot.

Partition is done recursively on each side of the pivot after the pivot is placed in its correct position and this
finally sorts the array.

When it comes to picking a pivot point:


1. You can always use the first item.

2. Or you might always go for the last item.(as shown in the partition algorithm)

3. Another way is to randomly select any item.

4. You could also choose the one right in the middle.
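Option 4 above is often refined into the "median-of-three" rule: use the median of the first, middle, and last elements as the pivot. A minimal sketch (my illustration, not part of the lesson):

```cpp
// Return the index (lo, mid, or hi) holding the median of arr[lo], arr[mid],
// and arr[hi]. Pivoting on this index avoids the worst case on sorted input,
// since a sorted range then yields a perfectly balanced split.
int medianOfThree(const int arr[], int lo, int hi) {
    int mid = lo + (hi - lo) / 2;
    int a = arr[lo], b = arr[mid], c = arr[hi];
    if ((a <= b && b <= c) || (c <= b && b <= a)) return mid;
    if ((b <= a && a <= c) || (c <= a && a <= b)) return lo;
    return hi;
}
```

Before partitioning, the element at the returned index would be swapped to wherever the partition routine expects its pivot (e.g. arr[high] in the code below).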

Partition Algorithm:
The logic is simple, we start from the leftmost element and keep track of the index of smaller (or equal)
elements as i. While traversing, if we find a smaller element, we swap the current element with arr[i]. Otherwise,
we ignore the current element.
Let us understand the working of partition and the Quick Sort algorithm with the help of the following example:
Consider: arr[] = {10, 80, 30, 90, 40}.

Compare 10 with the pivot; as it is less than the pivot, arrange it accordingly.

Compare 80 with the pivot. It is greater than pivot.

Compare 30 with pivot. It is less than pivot so arrange it accordingly.

Compare 90 with the pivot. It is greater than the pivot.

Arrange the pivot in its correct position.

As the partition process is done recursively, it keeps on putting the pivot in its actual position in the sorted array.
Repeatedly putting pivots in their actual position makes the array sorted.

Initial partition on the main array:

Partitioning of the subarrays:

int partition(int arr[], int low, int high) {
    // choose the last element as the pivot
    int pivot = arr[high];
    int i = (low - 1);
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(arr[i], arr[j]);
        }
    }
    swap(arr[i + 1], arr[high]);
    return (i + 1);
}

void quickSort(int arr[], int low, int high) {
    // when low is less than high
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

Best Case: When the pivot consistently splits the array evenly, leading to balanced partitions, resulting in a fast
sorting time of Ω(N log N).

Average Case: Quicksort usually performs really well on average, taking θ(N log N) time. It's considered one of
the fastest sorting methods in practice.

Worst Case: If the pivot consistently causes highly unbalanced partitions, like when the array is already sorted,
it could take O(N^2) time. Techniques like choosing better pivots (e.g., median of three) or using randomized
algorithms (Randomized Quicksort) help avoid this.

Auxiliary Space: Normally O(1), but considering the space used by the recursive stack, it could go up to O(N) in
the worst case.
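These cases follow from the partition recurrences (a standard analysis, added here for reference):

```latex
\text{Balanced split (best/average):}\quad T(n) = 2\,T(n/2) + \Theta(n) \;\Rightarrow\; T(n) = \Theta(n \log n)
\text{One-sided split (worst):}\quad T(n) = T(n-1) + \Theta(n) \;\Rightarrow\; T(n) = \Theta(n^2)
```

The worst-case recurrence arises exactly when the pivot lands at an end of the range every time, e.g. always picking the last element of an already sorted array.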

Randomized QuickSort: Randomized quick sort is designed to decrease the chances of the algorithm running in its worst-case time complexity of O(n^2). The worst case of quick sort arises when the input is an already sorted list, leading to n(n – 1)/2 comparisons.

// Function to generate a random pivot index
int getRandomPivot(int arr[], int start, int end) {
    // Note: in practice, seed the generator once (e.g. in main), not on every call
    srand(time(NULL)); // Seed for random number generator
    int randomIndex = start + rand() % (end - start + 1); // Random index within [start, end]
    return randomIndex;
}

Stability of Quick Sort:

QuickSort, in its standard implementation, is not a stable sorting algorithm. Stability in sorting algorithms refers
to the preservation of the relative order of equal elements.

In QuickSort, the partitioning step does not guarantee the preservation of the relative order of equal elements.
When two elements are considered equal by the comparison function and are swapped during the partitioning
step, their original order may not be maintained in the sorted output.

For example, if you have two elements A and B in the input array such that A appears before B and they are
considered equal, during the partitioning step, A and B might be swapped, potentially changing their order in
the final sorted array. This lack of stability can affect certain applications where maintaining the original order
of equal elements is crucial.

However, there are variations and modifications of QuickSort, like using a different pivot selection strategy or
tweaking the partitioning step, that can be designed to maintain stability by considering the original order of
equal elements. These modifications may impact the efficiency or general implementation of the algorithm.

Application of Quicksort:
Commercial computing: used in various government and private organizations to sort data such as files by name/date/price, students by roll number, account profiles by a given ID, etc.
Sorting supports information searching; as Quicksort is among the fastest sorting algorithms in practice, it is widely used to prepare data for searching
It is used wherever a stable sort is not needed
Quicksort is a cache-friendly algorithm, as it has good locality of reference when used on arrays
It is tail-recursive, so tail-call optimization can be applied
It is an in-place sort that requires no extra storage beyond the recursion stack
It is used in operations research and event-driven simulation
In numerical computation and scientific research, many efficiency-critical algorithms use priority queues, and quick sort is used for the sorting involved
Variants of Quicksort are used to select the Kth smallest or largest elements
It is used to implement sorting for primitive-type methods
If data is sorted, then searching for information becomes easy and efficient.

Merge Sort vs Quick Sort


Both Quick Sort and Merge Sort are efficient sorting algorithms based on the divide-and-conquer strategy, yet
they differ in their approach and suitability for various scenarios:
# Quick Sort:
Partitioning Method: It partitions the array flexibly, not necessarily into halves, based on a pivot element
Worst Case Complexity: O(n^2) in rare instances where poor pivot selection occurs
Suitable for: Smaller arrays and often outperforms other algorithms for small datasets due to its speed
Storage Requirement: Requires less additional memory as it sorts in place
Efficiency: While highly efficient on average, it can be less effective for larger datasets
Sorting Method: Quick Sort is an internal sorting algorithm
Stability: It's generally unstable, meaning it might change the relative order of equal elements
Preferred for: Arrays where speed is crucial, especially for smaller datasets.

# Merge Sort:
Partitioning Method: Always divides the array into two equal halves, ensuring balanced splitting
Worst Case Complexity: O(n log n) consistently, regardless of the input
Suitable for: Works efficiently on any size of datasets, offering predictable performance
Storage Requirement: Needs additional memory space for auxiliary arrays during merging
Efficiency: More consistent and efficient for larger datasets due to its reliable performance
Sorting Method: Merge Sort is an external sorting algorithm
Stability: It maintains stability, preserving the order of equal elements
Preferred for: Linked Lists and scenarios where consistent performance on any dataset size is essential.

This comparison illustrates their distinct characteristics, aiding in choosing the suitable algorithm based on dataset size, stability requirements, and expected performance.
Q. Given an array arr[] of size N and a number K, where K is smaller than the size of the array. Find the K’th
smallest element in the given array. Given that all array elements are distinct.
Input: arr[] = {7, 10, 4, 3, 20, 15}, K = 3 

Output: 7

Input: arr[] = {7, 10, 4, 3, 20, 15}, K = 4 

Output: 10

Code:

void swap(int* a, int* b) {
    int temp = *a;
    *a = *b;
    *b = temp;
}

int partition(int arr[], int l, int r) {
    int x = arr[r], i = l;
    for (int j = l; j <= r - 1; j++) {
        if (arr[j] <= x) {
            swap(&arr[i], &arr[j]);
            i++;
        }
    }
    swap(&arr[i], &arr[r]);
    return i;
}

int kthSmallest(int arr[], int l, int r, int K) {
    if (K > 0 && K <= r - l + 1) {
        int pos = partition(arr, l, r);
        if (pos - l == K - 1)
            return arr[pos];
        if (pos - l > K - 1)
            return kthSmallest(arr, l, pos - 1, K);
        return kthSmallest(arr, pos + 1, r, K - pos + l - 1);
    }
    return INT_MAX;
}

Explanation: QuickSort's partition step can be reused here; this technique is known as QuickSelect. In QuickSort, we pick a pivot element, move it to its correct position, and partition the surrounding array. The idea is not to do a complete QuickSort, but to stop at the point where the pivot itself is the K'th smallest element. Likewise, we do not recurse into both the left and right sides of the pivot, only into the side that contains the K'th element, according to the pivot's position.

Run the quick sort partition step on the input array

In this step, pick a pivot element and move it to its correct position

Now, if the index of the pivot (relative to l) equals K-1, return that value; else if it is greater than K-1, recur for the left subarray; otherwise recur for the right subarray

Repeat this process until the K'th smallest element is found
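For comparison (not part of the lesson), the C++ standard library exposes the same selection idea through std::nth_element, which partially partitions the range so the element at the given position is the one that would be there in sorted order, in average O(n) time:

```cpp
#include <algorithm>
#include <vector>

// Return the K'th smallest element (1-based) using the standard library's
// quickselect-style selection. The vector is taken by value because
// std::nth_element reorders the range it is given.
int kthSmallestStd(std::vector<int> v, int K) {
    std::nth_element(v.begin(), v.begin() + (K - 1), v.end());
    return v[K - 1];
}
```

On the example above, kthSmallestStd({7, 10, 4, 3, 20, 15}, 3) yields 7, matching the hand-written quickselect.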

Q. Inversion Count
Two elements of an array a, a[i] and a[j] form an inversion if a[i] > a[j] and i < j. Given an array of integers. Find
the Inversion Count in the array.

Input: arr[] = {8, 4, 2, 1}

Output: 6

Explanation: Given array has six inversions: (8, 4), (4, 2), (8, 2), (8, 1), (4, 1), (2, 1).

Naive Approach: Traverse through the array, and for every index, find the number of smaller elements on the
right side of the array. This can be done using a nested loop. Sum up the counts for all indices in the array and
print the sum.
Naive Approach Code: 

int getInvCount(int arr[], int n) {
    int inv_count = 0;
    for (int i = 0; i < n - 1; i++) {
        for (int j = i + 1; j < n; j++) {
            if (arr[i] > arr[j])
                inv_count++;
        }
    }
    return inv_count;
}

Time Complexity: O(N^2), Two nested loops are needed to traverse the array from start to end.

Auxiliary Space: O(1), No extra space is required.

Q. What if we have an array made up of two sorted subarrays? What can be said about the inversions involving a given element?

Compare Elements Efficiently: For each element in one array, find the count of elements smaller than it in the
other array. This is easier due to the sorted nature of the arrays.

Use Merge Approach: Employ a merge-like technique to efficiently count inversions while merging the two
arrays. As you merge, track inversions when elements from the second array are smaller than elements from
the first array.

The sorted nature makes it easier to navigate and count inversions by using techniques like merging or efficient
comparisons between the arrays' elements.

//Efficient Approach using merge sort


Code:

// Forward declarations
int _mergeSort(int arr[], int temp[], int left, int right);
int merge(int arr[], int temp[], int left, int mid, int right);

// This function sorts the input array and returns the number of
// inversions in the array
int mergeSort(int arr[], int array_size) {
    int temp[array_size];
    return _mergeSort(arr, temp, 0, array_size - 1);
}

// An auxiliary recursive function that sorts the input array and
// returns the number of inversions in the array.
int _mergeSort(int arr[], int temp[], int left, int right) {
    int mid, inv_count = 0;
    if (right > left) {
        // Divide the array into two parts and
        // call _mergeSort() for each of the parts
        mid = (right + left) / 2;

        // Inversion count will be the sum of inversions in the
        // left part, the right part, and those found while merging
        inv_count += _mergeSort(arr, temp, left, mid);
        inv_count += _mergeSort(arr, temp, mid + 1, right);

        // Merge the two parts
        inv_count += merge(arr, temp, left, mid + 1, right);
    }
    return inv_count;
}

int merge(int arr[], int temp[], int left, int mid, int right) {
    int i, j, k;
    int inv_count = 0;

    i = left;
    j = mid;
    k = left;
    while ((i <= mid - 1) && (j <= right)) {
        if (arr[i] <= arr[j]) {
            temp[k++] = arr[i++];
        } else {
            temp[k++] = arr[j++];
            // All remaining elements of the left subarray
            // (arr[i..mid-1]) are greater than arr[j]
            inv_count = inv_count + (mid - i);
        }
    }

    // Copy the remaining elements of the left subarray
    // (if there are any) to temp
    while (i <= mid - 1)
        temp[k++] = arr[i++];

    // Copy the remaining elements of the right subarray
    // (if there are any) to temp
    while (j <= right)
        temp[k++] = arr[j++];

    // Copy back the merged elements to the original array
    for (i = left; i <= right; i++)
        arr[i] = temp[i];

    return inv_count;
}

Time Complexity: O(N * log N), The algorithm used is divide and conquer i.e. merge sort whose complexity is
O(n log n).

Auxiliary Space: O(N), Temporary array.

Explanation:

Main Idea: Use Merge sort with modification that every time an unsorted pair is found increment count by one
and return count at the end.

Steps:
The idea is similar to merge sort, divide the array into two equal or almost equal halves in each step until the
base case is reached
Create a function merge that counts the number of inversions when two halves of the array are merged,
Create two indices i and j, i is the index for the first half, and j is an index of the second half.
If a[i] is greater than a[j], then there are (mid – i) inversions, because the left and right subarrays are sorted, so all the remaining elements in the left subarray (a[i], a[i+1], …, a[mid-1]) will be greater than a[j]
Create a recursive function to divide the array into halves and find the answer by summing the number of
inversions in the first half, the number of inversions in the second half and the number of inversions by
merging the two.

The base case of recursion is when there is only one element in the given half
Print the answer.
Cycle Sort
Specialty:
In-place Sorting: Cycle Sort is an in-place sorting algorithm, meaning it minimizes the use of additional
memory space during sorting, making it memory-efficient.

Minimal Writes: It minimizes the number of writes or swaps to sort the array, making it preferable for scenarios
where write operations are costly or restricted.

Where to Use:
Range-bound Arrays: Useful for problems involving arrays containing numbers in a given range.
Memory Constraints: Suitable for memory-constrained environments or scenarios where additional memory usage needs to be minimized.

Reduced Writes: Effective when minimizing write operations to the dataset is crucial, like with non-volatile
memory or flash memory devices where write endurance is a concern.

The basic idea behind cycle sort is to divide the input array into cycles, where each cycle consists of elements
that belong to the same position in the sorted output array. The algorithm then performs a series of swaps to
place each element in its correct position within its cycle, until all cycles are complete and the array is sorted.
Step-wise algorithm:
Start with an unsorted array of n elements
For each cycle start position, take the element there as the item, and initialize a position counter pos to the cycle start
Compare the item with every element to its right; for every smaller element found, increment pos
If pos still equals the cycle start after all comparisons, the item is already in its correct place; move to the next cycle start
Otherwise, place the item at position pos (skipping over duplicates), and continue following the cycle until the displaced item returns to the cycle start position
Repeat until all cycles have been completed.

The array is now sorted.

Code:

void cycleSort(int arr[], int n) {
    // count number of memory writes
    int writes = 0;

    for (int cycle_start = 0; cycle_start <= n - 2; cycle_start++) {
        int item = arr[cycle_start];

        // Find where to put the item by counting smaller elements
        int pos = cycle_start;
        for (int i = cycle_start + 1; i < n; i++)
            if (arr[i] < item)
                pos++;

        // Item is already in the correct position
        if (pos == cycle_start)
            continue;

        // Skip duplicates
        while (item == arr[pos])
            pos += 1;

        // Put the item in its right position
        if (pos != cycle_start) {
            swap(item, arr[pos]);
            writes++;
        }

        // Rotate the rest of the cycle
        while (pos != cycle_start) {
            pos = cycle_start;
            for (int i = cycle_start + 1; i < n; i++)
                if (arr[i] < item)
                    pos += 1;
            while (item == arr[pos])
                pos += 1;
            if (item != arr[pos]) {
                swap(item, arr[pos]);
                writes++;
            }
        }
    }
}

Time Complexity Analysis: O(n^2). The algorithm's two nested loops iterate through the array elements, leading to quadratic time complexity.

Auxiliary Space: O(1), no extra space used.

Method 2: This method is only applicable when the given array values are in the range 1 to N or 0 to N. In this method, we do not need to rotate cycles explicitly.
Approach: All the given array values should be in the range 1 to N or 0 to N. If the range is 1 to N, then every array element's correct position is index == value - 1, i.e., at index 0 the value will be 1, at index 1 the value will be 2, and so on up to the nth value.

Similarly, for values 0 to N, the correct index of each array element is the same as its value, i.e., 0 will be at index 0, 1 at index 1, and so on.

Code:

void cyclicSort(int arr[], int n) {
    int i = 0;
    while (i < n) {
        int correct = arr[i] - 1;
        if (arr[i] != arr[correct]) {
            swap(arr[i], arr[correct]);
        } else {
            i++;
        }
    }
}

Time Complexity: O(n). Each swap places at least one element in its correct position, so the while loop runs at most about 2n times in total.

Q. What is the worst number of swaps in cyclic sort for an array of length n?

The worst-case scenario in Cycle Sort occurs when every element is out of place and needs to be moved to its correct sorted position. This situation results in the maximum number of swaps.

int cycleSort(vector<int>& arr) {
    int n = arr.size();
    int swaps = 0;

    for (int cycleStart = 0; cycleStart < n - 1; cycleStart++) {
        int item = arr[cycleStart];

        int pos = cycleStart;
        for (int i = cycleStart + 1; i < n; i++)
            if (arr[i] < item)
                pos++;

        if (pos == cycleStart)
            continue;

        while (item == arr[pos])
            pos++;

        if (pos != cycleStart) {
            swap(item, arr[pos]);
            swaps++;
        }

        while (pos != cycleStart) {
            pos = cycleStart;
            for (int i = cycleStart + 1; i < n; i++)
                if (arr[i] < item)
                    pos++;
            while (item == arr[pos])
                pos++;
            if (item != arr[pos]) {
                swap(item, arr[pos]);
                swaps++;
            }
        }
    }
    return swaps;
}

int main() {
    vector<int> elements = {5, 4, 3, 2, 1};
    int numSwaps = cycleSort(elements);
    cout << "Array sorted using Cycle Sort with " << numSwaps << " swaps: ";
    for (int num : elements)
        cout << num << " ";
    cout << endl;
    return 0;
}

Q. Missing Number [Leetcode - 268]


Given an array nums containing n distinct numbers in the range [0, n], return the only number in the range that
is missing from the array.

Example 1:
Input: nums = [3,0,1]

Output: 2

Explanation: n = 3 since there are 3 numbers, so all numbers are in the range [0,3]. 2 is the missing number in
the range since it does not appear in nums.
Example 2:
Input: nums = [0,1]

Output: 2

Explanation: n = 2 since there are 2 numbers, so all numbers are in the range [0,2]. 2 is the missing number in
the range since it does not appear in nums.
Code:

#include <vector>
using namespace std;

class Solution {
public:
    int missingNumber(vector<int>& nums) {
        int start = 0;
        // Cyclic sort: place each value at the index equal to the value
        while (start < (int)nums.size()) {
            int num = nums[start];
            if (num < (int)nums.size() && num != start) {
                swap(nums[start], nums[num]);
            } else {
                start++;
            }
        }
        // The first index whose value does not match is the missing number
        for (int i = 0; i < (int)nums.size(); ++i) {
            if (nums[i] != i) {
                return i;
            }
        }
        return nums.size();
    }
};

Explanation: Use cycle sort, then traverse the array; the first index i where arr[i] != i is the missing number.
Time: O(n), for cyclic sort in a given range

Space: O(1), only const extra space is used

Q. Find the duplicate number [Leetcode - 287]


Given an array of integers nums containing n + 1 integers where each integer is in the range [1, n] inclusive.

There is only one repeated number in nums, return this repeated number.

Example 1:
Input: nums = [1,3,4,2,2]

Output: 2

Example 2:
Input: nums = [3,1,3,4,2]

Output: 3

Code:

class Solution {
public:
    // Approach 1: cyclic sort (simple, but modifies the array)
    int findDuplicateCyclicSort(vector<int>& nums) {
        int n = nums.size();
        for (int i = 0; i < n;) {
            if (nums[i] != nums[nums[i] - 1])
                swap(nums[i], nums[nums[i] - 1]);
            else
                i++;
        }
        for (int i = 0; i < n; i++) {
            if (nums[i] != i + 1)
                return nums[i];
        }
        return -1;
    }

    // Approach 2: since the array must not be modified and no extra space
    // may be used, we use the tortoise-and-hare method of cycle detection
    int findDuplicate(vector<int>& nums) {
        int slow = nums[0], fast = nums[0];
        while (1) {
            slow = nums[slow];
            fast = nums[nums[fast]];
            if (slow == fast)
                break;
        }
        slow = nums[0];
        while (slow != fast) {
            slow = nums[slow];
            fast = nums[fast];
        }
        return slow;
    }
};

Explanation: We make use of cyclic sort; after sorting, whichever element does not match its corresponding index is the duplicate number. However, the problem states that we cannot modify the given array and cannot use extra space either, which is why the tortoise-and-hare cycle-detection method is used.

Time: O(n), to traverse the array

Space: O(1), no extra space used

Q. Given an array nums of n integers where nums[i] is in the range [1, n], return an array of all the integers in
the range [1, n] that do not appear in nums.

Example 1:
Input: nums = [4,3,2,7,8,2,3,1]

Output: [5,6]

Example 2:
Input: nums = [1,1]

Output: [2]

Code:

vector<int> findDisappearedNumbers(vector<int>& nums) {
    int i = 0;
    vector<int> v;
    // Cyclic sort: place each value x at index x-1
    while (i < (int)nums.size()) {
        if (nums[i] != nums[nums[i] - 1]) {
            swap(nums[i], nums[nums[i] - 1]);
        } else {
            i++;
        }
    }
    // Every index whose value does not match holds a missing number
    for (int i = 0; i < (int)nums.size(); i++) {
        if (nums[i] != i + 1) {
            v.push_back(i + 1);
        }
    }
    return v;
}

Explanation: Use cycle sort, then check if nums[i] != i+1; if so, i+1 is one of the disappeared numbers.

Time Complexity: O(n), cycle sort for element in given range

Space complexity: O(1), no extra space being used

Q. First Missing Positive
Given an unsorted integer array nums, return the smallest missing positive integer.

You must implement an algorithm that runs in O(n) time and uses O(1) auxiliary space.

Example 1:
Input: nums = [1,2,0]

Output: 3

Explanation: The numbers in the range [1,2] are all in the array.

Example 2:
Input: nums = [3,4,-1,1]

Output: 2

Explanation: 1 is in the array but 2 is missing.

Example 3:
Input: nums = [7,8,9,11,12]

Output: 1

Explanation: The smallest positive integer 1 is missing.


Code:

class Solution {
public:
    int firstMissingPositive(vector<int>& A) {
        int N = A.size();
        if (N == 0)
            return 1;

        // Cyclic sort: move each value to index value-1,
        // ignoring values outside the range 1..N
        for (int i = 0; i < N; i++) {
            if (A[i] < 0)
                continue;
            int corr_idx = A[i] - 1;
            while (corr_idx != i) {
                if (corr_idx < 0 || corr_idx >= N || A[corr_idx] == A[i])
                    break;
                swap(A[i], A[corr_idx]);
                corr_idx = A[i] - 1;
            }
        }

        // The first index whose value is out of place gives the answer
        int i = 0;
        for (; i < N; i++)
            if (A[i] - 1 != i)
                return i + 1;
        return i + 1;
    }
};

Explanation: 

For an array of size N, if there are no missing positives, all numbers will exist from 1...N. Thinking this way, we
can find the correct position of each number in O(1) time, as the correct index for A[i] is A[i]-1.

Do a cyclic sort in O(N) time to put numbers in their correct places. For numbers that are out of 1...N bounds
just ignore them.

In the second iteration, the first number found not in its correct position identifies the missing one.

Time: O(N), for cycle sort

Space: O(1), no extra space used

THANK YOU!
