
DH SIR’S CLASSROOM

LEARNING MATERIAL
Department: Computer Science and Technology
|||| Semester: 3rd |||| Subject: ALGORITHM |||| UNIT: (2) Sorting |||| Teacher: Debasish Hati ||||
_________________________________________________________________________________________________________

CONTENTS:
1. The Sorting Problem
 Bubble sort
 Selection sort
 Insertion sort
 Merge sort
 Quick sort
 Shell sort
 Heap sort
2. Computation of best, average and worst case time complexity of all the above sorting algorithms.
3. Linear time sorting
 Count sort
 Bucket sort
 Radix sort

 What is Sorting?

Sorting is the process of arranging data in a particular order, typically in ascending or descending order. Sorting
algorithms are a fundamental concept in computer science because they are used in various applications, from
organizing data for easier search and retrieval to optimizing other algorithms that require sorted input data.

 Importance of Sorting

 Efficiency: Sorted data makes other algorithms, like searching and merging, more efficient.
 Data Organization: Sorting is essential in organizing data in databases, spreadsheets, and files, making it easier
to analyze and understand.
 Optimization: Many algorithms and operations (like binary search) perform better on sorted data.

 Types of Sorting Algorithms

Sorting algorithms can be categorized based on several factors, such as complexity, stability, and where the
sorting is performed.

1. Based on Complexity

 Time Complexity: How the algorithm's running time increases as the size of the input increases.
o O(n^2): Slower for large datasets (e.g., Bubble Sort, Insertion Sort, Selection Sort).
o O(n log n): More efficient for larger datasets (e.g., Merge Sort, Quick Sort, Heap Sort).
o O(n): Linear time complexity, very efficient but only applicable in specific cases (e.g., Counting Sort,
Radix Sort).
 Space Complexity: How much additional memory the algorithm uses aside from the input data.
o In-Place Algorithms: Require a constant amount of extra space (e.g., Quick Sort, Heap Sort).
o Non-In-Place Algorithms: Require additional memory proportional to the input size (e.g., Merge
Sort).

2. Based on Stability

 Stable Sorts: Maintain the relative order of equal elements (e.g., Merge Sort, Insertion Sort).
 Unstable Sorts: May change the relative order of equal elements (e.g., Quick Sort, Heap Sort).

3. Based on Methodology

 Comparison-Based Sorting: Elements are compared with each other to determine the order (e.g., Bubble Sort,
Merge Sort, Quick Sort).
 Non-Comparison Sorting: Sorting is done without directly comparing elements (e.g., Counting Sort, Radix
Sort).

 Common Sorting Algorithms:-


1. Bubble Sort

 Method: Repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the
wrong order.
 Complexity: O(n^2) time complexity, not efficient for large lists.
 Stability: Stable.

2. Selection Sort

 Method: Repeatedly selects the smallest (or largest) element from the unsorted portion and places it at the
beginning.
 Complexity: O(n^2) time complexity.
 Stability: Unstable.

3. Insertion Sort

 Method: Builds the sorted list one element at a time by repeatedly taking the next element and inserting it
into its correct position.
 Complexity: O(n^2) time complexity, but performs well for small or nearly sorted lists.
 Stability: Stable.

4. Merge Sort

 Method: A divide-and-conquer algorithm that divides the list into halves, sorts each half, and then merges
the sorted halves.
 Complexity: O(n log n) time complexity.
 Stability: Stable.

5. Quick Sort

 Method: A divide-and-conquer algorithm that selects a pivot element and partitions the array around the
pivot, recursively sorting the sub-arrays.
 Complexity: O(n log n) average time complexity, but O(n^2) in the worst case.
 Stability: Unstable.
6. Heap Sort

 Method: Builds a max heap (or min heap) from the input data and repeatedly extracts the maximum (or
minimum) element.
 Complexity: O(n log n) time complexity.
 Stability: Unstable.

7. Counting Sort

 Method: Counts the number of occurrences of each distinct element and uses this information to place
elements in the correct position.
 Complexity: O(n + k), where k is the range of the input values.
 Stability: Stable.

8. Radix Sort

 Method: Processes each digit of the numbers, starting from the least significant digit to the most significant,
using counting sort as a subroutine.
 Complexity: O(nk), where k is the number of digits.
 Stability: Stable.

 EXTERNAL SORTING VS INTERNAL SORTING:-


External Sorting vs. Internal Sorting are two different methods of sorting data, primarily distinguished by
where the data is stored during the sorting process and the size of the data being sorted.

Internal Sorting

 Definition: Internal sorting refers to sorting methods that are performed within the main memory (RAM) of a
computer.
 Data Size: Suitable for small to moderately large datasets that can fit entirely within the main memory.
 Common Algorithms:
o Quick Sort: A fast, divide-and-conquer algorithm, but not stable.
o Merge Sort: A stable, divide-and-conquer algorithm that uses additional memory.
o Heap Sort: Efficient with O(n log n) time complexity, and does not require additional memory.
o Insertion Sort: Simple but efficient for small or nearly sorted datasets.
 Performance: Generally faster because accessing data in memory is quicker compared to accessing data on
disk.
 Memory Requirement: Requires that the entire dataset fit in the computer's main memory.

External Sorting

 Definition: External sorting is used when the data being sorted is too large to fit into the main memory,
requiring the use of external storage such as disk drives.
 Data Size: Suitable for very large datasets that exceed the capacity of the main memory.
 Common Algorithms:
o External Merge Sort: The most common external sorting algorithm, which involves dividing the data
into chunks that fit in memory, sorting each chunk internally, and then merging the sorted chunks.
 Performance: Slower compared to internal sorting due to the need to read and write data to external storage.
However, it's optimized to minimize the number of I/O operations.
 Memory Requirement: Works with limited main memory by processing data in blocks that can fit in memory,
while the bulk of the data remains on disk.

Key Differences

 Memory Usage: Internal sorting requires the entire dataset to fit in main memory, whereas external sorting
handles data that exceeds the main memory capacity.
 Speed: Internal sorting is generally faster due to quicker memory access, while external sorting is slower due
to reliance on disk I/O.
 Use Cases: Internal sorting is used for smaller datasets, while external sorting is necessary for large datasets
that cannot be loaded entirely into memory.

When to Use Each:

 Internal Sorting: Use when working with small to medium-sized datasets that can fit into the available RAM.
 External Sorting: Use when dealing with large datasets, such as database sorting, where data needs to be
sorted but cannot fit into memory all at once.

 Bubble Sort:
 What is Bubble sort:
 Bubble sort is the easiest sorting algorithm to implement.
 It is inspired by observing the behavior of air bubbles over foam.
 It is an in-place sorting algorithm.
 It uses no auxiliary data structures (extra space) while sorting.

 How Bubble Sort Works?


 Bubble sort uses multiple passes (scans) through an array.
 In each pass, bubble sort compares the adjacent elements of the array.
 It then swaps the two elements if they are in the wrong order.
 In each pass, bubble sort places the next largest element in its proper position.
 In short, it bubbles the largest element up to its correct position at the end of the array.

 Bubble Sort Algorithm-


The bubble sort algorithm is given below-

 for(int pass=1 ; pass<=n-1 ; ++pass) // Making passes through array


 {
 for(int i=0 ; i<=n-2 ; ++i)
 {
 if(A[i] > A[i+1]) // If adjacent elements are in wrong order
 swap(i,i+1,A); // Swap them
 }
 }
 //swap function : Exchange elements from array A at position x,y
 void swap(int x, int y, int[] A)
 {
 int temp = A[x];
 A[x] = A[y];
 A[y] = temp;
 return ;
 }
 // pass : Variable to count the number of passes that are done till now
 // n : Size of the array
 // i : Variable to traverse the array A
 // swap() : Function to swap two numbers from the array
 // x,y : Indices of the array that needs to be swapped
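The pseudocode above translates almost line for line into C. Below is one possible self-contained
rendering (the array values are chosen to match the worked example that follows); treat it as an
illustrative sketch rather than the canonical implementation:

    #include <stdio.h>

    /* Exchange elements of A at positions x and y. */
    void swap(int x, int y, int A[])
    {
        int temp = A[x];
        A[x] = A[y];
        A[y] = temp;
    }

    /* Plain bubble sort: n-1 passes; each pass bubbles the next
       largest element to its final position at the end. */
    void bubbleSort(int A[], int n)
    {
        for (int pass = 1; pass <= n - 1; ++pass)
            for (int i = 0; i <= n - 2; ++i)
                if (A[i] > A[i + 1])      /* adjacent pair in wrong order */
                    swap(i, i + 1, A);
    }

    int main(void)
    {
        int A[] = {6, 2, 11, 7, 5};
        int n = sizeof A / sizeof A[0];
        bubbleSort(A, n);
        for (int i = 0; i < n; ++i)
            printf("%d ", A[i]);          /* prints: 2 5 6 7 11 */
        printf("\n");
        return 0;
    }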

 Bubble Sort Example-


Consider the following array A = [6, 2, 11, 7, 5].

Now, we shall implement the above bubble sort algorithm on this array.
Step-01:
▫ We have pass=1 and i=0.
▫ We perform the comparison A[0] > A[1] and swap if the 0th element is greater than the 1st element.
▫ Since 6 > 2, so we swap the two elements.

Step-02:
▫ We have pass=1 and i=1.
▫ We perform the comparison A[1] > A[2] and swap if the 1st element is greater than the 2nd element.
▫ Since 6 < 11, so no swapping is required.
Step-03:
▫ We have pass=1 and i=2.
▫ We perform the comparison A[2] > A[3] and swap if the 2nd element is greater than the 3rd element.
▫ Since 11 > 7, so we swap the two elements.

Step-04:
▫ We have pass=1 and i=3.
▫ We perform the comparison A[3] > A[4] and swap if the 3rd element is greater than the 4th element.
▫ Since 11 > 5, so we swap the two elements.

Finally, after the first pass, we see that the largest element 11 has reached its correct position- 2, 6, 7, 5, 11
Step-05:
▫ Similarly after pass=2, element 7 reaches its correct position.
▫ The modified array after pass=2 is- 2, 6, 5, 7, 11

Step-06:
▫ Similarly after pass=3, element 6 reaches its correct position.
▫ The modified array after pass=3 is- 2, 5, 6, 7, 11

Step-07:
▫ No further improvement is done in pass=4.
▫ This is because at this point, elements 2 and 5 are already present at their correct positions.
▫ The loop terminates after pass=4.
▫ Finally, the array after pass=4 is- 2, 5, 6, 7, 11
 Optimization Of Bubble Sort Algorithm-
 If the array gets sorted after a few passes like one or two, then ideally the algorithm should
terminate.
 But the above algorithm still executes the remaining passes, which costs extra comparisons.

 Optimized Bubble Sort Algorithm-


The optimized bubble sort algorithm is shown below-

 for (int pass=1 ; pass<=n-1 ; ++pass)


 {
 flag=0 // flag records whether any swap was done in this pass
 for (int i=0 ; i<=n-2 ; ++i)
 {
 if(A[i] > A[i+1])
 {
 swap(i,i+1,A);
 flag=1 // After swap, set flag to 1
 }
 }
 if(flag == 0) break; // No swaps indicates we can terminate loop
 }
 void swap(int x, int y, int[] A)
 {
 int temp = A[x];
 A[x] = A[y];
 A[y] = temp;
 return;
 }

 Explanation-
 To avoid extra comparisons, we maintain a flag variable.
 The flag variable helps to break the outer loop of passes after obtaining the sorted array.
 The initial value of the flag variable is set to 0.
 The zero value of flag variable denotes that we have not encountered any swaps.
 Once we need to swap adjacent values for correcting their wrong order, the value of flag variable is
set to 1.
 If we encounter a pass where flag == 0, then it is safe to break the outer loop and declare the array
is sorted.
 Properties-
Some of the important properties of bubble sort algorithm are-

 Bubble sort is a stable sorting algorithm.


 Bubble sort is an in-place sorting algorithm.
 The worst case time complexity of bubble sort algorithm is O(n²).
 The space complexity of bubble sort algorithm is O(1).
 Number of swaps in bubble sort = Number of inversion pairs present in the given array.
 Bubble sort is beneficial when array elements are less and the array is nearly sorted.
 Advantages of Bubble Sort:
 Simple to understand (the biggest remaining element "bubbles" to the top in each pass)
 Simple to write code for (compared to merge and insertion sort)
 Doesn't need much extra memory to run the algorithm (in-place, O(1) space)
 Stable, and with the flag optimization it detects an already sorted array in a single pass
 Disadvantages of Bubble Sort:
 Its O(n²) running time makes it impractical for large arrays.
 It usually performs more swaps than other quadratic sorts such as insertion or selection sort.
 It offers no real advantage over insertion sort, which is equally simple but also adaptive.

 Selection Sort-
 What is Selection Sort:
 Selection sort is one of the easiest approaches to sorting.
 It is inspired by the way in which we sort things out in day-to-day life.
 It is an in-place sorting algorithm because it uses no auxiliary data structures while sorting.

 How Selection Sort Works?


 Consider the following elements are to be sorted in ascending order using selection sort- 6, 2, 11, 7,
5
 Selection sort works as-
▫ It finds the first smallest element (2).
▫ It swaps it with the first element of the unordered list.
▫ It finds the second smallest element (5).
▫ It swaps it with the second element of the unordered list.
▫ Similarly, it continues to sort the given elements.
 As a result, sorted elements in ascending order are- 2, 5, 6, 7, 11

 Selection Sort Algorithm-


Let A be an array with n elements. Then, selection sort algorithm used for sorting is as follows-

 for (i = 0 ; i < n-1 ; i++)


 {
 index = i;
 for(j = i+1 ; j < n ; j++)
 {
 if(A[j] < A[index])
 index = j;
 }
 temp = A[i];
 A[i] = A[index];
 A[index] = temp;
 }
Here,

 i = variable to traverse the array A


 index = variable to store the index of minimum element
 j = variable to traverse the unsorted sub-array
 temp = temporary variable used for swapping
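For completeness, the same algorithm written as a compilable C function (a direct, hedged translation
of the pseudocode above; the function name is illustrative):

    /* Selection sort: repeatedly select the minimum of the unsorted
       suffix and swap it to the front. */
    void selectionSort(int A[], int n)
    {
        for (int i = 0; i < n - 1; i++)
        {
            int index = i;                 /* position of the minimum so far */
            for (int j = i + 1; j < n; j++)
                if (A[j] < A[index])
                    index = j;
            int temp = A[i];               /* swap the minimum into place */
            A[i] = A[index];
            A[index] = temp;
        }
    }

Calling selectionSort on {6, 2, 11, 7, 5} produces 2, 5, 6, 7, 11, matching the example below.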

 Selection Sort Example-


Consider the following elements are to be sorted in ascending order- 6, 2, 11, 7, 5
The above selection sort algorithm works as illustrated below-

 Step-01: For i = 0

 Step-02: For i = 1

 Step-03: For i = 2
 Step-04: For i = 3

 Step-05: For i = 4
Loop gets terminated as ‘i’ becomes 4.
The state of the array after the loops are finished is- 2, 5, 6, 7, 11

With each loop cycle,

 The minimum element in unsorted sub-array is selected.


 It is then placed at the correct location in the sorted sub-array until array A is completely sorted.

 Important Notes-
 Selection sort is not a very efficient algorithm when data sets are large.
 This is indicated by the average and worst case complexities.
 Selection sort uses the minimum number of swap operations, O(n), among all the common comparison-based sorting algorithms.

 Advantages of Selection Sort:


 Selection sort performs far fewer swaps than bubble sort, so it is usually faster in practice.
 Selection sort algorithm is easy to implement.
 Selection sort algorithm can be used for small data sets, although insertion sort is usually better suited
for them.
 Disadvantages of Selection Sort:
 Running time of selection sort algorithm is a poor O(n²) in every case, even on already sorted input.
 Insertion sort algorithm is generally preferable: it is adaptive and stable, while selection sort is neither.

 Insertion Sort-
 What is Insertion Sort:
 Insertion sort is an in-place sorting algorithm.
 It uses no auxiliary data structures while sorting.
 It is inspired by the way in which we sort playing cards.

 How Insertion Sort Works?


 Consider the following elements are to be sorted in ascending order- 6, 2, 11, 7, 5
 Insertion sort works as-
 Firstly,
▫ It selects the second element (2).
▫ It checks whether it is smaller than any of the elements before it.
▫ Since 2 < 6, so it shifts 6 towards right and places 2 before it.
▫ The resulting list is 2, 6, 11, 7, 5.
 Secondly,
▫ It selects the third element (11).
▫ It checks whether it is smaller than any of the elements before it.
▫ Since 11 > (2, 6), so no shifting takes place.
▫ The resulting list remains the same.
 Thirdly,
▫ It selects the fourth element (7).
▫ It checks whether it is smaller than any of the elements before it.
▫ Since 7 < 11, so it shifts 11 towards right and places 7 before it.
▫ The resulting list is 2, 6, 7, 11, 5.
 Fourthly,
▫ It selects the fifth element (5).
▫ It checks whether it is smaller than any of the elements before it.
▫ Since 5 < (6, 7, 11), so it shifts (6, 7, 11) towards right and places 5 before them.
▫ The resulting list is 2, 5, 6, 7, 11.
 As a result, sorted elements in ascending order are- 2, 5, 6, 7, 11
 Insertion Sort Algorithm-
 Let A be an array with n elements. The insertion sort algorithm used for sorting is as follows-
 for (i = 1 ; i < n ; i++)
 {
 key = A [ i ];
 j = i - 1;
 while(j >= 0 && A [ j ] > key)
 {
 A [ j+1 ] = A [ j ];
 j--;
 }
 A [ j+1 ] = key;
 }

 Here,
 i = variable to traverse the array A
 key = variable to store the new number to be inserted into the sorted sub-array
 j = variable to traverse the sorted sub-array
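As a compilable C version of the pseudocode above (a hedged sketch; note the j >= 0 test, so the first
element is compared as well):

    /* Insertion sort: grow a sorted prefix by inserting each
       element into its correct position. */
    void insertionSort(int A[], int n)
    {
        for (int i = 1; i < n; i++)
        {
            int key = A[i];                /* element to insert */
            int j = i - 1;
            while (j >= 0 && A[j] > key)   /* shift larger elements right */
            {
                A[j + 1] = A[j];
                j--;
            }
            A[j + 1] = key;                /* drop key into the gap */
        }
    }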
 Insertion Sort Example-
 Consider the following elements are to be sorted in ascending order- 6, 2, 11, 7, 5
 The above insertion sort algorithm works as illustrated below-
 Step-01: For i = 1

 Step-02: For i = 2

 Step-03: For i = 3

 Working of inner loop when i = 3 (key = 7)

2 6 11 7 5 For j = 2; 11 > 7 so A[3] = 11

2 6 11 11 5 For j = 1; 6 < 7 so loop stops and
A[2] = 7
2 6 7 11 5 After inner loop ends

 Step-04: For i = 4

 Loop gets terminated as ‘i’ becomes 5. The state of the array after the loops are finished is- 2, 5, 6, 7, 11

 With each loop cycle,


 One element is placed at the correct location in the sorted sub-array until array A is completely
sorted.
 Important Notes-
 Insertion sort is not a very efficient algorithm when data sets are large.
 This is indicated by the average and worst case complexities.
 Insertion sort is adaptive: the number of comparisons is smaller if the array is partially sorted.

 Advantages of Insertion Sort:


 It is simple to implement.
 It is efficient on small datasets.
 It is stable (does not change the relative order of elements with equal keys)
 It is in-place (only requires a constant amount O (1) of extra memory space).
 It is an online algorithm, which can sort a list when it is received.
 Disadvantages of Insertion Sort:
 Inefficient for large lists (the running time grows quadratically with the size of the list)
 Generally outperformed by O(n log n) algorithms such as merge sort and quick sort on large inputs
 Works best with arrays; the repeated one-position shifts are costly when elements are far from their
final position

 Merge Sort-
 What is Merge Sort:
 Merge sort is a famous sorting algorithm.
 It uses a divide and conquer paradigm for sorting.
 It divides the problem into sub problems and solves them individually.
 It then combines the results of sub problems to get the solution of the original problem.

 How Merge Sort Works?


 Before learning how merge sort works, let us learn about the merge procedure of merge sort
algorithm.
 The merge procedure of merge sort algorithm is used to merge two sorted arrays into a third array
in sorted order.
 Consider we want to merge the following two sorted sub arrays into a third array in sorted order-

 The merge procedure of merge sort algorithm is given below-


 // L : Left Sub Array , R : Right Sub Array , A : Array
 merge(L, R, A)
 {
 nL = length(L) // Size of Left Sub Array
 nR = length(R) // Size of Right Sub Array
 i=j=k=0
 while(i<nL && j<nR)
 {
 /* When both i and j are valid i.e. when both the sub arrays have elements
to insert in A */
 if(L[i] <= R[j])
 {
 A[k] = L[i]
 k = k+1
 i = i+1
 }
 else
 {
 A[k] = R[j]
 k = k+1
 j = j+1
 }
 }
 // Adding Remaining elements from left sub array to array A
 while(i<nL)
 {
 A[k] = L[i]
 i = i+1
 k = k+1
 }
 // Adding Remaining elements from right sub array to array A
 while(j<nR)
 {
 A[k] = R[j]
 j = j+1
 k = k+1
 }
 }
The above merge procedure of merge sort algorithm is explained in the following steps-

 Step-01:
 Create two variables i and j for left and right sub arrays.
 Create variable k for sorted output array.

 Step-02:
 We have i = 0, j = 0, k = 0.
 Since L[0] < R[0], so we perform A[0] = L[0] i.e. we copy the first element from left sub array to our
sorted output array.
 Then, we increment i and k by 1.
 Then, we have-
 Step-03:
 We have i = 1, j = 0, k = 1.
 Since L[1] > R[0], so we perform A[1] = R[0] i.e. we copy the first element from right sub array to our
sorted output array.
 Then, we increment j and k by 1.
 Then, we have-

 Step-04:
 We have i = 1, j = 1, k = 2.
 Since L[1] > R[1], so we perform A[2] = R[1].
 Then, we increment j and k by 1.
 Then, we have-
 Step-05:
 We have i = 1, j = 2, k = 3.
 Since L[1] < R[2], so we perform A[3] = L[1].
 Then, we increment i and k by 1.
 Then, we have-

 Step-06:
 We have i = 2, j = 2, k = 4.
 Since L[2] > R[2], so we perform A[4] = R[2].
 Then, we increment j and k by 1.
 Then, we have-
 Step-07:
 Clearly, all the elements from the right sub array have been added to the sorted output array.
 So, we exit the first while loop, since its condition while(i<nL && j<nR) fails now that j = nR.
 Then, we add remaining elements from the left sub array to the sorted output array using next
while loop.
 Finally, our sorted output array is-

 Basically,
 After finishing elements from any of the sub arrays, we can add the remaining elements from the
other sub array to our sorted output array as it is.
 This is because left and right sub arrays are already sorted.

Time Complexity
The above mentioned merge procedure takes Θ(n) time.
This is because each iteration copies one element into the output array of size n,
incrementing i or j each time, so the loops run Θ(n) times in total.

 Merge Sort Algorithm-


Merge Sort Algorithm works in the following steps-
 It divides the given unsorted array into two halves- left and right sub arrays.
 The sub arrays are divided recursively.
 This division continues until the size of each sub array becomes 1.
 After each sub array contains only a single element, each sub array is sorted trivially.
 Then, the above discussed merge procedure is called.
 The merge procedure combines these trivially sorted arrays to produce a final sorted array.
 The division procedure of merge sort algorithm which uses recursion is given below-
 // A : Array that needs to be sorted
 MergeSort(A)
 {
 n = length(A)
 if n<2 return
 mid = n/2
 left = new_array_of_size(mid) // Creating temporary array for left
 right = new_array_of_size(n-mid) // and right sub arrays
 for(int i=0 ; i<=mid-1 ; ++i)
 {
 left[i] = A[i] // Copying elements from A to left
 }
 for(int i=mid ; i<=n-1 ; ++i)
 {
 right[i-mid] = A[i] // Copying elements from A to right
 }
 MergeSort(left) // Recursively solving for left sub array
 MergeSort(right) // Recursively solving for right sub array
 merge(left, right, A) // Merging two sorted left/right sub array to final array
 }

 Merge Sort Example-


Consider the following elements have to be sorted in ascending order- 6, 2, 11, 7, 5, 4. The merge sort
algorithm works as-
 Properties-
Some of the important properties of merge sort algorithm are-

 Merge sort uses a divide and conquer paradigm for sorting.


 Merge sort is a recursive sorting algorithm.
 Merge sort is a stable sorting algorithm.
 Merge sort is not an in-place sorting algorithm.
 The time complexity of merge sort algorithm is Θ(nlogn).
 The space complexity of merge sort algorithm is Θ(n).

NOTE
Merge sort is the best sorting algorithm in terms of time complexity Θ(nlogn)
if we are not concerned with auxiliary space used.

 Advantages of Merge Sort:


 Merge sort algorithm is well suited to sorting slow-access data, e.g., data stored on tape drives.
 Merge sort algorithm is better at handling sequentially accessed lists.
 Disadvantages of Merge Sort:
 The running time of merge sort algorithm is Θ(n log n) even in the best case; it cannot take advantage
of input that is already (partially) sorted.
 Merge sort algorithm requires additional memory space of O(n) for the temporary array.

 Quick Sort-
 What is Quick Sort:
 Quick Sort is a famous sorting algorithm.
 It sorts the given data items in ascending order.
 It uses the idea of divide and conquer approach.
 It follows a recursive algorithm.

 Quick Sort Algorithm-


Consider-

 a = Linear Array in memory


 beg = Lower bound of the sub array in question
 end = Upper bound of the sub array in question
Then, Quick Sort Algorithm is as follows-

 Partition_Array (a , beg , end , loc)


 Begin
 Set left = beg , right = end , loc = beg
 Set done = false
 While (not done) do
 While ( (a[loc] <= a[right] ) and (loc ≠ right) ) do
 Set right = right - 1
 end while
 if (loc = right) then
 Set done = true
 else if (a[loc] > a[right]) then
 Interchange a[loc] and a[right]
 Set loc = right
 end if
 if (not done) then
 While ( (a[loc] >= a[left] ) and (loc ≠ left) ) do
 Set left = left + 1
 end while
 if (loc = left) then
 Set done = true
 else if (a[loc] < a[left]) then
 Interchange a[loc] and a[left]
 Set loc = left
 end if
 end if
 end while
 End

 How Does Quick Sort Work?


 Quick Sort follows a recursive algorithm.
 It divides the given array into two sections using a partitioning element called the pivot.
 The division performed is such that-
▫ All the elements to the left side of pivot are smaller than pivot.
▫ All the elements to the right side of pivot are greater than pivot.
 After dividing the array into two sections, the pivot is set at its correct position.
 Then, sub arrays are sorted separately by applying quick sort algorithm recursively.

 Quick Sort Example-


Consider the following array has to be sorted in ascending order using quick sort algorithm-

Quick Sort Algorithm works in the following steps-

 Step-01:
▫ Initially-
 Left and Loc (pivot) points to the first element of the array.
 Right points to the last element of the array.
▫ So to begin with, we set loc = 0, left = 0 and right = 5 as-

 Step-02:
▫ Since loc points at left, so algorithm starts from right and move towards left.
▫ As a[loc] < a[right], so algorithm moves right one position towards left as
▫ Now, loc = 0, left = 0 and right = 4.

 Step-03:
▫ Since loc points at left, so algorithm starts from right and move towards left.
▫ As a[loc] > a[right], so algorithm swaps a[loc] and a[right] and loc points at right as-

▫ Now, loc = 4, left = 0 and right = 4.

 Step-04:
▫ Since loc points at right, so algorithm starts from left and move towards right.
▫ As a[loc] > a[left], so algorithm moves left one position towards right as-

▫ Now, loc = 4, left = 1 and right = 4.

 Step-05:
▫ Since loc points at right, so algorithm starts from left and move towards right.
▫ As a[loc] > a[left], so algorithm moves left one position towards right as-
▫ Now, loc = 4, left = 2 and right = 4.

 Step-06:
▫ Since loc points at right, so algorithm starts from left and move towards right.
▫ As a[loc] < a[left], so algorithm swaps a[loc] and a[left] and loc points at left as-

▫ Now, loc = 2, left = 2 and right = 4.

 Step-07:
▫ Since loc points at left, so algorithm starts from right and move towards left.
▫ As a[loc] < a[right], so algorithm moves right one position towards left as-

▫ Now, loc = 2, left = 2 and right = 3.

 Step-08:
▫ Since loc points at left, so algorithm starts from right and move towards left.
▫ As a[loc] > a[right], so algorithm swaps a[loc] and a[right] and loc points at right as-
▫ Now, loc = 3, left = 2 and right = 3.

 Step-09:
▫ Since loc points at right, so algorithm starts from left and move towards right.
▫ As a[loc] > a[left], so algorithm moves left one position towards right as-

▫ Now, loc = 3, left = 3 and right = 3.


 Now,
▫ loc, left and right points at the same element.
▫ This indicates the termination of procedure.
▫ The pivot element 25 is placed in its final position.
▫ All elements to the right side of element 25 are greater than it.
▫ All elements to the left side of element 25 are smaller than it.

 Now, quick sort algorithm is applied on the left and right sub arrays separately in the similar manner.
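The notes above use a two-pointer "loc" partition. A compact way to express the same recursive
divide-and-conquer idea in C is the widely used Lomuto partition scheme (last element as pivot); this is
a hedged sketch of that variant, not the exact procedure shown above:

    /* Lomuto partition: pivot = a[hi]; returns the pivot's final index. */
    int partition(int a[], int lo, int hi)
    {
        int pivot = a[hi];
        int i = lo;                        /* boundary of the "< pivot" zone */
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot)
            {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;   /* place pivot */
        return i;
    }

    void quickSort(int a[], int lo, int hi)
    {
        if (lo >= hi) return;              /* 0 or 1 elements: done */
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);           /* left of pivot  */
        quickSort(a, p + 1, hi);           /* right of pivot */
    }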

 Advantages of Quick Sort-


The advantages of quick sort algorithm are-

 Quick Sort is an in-place sort, so it requires no temporary array (only stack space for the recursion).


 Quick Sort is typically faster than other algorithms.
(because its inner loop can be efficiently implemented on most architectures)

 Quick Sort tends to make excellent usage of the memory hierarchy like virtual memory or caches.
 Quick Sort can be easily parallelized due to its divide and conquer nature.
 Disadvantages of Quick Sort-
 The worst case complexity of quick sort is O(n²).
 This complexity is worse than the O(n log n) worst case complexity of algorithms like merge sort, heap
sort etc.
 It is not a stable sort i.e. the order of equal elements may not be preserved.

 Shell Sort:
 What is Shell Sort:
 Shell sort is a highly efficient sorting algorithm based on insertion sort. It avoids the large shifts that
plain insertion sort performs when a small value sits to the far right and has to be
moved to the far left.
 This algorithm first uses insertion sort on widely spaced elements to sort them, and then sorts the less
widely spaced elements. This spacing is termed the interval or gap. One well-known gap sequence is
calculated by Knuth's formula h = h * 3 + 1, where h is the interval with initial value 1 (a short sketch of
this sequence in use follows below).
 This algorithm is quite efficient for medium-sized data sets. Its average- and worst-case complexity
depend on the gap sequence used: with the simple n/2 halving sequence used in the algorithm below,
the worst case is O(n²), while better-known sequences achieve bounds such as O(n^(3/2)). Its space
complexity is O(1).

 How Shell Sort Works?


 Let us consider the following example to have an idea of how shell sort works. We take the same
array we have used in our previous examples. For our example and ease of understanding, we take
the interval of 4. Make a virtual sub-list of all values located at the interval of 4 positions. Here these
values are {35, 14}, {33, 19}, {42, 27} and {10, 44}

 We compare values in each sub-list and swap them (if necessary) in the original array. After this step,
the new array should look like this −
 Then, we take an interval of 2, and this gap generates two sub-lists - {14, 27, 35, 42}, {19, 10, 33, 44}

 We compare and swap the values, if required, in the original array. After this step, the array should
look like this −

 Finally, we sort the rest of the array using interval of value 1. Shell sort uses insertion sort to sort the
array.
 Following is the step-by-step depiction −

 We see that it required only four swaps to sort the rest of the array.

 Algorithm:
The simple steps of achieving the shell sort are listed as follows -
 ShellSort(a, n) // 'a' is the given array, 'n' is the size of array
 for (interval = n/2; interval > 0; interval /= 2)
 {
 for ( i = interval; i < n; i += 1)
 {
 temp = a[i];
 for (j = i; j >= interval && a[j - interval] > temp; j -= interval)
 a[j] = a[j - interval];
 a[j] = temp;
 }
 }
 End ShellSort

 Advantages of Shell Sort:


 Shell sort algorithm is efficient for small to medium-sized arrays.
 Shell sort algorithm is considerably faster than bubble sort in practice, since it moves elements over
long distances early on.

 Disadvantages of Shell Sort:


 Shell sort algorithm is more complex in structure and a bit more difficult to understand than plain
insertion sort.
 Shell sort algorithm is significantly slower than the merge sort, quick sort and heap sort algorithms on
large inputs.

 Heap Sort:
 What is a heap?
A heap is a complete binary tree. A binary tree is a tree in which each node can have at most two
children, and a complete binary tree is a binary tree in which all levels except possibly the last are
completely filled, and the nodes of the last level are placed as far left as possible.

 What is heap sort?


Heapsort is a popular and efficient sorting algorithm. The idea of heap sort is to remove the elements
one by one from the heap part of the list and insert them into the sorted part of the list. Heapsort is
an in-place sorting algorithm.

 Algorithm:
 HeapSort(arr)
 BuildMaxHeap(arr)
 for i = length(arr) downto 2
 swap arr[1] with arr[i]
 heap_size(arr) = heap_size(arr) - 1
 MaxHeapify(arr,1)
 End
BuildMaxHeap(arr)

 BuildMaxHeap(arr)
 heap_size(arr) = length(arr)
 for i = length(arr)/2 downto 1
 MaxHeapify(arr,i)
 End
MaxHeapify(arr,i)

 MaxHeapify(arr,i)
 L = left(i)
 R = right(i)
 if L ≤ heap_size(arr) and arr[L] > arr[i]
 largest = L
 else
 largest = i
 if R ≤ heap_size(arr) and arr[R] > arr[largest]
 largest = R
 if largest != i
 swap arr[i] with arr[largest]
 MaxHeapify(arr,largest)
 End
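The pseudocode above uses 1-based array indices. Below is a hedged 0-based C translation (the children
of node i become 2i+1 and 2i+2; function names are illustrative), offered as a sketch rather than the
canonical implementation:

    /* 0-based MaxHeapify: sift a[i] down within a[0..heapSize-1]. */
    void maxHeapify(int a[], int heapSize, int i)
    {
        int largest = i;
        int L = 2 * i + 1, R = 2 * i + 2;   /* 0-based children */
        if (L < heapSize && a[L] > a[largest]) largest = L;
        if (R < heapSize && a[R] > a[largest]) largest = R;
        if (largest != i)
        {
            int t = a[i]; a[i] = a[largest]; a[largest] = t;
            maxHeapify(a, heapSize, largest);
        }
    }

    void heapSort(int a[], int n)
    {
        for (int i = n / 2 - 1; i >= 0; i--)   /* BuildMaxHeap */
            maxHeapify(a, n, i);
        for (int i = n - 1; i >= 1; i--)       /* repeatedly extract the max */
        {
            int t = a[0]; a[0] = a[i]; a[i] = t;
            maxHeapify(a, i, 0);               /* heap shrinks by one */
        }
    }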

 Working of Heap sort Algorithm:


 Now, let's see the working of the Heapsort Algorithm.
 In heap sort, there are basically two phases involved in sorting the elements:
▫ The first step includes the creation of a heap by adjusting the elements of the array.
▫ After the creation of the heap, the root element is removed repeatedly by swapping it with the last
element of the array, and the heap property is then restored among the remaining elements.
 Now let's see the working of heap sort in detail by using an example. To understand it more clearly,
let's take an unsorted array and try to sort it using heap sort. It will make the explanation clearer and
easier.

 First, we have to construct a heap from the given array and convert it into max heap.
 After converting the given heap into max heap, the array elements are -

 Next, we have to delete the root element (89) from the max heap. To delete this node, we have to
swap it with the last node, i.e. (11). After deleting the root element, we again have to heapify it to
convert it into max heap.

 After swapping the array element 89 with 11, and converting the heap into max-heap, the elements
of array are -

 In the next step, again, we have to delete the root element (81) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (54). After deleting the root element, we again have
to heapify it to convert it into max heap.
 After swapping the array element 81 with 54 and converting the heap into max-heap, the elements
of array are -

 In the next step, we have to delete the root element (76) from the max heap again. To delete this
node, we have to swap it with the last node, i.e. (9). After deleting the root element, we again have
to heapify it to convert it into max heap.

 After swapping the array element 76 with 9 and converting the heap into max-heap, the elements of
array are -

 In the next step, again we have to delete the root element (54) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (14). After deleting the root element, we again have
to heapify it to convert it into max heap.

 After swapping the array element 54 with 14 and converting the heap into max-heap, the elements
of array are -

 In the next step, again we have to delete the root element (22) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (11). After deleting the root element, we again have
to heapify it to convert it into max heap.
 After swapping the array element 22 with 11 and converting the heap into max-heap, the elements
of array are -

 In the next step, again we have to delete the root element (14) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (9). After deleting the root element, we again have
to heapify it to convert it into max heap.

 After swapping the array element 14 with 9 and converting the heap into max-heap, the elements of
array are -

 In the next step, again we have to delete the root element (11) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (9). After deleting the root element, we again have
to heapify it to convert it into max heap.

 After swapping the array element 11 with 9, the elements of array are -

 Now, heap has only one element left. After deleting it, heap will be empty.
 After completion of sorting, the array elements are -

 Now, the array is completely sorted.

 Advantages of Heap Sort:


 Like insertion sort algorithm, but unlike merge sort algorithm, the heap sort algorithm sorts in
place, using only O(1) extra memory.
 Heap sort algorithm is simple and fast, with a guaranteed O(n log n) running time, which makes it
suitable for sorting large sets of data.

 Disadvantages of Heap Sort:


 Heap sort algorithm is not stable: the relative order of equal elements may change.
 Although heap sort algorithm's worst case running time of O(n log n) matches merge sort, it is
usually slower in practice because of its poor cache behavior.

 Counting Sort:
 What is Count Sort:
 It is a linear-time sorting algorithm that runs fast by not making comparisons. It assumes that
the numbers to be sorted are in the range 1 to k, where k is small.
 Basic idea is to determine the "rank" of each number in the final sorted array.
 Counting Sort uses three arrays:
▫ A[1..n] holds the initial input.
▫ B[1..n] holds the sorted output.
▫ C[1..k] is an array of integers; C[x] is the rank of x in A, where x ∈ [1, k].
 First, C[x] is set to the number of elements of A that are equal to x:
▫ Initialize C to zero
▫ For each j from 1 to n, increment C[A[j]] by 1
 After taking prefix sums of C, we set B[C[A[j]]] = A[j].
 If there are duplicates, we decrement C[A[j]] after each copy.
 Algorithm of Count Sort:
 Counting-Sort (array A, array B, int k)
 For i ← 1 to k
 do C [i] ← 0 [ θ(k) times]
 for j ← 1 to length [A]
 do C[A[j]] ← C[A[j]] + 1 [θ(n) times]
 // C[i] now contains the number of elements equal to i
 for i ← 2 to k
 do C [i] ← C [i] + C[i-1] [θ(k) times]
 // C[i] now contains the number of elements ≤ i
 for j ← length [A] downto 1 [θ(n) times]
 do B[C[A[j]]] ← A[j]
 C[A[j]] ← C[A[j]] - 1
 Explanation:
 Step 1: the first for loop initializes the array C to 0. Whether the loop variable starts at 1 or 0
depends on the minimum value present in the input array A; basically, we start i at the minimum value
that occurs in A.
 The for loop of steps 3 to 4 inspects each input element. If the value of an input element is i, we
increment C[i]. Thus, after this loop, C[i] holds the number of input elements equal to i, for each integer
i = 0, 1, 2, ..., k.
 The for loop of steps 6 to 8 determines, for each i = 0, 1, ..., k, how many input elements are less
than or equal to i.
 The for loop of steps 9 to 11 places each element A[j] into its correct sorted position in the output
array B. For each A[j], the value C[A[j]] is the correct final position of A[j] in the output array B, since
there are C[A[j]] elements less than or equal to A[j].
 Because elements might not be distinct, we decrement C[A[j]] each time we place a value A[j] into
array B. Decrementing C[A[j]] causes the next input element with a value equal to A[j] to go to the
position immediately before A[j] in the output array.

 Analysis of Running Time:


 For a loop of step 1 to 2 take θ(k) times
 For a loop of step 3 to 4 take θ(n) times
 For a loop of step 6 to 7 take θ(k) times
 For a loop of step 9 to 11 take θ(n) times
 Overall time is θ(k+n) time.
 Note:
 Counting Sort has the important property that it is stable: numbers with the same value appears in
the output array in the same order as they do in the input array.
 Counting Sort is used as a subroutine in Radix Sort.
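For reference, here is a hedged C rendering of the procedure above, assuming integer keys in 1..k and
0-based arrays (unlike the 1-based pseudocode); the function name is illustrative:

    #include <stdlib.h>

    /* Counting sort: A is the input, B the output, both of length n. */
    void countingSort(const int A[], int B[], int n, int k)
    {
        int *C = calloc(k + 1, sizeof *C);  /* C[0..k], zero-initialized */
        for (int j = 0; j < n; j++)
            C[A[j]]++;                      /* C[i] = count of value i */
        for (int i = 2; i <= k; i++)
            C[i] += C[i - 1];               /* C[i] = count of values <= i */
        for (int j = n - 1; j >= 0; j--)    /* back to front keeps it stable */
        {
            B[C[A[j]] - 1] = A[j];          /* -1 converts rank to 0-based index */
            C[A[j]]--;
        }
        free(C);
    }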
 Example:

Illustrate the operation of Counting Sort on the array A = (7, 1, 3, 1, 2, 4, 5, 7, 2, 4, 3).


Solution:
Initially, C[1..7] is all zeros: C = (0, 0, 0, 0, 0, 0, 0).

 For j = 1 to 11, the loop increments C[A[j]]:
▫ j = 1: A[1] = 7 processed, so C[7] is incremented
▫ j = 2: A[2] = 1 processed, so C[1] is incremented
▫ j = 3: A[3] = 3 processed, so C[3] is incremented
▫ j = 4: A[4] = 1 processed, so C[1] is incremented
▫ j = 5: A[5] = 2 processed, so C[2] is incremented
▫ ... and similarly for j = 6 to 11.
 The updated C now contains a count of the elements of A: C = (2, 2, 2, 2, 1, 0, 2).

 Note: the items of A are scanned one by one; each item's value is used as a position in C, and the
count stored at that position is increased by 1 every time the value occurs again.
 Now, the for loop for i = 2 to 7 will be executed, having the statement: C[i] = C[i] + C[i-1]
 Applying this as i goes from 2 up to 7, we get:

C[2] = C[2] + C[1] = 2 + 2 = 4
C[3] = C[3] + C[2] = 2 + 4 = 6
C[4] = C[4] + C[3] = 2 + 6 = 8
C[5] = C[5] + C[4] = 1 + 8 = 9
C[6] = C[6] + C[5] = 0 + 9 = 9
C[7] = C[7] + C[6] = 2 + 9 = 11

 Thus the updated C is: C = (2, 4, 6, 8, 9, 9, 11). C now ranks each number of A.

 Now, we will find the new array B

 Now two operations will be applied for each element:

 B[C[A[j]]] ← A[j]
 C[A[j]] ← C[A[j]] - 1
 We decrease the counter by 1 after each placement.
 We start scanning the elements of A from the last position.
 Each element's value in A becomes a position (index) in C.
 For j ← 11 downto 1:

Step 1: B[C[A[11]]] = A[11] → B[C[3]] = B[6] = 3; then C[3] = C[3] - 1 = 5 (A[11] placed in output array B)
Step 2: B[C[A[10]]] = A[10] → B[C[4]] = B[8] = 4; then C[4] = C[4] - 1 = 7 (A[10] placed in output array B)
Step 3: B[C[A[9]]] = A[9] → B[C[2]] = B[4] = 2; then C[2] = C[2] - 1 = 3 (A[9] placed in output array B)
Step 4: B[C[A[8]]] = A[8] → B[C[7]] = B[11] = 7; then C[7] = C[7] - 1 = 10 (A[8] placed in output array B)
Step 5: B[C[A[7]]] = A[7] → B[C[5]] = B[9] = 5; then C[5] = C[5] - 1 = 8 (A[7] placed in output array B)
Step 6: B[C[A[6]]] = A[6] → B[C[4]] = B[7] = 4; then C[4] = C[4] - 1 = 6 (A[6] placed in output array B)
Step 7: B[C[A[5]]] = A[5] → B[C[2]] = B[3] = 2; then C[2] = C[2] - 1 = 2 (A[5] placed in output array B)
Step 8: B[C[A[4]]] = A[4] → B[C[1]] = B[2] = 1; then C[1] = C[1] - 1 = 1 (A[4] placed in output array B)
Step 9: B[C[A[3]]] = A[3] → B[C[3]] = B[5] = 3; then C[3] = C[3] - 1 = 4 (A[3] placed in output array B)
Step 10: B[C[A[2]]] = A[2] → B[C[1]] = B[1] = 1; then C[1] = C[1] - 1 = 0 (A[2] placed in output array B)
Step 11: B[C[A[1]]] = A[1] → B[C[7]] = B[10] = 7; then C[7] = C[7] - 1 = 9 (A[1] placed in output array B)

B now contains the final sorted data: B = (1, 1, 2, 2, 3, 3, 4, 4, 5, 7, 7).

 Advantages of Counting Sort:


 It is quite fast
 It is a stable algorithm
Note: For a sorting algorithm to be stable, the order of elements with equal keys (values) in the sorted
array should be the same as that of the input array.

 Disadvantages of Counting Sort:


 It is not suitable when the range of key values (k) is very large, since the count array of size k then
dominates the time and space
 It is not directly applicable to non-integer keys such as strings or floating-point values

 Bucket Sort:
 What is Bucket Sort?
 Bucket Sort runs in linear time on average. Like Counting Sort, Bucket Sort is fast because it assumes
something about the input: that it is generated by a random process that distributes elements
uniformly over the interval [0, 1).
▫ To sort n input numbers, Bucket Sort:
▫ Partitions [0, 1) into n non-overlapping intervals called buckets.
▫ Puts each input number into its bucket.
▫ Sorts each bucket using a simple algorithm, e.g. Insertion Sort, and then
▫ Concatenates the sorted lists.
 Bucket Sort assumes that the input is an n-element array A and that each element A[i] in the array
satisfies 0 ≤ A[i] < 1. The code depends upon an auxiliary array B[0..n-1] of linked lists (buckets) and
assumes that there is a mechanism for maintaining such lists.
 Bucket Sort Algorithm:

BUCKET-SORT (A)
1. n ← length [A]
2. for i ← 1 to n
3. do insert A[i] into list B[⌊n · A[i]⌋]
4. for i ← 0 to n-1
5. do sort list B [i] with insertion sort.
6. Concatenate the lists B [0], B [1] ...B [n-1] together in order.
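As a hedged C sketch of the procedure above: instead of linked lists, each bucket is given a plain array of
capacity n (the worst case, when every key lands in one bucket), which keeps the code short at the cost
of O(n²) space. The function name is illustrative and n ≥ 1 is assumed.

    #include <stdlib.h>

    /* Bucket sort for n doubles in [0, 1). */
    void bucketSort(double a[], int n)
    {
        double **bucket = malloc(n * sizeof *bucket);
        int *count = calloc(n, sizeof *count);
        for (int b = 0; b < n; b++)
            bucket[b] = malloc(n * sizeof **bucket);
        for (int i = 0; i < n; i++)
        {
            int b = (int)(n * a[i]);            /* bucket index = floor(n * A[i]) */
            bucket[b][count[b]++] = a[i];
        }
        int k = 0;
        for (int b = 0; b < n; b++)
        {
            for (int i = 1; i < count[b]; i++)  /* insertion-sort each bucket */
            {
                double key = bucket[b][i];
                int j = i - 1;
                while (j >= 0 && bucket[b][j] > key)
                {
                    bucket[b][j + 1] = bucket[b][j];
                    j--;
                }
                bucket[b][j + 1] = key;
            }
            for (int i = 0; i < count[b]; i++)  /* concatenate in order */
                a[k++] = bucket[b][i];
            free(bucket[b]);
        }
        free(bucket);
        free(count);
    }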

 Example:
Illustrate the operation of BUCKET-SORT on the array A = (0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12,
0.23, 0.68)
Solution:

Fig: Bucket sort: step 1, placing keys in bins in sorted order


Fig: Bucket sort: step 2, concatenate the lists

Fig: Bucket sort: the final sorted sequence

 Advantages of Bucket Sort


 Bucket sort allows each bucket to be processed independently. As a result, you only ever need to
sort much smaller arrays as a secondary step, rather than the whole input at once.
 Bucket sort also has the advantage of being able to be used as an external sorting algorithm. If you
need to sort a list that is too large to fit in memory, you may stream it through RAM, split the contents
into buckets saved in external files, and then sort each file separately in RAM.

 Disadvantages of Bucket Sort


 The problem is that if the buckets are distributed incorrectly, you may wind up spending a lot of extra
effort for no or very little gain. As a result, bucket sort works best when the data is more or less evenly
distributed, or when there is a smart technique to pick the buckets given a fast set of heuristics based
on the input array.
 It can't be applied to all data types, since a suitable bucketing technique is required. Bucket sort's
efficiency depends on the distribution of the input values, so it's not worth using if your data are
closely grouped. In many situations, you might achieve greater performance with a specialized sorting
algorithm like radix sort, counting sort, or burst sort instead of bucket sort.
 Bucket sort’s performance is determined by the number of buckets used, which may need some
additional performance adjustment when compared to other algorithms.

 Radix Sort:
 What is Radix Sort?
 Radix Sort is a sorting algorithm that is useful when there is a constant d such that all keys are d-digit
numbers. To execute Radix Sort, for p = 1 to d, sort the numbers with respect to the p-th digit
from the right using any linear-time stable sort.
 The code for Radix Sort is straightforward. The following procedure assumes that each element in
the n-element array A has d digits, where digit 1 is the lowest-order digit and digit d is the highest-
order digit.
 Here is the algorithm that sorts A[1..n], where each number is d digits long.
 Radix Sort Algorithm:
RADIX-SORT (array A, int n, int d)
1. for i ← 1 to d
2. do stably sort A to sort array A on digit i
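A hedged C sketch of this procedure for non-negative integers in base 10, using a stable counting sort on
each digit as the notes suggest (helper names are illustrative; n ≥ 1 is assumed):

    #include <stdlib.h>
    #include <string.h>

    /* Stable counting sort on one base-10 digit; exp is 1, 10, 100, ... */
    static void sortByDigit(int a[], int n, int exp)
    {
        int *out = malloc(n * sizeof *out);
        int count[10] = {0};
        for (int i = 0; i < n; i++)
            count[(a[i] / exp) % 10]++;
        for (int d = 1; d < 10; d++)
            count[d] += count[d - 1];       /* final positions per digit value */
        for (int i = n - 1; i >= 0; i--)    /* back to front: stability */
            out[--count[(a[i] / exp) % 10]] = a[i];
        memcpy(a, out, n * sizeof *a);
        free(out);
    }

    /* LSD radix sort: sort by digit 1, then digit 2, ... up to digit d. */
    void radixSort(int a[], int n)
    {
        int max = a[0];
        for (int i = 1; i < n; i++)
            if (a[i] > max) max = a[i];
        for (int exp = 1; max / exp > 0; exp *= 10)
            sortByDigit(a, n, exp);
    }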

 Example:

The first Column is the input. The remaining Column shows the list after successive sorts on increasingly
significant digit position. The vertical arrows indicate the digits position sorted on to produce each list from
the previous one.

 576 49[4] 9[5]4 [1]76 176


 494 19[4] 5[7]6 [1]94 194
 194 95[4] 1[7]6 [2]78 278
 296 → 57[6] → 2[7]8 → [2]96 → 296
 278 29[6] 4[9]4 [4]94 494
 176 17[6] 1[9]4 [5]76 576
 954 27[8] 2[9]6 [9]54 954
 Advantages of the radix sort algorithm
 Radix sort can outperform comparison-based sorts such as quick sort when the keys are short relative
to the input size.
 Radix sort is a stable sort: it preserves the relative order of equal keys.
 Radix sort is an easy and simple algorithm to implement.
 Radix sort does not rely on comparison operations between the keys.
 Disadvantages of the radix sort algorithm
 Radix sort requires extra space to hold the counts and the output array while sorting.
 Radix sort does not apply to all data types; it works only on keys that can be decomposed into digits,
such as integers or fixed-length strings.
 Radix sort loses its efficiency advantage when the number of digits k is large relative to log n.

 Time and Space Complexity of above Sorting Algorithm:


1. Bubble Sort:
 Time Complexity:
o Worst case: O(n²)
When the array is reverse-sorted, we iterate through the array (n - 1) times. In the first
iteration, we do (n - 1) swaps, (n - 2) in the second, and so on until in the last iteration where
we do only one swap. Thus the total number of swaps sums up to n * (n - 1) / 2.
o Average case: O(n²)
For a completely random array, the total number of swaps averages out to be around n² / 4,
which is again O(n²).
o Best case: O(n)
In the first iteration of the array, if we do not perform any swap, we know that the array is
already sorted so stop sorting, therefore the time complexity turns out to be linear.
 Space Complexity:
Since we use only a constant amount of additional memory apart from the input array, the space
complexity is O(1).

2. Selection Sort:
 Time Complexity:
o Worst case = Average Case = Best Case = O(n²)
We perform the same number of comparisons for an array of any given size.
o In the first iteration, we perform (n - 1) comparisons, (n - 2) in the second, and so on until the
last iteration where we perform only one comparison. Thus the total number of comparisons
sum up to n * (n - 1) / 2. The number of swaps performed is at most n - 1. So the overall time
complexity is quadratic.
 Space Complexity:
Since we are not using any extra data structure apart from the input array, the space complexity is
O(1).

3. Insertion Sort:
 Time Complexity:
o Worst case: O(n²)
When we apply insertion sort on a reverse-sorted array, it will insert each element at the beginning
of the sorted subarray, making this the worst case of insertion sort.
o Average case: O(n²)
When the array elements are in random order, the average number of operations is about n² / 4, which is O(n²).
o Best case: O(n)
When we initiate insertion sort on an already sorted array, it will only compare each element to its
predecessor, thereby requiring n steps to sort the already sorted array of n elements.
 Space Complexity:
Since we use only a constant amount of additional memory apart from the input array, the space
complexity is O(1).
4. Shell sort:
 Time Complexity

Case            Time Complexity

Best Case       O(n log n)

Average Case    O(n (log n)²)

Worst Case      O(n²)

o Best Case Complexity - It occurs when there is no sorting required, i.e., the array is already sorted.
The best-case time complexity of Shell sort is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly
ascending nor properly descending. The average case time complexity of Shell sort is about
O(n (log n)²) for the halving gap sequence.
o Worst Case Complexity - It occurs when the array elements are required to be sorted in reverse
order. That means suppose you have to sort the array elements in ascending order, but its
elements are in descending order. The worst-case time complexity of Shell sort is O(n²).
 Space Complexity:
o The space complexity of Shell sort is O(1).

Space Complexity: O(1)

Stable: No

5. Merge Sort:
 Time Complexity:
o Worst case = Average Case = Best Case = O(n log n)
o Merge sort performs the same number of operations for any input array of a given size.
o In this algorithm, we keep dividing the array into two subarrays recursively which will create
O(log n) rows where each element is present in each row exactly once.
o For each row, it takes O(n) time to merge every pair of subarrays. So the overall time complexity
becomes O(n log n).
 Space Complexity:
Since we use an auxiliary array of size at most n to store the merged subarray, the space complexity
is O(n).
6. Quick Sort:
 Time Complexity:
o Worst case: O(n²)
When the array is sorted and we choose the leftmost element as pivot, or the array is reverse-
sorted and we choose the rightmost element as pivot, the time complexity becomes quadratic since
partitioning the array results in highly unbalanced subarrays in such cases.
Also when there are a large number of identical elements in the array, optimal partitioning
becomes hard, resulting in quadratic time complexity.

o Average case and best case: O(n log n)


The best case for quick-sort happens when we successfully pick the median element for
partitioning every time. Such partitioning allows us to divide the array in half every time.
o We can avoid the worst-case in quicksort almost always by choosing an appropriate pivot. There
are various ways to achieve this:
 Pick the pivot from the middle of the array
 Adopt a random selection of pivots
 Take the median of three pivot candidates, i.e., choose the median of the first, middle, and
the last elements of the array as the pivot (see the sketch after this list).
o These methods result in almost equal partitioning of the array, on average. This way the average
case time complexity of quicksort practically becomes O(n log n).

 Space Complexity:
Although quicksort doesn’t use auxiliary space to store array elements, additional space is required
for creating stack frames in recursive calls.
o Worst case: O(n)
This happens when the pivot element is the largest or smallest element of the array in every
recursive call. The size of the subarray after partitioning will be n-1 and 1. In this case, the size of
the recursive tree will be n.
o Best case: O(log n)
This happens when the pivot element’s correct position in the partitioned array is in the middle
every time. The size of subarrays will be half the size of the original array. In this case, the recursive
tree’s size will be O(log n).

7. Heap Sort:
 Time Complexity:
o Worst case = Average Case = Best Case = O(n log n)
o The order of time taken by the heap sort algorithm for an array of any given size is the same.
o The process of extraction in a heap structure with n elements takes logarithmic time, O(log n).
When there are n elements in the heap it takes O(log n) time to extract, then there remain (n -
1) elements, the next extraction takes O(log (n - 1)), and so on until there is only one element in
the heap where the extraction takes O(log 1) time.
o The total time complexity sums up to O(log n) + O(log (n -1)) + … + O(1) = O(log (n!)). The time
complexity of O(n log n) best represents this complexity in a simplified form.

 Space Complexity:
Since we are not using any extra data structure, heap sort is an in-place sorting algorithm.
Therefore, its space complexity is O(1).

8. Counting Sort:
 Time Complexity:
o Worst case = Average Case = Best Case = O(n + k)
o We iterate over each element of the input array which contributes O(n) time complexity. We
also iterate over the auxiliary array which contributes O(k) time. So the overall time complexity
is O(n + k).
 Space Complexity:
o We are using an auxiliary array of size k, so the space complexity comes out to be O(k).

9. Radix Sort:
 Time Complexity:
o Worst case = Average Case = Best Case = O(n * k)
o Let's call the number of digits/characters in the maximum value of the input "k."
o In this algorithm, we apply the counting sort algorithm for each digit, which is k times.
o So the time complexity is O(k * (n + b)), where b is the base (radix) used to represent the numbers
and k is the number of digits of the largest number in the array.
o Since b is constant for a given problem, the time complexity can be denoted as O(n * k).
 Space Complexity:
The space complexity comes from the counting sort, which requires O(n + k) space to hold counts,
indices, and output arrays.

10. Bucket Sort:


 Time Complexity:
o Worst case: O(n²)
When the elements in the input array are of close range, they are likely to be placed in the same
bucket, and this may result in some buckets having a greater number of elements than others. In that
case the overall complexity depends on the algorithm used for sorting each bucket, which is
generally insertion sort, thus giving quadratic complexity.
o Average case and best case: O(n + k)
When the elements are distributed randomly in the array, bucket sort runs in linear time in all cases
as long as the sum of the squares of the bucket sizes is linear in the total number of elements.
 Space Complexity:
We need O(k) memory to store k empty buckets and then we divide the array of O(n) size into these
buckets that require a total of O(n + k) space in total, given that we use insertion sort to sort the
elements within a bucket.
Sorting Algorithms Time and Space Complexity Cheat Sheet
Here's a cheat sheet, compiled from the complexities above, to help you memorize the basic attributes
of each algorithm:

Algorithm        Best         Average        Worst        Space      Stable
Bubble Sort      O(n)         O(n²)          O(n²)        O(1)       Yes
Selection Sort   O(n²)        O(n²)          O(n²)        O(1)       No
Insertion Sort   O(n)         O(n²)          O(n²)        O(1)       Yes
Shell Sort       O(n log n)   O(n (log n)²)  O(n²)        O(1)       No
Merge Sort       O(n log n)   O(n log n)     O(n log n)   O(n)       Yes
Quick Sort       O(n log n)   O(n log n)     O(n²)        O(log n)   No
Heap Sort        O(n log n)   O(n log n)     O(n log n)   O(1)       No
Counting Sort    O(n + k)     O(n + k)       O(n + k)     O(k)       Yes
Radix Sort       O(n * k)     O(n * k)       O(n * k)     O(n + k)   Yes
Bucket Sort      O(n + k)     O(n + k)       O(n²)        O(n + k)   Yes

 QUESTION AND ANSWER:


MCQ:
1. The number of swapping needed to sort the numbers 8, 22, 7, 9, 31, 5, 13 in ascending order using
bubble sort is- (ISRO CS 2017)
a. 11
b. 12
c. 13
d. 10
2. When will bubble sort take worst-case time complexity?
a. The array is sorted in ascending order.
b. The array is sorted in descending order.
c. Only the first half of the array is sorted.
d. Only the second half of the array is sorted.
3. Assume that a merge sort algorithm in the worst case takes 30 seconds for an input of size 64. Which of
the following most closely approximates the maximum input size of a problem that can be solved in 6
minutes? (GATE 2015)
a. 256
b. 512
c. 1024
d. 2048
4. In the following scenarios, when will you use selection sort?
a. The input is already sorted
b. A large file has to be sorted
c. Large values need to be sorted with small keys
d. Small values need to be sorted with large keys
5. What is the advantage of selection sort over other sorting techniques?
a. It requires no additional storage space
b. It is scalable
c. It works best for inputs which are already sorted
d. It is faster than any other sorting technique
6. The given array is arr = {3,4,5,2,1}. The number of iterations in bubble sort and selection sort respectively
are __________
a. 5 and 4
b. 4 and 5
c. 2 and 4
d. 2 and 5
7. The given array is arr = {1,2,3,4,5} (bubble sort is implemented with a flag variable). The number of
iterations in selection sort and bubble sort respectively are __________
a. 5 and 4
b. 1 and 4
c. 0 and 4
d. 4 and 1

8. How many passes does an insertion sort algorithm consist of?


a. N
b. N-1
c. N+1
d. N^2
9. What will be the number of passes to sort the elements using insertion sort?
14, 12, 16, 6, 3, 10
a. 6
b. 5
c. 7
d. 1
10. For the following array, how will the elements look after the second pass of insertion sort?
34, 8, 64, 51, 32, 21
a. 8, 21, 32, 34, 51, 64
b. 8, 32, 34, 51, 64, 21
c. 8, 34, 51, 64, 32, 21
d. 8, 34, 64, 51, 32, 21
11. Which of the following real-time examples is based on insertion sort?
a. arranging a pack of playing cards
b. database scenarios and distributed scenarios
c. arranging books on a library shelf
d. real-time systems
12. Merge sort uses which of the following technique to implement sorting?
a. Backtracking
b. greedy algorithm
c. divide and conquer
d. dynamic programming
13. Which of the following methods is used for sorting in merge sort?
a. Merging
b. Partitioning
c. Selection
d. Exchanging
14. Which of the following stable sorting algorithms takes the least time when applied to an almost sorted
array?
a. Quick sort
b. Insertion sort
c. Selection sort
d. Merge sort
15. Find the pivot element for the given input using the median-of-three partitioning method.
8, 1, 4, 9, 6, 3, 5, 2, 7, 0.
a. 8
b. 7
c. 9
d. 6

SAQ:
1. Why are sorting algorithms important?
 Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and
merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for
canonicalizing data and for producing human-readable output. Sorting has direct applications in
database algorithms, divide and conquer methods, data structure algorithms, and many more.
2. Explain what an ideal sorting algorithm is.
 The Ideal Sorting Algorithm would have the following properties:
 Stable: Equal keys aren’t reordered.
 Operates in place: requiring O(1) extra space.
 Worst-case O(n log n) key comparisons.
 Worst-case O(n) swaps.
 Adaptive: Speeds up to O(n) when data is nearly sorted or when there are few unique keys.
There is no algorithm that has all of these properties, and so the choice of sorting algorithm depends on
the application.
3. What's the difference between external and internal sorting?
 In internal sorting all the data to sort is stored in memory at all times while sorting is in progress.
 In external sorting data is stored outside memory (like on disk) and only loaded into memory in
small chunks. External sorting is usually applied in cases when data can't fit into memory entirely.
So in internal sorting you can do something like shell sort: just access whatever array elements you
want at whatever moment you want. You can't do that in external sorting, because the array is not
entirely in memory; you can't randomly access any element, and accessing elements randomly on
disk is usually extremely slow. An external sorting algorithm has to load and unload chunks of data
in an optimal manner.
Divide and conquer algorithms like merge sort are commonly used for external sorting, because
they break up the problem of sorting everything into a series of smaller sorts on chunks at a time.
Merge sort doesn't require random access to the dataset and can be made to operate on chunks
which fit in memory. In some cases the in-memory chunks may be sorted using an in-memory
(internal) sorting algorithm.
4. What is meant by "sort in place"?
 The idea of an in-place algorithm isn't unique to sorting, but sorting is probably the most important
case, or at least the most well-known. The idea is about space efficiency - using the minimum amount
of RAM, hard disk or other storage that you can get away with.
The idea is to produce an output in the same memory space that contains the input by successively
transforming that data until the output is produced. This avoids the need to use twice the storage -
one area for the input and an equal-sized area for the output.
Quicksort is one example of in-place sorting.

5. What is meant by Sorting and searching?

 Sorting and searching are fundamental operations in computer science. Sorting refers to the operation
of arranging data in some given order, while searching refers to finding the location of a given item
within a collection of data.

6. Define Bubble sort.

 Bubble sort is one of the easiest sorting methods. In this method each data item is compared with its
neighbour, and the pair is swapped if it is out of order; in a descending sort the bigger numbers move to
the top while the smaller numbers slowly sink to the bottom. Hence it is also called the exchange sort.

7. Define merge sort?

 Merge sort is based on the divide and conquer method. It takes the list to be sorted and divides it in
half to create two unsorted sublists. The two sublists are then sorted recursively and merged to obtain a
single sorted list.

8. Define insertion sort?

 Each successive element in the array to be sorted is inserted into its proper place with respect to the
elements that are already sorted. We start with the second element and put it in its correct place, so
that the first and second elements of the array are in order, and continue until the whole array is sorted.
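A minimal C sketch of this idea (the function name insertion_sort is illustrative):

    /* Insertion sort: grows a sorted prefix a[0..i-1] one element at a time. */
    void insertion_sort(int a[], int n)
    {
        for (int i = 1; i < n; i++) {
            int key = a[i];                     /* element to insert          */
            int j = i - 1;
            while (j >= 0 && a[j] > key) {      /* shift larger elements right */
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;                     /* drop key into its place    */
        }
    }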

9. Define selection sort?

 It basically determines the minimum or maximum of the list and swaps it with the element at the index
where it is supposed to be. The process is repeated so that the nth minimum or maximum element is
swapped with the element at index n-1 of the list.

10. What is the basic idea of shell sort?


 Shell sort works by comparing elements that are a fixed distance apart, rather than only adjacent
elements as in an ordinary insertion sort. Shell sort uses an increment (gap) sequence; the increment
size is reduced after each pass until the increment size is 1.
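A minimal C sketch, assuming the simple halving gap sequence n/2, n/4, ..., 1 (other increment sequences are possible and often perform better):

    /* Shell sort: gapped insertion sort with a shrinking increment. */
    void shell_sort(int a[], int n)
    {
        for (int gap = n / 2; gap > 0; gap /= 2) {   /* shrink the increment */
            for (int i = gap; i < n; i++) {          /* insertion sort with  */
                int tmp = a[i];                      /* stride = gap         */
                int j = i;
                while (j >= gap && a[j - gap] > tmp) {
                    a[j] = a[j - gap];               /* shift larger elements */
                    j -= gap;
                }
                a[j] = tmp;
            }
        }
    }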

11. What is the purpose of quick sort, and what is its advantage?

 The purpose of quick sort is to move a data item in the correct direction just far enough for it to reach
its final place in the array. Quick sort reduces unnecessary swaps and moves an item a greater distance
in one move.

12. Define quick sort?

 Quick sort is a divide and conquer algorithm: a pivot element is chosen, the array is partitioned so that
smaller elements come before the pivot and larger ones after it, and the two partitions are then sorted
recursively. The algorithm is fastest when the median of the array is chosen as the pivot, because the
resulting partitions are of very similar size, so the base case is reached very quickly.
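A minimal C sketch using the Lomuto partition scheme with the last element as pivot (median-of-three pivot selection, mentioned above, is omitted here for brevity):

    /* Quick sort on a[low..high]; initial call: quick_sort(a, 0, n - 1). */
    void quick_sort(int a[], int low, int high)
    {
        if (low >= high) return;               /* 0 or 1 element: done       */
        int pivot = a[high], i = low - 1;
        for (int j = low; j < high; j++)       /* partition around the pivot */
            if (a[j] < pivot) {
                int t = a[++i]; a[i] = a[j]; a[j] = t;
            }
        int t = a[i + 1]; a[i + 1] = a[high]; a[high] = t;  /* place pivot */
        quick_sort(a, low, i);                 /* sort left partition        */
        quick_sort(a, i + 2, high);            /* sort right partition       */
    }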

13. Advantage of quick sort?

 Quick sort reduces unnecessary swaps and moves an item a greater distance in one move.

14. Define radix sort?

 Radix sort sorts the elements by processing their individual digits, either by the least significant digit
(LSD) method or by the most significant digit (MSD) method. Radix sort is a clever and intuitive little
sorting algorithm: it puts the elements in order by comparing the digits of the numbers instead of the
whole keys.

Practice Problem:
1. Sort the given arrays using bubble, insertion, selection, merge, and heap sort.
a. arr[] = {4, 1, 3, 9, 7}
b. arr[] = {10, 9, 8, 7, 6, 5, 4, 3, 2, 1}
c. arr[] = { 2, 1, 6, 10, 4, 1, 3, 9, 7}
d. arr[] = { 9, 5, 10, 4, 14, 1, 0, 2, 6}
Also sort the above arrays using the linear-time sorting algorithms (counting, bucket, and radix sort).

 SEMESTER QUESTIONS FROM THIS UNIT:

1. Write down the selection sort algorithm. Explain the steps of selection sort algorithm
using an example. Find the worst case complexity of selection sort algorithm.
[Year:2024]
Ans:

Selection Sort Algorithm

Algorithm (Pseudocode):

SelectionSort(A, n)
    for i = 0 to n-2 do
        minIndex = i
        for j = i+1 to n-1 do
            if A[j] < A[minIndex] then
                minIndex = j
            end if
        end for
        if minIndex ≠ i then
            Swap A[i] with A[minIndex]
        end if
    end for
end SelectionSort
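A direct C translation of this pseudocode, as a sketch:

    /* Selection sort: repeatedly select the smallest remaining element. */
    void selection_sort(int a[], int n)
    {
        for (int i = 0; i < n - 1; i++) {
            int min_index = i;
            for (int j = i + 1; j < n; j++)    /* find smallest in a[i..n-1] */
                if (a[j] < a[min_index])
                    min_index = j;
            if (min_index != i) {              /* swap it into position i    */
                int tmp = a[i];
                a[i] = a[min_index];
                a[min_index] = tmp;
            }
        }
    }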

Steps of the Selection Sort Algorithm:

1. Initialization:
o Start with the first element (index i = 0).
o Assume this element is the minimum (minIndex = i).
2. Finding the Minimum:
o Compare this element with the rest of the elements to find the actual minimum.
o If a smaller element is found, update minIndex to the index of this smaller element.
3. Swapping:
o Swap the current element with the element at minIndex.
o This places the smallest unsorted element in its correct position.
4. Repeat:
o Move to the next element (index i = 1), and repeat the process until the entire array is
sorted.

Example: Selection Sort on an Array

Let's consider the array: [64, 25, 12, 22, 11].

Step-by-Step Execution:

 Initial Array: [64, 25, 12, 22, 11]

1. First Pass (i = 0):


o Initial minIndex = 0 (value 64).
o Compare 64 with 25, 12, 22, and 11.
o The smallest element is 11 (at index 4).
o Swap 64 with 11.
o Array after 1st pass: [11, 25, 12, 22, 64]
2. Second Pass (i = 1):
o Initial minIndex = 1 (value 25).
o Compare 25 with 12 and 22.
o The smallest element is 12 (at index 2).
o Swap 25 with 12.
o Array after 2nd pass: [11, 12, 25, 22, 64]
3. Third Pass (i = 2):
o Initial minIndex = 2 (value 25).
o Compare 25 with 22.
o The smallest element is 22 (at index 3).
o Swap 25 with 22.
o Array after 3rd pass: [11, 12, 22, 25, 64]
4. Fourth Pass (i = 3):
o Initial minIndex = 3 (value 25).
o Compare 25 with 64.
o 25 is the smallest, so no swap is needed.
o Array after 4th pass: [11, 12, 22, 25, 64]
5. Final Array:
o The array is now sorted: [11, 12, 22, 25, 64]

Worst Case Complexity of Selection Sort

The worst-case time complexity of the Selection Sort algorithm can be determined by counting the number
of comparisons made during the sorting process.

 Outer Loop (i from 0 to n-2): runs n-1 times.
 Inner Loop (j from i+1 to n-1): runs n-i-1 times for each i.
The total number of comparisons is therefore (n-1) + (n-2) + … + 1 = n(n-1)/2, so the worst-case time
complexity of selection sort is O(n^2). In fact, selection sort performs the same number of comparisons
regardless of the input order, so its best and average cases are O(n^2) as well.

Space Complexity

The space complexity of Selection Sort is O(1) since it only requires a constant amount of extra memory for
variables like ‘minIndex’.

2. What is the significance of linear sort algorithms? Explain the limitations of linear
sort. Discuss Bucket sort with an example. [Year:2024]
Ans:
Significance of Linear Sort Algorithms

Linear sort algorithms are a class of sorting algorithms that have a time complexity of O(n). This means
that the time it takes to sort the input data grows linearly with the size of the input, making them highly
efficient for large datasets. These algorithms are significant because they can outperform comparison-based
sorting algorithms like Quick Sort, Merge Sort, and Heap Sort, which are subject to an Ω(n log n) lower
bound in their average and worst cases.

Examples of Linear Sort Algorithms

 Counting Sort
 Radix Sort
 Bucket Sort

Significance of Linear Sorting:

1. Efficiency: Linear sorting algorithms are more efficient for specific types of data, particularly when the range
of input data is limited.
2. Non-Comparison Based: Unlike traditional comparison-based sorting algorithms, linear sorting algorithms do
not compare elements directly but use other properties like digit positions, buckets, or counts.
3. Optimized for Special Cases: These algorithms are designed for cases where the data has specific properties
(e.g., small range of integer keys), making them more suitable for certain applications.

Limitations of Linear Sort Algorithms

1. Limited Applicability:
o Linear sorting algorithms are often specialized and not universally applicable. They work well only
under specific conditions:
 Counting Sort: Requires that the range of the numbers to be sorted is known and reasonably
small.
 Radix Sort: Works efficiently with fixed-length integers or strings but may not be suitable for
floating-point numbers or large ranges of numbers without additional modifications.
 Bucket Sort: Assumes that input is uniformly distributed over a range.
2. Extra Space:
o Many linear sorting algorithms require additional memory. For example, Counting Sort requires O(k)
extra space, where k is the range of the numbers being sorted.
3. Stability Issues:
o While some linear sorting algorithms like Counting Sort are stable, others may require additional
modifications to ensure stability.
4. Not Comparison-Based:
o These algorithms are not general-purpose because they do not rely on comparisons. For arbitrary
data types, comparison-based sorting algorithms might be the only viable option.
5. Setup Overhead:
o Some algorithms, like Bucket Sort, have overhead associated with setting up the buckets, which
might offset the performance gains, especially if the data isn't uniformly distributed.
Bucket Sort

Bucket Sort is a linear sorting algorithm that distributes elements into a number of buckets, each of which is
then sorted individually (usually using another sorting algorithm like Insertion Sort). After sorting, the
buckets are combined to form the final sorted array.

 Algorithm (Pseudocode):
BucketSort(A, n)
    Create an array of k empty buckets B[0..k-1]
    for each element A[i] in array A do
        Insert A[i] into the appropriate bucket B[⌊k * A[i]⌋]   (assuming 0 ≤ A[i] < 1)
    end for
    for each bucket B[i] do
        Sort bucket B[i] (e.g., using Insertion Sort)
    end for
    Concatenate the sorted buckets B[0..k-1] to form the sorted array
end BucketSort
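A hedged C sketch of this pseudocode for doubles in [0, 1) with k = n buckets (for simplicity each bucket reserves capacity n, trading memory for brevity; a linked list per bucket would be leaner):

    #include <stdlib.h>

    /* Bucket sort for values in [0, 1); buckets are sorted by insertion sort. */
    void bucket_sort(double a[], int n)
    {
        double **bucket = malloc(n * sizeof *bucket);
        int *size = calloc(n, sizeof *size);    /* elements used per bucket    */
        for (int b = 0; b < n; b++)
            bucket[b] = malloc(n * sizeof **bucket);

        for (int i = 0; i < n; i++) {           /* scatter: index = floor(n*x) */
            int b = (int)(n * a[i]);
            bucket[b][size[b]++] = a[i];
        }

        int out = 0;
        for (int b = 0; b < n; b++) {
            for (int i = 1; i < size[b]; i++) { /* insertion sort the bucket   */
                double key = bucket[b][i];
                int j = i - 1;
                while (j >= 0 && bucket[b][j] > key) {
                    bucket[b][j + 1] = bucket[b][j];
                    j--;
                }
                bucket[b][j + 1] = key;
            }
            for (int i = 0; i < size[b]; i++)   /* gather buckets in order     */
                a[out++] = bucket[b][i];
            free(bucket[b]);
        }
        free(bucket);
        free(size);
    }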

 Example of Bucket Sort

Let's sort the array [0.42, 0.32, 0.23, 0.52, 0.25, 0.47, 0.51] using Bucket Sort.

1. Initialization:
o We have n = 7 elements and create k = n = 7 empty buckets.
o The number of buckets is typically chosen based on the number of elements and the range of the
data (here [0, 1), so element A[i] goes into bucket ⌊7 × A[i]⌋).
2. Distribute the elements into buckets:
o Bucket 1: [0.23, 0.25]
o Bucket 2: [0.42, 0.32]
o Bucket 3: [0.52, 0.47, 0.51]
o Buckets 0, 4, 5 and 6 remain empty.
3. Sort each bucket (using Insertion Sort):
o Bucket 1: [0.23, 0.25]
o Bucket 2: [0.32, 0.42]
o Bucket 3: [0.47, 0.51, 0.52]
4. Concatenate the buckets:
o Final sorted array: [0.23, 0.25, 0.32, 0.42, 0.47, 0.51, 0.52]

Analysis of Bucket Sort:

1. Time Complexity:
o Best Case: O(n) when the elements are uniformly distributed.
o Average Case: O(n + k), where k is the number of buckets.
o Worst Case: O(n^2) when all elements fall into the same bucket.
2. Space Complexity:
o Bucket Sort requires additional space for the buckets, making the space complexity O(n + k).
3. Stability:
o Bucket Sort is stable if the underlying sorting algorithm used within each bucket is stable.

3. Explain Merge sort algorithm with suitable example. [March-2023]


Ans:

Merge Sort Algorithm

Merge Sort is a classic example of a divide-and-conquer algorithm. It works by dividing the unsorted list
into smaller sublists until each sublist contains a single element, then merging those sublists back together in
a sorted order.

Steps of the Merge Sort Algorithm

1. Divide:
o Recursively split the array into two halves until each sub-array contains only one element. A list with
a single element is considered sorted.
2. Conquer (Merge):
o Merge the sorted sub-arrays back together to form a single sorted array. This merging process
involves comparing the elements of the sub-arrays and arranging them in order.
3. Combine:
o Continue merging until the entire list is reassembled into a sorted order.

Merge Sort Pseudocode


MergeSort(A, left, right)
    if left < right then
        mid = ⌊(left + right) / 2⌋
        MergeSort(A, left, mid)
        MergeSort(A, mid + 1, right)
        Merge(A, left, mid, right)
    end if
end MergeSort

Merge(A, left, mid, right)
    n1 = mid - left + 1
    n2 = right - mid
    Create temporary arrays L[1..n1] and R[1..n2]
    for i = 1 to n1 do
        L[i] = A[left + i - 1]
    end for
    for j = 1 to n2 do
        R[j] = A[mid + j]
    end for
    i = 1, j = 1, k = left
    while i ≤ n1 and j ≤ n2 do
        if L[i] ≤ R[j] then
            A[k] = L[i]
            i = i + 1
        else
            A[k] = R[j]
            j = j + 1
        end if
        k = k + 1
    end while
    Copy any remaining elements of L into A
    Copy any remaining elements of R into A
end Merge
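A C translation of this pseudocode, as a sketch (0-based indices; C99 variable-length arrays hold the temporary halves):

    /* Merge a[left..mid] and a[mid+1..right], both already sorted. */
    void merge(int a[], int left, int mid, int right)
    {
        int n1 = mid - left + 1, n2 = right - mid;
        int L[n1], R[n2];                       /* temporary halves            */
        for (int i = 0; i < n1; i++) L[i] = a[left + i];
        for (int j = 0; j < n2; j++) R[j] = a[mid + 1 + j];

        int i = 0, j = 0, k = left;
        while (i < n1 && j < n2)                /* take the smaller front item */
            a[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
        while (i < n1) a[k++] = L[i++];         /* copy any leftovers of L     */
        while (j < n2) a[k++] = R[j++];         /* copy any leftovers of R     */
    }

    void merge_sort(int a[], int left, int right)
    {
        if (left >= right) return;              /* 0 or 1 element: sorted      */
        int mid = left + (right - left) / 2;    /* midpoint without overflow   */
        merge_sort(a, left, mid);
        merge_sort(a, mid + 1, right);
        merge(a, left, mid, right);
    }

Calling merge_sort(arr, 0, n - 1) sorts the whole array.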
Example of Merge Sort

Let's sort the array [38, 27, 43, 3, 9, 82, 10] using Merge Sort.

Step-by-Step Execution:

1. Initial Array: [38, 27, 43, 3, 9, 82, 10]


2. Divide:
o Split into two halves: [38, 27, 43] and [3, 9, 82, 10]
o Split further:
 [38, 27, 43] becomes [38] and [27, 43], then [27, 43] becomes [27] and [43]
 [3, 9, 82, 10] becomes [3, 9] and [82, 10], then [3, 9] becomes [3] and [9],
and [82, 10] becomes [82] and [10]
o Now, the array is split into individual elements: [38], [27], [43], [3], [9], [82], [10].
3. Conquer (Merge):
o Merge the individual elements back together in sorted order:
 [38] and [27] merge to form [27, 38]
 [27, 38] and [43] merge to form [27, 38, 43]
 [3] and [9] merge to form [3, 9]
 [82] and [10] merge to form [10, 82]
 [3, 9] and [10, 82] merge to form [3, 9, 10, 82]
 Finally, [27, 38, 43] and [3, 9, 10, 82] merge to form [3, 9, 10, 27, 38, 43,
82]
4. Final Sorted Array: [3, 9, 10, 27, 38, 43, 82]

Complexity Analysis

 Time Complexity:
o The time complexity of Merge Sort is O(n log n) in all cases (worst, average, and best), making it very
efficient, especially for large datasets.
 Space Complexity:
o Merge Sort requires additional space for the temporary arrays used during merging, so its space
complexity is O(n).

Advantages of Merge Sort

1. Stable Sort:
o Merge Sort maintains the relative order of equal elements, making it a stable sorting algorithm.
2. Predictable Performance:
o With a guaranteed time complexity of O(n log n), it provides reliable performance regardless of the
input data.
3. Efficient for Large Data:
o Merge Sort is well-suited for sorting large datasets and can be optimized to work with external
memory (e.g., disk storage) for sorting data that doesn't fit in memory.

Disadvantages of Merge Sort

1. Space Usage:
o Requires O(n) additional space for temporary arrays, which can be a drawback when memory is a
constraint.
2. Not In-Place:
o Unlike some other sorting algorithms (e.g., Quick Sort), Merge Sort does not sort the elements in
place, requiring additional space for merging.
