
Sorting Algorithms

This document provides an overview of sorting algorithms: Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort, Heap Sort, Counting Sort, Radix Sort, and Bucket Sort. Each algorithm is presented with pseudocode and its time complexity, highlighting its efficiency and suitability for different kinds of datasets. Some algorithms are simple but inefficient for large datasets; others are more complex but offer better performance.


1. Bubble Sort:

BubbleSort(A)
    for i = 0 to length(A)-1
        for j = 0 to length(A)-i-2
            if A[j] > A[j+1]
                swap A[j] and A[j+1]

Explanation:

Bubble Sort repeatedly compares adjacent elements and swaps them if they are in
the wrong order. After each pass, the largest unsorted element moves to its correct
position. This process is repeated until no more swaps are needed. It has a time
complexity of O(n²), making it inefficient for large datasets.
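
For reference, here is a minimal runnable Python version of the pseudocode above (the name bubble_sort is our own):

def bubble_sort(a):
    # Repeatedly bubble the largest unsorted element to the end of the list.
    n = len(a)
    for i in range(n):
        for j in range(n - i - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]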

2. Selection Sort:

SelectionSort(A)
    for i = 0 to length(A)-1
        minIndex = i
        for j = i+1 to length(A)-1
            if A[j] < A[minIndex]
                minIndex = j
        swap A[i] and A[minIndex]

Explanation:

Selection Sort repeatedly selects the smallest (or largest, depending on the sorting
order) element from the unsorted part of the array and swaps it with the first
unsorted element. It performs at most n-1 swaps, far fewer than Bubble Sort, but its
time complexity is still O(n²), making it inefficient for large datasets.
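
A corresponding Python sketch (selection_sort is our own name):

def selection_sort(a):
    n = len(a)
    for i in range(n):
        # Find the smallest element in the unsorted suffix a[i:].
        min_index = i
        for j in range(i + 1, n):
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]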

3. Insertion Sort:

InsertionSort(A)
    for i = 1 to length(A)-1
        key = A[i]
        j = i-1
        while j >= 0 and A[j] > key
            A[j+1] = A[j]
            j = j-1
        A[j+1] = key

Explanation:

Insertion Sort builds a sorted portion of the array one element at a time. For each
element, it compares it with elements in the sorted part and inserts it in its correct
position. It’s efficient for small datasets or lists that are already mostly sorted.
Best-case time complexity is O(n) when the array is already sorted, and worst-case
time complexity is O(n²).
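
A runnable Python sketch of the same logic:

def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger sorted elements one slot right to make room for key.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key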

4. Merge Sort:

MergeSort(A)
    if length(A) > 1
        mid = length(A) // 2
        L = A[0..mid-1], R = A[mid..end]
        MergeSort(L)
        MergeSort(R)
        merge(L, R, A)

Explanation:

Merge Sort is a divide-and-conquer algorithm that divides the array into two halves,
recursively sorts each half, and then merges the two sorted halves. It guarantees a
time complexity of O(n log n) and is efficient for large datasets, but it requires extra
memory for temporary arrays.
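
A minimal Python version, spelling out the merge step that the pseudocode leaves to a helper:

def merge_sort(a):
    if len(a) > 1:
        mid = len(a) // 2
        left, right = a[:mid], a[mid:]   # temporary copies of each half
        merge_sort(left)
        merge_sort(right)
        # Merge the two sorted halves back into a.
        i = j = k = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                a[k] = left[i]
                i += 1
            else:
                a[k] = right[j]
                j += 1
            k += 1
        while i < len(left):
            a[k] = left[i]
            i += 1
            k += 1
        while j < len(right):
            a[k] = right[j]
            j += 1
            k += 1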

5. Quick Sort:

QuickSort(A, low, high)
    if low < high
        pi = Partition(A, low, high)
        QuickSort(A, low, pi-1)
        QuickSort(A, pi+1, high)

Partition(A, low, high)
    pivot = A[high]
    i = low-1
    for j = low to high-1
        if A[j] < pivot
            i += 1
            swap A[i] and A[j]
    swap A[i+1] and A[high]
    return i+1

Explanation:

Quick Sort is a divide-and-conquer algorithm that selects a pivot element and
partitions the array so that elements smaller than the pivot are on the left and
elements larger than the pivot are on the right. It recursively sorts the left and right
partitions. On average it runs in O(n log n) time, but its worst-case time complexity
is O(n²) if the pivot is poorly chosen. In practice it is often faster than Merge Sort.
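
A runnable Python sketch of the pseudocode above; call it as quick_sort(a, 0, len(a) - 1):

def quick_sort(a, low, high):
    if low < high:
        pi = partition(a, low, high)
        quick_sort(a, low, pi - 1)
        quick_sort(a, pi + 1, high)

def partition(a, low, high):
    # Lomuto partition scheme: the pivot is the last element of the range.
    pivot = a[high]
    i = low - 1
    for j in range(low, high):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]
    return i + 1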

6. Heap Sort:

HeapSort(A)
    BuildMaxHeap(A)
    for i = length(A)-1 to 1
        swap A[0] and A[i]
        MaxHeapify(A, 0, i)

BuildMaxHeap(A)
    n = length(A)
    for i = n//2 - 1 to 0
        MaxHeapify(A, i, n)

MaxHeapify(A, i, n)
    left = 2*i+1, right = 2*i+2
    largest = i
    if left < n and A[left] > A[largest]
        largest = left
    if right < n and A[right] > A[largest]
        largest = right
    if largest != i
        swap A[i] and A[largest]
        MaxHeapify(A, largest, n)

Explanation:

Heap Sort builds a max heap from the input array and repeatedly extracts the
largest element (the root of the heap) by swapping it with the last element. After
each extraction, the heap is restructured over the remaining elements. The time
complexity of Heap Sort is O(n log n), and, unlike Merge Sort, it sorts in place
without extra memory.
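
A Python sketch of the three routines; the parameter n is the current heap size, as in the pseudocode above:

def max_heapify(a, i, n):
    # Restore the max-heap property for the subtree rooted at i (heap size n).
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build the max heap
        max_heapify(a, i, n)
    for i in range(n - 1, 0, -1):         # extract the max one element at a time
        a[0], a[i] = a[i], a[0]
        max_heapify(a, 0, i)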

7. Counting Sort:

CountingSort(A, k)
    C = array of size k+1, initialized to 0
    B = output array of size length(A)
    for i = 0 to length(A)-1
        C[A[i]] += 1
    for i = 1 to k
        C[i] += C[i-1]
    for i = length(A)-1 to 0
        B[C[A[i]]-1] = A[i]
        C[A[i]] -= 1
    return B

Explanation:

Counting Sort assumes that the input consists of integers in a known range. It
counts the number of occurrences of each element and uses these counts to place
the elements in the correct position in the output array. It has a time complexity of
O(n+k), where k is the range of the input, making it efficient for large datasets with a
small range of values.
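
A Python sketch, assuming the input consists of integers in the range 0..k; it returns a new sorted list:

def counting_sort(a, k):
    counts = [0] * (k + 1)
    for x in a:
        counts[x] += 1
    for i in range(1, k + 1):
        counts[i] += counts[i - 1]   # prefix sums give final positions
    out = [0] * len(a)
    for x in reversed(a):            # backwards pass keeps the sort stable
        counts[x] -= 1
        out[counts[x]] = x
    return out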

8. Radix Sort:

RadixSort(A, d)
    for i = 1 to d
        apply stable CountingSort on the ith digit

Explanation:

Radix Sort processes each digit of the numbers in the array one at a time, starting
from the least significant digit, using a stable sorting algorithm (like Counting Sort)
to sort based on each digit. It’s efficient for sorting numbers with a fixed number of
digits and has a time complexity of O(d(n+k)), where d is the number of digits and k
is the range of digits.
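
A Python sketch, assuming non-negative integers with at most d decimal digits; each pass is a stable counting sort on one digit, and a new list is returned:

def radix_sort(a, d):
    for p in range(d):
        base = 10 ** p
        counts = [0] * 10
        for x in a:
            counts[(x // base) % 10] += 1
        for i in range(1, 10):
            counts[i] += counts[i - 1]
        out = [0] * len(a)
        for x in reversed(a):        # stable: preserves order from earlier digits
            digit = (x // base) % 10
            counts[digit] -= 1
            out[counts[digit]] = x
        a = out
    return a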

9. Bucket Sort:

BucketSort(A)
    create empty buckets
    for i = 0 to length(A)-1
        insert A[i] into its bucket
    sort each bucket
    concatenate all buckets

Explanation:

Bucket Sort distributes elements into a number of buckets based on their value
range. Each bucket is then sorted individually, usually with another sorting
algorithm, and the buckets are concatenated to produce the final sorted list. The
expected time complexity is O(n) when the input is uniformly distributed, but
performance depends on how evenly the data spreads across the buckets.
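
A Python sketch, assuming the inputs are floats uniformly distributed in [0, 1); the bucket count of 10 is an arbitrary choice:

def bucket_sort(a, num_buckets=10):
    buckets = [[] for _ in range(num_buckets)]
    for x in a:
        # Values in [0, 1) map to bucket indices 0..num_buckets-1.
        buckets[int(x * num_buckets)].append(x)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))  # sort each bucket, then concatenate
    return result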

Summary:

• Bubble Sort: Simplest but inefficient for large datasets (O(n²)).

• Selection Sort: Reduces the number of swaps compared to Bubble Sort but
still O(n²).

• Insertion Sort: Efficient for small or nearly sorted data, O(n) in the best case.

• Merge Sort: Divide-and-conquer algorithm with guaranteed O(n log n) time
but requires additional space.

• Quick Sort: Very efficient in practice, O(n log n) on average, but can degrade
to O(n²) in the worst case.

• Heap Sort: Always O(n log n), efficient for large datasets, and doesn’t require
extra space.

• Counting Sort: Efficient for sorting integers in a known range, O(n+k).

• Radix Sort: Good for sorting integers with a fixed number of digits, O(d(n+k)).

• Bucket Sort: Ideal for uniformly distributed data, can achieve O(n) under
optimal conditions.
