## Time Complexity of Sorting Algorithms in Special Cases
| Sorting Algorithm | All Elements the Same | Already Sorted Array |
|---|---|---|
| Bubble Sort | O(n²) without the early-exit flag; O(n) with it (an all-equal array is already sorted, so the first pass makes no swaps) | O(n) (best case with the early-exit flag: one pass, no swaps) |
| Selection Sort | O(n²) (still makes all comparisons) | O(n²) (no improvement) |
| Insertion Sort | O(n) (no shifts needed) | O(n) (best case, minimal shifts) |
| QuickSort (Lomuto/naïve partitioning) | O(n²) (equal keys all land on one side of the pivot, so partitions are maximally unbalanced) | O(n²) (if the first or last element is always picked as pivot) |
| QuickSort (Hoare/randomized partitioning) | O(n log n) (Hoare's scans split equal keys evenly) | O(n log n) (a random pivot keeps partitions balanced in expectation) |
| Merge Sort | O(n log n) (still splits and merges) | O(n log n) (no advantage) |
| Heap Sort | O(n log n) (still builds the heap and extracts every element) | O(n log n) (no improvement) |
| Counting Sort | O(n + k) (k = value range; duplicates pose no problem) | O(n + k) (linear; input order is irrelevant) |
| Radix Sort | O(nk) (k = number of digits) | O(nk) (same as above) |
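The contrasting rows for insertion sort and bubble sort are easy to verify empirically. Below is a minimal Python sketch (the helper names `insertion_sort_comparisons` and `bubble_sort_comparisons` are illustrative, not from any library) that counts comparisons on an all-equal array and an already sorted one:

```python
def insertion_sort_comparisons(a):
    """Insertion sort; returns the number of comparisons performed."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements right. On all-equal or sorted input the
        # very first comparison fails, so each element costs exactly one.
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

def bubble_sort_comparisons(a, early_exit=True):
    """Bubble sort; with early_exit it stops after a pass with no swaps."""
    a = list(a)
    comparisons = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if early_exit and not swapped:
            break  # a full pass made no swaps: the array is sorted
    return comparisons

n = 1000
all_equal = [7] * n
already_sorted = list(range(n))
print(insertion_sort_comparisons(all_equal))        # n - 1  -> O(n)
print(insertion_sort_comparisons(already_sorted))   # n - 1  -> O(n)
print(bubble_sort_comparisons(all_equal))           # n - 1 with the flag
print(bubble_sort_comparisons(all_equal, False))    # n(n-1)/2 without it
```

On both inputs the insertion sort counter comes out to n − 1, while bubble sort only reaches n − 1 when the early-exit flag is enabled; without it the count is n(n − 1)/2 regardless of input.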
### Explanation of the Special-Case Behavior
1. **QuickSort:**
   - With Lomuto partitioning, keys equal to the pivot all land on one side, so an all-equal array produces maximally unbalanced partitions and O(n²) time (see the partition sketch after this list).
   - On an already sorted array, always picking the first or last element as pivot likewise yields one-sided partitions and O(n²).
   - A randomized pivot restores expected O(n log n) on sorted input; Hoare's scheme (or three-way partitioning) also handles equal keys in O(n log n).
2. **Insertion Sort:**
   - Runs in O(n) in both cases: each element is compared once with its predecessor and no shifting is needed (as the comparison-count sketch above confirms).
3. **Bubble Sort:**
   - With the early-exit flag, runs in O(n) in both cases: an all-equal array is already sorted, so the first pass makes no swaps and the algorithm stops. Without the flag, it takes O(n²) regardless of input.
4. **Merge Sort and Heap Sort:**
   - Always take O(n log n) since they process every element fully.
5. **Counting Sort:**
   - Runs in O(n + k) (k = value range) whether the elements are all equal or already sorted; it never compares elements, so input order is irrelevant.
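As referenced in point 1, the sketch below runs a single partition step of each scheme on an all-equal array; the helper names are illustrative, and this is a simplified demonstration rather than a full quicksort. Lomuto's partition returns the last index, a maximally unbalanced (n − 1, 0) split that cascades into O(n²); Hoare's returns roughly the midpoint, which keeps the recursion depth logarithmic.

```python
def lomuto_partition(a, lo, hi):
    """Lomuto scheme with a[hi] as pivot. Every equal key satisfies
    a[j] <= pivot, so all of them end up in the left part."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i  # final pivot index

def hoare_partition(a, lo, hi):
    """Hoare scheme with the middle element as pivot. Both scans stop
    on keys equal to the pivot, so equal keys are traded across the
    middle and the split stays near-balanced."""
    pivot = a[(lo + hi) // 2]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j  # boundary between the two parts
        a[i], a[j] = a[j], a[i]

n = 1000
print(lomuto_partition([5] * n, 0, n - 1))  # 999: (n-1, 0) split -> O(n^2)
print(hoare_partition([5] * n, 0, n - 1))   # 499: ~(n/2, n/2)  -> O(n log n)
```

Randomizing the pivot fixes the sorted-input case for either scheme, but it does not rescue Lomuto when all keys are equal; for duplicate-heavy inputs, Hoare's scheme or a three-way (Dutch national flag) partition is the usual remedy.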