Analysis of Sorting Algorithms: Performance Comparison
Name              ID
Samuel Godad      UGR/4642/13
Firanmit Megersa  UGR/7847/13
Fifth-year students
June 14, 2025
Submitted to Dr. Beakal Gizachew
(GitHub Repository)
1 Introduction
This report presents a comprehensive analysis of three fundamental sorting algorithms: Insertion Sort, Quick Sort, and Merge Sort. The study focuses on their empirical performance characteristics, measured through runtime analysis and operation counts, across various input scenarios. Our investigation aims to bridge the gap between theoretical complexity analysis and practical performance, providing insights into algorithm behavior under different conditions.
The analysis encompasses:
• Runtime measurements in milliseconds
• Operation counts (comparisons and swaps)
• Performance across different input types
• Comparison with theoretical complexity bounds
2 Methodology
2.1 Implementation Details
The algorithms were implemented in C++ with the following key features:
• A SortingStats struct to track:
– Execution time in milliseconds
– Number of comparisons
– Number of swaps/moves
• Precise timing using std::chrono
• Consistent measurement methodology across all algorithms
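A minimal sketch of this harness is shown below; the field and function names are assumptions for illustration, not the exact code from the repository.

#include <chrono>
#include <cstdint>
#include <vector>

// Sketch of the measurement harness (names are illustrative).
struct SortingStats {
    double milliseconds = 0.0;      // execution time in milliseconds
    std::uint64_t comparisons = 0;  // number of element comparisons
    std::uint64_t swaps = 0;        // number of swaps/moves
};

// Runs a sorting function on a copy of the input and records the
// elapsed wall-clock time; the sort updates the counters itself.
template <typename Sorter>
SortingStats measure(std::vector<int> data, Sorter sort) {
    SortingStats stats;
    auto start = std::chrono::steady_clock::now();
    sort(data, stats);
    auto end = std::chrono::steady_clock::now();
    stats.milliseconds =
        std::chrono::duration<double, std::milli>(end - start).count();
    return stats;
}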
2.2 Test Cases and Data Generation
Four distinct types of input data were generated:
• Random Lists: Simulating typical unsorted data with elements randomly distributed
• Ascending Lists: Representing best-case scenarios for some algorithms
• Descending Lists: Representing worst-case scenarios
• Small Lists: Testing performance on limited data sets
Input sizes ranged from 1,000 to 10,000 elements, with increments of 1,000, providing
a comprehensive view of scalability.
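The generators might look like the following sketch; the function names and the range of random values are assumptions for illustration.

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Generators for the input types (names are illustrative).
std::vector<int> makeRandom(std::size_t n, std::mt19937& rng) {
    std::uniform_int_distribution<int> dist(0, 1'000'000); // assumed range
    std::vector<int> v(n);
    for (int& x : v) x = dist(rng);
    return v;
}

std::vector<int> makeAscending(std::size_t n) {
    std::vector<int> v(n);
    std::iota(v.begin(), v.end(), 0); // 0, 1, 2, ..., n-1
    return v;
}

std::vector<int> makeDescending(std::size_t n) {
    std::vector<int> v = makeAscending(n);
    std::reverse(v.begin(), v.end());
    return v;
}

// Sizes swept as described above:
// for (std::size_t n = 1000; n <= 10000; n += 1000) { ... }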
2.3 Measurement and Analysis Tools
• C++ implementation for algorithm execution and data collection
• Python scripts for data processing and visualization
• CSV files for structured data storage
• Matplotlib for generating performance graphs
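For example, each measurement can be appended as one CSV row; the column order here is an assumption, reusing the SortingStats sketch from Section 2.1.

#include <cstddef>
#include <fstream>
#include <string>

// Appends one measurement as a CSV row:
// algorithm,input_type,n,time_ms,comparisons,swaps (column order assumed).
void appendResult(const std::string& path, const std::string& algorithm,
                  const std::string& inputType, std::size_t n,
                  const SortingStats& stats) {
    std::ofstream out(path, std::ios::app);
    out << algorithm << ',' << inputType << ',' << n << ','
        << stats.milliseconds << ',' << stats.comparisons << ','
        << stats.swaps << '\n';
}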
3 Results and Analysis
3.1 Runtime Analysis
3.1.1 Random Lists
Figure 1: Runtime comparison for random arrays
For random lists, Quick Sort demonstrates superior performance, followed by Merge Sort.
This aligns with their theoretical O(n log n) average-case complexity. Insertion Sort shows
a quadratic growth pattern, consistent with its O(n²) complexity. The performance gap
widens significantly as input size increases, highlighting the importance of asymptotic
complexity in practical applications.
3.1.2 Ascending Lists
Figure 2: Runtime comparison for ascending arrays
Ascending lists reveal interesting behavior:
• Insertion Sort performs exceptionally well, approaching O(n) complexity
• Quick Sort shows degraded performance due to poor pivot selection
• Merge Sort maintains consistent O(n log n) performance
3.1.3 Descending Lists
Figure 3: Runtime comparison for descending arrays
Descending lists represent worst-case scenarios:
• Insertion Sort shows clear O(n²) behavior
• Quick Sort’s performance degrades due to unbalanced partitions
• Merge Sort maintains its consistent performance
3.2 Operation Count Analysis
3.2.1 Random Lists
Figure 4: Operation count comparison for random arrays
The operation counts for random lists reveal:
• Insertion Sort performs significantly more operations
• Quick Sort and Merge Sort show more efficient operation counts
• The growth rate of operations correlates with runtime performance
3.2.2 Ascending Lists
Figure 5: Operation count comparison for ascending arrays
Operation counts for ascending lists show:
• Minimal operations for Insertion Sort
• High operation count for Quick Sort due to poor partitioning
• Consistent operation count for Merge Sort
3.2.3 Descending Lists
Figure 6: Operation count comparison for descending arrays
Descending lists demonstrate:
• Maximum operations for Insertion Sort
• High operation count for Quick Sort
• Stable operation count for Merge Sort
4 Theoretical vs. Actual Performance
4.1 Random Lists
Figure 7: Theoretical vs. actual performance for random arrays
The theoretical curves (O(n²) and O(n log n)) align well with actual performance:
• Insertion Sort follows the O(n²) curve
• Quick Sort and Merge Sort follow the O(n log n) curve
• Constant factors affect the actual performance but not the growth rate
4.2 Ascending Lists
Figure 8: Theoretical vs. actual performance for ascending arrays
Ascending lists show:
• Insertion Sort performs far better than its theoretical worst case, running in near-linear time
• Quick Sort’s performance degrades to O(n²)
• Merge Sort maintains theoretical performance
4.3 Descending Lists
Figure 9: Theoretical vs. actual performance for descending arrays
Descending lists demonstrate:
• Insertion Sort reaches theoretical worst case
• Quick Sort’s performance matches theoretical worst case
• Merge Sort maintains theoretical performance
5 Discussion
5.1 Algorithm Performance Analysis
5.1.1 Insertion Sort
• Best Case (O(n)): Achieved with nearly sorted data
• Worst Case (O(n²)): Occurs with reverse-sorted data
• Space Complexity: O(1) - in-place sorting
• Stability: Stable sorting algorithm
• Adaptive: Performs well on partially sorted data
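The following instrumented sketch (assuming the SortingStats struct from Section 2.1) shows where these properties come from: on an already-sorted input the inner loop exits after a single comparison per element, giving the O(n) best case.

#include <cstddef>
#include <vector>

// Instrumented Insertion Sort sketch; counter placement is illustrative.
void insertionSort(std::vector<int>& a, SortingStats& stats) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        // Shift larger elements one slot to the right until key fits.
        while (j > 0) {
            ++stats.comparisons;
            if (a[j - 1] <= key) break; // already in place: O(n) best case
            a[j] = a[j - 1];            // each shift counts as a move
            ++stats.swaps;
            --j;
        }
        a[j] = key;
    }
}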
5.1.2 Quick Sort
• Average Case (O(n log n)): Achieved with good pivot selection
• Worst Case (O(n²)): Occurs with poor pivot choices
• Space Complexity: O(log n) for recursion stack
• Stability: Not stable by default
• Pivot Selection: Critical for performance
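A sketch with a Lomuto partition and the naive last-element pivot discussed later in Section 5.2.2; the counters again assume the SortingStats struct from Section 2.1.

#include <cstddef>
#include <utility>
#include <vector>

// Lomuto partition with the last element as pivot (the naive choice).
std::size_t partition(std::vector<int>& a, std::size_t lo, std::size_t hi,
                      SortingStats& stats) {
    int pivot = a[hi];
    std::size_t i = lo;
    for (std::size_t j = lo; j < hi; ++j) {
        ++stats.comparisons;
        if (a[j] < pivot) {
            std::swap(a[i], a[j]);
            ++stats.swaps;
            ++i;
        }
    }
    std::swap(a[i], a[hi]); // place pivot in its final position
    ++stats.swaps;
    return i;
}

void quickSort(std::vector<int>& a, std::size_t lo, std::size_t hi,
               SortingStats& stats) {
    if (lo >= hi) return;
    std::size_t p = partition(a, lo, hi, stats);
    if (p > lo) quickSort(a, lo, p - 1, stats); // guard unsigned underflow
    quickSort(a, p + 1, hi, stats);
}
// Usage: if (!a.empty()) quickSort(a, 0, a.size() - 1, stats);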
5.1.3 Merge Sort
• Consistent Performance (O(n log n)): Maintained across all cases
• Space Complexity: O(n) for auxiliary array
• Stability: Stable sorting algorithm
• Parallelization: Naturally parallelizable
• External Sorting: Suitable for large datasets
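A top-down sketch with an explicit O(n) auxiliary buffer (SortingStats as in Section 2.1); the <= in the merge step is what makes the sort stable.

#include <cstddef>
#include <vector>

// Merge two sorted halves [lo, mid) and [mid, hi) through a buffer.
void merge(std::vector<int>& a, std::vector<int>& buf, std::size_t lo,
           std::size_t mid, std::size_t hi, SortingStats& stats) {
    std::size_t i = lo, j = mid, k = lo;
    while (i < mid && j < hi) {
        ++stats.comparisons;
        buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++]; // <= keeps it stable
        ++stats.swaps; // each buffer write counted as a move
    }
    while (i < mid) { buf[k++] = a[i++]; ++stats.swaps; }
    while (j < hi)  { buf[k++] = a[j++]; ++stats.swaps; }
    for (std::size_t t = lo; t < hi; ++t) a[t] = buf[t];
}

void mergeSort(std::vector<int>& a, std::vector<int>& buf, std::size_t lo,
               std::size_t hi, SortingStats& stats) {
    if (hi - lo < 2) return; // ranges of 0 or 1 are already sorted
    std::size_t mid = lo + (hi - lo) / 2;
    mergeSort(a, buf, lo, mid, stats);
    mergeSort(a, buf, mid, hi, stats);
    merge(a, buf, lo, mid, hi, stats);
}
// Usage: std::vector<int> buf(a.size()); mergeSort(a, buf, 0, a.size(), stats);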
5.2 Explaining Performance Differences
5.2.1 Random Lists
Random lists demonstrate the fundamental characteristics of each algorithm:
• Quick Sort typically performs best due to its efficient divide-and-conquer strategy
and good average-case performance. The random nature of the input helps maintain
balanced partitions, leading to optimal O(n log n) performance.
• Merge Sort also shows excellent performance on random data, offering guaranteed
O(n log n) performance. While it may be slightly slower than Quick Sort due to
its additional space requirements and merge operations, it maintains consistent
performance.
• Insertion Sort shows significantly slower performance on random data, with its
O(n²) complexity becoming apparent as input size increases. Each element requires
multiple comparisons and shifts, leading to quadratic growth in operations.
5.2.2 Ascending Lists
Ascending lists reveal the adaptive nature of algorithms:
• Insertion Sort excels in this scenario, approaching O(n) complexity. Each element
is already in its correct position, requiring only one comparison per element.
• Quick Sort performs poorly with a naive pivot selection (e.g., always choosing the last element). This leads to highly unbalanced partitions, degrading performance to O(n²); a common mitigation is sketched after this list.
• Merge Sort maintains its O(n log n) performance, as its divide-and-conquer approach is not significantly affected by the initial order of elements.
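One such mitigation, added here for illustration and not part of the measured implementations, is median-of-three pivot selection: it moves the median of the first, middle, and last elements into the pivot slot used by the Lomuto partition sketched in Section 5.1.2, yielding balanced partitions on already-sorted input.

#include <cstddef>
#include <utility>
#include <vector>

// Move the median of a[lo], a[mid], a[hi] into a[hi] so the Lomuto
// partition uses it as pivot; avoids the O(n²) case on sorted input.
void chooseMedianPivot(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    std::size_t mid = lo + (hi - lo) / 2;
    if (a[mid] < a[lo]) std::swap(a[mid], a[lo]);
    if (a[hi] < a[lo]) std::swap(a[hi], a[lo]);   // a[lo] is now the minimum
    if (a[mid] < a[hi]) std::swap(a[mid], a[hi]); // median ends up at a[hi]
}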
5.2.3 Descending Lists
Descending lists represent worst-case scenarios:
• Insertion Sort shows clear O(n²) behavior, as each element must be shifted to the
beginning of the array. This results in the maximum number of comparisons and
swaps.
• Quick Sort also suffers from poor performance due to unbalanced partitions, similar to ascending lists. The choice of pivot becomes critical in these scenarios.
• Merge Sort maintains its consistent O(n log n) performance, demonstrating its
robustness to input order.
5.2.4 Small Lists
For very small lists (n < 50):
• Insertion Sort often performs best due to its low overhead and efficient handling
of small datasets.
• Quick Sort and Merge Sort may show slower performance due to their recursive
nature and additional overhead, despite their better asymptotic complexity.
5.3 Key Questions Addressed
5.3.1 Impact of Asymptotic Complexity
The graphs clearly illustrate how asymptotic complexity influences performance:
• Algorithms with O(n log n) complexity (Quick Sort and Merge Sort) show a much
flatter curve compared to Insertion Sort’s O(n²) curve.
• The quadratic growth of O(n²) becomes evident in Insertion Sort’s performance on large, unsorted lists: at n = 10,000, n² is 10⁸, while n log₂ n is only about 1.3 × 10⁵.
• The logarithmic factor in O(n log n) keeps the growth more controlled for Quick
Sort and Merge Sort.
• Constant factors affect the actual performance but don’t change the fundamental
growth rate.
5.3.2 Behavior on Special Cases
The analysis reveals why algorithms behave differently on special cases:
• Insertion Sort’s efficiency on nearly sorted lists stems from its adaptive nature,
performing fewer operations when elements are close to their final positions.
• Quick Sort’s performance degradation on sorted or nearly sorted lists highlights
the importance of pivot selection strategies.
• Merge Sort’s consistent performance across various input types demonstrates the
advantage of a stable, predictable algorithm.
6 Reflection
6.1 Algorithm Design and Performance
The findings underscore the critical relationship between algorithm design, theoretical
complexity, and practical performance:
• Insertion Sort’s simplicity and in-place sorting make it efficient for small or nearly
sorted datasets, but its sequential nature leads to poor scalability for large, unsorted
inputs.
• Quick Sort’s divide-and-conquer strategy provides excellent average-case performance, but its efficiency heavily depends on pivot selection, highlighting the trade-off between average-case efficiency and worst-case performance.
• Merge Sort’s consistent O(n log n) performance comes at the cost of additional
space requirements, demonstrating the space-time trade-off in algorithm design.
6.2 Optimization Considerations
The analysis raises important questions about optimization:
• When to optimize for specific scenarios?
– For small datasets or nearly sorted data, Insertion Sort might be preferred due
to its low overhead.
– For general-purpose sorting of large, random datasets, Quick Sort with a robust
pivot selection strategy is often optimal.
– When guaranteed performance is critical, Merge Sort provides the most reliable
option.
• Hybrid approaches:
– Many standard library implementations use hybrid algorithms (e.g., Introsort)
that combine the strengths of different algorithms.
– Quick Sort can be used for large partitions, switching to Insertion Sort for small partitions, as sketched below.
– Heap Sort can be used as a fallback to avoid Quick Sort’s worst-case behavior.
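A minimal sketch of such a hybrid, reusing the partition and pivot-selection sketches from Sections 5.1.2 and 5.2.2; the cutoff value of 32 is a typical choice, not a measured one.

#include <cstddef>
#include <vector>

// Sort the closed range [lo, hi] with Insertion Sort (low overhead).
void insertionSortRange(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    for (std::size_t i = lo + 1; i <= hi; ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > lo && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
        a[j] = key;
    }
}

constexpr std::size_t kCutoff = 32; // small-partition threshold (assumed)

void hybridSort(std::vector<int>& a, std::size_t lo, std::size_t hi,
                SortingStats& stats) {
    if (hi - lo + 1 <= kCutoff) {   // small partition: finish cheaply
        insertionSortRange(a, lo, hi);
        return;
    }
    chooseMedianPivot(a, lo, hi);                // Section 5.2.2 sketch
    std::size_t p = partition(a, lo, hi, stats); // Section 5.1.2 sketch
    if (p > lo) hybridSort(a, lo, p - 1, stats);
    if (p < hi) hybridSort(a, p + 1, hi, stats);
}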
6.3 Practical Implications
The study has several practical implications:
• Algorithm Selection: The choice of sorting algorithm should be based on:
– Expected input characteristics
– Dataset size
– Memory constraints
– Stability requirements
– Performance guarantees needed
• Implementation Considerations:
– Pivot selection strategies for Quick Sort
– Space-time trade-offs
– Parallelization opportunities
– Memory access patterns
• Future Improvements:
– Development of more adaptive algorithms
– Better hybrid approaches
– Memory-optimized versions
– Parallel implementations
7 Conclusion
The analysis reveals several key insights:
• No single algorithm is optimal for all scenarios
• Input characteristics significantly impact performance
• Theoretical complexity provides valuable insights
• Practical considerations often influence algorithm choice
7.1 Future Considerations
• Hybrid sorting algorithms
• Parallel implementations
• Memory-optimized versions
• Adaptive algorithm selection