AADA Expt-7
Experiment No.: 7
Objectives:
1. Time Complexity Validation: Confirm that the average case time complexity
of randomized quicksort approaches O(n log n) as n increases.
2. Worst-Case Behavior: Observe if and when worst-case behaviors (O(n²)) arise,
even with random pivoting.
3. Time vs. Input Size Relationship: Plot the time vs. input size and fit the curve
to determine how close it aligns with theoretical expectations.
4. Effect of Randomization: Understand how randomization impacts
performance by running experiments with different random inputs for each
input size.
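Objective 4 above calls for repeating the measurement with several different random inputs at each size and averaging the results. The sketch below does this with a compact copy of randomized quicksort so the snippet runs on its own; the helper names `rqsort` and `mean_time_ns` are illustrative, not part of the submitted code:

```python
import random
import statistics
import time

def rqsort(arr, lo, hi):
    # Compact randomized quicksort: random pivot, Lomuto partition.
    if lo < hi:
        p = random.randint(lo, hi)
        arr[p], arr[hi] = arr[hi], arr[p]
        i = lo - 1
        for j in range(lo, hi):
            if arr[j] <= arr[hi]:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
        rqsort(arr, lo, i)
        rqsort(arr, i + 2, hi)

def mean_time_ns(size, trials=5):
    # Average over several independent random inputs to smooth out
    # both timer noise and pivot-choice variance.
    samples = []
    for _ in range(trials):
        data = [random.randint(0, size) for _ in range(size)]
        t0 = time.time_ns()
        rqsort(data, 0, len(data) - 1)
        samples.append(time.time_ns() - t0)
    return statistics.mean(samples)

for n in (1000, 10000):
    print(n, mean_time_ns(n))
```

Averaging per size makes the time-vs-size plot in Objective 3 noticeably smoother than a single run per size.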
References:
1. https://www.geeksforgeeks.org/quicksort-using-random-pivoting/
2. https://www.tutorialspoint.com/data_structures_algorithms/dsa_randomized_quick_sort_algorithm.htm
3. https://stackoverflow.com/questions/63686324/quicksort-to-already-sorted-array
Prerequisites:
1. Sorting Algorithms:
• Familiarity with basic sorting algorithms (e.g., bubble sort, selection sort, merge
sort) and their time complexities.
2. Quicksort Algorithm:
• Basic Concept: Quicksort is a divide-and-conquer algorithm that works by
selecting a 'pivot' element and partitioning the array into elements less than and
greater than the pivot.
• Deterministic vs. Randomized: Understand the difference between deterministic
quicksort (fixed pivot) and randomized quicksort (random pivot selection).
3. Time Complexity:
• Big O Notation: Knowledge of how to express time complexity using Big O
notation, particularly O(n log n) and O(n²).
• Average vs. Worst Case: Distinction between average-case and worst-case time
complexities and how they apply to sorting algorithms.
4. Randomization:
• Random Selection: Understand how random selection of a pivot can affect the
performance of quicksort, reducing the chances of hitting worst-case scenarios.
5. Recursive Algorithms:
• Basic understanding of recursion, including how recursive function calls work and
how they apply to algorithms like quicksort.
Related Theory:
1. Quicksort Overview:
• Divide and Conquer Strategy: Quicksort operates by dividing the array into two
sub-arrays based on a pivot element. It recursively sorts the sub-arrays.
• Partitioning: The process of rearranging the array so that elements less than the
pivot come before it and elements greater come after it.
2. Randomized Pivot Selection:
• Randomized quicksort selects a pivot randomly, which helps in balancing the
partitions. This randomization reduces the chance of encountering the worst-
case time complexity that can occur with a poor choice of pivot (e.g., always
picking the smallest or largest element).
3. Time Complexity Analysis:
• Average Case: The expected time complexity is O(n log n). This is because:
o Each partitioning step takes O(n) time (to traverse the array).
o The depth of the recursion tree is approximately O(log n) on average,
leading to an overall complexity of O(n log n).
• Worst Case: The worst-case time complexity is O(n²), which occurs when the
pivot is consistently the smallest or largest element, leading to unbalanced
partitions. Randomization significantly mitigates this risk.
• Best Case: In the best case, where the pivot splits the array evenly, the time
complexity is also O(n log n).
4. Probability and Randomization:
• The use of randomization introduces a probabilistic element to the algorithm’s
performance. The average-case performance relies on the probability that the
chosen pivots will lead to balanced partitions.
• The randomness helps ensure that, on average, the depth of the recursion is
logarithmic, which is essential for achieving efficient sorting.
5. Recursion and Its Implications:
• Quicksort is a recursive algorithm. Each recursive call sorts a smaller sub-array,
leading to a recursive tree structure.
• The maximum depth of recursion determines the extra space used by the call
stack: it is O(log n) on average for randomized quicksort, giving an overall
space complexity of O(log n), but it can reach O(n) in the worst case of
highly unbalanced partitions.
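The contrast between the worst case and the randomized average case described above can be made concrete by counting comparisons on an already-sorted input, the classic worst case for a fixed last-element pivot. This is a sketch, not part of the submitted code; with a fixed pivot the Lomuto partition performs exactly n(n-1)/2 comparisons on sorted input, while the random pivot keeps the count near n log n:

```python
import random

def quicksort(arr, lo, hi, counter, randomize):
    # Lomuto partition; the pivot is random or the last element
    # depending on `randomize`. counter[0] tallies comparisons.
    if lo >= hi:
        return
    if randomize:
        p = random.randint(lo, hi)
        arr[p], arr[hi] = arr[hi], arr[p]
    pivot, i = arr[hi], lo - 1
    for j in range(lo, hi):
        counter[0] += 1          # one comparison per loop iteration
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
    quicksort(arr, lo, i, counter, randomize)
    quicksort(arr, i + 2, hi, counter, randomize)

n = 500  # kept small so the O(n)-deep worst-case recursion fits the stack
counts = {}
for randomize in (False, True):
    data = list(range(n))        # already-sorted input: the worst case
    counter = [0]
    quicksort(data, 0, n - 1, counter, randomize)
    counts["random" if randomize else "fixed"] = counter[0]
    print("random pivot:" if randomize else "last-element pivot:", counter[0])
```

On sorted input the fixed pivot yields 500·499/2 = 124,750 comparisons, whereas the random pivot typically stays in the low thousands, illustrating both the O(n²) worst case and how randomization avoids it.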
Implementation:
Code:
import random
import time
def randomized_partition(arr, low, high):
    # Pick a random pivot, move it to the end, then partition (Lomuto scheme).
    p = random.randint(low, high)
    arr[p], arr[high] = arr[high], arr[p]
    pivot, i = arr[high], low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1
def randomized_quick_sort(arr, low, high):
    if low < high:
        p = randomized_partition(arr, low, high)
        randomized_quick_sort(arr, low, p - 1)
        randomized_quick_sort(arr, p + 1, high)
def generate_random_array(size):
    return [random.randint(0, size) for _ in range(size)]
def measure_execution_time(arr):
    start_time = time.time_ns()
    randomized_quick_sort(arr, 0, len(arr) - 1)
    end_time = time.time_ns()
    return end_time - start_time
def main():
    sizes = [100, 1000, 10000, 100000, 200000]
    print("Array Size\tExecution Time (ns)")
    for size in sizes:
        arr = generate_random_array(size)
        print(f"{size}\t\t{measure_execution_time(arr)}")
if __name__ == "__main__":
    main()
Output:
PS D:\notes\College\M.Tech\sem-1\AADA\practical> python -u
Array Size Execution Time (ns)
100 0
1000 998900
10000 19479000
100000 234154600
200000 570913400
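Per Objective 3, the measured times can be fitted against the theoretical n log n curve. The sketch below does a simple least-squares fit of t = c·n·log₂n using the measurements above; excluding the n = 100 reading is an assumption, made because its measured time of 0 ns is below the timer's resolution:

```python
import math

# Measured (size, time_ns) pairs from the run above; the n=100 reading
# is below the timer's resolution and is excluded from the fit.
data = [(1000, 998900), (10000, 19479000),
        (100000, 234154600), (200000, 570913400)]

# Least-squares fit of t = c * f(n) with f(n) = n * log2(n):
# c = sum(t*f) / sum(f*f).
f = [n * math.log2(n) for n, _ in data]
t = [t_ns for _, t_ns in data]
c = sum(ti * fi for ti, fi in zip(t, f)) / sum(fi * fi for fi in f)

for (n, t_ns), fi in zip(data, f):
    print(f"n={n:>6}  measured={t_ns:>11}  fitted={c * fi:>13.0f}")
```

The fitted values track the measurements closely at the larger sizes, supporting the expected O(n log n) growth.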
Conclusion:
Randomized QuickSort sorts efficiently in the average case thanks to its expected
O(n log n) complexity, and the measured execution times grow roughly in proportion
to n log n as the input size increases. For very large datasets, however, other
sorting algorithms, such as MergeSort or hybrid approaches like Timsort, may be
preferable because they guarantee O(n log n) in the worst case and include
additional optimizations.