AADA Expt-7

Somaiya Vidyavihar University

(Constituent College – K J Somaiya College of Engineering)

Batch: A    Roll No.: 16030724019

Experiment / Assignment / Tutorial No.: 7

Grade: AA / AB / BB / BC / CC / CD / DD

Signature of the Staff In-charge with date

Experiment No.: 7

Title: To implement randomized quicksort and experimentally derive its time complexity

Objectives:
1. Time Complexity Validation: Confirm that the average case time complexity
of randomized quicksort approaches O(n log n) as n increases.
2. Worst-Case Behavior: Observe if and when worst-case behaviors (O(n²)) arise,
even with random pivoting.
3. Time vs. Input Size Relationship: Plot execution time against input size and fit the
curve to determine how closely it aligns with theoretical expectations.
4. Effect of Randomization: Understand how randomization impacts
performance by running experiments with different random inputs for each
input size.

Expected Outcome of Experiment:

CO:

1. Average-Case Time Complexity:
• Execution time should follow O(n log n) as the input size increases.
2. Worst-Case Time Complexity:
• Rare instances of O(n²) may occur but are expected to be outliers.
3. Effect of Randomization:
• Execution times may differ for the same input size due to random pivot
selection, but the average should reflect O(n log n); a short measurement sketch follows this list.
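
A minimal sketch of the repeated-run measurement behind point 3, assuming the randomized_quick_sort, generate_random_array, and measure_execution_time functions defined in the Implementation section below (the helper name measure_variation is illustrative only):

def measure_variation(size, runs=5):
    # Time several runs on copies of one fixed input, so that any differences
    # come only from the random pivot choices made inside the sort.
    base = generate_random_array(size)
    times = [measure_execution_time(list(base)) for _ in range(runs)]
    return min(times), sum(times) / len(times), max(times)

# Example: min / average / max execution time (ns) for 10,000 elements.
# print(measure_variation(10000))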


Books / Papers / Websites referred:

1. https://www.geeksforgeeks.org/quicksort-using-random-pivoting/
2. https://www.tutorialspoint.com/data_structures_algorithms/dsa_randomized_quick_sort_algorithm.htm
3. https://stackoverflow.com/questions/63686324/quicksort-to-already-sorted-array

Pre Lab/ Prior Concepts:

1. Sorting Algorithms:
• Familiarity with basic sorting algorithms (e.g., bubble sort, selection sort, merge
sort) and their time complexities.
2. Quicksort Algorithm:
• Basic Concept: Quicksort is a divide-and-conquer algorithm that works by
selecting a 'pivot' element and partitioning the array into elements less than and
greater than the pivot.
• Deterministic vs. Randomized: Understand the difference between deterministic
quicksort (fixed pivot position) and randomized quicksort (random pivot selection); a short sketch of the two strategies follows this list.
3. Time Complexity:
• Big O Notation: Knowledge of how to express time complexity using Big O
notation, particularly O(n log n) and O(n²).
• Average vs. Worst Case: Distinction between average-case and worst-case time
complexities and how they apply to sorting algorithms.
4. Randomization:
• Random Selection: Understand how random selection of a pivot can affect the
performance of quicksort, reducing the chances of hitting worst-case scenarios.
5. Recursive Algorithms:
• Basic understanding of recursion, including how recursive function calls work and
how they apply to algorithms like quicksort.
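
As a short sketch of the deterministic vs. randomized distinction in point 2: the two variants differ only in how the pivot index is chosen before partitioning. The helper names below are illustrative and are not part of the implementation used later.

import random

def choose_pivot_deterministic(arr, low, high):
    # Deterministic quicksort: always use a fixed position, e.g. the last element.
    return high

def choose_pivot_randomized(arr, low, high):
    # Randomized quicksort: pick a uniformly random index in [low, high].
    return random.randint(low, high)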


Related Theory:
1. Quicksort Overview:
• Divide and Conquer Strategy: Quicksort operates by dividing the array into two
sub-arrays based on a pivot element. It recursively sorts the sub-arrays.
• Partitioning: The process of rearranging the array so that elements less than the
pivot come before it and elements greater come after it.
2. Randomized Pivot Selection:
• Randomized quicksort selects a pivot randomly, which helps in balancing the
partitions. This randomization reduces the chance of encountering the worst-
case time complexity that can occur with a poor choice of pivot (e.g., always
picking the smallest or largest element).
3. Time Complexity Analysis:
• Average Case: The expected time complexity is O(n log n). This is because:
o Each partitioning step takes O(n) time (to traverse the array).
o The depth of the recursion tree is approximately O(log n) on average,
leading to an overall complexity of O(n log n).
• Worst Case: The worst-case time complexity is O(n²), which occurs when the
pivot is consistently the smallest or largest element, leading to unbalanced
partitions. Randomization significantly mitigates this risk.
• Best Case: In the best case, where the pivot splits the array evenly, the time
complexity is also O(n log n). (The recurrences behind these bounds are sketched at the end of this section.)
4. Probability and Randomization:
• The use of randomization introduces a probabilistic element to the algorithm’s
performance. The average-case performance relies on the probability that the
chosen pivots will lead to balanced partitions.
• The randomness helps ensure that, on average, the depth of the recursion is
logarithmic, which is essential for achieving efficient sorting.
5. Recursion and Its Implications:
• Quicksort is a recursive algorithm. Each recursive call sorts a smaller sub-array,
leading to a recursive tree structure.


• The maximum depth of recursion determines the auxiliary space used by the
recursion stack; it is O(log n) on average, giving an expected space complexity
of O(log n).
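
As a sketch, the recurrences behind these bounds (standard formulation, with the partitioning cost of one call written as \Theta(n)):

  Best / balanced case:  T(n) = 2\,T(n/2) + \Theta(n)  \Rightarrow  T(n) = \Theta(n \log n)
  Worst case:            T(n) = T(n-1) + \Theta(n)     \Rightarrow  T(n) = \Theta(n^2)
  Expected case (random pivot, averaging over the split position q):
      E[T(n)] = \frac{1}{n} \sum_{q=0}^{n-1} \bigl( E[T(q)] + E[T(n-1-q)] \bigr) + \Theta(n)  \Rightarrow  E[T(n)] = O(n \log n)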
Implementation:

Code:

import random
import time

def swap(arr, i, j):
    # Exchange the elements at indices i and j in place.
    arr[i], arr[j] = arr[j], arr[i]

def partition(arr, low, high):
    # Choose a random pivot, move it to the end, then partition around it
    # (Lomuto scheme): elements <= pivot end up to the left of the pivot's final position.
    pivot_index = random.randint(low, high)
    swap(arr, pivot_index, high)
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            swap(arr, i, j)
    swap(arr, i + 1, high)
    return i + 1

def randomized_quick_sort(arr, low, high):
    # Recursively sort arr[low..high]; recursion stops on sub-arrays of size <= 1.
    if low < high:
        pi = partition(arr, low, high)
        randomized_quick_sort(arr, low, pi - 1)
        randomized_quick_sort(arr, pi + 1, high)

def generate_random_array(size):
    # Build an array of `size` random integers in the range [0, size].
    return [random.randint(0, size) for _ in range(size)]

def measure_execution_time(arr):
    # Sort the array in place and return the elapsed wall-clock time in nanoseconds.
    start_time = time.time_ns()
    randomized_quick_sort(arr, 0, len(arr) - 1)
    end_time = time.time_ns()
    return end_time - start_time

def main():
    sizes = [100, 1000, 10000, 100000, 200000]
    print("Array Size\tExecution Time (ns)")

    for size in sizes:
        arr = generate_random_array(size)
        time_taken = measure_execution_time(arr)
        print(f"{size}\t\t{time_taken}")

if __name__ == "__main__":
    main()

Output:
PS D:\notes\College\M.Tech\sem-1\AADA\practical> python -u
Array Size Execution Time (ns)
100 0
1000 998900
10000 19479000
100000 234154600
200000 570913400
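
To relate these measurements to the expected O(n log n) growth (objective 3), one simple check is whether time / (n log2 n) stays roughly constant as n grows. A minimal sketch using the measured values from the table above (the 100-element run is omitted since its measured time of 0 ns is below the timer's useful resolution):

import math

# (size, time_ns) pairs taken from the output above.
measurements = [(1000, 998900), (10000, 19479000), (100000, 234154600), (200000, 570913400)]

for n, t in measurements:
    # If the running time grows like n log n, this ratio should be roughly constant.
    print(n, round(t / (n * math.log2(n)), 1))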

Conclusion:
Randomized quicksort is efficient for sorting medium-sized arrays and performs well
in the average case, thanks to its O(n log n) complexity. However, for very large datasets,
other sorting algorithms, such as MergeSort or hybrid approaches like Timsort, may be
more efficient due to reduced recursion depth and additional optimizations.

Date: Signature of faculty in-charge

Department of Computer Engineering


M.Tech.Comp CLab-1 Sem I / Aug 2024
