Student AOA EX-2
Experiment No.02
A.1 Aim:
Write a program to implement Merge sort / Binary Search using Divide and Conquer
Approach and analyze its complexity.
A.2 Prerequisite: -
A.3 Outcome:
After successful completion of this experiment students will be able to analyze the time
complexity of various classic problems.
A.4 Theory:
Merge sort is based on the divide-and-conquer paradigm. Its worst-case running time has a
lower order of growth than insertion sort. Since we are dealing with subproblems, we state
each subproblem as sorting a subarray A[p .. r]. Initially, p = 1 and r = n, but these values
change as we recurse through subproblems.
1. Divide Step
If a given array A has zero or one element, simply return; it is already sorted. Otherwise,
split A[p .. r] into two subarrays A[p .. q] and A[q + 1 .. r], each containing about half of the
elements of A[p .. r]. That is, q is the halfway point of A[p .. r].
2. Conquer Step
Conquer by recursively sorting the two subarrays A[p .. q] and A[q + 1 .. r].
3. Combine Step
Combine the elements back in A[p .. r] by merging the two sorted subarrays A[p .. q]
and A[q + 1 .. r] into a sorted sequence. To accomplish this step, we will define a procedure
MERGE (A, p, q, r).
Algorithm:
MERGE(A, p, q, r)
    n1 ← q − p + 1
    n2 ← r − q
    create arrays L[1 .. n1 + 1] and R[1 .. n2 + 1]
    FOR i ← 1 TO n1
        DO L[i] ← A[p + i − 1]
    FOR j ← 1 TO n2
        DO R[j] ← A[q + j]
    L[n1 + 1] ← ∞
    R[n2 + 1] ← ∞
    i ← 1
    j ← 1
    FOR k ← p TO r
        DO IF L[i] ≤ R[j]
              THEN A[k] ← L[i]
                   i ← i + 1
              ELSE A[k] ← R[j]
                   j ← j + 1

MERGE-SORT(A, p, r)
    IF p < r
        THEN q ← ⌊(p + r)/2⌋
             MERGE-SORT(A, p, q)
             MERGE-SORT(A, q + 1, r)
             MERGE(A, p, q, r)
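The pseudocode above can be sketched in Python. Note that the pseudocode uses 1-based indexing while Python is 0-based, so the sketch below works on A[p..r] inclusive with 0-based indices and uses math.inf as the sentinel:

```python
import math

def merge(A, p, q, r):
    """Merge the sorted subarrays A[p..q] and A[q+1..r] (0-based, inclusive)."""
    L = A[p:q + 1] + [math.inf]    # sentinel removes the need for bounds checks
    R = A[q + 1:r + 1] + [math.inf]
    i = j = 0
    for k in range(p, r + 1):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, p, r):
    if p < r:                      # more than one element remains
        q = (p + r) // 2           # divide: halfway point
        merge_sort(A, p, q)        # conquer: sort left half
        merge_sort(A, q + 1, r)    # conquer: sort right half
        merge(A, p, q, r)          # combine: merge the sorted halves

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 2, 3, 4, 5, 6, 7]
```

Because ties go to the left subarray (L[i] <= R[j]), equal elements keep their original order, which is what makes merge sort stable.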
Time Complexity:
In sorting n objects, merge sort has an average and worst-case performance of O(n log n). If
the running time of merge sort for a list of length n is T(n), then the recurrence T(n) = 2T(n/2)
+ n follows from the definition of the algorithm (apply the algorithm to two lists of half the
size of the original list and add the n steps taken to merge the resulting two lists). The closed
form follows from the master theorem.
In the worst case, the number of comparisons merge sort makes is equal to or slightly smaller
than (n⌈lg n⌉ − 2^⌈lg n⌉ + 1), which is between (n lg n − n + 1) and (n lg n + n + O(lg n)).
Time complexity = O(n log n)
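The aim also mentions binary search, the other classic divide-and-conquer routine: each probe of the midpoint discards half of the remaining range, giving O(log n) time on a sorted array. A minimal iterative sketch:

```python
def binary_search(A, key):
    """Search a sorted list for key; return its index, or -1 if absent."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # divide: probe the midpoint
        if A[mid] == key:
            return mid
        elif A[mid] < key:
            lo = mid + 1           # conquer: continue in the right half
        else:
            hi = mid - 1           # conquer: continue in the left half
    return -1                      # range is empty, key not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```

The recurrence here is T(n) = T(n/2) + O(1), which the master theorem resolves to O(log n).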
PART B
(PART B: TO BE COMPLETED BY STUDENTS)
Class: C Batch: C2
Grade:
Q1: How is the time complexity of Merge Sort derived?
Merge Sort follows the divide-and-conquer paradigm, and its time complexity can be derived
using a recurrence relation:
1. Divide: The array is split into two halves, taking O(1) time.
2. Conquer: Each half is recursively sorted, leading to T(n/2) + T(n/2).
3. Combine: The two sorted halves are merged in O(n) time.
Using the Master Theorem with a = 2, b = 2, and f(n) = O(n), we have n^(log_b a) = n^(log_2 2) = n,
so f(n) = Θ(n^(log_b a)) and Case 2 applies, giving T(n) = O(n log n).
Hence, the time complexity of Merge Sort is O(n log n) in all cases.
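The recurrence can also be checked numerically. A small sketch, assuming n is a power of two (so the halves divide evenly) and taking T(1) = 0: the solution of T(n) = 2T(n/2) + n is then exactly n log2 n.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Evaluate the recurrence T(n) = 2T(n/2) + n with T(1) = 0, n a power of two."""
    if n == 1:
        return 0
    return 2 * T(n // 2) + n

for n in (2, 8, 64, 1024):
    print(n, T(n), int(n * math.log2(n)))   # T(n) matches n*log2(n) exactly here
```

For general n the subproblem sizes are ⌊n/2⌋ and ⌈n/2⌉, but the Θ(n log n) solution is unchanged.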
Q2: What is the worst-case and best-case time complexity of Merge Sort?
Worst-case time complexity: O(n log n) – This occurs when the array is completely unsorted,
but Merge Sort maintains consistent performance due to its recursive structure.
Best-case time complexity: O(n log n) – Even if the array is already sorted, Merge Sort still
recursively splits and merges the subarrays, maintaining the same complexity.
Unlike some other sorting algorithms (e.g., QuickSort), Merge Sort does not have an improved
best-case performance.
Q3: How many comparisons are done in Merge Sort?
In the worst case, Merge Sort performs about (n log n - n + 1) to (n log n + n + O(log n))
comparisons. The actual number depends on the structure of recursive calls and merging steps.
Q4: Can we say Merge Sort works best for large n? Yes or No? Reason?
Yes, Merge Sort is well-suited for large datasets because of its consistent O(n log n) time
complexity. Unlike algorithms like Bubble Sort or Insertion Sort, which perform poorly on large
inputs (O(n²)), Merge Sort remains efficient. Additionally, Merge Sort is a stable sorting
algorithm, making it ideal for applications requiring stable order preservation. However, it
requires additional memory for merging, making it less optimal for space-constrained
environments.
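The contrast with a quadratic algorithm can be made concrete by counting the comparisons insertion sort performs on a reversed (worst-case) input; insertion_comparisons below is an illustrative helper. Doubling n roughly quadruples its count, whereas merge sort's n log n count only slightly more than doubles:

```python
def insertion_comparisons(A):
    """Count the element comparisons insertion sort makes on a copy of A."""
    a, comps = list(A), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comps += 1                 # one comparison per inner-loop step
            if a[j] > key:
                a[j + 1] = a[j]        # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comps

for n in (1000, 2000):
    # reversed input forces the full n(n-1)/2 comparisons
    print(n, insertion_comparisons(list(range(n, 0, -1))))
```

On the reversed input the count is exactly n(n − 1)/2, i.e. 499500 for n = 1000 and 1999000 for n = 2000, which is why O(n²) sorts become impractical at scale while merge sort does not.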