Student AOA EX-2

The document outlines an experiment to implement Merge Sort using the divide-and-conquer approach, detailing its algorithm, time complexity of O(n log n), and efficiency in sorting large datasets. It emphasizes the advantages of Merge Sort over other sorting techniques, particularly in terms of stability and consistent performance. The experiment reinforces the understanding of recursive algorithms and their application in efficient problem-solving.

Uploaded by

tamol60913

PART A

(PART A: TO BE REFERRED BY STUDENTS)

Experiment No.02
A.1 Aim:
Write a program to implement Merge sort / Binary Search using Divide and Conquer
Approach and analyze its complexity.

A.2 Prerequisite: -

A.3 Outcome:
After successful completion of this experiment students will be able to analyze the time
complexity of various classic problems.

A.4 Theory:
Merge sort is based on the divide-and-conquer paradigm. Its worst-case running time has a
lower order of growth than insertion sort. Since we are dealing with subproblems, we state
each subproblem as sorting a subarray A[p .. r]. Initially, p = 1 and r = n, but these values
change as we recurse through subproblems.

To sort A[p .. r]:

1. Divide Step

If a given array A has zero or one element, simply return; it is already sorted. Otherwise,
split A[p .. r] into two subarrays A[p .. q] and A[q + 1 .. r], each containing about half of the
elements of A[p .. r]. That is, q is the halfway point of A[p .. r].

2. Conquer Step

Conquer by recursively sorting the two subarrays A[p .. q] and A[q + 1 .. r].

3. Combine Step

Combine the elements back in A[p .. r] by merging the two sorted subarrays A[p .. q]
and A[q + 1 .. r] into a sorted sequence. To accomplish this step, we will define a procedure
MERGE (A, p, q, r).
Algorithm:

MERGE(A, p, q, r)
    n1 ← q − p + 1
    n2 ← r − q
    create arrays L[1 .. n1 + 1] and R[1 .. n2 + 1]
    FOR i ← 1 TO n1
        DO L[i] ← A[p + i − 1]
    FOR j ← 1 TO n2
        DO R[j] ← A[q + j]
    L[n1 + 1] ← ∞
    R[n2 + 1] ← ∞
    i ← 1
    j ← 1
    FOR k ← p TO r
        DO IF L[i] ≤ R[j]
               THEN A[k] ← L[i]
                    i ← i + 1
               ELSE A[k] ← R[j]
                    j ← j + 1
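The pseudocode above translates directly into a short Python sketch (a minimal illustration for reference, not the code to be submitted in Part B). Python's `math.inf` plays the role of the ∞ sentinel, and indices are 0-based rather than the 1-based indices of the pseudocode:

```python
import math

def merge(A, p, q, r):
    """Merge the sorted subarrays A[p..q] and A[q+1..r] (inclusive, 0-indexed)."""
    L = A[p:q + 1] + [math.inf]      # left run plus infinity sentinel
    R = A[q + 1:r + 1] + [math.inf]  # right run plus infinity sentinel
    i = j = 0
    for k in range(p, r + 1):
        if L[i] <= R[j]:             # <= keeps equal keys in order (stability)
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, p, r):
    """Sort A[p..r] in place using divide and conquer."""
    if p < r:                        # zero or one element: already sorted
        q = (p + r) // 2             # divide at the halfway point
        merge_sort(A, p, q)          # conquer the left half
        merge_sort(A, q + 1, r)      # conquer the right half
        merge(A, p, q, r)            # combine the sorted halves

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 2, 3, 4, 5, 6, 7]
```

The sentinel trick lets the merge loop avoid checking whether either run has been exhausted, at the cost of the two extra ∞ entries.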

Time Complexity:

In sorting n objects, merge sort has an average and worst-case performance of O(n log n). If
the running time of merge sort for a list of length n is T(n), then the recurrence T(n) = 2T(n/2)
+ n follows from the definition of the algorithm (apply the algorithm to two lists of half the
size of the original list and add the n steps taken to merge the resulting two lists). The closed
form follows from the master theorem.
In the worst case, the number of comparisons merge sort makes is equal to or slightly smaller
than n⌈lg n⌉ − 2^⌈lg n⌉ + 1, which is between (n lg n − n + 1) and (n lg n + n + O(lg n)).
Time complexity = O(n log n)
PART B
(PART B: TO BE COMPLETED BY STUDENTS)

Roll No.: C26 Name: Hrishikesh Sanap

Class: C Batch: C2

Date of Experiment: 15/01/2025 Date of Submission

Grade:

B.1 Software Code written by student:


B.2 Input and Output:

B.3 Observations and learning:


During the experiment, we implemented the Merge Sort algorithm using the divide-and-
conquer approach and analyzed its efficiency. The algorithm successfully sorted different sets
of input arrays, demonstrating its stability and efficiency in handling large datasets. We
observed that the recursive splitting and merging process ensures a consistently structured
sorting mechanism, maintaining a time complexity of O(n log n) in all cases. Additionally, the
merging step efficiently combines sorted subarrays while preserving order. Compared to other
sorting techniques like Insertion Sort, Merge Sort showed significant performance
improvements, especially for larger inputs.
B.4 Conclusion:
From this experiment, we conclude that Merge Sort is an efficient and reliable sorting
algorithm, particularly well-suited for large datasets due to its O(n log n) time complexity. The
divide-and-conquer approach ensures that the array is broken down into manageable
subproblems, making it highly structured and predictable. Although Merge Sort requires
additional memory for merging, its consistent performance across all cases makes it a preferred
choice for applications where stability and guaranteed efficiency are essential. The experiment
reinforced the understanding of recursive algorithms and their role in efficient problem-solving.
Overall, Merge Sort proves to be a powerful sorting technique in computational applications.

B.5 Question of Curiosity


Q1: Derive time complexity of Merge Sort

Merge Sort follows the divide-and-conquer paradigm, and its time complexity can be derived
using recurrence relation:

1. Divide: The array is split into two halves, taking O(1) time.
2. Conquer: Each half is recursively sorted, contributing T(n/2) + T(n/2) = 2T(n/2).
3. Combine: The two sorted halves are merged in O(n) time.

Thus, the recurrence relation is:

T(n) = 2T(n/2) + O(n)

Using the Master Theorem, with a = 2, b = 2, and f(n) = O(n), we get:

T(n) = O(n log n)

Hence, the time complexity of Merge Sort is O(n log n) in all cases.
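The derivation can also be checked numerically: unrolling T(n) = 2T(n/2) + n with base case T(1) = 0 gives exactly n·lg n whenever n is a power of two (a small sketch in which the constant hidden in O(n) is taken as 1):

```python
import math

def T(n):
    """Evaluate the recurrence T(n) = 2T(n/2) + n with T(1) = 0."""
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n

# At every power of two, the recurrence equals n * log2(n) exactly:
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k
print(T(1024))  # 10240 = 1024 * 10
```

The intuition matches the recursion tree: there are lg n levels of splitting, and the merging work on each level totals n.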

Q2: What is the worst-case and best-case time complexity of Merge Sort?

- Worst-case time complexity: O(n log n) – this occurs when the array is completely unsorted,
but Merge Sort maintains consistent performance due to its recursive structure.
- Best-case time complexity: O(n log n) – even if the array is already sorted, Merge Sort still
recursively splits and merges the subarrays, maintaining the same complexity.

Unlike some other sorting algorithms (e.g., QuickSort), Merge Sort does not have an improved
best-case performance.
Q3: How many comparisons are done in Merge Sort?

The number of comparisons in Merge Sort is approximately:

C(n) ≈ n log n

In the worst case, Merge Sort performs about (n log n - n + 1) to (n log n + n + O(log n))
comparisons. The actual number depends on the structure of recursive calls and merging steps.
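The worst-case bound can be checked empirically with an instrumented merge that counts element comparisons (a sketch; the function and counter names here are illustrative, not part of the lab template):

```python
import math
import random

def merge_sort_count(a):
    """Return (sorted copy of a, number of element comparisons performed)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort_count(a[:mid])
    right, cr = merge_sort_count(a[mid:])
    merged, comparisons = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1                # one element comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]      # leftover run is copied without comparisons
    return merged, comparisons

n = 100
data = random.sample(range(1000), n)
result, c = merge_sort_count(data)
bound = n * math.ceil(math.log2(n)) - 2 ** math.ceil(math.log2(n)) + 1
assert result == sorted(data)
assert c <= bound                       # worst case: n*ceil(lg n) - 2^ceil(lg n) + 1
print(c, "<=", bound)
```

For n = 100 the bound evaluates to 100·7 − 128 + 1 = 573 comparisons; random inputs typically come in somewhat below it.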

Q4: Can we say Merge Sort works best for large n? Yes or No? Reason?

Yes, Merge Sort is well-suited for large datasets because of its consistent O(n log n) time
complexity. Unlike algorithms like Bubble Sort or Insertion Sort, which perform poorly on large
inputs (O(n²)), Merge Sort remains efficient. Additionally, Merge Sort is a stable sorting
algorithm, making it ideal for applications requiring stable order preservation. However, it
requires additional memory for merging, making it less optimal for space-constrained
environments.
