DAA FJ I - Set A Key
PART A
1. D
2. B
3. B
4. B
5. B
6. B
7. A
8. D
9. B
10. A
PART B
11. Solution:
ii)
------------------------------------------------------------------------------------------------------------------------
12. Insertion sort:
Let's analyze the Insertion Sort algorithm and its time complexities in both the best-case and worst-
case scenarios.
Insertion Sort Algorithm Overview:
Insertion sort is a simple sorting algorithm that builds the final sorted array one item at a time. It is
much like sorting playing cards in your hands. It works by iterating through the array and inserting
each element into its correct position in the sorted portion of the array.
Best-Case Scenario: When the array is already sorted.
In the best case, every element in the array is already in its correct position. As the algorithm iterates
through the array, each element is compared only once with the previous element, and no shifting of
elements is required.
• Time Complexity Analysis:
o For each element, we need just one comparison to check if it's already in the correct
position.
o The outer loop runs n − 1 times (where n is the number of elements).
o The inner loop does no work in the best case, as the elements are already sorted.
Thus, the best-case time complexity of Insertion Sort is O(n), where n is the number of elements
in the array.
Worst-Case Scenario: When the array is sorted in reverse order.
In the worst case, the array is sorted in reverse order, so every element must be compared with
every element already in the sorted portion of the array. Each time an element is inserted, it is
compared with all previously sorted elements and then shifted, one position at a time, into place.
• Time Complexity Analysis:
o The i-th insertion requires up to i comparisons and shifts, so the total work is
1 + 2 + … + (n − 1) = n(n − 1)/2.
Thus, the worst-case time complexity of Insertion Sort is O(n²).
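The following is a minimal Python sketch of the algorithm described above (the sample input is assumed for illustration):

def insertion_sort(arr):
    for i in range(1, len(arr)):          # arr[0..i-1] is already sorted
        key = arr[i]                      # Element to insert into the sorted portion
        j = i - 1
        while j >= 0 and arr[j] > key:    # Shift larger elements one position right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                  # Place the element in its correct position
    return arr

print(insertion_sort([5, 2, 9, 1, 6]))    # [1, 2, 5, 6, 9]

On an already-sorted input the while loop exits immediately (one comparison per element, O(n)); on a reverse-sorted input every element is shifted all the way to the front, giving the O(n²) behaviour analysed above.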
13. Write the binary search algorithm using recursive function and analyze its complexity.
Binary Search Algorithm (Recursive Implementation)
Binary Search is a divide and conquer algorithm used to find an element in a sorted array. It works
by repeatedly dividing the search space in half until the element is found or the search space is empty.
Algorithm (Recursive Binary Search in Python)
def binary_search(arr, low, high, key):
    if low > high:
        return -1                                        # Element not found
    mid = (low + high) // 2                              # Middle index of the current search space
    if arr[mid] == key:
        return mid                                       # Element found
    elif arr[mid] > key:
        return binary_search(arr, low, mid - 1, key)     # Search in left half
    else:
        return binary_search(arr, mid + 1, high, key)    # Search in right half

# Example driver (sample values assumed for illustration)
arr = [2, 5, 8, 12, 16, 23, 38]
key = 23
result = binary_search(arr, 0, len(arr) - 1, key)
print("Element found at index:", result if result != -1 else "Not found")
Time Complexity Analysis
Let T(n) represent the time complexity of binary search on an array of size n. The recurrence relation
is:
T(n)=T(n/2)+O(1)
where:
• T(n/2) represents searching in half of the array,
• O(1) represents constant time operations (comparison and recursive call).
Using Master Theorem:
• a=1 (one recursive call)
• b=2 (problem size reduces by half)
• f(n)=O(1)
Since f(n) = O(1) = Θ(n^0) and n^(log_b a) = n^(log_2 1) = n^0, f(n) matches n^(log_b a) (Case 2 of the Master Theorem), so the time complexity is:
T(n) = O(log n)
Final Answer: Binary search has a time complexity of O(log n) in the worst and average cases; in the
best case the key is found at the very first middle comparison, which takes O(1) time.
__________________________________________________________________________
14. Apply master’s theorem and find the time complexity:
i) T(n) = 8T(n/2) + O(n^3)
ii) T(n) = 3T(n/4) + O(n log n)
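Worked solutions, applying the same Master Theorem procedure used in Question 13:
i) T(n) = 8T(n/2) + O(n^3)
• a = 8, b = 2, so n^(log_b a) = n^(log_2 8) = n^3
• f(n) = Θ(n^3) matches n^(log_b a) (Case 2)
• Therefore T(n) = Θ(n^3 log n)
ii) T(n) = 3T(n/4) + O(n log n)
• a = 3, b = 4, so n^(log_b a) = n^(log_4 3) ≈ n^0.79
• f(n) = n log n = Ω(n^(0.79 + ε)), and the regularity condition a·f(n/b) = (3/4)·n·log(n/4) ≤ (3/4)·f(n) holds (Case 3)
• Therefore T(n) = Θ(n log n)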
______________________________________________________________________
PART C
15. A. Design an algorithm to perform the following task and analyse its best, worst, and average case
time complexity.
Task:
You are given an array of size n. Your goal is to sort the array in ascending order by repeatedly selecting
the smallest element from the unsorted part and placing it in its correct position.
Algorithm:
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):                  # Iterate through the array
        min_index = i                       # Assume the first unsorted element is the minimum
        for j in range(i + 1, n):           # Find the minimum element in the remaining array
            if arr[j] < arr[min_index]:
                min_index = j               # Update the index of the smallest element
        # Swap the found minimum element with the first unsorted element
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr
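A quick usage check (the sample array is assumed for illustration), followed by the complexity analysis the task asks for:

print(selection_sort([29, 10, 14, 37, 13]))   # [10, 13, 14, 29, 37]

Time Complexity Analysis:
• The inner loop always scans the entire unsorted portion, so the number of comparisons is
(n − 1) + (n − 2) + … + 1 = n(n − 1)/2 regardless of the input order.
• Best case, worst case, and average case are therefore all Θ(n²), with at most n − 1 swaps.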
_________________________________________________________________________________
15.B. Recursion tree method:
16.A. Solution: Closest pair of points
1. Applying the Divide and Conquer Approach
Step 1: Divide the Problem
• Sort all n drone coordinates based on their x-coordinates.
• Split the points into two equal halves: Left half (L) and Right half (R).
• Find the closest pair of points in L and the closest pair in R recursively.
Step 2: Conquer (Recursive Step)
• Let d_L be the closest distance in L and d_R in R.
• The minimum of these two, d = min(d_L, d_R), is our best candidate so far.
Step 3: Combine (Checking Across the Divide)
• There might be a closest pair where one point is in L and the other in R.
• Define a vertical strip of width 2d around the dividing line.
• Consider only points in this strip and check their distances.
• If a pair in the strip is closer than d, update the minimum distance.
Final Result: The smallest distance found in the recursive calls or the strip check is the closest pair
distance.
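A minimal Python sketch of these three steps (for brevity it re-sorts the strip by y-coordinate inside each combine step, which gives O(n log² n) rather than the fully optimised O(n log n) version that merges y-sorted lists; the sample drone coordinates are assumed):

import math

def closest_pair(points):
    pts = sorted(points)                              # One-time sort by x-coordinate

    def solve(p):
        n = len(p)
        if n <= 3:                                    # Brute force on tiny inputs
            return min(math.dist(p[i], p[j])
                       for i in range(n) for j in range(i + 1, n))
        mid = n // 2
        mid_x = p[mid][0]
        d = min(solve(p[:mid]), solve(p[mid:]))       # d = min(d_L, d_R)
        # Combine: only points within distance d of the dividing line matter
        strip = sorted((q for q in p if abs(q[0] - mid_x) < d), key=lambda q: q[1])
        for i in range(len(strip)):
            for j in range(i + 1, min(i + 8, len(strip))):   # At most 7 later strip points can be closer
                d = min(d, math.dist(strip[i], strip[j]))
        return d

    return solve(pts)

print(closest_pair([(0, 0), (5, 4), (1, 1)]))         # ~1.414, drones at (0, 0) and (1, 1)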
2. Recurrence Relation and Time Complexity
• The problem is divided into two subproblems of size n/2: T(n)=2T(n/2)+O(n)
• The O(n) term comes from checking the points in the strip; the one-time sort by x-coordinate costs O(n log n) and is performed before the recursion begins.
• Solving the recurrence using Master’s Theorem:
o Here, a = 2, b = 2, f(n) = O(n)
o Since f(n) = O(n) = Θ(n^c) where c = 1, and log_b(a) = log_2(2) = 1,
o This fits Case 2 of Master’s Theorem, so the complexity is: T(n)=O(nlogn)
Final Complexity:
• Brute Force: O(n²)
• Divide and Conquer: O(n log n) (Much faster for large datasets)
3. Comparison with Brute-Force Approach
Approach            Time Complexity    Suitability
Brute Force         O(n²)              Suitable for small values of n (e.g., n < 50)
Divide & Conquer    O(n log n)         Efficient for large-scale drone tracking systems
In real-world applications, drone fleets may have thousands of drones, making the O(n²) brute-
force approach impractical. The O(n log n) divide and conquer method significantly reduces
computational time.
4. Handling Dynamic (Real-Time) Drone Position Updates
In a real-time drone tracking system, drone positions are constantly changing. The Divide and
Conquer method needs modifications:
Use a Dynamic Data Structure:
• Use a balanced BST (e.g., an AVL tree) to store points efficiently.
• Insert and delete points in O(log n) time, maintaining an updated set of closest pairs.
____________________________________________________________________________
16.B. Solution:
Solution to the Refined Scenario-Based Question: Maximum Sum Subarray in 1D Data
1. Brute-Force Approach and Time Complexity
Brute-Force Algorithm:
• Iterate through all possible subarrays.
• Compute the sum of each subarray and track the maximum sum.
Implementation:
For each starting index i, iterate over all ending indices j and maintain a running sum:
def max_subarray_bruteforce(arr):
    n = len(arr)
    max_sum = float('-inf')                  # Start below any possible subarray sum
    for i in range(n):                       # Starting index of the subarray
        subarray_sum = 0
        for j in range(i, n):                # Ending index of the subarray
            subarray_sum += arr[j]           # Running sum: O(1) per extension
            max_sum = max(max_sum, subarray_sum)
    return max_sum
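A quick usage check (sample array assumed for illustration):

print(max_subarray_bruteforce([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # 6, from the subarray [4, -1, 2, 1]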
Time Complexity Analysis:
• Two nested loops generate all O(n²) subarrays.
• With the running sum, extending a subarray costs O(1); recomputing each sum from scratch with sum(arr[i:j+1]) would add another O(n) factor and give O(n³).
• Overall complexity:
O(n²)
2. Divide and Conquer Approach
Applying Divide and Conquer:
We split the array into two halves, solving for each half recursively, and consider crossing subarrays
that span both halves.
Steps:
1. Divide:
o Split the array into two halves.
2. Conquer:
o Recursively find the maximum sum subarray in the left half.
o Recursively find the maximum sum subarray in the right half.
3. Combine:
o Find the maximum subarray that crosses the middle by:
▪ Expanding leftward from the middle to find max sum.
▪ Expanding rightward from the middle to find max sum.
▪ Adding these to get the maximum crossing sum.
o Return the maximum of the three cases:
▪ Left subarray max
▪ Right subarray max
▪ Maximum crossing subarray
Implementation:
def max_crossing_sum(arr, left, mid, right):
    # Compute max sum in the left subarray (ending at mid)
    left_sum = float('-inf')
    temp_sum = 0
    for i in range(mid, left - 1, -1):
        temp_sum += arr[i]
        left_sum = max(left_sum, temp_sum)

    # Compute max sum in the right subarray (starting at mid + 1)
    right_sum = float('-inf')
    temp_sum = 0
    for i in range(mid + 1, right + 1):
        temp_sum += arr[i]
        right_sum = max(right_sum, temp_sum)

    # Maximum crossing sum
    return left_sum + right_sum

def max_subarray_divide_conquer(arr, left, right):
    if left == right:
        return arr[left]                 # Base case: single element
    mid = (left + right) // 2            # Integer midpoint
    # Compute max sum in left, right, and crossing subarrays
    left_max = max_subarray_divide_conquer(arr, left, mid)
    right_max = max_subarray_divide_conquer(arr, mid + 1, right)
    cross_max = max_crossing_sum(arr, left, mid, right)
    return max(left_max, right_max, cross_max)
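A quick usage check (same sample array as assumed above):

arr = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
print(max_subarray_divide_conquer(arr, 0, len(arr) - 1))   # 6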
Time Complexity Analysis:
• We divide the problem into two subproblems of size n/2.
• We combine in O(n) time (crossing sum computation).
• Recurrence Relation:
T(n)=2T(n/2)+O(n)
• Using Master’s Theorem:
o a = 2, b = 2, f(n) = O(n)
o log₂(2) = 1, and f(n) = O(n) matches Θ(n^c), where c = 1
o Time Complexity:
T(n)=O(nlog n)
Final Complexity Comparison:
Approach              Time Complexity
Brute-Force           O(n²)
Divide and Conquer    O(n log n)
Divide and Conquer significantly reduces computation time for large inputs.