Diploma 5 Sem ADA Notes 3
Divide and Conquer is a problem-solving strategy that involves breaking down a complex
problem into smaller, more manageable sub-problems, solving each sub-problem
independently, and then combining the solutions to solve the original problem. The strategy
is based on the idea that it is easier to solve smaller problems than larger ones.
1. Divide: The problem is divided into smaller sub-problems that are similar to the
original problem but of smaller size.
2. Conquer: The sub-problems are solved recursively using the same algorithm until
they become simple enough to be solved directly.
3. Combine: The solutions to the sub-problems are combined to solve the original
problem.
This approach is used in many algorithms, including Quicksort, Merge Sort, and Strassen’s
Algorithm.
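As a minimal sketch of the pattern, consider finding the maximum element of an array by divide and conquer (this toy example is our illustration, not one of the standard algorithms listed below):

#include <stdio.h>

/* Find the maximum of arr[lo..hi] by divide and conquer. */
int max_of(const int arr[], int lo, int hi)
{
    if (lo == hi)
        return arr[lo];                        /* base case: one element */
    int mid = (lo + hi) / 2;                   /* divide */
    int left  = max_of(arr, lo, mid);          /* conquer the left half */
    int right = max_of(arr, mid + 1, hi);      /* conquer the right half */
    return (left > right) ? left : right;      /* combine the two answers */
}

int main(void)
{
    int arr[] = {7, 2, 9, 4, 6};
    printf("%d\n", max_of(arr, 0, 4));         /* prints 9 */
    return 0;
}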
The following are some standard algorithms that follow the Divide and Conquer strategy.
Quicksort is a sorting algorithm. The algorithm picks a pivot element and rearranges
the array elements so that all elements smaller than the picked pivot element move to
the left side of the pivot, and all greater elements move to the right side. Finally, the
algorithm recursively sorts the subarrays on the left and right of the pivot element.
Merge Sort is also a sorting algorithm. The algorithm divides the array into two
halves, recursively sorts them, and finally merges the two sorted halves.
Closest Pair of Points: The problem is to find the closest pair of points in a set of
points in the x-y plane. The problem can be solved in O(n^2) time by calculating the
distances of every pair of points and comparing the distances to find the minimum.
The Divide and Conquer algorithm solves the problem in O(n log n) time.
Strassen’s Algorithm is an efficient algorithm to multiply two matrices. A simple
method to multiply two matrices needs 3 nested loops and is O(n^3). Strassen’s
algorithm multiplies two matrices in O(n^2.8074) time.
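The saving comes from computing the product of two 2x2 block matrices with 7 multiplications instead of 8, which gives the recurrence T(n) = 7T(n/2) + O(n^2); this solves to O(n^(log2 7)) ≈ O(n^2.8074).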
Cooley–Tukey Fast Fourier Transform (FFT) algorithm is the most common
algorithm for FFT. It is a divide and conquer algorithm which works in O(N log N)
time.
Karatsuba algorithm for fast multiplication does the multiplication of two n-digit
numbers in at most about n^(log2 3) ≈ n^1.585 single-digit multiplications, compared
with n^2 for the schoolbook method.
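It does so by reducing one n-digit multiplication to three n/2-digit multiplications, giving the recurrence T(n) = 3T(n/2) + O(n), which solves to O(n^(log2 3)) ≈ O(n^1.585).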
1. Divide: Divide the given problem into sub-problems using recursion.
2. Conquer: Solve the smaller sub-problems recursively. If a sub-problem is small
enough, solve it directly.
3. Combine: Combine the solutions of the sub-problems as the recursion unwinds to
solve the actual problem.
Here, we will sort an array using the divide and conquer approach (i.e. merge sort).
1. Divide the array into two halves, and keep dividing recursively until each part
contains a single element.
2. Conquer (sort) each part; a single element is already sorted.
3. Now, combine the individual elements in a sorted manner. Here, the conquer and
combine steps go side by side.
Binary search is a fast search algorithm with run-time complexity of O(log n). This search
algorithm works on the principle of divide and conquer. For this algorithm to work properly,
the data collection should be in sorted form.
Binary search looks for a particular item by comparing it with the middle-most item of the
collection. If a match occurs, the index of the item is returned. If the middle item is greater
than the search item, the item is searched for in the sub-array to the left of the middle item.
Otherwise, it is searched for in the sub-array to the right of the middle item. This process
continues on the sub-arrays until the size of the subarray reduces to zero.
To understand the working of the Binary search algorithm, let's take a sorted array as an
example.
We have to use the following formula to calculate the mid of the array:
mid = (beg + end) / 2
Here beg = 0 and end = 8, so mid = (0 + 8) / 2 = 4. The search then compares the key x with
arr[mid]: if x == arr[mid], the index mid is returned; if the search space becomes empty, the
search reports failure.
1. Divide the search space into two halves by finding the middle index “mid”.
2. Compare the middle element of the search space with the key.
3. If the key is found at middle element, the process is terminated.
4. If the key is not found at the middle element, choose which half will be used as the next
search space.
a. If the key is smaller than the middle element, then the left side is used for the next
search.
b. If the key is larger than the middle element, then the right side is used for the next
search.
5. This process is continued until the key is found or the total search space is exhausted.
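A minimal iterative C implementation of these steps (the variable names beg, end, mid follow the formula above; the array contents and function name are illustrative):

#include <stdio.h>

/* Return the index of key in the sorted array arr[0..n-1], or -1 if absent. */
int binary_search(const int arr[], int n, int key)
{
    int beg = 0, end = n - 1;
    while (beg <= end) {                      /* search space not yet exhausted */
        int mid = beg + (end - beg) / 2;      /* same as (beg + end) / 2, avoids overflow */
        if (arr[mid] == key)
            return mid;                       /* key found at the middle element */
        else if (arr[mid] < key)
            beg = mid + 1;                    /* key is larger: search the right half */
        else
            end = mid - 1;                    /* key is smaller: search the left half */
    }
    return -1;                                /* key not present */
}

int main(void)
{
    int arr[] = {10, 14, 19, 26, 27, 31, 33, 35, 42};  /* sorted, beg = 0, end = 8 */
    printf("%d\n", binary_search(arr, 9, 31));          /* prints 5 */
    return 0;
}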
Binary search complexity:
Time Complexity: The best case is O(1), when the key is found at the middle element on the
first comparison. The average and worst cases are O(log n), since each comparison halves the
search space: T(n) = T(n/2) + O(1), which solves to O(log n). The iterative version needs
only O(1) auxiliary space.
Applications of Binary Search:
Binary search can be used as a building block for more complex algorithms used in
machine learning, such as algorithms for training neural networks or finding the
optimal hyperparameters for a model.
It can be used for searching in computer graphics such as algorithms for ray tracing or
texture mapping.
It can be used for searching a database.
Merge Sort Algorithm
Merge Sort is one of the most popular sorting algorithms that is based on the principle of
Divide and Conquer Algorithm.
Using the Divide and Conquer technique, we divide a problem into subproblems. When the
solution to each subproblem is ready, we 'combine' the results from the subproblems to solve
the main problem.
Suppose we had to sort an array A. A subproblem would be to sort a sub-section of this array
starting at index p and ending at index r, denoted as A[p..r].
Divide
If q is the halfway point between p and r, then we can split the subarray A[p..r] into two
arrays A[p..q] and A[q+1..r].
Conquer
In the conquer step, we try to sort both the subarrays A[p..q] and A[q+1..r]. If we haven't yet
reached the base case, we again divide both these subarrays and try to sort them.
Combine
When the conquer step reaches the base case and we get two sorted subarrays A[p..q] and
A[q+1..r] for array A[p..r], we combine the results by creating a sorted array A[p..r] from the
two sorted subarrays A[p..q] and A[q+1..r].
In Short:
Figure: Divide and Conquer Technique
Step 1 − if it is only one element in the list, it is already sorted; return.
Step 2 − divide the list recursively into two halves until it can no more be divided.
Step 3 − merge the smaller lists into a new list in sorted order.
MERGE_SORT(arr, beg, end)
if beg < end
set mid = (beg + end) / 2
MERGE_SORT(arr, beg, mid)
MERGE_SORT(arr, mid + 1, end)
MERGE(arr, beg, mid, end)
end of if
END MERGE_SORT
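A compact C version of this pseudocode (a sketch; the temporary-array merge and the sample data follow the example below):

#include <stdio.h>

/* Merge the sorted halves arr[beg..mid] and arr[mid+1..end]. */
void merge(int arr[], int beg, int mid, int end)
{
    int tmp[end - beg + 1];                   /* auxiliary array for the merge */
    int i = beg, j = mid + 1, k = 0;
    while (i <= mid && j <= end)              /* pick the smaller front element */
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid) tmp[k++] = arr[i++];     /* copy any leftovers */
    while (j <= end) tmp[k++] = arr[j++];
    for (k = 0; k <= end - beg; k++)          /* copy the merged run back */
        arr[beg + k] = tmp[k];
}

void merge_sort(int arr[], int beg, int end)
{
    if (beg < end) {
        int mid = (beg + end) / 2;
        merge_sort(arr, beg, mid);            /* sort the left half */
        merge_sort(arr, mid + 1, end);        /* sort the right half */
        merge(arr, beg, mid, end);            /* combine the two halves */
    }
}

int main(void)
{
    int arr[] = {14, 33, 27, 10, 19, 35, 42, 44};
    merge_sort(arr, 0, 7);
    for (int i = 0; i < 8; i++) printf("%d ", arr[i]);   /* 10 14 19 27 33 35 42 44 */
    return 0;
}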
For Example: To understand merge sort, we take an unsorted array as the following –
We know that merge sort first divides the whole array iteratively into equal halves until
atomic values are achieved. We see here that an array of 8 items is divided into two arrays of
size 4.
This does not change the sequence of appearance of items in the original array. Now we
divide these two arrays into halves.
We further divide these arrays until we reach atomic values which can no more be divided.
Now, we combine them in exactly the same manner as they were broken down.
We first compare the elements of each pair of lists and then combine them into another list in
a sorted manner. We see that 14 and 33 are in sorted positions. We compare 27 and 10 and, in
the target list of 2 values, we put 10 first, followed by 27. We change the order of 19 and 35,
whereas 42 and 44 are placed sequentially.
In the next iteration of the combining phase, we compare lists of two data values and merge
them into a list of four data values, placing all in sorted order.
After the final merging, the list should look like this –
Sorting large datasets: Merge sort is particularly well-suited for sorting large datasets
due to its guaranteed worst-case time complexity of O(n log n).
External sorting: Merge sort is commonly used in external sorting, where the data to
be sorted is too large to fit into memory.
Custom sorting: Merge sort can be adapted to handle different input distributions,
such as partially sorted, nearly sorted, or completely unsorted data.
It is used for counting inversions in a list.
Stability: Merge sort is a stable sorting algorithm, which means it maintains the
relative order of equal elements in the input array.
Guaranteed worst-case performance: Merge sort has a worst-case time complexity of
O(n log n), which means it performs well even on large datasets.
Parallelizable: Merge sort is a naturally parallelizable algorithm, which means it can
be easily parallelized to take advantage of multiple processors or threads.
Space complexity: Merge sort requires additional memory to store the merged sub-
arrays during the sorting process.
Not in-place: Merge sort is not an in-place sorting algorithm, which means it requires
additional memory to store the sorted data. This can be a disadvantage in applications
where memory usage is a concern.
Not always optimal for small datasets: For small datasets, Merge sort has a higher
time complexity than some other sorting algorithms, such as insertion sort. This can
result in slower performance for very small datasets.
Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted.
The best-case time complexity of merge sort is O(n log n).
Average Case Complexity - It occurs when the array elements are in jumbled order, neither
properly ascending nor properly descending. The average-case time complexity of merge
sort is O(n log n).
Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order, i.e. you have to sort the array elements in ascending order, but they are in
descending order. Even then, the worst-case time complexity of merge sort is O(n log n).
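All three cases follow the same recurrence: each call sorts two halves and does a linear-time merge, T(n) = 2T(n/2) + O(n), which solves to O(n log n) regardless of the input order.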
The space complexity of merge sort is O(n), because the merge step requires an auxiliary
array that can hold up to n elements.
Quick Sort Algorithm
Quick sort is a highly efficient sorting algorithm based on partitioning an array of data into
smaller arrays. A large array is partitioned into two arrays: one holds values smaller than a
specified value, called the pivot, based on which the partition is made, and the other holds
values greater than the pivot value.
The pivot element can be any element from the array: the first element, the last element, or
any random element.
Quicksort partitions an array and then calls itself recursively twice to sort the two resulting
subarrays. This algorithm is quite efficient for large-sized data sets, as its average-case and
worst-case complexities are O(n log n) and O(n^2), respectively.
Partition: move all the elements < pivot to the left of the pivot, and move all elements >=
pivot to the pivot’s right. Apply the same steps recursively to the two subarrays, and stop the
recursion when the base case is reached: an array of size zero or one. A C sketch of this
scheme is given below.
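A C sketch of the pivot-swapping scheme used in the walkthrough below (the leftmost element is taken as the pivot; the function names and the sample array are ours):

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Partition arr[low..high] using the leftmost element as pivot,
   moving the pivot by swaps as described in the walkthrough. */
static int partition(int arr[], int low, int high)
{
    int left = low, right = high, pivot = low;
    while (left < right) {
        if (pivot == left) {                  /* pivot at left: scan from the right */
            if (arr[pivot] > arr[right]) {
                swap(&arr[pivot], &arr[right]);
                pivot = right;                /* pivot moves to the right */
            } else {
                right--;
            }
        } else {                              /* pivot at right: scan from the left */
            if (arr[pivot] < arr[left]) {
                swap(&arr[pivot], &arr[left]);
                pivot = left;                 /* pivot moves to the left */
            } else {
                left++;
            }
        }
    }
    return pivot;                             /* pivot is now in its final place */
}

static void quick_sort(int arr[], int low, int high)
{
    if (low < high) {                         /* base case: size zero or one */
        int p = partition(arr, low, high);
        quick_sort(arr, low, p - 1);          /* sort the left subarray */
        quick_sort(arr, p + 1, high);         /* sort the right subarray */
    }
}

int main(void)
{
    int arr[] = {24, 9, 29, 14, 19, 27};
    quick_sort(arr, 0, 5);
    for (int i = 0; i < 6; i++) printf("%d ", arr[i]);   /* 9 14 19 24 27 29 */
    return 0;
}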
To understand the working of quick sort, let's take an unsorted array. It will make the concept
clearer and easier to follow.
In the given array, we consider the leftmost element as the pivot. So, in this case, a[left] = 24,
a[right] = 27 and a[pivot] = 24.
Since the pivot is at the left, the algorithm starts from the right and moves towards the left.
Now a[pivot] < a[right], so the algorithm moves one position towards the left, i.e. –
Because a[pivot] > a[right], the algorithm swaps a[pivot] with a[right], and the pivot moves
to the right, as –
Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since the pivot is at the right, the
algorithm starts from the left and moves to the right.
Now, a[left] = 29, a[right] = 24, and a[pivot] = 24. As a[pivot] < a[left], we swap a[pivot]
and a[left]; now the pivot is at the left, i.e. –
Since the pivot is at the left, the algorithm starts from the right and moves to the left. Now,
a[left] = 24, a[right] = 29, and a[pivot] = 24. As a[pivot] < a[right], the algorithm moves one
position to the left, as –
Now, a[pivot] = 24, a[left] = 24, and a[right] = 14. As a[pivot] > a[right], we swap a[pivot]
and a[right]; now the pivot is at the right, i.e. –
Now, a[pivot] = 24, a[left] = 14, and a[right] = 24. The pivot is at the right, so the algorithm
starts from the left and moves to the right.
Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So pivot, left and right all point to the
same element. This represents the termination of the procedure.
Elements to the right of element 24 are greater than it, and elements to the left of element 24
are smaller than it.
Now, in a similar manner, the quick sort algorithm is applied separately to the left and right
sub-arrays. After sorting is done, the array will be –
Time Complexity of Quick Sort Algorithm
Best Case Complexity - In Quicksort, the best case occurs when the pivot element is
the middle element or near the middle element. The best-case time complexity of
quicksort is O(n log n).
Average Case Complexity - It occurs when the array elements are in jumbled order,
neither properly ascending nor properly descending. The average-case time
complexity of quicksort is O(n log n).
Worst Case Complexity - In quick sort, the worst case occurs when the pivot element
is either the greatest or the smallest element. For example, if the pivot element is
always the last element of the array, the worst case occurs when the given array is
already sorted in ascending or descending order. The worst-case time complexity of
quicksort is O(n^2).
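These cases correspond to different recurrences: a balanced partition gives T(n) = 2T(n/2) + O(n) = O(n log n), while an extreme pivot (largest or smallest element) gives T(n) = T(n-1) + O(n) = O(n^2).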
Space Complexity
The space complexity of quick sort is O(log n) on average, for the recursion stack; in the
worst case it is O(n).
Disadvantages of Quick Sort:
It has a worst-case time complexity of O(n^2), which occurs when the pivot is chosen
poorly.
It is not a stable sort, meaning that if two elements have the same key, their relative
order will not be preserved in the sorted output, because quick sort swaps elements
according to the pivot’s position (without considering their original positions).
Applications of Quicksort
Commercial computing: it is used in various government and private organizations for
sorting various data, such as sorting files by name/date/price, sorting students by roll
number, sorting account profiles by a given id, etc.
Sorting is used for information searching, and since Quicksort is among the fastest
sorting algorithms in practice, it is widely used to support faster searching.
It is used everywhere where a stable sort is not needed.
It is an in-place sort that does not require any extra storage memory.
In numerical computations and scientific research, where accuracy in calculations
matters, many efficiently developed algorithms use priority queues, and quick sort is
used for the sorting.
Quicksort is a cache-friendly algorithm as it has a good locality of reference when
used for arrays.
It is used in operational research and event-driven simulation.
In efficient implementations, Quick Sort is not a stable sort, meaning that the relative
order of equal sort items is not preserved.
The overall time complexity of Quick Sort is O(n log n). In the worst case, it makes
O(n^2) comparisons, though this behavior is rare.
The space complexity of Quick Sort is O(log n) on average, for the recursion stack.
Apart from the stack, it is an in-place sort (i.e. it doesn’t require any extra storage).
***