Interpolation Search & External Sorting
Given a sorted array of n uniformly distributed values arr[], write a function to search
for a particular element x in the array.
Linear Search finds the element in O(n) time, Jump Search takes O(√n) time,
and Binary Search takes O(log n) time.
Interpolation Search is an improvement over Binary Search for instances where
the values in the sorted array are uniformly distributed. Interpolation constructs new
data points within the range of a discrete set of known data points. Binary Search
always probes the middle element, whereas interpolation search may probe different
locations depending on the value of the key being searched. For example, if the key
is closer to the last element, interpolation search is likely to start searching toward
the end of the array.
To find the position to be searched, it uses the following formula:

pos = lo + ((x - arr[lo]) * (hi - lo)) / (arr[hi] - arr[lo])

// The idea of the formula is to return a higher value of pos
// when the element to be searched is closer to arr[hi], and a
// smaller value when it is closer to arr[lo].
arr[] ==> Array in which the element is to be searched
x ==> Element to be searched
lo ==> Starting index in arr[]
hi ==> Ending index in arr[]
There are many different interpolation methods, and one such method is linear
interpolation. Linear interpolation takes two known data points, say (x1, y1) and
(x2, y2), and estimates the value y at a point x between them as:

y = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
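As a quick illustration of this formula, here is a minimal Python sketch (the function
name linear_interpolate is chosen here for illustration and is not from the original text):

def linear_interpolate(x1, y1, x2, y2, x):
    # Estimate y at x on the straight line through (x1, y1) and (x2, y2)
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

# Example: halfway between (0, 0) and (10, 100) the estimate is 50.0
print(linear_interpolate(0, 0, 10, 100, 5))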
This algorithm works much like the way we search for a word in a dictionary: we open
the dictionary near where we expect the word to be. Interpolation search improves on
binary search by using the value of the key to estimate its position. The fraction used
to narrow the search space is:

K = (data - low) / (high - low)

K estimates how far into the current range the key lies. In the case of binary search,
by contrast, the probe position is always the midpoint, (low + high) / 2.
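The driver code below calls an interpolationSearch function. A minimal Python sketch of
that routine, based on the probe-position formula above (the exact implementation is not
shown in the original text), could look like this:

def interpolationSearch(arr, lo, hi, x):
    # Probe positions are estimated from the value of x, assuming the
    # array values are (roughly) uniformly distributed.
    while lo <= hi and arr[lo] <= x <= arr[hi]:
        if arr[lo] == arr[hi]:
            return lo if arr[lo] == x else -1
        # Higher pos when x is closer to arr[hi], lower when closer to arr[lo]
        pos = lo + ((x - arr[lo]) * (hi - lo)) // (arr[hi] - arr[lo])
        if arr[pos] == x:
            return pos
        if arr[pos] < x:
            lo = pos + 1   # x lies in the upper part of the range
        else:
            hi = pos - 1   # x lies in the lower part of the range
    return -1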
# Driver code
# Example sorted, roughly uniformly distributed array (assumed here;
# the original listing does not show the array definition)
arr = [10, 12, 13, 16, 18, 19, 20, 21, 22, 23, 24, 33, 35, 42, 47]
n = len(arr)

# Element to be searched
x = 18
index = interpolationSearch(arr, 0, n - 1, x)
if index != -1:
    print("Element found at index", index)
else:
    print("Element not found")
Output
Element found at index 4
Time Complexity: O(log2(log2 n)) for the average case, and O(n) for the worst case
Auxiliary Space Complexity: O(1)
External Sorting
External sorting is a term for a class of sorting algorithms that can handle massive
amounts of data. External sorting is required when the data being sorted does not fit
into the main memory of a computing device (usually RAM) and instead, must reside
in the slower external memory (usually a hard drive).
External sorting typically uses a hybrid sort-merge strategy. In the sorting phase,
chunks of data small enough to fit in the main memory are read, sorted, and written
out to a temporary file. In the merge phase, the sorted sub-files are combined into a
single larger file.
Example:
One example is the external merge sort algorithm, which sorts chunks that each fit in
RAM and then merges the sorted chunks together. We first divide the file into runs such
that the size of a run is small enough to fit into main memory. Then we sort each run in
main memory using the merge sort algorithm. Finally, we merge the resulting runs
together into successively bigger runs until the file is sorted.
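A minimal Python sketch of the run-creation (sorting) phase, assuming the input is a
text file with one integer per line and a hypothetical run_size limit on how many
values fit in memory:

from itertools import islice

def create_sorted_runs(input_file, run_size):
    # Read at most run_size numbers at a time, sort each chunk in memory,
    # and write it out as a temporary run file; return the run file names.
    run_files = []
    with open(input_file) as f:
        while True:
            chunk = [int(line) for line in islice(f, run_size)]
            if not chunk:
                break
            chunk.sort()  # in-memory sorting phase (built-in sort used for brevity)
            name = "run_%d.txt" % len(run_files)
            with open(name, "w") as out:
                out.write("\n".join(map(str, chunk)) + "\n")
            run_files.append(name)
    return run_files

The merge phase, which combines these run files into a single sorted output file, is
sketched after the step-by-step procedure below.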
When do we use External Sorting?
When the unsorted data is too large to be sorted in the computer's internal
memory, we use external sorting.
External sorting uses secondary storage devices, such as disk arrays or tape drives.
When the data is large, divide-and-conquer sorts such as merge sort and quick sort are used.
Quick Sort: best average-case runtime.
Merge Sort: best worst-case runtime.
To perform sort-merge join operations on data.
To evaluate an ORDER BY query.
To detect duplicate elements.
Where we need to take very large input from the user.
Examples:
1. Merge sort
2. Tape sort
3. Polyphase sort
4. External radix
5. External merge
Prerequisites: MergeSort, Merge K Sorted Arrays
Input:
input_file: Name of the input file (e.g., input.txt)
output_file: Name of the output file (e.g., output.txt)
run_size: Size of a run (small enough to fit in RAM)
num_ways: Number of runs to be merged
To solve the problem, follow the idea below:
The idea is straightforward: all the elements cannot be sorted at once because the
input is too large to handle in memory. So the data is divided into chunks, each chunk
is sorted using merge sort, and the sorted chunks are written out to temporary files.
After the individual chunks are sorted, the whole data set is sorted by merging the
resulting runs, using the idea of merging k sorted arrays.
Follow the steps below to solve the problem (a sketch of the merge phase follows the list):
Read input_file such that at most run_size elements are read at a time. Do the
following for every run read into the array:
Sort the run using MergeSort.
Write the sorted run to a temporary file, say file 'i' for the i-th run.
Merge the sorted run files using the approach discussed in Merge K Sorted Arrays.
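A minimal Python sketch of this merge step, assuming the run files hold one integer per
line (as produced by the create_sorted_runs sketch earlier) and using the standard
library's heapq.merge for the k-way merge:

import heapq

def merge_runs(run_files, output_file):
    # K-way merge of the already-sorted run files into one sorted output file.
    files = [open(name) for name in run_files]
    try:
        # heapq.merge combines the sorted streams lazily with a min-heap,
        # so only one value per run needs to be in memory at a time.
        streams = [map(int, f) for f in files]
        with open(output_file, "w") as out:
            for value in heapq.merge(*streams):
                out.write("%d\n" % value)
    finally:
        for f in files:
            f.close()

A full external sort would then simply call create_sorted_runs(input_file, run_size)
followed by merge_runs(run_files, output_file). Note that this sketch merges all runs
at once rather than num_ways runs at a time.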