Eee DS (Unit 6)
Syllabus:
Searching, Definition, Linear Search, Binary Search, Fibonacci Search, Hashing, Sorting, Definition,
Bubble Sort, Insertion Sort, Selection Sort, Quick Sort, Merging, Merge Sort, Iterative and Recursive
Merge Sort, Shell Sort, Radix Sort, Heap Sort.
Linear/Sequential Search
In computer science, searching is the process of finding an item with specified properties from a collection
of items.
In computer science, linear search or sequential search is a method for finding a particular value in a list that
consists of checking every one of its elements, one at a time and in sequence, until the desired one is found.
Linear search is the simplest search algorithm.
It is a special case of brute-force search. Its worst case cost is proportional to the number of elements in the
list.
Algorithm
# Input: Array A, integer key
# Output: first index of key in A, or -1 if not found
Algorithm: Linear_Search
for i = 0 to last index of A:
    if A[i] equals key:
        return i
return -1
Program
#include <stdio.h>

int main() {
    int array[100], key, i, n;

    printf("Enter the number of elements in array\n");
    scanf("%d", &n);
    printf("Enter %d integer(s)\n", n);
    for (i = 0; i < n; i++) {
        printf("Array[%d]=", i);
        scanf("%d", &array[i]);
    }
    printf("Enter the number to search\n");
    scanf("%d", &key);
    for (i = 0; i < n; i++) {
        if (array[i] == key) { /* required element found */
            printf("%d is present at location %d.\n", key, i + 1);
            break;
        }
    }
    if (i == n) {
        printf("%d is not present in array.\n", key);
    }
    return 0;
}
Example
Searching the array 2 9 3 1 8: the index i moves left to right, comparing each element with the key, until a match is found or the array is exhausted.
Binary Search
If we have an array that is sorted, we can use a much more efficient algorithm called Binary Search.
In binary search, each time we divide the array into two halves and compare the middle element with the search key.
If the middle element is equal to the key, we have found the element and return its index; otherwise, if the middle element is less than the key, we search the right part of the array, and if the middle element is greater than the key, we search the left part of the array.
Algorithm
# Input: sorted array A, integer key
# Output: index of key in A, or -1 if not found
Algorithm: Binary_Search
first = 0, last = last index of A
while first <= last:
    middle = (first + last) / 2
    if A[middle] equals key:
        return middle
    else if A[middle] < key:
        first = middle + 1
    else:
        last = middle - 1
return -1
Example
Find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
Hashing
(Figure: mapping of records through a hash function f() to addresses 0 to 6 in a hash table.)
Hash Table Data Structure:
There are two different forms of hashing.
1. Open hashing or external hashing
Open or external hashing, allows records to be stored in unlimited space (could be a hard disk).
It places no limitation on the size of the tables.
2. Closed hashing or internal hashing
Closed or internal hashing, uses a fixed space for storage and thus limits the size of hash table.
The basic idea is that the records [elements] are partitioned into B classes, numbered 0, 1, 2, …, B-1.
A hashing function f(x) maps a record with key n to an integer value between 0 and B-1.
Each bucket in the bucket table is the head of the linked list of records mapped to that bucket.
1. Division-Method
In this method we use the modular arithmetic system to divide the key value by some integer divisor m (which may be the table size).
It gives us the location value where the element can be placed.
We can write,
L = (K mod m) + 1
where L = location in table/file
K = key value
m = table size / number of slots in file
Suppose k = 23, m = 10; then
L = (23 mod 10) + 1 = 3 + 1 = 4, so the key whose value is 23 is placed in the 4th location.
2. Mid-square Method
In this case, we square the value of the key and take the number of digits required to form an address from the middle position of the squared value.
Suppose a key value is 16; then its square is 256. Now if we want an address of two digits, we select the address 56 (i.e., two digits starting from the middle of 256).
3. Folding Method
Most machines have a small number of primitive data types for which there are arithmetic
instructions.
Frequently the key to be used will not fit easily into one of these data types.
We cannot simply discard the portion of the key that does not fit into such an arithmetic data type.
The solution is to combine the various parts of the key in such a way that all parts of the key affect the final result; such an operation is termed folding of the key.
That is, the key is partitioned into a number of parts, each part having the same length as that of the required address.
Add the values of the parts, ignoring the final carry, to get the required address.
This is done in two ways :
o Fold-shifting: Here the actual values of the parts of the key are added.
Suppose the key is 12345678 and the required address is of two digits.
Then break the key into: 12, 34, 56, 78.
Adding these, we get 12 + 34 + 56 + 78 = 180; ignoring the leading 1, we get 80 as the location.
o Fold-boundary: Here the reversed values of the outer parts of the key are added.
Suppose the key is 12345678 and the required address is of two digits.
Then break the key into: 21, 34, 56, 87.
Adding these, we get 21 + 34 + 56 + 87 = 198; ignoring the leading 1, we get 98 as the location.
4. Digit Analysis
This hashing function is distribution-dependent.
Here we make a statistical analysis of the digits of the keys, and select those digits (of fixed position) which occur quite frequently.
Then reverse or shift the digits to get the address.
For example, suppose the key is 9861234. If statistical analysis has revealed that the third and fifth position digits occur quite frequently, then we choose the digits in these positions from the key. So we get 62. Reversing it, we get 26 as the address.
5. Length-Dependent Method
In this type of hashing function, we use the length of the key along with some portion of the key to produce the address directly.
In the indirect method, the length of the key along with some portion of the key is used to obtain an intermediate value.
6. Algebraic Coding
Here an n-bit key value is represented as a polynomial.
The divisor polynomial is then constructed based on the address range required.
Modular division of the key polynomial by the divisor polynomial gives the address polynomial.
Let f(x) = polynomial of the n-bit key = a1 + a2x + … + anx^(n-1)
d(x) = divisor polynomial of degree t = x^t + d1 + d2x + … + dtx^(t-1)
then the required address polynomial will be f(x) mod d(x)
7. Multiplicative Hashing
This method is based on obtaining an address of a key, based on the multiplication value.
If k is the non-negative key and c is a constant (0 < c < 1), compute kc mod 1, which is the fractional part of kc.
Multiply this fractional part by m and take the floor value to get the address:
h(k) = ⌊m (kc mod 1)⌋, so that 0 ≤ h(k) < m
Collision Resolution Strategies (Synonym Resolution)
Collision resolution is the main problem in hashing.
If the element to be inserted is mapped to the same location where an element is already inserted, then we have a collision and it must be resolved.
There are several strategies for collision resolution. The most commonly used are :
1. Separate chaining - used with open hashing
2. Open addressing - used with closed hashing
1. Separate chaining
In this strategy, a separate list of all elements mapped to the same value is maintained.
Separate chaining is based on collision avoidance.
If memory space is tight, separate chaining should be avoided.
Additional memory space for links is wasted in storing address of linked elements.
The hashing function should ensure an even distribution of elements among the buckets; otherwise the timing behavior of most operations on the hash table will deteriorate.
List of Elements
Example (hash function key % 10): bucket 0 holds 10 → 50; bucket 2 holds 12 → 32 → 62; bucket 4 holds 4 → 24; bucket 7 holds 7; bucket 9 holds 9 → 69.
Example (hash function key % 5): bucket 0 holds 5 → 10 → 15 → 50; bucket 1 holds 1 → 21 → 31; bucket 2 holds 2 → 22 → 32; bucket 3 holds 3 → 33 → 48; bucket 4 holds 4 → 34 → 49.
2. Open Addressing
Separate chaining requires additional memory space for pointers. Open addressing is an alternate method of handling collisions.
In open addressing, if a collision occurs, alternate cells are tried until an empty cell is found.
a. Linear probing
b. Quadratic probing
c. Double hashing.
a) Linear Probing
In linear probing, whenever there is a collision, cells are searched sequentially (with
wraparound) for an empty cell.
The figure shows the result of inserting the keys {5, 18, 55, 78, 35, 15} using the hash function f(key) = key % 10 and the linear probing strategy.
c) Double Hashing
This method requires two hashing functions f1 (key) and f2 (key).
Problem of clustering can easily be handled through double hashing.
Function f1 (key) is known as primary hash function.
In case the address obtained by f1(key) is already occupied by a key, the function f2(key) is evaluated.
The second function f2(key) is used to compute the increment to be added to the address obtained by the first hash function f1(key) in case of collision.
The search for an empty location is made successively at the addresses f1(key) + f2(key), f1(key) + 2f2(key), f1(key) + 3f2(key), ...
SORTING
Arranging the data in ascending or descending order is known as sorting.
Sorting is very important from the point of view of our practical life.
The best example of sorting is the contact list in our phones: if the entries were not maintained in alphabetical order, we would not be able to search for any number effectively.
Sorting Methods
Many methods are used for sorting, such as:
1. Bubble sort
2. Selection sort
3. Insertion sort
4. Quick sort
5. Merge sort
6. Heap sort
7. Radix sort
8. Shell sort
Generally a sort is classified as internal if the data being sorted is in main memory, and external if the data being sorted is in auxiliary storage.
Quick Sort:
Quick sort is a divide and conquer algorithm. Quick sort first divides a large list into two smaller sub-lists: the low elements and the high elements. Quick sort can then recursively sort the sub-lists.
The list is divided into two partitions such that "all elements to the left of the pivot are smaller than the pivot and all elements to the right of the pivot are greater than or equal to the pivot".
Step by Step Process
In the Quick sort algorithm, partitioning of the list is performed using the following steps:
Step 1 - Consider the first element of the list as the pivot (i.e., the element at the first position in the list).
Step 2 - Define two variables i and j. Set i and j to the first and last elements of the list respectively.
Step 3 - Increment i until list[i] > pivot, then stop.
Step 4 - Decrement j until list[j] < pivot, then stop.
Step 5 - If i < j, then exchange list[i] and list[j].
Step 6 - Repeat steps 3, 4 & 5 until i > j.
Step 7 - Exchange the pivot element with the list[j] element.
Advantages:
One of the fastest algorithms on average.
Does not need additional memory (the sorting takes place in the array - this is called in-place
processing).
Disadvantages: The worst-case complexity is O(N²).
Step-by-step example (pivot = first element; i moves up from the left, j moves down from the right):
38 08 16 06 79 57 24 56 02 58 04 70 45   (initial list, pivot = 38)
38 08 16 06 04 57 24 56 02 58 79 70 45   (swap 79 and 04)
38 08 16 06 04 02 24 56 57 58 79 70 45   (swap 57 and 02)
24 08 16 06 04 02 38 56 57 58 79 70 45   (i has crossed j: swap pivot 38 with list[j] = 24)
The sub-lists (24 08 16 06 04 02) and (56 57 58 79 70 45) are then partitioned recursively in the same way:
(02 04 06 08 16 24) 38 (45 56 57 58 70 79)
02 04 06 08 16 24 38 45 56 57 58 70 79   The array is sorted
Merge Sort:
Merge sort is based on the Divide and Conquer method. It takes the list to be sorted and divides it in half to create two unsorted lists. The two unsorted lists are then sorted and merged to get a sorted list. The two unsorted lists are sorted by continually calling the merge-sort algorithm; we eventually get lists of size 1, which are already sorted. The two lists of size 1 are then merged.
Input the total number of elements in the array (number_of_elements). Input the array (array[number_of_elements]). Then call the function MergeSort() to sort the input array.
The MergeSort() function sorts the array in the range [left, right], i.e., from index left to index right inclusive. The Merge() function merges the two sorted parts; the sorted parts will be [left, mid] and [mid+1, right]. After merging, output the sorted array.
MergeSort() function:
It takes the array and the left-most and right-most indices of the part to be sorted as arguments. The middle index (mid) is calculated as (left + right)/2. We sort only when left < right, because when left = right the part contains a single element and is already sorted. Sort the left part by calling the MergeSort() function again over the left part, MergeSort(array, left, mid), and the right part by the recursive call MergeSort(array, mid + 1, right). Lastly, merge the two sorted parts using the Merge() function.
Merge() function:
It takes the array and the left-most, middle and right-most indices of the parts to be merged as arguments. Finally, it copies the merged result back to the original array.
Algorithm: Mergesort(A[0...n-1])
if n > 1
    copy A[0 ... floor(n/2)-1] to B[0 ... floor(n/2)-1]
    copy A[floor(n/2) ... n-1] to C[0 ... ceil(n/2)-1]
    Mergesort(B[0 ... floor(n/2)-1])
    Mergesort(C[0 ... ceil(n/2)-1])
    Merge(B, C, A)
Algorithm: Merge(B[0...p-1], C[0...q-1], A[0...p+q-1])
i = 0, j = 0, k = 0
while i < p and j < q do
    if B[i] <= C[j]
        A[k] = B[i]; i = i + 1
    else
        A[k] = C[j]; j = j + 1
    k = k + 1
// Copy the leftover elements
if i == p
    copy C[j ... q-1] to A[k ... p+q-1]
else
    copy B[i ... p-1] to A[k ... p+q-1]
Step-by-step example: (diagram omitted)
Time Complexity:
Merge sort always divides the list in half and takes linear time to merge the two halves, so its time complexity is O(n log n) in the best, average and worst cases.
Heap Sort:
Heap sort is one of the sorting algorithms used to arrange a list of elements in order. The Heapsort algorithm uses one of the tree concepts called a Heap Tree. In this sorting algorithm, we use a Max Heap to arrange the list of elements in descending order and a Min Heap to arrange the list of elements in ascending order.
The Heap sort algorithm to arrange a list of elements in ascending order is performed using the following steps...
Step 1 - Construct a Binary Tree with the given list of elements.
Step 2 - Transform the Binary Tree into a Min Heap.
Step 3 - Delete the root element from the Min Heap using the Heapify method.
Step 4 - Put the deleted element into the sorted list.
Step 5 - Repeat steps 3 and 4 until the Min Heap becomes empty.