Module 4

The document discusses arrays, including their advantages and disadvantages. The key advantages are efficient access to elements, fast data retrieval due to contiguous memory storage, and memory efficiency. The main disadvantages are fixed size, issues with insertion/deletion due to shifting elements, and lack of flexibility compared to other data structures. C code examples are provided for linear search, insertion at the end of an array, deletion from an array, and selection sort of an array.

Module-4:

Advantages of array data structure:

- Efficient access to elements: Arrays provide direct and efficient access to any element in the collection. Accessing an element in an array is an O(1) operation, meaning that the time required to access an element is constant and does not depend on the size of the array.
- Fast data retrieval: Arrays allow for fast data retrieval because the data is stored in contiguous memory locations. This means that the data can be accessed quickly and efficiently without the need for complex data structures or algorithms.
- Memory efficiency: Arrays are a memory-efficient way of storing data. Because the elements of an array are stored in contiguous memory locations, the size of the array is known at compile time. This means that memory can be allocated for the entire array in one block, reducing memory fragmentation.
- Versatility: Arrays can be used to store a wide range of data types, including integers, floating-point numbers, characters, and even complex data structures such as objects and pointers.
- Easy to implement: Arrays are easy to implement and understand, making them an ideal choice for beginners learning computer programming.
- Compatibility with hardware: The array data structure is compatible with most hardware architectures, making it a versatile tool for programming in a wide range of environments.

Disadvantages of array data structure:

- Fixed size: Arrays have a fixed size that is determined at the time of creation. This means that if the size of the array needs to be increased, a new array must be created and the data must be copied from the old array to the new array, which can be time-consuming and memory-intensive.
- Memory allocation issues: Allocating a large array can be problematic, particularly in systems with limited memory. If the size of the array is too large, the system may run out of memory, which can cause the program to crash.
- Insertion and deletion issues: Inserting or deleting an element from an array can be inefficient and time-consuming because all the elements after the insertion or deletion point must be shifted to accommodate the change.
- Wasted space: If an array is not fully populated, there can be wasted space in the memory allocated for the array. This can be a concern if memory is limited.
- Limited data type support: Arrays have limited support for complex data types such as objects and structures, as the elements of an array must all be of the same data type.
- Lack of flexibility: The fixed size and limited support for complex data types can make arrays inflexible compared to other data structures such as linked lists and trees.
Coding implementation of the search operation: C program to implement linear search in an unsorted array.

#include <stdio.h>

// Function to implement search operation
int findElement(int arr[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)
        if (arr[i] == key)
            return i;

    // If the key is not found
    return -1;
}

// Driver's Code
int main()
{
    int arr[] = { 12, 34, 10, 6, 40 };
    int n = sizeof(arr) / sizeof(arr[0]);

    // Using the last element as the search element
    int key = 40;

    // Function call
    int position = findElement(arr, n, key);

    if (position == -1)
        printf("Element not found");
    else
        printf("Element Found at Position: %d", position + 1);

    return 0;
}

Output: Element Found at Position: 5

Coding implementation of inserting an element at the end: C program to implement the insert operation in an unsorted array.

#include <stdio.h>

// Inserts a key in arr[] of given capacity.
// n is the current size of arr[]. This function
// returns n + 1 if insertion is successful, else n.
int insertSorted(int arr[], int n, int key, int capacity)
{
    // Cannot insert more elements if n is
    // already more than or equal to capacity
    if (n >= capacity)
        return n;

    arr[n] = key;

    return (n + 1);
}

// Driver Code
int main()
{
    int arr[20] = { 12, 16, 20, 40, 50, 70 };
    int capacity = sizeof(arr) / sizeof(arr[0]);
    int n = 6;
    int i, key = 26;

    printf("\nBefore Insertion: ");
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);

    // Inserting key
    n = insertSorted(arr, n, key, capacity);

    printf("\nAfter Insertion: ");
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);

    return 0;
}

Output:
Before Insertion: 12 16 20 40 50 70
After Insertion: 12 16 20 40 50 70 26
Delete Operation: C program to implement the delete operation in an unsorted array.

#include <stdio.h>

// To search a key to be deleted
int findElement(int arr[], int n, int key);

// Function to delete an element
int deleteElement(int arr[], int n, int key)
{
    // Find position of element to be deleted
    int pos = findElement(arr, n, key);

    if (pos == -1) {
        printf("Element not found");
        return n;
    }

    // Deleting element
    int i;
    for (i = pos; i < n - 1; i++)
        arr[i] = arr[i + 1];

    return n - 1;
}

// Function to implement search operation
int findElement(int arr[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)
        if (arr[i] == key)
            return i;

    return -1;
}

// Driver's code
int main()
{
    int i;
    int arr[] = { 10, 50, 30, 40, 20 };
    int n = sizeof(arr) / sizeof(arr[0]);
    int key = 30;

    printf("Array before deletion\n");
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);

    // Function call
    n = deleteElement(arr, n, key);

    printf("\nArray after deletion\n");
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);

    return 0;
}

Output:
Array before deletion
10 50 30 40 20
Array after deletion
10 50 40 20

Sorting operation: C program for implementation of selection sort.

#include <stdio.h>

void swap(int *xp, int *yp)
{
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

void selectionSort(int arr[], int n)
{
    int i, j, min_idx;

    // One by one move boundary of unsorted subarray
    for (i = 0; i < n - 1; i++)
    {
        // Find the minimum element in unsorted array
        min_idx = i;
        for (j = i + 1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;

        // Swap the found minimum element with the first element
        if (min_idx != i)
            swap(&arr[min_idx], &arr[i]);
    }
}

/* Function to print an array */
void printArray(int arr[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

// Driver program to test above functions
int main()
{
    int arr[] = { 64, 25, 12, 22, 11 };
    int n = sizeof(arr) / sizeof(arr[0]);
    selectionSort(arr, n);
    printf("Sorted array: \n");
    printArray(arr, n);
    return 0;
}

Output:
Sorted array:
11 12 22 25 64

// C program for insertion sort

#include <stdio.h>

/* Function to sort an array using insertion sort */
void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++) {
        key = arr[i];
        j = i - 1;

        /* Move elements of arr[0..i-1], that are
           greater than key, to one position ahead
           of their current position */
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}

// A utility function to print an array of size n
void printArray(int arr[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

/* Driver program to test insertion sort */
int main()
{
    int arr[] = { 12, 11, 13, 5, 6 };
    int n = sizeof(arr) / sizeof(arr[0]);

    insertionSort(arr, n);
    printArray(arr, n);

    return 0;
}

Output: 5 6 11 12 13

// C code to implement quicksort

#include <stdio.h>

// Function to swap two elements
void swap(int* a, int* b)
{
    int t = *a;
    *a = *b;
    *b = t;
}

// Partition the array using the last element as the pivot
int partition(int arr[], int low, int high)
{
    // Choosing the pivot
    int pivot = arr[high];

    // Index of smaller element and indicates
    // the right position of pivot found so far
    int i = (low - 1);

    for (int j = low; j <= high - 1; j++) {

        // If current element is smaller than the pivot
        if (arr[j] < pivot) {

            // Increment index of smaller element
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[high]);
    return (i + 1);
}

// The main function that implements QuickSort
// arr[] --> Array to be sorted,
// low --> Starting index,
// high --> Ending index
void quickSort(int arr[], int low, int high)
{
    if (low < high) {

        // pi is partitioning index, arr[pi]
        // is now at right place
        int pi = partition(arr, low, high);

        // Separately sort elements before
        // partition and after partition
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

// Driver code
int main()
{
    int arr[] = { 10, 7, 8, 9, 1, 5 };
    int N = sizeof(arr) / sizeof(arr[0]);

    // Function call
    quickSort(arr, 0, N - 1);
    printf("Sorted array: \n");
    for (int i = 0; i < N; i++)
        printf("%d ", arr[i]);
    return 0;
}

Output:
Sorted array:
1 5 7 8 9 10

Quick Sort Algorithm

quickSort(array, leftmostIndex, rightmostIndex)
    if (leftmostIndex < rightmostIndex)
        pivotIndex <- partition(array, leftmostIndex, rightmostIndex)
        quickSort(array, leftmostIndex, pivotIndex - 1)
        quickSort(array, pivotIndex + 1, rightmostIndex)

partition(array, leftmostIndex, rightmostIndex)
    set element[rightmostIndex] as pivotElement
    storeIndex <- leftmostIndex - 1
    for i <- leftmostIndex to rightmostIndex - 1
        if element[i] < pivotElement
            storeIndex++
            swap element[i] and element[storeIndex]
    swap pivotElement and element[storeIndex + 1]
    return storeIndex + 1
Quicksort complexity

Time complexity of quicksort in the best case, average case, and worst case:

Case            Time Complexity
Best Case       O(n log n)
Average Case    O(n log n)
Worst Case      O(n²)

o Best Case Complexity - In quicksort, the best case occurs when the pivot element is the middle element or near the middle element. The best-case time complexity of quicksort is O(n log n).

o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average-case time complexity of quicksort is O(n log n).

o Worst Case Complexity - In quicksort, the worst case occurs when the pivot element is either the greatest or the smallest element. For example, if the pivot element is always the last element of the array, the worst case occurs when the given array is already sorted in ascending or descending order. The worst-case time complexity of quicksort is O(n²).

o The space complexity of quicksort is O(log n) on average for the recursion stack, and O(n) in the worst case.

Advantages of Quick Sort:

- It is a divide-and-conquer algorithm, which makes the problem easier to solve.
- It is efficient on large data sets.
- It has low overhead, as it only requires a small amount of extra memory to function.

Disadvantages of Quick Sort:

- It has a worst-case time complexity of O(N²), which occurs when the pivot is chosen poorly.
- It is not a good choice for small data sets.
- It is not a stable sort: if two elements have the same key, their relative order is not necessarily preserved in the sorted output, because quicksort swaps elements according to the pivot's position without considering their original positions.
// C program for Merge Sort

#include <stdio.h>
#include <stdlib.h>

// Merges two subarrays of arr[].
// First subarray is arr[l..m]
// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;

    // Create temp arrays
    int L[n1], R[n2];

    // Copy data to temp arrays L[] and R[]
    for (i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    // Merge the temp arrays back into arr[l..r]
    i = 0;
    j = 0;
    k = l;
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        }
        else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }

    // Copy the remaining elements of L[],
    // if there are any
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }

    // Copy the remaining elements of R[],
    // if there are any
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}

// l is for left index and r is right index of the
// sub-array of arr to be sorted
void mergeSort(int arr[], int l, int r)
{
    if (l < r) {
        int m = l + (r - l) / 2;

        // Sort first and second halves
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);

        merge(arr, l, m, r);
    }
}

// Function to print an array
void printArray(int A[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        printf("%d ", A[i]);
    printf("\n");
}

// Driver code
int main()
{
    int arr[] = { 12, 11, 13, 5, 6, 7 };
    int arr_size = sizeof(arr) / sizeof(arr[0]);

    printf("Given array is \n");
    printArray(arr, arr_size);

    mergeSort(arr, 0, arr_size - 1);

    printf("\nSorted array is \n");
    printArray(arr, arr_size);

    return 0;
}
Output
Given array is
12 11 13 5 6 7
Sorted array is
5 6 7 11 12 13

Algorithm:

MergeSort(A, p, r):
    if p >= r
        return
    q = (p + r) / 2
    mergeSort(A, p, q)
    mergeSort(A, q + 1, r)
    merge(A, p, q, r)

Complexity Analysis of Merge Sort:

Time Complexity: O(N log N)

Merge sort is a recursive algorithm, and its time complexity can be expressed by the following recurrence relation:
T(n) = 2T(n/2) + θ(n)

The above recurrence can be solved using the recurrence tree method or the Master method. It falls into case II of the Master method, and the solution of the recurrence is θ(N log N). The time complexity of merge sort is θ(N log N) in all 3 cases (worst, average, and best), as merge sort always divides the array into two halves and takes linear time to merge the two halves.

Auxiliary Space: O(N). In merge sort, all elements are copied into an auxiliary array, so O(N) auxiliary space is required.

Applications of Merge Sort:

- Sorting large datasets: Merge sort is particularly well-suited for sorting large datasets due to its guaranteed worst-case time complexity of O(n log n).
- External sorting: Merge sort is commonly used in external sorting, where the data to be sorted is too large to fit into memory.
- Custom sorting: Merge sort can be adapted to handle different input distributions, such as partially sorted, nearly sorted, or completely unsorted data.

Advantages of Merge Sort:

- Stability: Merge sort is a stable sorting algorithm, which means it maintains the relative order of equal elements in the input array.
- Guaranteed worst-case performance: Merge sort has a worst-case time complexity of O(N log N), which means it performs well even on large datasets.
- Parallelizable: Merge sort is a naturally parallelizable algorithm, which means it can easily be parallelized to take advantage of multiple processors or threads.

Drawbacks of Merge Sort:

- Space complexity: Merge sort requires additional memory to store the merged sub-arrays during the sorting process.
- Not in-place: Merge sort is not an in-place sorting algorithm, which means it requires additional memory to store the sorted data. This can be a disadvantage in applications where memory usage is a concern.
- Not always optimal for small datasets: For small datasets, merge sort has more overhead than some other sorting algorithms, such as insertion sort. This can result in slower performance for very small datasets.

// Heap Sort in C

#include <stdio.h>

// Function to swap the position of two elements
void swap(int* a, int* b)
{
    int temp = *a;
    *a = *b;
    *b = temp;
}

// To heapify a subtree rooted with node i
// which is an index in arr[].
// N is the size of the heap
void heapify(int arr[], int N, int i)
{
    // Find largest among root,
    // left child and right child

    // Initialize largest as root
    int largest = i;

    // left = 2*i + 1
    int left = 2 * i + 1;

    // right = 2*i + 2
    int right = 2 * i + 2;

    // If left child is larger than root
    if (left < N && arr[left] > arr[largest])
        largest = left;

    // If right child is larger than largest so far
    if (right < N && arr[right] > arr[largest])
        largest = right;

    // Swap and continue heapifying
    // if root is not largest
    if (largest != i) {
        swap(&arr[i], &arr[largest]);

        // Recursively heapify the affected sub-tree
        heapify(arr, N, largest);
    }
}

// Main function to do heap sort
void heapSort(int arr[], int N)
{
    // Build max heap
    for (int i = N / 2 - 1; i >= 0; i--)
        heapify(arr, N, i);

    // Heap sort
    for (int i = N - 1; i >= 0; i--) {
        swap(&arr[0], &arr[i]);

        // Heapify root element
        // to get highest element at root again
        heapify(arr, i, 0);
    }
}

// A utility function to print array of size N
void printArray(int arr[], int N)
{
    for (int i = 0; i < N; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

// Driver's code
int main()
{
    int arr[] = { 12, 11, 13, 5, 6, 7 };
    int N = sizeof(arr) / sizeof(arr[0]);

    // Function call
    heapSort(arr, N);
    printf("Sorted array is\n");
    printArray(arr, N);
    return 0;
}

Output
Sorted array is
5 6 7 11 12 13
Complexity Analysis of Heap Sort
Time Complexity: O(N log N)

Auxiliary Space: O(log n), due to the recursive call stack. However, auxiliary space can be O(1) for iterative implementation.

Advantages of Heap Sort:

- Efficiency: The time required to perform heap sort grows only as O(N log N), while simpler algorithms may slow down quadratically as the number of items to sort increases. This sorting algorithm is very efficient.
- Memory Usage: Memory usage is minimal because, apart from what is necessary to hold the initial list of items to be sorted, it needs no additional memory space to work.
- Simplicity: It is simpler to understand than some other equally efficient sorting algorithms, and the heapify step can also be written without recursion.

Disadvantages of Heap Sort:

- Costly: Heap sort has a higher constant factor than quicksort, largely because of its poor cache locality.
- Unstable: Heap sort is unstable; it might rearrange the relative order of equal elements.
- Inefficient on complex data: Heap sort is not very efficient when working with highly complex data.

Frequently Asked Questions Related to Heap Sort

Q1. What are the two phases of Heap Sort?


The heap sort algorithm consists of two phases. In the first phase, the array is converted into a max heap. And in the second phase,
the highest element is removed (i.e., the one at the tree root) and the remaining elements are used to create a new max heap.

Q2. Why Heap Sort is not stable?


The heap sort algorithm is not a stable algorithm. This algorithm is not stable because the operations that are performed in a heap
can change the relative ordering of the equivalent keys.

Q3. Is Heap Sort an example of the “Divide and Conquer” algorithm?


Heap sort is NOT at all a Divide and Conquer algorithm. It uses a heap data structure to efficiently sort its element and not a
“divide and conquer approach” to sort the elements.

Q4. Which sorting algorithm is better – Heap sort or Merge Sort?


The answer lies in the comparison of their time complexity and space requirements. The Merge sort is slightly faster than the Heap
sort. But on the other hand merge sort takes extra memory. Depending on the requirement, one should choose which one to use.

Q5. Why is Heap sort better than Selection sort?


Heap sort is similar to selection sort, but with a better way to get the maximum element: it takes advantage of the heap data structure, which yields the maximum in constant time and restores the heap in O(log n) after each removal.

Hashing:
Hashing is a technique or process of mapping keys and values into a hash table by using a hash function. It is done for faster access to elements. The efficiency of the mapping depends on the efficiency of the hash function used.

There are majorly three components of hashing:

1. Key: A key can be anything, a string or an integer, that is fed as input to the hash function, the technique that determines an index or location for storage of an item in a data structure.
2. Hash Function: The hash function receives the input key and returns the index of an element in an array called a hash table. The index is known as the hash index.
3. Hash Table: A hash table is a data structure that maps keys to values using a special function called a hash function. A hash table stores the data in an associative manner in an array, where each data value has its own unique index.

How does Hashing work?

Suppose we have a set of strings {"ab", "cd", "efg"} and we would like to store it in a table. Our main objective here is to search or update the values stored in the table quickly, in O(1) time, and we are not concerned about the ordering of the strings in the table. So the given set of strings can act as keys, and the string itself will act as the value, but how do we store the value corresponding to the key?

Step 1: We know that hash functions (some mathematical formula) are used to calculate the hash value, which acts as the index of the data structure where the value will be stored.
Step 2: So, let's assign "a" = 1, "b" = 2, and so on, to all alphabetical characters.
Step 3: Therefore, the numerical value is the summation of all characters of the string:
"ab" = 1 + 2 = 3,
"cd" = 3 + 4 = 7,
"efg" = 5 + 6 + 7 = 18
Step 4: Now, assume that we have a table of size 7 to store these strings. The hash function used here is the sum of the characters in the key mod the table size. We can compute the location of the string in the array by taking sum(string) mod 7.
Step 5: So we will then store
"ab" in 3 mod 7 = 3,
"cd" in 7 mod 7 = 0, and
"efg" in 18 mod 7 = 4.

The above technique enables us to calculate the location of a given string with a simple hash function and rapidly find the value stored at that location. Therefore, hashing is a great way to store (key, value) pairs of the data in a table.

Hash function
The hash function creates a mapping between key and value; this is done through the use of mathematical formulas known as hash functions. The result of the hash function is referred to as a hash value or hash. The hash value is a representation of the original string of characters, but usually smaller than the original.
For example: consider an array as a map where the key is the index and the value is the value at that index. So for an array A, if we have an index i, which will be treated as the key, then we can find the value by simply looking up A[i].

Types of Hash Functions

There are many hash functions that use numeric or alphanumeric keys. The most commonly discussed are:
1. Division Method
2. Mid Square Method
3. Folding Method
4. Multiplication Method

Complexity of calculating the hash value using the hash function

Time complexity: O(n), where n is the length of the key
Space complexity: O(1)

Advantages of Hash Data Structure

1. Hash provides better synchronization than other data structures.
2. Hash tables are more efficient than search trees or other data structures for lookups.
3. Hash provides constant time for searching, insertion, and deletion operations on average.

Disadvantages of Hash Data Structure

1. Hash is inefficient when there are many collisions.
2. Hash collisions are practically unavoidable for a large set of possible keys.
3. Hash does not allow null values.
What is Collision?
Since a hash function maps a key that may be a big integer or string to a small number, there is a possibility that two keys hash to the same value. The situation where a newly inserted key maps to an already occupied slot in the hash table is called a collision, and it must be handled using some collision-handling technique. With a good hash function, the time to insert is O(1), and the time complexity of search, insert, and delete is O(1) if the load factor α (the number of stored keys divided by the table size) is O(1).
There are mainly two methods to handle collisions:
- Separate Chaining
- Open Addressing

Separate Chaining:
The idea behind separate chaining is to make each cell of the hash table point to a linked list, called a chain, of records that hash to the same slot. Separate chaining is one of the most popular and commonly used techniques for handling collisions.
Let us consider a simple hash function "key mod 7" and a sequence of keys 50, 700, 76, 85, 92, 73, 101.

Open Addressing:
Like separate chaining, open addressing is a method for handling collisions. In Open Addressing, all elements are
stored in the hash table itself. So at any point, the size of the table must be greater than or equal to the total number of
keys (Note that we can increase table size by copying old data if needed). This approach is also known as closed
hashing. This entire procedure is based upon probing.

Different ways of Open Addressing:


1. Linear Probing:
In linear probing, the hash table is searched sequentially, starting from the original hash location. If the location we get is already occupied, we check the next location.
Let us consider a simple hash function "key mod 7" and a sequence of keys 50, 700, 76, 85, 92, 73, 101. That is, hash(key) = key % S, where S = size of the table = 7, indexed from 0 to 6. We can define the hash function as per our choice when creating a hash table, although it is often fixed internally with a pre-defined formula.
2. Quadratic Probing:
In quadratic probing, the interval between probes increases quadratically with the probe number, which helps reduce the primary clustering seen with linear probing. In this method, we look for the i²-th slot in the i-th iteration. We always start from the original hash location; if that location is occupied, we check the other slots.

Let us consider table size = 7, the hash function Hash(x) = x % 7, and the collision resolution strategy f(i) = i². Insert 22, 30, and 50.

Step 1: Create a table of size 7.

Step 2: Insert 22 and 30.
- Hash(22) = 22 % 7 = 1. Since the cell at index 1 is empty, we can easily insert 22 at slot 1.
- Hash(30) = 30 % 7 = 2. Since the cell at index 2 is empty, we can easily insert 30 at slot 2.

Step 3: Insert 50.
- Hash(50) = 50 % 7 = 1.
- In our hash table slot 1 is already occupied, so we search slot 1 + 1² = 2.
- Slot 2 is also found occupied, so we search cell 1 + 2² = 5.
- Cell 5 is not occupied, so we place 50 in slot 5.

3. Double Hashing:
The intervals between probes are computed by another hash function. Double hashing is a technique that reduces clustering in an optimized way. In this technique, the increments for the probing sequence are computed by a second hash function: we use another hash function hash2(x) and look for the slot offset by i*hash2(x) in the i-th rotation.

Difference between the methods:

1. Chaining is simpler to implement; open addressing requires more computation.
2. In chaining, the hash table never fills up; we can always add more elements to a chain. In open addressing, the table may become full.
3. Chaining is less sensitive to the hash function or load factor; open addressing requires extra care to avoid clustering and a high load factor.
4. Chaining is mostly used when it is unknown how many and how frequently keys may be inserted or deleted; open addressing is used when the frequency and number of keys is known.
5. Cache performance of chaining is not good, as keys are stored in linked lists; open addressing provides better cache performance, as everything is stored in the same table.
