Extendible hashing is a type of hash system which treats a hash as a bit string, and uses a trie for bucket lookup. Because of the hierarchical nature of the system, re-hashing is an incremental operation (done one bucket at a time, as needed).
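The "bit string" lookup can be made concrete with a small sketch: take the low-order global-depth bits of the hash value as the directory index (using the low-order bits is an illustrative choice here; high-order bits work just as well, as long as the scheme is consistent).

```python
def directory_index(h, global_depth):
    # Use the low-order global_depth bits of the hash as the
    # path into the directory (the flattened trie).
    return h & ((1 << global_depth) - 1)

# Example: with global depth 3, only the low three bits matter.
directory_index(0b101101, 3)  # -> 0b101 = 5
```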
Extendible Hashing Pseudocode
// Construct an empty extendible hash table.
Create() {
    Create an empty bucket and set its local depth to 0.
    Create a directory with two entries (0 and 1), both pointing to this bucket.
    Set the global depth to 1.
}

// Search the hash table for a key value and return the corresponding data (or NULL).
Search(key) {
    Compute the hash value h(key) and the directory index, based on the global depth.
    Search the correct bucket for the key value.
    If the key value is found, return the corresponding data; otherwise return NULL.
}

// Insert an element x into the hash table.
Insert(x) {
    Compute the hash value h(x.key) and the directory index, based on the global depth.
    If the bucket is not full, insert x.
    Else:
        If the global depth equals the local depth of the bucket, call DoubleDirectory().
        Call RecursiveSplitInsert(x).
}

// Double the size of the directory.
DoubleDirectory() {
    Allocate space for a new directory with twice the entries of the original.
    Redistribute the buckets among the entries in the new directory.
    Deallocate the old directory.
    Increment the global depth.
}

// Split a bucket and insert x into the original bucket or its split image.
// If there is no room in either of these buckets, recursively split again.
RecursiveSplitInsert(x) {
    Compute the hash value h(x.key) and the directory index, based on the global depth.
    Get the bucket corresponding to h(x.key).
    Allocate a new bucket to be this bucket's split image.
    Update directory entries to point to either of these buckets, based on (local depth + 1) bits.
    Distribute the entries in the original bucket between the two buckets.
    Set the local depth of both buckets to 1 + the local depth of the original bucket.
    Try again to insert x by calling Insert(x).
}
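The pseudocode above can be turned into a short runnable sketch. This is one possible realization, not a definitive implementation: the class and method names (`Bucket`, `ExtendibleHashTable`, `BUCKET_CAPACITY`) are illustrative choices, the directory index is taken from the low-order bits of Python's built-in `hash`, and the tiny bucket capacity exists only to make splits easy to trigger.

```python
BUCKET_CAPACITY = 2  # deliberately small so splits are easy to observe


class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.items = {}  # key -> data

    def is_full(self):
        return len(self.items) >= BUCKET_CAPACITY


class ExtendibleHashTable:
    def __init__(self):
        # Create(): one empty bucket of local depth 0, a two-entry
        # directory pointing at it, and global depth 1.
        bucket = Bucket(0)
        self.directory = [bucket, bucket]
        self.global_depth = 1

    def _index(self, key):
        # Directory index = low-order global_depth bits of the hash.
        return hash(key) & ((1 << self.global_depth) - 1)

    def search(self, key):
        # Search(key): look in the one bucket the directory selects.
        bucket = self.directory[self._index(key)]
        return bucket.items.get(key)  # None plays the role of NULL

    def insert(self, key, data):
        # Insert(x): place in the target bucket, splitting if full.
        bucket = self.directory[self._index(key)]
        if key in bucket.items or not bucket.is_full():
            bucket.items[key] = data
            return
        if bucket.local_depth == self.global_depth:
            self._double_directory()
        self._split_insert(key, data)

    def _double_directory(self):
        # DoubleDirectory(): each old entry fans out to two entries
        # that still point at the same bucket; only the depth changes.
        self.directory = self.directory + self.directory
        self.global_depth += 1

    def _split_insert(self, key, data):
        # RecursiveSplitInsert(x): split the full bucket, repoint the
        # directory, redistribute entries, then retry the insert.
        old = self.directory[self._index(key)]
        new_depth = old.local_depth + 1
        image = Bucket(new_depth)
        old.local_depth = new_depth
        # Entries whose (new_depth - 1)-th bit is set move to the image.
        high_bit = 1 << (new_depth - 1)
        for i in range(len(self.directory)):
            if self.directory[i] is old and (i & high_bit):
                self.directory[i] = image
        # Distribute the original bucket's entries between the two buckets.
        items, old.items = old.items, {}
        for k, v in items.items():
            self.directory[self._index(k)].items[k] = v
        # Try again; this may split once more if a bucket is still full.
        self.insert(key, data)
```

A quick usage run: inserting a handful of integer keys into a fresh table forces a directory doubling (capacity 2, global depth starting at 1), after which every inserted key is still found via `search` and absent keys return `None`.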