Module5 Notes
Hashing is a technique for mapping keys and values into a hash
table by using a hash function. It is done for faster access to elements. The
efficiency of the mapping depends on the efficiency of the hash function used.
Let a hash function H(x) map the value x to index x % 10 in an array. For
example, if the list of values is [11, 12, 13, 14, 15], they will be stored at positions
{1, 2, 3, 4, 5} in the array (hash table) respectively.
4. Multiplication Method
This method involves the following steps:
1. Choose a constant value A such that 0 < A < 1.
2. Multiply the key value k by A.
3. Extract the fractional part of kA.
4. Multiply the result of the above step by the size of the hash table, i.e. M.
5. The resulting hash value is obtained by taking the floor of the result obtained in step 4.
Formula:
h(k) = floor(M (kA mod 1))
Here,
M is the size of the hash table,
k is the key value, and
A is a constant value.
Types of Hashing:
1. Open Hashing or Closed Addressing
i. Separate Chaining
2. Closed Hashing or Open Addressing
i. Linear Probing
ii. Quadratic Probing
iii. Double Hashing
Here, all elements that hash to the same slot index are inserted into
a linked list. We can then search for a key K by linearly traversing the linked
list at that slot. If the key stored in an entry is equal to K, we have
found our entry. If we reach the end of the linked list without finding it,
the entry does not exist.
Hence, in separate chaining, if two different elements have the same hash
value, we store both elements in the same linked list, one after the
other.
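A minimal separate-chaining sketch in C, assuming a table of size 10 with h(x) = x % 10 (the names and fixed size are our own, for illustration):

```c
#include <stdlib.h>

#define TABLE_SIZE 10   /* illustrative table size: h(x) = x % 10 */

/* Each slot holds a singly linked list of all keys hashing to it. */
struct chain_node {
    int key;
    struct chain_node *next;
};

struct chain_node *table[TABLE_SIZE];   /* all slots start empty (NULL) */

void chain_insert(int key)
{
    int idx = key % TABLE_SIZE;
    struct chain_node *n = malloc(sizeof *n);
    n->key = key;
    n->next = table[idx];   /* prepend to the slot's list */
    table[idx] = n;
}

int chain_search(int key)
{
    for (struct chain_node *p = table[key % TABLE_SIZE]; p; p = p->next)
        if (p->key == key)
            return 1;       /* found the entry */
    return 0;               /* reached end of list: entry does not exist */
}
```

Keys 11 and 21 both hash to slot 1, so they end up in the same linked list, one after the other.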
Hash table
• Step 2: Now insert all the keys in the hash table one by one. The
first key is 50. It will map to slot number 0 because 50%5=0. So
insert it into slot number 0.
• Step 3: The next key is 70. It will map to slot number 0 because
70%5=0 but 50 is already at slot number 0 so, search for the next
empty slot and insert it.
• Step 4: The next key is 76. It will map to slot number 1 because
76%5=1 but 70 is already at slot number 1 so, search for the next
empty slot and insert it.
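The linear-probing steps above can be sketched as follows, using the example's table size M = 5 and h(k) = k % 5 (names are our own):

```c
#define M 5        /* table size from the example: h(k) = k % 5 */
#define EMPTY -1

int lp_table[M] = { EMPTY, EMPTY, EMPTY, EMPTY, EMPTY };

/* Linear probing insert: on collision, scan forward one slot at a
   time (wrapping around) until an empty slot is found.
   Returns the slot used, or -1 if the table is full. */
int lp_insert(int key)
{
    int start = key % M;
    for (int i = 0; i < M; i++) {
        int idx = (start + i) % M;
        if (lp_table[idx] == EMPTY) {
            lp_table[idx] = key;
            return idx;
        }
    }
    return -1;   /* table full */
}
```

Inserting 50, 70, 76 in order reproduces the steps above: 50 goes to slot 0, 70 collides at 0 and lands in slot 1, and 76 collides at 1 and lands in slot 2.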
Hash table
• Step 3: Inserting 50
• Hash(50) = 50 % 7 = 1
• In our hash table slot 1 is already occupied. So, we will
search for slot 1 + 1², i.e. 1 + 1 = 2.
• Again slot 2 is found occupied, so we will search for slot
1 + 2², i.e. 1 + 4 = 5.
• Now, slot 5 is not occupied, so we will place 50 in slot 5.
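The quadratic probe sequence used in these steps, (h(k) + i²) % 7, can be sketched as (names are our own):

```c
#define QM 7   /* table size from the example: h(k) = k % 7 */

/* Quadratic probing: the i-th probe examines slot (h(k) + i*i) % QM. */
int qp_probe(int key, int i)
{
    return (key % QM + i * i) % QM;
}
```

For key 50 this yields the probe sequence 1, 2, 5, ... exactly as in the worked example.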
3. Double Hashing
The intervals between probes are computed by another hash function.
Double hashing is a technique that reduces clustering in an optimized way. In
this technique, the increments for the probing sequence are computed
using another hash function. We use a second hash function hash2(x) and look
at the (i * hash2(x))-th slot in the i-th iteration.
• Step 2: Insert 43
• 43 % 7 = 1, location 1 is empty so insert 43 into 1 slot.
• Step 4: Insert 72
• 72 % 7 = 2, but location 2 is already occupied, so
this is a collision.
• So we need to resolve this collision using double hashing.
hnew = [h1(72) + i * h2(72)] % 7
= [2 + 1 * (1 + 72 % 5)] % 7
= 5 % 7
= 5
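The probe computation from this example can be sketched as follows, assuming h1(k) = k % 7 and h2(k) = 1 + k % 5 as implied by the worked calculation (names are our own):

```c
#define DM 7   /* table size from the example */

/* Double hashing: the i-th probe examines (h1(k) + i * h2(k)) % DM,
   where h2 never evaluates to 0 so every probe advances. */
int dh_probe(int key, int i)
{
    int h1 = key % DM;
    int h2 = 1 + key % 5;
    return (h1 + i * h2) % DM;
}
```

For key 72 the first probe lands at slot 2, and after the collision the next probe (i = 1) lands at slot 5, matching the calculation above.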
• Local Depth: It is the same as Global Depth, except that
Local Depth is associated with the buckets and not the
directories. Local depth, in combination with the global depth, is used
to decide the action to be performed when an overflow occurs.
Local Depth is always less than or equal to the Global Depth.
• Inserting 4 and 6:
Both 4 (100) and 6 (110) have 0 in their LSB. Hence, they are hashed
as follows:
Key Observations:
1. A bucket will have more than one pointer pointing to it if its local
depth is less than the global depth.
2. When an overflow occurs in a bucket, all the entries in the
bucket are rehashed with a new local depth.
3. If the Local Depth of the overflowing bucket equals the Global
Depth, the directory is doubled before the bucket is split; if it is
less, only the bucket is split.
4. The size of a bucket cannot be changed after the data insertion
process begins.
Example
Consider inserting the elements 7, 2, 45, 32, and 12 into a priority queue.
The element with the least value has the highest priority. Thus, you should maintain the
lowest element at the front node.
The illustration above shows how priority is maintained during insertion into a queue. But if you
carry out N comparisons for each insertion, the time complexity will become O(N²).
Furthermore, you can implement the min priority queue using a min heap, whereas you can
implement the max priority queue using a max heap.
Common Operations
o Return an element with minimum/maximum priority
o Insert an element with arbitrary priority
o Delete an element with minimum/maximum priority
Example: Consider elements to insert: (5, 10), (2, 20), (8, 30), (1,40), (7, 50)
Insert (5, 10)
o Start with an empty priority queue.
o The first element (5, 10) is inserted as the root node since the priority queue is
initially empty.
Delete min
o The minimum element (1, 40) is deleted from the priority queue. The root node is
replaced with the last node of the heap, and then the last node is deleted.
o After deleting the minimum element, check the child nodes of (7, 50) and move the
smaller-priority node to the root.
o After adjusting, check the heap property again for node (7, 50). Node (7, 50) has 2
child nodes; apply the min-heap property and move the minimum-priority node to the root.
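As noted earlier, a min priority queue can be implemented with a min heap. A minimal array-based sketch (the names and fixed capacity are our own; for brevity it stores plain integers rather than the (element, priority) pairs of the example):

```c
#define HEAP_CAP 32
int heap[HEAP_CAP];
int heap_size = 0;

/* Insert: append at the end, then sift up while smaller than parent. */
void heap_insert(int v)
{
    int i = heap_size++;
    heap[i] = v;
    while (i > 0 && heap[(i - 1) / 2] > heap[i]) {
        int t = heap[i]; heap[i] = heap[(i - 1) / 2]; heap[(i - 1) / 2] = t;
        i = (i - 1) / 2;
    }
}

/* Delete-min: replace the root with the last node, then sift down,
   always swapping with the smaller child. */
int heap_delete_min(void)
{
    int min = heap[0];
    heap[0] = heap[--heap_size];
    int i = 0;
    for (;;) {
        int l = 2 * i + 1, r = 2 * i + 2, s = i;
        if (l < heap_size && heap[l] < heap[s]) s = l;
        if (r < heap_size && heap[r] < heap[s]) s = r;
        if (s == i) break;
        int t = heap[i]; heap[i] = heap[s]; heap[s] = t;
        i = s;
    }
    return min;
}
```

Both operations touch at most one root-to-leaf path, so each runs in O(log n).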
Insertion
Insertion is the same as insertion in a single-ended priority queue.
Combining two binary heaps takes O(n) time (or O(n log n) by repeated insertion), compared to O(log n) for leftist trees.
Leftist Heap
Let n be the total number of elements in the two priority queues that are to be combined. If
heaps are used to represent priority queues, then the combine operation takes O(n) time.
Using a leftist tree, the combine operation as well as the normal priority queue operations
take logarithmic time.
Leftist trees are binary trees that maintain a balance condition during insertion and deletion.
They ensure that the shortest path to an external node through the left subtree is always
at least as long as the one through the right subtree.
The leftist property allows for efficient merging of two leftist trees.
To define a leftist tree, we need to know about the concept of an extended binary tree.
An extended binary tree is a binary tree in which all empty binary subtrees have been
replaced by a square node.
Figure below shows two example binary trees.
Their corresponding extended binary trees are shown below. The square nodes in an
extended binary tree are called external nodes. The original (circular) nodes of the binary
tree are called internal nodes.
Let X be a node in an extended binary tree. Let left_child (x) and right_child (x),
respectively, denote the left and right children of the internal node x.
Define shortest (x) to be the length of a shortest path from x to an external node. It is easy
to see that shortest (x) satisfies the following recurrence
shortest(x) = 0, if x is an external node
shortest(x) = 1 + min{shortest(left_child(x)), shortest(right_child(x))}, otherwise
The number outside each internal node x of above figure is the value of shortest(x)
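The recurrence above can be computed directly by a recursive function, with NULL playing the role of an external node. A small sketch (the struct name is ours, for illustration):

```c
#include <stddef.h>

struct lnode {
    struct lnode *left, *right;
};

/* shortest(x): 0 for an external (NULL) node, otherwise
   1 + min(shortest(left_child(x)), shortest(right_child(x))). */
int shortest(struct lnode *x)
{
    if (x == NULL)
        return 0;                       /* external node */
    int l = shortest(x->left);
    int r = shortest(x->right);
    return 1 + (l < r ? l : r);         /* internal node */
}
```

A node with only a left child has shortest value 1, since the path through its missing right child reaches an external node immediately.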
Example 2
Definition: A leftist tree is a binary tree such that if it is not empty, for every internal node x
𝑠ℎ𝑜𝑟𝑡𝑒𝑠𝑡(𝑙𝑒𝑓𝑡_𝑐ℎ𝑖𝑙𝑑(𝑥)) ≥ 𝑠ℎ𝑜𝑟𝑡𝑒𝑠𝑡(𝑟𝑖𝑔ℎ𝑡_𝑐ℎ𝑖𝑙𝑑(𝑥))
Lemma 1: Let x be the root of a leftist tree that has n (internal) nodes.
a. n ≥ 2^shortest(x) − 1
b. The rightmost root to external node path is the shortest root to external node path. Its length
is shortest (x)
Proof: (a) From the definition of shortest(x) it follows that there are no external nodes on the first
shortest(x) levels of the leftist tree. Hence, the leftist tree has at least
∑_{i=1}^{shortest(x)} 2^(i−1) = 2^shortest(x) − 1 internal nodes.
Leftist trees are represented with nodes that have the fields left-child, right-child, shortest,
and data.
typedef struct {
    int key;
    /*---------------*/
} element;

struct leftist {
    struct leftist *left_child;
    element data;
    struct leftist *right_child;
    int shortest;
};
Definition:
A min-leftist tree (max-leftist tree) is a leftist tree in which the key value in each node is
smaller (larger) than the key values in its children (if any). In other words, a min (max) leftist tree
is a leftist tree that is also a min (max) tree.
Figure below depicts two min-leftist trees. The number inside a node x is the key of the
element in x and the number outside x is shortest (x). The operations insert, delete
min(delete max), and combine can be performed in logarithmic time using a min (max)
leftist tree.
Examples of Leftist trees computing s(x)
Merge
Example 2:
o Step 1: Consider 2 Leftist trees
o Step 2: To apply merge for above 2 leftist trees, Find the minimum root in both leftist
trees. Minimum root is 1 and pass right subtree of min root 1 along with first tree (Apply
recursive call). This process will repeat until any one leftist tree without nodes i.e.
reaches NULL.
o Step 3: To apply merge for above 2 leftist trees, Find the minimum root in both leftist
trees. Minimum root is 3 and pass right subtree of min root 3 along with first tree
o Step 4: To apply merge for above 2 leftist trees, Find the minimum root in both leftist
trees. Minimum root is 7 and pass right subtree of min root 7 along with first tree. Here,
right subtree is NULL
o Step 5: Since, right leftist tree is NULL (base condition is reached), return left tree as
result to previous step (Step 4). Attach the result obtained and apply merge concept.
o To apply the merge concept, find the shortest(x) value for both trees.
For all x, shortest(left(x)) >= shortest(right(x))
o Since shortest(left(x)) >= shortest(right(x)), add it as the right child of the
minimum root. After adding, the result is shown in C
o Pass the result to step 2 (Return of recursive call)
o Step 6: Consider the smallest root in step 2, merge result obtained in step 5 with step
2.
o Merge the left subtree of step 2 (the remaining, left part, since the right
subtree of step 2 has already been processed).
o To apply the merge concept, find the shortest(x) value for both trees.
For all x, shortest(left(x)) >= shortest(right(x))
o Compare the shortest value of the root's left subtree, i.e. 4, with the root of the second leftist
tree. Since the criterion is satisfied, add the second tree (Root 7) as the right child of Root 3.
o Transfer the resultant leftist tree to step 1, ignoring transferred tree in Step1.
o Step 7: Consider the smallest root in step 1, merge result obtained in step 6 with step
1.
o Merge the left subtree of step 1 (the remaining, left part, since the right subtree
of step 1 has already been processed).
o To apply the merge concept, find the shortest(x) value for both trees.
For all x, shortest(left(x)) >= shortest(right(x))
o Compare the shortest value of the root's left subtree, i.e. 1, with the root of the second leftist
tree. Since the criterion is not satisfied, swap the left subtree of root 1 and the second leftist
tree to obtain the final tree.
C Function to merge two leftist trees
Node *Merge(Node *root1, Node *root2)
{
    if (root1 == NULL) /* Base condition */
        return root2;
    if (root2 == NULL)
        return root1;
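The fragment above gives only the base conditions. A complete recursive version, following the merge steps described earlier (keep the smaller root, merge the other tree into its right subtree, then swap children so that shortest(left) >= shortest(right)), might look like this sketch; the make_node helper is our own addition:

```c
#include <stdlib.h>

typedef struct { int key; } element;

typedef struct leftist {
    struct leftist *left_child;
    element data;
    struct leftist *right_child;
    int shortest;
} Node;

/* Allocate a single-node min-leftist tree. */
Node *make_node(int key)
{
    Node *n = malloc(sizeof *n);
    n->data.key = key;
    n->left_child = n->right_child = NULL;
    n->shortest = 1;
    return n;
}

Node *Merge(Node *root1, Node *root2)
{
    if (root1 == NULL) /* Base conditions */
        return root2;
    if (root2 == NULL)
        return root1;
    if (root2->data.key < root1->data.key) {     /* root1 keeps the smaller root */
        Node *t = root1; root1 = root2; root2 = t;
    }
    root1->right_child = Merge(root1->right_child, root2);
    {
        int sl = root1->left_child  ? root1->left_child->shortest  : 0;
        int sr = root1->right_child ? root1->right_child->shortest : 0;
        if (sl < sr) {                           /* restore the leftist property */
            Node *t = root1->left_child;
            root1->left_child = root1->right_child;
            root1->right_child = t;
        }
    }
    root1->shortest = (root1->right_child ? root1->right_child->shortest : 0) + 1;
    return root1;
}
```

The recursion walks only the rightmost paths of the two trees, which by Lemma 1 have logarithmic length, so Merge runs in O(log n).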
Insert Operation
Consider the leftist tree
Step 1: Insert node 6; apply the Merge operation.
Step 2: Find the minimum root and pass its right subtree along with the 2nd tree for further
merging.
Step 3: The smallest root in both leftist trees is 6; consider the right subtree of 6 along with the
remaining leftist tree.
Step 4: Since the right leftist tree (root2) is empty, return the left subtree as the result to the previous
step. The remaining tree and the result of step 3 are
Find the S value to merge these two leftist trees. The S value of the left child of the smaller root
(Root 6) is not greater than that of the root of the right leftist tree, so swap; the result is
Step 5: Pass the resultant tree to Step 1 to merge. Find the shortest value for both leftist
trees. The smaller root is 5; check the shortest value of the left child of Root 5 against the result
obtained from step 4, i.e. the shortest value of Root 6. Since S(8) >= S(6), add Root 6 as the right child of 5.
Time Complexity :
o For insertion: creating the single-node tree is O(1) and Merge is O(log n), so insertion is O(log n)
Initialize Heap
6 2 9 8 3 4 11 18 7 24 1 5
Create an empty leftist heap.
Read the elements one by one and apply merge
Complexity will be
o For each insertion O(log n) and Total N insertions = O (n log n)
Delete Operation
Delete the root element. We get 2 leftist trees; apply merge to these leftist trees.
Complexity will be
o For Deletion O(1) and Merge O(log n) = O(log n)
Consider an example
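Insert and delete-min can both be written as thin wrappers around merge, matching the complexities above. A self-contained sketch (it repeats a compact merge so the example compiles on its own; all names are ours):

```c
#include <stdlib.h>

typedef struct { int key; } element;
typedef struct leftist {
    struct leftist *left_child, *right_child;
    element data;
    int shortest;
} Node;

static Node *merge(Node *a, Node *b)
{
    if (a == NULL) return b;
    if (b == NULL) return a;
    if (b->data.key < a->data.key) { Node *t = a; a = b; b = t; }
    a->right_child = merge(a->right_child, b);
    int sl = a->left_child  ? a->left_child->shortest  : 0;
    int sr = a->right_child ? a->right_child->shortest : 0;
    if (sl < sr) {           /* restore the leftist property */
        Node *t = a->left_child;
        a->left_child = a->right_child;
        a->right_child = t;
    }
    a->shortest = (a->right_child ? a->right_child->shortest : 0) + 1;
    return a;
}

/* Insert: merge the heap with a single-node tree -- O(log n). */
Node *insert(Node *root, int key)
{
    Node *n = calloc(1, sizeof *n);
    n->data.key = key;
    n->shortest = 1;
    return merge(root, n);
}

/* Delete-min: remove the root (O(1)), then merge its two
   subtrees (O(log n)). The minimum key is written to *out. */
Node *delete_min(Node *root, int *out)
{
    *out = root->data.key;
    Node *res = merge(root->left_child, root->right_child);
    free(root);
    return res;
}
```

Initializing a heap from the sequence 6 2 9 8 3 4 11 18 7 24 1 5 is then n calls to insert, O(n log n) in total, as stated above.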