Tree Data Structure

Lecture
on
Tree
Why Trees?
Tree Data Structure
 A specialized way to organize and store data in the computer so it can be used more effectively.
 A tree data structure has a root, branches, and leaves connected with one another.
 A non-linear data structure.
Tree
Parent Node: Predecessor of a node
Child Node: Successor of a node
Root Node: Topmost node of a tree
Leaf Node or External Node: A node that does not have any child nodes
Ancestor of a Node: Any predecessor node on the path from the root to that node
Descendant: Any successor node on the path from that node down to a leaf
Sibling: Children of the same parent node are called siblings
Level of a node: The count of edges on the path from the root node to that node; the root node has level 0
Internal node: A node with at least one child
Neighbour of a Node: The parent or child nodes of that node
Number of edges: An edge is a connection between two nodes; a tree with N nodes has (N-1) edges

Depth of a node: The length of the path from the root to that node. Each edge adds 1 unit of length to the path.

Height of a node: The length of the longest path from that node down to a leaf node.

Height of the Tree: The length of the longest path from the root node to a leaf node.

Degree of a Node: The total count of subtrees attached to that node is called the degree of the node.
Types of Tree Data Structure
Binary Trees

Binary Search Tree (BST)

AVL Trees
Types of Binary Tree
Binary Trees: Each node has up to two children, the left child node and the right child node. This structure is the foundation for more complex tree types like Binary Search Trees and AVL Trees.

A balanced Binary Tree has, at each node in the tree, a difference of at most 1 between its left and right subtree heights.

A complete Binary Tree has all levels full of nodes, except the last level, which can either be full or filled from left to right. These properties mean a complete Binary Tree is also balanced.

A full Binary Tree is a tree in which each node has either 0 or 2 child nodes.

A perfect Binary Tree has all leaf nodes on the same level, which means that all levels are full of nodes, and all internal nodes have two child nodes. These properties mean a perfect Binary Tree is also full, balanced, and complete.
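These categories can be checked mechanically. Below is a minimal C++ sketch (the Node layout and the names height, isBalanced, and isFull are our own, not from the slides); height is measured in edges, so an empty tree has height -1:

```cpp
#include <algorithm>
#include <cstdlib>

struct Node {
    int data;
    Node* left = nullptr;
    Node* right = nullptr;
    Node(int v) : data(v) {}
};

// Height in edges; an empty tree has height -1, a single node has height 0.
int height(Node* n) {
    if (n == nullptr) return -1;
    return 1 + std::max(height(n->left), height(n->right));
}

// Balanced: left and right subtree heights differ by at most 1 at every node.
bool isBalanced(Node* n) {
    if (n == nullptr) return true;
    return std::abs(height(n->left) - height(n->right)) <= 1
        && isBalanced(n->left) && isBalanced(n->right);
}

// Full: every node has either 0 or 2 children.
bool isFull(Node* n) {
    if (n == nullptr) return true;
    if ((n->left == nullptr) != (n->right == nullptr)) return false;
    return isFull(n->left) && isFull(n->right);
}
```

Recomputing height at every node makes isBalanced quadratic; that keeps the sketch close to the definitions, and a single-pass version is a straightforward refinement.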
Types of Binary Tree
Binary Tree Properties
 The maximum number of nodes at level 'l' of a binary tree is 2^l.
 The maximum number of nodes in a binary tree of height 'h' is 2^h - 1.
 In a binary tree with N nodes, the minimum possible height (minimum number of levels) is ⌈log2(N+1)⌉.
 A binary tree with L leaves has at least ⌊log2 L⌋ + 1 levels.
 In a binary tree where every node has 0 or 2 children, the number of leaf nodes is always one more than the number of nodes with two children.
 In a non-empty binary tree, if n is the total number of nodes and e is the total number of edges, then e = n - 1.
 In a perfect binary tree, the number of leaf nodes is the number of internal nodes plus 1.
Binary Tree Properties
 In a binary tree with N nodes, the minimum possible height (minimum number of levels) is ⌈log2(N+1)⌉.
Binary Tree Properties
 A binary tree with L leaves has at least ⌊log2 L⌋ + 1 levels.

Number of leaves: L = 4
Tree
class Node
{
    Node *left_child;
    int data;
    Node *right_child;
};
Tree Traversal
Tree Traversal
• The most common operation performed on a tree structure is traversal.
• This is a procedure by which each node in the tree is processed exactly once in a systematic manner.
• There are three ways of traversing a binary tree:
1. Preorder Traversal
2. Inorder Traversal
3. Postorder Traversal
Preorder Traversal
• Preorder traversal of a binary tree is defined as follows:
1. Process the root node
2. Traverse the left subtree in preorder
3. Traverse the right subtree in preorder
• If a particular subtree is empty (i.e., the node has no left or right descendant), the traversal is performed by doing nothing.
• In other words, a null subtree is considered to be fully traversed when it is encountered.
Preorder Traversal
void printPreorder(Node* node)
{
    if (node == NULL)
        return;
    cout << node->data << " ";
    printPreorder(node->left);
    printPreorder(node->right);
}
Inorder Traversal
• Inorder traversal of a binary tree is defined as follows:
1. Traverse the left subtree in inorder
2. Process the root node
3. Traverse the right subtree in inorder
Inorder Traversal
void printInorder(Node* node)
{
    if (node == NULL)
        return;
    printInorder(node->left);
    cout << node->data << " ";
    printInorder(node->right);
}
Postorder Traversal
• Postorder traversal of a binary tree is defined as follows:
1. Traverse the left subtree in postorder
2. Traverse the right subtree in postorder
3. Process the root node
Postorder Traversal
void printPostorder(Node* node)
{
    if (node == NULL)
        return;
    printPostorder(node->left);
    printPostorder(node->right);
    cout << node->data << " ";
}
Quiz
Postorder Traversal
Code
#include <iostream>
using namespace std;

// Definition for a binary tree node
class Node {
public:
    int data;
    Node* left;
    Node* right;

    Node(int value) {
        data = value;
        left = right = nullptr;
    }
};
Code
// Function to print in-order traversal (Left, Root, Right)
void inorder(Node* root) {
    if (root == nullptr) {
        return;
    }
    // Traverse the left subtree
    inorder(root->left);
    // Visit the root node
    cout << root->data << " ";
    // Traverse the right subtree
    inorder(root->right);
}

// Function to print pre-order traversal (Root, Left, Right)
void preorder(Node* root) {
    if (root == nullptr) {
        return;
    }
    // Visit the root node
    cout << root->data << " ";
    // Traverse the left subtree
    preorder(root->left);
    // Traverse the right subtree
    preorder(root->right);
}
Code
// Function to print post-order traversal (Left, Right, Root)
void postorder(Node* root) {
    if (root == nullptr) {
        return;
    }
    // Traverse the left subtree
    postorder(root->left);
    // Traverse the right subtree
    postorder(root->right);
    // Visit the root node
    cout << root->data << " ";
}
Code
int main() {
    // Manually creating the binary tree with 7 nodes
    Node* root = new Node(1);
    root->left = new Node(2);
    root->right = new Node(3);
    root->left->left = new Node(4);
    root->left->right = new Node(5);
    root->right->left = new Node(6);
    root->right->right = new Node(7);

    // In-order traversal
    cout << "In-order traversal: ";
    inorder(root);
    cout << endl;

    // Pre-order traversal
    cout << "Pre-order traversal: ";
    preorder(root);
    cout << endl;

    // Post-order traversal
    cout << "Post-order traversal: ";
    postorder(root);
    cout << endl;

    return 0;
}
Constructing a Tree from Traversal Orders

Construct a Binary Tree from:
 Inorder and Preorder Traversal
 Inorder and Postorder Traversal
Inorder and Preorder Traversal Tree

Inorder : 40, 20, 50, 10, 60, 30
Preorder: 10, 20, 40, 50, 30, 60
Inorder and Preorder Traversal Tree

Consider the following traversals of the tree.

Inorder = {2, 5, 6, 10, 12, 14, 15}
Preorder = {10, 5, 2, 6, 14, 12, 15}

What does the binary tree look like?
Pseudocode: Inorder and Preorder Traversal Tree

function buildTree(preorder[], inorder[], start, end, preIndex):
    if start > end:
        return NULL
    # Pick current root from Preorder
    rootValue = preorder[preIndex]
    preIndex = preIndex + 1
    node = createNode(rootValue)
    # If node has no children, return it
    if start == end:
        return node
    # Find root's position in Inorder traversal
    inIndex = search(inorder, start, end, rootValue)
    # Recursively build the left and right subtrees
    node.left = buildTree(preorder, inorder, start, inIndex - 1, preIndex)
    node.right = buildTree(preorder, inorder, inIndex + 1, end, preIndex)
    return node

function search(inorder[], start, end, value):
    for i = start to end:
        if inorder[i] == value:
            return i
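The pseudocode above can be turned into C++. One detail the pseudocode leaves implicit: preIndex must be shared across the recursive calls, so in this sketch it is passed by reference (the Node layout is our own, matching earlier slides):

```cpp
#include <vector>

struct Node {
    int data;
    Node *left = nullptr, *right = nullptr;
    Node(int v) : data(v) {}
};

// Linear scan for value inside inorder[start..end]; returns its index.
int search(const std::vector<int>& inorder, int start, int end, int value) {
    for (int i = start; i <= end; ++i)
        if (inorder[i] == value) return i;
    return -1;
}

// Build the subtree covering inorder[start..end]; preIndex advances
// through the preorder sequence as roots are consumed.
Node* buildTree(const std::vector<int>& preorder, const std::vector<int>& inorder,
                int start, int end, int& preIndex) {
    if (start > end) return nullptr;
    Node* node = new Node(preorder[preIndex++]);  // current root from preorder
    if (start == end) return node;                // leaf: no children
    int inIndex = search(inorder, start, end, node->data);
    node->left  = buildTree(preorder, inorder, start, inIndex - 1, preIndex);
    node->right = buildTree(preorder, inorder, inIndex + 1, end, preIndex);
    return node;
}
```

Running this on the slides' example (Inorder 40, 20, 50, 10, 60, 30; Preorder 10, 20, 40, 50, 30, 60) reproduces the tree rooted at 10 with 20 and 30 as its children.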
Inorder and Postorder Traversal Tree
Inorder  : 40, 20, 50, 10, 60, 30
Postorder: 40, 50, 20, 60, 30, 10
Inorder and Postorder Traversal Tree

function buildTree(postorder[], inorder[], start, end, postIndex):
    if start > end:
        return NULL
    # Pick current root from Postorder
    rootValue = postorder[postIndex]
    postIndex = postIndex - 1
    node = createNode(rootValue)
    # If node has no children, return it
    if start == end:
        return node
    # Find root's position in Inorder traversal
    inIndex = search(inorder, start, end, rootValue)
    # Recursively build the right and left subtrees
    node.right = buildTree(postorder, inorder, inIndex + 1, end, postIndex)
    node.left = buildTree(postorder, inorder, start, inIndex - 1, postIndex)
    return node

function search(inorder[], start, end, value):
    for i = start to end:
        if inorder[i] == value:
            return i
Complexity Analysis
Time Complexity:
In-order: O(n)
Pre-order: O(n)
Post-order: O(n)

Space Complexity (recursion stack; O(h) for a tree of height h, O(n) in the worst case):
In-order: O(n)
Pre-order: O(n)
Post-order: O(n)
Data Structure:
Stack (implicit, via recursion)
Binary Search Trees

Binary Search Trees
Binary Search Trees
A Binary Search Tree (BST) is a type of Binary Tree data structure, where the following properties must be true for any node "X" in the tree:

 The node X's left child and all of its descendants (children, children's children, and so on) have lower values than X's value.

 The right child, and all of its descendants, have higher values than X's value.

 Left and right subtrees must also be Binary Search Trees, without any duplicate values.

These properties make it faster to search, add and delete values than in a regular binary tree.
Binary Search Trees
Advantages of Binary Search Tree
 Efficient Searching: Binary Search Trees provide fast
search operations due to their organized structure.
 Sorted Data: The elements in a Binary Search Tree are
automatically sorted, making it easier to retrieve data in a
specific order.
 Easy Insertion and Deletion: Adding or removing elements
from a Binary Search Tree is relatively simple and efficient.
 Memory Efficiency: Binary Search Trees use memory
efficiently by dynamically allocating memory for new nodes
only when necessary.
Binary Search Trees
Search for a Value in a BST

How it works:
 Start at the root node.
 If this is the value we are looking for, return.
 If the value we are looking for is higher, continue searching in the right subtree.
 If the value we are looking for is lower, continue searching in the left subtree.
 If the subtree we want to search does not exist, return None, NULL, or something similar (depending on the programming language) to indicate that the value is not inside the BST.
Binary Search Tree Algorithm

Algorithm: Search for a Value in a BST

• START
 Search(root, item)
 if (item = root → data) or (root = NULL)
  return root
 else if (item < root → data)
  return Search(root → left, item)
 else
  return Search(root → right, item)
 END if
• END
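The search algorithm above translates directly into C++. A minimal sketch, reusing the same Node layout as the earlier slides:

```cpp
struct Node {
    int data;
    Node *left = nullptr, *right = nullptr;
    Node(int v) : data(v) {}
};

// Returns the node holding item, or nullptr if item is not in the BST.
Node* search(Node* root, int item) {
    if (root == nullptr || root->data == item)
        return root;
    if (item < root->data)
        return search(root->left, item);  // smaller values live on the left
    return search(root->right, item);     // larger values live on the right
}
```

Each step discards one subtree, so the running time is proportional to the tree's height: O(log n) when the tree is balanced, O(n) in the worst case.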
Binary Search Trees
Search for a Value (8) in a BST
Binary Search Trees
Insert a Node in a BST

How it works:
 Start at the root node.
 Compare each node:
  Is the value lower? Go left.
  Is the value higher? Go right.
 Continue to compare nodes with the new value until there is no right or left to compare with. That is where the new node is inserted.
Binary Search Tree Algorithm
Algorithm: Insert a Value in a BST

1. Create a new BST node and assign values to it.
2. insert(node, key)
 i) If root == NULL,
  return the new node to the calling function.
 ii) If root->data < key,
  call the insert function with root->right and assign the return value to root->right.
  root->right = insert(root->right, key)
 iii) If root->data > key,
  call the insert function with root->left and assign the return value to root->left.
  root->left = insert(root->left, key)
3. Finally, return the original root pointer to the calling function.
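A minimal C++ version of the insertion algorithm above (a sketch; duplicate keys are ignored, matching the no-duplicates property stated earlier):

```cpp
struct Node {
    int data;
    Node *left = nullptr, *right = nullptr;
    Node(int v) : data(v) {}
};

// Inserts key and returns the (possibly new) root of the subtree.
Node* insert(Node* root, int key) {
    if (root == nullptr)
        return new Node(key);               // empty spot found: place the key here
    if (key < root->data)
        root->left = insert(root->left, key);
    else if (key > root->data)
        root->right = insert(root->right, key);
    // key == root->data: duplicate, do nothing
    return root;
}
```

Usage follows the return-the-root convention: `root = insert(root, 8);` works even when the tree starts out empty.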
Binary Search Trees
Insert a Node in a BST
Binary Search Trees
Delete a Node in a BST

How it works (we will consider these three cases):

 If the node is a leaf node, remove it by removing the link to it.

 If the node only has one child node, connect the parent node of the node you want to remove to that child node.

 If the node has both right and left child nodes: find the node's in-order successor, swap values with that node, then delete it.
Binary Search Trees
Delete a Node in a BST

Case 1 Delete 8
Binary Search Trees
Delete a Node in a BST

Case 2 Delete 19
 If the node only has one child node, connect the parent node of the node you
want to remove to that child node.
Binary Search Trees
Delete a Node in a BST

Case 3 Delete 13

 If the node has both right and left child nodes: Find the node's in-order successor, change
values with that node, then delete it.
Binary Search Trees
Delete a Node in a BST

Case 3 Delete 13
(13 is replaced by its in-order successor, 14; the node that held 14 is then freed.)
 If the node has both right and left child nodes: find the node's in-order successor, swap values with that node, then delete it.
Binary Search Trees
Delete a Node in a BST

Delete 40 ??
Binary Search Tree Algorithm
Algorithm: Delete a Value in a BST

deleteNode(root, value)
 i) If root == NULL,
  return root.
 ii) If value < root->data,
  root->left = deleteNode(root->left, value)
 iii) If value > root->data,
  root->right = deleteNode(root->right, value)
 iv) If value == root->data, the node to be deleted is found:
  a) If the node has no child or only one child:
   1. If root->left is NULL,
    temp = root->right
    free(root)
    return temp
   2. If root->right is NULL,
    temp = root->left
    free(root)
    return temp
  b) If the node has two children:
   1. Find the in-order successor (smallest node in the right subtree).
   2. Copy the successor's value into the node:
    root->data = successor->data
   3. Call deleteNode on root->right to delete the in-order successor:
    root->right = deleteNode(root->right, successor->data)

Finally, return the original root pointer to the calling function.
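A C++ sketch of the deletion algorithm (minNode is a helper name of our own for locating the in-order successor, the leftmost node of the right subtree):

```cpp
struct Node {
    int data;
    Node *left = nullptr, *right = nullptr;
    Node(int v) : data(v) {}
};

// Leftmost node of a subtree: the smallest key in it.
Node* minNode(Node* n) {
    while (n->left != nullptr) n = n->left;
    return n;
}

// Deletes value from the subtree and returns its (possibly new) root.
Node* deleteNode(Node* root, int value) {
    if (root == nullptr) return nullptr;
    if (value < root->data)
        root->left = deleteNode(root->left, value);
    else if (value > root->data)
        root->right = deleteNode(root->right, value);
    else {
        if (root->left == nullptr) {          // no child or right child only
            Node* temp = root->right;
            delete root;
            return temp;
        }
        if (root->right == nullptr) {         // left child only
            Node* temp = root->left;
            delete root;
            return temp;
        }
        Node* succ = minNode(root->right);    // two children: in-order successor
        root->data = succ->data;              // copy its value up
        root->right = deleteNode(root->right, succ->data);  // remove the duplicate
    }
    return root;
}
```

Note that the leaf case needs no special handling: a leaf has root->left == NULL, so the one-child branch returns its (null) right pointer, which unlinks it.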
BST from the preorder traversal
Construct a BST from the preorder traversal

preorder[] = {6, 3, 1, 4, 8, 7, 9}

What does the BST look like?
BST from the preorder traversal
Construct a BST from the preorder traversal
preorder[] = {6, 3, 1, 4, 8, 7, 9}

Algorithm

 First, create a base case that checks that pre has not exceeded the length of the preorder array (pre is the variable pointing to the 0th index of the preorder array) and that l is not greater than r (l and r are the variables used to divide the preorder array into the left subtree and right subtree).
 Create a new node with the value preorder[pre] and increment the value of pre.
 After creating a node, run a loop from l to r to find the first element greater than the node's value. Store its index if present.
 Now divide the array into left and right subtrees and recurse for both.
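The steps above can be sketched in C++; the reference parameter pre plays the role of the shared pre variable in the description (the function name bstFromPreorder is our own):

```cpp
#include <vector>

struct Node {
    int data;
    Node *left = nullptr, *right = nullptr;
    Node(int v) : data(v) {}
};

// Build a BST from preorder[l..r]; pre advances through the array.
Node* bstFromPreorder(const std::vector<int>& preorder, int& pre, int l, int r) {
    // Base case: pre must stay inside the array and l must not pass r.
    if (pre >= (int)preorder.size() || l > r) return nullptr;
    Node* node = new Node(preorder[pre]);  // preorder[l] is this subtree's root
    pre++;
    if (l == r) return node;
    // First element greater than the node's value starts the right subtree;
    // everything before it belongs to the left subtree.
    int split = r + 1;
    for (int i = l + 1; i <= r; ++i)
        if (preorder[i] > node->data) { split = i; break; }
    node->left  = bstFromPreorder(preorder, pre, l + 1, split - 1);
    node->right = bstFromPreorder(preorder, pre, split, r);
    return node;
}
```

For preorder[] = {6, 3, 1, 4, 8, 7, 9} this yields 6 at the root, 3 (with children 1 and 4) on the left, and 8 (with children 7 and 9) on the right. The linear scan makes this O(n²) in the worst case; a min/max-bound variant achieves O(n).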
BST from the preorder traversal
Construct a BST from the preorder
traversal
preorder[] = {6, 3, 1, 4, 8, 7, 9}

Final BST
Application of Binary Search Tree
Application of Binary Search Tree
Some of the applications of binary search trees listed below:
 Dictionary: BSTs are commonly used to implement dictionaries, allowing efficient word lookup and spell-checking.

 Database Indexing: BSTs enable fast data retrieval by serving as index structures in databases.

 File System: BSTs can be utilized in file systems to organize and search files efficiently.

 Auto-complete: BSTs can power auto-complete features in text editors or search engines, suggesting relevant
options as users type.

 Network Routing: BSTs can assist in network routing algorithms, aiding in efficient packet forwarding.

 Finding Successor/Predecessor: BSTs provide quick access to the successor or predecessor of a given value, useful
in certain algorithms and data processing tasks.
Worst Case of Binary Search Tree

Is this a BST? Yes: a completely skewed tree, where every node has only one child, still satisfies the BST properties.

Time complexity? Search degrades to O(n), since the height of such a tree equals the number of nodes.
AVL Tree
AVL Tree
• We can guarantee O(log2 n) performance for each search tree operation by ensuring that the search tree height is always O(log2 n).
• Trees with a worst-case height of O(log2 n) are called balanced trees.
• One of the most popular balanced trees is the AVL tree, which was introduced by Adelson-Velskii and Landis.
AVL Tree
Properties of an AVL Tree
An AVL tree is a height-balanced binary search tree.
In an AVL tree, the heights of the two child subtrees of any node can differ by at most 1.
Every node in an AVL tree must have a balancing factor of -1, 0 or 1.
If the balancing factor is anything else, we rebalance the AVL tree until every balancing factor is -1, 0 or 1.
AVL Tree
Balancing Factor:
• The difference between the height of the left subtree and the height of the right subtree is known as the balancing factor.

Balancing Factor(X) = height(left subtree(X)) - height(right subtree(X))
AVL Tree
Operations on an AVL Tree

There are basically three kinds of operations performed on an AVL tree:
1. Insertion of a New Node

2. Deletion of a Node
• Deleting a Node from the Right Subtree
• Deleting a Node from the Left Subtree

3. Searching for a Node in an AVL Tree

To perform these operations, we have to do the following four kinds of rotations:
• LL Rotation (Right Rotation)
• RR Rotation (Left Rotation)
• LR Rotation (Left-Right Rotation)
• RL Rotation (Right-Left Rotation)
AVL Tree
RR Rotation (Left Rotation)
Example: Insert 5, 6, and 9
AVL Tree
LL Rotation (Right Rotation)

Example: Insert 5, 4, and 3


AVL Tree
LR Rotation (Left-Right Rotation)
In an LR rotation, we first perform a left rotation on the left child, then a right rotation on the unbalanced node.
Example: Insert 6, 4, and 5
AVL Tree
RL Rotation (Right-Left Rotation)
In an RL rotation, we first perform a right rotation on the right child, then a left rotation on the unbalanced node.
Example: Insert 4, 6, and 5
AVL Tree
Example: AVL Tree by inserting the following values.

10 15 20 9 5 16 17 8 6
Algorithm: Insert a Value in an AVL Tree
1. insertNode(root, value)
o i) If root == NULL,
create a new node with the given value and return it.
o ii) If value < root->data,
call the insertNode function with root->left and assign the return value to root->left.
root->left = insertNode(root->left, value)
o iii) If value > root->data,
call the insertNode function with root->right and assign the return value to root->right.
root->right = insertNode(root->right, value)
2. Update the Height of the Root Node
o Set root->height to 1 + max(height(root->left), height(root->right))
Algorithm: Insert a Value in an AVL Tree
3. Check the Balance Factor of the Root Node
o balance = height(root->left) - height(root->right)
o Perform the necessary rotation based on the balance factor:
 i) Left Left Case:
  If balance > 1 and value < root->left->data,
  perform a right rotation on the root.
  return rightRotate(root)
 ii) Right Right Case:
  If balance < -1 and value > root->right->data,
  perform a left rotation on the root.
  return leftRotate(root)
 iii) Left Right Case:
  If balance > 1 and value > root->left->data,
  perform a left rotation on root->left, followed by a right rotation on the root.
  root->left = leftRotate(root->left)
  return rightRotate(root)
 iv) Right Left Case:
  If balance < -1 and value < root->right->data,
  perform a right rotation on root->right, followed by a left rotation on the root.
  root->right = rightRotate(root->right)
  return leftRotate(root)
4. Finally, Return the Root Pointer to the Calling Function
Algorithm: Insert a Value in an AVL Tree
Helper Functions:
 height(node): Returns the height of the node.
 rightRotate(y):
  o x = y->left
  o T2 = x->right
  o Perform rotation:
   x->right = y
   y->left = T2
  o Update heights of x and y
  o Return x (new root)
 leftRotate(x):
  o y = x->right
  o T2 = y->left
  o Perform rotation:
   y->left = x
   x->right = T2
  o Update heights of x and y
  o Return y (new root)
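The two rotation helpers can be written in C++ as follows (a sketch; heights are stored in the nodes and measured in edges, with an empty subtree at -1):

```cpp
#include <algorithm>

struct Node {
    int data, height;
    Node *left = nullptr, *right = nullptr;
    Node(int v) : data(v), height(0) {}
};

int height(Node* n) { return n ? n->height : -1; }

void updateHeight(Node* n) {
    n->height = 1 + std::max(height(n->left), height(n->right));
}

// Right rotation: y's left child x becomes the new subtree root.
Node* rightRotate(Node* y) {
    Node* x = y->left;
    Node* T2 = x->right;
    x->right = y;
    y->left = T2;
    updateHeight(y);   // y is now lower in the tree, so update it first
    updateHeight(x);
    return x;          // new root of this subtree
}

// Left rotation: x's right child y becomes the new subtree root.
Node* leftRotate(Node* x) {
    Node* y = x->right;
    Node* T2 = y->left;
    y->left = x;
    x->right = T2;
    updateHeight(x);
    updateHeight(y);
    return y;          // new root of this subtree
}
```

The update order matters: the demoted node must be re-measured before the promoted one, because the promoted node's height depends on it.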
Deletion in AVL Tree

Deletion in AVL Tree

Delete 12

Balanced
Deletion in AVL Tree

Delete 14
AVL Tree
Construct an AVL tree for the following data:

5, 12, 10, 9, 8, 14, 23, 29, 28, 17
AVL Tree
Construct an AVL tree for the following data:

21, 26, 30, 9, 4, 14, 28, 18, 15, 10, 2, 3, 7
AVL Tree
Construct an AVL tree for the following data:

2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24
B Tree

B Tree
• B trees are extended search trees that are specialized in m-way searching, since the order of a B tree is 'm'.
• The order of a tree is defined as the maximum number of children a node can accommodate.
• Therefore, the height of a B tree is relatively smaller than the height of an AVL tree or a Red-Black (RB) tree.
• B trees are a generalized form of the Binary Search Tree, as a node can hold more than one key and have more than two children.
B Tree
Properties of B trees
• Every node in a B tree holds a maximum of m children and (m-1) keys, since the order of the tree is m.

• Every node in a B tree, except the root and leaves, holds at least ⌈m/2⌉ children.

• The root node must have no fewer than two children (unless it is a leaf).

• All the paths in a B tree must end at the same level, i.e. the leaf nodes must be at the same level.

• A B tree always maintains sorted data.
B Tree

B trees are also widely used in disk access, minimizing the disk access time, since the height of a B tree is low.
B Tree

Example: a B tree where the order m = 4 (figure omitted).
Algorithm: B tree Insertion
Insertions are done at the leaf node level. The following algorithm needs to be followed in order to insert an item into a B tree.

1. Traverse the B tree to find the appropriate leaf node at which the new key can be inserted.
2. If the leaf node contains fewer than m-1 keys, insert the element in increasing order.
3. Else, if the leaf node contains m-1 keys, follow these steps:
 • Insert the new element in the increasing order of elements.
 • Split the node into two nodes at the median.
 • Push the median element up to its parent node.
 • If the parent node also contains m-1 keys, split it too by following the same steps.
B Tree
Worked insertion example (figures omitted): with order m = 4, keys are inserted into a leaf; when a node overflows, it is split at its median (always taking the ceiling position for the median) and the median key is pushed up to the parent. The slides add 13, then 7 and 10, then 14 and 8, splitting nodes around keys such as 9, 11, 16, and 22 as they overflow.
Create B tree for 1 to 10 with order m= 3
B Tree
Create B tree for 1 to 20 with order m= 5
B Tree
B tree Deletion
Before going through the steps below, one must know these facts about a B tree of degree m (examples for m = 3):
1. A node can have a maximum of m children. (i.e. 3)

2. A node can contain a maximum of m-1 keys. (i.e. 2)

3. A node should have a minimum of ⌈m/2⌉ children. (i.e. 2)

4. A node (except the root node) should contain a minimum of ⌈m/2⌉ - 1 keys. (i.e. 1)
B Tree
B tree Deletion
There are three main cases for the deletion operation in a B tree.
Case I
The key to be deleted lies in a leaf. There are three cases for it.
A. The deletion of the key does not violate the property of the minimum number of keys a node should hold.
B. The deletion of the key violates the property of the minimum number of keys a node should hold. In this case, we borrow a key from the immediate neighboring sibling node, in left-to-right order.
 First, visit the immediate left sibling. If the left sibling node has more than the minimum number of keys, borrow a key from this node.
 Else, check whether we can borrow from the immediate right sibling node.
C. If both immediate sibling nodes already have only the minimum number of keys, merge the node with either the left sibling node or the right sibling node. This merging is done through the parent node.
B Tree
B tree Deletion
In the tree below, deleting 32 does not violate the above properties.
(A)
B Tree
B tree Deletion
In the tree below, deleting 31 results in the (B) condition. Let us
borrow a key from the left sibling node. (B)
B Tree
B tree Deletion
Deleting 30.(C)
B Tree
B tree Deletion
Case II
If the key to be deleted lies in the internal node, the following cases
occur.
A. The internal node, which is deleted, is replaced by an inorder
predecessor if the left child has more than the minimum number of
keys.
B. The internal node, which is deleted, is replaced by an inorder
successor if the right child has more than the minimum number of
keys.
C. If both children have exactly the minimum number of keys, merge the left and the right children.
B Tree
B tree Deletion
Deleting an internal node (33)……(A)
B Tree
B tree Deletion
Deleting an internal node (30)……(C)
B Tree
B tree Deletion

Case III
In this case, the height of the tree shrinks.

If the target key lies in an internal node, and the deletion of the key leads
to a fewer number of keys in the node (i.e. less than the minimum
required), then look for the inorder predecessor and the inorder
successor. If both the children contain a minimum number of keys then,
borrowing cannot take place. This leads to Case II(C) i.e. merging the
children.

Again, look for the sibling to borrow a key. But, if the sibling also has only
a minimum number of keys then, merge the node with the sibling along
with the parent. Arrange the children accordingly (increasing order).
B Tree
B tree Deletion
Case III
In this case, the height of the tree shrinks. Deleting an internal node
(10)
B Tree
Quiz
m=5
Delete key =5
B Tree
Quiz
m=5
Delete key =12
B Tree
Quiz
m=5
Delete key =32
B Tree
Quiz
m=5
Delete key = 53
B Tree
Quiz
m=5
Delete key =21
B Tree
Application of B Tree

 Database or File System – Consider having to search details of a


person in Yellow Pages (directory containing details of professionals).
Such a system where you need to search a record among millions can
be done using B Tree.

 Search Engines and Multilevel Indexing – B tree Searching is used


in search engines and also querying in MySql Server.

 To store blocks of data – B Tree helps in faster storing blocks of data


in secondary storage due to the balanced structure of B Tree.
Heap Data Structure

Heap Data Structure
A Heap Data Structure is a special case of a balanced binary tree where the root-node key is compared with its children and arranged accordingly.

Note: a heap tree is considered an ACBT (Almost Complete Binary Tree).
Heap Data Structure

Types of Heap Data Structure

Min-Heap − the value of the root node is less than or equal to either of its children.

Max-Heap − the value of the root node is greater than or equal to either of its children.
Heap Data Structure
Min-Heap

•In a min-heap, each node is smaller than its child nodes.


•The root of a min-heap is the smallest element in the heap.
Heap Data Structure
Max-Heap

•In a max-heap, each node is greater than its child nodes.


•The root of a max-heap is the largest element in the heap.
Heap Data Structure
Max-Heap
Create a Max-Heap for the given values (figure omitted). The resulting heap as a list is 16, 8, 14, 1, 6, 5, 12, requiring 10 swaps.
Heap Data Structure
Create a Min-Heap for: 23, 10, 24, 1, 23, 22, 15, 17, 19, 18
Heap Data Structure
Heapify Method

The heapify method rearranges the elements of an array so that the left and right subtrees of the i-th element obey the heap property.

Example:
Given array: 140 42 26 66 12 48 19 1 100 27 8 4 46 10 32
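On the array representation of a heap, the children of index i sit at 2i+1 and 2i+2. A minimal C++ sketch of heapify for a max-heap, plus the bottom-up heap construction that calls it (function names are our own):

```cpp
#include <algorithm>
#include <vector>

// Sift the element at index i down until the subtree rooted at i
// satisfies the max-heap property (only a[0..n-1] is considered).
void heapify(std::vector<int>& a, int n, int i) {
    int largest = i;
    int l = 2 * i + 1, r = 2 * i + 2;
    if (l < n && a[l] > a[largest]) largest = l;
    if (r < n && a[r] > a[largest]) largest = r;
    if (largest != i) {
        std::swap(a[i], a[largest]);
        heapify(a, n, largest);  // continue sifting down
    }
}

// Build a max-heap bottom-up: heapify every internal node,
// starting from the last internal node (index n/2 - 1).
void buildMaxHeap(std::vector<int>& a) {
    int n = (int)a.size();
    for (int i = n / 2 - 1; i >= 0; --i)
        heapify(a, n, i);
}
```

heapify assumes both subtrees of i already obey the heap property, which is why buildMaxHeap works from the last internal node back to the root.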
Heap Data Structure
Heap Deletion
The standard deletion operation on a heap deletes the element present at the root node. That is, in a Max-Heap the standard deletion operation deletes the maximum element, and in a Min-Heap it deletes the minimum element.

Process of Deletion:

Since deleting an element at an arbitrary position in the heap can be costly, we simply replace the element to be deleted with the last element, and then delete the last element of the heap.
• Replace the root (the element to be deleted) with the last element.
• Delete the last element from the heap.
• Since the last element is now placed at the position of the root node, it may not satisfy the heap property. Therefore, heapify the node now placed at the root position.
Heap Data Structure
Heap Deletion
Heap Data Structure
Heap Sort
 Heap sort is one of the sorting algorithms used to arrange a list of elements in order. The heapsort algorithm uses the Heap Tree concept. In this scheme, repeatedly deleting the root of a Max-Heap yields the elements in descending order, and repeatedly deleting the root of a Min-Heap yields them in ascending order.

Step-by-Step Process

The heap sort algorithm arranges a list of elements using the following steps:

• Step 1 - Construct a binary tree with the given list of elements.
• Step 2 - Transform the binary tree into a Max-Heap or Min-Heap.
• Step 3 - Delete the root element from the Max-Heap or Min-Heap using the heapify method.
• Step 4 - Put the deleted element into the sorted list.
• Step 5 - Repeat until the Max-Heap or Min-Heap becomes empty.
• Step 6 - Display the sorted list.
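The steps above are usually implemented in place: instead of moving deleted roots to a separate sorted list, the root is swapped to the end of the array and the heap shrinks by one. A C++ sketch (ascending order via a max-heap, since the maximum is moved to the back each round):

```cpp
#include <algorithm>
#include <vector>

// Sift element i down within a[0..n-1] until the max-heap property holds.
void heapify(std::vector<int>& a, int n, int i) {
    int largest = i;
    int l = 2 * i + 1, r = 2 * i + 2;
    if (l < n && a[l] > a[largest]) largest = l;
    if (r < n && a[r] > a[largest]) largest = r;
    if (largest != i) {
        std::swap(a[i], a[largest]);
        heapify(a, n, largest);
    }
}

// In-place heap sort: build a max-heap, then repeatedly "delete" the root
// by swapping it into the sorted suffix and re-heapifying the prefix.
void heapSort(std::vector<int>& a) {
    int n = (int)a.size();
    for (int i = n / 2 - 1; i >= 0; --i)   // Step 2: build the max-heap
        heapify(a, n, i);
    for (int end = n - 1; end > 0; --end) {
        std::swap(a[0], a[end]);           // Steps 3-4: root joins the sorted suffix
        heapify(a, end, 0);                // restore the heap on the shrunken prefix
    }
}
```

Both phases together run in O(n log n) with O(1) extra space, which is the complexity quoted in the applications slide below.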
Heap Data Structure
Heap Sort
Heap Data Structure
Applications of Heap
 Heap Sort: Heap Sort is a sorting algorithm. It has a time complexity of O(N*log N) that
leverages the Binary Heap data structure. By organizing data into a heap, it enables fast
sorting operations.
 Priority Queue: Priority queues benefit from a heap implementation, allowing swift execution of essential operations such as insert(), delete(), extractMax(), and decreaseKey(), all achievable in O(log N) time.
 Graph Algorithms: Heaps play a pivotal role in various graph algorithms, including
Dijkstra’s Shortest Path and Prim’s Minimum Spanning Tree algorithms. Their efficient
structure facilitates optimal path finding and tree construction, enhancing the
performance of these algorithms.
 Dynamic Memory Allocation: Heaps are also instrumental in dynamic memory
allocation tasks, where they efficiently manage memory blocks based on their priorities or
sizes, ensuring optimal memory utilization and management.
 Operating Systems: Heaps are integral to memory management in operating systems,
facilitating tasks such as process scheduling, memory allocation, and resource
optimization. By efficiently organizing memory blocks, heaps contribute to the smooth
operation of various system functions.
Hashing
Hashing
Hashing is a well-known technique for searching for a particular
element among several elements.
It minimizes the number of comparisons performed during
the search.
Advantage-
 Unlike other searching techniques, hashing is extremely
efficient.
 The time taken to perform the search does not
depend on the total number of elements.
 It completes the search in constant average time,
O(1).
Hashing
Hashing Mechanism-
 An array data structure called a hash table is used to store the data items.
 Based on the hash key value, data items are inserted into the hash table.

Hash Key Value-
 The hash key value is a special value that
serves as an index for a data item.
 It indicates where the data item should
be stored in the hash table.
 The hash key value is generated using a hash
function.
Hashing
• Hashing has 2 major components
• Hash function h
• Hash Table Data Structure of size M
• A hash function h maps keys (an identifying element of a record set) to a hash value or hash key,
which refers to a specific location in the hash table
• Example:
h(key) = key mod M
is a hash function for integer keys
• The integer h(key) is called the hash value of key key
Hashing
Hash Function-
A hash function is a function that maps any big number or string to a small integer
value.

 The hash function takes the data item as an input and returns a small integer value as an
output.

 The small integer value is called a hash value.

 The hash value of the data item is then used as an index for storing it into the hash table.
Hashing

Different types of hash functions are used for the
mapping of keys into tables:

(a) Division Method
(b) Mid-square Method
(c) Folding Method
Hashing
(a) Division Method
• Choose a number m larger than the number n of keys in K.
• The number m is usually chosen to be a prime number.
• The hash function H is defined as,
H(k) = k (mod m) or H(k) = k (mod m) + 1
• k (mod m) denotes the remainder when k is divided by m.
• The 2nd formula is used when the address range is from 1 to m.
Hashing
(a) Division Method
• Example:
• Elements: 3205, 7148, 2345
• Two-digit addresses: 00 – 99
• m = 97 (prime)
• H(3205) = 4, H(7148) = 67, H(2345) = 17
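The division-method values above can be checked directly (a small Python sketch; m = 97 as in the example):

```python
m = 97  # prime modulus from the example

def H(k):
    # Division method: the hash value is the remainder of k divided by m.
    return k % m

print([H(k) for k in (3205, 7148, 2345)])  # [4, 67, 17]
```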
Hashing
Folding Method
• The key k is partitioned into a number of parts k1, k2, …, kn, each part having the same
number of digits as the required address (except possibly the last).

• These parts are then added together, ignoring the final carry.

• One can also reverse every other part before adding (right- or left-justified; mostly
right).

H(k) = k1 + k2 + ………. + kn
Hashing
Folding Method
• Example (two-digit parts, carry ignored):

• H(3205) = 32 + 05 = 37; with the second part reversed, H(3205) = 32 + 50 = 82
• H(7148) = 71 + 48 = 119 → 19; with the second part reversed, H(7148) = 71 + 84 = 155 → 55
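The fold-and-add computation can be sketched as follows (an illustrative Python sketch; the two-digit parts and two-digit addresses are assumptions matching the example):

```python
def fold_hash(key, part_digits=2, address_digits=2):
    """Folding method: split the key into parts, add them, ignore the final carry."""
    s = str(key)
    parts = [int(s[i:i + part_digits]) for i in range(0, len(s), part_digits)]
    # Ignoring the final carry is the same as taking the sum mod 10**address_digits.
    return sum(parts) % 10 ** address_digits

print(fold_hash(3205), fold_hash(7148), fold_hash(2345))  # 37 19 68
```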
Hashing
Mid-Square Method
• The key k is squared. Then the hash function H is defined as H(k) = l,
• where l is obtained by deleting digits from both ends of k².
• The same digit positions must be used for all the keys.
Example:
k: 3205 7148 2345
k²: 10272025 51093904 05499025
H(k): 72 93 99

The 4th and 5th digits, counted from the right side, have been selected.
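The mid-square selection can be reproduced as follows (an illustrative Python sketch; the digit positions are counted from the right, as in the example):

```python
def mid_square(key):
    """Mid-square method: square the key, keep the 4th and 5th digits from the right."""
    sq = str(key * key)
    # Same positions for every key: 5th digit, then 4th digit, from the right.
    return int(sq[-5] + sq[-4])

print(mid_square(3205), mid_square(7148), mid_square(2345))  # 72 93 99
```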
Hashing
Hash Table : Direct-address Tables
• The implementation of hash tables is called hashing.
• Hashing is a technique used for performing insertions, deletions and finds in constant
average time (i.e. O(1)).

Hash table size
• Should be appropriate for the hash function used.
• Too big will waste memory; too small will increase collisions and may eventually force
rehashing (copying into a larger table).
Hashing
Collision in Hashing-
When the hash value of a key maps to an already occupied bucket of the hash table, it is
called a collision.
A hash function may return the same hash value for two or more keys.
Hashing
Collision resolution techniques
Hashing
Separate Chaining-
 To handle a collision, this technique attaches a linked list to
the slot for which the collision occurs.
 The new key is then inserted into the linked list.
 These linked lists attached to the slots look like chains.
 That is why this technique is called separate chaining.
Hashing
Separate Chaining-
Example:
Using the hash function 'key mod 7',
insert the following sequence of
keys in the hash table-
50, 700, 76, 85, 92, 73 and 101
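The example can be traced with a minimal separate-chaining table (an illustrative Python sketch using lists as chains):

```python
M = 7
table = [[] for _ in range(M)]  # one chain (list) per slot

def insert(key):
    # Append the key to the chain at slot 'key mod 7'; collisions simply extend the chain.
    table[key % M].append(key)

for key in (50, 700, 76, 85, 92, 73, 101):
    insert(key)

print(table)  # [[700], [50, 85, 92], [], [73, 101], [], [], [76]]
```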
Hashing
Open Addressing

More formally:
Cells h0(x), h1(x), h2(x), … are tried in succession, where
hi(x) = (hash(x) + f(i)) mod TableSize, with f(0) = 0.
The function f is the collision resolution strategy.
There are three common collision resolution strategies:
• Linear Probing
• Quadratic Probing
• Double Hashing
Hashing
Linear Probing
• Locations are checked from the hash location k to the end of the table and
the element is placed in the first empty slot
• If the bottom of the table is reached, checking “wraps around” to the start of
the table. Modulus is used for this purpose
• Thus, if linear probing is used, these routines must continue down the table
until a match or empty location is found
• Linear probing is guaranteed to find a slot for the insertion if there is still an
empty slot in the table.
• If the load factor exceeds about 50% – 70%, the time to search for or to
add a record will increase.
Hashing
Linear Probing

However, linear probing also tends to promote clustering (long runs of occupied slots) within the table.
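A minimal linear-probing insert, reusing the keys from the chaining example (an illustrative sketch; the table size of 7 is an assumption):

```python
M = 7
table = [None] * M  # None marks an empty slot

def insert(key):
    home = key % M
    for probe in range(M):
        slot = (home + probe) % M  # modulus makes the check "wrap around"
        if table[slot] is None:    # first empty slot wins
            table[slot] = key
            return slot
    raise RuntimeError("table is full")

for key in (50, 700, 76, 85, 92, 73, 101):
    insert(key)

print(table)  # [700, 50, 85, 92, 73, 101, 76]
```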
Hashing
Quadratic Probing
Quadratic probing is a solution to the clustering problem.
Linear probing adds 1, 2, 3, etc. to the original hashed key.
Quadratic probing adds 1², 2², 3², etc. to the original hashed key:
H(k) = (hash(k) + i²) mod table_size
• However, whereas linear probing guarantees that all empty positions will be
examined if necessary, quadratic probing does not.
• More generally, with quadratic probing, insertion may be impossible if the
table is more than half full!
Hashing
Quadratic Probing
• Calculate the initial hash position for the key.
• If the position is occupied, apply the quadratic probing formula to find the next available
slot.
• Repeat this process until an empty slot is found, and insert the data.
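The steps above can be sketched as follows (an illustrative Python sketch; the table size of 7 and the sample keys are assumptions):

```python
def quadratic_insert(table, key):
    M = len(table)
    home = key % M                  # initial hash position
    for i in range(M):
        slot = (home + i * i) % M   # try home, home+1², home+2², ... (mod M)
        if table[slot] is None:
            table[slot] = key
            return slot
    # Can fail even with empty slots remaining, especially past half full.
    raise RuntimeError("no empty slot reached")

table = [None] * 7
for key in (50, 700, 76, 85, 92, 73, 101):
    quadratic_insert(table, key)

print(table)  # [700, 50, 85, 73, 101, 92, 76]
```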
Hashing
Double Hashing
A second hash function is used to drive the collision resolution.
f(i) = i * hash2(key)

Double hashing can be done using:
(hash1(key) + i * hash2(key)) % TABLE_SIZE
Here hash1() and hash2() are hash functions and TABLE_SIZE
is the size of the hash table.
(We repeat by increasing i when a collision occurs.)

A function such as hash2(key) = R – (key mod R), with R a prime smaller
than TableSize, will work well.
e.g. try R = 7 or 11 if the table size is 13
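Putting the two functions together (an illustrative Python sketch; TABLE_SIZE = 13 and R = 7 follow the hint above, while the sample keys are assumptions):

```python
TABLE_SIZE = 13
R = 7  # prime smaller than TABLE_SIZE

def hash1(key):
    return key % TABLE_SIZE

def hash2(key):
    # Always in 1..R, never 0, so every probe actually advances.
    return R - (key % R)

def insert(table, key):
    for i in range(TABLE_SIZE):
        slot = (hash1(key) + i * hash2(key)) % TABLE_SIZE
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("table is full")

table = [None] * TABLE_SIZE
for key in (19, 27, 36, 10, 64):
    insert(table, key)

print(table)
```

Tracing the keys: 19 and 27 land in their home slots 6 and 1; key 10 collides at slot 10 (taken by 36) and, stepping by hash2(10) = 4, settles in slot 5.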
Thank You
