Lecture on Trees
Why Trees?
Tree Data Structure
A specialized way to organize and store data in the computer so that it can be used more effectively.
Depth of a node: the length of the path from the root to that node.
Each edge adds 1 unit of length to the path.
Height of a node: the length of the longest path from that node down to a leaf node.
Height of the tree: the length of the longest path from the root to a leaf node.
Types of Binary Tree
Binary Trees: Each node has up to two children, the left child node and the right child node. This structure is the foundation for more complex tree types like Binary Search Trees and AVL Trees.
A balanced Binary Tree has, for each node, a height difference of at most 1 between its left and right subtrees.
A complete Binary Tree has all levels full of nodes, except the last level, which can either be full or filled from left to right. These properties also make a complete Binary Tree balanced.
A full Binary Tree is a tree in which every node has either 0 or 2 child nodes.
A perfect Binary Tree has all leaf nodes on the same level, which means that all levels are full of nodes and all internal nodes have two child nodes. These properties make a perfect Binary Tree full, balanced, and complete as well.
Types of Binary Tree
Binary Tree Properties
The maximum number of nodes at level 'l' of a binary tree is 2^l (taking the root as level 0).
The maximum number of nodes in a binary tree of height 'h' is 2^h - 1 (counting height as the number of levels).
In a binary tree with N nodes, the minimum possible height, or the minimum number of levels, is ⌈log2(N + 1)⌉.
For example, a perfect binary tree of height 3 has 2^3 - 1 = 7 nodes, and 7 nodes need at least ⌈log2(7 + 1)⌉ = 3 levels.
#include <iostream>
using namespace std;

// A single node of a binary tree
class Node {
public:
    int data;
    Node* left;
    Node* right;
    Node(int value) {
        data = value;
        left = right = nullptr;
    }
};
Code
// Function to print in-order traversal (Left, Root, Right)
void inorder(Node* root) {
    if (root == nullptr) {
        return;
    }
    // Traverse the left subtree
    inorder(root->left);
    // Visit the root node
    cout << root->data << " ";
    // Traverse the right subtree
    inorder(root->right);
}

// Function to print pre-order traversal (Root, Left, Right)
void preorder(Node* root) {
    if (root == nullptr) {
        return;
    }
    // Visit the root node
    cout << root->data << " ";
    // Traverse the left subtree
    preorder(root->left);
    // Traverse the right subtree
    preorder(root->right);
}
Code
// Function to print post-order traversal (Left, Right, Root)
void postorder(Node* root) {
    if (root == nullptr) {
        return;
    }
    // Traverse the left subtree
    postorder(root->left);
    // Traverse the right subtree
    postorder(root->right);
    // Visit the root node
    cout << root->data << " ";
}

int main() {
    // Manually creating the binary tree with 7 nodes
    Node* root = new Node(1);
    root->left = new Node(2);
    root->right = new Node(3);
    root->left->left = new Node(4);
    root->left->right = new Node(5);
    root->right->left = new Node(6);
    root->right->right = new Node(7);

    // In-order traversal
    cout << "In-order traversal: ";
    inorder(root);
    cout << endl;

    // Pre-order traversal
    cout << "Pre-order traversal: ";
    preorder(root);
    cout << endl;

    // Post-order traversal
    cout << "Post-order traversal: ";
    postorder(root);
    cout << endl;

    return 0;
}
Reconstructing a tree from its traversal orders
Can the Binary Tree be recovered from them?
Pseudocode: Building a Tree from Inorder and Preorder Traversals
function buildTree(preorder[], inorder[], start, end, preIndex):
    if start > end:
        return NULL
    # Pick current root from Preorder
    rootValue = preorder[preIndex]
    preIndex = preIndex + 1
    node = createNode(rootValue)
    # If node has no children, return it
    if start == end:
        return node
    # Find root's position in Inorder traversal
    inIndex = search(inorder, start, end, rootValue)
    # Recursively build the left and right subtrees
    node.left = buildTree(preorder, inorder, start, inIndex - 1, preIndex)
    node.right = buildTree(preorder, inorder, inIndex + 1, end, preIndex)
    return node

function search(inorder[], start, end, value):
    for i = start to end:
        if inorder[i] == value:
            return i
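The same construction in C++, as a minimal sketch that reuses the Node class defined earlier; preIndex is passed by reference so all recursive calls share it, and searchInorder is a hypothetical helper introduced only for this example.

#include <vector>
using namespace std;

// Hypothetical helper: linear search for value in inorder[start..end]
int searchInorder(const vector<int>& inorder, int start, int end, int value) {
    for (int i = start; i <= end; i++) {
        if (inorder[i] == value) return i;
    }
    return -1; // should not happen for a valid traversal pair
}

// Build the subtree covering inorder[start..end]; preIndex is shared across calls
// (uses the Node class defined earlier in the lecture)
Node* buildTree(const vector<int>& preorder, const vector<int>& inorder,
                int start, int end, int& preIndex) {
    if (start > end) return nullptr;

    // Pick the current root from the preorder sequence
    int rootValue = preorder[preIndex++];
    Node* node = new Node(rootValue);

    // If the node has no children, return it
    if (start == end) return node;

    // Find the root's position in the inorder sequence
    int inIndex = searchInorder(inorder, start, end, rootValue);

    // Recursively build the left and right subtrees
    node->left  = buildTree(preorder, inorder, start, inIndex - 1, preIndex);
    node->right = buildTree(preorder, inorder, inIndex + 1, end, preIndex);
    return node;
}

// Usage: int preIndex = 0;
//        Node* root = buildTree(preorder, inorder, 0, (int)inorder.size() - 1, preIndex);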
Inorder and Postorder Traversal Tree
Inorder : 40, 20, 50, 10, 60, 30
Postorder: 40, 50, 20, 60, 30, 10
Inorder and Postorder Traversal Tree
Inorder : 40, 20, 50, 10, 60, 30
Postorder: 40, 50, 20, 60, 30, 10
Inorder and Postorder Traversal Tree
function buildTree(postorder[], inorder[], start, end, postIndex):
    if start > end:
        return NULL
    # Pick current root from Postorder
    rootValue = postorder[postIndex]
    postIndex = postIndex - 1
    node = createNode(rootValue)
    # If node has no children, return it
    if start == end:
        return node
    # Find root's position in Inorder traversal
    inIndex = search(inorder, start, end, rootValue)
    # Recursively build the right and left subtrees
    node.right = buildTree(postorder, inorder, inIndex + 1, end, postIndex)
    node.left = buildTree(postorder, inorder, start, inIndex - 1, postIndex)
    return node

function search(inorder[], start, end, value):
    for i = start to end:
        if inorder[i] == value:
            return i
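The postorder case in C++, a matching sketch under the same assumptions (Node class from earlier; postIndex starts at the last position and is shared across calls):

#include <vector>
using namespace std;

// Build the subtree covering inorder[start..end] from a postorder traversal.
// postIndex moves from right to left; uses the Node class defined earlier.
Node* buildTreeFromPostorder(const vector<int>& postorder, const vector<int>& inorder,
                             int start, int end, int& postIndex) {
    if (start > end) return nullptr;

    // Pick the current root from the postorder sequence (right to left)
    int rootValue = postorder[postIndex--];
    Node* node = new Node(rootValue);

    // If the node has no children, return it
    if (start == end) return node;

    // Find the root's position in the inorder sequence (linear search)
    int inIndex = start;
    while (inorder[inIndex] != rootValue) inIndex++;

    // Build the right subtree first, then the left (the reverse of the preorder case)
    node->right = buildTreeFromPostorder(postorder, inorder, inIndex + 1, end, postIndex);
    node->left  = buildTreeFromPostorder(postorder, inorder, start, inIndex - 1, postIndex);
    return node;
}

// Usage: int postIndex = (int)postorder.size() - 1;
//        Node* root = buildTreeFromPostorder(postorder, inorder, 0, (int)inorder.size() - 1, postIndex);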
Complexity Analysis
Time Complexity:
In-order: O(n)
Pre-order: O(n)
Post-order: O(n)
Space Complexity (traversal stack; O(n) in the worst case of a skewed tree)
In-order: O(n)
Pre-order: O(n)
Post-order: O(n)
Data Structure:
Stack
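The space bound comes from the traversal stack (the recursion stack, or an explicit one). As an illustration, here is a hedged sketch of an iterative in-order traversal that makes the stack explicit, reusing the Node class from earlier:

#include <iostream>
#include <stack>
using namespace std;

// Iterative in-order traversal with an explicit stack
// (uses the Node class defined earlier in the lecture)
void inorderIterative(Node* root) {
    stack<Node*> st;
    Node* current = root;
    while (current != nullptr || !st.empty()) {
        // Walk as far left as possible, stacking the path
        while (current != nullptr) {
            st.push(current);
            current = current->left;
        }
        // Visit the node on top of the stack
        current = st.top();
        st.pop();
        cout << current->data << " ";
        // Then traverse its right subtree
        current = current->right;
    }
}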
Binary Search Trees
Binary Search Trees
A Binary Search Tree (BST) is a type of Binary Tree data structure, where the following properties must be true for any node "X" in the tree:
The left child, and all its descendants, have lower values than X's value.
The right child, and all its descendants, have higher values than X's value.
Left and right subtrees must also be Binary Search Trees, without any duplicate values.
•START
Search (root, item)
if (item = root → data) or (root = NULL)
return root
else if (item < root → data)
return Search(root → left, item)
else
return Search(root → right, item)
END if
•END
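In C++, the same search can be written as below (a minimal sketch using the Node class from earlier; the NULL check is placed first so that searching an empty subtree is safe):

// Recursive BST search: returns the node holding `item`, or nullptr if absent
Node* searchBST(Node* root, int item) {
    if (root == nullptr || root->data == item) {
        return root;                        // found it, or ran off the tree
    }
    if (item < root->data) {
        return searchBST(root->left, item); // value can only be in the left subtree
    }
    return searchBST(root->right, item);    // value can only be in the right subtree
}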
Binary Search Trees
Search for a Value (8) in a BST
Binary Search Trees
Insert a Node in a BST
How it works:
Start at the root node.
Compare the new value with the current node's value: go to the left child if it is smaller, to the right child if it is larger.
Repeat until an empty spot is reached, then insert the new value there as a leaf node.
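A minimal C++ sketch of this insertion, assuming the Node class from earlier (duplicates are ignored, matching the no-duplicates property above):

// Recursive BST insertion: returns the (possibly new) root of the subtree
Node* insertBST(Node* root, int value) {
    if (root == nullptr) {
        return new Node(value);                      // empty spot found: new leaf
    }
    if (value < root->data) {
        root->left = insertBST(root->left, value);   // smaller values go left
    } else if (value > root->data) {
        root->right = insertBST(root->right, value); // larger values go right
    }
    return root;                                     // duplicate: tree unchanged
}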
Binary Search Trees
Delete a Node in a BST
Case 1 Delete 8
Binary Search Trees
Delete a Node in a BST
Case 2 Delete 19
If the node only has one child node, connect the parent node of the node you
want to remove to that child node.
Binary Search Trees
Delete a Node in a BST
Case 3 Delete 13
If the node has both right and left child nodes: Find the node's in-order successor, change
values with that node, then delete it.
Binary Search Trees
Delete a Node in a BST
13 is replaced by 14, the in-order successor of 13; the successor node (14) will also be freed.
Case 3 Delete 13
If the node has both right and left child nodes: Find the node's in-order successor, change
values with that node, then delete it.
Binary Search Trees
Delete a Node in a BST
Delete 40 ??
Binary Search Tree Algorithm
Algorithm: Delete a Value in a BST
deleteNode(root, value)
i) If root == NULL,
   return root.
ii) If value < root->data,
   root->left = deleteNode(root->left, value)
iii) If value > root->data,
   root->right = deleteNode(root->right, value)
iv) If value == root->data (the node to be deleted is found):
   a) If the node has no child or only one child:
      1. If root->left is NULL,
         temp = root->right
         free(root)
         return temp
      2. If root->right is NULL,
         temp = root->left
         free(root)
         return temp
   b) If the node has two children:
      1. Find the in-order successor temp (the smallest node in the right subtree).
      2. Copy the in-order successor's value into the node:
         root->data = temp->data
      3. Call deleteNode on root->right to delete the in-order successor:
         root->right = deleteNode(root->right, temp->data)
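A hedged C++ sketch of this deletion algorithm (Node class from earlier; memory is released with delete rather than free):

// Smallest node in a subtree = the in-order successor candidate
Node* minValueNode(Node* node) {
    while (node->left != nullptr) {
        node = node->left;
    }
    return node;
}

// Delete `value` from the BST rooted at `root`; returns the new subtree root
Node* deleteNode(Node* root, int value) {
    if (root == nullptr) return root;

    if (value < root->data) {
        root->left = deleteNode(root->left, value);
    } else if (value > root->data) {
        root->right = deleteNode(root->right, value);
    } else {
        // Node found: handle the zero-child / one-child cases first
        if (root->left == nullptr) {
            Node* temp = root->right;
            delete root;
            return temp;
        }
        if (root->right == nullptr) {
            Node* temp = root->left;
            delete root;
            return temp;
        }
        // Two children: copy the in-order successor's value, then delete it
        Node* temp = minValueNode(root->right);
        root->data = temp->data;
        root->right = deleteNode(root->right, temp->data);
    }
    return root;
}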
preorder[] = {6, 3, 1, 4, 8, 7, 9}
What BST does this produce?
BST from the preorder traversal
Construct a BST from the preorder traversal
preorder[] = {6, 3, 1, 4, 8, 7, 9}
Algorithm
First, we create a base case that checks that pre has not exceeded the length of the preorder array (pre is the variable pointing to the current index of the preorder array, starting at 0) and that l is not greater than r (l and r are the variables that divide the preorder array into the left subtree and the right subtree).
Create a new node with the value preorder[pre] and increment pre.
After creating a node, run a for loop from l to r to find the first element greater than the node's value, and store its position if present.
Now divide the array into left and right subtrees and recur for both, as sketched in the code below.
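A minimal C++ sketch of this approach, assuming the Node class defined earlier; preIdx is shared by all recursive calls:

#include <vector>
using namespace std;

// Build a BST from its preorder traversal, considering positions [l, r]
// (uses the Node class defined earlier in the lecture)
Node* bstFromPreorder(const vector<int>& pre, int& preIdx, int l, int r) {
    if (preIdx >= (int)pre.size() || l > r) {
        return nullptr;                       // base case from step 1
    }
    Node* node = new Node(pre[preIdx++]);     // step 2: create node, advance preIdx

    // Step 3: find the first element greater than the node's value
    int split = r + 1;
    for (int i = l; i <= r; i++) {
        if (pre[i] > node->data) { split = i; break; }
    }
    // Step 4: recurse on the smaller part (left) and the larger part (right)
    node->left  = bstFromPreorder(pre, preIdx, preIdx, split - 1);
    node->right = bstFromPreorder(pre, preIdx, split, r);
    return node;
}

// Usage: vector<int> pre = {6, 3, 1, 4, 8, 7, 9};
//        int preIdx = 0;
//        Node* root = bstFromPreorder(pre, preIdx, 0, (int)pre.size() - 1);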
BST from the preorder traversal
Construct a BST from the preorder traversal
preorder[] = {6, 3, 1, 4, 8, 7, 9}
BST from the preorder traversal
Construct a BST from the preorder traversal
preorder[] = {6, 3, 1, 4, 8, 7, 9}
BST from the preorder traversal
Construct a BST from the preorder traversal
preorder[] = {6, 3, 1, 4, 8, 7, 9}
BST from the preorder traversal
Construct a BST from the preorder traversal
preorder[] = {6, 3, 1, 4, 8, 7, 9}
Final BST
Application of Binary Search Tree
Application of Binary Search Tree
Some of the applications of binary search trees are listed below:
Dictionary: BSTs are commonly used to implement dictionaries, allowing efficient word lookup and spell-checking.
Database Indexing: BSTs enable fast data retrieval by serving as index structures in databases.
File System: BSTs can be utilized in file systems to organize and search files efficiently.
Auto-complete: BSTs can power auto-complete features in text editors or search engines, suggesting relevant
options as users type.
Network Routing: BSTs can assist in network routing algorithms, aiding in efficient packet forwarding.
Finding Successor/Predecessor: BSTs provide quick access to the successor or predecessor of a given value, useful
in certain algorithms and data processing tasks.
Worst Case of a Binary Search Tree
Is this a BST?
The worst case occurs when keys are inserted in sorted order: the tree becomes completely skewed (essentially a linked list), so search, insert and delete degrade from O(log n) to O(n).
AVL Tree
An AVL tree is a self-balancing Binary Search Tree: for every node, the heights of its left and right subtrees differ by at most one, and rotations are used to restore this balance after updates.
There are basically three kinds of operations that are performed on an AVL tree:
1. Searching for a Node
2. Insertion of a New Node
3. Deletion of a Node
• Deleting a Node from the Right Subtree
• Deleting a Node from the Left Subtree
10 15 20 9 5 16 17 8 6
AVL Tree
Example: AVL Tree by inserting the following values.
10 15 20 9 5 16 17 8 6
AVL Tree
Example: AVL Tree by inserting the following values.
10 15 20 9 5 16 17 8 6
AVL Tree
Example: AVL Tree by inserting the following values.
10 15 20 9 5 16 17 8 6
AVL Tree
Example: AVL Tree by inserting the following values.
10 15 20 9 5 16 17 8 6
AVL Tree
Algorithm: Insert a Value in an AVL Tree
1. insertNode(root, value)
o i) If root == NULL,
create a new node with the given value and return it.
o ii) If value < root->data,
call the insertNode function with root->left and assign the return value to root->left.
root->left = insertNode(root->left, value)
o iii) If value > root->data,
call the insertNode function with root->right and assign the return value to root->right.
root->right = insertNode(root->right, value)
2. Update the Height of the Root Node
o Set root->height to 1 + max(height(root->left), height(root->right))
Algorithm: Insert a Value in an AVL Tree
3. Check the Balance Factor of the Root Node
o balance = height(root->left) - height(root->right)
o Perform the necessary rotation based on the balance factor:
i) Left Left Case:
   If balance > 1 and value < root->left->data,
   perform a right rotation on the root.
   return rightRotate(root)
ii) Right Right Case:
   If balance < -1 and value > root->right->data,
   perform a left rotation on the root.
   return leftRotate(root)
iii) Left Right Case:
   If balance > 1 and value > root->left->data,
   perform a left rotation on root->left, followed by a right rotation on the root.
   root->left = leftRotate(root->left)
   return rightRotate(root)
iv) Right Left Case:
   If balance < -1 and value < root->right->data,
   perform a right rotation on root->right, followed by a left rotation on the root.
   root->right = rightRotate(root->right)
   return leftRotate(root)
4. Finally, Return the Root Pointer to the Calling Function
Algorithm: Insert a Value in an AVL Tree
Helper Functions:
height(node): Returns the height of the node.
rightRotate(y):
o x = y->left
o T2 = x->right
o Perform rotation:
  x->right = y
  y->left = T2
o Update heights of x and y
o Return x (new root)
leftRotate(x):
o y = x->right
o T2 = y->left
o Perform rotation:
  y->left = x
  x->right = T2
o Update heights of x and y
o Return y (new root)
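Putting the algorithm and helpers together, here is a hedged C++ sketch; AVLNode is a hypothetical node type introduced only for this example (a BST node plus a cached height):

#include <algorithm>  // for std::max

// AVL node: data, children, and a cached subtree height (hypothetical type)
struct AVLNode {
    int data;
    AVLNode *left, *right;
    int height;
    AVLNode(int v) : data(v), left(nullptr), right(nullptr), height(1) {}
};

int height(AVLNode* n) { return n ? n->height : 0; }

// Right rotation around y (Left Left case)
AVLNode* rightRotate(AVLNode* y) {
    AVLNode* x = y->left;
    AVLNode* T2 = x->right;
    x->right = y;                                                   // perform rotation
    y->left = T2;
    y->height = 1 + std::max(height(y->left), height(y->right));   // update heights
    x->height = 1 + std::max(height(x->left), height(x->right));
    return x;                                                       // new subtree root
}

// Left rotation around x (Right Right case)
AVLNode* leftRotate(AVLNode* x) {
    AVLNode* y = x->right;
    AVLNode* T2 = y->left;
    y->left = x;                                                    // perform rotation
    x->right = T2;
    x->height = 1 + std::max(height(x->left), height(x->right));   // update heights
    y->height = 1 + std::max(height(y->left), height(y->right));
    return y;                                                       // new subtree root
}

// Insert `value` and rebalance on the way back up the recursion
AVLNode* insertNode(AVLNode* root, int value) {
    if (root == nullptr) return new AVLNode(value);            // step 1
    if (value < root->data)      root->left  = insertNode(root->left, value);
    else if (value > root->data) root->right = insertNode(root->right, value);
    else return root;                                           // ignore duplicates

    // Step 2: update the height of this node
    root->height = 1 + std::max(height(root->left), height(root->right));

    // Step 3: check the balance factor and rotate if needed
    int balance = height(root->left) - height(root->right);
    if (balance > 1 && value < root->left->data)                // Left Left
        return rightRotate(root);
    if (balance < -1 && value > root->right->data)              // Right Right
        return leftRotate(root);
    if (balance > 1 && value > root->left->data) {              // Left Right
        root->left = leftRotate(root->left);
        return rightRotate(root);
    }
    if (balance < -1 && value < root->right->data) {            // Right Left
        root->right = rightRotate(root->right);
        return leftRotate(root);
    }
    return root;                                                // step 4
}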
Deletion in AVL Tree
Delete 12
Balanced
Deletion in AVL Tree
Delete 14
AVL Tree
Construct AVL tree for the following data:
21, 26, 30, 9, 4, 14, 28, 18, 15, 10, 2, 3, 7
AVL Tree
21, 26, 30, 9, 4, 14, 28, 18, 15, 10, 2, 3, 7
AVL Tree
21, 26, 30, 9, 4, 14, 28, 18, 15, 10, 2, 3, 7
AVL Tree
21, 26, 30, 9, 4, 14, 28, 18, 15, 10, 2, 3, 7
AVL Tree
21, 26, 30, 9, 4, 14, 28, 18, 15, 10, 2, 3, 7
AVL Tree
21, 26, 30, 9, 4, 14, 28, 18, 15, 10, 2, 3, 7
AVL Tree
Construct AVL tree for the following data
B Tree
B Tree
• B trees are extended binary search trees that are specialized in m-way searching, since the order of a B tree is 'm'.
• The order of a tree is defined as the maximum number of children a node can accommodate.
• Therefore, the height of a B tree is relatively smaller than the height of an AVL tree or a Red-Black tree.
• They are a generalized form of a Binary Search Tree, as a node can hold more than one key and have more than two children.
B Tree
Properties of B Trees
• Every node in a B tree holds a maximum of m children and (m - 1) keys, since the order of the tree is m.
• Every node in a B tree, except the root and the leaves, must hold at least ⌈m/2⌉ children.
• All paths in a B tree must end at the same level, i.e. the leaf nodes must all be at the same level.
B trees are also widely used for disk access, minimizing the disk access time, since the height of a B tree is low.
B Tree
1. Traverse the B tree to find the appropriate leaf node at which the new key can be inserted.
2. If the leaf node contains fewer than m - 1 keys, insert the element in increasing order.
3. Else, if the leaf node already contains m - 1 keys, follow these steps (a code sketch follows after this list):
• Insert the new element in the increasing order of elements.
• Split the node into two nodes at the median.
• Push the median element up to its parent node.
• If the parent node also already contains m - 1 keys, split it too, following the same steps.
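As a hedged illustration of the split-and-push-up idea, here is a compact C++ sketch. It uses the common "minimum degree t" convention (a node holds at most 2t - 1 keys, i.e. order m = 2t), which differs slightly from the order-m wording above; it is one possible sketch, not the only way to code B tree insertion.

#include <vector>
using namespace std;

const int T = 2;  // minimum degree (order m = 2T = 4); assumption for this sketch

struct BTreeNode {
    vector<int> keys;                 // at most 2T - 1 keys
    vector<BTreeNode*> children;      // at most 2T children
    bool leaf = true;
};

// Split the full child at position `idx` of `parent`,
// pushing the median key up into the parent.
void splitChild(BTreeNode* parent, int idx) {
    BTreeNode* child = parent->children[idx];
    BTreeNode* right = new BTreeNode();
    right->leaf = child->leaf;

    int median = child->keys[T - 1];

    // Move the upper T - 1 keys (and, if internal, T children) into the new node
    right->keys.assign(child->keys.begin() + T, child->keys.end());
    child->keys.resize(T - 1);
    if (!child->leaf) {
        right->children.assign(child->children.begin() + T, child->children.end());
        child->children.resize(T);
    }

    // Push the median key and the new child up into the parent
    parent->keys.insert(parent->keys.begin() + idx, median);
    parent->children.insert(parent->children.begin() + idx + 1, right);
}

// Insert into a node that is guaranteed not to be full
void insertNonFull(BTreeNode* node, int key) {
    int i = (int)node->keys.size() - 1;
    if (node->leaf) {
        // Find the position and insert the key in increasing order
        while (i >= 0 && key < node->keys[i]) i--;
        node->keys.insert(node->keys.begin() + i + 1, key);
    } else {
        while (i >= 0 && key < node->keys[i]) i--;
        i++;
        if ((int)node->children[i]->keys.size() == 2 * T - 1) {
            splitChild(node, i);              // make room before descending
            if (key > node->keys[i]) i++;
        }
        insertNonFull(node->children[i], key);
    }
}

// Insert into the whole tree; splitting a full root here is how the tree grows in height
BTreeNode* insert(BTreeNode* root, int key) {
    if (root == nullptr) {
        root = new BTreeNode();
        root->keys.push_back(key);
        return root;
    }
    if ((int)root->keys.size() == 2 * T - 1) {
        BTreeNode* newRoot = new BTreeNode();
        newRoot->leaf = false;
        newRoot->children.push_back(root);
        splitChild(newRoot, 0);
        insertNonFull(newRoot, key);
        return newRoot;
    }
    insertNonFull(root, key);
    return root;
}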
B Tree
Identify Mid
B Tree
9
B Tree
22
Add 13
B Tree
22
B Tree
Add 7 and 10
B Tree
11
B Tree
16
Add 14 and 8
B Tree
16
B Tree
Create B tree for 1 to 10 with order m= 3
B Tree
Create B tree for 1 to 20 with order m= 5
B Tree
B tree Deletion
Before going through the steps below, one must know these facts about a B tree of degree m (here m = 3):
1. A node can have a maximum of m children (i.e. 3).
2. A node can contain a maximum of m - 1 keys (i.e. 2).
3. A node (except the root) should have a minimum of ⌈m/2⌉ children (i.e. 2).
4. A node (except the root node) should contain a minimum of ⌈m/2⌉ - 1 keys (i.e. 1).
Case III
In this case, the height of the tree shrinks.
If the target key lies in an internal node, and deleting it would leave the node with fewer keys than the minimum required, then look at the in-order predecessor and the in-order successor. If both children contain only the minimum number of keys, borrowing cannot take place; this leads to Case II(C), i.e. merging the children.
Again, look for a sibling to borrow a key from. But if the sibling also has only the minimum number of keys, then merge the node with the sibling along with the parent, and arrange the children accordingly (in increasing order).
B Tree
B tree Deletion
Case III
In this case, the height of the tree shrinks. Deleting an internal node (10).
B Tree
Quiz
m=5
Delete key =5
B Tree
Quiz
m=5
Delete key =5
B Tree
Quiz
m=5
Delete key =12
B Tree
Quiz
m=5
Delete key =12
B Tree
Quiz
m=5
Delete key =32
B Tree
Quiz
m=5
Delete key =32
B Tree
Quiz
m=5
Delete key = 53
B Tree
Quiz
m=5
Delete key =53
B Tree
Quiz
m=5
Delete key =21
B Tree
Application of B Tree
Heap Data Structure
Heap Data Structure
Heap Data Structure is a special case of the complete binary tree, where each node's key is compared with its children's keys and arranged accordingly (in a max heap every parent is greater than or equal to its children; in a min heap it is less than or equal).
Example :
Given Array :
140 42 26 66 12 48 19 1 100 27 8 4 46 10 32
Heap Data Structure
Heapify Method.
Example :
Given Array :
140 42 26 66 12 48 19 1 100 27 8 4 46 10 32
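A hedged C++ sketch of the heapify (sift-down) method and bottom-up heap construction, building a Max Heap from the array above:

#include <iostream>
#include <vector>
using namespace std;

// Sift the element at index i down until the max-heap property holds
// for the subtree rooted at i (n is the current heap size)
void heapify(vector<int>& a, int n, int i) {
    int largest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;
    if (left < n && a[left] > a[largest]) largest = left;
    if (right < n && a[right] > a[largest]) largest = right;
    if (largest != i) {
        swap(a[i], a[largest]);
        heapify(a, n, largest);   // keep sifting down
    }
}

// Build a max heap bottom-up: heapify every internal node, starting from the last one
void buildHeap(vector<int>& a) {
    int n = (int)a.size();
    for (int i = n / 2 - 1; i >= 0; i--) {
        heapify(a, n, i);
    }
}

int main() {
    vector<int> a = {140, 42, 26, 66, 12, 48, 19, 1, 100, 27, 8, 4, 46, 10, 32};
    buildHeap(a);
    for (int x : a) cout << x << " ";   // the array now satisfies the max-heap property
    cout << endl;
    return 0;
}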
Heap Data Structure
Heap Deletion
The standard deletion operation on Heap is to delete the element present at the root node
of the Heap. That is if it is a Max Heap, the standard deletion operation will delete the
maximum element and if it is a Min heap, it will delete the minimum element.
Process of Deletion:
Since deleting an element at any intermediary position in the heap can be costly, we simply replace the element to be deleted with the last element, and then delete the last element of the Heap.
• Replace the root or element to be deleted by the last element.
• Delete the last element from the Heap.
• Since, the last element is now placed at the position of the root node. So, it may not follow
the heap property. Therefore, heapify the last node placed at the position of root.
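A minimal C++ sketch of this deletion for a max heap stored in an array; the sift-down loop at the end is the heapify step applied at the root (the heap is assumed non-empty):

#include <utility>
#include <vector>
using namespace std;

// Delete the root (the maximum of a max heap): replace it with the last element,
// shrink the heap, then sift the new root down until the heap property holds again.
int deleteRoot(vector<int>& heap) {
    int removed = heap.front();          // the root being deleted (assumes non-empty heap)
    heap[0] = heap.back();               // replace root with the last element
    heap.pop_back();                     // delete the last element

    int n = (int)heap.size();
    int i = 0;
    while (true) {                       // heapify the node now placed at the root
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && heap[l] > heap[largest]) largest = l;
        if (r < n && heap[r] > heap[largest]) largest = r;
        if (largest == i) break;         // heap property restored
        swap(heap[i], heap[largest]);
        i = largest;
    }
    return removed;
}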
Heap Data Structure
Heap Deletion
Heap Data Structure
Heap Sort
Heap sort is one of the sorting algorithms used to arrange a list of elements in order. The heapsort algorithm uses the heap tree concept. In this view, a Max Heap yields the elements in descending order and a Min Heap in ascending order, by repeatedly deleting the root.
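A compact, hedged sketch of heap sort in C++. Note that the common in-place variant sorts ascending with a Max Heap, by repeatedly swapping the root with the last unsorted element; the descending/ascending statement above corresponds to reading off elements as the root is repeatedly deleted.

#include <utility>
#include <vector>
using namespace std;

// Max-heap sift-down for the prefix a[0..n-1], rooted at index i
static void siftDown(vector<int>& a, int n, int i) {
    while (true) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) return;
        swap(a[i], a[largest]);
        i = largest;
    }
}

// In-place heap sort: build a max heap, then repeatedly swap the root
// (the current maximum) with the last unsorted element and shrink the heap.
void heapSort(vector<int>& a) {
    int n = (int)a.size();
    for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, n, i);   // build the heap
    for (int end = n - 1; end > 0; end--) {
        swap(a[0], a[end]);       // the maximum goes to its final position
        siftDown(a, end, 0);      // restore the heap on the remaining prefix
    }
}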
Hashing
A hash function takes a data item as input and returns a small integer value as output.
The hash value of the data item is then used as an index for storing it in the hash table.
Hashing
Folding Method
• Divide the key k into parts k1, k2, …, kn (for example, groups of digits of equal length, except possibly the last part).
• Then add these parts together, ignoring the last carry.
• One can also reverse parts before adding (right- or left-justified; mostly right).
H(k) = k1 + k2 + … + kn
Hashing
Folding Method
• Example (table size 97, so the folded sum is taken mod 97):
• H(3205) = (32 + 05) mod 97 = 37, or H(3250) = (32 + 50) mod 97 = 82
• H(7148) = (71 + 48) mod 97 = 119 mod 97 = 22, or H(7184) = (71 + 84) mod 97 = 155 mod 97 = 58
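A small hedged sketch of the folding method in C++ (two-digit groups taken from the right, summed, then reduced mod the table size, matching the examples above):

// Digit-folding hash: split the key into two-digit groups from the right,
// sum the groups, then take the result mod the table size.
int foldHash(long key, int tableSize = 97) {
    long sum = 0;
    while (key > 0) {
        sum += key % 100;   // take the lowest two-digit group
        key /= 100;         // drop it and continue with the remaining digits
    }
    return (int)(sum % tableSize);
}
// Example: foldHash(3205) == 37, foldHash(7148) == 22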
Hashing
Mid-Square Method
• The key k is squared. The hash function H is then defined by selecting a fixed set of middle digits of k²; in this example, the 4th and 5th digits from the right are selected.
• For instance, with k = 3205, k² = 10272025, so the 4th and 5th digits from the right give H(k) = 72.
Hashing
Hash Table : Direct-address Tables
• The implementation of hash tables is called hashing.
• Hashing is a technique used for performing insertions, deletions and finds in constant
average time (i.e. O(1))
More formally:
Cells h0(x), h1(x), h2(x), …are tried in succession where
hi(x) = (hash(x) + f(i)) mod TableSize, with f(0) = 0.
The function f is the collision resolution strategy.
There are three common collision resolution strategies:
Linear Probing
Quadratic probing
Double hashing
Hashing
Linear Probing
• Locations are checked from the hash location k to the end of the table and
the element is placed in the first empty slot
• If the bottom of the table is reached, checking “wraps around” to the start of
the table. Modulus is used for this purpose
• Thus, if linear probing is used, these routines must continue down the table
until a match or empty location is found
• Linear probing is guaranteed to find a slot for the insertion if there is still an empty slot in the table.
• If the load factor grows beyond roughly 50%-70%, the time to search or to add a record increases noticeably.
Hashing
Linear Probing
However, linear probing also tends to promote clustering within the table.
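A minimal sketch of linear-probing insertion, assuming non-negative integer keys, hash(k) = k mod TableSize, and -1 marking an empty slot:

#include <vector>
using namespace std;

const int EMPTY = -1;   // sentinel for an unused slot (assumes non-negative keys)

// Insert `key` into an open-addressed table using linear probing.
// Returns the index used, or -1 if the table is completely full.
int linearProbeInsert(vector<int>& table, int key) {
    int n = (int)table.size();
    int home = key % n;                  // initial hash location
    for (int i = 0; i < n; i++) {
        int idx = (home + i) % n;        // wrap around using the modulus
        if (table[idx] == EMPTY) {
            table[idx] = key;            // first empty slot found
            return idx;
        }
    }
    return -1;                           // no empty slot left
}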
Hashing
Quadratic Probing
Quadratic probing is a solution to the clustering problem
Linear probing adds 1, 2, 3, etc. to the original hashed key
Quadratic probing adds 1², 2², 3², etc. to the original hashed key
H(k) = (hash(k) + i^2) mod table_size
• However, whereas linear probing guarantees that all empty positions will be
examined if necessary, quadratic probing does not
• More generally, with quadratic probing, insertion may be impossible if the
table is more than half-full!
Hashing
Quadratic Probing
• Calculate the initial hash position for the key.
• If the position is occupied, apply the quadratic probing formula to find the next available
slot.
• Repeat this process until an empty slot is found, and insert the data.
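The same insertion with quadratic probing, a hedged sketch under the same assumptions as the linear-probing sketch (same EMPTY sentinel and table); unlike linear probing, it may fail even while empty slots remain:

// Insert `key` using quadratic probing: try (home + i*i) mod TableSize.
int quadraticProbeInsert(vector<int>& table, int key) {
    int n = (int)table.size();
    int home = key % n;                  // initial hash position
    for (int i = 0; i < n; i++) {
        int idx = (home + i * i) % n;    // quadratic step i^2
        if (table[idx] == EMPTY) {
            table[idx] = key;
            return idx;
        }
    }
    return -1;                           // gave up (the table may still have gaps)
}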
Hashing
Quadratic Probing
Hashing
Double Hashing
A second hash function is used to drive the collision resolution.
f(i) = i * hash2(key)
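A hedged sketch of double hashing, under the same assumptions as the earlier probing sketches; hash2(k) = R - (k mod R), with R a prime smaller than the table size, is one common textbook choice assumed here:

// Second hash function (assumed form): never returns 0, so the probe always advances
int hash2(int key, int R = 7) {
    return R - (key % R);
}

// Insert `key` using double hashing: h_i(x) = (hash(x) + i * hash2(x)) mod TableSize.
int doubleHashInsert(vector<int>& table, int key) {
    int n = (int)table.size();
    int home = key % n;
    int step = hash2(key);
    for (int i = 0; i < n; i++) {
        int idx = (home + i * step) % n;
        if (table[idx] == EMPTY) {       // EMPTY sentinel as in the earlier sketches
            table[idx] = key;
            return idx;
        }
    }
    return -1;                           // no slot found along this probe sequence
}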