DS Module-4
Binary Tree is defined as a tree data structure where each node has at most 2
children. Since each element in a binary tree can have only 2 children, we typically
name them the left and right child.
A Binary tree is represented by a pointer to the topmost node (commonly known as the
“root”) of the tree. If the tree is empty, then the value of the root is NULL. Each node of a
Binary Tree contains the following parts:
1. Data
2. Pointer to left child
3. Pointer to right child
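In code, such a node is typically modelled as a small class holding the data and the two child references. The following is a minimal illustrative sketch (the class and field names are not tied to any particular library):

// A minimal binary tree node
class Node {
    int data;    // 1. Data
    Node left;   // 2. Pointer to left child (null if absent)
    Node right;  // 3. Pointer to right child (null if absent)

    Node(int value) {
        data = value;
        left = right = null;
    }
}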
Basic Operations on a Binary Tree:
Inserting an element.
Removing an element.
Searching for an element.
Traversing the tree.
Binary Tree (Array implementation)
In the array representation of a binary tree, the nodes are stored level by level in an array: the root is kept at index 0, and for a node stored at index i, its left child is stored at index 2*i + 1 and its right child at index 2*i + 2. (Equivalently, if the numbering starts from 1 instead of 0, the children of the node at index i are at indices 2*i and 2*i + 1.) An empty slot in the array means that the corresponding node is absent. The numbering of nodes can therefore start either from 0 to (n-1) or from 1 to n.
Implementation:
// JAVA implementation of tree using array
import java.io.*;
import java.lang.*;
import java.util.*;

// Class 1
// Main (driver) class
class Tree {
    public static void main(String[] args)
    {
        Array_imp obj = new Array_imp();
        obj.Root("A");
        obj.set_Left("B", 0);
        obj.set_Right("C", 0);
        obj.set_Left("D", 1);
        obj.set_Right("E", 1);
        obj.set_Right("F", 2);
        obj.print_Tree();
    }
}

// Class 2
// Helper class that stores the tree in an array
class Array_imp {
    static int root = 0;
    static String[] str = new String[10];

    // Method 1: store the root value at index 0
    public void Root(String key) { str[0] = key; }

    // Method 2: store the left child of the node kept at index root
    public void set_Left(String key, int root)
    {
        int t = (root * 2) + 1;
        if (str[root] == null) {
            System.out.printf(
                "Can't set child at %d, no parent found\n", t);
        }
        else {
            str[t] = key;
        }
    }

    // Method 3: store the right child of the node kept at index root
    public void set_Right(String key, int root)
    {
        int t = (root * 2) + 2;
        if (str[root] == null) {
            System.out.printf(
                "Can't set child at %d, no parent found\n", t);
        }
        else {
            str[t] = key;
        }
    }

    // Method 4: print the array; '-' marks an empty slot
    public void print_Tree()
    {
        for (int i = 0; i < 10; i++) {
            if (str[i] != null)
                System.out.print(str[i]);
            else
                System.out.print("-");
        }
    }
}
Output
ABCDE-F---
Time complexity: O(1) to place the root or a child at its index; printing the tree takes O(n).
Space complexity: O(n) for the array that stores the nodes.
Unlike linear data structures (Array, Linked List, Queues, Stacks, etc) which have
only one logical way to traverse them, trees can be traversed in different ways.
A Tree Data Structure can be traversed in following ways:
1. Depth First Search or DFS
1. Inorder Traversal
2. Preorder Traversal
3. Postorder Traversal
2. Level Order Traversal or Breadth First Search or BFS
3. Boundary Traversal
4. Diagonal Traversal
Inorder Traversal:
Algorithm Inorder(tree)
1. Traverse the left subtree, i.e., call Inorder(left->subtree)
2. Visit the root.
3. Traverse the right subtree, i.e., call Inorder(right->subtree)
// Java program for inorder tree traversal
class Node {
    int key;
    Node left, right;
    Node(int item) { key = item; }
}

class BinaryTree {
    Node root;

    // Inorder: left subtree, then the node itself, then the right subtree
    void printInorder(Node node) {
        if (node == null)
            return;
        printInorder(node.left);
        System.out.print(node.key + " ");
        printInorder(node.right);
    }

    // Driver code
    public static void main(String[] args) {
        BinaryTree tree = new BinaryTree();
        tree.root = new Node(1);
        tree.root.left = new Node(2);
        tree.root.right = new Node(3);
        tree.root.left.left = new Node(4);
        tree.root.left.right = new Node(5);
        // Function call
        System.out.println("Inorder traversal of binary tree is");
        tree.printInorder(tree.root);
    }
}
Output
Inorder traversal of binary tree is
4 2 5 1 3
Time Complexity: O(N)
Auxiliary Space: If we don’t consider the size of the stack for function calls then
O(1) otherwise O(h) where h is the height of the tree.
Preorder Traversal:
Algorithm Preorder(tree)
1. Visit the root.
2. Traverse the left subtree, i.e., call Preorder(left->subtree)
3. Traverse the right subtree, i.e., call Preorder(right->subtree)
Uses of Preorder:
Preorder traversal is used to create a copy of the tree. Preorder traversal is also used
to get prefix expressions on an expression tree.
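For example, for a hypothetical expression tree built for (a + b) * c, with * at the root, the subtree for a + b as its left child and c as its right child, preorder visits the root first and yields the prefix expression * + a b c.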
Code implementation of Preorder traversal:
// Java program for preorder tree traversal
// Class containing left and right child of current node and key value
class Node {
    int key;
    Node left, right;
    Node(int item) { key = item; }
}

class BinaryTree {
    Node root;

    // Preorder: the node itself, then the left subtree, then the right subtree
    void printPreorder(Node node) {
        if (node == null)
            return;
        System.out.print(node.key + " ");
        printPreorder(node.left);
        printPreorder(node.right);
    }

    // Driver code
    public static void main(String[] args) {
        BinaryTree tree = new BinaryTree();
        tree.root = new Node(1);
        tree.root.left = new Node(2);
        tree.root.right = new Node(3);
        tree.root.left.left = new Node(4);
        tree.root.left.right = new Node(5);
        // Function call
        System.out.println("Preorder traversal of binary tree is");
        tree.printPreorder(tree.root);
    }
}
Output
Preorder traversal of binary tree is
1 2 4 5 3
Time Complexity: O(N)
Auxiliary Space: If we don’t consider the size of the stack for function calls then
O(1) otherwise O(h) where h is the height of the tree.
Postorder Traversal:
Algorithm Postorder(tree)
1. Traverse the left subtree, i.e., call Postorder(left->subtree)
2. Traverse the right subtree, i.e., call Postorder(right->subtree)
3. Visit the root
Uses of Postorder:
Postorder traversal is used to delete a tree, because a node can only be freed after both of its subtrees have been processed. Postorder traversal is also useful to get the postfix expression of an expression tree.
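For the same hypothetical expression tree for (a + b) * c, postorder visits the root last and yields the postfix expression a b + c *.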
Below is the implementation of the postorder traversal:
class Node {
    int key;
    Node left, right;
    Node(int item) { key = item; }
}

class BinaryTree {
    Node root;

    // Postorder: left subtree, then right subtree, then the node itself
    void printPostorder(Node node) {
        if (node == null)
            return;
        printPostorder(node.left);
        printPostorder(node.right);
        System.out.print(node.key + " ");
    }

    // Driver code
    public static void main(String[] args) {
        BinaryTree tree = new BinaryTree();
        tree.root = new Node(1);
        tree.root.left = new Node(2);
        tree.root.right = new Node(3);
        tree.root.left.left = new Node(4);
        tree.root.left.right = new Node(5);
        // Function call
        System.out.println("Postorder traversal of binary tree is");
        tree.printPostorder(tree.root);
    }
}
Binary Search Tree (BST)
A binary search tree arranges its elements in a specific order: the value in the left child must be smaller than the value in its parent node, and the value in the right child must be greater than the value in its parent node. This rule is applied recursively to the left and right subtrees of the root.
For example, consider a BST whose root node is 40: all the nodes of the left subtree are smaller than 40, and all the nodes of the right subtree are greater than 40. The same property holds one level down: the left child of the root (say 30) is greater than its own left child and smaller than its own right child, so it also satisfies the binary search tree property. Such a tree is therefore a binary search tree.
Now suppose the value of the node 35 (the right child of 30) is changed to 55, and we check whether the tree is still a binary search tree. The node 55 lies in the left subtree of the root 40, but 55 is greater than 40, so the tree no longer satisfies the binary search tree property and is not a binary search tree.
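To make the check concrete, a common way to validate the property is to recursively verify that every key lies inside an allowed (min, max) range. The sketch below is a minimal, illustrative validator (the class and method names are assumptions, not from these notes); it reuses the 40/30/35 example and the 35 -> 55 change described above.

// Minimal node type, same shape as in the programs below
class Node {
    int key;
    Node left, right;
    Node(int item) { key = item; }
}

class BSTValidator {
    // Returns true if every key in the subtree lies strictly between min and max
    static boolean isBST(Node node, long min, long max) {
        if (node == null)
            return true;                      // an empty tree is a valid BST
        if (node.key <= min || node.key >= max)
            return false;                     // key falls outside its allowed range
        // Keys in the left subtree must stay below node.key,
        // keys in the right subtree must stay above it
        return isBST(node.left, min, node.key)
            && isBST(node.right, node.key, max);
    }

    public static void main(String[] args) {
        // Root 40 with left child 30, whose right child is 35 (as in the example above)
        Node root = new Node(40);
        root.left = new Node(30);
        root.left.right = new Node(35);
        System.out.println(isBST(root, Long.MIN_VALUE, Long.MAX_VALUE)); // true
        root.left.right.key = 55;             // change 35 to 55
        System.out.println(isBST(root, Long.MIN_VALUE, Long.MAX_VALUE)); // false
    }
}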
Insertion in Binary Search Tree (BST)
// Java program to demonstrate insertion in a Binary Search Tree
import java.io.*;

class Node {
    int key;
    Node left, right;
    Node(int item) { key = item; }
}

class BinarySearchTree {
    // Root of BST
    Node root;

    // Constructor
    BinarySearchTree() { root = null; }

    // Calls insertRec() starting from the root
    void insert(int key) { root = insertRec(root, key); }

    // A recursive function to insert a new key into the BST
    Node insertRec(Node root, int key) {
        // If the (sub)tree is empty, return a new node
        if (root == null) {
            root = new Node(key);
            return root;
        }
        // Otherwise, recur down the tree
        if (key < root.key)
            root.left = insertRec(root.left, key);
        else if (key > root.key)
            root.right = insertRec(root.right, key);
        // Return the (unchanged) node pointer
        return root;
    }

    // A utility function to do an inorder traversal of the BST
    void inorder() { inorderRec(root); }

    void inorderRec(Node root) {
        if (root != null) {
            inorderRec(root.left);
            System.out.print(root.key + " ");
            inorderRec(root.right);
        }
    }

    // Driver Code
    public static void main(String[] args)
    {
        BinarySearchTree tree = new BinarySearchTree();
        /* Let us create following BST
              50
             /  \
            30   70
           / \   / \
          20 40 60 80 */
        tree.insert(50);
        tree.insert(30);
        tree.insert(20);
        tree.insert(40);
        tree.insert(70);
        tree.insert(60);
        tree.insert(80);
        // Print inorder traversal of the BST
        tree.inorder();
    }
}
Output
20 30 40 50 60 70 80
Time Complexity:
The worst-case time complexity of insert operations is O(h) where h is the
height of the Binary Search Tree.
In the worst case, we may have to travel from the root to the deepest leaf
node. The height of a skewed tree may become n and the time complexity
of insertion operation may become O(n).
Auxiliary Space: O(1) for an iterative insertion; the recursive implementation shown above uses O(h) space for the call stack.
Below is an iterative implementation of insertion and inorder traversal in a BST:
// Java program for iterative insertion and inorder traversal in a BST
import java.io.*;
import java.util.*;

class GFG {
    // Driver code
    public static void main(String[] args)
    {
        BST tree = new BST();
        tree.insert(30);
        tree.insert(50);
        tree.insert(15);
        tree.insert(20);
        tree.insert(10);
        tree.insert(40);
        tree.insert(60);
        tree.inorder();
    }
}

class Node {
    Node left;
    int val;
    Node right;
    Node(int val) { this.val = val; }
}

class BST {
    Node root;

    // Iterative insertion: walk down from the root to find the parent
    // of the new node, then attach it on the correct side
    void insert(int key)
    {
        Node node = new Node(key);
        if (root == null) {
            root = node;
            return;
        }
        Node prev = null;
        Node temp = root;
        while (temp != null) {
            if (key < temp.val) {
                prev = temp;
                temp = temp.left;
            }
            else {
                prev = temp;
                temp = temp.right;
            }
        }
        if (key < prev.val)
            prev.left = node;
        else
            prev.right = node;
    }

    // Iterative inorder traversal using an explicit stack
    void inorder()
    {
        Stack<Node> stack = new Stack<>();
        Node temp = root;
        while (temp != null || !stack.isEmpty()) {
            if (temp != null) {
                stack.add(temp);
                temp = temp.left;
            }
            else {
                temp = stack.pop();
                System.out.print(temp.val + " ");
                temp = temp.right;
            }
        }
    }
}
Output
10 15 20 30 40 50 60
The time complexity of inorder traversal is O(n), as each node is visited once.
The auxiliary space is O(h), where h is the height of the tree, since the explicit stack holds the nodes on the current path; for a skewed tree this becomes O(n).
Searching in Binary Search Tree (BST)
// Java program to demonstrate searching in a Binary Search Tree
class Node {
    int key;
    Node left, right;
    Node(int item) { key = item; }
}

class BinarySearchTree {
    Node root;

    // Constructor
    BinarySearchTree() { root = null; }

    // Inserts a new key, recurring down the tree
    Node insert(Node node, int key) {
        if (node == null) {
            return new Node(key);
        }
        // Otherwise, recur down the tree
        if (key < node.key)
            node.left = insert(node.left, key);
        else if (key > node.key)
            node.right = insert(node.right, key);
        return node;
    }

    // Searches for a key; returns the node if found, null otherwise
    Node search(Node root, int key) {
        if (root == null || root.key == key)
            return root;
        return (key < root.key) ? search(root.left, key)
                                : search(root.right, key);
    }

    // Driver Code
    public static void main(String[] args) {
        BinarySearchTree tree = new BinarySearchTree();
        // Inserting nodes
        tree.root = tree.insert(tree.root, 50);
        tree.insert(tree.root, 30);
        tree.insert(tree.root, 20);
        tree.insert(tree.root, 40);
        tree.insert(tree.root, 70);
        tree.insert(tree.root, 60);
        tree.insert(tree.root, 80);

        // Key to be found
        int key = 6;
        // Searching in a BST
        if (tree.search(tree.root, key) == null)
            System.out.println(key + " not found");
        else
            System.out.println(key + " found");

        key = 60;
        // Searching in a BST
        if (tree.search(tree.root, key) == null)
            System.out.println(key + " not found");
        else
            System.out.println(key + " found");
    }
}
Output
6 not found
60 found
Time complexity: O(h), where h is the height of the BST.
Auxiliary Space: O(h), where h is the height of the BST. This is because the
maximum amount of space needed to store the recursion stack would be h.
Deletion in Binary Search Tree (BST)
// Java program to demonstrate deletion in a Binary Search Tree
import java.util.*;

class Node {
    int key;
    Node left, right;
    Node(int item) { key = item; }
}

class BST {
    Node root;

    // Inorder traversal of the BST
    void inorder(Node root) {
        if (root != null) {
            inorder(root.left);
            System.out.print(root.key + " ");
            inorder(root.right);
        }
    }

    // Recursive insertion, as in the earlier programs
    Node insert(Node node, int key) {
        if (node == null)
            return new Node(key);
        if (key < node.key)
            node.left = insert(node.left, key);
        else if (key > node.key)
            node.right = insert(node.right, key);
        return node;
    }

    // Deletes the node with the given key and returns the new subtree root
    Node deleteNode(Node root, int key) {
        // Base case
        if (root == null)
            return root;
        // Recur down the tree until we reach the node to be deleted
        if (key < root.key) {
            root.left = deleteNode(root.left, key);
            return root;
        }
        if (key > root.key) {
            root.right = deleteNode(root.right, key);
            return root;
        }
        // We reach here when root is the node to be deleted.
        // Node with zero or one child: return the other child
        if (root.left == null) {
            Node temp = root.right;
            return temp;
        }
        else if (root.right == null) {
            Node temp = root.left;
            return temp;
        }
        // Node with two children
        else {
            // Find successor: the leftmost node of the right subtree
            Node succParent = root;
            Node succ = root.right;
            while (succ.left != null) {
                succParent = succ;
                succ = succ.left;
            }
            // Detach the successor from its parent
            if (succParent != root)
                succParent.left = succ.right;
            else
                succParent.right = succ.right;
            // Copy the successor's key into root and return
            root.key = succ.key;
            return root;
        }
    }

    // Driver Code
    public static void main(String[] args) {
        BST tree = new BST();
        /* Let us create following BST
              50
             /  \
            30   70
           / \   /
          20 40 60      */
        tree.root = tree.insert(tree.root, 50);
        tree.insert(tree.root, 30);
        tree.insert(tree.root, 20);
        tree.insert(tree.root, 40);
        tree.insert(tree.root, 70);
        tree.insert(tree.root, 60);
        System.out.print("Original BST: ");
        tree.inorder(tree.root);
        System.out.print("\nBST after deleting 20: ");
        tree.root = tree.deleteNode(tree.root, 20);
        tree.inorder(tree.root);
        System.out.print("\nBST after deleting 30: ");
        tree.root = tree.deleteNode(tree.root, 30);
        tree.inorder(tree.root);
        System.out.print("\nBST after deleting 50: ");
        tree.root = tree.deleteNode(tree.root, 50);
        tree.inorder(tree.root);
    }
}
Output
Original BST: 20 30 40 50 60 70
BST after deleting 20: 30 40 50 60 70
BST after deleting 30: 40 50 60 70
BST after deleting 50: 40 60 70
Inorder Traversal:
Below is the implementation of the inorder traversal:
// Java code to implement the approach
import java.io.*;
// Class describing a node of tree
class Node {
    int data;
    Node left;
    Node right;
    Node(int v) { this.data = v; }
}
class GFG {
    // Inorder Traversal
    public static void printInorder(Node node)
    {
        if (node == null)
            return;
        // Traverse left subtree
        printInorder(node.left);
        // Visit node
        System.out.print(node.data + " ");
        // Traverse right subtree
        printInorder(node.right);
    }
    // Driver Code
    public static void main(String[] args)
    {
        // Build the same tree as in the preorder/postorder examples below
        Node root = new Node(100);
        root.left = new Node(20);
        root.right = new Node(200);
        root.left.left = new Node(10);
        root.left.right = new Node(30);
        root.right.left = new Node(150);
        root.right.right = new Node(300);
        // Function call
        System.out.print("Inorder Traversal: ");
        printInorder(root);
    }
}
Preorder Traversal:
Below is the idea to solve the problem:
At first visit the root then traverse left subtree and then traverse the right subtree.
Follow the below steps to implement the idea:
Visit the root and print the data.
Traverse left subtree
Traverse the right subtree
Below is the implementation of the preorder traversal.
// Java code to implement the approach
import java.io.*;
// Class describing a node of tree
class Node {
    int data;
    Node left;
    Node right;
    Node(int v) { this.data = v; }
}
class GFG {
    // Preorder Traversal
    public static void printPreorder(Node node)
    {
        if (node == null)
            return;
        // Visit node
        System.out.print(node.data + " ");
        printPreorder(node.left);
        printPreorder(node.right);
    }
    // Driver Code
    public static void main(String[] args)
    {
        Node root = new Node(100);
        root.left = new Node(20);
        root.right = new Node(200);
        root.left.left = new Node(10);
        root.left.right = new Node(30);
        root.right.left = new Node(150);
        root.right.right = new Node(300);
        // Function call
        System.out.print("Preorder Traversal: ");
        printPreorder(root);
    }
}
Output
Preorder Traversal: 100 20 10 30 200 150 300
Time complexity: O(N), Where N is the number of nodes.
Auxiliary Space: O(H), Where H is the height of the tree
Postorder Traversal:
Below is the idea to solve the problem:
At first traverse left subtree then traverse the right subtree and then visit the root.
Follow the below steps to implement the idea:
Traverse left subtree
Traverse the right subtree
Visit the root and print the data.
Below is the implementation of the postorder traversal:
// Java code to implement the approach
import java.io.*;
// Class describing a node of tree
class Node {
    int data;
    Node left;
    Node right;
    Node(int v) { this.data = v; }
}
class GFG {
    // Postorder Traversal
    public static void printPostorder(Node node)
    {
        if (node == null)
            return;
        printPostorder(node.left);
        printPostorder(node.right);
        // Visit node
        System.out.print(node.data + " ");
    }
    // Driver Code
    public static void main(String[] args)
    {
        Node root = new Node(100);
        root.left = new Node(20);
        root.right = new Node(200);
        root.left.left = new Node(10);
        root.left.right = new Node(30);
        root.right.left = new Node(150);
        root.right.right = new Node(300);
        // Function call
        System.out.print("PostOrder Traversal: ");
        printPostorder(root);
    }
}
Output
PostOrder Traversal: 10 30 20 150 300 200 100
Time complexity: O(N), Where N is the number of nodes.
Auxiliary Space: O(H), Where H is the height of the tree
An AVL tree is defined as a self-balancing Binary Search Tree (BST) in which the difference between the heights of the left and right subtrees of any node cannot be more than one.
The difference between the heights of the left subtree and the right subtree of a node is known as the balance factor of the node: balance factor = height(left subtree) - height(right subtree), and in an AVL tree it must be -1, 0 or +1 for every node.
The AVL tree is named after its inventors, Georgy Adelson-Velsky and Evgenii
Landis, who published it in their 1962 paper “An algorithm for the organization of
information”.
The following operations are performed on an AVL tree:
Insertion
Deletion
Searching [it is similar to performing a search in a BST]
An AVL tree may rotate in one of the following four ways to keep itself balanced:
Left Rotation:
When a node is added into the right subtree of the right subtree, if the tree gets out of
balance, we do a single left rotation.
Right Rotation:
If a node is added to the left subtree of the left subtree and the tree gets out of balance, we do a single right rotation.
Left-Right Rotation:
A left-right rotation is a combination in which first left rotation takes place after that
right rotation executes.
Right-Left Rotation:
A right-left rotation is a combination in which first right rotation takes place after
that left rotation executes.
Disadvantages of AVL Trees:
1. It is difficult to implement.
2. It has high constant factors for some of the operations.
3. Less used compared to Red-Black trees.
4. Due to its rather strict balance, AVL trees provide complicated insertion and
removal operations as more rotations are performed.
5. Take more processing for balancing.
Below is the Java implementation of insertion in an AVL tree:
// Java program to insert a node in an AVL tree
class Node {
    int key, height;
    Node left, right;
    Node(int d) {
        key = d;
        height = 1;
    }
}

class AVLTree {
    Node root;

    // A utility function to get the height of a node
    int height(Node N) {
        if (N == null)
            return 0;
        return N.height;
    }

    // A utility function to get maximum of two integers
    int max(int a, int b) {
        return (a > b) ? a : b;
    }

    // Right rotate the subtree rooted with y
    Node rightRotate(Node y) {
        Node x = y.left;
        Node T2 = x.right;
        // Perform rotation
        x.right = y;
        y.left = T2;
        // Update heights
        y.height = max(height(y.left), height(y.right)) + 1;
        x.height = max(height(x.left), height(x.right)) + 1;
        // Return new root
        return x;
    }

    // Left rotate the subtree rooted with x
    Node leftRotate(Node x) {
        Node y = x.right;
        Node T2 = y.left;
        // Perform rotation
        y.left = x;
        x.right = T2;
        // Update heights
        x.height = max(height(x.left), height(x.right)) + 1;
        y.height = max(height(y.left), height(y.right)) + 1;
        // Return new root
        return y;
    }

    // Get the balance factor of node N
    int getBalance(Node N) {
        if (N == null)
            return 0;
        return height(N.left) - height(N.right);
    }

    Node insert(Node node, int key) {
        /* 1. Perform the normal BST insertion */
        if (node == null)
            return new Node(key);
        if (key < node.key)
            node.left = insert(node.left, key);
        else if (key > node.key)
            node.right = insert(node.right, key);
        else // Duplicate keys are not allowed
            return node;

        /* 2. Update height of this ancestor node */
        node.height = 1 + max(height(node.left),
                              height(node.right));

        /* 3. Get the balance factor to check whether this node became
              unbalanced */
        int balance = getBalance(node);

        // Left Left Case
        if (balance > 1 && key < node.left.key)
            return rightRotate(node);
        // Right Right Case
        if (balance < -1 && key > node.right.key)
            return leftRotate(node);
        // Left Right Case
        if (balance > 1 && key > node.left.key) {
            node.left = leftRotate(node.left);
            return rightRotate(node);
        }
        // Right Left Case
        if (balance < -1 && key < node.right.key) {
            node.right = rightRotate(node.right);
            return leftRotate(node);
        }
        /* return the (unchanged) node pointer */
        return node;
    }

    // A utility function to print the preorder traversal of the tree.
    void preOrder(Node node) {
        if (node != null) {
            System.out.print(node.key + " ");
            preOrder(node.left);
            preOrder(node.right);
        }
    }

    // Driver code
    public static void main(String[] args) {
        AVLTree tree = new AVLTree();
        tree.root = tree.insert(tree.root, 10);
        tree.root = tree.insert(tree.root, 20);
        tree.root = tree.insert(tree.root, 30);
        tree.root = tree.insert(tree.root, 40);
        tree.root = tree.insert(tree.root, 50);
        tree.root = tree.insert(tree.root, 25);
        /* The constructed AVL Tree would be
                30
               /  \
             20    40
            /  \     \
           10  25    50  */
        System.out.println("Preorder traversal of" +
                           " the constructed AVL tree is");
        tree.preOrder(tree.root);
    }
}
Output
Preorder traversal of the constructed AVL tree is
30 20 10 25 40 50
Complexity Analysis
Time Complexity: O(log n) per insertion, so O(n*log(n)) to build the tree by inserting n keys.
Auxiliary Space: O(1), ignoring the O(log n) recursion stack used by the insert function.
Splay Trees
A splay tree is a self-adjusting binary search tree: every time a key is accessed or inserted, it is moved to the root through a sequence of rotations called splaying. The rotations used are the following:
1) Zig Rotation:
The Zig Rotation in splay trees operates in a similar fashion to the single right rotation in AVL Tree rotations. During this rotation, nodes shift one position to the right from their current location.
2) Zag Rotation:
The Zag Rotation in splay trees operates in a similar fashion to the single left rotation in AVL Tree rotations. During this rotation, nodes shift one position to the left from their current location.
3) Zig-Zig Rotation:
The Zig-Zig Rotation in splay trees is a double zig rotation. This rotation results in
nodes shifting two positions to the right from their current location.
4) Zag-Zag Rotation:
In splay trees, the Zag-Zag Rotation is a double zag rotation. This rotation causes
nodes to move two positions to the left from their present position.
5) Zig-Zag Rotation:
The Zig-Zag Rotation in splay trees is a zig rotation followed by a zag rotation. Nodes move one position to the right and then one position to the left from their current location.
6) Zag-Zig Rotation:
The Zag-Zig Rotation in splay trees is a series of zag rotations followed by a zig
rotation. This results in nodes moving one position to the left, followed by a shift
one position to the right from their current location.
// Java Program for the above approach
class Node {
    int key;
    Node left, right;
}

class SplayTree {
    // Allocate a new node with the given key
    static Node newNode(int key) {
        Node node = new Node();
        node.key = key;
        node.left = node.right = null;
        return node;
    }

    // Right rotate (zig) the subtree rooted with x
    static Node rightRotate(Node x) {
        Node y = x.left;
        x.left = y.right;
        y.right = x;
        return y;
    }

    // Left rotate (zag) the subtree rooted with x
    static Node leftRotate(Node x) {
        Node y = x.right;
        x.right = y.left;
        y.left = x;
        return y;
    }

    // Bring the key (or the last node on the search path) to the root
    static Node splay(Node root, int key) {
        if (root == null || root.key == key)
            return root;
        if (key < root.key) {              // key lies in the left subtree
            if (root.left == null)
                return root;
            if (key < root.left.key) {     // Zig-Zig
                root.left.left = splay(root.left.left, key);
                root = rightRotate(root);
            }
            else if (key > root.left.key) { // Zig-Zag
                root.left.right = splay(root.left.right, key);
                if (root.left.right != null)
                    root.left = leftRotate(root.left);
            }
            return (root.left == null) ? root : rightRotate(root);
        }
        else {                             // key lies in the right subtree
            if (root.right == null)
                return root;
            if (key < root.right.key) {    // Zag-Zig
                root.right.left = splay(root.right.left, key);
                if (root.right.left != null)
                    root.right = rightRotate(root.right);
            }
            else if (key > root.right.key) { // Zag-Zag
                root.right.right = splay(root.right.right, key);
                root = leftRotate(root);
            }
            return (root.right == null) ? root : leftRotate(root);
        }
    }

    // Insert a key and splay the new node to the root
    static Node insert(Node root, int key) {
        if (root == null)
            return newNode(key);
        // Bring the closest node to the root
        root = splay(root, key);
        // Key already present
        if (root.key == key)
            return root;
        Node node = newNode(key);
        if (key < root.key) {
            node.right = root;
            node.left = root.left;
            root.left = null;
        }
        else {
            node.left = root;
            node.right = root.right;
            root.right = null;
        }
        return node;
    }

    // Preorder traversal of the splay tree
    static void preOrder(Node node) {
        if (node != null) {
            System.out.print(node.key + " ");
            preOrder(node.left);
            preOrder(node.right);
        }
    }

    // Driver code (the keys used here are illustrative)
    public static void main(String[] args) {
        Node root = newNode(100);
        root.left = newNode(50);
        root.right = newNode(200);
        root.left.left = newNode(40);
        root.left.left.left = newNode(30);
        root.left.left.left.left = newNode(20);
        root = insert(root, 25);
        System.out.println("Preorder traversal of the modified Splay tree:");
        preOrder(root);
    }
}
Output
Preorder traversal of the modified Splay tree:
Splay trees have amortized time complexity of O(log n) for many operations,
making them faster than many other balanced tree data structures in some cases.
Splay trees are self-adjusting, meaning that they automatically balance themselves as
items are inserted and removed. This can help to avoid the performance degradation
that can occur when a tree becomes unbalanced.
Splay trees can have worst-case time complexity of O(n) for some operations,
making them less predictable than other balanced tree data structures like AVL trees
or red-black trees.
Splay trees may not be suitable for certain applications where predictable
performance is required.
Red-Black Trees
A Red-Black tree is a binary search tree in which every node is colored either red or black. It is a type of self-balancing binary search tree with an efficient worst-case running time.
The Red-Black tree satisfies all the properties of binary search tree in addition to
that it satisfies following additional properties –
1. Root property: The root is black.
2. External property: Every leaf (Leaf is a NULL child of a node) is black in Red-
Black tree.
3. Internal property: The children of a red node are black. Hence, the parent of a red node must be a black node.
4. Depth property: All the leaves have the same black depth.
5. Path property: Every simple path from root to descendant leaf node contains same
number of black nodes.
The result of all these above-mentioned properties is that the Red-Black tree is
roughly balanced.
Rules That Every Red-Black Tree Follows:
1. Every node has a color either red or black.
2. The root of the tree is always black.
3. There are no two adjacent red nodes (A red node cannot have a red parent
or red child).
4. Every path from a node (including root) to any of its descendants NULL
nodes has the same number of black nodes.
5. Every leaf (i.e., NULL node) must be colored BLACK.
Why Red-Black Trees?
Most of the BST operations (e.g., search, max, min, insert, delete, etc.) take O(h)
time where h is the height of the BST. The cost of these operations may become
O(n) for a skewed Binary tree. If we make sure that the height of the tree remains
O(log n) after every insertion and deletion, then we can guarantee an upper bound of
O(log n) for all these operations. The height of a Red-Black tree is always O(log n), where n is the total number of nodes (elements) in the tree; in fact, it is at most 2*log2(n+1).
Comparison with AVL Tree:
The AVL trees are more balanced compared to Red-Black Trees, but they may cause
more rotations during insertion and deletion. So if your application involves frequent
insertions and deletions, then Red-Black trees should be preferred. And if the
insertions and deletions are less frequent and search is a more frequent operation,
then AVL tree should be preferred over the Red-Black Tree.
How does a Red-Black Tree ensure balance?
A simple way to understand the balancing is that a chain of 3 nodes is not possible in a Red-Black tree: whichever combination of colors we try for the three nodes, at least one of the Red-Black tree properties is violated.
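These notes do not develop a full Red-Black implementation, but as a rough, illustrative sketch (the class and method names below are assumptions, not from a specific library), a Red-Black node is simply a BST node carrying one extra colour bit, and the "no two adjacent red nodes" rule can be checked recursively:

// Minimal sketch of a red-black node: a BST node plus a colour flag
class RBNode {
    int key;
    boolean isRed;            // true = RED, false = BLACK
    RBNode left, right;       // NULL children count as black leaves
    RBNode(int key) {
        this.key = key;
        this.isRed = true;    // new nodes are inserted red and then fixed up
    }
}

class RBCheck {
    // Returns true if no red node has a red child (the internal property)
    static boolean noRedRedViolation(RBNode node) {
        if (node == null)
            return true;      // NULL leaves are black
        if (node.isRed
            && ((node.left != null && node.left.isRed)
                || (node.right != null && node.right.isRed)))
            return false;     // two adjacent red nodes found
        return noRedRedViolation(node.left)
            && noRedRedViolation(node.right);
    }

    public static void main(String[] args) {
        // A chain of 3 nodes: 10 -> 20 -> 30, all coloured red
        RBNode root = new RBNode(10);
        root.right = new RBNode(20);
        root.right.right = new RBNode(30);
        System.out.println(noRedRedViolation(root)); // false: reds are adjacent
        root.right.isRed = false;                    // recolour the middle node black
        System.out.println(noRedRedViolation(root)); // true for this one property
    }
}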