
UNIT 2

HIERARCHICAL DATA STRUCTURES


2.1 Binary Search Tree
2.1.1 Basics
What is a tree?
A tree is a data structure used to represent data in hierarchical form. It can be
defined as a collection of objects or entities, called nodes, that are linked
together to simulate a hierarchy. A tree is a non-linear data structure, as the
data in a tree is not stored linearly or sequentially.
What is a Binary Search tree?
A binary search tree arranges its elements in a specific order. In a binary
search tree, the key of every node in the left subtree must be smaller than the
parent's key, and the key of every node in the right subtree must be greater.
This rule is applied recursively to the left and right subtrees of the root.
Let's understand the concept of Binary search tree with an example.

In the above figure, we can observe that the root node is 40, all the nodes of
the left subtree are smaller than the root node, and all the nodes of the right
subtree are greater than the root node.
Similarly, the left child of the root is greater than its own left child and
smaller than its own right child, so it also satisfies the binary-search-tree
property. Therefore, we can say that the tree in the above image is a binary
search tree.
Now suppose we change the value of node 35 to 55 in the above tree, and check
whether the result is still a binary search tree.
In the modified tree, the value of the root node is 40, which is greater than its
left child 30 but smaller than 55, the right child of 30. So the tree no longer
satisfies the binary-search-tree property, and is therefore not a binary search
tree.
Advantages of Binary search tree
o Searching for an element in a binary search tree is easy, because at each
node we know which subtree must contain the desired element.
o Compared to arrays and linked lists, insertion and deletion operations are
faster in a BST.
2.1.2 Querying a Binary search tree
1. Searching: The TREE-SEARCH (x, k) algorithm searches the subtree rooted at x
for a node whose key equals k. It returns a pointer to the node if one exists,
and NIL otherwise.
TREE-SEARCH (x, k)
1. if x = NIL or k = key [x]
2. then return x
3. if k < key [x]
4. then return TREE-SEARCH (left [x], k)
5. else return TREE-SEARCH (right [x], k)
Clearly, this algorithm runs in O (h) time, where h is the height of the tree.
The iterative version of the above algorithm is very easy to implement.
ITERATIVE-TREE-SEARCH (x, k)
1. while x ≠ NIL and k ≠ key [x]
2. do if k < key [x]
3. then x ← left [x]
4. else x ← right [x]
5. return x
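The iterative pseudocode above translates directly into Python. The following is a minimal sketch, assuming a simple node class with `key`, `left`, and `right` attributes (the class and names are illustrative, not from the text):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def tree_search(x, k):
    """Iterative BST search: return the node with key k, or None (NIL)."""
    while x is not None and k != x.key:
        # At each node, the BST property tells us which subtree to follow.
        x = x.left if k < x.key else x.right
    return x
```

As in the pseudocode, the loop follows a single root-to-leaf path, so the running time is O(h).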
2. Minimum and Maximum: An item in a binary search tree whose key is a
minimum can always be found by following left child pointers from the root until
a NIL is encountered. The following procedure returns a pointer to the minimum
element in the subtree rooted at a given node x.
TREE-MINIMUM (x)
1. while left [x] ≠ NIL
2. do x ← left [x]
3. return x
TREE-MAXIMUM (x)
1. while right [x] ≠ NIL
2. do x ← right [x]
3. return x
3. Successor and predecessor: Given a node in a binary search tree, we
sometimes need to find its successor in the sorted order determined by an
inorder tree walk. If all keys are distinct, the successor of a node x is the node
with the smallest key greater than key[x]. The structure of a binary search tree
allows us to determine the successor of a node without ever comparing keys.
The following procedure returns the successor of a node x in a binary search
tree if it exists, and NIL if x has the greatest key in the tree:
TREE-SUCCESSOR (x)
1. if right [x] ≠ NIL
2. then return TREE-MINIMUM (right [x])
3. y ← p[x]
4. while y ≠ NIL and x = right [y]
5. do x ← y
6. y ← p[y]
7. return y
The code for TREE-SUCCESSOR is broken into two cases. If the right subtree of
node x is nonempty, then the successor of x is just the leftmost node in the right
subtree, which we find in line 2 by calling TREE-MINIMUM (right [x]). On the other
hand, if the right subtree of node x is empty and x has a successor y, then y is
the lowest ancestor of x whose left child is also an ancestor of x. To find y, we
quickly go up the tree from x until we encounter a node that is the left child of its
parent; lines 3-7 of TREE-SUCCESSOR handle this case.
The running time of TREE-SUCCESSOR on a tree of height h is O (h) since we
either follow a simple path up the tree or follow a simple path down the tree. The
procedure TREE-PREDECESSOR, which is symmetric to TREE-SUCCESSOR, also
runs in time O (h).
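The two cases of TREE-SUCCESSOR can be sketched in Python; this assumes nodes carry a `parent` pointer in addition to `key`, `left`, and `right` (names are illustrative):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

def tree_minimum(x):
    """Follow left pointers to the smallest key in x's subtree."""
    while x.left is not None:
        x = x.left
    return x

def tree_successor(x):
    # Case 1: nonempty right subtree -> successor is its leftmost node.
    if x.right is not None:
        return tree_minimum(x.right)
    # Case 2: climb until we leave some node's left subtree;
    # that ancestor is the successor (None if x holds the maximum key).
    y = x.parent
    while y is not None and x is y.right:
        x, y = y, y.parent
    return y
```

Both cases follow a single path up or down the tree, matching the O(h) bound stated above.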
4. Insertion in Binary Search Tree: To insert a new value v into a binary search
tree T, we use the procedure TREE-INSERT. The procedure takes a node z for
which key [z] = v, left [z] = NIL, and right [z] = NIL. It modifies T and some of
the attributes of z in such a way that z is inserted into an appropriate position
in the tree.
TREE-INSERT (T, z)
1. y ← NIL
2. x ← root [T]
3. while x ≠ NIL
4. do y ← x
5. if key [z] < key [x]
6. then x ← left [x]
7. else x ← right [x]
8. p [z] ← y
9. if y = NIL
10. then root [T] ← z
11. else if key [z] < key [y]
12. then left [y] ← z
13. else right [y] ← z
For Example:

Fig: Working of TREE-INSERT


Suppose we want to insert an item with key 13 into the binary search tree
shown (root key 12, right child 18, whose left child is 15).
1. x ← root, y ← NIL
2. x ≠ NIL, so y ← x (node 12)
3. key [z] < key [x]? No: 13 > 12
4. So x ← right [x] (node 18)
5. Again x ≠ NIL, so y ← x (node 18)
6. key [z] < key [x]? Yes: 13 < 18
7. So x ← left [x] (node 15)
8. Again x ≠ NIL, so y ← x (node 15)
9. key [z] < key [x]? Yes: 13 < 15
10. So x ← left [x], and now x = NIL
11. The loop ends, and p [z] ← y (node 15)
Now node z will become either the left or the right child of its parent (y).
1. key [z] < key [y]
2. 13 < 15
3. left [y] ← z
So the node is inserted as the left child of node 15.
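The trace above can be reproduced with a Python sketch of TREE-INSERT, assuming the same illustrative node class with `key`, `left`, `right`, and `parent` attributes:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

def tree_insert(root, z):
    """Insert node z into the BST rooted at root; return the (new) root."""
    y, x = None, root
    while x is not None:            # walk down, remembering the parent y
        y = x
        x = x.left if z.key < x.key else x.right
    z.parent = y
    if y is None:                   # the tree was empty
        return z
    if z.key < y.key:               # z becomes y's left or right child
        y.left = z
    else:
        y.right = z
    return root
```

Inserting 12, 18, 15, 13 in that order reproduces the example: 13 ends up as the left child of node 15.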
5. Deletion in Binary Search Tree: When deleting a node from a tree, it is
essential that the ordering relationships implicit in the tree are maintained.
We now consider the deletion of nodes from a binary search tree.
There are three distinct cases:
1. Nodes with no children: This case is trivial. Simply set the parent's
pointer to the node to be deleted to nil and delete the node.
2. Nodes with one child: When z has no left child, we replace z by its
right child, which may or may not be NIL. When z has no right child,
we replace z by its left child.
3. Nodes with both children: When z has both a left and a right child, we
find z's successor y, which lies in z's right subtree and has no left child
(the successor of z is the node with the minimum key in z's right
subtree, and so it has no left child).
o If y is z's right child, then we replace z by y.
o Otherwise, y lies within z's right subtree but is not z's right child. In
this case, we first replace y by its own right child and then replace z
by y.
TREE-DELETE (T, z)
1. if left [z] = NIL or right [z] = NIL
2. then y ← z
3. else y ← TREE-SUCCESSOR (z)
4. if left [y] ≠ NIL
5. then x ← left [y]
6. else x ← right [y]
7. if x ≠ NIL
8. then p[x] ← p [y]
9. if p[y] = NIL
10. then root [T] ← x
11. else if y = left [p[y]]
12. then left [p[y]] ← x
13. else right [p[y]] ← x
14. if y ≠ z
15. then key [z] ← key [y]
16. if y has other fields, copy them, too
17. return y
The Procedure runs in O (h) time on a tree of height h.
For Example: Deleting a node z from a binary search tree. Node z may be the
root, a left child of node q, or a right child of q.
z has only a right child r. We replace z by r.
z has only a left child l. We replace z by l.
Node z has two children; its left child is node l, its right child is its successor y,
and y's right child is node x. We replace z by y, updating y's left child to become
l, but leaving x as y's right child.
Node z has two children (left child l and right child r), and its successor y ≠ r lies
within the subtree rooted at r. We replace y by its own right child x, and we set
y to be r's parent. Then, we set y to be q's child and the parent of l.
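The three cases can also be sketched in Python. This is a simplified recursive variant that copies the successor's key into z rather than splicing nodes as TREE-DELETE does; node and function names are our own:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

def tree_delete(root, k):
    """Delete key k from the BST rooted at root; return the new subtree root."""
    if root is None:
        return None
    if k < root.key:
        root.left = tree_delete(root.left, k)
    elif k > root.key:
        root.right = tree_delete(root.right, k)
    else:
        # Case 1 and 2: zero or one child -> splice the node out.
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        # Case 3: two children -> copy the successor's key, then delete it.
        s = root.right
        while s.left is not None:   # successor = minimum of right subtree
            s = s.left
        root.key = s.key
        root.right = tree_delete(root.right, s.key)
    return root
```

Like the pseudocode, this walks a single root-to-leaf path, so it runs in O(h) time.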
2.2 Red Black Tree
A Red-Black Tree is a category of self-balancing binary search tree. It was
created in 1972 by Rudolf Bayer, who termed them "symmetric binary B-trees."
A red-black tree is a binary search tree in which each node has a color, either
red or black, as an extra attribute. By constraining the node colors on any
simple path from the root to a leaf, red-black trees ensure that no such path is
more than twice as long as any other, so that the tree remains approximately
balanced.
2.2.1 Properties
A red-black tree must satisfy these properties:
1. Every node is either red or black, and the root is always black.
2. Every NIL leaf is black. This ensures that every non-NIL node has two
children.
3. Black Children Rule: The children of any red node are black.
4. Black Height Rule: For each node v, there exists an integer bh (v)
such that every downward path from v to a NIL contains exactly bh (v)
black real (i.e. non-NIL) nodes. Call this quantity the black height of v. We
define the black height of an RB tree to be the black height of its root.
A tree T is an almost red-black tree (ARB tree) if the root is red, but all the
other conditions above hold.

2.2.2 Operations on RB Trees


The search-tree operations TREE-INSERT and TREE-DELETE, when run on a red-
black tree with n keys, take O (log n) time. Because they modify the tree, the
result may violate the red-black properties. To restore these properties, we
must change the colors of some of the nodes in the tree and also change the
pointer structure.
1. Rotation:
Restructuring operations on red-black trees can generally be expressed more
clearly in terms of the rotation operation.

Clearly, the in-order sequence (A x B y C) is preserved by the rotation
operation. Therefore, if we start with a BST and only restructure using
rotations, we will still have a BST; i.e., rotations do not break the BST property.
LEFT-ROTATE (T, x)
1. y ← right [x]
2. right [x] ← left [y]
3. p [left[y]] ← x
4. p[y] ← p[x]
5. if p[x] = nil [T]
6. then root [T] ← y
7. else if x = left [p[x]]
8. then left [p[x]] ← y
9. else right [p[x]] ← y
10. left [y] ← x
11. p [x] ← y
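LEFT-ROTATE can be sketched in Python with parent pointers; `None` plays the role of nil [T], and the node class is the same illustrative layout used earlier:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

def left_rotate(root, x):
    """Rotate x down-left; x's right child y takes its place. Return tree root."""
    y = x.right
    x.right = y.left              # turn y's left subtree into x's right subtree
    if y.left is not None:
        y.left.parent = x
    y.parent = x.parent           # link x's former parent to y
    if x.parent is None:
        root = y                  # x was the root
    elif x is x.parent.left:
        x.parent.left = y
    else:
        x.parent.right = y
    y.left = x                    # put x on y's left
    x.parent = y
    return root
```

Only a constant number of pointers change, so a rotation runs in O(1) time and, as noted above, preserves the in-order sequence.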
Example: Draw the complete binary tree of height 3 on the keys {1, 2, 3... 15}.
Add the NIL leaves and color the nodes in three different ways such that the
black heights of the resulting trees are: 2, 3 and 4.
Solution:
Tree with black-height-2

Tree with black-height-3


Tree with black-height-4

2. Insertion:
o Insert the new node the way it is done in Binary Search Trees.

o Color the node red

o If an inconsistency arises for the red-black tree, fix the tree according to
the type of discrepancy.
A discrepancy can result from a parent and a child both being red. The type of
discrepancy is determined by the location of the node with respect to its
grandparent, and by the color of the parent's sibling.
RB-INSERT (T, z)
1. y ← nil [T]
2. x ← root [T]
3. while x ≠ NIL [T]
4. do y ← x
5. if key [z] < key [x]
6. then x ← left [x]
7. else x ← right [x]
8. p [z] ← y
9. if y = nil [T]
10. then root [T] ← z
11. else if key [z] < key [y]
12. then left [y] ← z
13. else right [y] ← z
14. left [z] ← nil [T]
15. right [z] ← nil [T]
16. color [z] ← RED
17. RB-INSERT-FIXUP (T, z)
After inserting the new node, coloring it black could violate the black-height
condition, while coloring it red could violate the coloring conditions (the root
is black, and a red node has no red children). Since black-height violations are
harder to fix, we color the new node red. After this, if there is any color
violation, we correct it with the RB-INSERT-FIXUP procedure.
RB-INSERT-FIXUP (T, z)
1. while color [p[z]] = RED
2. do if p [z] = left [p[p[z]]]
3. then y ← right [p[p[z]]]
4. if color [y] = RED
5. then color [p[z]] ← BLACK //Case 1
6. color [y] ← BLACK //Case 1
7. color [p[p[z]]] ← RED //Case 1
8. z ← p[p[z]] //Case 1
9. else if z = right [p[z]]
10. then z ← p [z] //Case 2
11. LEFT-ROTATE (T, z) //Case 2
12. color [p[z]] ← BLACK //Case 3
13. color [p [p[z]]] ← RED //Case 3
14. RIGHT-ROTATE (T, p [p[z]]) //Case 3
15. else (same as then clause with "right" and "left" exchanged)
16. color [root[T]] ← BLACK
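RB-INSERT and its fixup combine into a runnable Python sketch. It follows the three cases above with a shared NIL sentinel standing in for nil [T]; the class and attribute names are our own, not from the text:

```python
RED, BLACK = "R", "B"

class RBNode:
    def __init__(self, key, color=RED):
        self.key, self.color = key, color
        self.left = self.right = self.parent = None

NIL = RBNode(None, BLACK)          # shared black sentinel leaf

class RBTree:
    def __init__(self):
        self.root = NIL

    def _left_rotate(self, x):
        y = x.right
        x.right = y.left
        if y.left is not NIL:
            y.left.parent = x
        y.parent = x.parent
        if x.parent is NIL:
            self.root = y
        elif x is x.parent.left:
            x.parent.left = y
        else:
            x.parent.right = y
        y.left = x
        x.parent = y

    def _right_rotate(self, x):    # mirror image of _left_rotate
        y = x.left
        x.left = y.right
        if y.right is not NIL:
            y.right.parent = x
        y.parent = x.parent
        if x.parent is NIL:
            self.root = y
        elif x is x.parent.right:
            x.parent.right = y
        else:
            x.parent.left = y
        y.right = x
        x.parent = y

    def insert(self, key):
        # Ordinary BST insert, then color the new node red and fix up.
        z = RBNode(key)
        z.left = z.right = NIL
        y, x = NIL, self.root
        while x is not NIL:
            y = x
            x = x.left if z.key < x.key else x.right
        z.parent = y
        if y is NIL:
            self.root = z
        elif z.key < y.key:
            y.left = z
        else:
            y.right = z
        self._fixup(z)

    def _fixup(self, z):
        while z.parent.color == RED:
            if z.parent is z.parent.parent.left:
                y = z.parent.parent.right          # the uncle
                if y.color == RED:                  # Case 1: recolor, move up
                    z.parent.color = y.color = BLACK
                    z.parent.parent.color = RED
                    z = z.parent.parent
                else:
                    if z is z.parent.right:         # Case 2: reduce to Case 3
                        z = z.parent
                        self._left_rotate(z)
                    z.parent.color = BLACK          # Case 3: recolor + rotate
                    z.parent.parent.color = RED
                    self._right_rotate(z.parent.parent)
            else:                                   # mirror cases
                y = z.parent.parent.left
                if y.color == RED:
                    z.parent.color = y.color = BLACK
                    z.parent.parent.color = RED
                    z = z.parent.parent
                else:
                    if z is z.parent.left:
                        z = z.parent
                        self._right_rotate(z)
                    z.parent.color = BLACK
                    z.parent.parent.color = RED
                    self._left_rotate(z.parent.parent)
        self.root.color = BLACK
```

Inserting the example keys 41, 38, 31, 12, 19, 8 exercises all three cases and leaves a valid red-black tree.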
Example: Show the red-black trees that result after successively inserting the
keys 41,38,31,12,19,8 into an initially empty red-black tree.
Solution:
Insert 41

Insert 19
Thus the final tree is

3. Deletion:
First, search for an element to be deleted
o If the element to be deleted is in a node with only a left child, swap this
node with the one containing the largest element in the left subtree.
(That node has no right child.)
o If the element to be deleted is in a node with only a right child, swap this
node with the one containing the smallest element in the right subtree.
(That node has no left child.)
o If the element to be deleted is in a node with both a left child and a right
child, then swap in either of the above two ways. While swapping, swap
only the keys, not the colors.
o The item to be deleted now has only a left child or only a right child.
Replace this node with its sole child. This may violate the red constraint
or the black constraint. A violation of the red constraint can be easily fixed.
o If the deleted node is black, the black constraint is violated. The
elimination of a black node y causes every path that contained y to have
one fewer black node.
o Two cases arise:
o The replacing node is red, in which case we merely color it black to
make up for the loss of one black node.
o The replacing node is black.
The procedure RB-DELETE is a minor modification of the TREE-DELETE
procedure. After splicing out a node, it calls an auxiliary procedure
RB-DELETE-FIXUP that changes colors and performs rotations to restore the
red-black properties.
RB-DELETE (T, z)
1. if left [z] = nil [T] or right [z] = nil [T]
2. then y ← z
3. else y ← TREE-SUCCESSOR (z)
4. if left [y] ≠ nil [T]
5. then x ← left [y]
6. else x ← right [y]
7. p [x] ← p [y]
8. if p[y] = nil [T]
9. then root [T] ← x
10. else if y = left [p[y]]
11. then left [p[y]] ← x
12. else right [p[y]] ← x
13. if y≠ z
14. then key [z] ← key [y]
15. copy y's satellite data into z
16. if color [y] = BLACK
17. then RB-delete-FIXUP (T, x)
18. return y

RB-DELETE-FIXUP (T, x)
1. while x ≠ root [T] and color [x] = BLACK
2. do if x = left [p[x]]
3. then w ← right [p[x]]
4. if color [w] = RED
5. then color [w] ← BLACK //Case 1
6. color [p[x]] ← RED //Case 1
7. LEFT-ROTATE (T, p [x]) //Case 1
8. w ← right [p[x]] //Case 1
9. If color [left [w]] = BLACK and color [right[w]] = BLACK
10. then color [w] ← RED //Case 2
11. x ← p[x] //Case 2
12. else if color [right [w]] = BLACK
13. then color [left[w]] ← BLACK //Case 3
14. color [w] ← RED //Case 3
15. RIGHT-ROTATE (T, w) //Case 3
16. w ← right [p[x]] //Case 3
17. color [w] ← color [p[x]] //Case 4
18. color [p[x]] ← BLACK //Case 4
19. color [right [w]] ← BLACK //Case 4
20. LEFT-ROTATE (T, p [x]) //Case 4
21. x ← root [T] //Case 4
22. else (same as then clause with "right" and "left" exchanged)
23. color [x] ← BLACK
Example: In a previous example, we built the red-black tree that results from
successively inserting the keys 41, 38, 31, 12, 19, 8 into an initially empty tree.
Now show the red-black trees that result from successively deleting the keys in
the order 8, 12, 19, 31, 38, 41.
Solution:

Delete 38
Delete 41
No Tree.
2.3 B – Tree
2.3.1 Definition
A B-Tree is a specialized m-way tree widely used for disk access. A B-Tree of
order m can have at most m-1 keys and m children per node. One of the main
reasons for using a B-Tree is its capability to store a large number of keys in a
single node, and to handle large key values, while keeping the height of the
tree relatively small.
A B-Tree of order m has all the properties of an m-way tree. In addition, it has
the following properties.
1. Every node in a B-Tree contains at most m children.
2. Every node in a B-Tree, except the root node and leaf nodes, contains at
least ⌈m/2⌉ children.
3. The root node must have at least 2 children, unless it is a leaf.
4. All leaf nodes must be at the same level.
It is not necessary that all nodes contain the same number of children, but
each internal node must have at least ⌈m/2⌉ children.
A B tree of order 4 is shown in the following image.
While performing operations on a B-Tree, a property of the B-Tree may be
violated, such as the minimum number of children a node can have. To
maintain the properties of the B-Tree, the tree may split or join nodes.
2.3.2 Operations on B – Tree
Searching:
Searching in a B-Tree is similar to searching in a binary search tree. For
example, suppose we search for an item 49 in the following B-Tree. The
process goes as follows:
1. Compare item 49 with the root node 78. Since 49 < 78, move to its left
sub-tree.
2. Since 40 < 49 < 56, traverse the sub-tree between 40 and 56.
3. Since 49 > 45, move to the right child of 45 and compare 49 with its keys.
4. A match is found; return.
Searching in a B tree depends upon the height of the tree. The search algorithm
takes O(log n) time to search any element in a B tree.
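The search steps above can be sketched in Python. Here a B-tree node is modeled as a `(keys, children)` pair with sorted keys and `children == []` at the leaves; this representation is our own, chosen for brevity:

```python
def btree_search(node, k):
    """Search key k in a B-tree node of the form (keys, children).
    Leaves have children == []. Return True if k is present."""
    keys, children = node
    i = 0
    while i < len(keys) and k > keys[i]:   # find the first key >= k
        i += 1
    if i < len(keys) and keys[i] == k:     # found in this node
        return True
    if not children:                       # reached a leaf without finding k
        return False
    return btree_search(children[i], k)    # descend into the i-th subtree
```

Each step descends one level and does at most m-1 comparisons, giving the O(log n) search time stated above.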

Inserting
Insertions are done at the leaf level. The following algorithm needs to be
followed in order to insert an item into a B-Tree.
1. Traverse the B-Tree to find the appropriate leaf node into which the new
key can be inserted.
2. If the leaf node contains fewer than m-1 keys, insert the element in
increasing order.
3. Else, if the leaf node already contains m-1 keys, follow these steps:
o Insert the new element in increasing order of elements.
o Split the node into two nodes at the median.
o Push the median element up to its parent node.
o If the parent node also contains m-1 keys, split it too by following
the same steps.
Example:
Insert the node 8 into the B Tree of order 5 shown in the following image.

8 will be inserted to the right of 5, therefore insert 8.

The node now contains 5 keys, which is greater than (5 - 1 = 4) keys. Therefore
split the node at the median, i.e. 8, and push the median up to its parent node,
as shown below.

Deletion
Deletion is also performed at the leaf level. The node to be deleted can be
either a leaf node or an internal node. The following algorithm needs to be
followed in order to delete a node from a B-Tree.
1. Locate the leaf node.
2. If the leaf node holds more than the minimum number of keys
(⌈m/2⌉ - 1), delete the desired key from the node.
3. If the leaf node would drop below ⌈m/2⌉ - 1 keys, replenish it by
borrowing an element from the right or left sibling.
o If the left sibling holds more than the minimum number of elements,
push its largest element up to the parent and move the intervening
element down to the node where the key was deleted.
o If the right sibling holds more than the minimum number of elements,
push its smallest element up to the parent and move the intervening
element down to the node where the key was deleted.
4. If neither sibling holds more than the minimum number of elements,
create a new leaf node by joining the two leaf nodes and the intervening
element of the parent node.
5. If the parent is left with fewer than the minimum number of children,
apply the above process to the parent too.
If the node to be deleted is an internal node, replace it with its in-order
successor or predecessor. Since the successor or predecessor is always in a
leaf node, the process reduces to deleting a key from a leaf node.
Example 1
Delete the node 53 from the B Tree of order 5 shown in the following figure.

53 is present in the right child of element 49. Delete it.


Now, 57 is the only element left in the node, while the minimum number of
elements that must be present in a leaf of a B-Tree of order 5 is 2. Since the
node holds fewer than that, and its left and right siblings have no elements to
spare, merge it with the left sibling and the intervening element of the parent,
i.e. 49.
The final B tree is shown as follows.

2.3.3 Applications
A B-Tree is used to index data and provide fast access to the actual data stored
on disk, since access to a value stored in a large database on disk is a very
time-consuming process.
Searching an un-indexed and unsorted database containing n key values
requires O(n) running time in the worst case. If we use a B-Tree to index this
database, it can be searched in O(log n) time in the worst case.

2.4 Heap
A heap is a complete binary tree; a binary tree is a tree in which each node can
have at most two children. Before learning more about the heap data
structure, we should know what a complete binary tree is.
What is Heap Data Structure?
A heap is a binary-tree-based data structure that satisfies the heap property: in
a max heap, the value of each node is greater than or equal to the values of its
children (in a min heap, less than or equal). This property ensures that the root
node contains the maximum or minimum value (depending on the type of
heap), and the values decrease or increase as you move down the tree.
Types of Heaps
There are two main types of heaps:
 Max Heap: The root node contains the maximum value, and the values
decrease as you move down the tree.
 Min Heap: The root node contains the minimum value, and the values
increase as you move down the tree.
Heap Operations
Common heap operations are:
 Insert: Adds a new element to the heap while maintaining the heap
property.
 Extract Max/Min: Removes the maximum or minimum element from the
heap and returns it.
 Heapify: Converts an arbitrary binary tree into a heap.
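These operations are available directly in Python's standard `heapq` module, which implements a binary min heap on a plain list; for a max heap one common trick is to push negated values:

```python
import heapq

heap = []
for value in [19, 8, 41, 12, 31]:
    heapq.heappush(heap, value)      # insert: O(log n) per push

smallest = heap[0]                   # peek at the minimum: O(1)

# extract-min repeatedly: pops values in ascending order
ordered = [heapq.heappop(heap) for _ in range(len(heap))]
```

`heapq.heapify(some_list)` performs the heapify operation in place in O(n) time.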
Heap Data Structure Applications
Heaps have various applications, like:
 Heaps are commonly used to implement priority queues, where elements
are retrieved based on their priority (maximum or minimum value).
 Heapsort is a sorting algorithm that uses a heap to sort an array in
ascending or descending order.
 Heaps are used in graph algorithms like Dijkstra’s algorithm and Prim’s
algorithm for finding the shortest paths and minimum spanning trees.

2.5 Disjoint Sets


The disjoint-set data structure is also known as the union-find data structure or
merge-find set. It is a data structure that contains a collection of disjoint, or
non-overlapping, sets: the underlying set is partitioned into disjoint subsets.
Various operations can be performed on the disjoint subsets: we can add new
sets, merge sets, and find the representative member of a set. It also allows us
to find out efficiently whether two elements are in the same set or not.
Disjoint sets can be defined as subsets that share no common element. Let's
understand disjoint sets through an example.

s1 = {1, 2, 3, 4}
s2 = {5, 6, 7, 8}
We have two subsets named s1 and s2. The s1 subset contains the elements 1,
2, 3, 4, while s2 contains the elements 5, 6, 7, 8. Since there is no common
element between these two sets, we will not get anything if we consider the
intersection between these two sets. This is also known as a disjoint set where
no elements are common. Now the question arises how we can perform the
operations on them. We can perform only two operations, i.e., find and union.
In the find operation, we determine which set a given element is present in.
There are two sets named s1 and s2, shown below:
Suppose we want to perform the union operation on these two sets. First, we
check whether the elements on which we are performing the union belong to
different sets or the same set. If they belong to different sets, we can perform
the union operation; otherwise, we do not. For example, suppose we want to
perform the union of 4 and 8. Since 4 and 8 belong to different sets, we apply
the union operation. Once the union is performed, an edge is added between 4
and 8, as shown below:
When the union operation is applied, the set would be represented as:

s1Us2 = {1, 2, 3, 4, 5, 6, 7, 8}
Suppose we add one more edge, between 1 and 5. Now the final set can be
represented as:
s3 = {1, 2, 3, 4, 5, 6, 7, 8}
Any element we pick from the above set belongs to the same set as every other
element; adding an edge between two elements that are already in the same
set means a cycle exists in the graph.
2.5.1 Union Find Algorithm
A union-find algorithm is an algorithm that performs two useful operations on
such a data structure:
 Find: Determine which subset a particular element is in. This can
determine if two elements are in the same subset.
 Union: Join two subsets into a single subset. Here we first check whether
the two elements already belong to the same subset; if they do, we cannot
perform the union.
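Both operations can be sketched in Python. This is a standard union-find with path compression and union by rank (the class name and layout are our own); `union` returning False is exactly the cycle-detection signal described above:

```python
class DisjointSet:
    """Union-find with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))   # each element starts in its own set
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point nodes along the way closer to the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False               # already in the same set: no union
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra           # attach the shallower tree under the deeper
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

With both optimizations, a sequence of m operations runs in nearly linear time (each operation is amortized almost O(1)).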

2.6 Fibonacci Heap


A Fibonacci heap is defined as a collection of rooted trees, each of which must
hold the min-heap property: for every node, the key of the parent must be less
than or equal to the key of the node itself.
Properties of Fibonacci Heap:
1. It can have multiple trees of the same degree, and a tree of degree k
need not have 2^k nodes.
2. All the trees in the Fibonacci Heap are rooted but not ordered.
3. All the roots and siblings are stored in a separate circular doubly linked
list.
4. The degree of a node is the number of its children: x -> degree =
number of x's children.
5. Each node has a mark attribute, set to TRUE or FALSE. FALSE indicates
the node has not lost any child since it last became a child of another
node; TRUE indicates that the node has lost one child. A newly created
node is marked FALSE.
6. The potential function of the Fibonacci heap is Φ(FH) = t[FH] + 2 * m[FH]
7. The Fibonacci Heap (FH) uses the following notation:
1. min [FH] - pointer to the minimum node in the Fibonacci Heap
2. n[FH] - the number of nodes
3. t[FH] - the number of rooted trees
4. m[FH] - the number of marked nodes
5. Φ(FH) - the potential function
Advantages of Fibonacci Heap:
1. Fast amortized running time: The amortized running times of operations
in a Fibonacci heap are O(1) for insert, O(log n) for extract-min, and O(1)
for merge, making it one of the most efficient data structures for these
operations.
2. Lazy consolidation: The use of lazy consolidation allows for the merging of
trees to be performed more efficiently in batches, rather than one at a
time, improving the efficiency of the merge operation.
3. Efficient memory usage: Fibonacci heaps have a relatively small constant
factor compared to other data structures, making them a more memory-
efficient choice in some applications.
Disadvantages of Fibonacci Heap:
1. Increased complexity: The structure and operations of a Fibonacci heap
are more complex than those of a binary or binomial heap, making it a
less intuitive data structure for some users.
2. Less well-known: Compared to other data structures, Fibonacci heaps are
less well-known and widely used, making it more difficult to find resources
and support for implementation and optimization.
Like a Binomial Heap, a Fibonacci Heap is a collection of trees with the min-heap
or max-heap property. In a Fibonacci Heap, trees can have any shape; all the
trees may even be single nodes (this is unlike a Binomial Heap, where every
tree has to be a Binomial Tree).

Fibonacci Heap maintains a pointer to the minimum value (which is the root
of a tree). All tree roots are connected using a circular doubly linked list, so
all of them can be accessed using a single ‘min’ pointer.
The main idea is to execute operations in a “lazy” way. For example, the merge
operation simply links two heaps, and the insert operation simply adds a new
tree with a single node. The extract-minimum operation is the most
complicated, because it is where the delayed work of consolidating trees finally
happens. This also makes delete complicated, since delete first decreases the
key to minus infinity and then calls extract-minimum.
2.7 Mergeable-Heap Operations
A mergeable heap supports the usual heap operations: [1]
 Make-Heap(), create an empty heap.
 Insert(H,x), insert an element x into the heap H.
 Min(H), return the minimum element, or Nil if no such element exists.
 Extract-Min(H), extract and return the minimum element, or Nil if no such
element exists.
And one more that distinguishes it: [1]
 Merge(H1,H2), combine the elements of H1 and H2 into a single heap.
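The interface above can be sketched in Python on top of `heapq`. Note that this `merge` eagerly re-heapifies in O(n), whereas a Fibonacci heap would merely link the two root lists in O(1); the function names mirror the operations listed, not any standard library API:

```python
import heapq

def make_heap():
    return []                            # Make-Heap(): an empty list-backed heap

def insert(h, x):
    heapq.heappush(h, x)                 # Insert(H, x)

def find_min(h):
    return h[0] if h else None           # Min(H); Nil is modeled as None

def extract_min(h):
    return heapq.heappop(h) if h else None   # Extract-Min(H)

def merge(h1, h2):
    merged = h1 + h2                     # eager merge: O(n) re-heapify,
    heapq.heapify(merged)                # unlike a Fibonacci heap's lazy O(1) link
    return merged
```

This makes the trade-off concrete: the list-backed heap pays for merge up front, while the Fibonacci heap defers that work to extract-min.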
2.8 Decreasing a key and deleting a node
Extract_min():
We create a function for deleting the minimum node and setting the min pointer
to the minimum value in the remaining heap. The following algorithm is
followed:
1. Delete the min node.
2. Set min to the next root and add all the subtrees (children) of the deleted
node to the root list.
3. Create an array of degree pointers, sized by the maximum possible degree.
4. Set the degree pointer for the current node.
5. Move to the next node.
 If the degrees are different, set the degree pointer for the next node.
 If the degrees are the same, join the two Fibonacci trees by a union
operation.
6. Repeat steps 4 and 5 until every root has a distinct degree.
Example:

Decrease_key():
To decrease the value of any element in the heap, we follow the following
algorithm:
 Decrease the value of the node ‘x’ to the new chosen value.
 CASE 1) If min-heap property is not violated,
o Update min pointer if necessary.

 CASE 2) If min-heap property is violated and parent of ‘x’ is unmarked,


o Cut off the link between ‘x’ and its parent.

o Mark the parent of ‘x’.

o Add tree rooted at ‘x’ to the root list and update min pointer if
necessary.
 CASE 3) If min-heap property is violated and the parent of ‘x’ is marked,
o Cut off the link between ‘x’ and its parent p[x].

o Add ‘x’ to the root list, updating min pointer if necessary.

o Cut off link between p[x] and p[p[x]].

o Add p[x] to the root list, updating min pointer if necessary.

o If p[p[x]] is unmarked, mark it.

o Else, cut off p[p[x]] as well and repeat the above steps, taking p[p[x]]
as ‘x’ (a cascading cut).
Example:

Deletion ():
To delete any element in a Fibonacci heap, the following algorithm is followed:
1. Decrease the key of the node to be deleted, ‘x’, to minus infinity using
the Decrease_key() function.
2. The min-heap property (via the cuts performed by Decrease_key) brings
‘x’ to the root list, where it becomes the minimum.
3. Apply the Extract_min() algorithm to the Fibonacci heap.
Example:

2.9 Bounding the Maximum Degree


Bounding the Maximum Degree in Data Structures
1. Definition of Degree
 Degree of a Node: The degree of a node in a tree or graph is the number
of children (for trees) or edges (for graphs) that a node has.
 Maximum Degree: The highest degree of any node in the tree or graph.
2. Bounding in Trees
 Binary Trees:
o Each node has at most two children.

o Maximum degree = 2.

o Properties:

 If a full binary tree has n internal nodes, it has n + 1 external
(leaf) nodes.
 The total number of nodes (N) in a full binary tree is 2n + 1.
 General k-ary Trees:
o Each node has at most k children.

o Maximum degree = k.

o Properties:

 If a full k-ary tree has n internal nodes, it has (k - 1)n + 1 external
(leaf) nodes.
 The total number of nodes (N) in a full k-ary tree is kn + 1.
3. Bounding in Graphs
 Undirected Graphs:
o The degree of a node is the number of edges connected to it.

o Maximum degree (Δ): The highest degree of any node in the graph.

o Properties:

 In an undirected graph with n nodes and m edges, the sum of


the degrees of all nodes is 2m (since each edge contributes
to the degree of two nodes).

 The maximum degree Δ satisfies 2m/n ≤ Δ ≤ n - 1: it is at least the
average degree 2m/n, and at most n - 1.
 Directed Graphs:
o The degree of a node is divided into in-degree (number of incoming
edges) and out-degree (number of outgoing edges).

o Maximum in-degree (Δ_in): The highest in-degree of any node.

o Maximum out-degree (Δ_out): The highest out-degree of any node.

o Properties:

 In a directed graph with n nodes and m edges, the sum of the


in-degrees of all nodes is m (each edge contributes to the in-
degree of one node).
 The sum of the out-degrees of all nodes is also m (each edge
contributes to the out-degree of one node).

 The maximum in-degree Δ_in and the maximum out-degree Δ_out are
each at least the average degree m/n, and at most n - 1.
4. Applications and Implications
 In algorithms and data structures, knowing the maximum degree helps in
designing efficient algorithms.
 Bounding the maximum degree can simplify the complexity analysis of
algorithms, such as those for searching, traversal, and optimization
problems.
 In network design and analysis, bounding the maximum degree helps
ensure load balancing and avoid bottlenecks.
 In database indexing and query optimization, trees with bounded degrees
(e.g., B-trees) ensure efficient data retrieval and update operations.
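The degree relations above can be checked on a small graph. The following sketch computes the maximum degree and the degree sum of an undirected graph given as an edge list (function and variable names are our own):

```python
from collections import defaultdict

def degree_stats(edges):
    """Return (max_degree, sum_of_degrees) for an undirected edge list."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1      # each edge contributes to the degree
        deg[v] += 1      # of both of its endpoints
    return max(deg.values(), default=0), sum(deg.values())

# A small example graph on 4 nodes: node 0 has the maximum degree, 3.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
max_deg, total = degree_stats(edges)
```

Here `total` equals 2m (the handshake lemma), and `max_deg` is at least the average degree 2m/n.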
