
UNIT II

TREES
Tree Terminologies - Binary tree - Binary tree traversal - Expression tree construction- Binary
Search Trees- Querying a binary search tree, Insertion and deletion–AVL trees-rotations,
insertion. B-Trees - Definition of B-Trees - Basic operations on B-Trees - insertion and deletion.
Priority Queues (Heaps) – Model – Simple implementations – Binary Heap-Properties.

3.1 Tree Terminologies

 Consider a scenario where you are required to represent the directory structure of your
operating system.
 The directory structure contains various folders and files. A folder may further contain
any number of sub folders and files.
 In such a case, it is not possible to represent the structure linearly because all the items
have a hierarchical relationship among themselves.
 In such a case, it would be good if you have a data structure that enables you to store
your data in a nonlinear fashion.

DEFINITION:

 A tree is a nonlinear data structure that represent a hierarchical relationship among the
various data elements
 Trees are used in applications in which the relation between data elements needs to be
represented in a hierarchy.

 Each element in a tree is referred to as a node.


 The topmost node in a tree is called root.

 Each node in a tree can further have subtrees below its hierarchy.
 Let us discuss various terms that are most frequently used with trees.
 Leaf node: It refers to a node with no children.
 Nodes E, F, G, H, I, J, L, and M are leaf nodes.
 Subtree: A portion of a tree, which can be viewed as a separate tree in itself is called
a subtree.
 A subtree may also consist of just a single node, that is, a leaf node.
The tree with root B, containing nodes E, F, G, and H, is a subtree of node A.
 Children of a node: The roots of the subtrees of a node are called the children of the
node.
o E, F, G, and H are children of node B. B is the parent of these nodes.
 Degree of a node: It refers to the number of subtrees of a node in a tree.
Degree of node C is 1
Degree of node D is 2
Degree of node A is 3
Degree of node B is 4
 Edge: A link from the parent to a child node is referred to as an edge.
 Siblings/Brothers: It refers to the children of the same node.
Nodes B, C, and D are siblings of each other.
Nodes E, F, G, and H are siblings of each other.
 Level of a node: It refers to the distance (in number of nodes) of a node from the root.
Root always lies at level 0.
 As you move down the tree, the level increases by one.
 Depth of a tree: Refers to the total number of levels in the tree.
 The depth of the following tree is 4.
 Internal node: It refers to any node between the root and a leaf node.
Nodes B, C, D, and K are internal nodes.

Example:

 Consider the above tree and answer the questions that follow:
a. What is the depth of the tree?
b. Which nodes are children of node B?
c. Which node is the parent of node F?
d. What is the level of node E?
e. Which nodes are the siblings of node H?
f. Which nodes are the siblings of node D?
g. Which nodes are leaf nodes?
 Answer:
a. 4
b. D and E
c. C
d. 2
e. H does not have any siblings
f. The only sibling of D is E
g. F, G, H, and I

3.2 Binary tree:


A binary tree is a tree in which each node has at most two children; that is, a node
cannot have more than two children. In a binary tree, the children are named the "left"
and "right" children. In a linked implementation, each node holds references to its children
(and, in some implementations, to its parent).
 There are various types of binary trees, the most important are:
 Full binary tree
 Complete binary tree

A full binary tree is a tree in which every node in the tree has two children except
the leaves of the tree.
A complete binary tree is a binary tree in which every level of the binary tree is
completely filled except the last level. In the unfilled level, the nodes are attached starting
from the left-most position.
What is Full Binary Tree?
Full binary tree is a binary tree in which every node in the tree has exactly zero or
two children. In other words, every node in the tree except the leaves has exactly two
children. Figure 1 below depicts a full binary tree. In a full binary tree, the number of
nodes (n), the number of leaves (l), and the number of internal nodes (i) are related in a special
way such that if you know any one of them you can determine the other two values as
follows:
1. If a full binary tree has i internal nodes:
– Number of leaves l = i+1
– Total number of nodes n = 2*i+1
2. If a full binary tree has n nodes:
– Number of internal nodes i = (n-1)/2
– Number of leaves l=(n+1)/2
3. If a full binary tree has l leaves:
– Total Number of nodes n=2*l-1
– Number of internal nodes i = l-1
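
As a quick check of these relations, here is a small C sketch (the value chosen for i is just an illustration):

#include <stdio.h>

int main(void)
{
    int i = 3;                /* assumed number of internal nodes     */
    int l = i + 1;            /* number of leaves: l = i + 1          */
    int n = 2 * i + 1;        /* total number of nodes: n = 2*i + 1   */

    printf("internal = %d, leaves = %d, total = %d\n", i, l, n);
    /* cross-check the relations stated for a tree with n nodes */
    printf("i = %d, l = %d\n", (n - 1) / 2, (n + 1) / 2);
    return 0;
}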

What is Complete Binary Tree?


As shown in figure 2, a complete binary tree is a binary tree in which every level
of the tree is completely filled except the last level. Also, in the last level, nodes should
be attached starting from the left-most position. A complete binary tree of height h
satisfies the following conditions:

– From the root node, the level above last level represents a full binary tree of height h-1
– One or more nodes in last level may have 0 or 1 children
– If a and b are two nodes in the level above the last level, then a has more children than b
only if a is situated to the left of b

What is the difference between Complete Binary Tree and Full Binary Tree?
Complete binary trees and full binary trees have a clear difference. While a full
binary tree is a binary tree in which every node has zero or two children, a complete
binary tree is a binary tree in which every level of the binary tree is completely filled
except the last level. Some special data structures like heaps need to be complete binary
trees while they don’t need to be full binary trees. In a full binary tree, if you know the
number of total nodes, the number of leaves, or the number of internal nodes, you can
find the other two very easily. But a complete binary tree does not have a special property
relating these three attributes.

Binary tree can be represented by Array and linked list.

Array representation of a binary tree:

 All the nodes are represented as the elements of an array.


 If there are n nodes in a binary tree, then for any node with index i, where 0 ≤ i ≤ n – 1:
o Parent of i is at (i – 1)/2.
o Left child of i is at 2i + 1:
 If 2i + 1 > n – 1, then the node does not have a left child.
o Right child of i is at 2i + 2:
 If 2i + 2 > n – 1, then the node does not have a right child.
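
A minimal C sketch of this index arithmetic, assuming the 0-based indexing described above (the macro and function names are illustrative):

/* 0-based array representation of a binary tree with n nodes */
#define PARENT(i) (((i) - 1) / 2)
#define LEFT(i)   (2 * (i) + 1)
#define RIGHT(i)  (2 * (i) + 2)

/* a child exists only if its computed index does not exceed n - 1 */
static int hasLeft(int i, int n)  { return LEFT(i)  <= n - 1; }
static int hasRight(int i, int n) { return RIGHT(i) <= n - 1; }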
Linked representation of a binary tree:

 It uses a linked list to implement a binary tree.


 Each node in the linked representation holds the following information:
 Data
 Reference to the left child
 Reference to the right child
 If a node does not have a left child or a right child or both, the respective left or right
child fields of that node point to NULL.
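
In C, such a node can be declared as below (a sketch; the field names mirror the BST program given later in this unit):

/* linked representation: data plus references to the two children */
struct node
{
    int data;                  /* the value held at this node           */
    struct node *lchild;       /* reference to the left child, or NULL  */
    struct node *rchild;       /* reference to the right child, or NULL */
};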

3.3 Binary Tree Traversal:

 You can implement various operations on a binary tree.


 A common operation on a binary tree is traversal.
 Traversal refers to the process of visiting all the nodes of a binary tree once.
 There are four ways for traversing a binary tree:
 Inorder traversal (LDR), e.g. a + b
 Preorder traversal (DLR), e.g. + a b
 Postorder traversal (LRD), e.g. a b +
 Level-order traversal (level by level)

InOrder Traversal:

In this traversal, the tree is visited starting from the root node. At a
particular node, the traversal is continued with its left node, recursively, until no
further left node is found. Then the data at the current node (the left most node in the
sub tree) is visited, and the procedure shifts to the right of the current node, and the
procedure is continued. This can be explained as:
1. Traverse the left subtree
2. Visit root
3. Traverse the right subtree (Left Data Right)

 Let us consider an example.

1. The left subtree of node A is not NULL. Therefore, move to node B to traverse the left subtree of A.
2. The left subtree of node B is not NULL. Therefore, move to node D to traverse the left subtree of B.
3. The left subtree of node D is NULL. Therefore, visit node D.
4. The right subtree of node D is not empty, so move to node H. The left subtree of H is empty. Therefore, visit node H.
5. The right subtree of H is empty. Therefore, move back to node B.
6. The left subtree of B has been visited. Therefore, visit node B.
7. The right subtree of B is not empty. Therefore, move to the right subtree of B (node E).
8. The left subtree of E is empty. Therefore, visit node E.
9. The right subtree of E is empty. Therefore, move to node A.
10. The left subtree of A has been visited. Therefore, visit node A.
11. The right subtree of A is not empty. Therefore, move to the right subtree of A (node C).
12. The left subtree of C is not empty. Therefore, move to the left subtree of C (node F).
13. The left subtree of F is empty. Therefore, visit node F.
14. The right subtree of F is empty. Therefore, move to node C.
15. The left subtree of node C has been visited. Therefore, visit node C.
16. The right subtree of C is not empty. Therefore, move to the right subtree of node C (node G).
17. The left subtree of G is not empty. Therefore, move to the left subtree of node G (node I).
18. The left subtree of I is empty. Therefore, visit node I.
19. The right subtree of I is empty. Therefore, move to node G.
20. Visit node G. The right subtree of G is empty, so the traversal is complete.

The inorder sequence obtained is: D H B E A F C I G.

Preorder Traversal

In this traversal, the tree is visited starting from the root node. At a particular
node, the data is read (visited), then the traversal continues with its left node,
recursively, until no further left node is found. Then the right node of the recent left node
is set as the current node, and the procedure is continued. This can be explained as:
1. Visit root
2. Traverse the left subtree
3. Traverse the right subtree

Post Order Traversal:

In this traversal, the tree is visited starting from the root node. At a particular node,
the traversal first continues with its left subtree, recursively, until no further left node is
found. The right subtree of the current node is then traversed in the same way, and only
after both subtrees have been traversed is the data at the current node visited. This can
be explained as:

1. Traverse the left subtree


2. Traverse the right subtree
3. Visit the root
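
The three depth-first traversals differ only in where the root is visited. A minimal recursive sketch in C, assuming a node layout with data, lchild and rchild fields (as in the BST program later in this unit):

#include <stdio.h>

struct node
{
    int data;
    struct node *lchild, *rchild;
};

void inorder (struct node *t)        /* Left, Data, Right */
{
    if (t == NULL) return;
    inorder (t->lchild);
    printf ("%d ", t->data);
    inorder (t->rchild);
}

void preorder (struct node *t)       /* Data, Left, Right */
{
    if (t == NULL) return;
    printf ("%d ", t->data);
    preorder (t->lchild);
    preorder (t->rchild);
}

void postorder (struct node *t)      /* Left, Right, Data */
{
    if (t == NULL) return;
    postorder (t->lchild);
    postorder (t->rchild);
    printf ("%d ", t->data);
}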

3.4 Expression Tree Construction:

The leaves of a binary expression tree are operands, such as constants or variable names, and the
other nodes contain operators. These particular trees happen to be binary, because all of the
operations are binary, and although this is the simplest case, it is possible for nodes to have more
than two children. It is also possible for a node to have only one child, as is the case with the
unary minus operator. An expression tree, T, can be evaluated by applying the operator at the
root to the values obtained by recursively evaluating the left and right subtrees

An algebraic expression can be produced from a binary expression tree by recursively producing
a parenthesized left expression, then printing out the operator at the root, and finally recursively
producing a parenthesized right expression. This general strategy (left, node, right) is known as
an in-order traversal. An alternate traversal strategy is to recursively print out the left subtree, the
right subtree, and then the operator. This traversal strategy is generally known as post-order
traversal. A third strategy is to print out the operator first and then recursively print out the left
and right subtree.

These three standard depth-first traversals are representations of the three different expression
formats: infix, postfix, and prefix. An infix expression is produced by the inorder traversal, a
postfix expression is produced by the post-order traversal, and a prefix expression is produced by
the pre-order traversal.When an infix expression is printed, an opening and closing parenthesis
must be added at the beginning and ending of each expression. As every subtree represents a
subexpression, an opening parenthesis is printed at its start and the closing parenthesis is printed
after processing all of its children.

Pseudocode:

Algorithm infix (tree)


/*Print the infix expression for an expression tree.
Pre : tree is a pointer to an expression tree
Post: the infix expression has been printed*/
if (tree not empty)
if (tree token is operator)
print (open parenthesis)
end if
infix (tree left subtree)
print (tree token)
infix (tree right subtree)
if (tree token is operator)
print (close parenthesis)
end if
end if
end infix

Postfix Traversal

The postfix expression is formed by the basic postorder traversal of any binary tree. It does not
require parentheses.
Pseudocode:

Algorithm postfix (tree)


/*Print the postfix expression for an expression tree.
Pre : tree is a pointer to an expression tree
Post: the postfix expression has been printed*/
if (tree not empty)
postfix (tree left subtree)
postfix (tree right subtree)
print (tree token)
end if
end postfix

Prefix Traversal

The prefix expression formed by prefix traversal uses the standard pre-order tree traversal. No
parentheses are necessary.

Pseudocode:

Algorithm prefix (tree)


/*Print the prefix expression for an expression tree.
Pre : tree is a pointer to an expression tree
Post: the prefix expression has been printed*/
if (tree not empty)
print (tree token)
prefix (tree left subtree)
prefix (tree right subtree)
end if
end prefix

Construction of Expression Tree

The evaluation of the tree takes place by reading the postfix expression one symbol at a time. If
the symbol is an operand, a one-node tree is created and a pointer to it is pushed onto a stack. If
the symbol is an operator, pointers to two trees T1 and T2 are popped from the stack and a new
tree, whose root is the operator and whose left and right children point to T2 and T1 respectively,
is formed. A pointer to this new tree is then pushed onto the stack.
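
A compact C sketch of this stack-based construction (the array-based pointer stack and the function name buildExpressionTree are illustrative choices, not part of the text):

#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>

struct enode
{
    char token;
    struct enode *left, *right;
};

static struct enode *newNode (char token, struct enode *l, struct enode *r)
{
    struct enode *n = malloc (sizeof *n);
    n->token = token;
    n->left  = l;
    n->right = r;
    return n;
}

/* build an expression tree from a postfix string such as "ab+cde+**" */
struct enode *buildExpressionTree (const char *postfix)
{
    struct enode *stack[100];            /* assumed fixed-size pointer stack */
    int top = -1;

    for ( ; *postfix != '\0'; postfix++)
    {
        if (*postfix == ' ')
            continue;                    /* skip blanks in the input */
        if (isalnum ((unsigned char) *postfix))
            stack[++top] = newNode (*postfix, NULL, NULL);     /* operand  */
        else
        {
            struct enode *t1 = stack[top--];   /* popped first: right child */
            struct enode *t2 = stack[top--];   /* popped second: left child */
            stack[++top] = newNode (*postfix, t2, t1);         /* operator */
        }
    }
    return stack[top];                   /* pointer to the final tree */
}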

Example

The input is: a b + c d e + * *. Since the first two symbols are operands, one-node trees are
created and pointers to them are pushed onto a stack. For convenience, the stack will grow from
left to right.

Stack growing from left to right

The next symbol is a '+'. Pointers to the two trees are popped, a new tree is formed, and a pointer
to it is pushed onto the stack.
Formation of New Tree

Next, c, d, and e are read. A one-node tree is created for each and a pointer to the corresponding
tree is pushed onto the stack.

Continuing, a '+' is read, and it merges the last two trees.


Merging Two trees

Now, a '*' is read. The last two tree pointers are popped and a new tree is formed with a '*' as the
root.

Forming a new tree with a root


Finally, the last symbol is read. The two trees are merged and a pointer to the final tree remains
on the stack.

Steps to construct an expression tree a b + c d e + * *

Algebraic expressions
Binary algebraic expression tree equivalent to ((5 + z) / -8) * (4 ^ 2)

Algebraic expression trees represent expressions that contain numbers, variables, and unary and
binary operators. Some of the common operators are × (multiplication), ÷ (division), +
(addition), − (subtraction), ^ (exponentiation), and - (negation). The operators are contained in
the internal nodes of the tree, with the numbers and variables in the leaf nodes. The nodes of
binary operators have two child nodes, and the unary operators have one child node.

Boolean expressions

Figure: a binary Boolean expression tree built from the constants true and false and Boolean operators

Boolean expressions are represented very similarly to algebraic expressions, the only difference
being the specific values and operators used. Boolean expressions use true and false as constant
values, and the operators include AND, OR, and NOT.

3.5 BINARY SEARCH TREE

First of all, a binary search tree (BST) is a dynamic data structure, which means that
its size is limited only by the amount of free memory in the operating system, and the number of
elements may vary during the program run. The main advantage of binary search trees is
rapid search, while insertion is quite cheap. Let us see a more formal definition of a BST.

Binary search tree is a data structure, which meets the following requirements:

 it is a binary tree;
 left subtree of a node contains only values less than the node's value;
 right subtree of a node contains only values greater than the node's value.

Notice that the definition above doesn't allow duplicates.

Example of a binary search tree

In the figure above, the first tree is a binary search tree, but the second one is not
a binary search tree.

What are binary search trees used for?

A binary search tree is used to construct a map data structure. In practice, data can
often be associated with some unique key. For instance, in a phone book such a key is a
telephone number. Storing such data in a binary search tree allows a record to be looked up
by its key faster than if it were stored in an unordered list. Also, a BST can be utilized to
construct a set data structure, which allows us to store an unordered collection of unique
values and perform operations on such collections.

The performance of a binary search tree depends on its height. In order to keep the tree
balanced and minimize its height, the idea of binary search trees was advanced into
balanced search trees (AVL trees, Red-Black trees, Splay trees). Here we will discuss the
basic ideas lying at the foundation of binary search trees.
Binary tree

A binary tree is a widely used tree data structure. The feature of a binary tree that
distinguishes it from a common tree is that each node has at most two children. A widespread
usage of the binary tree is as the basic structure of a binary search tree. Each binary tree has the
following groups of nodes:

 Root: the topmost node in a tree. It is a kind of "main node" in the tree, because
all other nodes can be reached from the root. Also, the root has no parent. It is the node at
which operations on the tree (commonly) begin.
 Internal nodes: these nodes have a parent (the root node is not an internal node) and at
least one child.
 Leaf nodes: these nodes have a parent, but no children.

Let us see an example of a binary tree.

Example of a binary tree

Operations
Basically, we can only define traversals for a binary tree as possible operations:
root-left-right (preorder), left-right-root (postorder) and left-root-right (inorder)
traversals. We will speak about them in detail later.

3.6 Querying a binary search tree

Binary search tree – Insertion:

Adding a value to BST can be divided into two stages:

 Search for a place to put a new element;


 Insert the new element to this place.

Let us see these stages in more detail.


Search for a place

At this stage the algorithm should follow the binary search tree property. If the new value is
less than the current node's value, go to the left subtree; otherwise, go to the right subtree.
Following this simple rule, the algorithm reaches a node which has no left or right
subtree in the required direction. By the moment a place for insertion is found, we can say for
sure that the new value has no duplicate in the tree. Initially, a new node has no children, so it is
a leaf. Let us see it in the picture. Gray circles indicate possible places for a new node.
3.7 Insertion and Deletion

Now, let's go down to the algorithm itself. Here, and in almost every operation on a BST,
recursion is utilized. Starting from the root,

1. Check whether the value in the current node and the new value are equal. If so, a duplicate is
found. Otherwise,
2. if the new value is less than the node's value:
o if the current node has no left child, the place for insertion has been found;
o otherwise, handle the left child with the same algorithm.
3. if the new value is greater than the node's value:
o if the current node has no right child, the place for insertion has been found;
o otherwise, handle the right child with the same algorithm.

Let’s have a look on the example, demonstrating a case of insertion in the binary search
tree.

Example

Insert 4 to the tree, shown above.


Binary Search Tree Search operation

Searching for a value in a BST is very similar to the add operation. The search algorithm
traverses the tree "in depth", choosing the appropriate way to go by following the binary search
tree property, and compares the value of each visited node with the one we are looking for.
The algorithm stops in two cases:

 a node with necessary value is found;


 Algorithm has no way to go.

Search algorithm in detail

Now, let's see a more detailed description of the search algorithm. Like the add operation,
and almost every operation on a BST, the search algorithm utilizes recursion. Starting from the
root,

1. Check whether the value in the current node and the searched value are equal. If so, the value
is found. Otherwise,
2. if the searched value is less than the node's value:
o if the current node has no left child, the searched value doesn't exist in the BST;
o otherwise, handle the left child with the same algorithm.
3. if the searched value is greater than the node's value:
o if the current node has no right child, the searched value doesn't exist in the
BST;
o otherwise, handle the right child with the same algorithm.

Let us have a look on the example, demonstrating searching for a value in the binary
search tree.

Example

Search for 3 in the tree, shown above.


As in the add operation, first check if the root exists. If not, the tree is empty and, therefore, the
searched value doesn't exist in the tree.

Binary search tree: Removing a node

The remove operation on a binary search tree is more complicated than add and search.
Basically, it can be divided into two stages:
 search for a node to remove;
 if the node is found, run remove algorithm.

Remove algorithm in detail

Now, let's see a more detailed description of the remove algorithm. The first stage is
identical to the lookup algorithm, except that we should track the parent of the current node.
The second part is trickier. There are three cases, which are described below.

1. Node to be removed has no children.

This case is quite simple. Algorithm sets corresponding link of the parent to
NULL and disposes the node.

Example. Remove -4 from a BST.

2. Node to be removed has one child.

In this case, the node is cut from the tree and the algorithm links its single child (with its
subtree) directly to the parent of the removed node.

Example. Remove 18 from a BST.


3. Node to be removed has two children.
This is the most complex case. To solve it, let us see one useful BST property first.
We are going to use the idea that the same set of values may be represented as
different binary search trees. For example, these BSTs:

contain the same values {5, 19, 21, 25}. To transform the first tree into the second one, we can
do the following:

o choose minimum element from the right subtree (19 in the example);
o replace 5 by 19;
o hang 5 as a left child.

The same approach can be utilized to remove a node, which has two children:

o find a minimum value in the right subtree;


o replace the value of the node to be removed with the found minimum. Now, the right
subtree contains a duplicate!
o apply remove to the right subtree to remove the duplicate.

Notice that the node with the minimum value has no left child and, therefore, its removal
can result only in the first or second case.

Example. Remove 12 from a BST.

Find the minimum element in the right subtree of the node to be removed. In the current
example it is 19.
Replace 12 with 19. Notice that only values are replaced, not nodes. Now we have two
nodes with the same value.

Remove 19 from the right subtree.


First, check if the root exists. If not, the tree is empty and, therefore, the value that
should be removed doesn't exist in the tree. Then, check if the root value is the one to be
removed. This is a special case and there are several approaches to solve it. We propose
the dummy root method, in which a dummy root node is created and the real root is hung on it as
a left child. When the removal is done, set the root link to the left child of the dummy
root.

Binary search tree. List values in order

To construct an algorithm listing BST's values in order, let us recall binary search tree
property:

 left subtree of a node contains only values less than the node's value;
 right subtree of a node contains only values greater than the node's value.

The algorithm looks as follows:

1. get values in order from left subtree;


2. get values in order from right subtree;
3. result for current node is (result for left subtree) join (current node's value) join
(result for right subtree).
Running this algorithm recursively, starting from the root, we'll get the result for the whole
tree. Let us see an example of the algorithm described above.
Example
Algorithm Steps:

1. Create the memory space for the root node and initialize the value to zero.
2. Read the value.
3. If there is no value in the root, the new value is assigned as the root. Otherwise, if the
value is less than the root value, it is inserted into the left subtree of the root; if the
new value is greater than the root value, it is inserted into the right subtree of the root.
4. Steps (2) and (3) are repeated to insert 'n' values.

Search Operation:

1. Read the value to be searched.


2. Check whether the root is not null
3. If the value to be searched is less than the root, consider the left sub-tree for
searching the particular element else if the value is greater than the root consider
the right sub-tree to search the particular element else if the value is equal then
return the value that is the value which was searched.

Program

#include<stdio.h>
#include<stdlib.h>
struct tree
{
int data;
struct tree *lchild;
struct tree *rchild;
}*t, *temp;

int element;
void inorder (struct tree *);
struct tree *create (struct tree *, int);
struct tree *find (struct tree *, int);
struct tree *insert (struct tree *, int);
struct tree *del (struct tree *, int);
struct tree *findmin (struct tree *);
struct tree *findmax (struct tree *);

void main( )
{
int ch;
printf ("\n\n\t BINARY SEARCH TREE");
do
{
printf ("\nMain Menu\n");
printf ("\n1.Create \n2.Insert \n3.Delete \n4.Find \n5.Findmax \n6.Findmin");
printf ("\n7.Exit");
printf ("\nEnter your choice:");
scanf ("%d", &ch);
switch(ch)
{
case 1:
printf ("Enter the element\n");
scanf ("%d", &element);
t = create (t, element);
inorder(t);
break;

case 2:
printf ("Enter the element\n");
scanf ("%d", &element);
t = insert (t, element);
inorder(t);
break;

case 3:
printf ("Enter the data");
scanf ("%d", &element);
t = del (t, element);
inorder(t);
break;

case 4:
printf ("Enter the data");
scanf ("%d", &element);
temp = find (t, element);
if (temp != NULL && temp->data == element)   /* guard against a NULL result */
printf ("Element %d is found", element);
else
printf ("Element is not found");
break;

case 5:
temp = findmax(t);
printf ("Maximum element is %d", temp->data);
break;

case 6:
temp = findmin(t);
printf ("Minimum element is %d", temp->data);
break;
}
}while(ch != 7);
}

struct tree *create (struct tree* t, int element)


{
t = (struct tree*) malloc (sizeof(struct tree));
t->data = element;
t-> lchild = NULL;
t-> rchild = NULL;
return t;
}

struct tree *find (struct tree* t, int element)


{
if (t == NULL)
return NULL;
if (element < t->data)
return (find (t->lchild, element));
else if (element > t->data)
return (find (t->rchild, element));
else
return t;
}

struct tree *findmin (struct tree* t)


{
if ( t == NULL)
return NULL;
else
if (t->lchild == NULL)
return t;
else
return (findmin (t->lchild));
}

struct tree *findmax (struct tree* t)


{
if (t != NULL)
{
while (t->rchild != NULL)
t = t->rchild;
}
return t;
}

struct tree *insert (struct tree* t, int element)


{
if (t== NULL)
{
t = (struct tree*) malloc (sizeof(struct tree));
t->data = element;
t->lchild = NULL;
t->rchild = NULL;
return t;
}
else
{
if(element< t->data)
t->lchild = insert(t->lchild, element);
else
if (element> t->data)
t->rchild = insert (t->rchild, element);
else
if(element == t->data)
printf ("Element already present\n");
return t;
}
}
struct tree* del (struct tree* t, int element)
{
if (t == NULL)
printf ("Element not found\n");
else
if(element< t->data)
t->lchild = del (t->lchild, element);
else
if(element> t->data)
t->rchild = del (t->rchild, element);
else
if(t->lchild && t->rchild)
{
temp = findmin (t->rchild);
{
t->data = temp->data;
t->rchild = del (t->rchild, t->data);
}
}
else
{
temp = t;
if (t->lchild == NULL)
t = t->rchild;
else
if (t->rchild == NULL)
t = t->lchild;
free (temp);
}
return t;
}
void inorder (struct tree *t)
{
if (t == NULL)
return;
else
{
inorder (t->lchild);
printf ("\t%d", t->data);
inorder (t->rchild);
}
}

SAMPLE INPUT AND OUTPUT:

Main Menu

1.Create
2.Insert
3.Delete
4.Find
5.Findmax
6.Findmin
7.Exit
Enter your choice: 1
Enter the element
12
12
Main Menu

1.Create
2.Insert
3.Delete
4.Find
5.Findmax
6.Findmin
7.Exit
Enter your choice: 2
Enter the element
13
12 13
Main Menu

1.Create
2.Insert
3.Delete
4.Find
5.Findmax
6.Findmin
7.Exit
Enter your choice: 3
Enter the data12
13
Main Menu

1.Create
2.Insert
3.Delete
4.Find
5.Findmax
6.Findmin
7.Exit
Enter your choice: 4
Enter the data13
Element 13 is found

Main Menu

1.Create
2.Insert
3.Delete
4.Find
5.Findmax
6.Findmin
7.Exit
Enter your choice: 2
Enter the element
14
13 14
Main Menu
1.Create
2.Insert
3.Delete
4.Find
5.Findmax
6.Findmin
7.Exit
Enter your choice: 2
Enter the element
15
13 14 15

Main Menu

1.Create
2.Insert
3.Delete
4.Find
5.Findmax
6.Findmin
7.Exit
Enter your choice: 5
Maximum element is 15

Main Menu

1.Create
2.Insert
3.Delete
4.Find
5.Findmax
6.Findmin
7.Exit
Enter your choice: 6
Minimum element is 13

Main Menu

1.Create
2.Insert
3.Delete
4.Find
5.Findmax
6.Findmin
7.Exit
Enter your choice: 7

3.8 AVL Tree


 In a binary search tree, the time required to search for a particular value depends upon
its height.
 The shorter the height, the faster the search.
 However, binary search trees tend to attain large heights because of continuous insert
and delete operations.
 Consider an example in which you want to insert some numeric values in a binary
search tree in the following order:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
 After inserting values in the specified order, the binary search tree appears as follows:

 This process can be very time consuming if the number of values stored in a binary
search tree is large.
 Now if you want to search for a value 14 in the given binary search tree, you will
have to traverse all its preceding nodes before you reach node 14. In this case, you
need to make 14 comparisons
 Therefore, such a structure loses the property of a binary search tree, in which after
every comparison the search space is reduced to half.
 To solve this problem, it is desirable to keep the height of the tree to a minimum.
 Therefore, the following binary search tree can be modified to reduce its height.

 The height of the binary search tree has now been reduced to 4
 Now if you want to search for a value 14, you just need to traverse nodes 8 and 12,
before you reach node 14
 In this case, the total number of comparisons to be made to search for node 14 is
three.
 This approach reduces the time to search for a particular value in a binary search tree.
 This can be implemented with the help of a height balanced tree.

Height Balanced Tree

 A height balanced tree is a binary tree in which the difference between the heights of
the left subtree and right subtree of a node is not more than one.
 In a height balanced tree, each node has a Balance Factor (BF) associated with it.
 For the tree to be balanced, BF can have three values:
 0: A Balance Factor value of 0 indicates that the height of the left subtree of a
node is equal to the height of its right subtree.
 1: A Balance Factor value of 1 indicates that the height of the left subtree is
greater than the height of the right subtree by one. A node in this state is said to
be left heavy.
 –1: A Balance Factor value of –1 indicates that the height of the right subtree
is greater than the height of the left subtree by one. A node in this state is said
to be right heavy.

Fig. Balanced Binary Search Trees

 After inserting a new node in a height balanced tree, the balance factor of one or more
nodes may attain a value other than 1, 0, or –1.
 This makes the tree unbalanced.
 In such a case, you first need to locate the pivot node.
 A pivot node is the nearest ancestor of the newly inserted node, which has a balance
factor other than 1, 0 or –1.
 To restore the balance, you need to perform appropriate rotations around the pivot
node.
Inserting Nodes in a Height Balanced Tree

 Insert operation in a height balanced tree is similar to that in a simple binary search
tree.
 However, inserting a node can make the tree unbalanced.
 To restore the balance, you need to perform appropriate rotations around the pivot
node.
 This involves two cases:
 When the pivot node is initially right heavy and the new node is inserted in the
right subtree of the pivot node.
 When the pivot node is initially left heavy and the new node is inserted in the left
subtree of the pivot node.
 Let us first consider a case in which the pivot node is initially right heavy and the new
node is inserted in the right subtree of the pivot node.
 In this case, after the insertion of a new element, the balance factor of pivot node
becomes –2.
 Now there can be two situations in this case:
 If a new node is inserted in the right subtree of the right child of the pivot
node.(RR)
 If the new node is inserted in the left subtree of the right child of the pivot
node. (RL)
 Consider the first case in which a new node is inserted in the right subtree of the right
child of the pivot node.
Single Rotation:

 The two trees in the figure below contain the same elements and are both binary search
trees.
 First of all, in both trees k1 < k2. Second, all elements in the subtree A are smaller
than k1 in both trees.
 Third, all elements in subtree C are larger than k2. Finally, all elements in subtree B
are in between k1 and k2. The conversion of one of the above trees to the other is
known as a rotation.
 A rotation involves only a few pointer changes (we shall see exactly how many later),
and changes the structure of the tree while preserving the search tree property.
 The rotation does not have to be done at the root of a tree; it can be done at any node
in the tree, since that node is the root of some subtree.
 It can transform either tree into the other.
 This gives a simple method to fix up an AVL tree if an insertion causes some node in
an AVL tree to lose the balance property: Do a rotation at that node.
Figure: single rotation. On the left, k2 is the root with left child k1 (subtrees A and B) and right
subtree C; on the right, k1 is the root with left subtree A and right child k2, and subtree B has
moved to become the left subtree of k2.

The basic algorithm is to start at the node inserted and travel up the tree, updating
the balance information at every node on the path. If we get to the root without having
found any badly balanced nodes, we are done. Otherwise, we do a rotation at the first bad
node found, adjust its balance, and are done (we do not have to continue going to the
root). In many cases, this is sufficient to rebalance the tree. For instance, in the figure below,
after the insertion of 6 1/2 into the original AVL tree on the left, node 8
becomes unbalanced. Thus, we do a single rotation between 7 and 8, obtaining the tree on
the right.
Figure: single rotation between k1 and k2. The unbalanced tree rooted at k2 (left child k1 with
subtrees A and B, right subtree C) becomes the balanced tree rooted at k1 (left subtree A, right
child k2 with subtrees B and C).
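
A C sketch of the two single rotations, following the k1/k2 picture above (the node layout with a stored height field is an assumption, not the only way to code it):

#include <stddef.h>

struct avlnode
{
    int key;
    int height;                         /* height of the subtree rooted here */
    struct avlnode *left, *right;
};

static int height (struct avlnode *t) { return t == NULL ? -1 : t->height; }
static int maxOf (int a, int b)       { return a > b ? a : b; }

/* single right rotation: k2 is left heavy, k1 is its left child */
static struct avlnode *rotateWithLeftChild (struct avlnode *k2)
{
    struct avlnode *k1 = k2->left;
    k2->left  = k1->right;              /* subtree B moves under k2          */
    k1->right = k2;                     /* k2 becomes the right child of k1  */
    k2->height = maxOf (height (k2->left), height (k2->right)) + 1;
    k1->height = maxOf (height (k1->left), k2->height) + 1;
    return k1;                          /* k1 is the new root of the subtree */
}

/* single left rotation: the mirror image, for a right-heavy node */
static struct avlnode *rotateWithRightChild (struct avlnode *k1)
{
    struct avlnode *k2 = k1->right;
    k1->right = k2->left;
    k2->left  = k1;
    k1->height = maxOf (height (k1->left), height (k1->right)) + 1;
    k2->height = maxOf (k1->height, height (k2->right)) + 1;
    return k2;
}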

Let us work through a rather long example. Suppose we start with an initially
empty AVL tree and insert the keys 1 through 7 in sequential order. The first problem
occurs when it is time to insert key 3, because the AVL property is violated at the root.
We perform a single rotation between the root and its right child to fix the problem. The
tree is shown in the following figure, before and after the rotation:
Figure 4.32 AVL property destroyed by insertion of 6 1/2 , then fixed by a
rotation

To make things clearer, a dashed line indicates the two nodes that are the subject
of the rotation. Next, we insert the key 4, which causes no problems, but the insertion of
5 creates a violation at node 3, which is fixed by a single rotation. Besides the local
change caused by the rotation, the programmer must remember that the rest of the tree
must be informed of this change. Here, this means that 2's right child must be reset to
point to 4 instead of 3. This is easy to forget to do and would destroy the tree (4 would be
inaccessible).

Next, we insert 6. This causes a balance problem for the root, since its left subtree
is of height 0, and its right subtree would be height 2. Therefore, we perform a single
rotation at the root between 2 and 4.

Figure: before the rotation, 2 is the root with children 1 and 4; after the single rotation at the
root, 4 is the root with 2 as its left child and 5 as its right child.
The rotation is performed by making 2 a child of 4 and making 4's original left
subtree the new right subtree of 2. Every key in this subtree must lie between 2 and 4, so
this transformation makes sense. The next key we insert is 7, which causes another
rotation.

Figure: inserting 7 causes an imbalance at node 5. A single rotation makes 6 the right child of 4,
with 5 and 7 as its children; the left subtree of 4 (node 2 with children 1 and 3) is unchanged.

Double Rotation:

The algorithm described in the preceding paragraphs has one problem. There is a
case where the rotation does not fix the tree. Continuing our example, suppose we insert
keys 8 through 15 in reverse order. Inserting 15 is easy, since it does not destroy the
balance property, but inserting 14 causes a height imbalance at node 7.
As the diagram shows, the single rotation has not fixed the height imbalance. The
problem is that the height imbalance was caused by a node inserted into the tree
containing the middle elements (tree Y in Fig. 4.31) at the same time as the other trees
had identical heights. The case is easy to check for, and the solution is called a double
rotation, which is similar to a single rotation but involves four subtrees instead of three.
In Figure 4.33, the tree on the left is converted to the tree on the right. By the way, the
effect is the same as rotating between k1 and k2 and then between k2 and k3.
In our example, the double rotation is a right-left double rotation and involves 7,
15, and 14. Here, k3 is the node with key 7, k1 is the node with key 15, and k2 is the node
with key 14. Subtrees A, B, C, and D are all empty.
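
As noted above, a double rotation is just two single rotations performed back to back. A sketch, reusing the avlnode type and the single-rotation helpers from the previous snippet:

/* right-left case: first rotate the right child to the right,
   then rotate the unbalanced node to the left */
static struct avlnode *doubleRotateRightLeft (struct avlnode *node)
{
    node->right = rotateWithLeftChild (node->right);
    return rotateWithRightChild (node);
}

/* left-right case: the mirror image */
static struct avlnode *doubleRotateLeftRight (struct avlnode *node)
{
    node->left = rotateWithRightChild (node->left);
    return rotateWithLeftChild (node);
}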

Next we insert 13, which requires a double rotation. Here the double rotation is
again a right-left double rotation that will involve 6, 14, and 7 and will restore the tree. In
this case, k3 is the node with key 6, k1 is the node with key 14, and k2 is the node with
key 7. Subtree A is the tree rooted at the node with key 5, subtree B is the empty subtree
that was originally the left child of the node with key 7, subtree C is the tree rooted at the
node with key 13, and finally, subtree D is the tree rooted at the node with key 15.
If 12 is now inserted, there is an imbalance at the root. Since 12 is not between 4
and 7, we know that the single rotation will work.

Insertion of 11 will require a single rotation:


To insert 10, a single rotation needs to be performed, and the same is true for the
subsequent insertion of 9. We insert 8 without a rotation, creating the almost perfectly
balanced tree that follows.

Finally, we insert 8 1/2 to show the symmetric case of the double rotation. Notice
that 8 1/2 causes the node containing 9 to become unbalanced. Since 8 1/2 is between 9
and 8 (which is 9's child on the path to 8 1/2), a double rotation needs to be performed,
yielding the following tree.
Example:

 Let us consider another example to insert values in a binary search tree and restore its
balance whenever required.

50 40 30 60 55 80 10 35 32

Insert 50
The tree now contains only node 50, with balance factor 0.
Tree is balanced.

Insert 40

Tree is balanced.

Insert 30
Before rotation After rotation

The tree becomes unbalanced; a single right rotation (LL) restores the balance.

Insert 60: Insert 55:

Before Rotation After rotation


Now the tree is unbalanced; a double rotation (RL) restores the balance.
Before Rotation After Rotation

Insert 80 (single left rotation)

Insert 10 Insert 35

Insert 32 (double rotation)


Finally, the tree becomes balanced.

3.9 B-Trees

Definition of B-Trees
A B-tree is a tree data structure that keeps data sorted and allows searches, insertions, and
deletions in logarithmic amortized time. Unlike self-balancing binary search trees, it is
optimized for systems that read and write large blocks of data. It is most commonly used in
database and file systems.

The B-Tree Rules

Important properties of a B-tree:


 A B-tree node may have many more than two children.
 A B-tree node may contain more than just a single element.

The set formulation of the B-tree rules: Every B-tree depends on a positive constant integer
called MINIMUM, which is used to determine how many elements are held in a single node.

 Rule 1: The root can have as few as one element (or even no elements if it also has no
children); every other node has at least MINIMUM elements.
 Rule 2: The maximum number of elements in a node is twice the value of MINIMUM.
 Rule 3: The elements of each B-tree node are stored in a partially filled array, sorted
from the smallest element (at index 0) to the largest element (at the final used position
of the array).
 Rule 4: The number of subtrees below a nonleaf node is always one more than the
number of elements in the node.
o Subtree 0, subtree 1, ...
 Rule 5: For any nonleaf node:

1. An element at index i is greater than all the elements in subtree number i of the
node, and
2. An element at index i is less than all the elements in subtree number i + 1 of the
node.
 Rule 6: Every leaf in a B-tree has the same depth. This ensures that a B-tree avoids
the problem of an unbalanced tree.

Searching for a Target in a Set


The pseudocode:
1. Make a local variable, i, equal to the first index such that data[i] >= target. If there is no
such index, then set i equal to dataCount, indicating that none of the elements is greater than or
equal to the target.
2. if (it found the target at data[i])
return true;
else if (the root has no children)
return false;
else return subset[i].contains(target);
See the following example, try to search for 10.

We can implement a private method:


• private int firstGE(int target), which returns the first location x in the root such that
data[x] >= target. If there is no such location, then the return value is dataCount.
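
A small C sketch of firstGE and the recursive search; the fixed-capacity node layout below (data[], subset[], dataCount, childCount) is an assumed concrete form of the set formulation, not the only possible one:

#define MINIMUM 2
#define MAXIMUM (2 * MINIMUM)

struct btnode
{
    int dataCount;                       /* number of elements in data[]     */
    int data[MAXIMUM + 1];               /* sorted; +1 slot for a loose add  */
    int childCount;                      /* 0 for a leaf                     */
    struct btnode *subset[MAXIMUM + 2];  /* child pointers                   */
};

/* first index i with data[i] >= target, or dataCount if there is none */
static int firstGE (const struct btnode *root, int target)
{
    int i = 0;
    while (i < root->dataCount && root->data[i] < target)
        i++;
    return i;
}

/* contains() follows the search pseudocode above */
static int contains (const struct btnode *root, int target)
{
    int i = firstGE (root, target);
    if (i < root->dataCount && root->data[i] == target)
        return 1;                        /* found the target in this node */
    if (root->childCount == 0)
        return 0;                        /* leaf reached: not in the tree */
    return contains (root->subset[i], target);
}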
3.10 Basic operations on B-trees- insertion and deletion
Adding an Element to a B-Tree
It is easier to add a new element to a B-tree if we relax one of the B-tree rules.
Loose addition allows the root node of the B-tree to have MAXIMUM + 1 elements, where
MAXIMUM is twice MINIMUM (Rule 2). For example, suppose we want to add 18 to the tree:

The above result is an illegal B-tree. Our plan is to perform a loose addition first, and then fix
the root's problem.
The Loose Addition Operation for a B-Tree:
private void looseAdd(int element)
{
1. i = firstGE(element) // find the first index such that data[i] >= element
2. if (we found the new element at data[i]) return; // since there's already a copy in the set
3. else if (the root has no children)
Add the new element to the root at data[i]. (shift array)
4. else {
subset[i].looseAdd(element);
if the root of subset[i] now has an excess element, then fix that problem before returning.
}
}

private void fixExcess(int i)


// precondition: (i < childCount) and the entire B-tree is valid except that subset[i] has
MAXIMUM + 1 elements.
// postcondition: the tree is rearranged to satisfy the loose addition rule
Fixing a Child with an Excess Element:

 To fix a child with MAXIMUM + 1 elements, the child node is split into two nodes that
each contain MINIMUM elements. This leaves one extra element, which is passed up to
the parent (see the sketch after this list).
 It is always the middle element of the split node that moves upward.
 The parent of the split node gains one additional child and one additional element.
 The children of the split node have been equally distributed between the two smaller
nodes.
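
A hedged C sketch of this split, reusing the btnode layout and the MINIMUM/MAXIMUM constants from the search sketch above; the surrounding looseAdd logic and error handling are omitted:

#include <stdlib.h>

/* fixExcess(parent, i): subset[i] of parent holds MAXIMUM + 1 elements.
   Split it into two nodes of MINIMUM elements each and move the middle
   element up into the parent. */
static void fixExcess (struct btnode *parent, int i)
{
    struct btnode *child = parent->subset[i];
    struct btnode *right = calloc (1, sizeof *right);
    int j;

    /* copy the upper MINIMUM elements into the new right sibling */
    for (j = 0; j < MINIMUM; j++)
        right->data[j] = child->data[MINIMUM + 1 + j];
    right->dataCount = MINIMUM;
    child->dataCount = MINIMUM;

    /* if the split node is not a leaf, move the upper half of its children */
    if (child->childCount > 0)
    {
        for (j = 0; j <= MINIMUM; j++)
            right->subset[j] = child->subset[MINIMUM + 1 + j];
        right->childCount = MINIMUM + 1;
        child->childCount = MINIMUM + 1;
    }

    /* shift the parent's elements and children to open slot i */
    for (j = parent->dataCount; j > i; j--)
        parent->data[j] = parent->data[j - 1];
    for (j = parent->childCount; j > i + 1; j--)
        parent->subset[j] = parent->subset[j - 1];

    /* the middle element of the split node moves up to the parent */
    parent->data[i] = child->data[MINIMUM];
    parent->dataCount++;
    parent->subset[i + 1] = right;
    parent->childCount++;
}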

Fixing the Root with an Excess Element:


 Create a new root.
 fixExcess(0).

Removing an Element from a B-Tree


Loose removal rule:
Loose removal allows the root to be left with one element too few.
public boolean remove(int target)
{
answer = looseRemove(target);
if ((dataCount == 0) && (childCount == 1))
Fix the root of the entire tree so that it no longer has zero elements;
return answer;
}

private boolean looseRemove(int target)


{
1. i = firstGE(target)
2. Deal with one of these four possibilities:
2a. if (root has no children and target not found) return false.
2b. if( root has no children but target found) {
remove the target
return true
}
2c. if (root has children and target not found) {
answer = subset[i].looseRemove(target)
if (subset[i].dataCount < MINIMUM)
fixShortage(i)
return true
}
2d. if (root has children and target found) {
data[i] = subset[i].removeBiggest()
if (subset[i].dataCount < MINIMUM)
fixShortage(i)
return true
}
}

private void fixShortage(int i)


// Precondition: (i < childCount) and the entire B-tree is valid except that subset[i] has
MINIMUM - 1 elements.
// Postcondition: problem fixed based on the looseRemoval rule.

private int removeBiggest()


// Precondition: (dataCount > 0) and this entire B-tree is valid
// Postcondition: the largest element in this set has been removed and returned. The entire B-
tree is still valid based on the looseRemoval rule.

Fixing Shortage in a Child:


When fixShortage(i) is activated, we know that subset[i] has MINIMUM - 1 elements. There
are four cases that we need to consider:
Case 1: Transfer an extra element from subset[i-1]. Suppose subset[i-1] has more than the
MINIMUM number of elements.
a. Transfer data[i-1] down to the front of subset[i].data.
b. Transfer the final element of subset[i-1].data up to replace data[i-1].
c. If subset[i-1] has children, transfer the final child of subset[i-1] over to the front of
subset[i].

Case 2: Transfer an extra element from subset[i+1]. Suppose subset[i+1] has more than the
MINIMUM number of elements.
Case 3: Combine subset[i] with subset[i-1]. Suppose subset[i-1] has only MINIMUM
elements.

a. Transfer data[i-1] down to the end of subset[i-1].data.
b. Transfer all the elements and children from subset[i] to the end of subset[i-1].
c. Disconnect the node subset[i] from the B-tree by shifting subset[i+1], subset[i+2] and so
on leftward.

Case 4: Combine subset[i] with subset[i+1]. Suppose subset[i+1] has only MINIMUM
elements.
It may need to continue activating fixShortage() until the B-tree rules are satisfied.
Removing the Biggest Element from a B-Tree:
private int removeBiggest()
{
if (root has no children)
remove and return the last element
else {
answer = subset[childCount-1].removeBiggest()
if (subset[childCount-1].dataCount < MINIMUM)
fixShortage(childCount-1)
return answer
}
}
A more concrete example for node deletion:
A Special Case: the 2-3 Tree
A 2-3 tree is a type of B-tree where every node with children (internal node) has either two
children and one data element (2-nodes) or three children and two data elements (3-node). Leaf
nodes have no children and one or two data elements.

Trees-Time Analysis
The implementation of a B-tree is efficient since the depth of the tree is kept small.
Worst-case times for tree operations: the worst-case time performance for the following
operations are all O(d), where d is the depth of the tree:

1. Adding an element to a binary search tree (BST), a heap, or a B-tree.


2. Removing an element from a BST, a heap, or a B-tree.
3. Searching for a specified element in a BST or a B-tree.


Time Analysis for BST


Suppose a BST has n elements. What is the maximum depth the tree could have?

 A BST with n elements could have a depth as big as n-1.

Worst-Case Times for BSTs:

 Adding an element, removing an element, or searching for an element in a BST


with n elements is O(n).

Time Analysis for Heaps


Remember that a heap is a complete BST, so each level must be full before proceeding to the
next level.
The number of nodes needed for a heap to reach depth d is (1 + 2 + 4 + 8 + ... + 2^(d-1)) + 1 = 2^d,
so a heap with n elements satisfies 2^d <= n. Thus d <= log2(n), i.e. d is O(log n).
Worst-Case Times for Heap Operations:

 Adding or removing an element in a heap with n elements is O(log n).


Time Analysis for B-Tree
Suppose a B-tree has n elements and M is the maximum number of children a node can have.
What is the maximum depth the tree could have? What is the minimum depth the tree could
have?

 The worst-case depth (maximum depth) of a B-tree is log base M/2 of n, i.e. log_(M/2)(n).


 The best-case depth (minimum depth) of a B-tree is log base M of n, i.e. log_M(n).

Worst-Case Times for B-Trees:

 Adding or removing an element in a B-tree with n elements is O(log n).

The idea, seen earlier for other set implementations (lists, hash tables), of putting multiple
elements together into large chunks that exploit locality can also be applied to trees. Binary
search trees are not good for locality because a given node of the binary tree probably occupies
only a fraction of any cache line. B-trees are a way to get better locality by putting multiple
elements into each tree node.

B-trees were originally invented for storing data structures on disk, where locality is even
more crucial than with memory. Accessing a disk location takes about 5ms = 5,000,000ns.
Therefore, if you are storing a tree on disk, you want to make sure that a given disk read is as
effective as possible. B-trees have a high branching factor, much larger than 2, which ensures
that few disk reads are needed to navigate to the place where data is stored. B-trees may also be
useful for in-memory data structures because these days main memory is almost as slow
relative to the processor as disk drives were to main memory when B-trees were first
introduced!

A B-tree of order m is a search tree in which each nonleaf node has up to m children. The
actual elements of the collection are stored in the leaves of the tree, and the nonleaf nodes
contain only keys. Each leaf stores some number of elements; the maximum number may be
greater or (typically) less than m. The data structure satisfies several invariants:

1. Every path from the root to a leaf has the same length
2. If a node has n children, it contains n−1 keys.

3. Every node (except the root) is at least half full

4. The elements stored in a given subtree all have keys that are between the keys in
the parent node on either side of the subtree pointer. (This generalizes the BST
invariant.)

5. The root has at least two children if it is not a leaf.

For example, the following is an order-5 B-tree (m=5) where the leaves have enough space
to store up to 3 data records:

Because the height of the tree is uniformly the same and every node is at least half full, we
are guaranteed that the asymptotic performance is O(lg n) where n is the size of the collection.
The real win is in the constant factors, of course. We can choose m so that the pointers to
the m children plus the m−1 elements fill out a cache line at the highest level of the memory
hierarchy where we can expect to get cache hits. For example, if we are accessing a large disk
database then our "cache lines" are memory blocks of the size that is read from disk.

Lookup in a B-tree is straightforward. Given a node to start from, we use a simple linear or
binary search to find whether the desired element is in the node, or if not, which child pointer
to follow from the current node.

Insertion and deletion from a B-tree are more complicated; in fact, they are notoriously
difficult to implement correctly. For insertion, we first find the appropriate leaf node into
which the inserted element falls (assuming it is not already in the tree). If there is already room
in the node, the new element can be inserted simply. Otherwise the current leaf is already full
and must be split into two leaves, one of which acquires the new element. The parent is then
updated to contain a new key and child pointer. If the parent is already full, the process ripples
upwards, eventually possibly reaching the root. If the root is split into two, then a new root is
created with just two children, increasing the height of the tree by one.

For example, here is the effect of a series of insertions. The first insertion (13) merely affects
a leaf. The second insertion (14) overflows the leaf and adds a key to an internal node. The
third insertion propagates all the way to the root.

Deletion works in the opposite way: the element is removed from the leaf. If the leaf
becomes empty, a key is removed from the parent node. If that breaks invariant 3, the keys of
the parent node and its immediate right (or left) sibling are reapportioned among them so that
invariant 3 is satisfied. If this is not possible, the parent node can be combined with that
sibling, removing a key another level up in the tree and possibly causing a ripple all the way to
the root. If the root has just two children, and they are combined, then the root is deleted and
the new combined node becomes the root of the tree, reducing the height of the tree by one.

4.1 Priority Queues (Heaps) – Model

Consider the print jobs given to a printer. Although jobs sent to a line printer are generally
placed on a queue, this might not always be the best thing to do. For instance, one job might be
particularly important, so that it might be desirable to allow that job to be run as soon as the printer is
available. Conversely, if, when the printer becomes available, there are several one-page jobs and one
hundred-page job, it might be reasonable to make the long job go last, even if it is not the last job
submitted.

Similarly, in a multiuser environment, the operating system scheduler must decide which of
several processes to run. Generally a process is only allowed to run for a fixed period of time. One
algorithm uses a queue. Jobs are initially placed at the end of the queue. The scheduler will repeatedly
take the first job on the queue, run it until either it finishes or its time limit is up, and place it at the
end of the queue if it does not finish. This strategy is generally not appropriate, because very short
jobs will seem to take a long time because of the wait involved to run. Generally, it is important that
short jobs finish as fast as possible, so these jobs should have preference over jobs that have already
been running. Furthermore, some jobs that are not short are still very important and should also have
preference.

Model:

 A Priority Queue is a data structure that allows the following two operations: Insert, which does
the obvious thing of insertion, and DeleteMin, which finds, returns, and removes the minimum
element in the priority queue.
 The Insert operation is the equivalent of Enqueue, and DeleteMin is the priority queue
equivalent of the Dequeue operation.
 The DeleteMin function also alters its input.
 Priority queues have many applications besides the operating systems
 Priority queues are also important in the implementation of greedy algorithms, which operate by
repeatedly finding a minimum.
4.2 Simple Implementation:

 There are several obvious ways to implement a Priority Queue.


 We could use a simple linked list, performing insertions at the front in O(1) and traversing the
list, which requires O(n) time, to delete the minimum.
 We could insist that the list always be kept sorted; this makes insertions expensive (O(n)) and
delete_mins cheap (O(1))
 Another way of implementing a priority queue would be to use a binary search tree (BST). This
gives an O(log N) average running time for both operations.
 This is true in spite of the fact that although the insertions are random, the deletions are not.
Repeatedly removing a node that is in the left subtree would seem to hurt the balance of the tree
by making the right subtree heavy.

4.3 Binary Heap

 The implementation we will use is known as a binary heap.


 Its use is so common for priority queue implementations that, when the word heap is used
without a qualifier, it is generally assumed to be referring to this implementation of the data
structure.
 As with AVL trees, an operation on a heap can destroy one of these properties, so a heap
operation must not terminate until all heap properties are in order.
Properties of Binary heap:
 Structure Property
 Heap order Property
Structure Property:
 A heap is a binary tree that is completely filled with the possible exception of the bottom level,
which is filled from left to right. Such a tree is known as a complete binary tree.
 It is easy to show that a complete binary tree of height h has between 2^h and 2^(h+1) − 1 nodes.
 An important observation is that a complete binary tree is so regular, it can be represented in an
array and no pointer is necessary.
 For any element in array position i, the left child is in position 2i, the right child is in the cell after
the left child (2i + 1), and the parent is in position i/2 (integer division).
 Thus not only are pointers not required, but the operations required to traverse the tree are
extremely simple and likely to be very fast on most computers.
 The only problem with this implementation is that an estimate of the maximum heap size is
required in advance, but typically this is not a problem.
 A heap data structure will, then, consist of an array (of whatever type the key is) and integers
representing the maximum and current heap sizes. Figure 6.4 shows a typical priority queue
declaration (a sketch of such a declaration is given below).
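
Since Figure 6.4 is not reproduced here, the following is a sketch of what such a declaration typically looks like; the field names and macros are assumptions in the spirit of the text, with the root stored at array position 1.

typedef int ElementType;           /* as in the earlier interface sketch    */

struct HeapStruct {
    int          capacity;         /* maximum heap size, fixed in advance   */
    int          size;             /* current number of elements            */
    ElementType *elements;         /* elements[1..size]; index 0 unused     */
};
typedef struct HeapStruct *PriorityQueue;

/* The traversal rules described above become simple index arithmetic: */
#define LeftChild(i)  (2 * (i))
#define RightChild(i) (2 * (i) + 1)
#define Parent(i)     ((i) / 2)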
Heap order Property
 The property that allows operations to be performed quickly is the heap order property.
 Since we want to be able to find the minimum quickly, it makes sense that the smallest element
should be at the root.
 If we consider that any subtree should also be a heap, then any node should be smaller than all
of its descendants.
 Applying this logic, we arrive at the heap order property. In a heap, for every node X, the key in
the parent of X is smaller than (or equal to) the key in X, with the exception of the root (Refer –
Heap sort).
 In the figure, the tree on the left is a heap, but the tree on the right is not (the dashed line shows
the violation of heap order).
 Analogously, we can declare a (max) heap, which enables us to efficiently find and remove the
maximum element, by changing the heap order property. Thus, a priority queue can be used to
find either a minimum or a maximum, but this needs to be decided ahead of time.
 By the heap order property, the minimum element can always be found at the root. Thus, we get
the extra operation, find_min, in constant time (see the sketch below).
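
For example, with the array layout sketched earlier (root at position 1), find_min is a single array access; this small sketch assumes a plain int array and a non-empty heap.

#include <assert.h>

/* O(1): under the heap order property the root holds the minimum. */
int find_min(const int heap[], int size)
{
    assert(size >= 1);             /* heap must not be empty */
    return heap[1];
}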
Basic Heap Operations
 It is easy to perform the two required operations. All the work involves ensuring that the heap
order property is maintained.
 Insert
 Delete Min
INSERT
 To insert an element x into the heap, we create a hole in the next available location, since
otherwise the tree will not be complete.
 If x can be placed in the hole without violating heap order, then we do so and are done.
Otherwise we slide the element that is in the hole's parent node into the hole, thus bubbling the
hole up toward the root.
 We continue this process until x can be placed in the hole.
 The figure shows that to insert 14, we create a hole in the next available heap location.
 Inserting 14 in the hole would violate the heap order property, so 31 is slid down into the hole.
This strategy is continued in Figure 6.7 until the correct location for 14 is found.
 This general strategy is known as a percolate up; the new element is percolated up the heap until
the correct location is found (a code sketch is given after this list).
 We could have implemented the percolation in the insert routine by performing repeated swaps
until the correct order was established, but a swap requires three assignment statements.
 If an element is percolated up d levels, the number of assignments performed by the swaps
would be 3d. Our method uses d + 1 assignments.
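
A sketch of the percolate-up insertion described above is given below, using a plain int array with the root at index 1 and heap[0] unused; the name heap_insert is an assumption, and the check for a full heap is omitted for brevity.

void heap_insert(int heap[], int *size, int x)
{
    int hole = ++(*size);                  /* hole in the next available slot */

    /* Bubble the hole up while the parent is larger than x. */
    for (; hole > 1 && heap[hole / 2] > x; hole /= 2)
        heap[hole] = heap[hole / 2];       /* slide the parent into the hole  */

    heap[hole] = x;                        /* d levels => d + 1 assignments   */
}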
Delete_min
 Delete_mins are handled in a similar manner as insertions. Finding the minimum is easy; the
hard part is removing it.
 When the minimum is removed, a hole is created at the root.
 Since the heap now becomes one element smaller, it follows that the last element x in the heap must
move somewhere in the heap.
 If x can be placed in the hole, then we are done. This is unlikely, so we slide the smaller of the
hole's children into the hole, thus pushing the hole down one level.
 We repeat this step until x can be placed in the hole. Thus, our action is to place x in its correct
spot along a path from the root containing minimum children.
 In Figure 6.9 the left figure shows a heap prior to the delete_min. After 13 is removed, we must
now try to place 31 in the heap. 31 cannot be placed in the hole, because this would violate heap
order.
 Thus, we place the smaller child (14) in the hole, sliding the hole down one level (see Fig.
6.10). We repeat this again, placing 19 into the hole and creating a new hole one level deeper.
 We then place 26 in the hole and create a new hole on the bottom level. Finally, we are able to
place 31 in the hole (Fig. 6.11). This general strategy is known as a percolate down.
 We use the same technique as in the insert routine to avoid the use of swaps in this routine (a
code sketch follows).
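
Below is a matching sketch of delete_min with percolate down, under the same plain-array conventions as the insertion sketch; the empty-heap check is omitted for brevity.

int heap_delete_min(int heap[], int *size)
{
    int min  = heap[1];                    /* minimum is always at the root   */
    int last = heap[(*size)--];            /* last element must be relocated  */
    int hole = 1, child;

    for (; hole * 2 <= *size; hole = child) {
        child = hole * 2;                              /* left child          */
        if (child != *size && heap[child + 1] < heap[child])
            child++;                                   /* smaller of the two  */
        if (heap[child] < last)
            heap[hole] = heap[child];                  /* slide the hole down */
        else
            break;
    }
    heap[hole] = last;
    return min;
}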
*******
EXERCISES
PART A
1. Define a tree
2. Define root
3. Define degree of the node
4. Define leaves
5. Define depth and height of a node
6. Define depth and height of a tree
7. Define a binary tree
8. Define a path in a tree
9. Define terminal nodes in a tree
10. Define non-terminal nodes in a tree
11. Define a full binary tree
12. Define a complete binary tree
13. Define a right-skewed binary tree
14. State the properties of a binary tree
15. What is meant by binary tree traversal?
16. What are the different binary tree traversal techniques?
17. What are the tasks performed while traversing a binary tree?
18. What are the tasks performed during preorder traversal?
19. What are the tasks performed during inorder traversal?
20. What are the tasks performed during postorder traversal?
21. State the merits of linear representation of binary trees.
22. State the demerit of linear representation of binary trees.
23. State the merit of linked representation of binary trees.
24. State the demerits of linked representation of binary trees.
25. Define a binary search tree
26. What do you mean by general trees?
27. Define ancestor and descendant
28. Why is it said that searching for a node in a binary search tree is more efficient than in a
simple binary tree?
29. Define AVL Tree.
30. What do you mean by balanced trees?
31. What are the categories of AVL rotations?
32. What do you mean by balance factor of a node in AVL tree?
33. What is the minimum number of nodes in an AVL tree of height h?
34. Define B-tree of order M.
35. What do you mean by 2-3 trees?
36. What do you mean by 2-3-4 tree?
37. What are the applications of B-tree?
38. What is an expression tree?
PART – B
1. What is a binary search tree? Explain with an example.
2. Explain binary tree traversals.
3. Explain expression trees.
4. Write the procedure to convert a general tree to a binary tree.
5. Explain AVL trees in detail.
6. What is a binary tree? Explain binary tree traversal in C.
7. Construct a binary tree to satisfy the following orders:
8. Explain representing lists as binary trees. Write an algorithm to find the kth element and delete
it.