Data Structure
We have read about linear data structures such as arrays, linked lists, stacks, and queues, in which all elements are arranged sequentially. Different data structures suit different kinds of data.
o What type of data needs to be stored?: A certain data structure may be the best fit for a particular kind of data.
o Cost of operations: We want to minimize the cost of the most frequently performed operations. For example, if we have a simple list on which we mostly perform search operations, we can store the elements in a sorted array and use binary search. Binary search works very fast on such a list because it halves the search space at every step.
o Memory usage: Sometimes, we want a data structure that uses less memory.
A tree is also one of the data structures that represent hierarchical data. Suppose we want to show employees and their positions in hierarchical form; then it can be represented as shown below:
The above tree shows the organization hierarchy of some company. In this structure, John is the CEO of the company, and John has two direct reports named Steve and Rohan. Steve has three direct reports named Lee, Bob, and Ella, so Steve is a manager. Bob has two direct reports named Sal and Emma. Emma has two direct reports named Tom and Raj. Tom has one direct report named Bill. This particular logical structure is known as a Tree. Its structure is similar to a real tree, so it is named a Tree. In this structure, the root is at the top, and its branches grow in a downward direction. Therefore, we can say that the Tree data structure is an efficient way of storing data in a hierarchical way.
o A tree data structure is defined as a collection of objects or entities known as nodes that
are linked together to represent or simulate hierarchy.
o A tree data structure is a non-linear data structure because it does not store data in a sequential manner. It is a hierarchical structure, as the elements in a Tree are arranged on multiple levels.
o In the Tree data structure, the topmost node is known as a root node. Each node contains
some data, and data can be of any type. In the above tree structure, the node contains the
name of the employee, so the type of data would be a string.
o Each node contains some data and the link or reference of other nodes that can be called
children.
In the above structure, each node is labeled with some number. Each arrow shown in the above
figure is known as a link between the two nodes.
o Root: The root node is the topmost node in the tree hierarchy. In other words, the root node is the one that doesn't have any parent. In the above structure, the node numbered 1 is the root node of the tree. If a node is directly linked to some other node, the link is called a parent-child relationship.
o Child node: The immediate descendant of a node is known as its child node.
o Parent: If the node contains any sub-node, then that node is said to be the parent of that
sub-node.
o Sibling: The nodes that have the same parent are known as siblings.
o Leaf Node: A node of the tree that doesn't have any child node is called a leaf node. A leaf node is a bottom-most node of the tree. There can be any number of leaf nodes in a general tree. Leaf nodes are also called external nodes.
o Internal node: A node that has at least one child node is known as an internal node.
o Ancestor node: An ancestor of a node is any node on the path from the root to that node (excluding the node itself). The root node doesn't have any ancestors. In the tree shown in the above image, nodes 1, 2, and 5 are the ancestors of node 10.
o Descendant: Any node reachable by moving downward from a given node (a child, a child's child, and so on) is a descendant of that node. In the above figure, 10 is a descendant of node 5.
o Recursive data structure: A tree is also known as a recursive data structure, because it can be defined recursively: a distinguished node, known as the root node, contains links to the roots of all of its subtrees. The left subtree is shown in yellow in the below figure, and the right subtree is shown in red. The left subtree can be further split into subtrees, shown in three different colors. Recursion means reducing something in a self-similar manner. This recursive property of the tree data structure is used in various applications.
o Number of edges: If there are n nodes, then there are n-1 edges. Each arrow in the structure represents a link or path. Every node, except the root node, has exactly one incoming link, known as an edge, corresponding to its parent-child relationship.
o Depth of node x: The depth of node x is the length of the path from the root to node x. Each edge contributes one unit of length to the path, so the depth of node x can also be defined as the number of edges between the root node and node x. The root node has depth 0.
o Height of node x: The height of node x is the length of the longest path from node x down to a leaf node.
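As a sketch of these two definitions, height and depth can be computed recursively; the struct and function names below are illustrative choices, not part of the text above:

```c
#include <stddef.h>

/* Minimal binary tree node for illustration (field names are assumptions). */
struct node {
    int data;
    struct node *left;
    struct node *right;
};

/* Height of a node: longest downward path to a leaf, counted in edges.
   An empty subtree is given height -1 so that a leaf node gets height 0. */
int height(struct node *n) {
    if (n == NULL)
        return -1;
    int lh = height(n->left);
    int rh = height(n->right);
    return (lh > rh ? lh : rh) + 1;
}

/* Depth of a target node: number of edges from the root down to it.
   Returns -1 if the target is not in the tree; the root itself has depth 0. */
int depth(struct node *root, struct node *target) {
    if (root == NULL)
        return -1;
    if (root == target)
        return 0;
    int d = depth(root->left, target);
    if (d == -1)
        d = depth(root->right, target);
    return (d == -1) ? -1 : d + 1;
}
```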
Based on the properties of the Tree data structure, trees are classified into various categories.
Implementation of Tree
The tree data structure can be created by creating the nodes dynamically with the help of the
pointers. The tree in the memory can be represented as shown below:
The above figure shows the representation of the tree data structure in the memory. In the above
structure, the node contains three fields. The second field stores the data; the first field stores the
address of the left child, and the third field stores the address of the right child.
struct node
{
    int data;
    struct node *left;
    struct node *right;
};
The above structure can only be used for binary trees, because a binary tree can have at most two children while generic trees can have more than two children. The structure of the node for generic trees is therefore different from that of the binary tree.
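For illustration, one common way to represent a generic (n-ary) tree node is the first-child/next-sibling scheme, where two pointers suffice no matter how many children a node has. The field names here are assumptions, not from the text:

```c
#include <stddef.h>

/* First-child / next-sibling representation of a generic tree node. */
struct gtree_node {
    int data;
    struct gtree_node *first_child;   /* leftmost child of this node */
    struct gtree_node *next_sibling;  /* next child of the same parent */
};

/* Count the children of a node by walking its child's sibling list. */
int child_count(const struct gtree_node *n) {
    int count = 0;
    for (const struct gtree_node *c = n->first_child; c != NULL; c = c->next_sibling)
        count++;
    return count;
}
```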
Applications of trees
The following are the applications of trees:
o Storing naturally hierarchical data: Trees are used to store data that is hierarchical by nature. For example, in a file system, the files and folders on a disk drive form naturally hierarchical data and are stored in the form of trees.
o Organize data: Trees are used to organize data for efficient insertion, deletion, and searching. For example, a binary search tree allows searching for an element in O(log n) time on average.
o Trie: It is a special kind of tree that is used to store the dictionary. It is a fast and efficient
way for dynamic spell checking.
o Heap: It is also a tree data structure implemented using arrays. It is used to implement
priority queues.
o B-Tree and B+Tree: B-Tree and B+Tree are the tree data structures used to implement
indexing in databases.
o Routing table: The tree data structure is also used to store the data in routing tables in
the routers.
o General tree: The general tree is one of the types of tree data structures. In a general tree, a node can have anywhere from 0 up to n children; there is no restriction on the degree of a node (the number of children a node can have). The topmost node in a general tree is known as the root node. The trees rooted at the children of a node are known as its subtrees.
There can be n number of subtrees in a general tree. In a general tree, the subtrees are unordered, as the children of a node cannot be ordered.
Every non-empty tree has downward edges, and these edges connect to the nodes known as child nodes. The root node is labeled with level 0. The nodes that have the same parent are known as siblings.
o Binary tree: Here, the name binary itself suggests two. In a binary tree, each node can have at most two child nodes; that is, a node can have 0, 1, or 2 children.
To know more about the binary tree, click on the link given below:
https://www.javatpoint.com/binary-tree
o Binary Search tree: A binary search tree is a non-linear, node-based data structure in which each node can have at most two children. A node in a binary search tree has three fields, i.e., a data part, a left child, and a right child, so the node contains two pointers (a left child pointer and a right child pointer).
Every node in the left subtree must contain a value less than the value of the root node, and the value of each node in the right subtree must be greater than the value of the root node.
A node can be created with the help of a user-defined data type known as struct, as shown
below:
struct node
{
    int data;
    struct node *left;
    struct node *right;
};
The above is the node structure with three fields: the data field, the left pointer of the node type, and the right pointer of the node type.
o AVL tree
It is one of the types of binary tree, or we can say that it is a variant of the binary search tree. An AVL tree satisfies the properties of a binary tree as well as of a binary search tree. It is a self-balancing binary search tree that was invented by G. M. Adelson-Velsky and E. M. Landis. Here, self-balancing means that the tree balances the heights of the left and right subtrees. This balancing is measured in terms of the balancing factor.
We can consider a tree an AVL tree if it obeys the binary search tree property as well as the balancing-factor condition. The balancing factor is defined as the difference between the height of the left subtree and the height of the right subtree. The balancing factor's value must be 0, -1, or 1; therefore, each node in an AVL tree should have a balancing factor of 0, -1, or 1.
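The balancing factor described above can be sketched in C as follows. This is a naive check with an O(n) height computation; real AVL implementations usually cache heights in the nodes and rebalance with rotations, which this sketch does not show:

```c
#include <stddef.h>
#include <stdlib.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

/* Height in edges; an empty subtree counts as -1 so a leaf has height 0. */
int height(struct node *n) {
    if (n == NULL)
        return -1;
    int lh = height(n->left);
    int rh = height(n->right);
    return (lh > rh ? lh : rh) + 1;
}

/* Balance factor = height(left subtree) - height(right subtree). */
int balance_factor(struct node *n) {
    return height(n->left) - height(n->right);
}

/* A tree satisfies the AVL balance condition when every node's
   balance factor is -1, 0, or 1. */
int is_avl_balanced(struct node *n) {
    if (n == NULL)
        return 1;
    return abs(balance_factor(n)) <= 1
        && is_avl_balanced(n->left)
        && is_avl_balanced(n->right);
}
```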
To know more about the AVL tree, click on the link given below:
https://www.javatpoint.com/avl-tree
o Red-Black Tree
The Red-Black tree is a kind of binary search tree, so the prerequisite for studying it is knowing the binary search tree. In a binary search tree, the values in the left subtree of a node should be less than the value of that node, and the values in the right subtree should be greater than the value of that node. The time complexity of searching in a binary search tree is O(log n) in the average case, O(1) in the best case, and O(n) in the worst case.
When any operation is performed on the tree, we want our tree to be balanced so that all the operations like searching, insertion, and deletion take less time, each with a time complexity of O(log n).
The red-black tree is a self-balancing binary search tree. The AVL tree is also a height-balanced binary search tree, so why do we need a Red-Black tree? In an AVL tree, we do not know in advance how many rotations are required to balance the tree, but in a Red-Black tree, a maximum of 2 rotations is required to rebalance after an insertion. Each node contains one extra bit that represents either the red or black color of the node to ensure the balancing of the tree.
o Splay tree
The splay tree data structure is also a binary search tree, in which a recently accessed element is moved to the root of the tree by performing rotation operations; this operation is called splaying. It is a self-balancing binary search tree with no explicit balance condition, unlike the AVL tree.
The height of a splay tree is not necessarily balanced, i.e., the heights of the left and right subtrees may differ, but operations on a splay tree take O(log n) amortized time, where n is the number of nodes. So a splay tree cannot be considered a height-balanced tree; the rotations performed after each operation keep it balanced only in this amortized sense.
o Treap
The Treap data structure combines the Tree and Heap data structures, so it has the properties of both. In a binary search tree, each node in the left subtree must be less than or equal to the value of the root node, and each node in the right subtree must be greater than or equal to the value of the root node. In a min-heap data structure, both the left and right subtrees contain larger keys than the root; therefore, the root node contains the lowest value.
In the treap data structure, each node has both a key and a priority, where the key follows the binary search tree property and the priority follows the heap property.
The Treap data structure follows two properties, which are given below:
o The right child of a node >= the current node and the left child of a node <= the current node (binary search tree property on keys).
o The priority of the children of any node must be greater than the priority of that node (min-heap property on priorities).
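A minimal sketch of a treap node and a check of the two properties above. The field names and the min-heap orientation follow the text's description ("the root node contains the lowest value"); a production treap would also perform rotations on insertion, which are not shown, and this check only verifies the parent-child conditions, not the full subtree ordering:

```c
#include <stddef.h>

/* A treap node carries both a BST key and a heap priority. */
struct treap_node {
    int key;       /* ordered like a binary search tree */
    int priority;  /* ordered like a min-heap */
    struct treap_node *left;
    struct treap_node *right;
};

/* Check the parent-child conditions of both orderings at every node:
   key order follows the BST rule, priority order follows the min-heap rule. */
int is_treap(const struct treap_node *n) {
    if (n == NULL)
        return 1;
    if (n->left && (n->left->key > n->key || n->left->priority < n->priority))
        return 0;
    if (n->right && (n->right->key < n->key || n->right->priority < n->priority))
        return 0;
    return is_treap(n->left) && is_treap(n->right);
}
```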
o B-tree
A B-tree is a balanced m-way tree, where m defines the order of the tree. Till now, we have read about nodes that contain only one key, but a b-tree node can have more than one key and more than 2 children. It always maintains sorted data. In a binary tree, leaf nodes can be at different levels, but in a b-tree, all the leaf nodes must be at the same level.
The root node must contain a minimum of 1 key, and all other nodes must contain at least ceil(m/2) - 1 keys.
Binary Tree
A binary tree is a tree in which each node can have a maximum of two children. Here, the name binary itself suggests 'two'; therefore, each node can have 0, 1, or 2 children.
The above tree is a binary tree because each node contains at most two children. The logical representation of the above tree is given below:
In the above tree, node 1 contains two pointers, i.e., left and right pointers pointing to the left and right nodes respectively. Node 2 also has both left and right children; therefore, it has two pointers (left and right). Nodes 3, 5, and 6 are leaf nodes, so all of these nodes contain a NULL pointer in both their left and right parts.
o The minimum number of nodes possible at height h is equal to h+1.
o If the number of nodes is minimum, then the height of the tree would be maximum.
Conversely, if the number of nodes is maximum, then the height of the tree would be
minimum.
As we know, the maximum number of nodes in a binary tree of height h is:
n = 2^(h+1) - 1
n + 1 = 2^(h+1)
log2(n + 1) = h + 1
h = log2(n + 1) - 1
This gives the minimum possible height for n nodes. For the minimum number of nodes (a skewed tree), we know that:
n = h + 1
h = n - 1
which gives the maximum possible height for n nodes.
The full binary tree is also known as a strict binary tree. A tree is a full binary tree only if each node contains either 0 or 2 children. A full binary tree can also be defined as a tree in which every node except the leaf nodes has 2 children.
In the above tree, we can observe that each node has either zero or two children; therefore, it is a full binary tree.
o The number of leaf nodes is equal to the number of internal nodes plus 1. In the above
example, the number of internal nodes is 5; therefore, the number of leaf nodes is equal to
6.
o The maximum number of nodes in a full binary tree is the same as in any binary tree, i.e., 2^(h+1) - 1.
o The minimum number of nodes in a full binary tree of height h is 2h + 1.
o The minimum height of the full binary tree is log2(n + 1) - 1.
o The maximum height of the full binary tree can be computed from the minimum node count:
n = 2h + 1
n - 1 = 2h
h = (n - 1)/2
The complete binary tree is a tree in which all the levels are completely filled except possibly the last level. In the last level, all the nodes must be as far left as possible. In a complete binary tree, nodes are added from the left.
The above tree is a complete binary tree because all the levels are completely filled, and the nodes in the last level are placed leftmost first.
A tree is a perfect binary tree if all the internal nodes have 2 children, and all the leaf nodes are at
the same level.
The below tree is not a perfect binary tree because all the leaf nodes are not at the same level.
Note: All perfect binary trees are complete binary trees as well as full binary trees, but the converse is not true, i.e., not every complete binary tree or full binary tree is a perfect binary tree.
The degenerate binary tree is a tree in which all the internal nodes have only one child.
The above tree is a degenerate binary tree because all the nodes have only one child. It is also
known as a right-skewed tree as all the nodes have a right child only.
The above tree is also a degenerate binary tree because all the nodes have only one child. It is
also known as a left-skewed tree as all the nodes have a left child only.
The balanced binary tree is a tree in which, for every node, the heights of the left and right subtrees differ by at most 1. For example, AVL and Red-Black trees are balanced binary trees.
The above tree is a balanced binary tree because the difference between the heights of the left and right subtrees is zero.
The above tree is not a balanced binary tree because the difference between the heights of the left and right subtrees is greater than 1.
A binary tree is implemented with the help of pointers. The first node in the tree is represented by the root pointer. Each node in the tree consists of three parts, i.e., data, a left pointer, and a right pointer. To create a binary tree, we first need to create a node, using a user-defined type as shown below:
struct node
{
    int data;
    struct node *left, *right;
};
In the above structure, data is the value, left pointer contains the address of the left node,
and right pointer contains the address of the right node.
#include <stdio.h>
#include <stdlib.h>

struct node
{
    int data;
    struct node *left, *right;
};

struct node *create();

int main()
{
    struct node *root;
    root = create();
    return 0;
}

struct node *create()
{
    struct node *temp;
    int data, choice;
    printf("Press 0 to exit");
    printf("\nPress 1 for new node");
    printf("\nEnter your choice : ");
    scanf("%d", &choice);
    if (choice == 0)
    {
        return NULL;
    }
    else
    {
        temp = (struct node *)malloc(sizeof(struct node));
        printf("Enter the data: ");
        scanf("%d", &data);
        temp->data = data;
        printf("Enter the left child of %d\n", data);
        temp->left = create();
        printf("Enter the right child of %d\n", data);
        temp->right = create();
        return temp;
    }
}
The above code calls the create() function recursively, creating a new node on each recursive call. When all the nodes have been created, they form a binary tree structure. The process of visiting the nodes is known as tree traversal. There are three types of traversals used to visit a node:
o Inorder traversal
o Preorder traversal
o Postorder traversal
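As a sketch, the three traversals can be written as short recursive functions. Here each one collects the visited values into a caller-supplied array; this buffer-based interface is an illustrative choice, not part of the text above:

```c
#include <stddef.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

/* Each traversal appends visited values to out[] and updates *n with the
   count of values written; out is assumed large enough by the caller. */

void inorder(struct node *t, int *out, int *n) {
    if (t == NULL) return;
    inorder(t->left, out, n);    /* visit the left subtree first */
    out[(*n)++] = t->data;       /* then the node itself */
    inorder(t->right, out, n);   /* then the right subtree */
}

void preorder(struct node *t, int *out, int *n) {
    if (t == NULL) return;
    out[(*n)++] = t->data;       /* node first */
    preorder(t->left, out, n);
    preorder(t->right, out, n);
}

void postorder(struct node *t, int *out, int *n) {
    if (t == NULL) return;
    postorder(t->left, out, n);
    postorder(t->right, out, n);
    out[(*n)++] = t->data;       /* node last */
}
```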
In this article, we will discuss the Binary search tree. This article will be very helpful and
informative to the students with technical background as it is an important topic of their course.
Before moving directly to the binary search tree, let's first see a brief description of the tree.
What is a tree?
A tree is a kind of data structure that is used to represent data in hierarchical form. It can be defined as a collection of objects or entities called nodes that are linked together to simulate a hierarchy. A tree is a non-linear data structure, as the data in a tree is not stored linearly or sequentially.
Now, let's start the topic, the Binary Search tree.
A binary search tree arranges its elements in a specific order. In a binary search tree, the value of a left node must be smaller than its parent node, and the value of a right node must be greater than its parent node. This rule is applied recursively to the left and right subtrees of the root.
Let's understand the concept of Binary search tree with an example.
In the above figure, we can observe that the root node is 40, and all the nodes of the left subtree
are smaller than the root node, and all the nodes of the right subtree are greater than the root
node.
Similarly, the left child of the root node is greater than its own left child and smaller than its own right child, so it also satisfies the binary search tree property. Therefore, we can say that the tree in the above image is a binary search tree.
Suppose we change the value of node 35 to 55 in the above tree and check whether the tree is still a binary search tree.
In the modified tree, the value of the root node is 40, but its left subtree now contains the node 55, which is greater than 40. So the tree no longer satisfies the binary search tree property, and therefore it is not a binary search tree.
o Searching an element in the Binary search tree is easy, as we always have a hint of which subtree has the desired element.
o As compared to arrays and linked lists, insertion and deletion operations are faster in a BST.
Now, let's see the creation of binary search tree using an example.
Suppose the data elements are - 45, 15, 79, 90, 10, 55, 12, 20, 50
o First, we have to insert 45 into the tree as the root of the tree.
o Then, read the next element; if it is smaller than the root node, insert it into the left subtree, and move to the next element.
o Otherwise, if the element is larger than the root node, insert it into the right subtree.
o The same comparison is applied recursively at each node until an empty position is found.
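The steps above can be sketched as a recursive insert routine (a minimal version; error handling for malloc failure is omitted, and duplicate keys go to the right):

```c
#include <stdlib.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

/* Recursive BST insert: smaller keys go left, larger (or equal) go right.
   Returns the (possibly new) root of the subtree. */
struct node *insert(struct node *root, int item) {
    if (root == NULL) {                    /* empty spot found: attach here */
        struct node *n = malloc(sizeof *n);
        n->data = item;
        n->left = n->right = NULL;
        return n;
    }
    if (item < root->data)
        root->left = insert(root->left, item);
    else
        root->right = insert(root->right, item);
    return root;
}
```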
Now, let's see the process of creating the Binary search tree using the given data element. The
process of creating the BST is shown below -
Step 1 - Insert 45 as the root of the tree.
Step 2 - Insert 15. As 15 is smaller than 45, insert it as the root node of the left subtree.
Step 3 - Insert 79. As 79 is greater than 45, insert it as the root node of the right subtree.
Step 4 - Insert 90. 90 is greater than 45 and 79, so it will be inserted as the right subtree of 79.
Step 5 - Insert 10. 10 is smaller than 45 and 15, so it will be inserted as the left subtree of 15.
Step 6 - Insert 55. 55 is larger than 45 and smaller than 79, so it will be inserted as the left subtree of 79.
Step 7 - Insert 12. 12 is smaller than 45 and 15 but greater than 10, so it will be inserted as the right subtree of 10.
Step 8 - Insert 20. 20 is smaller than 45 but greater than 15, so it will be inserted as the right subtree of 15.
Step 9 - Insert 50. 50 is greater than 45 but smaller than 79 and 55, so it will be inserted as the left subtree of 55.
Now, the creation of the binary search tree is completed. Next, let's move on to the operations that can be performed on a binary search tree.
We can perform insert, delete, and search operations on the binary search tree.
Searching means finding or locating a specific element or node in a data structure. In a Binary search tree, searching for a node is easy because the elements in a BST are stored in a specific order. The steps for searching a node in a Binary search tree are listed as follows -
1. First, compare the element to be searched with the root element of the tree.
2. If root is matched with the target element, then return the node's location.
3. If it is not matched, then check whether the item is less than the root element; if it is smaller than the root element, then move to the left subtree.
4. If it is larger than the root element, then move to the right subtree.
5. Repeat the above procedure recursively until the match is found.
6. If the element is not found or not present in the tree, then return NULL.
Now, let's understand the searching in binary tree using an example. We are taking the binary
search tree formed above. Suppose we have to find node 20 from the below tree.
Step 1: Compare 20 with the root node 45. As 20 < 45, move to the left subtree.
Step 2: Compare 20 with 15. As 20 > 15, move to the right subtree.
Step 3: The right child of 15 is 20, which matches the target, so the search stops here.
Now, let's see the algorithm to search an element in the Binary search tree.
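A minimal iterative version of the search, following the steps listed above (the struct and function names are illustrative):

```c
#include <stddef.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

/* Iterative BST search: go left when the target is smaller than the
   current node, right when larger, stop when matched or at NULL. */
struct node *search(struct node *root, int item) {
    while (root != NULL && root->data != item) {
        if (item < root->data)
            root = root->left;
        else
            root = root->right;
    }
    return root; /* NULL when the item is not present in the tree */
}
```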
Now let's understand how deletion is performed on a binary search tree. We will also see an example of deleting an element from the given tree.
In a binary search tree, we must delete a node while keeping in mind that the BST property is not violated. When deleting a node from a BST, there are three possible situations -
It is the simplest case of deleting a node in a BST. Here, we replace the leaf node with NULL and simply free the allocated space.
We can see the process of deleting a leaf node from a BST in the below image. Suppose we have to delete node 90; as the node to be deleted is a leaf node, it will be replaced with NULL, and the allocated space will be freed.
In this case, we have to replace the target node with its child, and then delete the child node. It
means that after replacing the target node with its child node, the child node will now contain the
value to be deleted. So, we simply have to replace the child node with NULL and free up the
allocated space.
We can see the process of deleting a node with one child from BST in the below image. In the
below image, suppose we have to delete the node 79, as the node to be deleted has only one
child, so it will be replaced with its child 55.
So, the replaced node 79 will now be a leaf node that can be easily deleted.
This case of deleting a node in a BST is a bit more complex than the other two cases. In such a case, the steps to be followed are listed as follows -
o First, find the inorder successor of the node to be deleted.
o Replace the value of the node to be deleted with the value of its inorder successor.
o Then delete the inorder successor from its original position.
The inorder successor is required when the right child of the node is not empty. We can obtain the inorder successor by finding the minimum element in the right child of the node.
We can see the process of deleting a node with two children from BST in the below image. In the
below image, suppose we have to delete node 45 that is the root node, as the node to be deleted
has two children, so it will be replaced with its inorder successor. Now, node 45 will be at the
leaf of the tree so that it can be deleted easily.
Insertion in Binary Search tree
A new key in a BST is always inserted at a leaf. To insert an element in a BST, we start searching from the root node; if the key to be inserted is less than the root node, we search for an empty location in the left subtree. Otherwise, we search for an empty location in the right subtree and insert the data there. Insertion in a BST is thus similar to searching, as we always maintain the rule that the left subtree is smaller than the root and the right subtree is larger than the root.
Now, let's see the process of inserting a node into BST using an example.
Let's see the time and space complexity of the Binary search tree. We will see the time
complexity for insertion, deletion, and searching operations in best case, average case, and worst
case.
1. Time Complexity

Operations   Best case time complexity   Average case time complexity   Worst case time complexity
Insertion    O(log n)                    O(log n)                       O(n)
Deletion     O(log n)                    O(log n)                       O(n)
Search       O(log n)                    O(log n)                       O(n)

Here, n is the number of nodes in the tree.

2. Space Complexity

Operations   Space complexity
Insertion    O(n)
Deletion     O(n)
Search       O(n)
Now, let's see the program to implement the operations of Binary Search tree.
In this program, we will see the implementation of the operations of binary search tree. Here, we
will see the creation, inorder traversal, insertion, and deletion operations of tree.
Here, we will see the inorder traversal of the tree to check whether the nodes of the tree are in
their proper location or not. We know that the inorder traversal always gives us the data in
ascending order. So, after performing the insertion and deletion operations, we perform the
inorder traversal, and after traversing, if we get data in ascending order, then it is clear that the
nodes are in their proper location.
#include <iostream>
using namespace std;

struct Node {
    int data;
    Node *left;
    Node *right;
};

Node* create(int item)
{
    Node* node = new Node;
    node->data = item;
    node->left = node->right = NULL;
    return node;
}

/* Inorder traversal of the tree formed */
void inorder(Node *root)
{
    if (root == NULL)
        return;
    inorder(root->left);        // traverse left subtree
    cout << root->data << " ";  // visit root node
    inorder(root->right);       // traverse right subtree
}

Node* findMinimum(Node* cur)    /* To find the inorder successor */
{
    while (cur->left != NULL) {
        cur = cur->left;
    }
    return cur;
}

Node* insertion(Node* root, int item)   /* Insert a node */
{
    if (root == NULL)
        return create(item);    /* return new node if tree is empty */
    if (item < root->data)
        root->left = insertion(root->left, item);
    else
        root->right = insertion(root->right, item);
    return root;
}

void search(Node* &cur, int item, Node* &parent)
{
    while (cur != NULL && cur->data != item)
    {
        parent = cur;
        if (item < cur->data)
            cur = cur->left;
        else
            cur = cur->right;
    }
}

void deletion(Node*& root, int item)    /* function to delete a node */
{
    Node* parent = NULL;
    Node* cur = root;
    search(cur, item, parent);  /* find the node to be deleted */
    if (cur == NULL)
        return;
    if (cur->left == NULL && cur->right == NULL)    /* when the node has no children */
    {
        if (cur != root)
        {
            if (parent->left == cur)
                parent->left = NULL;
            else
                parent->right = NULL;
        }
        else
            root = NULL;
        delete cur;
    }
    else if (cur->left && cur->right)   /* when the node has two children */
    {
        Node* succ = findMinimum(cur->right);
        int val = succ->data;
        deletion(root, succ->data);
        cur->data = val;
    }
    else                                /* when the node has one child */
    {
        Node* child = (cur->left) ? cur->left : cur->right;
        if (cur != root)
        {
            if (cur == parent->left)
                parent->left = child;
            else
                parent->right = child;
        }
        else
            root = child;
        delete cur;
    }
}

int main()
{
    Node* root = NULL;
    root = insertion(root, 45);
    root = insertion(root, 30);
    root = insertion(root, 50);
    root = insertion(root, 25);
    root = insertion(root, 35);
    root = insertion(root, 45);
    root = insertion(root, 60);
    root = insertion(root, 4);
    cout << "The inorder traversal of the given binary tree is - \n";
    inorder(root);
    deletion(root, 25);
    cout << "\nAfter deleting node 25, the inorder traversal of the given binary tree is - \n";
    inorder(root);
    insertion(root, 2);
    cout << "\nAfter inserting node 2, the inorder traversal of the given binary tree is - \n";
    inorder(root);
    return 0;
}
Output

The inorder traversal of the given binary tree is - 
4 25 30 35 45 45 50 60
After deleting node 25, the inorder traversal of the given binary tree is - 
4 30 35 45 45 50 60
After inserting node 2, the inorder traversal of the given binary tree is - 
2 4 30 35 45 45 50 60
So, that's all about the article. Hope the article will be helpful and informative to you.
What is a Stack?
A Stack is a linear data structure that follows the LIFO (Last-In-First-Out) principle. A stack has one end, whereas a queue has two ends (front and rear). It contains only one pointer, the top pointer, which points to the topmost element of the stack. Whenever an element is added to the stack, it is added on the top of the stack, and an element can be deleted only from the top of the stack. In other words, a stack can be defined as a container in which insertion and deletion can be done from one end, known as the top of the stack.
o It is called a stack because it behaves like a real-world stack, such as a pile of books.
o A Stack is an abstract data type with a pre-defined capacity, which means that it can store
the elements of a limited size.
o It is a data structure that follows some order to insert and delete the elements, and that
order can be LIFO or FILO.
Working of Stack
Stack works on the LIFO pattern. As we can observe in the below figure, there are five memory blocks in the stack; therefore, the size of the stack is 5.
Suppose we want to store elements in a stack, and let's assume that the stack is empty. We have taken a stack of size 5, as shown below, in which we push elements one by one until the stack becomes full.
Our stack is now full, as the size of the stack is 5. In the figure, the new elements appear to be entered from the top toward the bottom, but logically the stack gets filled from the bottom to the top.
When we perform a delete operation on the stack, there is only one end for entry and exit, as the other end is closed. A stack follows the LIFO pattern, which means that the value entered first will be removed last. In the above case, the value 5 was entered first, so it will be removed only after the deletion of all the other elements.
o push(): When we insert an element into a stack, the operation is known as a push. If the stack is full, the overflow condition occurs.
o pop(): When we delete an element from the stack, the operation is known as a pop. If the stack is empty, meaning that no element exists in the stack, this state is known as an underflow state.
o isEmpty(): It determines whether the stack is empty or not.
o isFull(): It determines whether the stack is full or not.
o peek(): It returns the topmost element of the stack without removing it.
o count(): It returns the total number of elements available in the stack.
o change(): It changes the element at the given position.
o display(): It prints all the elements available in the stack.
PUSH operation
o Before inserting an element into the stack, we check whether the stack is full.
o If we try to insert an element into a full stack, the overflow condition occurs.
o If the stack is not full, we increment the top by 1, i.e., top=top+1, and place the new element at the position pointed to by the top.
POP operation
o Before deleting the element from the stack, we check whether the stack is empty.
o If we try to delete the element from the empty stack, then the underflow condition occurs.
o If the stack is not empty, we first access the element which is pointed to by the top.
o Once the pop operation is performed, the top is decremented by 1, i.e., top=top-1.
Applications of Stack
o Balancing of symbols: Stack is used for balancing symbols. For example, consider the
following program:
1. int main()
2. {
3. cout<<"Hello";
4. cout<<"javaTpoint";
5. }
As we know, each program has opening and closing braces; when an opening brace comes, we push it onto a stack, and when a closing brace appears, we pop the matching opening brace from the stack. If the stack is empty at the end, the symbols are balanced. If any symbol is left in the stack, it means that a syntax error occurs in the program.
o String reversal: Stack is also used for reversing a string. For example, suppose we want
to reverse the string "javaTpoint"; we can achieve this with the help of a stack.
First, we push all the characters of the string onto a stack until we reach the null character.
After pushing all the characters, we pop them one by one until we reach the bottom of the
stack; the characters come out in reverse order.
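A minimal C sketch of this approach is shown below; reverse_string is an illustrative name, and a fixed-size buffer stands in for the stack.

```c
#include <string.h>

/* Reverses str in place by pushing every character onto a stack
   and then popping them back into the string: the last character
   pushed is the first one popped. Assumes strings shorter than 100. */
void reverse_string(char *str)
{
    char stack[100];
    int top = -1;
    size_t len = strlen(str);
    for (size_t i = 0; i < len; i++)
        stack[++top] = str[i];       /* push characters until '\0' */
    for (size_t i = 0; i < len; i++)
        str[i] = stack[top--];       /* pop: last in, first out */
}
```

Calling reverse_string on "javaTpoint" turns it into "tniopTavaj".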
o UNDO/REDO: Stacks can also be used for performing UNDO/REDO operations. For
example, we have an editor in which we write 'a', then 'b', and then 'c'; therefore, the text
written in the editor is abc. So, there are three states, a, ab, and abc, which are stored in a
stack. There would be two stacks, in which one stack stores the UNDO states, and the other
stores the REDO states.
If we want to perform an UNDO operation and achieve the 'ab' state, then we
implement the pop operation.
o Recursion: Recursion means that a function calls itself. To maintain the
previous states, the compiler creates a system stack in which all the previous records of
the function calls are maintained.
o DFS(Depth First Search): This search is implemented on a Graph, and Graph uses the
stack data structure.
o Backtracking: Suppose we have to create a path to solve a maze problem. If we are
moving along a particular path and realize that we have taken a wrong way, then in order
to return to the beginning of the path and create a new path, we have to use the stack data
structure.
o Expression conversion: Stack can also be used for expression conversion. This is one of
the most important applications of stack. The list of the expression conversion is given
below:
o Infix to prefix
o Infix to postfix
o Prefix to infix
o Prefix to postfix
o Postfix to infix
o Memory management: The stack is used for memory management. The memory is assigned in
contiguous memory blocks. This memory is known as stack memory, as all the local variables
of a function are assigned in the function call stack memory. The memory size assigned to the program is
known to the compiler. When a function is invoked, all its variables are assigned in the
stack memory. When the function completes its execution, all the variables assigned in
the stack are released.
In the array implementation, the stack is formed by using an array. All the operations regarding the
stack are performed using arrays. Let's see how each operation can be implemented on the stack
using the array data structure.
34
Adding an element onto the top of the stack is referred to as the push operation. The push operation
involves the following two steps.
1. Increment the variable top so that it refers to the next memory location.
2. Add the element at the position of the incremented top. This is referred to as adding a new element
at the top of the stack.
The stack overflows when we try to insert an element into a completely filled stack; therefore, the
push operation must first check for the overflow condition.
Algorithm:
1. begin
2. if top = n then stack full
3. top = top + 1
4. stack (top) : = item;
5. end
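The section below gives C functions for pop() and peek() but not for push, so here is a hedged sketch of a matching C function; stack, top and n mirror the global variables used in the full menu-driven program later in this section.

```c
#include <stdio.h>

int stack[100];      /* storage shared by all stack operations */
int top = -1;        /* index of the topmost element; -1 => empty */
int n = 100;         /* capacity of the stack */

/* Pushes item onto the stack; returns 1 on success, 0 on overflow. */
int push(int item)
{
    if (top == n - 1)
    {
        printf("Overflow");
        return 0;            /* stack is full, nothing is pushed */
    }
    else
    {
        top = top + 1;       /* move top to the next free slot */
        stack[top] = item;   /* place the new element on the top */
        return 1;
    }
}
```

After push(5) on an empty stack, top is 0 and stack[0] holds 5, matching step 3 and step 4 of the algorithm above.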
Deletion of an element from the top of the stack is called the pop operation. The topmost
element of the stack is stored in another variable, and then the top is decremented by 1. The
operation returns the deleted value that was stored in the other variable as the result.
The underflow condition occurs when we try to delete an element from an already empty stack.
Algorithm :
1. begin
2. if top = 0 then stack empty;
3. item := stack(top);
4. top = top - 1;
5. end;
1. int pop ()
2. {
3. if(top == -1)
4. {
5. printf("Underflow");
6. return 0;
7. }
8. else
9. {
10. return stack[top--];
11. }
12. }
Peek operation involves returning the element which is present at the top of the stack without
deleting it. Underflow condition can occur if we try to return the top element in an already empty
stack.
Algorithm :
1. Begin
2. if top = -1 then stack empty
3. item = stack[top]
4. return item
5. End
1. int peek()
2. {
3. if (top == -1)
4. {
5. printf("Underflow");
6. return 0;
7. }
8. else
9. {
10. return stack[top];
11. }
12. }
C program
1. #include <stdio.h>
2. int stack[100],i,j,choice=0,n,top=-1;
3. void push();
4. void pop();
5. void show();
6. void main ()
7. {
8.
9. printf("Enter the number of elements in the stack ");
10. scanf("%d",&n);
11. printf("*********Stack operations using array*********");
12.
13. printf("\n----------------------------------------------\n");
14. while(choice != 4)
15. {
16. printf("Chose one from the below options...\n");
17. printf("\n1.Push\n2.Pop\n3.Show\n4.Exit");
18. printf("\n Enter your choice \n");
19. scanf("%d",&choice);
20. switch(choice)
21. {
22. case 1:
23. {
24. push();
25. break;
26. }
27. case 2:
28. {
29. pop();
30. break;
31. }
32. case 3:
33. {
34. show();
35. break;
36. }
37. case 4:
38. {
39. printf("Exiting....");
40. break;
41. }
42. default:
43. {
44. printf("Please Enter valid choice ");
45. }
46. };
47. }
48. }
49.
50. void push ()
51. {
52. int val;
53. if (top == n-1 )
54. printf("\n Overflow");
55. else
56. {
57. printf("Enter the value?");
58. scanf("%d",&val);
59. top = top +1;
60. stack[top] = val;
61. }
62. }
63.
64. void pop ()
65. {
66. if(top == -1)
67. printf("Underflow");
68. else
69. top = top -1;
70. }
71. void show()
72. {
73. for (i=top;i>=0;i--)
74. {
75. printf("%d\n",stack[i]);
76. }
77. if(top == -1)
78. {
79. printf("Stack is empty");
80. }
81. }
Java Program
1. import java.util.Scanner;
2. class Stack
3. {
4. int top;
5. int maxsize = 10;
6. int[] arr = new int[maxsize];
7.
8.
9. boolean isEmpty()
10. {
11. return (top < 0);
12. }
13. Stack()
14. {
15. top = -1;
16. }
17. boolean push (Scanner sc)
18. {
19. if(top == maxsize-1)
20. {
21. System.out.println("Overflow !!");
22. return false;
23. }
24. else
25. {
26. System.out.println("Enter Value");
27. int val = sc.nextInt();
28. top++;
29. arr[top]=val;
30. System.out.println("Item pushed");
31. return true;
32. }
33. }
34. boolean pop ()
35. {
36. if (top == -1)
37. {
38. System.out.println("Underflow !!");
39. return false;
40. }
41. else
42. {
43. top --;
44. System.out.println("Item popped");
45. return true;
46. }
47. }
48. void display ()
49. {
50. System.out.println("Printing stack elements .....");
51. for(int i = top; i>=0;i--)
52. {
53. System.out.println(arr[i]);
54. }
55. }
56. }
57. public class Stack_Operations {
58. public static void main(String[] args) {
59. int choice=0;
60. Scanner sc = new Scanner(System.in);
61. Stack s = new Stack();
62. System.out.println("*********Stack operations using array*********\n");
63. System.out.println("\n------------------------------------------------\n");
64. while(choice != 4)
65. {
66. System.out.println("\nChose one from the below options...\n");
67. System.out.println("\n1.Push\n2.Pop\n3.Show\n4.Exit");
68. System.out.println("\n Enter your choice \n");
69. choice = sc.nextInt();
70. switch(choice)
71. {
72. case 1:
73. {
74. s.push(sc);
75. break;
76. }
77. case 2:
78. {
79. s.pop();
80. break;
81. }
82. case 3:
83. {
84. s.display();
85. break;
86. }
87. case 4:
88. {
89. System.out.println("Exiting....");
90. System.exit(0);
91. break;
92. }
93. default:
94. {
95. System.out.println("Please Enter valid choice ");
96. }
97. };
98. }
99. }
100. }
C# Program
1. using System;
2.
3. public class Stack
4. {
5. int top;
6. int maxsize=10;
7. int[] arr = new int[10];
8. public static void Main()
9. {
10. Stack st = new Stack();
11. st.top=-1;
12. int choice=0;
13. Console.WriteLine("*********Stack operations using array*********");
14. Console.WriteLine("\n----------------------------------------------\n");
15. while(choice != 4)
16. {
17. Console.WriteLine("Chose one from the below options...\n");
18. Console.WriteLine("\n1.Push\n2.Pop\n3.Show\n4.Exit");
19. Console.WriteLine("\n Enter your choice \n");
20. choice = Convert.ToInt32(Console.ReadLine());
21. switch(choice)
22. {
23. case 1:
24. {
25. st.push();
26. break;
27. }
28. case 2:
29. {
30. st.pop();
31. break;
32. }
33. case 3:
34. {
35. st.show();
36. break;
37. }
38. case 4:
39. {
40. Console.WriteLine("Exiting....");
41. break;
42. }
43. default:
44. {
45. Console.WriteLine("Please Enter valid choice ");
46. break;
47. }
48. };
49. }
50. }
51.
52. Boolean push ()
53. {
54. int val;
55. if(top == maxsize-1)
56. {
57.
58. Console.WriteLine("\n Overflow");
59. return false;
60. }
61. else
62. {
63. Console.WriteLine("Enter the value?");
64. val = Convert.ToInt32(Console.ReadLine());
65. top = top +1;
66. arr[top] = val;
67. Console.WriteLine("Item pushed");
68. return true;
69. }
70. }
71.
72. Boolean pop ()
73. {
74. if (top == -1)
75. {
76. Console.WriteLine("Underflow");
77. return false;
78. }
79.
80. else
81.
82. {
83. top = top -1;
84. Console.WriteLine("Item popped");
85. return true;
86. }
87. }
88. void show()
89. {
90.
91. for (int i=top;i>=0;i--)
92. {
93. Console.WriteLine(arr[i]);
94. }
95. if(top == -1)
96. {
97. Console.WriteLine("Stack is empty");
98. }
99. }
100. }
Instead of using an array, we can also use a linked list to implement a stack. A linked list allocates
memory dynamically. However, the time complexity is the same in both scenarios for all the
operations, i.e., push, pop and peek.
In the linked list implementation of a stack, the nodes are maintained non-contiguously in
memory. Each node contains a pointer to its immediate successor node in the stack. The stack is said
to overflow if the space left in the memory heap is not enough to create a node.
The bottom-most node in the stack always contains null in its address field. Let's discuss the way in
which each operation is performed in the linked list implementation of a stack.
Adding a node to the stack is referred to as the push operation. Pushing an element onto a stack in the
linked list implementation is different from that of an array implementation. In order to push an
element onto the stack, the following steps are involved.
1. Create a node and allocate memory to it. If no memory is left in the heap, the overflow condition occurs.
2. If the stack is empty, set the next pointer of the new node to NULL; otherwise, make it point to the current head node.
3. Make the head pointer point to the new node, so that the new node becomes the top of the stack.
C implementation :
1. void push ()
2. {
3. int val;
4. struct node *ptr =(struct node*)malloc(sizeof(struct node));
5. if(ptr == NULL)
6. {
7. printf("not able to push the element");
8. }
9. else
10. {
11. printf("Enter the value");
12. scanf("%d",&val);
13. if(head==NULL)
14. {
15. ptr->val = val;
16. ptr -> next = NULL;
17. head=ptr;
18. }
19. else
20. {
21. ptr->val = val;
22. ptr->next = head;
23. head=ptr;
24.
25. }
26. printf("Item pushed");
27.
28. }
29. }
Deleting a node from the top of stack is referred to as pop operation. Deleting a node
from the linked list implementation of stack is different from that in the array
implementation. In order to pop an element from the stack, we need to follow the
following steps :
1. Check for the underflow condition: The underflow condition occurs when we
try to pop from an already empty stack. The stack will be empty if the head
pointer of the list points to null.
2. Adjust the head pointer accordingly: In a stack, the elements are popped only
from one end; therefore, the value stored in the head pointer must be deleted and
the node must be freed. The next node of the head node now becomes the head
node.
C implementation
1. void pop()
2. {
3. int item;
4. struct node *ptr;
5. if (head == NULL)
6. {
7. printf("Underflow");
8. }
9. else
10. {
11. item = head->val;
12. ptr = head;
13. head = head->next;
14. free(ptr);
15. printf("Item popped");
16.
17. }
18. }
Displaying all the nodes of a stack requires traversing all the nodes of the linked list
organized in the form of a stack. For this purpose, point a pointer to the head node and visit
each node until the null pointer is reached, printing the value stored in it.
C Implementation
1. void display()
2. {
3. int i;
4. struct node *ptr;
5. ptr=head;
6. if(ptr == NULL)
7. {
8. printf("Stack is empty\n");
9. }
10. else
11. {
12. printf("Printing Stack elements \n");
13. while(ptr!=NULL)
14. {
15. printf("%d\n",ptr->val);
16. ptr = ptr->next;
17. }
18. }
19. }
Menu Driven program in C implementing all the stack operations using linked list :
1. #include <stdio.h>
2. #include <stdlib.h>
3. void push();
4. void pop();
5. void display();
6. struct node
7. {
8. int val;
9. struct node *next;
10. };
11. struct node *head;
12.
13. void main ()
14. {
15. int choice=0;
16. printf("\n*********Stack operations using linked list*********\n");
17. printf("\n----------------------------------------------\n");
18. while(choice != 4)
19. {
20. printf("\n\nChose one from the below options...\n");
21. printf("\n1.Push\n2.Pop\n3.Show\n4.Exit");
22. printf("\n Enter your choice \n");
23. scanf("%d",&choice);
24. switch(choice)
25. {
26. case 1:
27. {
28. push();
29. break;
30. }
31. case 2:
32. {
33. pop();
34. break;
35. }
36. case 3:
37. {
38. display();
39. break;
40. }
41. case 4:
42. {
43. printf("Exiting....");
44. break;
45. }
46. default:
47. {
48. printf("Please Enter valid choice ");
49. }
50. };
51. }
52. }
53. void push ()
54. {
55. int val;
56. struct node *ptr = (struct node*)malloc(sizeof(struct node));
57. if(ptr == NULL)
58. {
59. printf("not able to push the element");
60. }
61. else
62. {
63. printf("Enter the value");
64. scanf("%d",&val);
65. if(head==NULL)
66. {
67. ptr->val = val;
68. ptr -> next = NULL;
69. head=ptr;
70. }
71. else
72. {
73. ptr->val = val;
74. ptr->next = head;
75. head=ptr;
76.
77. }
78. printf("Item pushed");
79.
80. }
81. }
82.
83. void pop()
84. {
85. int item;
86. struct node *ptr;
87. if (head == NULL)
88. {
89. printf("Underflow");
90. }
91. else
92. {
93. item = head->val;
94. ptr = head;
95. head = head->next;
96. free(ptr);
97. printf("Item popped");
98.
99. }
100. }
101. void display()
102. {
103. int i;
104. struct node *ptr;
105. ptr=head;
106. if(ptr == NULL)
107. {
108. printf("Stack is empty\n");
109. }
110. else
111. {
112. printf("Printing Stack elements \n");
113. while(ptr!=NULL)
114. {
115. printf("%d\n",ptr->val);
116. ptr = ptr->next;
117. }
118. }
119. }
Queue
A queue can be defined as an ordered list which enables insert operations to be performed at
one end, called REAR, and delete operations to be performed at the other end, called FRONT. A
queue is therefore referred to as a First In First Out (FIFO) list. For example, people waiting in
line for a rail ticket form a queue.
Applications of Queue
Because a queue performs actions on a first in, first out basis, which is quite fair for the
ordering of actions, queues find use in many situations. Various applications of queues are discussed below.
1. Queues are widely used as waiting lists for a single shared resource like printer, disk,
CPU.
2. Queues are used in asynchronous transfer of data (where data is not being transferred at
the same rate between two processes) for eg. pipes, file IO, sockets.
3. Queues are used as buffers in most of the applications like MP3 media player, CD player,
etc.
4. Queue are used to maintain the play list in media players in order to add and remove the
songs from the play-list.
5. Queues are used in operating systems for handling interrupts.
Complexity
Data Structure: Queue
Time Complexity (Average): Access θ(n), Search θ(n), Insertion θ(1), Deletion θ(1)
Time Complexity (Worst): Access O(n), Search O(n), Insertion O(1), Deletion O(1)
Space Complexity (Worst): O(n)
Types of Queue
In this section, we will discuss the types of queue. But before moving towards the types, we
first give a brief introduction of the queue itself.
What is a Queue?
A queue in the data structure resembles a queue in the real world. A queue is a data
structure in which whatever comes first will go out first; it follows the FIFO (First-In-First-
Out) policy. A queue can also be defined as a list or collection in which insertion is done
from one end, known as the rear end or the tail of the queue, whereas deletion is done from
another end, known as the front end or the head of the queue.
The real-world example of a queue is the ticket queue outside a cinema hall, where the person
who enters the queue first gets the ticket first, and the person who enters last gets the
ticket last. A similar approach is followed in the queue data structure.
There are four different types of queue that are listed as follows -
Simple Queue or Linear Queue
In Linear Queue, an insertion takes place from one end while the deletion occurs from another
end. The end at which the insertion takes place is known as the rear end, and the end at which the
deletion takes place is known as front end. It strictly follows the FIFO rule.
The major drawback of using a linear queue is that insertion is done only from the rear end. If
the first three elements are deleted from the queue, we cannot insert more elements, even though
space is available in the linear queue. In this case, the linear queue shows the overflow
condition, as the rear is pointing to the last element of the queue.
Circular Queue
In a circular queue, the nodes are arranged in a circular fashion. It is similar to the linear queue
except that the last element of the queue is connected to the first element. It is also known as a
ring buffer, as the two ends are connected. The representation of a circular queue is
shown in the below image -
The drawback that occurs in a linear queue is overcome by using the circular queue. If the empty
space is available in a circular queue, the new element can be added in an empty space by simply
incrementing the value of rear. The main advantage of using the circular queue is better memory
utilization.
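As a sketch of this wrap-around behaviour, a circular queue over a plain C array might look as follows; the names cq_enqueue and cq_dequeue are illustrative, and this variant deliberately leaves one slot unused so that a full queue can be told apart from an empty one.

```c
#define MAX 5                        /* capacity of the ring buffer */
int cq[MAX];
int front = -1, rear = -1;           /* -1 marks an empty queue */

/* Adds item behind rear, wrapping to index 0 at the end of the array.
   Returns 1 on success, 0 when the queue is full. */
int cq_enqueue(int item)
{
    if ((rear + 1) % MAX == front)
        return 0;                    /* queue full */
    if (front == -1)
        front = 0;                   /* first element ever inserted */
    rear = (rear + 1) % MAX;         /* modulo wraps around the end */
    cq[rear] = item;
    return 1;
}

/* Removes the front element into *item; returns 1 on success, 0 when empty. */
int cq_dequeue(int *item)
{
    if (front == -1)
        return 0;                    /* queue empty */
    *item = cq[front];
    if (front == rear)
        front = rear = -1;           /* queue became empty again */
    else
        front = (front + 1) % MAX;
    return 1;
}
```

The modulo arithmetic is exactly what lets the space freed at the beginning of the array be reused, which the linear queue above cannot do.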
Priority Queue
It is a special type of queue in which the elements are arranged based on the priority. It is a
special type of queue data structure in which every element has a priority associated with it.
Suppose some elements occur with the same priority, they will be arranged according to the
FIFO principle. The representation of priority queue is shown in the below image -
Insertion in priority queue takes place based on the arrival, while deletion in the priority queue
occurs based on the priority. Priority queue is mainly used to implement the CPU scheduling
algorithms.
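A minimal unordered-array sketch of these two rules, assuming smaller values mean higher priority; all names here (pq_insert, pq_remove_min) are illustrative, not a standard API.

```c
#define PQ_MAX 20
int pq[PQ_MAX];
int pq_count = 0;

/* Insertion follows arrival order: the new item is simply appended. */
int pq_insert(int item)
{
    if (pq_count == PQ_MAX)
        return 0;                    /* queue full */
    pq[pq_count++] = item;
    return 1;
}

/* Deletion follows priority: scan for the smallest value. Shifting the
   remaining items keeps arrival (FIFO) order among equal priorities. */
int pq_remove_min(int *item)
{
    if (pq_count == 0)
        return 0;                    /* queue empty */
    int best = 0;
    for (int i = 1; i < pq_count; i++)
        if (pq[i] < pq[best])
            best = i;
    *item = pq[best];
    for (int i = best; i < pq_count - 1; i++)
        pq[i] = pq[i + 1];           /* close the gap */
    pq_count--;
    return 1;
}
```

Inserting 3, 1, 2 in that order and then removing twice yields 1 and then 2, regardless of arrival order.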
There are two types of priority queue that are discussed as follows -
o Ascending priority queue - Elements can be inserted in arbitrary order, but only the smallest element can be removed.
o Descending priority queue - Elements can be inserted in arbitrary order, but only the largest element can be removed.
Deque (Double Ended Queue)
In a Deque or Double Ended Queue, insertion and deletion can be done from both ends of the
queue, i.e., from the front or the rear. Deque can be used as a palindrome checker: if we
read the string from both ends and the characters match at every step, the string is a palindrome.
Deque can be used both as a stack and as a queue, as it allows the insertion and deletion operations on
both ends. A deque can act as a stack because a stack follows the LIFO (Last In First Out)
principle, in which insertion and deletion are both performed at one end only; in a
deque, it is possible to perform both insertion and deletion from the same end, and used this way the
deque does not follow the FIFO principle.
To know more about the deque, you can click the link - https://www.javatpoint.com/ds-deque
o Input restricted deque - As the name implies, in an input restricted deque, the insertion
operation can be performed at only one end, while deletion can be performed from both
ends.
o Output restricted deque - As the name implies, in an output restricted deque, the deletion
operation can be performed at only one end, while insertion can be performed from both
ends.
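The palindrome-checking use mentioned above can be sketched in C by treating the string itself as a deque and consuming one character from each end per step; is_palindrome is an illustrative name.

```c
#include <string.h>

/* Returns 1 if s reads the same from both ends, else 0. The two
   indices play the role of the deque's front and rear pointers. */
int is_palindrome(const char *s)
{
    int front = 0;
    int rear = (int)strlen(s) - 1;
    while (front < rear)
    {
        if (s[front] != s[rear])
            return 0;        /* the two ends disagree */
        front++;             /* delete from the front end */
        rear--;              /* delete from the rear end */
    }
    return 1;
}
```

For example, is_palindrome("madam") returns 1 and is_palindrome("hello") returns 0.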
The fundamental operations that can be performed on queue are listed as follows -
o Enqueue: The Enqueue operation is used to insert the element at the rear end of the
queue. It returns void.
o Dequeue: It performs the deletion from the front-end of the queue. It also returns the
element which has been removed from the front-end. It returns an integer value.
o Peek: This is the third operation that returns the element, which is pointed by the front
pointer in the queue but does not delete it.
o Queue overflow (isfull): It shows the overflow condition when the queue is completely
full.
o Queue underflow (isempty): It shows the underflow condition when the Queue is
empty, i.e., no elements are in the Queue.
We can easily represent a queue by using a linear array. There are two variables, front and rear,
that are maintained for every queue. The front and rear variables point to the positions
from where deletions and insertions, respectively, are performed in the queue. Initially, the value of front and
rear is -1, which represents an empty queue. The array representation of a queue containing 5
elements, along with the respective values of front and rear, is shown in the following figure.
The above figure shows a queue of characters forming the English word "HELLO". Since no
deletion has been performed in the queue so far, the value of front remains -1. However, the
value of rear increases by one every time an insertion is performed in the queue. After inserting
one more element into the queue shown in the above figure, the queue will look as follows:
the value of rear becomes 5 while the value of front remains the same.
After deleting an element, the value of front will increase from -1 to 0, and the queue will
look as follows.
Check if the queue is already full by comparing rear to max - 1. If so, return an overflow
error.
If the item is to be inserted as the first element in the list, set the value of front and
rear to 0 and insert the element at the rear end.
Otherwise, keep increasing the value of rear and insert each element one by one, with rear as
the index.
Algorithm
o Step 1: IF REAR = MAX - 1
Write OVERFLOW
Go to Step 4
[END OF IF]
o Step 2: IF FRONT = -1 and REAR = -1
SET FRONT = REAR = 0
ELSE
SET REAR = REAR + 1
[END OF IF]
o Step 3: Set QUEUE[REAR] = NUM
o Step 4: EXIT
C Function
1. void insert (int queue[], int max, int front, int rear, int item)
2. {
3. if (rear + 1 == max)
4. {
5. printf("overflow");
6. }
7. else
8. {
9. if(front == -1 && rear == -1)
10. {
11. front = 0;
12. rear = 0;
13. }
14. else
15. {
16. rear = rear + 1;
17. }
18. queue[rear]=item;
19. }
20. }
If the value of front is -1, or the value of front is greater than rear, write an underflow message and
exit.
Otherwise, keep increasing the value of front and return the item stored at the front end of the
queue at each time.
Algorithm
o Step 1: IF FRONT = -1 or FRONT > REAR
Write UNDERFLOW
ELSE
SET VAL = QUEUE[FRONT]
SET FRONT = FRONT + 1
[END OF IF]
o Step 2: EXIT
C Function
int delete (int queue[], int max, int front, int rear)
{
    int y;
    if (front == -1 || front > rear)
    {
        printf("underflow");
        return -1;   /* sentinel value indicating an empty queue */
    }
    else
    {
        y = queue[front];
        if (front == rear)
        {
            front = rear = -1;
        }
        else
        {
            front = front + 1;
        }
        return y;
    }
}
1. #include<stdio.h>
2. #include<stdlib.h>
3. #define maxsize 5
4. void insert();
5. void delete();
6. void display();
7. int front = -1, rear = -1;
8. int queue[maxsize];
9. void main ()
10. {
11. int choice = 0;
12. while(choice != 4)
13. {
14. printf("\n*************************Main Menu*****************************\n");
15. printf("\n=================================================================\n");
16. printf("\n1.insert an element\n2.Delete an element\n3.Display the queue\n4.Exit\n");
17. printf("\nEnter your choice ?");
18. scanf("%d",&choice);
19. switch(choice)
20. {
21. case 1:
22. insert();
23. break;
24. case 2:
25. delete();
26. break;
27. case 3:
28. display();
29. break;
30. case 4:
31. exit(0);
32. break;
33. default:
34. printf("\nEnter valid choice??\n");
35. }
36. }
37. }
38. void insert()
39. {
40. int item;
41. printf("\nEnter the element\n");
42. scanf("\n%d",&item);
43. if(rear == maxsize-1)
44. {
45. printf("\nOVERFLOW\n");
46. return;
47. }
48. if(front == -1 && rear == -1)
49. {
50. front = 0;
51. rear = 0;
52. }
53. else
54. {
55. rear = rear+1;
56. }
57. queue[rear] = item;
58. printf("\nValue inserted ");
59.
60. }
61. void delete()
62. {
63. int item;
64. if (front == -1 || front > rear)
65. {
66. printf("\nUNDERFLOW\n");
67. return;
68.
69. }
70. else
71. {
72. item = queue[front];
73. if(front == rear)
74. {
75. front = -1;
76. rear = -1 ;
77. }
78. else
79. {
80. front = front + 1;
81. }
82. printf("\nvalue deleted ");
83. }
84.
85.
86. }
87.
88. void display()
89. {
90. int i;
91. if(rear == -1)
92. {
93. printf("\nEmpty queue\n");
94. }
95. else
96. { printf("\nprinting values .....\n");
97. for(i=front;i<=rear;i++)
98. {
99. printf("\n%d\n",queue[i]);
100. }
101. }
102. }
Output:
*************Main Menu**************
==============================================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
Value inserted
*************Main Menu**************
==============================================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
Value inserted
*************Main Menu**************
===================================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
value deleted
*************Main Menu**************
==============================================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
90
*************Main Menu**************
==============================================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
Although the technique of creating a queue this way is easy, there are some drawbacks of using this
technique to implement a queue.
o Memory wastage: The space of the array which is used to store queue elements can
never be reused to store further elements of that queue, because elements can only be
removed at the front end, and the value of front might become so high that all the space before
it can never be filled again.
The above figure shows how the memory space is wasted in the array representation of a queue. In
the above figure, a queue of size 10 having 3 elements is shown. The value of the front variable
is 5; therefore, we cannot reinsert values in the place of the already deleted elements before the
position of front. That much space of the array is wasted and cannot be used in the future (for
this queue).
One of the most common problems with the array implementation is that the size of the array
must be declared in advance. Since the queue can grow at runtime depending upon the problem,
extending the array size is a time-consuming process and is almost impossible to perform at
runtime, since a lot of reallocations would take place. For this reason, we could declare the array
large enough to store as many queue elements as possible, but the main problem with this
declaration is that most of the array slots (nearly half) may never be used. This again leads to
memory wastage.
Due to the drawbacks discussed in the previous section of this tutorial, the array implementation
cannot be used for large-scale applications where queues are implemented. One alternative
to the array implementation is the linked list implementation of the queue.
The storage requirement of the linked representation of a queue with n elements is O(n), while the
time requirement for each operation is O(1).
In a linked queue, each node of the queue consists of two parts i.e. data part and the link part.
Each element of the queue points to its immediate next element in the memory.
In the linked queue, there are two pointers maintained in the memory i.e. front pointer and rear
pointer. The front pointer contains the address of the starting element of the queue while the rear
pointer contains the address of the last element of the queue.
Insertion and deletions are performed at rear and front end respectively. If front and rear both are
NULL, it indicates that the queue is empty.
There are two basic operations which can be implemented on the linked queues. The operations
are Insertion and Deletion.
Insert operation
The insert operation appends the queue by adding an element to the end of the queue. The new
element will be the last element of the queue.
Firstly, allocate the memory for the new node ptr by using the following statement:
1. ptr = (struct node *) malloc (sizeof(struct node));
There can be two scenarios of inserting this new node ptr into the linked queue.
In the first scenario, we insert an element into an empty queue. In this case, the condition front =
NULL becomes true. The new element will be added as the only element of the queue, and
the next pointer of both the front and rear pointers will point to NULL.
In the second case, the queue contains more than one element. The condition front = NULL
becomes false. In this scenario, we need to update the rear pointer so that the next pointer of
rear points to the new node ptr, then make the rear pointer point to the newly added node ptr,
and finally set the next pointer of the new rear to NULL.
In this way, the element is inserted into the queue. The algorithm and the C implementation is
given as follows.
Algorithm
o Step 1: Allocate the space for the new node PTR
o Step 2: SET PTR -> DATA = VAL
o Step 3: IF FRONT = NULL
SET FRONT = REAR = PTR
SET FRONT -> NEXT = REAR -> NEXT = NULL
ELSE
SET REAR -> NEXT = PTR
SET REAR = PTR
SET REAR -> NEXT = NULL
[END OF IF]
o Step 4: END
C Function
void insert (int item)
{
    struct node *ptr = (struct node *) malloc (sizeof(struct node));
    if (ptr == NULL)
    {
        printf("\nOVERFLOW\n");
        return;
    }
    else
    {
        ptr -> data = item;
        if (front == NULL)
        {
            front = ptr;
            rear = ptr;
            front -> next = NULL;
            rear -> next = NULL;
        }
        else
        {
            rear -> next = ptr;
            rear = ptr;
            rear -> next = NULL;
        }
    }
}
Deletion
The deletion operation removes the element that was inserted first among all the queue elements.
Firstly, we need to check whether the list is empty or not. The condition front == NULL becomes
true if the list is empty; in this case, we simply write underflow on the console and exit.
Otherwise, we delete the element that is pointed to by the pointer front. For this purpose, copy
the node pointed to by the front pointer into the pointer ptr, shift the front pointer to point to its
next node, and free the node pointed to by ptr. This is done by using the following
statements.
1. ptr = front;
2. front = front -> next;
3. free(ptr);
Algorithm
o Step 1: IF FRONT = NULL
Write " Underflow "
Go to Step 5
[END OF IF]
o Step 2: SET PTR = FRONT
o Step 3: SET FRONT = FRONT -> NEXT
o Step 4: FREE PTR
o Step 5: END
C Function
void delete ()
{
    struct node *ptr;
    if (front == NULL)
    {
        printf("\nUNDERFLOW\n");
        return;
    }
    else
    {
        ptr = front;
        front = front -> next;
        free(ptr);
    }
}
Menu driven program in C implementing all the queue operations using linked list :
1. #include<stdio.h>
2. #include<stdlib.h>
3. struct node
4. {
5. int data;
6. struct node *next;
7. };
8. struct node *front;
9. struct node *rear;
10. void insert();
11. void delete();
12. void display();
13. void main ()
14. {
15. int choice = 0;
16. while(choice != 4)
17. {
18. printf("\n*************************Main Menu*****************************\n");
19. printf("\n=================================================================\n");
20. printf("\n1.insert an element\n2.Delete an element\n3.Display the queue\n4.Exit\n");
21. printf("\nEnter your choice ?");
22. scanf("%d",&choice);
23. switch(choice)
24. {
25. case 1:
26. insert();
27. break;
28. case 2:
29. delete();
30. break;
31. case 3:
32. display();
33. break;
34. case 4:
35. exit(0);
36. break;
37. default:
38. printf("\nEnter valid choice??\n");
39. }
40. }
41. }
42. void insert()
43. {
44. struct node *ptr;
45. int item;
46.
47. ptr = (struct node *) malloc (sizeof(struct node));
48. if(ptr == NULL)
49. {
50. printf("\nOVERFLOW\n");
51. return;
52. }
53. else
54. {
55. printf("\nEnter value?\n");
56. scanf("%d",&item);
57. ptr -> data = item;
58. if(front == NULL)
59. {
60. front = ptr;
61. rear = ptr;
62. front -> next = NULL;
63. rear -> next = NULL;
64. }
65. else
66. {
67. rear -> next = ptr;
68. rear = ptr;
69. rear->next = NULL;
70. }
71. }
72. }
73. void delete ()
74. {
75. struct node *ptr;
76. if(front == NULL)
77. {
78. printf("\nUNDERFLOW\n");
79. return;
80. }
81. else
82. {
83. ptr = front;
84. front = front -> next;
85. free(ptr);
86. }
87. }
88. void display()
89. {
90. struct node *ptr;
91. ptr = front;
92. if(front == NULL)
93. {
94. printf("\nEmpty queue\n");
95. }
96. else
97. { printf("\nprinting values .....\n");
98. while(ptr != NULL)
99. {
100. printf("\n%d\n",ptr -> data);
101. ptr = ptr -> next;
102. }
103. }
104. }
Output:
***********Main Menu**********
==============================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
Enter value?
123
***********Main Menu**********
==============================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
Enter value?
90
***********Main Menu**********
==============================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
123
90
***********Main Menu**********
==============================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
***********Main Menu**********
==============================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
90
***********Main Menu**********
==============================
1.insert an element
2.Delete an element
3.Display the queue
4.Exit
Enter your choice ?4
Circular Queue
There was one limitation in the array implementation of a queue: if rear reaches the last
position of the queue, some vacant spaces may be left at the beginning which cannot be
utilized. So, to overcome this limitation, the concept of the circular queue was introduced.
As we can see in the above image, rear is at the last position of the queue and front is
pointing somewhere other than the 0th position. In the above array, there are only two elements
and the other three positions are empty. Since rear is at the last position of the queue, if we try
to insert an element it will report that there are no empty spaces left in the queue. One
solution to avoid such wastage of memory space is to shift both elements to the left and
adjust front and rear accordingly, but this is not a practically good approach because shifting
all the elements consumes a lot of time. The efficient approach to avoid the wastage of
memory is to use the circular queue data structure.
A circular queue is similar to a linear queue as it is also based on the FIFO (First In First Out)
principle except that the last position is connected to the first position in a circular queue that
forms a circle. It is also known as a Ring Buffer.
The following are the operations that can be performed on a circular queue:
Enqueue operation
o If the queue is empty, i.e., front = -1 and rear = -1, set both front and rear to 0 and insert
the new element there.
o Otherwise, if the queue is not full, increment rear circularly, i.e., rear = (rear + 1) mod max,
and insert the new value at the rear end of the queue. Note that when rear = max - 1 and
front != 0, this wraps rear around to 0.
o The queue is full when front = 0 and rear = max - 1, which means that front is at the first
position of the queue and rear is at the last position, or when front = rear + 1; in this case
no element can be inserted.
Dequeue Operation
o First, we check whether the queue is empty or not. If the queue is empty, we cannot
perform the dequeue operation.
o When an element is deleted, the value of front gets incremented circularly by 1, i.e.,
front = (front + 1) mod max.
o If there is only one element left which is to be deleted, then the front and rear are reset to
-1.
Step 1: IF FRONT = -1
Write " UNDERFLOW "
Goto Step 4
[END of IF]
Step 2: SET VAL = QUEUE[FRONT]
Step 3: IF FRONT = REAR
SET FRONT = REAR = -1
ELSE
SET FRONT = (FRONT + 1) MOD MAX
[END of IF]
Step 4: EXIT
Let's understand the enqueue and dequeue operation through the diagrammatic
representation.
Implementation of circular queue using Array
1. #include <stdio.h>
2.
3. # define max 6
4. int queue[max]; // array declaration
5. int front=-1;
6. int rear=-1;
7. // function to insert an element in a circular queue
8. void enqueue(int element)
9. {
10. if(front==-1 && rear==-1) // condition to check queue is empty
11. {
12. front=0;
13. rear=0;
14. queue[rear]=element;
15. }
16. else if((rear+1)%max==front) // condition to check queue is full
17. {
18. printf("Queue is overflow..");
19. }
20. else
21. {
22. rear=(rear+1)%max; // rear is incremented
23. queue[rear]=element; // assigning a value to the queue at the rear position.
24. }
25. }
26.
27. // function to delete the element from the queue
28. void dequeue()
29. {
30. if((front==-1) && (rear==-1)) // condition to check queue is empty
31. {
32. printf("\nQueue is underflow..");
33. }
34. else if(front==rear)
35. {
36. printf("\nThe dequeued element is %d", queue[front]);
37. front=-1;
38. rear=-1;
39. }
40. else
41. {
42. printf("\nThe dequeued element is %d", queue[front]);
43. front=(front+1)%max;
44. }
45. }
46. // function to display the elements of a queue
47. void display()
48. {
49. int i=front;
50. if(front==-1 && rear==-1)
51. {
52. printf("\n Queue is empty..");
53. }
54. else
55. {
56. printf("\nElements in a Queue are :");
57. do
58. {
59. printf("%d,", queue[i]);
60. i=(i+1)%max;
61. } while(i != (rear+1)%max); // stop after the rear element has been printed
62. }
63. }
64. int main()
65. {
66. int choice=1,x; // variables declaration
67.
68. while(choice<4 && choice!=0) // while loop
69. {
70. printf("\n Press 1: Insert an element");
71. printf("\nPress 2: Delete an element");
72. printf("\nPress 3: Display the element");
73. printf("\nEnter your choice");
74. scanf("%d", &choice);
75.
76. switch(choice)
77. {
78.
79. case 1:
80.
81. printf("Enter the element which is to be inserted");
82. scanf("%d", &x);
83. enqueue(x);
84. break;
85. case 2:
86. dequeue();
87. break;
88. case 3:
89. display();
90.
91. }}
92. return 0;
93. }
Output:
Implementation of circular queue using linked list
As we know, a linked list is a linear data structure in which each node stores two parts, i.e., the
data part and the address part, where the address part contains the address of the next node. Here,
a linked list is used to implement the circular queue; therefore, the linked list follows the
properties of the queue. When we implement the circular queue using a linked list, both the
enqueue and dequeue operations take O(1) time.
1. #include <stdio.h>
2. #include <stdlib.h> // for malloc() and free()
3. // Declaration of struct type node
4. struct node
5. {
6. int data;
7. struct node *next;
8. };
9. struct node *front=NULL;
10. struct node *rear=NULL;
11. // function to insert the element in the Queue
12. void enqueue(int x)
13. {
14. struct node *newnode; // declaration of pointer of struct node type.
15. newnode=(struct node *)malloc(sizeof(struct node)); // allocating the memory to the newnode
16. newnode->data=x;
17. newnode->next=NULL;
18. if(rear==NULL) // checking whether the Queue is empty or not.
19. {
20. front=rear=newnode;
21. rear->next=front;
22. }
23. else
24. {
25. rear->next=newnode;
26. rear=newnode;
27. rear->next=front;
28. }
29. }
30.
31. // function to delete the element from the queue
32. void dequeue()
33. {
34. struct node *temp; // declaration of pointer of node type
35. temp=front;
36. if((front==NULL)&&(rear==NULL)) // checking whether the queue is empty or not
37. {
38. printf("\nQueue is empty");
39. }
40. else if(front==rear) // checking whether a single element is left in the queue
41. {
42. front=rear=NULL;
43. free(temp);
44. }
45. else
46. {
47. front=front->next;
48. rear->next=front;
49. free(temp);
50. }
51. }
52.
53. // function to get the front of the queue
54. void peek()
55. {
56. if((front==NULL) && (rear==NULL))
57. {
58. printf("\nQueue is empty");
59. }
60. else
61. {
62. printf("\nThe front element is %d", front->data);
63. }
64. }
65.
66. // function to display all the elements of the queue
67. void display()
68. {
69. struct node *temp;
70. temp=front;
71. printf("\n The elements in a Queue are : ");
72. if((front==NULL) && (rear==NULL))
73. {
74. printf("Queue is empty");
75. }
76. else
77. {
78. while(temp->next!=front)
79. {
80. printf("%d,", temp->data);
81. temp=temp->next;
82. }
83. printf("%d", temp->data);
84. }
85. }
86.
87. int main()
88. {
89. enqueue(34);
90. enqueue(10);
91. enqueue(23);
92. display();
93. dequeue();
94. peek();
95. return 0;
96. }
Output:
In this article, we will discuss the double-ended queue or deque. We should first see a brief
description of the queue.
What is a queue?
A queue is a data structure in which whatever comes first will go out first, and it follows the
FIFO (First-In-First-Out) policy. Insertion in the queue is done from one end known as the rear
end or the tail, whereas the deletion is done from another end known as the front end or
the head of the queue.
The real-world example of a queue is the ticket queue outside a cinema hall, where the person
who enters the queue first gets the ticket first, and the person who enters last gets the ticket
last.
Deque stands for Double Ended Queue. A deque is a linear data structure in which the insertion
and deletion operations are performed at both ends. We can say that the deque is a generalized
version of the queue.
Though the insertion and deletion in a deque can be performed on both ends, it does not follow
the FIFO rule. The representation of a deque is given as follows -
Types of deque
Input restricted Queue
In an input restricted queue, the insertion operation can be performed at only one end, while
deletion can be performed from both ends.
Output restricted Queue
In an output restricted queue, the deletion operation can be performed at only one end, while
insertion can be performed from both ends.
The following are the basic operations that can be performed on a deque -
o Insertion at front
o Insertion at rear
o Deletion at front
o Deletion at rear
We can also perform peek operations in the deque along with the operations listed above.
Through the peek operation, we can get the front and rear elements of the deque without
removing them.
Insertion at the front end
In this operation, the element is inserted from the front end of the queue. Before implementing
the operation, we first have to check whether the queue is full or not. If the queue is not full, then
the element can be inserted from the front end by using the below conditions -
o If the queue is empty, both rear and front are initialized with 0. Now, both will point to
the first element.
o Otherwise, check the position of front; if front is at the first index (front < 1), then
reinitialize it as front = n - 1, i.e., the last index of the array.
o Else, decrement front by 1.
Insertion at the rear end
In this operation, the element is inserted at the rear end of the queue. Before implementing the
operation, we first have to check again whether the queue is full or not. If the queue is not full,
then the element can be inserted at the rear end by using the below conditions -
o If the queue is empty, both rear and front are initialized with 0. Now, both will point to
the first element.
o Otherwise, increment the rear by 1. If the rear is at last index (or size - 1), then instead of
increasing it by 1, we have to make it equal to 0.
Deletion at the front end
In this operation, the element is deleted from the front end of the queue. Before implementing the
operation, we first have to check whether the queue is empty or not.
If the queue is empty, i.e., front = -1, it is the underflow condition, and we cannot perform the
deletion. If the queue is not empty, then the element can be deleted from the front end by using
the below conditions -
If the deque has only one element, set rear = -1 and front = -1.
Else if front is at the end (that means front = size - 1), set front = 0.
Else, increment front by 1.
Deletion at the rear end
In this operation, the element is deleted from the rear end of the queue. Before implementing the
operation, we first have to check whether the queue is empty or not.
If the queue is empty, i.e., front = -1, it is the underflow condition, and we cannot perform the
deletion.
If the deque has only one element, set rear = -1 and front = -1.
Else if rear is at the first index (rear = 0), set rear = size - 1.
Else, decrement rear by 1.
Check empty
This operation is performed to check whether the deque is empty or not. If front = -1, it means
that the deque is empty.
Check full
This operation is performed to check whether the deque is full or not. If front = rear + 1, or
front = 0 and rear = n - 1, it means that the deque is full.
The time complexity of all of the above operations of the deque is O(1), i.e., constant.
Applications of deque
o Deque can be used as both stack and queue, as it supports both operations.
o Deque can be used as a palindrome checker: if we read the string from both ends and the
characters match throughout, the string is a palindrome.
Implementation of deque
1. #include <stdio.h>
2. #define size 5
3. int deque[size];
4. int f = -1, r = -1;
5. // insert_front function will insert the value from the front
6. void insert_front(int x)
7. {
8. if((f==0 && r==size-1) || (f==r+1))
9. {
10. printf("Overflow");
11. }
12. else if((f==-1) && (r==-1))
13. {
14. f=r=0;
15. deque[f]=x;
16. }
17. else if(f==0)
18. {
19. f=size-1;
20. deque[f]=x;
21. }
22. else
23. {
24. f=f-1;
25. deque[f]=x;
26. }
27. }
28.
29. // insert_rear function will insert the value from the rear
30. void insert_rear(int x)
31. {
32. if((f==0 && r==size-1) || (f==r+1))
33. {
34. printf("Overflow");
35. }
36. else if((f==-1) && (r==-1))
37. {
38. f=r=0;
39. deque[r]=x;
40. }
41. else if(r==size-1)
42. {
43. r=0;
44. deque[r]=x;
45. }
46. else
47. {
48. r++;
49. deque[r]=x;
50. }
51.
52. }
53.
54. // display function prints all the value of deque.
55. void display()
56. {
57. int i=f;
58. printf("\nElements in a deque are: ");
59.
60. while(i!=r)
61. {
62. printf("%d ",deque[i]);
63. i=(i+1)%size;
64. }
65. printf("%d",deque[r]);
66. }
67.
68. // getfront function retrieves the first value of the deque.
69. void getfront()
70. {
71. if((f==-1) && (r==-1))
72. {
73. printf("Deque is empty");
74. }
75. else
76. {
77. printf("\nThe value of the element at front is: %d", deque[f]);
78. }
79.
80. }
81.
82. // getrear function retrieves the last value of the deque.
83. void getrear()
84. {
85. if((f==-1) && (r==-1))
86. {
87. printf("Deque is empty");
88. }
89. else
90. {
91. printf("\nThe value of the element at rear is %d", deque[r]);
92. }
93.
94. }
95.
96. // delete_front() function deletes the element from the front
97. void delete_front()
98. {
99. if((f==-1) && (r==-1))
100. {
101. printf("Deque is empty");
102. }
103. else if(f==r)
104. {
105. printf("\nThe deleted element is %d", deque[f]);
106. f=-1;
107. r=-1;
108.
109. }
110. else if(f==(size-1))
111. {
112. printf("\nThe deleted element is %d", deque[f]);
113. f=0;
114. }
115. else
116. {
117. printf("\nThe deleted element is %d", deque[f]);
118. f=f+1;
119. }
120. }
121.
122. // delete_rear() function deletes the element from the rear
123. void delete_rear()
124. {
125. if((f==-1) && (r==-1))
126. {
127. printf("Deque is empty");
128. }
129. else if(f==r)
130. {
131. printf("\nThe deleted element is %d", deque[r]);
132. f=-1;
133. r=-1;
134.
135. }
136. else if(r==0)
137. {
138. printf("\nThe deleted element is %d", deque[r]);
139. r=size-1;
140. }
141. else
142. {
143. printf("\nThe deleted element is %d", deque[r]);
144. r=r-1;
145. }
146. }
147.
148. int main()
149. {
150. insert_front(20);
151. insert_front(10);
152. insert_rear(30);
153. insert_rear(50);
154. insert_rear(80);
155. display(); // Calling the display function to retrieve the values of deque
156. getfront(); // Retrieve the value at front-end
157. getrear(); // Retrieve the value at rear-end
158. delete_front();
159. delete_rear();
160. display(); // calling display function to retrieve values after deletion
161. return 0;
162. }
Output:
A priority queue is an abstract data type that behaves similarly to the normal queue except that
each element has some priority, i.e., the element with the highest priority would come first in a
priority queue. The priority of the elements in a priority queue will determine the order in which
elements are removed from the priority queue.
The priority queue supports only comparable elements, which means that the elements are either
arranged in an ascending or descending order.
For example, suppose we have some values like 1, 3, 4, 8, 14, 22 inserted in a priority queue
with the ordering imposed on the values from least to greatest. Therefore, the number 1 has
the highest priority while 22 has the lowest priority.
o Every element in a priority queue has some priority associated with it.
o An element with a higher priority will be deleted before an element with a lower
priority.
o If two elements in a priority queue have the same priority, they will be arranged using the
FIFO principle.
1, 3, 4, 8, 14, 22
All the values are arranged in ascending order. Now, we will observe how the priority queue will
look after performing the following operations:
o poll(): This function will remove the highest priority element from the priority queue. In
the above priority queue, the '1' element has the highest priority, so it will be removed
from the priority queue.
o add(2): This function will insert the element 2 in the priority queue. As 2 is the smallest
element among all the numbers, it will obtain the highest priority.
o poll(): It will remove the element 2 from the priority queue as it has the highest
priority.
o add(5): It will insert the element 5 after 4, as 5 is larger than 4 and smaller than 8, so it
will obtain the third-highest priority in the priority queue.
o Ascending order priority queue: In an ascending order priority queue, a lower priority
number is given a higher priority. For example, if we take the numbers from 1 to 5
arranged in ascending order like 1, 2, 3, 4, 5, the smallest number, i.e., 1, is given the
highest priority.
o Descending order priority queue: In a descending order priority queue, a higher priority
number is given a higher priority. For example, if we take the numbers from 1 to 5
arranged in descending order like 5, 4, 3, 2, 1, the largest number, i.e., 5, is given the
highest priority.
Now, we will see how to represent the priority queue through a one-way list.
We will create the priority queue by using the list given below in which INFO list contains the
data elements, PRN list contains the priority numbers of each data element available in
the INFO list, and LINK basically contains the address of the next node.
Let's create the priority queue step by step.
In the case of priority queue, lower priority number is considered the higher priority,
i.e., lower priority number = higher priority.
Step 1: In the list, lower priority number is 1, whose data value is 333, so it will be inserted in
the list as shown in the below diagram:
Step 2: After inserting 333, the next higher priority number is 2, and the data values
associated with this priority are 222 and 111. This data will be inserted based on the FIFO
principle; therefore, 222 will be added first and then 111.
Step 3: After inserting the elements of priority 2, the next higher priority number is 4, and the
data elements associated with priority number 4 are 444, 555 and 777. In this case, elements are
inserted based on the FIFO principle; therefore, 444 will be added first, then 555, and then 777.
Step 4: After inserting the elements of priority 4, the next higher priority number is 5, and the
value associated with priority 5 is 666, so it will be inserted at the end of the queue.
The priority queue can be implemented in four ways that include arrays, linked list, heap data
structure and binary search tree. The heap data structure is the most efficient way of
implementing the priority queue, so we will implement the priority queue using a heap data
structure in this topic. Now, first we understand the reason why heap is the most efficient way
among all the other data structures.
Analysis of complexities using different implementations
What is Heap?
A heap is a tree-based data structure that forms a complete binary tree, and satisfies the heap
property. If A is a parent node of B, then A is ordered with respect to the node B for all nodes A
and B in a heap. It means that the value of the parent node could be more than or equal to the
value of the child node, or the value of the parent node could be less than or equal to the value of
the child node. Therefore, we can say that there are two types of heaps:
o Max heap: The max heap is a heap in which the value of the parent node is greater than
or equal to the values of its child nodes.
o Min heap: The min heap is a heap in which the value of the parent node is less than or
equal to the values of its child nodes.
Both of these heaps are binary heaps, as each node has at most two child nodes.
The common operations that we can perform on a priority queue are insertion, deletion and peek.
Let's see how we can maintain the heap data structure.
If we insert an element into a priority queue, it is placed in the first empty slot found by
scanning the tree from top to bottom and left to right, i.e., the next free position in the
complete binary tree.
If the element is not in its correct place, it is compared with its parent node; if the two are
found out of order, they are swapped. This process continues until the element is placed in its
correct position.
o Removing the element with the highest priority from the priority queue
As we know, in a max heap the maximum element is the root node. When we remove the
root node, it creates an empty slot. The last inserted element is moved into this empty slot.
Then, this element is compared with its child nodes, i.e., the left child and the right child, and
swapped with the larger of the two. It keeps moving down the tree until the heap property is
restored.
Program to create the priority queue using the binary max heap.
1. #include <stdio.h>
2.
3. int heap[40];
4. int size=-1;
5.
6. // retrieving the parent node of the child node
7. int parent(int i)
8. {
9.
10. return (i - 1) / 2;
11. }
12.
13. // retrieving the left child of the parent node.
14. int left_child(int i)
15. {
16. return 2*i+1;
17. }
18. // retrieving the right child of the parent
19. int right_child(int i)
20. {
21. return 2*i+2;
22. }
23. // Returning the element having the highest priority
24. int get_Max()
25. {
26. return heap[0];
27. }
28. //Returning the element having the minimum priority
29. int get_Min()
30. {
31. return heap[size];
32. }
33. // function to move the node up the tree in order to restore the heap property.
34. void moveUp(int i)
35. {
36. while (i > 0)
37. {
38. // swapping parent node with a child node
39. if(heap[parent(i)] < heap[i]) {
40.
41. int temp;
42. temp=heap[parent(i)];
43. heap[parent(i)]=heap[i];
44. heap[i]=temp;
45.
46.
47. }
48. // moving i up to its parent index
49. i=parent(i);
50. }
51. }
52.
53. //function to move the node down the tree in order to restore the heap property.
54. void moveDown(int k)
55. {
56. int index = k;
57.
58. // getting the location of the Left Child
59. int left = left_child(k);
60.
61. if (left <= size && heap[left] > heap[index]) {
62. index = left;
63. }
64.
65. // getting the location of the Right Child
66. int right = right_child(k);
67.
68. if (right <= size && heap[right] > heap[index]) {
69. index = right;
70. }
71.
72. // If k is not equal to index
73. if (k != index) {
74. int temp;
75. temp=heap[index];
76. heap[index]=heap[k];
77. heap[k]=temp;
78. moveDown(index);
79. }
80. }
81.
82. // Removing the element of maximum priority
83. void removeMax()
84. {
85. int r= heap[0];
86. heap[0]=heap[size];
87. size=size-1;
88. moveDown(0);
89. }
90. //inserting the element in a priority queue
91. void insert(int p)
92. {
93. size = size + 1;
94. heap[size] = p;
95.
96. // move Up to maintain heap property
97. moveUp(size);
98. }
99.
100. //Removing the element from the priority queue at a given index i.
101. void delete(int i)
102. {
103. heap[i] = heap[0] + 1;
104.
105. // the node stored at the ith location is moved up towards the root node
106. moveUp(i);
107.
108. // Removing the node having maximum priority
109. removeMax();
110. }
111. int main()
112. {
113. // Inserting the elements in a priority queue
114.
115. insert(20);
116. insert(19);
117. insert(21);
118. insert(18);
119. insert(12);
120. insert(17);
121. insert(15);
122. insert(16);
123. insert(14);
124. int i=0;
125.
126. printf("Elements in a priority queue are : ");
127. for(int i=0;i<=size;i++)
128. {
129. printf("%d ",heap[i]);
130. }
131. delete(2); // deleting the element whose index is 2.
132. printf("\nElements in a priority queue after deleting the element are : ");
133. for(int i=0;i<=size;i++)
134. {
135. printf("%d ",heap[i]);
136. }
137. int max=get_Max();
138. printf("\nThe element which is having the highest priority is %d: ",max);
139.
140.
141. int min=get_Min();
142. printf("\nThe element which is having the minimum priority is : %d",min);
143. return 0;
144. }
o int parent(int i): This function returns the index of the parent of the node at index i.
o int left_child(int i): This function returns the index of the left child of the node at
index i.
o int right_child(int i): This function returns the index of the right child of the node at
index i.
o void moveUp(int i): This function will keep moving the node up the tree until the heap
property is restored.
o void moveDown(int i): This function will keep moving the node down the tree until the
heap property is restored.
o void removeMax(): This function removes the element which is having the highest
priority.
o void insert(int p): It inserts the element in a priority queue which is passed as an
argument in a function.
o void delete(int i): It deletes the element from a priority queue at a given index.
o int get_Max(): It returns the element which is having the highest priority, and we know
that in max heap, the root node contains the element which has the largest value, and
highest priority.
o int get_Min(): It returns the element stored in the last node, heap[size], which this
program treats as the minimum. In a max heap the smallest value always lies in one of
the leaf nodes, though not necessarily the last one.
Output
What is an Algorithm?
An algorithm is a finite set of well-defined instructions that are followed in a specific order to
solve a problem or perform a computation.
Characteristics of an Algorithm
o Input: An algorithm has some input values. We can pass zero or more input values to an
algorithm.
o Output: We will get at least one output at the end of an algorithm.
o Unambiguity: An algorithm should be unambiguous which means that the instructions in
an algorithm should be clear and simple.
o Finiteness: An algorithm should have finiteness. Here, finiteness means that the
algorithm should contain a limited number of instructions, i.e., the instructions should be
countable.
o Effectiveness: An algorithm should be effective as each instruction in an algorithm
affects the overall process.
o Language independent: An algorithm must be language-independent so that the
instructions in an algorithm can be implemented in any of the languages with the same
output.
Dataflow of an Algorithm
o Problem: A problem can be a real-world problem or any instance from the real-world
problem for which we need to create a program or the set of instructions. The set of
instructions is known as an algorithm.
o Algorithm: An algorithm will be designed for a problem which is a step by step
procedure.
o Input: After designing an algorithm, the required and the desired inputs are provided to
the algorithm.
o Processing unit: The input will be given to the processing unit, and the processing unit
will produce the desired output.
o Output: The output is the outcome or the result of the program.
Let's understand the algorithm through a real-world example. Suppose we want to make lemon
juice; the following are the steps required to make it:
Step 1: First, we will cut the lemon in half.
Step 2: Squeeze the lemon as much you can and take out its juice in a container.
Step 3: Add two tablespoons of sugar in it.
Step 4: Stir the container until the sugar gets dissolved.
Step 5: When the sugar gets dissolved, add some water and ice in it.
Step 6: Then mix it well; the lemon juice is ready.
The above real-world example can be directly compared to the definition of an algorithm. We
cannot perform step 3 before step 2; we need to follow the specific order to make lemon juice.
An algorithm also says that each and every instruction should be followed in a specific order to
perform a specific task.
Now we will look at an example of an algorithm in programming.
The following are the steps required to add two numbers entered by the user:
Step 1: Start
Step 2: Declare three variables a, b, and sum.
Step 3: Enter the values of a and b.
Step 4: Add the values of a and b and store the result in the sum variable, i.e., sum = a + b.
Step 5: Print the value of sum.
Step 6: Stop
Factors of an Algorithm
The following are the factors that we need to consider for designing an algorithm:
o Modularity: If any problem is given and we can break that problem into small modules
or small steps, which is a basic definition of an algorithm, it means that this feature has
been perfectly designed for the algorithm.
o Correctness: The correctness of an algorithm is defined as: when the given inputs
produce the desired output, it means that the algorithm has been designed correctly and
its analysis has been done correctly.
o Maintainability: Here, maintainability means that the algorithm should be designed in a
very simple structured way so that when we redefine the algorithm, no major change will
be done in the algorithm.
o Functionality: It considers various logical steps to solve the real-world problem.
o Robustness: Robustness means that the algorithm should clearly define the problem that
it solves.
o User-friendly: If the algorithm is not user-friendly, then the designer will not be able to
explain it to the programmer.
o Simplicity: If the algorithm is simple then it is easy to understand.
o Extensibility: If any other algorithm designer or programmer wants to use your
algorithm then it should be extensible.
Importance of Algorithms
1. Theoretical importance: When any real-world problem is given to us, we break the
problem into small modules. To break down the problem, we should know all the
theoretical aspects.
2. Practical importance: As we know that theory cannot be completed without the
practical implementation. So, the importance of algorithm can be considered as both
theoretical and practical.
Issues of Algorithms
The following are the issues that come while designing an algorithm:
o How to design algorithms
o How to validate algorithms
o How to analyze algorithms
o How to test a program
Approaches of Algorithm
The following are the approaches used after considering both the theoretical and practical
importance of designing an algorithm:
o Brute force algorithm: The general logic structure is applied to design an algorithm. It
is also known as an exhaustive search algorithm that searches all the possibilities to
provide the required solution. Such algorithms are of two types:
1. Optimizing: Finding all the solutions of a problem and then taking out the best
solution, or terminating as soon as the best solution is found if its value is
known in advance.
2. Sacrificing: As soon as the best solution is found, then it will stop.
o Divide and conquer: It is a widely used approach to designing an algorithm. It breaks
the problem down into smaller subproblems, solves each subproblem (often
recursively), and then combines their solutions to produce the solution of the original
problem. A valid output is produced for the valid input, and this valid output is passed
to some other function.
o Greedy algorithm: It is an algorithm paradigm that makes an optimal choice on each
iteration with the hope of getting the best solution. It is easy to implement and has a faster
execution time. But, there are very rare cases in which it provides the optimal solution.
o Dynamic programming: It makes the algorithm more efficient by storing the
intermediate results. It follows five different steps to find the optimal solution for the
problem:
1. It breaks down the problem into a subproblem to find the optimal solution.
2. After breaking down the problem, it finds the optimal solution out of these
subproblems.
3. It stores the results of the subproblems, which is known as memoization.
4. It reuses the stored results so that the same subproblem is not recomputed.
5. Finally, it computes the result of the complex program.
o Branch and Bound Algorithm: The branch and bound technique is typically applied to
integer programming and other combinatorial optimization problems. It divides the set
of feasible solutions into smaller subsets, and these subsets are further evaluated (and
pruned using bounds) to find the best solution.
o Randomized Algorithm: In a regular algorithm, we have a predefined input and a
required output. Algorithms that take a defined set of inputs and produce the required
output by following fixed, described steps are known as deterministic algorithms.
What happens when randomness is introduced? In a randomized algorithm, the
algorithm uses some random bits in addition to the input to produce its output, so its
behavior can differ from run to run. Randomized algorithms are often simpler and
more efficient than their deterministic counterparts.
o Backtracking: Backtracking is an algorithmic technique that builds a solution
recursively and discards a partial solution as soon as it fails to satisfy the constraints of
the problem.
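Backtracking can be illustrated with the subset-sum problem: extend a partial selection one element at a time, and abandon a branch as soon as it cannot satisfy the constraint. This is a minimal sketch with illustrative names, assuming non-negative inputs:

```java
public class SubsetSum {
    /* Backtracking: try including or excluding each element; abandon
       ("backtrack" from) a branch as soon as the remaining target goes
       negative. Assumes all numbers are non-negative. */
    public static boolean hasSubsetWithSum(int[] nums, int target) {
        return solve(nums, 0, target);
    }

    private static boolean solve(int[] nums, int index, int remaining) {
        if (remaining == 0) return true;    // constraint satisfied
        if (remaining < 0 || index == nums.length) return false;  // dead end: backtrack
        // choice 1: include nums[index]; choice 2: skip it
        return solve(nums, index + 1, remaining - nums[index])
            || solve(nums, index + 1, remaining);
    }

    public static void main(String[] args) {
        System.out.println(hasSubsetWithSum(new int[]{3, 34, 4, 12, 5, 2}, 9));  // true (4 + 5)
        System.out.println(hasSubsetWithSum(new int[]{3, 34, 4, 12, 5, 2}, 30)); // false
    }
}
```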
Algorithm Analysis
An algorithm can be analyzed at two levels, i.e., first before creating the algorithm, and
second after creating it. The following are the two analyses of an algorithm:
o A priori analysis: This is the theoretical analysis of an algorithm, done before
implementing it. At this level the algorithm is studied independently of factors such as
processor speed, which have no effect on this kind of analysis.
o A posteriori analysis: This is the practical analysis of an algorithm, achieved by
implementing the algorithm in some programming language. This analysis evaluates
how much running time and space the algorithm actually takes.
Algorithm Complexity
o Time complexity: The time complexity of an algorithm is the amount of time required to
complete the execution. The time complexity of an algorithm is denoted by the big O
notation. Here, big O notation is the asymptotic notation to represent the time complexity.
The time complexity is mainly calculated by counting the number of steps to finish the
execution. Let's understand the time complexity through an example.
sum = 0
// Suppose we have to calculate the sum of n numbers.
for i = 1 to n
    sum = sum + i
// when the loop ends, sum holds the sum of the n numbers
return sum
In the above code, the time taken by the loop grows in proportion to n: as the value of n
increases, the time complexity also increases. In contrast, the complexity of the statement
return sum is constant, as it does not depend on the value of n and produces its result in a
single step. We generally consider the worst-case time complexity, as it is the maximum time
taken for any given input size.
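The contrast between the O(n) loop and a constant-time computation can be made concrete in Java. The closed-form formula n(n+1)/2 is a standard identity; the class name is illustrative:

```java
public class SumComparison {
    // O(n): the loop body runs n times, so the time grows linearly with n
    public static long sumLoop(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) {
            sum = sum + i;
        }
        return sum;
    }

    // O(1): the closed-form n(n+1)/2 takes the same time for any n
    public static long sumFormula(int n) {
        return (long) n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(100));     // 5050
        System.out.println(sumFormula(100));  // 5050
    }
}
```

Both methods return the same value, but doubling n doubles the work of sumLoop while sumFormula is unaffected.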
Space complexity: The space complexity of an algorithm is the total amount of memory it
requires. An algorithm needs space for the following:
1. To store program instructions
2. To store constant values
3. To store variable values
4. To track the function calls, jumping statements, etc.
Auxiliary space: The extra space required by the algorithm, excluding the input size, is known as
an auxiliary space. The space complexity considers both the spaces, i.e., auxiliary space, and
space used by the input.
So, Space complexity = Auxiliary space + Space used by the input.
Types of Algorithms
o Search Algorithm
o Sort Algorithm
Search Algorithm
Every day, we search for something in our day-to-day life. Similarly, a computer stores a
huge amount of data, and whenever the user asks for some data, the computer searches for it
in memory and returns it to the user. There are mainly two techniques available to search for
data in an array:
o Linear search
o Binary search
Linear Search
Linear search is a very simple algorithm that searches for an element or a value from the
beginning of an array until the required element is found or the end of the array is reached. It
compares the element to be searched with each element of the array; if a match is found, it
returns the index of the element, otherwise it returns -1. This algorithm can be applied to an
unsorted list.
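The description above can be written directly in Java (class name and sample data are illustrative):

```java
public class LinearSearch {
    /* Scan the array from the beginning; return the index of the first match,
       or -1 if the element is not present. Works on unsorted arrays. */
    public static int linearSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == key) {
                return i;   // match found
            }
        }
        return -1;          // reached the end without a match
    }

    public static void main(String[] args) {
        int[] data = {70, 40, 30, 11, 57, 41, 25, 14, 52};
        System.out.println(linearSearch(data, 41));  // 5
        System.out.println(linearSearch(data, 99));  // -1
    }
}
```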
Binary Search
Binary search is a simple algorithm that finds an element very quickly. It is used to search for
an element in a sorted list: the elements must be stored in sorted order for binary search to
work, and it cannot be applied if the elements are stored in a random manner. On each step, it
compares the searched value with the middle element of the remaining list and discards the
half that cannot contain it.
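A minimal Java sketch of binary search on a sorted array follows (class name and sample data are illustrative):

```java
public class BinarySearch {
    /* Requires a sorted array. Repeatedly compare the key with the middle
       element and discard the half that cannot contain it: O(log n). */
    public static int binarySearch(int[] arr, int key) {
        int low = 0, high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;  // avoids overflow of (low + high)
            if (arr[mid] == key) {
                return mid;            // found
            } else if (arr[mid] < key) {
                low = mid + 1;         // key can only be in the right half
            } else {
                high = mid - 1;        // key can only be in the left half
            }
        }
        return -1;                     // not present
    }

    public static void main(String[] args) {
        int[] sorted = {11, 14, 25, 30, 40, 41, 52, 57, 70};
        System.out.println(binarySearch(sorted, 40));  // 4
        System.out.println(binarySearch(sorted, 99));  // -1
    }
}
```

Note how each iteration halves the search space, which is exactly why binary search is fast on sorted data, as the earlier discussion of searching a sorted list pointed out.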
Sorting Algorithms
Sorting algorithms are used to rearrange the elements in an array or a given data structure either
in an ascending or descending order. The comparison operator decides the new order of the
elements.
Asymptotic Analysis
As we know, a data structure is a way of organizing data efficiently, and that efficiency is
measured either in terms of time or space. So, the ideal data structure is one that takes the
least possible time to perform all its operations and uses the least memory space. Our focus
will be on finding the time complexity rather than the space complexity; by finding the time
complexity, we can decide which data structure is best for an algorithm.
The main question that arises is: on what basis should we compare the time complexity of
data structures? The time complexity can be compared based on the operations performed on
them. Let's consider a simple example.
Suppose we have an array of 100 elements, and we want to insert a new element at the
beginning of the array. This is a tedious task, as we first need to shift all the existing elements
towards the right before we can place the new element at the start of the array.
Suppose we instead use a linked list to add the element at the beginning. A linked list node
contains two parts, i.e., the data and the address of the next node. We simply store the
address of the current first node in the new node, and the head pointer will now point to the
newly added node. Therefore, we conclude that adding data at the beginning of a linked list is
faster than in an array. In this way, we can compare data structures and select the best
possible one for performing the operations.
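The comparison above can be demonstrated with Java's standard collections: ArrayList is array-backed (insertion at index 0 shifts every element), while LinkedList rewires only the head pointer. The class name is illustrative:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class HeadInsertDemo {
    public static void main(String[] args) {
        // Array-backed list: inserting at index 0 shifts every element right, O(n)
        List<Integer> arrayList = new ArrayList<>(List.of(2, 3, 4));
        arrayList.add(0, 1);   // all existing elements are shifted

        // Linked list: inserting at the head only rewires the head pointer, O(1)
        LinkedList<Integer> linkedList = new LinkedList<>(List.of(2, 3, 4));
        linkedList.addFirst(1);

        System.out.println(arrayList);   // [1, 2, 3, 4]
        System.out.println(linkedList);  // [1, 2, 3, 4]
    }
}
```

Both calls produce the same list, but for large lists the ArrayList insertion does work proportional to the list length while addFirst does a constant amount of work.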
How to find the Time Complexity or running time for performing the operations?
The measuring of the actual running time is not practical at all. The running time to perform any
operation depends on the size of the input. Let's understand this statement through a simple
example.
Suppose we have an array of five elements, and we want to add a new element at the beginning
of the array. To achieve this, we need to shift each element towards right, and suppose each
element takes one unit of time. There are five elements, so five units of time would be taken.
Suppose there are 1000 elements in an array, then it takes 1000 units of time to shift. It concludes
that time complexity depends upon the input size.
Therefore, if the input size is n, then f(n) is a function of n that denotes the time complexity.
Calculating the value of f(n) is easy for smaller programs, but not for bigger ones. We can
compare data structures by comparing their f(n) values. More precisely, we look at the
growth rate of f(n), because one data structure might be better than another for smaller input
sizes but not for larger ones. Now, how do we find f(n)? Suppose:
f(n) = 5n² + 6n + 12
where n is the number of instructions executed, and it depends on the size of the input.
When n=1, f(1) = 5 + 6 + 12 = 23, so the constant term 12 contributes more than half of the
total. But since we have to find the growth rate of f(n), we cannot conclude that the
maximum amount of time will always be taken by the constant 12. Let's try different values
of n to find the growth rate of f(n).
n       5n²         6n      12
1       5           6       12
10      500         60      12
100     50,000      600     12
1000    5,000,000   6,000   12
As we can observe in the above table, as the value of n increases, the running time of the 5n²
term grows to dominate, while the relative contribution of 6n and 12 keeps shrinking.
Therefore, for larger values of n, the squared term consumes almost 99% of the time. As the
n² term contributes most of the time, we can eliminate the other two terms.
Therefore,
f(n) ≈ 5n²
Here, we are getting the approximate time complexity whose result is very close to the actual
result. And this approximate measure of time complexity is known as an Asymptotic complexity.
Here, we are not calculating the exact running time, we are eliminating the unnecessary terms,
and we are just considering the term which is taking most of the time.
It is used to mathematically calculate the running time of any operation inside an algorithm.
Example: The running time of one operation is computed as f(n), and for another operation it
is computed as f(n²). The first operation's running time will increase linearly with an increase
in n, while the second operation's running time will increase quadratically. Similarly, the
running times of both operations will be nearly the same if n is small.
Worst case: It defines the input for which the algorithm takes the longest time.
Best case: It defines the input for which the algorithm takes the least time.
Asymptotic Notations
The commonly used asymptotic notations for calculating the running time complexity of an
algorithm are given below:
Big oh Notation (O)
It is the formal way to express the upper bound of an algorithm's running time. It measures
the worst-case time complexity, i.e., the longest amount of time the algorithm can take to
complete its operation.
For example:
If f(n) and g(n) are two functions defined on the positive integers, then f(n) = O(g(n)) (read
"f(n) is big oh of g(n)", or "f(n) is of the order of g(n)") if there exist constants c and n0 such
that:
f(n) <= c.g(n) for all n >= n0
This implies that f(n) does not grow faster than g(n), or that g(n) is an upper bound on the
function f(n). In this case, we are calculating the growth rate of the function, which
eventually gives the worst-case time complexity, i.e., how badly an algorithm can perform.
For example, let f(n) = 2n+3 and g(n) = n. Is f(n) = O(g(n))? Check the condition
f(n) <= c.g(n) with c = 5:
If n=1: 2*1+3 <= 5*1, i.e., 5 <= 5
If n=2: 2*2+3 <= 5*2, i.e., 7 <= 10
For any value of n starting from 1, the condition 2n+3 <= c.n is satisfied when c = 5.
Therefore, for the constants c = 5 and n0 = 1, the condition 2n+3 <= c.n always holds. Since
the condition is satisfied, f(n) is big oh of g(n), i.e., f(n) grows at most linearly. Therefore, we
conclude that c.g(n) is an upper bound of f(n). It can be represented graphically as:
The idea of using big O notation is to give an upper bound on a particular function, which
eventually leads to the worst-case time complexity. It provides assurance that the function
will not suddenly behave in a quadratic or cubic fashion; even in the worst case it still
behaves linearly.
Omega Notation (Ω)
o It basically describes the best-case scenario, which is the opposite of big O notation.
o It is the formal way to represent the lower bound of an algorithm's running time. It
measures the best amount of time an algorithm can possibly take to complete, i.e., the
best-case time complexity.
o It determines the fastest time in which an algorithm can run.
If we want to state that an algorithm takes at least a certain amount of time (without giving
an upper bound), we use big-Ω notation, i.e., the Greek letter "omega". It is used to bound the
growth of the running time from below for large input sizes.
If f(n) and g(n) are two functions defined on the positive integers, then f(n) = Ω(g(n)) (read
"f(n) is omega of g(n)") if there exist constants c and n0 such that f(n) >= c.g(n) for all
n >= n0.
For example, let f(n) = 2n+3 and g(n) = n. Is f(n) = Ω(g(n))? To check the condition
f(n) >= c.g(n), we replace f(n) by 2n+3 and g(n) by n:
2n+3 >= c*n
Suppose c=1:
2n+3 >= n (this inequality holds for any value of n starting from 1).
As we can see in the above figure, the function g(n) is a lower bound of the function f(n)
when the value of c is equal to 1. This notation therefore describes the fastest possible
running time. But we are usually less interested in the fastest running time; we are interested
in the worst-case scenario, because we want to know, for large inputs, the worst time an
algorithm can take, so that we can make decisions accordingly.
Theta Notation (θ)
Let f(n) and g(n) be functions of n, where n is the number of steps required to execute the
program. Then f(n) = θ(g(n)) if there exist constants c1, c2, and n0 such that:
c1.g(n) <= f(n) <= c2.g(n) for all n >= n0
Here the function is bounded by two limits, i.e., an upper and a lower limit, and f(n) lies in
between. The condition f(n) = θ(g(n)) will be true if and only if c1.g(n) is less than or equal to f(n)
and c2.g(n) is greater than or equal to f(n). The graphical representation of theta notation is given
below:
For example, let f(n) = 2n+3 and g(n) = n. Since c1.g(n) should be less than or equal to f(n),
c1 can be 1, whereas c2.g(n) should be greater than or equal to f(n), so c2 can be 5. Then
c1.g(n) is the lower limit of f(n) while c2.g(n) is the upper limit:
c1.g(n) <= f(n) <= c2.g(n)
c1.n <= 2n+3 <= c2.n
If n=2:
1*2 <= 2*2+3 <= 5*2, i.e., 2 <= 7 <= 10
Therefore, for any value of n, the condition c1.g(n) <= f(n) <= c2.g(n) is satisfied. Hence, it
is proved that f(n) is big theta of g(n). This is the average-case notation, which provides a
realistic time complexity.
Why do we have three different asymptotic notations?
As we know, big omega is for the best case, big oh is for the worst case, while big theta is for
the average case. Now, we will find the average, worst, and best case of the linear search
algorithm.
Suppose we have an array of n numbers, and we want to find a particular element in the
array using linear search. In linear search, every element is compared with the searched
element on each iteration. If the match is found in the first iteration, the best case is Ω(1); if
the element matches only the last, i.e., the nth element of the array, the worst case is O(n).
The average case is midway between the best and the worst case, so it becomes θ(n/2); since
constant factors can be ignored in time complexity, the average case is θ(n).
So, three different analysis provide the proper bounding between the actual functions. Here,
bounding means that we have upper as well as lower limit which assures that the algorithm will
behave between these limits only, i.e., it will not go beyond these limits.
constant - Θ(1)
linear - Θ(n)
logarithmic - Θ(log n)
exponential - 2Θ(n)
cubic - Θ(n³)
polynomial - nΘ(1)
quadratic - Θ(n²)
Graph
A graph can be defined as a group of vertices and edges, where the edges are used to connect
the vertices. Unlike a tree, a graph may contain cycles, and its vertices (nodes) can maintain
any complex relationship among them instead of a strict parent-child relationship.
Definition
A graph G can be defined as an ordered set G(V, E) where V(G) represents the set of vertices
and E(G) represents the set of edges which are used to connect these vertices.
A Graph G(V, E) with 5 vertices (A, B, C, D, E) and six edges ((A,B), (B,C), (C,E), (E,D),
(D,B), (D,A)) is shown in the following figure.
A graph can be directed or undirected. In an undirected graph, edges are not associated with
directions. The graph in the above figure is undirected since its edges have no direction; if an
edge exists between vertices A and B, it can be traversed from B to A as well as from A to B.
In a directed graph, edges form an ordered pair. Edges represent a specific path from some vertex
A to another vertex B. Node A is called initial node while node B is called terminal node.
Graph Terminology
Path
A path can be defined as the sequence of nodes that are followed in order to reach some terminal
node V from the initial node U.
Closed Path
A path is called a closed path if the initial node is the same as the terminal node, i.e., if
V0 = VN.
Simple Path
If all the nodes of the path are distinct, with the possible exception V0 = VN, then the path P
is called a simple path (a closed simple path when V0 = VN).
Cycle
A cycle can be defined as the path which has no repeated edges or vertices except the first and
last vertices.
Connected Graph
A connected graph is the one in which some path exists between every two vertices (u, v) in V.
There are no isolated nodes in connected graph.
Complete Graph
A complete graph is one in which every node is connected to every other node. A complete
graph contains n(n-1)/2 edges, where n is the number of nodes in the graph.
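The edge-count formula is easy to verify in code: each of the n nodes pairs with the other n-1, and dividing by 2 removes double counting. A tiny illustrative sketch:

```java
public class CompleteGraph {
    // number of edges in a complete graph with n nodes: n(n-1)/2
    public static int edgeCount(int n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(edgeCount(4));  // 6: every pair among 4 nodes is connected
        System.out.println(edgeCount(5));  // 10
    }
}
```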
Weighted Graph
In a weighted graph, each edge is assigned with some data such as length or weight. The weight
of an edge e can be given as w(e) which must be a positive (+) value indicating the cost of
traversing the edge.
Digraph
A digraph is a directed graph in which each edge of the graph is associated with some direction
and the traversing can be done only in the specified direction.
Loop
An edge whose two end points are the same node is called a loop.
Adjacent Nodes
If two nodes u and v are connected via an edge e, then the nodes u and v are called as neighbours
or adjacent nodes.
Degree of a Node
The degree of a node is the number of edges that are connected to that node. A node with
degree 0 is called an isolated node.
Graph representation
In this article, we will discuss the ways to represent the graph. By Graph representation, we
simply mean the technique to be used to store some graph into the computer's memory.
A graph is a data structure that consists of a set of vertices (also called nodes) and a set of
edges. There are two ways to store a graph in the computer's memory:
o Sequential representation, which uses an adjacency matrix to store the graph
o Linked list representation, which uses an adjacency list to store the graph
Now, let's start discussing the ways of representing a graph in the data structure.
Sequential representation
In sequential representation, there is a use of an adjacency matrix to represent the mapping
between vertices and edges of the graph. We can use an adjacency matrix to represent the
undirected graph, directed graph, weighted directed graph, and weighted undirected graph.
If adj[i][j] = w, it means that there is an edge exists from vertex i to vertex j with weight w.
An entry Aij in the adjacency matrix representation of an undirected graph G will be 1 if an
edge exists between Vi and Vj. If an undirected graph G consists of n vertices, then the
adjacency matrix for that graph is n x n, and the matrix A = [aij] can be defined as -
aij = 1 {if there is an edge between Vi and Vj}
aij = 0 {Otherwise}
This means that, in an adjacency matrix, 0 represents that no edge exists between two nodes,
whereas 1 represents the existence of an edge between two vertices.
If there is no self-loop present in the graph, it means that the diagonal entries of the adjacency
matrix will be 0.
In the above figure, an image shows the mapping among the vertices (A, B, C, D, E), and this
mapping is represented by using the adjacency matrix.
There exist different adjacency matrices for the directed and undirected graph. In a directed
graph, an entry Aij will be 1 only when there is an edge directed from Vi to Vj.
In a directed graph, edges represent a specific path from one vertex to another vertex. Suppose a
path exists from vertex A to another vertex B; it means that node A is the initial node, while node
B is the terminal node.
Consider the below directed graph and try to construct its adjacency matrix.
In the above graph, we can see there is no self-loop, so the diagonal entries of the adjacency
matrix are 0.
The adjacency matrix for a weighted directed graph is similar to the adjacency matrix
representation of a directed graph, except that instead of using '1' for the existence of an
edge, we use the weight associated with the edge. The weights on the graph edges are
represented as the entries of the adjacency matrix. We can understand it with the help of an
example. Consider the below graph and its adjacency matrix representation, in which the
weight associated with each edge appears as the corresponding entry of the matrix.
In the above image, we can see that the adjacency matrix representation of the weighted directed
graph is different from other representations. It is because, in this representation, the non-zero
values are replaced by the actual weight assigned to the edges.
An adjacency matrix is easy to implement and follow. It is best used when the graph is dense
and the number of edges is large.
Although an adjacency matrix is convenient, it consumes more space: even if the graph is
sparse, the matrix still occupies the same amount of memory.
An adjacency list is used in the linked representation to store the Graph in the computer's
memory. It is efficient in terms of storage as we only have to store the values for edges.
In the above figure, we can see that there is a linked list or adjacency list for every node of the
graph. From vertex A, there are paths to vertex B and vertex D. These nodes are linked to nodes
A in the given adjacency list.
An adjacency list is maintained for each node present in the graph, which stores the node value
and a pointer to the next adjacent node to the respective node. If all the adjacent nodes are
traversed, then store the NULL in the pointer field of the last node of the list.
The sum of the lengths of adjacency lists is equal to twice the number of edges present in an
undirected graph.
Now, consider the directed graph, and let's see the adjacency list representation of that graph.
For a directed graph, the sum of the lengths of adjacency lists is equal to the number of edges
present in the graph.
Now, consider the weighted directed graph, and let's see the adjacency list representation of that
graph.
In the case of a weighted directed graph, each list node contains an extra field that stores the
weight of the edge.
In an adjacency list, it is easy to add a vertex. Because of using the linked list, it also saves space.
In this program, there is an adjacency matrix representation of an undirected graph. It means
that if an edge exists from vertex A to vertex B, an edge also exists from vertex B to vertex
A.
Here, there are four vertices and five edges in the graph that are non-directed.
#include <stdio.h>

/* number of vertices in the graph */
#define V 4

/* function to initialize the matrix to zero */
void init(int arr[][V]) {
    int i, j;
    for (i = 0; i < V; i++)
        for (j = 0; j < V; j++)
            arr[i][j] = 0;
}

/* function to add edges to the graph */
void insertEdge(int arr[][V], int i, int j) {
    arr[i][j] = 1;
    arr[j][i] = 1;
}

/* function to print the matrix elements */
void printAdjMatrix(int arr[][V]) {
    int i, j;
    for (i = 0; i < V; i++) {
        printf("%d: ", i);
        for (j = 0; j < V; j++) {
            printf("%d ", arr[i][j]);
        }
        printf("\n");
    }
}

int main() {
    int adjMatrix[V][V];

    init(adjMatrix);
    insertEdge(adjMatrix, 0, 1);
    insertEdge(adjMatrix, 0, 2);
    insertEdge(adjMatrix, 1, 2);
    insertEdge(adjMatrix, 2, 0);
    insertEdge(adjMatrix, 2, 3);

    printAdjMatrix(adjMatrix);

    return 0;
}
Output:
After the execution of the above code, the output will be -
In this program, there is an adjacency list representation of an undirected graph. It means
that if an edge exists from vertex A to vertex B, an edge also exists from vertex B to vertex
A.
#include <stdio.h>
#include <stdlib.h>

/* adjacency list node */
struct AdjNode {
    int dest;
    struct AdjNode* next;
};

/* adjacency list */
struct AdjList {
    struct AdjNode* head;
};

/* graph: an array of V adjacency lists */
struct Graph {
    int V;
    struct AdjList* array;
};

/* function to create a new adjacency list node */
struct AdjNode* newAdjNode(int dest)
{
    struct AdjNode* newNode = (struct AdjNode*)malloc(sizeof(struct AdjNode));
    newNode->dest = dest;
    newNode->next = NULL;
    return newNode;
}

struct Graph* createGraph(int V)
{
    struct Graph* graph = (struct Graph*)malloc(sizeof(struct Graph));
    graph->V = V;
    graph->array = (struct AdjList*)malloc(V * sizeof(struct AdjList));

    /* Initialize each adjacency list as empty by making head as NULL */
    int i;
    for (i = 0; i < V; ++i)
        graph->array[i].head = NULL;
    return graph;
}

/* function to add an edge to an undirected graph */
void addEdge(struct Graph* graph, int src, int dest)
{
    /* Add an edge from src to dest. The node is appended at the end of the list */
    struct AdjNode* check = NULL;
    struct AdjNode* newNode = newAdjNode(dest);

    if (graph->array[src].head == NULL) {
        graph->array[src].head = newNode;
    }
    else {
        check = graph->array[src].head;
        while (check->next != NULL) {
            check = check->next;
        }
        check->next = newNode;
    }

    /* Since graph is undirected, add an edge from dest to src also */
    newNode = newAdjNode(src);
    if (graph->array[dest].head == NULL) {
        graph->array[dest].head = newNode;
    }
    else {
        check = graph->array[dest].head;
        while (check->next != NULL) {
            check = check->next;
        }
        check->next = newNode;
    }
}

/* function to print the adjacency list representation of graph */
void print(struct Graph* graph)
{
    int v;
    for (v = 0; v < graph->V; ++v) {
        struct AdjNode* pCrawl = graph->array[v].head;
        printf("\n The Adjacency list of vertex %d is: \n head ", v);
        while (pCrawl) {
            printf("-> %d", pCrawl->dest);
            pCrawl = pCrawl->next;
        }
        printf("\n");
    }
}

int main()
{
    /* vertices 0..4 are used by the edges below, so the graph needs 5 vertices */
    int V = 5;
    struct Graph* g = createGraph(V);
    addEdge(g, 0, 1);
    addEdge(g, 0, 3);
    addEdge(g, 1, 2);
    addEdge(g, 1, 3);
    addEdge(g, 2, 4);
    addEdge(g, 2, 3);
    addEdge(g, 3, 4);
    print(g);
    return 0;
}
Output:
In the output, we will see the adjacency list representation of all the vertices of the graph. After
the execution of the above code, the output will be -
BFS algorithm
In this article, we will discuss the BFS algorithm in the data structure. Breadth-first search is a
graph traversal algorithm that starts traversing the graph from the root node and explores all the
neighboring nodes. Then, it selects the nearest node and explores all the unexplored nodes.
While using BFS for traversal, any node in the graph can be considered as the root node.
There are many ways to traverse a graph, but among them, BFS is the most commonly used
approach. It is an algorithm that searches all the vertices of a tree or graph data structure.
BFS puts every vertex of the graph into one of two categories - visited and non-visited. It
selects a single node in the graph and, after that, visits all the nodes adjacent to the selected
node.
o BFS can be used to find the neighboring locations from a given source location.
o In a peer-to-peer network, BFS algorithm can be used as a traversal method to find all the
neighboring nodes. Most torrent clients, such as BitTorrent, uTorrent, etc. employ this
process to find "seeds" and "peers" in the network.
o BFS can be used in web crawlers to create web page indexes. It is one of the main
algorithms that can be used to index web pages. It starts traversing from the source page
and follows the links associated with the page. Here, every web page is considered as a
node in the graph.
o BFS is used to determine shortest paths in unweighted graphs and to build spanning
trees.
o BFS is also used in Cheney's algorithm for copying garbage collection.
o It can be used in the Ford-Fulkerson method to compute the maximum flow in a flow
network.
Algorithm
The steps involved in the BFS algorithm to explore a graph are given as follows -
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until the QUEUE is empty
Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and
set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
Now, let's understand the working of BFS algorithm by using an example. In the example given
below, there is a directed graph having 7 vertices.
In the above graph, minimum path 'P' can be found by using the BFS that will start from Node A
and end at Node E. The algorithm uses two queues, namely QUEUE1 and QUEUE2. QUEUE1
holds all the nodes that are to be processed, while QUEUE2 holds all the nodes that are
processed and deleted from QUEUE1.
Step 1 - Initially, add node A to QUEUE1 and NULL to QUEUE2.
1. QUEUE1 = {A}
2. QUEUE2 = {NULL}
Step 2 - Now, delete node A from queue1 and add it into queue2. Insert all neighbors of node A
to queue1.
1. QUEUE1 = {B, D}
2. QUEUE2 = {A}
Step 3 - Now, delete node B from queue1 and add it into queue2. Insert all neighbors of node B
to queue1.
1. QUEUE1 = {D, C, F}
2. QUEUE2 = {A, B}
Step 4 - Now, delete node D from queue1 and add it into queue2. Insert all neighbours of
node D into queue1. The only neighbour of node D is F; since F is already inserted, it will not
be inserted again.
1. QUEUE1 = {C, F}
2. QUEUE2 = {A, B, D}
Step 5 - Delete node C from queue1 and add it into queue2. Insert all neighbors of node C to
queue1.
1. QUEUE1 = {F, E, G}
2. QUEUE2 = {A, B, D, C}
Step 6 - Delete node F from queue1 and add it into queue2. Insert all neighbours of node F
into queue1. Since all the neighbours of node F are already present, we will not insert them
again.
1. QUEUE1 = {E, G}
2. QUEUE2 = {A, B, D, C, F}
Step 7 - Delete node E from queue1. Since all of its neighbours have already been added, we
will not insert them again. Now all the nodes are visited, and the target node E is encountered
in queue2.
1. QUEUE1 = {G}
2. QUEUE2 = {A, B, D, C, F, E}
Time complexity of BFS depends upon the data structure used to represent the graph. The time
complexity of BFS algorithm is O(V+E), since in the worst case, BFS algorithm explores every
node and edge. In a graph, the number of vertices is O(V), whereas the number of edges is O(E).
The space complexity of BFS can be expressed as O(V), where V is the number of vertices.
In this code, we are using the adjacency list to represent our graph. Implementing the Breadth-
First Search algorithm in Java makes it much easier to deal with the adjacency list since we only
have to travel through the list of nodes attached to each node once the node is dequeued from the
head (or start) of the queue.
In this example, the graph that we are using to demonstrate the code is given as follows -
import java.io.*;
import java.util.*;

public class BFSTraversal
{
    private int vertex;                /* total number of vertices in the graph */
    private LinkedList<Integer> adj[]; /* adjacency list */
    private Queue<Integer> que;        /* maintaining a queue */

    BFSTraversal(int v)
    {
        vertex = v;
        adj = new LinkedList[vertex];
        for (int i = 0; i < v; i++)
        {
            adj[i] = new LinkedList<>();
        }
        que = new LinkedList<Integer>();
    }

    void insertEdge(int v, int w)
    {
        adj[v].add(w); /* add a directed edge from v to w in the adjacency list */
    }

    void BFS(int n)
    {
        boolean nodes[] = new boolean[vertex]; /* boolean array marking visited nodes */
        int a = 0;
        nodes[n] = true;
        que.add(n); /* root node is added to the top of the queue */
        while (que.size() != 0)
        {
            n = que.poll();             /* remove the top element of the queue */
            System.out.print(n + " ");  /* print the top element of the queue */
            /* iterate through the linked list and push all neighbours into the queue */
            for (int i = 0; i < adj[n].size(); i++)
            {
                a = adj[n].get(i);
                /* only insert nodes into the queue if they have not been explored already */
                if (!nodes[a])
                {
                    nodes[a] = true;
                    que.add(a);
                }
            }
        }
    }

    public static void main(String args[])
    {
        BFSTraversal graph = new BFSTraversal(10);
        graph.insertEdge(0, 1);
        graph.insertEdge(0, 2);
        graph.insertEdge(0, 3);
        graph.insertEdge(1, 3);
        graph.insertEdge(2, 4);
        graph.insertEdge(3, 5);
        graph.insertEdge(3, 6);
        graph.insertEdge(4, 7);
        graph.insertEdge(4, 5);
        graph.insertEdge(5, 2);
        graph.insertEdge(6, 5);
        graph.insertEdge(7, 5);
        graph.insertEdge(7, 8);
        System.out.println("Breadth First Traversal for the graph is:");
        graph.BFS(2);
    }
}
Output
Conclusion
In this article, we have discussed the breadth-first search technique along with its example,
complexity, and implementation in the Java programming language. We have also seen real-
life applications of BFS that show why it is an important graph algorithm.
In this article, we will discuss the DFS algorithm in the data structure. It is a recursive algorithm
to search all the vertices of a tree data structure or a graph. The depth-first search (DFS)
algorithm starts with the initial node of graph G and goes deeper until we find the goal node or
the node with no children.
Because of its recursive nature, a stack data structure can be used to implement the DFS algorithm. The process of implementing DFS is similar to that of the BFS algorithm.
The step-by-step process to implement the DFS traversal is given as follows -
1. First, create a stack with a capacity equal to the total number of vertices in the graph.
2. Now, choose any vertex as the starting point of the traversal, and push that vertex onto the stack.
3. After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) onto the top of the stack.
4. Repeat step 3 until no non-visited vertex adjacent to the vertex on the stack's top is left.
5. When no such vertex is left, pop a vertex from the stack and backtrack.
6. Repeat steps 3, 4, and 5 until the stack is empty.
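As a sketch, the stack-based steps above can be expressed in Java as follows. This is a minimal illustration only; the class name IterativeDFS and the adjacency-list layout are choices made for this example, not part of the original program:

```java
import java.util.*;

public class IterativeDFS {
    // Visits every vertex reachable from 'start' using an explicit stack,
    // mirroring the numbered steps above (push start, pop, push neighbors).
    public static List<Integer> dfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        List<Integer> order = new ArrayList<>();
        stack.push(start);                 // step 2: push the starting vertex
        while (!stack.isEmpty()) {         // repeat until the stack is empty
            int v = stack.pop();           // pop the vertex on top ...
            if (!visited[v]) {
                visited[v] = true;         // ... and process it exactly once
                order.add(v);
                for (int w : adj.get(v))   // step 3: push non-visited neighbors
                    if (!visited[w])
                        stack.push(w);
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // small directed graph: 0 -> 1, 0 -> 2, 1 -> 3
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        adj.get(0).add(1);
        adj.get(0).add(2);
        adj.get(1).add(3);
        System.out.println(dfs(adj, 0));
    }
}
```

Note that when a vertex has several unvisited neighbors, the one pushed last is explored first, so the visiting order depends on the order in which edges were inserted.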
Algorithm
Step 1: SET STATUS = 1 (ready state) for each node in graph G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until the STACK is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1)
and set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
Pseudocode
DFS(G, s)            // 'G' is the graph, 's' is the source vertex
    let S be a stack
    S.push(s)
    while S is not empty
        v = S.pop()
        if v is not visited
            mark v as visited
            process v
            for each neighbour w of v
                S.push(w)
        end if
    end while
END DFS()
Now, let's understand the working of the DFS algorithm by using an example. In the example given below, there is a directed graph having 8 vertices.
Step 1 - First, PUSH the starting node H onto the stack.
1. STACK: H
Step 2 - POP the top element from the stack, i.e., H, and print it. Now, PUSH all the neighbors of H onto the stack that are in ready state.
1. Print: H
2. STACK: A
Step 3 - POP the top element from the stack, i.e., A, and print it. Now, PUSH all the neighbors
of A onto the stack that are in ready state.
1. Print: A
2. STACK: B, D
Step 4 - POP the top element from the stack, i.e., D, and print it. Now, PUSH all the neighbors
of D onto the stack that are in ready state.
1. Print: D
2. STACK: B, F
Step 5 - POP the top element from the stack, i.e., F, and print it. Now, PUSH all the neighbors of
F onto the stack that are in ready state.
1. Print: F
2. STACK: B
Step 6 - POP the top element from the stack, i.e., B, and print it. Now, PUSH all the neighbors of
B onto the stack that are in ready state.
1. Print: B
2. STACK: C
Step 7 - POP the top element from the stack, i.e., C, and print it. Now, PUSH all the neighbors of
C onto the stack that are in ready state.
1. Print: C
2. STACK: E, G
Step 8 - POP the top element from the stack, i.e., G, and print it. Now, PUSH all the neighbors of G onto the stack that are in ready state.
1. Print: G
2. STACK: E
Step 9 - POP the top element from the stack, i.e., E, and print it. Now, PUSH all the neighbors of E onto the stack that are in ready state.
1. Print: E
2. STACK:
Now, all the graph nodes have been traversed, and the stack is empty.
The time complexity of the DFS algorithm is O(V+E), where V is the number of vertices and E
is the number of edges in the graph.
In this example, the graph that we are using to demonstrate the code is given as follows -
/* A sample Java program to implement the DFS algorithm */

import java.util.*;

class DFSTraversal {
    private LinkedList<Integer> adj[]; /* adjacency list representation */
    private boolean visited[];

    /* Creation of the graph */
    DFSTraversal(int V) /* 'V' is the number of vertices in the graph */
    {
        adj = new LinkedList[V];
        visited = new boolean[V];

        for (int i = 0; i < V; i++)
            adj[i] = new LinkedList<Integer>();
    }

    /* Adding an edge to the graph */
    void insertEdge(int src, int dest) {
        adj[src].add(dest);
    }

    void DFS(int vertex) {
        visited[vertex] = true; /* mark the current node as visited */
        System.out.print(vertex + " ");

        Iterator<Integer> it = adj[vertex].listIterator();
        while (it.hasNext()) {
            int n = it.next();
            if (!visited[n])
                DFS(n);
        }
    }

    public static void main(String args[]) {
        DFSTraversal graph = new DFSTraversal(8);

        graph.insertEdge(0, 1);
        graph.insertEdge(0, 2);
        graph.insertEdge(0, 3);
        graph.insertEdge(1, 3);
        graph.insertEdge(2, 4);
        graph.insertEdge(3, 5);
        graph.insertEdge(3, 6);
        graph.insertEdge(4, 7);
        graph.insertEdge(4, 5);
        graph.insertEdge(5, 2);

        System.out.println("Depth First Traversal for the graph is:");
        graph.DFS(0);
    }
}
Output
Spanning tree
In this article, we will discuss the spanning tree and the minimum spanning tree. But before
moving directly towards the spanning tree, let's first see a brief description of the graph and its
types.
Graph
A graph can be defined as a group of vertices and edges to connect these vertices. The types of
graphs are given as follows -
o Undirected graph: An undirected graph is a graph in which the edges do not point in any particular direction, i.e., they are bidirectional. It can also be defined as a graph with a set of V vertices and a set of E edges, each edge connecting two different vertices.
o Connected graph: A connected graph is a graph in which a path always exists from a
vertex to any other vertex. A graph is connected if we can reach any vertex from any
other vertex by following edges in either direction.
o Directed graph: Directed graphs are also known as digraphs. A graph is a directed graph
(or digraph) if all the edges present between any vertices or nodes of the graph are
directed or have a defined direction.
A spanning tree can be defined as the subgraph of an undirected connected graph. It includes all
the vertices along with the least possible number of edges. If any vertex is missed, it is not a
spanning tree. A spanning tree is a subset of the graph that does not have cycles, and it also
cannot be disconnected.
A spanning tree consists of (n-1) edges, where 'n' is the number of vertices (or nodes). Edges of
the spanning tree may or may not have weights assigned to them. All the possible spanning trees
created from the given graph G would have the same number of vertices, but the number of
edges in the spanning tree would be equal to the number of vertices in the given graph minus 1.
A complete undirected graph can have n^(n-2) spanning trees, where n is the number of vertices in the graph. For example, if n = 5, the maximum possible number of spanning trees would be 5^(5-2) = 5^3 = 125.
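This count is known as Cayley's formula, and the arithmetic can be confirmed with a few lines of Java (the class and method names here are illustrative, chosen for this sketch only):

```java
public class SpanningTreeCount {
    // Number of spanning trees of a complete graph on n vertices: n^(n-2).
    // Repeated integer multiplication avoids floating-point rounding
    // that Math.pow could introduce.
    public static long count(int n) {
        long result = 1;
        for (int i = 0; i < n - 2; i++)
            result *= n;
        return result;
    }

    public static void main(String[] args) {
        System.out.println(count(5)); // 5^3 = 125, as in the text
    }
}
```

For n = 5 this prints 125, matching the example above; note the result grows very quickly, so a long overflows for moderately large n.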
Basically, a spanning tree is used to find a minimum path to connect all nodes of the graph.
Some of the common applications of the spanning tree are listed as follows -
o Cluster Analysis
o Civil network planning
o Computer network routing protocol
Now, let's understand the spanning tree with the help of an example.
As discussed above, a spanning tree contains the same number of vertices as the graph. The number of vertices in the above graph is 5; therefore, the spanning tree will contain 5 vertices. The number of edges in the spanning tree will be equal to the number of vertices in the graph minus 1, so there will be 4 edges in the spanning tree.
Some of the possible spanning trees that will be created from the above graph are given as
follows -
Properties of spanning-tree
o There can be more than one spanning tree of a connected graph G.
o A spanning tree does not have any cycles or loops.
o A spanning tree is minimally connected, so removing one edge from the tree will make
the graph disconnected.
o A spanning tree is maximally acyclic, so adding one edge to the tree will create a loop.
o A maximum of n^(n-2) spanning trees can be created from a complete graph.
o A spanning tree has n-1 edges, where 'n' is the number of nodes.
o If the graph is a complete graph, then the spanning tree can be constructed by removing
maximum (e-n+1) edges, where 'e' is the number of edges and 'n' is the number of
vertices.
So, a spanning tree is a subgraph of a connected graph G, and there is no spanning tree of a disconnected graph.
A minimum spanning tree can be defined as the spanning tree in which the sum of the weights of
the edge is minimum. The weight of the spanning tree is the sum of the weights given to the
edges of the spanning tree. In the real world, this weight can be considered as the distance, traffic
load, congestion, or any random value.
Let's understand the minimum spanning tree with the help of an example.
The sum of the edges of the above graph is 16. Now, some of the possible spanning trees created
from the above graph are -
So, the minimum spanning tree that is selected from the above spanning trees for the given
weighted graph is -
A minimum spanning tree can be found from a weighted graph by using the algorithms given
below -
o Prim's Algorithm
o Kruskal's Algorithm
Prim's algorithm - It is a greedy algorithm that starts with an empty spanning tree and is used to find the minimum spanning tree of a graph. This algorithm finds the subset of edges that includes every vertex of the graph such that the sum of the weights of the edges is minimized.
To learn more about Prim's algorithm, you can click the link below
- https://www.javatpoint.com/prim-algorithm
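As a rough illustration of the idea, here is a compact priority-queue sketch of Prim's algorithm. It is not the implementation from the linked article; the class name, method name, and adjacency-matrix input format are all assumptions made for this example:

```java
import java.util.*;

public class PrimSketch {
    // Returns the total weight of a minimum spanning tree of a connected
    // undirected graph given as an adjacency matrix (0 = no edge).
    public static int mstWeight(int[][] w) {
        int n = w.length;
        boolean[] inTree = new boolean[n];
        // queue of {weight, vertex} pairs, smallest weight first
        PriorityQueue<int[]> pq = new PriorityQueue<>((x, y) -> x[0] - y[0]);
        pq.add(new int[]{0, 0});          // start from vertex 0 with cost 0
        int total = 0;
        while (!pq.isEmpty()) {
            int[] e = pq.poll();
            int v = e[1];
            if (inTree[v]) continue;      // skip stale queue entries
            inTree[v] = true;
            total += e[0];                // greedily add the cheapest crossing edge
            for (int u = 0; u < n; u++)
                if (w[v][u] != 0 && !inTree[u])
                    pq.add(new int[]{w[v][u], u});
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] w = {
            {0, 2, 3, 0},
            {2, 0, 1, 4},
            {3, 1, 0, 5},
            {0, 4, 5, 0}
        };
        System.out.println(mstWeight(w));
    }
}
```

For the 4-vertex graph in main(), the algorithm picks the edges of weight 2, 1, and 4, for a total MST weight of 7.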
Kruskal's algorithm - This algorithm is also used to find the minimum spanning tree of a connected weighted graph. Kruskal's algorithm also follows a greedy approach, which finds an optimum solution at every stage instead of focusing on a global optimum.
To learn more about Kruskal's algorithm, you can click the link below
- https://www.javatpoint.com/kruskal-algorithm
That's all for this article; we hope it was helpful and informative. Here, we have discussed the spanning tree and the minimum spanning tree along with their properties, examples, and applications.
In this article, we will discuss the Linear Search Algorithm. Searching is the process of finding
some particular element in the list. If the element is present in the list, then the process is called
successful, and the process returns the location of that element; otherwise, the search is called
unsuccessful.
Two popular search methods are Linear Search and Binary Search. So, here we will discuss the
popular searching technique, i.e., Linear Search Algorithm.
Linear search is also called the sequential search algorithm. It is the simplest searching algorithm. In linear search, we simply traverse the list completely and match each element of the list with the item whose location is to be found. If a match is found, then the location of the item is returned; otherwise, the algorithm returns NULL.
It is widely used to search an element from the unordered list, i.e., the list in which items are not
sorted. The worst-case time complexity of linear search is O(n).
The steps used in the implementation of Linear Search are listed as follows -
Algorithm
Linear_Search(a, n, val) // 'a' is the given array, 'n' is the size of the given array, 'val' is the value to search
Step 1: set pos = -1
Step 2: set i = 1
Step 3: repeat step 4 while i <= n
Step 4: if a[i] == val
            set pos = i
            print pos
            go to step 6
        [end of if]
        set i = i + 1
[end of loop]
Step 5: if pos = -1
            print "value is not present in the array"
        [end of if]
Step 6: exit
To understand the working of linear search algorithm, let's take an unsorted array. It will be easy
to understand the working of linear search with an example.
Now, start from the first element and compare K with each element of the array.
The value of K, i.e., 41, is not matched with the first element of the array. So, move to the next
element. And follow the same process until the respective element is found.
Now, the element to be searched is found, so the algorithm will return the position of the matched element.
Linear Search complexity
Now, let's see the time complexity of linear search in the best case, average case, and worst case.
We will also see the space complexity of linear search.
1. Time Complexity
o Best Case Complexity - In Linear search, best case occurs when the element we are
finding is at the first position of the array. The best-case time complexity of linear search
is O(1).
o Average Case Complexity - The average case time complexity of linear search is O(n).
o Worst Case Complexity - In linear search, the worst case occurs when the element we are looking for is present at the end of the array, or is not present in the array at all, so we have to traverse the entire array. The worst-case time complexity of linear search is O(n).
The time complexity of linear search is O(n) because every element in the array is compared at most once.
2. Space Complexity
The space complexity of linear search is O(1), because it uses only a constant amount of extra space regardless of the input size.
Now, let's see the programs of linear search in different programming languages.
#include <stdio.h>
int linearSearch(int a[], int n, int val) {
    // Going through the array sequentially
    for (int i = 0; i < n; i++)
    {
        if (a[i] == val)
            return i+1;
    }
    return -1;
}
int main() {
    int a[] = {70, 40, 30, 11, 57, 41, 25, 14, 52}; // given array
    int val = 41; // value to be searched
    int n = sizeof(a) / sizeof(a[0]); // size of array
    int res = linearSearch(a, n, val); // Store result
    printf("The elements of the array are - ");
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\nElement to be searched is - %d", val);
    if (res == -1)
        printf("\nElement is not present in the array");
    else
        printf("\nElement is present at %d position of array", res);
    return 0;
}
Output
#include <iostream>
using namespace std;
int linearSearch(int a[], int n, int val) {
    // Going through the array sequentially
    for (int i = 0; i < n; i++)
    {
        if (a[i] == val)
            return i+1;
    }
    return -1;
}
int main() {
    int a[] = {69, 39, 29, 10, 56, 40, 24, 13, 51}; // given array
    int val = 56; // value to be searched
    int n = sizeof(a) / sizeof(a[0]); // size of array
    int res = linearSearch(a, n, val); // Store result
    cout<<"The elements of the array are - ";
    for (int i = 0; i < n; i++)
        cout<<a[i]<<" ";
    cout<<"\nElement to be searched is - "<<val;
    if (res == -1)
        cout<<"\nElement is not present in the array";
    else
        cout<<"\nElement is present at "<<res<<" position of array";
    return 0;
}
Output
using System;
class LinearSearch {
    static int linearSearch(int[] a, int n, int val) {
        // Going through the array sequentially
        for (int i = 0; i < n; i++)
        {
            if (a[i] == val)
                return i+1;
        }
        return -1;
    }
    static void Main() {
        int[] a = {56, 30, 20, 41, 67, 31, 22, 14, 52}; // given array
        int val = 14; // value to be searched
        int n = a.Length; // size of array
        int res = linearSearch(a, n, val); // Store result
        Console.Write("The elements of the array are - ");
        for (int i = 0; i < n; i++)
            Console.Write(" " + a[i]);
        Console.WriteLine();
        Console.WriteLine("Element to be searched is - " + val);
        if (res == -1)
            Console.WriteLine("Element is not present in the array");
        else
            Console.Write("Element is present at " + res + " position of array");
    }
}
Output
class LinearSearch {
    static int linearSearch(int a[], int n, int val) {
        // Going through the array sequentially
        for (int i = 0; i < n; i++)
        {
            if (a[i] == val)
                return i+1;
        }
        return -1;
    }
    public static void main(String args[]) {
        int a[] = {55, 29, 10, 40, 57, 41, 20, 24, 45}; // given array
        int val = 10; // value to be searched
        int n = a.length; // size of array
        int res = linearSearch(a, n, val); // Store result
        System.out.println();
        System.out.print("The elements of the array are - ");
        for (int i = 0; i < n; i++)
            System.out.print(" " + a[i]);
        System.out.println();
        System.out.println("Element to be searched is - " + val);
        if (res == -1)
            System.out.println("Element is not present in the array");
        else
            System.out.println("Element is present at " + res + " position of array");
    }
}
Output
<html>
<head>
</head>
<body>
<script>
var a = [54, 26, 9, 80, 47, 71, 10, 24, 45]; // given array
var val = 71; // value to be searched
var n = a.length; // size of array
function linearSearch(a, n, val) {
    // Going through the array sequentially
    for (var i = 0; i < n; i++)
    {
        if (a[i] == val)
            return i+1;
    }
    return -1;
}
var res = linearSearch(a, n, val); // Store result
document.write("The elements of the array are: ");
for (var i = 0; i < n; i++)
    document.write(" " + a[i]);
document.write("<br>" + "Element to be searched is: " + val);
if (res == -1)
    document.write("<br>" + "Element is not present in the array");
else
    document.write("<br>" + "Element is present at " + res + " position of array");
</script>
</body>
</html>
Output
<?php
$a = array(45, 24, 8, 80, 62, 71, 10, 23, 43); // given array
$val = 62; // value to be searched
$n = sizeof($a); // size of array
function linearSearch($a, $n, $val) {
    // Going through the array sequentially
    for ($i = 0; $i < $n; $i++)
    {
        if ($a[$i] == $val)
            return $i+1;
    }
    return -1;
}
$res = linearSearch($a, $n, $val); // Store result
echo "The elements of the array are: ";
for ($i = 0; $i < $n; $i++)
    echo " " , $a[$i];
echo "<br>" , "Element to be searched is: " , $val;
if ($res == -1)
    echo "<br>" , "Element is not present in the array";
else
    echo "<br>" , "Element is present at " , $res , " position of array";
?>
Binary Search Algorithm
In this article, we will discuss the Binary Search Algorithm. Searching is the process of finding
some particular element in the list. If the element is present in the list, then the process is called
successful, and the process returns the location of that element. Otherwise, the search is called
unsuccessful.
Linear Search and Binary Search are the two popular searching techniques. Here we will discuss
the Binary Search Algorithm.
Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an element in a list using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach, in which the list is divided into two halves, and the item is compared with the middle element of the list. If a match is found, then the location of the middle element is returned. Otherwise, we search in one of the two halves, depending on the result of the comparison.
NOTE: Binary search can be implemented only on sorted array elements. If the list elements are not arranged in sorted order, we first have to sort them.
Algorithm
Binary_Search(a, lower_bound, upper_bound, val) // 'a' is the given array, 'lower_bound' is the index of the first array element, 'upper_bound' is the index of the last array element, 'val' is the value to search
Step 1: set beg = lower_bound, end = upper_bound, pos = -1
Step 2: repeat steps 3 and 4 while beg <= end
Step 3: set mid = (beg + end)/2
Step 4: if a[mid] = val
            set pos = mid
            print pos
            go to step 6
        else if a[mid] > val
            set end = mid - 1
        else
            set beg = mid + 1
        [end of if]
[end of loop]
Step 5: if pos = -1
            print "value is not present in the array"
        [end of if]
Step 6: exit
To understand the working of the binary search algorithm, let's take a sorted array. It will be easy to understand the working of binary search with an example.
There are two methods to implement the binary search algorithm -
o Iterative method
o Recursive method
The recursive method of binary search follows the divide and conquer approach.
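Since the code sections of this article show only the recursive version, here is a brief sketch of the iterative method. The class name is chosen for this illustration, and the 1-based return convention is kept to match the style of the other examples:

```java
public class IterativeBinarySearch {
    // Classic iterative binary search on a sorted array; returns the
    // 1-based position of 'val', or -1 if it is not present.
    public static int search(int[] a, int val) {
        int beg = 0, end = a.length - 1;
        while (beg <= end) {
            int mid = beg + (end - beg) / 2; // avoids overflow of (beg+end)/2
            if (a[mid] == val)
                return mid + 1;              // found: report 1-based position
            else if (a[mid] < val)
                beg = mid + 1;               // search the right half
            else
                end = mid - 1;               // search the left half
        }
        return -1;                           // search space exhausted
    }

    public static void main(String[] args) {
        int[] a = {10, 12, 24, 29, 39, 40, 51, 56, 70};
        System.out.println(search(a, 39)); // position 5
        System.out.println(search(a, 41)); // -1 (not present)
    }
}
```

The iterative form does the same halving as the recursive one but uses O(1) extra space, since no call stack builds up.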
Let the elements of array are -
We have to use the below formula to calculate the mid of the array -
mid = (beg + end)/2
In the given array, beg = 0 and end = 8, so mid = (0 + 8)/2 = 4.
Now, the element to be searched is found, so the algorithm will return the position of the matched element.
Now, let's see the time complexity of Binary search in the best case, average case, and worst
case. We will also see the space complexity of Binary search.
1. Time Complexity
o Best Case Complexity - In Binary search, best case occurs when the element to search is
found in first comparison, i.e., when the first middle element itself is the element to be
searched. The best-case time complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search
is O(logn).
o Worst Case Complexity - In Binary search, the worst case occurs, when we have to
keep reducing the search space till it has only one element. The worst-case time
complexity of Binary search is O(logn).
2. Space Complexity
The space complexity of the iterative binary search is O(1); the recursive implementation uses O(logn) space for the call stack.
Now, let's see the programs of binary search in different programming languages.
#include <stdio.h>
int binarySearch(int a[], int beg, int end, int val)
{
    int mid;
    if(end >= beg)
    {
        mid = (beg + end)/2;
        /* if the item to be searched is present at the middle */
        if(a[mid] == val)
        {
            return mid+1;
        }
        /* if the item to be searched is greater than the middle element, it can only be in the right subarray */
        else if(a[mid] < val)
        {
            return binarySearch(a, mid+1, end, val);
        }
        /* if the item to be searched is smaller than the middle element, it can only be in the left subarray */
        else
        {
            return binarySearch(a, beg, mid-1, val);
        }
    }
    return -1;
}
int main() {
    int a[] = {11, 14, 25, 30, 40, 41, 52, 57, 70}; // given array
    int val = 40; // value to be searched
    int n = sizeof(a) / sizeof(a[0]); // size of array
    int res = binarySearch(a, 0, n-1, val); // Store result
    printf("The elements of the array are - ");
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\nElement to be searched is - %d", val);
    if (res == -1)
        printf("\nElement is not present in the array");
    else
        printf("\nElement is present at %d position of array", res);
    return 0;
}
Output
#include <iostream>
using namespace std;
int binarySearch(int a[], int beg, int end, int val)
{
    int mid;
    if(end >= beg)
    {
        mid = (beg + end)/2;
        /* if the item to be searched is present at the middle */
        if(a[mid] == val)
        {
            return mid+1;
        }
        /* if the item to be searched is greater than the middle element, it can only be in the right subarray */
        else if(a[mid] < val)
        {
            return binarySearch(a, mid+1, end, val);
        }
        /* if the item to be searched is smaller than the middle element, it can only be in the left subarray */
        else
        {
            return binarySearch(a, beg, mid-1, val);
        }
    }
    return -1;
}
int main() {
    int a[] = {10, 12, 24, 29, 39, 40, 51, 56, 70}; // given array
    int val = 51; // value to be searched
    int n = sizeof(a) / sizeof(a[0]); // size of array
    int res = binarySearch(a, 0, n-1, val); // Store result
    cout<<"The elements of the array are - ";
    for (int i = 0; i < n; i++)
        cout<<a[i]<<" ";
    cout<<"\nElement to be searched is - "<<val;
    if (res == -1)
        cout<<"\nElement is not present in the array";
    else
        cout<<"\nElement is present at "<<res<<" position of array";
    return 0;
}
Output
using System;
class BinarySearch {
    static int binarySearch(int[] a, int beg, int end, int val)
    {
        int mid;
        if(end >= beg)
        {
            mid = (beg + end)/2;
            /* if the item to be searched is present at the middle */
            if(a[mid] == val)
            {
                return mid+1;
            }
            /* if the item to be searched is greater than the middle element, it can only be in the right subarray */
            else if(a[mid] < val)
            {
                return binarySearch(a, mid+1, end, val);
            }
            /* if the item to be searched is smaller than the middle element, it can only be in the left subarray */
            else
            {
                return binarySearch(a, beg, mid-1, val);
            }
        }
        return -1;
    }
    static void Main() {
        int[] a = {9, 11, 23, 28, 38, 45, 50, 56, 70}; // given array
        int val = 70; // value to be searched
        int n = a.Length; // size of array
        int res = binarySearch(a, 0, n-1, val); // Store result
        Console.Write("The elements of the array are - ");
        for (int i = 0; i < n; i++)
        {
            Console.Write(a[i] + " ");
        }
        Console.WriteLine();
        Console.WriteLine("Element to be searched is - " + val);
        if (res == -1)
            Console.WriteLine("Element is not present in the array");
        else
            Console.WriteLine("Element is present at " + res + " position of array");
    }
}
Output
Program: Write a program to implement Binary search in Java.
class BinarySearch {
    static int binarySearch(int a[], int beg, int end, int val)
    {
        int mid;
        if(end >= beg)
        {
            mid = (beg + end)/2;
            /* if the item to be searched is present at the middle */
            if(a[mid] == val)
            {
                return mid+1;
            }
            /* if the item to be searched is greater than the middle element, it can only be in the right subarray */
            else if(a[mid] < val)
            {
                return binarySearch(a, mid+1, end, val);
            }
            /* if the item to be searched is smaller than the middle element, it can only be in the left subarray */
            else
            {
                return binarySearch(a, beg, mid-1, val);
            }
        }
        return -1;
    }
    public static void main(String args[]) {
        int a[] = {8, 10, 22, 27, 37, 44, 49, 55, 69}; // given array
        int val = 37; // value to be searched
        int n = a.length; // size of array
        int res = binarySearch(a, 0, n-1, val); // Store result
        System.out.print("The elements of the array are: ");
        for (int i = 0; i < n; i++)
        {
            System.out.print(a[i] + " ");
        }
        System.out.println();
        System.out.println("Element to be searched is: " + val);
        if (res == -1)
            System.out.println("Element is not present in the array");
        else
            System.out.println("Element is present at " + res + " position of array");
    }
}
Output
<?php
function binarySearch($a, $beg, $end, $val)
{
    if($end >= $beg)
    {
        $mid = floor(($beg + $end)/2);
        /* if the item to be searched is present at the middle */
        if($a[$mid] == $val)
        {
            return $mid+1;
        }
        /* if the item to be searched is greater than the middle element, it can only be in the right subarray */
        else if($a[$mid] < $val)
        {
            return binarySearch($a, $mid+1, $end, $val);
        }
        /* if the item to be searched is smaller than the middle element, it can only be in the left subarray */
        else
        {
            return binarySearch($a, $beg, $mid-1, $val);
        }
    }
    return -1;
}
$a = array(7, 9, 21, 26, 36, 43, 48, 54, 68); // given array
$val = 68; // value to be searched
$n = sizeof($a); // size of array
$res = binarySearch($a, 0, $n-1, $val); // Store result
echo "The elements of the array are: ";
for ($i = 0; $i < $n; $i++)
    echo " " , $a[$i];
echo "<br>" , "Element to be searched is: " , $val;
if ($res == -1)
    echo "<br>" , "Element is not present in the array";
else
    echo "<br>" , "Element is present at " , $res , " position of array";
?>