
DATA STRUCTURES THROUGH C++

PROBLEMS AND SOLUTIONS

Published by Rashmi
Version 1.0
Copyright © 2020
Author: Rashmi
Preface
The author teaches Computer Science and Engineering at engineering colleges. This book was written for the following reasons.
While teaching, whenever questions were asked on any topic of Data Structures, many students were confused about the right answer. In order to share that knowledge, I have compiled these questions and illustrated the Data Structure implementations through C++ in this book.
The importance of Data Structures is well known in various engineering fields. The book is structured to cover the key aspects of the subject. It explains complicated concepts logically and presents each topic in a step-by-step manner. Each chapter is well supported with the necessary illustrations, practical examples and solved problems. This makes the subject clearer to the student and more interesting to study. This book will be useful to students as well as other learners at the graduate and postgraduate levels in different universities.
The programs have been tested, and their actual output is reproduced in this book.
This book will receive periodic updates.

Good Luck!
Table of Contents
1. What is an Algorithm?
1.1 What are the properties of algorithm?
1.2 How to write an Algorithm?
1.3 What is Algorithm Analysis and its Complexities?
1.4 What is the Efficiency of Algorithm?
1.5 What is Best Case, Worst Case and Average Case Analysis?
2. What is a Data Structure?
2.1 What are the Types of Data Structures?
2.1.1 What are Primitive Data Structures?
2.1.2 What are Non-Primitive Data Structures?
2.2 What are the Operations Performed on Data Structures?
3. What is an Array?
3.1 What are the Operations Performed on Arrays?
4. What is Stack ADT?
4.1 What are the Operations Performed on the Stack?
4.1.1 What is the Algorithm for Push Operation on Stacks?
4.1.2 What is the Algorithm for Pop Operation on Stacks?
4.2 What are the Applications Performed on Stack?
4.3 How to Implement Stack Using Arrays?
5. What is Queue ADT?
5.1 What are the Operations Performed on Queue?
5.1.1 What is the Algorithm for Enqueue Operation on Queue?
5.1.2 What is the Algorithm for Dequeue Operation on Queue?
5.2 What are the Applications of Queues?
5.3 How to Implement Queue Using Arrays?
6. How to Evaluate the Expressions?
6.1 How to Evaluate a Postfix Expression?
6.2 How to Convert Infix to Postfix Expressions?
7. What are Linked Lists?
7.1 What are Single Linked Lists and Chains?
7.1.1 What are the basic operations performed on a single linked list?
7.1.2 How to Delete a Node in a Singly Linked List?
7.1.3 How to Traverse and Display a Singly Linked List?
7.2 What are Circular Lists?
7.2.1 How to Implement Circular Linked List?
7.2.2 How to perform the basic operations on a Circular Linked List?
7.3 What are Linked Stacks?
7.3.1 How to implement Push operation on Linked Stacks?
7.3.2 How to implement Pop operation on Linked Stacks?
7.4 What are Linked Queues?
7.4.1 How to implement Enqueue operation on Linked Queues?
7.4.2 How to implement Dequeue operation on Linked Queues?
7.5 What is a Polynomial?
7.5.1 How to Represent the Polynomials using Linked Lists?
7.5.2 What is Polynomial Addition?
7.5.3 What is Circular Representation of Polynomials?
7.6 What are Equivalence Classes?
7.7 What is a Sparse Matrix?
7.7.1 How to represent Sparse Matrix?
7.8 What are Doubly Linked Lists?
7.8.1 What are the basic operations performed on a Doubly Linked List?
7.9 What are Generalized Lists?
7.10 What are Recursive Algorithms for Lists?
7.10.1 What are Reference Count, Shared and Recursive Lists?
8. What are Trees?
8.1 What is the Terminology of Tree Data Structure?
8.2 What are the Representation of Trees?
8.3 What are Binary Trees?
8.3.1 What are the Properties of Binary Trees?
8.3.2 What are the Types of Binary Trees?
8.3.3 How to Represent the Binary Trees?
8.3.4 What are Binary Tree Traversals?
8.4 What are Expression Trees?
8.5 What are Threaded Binary Trees?
8.5.1 What are Threads?
8.5.2 What is Inorder Traversal of a Threaded Binary Tree?
8.5.3 How to Insert a node into a Threaded Binary Tree?
8.6 What are Heaps?
8.6.1 What are Priority Queues?
8.6.2 What are different types of Heaps?
8.7 What are Binary Search Trees?
8.7.1 What is the search operation in Binary Search Tree?
8.7.2 How to insert into a Binary Search Tree?
8.7.3 How to delete from Binary Search Tree?
8.7.4 How to Join two Binary Search Trees?
8.7.5 How to Split Binary Search Trees?
8.7.6 How to find the Height of Binary Search Tree?
9. What is a Graph?
9.1 What is the Graph Abstract Data Type?
9.2 What are the different Types of Graphs?
9.3 What are the Properties of Graph?
9.4 How to Represent a Graph?
9.4.1 What is Adjacency Matrix Representation of Graphs?
9.4.2 What is Adjacency List Representation of Graphs?
9.4.3 What is Adjacency Multilist Representation of Graphs?
9.5 What are Elementary Graph Operations?
9.5.1 What is Breadth First Search?
9.5.2 What is Depth First Search?
9.6 What are Connected Components?
9.7 What are Spanning Trees?
9.8 What are Biconnected Components?
9.9 What are Minimum Cost Spanning Trees?
9.9.1 What is Kruskal’s Algorithm?
9.9.2 What is Prim’s Algorithm?
9.9.3 What is Sollin's Algorithm?
9.10 What is the Shortest Path and Transitive Closure?
9.11 What is Single Source/All Destinations by Dijkstra’s Algorithm?
9.12 What is All Pairs Shortest Path?
10. What is Sorting?
10.1 What is Insertion Sort?
10.2 What is Quick Sort?
10.3 What is Merge Sort?
10.4 What is Heap Sort?
10.5 What is Radix Sort?
10.6 What is Selection Sort?
10.7 What is Bubble Sort?
Laboratory Work on Data Structures
Experiment 1: How to implement Multistack in a Single Array through C++
Experiment 2: How to implement Circular Queue through C++
Experiment 3: How to implement Singly Linked List through C++
Experiment 4: How to implement Doubly Linked List through C++
Experiment 5: How to implement Binary Search Tree through C++
Experiment 6: How to implement Heaps through C++
Experiment 7: How to implement Breadth First Search Technique through C++
Experiment 8: How to implement Depth First Search Technique through
C++
Experiment 9: How to implement Prim’s Algorithm through C++
Experiment 10: How to implement Dijkstra’s Algorithm through C++
Experiment 11: How to implement Kruskal’s Algorithm through C++
Experiment 12: How to implement Merge Sort through C++
Experiment 13: How to implement Quick Sort through C++
Experiment 14: How to implement Data Searching using Divide and
Conquer Technique through C++
1. What is an Algorithm?
Definition of Algorithm: An algorithm is defined as a collection of unambiguous instructions occurring in some specific sequence; such an algorithm should produce the output for a given set of inputs in a finite amount of time.
After understanding the problem statement we have to create an algorithm for the given problem. The algorithm is then expressed in some programming language and given to a computing device. The computer then executes this algorithm, which is actually submitted in the form of a source program. During the process of execution it requires some set of inputs. With the help of the algorithm (in the form of a program) and the input set, the result is produced as the output. If the given input is invalid, the program should raise an appropriate error message; otherwise the correct output will be produced.

Fig. 1.1: Notion of algorithm

1.1 What are the properties of algorithm?


Simply writing a sequence of instructions is not sufficient to accomplish a task. An algorithm is required to have the following properties.
1. Non-ambiguity: Each step in an algorithm should be non-ambiguous, which means each instruction should be clear and precise. No instruction in the algorithm should denote any conflicting meaning. This property also indicates the effectiveness of the algorithm.
2. Range of Input: The range of input should be specified. An algorithm is input driven, and if the range of the input is not specified the algorithm can go into an infinite loop.
3. Multiplicity: The same algorithm can be represented in several different ways. That means we can write it in simple English as a sequence of instructions, or we can write it in the form of pseudocode. For the same problem we can write different algorithms to solve it. For example, if we want to search for an element in a given list of values, we can use either the "sequential search method" or the "binary search method" in an algorithm, since "searching" is the task performed here.
4. Speed: Algorithms are written using some specific idea (properly known as the logic of the algorithm). Such algorithms should be efficient and should produce the output quickly.
5. Finiteness: The algorithm should be finite, which means that after performing the required operations it should terminate.

1.2 How to write an Algorithm?


An algorithm is basically a sequence of instructions written in simple English. An algorithm is divided into two sections:
1. Algorithm heading: It consists of the name of the algorithm, the problem description, and the input and output.
2. Algorithm body: It consists of the logical body of the algorithm, making use of various programming constructs and assignment statements.
Let us understand some rules for writing an algorithm.
1. An algorithm is a procedure consisting of a heading and a body. The heading consists of the keyword Algorithm, the name of the algorithm, and the parameter list. The syntax is shown below.
Algorithm Name(parameter list)

2. Then, in the heading section, we should write the following:
//Problem Description:
//Input:
//Output:

3. Then the body of the algorithm is written, in which various programming constructs like if, for, while, or assignment statements may appear.
4. Compound statements should be enclosed within { and } brackets.
5. Single-line comments are written using // at the beginning of the comment.
6. An identifier should begin with a letter, not a digit. An identifier can be an alphanumeric string. It is not necessary to write data types explicitly for identifiers; the type will be clear from the context itself. The basic data types include integer, float, char, Boolean, etc. A pointer type points to a certain memory location. Compound data types such as structures or records can also be used.

7. An assignment statement is written using the assignment operator ←.
For instance:
variable ← expression

8. There are other types of operators, such as the Boolean values true and false, logical operators such as and, or, not, and relational operators such as <, >, >=, <=, =, ≠.
9. Array indices are written within the square brackets [ and ]. The index of an array usually starts at zero. Multidimensional arrays can also be used in an algorithm.
10. Input and output can be done using read and write.
For example:
write("This message will be displayed on console");
read(val);

11. Conditional statements such as if-then or if-then-else are written in the following form:
if (condition) then statement
if (condition) then statement else statement
If the body of the if-then statement is a compound statement, then { and } should be used to enclose the block.

12. The while statement can be written as:
while (condition) do
{
Statement 1;
Statement 2;
...
Statement n;
}
While the condition is true, the block enclosed within { } gets executed; otherwise the statement after } is executed.

13. The general form of the for loop is:
for variable ← value1 to valuen do
{
Statement 1;
Statement 2;
...
Statement n;
}
Here, value1 is the initialization condition and valuen is the terminating condition. Sometimes the keyword step is used to denote the increment or decrement of the variable's value for each pass of the loop, for example:
for i ← 1 to n step 1
{
write(i);
}

14. The break statement is used to exit the innermost loop. The return statement is used to return control from one point to another, generally while exiting from a function.
The statements in an algorithm execute in sequential order, i.e., in the same order as they appear, one after the other.
Example 1: Write an algorithm to check whether a given number is even or odd.
Algorithm eventest(val)
//Problem Description: This algorithm tests whether a given
//number is even or odd
//Input: The number to be tested, i.e., val
//Output: An appropriate message indicating evenness or oddness
if(val%2==0) then
write("Given number is even")
else
write("Given number is odd")
Example 2: Write an algorithm to find the factorial of a number n.
Algorithm fact(n)
//Problem Description: This algorithm finds the factorial
//of a given number n
//Input: The number n whose factorial is to be calculated
//Output: The factorial value of the given number n
if(n<=1) then
return 1;
else
return n*fact(n-1);
1.3 What is Algorithm Analysis and its Complexities?
The analysis of a program does not simply mean checking that the program works; it means checking whether the program works for all possible situations. Analysis also measures the efficiency of the program. Efficiency here means:

1. The program requires less storage space.
2. The program executes in less time.
Time and space are the factors that determine the efficiency of a program. The execution time of a program cannot be computed in seconds, because it depends on the following factors:
1. The hardware of the machine.
2. The amount of time required by each machine instruction.
3. The amount of time required by the compiler to translate the instructions.
4. The instruction set.
Hence the time required for a particular program to execute is measured as the total number of times its statements get executed. The number of times a statement executes is known as its frequency count.
Consider the following statement.
y=y+1
The statement will execute only once. Hence frequency count of this
statement is said to be 1.
Again consider,
if(a<5)
{
b=10;
}
The above if statement will execute only once. Hence again the frequency count of the above code is 1.
Now consider a code fragment containing a loop that runs five times, whose statements execute 1, 6, 5 and 5 times respectively. The total frequency count of such code is:
1 + 6 + 5 + 5 = 17
Now let us consider the execution of a for loop:

for (i = 1; i <= n; i++)
    printf(...);

Total frequency count is 1 + (n+1) + n + n = 3n+2.

In the above example, i=1 executes once. The test i<=n will be executed n+1 times: n times when i is really less than or equal to n (i.e., when the condition is true) and one more time when i becomes greater than n (i.e., when the condition is false). The statement i++ is executed n times. Similarly, the printf statement will be executed n times.
Let us consider one more example, in which one for loop is nested inside another:

for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        printf(...);

The frequency count of the nested loops works out as

(1 + (n+1) + n) × [1 + (n+1) + n + n]
= (2n+2)(3n+2)
= 6n² + 10n + 4
Now we can compute the time complexity from the frequency count very easily. Suppose the frequency count is 3n+2. Then we simply neglect all the constants and specify the time complexity in big-oh notation. Hence for a frequency count of 3n+2 we get O(n) as the time complexity. Similarly, if the frequency count is 6n² + 10n + 4, then the time complexity is O(n²).
1.4 What is the Efficiency of Algorithm?
Suppose we have two algorithms that perform the same task, and the first algorithm has a computing time of O(n) and the second of O(n²); then we usually prefer the first one.
The reason is that as n increases, the time required for the execution of the second algorithm grows much faster than the time required for the execution of the first one.
Let us look at the values of these computing functions for some constant values of n.
n      log₂n    n·log₂n    n²      n³       2ⁿ
1      0        0          1       1        2
2      1        2          4       8        4
4      2        8          16      64       16
8      3        24         64      512      256
16     4        64         256     4096     65536

So, the rate of growth of the common computing time functions is in the order shown below:
O(log₂n) < O(n) < O(n·log₂n) < O(n²) < O(n³) < O(2ⁿ)

1.5 What is Best Case, Worst Case and Average Case Analysis?
If an algorithm takes the minimum amount of time to run to completion for a specific set of inputs, that is called its best case time complexity.
For example: while searching for a particular element using sequential search, if we get the desired element in the first place itself, that is the best case.
If an algorithm takes the maximum amount of time to run to completion for a specific set of inputs, that is called its worst case time complexity.
For example: while searching for a particular element using the linear search method, if we get the desired element at the end of the list, that is the worst case.
The time complexity averaged over all possible sets of inputs is called the average case time complexity.
2. What is a Data Structure?
Data structure: A data structure is a mathematical or logical way of organizing data elements in the memory of a computer so that they can be used efficiently.
or
The organization of data in memory locations for the convenient handling of the user is called a Data Structure.
Examples: arrays, stacks, queues, trees, graphs.
2.1 What are the Types of Data Structures?
Data structures are generally categorized into two classes:
1) Primitive data structures.
2) Non-primitive data structures.

Fig. 2.1: Types of Data Structures

2.1.1 What are Primitive Data Structures?


Primitive data structures are the basic data types supported by a
programming language. Example: int, float, char
2.1.2 What are Non-Primitive Data Structures?
Non-primitive data structures are the data structures created using
primitive data structures.
Example: linked lists, stacks, queues, trees, and graphs.
Non-primitive data structures can further be classified into two
categories:
1) Linear Data Structure
2) Non-linear Data Structure
2.1.2.1 What is a Linear Data Structure?
A data structure is said to be linear if its elements form a sequence.
There are basically two ways of representing such linear structure in
memory.
a) One way is to have the linear relationships between the elements
represented by means of sequential memory location. These linear
structures are called arrays.
b) The other way is to have the linear relationship between the elements
represented by means of pointers or links. These linear structures are
called linked lists.
Examples of linear (non-primitive) data structures are:
1) Arrays
2) Queues
3) Stacks
4) Linked lists
2.1.2.2 What is a Non-Linear Data Structure?
If the elements of a data structure are not linearly related, then that
structure is called a non-linear data structure.
Example: graphs, trees
2.2 What are the Operations Performed on Data Structures?
Following are the common operations that can be performed on any
data structure.
1) Traversing- It is used to access each data item exactly once so that it
can be processed.
2) Searching- It is used to find out the location of the data item if it exists
in the given collection of data items.
3) Inserting- It is used to add a new data item in the given collection of
data items.
4) Deleting- It is used to delete an existing data item from the given
collection of data items.
5) Sorting- It is used to arrange the data items in some order i.e. in
ascending or descending order in case of numerical data and in
dictionary order in case of alphanumeric data.
6) Merging- It is used to combine the data items of two sorted files into
single file in the sorted form.
3. What is an Array?
An array is a collection of homogeneous elements that occupy consecutive memory locations. An array is a static data structure and a linear data structure.
Array elements can be accessed using an index (also known as the subscript).
In C, arrays are declared using the following syntax:
Data_Type name[size];
For example: int marks[10];

Fig. 3.1: Array Representation of 10 elements

Limitations of arrays:
1. Arrays are of fixed size (static).
2. Data elements are stored in contiguous memory locations, which may not always be available.
3. Insertion and deletion of elements can be problematic because of the shifting of elements from their positions.

3.1 What are the Operations Performed on Arrays?


1) Traversing: It is used to access each data item exactly once so that it can be processed.
Example:
We have a linear array A containing the five elements 100, 200, 300, 400 and 500.
Here we will start from the beginning and go till the last element, and during this process we will access the value of each element exactly once, as below:
A[1] = 100
A[2] = 200
A[3] = 300
A[4] = 400
A[5] = 500
2) Insertion: It is used to add a new data item to the given collection of data items.
Example of insertion of an element into an array:
We have a linear array A with the following elements:
11, 21, 51, 31, 15
The new element to be inserted is 100 and the location for insertion is 3. So shift the elements from the 5th location down to the 3rd location downwards by 1 place, and then insert 100 at the 3rd location. The result is:
11, 21, 100, 51, 31, 15

3) Deletion: It is used to delete an existing data item from the given collection of data items.
Example:
We have the linear array A with the elements 11, 21, 51, 31, 15. The element to be deleted is 51, which is at the 3rd location. So shift the elements from the 4th to the 5th location upwards by 1 place.
After deletion the array will be:
11, 21, 31, 15
4) Searching: Searching is the process of identifying whether a particular value is present in an array or not. If the value is present in the array, then searching is said to be successful and the searching process gives the location of that value in the array.
However, if the value is not present in the array, the searching process displays an appropriate message, and in this case searching is said to be unsuccessful.
There are two popular methods for searching the array elements:
i. Linear search
ii. Binary search
i. Linear Search: Linear search, also called sequential search, is a very simple method used for searching an array for a particular value. It works by comparing the value to be searched with every element of the array, one by one in sequence, until a match is found; otherwise "not found" is displayed. Linear search is mostly used to search an unordered list of elements (an array in which the data elements are not sorted).
Example for Linear Search:
Consider the list: 11, 15, 21, 8, 7, 31
The element to be searched is S = 31.
Element to be searched, 31 == 11 (False)
Element to be searched, 31 == 15 (False)
Element to be searched, 31 == 21 (False)
Element to be searched, 31 == 8 (False)
Element to be searched, 31 == 7 (False)
Element to be searched, 31 == 31 (True): successful. The search element is found at index 5, which is at location 6.
Algorithm for Linear Search:
Algorithm LinearSearch(S)
begin
int pos, i, flag = 0;
for i = 0 to n-1 do
if (S == a[i]) then
pos = i;
flag = 1;
break;
end if
end for
if (flag == 1) then
print element found at pos
else
print element not found
end if
end
PROGRAM for Linear Search:
#include<stdio.h>
#include<conio.h>
int main()
{
int i,n,S, a[100],flag=0;
printf("\nEnter the no of elements in the list: ");
scanf("%d",&n);
for(i=0;i<n;i++)
{
printf("\nEnter a[%d] element : ",i);
scanf("%d",&a[i]);
}
printf("\nEnter element to be searched: ");
scanf("%d",&S);
for(i=0;i<n;i++)
{
if(S==a[i])
{
flag=1;
break;
}
}
if(flag==1)
printf("\nElement %d Found at %d location: ",S,i+1);
else
printf("\n %d is Not Found",S);
return 0;
}
Complexity of the Linear Search Algorithm:
Best case: The best case of linear search occurs when the search element is equal to the first element of the array. In this case only one comparison is made, so the best case time complexity is O(1).
Worst case: The worst case occurs if the search element is the last element of the list or if the element is not present in the list. In both cases n comparisons are required, so the time complexity is O(n).
Average case: On average about half the list is examined, so the time complexity is O(n).

ii. Binary Search: Binary search is an extremely efficient algorithm compared to linear search. The binary search technique searches for "data" in the minimum possible number of comparisons. Binary search can only be applied to a sorted list.
Apply the following steps to search for "data":
1. Find the middle element of the array (i.e., if the array or sub-array contains n elements, n/2 gives the middle element).
2. Compare the middle element with the data to be searched; then there are the following three cases:
a) If it is the desired element, then the search is successful.
b) If the middle element is greater than the desired data, then search only the first half of the array, i.e., the elements to the left of the middle element.
c) If the middle element is less than the desired data, then search only the second half of the array, i.e., the elements to the right of the middle element.
Repeat the same steps until the element is found or the search area is exhausted.
Example 1 for Binary Search:
In the given list of elements 11, 13, 21, 33, 51, 56, 66, 81, 99 find the
location of search element 13.
Step 1: Internally in the memory the above elements are stored as shown
below.

Step 2: The search element 13 is compared with the middle element 51 (since the middle index = (0+8)/2 = 4).
Since they do not match and the search element 13 is smaller than 51, we search only in the left sublist (i.e., 11, 13, 21, 33).
Step 3: The search element 13 is compared with the middle element 13 (since the middle index = (0+3)/2 = 1).
The middle element and the element to search are the same, i.e., 13, which is present at index 1.
Example 2 for Binary Search:
In the given list of elements 11, 13, 21, 33, 51, 56, 66, 81, 99 find the
location of search element 81.
Step 1: Internally in the memory the above elements are stored as shown
below.

Step 2: The search element 81 is compared with the middle element 51 (since the middle index = (0+8)/2 = 4). Since they do not match and the search element 81 is greater than 51, we search only in the right sublist (i.e., 56, 66, 81, 99).
Step 3: The search element 81 is compared with the middle element 66 (since the middle index = (5+8)/2 = 6).
Since they do not match and 81 is larger than 66, we search only the right sublist (i.e., 81 and 99) (since the middle index = (7+8)/2 = 7).
Finally, the search element and the element present are equal, i.e., 81, which is present at index 7.
Algorithm for Binary Search:
Algorithm Binsearch(a, low, high, key)
{
// initially low := 0; high := n-1;
if (low > high) then
return -1; // key not present
else
{
// reduce the problem into a smaller one
mid := (low + high)/2;
if (key == a[mid]) then return mid;
else if (key < a[mid]) then
return Binsearch(a, low, mid-1, key);
else return Binsearch(a, mid+1, high, key);
}
}
PROGRAM for Binary Search without Recursion:
//Implementation of Binary Search without using recursion
#include<stdio.h>
#include<conio.h>
int main()
{
int i,S,a[100],n,flag=0,st,end,mid;
printf("\nEnter the no of elements in the list: ");
scanf("%d",&n);
printf("\nEnter the sorted list: ");
for(i=0;i<n;i++)
{
printf("\nEnter a[%d] element : ",i);
scanf("%d",&a[i]);
}
printf("\nEnter element to be searched: ");
scanf("%d",&S);
st=0,end=n-1;
while(st<=end)
{
mid=(st+end)/2;
if(S==a[mid])
{
flag=1;
printf("\n%d found at %d location:",S,mid+1);
break;
}
else if(S>a[mid])
st=mid+1;
else
end=mid-1;
}
if(flag==0)
printf("\n %d Not Found: ",S);
return 0;
}
PROGRAM for Binary Search using Recursion:
//Implementation of Binary Search using recursion
#include<stdio.h>
#include<conio.h>
#define MAX 20
int a[MAX],n;
void bubble()
{
int t,i,j;
for(i=0;i<n;i++)
{
for(j=0;j<n-1-i;j++)
{
if(a[j]>a[j+1])
{
t=a[j];
a[j]=a[j+1];
a[j+1]=t;
}
}
}
}
int binary(int l,int u, int key)
{
int mid;
if(l<=u)
{
mid=(l+u)/2;
if(key==a[mid])
return mid;
else if(key>a[mid])
return binary(mid+1,u,key);
else
return binary(l,mid-1,key);
}
return -1;
}
void main()
{
int i,key,pos;
clrscr();
printf("\nEnter the no of elements to sort: ");
scanf("%d",&n);
for(i=0;i<n;i++)
{
printf("\nEnter a[%d] element : ",i);
scanf("%d",&a[i]);
}
bubble();
printf("\nAfter rearange the element in order: \n");
for(i=0;i<n;i++)
printf(" %d",a[i]);
printf("\nEnter element for search: ");
scanf("%d",&key);
pos=binary(0,n-1,key);
if(pos!=-1)
printf("\n%d found at %d location:",key,++pos);
else
printf("\n %d Not Found: ",key);
getch();
}
Complexity of the Binary Search Algorithm:
Best case: The best case of binary search occurs when the search element is equal to the middle element of the array. In this case only one comparison is made, so the best case time complexity is O(1).
Worst case: In this case a maximum of log n comparisons are required, so the time complexity is O(log n).
Average case: The time complexity is O(log n).
5) Sorting:
Sorting means arranging the elements of an array either in ascending order or in descending order. That is, if A is an array of N elements, then the elements of A are arranged in sorted (ascending) order in such a way that A[0] ≤ A[1] ≤ A[2] ≤ ... ≤ A[N-1].
Example: int A[] = {21, 34, 11, 9, 1, 0, 22};
Then the sorted array (ascending order) is: A[] = {0, 1, 9, 11, 21, 22, 34};
Different techniques for sorting are:

i. Selection Sort
ii. Bubble Sort
iii. Insertion Sort
iv. Quick Sort
v. Merge Sort
vi. Radix Sort
All the sorting techniques will be discussed in the last chapter.
4. What is Stack ADT?
STACK AS ABSTRACT DATATYPE (ADT):
A stack is a data structure in which the addition of a new element and the deletion of an existing element always take place at the same end. This end is known as the top of the stack. When an item is added to a stack the operation is called push, and when an item is removed from the stack the operation is called pop. A stack is also called a Last-In-First-Out (LIFO) list. It means that the last element that is inserted will be the first element to be removed from the stack.

Fig. 4.1 Stack Data Structure

Abstract Datatype Stack


{
instances:
Linear list of elements, one end is called top and other end is called bottom.
operations:
empty()– returns true if stack is empty otherwise false
size()– returns the number of elements in the stack
top()– returns top element of the stack
push(x)– add element x at the top of the stack
pop()– remove top element from the stack
}
4.1 What are the Operations Performed on the Stack?
Representation of stacks (operations performed on stacks):
There are two possible operations performed on a stack. They are push
and pop.
1) Push: Allows adding an element at the top of the stack.
2) Pop: Allows removing an element from the top of the stack.
4.1.1 What is the Algorithm for Push Operation on Stacks?
Algorithm for PUSH Operation:
1. Check for stack overflow: if top = max_stacksize - 1 then write overflow and exit
2. read item
3. set top = top + 1
4. set stack[top] = item
5. exit
Example for inserting elements into stack:

Fig. 4.2: Adding elements into the stack


If the elements are added continuously to the stack using the push
operation then the stack grows at one end. Initially when the stack is empty
the top = -1. The top is a variable which indicates the position of the
topmost element in the stack.
4.1.2 What is the Algorithm for Pop Operation on Stacks?
Algorithm for POP Operation on Stacks:
On deletion of an element the stack shrinks at the same end, as the
element at the top gets removed.
1. Check for stack underflow: if top = -1 then write underflow and exit
2. set item = stack[top]
3. set top = top - 1
4. write deleted item
5. exit

Fig 4.3: Deleting elements from the Stack

4.2 What are the Applications Performed on Stack?


Applications of Stacks:
1) Stack is used by compilers to check for balancing of parentheses,
brackets and braces.
2) Stack is used to evaluate a postfix expression.
3) Stack is used to convert an infix expression into postfix/prefix form.
4) In recursion, all intermediate arguments and return values are stored
on the processor’s stack.
5) During a function call the return address and arguments are pushed
onto a stack and on return they are popped off.
4.3 How to Implement Stack Using Arrays?
Implementation of Stacks using Arrays:
Stacks can be implemented by using arrays or linked lists. If arrays
are used for implementing stacks, it is very easy to manage them. But the
problem with an array is that we are required to declare the size of the
array before using it in a program. This means the size of the stack should
be fixed.
C++ program to illustrate stacks using arrays:
#include<iostream>
#include<cstdlib>
using namespace std;
class stack
{
int stk[5], top;
public:
stack()
{
top = -1;
}
void push(int x)
{
if(top >= 4)
{
cout<<"stack overflow";
return;
}
stk[++top] = x;
cout<<"inserted "<<x;
}
void pop()
{
if(top < 0)
{
cout<<"stack empty";
return;
}
cout<<"deleted "<<stk[top--];
}
void display()
{
if(top < 0)
{
cout<<"stack empty";
return;
}
for(int i = top; i >= 0; i--)
cout<<stk[i]<<" ";
}
};
int main()
{
int opt, ele;
stack st;
while(1)
{
cout<<"\n 1. push 2. pop 3. display 4. exit";
cout<<" enter the option";
cin>>opt;
switch(opt)
{
case 1:
cout<<" enter the element";
cin>>ele;
st.push(ele);
break;
case 2:
st.pop();
break;
case 3:
st.display();
break;
default:
exit(0);
}
}
}
5. What is Queue ADT?
QUEUE AS ABSTRACT DATATYPE (ADT):
A queue is a linear data structure that permits insertion of a new
element at one end and deletion of an element at the other end. The end at
which deletion takes place is called the front, and the end at which
insertion takes place is called the rear. Insertion into the queue is
called enqueue and deletion from it is called dequeue. The first element
that is added to the queue is the first one to be removed from the queue.
Hence the queue is referred to as a First-In-First-Out (FIFO) list.

Fig. 5.1: Queue Data Structure


Abstract Datatype Queue
{
instances:
Linear list of elements, one end is called front and other end is called rear.
operations:
empty()– returns true if queue is empty otherwise false
size()– returns the number of elements in the queue
front()– returns the first element of the queue pointed to by front
enqueue(x)– add element x at the rear of the queue
dequeue()– remove the element at the front of the queue
}
5.1 What are the Operation Performed on Queue?
Representation of Queue (operations performed on Queue):
There are two possible operations performed on a queue. They are
enqueue and dequeue.
1) enqueue: Allows inserting an element at the rear of the queue.
2) dequeue: Allows removing an element from the front of the queue.
5.1.1 What is the Algorithm for Enqueue Operation on Queue?
Algorithm for Enqueue operation (inserting an element):
1) initialize front = -1, rear = -1.
2) check overflow condition: if rear = max_size - 1 then write overflow and exit.
3) if front = -1 then set front = 0.
4) set rear = rear + 1.
5) set queue[rear] = item.
6) exit.
Example for Enqueue operation on Queues:
Let us consider a queue, which can hold maximum of five elements.
Step 1: Initially the queue is empty.

Step 2: Now, insert 11 to the queue. Then queue status will be:

Step 3: Next, insert 22 to the queue. Then the queue status is:
Step 4: Again insert another element 33 to the queue. The status of the
queue is:

Step 5: Again insert another element 44 to the queue. The status of the
queue is:

Step 6: Again insert another element 55 to the queue. The status of the
queue is:

Step 7: Again insert another element 66 to the queue. The status of the
queue is:

An element can be added to the queue only at the rear end of the queue.
Before adding an element in the queue, it is checked whether queue is full.
If the queue is full, then addition cannot take place. Otherwise, the element
is added to the end of the list at the rear side.
5.1.2 What is the Algorithm for Dequeue Operation on Queue?
Algorithm for Dequeue operation (deleting an element):
1. Check underflow condition: if front = -1 then write underflow and exit.
2. Set item = queue[front].
3. If front = rear then set front = rear = -1 (the queue is now empty)
else set front = front + 1.
4. Exit.
Example for Dequeue operation on Queues:
Step 1: The Queue is having the following situation:

Step 2: Now, delete an element 11. The element deleted is the element at the
front of the queue. So the status of the queue is:

Step 3: Now, delete an element 22. The element deleted is the element at the
front of the queue. So the status of the queue is:

Step 4: Now, delete an element 33. The element deleted is the element at the
front of the queue. So the status of the queue is:
Step 5: Now, delete an element 44. The element deleted is the element at the
front of the queue. So the status of the queue is:

Step 6: Now, delete an element 55. The element deleted is the element at the
front of the queue. So the status of the queue is:

The dequeue operation deletes the element from the front of the queue.
Before deleting an element, it is checked whether the queue is empty. If
not, the element pointed to by front is deleted from the queue and front is
made to point to the next element in the queue.
5.2 What are the Applications of Queues?
A queue is used when items have to be processed in First-In-First-Out order,
as in Breadth First Search. This property also makes the queue useful in the
following kinds of scenarios.
1) When a resource is shared among multiple consumers.
Examples include CPU scheduling, Disk Scheduling.
2) When data is transferred asynchronously (data not necessarily
received at same rate as sent) between two processes.
Examples include IO Buffers, pipes, file IO, etc.
5.3 How to Implement Queue Using Arrays?
Implementation of Queues using Arrays:
Queues can be implemented by using arrays or linked lists. If arrays
are used for implementing queues, it is very easy to manage them. But the
problem with an array is that we are required to declare the size of the
array before using it in a program. This means the size of the queue should
be fixed.
C++ program to illustrate queues using arrays:
#include<iostream>
#include<cstdlib>
using namespace std;
class queue
{
int que[5];
int front, rear;
public:
queue()
{
front = rear = -1;
}
void enqueue(int x)
{
if(rear >= 4)
{
cout<<"queue overflow";
return;
}
que[++rear] = x;
cout<<"inserted "<<x;
}
void dequeue()
{
if(front == rear)
{
cout<<"queue empty";
return;
}
cout<<"deleted "<<que[++front];
}
void display()
{
if(rear == front)
{
cout<<"queue empty";
return;
}
for(int i = front + 1; i <= rear; i++)
cout<<que[i]<<" ";
}
};
int main()
{
int opt, ele;
queue qt;
while(1)
{
cout<<"\n 1. enqueue 2. dequeue 3. display 4. exit";
cout<<" enter the option";
cin>>opt;
switch(opt)
{
case 1:
cout<<" enter the element";
cin>>ele;
qt.enqueue(ele);
break;
case 2:
qt.dequeue();
break;
case 3:
qt.display();
break;
default:
exit(0);
}
}
}
6. How to Evaluate the Expressions?
EVALUATION OF EXPRESSIONS:
Expression:
“An expression is defined as the combination of operators and operands”.
“An expression is defined as the combination of variables, constants and
operators arranged as per the syntax of the language”.
Operand is the quantity on which a mathematical operation is performed.
Operand may be a variable like x, y, z or a constant like 5, 4, 6 etc. Operator is a
symbol which performs a mathematical or logical operation between the operands.
Examples of operators include +, -, *, /, ^ etc.
An expression can be represented using three different notations.
They are infix, postfix and prefix notations:
Infix: An arithmetic expression in which we fix (place) the arithmetic operator in
between the two operands.
Example: (A + B) * (C - D)
Prefix: An arithmetic expression in which we fix (place) the arithmetic operator
before (pre) its two operands. The prefix notation is called as polish notation.
Example: * + A B – C D
Postfix: An arithmetic expression in which we fix (place) the arithmetic operator
after (post) its two operands. The postfix notation is called as suffix notation and is
also referred to reverse polish notation.
Example: A B + C D - *
The three important features of postfix expression are:
1) The operands maintain the same order as in the equivalent infix expression.
2) The parentheses are not needed to designate the expression unambiguously.
3) While evaluating the postfix expression the priority of the operators is no
longer relevant.
We consider five binary operations: +, -, *, / and $ or ↑ (exponentiation).
For these binary operations, the order of precedence (highest to lowest) is:
Operator                  Precedence      Value
exponentiation ($, ^)     highest         1
*, /                      next highest    2
+, -                      lowest          3
As programmers we write the expressions into two types. They are simple
and complex expressions. Let us consider the complex expression as follows:
x = a / b - c + d * e - a * c
Description                  Operator            Rank   Associativity
Function expression          ()                  1      Left to Right
Array expression             []
Unary plus                   +
Unary minus                  -
Increment/Decrement          ++ / --
Logical negation             !
One's complement             ~                   2      Right to Left
Pointer reference            *
Address of                   &
Size of an object            sizeof
Type cast (conversion)       (type)
Multiplication               *
Division                     /                   3      Left to Right
Modulus                      %
Addition                     +                   4      Left to Right
Subtraction                  -
Left shift                   <<                  5      Left to Right
Right shift                  >>
Less than                    <
Less than or equal to        <=                  6      Left to Right
Greater than                 >
Greater than or equal to     >=
Equality                     ==                  7      Left to Right
Not equal to                 !=
Bitwise AND                  &                   8      Left to Right
Bitwise XOR                  ^                   9      Left to Right
Bitwise OR                   |                   10     Left to Right
Logical AND                  &&                  11     Left to Right
Logical OR                   ||                  12     Left to Right
Conditional                  ?:                  13     Right to Left
Assignment                   =, *=, /=, %=,      14     Right to Left
                             +=, -=, etc.
Comma operator               ,                   15     Left to Right
In the above expression we first understand the meaning of the expression and
then the order of performing the operation. For example, a = 4, b = c = 2, d = e = 3
then the value of x is found as ((4 / 2) – 2) + (3 * 3) – (4 * 2)
= 0 + 9 - 8
= 1
Or
(4 / (2 – 2 + 3)) * (3 – 4) * 2
= (4 / 3) * (- 1) * 2
= - 2.66666
Mostly we prefer the first method because we know multiplication is performed
before addition and division is performed before subtraction. In any programming
language, we follow hierarchy of operators for evaluation of expressions. The
operator precedence is shown in the above table.
6.1 How to Evaluation of Postfix Expression?
EVALUATION OF POSTFIX EXPRESSION:
The standard representation for writing expressions is infix notation which
means that placing the operator in between the operands. But the compiler uses the
postfix notation for evaluating the expression rather than the infix notation.
It is easier to evaluate a postfix expression than an infix expression
because there are no parentheses. To evaluate an expression we scan it from
left to right. A postfix expression is evaluated easily with the use of a
stack.
When an operand is seen, it is pushed onto the stack. When an operator is seen,
the operator is applied to the two operands that are popped from the stack and the
result is pushed onto the stack. When an expression is given in postfix notation,
there is no need to know any precedence rules.
Example 1:
Evaluate the postfix expression: 6 2 / 3 - 4 2 * +
Token          Stack                    Top
               [0]    [1]    [2]
6              6                        0
2              6      2                 1
/              6/2                      0
3              6/2    3                 1
-              6/2-3                    0
4              6/2-3  4                 1
2              6/2-3  4      2          2
*              6/2-3  4*2               1
+              6/2-3+4*2                0

6.2 How to Convert Infix to Postfix Expressions?


INFIX TO POSTFIX CONVERSION:
Procedure to convert from infix expression to postfix expression is as
follows.
1) Fully parenthesize the expression.
2) Move all the binary operators so that they replace their corresponding right
parenthesis.
3) Delete all parenthesis.
Example:
a/b–c+d*e–a*c
According to step1 of the algorithm
((((a / b) – c) + (d * e)) – (a * c))
Performing the step2 and step3 gives
ab/c-de*+ac*-
Example: (simple expression)
We have simple expression a + b * c, then the postfix expression is abc*+.
The output translation of the given infix expression to postfix expression is as
follows.
Token          Stack                    Top     Output
               [0]    [1]    [2]
a                                       -1      a
+              +                        0       a
b              +                        0       ab
*              +      *                 1       ab
c              +      *                 1       abc
eos                                     -1      abc*+
In the above example, we keep stacking the operators as long as the
precedence of the operator at the top of the stack is less than that of the
incoming operator, until eos (end of string) is reached.
Example (parenthesized expression)

1. Scan the infix expression from left to right.
2. Perform the following operations.
a) If the scanned symbol is a left parenthesis, push it onto the stack.
b) If the scanned symbol is an operand, then place it directly in the
postfix expression (output).
c) If the scanned symbol is a right parenthesis, then go on popping all the
items from the stack and place them in the postfix expression till we get
the matching left parenthesis.
d) If the scanned symbol is an operator, then go on removing the operators
from the stack and place them in the postfix expression as long as the
precedence of the operator on the top of the stack is greater than (or
equal to) the precedence of the scanned operator; then push the scanned
operator onto the stack.
We have parenthesized expression a * (b + c) *d, then the postfix
expression is abc+*d*.
Token          Stack                    Top     Output
               [0]    [1]    [2]
a                                       -1      a
*              *                        0       a
(              *      (                 1       a
b              *      (                 1       ab
+              *      (      +          2       ab
c              *      (      +          2       abc
)              *                        0       abc+
*              *                        0       abc+*
d              *                        0       abc+*d
eos                                     -1      abc+*d*
Parentheses make the translation process more difficult because the
equivalent postfix expression will be parenthesis-free. The postfix form of
our example is abc+*d*. Here we stack the operators until we reach the right
parenthesis; at that point we unstack until we reach the left parenthesis.
7. What are Linked Lists?
7.1 What are Single Linked Lists and Chains?
A linked list allocates space for each element separately in its own
block of memory called a "node". The list gets an overall structure by using
pointers to connect all its nodes together. Each node contains two fields - a
"data" field to store the element, and a "next" field which is a pointer used
to connect to the next node. Each node is allocated on the heap using the
new operator and is explicitly de-allocated using the delete operator. The
front of the list is a pointer to the "start" node. The singly linked list
is also called a linear list or chain.

Fig. 7.1: Singly Linked List Representation


The beginning of the linked list is stored in a "start " pointer which
points to the first node. The first node contains a pointer to the second node.
The second node contains a pointer to the third node and so on. The last
node in the list has its next field set to NULL to mark the end of the list.
“A singly linked list is a linked list in which each node contains
only one link pointing to the next node in the list”.
Abstract DataType SlinkedList
{
instances:
finite collection of zero or more elements linked by pointers
operations:
Count( ): Count the number of elements in the list.
Addatbeg(x): Add x to the beginning of the list.
Addatend(x): Add x at the end of the list.
Insert(k, x): Insert x just after kth element.
Delete(k): Delete the kth element.
Search(x): Return the position of x in the list otherwise return -1 if not
found
Display( ): Display all elements of the list
}
Implementation of Single Linked List
Before writing the code to build the list, we need to create a start
node, used to create and access other nodes in the linked list.

Creating a structure with one data item and a next pointer, which
will be pointing to next node of the list. This is called as self-
referential structure.
Initialize the start pointer to be NULL.

struct slinklist
{
int data;
struct slinklist *next;
};
typedef struct slinklist node;
node *start = NULL;

7.1.1 What are the basic operations performed on a singly linked list?
The different operations performed on the single linked list are listed
as follows.
1) Creation
2) Insertion
3) Deletion
4) Traversing & Display
7.1.1.1 How to create a node for a Singly Linked List?
Creating a singly linked list starts with creating a node. Sufficient
memory has to be allocated for creating a node. The information is stored in
the memory, allocated by using the new() function. The function getnode(),
is used for creating a node, after allocating memory for the node, the
information for the node data part has to be read from the user and set next
field to NULL and finally return the node.
node* getnode()
{
node* newnode;
newnode = new node;
cout<<"Enter data";
cin>>newnode->data;
newnode->next = NULL;
return newnode;
}

Creating a Singly Linked List with “n” number of nodes


The following steps are to be followed to create "n" number of nodes.
1) Get the new node using getnode().
newnode=getnode();
2) If the list is empty, assign new node as start.
start = newnode;
3) If the list is not empty, follow the steps given below.

i. The next field of the new node is made to point the first
node (i.e. start node) in the list by assigning the address of
the first node.
ii. The start pointer is made to point the new node by
assigning the address of the new node.

4) Repeat the above steps "n" times.

Fig. 7.2: Singly Linked List with 4 nodes


The function createlist(), is used to create “n” number of nodes
void createlist(int n)
{
int i;
node *newnode;
node *temp;
for(i = 0; i < n ; i++)
{
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
temp = start;
while(temp -> next != NULL)
temp = temp -> next;
temp -> next = newnode;
}
}
}
7.1.1.2 How to insert a node in a Singly Linked List?
One of the most important operations that can be done in a singly
linked list is the insertion of a node. Memory is to be allocated for the
newnode before reading the data. The newnode will contain empty data
field and empty next field. The data field of the newnode is then stored with
the information read from the user. The next field of the newnode is
assigned to NULL.
The newnode can then be inserted at three different places namely:
1) Inserting a node at the beginning.
2) Inserting a node at the end.
3) Inserting a node at specified position.
7.1.1.3 How to insert a node at the beginning of Singly Linked List?
The following steps are to be followed to insert a newnode at the
beginning of the list:
1. Get the newnode using getnode() then newnode = getnode();
2. If the list is empty then
start = newnode.
3. If the list is not empty, follow the steps given below:
newnode -> next = start;
start = newnode;

Fig. 7.3: Insertion of node at the beginning of Singly Linked List


The function insert_at_beg(), is used for inserting a node at the beginning.

void insert_at_beg()
{
node *newnode;
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
newnode -> next = start;
start = newnode;
}
}
7.1.1.4 How to insert a node at the ending of the Singly Linked List?
The following steps are followed to insert a new node at the end of the
list:

1. Get the new node using getnode() then newnode = getnode();
2. If the list is empty then start = newnode.
3. If the list is not empty follow the steps given below:
temp = start;
while(temp -> next != NULL)
temp = temp -> next;
temp -> next = newnode;

Fig. 7.4: Insertion of node at the ending of the Singly Linked List
The function insert_at_end(), is used for inserting a node at the end.
void insert_at_end()
{
node *newnode, *temp;
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
temp = start;
while(temp -> next != NULL)
temp = temp -> next;
temp -> next = newnode;
}
}
7.1.1.5 How to insert a node at the specified position in the Singly
Linked List?
The following steps are followed, to insert a new node in an
intermediate position in the list:

1. Get the new node using getnode() then newnode = getnode();
2. Ensure that the specified position is in between the first node and the
last node. If not, the specified position is invalid. This is checked by
the countnode() function.
3. Store the starting address (which is in the start pointer) in the temp
and prev pointers. Then traverse the temp pointer up to the specified
position, followed by the prev pointer.
4. After reaching the specified position, follow the steps given below:
prev -> next = newnode;
newnode -> next = temp;

Fig. 7.5: Insertion of node at the specified position in Singly Linked List
The function insert_at_mid(), is used for inserting a node in the
intermediate position.
void insert_at_mid()
{
node *newnode, *temp, *prev;
int pos, nodectr, ctr = 1;
newnode = getnode();
cout<<"Enter the position";
cin>>pos;
nodectr = countnode(start);
if(pos > 1 && pos < nodectr)
{
temp = prev = start;
while(ctr < pos)
{
prev = temp;
temp = temp -> next;
ctr++;
}
prev -> next = newnode;
newnode -> next = temp;
}
else
{
cout<<"Invalid position";
}
}
7.1.2 How to delete a node in a Singly Linked List?
Another operation that can be done in a singly linked list is the
deletion of a node. Memory is to be released for the node to be deleted. A
node can be deleted from the list from three different places.
1) Deleting a node at the beginning.
2) Deleting a node at the end.
3) Deleting a node at specified position.
7.1.2.1 How to delete a node at the beginning of the Singly Linked List?
The following steps are followed, to delete a node at the beginning
of the list:

1. If list is empty then display “Empty List” message.


2. If the list is not empty, follow the steps given below:
temp = start;
start = start -> next;
delete temp;

Fig. 7.6: Deletion of node at the beginning of Singly Linked List

The function delete_at_beg(), is used for deleting the first node in the list.

void delete_at_beg()
{
node *temp;
if(start == NULL)
{
cout<<"Empty List";
return;
}
else
{
temp = start;
start = temp -> next;
delete temp;
cout<<"Node deleted";
}
}
7.1.2.2 How to delete a node at the end of the Singly Linked List?
The following steps are followed to delete a node at the end of the list:

1. If list is empty then display “Empty List‟ message.


2. If the list is not empty, follow the steps given below:
temp = prev = start;
while(temp -> next != NULL)
{
prev = temp;
temp = temp -> next;
}
prev -> next = NULL;
delete temp;

Fig. 7.7: Deletion of node at the ending of Singly Linked List


The function delete_at_last(), is used for deleting the last node in the list.
void delete_at_last()
{
node *temp, *prev;
if(start == NULL)
{
cout<<"Empty List";
return;
}
else
{
temp = start;
prev = start;
while(temp -> next != NULL)
{
prev = temp;
temp = temp -> next;
}
prev -> next = NULL;
delete temp;
cout<<"Node deleted";
}
}
7.1.2.3 How to delete a node at the specified position in a Singly Linked
List?
The following steps are followed, to delete a node from the specified
position in the list.

1. If list is empty then display "Empty List" message.
2. If the list is not empty, follow the steps given below.
if(pos > 1 && pos < nodectr)
{
temp = prev = start;
ctr = 1;
while(ctr < pos)
{
prev = temp;
temp = temp -> next;
ctr++;
}
prev -> next = temp -> next;
delete temp;
cout<<"Node deleted";
}
Fig. 7.8: Deletion of node at the specified position node in the Singly Linked List
The function delete_at_mid(), is used for deleting the specified position
node in the list.
void delete_at_mid()
{
int ctr = 1, pos, nodectr;
node *temp, *prev;
if(start == NULL)
{
cout<<"Empty List";
return;
}
else
{
cout<<"Enter position of node to delete";
cin>>pos;
nodectr = countnode(start);
if(pos > nodectr)
{
cout<<"This node does not exist";
return;
}
if(pos > 1 && pos < nodectr)
{
temp = prev = start;
while(ctr < pos)
{
prev = temp;
temp = temp -> next;
ctr++;
}
prev -> next = temp -> next;
delete temp;
cout<<"Node deleted";
}
else
{
cout<<"Invalid position";
}
}
}
7.1.3 How to Traverse and Display a Singly Linked List?
Traversal and Displaying a Singly Linked List (Left to Right):
To display the information, you have to traverse (move) a linked list,
node by node from the first node, until the end of the list is reached.
Traversing a list involves the following steps.

1. Assign the address of the start pointer to a temp pointer.
2. Display the information from the data field of each node.
The function traverse() is used for traversing and displaying the
information stored in the list from left to right.
void traverse()
{
node *temp;
temp = start;
cout<<"The contents of List (Left to Right): ";
if(start == NULL)
cout<<"Empty List";
else
{
while(temp != NULL)
{
cout<<temp -> data<<" ";
temp = temp -> next;
}
}
cout<<"X";
}
7.2 What are Circular Lists?
Circular linked list is a linked list which consists of collection of
nodes each of which has two parts, namely the data part and the next part.
The data part holds the value of the element and the next part has the
address of the next node. The last node of list has the next pointing to the
first node thus making the circular traversal possible in the list.
It is just a single linked list in which the next field of the last node
points back to the address of the first node. A circular linked list has no
beginning and no end. In circular linked list no null pointers are used, hence
all pointers contain valid address.
Fig.7.9: Circular Linked List
AbstractDataType CLinkedList
{
Instances:
finite collection of zero or more elements linked by pointers
Operations:
Count( ): Count the number of elements in the list.
Addatbeg(x): Add x to the beginning of the list.
Addatend(x): Add x at the end of the list.
Insert(k, x): Insert x just after kth element.
Delete(k): Delete the kth element.
Search(x): Return the position of x in the list otherwise return -1 if not
found
Display( ): Display all elements of the list
}
7.2.1 How to Implement Circular Linked List ?
Before writing the code to build the list, we need to create a start
node, used to create and access other nodes in the linked list.

Creating a structure with one data item and a next pointer, which
will be pointing to next node of the list. This is called as self-
referential structure.
Initialize the start pointer to be NULL.
struct clinklist
{
int data;
struct clinklist* next;
};
typedef struct clinklist node;
node *start = NULL;

7.2.2 How to perform basic operations on a Circular Linked List?
The different operations performed on the circular linked list are
listed as follows.

1. Creation
2. Insertion
3. Deletion
4. Traversing
5. Display

7.2.2.1 How to create a node for Circular Linked List?


Creating a circular linked list starts with creating a node. Sufficient
memory has to be allocated for creating a node. The information is stored in
the memory, allocated by using the new() function. The function getnode(),
is used for creating a node, after allocating memory for the node, the
information for the node data part has to be read from the user and set next
field to NULL and finally return the node.

node* getnode()
{
node* newnode;
newnode = new node;
cout<<"Enter data";
cin>>newnode -> data;
newnode -> next = NULL;
return newnode;
}
Creating a Circular Linked List with “n” number of nodes:
The following steps are to be followed to create “n” number of nodes.

1. Get the new node using getnode(). newnode = getnode();


2. If the list is empty, assign new node as start.
start = newnode;
3. If the list is not empty, follow the steps given below.
temp = start;
while(temp -> next != start)
temp = temp -> next;
temp -> next = newnode;

4. Repeat the above steps “n‟ times.


newnode -> next = start;

Fig. 7.10: Circular Linked List with 4 nodes

The function createlist(), is used to create “n” number of nodes


void createlist(int n)
{
int i;
node *newnode;
node *temp;
for(i = 0; i < n; i++)
{
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
temp = start;
while(temp -> next != start)
temp = temp -> next;
temp -> next = newnode;
}
newnode -> next = start;
}
}
7.2.2.2 How to insert a node in Circular Lists?
We can insert a node into a circular list in two ways, as shown below.
1) Inserting a node at the beginning.
2) Inserting a node at the end.
7.2.2.2.1 How to insert a node at the beginning of Circular List?
The following steps are to be followed to insert a new node at the
beginning of the circular list:

1. Get the new node using getnode(). newnode = getnode();
2. If the list is empty, assign new node as start.
start = newnode;
newnode -> next = start;
3. If the list is not empty, follow the steps given below:
last = start;
while(last -> next != start)
last = last -> next;
newnode -> next = start;
start = newnode;
last -> next = start;

Fig. 7.11: Insertion of node at the beginning of Circular Linked List

7.2.2.2.2 How to insert a node at the end of the list in Circular List?
The following steps are followed to insert a new node at the end of the
list:

1. Get the new node using getnode(). newnode = getnode();


2. If the list is empty, assign new node as start.
start = newnode;
newnode -> next = start;
3. If the list is not empty follow the steps given below:
temp = start;
while(temp -> next != start)
temp = temp -> next;
temp -> next = newnode;
newnode -> next = start;
Fig. 7.12: Insertion of node at the ending of Circular Linked List

7.2.2.3 How to delete a node in Circular Lists?
We can delete a node from a circular list in two ways, as shown below.
1) Deleting a node at the beginning.
2) Deleting a node at the end.
7.2.2.3.1 How to delete a node at the beginning of the Circular List?
The following steps are followed, to delete a node at the beginning
of the list:
1. If the list is empty, display a message “Empty List‟.
2. If the list is not empty, follow the steps given below:
last = temp = start;
while(last -> next != start)
last = last -> next;
start = start -> next;
last -> next = start;
delete temp;
3. After deleting the node, if the list is empty then start = NULL.
Fig. 7.13: Deletion of the node at the beginning of the Circular Linked List

7.2.2.3.2 How to delete a node at the ending of the Circular List?


The following steps are followed to delete a node at the end of the list:
1) If the list is empty, display a message “Empty List‟.
2) If the list is not empty, follow the steps given below:
temp = start;
prev = start;
while(temp -> next != start)
{
prev = temp;
temp = temp -> next;
}
prev -> next = start;
delete temp;
3) After deleting the node, if the list is empty then start = NULL.
7.2.2.4 How to traverse a Circular Singly Linked List?
The following steps are followed, to traverse a list from left to right:
1) If list is empty then display “Empty List‟ message.
2) If the list is not empty, follow the steps given below:
temp = start;
do
{
cout<<temp -> data;
temp = temp -> next;
} while(temp != start);
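The traversal can be sketched in C++ as follows. Collecting the values into a vector (an assumption made here so the result can be checked) keeps the do-while pattern visible:

```cpp
#include <cassert>
#include <vector>

struct tnode {
    int data;
    tnode* next;
};

// Visit every node of a circular list exactly once. A do-while is used
// because the loop condition (temp != start) is already false at the start
// for a one-node list; testing it at the top would skip the list entirely.
std::vector<int> traverse(tnode* start) {
    std::vector<int> out;
    if (start == nullptr) return out;  // empty list
    tnode* temp = start;
    do {
        out.push_back(temp->data);
        temp = temp->next;
    } while (temp != start);
    return out;
}
```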
7.3 What are Linked Stacks?
A stack is a data structure in which addition of new element or
deletion of an existing element always takes place at the same end. This end
is known as top of stack. When an item is added to a stack, the operation is
called push, and when an item is removed from the stack the operation is
called pop.
Stack is also called as Last-In-First-Out (LIFO) list which means
that the last element that is inserted will be the first element to be removed
from the stack. Stack can be implemented using linked list and the same
operations can be performed at the end of the list using top pointer.

Fig. 7.14: Linked Stack Representation


Operations performed on linked stacks:
The operations performed on a Linked Stack, or a Stack using a
Linked List, are of two types:
1) Push operation
2) Pop operation
7.3.1 How to implement Push operation on Linked Stacks?
Create a temporary node and store the value of x in the data part of
the node. Now make next part of temp point to Top and then top point to
temp. That will make the new node as the topmost element in the stack.

Fig. 7.15: Implementing push operation in the top of Stack

Algorithm for PUSH Operation:

1. temp -> data = x


2. temp -> next = top
3. top = temp
4. exit
7.3.2 How to implement Pop operation on Linked Stacks?
The data in the topmost node of the stack is first stored in a variable
called item. Then a temporary pointer is created to point to top. The top is
now safely moved to the next node below it in the stack. Temp node is
deleted and the item is returned.
Algorithm for POP Operation:
if top == NULL then
display "Stack is empty";
return
else
x = top -> data;
temp = top;
top = top -> next;
delete temp;
return x;
end if

Fig. 7.16: Implementing pop operation at the top of Stack
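The push and pop algorithms above can be sketched together in C++. The struct and function names are illustrative, not from the text:

```cpp
#include <cassert>

struct snode {
    int data;
    snode* next;
};

// Push: the new node becomes the topmost node of the stack.
void push(snode*& top, int x) {
    snode* temp = new snode{x, top};   // temp->next points to the old top
    top = temp;                        // top now points to the new node
}

// Pop: remove the topmost node and return its value through 'x'.
// Returns false when the stack is empty.
bool pop(snode*& top, int& x) {
    if (top == nullptr) return false;  // stack empty
    x = top->data;
    snode* temp = top;
    top = top->next;                   // top moves to the node below
    delete temp;
    return true;
}
```

Both operations work at the front of the list, which is what makes the linked representation naturally LIFO.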

7.4 What are Linked Queues?


Queue is a linear data structure that permits insertion of new
element at one end and deletion of an element at the other end. The end at
which the deletion of an element takes place is called front, and the end at
which the insertion of a new element takes place is called rear. Deletion of
an element at the front is called dequeue and insertion of an element at the
rear is called enqueue.
The first element that gets added into the queue is the first one to get
removed from the queue. Hence the queue is referred to as First-In-First-
Out list (FIFO). We can perform the similar operations on two ends of the
list using two pointers front pointer and rear pointer.
Operations performed on Linked Queues:
The operations performed on Linked Queues or Queues using Linked
List are:

1. Enqueue operation
2. Dequeue operation
7.4.1 How to implement Enqueue operation on Linked Queues?
In linked list representation of queue, the addition of new element to
the queue takes place at the rear end. It is the normal operation of adding a
node at the end of a list.
Algorithm for Enqueue (inserting an element):
temp = getnode();
temp -> data = x;
temp -> next = NULL;
if front == NULL then
rear = front = temp;
return
end if
rear -> next = temp;
rear = rear -> next;
7.4.2 How to implement Dequeue operation on Linked Queues?
The dequeue( ) operation deletes the first element from the front end
of the queue. Initially it is checked whether the queue is empty. If it is not
empty, the value in the node pointed to by front is returned and the front
pointer is moved to the next node.
Algorithm for Dequeue (deleting an element)
if front == NULL then
display "Queue is empty";
return
else
temp = front;
front = front -> next;
delete temp;
end if
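A minimal C++ sketch of enqueue and dequeue, using front and rear pointers as described above (the struct and function names are illustrative):

```cpp
#include <cassert>

struct qnode {
    int data;
    qnode* next;
};

// front points to the first node, rear to the last.
void enqueue(qnode*& front, qnode*& rear, int x) {
    qnode* temp = new qnode{x, nullptr};
    if (front == nullptr) {            // empty queue: both ends are the new node
        front = rear = temp;
        return;
    }
    rear->next = temp;                 // append at the rear end
    rear = temp;
}

// Remove one node from the front end; returns false if the queue is empty.
bool dequeue(qnode*& front, qnode*& rear, int& x) {
    if (front == nullptr) return false;
    qnode* temp = front;
    x = temp->data;
    front = front->next;
    if (front == nullptr) rear = nullptr;  // queue became empty
    delete temp;
    return true;
}
```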
7.5 What is a Polynomial?
A polynomial is of the form:
p(x) = cn x^n + cn-1 x^(n-1) + … + c1 x + c0
Where, ci is the coefficient of the ith term and n is the degree of the
polynomial. Some examples are:
1) 5x^2 + 3x + 1
2) 12x^3 + 4
3) 4x^6 + 10x^4 – 5x + 3
4) 5x^4 – 8x^3 + 2x^2 + 4x + 9
5) 23x^9 + 18x^7 – 41x^6 + 163x^4 – 5x + 3
Polynomial as Abstract Data Type:
The abstract data type of a polynomial is defined as follows:
class polynomial
{
instances:
p(x) = a0 + a1 x + … + an x^n, is a set of ordered pairs of <ei, ai>
where ai ϵ coefficients and ei ϵ exponents, ei are integers >= 0


operations:
polynomial();
int operator();
coefficient coef(exponent e);
exponent Leadexp();
polynomial Add(polynomial poly);
polynomial Multiply(polynomial poly);
};
7.5.1 How to Represent the Polynomials using Linked Lists?
It is not necessary to write terms of the polynomials in decreasing
order of degree. In other words the two polynomials 1 + x and x + 1 are
equivalent. The computer implementation requires implementing
polynomials as a list of pairs of coefficient and exponent. Each of these
pairs will constitute a structure, so a polynomial will be represented as a list
of structures. A linked list structure that represents polynomials 5x4 – 8x3 +
2x2 + 4x1 + 9.

Fig. 7.17: Singly Linked List for the Polynomial F(x) = 5x^4 – 8x^3 + 2x^2 + 4x + 9

Advantages for Polynomials using Linked Lists:

Save space
Easy to maintain
Do not need to allocate memory size initially
Disadvantages for Polynomials using Linked Lists:

It is difficult to back up to the start of the list


It is not possible to jump to the beginning of the list from the
end of the list
7.5.2 What is Polynomial Addition?
To add two polynomials we need to scan them once. If we find
terms with the same exponent in the two polynomials then we add the
coefficients otherwise we copy the term of larger exponent into the sum and
go on. When we reach at the end of one of the polynomial then remaining
part of the other is copied into the sum.
To add two polynomials follow the following steps:

Read two polynomials.


Add them.
Display the resultant polynomial.
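A hedged C++ sketch of the scan described above, assuming each polynomial is a linked list of (coefficient, exponent) terms kept in decreasing order of exponent (the struct and function names are assumptions):

```cpp
#include <cassert>
#include <initializer_list>

struct term {
    int coef;
    int exp;
    term* next;
};

// Add two polynomials whose terms are kept in decreasing order of exponent.
// Terms with equal exponents have their coefficients added; otherwise the
// term with the larger exponent is copied into the sum.
term* poly_add(term* a, term* b) {
    term head{0, 0, nullptr};          // dummy head simplifies appending
    term* tail = &head;
    while (a && b) {
        int coef, exp;
        if (a->exp == b->exp) {        // equal exponents: add coefficients
            coef = a->coef + b->coef; exp = a->exp;
            a = a->next; b = b->next;
        } else if (a->exp > b->exp) {  // copy the larger-exponent term
            coef = a->coef; exp = a->exp; a = a->next;
        } else {
            coef = b->coef; exp = b->exp; b = b->next;
        }
        if (coef != 0)                 // drop terms that cancel
            tail = tail->next = new term{coef, exp, nullptr};
    }
    for (term* rest : {a, b})          // copy the remainder of the longer list
        for (; rest; rest = rest->next)
            tail = tail->next = new term{rest->coef, rest->exp, nullptr};
    return head.next;
}
```

Each input list is scanned once, so the work is proportional to the total number of terms.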
7.5.3 What is Circular Representation of Polynomials?
A singly linked list in which the last node's next field is NULL is
called a chain. It is possible to free all nodes of a polynomial more
efficiently if the list structure is modified so that the last node's next points
to the first node in the list; such a list is called a circular list. The circular
list representation of the polynomial 5x^4 – 8x^3 + 2x^2 + 4x + 9 is shown
below.

Fig. 7.18: Circular Linked List for the Polynomial F(x) = 5x^4 – 8x^3 + 2x^2 + 4x + 9

If a node is no longer in use we free that node so that node can be


reused later. An efficient erase algorithm is used for freeing the nodes. The
list is maintained for the freed nodes. When we need a node, we examine
this list.
If the list is empty, we need to use the new operator to create a newnode. If
the list is not empty, then we may reuse one of its nodes.
void erase(polypointer *ptr)
{
polypointer temp;
while(*ptr)
{
temp = *ptr;
*ptr = *ptr -> next;
delete temp;
}
}
The list of free nodes is called the list of available space, the avail
list, or simply avail. Avail is a variable of type polypointer and always
points to the first node in the list. Initially, avail is set to NULL. Here,
instead of using new and delete, we use getnode() and retnode().
polypointer getnode()
{
polypointer node;
if(avail)
{
node = avail;
avail = avail -> next;
}
else
node = new polynode;
return node;
}
void retnode(polypointer node)
{
node -> next = avail;
avail = node;
}
Fig 7.19: Zero Polynomial

Fig 7.20: Non-Zero Polynomial


We can erase a circular list in a fixed amount of time, irrespective of
the number of nodes it contains. With this erase algorithm, however, we
have the problem of handling zero polynomials. To avoid special-casing the
zero polynomial, we introduce a header node into each polynomial; the
coefficient and exponent fields of the header are not relevant. The
diagrammatic representations of zero and non-zero polynomials are shown
in Fig. 7.19 and Fig. 7.20 above.
void cerase(polypointer *ptr)
{
polypointer temp;
if(*ptr)
{
temp = *ptr -> next;
*ptr -> next = avail;
avail = temp;
*ptr = NULL;
}
}
7.6 What are Equivalence Classes?
A relation ≡, over a set S, is said to be equivalence relation over S if
and only if it is reflexive, symmetric and transitive over S.
1. For any polygon x, x ≡ x, i.e., x is electrically equivalent to
itself; then ≡ is reflexive.
2. For any two polygons x and y, x ≡ y implies y ≡ x; then the
relation ≡ is symmetric.
3. For any three polygons x, y and z, x ≡ y and y ≡ z implies x ≡ z;
then the relation ≡ is transitive.
The examples of equivalence relations are numerous. For example
“equal to” relationship is an equivalence relation.
x=x
x = y implies y = x
x = y, y = z implies x = z
The equivalence relation is to partition set “S” into equivalent classes
such that the two members x and y of “S” are in the same equivalence class
if and only if x ≡ y. For example there are 12 variables numbered from 1 to
12 with pairs as follows:
1 ≡ 5, 4 ≡ 2, 7 ≡ 11, 9 ≡ 10, 8 ≡ 5, 7 ≡ 9, 4 ≡ 6, 3 ≡ 12 and 12 ≡ 1
The equivalence classes over this equivalence relation are as follows.
The 12 variables are partitioned into 3 classes:
{ 1, 3, 5, 8, 12}, { 2, 4, 6}, { 7, 9, 10, 11}
The algorithm for finding equivalence classes works in two phases. In
the first phase the equivalence pairs (i, j) are read and stored. In the second
phase, we begin at 1 and find all pairs of the form (1, j); the values 1 and j
are in the same class. By transitivity, all pairs of the form (j, k) imply that k
is in the same equivalence class.
void equivalence()
{
initialize;
while(there are more pairs)
{
read the next pair (i,j);
process this pair;
}
initialize the output;
do
{
output a new equivalence class;
}while(not done);
}
The inputs n and m represent the number of objects and the number
of related pairs. One data structure choice is an array to hold these pairs;
since arrays pose the problem of a fixed size, we use linked lists instead.
Each node will have data and next parts. We use a one-dimensional array
seq(n) of list head pointers and, for outputting the objects, a boolean array
out(i).

void equivalence()
{
initialize seq to null and out to true;
while(there are more pairs)
{
read the next pair (i,j);
put j on the seq(i) list;
put i on the seq(j) list;
}
for(i= 0; i<n; i++)
if(out(i))
{
out(i) = false;
output this equivalence class;
}
}
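The two-phase algorithm can be sketched in C++ using vectors in place of the hand-built lists (the function name, the use of std::stack, and the 0-based numbering are assumptions made for this sketch):

```cpp
#include <cassert>
#include <stack>
#include <utility>
#include <vector>

// Phase 1 stores each pair (i, j) on both adjacency lists; phase 2 sweeps the
// objects and outputs one class per unvisited object, following pairs
// transitively with an explicit stack.
std::vector<std::vector<int>>
equivalence_classes(int n, const std::vector<std::pair<int, int>>& pairs) {
    std::vector<std::vector<int>> seq(n);      // seq[i]: objects paired with i
    for (auto [i, j] : pairs) {
        seq[i].push_back(j);
        seq[j].push_back(i);
    }
    std::vector<bool> out(n, true);            // true: not yet output
    std::vector<std::vector<int>> classes;
    for (int i = 0; i < n; i++) {
        if (!out[i]) continue;
        classes.push_back({});
        std::stack<int> st;
        st.push(i);
        out[i] = false;
        while (!st.empty()) {                  // transitive closure of i
            int j = st.top(); st.pop();
            classes.back().push_back(j);
            for (int k : seq[j])
                if (out[k]) { out[k] = false; st.push(k); }
        }
    }
    return classes;
}
```

With the 12-variable example above renumbered 0..11, this produces the same three classes.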

Fig. 7.21: Equivalence Classes

7.7 What is a Sparse Matrix?


“A matrix that contains very few number of non-zero elements is
called sparse matrix”
or
“A matrix that contains more number of zero values when compared
with non-zero values is called a sparse matrix”
AbstractDatatype sparsematrix
{
instances:
a set of triples (row, col, value) where row, col are two integers and value
comes from the set item.
operations:
for all a, b ϵ sparse matrix, x ϵ item, i,j, maxcol, maxrow ϵ index
sparsematrix create(maxrow, maxcol);
sparsematrix matrixtranspose(a);
sparsematrix matrixadd(a, b);
sparsematrix matrixmultiply(a, b);
}
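A minimal sketch of the (row, col, value) triples in C++, built from a dense matrix by keeping only the non-zero elements (the struct and the name to_sparse are illustrative):

```cpp
#include <cassert>
#include <vector>

// One (row, col, value) triple per non-zero element of the matrix.
struct triple {
    int row, col, value;
};

// Build the triple representation of a dense matrix, keeping only non-zeros,
// scanned in row-major order.
std::vector<triple> to_sparse(const std::vector<std::vector<int>>& m) {
    std::vector<triple> t;
    for (int i = 0; i < (int)m.size(); i++)
        for (int j = 0; j < (int)m[i].size(); j++)
            if (m[i][j] != 0)
                t.push_back({i, j, m[i][j]});
    return t;
}
```

The space used is proportional to the number of non-zero elements rather than to rows × columns, which is the whole point of the sparse representation.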
7.7.1 How to represent Sparse Matrix?
For linked representation, we need three structures.
1. head node

2. row node

3. column node

The matrix representation for the sparse matrix is shown below for
example.

Fig. 7.22: Sparse Matrix


Fig. 7.23: Sparse Matrix Representation
In the above matrix representation there are 5 rows, 6 columns and 6
non-zero values. The linked representation is as follows:

Fig. 7.24: Linked List Representation for Sparse Matrix


7.8 What are Doubly Linked Lists?
A doubly linked list is a two-way list in which all nodes will have
two links. This helps in accessing both successor node and predecessor
node from the given node position. It provides bi-directional traversing.
Each node has three fields namely
1) Left link
2) Data
3) Right link
The left link points to the predecessor node and the right link points
to the successor node. The data field stores the required data. The beginning
of the doubly linked list is stored in a "start" pointer which points to the
first node. The first node’s left link and last node’s right link is set to
NULL.

Fig. 7.25: Doubly Linked List Representation

AbstractDataType DLinkedList
{
instances:
finite collection of zero or more elements linked by two pointers, one
pointing the previous node and the other pointing to the next node.
operations:
Count(): Count the number of elements in the list.
Addatbeg(x): Add x to the beginning of the list.
Addatend(x): Add x at the end of the list.
Insert(k, x): Insert x just after kth element.
Delete(k): Delete the kth element.
Search(x): Return the position of x in the list otherwise return -1 if not
found
Display( ): Display all elements of the list
}
Implementation of Doubly Linked List:
Before writing the code to build the list, we need to create a start
node, used to create and access other nodes in the linked list.

Creating a structure with one data item, a right pointer which
points to the next node of the list, and a left pointer which
points to the previous node. This is called a self-referential
structure.
Initialize the start pointer to be NULL.
struct dlinklist
{
struct dlinklist *left;
int data;
struct dlinklist *right;
};
typedef struct dlinklist node;
node *start = NULL;

7.8.1 What are the basic operations performed on Doubly Linked List?
The different operations performed on the doubly linked list are
listed as follows.

1. Creation
2. Insertion
3. Deletion
4. Traversal and Display
Creating a node for Doubly Linked List:
Creating a doubly linked list starts with creating a node. Sufficient
memory has to be allocated for creating a node. The information is stored in
the memory allocated by using the new operator.
The function getnode(), is used for creating a node, after allocating
memory for the node, the information for the node data part has to be read
from the user and set left and right field to NULL and finally return the
node.

node* getnode()
{
node* newnode;
newnode = new node;
cout << "Enter data: ";
cin>>newnode -> data;
newnode -> left = NULL;
newnode -> right = NULL;
return newnode;
}
7.8.1.1 How to Create a Doubly Linked List?
The following steps are to be followed to create "n" number of nodes.
1) Get the new node using getnode(). newnode = getnode();
2) If the list is empty, assign new node as start. start = newnode;
3) If the list is not empty, follow the steps given below.

i. The left field of the new node is made to point the previous
node.
ii. The previous nodes right field must be assigned with
address of the new node.
4) Repeat the above steps “n” times.

Fig. 7.26: Doubly Linked List with 3 nodes


The function createlist() is used to create "n" number of nodes.
void createlist(int n)
{
int i;
node *newnode;
node *temp;
for(i = 0; i < n ; i++)
{
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
temp = start;
while(temp -> right != NULL)
{
temp = temp -> right;
}
temp -> right = newnode;
newnode -> left = temp;
}
}
}
7.8.1.2 How to insert a node in Doubly Linked List?
One of the most important operations that can be done in a doubly
linked list is the insertion of a node. Memory is to be allocated for the
newnode before reading the data. Initially the newnode contains empty data
and link fields. The data field of the newnode is then stored with the
information read from the user, and the left and right fields of the newnode
are set to NULL. The newnode can then be inserted at three different places
namely:

1. Inserting a node at the beginning.


2. Inserting a node at the end.
3. Inserting a node at specified position.
7.8.1.2.1 How to insert a node at the beginning of Doubly Linked List?
The following steps are to be followed to insert a newnode at the
beginning of the list:
1) Get the newnode using getnode() then newnode = getnode();
2) If the list is empty then start = newnode.
3) If the list is not empty, follow the steps given below:
newnode -> right = start;
start -> left = newnode;
start = newnode;

Fig. 7.27: Insertion of node at the beginning of the Doubly Linked List
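The three steps can be sketched in C++ as follows (the struct and the name insert_begin are assumptions made for this sketch):

```cpp
#include <cassert>

struct dnode {
    dnode* left;
    int data;
    dnode* right;
};

// Insert a new node before the current first node of a doubly linked list.
// Returns the new start of the list.
dnode* insert_begin(dnode* start, int x) {
    dnode* newnode = new dnode{nullptr, x, nullptr};
    if (start == nullptr)              // empty list: new node is the whole list
        return newnode;
    newnode->right = start;            // new node precedes the old first node
    start->left = newnode;             // old first node points back to it
    return newnode;                    // new node becomes the new start
}
```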

7.8.1.2.2 How to insert a node at the ending of Doubly Linked List?


The following steps are followed to insert a new node at the end of the
list:
1) Get the new node using getnode() then newnode = getnode();
2) If the list is empty then start = newnode.
3) If the list is not empty follow the steps given below:
temp = start;
while(temp -> right != NULL)
temp = temp -> right;
temp -> right = newnode;
newnode -> left = temp;

Fig. 7.28: Insertion of node at the ending of the Doubly Linked List

7.8.1.2.3 How to insert a node at a specified position of Doubly Linked List?


The following steps are followed, to insert a new node in an
intermediate position in the list:

1. Get the new node using getnode() then newnode = getnode();


2. Ensure that the specified position is in between first node and
last node. If not, specified position is invalid. This is done by
countnode() function.
3. Store the starting address (which is in start pointer) in temp and
prev pointers. Then traverse the temp pointer upto the specified
position followed by prev pointer.
4. After reaching the specified position, follow the steps given
below:
newnode -> left = temp;
newnode ->right = temp ->right;
temp -> right ->left = newnode;
temp -> right = newnode;
Fig. 7.29: Insertion of the node at the specified position in the Doubly Linked List

7.8.1.3 How to delete a node in Doubly Linked List?


Another operation that can be done in a doubly linked list is the
deletion of a node. Memory is to be released for the node to be deleted. A
node can be deleted from the list from three different places.

1. Deleting a node at the beginning.


2. Deleting a node at the end.
3. Deleting a node at specified position.

7.8.1.3.1 How to delete a node at the beginning of Doubly Linked List?


The following steps are followed, to delete a node at the beginning
of the list:
1) If list is empty then display "Empty List" message.
2) If the list is not empty, follow the steps given below:
temp = start;
start = start -> right;
start -> left = NULL;
delete temp;
Fig. 7.30: Deletion of the node at the beginning of the Doubly Linked List

7.8.1.3.2 How to delete a node at the ending of Doubly Linked List?


The following steps are followed to delete a node at the end of the list:
1) If list is empty then display “Empty List” message.
2) If the list is not empty, follow the steps given below:
temp = start;
while(temp -> right != NULL)
{
temp = temp ->right;
}
temp -> left -> right = NULL;
delete temp;

Fig. 7.31: Deletion of the node at the ending of the Doubly Linked List

7.8.1.3.3 How to delete a node at the specified position of Doubly


Linked List?
The following steps are followed, to delete a node from the specified
position in the list.
1) If list is empty then display “Empty List” message
2) If the list is not empty, follow the steps given below.
if(pos > 1 && pos < nodectr)
{
temp = start;
ctr = 1;
while(ctr < pos)
{
temp = temp -> right;
ctr++;
}
temp -> right -> left = temp -> left;
temp -> left -> right = temp -> right;
delete temp;
}

Fig. 7.32: Deletion of the node at the specified position of the Doubly Linked List

7.8.1.4 How to traverse and display a Doubly Linked List?


TRAVERSAL AND DISPLAYING A LIST (LEFT TO RIGHT):
To display the information, you have to traverse the list, node by
node from the first node, until the end of the list is reached. The function
traverse_left_right () is used for traversing and displaying the information
stored in the list from left to right. The following steps are followed, to
traverse a list from left to right:
1) If list is empty then display “Empty List” message.
2) If the list is not empty, follow the steps given below:
temp = start;
while(temp != NULL)
{
cout<<temp -> data;
temp = temp -> right;
}
TRAVERSAL AND DISPLAYING A LIST (RIGHT TO LEFT):
To display the information from right to left, you have to traverse
the list, node by node from the first node, until the end of the list is reached.
The function traverse_right_left () is used for traversing and displaying the
information stored in the list from right to left. The following steps are
followed, to traverse a list from right to left:
1) If list is empty then display "Empty List" message.
2) If the list is not empty, follow the steps given below:
temp = start;
while(temp -> right != NULL)
temp = temp -> right;
while(temp != NULL)
{
cout<< temp -> data;
temp = temp -> left;
}
7.8.1.5 How to count the number of nodes in Doubly Linked List?
The following code counts the number of nodes that exist in the list (using
recursion).
int countnode(node *start)
{
if(start == NULL)
return 0;
else
return(1 + countnode(start ->right ));
}
7.9 What are Generalized Lists?
A generalized list is defined as follows:
"A generalized list A is a finite sequence of n >= 0 elements, a0, a1,
a2, …, an-1, where each ai is either an atom or a list. The elements ai,
0 <= i <= n-1, that are not atoms are the sublists of A".
REPRESENTATION OF GENERALIZED LISTS:
A linear list is a finite sequence of n >= 0 elements, a0, a1, a2, …, an-1,
where the elements ai are restricted to atoms. Permitting each element ai,
0 <= i <= n-1, of a linear list to have a structure of its own leads to the
generalized list notation: the elements ai may now be atoms or lists. The
list A is written as A = (a0, a1, a2, …, an-1). Capital letters are used to
represent the names of lists, and lower case letters are used to represent
atoms. The length of the list is indicated by n. If n >= 1, then a0 is the head
of A and the tail of A is the list (a1, a2, …, an-1). Some examples of
generalized lists are listed as follows:
1. D = ( ) indicates null list or empty list, its length is zero
2. A = (a, (b, c)) indicates a list of length two, its first element is an
atom and its second element is a linear list (b, c)
3. B = (A, A, ( )) indicates a list of length 3 whose first two elements
are the list A and third is a null list.
4. C = (a, C) indicates a list of length 2, C corresponds to the infinite
list C=(a, (a, (a, ……..).
5. D is an empty list. For list A, we have head(A) = a and tail(A) = ((b,
c)); the tail(A) in turn has head (b, c) and tail ( ). Looking at the list
B, we have head(B) = A and tail(B) = (A, ( )), and the process may be
continued similarly. Two important consequences are that lists may
be shared by other lists and lists may be recursive. Let us consider
the situation where lists are neither shared nor recursive, for
example, a polynomial in several variables.
p(x, y, z) = x^10 y^3 z^2 + 2x^8 y^3 z^2 + 3x^8 y^2 z^2 + x^4 y^4 z + 6x^3 y^4 z + 2yz
A sequential representation for p uses a structure with four fields
to represent a single array element. The node is of the form shown below.
The nodes vary in size and the storage is difficult to maintain. The
idea, then, is to use a general list structure with nodes of fixed size and
rewrite p(x, y, z) as
((x^10 + 2x^8)y^3 + 3x^8 y^2)z^2 + ((x^4 + 6x^3)y^4 + 2y)z
Every polynomial can be represented using node of type polynode and is
defined as follows:
enum triple {var, ptr, no};
class polynode
{
polynode *link;
int exp;
triple trio;
union{
char var1;
polynode *dlink;
int coef;
};
};
In this representation, there are three types of nodes depending on the
value of trio. If trio == var, then the node is a head node; the field var1 is
used to indicate the name of the variable and exp = 0. If trio == ptr, then the
coefficient is itself a list, pointed to by the dlink field. If trio == no, then the
coefficient is an integer and is stored in the coef field; in both cases exp
holds the exponent. The representation for the polynomial p = 3x^2 y is as
follows:

Fig. 7.33: Generalized List for 3x^2 y

Every generalized list can be represented by using the node structure


as follows.

enum boolean {false, true};


class genlist;
class genlistnode
{
friend class genlist;
genlistnode *link;
boolean tag;
union
{
char data;
genlistnode *dlink;
};
};
class genlist
{
private:
genlistnode *first;
};

Fig. 7.34: Generalized Lists

7.10 What are Recursive Algorithms for Lists?


When a data object is defined recursively, it is easy to describe
recursive algorithms that work on such objects. A recursive algorithm
consists of two components: the first is the recursive function itself, called
the workhorse, and the second is the function that calls the recursive
function at the top level, called the driver.
When recursion is used to implement a class operation, we require
two class member functions. The driver is declared as a public member
function and the workhorse is declared as a private member function. The
different types of recursive algorithms are
1. Copying a list – produces an exact copy of a non-recursive list “l ” in
which no sublists are shared.
driver
void genlist :: copy(const genlist &l)
{
first = copy(l.first);
}
workhorse
genlistnode *genlist :: copy(genlistnode *p)
{
genlistnode *q = 0;
if(p)
{
q = new genlistnode;
q -> tag = p -> tag;
if(!p -> tag)
q -> data = p -> data;
else
q -> dlink = copy(p -> dlink);
q -> link = copy(p -> link);
}
return q;
}
2. List Equality – determines whether two lists are identical or not. To be
identical, the lists must have same structure and same data in corresponding
data members.
driver
int operator ==(const genlist &l, const genlist &m)
{
return equal(l.first, m.first);
}
workhorse
int equal(genlistnode *s, genlistnode *t)
{
int x;
if((!s) &&(!t))
return 1;
if(s && t &&(s -> tag == t -> tag))
{
if(!s -> tag)
if(s -> data == t -> data)
x = 1;
else
x = 0;
else
x = equal(s -> dlink, t -> dlink);
if(x)
return equal(s -> link, t -> link);
}
return 0;
}
3. List Depth – computes the depth of the list. If the list is empty then the
depth of the list is zero.
depth(s) = 0, if s is an atom
depth(s) = 1 + max{depth(x1), depth(x2), …, depth(xn)}, if s is the list (x1, x2, …, xn)


driver

int genlist :: depth()


{
return depth(first);
}
workhorse
int genlist :: depth(genlistnode *s)
{
if(!s)
return 0;
genlistnode *p = s;
int m = 0;
while(p)
{
if(p -> tag)
{
int n = depth(p -> dlink);
if(m < n)
m = n;
}
p = p -> link;
}
return m + 1;
}
7.10.1 What are Reference Count, Shared and Recursive Lists?
When lists are allowed to be shared by other lists and when
recursive lists are permitted we have some problems. Sharing of sublists
can gain storage space. A sublist in a list can be named. For example, in A =
(a, (b, c)), the sublist (b, c) could be assigned the name Z by writing A = (a,
Z(b, c)). For consistency, we write A(a, Z(b, c)).
Lists that are shared by other lists such as A create problems when
we add or delete a node at the front. If the first node is deleted, then the
pointers from B must be changed to point to the second node. If a newnode
is added, then the pointers from B should now point to the newnode. But we
do not know all the places from which a particular list is referenced. The
problem can
be easily solved by using head node. The use of head node when
performing add or delete at the front of the lists will eliminate the need to
retain all pointers to a specific list. If each list has a head node, then the
generalized lists are as shown in the below figure.

Fig. 7.35: Generalized Lists for Reference Counts, Shared and Recursive Lists
The value in the data field of the head node is the reference count
of the corresponding list. When lists are shared by other lists, we need a
mechanism to determine whether or not the list of nodes may be physically
returned to available space list. This mechanism provides reference count
maintained in the head node of each list. Since the data field of the head
node is free the reference count is maintained in this field. The reference
count is the number of pointers to that list.

1. ref(x) = 1, accessible only via x.
2. ref(y) = 3, pointed to by y and two pointers from z.
3. ref(z) = 1, accessible only via z.
4. ref(w) = 2, accessible via w and a pointer to itself.
When the reference count decreases and becomes zero, all the
nodes are physically returned to the available space list. The class definition
for genlist is unchanged and genlistnode is as follows:

enum tag {REF, DATA, DLINK};


class genlistnode
{
friend class genlist;
genlistnode *link;
tag t;
union
{
char data;
genlistnode *dlink;
int ref;
};
};
A recursive algorithm for erasing a list examines all the top level
nodes of any list whose reference count has become zero. For any such
sublist found, we erase the list using the erase algorithm and link it to the
available space list. For a recursive list, the reference count never becomes
zero.
8. What are Trees?
A tree is a non-linear data structure that is used to represents
hierarchical relationships between individual data items.
A tree is a finite set of one or more nodes such that,

1. There is a specially designated node called root.


2. The remaining nodes are partitioned into n >= 0 disjoint sets T1,
T2, …, Tn, where each of these sets is a tree. T1, …, Tn are called
the subtrees of the root.
In tree data structure, every individual element is called as “node”.
Node in a tree data structure stores actual data of particular element and
link to next element in hierarchical structure.
If a tree consists of ‘n’ nodes then it has exactly ‘n-1’ edges. In the
below tree, the number of nodes is 9, so this is a tree with 9 nodes and 8
edges.

Fig. 8.1 Tree with 9 nodes and 8 edges

8.1 What is the Terminology of Tree Data Structure?


1) Root:

First node in tree data structure is a root.


Root node is considered as origin of tree.
Every tree must have a root node.
Any tree have only one root node.
For example, in the below tree root node is ‘A’.

2) Edge:

In a tree data structure, connecting link between any two nodes


is called as an Edge.
Branch or edge is the link between the parent and its child.
If there are ‘n’ nodes in a tree, there will be ‘n-1’ edges.
For example, in the below figure the number of node, n=5. Therefore
number of edges = n-1 = 5-1 = 4.

3) Parent:

The node which is predecessor of any node is called as “parent


node”.
The parent node can have child or children.
The node which has branch from it to any other node.
For example, in the below tree, the parent nodes are A, B, C and D.

4) Child:

The node which is descendent of any node is called as “child


node”.
The node which has a link from its parent node is a child node.
Any node except the root node in the tree is considered as the
child node.
For example, in the below figure ‘B’, ‘C’ are child nodes to ‘A’.
‘D’, ‘E’, ‘F’ are the Child nodes to ‘C’.

5) Siblings:

Children of the same parent are said to be siblings.


For example, in the below tree B, E are siblings. C, D are siblings and F,
G, H are siblings.

6) Leaf:

The nodes which do not have any child are called leaf nodes.
Nodes that have degree zero are called leaf or terminal nodes.
Leaf nodes are also known as external nodes or terminal nodes.
For example, in the below tree the nodes D, I, F, G, H are the leaf nodes.

7) Internal nodes:

Nodes which have at least one child are called as internal nodes.
Nodes other than leaf nodes are internal nodes.
These nodes are also called as non-terminal nodes.
For example, the below tree A, B, C , E are internal nodes or non-
terminal nodes.

8) Degree:

Total number of children of a node is called as degree of that


node.
The number of subtrees of a node is called its degree.
The degree of a tree is the maximum of the degree of the nodes
in the tree.
For example, in the below figure,
Degree of A = 2
Degree of C = 3
Degree of G = 1
Degree of I = 0
So, Degree of tree = 3.
9) Level:

The level of a node is defined by letting the root to be at level


one.
If a node is at level ‘l’, then its children are at level ‘l+1’.
For example, the below tree shows the different levels of the tree.

10) Height:

The total number of edges from a leaf node to a particular
node in the longest path is the height of that node.
Height of root node is said to be height of a tree.
Height of all leaf nodes is ‘0’.
For example, the below tree has the height of tree as 3.

11) Depth:

Total number of edges from root node to a particular node is


depth of that node.
Total number of edges from root node to leaf node in longest
path is ‘Depth of a Tree’.
Depth of root node is 0.
For example, in the below tree has the depth of the tree is 3.
12) Ancestors:
Ancestors of a node are all the nodes along the path from root to that
node. Hence root is ancestor of all the nodes in the tree.
13) Climbing:
The process of traversing the tree from the leaf to the root is called
climbing the tree.
14) Descending:
The process of traversing the tree from the root to the leaf is called
descending the tree.
15) Forest:
It is a collection of disjoint trees. It is obtained by removing the root of a tree.
16) Predecessor:
Consider the node X, then the node previous to node X is called
predecessor node.
17) Successor:
Consider the node X, then the node that comes next to node X is
called successor node.
8.2 What are the Representations of Trees?
Tree can be represented in the different ways. They are:
1) List Representation
2) Left-Child-Right Sibling Representation
3) Representation as a Degree-Two-Tree.
1) List Representation:
The information in the root node comes first, followed by the list of
subtrees of that node as shown below.

Fig. 8.2 Node structure for a tree of degree k


Consider the below tree,
Fig. 8.3 Tree
The above tree can be represented in the list representation as follows.

Fig. 8.4 List Representation of Tree

If T is a k-ary tree with n nodes, each having a fixed size, then
n(k-1)+1 of the nk child fields are 0, n ≥ 1.
Proof: Since each non-zero child field points to a node, and there is
exactly one link pointing to each node other than the root, the
number of non-zero child fields in an n-node tree is ‘n-1’.
The total number of child fields for a k-ary tree is nk.
Hence, the number of zero fields = nk-(n-1) = n(k-1)+1.
2) Left-Child-Right Sibling Representation:
In the left-child-right sibling representation the node structure is
shown below.
Fig. 8.5 Node Structure for Left-Child-Right Sibling Representation
The below figure shows the left-child-right sibling representation of that tree.

Fig. 8.6: Left-Child-Right Sibling Representation of Tree

3) Representation of Degree-Two Tree:


To obtain the degree-two representation of the tree in figure 8.3,
we simply rotate the right-sibling pointers of the left-child-right sibling
tree in figure 8.6 clockwise by 45 degrees. The
representation of the degree-two tree is shown below.
Fig.8.7 Representation of Degree-Two Tree

8.3 What are Binary Trees?


A binary tree is a finite set of nodes that is either empty or consists
of a root and two disjoint binary trees called the left subtree and right
subtree.
Left child: The node present to the left of the parent node is called the left
child.
Right child: The node present to the right of the parent node is called the
right child.
Fig. 8.8: Binary Tree

8.2.1 What are the Properties of Binary Trees?


Some of the important properties of a binary tree are as follows:
1) The maximum number of nodes on level ‘i’ of a binary tree is 2^(i-1), i ≥ 1.
Proof:
Let us prove it with induction.
Here, the level of a node is one more than the number of edges from the
root to that node, so the level of the root is 1, i.e. for the root, l = 1, and the
number of nodes = 2^(1-1) = 2^0 = 1.
Assume that the maximum number of nodes on level l is 2^(l-1).
Consider level ‘l+1’: level l has at most 2^(l-1) nodes and, as it is a binary
tree, each node has at most 2 children, so the next level has at most twice
the nodes of the previous one.
Therefore, 2 * 2^(l-1) = 2^(1+l-1) = 2^((l+1)-1).
So, at level l+1, we have at most 2^((l+1)-1) nodes.
Hence it is applicable for all i values. Hence proved.
2) The maximum number of nodes in a binary tree of height ‘h’ is 2^h - 1.
Proof:
We know that, height of a tree is maximum number of nodes on root to
leaf path.
The height of leaf node is 1.
A tree has maximum nodes if all levels have maximum nodes.
So, the maximum number of nodes in a binary tree of height ‘h’ is
1 + 2 + 4 + … + 2^(h-1) = 2^h - 1.
(This is the sum of a geometric series with h terms.)
3) In a binary tree with ‘n’ nodes, the minimum possible height or minimum
number of levels is [log2 (n+1)].
4) A binary tree with L leaves has at least [log2 L]+1 levels.
5) In binary tree, number of leaf nodes is always one more than nodes with
two children.
6) If h = height of a binary tree (counted in edges), then
the maximum number of leaves = 2^h and
the maximum number of nodes = 2^(h+1) - 1.
7) If a binary tree contains m nodes at level l, it contains at most 2m nodes
at level l+1.
8) Since a binary tree can contain at most one node at level 0 (the root), it
can contain at most 2^l nodes at level l.
9) The total number of edges in a full binary tree with n node is n – 1.
8.2.2 What are the Types of Binary Trees?
There are different types of Binary Trees as shown below.
1) Skewed Binary Trees
2) Strictly Binary Trees
3) Complete Binary Tree
8.2.2.1 What is a Skewed Binary Tree?
If the new nodes in the tree are added only to one side of the binary
tree then it is a skewed binary tree. A skew tree is a binary tree in which
every node except one (the single leaf) has exactly one child.
A binary tree of ‘n’ nodes such that its depth is ‘n-1’ is also called as
skewed binary tree.
There are two types of skewed binary trees. They are:

1. Left Skewed Binary Tree


2. Right Skewed Binary Tree
1) Left Skewed Binary Tree:
A binary tree which is dominated by its left child nodes is called as
left skewed binary tree.

Fig. 8.9: Left Skewed Binary Tree

2) Right Skewed Binary Tree:


A binary tree which is dominated by its right child nodes is called
as right skewed binary tree.

Fig. 8.10: Right Skewed Binary Tree

8.2.2.2 What is a Strictly Binary Tree?


If every node of a binary tree has either two children or no
children at all, then it is called a strictly binary tree.

Fig. 8.11: Strictly Binary Tree


8.2.2.3 What is a Complete Binary Tree?
A binary tree in which every level, except possibly the last, is
completely filled and all the nodes at the last level are as far left as possible
is called a complete binary tree.
A full binary tree of depth ‘k’ is a binary tree having 2^k - 1 nodes,
k≥0. A binary tree with ‘n’ nodes and depth ‘k’ is complete if and only if its
nodes correspond to the nodes numbered from 1 to n in the full tree of depth
k. A full binary tree is a tree in which every node has either 0 or 2
children.
The below figure is an example of a complete binary tree.

Fig. 8.12: Complete Binary Tree

8.2.3 How to Represent the Binary Trees?


There are two ways in which a binary tree can be represented in
memory. They are:
1) Array representation of binary trees.
2) Linked representation of binary trees.
8.2.3.1 What is Array Representation of Binary Trees?

When arrays are used to represent a binary tree, an array of
size 2^k is declared, where k is the depth of the tree. For example, if the
depth of the binary tree is 3, then at most 2^3 - 1 = 7 elements will be
present in the tree and hence the array size will be 8. This is because the
elements are stored from position one, leaving position 0 vacant.

Fig. 8.13: Array Representation of Binary Tree


But an array of bigger size is declared so that later new nodes can be
added to the existing tree. The following binary tree can be represented
using arrays as shown.
The root element is always stored in position 1. The left child of
node i is stored in position 2i and the right child of node i is stored in
position 2i + 1. The formulas for identifying the parent, left child and right
child of a particular node are:
1) Parent( i ) = i / 2, if i ≠ 1. If i = 1 then i is the root node and the root
does not have a parent.
2) Left child( i ) = 2i, if 2i ≤ n, where n is the maximum number of
elements in the tree. If 2i > n, then i has no left child.
3) Right child( i ) = 2i + 1, if 2i + 1 ≤ n. If 2i + 1 > n, then i has no
right child.
The empty positions in the tree where no node is connected are
represented in the array using -1, indicating absence of a node. Using the
formulas, we can see that for node 3, the parent is 3/2 = 1. Referring to the
array locations, we find that 50 is the parent of 40. The left child of node 3
is 2*3 = 6. But position 6 contains -1, indicating that the left child
does not exist for node 3. Hence 40 does not have a left child. The right
child of node 3 is 2*3 + 1 = 7. Position 7 in the array contains 20.
Hence, 20 is the right child of 40.
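The three index formulas above can be sketched as small helper functions. This is a minimal illustration with hypothetical names, using the same 1-based indexing into an array of n elements; a return value of 0 stands for "does not exist".

```cpp
#include <cassert>

// 1-based array representation: the root lives at index 1 and
// index 0 is left vacant. A result of 0 means the relative is absent.
int parentIndex(int i)            { return (i == 1) ? 0 : i / 2; }
int leftChildIndex(int i, int n)  { return (2 * i <= n) ? 2 * i : 0; }
int rightChildIndex(int i, int n) { return (2 * i + 1 <= n) ? 2 * i + 1 : 0; }
```

For the example above (n = 7), leftChildIndex(3, 7) gives 6 and rightChildIndex(3, 7) gives 7, matching the positions discussed in the text.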
The disadvantage of the array representation is that in most cases there will
be a lot of unutilized space. For a complete binary tree this representation is
ideal, as no space is wasted. The below figure shows how to represent the
complete binary tree in the memory for array representation.

Fig. 8.14: Array Representation of Complete Binary Tree

A skewed binary tree of depth ‘k’ will require 2^k - 1 spaces, of which we
will use only k. The below figure shows how the skewed binary tree is
represented in the array.
Fig. 8.15: Array Representation of Skewed Binary Tree

8.2.3.2 What is Linked Representation of Binary Trees?


In linked representation of binary trees, instead of arrays, pointers
are used to connect the various nodes of the tree. Hence each node of the
binary tree consists of three parts namely, the data, left and right. The data
part stores the data, left part stores the address of the left child and the right
part stores the address of the right child. A node structure similar to that of
a doubly linked list is used, in which every node consists of three fields,
i.e., left child, data and right child.

Fig. 8.16: Node for Binary Tree Representation


struct binarytree
{
struct binarytree *LeftChild;
int data;
struct binarytree *RightChild;
};
typedef struct binarytree node;
node *root = NULL;

Logically the binary tree in linked form can be represented as shown.

Fig. 8.17: Linked List Representation of Binary Tree

The pointers storing NULL value indicates that there is no node


attached to it. Traversing through this type of representation is very easy.
The left child of a particular node can be accessed by following the left
link of that node and the right child of a particular node can be accessed by
following the right link of that node.
Complete binary tree in Linked List Representation is shown below.
Fig. 8.18: Linked Representation for Complete Binary Tree

Linked List Representation for Skewed Binary Tree is shown below.

Fig. 8.19: Linked Representation for Skewed Binary Tree

8.2.4 What are Binary Tree Traversals?


A tree traversal is a method of visiting every node in the tree. By
visit, we mean that some type of operation is performed. For example, we
may want to print the contents of the nodes. There are three standard ways
of traversing a binary tree T with root R. They are:
1) Preorder Traversal
2) Inorder Traversal
3) Postorder Traversal
Observe that each algorithm contains the same three steps, and that
the left subtree of R is always traversed before the right subtree. The
difference between the algorithms is the time at which the root R is
processed. The three algorithms are sometimes called the node-left-right
(NLR) traversal, the left-node-right (LNR) traversal and the left-right-node
(LRN) traversal. The traversal algorithms below use a recursive approach.
8.2.4.1 What is Preorder Traversal?
Steps for Preorder Traversal:
1. Process the root R.
2. Traverse the left subtree of R in preorder.
3. Traverse the right subtree of R in preorder.

Fig. 8.20: Preorder Traversal

In the preorder traversal, the node element is visited first and then
the left subtree of the node and then the right subtree of the node is visited.
Consider we have 6 nodes in the tree A, B, C, D, E, F. The traversal always
starts from the root of the tree. The node A is the root and hence it is visited
first. The value at this node is processed.
Now we check if there exists any left child for this node if so apply
the preorder procedure on the left subtree. Now check if there is any right
subtree for the node A, the preorder procedure is applied on the right
subtree. Since there exists a left subtree for node A, B is now considered as
the root of the left subtree of A and preorder procedure is applied. Hence
we find that B is processed next and then it is checked if B has a left
subtree. This recursive method is continued until all the nodes are visited.
Example for preorder traversal :
Step 1:

Step 2:

Step 3:

Step 4:
Step 5:

Step 6:

Algorithm for Preorder Traversal:


PREORDER( ROOT )
Temp = ROOT
If temp = NULL
return
display temp -> data
If temp - > left ≠ NULL
PREORDER ( temp - > left )
If temp -> right ≠ NULL
PREORDER ( temp - > right )
8.2.4.2 What is Inorder Traversal?
Inorder Traversal:
1. Traverse the left subtree of R in inorder.
2. Process the root R.
3. Traverse the right subtree of R in inorder.

Fig. 8.21: Inorder Traversal


In the Inorder traversal method, the left subtree of the node element
is visited first and then the node element is processed and at last the right
subtree of the node element is visited. For example, the traversal starts with
the root of the binary tree. The node A is the root and it is checked if it has
the left subtree. Then the inorder traversal procedure is applied on the left
subtree of the node A.
Now we find that node D does not have left subtree. Hence the node
D is processed and then it is checked if there is a right subtree for node D.
Since there is no right subtree, the control returns back to the previous
function which was applied on B. Since left of B is already visited, now B
is
processed. It is checked if B has the right subtree. If so apply the inorder
traversal method on the right subtree of the node B. This recursive
procedure is followed till all the nodes are visited.
Example for Inorder Traversal:
Step 1:

Step 2:

Step 3:

Step 4:
Step 5:

Step 6:

Algorithm for Inorder Traversal:


INORDER( ROOT )
Temp = ROOT
If temp = NULL
return
If temp - > left ≠ NULL
INORDER ( temp - > left )
display temp -> data
If temp -> right ≠ NULL
INORDER ( temp - > right )
8.2.4.3 What is Postorder Traversal?
Postorder Traversal:
1. Traverse the left subtree of R in postorder.
2. Traverse the right subtree of R in postorder.
3. Process the root R.

Fig. 8.22: Postorder Traversal

In the postorder traversal method the left subtree is visited first, then
the right subtree and at last the node element is processed. For example, A
is the root node. Since A has the left subtree the postorder traversal method
is applied recursively on the left subtree of A. Then when left subtree of A
is completely is processed, the postorder traversal method is recursively
applied on the right subtree of the node A. If right subtree is completely
processed, then the node element A is processed.
Example:
Step 1:
Step 2:

Step 3:

Step 4:
Step 5:

Step 6:

Algorithm for Postorder Traversal:


POSTORDER( ROOT )
Temp = ROOT
If temp = NULL
return
If temp - > left ≠ NULL
POSTORDER ( temp - > left )
If temp -> right ≠ NULL
POSTORDER ( temp - > right )
display temp -> data
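The three recursive algorithms above translate almost line-for-line into code. The sketch below uses hypothetical function names and a node structure like the linked representation shown earlier; instead of displaying each value it appends the visited keys to a string, so the visit order can be inspected.

```cpp
#include <cassert>
#include <string>

struct node { node *left; int data; node *right; };

void preorder(node *t, std::string &out) {
    if (t == nullptr) return;
    out += std::to_string(t->data) + " ";  // process the node first
    preorder(t->left, out);                // then the left subtree
    preorder(t->right, out);               // then the right subtree
}

void inorder(node *t, std::string &out) {
    if (t == nullptr) return;
    inorder(t->left, out);                 // left subtree first
    out += std::to_string(t->data) + " ";  // then the node
    inorder(t->right, out);                // then the right subtree
}

void postorder(node *t, std::string &out) {
    if (t == nullptr) return;
    postorder(t->left, out);               // left subtree first
    postorder(t->right, out);              // then the right subtree
    out += std::to_string(t->data) + " ";  // the node comes last
}
```

For a root with value 1 and children 2 (left) and 3 (right), the three routines produce "1 2 3", "2 1 3" and "2 3 1" respectively.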

8.4 What are Expression Trees?


The trees are many times used to represent an expression and if
done so, those types of trees are called expression trees. The following
expression is represented using the binary tree, where the leaves represent
the operands and the internal nodes represent the operators.
B^A+B*A–C

Fig. 8.23: Expression Trees

If the expression tree is traversed using preorder, inorder and


postorder traversal methods, then we get the expressions in prefix, infix and
postfix forms as shown.
Prefix (preorder): - + ^ B A * B A C
Infix (inorder): B ^ A + B * A - C
Postfix (postorder): B A ^ B A * + C -
8.5 What are Threaded Binary Trees?
In binary tree, the leaf nodes have no children. Therefore the left
and right fields of the leaf nodes are made NULL. But NULL waste
memory space so to avoid NULL in the node we will set threads.
8.5.1 What are Threads?
Threads are links that point to its predecessor node and successor
node. To construct threads we use the following rules.

1. If ptr - > leftchild is NULL, replace ptr - > leftchild with a
pointer to the inorder predecessor of ptr.
2. If ptr - > rightchild is NULL, replace ptr - > rightchild with a
pointer to the inorder successor of ptr.

Let us consider the binary tree as follows

Fig. 8.24: Binary Tree

The corresponding threaded binary tree is as follows


Fig. 8.25: Threaded Binary Tree

The structure of a threaded binary tree is as follows


struct threadedbtree
{
int leftthread, rightthread;
int data;
struct threadedbtree *leftchild;
struct threadedbtree *rightchild;
};
8.5.2 What is Inorder Traversal of a Threaded Binary Tree?
The basic idea in inorder threaded binary tree is that the left thread
should point to the predecessor and the right thread points to inorder
successor. The head node is the starting node and the root node of the tree
is attached to the left of the head node.
There are two additional fields in each node named as left thread
and right thread set initially to 0. To explain about inorder thread traversing
of a binary tree let us consider the values for creating a threaded binary tree
10, 8, 6, 12
Initially, create a head node of the tree
Now let us take the first value 10; this will be the root node and is attached
to the left of the head node as follows

The NULL links of the roots left and right will be pointed to the head node
as follows

Next comes 8. Now 8 is compared with the root; as it is less, attach 8 as the
left child of the root 10.
new - > left = root- > left
new - > right = root
root - > left = new
root - > lth = 1

The left link of node 8 points to its inorder predecessor and right link of the
node 8 points to its inorder successor.
Similarly, the next node 6 is attached to the left of the node 8. The next
node is 12 when compared with the root node 10 it is greater so we attach
the node 12 to the right of the root node 10 which is as follows.

new - > right = root- > right


new - > left = root
root - > rth = 1
root - > right = new
8.5.3 How to Insert a node into a Threaded Binary Tree?
Let us consider now how to insert the node into the threaded binary
tree. The case we consider here is inserting the node “r” as the right child
of a node “s”. The cases for insertion are
If “s” has an empty right subtree, then the insertion is simple as shown
below.

Fig. 8.26: Inserting a node into Threaded Binary Tree as leaf node

If the right subtree of “s” is not empty, then this right subtree is
made the right subtree of “r” after insertion. Then “r” becomes the inorder
predecessor of a node that has leftthread == true and consequently there is
a thread which has to be updated to point to “r”. The node containing this
thread was previously the inorder successor of “s”.

Fig. 8.27: Inserting a node into Threaded Binary Tree

8.6 What are Heaps?


8.6.1 What are Priority Queues?
Heaps are used to implement priority queues. In this type of queues
the element to be deleted is the one with highest (lowest) priority. An element
with arbitrary priority can be inserted into the queue. In the Queue
data structure, insertion is performed at the rear end and deletion is
performed at the front end which is based on FIFO principle.
Priority Queue is a variant of Queue data structure in which
insertion is performed in the order of arrival and deletion is performed
based on the priority. Priority Queues are used in the operation system for
load balancing, interrupt handling and artificial intelligence etc.
Heap data structure is used for implementing priority queues.
Example for Priority Queue:
Consider that we are selling the services of a machine. Each user
pays a fixed amount per their use. But the time needed by the each user is
different. Now we want to maximize the returns from the machine under
the assumption that the machine is not idle. This can be maintained by
using a priority queue of all persons waiting to use the machine. Whenever
the machine becomes idle, the user with the smallest time requirement is
selected. Hence a min priority queue is required.
If each user needs the same amount of time on the machine but they are
ready to pay different amounts for the service, then a priority queue based
on the amount of payment can be maintained. Whenever the machine is idle
then the user paying more amount will be selected. This requires a max
priority queue.
8.6.2 What are different types of Heaps?
There are two different types of heap data structures. They are:

1. Max Heap
2. Min Heap
Every heap data structure will satisfy the following properties:
Property 1: [structure]
All levels in a heap must be full, except the last level and nodes must be
filled from left to right strictly.
Property 2: [ordering]
Nodes must be arranged in an order according to values based on max heap
or min heap.
8.6.3.1 What is Min Heap?
Definition for Min heap:
A min tree is a tree in which the key value in each node is no larger
than the key values in its children (if any). A min heap is a complete binary
tree that is also a min tree.

Fig. 8.28: Min heap

8.6.3.2 What is Max Heap?


Definition for Max Heap:
A max heap is a complete binary tree that is also a max tree. A max
tree is a tree in which the key value in each node is larger than the key
values of its children if any.

Fig. 8.29: Max heap

8.6.3.2.1 How to insert a value into Max Heap?


The following steps are to be considered to insert into a max heaps.
Step 1: Insert a new node as the leaf node from left to right.
Step 2: Compare new node value with its parent node.
Step 3: If new node value is greater than its parent, then swap both of them.
Step 4: Repeat step 2 and step 3 until the new node value is less than its
parent node or the new node reaches the root.
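Steps 1 to 4 above can be sketched with a 1-based array heap. This is a minimal sketch with a hypothetical function name; heap[0] is unused and n is the current element count, both assumptions of this illustration.

```cpp
#include <cassert>
#include <utility>

// Insert 'value' into a max heap stored 1-based in heap[1..n];
// n is passed by reference and updated to the new size.
void maxHeapInsert(int heap[], int &n, int value) {
    heap[++n] = value;                       // Step 1: place as the next leaf
    int i = n;
    while (i > 1 && heap[i] > heap[i / 2]) { // Steps 2-4: bubble up
        std::swap(heap[i], heap[i / 2]);     // larger child rises above parent
        i /= 2;
    }
}
```

Inserting 2, 5, 10 and then 20 into an empty heap leaves 20 at heap[1], since each new maximum bubbles all the way to the root.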
Let us consider a max heap of five elements.

Fig. 8.30: Max heap


When an element is added to this heap, the result is a six-element
heap and it is a complete binary tree. To determine the correct place for the
element to be inserted we use a bubbling up process that begins at the new
node and moves towards the root. The node we want to insert bubbles up to
ensure a max heap. By inserting the value 5 initially, the heap will become
as shown below.

Fig. 8.31: After inserting a number into max heap


If the element we want to insert is with key value 1, it may be inserted
as the left child of 2. But if the key value we want to insert is 5 then we
cannot insert as left child of 2 because heap property fails. So 2 is moved
down as left child and the place for 5 is the old place of 2.

Fig. 8.32: Final Max Heap


8.6.3.2.2 How to delete a value from Max Heap?
When an element is deleted from the max heap, it is taken from the root
of the heap.
Step 1: Swap the root node with the last node in the max heap.
Step 2: Delete last node.
Step 3: Compare root value with its left child value.
Step 4: If the root value is smaller than its left child, then compare left child
with its right sibling, else go to step 6.
Step 5: If the left child value is greater than the right sibling, then swap the
root node with the left child, else swap the root with the right child.
Step 6: If the root is larger than its left child, then compare the root value
with its right child value.
Step 7: If the root is smaller than its right child, then swap the root with the
right child, otherwise stop the process.
Step 8: Repeat the same until the root node is fixed at its exact position.
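These deletion steps amount to moving the last leaf to the root and trickling it down. Below is a minimal sketch with a hypothetical function name, again using a 1-based array heap as in the insertion sketch.

```cpp
#include <cassert>
#include <utility>

// Remove and return the maximum of a 1-based max heap heap[1..n];
// n is updated to the new size.
int maxHeapDelete(int heap[], int &n) {
    int top = heap[1];
    heap[1] = heap[n--];              // the last leaf replaces the root
    int i = 1;
    while (2 * i <= n) {              // trickle down while a child exists
        int child = 2 * i;
        if (child + 1 <= n && heap[child + 1] > heap[child])
            child++;                  // pick the larger of the two children
        if (heap[i] >= heap[child]) break;
        std::swap(heap[i], heap[child]);
        i = child;
    }
    return top;
}
```

Applied to the heap 21, 20, 10, 15, 2 from the example, the call returns 21 and leaves 20 at the root.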
When an element is to be deleted from the max heap it is taken from
the root of the heap. For example, a deletion from the heap results in
removal of element 21 then the heap will have only five elements.

Fig. 8.33: Initial Max Heap


To do this we remove the element in position 6. Now we have the right
structure, but the root is vacant and the element 2 is not in the heap. If 2 is
inserted into the root then the resulting binary tree is not a max heap.
Fig. 8.34: Heap after deletion of element 21
The element at the root should now be the largest in the tree, chosen from
its left and right children. This element is 20. It is moved to the root,
creating a vacancy at position 3. Since it has no children we insert 2 at this
place.

Fig. 8.35: Final Max Heap

8.7 What are Binary Search Trees?


A Binary Search Tree (BST) is a binary tree. It may be empty; if it is
not empty then it satisfies the following properties.
1) Each node has exactly one key and the keys in the tree are
distinct
2) The keys if any in the left sub tree are smaller than the key in
the root
3) The keys if any in the right sub tree are larger than the key in
the root
The left and right sub trees are also binary search trees.
The reason why we go for a Binary Search tree is to improve the
searching efficiency. The average case time complexity of the search
operation in a binary search tree is O(log n).

Fig. 8.36: Binary Search Tree


Consider the following list of numbers. A binary search tree can be
constructed using this list of numbers, as shown.
38, 14, 8, 23, 18, 20, 56, 45, 82, 70
Initially 38 is taken and placed as the root node. The next number 14
is taken and compared with 38. As 14 is lesser than 38, it is placed as the
left child of 38. Now the third number 8 is taken and compared starting
from the root node 38. Since is 8 is less than 38 move towards left of 38.
Now 8 is compared with 14, and as it is less than 14 and also 14 does not
have any child, 8 is attached as the left child of 14.
This process is repeated until all the numbers are inserted into the
tree. Remember that if a number to be inserted is greater than a particular
node element, then we move towards the right of the node and start
comparing again.
8.7.1 What is the search operation in Binary Search Tree?
In a binary search tree, the search operation is performed with O(log
n) average-case time complexity.
The search operation is performed as follows.
Step 1: First read the element to search.
Step 2: Compare the element with the root node value.
Step 3: If both are found, then “element found” will be displayed.
Step 4: If both are not matched, then check whether the key elements are
either smaller or greater than the root value.
Step 5: If key is smaller, then continue the search in the left subtree.
Step 6: If key is larger, then continue the search in the right subtree.
Step 7: Repeat until the element is found
Step 8: If no node has matched, then display “element not found”
Consider the following binary search tree,

Fig. 8.37: Binary Search Tree

Example 1:
Now if we want to search for element 30. i.e., the key element will be
30. Compare the key element with root node, which is same so “element
found” will be displayed.
Example 2:
If we want to search for the key element 80 do the following process.
Now compare the root with the key, i.e., 30 < 80, so not equal. Compare the
right subtree value with the key element, i.e., 48 < 80, so not equal. Compare
the right subtree value with the key value, i.e., 80 = 80. Hence the key is found.
Example 3:
If we want to search for the key element 40 do the following process.
Compare the root and key i.e., 30<40 which is not equal. So, search in the
right subtree value with the key element i.e., 48>40. Now, search for the left
child i.e., NULL. Hence the element is not found.
Algorithm SEARCH( ROOT, k )
Temp = ROOT, par =NULL, loc = NULL
While temp ≠ NULL
If k = temp - > data
loc = temp
break
If k < temp - > data
par = temp
temp = temp - > left
else
par = temp
temp = temp - > right
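The SEARCH algorithm above can be written as a runnable function. This is a sketch with a hypothetical name; the par/loc bookkeeping is dropped and the matching node is simply returned (or nullptr when the element is not found).

```cpp
#include <cassert>

struct node { node *left; int data; node *right; };

// Iterative BST search: returns the node holding 'key', or nullptr.
node *bstSearch(node *root, int key) {
    node *temp = root;
    while (temp != nullptr) {
        if (key == temp->data)
            return temp;                          // element found
        temp = (key < temp->data) ? temp->left    // smaller: go left
                                  : temp->right;  // larger: go right
    }
    return nullptr;                               // element not found
}
```

On the example tree above (root 30, right child 48, then 80), searching for 80 succeeds while searching for 40 returns nullptr.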
8.7.2 How to insert into a Binary Search Tree?
Insertion into a Binary Search Tree (BST) is also performed in O(log n)
average time. In a binary search tree, a new node is always inserted as a leaf
node. The insertion operation is performed as follows:
Step 1: Create a new node with given value and set its left and right child as
NULL.
Step 2: Check whether the tree is empty.
Step 3: If the tree is empty, then set the new node as root.
Step 4: If the tree is not empty, then check whether the value of the new
node is smaller or larger than the node.
Step 5: If the new node is smaller than the node, then move to the left child.
If the new node is larger than the node, then move to the right child.
Step 6: Repeat the above step until we reach a leaf node.
Step 7: After reaching a leaf node, insert the new node as the left child if the
new node is smaller, else insert it as the right child.
Example 1 for insertion into Binary Search Tree:
Construct a Binary Search Tree with the following numbers.
10, 12, 40, 4, 20, 7, 5
Step 1: insert(10)
Since, the tree is empty make the new node as root.

Step 2: insert(12)
Since, 12>10 insert 12 as right child of 10.

Step 3: insert(40)
Since 10<40, move to right child. Then again 12<40, insert 40 as the
right child of 12.
Step 4: insert(4)
Since 4<10, insert 4 as left child of 10.

Step 5: insert(20)
Since 20>10, move to the right child. Again 20>12, move to the right
child. Then 20<40 move to left child. Here left child of 40 is NULL. Insert
20 directly as the left child of 40.

Step 6: insert(7)
Since 7<10, move to left child. As 7>4, insert as right child to 4.
Step 7: insert(5)
Since 5<10, move to left subtree. Again 5>4, move to right subtree.
Here since 5<7, insert 5 as left child to 7. So the final tree after insertion of
all elements is as follows.

Example 2 for insertion into Binary Search Tree:


The BST itself is constructed using the insert operation described
below. Consider the following list of numbers. A binary tree can be
constructed using this list of numbers.
38, 14, 8, 23, 18, 20, 56, 45, 82.
For example, we want to insert the element 70. While inserting a
node into the binary search tree, first we have to find the appropriate
position in the binary search tree. We start comparing the node value 70
with the root; if it is greater than the root then it is inserted in the right
branch of the root, else in the left branch of the root.
Now compare the node 70 with the root node 38. As node 70 is greater
than the root 38, we move to the right subtree. Now compare node 70 with
the node 56; as it is greater, move to the right and compare node 70 with
node 82; as it is less than the node 82, we attach 70 as the left child of node
82. The diagram is shown below.
Steps of Algorithm INSERT( ROOT, k ) into a Binary Search Tree:
1) Read the value for the node which is to be created and store it in a node
called new.
2) Initially, if(root == NULL) then root = new
3) Again read the next value of node created in new
4) If(new - > data < root - > data) then attach the new node as a left child of
root otherwise attach the new node as a right child of root
5) Repeat steps 3 and 4 to construct the required binary search tree
completely.
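The insertion steps can also be written recursively. This is a minimal sketch with a hypothetical name; error handling and memory cleanup are omitted for brevity.

```cpp
#include <cassert>

struct node { node *left; int data; node *right; };

// Insert 'key' into the BST rooted at 'root';
// returns the (possibly new) root of the subtree.
node *bstInsert(node *root, int key) {
    if (root == nullptr)                           // empty spot found:
        return new node{nullptr, key, nullptr};    // new node becomes a leaf
    if (key < root->data)
        root->left = bstInsert(root->left, key);   // smaller: go left
    else
        root->right = bstInsert(root->right, key); // larger: go right
    return root;
}
```

Inserting 10, 12, 40, 4, 20, 7, 5 in that order reproduces the tree built step by step in the first example above.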
8.7.3 How to delete from Binary Search Tree?
The deletion of a node from a binary search tree occurs with three
possibilities
1) Deletion of a leaf node.
2) Deletion of a node having one child.
3) Deletion of a node having two children.
8.7.3.1 How to delete a leaf node in Binary Search Tree?
This is the simplest deletion in which we set the left and right
pointer of parent node as NULL. For example consider the binary search
tree as follows.

From the above tree diagram the node we want to delete is the node
8, then we will set the left pointer of its parent (node 14) to NULL. Then
after deletion the binary search tree is as follows.

Steps of Algorithm to delete a leaf node in Binary Search Tree:


if(temp - > left == NULL && temp - > right == NULL)
if(parent - > left == temp)
parent - > left = NULL
else
parent - > right = NULL
8.7.3.2 How to delete a node with one child in Binary Search Tree?
Consider the following Binary Search Tree (BST) as shown below.

After deletion of element 5, we will get the following binary search


tree since element 3 is replaced with the element 5.

Steps of Algorithm to delete a node with one child in Binary Search Tree:
if(temp - > left != NULL && temp - > right == NULL)
if(parent - > left == temp)
parent - > left = temp - > left
else
parent - > right = temp - > left
delete temp
(The case where only the right child exists is handled symmetrically, using
temp - > right.)
8.7.3.3 How to delete a node with two children in Binary Search Tree?
The node if we want to delete is having two children.
From the diagram the node we want to delete is having the value 15
then we find the inorder successor of the node i.e., 40 will become the root
node. 3 will become the left child of root node 40 and its right child will be
80.

Algorithm to delete a node with two children in Binary Search Tree:


if(temp->left != NULL && temp->right != NULL)
parent = temp
temp_succ = temp->right
while(temp_succ->left != NULL)
parent = temp_succ
temp_succ = temp_succ->left
temp->data = temp_succ->data
if(parent == temp)
parent->right = temp_succ->right
else
parent->left = temp_succ->right
delete temp_succ
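The three deletion cases can be combined into a single recursive routine. The following is a minimal C++ sketch with an illustrative node structure and function name (deleteNode is an assumption, not the text's code); the two-children case copies the inorder successor's value and then removes the successor:

```cpp
#include <cstddef>

// Illustrative node structure for a binary search tree.
struct Node {
    int data;
    Node *left, *right;
    Node(int d) : data(d), left(NULL), right(NULL) {}
};

// Delete 'key' from the tree rooted at 'root' and return the new root.
Node* deleteNode(Node *root, int key)
{
    if (root == NULL) return NULL;
    if (key < root->data)
        root->left = deleteNode(root->left, key);
    else if (key > root->data)
        root->right = deleteNode(root->right, key);
    else {
        // Cases 1 and 2: zero or one child - promote the child (may be NULL).
        if (root->left == NULL) { Node *r = root->right; delete root; return r; }
        if (root->right == NULL) { Node *l = root->left; delete root; return l; }
        // Case 3: two children - copy the inorder successor's value,
        // then delete the successor from the right subtree.
        Node *succ = root->right;
        while (succ->left != NULL) succ = succ->left;
        root->data = succ->data;
        root->right = deleteNode(root->right, succ->data);
    }
    return root;
}
```

Returning the (possibly new) root from each call lets the parent pointers be rewired without tracking a separate parent variable.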
8.7.4 How to Join two Binary Search Trees?
There are two different ways to join binary search trees. They are:
a) ThreeWayJoin(small, mid, large):
b) TwoWayJoin(small, large)
a) ThreeWayJoin(small, mid, large):
This creates a single binary search tree from two subtrees small and
large together with the node mid. It is assumed that the small tree contains
keys smaller than mid and the large tree contains keys larger than mid.
An example of ThreeWayJoin(small, mid, large) is shown below.

Fig. 8.38: Three Way Join for Binary Search Tree

b) TwoWayJoin(small, big):
This joins two Binary Search Trees, small and big, to obtain a single
Binary Search Tree. It is assumed that all keys of small are smaller than all
keys of big.
An example of TwoWayJoin(small, big) is shown below.

Fig. 8.39: Two Way Join for Binary Search Tree


8.7.5 How to Split Binary Search Trees?
The Binary Search Tree is split at a value k into three parts: small, a
Binary Search Tree that contains all values less than k; large, a Binary
Search Tree that contains all values greater than k; and mid, which holds
any value equal to k.
An example of splitting Binary Search Trees is shown below.

Fig. 8.40: Splitting Binary Search Tree


8.7.6 How to find the Height of Binary Search Tree?
The height of the Binary Search Tree is the length of the path from the
root to the deepest node in the tree.
The height of a binary search tree with “n” elements can become as
large as “n”. For instance, when the values like 1, 2, …, n are inserted into
the empty binary search tree. If insertions and deletions are made at random
then the height of the binary search tree is O(log n) on average.
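A recursive sketch in C++, counting the height in nodes so that a degenerate tree of n elements has height n, as stated above (names illustrative):

```cpp
#include <cstddef>
#include <algorithm>

// Illustrative node structure for a binary search tree.
struct Node {
    int data;
    Node *left, *right;
    Node(int d) : data(d), left(NULL), right(NULL) {}
};

// Height counted as the number of nodes on the longest root-to-leaf path;
// an empty tree has height 0.
int height(Node *root)
{
    if (root == NULL) return 0;
    return 1 + std::max(height(root->left), height(root->right));
}
```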
9. What is a Graph?
DEFINITION OF GRAPH:
“A graph G is a collection of two sets V and E, where V is a finite
non-empty set of vertices and E is a finite non-empty set of edges.”
Or
“A graph G is defined as a set of objects called nodes and edges”; G
= (V, E).
A graph G consists of two things:
1) A set V of elements called nodes (or points or vertices).
2) A set E of edges such that each edge e in E is identified with a
unique pair (u, v) of nodes in V, denoted by e = (u, v)
9.1 What is the Graph Abstract Data Type?
The Graph Abstract Data Type is shown below.
Abstract data type Graph
{
Instances: a nonempty set of vertices and a set of undirected edges, where
each edge is a pair of vertices
Operations: for all graph ∈ Graph and v, v1, v2 ∈ Vertices
Graph Create() - return an empty graph.
Graph InsertVertex(graph, v)- return a graph with v inserted. v has no
incident edge.
InsertEdge(graph, v1,v2) - return a graph with new edge between v1 and
v2.
Graph DeleteVertex(graph, v) - return a graph in which v and all edges
incident to it are removed
Graph DeleteEdge(graph, v1, v2) - return a graph in which the edge (v1,
v2) is removed
Boolean IsEmpty(graph) - if (graph==empty graph) return TRUE else
return FALSE
List Adjacent(graph,v) - return a list of all vertices that are adjacent to v.
}

Fig. 9.1: Graph G

The above Graph G has,


G = { { V1, V2, V3, V4}, { E1, E2, E3, E4, E5} }
9.2 What are the different Types of Graphs?
There are basically two types of graphs:
1) Directed graphs
2) Undirected graph
1) Directed Graph:
In a directed graph or digraph, the directions are shown on the edges.
The edges between the vertices are ordered pairs. In the below graph, the edge is
in between the vertices V1 and V2. The vertex V1 is called the tail and the
vertex V2 is called the head. The edge is the ordered pair (V1, V2) and not (V2, V1).

Fig. 9.2: Directed Graph


2) Undirected Graph:
In an undirected graph, there are no directions shown on the edges. The
edges are not ordered. In the graph below, the edge E1 is the set {V1, V4},
which is the same as {V4, V1}.

Fig. 9.3: Undirected Graph


Multi-Graph:
If there is more than one edge between the same pair of vertices, then the
graph is called a multi-graph, as shown below.

Fig. 9.4 Multi-Graph


Planar Graph:
A planar graph is a kind of graph that can be drawn in a plane and
which contains no crossing edges. It is shown below.

Fig. 9.5: Planar Graph


9.3 What are the Properties of Graph?
Complete graph – An undirected graph of n vertices that contains n(n-1)/2
edges is called a complete graph.
Sub graph – A subgraph G′ of a graph G is a graph such that the set of
vertices V(G′) is a subset of V(G) and the set of edges E(G′) is a subset of E(G).

Fig. 9.6: Graph G

Fig. 9.7: Sub graphs of Graph G

Connected graph – An undirected graph is said to be a connected graph if
for every pair of distinct vertices Vi and Vj in V(G) there is a path from Vi
to Vj in G.
A directed graph G is said to be strongly connected if and only if for every
pair of distinct vertices u and v in V(G), there is a directed path from u to v
and also a path from v to u.
Example for different types of graphs are shown below.
Fig. 9.8: Connected Graph
The above graph is example for connected graph since all the vertices are
connected to each other.

Fig. 9.9: Not Connected Graph


In the above graph, vertex 1 is not connected to vertices 2, 3 and 4.
Weakly connected graph - A Directed graph is called a weakly connected
graph if for any two nodes I and J, there is a directed path from I to J or
from J to I.

Fig. 9.10: Weakly Connected Graph


In the above graph, from vertex 3 we cannot reach vertices 1 and 2.
Strongly connected graph - A Directed graph is called a strongly connected
graph if for any two nodes I and J, there is a directed path from I to J and
also from J to I.

Fig. 9.11: Strongly Connected Graph


In the above graph, the vertices are strongly connected to each other so it is
a strongly connected graph.
Weighted graph – A weighted graph is a graph that contains weights along
with its edges.
Path – A path is a sequence of vertices such that there exists an edge
from each vertex to the next. A path from vertex u to vertex v in graph G
is a sequence of vertices u, i1, i2, …, ik, v such that (u, i1), (i1, i2), (i2, i3),
…, (ik, v) are edges in E(G).
Length of the Path -The length of a path is number of edges on it.

Fig. 9.12: Graph G1

In the above graph G1, the path from vertex 1 to vertex 4 is 1-2-4. So the
length of the path (1, 4) is 2.
Simple Path – A simple path is a path in which all vertices are distinct,
except that the first and last could be the same.

Fig. 9.13: Graph


For example, in the above graph the path (1, 2) (2, 4), written as 1, 2, 4 and of
length 2, is considered a simple path.
The path (1, 2) (2, 4) (4, 2) (2, 1), written as 1, 2, 4, 2, 1, repeats vertex 2,
which is neither the first nor the last vertex. Hence it is not a simple path.
Cycle – a closed walk through the graph with repeated vertices so that the
starting and ending vertex is same.
In the graph of the above figure,
1, 2, 3, 1 is a cycle; 1, 2, 3, 4, 1 is a cycle; and 4, 2, 3, 4 is also a cycle.
Indegree – The indegree of a vertex V is the number of edges incident to V,
i.e., the number of edges for which V is the head.

Fig. 9.14: Graph


In the above directed graph, indegree(1) = 1, indegree(2) = 1, indegree(3) =
1.
Outdegree – The number of edges that are exiting out from the vertex is
outdegree. The outdegree is defined to be the number of edges for which V
is the tail.
For example, in the above figure, outdegree(1) = 1, outdegree(2) = 2,
outdegree(3) = 0.
Degree – The number of edges associated with the vertex is degree of a
vertex. The degree of a vertex is the number of edges incident to that
vertex.

Fig. 9.15: Graph G

In the above graph, degree(A) = 3, degree(B) = 3, degree(C) =3, degree(D)


= 3.
Self loop – A self loop is an edge that connects a vertex to itself.
Source node - A node whose indegree is 0 but whose outdegree is positive
is called a source node. That is, there are only outgoing arcs from the
node and no incoming arcs to the node.
Sink node - A node whose outdegree is 0 and whose indegree is positive
is called a sink node. That is, there are only incoming arcs to the
node and no outgoing arcs from the node.
Forest – A forest is a set of disjoint trees. If we remove the root node of a given
tree, then it becomes a forest.
9.4 How to Represent a Graph?
There are different ways of representing graphs. They are:
1) Adjacency Matrix
2) Adjacency List
3) Adjacency Multilists
9.4.1 What is Adjacency Matrix Representation of Graphs?
In this representation, the adjacency matrix of a graph G is a two
dimensional n x n matrix, say A = (ai,j ), where

The space required for representing a graph using its adjacency matrix
is n² bits. The matrix is symmetric in the case of an undirected graph, while it may
be asymmetric if the graph is directed. This matrix is also called a Boolean
matrix or bit matrix. The below figure shows the adjacency matrix
representation of the graph G. The adjacency matrix is also useful to store
a multigraph as well as a weighted graph.

Fig. 9.16: Adjacency Matrix Representation for Directed Graph

Fig. 9.17: Adjacency Matrix Representation for Directed Graph G2


Fig. 9.18: Adjacency Matrix Representation for Undirected Graph G1
From the adjacency matrix,
➢ For an undirected graph, the degree of any vertex i is its
row sum, i.e., Σⱼ a[i][j].
➢ For a directed graph, the indegree of any vertex i is its
column sum, i.e., Σⱼ a[j][i].
➢ The outdegree of any vertex i is its row sum, i.e.,
Σⱼ a[i][j].
The adjacency matrix for a weighted graph is called the cost adjacency
matrix. The below figure shows the cost adjacency matrix representation of
the graph G.
Fig. 9.19: Adjacency Matrix Representation for Weighted Undirected Graph

Advantages for Adjacency Matrix Representation:


1) Easy to represent.
2) Removal of an edge requires O(1) time.
Disadvantages for Adjacency Matrix Representation:
1) Consumes O(n²) memory.
2) Adding a new vertex takes O(n²) time.
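A minimal C++ sketch of the adjacency matrix representation (the fixed vertex count N and the function names are illustrative assumptions):

```cpp
// Adjacency matrix for a small undirected graph.
const int N = 4;
int adj[N][N];                     // global, so initially all zero

void addEdgeUndirected(int u, int v)
{
    adj[u][v] = adj[v][u] = 1;     // symmetric entries for an undirected edge
}

// The degree of a vertex in an undirected graph is its row sum.
int degree(int v)
{
    int d = 0;
    for (int j = 0; j < N; j++)
        d += adj[v][j];
    return d;
}
```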
9.4.2 What is Adjacency List Representation of Graphs?
Since the array representation has a fixed size, we overcome this
problem with a flexible data structure, the linked list. The representation in
which a graph is created with linked lists is called an adjacency list.
A graph of n vertices is represented as n linked lists, one list for
each vertex in G. The nodes in list i represent the vertices that
are adjacent to vertex i. Each node is represented with two fields,
i.e., data and link. The data field contains the indices of the vertices adjacent to
vertex i.
We know that the graph is set of vertices and edges we will maintain
two structures for vertices and edges respectively. The graph contains four
vertices V1, V2, V3, V4 so to maintain them we use linked list of head
nodes and adjacent nodes.
Fig. 9.20: Graph

struct head
{
char data;
struct head *down;
struct head *next;
};
struct node
{
char data;
struct node *next1;
};

Fig. 9.21: Adjacency List Representation of Graph


The down pointer helps us to go to each node in the graph whereas
next pointer is for going to adjacent node of each of the head node.
For the undirected graph G1, the adjacency list representation is
shown below.

Fig. 9.22: Adjacency List Representation of Graph G1


For the directed graph G2, the adjacency list representation is
shown below.

Fig. 9.23: Adjacency List Representation of Graph G2


Advantages for Adjacency List Representation:
1) The space required is only O(|V|+|E|); in the worst case it is
O(V²).
2) Adding a vertex is easier.
Disadvantages for Adjacency List Representation:
1) Checking whether an edge exists between two vertices is not efficient
and can take O(V) time.
2) In an undirected graph, the same edge is represented twice.
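A minimal C++ sketch of an adjacency list, using vectors in place of hand-built linked lists (the names are illustrative); each vertex owns one list of its adjacent vertices:

```cpp
#include <vector>

// Adjacency list: one list of adjacent vertices per vertex.
struct Graph {
    int n;
    std::vector< std::vector<int> > adj;
    Graph(int n) : n(n), adj(n) {}
    void addEdge(int u, int v)     // undirected: the edge is stored twice
    {
        adj[u].push_back(v);
        adj[v].push_back(u);
    }
};
```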
9.4.3 What is Adjacency Multilist Representation of Graphs?
In an adjacency multilist, we maintain the adjacency lists as multilists (i.e.,
lists in which nodes may be shared among several lists).
The node structure is shown below.

Fig. 10.4.3.1: Node structure for Adjacency Multilist Representation of Graphs

where m is a boolean field indicating whether the edge has been examined or
not.
Consider the graph G as shown below.

Fig. 9.24: Graph G


Adjacency multilist of Graph G is shown below in the following figure.

Fig. 9.25: Adjacency multilist of Graph G


9.5 What are Elementary Graph Operations?
There are two elementary graph operations called graph traversal
techniques. They are as follows.
1) Breadth First Search
2) Depth First Search
9.5.1 What is Breadth First Search?
To traverse a graph by BFS, first a vertex V1 in the graph is
visited, then all its adjacent vertices are traversed. Suppose V1
has the adjacent vertices V2, …, Vn; then they are visited first, and
the process continues for all vertices. The data structure used for keeping
the vertices and their adjacent vertices is a queue. We also use an array to
mark the visited vertices. In BFS, traversing is
done in a breadth-wise fashion. The Breadth First Search (BFS) algorithm
traverses the graph starting as close as possible to the root node. BFS visits the
nodes level by level, i.e., the root node is visited first, in the next step all the
adjacent nodes are visited, and then further levels are visited in level order.

Fig. 9.26: Breadth First Search


Steps of Algorithm for Breadth First Search:
Create a graph; depending on the type of graph (directed or undirected),
set the value of flag to 1 or 0.
1) Read the vertex from which you want to traverse the graph, say Vi.
2) Initialize the visited array to 1 at the index of Vi.
3) Insert the visited vertex Vi in the queue.
4) Visit the vertex which is at the front of the queue, delete it from the
queue and place its unvisited adjacent vertices in the queue.
5) Repeat step 4 until the queue is empty.
Example for Breadth First Search:

Fig. 9.27: Graph

Step 1: Initialize the queue.

Step 2: Start with the vertex from visiting S (starting node), and mark it as
visited.
Step 3: We then see an unvisited adjacent node from S . In this example, we
have three nodes but alphabetically we choose A , mark it as visited and
enqueue it.

Step 4: Next, the unvisited adjacent node from S is B . We mark it as visited


and enqueue it.

Step 5: Next, the unvisited adjacent node from S is C . We mark it as visited


and enqueue it.
Step 6: Now, S is left with no unvisited adjacent nodes. So, we dequeue S
and find A .

Step 7: Next, the unvisited adjacent node from A is D . We mark it as


visited and enqueue it.

Step 8: Now, A is dequeued.


Step 9: Now, B is dequeued.

Step 10: Now, C is dequeued.

Step 11: Now, D is dequeued.


9.5.2 What is Depth First Search?
The Depth First Search (DFS) algorithm traverses a graph in a depthward
motion and uses a stack to remember where to get the next vertex to start a search
when a dead end occurs in any iteration. DFS is useful for performing a
number of computations on graphs, including finding a path from one
vertex to another, determining whether or not a graph is connected, and
computing a spanning tree of a connected graph. The DFS algorithm
traverses the graph in such a way that it tries to go as far away from the root
node as possible, as shown in the below figure.

Fig. 9.28: Depth First Search


Steps of Algorithm for Depth First Search:
Create a graph; it may be either directed or undirected.
1) Read the vertex from which you want to traverse the graph, say Vi.
2) Mark Vi as visited and push all its adjacent vertices onto the
Stack.
3) If no unvisited adjacent vertices are found, then pop the top vertex from the Stack.
4) Repeat steps 2 and 3 until the stack is empty.
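The steps above can be sketched in C++ with an explicit stack (names illustrative); neighbours are pushed in reverse so that lower-numbered vertices are visited first, matching the alphabetical choice made in the example that follows:

```cpp
#include <vector>
#include <stack>

// Iterative depth first traversal from 'start' using an explicit stack.
std::vector<int> dfs(const std::vector< std::vector<int> >& adj, int start)
{
    std::vector<int> order;
    std::vector<bool> visited(adj.size(), false);
    std::stack<int> s;
    s.push(start);
    while (!s.empty()) {
        int v = s.top(); s.pop();
        if (visited[v]) continue;      // a vertex may be pushed more than once
        visited[v] = true;
        order.push_back(v);
        // Push neighbours in reverse so lower-numbered ones are visited first.
        for (int i = (int)adj[v].size() - 1; i >= 0; i--)
            if (!visited[adj[v][i]])
                s.push(adj[v][i]);
    }
    return order;
}
```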
Example for Depth First Search:
Consider the following Graph for Depth First Search Traversal.
Fig. 9.29: Graph

Step 1: Initialize the stack.

Step 2: Mark S as visited and put it onto the stack. See any unvisited
adjacent node from S . We have three nodes and we can pick any of them.
For this example, we shall take the node in an alphabetical order.

Step 3: Mark A as visited and put it onto the stack. See any unvisited
adjacent node from A. Both S and D are adjacent to A but we are concerned
for unvisited nodes only.
Step 4: Visit D and mark it as visited and put onto the stack. Here, we have
B and C nodes, which are adjacent to D and both are unvisited. However,
we shall again choose in an alphabetical order.

Step 5: We choose B , mark it as visited and put onto the stack. Here B does
not have any unvisited adjacent node. So, we pop B from the stack.

Step 6: We check the stack top to return to the previous node and check whether
it has any unvisited adjacent nodes. Here, we find D on the top of the stack.
Step 7: The only unvisited adjacent node, from D, is C now. So we visit C,
mark it as visited and put it onto the stack.

Step 8: Now we pop the nodes one by one from the stack until it is empty.
9.6 What are Connected Components?
The maximal connected sub-graph of a graph is called a connected
component of the graph. If a graph G is an undirected graph, then we can
determine the connectivity of the graph by simply calling either DFS or
BFS and checking whether any vertex remains unvisited. We can identify the
connected components by repeatedly calling either DFS(V) or BFS(V),
where V is a vertex that has not yet been visited.
Consider a Graph as shown below.
Fig. 9.30: Connected Components
The connected sub-graph of a graph is below.

Fig. 9.31: Connected Components of the Graph

9.7 What are Spanning Trees?


A spanning tree of a graph G is a tree that covers all the vertices
with the minimum possible number of edges and does not form a
cycle. A spanning tree of a graph is a subgraph that contains all vertices
and is a tree. A graph may have many spanning trees.
Consider the following Graph G.
Fig. 9.32: Graph G
Some of the possible spanning trees are shown below for Graph G. Any tree
consisting of edges in G and including all vertices in G is called a spanning
tree.

Fig. 9.33: Possible Spanning Trees for Graph G


A spanning tree is constructed using either depth first search or
breadth first search. The spanning tree resulting from depth first search is
the depth first spanning tree; the spanning tree generated using breadth first
search is the breadth first spanning tree. A spanning tree is a minimal sub-graph
G1 of G such that V(G1) = V(G) and G1 is connected.
9.8 What are Biconnected Components?
A biconnected graph is a connected graph that has no articulation
points. A biconnected component is a maximal biconnected subgraph H,
i.e., there is no subgraph that is both biconnected and properly contains H.
Consider a Graph as shown below.

Fig. 9.34: Graph


The biconnected components of the above graph are shown below.
Fig. 9.35: Biconnected Components

9.9 What is Minimum Cost Spanning Trees?


In a weighted graph, a minimum spanning tree is a spanning tree
that has minimum weight among all spanning trees of the same graph.
Three different algorithms can be used.
1) Kruskal’s Algorithm
2) Prim’s Algorithm
3) Sollins Algorithm
9.9.1 What is Kruskal’s Algorithm?
This algorithm is used to find the minimum cost spanning tree
using the greedy approach. It treats the graph as a forest in which
every node is initially an individual tree. A tree connects to another if
and only if the connecting edge has the least cost among the available options and does not violate
the minimum spanning tree properties.
Steps for Kruskal’s Algorithm:

1. For each vertex V in graph G, create a set with the element V.
2. Initialize a priority queue Q containing all the edges in ascending
order of their weights.
3. Define forest T = NULL.
4. While T has fewer than N-1 edges:
select the edge (u, v) with minimum weight;
if T(v) ≠ T(u), add (u, v) to the forest and union T(v) and T(u).
5. Return T.
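A minimal C++ sketch of Kruskal's algorithm, using a simple union-find structure in place of explicit set operations (all names are illustrative assumptions):

```cpp
#include <vector>
#include <algorithm>

struct Edge { int u, v, w; };
bool byWeight(const Edge& a, const Edge& b) { return a.w < b.w; }

// Find the set representative, with simple path halving.
int findSet(std::vector<int>& parent, int x)
{
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

// Kruskal: returns the total weight of a minimum spanning tree of a
// connected undirected graph with n vertices.
int kruskal(int n, std::vector<Edge> edges)
{
    std::sort(edges.begin(), edges.end(), byWeight);   // ascending weights
    std::vector<int> parent(n);
    for (int i = 0; i < n; i++) parent[i] = i;         // each vertex its own set
    int total = 0, used = 0;
    for (size_t i = 0; i < edges.size() && used < n - 1; i++) {
        int ru = findSet(parent, edges[i].u);
        int rv = findSet(parent, edges[i].v);
        if (ru != rv) {               // different trees: the edge is safe to add
            parent[ru] = rv;          // union the two trees
            total += edges[i].w;
            used++;
        }
    }
    return total;
}
```

The union-find check is what prevents an edge from forming a cycle: two endpoints in the same set already belong to one tree.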
To explain kruskal’s algorithm let us consider the following example:
Step1: Consider the Graph as shown below.

Fig. 9.36: Graph

Step 2: Arrange all edges in their increasing order of their weight.


Step 3: Initially the tree is

Step 4: Add the edge with the least cost/weight. We consider edge weight
10, which connects the vertices 0 and 5.

Step 5: Add the edge with the next least cost/weight. We consider edge
weight 12, which connects the vertices 2 and 3.
Step 6: Add the edge with the next least cost/weight. We consider edge
weight 14, which connects the vertices 1 and 6.

Step 7: Add the edge with the next least cost/weight. We consider edge
weight 16, which connects the vertices 1 and 2.

Step 8: Add the edge with the next least cost/weight. We consider edge
weight 22, which connects the vertices 3 and 4.
Step 9: Add the edge with the next least cost/weight. We consider edge
weight 25, which connects the vertices 4 and 5.

9.9.2 What is Prim’s Algorithm?


Prim's algorithm is used to find minimum cost spanning tree. It (as
Kruskal's algorithm) uses the greedy approach. Prim's algorithm shares a
similarity with the shortest path first algorithms. Prim's algorithm, in
contrast with Kruskal's algorithm, treats the nodes as a single tree and
keeps on adding new nodes to the spanning tree from the given graph.
Steps for Prim’s Algorithm:

1. Let G be a graph that is connected, weighted and undirected.
2. Create two sets V and V' such that
V' is empty and
V contains all the vertices in the graph.
3. Select the minimum weighted edge (i, j) from G and insert i and j into the set
V'.
4. Repeat step 5 until V' is equal to V.
5. Among all neighbouring edges such that one end-point is in V' and the
other is not, select the one with minimum weight and add its end-point to V'.
6. Sum up all selected edge weights and exit.
Example for Prim’s Algorithm:

Fig. 9.37: Graph

Step 1: Remove all loops and remove all parallel edges

Step 2: Consider the least cost weighted edge 10, connecting the vertices 0
and 5.
Step 3: From the vertices 0 and 5, consider the least cost edge that
does not form a cycle. So connect the least cost edge 25, with connected
vertices 4 and 5.

Step 4: From the vertices reached by the edges 10 and 25, we have to
consider the least cost edge. So connect the least cost edge 22, with connected
vertices 3 and 4.

Step 5: From the vertices reached by the edges 10, 25 and 22, we
have to consider the least cost edge that does not form a cycle. So
connect the least cost edge 12, with connected vertices 2 and 3.
Step 6: From the vertices reached by the edges 10, 25, 22 and 12,
we have to consider the least cost edge that does not form a cycle. So
connect the least cost edge 16, with connected vertices 1 and 2.

Step 7: From the vertices reached by the edges 10, 25, 22, 12 and 16,
we have to consider the least cost edge that does not form a cycle. So
connect the least cost edge 14, with connected vertices 1 and 6.

9.9.3 What is Sollins Algorithm?


Sollin's algorithm is used to find minimum cost spanning tree. It (as
Kruskal's algorithm) uses the greedy approach.
To explain sollin’s algorithm let us consider the following example:
Fig. 9.38: Graph

Step 1: Remove all loops and remove all parallel edges

Step 2:

Step3: Initially consider all the vertices.


Step 4: Consider all the least cost edges of the vertices in a greedy approach, as
shown below.

Step 5: Continue the greedy approach, considering the edges of the vertices
while avoiding any edge that forms a cycle.

9.10 What is the Shortest Path and Transitive Closure?


The minimum cost path between the vertices in a graph is called the
shortest path.
The transitive closure is a Boolean matrix in which the existence of
directed paths between vertices is recorded. It can be generated by
BFS or DFS. While computing the transitive closure, we start with some
vertex and find all vertices which are reachable from it, repeating this for
every vertex.
Consider a Graph as shown below.

Fig. 9.39: Digraph G


The corresponding adjacency matrix for Digraph G is

Fig. 9.40: Adjacency Matrix A for G

Transitive closure matrix (A+ ) for the Graph G is shown below.

Fig. 9.41: Transitive closure matrix A+


Reflexive transitive closure (A*) for the Graph G is
Fig. 9.42: Reflexive transitive closure A*
9.11 What is Single Source/All Destinations by Dijkstra’s
Algorithm?
This algorithm is used to find the shortest path between the vertices
in the graph. This algorithm finds the cost of shortest path from source
vertex to destination vertex for non-negative edge cost.
Steps for Dijkstra’s Algorithm:
1) Assign every node a tentative distance.
2) Set initial node as current and mark all nodes as unvisited.
3) For the current node, consider all its unvisited neighbours and calculate their
tentative distances. Compare each newly calculated distance with the current
assigned value and assign the smaller one.
4) When all the neighbours are considered of the current node mark it
visited.
5) If the destination is marked visited then stop.
6) End
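The steps above can be sketched in C++ with a priority queue (names illustrative; edge costs are assumed non-negative, as the algorithm requires):

```cpp
#include <vector>
#include <queue>
#include <utility>
#include <functional>
#include <climits>

// Dijkstra on an adjacency list of (neighbour, cost) pairs; returns the
// shortest distance from 'src' to every vertex (INT_MAX = unreachable).
std::vector<int> dijkstra(
        const std::vector< std::vector< std::pair<int,int> > >& adj, int src)
{
    std::vector<int> dist(adj.size(), INT_MAX);
    // Min-heap of (tentative distance, vertex) pairs.
    std::priority_queue< std::pair<int,int>,
                         std::vector< std::pair<int,int> >,
                         std::greater< std::pair<int,int> > > pq;
    dist[src] = 0;
    pq.push(std::make_pair(0, src));
    while (!pq.empty()) {
        int d = pq.top().first, v = pq.top().second;
        pq.pop();
        if (d > dist[v]) continue;               // stale queue entry: skip
        for (size_t i = 0; i < adj[v].size(); i++) {
            int w = adj[v][i].first, cost = adj[v][i].second;
            if (dist[v] + cost < dist[w]) {      // dist[w] = min(dist[w], dist[v]+cost)
                dist[w] = dist[v] + cost;
                pq.push(std::make_pair(dist[w], w));
            }
        }
    }
    return dist;
}
```

The relaxation inside the loop is exactly the rule dist[v] = min(dist[v], dist[w] + cost[w,v]) used in the worked example below.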
To explain about shortest path by using dijkstra’s algorithm let us
consider the following example
Fig. 9.43: Directed Graph
Let us consider length adjacency matrix,

Fig. 9.44: Length Adjacency Matrix of Graph

Since, dist[v] = min(dist[v], dist[w] + cost[w,v])


Let us consider vertex ‘4’ as the source node. The value of dist and the
selected vertex are shown in the first step. In each following step, consider the next shortest
cost path and update dist from ‘v’ to all other vertices which are not yet
included in the set ‘S’.
Steps in Dijkstra’s Algorithm:
Step 1: Assign every node a tentative distance.
Step2: Set initial node as current and mark all nodes as unvisited.
Step 3: For current node, consider all unvisited nodes and calculate
tentative distance. Compare current distance with calculated distance and
assign the smaller value.
Step 4: If the destination is marked visited then stop.
The time complexity required is O(n log n + e).
9.12 What is All Pairs Shortest Path?
In the all pairs shortest path problem, we are to find the shortest path between
all pairs of vertices u and v, u ≠ v. This problem can also be solved as n
independent single source/all destinations problems, using each vertex of graph G as
a source vertex. If we follow this approach it takes O(n⁴) time. If we follow the
dynamic programming approach, we can reduce the time complexity to
O(n³). The algorithm requires that graph G has no cycles with negative length.
Here we represent the graph as a length adjacency matrix.
We define A^k[i][j] to be the length of the shortest path from i to j
going through no intermediate vertex of index greater than k. A^(n-1)[i][j] will
be the length of the shortest i to j path in G, since G contains no vertex with
index greater than n-1. A^(-1)[i][j] is length[i][j].
The basic idea in the all pairs shortest path algorithm is to generate the matrices
A^(-1), A^0, A^1, …, A^(n-1). We will follow the
formula shown below.


A^k[i][j] = min{A^(k-1)[i][j], A^(k-1)[i][k] + A^(k-1)[k][j]}; k ≥ 0
and
A^(-1)[i][j] = length[i][j]
Example for All Pairs Shortest Path:
Let us consider the graph G,

Fig. 9.45: Graph G


Step 1: Initially A^(-1) is the same as length.

Step 2: Now consider A^0, the paths from i to j whose intermediate vertices have
index less than or equal to 0.
A^0[2][1] = min(A^(-1)[2][1], A^(-1)[2][0] + A^(-1)[0][1]) = min(∞, 3 + 4) = 7.
Step 3: Let us calculate A^1[i][j]; here the intermediate index must not exceed 1.

A^1[0][2] = min(A^0[0][2], A^0[0][1] + A^0[1][2]) = min(11, 4 + 2) = 6.


Step 4: Compute A^2[i][j], where the intermediate index must not exceed 2.

A^2[1][0] = min(A^1[1][0], A^1[1][2] + A^1[2][0]) = min(6, 2 + 3) = 5.
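The recurrence above can be sketched in C++ as follows (names illustrative; a large constant stands in for the ∞ entries of the length matrix):

```cpp
#include <vector>

const int INF = 1000000;   // stands in for "no edge" in the length matrix

// All pairs shortest paths: a[i][j] is updated in place following
// A^k[i][j] = min(A^(k-1)[i][j], A^(k-1)[i][k] + A^(k-1)[k][j]).
void allPairs(std::vector< std::vector<int> >& a)
{
    int n = (int)a.size();
    for (int k = 0; k < n; k++)          // allow intermediate vertices up to k
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (a[i][k] + a[k][j] < a[i][j])
                    a[i][j] = a[i][k] + a[k][j];
}
```

The three nested loops make the O(n³) time complexity of the dynamic programming approach explicit.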


10. What is Sorting?
Sorting is a technique to rearrange a list of elements in either
ascending or descending order, which can be numerical, lexicographical, or
any user-defined order. Sorting can be classified into two types:
1) Internal Sorting
2) External Sorting
1) Internal Sorting:
If the data to be sorted remains in main memory and also the sorting
is carried out in main memory then it is called internal sorting. Internal
sorting takes place in the main memory of a computer. The internal sorting
methods are applied to small collection of data. The following are some
internal sorting techniques:

1. Insertion sort
2. Merge Sort
3. Quick Sort
4. Heap Sort

2) External Sorting:
If the data resides in secondary memory and is brought into main
memory in blocks for sorting and then result is returned back to secondary
memory is called external sorting. External sorting is required when the
data being sorted do not fit into the main memory.
The following are some external sorting techniques:

1. Two-Way External Merge Sort


2. K-way External Merge Sort

10.1 What is Insertion Sort?


The insertion sort algorithm is a simple sorting algorithm that works the
way we sort playing cards in our hands. The basic step in this method is
to insert a new record into a sorted sequence of i records in such a way that
the resulting sequence of size i+1 is also ordered.
This sort is very simple to implement and very efficient
when we want to sort a small number of elements. It has excellent
performance when the elements are almost sorted. It is more efficient than
bubble and selection sorts. This sort is stable, and it is an in-place sorting
technique.
Example for Insertion Sort Technique:
Consider the list of elements as shown below.

Step1: The process starts with the first element.

Step 2: Compare 70 with 30 and insert it at its position.

Step 3: Compare 20 with the elements in sorted zone and insert it in that
zone at appropriate position.

Step 4: Compare 50 with the elements in sorted zone and insert it in that
zone at appropriate position.
Step 5: Compare 40 with the elements in sorted zone and insert it in that
zone at appropriate position.

Step 6: Compare 10 with the elements in sorted zone and insert it in that
zone at appropriate position.

Step 7: Compare 60 with the elements in sorted zone and insert it in that
zone at appropriate position. Finally we get the sorted list of elements.

The algorithm is as follows :


Insert_sort(a[0…….n-1])
for i = 1 to n-1 do
{
temp = a[i];
j = i – 1;
while(j >= 0 and a[j] > temp) do
{
a[j+1] = a[j];
j = j – 1;
}
a[j+1] = temp;
}
PROGRAM TO ILLUSTRATE INSERTION SORT TECHNIQUE:
#include <iostream>
using namespace std;

void insertion_sort(int a[], int n);

int main()
{
    int a[10], n, i;
    cout << "enter the size of the array: ";
    cin >> n;
    cout << "enter the array elements: ";
    for(i = 0; i < n; i++)
        cin >> a[i];
    insertion_sort(a, n);
    cout << "the elements after sorting are: ";
    for(i = 0; i < n; i++)
        cout << a[i] << " ";
    return 0;
}

void insertion_sort(int a[], int n)
{
    int i, j, temp;
    for(i = 1; i < n; i++)
    {
        temp = a[i];
        j = i - 1;
        while(j >= 0 && a[j] > temp)
        {
            a[j+1] = a[j];
            j = j - 1;
        }
        a[j+1] = temp;
    }
}
The time complexity of insertion sort in the best case is O(n); in the
average and worst cases it is O(n²).
Time complexity, T(n) = 1 + 2 + 3 + … + (n-1)
= n(n-1)/2
= O(n²)
10.2 What is Quick Sort?
This sorting algorithm uses the divide and conquer strategy. In this
method, the division is carried out dynamically. It contains three steps:
Divide – split the array into two sub arrays so that each element in the
right sub array is greater than the middle element and each element in the
left sub array is less than the middle element. The splitting is done based on
the middle element, called the pivot. All the elements less than the pivot will be in
the left sub array and all the elements greater than the pivot will be in the right sub
array.
Conquer – recursively sort the two sub arrays.
Combine – combine all the sorted elements into a single list.
Consider an array A[i] where i ranges from 0 to n – 1; then the
division of elements is as follows:
A[0]……A[m – 1], A[m], A[m + 1] …….A[n]
The partition algorithm arranges the elements such that all the
elements less than the pivot are in the left sub array and those greater than the pivot
are in the right sub array. The time complexity of the quick sort algorithm in the
worst case is O(n²); in the best and average cases it is O(n log n).
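A minimal C++ sketch of the partitioning quick sort described above, taking the first element as the pivot (the function name is illustrative), matching the i/j pointer movement walked through in the example that follows:

```cpp
// Quick sort with the first element as the pivot.
void quick_sort(int a[], int low, int high)
{
    if (low >= high) return;
    int pivot = a[low];
    int i = low, j = high;
    while (i < j) {
        while (i < high && a[i] <= pivot) i++;   // move i right past elements <= pivot
        while (a[j] > pivot) j--;                // move j left past elements > pivot
        if (i < j) {                             // out-of-place pair: swap them
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
    int t = a[low]; a[low] = a[j]; a[j] = t;     // put the pivot in its final place
    quick_sort(a, low, j - 1);                   // recursively sort left sublist
    quick_sort(a, j + 1, high);                  // recursively sort right sublist
}
```

After the swap of a[low] and a[j], the pivot sits at its final sorted position, so only the two sublists around it remain to be sorted.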
Example for Quick sort Technique:
Step 1: We will now split the array into two parts. The left sublist will
contain the elements less than the pivot (i.e., 50) and the right sublist contains
elements greater than the pivot.

Step 2: We will increment i. If a[i] ≤ pivot, we will continue to increment it
until the element pointed to by i is greater than a[low].

Step 3: Increment i as a[i] ≤ a[low].

Step 4: As a[i] > a[low], we will stop incrementing i.

Step 5: As a[j] > pivot (i.e., 70 > 50), we will decrement j. We will continue
to decrement j until the element pointed to by j is less than a[low].
Step 6: Now we cannot decrement j because 40 < 50. Hence, we will swap
a[i] and a[j] i.e., 90 and 40.

Step 7: As a[i] is less than a[low] and a[j] is greater than a[low] we will
continue incrementing i and decrementing j, until the false conditions are
obtained.

Step 8: We will stop incrementing i and stop decrementing j. As i is smaller


than j we will swap 80 and 20.

Step 9: As a[i] < a[low] and a[j] > a[low], we will continue incrementing i
and decrementing j.
Step 10: As a[j] < a[low] and j has crossed i, that is j < i, we will swap
a[low] and a[j].

Step 11: We will now start sorting left sublist, assuming the first element of
left sublist as pivot element. Thus now new pivot = 20.

Step 12: Now we will set the i and j pointers and then start comparing
a[i] with a[low] (the pivot). Similarly, we compare a[j] with the pivot.
Step 13: As a[i] > a[pivot], hence stop incrementing i. Now as a[j] >
a[pivot], hence decrement j.

Step 14: Now j cannot be decremented because 10 < 20. Hence we will
swap a[i] and a[j].

Step 15: As a[i] < a[low], increment i.

Step 16: Now as a[i] > a[low] or a[j] > a[pivot] decrement j.
Step 17: As a[j] < a[low] we cannot decrement j now. We will now swap
a[low] and a[j] as j has crossed i and i < j.

Step 18: As there is only one element in left sublist hence we will sort right
sublist.

Step 19: As left sublist is sorted completely we will sort right sublist,
assuming first element of right sublist as pivot.

Step 20: As a[i] < a[pivot], hence we will stop incrementing i. Similarly a[j]
< a[pivot], hence we stop decrementing j. Swap a[i] and a[j].

Step 21: As a[i] < a[pivot], increment i.


Step 22: As a[i] > a[pivot], decrement j.

Step 23: Now swap a[pivot] and a[j].

Step 24: The left sublist now contains 70 and right sublist contains only 90.
We cannot further subdivide the list.

Finally, this is the sorted list using quick sort.


The algorithm for quick sort:
Algorithm Quick_sort(a[0…….n-1], low, high)
if(low<high) then
m = partition(a[low …… high])
Quick_sort(a[low … m-1]);
Quick_sort(a[m+1 ….. high])
Algorithm partition(a[low …. high])
pivot = a[low], i = low, j = high
while(i <= j)
do
{
while(i <= high and a[i] <= pivot)
i = i + 1;
while(a[j] > pivot)
j = j - 1;
if(i < j) then
swap(a[i], a[j]);
}
swap(a[low], a[j]);
return j;
PROGRAM TO ILLUSTRATE QUICK SORT TECHNIQUE:
#include<iostream.h>
void Quick_sort(int[], int, int);
int partition(int[], int, int);
void swap(int *, int *);
main()
{
int a[10],n,i;
cout<<"enter the size of the array";
cin>>n;
cout<<"enter the array elements";
for(i=0;i<n;i++)
cin>>a[i];
Quick_sort(a,0,n-1);
cout<<"the elements after sorting are";
for(i=0;i<n;i++)
cout<<a[i]<<" ";
}
void Quick_sort(int a[], int low, int high)
{
int m;
if(low<high)
{
m = partition(a, low, high);
Quick_sort(a, low, m-1);
Quick_sort(a, m+1, high);
}
}
int partition(int a[], int low, int high)
{
int pivot = a[low], i = low, j = high;
while(i <= j)
{
while(i <= high && a[i] <= pivot)
i = i + 1;
while(a[j] > pivot)
j = j - 1;
if(i < j)
swap(&a[i], &a[j]);
}
swap(&a[low], &a[j]);
return j;
}
void swap(int *x, int *y)
{
int temp;
temp = *x;
*x = *y;
*y = temp;
}
Time Complexity for Quick Sort:
Worst Case Complexity [Big-O]: O(n²)
It occurs when the pivot element picked is either the greatest or the
smallest element. This condition leads to the case in which the pivot
element lies in an extreme end of the sorted array. One sub-array is always
empty and another sub-array contains n - 1 elements.
Thus, quicksort is called only on this sub-array. However, the quick
sort algorithm has better performance for scattered pivots.
Best Case Complexity [Big-omega]: O(n*log n)
It occurs when the pivot element is always the middle element or near
to the middle element.
Average Case Complexity [Big-theta]: O(n*log n)
It occurs when the pivot positions are distributed randomly, i.e., neither
always in the middle nor always at the extreme ends of the list.
10.3 What is Merge Sort?
This sorting algorithm also uses the divide and conquer strategy. In this
method, the array is always split at its midpoint. It contains three steps:
Divide – split the array into two sub arrays s1 and s2 with each n/2
elements.
Conquer – sort the two sub arrays s1 and s2.
Combine – combine or merge s1 and s2 elements into a unique sorted list.
Example 1 for Merge sort technique:
Applying merge sort technique to sort the list E, X, A, M, P, L, E in
alphabetical order.

Example 2 for Merge sort technique:


Applying merge sort technique to sort the list 40, 19, 22, 15, 81, 48, 92, 17
in ascending order.
The Algorithm for Merge Sort technique is shown below.
Algorithm Merge_sort(a[0…….n-1], low, high)
if(low<high) then
mid = (low + high) / 2;
Merge_sort(a, low, mid);
Merge_sort(a, mid+1, high);
Combine(a, low, mid, high);
Algorithm Combine(a[0 ….n-1], low, mid, high)
{
k = low;
i = low;
j = mid+ 1;
while(i <= mid && j <= high) do
{
if(a[i] <= a[j]) then
{
temp[k] = a[i];
k++;
i++;
}
else
{
temp[k] = a[j];
k++;
j++;
}
}
while(i <= mid) do
{
temp[k] = a[i];
k++;
i++;
}
while(j <= high) do
{
temp[k] = a[j];
k++;
j++;
}
for(i = low; i <= high; i++) do
a[i] = temp[i];
}
PROGRAM TO ILLUSTRATE MERGE SORT TECHNIQUE:
#include<iostream.h>
void Merge_sort(int[], int, int);
void Combine(int[], int, int, int);
main()
{
int a[10],n,i;
cout<<"enter the size of the array";
cin>>n;
cout<<"enter the array elements";
for(i=0;i<n;i++)
cin>>a[i];
Merge_sort(a,0,n-1);
cout<<"the elements after sorting are";
for(i=0;i<n;i++)
cout<<a[i]<<" ";
}
void Merge_sort(int a[], int low, int high)
{
int mid;
if(low<high)
{
mid = (low + high) / 2;
Merge_sort(a, low, mid);
Merge_sort(a, mid+1, high);
Combine(a, low, mid, high);
}
}
void Combine(int a[], int low, int mid, int high)
{
int i, j, k, temp[10];
k = low;
i = low;
j = mid+ 1;
while(i <= mid && j <= high)
{
if(a[i] <= a[j])
{
temp[k] = a[i];
k++;
i++;
}
else
{
temp[k] = a[j];
k++;
j++;
}
}
while(i <= mid)
{
temp[k] = a[i];
k++;
i++;
}
while(j <= high)
{
temp[k] = a[j];
k++;
j++;
}
for(i = low; i <= high; i++)
a[i] = temp[i];
}
Time Complexity of Merge Sort:
Best Case Complexity: O(n*log n)
Worst Case Complexity: O(n*log n)
Average Case Complexity: O(n*log n)
10.4 What is Heap Sort?
Heap is a complete binary tree and also a Max(Min) tree. A Max(Min)
tree is a tree whose root value is larger(smaller) than its children. This
sorting technique was developed by J.W.J. Williams. It works
in two stages.

1. Heap construction
2. Deletion of a Maximum element key
The heap is first constructed with the given ‘n’ numbers. The maximum
key value is then deleted ‘n – 1’ times from the remaining heap. Hence we will
get the elements in decreasing order. The elements are deleted one by one
and stored in the array from last to first. Finally we get the elements in
ascending order.
The important points about heap sort technique are:

The time complexity of heap sort is O(n log n).


This is an in-place sorting algorithm.
For random input it works slower than quick sort
Heap sort is not a stable sorting method
The space complexity of heap sort is O(1).

Example for Heap Sort Technique:


Step 1:
Step 2:

Step 3:

Step 4:
Step 5:

Step 6:

Step 7:
Step 8:

Step 9:

Step 10:
Step 11:

Step 12:

Step 13:
Step 14:

Step 15:

Step 16:

Step 17:
Step 18:

Step 19:

Algorithm Heap Sort Technique:

1. Build a max heap from the input data.


2. At this point, the largest item is stored at the root of the heap.
Replace it with the last item of the heap followed by reducing
the size of heap by 1. Finally, heapify the root of tree.
3. Repeat above steps while size of heap is greater than 1.
Steps for Working of Heap Sort:
Initially, on receiving an unsorted list:
Step 1: The first step in heap sort is to build a Max-Heap. Repeat the second,
third and fourth steps until we have the complete sorted list in our array.
Step 2: Once heap is built, the first element of the Heap is largest, so we
exchange first and last element of a heap.
Step 3: We delete and put last element (largest) of the heap in our array.
Step 4: Then we again make heap using the remaining elements, to again
get the largest element of the heap and put it into the array. We keep on
doing the same repeatedly until we have the complete sorted list in our
array.
PROGRAM TO ILLUSTRATE HEAP SORT TECHNIQUE:
#include<iostream.h>
void Heap_sort(int[], int);
void Makeheap(int[], int);
main()
{
int a[10], n, i;
cout<<"enter the size of the array";
cin>>n;
cout<<"enter the array elements";
for(i=0;i<n;i++)
cin>>a[i];
Makeheap(a, n);
Heap_sort(a, n);
cout<<"the elements after sorting are";
for(i=0;i<n;i++)
cout<<a[i]<<" ";
}
void Makeheap(int a[], int n)
{
int i, val, j, parent;
for(i=1;i<n;i++)
{
val = a[i];
j = i;
parent = (j – 1) / 2;
while(j>0 && a[parent] < val)
{
a[j] = a[parent];
j = parent;
parent = (j – 1) / 2;
}
a[j] = val;
}
}
void Heap_sort(int a[], int n)


{
int i, j, k, temp;
for(i=n-1;i>0;i--)
{
temp = a[i];
a[i] = a[0];
k = 0;
if(i == 1)
j = -1;
else
j = 1;
if(i > 2 && a[2] > a[1])
j = 2;
while(j >=0 && temp < a[j])
{
a[k] = a[j];
k = j;
j = 2 * k +1;
if(j+1 <= i-1 && a[j] < a[j+1])
j++;
if(j > i-1)
j = -1;
}
a[k] = temp;
}
}
Time Complexity for Heap Sort:
Heap Sort has O(n log n) time complexities for all the best case,
average case, and worst cases.
10.5 What is Radix Sort?
In the radix sort method, sorting is done digit by digit, starting from
the least significant digit, and thus all the elements can be sorted.
Example for Radix Sort:
Consider the unsorted array of 8 elements.
45, 37, 05, 09, 06, 11, 18, 27
Step 1: Now sort the elements according to last digit.

The list is now ordered by the last digit.


Step 2: Now sort the above array with the help of second last digit.
Since the elements have at most two digits, we will stop
comparing here. The list we have now obtained (shown in the above array) is
the sorted list. Thus finally the sorted list by the radix sort method will be
05, 06, 09, 11, 18, 27, 37, 45
Steps of Algorithm for Radix Sort:

1. Read the total number of elements in the array.


2. Store the unsorted elements in the array.
3. Now the simple procedure is to sort the elements digit by
digit.
4. Sort the elements according to the last digit then second last
digit and so on.
5. Thus the elements should be sorted up to the most significant
digit.
6. Store the sorted element in the array and print them.
7. Stop.
PROGRAM TO ILLUSTRATE RADIX SORT TECHNIQUE:
#include<iostream.h>
int getMax(int arr[], int n)
{
int max = arr[0];
for (int i = 1; i < n; i++)
if (arr[i] > max)
max = arr[i];
return max;
}
void countSort(int arr[], int n, int exp)
{
int output[n], i, count[10] = {0};
for (i = 0; i < n; i++)
count[(arr[i] / exp) % 10]++;
for (i = 1; i < 10; i++)
count[i] += count[i-1];
for (i = n - 1; i >= 0; i--)
{
output[count[(arr[i] / exp) % 10] - 1] = arr[i];
count[(arr[i] / exp) % 10]--;
}
for (i = 0; i < n; i++)
arr[i] = output[i];
}
void radixsort(int arr[], int n)
{
int exp, m;
m = getMax(arr, n);
for (exp = 1; m/exp > 0; exp *= 10)
countSort(arr, n, exp);
}
int main()
{
int n, i;
cout<<"\nEnter the number of data element to be sorted: ";
cin>>n;
int arr[n];
for(i = 0; i < n; i++)
{
cout<<"Enter element "<<i+1<<": ";
cin>>arr[i];
}
radixsort(arr, n);
cout<<"\nSorted Data ";
for (i = 0; i < n; i++)
cout<<"->"<<arr[i];
return 0;
}
10.6 What is Selection Sort?
Selection sort is a simple sorting algorithm. This sorting algorithm is an
in-place comparison-based algorithm in which the list is divided into two
parts, the sorted part at the left end and the unsorted part at the right end.
Initially, the sorted part is empty and the unsorted part is the entire list.
The smallest element is selected from the unsorted array and swapped
with the leftmost element. Then that element becomes a part of the sorted
array. This process continues till the unsorted list becomes empty.
Example for Selection Sort:
Sorting the given list of numbers 20, 13, 10, 15, 2 using Selection Sort
Technique.
Initially, set the first element as minimum. Compare minimum with the
second element. If the second element is smaller than minimum, assign the
second element as minimum. Compare minimum with the third element.
Again if the third element is smaller, then assign minimum to the third
element; otherwise do nothing. This process goes on until the last element.
After each iteration, minimum is placed in the front of the unsorted list. For
each iteration, indexing starts from the first unsorted element. All the above
process is repeated until all the elements are processed at their correct
positions.
In First iteration.
In second iteration.
In third iteration.

In fourth iteration.
Algorithm for Selection Sort:
SelectionSort(array, size)
repeat(size-1) times
set the first unsorted element as the minimum
for each of the unsorted elements
if element < currentMinimum
set element as newMinimum
swap minimum with first unsorted position
end selection sort
PROGRAM FOR SELECTION SORT:
#include<iostream.h>
void swap(int *a, int *b)
{
int temp = *a;
*a = *b;
*b = temp;
}
void printArray(int array[], int size) {
for (int i = 0; i < size; i++) {
cout << array[i] << " ";
}
cout << endl;
}
void selectionSort(int array[], int size) {
for (int step = 0; step < size - 1; step++) {
int min_idx = step;
for (int i = step + 1; i < size; i++) {
if (array[i] < array[min_idx])
min_idx = i;
}
swap(&array[min_idx], &array[step]);
}
}
int main() {
int data[] = {20, 13, 10, 15, 2};
int size = sizeof(data) / sizeof(data[0]);
selectionSort(data, size);
cout << "Sorted array in Ascending Order:\n";
printArray(data, size);
}
Time Complexity for Selection Sort:
Worst Case Complexity: O(n²)
If we want to sort in ascending order and the array is in descending order
then, the worst case occurs.
Best Case Complexity: O(n²)
It occurs when the array is already sorted.
Average Case Complexity: O(n²)
It occurs when the elements of the array are in jumbled order i.e., neither
ascending nor descending.
10.7 What is Bubble Sort?
Bubble sort is the simplest kind of sorting method. We do the bubble
sort in several iterations called passes.
Steps of Algorithm for Bubble Sort:

1. Read the total number of elements, say n.

2. Store the elements in the array.
3. Set i=0.
4. Compare the adjacent elements.
5. Repeat step 4 for all n elements.
6. Increment the value of i by one and repeat steps 4 and 5 for i<n.
7. Print the sorted list of elements.
8. Stop.
Example for Bubble Sort:
Starting from the first index, compare the first and the second elements.
If the first element is greater than the second element they are swapped.
Now compare the second and the third elements. Swap them if they are not
in order. Repeat the above process until the last element.
Repeating the above process for remaining iterations. After each
iterations, the largest element among the unsorted elements is placed at the
end. In each iteration the comparison takes place up to the last unsorted
element. The array will be sorted when all the unsorted elements are placed
at their correct positions.
Algorithm for Bubble Sort:
BubbleSort(array)
for i <- 1 to indexOfLastUnsortedEle – 1
if leftEle > rightEle
swap leftEle and rightEle
end BubbleSort
PROGRAM FOR ILLUSTRATION OF BUBBLE SORT:
#include<iostream.h>
void bubbleSort(int array[], int size) {
for (int step = 0; step < size - 1; ++step) {
for (int i = 0; i < size - step - 1; ++i) {
if (array[i] > array[i + 1]) {
int temp = array[i];
array[i] = array[i + 1];
array[i + 1] = temp;
}
}
}
}
void printArray(int array[], int size) {
for (int i = 0; i < size; ++i) {
cout << " " << array[i];
}
cout << "\n";
}
int main() {
int data[] = {-12, 45, 0, 11, -19};
int size = sizeof(data) / sizeof(data[0]);
bubbleSort(data, size);
cout << "Sorted Array in Ascending Order:\n";
printArray(data, size);
}
Time Complexity for Bubble Sort:
Worst Case Complexity: O(n²)
If we want to sort in ascending order and the array is in descending order
then, the worst case occurs.
Best Case Complexity: O(n)
If the array is already sorted, then there is no need for sorting.
Average Case Complexity: O(n²)
It occurs when the elements of the array are in jumbled order i.e., neither
ascending nor descending.
Laboratory Work on Data Structures
Experiment 1: How to implement Multistack in a
Single Array through C++
AIM: To write a C++ Program to implement Multistack in a Single Array.
Description:
When a stack is created using a single array, we are not able to store a
large amount of data; this problem is rectified by using more than one stack
in the same array of sufficient size. This technique is called Multiple
Stack.
A simple way to implement k stacks is to divide the array into k slots of size
n/k each, and fix the slots for different stacks, i.e., use arr[0] to arr[n/k-1]
for the first stack, and arr[n/k] to arr[2n/k-1] for the second stack, where arr[]
is the array used to implement the stacks and n is its size.
PROGRAM:
#include<iostream.h>
#include<conio.h>
#define Maxsize 50
class multiplestack// multi Stack declaration
{
private:
int *t,*b,n,*mstack;
public:
multiplestack(){}
multiplestack(int x=5) //constructor which divides the single array into x
stacks
{
n=x;
t= new int[n];
b= new int[n];
mstack=new int[Maxsize];//declaring array mstack to maxsize
for(int i=0;i<n;i++)
t[i]=b[i]=(i*(Maxsize/n))-1;
}
/* func: to add element x to the particular stack i */
void Add(int i,int x)
{
//if top of ith stack reaches the bottom of the (i+1)th stack (or the end of the
//array for the last stack), the stack is full.
if((i < n-1 && t[i]==b[i+1]) || (i == n-1 && t[i]==Maxsize-1))
StackFull(i);
else //otherwise insert the element at the ith top position by incrementing
//top by 1.
mstack[++t[i]]=x;
}
/* func: to delete element from the particular stack i */
int Delete(int i)
{
if(t[i]==b[i]) //if top of ith stack is equal to the bottom of ith stack, then the
stack is empty.
{
StackEmpty(i);
return 0;
}
int x=mstack[t[i]--]; //otherwise decrement the top of ith stack by 1.
return x;
}
void StackFull(int i)//func: To display stack is full
{
cout<<"\nStack "<<i<<" is Full\n";
}
void StackEmpty(int i)//func: To display stack is empty
{
cout<<"\nStack "<<i<<" is Empty\n";
}
/* func: To display the ith stack element */
void Display(int i)
{
if(t[i]==b[i]) //if top of ith stack is equal to the bottom of ith stack, then the
stack is empty.
StackEmpty(i);
else //display all elements from bottom+1 position to top of the stack i.
{
cout<<"\nelements in Stack "<<i<<"are::";
for(int j=b[i]+1;j<=t[i];j++)
cout<<"\t"<<mstack[j];
}
}
};
int main()
{
multiplestack ms(5);
int x;
clrscr();
cout<<"\nPushing Elements(45,54,78) onto stack 0";
// inserting elements 45,54 and 78 into stack 0.
ms.Add(0,45);
ms.Add(0,54);
ms.Add(0,78);
ms.Display(0);
x=ms.Delete(0);
cout<<"\nDeleted element from the stack 0 is:"<<x;
ms.Display(0);
cout<<"\nPushing elements 3,-1,9 onto stack 1";
ms.Add(1,3);
ms.Add(1,-1);
ms.Add(1,9);
ms.Display(1);
cout<<"\n::::Displaying elements from all Stacks(0-4)::::\n";
for(x=0;x<5;x++)
ms.Display(x);
return 1;
}
OUTPUT:
Experiment 2: How to implement Circular Queue
through C++
AIM: To write a C++ program to implement Circular Queue.
Description:
Circular Queue is a linear data structure in which the operations are
performed based on FIFO (First In First Out) principle and the last position
is connected back to the first position to make a circle.
Operations on Circular Queue:
• enQueue(value) This function is used to insert an element into the
circular queue. In a circular queue, the new element is always
inserted at Rear position.
• deQueue() This function is used to delete an element from the
circular queue. In a circular queue, the element is always deleted
from front position.
PROGRAM:
#include<iostream.h>
#include<conio.h>
#define Maxsize 5
class circular_queue// circular queue class declaration
{
private:
int rear,front;
int *cqueue;
int counter;//counter to check whether circular queue is full or not.
public:
circular_queue(){ //circular queue default constructor to initialize the
data members.
cqueue=new int[Maxsize];
front=rear=-1;
counter=-1;
}
/* Add func: Enqueue an element into the circular queue */
void Add(const int& x)
{
int newrear=(rear+1) % Maxsize;
if(counter==Maxsize-1){ //checking whether queue reaches the max
limit.
cout<<"element:"<<x<<"can't be inserted into queue";
cout<<":: Queue is full\n";
}
else{ // If queue not full, insert the element at the rear position.
cqueue[rear=newrear]=x;
if (front==-1) front=0;
counter++;
cout<<"element inserted:"<<x<<"\n";
}
}
int Delete(int x) /* Add func: Dequeue an element from the circular
queue */
{
if(counter==-1){ //check whether queue is empty, if empty return back
cout<<"Queue is empty\n";
return 0;
}
//if queue is not empty, the delete the element from the front position.
x=cqueue[front];
front=(front+1) % Maxsize;
counter--;
return x;
}
void display() // To display elements in the queue.
{
int i;
i=front;
if(counter==-1) //if counter <0 then queue is empty, so nothing to
display.
cout<<"Queue is empty\n";
else // if counter >0 then display all the elements from front to rear.
{
cout<<"\n elements in circular queue are:";
while(i!=rear)
{
cout<<cqueue[i]<<"\t";
i=(i+1) % Maxsize;
}
cout<<cqueue[i];
cout<<"\n";
}
}
};
int main()
{
circular_queue cq;
int x;
clrscr();
cout<<"Initially circular queue:";
cq.display();
cout<<"::Inserting 6 values(5,9,6,3,1,8) into circular queue::\n";
cq.Add(5);
cq.Add(9);
cq.Add(6);
cq.Add(3);
cq.Add(1);
cq.Add(8);
cq.display();
cout<<"\n::Deleting two elements from circular queue:: \n";
x=cq.Delete(x);
cout<<"Deleted element:"<<x<<"\n";
x=cq.Delete(x);
cout<<"Deleted element:"<<x<<"\n";
cout<<"::After deletion of two elements from circular queue::";
cq.display();
cq.Add(18);
cout<<"::After element 18 added to the queue::\n";
cq.display();
return 0;
}
OUTPUT:
Experiment 3: How to implement Singly Linked
List through C++
AIM: To write a C++ program to implement Singly Linked List.
Description:
A linked list is represented by a pointer to the first node of the linked list.
The first node is called head. If the linked list is empty, then value of head
is NULL. Each node in a list consists of at least two parts:
1) data
2) pointer to the next node
On the list, we can perform operations like insertion, deletion and display of
list nodes.
PROGRAM:
#include<iostream.h>
#include<conio.h>
#include<stdlib.h>
template <class Type> class List;//forward declaration
template <class Type>
//Node declaration which contains data and link fields
class ListNode{
friend class List<Type>;
private:
Type data;
ListNode<Type> *link;
public:
ListNode(Type element=0)//0 is the default argument to the constructor
{
data=element;
link=0;//null pointer constant
}
};
//List class declaration which contains a list operations
template<class Type>
class List{
private:
ListNode<Type> *first;
public:
List()//constructor initializing first to 0
{
first=0;
}
//List Manipulation Operations
void Create2(Type ele1,Type ele2)
{
first= new ListNode<Type>(ele1);//create and initialize first node
//creating the second node
first->link=new ListNode<Type>(ele2);
}
void Insert(int pos, Type ele)
{
ListNode<Type> *t=new ListNode<Type>(ele);
if(!first)//insert into empty list
{
first=t;
return;
}
//insert after position pos
ListNode<Type> *ptr;
int c=1;
ptr=first;
while(c<pos && ptr->link!=0)
{
ptr=ptr->link;
c++;
}
t->link=ptr->link;
ptr->link=t;
}
void Delete(int pos)//Deletion of node at the given position
{
ListNode<Type> *ptr;
int i=1;
ptr=first;
while(i<pos-1 && ptr->link->link!=0)//checking position along with end of
the list
{
ptr=ptr->link;
i++;
}
ListNode<Type> *t=ptr->link;
ptr->link = t->link;
delete t;//deletion of the node
}
void Display()// To display the list elements
{
ListNode<Type> *ptr;
ptr=first;
if(!first)//If first is null, list is empty
{
cout<<"List is Empty";
}
else//displaying all the data fields in the list until ptr reaches last node(ptr-
>link==NULL)
{
cout<<"Element in the List:";
do{
cout<<ptr->data<<"\t";
ptr=ptr->link;
}while(ptr!=0);
}
}
};
int main()
{
int ch,e1,e2,pos;
List <int> sl;
clrscr();
do{
cout<<"\nMain Menu::::\n1:Create\n2:Insert\n3:Delete\n4:Display\n5:Exit\n";
cout<<"enter your choice:";
cin>>ch;
switch(ch){
//case 1: create.....
case 1: cout<<"Enter the first node element value(int):";
cin>>e1;
cout<<"Enter the Next Node value:";
cin>>e2;
sl.Create2(e1,e2);
break;
//case 2: inserting at given position.....
case 2:cout<<"enter the element value:";
cin>>e1;
cout<<"Enter the position value in list to insert the node:";
cin>>pos;
sl.Insert(pos,e1);
break;
//case 3: Deletion of nodes at the given position.....
case 3:cout<<"Enter the node position to delete the node:";
cin>>pos;
sl.Delete(pos);
break;
case 4:sl.Display();//case 4: To display all nodes data.....
break;
case 5:exit(0);
default: cout<<"Wrong Choice!!!!!!!";
break;
}
}while(1);
return 0;
}
OUTPUT:
Experiment 4: How to implement Doubly Linked
List through C++
AIM: To write a C++ program to implement Doubly Linked List.
Description:
In a double linked list, every node has a link to its previous node and its next
node. So, we can traverse forward by using the next field and can traverse
backward by using the previous field. Every node in a double linked list
contains three fields: prev, next and data.
In a double linked list, we perform the following operations:
1. Insertion: Insertion operation can be performed in three ways:
• Inserting an element at the beginning of the list.
• Inserting an element at the ending of the list.
• Inserting an element at the given position.
2. Deletion: Deletion operation can be performed in three different
ways:
• Deleting an element from beginning of the list.
• Deleting an element from ending of the list.
• Deleting an element from the specified position.
3. Display/ Traversing: Display can be performed in two ways:
• Display from start to end.
• Display of list element from end to start(In reverse).
PROGRAM:
/*
* C++ Program to Implement Doubly Linked List
*/
#include<iostream.h>
#include<conio.h>
#include<stdio.h>
#include<stdlib.h>
/*
* Node Declaration
*/
struct node
{
int info;
struct node *next;
struct node *prev;
}*start;

/*
Class Declaration
*/
class double_llist
{
public:
void create_list(int value);
void add_begin(int value);
void add_after(int value, int position);
void delete_element(int value);
void search_element(int value);
void display_dlist();
void count();
void reverse();
double_llist()
{
start = NULL;
}
};

/*
* Main: Contains Menu
*/
int main()
{
int choice, element, position;
double_llist dl;
clrscr();
while (1)
{
cout<<endl<<"Operations on Doubly linked list"<<endl;
cout<<"1.Create Node\n2.Add at beginning\n3.Add after position\n4.Delete\n5.Display\n6.Count\n7.Reverse\n8.Quit"<<endl;
cout<<"Enter your choice : ";
cin>>choice;
switch ( choice )
{
case 1:
cout<<"Enter the element: ";
cin>>element;
dl.create_list(element);
cout<<endl;
break;
case 2:
cout<<"Enter the element: ";
cin>>element;
dl.add_begin(element);
cout<<endl;
break;
case 3:
cout<<"Enter the element: ";
cin>>element;
cout<<"Insert Element after postion: ";
cin>>position;
dl.add_after(element, position);
cout<<endl;
break;
case 4:
if (start == NULL)
{
cout<<"List empty,nothing to delete"<<endl;
break;
}
cout<<"Enter the element for deletion: ";
cin>>element;
dl.delete_element(element);
cout<<endl;
break;
case 5:
dl.display_dlist();
cout<<endl;
break;
case 6:
dl.count();
break;
case 7:
if (start == NULL)
{
cout<<"List empty,nothing to reverse"<<endl;
break;
}
dl.reverse();
cout<<endl;
break;
case 8:
exit(1);
default:
cout<<"Wrong choice"<<endl;
}
}
return 0;
}

/*
* Create Double Link List
*/
void double_llist::create_list(int value)
{
struct node *s, *temp;
temp = new(struct node);
temp->info = value;
temp->next = NULL;
if (start == NULL)
{
temp->prev = NULL;
start = temp;
}
else
{
s = start;
while (s->next != NULL)
s = s->next ;
s->next = temp;
temp->prev = s;
}
}

/*
* Insertion at the beginning
*/
void double_llist::add_begin(int value)
{
if (start == NULL)
{
cout<<"First Create the list."<<endl;
return;
}
struct node *temp;
temp = new(struct node);
temp->prev = NULL;
temp->info = value;
temp->next = start;
start->prev = temp;
start = temp;
cout<<"Element Inserted"<<endl;
}

/*
* Insertion of element at a particular position
*/
void double_llist::add_after(int value, int pos)
{
if (start == NULL )
{
cout<<"First Create the list."<<endl;
return;
}
struct node *tmp, *q;
int i;
q = start;
for (i = 0;i < pos - 1;i++)
{
q = q->next;
if (q == NULL)
{
cout<<"There are less than ";
cout<<pos<<" elements."<<endl;
return;
}
}
tmp = new(struct node);
tmp->info = value;
if (q->next == NULL)
{
q->next = tmp;
tmp->next = NULL;
tmp->prev = q;
}
else
{
tmp->next = q->next;
tmp->next->prev = tmp;
q->next = tmp;
tmp->prev = q ;
}
cout<<"Element Inserted"<<endl;
}
/*
* Deletion of element from the list
*/
void double_llist::delete_element(int value)
{
struct node *tmp, *q;
/*first element deletion*/
if (start->info == value)
{
tmp = start;
start = start->next;
if (start != NULL)
start->prev = NULL;
cout<<"Element Deleted"<<endl;
delete tmp;
return;
}
q = start;
while (q->next != NULL && q->next->next != NULL)
{
/*Element deleted in between*/
if (q->next->info == value)
{
tmp = q->next;
q->next = tmp->next;
tmp->next->prev = q;
cout<<"Element Deleted"<<endl;
delete tmp;
return ;
}
q = q->next;
}
/*last element deleted*/
if (q->next != NULL && q->next->info == value)
{
tmp = q->next;
q->next = NULL;
delete tmp;
cout<<"Element Deleted"<<endl;
return;
}
cout<<"Element "<<value<<" not found"<<endl;
}
/*
* Display elements of Doubly Link List
*/
void double_llist::display_dlist()
{
struct node *q;
if (start == NULL)
{
cout<<"List empty,nothing to display"<<endl;
return;
}
q = start;
cout<<"The Doubly Link List is :"<<endl;
while (q != NULL)
{
cout<<q->info<<" <-> " ;
q = q->next;
}
cout<<"NULL"<<endl;
}

/*
* Number of elements in Doubly Link List
*/
void double_llist::count()
{
struct node *q = start;
int cnt = 0;
while (q != NULL)
{
q = q->next;
cnt++;
}
cout<<"Number of elements are: "<<cnt<<endl;
}

/*
* Reverse Doubly Link List
*/
void double_llist::reverse()
{
struct node *p1, *p2;
p1 = start;
p2 = p1->next;
p1->next = NULL;
p1->prev = p2;
while (p2 != NULL )
{
p2->prev = p2->next;
p2->next = p1;
p1 = p2;
p2 = p2->prev;
}
start = p1;
cout<<"List Reversed"<<endl;
}
OUTPUT:
Experiment 5: How to implement Binary Search
Tree through C++
AIM: To write a C++ program to implement Binary Search Tree.
Description:
Binary Search Tree is a node-based binary tree data structure which has the
following features.
● The left subtree of a node contains only nodes with keys less than the
node’s key.
● The right subtree of a node contains only nodes with keys greater than
the node’s key.
● The left and right subtrees each must also be a binary search tree.
● Search: Searches an element in a tree.
● Insert: Inserts an element in a tree.
● Pre-Order Traversal: Traverses a tree in a pre-ordered manner.
● Post-Order Traversal: Traverses a tree in post-order manner.
● Deletion: Deletes an element from the tree. In deletion, we
have three cases, i.e., deletion of a leaf node, deletion of a node which has
only one child, and deletion of a parent node having both children, i.e.,
left and right child.
PROGRAM:
#include<iostream.h>
#include<conio.h>
#include<stdlib.h>
class BST;//forward declaration.
class node{ // BST node declaration. Here each node has data and links to
left and right child.
friend class BST;
int info;
node *left;
node *right;
};
/* BST class declaration which contains the root node as data member and
different operations on BST will be defined in this class*/
class BST
{
public:
node *root;
node* insert(node *,int);
void preorder(node *);
void postorder(node *);
void inorder(node *);
node* search(node *,int);
node* del(node *, int);
node* findmin(node*);
BST() // Default constructor which creates an empty tree
{
root=NULL;
}
};
node* BST::insert(node *ptr, int val)
/* insertion is a recursive function which accepts a ptr and element that
need to be inserted into the tree.*/
{
if(ptr==NULL)/* If ptr is NULL, then we can create a newnode and make
info as val and left and right child to NULL and return back*/
{
node *newnode= new node;
newnode->info=val;
newnode->left=newnode->right=NULL;
ptr=newnode;
}
else
{
if(ptr->info==val) // check whether the value already exists. No duplicates
are allowed.
cout<<"Elements already in the tree\n";
else if(ptr->info < val) // if new value is greater than the node then move to
right of the tree
ptr->right=insert(ptr->right,val);
else //otherwise new value is less than the node then move to left of the tree
ptr->left=insert(ptr->left, val);
}
return ptr;
}
void BST::inorder(node *ptr) // func: Inorder traversing of tree i.e., left-
>root->right
{
if(ptr!=NULL)
{
inorder(ptr->left);
cout<<ptr->info<<"\t";
inorder(ptr->right);
}
}
void BST::postorder(node *ptr) // func: Post-order traversing of tree i.e.,
left-> right-> root
{
if(ptr!=NULL)
{
postorder(ptr->left);
postorder(ptr->right);
cout<<ptr->info<<"\t";
}
}
void BST::preorder(node *ptr) // func: Pre-order traversing of tree i.e., root-
>left ->right
{
if(ptr!=NULL)
{
cout<<ptr->info<<"\t";
preorder(ptr->left);
preorder(ptr->right);
}
}
node * BST::search(node *ptr,int ele) //func: Searching the BST to find
whether the element exists
{
if(ptr==NULL) // empty tree condition or element not found case
return NULL;
else if(ptr->info ==ele) // if element matches with the node data then return
the node .
return ptr;
else if(ptr->info<ele) // if node value is less than element then search in
right sub-tree.
return search(ptr->right,ele);
else // if node value is greater than element then search in left sub-tree.
return search(ptr->left,ele);
}
node * BST::del(node *ptr,int ele)
{
if(ptr==NULL)
return NULL; //empty tree condition
else if(ptr->info <ele)
ptr->right=del(ptr->right,ele);
else if(ptr->info >ele)
ptr->left=del(ptr->left,ele);
else if(ptr->left !=NULL && ptr->right!=NULL) //two children
{
node *tempmin=findmin(ptr->right);
ptr->info=tempmin->info;
ptr->right=del(ptr->right,tempmin->info);
}
else
{ // one or no child case
node *temp=ptr;
if(ptr->left==NULL)
ptr=ptr->right;
else
ptr=ptr->left;
delete temp; //free the removed node
}
return ptr;
}
node* BST::findmin(node *ptr)
/* To find the minimum node in the right subtree i.e., leftmost node in the
right subtree*/
{
if(ptr==NULL)
return NULL;
while(ptr->left!=NULL)
ptr=ptr->left;
return ptr;
}
int main()
{
int choice,val;
BST bst;
node *temp;
clrscr();
while(1)
{
cout<<"MAIN MENU\n 1: Insert an element\n 2: Inorder Traversal \n
3:Postorder traversal\n 4:Preorder Traversal \n 5:Deletion of element\n
6:Search an element\n 7:exit\n";
cout<<"Enter your choice:";
cin>>choice;
switch(choice)
{
case 1:cout<<"Enter the value to insert :";
cin>>val;
bst.root=bst.insert(bst.root,val);
break;
case 2:cout<<"Inorder Traversal of BST:\n";
bst.inorder(bst.root);
break;
case 3:cout<<"Postorder Traversal of BST:\n";
bst.postorder(bst.root);
break;
case 4:cout<<"Preorder Traversal of BST:\n";
bst.preorder(bst.root);
break;
case 5:cout<<"Enter an element to delete:";
cin>>val;
temp=bst.del(bst.root,val);
break;
case 6:cout<<"Enter an element to search:";
cin>>val;
temp=bst.search(bst.root,val);
if(temp==NULL)
cout<<"element not found";
else
cout<<"element found";
break;
case 7:exit(0);
default:cout<<" Wrong Choice.Try Again!!!!";
}
}
return 1;
}
OUTPUT:
Experiment 6: How to implement Heaps through
C++
AIM: To write a C++ program to implement Heaps
Description:
A Binary Heap is a Binary Tree with the following properties.
1. It’s a complete tree (all levels are completely filled except
possibly the last level, and the last level has all keys as far left as
possible). This property of a Binary Heap makes it suitable to be
stored in an array.
2. A Binary Heap is either a Min Heap or a Max Heap. In a Min Binary
Heap, the key at the root must be the minimum among all keys present in
the heap. The same property must be recursively true for all nodes in
the tree. A Max Binary Heap is similar, with the maximum key at the root.

PROGRAM:
#include<iostream.h>
#include<conio.h>
#include<stdlib.h>
class MaxHeap // MaxHeap class declaration
{
private:
int *heap;
int heapSize;
int capacity;
public:
void Push(int);
void Pop();
void display();
MaxHeap(int n=20) // Constructor to initialize the capacity of the heap
{
if(n<1)
cout<<"capacity must be >=1";
else
{
capacity=n;
heapSize=0;
heap= new int[capacity+1];
}
}
};
void MaxHeap::Push(int e)//func: To insert the element into the heap and
perform heapification
{
if(heapSize==capacity){
cout<<"Heap is full";
}
else
{
int currentNode=++heapSize;
while(currentNode!=1 && heap[currentNode/2]<e) // bubble up while the
element is greater than its parent's value
{
heap[currentNode]=heap[currentNode/2];
currentNode/=2;
}
heap[currentNode]=e;
}
}
void MaxHeap::Pop() // Delete the root from the max heap and percolate
down
{
if(heapSize<1)
{
cout<<"Heap is empty!!! Cannot Delete\n";
}
else
{
int last=heap[heapSize--]; // delete the last node and save it in last and place
it in the right position
int currentNode=1;
int child=2;
while(child<=heapSize)
{
if(child<heapSize && heap[child]<heap[child+1])
child++;
if(last>=heap[child])
break;
heap[currentNode]=heap[child];
currentNode=child;
child*=2;
}
heap[currentNode]=last;
}
}
void MaxHeap::display() // To display elements in the heap
{
if(heapSize<1)
cout<<"No elements in Heap";
else
{
cout<<"Element in max heap are:";
for(int i=1;i<=heapSize;i++)
cout<<"\t"<<heap[i];
}
}
int main()
{
MaxHeap mh(30);
int ch,ele;
clrscr();
while(1)
{
cout<<":::::MainMenu::::";
cout<<"\n 1:Insert \n 2:Delete \n 3:Display \n 4:exit\n Enter your choice";
cin>>ch;
switch(ch)
{
case 1:cout<<"Enter an element to insert into max heap:";
cin>>ele;
mh.Push(ele);
break;
case 2:mh.Pop();
cout<<"element deleted";
break;
case 3:mh.display();
break;
case 4:exit(0);
default:cout<<"Wrong choice.Try Again!!!!!!";
}
}
}
OUTPUT:
Experiment 7: How to implement Breadth First
Search Techniques.
AIM: To write a C++ program to implement Breadth First Search.
Description:
Graph traversal means visiting every vertex and edge exactly once in a well-
defined order. Breadth-first search (BFS) is an algorithm for traversing or
searching tree or graph data structures. It starts at the tree root (or some
arbitrary node of a graph, sometimes referred to as a 'search key') and
explores the neighbour nodes first, before moving to the next-level
neighbours. BFS can also be applied to find the connected components of a
graph.
PROGRAM:
#include<iostream.h>
#include<conio.h>
#include<stdlib.h>
int cost[10][10],i,j,k,n,qu[10],front,rare,v,visit[10],visited[10];
main()
{
int m;
clrscr();
cout<<"enter no of vertices";
cin>> n;
cout<<"enter no of edges";
cin>> m;
cout<<"\nEDGES \n";
for(k=1;k<=m;k++)
{
cin>>i>>j;
cost[i][j]=1;
}
cout<<"enter initial vertex";
cin>>v;
cout<<"Visited vertices\n";
cout<< v;
visited[v]=1;
k=1;
while(k<n)
{
for(j=1;j<=n;j++)
if(cost[v][j]!=0 && visited[j]!=1 && visit[j]!=1)
{
visit[j]=1;
qu[rare++]=j;
}
v=qu[front++];
cout<<v << " ";
k++;
visit[v]=0; visited[v]=1;
}
}
OUTPUT:
Experiment 8: How to implement Depth First
Search Technique through C++
AIM: To write a C++ program to implement Depth First Search
Technique.
Description:
Graph traversal means visiting every vertex and edge exactly once in a well-
defined order. Depth-first search (DFS) is an algorithm for traversing or
searching tree or graph data structures. One starts at the root (selecting
some arbitrary node as the root in the case of a graph) and explores as far as
possible along each branch before backtracking.
PROGRAM:
#include<iostream.h>
#include<conio.h>
#include<stdlib.h>
int cost[10][10],i,j,k,n,stk[10],top,v,visit[10],visited[10];
main()
{
int m;
clrscr();
cout<<"enter no of vertices";
cin>> n;
cout<<"enter no of edges";
cin>> m;
cout<<"\nEDGES \n";
for(k=1;k<=m;k++)
{
cin>>i>>j;
cost[i][j]=1;
}
cout<<"enter initial vertex";
cin>>v;
cout<<"ORDER OF VISITED VERTICES";
cout<< v <<" ";
visited[v]=1;
k=1;
while(k<n)
{
for(j=n;j>=1;j--)
if(cost[v][j]!=0 && visited[j]!=1 && visit[j]!=1)
{
visit[j]=1;
stk[top]=j;
top++;
}
v=stk[--top];
cout<<v << " ";
k++;
visit[v]=0; visited[v]=1;
}
}
OUTPUT:
Experiment 9: How to implement Prim’s
Algorithm through C++
AIM: To write a C++ program to implement Prim’s Algorithm.
Description:
Prim's algorithm is a greedy algorithm that finds a minimum spanning tree
for a weighted undirected graph. This means it finds a subset of the edges
that forms a tree that includes every vertex, where the total weight of all the
edges in the tree is minimized. The algorithm operates by building this tree
one vertex at a time, from an arbitrary starting vertex, at each step adding
the cheapest possible connection from the tree to another vertex.
PROGRAM:
#include <iostream.h>
#include <conio.h>
#define ROW 7
#define COL 7
#define infi 5000 //infi for infinity
typedef int bool;
const bool true=1, false=0;
struct prims
{
int graph[ROW][COL],nodes;
public:
prims();
void createGraph();
void primsAlgo();
};

prims :: prims(){
for(int i=0;i<ROW;i++ )
for(int j=0;j<COL;j++)
graph[i][j]=0;
}

void prims :: createGraph(){


int i,j;
cout<<"Enter Total Nodes : ";
cin>>nodes;
cout<<"\n\nEnter Adjacency Matrix : \n";
for(i=0;i<nodes;i++)
for(j=0;j<nodes;j++)
cin>>graph[i][j];

//Assign infinity to all graph[i][j] where weight is 0.
for(i=0;i<nodes;i++)
{
for(j=0;j<nodes;j++){
if(graph[i][j]==0)
graph[i][j]=infi;
}
}
}

void prims :: primsAlgo(){


int selected[ROW],i,j,ne,x,y,min; //ne for no. of edges

for(i=0;i<nodes;i++)
selected[i]=false;

selected[0]=true;
ne=0 ;

while(ne < nodes-1){


min=infi;

for(i=0;i<nodes;i++)
{
if(selected[i]==true){
for(j=0;j<nodes;j++){
if(selected[j]==false){
if(min > graph[i][j])
{
min=graph[i][j];
x=i;
y=j;
}
}
}
}
}
selected[y]=true;
cout<<"\n"<<x+1<<" --> "<<y+1;
ne=ne+1;
}
}

void main()
{
prims MST;
clrscr();
cout<<"\nPrims Algorithm to find Minimum Spanning Tree\n";
MST.createGraph();
MST.primsAlgo() ;
getch();
}
OUTPUT:
Experiment 10: How to implement Dijkstra’s
Algorithm through C++
AIM: To write a C++ program to implement Dijkstra’s Algorithm.
Description:
Dijkstra's algorithm is an algorithm for finding the shortest paths between
nodes in a graph. For a given source node in the graph, the algorithm finds
the shortest path between that node and every other. It can also be used for
finding the shortest paths from a single node to a single destination node by
stopping the algorithm once the shortest path to the destination node has
been determined.
PROGRAM:
#include<iostream.h>
#include<conio.h>
typedef int bool;
const bool true=1, false=0;
class Dijkstra{
int cost[20][20],v,n,dist[20];
public:
void dijkstra() //func: to find out the shortest path from the source to all
other vertices
{
bool s[20];
int i,u,w,j,min=9999;
for(i=1;i<=n;i++)//initial distance from source to other vertices
{
s[i]=false;
dist[i]=cost[v][i];
}
s[v]=true;
dist[v]=0;
for(i=2;i<=n;i++) // To find out the next non-visited minimum vertex
{
min=9999;
for(j=1;j<=n;j++)
{
if(s[j]==true) //skip already visited vertices
continue;
else
if(min>dist[j])
{
min=dist[j];
u=j;
}
}
s[u]=true;
for(j=1;j<=n;j++)
{
if(s[j]==false) //if the vertex is not visited then calculate and compare the
present and previous distances
{
if(dist[j]>dist[u]+cost[u][j])
dist[j]=dist[u]+cost[u][j];
}
}
}
}
void read() // func read(): To read the cost adjacency matrix data
{
cout<<"\n enter number of vertices:";
cin>>n;
cout<<"\n enter the source node:";
cin>>v;
cout<<":::enter 9999 for infinity:::\n";
for(int i=1;i<=n;i++)
{
for(int j=1;j<=n;j++)
{
cout<<"enter the length["<<i<<"]["<<j<<"]:";
cin>>cost[i][j];
}
}
}
void display() // Display all the shortest distances from the source to the
other vertices.
{
for(int i=1;i<=n;i++)
cout<<"\n Shortest path from source "<<v<<" to "<<i<<" node is: "
<<dist[i];
}
};
int main()
{
Dijkstra d;
clrscr();
d.read();
d.dijkstra();
d.display();
}
OUTPUT:
Experiment 11: How to implement Kruskal’s
Algorithm through C++
AIM: To write a C++ program to implement Kruskal’s Algorithm.
Description: Kruskal's algorithm is a minimum-spanning-tree algorithm
which finds an edge of the least possible weight that connects any two trees
in the forest. It is a greedy algorithm in graph theory as it finds a minimum
spanning tree for a connected weighted graph adding increasing cost arcs at
each step. This means it finds a subset of the edges that forms a tree that
includes every vertex, where the total weight of all the edges in the tree is
minimized. If the graph is not connected, then it finds a minimum spanning
forest (a minimum spanning tree for each connected component).
PROGRAM:
#include<iostream.h>
#include<conio.h>
class Kruskal
{
int parent[20],n,cost[20][20],t[20][3];
public:
void read() // to read the cost-adjacency matrix data
{
cout<<"\n enter number of vertices for given graph:";
cin>>n;
cout<<"enter 9999 for infinity\n";
for(int i=1;i<=n;i++)
{
for(int j=1;j<=n;j++)
{
cout<<"\nenter the cost["<<i<<"]["<<j<<"]:";
cin>>cost[i][j];
}
}
}
int kruskal()
{
int i,j,min,a,b,cnt,mincost=0,u,v;
clrscr();
for(i=1;i<=n;i++) //initially all vertex parent is -1
parent[i]=-1;
cnt=1;
cout<<"Resultant spanning tree:";
while(cnt<n)
{
for(i=1,min=9999;i<=n;i++) //finding out the minimum cost edge from the
graph
{
for(j=1;j<=n;j++)
{
if(cost[i][j]==0)
continue; //skip non-edges and the diagonal
else
if(min > cost[i][j])
{
min=cost[i][j];
u=i;
v=j;
}
}
}
a=find(u); //find will return the parent node of given vertex
b=find(v);
if(a!=b) //if parents are not equal then take that edge and union its
vertices
{
t[cnt][1]=u;
t[cnt][2]=v;
cnt++;
mincost=mincost+cost[u][v];
uni(a,b);
cout<<u<<" ,"<<v<<"\n";
}
cost[u][v]=cost[v][u]=9999; //mark the edge as used
}
return mincost;
}
int find(int i) //return the parent node of i
{
while(parent[i]>=0)
i=parent[i];
return i;
}
void uni(int i,int j) // joins two vertices with a parent-child relationship –
helps to avoid forming cycles.
{
parent[i]=j;
}
};
int main()
{
Kruskal k;
clrscr();
k.read();
int e=k.kruskal();
cout<<"\n The minimum cost of the spanning tree is:"<<e<<"\n";
}
OUTPUT:
Experiment 12: How to implement Merge Sort
through C++
AIM: To write a C++ program to implement Merge Sort.
Description:
Merge Sort is a Divide and Conquer algorithm. It divides the input array into
two halves, calls itself for each half, and then merges the two sorted halves.
The merge() function is used for merging the two halves. The call merge(arr,
l, m, r) is the key step: it assumes that arr[l..m] and arr[m+1..r] are sorted
and merges the two sorted sub-arrays into one.
PROGRAM:
#include<iostream.h>
#include<conio.h>
class Merge_Sort
{
private:
int *a,*b;
public:
int len;
void read() // reads the array data
{
int i;
cout<<"Enter the length of Array:";
cin>>len;
a=new int[len+1]; //elements are stored at indexes 1..len
b=new int[len+1];
for(i=1;i<=len;i++)
{
cout<<"Enter the Element a["<<i<<"]:";
cin>>a[i];
}
}
void display() // to display array elements
{
int i;
if(len==0)
cout<<"Array is Empty";
else
{
cout<<"Array Elements are:";
for(i=1;i<=len;i++)
cout<<a[i]<<"\t";
cout<<"\n";
}
}
void MergeSort(int left,int right) //divide the array into two halves, sort
each half, and merge them back
{
if(left<right)
{
int mid=(left+right)/2;
MergeSort(left,mid);
MergeSort(mid+1,right);
Merge(left,mid,right);
}
}
void Merge(int low,int mid,int high)//merges two sorted halves into one
sorted run
{
int i,j,k,h;
h=low;
i=low;
j=mid+1;
while(i<=mid && j<=high)
{
if(a[i]>a[j])
{
b[h]=a[j];
h++;
j++;
}
else
{
b[h]=a[i];
h++;
i++;
}
}
while(i<=mid)//remaining element in left side
{
b[h]=a[i];
h++;
i++;
}
while(j<=high)//remaining elements in right half
{
b[h]=a[j];
h++;
j++;
}
for(k=low;k<=high;k++)
a[k]=b[k];
}
};
int main()
{
Merge_Sort ms;
clrscr();
ms.read();
cout<<"::::Array before Sorting::::\n";
ms.display();
ms.MergeSort(1,ms.len);
cout<<"\n::::Array After Sorting::::\n";
ms.display();
return(0);
}
OUTPUT:
Experiment 13: How to implement Quick Sort
through C++
AIM: To write a C++ program to implement Quick Sort.
Description:
QuickSort is a Divide and Conquer algorithm. It picks an element as the pivot
and partitions the given array around the picked pivot (here the first
element is always picked as the pivot). The key process in quickSort is
partition(). The target of partition() is: given an array and an element x of
the array as the pivot, put x at its correct position in the sorted array, put
all smaller elements (smaller than x) before x, and put all greater elements
after x. All this should be done in linear time.
PROGRAM:
#include<iostream.h>
#include<conio.h>
class Quick_Sort
{
private:
int *a;
public:
int len;
Quick_Sort()
{
len=0;
}
void read() //read the array data
{
int i;
cout<<"Enter length of the array:";
cin>>len;
a=new int[len+2]; //indexes 1..len used; a[len+1] holds a sentinel
a[len+1]=32767; //sentinel so the partition scan stops at the right end
for(i=1;i<=len;i++)
{
cout<<"Enter the value:a["<<i<<"]:";
cin>>a[i];
}
}
void display() // To display the array elements
{
int i;
if(len==0)
cout<<"Array is Empty";
else
{
cout<<"Array Elements are:";
for(i=1;i<=len;i++)
cout<<a[i]<<"\t";
cout<<"\n";
}
}
void quicksort(int left,int right) //dividing the elements into two halves at
the pivot position
{
if(left<right)
{
int j;
j=partition(left,right+1);
quicksort(left,j-1);
quicksort(j+1,right);
}
}
int partition(int l,int r)// placing the pivot at the right position
{
int pivot,i,j,temp;
pivot=a[l];
i=l;
j=r;
do
{
do
{
i++;
}while(a[i]<pivot);
do
{
j--;
}while(a[j]>pivot);
if(i<j)//Swap element at position i and j
{
temp=a[i];
a[i]=a[j];
a[j]=temp;
}
}while(i<=j);
a[l]=a[j];
a[j]=pivot;
return(j);
}
};
int main()
{
Quick_Sort qs;
clrscr();
qs.read();
cout<<"\n::::Array before sorting::::\n";
qs.display();
qs.quicksort(1,qs.len);
cout<<"\n::::Array After Sorting::::\n";
qs.display();
return(0);
}
OUTPUT:
Experiment 14: How to implement Data
Searching using Divide and Conquer Technique
through C++
AIM: To write a C++ program to implement Data Searching using Divide
and Conquer Technique.
Description:
Binary Search is applied on a sorted array or list. In binary search, we
first compare the key with the element in the middle position of the
array. If it matches, we return that position. If the key is less than the
middle element, it must lie in the lower half of the array, and if it is
greater, it must lie in the upper half. We repeat this procedure on the
lower (or upper) half of the array. Binary Search is useful when there is a
large number of elements in an array.
PROGRAM:
#include<iostream.h>
#include<conio.h>
class DataSearch
{
private:
int *a;
int n;
public:
DataSearch(int x)
{
n=x;
a= new int[n];
}
void read() //to read the array data
{
int i;
cout<<"Enter elements in ascending order:\n";
for(i=0;i<n;i++)
{
cout<<"Enter element a["<<i<<"]:";
cin>>a[i];
}
}
int RecSearch(int low,int high,int ele) // recursively searching the element
in the array.
{
if(low<=high)
{
int mid=(low+high)/2;
if(a[mid]==ele) //if element found, return the value
return mid;
else if(a[mid]>ele) //if element is less than mid then search in left side to
mid
return RecSearch(low,mid-1,ele);
else //otherwise search to the right of the mid
return RecSearch(mid+1,high,ele);
}
return -1; //if element not found
}
};
int main()
{
int n,ele,c;
cout<<"Enter the range of the array:";
cin>>n;
DataSearch ds(n);
ds.read();
cout<<"Enter the element to find:";
cin>>ele;
c=ds.RecSearch(0,n-1,ele);
if(c==-1)
cout<<"Element not found ...";
else
cout<<"Element "<<ele<<" is found at position "<<c<<"\n";
return 0;
}
OUTPUT: