ADVANCED DATA STRUCTURES AND ALGORITHMS (R18)
DIGITAL NOTES
M.Tech. I Year I Sem
Department of CSE
(Affiliated to JNTUH, Hyderabad, Approved by AICTE - Accredited by NBA & NAAC – ‘A’ Grade - ISO 9001:2015 Certified)
Maisammaguda, Dhulapally (Post Via. Hakimpet), Secunderabad – 500100, Telangana State, India
Objectives:
1. The fundamental design, analysis, and implementation of basic data structures.
2. Basic concepts in the specification and analysis of programs.
3. Principles of good program design, especially the use of data abstraction; the significance of algorithms in the computer field.
4. Various aspects of algorithm development and the qualities of a good solution.
UNIT I
Algorithms, Performance analysis- time complexity and space complexity, Asymptotic Notation-
Big Oh, Omega and Theta notations, Complexity Analysis Examples. Data structures-Linear and
non linear data structures, ADT concept, Linear List ADT, Array representation, Linked
representation, Vector representation, singly linked lists -insertion, deletion, search operations,
doubly linked lists-insertion, deletion operations, circular lists. Representation of single, two
dimensional arrays, Sparse matrices and their representation.
UNIT II
Stack and Queue ADTs, array and linked list representations, infix to postfix conversion using
stack, implementation of recursion, Circular queue-insertion and deletion, Dequeue ADT, array
and linked list representations, Priority queue ADT, implementation using Heaps, Insertion into a
Max Heap, Deletion from a Max Heap, java.util package-ArrayList, Linked List, Vector classes,
Stacks and Queues in java.util, Iterators in java.util.
UNIT III
Searching–Linear and binary search methods, Hashing-Hash functions, Collision Resolution
methods-Open Addressing, Chaining, Hashing in java.util-HashMap, HashSet, Hashtable.
Sorting –Bubble sort, Insertion sort, Quick sort, Merge sort, Heap sort, Radix sort, comparison of
sorting methods.
UNIT IV
Trees- Ordinary and Binary trees terminology, Properties of Binary trees, Binary tree ADT,
representations, recursive and non recursive traversals, Java code for traversals, Threaded binary
trees. Graphs- Graphs terminology, Graph ADT, representations, graph traversals/search
methods-dfs and bfs, Java code for graph traversals, Applications of Graphs-Minimum cost
spanning tree using Kruskal’s algorithm, Dijkstra’s algorithm for Single Source Shortest Path
Problem.
UNIT V
Search trees- Binary search tree-Binary search tree ADT, insertion, deletion and searching
operations, Balanced search trees, AVL trees-Definition and examples only, Red Black trees –
Definition and examples only, B-Trees-definition, insertion and searching operations, Trees in
java.util- TreeSet, Tree Map Classes, Tries(examples only),Comparison of Search trees. Text
compression-Huffman coding and decoding, Pattern matching-KMP algorithm.
TEXT BOOKS:
1. Data structures, Algorithms and Applications in Java, S.Sahni, Universities Press.
2. Data structures and Algorithms in Java, Adam Drozdek, 3rd edition, Cengage Learning.
3. Data structures and Algorithm Analysis in Java, M.A.Weiss, 2nd edition, Addison-Wesley (Pearson Education).
REFERENCE BOOKS:
1. Java for Programmers, Deitel and Deitel, Pearson Education.
2. Data structures and Algorithms in Java, R.Lafore, Pearson Education.
3. Java: The Complete Reference, 8th edition, Herbert Schildt, TMH.
4. Data structures and Algorithms in Java, M.T.Goodrich, R.Tamassia, 3rd edition, Wiley India Edition.
5. Data structures and the Java Collections Framework, W.J.Collins, McGraw Hill.
6. Classic Data structures in Java, T.Budd, Addison-Wesley (Pearson Education).
7. Data structures with Java, Ford and Topp, Pearson Education.
8. Data structures using Java, D.S.Malik and P.S.Nair, Cengage Learning.
9. Data structures with Java, J.R.Hubbard and A.Huray, PHI Pvt. Ltd.
10. Data structures and Software Development in an Object-Oriented Domain, J.P.Tremblay and G.A.Cheston, Java edition, Pearson Education.
INDEX

UNIT I
- Algorithms; performance analysis – time complexity and space complexity
- Asymptotic Notation – Big Oh, Omega and Theta notations; complexity analysis examples
- Data structures – linear and non-linear data structures; ADT concept
- Linear List ADT – array representation, linked representation, vector representation
- Singly linked lists – insertion, deletion, search operations
- Doubly linked lists – insertion, deletion operations
- Circular lists – insertion, deletion, search operations
- Representation of single and two dimensional arrays; sparse matrices and their representation

UNIT II
- Stack ADT and Queue ADT – array and linked list representations
- Infix to postfix conversion using stack; implementation of recursion
- Circular queue – insertion and deletion; Dequeue ADT – array and linked list representations
- Priority Queue ADT – implementation using heaps; insertion into and deletion from a max heap
- java.util package – ArrayList, LinkedList, Vector classes; stacks, queues and iterators in java.util

UNIT III
- Searching – linear and binary search methods
- Hashing – hash functions; collision resolution methods – open addressing, chaining; hashing in java.util – HashMap, HashSet, Hashtable
- Sorting – bubble sort, insertion sort, quick sort, merge sort, heap sort, radix sort; comparison of sorting methods
UNIT -1
Basic concepts of Algorithm
Preliminaries of Algorithm:
An algorithm may be defined as a finite sequence of instructions each of which has a clear
meaning and can be performed with a finite amount of effort in a finite length of time.
The word "algorithm" originated from the Arabic word "algorism", which is linked to the name of the Arabic mathematician al-Khwarizmi. He is considered to be the first algorithm designer for adding numbers.
Structure and Properties of Algorithm:
An algorithm has the following structure
1. Input Step
2. Assignment Step
3. Decision Step
4. Repetitive Step
5. Output Step

An algorithm has the following properties:
1. Finiteness: An algorithm must terminate after a finite number of steps.
2. Definiteness: The steps of the algorithm must be precisely defined or
unambiguously specified.
3. Generality: An algorithm must be generic enough to solve all problems of a particular class.
4. Effectiveness: The operations of the algorithm must be basic enough to be carried out with pencil and paper. They should not be so complex as to warrant writing another algorithm for the operation.
5. Input-Output: The algorithm must have certain initial and precise inputs, and
outputs that may be generated both at its intermediate and final steps.
An algorithm does not enforce a language or mode for its expression but only demands adherence to its properties.

Algorithms are analyzed with two goals in mind:
1. To save time (Time Complexity): A program that runs faster is a better program.
2. To save space (Space Complexity): A program that saves space over a competing program is considered desirable.
Efficiency of Algorithms
The performances of algorithms can be measured on the scales of time and space. The
performance of a program is the amount of computer memory and time needed to run a
program. We use two approaches to determine the performance of a program. One is
analytical and the other is experimental. In performance analysis we use analytical
methods, while in performance measurement we conduct experiments.
Time Complexity: The time complexity of an algorithm or a program is a function of the
running time of the algorithm or a program. In other words, it is the amount of computer
time it needs to run to completion.
Suppose M is an algorithm, and suppose n is the size of the input data. Clearly the complexity f(n) of M increases as n increases. It is usually expressed by comparing the rate of growth of f(n) with some standard functions. The most common computing times are:

O(1), O(log₂ n), O(n), O(n log₂ n), O(n²), O(n³), O(2ⁿ)
Example:

Program Segment A:
    x = x + 2;

Program Segment B:
    for k = 1 to n do
        x = x + 2;
    end;

Program Segment C:
    for j = 1 to n do
        for x = 1 to n do
            x = x + 2;
        end;
    end;
Total Frequency Count of Program Segment A

Program Statements        Frequency Count
x = x + 2;                1
Total Frequency Count     1

Total Frequency Count of Program Segment B

Program Statements        Frequency Count
for k = 1 to n do         n+1
x = x + 2;                n
end;                      n
Total Frequency Count     3n+1

Total Frequency Count of Program Segment C

Program Statements        Frequency Count
for j = 1 to n do         n+1
for x = 1 to n do         n(n+1)
x = x + 2;                n²
end;                      n²
end;                      n
Total Frequency Count     3n²+3n+1
The total frequency counts of the program segments A, B and C, given by 1, (3n+1) and (3n²+3n+1) respectively, are expressed as O(1), O(n) and O(n²). These are referred to as the time complexities of the program segments, since they are indicative of their running times. In a similar manner the space complexity of a program, which is the amount of memory it requires for its execution, can also be expressed in mathematical notation.
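The counting convention above (loop test executes n+1 times, body n times, end n times) can be checked mechanically. The following is a minimal Java sketch, with illustrative class and method names not taken from the notes, that computes the frequency counts of segments B and C for a given n:

```java
public class FrequencyCount {
    // Segment B: for k = 1 to n do x = x + 2; end
    // loop test runs n+1 times, body n times, 'end' n times
    static long countB(long n) {
        return (n + 1) + n + n;                       // = 3n + 1
    }

    // Segment C: nested loops
    // outer test (n+1), inner tests n(n+1), body n^2, inner end n^2, outer end n
    static long countC(long n) {
        return (n + 1) + n * (n + 1) + n * n + n * n + n;  // = 3n^2 + 3n + 1
    }

    public static void main(String[] args) {
        System.out.println(countB(10)); // 31
        System.out.println(countC(10)); // 331
    }
}
```

For n = 10 this prints 31 and 331, matching 3n+1 and 3n²+3n+1 from the tables above.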
Asymptotic Notations:

Asymptotic notation is often used to describe how the size of the input data affects an algorithm's usage of computational resources. The running time of an algorithm is described as a function of the input size n, for large n.
Big oh (O): Definition: f(n) = O(g(n)) (read as "f of n is big oh of g of n") if there exist a positive integer n0 and a positive number c such that |f(n)| ≤ c|g(n)| for all n ≥ n0. Here g(n) is an upper bound of the function f(n).

f(n)                  g(n)
16n³ + 45n² + 12n     n³       f(n) = O(n³)
34n – 40              n        f(n) = O(n)
50                    1        f(n) = O(1)
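The role of the constants c and n0 in the definition can be illustrated with a small program. The sketch below (illustrative names, not from the notes) checks the first table row, f(n) = 16n³ + 45n² + 12n with g(n) = n³, over a finite range of n; taking c = 16 + 45 + 12 = 73 and n0 = 1 works, since 45n² ≤ 45n³ and 12n ≤ 12n³ for n ≥ 1. (A loop over a finite range illustrates the bound; it is not a proof for all n.)

```java
public class BigOhCheck {
    static long f(long n) { return 16*n*n*n + 45*n*n + 12*n; }

    // Check |f(n)| <= c*|g(n)| with g(n) = n^3 for all n0 <= n <= nMax
    static boolean boundHolds(long c, long n0, long nMax) {
        for (long n = n0; n <= nMax; n++)
            if (f(n) > c * n * n * n) return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(boundHolds(73, 1, 1000)); // true: c = 73, n0 = 1 work
        System.out.println(boundHolds(16, 1, 1000)); // false: c = 16 is too small at n = 1
    }
}
```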
Omega (Ω): Definition: f(n) = Ω(g(n)) (read as "f of n is omega of g of n") if there exist a positive integer n0 and a positive number c such that |f(n)| ≥ c|g(n)| for all n ≥ n0. Here g(n) is a lower bound of the function f(n).

f(n)                g(n)
16n³ + 8n² + 2      n³       f(n) = Ω(n³)
24n + 9             n        f(n) = Ω(n)
Theta (Θ): Definition: f(n) = Θ(g(n)) (read as "f of n is theta of g of n") if there exist a positive integer n0 and two positive constants c1 and c2 such that c1|g(n)| ≤ |f(n)| ≤ c2|g(n)| for all n ≥ n0. The function g(n) is both an upper bound and a lower bound for the function f(n) for all values of n, n ≥ n0.

f(n)                g(n)
16n² + 30n – 90     n²       f(n) = Θ(n²)
7·2ⁿ + 30n          2ⁿ       f(n) = Θ(2ⁿ)
Little oh (o): Definition: f(n) = o(g(n)) (read as "f of n is little oh of g of n") if f(n) = O(g(n)) and f(n) ≠ Ω(g(n)).

f(n)        g(n)
18n + 9     n²       f(n) = o(n²), since f(n) = O(n²) and f(n) ≠ Ω(n²); however f(n) ≠ o(n).
Relations Between O, Ω, Θ:

Theorem: For any two functions g(n) and f(n), f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).
Time Complexity:

Quadratic      O(n²)                 Number of operations proportional to the square of the size of the input data.
Cubic          O(n³)                 Number of operations proportional to the cube of the size of the input data.
Exponential    O(2ⁿ), O(kⁿ), O(n!)   Exponential number of operations, fast growing.
• There must be a case of the problem (known as the base case or stopping case) that is handled differently from the other cases.
• In the base case, the recursive calls stop and the problem is solved directly.
Linear Search – Source Code (Recursive):

#include<stdio.h>
#include<conio.h>

void linear_search(int n, int a[20], int i, int k)
{
    if (i >= n)
    {
        printf("%d is not found", k);
        return;
    }
    if (a[i] == k)
    {
        printf("%d is found at position %d", k, i + 1);
        return;
    }
    else
        linear_search(n, a, i + 1, k);
}

void main()
{
    int i, a[20], n, k;
    clrscr();
    printf("Enter no of elements:");
    scanf("%d", &n);
    printf("Enter elements:");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("Enter search element:");
    scanf("%d", &k);
    linear_search(n, a, 0, k);
    getch();
}
Input & Output:
Enter no of elements:5
Enter elements:1 2 3 4 5
Enter search element:3
3 is found at position 3
Binary Search – Source Code (Recursive):

#include<stdio.h>
#include<conio.h>

void binary_search(int a[20], int data, int low, int high)
{
    int mid;
    if (low <= high)
    {
        mid = (low + high) / 2;
        if (a[mid] == data)
            printf("Data found at %d", mid + 1);
        else if (a[mid] > data)
            binary_search(a, data, low, mid - 1);
        else
            binary_search(a, data, mid + 1, high);
    }
    else
        printf("Not found");
}

void main()
{
    int i, a[20], n, data;
    clrscr();
    printf("Enter no of elements:");
    scanf("%d", &n);
    printf("Enter elements:");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("Enter search element:");
    scanf("%d", &data);
    binary_search(a, data, 0, n - 1);
    getch();
}
Input & Output:
Enter no of elements:3
Enter elements:1 2 3
Enter search element:25
Not found

Enter no of elements:3
Enter elements:1 2 3
Enter search element:3
Data found at 3
Time Complexity of Binary Search:
The time complexity of binary search is O(log₂ n), since the search range is halved at every step.
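Since the programs elsewhere in this course are written in Java, an equivalent iterative version can also be sketched (illustrative names, not from the notes). Each pass halves the range [low, high], which is where the O(log₂ n) bound comes from:

```java
public class BinarySearchDemo {
    // Iterative binary search on a sorted array; returns the index of key, or -1
    static int binarySearch(int[] a, int key) {
        int low = 0, high = a.length - 1;
        while (low <= high) {
            int mid = (low + high) / 2;   // middle of the current range
            if (a[mid] == key)
                return mid;               // found
            else if (a[mid] < key)
                low = mid + 1;            // discard the left half
            else
                high = mid - 1;           // discard the right half
        }
        return -1;                        // not found
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};
        System.out.println(binarySearch(a, 3));  // 2 (0-based index)
        System.out.println(binarySearch(a, 25)); // -1
    }
}
```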
Fibonacci Search:

Source Code (Recursive):

#include<stdio.h>
#include<conio.h>

void fib_search(int a[], int n, int search, int pos, int begin, int end)
{
    int fib[20] = {0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144};
    if (end <= 0)
    {
        printf("\nNot found");
        return; /* data not found */
    }
    else
    {
        pos = begin + fib[--end];
        if (a[pos] == search && pos < n)
        {
            printf("\n Found at %d", pos);
            return; /* data found */
        }
        if ((pos >= n) || (search < a[pos]))
            fib_search(a, n, search, pos, begin, end);
        else
        {
            begin = pos + 1;
            end--;
            fib_search(a, n, search, pos, begin, end);
        }
    }
}

void main()
{
    int n, i, a[20], search, pos = 0, begin = 0, k = 0, end;
    int fib[20] = {0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144};
    clrscr();
    printf("Enter the n:");
    scanf("%d", &n);
    printf("Enter elements to array:");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("Enter the search element:");
    scanf("%d", &search);
    while (fib[k] < n)
        k++;
    end = k;
    printf("Max.no of passes : %d", end);
    fib_search(a, n, search, pos, begin, end);
    getch();
}
Data structure

A data structure is a specialized format for organizing and storing data. General data structure types include the array, the file, the record, the table, the tree, and so on. Any data structure is designed to organize data to suit a specific purpose, so that it can be accessed and worked with in appropriate ways.

Abstract Data Type

In computer science, an abstract data type (ADT) is a mathematical model for data types, where a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations. When a class is used as a type, it is an abstract type that refers to a hidden representation. In this model an ADT is typically implemented as a class, and each instance of the ADT is usually an object of that class. In an ADT all the implementation details are hidden.
1. Linear data structures are the data structures in which data is arranged in a list or in a sequence.
2. Non-linear data structures are the data structures in which data may be arranged in a hierarchical manner.

STACK

A Stack is a data structure where you can only access the most recently added item. A typical stack implementation has 3 operations: Push(), Pop(), and Top().

1. Push() will add an item to the end of the list. This takes constant time.
2. Pop() will remove the item at the end of the list. This takes constant time.
3. Top() will return the value of the item at the top.

All operations on a stack happen in constant time because, no matter what, the stack is always working with the top-most value, and the stack always knows exactly where that is. This is the main reason why stacks are so fast.
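As an illustration of these constant-time operations, the following is a minimal Java sketch (illustrative names, not from the notes) of an array-based stack that always works at the top-most value:

```java
import java.util.Arrays;

public class ArrayStack {
    private int[] data = new int[4];
    private int top = -1;               // index of the top-most value

    public void push(int v) {           // O(1) amortized
        if (top + 1 == data.length)
            data = Arrays.copyOf(data, data.length * 2); // grow when full
        data[++top] = v;
    }
    public int pop() { return data[top--]; }  // O(1): remove the top item
    public int top() { return data[top]; }    // O(1): read the top item
    public boolean isEmpty() { return top == -1; }

    public static void main(String[] args) {
        ArrayStack s = new ArrayStack();
        s.push(1); s.push(2); s.push(3);
        System.out.println(s.top()); // 3
        System.out.println(s.pop()); // 3
        System.out.println(s.pop()); // 2
    }
}
```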
QUEUE
A Queue is a data structure where you can only access the oldest item in the list. It is analogous to a line in the
grocery store, where many people may be in the line, but the person in the front gets serviced first.
A typical Queue implementation has 3 operations, which are similar to the functions in Stacks. They are:
enqueue(), dequeue(), and Front().
1. Enqueue() will add an item to the end of the list. This takes constant time.
2. Dequeue() will remove an item from the beginning of the list. This takes constant time.
3. Front() will return the value of front-most item.
Queues, like Stacks, are very fast because all of the operations are simple, and constant-time.
A simple array-based implementation uses a fixed-size buffer, so the queue cannot resize when it runs out of room.
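A minimal fixed-capacity sketch in Java (illustrative names; the notes describe this kind of non-resizing queue) could look like this. enqueue() fails once the buffer is full, and both operations stay O(1) by treating the array as a ring:

```java
public class FixedQueue {
    private final int[] data;           // fixed-size buffer: cannot resize
    private int front = 0, count = 0;

    public FixedQueue(int capacity) { data = new int[capacity]; }

    public boolean enqueue(int v) {     // O(1): add at the end
        if (count == data.length) return false;  // full: out of room
        data[(front + count) % data.length] = v;
        count++;
        return true;
    }
    public int dequeue() {              // O(1): remove from the beginning
        int v = data[front];
        front = (front + 1) % data.length;
        count--;
        return v;
    }
    public int front() { return data[front]; }  // oldest item

    public static void main(String[] args) {
        FixedQueue q = new FixedQueue(2);
        q.enqueue(10); q.enqueue(20);
        System.out.println(q.enqueue(30)); // false: buffer is full
        System.out.println(q.dequeue());   // 10: oldest item comes out first
        System.out.println(q.front());     // 20
    }
}
```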
TREE:

A tree data structure can be defined recursively (locally) as a collection of nodes (starting at a root node), where each node is a data structure consisting of a value, together with a list of references to nodes (the "children"), with the constraints that no reference is duplicated, and none points to the root.

DEFINITION: A tree is a data structure made up of nodes or vertices and edges without having any cycle. The tree with no nodes is called the null or empty tree. A tree that is not empty consists of a root node and potentially many levels of additional nodes that form a hierarchy.
GRAPH:
In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from mathematics, specifically the field of graph theory.
A graph data structure consists of a finite (and possibly mutable) set of vertices or nodes or points, together
with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed
graph. These pairs are known as edges, arcs, or lines for an undirected graph and as arrows, directed
edges, directed arcs, or directed lines for a directed graph. The vertices may be part of the graph structure,
or may be external entities represented by integer indices or references.
A graph data structure may also associate to each edge some edge value, such as a symbolic label or a numeric
attribute (cost, capacity, length, etc.).
LIST ADT

A list is basically a collection of elements arranged in a sequential manner. In memory we can store a list in two ways: one way is to store the elements in sequential memory locations, that is, in an array. The other way is to use pointers or links to associate the elements sequentially; this is known as a linked list.
Array representation

An array is simply an area of memory allocated for a set number of elements of a known size. You can access these elements by their index (position in the array) and also set or retrieve the value stored. An array is always of a fixed size; it does not grow as more elements are required. The programmer must ensure that only valid positions in the array are accessed, and must remember the location in the array of each value. Arrays are basic types in most programming languages.
Linked representation

A linked list is made up of a linear series of nodes (for non-linear arrangements of nodes, see Trees and Graphs). These nodes, unlike the elements in an array, do not have to be located next to each other in memory in order to be accessed, because each node contains a link to another node. The most basic node has a data field and just one link field; such a node is part of what is known as a singly linked list, in which all nodes contain only a next link. This differs from a doubly linked list, in which all nodes have two links, a next and a previous.
The linked list requires linear, O(N), time to find or access a node, because there is no simple index formula, as there is for an array, to give the memory location of a node; one must traverse the links from the beginning until the requested node is reached. If nodes are to be inserted at the beginning or end of a linked list, the time is O(1), since references or pointers (depending on the language) can be maintained to the head and tail nodes. If a node is to be inserted in the middle or at some arbitrary position, the operation as a whole is not O(1), since getting to that position in the list is O(N).
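These costs can be seen in a minimal Java sketch (illustrative names, not from the notes): insertion at the head only relinks one reference, while access must follow next links from the beginning:

```java
public class NodeDemo {
    static class Node {
        int data;
        Node next;
        Node(int data, Node next) { this.data = data; this.next = next; }
    }

    // O(1) insertion at the head: just relink the head reference
    static Node push(Node head, int v) { return new Node(v, head); }

    // O(N) access: traverse links from the beginning until the index is reached
    static int get(Node head, int index) {
        Node p = head;
        for (int i = 0; i < index; i++) p = p.next;
        return p.data;
    }

    public static void main(String[] args) {
        Node head = null;
        for (int v = 3; v >= 1; v--) head = push(head, v); // list: 1 -> 2 -> 3
        System.out.println(get(head, 2)); // 3
    }
}
```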
Vector representation

Vectors are much like arrays. Operations on a vector offer the same big-O behavior as their counterparts on an array. Like arrays, vector data is allocated in contiguous memory. Unlike static arrays, which are always of a fixed size, vectors can be grown, either explicitly or by adding more data. In order to do this efficiently, the typical vector implementation grows by doubling its allocated space (rather than incrementing it) and often has more space allocated to it at any one time than it needs, because reallocating memory is usually an expensive operation. Vectors are simply arrays which have wrapped grow/shrink functions.
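The doubling strategy can be sketched in Java as follows (a simplified illustration, not the actual java.util.Vector source). Doubling makes n appends cost O(n) in total, i.e. amortized O(1) each, at the price of some unused capacity:

```java
import java.util.Arrays;

public class GrowableArray {
    private Object[] data = new Object[2];  // small initial allocation
    private int size = 0;

    public void add(Object item) {
        if (size == data.length)
            // double the allocated space rather than incrementing it,
            // so expensive reallocations happen only O(log n) times
            data = Arrays.copyOf(data, data.length * 2);
        data[size++] = item;
    }
    public Object get(int i) { return data[i]; }
    public int size() { return size; }
    public int capacity() { return data.length; }

    public static void main(String[] args) {
        GrowableArray a = new GrowableArray();
        for (int i = 0; i < 5; i++) a.add(i);
        System.out.println(a.size() + " " + a.capacity()); // 5 8
    }
}
```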
ArrayList<T> al = new ArrayList<T>();
Vector<T> v = new Vector<T>();

Major differences between ArrayList and Vector:
1. Synchronization: Vector is synchronized, which means only one thread at a time can access the code, while ArrayList is not synchronized, which means multiple threads can work on an ArrayList at the same time. For example, one thread can be performing an add operation while another thread performs a remove operation in a multithreading environment.

If multiple threads access an ArrayList concurrently, then we must synchronize the block of code which modifies the list structurally, or alternatively allow simple element modifications. Structural modification means addition or deletion of element(s) from the list; setting the value of an existing element is not a structural modification.
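One standard way to get such synchronization for an ArrayList is the Collections.synchronizedList wrapper. In the sketch below (illustrative names, not from the notes), two threads add elements concurrently; because every add goes through the synchronized wrapper, no updates are lost:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncListDemo {
    // Two threads each add 'perThread' elements to a synchronized wrapper;
    // returns the final size (perThread * 2 when no updates are lost).
    static int concurrentAddCount(int perThread) {
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());
        Runnable adder = () -> { for (int i = 0; i < perThread; i++) list.add(i); };
        Thread t1 = new Thread(adder), t2 = new Thread(adder);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            return -1;
        }
        return list.size();
    }

    public static void main(String[] args) {
        System.out.println(concurrentAddCount(1000)); // 2000
    }
}
```

With a plain (unsynchronized) ArrayList the same experiment can lose updates or throw, which is exactly the hazard described above.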
class GFG
{
    public static void main(String[] args)
    {
        // creating an ArrayList
        ArrayList<String> al = new ArrayList<String>();

        // adding objects to the ArrayList
        al.add("Practice.GeeksforGeeks.org");
        al.add("quiz.GeeksforGeeks.org");
        al.add("code.GeeksforGeeks.org");
        al.add("contribute.GeeksforGeeks.org");

        // traversing elements using an Iterator
        System.out.println("ArrayList elements are:");
        Iterator it = al.iterator();
        while (it.hasNext())
            System.out.println(it.next());

        // creating a Vector
        Vector<String> v = new Vector<String>();
        v.addElement("Practice");
        v.addElement("quiz");
        v.addElement("code");

        // traversing Vector elements using an Iterator
        it = v.iterator();
        while (it.hasNext())
            System.out.println(it.next());
    }
}

Output:
Practice.GeeksforGeeks.org
quiz.GeeksforGeeks.org
code.GeeksforGeeks.org
contribute.GeeksforGeeks.org
Practice
quiz
code
2. If we don’t know how much data we are going to have but know the rate at which it grows, Vector has an advantage, since we can set the increment value in a Vector.
3. ArrayList is newer and faster. If we don’t have any explicit requirement for using either of them, we use ArrayList over Vector.
LINKED LIST
Introduction to Linked List:
A linked list is a linear collection of data elements, called nodes, where the linear order is given by means of pointers. Each node is divided into two parts: the first part contains the information of the element, and the second part contains the address of the next node in the list.

The data items in a linked list are not in consecutive memory locations. They may be anywhere, but accessing these data items is easier, as each data item contains the address of the next data item.
1. In array implementation of linked lists, a fixed set of nodes represented by an array is established at the beginning of the execution.
2. A pointer to a node is represented by the relative position of the node within the array.
3. In array implementation, it is not possible to determine the actual number of nodes required for the linked list. Therefore:
   a. Fewer nodes may be allocated than needed, which means that the program will have an overflow problem.
   b. More nodes may be allocated than needed, which means that some amount of memory storage will be wasted.
4. The solution to this problem is to allow nodes that are dynamic, rather than static.
5. When a node is required, storage is reserved/allocated for it, and when a node is no longer needed, the memory storage is released/freed.
Advantages of linked lists

1. Linked lists are dynamic data structures; i.e., they can grow or shrink during the execution of a program.
2. Linked lists have efficient memory utilization. Here, memory is not pre-allocated; it is allocated whenever it is required, and de-allocated (released) when it is no longer needed.
3. Insertions and deletions are easier and more efficient. Linked lists provide flexibility in inserting a data item at a specified position and deleting a data item from a given position.
4. Many complex applications can be easily carried out with linked lists.
Types of Linked Lists:

1. Single Linked List.
2. Double Linked List.
3. Circular Linked List.
4. Circular Double Linked List.
A single linked list is one in which all nodes are linked together in some sequential manner. Hence, it is also called a linear linked list.
A double linked list is one in which all nodes are linked together by multiple links which helps in
accessing both the successor node (next node) and predecessor node (previous node) from any
arbitrary node within the list. Therefore each node in a double linked list has two link fields (pointers) to
point to the left node (previous) and the right node (next). This helps to traverse in forward direction and
backward direction.
A circular linked list is one, which has no beginning and no end. A single linked list can be made a
circular linked list by simply storing address of the very first node in the link field of the last node.
A circular double linked list is one, which has both the successor pointer and predecessor pointer in
the circular manner.
Applications of linked lists:

1. Linked lists are used to represent and manipulate polynomials. Polynomials are expressions containing terms with non-zero coefficients and exponents. For example:
   P(x) = a0 x^n + a1 x^(n-1) + ... + a(n-1) x + an
2. Represent very large numbers and operations on large numbers, such as addition, multiplication and division.
3. Linked lists are used to implement stacks, queues, trees and graphs.
4. Implement the symbol table in compiler construction.
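As an illustration of the polynomial application, each term (coefficient, exponent) can be kept in a node of a linked list. The Java sketch below (illustrative names, not from the notes) stores P(x) = 3x² + 4x + 5 as three linked nodes and evaluates it:

```java
public class PolyDemo {
    // Each term of the polynomial is a node: coefficient, exponent, next term
    static class Term {
        int coef, exp;
        Term next;
        Term(int coef, int exp, Term next) {
            this.coef = coef; this.exp = exp; this.next = next;
        }
    }

    // Evaluate the polynomial at x by walking the list of terms
    static long eval(Term p, int x) {
        long sum = 0;
        for (; p != null; p = p.next)
            sum += p.coef * (long) Math.pow(x, p.exp);
        return sum;
    }

    public static void main(String[] args) {
        // P(x) = 3x^2 + 4x + 5, stored as a linked list of terms
        Term p = new Term(3, 2, new Term(4, 1, new Term(5, 0, null)));
        System.out.println(eval(p, 2)); // 25
    }
}
```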
The beginning of the linked list is stored in a "start" pointer which points to the first node. The
first node contains a pointer to the second node. The second node contains a pointer to the third
node, ... and so on. The last node in the list has its next field set to NULL to mark the end of the
list. Code can access any node in the list by starting at the start and following the next pointers.
The start pointer is an ordinary local pointer variable, so it is drawn separately on the left top to
show that it is in the stack. The list nodes are drawn on the right to show that they are allocated
in the heap.
1. Creating a structure with one data item and a next pointer, which will point to the next node of the list. This is called a self-referential structure.
2. Initialize the start pointer to be NULL.
The basic operations in a single linked list are:
Creation.
Insertion.
Deletion.
Traversing.
Insertion of a Node:
The new node can then be inserted at three different places namely:
1. Inserting a node at the beginning.
2. Inserting a node at the end.
3. Inserting a node at intermediate position.
Inserting a node at the beginning:
1. The new node is linked to the current head by putting the address of the head in the next field of the new node.
2. The new node should then be considered as the head. This is achieved by making head equal to the new node.
The following steps are followed to insert a new node at the end of the list:
1. Get the new node using getnode():
   newnode = getnode();
2. If the list is empty, then start = newnode.
3. If the list is not empty, traverse to the last node and link the new node after it:
   temp = start;
   while(temp -> next != NULL)
       temp = temp -> next;
   temp -> next = newnode;
The following steps are followed to insert a new node at an intermediate position in the list:
1. Get the new node using getnode():
   newnode = getnode();
2. Ensure that the specified position is in between the first node and the last node. If not, the specified position is invalid. This is done by the countnode() function.
3. Store the starting address (which is in the start pointer) in the temp and prev pointers. Then traverse the temp pointer up to the specified position, followed by the prev pointer.
4. After reaching the specified position, link the new node between prev and temp:
   prev -> next = newnode;
   newnode -> next = temp;
Deletion of a node:

A node can be deleted from the list from three different places, namely:
1. Deleting a node at the beginning.
2. Deleting a node at the end.
3. Deleting a node at an intermediate position.
The following steps are followed, to delete a node at the beginning of the list:
i. temp = start;
ii. start = start -> next;
iii. free(temp);
The following steps are followed to delete a node at the end of the list:
   temp = start;
   prev = start;
   while(temp -> next != NULL)
   {
       prev = temp;
       temp = temp -> next;
   }
   prev -> next = NULL;
   free(temp);

The following steps are followed to delete a node from an intermediate position in the list (the list must contain more than two nodes).
Source Code (LINKED LIST USING JAVA PROGRAM)

Using LinkedList:

//LinkedListDemo.java
class LinkedList implements List
{
    class Node
    {
        Object data;    // data item
        Node next;      // refers to next node in the list
        Node(Object d)  // constructor
        {
            data = d;
        }
    }

    Node head;  // first node of the list
    Node p;     // work reference used while searching
    int count;  // number of nodes in the list

    public void createList(int n)  // create 'n' nodes: 11, 22, 33, ...
    {
        for (int i = n; i >= 1; i--)
            insertFirst(i * 11);
    }

    public void insertFirst(Object item)  // insert at the front of the list
    {
        Node newNode = new Node(item);
        newNode.next = head;
        head = newNode;
        count++;
    }

    public void insertAfter(Object item, Object key)  // insert after key item
    {
        p = find(key);  // p = "location of key node"
        if (p == null)
            System.out.println(key + " key is not found");
        else
        {
            Node newNode = new Node(item);
            newNode.next = p.next;
            p.next = newNode;
            count++;
        }
    }

    public Node find(Object key)  // find node containing key item
    {
        Node current = head;
        while (current != null && !current.data.equals(key))
            current = current.next;
        return current;  // null if key is not found
    }

    public Object deleteFirst()  // delete first node
    {
        if (isEmpty())
        {
            System.out.println("List is empty: no deletion");
            return null;
        }
        Node tmp = head;  // tmp saves reference to head
        head = tmp.next;
        count--;
        return tmp.data;
    }

    public Object deleteAfter(Object key)  // delete node after key item
    {
        p = find(key);  // p = "location of key node"
        if (p == null)
        {
            System.out.println(key + " key is not found");
            return null;
        }
        if (p.next == null)  // if there is no node after the key node
        {
            System.out.println("No deletion");
            return null;
        }
        else
        {
            Node tmp = p.next;   // save node after key node
            p.next = tmp.next;   // point to next of node deleted
            count--;
            return tmp.data;     // return deleted node
        }
    }

    public void displayList()
    {
        p = head;  // assign mem. address of 'head' to 'p'
        System.out.print("\nLinked List: ");
        while (p != null)  // start at beginning of list until end of list
        {
            System.out.print(p.data + " -> ");  // print data
            p = p.next;                         // move to next node
        }
        System.out.println(p);  // prints 'null'
    }

    public boolean isEmpty()  // true if list is empty
    {
        return (head == null);
    }

    public int size()
    {
        return count;
    }
}  // end of LinkedList class

class LinkedListDemo
{
    public static void main(String[] args)
    {
        LinkedList list = new LinkedList();  // create list object
        list.createList(4);                  // create 4 nodes
        list.displayList();
        list.insertFirst(55);                // insert 55 as first node
        list.displayList();
        list.insertAfter(66, 33);            // insert 66 after 33
        list.displayList();
        Object item = list.deleteFirst();    // delete first node
        if (item != null)
        {
            System.out.println("deleteFirst(): " + item);
            list.displayList();
        }
        item = list.deleteAfter(22);         // delete a node after node(22)
        if (item != null)
        {
            System.out.println("deleteAfter(22): " + item);
            list.displayList();
        }
        System.out.println("size(): " + list.size());
    }
}
OUTPUT:
Circular Linked List:
A circular linked list is just a singly linked list in which the link field of the last node points back to the
address of the first node. A circular linked list has no beginning and no end. It is necessary
to establish a special pointer, called the start pointer, that always points to the first node of the list.
The basic operations in a circular linked list are:
1. Creation.
2. Insertion.
3. Deletion.
4. Traversing.
The following steps are to be followed to insert a new node at the beginning of the
circular list (function to insert a node at the beginning of the list).
The following steps are followed to insert a new node at the end of the list:
temp = start;
while(temp -> next != start)
temp = temp -> next;
temp -> next = newnode;
newnode -> next = start;
The following steps are followed to delete a node at the beginning of the list:
temp = start;
start = start -> next;
The following steps are followed to delete a node at the end of the list:
The following steps are followed to traverse the list from left to right:
temp = start;
do
{
display temp -> data;
temp = temp -> next;
} while(temp != start);
Source Code: (CIRCULAR LINKED LIST USING JAVA PROGRAM - insertion and traversing techniques)
class GFG
{
static class Node
{
int data;
Node next;
};
static Node addToEmpty(Node last, int data)
{
// This function is only for an empty list
if (last != null)
return last;
// Creating a node dynamically.
Node temp = new Node();
// Assigning the data.
temp.data = data;
last = temp;
// Link the node to itself to form the circle.
last.next = last;
return last;
}
static Node addBegin(Node last, int data)
{
if (last == null)
return addToEmpty(last, data);
Node temp = new Node();
temp.data = data;
temp.next = last.next;
last.next = temp;
return last;
}
static Node addEnd(Node last, int data)
{
if (last == null)
return addToEmpty(last, data);
Node temp = new Node();
temp.data = data;
temp.next = last.next;
last.next = temp;
last = temp;
return last;
}
static Node addAfter(Node last, int data, int item)
{
if (last == null)
return null;
Node temp, p;
p = last.next;
do
{
if (p.data == item)
{
temp = new Node();
temp.data = data;
temp.next = p.next;
p.next = temp;
if (p == last)
last = temp;
return last;
}
p = p.next;
} while(p != last.next);
System.out.println(item + " not present in the list.");
return last;
}
static void traverse(Node last)
{
// If list is empty, return.
if (last == null)
{
System.out.println("List is empty.");
return;
}
// Traversing the list.
Node p = last.next;
do
{
System.out.print(p.data + " ");
p = p.next;
}
while(p != last.next);
}
// Driver code
public static void main(String[] args)
{
Node last = null;
last = addToEmpty(last, 6);
last = addBegin(last, 4);
last = addBegin(last, 2);
last = addEnd(last, 8);
last = addEnd(last, 12);
last = addAfter(last, 10, 8);
traverse(last);
}
}
OUTPUT: 2 4 6 8 10 12
head_ref = ptr1;
return head_ref;
}
System.out.printf("\n");
}
static Node deleteNode(Node head, int key)
{
if (head == null)
return null;
// Find the required node
Node curr = head, prev = new Node();
while (curr.data != key)
{
if (curr.next == head)
{
System.out.printf("\nGiven node is not found"
+ " in the list!!!");
break;
}
prev = curr;
curr = curr.next;
}
public static void main(String[] args)
{
/* Initialize list as empty */
Node head = null;
/* Created linked list will be 2.5.7.8.10 */
head = push(head, 2);
head = push(head, 5);
head = push(head, 7);
head = push(head, 8);
head = push(head, 10);
System.out.printf("List Before Deletion: ");
printList(head);
head = deleteNode(head, 7);
System.out.printf("List After Deletion: ");
printList(head);
}
}
List Before Deletion: 10 8 7 5 2
Double Linked List:
No traversal from the beginning is required to reach the preceding element, as one can use the
previous links to observe it. A double linked list has a dynamic size, which can be determined only at run time.
A double linked list is a two-way list in which all nodes have two links. This helps in
accessing both the successor node and the predecessor node from a given node position. It
provides bi-directional traversing. Each node contains three fields:
1. Left link.
2. Data.
3. Right link.
The left link points to the predecessor node and the right link points to the successor
node. The data field stores the required data. The basic operations in a double
linked list are:
1. Creation.
2. Insertion.
3. Deletion.
4. Traversing.
The beginning of the double linked list is stored in a "start" pointer which points to the first
node. The first node's left link and the last node's right link are set to NULL.
Creating a double linked list starts with creating a node. Sufficient memory has to be
allocated for creating a node. The information is stored in the memory, allocated by
using the malloc() function.
The following steps are to be followed to insert a new node at the beginning of the list:
newnode -> right = start;
start -> left = newnode;
start = newnode;
The following steps are followed to insert a new node at the end of the list:
temp = start;
while(temp -> right != NULL)
temp = temp -> right;
temp -> right = newnode;
newnode -> left = temp;
The following steps are followed to insert a new node in an intermediate position in the list:
1. Get the new node using getnode(): newnode = getnode();
2. Ensure that the specified position is in between the first node and the last
node. If not, the specified position is invalid. This is done by the countnode()
function.
3. Store the starting address (which is in the start pointer) in the temp and prev
pointers. Then traverse temp up to the specified position, with prev following
one node behind.
4. After reaching the specified position, follow the steps given below:
newnode -> left = temp;
newnode -> right = temp -> right;
temp -> right -> left = newnode;
temp -> right = newnode;
The following steps are followed to delete a node at the beginning of the list:
temp = start;
start = start -> right;
start -> left = NULL;
free(temp);
The following steps are followed to delete a node at the end of the list:
temp = start;
while(temp -> right != NULL)
{
temp = temp -> right;
}
temp -> left -> right = NULL;
free(temp);
Deleting a node at Intermediate position:
The following steps are followed to delete a node from an intermediate position
in the list (the list must contain more than two nodes).
temp -> left -> right = temp -> right;
temp -> right -> left = temp -> left;
free(temp);
The following steps are followed to traverse the list from left to right:
temp = start;
while(temp != NULL)
{
display temp -> data;
temp = temp -> right;
}
Traversal and displaying a list (Right to Left):
The following steps are followed to traverse the list from right to left:
temp = start;
while(temp -> right != NULL)
temp = temp -> right;
while(temp != NULL)
{
display temp -> data;
temp = temp -> left;
}
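The C-style steps above can be collected into a small Java sketch of a double linked list; the class and method names here are illustrative, not from the notes:

```java
class DLL {
    static class Node {
        int data;
        Node left, right; // predecessor and successor links
        Node(int d) { data = d; }
    }
    Node start; // points to the first node

    // Insert a new node at the beginning of the list
    void insertFirst(int d) {
        Node newnode = new Node(d);
        newnode.right = start;
        if (start != null)
            start.left = newnode;
        start = newnode;
    }

    // Traverse from left to right, collecting the data
    String traverse() {
        StringBuilder sb = new StringBuilder();
        for (Node temp = start; temp != null; temp = temp.right)
            sb.append(temp.data).append(" ");
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        DLL list = new DLL();
        list.insertFirst(30);
        list.insertFirst(20);
        list.insertFirst(10);
        System.out.println(list.traverse()); // 10 20 30
    }
}
```

The LinkedDeque class that follows uses the same two-link node structure.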
class LinkedDeque
{
public class DequeNode
{
DequeNode prev;
Object data;
DequeNode next;
DequeNode( Object item ) // constructor
{
data = item;
} // prev & next automatically refer to null
}
private DequeNode first, last;
private int count;
public void addFirst(Object item)
{
if( isEmpty() )
first = last = new DequeNode(item);
else
{
DequeNode tmp = new DequeNode(item);
tmp.next = first;
first.prev = tmp;
first = tmp;
}
count++;
}
public void addLast(Object item)
{
if( isEmpty() )
first = last = new DequeNode(item);
else
{
DequeNode tmp = new DequeNode(item);
tmp.prev = last;
last.next = tmp;
last = tmp;
}
count++;
}
public Object removeFirst()
{
if( isEmpty() )
{
System.out.println("Deque is empty");
return null;
}
else
{
Object item = first.data;
first = first.next;
if( first == null ) // deque is now empty
last = null;
else
first.prev = null;
count--;
return item;
}
}
public Object removeLast()
{
if( isEmpty() )
{
System.out.println("Deque is empty");
return null;
}
else
{
Object item = last.data;
last = last.prev;
if( last == null ) // deque is now empty
first = null;
else
last.next = null;
count--;
return item;
}
}
public Object getFirst()
{
if( !isEmpty() )
return( first.data );
else
return null;
}
public Object getLast()
{
if( !isEmpty() )
return( last.data );
else return null;
}
public boolean isEmpty()
{
return (count == 0);
}
public int size()
{
return(count);
}
public void display()
{
DequeNode p = first;
System.out.print("Deque: [ ");
while( p != null )
{
System.out.print( p.data + " " );
p = p.next;
}
System.out.println("]");
}
}
class LinkedDequeDemo
{
public static void main( String args[])
{ LinkedDeque dq = new LinkedDeque();
System.out.println("removeFirst():" + dq.removeFirst());
dq.addFirst('A');
dq.addFirst('B');
dq.addFirst('C');
dq.display();
dq.addLast('D');
dq.addLast('E');
System.out.println("getFirst():" + dq.getFirst());
System.out.println("getLast():" + dq.getLast());
dq.display();
System.out.println("removeFirst():"+dq.removeFirst());
System.out.println("removeLast():"+ dq.removeLast());
dq.display();
System.out.println("size():" + dq.size());
}
}
OUTPUT:
Sparse Matrices:
A sparse matrix is a matrix in which most of the elements are zero, for example rows such as:
0 0 0 0 0
0 2 6 0 0
A sparse matrix can be represented in two ways:
1. Array representation
2. Linked list representation
Method 1: Using Arrays
A 2D array is used to represent a sparse matrix, in which there are three rows named
Row, Column and Value: the row index, the column index, and the value of each non-zero element.
EXAMPLE:
int size = 0;
for (int i = 0; i < 4; i++)
{
for (int j = 0; j < 5; j++)
{
if (sparseMatrix[i][j] != 0)
{
size++;
}
}
}
Method 2: Using Linked Lists
In the linked list representation, each node has four fields. These four fields are defined as:
1. Row: index of the row where the non-zero element is located
2. Column: index of the column where the non-zero element is located
3. Value: value of the non-zero element located at index (row, column)
4. Next node: address of the next node
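The triplet (array) representation described under Method 1 can be sketched in Java as follows; the matrix values and the method name toTriplet are illustrative:

```java
public class SparseDemo {
    // Build the 3-row triplet representation: row, column, value of each non-zero
    static int[][] toTriplet(int[][] m) {
        int size = 0;
        for (int[] row : m)
            for (int v : row)
                if (v != 0) size++; // count the non-zero elements
        int[][] t = new int[3][size];
        int k = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                if (m[i][j] != 0) {
                    t[0][k] = i;       // row index
                    t[1][k] = j;       // column index
                    t[2][k] = m[i][j]; // non-zero value
                    k++;
                }
        return t;
    }

    public static void main(String[] args) {
        int[][] sparseMatrix = {
            {0, 0, 3, 0, 4},
            {0, 0, 5, 7, 0},
            {0, 0, 0, 0, 0},
            {0, 2, 6, 0, 0}
        };
        int[][] t = toTriplet(sparseMatrix);
        for (int[] row : t) {
            for (int v : row) System.out.print(v + " ");
            System.out.println();
        }
    }
}
```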
UNIT – 2
STACKS AND QUEUES
A stack is a container of objects that are inserted and removed according to the
last-in first-out (LIFO) principle. In a pushdown stack only two operations are
allowed: push an item onto the stack, and pop an item off the stack. A stack is
a limited-access data structure: elements can be added to and removed from the
stack only at the top. push adds an item to the top of the stack, pop removes the
item from the top. A helpful analogy is to think of a stack of books; you can remove
only the top book, and you can add a new book only on the top.
A stack may be implemented to have a bounded capacity. If the stack is full and
does not contain enough space to accept an entity to be pushed, the stack is then
considered to be in an overflow state. The pop operation removes an item from the
top of the stack. A pop either reveals previously concealed items or results in an
empty stack; if the stack is empty, it goes into an underflow state, which means
no items are present in the stack to be removed.
Let us consider a stack with a capacity of 6 elements. This is called the size of the
stack. The number of elements to be added should not exceed the maximum size
of the stack. If we attempt to add a new element beyond the maximum size, we will
encounter a stack overflow condition. Similarly, you cannot remove elements
beyond the base of the stack. If such is the case, we will reach a stack underflow
condition.
When an element is taken off the stack, the operation is performed by pop().
Procedure:
STACK: A stack is a linear data structure which works under the principle of last in, first
out. Basic operations: push, pop, display.
PUSH: IF the stack is full, report overflow; else store the element at stack[++top].
POP: IF the stack is empty, report underflow; else return stack[top--].
DISPLAY: IF the stack is empty, display "stack is empty"; else print the elements in the
stack from stack[top] down to stack[0].
SOURCE CODE:
stack ADT using array
import java.io.*;
class stackclass
{
int top,ele,stack[],size;
stackclass(int n)
{
stack=new int[n];
size=n;
top= -1;
}
void push(int x)
{
ele=x;
stack[++top]=ele;
}
int pop()
{
if(!isempty())
{
System.out.println("Deleted element is");
return stack[top--];
}
else
{
System.out.println("stack is empty");
return -1;
}
}
boolean isempty()
{
if(top==-1)
return true;
else
return false;
}
boolean isfull()
{
if(size>(top+1))
return false;
else
return true;
}
int peek()
{
if(!isempty())
return stack[top];
else
{
System.out.println("stack is empty");
return -1;
}
}
void size()
{
System.out.println("size of the stack is :"+(top+1));
}
void display()
{
if(!isempty())
{
for(int i=top;i>=0;i--)
System.out.print(stack[i]+" ");
}
else
System.out.println("stack is empty");
}
}
class stacktest
{
public static void main(String args[])throws Exception
{
BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
System.out.println("enter the size of stack");
int size=Integer.parseInt(br.readLine());
stackclass s=new stackclass(size);
int ch,ele;
do
{
System.out.println();
System.out.println("1.push");
System.out.println("2.pop");
System.out.println("3.peek");
System.out.println("4.size");
System.out.println("5.display");
System.out.println("6.is empty");
System.out.println("7.is full");
System.out.println("8.exit");
System.out.println("enter ur choise :");
ch=Integer.parseInt(br.readLine());
switch(ch)
{
case 1:if(!s.isfull())
{
System.out.println("enter the element to insert: ");
ele=Integer.parseInt(br.readLine());
s.push(ele);
}
else
{
System.out.print("stack is overflow");
}
break;
case 2:int del=s.pop();
if(del!=-1)
System.out.println(del+" is deleted");
break;
case 3:int p=s.peek();
if(p!=-1)
System.out.println("peek element is: "+p);
break;
case 4:s.size();
break;
case 5:s.display();
break;
case 6:boolean b=s.isempty();
System.out.println(b);
break;
case 7:boolean b1=s.isfull();
System.out.println(b1);
break;
case 8 :System.exit(1);
}
}while(ch!=0);
}
}
OUTPUT:
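Besides the hand-written ADT above, the java.util package covered in this unit provides a ready-made stack; a minimal sketch:

```java
import java.util.Stack;

public class JavaUtilStackDemo {
    public static void main(String[] args) {
        Stack<Integer> s = new Stack<>();
        s.push(10); // insert at the top
        s.push(20);
        s.push(30);
        System.out.println(s.peek());    // 30 (top element, not removed)
        System.out.println(s.pop());     // 30 (top element, removed)
        System.out.println(s.size());    // 2
        System.out.println(s.isEmpty()); // false
    }
}
```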
We can represent a stack as a linked list. In a stack push and pop operations
are performed at one end called top. We can perform similar operations at one
end of list using top pointer.
import java.io.*;
class Stack1
{
int data;
Stack1 next, prev;
static Stack1 top = null;
Stack1()
{
data=0;
next=prev=null;
}
Stack1(int d)
{
data=d;
next=prev=null;
}
void push(int n)
{
Stack1 nn;
nn=new Stack1(n);
if(top==null)
top=nn;
else
{
nn.next=top;
top.prev=nn;
top=nn;
}
}
int pop()
{
int k=top.data;
if(top.next==null)
{
top=null;
return k;
}
else
{
top=top.next;
top.prev=null;
return k;
}
}
boolean isEmpty()
{
if(top==null)
return true;
else
return false;
}
void display()
{
Stack1 ptr;
for(ptr=top;ptr!=null;ptr=ptr.next)
System.out.print(ptr.data+" ");
}
public static void main(String args[ ])throws Exception
{
int x;
int ch;
BufferedReader b=new BufferedReader(new InputStreamReader(System.in));
Stack1 a=new Stack1();
do{
System.out.println("enter 1 for pushing");
System.out.println("enter 2 for poping");
System.out.println("enter 3 for isEmpty");
System.out.println("enter 4 for display");
System.out.println("Enter 0 for exit");
System.out.println("enter ur choice ");
ch=Integer.parseInt(b.readLine());
switch(ch)
{
case 1:System.out.println("enter element to insert");
int e=Integer.parseInt(b.readLine());
a.push(e);
break;
case 2:if(!a.isEmpty())
{
int p=a.pop();
System.out.println("deleted element is "+p);
}
else
{
System.out.println("stack is empty");
}
break;
case 3:System.out.println(a.isEmpty());
break;
case 4:if(!a.isEmpty())
{
a.display();
}
else
{
System.out.println("list is empty");
}
}
}while(ch!=0);
}
}
OUTPUT:
Stack Applications:
1. Stack is used by compilers to check for balancing of parentheses, brackets and braces.
2. Stack is used to evaluate a postfix expression.
3. Stack is used to convert an infix expression into postfix/prefix form.
4. In recursion, all intermediate arguments and return values are stored
on the processor's stack.
5. During a function call the return address and arguments are pushed
onto a stack, and on return they are popped off.
6. Depth first search uses a stack data structure to find an element in a graph.
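Application 1 above (balancing of parentheses, brackets and braces) can be sketched as follows; the method name isBalanced is illustrative:

```java
import java.util.Stack;

public class BalanceCheck {
    // Returns true if every (, [, { has a matching close in the right order
    static boolean isBalanced(String expr) {
        Stack<Character> stk = new Stack<>();
        for (char ch : expr.toCharArray()) {
            if (ch == '(' || ch == '[' || ch == '{') {
                stk.push(ch); // remember the opener
            } else if (ch == ')' || ch == ']' || ch == '}') {
                if (stk.isEmpty()) return false; // close with no open
                char open = stk.pop();
                if ((ch == ')' && open != '(') ||
                    (ch == ']' && open != '[') ||
                    (ch == '}' && open != '{'))
                    return false; // mismatched pair
            }
        }
        return stk.isEmpty(); // any leftover opener is unbalanced
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("{a+(b*c)-[d/e]}")); // true
        System.out.println(isBalanced("(a+b]"));           // false
    }
}
```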
Procedure:
The procedure to convert an infix expression to a postfix expression is as follows: scan the
infix expression from left to right; append operands to the postfix string as they are read;
on reading an operator, pop (and append) all stacked operators of equal or higher
precedence, then push the new operator; at the end of the expression, pop and append any
remaining operators.
Convert the following infix expression A + B * C - D / E * H into its equivalent postfix expression.

Symbol   Postfix string   Stack
A        A
+        A                +
B        AB               +
*        AB               +*
C        ABC              +*
-        ABC*+            -
D        ABC*+D           -
/        ABC*+D           -/
E        ABC*+DE          -/
*        ABC*+DE/         -*
H        ABC*+DE/H        -*
End      ABC*+DE/H*-
Source Code:
import java.io.*;
class InfixToPostfix
{
java.util.Stack<Character> stk = new java.util.Stack<Character>();
// ... toPostfix(String infix) scans the expression in a for-loop;
// the closing part of the method is shown below ...
if( ch == ')' )
{
item = stk.pop();
while( item != '(' )
{
postfix = postfix + item;
item = stk.pop();
}
}
} // end of for-loop
return postfix;
} // end of toPostfix() method
boolean isOperator(char c)
{
return( c=='+' || c=='-' || c=='*' || c=='/' );
}
//InfixToPostfixDemo.java
class InfixToPostfixDemo
{
public static void main(String args[]) throws IOException
{
InfixToPostfix obj = new InfixToPostfix();
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
System.out.println("Enter Expression:");
String infix = br.readLine();
System.out.println("Postfix: " + obj.toPostfix(infix));
}
}
OUTPUT:
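Because the listing above survives only in part, here is a compact self-contained sketch of the same conversion, assuming single-letter operands, the operators + - * / and parentheses (class and helper names are illustrative):

```java
import java.util.Stack;

public class InfixToPostfixSketch {
    // Precedence: * and / bind tighter than + and -
    static int prec(char c) {
        return (c == '+' || c == '-') ? 1 : 2;
    }
    static boolean isOperator(char c) {
        return c == '+' || c == '-' || c == '*' || c == '/';
    }
    static String toPostfix(String infix) {
        Stack<Character> stk = new Stack<>();
        StringBuilder postfix = new StringBuilder();
        for (char ch : infix.toCharArray()) {
            if (Character.isLetterOrDigit(ch)) {
                postfix.append(ch); // operands go straight to the output
            } else if (ch == '(') {
                stk.push(ch);
            } else if (ch == ')') {
                while (stk.peek() != '(') postfix.append(stk.pop());
                stk.pop(); // discard the '('
            } else if (isOperator(ch)) {
                // pop operators of equal or higher precedence first
                while (!stk.isEmpty() && stk.peek() != '('
                        && prec(stk.peek()) >= prec(ch))
                    postfix.append(stk.pop());
                stk.push(ch);
            }
        }
        while (!stk.isEmpty()) postfix.append(stk.pop());
        return postfix.toString();
    }
    public static void main(String[] args) {
        System.out.println(toPostfix("A+B*C-D/E*H")); // ABC*+DE/H*-
    }
}
```

Running it on the traced expression reproduces the table above.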
Procedure:
A postfix expression is evaluated easily by the use of a stack. When a number is
seen, it is pushed onto the stack; when an operator is seen, the operator is applied
to the two numbers that are popped from the stack, and the result is pushed onto
the stack.
Example: evaluate the postfix expression 6 5 2 3 + 8 * + 3 + *.

Symbol   Stack
6        6
5        6, 5
2        6, 5, 2
3        6, 5, 2, 3
+        6, 5, 5        (2 and 3 are popped; 2 + 3 = 5 is pushed)
8        6, 5, 5, 8
*        6, 5, 40       (5 and 8 are popped; 5 * 8 = 40 is pushed)
+        6, 45          (40 and 5 are popped; 40 + 5 = 45 is pushed)
3        6, 45, 3
+        6, 48          (45 and 3 are popped; 45 + 3 = 48 is pushed)
*        288            (6 and 48 are popped; the result 6 * 48 = 288 is pushed)
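The evaluation procedure above can be sketched in Java for single-digit operands (the class and method names are illustrative):

```java
import java.util.Stack;

public class PostfixEval {
    // Evaluate a postfix expression whose operands are single digits
    static int evaluate(String postfix) {
        Stack<Integer> stk = new Stack<>();
        for (char ch : postfix.toCharArray()) {
            if (Character.isDigit(ch)) {
                stk.push(ch - '0'); // operand: push its value
            } else {
                int b = stk.pop(), a = stk.pop(); // operator: pop two operands
                switch (ch) {
                    case '+': stk.push(a + b); break;
                    case '-': stk.push(a - b); break;
                    case '*': stk.push(a * b); break;
                    case '/': stk.push(a / b); break;
                }
            }
        }
        return stk.pop(); // final result
    }
    public static void main(String[] args) {
        System.out.println(evaluate("6523+8*+3+*")); // 288
    }
}
```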
A queue is a data structure that is best described as "first in, first out". A queue is
another special kind of list, where items are inserted at one end called the rear and
deleted at the other end called the front. A real-world example of a queue is people
waiting in line at a bank. As each person enters the bank, he or she is "enqueued" at
the back of the line; when a teller becomes available, the person at the front of the
line is "dequeued".
Let us consider a queue which can hold a maximum of five elements. Initially the queue is empty.

[ _, _, _, _, _ ]     Queue Empty: FRONT = REAR = 0

Now, insert 11 into the queue. Then the queue status is:
[ 11, _, _, _, _ ]    FRONT = 0, REAR = REAR + 1 = 1

Next, insert 22 into the queue. Then the queue status is:
[ 11, 22, _, _, _ ]   FRONT = 0, REAR = REAR + 1 = 2

Again insert another element 33 to the queue. The status of the queue is:
[ 11, 22, 33, _, _ ]  FRONT = 0, REAR = REAR + 1 = 3

Now, delete an element. The element deleted is the element at the front of the
queue. So the status of the queue is:
[ _, 22, 33, _, _ ]   FRONT = FRONT + 1 = 1, REAR = 3

Again delete an element. The status of the queue is:
[ _, _, 33, _, _ ]    FRONT = FRONT + 1 = 2, REAR = 3

Now, insert new elements 44 and 55 into the queue. The queue status is:
[ _, _, 33, 44, 55 ]  FRONT = 2, REAR = 5

Next insert another element, say 66, to the queue. We cannot insert 66 into the
queue as the rear has crossed the maximum size of the queue (i.e., 5). A "queue
full" signal is raised. The queue status remains:
[ _, _, 33, 44, 55 ]  FRONT = 2, REAR = 5

Now it is not possible to insert element 66 even though there are two vacant
positions in the linear queue. To overcome this problem, the elements of the queue
are shifted towards the beginning of the queue so that vacant positions are created
at the rear end; FRONT and REAR are then adjusted properly. The element 66 can
then be inserted at the rear end. After this operation, the queue status is as follows:
[ 33, 44, 55, 66, _ ] FRONT = 0, REAR = 4
This difficulty can be overcome if we treat the queue position with index 0 as the
position that comes after the position with index 4, i.e., if we treat the queue as a
circular queue.
Procedure for Queue operations using array:
In order to create a queue we require a one dimensional array Q(1:n) and two
variables front and rear. The conventions we shall adopt for these two variables are
that front is always 1 less than the actual front of the queue and rear always points
to the last element in the queue. Thus, front = rear if and only if there are no
elements in the queue. The initial condition then is front = rear = 0.
The various queue operations to perform creation, deletion and display the
elements in a queue are as follows:
Source Code:
Queue ADT using array
import java.util.*;
class queue
{
int front,rear;
int que[];
int max,count=0;
queue(int n)
{
max=n;
que=new int[max];
front=rear=-1;
}
boolean isfull()
{
if(rear==(max-1))
return true;
else
return false;
}
boolean isempty()
{
if(front==-1)
return true;
else
return false;
}
void insert(int n)
{
if(isfull())
System.out.println("list is full");
else
{
rear++;
que[rear]=n;
if(front==-1)
front=0;
count++;
}
}
int delete()
{
int x;
if(isempty())
return -1;
else
{
x=que[front];
que[front]=0;
if(front==rear)
front=rear=-1;
else
front++;
count--;
}
return x;
}
void display()
{
if(isempty())
System.out.println("queue is empty");
else
for(int i=front;i<=rear;i++)
System.out.println(que[i]);
}
int size()
{
return count;
}
public static void main(String args[])
{
int ch;
Scanner s=new Scanner(System.in);
System.out.println("enter limit");
int n=s.nextInt();
queue q=new queue(n);
do
{
System.out.println("1.insert");
System.out.println("2.delete");
System.out.println("3.display");
System.out.println("4.size");
System.out.println("enter ur choice :");
ch=s.nextInt();
switch(ch)
{
case 1:System.out.println("enter element :");
int n1=s.nextInt();
q.insert(n1);
break;
case 2:int c1=q.delete();
if(c1>0)
System.out.println("deleted element is :"+c1);
else
System.out.println("can't delete");
break;
case 3:q.display();
break;
case 4:System.out.println("queue size is "+q.size());
break;
}
}
while(ch!=0);
}
}
OUTPUT:
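The java.util package covered in this unit also provides queue behaviour ready-made; a minimal sketch using java.util.LinkedList through the Queue interface:

```java
import java.util.LinkedList;
import java.util.Queue;

public class JavaUtilQueueDemo {
    public static void main(String[] args) {
        Queue<Integer> q = new LinkedList<>();
        q.offer(11); // insert at the rear
        q.offer(22);
        q.offer(33);
        System.out.println(q.peek()); // 11 (front element, not removed)
        System.out.println(q.poll()); // 11 (front element, removed)
        System.out.println(q.size()); // 2
    }
}
```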
Source Code:
Queue ADT using linked list
import java.io.*;
class Qlnk
{
int data;
Qlnk next;
static Qlnk front = null, rear = null;
Qlnk() { } // queue object constructor
Qlnk(int d) // node constructor
{
data=d;
next=null;
}
Qlnk getFront()
{
return front;
}
Qlnk getRear()
{
return rear;
}
void insertelm(int item)
{
Qlnk nn;
nn=new Qlnk(item);
if(isEmpty())
{
front=rear=nn;
}
else
{
rear.next=nn;
rear=nn;
}
}
int delelm()
{
if(isEmpty())
{
System.out.println("deletion failed");
return -1;
}
else
{
int k=front.data;
if(front!=rear)
front=front.next;
else
rear=front=null;
return k;
}
}
boolean isEmpty()
{
if(rear==null)
return true;
else
return false;
}
int size()
{
Qlnk ptr;
int cnt=0;
for(ptr=front;ptr!=null;ptr=ptr.next)
cnt++;
return cnt;
}
void display()
{
Qlnk ptr;
if(!isEmpty())
{
for(ptr=front;ptr!=null;ptr=ptr.next)
System.out.print(ptr.data+" ");
}
else
System.out.println("q is empty");
}
public static void main(String arr[])throws Exception
{
BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
Qlnk m=new Qlnk();
int ch;
do
{
System.out.println("enter 1 for insert");
System.out.println("enter 2 for deletion");
System.out.println("enter 3 for getFront");
System.out.println("enter 4 for getRear");
System.out.println("enter 5 for size");
System.out.println("enter 6 for display");
System.out.println("enter 0 for exit");
System.out.println("enter ur choice");
ch=Integer.parseInt(br.readLine());
switch(ch)
{
case 1:System.out.println("enter ele to insert");
int item=Integer.parseInt(br.readLine());
m.insertelm(item);break;
case 2:int k=m.delelm();
System.out.println("deleted ele is "+k);break;
case 3:System.out.println("front element is "+(m.getFront()).data);break;
case 4:System.out.println("rear element is "+(m.getRear()).data);break;
case 5:System.out.println("size is"+m.size());break;
case 6:m.display();break;
}
}while(ch!=0);
}
}
OUTPUT:
Applications of Queues:
There are two problems associated with the linear queue. They are:
1. The time consumed in shifting all the remaining elements towards the front after every deletion.
2. A "queue full" signal may be raised even when free positions exist at the front of the array, as seen in the example above.
DEQUE (Double Ended Queue)
A deque is a double-ended queue in which insertions and deletions can be made at both
ends. An output-restricted deque allows deletions from only one end, and an
input-restricted deque allows insertions at only one end. The deque can be constructed
in two ways:
1) Using an array
2) Using a linked list
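Before the hand-written implementations, note that java.util.ArrayDeque provides a ready-made deque; a minimal sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class JavaUtilDequeDemo {
    public static void main(String[] args) {
        Deque<Character> dq = new ArrayDeque<>();
        dq.addLast('A');  // insert at the rear
        dq.addLast('B');
        dq.addFirst('C'); // insert at the front
        System.out.println(dq);             // [C, A, B]
        System.out.println(dq.pollFirst()); // C (deleted at the front)
        System.out.println(dq.pollLast());  // B (deleted at the rear)
        System.out.println(dq);             // [A]
    }
}
```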
Operations in DEQUE
Applications of DEQUE:
class ArrayDeque
{
private Object que[]; // circular array of items
private int maxSize; // capacity of the deque
private int first, last; // indexes of the first and last items
private int count; // current number of items
public ArrayDeque(int s) // constructor
{
maxSize = s;
que = new Object[maxSize];
first = last = -1;
count = 0;
}
public void insertLast(Object item)
{
if(count == maxSize)
{
System.out.println("Deque is full");
return;
}
last = (last+1) % maxSize;
que[last] = item;
if(first == -1 && last == 0)
first = 0;
count++;
}
public Object deleteLast()
{
if(count == 0)
{
System.out.println("Deque is empty");
return(' ');
}
Object item = que[last];
que[last] = ' ';
if(last > 0)
last = last - 1;
else
last = maxSize - 1; // wrap around
count--;
if(count == 0)
first = last = -1;
return(item);
}
public void insertFirst(Object item)
{
if(count == maxSize)
{
System.out.println("Deque is full"); return;
}
if(first == -1) // deque was empty
first = last = 0;
else if(first > 0)
first = first - 1;
else // first == 0: wrap around
first = maxSize - 1;
que[first] = item;
count++;
}
public Object deleteFirst()
{
if(count == 0)
{
System.out.println("Deque is empty"); return(' ');
}
Object item = que[first];
que[first] = ' ';
if(first == maxSize-1)
first = 0;
else
first = (first+1) % maxSize;
count--;
if(count == 0)
first = last = -1;
return(item);
}
void display()
{
System.out.println("----------------------------- ");
System.out.print("first:"+first + ", last:"+ last);
System.out.println(", count: " + count);
System.out.println(" 0 1 2 3 4 5");
System.out.print("Deque: ");
for( int i=0; i<maxSize; i++ )
System.out.print(que[i]+ " ");
System.out.println("\n ---------------------------- ");
}
public boolean isEmpty() // true if queue is empty
{
return (count == 0);
}
public boolean isFull() // true if queue is full
{
return (count == maxSize);
}
}
class ArrayDequeDemo
{
public static void main(String[] args)
{
ArrayDeque q1 = new ArrayDeque(6); // queue holds a max of 6 items
q1.insertLast('A'); /* (a) */
q1.insertLast('B');
q1.insertLast('C');
q1.insertLast('D');
System.out.println("deleteFirst():"+q1.deleteFirst());
q1.display();
q1.insertLast('E'); /* (b) */
q1.display();
/* (c) */
System.out.println("deleteLast():"+q1.deleteLast());
System.out.println("deleteLast():"+q1.deleteLast());
q1.display();
q1.insertFirst('P');
q1.insertFirst('Q'); /* (d) */
q1.insertFirst('R');
q1.display();
q1.deleteFirst();
q1.display(); /* (e) */
q1.insertFirst('X');
q1.display(); /* (f) */
q1.insertLast('Y');
q1.display(); /* (g) */
q1.insertLast('Z');
q1.display(); /* (h) */
}
}
OUTPUT:
Circular Queue:
A circular queue is a linear data structure that follows the FIFO principle, in which
the last position is connected back to the first position to make a circle.
1. Elements are added at the rear end and deleted at the front end of the queue.
2. Initially, both the front and the rear pointers point to the beginning of the array.
Let us consider a circular queue which can hold a maximum (MAX) of six elements.
Initially the queue is empty.
Source Code:
import java.util.*;
class CirQue
{
int front,rear,next=0;
int que[];
int max,count=0;
CirQue(int n)
{
max=n;
que=new int[max];
front=rear=-1;
}
boolean isfull()
{
if(front==(rear+1)%max)
return true;
else
return false;
}
boolean isempty()
{
if(front==-1&&rear==-1)
return true;
else
return false;
}
int delete()
{
if(isempty())
{
return -1;
}
else
{
count --;
int x=que[front];
if(front==rear)
front=rear=-1;
else
{
next=(front+1)%max;
front=next;
}
return x;
}}
void insert(int item)
{
if(isempty())
{
que[++rear]=item;
front=rear;
count ++;
}
else if(!isfull())
{
next=(rear+1)%max;
if(next!=front)
{
que[next]=item;
rear=next;
}
count ++;
}
else
System.out.println("q is full");
}
void display()
{
if(isempty())
System.out.println("queue is empty");
else
{
int i = front;
for(int c = 0; c < count; c++) // visit exactly count items, wrapping around
{
System.out.println(que[i]);
i = (i + 1) % max;
}
}
}
int size()
{
return count;
}
public static void main(String args[])
{
int ch;
Scanner s=new Scanner(System.in);
System.out.println("enter limit");
int n=s.nextInt();
CirQue q=new CirQue(n);
do
{System.out.println("1.insert");
System.out.println("2.delete");
System.out.println("3.display");
System.out.println("4.size");
System.out.println("enter ur choice :");
ch=s.nextInt();
switch(ch)
{case 1:System.out.println("enter element :");
int n1=s.nextInt();
q.insert(n1);
break;
case 2:int c1=q.delete();
if(c1>0)
System.out.println("deleted element is :"+c1);
else
System.out.println("can't delete");
break;
case 3:q.display();
break;
case 4:System.out.println("queue size is "+q.size());
break;
}
}
while(ch!=0);
}
}
OUTPUT:
PRIORITY QUEUE
DEFINITION:
A priority queue is a collection of zero or more elements. Each element has a priority or value.
1. Unlike queues, which are FIFO structures, the order of deletion from a
priority queue is determined by the element priority.
Min priority queue: a collection of elements in which items can be inserted
arbitrarily, but only the smallest element can be removed.
Max priority queue: a collection of elements in which insertion of items can be in any
order, but only the largest element can be removed.
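java.util, covered in this unit, provides a ready-made min priority queue; a minimal sketch using java.util.PriorityQueue:

```java
import java.util.PriorityQueue;

public class PQDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>(); // min priority queue
        pq.offer(40); // insertion can be in any order
        pq.offer(10);
        pq.offer(30);
        System.out.println(pq.peek()); // 10: the smallest element is at the head
        System.out.println(pq.poll()); // 10: only the smallest can be removed
        System.out.println(pq.poll()); // 30
    }
}
```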
In a priority queue, the elements are arranged in any order, out of which only the smallest or the largest element is allowed to be deleted.
The implementation of priority queue can be done using arrays or linked list.
The data structure heap is used to implement the priority queue effectively.
APPLICATIONS:
1. The typical example of priority queue is scheduling the jobs in operating system.
Typically OS allocates priority to jobs. The jobs are placed in the queue and
position of the job in the priority queue determines its priority. In the OS there are 3 kinds of jobs:
real-time jobs, foreground jobs and background jobs. The OS always schedules the
real-time jobs first. If there are no real-time jobs pending, it schedules the
foreground jobs. Lastly, if no real-time or foreground jobs are pending, the OS
schedules the background jobs.
2. In network communication, a priority queue is used to manage limited transmission bandwidth.
3. In simulation modeling, a priority queue is used to manage the discrete events.
The operations performed on a priority queue are:
1. Find an element
2. Insert a new element
3. Remove or delete an element
The abstract data type specification for a max priority queue is given below. The specification for a min priority queue is the same, except that deletion finds and removes the element with minimum priority.
SOURCE CODE:
LinkedPriorityQueueDemo.java
class Node
{
String data; // data item
int prn; // priority number (minimum has highest priority)
Node next; // "next" refers to the next node
Node( String str, int p ) // constructor
{
data = str;
prn = p;
} // "next" is automatically set to null
}
class LinkedPriorityQueue
{
Node head; // “head” refers to first node
public void insert(String item, int pkey) // insert item in priority order
{
Node newNode = new Node(item, pkey); // create new node
int k;
if( head == null ) k = 1;
else if( newNode.prn < head.prn ) k = 2;
else k = 3;
switch( k )
{
case 1: head = newNode; // Q is empty, add head node
head.next = null;
break;
case 2: Node oldHead = head; // add one item before head
head = newNode;
newNode.next = oldHead;
break;
case 3: Node p = head; // add item before a node
Node prev = p;
Node nodeBefore = null;
while( p != null )
{
if( newNode.prn < p.prn )
{
nodeBefore = p;
break;
}
else
{
prev = p; // save previous node of current node
p = p.next; // move to next node
}
} // end of while
newNode.next = nodeBefore;
prev.next = newNode;
} // end of switch
} // end of insert() method
public Node delete()
{
if( isEmpty() )
{
System.out.println("Queue is empty");
return null;
}
else
{
Node tmp = head;
head = head.next;
return tmp;
}
}
public void displayList()
{
Node p = head; // assign address of head to p
System.out.print("\nQueue: ");
while( p != null ) // start at beginning of list until end of list
{
System.out.print(p.data+"(" +p.prn+ ")" + " ");
p = p.next; // move to next node
}
System.out.println();
}
public boolean isEmpty() // true if list is empty
{
return (head == null);
}
public Node peek() // get first item
{
return head;
}
}
class LinkedPriorityQueueDemo
{
public static void main(String[] args)
{ LinkedPriorityQueue pq = new LinkedPriorityQueue(); // create new queue list
Node item;
pq.insert("Babu", 3);
pq.insert("Nitin", 2);
pq.insert("Laxmi", 2);
pq.insert("Kim", 1);
pq.insert("Jimmy", 3);
pq.displayList();
item = pq.delete();
if( item != null )
System.out.println("delete():" + item.data + "(" +item.prn+")");
pq.displayList();
pq.insert("Scot", 2);
pq.insert("Anu", 1);
pq.insert("Lehar", 4);
pq.displayList();
} }
OUTPUT:
HEAPS
A heap is a tree data structure that is either a max heap or a min heap.
A max heap is a tree in which the value of each node is greater than or equal to the values of its children nodes. A min heap is a tree in which the value of each node is less than or equal to the values of its children nodes.
Now suppose we want to insert 7. We cannot simply insert 7 as the left child of 4, because a max heap requires the value of every node to be greater than or equal to the values of its children. Hence 7 will bubble up and 4 will become the left child of 7.
Note: When a new node is inserted into a complete binary tree we start from the bottom, filling from the left on the current level. A heap is always a complete binary tree.
The insertion strategy just outlined makes a single bubbling pass from a leaf toward the root. At each level we do O(1) work, so the strategy can be implemented with complexity O(height) = O(log n).
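The bubbling-up pass can be sketched on an array-based max heap as follows (an illustrative fragment of ours, not the notes' prescribed code; heap[1..size] stores the tree, so the parent of index i is i/2):

```java
public class MaxHeapInsertDemo {
    static int[] heap = new int[100];   // heap[1..size]; index 0 is unused
    static int size = 0;

    // Insert: place item at the next leaf, then bubble it up toward the root
    static void insert(int item) {
        heap[++size] = item;
        int i = size;
        while (i > 1 && heap[i / 2] < heap[i]) {   // parent smaller: swap up
            int tmp = heap[i]; heap[i] = heap[i / 2]; heap[i / 2] = tmp;
            i = i / 2;
        }
    }

    public static void main(String[] args) {
        int[] keys = {4, 6, 7};   // inserting 7 last makes it bubble to the top
        for (int k : keys) insert(k);
        System.out.println(heap[1]);   // the root holds the maximum
    }
}
```

Each swap moves the new key one level up, so the loop runs at most O(log n) times.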
The deletion operation always removes the maximum element from a max heap. In a max heap the maximum element is always present at the root, and once the root element is deleted we need to re-heapify the tree.
Delete the root element, 25. The new root must be greater than or equal to all of its children. We cannot put the last leaf, 4, at the root, as that would not satisfy the heap property; instead we bubble up the larger child, 18, placing 18 at the root and 4 at the former position of 18.
If 18 is deleted next, 12 becomes the root and 11 becomes the parent node of 10.
Thus deletion operation can be performed. The time complexity of deletion operation is O(log n).
1. Remove the maximum element, which is present at the root. This creates a hole at the root.
2. Re-heapify the tree: moving from the root toward the leaves, repeatedly promote the larger child into the hole until the heap property is restored.
3. Repeat steps 1 and 2 if any more elements are to be deleted.
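The steps above can be sketched on an array-based max heap (again an illustrative fragment of ours; the starting array loosely follows the example values 25, 12, 18, 4, 10, 11 used earlier):

```java
public class MaxHeapDeleteDemo {
    static int[] heap = {0, 25, 12, 18, 4, 10, 11};   // 1-based valid max heap
    static int size = 6;

    // Delete-max: move the last leaf into the root hole, then sift it down
    static int deleteMax() {
        int max = heap[1];
        heap[1] = heap[size--];
        int i = 1;
        while (2 * i <= size) {
            int child = 2 * i;                           // pick the larger child
            if (child < size && heap[child + 1] > heap[child]) child++;
            if (heap[i] >= heap[child]) break;           // heap property restored
            int tmp = heap[i]; heap[i] = heap[child]; heap[child] = tmp;
            i = child;
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(deleteMax());   // the old maximum, 25
        System.out.println(heap[1]);       // the larger child, 18, became the root
    }
}
```

As in insertion, each swap moves down one level, giving the O(log n) bound stated above.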
Applications Of Heap:
1. Heap is used in sorting algorithms. One such algorithm using heap is known as heap sort.
2. In priority queue implementation the heap is used.
ArrayList in Java
ArrayList is a part of the collection framework and is present in the java.util package. It provides dynamic arrays in Java. Though it may be slower than standard arrays, it is helpful in programs that do a lot of manipulation on the array.
1. ArrayList inherits the AbstractList class and implements the List interface.
2. An ArrayList is initialized with a size; however, the size can increase if the collection grows or shrink if objects are removed from the collection.
3. Java ArrayList allows us to randomly access the list.
4. ArrayList cannot be used for primitive types like int, char, etc.; we need a wrapper class for such cases.
5. ArrayList in Java can be seen as similar to vector in C++.
1. forEach(Consumer action): Performs the given action for each element of the
Iterable until all elements have been processed or the action throws an
exception.
2. retainAll(Collection c): Retains only the elements in this list that are contained
in the specified collection.
26. boolean addAll(Collection C): This method is used to append all the elements from a specific collection to the end of the mentioned list, in such an order that the values are returned by the specified collection's iterator.
27. boolean add(Object o): This method is used to append a specified element to the end of a list.
28. boolean addAll(int index, Collection C): Used to insert all of the elements, starting at the specified position, from a specific collection into the mentioned list.
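The methods listed above can be exercised in a short sketch (illustrative code of ours):

```java
import java.util.ArrayList;
import java.util.Arrays;

public class ArrayListDemo {
    public static void main(String[] args) {
        ArrayList<Integer> list = new ArrayList<>();
        list.add(10);                                    // boolean add(Object o)
        list.addAll(Arrays.asList(20, 30, 40));          // boolean addAll(Collection C)
        list.addAll(1, Arrays.asList(15));               // boolean addAll(int index, Collection C)
        list.retainAll(Arrays.asList(10, 15, 30, 99));   // keep only the listed values
        list.forEach(x -> System.out.println(x));        // forEach(Consumer action)
        System.out.println(list.size());
    }
}
```

After the calls the list holds 10, 15, 30: addAll(1, ...) inserted 15 at index 1, and retainAll removed 20 and 40.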
LinkedList in Java
Linked lists are linear data structures in which the elements are not stored in contiguous locations; every element is a separate object with a data part and an address part. The elements are linked using pointers and addresses, and each element is known as a node.
Due to their dynamic nature and the ease of insertions and deletions, they are preferred over arrays. They also have a few disadvantages: nodes cannot be accessed directly; instead we need to start from the head and follow the links to reach the node we wish to access.
To store its elements, Java's LinkedList uses a doubly linked list, which provides a linear data structure; the class inherits an abstract class (AbstractSequentialList) and implements the List and Deque interfaces.
In Java, the LinkedList class implements the List interface. The LinkedList class also provides various constructors and methods like other Java collections.
Constructors for Java LinkedList:
1. LinkedList(): Used to create an empty linked list.
2. LinkedList(Collection C): Used to create an ordered list containing all the elements of the specified collection, as returned by the collection's iterator.
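Both constructors, plus the deque-style operations LinkedList gains from the Deque interface, can be sketched as follows (illustrative code of ours):

```java
import java.util.Arrays;
import java.util.LinkedList;

public class LinkedListDemo {
    public static void main(String[] args) {
        LinkedList<String> empty = new LinkedList<>();                      // LinkedList()
        LinkedList<String> ll = new LinkedList<>(Arrays.asList("A", "B"));  // LinkedList(Collection C)
        ll.addFirst("start");   // Deque operation: insert at the head
        ll.addLast("end");      // Deque operation: insert at the tail
        System.out.println(ll);
        System.out.println(empty.isEmpty());
    }
}
```

Because insertion at either end only relinks nodes, addFirst and addLast run in O(1), unlike inserting at the front of an ArrayList.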
Vector in Java
The Vector class implements a growable array of objects. Vectors basically fall under the legacy classes, but they are now fully compatible with collections.
1. Vector implements a dynamic array, which means it can grow or shrink as required. Like an array, it contains components that can be accessed using an integer index.
2. Vectors are very similar to ArrayList, but Vector is synchronized and has some legacy methods which the collection framework does not contain.
3. It extends AbstractList and implements the List interface.
Constructors
1. Vector(): Creates a default vector whose initial capacity is 10.
2. Vector(int size): Creates a vector whose initial capacity is specified by size.
3. Vector(int size, int incr): Creates a vector whose initial capacity is specified by size and whose increment is specified by incr. The increment specifies the number of elements to allocate each time the vector is resized upward.
4. Vector(Collection c): Creates a vector that contains the elements of collection c.
If the increment is specified, the Vector expands by it in each allocation cycle; if the increment is not specified, the vector's capacity is doubled in each allocation cycle.
Vector defines three protected data members:
1. int capacityIncrement: contains the increment value.
2. int elementCount: the number of elements currently in the vector.
3. Object[] elementData: the array that holds the vector's elements.
METHODS IN VECTOR :
1. boolean add(Object obj): This method appends the specified element to the end of this vector.
Syntax: public boolean add(Object obj)
Returns: true if the specified element is added successfully into the Vector, otherwise it returns false.
Exception: NA.
2. void add(int index, Object obj): This method inserts the specified element at the specified position in this Vector.
Syntax: public void add(int index, Object obj)
Returns: NA.
Exception: IndexOutOfBoundsException - the method throws this exception if the index (obj position) we are trying to access is out of range (index < 0 || index > size()).
boolean addAll(int index, Collection c): This method inserts all of the elements in the specified Collection into this Vector at the specified position.
Syntax: public boolean addAll(int index, Collection c)
Returns: true if this list changed as a result of the call.
Exception: IndexOutOfBoundsException - if the index is out of range; NullPointerException - if the specified collection is null.
void clear(): This method removes all of the elements from this vector.
Returns: NA.
Exception: NA.
int indexOf(Object o): This method returns the index of the first occurrence
of the specified element in this vector, or -1 if this vector does not contain
the element.
Object get(int index):This method returns the element at the specified position in this Vector.
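The Vector constructors and methods listed above can be tried together in a short sketch (illustrative code of ours):

```java
import java.util.Arrays;
import java.util.Vector;

public class VectorDemo {
    public static void main(String[] args) {
        Vector<Integer> v = new Vector<>(5, 3);    // Vector(int size, int incr)
        v.add(10);                                 // boolean add(Object obj)
        v.add(0, 5);                               // void add(int index, Object obj)
        v.addAll(1, Arrays.asList(7, 8));          // boolean addAll(int index, Collection c)
        System.out.println(v);
        System.out.println(v.indexOf(8));          // int indexOf(Object o)
        System.out.println(v.get(0));              // Object get(int index)
        System.out.println(v.capacity());          // still the initial capacity of 5
        v.clear();                                 // void clear()
        System.out.println(v.isEmpty());
    }
}
```

Since only four elements were added, the capacity stays at the initial 5; a fifth and sixth add would trigger a resize by the increment 3.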
Java.Util Package - Stacks
The Stack class in java.util is declared as:
public class Stack<E> extends Vector<E>
It defines a single constructor, Stack(), and the class methods push(), pop(), peek(), empty() and search().
Java.util Interfaces
1. Deque<E>: This is a linear collection that supports element insertion and removal at both ends.
2. Enumeration<E>: This is a legacy interface for obtaining the elements of a collection one at a time.
3. EventListener: This is a tagging interface that all event listener interfaces must extend.
4. Formattable: This interface must be implemented by any class that needs to perform custom formatting with Formatter.
5. Iterator<E>: This is an iterator over a collection.
6. Queue<E>: This is a collection designed for holding elements prior to processing.
ITERATORS IN JAVA.UTIL
ListIterator in Java is an Iterator that allows users to traverse a Collection in both directions, using methods such as hasNext(), next(), hasPrevious() and previous().
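Bidirectional traversal with a ListIterator can be sketched as follows (illustrative code of ours):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ListIterator;

public class ListIteratorDemo {
    public static void main(String[] args) {
        ArrayList<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
        ListIterator<String> it = list.listIterator();
        StringBuilder fwd = new StringBuilder(), back = new StringBuilder();
        while (it.hasNext()) fwd.append(it.next());          // forward pass
        while (it.hasPrevious()) back.append(it.previous()); // backward pass over the same cursor
        System.out.println(fwd);
        System.out.println(back);
    }
}
```

The same iterator object is reused for the backward pass: after the forward loop the cursor sits past the last element, so previous() walks it back to the front.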
UNIT-3
SEARCHING - LINEAR AND BINARY SEARCH METHODS:
Searching: Searching is the technique of finding desired data items that have been stored within some data structure. Data structures can include linked lists, arrays, search trees, hash tables, or various other storage methods. The appropriate search algorithm often depends on the data structure being searched. Search algorithms can be classified based on their mechanism of searching as:
Linear searching
Binary searching
Linear or sequential searching: Linear search is the most natural searching method. It is very simple, but at times very poor in performance. In this method, the search examines every element of the list until the required record is found. The elements in the list may be in any order, i.e. sorted or unsorted.
We begin the search by comparing the first element of the list with the target element. If it matches, the search ends and the position of the element is returned. Otherwise, we move to the next element and compare. In this way, the target element is compared with all the elements until a match occurs. If a match does not occur and there are no more elements to be compared, we conclude that the target element is absent in the list by returning the position -1.
Suppose we want to search for the element 11 (i.e. target element = 11). We first compare the target element with the first element in the list, i.e. 55. Since they do not match, we move on to the next element in the list and compare. Finally we find the match after 5 comparisons, at position 4 (counting from position 0).
Linear search can be implemented in two ways. i) Non recursive ii) recursive
import java.io.*;
class LinearSearch
{
public static void main(String args[]) throws IOException
{
int count=0;
BufferedReader br=new BufferedReader(new
InputStreamReader(System.in));
System.out.println("enter n value");
int n=Integer.parseInt(br.readLine());
int arr[]=new int[n];
System.out.println("enter elements");
for(int i=0;i<n;i++)
{
arr[i]=Integer.parseInt(br.readLine());
}
System.out.println("enter element to search");
int key=Integer.parseInt(br.readLine());
for(int i=0;i<n;i++)
{
if(arr[i]==key)
System.out.println("element found : " + key + " in position :" + (i+1));
else
count++;
}
if(count==n)
System.out.println(key + " element not found, search failed");
}}
OUTPUT:
import java.io.*;
class RecursiveLinearSearch
{
public static int arr[], key;
public static void main(String args[]) throws IOException
{
BufferedReader br=new BufferedReader(new
InputStreamReader(System.in));
System.out.println("enter n value");
int n=Integer.parseInt(br.readLine());
arr=new int[n];
System.out.println("enter elements");
for(int i=0;i<n;i++)
{
arr[i]=Integer.parseInt(br.readLine());
}
System.out.println("enter element to search");
key=Integer.parseInt(br.readLine());
if( linearSearch(arr.length-1) )
System.out.println(key + " found in the list" );
else
System.out.println(key + " not found in the list");
}
static boolean linearSearch(int n)
{
if( n < 0 ) return false;
if(key == arr[n])
return true;
else
return linearSearch(n-1);
}}
OUTPUT:
BINARY SEARCHING
Binary search is a fast search algorithm with run-time complexity of O(log n). This search
algorithm works on the principle of divide and conquer. Binary search looks for a particular item
by comparing the middle most item of the collection. If a match occurs, then the index of item is
returned. If the middle item is greater than the item, then the item is searched in the sub-array to
the left of the middle item. Otherwise, the item is searched for in the sub-array to the right of the
middle item. This process continues on the sub-array as well until the size of the subarray
reduces to zero.
Before applying binary searching, the list of items should be sorted in ascending or
descending order. Best case time complexity is O(1) and Worst case time complexity is
O(log n)
Algorithm:
Binary_Search (A[ ], U_bound, VAL)
Step 1: set BEG = 0, END = U_bound, POS = -1
Step 2: Repeat Steps 3 and 4 while (BEG <= END)
Step 3: set MID = (BEG + END) / 2
Step 4: if A[MID] == VAL then
            POS = MID
            print VAL " is available at ", POS
            GoTo Step 6
        End if
        if A[MID] > VAL then
            set END = MID - 1
        Else
            set BEG = MID + 1
        End if
        [End of loop]
Step 5: if POS == -1 then print VAL " is not present in the list"
Step 6: Exit
SOURCE CODE:
class BinarySearch
{
static Object[] a = { "AP", "KA", "MH", "MP", "OR", "TN", "UP",
"WB"}; static Object key = "UP";
public static void main(String args[])
{
if( binarySearch() )
System.out.println(key + " found in the list");
else
System.out.println(key + " not found in the list");
}
static boolean binarySearch()
{
int c, mid, low = 0, high = a.length-1;
while( low <= high)
{
mid = (low + high)/2;
c = ((Comparable)key).compareTo(a[mid]);
if( c < 0) high = mid-1;
else if( c > 0) low = mid+1;
else return true;
}
return false;
}
}
OUTPUT:
import java.io.*;
import java.util.Arrays;
class RecursiveBinarySearch
{ public static int arr[], key;
public static void main(String args[]) throws IOException
{
BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
System.out.println("enter n value");
int n=Integer.parseInt(br.readLine());
arr=new int[n];
System.out.println("enter elements");
for(int i=0;i<n;i++)
{
arr[i]=Integer.parseInt(br.readLine());
}
Arrays.sort(arr);   // binary search requires a sorted array
System.out.println("enter element to search");
key=Integer.parseInt(br.readLine());
if( binarySearch(0, arr.length-1) )
System.out.println(key + " found in the list");
else
System.out.println(key + " not found in the list");
}
static boolean binarySearch(int low, int high)
{
if( low > high ) return false;
int mid = (low + high) / 2;
if( key == arr[mid] ) return true;
else if( key < arr[mid] ) return binarySearch(low, mid-1);   // search left half
else return binarySearch(mid+1, high);                       // search right half
}}
OUTPUT:
HASH TABLE:
A hash table is a data structure used for storing and retrieving data very quickly. Insertion of data into the hash table is based on the key value; hence every entry in the hash table is associated with some key.
Using the hash key, the required piece of data can be searched in the hash table with few key comparisons. The searching time then depends upon the size of the hash table.
The effective representation of dictionary can be done using hash table. We can place the
dictionary entries in the hash table using hash function.
Hash function is a function which is used to put the data in the hash table. Hence one can use the
same hash function to retrieve the data from the hash table. Thus hash function is used to
implement the hash table.
The integer returned by the hash function is called hash key.
For example: Consider that we want to place some employee records in the hash table. The record of an employee is placed with the help of the key: employee ID. The employee ID is a 7-digit number; for placing the record in the hash table, the 7-digit number is converted into 3 digits by taking only the last three digits of the key.
If the key is 4967000 it can be stored at the 0th position. For the second key, 8421002, the record is placed at the 2nd position in the array. Hence the hash function will be H(key) = key % 1000, where key % 1000 is the hash function and the value obtained from it is called the hash key.
Bucket and home bucket: The hash function H(key) is used to map several dictionary entries into the hash table. Each position of the hash table is called a bucket. H(key) is the home bucket for the dictionary pair whose key is key.
TYPES OF HASH FUNCTION
There are various types of hash functions that are used to place the record in the hash table-
1. Division Method: The hash function depends upon the remainder of division. Typically the divisor is the table length. For example, if the records 54, 72, 89, 37 are placed in a hash table of size 10, then H(key) = key % 10 gives the locations 4, 2, 9 and 7 respectively.
2. Mid Square Method: The key is squared and the middle digits of the result are taken as the hash key. For example, for the key 3111, 3111² = 9678321, and for a hash table of size 1000 the hash key is the middle 3 digits, i.e. 783.
3. Multiplication Method: The given record is multiplied by some constant value. The formula for computing the hash key is H(key) = floor(p * (fractional part of key * A)), where p is an integer constant and A is a constant real number (commonly A = 0.61803398987).
H(107) = floor(50 * fractional part of (107 * 0.61803398987))
       = floor(50 * 0.129636916)
       = floor(6.4818458)
       = 6
At location 6 in the hash table the record with key 107 will be placed.
4. Digit Folding:
The key is divided into separate parts and using some simple operation these parts are
combined to produce the hash key.
For eg; consider a record 12365412 then it is divided into separate parts as 123 654 12 and
these are added together
H(key) = 123+654+12
= 789
5. Digit Analysis: Digit analysis is used in a situation when all the identifiers are known in advance. We first transform the identifiers into numbers using some radix r, then examine the digits of each identifier. The digits having the most skewed distributions are deleted; this deleting of digits continues until the number of remaining digits is small enough to give an address in the range of
the hash table. These remaining digits are then used to calculate the hash address.
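The first four hash functions above can be sketched in a few lines (illustrative code of ours; the mid-square variant assumes we keep the middle 3 of the 7 digits, and the multiplication variant applies the fractional part exactly as the formula requires):

```java
public class HashFunctionDemo {
    // Division method: remainder after dividing by the table size
    static int division(int key, int m) { return key % m; }

    // Mid-square method: square the key and take the middle digits
    static int midSquare(int key) {
        long sq = (long) key * key;          // 3111 * 3111 = 9678321
        return (int) ((sq / 100) % 1000);    // middle 3 digits for a table of size 1000
    }

    // Multiplication method: H(key) = floor(p * frac(key * A))
    static int multiplication(int key, int p) {
        double prod = key * 0.61803398987;
        return (int) Math.floor(p * (prod - Math.floor(prod)));
    }

    // Digit folding: split the key into parts and add them, e.g. 12365412 -> 123 + 654 + 12
    static int folding(int key) {
        return key / 100000 + (key / 100) % 1000 + key % 100;
    }

    public static void main(String[] args) {
        System.out.println(division(54, 10));
        System.out.println(midSquare(3111));
        System.out.println(multiplication(107, 50));
        System.out.println(folding(12365412));
    }
}
```

Running this reproduces the worked examples in the text: 4, 783, 6 and 789.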
COLLISION
The hash function returns the hash key using which a record can be placed in the hash table. This function helps us place the record at an appropriate position, so that we can later retrieve the record directly from that location. The function needs to be designed very carefully: ideally it should not return the same hash key address for two different records, since this is an undesirable situation in hashing.
Definition: The situation in which the hash function returns the same hash key (home bucket) for more than one record is called a collision, and two keys that hash to the same bucket are called synonyms.
Similarly, when there is no room for a new pair in the hash table, the situation is called overflow. Sometimes, while handling a collision, we may run into an overflow condition. Frequent collisions and overflows indicate a poor hash function.
For example, consider the hash function H(key) = key % 10 with a hash table of size 10. The record keys to be placed are 131, 44, 43, 78, 19, 36, 57 and 77:
131 % 10 = 1
44 % 10 = 4
43 % 10 = 3
78 % 10 = 8
19 % 10 = 9
36 % 10 = 6
57 % 10 = 7
77 % 10 = 7
Index: 0 -> empty, 1 -> 131, 2 -> empty, 3 -> 43, 4 -> 44, 5 -> empty, 6 -> 36, 7 -> 57, 8 -> 78, 9 -> 19
Now if we try to place 77 in the hash table, the hash key is 7, but the record with key 57 is already placed at index 7. This situation is called a collision. From index 7, if we look for the next vacant position at the subsequent indices 8 and 9, we find that there is no room to place 77 in the hash table. This situation is called overflow.
If a collision occurs, it should be handled by applying some technique; such a technique is called a collision handling technique. The common techniques are:
1. Chaining
2. Open addressing (linear probing)
3. Quadratic probing
4. Double hashing
5. Rehashing
CHAINING
In this collision handling method, chaining introduces an additional field with the data, i.e. a chain. A separate chain is maintained for colliding data: when a collision occurs, a linked list (chain) is maintained at the home bucket.
For example, consider the keys to be placed in their home buckets: 131, 3, 4, 21, 61, 7, 97, 8, 9.
A chain is maintained for colliding elements. For instance, 131 has home bucket 1; similarly, keys 21 and 61 also demand home bucket 1. Hence a chain is maintained at index 1.
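Chaining can be sketched with an array of linked lists (illustrative code of ours, using the example keys above):

```java
import java.util.LinkedList;

public class ChainingDemo {
    static final int SIZE = 10;
    @SuppressWarnings("unchecked")
    static LinkedList<Integer>[] table = new LinkedList[SIZE];

    // Insert: colliding keys are simply appended to the chain at their home bucket
    static void insert(int key) {
        int h = key % SIZE;
        if (table[h] == null) table[h] = new LinkedList<>();
        table[h].add(key);
    }

    public static void main(String[] args) {
        int[] keys = {131, 3, 4, 21, 61, 7, 97, 8, 9};
        for (int k : keys) insert(k);
        System.out.println(table[1]);   // the chain at home bucket 1
        System.out.println(table[7]);   // the chain at home bucket 7
    }
}
```

Bucket 1 ends up chaining 131, 21 and 61, exactly as described above; unlike open addressing, a chain can grow without displacing keys from other buckets.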
OPEN ADDRESSING (LINEAR PROBING)
This is the easiest method of handling collisions. When a collision occurs, i.e. when two records demand the same home bucket in the hash table, the collision can be solved by placing the second record linearly down, wherever an empty bucket is found. When using linear probing (open addressing), the hash table is represented as a one-dimensional array with indices that range from 0 to the desired table size - 1. Before inserting any elements into this table, we must initialize the table to represent the situation where all slots are empty. This allows us to detect overflows and collisions when we insert elements into the table. Then, using some suitable hash function, each element can be inserted into the hash table.
For example:
We will use the division hash function; that is, the keys are placed using H(key) = key % 10.
H(131) = 131 % 10 = 1
Index 1 will be the home bucket for 131. Continuing in this fashion we place 4, 8 and 7. Now the next key to be inserted is 21. According to the hash function,
H(21) = 21 % 10 = 1
But index 1 is already occupied by 131, i.e. a collision occurs. To resolve this collision we move linearly down and probe the element into the next empty location; therefore 21 will be placed at index 2. If the next element is 5, its home bucket is index 5, and since this bucket is empty we put the element 5 at index 5.
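Linear probing can be sketched in a few lines (illustrative code of ours, replaying the example keys):

```java
public class LinearProbingDemo {
    static final int SIZE = 10;
    static Integer[] table = new Integer[SIZE];   // null marks an empty slot

    // Insert with linear probing: on collision, step to the next free slot
    static int insert(int key) {
        int h = key % SIZE;
        while (table[h] != null) h = (h + 1) % SIZE;   // probe linearly, wrapping around
        table[h] = key;
        return h;
    }

    public static void main(String[] args) {
        int[] keys = {131, 4, 8, 7, 21, 5};
        for (int k : keys) System.out.println(k + " -> index " + insert(k));
    }
}
```

As in the worked example, 21 collides with 131 at index 1 and is probed into index 2, while 5 lands directly in its empty home bucket.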
The table contents after successive insertions (NULL marks an empty slot):
Index   After 4,7,8   After 21,5   After 31,61
2       NULL          21           21
3       NULL          NULL         31
4       4             4            4
5       NULL          5            5
6       NULL          NULL         61
7       7             7            7
8       8             8            8
9       NULL          NULL         NULL
Problem with linear probing - clustering: occupied slots build up around popular home buckets. For example, with the keys 19, 18, 39, 29 and 8:
19 % 10 = 9
18 % 10 = 8
39 % 10 = 9
29 % 10 = 9
8 % 10 = 8
the keys pile up in and around indices 8 and 9, so a cluster is formed.
QUADRATIC PROBING:
Quadratic probing operates by taking the original hash value and adding successive values of an arbitrary quadratic polynomial to the starting value. This method uses the following formula:
H(key, i) = (H(key) + i²) % table_size, where i is the probe number (i = 0, 1, 2, ...).
DOUBLE HASHING
Double hashing requires a second hash function, which is used to compute the step size of the probe sequence after a collision, so that different keys with the same home bucket follow different probe sequences.
Double hashing is more complex to implement than quadratic probing, but quadratic probing is a faster technique than double hashing.
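The difference between the two probe sequences can be sketched as follows (illustrative code of ours; the second hash h2(key) = R - (key mod R) with prime R = 7 is a common textbook choice, not one prescribed by these notes):

```java
public class ProbeSequenceDemo {
    static final int M = 10;   // table size

    public static void main(String[] args) {
        int key = 77;   // home bucket 7, as in the collision example above
        int h = key % M;
        int h2 = 7 - (key % 7);   // second hash for double hashing
        StringBuilder quad = new StringBuilder(), dbl = new StringBuilder();
        for (int i = 0; i < 4; i++) {
            quad.append((h + i * i) % M).append(" ");   // quadratic: h + i^2
            dbl.append((h + i * h2) % M).append(" ");   // double hashing: h + i*h2
        }
        System.out.println("quadratic: " + quad.toString().trim());
        System.out.println("double: " + dbl.toString().trim());
    }
}
```

Both sequences start at the home bucket 7 but then jump in different patterns, which is how quadratic probing and double hashing break up the primary clusters that linear probing creates.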
REHASHING
Rehashing is a technique in which the table is resized, i.e. the size of the table is doubled by creating a new table. It is preferable that the new table size be a prime number. Rehashing is required in situations such as when the hash table becomes (almost) full or insertions begin to fail.
Advantages:
1. This technique gives the programmer the flexibility to enlarge the table size if required.
2. Only the space gets doubled, and a simple hash function can still be used, which reduces the occurrence of collisions.
EXTENSIBLE HASHING
Extensible hashing is a technique which handles a large amount of data. The data are placed in the hash table by extracting a certain number of bits of the hash value. Extensible hash tables grow and shrink in a manner similar to B-trees.
In extensible hashing, the elements are placed in buckets by referring to the size (depth) of the directory. The levels are indicated in parentheses.
[Figure: a directory with entries at levels (0) and (1) pointing to buckets holding the keys 001, 010 and 111; the data are placed in the buckets.]
A bucket can hold data according to its depth. If more data arrive in a bucket than its depth allows, the bucket is split and the directory is doubled.
Applications of hashing:
1. In compilers, to keep track of declared variables.
2. For online spelling checking.
3. In game-playing programs, to store the moves made.
4. In browser programs, while caching web pages.
5. To construct a message authentication code (MAC).
6. Digital signatures.
7. Time stamping.
8. Key updating: a key is hashed at specific intervals, resulting in a new key.
SORTING TECHNIQUES:
Sorting in general refers to various methods of arranging or ordering things based on criteria (numerical, chronological, alphabetical, hierarchical etc.). There are many approaches to sorting data, and each has its own merits and demerits.
Bubble Sort:
Bubble sort is probably one of the oldest, easiest, most straightforward, and most inefficient sorting algorithms. It is a simple sorting algorithm that works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted. The algorithm gets its name from the way smaller elements "bubble" to the top of the list. Because it only uses comparisons to operate on elements, it is a comparison sort. Although the algorithm is simple, most other sorting algorithms are more efficient for large lists. Bubble sort is a stable sort: if two equal elements are in the list, they keep their relative order with respect to each other.
Algorithm:
Step 1: Repeat Steps 2 and 3 for pass = 1 to n-1
Step 2: For i = 0 to n-pass-1, compare A[i] and A[i+1]
Step 3: If A[i] > A[i+1], swap them
Step 4: Exit
Let us take the array of numbers "5 1 4 2 8" and sort it from the lowest number to the greatest using bubble sort. In each step, the pair of adjacent elements being compared is shown. Three passes will be required.
First Pass:
(5 1 4 2 8) -> (1 5 4 2 8): the algorithm compares the first two elements and swaps them, since 5 > 1.
(1 5 4 2 8) -> (1 4 5 2 8): swap, since 5 > 4.
(1 4 5 2 8) -> (1 4 2 5 8): swap, since 5 > 2.
(1 4 2 5 8) -> (1 4 2 5 8): these elements are already in order (8 > 5), so the algorithm does not swap them.
Second Pass:
(1 4 2 5 8) -> (1 4 2 5 8)
(1 4 2 5 8) -> (1 2 4 5 8): swap, since 4 > 2.
(1 2 4 5 8) -> (1 2 4 5 8)
Now the array is already sorted, but our algorithm does not know this. The algorithm needs one whole pass without any swap to know it is sorted.
Third Pass:
(1 2 4 5 8) -> (1 2 4 5 8)
(1 2 4 5 8) -> (1 2 4 5 8)
No swaps occur, so the algorithm terminates.
Time Complexity: Worst case performance O(N^2); best case performance O(N) (an already sorted list needs only one pass when the no-swap check is used).
SOURCE CODE:
//Bubble Sort
import java.io.*;
class BubbleSort
{
public static void main(String[] args) throws IOException
{
BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
System.out.println("enter n value");
int n=Integer.parseInt(br.readLine());
int arr[]=new int[n];
System.out.println("enter elements");
for(int i=0;i<n;i++)
{
arr[i]=Integer.parseInt(br.readLine());
}
System.out.print("\n Unsorted array: ");
display( arr );
bubbleSort( arr );
System.out.print("\n Sorted array: ");
display( arr );
}
static void bubbleSort(int[] a)
{
int i, pass, exch, n = a.length;
int tmp;
for( pass = 0; pass < n; pass++ )
{
exch = 0;
for( i = 0; i < n-pass-1; i++ )
if( a[i] > a[i+1] )
{
tmp = a[i];
a[i] = a[i+1];
a[i+1] = tmp;
exch++;
}
if( exch == 0 ) return;   // no swaps in this pass: the array is sorted
}
}
static void display( int a[] )
{
for( int i = 0; i < a.length; i++ )
System.out.print( a[i] + " " );
}
}
OUTPUT:
Insertion Sort:
Insertion sort considers the elements one at a time, inserting each into its suitable place among those already considered (keeping them sorted). Insertion sort is an example of an incremental algorithm: it builds the sorted sequence one element at a time. This is the sorting technique commonly used in playing card games. Insertion sort provides several advantages:
1. Simple implementation
2. Efficient for (quite) small data sets
3. Adaptive (i.e., efficient) for data sets that are already substantially sorted: the time complexity is O(n + d), where d is the number of inversions
4. More efficient in practice than most other simple quadratic (i.e., O(n^2)) algorithms such as selection sort or bubble sort; the best case (nearly sorted input) is O(n)
5. Stable; i.e., does not change the relative order of elements with equal keys
6. In-place; i.e., only requires a constant amount O(1) of additional memory space
7. Online; i.e., can sort a list as it receives it
Step-by-step example:
1. Step 1: The second element of an array is compared with the elements that appear before it (only
first element in this case). If the second element is smaller than first element, second element is
inserted in the position of first element. After first step, first two elements of an array will be sorted.
2. Step 2: The third element of the array is compared with the elements that appear before it (first and second elements). If the third element is smaller than the first element, it is inserted in the position of the first element. If the third element is larger than the first element but smaller than the second element, it is inserted in the position of the second element. If the third element is larger than both elements, it is kept in its position. After the second step, the first three elements of the array will be sorted.
3. Step 3: Similarly, the fourth element of an array is compared with the elements that appear before it
(first, second and third element) and the same procedure is applied and that element is inserted in the
proper position. After third step, first four elements of an array will be sorted.
If there are n elements to be sorted, then this procedure is repeated n-1 times to get the sorted array.
Time Complexity:
Worst case performance: O(N^2)
Best case performance (nearly sorted input): O(N)
Average case performance: O(N^2)
Source Code:
//Insertion Sort
import java.io.*;
class InsertionSort
{
public static void main(String[] args) throws IOException
{
BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
System.out.println("enter n value");
int n=Integer.parseInt(br.readLine());
int arr[]=new int[n];
System.out.println("enter elements");
for(int i=0;i<n;i++)
{
arr[i]=Integer.parseInt(br.readLine());
}
System.out.print("\n Unsorted array: ");
display( arr );
insertionSort( arr );
System.out.print("\n Sorted array: ");
display( arr );
}
static void insertionSort(int a[])
{
int i, j, n = a.length;
int item;
for( j = 1; j < n; j++ )
{
item = a[j];
i = j - 1;
while( i >= 0 && a[i] > item )   // shift larger elements one position right
{
a[i+1] = a[i];
i--;
}
a[i+1] = item;   // insert item at its correct position
}
}
static void display( int a[] )
{
for( int i = 0; i < a.length; i++ )
System.out.print( a[i] + " " );
}
}
Quick sort: It is a divide and conquer algorithm. Developed by Tony Hoare in 1959. Quick sort first
divides a large array into two smaller sub-arrays: the low elements and the high elements. Quick sort
can then recursively sort the sub-arrays.
ALGORITHM:
Step 1: Pick an element, called a pivot, from the array.
Step 2: Partitioning: reorder the array so that all elements with values less than the pivot come
before the pivot, while all elements with values greater than the pivot come after it (equal
values can go either way). After this partitioning, the pivot is in its final position. This is called
the partition operation.
Step 3: Recursively apply the above steps to the sub-array of elements with smaller values and
separately to the sub-array of elements with greater values.
Advantages:
One of the fastest algorithms on average.
Does not need additional memory (the sorting takes place in the array - this is called in-place
processing).
WORST CASE: O(N^2)
BEST CASE: O(N log2 N)
AVERAGE CASE: O(N log2 N)
SOURCE CODE:
//Quick Sort
import java.io.*;
class QuickSort
{
    public static void main(String[] args) throws IOException
    {
        BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
        System.out.println("enter n value");
        int n=Integer.parseInt(br.readLine());
        int arr[]=new int[n];
        System.out.println("enter elements");
        for(int i=0;i<n;i++)
            arr[i]=Integer.parseInt(br.readLine());
        quickSort(arr, 0, n-1);
        System.out.print("Sorted array: ");
        for(int i=0;i<n;i++)
            System.out.print(arr[i] + " ");
    }
    static void quickSort(int a[], int low, int high)
    {
        if(low < high)
        {
            int p = partition(a, low, high);  // pivot lands in its final position
            quickSort(a, low, p-1);
            quickSort(a, p+1, high);
        }
    }
    // partition around the last element as pivot
    static int partition(int a[], int low, int high)
    {
        int pivot = a[high], i = low - 1, t;
        for(int j = low; j < high; j++)
            if(a[j] < pivot)
            { i++; t = a[i]; a[i] = a[j]; a[j] = t; }
        t = a[i+1]; a[i+1] = a[high]; a[high] = t;
        return i+1;
    }
}
OUTPUT:
MERGE SORT:
Merge sort is based on Divide and conquer method. It takes the list to be sorted and divide it in
half to create two unsorted lists. The two unsorted lists are then sorted and merged to get a
sorted list. The two unsorted lists are sorted by continually calling the merge-sort algorithm; we
eventually get a list of size 1 which is already sorted. The two lists of size 1 are then merged.
Merge Sort Procedure: This is a divide and conquer algorithm. This works as follows :
1. Divide the input which we have to sort into two parts in the middle. Call them the left part
and the right part. Example: say the input is -10 32 45 -78 91 1 0 -16; then the left part will be
-10 32 45 -78 and the right part will be 91 1 0 -16.
2. Sort each of them separately. Note that here sort does not mean to sort it using
some other method. We use the same function recursively.
3. Then merge the two sorted parts.
Input the total number of elements that are there in an array (number_of_elements). Input the
array (array[number_of_elements]). Then call the function MergeSort() to sort the input array.
MergeSort() function sorts the array in the range [left,right] i.e. from index left to index right
inclusive. Merge()
function merges the two sorted parts. Sorted parts will be from [left, mid] and [mid+1, right].
After merging output the sorted array.
MergeSort() function:
It takes the array, and the left-most and right-most index of the range to be sorted as
arguments. The middle index (mid) of the array is calculated as (left + right)/2. We check if
(left < right) because we have to sort only when left < right; when left equals right the range
holds a single element, which is already sorted. Sort the left part by calling the MergeSort()
function again over the left part, MergeSort(array, left, mid), and the right part by a recursive
call of the MergeSort function, MergeSort(array, mid + 1, right). Lastly merge the two sorted
parts using the Merge function.
Merge() function:
It takes the array, left-most , middle and right-most index of the array to be merged as
arguments. Finally copy back the sorted array to the original array.
class MergeSort
{
    int a[], tmp[];
    MergeSort(int arr[])
    {
        a = arr;
        tmp = new int[arr.length];
    }
    void msort()
    {
        sort(0, a.length-1);
    }
    void sort(int left, int right)
    {
        if(left < right)
        {
            int mid = (left+right)/2;
            sort(left, mid);
            sort(mid+1, right);
            merge(left, mid, right);
        }
    }
    void merge(int left, int mid, int right)
    {
        int i = left;      // next free slot in tmp
        int j = left;      // cursor in left half [left..mid]
        int k = mid+1;     // cursor in right half [mid+1..right]
        while( j <= mid && k <= right )
        {
            if(a[j] < a[k])
                tmp[i++] = a[j++];
            else
                tmp[i++] = a[k++];
        }
        while( j <= mid )          // copy leftover left half
            tmp[i++] = a[j++];
        for(i = left; i < k; i++)  // leftover right half is already in place
            a[i] = tmp[i];
    }
    static void display( int a[] )
    {
        for( int i = 0; i < a.length; i++ )
            System.out.print( a[i] + " " );
    }
    public static void main(String[] args)
    {
        int arr[] = {9, 5, 6, 2};
        MergeSort m = new MergeSort(arr);
        m.msort();
        display(arr);   // prints: 2 5 6 9
    }
}
OUTPUT:
2 5 6 9
HEAP SORT:
The heap sort algorithm can be divided into two parts. In the first step, a heap is built out of the
data. In the second step, a sorted array is created by repeatedly removing the largest element from
the heap, and inserting it into the array. The heap is reconstructed after each removal. Once all
objects have been removed from the heap, we have a sorted array. The direction of the sorted
elements can be varied by choosing a min-heap or max-heap in step one. Heap sort can be
performed in place. The array can be split into two parts, the sorted array and the heap.
The (Binary) heap data structure is an array object that can be viewed as a nearly complete binary
tree.
Step 1. Build Heap – O(n)-Build binary tree taking N items as input, ensuring the heap structure
property is held, in other words, build a complete binary tree. Heapify the binary tree making sure
the binary tree satisfies the Heap Order property.
Step 2. Perform n deleteMax operations – O(log(n)) each. Delete the maximum element in the heap – which
is always at the root – place it at the end of the array, and restore the heap order property.
SOURCE CODE:
import java.io.*;
class HeapSort
{
    public static void main(String[] args) throws IOException
    {
        int a[] = {15, 19, 10, 7, 17, 16};
        for(int i = a.length/2 - 1; i >= 0; i--)     // step 1: build max-heap
            siftDown(a, i, a.length);
        for(int end = a.length-1; end > 0; end--)    // step 2: repeated deleteMax
        {
            int t = a[0]; a[0] = a[end]; a[end] = t; // move max into sorted part
            siftDown(a, 0, end);                     // restore heap property
        }
        for(int x : a)
            System.out.print(x + " ");
    }
    // percolate a[i] down within the first n elements
    static void siftDown(int a[], int i, int n)
    {
        int item = a[i], child;
        while((child = 2*i + 1) < n)
        {
            if(child+1 < n && a[child+1] > a[child])
                child++;                // pick the greater child
            if(a[child] <= item)
                break;
            a[i] = a[child];
            i = child;
        }
        a[i] = item;
    }
}
Given an array of 6 elements: 15, 19, 10, 7, 17, 16, sort it in ascending order using heap sort.
Steps:
1. Consider the values of the elements as priorities and build the heap tree.
2. Start deleteMax operations, storing each deleted element at the end of the heap array. After
performing step 2, the order of the elements will be opposite to the order in the heap tree.
Hence, if we want the elements to be sorted in ascending order, we need to build the heap tree
in descending order - the greatest element will have the highest priority.
Note that we use only one array, treating its parts differently:
a. when building the heap tree, part of the array will be considered as the heap, and the rest
part - the original array.
b. when sorting, part of the array will be the heap, and the rest part - the sorted array.
This will be indicated by colors: white for the original array, blue for the heap and red for the
sorted array.
The last node to be processed is array[1]. Its left child is the greater of the
children. The item at array[1] has to be percolated down to the left, swapped
with array[2].
Percolate once more (10 is less than 15, so it cannot be inserted in the previous hole).
The element 10 is less than the children of the hole, and we percolate the hole down:
3. DeleteMax 16
Store 16 in a temporary place. A hole is created at the top
Percolate the hole down (7 cannot be inserted there - it is less than the children of the hole)
Insert 7 in the
hole
Store 10 in the hole (10 is greater than the children of the hole)
As 7 will be adjusted in the heap, its cell will no longer be a part of the heap. Instead it
becomes a cell from the sorted array.
Store 7 in the hole (as the only remaining element in the heap).
Time Complexity:
Worst Case Performance: O(N log2 N)
Best Case Performance: O(N log2 N)
Average Case Performance: O(N log2 N)
Radix/Bucket Sort:
Bucket sort, or bin sort, is a sorting algorithm that works by partitioning an array into a number
of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by
recursively applying the bucket sorting algorithm. It is a distribution sort, and is a cousin of radix
sort in the most to least significant digit flavor.
Bucket sort works as follows:
Radix/Bucket Procedure:
1. Checking the biggest number (big) in the list and the number of digits in the biggest number (nd).
2. Inserting the numbers into the buckets based on the one's digits, then collecting the
numbers and again inserting them into buckets based on the ten's digits, and so on...
3. Inserting and collecting is continued 'nd' times. The elements get sorted.
Sorting by least significant digit (1s place) gives: 170, 90, 802, 2, 24, 45, 75, 66
Sorting by next digit (10s place) gives: 802, 2, 24, 45, 66, 170, 75, 90
Sorting by most significant digit (100s place) gives: 2, 24, 45, 66, 75, 90, 170, 802
It is important to realize that each of the above steps requires just a single pass over the data,
since each item can be placed in its correct bucket without having to be compared with other
items.
Some LSD radix sort implementations allocate space for buckets by first counting the number
of keys that belong in each bucket before moving keys into those buckets. The number of
times that each digit occurs is stored in an array. Consider the previous list of keys viewed in a
different way:
The first counting pass starts on the least significant digit of each key, producing an array of
bucket sizes:
A second counting pass on the next more significant digit of each key will produce an
array of bucket sizes:
A third and final counting pass on the most significant digit of each key will produce an
array of bucket sizes:
6 (bucket size for digits of 0: 002, 024, 045, 066, 075, 090)
1 (bucket size for digits of 1: 170)
1 (bucket size for digits of 8: 802)
Time Complexity:
Worst Case Performance: O(d·N)
Best Case Performance: O(d·N)
Average Case Performance: O(d·N)
where d (the 'nd' above) is the number of digits in the largest key.
SOURCE CODE:
import java.io.*;
import java.util.LinkedList;
class RadixSort
{
    public static void main(String[] args) throws IOException
    {
        BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
        System.out.println("enter n value");
        int n=Integer.parseInt(br.readLine());
        int arr[]=new int[n];
        System.out.println("enter elements");
        for(int i=0;i<n;i++)
            arr[i]=Integer.parseInt(br.readLine());
        radixSort(arr, 10, 3);   // radix 10; 3 passes handle keys up to 3 digits
        display(arr);
    }
    static void radixSort(int arr[], int radix, int passes)
    {
        LinkedList<Integer> queue[] = new LinkedList[radix];
        for(int i = 0; i < radix; i++)
            queue[i] = new LinkedList<Integer>();
        int divisor = 1;
        for(int p = 0; p < passes; p++)
        {
            for(int v : arr)                        // distribute by current digit
                queue[(v/divisor) % radix].addLast(v);
            int k = 0;
            for(int j = 0; j < radix; j++)          // collect buckets in order
                while( !queue[j].isEmpty())
                    arr[k++] = (Integer)queue[j].removeFirst();
            divisor = divisor*radix;
        }
    }
    static void display( int a[] )
    {
        for( int i = 0; i < a.length; i++ )
            System.out.print( a[i] + " " );
    }
}
OUTPUT:
Time complexities:
This section discusses important properties of the different sorting techniques, including their
complexity, stability and memory constraints.
• Selection sort –
Best, average and worst case time complexity: O(n^2), which is independent of the distribution of data.
• Merge sort –
Best, average and worst case time complexity: O(n log n), which is independent of the distribution of data.
• Heap sort –
Best, average and worst case time complexity: O(n log n), which is independent of the distribution of data.
• Quick sort –
It is a divide and conquer approach with recurrence relation T(n) = T(k) + T(n-k-1) + O(n),
where k elements fall on one side of the pivot.
Worst case: when the array is sorted or reverse sorted, the partition algorithm divides the array in
two subarrays with 0 and n-1 elements. Therefore, T(n) = T(n-1) + O(n), which solves to O(n^2).
Best case and average case: on average, the partition algorithm divides the array in two
subarrays of equal size. Therefore, T(n) = 2T(n/2) + O(n), which solves to O(n log n).
In-place/Out-of-place technique –
A sorting technique is in-place if it does not use any extra memory to sort the array. Among the
comparison based techniques discussed, only merge sort is an out-of-place technique, as it
requires an extra array to merge the sorted subarrays.
Among the non-comparison based techniques discussed, all are out-of-place techniques.
Counting sort uses a counting array and bucket sort uses a hash table for sorting the array.
Online/Offline technique –
A sorting technique is considered Online if it can accept new data while the procedure is
ongoing i.e. complete data is not required to start the sorting operation.
Among the comparison based techniques discussed, only Insertion Sort qualifies for this
because of the underlying algorithm it uses i.e. it processes the array (not just elements) from
left to right and if new elements are added to the right, it doesn’t impact the ongoing operation.
Stable/Unstable technique –
A sorting technique is stable if it does not change the order of elements with the same value.
Out of comparison based techniques, bubble sort, insertion sort and merge sort are stable
techniques. Selection sort is unstable as it may change the order of elements with the same
value. For example, consider the array 4, 4, 1, 3.
In the first iteration, the minimum element found is 1 and it is swapped with 4 at 0th position.
Therefore, the order of 4 with respect to 4 at the 1st position will change. Similarly, quick sort
and heap sort are also unstable.
Out of the non-comparison based techniques, Counting sort and Bucket sort are stable sorting
techniques, whereas radix sort's stability depends on the underlying algorithm used to sort the digits.
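The selection-sort instability described above can be demonstrated with a small sketch (not part of the original notes): equal keys are tagged with letters so their relative order is visible. Starting from {4a, 4b, 1}, the swap with the minimum jumps 4a over 4b, so the two equal keys end up in reversed order.

```java
class StabilityDemo
{
    // key of an element is its leading digit; the letter only tags equal keys
    static int key(String s) { return s.charAt(0) - '0'; }

    static String[] selectionSort(String a[])
    {
        for(int i = 0; i < a.length - 1; i++)
        {
            int min = i;
            for(int j = i + 1; j < a.length; j++)
                if(key(a[j]) < key(a[min]))
                    min = j;
            String t = a[i]; a[i] = a[min]; a[min] = t; // swap can jump over equal keys
        }
        return a;
    }

    public static void main(String[] args)
    {
        String out[] = selectionSort(new String[]{"4a", "4b", "1"});
        for(String s : out)
            System.out.print(s + " ");
        // prints: 1 4b 4a  -- the two equal keys have swapped relative order
    }
}
```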
UNIT 4
Trees- Ordinary and Binary trees terminology
TREES
A Tree is a data structure in which each element is attached to one or more
elements directly beneath it.
Terminology
• The connections between elements are called branches.
• A tree has a single root, called root node, which is shown at the top of the
tree. i.e. root is always at the highest level 0.
• Each node has exactly one node above it, called parent. Eg: A is the parent of B, C and D.
• The nodes just below a node are called its children. i.e. child nodes are
one level lower than the parent node.
• A node which does not have any child is called a leaf or terminal node. Eg: E,
F, K, L, H, I and M are leaf nodes. Nodes with at least one child are called
non terminal or internal nodes.
• The child nodes of same parent are said to be siblings.
• A path in a tree is a list of distinct nodes in which successive nodes are
connected by branches in the tree.
• The length of a particular path is the number of branches in that path. The
degree of a node of a tree is the number of children of that node.
• The maximum number of children a node can have is often referred to as the
order of a tree. The height or depth of a tree is the length of the longest path
from root to any leaf.
1. Root: This is the unique node in the tree to which further sub-trees are attached. Eg: A
2. Degree of the node: The total number of sub-trees attached to the node is
called the degree of the node. Eg: For node A the degree is 3. For node K the degree is 0.
3. Leaves: These are the terminal nodes of the tree. The nodes with degree 0
are always the leaf nodes. Eg: E, F, K, L, H, I, J
4. Internal nodes: The nodes other than the root node and the leaves are called the internal nodes.
6. Predecessor: While displaying the tree, if some particular node occurs previous to
some other node then that node is called the predecessor of the other node. Eg: E is
the predecessor
of the node B.
7. Successor: The node which occurs next to some other node is a successor
node. Eg: B is the successor of E and F.
8. Level of the tree: The root node is always considered at level 0, then its adjacent
children are supposed to be at level 1 and so on. Eg: A is at level 0, B, C, D are at level
1, E, F, G, H, I, J are at level 2, K, L are at level 3.
9. Height of the tree: The maximum level is the height of the tree. Here the height of the tree is
3. The height of the tree is also called the depth of the tree.
10. Degree of tree: The maximum degree of the node is called the degree of the tree.
BINARY TREES
Binary tree is a tree in which each node has at most two children, a left child and
a right child. Thus the order of binary tree is 2.
A binary tree is either empty or consists of a node called the root, left and
right sub trees are themselves binary trees.
A binary tree is a finite set of nodes which is either empty or consists of a root
and two disjoint trees called left sub-tree and right sub-tree.
In binary tree each node will have one data field and two pointer fields for
representing the sub- branches. The degree of each node in the binary tree will
be at the most two.
2. Right skewed binary tree: If the left sub-tree is missing in every node of a tree, we call it a
right skewed binary tree.
3. Complete binary tree: The tree in which the degree of each node is at the most two is
called a complete binary tree. In a complete binary tree there is exactly one node at level
0, two nodes at level 1 and four nodes at level 2 and so on. So we can say that a complete
binary tree of depth d will contain exactly 2^l nodes at each level l, where l is from 0 to d.
B C
D E F G
Note:
1. A binary tree of depth n will have maximum 2^n - 1 nodes.
2. A complete binary tree of level l will have maximum 2^l nodes at each level, where l starts
from 0.
3. Any binary tree with n nodes will have at the most n+1 null branches.
4. The total number of edges in a complete binary tree with n terminal nodes is 2(n-1).
The nodes of a binary tree can be numbered in a natural way, level by level, left to right.
The nodes of an complete binary tree can be numbered so that the root is assigned the
number 1, a left child is assigned twice the number assigned its parent, and a right child is
assigned one more than twice the number assigned its parent.
3. Since a binary tree can contain at most one node at level 0 (the root), it can contain at
most 2^l nodes at level l.
A full binary tree of height h has all its leaves at level h. Alternatively: all non-leaf nodes of
a full binary tree have two children, and the leaf nodes have no children. A full binary tree
with height h has 2^(h+1) - 1 nodes. A full binary tree of height h is a strictly binary tree all of
whose leaves are at level h.
For example, a full binary tree of height 3 contains 2^(3+1) - 1 = 15 nodes.
a) Sequential Representation
b) Linked Representation
a) Sequential Representation
The simplest way to represent binary trees in memory is the sequential
representation that uses one-dimensional array.
1) The root of the binary tree is stored in the first location of the array (index 0).
2) If a node is in the jth location of the array, then its left child is in location 2j+1 and its right
child in location 2j+2.
The maximum size that is required for an array to store a tree is 2^(d+1) - 1, where d is the depth
of the tree.
2. In this type of representation the maximum depth of the tree has to be fixed,
because we have to decide the array size. If we choose the array size quite larger
than the depth of the tree, then it will be a wastage of memory. And if we choose
the array size lesser than the depth of the tree, then we will be unable to represent
some part of the tree.
3. The insertions and deletions of any node in the tree will be costlier, as other
nodes have to be adjusted at appropriate positions so that the meaning of the
binary tree can be preserved.
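The index arithmetic of the sequential representation can be sketched as a small example (hypothetical, not from the notes): a tree with root A and level-order nodes B, C, D, E, F, G is stored in an array, and each node's children are found purely by computing indices.

```java
class SeqTreeDemo
{
    // array stores the tree level by level; index 0 holds the root
    static Character tree[] = {'A', 'B', 'C', 'D', 'E', 'F', 'G'};

    static int left(int j)  { return 2*j + 1; }  // left child of node at index j
    static int right(int j) { return 2*j + 2; }  // right child of node at index j

    public static void main(String[] args)
    {
        int j = 1;                            // node 'B'
        System.out.println(tree[left(j)]);    // its left child: D
        System.out.println(tree[right(j)]);   // its right child: E
    }
}
```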
b) Linked Representation
Linked representation of trees in memory is implemented using pointers. Since
each node in a binary tree can have maximum two children, a node in a linked
representation has two pointers for both left and right child, and one information field. If a
node does not have any child, the corresponding pointer field is made NULL pointer.
B C
D E F G
H I
C-B-A-D-E is the inorder traversal, i.e. first we go towards the leftmost node, i.e. C, so print
that node C. Then go back to the node B and print B. Then the root node A, then move towards
the right sub-tree; print D and finally E. Thus we are following the tracing sequence of
Left|Root|Right. This type of traversal is called inorder traversal. The basic principle is to
traverse the left sub-tree, then the root, and then the right sub-tree.
Pseudo Code:
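A minimal Java sketch of the inorder rule described above (TreeNode is a hypothetical stand-in for the Node class used in the program later in this unit):

```java
class TreeNode
{
    int data;
    TreeNode left, right;
    TreeNode(int d) { data = d; }
}
class InorderDemo
{
    static StringBuilder out = new StringBuilder();
    static void inorder(TreeNode t)
    {
        if( t != null )
        {
            inorder(t.left);                 // traverse left sub-tree
            out.append(t.data).append(" ");  // visit the root
            inorder(t.right);                // traverse right sub-tree
        }
    }
    public static void main(String[] args)
    {
        TreeNode root = new TreeNode(1);   //      1
        root.left = new TreeNode(2);       //     / \
        root.right = new TreeNode(3);      //    2   3
        inorder(root);
        System.out.println(out);           // prints: 2 1 3
    }
}
```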
A-B-C-D-E is the preorder traversal of the above fig. We are following Root|Left|Right
path i.e. data at the root node will be printed first then we move on the left sub-tree and
go on printing the data till we reach to the left most node. Print the data at that node and
then move to the right sub-tree. Follow the same principle at each sub-tree and go on
printing the data accordingly.
template <class T>
void preorder(bintree<T> *temp)
{
    if(temp != NULL)
    {
        cout << temp->data;
        preorder(temp->left);
        preorder(temp->right);
    }
}
From figure the postorder traversal is C-D-B-E-A. In the postorder traversal we are
following the Left|Right|Root principle i.e. move to the leftmost node, if right sub-tree is
there or not if not then print the leftmost node, if right sub-tree is there move towards
the right most node. The key idea here is that at each sub-tree we are following the
Left|Right|Root principle and print the data accordingly.
Pseudo Code:
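A minimal Java sketch of the postorder rule described above (PNode is a hypothetical stand-in for the Node class used in the program below):

```java
class PNode
{
    int data;
    PNode left, right;
    PNode(int d) { data = d; }
}
class PostorderDemo
{
    static StringBuilder out = new StringBuilder();
    static void postorder(PNode t)
    {
        if( t != null )
        {
            postorder(t.left);               // left sub-tree first
            postorder(t.right);              // then right sub-tree
            out.append(t.data).append(" ");  // root last
        }
    }
    public static void main(String[] args)
    {
        PNode root = new PNode(1);    //      1
        root.left = new PNode(2);     //     / \
        root.right = new PNode(3);    //    2   3
        postorder(root);
        System.out.println(out);      // prints: 2 3 1
    }
}
```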
PROGRAM:
class Node
{
    Object data;
    Node left;
    Node right;
    Node( Object d ) // constructor
    {
        data = d;
    }
}
class BinaryTree
{
    Object tree[];
    int maxSize;
    java.util.Stack<Node> stk = new java.util.Stack<Node>();
    BinaryTree( Object a[], int n ) // constructor
    {
        maxSize = n;
        tree = new Object[maxSize];
        for( int i=0; i<maxSize; i++ )
            tree[i] = a[i];
    }
    public Node buildTree( int index )
    {
        Node p = null;
        if( index < maxSize && tree[index] != null )
        {
            p = new Node( tree[index] );          // create node for this position
            p.left = buildTree( 2*index + 1 );    // left child at index 2*index+1
            p.right = buildTree( 2*index + 2 );   // right child at index 2*index+2
        }
        return p;
    }
    public void postorderIterative(Node p)
    {
        if( p == null )
        {
            System.out.println("Tree is empty");
            return;
        }
        // two-stack method: the first stack yields root-right-left order,
        // the second stack reverses it into left-right-root (postorder)
        java.util.Stack<Node> s2 = new java.util.Stack<Node>();
        stk.push(p);
        while( !stk.isEmpty() )
        {
            Node tmp = stk.pop();
            s2.push(tmp);
            if( tmp.left != null ) stk.push(tmp.left);
            if( tmp.right != null ) stk.push(tmp.right);
        }
        while( !s2.isEmpty() )
            System.out.print( s2.pop().data + " " );
    }
}
class BinaryTreeDemo
{
public static void main(String args[])
{
Object arr[] = {'E', 'C', 'G', 'A', 'D', 'F', 'H', null,'B',
null, null, null, null, null, null, null, null, null, null };
BinaryTree t = new BinaryTree( arr, arr.length );
Node root = t.buildTree(0); // buildTree() returns reference to root
System.out.print("\n Recursive Binary Tree Traversals:");
System.out.print("\n inorder: ");
t.inorder(root);
System.out.print("\n preorder: ");
t.preorder(root);
System.out.print("\n postorder: ");
t.postorder(root);
System.out.print("\n Non-recursive Binary Tree Traversals:");
System.out.print("\n inorder: ");
t.inorderIterative(root);
System.out.print("\n preorder: ");
t.preorderIterative(root);
System.out.print("\n postorder: ");
t.postorderIterative(root);
}
}
Inorder traversal of a binary tree can be done either using recursion or with an auxiliary
stack. The idea of threaded binary trees is to make inorder traversal faster and do it without a
stack and without recursion. A binary tree is made threaded by making all right child pointers
that would normally be NULL point to the inorder successor of the node (if it exists). There are
two types of threaded binary trees.
Single Threaded: Where a NULL right pointer is made to point to the inorder successor (if the
successor exists).
Double Threaded: Where both left and right NULL pointers are made to point to inorder
predecessor and inorder successor respectively. The predecessor threads are useful for
reverse inorder traversal and postorder traversal.
The threads are also useful for fast accessing ancestors of a node.
Following diagram shows an example Single Threaded Binary Tree. The dotted lines represent
threads.
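A hypothetical sketch (not from the notes) of how threads remove the need for a stack: each node carries a rightThread flag marking a right pointer that is a thread to the inorder successor rather than a child link, so traversal just walks from the leftmost node following either threads or leftmost descents.

```java
class TNode
{
    int data;
    TNode left, right;
    boolean rightThread;   // true when right points to the inorder successor
    TNode(int d) { data = d; }
}
class ThreadedDemo
{
    static TNode leftmost(TNode t)
    {
        while(t.left != null) t = t.left;
        return t;
    }
    static String inorder(TNode root)   // no stack, no recursion
    {
        StringBuilder sb = new StringBuilder();
        TNode cur = leftmost(root);
        while(cur != null)
        {
            sb.append(cur.data).append(" ");
            if(cur.rightThread)              // follow the thread to the successor
                cur = cur.right;
            else if(cur.right != null)       // otherwise descend into right sub-tree
                cur = leftmost(cur.right);
            else
                cur = null;                  // rightmost node: traversal done
        }
        return sb.toString();
    }
    public static void main(String[] args)
    {
        // tree: root 2 with children 1 and 3; 1's null right pointer
        // is replaced by a thread to its inorder successor, 2
        TNode root = new TNode(2);
        root.left = new TNode(1);
        root.right = new TNode(3);
        root.left.rightThread = true;
        root.left.right = root;
        System.out.println(inorder(root));   // prints: 1 2 3
    }
}
```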
Terminology of Graph
Graphs:-
A graph G is a discrete structure consisting of nodes (called vertices) and lines joining the
nodes (called edges). Two vertices are adjacent to each other if they are joint by an edge. The
edge joining the two vertices is said to be an edge incident with them. We use V (G) and E(G) to
denote the set of vertices and edges of G respectively.
Graph Representations
Graph data structure is represented using following representations...
1. Adjacency Matrix
2. Incidence Matrix
3. Adjacency List
Adjacency Matrix
In this representation, a graph is represented using a matrix of size (total number of vertices)
x (total number of vertices). That means a graph with 4 vertices can be represented using a
matrix of size 4X4. In this matrix, both rows and columns represent vertices. This matrix is
filled with either 1 or 0. Here, 1 represents that there is an edge from the row vertex to the
column vertex and 0 represents that there is no edge from the row vertex to the column vertex.
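The description above can be sketched as follows (a hypothetical 4-vertex undirected graph; vertices are assumed to be numbered 0 to 3):

```java
class AdjMatrixDemo
{
    // build an n x n adjacency matrix from an undirected edge list
    static int[][] build(int n, int edges[][])
    {
        int adj[][] = new int[n][n];
        for(int e[] : edges)
        {
            adj[e[0]][e[1]] = 1;   // 1 = edge present
            adj[e[1]][e[0]] = 1;   // mirror entry: the graph is undirected
        }
        return adj;
    }
    public static void main(String[] args)
    {
        int adj[][] = build(4, new int[][]{ {0,1}, {0,2}, {1,2}, {2,3} });
        for(int[] row : adj)
        {
            for(int v : row)
                System.out.print(v + " ");
            System.out.println();
        }
    }
}
```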
Incidence Matrix
In this representation, a graph is represented using a matrix of size (total number of vertices)
x (total number of edges). That means a graph with 4 vertices and 6 edges can be
represented using a matrix of size 4X6. In this matrix, rows represent vertices and columns
represent edges. This matrix is filled with 0, 1 or -1. Here, 0 represents that the column edge
is not incident on the row vertex, 1 represents that the column edge leaves the row vertex
(outgoing edge), and -1 represents that the column edge enters the row vertex (incoming edge).
Adjacency List
In this representation, every vertex of graph contains list of its adjacent vertices.
For example, consider the following directed graph representation implemented using linked list...
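A corresponding sketch for the adjacency-list representation, using java.util.LinkedList (a hypothetical directed 4-vertex graph, vertices numbered 0 to 3):

```java
import java.util.LinkedList;
class AdjListDemo
{
    // build an adjacency list from a directed edge list
    static LinkedList<Integer>[] build(int n, int edges[][])
    {
        LinkedList<Integer>[] adj = new LinkedList[n];
        for(int i = 0; i < n; i++)
            adj[i] = new LinkedList<Integer>();
        for(int e[] : edges)
            adj[e[0]].add(e[1]);   // directed edge e[0] -> e[1]
        return adj;
    }
    public static void main(String[] args)
    {
        LinkedList<Integer>[] adj = build(4, new int[][]{ {0,1}, {0,2}, {1,2}, {2,3} });
        for(int i = 0; i < adj.length; i++)
            System.out.println(i + " -> " + adj[i]);
    }
}
```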
Graph traversals
Graph traversal means visiting every vertex and edge exactly once in a well-defined order.
While using certain graph algorithms, you must ensure that each vertex of the graph is visited
exactly once. The order in which the vertices are visited is important and may depend upon the
algorithm or problem that you are solving.
During a traversal, it is important that you track which vertices have been visited. The most
common way of tracking vertices is to mark them.
DFS-iterative(G, s):
    let S be a stack
    S.push( s )
    mark s as visited
    while S is not empty:
        v = S.pop( )
        //Push all the neighbours of v in stack that are not visited
        for all neighbours w of v in Graph G:
            if w is not visited:
                S.push( w )
                mark w as visited

DFS-recursive(G, s):
    mark s as visited
    for all neighbours w of s in Graph G:
        if w is not visited:
            DFS-recursive(G, w)
3) Path Finding
We can specialize the DFS algorithm to find a path between two given
vertices u and z.
i) Call DFS(G, u) with u as the start vertex.
ii) Use a stack S to keep track of the path between the start vertex and
the current vertex.
iii) As soon as destination vertex z is encountered, return the path as
the contents of the stack.
7) Solving puzzles with only one solution, such as mazes. (DFS can be
adapted to find all solutions to a maze by only including nodes on the current
path in the visited set.)
1) Shortest Path and Minimum Spanning Tree for unweighted graph: In an unweighted
graph, the shortest path is the path with the least number of edges. With Breadth First Search,
we always reach a vertex from a given source using the minimum number of edges. Also, in
the case of unweighted graphs, any spanning tree is a Minimum Spanning Tree, and we can
use either Depth First or Breadth First traversal for finding a spanning tree.
2) Peer to Peer Networks. In Peer to Peer Networks like BitTorrent, Breadth First Search
is used to find all neighbor nodes.
3) Crawlers in Search Engines: Crawlers build their index using Breadth First Search. The idea
is to start from the source page, follow all links from the source, and keep doing the same.
Depth First Traversal can also be used for crawlers, but the advantage of Breadth First
Traversal is that the depth or levels of the built tree can be limited.
4) Social Networking Websites: In social networks, we can find people within a given
distance ‘k’ from a person using Breadth First Search till ‘k’ levels.
5) GPS Navigation systems: Breadth First Search is used to find all neighboring locations.
10) To test if a graph is Bipartite: We can use either Breadth First or Depth First Traversal.
11) Path Finding: We can use either Breadth First or Depth First Traversal to find if there
is a path between two vertices.
12) Finding all nodes within one connected component: We can use either Breadth
First or Depth First Traversal to find all nodes reachable from a given node.
Many algorithms like Prim’s Minimum Spanning Tree and Dijkstra’s Single Source Shortest
Path use structure similar to Breadth First Search.
There can be many more applications, as Breadth First Search is one of the core algorithms for
graphs.
Source code for BFS & DFS
Java programs for the implementation of bfs and dfs for a given graph.
//bfs
import java.io.*;
class quelist
{
    public int front;
    public int rear;
    public int maxsize;
    public int[] que;
    public quelist(int s) // constructor
    {
        maxsize = s;
        que = new int[maxsize];
        front = 0;
        rear = -1;
    }
}
class vertex
{
    public char label;
    public boolean wasvisited;
    public vertex(char lab) // constructor
    {
        label = lab;
        wasvisited = false;
    }
}
class graph
{
public final int MAX = 20;
public int nverts;
public int adj[][];
public vertex vlist[];
quelist qu;
public graph()
{
nverts = 0;
vlist = new vertex[MAX];
adj = new int[MAX][MAX];
qu = new quelist(MAX);
for(int i=0;i<MAX;i++)
for(int j=0;j<MAX;j++)
adj[i][j] = 0;
}
public void addver(char lab)
{
vlist[nverts++] = new vertex(lab);
}
public int getind(char lab) // index of the vertex with the given label
{
    for(int i=0; i<nverts; i++)
        if(vlist[i].label == lab)
            return i;
    return (MAX+1);
}
}
char c = t.charAt(0);
int start = gr.getind(c);
gr.addedge(start,end);
}
System.out.print("The vertices in the graph traversed
breadthwise:"); gr.brfs();
}
}
OUTPUT:
//dfs
import java.io.*;
import java.util.*;
class Stack
{
int stk[]=new int[10];
int top;
Stack()
{
top=-1;
}
void push (int item)
{
if (top==9)
System.out.println("Stack overflow");
else
stk[++top]=item;
}/*end push*/
boolean isempty()
{
if (top<0)
return true;
else
return false;
}/*end isempty*/
int pop()
{
if (isempty())
{
System.out.println("Stack underflow");
return 0;
}
else
return (stk[top--]);
}/*end pop*/
void stacktop()
{
if(isempty())
System.out.println("Stack underflow ");
else
System.out.println("Stack top is "+(stk[top]));
}/*end stacktop*/
void display()
{
System.out.println("Stack-->");
for(int i=0;i<=top;i++)
System.out.println(stk[i]);
}/*end display*/
}
class Graph
{
int MAXSIZE=51;
int adj[][]=new int[MAXSIZE][MAXSIZE];
int visited[]=new int [MAXSIZE];
Stack s=new Stack();
/*Function to create the graph for Depth-First-Search */
void createGraph()
{
int n,i,j,parent,adj_parent,initial_node;
int ans=0,ans1=0;
/*All graph nodes are unvisited, hence assigned zero to visited field of each node
*/ for (int c=1;c<=50;c++)
visited[c]=0;
System.out.println("\nEnter graph structure for DFS ");
do
{
System.out.print("\nEnter parent node :");
parent=getNumber();
do
{
System.out.print("\nEnter adjacent node for node "+parent+ " :
"); adj_parent=getNumber();
adj[parent][adj_parent]=1;
adj[adj_parent][parent]=1;
System.out.print("\nContinue to add adjacent node for "+parent+"(1/0)?");
ans1= getNumber();
} while (ans1==1);
System.out.print("\nContinue to add graph
node?"); ans= getNumber();
}while (ans ==1);
System.out.print("\nEnter total number of nodes :");
n=getNumber();
/*display the adjacency matrix*/
for (i=1;i<=n;i++)
{
for (j=1;j<=n;j++)
System.out.print(" "+adj[i][j]);
System.out.print("\n");
}
}
int getNumber()
{
String str;
int ne=0;
InputStreamReader input=new InputStreamReader(System.in);
BufferedReader in=new BufferedReader(input);
try
{
str=in.readLine();
ne=Integer.parseInt(str);
}
catch(Exception e)
{
System.out.println("I/O Error");
}
return ne; }}
class Graph_DFS
{
public static void main(String args[])
{
Graph g=new Graph();
g.createGraph(); } /* end of program */}
OUTPUT:
Applications of Graphs
In case of parallel edges, keep the one which has the least cost associated and
remove all others.
The least cost is 2 and edges involved are B,D and D,T. We add them. Adding them
does not violate spanning tree properties, so we continue to our next edge selection.
Next cost is 3, and associated edges are A,C and C,D. We add them again −
Next cost in the table is 4, and we observe that adding it will create a circuit in the graph.
We ignore it. In the process we shall ignore/avoid all edges that create a circuit.
We observe that edges with cost 5 and 6 also create circuits. We ignore them and move
on.
Now we are left with only one node to be added. Between the two least cost edges
available 7 and 8, we shall add the edge with cost 7.
By adding edge S,A we have included all the nodes of the graph and we now have
minimum cost spanning tree.
ANOTHER EXAMPLE (EX2):
What is Minimum Spanning Tree?
Given a connected and undirected graph, a spanning tree of that graph is a subgraph
that is a tree and connects all the vertices together. A single graph can have many
different spanning trees. A minimum spanning tree (MST) or minimum weight spanning
tree for a weighted, connected and undirected graph is a spanning tree with weight less
than or equal to the weight of every other spanning tree. The weight of a spanning tree
is the sum of weights given to each edge of the spanning tree.
The algorithm is a Greedy Algorithm. The Greedy Choice is to pick the smallest weight
edge that does not cause a cycle in the MST constructed so far. Let us understand it
with an example: Consider the below input graph.
The graph contains 9 vertices and 14 edges. So, the minimum spanning tree
formed will be having (9 – 1) = 8 edges.
After sorting:
Weight Src Dest
1 7 6
2 8 2
2 6 5
4 0 1
4 2 5
6 8 6
7 2 3
7 7 8
8 0 7
8 1 2
9 3 4
10 5 4
11 1 7
14 3 5
Now pick all edges one by one from sorted list of edges
Time Complexity: O(E log E) or O(E log V). Sorting of edges takes O(E log E) time. After
sorting, we iterate through all edges and apply the find-union algorithm. The find and union
operations can take at most O(log V) time. So the overall complexity is O(E log E + E log V)
time. The value of E can be at most O(V^2), so O(log V) and O(log E) are the same. Therefore,
the overall time complexity is O(E log E) or O(E log V).
Java program that implements Kruskal’s algorithm to generate minimum cost spanning tree.
SOURCE CODE:
import java.io.*;
import java.util.*;
class Graph
{
    int i,n;   //no of nodes
    int noe;   //no of edges in the graph
    int graph_edge[][]=new int[100][4];
    int tree[][]=new int[10][10];
    int sets[][]=new int[100][10];
    int top[]=new int[100];
    int cost=0;
int getNumber()
{
String str;
int ne=0;
InputStreamReader input=new InputStreamReader(System.in);
BufferedReader in=new BufferedReader(input);
try
{
str=in.readLine();
ne=Integer.parseInt(str);
}
catch(Exception e)
{
System.out.println("I/O Error");
}
return ne;
}/*end getNumber*/
void read_graph()
{
System.out.print("Enter the no. of nodes in the undirected weighted graph ::");
n=getNumber();
noe=0;
System.out.println("Enter the weight of each pair of nodes (0 for no edge) ::");
for(i=1;i<=n;i++)
{
for(int j=i+1;j<=n;j++)
{
System.out.print("w["+i+"]["+j+"] = ");
int w=getNumber();
if(w!=0)
{
noe++;
graph_edge[noe][1]=i;
graph_edge[noe][2]=j;
graph_edge[noe][3]=w;
}
}
}
}
void sort_edges()
{
/**** Sort the edges using bubble sort in increasing order**************/
for(int i=1;i<=noe-1;i++)
{
for(int j=1;j<=noe-i;j++)
{
if(graph_edge[j][3]>graph_edge[j+1][3])
{
int t=graph_edge[j][1];
graph_edge[j][1]=graph_edge[j+1][1];
graph_edge[j+1][1]=t;
t=graph_edge[j][2];
graph_edge[j][2]=graph_edge[j+1][2];
graph_edge[j+1][2]=t;
t=graph_edge[j][3];
graph_edge[j][3]=graph_edge[j+1][3];
graph_edge[j+1][3]=t;
}
}
}
}
void algorithm()
{
// make a set for each node
for(int i=1;i<=n;i++)
{
sets[i][1]=i;
top[i]=1;
}
for(i=1;i<=noe;i++)
{
int p1=find_node(graph_edge[i][1]);
int p2=find_node(graph_edge[i][2]);
if(p1!=p2)
{
System.out.print("The edge included in the tree is ::");
System.out.print("< "+graph_edge[i][1]+" , ");
System.out.println(graph_edge[i][2]+" > ");
cost=cost+graph_edge[i][3];
tree[graph_edge[i][1]][graph_edge[i][2]]=graph_edge[i][3];
tree[graph_edge[i][2]][graph_edge[i][1]]=graph_edge[i][3];
// merge set p2 into set p1
for(int j=1;j<=top[p2];j++)
{
top[p1]++;
sets[p1][top[p1]]=sets[p2][j];
}
top[p2]=0;
}
else
{
System.out.println("Inclusion of the edge ");
System.out.print(" < "+graph_edge[i][1]+" , ");
System.out.println(graph_edge[i][2]+"> forms a cycle so it is removed\n\n");
}
}
}
int find_node(int key)
{
for(int i=1;i<=n;i++)
{
for(int j=1;j<=top[i];j++)
{
if(key==sets[i][j])
return i;
}
}
return -1; // not found
}
} /*end class Graph*/
class Kruskal1
{
public static void main(String args[])
{
Graph obj=new Graph();
obj.read_graph();
obj.sort_edges();
obj.algorithm();
}
}
OUTPUT:

DIJKSTRA'S SHORTEST PATH ALGORITHM
Given a graph and a source vertex in the graph, find shortest paths from source to all vertices
in the given graph.
Dijkstra’s algorithm is very similar to Prim’s algorithm for minimum spanning tree. Like Prim’s
MST, we generate a SPT (shortest path tree) with given source as root. We maintain two sets,
one set contains vertices included in shortest path tree, other set includes vertices not yet
included in shortest path tree. At every step of the algorithm, we find a vertex which is in the
other set (set of not yet included) and has a minimum distance from the source.
Below are the detailed steps used in Dijkstra’s algorithm to find the shortest path from a single
source vertex to all other vertices in the given graph.
Algorithm
1) Create a set sptSet (shortest path tree set) that keeps track of vertices included in shortest
path tree, i.e., whose minimum distance from source is calculated and finalized. Initially, this
set isempty.
2) Assign a distance value to all vertices in the input graph. Initialize all distance values as
INFINITE. Assign distance value as 0 for the source vertex so that it is picked first.
3) While sptSet doesn’t include all vertices
….a) Pick a vertex u which is not there in sptSet and has minimum distance value.
….b) Include u to sptSet.
….c) Update distance value of all adjacent vertices of u. To update the distance values, iterate
through
all adjacent vertices. For every adjacent vertex v, if sum of distance value of u (from source)
and weight of edge u-v, is less than the distance value of v, then update the distance value of
v.
The set sptSet is initially empty and distances assigned to vertices are {0, INF, INF, INF, INF,
INF, INF, INF} where INF indicates infinite. Now pick the vertex with minimum distance value.
The vertex 0 is picked, include it in sptSet. So sptSet becomes {0}. After including 0 to sptSet,
update distance values of its adjacent vertices. Adjacent vertices of 0 are 1 and 7. The distance
values of 1 and 7 are updated as 4 and
8. Following subgraph shows vertices and their distance values, only the vertices with finite
distance values are shown. The vertices included in SPT are shown in green colour.
Pick the vertex with minimum distance value and not already included in SPT (not in sptSET).
The vertex 1 is picked and added to sptSet. So sptSet now becomes {0, 1}. Update the distance
values of adjacent vertices of 1. The distance value of vertex 2 becomes 12.
Pick the vertex with minimum distance value and not already included in SPT (not in sptSET).
Vertex 7 is picked. So sptSet now becomes {0, 1, 7}. Update the distance values of adjacent
vertices of 7. The distance value of vertex 6 and 8 becomes finite (15 and 9 respectively).
Pick the vertex with minimum distance value and not already included in SPT (not in sptSET).
Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}. Update the distance values of adjacent
vertices of 6. The distance value of vertex 5 and 8 are updated.
We repeat the above steps until sptSet does include all vertices of given graph. Finally, we get
the following Shortest Path Tree (SPT).
We use a boolean array sptSet[] to represent the set of vertices included in SPT. If a value
sptSet[v] is true, then vertex v is included in SPT, otherwise not. Array dist[] is used to store
shortest distance values of all vertices.
Notes:
1) The code calculates shortest distance, but doesn’t calculate the path information. We can
create a parent array, update the parent array when distance is updated (like prim’s
implementation) and use it show the shortest path from source to different vertices.
2) The code is for undirected graph, same dijkstra function can be used for directed graphs also.
3) The code finds shortest distances from source to all vertices. If we are interested only in the
shortest distance from the source to a single target, we can break out of the loop when the
picked minimum distance vertex equals the target (Step 3.a of the algorithm).
4) Dijkstra's algorithm doesn't work for graphs with negative weight edges. For graphs with
negative weight edges, the Bellman-Ford algorithm can be used instead.
import java.util.PriorityQueue;
import java.util.List;
import java.util.ArrayList;
import java.util.Collections;
class Vertex implements Comparable<Vertex>
{
public final String name;
public Edge[] adjacencies;     // outgoing edges
public double minDistance = Double.POSITIVE_INFINITY;
public Vertex previous;        // previous vertex on the shortest path
public Vertex(String argName) { name = argName; }
public String toString() { return name; }
public int compareTo(Vertex other)
{ return Double.compare(minDistance, other.minDistance); }
}
class Edge
{
public final Vertex target;
public final double weight;
public Edge(Vertex argTarget, double argWeight)
{ target = argTarget; weight = argWeight; }
}
class Dijkstra1
{
public static void computePaths(Vertex source)
{
source.minDistance = 0.;
PriorityQueue<Vertex> vertexQueue = new PriorityQueue<Vertex>();
vertexQueue.add(source);
while (!vertexQueue.isEmpty())
{
Vertex u = vertexQueue.poll();
// visit each edge exiting u
for (Edge e : u.adjacencies)
{
Vertex v = e.target;
double distanceThroughU = u.minDistance + e.weight;
if (distanceThroughU < v.minDistance)
{
vertexQueue.remove(v);   // re-insert v with its new priority
v.minDistance = distanceThroughU;
v.previous = u;
vertexQueue.add(v);
}
}
}
}
}
UNIT V
BINARY SEARCH TREE
In the simple binary tree the nodes are arranged in any fashion. Depending on user’s desire
the new nodes can be attached as a left or right child of any desired node. In such a case
finding for any node is a long cut procedure, because in that case we have to search the
entire tree. And thus the searching time complexity will get increased unnecessarily. So to
make the searching algorithm faster in a binary tree we will go for building the binary search
tree. The binary search tree is based on the binary search algorithm. While creating the
binary search tree the data is systematically arranged. That means values at left sub-tree <
root node value < values at right sub-tree.
While inserting any node in binary search tree, look for its appropriate position in the binary
search tree. We start comparing this new node with each node of the tree. If the value of the
node which is to be inserted is greater than the value of the current node we move on to the
right sub-branch otherwise we move on to the left sub-branch. As soon as the appropriate
position is found we attach this new node as left or right child appropriately.
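The insertion rule just described can be sketched as a short routine. This is an illustrative sketch; the node fields `data`, `lc`, `rc` follow the class used later in this unit, but the method itself is our own.

```java
// Binary search tree node with an insert routine following the rule above:
// larger keys go to the right sub-branch, others to the left.
class BstNode {
    int data;
    BstNode lc, rc;                      // left and right children

    BstNode(int item) { data = item; }

    // Returns the (possibly new) root of the subtree after inserting key.
    static BstNode insert(BstNode root, int key) {
        if (root == null)
            return new BstNode(key);     // appropriate position found
        if (key > root.data)
            root.rc = insert(root.rc, key); // move to the right sub-branch
        else
            root.lc = insert(root.lc, key); // move to the left sub-branch
        return root;
    }
}
```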
This is the simplest deletion, in which we set the left or right pointer of parent node as
NULL.
From the above fig, we want to delete the node having value 5 then we will set left
pointer of its parent node as NULL. That is left pointer of node having value 7 is set to
NULL.
If we want to delete the node 15, then we will simply copy its child node 18 at the place of
15 and then set the node free.
Let us consider that we want to delete node having value 7. We will then find out the
inorder successor of node 7. We will then find out the inorder successor of node 7. The
inorder successor will be simply copied at location of node 7.
That means copy 8 at the position where value of node is 7. Set left pointer of 9 as
NULL. This completes the deletion procedure.
In the below tree, if we want to search for value 9. Then we will compare 9 with root node
10. As 9 is less than 10 we will search on left sub branch. Now compare 9 with 5, but 9 is
greater than 5. So we will move on right sub tree. Now compare 9 with 8 but 9 is greater
than 8 we will move on right sub branch. As the node we will get holds the value 9. Thus
the desired node can be searched.
Another example for search a node
in BST Example: search for 45 in the
tree
(key fields are shown in the node rather than in a separate object referred to by the data field):
1. start at the root, 45 is greater than 25, search in right subtree
2. 45 is less than 50, search in 50’s left subtree
3. 45 is greater than 35, search in 35’s right subtree
4. 45 is greater than 44, but 44 has no right subtree so 45 is not in the BST
import java.util.*;
class Bstnode
{
Bstnode rc,lc;
Bstnode root;
int data;
Bstnode()
{
data=0;
rc=lc=null;
}
Bstnode(int item)
{
data=item;
lc=rc=null;
}
Bstnode[] search(int key)
{
Bstnode par,ptr;
Bstnode b[]=new Bstnode[2];
ptr=root;
par=null;
while(ptr!=null)
{
if(ptr.data==key)
{
b[0]=par;
b[1]=ptr;
return b;
}
else if(ptr.data<key)
{
par=ptr;
ptr=ptr.rc;
}
else
{
par=ptr;
ptr=ptr.lc;
}
}
b[0]=b[1]=null; // key not present
return b;
}
void insert(int item)
{
Bstnode nn=new Bstnode(item);
if(root==null)
{
root=nn;
return;
}
Bstnode ptr=root;
while(true)
{
if(item<ptr.data)
{
if(ptr.lc==null){ptr.lc=nn;return;}
ptr=ptr.lc;
}
else
{
if(ptr.rc==null){ptr.rc=nn;return;}
ptr=ptr.rc;
}
}
}
int deleteleaf(Bstnode par,Bstnode ptr)
{
if(par==null)
root=null;
else if(par.lc==ptr)
par.lc=null;
else
par.rc=null;
return ptr.data;
}
int delete1childnode(Bstnode par,Bstnode ptr)
{
Bstnode child=(ptr.lc!=null)?ptr.lc:ptr.rc;
if(par==null)
root=child;
else if(par.lc==ptr)
par.lc=child;
else
par.rc=child;
return ptr.data;
}
int delete2childnode(Bstnode par,Bstnode ptr)
{
// replace ptr by its inorder successor ptr1 (leftmost node of right subtree)
Bstnode ptr1=ptr.rc,par1=null;
while(ptr1.lc!=null)
{
par1=ptr1;
ptr1=ptr1.lc;
}
if(par1!=null)
{
par1.lc=ptr1.rc;
ptr1.lc=ptr.lc;
ptr1.rc=ptr.rc;
}
else // if par1==null, the successor is ptr's right child itself
ptr1.lc=ptr.lc;
if(par!=null)
{
if(par.lc==ptr)
par.lc=ptr1;
else
par.rc=ptr1;
}
else
root=ptr1;
return ptr.data;
}
int deletenode(int item)
{
Bstnode ptr=root,par=null;
boolean flag=false;
int k;
while(ptr!=null&&flag==false)
{
if(item<ptr.data)
{
par=ptr;
ptr=ptr.lc;
}
else if(item>ptr.data)
{
par=ptr;
ptr=ptr.rc;
}
else
flag=true;
}
if(flag==false)
{
System.out.println("item not found hence can not delete");
return -1;
}
if(ptr.lc==null&&ptr.rc==null)
k=deleteleaf(par,ptr);
else if(ptr.lc!=null&&ptr.rc!=null)
k=delete2childnode(par,ptr);
else
k=delete1childnode(par,ptr);
return k;
}
public static void main(String args[])
{
Bstnode b=new Bstnode();
Scanner s=new Scanner(System.in);
int ch;
do
{
System.out.println("1.insert");
System.out.println("2.delete");
System.out.println("3.search");
System.out.println("4.inorder");
System.out.println("5.preorder");
System.out.println("6.postorder");
System.out.println("7.exit");
System.out.print("enter ur choice:");
ch=s.nextInt();
switch(ch)
{
case 1:System.out.print("enter element:");
int n=s.nextInt();
b.insert(n);
break;
case 2:if(b.root!=null)
{
System.out.print("enter element:");
int n1=s.nextInt();
int res=b.deletenode(n1);
if(res!=-1)
System.out.println("deleted element is:"+res);
}
break;
}
}while(ch!=7);
}
}
AVL TREES
Adelson-Velskii and Landis in 1962 introduced a binary tree structure that is balanced with
respect to the heights of its sub trees. The tree can be kept balanced, and because of this
retrieval of any node can be done in O(log n) time, where n is the total number of nodes.
From the names of these scientists the tree is called an AVL tree.
Definition:
An empty tree is height balanced. If T is a non empty binary tree with TL and TR as its left
and right sub trees, then T is height balanced if and only if
i. TL and TR are height balanced, and
ii. |hL - hR| <= 1, where hL and hR are the heights of TL and TR respectively.
For any node in an AVL tree the balance factor BF(T) = hL - hR is -1, 0 or +1.
Theorem: The height of AVL tree with n elements (nodes) is O(log n).
Proof: Consider an AVL tree with n nodes in it, and let Nh be the minimum number of nodes
in an AVL tree of height h.
In worst case, one sub tree may have height h-1 and other sub tree may have height h-
2. And both these sub trees are AVL trees. Since for every node in AVL tree the height
of left and right sub trees differ by at most 1.
Hence
Nh = Nh-1 + Nh-2 + 1
where Nh denotes the minimum number of nodes in an AVL tree of height h, with
N0 = 1 and N1 = 2.
We can also write it as
N > Nh = Nh-1 + Nh-2 + 1
  > 2 Nh-2
  > 4 Nh-4
  .
  .
  > 2^i Nh-2i
If the value of h is even, let i = h/2 - 1. Then the equation becomes
N > 2^(h/2-1) N2 = 2^(h/2-1) x 4    (since N2 = 4)
Taking logarithms, h = O(log N).
This proves that height of AVL tree is always O(log N). Hence search, insertion and
deletion can be carried out in logarithmic time.
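The recurrence above is easy to check numerically. The following small sketch (our own, not from the notes) tabulates Nh = Nh-1 + Nh-2 + 1 with N0 = 1, N1 = 2; the node counts grow exponentially in h, which is exactly the statement that height is logarithmic in n.

```java
// Tabulates the minimum number of nodes N_h of an AVL tree of height h
// using the recurrence N_h = N_{h-1} + N_{h-2} + 1, N_0 = 1, N_1 = 2.
public class AvlMinNodes {
    static long minNodes(int h) {
        long a = 1, b = 2;               // N_0 and N_1
        if (h == 0) return a;
        for (int i = 2; i <= h; i++) {   // roll the recurrence forward
            long c = b + a + 1;
            a = b;
            b = c;
        }
        return b;
    }

    public static void main(String[] args) {
        for (int h = 0; h <= 10; h++)
            System.out.println("h = " + h + "  minimum nodes = " + minNodes(h));
    }
}
```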
The AVL tree follows the property of binary search tree. In fact AVL trees are
basically binary search trees with balance factors as -1, 0, or +1.
After insertion of any node in an AVL tree, if the balance factor of any node becomes
other than -1, 0, or +1 then the AVL property is said to be violated. We then have to
restore the destroyed balance condition. The balance factor is denoted at the right top of
each node.
After insertion of a new node, if the balance condition gets destroyed, then the nodes on
that path (from the new node's insertion point to the root) need to be readjusted. That
means only the affected sub tree is to be rebalanced.
The rebalancing should be such that entire tree should satisfy AVL property.
Insertion of a node.
There are four different cases when rebalancing is required after insertion of new node.
1. An insertion of a new node into the left sub tree of the left child (LL).
2. An insertion of a new node into the right sub tree of the left child (LR).
3. An insertion of a new node into the left sub tree of the right child (RL).
4. An insertion of a new node into the right sub tree of the right child (RR).
Some modifications done on AVL tree in order to rebalance it is called rotations of AVL tree
Insertion Algorithm:
1. Insert a new node as new leaf just as an ordinary binary search tree.
2. Now trace the path from insertion point(new node inserted as leaf) towards
root. For each node ‘n’ encountered, check if heights of left (n) and right (n)
differ by at most 1.
a. If yes, move towards parent (n).
b. Otherwise restructure by doing either a single rotation or a doublerotation.
Thus once we perform a rotation at node ‘n’ we do not require to perform any
rotation at any ancestor on ‘n’.
1. LL rotation:
When node ‘1’ gets inserted as a left child of node ‘C’ then the AVL property gets destroyed,
i.e. node A has balance factor +2.
The LL rotation has to be applied to rebalance the nodes.
2. RR rotation:
When node ‘4’ gets attached as right child of node ‘C’ then node ‘A’ gets unbalanced.
The rotation which needs to be applied is RR rotation as shown in fig.
3. LR rotation:
When node ‘3’ is attached as a right child of node ‘C’ then unbalancing occurs because
of LR. Hence LR rotation needs to be applied.
4. RL rotation:
When node ‘2’ is attached as a left child of node ‘C’ then node ‘A’ gets unbalanced
as its balance factor becomes -2. Then RL rotation needs to be applied to
rebalance the AVL tree.
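The single rotations above are pointer manipulations on three nodes. The following is a minimal illustrative sketch (the node fields and method names are our own; heights are recomputed from the children after each rotation):

```java
// Single rotations used to rebalance an AVL tree. rotateLL fixes an
// insertion into the left subtree of the left child; rotateRR is its mirror.
class AvlNode {
    int data, height = 1;
    AvlNode lc, rc;
    AvlNode(int item) { data = item; }
}

class AvlRotations {
    static int h(AvlNode n) { return n == null ? 0 : n.height; }

    static void update(AvlNode n) { n.height = 1 + Math.max(h(n.lc), h(n.rc)); }

    // LL case: right-rotate around the unbalanced node a; returns the new root.
    static AvlNode rotateLL(AvlNode a) {
        AvlNode b = a.lc;
        a.lc = b.rc;   // b's right subtree moves under a
        b.rc = a;      // a becomes b's right child
        update(a);
        update(b);
        return b;
    }

    // RR case: left-rotate around the unbalanced node (mirror of LL).
    static AvlNode rotateRR(AvlNode a) {
        AvlNode b = a.rc;
        a.rc = b.lc;
        b.lc = a;
        update(a);
        update(b);
        return b;
    }
}
```

The LR and RL cases are double rotations: rotateRR on the left child followed by rotateLL on the node, and vice versa.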
Example:
Insert 1, 25, 28, 12 in the following AVL tree.
Insert 1
To insert node ‘1’ we have to attach it as a left child of ‘2’. This will unbalance the tree
as follows.
Insert 25
We will attach 25 as a right child of 18. No balancing is required as entire tree preserves
the AVL property
Insert 28
The node ‘28’ is attached as a right child of 25. RR rotation is required to rebalance.
Deletion:
Even after deletion of any particular node from AVL tree, the tree has to be restructured
in order to preserve AVL property. And thereby various rotations need to be applied.
Searching:
The searching of a node in an AVL tree is very simple. As AVL tree is basically binary
search tree, the algorithm used for searching a node from binary search tree is the same
one is used to search a node from AVL tree.
BTREES
➢ Multi-way trees are tree data structures with more than two branches at a node. The
data structures of m-way search trees, B trees and Tries belong to this category of
tree structures.
➢ AVL search trees are height balanced versions of binary search trees, and provide efficient
retrieval and storage operations. The complexity of insert, delete and search
operations on AVL search trees is O(log n).
➢ Applications such as File indexing where the entries in an index may be very large,
maintaining the index as m-way search trees provides a better option than AVL search
trees which are but only balanced binary search trees.
➢ While binary search trees are two-way search trees, m-way search trees are
extended binary search trees and hence provide efficient retrievals.
➢ B trees are height balanced versions of m-way search trees and they do not recommend
representation of keys with varying sizes. Tries are tree based data structures that
support keys with varying sizes.
Definition:
A B tree of order m is an m-way search tree and hence may be empty. If non empty,
then the following properties are satisfied on its extended tree representation
1. The root node must have at least two child nodes and at most m child nodes.
2. All internal nodes other than the root node must have at least ⌈m/2⌉ non empty child nodes
and at most m non empty child nodes.
3. The number of keys in each internal node is one less than its number of child nodes and
these keys partition the keys of the tree into sub trees.
Example:
(figure: a B tree of order 4 with root keys F, K, O at level 1 and leaf nodes at level 3
holding the remaining keys C, D, G, M, N, P, Q, S, T, W, X, Y)
Insertion: consider building a B-tree of order 5. The order 5 means at the most 4 keys are
allowed per node. An internal node should have at least 3 non empty children and each leaf
node must contain at least 2 keys.
Step 1: Insert 1, 3, 7, 14 into the root node:
[1 3 7 14]
Step 2: Insert 8. Since the node 1, 3, 7, 8, 14 would be over-full, split the node at the median 7,
which moves up:
[7]
[1 3]  [8 14]
Step 3: Further keys are inserted into the leaves, giving [3 5] and [8 11 14 17] under the root [7].
Step 4: Now insert 13. But if we insert 13 then the leaf node will have 5 keys, which is not
allowed. Hence 8, 11, 13, 14, 17 is split and the median 13 is moved up:
[7 13]
Step 5: After more insertions the leaves are [1 3 5 6], [8 11 12] and [14 17 20 23].
Step 6: The 26 is inserted into the right-most leaf node. Hence the node 14, 17, 20, 23, 26 is
split and 20 is moved up:
[7 13 20]
[1 3 5 6]  [8 11 12]  [14 17]  [23 26]
Step 7: Further insertions give the internal node [4 7 13 20] with leaves
[1 3] [5 6] [8 11 12] [14 16 17 18] [23 24 25 26].
Step 8: Finally insert 19. Then 4, 7, 13, 19, 20 needs to be split. The median 13 is
moved up to form a new root node.
The tree then will be - (figure)
Deletion:
Now we will delete 20, the 20 is not in a leaf node so we will find its successor which is
23, Hence 23 will be moved up to replace 20.
Next we will delete 18. Deletion of 18 from the corresponding node causes the node with only
one key, which is not desired (as per rule 4) in B-tree of order 5. The sibling node to
immediate right has an extra key. In such a case we can borrow a key from parent and move
spare key of sibling up.
Now delete 5. But deletion of 5 is not easy. The first thing is 5 is from leaf node. Secondly
this leaf node has no extra keys nor siblings to immediate left or right. In such a situation
we can combine this node with one of the siblings. That means remove 5 and combine 6
with the node 1, 3. To make the tree balanced we have to move parent’s key down.
Hence we will move 4 down as 4 is between 1, 3, and 6. The tree will be-
But again the internal node of 7 contains only one key, which is not allowed in a B-tree. We
then will try to borrow a key from a sibling. But the sibling 17, 24 has no spare key. Hence what
we can do is combine 7 with 13 and 17, 24. Hence the B-tree will be
The running time of search operation depends upon the height of the tree. It is O(log n).
Height of B-tree
The maximum height of a B-tree gives an upper bound on the number of disk accesses. The
maximum number of keys in a B-tree of order 2m and depth h is
1 + 2m + 2m(m+1) + 2m(m+1)^2 + . . . + 2m(m+1)^(h-1)
        h
  = 1 + Σ 2m(m+1)^(i-1)
       i=1
The maximum height of a B-tree with n keys is
log_(m+1) (n/2m) = O(log n)
class BTree
{
final int MAX = 4;
final int MIN = 2;
class BTNode // B-Tree node
{
int count;
int key[] = new int[MAX+1];
BTNode child[] = new BTNode[MAX+1];
}
BTNode root = new BTNode();
class Ref // This class creates an object reference
{
int m;
} // and is used to retain/save index values
// of current node between method calls.
/*
* New key is inserted into an appropriate node.
* No node has key equal to new key (duplicate keys are not allowed).
*/
void insertTree( int val )
{
Ref i = new Ref();
BTNode c = new BTNode();
BTNode node = new BTNode();
boolean pushup;
pushup = pushDown( val, root, i, c );
if ( pushup )
{
node.count = 1;
node.key[1] = i.m;
node.child[0] = root;
node.child[1] = c;
root = node;
} }
/*
* New key is inserted into subtree to which current node points.
* If pushup becomes true, then height of the tree grows.
*/
boolean pushDown( int val, BTNode node, Ref p, BTNode c )
{ Ref k = new Ref();
if ( node == null )
{
p.m = val;
c = null;
return true;
}
else
{
if ( searchNode( val, node, k ) )
System.out.println("Key already exists.");
if ( pushDown( val, node.child[k.m], p, c ) )
{
if ( node.count < MAX )
{
pushIn( p.m, c, node, k.m );
return false;
}
else
{
split( p.m, c, node, k.m, p, c );
return true;
}
}
return false;
}
}
/*
* Search through a B-Tree for a target key in the node: val
* Outputs target node and its position (pos) in the node
*/
BTNode searchTree( int val, BTNode root, Ref pos )
{
if ( root == null )
return null ;
else
{
if ( searchNode( val, root, pos ) )
return root;
else
return searchTree( val, root.child[pos.m], pos );
}
}
/*
* This method determines if the target key is present in
* the current node, or not. Searches keys in the current node;
* returns position of the target, or child on which to continue search.
*/
boolean searchNode( int val, BTNode node, Ref pos )
{
if ( val < node.key[1] )
{
pos.m = 0 ;
return false ;
}
else
{
pos.m = node.count ;
while ( val < node.key[pos.m] && pos.m > 1 )
pos.m-- ;
return ( val == node.key[pos.m] ) ;
}
}
void displayTree()
{
display( root );
}
// displays the B-Tree
void display( BTNode root )
{
int i;
if ( root != null )
{
for ( i = 0; i < root.count; i++ )
{
display( root.child[i] );
System.out.print( root.key[i+1] + " " );
}
display( root.child[i] );
}
}
} // end of BTree class
////////////////////////// BTreeDemo.java /////////////////////////////
class BTreeDemo
{
public static void main( String[] args )
{
BTree bt = new BTree();
int[] arr = { 11, 23, 21, 12, 31, 18, 25, 35, 29, 20, 45,
27, 42, 55, 15, 33, 36, 47, 50, 39 };
for ( int i = 0; i < arr.length; i++ )
bt.insertTree( arr[i] );
System.out.println("B-Tree of order 5:");
bt.displayTree();
}}
4) Every path from a node (including root) to any of its descendant NULL node has the same
number of black nodes.
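The properties can be checked mechanically on a tree. Below is a small illustrative validator (our own sketch, not from the notes) that verifies the no-red-node-has-a-red-child rule and the equal-black-height rule from property 4:

```java
// Checks two Red-Black tree properties on a linked node structure:
// no red node has a red child, and every root-to-NULL path contains
// the same number of black nodes.
class RBNode {
    int key;
    boolean red;                 // false = black
    RBNode left, right;
    RBNode(int key, boolean red) { this.key = key; this.red = red; }
}

class RBCheck {
    // Returns the black height of the subtree, or -1 if a property fails.
    static int blackHeight(RBNode n) {
        if (n == null) return 1;                  // NULL leaves count as black
        if (n.red && ((n.left != null && n.left.red)
                   || (n.right != null && n.right.red)))
            return -1;                            // two consecutive red nodes
        int lh = blackHeight(n.left);
        int rh = blackHeight(n.right);
        if (lh == -1 || rh == -1 || lh != rh)
            return -1;                            // unequal black heights
        return lh + (n.red ? 0 : 1);
    }

    static boolean isValid(RBNode root) {
        return (root == null || !root.red) && blackHeight(root) != -1;
    }
}
```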
From the above examples, we get some idea how Red-Black trees ensure balance.
Following is an important fact about balancing in Red-Black Trees.
1) Perform standard BST insertion and make the colour of the newly inserted node RED.
2) If x is the root, change the colour of x to BLACK (the black height of the complete tree
increases by 1).
3) Do the following if the colour of x’s parent is not BLACK and x is not the root.
….a) If x’s uncle is RED (the grandparent must have been black from property 4)
……..(i) Change the colour of the parent and uncle to BLACK.
……..(ii) Change the colour of the grandparent to RED.
……..(iii) Change x = x’s grandparent, repeat steps 2 and 3 for new x.
Deletion is fairly complex process. To understand deletion, notion of double black is used.
When a black node is deleted and replaced by a black child, the child is marked as double
black. The main task now becomes to convert this double black to single black.
Deletion Steps
Following are detailed steps for deletion.
1) Perform standard BST delete. When we perform standard delete operation in BST, we
always end up deleting a node which is either leaf or has only one child (For an internal node,
we copy the successor and then recursively call delete for successor, successor is always a
leaf node or a node with one child). So we only need to handle cases where a node is leaf or
has one child. Let v be the node to be deleted and u be the child that replaces v (Note that u is
NULL when v is a leaf and color of NULL is considered as Black).
2) Simple Case: If either u or v is red, we mark the replaced child as black (No change in
black height). Note that both u and v cannot be red as v is parent of u and two consecutive reds
are not allowed in red-black tree.
Do following while the current node u is double black and it is not root. Let sibling of node be s.
….(a): If sibling s is black and at least one of sibling’s children is red, perform
rotation(s). Let the red child of s be r. This case can be divided in four subcases depending
upon positions of s and r.
…………..(i) Left Left Case (s is left child of its parent and r is left child of s or both children
of s are red). This is mirror of right right case shown in below diagram.
…………..(ii) Left Right Case (s is left child of its parent and r is right child). This is mirror
of the right left case shown in below diagram.
…………..(iii) Right Right Case (s is right child of its parent and r is right child of s or both
children of s are red)
…..(b): If sibling s is black and both its children are black, perform recoloring and recur for
the parent if the parent is black. In this case, if the parent was red, then we don’t need to
recur for the parent; we can simply make it black (red + double black = single black).
…..(c): If sibling is red, perform a rotation to move old sibling up, recolor the old sibling and parent.
The
new sibling is always black (See the below diagram). This mainly converts the tree to black
sibling case (by rotation) and leads to case (a) or (b). This case can be divided in two
subcases.
…………..(i) Left Case (s is left child of its parent). This is mirror of right right case
shown in below diagram. We right rotate the parent p.
…………..(ii) Right Case (s is right child of its parent). We left rotate the parent p.
If u is root, make it single black and return (Black height of complete tree reduces by 1).
Prefix Codes, means the codes (bit sequences) are assigned in such a way that the code
assigned to one character is not the prefix of code assigned to any other character. This is how
Huffman Coding makes sure that there is no ambiguity when decoding the generated
bitstream.
Let us understand prefix codes with a counter example. Let there be four characters a, b, c and
d, and their corresponding variable length codes be 00, 01, 0 and 1. This coding leads to
ambiguity because code assigned to c is the prefix of codes assigned to a and b. If the
compressed bit stream is 0001, the de-compressed output may be “cccd” or “ccb” or “acd” or
“ab”.
Now min heap contains 5 nodes where 4 nodes are roots of trees with single element
each, and one heap node is root of tree with 3 elements
character Frequency
c 12
d 13
Internal Node 14
e 16
f 45
Step 3: Extract two minimum frequency nodes from heap. Add a new internal node with
frequency 12+13=25
Now min heap contains 4 nodes where 2 nodes are roots of trees with single element each,
and two heap nodes are root of tree with more than one nodes.
character Frequency
Internal Node 14
e 16
Internal Node 25
f 45
character Frequency
Internal Node 25
Internal Node 30
f 45
Step 5: Extract two minimum frequency nodes. Add a new internal node with frequency 25 + 30 = 55
character Frequency
f 45
Internal Node 55
Step 6: Extract two minimum frequency nodes. Add a new internal node with frequency 45 +
55 = 100
Since the heap contains only one node, the algorithm stops here.
Steps to print codes from Huffman Tree:
Traverse the tree formed starting from the root. Maintain an auxiliary array. While moving to
the left child, write 0 to the array. While moving to the right child, write 1 to the array. Print
the array when a leaf node is encountered.
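The traversal just described can be written as a short recursive routine. This is an illustrative sketch; the node fields `c`, `data`, `left`, `right` mirror the fragment below, but the class and method names are our own, and the codes are collected into a map rather than printed so they can be inspected.

```java
import java.util.*;

// Collects Huffman codes by walking the tree: '0' for a left move,
// '1' for a right move; a leaf holds a real character.
class HuffmanNode {
    char c;                      // character ('-' for internal nodes)
    int data;                    // frequency
    HuffmanNode left, right;
}

class HuffmanCodes {
    static void buildCodes(HuffmanNode root, String s, Map<Character, String> out) {
        if (root == null)
            return;
        if (root.left == null && root.right == null) {
            out.put(root.c, s);              // reached a leaf: s is its code
            return;
        }
        buildCodes(root.left, s + "0");      // moving left writes 0 -- see below
    }
}
```

Sorry, to keep the sketch correct the recursive calls must pass the map as well:

```java
import java.util.*;

class HuffNode {
    char c;
    HuffNode left, right;
}

class HuffCodes {
    static void buildCodes(HuffNode root, String s, Map<Character, String> out) {
        if (root == null)
            return;
        if (root.left == null && root.right == null) {
            out.put(root.c, s);               // reached a leaf: s is its code
            return;
        }
        buildCodes(root.left, s + "0", out);  // moving left writes 0
        buildCodes(root.right, s + "1", out); // moving right writes 1
    }
}
```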
return;
}
// number of characters.
int n = 6;
char[] charArray = { 'a', 'b', 'c', 'd', 'e', 'f' };
int[] charfreq = { 5, 9, 12, 13, 16, 45 };
hn.c = charArray[i];
hn.data = charfreq[i];
hn.left = null;
hn.right = null;
f: 0
c: 100
d: 101
a: 1100
b: 1101
e: 111
Time complexity: O(n log n), where n is the number of unique characters. If there are n
nodes, extractMin() is called 2*(n - 1) times. extractMin() takes O(log n) time as it calls
minHeapify(). So, the overall complexity is O(n log n).
At first the pattern is set to the left end of the text, and matching process starts. After a
mismatch is found, pattern is shifted one place right and a new matching process
starts, and so on. The pattern and text are in arrays pat[1..m] and text[1..n]
respectively.
Algorithm 1 (naive matching)
1. i:=1; j:=1;
2. while i<=m and j<=n do begin
3.   if pat[i]<>text[j] then begin j:=j-i+2; i:=1 end /* shift pattern one place right */
4.   else begin
5.     i:=i+1;
6.     j:=j+1
7.   end
8. end;
9. if i=m+1 then the pattern is in the text, ending at position j-1
10. end.
The worst case happens when pat and text are all a’s but b at the end, such as pat =
aaaaab and text = aaaaaaaaaaaaaaaaaaaaaaaaaaaaab. The time is obviously O(mn).
On average the situation is not as bad, but in the next section we introduce a much
better algorithm. We call the operation pat[i]=text[j] a comparison between characters,
and measure the complexity of a given algorithm by the number of character
comparisons. The rest of computing time is proportional to this measure.
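The O(mn) behaviour is easy to see in code. Below is a minimal Java sketch of the naive matcher (our own illustration, using 0-based indexing rather than the 1-based pseudocode above):

```java
// Naive pattern matching: try every alignment of pat against text.
// Worst case O(m*n) character comparisons, e.g. pat = "aaab" in "aaaa...ab".
class NaiveMatch {
    // Returns the first index where pat occurs in text, or -1 if absent.
    static int search(String text, String pat) {
        int n = text.length(), m = pat.length();
        for (int j = 0; j + m <= n; j++) {  // each alignment position
            int i = 0;
            while (i < m && pat.charAt(i) == text.charAt(j + i))
                i++;                        // extend the match
            if (i == m)
                return j;                   // whole pattern matched
        }
        return -1;
    }
}
```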
Knuth-Morris-Pratt algorithm (KMP algorithm)
When we shift the pattern to the right after a mismatch is found at i on the pattern and j
on the text, we did not utilise the matching history for the portion pat[1..i] and text[j-
i+1..j]. If we can get information on how much to shift in advance, we can shift the
pattern more, as shown in the following example.
Example 1.
position  1 2 3 4 5 6 7 8 9 10 11 12 13 14
text      a b a b a a b b a b  a  b  b  a
pattern   a b a b b
(shift 2)     a b a b b
(shift 3)           a b a b b
(shift 3)                 a b a b b
After mismatch at 5, there is no point in shifting the pattern one place. On the other
hand we know “ab” repeats itself in the pattern. We also need the condition pat[3] <>
pat[5] to ensure that we do not waste a match at position 5. Thus after shifting two
places, we resume matching at position 5. Now we have a mismatch at position 6. This
time shifting two places does not work, since “ab” repeats itself and we know that
shifting two places will invite a mismatch.
The condition pat[1]<>pat[4] is satisfied. Thus we shift pat three places and resume
matching at position 6, and find a mismatch at position 8. For a similar reason to the
previous case, we shift pat three places, and finally we find the pattern at position 9 of
the text. We spent 15 comparisons between characters. If we followed Algorithm 1, we
would spend 23 comparisons. Confirm this.
The information of how much to shift is given by the array h[1..m], which is defined by
h[1] = 0 and, for i > 1, h[i] = the maximum s (0 if no such s exists) such that
pat[1..s-1] = pat[i-s+1..i-1] and pat[s] <> pat[i].
The meaning of h[i] is to maximise the portion A and B in the above figure, and
require b<>c. The value of h[i] is such maximum s. Then in the main matching
process, we can resume matching after we shift the pattern after a mismatch at i on
the pattern to position h[i] on the pattern, and we can keep going with the pointer j on
the text. In other words, we need not to backtrack on the text. The maximisation of
such s, (or minimisation of shift), is necessary in order not to overlook an occurrence
of the pattern in the text. The main matching algorithm follows.
Algorithm 2 (KMP matching) is the same as Algorithm 1 except that after a mismatch at
position i > 1 of the pattern we set i:=h[i] and keep the text pointer j where it is, instead of
shifting the pattern one place and backtracking on the text.
The weaker function f is defined by f(1) = 0 and f(i) = the maximum s < i such that
pat[1..s-1] = pat[i-s+1..i-1].
The definitions of h[i] and f(i) are similar. In the latter we do not care about
pat[s]<>pat[i]. The computation of h is like pattern matching of pat on itself.
(figure: the pattern aligned against itself, showing positions i-1 and i, the border
length f(i), and the fallback position h[f(i)] used when pat[f(i)] = pat[i])
Algorithm 3 (computation of h)
1. t:=0; h[1]:=0;
2. for i:=2 to m do begin
3.   /* t = f(i-1) */
4.   while t>0 and pat[i-1]<>pat[t] do t:=h[t];
5.   t:=t+1;
6.   /* t = f(i) */
7.   if pat[i]<>pat[t] then h[i]:=t else h[i]:=h[t]
8. end.
Example. pat = a b a b b
For i = 2, at line 7: t = 1 and pat[1]<>pat[2], so f(2) = 1 and h[2]:=1.
Continuing for all i:
i   | 1 2 3 4 5
pat | a b a b b
f   | 0 1 1 2 3
h   | 0 1 0 1 3
The time of Algorithm 2 is clearly O(n), since each character in the text is examined at most
twice, which gives the upper bound on the number of comparisons. Also, whenever a
mismatch is found, the pattern is shifted to the right. The shifting can not take place more
than n-m+1 times. The analysis of Algorithm 3 is a little more tricky. Trace the changes on
the value of t in the algorithm. We have a doubly nested loop, one by the outer for and the
other by while. The value of t can be increased by one at line 5, and m-1 times in total,
which we regard as income. If we get into the while loop, the value of t is decreased,
which we regard as expenditure. Since the total income is m-1, and t can not go to
negative, the total number of executions of t:=h[t] can not exceed m-1. Thus the total
time is O(m).
Summarising these analyses, the total time for the KMP algorithm, which includes
the pre- processing of the pattern, is O(m+n), which is linear.
Source code:
//KMPDemo.java -- reads a text and a pattern, reports the first match (KMP)
import java.io.*;
class KMPDemo
{
    public static void main(String[] args) throws IOException
    {
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Enter any String:");
        String T = br.readLine();
        System.out.println("Enter String for pattern matching:");
        String P = br.readLine();
        int m = P.length();
        int[] h = new int[m];                        // failure function (0-indexed)
        for (int i = 1, t = 0; i < m; i++)
        {
            while (t > 0 && P.charAt(t) != P.charAt(i)) t = h[t - 1];
            if (P.charAt(t) == P.charAt(i)) t++;
            h[i] = t;
        }
        int pos = -1;
        for (int j = 0, t = 0; j < T.length(); j++)  // scan the text, never backtrack
        {
            while (t > 0 && P.charAt(t) != T.charAt(j)) t = h[t - 1];
            if (P.charAt(t) == T.charAt(j)) t++;
            if (t == m) { pos = j - m + 1; break; }
        }
        System.out.println(pos >= 0 ? "Pattern found at index " + pos
                                    : "Pattern not found");
    }
}
OUTPUT:
These trees can be compared based on their operations, tabulating for each tree the
search time in the average case and in the worst case.
import java.util.TreeSet;
class TreeSetDemo {
    public static void main(String[] args) {
        TreeSet<String> ts1 = new TreeSet<String>();
        ts1.add("banana"); ts1.add("apple");
        System.out.println(ts1);   // [apple, banana] -- natural sorted order
    }
}
• Secondly, if we depend on the default natural sorting order, the objects must be
homogeneous and comparable; otherwise we get a
RuntimeException: ClassCastException.
NOTE:
1. An object is said to be comparable if and only if the corresponding class implements
Comparable interface.
2. The String class and all wrapper classes already implement the Comparable interface, but
the StringBuffer class does not implement Comparable. Hence we got a
ClassCastException in the above example.
3. From JDK 7 onwards, trying to insert null into a TreeSet (even as the first value into an
empty set) throws a NullPointerException; null is not accepted at all. Up to JDK 6, null
was accepted as the first value, but inserting any further values into the TreeSet then
threw a NullPointerException.
This behaviour was therefore considered a bug and removed in JDK 7.
TreeSet implements SortedSet, so all methods of the Collection, Set and
SortedSet interfaces are available. Following are the methods of the TreeSet class.
1. boolean add(Object o): This method will add the specified element according to the
sorting order of the TreeSet. Duplicate entries will not get added.
2. boolean addAll(Collection c): This method will add all elements of specified Collection to
the set. Elements in Collection should be homogeneous otherwise ClassCastException
will be thrown. Duplicate Entries of Collection will not be added to TreeSet.
3. void clear(): This method will remove all the elements.
4. boolean contains(Object o): This method will return true if given element is present in
TreeSet else it will return false.
5. Object first(): This method will return first element in TreeSet if TreeSet is not null else it
will throw NoSuchElementException.
6. Object last(): This method will return last element in TreeSet if TreeSet is not null else it
will throw NoSuchElementException.
7. SortedSet headSet(Object toElement): This method will return elements of TreeSet which
are less than the specified element.
8. SortedSet tailSet(Object fromElement): This method will return elements of TreeSet
which are greater than or equal to the specified element.
9. SortedSet subSet(Object fromElement, Object toElement): This method will return
elements ranging from fromElement to toElement. fromElement is inclusive and
toElement is exclusive.
10. boolean isEmpty(): This method is used to return true if this set contains no elements or
is empty and false for the opposite case.
11. Object clone(): The method is used to return a shallow copy of the set, which is just a
simple copied set.
12. int size(): This method is used to return the size of the set or the number of elements
present in the set.
13. boolean remove(Object o): This method is used to remove a specific element from the set.
14. Iterator iterator(): Returns an iterator for iterating over the elements of the set.
15. Comparator comparator(): This method will return Comparator used to sort elements in
TreeSet or it will return null if default natural sorting order is used.
16. ceiling(E e): This method returns the least element in this set greater than or equal to
the given element, or null if there is no such element.
17. descendingIterator(): This method returns an iterator over the elements in this set in
descending order.
18. descendingSet(): This method returns a reverse order view of the elements contained in this
set.
19. floor(E e): This method returns the greatest element in this set less than or equal to the
given element, or null if there is no such element.
20. higher(E e): This method returns the least element in this set strictly greater than the
given element, or null if there is no such element.
21. lower(E e): This method returns the greatest element in this set strictly less than the
given element, or null if there is no such element.
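As a quick illustration of the navigation methods listed above, here is a small sketch (the class name and the element values are arbitrary, chosen only for the demonstration):

```java
import java.util.TreeSet;

public class TreeSetNavDemo {
    public static void main(String[] args) {
        TreeSet<Integer> ts = new TreeSet<Integer>();
        ts.add(10); ts.add(20); ts.add(30); ts.add(40);

        System.out.println(ts.first());      // 10 -- lowest element
        System.out.println(ts.last());       // 40 -- highest element
        System.out.println(ts.ceiling(25));  // 30 -- least element >= 25
        System.out.println(ts.floor(25));    // 20 -- greatest element <= 25
        System.out.println(ts.higher(30));   // 40 -- least element > 30
        System.out.println(ts.lower(30));    // 20 -- greatest element < 30
        System.out.println(ts.headSet(30));  // [10, 20] -- elements strictly < 30
        System.out.println(ts.tailSet(30));  // [30, 40] -- elements >= 30
    }
}
```

Note how headSet excludes its bound while tailSet includes it, matching the method descriptions above.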
TreeMap in Java
The TreeMap in Java implements the Map and NavigableMap interfaces and extends the
AbstractMap class. The map is sorted according to the natural ordering of its keys, or by a
Comparator provided at map creation time, depending on which constructor is used. This
proves to be an efficient way of sorting and storing the key-value pairs. The storing order
maintained by the treemap must be consistent with equals, just like any other sorted map,
irrespective of the explicit comparators. The treemap implementation is not synchronized, in
the sense that if a map is accessed by multiple threads concurrently and at least one of the
threads modifies the map structurally, it must be synchronized externally. Some important
features of the treemap are:
Internal structure: The TreeMap methods that return key sets and value collections provide
iterators that are fail-fast in nature; thus any concurrent modification will throw
ConcurrentModificationException.
TreeMap is based upon tree data structure. Each node in the tree has,
• 3 Variables (K key=Key, V value=Value, boolean color=Color)
• 3 References (Entry left = Left, Entry right = Right, Entry parent = Parent)
Constructors in TreeMap:
1 TreeMap() : Constructs an empty tree map that will be sorted by using the natural
order of its keys.
2 TreeMap(Comparator comp) : Constructs an empty tree-based map that will be
sorted by using the Comparator comp.
3 TreeMap(Map m) : Initializes a tree map with the entries from m, which will be sorted
by using the natural order of the keys.
4 TreeMap(SortedMap sm) : Initializes a tree map with the entries from sm, which will
be sorted in the same order as sm
Methods of TreeMap:
1 boolean containsKey(Object key): Returns true if this map contains a mapping for
the specified key.
2 boolean containsValue(Object value): Returns true if this map maps one or more
keys to the specified value.
3 Object firstKey(): Returns the first (lowest) key currently in this sorted map.
4 Object get(Object key): Returns the value to which this map maps the specified key.
5 Object lastKey(): Returns the last (highest) key currently in this sorted map.
6 Object remove(Object key): Removes the mapping for this key from this TreeMap
if present.
7 void putAll(Map map): Copies all of the mappings from the specified map to this map.
8 Set entrySet(): Returns a set view of the mappings contained in this map.
9 int size(): Returns the number of key-value mappings in this map.
10 Collection values(): Returns a collection view of the values contained in this map.
11 Object clone(): The method returns a shallow copy of this TreeMap.
12 void clear(): The method removes all mappings from this TreeMap and clears the map.
13 SortedMap headMap(Object key_value): The method returns a view of the portion
of the map strictly less than the parameter key_value.
14 Set keySet(): The method returns a Set view of the keys contained in the treemap.
15 Object put(Object key, Object value): The method is used to insert a mapping into a map
16 SortedMap subMap(K startKey, K endKey): The method returns the portion of
this map whose keys range from startKey, inclusive, to endKey, exclusive.
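A short sketch exercising some of these TreeMap methods (the keys and values are arbitrary, chosen only for the demonstration):

```java
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        TreeMap<Integer, String> tm = new TreeMap<Integer, String>();
        tm.put(3, "three"); tm.put(1, "one"); tm.put(2, "two");

        System.out.println(tm);                 // {1=one, 2=two, 3=three} -- sorted by key
        System.out.println(tm.firstKey());      // 1 -- lowest key
        System.out.println(tm.lastKey());       // 3 -- highest key
        System.out.println(tm.headMap(3));      // {1=one, 2=two} -- keys strictly < 3
        System.out.println(tm.containsKey(2));  // true
        tm.remove(2);
        System.out.println(tm.size());          // 2
    }
}
```

Regardless of insertion order, iteration and printing follow the natural ordering of the keys.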
Every node of Trie consists of multiple branches. Each branch represents a possible
character of keys. We need to mark the last node of every key as end of word
node. A Trie node field isEndOfWord is used to distinguish the node as end of word
node. A simple structure to represent nodes of the English alphabet can be as following,
// Trie node
struct TrieNode
{
    struct TrieNode *children[ALPHABET_SIZE];
    // isEndOfWord is true if the node
    // represents end of a word
    bool isEndOfWord;
};
Inserting a key into a Trie is a simple approach. Every character of the input key is inserted
as an individual Trie node. Note that children is an array of pointers (or references) to
next-level trie nodes; the key character acts as an index into the array children. If the
input key is new or an extension of an existing key, we construct the non-existing
nodes of the key and mark the last node as end of word. If the input key is a prefix of
an existing key in the Trie, we simply mark the last node of the key as the end of a word.
The key length determines the Trie depth.
Searching for a key is similar to insert operation, however, we only compare the
characters and move down. The search can terminate due to the end of a string or lack
of key in the trie. In the former case, if the isEndofWord field of the last node is true, then
the key exists in the trie. In the second case, the search terminates without examining all
the characters of the key, since the key is not present in the trie.
The following picture explains construction of the trie using the keys given in the example below:

                      root
                   /   \     \
                  t     a     b
                  |     |     |
                  h     n     y
                  |     |  \  |
                  e     s   y e
               /  |     |
              i   r     w
              |   |     |
              r   e     e
                        |
                        r
In the picture, every node is of type trie_node_t. For example, for the root only the
children a, b and t are filled; all other children of the root are NULL.
Similarly, "a" at the next level has only one child ("n"); all its other children are NULL.
The leaf nodes are shown in blue.
// Java implementation of search and insert operations
// on Trie
public class Trie {

    static final int ALPHABET_SIZE = 26;

    // trie node
    static class TrieNode
    {
        TrieNode[] children = new TrieNode[ALPHABET_SIZE];
        // isEndOfWord is true if the node represents end of a word
        boolean isEndOfWord;

        TrieNode(){
            isEndOfWord = false;
            for (int i = 0; i < ALPHABET_SIZE; i++)
                children[i] = null;
        }
    };

    static TrieNode root = new TrieNode();

    // If not present, inserts key into the trie.
    // If key is a prefix of an existing key, marks its last node.
    static void insert(String key)
    {
        TrieNode pCrawl = root;
        for (int level = 0; level < key.length(); level++)
        {
            int index = key.charAt(level) - 'a';
            if (pCrawl.children[index] == null)
                pCrawl.children[index] = new TrieNode();
            pCrawl = pCrawl.children[index];
        }
        pCrawl.isEndOfWord = true;    // mark last node as end of word
    }

    // Returns true if key is present in the trie, else false
    static boolean search(String key)
    {
        TrieNode pCrawl = root;
        for (int level = 0; level < key.length(); level++)
        {
            int index = key.charAt(level) - 'a';
            if (pCrawl.children[index] == null)
                return false;
            pCrawl = pCrawl.children[index];
        }
        return pCrawl.isEndOfWord;
    }

    // Driver
    public static void main(String args[])
    {
        // Input keys (use only 'a' through 'z' and lower case)
        String keys[] = {"the", "a", "there", "answer", "any", "by", "bye", "their"};
        String output[] = {"Not present in trie", "Present in trie"};

        // Construct trie
        int i;
        for (i = 0; i < keys.length ; i++)
            insert(keys[i]);

        if(search("the") == true)
            System.out.println("the --- " + output[1]);
        else System.out.println("the --- " + output[0]);
        if(search("these") == true)
            System.out.println("these --- " + output[1]);
        else System.out.println("these --- " + output[0]);
        if(search("their") == true)
            System.out.println("their --- " + output[1]);
        else System.out.println("their --- " + output[0]);
        if(search("thaw") == true)
            System.out.println("thaw --- " + output[1]);
        else System.out.println("thaw --- " + output[0]);
    }
}
// This code is contributed by Sumit Ghosh
OUTPUT:
the --- Present in trie
these --- Not present in trie
their --- Present in trie
thaw --- Not present in trie
Trie | (Delete)
In the previous post on the trie we described how to insert and search for a key in a trie. Here
is an algorithm for deleting a key from a trie.
During the delete operation we delete the key in a bottom-up manner using recursion. The
following are the possible conditions when deleting a key from a trie:
1. Key may not be there in the trie. The delete operation should not modify the trie.
2. Key is present as a unique key (no prefix of the key is another key, nor is the key
itself a prefix of another key in the trie). Delete all its nodes.
3. Key is a prefix of another, longer key in the trie. Unmark the leaf node.
4. Key is present in the trie, having at least one other key as a prefix. Delete nodes from
the end of the key up to the first node of the longest prefix key that is still in use.
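The four cases above can be sketched as a recursive Java routine. This is a sketch, not the original listing; it carries its own minimal TrieNode, insert and search so that it is self-contained, and the helper names (isEmpty, delete) are illustrative:

```java
public class TrieDelete {
    static final int ALPHABET_SIZE = 26;

    static class TrieNode {
        TrieNode[] children = new TrieNode[ALPHABET_SIZE];
        boolean isEndOfWord;
    }

    static TrieNode root = new TrieNode();

    static void insert(String key) {
        TrieNode p = root;
        for (char c : key.toCharArray()) {
            int i = c - 'a';
            if (p.children[i] == null) p.children[i] = new TrieNode();
            p = p.children[i];
        }
        p.isEndOfWord = true;
    }

    static boolean search(String key) {
        TrieNode p = root;
        for (char c : key.toCharArray()) {
            int i = c - 'a';
            if (p.children[i] == null) return false;
            p = p.children[i];
        }
        return p.isEndOfWord;
    }

    static boolean isEmpty(TrieNode n) {
        for (TrieNode c : n.children) if (c != null) return false;
        return true;
    }

    // Returns the (possibly null) replacement for node after deleting key.
    static TrieNode delete(TrieNode node, String key, int depth) {
        if (node == null) return null;                 // case 1: key not present
        if (depth == key.length()) {
            node.isEndOfWord = false;                  // case 3: unmark the end of word
            return isEmpty(node) ? null : node;        // case 2: free an unshared node
        }
        int i = key.charAt(depth) - 'a';
        node.children[i] = delete(node.children[i], key, depth + 1);
        // case 4: remove node bottom-up if it became useless and ends no other word
        if (isEmpty(node) && !node.isEndOfWord && node != root) return null;
        return node;
    }

    public static void main(String[] args) {
        insert("the"); insert("their");
        delete(root, "the", 0);
        System.out.println(search("the"));    // false -- "the" deleted
        System.out.println(search("their"));  // true  -- shared prefix kept
    }
}
```

Deleting "the" only unmarks its last node, because that node is still on the path of "their"; the bottom-up unwinding then leaves every shared node in place.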
1. insert(key, object): insert the (key, object) pair. For instance, this could be a word
and its definition, a name and phone number, etc. The key is what will be used to
access the object.
2. lookup(key): return the associated object.
3. delete(key): remove the key and its object from the data structure. We may or
may not care about this operation.
A balanced binary search tree is a tree that automatically keeps its height small
(guaranteed to be logarithmic) over a sequence of insertions and deletions. Such structures
provide efficient implementations for abstract data types such as associative arrays.
The primary step towards the flexibility that we need to guarantee balance in binary search
trees is to allow the nodes in our trees to hold more than one key. This can be done using
2–3 search trees (not binary, but balanced).
The 2–3 tree is a way to generalize BSTs to provide the flexibility that we need to
guarantee fast performance. It allows 1 or 2 keys per node, so a node may be either
a 2-node or a 3-node.
• 2-node: one key, two children; left is less, and right is greater than the key.
• 3-node: two keys, three children; left is less, middle is between, and right is
greater than the two keys.
• Perfect Balance: Every path from the root to the null link has the same length.
• Symmetric Order: Every node is larger than all the nodes on the left subtree,
smaller than the keys on the right subtree, and in case of 3-node, all nodes in the
middle are between the two keys of the 3-node. So, we can traverse the nodes in
ascending order; In-order traversal.
Operations Overview
We aren’t going to discuss the implementation code, because it’s complicated, rather,
we will be giving an overview of two of the main operations of a 2–3 search tree. These
operations are search and insert.
search
Searching for an item in a 2–3 tree is similar to searching for an item in a binary
search tree since it maintains a symmetric order.
You compare the given key against the key(s) in the node. If it is smaller, go left. If it is
between the two keys (of a 3-node), follow the middle link. If it is greater, go right.
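The comparison rule above can be sketched in Java. The Node representation below is illustrative only, since the text deliberately omits implementation details; children holds one more link than there are keys (left, optional middle, right):

```java
public class TwoThreeSearch {
    static class Node {
        int[] keys;        // 1 key (2-node) or 2 keys (3-node), in ascending order
        Node[] children;   // null at the leaves; length = keys.length + 1 otherwise
        Node(int[] keys, Node[] children) { this.keys = keys; this.children = children; }
    }

    static boolean search(Node node, int key) {
        if (node == null) return false;
        int i = 0;
        // find the first key >= the search key
        while (i < node.keys.length && key > node.keys[i]) i++;
        if (i < node.keys.length && node.keys[i] == key) return true;
        // follow the left, middle, or right link depending on where we stopped
        return node.children == null ? false : search(node.children[i], key);
    }

    public static void main(String[] args) {
        // A small hand-built, perfectly balanced 2-3 tree:
        //        [5 | 9]
        //       /   |   \
        //     [2]  [7]  [11 | 13]
        Node root = new Node(new int[]{5, 9}, new Node[]{
            new Node(new int[]{2}, null),
            new Node(new int[]{7}, null),
            new Node(new int[]{11, 13}, null)
        });
        System.out.println(search(root, 7));   // true  -- found via the middle link
        System.out.println(search(root, 8));   // false -- not in the tree
    }
}
```

Searching for 7 compares against 5 and 9 at the root, falls between them, and follows the middle link, exactly as described above.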
All insertion operations start with searching for the node (at the bottom) into which
the new key can be inserted.
If the node at which the search terminates is a 2-node, we just replace it with a 3-node
containing its key and the new key to be inserted.
In 2–3 search trees, we insert into an existing node rather than attaching a new node to a
null link (as in BSTs), … Why? To remain perfectly balanced.
Suppose that we want to insert into a single 3-node. Such a node has no room for a
new key. So, to be able to perform this insertion, we temporarily convert the 3-node
into a 4-node (a node with three keys, and four children).
Then, we split the 4-node into three 2-nodes, one with the middle key (at the root), one
with the smallest of the three keys (pointed to by the left link of the root), and one with
the largest of the three keys (pointed to by the right link of the root).
Suppose that the search ends at a 3-node at the bottom whose parent is a 2-node.
In this case, we follow the same steps as just described, by making a temporary 4-
node, then splitting the 4-node, but then, instead of creating a new node to hold the
middle key, we move the middle key to the parent node (2-node).
Now suppose that the search ends at a 3-node at the bottom whose parent is a 3-node.
Again, we make a temporary 4-node, then split the 4-node, moving the middle key to
the parent node (3-node). Since the parent node is a 3-node, we convert it into a
temporary new 4- node. Then, we perform exactly the same transformation on that
node.
These transformations preserve the properties of a 2–3 tree that the tree is in a
symmetric order and perfectly balanced.
This is because, when we insert or move keys around, we keep the keys in order; we
maintain a symmetric order.
And, we increase the height of the tree when we end up with a temporary 4-node at
the root. In this case we split the temporary 4-node into three 2-nodes. So, we can still
split the root node (4-node) while maintaining perfect balance in the tree.
Analysis
• The worst case is when all the nodes are 2-nodes; the tree height is log2 N.
• The best case is when all the nodes are 3-nodes; the tree height is log3 N.
Here is a summary, for symbol table implementations after introducing the 2–3 search trees.
COURSE OBJECTIVES
Understand and apply linear data structures-List, Stack and Queue.
Understand the graph algorithms.
Learn different algorithms analysis techniques.
Apply data structures and algorithms in real time applications
Able to analyze the efficiency of algorithm.
SYLLABUS
UNIT I LINEAR DATA STRUCTURES 9
Introduction - Abstract Data Types (ADT) – Stack – Queue – Circular Queue - Double Ended
Queue - Applications of stack – Evaluating Arithmetic Expressions - Other Applications -
Applications of Queue - Linked Lists - Singly Linked List - Circularly Linked List - Doubly
Linked lists – Applications of linked list – Polynomial Manipulation.
UNIT II NON-LINEAR TREE STRUCTURES 9
Binary Tree – expression trees – Binary tree traversals – applications of trees – Huffman
Algorithm - Binary search tree - Balanced Trees - AVL Tree - B-Tree - Splay Trees – Heap-
Heap operations- -Binomial Heaps - Fibonacci Heaps- Hash set.
UNIT III GRAPHS 9
Representation of graph - Graph Traversals - Depth-first and breadth-first traversal -
Applications of graphs - Topological sort – shortest-path algorithms - Dijkstra's algorithm –
Bellman-Ford algorithm – Floyd's Algorithm - minimum spanning tree – Prim's and Kruskal's
algorithms.
UNIT IV ALGORITHM DESIGN AND ANALYSIS 9
Algorithm Analysis – Asymptotic Notations - Divide and Conquer – Merge Sort – Quick Sort -
Binary Search - Greedy Algorithms – Knapsack Problem – Dynamic Programming – Optimal
Binary Search Tree - Warshall's Algorithm for Finding Transitive Closure.
UNIT V ADVANCED ALGORITHM DESIGN AND 9
ANALYSIS
Backtracking – N-Queen's Problem - Branch and Bound – Assignment Problem - P & NP
problems – NP-complete problems – Approximation algorithms for NP-hard problems –
Traveling salesman problem-Amortized Analysis.
TOTAL : 45 PERIODS
REFERENCES:
1. Anany Levitin, “Introduction to the Design and Analysis of Algorithms”, Pearson
Education, 2015
2. E. Horowitz, S. Sahni and Dinesh Mehta, “Fundamentals of Data Structures in C++”,
University Press, 2007
3. E. Horowitz, S. Sahni and S. Rajasekaran, “Computer Algorithms/C++”, Second Edition,
University Press, 2007
4. Gilles Brassard, “Fundamentals of Algorithms”, Pearson Education 2015
5. Harsh Bhasin, “Algorithms Design and Analysis”, Oxford University Press 2015
6. John R.Hubbard, “Data Structures with Java”, Pearson Education, 2015
7. M. A. Weiss, “Data Structures and Algorithm Analysis in Java”, Pearson Education Asia,
2013
8. Peter Drake, “Data Structures and Algorithms in Java”, Pearson Education 2014
9. T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, “Introduction to Algorithms”,
Third Edition, PHI Learning Private Ltd, 2012
10. Tenenbaum A. M., Langsam Y., Augenstein M. J., “Data Structures Using C”, Pearson
Education, 2004.
11. V. Aho, J. E. Hopcroft, and J. D. Ullman, “Data Structures and Algorithms”, Pearson
Education, 1983
COURSE OUTCOMES (COs)
C201.1: Describe, explain and use abstract data types including stacks, queues and lists
C201.2: Design and Implement Tree data structures and Sets
C201.3: Able to understand and implement non linear data structures - graphs
C201.4: Able to understand various algorithm design and implementation
PART - A
UNIT – I
1. Define data structure. What is the main advantage of data structure?
A data structure is a logical or mathematical way of organizing data. It is the way of
organizing, storing and retrieving data and the set of operations that can be performed on
that data.
Eg.: Arrays, structures, stack, queue, linked list, trees, graphs.
2. What are the different types of data structures?
Primitive Data Structure- It is basic data structure which is defined by the language and
can be accessed directly by the computer.
6. How much memory is required for storing two matrices A(10,15,20) and B(11,16,21),
where each element requires 16 bits for storage?
Number of elements in array A = 10*15*20 = 3000
Element size = 16 bits
Memory required for storing A = 3000*16 = 48,000 bits
Number of elements in array B = 11*16*21 = 3696
Element size = 16 bits
Memory required for storing B = 3696*16 = 59,136 bits
Total = 107,136 bits = 107136/8 = 13,392 bytes.
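The arithmetic can be checked with a few lines of Java:

```java
public class ArraySize {
    public static void main(String[] args) {
        int bitsA = 10 * 15 * 20 * 16;            // elements of A times 16 bits each
        int bitsB = 11 * 16 * 21 * 16;            // elements of B times 16 bits each
        System.out.println(bitsA);                // 48000
        System.out.println(bitsB);                // 59136
        System.out.println((bitsA + bitsB) / 8);  // 13392 bytes in total
    }
}
```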
7. What are the differences between arrays and structures? (JAN 2012)

ARRAYS                                       STRUCTURES
1. Array size should be mentioned during     1. Declared using the keyword “struct”.
   the declaration.
2. Array uses static memory location.        2. Each member has its own memory
                                                location.
3. Each array element has only one part.     3. Only one member can be handled at a
                                                time.
8. Define stack. Give some applications of stack.
A stack is an ordered list in which insertions and deletions are made at one end called the
top. Stack is called as a Last In First Out(LIFO) data structure. Stack is used in Function
call, Recursion and evaluation of expression.
9. How do you check the stack full and stack empty condition?
void StackFull()
{
    if (top == maxsize - 1)
        printf("Stack is Full");
}
void StackEmpty()
{
    if (top == -1)
        printf("Stack is Empty");
}
10. Define the terms: Infix, postfx and prefix.
INFIX: It is a conventional way of writing an expression.The notation is
<Operand><Operator><Operand>
This is called infix because the operators are in between the operands.
EXAMPLE: A+B
POSTFIX: In this notation the operator is suffixed by operands.
<Operand><Operand><Operator>
EXAMPLE: AB+
PREFIX: In this notation the operator precedes the two operands.
<Operator><Operand><Operand>
EXAMPLE: +AB
11. What are the advantages in reverse polish (prefix and postfix notation) over polish
(infix) notation?
The advantages in prefix & postfix notation over infix notation is:
The scanning of the expression is required in only one direction viz. from left to
right and only once; where as for the infix expression the scanning has to be done in both
directions.
For example, to evaluate the postfix expression abc*+, we scan from left to right
until we encounter *. The two operands which appear immediately to the left of this
operator are its operands and the expression bc* is replaced by its value.
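The left-to-right evaluation just described can be sketched with a stack in Java. Single-character digit operands are assumed for simplicity; the example maps a=2, b=3, c=4, so the expression abc*+ becomes 234*+:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PostfixEval {
    static int evaluate(String postfix) {
        Deque<Integer> stack = new ArrayDeque<Integer>();
        for (char c : postfix.toCharArray()) {
            if (Character.isDigit(c)) {
                stack.push(c - '0');                   // operand: push its value
            } else {
                int b = stack.pop(), a = stack.pop();  // the two operands to its left
                switch (c) {
                    case '+': stack.push(a + b); break;
                    case '-': stack.push(a - b); break;
                    case '*': stack.push(a * b); break;
                    case '/': stack.push(a / b); break;
                }
            }
        }
        return stack.pop();                            // the final value
    }

    public static void main(String[] args) {
        // a=2, b=3, c=4: "abc*+" becomes "234*+" = 2 + 3*4 = 14
        System.out.println(evaluate("234*+"));  // 14
    }
}
```

A single left-to-right scan suffices, exactly as the answer above claims for postfix notation.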
12. Define queue and give its applications
A Queue is an ordered list in which all insertions take place at one end called the rear and
all deletions take place at the opposite end called the front. The Queue is called as the
FIFO data structure.
Applications of Queue:
1. It is used in batch processing of O.S
2. It is used in simulation
3. It is used in queuing theory
4. It is used in computer networks where the server takes the jobs of the clients
using queuing strategy.
13. What is a circular queue? How do you check the queue full condition?
In circular queue, the elements are arranged in a circular fashion. Circular queue is a data
structure which efficiently utilizes the memory space & the elements Q[0], Q[1], …, Q[n-
1] are arranged in circular fashion such that Q[n-1] is followed by Q[0].
It returns queue full condition only when the queue does not have any space to insert new
values. But ordinary queue returns queue full condition when the rear reaches the last
position.
void CircularQFull()
{
    if (front == (rear + 1) % maxsize)
        printf("Circular Queue is Full");
}
14. Write an algorithm to count the nodes in a circular queue
int countcq()
{
    int count = 0, i;
    if (front == -1)
        printf("Queue is empty");
    else
    {
        i = front;
        while (i != rear)
        {
            count++;
            i = (i + 1) % maxsize;
        }
        count++;   /* count the element at the rear */
    }
    return count;
}
15. Define Dequeue.
Dequeue is a queue in which insertion and deletion can happen in both the ends
(front & rear) of the queue.
Insertion →  | 10 | 20 | 30 |  ← Insertion
Deletion  ←  (front)   (rear)  → Deletion
16. What are the two kinds of dequeue?
Input restricted dequeue -- restricts the insertion of elements at one end (rear) only, but
the deletion of elements can be done at both the ends of a queue.
Output restricted dequeue --Restricts the deletion of elements at one end (front) only,
and allows insertion to be done at both the ends of a deque.
17. What is a priority queue?
A queue in which we are able to insert or remove items from any position based on some
priority is referred to as priority queue.
18. Define Linked list and give its applications.
It is an ordered collection of homogeneous data elements. The elements of the linked list
are stored in non contiguous memory locations. So each element contains the address of
the next element in the list. The last node contains the NULL pointer which represents the
end of the list.
Example:
First
1 6 4 10 NULL
In a doubly linked list, the head always points to the first node. The prev pointer of the
first node points to NULL and the next pointer of the last node points to NULL.
21. What are the advantages of using doubly linked list over singly linked list?
The advantage of using a doubly linked list is that it uses a double set of pointers, one
pointing to the next item and the other pointing to the preceding item. This allows us to
traverse the list in either direction.
22. List the advantages of linked list
Since a linked list uses dynamic memory allocation, the list can grow dynamically,
and the insertion and deletion of elements into the list require no data
movement.
UNIT-II
1. Define tree.
A tree is a finite set of one or more nodes such that there is a specially designated node
called the root. The remaining nodes are partitioned into n>=0 disjoint sets T1, T2, …,
Tn, where each of these sets is a tree. T1, …,Tn are called the subtrees of the root.
2. Define the following terms: node, leaf node, ancestors, siblings of a node
Node: Each element of a binary tree is called node of a tree. Each node may be a root of a
tree with zero or more sub trees.
Leaf node: A node with no children (successor) is called leaf node or terminal node.
Ancestor: Node n1 is an ancestor of node n2 if n1 is either a father of n2 or father of
some ancestor of n2.
Siblings: Two nodes are siblings if they are the children of the same parent.
3. Define level of a node, degree of a node, degree of a tree, height and depth of a tree.
Level of a node: The root node is at level 1. If a node is at level l, then its children are at
level l+1.
Degree of a node: The number of sub trees of a node is called as degree of a node.
The degree of a tree is the maximum of the degree of the nodes in the tree.
The height or depth of a tree is defined to be the maximum level of any node in the tree.
4. What are the ways to represent Binary trees in memory?
1. Array representation (or) Sequential Representation.
2. Linked List representation (or) Node representation.
5. Define binary tree.
Binary tree is a finite set of elements that is either empty or is partitioned into three
disjoint subsets. The first subset contains the single element called the root of tree. The
other two subsets are themselves binary tree called the left and right sub tree of original
tree. In other words, a binary tree is a tree in which each node can have a maximum of
two children.
6. Define Full binary tree (or) Complete binary tree
A full binary tree of depth k is a binary tree of depth k having 2^k − 1 nodes. In other words,
all the levels in the binary tree contain the maximum number of nodes.
Given tree:
A
B C
D E F G
H I J
Inorder : DHBEAFCIGJ
Preorder: ABDHECFGIJ
Postorder: HDEBFIJGCA
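The three traversals of the given tree can be checked with a short Java sketch. The Node class is illustrative; the tree is built as drawn above, with H as D's only child and I, J as G's children:

```java
public class TraversalDemo {
    static class Node {
        char data; Node left, right;
        Node(char d) { data = d; }
        Node(char d, Node l, Node r) { data = d; left = l; right = r; }
    }

    // Each traversal returns the visit sequence as a string.
    static String inorder(Node n)   { return n == null ? "" : inorder(n.left) + n.data + inorder(n.right); }
    static String preorder(Node n)  { return n == null ? "" : n.data + preorder(n.left) + preorder(n.right); }
    static String postorder(Node n) { return n == null ? "" : postorder(n.left) + postorder(n.right) + n.data; }

    public static void main(String[] args) {
        // The tree from the question: H is D's right child; I and J are G's children.
        Node root = new Node('A',
            new Node('B', new Node('D', null, new Node('H')), new Node('E')),
            new Node('C', new Node('F'), new Node('G', new Node('I'), new Node('J'))));
        System.out.println(inorder(root));    // DHBEAFCIGJ
        System.out.println(preorder(root));   // ABDHECFGIJ
        System.out.println(postorder(root));  // HDEBFIJGCA
    }
}
```

The printed sequences match the three answers given above.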
11. How many null branches can a binary tree have with 20 nodes?
21 null branches.
Let us take a tree with 5 nodes (n=5); it will have only 6 (i.e., 5+1) null branches. In
general, a binary tree with n nodes has exactly n+1 null branches. Thus a binary tree with
20 nodes will have 21 null branches.
12. What is a binary search tree?
A binary search tree is a binary tree. It may be empty. If it is not empty then, it satisfies the
following properties.
23. Explain Hash Function. Mention Different types of popular hash function.
Hash Function takes an identifier and computes the address of that identifier in the hash table.
1. Division method
2. Mid-square method
3. Folding method
24. Define Splay Tree.
A splay tree is a self-adjusting binary search tree with the additional property that recently
accessed elements are quick to access again. It performs basic operations such as insertion,
look-up and removal in O(log n) amortized time.
25. What are the different rotations in splay tree?
Zig Rotation.
Zag Rotation
Zig-Zag Rotation.
Zag-Zig Rotation
Zig-Zig Rotation
Zag-Zag Rotation
26.Write short notes on Heap.
Heap is a special case of balanced binary tree data structure where the root-node key is compared
with its children and arranged accordingly. If α has child node β then −
key(α) ≥ key(β)
27.Define Binomial Heap.
A Binomial Heap is a collection of Binomial Trees. A Binomial Tree of order 0 has 1 node. A
Binomial Tree of order k can be constructed by taking two binomial trees of order k-1 and
making one the leftmost child of the other.
A Binomial Tree of order k has following properties.
a) It has exactly 2^k nodes.
b) It has depth k.
c) There are exactly C(k, i) nodes at depth i for i = 0, 1, . . . , k.
d) The root has degree k and children of root are themselves Binomial Trees with order k-1, k-
2,.. 0 from left to right.
28.Define Fibonacci Heaps.
A Fibonacci heap is a data structure for priority queue operations, consisting of a collection
of heap-ordered trees. It has a better amortized running time than many other priority queue
data structures, including the binary heap and the binomial heap.
29.Write notes on Hash Set.
Implements Set Interface.
Underlying data structure for HashSet is hashtable.
As it implements the Set Interface, duplicate values are not allowed.
Objects that you insert in HashSet are not guaranteed to be inserted in same order.
Objects are inserted based on their hash code.
NULL elements are allowed in HashSet.
HashSet also implements the Serializable and Cloneable interfaces.
UNIT-III
1. Write the concept of Prim’s spanning tree.
Prim’s algorithm constructs a minimum spanning tree through a sequence of expanding
sub trees. The initial sub tree in such a sequence consists of a single vertex selected
arbitrarily from the set V of the graph’s vertices.
On each iteration, we expand the current tree in the greedy manner by simply attaching to
it the nearest vertex not in that tree. The algorithm stops after all the graph’s vertices have
been included in the tree being constructed
2. What is the purpose of Dijkstra's Algorithm?
Dijkstra's algorithm is used to find the shortest paths from a source vertex to every other
vertex. This algorithm is applicable to undirected and directed graphs with nonnegative
weights only.
3. How efficient is prim’s algorithm?
It depends on the data structures chosen for the graph itself and for the priority queue of
the set V-VT whose vertex priorities are the distances to the nearest tree vertices.
4. Mention the two classic algorithms for the minimum spanning tree problem.
Prim’s algorithm
Kruskal’s algorithm
5. What is the Purpose of the Floyd algorithm?
The Floyd’s algorithm is used to find the shortest distance between every pair of vertices
in a graph.
6. What are the conditions involved in the Floyd's algorithm?
Construct the adjacency matrix.
Set the diagonal elements to zero.
Apply the recurrence
Ak[i,j] = min( Ak-1[i,j], Ak-1[i,k] + Ak-1[k,j] )
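The recurrence can be sketched in Java. INF is a sentinel for a missing edge, and the 3-vertex sample graph is illustrative only:

```java
public class FloydDemo {
    static final int INF = 1_000_000;   // "no edge" sentinel, safe against overflow here

    // All-pairs shortest distances via the recurrence
    // Ak[i][j] = min(Ak-1[i][j], Ak-1[i][k] + Ak-1[k][j])
    static int[][] floyd(int[][] a) {
        int n = a.length;
        int[][] d = new int[n][n];
        for (int i = 0; i < n; i++) d[i] = a[i].clone();
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    d[i][j] = Math.min(d[i][j], d[i][k] + d[k][j]);
        return d;
    }

    public static void main(String[] args) {
        // Adjacency matrix: diagonal zero, INF where there is no edge.
        int[][] a = {
            {0,   3,   INF},
            {INF, 0,   2},
            {7,   INF, 0}
        };
        int[][] d = floyd(a);
        System.out.println(d[0][2]);   // 5 -- the path 0 -> 1 -> 2 (3 + 2)
    }
}
```

Each pass k allows vertex k as an intermediate point, which is exactly what the recurrence above expresses.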
7. Write the concept of kruskal’s algorithm.
Kruskal’s algorithm looks at a minimum spanning tree for a weighted connected graph
G=(V,E) as an acyclic sub graph with |V|-1 edges for which the sum of the edge weights
is the smallest. Consequently, the algorithm constructs a minimum spanning tree as an
expanding sequence of sub graphs, which are always acyclic but are not necessarily
connected on the intermediate stages of the algorithm. The algorithm begins by sorting
the graph’s edges in non decreasing order of their weights. Then, starting with the empty
sub graph, it scans this sorted list, adding the next edge on the list to the current sub graph
if such an inclusion does not create a cycle and simply skipping the edge otherwise.
8. What is the difference between dynamic programming and the divide and conquer
method?
Divide and conquer divides an instance into smaller instances with no intersections,
whereas dynamic programming deals with problems in which smaller instances overlap.
Consequently, divide-and-conquer algorithms do not explicitly store solutions to smaller
instances, while dynamic programming algorithms do.
9. State two obstacles to constructing a minimum spanning tree using the exhaustive-
search approach.
The number of spanning trees grows exponentially with the graph size.
Generating all spanning trees for a given graph is not easy; in fact, it is
more difficult than finding a minimum spanning tree for a weighted graph by
using one of the several efficient algorithms available for this problem.
10. Define spanning tree and minimum spanning tree problem.
A spanning tree of a connected graph is its connected acyclic sub graph that contains all
the vertices of the graph. A minimum spanning tree problem is the problem of finding a
minimum spanning tree for a given weighted connected graph.
11. Define the single source shortest paths problem.
Dijkstra’s algorithm solves the single-source shortest-path problem of finding shortest
paths from a given vertex (the source) to all the other vertices of a weighted graph or
digraph. It works as Prim’s algorithm but compares path lengths rather than edge lengths.
Dijkstra’s algorithm always yields a correct solution for a graph with nonnegative
weights
12. Mention the methods for generating transitive closure of digraph.
Depth First Search (DFS)
Breadth First Search (BFS)
13. What do you mean by graph traversals?
Graph traversal (also known as graph search) refers to the process of visiting (checking
and/or updating) each vertex in a graph. Such traversals are classified by the order in
which the vertices are visited. Tree traversal is a special case of graph traversal.
14. Define Depth First Search (DFS)
The Depth First Search (DFS) algorithm traverses a graph in a depthward motion and uses a stack
to remember the next vertex from which to continue the search when a dead end occurs in any iteration.
15. Write down the steps involved in DFS
Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Push it in a
stack.
Rule 2 − If no adjacent vertex is found, pop up a vertex from the stack. (It will pop up all
the vertices from the stack, which do not have adjacent vertices.)
Rule 3 − Repeat Rule 1 and Rule 2 until the stack is empty
16. Define Breadth First Search (BFS)
The Breadth First Search (BFS) algorithm traverses a graph in a breadthward motion and uses
a queue to remember the next vertex from which to continue the search when a dead end
occurs in any iteration.
17. Write down the steps involved in Breadth First Search (BFS)
Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Insert
it in a queue.
Rule 2 − If no adjacent vertex is found, remove the first vertex from the queue.
Rule 3 − Repeat Rule 1 and Rule 2 until the queue is empty
18. Define graph data structure
A graph is a pictorial representation of a set of objects where some pairs of objects are
connected by links. The interconnected objects are represented by points termed
as vertices, and the links that connect the vertices are called edges. Formally, a graph is a
pair of sets (V, E), where V is the set of vertices and E is the set of edges connecting the
pairs of vertices.
UNIT-IV
1. Define Algorithm.
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for
obtaining a required output for any legitimate input in a finite amount of time.
2. Define order of an algorithm.
The order of an algorithm is a standard notation that has been developed
to represent functions that bound the computing time of algorithms. The order of an
algorithm is a way of defining its efficiency. It is usually referred to as Big O notation.
3. What are the features of efficient algorithm?
Free of ambiguity
Efficient in execution time
Concise and compact
Completeness
Definiteness
Finiteness
4. Define Asymptotic Notations.
The notations that enable us to make meaningful statements about the time and space
complexities of a program are called asymptotic notations. Some of the
asymptotic notations are 1. Big Oh notation, 2. Theta notation, 3. Omega notation, 4. Little
Oh notation.
5. What is best-case efficiency?
The best-case efficiency of an algorithm is its efficiency for the best-case input of size
n, which is an input or inputs for which the algorithm runs the fastest among all
possible inputs of that size.
6. Define divide and conquer design technique
A problem’s instance is divided into several smaller instances of the same problem,
ideally of about the same size
The smaller instances are solved
If necessary, the solutions obtained for the smaller instances are combined to get a
solution to the original instance.
7. List out some of the stable and unstable sorting techniques.
Stable sorting techniques includes Bubble sort, Insertion sort, Selection sort, Merge sort
and Unstable sorting techniques includes Shell sort, Quick sort, Radix sort, Heap sort
problem with solutions to its smaller sub problems of the same type. Dynamic
programming suggests solving each smaller sub problem once and recording the results
in a table from which a solution to the original problem can be then obtained.
17. Define objective function and optimal solution
To find a feasible solution that either maximizes or minimizes a given objective function.
It has to be the best choice among all feasible solution available on that step.
18. Define knapsack problem using dynamic programming.
Designing a dynamic programming algorithm for the knapsack problem: given n items of
known weights w1. . . wn and values v1, . . . , vn and a knapsack of capacity W, find the
most valuable subset of the items that fit into the knapsack. We assume here that all the
weights and the knapsack capacity are positive integers; the item values do not have to be
integers
19. Mention different algorithm design techniques
Brute force
Divide and conquer
Greedy technique
Dynamic programming
Backtracking
Branch and bound
20. Mention the two properties of sorting algorithms
A sorting algorithm is called stable if it preserves the relative order of any two
equal elements in its input.
An algorithm is said to be in-place if it does not require extra memory, except,
possibly, for a few memory units.
UNIT-V
1. On what basis problems are classified?
Problems are classified into two types based on time complexity. They are
Polynomial (P) Problem
Non-Polynomial (NP) Problem
2. Define Polynomial (P) problem
Class P is a class of decision problems that can be solved in polynomial time by
(deterministic) algorithms. This class of problems is called polynomial.
3. Define Non Polynomial (NP) problem
Class NP is the class of decision problems that can be solved by nondeterministic
polynomial algorithms. This class of problems is called nondeterministic polynomial
4. Give some examples of Polynomial problems
Selection sort
Bubble sort
String editing
Factorial
5. Give some examples of Non-Polynomial problems
Travelling Salesman Problem
Knapsack Problem
Graph Coloring
6. Define backtracking
The principal idea is to construct solutions one component at a time and evaluate such
partially constructed candidates as follows. If a partially constructed solution can be
developed further without violating the problem’s constraints, it is done by taking the
first remaining legitimate option for the next component. If there is no legitimate option
for the next component, no alternatives for any remaining component need to be
considered. In this case, the algorithm backtracks to replace the last component of the
partially constructed solution with its next option.
7. Define state space tree
It is convenient to implement this kind of processing by constructing a tree of choices being
made, called the state-space tree. Its root represents an initial state before the search for a
solution begins. The nodes of the first level in the tree represent the choices made for the first
component of a solution; the nodes of the second level represent the choices for the second
component, and so on
8. When a node in a state space tree is said to promising and non promising?
A node in a state-space tree is said to be promising if it corresponds to a partially
constructed solution that may still lead to a complete solution; otherwise, it is called non
promising. Leaves represent either non promising dead ends or complete solutions found by the
algorithm
9. Define n-queens problem
The problem is to place n queens on an n × n chessboard so that no two queens attack
each other by being in the same row or in the same column or on the same diagonal
10. Define branch and bound method
Branch and bound is an algorithm that enhances the idea of generating a state
space tree with idea of estimating the best value obtainable from a current
node of the decision tree
If such an estimate is not superior to the best solution seen up to that point in
the processing, the node is eliminated from further consideration
11. How are NP-hard problems different from NP-complete problems?
NP-hard: a problem to which every problem in NP can be reduced in polynomial time; an
NP-hard problem need not itself belong to NP.
NP-complete: a problem that is both in NP and NP-hard. If any NP-complete problem can
be solved in polynomial time, then all problems in NP can be solved in polynomial time.
12. Define decision problem
Any problem for which the answer is either zero or one is called a decision problem. An
algorithm for a decision problem is termed a Decision algorithm
13. Define optimization Problem
Any problem that involves the identification of an optimal (maximum or minimum) value
of a given cost function is known as an optimization problem. An Optimization algorithm
is used to solve an optimization problem
14. Mention the relation between P and NP
Every problem in P is also in NP, i.e. P ⊆ NP. Whether P = NP is a famous open question.
15. Mention the relation between P, NP, NP-Hard and NP-Complete Problem
P ⊆ NP; the NP-complete problems are exactly those that are both in NP and NP-hard, while
NP-hard problems need not belong to NP. If P ≠ NP, then no NP-complete problem lies in P.
PART-B
UNIT-I
1. Write the algorithm for performing operations in a stack. Trace your algorithm
with suitable example
Stack
A stack is an ordered collection of items into which new items may be inserted and from
which items may be deleted at one end called the top of the stack
Stack is a linear data structure which follows Last-in First-out principle, in which both
insertion and deletion occur at only one end of the list called the top.
The insertions operations is called push and deletion operations is called pop operation.
Every insertion stack pointer is incremented by one, every deletion stack pointer will be
decremented by one
Operations on stack
Push:- the process of inserting a new element to the top of the stack. For every push
operations the top is incremented by one.
Pop:- Pop removes the item in the top of the stack
[Figure: successive stack contents while pushing and popping the elements A, B, C and D]
Algorithm for pop:
int pop(int s[10], int top)
{
int x = -1;
if (top == -1)
printf("Stack Empty");
else
{
x = s[top];
top = top - 1;
}
return x;
}
Algorithm for display:
void display(int s[10], int top)
{
int i;
if (top == -1)
printf("Stack Empty");
else
{
for (i = 0; i <= top; i++)
printf("%d ", s[i]);
}
}
2. Evaluate the postfix expression that is obtained in (i) for the values A = 5, B =3, C=
2, D= 2, E = 4, F = 3, G = 8, H=6
3. Write the algorithms for PUSH, POP and change operations on stack. Using these
algorithms, how do you check whether the given string is a palindrome?
if (pos > top)
{
printf("Change operation is not possible");
}
else
{
s[pos] = val;
}
4. Write the algorithm for converting infix expression to postfix expression with the
suitable example
Infix to postfix Conversion:
Operator priority:
<, >, <=, >= : 5
==, != : 6
&& : 7
|| : 8
1. Read the infix expression one character at a time and repeat steps 2 to 5 until it
encounters the delimiter.
2. If (the character is an operand)
Append it to the postfix string.
3. else if (the character is '(')
Push it onto the stack.
4. else if (the character is ')')
Pop the elements from the stack and append them to the postfix string until '(' is
encountered. Discard both parentheses in the output.
5. else if (the character is an operator)
{
While (the stack is not empty and the priority of the top element in the stack is
higher than or equal to the priority of the input character)
Pop the operator from the stack and append it to the postfix string.
Push the operator onto the stack.
}
6. While (the stack is not empty)
Pop the symbols from the stack and append them to the postfix expression.
5. Write the algorithm for evaluating the postfix expression with the suitable example.
Evaluation of expression
Algorithm
Step 1: Read the input postfix string one character at a time till the end of input.
While (not end of input)
{
Symbol = next input character
If (symbol is an operand)
Push symbol onto the stack
Else /* symbol is an operator */
{
Operand2 = pop element from the stack
Operand1 = pop element from the stack
Value = result of applying symbol to operand1 and operand2
Push the value onto the stack
}
}
Step 2: The value remaining on the stack is the result of the expression.
6. Explain the insertion and deletion operations on a singly linked list with examples.
A singly linked list is a linked list in which each node contains only one link pointing to
the next node in the list.
In a singly linked list, the first node is always pointed to by a pointer called HEAD. If the link of a
node points to NULL, that indicates the end of the list.
Algorithm for insertion:
void insert(int pos, int val)
{
int i;
node *t, *curr, *prev;
t = (node *)malloc(sizeof(node));
t->data=val;
t->link=NULL;
curr=first;
i=1;
while(curr!=NULL&&i<pos)
{
prev=curr;
curr=curr->link;
i++;
}
if(i==1)
{
t->link=first;
first=t;
}
else
{
prev->link=t;
t->link=curr;
}
}
Example:
Algorithm for deletion:
void delete(int val)
{
node *t, *curr, *prev;
curr = first;
prev = NULL;
while (curr != NULL && curr->data != val)
{
prev=curr;
curr=curr->link;
}
if(curr==NULL)
printf("\n elememt not found");
else
if(curr==first)
{
t=first;
first=first->link;
}
else
{
t=curr;
prev->link=curr->link;
}
free(t);}
Example:
Algorithm for display:
void display()
{
node *curr;
curr=first;
if(curr==NULL)
{
printf("\nlist is empty");
}
else
{
while(curr->link!=NULL)
{
printf("%d->",curr->data);
curr=curr->link;
}
printf("%d\n",curr->data);
}
}
7. Explain creation, insertion and deletion of doubly linked list with example
The Doubly linked list is a collection of nodes each of which consists of three parts namely the
data part, prev pointer and the next pointer. The data part stores the value of the element, the
prev pointer has the address of the previous node and the next pointer has the value of the next
node.
In a doubly linked list, the head always points to the first node. The prev pointer of the
first node points to NULL and the next pointer of the last node points to NULL.
Algorithm for Creation:
void create()
{
node *t;
inti,n;
printf("\nenter the no. of elements in the list");
scanf("%d",&n);
first=NULL;
for(i=1;i<=n;i++)
{
t=(node*)malloc(sizeof(node));
scanf("%d",&t->data);
t->llink=NULL;
t->rlink=NULL;
if(first==NULL)
first=last=t;
else
{
last->rlink=t;
t->llink=last;
last=t;
}
}
}
Algorithm for insertion:
void insert(intpos,intval)
{
int i;
node *t,*curr,*prev;
t=(node*)malloc(sizeof(node));
t->data=val;
t->llink=NULL;
t->rlink=NULL;
curr=first;
i=1;
while(curr!=NULL&&i<pos)
{
curr=curr->rlink;
i++;
}
if(curr==first)
{
t->rlink=first;
first->llink=t;
first=t;
}
else if(curr==NULL)
{
last->rlink=t;
t->llink=last;
last=t;
}
else
{
curr->llink->rlink=t;
t->llink=curr->llink;
t->rlink=curr;
curr->llink=t;
}
}
Example:
free(t);
}
Example:
8. Write the algorithms for push, pop and display operations on a stack implemented
using a doubly linked list.
Algorithm for push:
void push(int x)
{
node*t;
t=(node*)malloc(sizeof(node));
t->data=x;
t->llink=NULL;
t->rlink=NULL;
if(top==NULL)
top=t;
else
{
t->rlink=top;
top->llink = t;
top=t;
}
printf("\n");
printf("\n the element is pushed \n");
}
int pop()
{
node*t;
int x;
if (top==NULL)
{
printf("\n");
printf("stack empty \n");
return(-1);
}
else
{
x=top->data;
t=top;
top=top->rlink;
if(top!=NULL)
top->llink = NULL;
free(t);
return(x);
}
}
void display()
{
node*curr;
curr=top;
while(curr !=NULL)
{
printf("\n%d",curr->data);
curr=curr->rlink;
}
}
};
struct dequeue
{
struct node *front;
struct node *rear;
};
Push(X, D)
/* Insert X on the front end of deque D */
void push(int X, struct dequeue *D)
{
struct node *temp;
temp = (struct node *)malloc(sizeof(struct node));
temp->data = X;
temp->link = NULL;
if (D->front == NULL)
D->front = D->rear = temp;
else
{
temp->link = D->front;
D->front = temp;
}
}
Pop(D) :
/* Remove the front item from deque D and return it */
int pop(struct dequeue *D)
{
struct node *temp;
int item;
if (D->front == NULL)
return -1; /* deque empty */
temp = D->front;
item = temp->data;
D->front = temp->link;
if (D->front == NULL)
D->rear = NULL;
free(temp);
return(item);
}
Inject(X,D) :
/* Insert item X on the rear end of deque D */
void inject(int X, struct dequeue *D)
{
struct node *temp = (struct node *)malloc(sizeof(struct node));
temp->data = X;
temp->link = NULL;
if (D->rear == NULL)
D->front = D->rear = temp;
else
{
D->rear->link = temp;
D->rear = temp;
}
}
Eject(D) :
/* Remove the rear item from deque D and return it */
int eject(struct dequeue *D)
{
struct node *temp, *prev = NULL;
int item;
if (D->front == NULL)
return -1; /* deque empty */
temp = D->front;
while (temp->link != NULL)
{
prev = temp;
temp = temp->link;
}
item = temp->data;
if (prev == NULL)
D->front = D->rear = NULL;
else
{
prev->link = NULL;
D->rear = prev;
}
free(temp);
return(item);
}
void display()
{
int i;
if(front==-1)
printf("Queue is empty");
else
for(i=front;i<=rear;i++)
printf("%d ",q[i]);
}
11. Give the algorithm for performing polynomial addition using linked list.
while (temp1 != NULL && temp2 != NULL)
{
temp3 = (polynode *)malloc(sizeof(polynode));
temp3->link = NULL;
if (temp1->exp == temp2->exp)
{
temp3->coefft = temp1->coefft + temp2->coefft;
temp3->exp = temp1->exp;
temp1 = temp1->link;
temp2 = temp2->link;
}
else if (temp1->exp > temp2->exp)
{
temp3->coefft = temp1->coefft;
temp3->exp=temp1->exp;
temp1 = temp1->link;
}
else
{
temp3->coefft = temp2->coefft;
temp3->exp=temp2->exp;
temp2 = temp2->link;
}
if (p3==NULL)
p3=temp3;
else
{
p3->link = temp3;
p3 = p3->link;
}
}
while (temp1 != NULL)
{
temp3 = (polynode *)malloc(sizeof(polynode));
temp3->link = NULL;
temp3->coefft = temp1->coefft;
temp3->exp=temp1->exp;
temp1 = temp1->link;
if (p3==NULL)
p3=temp3;
else
{
p3->link = temp3;
p3 = p3->link;
}
}
while (temp2 != NULL)
{
temp3 = (polynode *)malloc(sizeof(polynode));
temp3->link = NULL;
temp3->coefft = temp2->coefft;
temp3->exp=temp2->exp;
temp2 = temp2->link;
if (p3==NULL)
p3=temp3;
else
{
p3->link = temp3;
p3 = p3->link;
}
}
}
UNIT-II
1. Find out the inorder, preorder, postorder traversal for the binary tree representing the
expression (a+b*c)/(d-e) with the help of procedures
Expression Tree:
[Tree: root /; the root's left child is + (left child a, right child * with children b and c);
the root's right child is - (children d and e)]
Inorder traversal
The inorder traversal of a binary tree is performed as
traverse the left subtree in inorder.
Visit the root.
Traverse the right subtree in inorder.
Recursive Routine for inorder traversal
void inorder(Tree T)
{
if (T != NULL)
{
inorder(T->left);
printelement(T->element);
inorder(T->right);
}
}
Inorder traversal of the tree: a + b * c / d - e
Preorder traversal of the tree: / + a * b c - d e
Postorder traversal of the tree: a b c * + d e - /
2. A file contains only colons, spaces, newlines, commas and digits in the following
frequencies: colon - 100, space - 605, newline - 100, comma - 705, 0 - 431, 1 - 242, 2 - 176,
3 - 59, 4 - 185, 5 - 250, 6 - 174, 7 - 199, 8 - 205, 9 - 217. Construct the Huffman code.
Explain Huffman's algorithm.
Symbol Code
Colon 01011
Space 00
New line 0100
, 110
0 100
1 1010
2 0111
3 01010
4 11100
5 1011
6 0110
7 11101
8 11110
9 11111
4. What is Binary search tree? Write an algorithm to add a node into a binary search
tree.
void insert(int x)
{
node *curr, *prev;
curr=root;
prev=NULL;
/* search for x */
while(curr!=NULL)
{
prev=curr;
if(x==curr->data)
{
printf("duplicate value");
return;
}
else if(x<curr->data)
curr=curr->lchild;
else
curr=curr->rchild;
}
/*perform insertion*/
curr=(node*)malloc(sizeof(node));
curr->data=x;
curr->lchild=curr->rchild=NULL;
if(root==NULL)
root=curr;
else if(x<prev->data)
prev->lchild=curr;
else
prev->rchild=curr;
}
[Figure: BST with root 40; 40's left child is 10 (children 5 and 30, where 30's children are
20 and 35); 40's right child is 50 (right child 80, whose left child 60 has right child 65)]
Searching for 25 follows 40 → 10 → 30 → 20. Search is finished and the element is not found.
Hence, attach 25 as the right child of 20.
Binary tree after insertion:
[Figure: the same BST with 25 inserted as the right child of 20]
5. Write an algorithm to find a node in a tree. Show the resulting binary search tree if
the elements are added into it in the following order:
50, 20, 55, 80, 53, 30, 60, 25, 5, …
void insert(int x)
{
node *curr, *prev;
curr=root;
prev=NULL;
/* search for x */
while(curr!=NULL)
{
prev=curr;
if(x==curr->data)
{
printf("duplicate value");
return;
}
else if(x<curr->data)
curr=curr->lchild;
else
curr=curr->rchild;
}
/*perform insertion*/
curr=(node*)malloc(sizeof(node));
curr->data=x;
curr->lchild=curr->rchild=NULL;
if(root==NULL)
root=curr;
else if(x<prev->data)
prev->lchild=curr;
else
prev->rchild=curr;
}
[Resulting BST: root 50; 50's left child is 20 (children 5 and 30, where 30's left child is 25);
50's right child is 55 (children 53 and 80, where 80's left child is 60)]
6. Write an algorithm to delete a node from a tree (it may contain 0, 1, or 2 children).
Case 1 - Deleting a leaf node, e.g. Delete(35):
[Figure: the leaf 35 is simply detached from its parent]
Case 2 - Deleting a node with one child, e.g. Delete(40):
The child of the deleted node has to take the position of its parent.
[Figure: 40's only child replaces 40]
Case 3 - Deleting a node with two children, e.g. Delete(30):
Take either the largest node in the left subtree (its inorder predecessor) or the smallest node
in the right subtree (its inorder successor), replace the node to be deleted with this node,
and then delete the inorder predecessor or successor.
[Figure: 30 is replaced by its inorder predecessor/successor, whose old node is removed]
Both the largest element in the left subtree and the smallest element in the right subtree can
have degree at most one.
7. Explain the steps involved in converting the general tree to a binary tree. Convert the
following general tree to a binary tree.
The steps to convert a general tree to a binary tree are:
1. The root of the general tree becomes the root of the binary tree.
2. The leftmost child of each node becomes the left child of that node in the binary tree.
3. The next sibling of each node becomes the right child of that node in the binary tree.
[Figure: the given general tree (nodes b, c, d, e, f, g, h, i) and the binary tree obtained
by this left-child/right-sibling conversion]
8. Construct a binary tree given the preorder and in order sequences as below
preorder: A B D G C E H I F, Inorder : D G B A H E I C F
[Constructed tree: root A; A's left child is B (B's left child is D, whose right child is G);
A's right child is C (C's left child is E with children H and I, and C's right child is F)]
9. Prove “For any non-empty binary tree T, if n0 is the number of leaf nodes and n2 is the
number of nodes of degree 2, then n0= n2+1”
Proof
Let n be the total number of nodes in the binary tree and let n1 be the number of nodes of
degree 1. Then
n = n0 + n1 + n2 -------- (1)
Every node except the root has exactly one branch coming into it. If B is the number of
branches in the binary tree, then
n = B + 1 -------- (2)
All branches come out of nodes of degree 1 or degree 2: nodes of degree 0 contribute no
branches, nodes of degree 1 contribute one branch each, and nodes of degree 2 contribute
two branches each. Hence
B = n1 + 2n2 -------- (3)
Substituting (3) into (2) gives n = n1 + 2n2 + 1. Equating this with (1):
n0 + n1 + n2 = n1 + 2n2 + 1, so n0 = n2 + 1.
Hence proved.
10.What do you mean by a threaded binary tree? Write the algorithm for in order
traversal of a threaded binary tree. Trace the algorithm with an example.
In a binary tree, all the leaf nodes are having the left child and right child fields to be NULL.
Here more memory space is wasted to store the NULL values. These NULL pointers can be
utilized to store useful information. The NULL left child is used to point the in order
predecessor and the NULL right child is used to store the in order successor. This is called as
in order threaded binary tree.
Structure of a node:
LTHREAD LLINK DATA RLINK RTHREAD
if LTHREAD= 0, LLINK points to the left child;
if LTHREAD = 1, LLINK points to the in-order predecessor;
if RTHREAD = 0, RLINK points to the right child;
if RTHREAD= 1, RLINK points to the in-order successor.
HEAD
Algorithm
Step-1: For the current node check whether it has a left child which is not there in the visited list.
If it has then go to step-2 or else step-3.
Step-2: Put that left child in the list of visited nodes and make it your current node in
consideration. Go to step-6.
Step-3: For the current node check whether it has a right child. If it has then go to step-4 else go
to step-5
Step-4: Make that right child as your current node in consideration. Go to step-6.
Step-5: Check for the threaded node and if it is there, make it your current node.
Step-6: Go to step-1 if all the nodes are not over otherwise quit
In order Traversal for the above threaded binary tree: D B A E G C H F J
11. What is the representation of binary tree in memory? Explain in detail. / Explain the
B-tree with insertion and deletion operations.
Representation of Binary tree in memory:
1. Array Representation
2. Linked List Representation
Array Representation:
o The root node is stored at location 0.
o Left child of the node at location i is stored at location 2i+1
o Right child of the node at location i is stored at location 2i+2
If the child is in the ith location, its parent will be in the (i-1)/2th location.
Linked List Representation: each node contains three fields - a pointer to the left child,
the data, and a pointer to the right child.
12.Define expression tree. How to construct an expression tree for the post fix expression?
/ Write steps involved in constructing expression tree.
Expression tree:
An expression tree is built up from the simple operands and operators of an(arithmetic or logical)
expression by placing the simple operands as the leaves of a binary tree and the operators as the
interior nodes.
Example:
(a+b*c)/(d-e)
Expression Tree:
[Tree: root /; the root's left child is + (left child a, right child * with children b and c);
the root's right child is - (children d and e)]
Inorder traversal
The inorder traversal of a binary tree is performed as
traverse the left subtree in inorder.
Visit the root.
Traverse the right subtree in inorder.
Recursive Routine for inorder traversal
void inorder(Tree T)
{
if (T != NULL)
{
inorder(T->left);
printelement(T->element);
inorder(T->right);
}
}
In order traversal for the given expression tree: a + b * c / d - e
Preorder traversal
The preorder traversal of a binary tree is performed as
Visit the root.
Traverse the left subtree in preorder.
Traverse the right subtree in preorder.
Recursive Routine for preorder traversal
void preorder(Tree T)
{
if (T != NULL)
{
printelement(T->element);
preorder(T->left);
preorder(T->right);
}
}
Pre order traversal for the given expression tree:/ + a * b c - d e
Postorder traversal
The postorder traversal of a binary tree is performed as
Traverse the left subtree in postorder.
Traverse the right subtree in postorder.
Visit the root.
Recursive Routine for postorder traversal
void postorder(Tree T)
{
if (T != NULL)
{
postorder(T->left);
postorder(T->right);
printelement(T->element);
}
}
Post order traversal for the given expression tree: a b c * + d e - /
UNIT-III
1. Construct a minimum spanning tree using Kruskal’s algorithm with your own
example
Kruskal's algorithm to find the minimum cost spanning tree uses the greedy approach. This
algorithm treats the graph as a forest and every node it has as an individual tree. A tree connects
to another one if and only if it has the least cost among all available options and does not violate
MST properties.
To understand Kruskal's algorithm let us consider the following example −
In case of parallel edges, keep the one which has the least cost associated and remove all others.
The least cost is 2 and edges involved are B,D and D,T. We add them. Adding them does not
violate spanning tree properties, so we continue to our next edge selection.
Next cost is 3, and associated edges are A,C and C,D. We add them again −
Next cost in the table is 4, and we observe that adding it will create a circuit in the graph. −
We ignore it. In the process we shall ignore/avoid all edges that create a circuit.
We observe that edges with cost 5 and 6 also create circuits. We ignore them and move on.
Now we are left with only one node to be added. Between the two least cost edges available 7
and 8, we shall add the edge with cost 7.
By adding edge S,A we have included all the nodes of the graph and we now have minimum cost
spanning tree.
2. How will find the shortest path between two given vertices using Dijikstra’s
algorithm? Explain the pseudo code with an example
[Figure: weighted graph on vertices a, b, c, d, e; the edge weights include a-b = 4,
a-c = 3 and a-d = 2]
Dijkstra’s algorithm finds the shortest path from a source vertex(v) to all the remaining
vertices.
Steps:
1. Initialize s[i] = false and dist[i] = length[v][i] for all i = 0 to n-1.
2. Assign s[v] = true and dist[v] = 0.
3. Choose a vertex u with minimum dist and s[u] = false.
4. Put s[u] = true.
5. Modify dist[w] for all vertices with s[w] = false:
dist[w] = min { dist[w], dist[u] + length[u][w] }
6. Repeat steps 3 to 5 until the shortest path is found for all the remaining vertices.
Ans:
a-b = 4
a-c = 3
a-d = 2
a-e = 6
3. Discuss about the algorithm and pseudocode to find minimum spanning tree using
Prim’s algorithm.
Prim's algorithm to find minimum cost spanning tree (as Kruskal's algorithm) uses the
greedy approach. Prim's algorithm shares a similarity with the shortest path first algorithms.
Prim's algorithm, in contrast with Kruskal's algorithm, treats the nodes as a single tree and
keeps on adding new nodes to the spanning tree from the given graph.
To contrast with Kruskal's algorithm and to understand Prim's algorithm better, we shall use
the same example −
Remove all loops and parallel edges from the given graph. In case of parallel edges, keep the one
which has the least cost associated and remove all others.
Now, the tree S-7-A is treated as one node and we check for all edges going out from it. We
select the one which has the lowest cost and include it in the tree.
After this step, S-7-A-3-C tree is formed. Now we'll again treat it as a node and will check all the
edges again. However, we will choose only the least cost edge. In this case, C-3-D is the new
edge, which is less than other edges' cost 8, 6, 4, etc.
After adding node D to the spanning tree, we now have two edges going out of it having the
same cost, i.e. D-2-T and D-2-B. Thus, we can add either one. But the next step will again yield
edge 2 as the least cost. Hence, we are showing a spanning tree with both edges included.
4. Write Floyd’s algorithm for the all-pairs shortest path problem and explain with an
example
[Figure: weighted directed graph on five vertices used in the example]
for k = 1 to n do
for i = 1 to n do
for j = 1 to n do
Dk[i,j] = min { Dk-1[i,j], Dk-1[i,k] + Dk-1[k,j] }
return D(n)
Ans:
    1  2  3  4  5
1   0  5  6  9  8
2   5  0  5  4  3
3   6  5  0  8  2
4   9  4  8  0  6
5   8  3  2  6  0
MST solves the problem of finding a minimum total weight subset of edges that spans all the
vertices. Another common graph problem is to find the shortest paths to all reachable vertices
from a given source. We have already seen how to solve this problem in the case where all the
edges have the same weight (in which case the shortest path is simply the minimum number of
edges) using BFS. Now we will examine two algorithms for finding single source shortest paths
for directed graphs when the edges have different weights - Bellman-Ford and Dijkstra's
algorithms. Several related problems are:
Single destination shortest path - find the transpose graph (i.e. reverse the edge
directions) and use single source shortest path
Single pair shortest path (i.e. a specific destination) - asymptotically this problem can be
solved no faster than simply using single source shortest path algorithms to all the
vertices
All pair shortest paths - one technique is to use single source shortest path for each
vertex, but later we will see a more efficient algorithm
Single Source Shortest Path
Problem
Given a directed graph G(V,E) with weighted edges w(u,v), define the weight of a path
p = <v0, v1, ..., vk> as w(p) = Σ (i = 1 to k) w(v(i-1), v(i)).
For a given source vertex s, find the minimum weight paths to every vertex reachable from s,
denoted δ(s,v) = min { w(p) : p is a path from s to v }.
Bellman-Ford Algorithm
The Bellman-Ford algorithm uses relaxation to find single source shortest paths on directed
graphs that may contain negative weight edges. The algorithm will also detect if there are any
negative weight cycles (such that there is no solution).
BELLMAN-FORD(G,w,s)
INITIALIZE-SINGLE-SOURCE(G,s)
for i = 1 to |G.V|-1
for each edge (u,v) ∈ G.E
RELAX(u,v,w)
for each edge (u,v) ∈ G.E
if v.d > u.d + w(u,v)
return FALSE
return TRUE
INITIALIZE-SINGLE-SOURCE(G,s)
for each vertex v ∈ G.V
v.d = ∞
v.pi = NIL
s.d = 0
Using vertex 5 as the source (setting its distance to 0), we initialize all the other distances to ∞.
Iteration 1: Edges (u5,u2) and (u5,u4) relax updating the distances to 2 and 4
Iteration 2: Edges (u2,u1), (u4,u2) and (u4,u3) relax updating the distances to 1, 2, and 4
respectively. Note edge (u4,u2) finds a shorter path to vertex 2 by going through vertex 4
Iteration 3: Edge (u2,u1) relaxes (since a shorter path to vertex 2 was found in the previous
iteration) updating the distance to 1
Negative cycle checks: We now check the relaxation condition one additional time for each edge.
If any of the checks pass then there exists a negative weight cycle in the graph.
v3.d>u1.d + w(1,3) ⇒ 4 ≯ 6 + 6 = 12 ✓
v4.d>u1.d + w(1,4) ⇒ 2 ≯ 6 + 3 = 9 ✓
v1.d>u2.d + w(2,1) ⇒ 6 ≯ 3 + 3 = 6 ✓
v4.d>u3.d + w(3,4) ⇒ 2 ≯ 3 + 2 = 5 ✓
v2.d>u4.d + w(4,2) ⇒ 3 ≯ 2 + 1 = 3 ✓
v3.d>u4.d + w(4,3) ⇒ 3 ≯ 2 + 1 = 3 ✓
v2.d>u5.d + w(5,2) ⇒ 3 ≯ 0 + 4 = 4 ✓
v4.d>u5.d + w(5,4) ⇒ 2 ≯ 0 + 2 = 2 ✓
Note that for the edges on the shortest paths the relaxation criteria gives equalities.
Additionally, the path to any reachable vertex can be found by starting at the vertex and
following the π's back to the source. For example, starting at vertex 1, u1.π = 2, u2.π = 4, u4.π = 5
⇒ the shortest path to vertex 1 is {5,4,2,1}
7. Describe in detail about depth first and breadth first traversals with appropriate
example
This is a very different approach for traversing the graph nodes. The aim of the BFS
algorithm is to traverse the graph as close as possible to the root node, and a queue is used
here. If we do the breadth first traversal of the above graph and print the visited nodes as
the output, it will print "A B C D E F G". BFS visits the nodes level by level: it starts with
the root node A, then moves to the next level containing B, C and D, and then to the last
level containing E, F and G.
Breadth First Traversal:
1. Visit vertex v.
2. Visit all the unvisited vertices that are adjacent to v.
3. Unvisited vertices that are adjacent to the newly visited vertices are visited.
Algorithmic Steps
Step 1: Insert the root node into the queue.
Step 2: Loop until the queue is empty.
Step 3: Remove a node from the queue.
Step 4: If the removed node has unvisited child nodes, mark them as visited and insert them into
the queue.
Algorithm:
bfs ( )
{
mark v visited;
enqueue (v);
while ( not is_empty (Q) )
{
x = front (Q);
dequeue (Q);
for each y adjacent to x if y unvisited {
mark y visited;
enqueue (y);
insert ( (x, y) in T );
}
}
}
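The bfs() pseudocode can be made concrete in Java. The adjacency list below is an assumption chosen to reproduce the quoted traversal “A B C D E F G”, since the original figure is not reproduced in these notes.

```java
import java.util.*;

public class BfsDemo {
    // Assumed graph matching the traversal above: A -> B, C, D; B -> E, F; D -> G.
    static Map<String, List<String>> adj = Map.of(
        "A", List.of("B", "C", "D"),
        "B", List.of("E", "F"),
        "C", List.of(),
        "D", List.of("G"),
        "E", List.of(), "F", List.of(), "G", List.of());

    static List<String> bfs(String root) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        visited.add(root);                 // mark v visited
        queue.add(root);                   // enqueue(v)
        while (!queue.isEmpty()) {
            String x = queue.remove();     // x = front(Q); dequeue(Q)
            order.add(x);
            for (String y : adj.get(x))    // for each y adjacent to x
                if (visited.add(y))        // if y unvisited, mark it visited
                    queue.add(y);          // enqueue(y)
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println(bfs("A"));      // [A, B, C, D, E, F, G]
    }
}
```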
The aim of DFS traversal is to traverse the graph in such a way that it tries to go as far as
possible from the root node. A stack is used in the implementation of depth first search. If we
do the depth first traversal of the above graph and print the visited nodes, the output will be
“A B E F C D”. DFS visits the root node and then follows child nodes until it reaches the leaf
nodes, i.e. the E and F nodes, then moves back up to the parent nodes and continues.
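A recursive Java sketch of DFS on the same assumed graph as in the BFS example; the recursion stack plays the role of the explicit stack. On this assumed graph the preorder also reaches G, so the output is “A B E F C D G” rather than the six-node order quoted from the original figure.

```java
import java.util.*;

public class DfsDemo {
    // Same assumed graph as in the BFS sketch: A -> B, C, D; B -> E, F; D -> G.
    static Map<String, List<String>> adj = Map.of(
        "A", List.of("B", "C", "D"),
        "B", List.of("E", "F"),
        "C", List.of(),
        "D", List.of("G"),
        "E", List.of(), "F", List.of(), "G", List.of());

    static void dfs(String v, Set<String> visited, List<String> order) {
        visited.add(v);
        order.add(v);                       // visit v, then go as deep as possible
        for (String y : adj.get(v))
            if (!visited.contains(y))
                dfs(y, visited, order);     // recursion acts as the stack
    }

    public static void main(String[] args) {
        List<String> order = new ArrayList<>();
        dfs("A", new HashSet<>(), order);
        System.out.println(order);          // [A, B, E, F, C, D, G]
    }
}
```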
UNIT-IV
1. Discuss briefly the sequence of steps in designing and analyzing an algorithm.
An algorithm is a finite set of steps for solving a problem by performing calculation, data
processing, or automated reasoning tasks. An algorithm is an effective method that can be
expressed within a finite amount of time and space. An algorithm is the best way to represent
the solution of a particular problem in a very simple and efficient way. If we have an algorithm
for a specific problem, then we can implement it in any programming language, meaning that
the algorithm is independent of any programming language.
Algorithm Design
The important aspect of algorithm design is creating an efficient algorithm that solves a
problem using minimum time and space. To solve a problem, different approaches can be
followed. Some of them can be efficient with respect to time consumption, whereas other
approaches may be memory efficient. However, one has to keep in mind that time consumption
and memory usage often cannot be optimized simultaneously. If we require an algorithm to run
in less time, we may have to invest in more memory, and if we require an algorithm to run with
less memory, we usually need more time.
Problem Development Steps
The following steps are involved in solving computational problems.
Problem definition
Development of a model
Specification of an Algorithm
Designing an Algorithm
Checking the correctness of an Algorithm
Analysis of an Algorithm
Implementation of an Algorithm
Program testing
Documentation
Characteristics of Algorithms
The main characteristics of algorithms are as follows −
Algorithms must have a unique name
Algorithms should have explicitly defined set of inputs and outputs
Algorithms are well-ordered with unambiguous operations
Algorithms halt in a finite amount of time. An algorithm must not run forever; it must
terminate at some point.
In theoretical analysis of algorithms, it is common to estimate their complexity in the asymptotic
sense, i.e., to estimate the complexity function for arbitrarily large input. The term "analysis of
algorithms" was coined by Donald Knuth.
Algorithm analysis is an important part of computational complexity theory, which provides
theoretical estimation for the required resources of an algorithm to solve a specific computational
problem. Most algorithms are designed to work with inputs of arbitrary length. Analysis of
algorithms is the determination of the amount of time and space resources required to execute it.
Usually, the efficiency or running time of an algorithm is stated as a function relating the input
length to the number of steps, known as time complexity, or volume of memory, known as
space complexity.
The Need for Analysis
By considering an algorithm for a specific problem, we can begin to develop pattern recognition
so that similar types of problems can be solved by the help of this algorithm.
Algorithms are often quite different from one another, though the objective of these algorithms
are the same. For example, we know that a set of numbers can be sorted using different
algorithms. Number of comparisons performed by one algorithm may vary with others for the
same input. Hence, time complexity of those algorithms may differ. At the same time, we need to
calculate the memory space required by each algorithm.
Analysis of algorithm is the process of analyzing the problem-solving capability of the algorithm
in terms of the time and size required (the size of memory for storage while implementation).
However, the main concern of analysis of algorithms is the required time or performance.
Generally, we perform the following types of analysis −
Worst-case − The maximum number of steps taken on any instance of size a.
Best-case − The minimum number of steps taken on any instance of size a.
Average case − An average number of steps taken on any instance of size a.
Amortized − A sequence of operations applied to the input of size a averaged over time.
To solve a problem, we need to consider time as well as space complexity as the program may
run on a system where memory is limited but adequate space is available or may be vice-versa.
In this context, if we compare bubble sort and merge sort. Bubble sort does not require
additional memory, but merge sort requires additional space. Though time complexity of bubble
sort is higher compared to merge sort, we may need to apply bubble sort if the program needs to
run in an environment, where memory is very limited
ω – Notation
We use ω-notation to denote a lower bound that is not asymptotically tight. Formally, we
define ω(g(n)) (little-omega of g of n) as the set of functions f(n) such that for any positive
constant c > 0 there exists a value n0 > 0 such that 0 ≤ c.g(n) < f(n) for all n ≥ n0.
For example, n²/2 = ω(n), but n²/2 ≠ ω(n²). The relation f(n) = ω(g(n)) implies that the
following limit exists:
lim n→∞ (f(n)/g(n)) = ∞
That is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.
Example
Let us consider the same function, f(n) = 4.n³ + 10.n² + 5.n + 1.
Considering g(n) = n²,
lim n→∞ ((4.n³ + 10.n² + 5.n + 1)/n²) = ∞
Hence, the complexity of f(n) can be represented as ω(g(n)), i.e. ω(n²).
Apriori and Aposteriori Analysis
Apriori analysis means the analysis is performed prior to running the algorithm on a specific
system. In this stage a cost function is defined using some theoretical model; hence, we
determine the time and space complexity of an algorithm by just looking at the algorithm,
rather than by running it on a particular system with its own memory, processor, and compiler.
Aposteriori analysis of an algorithm means we perform the analysis of an algorithm only after
running it on a system. It directly depends on the system and changes from system to system.
In industry, we usually cannot perform aposteriori analysis, as the software is generally made
for anonymous users who run it on systems different from those present in the industry.
This is the reason we use asymptotic notations in apriori analysis to determine time and space
complexity: actual running times change from computer to computer, but asymptotically they
are the same.
Example
In Merge-Sort, in every iteration the array is divided into two sub-arrays until each sub-array
contains only one element; when the sub-arrays cannot be divided further, merge operations are
performed. Quick-Sort, shown below, works differently. Note that to sort the entire array, the
initial call should be Quick-Sort (A, 1, length[A]).
As a first step, Quick-Sort chooses one of the items in the array to be sorted as the pivot. Then
the array is partitioned on either side of the pivot: elements that are less than or equal to the
pivot move towards the left, while elements that are greater than or equal to the pivot move
towards the right.
Partitioning the Array
Partitioning procedure rearranges the sub-arrays in-place.
Function: Partition (A, p, r)
x ← A[p]
i ← p - 1
j ← r + 1
while TRUE do
    repeat j ← j - 1
    until A[j] ≤ x
    repeat i ← i + 1
    until A[i] ≥ x
    if i < j then
        exchange A[i] ↔ A[j]
    else
        return j
Analysis
The worst case complexity of the Quick-Sort algorithm is O(n²). However, in the average case
it generally sorts the input in O(n log n) time.
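The Partition procedure above is Hoare-style (pivot x = A[p], two indices moving inward); a runnable Java sketch of it and the surrounding Quick-Sort follows. Note that with this partition the recursive calls are on A[p..q] and A[q+1..r], where q is the returned index.

```java
import java.util.Arrays;

public class QuickSort {
    // Hoare partition, matching Partition(A, p, r) above: pivot x = A[p].
    static int partition(int[] a, int p, int r) {
        int x = a[p], i = p - 1, j = r + 1;
        while (true) {
            do { j--; } while (a[j] > x);   // repeat j <- j - 1 until A[j] <= x
            do { i++; } while (a[i] < x);   // repeat i <- i + 1 until A[i] >= x
            if (i < j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
            else return j;
        }
    }

    static void sort(int[] a, int p, int r) {
        if (p < r) {
            int q = partition(a, p, r);
            sort(a, p, q);        // note: q, not q - 1, with the Hoare partition
            sort(a, q + 1, r);
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2};
        sort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a));  // [1, 2, 3, 5, 8, 9]
    }
}
```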
Areas of Application
Greedy approach is used to solve many problems, such as
Finding the shortest path between two vertices using Dijkstra’s algorithm.
Finding the minimal spanning tree in a graph using Prim’s /Kruskal’s algorithm, etc.
Where the Greedy Approach Fails
In many problems, a greedy algorithm fails to find an optimal solution; moreover, it may even
produce a very poor solution. Problems like Travelling Salesman and 0/1 Knapsack cannot be
solved optimally using this approach.
In this version of the Knapsack problem, items can be broken into smaller pieces, so the thief
may take only a fraction xi of the ith item, where
0 ≤ xi ≤ 1
The ith item contributes the weight xi.wi to the total weight in the knapsack and the profit xi.pi
to the total profit.
Algorithm: Greedy-Fractional-Knapsack (w[1..n], p[1..n], W)
(the items are assumed to be already sorted in decreasing order of pi/wi)
for i = 1 to n
do x[i] = 0
weight = 0
for i = 1 to n
if weight + w[i] ≤ W then
x[i] = 1
weight = weight + w[i]
else
x[i] = (W - weight) / w[i]
weight = W
break
return x
Solution
After sorting all the items according to pi/wi, first all of B is chosen, as the weight of B is less
than the capacity of the knapsack. Next, item A is chosen, as the available capacity of the
knapsack is greater than the weight of A. Now, C is chosen as the next item. However, the
whole item cannot be chosen, as the remaining capacity of the knapsack is less than the weight
of C.
Hence, a fraction of C (i.e. (60 − 50)/20) is chosen.
Now the capacity of the knapsack is equal to the total weight of the selected items. Hence, no
more items can be selected.
The total weight of the selected items is 10 + 40 + 20 × (10/20) = 60
And the total profit is 100 + 280 + 120 × (10/20) = 380 + 60 = 440
This is the optimal solution; we cannot gain more profit by selecting any different combination
of items.
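A Java sketch of Greedy-Fractional-Knapsack, including the sort by pi/wi that the worked solution assumes. The item data is taken from the example above (B: w=10, p=100; A: w=40, p=280; C: w=20, p=120; W=60).

```java
import java.util.Arrays;

public class FractionalKnapsack {
    // Greedy fractional knapsack: sort by profit/weight ratio, then fill greedily.
    static double maxProfit(double[] w, double[] p, double W) {
        Integer[] idx = new Integer[w.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // decreasing profit per unit weight
        Arrays.sort(idx, (a, b) -> Double.compare(p[b] / w[b], p[a] / w[a]));
        double weight = 0, profit = 0;
        for (int i : idx) {
            if (weight + w[i] <= W) {               // take the whole item
                weight += w[i];
                profit += p[i];
            } else {                                 // take a fraction of it and stop
                profit += p[i] * (W - weight) / w[i];
                break;
            }
        }
        return profit;
    }

    public static void main(String[] args) {
        double[] w = {10, 40, 20};       // items B, A, C from the example
        double[] p = {100, 280, 120};
        System.out.println(maxProfit(w, p, 60));   // 440.0
    }
}
```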
frequently used data in the root and closer to the root element, while placing the least frequently
used data near leaves and in leaves.
Here, the Optimal Binary Search Tree algorithm is presented. First, we build a BST from a set
of n distinct keys <k1, k2, k3, ..., kn>. Here we assume the probability of accessing a key ki is
pi. Some dummy keys (d0, d1, d2, ..., dn) are added, as some searches may be performed for
values which are not present in the key set K. We assume that for each dummy key di the
probability of access is qi.
Optimal-Binary-Search-Tree(p, q, n)
e[1…n + 1, 0…n],
w[1…n + 1, 0…n],
root[1…n + 1, 0…n]
for i = 1 to n + 1 do
    e[i, i - 1] := qi-1
    w[i, i - 1] := qi-1
for l = 1 to n do
    for i = 1 to n – l + 1 do
        j := i + l – 1
        e[i, j] := ∞
        w[i, j] := w[i, j - 1] + pj + qj
        for r = i to j do
            t := e[i, r - 1] + e[r + 1, j] + w[i, j]
            if t < e[i, j]
                e[i, j] := t
                root[i, j] := r
return e and root
Analysis
The algorithm requires O (n3) time, since three nested for loops are used. Each of these loops
takes on at most n values.
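The pseudocode translates directly to Java. The probabilities in main are an assumed five-key instance (the classic textbook one, since the tables referenced in these notes are not reproduced); its optimal expected cost is 2.75.

```java
public class OptimalBST {
    // DP construction of an optimal BST, following Optimal-Binary-Search-Tree(p, q, n).
    // p[1..n]: key probabilities; q[0..n]: dummy-key probabilities.
    static double[][] e, w;
    static int[][] root;

    static double optimalCost(double[] p, double[] q, int n) {
        e = new double[n + 2][n + 1];
        w = new double[n + 2][n + 1];
        root = new int[n + 2][n + 1];
        for (int i = 1; i <= n + 1; i++) {
            e[i][i - 1] = q[i - 1];          // empty subtree: only the dummy key
            w[i][i - 1] = q[i - 1];
        }
        for (int l = 1; l <= n; l++) {       // subproblem sizes 1..n
            for (int i = 1; i <= n - l + 1; i++) {
                int j = i + l - 1;
                e[i][j] = Double.POSITIVE_INFINITY;
                w[i][j] = w[i][j - 1] + p[j] + q[j];
                for (int r = i; r <= j; r++) {   // try each key as the subtree root
                    double t = e[i][r - 1] + e[r + 1][j] + w[i][j];
                    if (t < e[i][j]) { e[i][j] = t; root[i][j] = r; }
                }
            }
        }
        return e[1][n];                      // expected search cost of the optimal tree
    }

    public static void main(String[] args) {
        // Assumed 5-key instance; p[0] is unused padding so indices start at 1.
        double[] p = {0, 0.15, 0.10, 0.05, 0.10, 0.20};
        double[] q = {0.05, 0.10, 0.05, 0.05, 0.05, 0.10};
        System.out.println(optimalCost(p, q, 5));   // optimal cost 2.75
    }
}
```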
Example
Considering the following tree, the cost is 2.80, though this is not an optimal result.
To get an optimal solution, using the algorithm discussed in this chapter, the following tables are
generated.
In the following tables, column index is i and row index is j.
11. Explain in detail about Warshall's Algorithm for Finding Transitive Closure.
The Floyd-Warshall algorithm solves the All-Pairs Shortest Path problem: finding the shortest
distances between every pair of vertices in a given edge-weighted directed graph. Warshall's
original algorithm is the same triple nested loop specialized to transitive closure: instead of
relaxing distances, it records with boolean values whether vertex j is reachable from vertex i.
Example:
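Since the example figure is not reproduced in these notes, the sketch below runs Floyd-Warshall on a small assumed 4-vertex graph.

```java
import java.util.Arrays;

public class FloydWarshall {
    static final int INF = 1_000_000;   // "no edge" sentinel, large but overflow-safe

    // dist[i][j] starts as the edge weight (0 on the diagonal, INF if no edge)
    // and ends as the shortest i -> j distance.
    static void floydWarshall(int[][] dist) {
        int n = dist.length;
        for (int k = 0; k < n; k++)              // allow vertex k as an intermediate
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (dist[i][k] + dist[k][j] < dist[i][j])
                        dist[i][j] = dist[i][k] + dist[k][j];
    }

    public static void main(String[] args) {
        int[][] d = {
            {0,   5,  INF, 10},
            {INF, 0,   3, INF},
            {INF, INF, 0,   1},
            {INF, INF, INF, 0}
        };
        floydWarshall(d);
        System.out.println(Arrays.deepToString(d));
        // d[0][3] is now 9: the path 0 -> 1 -> 2 -> 3 (5 + 3 + 1)
    }
}
```

Replacing the relaxation with `reach[i][j] |= reach[i][k] && reach[k][j]` over boolean arrays yields Warshall's transitive-closure algorithm.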
UNIT-V
1. Discuss in detail about Backtracking with N-Queens Problem
The n-queens puzzle is the problem of placing n queens on an n×n chessboard such that no two
queens attack each other. Given an integer n, print all distinct solutions to the n-queens puzzle.
Each solution contains distinct board configurations of the n-queens’ placement, where the
solutions are a permutation of [1, 2, 3, ..., n]: the number in the ith place denotes that the queen
of the ith column is placed in the row with that number. For example, the figure below
represents a chessboard
[3 1 4 2].
Algorithm
1) Start in the leftmost column.
2) If all queens are placed, return true.
3) Try all rows in the current column. Do the following for every tried row:
a) If the queen can be placed safely in this row, then mark this [row,
column] as part of the solution and recursively check if placing the
queen here leads to a solution.
b) If placing the queen in [row, column] leads to a solution, then return
true.
c) If placing the queen does not lead to a solution, then unmark this
[row, column] (backtrack) and try the next row.
4) If all rows have been tried and nothing worked, return false to trigger
backtracking in the previous column.
Input:
The first line of input contains an integer T denoting the no of test cases. Then T test cases
follow. Each test case contains an integer n denoting the size of the chessboard.
Output:
For each test case, output your solutions on one line where each solution is enclosed in square
brackets '[', ']' separated by a space . The solutions are permutations of {1, 2, 3 …, n} in
increasing order where the number in the ith place denotes the ith-column queen is placed in the
row with that number, if no solution exists print -1.
Constraints:
1<=T<=10
1<=n<=10
Example:
Input
2
1
4
Output:
[1 ]
[2 4 1 3 ] [3 1 4 2 ]
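The steps above can be sketched as a short Java program; for n = 4 it produces exactly the two solutions shown in the example output.

```java
import java.util.*;

public class NQueens {
    // Backtracking over columns: pos[c] = row of the queen in column c (1-based rows).
    static List<int[]> solve(int n) {
        List<int[]> solutions = new ArrayList<>();
        place(new int[n], 0, n, solutions);
        return solutions;
    }

    static void place(int[] pos, int col, int n, List<int[]> out) {
        if (col == n) { out.add(pos.clone()); return; }   // all queens placed
        for (int row = 1; row <= n; row++) {              // try all rows in this column
            if (safe(pos, col, row)) {
                pos[col] = row;                           // place queen, then recurse
                place(pos, col + 1, n, out);
                pos[col] = 0;                             // backtrack
            }
        }
    }

    static boolean safe(int[] pos, int col, int row) {
        for (int c = 0; c < col; c++)
            if (pos[c] == row || Math.abs(pos[c] - row) == col - c)
                return false;                             // same row or same diagonal
        return true;
    }

    public static void main(String[] args) {
        for (int[] s : solve(4))
            System.out.println(Arrays.toString(s));  // [2, 4, 1, 3] then [3, 1, 4, 2]
    }
}
```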
of inputs for which the answer is Yes. Most of the algorithms discussed in the previous chapters
are polynomial time algorithms.
For input size n, if the worst-case time complexity of an algorithm is O(n^k), where k is a
constant, the algorithm is a polynomial time algorithm.
Algorithms such as Matrix Chain Multiplication, Single Source Shortest Path, All Pairs Shortest
Path, Minimum Spanning Tree, etc. run in polynomial time. However, there are many problems,
such as travelling salesperson, optimal graph coloring, Hamiltonian cycles, finding the longest
path in a graph, and satisfying a Boolean formula, for which no polynomial time algorithm is
known. These problems belong to an interesting class of problems, called the NP-Complete
problems, whose status is unknown.
In this context, we can categorize the problems as follows −
P-Class
The class P consists of those problems that are solvable in polynomial time, i.e. these problems
can be solved in time O(n^k) in the worst case, where k is a constant.
These problems are called tractable, while others are called intractable or superpolynomial.
Formally, an algorithm is a polynomial time algorithm if there exists a polynomial p(n) such
that the algorithm can solve any instance of size n in time O(p(n)).
Problems requiring Ω(n^50) time to solve are essentially intractable for large n. Most known
polynomial time algorithms run in time O(n^k) for a fairly low value of k.
An advantage of considering the class of polynomial-time algorithms is that all reasonable
deterministic single-processor models of computation can be simulated on each other with at
most a polynomial slow-down.
NP-Class
The class NP consists of those problems that are verifiable in polynomial time. NP is the class
of decision problems for which it is easy to check the correctness of a claimed answer, with the
aid of a little extra information. Hence, we aren’t asking for a way to find a solution, but only to
verify that an alleged solution really is correct.
Every problem in this class can be solved in exponential time using exhaustive search.
P versus NP
Every decision problem that is solvable by a deterministic polynomial time algorithm is also
solvable by a polynomial time non-deterministic algorithm.
All problems in P can be solved with polynomial time algorithms, whereas all problems in NP -
P are intractable.
It is not known whether P = NP. However, many problems are known in NP with the property
that if they belong to P, then it can be proved that P = NP.
If P ≠ NP, there are problems in NP that are neither in P nor in NP-Complete.
The problem belongs to class P if it’s easy to find a solution for the problem. The problem
belongs to NP, if it’s easy to check a solution that may have been very tedious to find.
If a polynomial time algorithm exists for any of these problems, all problems in NP would be
polynomial time solvable. These problems are called NP-complete. The phenomenon of NP-
completeness is important for both theoretical and practical reasons.
Amortized Analysis
Amortized analysis is generally used for algorithms in which a sequence of similar operations
is performed.
Amortized analysis provides a bound on the actual cost of the entire sequence, instead of
bounding the cost of each operation in the sequence separately.
Amortized analysis differs from average-case analysis: probability is not involved in
amortized analysis. Amortized analysis guarantees the average performance of each operation
in the worst case.
It is not just a tool for analysis; it is a way of thinking about design, since design and analysis
are closely related.
Aggregate Method
The aggregate method gives a global view of a problem. In this method, if n operations take
worst-case time T(n) in total, then the amortized cost of each operation is T(n)/n. Though
different operations may take different amounts of time, in this method the varying costs are
averaged out.
Accounting Method
In this method, different charges are assigned to different operations according to their actual
cost. If the amortized cost of an operation exceeds its actual cost, the difference is assigned to
the object as credit. This credit helps to pay for later operations for which the amortized cost is
less than the actual cost.
If the actual cost of the ith operation is ci and its amortized cost is ĉi, then
∑(i=1 to n) ĉi ≥ ∑(i=1 to n) ci
Potential Method
This method represents the prepaid work as potential energy, instead of considering prepaid
work as credit. This energy can be released to pay for future operations.
Suppose we perform n operations starting with an initial data structure D0. Let ci be the actual
cost of the ith operation and Di the data structure after the ith operation. The potential function
Φ maps each Di to a real number Φ(Di), the associated potential of Di. The amortized cost ĉi is
defined by
ĉi = ci + Φ(Di) − Φ(Di−1)
Hence, the total amortized cost is
∑(i=1 to n) ĉi = ∑(i=1 to n) (ci + Φ(Di) − Φ(Di−1)) = ∑(i=1 to n) ci + Φ(Dn) − Φ(D0)
Dynamic Table
If the allocated space for the table is not enough, we must copy the table into a larger table.
Similarly, if a large number of members are erased from the table, it is a good idea to reallocate
the table with a smaller size.
Using amortized analysis, we can show that the amortized cost of insertion and deletion is
constant and unused space in a dynamic table never exceeds a constant fraction of the total
space.
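A minimal sketch (an assumption, not the notes' own implementation) makes the insertion claim concrete by counting element copies under capacity doubling: n insertions cause fewer than 2n copies in total, so by the aggregate method the amortized cost per insertion is O(1).

```java
public class DynamicTable {
    // A tiny doubling table that counts element copies, to illustrate that
    // the amortized cost of insertion into a dynamic table is constant.
    int[] data = new int[1];
    int size = 0;
    long copies = 0;   // total elements moved during all reallocations

    void insert(int x) {
        if (size == data.length) {            // table full: double its capacity
            int[] bigger = new int[2 * data.length];
            System.arraycopy(data, 0, bigger, 0, size);
            copies += size;
            data = bigger;
        }
        data[size++] = x;
    }

    public static void main(String[] args) {
        DynamicTable t = new DynamicTable();
        int n = 1_000_000;
        for (int i = 0; i < n; i++) t.insert(i);
        // Aggregate method: total copies = 1 + 2 + 4 + ... < 2n,
        // so the amortized cost per insertion is O(1).
        System.out.println(t.copies < 2L * n);  // true
    }
}
```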