The document outlines the learning outcomes and key concepts for two courses: CSC22 on Object Oriented Programming and CSC23 on Advanced Data Structures. It covers essential topics such as OOP principles, class and object usage, inheritance, data structures, algorithms, and their applications. Additionally, it includes references for further reading and exercises related to array data structures.

CSC22: Object Oriented Programming

LEARNING OUTCOMES
Understand OOP concepts and features.
Make use of objects and classes for developing programs.
Learn to develop software using OO approach.
1. OO Concepts: Programming Paradigms: Unstructured Programming, Structured Programming, Object Oriented
Programming; ADT; Class; Object; Message; Encapsulation; Polymorphism; Inheritance; Pros and Cons of Object-oriented
Methodology; cin and cout Objects.
2. Classes and Objects: Classes; Friend Functions: Benefits and Restrictions, Friend Classes; Inline Functions; Constructor,
Parameterized Constructor; Destructor and its usages; Static Data Member and Static Member Functions; Creating Object;
Passing and Returning Object(s) to/from a Function; Object Assignment; Nested and Local Classes; Arrays of Objects; Pointer
to Objects; this Pointer, Pointer to Derived Type; References; Reference vs Pointer; Reference Parameters; Dynamic Memory
Allocation.
3. Function and Operator overloading: Function overloading: Rules, Overloading Constructors, Copy Constructors; Default
Function Arguments vs. Function Overloading. Operator Overloading: Operators that cannot be Overloaded, Overloading
Operators using Member Functions and Friend Functions, Overloading different operators including prefix and postfix forms of
++ and -- operators.
4. Inheritance & Virtual function: Inheritance: Types of Inheritances, Base-Class Access Control, Protected Members,
Protected Base-class Inheritance, Multiple Inheritance and problems, Solution to Multiple Inheritance Problem, Passing
Parameters to Base Class Constructors; Virtual functions: Introduction, Calling a Virtual Function using Base Class Reference,
Pure Virtual Function, Abstract Class.
5. Generic Function, Exception and File Handling: Generic Functions: Benefits, Functions with Two Generic Types,
Explicitly Overloading a Generic Function, Overloading a generic function, Restriction, Generic Classes. Exception Handling,
user defined Exception. C++ Streams; C++ File Handling: Opening/Closing a File, Reading /Writing a Text File, Random
Access, Reading /Writing Object to a File.
REFERENCES
Herbert Schildt: The Complete Reference C++. McGraw Hill
H.M. Deitel & P.J. Deitel: C++ How to Program. PHI
A. N. Kamthane: Object Oriented Programming with ANSI and TURBO C++. Pearson

CSC23: Advanced Data Structures


LEARNING OUTCOMES
Illustrate terminology and concepts of data structures.
Derive the mapping functions that map the indices of multi-dimensional arrays to the index of a 1D array.
Design efficient algorithms for matrix operations on various special matrices.
Design algorithms for the various operations of different data structures.
Apply various data structures to solve different problems.
Devise applications based on graph data structures.
1. List and Matrices: Data Structure, Linear Data Structure, Array Data Structure, Multi Dimensional Array, Mapping of
Indices of 2D and 3D Arrays to the Index of 1D Array, Matrix, Mapping of Indices of Matrix Elements to One Dimensional
(1D) Array Index, Special Matrices, Triangular, Diagonal, Tri-Diagonal, Representation in Row Major and Column Major
Order, Mapping of non-null Elements in 1D Array, Sparse Matrix, Single Linked List, Circular Linked List, Doubly Linked
List, Circular Doubly Linked List, Applications of Linked Lists: Bin Sort, Radix Sort, Convex Hull.
2. Stacks and Queues: Stack Data Structure, Various Stack Operations, Representation and Implementation of Stack using
Array and Linked List, Applications of Stack: Conversion of Infix to Postfix Expressions, Parenthesis Matching, Towers of
Hanoi, Rat in a Maze, Implementation of Recursive Functions, Queue Data Structure, Various Queue Operations, Circular
Queue, Representation and Implementation of Queues using Array and Linked List, Applications of Queue: Railroad Car
Rearrangement, Machine Shop Simulation, Image-Component Labeling, Priority Queues: Priority Queue Using Heap; Max
and Min Heap; Insertion into Heap; Deletion from a Heap; Applications of Priority Queue: Heap Sort.
3. Trees: Binary Trees and their Properties; Representation of Binary Trees: Array-Based and Linked Representations; Binary
Tree Traversals; Binary Search Trees (BST); Operations on BST: Search, Insertion and Deletion; BST with Duplicates;
Applications of BST: Histogramming, Best-Fit Bin Packing, AVL Trees; AVL Tree Representation; Introduction to Red-Black
and Splay Trees, B-Trees and their Representation; Operations on B-Tree: Search, Insertion and Deletion; B+-Trees.
4. Sorting, Searching, and Hashing: Insertion Sort, Bubble Sorting, Quick Sort, Merge Sort, Shell sort, Sequential search,
binary search, Introduction to Hashing, Hash Table Representation, Hash Functions, Collision and Overflows, Linear Probing,
Random Probing, Double Hashing, and Open Hashing.
5. Graphs and Disjoint Sets: Graph Terminology & Representations, Graphs & Multi-graphs, Directed Graphs,
Representations of Graphs, Weighted Graph Representations; Graph Traversal Methods: Breadth-First Search and Depth-First
Search; Spanning Tree and Shortest Path Finding Problems, Disjoint Sets, Various Operations of Disjoint Sets, Disjoint Sets
Implementation.
REFERENCES
Sartaj Sahni: Data Structures, Algorithms and Applications in C++. Universities Press
D. Samanta: Classic Data Structures, PHI
Narasimha Karumanchi: Data Structures and Algorithms Made Easy. CareerMonk

Lecture Slide - 1
CSC23: Advanced Data
Structures
 Sartaj Sahni: Data Structures, Algorithms and
Applications in C++, 2nd Edition, Silicon Press
 D. Samanta: Classic Data Structures, 2nd Edition, PHI

What is Data Structure?


 Organization, processing, retrieving, and storing of data is called data structure
 It includes data and operations on those data
 A data structure includes a detailed algorithm for each of its operations
 Solutions of some problems need special data structures
 Data structures may be classified as linear and non-linear
 Linear DS – data organized in linear fashion, e.g. Array, Linked List, Stack, Queue
 Non-linear DS – data organized in non-linear fashion, e.g. Tree, Graph
Array Data Structure
 It is a linear data structure
 Collection of homogeneous data elements stored in contiguous memory locations
 Linearity is maintained by the physical placement of the elements
 Random access to elements is allowed; it is suitable where the number of elements is known in advance
 Data members of Array DS – 1D array a[], length, and size
 Methods of Array DS – isEmpty(), size(), insert(x, index), del(index), indexOf(x), get(index), display(), etc.

Array DS – isEmpty() method
 It is used to check whether the array is empty.
 The time complexity of this algorithm is O(1).
 Algorithm
1. Algorithm isEmpty(){
2. if(size == 0) then{
3. return true;
4. }else{
5. return false;
6. }
7. }

Array DS – size() method
 It is used to get the number of elements stored in the
array.
 The time complexity of this algorithm is O(1).
 Algorithm
1. Algorithm size(){
2. return size;
3. }

Array DS – insert(x, index) method


 It is used to insert element x at the (index)th location.
 There may be two exceptional conditions – (i) “Array is full” and (ii) “Invalid index”.
 The major task is moving the elements from position = index through the last element one position right.
 The minimum number of movements occurs when x is inserted at index = size, and the maximum when it is inserted at index = 0.
 The time complexity of this algorithm is O(n) and Ω(1).
Array DS – insert(x, index) method
 Algorithm
1. Algorithm insert(x, index){
2. if(size == length) then throw Exception(“Array is full.”);
3. if(index<0 OR index>size) then throw Exception(“Invalid index”);
4. for i = size-1 to index, step -1 do
5. a[i+1] = a[i];
6. a[index] = x;
7. size = size + 1;
8. }
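As a concrete illustration, the insert(x, index) algorithm can be sketched in C++ roughly as below (the struct name ArrayDS and the fixed capacity of 10 are assumptions of this sketch, not from the referenced texts):

```cpp
#include <stdexcept>

// Minimal illustrative array data structure (hypothetical names).
struct ArrayDS {
    static const int LENGTH = 10;  // capacity
    int a[LENGTH];
    int size = 0;

    // Insert x at position `index`, shifting a[index..size-1] one slot right.
    // Worst case (index == 0) moves all `size` elements: O(n); best case Omega(1).
    void insert(int x, int index) {
        if (size == LENGTH) throw std::overflow_error("Array is full.");
        if (index < 0 || index > size) throw std::out_of_range("Invalid index");
        for (int i = size - 1; i >= index; --i)
            a[i + 1] = a[i];
        a[index] = x;
        ++size;
    }
};
```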


Array DS – del(index) method


 It is used to delete the array element at the (index)th location.
 There may be two exceptional conditions – (i) “Array is empty” and (ii) “Invalid index”.
 The major task is moving the elements from position = index+1 through the last element one position left.
 The minimum number of movements occurs when the element is deleted from index = size-1, and the maximum when it is deleted from index = 0.
 The time complexity of this algorithm is O(n) and Ω(1).
Array DS – del(index) method
 Algorithm
1. Algorithm del(index){
2. if(size == 0) then throw Exception(“Array is empty.”);
3. if(index<0 OR index≥size) then throw Exception(“Invalid index”);
4. x = a[index];
5. for i = index+1 to size-1 do
6. a[i-1] = a[i];
7. size = size - 1;
8. return x;
9. }
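The del(index) algorithm admits a similar C++ sketch (again, ArrayDS and the capacity are illustrative assumptions):

```cpp
#include <stdexcept>

// Minimal illustrative array data structure (hypothetical names).
struct ArrayDS {
    int a[10] = {0};
    int size = 0;

    // Delete and return the element at `index`, shifting
    // a[index+1..size-1] one slot left. O(n) worst case, Omega(1) best.
    int del(int index) {
        if (size == 0) throw std::underflow_error("Array is empty.");
        if (index < 0 || index >= size) throw std::out_of_range("Invalid index");
        int x = a[index];
        for (int i = index + 1; i < size; ++i)
            a[i - 1] = a[i];
        --size;
        return x;
    }
};
```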

Array DS – indexOf(x) method
 It is a searching operation.
 It returns the index of the first occurrence of x in the array, or -1 if x is not in the array.
 The time complexity of this algorithm is O(n) and Ω(1).
 Algorithm
1. Algorithm indexOf(x){
2. index = -1;
3. for i = 0 to size-1 do
4. if(a[i] == x) then{
5. index = i; break;
6. }
7. return index;
8. }
Array DS – get(index) method
 It returns the (index)th element of the array.
 There is one exceptional condition, “invalid index”.
 The time complexity of this algorithm is Θ(1).
 Algorithm
1. Algorithm get(index){
2. if(index<0 OR index≥size) then{
3. throw Exception(“Invalid index”);
4. }
5. return a[index];
6. }


Array DS – display() method


 It is used to display all elements of the array.
 The time complexity of this algorithm is Θ(n).
 Algorithm
1. Algorithm display(){
2. for i = 0 to size-1 do{
3. print a[i]
4. }
5. }

Array DS - Exercises
1. Suppose that 1, 2, 2, 3, 4, 4, 5, 6, 6, …, n is a series for
a given value n. We want to store terms of this series
into an array object by inserting one by one using
insert() method of the array DS. Write algorithms for
this task (i) that takes min. no. of movements (ii) that
takes max. no. of movements. Also compute the total
movements.
2. Write an algorithm for the above-mentioned task using the insert(x, index) method, where index = x + floor(x/2) - 1 if x is odd, and index = x + floor(x/2) - 2 if x is even. Compute the total number of movements for a given n.

Array DS – Exercises (Solution)


1. (i) Number of movements = 0
1. Algorithm insertTermsOfSeries(n){
2. Array a(n + floor(n/2));
3. for i = 1 to n do{
4. If (i % 2 == 1) then{
5. a.insert(i, a.size());
6. }
7. Else{
8. a.insert(i, a.size());
9. a.insert(i, a.size());
10. }
11. }
12. }
Array DS – Exercises (Solution)
1. (ii) Number of movements = (n + floor(n/2) - 1) x (n + floor(n/2)) / 2
1. Algorithm insertTermsOfSeries(n){
2. Array a(n + floor(n/2));
3. for i = n to 1, step -1 do{
4. If (i % 2 == 1) then{
5. a.insert(i, 0);
6. }
7. Else{
8. a.insert(i, 0);
9. a.insert(i, 0);
10. }
11. }
12. }

Array DS – Exercises (Solution)


2. Number of movements = floor(n/2)
1. Algorithm insertTermsOfSeries(n){
2. Array a(n + floor(n/2));
3. for i = 1 to n do{
4. If (i % 2 == 1) then{
5. a.insert(i, i+floor(i/2)-1);
6. }
7. Else{
8. a.insert(i, i+floor(i/2)-2);
9. a.insert(i, i+floor(i/2)-2);
10. }
11. }
12. }
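The claimed movement count of floor(n/2) can be checked by simulating the insertions; the sketch below (a hypothetical CountingArray helper over std::vector, not from the slides) counts how many elements each insert shifts:

```cpp
#include <vector>

// Illustrative simulation of Exercise 2: build the series 1,2,2,3,4,4,...,n
// with insert(x, index), counting element movements (right shifts).
struct CountingArray {
    std::vector<int> a;
    long movements = 0;

    void insert(int x, int index) {
        movements += (long)a.size() - index;   // elements shifted one slot right
        a.insert(a.begin() + index, x);
    }
};

// Insert term i at index i + i/2 - 1 (odd i) or i + i/2 - 2 (even i, twice).
CountingArray buildSeries(int n) {
    CountingArray c;
    for (int i = 1; i <= n; ++i) {
        if (i % 2 == 1) {
            c.insert(i, i + i / 2 - 1);
        } else {
            c.insert(i, i + i / 2 - 2);
            c.insert(i, i + i / 2 - 2);
        }
    }
    return c;
}
```

Only the second copy of each even term causes one shift, and there are floor(n/2) even terms, which matches the stated total.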
Array DS - Exercises
3. Suppose that an array object has the series 1, 2, 2, 3, 4, 4, 5, 6, 6, 7, …, n for a given value n. We want to delete the duplicate numbers of the series using the del() method of the array DS. Write algorithms for this task: (i) one that takes the min. no. of movements, (ii) one that takes the max. no. of movements. Also compute the total movements.


Array DS – Exercises (Solution)


3. (i) Number of movements = n.(n-2)/4 if n is even, and (n-1)²/4 if n is odd.
1. Algorithm delDuplicateTermsOfSeries(a, n){
2. if (n%2 == 1) then{
3. m = n + ((n-1)/2) -2;
4. }Else{
5. m = n + (n/2) – 1;
6. }
7. for i = m to 2, step -3 do{
8. a.del(i);
9. }
10. }

Discussion
For n = 4, the series is: 1 2 2 3 4 4 (indices 0–5).
For n = 5, the series is: 1 2 2 3 4 4 5 (indices 0–6).

For n = 4 / n = 5 (deleting duplicates from the end):
Del #  |  index  |  #Shifts
1      |  5 / 5  |  0 / 1
2      |  2 / 2  |  2 / 3

General case, even n / odd n:
Del #              |  index                  |  #Shifts
1                  |  n+n/2-1 / n+(n-1)/2-2  |  0 / 1
2                  |  n+n/2-4 / n+(n-1)/2-5  |  2 / 3
…                  |  …                      |  …
(n/2) / ((n-1)/2)  |  2 / 2                  |  n-2 / n-2

The shift counts form an AP; Sm = (m/2).(a + am) [sum of m terms of an AP].
For even n: m = n/2, a = 0, am = n-2, so Sm = (n/4).[0 + (n-2)] = n.(n-2)/4.
For odd n: m = (n-1)/2, a = 1, am = n-2, so Sm = ((n-1)/4).[1 + (n-2)] = (n-1).(n-1)/4 = (n-1)²/4.

Array DS – Exercises (Solution)


3. (ii) Number of movements = n.(3n-2)/8 if n is even; (n-1).(3n-1)/8 if n is odd.
1. Algorithm delDuplicateTermsOfSeries(a, n){
2. if (n%2 == 1) then{
3. m = n-2;
4. }Else{
5. m = n-1;
6. }
7. for i = 1 to m, step 2 do{
8. a.del(i);
9. }
10. }

Discussion
For n = 4, the series is: 1 2 2 3 4 4 (indices 0–5).
For n = 5, the series is: 1 2 2 3 4 4 5 (indices 0–6).

For n = 4 / n = 5 (deleting duplicates from the front):
Del #  |  index  |  #Shifts
1      |  1      |  4 / 5
2      |  3      |  1 / 2

General case, even n / odd n:
Del #              |  index      |  #Shifts
1                  |  1          |  n+n/2-2 / n+(n-1)/2-2
2                  |  3          |  n+n/2-4 / n+(n-1)/2-4
…                  |  …          |  …
(n/2) / ((n-1)/2)  |  n-1 / n-2  |  1 / 2

Sm = (m/2).(a + am) [sum of m terms of an AP].
For even n: m = n/2, a = n+n/2-2, am = 1, so Sm = (n/4).[n+n/2-2+1] = n.(3n-2)/8.
For odd n: m = (n-1)/2, a = n+(n-1)/2-2, am = 2, so Sm = ((n-1)/4).[n+(n-1)/2-2+2] = (n-1).(3n-1)/8.

Lecture Slide - 2
CSC23: Advanced Data
Structures
 Sartaj Sahni: Data Structures, Algorithms and
Applications in C++, 2nd Edition, Silicon Press
 D. Samanta: Classic Data Structures, 2nd Edition, PHI

Multi-Dimensional Array
 A multi-dimensional array has more than one
dimension. For example, a[3][2], a[4][3][2], a[5][3][2][4]
are multi-dimensional arrays.
 A 2D array has two dimensions. For example, a[3][2] is a 2D array.
 The number of elements in a 2D array is equal to the product of its dimensions. For example, the number of elements in the array a[3][2] is 3 x 2 = 6.
 An individual element of a 2D array is accessed using two indices. For example, an individual element of the 2D array a[3][2] is accessed as ai,j with i = 0, 1, 2 and j = 0, 1.
 A matrix is represented by a 2D array.

Mapping of 2D array in 1D array
 Row Major Order – 2D array is a[m][n]

    [ a0,0     a0,1     …  a0,n-1   ]
    [ a1,0     a1,1     …  a1,n-1   ]
    [  …        …       …    …      ]
    [ am-1,0   am-1,1   …  am-1,n-1 ]  (m x n)

Mapping in 1D array
Index:   0    1    2    …  n-1    n    …  m.n-1
Element: a0,0 a0,1 a0,2 …  a0,n-1 a1,0 …  am-1,n-1

Map(i, j) = No. of elements in row 0 to row (i-1) + No. of elements up to the jth column in the ith row - 1
          = i x n + (j+1) - 1 = n x i + j

Mapping of 2D array in 1D array
 Row Major Order – 2D array is a[3][2]
 Here, m = 3, n = 2; map(i, j) = n.i + j = 2i + j

    [ a0,0  a0,1 ]
    [ a1,0  a1,1 ]
    [ a2,0  a2,1 ]  (3 x 2)

Mapping in 1D array
Index:   0    1    2    3    4    5
Element: a0,0 a0,1 a1,0 a1,1 a2,0 a2,1

Mapping of 2D array in 1D array
 Column Major Order – 2D array is a[m][n]

    [ a0,0     a0,1     …  a0,n-1   ]
    [ a1,0     a1,1     …  a1,n-1   ]
    [  …        …       …    …      ]
    [ am-1,0   am-1,1   …  am-1,n-1 ]  (m x n)

Mapping in 1D array
Index:   0    1    2    …  m-1    m    …  m.n-1
Element: a0,0 a1,0 a2,0 …  am-1,0 a0,1 …  am-1,n-1

Map(i, j) = No. of elements in col 0 to col (j-1) + No. of elements up to the ith row in the jth col - 1
          = j x m + (i+1) - 1 = m x j + i

Mapping of 2D array in 1D array
 Column Major Order – 2D array is a[3][2]
 Here, m = 3, n = 2; map(i, j) = m.j + i = 3j + i

    [ a0,0  a0,1 ]
    [ a1,0  a1,1 ]
    [ a2,0  a2,1 ]  (3 x 2)

Mapping in 1D array
Index:   0    1    2    3    4    5
Element: a0,0 a1,0 a2,0 a0,1 a1,1 a2,1
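The two 2D mapping formulas can be expressed directly; a minimal C++ sketch (function names are illustrative, not from the slides):

```cpp
// Row-major: elements of rows 0..i-1 come first, then j elements of row i.
int mapRowMajor(int i, int j, int n) { return n * i + j; }

// Column-major: elements of columns 0..j-1 come first, then i of column j.
int mapColMajor(int i, int j, int m) { return m * j + i; }
```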

Mapping of 3D array in 1D array
 Row Major Order – 3D array is a[m1][m2][m3]

i = 0:                                         …  i = m1-1:
    [ a0,0,0     a0,0,1     …  a0,0,m3-1    ]      [ am1-1,0,0     …  am1-1,0,m3-1    ]
    [ a0,1,0     a0,1,1     …  a0,1,m3-1    ]      [ am1-1,1,0     …  am1-1,1,m3-1    ]
    [   …          …        …     …         ]      [    …          …      …           ]
    [ a0,m2-1,0  a0,m2-1,1  …  a0,m2-1,m3-1 ]      [ am1-1,m2-1,0  …  am1-1,m2-1,m3-1 ]
    (m2 x m3)                                      (m2 x m3)

Mapping in 1D array
Index:   0      1      2      …  m3-1      m3     …  m1.m2.m3-1
Element: a0,0,0 a0,0,1 a0,0,2 …  a0,0,m3-1 a0,1,0 …  am1-1,m2-1,m3-1

Map(i, j, k) = No. of elements with 1st-dim 0 to (i-1) + No. of elements with 2nd-dim 0 to (j-1) when 1st-dim = i
             + No. of elements up to 3rd-dim = k when 1st-dim = i and 2nd-dim = j, minus 1
             = m2.m3.i + m3.j + (k+1) - 1 = m2.m3.i + m3.j + k

Mapping of 3D array in 1D array
 Row Major Order – 3D array is a[2][3][2]
 Here, m1 = 2, m2 = 3, m3 = 2; map(i,j,k) = m2.m3.i + m3.j + k = 3.2.i + 2.j + k = 6i + 2j + k

i = 0:                  i = 1:
    [ a0,0,0  a0,0,1 ]      [ a1,0,0  a1,0,1 ]
    [ a0,1,0  a0,1,1 ]      [ a1,1,0  a1,1,1 ]
    [ a0,2,0  a0,2,1 ]      [ a1,2,0  a1,2,1 ]
    (3 x 2)                 (3 x 2)

Mapping in 1D array
Index:   0      1      2      3      4      5      6      7      8      9      10     11
Element: a0,0,0 a0,0,1 a0,1,0 a0,1,1 a0,2,0 a0,2,1 a1,0,0 a1,0,1 a1,1,0 a1,1,1 a1,2,0 a1,2,1

Mapping of 3D array in 1D array
 Column Major Order – 3D array is a[m1][m2][m3]

i = 0:                                         …  i = m1-1:
    [ a0,0,0     a0,0,1     …  a0,0,m3-1    ]      [ am1-1,0,0     …  am1-1,0,m3-1    ]
    [ a0,1,0     a0,1,1     …  a0,1,m3-1    ]      [ am1-1,1,0     …  am1-1,1,m3-1    ]
    [   …          …        …     …         ]      [    …          …      …           ]
    [ a0,m2-1,0  a0,m2-1,1  …  a0,m2-1,m3-1 ]      [ am1-1,m2-1,0  …  am1-1,m2-1,m3-1 ]
    (m2 x m3)                                      (m2 x m3)

Mapping in 1D array
Index:   0      1      2      …  m1-1       m1     …  m1.m2.m3-1
Element: a0,0,0 a1,0,0 a2,0,0 …  am1-1,0,0  a0,1,0 …  am1-1,m2-1,m3-1

Map(i, j, k) = No. of elements with 3rd-dim 0 to (k-1) + No. of elements with 2nd-dim 0 to (j-1) when 3rd-dim = k
             + No. of elements up to 1st-dim = i when 3rd-dim = k and 2nd-dim = j, minus 1
             = m1.m2.k + m1.j + (i+1) - 1 = m1.m2.k + m1.j + i

Mapping of 3D array in 1D array
 Column Major Order – 3D array is a[2][3][2]
 Here, m1 = 2, m2 = 3, m3 = 2; map(i,j,k) = m1.m2.k + m1.j + i = 2.3.k + 2.j + i = 6k + 2j + i

i = 0:                  i = 1:
    [ a0,0,0  a0,0,1 ]      [ a1,0,0  a1,0,1 ]
    [ a0,1,0  a0,1,1 ]      [ a1,1,0  a1,1,1 ]
    [ a0,2,0  a0,2,1 ]      [ a1,2,0  a1,2,1 ]
    (3 x 2)                 (3 x 2)

Mapping in 1D array
Index:   0      1      2      3      4      5      6      7      8      9      10     11
Element: a0,0,0 a1,0,0 a0,1,0 a1,1,0 a0,2,0 a1,2,0 a0,0,1 a1,0,1 a0,1,1 a1,1,1 a0,2,1 a1,2,1
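A corresponding C++ sketch for the 3D formulas (illustrative function names, not from the slides):

```cpp
// 3D array a[m1][m2][m3] flattened to 1D (formulas from the slides above).
// Row major: the first index varies slowest, the last index fastest.
int map3DRowMajor(int i, int j, int k, int m2, int m3) {
    return m2 * m3 * i + m3 * j + k;
}

// Column major: the last index varies slowest, the first index fastest.
int map3DColMajor(int i, int j, int k, int m1, int m2) {
    return m1 * m2 * k + m1 * j + i;
}
```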

2D array – Assignment
 Suppose that there is a 2D array a[m][n]. We want to store the elements of this 2D array in a 1D array from last row to first row, and within a row from right to left. Derive the mapping function to map the index of element ai,j in 1D.
 Example – the 2D array a[3][2] is mapped in a 1D array:

    [ a0,0  a0,1 ]
    [ a1,0  a1,1 ]
    [ a2,0  a2,1 ]  (3 x 2)

 Mapping in 1D array
Index:   0    1    2    3    4    5
Element: a2,1 a2,0 a1,1 a1,0 a0,1 a0,0

Matrix
 Arrangement of data in rows and columns is called a matrix. It is represented by a 2D array.
 Example – Following is a matrix of order m x n:

    [ a0,0     a0,1     …  a0,n-1   ]
    [ a1,0     a1,1     …  a1,n-1   ]
    [  …        …       …    …      ]
    [ am-1,0   am-1,1   …  am-1,n-1 ]  (m x n)

 Data members – one 2D array a[m][n], rows, cols.
 Methods – read(), print(), add(), mul(), det(), inverse(), etc.

Matrix – read() method
 It is used to read the matrix.
 Time complexity of this algorithm is Θ(m.n).
1. Algorithm read(){
2. print(“Enter a matrix of order “, rows, “x”, cols);
3. for i = 0 to rows-1 do{
4. for j = 0 to cols-1 do{
5. read a[i][j];
6. }
7. }
8. }


Matrix – print() method


 It is used to print the matrix.
 Time complexity of this algorithm is Θ(m.n).
1. Algorithm print(){
2. print(“The Matrix is\n”);
3. for i = 0 to rows-1 do{
4. for j = 0 to cols-1 do{
5. print (a[i][j] + “\t”);
6. }
7. print(“\n”);
8. }
9. }

Matrix – add() method
 It is used to add two matrices. If the orders of the two matrices are not the same, the addition fails.
 Time complexity of this algorithm is Θ(m.n).
1. Algorithm add(matrix B){
2. if(rows != B.rows OR cols != B.cols) then throw Exception(“Failed”);
3. matrix C(rows, cols);
4. for i = 0 to rows-1 do{
5. for j = 0 to cols-1 do{
6. C.a[i][j] = a[i][j] + B.a[i][j];
7. }
8. }
9. return C;
10. }

Matrix – mul() method


 It is used to multiply two matrices. If the number of cols in the first matrix is not equal to the number of rows in the second matrix, the multiplication fails.
1. Algorithm mul(matrix B){
2. if(cols != B.rows) then throw Exception(“Multiplication Failed”);
3. matrix C(rows, B.cols);
4. for i = 0 to rows-1 do
5. for j = 0 to B.cols-1 do{
6. C.a[i][j] = 0;
7. for k=0 to cols-1 do
8. C.a[i][j] = C.a[i][j] + a[i][k] * B.a[k][j];
9. }
10. return C;
11. }
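The mul() algorithm translates to C++ roughly as below (representing Matrix as a vector of rows is an assumption of this sketch, not the slides' class layout):

```cpp
#include <vector>
#include <stdexcept>

// Illustrative dense-matrix multiply following the mul() algorithm above.
using Matrix = std::vector<std::vector<double>>;

Matrix mul(const Matrix& A, const Matrix& B) {
    size_t rows = A.size(), cols = A[0].size();
    if (cols != B.size())
        throw std::invalid_argument("Multiplication Failed");
    Matrix C(rows, std::vector<double>(B[0].size(), 0.0));
    for (size_t i = 0; i < rows; ++i)
        for (size_t j = 0; j < B[0].size(); ++j)
            for (size_t k = 0; k < cols; ++k)   // inner product of row i and col j
                C[i][j] += A[i][k] * B[k][j];
    return C;
}
```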
Matrix – det() method
 If the matrix is not a square matrix – throw exception.
 First the matrix is converted into LTM or UTM, then the diagonal elements are multiplied. Example – for A (3 x 3), compute det as below:

    [ 2 1 1 ]  R2→R2-2R1  [ 2  1  1 ]  R3→R3-(1/2)R2  [ 2  1  1  ]
    [ 4 0 1 ]  R3→R3-R1   [ 0 -2 -1 ]  ------------>  [ 0 -2 -1  ]
    [ 2 0 1 ]  -------->  [ 0 -1  0 ]                 [ 0  0 1/2 ]

So det = 2 x (-2) x 1/2 = -2.
Direct method: det = 2x(0x1-1x0) - 1x(4x1-1x2) + 1x(4x0-0x2) = 2x0 - 1x(4-2) + 1x0 = -2.

Matrix – det() method


 Convert in UTM and multiply diagonal elements.
1. Algorithm det(){
2. if(rows != cols) then throw Exception(“It is not a square matrix”);
3. for k = 0 to rows – 2 do{
4. x = a[k][k];
5. for i = k+1 to rows-1 do{
6. y = a[i][k];
7. for j = 0 to cols-1 do
8. a[i][j] = a[i][j] – a[k][j] * y / x;
9. }
10. }
11. d = 1;
12. for i = 0 to rows-1 do
13. d = d * a[i][i]
14. return d;
15. }
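The det() algorithm above can be sketched in C++ as below; like the slide version it does no pivoting, so it assumes every pivot a[k][k] it encounters is non-zero:

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Determinant by reducing to upper-triangular form and multiplying the
// diagonal, following the det() algorithm above (no pivoting).
double det(Matrix a) {                 // by value: elimination mutates a copy
    int n = (int)a.size();
    for (int k = 0; k < n - 1; ++k) {
        double x = a[k][k];            // pivot (assumed non-zero)
        for (int i = k + 1; i < n; ++i) {
            double y = a[i][k];
            for (int j = 0; j < n; ++j)
                a[i][j] -= a[k][j] * y / x;   // eliminate column k below the pivot
        }
    }
    double d = 1;
    for (int i = 0; i < n; ++i) d *= a[i][i];
    return d;
}
```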
Matrix – inverse() method
 If Matrix is not a square matrix – throw exception.
 If the determinant of the given matrix is zero, the inverse is not possible.
 Take a unit matrix of the same order; apply row operations to convert the given matrix into a unit matrix, applying the same row operations to the unit matrix.
 When the given matrix becomes the unit matrix, the unit matrix becomes the inverse of the given matrix.


Matrix – inverse() method
 Forward operation:

    [ 2 1 1 ]  R2→R2-2R1  [ 2  1  1 ]  R3→R3-(1/2)R2  [ 2  1  1  ]
    [ 4 0 1 ]  R3→R3-R1   [ 0 -2 -1 ]  ------------>  [ 0 -2 -1  ]
    [ 2 0 1 ]  -------->  [ 0 -1  0 ]                 [ 0  0 1/2 ]

    [ 1 0 0 ]  R2→R2-2R1  [  1 0 0 ]  R3→R3-(1/2)R2  [  1   0   0 ]
    [ 0 1 0 ]  R3→R3-R1   [ -2 1 0 ]  ------------>  [ -2   1   0 ]
    [ 0 0 1 ]  -------->  [ -1 0 1 ]                 [  0 -1/2  1 ]
Matrix – inverse() method
 Backward operation:

    [ 2  1  1  ]  R1→R1-2R3  [ 2  1  0  ]  R1→R1+(1/2)R2  [ 2  0  0  ]
    [ 0 -2 -1  ]  R2→R2+2R3  [ 0 -2  0  ]  ------------>  [ 0 -2  0  ]
    [ 0  0 1/2 ]  -------->  [ 0  0 1/2 ]                 [ 0  0 1/2 ]

    [  1   0   0 ]  R1→R1-2R3  [  1   1  -2 ]  R1→R1+(1/2)R2  [  0   1  -1 ]
    [ -2   1   0 ]  R2→R2+2R3  [ -2   0   2 ]  ------------>  [ -2   0   2 ]
    [  0 -1/2  1 ]  -------->  [  0 -1/2  1 ]                 [  0 -1/2  1 ]

Matrix – inverse() method
 Diagonal operation:

    [ 2  0  0  ]  R1→R1/2    [ 1 0 0 ]
    [ 0 -2  0  ]  R2→R2/-2   [ 0 1 0 ]
    [ 0  0 1/2 ]  R3→2R3     [ 0 0 1 ]

    [  0   1  -1 ]  R1→R1/2    [ 0  1/2  -1/2 ]
    [ -2   0   2 ]  R2→R2/-2   [ 1   0   -1   ]
    [  0 -1/2  1 ]  R3→2R3     [ 0  -1    2   ]

So the inverse of [2 1 1; 4 0 1; 2 0 1] is [0 1/2 -1/2; 1 0 -1; 0 -1 2].

1. Algorithm inverse(){ Matrix B(rows, cols); // B is initialized as a unit matrix
2. for k = 0 to rows-2 do{ //Forward operation
3. x = a[k][k];
4. for i = k+1 to rows-1 do{
5. y = a[i][k];
6. for j = 0 to cols-1 do {
7. a[i][j] = a[i][j] – a[k][j] * y / x;
8. B.a[i][j] = B.a[i][j] – B.a[k][j]*y/x;
9. } }}
10. for k = rows-1 to 1, step -1 do{ //Backward operation
11. x = a[k][k];
12. for i = 0 to k-1 do{
13. y = a[i][k];
14. for j = 0 to cols-1 do {
15. a[i][j] = a[i][j] – a[k][j] * y / x;
16. B.a[i][j] = B.a[i][j] – B.a[k][j]*y/x;
17. } }}
18. for i = 0 to rows-1 do{ // Making diagonal 1
19. x = a[i][i];
20. for j = 0 to cols-1 do{
21. a[i][j] = a[i][j] / x;
22. B.a[i][j] = B.a[i][j]/x;
23. }}
24. return B;
25. }

Some Special Matrices


 All of these special matrices are square matrices of order n.
 These matrices have some zero (null) elements, which need not be stored in memory.
 If we store only the non-null elements in a 1D array, there is a saving of memory space.
 So, mapping of non-null elements from 2D to 1D is needed.
 Some operations on these matrices may be performed efficiently.
 Examples – diagonal matrix, upper and lower triangular matrices, tri-diagonal matrix.
Diagonal Matrix
 A square matrix of order n is a diagonal matrix if all non-diagonal elements are null.
 So, Anxn is a diagonal matrix if ai,j = 0 when i ≠ j.
 The number of non-null elements in this diagonal matrix is n.
 So, mapping of non-null elements from 2D to 1D is needed.
 Data members – one 1D array, n.
 Operations – read(), print(), add(), mul(), det(), inverse().


Diagonal Matrix - Mapping


 Here row major and column major order are the same.

    [ a0,0   0     …  0        ]
    [  0     a1,1  …  0        ]
    [  …     …     …  …        ]
    [  0     0     …  an-1,n-1 ]  (n x n)

Index:   0    1    2    …  n-1
Element: a0,0 a1,1 a2,2 …  an-1,n-1

 The element ai,j is non-null only if i = j.
 Map(i, j) = i = j

Diagonal Matrix – read() method
 It reads only the diagonal elements.
 Time complexity of this algorithm is Θ(n).
1. Algorithm read(){
2. print(“Enter diagonal elements of the diagonal matrix”);
3. for i = 0 to n-1 do{
4. read a[map(i, i)];
5. }
6. }


Diagonal Matrix – print() method


 It is used to print the diagonal matrix of order nxn.
 The time complexity of this algorithm is Θ(n²).
1. Algorithm print(){
2. print(“The Diagonal Matrix is\n”);
3. for i = 0 to n-1do{
4. for j = 0 to n-1do{
5. if(i == j){
6. print (a[map(i, j)] + “\t”);
7. }Else{
8. print(“0” + “\t”);
9. } }
10. print(“\n”);
11. } }
Diagonal Matrix – add() method
 It is used to add two diagonal matrices. If the orders of the two matrices are not the same, the addition fails.
 Time complexity of this algorithm is Θ(n).
1. Algorithm add(matrix B){
2. if(n != B.n) then throw Exception(“Addition Failed”);
3. DiagonalMatrix C(n);
4. for i = 0 to n-1 do{
5. C.a[map(i, i)] = a[map(i, i)] + B.a[map(i, i)] ;
6. }
7. return C;
8. }


Diagonal Matrix– mul() method


 It is used to multiply two diagonal matrices. If the orders of the two matrices are not equal, the multiplication fails.
1. Algorithm mul(matrix B){
2. if(n != B.n) then throw Exception(“Multiplication Failed”);
3. DiagonalMatrix C(n);
4. for i = 0 to n-1 do{
5. C.a[map(i, i)] = a[map(i, i)] * B.a[map(i, i)];
6. }
7. return C;
8. }

Diagonal Matrix– det() method
 Multiply diagonal elements.
 Time Complexity of this algorithm is Θ(n).

1. Algorithm det(){
2. d = 1;
3. for i = 0 to n-1 do{
4. d = d * a[map(i, i)];
5. }
6. return d;
7. }


Diagonal Matrix – inverse() method


 Inverse is not possible if determinant is zero.
 Inverse of a diagonal matrix is obtained by taking the
reciprocal of diagonal elements.

1. Algorithm inverse(){
2. if(det()==0) then throw Exception(“inverse not exist”);
3. DiagonalMatrix B(n);
4. for i = 0 to n-1 do{
5. B.a[map(i, i)] = 1.0 / a[map(i, i)] ;
6. }
7. return B;
8. }
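The diagonal-matrix operations above can be collected into one C++ sketch (the 1D storage with map(i,i) = i follows the slides; the class shape itself is an assumption for illustration):

```cpp
#include <vector>
#include <stdexcept>

// Illustrative diagonal matrix of order n stored as a 1D array of its diagonal.
struct DiagonalMatrix {
    std::vector<double> a;
    explicit DiagonalMatrix(int n) : a(n, 0.0) {}

    // O(n) product: only matching diagonal entries multiply.
    DiagonalMatrix mul(const DiagonalMatrix& B) const {
        if (a.size() != B.a.size())
            throw std::invalid_argument("Multiplication Failed");
        DiagonalMatrix C((int)a.size());
        for (size_t i = 0; i < a.size(); ++i) C.a[i] = a[i] * B.a[i];
        return C;
    }
    // O(n) determinant: product of the diagonal entries.
    double det() const {
        double d = 1;
        for (double x : a) d *= x;
        return d;
    }
    // Inverse: reciprocal of each diagonal entry (determinant must be non-zero).
    DiagonalMatrix inverse() const {
        if (det() == 0) throw std::domain_error("inverse does not exist");
        DiagonalMatrix B((int)a.size());
        for (size_t i = 0; i < a.size(); ++i) B.a[i] = 1.0 / a[i];
        return B;
    }
};
```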

Diagonal Matrix – Assignment
 Prove that the product of two diagonal matrices is a diagonal matrix.
 Let A and B be two diagonal matrices of order n. Therefore, ai,j = bi,j = 0 for every i ≠ j.
 Let C = A x B. We have to show that ci,j = 0 when i ≠ j.
 Let i ≠ j; therefore ai,j = bi,j = 0. We will show that ci,j = 0.

    ci,j = Σ (k=1..n) ai,k.bk,j = ai,i.bi,j + ai,j.bj,j + Σ (k=1..n; k≠i, k≠j) ai,k.bk,j

 In the first term bi,j = 0; in the second term ai,j = 0; and in the third term ai,k = 0 and bk,j = 0, as i ≠ k and k ≠ j. Therefore ci,j = 0. Hence proved.

Lower Triangular Matrix


 A square matrix of order n is called a lower triangular matrix if all of its elements above the diagonal are zero (null).
 So, Anxn is a lower triangular matrix if ai,j = 0 when i < j.
 The number of non-null elements in this lower triangular matrix is n.(n+1)/2.
 So, mapping of non-null elements from 2D to 1D is needed.
 Data members – one 1D array, n.
 Operations – read(), print(), add(), mul(), det(), inverse().

Lower Triangular Matrix - Mapping
 Row Major order.

    [ a0,0    0      …  0        ]
    [ a1,0    a1,1   …  0        ]
    [  …       …     …  …        ]
    [ an-1,0  an-1,1 …  an-1,n-1 ]  (n x n)

Index:   0    1    2    3    …  n.(n+1)/2 - 1
Element: a0,0 a1,0 a1,1 a2,0 …  an-1,n-1

 Map(i, j) = No. of non-null elements in row 0 to row (i-1) + No. of non-null elements up to the jth column in the ith row - 1
= (1 + 2 + … + i) + (j+1) - 1 = i.(i+1)/2 + j

Lower Triangular Matrix - Mapping
 Column Major order.

    [ a0,0    0      …  0        ]
    [ a1,0    a1,1   …  0        ]
    [  …       …     …  …        ]
    [ an-1,0  an-1,1 …  an-1,n-1 ]  (n x n)

Index:   0    1    2    …  n-1    n    …  n.(n+1)/2 - 1
Element: a0,0 a1,0 a2,0 …  an-1,0 a1,1 …  an-1,n-1

 Map(i, j) = No. of non-null elements in col 0 to col (j-1) + No. of non-null elements up to the ith row in the jth column - 1
= [n + (n-1) + … + (n-j+1)] + (i-j+1) - 1 = [n + n + … j terms] - (0 + 1 + 2 + … + j) + i = n.j + i - j.(j+1)/2
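Both packed-storage formulas as a C++ sketch (illustrative function names, not from the slides):

```cpp
// Lower-triangular matrix of order n packed into a 1D array of
// n(n+1)/2 non-null elements; formulas from the derivations above.
int ltmRowMajor(int i, int j) { return i * (i + 1) / 2 + j; }
int ltmColMajor(int i, int j, int n) { return n * j + i - j * (j + 1) / 2; }
```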
Lower Triangular Matrix – read() method
 It reads only the non-null elements.
 The number of read operations is n.(n+1)/2.
1. Algorithm read(){
2. print(“Enter non-null elements of the lower triangular matrix “);
3. for i = 0 to n-1 do{
4. for j = 0 to i do{
5. read a[map(i, j)];
6. }
7. }
8. }

Lower Triangular Matrix – print() method


 It is used to print the lower triangular matrix of order nxn.
1. Algorithm print(){
2. print(“The Lower Triangular Matrix is\n”);
3. for i = 0 to n-1do{
4. for j = 0 to n-1do{
5. if(i ≥ j){
6. print (a[map(i, j)] + “\t”);
7. }Else{
8. print(“0” + “\t”);
9. } }
10. print(“\n”);
11. } }
Lower Triangular Matrix – add() method
 It is used to add two lower triangular matrices. If the orders of the two matrices are not the same, the addition fails.
 Theorem: The sum of two LTMs is an LTM.
 Proof: Let A and B be LTMs of order n. Therefore ai,j = bi,j = 0 for i < j. Let C = A + B; we have to show that ci,j = 0 for i < j.
Let i < j. Then ci,j = ai,j + bi,j = 0 + 0 = 0, as A and B are LTMs. Hence proved.


Lower Triangular Matrix – add() method


 Total number of “+” operations performed = n. (n+1)/2.
1. Algorithm add(B){
2. if (n != B.n) then throw Exception(“Addition failed”);
3. LTM C(n);
4. for i = 0 to n-1do{
5. for j = 0 to i do{
6. C.a[map(i, j)] = a[map(i, j)] + B.a[map(i, j)];
7. }
8. }
9. return C;
10. }

Lower Triangular Matrix – mul() method
 It is used to multiply two lower triangular matrices. If the orders of the two matrices are not the same, the multiplication fails.
 Theorem: The product of two LTMs is an LTM.
 Proof: Let A and B be LTMs of order n. Therefore ai,j = bi,j = 0 for i < j. Let C = A x B; we have to show that ci,j = 0 for i < j.
Let i < j; we have to show that ci,j = 0.

    ci,j = Σ (k=1..n) ai,k.bk,j = Σ (k=1..j-1) ai,k.bk,j + Σ (k=j..n) ai,k.bk,j

In the 1st term, k < j, so bk,j = 0. In the 2nd term, i < j and k ≥ j, so i < k, hence ai,k = 0. Therefore every term is 0, so ci,j = 0. Hence proved.

Lower Triangular Matrix – mul() method
 Each element of the resultant LTM does not need the same number of multiplication operations.

    [ a0,0  0    0    ]   [ b0,0  0    0    ]
    [ a1,0  a1,1 0    ] x [ b1,0  b1,1 0    ]
    [ a2,0  a2,1 a2,2 ]   [ b2,0  b2,1 b2,2 ]

      [ a0,0.b0,0                            0                      0         ]
    = [ a1,0.b0,0 + a1,1.b1,0                a1,1.b1,1              0         ]
      [ a2,0.b0,0 + a2,1.b1,0 + a2,2.b2,0    a2,1.b1,1 + a2,2.b2,1  a2,2.b2,2 ]
Lower Triangular Matrix – mul() method
 Total number of “x” operations = n. (n+1).(n+2)/6.
1. Algorithm mul(B){
2. if (n != B.n) then throw Exception(“Multiplication failed”);
3. LTM C(n);
4. for i = 0 to n-1do{
5. for j = 0 to i do{
6. C.a[map(i, j)] = 0;
7. for k=j to i do{
8. C.a[map(i, j)] = C.a[map(i, j)]+a[map(i, k)] x B.a[map(k, j)];
9. }
10. }
11. }
12. return C;
13. }
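A C++ sketch of the packed multiply, with a counter to check the n.(n+1).(n+2)/6 operation count (the LTM struct and its names are hypothetical, for illustration only):

```cpp
#include <vector>

// Illustrative packed lower-triangular multiply following mul() above.
// Storage: row-major packing, map(i, j) = i(i+1)/2 + j.
struct LTM {
    int n;
    std::vector<double> a;           // n(n+1)/2 non-null elements
    long mulCount = 0;               // number of "x" operations performed
    explicit LTM(int n) : n(n), a(n * (n + 1) / 2, 0.0) {}
    static int map(int i, int j) { return i * (i + 1) / 2 + j; }

    LTM mul(const LTM& B) {
        LTM C(n);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j <= i; ++j)
                for (int k = j; k <= i; ++k) {   // only k in [j, i] contributes
                    C.a[map(i, j)] += a[map(i, k)] * B.a[map(k, j)];
                    ++mulCount;
                }
        return C;
    }
};
```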

Lower Triangular Matrix – mul() method


 Total number of “x” operations = n.(n+1).(n+2)/6.

Row number | No. of x operations
0          | 1
1          | 1 + 2
2          | 1 + 2 + 3
…          | …
i          | 1 + 2 + … + (i+1)
…          | …
n-1        | 1 + 2 + … + n

Total operations = Σ(i=0 to n-1) (i+1).(i+2)/2
= (1/2).[ Σ i² + 3.Σ i + Σ 2 ]   (sums over i = 0 to n-1)
= (1/2).[ (n-1).n.(2n-1)/6 + 3.n.(n-1)/2 + 2n ]
= (n/12).[ (n-1).(2n-1) + 9.(n-1) + 12 ]
= (n/12).(2n² - 3n + 1 + 9n - 9 + 12)
= (n/12).(2n² + 6n + 4)
= (n/6).(n² + 3n + 2)
= n.(n+1).(n+2)/6

(Using 1² + 2² + … + n² = n.(n+1).(2n+1)/6.)
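The closed form can be checked empirically by counting the innermost-loop iterations of mul() (a small illustrative Python check; the function name is not from the slides):

```python
# Count the multiplications performed by the triangular mul() loops and
# compare with the closed form n(n+1)(n+2)/6 derived above.

def ltm_mul_count(n):
    count = 0
    for i in range(n):
        for j in range(i + 1):
            for k in range(j, i + 1):   # one "x" operation per iteration
                count += 1
    return count

for n in range(1, 8):
    assert ltm_mul_count(n) == n * (n + 1) * (n + 2) // 6
```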
44
Lower Triangular Matrix – det() method
 The determinant is obtained by multiplying diagonal
elements.
1. Algorithm det(){
2. d = 1;
3. for i = 0 to n-1do{
4. d = d * a[map(i, i)];
5. }
6. return d;
7. }
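Because the determinant of a triangular matrix is the product of its diagonal, det() needs only the n entries a[map(i, i)] (illustrative Python sketch; ltm_det is a hypothetical name):

```python
# det() of a packed lower-triangular matrix: product of the diagonal
# elements a[map(i, i)], where map(i, j) = i*(i+1)//2 + j.

def ltm_det(a, n):
    d = 1
    for i in range(n):
        d *= a[i * (i + 1) // 2 + i]    # map(i, i)
    return d
```

For the packed matrix [2, 1, 4, 5, 6, 3] (order 3, diagonal 2, 4, 3), this returns 24.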
45
Lower Triangular Matrix – inverse() method
1. Algorithm inverse(){ if(det() == 0) then throw Exception(“No inverse”);


2. LTM B(n)={0}; //set all its element 0
3. for i = 0 to n-1 do // Making B as unit matrix
4. B.a[map(i, i)] = 1;
5. for k = 0 to n-2do{
6. x = a[map(k, k)];
7. for i = k+1 to n-1 do{
8. y = a[map(i, k)];
9. for j = 0 to k do{
10. a[map(i, j)] = a[map(i, j)] - a[map(k, j)] *y /x;
11. B.a[map(i, j)] = B.a[map(i, j)] – B.a[map(k, j)] *y /x;
12. }
13. }
14. }
15. for i = 0 to n-1 do{
16. x = a[map(i, i)];
17. for j = 0 to i do{
18. a[map(i, j)] = a[map(i, j)]/x;
19. B.a[map(i, j)] = B.a[map(i, j)]/x;
20. }}
21. return B;
22. } 46

23
Lower Triangular Matrix – Assignment
 Suppose that there is a lower triangular matrix of order n.
We want of store non-null elements of this matrix in 1D
array from last row to first row and within a row from
right to left. Derive mapping function to map the index of
element ai,j in 1D.
 Example- the LTM of order 3 is mapped in 1D array as
below: a0,0 0 0 
 
 a1,0 a1,1 0 
a2,0 a2,1 a2, 2 

0 1 2 3 4 5
 Mapping
a2,2 in 1D array.
a2,1 a2,0 a1,1 a1,0 a0,0
47

Upper Triangular Matrix


 A square matrix of order n is called an upper triangular matrix if all of its elements below the diagonal are zero (null).
 So, Anxn is an upper triangular matrix if ai,j = 0 for i > j.
 The number of non-null elements in an upper triangular matrix is n.(n+1)/2.
 So, mapping of non-null elements from 2D to 1D
needed.
 Data members – one 1D array, n.
 Operations – read(), print(), add(), mul(), det(),
inverse().
48
Upper Triangular Matrix - Mapping
 Row Major order.

[ a0,0  a0,1  …  a0,n-1   ]
[ 0     a1,1  …  a1,n-1   ]
[ …     …     …  …        ]
[ 0     0     …  an-1,n-1 ] nxn

0    1    …  n-1    n    …  n.(n+1)/2 - 1
a0,0 a0,1 …  a0,n-1 a1,1 …  an-1,n-1

 Map(i, j) = No. of non-null elements in row 0 to row (i-1) + No. of non-null elements up to the jth column in the ith row - 1
= [n + (n-1) + (n-2) + … + (n-i+1)] + (j-i+1) - 1
= (n + n + … , i times) - (1 + 2 + … + (i-1)) + j - i
= n.i - (1 + 2 + … + i) + j
= ni + j - i.(i+1)/2

49
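Both UTM mapping formulas can be sanity-checked by enumerating the upper triangle in storage order (illustrative Python; the function names are not from the slides):

```python
# Verify the UTM mapping formulas by walking the upper triangle in storage
# order and comparing with the derived closed forms.

def utm_map_row(n, i, j):
    return n * i + j - i * (i + 1) // 2      # row-major: ni + j - i(i+1)/2

def utm_map_col(n, i, j):
    return j * (j + 1) // 2 + i              # column-major: j(j+1)/2 + i

n = 4
# row-major: walk the rows, columns j = i..n-1
expected = 0
for i in range(n):
    for j in range(i, n):
        assert utm_map_row(n, i, j) == expected
        expected += 1
# column-major: walk the columns, rows i = 0..j
expected = 0
for j in range(n):
    for i in range(j + 1):
        assert utm_map_col(n, i, j) == expected
        expected += 1
```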

Upper Triangular Matrix - Mapping


 Column Major order.

[ a0,0  a0,1  …  a0,n-1   ]
[ 0     a1,1  …  a1,n-1   ]
[ …     …     …  …        ]
[ 0     0     …  an-1,n-1 ] nxn

0    1    2    …  n.(n+1)/2 - 1
a0,0 a0,1 a1,1 …  an-1,n-1

 Map(i, j) = No. of non-null elements in col 0 to col (j-1) + No. of non-null elements up to the ith row in the jth column - 1
= (1 + 2 + 3 + … + j) + (i+1) - 1
= j.(j+1)/2 + i

50

25
Upper Triangular Matrix – read() method
 It reads only the non-null elements.
 No. of read operations is n.(n+1)/2.
1. Algorithm read(){
2. print(“Enter non-null elements of the upper triangular matrix “);
3. for i = 0 to n-1 do{
4. for j = i to n-1 do{
5. read a[map(i, j)];
6. }
7. }
8. }

51

Upper Triangular Matrix – print() method


 It is used to print the Upper triangular matrix of order
nxn.
1. Algorithm print(){
2. print(“The Upper Triangular Matrix is\n”);
3. for i = 0 to n-1do{
4. for j = 0 to n-1do{
5. if(j ≥ i){
6. print (a[map(i, j)] + “\t”);
7. }Else{
8. print(“0” + “\t”);
9. } }
10. print(“\n”);
11. } }
52

26
Upper Triangular Matrix – add() method
 It is used to add two upper triangular matrices. If the orders of the two matrices are not the same, the addition fails.
 Theorem: The sum of two UTMs is a UTM.
 Proof: Let A and B be UTMs of order n. Then ai,j = bi,j = 0 for i > j. Let C = A + B; we have to show that ci,j = 0 for i > j.
Let i > j. ci,j = ai,j + bi,j. But ai,j = bi,j = 0 as A and B are UTMs.
Therefore ci,j = 0 + 0 = 0. Hence proved.

53

Upper Triangular Matrix – add() method


 Total number of “+” operations performed = n. (n+1)/2.
1. Algorithm add(B){
2. if (n != B.n) then throw Exception(“Addition failed”);
3. UTM C(n);
4. for i = 0 to n-1do{
5. for j = i to n-1 do{
6. C.a[map(i, j)] = a[map(i, j)] + B.a[map(i, j)];
7. }
8. }
9. return C;
10. }
54
Upper Triangular Matrix – mul() method
 It is used to multiply two upper triangular matrices. If the orders of the two matrices are not the same, the multiplication fails.
 Theorem: The product of two UTMs is a UTM.
 Proof: Let A and B be UTMs of order n. Then ai,j = bi,j = 0 for i > j. Let C = A x B; we have to show that ci,j = 0 for i > j.
Let i > j.
ci,j = Σ(k=1 to n) ai,k.bk,j = Σ(k=1 to i-1) ai,k.bk,j + Σ(k=i to n) ai,k.bk,j
In the 1st sum, i > k, so ai,k = 0. In the 2nd sum, i > j and k ≥ i, so k > j, hence bk,j = 0. Therefore every term is 0, so ci,j = 0. Hence proved.
55

Upper Triangular Matrix – mul() method
 Each element of the resultant UTM does not need the same number of multiplication operations. For order 3:

[ a0,0  a0,1  a0,2 ]   [ b0,0  b0,1  b0,2 ]
[ 0     a1,1  a1,2 ] x [ 0     b1,1  b1,2 ]
[ 0     0     a2,2 ]   [ 0     0     b2,2 ]

  [ a0,0.b0,0   a0,0.b0,1 + a0,1.b1,1   a0,0.b0,2 + a0,1.b1,2 + a0,2.b2,2 ]
= [ 0           a1,1.b1,1               a1,1.b1,2 + a1,2.b2,2             ]
  [ 0           0                       a2,2.b2,2                         ]

56

28
Upper Triangular Matrix – mul() method
 Total number of “x” operations = n. (n+1).(n+2)/6.
1. Algorithm mul(B){
2. if (n != B.n) then throw Exception(“Multiplication failed”);
3. UTM C(n);
4. for i = 0 to n-1do{
5. for j = i to n-1 do{
6. C.a[map(i, j)] = 0;
7. for k=i to j do{
8. C.a[map(i, j)] = C.a[map(i, j)]+a[map(i, k)] x B.a[map(k, j)];
9. }
10. }
11. }
12. return C;
13. }
57

Upper Triangular Matrix – mul() method


 Total number of “x” operations = n.(n+1).(n+2)/6.

Row number | No. of x operations
0          | 1 + 2 + 3 + … + n
1          | 1 + 2 + 3 + … + (n-1)
2          | 1 + 2 + 3 + … + (n-2)
…          | …
i          | 1 + 2 + … + (n-i)
…          | …
n-1        | 1

Total operations = Σ(i=0 to n-1) (n-i).(n-i+1)/2
= (1/2).[ Σ i² - (2n+1).Σ i + n².(n+1) ]   (sums over i = 0 to n-1)
= (1/2).[ (n-1).n.(2n-1)/6 - (2n+1).(n-1).n/2 + n².(n+1) ]
= (n/12).[ (n-1).(2n-1) - 3.(2n+1).(n-1) + 6.n.(n+1) ]
= (n/12).(2n² - 3n + 1 - 6n² + 3n + 3 + 6n² + 6n)
= (n/12).(2n² + 6n + 4)
= (n/6).(n² + 3n + 2)
= n.(n+1).(n+2)/6

(Using 1² + 2² + … + n² = n.(n+1).(2n+1)/6.)

58

29
Upper Triangular Matrix – det() method
 The determinant is obtained by multiplying diagonal
elements.
1. Algorithm det(){
2. d = 1;
3. for i = 0 to n-1do{
4. d = d * a[map(i, i)];
5. }
6. return d;
7. }

59

Upper Triangular Matrix – inverse() method
1. Algorithm inverse(){ if(det() == 0) then throw Exception(“No inverse”);


2. UTM B(n)={0}; //set all its element 0
3. for i = 0 to n-1 do // Making B as unit matrix
4. B.a[map(i, i)] = 1;
5. for k = n-1 to 1, step -1 do{//Backward operation
6. x = a[map(k, k)];
7. for i = 0 to k-1 do{
8. y = a[map(i, k)];
9. for j = k to n-1 do{
10. a[map(i, j)] = a[map(i, j)] - a[map(k, j)] *y /x;
11. B.a[map(i, j)] = B.a[map(i, j)] – B.a[map(k, j)] *y /x;
12. }
13. }
14. }
15. for i = 0 to n-1 do{// diagonal operation
16. x = a[map(i, i)];
17. for j = i to n-1 do{
18. a[map(i, j)] = a[map(i, j)]/x;
19. B.a[map(i, j)] = B.a[map(i, j)]/x;
20. }}
21. return B;
22. } 60

30
Tri-diagonal Matrix
 A square matrix of order n is called a tri-diagonal matrix if all of its elements other than those on the diagonal, the diagonal just above it, and the diagonal just below it are zero (null).
 So, Anxn is a tri-diagonal matrix if ai,j = 0 if |i-j| > 1.
 The number of non-null elements in this matrix is 3n-2.
 So, mapping of non-null elements from 2D to 1D
needed.
 Data members – one 1D array, n.
 Operations – read(), print(), add(), mul(), det(),
inverse().

61

Tri-diagonal Matrix - Mapping


 Row Major order.

[ a0,0  a0,1  0     …  0          0        ]
[ a1,0  a1,1  a1,2  …  0          0        ]
[ …     …     …     …  …          …        ]
[ 0     0     0     …  an-1,n-2   an-1,n-1 ] nxn

0    1    2    3    4    …  3n-3
a0,0 a0,1 a1,0 a1,1 a1,2 …  an-1,n-1

 Map(i, j) = No. of non-null elements in row 0 to row (i-1) + No. of non-null elements up to the jth column in the ith row - 1
= (2 + 3 + 3 + … , i terms) + (j-i+2) - 1
= (3 + 3 + 3 + … , i terms) + (j-i)
= 3i + j - i = 2i + j

62
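The map(i, j) = 2i + j formula can be verified by walking the band row by row (illustrative Python; tdm_map is a hypothetical name):

```python
# Verify the tri-diagonal row-major mapping map(i, j) = 2i + j by walking
# the band; a TDM of order n stores exactly 3n - 2 elements.

def tdm_map(i, j):
    return 2 * i + j

n = 5
expected = 0
for i in range(n):
    # row i spans columns max(0, i-1) .. min(i+1, n-1)
    for j in range(max(0, i - 1), min(i + 1, n - 1) + 1):
        assert tdm_map(i, j) == expected
        expected += 1
assert expected == 3 * n - 2
```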

31
Tri-diagonal Matrix - Mapping
 Column Major order.

[ a0,0  a0,1  0     …  0          0        ]
[ a1,0  a1,1  a1,2  …  0          0        ]
[ …     …     …     …  …          …        ]
[ 0     0     0     …  an-1,n-2   an-1,n-1 ] nxn

0    1    2    3    4    …  3n-3
a0,0 a1,0 a0,1 a1,1 a2,1 …  an-1,n-1

 Map(i, j) = No. of non-null elements in col 0 to col (j-1) + No. of non-null elements up to the ith row in the jth column - 1
= (2 + 3 + 3 + … , j terms) + (i-j+2) - 1
= (3 + 3 + 3 + … , j terms) + (i-j)
= 3j + (i-j) = 2j + i

63

Tri-diagonal Matrix – read() method


 It reads only the non-null elements.
 No. of read operations is 3n-2.
1. Algorithm read(){
2. print(“Enter non-null elements of the tri-diagonal matrix “);
3. for i = 0 to n-1 do{
4. for j = max(0, i-1) to min(i+1, n-1) do{
5. read a[map(i, j)];
6. }
7. }
8. }
64
Tri-diagonal Matrix – print() method
 It is used to print the tri-diagonal matrix of order nxn.

1. Algorithm print(){
2. print(“The tri-diagonal Matrix is\n”);
3. for i = 0 to n-1do{
4. for j = 0 to n-1do{
5. if(|i-j|<2){
6. print (a[map(i, j)] + “\t”);
7. }Else{
8. print(“0” + “\t”);
9. } }
10. print(“\n”);
11. } }
65

Tri-diagonal Matrix – add() method


 It is used to add two tri-diagonal matrices. If the orders of the two matrices are not the same, the addition fails.
 Theorem: The sum of two TDMs is a TDM.
 Proof: Let A and B be TDMs of order n. Then ai,j = bi,j = 0 for |i-j| > 1. Let C = A + B; we have to show that ci,j = 0 for |i-j| > 1.
Let |i-j| > 1. ci,j = ai,j + bi,j. But ai,j = bi,j = 0 as A and B are TDMs.
Therefore ci,j = 0 + 0 = 0. Hence proved.
66
Tri-diagonal Matrix – add() method
 Total number of “+” operations performed = 3n-2.
1. Algorithm add(B){
2. if (n != B.n) then throw Exception(“Addition failed”);
3. TDM C(n);
4. for i = 0 to n-1do{
5. for j = max(0, i-1) to min(i+1, n-1) do{
6. C.a[map(i, j)] = a[map(i, j)] + B.a[map(i, j)];
7. }
8. }
9. return C;
10. }

67

Tri-diagonal Matrix – mul() method


 It is used to multiply two tri-diagonal matrices.
 Theorem: The product of two TDMs is not, in general, a TDM (it is a penta-diagonal matrix).
 Proof: Let A and B be TDMs of order n. Then ai,j = bi,j = 0 for |i-j| > 1. Let C = A x B; we show that ci,j need not be 0 for |i-j| > 1.
Let i = j+2; we show that cj+2,j can be non-zero.
cj+2,j = Σ(k=1 to n) aj+2,k.bk,j = aj+2,j+1.bj+1,j + Σ(k=1 to j) aj+2,k.bk,j + Σ(k=j+2 to n) aj+2,k.bk,j
In the 1st term, aj+2,j+1 and bj+1,j are in general non-zero. In the 2nd sum, j ≥ k, so (j+2)-k ≥ 2, hence aj+2,k = 0. In the 3rd sum, k ≥ j+2, so k-j ≥ 2, hence bk,j = 0.
Therefore cj+2,j = aj+2,j+1 x bj+1,j, which is in general non-zero. Hence proved.
68

34
Tri-diagonal Matrix – mul() method
1. Algorithm mul(B){
2. if (n != B.n) then throw Exception(“Multiplication failed”);
3. PDM C(n); // product is penta-diagonal, stored as a PDM with its own map1()
4. for i = 0 to n-1do{
5. for j = max(0, i-2) to min(i+2, n-1) do{
6. C.a[map(i, j)] = 0;
7. for k = max(i-1, j-1, 0) to min(i+1, j+1, n-1) do{
8. C.a[map1(i, j)] = C.a[map1(i, j)] + a[map(i, k)] * B.a[map(k, j)];
9. }
10. }
11. }
12. return C;
13. }
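The theorem above can also be checked empirically: multiplying two dense tri-diagonal matrices yields a matrix whose band has width 2, not 1 (illustrative Python; dense_tdm and matmul are hypothetical helper names):

```python
# Empirical check: the product of two tri-diagonal matrices is
# penta-diagonal (bandwidth 2), not tri-diagonal.

def dense_tdm(n, v):
    """Dense n x n tri-diagonal matrix with every band entry equal to v."""
    return [[v if abs(i - j) <= 1 else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

C = matmul(dense_tdm(5, 1), dense_tdm(5, 1))
# some |i-j| == 2 entries are non-zero, but nothing beyond the 5 diagonals
assert any(C[i][j] != 0 for i in range(5) for j in range(5) if abs(i - j) == 2)
assert all(C[i][j] == 0 for i in range(5) for j in range(5) if abs(i - j) > 2)
```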

69

Tri-diagonal Matrix – det() method


 First we convert it in UTM, then multiply the diagonals.
1. Algorithm det(){
2. TDM B(n) = A;
3. for k=0 to n-2 do{
4. x = B.a[map(k, k)];
5. i=k+1;
6. y = B.a[map(i, k)];
7. m = min(i+1, n-1);
8. if(i==n-1)
9. m++;
10. for j=max(0, i-1) to m-1 do
11. B.a[map(i, j)] = B.a[map(i, j)] - B.a[map(k, j)] * y / x;
12. }
13. d = 1;
14. for i=0 to n-1 do
15. d = d * B.a[map(i, i)];
16. return d;
17. }
70

35
Tri-diagonal Matrix – inverse() method
 The inverse of a TDM is in general not a TDM; it is a full square matrix of the same order.
 We take a unit matrix as an object of the square matrix class of the same order.
 We convert the given TDM into a unit matrix using row operations, and the same operations are performed on the unit matrix.
 When the TDM has been converted into a unit matrix, the unit matrix becomes the inverse.

71

1. Algorithm inverse(){ SquareMatrix C(n); // Unit matrix as object of square matrix


2. TriDiagonalMatrix B(n) = A; // copy of the original TDM
3. for k=0 to n-2 do{ //Forward operation
4. x = B.a[map(k, k)];
5. i=k+1;
6. y = B.a[map(i, k)];
7. m = min(i+1, n-1);
8. if(i==n-1)
9. m++;
10. for j =max(0, i-1) to m-1 do
11. B.a[map(i, j)] = B.a[map(i, j)] - B.a[map(k, j)] * y / x;
12. for j=0 to n-1 do
13. C.a[i][j] = C.a[i][j] - C.a[k][j] * y /x;
14. }
15. for k=n-1 to 1 do{ //Backward operation
16. x = B.a[map(k, k)];
17. i = k-1;
18. y = B.a[map(i, k)];
19. B.a[map(i, k)] = 0;
20. for j=0 to n-1 do
21. C.a[i][j] = C.a[i][j] - C.a[k][j] * y /x;
22. }
23. for i=0 to n-1 do{ //diagonal operation
24. x = B.a[map(i,i)];
25. B.a[map(i,i)] = 1;
26. for j=0 to n-1 do
27. C.a[i][j] = C.a[i][j] /x;
28. }
29. return C;
30. } 72

36
UT and LT Matrix – Assignment
 Let A is UTM and B is LTM of order n. Devise efficient
algorithms to get (i) C = A x B and (ii) C = B x A. Also derive
the formula to get total number of multiplication
operations.

73

37
Lecture Slide - 3
CSC23: Advanced Data
Structures
 Sartaj Sahni: Data Structures, Algorithms and
Applications in C++, 2nd Edition, Silicon Press
 D. Samanta: Classic Data Structures, 2nd Edition, PHI

Linked List Data Structure


 It is a linear data structure.
 It is a collection of nodes that may be stored in different parts of memory, where each node has two fields: data and link.
 Linearity is maintained using the link field of the nodes.
 Random access of elements is not allowed. It is suitable where the list grows and shrinks dynamically.
 Data members of the Linked List DS – first, which stores the address of the first node.
 Methods of the Linked List DS – create(n), isEmpty(), size(), insert(x, index), del(index), indexOf(x), get(index), display(), etc.
2

1
Single Linked List DS – create(n) method
 It is used to create a single linked list of n nodes.
1. Algorithm create(n){
2. For i = 1 to n do {
3. if(i == 1) then
4. cur = first = createNewNode();
5. Else{
6. cur.link = createNewNode();
7. cur = cur.link;
8. }
9. print “Enter data ”;
10. read cur.data;
11. }
12. cur.link = NULL;
13. }

Trace: first = 100; (i, cur) = (1, 100), (2, 200), (3, 300), (4, 400), (5, 500).
first
100 5 200 10 300 15 400 20 500 25 Null
100 200 300 400 500 3
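The node/first structure above can be mocked up in Python, with object references standing in for node addresses (an illustrative sketch, not the course's C++ class; display() returns a list here so the result can be inspected):

```python
# Minimal singly linked list sketch (illustrative Python mock-up of the
# slides' pseudocode).

class Node:
    def __init__(self, data):
        self.data = data
        self.link = None

class LinkedList:
    def __init__(self):
        self.first = None          # data member: "address" of the first node

    def is_empty(self):
        return self.first is None

    def size(self):                # Theta(n): walk the whole chain
        count, cur = 0, self.first
        while cur is not None:
            count += 1
            cur = cur.link
        return count

    def insert(self, x, index):    # O(n) worst case, Omega(1) at index 0
        if index < 0 or index > self.size():
            raise IndexError("Invalid index")
        new = Node(x)
        if index == 0:
            new.link = self.first
            self.first = new
        else:
            prev = self.first
            for _ in range(index - 1):   # walk to the node before index
                prev = prev.link
            new.link = prev.link
            prev.link = new

    def display(self):
        out, cur = [], self.first
        while cur is not None:
            out.append(cur.data)
            cur = cur.link
        return out
```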

Single Linked List DS – isEmpty() method
 It is used to check whether linked list is empty or not.
 If value of the first is NULL then list is empty otherwise
it is not empty.
 The time complexity of this algorithm is Θ(1).
1. Algorithm isEmpty(){
2. If (first == NULL) then
3. return true;
4. Else
5. return false;
6. }
first
Null
4

2
Single Linked List DS – size() method
 It is used to get the number of elements in the linked
list.
1. Algorithm size(){
2. size=0; cur = first;
3. while (cur != NULL) do {
4. size = size + 1;
5. cur = cur.link;
6. }
7. return size;
8. }

Trace: (size, cur) = (0, 100), (1, 200), (2, 300), (3, 400), (4, 500), (5, null).

first
100 5 200 10 300 15 400 20 500 25 Null
100 200 300 400 500
5

Single Linked List DS – insert(x, index) method

 It is used to insert element x at (index )th location.


 There may be one exceptional condition – “Invalid index”.
 The major task is visiting the nodes to get the address
of a node after which new node should be inserted.
 There should be minimum number of visiting
operations, when x is inserted at index=0, and
maximum number of visiting when it is inserted at
index=size.
 The time complexity of this algorithm is O(n) and Ω(1).
6

3
Single Linked List DS – insert(x, index) method
1. Algorithm insert(x, index){


2. if(index<0 OR index>size()) then throw Exception(“Invalid index”);
3. newNode = createNode();
4. newNode.data = x; newNode.link = NULL;
5. if(index == 0) then{ newNode.link = first; first = newNode; }
6. Else{prev = first; for i = 1 to index-1 do{ prev = prev.link; }
7. newNode.link = prev.link; prev.link = newNode;
8. } X=30, index=2
9. } Prev=200
30 300 newNode = 600
600
first
100 5 200 10 600 15 400 20 500 25 Null
100 200 300 400 500
7

Single Linked List DS – del(index) method
 It is used to delete element from (index )th location.
 There may be two exceptional conditions – (i) “linked list
is empty” and (ii) “Invalid index”.
 The major task is visiting of the nodes.
 There should be minimum number of visiting when
element is deleted from index=0, and maximum number
of visiting when it is deleted from index=size-1.
 The time complexity of this algorithm is O(n) and Ω(1).

4
Single Linked List DS – del(index) method
1. Algorithm del(index){
2. if(size() == 0) then throw Exception(“Linked list is empty.”);
3. if(index<0 OR index≥size()) then throw Exception(“Invalid index”);
4. if(index == 0) then{ x=first.data; del=first; first=first.link; }
5. Else{ prev = first; for i = 1 to index-1 do { prev = prev.link; }
6. del=prev.link; x=del.data; prev.link=del.link;
7. } Index=size-1=4
8. delete(del); Prev=400
9. return x; Del=500
10. } x=25
first
100 5 200 10 300 15 400 20 500 25 Null
100 200 300 400 500
9

Single Linked List DS – indexOf(x) method
 It is a searching operation. It returns the index of the first occurrence of x in the linked list; it returns -1 if x is not in the linked list.
 The time complexity of this algorithm is O(n) and Ω(1).
1. Algorithm indexOf(x){
2. index = 0; cur=first;
3. while(cur!=NULL AND cur.data!=x)do{cur=cur.link; index++;}
4. if(cur==NULL) then{return -1;}
5. Else{ return index;}
6. }

first
100 5 200 10 300 15 400 20 500 25 Null
100 200 300 400 500
10

5
Single Linked List DS – get(index) method
 It return (index)th element of the linked list.
 There is one exceptional condition “invalid index”.
 The time complexity of this algorithm is O(n) and Ω(1).
1. Algorithm get(index){
2. if(index<0 OR index≥size) then throw Exception(“Invalid index”);
3. cur=first;
4. for i = 1 to index do{ cur=cur.link;}
5. return cur.data;
6. }

first
100 5 200 10 300 15 400 20 500 25 Null
100 200 300 400 500

11

Single Linked List DS – display() method


 It is used to display all elements of the linked list.
 The time complexity of this algorithm is Θ(n).
1. Algorithm display(){
2. cur = first;
3. while (cur != NULL) do{
4. print cur.data;
5. cur = cur.link;
6. }
7. }
first
100 5 200 10 300 15 400 20 500 25 Null
100 200 300 400 500

12

6
Single Linked List DS – Bucket sort
 It is used to sort non-negative integers.
 First we create m+1 buckets (linked list objects). This is done by creating an array of linked lists, L[m+1], of size m+1, where m is the largest number in the given list of integers.
 We go through each element a[i] of the list and insert it in bucket L[a[i]].
 Then we go through buckets 0 to m and delete all elements from them one by one, storing them back in the given list.
 Time complexity is Θ(n+m); the number of buckets may be large, and the method is not applicable to lists of non-integer data. 13

Single Linked List DS – Bucket sort


Example: to sort 5, 12, 3, 7999, -5, note that min = -5 is negative, so first add x = -1*min = 5 to every element, giving 10, 17, 8, 8004, 0 and m = 8004. Bucket-sorting yields 0, 8, 10, 17, 8004; subtracting 5 gives -5, 3, 5, 12, 7999.

1. Algorithm bucketSort(a[], n){
2. m = max(a, n);
3. LinkedList L[m+1];
4. for i = 0 to n-1 do
5. L[a[i]].insert(a[i], 0);
6. i = 0;
7. for j = 0 to m do{
8. while(L[j].isEmpty() == False){
9. a[i] = L[j].del(L[j].size()-1);
10. i++;
11. }
12. }
13. }

14
-5, 3, 5, 12, 7999 14

7
Single Linked List DS – Radix sort
 It is used to sort non-negative integers.
 Since the number of buckets depends on the radix of the numbers to be sorted, it is called radix sort.
 First, we find the maximum number and the number of digits in it, say d.
 For a given value of k, we go through each element of the list and insert a[i] in the bucket whose index is the kth digit of a[i]. Then we go through each bucket, delete its elements, and place them back in the list.
 This is repeated for k = 1 to d.
 Time complexity is Θ(nd).

Single Linked List DS – Radix sort


1. Algorithm radixSort(a[], n){
2. m = max(a, n); d = floor(log10(m)) + 1;
3. LinkedList L[10];
4. for k = 1 to d do{
5. for i = 0 to n-1 do
6. L[(a[i]/10^(k-1)) % 10].insert(a[i], 0);
7. i = 0;
8. for j = 0 to 9 do
9. while(L[j].isEmpty() == False){
10. a[i] = L[j].del(L[j].size()-1);
11. i++;
12. }
13. }
14. }

Example: sorting 35, 5, 378, 35, 3: max = 378, so d = 3; after the three passes the list is 3, 5, 35, 35, 378.
16
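The digit-by-digit passes can be sketched in Python (Python lists stand in for the ten linked-list buckets; radix_sort is a hypothetical name):

```python
# LSD radix sort sketch with 10 buckets; d = number of decimal digits of
# the maximum element. Collecting buckets 0..9 in order keeps each pass
# stable, which is what makes the method correct.

import math

def radix_sort(a):
    if not a:
        return a
    m = max(a)
    d = int(math.log10(m)) + 1 if m > 0 else 1
    for k in range(d):                        # k-th digit, least significant first
        buckets = [[] for _ in range(10)]
        for x in a:
            buckets[(x // 10 ** k) % 10].append(x)
        a = [x for b in buckets for x in b]   # stable collection 0..9
    return a

print(radix_sort([35, 5, 378, 35, 3]))        # → [3, 5, 35, 35, 378]
```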

8
Circular Linked List
 In circular linked list the link field of last node has the
address of first node. L1.first.link
↔L2.first.link
 There is no null pointer problem.
 Merge operation in circular linked list is efficient.
first
L1 100 5 200 10 300 15 400 20 500 25 100
100 200 300 400 500
first
L2 550 5 600 10 650 15 700 20 750 25 550
550 600 650 700 750
 Data members of Linked list DS – first that stores the
address of the first node.
 Methods of Linked list DS – create(n), isEmpty(), size(),
insert(x, index), del(index), indexOf(x), get(index),
display(), etc 17

Circular Linked List DS – create(n) method


 It is used to create a circular linked list of n nodes.
1. Algorithm create(n){
2. For i = 1 to n do {
3. if(i == 1) then
4. cur = first = createNewNode();
5. Else{
6. cur.link = createNewNode();
7. cur = cur.link;
8. }
9. print “Enter data ”;
10. read cur.data;
11. }
12. cur.link = first;
13. }

Trace: first = 100; (i, cur) = (1, 100), (2, 200), (3, 300), (4, 400), (5, 500).
first
100 5 200 10 300 15 400 20 500 25 100
100 200 300 400 500 18

9
Circular Linked List DS – isEmpty() method
 It is used to check whether linked list is empty or not.
 If value of the first is NULL then list is empty otherwise
it is not empty.
 The time complexity of this algorithm is O(1).
1. Algorithm isEmpty(){
2. If (first == NULL) then
3. return true;
4. Else
5. return false;
6. }
first
Null
19

Circular Linked List DS – size() method


 It is used to get the number of elements in the linked
list.
1. Algorithm size(){
2. if(first == NULL) then return 0; size cur
3. size=1; cur = first; 1 100
4. while (cur.link != first) do { 2 200
5. size = size + 1; 3 300
6. cur = cur.link; 4 400
7. } 5 500
8. return size;
9. }
first
100 5 200 10 300 15 400 20 500 25 100
100 200 300 400 500
20

10
Circular Linked List DS – insert(x, index) method

 It is used to insert element x at (index )th location.


 There may be one exceptional condition – “Invalid index”.
 The major task is visiting the nodes to get the address
of a node after which new node should be inserted.
 There should be minimum number of visiting
operations, when x is inserted at index=1, and
maximum number of visiting when it is inserted at
index=size OR index=0.
 The time complexity of this algorithm is O(n) and Ω(1).
21

Circular Linked List DS – insert(x, index) method
1. Algorithm insert(x, index){
2. if(index<0 OR index>size()) then throw Exception(“Invalid index”);
3. newNode = createNode();
4. newNode.data = x; newNode.link = newNode;
5. if(first == Null) then {first=newNode;}
6. Else if(index == 0) then{last=first; while(last.link!= first){last=last.link;}
7. newNode.link = first; first = newNode; last.link = first;}
8. Else{prev = first; for i = 1 to index-1 do { prev = prev.link; }
9. newNode.link = prev.link; prev.link = newNode;
X=30, index=5
10. }
Prev=500
11. }
Last =600

first
100 5 200 10 300 15 400 20 500 25 600 30 100
100 200 300 400 500 600
22

11
Circular Linked List DS – del(index) method
 It is used to delete element from (index )th location.
 There may be two exceptional conditions – (i) “linked list
is empty” and (ii) “Invalid index”.
 The major task is visiting of the nodes.
 There should be minimum number of visiting when
element is deleted from index=1, and maximum number
of visiting when it is deleted from index=0 OR index=size-
1.
 The time complexity of this algorithm is O(n) and Ω(1).
23

Circular Linked List DS – del(index) method
1. Algorithm del(index){
2. if(first == Null) then throw Exception(“Linked list is empty.”);
3. if(index<0 OR index≥size()) then throw Exception(“Invalid index”);
4. if(index == 0 AND size()==1) then{del=first; x=del.data; first=Null; }
5. Else if(index == 0){last=first; while(last.link!=first){last = last.link}
6. del=first; x=del.data; first=del.link; last.link=first; }
7. Else{ prev = first; for i = 1 to index-1 do { prev = prev.link; }
8. del=prev.link; x=del.data; prev.link=del.link;
9. } Index=4
10. delete(del); Prev=400
11. return x; Del=500
12. } x=
first
100 5 200 10 300 15 400 20 500 25 100
100 200 300 400 500 24

12
Circular Linked List DS – indexOf(x) method
 It is a searching operation. It returns the index of the first occurrence of x in the linked list; it returns -1 if x is not in the linked list.
 The time complexity of this algorithm is O(n) and Ω(1).
1. Algorithm indexOf(x){ if(first==Null) then return -1;
2. index = 0; cur=first;
3. while(cur.link!=first AND cur.data!=x)do{cur=cur.link; index++;}
4. if(cur.data!=x) then{return -1;} X=30
5. Else{ return index;} Cur=500 index=4
6. }

first
100 5 200 10 300 15 400 20 500 25 100
100 200 300 400 500
25

Circular Linked List DS – get(index) method


 It return (index)th element of the linked list.
 There is one exceptional condition “invalid index”.
 The time complexity of this algorithm is O(n) and Ω(1).
1. Algorithm get(index){
2. if(index<0 OR index≥size()) then throw Exception(“Invalid index”);
3. cur=first;
4. for i = 1 to index do{ cur=cur.link;}
5. return cur.data;
6. }

first
100 5 200 10 300 15 400 20 500 25 100
100 200 300 400 500

26

13
Circular Linked List DS – display() method
 It is used to display all elements of the linked list.
 The time complexity of this algorithm is O(n).
1. Algorithm display(){ if(first == Null) then return;
2. cur = first;
3. while (cur.link != first) do{
4. print cur.data;
5. cur = cur.link;
6. }
7. print cur.data;
8. }

first
100 5 200 10 300 15 400 20 500 25 100
100 200 300 400 500
27

Doubly Circular Linked List


 In doubly circular linked list each node has Llink, Rlink, and
data fields. Where Llink stores address of its left node
Rlink stores the address of its right node, and data field
stores the data of that node. The Llink of first node has
the address of last node, and Rlink of last nodes has the
address of first node.
 There is no null pointer problem.
 From a node we may move both directions.
 Data members of doubly circular Linked list– first that
stores the address of the first node.
 Methods of doubly circular Linked list DS – create(n),
isEmpty(), size(), insert(x, index), del(index), indexOf(x),
get(index), display(), etc
28

14
Doubly Circular Linked List DS – create(n) method
1. Algorithm create(n){
2. For i = 1 to n do {
3. if(i == 1) then
4. cur = first = createNewNode();
5. Else{
6. prev = cur;
7. cur.Rlink = createNewNode();
8. cur = cur.Rlink; cur.Llink = prev;
9. }
10. print “Enter data ”;
11. read cur.data;
12. }
13. cur.Rlink = first; first.Llink = cur;
14. }

Trace: first = 100; (i, cur, prev) = (1, 100, -), (2, 200, 100), (3, 300, 200), (4, 400, 300).

first
100 400 5 200 100 10 300 200 15 400 300 20 100
100 200 300 400 29

Doubly Circular Linked List DS – isEmpty() method

 It is used to check whether linked list is empty or not.


 If value of the first is NULL then list is empty otherwise
it is not empty.
 The time complexity of this algorithm is O(1).
1. Algorithm isEmpty(){
2. If (first == NULL) then
3. return true;
4. Else
5. return false;
6. }
first
Null
30

15
Doubly Circular Linked List DS – size() method

 It is used to get the number of elements in the linked


list.
1. Algorithm size(){
2. if(first == NULL) then return 0; size cur
3. size=1; cur = first; 1 100
4. while (cur.Rlink != first) do { 2 200
5. size = size + 1; 3 300
6. cur = cur.Rlink; 4 400
7. }
8. return size;
9. }

first
100 400 5 200 100 10 300 200 15 400 300 20 100
100 200 300 400 31

Doubly Circular Linked List DS – insert(x, index) method

 It is used to insert element x at (index )th location.


 There may be one exceptional condition – “Invalid index”.
 The major task is visiting the nodes to get the address
of a node after which new node should be inserted.
 There should be minimum number of visiting
operations, when x is inserted at index=1, and
maximum number of visiting when it is inserted at
index=size OR index=0.
 The time complexity of this algorithm is O(n) and Ω(1).
32

16
Doubly Circular Linked List DS – insert(x, index) method
1. Algorithm insert(x, index){
2. if(index<0 OR index>size) then throw Exception(“Invalid index”);
3. newNode = createNode();
4. newNode.data = x; newNode.Llink = newNode.Rlink = newNode;
5. if(first == Null) then {first=newNode;}
6. Else if(index == 0) then{last=first.Llink;
7. next=first; newNode.Rlink = next; newNode.Llink=last; first = newNode;
8. next.Llink = last.Rlink = first; }
9. Else{prev = first; for i = 1 to index-1 do {prev = prev.Rlink; }
10. next=prev.Rlink; newNode.Rlink = next; newNode.Llink=prev;
11. prev.Rlink = next.Llink = newNode; X=25, index=4
12. } Prev=400
13. } next=100

400 25 100
500
first
100 500 5 200 100 10 300 200 15 400 300 20 500
100 200 300 400 33

Doubly Circular Linked List DS – del(index) method

 It is used to delete element from (index )th location.


 There may be two exceptional conditions – (i) “linked list
is empty” and (ii) “Invalid index”.
 The major task is visiting of the nodes.
 There should be minimum number of visiting when
element is deleted from index=1, and maximum number
of visiting when it is deleted from index=0 OR index=size-
1.
 The time complexity of this algorithm is O(n) and Ω(1).
34

17
Doubly Circular Linked List DS – del(index) method
1. Algorithm del(index){
2. if(first == Null) then throw Exception(“Linked list is empty.”);
3. if(index<0 OR index≥size()) then throw Exception(“Invalid index”);
4. if(index == 0 AND size()==1) then{del=first; x=del.data; first=Null; }
5. Else if(index == 0){last=first.Llink; del=first; x=del.data;
6. first=del.Rlink; last.Rlink=first; first.Llink=last}
7. Else{ prev = first; for i = 1 to index-1 do { prev = prev.Rlink; }
8. del=prev.Rlink; x=del.data; next = del.Rlink;
9. prev.Rlink=next; next.Llink = prev;
Index=3
10. }
Prev=300
11. delete(del);
Del=400 next=100
12. return x;
13. }

first
100 400 5 200 100 10 300 200 15 400 300 20 100
100 200 300 400 35

Doubly Circular Linked List DS – indexOf(x) method
 It is a searching operation. It returns the index of the first occurrence of x in the linked list; it returns -1 if x is not in the linked list.
 The time complexity of this algorithm is O(n) and Ω(1).
1. Algorithm indexOf(x){ if(first==Null) then return -1;
2. index = 0; cur=first;
3. while(cur.Rlink!=first AND cur.data!=x)do{cur=cur.Rlink; index++;}
4. if(cur.data!=x) then{return -1;} X=20
5. Else{ return index;} Cur=400 index=3
6. }

first
100 400 5 200 100 10 300 200 15 400 300 20 100
100 200 300 400 36

18
Doubly Circular Linked List DS – get(index) method
 It return (index)th element of the linked list.
 There is one exceptional condition “invalid index”.
 The time complexity of this algorithm is O(n) and Ω(1).
1. Algorithm get(index){
2. if(index<0 OR index≥size) then throw Exception(“Invalid index”);
3. cur=first;
4. for i = 1 to index do{ cur=cur.Rlink;}
5. return cur.data;
6. }

first
100 400 5 200 100 10 300 200 15 400 300 20 100
100 200 300 400
37

Doubly Circular Linked List DS – display() method


 It is used to display all elements of the linked list.
 The time complexity of this algorithm is O(n).
1. Algorithm display(){ if(first == Null) then return;
2. cur = first;
3. while (cur.Rlink != first) do{
4. print cur.data;
5. cur = cur.Rlink;
6. }
7. print cur.data;
8. }

first
100 400 5 200 100 10 300 200 15 400 300 20 100
100 200 300 400
38

19
Doubly Circular Linked List DS – Convex Hull
 For a given set of points, the convex hull is the subset of
given set having the vertices of convex polygon that
enclosed all points of the given set.
 First we find the point p0, whose y ordinate is minimum
and in case there is tie it should be the point with min x
ordinate.
 Then sort the points in increasing order of their angle w.r.t. p0; points having equal angles are sorted in increasing order of their distance from p0.
 The angles of points p1 and p2 w.r.t. p0 is compared using
vector cross product.
39

Doubly Circular Linked List DS – Convex Hull


              | i       j       k |
P0P2 × P0P1 = | x2−x0   y2−y0   0 |
              | x1−x0   y1−y0   0 |
            = ((x2−x0)·(y1−y0) − (x1−x0)·(y2−y0)) k
[Figure: points P0(x0, y0), P1(x1, y1), P2(x2, y2)]
 We compute the vectors P0P2 = (x2−x0, y2−y0, 0) and P0P1
= (x1−x0, y1−y0, 0), then take the cross product of these
two vectors. If the result is positive then the angle of point P1 is
greater than the angle of point P2 w.r.t. point P0, i.e. if
(x2−x0)·(y1−y0) > (x1−x0)·(y2−y0) then the angle of point P1 is
greater than the angle of point P2 w.r.t. point P0.
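The sign test above can be sketched directly; the function names here are illustrative, not from the slides:

```python
def cross_z(o, a, b):
    """z-component of the cross product of vectors (o->a) x (o->b)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (b[0] - o[0]) * (a[1] - o[1])

def angle_of_p1_greater(p0, p1, p2):
    """True when P1's polar angle about P0 exceeds P2's, i.e. when
    (x2-x0)(y1-y0) > (x1-x0)(y2-y0) -- a positive P0P2 x P0P1 cross product."""
    return (p2[0] - p0[0]) * (p1[1] - p0[1]) - \
           (p1[0] - p0[0]) * (p2[1] - p0[1]) > 0
```

For example, with P0=(0,0), P1=(0,1) at 90° and P2=(1,1) at 45°, P1's angle is the greater one.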

40

20
Doubly Circular Linked List DS – Convex Hull
              | i       j       k |
P1P2 × P2P3 = | x2−x1   y2−y1   0 |
              | x3−x2   y3−y2   0 |
            = ((x2−x1)·(y3−y2) − (x3−x2)·(y2−y1)) k
[Figure: points P1(x1, y1), P2(x2, y2), P3(x3, y3)]
 For three points P1, P2, and P3, the angle ∠P1P2P3 will be less than or
equal to 180° if the cross product of the vectors P1P2 = (x2−x1, y2−y1, 0)
and P2P3 = (x3−x2, y3−y2, 0) is less than or equal to 0, i.e. if
(x2−x1)·(y3−y2) ≤ (x3−x2)·(y2−y1) then ∠P1P2P3 ≤ 180°.
 After sorting the given points, we create a doubly circular
linked list of these points and perform the algorithm given on the
next page.
41

Doubly Circular Linked List DS – Convex Hull


1. Algorithm getConvexHull(Point a[],int n) { //points in a[] are sorted
2. DCLL lst; lst.create(a,n);
3. Node *x = lst.getFirst(),*xr, *xrr, *x0;
4. x0 = x; xr = x->next; xrr = xr->next;
5. while(xrr != x0 || xr != xrr) do {
6. xrr=xr->next;
7. if((x->data).isLE180(xr->data, xrr->data)){
8. lst.del(xr); xr = x; x = x->prev;
9. } Else {
10. x = xr; xr = xrr;
11. }
12. }
13. cout<<endl<<"Vertex of Convex HULL"<<endl;
14. lst.display();
15. }
42

21
Doubly Circular Linked List DS – Convex Hull
Sorted input: P0(0, 0), P1(5, 0), P2(10, 0), P3(10, 5), P4(5, 5),
P5(10, 10), P6(5, 10), P7(0, 5), P8(0, 10); x0 = P0.

Trace of while(xrr != x0 || xr != xrr):
x    xr   xrr   isAngle ≤ 180°?
P0   P1   P2    True    del(P1)
P8   P0   P2    False
P0   P2   P3    False
P2   P3   P4    False
P3   P4   P5    True    del(P4)
P2   P3   P5    True    del(P3)
P0   P2   P5    False
P2   P5   P6    False
P5   P6   P7    False
P6   P7   P8    True    del(P7)
P5   P6   P8    True    del(P6)
P2   P5   P8    False
P5   P8   P0    False
P8   P0   —     loop ends; hull vertices: P0, P2, P5, P8
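The same sort-then-delete idea can be sketched in Python with an ordinary list standing in for the doubly circular linked list (a sketch, not the slides' DCLL implementation; the popping loop plays the role of del(xr)):

```python
import math

def convex_hull(points):
    """Sort by angle about the lowest point, then repeatedly delete any
    vertex where the turn is not strictly left (angle <= 180 degrees)."""
    p0 = min(points, key=lambda p: (p[1], p[0]))          # min y, then min x
    rest = sorted((p for p in points if p != p0),
                  key=lambda p: (math.atan2(p[1] - p0[1], p[0] - p0[0]),
                                 (p[0] - p0[0]) ** 2 + (p[1] - p0[1]) ** 2))
    hull = [p0]
    for p in rest:
        # pop while the last two hull points and p make a non-left turn:
        # (x2-x1)(y3-y2) - (x3-x2)(y2-y1) <= 0  means angle <= 180 degrees
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y2) - (p[0] - x2) * (y2 - y1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```

On the nine points of the trace above, the surviving vertices are P0(0,0), P2(10,0), P5(10,10), P8(0,10).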

22
Lecture Slide - 4
CSC23: Advanced Data
Structures
 Sartaj Sahni: Data Structures, Algorithms and
Applications in C++, 2nd Edition, Silicon Press
 D. Samanta: Classic Data Structures, 2nd Edition, PHI

Stack Data Structure


 It is a linear data structure with restricted insert()
and del() operations, i.e. insertion and deletion
operations are performed at/from one end of the list,
called the top of the stack.
 The insertion and deletion operations of the stack are
called push(x) and pop() respectively.
 The Stack D.S. follows the last-in first-out (LIFO) strategy.
 It may be implemented either using an array or a linked list.
 Data members of the Stack DS – a single 1D array s, an index
variable (tos) that points to the top element, and length.
 Methods of the Stack DS – isEmpty(), size(), push(x), pop(),
peep(index), display(), etc.
2

1
Stack DS – isEmpty() method
 It is used to check whether stack is empty or not.
 If value of the tos is -1 then stack is empty otherwise it
is not empty.
 The time complexity of this algorithm is Θ(1).
1. Algorithm isEmpty(){
2. If (tos == -1) then
3. return true;
4. Else
5. return false;
6. }

Stack DS – size() method


 It is used to get the number of elements in the stack.
 Time Complexity of this algorithm is Θ(1).
1. Algorithm size(){
2. return tos + 1;
3. }

[Example: length=8, tos=3; s[0..3] = 10, 20, 30, 40 → size() = 4]

2
Stack DS – push(x) method
 It is used to insert element x at top of the stack.
 There may be one exceptional condition – “Stack is
full”.
 The time complexity of this algorithm is Θ(1).
1. Algorithm push(x){
2. If (tos == length-1) then throw Exception(“Stack is full”);
3. tos = tos + 1;
4. s[tos] = x;
5. }
[Example: length=6; push(15), push(25), push(35) → s[0..2] = 15, 25, 35, tos=2]
5

Stack DS – pop() method


 It is used to delete an element from top of the stack.
 There may be one exceptional condition – “Stack is
empty”.
 The time complexity of this algorithm is Θ(1).
1. Algorithm pop(){
2. If (tos == -1) then throw Exception(“Stack is empty”);
3. x = s[tos];
4. tos = tos - 1;
5. return x;
6. }
[Example: length=6, tos=0, s[0]=15; pop() returns x=15 and leaves tos=-1]
6

3
Stack DS – peep(index) method
 It is used to access index-th element from top of the
stack.
 There may be one exceptional condition – “Invalid
index”.
 The time complexity of this algorithm is Θ(1).
1. Algorithm peep(index){
2. If (index<0 OR index>tos) then
3. throw Exception(“invalid”);
4. return s[tos-index];
5. }
[Example: length=6, tos=4, s[0..4] = 10, 20, 30, 40, 50; peep(0)=50, peep(4)=10]
7

Stack DS – display() method


 It is used to display the elements of stack in order of
their insertion in stack.
 The time complexity of this algorithm is Θ(n).
1. Algorithm display(){
2. for i = 0 to tos do{
3. print s[i];
4. }
5. }
[Example: length=6, tos=4; display() prints s[0..4] = 10, 20, 30, 40, 50]
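The array-based stack methods above fit together as follows; this is an illustrative Python sketch of the slides' design (the exception types are an assumption):

```python
class Stack:
    """Fixed-capacity array stack with the slides' push/pop/peep/display."""
    def __init__(self, length):
        self.s = [None] * length
        self.length = length
        self.tos = -1                      # top-of-stack index; -1 means empty

    def is_empty(self):
        return self.tos == -1

    def size(self):
        return self.tos + 1

    def push(self, x):
        if self.tos == self.length - 1:
            raise OverflowError("Stack is full")
        self.tos += 1
        self.s[self.tos] = x

    def pop(self):
        if self.tos == -1:
            raise IndexError("Stack is empty")
        x = self.s[self.tos]
        self.tos -= 1
        return x

    def peep(self, index):
        """index-th element counted from the top (0 is the top)."""
        if index < 0 or index > self.tos:
            raise IndexError("Invalid index")
        return self.s[self.tos - index]

    def display(self):
        return self.s[:self.tos + 1]       # bottom-to-top, insertion order
```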

4
Stack DS – Implementation using Linked List
1. Stack{
2. LinkedList s;
3. isEmpty(){ return s.isEmpty(); }
4. size() { return s.size(); }
5. push(x) { s.insert(x, s.size()); }
6. pop() { return s.del(s.size()-1); }
7. peep(index) { return s.get(s.size() – index - 1); }
8. display() { s.display(); }
9. }

Stack DS – Conversion from infix to postfix expression

Infix Expression: 12 – 5 * 7 / (13 – 8)

Token (x)   Stack    Postfix Expression
(           (
12          (        12
-           (-       12
5           (-       12, 5
*           (-*      12, 5
7           (-*      12, 5, 7
/           (-/      12, 5, 7, *
(           (-/(     12, 5, 7, *
13          (-/(     12, 5, 7, *, 13
-           (-/(-    12, 5, 7, *, 13
8           (-/(-    12, 5, 7, *, 13, 8
)           (-/      12, 5, 7, *, 13, 8, -
)           empty    12, 5, 7, *, 13, 8, -, /, -

Conversion from infix to postfix (direct method):
12 – 5 * 7 / (13 – 8)
12 – 5 * 7 / (13, 8, -)
12 – (5, 7, *) / (13, 8, -)
12, 5, 7, *, 13, 8, -, /, -

Conversion from infix to postfix (direct method):
(12 + 5) / (13 – 8)
(12, 5, +) / (13 – 8)
(12, 5, +) / (13, 8, -)
12, 5, +, 13, 8, -, /

5
Stack DS – Conversion from infix to postfix expression
1. Algorithm infixToPostfix(I){ I = I + “)”; s.push(‘(‘);
2. For each token x of I do {
3. if (x == ‘(‘) then { s.push(x); }
4. Else if(x == ‘)’) then { y = s.pop();
5. While(y != ‘(‘ ) do{ P = P + y; y = s.pop(); }
6. }Else if(x == operator) { y = s.pop();
7. While (operator(y) == True AND precedence(y) ≥ precedence(x)) do
8. { P = P + y; y = s.pop(); }
9. s.push(y); s.push(x);
10. }Else if (x == operand) then{ P = P + x; }
11. Else { Throw Exception(“Invalid token”); }
12. }
13. return P;
14. }
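The algorithm above can be sketched in Python over a pre-tokenized expression (tokenizing the string is left out for brevity; the two-level precedence table is an assumption matching the operators used on the slides):

```python
def infix_to_postfix(tokens):
    """Slide-style conversion: a '(' sentinel is pushed first and a ')'
    sentinel appended; operators pop while the top has >= precedence."""
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    stack, postfix = ["("], []
    for x in tokens + [")"]:
        if x == "(":
            stack.append(x)
        elif x == ")":
            y = stack.pop()
            while y != "(":
                postfix.append(y)
                y = stack.pop()
        elif x in prec:
            y = stack.pop()
            while y in prec and prec[y] >= prec[x]:
                postfix.append(y)
                y = stack.pop()
            stack.append(y)      # put back the '(' or lower-precedence operator
            stack.append(x)
        else:                    # operand
            postfix.append(x)
    return postfix
```

On the trace example, the token list for 12 – 5 * 7 / (13 – 8) yields 12, 5, 7, *, 13, 8, -, /, -.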
11

Stack DS – Evaluation of postfix expression

Postfix Expression: 12, 5, 7, *, 13, 8, -, /, -

Token (x)   Stack           Operation
12          12              push(12)
5           12, 5           push(5)
7           12, 5, 7        push(7)
*           12, 35          y=pop()=7; x=pop()=5; push(x * y) = push(5 * 7) = push(35)
13          12, 35, 13      push(13)
8           12, 35, 13, 8   push(8)
-           12, 35, 5       y=pop()=8; x=pop()=13; push(x - y) = push(13 - 8) = push(5)
/           12, 7           y=pop()=5; x=pop()=35; push(x / y) = push(35 / 5) = push(7)
-           5               y=pop()=7; x=pop()=12; push(x - y) = push(12 - 7) = push(5)

6
Stack DS – Evaluation of postfix expression
1. Algorithm Evaluate(P){
2. For each token x of P do {
3. if(x == operator) then {
4. operator = x;
5. y = s.pop();
6. x = s.pop();
7. val = x operator y;
8. s.push(val);
9. }Else {
10. s.push(x);
11. }
12. }
13. return s.pop();
14. }
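A runnable sketch of the evaluation algorithm (integer division is an assumption, matching the 35/5 = 7 step in the trace):

```python
def evaluate_postfix(tokens):
    """Evaluate a postfix token list such as
    ["12", "5", "7", "*", "13", "8", "-", "/", "-"]."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a // b}
    stack = []
    for tok in tokens:
        if tok in ops:
            y = stack.pop()          # right operand is popped first
            x = stack.pop()
            stack.append(ops[tok](x, y))
        else:
            stack.append(int(tok))
    return stack.pop()
```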
13

Stack DS – Parenthesis Matching


Infix Expression: (13-5) / (7-5)
Token (x) Stack Operation
( ( Push(x)
13 (
- (
5 (
) Pop()
/
( ( Push(x)
7 (
- (
5 (
) Pop()

14

7
Stack DS – Parenthesis Matching

1. Algorithm isParenthesisMatched(Exp){
2. For each token x of Exp do {
3. if(x == ‘(‘) then{
4. s.push(x);
5. }Else if (x == ‘)’ ) then{
6. if(s.isEmpty()==true) then return false;
7. s.pop();
8. }
9. }
10. if (s.isEmpty()) then
11. return true;
12. Else
13. return false;
14. }
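The matching algorithm above in Python; a Python list serves as the stack:

```python
def is_parenthesis_matched(expr):
    """Push on '(', pop on ')'; matched iff the stack never underflows
    and ends empty."""
    stack = []
    for ch in expr:
        if ch == "(":
            stack.append(ch)
        elif ch == ")":
            if not stack:
                return False        # ')' with no matching '('
            stack.pop()
    return not stack                # leftover '(' means unmatched
```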
15

Stack DS – Rat in a Maze
 A maze is a rectangular area made of m×n
squares with an entry and an exit point.
 Some squares of the maze have obstacles (blue).
 Entrance at (1, 1) and exit at (m, n).
 It is represented by a 2D array of size m×n, with
obstacle cells set to 2 and free cells set to 0.
[Figure: a 5×6 maze example (m=5, n=6) with the (row, col) visit
trace and the final path stack (1,1), (2,1), (2,2), (3,2), (3,3),
(3,4), (2,4), (2,5), (2,6), (3,6), (4,6) pushed during the search]

8
Stack DS – Rat in a Maze
1. Algorithm path(){ Stack path(m*n – 1); row = col = 1;
2. maze[row][col]=1;
3. While(row ≠ m OR col ≠ n) do{ //Find a free neighbor
4. if(col<n AND maze[row][col+1]==0) then //right
5. {path.push(Point(row, col)); col = col + 1; maze[row][col]=1;}
6. Else if(row<m AND maze[row+1][col]==0) then //down
7. {path.push(Point(row, col)); row = row + 1; maze[row][col]=1;}
8. Else if(col>1 AND maze[row][col-1]==0) then //left
9. {path.push(Point(row, col)); col= col - 1; maze[row][col]=1;}
10. Else if(row>1 AND maze[row-1][col]==0) then //up
11. {path.push(Point(row, col)); row = row - 1; maze[row][col]=1;}
12. Else if (path.isEmpty() == true) then return false;
13. Else { P1 = path.pop(); row = P1.row; col = P1.col;}
14. } path.display(); return true;
15. }
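A 0-indexed Python sketch of the same backtracking search (the slides use 1-indexed coordinates; here the entrance is (0,0) and the exit (m-1, n-1), and visited cells are marked 1 exactly as above):

```python
def maze_path(maze):
    """Explicit-stack backtracking; 0 = free, 2 = obstacle.
    Tries right, down, left, up; pops the stack to backtrack.
    Returns the path as a list of (row, col), or None."""
    m, n = len(maze), len(maze[0])
    row = col = 0
    maze[row][col] = 1                                    # mark entrance
    path = []
    while (row, col) != (m - 1, n - 1):
        if col + 1 < n and maze[row][col + 1] == 0:       # right
            path.append((row, col)); col += 1
        elif row + 1 < m and maze[row + 1][col] == 0:     # down
            path.append((row, col)); row += 1
        elif col - 1 >= 0 and maze[row][col - 1] == 0:    # left
            path.append((row, col)); col -= 1
        elif row - 1 >= 0 and maze[row - 1][col] == 0:    # up
            path.append((row, col)); row -= 1
        elif not path:
            return None                                   # no path exists
        else:
            row, col = path.pop()                         # backtrack
            continue
        maze[row][col] = 1                                # mark visited
    return path + [(row, col)]
```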

Stack DS – Recursive Functions Implementation

 Every recursive function is implemented using a stack.
 It is implemented either by the compiler or by the
programmer.
 Some programming languages do not support
recursive functions. In such programming languages it is
implemented by the programmer using a stack.
 Example: We will implements following recursive
functions using stacks.
(i) n! (ii) GCD(m, n) (iii) Fib(n) (iv) D2b(n)
(v) Tower of Hanoi
18

9
n  (n  1)!, if n  1
Stack DS – n! 
if n  1
1,
1. Algorithm Factorial(n){
2. Stack s(n – 1);
3. While(n > 1) do {
4. s.push(n); n = n – 1;
5. }
6. fact = 1;
7. While(s.isEmpty() == false) do {
8. fact = fact * s.pop();
9. }
10. return fact;
11. }
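The same idea in Python, with a list as the explicit stack:

```python
def factorial(n):
    """Iterative n! mirroring the slide: push n, n-1, ..., 2, then
    multiply while popping."""
    stack = []
    while n > 1:
        stack.append(n)
        n -= 1
    fact = 1
    while stack:
        fact *= stack.pop()
    return fact
```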

19

 n, if m%n  0
Stack DS –gcd( m, n )  
gcd(n, m%n), otherwise
1. Algorithm GCD(m, n){
2. Stack s1(1), s2(1);
3. s1.push(m); s2.push(n);
4. While(s1.isEmpty() == false) do {   // trace example: m=60, n=12
5. m = s1.pop(); n = s2.pop();
6. if(m%n ≠ 0) then {
7. s1.push(n); s2.push(m%n);
8. }
9. }
10. return n;
11. }
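The stack-driven GCD above, sketched in Python:

```python
def gcd(m, n):
    """Stack-driven form of gcd(m, n) = n if m % n == 0 else gcd(n, m % n).
    The pair of stacks replaces the recursive call frames."""
    s1, s2 = [m], [n]
    while s1:
        m, n = s1.pop(), s2.pop()
        if m % n != 0:
            s1.append(n)
            s2.append(m % n)
    return n
```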

20

10
Stack DS – Fib(n)
Fib(n) = 1                       if n = 1 or n = 2
Fib(n) = Fib(n−1) + Fib(n−2)     if n > 2
1. Algorithm Fib(n){
2. Stack s(n/2+1);
3. s.push(n); fib = 0;
[Trace for n=6: the counted base cases sum to Fib(6) = 8; the sequence is 1, 1, 2, 3, 5, 8]
4. While(s.isEmpty() == false) do {
5. n = s.pop();
6. if(n>2) then { s.push(n-1); s.push(n-2);
7. }Else if(n == 1 OR n == 2) then{ fib = fib + 1;}
8. }
9. return fib;
10. }
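In Python this counting trick looks as follows. Each popped frame with n > 2 is split into two subproblems, and every Fib(1)/Fib(2) frame contributes 1, so the count equals Fib(n) (exponential work, exactly like the slide's algorithm):

```python
def fib(n):
    """Fib(n) by counting base-case frames with an explicit stack."""
    stack = [n]
    count = 0
    while stack:
        n = stack.pop()
        if n > 2:
            stack.append(n - 1)
            stack.append(n - 2)
        elif n in (1, 2):
            count += 1
    return count
```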

21

Stack DS – binary(n)
binary(n) = n                          if n ≤ 1
binary(n) = 10 × binary(n/2) + n%2     if n > 1


1. Algorithm binary(n){
2. Stack Sr(floor(log2(n))+1), Sp(floor(log2(n))+1); p = 1; b = 0;
3. While(n > 0) do {
4. Sr.push(n%2); Sp.push(p);
5. p = 10 × p; n = floor(n/2);
6. }
7. While(Sr.isEmpty() == false){
8. b = b + Sr.pop() * Sp.pop();
9. }
10. return b;
11. }
[Trace for n=13: remainders 1, 0, 1, 1 with place values 1, 10, 100, 1000 → b = 1101]
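The two-stack version in Python, producing the decimal number that "looks like" the binary representation:

```python
def to_binary(n):
    """Builds the binary-looking base-10 number from two stacks of
    remainders and decimal place values, e.g. 13 -> 1101."""
    sr, sp = [], []          # remainders and decimal place values
    p = 1
    while n > 0:
        sr.append(n % 2)
        sp.append(p)
        p *= 10
        n //= 2
    b = 0
    while sr:
        b += sr.pop() * sp.pop()   # remainder times its place value
    return b
```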
22

11
Stack DS – Tower of Hanoi Problem
1. Algorithm Move(n){
2. Stack Sn(n²), Ss(n²), Si(n²), Sd(n²);
3. Sn.push(n); Ss.push(‘A’); Si.push(‘B’); Sd.push(‘C’);
4. While(Sn.isEmpty() == false) do {
5. n=Sn.pop(); s=Ss.pop(); i=Si.pop(); d=Sd.pop();
6. if(n>1) then{
7. Sn.push(n-1); Ss.push(i); Si.push(s); Sd.push(d);
8. Sn.push(1); Ss.push(s); Si.push(i); Sd.push(d);
9. Sn.push(n-1); Ss.push(s); Si.push(d); Sd.push(i);
10. }Else { print “\nmove ” + s + “->” + d; }
11. }
12. }
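A Python sketch of the same frame-splitting scheme, using a single stack of (n, source, intermediate, destination) tuples instead of four parallel stacks (an equivalent packaging, not the slides' exact layout). The last-pushed frame is popped first, so the three frames are pushed in reverse order of execution:

```python
def hanoi_moves(n, s="A", i="B", d="C"):
    """Tower of Hanoi without recursion: each frame (k, s, i, d) means
    'move k disks from s to d using i'."""
    stack = [(n, s, i, d)]
    moves = []
    while stack:
        k, s_, i_, d_ = stack.pop()
        if k > 1:
            stack.append((k - 1, i_, s_, d_))   # popped last: k-1 disks i -> d
            stack.append((1, s_, i_, d_))       # popped second: largest disk s -> d
            stack.append((k - 1, s_, d_, i_))   # popped first: k-1 disks s -> i
        else:
            moves.append(f"{s_}->{d_}")
    return moves
```

For n disks this emits the standard 2^n − 1 moves.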

23

Queue Data Structure


 It is a linear data structure with restricted insert() and
del() operations, i.e. insertion is performed at one end and
deletion is performed from the other end of the
list.
 Insertion is performed at the rear end of the list and
deletion is performed from the front of the list.
 The Queue D.S. follows the first-in first-out (FIFO) strategy.
 It may be implemented either using an array or a linked list.
 Data members of the Queue DS – a single 1D array q, two index
variables (rear and front) that point to the two ends of the
list, and length.
 Methods of the Queue DS – isEmpty(), size(), insert(x),
delete(), display(), etc.

12
Queue DS – isEmpty() method
 It is used to check whether queue is empty or not.
 If value of the rear is -1 then queue is empty otherwise
it is not empty.
 The time complexity of this algorithm is Θ(1).
1. Algorithm isEmpty(){
2. If (rear== -1) then
3. return true;
4. Else
5. return false;
6. }

25

Queue DS – size() method


 It is used to get the number of elements in the queue.
 Time Complexity of this algorithm is Θ(1).
1. Algorithm size(){
2. if(rear == -1) then { return 0; }
3. Else { return rear – front + 1;}
4. }

[Example: length=5, front=1, rear=2; q[1..2] = 20, 30 → size() = 2]

26

13
Queue DS – insert(x) method
 It is used to insert element x at rear end of the Queue.
 There may be one exceptional condition – “Queue is
full”.
 The time complexity of this algorithm is Θ(1).
1. Algorithm insert(x){
2. If (rear== length-1) then throw Exception(“queue is full”);
3. If (rear == -1) then {front = rear = 0; }
4. Else {rear = rear + 1; }
5. q[rear] = x;
6. }

27

Queue DS – delete() method


 It is used to delete an element from front of the
queue.
 There may be one exceptional condition – “Queue is
empty”.
 The time complexity of this algorithm is Θ(1).
1. Algorithm delete(){
2. If (front == -1) then throw Exception(“Queue is empty”);
3. x = q[front];
4. if(front == rear){ front = rear = -1; }
5. Else { front = front + 1; }
6. return x;
7. }

28

14
Queue DS – display() method
 It is used to display the elements of queue in order of
their insertion in it.
 The time complexity of this algorithm is Θ(n).
1. Algorithm display(){
2. for i = front to rear do{
3. print q[i];
4. }
5. }

[Example: length=5, front=1, rear=3; display() prints q[1..3] = 20, 30, 40]
29

Queue DS – Implementation using Linked List


1. Queue{
2. LinkedList q;
3. isEmpty(){ return q.isEmpty(); }
4. size() { return q.size(); }
5. insert(x) { q.insert(x, q.size()); }
6. del() { return q.del(0); }
7. display() { q.display(); }
8. }

30

15
Problem in Simple Queue DS
 Initial state of the queue: Queue q(3); rear = front = -1,
length = 3. [_, _, _]
 Insert(7)  front = rear = 0. [7, _, _]
 Insert(15)  front = 0, rear = 1. [7, 15, _]
 Del()  returns 7, front = rear = 1. [_, 15, _]
 Insert(25)  front = 1, rear = 2. [_, 15, 25]
 Insert(30)  as rear == length-1, it throws the “Queue is
full” exception and the insertion fails, even
though one vacant element remains at index=0. This problem is
overcome with the circular queue.
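The walk-through above can be reproduced with a minimal Python sketch of the simple (non-wrapping) queue; the exception types are an assumption:

```python
class SimpleQueue:
    """Array queue whose rear never wraps; shows the wasted-slot problem."""
    def __init__(self, length):
        self.q = [None] * length
        self.length = length
        self.front = self.rear = -1

    def insert(self, x):
        if self.rear == self.length - 1:
            raise OverflowError("Queue is full")  # even if slots are free on the left
        if self.rear == -1:
            self.front = self.rear = 0
        else:
            self.rear += 1
        self.q[self.rear] = x

    def delete(self):
        if self.front == -1:
            raise IndexError("Queue is empty")
        x = self.q[self.front]
        if self.front == self.rear:
            self.front = self.rear = -1
        else:
            self.front += 1
        return x
```

Running the slide's sequence, insert(30) fails although index 0 is vacant.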
31

Circular Queue DS – isEmpty() method
 It is used to check whether queue is empty or not.
 If value of the rear is -1 then queue is empty otherwise
it is not empty.
 The time complexity of this algorithm is Θ(1).
1. Algorithm isEmpty(){
2. If (rear== -1) then
3. return true;
4. Else
5. return false;
6. }

32

16
Circular Queue DS – size() method
 It is used to get the number of elements in the queue.
 Time Complexity of this algorithm is Θ(1).
1. Algorithm size(){
2. if(rear == -1) then { return 0; }
3. Else if (rear > front) then { return rear – front + 1;}
4. Else { return length – front + rear + 1; }
5. }

[Example: length=5, front=4, rear=1; q[4]=50, q[0]=60, q[1]=70 → size() = 5 − 4 + 1 + 1 = 3]

33

Circular Queue DS – insert(x) method


 It is used to insert element x at rear end of the Queue.
 There may be one exceptional condition – “Queue is
full”.
 The time complexity of this algorithm is Θ(1).
1. Algorithm insert(x){
2. If ((rear+1)%length == front) then
3. throw Exception(“queue is full”);
4. If (rear == -1) then {front = rear = 0; }
5. Else {rear = (rear + 1)%length; }
6. q[rear] = x;
7. }
[Example: length=5, front=3, rear=4 with q[3]=10, q[4]=20;
insert(30) wraps: rear = (4+1)%5 = 0 and q[0]=30]
34

17
Circular Queue DS – delete() method
 It is used to delete an element from front of the
queue.
 There may be one exceptional condition – “Queue is
empty”.
 The time complexity of this algorithm is Θ(1).
1. Algorithm delete(){
2. If (front == -1) then throw Exception(“Queue is empty”);
3. x = q[front];
4. if(front == rear){ front = rear = -1; }
5. Else { front = (front + 1)%length; }
6. return x;
7. }
[Example: length=5, rear=1, front=3 with q[3]=10, q[4]=20, q[0]=30, q[1]=40;
del() → 10, front=4; del() → 20, front=0; del() → 30, front=1]

Circular Queue DS – display() method


 It is used to display the elements of queue in order of
their insertion in it.
 The time complexity of this algorithm is Θ(n).
1. Algorithm display(){ i = front;
2. While(i ≠ rear) do{
3. print q[i]; i = (i + 1) % length;
4. }
5. print q[rear];
6. }
[Example: length=5, front=4, rear=1; display() prints 50, 60, 70]
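The circular-queue methods above assemble into the following Python sketch; note how the wrapped insert succeeds exactly where the simple queue threw "Queue is full":

```python
class CircularQueue:
    """Circular array queue with the slides' size/insert/delete/display."""
    def __init__(self, length):
        self.q = [None] * length
        self.length = length
        self.front = self.rear = -1     # rear == -1 signals an empty queue

    def is_empty(self):
        return self.rear == -1

    def size(self):
        if self.rear == -1:
            return 0
        if self.rear >= self.front:
            return self.rear - self.front + 1
        return self.length - self.front + self.rear + 1

    def insert(self, x):
        if (self.rear + 1) % self.length == self.front:
            raise OverflowError("Queue is full")
        if self.rear == -1:
            self.front = self.rear = 0
        else:
            self.rear = (self.rear + 1) % self.length   # wrap around
        self.q[self.rear] = x

    def delete(self):
        if self.front == -1:
            raise IndexError("Queue is empty")
        x = self.q[self.front]
        if self.front == self.rear:
            self.front = self.rear = -1
        else:
            self.front = (self.front + 1) % self.length
        return x

    def display(self):
        out, i = [], self.front
        while i != self.rear:
            out.append(self.q[i])
            i = (i + 1) % self.length
        out.append(self.q[self.rear])
        return out
```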

18
Queue DS – Railroad Car Re-arrangement
 Let there be n cars, numbered 1 to n, on an
input line such that they are not in sequence from
right to left, e.g. [581942763].
 There are k holding tracks on which the cars may be
held.
 The problem is to re-arrange the cars in sequence
from right to left, using these k holding tracks, on the output
line, i.e. [987654321].
 For a car c, if it is feasible to keep it on more than one
holding track already having some cars on it, then it is
kept on the track whose last car has the largest number that is < c.
37

Queue DS – Railroad Car Re-arrangement
[Figure: input line [581942763] feeding two empty holding tracks,
with re-arranged output line [987654321]]

38

19
Queue DS – Railroad Car Re-arrangement
1. Algorithm railRoad(inputOrder[0..n-1], n, k){ Queue track[n]; nextCarToOutput=1;
2. For i = n-1 to 0, step -1 do{
3. If (inputOrder[i] == nextCarToOutput) then {
4. print “\nMove car ” + inputOrder[i] + “ from input track to output track.”;
5. nextCarToOutput++;
6. For j = 0 to k-1 do{ // moving cars from holding to output track.
7. While(track[j].isEmpty() == False AND track[j].getFront() == nextCarToOutput) do {
8. print “\nMove car ” + track[j].del() + “ from holding track ” + j + “ to output track”;
9. nextCarToOutput++; j=0;
10. }
11. }
12. }Else { // Put car inputOrder[i] on a holding track
13. c = inputOrder[i]; bestTrack = -1; //best track on which c may be hold
14. bestLast=0; //last best car on a best holding track
15. For j = 0 to k-1 do { // Finding best holding track
16. If (track[j].isEmpty() == False) then { lastCar = track[j].getRear();
17. If (c > lastCar AND lastCar > bestLast) then{
18. bestLast = lastCar; bestTrack = j;
19. }
20. }Else { If (bestTrack == -1) then { bestTrack = j; } }
21. }
22. If(bestTrack == -1) then { return False; } // car re-arrangement with k tracks failed
23. track[bestTrack].insert(c); // Car c put on holding track
24. print “\nMove car ” + c + “ from input track to holding track ” + bestTrack;
25. }
26. }
27. return True; // means arrangement is done
28. }
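A compact Python sketch of the same algorithm, using deques as the holding-track queues (the move-list return value is an illustrative addition; the slides print the moves instead):

```python
from collections import deque

def rail_road(input_order, k):
    """Rearrange cars using k holding tracks. Input is read from index
    n-1 down to 0; cars must leave in order 1, 2, ..., n. Returns the
    list of moves, or None when k tracks are not enough."""
    tracks = [deque() for _ in range(k)]
    next_out = 1
    moves = []
    for c in reversed(input_order):
        if c == next_out:
            moves.append(("input", c, "output"))
            next_out += 1
            changed = True
            while changed:                      # drain holding tracks, rescanning from j=0
                changed = False
                for j, t in enumerate(tracks):
                    if t and t[0] == next_out:
                        moves.append((j, t.popleft(), "output"))
                        next_out += 1
                        changed = True
        else:
            best, best_last = -1, 0
            for j, t in enumerate(tracks):      # best = largest last car < c
                if t:
                    if best_last < t[-1] < c:
                        best, best_last = j, t[-1]
                elif best == -1:
                    best = j                    # remember an empty track as fallback
            if best == -1:
                return None
            tracks[best].append(c)
            moves.append(("input", c, best))
    return moves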

Queue DS – Image Component Labeling


 A digital binary image is an m×m matrix of pixels, where
each pixel is either 0 or 1.
 A pixel with value 1 is called a component pixel. Two
adjacent component pixels are pixels of the same
image component.
 The objective of this problem is to label the
component pixels so that two pixels get the same label
iff they are pixels of the same image component.

40

20
Queue DS – Image Component Labeling
2 2 2 3 3 R C Row Col
2 2 2 3 3 Id=2 0 0 2 2
2 2 2 3 3
1
1 1 1
1 1 1 1 1

0 1 2 3 4 5 6 7 8 9 10 11

41

Queue DS – Image Component Labeling


1. Algorithm labelImageComponents(pixel[0..m-1][0..m-1], m){ Queue <Point> q(m*m);
2. id = 1;
3. For r = 0 to m-1 do {
4. For c = 0 to m-1 do {
5. If(pixel[r][c] == 1) then{
6. pixel[r][c] = ++id;
7. q.insert(Point(r, c));
8. While(q.isEmpty() == False) do{
9. Point p = q.del(); row=p.x; col = p.y;
10. If(col < m-1 AND pixel[row][col+1] == 1) then { //Right nhbr
11. pixel[row][col+1] = id; q.insert(Point(row, col+1));
12. }
13. If(row < m-1 AND pixel[row+1][col] == 1) then { //Down nhbr
14. pixel[row+1][col] = id; q.insert(Point(row+1, col));
15. }
16. If(col > 0 AND pixel[row][col-1] == 1) then { //Left nhbr
17. pixel[row][col-1] = id; q.insert(Point(row, col-1));
18. }
19. If(row > 0 AND pixel[row-1][col] == 1) then { //Up nhbr
20. pixel[row-1][col] = id; q.insert(Point(row-1, col));
21. }
22. }
23. }
24. }
25. }
26. }
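The labeling algorithm as a runnable Python sketch, with `collections.deque` as the queue:

```python
from collections import deque

def label_components(pixel):
    """BFS flood fill: 4-adjacent 1-pixels get the same label.
    Labels start at 2 so they cannot collide with the 0/1 input values.
    Mutates and returns the pixel matrix."""
    m = len(pixel)
    label = 1
    for r in range(m):
        for c in range(m):
            if pixel[r][c] == 1:          # start of a new, unlabeled component
                label += 1
                pixel[r][c] = label
                q = deque([(r, c)])
                while q:
                    row, col = q.popleft()
                    for nr, nc in ((row, col + 1), (row + 1, col),
                                   (row, col - 1), (row - 1, col)):
                        if 0 <= nr < m and 0 <= nc < m and pixel[nr][nc] == 1:
                            pixel[nr][nc] = label
                            q.append((nr, nc))
    return pixel
```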
42

21
Queue DS – Machine Shop Simulation
 A factory (machine shop) comprises m machines.
 The machines work on jobs, and each job comprises several tasks.
 For each task of a job, there is a task time and a m/c on which it is to
be performed.
 The tasks of a job are to be performed in a specific order.
 When the first task of a job is completed, the job goes to a m/c for its
2nd task, and so on until its last task is completed.
 When a job arrives at a m/c, the job may have to wait because the m/c
might be busy.
 Each m/c can be in one of three states: active, idle, and changeover.
43

Queue DS – Machine Shop Simulation

Job Table:
Job#  #Tasks  Tasks (m/c, time)  Length
1     2       (1, 2), (2, 3)     5
2     2       (2, 3), (1, 2)     5

M/C Table:
M/C#  Changeover Time
1     3
2     4

Simulation trace (I = idle, C = changeover, L = large/no finish time):
Time  M1 Queue  M2 Queue  M1 Active  M2 Active  M1 Finish  M2 Finish
Init  1         2         I          I          L          L
0     -         -         1          2          2          3
2     -         1         C          2          5          3
3     2         1         C          C          5          7
5     -         1         2          C          7          8
7     -         1         C          C          10         8
8     -         -         C          1          10         11
10    -         -         I          1          L          11
11    -         -         I          C          L          15
15    -         -         I          I          L          L

Finish and Wait Time:
Job#  Finish Time  Wait Time
1     11           6
2     7            2
Total Wait Time: 8

44

22
Lecture Slide - 5
CSC23: Advanced Data
Structures
 Sartaj Sahni: Data Structures, Algorithms and
Applications in C++, 2nd Edition, Silicon Press
 D. Samanta: Classic Data Structures, 2nd Edition, PHI

Tree Data Structure


 It is non-linear data structure. In general the tree data
structure is used to represent the data that has
hierarchical relationship between them.
 Example 1: A family tree shows the hierarchy of the
various members of a family.
 This hierarchy not only gives the ancestors and successors of
a family member, but some other information too.
 For example, if we assume that left is F(emale) and right M(ale),
then sister, brother, uncle, aunty, grandfather etc.
information is implicit.
[Figure: family tree rooted at A, with members B–I on lower levels]
2

1
Tree Data Structure – Example
 Example 2: An expression tree is used to represent a
mathematical expression. (A-B) / (C * D + E) is
represented by the following tree.
 These two examples illustrate how powerful this data
structure is to maintain a lot of information implicitly.
[Figure: expression tree for (A - B) / (C * D + E) — root /;
left subtree: - with children A, B; right subtree: + with
children * (over C, D) and E]
3

Tree D.S. – Basic Terminology
 Node: It is the main component of the tree D.S. A node stores the
data and holds links to its children.
 Parent: The parent of a node is the immediate predecessor of
that node.
 Child: All immediate successors of a node
are called its children.
 Link: It is a pointer from a parent node
to its child nodes.
 Root: It is a special node that
does not have a parent.
 Leaf nodes: The nodes that
do not have children.
[Figure: tree with root A; children B, C; D, E, F below; leaves G, H, I]

2
Tree Data Structure – Basic Terminology
 Level: It is the rank of the hierarchy. The level of the root
node is zero. If the level of a node is k, then the level of its
children is k+1.
 Height: It is the height of the tree, equal to
max. level + 1. It is the maximum number of nodes on the
path from the root to a leaf.
 Degree: The degree of a node is the
number of its children, so the
degree of a leaf node must be zero.
The degree of a tree is the max
degree of a node in the tree.
 Sibling: Nodes having the same
parent are called sibling nodes.
5

Tree Data Structure – Basic Terminology
 Definition [Tree]: A tree t is a finite non-empty set of
nodes. One of these node is called root node, and the
remaining nodes (if any) are partitioned into trees that
are called subtrees of t.
 Definition [Binary tree]: A binary tree t is a finite (may be
empty) set of nodes. When it is not empty, it has a root
node, and the remaining nodes (if any) are partitioned
into two binary trees, which are called left and right sub
trees of t.
 The main differences between a binary tree and a tree are:
 Each node of a binary tree has exactly two subtrees (one or both may be
empty), whereas each node of a tree may have any number of subtrees.
 A binary tree may be empty, but a tree cannot.

3
Types of Binary Tree
 Skewed Binary Tree: A binary tree is called skewed binary
tree if at each level there should be only one node.

[Figure: three skewed binary trees on nodes A–E: right skewed,
left-right skewed, and left skewed]

Types of Binary Tree


 Full Binary Tree: A binary tree is called full binary tree if
at each level there should be maximum number of
possible nodes.

[Figure: full binary tree — root A; level 1: B, C; level 2: D, E, F, G]

4
Types of Binary Tree
 Complete Binary Tree: A binary tree is called complete
binary tree if at each level (except last level) there should
be maximum number of possible nodes and at last level
the nodes should be as left as possible.
[Figure: two trees on nodes A–F. Complete binary tree: the last-level
nodes D, E, F fill the leftmost positions. Not a complete binary
tree: a last-level position is skipped]

Binary Tree - Property


 Property (i): Minimum number of nodes in a binary tree of
height h is h.
 Proof: A binary tree has minimum number of nodes if it is
skewed binary tree. It will have a node at each level, so
there should be h number of nodes where h is its height.
 Property (ii): If n and e are number of nodes and number
of edges in a non empty binary tree, then n = e + 1.
 Proof: Let n=1; then e=0, i.e. it is true for the initial value n=1.
Assume it is true for an arbitrary n, i.e. n=e+1. Let one more node
be inserted into the binary tree; then one edge is also
added. Therefore n+1 = (e+1)+1. Thus it is true for n+1,
and hence proved.

10

5
Binary Tree - Property
 Property (iii): For any non empty binary tree t, if n0 and n2
are number of nodes of degree zero and two respectively,
then n0=n2+1.
 Proof: Let n is the total number of nodes in t, and ni for
i=0, 1, 2 be the number of nodes of degree i. So that we
have
n = n0 + n1 + n2 (1)
If e is the number of edges in t, then
e = n0 x 0 + n1 x 1 + n2 x 2 = n1 + 2n2 (2)
But we know that n = e + 1, ∴ e=n-1= n0 + n1+ n2 – 1 (3)
From equations (2) and (3), we get
n1+2n2 = n0+n1+n2-1, ∴ n0=n2+1. Hence proved.
11

Binary Tree - Property


 Property (iv): The number of nodes in a full binary tree t of
height h is 2^h − 1, i.e. n = 2^h − 1.
 Proof: Let the height of the binary tree be h; then its levels are
0, 1, 2, …, h-1. In a full binary tree, if n is the number
of nodes, then
n = Σ (k=0 to h−1) 2^k
∴ n = 1 + 2 + 2^2 + 2^3 + ... + 2^(h−1) = 1·(1−2^h)/(1−2) = (1−2^h)/(−1) = 2^h − 1.

Sum of n terms of the G.P. a, ar, ar^2, ar^3, ..., ar^(n−1) is
S_n = a(1−r^n)/(1−r).

12

6
Binary Tree - Property
 Property (v): If n is the number of nodes and h is the
height of a complete binary tree, then 2^(h−1) ≤ n ≤ 2^h − 1.
 Proof: A complete binary tree of height h has the
minimum number of nodes if it has only one node at the
last level. Therefore the min. no. of nodes in it equals the no.
of nodes in a full binary tree of height h−1, plus 1;
that is, 2^(h−1) − 1 + 1 = 2^(h−1). And it
has the max. no. of nodes if it is a full binary tree, i.e.
the max. no. of nodes in it is 2^h − 1.
∴ 2^(h−1) ≤ n ≤ 2^h − 1. Hence proved.

13

Binary Tree - Property


 Property (vi): The height of a complete binary tree with n
nodes is ⌈log2(n+1)⌉.
 Proof: In a complete binary tree t, 2^(h−1) ≤ n ≤ 2^h − 1.
∴ n ≤ 2^h − 1, ∴ n+1 ≤ 2^h, ∴ 2^h ≥ n+1, ∴ h ≥ log2(n+1),
∴ h = ⌈log2(n+1)⌉. Hence proved.
 Property (vii): The height of a complete binary tree with n
nodes is ⌊log2(n)⌋ + 1.
 Proof: In a complete binary tree t, 2^(h−1) ≤ n ≤ 2^h − 1.
∴ 2^(h−1) ≤ n, ∴ h−1 ≤ log2(n), ∴ h−1 = ⌊log2(n)⌋,
∴ h = ⌊log2(n)⌋ + 1. Hence proved.
 Property (viii): The total number of possible binary trees with
n nodes is C(2n, n)/(n+1).
14

7
Binary Tree - Traversing
 In binary tree traversing it is fixed that left sub-tree
should be traversed before right sub-tree. So, based on
position of root there are three types of binary tree
traversing. Therefore, if root is before the left and right,
then it is pre-order traversing; if root is after the left and
right, then it is post order traversing; and if root is in-
between left and right, then it is in-order traversing.
 Pre-Order [Root, Left, Right]:
C, A, G, F, E, D, B
 Post-Order [Left, Right, Root]:
F, G, D, B, E, A, C
 In-Order [Left, Root, Right]:
G, F, A, D, E, B, C
[Figure: root C with left child A; A's children G and E;
F right child of G; D and B children of E]

Creation of Binary Tree – using its pre-order & in-order traversals
 From the pre-order traversal we get the root node, and from the
in-order traversal we partition the nodes into the nodes of the left
and right sub-trees.
 Pre-Order [Root, Left, Right]:
A, B, C, D, E, F, G, H, I, J, L, K
 In-Order [Left, Root, Right]:
C, E, D, F, B, G, A, H, J, L, I, K
 Post-Order [Left, Right, Root]:
E, F, D, C, G, B, L, J, K, I, H, A
[Figure: root A splits the in-order sequence into C, E, D, F, B, G
(left subtree) and H, J, L, I, K (right subtree); the partitioning
recurses on each side]
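The partition idea above is directly implementable. This Python sketch represents a tree as nested (root, left, right) tuples — an illustrative encoding, not the slides' node structure — and checks itself by emitting the post-order traversal:

```python
def build_tree(preorder, inorder):
    """Root = first of preorder; its position in inorder splits the
    remaining nodes into the left and right subtrees."""
    if not preorder:
        return None
    root = preorder[0]
    i = inorder.index(root)            # left subtree has i nodes
    return (root,
            build_tree(preorder[1:i + 1], inorder[:i]),
            build_tree(preorder[i + 1:], inorder[i + 1:]))

def postorder(tree):
    if tree is None:
        return []
    root, left, right = tree
    return postorder(left) + postorder(right) + [root]
```

Applied to the slide's sequences, the reconstructed tree reproduces the stated post-order E, F, D, C, G, B, L, J, K, I, H, A.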

8
Creation of Binary Tree – using its post-order & in-order traversals
 From the post-order traversal we get the root node by
moving from right to left, and from the in-order traversal
we partition the nodes into the nodes of the left and right sub-
trees.
 Post-Order [Left, Right, Root]:
A, F, D, B, E, G, C
 In-Order [Left, Root, Right]:
A, C, F, G, D, E, B
[Figure: root C splits the in-order sequence into A (left subtree)
and F, G, D, E, B (right subtree); recursing gives the right
subtree G(F, E(D, B))]
17

Binary Tree Representation using Array
 If h is the height of a binary tree, then the size of the array
should be 2^h − 1.
 The root is stored at index=0.
 If the index of a node is i, then
 the index of its left child is 2*(i+1)-1;
 the index of its right child is 2*(i+1);
 the index of its parent is floor((i-1)/2).
 Example: The array representation of
the given binary tree is:
index: 0 1 2 3 4  5 6  7 8 9 10 11 12 13 14
value: A B C D E \0 F \0 G H \0 \0 \0  I \0
[Figure: root A; children B, C; D, E children of B; F right child
of C; G right child of D; H left child of E; I left child of F]

18

9
Binary Tree Representation using Array
 The data members of the tree D.S.: a[0 .. 2^h − 2], h.
 Member functions of Tree D.S. are: build(i), isEmpty(), size(),
preOrder(i), inOrder(i), postOrder(i), levelOrder(), height(),
search(x).
 In constructor function, array is created and initialized with null
value (‘\0’).
 The member functions build(i), preOrder(i), inOrder(i), and
postOrder(i) are recursive function which are called with i=0,
index of root node.
 The build(i) member function is used to create the binary tree;
whereas preOrder(i), inOrder(i), and postOrder(i) member
functions are used to traverse the binary tree in pre-Order, in-
Order, and post-Order respectively.
 The levelOrder() is used to traverse the tree in level order using
Queue. 19

Binary Tree (Array) – build(i)
1. Algorithm build(i){
2. If (i == 0) then { Print “Enter root node ”; }
3. Else if(i%2 == 1) then{ Print “Enter left child of ” + a[(i-1)/2]+“ ”;}
4. Else{ Print “Enter right child of ” + a[floor((i-1)/2)]+“ ”;}
5. read a[i];
6. Print “Does ” + a[i] + “ has left child (Y/y)? ”;
7. read ans;
8. If((ans==‘Y’ OR ans==‘y’) AND (2*(i+1)-1 < 2^h − 1)) then
9. { build(2*(i+1)-1); }
10. Print “Does ” + a[i] + “ has right child (Y/y)? ”;
11. read ans;
12. If((ans==‘Y’ OR ans==‘y’) AND (2*(i+1) < 2^h − 1)) then
13. { build(2*(i+1)); }
14. }
20

10
Binary Tree (Array) – isEmpty()
 It is used to check whether binary tree t is empty or not. If 0th
element of the array has null character, then binary tree should
be empty.
1. Algorithm isEmpty(){
2. If (a[0] == ‘\0’) then {
3. return True;
4. }
5. Else{
6. return False;
7. }
8. }

21

Binary Tree (Array) – size()


 It return number of nodes in binary tree t.

1. Algorithm size(){
2. sz = 0;
2. For i = 0 to 2^h − 2 do{
4. If (a[i] ≠ ‘\0’) then {
5. sz = sz + 1;
6. }
7. }
8. return sz;
9. }

22

11
Binary Tree (Array) – preOrder(i)
 It return the sequence of nodes of the binary tree t traversed in
pre-order traversing. It is called by passing index of root node
i.e. 0.
1. Algorithm preOrder(i){
2. If(a[i] ≠ ‘\0’ ) Then { Print a[i] + “ ”; }
3. If(2*(i+1)-1 < 2^h − 1 AND a[2*(i+1)-1] ≠ ‘\0’){//Left subtree
4. preOrder(2*(i+1)-1);
5. }
6. If(2*(i+1) < 2^h − 1 AND a[2*(i+1)] ≠ ‘\0’){//Right subtree
7. preOrder(2*(i+1));
8. }
9. }

23

Binary Tree (Array) – inOrder(i)
 It return the sequence of nodes of the binary tree t traversed in
in-order traversing. It is called by passing index of root node i.e.
0.
1. Algorithm inOrder(i){
2. If(2*(i+1)-1 < 2^h − 1 AND a[2*(i+1)-1] ≠ ‘\0’){//Left subtree
3. inOrder(2*(i+1)-1);
4. }
5. If(a[i] ≠ ‘\0’ ) Then { Print a[i] + “ ”; }
6. If(2*(i+1) < 2^h − 1 AND a[2*(i+1)] ≠ ‘\0’){//Right subtree
7. inOrder(2*(i+1));
8. }
9. }

24

12
Binary Tree (Array) – postOrder(i)
 It return the sequence of nodes of the binary tree t traversed in
post-order traversing. It is called by passing index of root node
i.e. 0.
1. Algorithm postOrder(i){
2. If(2*(i+1)-1 < 2^h-1 AND a[2*(i+1)-1] ≠ ‘\0’){ //Left subtree
3. postOrder(2*(i+1)-1);
4. }
5. If(2*(i+1) < 2^h-1 AND a[2*(i+1)] ≠ ‘\0’){ //Right subtree
6. postOrder(2*(i+1));
7. }
8. If(a[i] ≠ ‘\0’ ) Then { Print a[i] + “ ”; }
9. }


Binary Tree (Array) – levelOrder


levelOrder()
 It returns the sequence of nodes of the binary tree t traversed in
level-order.
 It uses the queue data structure for it.
1. Algorithm levelOrder(){
2. If(a[0] == ‘\0’) then { return; } // binary tree is empty
3. Queue <int> q(2^h); q.insert(0);
4. While(q.isEmpty() == False) do{
5. i = q.del(); print a[i] + “ ”;
6. If(2*(i+1)-1 < 2^h-1 AND a[2*(i+1)-1] ≠ ‘\0’){ //Left subtree
7. q.insert(2*(i+1)-1);
8. }
9. If(2*(i+1) < 2^h-1 AND a[2*(i+1)] ≠ ‘\0’){ //Right subtree
10. q.insert(2*(i+1));
11. }
12. }
13. }
Binary Tree (Array) – height()
 It returns the height of the tree t.
1. Algorithm height(){
2. return height;
3. }


Binary Tree (Array) – search(x)


 It is used to search a key (x) in the binary tree t. If it exists in the
tree, then it returns its index; otherwise it returns -1.

1. Algorithm search(x){
2. For i = 0 to 2^h-2 do {
3. if(a[i] == x) then{
4. return i;
5. }
6. }
7. return -1;
8. }

Binary Tree (Array) – Limitations
 The array representation of the binary tree has following
limitation:
 It is static – the size of the array representing the binary tree is
fixed.
 If the binary tree is skewed, then a large number of elements of
the array will be empty.
 To overcome these limitations the binary tree may be
represented using linked nodes – but this requires extra
memory space for the pointers.


Binary Tree Representation using Link


 It is a collection of nodes, where each node has a data field
and left and right pointer variables to hold the addresses of
its left and right child respectively.
 Example: in the link representation of the given binary tree, the
Root pointer holds the address (100) of node A, whose left and
right pointers (200, 300) address nodes B and C, and so on down
to the leaf nodes D, E, and F, whose child pointers are NULL.
(Figure omitted: address diagram of the linked representation.)

Binary Tree Representation using Link
 The data member of the tree D.S.: root, a pointer of node type
that holds the address of the root node of the binary tree t.
 Member functions of the tree D.S. are: build(r, p), isEmpty(), size(),
getSize(r), preOrder(r), inOrder(r), postOrder(r), levelOrder(),
height(r), search(r, x), getRoot().
 In the constructor function, we set root = NULL.
 The member functions build(r, p), getSize(r), preOrder(r),
inOrder(r), postOrder(r), height(r), and search(r, key) are
recursive functions which are called with r = root, the address of
the root node.
 The levelOrder() function traverses the tree in level order using a
Queue, whose elements are node-type pointers storing the
addresses of the nodes of the binary tree.

Binary Tree (Link) – Build(r, p)


1. Algorithm build(r, p){
2. If (r == NULL) then { r = root = createNode();
3. Print “Enter root node ”;
4. }Else if(r == p.left) then{ Print “Enter left child of ” + p.data +“ ”;}
5. Else{ Print “Enter right child of ” + p.data +“ ”;}
6. read r.data;
7. Print “Does ” + r.data + “ have a left child (Y/y)? ”;
8. read ans;
9. If(ans==‘Y’ OR ans==‘y’)then
10. { r.left = createNode(); build(r.left, r); }
11. Print “Does ” + r.data + “ have a right child (Y/y)? ”;
12. read ans;
13. If(ans==‘Y’ OR ans==‘y’) then
14. {r.right = createNode(); build(r.right, r); }
15. }

Binary Tree (Link) – isEmpty()
 It is used to check whether binary tree t is empty or not. If root
is NULL, then the binary tree is empty.

1. Algorithm isEmpty(){
2. If (root == NULL) then {
3. return True;
4. }
5. Else{
6. return False;
7. }
8. }


Binary Tree (Link) – size()


 It returns the number of nodes in binary tree t.
1. Algorithm size(){
2. if(root == NULL) return 0;
3. sz = 0; CQueue <Node *> q(100);
4. q.insert(root);
5. While(q.isEmpty() == false) do{
6. Node *r = q.del();
7. sz++;
8. if(r.left ≠ NULL) then { q.insert(r.left); }
9. if(r.right ≠ NULL) then { q.insert(r.right); }
10. }
11. return sz;
12. }

Binary Tree (Link) – getSize()
 It is a recursive function that returns the number of nodes in the
binary tree t. It is called with the address of the root node.
1. Algorithm getSize(r){
2. if(r == NULL) then
3. {
4. return 0;
5. }Else
6. {
7. return getSize(r.left) + getSize(r.right) + 1;
8. }
9. }


Binary Tree (Link) – preOrder(r)


 It returns the sequence of nodes of the binary tree t when it is
traversed in pre-order. It is called by passing the
address of the root node of the binary tree.
1. Algorithm preOrder(r){
2. if(r ≠ NULL) then { Print r.data + " "; }
3. if(r.left ≠ NULL) then { preOrder(r.left); }
4. if(r.right ≠ NULL) then { preOrder(r.right); }
5. }

Binary Tree (Link) – inOrder(r)
 It returns the sequence of nodes of the binary tree t when it is
traversed in in-order. It is called by passing the address of
the root node of the binary tree.
1. Algorithm inOrder(r){
2. if(r.left ≠ NULL) then { inOrder(r.left); }
3. if(r ≠ NULL) then { Print r.data + " "; }
4. if(r.right ≠ NULL) then { inOrder(r.right); }
5. }


Binary Tree (Link) – postOrder(r)


 It returns the sequence of nodes of the binary tree t when it is
traversed in post-order. It is called by passing the
address of the root node of the binary tree.
1. Algorithm postOrder(r){
2. if(r.left ≠ NULL) then { postOrder(r.left); }
3. if(r.right ≠ NULL) then { postOrder(r.right); }
4. if(r ≠ NULL) then { Print r.data + " "; }
5. }
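The three recursive traversals can be sketched together in C++; the Node struct and the output-string convention are illustrative:

```cpp
#include <cassert>
#include <string>

// Linked-node traversals following preOrder/inOrder/postOrder.
struct Node {
    char data;
    Node *left = nullptr, *right = nullptr;
    Node(char d) : data(d) {}
};

void preOrder(Node *r, std::string &out) {
    if (!r) return;
    out += r->data;            // root first
    preOrder(r->left, out);
    preOrder(r->right, out);
}

void inOrder(Node *r, std::string &out) {
    if (!r) return;
    inOrder(r->left, out);
    out += r->data;            // root between subtrees
    inOrder(r->right, out);
}

void postOrder(Node *r, std::string &out) {
    if (!r) return;
    postOrder(r->left, out);
    postOrder(r->right, out);
    out += r->data;            // root last
}
```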

Binary Tree (Link) – levelOrder()
 It returns the sequence of nodes of the binary tree t when it is
traversed in level-order.
 It uses the queue data structure for it.
1. Algorithm levelOrder(){
2. if(root == NULL) then { return; }
3. CQueue <Node *> q(size());
4. q.insert(root);
5. While(q.isEmpty() == false) do{
6. Node *r = q.del();
7. print r.data + " ";
8. if(r.left ≠ NULL) then { q.insert(r.left); }
9. if(r.right ≠ NULL) then { q.insert(r.right); }
10. }
11. }
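A sketch of levelOrder() in C++, with std::queue standing in for the course's CQueue of node pointers:

```cpp
#include <cassert>
#include <queue>
#include <string>

// Level-order traversal using a queue of node pointers.
struct Node {
    char data;
    Node *left = nullptr, *right = nullptr;
    Node(char d) : data(d) {}
};

std::string levelOrder(Node *root) {
    std::string out;
    if (!root) return out;              // empty tree
    std::queue<Node *> q;
    q.push(root);
    while (!q.empty()) {
        Node *r = q.front(); q.pop();   // delete from the front
        out += r->data;
        if (r->left)  q.push(r->left);  // enqueue children left-to-right
        if (r->right) q.push(r->right);
    }
    return out;
}
```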


Binary Tree (Link) – height(r)


 This recursive function returns the height of the binary tree t. It
is called by passing the address of the root node of the binary
tree t.
1. Algorithm height(r){
2. if(r == NULL) then { return 0; }
3. Else if(height(r.left) > height(r.right)) then { return height(r.left) + 1; }
4. Else { return height(r.right) + 1; }
5. }
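A C++ sketch of the recursive height; unlike the pseudocode above, it computes each subtree height once rather than calling height(r.left) twice. With this convention an empty tree has height 0 and a single node has height 1:

```cpp
#include <cassert>
#include <algorithm>

struct Node {
    char data;
    Node *left = nullptr, *right = nullptr;
    Node(char d) : data(d) {}
};

int height(Node *r) {
    if (!r) return 0;
    // take the taller subtree and add one for this node
    return std::max(height(r->left), height(r->right)) + 1;
}
```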

Binary Tree (Link) – getRoot()
 This function returns the address of the root of the binary tree t.

1. Algorithm getRoot(){
2. return root;
3. }


Binary Tree (Link) – search(r, x)


 This recursive function is used to search a key (x) in the binary
tree t. If it exists in the tree, then it returns its address; otherwise it
returns NULL.
 It is called with the address of the root node of the binary tree t
and the key (x).
1. Algorithm search(r, key){
2. if(r == NULL) then { return NULL; }
3. if(r.data == key) then { return r; }
4. Else{ Node *res1 = search(r.left, key);
5. if(res1 ≠ NULL) then { return res1; }
6. Else{ Node *res2 = search(r.right, key); return res2; }
7. }
8. }
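A C++ sketch of search(r, key); since a general binary tree is not ordered, both subtrees may be explored:

```cpp
#include <cassert>
#include <cstddef>

struct Node {
    char data;
    Node *left = nullptr, *right = nullptr;
    Node(char d) : data(d) {}
};

Node *search(Node *r, char key) {
    if (!r) return nullptr;            // dead end
    if (r->data == key) return r;      // found at this node
    Node *res = search(r->left, key);  // try the left subtree first
    return res ? res : search(r->right, key);
}
```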

Expression Tree
 An expression tree is a binary tree which is used to store an
arithmetic expression.
 The internal nodes of an expression tree are operators and the
leaves are operands.
 Data member of the expression tree – root (a pointer variable of
node data type whose data field is of string type).
 Member functions – create(P), inorder(r), preorder(r),
postorder(r), evaluate(r).
 The creation operation uses the postfix expression of the given
arithmetic expression.
 The preorder traversing of the expression tree returns the prefix
form of the expression.
 The postorder traversing of the expression tree returns the
postfix form of the expression.

Expression Tree – create(P)


 It is used to create the expression tree from postfix form of the
arithmetic expression using stack.
1. Algorithm create(P){ Stack <Node *> s(100);
2. For each token x of P do {
3. if(isOperator(x) == True) then {
4. Node *t1 = s.pop(), *t2 = s.pop();
5. newNode = createNode(); newNode.data = x;
6. newNode.left = t2; newNode.right = t1; s.push(newNode);
7. }Else{ newNode = createNode(); newNode.data = x;
8. newNode.left = newNode.right = NULL; s.push(newNode);
9. }
10. }
11. root = s.pop();
12. }
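The stack-based construction above can be sketched in C++; single-character tokens are assumed for brevity, and note the first pop yields the right child:

```cpp
#include <cassert>
#include <stack>
#include <string>

struct Node {
    char data;
    Node *left = nullptr, *right = nullptr;
    Node(char d) : data(d) {}
};

bool isOperator(char c) {
    return c == '+' || c == '-' || c == '*' || c == '/';
}

Node *create(const std::string &postfix) {
    std::stack<Node *> s;
    for (char x : postfix) {
        Node *n = new Node(x);
        if (isOperator(x)) {
            n->right = s.top(); s.pop();  // first pop is the right child
            n->left  = s.top(); s.pop();
        }
        s.push(n);                        // operand or completed subtree
    }
    return s.top();                       // root of the expression tree
}
```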
Expression Tree – evaluate(r)
 This recursive function is used to evaluate the arithmetic
expression by using expression tree. This function is called with
address of the root of the expression tree.
1. Algorithm evaluate(r){
2. if(r ≠ NULL) then {
3. Node *lptr = r.left, *rptr = r.right;
4. if(isOperator(lptr.data) == False) then { leftOperand = lptr.data; }
5. Else { leftOperand = evaluate(lptr); }
6. if(isOperator(rptr.data) == False) then { rightOperand = rptr.data; }
7. Else { rightOperand = evaluate(rptr); }
8. operator = r.data;
9. val = leftOperand operator rightOperand;
10. return val;
11. }
12. }
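A runnable sketch of evaluation in C++, assuming single-digit operands; the tree is built from postfix as in create(P):

```cpp
#include <cassert>
#include <stack>
#include <string>

struct Node {
    char data;
    Node *left = nullptr, *right = nullptr;
    Node(char d) : data(d) {}
};

bool isOperator(char c) {
    return c == '+' || c == '-' || c == '*' || c == '/';
}

Node *create(const std::string &postfix) {
    std::stack<Node *> s;
    for (char x : postfix) {
        Node *n = new Node(x);
        if (isOperator(x)) {
            n->right = s.top(); s.pop();
            n->left  = s.top(); s.pop();
        }
        s.push(n);
    }
    return s.top();
}

int evaluate(Node *r) {
    if (!isOperator(r->data)) return r->data - '0';  // leaf: digit operand
    int l = evaluate(r->left), rt = evaluate(r->right);
    switch (r->data) {
        case '+': return l + rt;
        case '-': return l - rt;
        case '*': return l * rt;
        default:  return l / rt;
    }
}
```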

Heap Trees
 It is a complete binary tree. There are two types of heap tree –
the max heap tree and the min heap tree.
 In a max heap tree the value of a node
must be greater than or equal to the values of its children.
 In a min heap tree the value of a node
must be less than or equal to the values of its children.
 A heap tree may be represented using links; but as it is a
complete binary tree, it is better to represent it using an array.
 (Figure omitted: a max heap tree with root 60 and a min heap
tree with root 10.)
Heap Trees
 The major operations that can be performed on a heap tree
are:
 Insertion – to insert an element x at the end of the heap tree.
 Deletion – delete the root node of the heap tree.
 Build – create a heap tree from given list of integers.


Heap Trees – insert(x)


 First we insert x at the last position in the heap tree.
 Next it is compared with its parent node; if it violates the heap
property then the two are interchanged. This continues
between pairs of nodes on the path from the newly inserted node to
the root node till we get a parent node whose value satisfies the
heap property or we reach the root of the heap tree.
 (Figure omitted: the example heap after Insert 65 and Insert 15.)

Heap Trees – insert(x)
1. Algorithm insert(a[0..n-1], n, x){ // heap tree is stored in a[0..n-2]
2. i = n-1;
3. While((i > 0) AND a[floor((i-1)/2)] < x) do {
4. a[i] = a[floor((i-1)/2)]; i = floor((i-1)/2);
5. }
6. a[i] = x;
7. }
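The sift-up insertion can be sketched in C++ (std::vector stands in for the slides' fixed array; floor((i-1)/2) is just integer division here):

```cpp
#include <cassert>
#include <vector>

// Max-heap insert by sifting up: shift smaller parents down until the
// heap property holds, then place x.
void heapInsert(std::vector<int> &a, int x) {
    a.push_back(0);                 // grow the heap by one slot
    int i = (int)a.size() - 1;
    while (i > 0 && a[(i - 1) / 2] < x) {
        a[i] = a[(i - 1) / 2];      // move the smaller parent down
        i = (i - 1) / 2;
    }
    a[i] = x;
}
```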

 For insertion into a min heap, change the while condition to
While((i > 0) AND a[floor((i-1)/2)] > x)

Heap Trees – del()


 It deletes the root node of the heap tree. Let the heap tree be stored
in a[0..n-1].
 First it stores the value of the root in a variable x, and moves the
last node to the root.
 Then it adjusts the heap tree stored in a[0..n-2] so that it satisfies
the heap property.
 The main function in the deletion operation is the adjust function.
 (Figure omitted: the example heap after two successive Del()
operations.)
Heap Trees – del()
1. Algorithm adjust(a[0..n-1], i, n){
2. // It adjust the max heap tree stored in a[i..n-1]
3. j = 2 * (i+1) - 1; x = a[i];
4. While(j ≤ n-1) do {
5. if((j < n-1) AND (a[j] < a[j+1])) then { j = j + 1; }
6. if(x ≥ a[j]) then { break; }
7. a[floor((j-1)/2)] = a[j]; j = 2 * (j+1) – 1;
8. }
9. a[floor((j-1)/2)] = x;
10. }

 The adjust function for a min heap tree is obtained by
changing a[j] < a[j+1] to a[j] > a[j+1] at line 5 and x ≥ a[j] to x ≤ a[j]
at line 6.

Heap Trees – del()


 This algorithm deletes the max element from the root of the max
heap tree and returns it.
1. Algorithm DelMax(a[0..n-1], n, x){
2. if(n == 0) then { throw Exception(“Heap tree is empty”); }
3. x = a[0];
4. a[0] = a[n-1];
5. adjust(a, 0, n-1);
6. return x;
7. }
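Adjust (sift-down) and DelMax together can be sketched in C++; the index arithmetic follows the slides, with std::vector as the array:

```cpp
#include <cassert>
#include <vector>
#include <stdexcept>

// Sift-down: restore the max-heap property for the subtree rooted at i,
// within a[0..n-1].
void adjust(std::vector<int> &a, int i, int n) {
    int x = a[i], j = 2 * i + 1;          // j = left child of i
    while (j <= n - 1) {
        if (j < n - 1 && a[j] < a[j + 1]) j = j + 1;  // pick larger child
        if (x >= a[j]) break;             // heap property restored
        a[(j - 1) / 2] = a[j];            // promote the child
        j = 2 * j + 1;
    }
    a[(j - 1) / 2] = x;
}

int delMax(std::vector<int> &a) {
    if (a.empty()) throw std::runtime_error("Heap tree is empty");
    int x = a[0];
    a[0] = a.back(); a.pop_back();        // last node goes to the root
    if (!a.empty()) adjust(a, 0, (int)a.size());
    return x;
}
```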

 For a min heap tree, the analogous DelMin deletes the min
element from the root of the heap tree.
Heap Trees – build()
 To build a heap tree from a list of integers, first we store the data
(integers) into the array a[0..n-1], then call the heapify(a, n) function.

1. Algorithm heapify(a[0..n-1], n){


2. //Adjust the elements of array a[0..n-1] to form heap tree
3. For i = floor((n-2)/2) to 0 step -1 do{
4. adjust(a, i, n);
5. }
6. }


Heap Trees – build()


 Example: We want to build a max heap tree from the list 40, 35,
45, 60, 90, 70, 80 (n = 7).
 Iteration 1: i = floor((n-2)/2) = floor((7-2)/2) = floor(2.5) = 2.
adjust(a,2,7) changes a[0..6] from 40 35 45 60 90 70 80
to 40 35 80 60 90 70 45.
Heap Trees – build()
 Iteration 2: i = 1. adjust(a,1,7) changes a[0..6] from
40 35 80 60 90 70 45 to 40 90 80 60 35 70 45.
Heap Trees – build()


 Iteration 3: i = 0. adjust(a,0,7) changes a[0..6] from
40 90 80 60 35 70 45 to 90 60 80 40 35 70 45, which is a max heap.
Heap Trees – Applications
 There are two main applications of the heap tree: (i) sorting and
(ii) the priority queue.
 To sort a list of integers, first build a max heap tree, then
repeatedly exchange the first and last elements of the (shrinking)
heap and call adjust(a, 0, i), for i = n-1 down to 1.
1. Algorithm heapSort(a[0..n-1], n){
2. heapify(a, n); // build the max heap tree
3. For i = n-1 to 1, step -1 do{
4. x = a[i]; a[i] = a[0]; a[0] = x;
5. adjust(a, 0, i);
6. }
7. }

 This is an in-place sorting algorithm and its time complexity is O(n lg n).
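The heapify + heapSort pair above can be sketched in C++; the bottom-up build starts at the last internal node, floor((n-2)/2):

```cpp
#include <cassert>
#include <vector>

// Sift-down within a[0..n-1], as in the slides' adjust.
void adjust(std::vector<int> &a, int i, int n) {
    int x = a[i], j = 2 * i + 1;
    while (j <= n - 1) {
        if (j < n - 1 && a[j] < a[j + 1]) j = j + 1;
        if (x >= a[j]) break;
        a[(j - 1) / 2] = a[j];
        j = 2 * j + 1;
    }
    a[(j - 1) / 2] = x;
}

void heapSort(std::vector<int> &a) {
    int n = (int)a.size();
    for (int i = (n - 2) / 2; i >= 0; --i) adjust(a, i, n);  // heapify
    for (int i = n - 1; i >= 1; --i) {
        int x = a[i]; a[i] = a[0]; a[0] = x;  // move max to the end
        adjust(a, 0, i);                      // re-heapify a[0..i-1]
    }
}
```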

Heap Trees Applications – Priority Queue
 A priority queue may be implemented using a max heap tree.

1. Class PriorityQueue{
2. MaxHeapTree t;
3. isEmpty(){ return t.isEmpty(); }
4. insert(x) { t.insert(x); }
5. del() { return t.delMax(); }
6. }
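The same wrapper can be written over the standard library's std::priority_queue, which is itself backed by a max heap:

```cpp
#include <cassert>
#include <queue>

// Mirrors the slides' PriorityQueue wrapper (isEmpty/insert/del).
class PriorityQueue {
    std::priority_queue<int> t;   // max heap by default
public:
    bool isEmpty() const { return t.empty(); }
    void insert(int x) { t.push(x); }
    int del() { int x = t.top(); t.pop(); return x; }  // delMax
};
```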

Binary Search Trees
 A binary search tree is a binary tree in which the key of each node
must be greater than the keys in its left subtree and less than the
keys in its right subtree.
 The major operations that can be performed on a binary search
tree are:
 Searching – search for a key in the BST.
 Insertion – insert an element x into the BST.
 Deletion – delete a node from the BST.
 Traversing – traverse the BST in-order, pre-order, or post-order.
 In-order traversal of a BST yields the keys in
sorted order.

Binary Search Trees – search(x)


 Searching a key in a BST is faster than sequential search in an array
or linked list. So, for applications where frequent search
operations are to be performed, the data are stored using the BST
data structure.
 Suppose in a BST t, key x is to be searched. We start from its root
node R. If x is less than the key of the root node,
we proceed to its left child; if x is greater than
the key of the root node, we proceed to its
right child.
 The process continues till x is
found or we reach a dead end.

Binary Search Trees – search(x)
1. Algorithm search(x) {
2. r = Root; flag = False;
3. While(r ≠ NULL AND flag == False) do{
4. If(x == r.data) then { flag = True; }
5. Else if(x < r.data) then { r = r.left; }
6. Else if(x > r.data) then { r = r.right; }
7. }
8. If(flag == True) then{
9. Print x + “ has found at the node ” + r;
10. return r;
11. }Else{ Print x + “ does not exist. Search failed.”;
12. return NULL;
13. }
14. }
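The iterative search above can be sketched in C++:

```cpp
#include <cassert>
#include <cstddef>

struct Node {
    int data;
    Node *left = nullptr, *right = nullptr;
    Node(int d) : data(d) {}
};

// Go left for smaller keys, right for larger, until found or dead end.
Node *bstSearch(Node *root, int x) {
    Node *r = root;
    while (r != nullptr) {
        if (x == r->data) return r;       // found
        r = (x < r->data) ? r->left : r->right;
    }
    return nullptr;                        // search failed
}
```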

Binary Search Trees – insert(x)


 To insert a node with key = x into a BST, the BST is searched
for x starting from the root node.
 If x is found, then throw Exception(“Key x already exists”).
 Otherwise x is inserted at the dead end where the search
halts.
 Exercise: Insert(10) into the example BST, and create a BST by
inserting the following keys one by one:
40, 10, 20, 70, 80, 30, 50, 60.

Binary Search Trees – insert(x)
1. Algorithm insert(x) {
2. r = Root; p = NULL;
3. While(r ≠ NULL) do{
4. If(x == r.data) then { throw Exception(“Key already exist”); }
5. Else if(x < r.data) then { p = r; r = r.left; }
6. Else if(x > r.data) then { p = r; r = r.right; }
7. }
8. newNode = createNode();
9. newNode.data = x; newNode.left = newNode.right = NULL;
10. If(p == NULL) then { Root = newNode; }
11. Else if(x < p.data) then { p.left = newNode; }
12. Else if(x > p.data) then { p.right = newNode; }
13. }
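A C++ sketch of the insert: walk down remembering the parent p, reject duplicates, then attach the new node at the dead end:

```cpp
#include <cassert>
#include <stdexcept>

struct Node {
    int data;
    Node *left = nullptr, *right = nullptr;
    Node(int d) : data(d) {}
};

Node *bstInsert(Node *root, int x) {   // returns the (possibly new) root
    Node *r = root, *p = nullptr;
    while (r != nullptr) {
        if (x == r->data) throw std::runtime_error("Key already exists");
        p = r;
        r = (x < r->data) ? r->left : r->right;
    }
    Node *n = new Node(x);
    if (p == nullptr) return n;        // tree was empty: n is the root
    if (x < p->data) p->left = n; else p->right = n;
    return root;
}
```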


Binary Search Trees – del(x)


 To delete a node with key = x from BST, the BST is to be
searched for x starting from root node.
 If x is not found then throw Exception(“Key x does not exist”)
 Otherwise if x exist in node N then there should be three cases.
 Case 1: N is a leaf node – In this case the corresponding
pointer of its parent node is set to NULL.
 Case 2: N has only one child – In this case the
corresponding pointer of its parent node is
set to the address of its child node.
 Case 3: N has two children – In this
case it is interchanged with its in-order
successor (or predecessor) and then it is
deleted using case 1 / case 2.

Binary Search Trees – del(x)
 (Figure omitted: the example BST after Del(5) and then Del(4).)

1. Algorithm del(x) {
2. r = Root; p = NULL; flag = False;
3. While(r ≠ NULL AND flag == False) do{
4. If(x == r.data) then { flag = True; }
5. Else if(x < r.data) then { p = r; r = r.left; }
6. Else if(x > r.data) then { p = r; r = r.right }
7. }
8. If(flag == False) then { throw(“key ” + x +“ does not exist in BST”); }
9. If(r.left == NULL AND r.right == NULL) then{ //Case1
10. If(r == p.left) then { p.left = NULL; }
11. Else{ p.right = NULL; }
12. }Else if(r.left == NULL AND r.right ≠ NULL) then{ //Case2
13. If(r == p.left) then { p.left = r.right; }
14. Else{ p.right = r.right; }
15. }Else if(r.left ≠ NULL AND r.right == NULL) then{ //Case2
16. If(r == p.left) then { p.left = r.left; }
17. Else{ p.right = r.left; }
18. }Else if(r.left ≠ NULL AND r.right ≠ NULL) then{ //Case3
19. p=r; suc = r.right; While(suc.left ≠ NULL) { p=suc; suc = suc.left; }
20. t = r.data; r.data = suc.data; suc.data = t;
21. If(suc == p.right) then { p.right = suc.right; }
22. Else{ p.left = suc.right; }
23. }
24. }
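The three deletion cases can be sketched in C++; this version copies the in-order successor's key up for the two-child case, and cases 1 and 2 collapse into one re-link step. Sketch only, with no memory reclamation:

```cpp
#include <cassert>
#include <string>

struct Node {
    int data;
    Node *left = nullptr, *right = nullptr;
    Node(int d) : data(d) {}
};

Node *bstDelete(Node *root, int x) {   // returns the (possibly new) root
    Node *r = root, *p = nullptr;
    while (r && r->data != x) {        // locate x and its parent
        p = r;
        r = (x < r->data) ? r->left : r->right;
    }
    if (!r) return root;               // key not present
    if (r->left && r->right) {         // Case 3: two children
        Node *sp = r, *suc = r->right;
        while (suc->left) { sp = suc; suc = suc->left; }  // in-order successor
        r->data = suc->data;           // copy the successor's key up
        p = sp; r = suc;               // now delete suc (no left child)
    }
    Node *child = r->left ? r->left : r->right;  // Cases 1 and 2
    if (!p) return child;              // deleted the root itself
    if (p->left == r) p->left = child; else p->right = child;
    return root;
}

std::string inorder(Node *r) {         // helper to check results
    if (!r) return "";
    return inorder(r->left) + std::to_string(r->data) + " " + inorder(r->right);
}
```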

Height Balanced Binary Search Tree – AVL Tree
 Balance Factor BF(n): The balance factor of a node n, denoted
BF(n), is defined as BF(n) = height of the left subtree of n – height
of the right subtree of n.
 A binary search tree t is called a height-balanced binary search
tree, or AVL tree, if the balance factors of its nodes are all 0 or ±1.
 The AVL tree is named after G.M. Adelson-Velsky
and E.M. Landis, the Russian mathematicians who invented it
in 1962.
 Every complete BST is an AVL tree, but the
reverse is not true.
 (Figure omitted: an example AVL tree with the balance factor
annotated at each node.)

Height Balanced Binary Search Tree – AVL Tree


 Conversion of an unbalanced BST to a balanced BST in the
insertion operation:
 Suppose initially there is a balanced BST. After inserting a new
node it may become unbalanced.
 The unbalanced BST may be converted to a balanced one
using the following steps:
1) Pivot node selection: The BFs of the
nodes on the path from the root to the newly inserted node
may change. The node whose BF changed
from ±1 to ±2 is marked as the pivot node.
If there is more than one such node, then the
node nearest the newly inserted node
is the pivot node.
2) AVL rotation: Next, convert the unbalanced BST to a
balanced one using AVL rotations.
 Example - Insert (8): BF(4)=-2, BF(15)=2, BF(6)=-1, BF(7)=-1; Pivot = 15.

AVL Tree – AVL Rotations (Case 1)
 There 4 cases of the AVL rotations:
Case 1 (Left to Left): Unbalance due to insertion in the left sub-tree of
the left child of the pivot node. Rotate clockwise at P.
 (Figure omitted: pivot P with left child A is rotated clockwise,
making A the new subtree root with children AL and P, where P
keeps AR and PR.)
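The Case 1 fix is a single clockwise (right) rotation at the pivot; a minimal C++ sketch, with node names following the diagram's P, A, AL, AR, PR:

```cpp
#include <cassert>

struct Node {
    int data;
    Node *left = nullptr, *right = nullptr;
    Node(int d) : data(d) {}
};

Node *rotateRight(Node *p) {
    Node *a = p->left;      // A, the left child of the pivot
    p->left = a->right;     // AR is re-parented under P
    a->right = p;           // P becomes A's right child
    return a;               // A is the new subtree root
}
```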

AVL Tree – AVL Rotations (Case 2)


Case 2 (Right to Right): Unbalance due to insertion in the right sub-
tree of the right child of the pivot node. Rotate anti-clockwise at P.
 (Figure omitted: pivot P with right child B is rotated anti-clockwise,
making B the new subtree root with children P and BR, where P
keeps PL and BL.)

AVL Tree – AVL Rotations (Case 3)
Case 3 (Left to Right): Unbalance due to insertion in the right sub-tree
of the left child of the pivot node. First rotate anti-clockwise at the
left child of P, then rotate clockwise at P.
 (Figure omitted: the double rotation lifts B, the right child of A, to
the subtree root, with children A and P.)

AVL Tree – AVL Rotations (Case 3)


 (Figure omitted: a worked Case 3 example on a tree rooted at 15.)

AVL Tree – AVL Rotations (Case 4)
Case 4 (Right to Left): Unbalance due to insertion in the left sub-tree
of the right child of the pivot node. First rotate clockwise at the right
child of P, then rotate anti-clockwise at P.
 (Figure omitted: the double rotation lifts B, the left child of A, to
the subtree root, with children P and A.)

AVL Tree – AVL Rotations (Case 4)


10 -2 10 -2
2 2
0 3 50 +1 0 3 30 -2
1
0 2 0 7 30 55 -1 2 7 20 50 0
-1 0
0 0
40 +1 60 0 30 40 +1 55 +1
0 20
0 0 60
+1 10 0 50 35 0
0 35
0 3 40 +1 55 -1
0 20
0 2 0 7 35 0 0 60 74

AVL Tree – Example
 Create the AVL tree by inserting 1, 2, 3, 4, 5, 6, 7 one by one.
 (Figure omitted: the tree after each insertion; each unbalance is a
Case 2 and is fixed by an anti-clockwise rotation.)

AVL Tree – Example


 (Figure omitted: continuation for Insert(6) and Insert(7); the final
AVL tree is rooted at 4 with children 2 and 6.)

AVL Tree – Assignments
 Create the AVL tree by inserting the following elements one by one.
Show each step of the insertion operations.
(i) 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
(ii) 1 – 15
(iii) 20, 10, 5, 30, 40, 57, 3, 2, 4, 35, 25, 18, 22, 21
(iv) 40, 50, 70, 30, 42, 15, 20, 25, 27, 26, 60, 53


Red-Black Trees

 “Balanced” binary search trees guarantee an
O(lg n) running time.
 Red-black tree:
 A binary search tree with an additional attribute
for its nodes: a color, which can be red or black.
 Constrains the way nodes can be colored on
any path from the root to a leaf:
ensures that no path is more than twice as long as
any other path, so the tree is balanced.

Red-Black-Trees Properties
A red-black tree is a binary search tree where each node
has a color attribute and satisfies the following properties:
1. Every node is either red or black
2. The root is black
3. Every leaf (NIL) is black
4. If a node is red, then both its children are black
• No two consecutive red nodes on a simple path
from the root to a leaf
5. For each node, all paths from that node to descendant
leaves contain the same number of black nodes


Red-Black Trees - Example

 (Figure omitted: an example red-black tree rooted at 26, with
internal nodes 17, 41, 30, 47, 38, 50 and NIL leaves.)
 For convenience we use a sentinel NIL[T] to represent all
the NIL nodes at the leaves.
 NIL[T] has the same fields as an ordinary node
 Color[NIL[T]] = BLACK
 The other fields may be set to arbitrary values

Red-Black Trees: Black-Height of a Node
 (Figure omitted: the example tree annotated with h and bh at each
node; e.g. the root 26 has h = 4, bh = 2.)
 Height of a node h(x): the number of edges on the
longest path from x to a leaf; equivalently, the number of internal
nodes on that path.
 Black-height of a node x: bh(x) is the number of
black nodes (including NIL) on the path from x to a leaf,
not counting x itself.

Red-Black-Tree: Most important property

A red-black tree with n internal nodes
has height at most 2 lg(n + 1).
 We need to prove two claims first.

Red-Black-Tree: Property 1
 Any node x with height h(x) has bh(x) ≥ h(x)/2.
 Proof
 Property 4 of a red-black tree says: if a node is red, then both
its children are black.
 By property 4, there are at most h/2 red nodes on the path from
the node to a leaf.
 Hence at least h/2 of the nodes on the path are black, so
bh(x) ≥ h(x)/2.

Red-Black-Tree: Property 2
 The subtree rooted at any node x contains
at least 2^bh(x) - 1 internal nodes.

Red-Black-Tree: Property 2 (cont’d)
Proof: By induction on h[x].
Basis: h[x] = 0 ⇒
x is a leaf (NIL[T]) ⇒
bh(x) = 0 ⇒
number of internal nodes: 2^0 - 1 = 0. ✓
Inductive hypothesis: assume the claim is true for
h[x] = h - 1.

Red-Black-Tree: Property 2 (cont’d)


Inductive step:
 Prove it for h[x] = h.
 Let bh(x) = b; then any child y of x has:
 bh(y) = b (if the child is red), or
 bh(y) = b - 1 (if the child is black).
 In either case bh(y) ≥ bh(x) - 1.

Red-Black-Tree: Property 2 (cont’d)
 Using the inductive hypothesis, the
number of internal nodes in the subtree of each
child of x is at least:
2^(bh(x) - 1) - 1
 The subtree rooted at x therefore contains at
least:
(2^(bh(x)-1) - 1) + (2^(bh(x)-1) - 1) + 1 =
2 · (2^(bh(x)-1) - 1) + 1 =
2^bh(x) - 1 internal nodes.

Height of Red-Black-Trees (cont’d)


Lemma: A red-black tree with n internal nodes has
height at most 2 lg(n + 1).
Proof: Let h = height(root) and b = bh(root). The number n of
internal nodes satisfies
n ≥ 2^b - 1 ≥ 2^(h/2) - 1,
since b ≥ h/2.
 Add 1 to both sides and then take logs:
n + 1 ≥ 2^b ≥ 2^(h/2)
lg(n + 1) ≥ h/2 ⇒
h ≤ 2 lg(n + 1).

Operations on Red-Black-Trees
 The non-modifying binary-search-tree operations
MINIMUM, MAXIMUM, SUCCESSOR,
PREDECESSOR, and SEARCH run in O(h) time.
 They take O(lg n) time on red-black trees.
 What about TREE-INSERT and TREE-DELETE?
 They will still run in O(lg n).
 We have to guarantee that the modified tree
will still be a red-black tree.

Red-Black-Tree: INSERT
 Insert the node N into the red-black tree as in a BST and set its color
to red. Let the parent, grandparent, uncle, and sibling of N be denoted
by P, G, U, and S respectively. There are five cases:
 Case 1: N is the root node - change color of N to black.
i.e. color(N) = black.
 Case 2: P is black - It satisfied all properties of red-black tree.
 Case 3: Both P and U are red – change the color of P and U to
black and that of G to red then repair at G by insert(G).
 Case 4: P is red but U is black, P is left child and N is right child
(or P is right child and N is left child) – Rotate Left (Right) at P
and repair at P by insert(P).
 Case 5: P is red but U is black, and both P and N are left children (or
P and N are right children) – Change the color of P to black and that
of G to red, then Rotate Right (Left) at G.

Red-Black-Tree: INSERT Example
 (Figure omitted: inserting 4 into the example tree triggers Case 3,
then Case 4, then Case 5; the rebalanced tree has root 7.)

Red-Black-Tree: INSERT Example


Create a red-black tree by inserting the following numbers one by one: 40, 50, 70, 30, 42, 15, 20, 25.
 (Figure omitted: Insert 40 is Case 1 – recolor the root black;
Insert 50 is Case 2 – no change; Insert 70 is Case 5 – recolor P=50
black and G=40 red, then rotate left at G, making 50 the root.)

Red-Black-Tree: INSERT Example
 (Figure omitted: Insert 30 is Case 3 – recolor P=40 and U=70 black
and G=50 red; then insert(G) is Case 1, recoloring the root 50 black.)

Red-Black-Tree: INSERT Example


 (Figure omitted: Insert 42 is Case 2 – no change.)

Red-Black-Tree: INSERT Example
 (Figure omitted: Insert 15 is Case 3 – recolor P=30 and U=42 black
and G=40 red; then insert(G) is Case 2 – no further change.)

Red-Black-Tree: INSERT Example


 (Figure omitted: Insert 20 under 15 is Case 4 – rotate left at P=15
and continue at P; then Case 5 – recolor 20 black and G=30 red and
rotate right at G, leaving 20 with children 15 and 30.)

Red-Black-Tree: INSERT Example
 (Figure omitted: Insert 25 under 30 is Case 3 – recolor P=30 and
U=15 black and G=20 red; then insert(20) is Case 5 – recolor 20
black and G=40 red and rotate right at G; the final tree has root 40
with children 20 and 50.)

Red-Black-Tree: DELETE
 Delete the node M as in a BST. Let C be its non-leaf child. Then
(i) if M is red, there is no violation; (ii) if M is black and C is red, then properties 4 & 5
are violated, which is repaired by changing the color of C to black; (iii) if both M and C are
black or C is NIL, then relabel C as N and apply the following six cases, where P, S, SL, and
SR are the parent, sibling, left child of the sibling, and right child of the sibling respectively:
 Case 1: N is the root ⇒ no action.
 Case 2: S is red ⇒ (i) change color(P)=red & color(S)=black, (ii) if (N is left child) then
{rotate_left(P);} else {rotate_Right(P);} (iii) delete_case4(N).
 Case 3: P, S, SL, & SR are black ⇒ (i) set color(S) = red, (ii) delete_case1(P).
 Case 4: P is red but S, SL, & SR are black ⇒ change color(P)=black, and color(S)=red.
 Case 5(i): S and SR are black and SL is red, and N is left child ⇒ (i) change color(S) =
red, color(SL)=black, and (ii) rotate_Right(S).
 Case 5(ii): S and SL are black and SR is red, and N is right child ⇒ (i) change color(S) =
red, color(SR)=black, and (ii) rotate_Left(S).
 Case 6(i): S is black and SR is red, and N is left child ⇒ (i) change color(S)=color(P),
color(P)=color(SR)=black, and (ii) rotate_Left(P).
 Case 6(ii): S is black and SL is red, and N is right child ⇒ (i) change color(S)=color(P),
color(P)=color(SL)=black, and (ii) rotate_Right(P).

Red-Black-Tree: Delete Example – del(30)
 (Figure omitted: the node to be deleted, M=30, is red, so removing
it causes no violation; its child 25 takes its place.)

Red-Black-Tree: Delete Example – del(70)


 (Figure omitted: deleting the black node 70 leaves a double-black
NIL child N; Case 2 recolors P=50 red and S=40 black and rotates
right at P, then Case 4 recolors P=50 black and S=42 red.)
Red-Black-Tree: Delete – Assignment
 (Figure omitted: the red-black tree from the earlier example,
rooted at 50.)
Delete the nodes of above red-black tree in following order and show the steps:
50, 20, 30, 40, 42, 15.


Splay Tree
 A splay tree is a binary search tree.
 It facilitates quick access to elements that were recently searched,
inserted, or deleted.
 It performs these basic operations in O(lg n) amortized time.
 For many sequences of non-random operations, a splay tree
performs better than other search trees.
 The splay tree was invented by Daniel Sleator and Robert Tarjan in 1985.
 In a splay tree, all normal operations on a binary search tree are
combined with the splay operation, which brings the
splayed node to the root of the binary search tree.
 Advantage:
 Since frequently accessed nodes move to the root or near the root, such
sequences of operations are performed quickly.

Splay Tree
 Disadvantages:
 The most significant disadvantage of the splay tree is that its height may become linear.
 Example: let the binary search tree be as in figure (a), which is balanced. After searching its
elements in the sequence 10, 20, 40, 50, 60, the BST becomes linear and its height becomes
5 (= n), so the actual cost of a single operation can be as high as n, although the amortized
cost is still O(lg n).
[Figure: starting from the balanced BST in (a) (root 40 with children 20 and 60, and leaves 10 and 50), the splaying searches for 10, 20, 40, 50, 60 successively reshape the tree until it degenerates into the linear chain 60, 50, 40, 20, 10 (top to bottom).]

Splay Tree – Splay Operation


 First we have to find the splay node x: in an insertion it is the
newly inserted node; in a deletion it is the parent of the node
actually deleted; and in a search it is the node that matches the
search key.
 Then, using the splay operation, x is moved to the root of the splay tree.

 The splay operation may consist of a sequence of splay steps, each of
which moves x closer to the root.
 There are 3 types of splay steps, each of which has two symmetric
variants: left- and right-handed.
1. Zig Step: when p is the root and x is the left child of p, rotate_Right(p).
[Figure: zig step: rotate_Right(p) makes x the root, with p as its right child and Rx as p's left subtree.]
Splay Tree – Splay Operation
2. Zig-Zig Step: when p is not the root, p is the left child of g, and x is the
left child of p: first rotate_Right(g), then rotate_Right(p).
[Figure: zig-zig step: after rotate_Right(g) and rotate_Right(p), x becomes the root, with p as its right child and g as the right child of p.]

Splay Tree – Splay Operation


3. Zig-Zag Step: when p is not the root, p is the left child of g, and x is the
right child of p: first rotate_Left(p), then rotate_Right(g).
[Figure: zig-zag step: after rotate_Left(p) and rotate_Right(g), x becomes the root, with p as its left child and g as its right child.]
Splay Tree – Operations
 The following operations may be performed on a splay tree: join,
split, insertion, deletion, and search.
 Join Operation:
 It may be used to join two BSTs S and T, where all elements of S are
smaller than all elements of T.
 The following steps join them into a single BST:

1. Splay the largest item in S. This element is now the root of S and has
a NULL right field.
2. Set the right field of the root of S to the root of T.
[Figure: join: S = {1, 2, 3, 4, 5} is splayed at its largest key 5, and then S.Root.Right is set to the root of T = {6, ..., 12}.]

Splay Tree – Operations


 Split Operation: given a BST and an element x in it, the split operation
splits it into two BSTs: one containing all elements less than or equal to x,
the other containing all elements greater than x.
 This can be done in the following steps:

1. Splay x, which brings x to the root, so that its left sub-tree contains
elements smaller than x and its right sub-tree contains all elements
larger than x.
2. Split the right sub-tree off from the rest of the tree.
[Figure: split at x = 5: after Splay(5) the root is 5; setting T = S.Root.Right and then S.Root.Right = NULL detaches the subtree of keys greater than 5.]
Splay Tree – Operations
 Insertion Operation: to insert x into a splay tree, follow these steps:
1. Insert x as in a normal BST.
2. Perform splay steps to move x to the root.
[Figure: Insert(6): 6 is inserted as a leaf as in a normal BST, then moved to the root by a zig-zig step followed by a zig step.]

Splay Tree – Operations


 Deletion Operation: to delete x from a splay tree, follow these steps:
1. Delete x as in a normal BST. Let p be the parent of the node that is
actually removed.
2. Perform splay steps to move x = p to the root.
[Figure: Delete(4): after the normal BST deletion, p = 5; a zig step brings 5 to the root.]
Splay Tree – Operations
 Search Operation:
1. Search for x; if it exists in the BST then the splay node is x, otherwise
the splay node is the node last compared with x (say p).
2. Perform splay steps to move x (or p) to the root.

[Figure: Search(11): x = 11 is found; a zig step brings 11 to the root.]

B-Tree: Motivation

 Large differences between the access times of disk,
cache memory, and main memory.
 Minimize expensive accesses
(e.g., disk accesses).
 B-tree: a dynamic-set structure that is optimized for disks.

m-way Search Tree
An m-way search tree of order m is a search tree in which:
1. Each node has at most m children and m-1 keys (data).
2. The keys in each node are in ascending order.
3. The keys in the first i children are smaller than the ith key.
4. The keys in the last m-i children are larger than the ith key.

[Figure: an example 4-way search tree with root keys 50 60 80.]

B-Tree
The B-tree was invented by Bayer and McCreight in 1972. A B-tree is an
m-way search tree with the following properties:
1. The root has at least two children unless it is a leaf.
2. Each non-leaf, non-root node has k-1 keys and k children,
where ceil(m/2) ≤ k ≤ m.
3. Each leaf node has k-1 keys, where ceil(m/2) ≤ k ≤ m.
4. All leaves are on the same level.
These properties ensure that every node is at least half occupied.

Example: a 4-way B-tree

[Figure: the keys 0 5 10 25 35 45 55 with separators 20 and 40, shown both as a plain 4-way tree and as a B-tree of order 4.]

1. It is perfectly balanced: every leaf node is at the same level.
2. Every node, except maybe the root, is at least half full.
3. Every node has at least ceil(4/2) - 1 = 1 key.

B-Tree: Insert X

1. As in an m-way tree, find the leaf node to which X
should be added.
2. Add X to this node in the appropriate place among
the values already there
(there are no sub-trees to worry about).
3. Number of values in the node after adding the key:
 At most m-1: done.
 Equal to m: overflow.
4. Fix the overflowed node.

Fix an Overflowed Node

1. Split the node into three parts at its median:
 Left: the left values become a left child node.
 Middle: the median value goes up into the parent.
 Right: the right values become a right child node.
2. Continue with the parent:
1. Until no overflow occurs in the parent.
2. If the root overflows, split it too, and create a new root node.

[Figure: the overflowed node 60 65 68 83 86 90 is split: the median 68 moves up into the parent (... 56 68 98 ...), leaving children y = 60 65 and z = 83 86 90.]

Insert example (M = 6; t = 3)

Start: root [20 40 60 80], leaves [0 5 10 15] [25 35] [45 55] [62 66 70 74 78] [87 98].

Insert 3: it belongs in the first leaf  [0 3 5 10 15]. No overflow.

Insert 61: the fourth leaf becomes [61 62 66 70 74 78]: OVERFLOW. SPLIT IT at the median 70: leaves [61 62 66] and [74 78], and 70 moves up  root [20 40 60 70 80].

Insert 38: it belongs in the second leaf  [25 35 38]. No overflow.

Insert 4: the first leaf becomes [0 3 4 5 10 15]: OVERFLOW. SPLIT IT at the median 5: leaves [0 3 4] and [10 15], and 5 moves up  root [5 20 40 60 70 80]: OVERFLOW. SPLIT IT at the median 60: internal nodes [5 20 40] and [70 80], and 60 moves up to become the new root [60].
Complexity Insert

 Inserting a key into a B-tree of height h is done
in a single pass down the tree and a single pass
up the tree.

Complexity: O(h) = O(log_t n)

B-Tree: Delete X

 Delete as in an m-way tree.
 A problem:
 it might cause underflow: the number of
keys remaining in a node drops below t-1.

Recall: the root should have at least 1 key in it, and all
other nodes should have at least t-1 keys in them.

M  6; t  3

Underflow Example
Delete 87: 60

5 20 40 70 80

0 3 4 10 15 25 35 38 45 55 61 62 66 74 78 87 98

60 B-tree
UNDERFLOW
5 20 40 70 80

0 3 4 10 15 25 35 38 45 55 6162 66 74 78 98

B-Tree: Delete X

 Delete as in an m-way tree.
 A problem:
 it might cause underflow: fewer than t-1
keys remain in a node.
 Solution:
 make sure every node that is visited has at
least t keys instead of t-1.

Recall: the root should have at least 1 key in it, and all other nodes
should have at least t-1 (at most 2t-1) keys in them.

B-Tree-Delete(x,k)
 For deletion in a B-tree we wish to remove a key from a leaf. There
are three possible cases for deletion in a B-tree.
 Let k be the key to be deleted and x the node containing the key.
Then the cases are:
 Case 1: If the key k is in node x and x is a leaf, and removing it
does not cause that leaf node to have too few keys, then simply
delete k from x.

B-Tree Delete - Case 1

[Figure: Case 1: Del 6, deleting key 6 from a leaf that keeps enough keys.]

B-Tree-Delete(x,k)
 Case 2: If key k is in node x and x is an internal node, there are
three cases to consider:
 Case 2(a): If the child y that precedes k in node x has at least t
keys (more than the minimum), then find the predecessor key k'
in the sub-tree rooted at y. Recursively delete k' and replace k
with k' in x.
 Case 2(b): Symmetrically, if the child z that follows k in node x
has at least t keys, find the successor k' and delete and replace as
before. Note that finding k' and deleting it can be performed in a
single downward pass.


B-Tree Delete - Case 2(a)


[Figure: Case 2(a): Del 13; the predecessor key k' replaces 13.]

B-Tree-Delete(x,k)
 Case 2(c): Otherwise, if both y and z have only t−1 (minimum
number) keys, merge k and all of z into y, so that both k and the
pointer to z are removed from x. y now contains 2t − 1 keys, and
subsequently k is deleted.


B-Tree Delete - Case 2(c)


[Figure: Case 2(c): Del 7; y, k = 7, and z are merged, then 7 is deleted from the merged node.]

B-Tree-Delete(x,k)
 Case 3: If key k is not present in an internal node x, determine
the root of the appropriate sub-tree that must contain k. If the
root has only t − 1 keys, execute either of the following two
cases to ensure that we descend to a node containing at least t
keys. Finally, recurse to the appropriate child of x.
 Case 3(a): If the root has only t-1 keys but has a sibling with t
keys, give the root an extra key by moving a key from x down to the
root, moving a key from the root's immediate left or right sibling
up into x, and moving the appropriate child from the sibling to x.

B-Tree Delete - Case 3(a)


[Figure: Case 3(a): Del 2; a key is borrowed from a sibling through the parent before descending.]

B-Tree-Delete(x,k)
 Case 3(b): If the root and all of its siblings have t−1 keys, merge
the root with one sibling. This involves moving a key down
from x into the new merged node to become the median key for
that node.


B-Tree Delete - Case 3(b)


[Figure: Case 3(b): Del 4; the child and its sibling are merged around the key moved down from x before descending.]

B+ Tree
It is a modified form of the B-tree. In a B+ tree, the keys of the internal
(non-leaf) nodes are also stored in the leaves, and links are
maintained among all the leaf nodes so that one can move from the
left-most leaf node to the right-most leaf node and a sequential
search may be done.

[Figure: a B+ tree of order 5 with root key 50, internal nodes 10 15 20 29 and 70 80, and linked leaves 6 8 | 10 11 12 | 15 16 18 | 20 21 25 27 | 29 30 | 50 54 56 | 70 71 76 | 80 81 89.]

6/16/2022

Lecture Slide - 6
CSCC333: Data Structures and
Program Design
Sartaj Sahni: Data Structures, Algorithms and Applications
in C++, 2nd Edition, Silicon Press
D. Samanta: Classic Data Structures, 2nd Edition, PHI

Sorting
 Sorting is a process of arranging the elements of a given list of items
in a prescribed order, based on some criterion.
 Based on how the list is sorted, there are two types: (i) sorting based
on key comparisons, and (ii) sorting that uses the keys as indices
into an array.
 Based on the memory used, there are two types: (i) in-place
sorting, and (ii) out-of-place sorting.
 Stable sorting: a sorting algorithm is stable if it maintains the relative
order of equal key values after sorting.

Sorting – Selection sort


 Suppose that we want to sort in ascending order.
 In this algorithm, we pick the smallest element from the given list and
place it at the first location, then the 2nd smallest from the remaining
list and place it at the 2nd location, and so on.

1. Algorithm SelectionSort(a[], n){
2.   For i = 1 to n-1 do{
3.     minIndx = i;
4.     For j = i + 1 to n do {
5.       If(a[j] < a[minIndx]) then { minIndx = j; }
6.     }
7.     If(i ≠ minIndx) then { a[i] ↔ a[minIndx] }
8.   }
9. }

Sorting – Bubble sort


 Suppose that we want to sort in ascending order.
 In this algorithm, we compare consecutive pairs of elements and
interchange them if they are out of order. In each pass the largest
remaining value moves to the last location.
1. Algorithm bubbleSort(a[], n){
2.   For i = 1 to n-1 do{
3.     isSorted = True;
4.     For j = 1 to n-i do {
5.       If(a[j] > a[j+1]) then { a[j] ↔ a[j+1]; isSorted = False; }
6.     }
7.     If(isSorted == True) then { break; }
8.   }
9. }


Sorting – Insertion sort


 Suppose that we want to sort in ascending order.
 Initially there is a sorted list of one element and an unsorted list of n-1
elements. In each step we insert the first element of the unsorted list
into the sorted list, so the sorted list grows in every step.

1. Algorithm insertionSort(a[], n){
2.   For i = 2 to n do{
3.     j = i; x = a[i];
4.     While(j >= 2 AND a[j-1] > x) do {
5.       a[j] = a[j-1]; j = j - 1;
6.     }
7.     a[j] = x;
8.   }
9. }

Sorting – Merge sort


 It uses the divide-and-conquer approach and its time complexity is O(n lg n).
 Merge sort works as follows:
Divide: divide the list of n elements into two sub-lists of size n/2 each.
Conquer: sort each sub-list recursively using the merge sort algorithm.
Combine: merge the two sorted sub-lists to produce the sorted list.
 The main operation of merge sort is the merge() operation.
1. Algorithm merge(a[], low, mid, high){
2.   // merge the sorted sub-lists a[low:mid] and a[mid+1:high].
3.   b[high-low+1] // temporary array stores the intermediate merge result
4.   i = low, j = mid+1, k = 0;
5.   While(i ≤ mid AND j ≤ high) do{
6.     If(a[i] ≤ a[j]) then { b[k] = a[i]; i = i + 1; }
7.     Else{ b[k] = a[j]; j = j + 1; }
8.     k = k + 1;
9.   }
10.  While(i ≤ mid) do { b[k] = a[i]; i = i + 1; k = k + 1; }
11.  While(j ≤ high) do { b[k] = a[j]; j = j + 1; k = k + 1; }
12.  For i = 0 to (high-low) do { a[low+i] = b[i]; }
13. }


Sorting – Merge sort


 For dividing the list into two sub-lists and then merging them, we will use the
mergeSort(a[], low, high) algorithm that is called with low=0, high=n-1.

1. Algorithm mergeSort(a[], low, high){


2. If(low < high) then{ mid = floor((low+high)/2);
3. mergeSort(a, low, mid); mergeSort(a, mid+1, high);
4. merge(a, low, mid, high);
5. }
6. }
 Analysis: The divide operation divides the list of size n into two sub-lists of size
n/2 each.
 The divide operation requires constant time, i.e. D(n) = Θ(1); the combine (merge)
operation takes linear time, so C(n) = Θ(n). Hence

T(n) = Θ(1)             if n ≤ 1
T(n) = 2T(n/2) + Θ(n)   if n > 1

 Solving this with the master theorem gives T(n) = Θ(n lg n).

Sorting – Quick sort
It also uses the divide-and-conquer approach, and its time complexity in the best and
average cases is O(n lg n). Quick sort works as follows:
Divide: the partition operation splits the list of n elements into two sub-lists by placing all
elements smaller than a pivot element before it and all elements greater than or equal to it
after it. The choice of the pivot element plays an important role in sorting; for simplicity,
the first element of the unsorted list may be taken as the pivot.
Conquer: sort each sub-list recursively using the quick sort algorithm.
Combine: since partitioning is performed in the same array, no combine operation is needed.
 The main operation of quick sort is the partition() operation.
1. Algorithm partition(a[], low, high){
2.   // array a[low:high] is to be partitioned.
3.   pivot_element = a[low]; j = low;
4.   For i = low+1 to high do {
5.     if(a[i] < pivot_element) then { j = j + 1;
6.       if(i ≠ j) then { Exchange a[i] ↔ a[j]; }
7.     }
8.   }
9.   pivot_point = j; If(low ≠ pivot_point) then { Exchange a[low] ↔ a[pivot_point]; }
10.  return pivot_point;
11. }



Sorting – Quick sort
For partitioning the list into two sub-lists and then sorting them recursively using quick sort,
we will use the quickSort(a[], low, high) algorithm, which is called with low=0, high=n-1.
1. Algorithm quickSort(a[], low, high){
2. If(low < high) then{ pivot_point = partition(a, low, high);
3. quickSort(a, low, pivot_point-1); quickSort(a, pivot_point+1, high);
4. }
5. }

 Analysis: In the best case, the partition function splits the list of size n into two sub-lists of
size n/2 each.
 The divide (partition) operation requires linear time, i.e. D(n) = Θ(n), and the combine
operation takes no time. So T(n) is as below, and solving it with the master theorem gives
T(n) = Θ(n lg n).

T(n) = Θ(1)             if n ≤ 1
T(n) = 2T(n/2) + Θ(n)   if n > 1

 In the worst case (when the list is already sorted or in reverse order), partitioning yields one
empty sub-list and a second sub-list of size n-1, so T(n) is defined as below; solving it gives
T(n) = O(n²).

T(n) = Θ(1)             if n ≤ 1
T(n) = T(n-1) + Θ(n)    if n > 1

 The average-case time complexity of quick sort is T(n) = Θ(n lg n).

Sorting – Shell sort


 Shell sort is an in-place comparison sorting algorithm.
 It is a generalization of the bubble/insertion sort.
 The method starts by sorting pairs of elements far apart (with a large gap), then
progressively reduces the gap between the elements being compared.
 Starting with far-apart element pairs, it can move some out-of-place elements
into position faster than a simple nearest-neighbour exchange.
 Donald Shell published the first version of this sort in 1959.
 The execution time of this algorithm depends on the gap sequence.

Example: sort the list 3, 1, 7, 5, 4, 8, 6, 2 using insertion sort and Shell sort.

Insertion sort:
  Insert 1  1 3                 (1 shift)
  Insert 7  1 3 7               (0 shifts)
  Insert 5  1 3 5 7             (1 shift)
  Insert 4  1 3 4 5 7           (2 shifts)
  Insert 8  1 3 4 5 7 8         (0 shifts)
  Insert 6  1 3 4 5 6 7 8       (2 shifts)
  Insert 2  1 2 3 4 5 6 7 8     (6 shifts)
  Total shifts = 1+0+1+2+0+2+6 = 12


Sorting – Shell sort


Example: sort the list 3, 1, 7, 5, 4, 8, 6, 2 using Shell sort.

Shell sort, gap = n/2 = 8/2 = 4 (gap-4 insertion sort of 3 1 7 5 4 8 6 2):
  Insert 4  3 4     (0 shifts)
  Insert 8  1 8     (0 shifts)
  Insert 6  6 7     (1 shift)
  Insert 2  2 5     (1 shift)
  Total shifts = 0+0+1+1 = 2. List after this pass: 3 1 6 2 4 8 7 5

Shell sort, gap = n/4 = 8/4 = 2 (gap-2 insertion sort of 3 1 6 2 4 8 7 5):
  Insert 6  3 6       (0 shifts)
  Insert 4  3 4 6     (1 shift)
  Insert 7  3 4 6 7   (0 shifts)
  Insert 2  1 2       (0 shifts)
  Insert 8  1 2 8     (0 shifts)
  Insert 5  1 2 5 8   (1 shift)
  Total shifts = 0+1+0+0+0+1 = 2. List after this pass: 3 1 4 2 6 5 7 8

Sorting – Shell sort


Shell sort, gap = n/8 = 8/8 = 1 (an ordinary insertion sort of the list 3 1 4 2 6 5 7 8
produced by the previous pass):
  Insert 1  1 3                 (1 shift)
  Insert 4  1 3 4               (0 shifts)
  Insert 2  1 2 3 4             (2 shifts)
  Insert 6  1 2 3 4 6           (0 shifts)
  Insert 5  1 2 3 4 5 6         (1 shift)
  Insert 7  1 2 3 4 5 6 7       (0 shifts)
  Insert 8  1 2 3 4 5 6 7 8     (0 shifts)
  Total shifts = 1+0+2+0+1+0+0 = 4. Final list: 1 2 3 4 5 6 7 8
  Grand total shifts = 2 + 2 + 4 = 8, versus 12 for plain insertion sort.


Sorting – Shell sort


 Gap sequence: the choice of a gap sequence is a difficult task.
 Every gap sequence that ends with 1 will sort the data, but we apply the gaps from
larger to smaller.
 Shell (1959) – sequence floor(n/2), floor(n/4), floor(n/8), ..., 1, with W(n) = Θ(n²).
 Pratt (1971) – sequence generated by h = 3h+1, i.e. 1, 4, 13, 40, 121, ..., with
W(n) = Θ(n^(3/2)).

 Based on these two gap sequence, there are following two algorithms.

1. Algorithm ShellSortShell(a[0:n-1], n){


2. gap = floor(n/2);
3. While(gap ≥ 1) do{ // Do a gap insertion sort for this gap size
4. For i = gap to n do{ x = a[i]; j = i;
5. While(j ≥ gap AND a[j-gap] > x) do{
6. a[j] = a[j-gap];
7. j = j – gap;
8. }
9. a[j] = x;
10. }
11. gap = floor(gap/2);
12. }
13. }

Sorting – Shell sort


 Algorithm based on Pratt(1971) sequence.

1. Algorithm ShellSortPratt(a[0:n-1], n){


2. gap = 1;
3. While(gap < n/3) do{ gap = 3 * gap + 1; }
4. While(gap ≥ 1) do{ // Do a gap insertion sort for this gap size
5. For i = gap to n do{ x = a[i]; j = i;
6. While(j ≥ gap AND a[j-gap] > x) do{
7. a[j] = a[j-gap];
8. j = j – gap;
9. }
10. a[j] = x;
11. }
12. gap = (gap – 1)/3;
13. }
14. }



Searching – Sequential Search


 Searching is the process of finding the index (position) of a given
element x in a given list of items.
 In sequential search, the item to be searched for (x) is compared
with the items of the given list one by one.

1. Algorithm sequentialSearch(a[], n, x){
2.   index = -1;
3.   For i = 1 to n do{
4.     If(a[i] == x){
5.       index = i; break;
6.     }
7.   }
8.   return index;
9. }

Searching – Binary Search


 In binary search the list of items must be sorted.
 The list is divided at its mid position and x is compared with the middle element. If x is
less than the mid element it is searched for in the first half; if it is greater than the mid
element it is searched for in the second half; and if they are equal the search succeeds.

1. Algorithm binarySearch(a[], n, x){
2.   low = 1; high = n;
3.   While(low <= high) do{ mid = floor((low+high)/2);
4.     If(a[mid] == x){ return mid; }
5.     Else if(x < a[mid]) then { high = mid - 1; }
6.     Else { low = mid + 1; }
7.   }
8.   return -1;
9. }


Hashing
 Hash Table: a table that stores the key values. The location of a key value is
calculated from the key itself; there is a one-to-one correspondence between a key
value and an index in the hash table. The process of calculating the index in the
hash table for a key value is called hashing.
 Hashing Technique: a technique to find a one-to-one correspondence between
a key value and the index in the hash table where the key value is to be stored. The
function that provides this mapping is called the hash function. There are two principal
criteria for a hash function H: k → i:
(i) The hash function H should be simple and should take little execution time.
(ii) The function H should, as far as possible, give two different indices for two
different key values.
 Hash Function – division method: a fast and frequently used hash
function, for example H(k) = k mod h. Generally h is taken as a prime number
or a number without small divisors.
Example: Suppose that we want to store key values 10, 19, 35, 43, 62, 59, 31, 49, 77,
33 in a hash table of size 11. If h = 11, then the hash table should be:
Example: Suppose that we want to store key values 10, 19, 35, 43, 62, 59, 31, 49, 77,
33 in a hash table of size 11. If h=11, then hash table should be.
index:  0      1   2   3   4   5   6   7   8   9   10
key:    77,33      35      59  49      62  19  31  10,43

Hashing – Collision Resolution Techniques


 Collisions in hashing cannot be removed entirely, whatever the size of the hash table.
There are several techniques to resolve collisions. Two important techniques are:
(i) Closed hashing or linear probing.
(ii) Open hashing or chaining.
 Closed Hashing / Linear Probing: a simple method to resolve collisions. In
this method, we start with the hash address where the collision occurred, say i,
then follow the sequence of locations i, i+1, i+2, ..., h-1, 0, 1, 2, ..., i-1 in the
hash table, doing a sequential search.
 The search continues until one of the following conditions occurs:
(i) The key value is found.
(ii) An empty location is encountered.
(iii) The search reaches the location from where it started.
 Here the hash table is considered circular, so that when the last location is
reached, the search continues from the first location of the table. This is why it is
called closed hashing. Since it searches in a straight line, it is also called linear
probing, where a probe means a key comparison.


Hashing – Collision Resolution Techniques


 Closed Hashing / Linear Probing – Example: store the key values 15, 11, 25, 16,
9, 8, 12 in a hash table of size 7 with hash function H(k) = k%7, resolving
collisions by linear probing. The resulting hash table is:

index:  0   1   2   3   4   5   6
key:   12  15  16   9  11  25   8

 Insert 15  i = 15%7 = 1. Insert 11  i = 11%7 = 4. Insert 25  i = 25%7 = 4,
collision, i+1 = 5. Insert 16  i = 16%7 = 2. Insert 9  i = 9%7 = 2, collision,
i+1 = 3. Insert 8  i = 8%7 = 1, collision, then collisions at 2, 3, 4, and 5,
placed at 6. Insert 12  i = 12%7 = 5, collision, i+1 = 6 = h-1, collision, wrap
to i = 0. Total collisions = 9.
 Hash Table Class - Data members: 1D array a[] for hash table, h divisor in hash
function, n the size of hash table.
 Hash Table Class - Member functions: The main member functions of this class
are insert(k), search(k), and del(k).


Hashing – Collision Resolution Techniques


 Closed Hashing / Linear Probing – Class:

1. class HashTable{ int *a, h, n; // data members
2. public:
3.   HashTable(int n, int p);
4.   void insert(int k);
5.   int search(int k);
6.   void del(int k);
7. };

1. HashTable::HashTable(int n, int p){
2.   a = new int[n]; this->n = n; h = p;
3.   for(int i=0; i<n; i++)
4.     a[i] = -1; // -1 marks an empty slot
5. }

1. int HashTable::search(int k){
2.   int i = k%h;
3.   do{ if(a[i] == -1) return -1;
4.     else if(a[i] == k) return i;
5.     else i = (i+1) % h;
6.   }while(i != k%h);
7.   return -1;
8. }

1. void HashTable::insert(int k){
2.   int i = k % h;
3.   do{
4.     if(a[i] == -1){
5.       a[i] = k; return;
6.     } else if(a[i] == k) return;
7.     else i = (i+1) % h;
8.   }while(i != k%h);
9.   throw Exception("hash table full");
10. }

1. void HashTable::del(int k){ // return type fixed to match the declaration
2.   int i = search(k);
3.   if(i == -1)
4.     throw Exception("key does not exist");
5.   int j = (i+1) % h, start = i; // renamed so the local does not shadow k
6.   while(j != start && a[j] != -1){
7.     if(a[j]%h <= i){ a[i] = a[j]; i = j; } // move a displaced key back
8.     j = (j+1) % h;
9.   }
10.  a[i] = -1;
11. }


Hashing – Collision Resolution Techniques


 Drawback of Closed Hashing / Linear Probing: the major drawback of closed
hashing is that when half of the hash table is filled there is a tendency towards
clustering, and the search degenerates into a sequential search, which makes it
slower and slower. This kind of clustering is called primary clustering.
 Primary clustering may be avoided using (a) random probing, or (b)
double hashing.
 Random Probing: this method uses a pseudo-random number generator to
generate a random sequence of locations, rather than the linear sequence of
linear probing. The generator produces all the positions between 0 and h-1.
 For example, the pseudo-random number generator i = (i+m) mod h generates the
random sequence used when there is a collision at position i, where m and h are
relatively prime and m < h. For example, if m = 5, h = 7, and initially i = 4, the
generator produces the sequence 2, 0, 5, 3, 1, 6, 4.
 We can avoid primary clustering if the probes follow this sequence.


Hashing – Collision Resolution Techniques


 Random Probing – Example: store the key values 15, 11, 25, 16, 9, 8, 12 in a
hash table of size 7 with H(k) = k%7, resolving collisions by random probing
with the pseudo-random generator i = (i+5) % 7. The resulting hash table is:

index:  0   1   2   3   4   5   6
key:   16  15  25  12  11   9   8

 Insert 15  i = 15%7 = 1. Insert 11  i = 11%7 = 4. Insert 25  i = 25%7 = 4,
collision, i = (i+5)%7 = 2. Insert 16  i = 16%7 = 2, collision, i = (i+5)%7 = 0.
Insert 9  i = 9%7 = 2, collision, i = (i+5)%7 = 0, collision, i = (i+5)%7 = 5.
Insert 8  i = 8%7 = 1, collision, i = (i+5)%7 = 6. Insert 12  i = 12%7 = 5,
collision, i = (i+5)%7 = 3. Total collisions = 6.
 Problem with Random Probing: when two keys are hashed to the same location,
the generator produces the same sequence of pseudo-random numbers, so
clustering appears again. Such clustering is called secondary clustering, which
may be avoided using double hashing.



Hashing – Collision Resolution Techniques


 Random Probing – Example: store the key values 15, 8, 29, 22, 43, 36, 49 in a
hash table of size 7 with H(k) = k%7, resolving collisions by random probing
with the pseudo-random generator i = (i+5) % 7. The resulting hash table is:

index:  0   1   2   3   4   5   6
key:   43  15  22  49  29  36   8

 Insert 15  i = 15%7 = 1. Insert 8  i = 1, collision, i = (i+5)%7 = 6.
Insert 29  i = 1, collision, i = 6, collision, i = 4. Insert 22  i = 1,
collision, i = 6, collision, i = 4, collision, i = 2. Insert 43  i = 1,
collisions at 6, 4, and 2, placed at 0. Insert 36  i = 1, collisions at 6, 4,
2, and 0, placed at 5. Insert 49  i = 49%7 = 0, collision, i = 5, collision,
i = 3. Total collisions = 17.
 Due to secondary clustering, this run has a large number of collisions.

Hashing – Collision Resolution Techniques


 Double Hashing: used to avoid secondary clustering. It uses two hash
functions, where the second is used to generate the value m used in the
pseudo-random number generator. The second hash function is chosen so that the
hash addresses generated by the two functions are distinct and so that m and h
are relatively prime.
 Double Hashing – Example: store the key values 15, 8, 29, 22, 43, 36, 49 in a
hash table of size 7 with H1(k) = k%7 and H2(k) = k%5 + 1, resolving collisions
by double hashing with the pseudo-random generator i = (i + H2(k)) % 7. The
resulting hash table is:

index:  0   1   2   3   4   5   6
key:   49  15  43  36  22   8  29

 Insert 15  i = 15%7 = 1. Insert 8  i = 1, collision, m = 8%5+1 = 4,
i = (i+4)%7 = 5. Insert 29  i = 1, collision, m = 29%5+1 = 5, i = (i+5)%7 = 6.
Insert 22  i = 1, collision, m = 22%5+1 = 3, i = (i+3)%7 = 4. Insert 43 
i = 1, collision, m = 43%5+1 = 4, i = (i+4)%7 = 5, collision, i = (i+4)%7 = 2.
Insert 36  i = 1, collision, m = 36%5+1 = 2, i = (i+2)%7 = 3. Insert 49 
i = 49%7 = 0. Total collisions = 6.


Hashing – Open Hashing


 In this method collisions are resolved by storing the keys in an array of linked lists;
it is also called chaining. The index into the array of linked lists is computed using
the hash function, and the key is inserted into that linked list. The size of the
linked-list array is determined by the hash function. It is the more frequently used
hashing technique when the number of keys to be inserted into the hash table is
unknown.
 Open Hashing – Example: store the key values 15, 11, 25, 16, 9, 8, 12 in a hash
table of size 7. With hash function H(k) = k%7 and collisions resolved by open
hashing, the hash table is:

[Figure: an array of 7 list heads; list 1: 15 → 8, list 2: 16 → 9, list 4: 11 → 25, list 5: 12, and lists 0, 3, and 6 empty.]

Graph Definition
 A graph G consists of two sets
– a finite, nonempty set of vertices V(G)
– a finite, possible empty set of edges E(G)
– G(V, E) represents a graph
 An undirected graph is one in which all edges are undirected. Where
an edge represented by unordered vertex-pair, (v0, v1) = (v1,v0)
 A directed graph is one in which each edge has a direction and
represented by ordered vertex-pair, <v0, v1> != <v1,v0>


Examples for Graph


[Figure: G1, a complete graph on 4 vertices; G2, an incomplete graph (a tree) on 7 vertices; G3, a directed graph on 3 vertices.]

V(G1)={0,1,2,3} E(G1)={(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)}
V(G2)={0,1,2,3,4,5,6} E(G2)={(0,1),(0,2),(1,3),(1,4),(2,5),(2,6)}
V(G3)={0,1,2} E(G3)={<0,1>,<1,0>,<1,2>}

complete undirected graph: n(n-1)/2 edges
complete directed graph: n(n-1) edges

Complete Graph
 A complete graph is a graph that has the maximum number of
edges
– for undirected graph with n vertices, the maximum number
of edges is n(n-1)/2
– for directed graph with n vertices, the maximum
number of edges is n(n-1)
– example: G1 is a complete undirected graph


14
6/16/2022

Adjacent and Incident

 If (v0, v1) is an edge in an undirected graph,


– v0 and v1 are adjacent
– The edge (v0, v1) is incident on vertices v0 and v1
 If <v0, v1> is an edge in a directed graph
– v0 is adjacent to v1, and v1 is adjacent from v0
– The edge <v0, v1> is incident on v0 and v1


Example: a graph with a feedback loop (self edge) and a multigraph

[Figure 6.3: (a) a graph with a self edge at vertex 1; (b) a multigraph with multiple occurrences of the same edge.]


Sub-graph and Path
 A subgraph of G is a graph G’ such that V(G’)
is a subset of V(G) and E(G’) is a subset of E(G)
 A path from vertex vp to vertex vq in an undirected graph G,
is a sequence of vertices, vp, vi1, vi2, ..., vin, vq,
such that (vp, vi1), (vi1, vi2), ..., (vin, vq) are edges
in an undirected graph
 The length of a path is the number of edges in it


Figure 6.4: subgraphs of G1 and G3 (p.261)


(figure) (a) some of the subgraphs of G1; (b) some of the subgraphs of G3


Simple Path and Cycle


 A simple path is a path in which all vertices, except possibly the first
and the last, are distinct.
 A cycle is a simple path in which the first and the last vertices are the
same.
 In an undirected graph G, two vertices, v0 and v1, are connected if
there is a path in G from v0 to v1.
 An undirected graph is connected if, for every pair of distinct vertices
vi, vj, there is a path from vi to vj.


Connected Graphs

(figure) G1: a connected graph; G2: a tree (a connected acyclic graph)


Connected Component
 A connected component of an undirected graph is a maximal
connected subgraph.
 A tree is a graph that is connected and acyclic.
 A directed graph is strongly connected if there is a directed path from
vi to vj and also from vj to vi for each vertex pair vi-vj.
 A strongly connected component is a maximal subgraph that is
strongly connected.


Figure: A graph with two connected components (p.262)


(figure) G4 is not connected; its two connected components (maximal connected subgraphs) are H1 with vertices {0, 1, 2, 3} and H2 with vertices {4, 5, 6, 7}


Figure: Strongly connected components of G3


(figure) G3 is not strongly connected; its strongly connected components (maximal strongly connected subgraphs) are {0, 1} and {2}

Degree
 For an undirected graph
– The degree of a vertex is the number of edges incident to that
vertex.
– The degree of a graph G is the sum of the degrees of its vertices.
– The number of edges in G is the degree of the graph divided
by two, i.e. |E(G)| = degree(G)/2.
 For a directed graph,
– the in-degree of a vertex v is the number of incoming edges of
the vertex v.
– the out-degree of a vertex v is the number of outgoing edges
from vertex v.
– the degree of a vertex v is the sum of the in-degree and out-degree
of the vertex v.
– The degree of the digraph G is the sum of the degrees of its
vertices.


(figure) Degrees in the undirected graphs: in G1 every vertex has degree 3; in G2 vertex 0 has degree 2, vertices 1 and 2 have degree 3, and vertices 3, 4, 5, 6 have degree 1.
(figure) Degrees in the directed graph G3: vertex 0 has in-degree 1 and out-degree 1; vertex 1 has in-degree 1 and out-degree 2; vertex 2 has in-degree 1 and out-degree 0.

Undirected Complete Graph Kn

 The number of edges in an undirected complete graph Kn is
n × (n-1)/2.
 Proof: The number of vertices in the complete graph Kn is n,
and the degree of each vertex is n-1.

    degree(Kn) = Σ_{i=1}^{n} d_i = n × (n-1)

Therefore the number of edges in Kn = degree(Kn) / 2 = n × (n-1)/2


Graph Representations
 Adjacency Matrix
 Adjacency Lists


Adjacency Matrix
 Let G=(V, E) be a graph with n vertices.
 The adjacency matrix of G is a square matrix of order n x n, say A.
 If the edge (vi, vj) is in E(G), a[i][j]=1
 If there is no such edge in E(G), from vertex i to vertex j, then
a[i][j]=0
 The adjacency matrix for an undirected graph is symmetric; the
adjacency matrix for a digraph need not be symmetric


Examples for Adjacency Matrix

G1 (complete undirected graph on vertices 0-3):
    0 1 1 1
    1 0 1 1
    1 1 0 1
    1 1 1 0

G3 (directed, edges <0,1>, <1,0>, <1,2>):
    0 1 0
    1 0 1
    0 0 0

G4 (undirected, two components {0,1,2,3} and {4,5,6,7}):
    0 1 1 0 0 0 0 0
    1 0 0 1 0 0 0 0
    1 0 0 1 0 0 0 0
    0 1 1 0 0 0 0 0
    0 0 0 0 0 1 0 0
    0 0 0 0 1 0 1 0
    0 0 0 0 0 1 0 1
    0 0 0 0 0 0 1 0

The matrices of the undirected graphs G1 and G4 are symmetric.

Merits of Adjacency Matrix

 From the adjacency matrix it is easy to determine whether the graph is
connected or not.
 The degree of vertex vi is Σ_{j=0}^{n-1} a[i][j].
 For a digraph, the row sum is the out-degree, while the
column sum is the in-degree:

    in_degree(vi) = Σ_{j=0}^{n-1} a[j][i]        out_degree(vi) = Σ_{j=0}^{n-1} a[i][j]


Undirected Graph: Algorithm – degree(vi)


1. Algorithm degreeOfVertices(a[][], n){
2. d[n]; // degree of the vertices of the undirected graph G
3. for i = 1 to n do {
4. d[i] = 0;
5. for j = 1 to n do {
6. d[i] = d[i] + a[i][j];
7. }
8. }
9. return d;
10. }


Digraph: Algorithm in/out degree(vi)


1. Algorithm inOutdegreeOfVertices(a[][], n){
2.     in_d[n], out_d[n]; // in/out degree of the vertices of the directed graph G
3.     for i = 1 to n do {
4.         out_d[i] = 0;
5.         for j = 1 to n do { out_d[i] = out_d[i] + a[i][j]; }
6.     }
7.     for j = 1 to n do {
8.         in_d[j] = 0;
9.         for i = 1 to n do { in_d[j] = in_d[j] + a[i][j]; }
10.    }
11.    return in_d, out_d;
12. }


Examples for Adjacency Matrix

For the digraph G2 (edges <0,1>, <1,0>, <1,2>):

    A =  0 1 0     A^2 =  1 0 1     A^3 =  0 1 0
         1 0 1            0 1 0            1 0 1
         0 0 0            0 0 0            0 0 0

    A + A^2 + A^3 =  1 2 1
                     2 1 2
                     0 0 0

Since the matrix A + A^2 + A^3 has some zero elements, it
is an un-connected digraph.

Graph: Algorithm isConnected


1. Algorithm isConnected(a[][], n){
2. A = a; // Adjacency matrix of the graph
3. B = C = A;
4. for i = 1 to n-1 do {
5. B = B x A;
6. C = C + B;
7. }
8. for i = 1 to n do{
9. for j = 1 to n do{
10. if(c[i][j] == 0) then{ return False; }
11. }
12. }
13. return True;
14. }

Examples for Adjacency Matrix

G2 (directed, edges <0,1>, <1,0>, <1,2>):

    A =  0 1 0     A(2) =  0 1 0     A(3) =  1 1 1
         1 0 1             1 1 1             1 1 1
         0 0 0             0 0 0             0 0 0

Since some elements of the matrix A(3) are zero, the
corresponding graph G2 is un-connected.

G5 (undirected, edges (0,1), (1,2)):

    A =  0 1 0     A(2) =  0 1 0     A(3) =  1 1 1
         1 0 1             1 1 1             1 1 1
         0 1 0             0 1 0             1 1 1

Since all elements of A(3) are non-zero, G5 is connected.

Graph: Algorithm isConnected


1. Algorithm Warshall(a[][], n){
2. A = a; // Adjacency matrix of the graph
3. for k = 1 to n do{
4. for i = 1 to n do{
5. for j = 1 to n do {
6. A[i][j] = A[i][j] OR (A[i][k] AND A[k][j]);
7. }
8. }
9. }
10. For i = 1 to n do{
11. For j = 1 to n do{
12. if(A[i][j] == 0) then { return False; }
13. }
14. }
15. return True;
16. }


Spanning Tree

 Let G be a connected undirected graph. A spanning tree of G is a sub-graph of G that includes all the vertices of G and is also a tree.
 Steps to compute the number of spanning trees of an undirected graph G:
 Step 1. A = adjacency matrix of graph G.
 Step 2. Compute matrix B such that b_{i,j} = degree(v_i) if i = j, and b_{i,j} = -a_{i,j} if i ≠ j.
 Step 3. Compute matrix S from matrix B by removing the ith row and the
corresponding column (for any i).
 Step 4. Number of spanning trees of G = determinant(S).

Spanning Tree
Example: G1 with vertices {0, 1, 2, 3} and edges (0,1), (0,2), (1,3), (2,3):

    A =  0 1 1 0      B =   2 -1 -1  0      S =   2 -1 -1
         1 0 0 1           -1  2  0 -1           -1  2  0
         1 0 0 1           -1  0  2 -1           -1  0  2
         0 1 1 0            0 -1 -1  2

    det(S) = 2×(2×2 - 0×0) + 1×(-1×2 - 0×(-1)) - 1×(-1×0 - 2×(-1))
           = 2×4 - 2 - 2 = 8 - 2 - 2 = 4

(figure) The four spanning trees of G1.


Spanning Tree
Example: a graph with vertices {0, 1, 2, 3} and edges (0,1), (0,2), (1,2), (1,3), (2,3):

    A =  0 1 1 0      B =   2 -1 -1  0      S =   2 -1 -1
         1 0 1 1           -1  3 -1 -1           -1  3 -1
         1 1 0 1           -1 -1  3 -1           -1 -1  3
         0 1 1 0            0 -1 -1  2

    det(S) = 2×(3×3 - (-1)×(-1)) + 1×(-1×3 - (-1)×(-1)) - 1×((-1)×(-1) - 3×(-1))
           = 2×8 - 4 - 4 = 16 - 4 - 4 = 8

(figure) The eight spanning trees of this graph.

Adjacency Lists Representation


 A graph G is represented using an array of linked lists, where the
linked list of the ith element of the array contains the vertices adjacent
to vertex i.
 The elements of each linked list are unordered.
 Since the linked list of each array element contains the vertices adjacent to
that vertex, this is called the adjacency list representation of the
graph.
 If G is an undirected graph with n vertices and e edges, then the length of
the array is n and the total number of nodes
in the adjacency lists is 2 × e.
 If G is a digraph with n vertices and e edges,
then the array length is n and the total number of nodes in the
adjacency lists is e.


(figure) Adjacency lists:
G1: 0 → 1 → 2 → 3;  1 → 0 → 2 → 3;  2 → 0 → 1 → 3;  3 → 0 → 1 → 2
G3: 0 → 1;  1 → 0 → 2;  2 → (empty)
G4: 0 → 1 → 2;  1 → 0 → 3;  2 → 0 → 3;  3 → 1 → 2;  4 → 5;  5 → 4 → 6;  6 → 5 → 7;  7 → 6
An undirected graph with n vertices and e edges ==> n head nodes and 2e list nodes

Interesting Operations
degree of a vertex in an undirected graph
– the number of nodes in its adjacency list
number of edges in a graph
– determined in O(n+e)
out-degree of a vertex in a directed graph
– the number of nodes in its adjacency list
in-degree of a vertex in a directed graph
– requires traversing the whole data structure


Some Graph Operations


 Traversal
It is the process of visiting each vertex of the graph. It is also
known as graph search. There are two types of graph traversal:
– Depth First Search (DFS), analogous to preorder tree traversal
– Breadth First Search (BFS), analogous to level order tree traversal
 Connected Components
 Spanning Trees


Figure 6.19: Graph G and its adjacency lists

depth first search: v0, v1, v3, v7, v4, v5, v2, v6
breadth first search: v0, v1, v2, v3, v4, v5, v6, v7


Depth First Search

 DFS – it starts from a given vertex and goes as deep as possible
before backtracking.
1. Algorithm DFS(G, v){
2.     S.push(v); i = 0;
3.     While(S.isEmpty() == False) do{
4.         v = S.pop();
5.         If (v is not in T) then {
6.             T[i] = v; i = i + 1;
7.         }
8.         For each adjacent vertex u of v do{
9.             if u is not in T then { S.push(u); }
10.        }
11.    }
12.    return T;
13. }

 If it uses the adjacency list, then T(n) = O(n+e).
 If it uses the adjacency matrix, then T(n) = O(n²).

Breadth First Search

 BFS – it starts from a given vertex and visits all of its neighbours
before going to the next level.
1. Algorithm BFS(G, v){
2.     Q.insert(v); T[0] = v; i = 1;
3.     While(Q.isEmpty() == False) do{
4.         v = Q.del();
5.         For each adjacent vertex u of v do{
6.             if u is not in T then {
7.                 T[i] = u; i = i + 1;
8.                 Q.insert(u);
9.             }
10.        }
11.    }
12.    return T;
13. }

 If it uses the adjacency list, then T(n) = O(n+e).
 If it uses the adjacency matrix, then T(n) = O(n²).


DFS VS BFS Spanning Tree

(figure) The DFS spanning tree and the BFS spanning tree of the same graph; a nontree edge (an edge of the graph that is not in the spanning tree) would introduce a cycle if added.

Topological Sort: Introduction


 There are many problems involving
a set of tasks in which some of the
tasks must be done before others.
 For example, consider the problem
of taking a course only after taking
its prerequisites.
 Is there any systematic way of
linearly arranging the courses in the
order that they should be taken?

Yes! - Topological sort.


Definition of Topological Sort


 Topological sort is a method of arranging the vertices of a directed acyclic
graph (DAG) as a sequence, such that no vertex appears in the sequence
before its predecessors.

 The graph in (a) can be topologically sorted as in (b)

(figure) (a) a DAG; (b) a topological ordering of its vertices

Topological Sort - Example

(figure) a DAG and one of its topological orderings

Topological Sort is not unique


 Topological sort is not unique.

 The following are all topological sort of the graph below:

s1 = {a, b, c, d, e, f, g, h, i}

s2 = {a, c, b, f, e, d, h, g, i}

s3 = {a, b, d, c, e, g, f, h, i}

s4 = {a, c, f, b, e, h, d, g, i}
etc.

Topological Sort Algorithm


 One way to find a topological sort is to consider in-degrees of the vertices.
 The first vertex must have in-degree zero -- every DAG must have at least one
vertex with in-degree zero.
 The Topological sort algorithm is:
Algorithm topologicalOrderTraversal(A[0:n-1][0:n-1], n){
    v[0:n-1]; // list of vertices
    numVisitedVertices = 0; k = 0;
    T[0:n-1]; // T stores the sorted vertices
    d[0:n-1] = in-degree(G); // d[i] has the in-degree of vertex vi
    while(numVisitedVertices < n){
        For i = 0 to n-1 do{ if (d[i] == 0) then break; }
        if(i == n) then { break; } // there is no vertex of in-degree == 0
        else{
            d[i] = -1;
            T[k++] = v[i];
            numVisitedVertices++;
            // updating the in-degree of its adjacent vertices
            For j = 0 to n-1 do{ If (A[i][j] == 1) then { d[j] = d[j] - 1; } }
        }
    }
    return T;
}


Topological Sort Example

 Demonstrating topological sort on a DAG with vertices A-J (figure); the
in-degree array is updated each time a vertex of in-degree 0 is removed.

Resulting order: D G A B F H J E I C

Weighted Graph Representation


 A graph G=(V, E) is called a weighted graph if each of its edges has a weight.
 A weighted graph may be represented by a cost adjacency matrix or a cost
adjacency list.
 If it is represented by a cost adjacency matrix W, then W is a
square matrix of order n x n, where n is the number of vertices, and
 If the edge (vi, vj) is in E(G), then w[i][j] = weight of edge (vi, vj).
 If there is no edge from vertex i to vertex j in E(G), then
w[i][j] = ∞.
 The diagonal elements of the matrix are zero, i.e. w[i][i] = 0.
 In the cost adjacency list representation of a weighted graph, each
node of the linked list corresponding to vertex i stores an adjacent
vertex and the cost of the edge from vertex i to that adjacent vertex.


Graphs and their cost adjacency matrices and cost adjacency lists

(a) Undirected graph G with edges (V0,V1)=6, (V0,V2)=3, (V1,V2)=5
(b) Cost adjacency matrix:
        0 6 3
    W = 6 0 5
        3 5 0
(c) Cost adjacency list: 0 → (1,6) → (2,3);  1 → (0,6) → (2,5);  2 → (0,3) → (1,5)

(a) Digraph G with edges <V0,V1>=4, <V0,V2>=7, <V1,V0>=6, <V1,V2>=5, <V2,V1>=3
(b) Cost adjacency matrix:
        0 4 7
    W = 6 0 5
        ∞ 3 0
(c) Cost adjacency list: 0 → (1,4) → (2,7);  1 → (0,6) → (2,5);  2 → (1,3)

Single Source Shortest Paths – Dijkstra’s Algorithm


 This algorithm is based on greedy approach. So, it has (i) Selection
procedure and feasibility check, and (ii) Solution Check.
 The high-level algorithm is
1. Algorithm Dijkstra(){
2. Y = {v1}; F = φ;
3. While (instance is not solved) do{
4. //Selection and feasibility check
5. Select a vertex v from V-Y that has shortest path from v1, using only vertices
in Y as intermediate;
6. Add this vertex v to Y;
7. Add the edge that touches v to F;
8. If (Y == V) then {
9. The instance is solved; // Solution Check
10. break;
11. }
12. }
13. }

Single Source Shortest Paths – Dijkstra’s Algorithm


 It has touch[2:n], length[2:n], and slength[2:n] 1D arrays. Where
 touch[i] = index of vertex v in Y such that edge <v, vi> is the last edge in the current
shortest path from v1 to vi using only vertices of Y as intermediates.
 length[i] = length of current shortest path from vertex v1 to vi using only vertices of Y as
intermediates.
 slength[i] = length of the shortest path from v1 to vi.
1. Algorithm Dijkstra(W, n, F){ F = φ;
2. For i = 2 to n do{ touch[i] = 1; length[i] = W[1][i]; slength[i] = ∞; }
3. Repeat (n-1) times { min = ∞;
4. For i = 2 to n do { If(0 ≤ length[i] < min) then { min = length[i]; vnear = i; }}
5. F = F U {<touch[vnear], vnear>};
6. For i = 2 to n do{ // updating the length and touch array
7. If(length[vnear] + W[vnear][i] < length[i]) then{
8. length[i] = length[vnear] + W[vnear][i];
9. touch[i] = vnear;
10. }
11. }
12. slength[vnear] = length[vnear]; length[vnear] = -1;
13. }
14. }

Single Source Shortest Paths – Dijkstra’s Algorithm

 To print the shortest path from vertex v1 to vi, we use the following algorithms:
 touch[i] = index of vertex v in Y such that edge <v, vi> is the last edge in the current
shortest path from v1 to vi using only vertices of Y as intermediates.
 length[i] = length of the current shortest path from vertex v1 to vi using only vertices of Y as
intermediates.
 slength[i] = length of the shortest path from v1 to vi.

1. Algorithm PrintShortestPath(i){
2.     Print “v1 ”;
3.     getPath(i);
4.     Print “v” + i;
5. }

1. Algorithm getPath(i){
2.     If(touch[i] ≠ 1) then {
3.         getPath(touch[i]);
4.         Print “v” + touch[i] + “ ”;
5.     }
6. }


Single Source Shortest Paths – Dijkstra’s Algorithm

Example (figure: digraph with vertices v1..v5):

         v1  v2  v3  v4  v5
    v1    0   7   4   6   1
    v2    ∞   0   ∞   ∞   ∞
W = v3    ∞   2   0   5   ∞
    v4    ∞   3   ∞   0   ∞
    v5    ∞   ∞   ∞   1   0

Initial:
    i         2   3   4   5
    touch     1   1   1   1
    length    7   4   6   1
    slength   ∞   ∞   ∞   ∞

Single Source Shortest Paths – Dijkstra’s Algorithm

(the digraph and cost matrix W are as on the previous slide)

Iteration-1: min = 1, vnear = 5, edge <1, 5>, F = {<1, 5>}
    Before update:  i        2  3  4  5     After update:  i        2  3  4  5
                    touch    1  1  1  1                    touch    1  1  5  1
                    length   7  4  6  1                    length   7  4  2  -1
                    slength  ∞  ∞  ∞  ∞                    slength  ∞  ∞  ∞  1

Iteration-2: min = 2, vnear = 4, edge <5, 4>, F = {<1, 5>, <5, 4>}
    Before update:  i        2  3  4  5     After update:  i        2  3  4  5
                    touch    1  1  5  1                    touch    4  1  5  1
                    length   7  4  2  -1                   length   5  4  -1 -1
                    slength  ∞  ∞  ∞  1                    slength  ∞  ∞  2  1


Single Source Shortest Paths – Dijkstra’s Algorithm

Iteration-3: min = 4, vnear = 3, edge <1, 3>, F = {<1, 5>, <5, 4>, <1, 3>}
    Before update:  i        2  3  4  5     After update:  i        2  3  4  5
                    touch    4  1  5  1                    touch    4  1  5  1
                    length   5  4  -1 -1                   length   5  -1 -1 -1
                    slength  ∞  ∞  2  1                    slength  ∞  4  2  1

Iteration-4: min = 5, vnear = 2, edge <4, 2>, F = {<1, 5>, <5, 4>, <1, 3>, <4, 2>}
    Before update:  i        2  3  4  5     After update:  i        2  3  4  5
                    touch    4  1  5  1                    touch    4  1  5  1
                    length   5  -1 -1 -1                   length   -1 -1 -1 -1
                    slength  ∞  4  2  1                    slength  5  4  2  1

Single Source Shortest Paths – Dijkstra’s Algorithm

Final arrays:   i        2  3  4  5
                touch    4  1  5  1

Shortest path v1 → v2:
    PrintShortestPath(2): print “v1 ”; getPath(2); print “v2”
    getPath(2): touch[2] = 4 ≠ 1 → getPath(4), then print “v4 ”
    getPath(4): touch[4] = 5 ≠ 1 → getPath(5), then print “v5 ”
    getPath(5): touch[5] = 1 → returns without printing
    Printed order: v1 v5 v4 v2

Shortest path v1 → v3:
    PrintShortestPath(3): print “v1 ”; getPath(3); print “v3”
    getPath(3): touch[3] = 1 → returns without printing
    Printed order: v1 v3

2D array – Assignment
 Suppose that there is a 2D array a[m][n]. We want to
store the elements of this 2D array in a 1D array from the last row
to the first row, and within a row from right to left. Derive the
mapping function to map the index of element a_{i,j} into the 1D array.
 Example: the 2D array a[3][2] is mapped into a 1D array.

        a0,0  a0,1
        a1,0  a1,1
        a2,0  a2,1

 Mapping in the 1D array:
    index:    0     1     2     3     4     5
    element:  a2,1  a2,0  a1,1  a1,0  a0,1  a0,0

Solution
Q-1 Let ‘a’ be an empty object of the Array class (data structure). For an even positive integer n we want to (5+5)
insert 1, 2, 3, ..., n using its insert(x, index) member function (method), one by one, in the same sequence,
i.e. the values are inserted in the order 1, 2, 3, 4, ..., n. At each insertion operation we pass the index
argument of insert(x, index) in such a way that the first n/2 elements of the array data member store
the values 1, 3, 5, 7, ..., n-1 and the last n/2 elements store the values 2, 4, 6, 8, ..., n. Write an efficient
algorithm for this purpose and also compute the total number of movement operations.
Ans Algorithm to insert 1, 2, 3, 4, ..., n
Algorithm insert1Ton(Array a, int n){
    If(n%2 == 1) throw Exception(“n must be even.”);
    For i = 1 to n do{
        If(i%2 == 1) then{
            a.insert(i, (i-1)/2);
        }
        Else{
            a.insert(i, i-1);
        }
    }
}
Computation of number of movements
    x   Movements      x   Movements      x   Movements      x     Movements
    1   0              4   0              7   3              ...   ...
    2   0              5   2              8   0              n-1   (n-2)/2
    3   1              6   0              9   4              n     0

⸫ Total number of movements = 1 + 2 + 3 + ... + (n-2)/2 = ((n-2)/2 × ((n-2)/2 + 1)) / 2 = n × (n-2) / 8
Q-2 A square matrix A of order n x n is called an upper left triangular matrix (ULTM) if all of its elements below (2+4+4)
the right diagonal (anti-diagonal) are zero (nulls). For example, the matrix A given in figure 2 is a ULTM of
order 3 x 3. (i) Find the condition under which a square matrix of order n x n is a ULTM. (ii) Write an efficient
algorithm (that requires minimum multiplications) to get the product of two upper left triangular matrices
of order n x n, and also derive the formula for the total number of multiplication operations in this
algorithm.

        a0,0  a0,1  a0,2
        a1,0  a1,1  0
        a2,0  0     0
    Fig 2: ULTM – A
Ans (i) A square matrix of order n x n is an upper left triangular matrix (ULTM) if a_{i,j} = 0 for i + j ≥ n.
(ii) Efficient algorithm to get the product of two ULTMs. For n = 3 the product entries are:

    c0,0 = a0,0×b0,0 + a0,1×b1,0 + a0,2×b2,0    c0,1 = a0,0×b0,1 + a0,1×b1,1    c0,2 = a0,0×b0,2
    c1,0 = a1,0×b0,0 + a1,1×b1,0                c1,1 = a1,0×b0,1 + a1,1×b1,1    c1,2 = a1,0×b0,2
    c2,0 = a2,0×b0,0                            c2,1 = a2,0×b0,1                c2,2 = a2,0×b0,2

Algorithm productULTMs(ULTM A, int m, ULTM B, int n){ // m and n are the orders of matrices A and B respectively
    If(m ≠ n) then throw Exception(“Orders of both matrices are not the same. Product failed.”);
    SquareMatrix C(n, n);
    For i = 0 to n-1 do{
        For j = 0 to n-1 do{
            c_{i,j} = 0;
            For k = 0 to min(n-i-1, n-j-1) do{
                c_{i,j} = c_{i,j} + a_{i,k} × b_{k,j};
            }
        }
    }
    Return C;
}
Number of multiplication operations
    Row Number (i)   Number of multiplication operations
    0                1 + 2 + 3 + ... + n
    1                1 + 2 + 3 + ... (n-2 terms) + (n-1) + (n-1)
    2                1 + 2 + 3 + ... (n-3 terms) + (n-2) + (n-2) + (n-2)
    3                1 + 2 + 3 + ... (n-4 terms) + (n-3) + (n-3) + (n-3) + (n-3)
    ...              ...
    i                1 + 2 + 3 + ... (n-i-1 terms) + (n-i) taken (i+1) times
                       = (n-i-1)×(n-i)/2 + (i+1)×(n-i)
                       = n×(n+1)/2 - i/2 - i²/2
    ...              ...
    n-1              1 + 1 + 1 + ... (n terms) = n

∴ Total multiplication operations = Σ_{i=0}^{n-1} [ n×(n+1)/2 - i/2 - i²/2 ]
    = n²×(n+1)/2 - (1/2)×(n-1)×n/2 - (1/2)×(n-1)×n×(2n-1)/6
    = (n/12)×[6n×(n+1) - 3×(n-1) - (n-1)×(2n-1)]
    = (n/12)×[6n² + 6n - 3n + 3 - 2n² + 3n - 1]
    = (n/12)×[4n² + 6n + 2] = (n/6)×[2n² + 3n + 1] = n×(n+1)×(2n+1)/6.
Q-3 Let “lst” be an object of a singly linked non-circular list that stores n integer numbers. Write an efficient (10)
algorithm partition(LinkedList lst) (without swapping the data of the nodes of the linked list) that partitions
the nodes of the linked list such that all nodes having values less than or equal to the value of the first node come
before the first node, and all nodes whose values are greater than the value of the first node come after the first
node.
Ans Algorithm partition(LinkedList lst){
    int PE = lst.first->data; Node *prev_j, *cur_j, *prev_i, *cur_i;
    prev_j = NULL; cur_j = lst.first; prev_i = lst.first; cur_i = prev_i->link;
while(cur_i != NULL){
if(cur_i->data <= PE){
prev_j = cur_j; cur_j = cur_j->link;
if(cur_j != cur_i){
if(cur_j == prev_i){
cur_j->link = cur_i->link; cur_i->link = cur_j; prev_j->link = cur_i;
}
else{
Node *t = cur_j->link; cur_j->link = cur_i->link; cur_i->link = t;
prev_j->link = cur_i;
prev_i->link = cur_j;
}
Node *t = cur_i; cur_i = cur_j; cur_j = t;
}// inner if closed
}// outer if closed
prev_i = cur_i; cur_i = cur_i->link;
}// while loop closed
if(cur_j != lst.first){
Node *fst = lst.first; Node *t = cur_j->link; cur_j->link = fst->link; fst->link = t;
prev_j->link = fst;
lst.first = cur_j;
}
return lst.first;
}
MCA (II-Sem) Second Sessional Examination, 2021
CSC23: Advanced Data Structures
Dated: Wednesday June 22, 2022 Time: 9:00 AM – 10:00 AM
Time: 1 hour Max. Marks: 20
Instructions:
(i) Write your Roll Number and Name on top of your answer script.
(ii) Answer any TWO questions in your own handwriting and mail the answer script to
[email protected].
(iii) There are 10 extra minutes for uploading the answer scripts.
(iv) Answer scripts uploaded after 10:10 AM will not be accepted.
(v) Students are requested not to share their answers with other students. If the same
mistakes appear in the answer scripts of more than one student, it will be presumed that they have
shared their answers and marks will be deducted.
Q-1 Sort the list of integers: 6, 7, 5, 4, 3, 2, 1 in ascending order using Shell sort and insertion sort (10)
algorithms and determine the total number of shift operations in both sorting algorithms.
Q-2 Let H1(k)=k%7 and H2(k)=k%5+1 be the first and second hash functions respectively. Build the hash (10)
table by inserting the keys 15, 29, 8, 43, 22, 36, 49 one by one, where collisions are resolved using double
hashing. Also calculate the total number of collisions.
Q-3 Let array a[0:n-1] store segregated (odds first, then evens) and sorted positive integers, e.g. [1, 7, 2, (10)
6, 8, 10]. Write an efficient algorithm binarySearchIterativeOnSegregatedSorderOddsEvens(int a[],
int n, int x), based on the iterative binary search algorithm, to search for x in the array a[]. Illustrate
your algorithm for x=10 in the above list.
Solution
Q-1 Sort the list of integers: 6, 7, 5, 4, 3, 2, 1 in ascending order using Shell sort and insertion sort (10)
algorithms and determine the total number of shift operations in both sorting algorithms.
Ans- Shell Sort, gap = floor(n/2) = floor(7/2) = 3
Initial list: 6 7 5 4 3 2 1; subsequences with gap 3: (6, 4, 1), (7, 3), (5, 2)

    Insert operation   subsequence after   # of shift operations
    Insert 4           4 6                 1 shift
    Insert 1           1 4 6               2 shifts
    Insert 3           3 7                 1 shift
    Insert 2           2 5                 1 shift
    List after gap-3 pass: 1 3 2 4 7 5 6   total shifts = 1+2+1+1 = 5

Gap = floor(n/4) = 1
    Insert operation   list after            # of shift operations
    Insert 3           1 3                   0 shifts
    Insert 2           1 2 3                 1 shift
    Insert 4           1 2 3 4               0 shifts
    Insert 7           1 2 3 4 7             0 shifts
    Insert 5           1 2 3 4 5 7           1 shift
    Insert 6           1 2 3 4 5 6 7         1 shift
    Final list: 1 2 3 4 5 6 7               total shifts = 0+1+0+0+1+1 = 3

Grand total shifts (Shell sort) = 5 + 3 = 8

Insertion Sort (on the original list 6 7 5 4 3 2 1)
    Insert operation   list after            # of shift operations
    Insert 7           6 7                   0 shifts
    Insert 5           5 6 7                 2 shifts
    Insert 4           4 5 6 7               3 shifts
    Insert 3           3 4 5 6 7             4 shifts
    Insert 2           2 3 4 5 6 7           5 shifts
    Insert 1           1 2 3 4 5 6 7         6 shifts
    Final list: 1 2 3 4 5 6 7               total shifts = 0+2+3+4+5+6 = 20
Q-2 Let H1(k)=k%7 and H2(k)=k%5+1 be the first and second hash functions respectively. Build the hash (10)
table by inserting the keys 15, 29, 8, 43, 22, 36, 49 one by one, where collisions are resolved using double
hashing. Also calculate the total number of collisions.
Ans- H1(k) = k%7, H2(k) = k%5+1

    index:  0   1   2   3   4   5   6
    key:    49  15  43  36  22  8   29

    Insert 15: i = 15%7 = 1
    Insert 29: i = 29%7 = 1, collision; m = 29%5+1 = 5; i = (1+5)%7 = 6
    Insert 8:  i = 8%7 = 1, collision; m = 8%5+1 = 4; i = (1+4)%7 = 5
    Insert 43: i = 43%7 = 1, collision; m = 43%5+1 = 4; i = (1+4)%7 = 5, collision; i = (5+4)%7 = 2
    Insert 22: i = 22%7 = 1, collision; m = 22%5+1 = 3; i = (1+3)%7 = 4
    Insert 36: i = 36%7 = 1, collision; m = 36%5+1 = 2; i = (1+2)%7 = 3
    Insert 49: i = 49%7 = 0

    Total collisions = 6
Q-3 Let array a[0:n-1] stores segregated (in odds and evens) and sorted positive integers i.e. [1, 7, 2, (10)
6, 8, 10]. Write an efficient algorithm binarySearchIterativeOnSegregatedSorderOddsEvens(int
a[], int n, int x), based on iterative binary search algorithm, to search x in this list of array a[].
Illustrate your algorithm for x=10 in above list.
Ans- Algorithm binarySearchForSegregatedSortedEvensOddsList(int a[], int n, int x){
    if(x <= 0) return -1;
    int low = 0, high = n-1;
    while(low <= high){
        int mid = (low+high)/2;
        cout << "\nTest low = " << low << ", high = " << high << ", mid = " << mid;
        if(x%2==1 && a[mid]%2==1){                     // both odd: normal binary search step
            if(x == a[mid]) return mid;
            else if(x < a[mid]) high = mid-1;
            else low = mid+1;
        }
        else if(x%2==1 && a[mid]%2==0) high = mid-1;   // odd key, even element: go left
        else if(x%2==0 && a[mid]%2==1) low = mid+1;    // even key, odd element: go right
        else{                                          // both even: normal binary search step
            if(x == a[mid]) return mid;
            else if(x < a[mid]) high = mid-1;
            else low = mid+1;
        }
    }
    return -1;
}
Illustration
[1, 7, 2, 6, 8, 10] x = 10, n = 6
low = 0, high = 5
mid = 2
low = 3, high = 5
mid = 4
low = 5, high = 5
mid = 5
Return 5.
