ADVANCED DATA STRUCTURES & ALGORITHMS ANALYSIS
B.Tech III Sem, Scheme 2023
30 Jun 2025
Topics: Time complexities, examples, asymptotic notations
Today’s Topics
Time complexity
Examples
Asymptotic notations
Examples
Time Complexity
Time complexity is a way to express how the runtime
of an algorithm increases with the size of the input.
Time complexity quantifies the relationship between
the input size and the number of operations an
algorithm performs.
Represented using Big O notation, such as O(1),
O(n), O(n^2), etc.
It's a measure of efficiency, not actual execution
time.
Time Complexity
tp(n) = total processing time for input size n
ADD(n) = number of additions performed for size n
SUB(n) = number of subtractions
MUL(n) = number of multiplications
DIV(n) = number of divisions
ca, cs, cm, cd = constant time cost for each operation
Combining these, tp(n) = ca*ADD(n) + cs*SUB(n) + cm*MUL(n) + cd*DIV(n)
Time complexity
O(1) Constant time
O(log n) Logarithmic time
O(n) Linear time
O(n log n) Linearithmic time
O(n^2) Quadratic time
O(2^n) Exponential time
O(n!) Factorial time
Problem, time complexity, and program:

Print "Welcome to ADSA Course": O(1)
int main() { printf("Welcome to ADSA Course"); return 0; }

Print "ADSA" n times: O(n)
int main() { int i, n = 10; for (i = 1; i <= n; i++) printf("ADSA\n"); }

Print the pattern 2 4 8 16 32 64: O(log2(n))
int main() { for (int i = 2; i <= 80; i *= 2) printf("%d\t", i); }

Matrix addition with n rows and m cols: O(n*m)
for (int i = 0; i < n; i++) { for (int j = 0; j < m; j++) c[i][j] = a[i][j] + b[i][j]; }
Algorithm: Sum of elements of array
Algorithm: Recursive Sum
Algorithm: Sum of two matrices
Asymptotic Notation
Mathematical notation used to describe the rate at
which a function grows or decreases.
Used in analysis of algorithms to describe the time
complexity and space complexity of an algorithm.
Helps to analyze the performance of an algorithm without having to run it on different inputs.
Using asymptotic analysis, we can determine the best-case, average-case, and worst-case behaviour of an algorithm.
Asymptotic Notation
Big-O notation:
provides upper bound of a function.
represents worst-case scenario
Omega notation:
provides lower bound of a function.
represents best-case scenario
Theta notation:
Provides both an upper and lower bound
represents the average-case scenario
Growth rate comparison (slowest- to fastest-growing):
O(1) < O(log(n)) < O(n) < O(n log(n)) < O(n^2) < O(2^n) < O(n!)
O(1): Constant Time
O(n): Linear Time
O(log(n)): Logarithmic Time
For example, when n is 8, a while loop that halves n each pass will iterate log2(8) = 3 times.
O(n log(n)) Example
O(n^m): Polynomial Time
Factorial Time Algorithms: O(n!)
Big O notation
f(n) = O(g(n)) iff there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c*g(n) for all n ≥ n0
Big O notation: Example
Consider f(n) = 3n+2. Can this function be represented as O(g(n))?
f(n) = O(g(n)) iff there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0
f(n) ≤ c*g(n)
3n+2 ≤ c*g(n)
3n+2 ≤ 4n for all n ≥ 2
f(n) = O(g(n)), i.e., 3n+2 = O(n), with c = 4 and n0 = 2
Big O notation: Example 2
f(n) = 3n^2 + 2n + 4
To find an upper bound of f(n), we have to find c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0
0 ≤ 3n^2 + 2n + 4 ≤ 9n^2 for all n ≥ 1
f(n) = O(g(n)) = O(n^2) for c = 9, n0 = 1
Big O notation: Example 3
Consider f(n) = 6*2^n + n^2 = O(2^n). Find c and n0.
Solution: c = 7, n0 = 4
(6*2^n + n^2 ≤ 7*2^n whenever n^2 ≤ 2^n, which holds for all n ≥ 4)
Big O notation (O)
Big O notation is helpful in finding the worst-case time complexity of a particular program.
Examples of Big O time complexity:
Linear search: O(N), where N is the number of elements in the given array
Binary search: O(log N), where N is the number of elements in the given array
Bubble sort: O(N^2), where N is the number of elements in the given array
Omega notation
Provides a lower bound on the growth rate of an algorithm's running time or space usage.
It represents the best-case scenario, i.e., the minimum amount of time or space an algorithm may need to solve a problem.
For example, if an algorithm's running time is Ω(n), then the running time of the algorithm grows at least linearly with the input size n.
Omega notation
f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that
0 ≤ c*g(n) ≤ f(n) for all n ≥ n0
Omega notation: Example
f(n) = 3n+2. Can this function be represented as Ω(g(n))?
f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0
c*g(n) ≤ f(n); c = 3 and n0 = 1
3n ≤ 3n+2
f(n) = Ω(g(n)) = Ω(n) for c = 3, n0 = 1
Omega notation: Example
f(n) = 8n^2 + 2n - 3. Can this function be represented as Ω(g(n))?
f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0
c*g(n) ≤ f(n); c = 7 and n0 = 1
7n^2 ≤ 8n^2 + 2n - 3 for all n ≥ 1
f(n) = Ω(g(n)) = Ω(n^2) for c = 7, n0 = 1
Omega notation: Example
f(n) = 2n^3 + 4n + 5. Can this function be represented as Ω(g(n))?
f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0
Consider c = 2 and n0 = 1
0 ≤ 2n^3 ≤ 2n^3 + 4n + 5
f(n) = Ω(g(n)) = Ω(n^3) for c = 2, n0 = 1
Theta notation
Provides both an upper and lower
bound on the growth rate of an
algorithm’s running time or space usage.
It represents the average-case scenario.
Theta notation
f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that
0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0
Theta notation: Example
f(n) = 4n+3. Can this function be represented as Θ(g(n))?
f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0
Consider c1 = 4, c2 = 5 and n0 = 3
At n = 3: 0 ≤ 4*3 ≤ 4(3)+3 ≤ 5*3, and the inequality holds for all n ≥ 3
f(n) = Θ(g(n)) = Θ(n) for c1 = 4, c2 = 5, n0 = 3
Theta notation: Example
Consider f(n) = 3n+2. Represent f(n) in terms of Θ(n) and find c1, c2 and n0.
Solution: c1 = 3, c2 = 4, n0 = 2
(3n ≤ 3n+2 ≤ 4n for all n ≥ 2)
Theta notation
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0}
Theta notation describes the average-case time complexity of a computer program.
Examples of Theta notation:
A program whose parts cost N^2, N log N, and 3*N^2 has total time Θ(N^2)
A program whose parts cost N, log N, and 5*N has total time Θ(N)
AVL Tree: Introduction
AVL Tree
Self balancing trees
make sure that a tree remains balanced
as we insert new nodes.
Examples:
AVL trees
Red-black trees
Splay trees
B-trees
AVL Tree
AVL tree: a binary search tree that uses modified add and
remove operations to stay balanced as its elements change
AVL trees are self-balancing binary search trees.
Invented in 1962 by Adelson-Velskii and Landis (AVL)
Properties of AVL trees:
Sub-trees of every node differ in height by at most one level.
Every sub-tree is an AVL tree.
In AVL trees, the balance factor of each node is either 0, 1, or -1.
AVL Tree
basic idea: When nodes are added to /
removed from the tree, if the tree
becomes unbalanced, repair the tree
until balance is restored.
AVL Tree
At any node, there are 3 possibilities: the left subtree is taller, both subtrees have the same height, or the right subtree is taller.
AVL Tree
Maximum possible number of nodes in an AVL tree of height H = 2^(H+1) - 1
Ex: If H = 3, the maximum possible number of nodes is 2^4 - 1 = 15
Minimum number of nodes in an AVL tree of height H is given by the recurrence N(H) = N(H-1) + N(H-2) + 1
Minimum possible number of nodes in an AVL tree of height 3 = 7
Binary Search Tree vs AVL Tree
Balance factor
Bal_Factor(T) = Height(T.left) - Height(T.right)
In an AVL tree, no node's two child subtrees differ in height by more than 1.
Balance factor values are: -1, 0 or 1.
If the balance factor of a node is 1, the left sub-tree is one level higher than the right sub-tree.
If the balance factor of a node is 0, the left sub-tree and the right sub-tree have equal height.
If the balance factor of a node is -1, the left sub-tree is one level lower than the right sub-tree.
AVL Tree: Balancing factor
Consider the keys: 38, 40, 47, 55, 60, 56, 65
[Tree: root 55; left child 40 with children 38 and 47; right child 60 with children 56 and 65]
Height of left subtree = 2, height of right subtree = 2
Balancing factor = 0 (i.e., 2 - 2 = 0)
AVL Tree: Balancing factor
Consider the keys: 38, 40, 47, 55, 60
[Tree: root 55; left child 40 with children 38 and 47; right child 60]
Height of left subtree = 2, height of right subtree = 1
Balancing factor = 1 (i.e., 2 - 1 = 1)
AVL Tree or Not
AVL Tree
Sorted (Order)
5 < 10 < 15
Height
heights of the two subtrees
of every node differ by at most 1
AVL Tree or Not
AVL Tree
Not AVL Tree
Sorted (Order): Yes
Height: No
AVL Tree or Not
[Figures: further example trees, each labelled AVL Tree or Not AVL Tree]
Inserting node
MISC Slides
Tree
A tree is recursively defined
as a set of one or more
nodes where one node is
designated as the root of the
tree and all the remaining
nodes can be partitioned into
non-empty sets each of
which is a sub-tree of the
root
Tree
Root node: The root node A is the topmost
node in tree. If A = NULL, then it means tree is
empty.
Sub-trees: If root node A is not NULL, then the
trees T1, T2, and T3 are called sub-trees of A.
Leaf node: A node that has no children is called a leaf node or terminal node (E, F, J, K, H, I).
Path: A sequence of consecutive edges is
called a path. Path from root node A to node I
is given as: A, D, and I
Ancestor node: An ancestor of a node is
any predecessor node on the path from root
to that node. The root node does not have
any ancestors. nodes A, C, and G are the
ancestors of node K.
Descendant node: A descendant node is
any successor node on any path from the
node to a leaf node. Leaf nodes do not have
any descendants. Nodes C, G, J, and K are
the descendants of node A.
Level number: Every node in the tree is
assigned a level number in such a way that
the root node is at level 0, children of the
root node are at level number 1. Thus,
every node is at one level higher than its
parent. So, all child nodes have a level
number given by parent’s level number + 1
Degree: Degree of a node is
equal to the number of
children that a node has. The
degree of a leaf node is zero.
In-degree: In-degree of a
node is the number of edges
arriving at that node.
Out-degree: Out-degree of a
node is the number of edges
leaving that node
Types of Trees
1. General trees: Stores elements hierarchically
2. Forests: ordered set of zero or more general
trees
3. Binary trees: every node in tree contains a left sub-
tree and a right sub-tree
4. Binary search trees
5. Expression trees
6. Tournament trees
General Trees
Stores elements hierarchically.
top node of a tree is the root node and
each node, except the root, has a
parent.
General trees which have 3 sub-trees
per node are called ternary trees
Forests
A forest is a disjoint union of trees. A set of disjoint trees (or forests)
is obtained by deleting the root and the edges connecting the root
node to nodes at level 1.
every node of a tree is the root of some sub-tree. Therefore, all the
sub-trees immediately below a node form a forest.
A forest can also be defined as an ordered set of zero or more general
trees.
a general tree must have a root, a forest on the other hand may be
empty because by definition it is a set, and sets can be empty.
we can convert a general tree into a forest by deleting the root node
of the tree
Binary Trees
A binary tree is a data structure that is defined as a collection
of elements called nodes.
In a binary tree, the topmost element is called the root node,
and each node has 0, 1, or at the most 2 children.
A node that has zero children is called a leaf node or a
terminal node.
A binary tree is recursive by definition, as every node in the tree contains a left sub-tree and a right sub-tree. Even terminal nodes contain an empty left sub-tree and an empty right sub-tree.
InOrder Tree Traversal
1. Go to the left subtree.
2. Visit node.
3. Go to the right subtree.
PreOrder Tree Traversal
1. Visit node.
2. Go to the left subtree.
3. Go to the right subtree.
PostOrder Tree Traversal
1. Go to the left subtree.
2. Go to the right subtree.
3. Visit the node.
Binary Search Trees
A binary search tree has these properties for each of its nodes:
All the values in the node's left subtree are less than the value of the node itself.
All the values in the node's right subtree are greater than the value of the node itself.
BST: Arranging keys
BST Vs AVL
The cost of lookup is determined by
how far down the tree we need to go
if the key is in the tree, the worst case
is when it is in a leaf
AVL Tree: Intro
balanced tree: One whose subtrees differ in
height by at most 1 and are themselves
balanced.
The runtime of adding to / searching a BST is
closely related to height.
The number of nodes on the longest path
from the root to a leaf is called the height of
the tree
Height of tree: Mathematical definition
A tree is balanced if h ∈ O(log n), where h is its height and n is the number of nodes.
On a balanced tree, lookup, insert and find_min cost O(log n).
Height is the maximum number of nodes on a path from the root to any leaf.
AVL Tree: Height and balance factor
Height of leaf nodes is 0, i.e., nodes 25, 65, 85, 125 and 175 have height 0.
Balance factor of node 75?
Height of left subtree - height of right subtree = 1 - 1 = 0
Balance factor of node 50?
Height of left subtree - height of right subtree = 1 - 2 = -1
B-tree
A B-tree is a special type of self-balancing search tree in which each node can contain more than one key and can have more than two children.
A B-tree is a multiway search tree. A node in a B-tree of order n can have at most n-1 values and n children.
All values that appear in the left sub-tree are smaller than the leftmost value in the parent node.
All values that appear in the right sub-tree are greater than the rightmost value in the parent node.
All values that appear in a middle sub-tree are greater than the leftmost value in the parent node and smaller than the rightmost value in the parent node.
•For each node x, the keys are stored in increasing order.
•In each node, there is a boolean value x.leaf which is true if x is a leaf.
•If n is the order of the tree, each internal node can contain at most n - 1 keys along with a pointer to each child.
•Each node except the root can have at most n children and at least ⌈n/2⌉ children.
•All leaves have the same depth (i.e., the height h of the tree).
•The root has at least 2 children and contains a minimum of 1 key.
•If n ≥ 1, then for any n-key B-tree of height h and minimum degree t ≥ 2, h ≤ log_t((n+1)/2).