Asymptotic Notation
Asymptotic Analysis
• As we know, a data structure is a way of organizing data efficiently, and that
efficiency is measured in terms of either time or space. The ideal data structure is
one that performs all of its operations in the least possible time and occupies the
least possible memory. Our focus is on finding the time complexity rather than the
space complexity; by finding the time complexity, we can decide which data
structure is best for an algorithm.
• The main question that arises is: on what basis should we compare the time
complexity of data structures? Time complexity can be compared based on the
operations performed on them. Let's consider a simple example.
• Suppose we have an array of 100 elements, and we want to insert a new element at
the beginning of the array. This is a tedious task, because we first need to shift
every element one position to the right and only then place the new element at the
start of the array, as the sketch below illustrates.
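A minimal C sketch of this idea (the function name insert_at_front and the fixed capacity are our own illustrative choices): inserting at index 0 forces every existing element to move one slot to the right, so the cost grows with the number of elements.

```c
#include <stdio.h>

#define CAPACITY 8  /* hypothetical fixed capacity for this sketch */

/* Insert value at index 0 by shifting every existing element right.
   All n elements are touched, so the cost grows linearly with n. */
void insert_at_front(int arr[], int *n, int value) {
    for (int i = *n; i > 0; i--)    /* shift right, last element first */
        arr[i] = arr[i - 1];
    arr[0] = value;
    (*n)++;
}

int main(void) {
    int arr[CAPACITY] = {10, 20, 30, 40, 50};
    int n = 5;
    insert_at_front(arr, &n, 5);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);      /* prints: 5 10 20 30 40 50 */
    printf("\n");
    return 0;
}
```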
Asymptotic Analysis
• Now suppose we use a linked list as the data structure to add the element at the
beginning. Each node of a linked list contains two parts, i.e., the data and the
address of the next node. We simply store the address of the current first node in
the new node, and the head pointer then points to the newly added node, as sketched
below. Therefore, we conclude that adding data at the beginning of a linked list is
faster than in an array. In this way, we can compare data structures and select the
best possible one for the operations we need.
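A minimal C sketch of head insertion (the node layout and names are our own illustrative choices): the new node stores the address of the old first node and becomes the new head, so nothing is shifted and the cost does not depend on the list length.

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
    int data;
    struct node *next;   /* address of the next node */
};

/* Insert at the beginning: the new node stores the address of the
   current first node and becomes the new head. */
struct node *insert_at_head(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    n->data = value;
    n->next = head;      /* point to the old first node */
    return n;            /* the new node is the new head */
}

int main(void) {
    struct node *head = NULL;
    head = insert_at_head(head, 30);
    head = insert_at_head(head, 20);
    head = insert_at_head(head, 10);
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->data);          /* prints: 10 20 30 */
    printf("\n");
    while (head != NULL) {               /* release the nodes */
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}
```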
How to find the Time Complexity or running time for performing the operations?
• Measuring the actual running time is not practical at all, because the running time
of any operation depends on the size of the input. Let's understand this statement
through a simple example.
• Suppose we have an array of five elements, and we want to add a new element at the
beginning of the array. To achieve this, we need to shift each element one position to
the right, and suppose each shift takes one unit of time. With five elements, five
units of time are taken; with 1000 elements, it takes 1000 units of time to shift. We
conclude that the time taken depends on the input size, as the counting sketch below
confirms.
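A small illustrative C sketch (names are ours) that counts one unit of time per element shifted, reproducing the "5 elements, 5 units" observation for larger sizes as well:

```c
#include <stdio.h>
#include <stdlib.h>

/* Shift all n elements one slot to the right (making room at index 0)
   and count each move as one unit of time. */
long front_insert_cost(int *arr, long n, int value) {
    long units = 0;
    for (long i = n; i > 0; i--) {
        arr[i] = arr[i - 1];
        units++;                 /* one unit per element moved */
    }
    arr[0] = value;
    return units;
}

int main(void) {
    long sizes[] = {5, 1000, 100000};
    for (int k = 0; k < 3; k++) {
        long n = sizes[k];
        int *arr = calloc(n + 1, sizeof *arr);  /* room for one extra */
        printf("n = %6ld -> %6ld units of time\n",
               n, front_insert_cost(arr, n, 1));
        free(arr);
    }
    return 0;
}
```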
• Therefore, if the input size is n, then f(n) is a function of n that denotes the time
complexity.
How to calculate f(n)?
• Calculating the value of f(n) for smaller programs is easy, but for bigger programs
it is not. We can compare data structures by comparing their f(n) values. In
particular, we look at the growth rate of f(n), because one data structure may be
better than another for smaller input sizes but not for larger ones. Now, how do we
find f(n)?
• Let's look at a simple example.
• f(n) = 5n² + 6n + 12
• where n is the size of the input and f(n) gives the number of instructions
executed.
• The percentage contribution of each term to f(n) as n grows:

       n       5n²       6n        12
       1     21.74%    26.09%    52.17%
      10     87.41%    10.49%     2.09%
     100     98.79%     1.19%     0.02%
    1000     99.88%     0.12%     0.0002%
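The table above can be reproduced with a few lines of C (an illustrative sketch, not part of the original material):

```c
#include <stdio.h>

/* Reproduce the table above: the percentage each term of
   f(n) = 5n^2 + 6n + 12 contributes to the total. */
int main(void) {
    double ns[] = {1, 10, 100, 1000};
    printf("%8s %10s %10s %10s\n", "n", "5n^2", "6n", "12");
    for (int i = 0; i < 4; i++) {
        double n   = ns[i];
        double sq  = 5 * n * n, lin = 6 * n, con = 12;
        double f   = sq + lin + con;
        printf("%8.0f %9.2f%% %9.2f%% %9.4f%%\n",
               n, 100 * sq / f, 100 * lin / f, 100 * con / f);
    }
    return 0;
}
```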
• As we can observe in the above table, as the value of n increases, the contribution
of the 5n² term grows while the contributions of 6n and 12 shrink. For larger values
of n, the squared term accounts for almost 99% of the time. Since the n² term
contributes most of the running time, we can eliminate the other two terms.
How to calculate f(n)?
Therefore,
• f(n) ≈ 5n²
• Here we get an approximate time complexity whose result is very close to the
actual one. This approximate measure of time complexity is known as asymptotic
complexity. We are not calculating the exact running time; we eliminate the
insignificant terms and consider only the term that takes most of the time.
• Worst case: the input for which the algorithm takes the longest time.
• Average case: the average time taken over program executions.
• Best case: the input for which the algorithm takes the least time.
Asymptotic Notations
• The commonly used asymptotic notations for describing the running-time
complexity of an algorithm are given below.
• If f(n) and g(n) are two functions defined for positive integers, then
f(n) = O(g(n)) (read as "f(n) is big oh of g(n)" or "f(n) is of the order of g(n)")
if there exist constants c and n₀ such that:
• f(n) ≤ c·g(n) for all n ≥ n₀
• This implies that f(n) does not grow faster than g(n); in other words, g(n) is an
upper bound on the function f(n). Here we are bounding the growth rate of the
function, which gives the worst-case time complexity, i.e., how badly an algorithm
can perform.
Big oh Notation (O)
Let's understand this through an example.
• Example 1: f(n) = 2n + 3, g(n) = n
• Now we have to check: is f(n) = O(g(n))?
• For f(n) = O(g(n)), the following condition must be satisfied:
• f(n) ≤ c·g(n)
• First, we replace f(n) by 2n + 3 and g(n) by n.
• 2n + 3 ≤ c·n
• Let's assume c = 5, n = 1; then
• 2·1 + 3 ≤ 5·1
• 5 ≤ 5
• For n = 1, the above condition is true.
• If n = 2:
• 2·2 + 3 ≤ 5·2
• 7 ≤ 10
• For n = 2, the above condition is true.
Big oh Notation (O)
• For c = 5, the condition 2n + 3 ≤ c·n holds for every value of n starting from 1.
Therefore, we can say that for the constant c = 5 and the constant n₀ = 1,
2n + 3 ≤ c·n is always satisfied. Since the condition holds, f(n) is big oh of g(n),
i.e., f(n) grows linearly, and c·g(n) is an upper bound on f(n). Represented
graphically, the curve of c·g(n) lies above the curve of f(n) for all n ≥ n₀. The
sketch below checks the bound numerically.
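A quick illustrative C check (the test range of one million is an arbitrary choice of ours) that the inequality 2n + 3 ≤ 5·n holds from n₀ = 1 onward:

```c
#include <stdio.h>

/* Check f(n) = 2n + 3 <= c * g(n) with g(n) = n, c = 5, n0 = 1,
   over a large (arbitrary) test range. */
int main(void) {
    const long c = 5, n0 = 1;
    int holds = 1;
    for (long n = n0; n <= 1000000; n++)
        if (2 * n + 3 > c * n) { holds = 0; break; }
    printf("2n + 3 <= 5n for all tested n >= 1: %s\n",
           holds ? "yes" : "no");
    return 0;
}
```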
Omega Notation (Ω)
• It is the formal way to represent the lower bound of an algorithm's running time.
It measures the least amount of time an algorithm can possibly take to complete,
i.e., the best-case time complexity.
• If we want to state that an algorithm takes at least a certain amount of time,
without giving an upper bound, we use big-Ω notation, i.e., the Greek letter
"omega". It is used to bound the growth of the running time from below for large
input sizes.
• If f(n) and g(n) are two functions defined for positive integers, then
f(n) = Ω(g(n)) (read as "f(n) is omega of g(n)") if there exist constants c and n₀
such that:
• f(n) ≥ c·g(n) for all n ≥ n₀, where c > 0
Omega Notation (Ω)
Let's consider a simple example.
• If f(n) = 2n + 3 and g(n) = n,
• is f(n) = Ω(g(n))?
• It must satisfy the condition:
• f(n) ≥ c·g(n)
• To check the above condition, we first replace f(n) by 2n + 3 and g(n) by n.
• 2n + 3 ≥ c·n
• Suppose c = 1:
• 2n + 3 ≥ n (this inequality is true for any value of n starting from 1).
• Therefore, it is proved that f(n) = 2n + 3 is big omega of g(n) = n, as the check
below confirms.
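As before, a small illustrative C check of the Ω condition 2n + 3 ≥ 1·n over the same arbitrary test range:

```c
#include <stdio.h>

/* Check f(n) = 2n + 3 >= c * g(n) with g(n) = n and c = 1,
   over the same (arbitrary) test range. */
int main(void) {
    const long c = 1;
    int holds = 1;
    for (long n = 1; n <= 1000000; n++)
        if (2 * n + 3 < c * n) { holds = 0; break; }
    printf("2n + 3 >= 1*n for all tested n >= 1: %s\n",
           holds ? "yes" : "no");
    return 0;
}
```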
Common Orders of Growth
• constant − Ο(1)
• logarithmic − Ο(log n)
• linear − Ο(n)
• n log n − Ο(n log n)
• quadratic − Ο(n²)
• cubic − Ο(n³)
• polynomial − n^Ο(1)
• exponential − 2^Ο(n)
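To get a feel for how quickly these classes separate, here is an illustrative C sketch (compile with -lm for the math library) evaluating a few of them at increasing input sizes:

```c
#include <stdio.h>
#include <math.h>

/* Evaluate a few of the growth functions above at increasing n
   to show how quickly the classes separate. */
int main(void) {
    double ns[] = {10, 100, 1000};
    printf("%6s %10s %10s %12s %14s\n",
           "n", "log n", "n log n", "n^2", "2^n");
    for (int i = 0; i < 3; i++) {
        double n = ns[i];
        printf("%6.0f %10.2f %10.0f %12.0f %14.3e\n",
               n, log2(n), n * log2(n), n * n, pow(2.0, n));
    }
    return 0;
}
```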