
Big O Notation

Does that big fat O really mess with your mind? Is "TLE" your worst nightmare? If so,
then you have come to the right place! This is going to be a tutorial on Big O
notation. It is a collection of the ways the concept was introduced to me and how I
became familiar with it. So, without delay…

Beginners
Imagine you want to watch a movie. How are you going to get it? You have two options:

1. You can order it online, or

2. You can download it. (You don't support piracy, so you will pay the same
amount either way.)

If you buy it online, it will take about 24 hours to be delivered, but if you just
download it, it will take you 23 hours (yes, you have a slow internet connection). So,
which option are you going to choose? It's obviously the second one, right? If you only
need to watch ONE movie, then sure, go ahead. BUT! If you want to watch 10 movies, what
are you going to do? Downloading 10 movies will take 230 hours, but buying 10 movies
will still take only 24 hours!

This shows that buying the movies takes constant time (O(1)) to be delivered, i.e., it
takes 24 hours no matter how many movies you buy! But downloading the movies takes
linear time (O(N)): the time is directly proportional to N (the number of movies). So
we see that downloading is better than buying up to a certain value of N, but since its
time "grows" faster than the other option's, beyond a certain value of N, buying is the
better option.
Elementary
Here, I am going to introduce the more mathematical jargon. But don't be intimidated;
it's not that hard to wrap your head around. It is assumed you know about functions and
graphs (not graph theory, the other kind). So, without further ado…

In this graph, you can see two functions f(n) and g(n), where n is the number of inputs.
The Y-axis denotes the time taken and the X-axis denotes the number of inputs. It's
clear that after a certain value of n (say x), the function g(n) is always greater than
f(n) in terms of time taken, as in the example given above: for n = 1, downloading is
better than buying, but beyond that, buying will always be the better option.

So, formally speaking…

f(n) = O(g(n)) if and only if there exist constants c and x such that 0 ≤ f(n) ≤
c · g(n) for all n ≥ x.
If you don't understand the previous line, it's recommended that you take a good look
at it and make sure you understand it. If you think about it, in some sense g(n)
defines an "upper bound" for the function f(n), i.e., a worst-case scenario for f(n).
While programming, the function f(n) denotes the time it takes our program to run,
where n is the number of inputs.

Intermediate
Defining an upper bound for a function isn't all that hard. Let's take an example.
Suppose f(x) = 2x² + 3x + 4. We use a clever trick and define g(x) as x raised to the
same power as the highest term in f(x). So g(x) = x². By this definition, we know that

f(x) = 2x² + 3x + 4 ≤ 2x² + 3x² + 4x² = 6x² (since x is a positive integer, x² ≥ x).

● So here f(x) ≤ 6g(x). According to our definition, c = 6, and by solving the
inequality we can also find the constant above which it holds.
● Basically, g(x) is just the highest term of f(x) with its constant dropped.
● Just as Big-O tells you which functions grow at a rate ≥ that of f(N) for large N,
there are other notations:
● Big-Theta, which tells you which functions grow at the same rate as f(N), for
large N.
● Big-Omega, which tells you which functions grow at a rate ≤ that of f(N), for large
N.

(Note: ≥, "the same", and ≤ are not really accurate here, but the concepts used in
asymptotic notation are similar.)

Note: I won't be going into the more advanced details of asymptotic analysis of
the growth of functions, as they are not needed for competitive programming. You
may do a bit of research if you are interested in this field. Another note: none
of this is perfectly rigorous, but it serves the purpose and gives good intuition,
so I stuck with it.
Competitive Programming
In competitive programming, the formal Big O machinery is not really needed; you should
just be able to find the time complexity in a rough way. You can use Big O notation
to describe how your program's running time depends on the value of N (the number of
inputs). Suppose you write the following line of code:

int a = 2*N;

What is the complexity of this? Does the running time of this line depend on the value
of N? No. So this takes constant time to run, and its complexity is O(1).
Some more examples:

int b = 0;
for(int i = 1; i <= N; i++)
    b += i;

What is the complexity of the above code? The line in the loop runs N times, so your
program's running time is directly proportional to N, i.e., O(N).

NOTE: Since you drop the constants in Big O notation, even if there were two or
more lines in the loop, it would still take O(N) time. Sometimes those constants
do matter, but such situations are rare.

int b = 0;
for(int i = 1; i <= N; i++)
    for(int j = 1; j <= N; j++)
        b += i * j;

In the above code, the inner loop runs N times for each i, and the outer loop runs N
times, so overall the loop body runs N² times. So the complexity of the above code is
O(N²).

int b = 0;
for(int i = 1; i <= N; i++)
    for(int j = 1; j <= i; j++)
        b += i * j;

In the above code, the inner loop runs i times for each i. So the total run count is
1+2+3+…+N = N(N+1)/2. In Big O notation, O(N(N+1)/2) = O((N²+N)/2) = O(N²)
(since we drop the constants and keep the highest term).

So the last two codes have the same complexity, even though the second one clearly
should take less time. In fact, the second one does take less time, but Big O tells us
how the functions grow with respect to N, and they grow at almost the same rate.

Now, as you know, you have time constraints in competitive programming, and you are
also given the maximum values of N (or any variable your code depends on). It is safe
to assume the machine can handle about 10^8 operations per second, and we need to write
our code accordingly. Clearly, an O(N²) solution will get TLE if N can be as large as
10^5, since (10^5)² = 10^10 and we can only do about 10^8 (maybe 2.5 × 10^8 or a bit
more; such a solution may just barely pass). So we need to take good care of that.

Accordingly, O(1) and O(log N) solutions should always pass (I calculated the
maximum limit for log N and it is very large, so we needn't worry); an O(sqrt(N))
solution will pass if N ≤ 10^16; O(N) will pass if N ≤ 10^8; an O(N log N) solution
will pass if N ≤ 10^6; O(N²) will pass if N ≤ 10^4; and you can calculate limits for
other functions of N.
2.2 Complexity classes
The following list contains common time complexities of algorithms:
1. O(1) The running time of a constant-time algorithm does not depend on the input size.
A typical constant-time algorithm is a direct formula that calculates the answer.
2. O(log n) A logarithmic algorithm often halves the input size at each step. The running
time of such an algorithm is logarithmic, because log₂ n equals the number of times n
must be divided by 2 to get 1.
3. O(n) A linear algorithm goes through the input a constant number of times. This is
often the best possible time complexity, because it is usually necessary to access each
input element at least once before reporting the answer.
4. O(n log n) This time complexity often indicates that the algorithm sorts the input,
because the time complexity of efficient sorting algorithms is O(n log n). Another
possibility is that the algorithm uses a data structure where each operation takes
O(log n) time.
5. O(n²) A quadratic algorithm often contains two nested loops. It is possible to go
through all pairs of the input elements in O(n²) time.
6. O(n³) A cubic algorithm often contains three nested loops. It is possible to go
through all triplets of the input elements in O(n³) time.
7. O(2^n) This time complexity often indicates that the algorithm iterates through all
subsets of the input elements. For example, the subsets of {1,2,3} are ∅, {1}, {2},
{3}, {1,2}, {1,3}, {2,3} and {1,2,3}.
8. O(n!) This time complexity often indicates that the algorithm iterates through all
permutations of the input elements. For example, the permutations of {1,2,3} are
(1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2) and (3,2,1).
An algorithm is polynomial if its time complexity is at most O(n^k) where k is a
constant. All the above time complexities except O(2^n) and O(n!) are polynomial. In
practice, the constant k is usually small, and therefore a polynomial time complexity
roughly means that the algorithm is efficient.
I have also provided a graph of some commonly needed functions and their comparison.

2.3 Estimating efficiency

By calculating the time complexity of an algorithm, it is possible to check, before
implementing the algorithm, that it is efficient enough for the problem. The starting
point for estimations is the fact that a modern computer can perform some hundreds of
millions of operations in a second. For example, assume that the time limit for a
problem is one second and the input size is n = 10^5. If the time complexity is O(n²),
the algorithm will perform about (10^5)² = 10^10 operations. This should take at least
some tens of seconds, so the algorithm seems to be too slow for solving the problem.
On the other hand, given the input size, we can try to guess the required time
complexity of the algorithm that solves the problem. The following table contains some
useful estimates assuming a time limit of one second.
Input size      Required (maximum) time complexity

n ≤ 10          O(n!)
n ≤ 20          O(2^n)
n ≤ 500         O(n³)
n ≤ 5000        O(n²)
n ≤ 10^6        O(n log n) or O(n)
n is large      O(1) or O(log n)

For example, if the input size is n = 10^5, it is probably expected that the time
complexity of the algorithm is O(n) or O(n log n). This information makes it easier to
design the algorithm, because it rules out approaches that would yield an algorithm
with a worse time complexity. Still, it is important to remember that a time complexity
is only an estimate of efficiency, because it hides the constant factors. For example,
an algorithm that runs in O(n) time may perform n/2 or 5n operations. This has an
important effect on the actual running time of the algorithm.

Hope this helped.

Thank you, and Happy Coding!

~Competitive Programming Department


DPS Ruby Park
