Different Complexities With Suitable Examples

What is time complexity?

To recap, time complexity estimates how an algorithm performs regardless of the kind of machine it runs on. You can get the time complexity by "counting" the number of operations performed by your code. This time complexity is defined as a function of the input size n using Big-O notation, where n indicates the size of the input and O is the worst-case scenario growth rate function.

We use the Big-O notation to classify algorithms based on their running time or space (memory used) as the input grows. The O function gives the growth rate as a function of the input size n.

Before we dive in, here is the big O cheatsheet with the examples that we are going to cover in this post. 😉

Big O Notation | Name         | Example(s)
---------------|--------------|--------------------------------------------------
O(1)           | Constant     | Odd or even number; look-up table (on average)
O(log n)       | Logarithmic  | Finding an element in a sorted array with binary search
O(n)           | Linear       | Find the max element in an unsorted array; duplicate elements in an array with a hash map
O(n log n)     | Linearithmic | Sorting elements in an array with merge sort
O(n^2)         | Quadratic    | Duplicate elements in an array (naïve); sorting an array with bubble sort
O(n^3)         | Cubic        | 3-variables equation solver
O(2^n)         | Exponential  | Find all subsets
O(n!)          | Factorial    | Find all permutations of a given set/string

In algorithm analysis, complexities describe the relationship between the size of the input and the resources (such as time or space) required by an algorithm. Here are some common complexities and examples:

1. Constant Time (O(1)):
   - The algorithm's runtime is constant, regardless of the size of the input.
   - Example: Accessing an element in an array by index.
2. Logarithmic Time (O(log n)):
   - The algorithm's runtime grows logarithmically with the size of the input.
   - Example: Binary search in a sorted array.
3. Linear Time (O(n)):
   - The algorithm's runtime is directly proportional to the size of the input.
   - Example: Linear search in an unsorted array.
4. Linear Logarithmic Time (O(n log n)):
   - Commonly seen in efficient sorting algorithms.
   - Example: Merge Sort, Heap Sort, Quick Sort (average case).
5. Quadratic Time (O(n^2)):
   - The algorithm's runtime is proportional to the square of the size of the input.
   - Example: Bubble sort, insertion sort (worst case).
6. Cubic Time (O(n^3)):
   - The algorithm's runtime is proportional to the cube of the size of the input.
   - Example: Some matrix multiplication algorithms.
7. Exponential Time (O(2^n)):
   - The algorithm's runtime grows exponentially with the size of the input.
   - Example: The recursive calculation of Fibonacci numbers without memoization.
8. Factorial Time (O(n!)):
   - The algorithm's runtime grows factorially with the size of the input.
   - Example: Solving the traveling salesman problem using a brute-force approach.

It's important to note that complexities like O(1), O(log n), O(n), and O(n log n) are generally considered efficient, while higher complexities like O(n^2), O(2^n), and O(n!) are less efficient and can become impractical for large inputs. The choice of algorithm and its complexity class depends on the specific requirements and constraints of the problem at hand.
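To make these growth rates concrete, here is a small illustrative snippet (not from the original text; the numbers are simple arithmetic) that prints approximate operation counts for an input of size n = 10:

const n = 10;
console.log({
  'O(1)': 1,
  'O(log n)': Math.log2(n).toFixed(1),        // ≈ 3.3
  'O(n)': n,                                  // 10
  'O(n log n)': Math.round(n * Math.log2(n)), // ≈ 33
  'O(n^2)': n ** 2,                           // 100
  'O(2^n)': 2 ** n,                           // 1,024
  'O(n!)': 3628800,                           // 10! = 3,628,800
});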

O(1) - Constant time

O(1) describes algorithms that take the same amount of time to compute regardless of the input size.

For instance, if a function takes the same time to process 10 elements as 1 million items, then we say that it has a constant growth rate, or O(1). Let's see some cases.

Examples of constant runtime algorithms:

- Find if a number is even or odd.
- Check if an item in an array is null.
- Print the first element from a list.
- Find a value in a map.

For our discussion, we are going to implement the first and last example.

Odd or Even
Find if a number is odd or even.

1  function isEvenOrOdd(n) {
2    return n % 2 ? 'Odd' : 'Even';
3  }
4
5  console.log(isEvenOrOdd(10));    // => Even
6  console.log(isEvenOrOdd(10001)); // => Odd

Advanced note: you could also replace n % 2 with the bitwise AND operator: n & 1. If the first bit (LSB) is 1, the number is odd; otherwise, it's even.
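For illustration, here is a hedged sketch of that bitwise variant (the name isEvenOrOddBitwise is hypothetical, not from the original post):

function isEvenOrOddBitwise(n) {
  // n & 1 keeps only the least significant bit: 1 for odd, 0 for even
  return n & 1 ? 'Odd' : 'Even';
}

console.log(isEvenOrOddBitwise(10));    // => Even
console.log(isEvenOrOddBitwise(10001)); // => Odd

It still runs in O(1): a single bitwise operation regardless of the size of n.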

It doesn't matter if n is 10 or 10,001; it will execute line 2 exactly one time.

Do not be fooled by one-liners. They don’t always translate to constant times. You
have to be aware of how they are implemented.

If you have a method like Array.sort() or any other array or object method, you have to look into the implementation to determine its running time.

Primitive operations like sum, multiplication, subtraction, division, modulo, bit shift, etc., have a constant runtime. This can be shocking! So, let's go into detail about why they are constant time. If you use the schoolbook long multiplication algorithm, it would take O(n^2) to multiply two numbers. However, most programming languages limit numbers to a max value (e.g., in JS, Number.MAX_VALUE is 1.7976931348623157e+308). So, you cannot operate on numbers that yield a result greater than MAX_VALUE. Primitive operations are therefore bound to complete in a fixed number of instructions, O(1), or to overflow (in JS, producing Infinity).
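You can verify this behavior in a JS console (a quick sketch, not part of the original post):

console.log(Number.MAX_VALUE);     // => 1.7976931348623157e+308
console.log(Number.MAX_VALUE * 2); // => Infinity (overflow, not an error)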

This example was easy. Let’s do another one.

Look-up table
Given a string, find its word frequency data.

1  const dictionary = {the: 22038615, be: 12545825, and: 10741073, of: 10343885, a: 10144200, in: 6996437, to: 6332195 /* ... */};
2
3  function getWordFrequency(dictionary, word) {
4    return dictionary[word];
5  }
6
7  console.log(getWordFrequency(dictionary, 'the'));
8  console.log(getWordFrequency(dictionary, 'in'));

Again, we can be sure that even if the dictionary has 10 or 1 million words, it would still execute line 4 once to find the word. However, if we decided to store the dictionary as an array rather than a hash map, it would be a different story. In the next section, we are going to explore what's the running time to find an item in an array.

Only a hash table with a perfect hash function will have a worst-case runtime of O(1). The ideal hash function is not practical, so there will be some collisions and workarounds that lead to a worst-case runtime of O(n). Still, on average, the lookup time is O(1).
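As a quick hedged sketch of that difference (the names freqMap and freqEntries are hypothetical), compare the two storage choices:

// O(1) on average: the key is hashed and we jump straight to its bucket
const freqMap = new Map([['the', 22038615], ['be', 12545825], ['and', 10741073]]);
console.log(freqMap.get('and')); // => 10741073

// O(n) worst case: stored as an array of pairs, we must scan entry by entry
const freqEntries = [['the', 22038615], ['be', 12545825], ['and', 10741073]];
const entry = freqEntries.find(([word]) => word === 'and');
console.log(entry[1]); // => 10741073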

O(n) - Linear time

Linear running time algorithms are widespread. These algorithms imply that the program visits every element of the input.

Linear time complexity O(n) means that as the input grows, the algorithm takes proportionally longer to complete.

Examples of linear time algorithms:

- Get the max/min value in an array.
- Find a given element in a collection.
- Print all the values in a list.

Let's implement the first example.

The largest item in an unsorted array

Let's say you want to find the maximum value from an unsorted array.

1   function findMax(n) {
2     let max;
3     let counter = 0;
4
5     for (let i = 0; i < n.length; i++) {
6       counter++;
7       if (max === undefined || max < n[i]) {
8         max = n[i];
9       }
10    }
11
12    console.log(`n: ${n.length}, counter: ${counter}`);
13    return max;
14  }

How many operations will the findMax function do?

Well, it checks every element in n. If the current item is greater than max, it will do an assignment.

Notice that we added a counter to help us count how many times the inner block is executed.

If you compute the time complexity, it would be something like this:

- Line 2-3: 2 operations
- Line 5: a loop of size n
- Line 6-8: 3 operations inside the for-loop

So, this gets us 3n + 2.

Applying the Big O notation that we learned in the previous post, we only need the biggest order term, thus O(n).

We can verify this using our counter. If n has 3 elements:

findMax([3, 1, 2]);
// n: 3, counter: 3

or if n has 9 elements:

findMax([4, 5, 6, 1, 9, 2, 8, 3, 7]);
// n: 9, counter: 9

Now imagine that you have an array of one million items. Do you think it will take the same time? Of course not; it will take longer, in proportion to the size of the input. If we plot n against findMax's running time, we will get a graph like a linear equation.

O(n^2) - Quadratic time

A function with a quadratic time complexity has a growth rate of n^2. If the input is size 2, it will do 4 operations. If the input is size 8, it will do 64, and so on.

Here are some examples of quadratic algorithms:

- Check if a collection has duplicated values.
- Sorting items in a collection using bubble sort, insertion sort, or selection sort.
- Find all possible ordered pairs in an array (see the sketch after the bubble sort example).

Let's implement the first two.

Has duplicates
You want to find duplicate words in an array. A naïve solution would be the following:

1   function hasDuplicates(n) {
2     const duplicates = [];
3     let counter = 0; // debug
4
5     for (let outer = 0; outer < n.length; outer++) {
6       for (let inner = 0; inner < n.length; inner++) {
7         counter++; // debug
8
9         if (outer === inner) continue;
10
11        if (n[outer] === n[inner]) {
12          return true;
13        }
14      }
15    }
16
17    console.log(`n: ${n.length}, counter: ${counter}`); // debug
18    return false;
19  }

Time complexity analysis:

- Line 2-3: 2 operations
- Line 5-6: double-loop of size n, so n^2
- Line 7-13: ~3 operations inside the double-loop

We get 3n^2 + 2. Again, when doing asymptotic analysis, we drop all constants and leave the most important term: n^2. So, in big O notation, it would be O(n^2).

We are using a counter variable to help us verify. The hasDuplicates function has two loops. If we have an input of 4 words, it will execute the inner block 16 times. If we have 9, it will execute it 81 times, and so forth.

hasDuplicates([1, 2, 3, 4]);
// n: 4, counter: 16

and with n size 9:

hasDuplicates([1, 2, 3, 4, 5, 6, 7, 8, 9]);
// n: 9, counter: 81

Let’s see another example.

Bubble sort
We want to sort the elements in an array. One way to do this is using bubble sort as follows:

function sort(n) {
  for (let outer = 0; outer < n.length; outer++) {
    let outerElement = n[outer];

    for (let inner = outer + 1; inner < n.length; inner++) {
      let innerElement = n[inner];

      if (outerElement > innerElement) {
        // swap
        n[outer] = innerElement;
        n[inner] = outerElement;
        // update references
        outerElement = n[outer];
        innerElement = n[inner];
      }
    }
  }
  return n;
}

Also, you might notice that for a very big n, the time it takes to solve the problem increases a lot. Can you spot the relationship between nested loops and the running time? When a function has a single loop, it usually translates into a running time complexity of O(n). Now, this function has 2 nested loops and quadratic running time: O(n^2).
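The third example from the quadratic list above, finding all ordered pairs, follows the same nested-loop pattern. A minimal sketch (not implemented in the original post):

function getAllOrderedPairs(n) {
  const pairs = [];
  for (let i = 0; i < n.length; i++) {
    for (let j = 0; j < n.length; j++) {
      if (i !== j) pairs.push([n[i], n[j]]); // pair every element with every other
    }
  }
  return pairs;
}

console.log(getAllOrderedPairs([1, 2, 3]).length); // => 6, i.e. n * (n - 1)

Both loops visit all n elements, so the running time is O(n^2).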

O(n^c) - Polynomial time

Polynomial running time is represented as O(n^c), where c > 1. As you already saw, two nested loops usually translate to O(n^2), since the code has to go through the array twice in most cases. Are three nested loops cubic? If each one visits all elements, then yes!

Usually, we want to stay away from polynomial running times (quadratic, cubic, n^c, etc.) since they take much longer to compute as the input grows. However, they are not the worst.

Triple nested loops

Let's say you want to find the solutions for a multi-variable equation that looks like this:

3x + 9y + 8z = 79

This naïve program will give you all the solutions that satisfy the equation where x, y, and z < n.

function findXYZ(n) {
  const solutions = [];

  for (let x = 0; x < n; x++) {
    for (let y = 0; y < n; y++) {
      for (let z = 0; z < n; z++) {
        if (3 * x + 9 * y + 8 * z === 79) {
          solutions.push({ x, y, z });
        }
      }
    }
  }

  return solutions;
}

console.log(findXYZ(10)); // => [{x: 0, y: 7, z: 2}, ...]

This algorithm has a cubic running time: O(n^3).

Note: We could do a more efficient solution, but we did it this way to show an example of a cubic runtime.
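One possible speed-up, sketched here under the assumption that we only want integer solutions in [0, n): for fixed x and y the equation determines z, so the third loop can be replaced with arithmetic, dropping the runtime to O(n^2). The name findXYZFaster is hypothetical:

function findXYZFaster(n) {
  const solutions = [];
  for (let x = 0; x < n; x++) {
    for (let y = 0; y < n; y++) {
      // solve 3x + 9y + 8z = 79 for z directly
      const z = (79 - 3 * x - 9 * y) / 8;
      if (Number.isInteger(z) && z >= 0 && z < n) {
        solutions.push({ x, y, z });
      }
    }
  }
  return solutions;
}

console.log(findXYZFaster(10)); // => [{x: 0, y: 7, z: 2}, ...]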

O(log n) - Logarithmic time

Logarithmic time complexities usually apply to algorithms that divide problems in half every time. For instance, let's say that we want to look up a word in a dictionary. As you know, this book has every word sorted alphabetically. There are at least two ways to do it:

Algorithm A:

1. Start on the first page of the book and go word by word until you find what you are looking for.

Algorithm B:

1. Open the book in the middle and check the first word on it.
2. If the word that you are looking for comes later alphabetically, look in the right half. Otherwise, look in the left half.
3. Divide the remainder in half again, and repeat step #2 until you find the word you are looking for.

Which one is faster? Algorithm A goes word by word, O(n), while algorithm B splits the problem in half on each iteration, O(log n). This second algorithm is a binary search.

Binary search
Find the index of an element in a sorted array.

If we implement it (like Algorithm A) by going through all the elements in an array, the running time will be O(n). Can we do better? We can use the fact that the collection is already sorted and divide it in half as we look for the element in question.

1   function indexOf(array, element, offset = 0) {
2     // base case: an empty array means the element was not found
3     if (array.length === 0) { return -1; }
4     // split array in half
5     const half = parseInt(array.length / 2, 10);
6     const current = array[half];
7
8     if (current === element) {
9       return offset + half;
10    } else if (element > current) {
11      const right = array.slice(half + 1);
12      return indexOf(right, element, offset + half + 1);
13    } else {
14      const left = array.slice(0, half);
15      return indexOf(left, element, offset);
16    }
17  }
18
19  // Usage example with a list of names in ascending order:
20  const directory = ["Adrian", "Bella", "Charlotte", "Daniel", "Emma", "Hanna", "Isabella", "Jayden", "Kaylee", "Luke", "Mia", "Nora", "Olivia", "Paisley", "Riley", "Thomas", "Wyatt", "Xander", "Zoe"];
21  console.log(indexOf(directory, 'Hanna'));  // => 5
22  console.log(indexOf(directory, 'Adrian')); // => 0
23  console.log(indexOf(directory, 'Zoe'));    // => 18
Calculating the time complexity of indexOf is not as straightforward as in the previous examples. This function is recursive.

There are several ways to analyze recursive algorithms. For simplicity, we are going to use the Master Method.

Master Method for recursive algorithms

Finding the runtime of recursive algorithms is not as easy as counting operations. This method helps us determine the runtime of recursive algorithms. We are going to explain this solution using the indexOf function as an illustration.

When analyzing recursive algorithms, we care about these three things:

- The runtime of the work done outside the recursion (lines 5-6): O(1)
- The number of recursive calls the problem is divided into (line 12 or 15): 1 recursive call. Notice only one or the other will happen, never both.
- How much n is reduced on each recursive call (line 11 or 14): 1/2. Every recursive call cuts n in half.

1) The Master Method formula is the following:

T(n) = a T(n/b) + f(n)

where:

- T: time complexity function in terms of the input size n.
- n: the size of the input. duh? :)
- a: the number of sub-problems. For our case, we only split the problem into one subproblem. So, a = 1.
- b: the factor by which n is reduced. For our example, we divide n in half each time. Thus, b = 2.
- f(n): the running time outside the recursion. Since dividing by 2 is constant time, we have f(n) = O(1).

2) Once we know the values of a, b, and f(n), we can determine the runtime of the recursive part using this formula:

n^(log_b a)

This value will help us find which Master Method case we are solving.

For binary search, we have:

n^(log_b a) = n^(log_2 1) = n^0 = 1

3) Finally, we compare the recursion runtime from step 2) and the runtime f(n) from step 1). Based on that, we have the following cases:

Case 1: Most of the work is done in the recursion.

If n^(log_b a) > f(n), then the runtime is:

O(n^(log_b a))

Case 2: The runtime of the work done inside and outside the recursion is the same.

If n^(log_b a) === f(n), then the runtime is:

O(n^(log_b a) log(n))

Case 3: Most of the work is done outside the recursion.

If n^(log_b a) < f(n), then the runtime is:

O(f(n))
Now, let’s combine everything we learned here to get the
running time of our binary search function indexOf.

Master Method for Binary Search

The binary search algorithm splits n in half until a solution is found or the array is exhausted. So, using the Master Method:

T(n) = a T(n/b) + f(n)

1) Find a, b, and f(n) and replace them in the formula:

- a: the number of sub-problems. For our example, we only split the problem into one subproblem. So a = 1.
- b: the factor by which n is reduced. For our case, we divide n in half each time. Thus, b = 2.
- f(n): the running time outside the recursion: O(1).

Thus,

T(n) = T(n/2) + O(1)

2) Compare the runtime executed inside and outside the recursion:

- Runtime of the work done outside the recursion: f(n). E.g., O(1).
- Runtime of the work done inside the recursion, given by the formula n^(log_b a). E.g., O(n^(log_2 1)) = O(n^0) = O(1).

3) Finally, get the runtime. Based on the comparison of the expressions from the previous steps, find the case it matches.

As we saw in the previous step, the work outside and inside the recursion has the same runtime, so we are in case 2:

O(n^(log_b a) log(n))

Making the substitution, we get:

O(n^(log_2 1) log(n))

O(n^0 log(n))

O(log(n)) 👈 this is the running time of a binary search

O(n log n) - Linearithmic

Linearithmic time complexity is slightly slower than linear. However, it's still much better than quadratic (you will see a graph at the very end of the post).

Examples of linearithmic algorithms:

- Efficient sorting algorithms like merge sort, quicksort, and others.

Mergesort
What's the best way to sort an array? Earlier, we proposed a solution using bubble sort that has a time complexity of O(n^2). Can we do better?

We can use an algorithm called mergesort to improve it. This is how mergesort works:

1. We divide the array recursively until the chunks have two or fewer elements.
2. We know how to sort two items, so we sort them (base case).
3. The final step is merging: we merge the sorted halves by taking elements one by one from each array, so the result stays in ascending order.

Here's the code for merge sort:

1   /**
2    * Sort array in asc order using merge-sort
3    * @example
4    *    sort([3, 2, 1]) => [1, 2, 3]
5    *    sort([3]) => [3]
6    *    sort([3, 2]) => [2, 3]
7    * @param {array} array
8    */
9   function sort(array = []) {
10    const size = array.length;
11    // base case
12    if (size < 2) {
13      return array;
14    }
15    if (size === 2) {
16      return array[0] > array[1] ? [array[1], array[0]] : array;
17    }
18    // split and merge
19    const mid = parseInt(size / 2, 10);
20    return merge(sort(array.slice(0, mid)), sort(array.slice(mid)));
21  }
22
23  /**
24   * Merge two arrays in asc order
25   * @example
26   *    merge([2,5,9], [1,6,7]) => [1, 2, 5, 6, 7, 9]
27   * @param {array} array1
28   * @param {array} array2
29   * @returns {array} merged arrays in asc order
30   */
31  function merge(array1 = [], array2 = []) {
32    const merged = [];
33    let array1Index = 0;
34    let array2Index = 0;
35    // merge elements of array1 and array2 in asc order. Runtime: O(a + b)
36    while (array1Index < array1.length || array2Index < array2.length) {
37      if (array1Index >= array1.length || array1[array1Index] > array2[array2Index]) {
38        merged.push(array2[array2Index]);
39        array2Index += 1;
40      } else {
41        merged.push(array1[array1Index]);
42        array1Index += 1;
43      }
44    }
45    return merged;
46  }

As you can see, it has two functions, sort and merge. merge is an auxiliary function that runs once through both collections (array1 and array2), so its running time is O(n). Let's apply the Master Method to find the running time.

Master Method for Mergesort

We are going to apply the Master Method that we explained above to find the runtime:

1) Let's find the values of T(n) = a T(n/b) + f(n):

- a: The number of sub-problems is 2 (line 20). So, a = 2.
- b: Each of the sub-problems divides n in half. So, b = 2.
- f(n): The work done outside the recursion is the function merge, which has a runtime of O(n) since it visits all the elements in the given arrays.

Substituting the values:

T(n) = 2 T(n/2) + O(n)

2) Let's find the work done in the recursion: n^(log_b a).

n^(log_2 2) = n^1 = n

3) Finally, we can see that the recursion runtime from step 2) is O(n) and the non-recursion runtime is also O(n). So, we have case 2: O(n^(log_b a) log(n))

O(n^(log_2 2) log(n))

O(n^1 log(n))

O(n log(n)) 👈 this is the running time of merge sort

O(2^n) - Exponential time

Exponential (base 2) running time means that the calculations performed by an algorithm double every time the input grows.

Examples of exponential runtime algorithms:

- Power Set: finding all the subsets of a set.
- Fibonacci (see the sketch after this list).
- Travelling salesman problem solved with dynamic programming.
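The Fibonacci example is not implemented later in this post, so here is a quick hedged sketch of why the naive recursive version is exponential: each call branches into two more calls, roughly doubling the work per level.

function fib(n) {
  if (n < 2) return n;            // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2); // two recursive calls => O(2^n)
}

console.log(fib(10)); // => 55; fib(50) would take a very long time

With memoization, the same computation drops to O(n), which is why the earlier list specified "without memoization."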

Power Set
To understand the power set, let's imagine you are buying pizza. The store has many toppings that you can choose from, like pepperoni, mushrooms, bacon, and pineapple. Let's call each topping A, B, C, D. What are your choices? You can select no topping (you are on a diet ;), you can choose one topping, or two, or three, or all of them, and so on. The power set gives you all the possibilities (BTW, there are 16 with 4 toppings, as you will see later).

The power set means finding all distinct subsets of a given set. Let's do some examples to try to come up with an algorithm to solve it:

powerset('')   // => ['']
powerset('a')  // => ['', 'a']
powerset('ab') // => ['', 'a', 'b', 'ab']

Did you notice any pattern?

- The first case returns an empty element.
- The second case returns the empty element + the 1st element.
- The 3rd case returns precisely the results of the 2nd case + the same array with the 2nd element b appended to it.

What if you want to find the subsets of abc? Well, it would be precisely the subsets of ab, plus the subsets of ab again with c appended at the end of each element.

As you noticed, every time the input gets longer, the output is twice as long as the previous one. Let's code it up:

function powerset(n = '') {
  const array = Array.from(n);
  const base = [''];

  const results = array.reduce((previous, element) => {
    const previousPlusElement = previous.map(el => {
      return `${el}${element}`;
    });
    return previous.concat(previousPlusElement);
  }, base);

  return results;
}

If we run that function for a couple of cases, we will get:

powerset('')      // ...
// n = 0, f(n) = 1;
powerset('a')     // , a...
// n = 1, f(n) = 2;
powerset('ab')    // , a, b, ab...
// n = 2, f(n) = 4;
powerset('abc')   // , a, b, ab, c, ac, bc, abc...
// n = 3, f(n) = 8;
powerset('abcd')  // , a, b, ab, c, ac, bc, abc, d, ad, bd, abd, cd, acd, bcd...
// n = 4, f(n) = 16;
powerset('abcde') // , a, b, ab, c, ac, bc, abc, d, ad, bd, abd, cd, acd, bcd...
// n = 5, f(n) = 32;

As expected, if you plot n and f(n), you will notice that it grows exactly like the function 2^n. This algorithm has a running time of O(2^n).

Note: You should avoid functions with exponential running times (if possible) since they don't scale well. The time it takes to process the output doubles with every additional input element. But exponential running time is not the worst yet; others go even slower. Let's see one more example in the next section.

O(n!) - Factorial time

The factorial of a number is the multiplication of all positive integers less than or equal to it. For instance:

5! = 5 x 4 x 3 x 2 x 1 = 120

It grows pretty quickly:

20! = 2,432,902,008,176,640,000

As you might guess, you want to stay away, if possible, from algorithms that have this running time!

Examples of O(n!) factorial runtime algorithms:

- Permutations of a string.
- Solving the traveling salesman problem with a brute-force search.

Let’s solve the first example.

Permutations
Write a function that computes all the different words that can be formed given a string. E.g.:

getPermutations('a')   // => ['a']
getPermutations('ab')  // => ['ab', 'ba']
getPermutations('abc') // => ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']

How would you solve that?

A straightforward way is to check if the string has a length of 1; if so, return that string, since you can't arrange it differently.

For strings with a length bigger than 1, we could use recursion to divide the problem into smaller problems until we get to the length-1 case. We can take out the first character and solve the problem for the remainder of the string until we have a length of 1.

function getPermutations(string, prefix = '') {
  if (string.length <= 1) {
    return [prefix + string];
  }

  return Array.from(string).reduce((result, char, index) => {
    const remainder = string.slice(0, index) + string.slice(index + 1);
    result = result.concat(getPermutations(remainder, prefix + char));
    return result;
  }, []);
}

If we print out the output, it will look something like this:

getPermutations('ab')    // ab, ba...
// n = 2, f(n) = 2;
getPermutations('abc')   // abc, acb, bac, bca, cab, cba...
// n = 3, f(n) = 6;
getPermutations('abcd')  // abcd, abdc, acbd, acdb, adbc, adcb, bacd...
// n = 4, f(n) = 24;
getPermutations('abcde') // abcde, abced, abdce, abdec, abecd, abedc, acbde...
// n = 5, f(n) = 120;

I tried it with a string with a length of 10. It took around 8 seconds!

time node ./lib/permutations.js
# getPermutations('abcdefghij') // => abcdefghij, abcdefghji, abcdefgihj, abcdefgijh, abcdefgjhi, abcdefgjih, abcdefhgij...
# // n = 10, f(n) = 3,628,800;
# ./lib/permutations.js  8.06s user 0.63s system 101% cpu 8.562 total

I have a little homework for you:

Can you try it with a string of 11 characters? ;) Comment below what happened to your computer!

All running complexities graphs

We explored the most common algorithm running times with one or two examples each! They should give you an idea of how to calculate your running times when developing your projects. Below you can find a chart with a graph of all the time complexities that we covered:
Mind your time complexity!
We have discussed Asymptotic Analysis, Worst, Average and Best Cases, and Asymptotic Notations in previous posts. In this post, the analysis of iterative programs with simple examples is discussed.
1) O(1): The time complexity of a function (or set of statements) is considered O(1) if it doesn't contain a loop, recursion, or a call to any other non-constant-time function.

// set of non-recursive and non-loop statements

For example, the swap() function has O(1) time complexity.
A loop or recursion that runs a constant number of times is also considered O(1). For example, the following loop is O(1):

// Here c is a constant
for (int i = 1; i <= c; i++) {
  // some O(1) expressions
}
2) O(n): The time complexity of a loop is considered O(n) if the loop variable is incremented/decremented by a constant amount. For example, the following loops have O(n) time complexity:

// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
  // some O(1) expressions
}

for (int i = n; i > 0; i -= c) {
  // some O(1) expressions
}
3) O(n^c): The time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, the following sample loops have O(n^2) time complexity:

for (int i = 1; i <= n; i += c) {
  for (int j = 1; j <= n; j += c) {
    // some O(1) expressions
  }
}

for (int i = n; i > 0; i -= c) {
  for (int j = i + 1; j <= n; j += c) {
    // some O(1) expressions
  }
}

For example, selection sort and insertion sort have O(n^2) time complexity.
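As a hedged illustration of that O(n^2) pattern (a sketch in JavaScript to match the earlier examples; not part of the original text), here is insertion sort with its nested shifting loop:

function insertionSort(array) {
  for (let i = 1; i < array.length; i++) {
    const current = array[i];
    let j = i - 1;
    // shift larger elements one slot to the right: up to i steps per element
    while (j >= 0 && array[j] > current) {
      array[j + 1] = array[j];
      j--;
    }
    array[j + 1] = current;
  }
  return array;
}

console.log(insertionSort([4, 1, 3, 2])); // => [1, 2, 3, 4]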
4) O(log n): The time complexity of a loop is considered O(log n) if the loop variable is divided/multiplied by a constant amount.

for (int i = 1; i <= n; i *= c) {
  // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
  // some O(1) expressions
}

For example, binary search (refer to the iterative implementation) has O(log n) time complexity. Let us see mathematically how it is O(log n). The series that we get in the first loop is 1, c, c^2, c^3, ..., c^k. If we set k equal to log_c(n), we get c^(log_c n), which is n.
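For reference, a hedged sketch of the iterative binary search mentioned above (in JavaScript, to match the earlier examples): the search window halves on every pass, so the loop body runs O(log n) times.

function binarySearchIterative(array, element) {
  let low = 0;
  let high = array.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (array[mid] === element) return mid;
    if (array[mid] < element) low = mid + 1; // discard the left half
    else high = mid - 1;                     // discard the right half
  }
  return -1; // not found
}

console.log(binarySearchIterative([1, 3, 5, 7, 9], 7)); // => 3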

5) O(log log n): The time complexity of a loop is considered O(log log n) if the loop variable is reduced/increased exponentially by a constant amount.

// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
  // some O(1) expressions
}
// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 1; i = fun(i)) {
  // some O(1) expressions
}
How to combine time complexities of consecutive loops?
When there are consecutive loops, we calculate the time complexity as the sum of the time complexities of the individual loops.

for (int i = 1; i <= m; i += c) {
  // some O(1) expressions
}
for (int i = 1; i <= n; i += c) {
  // some O(1) expressions
}

The time complexity of the above code is O(m) + O(n), which is O(m + n). If m == n, the time complexity becomes O(2n), which is O(n).
How to calculate time complexity when there are many if-else statements inside loops?
As discussed earlier, worst-case time complexity is the most useful among best, average, and worst. Therefore, we need to consider the worst case: we evaluate the situation where the values in the if-else conditions cause the maximum number of statements to be executed.
For example, consider the linear search function, where the worst case happens when the element is present at the end or not present at all.
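A minimal sketch of that linear search (in JavaScript, matching the earlier examples; the original text does not include an implementation):

function linearSearch(array, element) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === element) return i; // best case: found near the front
  }
  return -1; // worst case: the whole array was scanned
}

console.log(linearSearch([5, 8, 2, 9], 9)); // => 3 (element at the end: n comparisons)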
When the code is too complex to consider all if-else cases, we can get an upper bound by ignoring if-else and other complex control statements.
