
How to find time complexity of an algorithm?
• Finding out the time complexity of your code can help you develop better programs
that run faster.
• In general, you can determine the time complexity by analyzing the program’s
statements (go line by line).
• However, you have to be mindful of how the statements are arranged: they may sit inside a loop, make function calls, or even use recursion.
• All these factors affect the runtime of your code.

Big O Notation
• Big O notation is a framework to analyze and compare algorithms.
• It measures the amount of work the CPU has to do (time complexity) as the input size grows (towards infinity).
• Big O = Big Order function. Drop constants and lower-order terms. E.g., O(3n^2 + 10n + 10) becomes O(n^2).

• Big O notation cares about the worst-case scenario. E.g., for some sorting algorithms, the worst case is an array whose elements are in reverse order.
If you have a function that takes an array as input and performs the same operations no matter how many elements the collection has, you have a constant runtime O(1).
If the CPU's work grows proportionally to the input array's size, you have a linear runtime O(n).
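As a quick illustration, here's a sketch of both cases (these function names are ours, not from the original):

// Constant time O(1): same work regardless of the array's size.
function firstElement(array) {
  return array[0];
}

// Linear time O(n): the work grows proportionally with the array's size.
function sumAll(array) {
  let total = 0;
  for (let i = 0; i < array.length; i++) {
    total += array[i];
  }
  return total;
}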
Sequential Statements
If we have statements with basic operations like comparisons, assignments, or reading a variable, we can assume each one takes constant time, O(1).

statement1;
statement2;
...
statementN;

If we calculate the total time complexity, we get:

total = time(statement1) + time(statement2) + ... + time(statementN)

Let's use T(n) as the total time as a function of the input size n, and t as the time complexity taken by a statement or group of statements.

T(n) = t(statement1) + t(statement2) + ... + t(statementN);

If each statement executes a basic operation, we can say it takes constant time O(1). As
long as you have a fixed number of operations, it will be constant time, even if we have 1
or 100 of these statements.

Example:

Let’s say we can compute the square sum of 3 numbers.

function squareSum(a, b, c) {
  const sa = a * a;
  const sb = b * b;
  const sc = c * c;
  const sum = sa + sb + sc;
  return sum;
}
As you can see, each statement is a basic operation (math and assignment). Each line
takes constant time O(1).
If we add up all the statements' times, the total is still O(1). It doesn't matter whether the numbers are 0 or 9,007,199,254,740,991; the function performs the same number of operations.
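For instance, squareSum(1, 2, 3) returns 14 (1 + 4 + 9) using exactly the same four operations as squareSum(1000, 2000, 3000).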

Conditional Statements
Remember that we care about the worst-case with Big O so that we will take the
maximum possible runtime.
if (isValid) {
  statement1;
  statement2;
} else {
  statement3;
}

Since we are after the worst case, we take whichever branch is larger:

T(n) = Math.max([t(statement1) + t(statement2)], [t(statement3)])

Example:
if (isValid) {
  array.sort();
  return true;
} else {
  return false;
}

The if block has a runtime of O(n log n) (that’s common runtime for efficient sorting
algorithms). The else block has a runtime of O(1).

So, we have the following:

O(max([n log n], [1])) => O(n log n)

Since n log n has a higher order than the constant else branch, we can express the time complexity as O(n log n).

Loop Statements
Another prevalent scenario is loops, like for-loops or while-loops.
Linear Time Loops

For any loop, we find out the runtime of the block inside them and multiply it by the
number of times the program will repeat the loop.
for (let i = 0; i < array.length; i++) {
  statement1;
  statement2;
}

For this example, the loop executes array.length times. Assuming n is the length of the array, we get the following:

T(n) = n * [ t(statement1) + t(statement2) ]

All loops that grow proportionally to the input size have a linear time complexity O(n).
If you loop through only half of the array, that's still O(n). Remember that we drop the constants, so (1/2)n => O(n).
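For example, a loop like this (a sketch of the half-array case) still grows with n:

for (let i = 0; i < array.length / 2; i++) {
  statement1;
}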

Constant-Time Loops

However, if a constant number bounds the loop, say 4 (or even 400), then the runtime is constant: O(4) -> O(1).
See the following example.
for (let i = 0; i < 4; i++) {
  statement1;
  statement2;
}

That code is O(1) because it no longer depends on the input size. It will always run statements 1 and 2 four times.

Logarithmic Time Loops

Consider the following code, where we divide an array in half on each iteration (binary
search):
function fn1(array, target, low = 0, high = array.length - 1) {
  let mid;
  while (low <= high) {
    // Use Math.floor so mid is a valid integer index.
    mid = Math.floor((low + high) / 2);
    if (target < array[mid]) high = mid - 1;
    else if (target > array[mid]) low = mid + 1;
    else break;
  }
  return mid;
}

This function divides the array at its middle point on each iteration. The while loop executes as many times as array.length can be divided in half, which we can calculate with the log function.
E.g., if the array's length is 8, the while loop will execute 3 times, because log2(8) = 3.
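You can check this with a small helper (our own sketch, not part of the original code) that counts how many halvings it takes to get from n down to 1:

function countHalvings(n) {
  let count = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    count++;
  }
  return count;
}

console.log(countHalvings(8)); // 3, matching log2(8)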

Nested Loop Statements


Sometimes you might need to visit all the elements on a 2D array (grid/table). For such
cases, you might find yourself with two nested loops.
for (let i = 0; i < n; i++) {
  statement1;
  for (let j = 0; j < m; j++) {
    statement2;
    statement3;
  }
}

For this case, you would have something like this:


T(n) = n * [t(statement1) + m * t(statement2...3)]

Assuming the statements from 1 to 3 are O(1), we would have a runtime of O(n * m).

If instead of m, you had to iterate on n again, then it would be O(n^2). Another typical case
is having a function inside a loop.
Let’s see how to deal with that next.

Function Call Statements


When you calculate your programs’ time complexity and invoke a function, you need to be
aware of its runtime. If you created the function, that might be a simple inspection of the
implementation.
If you are using a library function, you might need to check out the language/library
documentation or source code.
Let’s say you have the following program:
for (let i = 0; i < n; i++) {
fn1();
for (let j = 0; j < n; j++) {
fn2();
for (let k = 0; k < n; k++) {
fn3();
}
}
}

Depending on the runtime of fn1, fn2, and fn3, you would have different runtimes.

• If they all are constant O(1), then the final runtime would be O(n^3).
• However, if only fn1 and fn2 are constant and fn3 has a runtime of O(n^2), this
program will have a runtime of O(n^5).
• Another way to look at it: if fn3 contains two nested loops and you replace the invocation with its actual implementation, you would have five nested loops.

In general, you will have something like this:

T(n) = n * [ t(fn1()) + n * [ t(fn2()) + n * [ t(fn3()) ] ] ]
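To make that substitution concrete, here's a sketch of the inlined version, assuming (as in the bullet above) that fn3's hypothetical body is two nested loops over n:

for (let i = 0; i < n; i++) {
  fn1();
  for (let j = 0; j < n; j++) {
    fn2();
    for (let k = 0; k < n; k++) {
      // fn3() replaced by its assumed O(n^2) implementation:
      for (let a = 0; a < n; a++) {
        for (let b = 0; b < n; b++) {
          // constant-time work
        }
      }
    }
  }
}

Five nested loops over n gives O(n^5).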

Recursive Function Statements


Analyzing the runtime of recursive functions might get a little tricky. There are different
ways to do it. One intuitive way is to explore the recursion tree.
Let’s say that we have the following program:

function fn(n) {
  if (n < 0) return 0;
  if (n < 2) return n;
  return fn(n - 1) + fn(n - 2);
}
You can represent each function invocation as a bubble (or node).

Let’s do some examples:

• When your n = 2, you have 3 function calls. First fn(2) which in turn
calls fn(1) and fn(0).
• For n = 3, you have 5 function calls. First fn(3), which in turn
calls fn(2) and fn(1) and so on.
• For n = 4, you have 9 function calls. First fn(4), which in turn
calls fn(3) and fn(2) and so on.

Since it's a binary tree, we can sense that every time n increases by one, we would have to perform at most double the operations.

Here's the graphical representation of the 3 examples:

[Figure: recursion trees for fn(2), fn(3), and fn(4)]
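To verify those call counts, here's an instrumented sketch (the calls counter is our own addition):

let calls = 0;

function fn(n) {
  calls++;
  if (n < 0) return 0;
  if (n < 2) return n;
  return fn(n - 1) + fn(n - 2);
}

calls = 0; fn(2); console.log(calls); // 3
calls = 0; fn(3); console.log(calls); // 5
calls = 0; fn(4); console.log(calls); // 9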

If you take a look at the generated tree of calls, the leftmost nodes go down in descending order: fn(4), fn(3), fn(2), fn(1), which means that the height of the tree (the number of levels) will be n.
The total number of calls in a complete binary tree is 2^n - 1.

As you can see in fn(4), the tree is not complete. The last level has only two nodes, fn(1) and fn(0), while a complete tree would have 8 nodes. For fn(4), a complete tree would mean 2^4 - 1 = 15 calls, more than the 9 we counted.
But still, we can say the runtime is exponential, O(2^n). It won't get any worse, because 2^n is the upper bound.
