Time Complexity With Python
In this post, we will understand a little more about time complexity, Big-O notation and
why we need to be concerned about it when developing algorithms.
The examples shown in this story were developed in Python, so it will be easier to follow if you have at least a basic knowledge of Python, but this is not a prerequisite.
Computational Complexity
Computational complexity is a field of computer science that analyzes algorithms based on the amount of resources required to run them. The amount of required resources varies based on the input size, so the complexity is generally expressed as a function of n, where n is the size of the input.
It is important to note that when analyzing an algorithm we can consider the time
complexity and space complexity. The space complexity is basically the amount of
memory space required to solve a problem in relation to the input size. Even though the
space complexity is important when analyzing an algorithm, in this story we will focus
only on the time complexity.
Time Complexity
As you’re reading this story right now, you may have an idea about what is time
complexity, but to make sure we’re all on the same page, let’s start understanding what
time complexity means with a short description from Wikipedia.
In computer science, the time complexity is the computational complexity that describes the
amount of time it takes to run an algorithm. Time complexity is commonly estimated by
counting the number of elementary operations performed by the algorithm, supposing that
each elementary operation takes a fixed amount of time to perform.
When analyzing the time complexity of an algorithm we may find three cases: best-case, average-case and worst-case. Let's understand what each one means, using a simple search in a list as our example.
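As a reference, consider a sequential search over a small list. The list and function below are only a minimal sketch of such an example (the exact values and function name are assumptions):

# assumed sample list: 1 is the first value and 8 is the last
data = [1, 2, 3, 4, 5, 6, 7, 8]

def search(data, value):
    # scan the list from the start until the value is found
    for index, element in enumerate(data):
        if element == value:
            return index
    return -1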
• best-case: this is the complexity of solving the problem for the best input. In our
example, the best case would be to search for the value 1. Since this is the first value
of the list, it would be found in the first iteration.
• average-case: this is the average complexity of solving the problem. This complexity
is defined with respect to the distribution of the values in the input data. Maybe this
is not the best example but, based on our sample, we could say that the average-case
would be when we’re searching for some value in the “middle” of the list, for
example, the value 2.
• worst-case: this is the complexity of solving the problem for the worst input of size
n. In our example, the worst-case would be to search for the value 8, which is the
last element from the list.
Usually, when describing the time complexity of an algorithm, we are talking about the
worst-case.
Big-O Notation
Big-O notation, sometimes called “asymptotic notation”, is a mathematical notation
that describes the limiting behavior of a function when the argument tends towards a
particular value or infinity.
In computer science, Big-O notation is used to classify algorithms according to how their
running time or space requirements grow as the input size (n) grows. This notation
characterizes functions according to their growth rates: different functions with the
same growth rate may be represented using the same O notation.
Let’s see some common time complexities described in the Big-O notation.
╔══════════════════╦═════════════════╗
║ Name ║ Time Complexity ║
╠══════════════════╬═════════════════╣
║ Constant Time ║ O(1) ║
╠══════════════════╬═════════════════╣
║ Logarithmic Time ║ O(log n) ║
╠══════════════════╬═════════════════╣
║ Linear Time ║ O(n) ║
╠══════════════════╬═════════════════╣
║ Quasilinear Time ║ O(n log n) ║
╠══════════════════╬═════════════════╣
║ Quadratic Time ║ O(n^2) ║
╠══════════════════╬═════════════════╣
║ Exponential Time ║ O(2^n) ║
╠══════════════════╬═════════════════╣
║ Factorial Time ║ O(n!) ║
╚══════════════════╩═════════════════╝
Note that we will focus our study on these common time complexities, but there are other time complexities out there which you can study later.
As already said, we generally use the Big-O notation to describe the time complexity of
algorithms. There’s a lot of math involved in the formal definition of the notation, but
informally we can assume that the Big-O notation gives us the algorithm’s approximate
run time in the worst case. When using the Big-O notation, we describe the algorithm’s
efficiency based on the increasing size of the input data (n). For example, if the input is a
string, the n will be the length of the string. If it is a list, the n will be the length of the
list and so on.
Now, let’s go through each one of these common time complexities and see some
examples of algorithms. Note that I tried to follow the following approach: present a
little description, show a simple and understandable example and show a more complex
example (usually from a real-world problem).
Time Complexities
Constant Time — O(1)
An algorithm is said to have a constant time when it is not dependent on the input data
(n). No matter the size of the input data, the running time will always be the same. For
example:
if a > b:
    return True
else:
    return False
Now, let’s take a look at the function get_first which returns the first element of a list:
def get_first(data):
    return data[0]

if __name__ == '__main__':
    data = [1, 2, 9, 8, 3, 4, 7, 6, 5]
    print(get_first(data))
Independently of the input data size, it will always have the same running time since it
only gets the first value from the list.
An algorithm with constant time complexity is excellent since we don’t need to worry
about the input size.
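Logarithmic Time — O(log n)
An algorithm is said to have a logarithmic time complexity when it reduces the size of the input data in each step. The classic example is binary search on a sorted list. The sketch below is a minimal iterative version written to be consistent with the steps listed after it (the exact implementation is an assumption):

def binary_search(data, value):
    # assumes data is sorted in ascending order
    left = 0
    right = len(data)
    while left < right:
        middle = (left + right) // 2
        if value < data[middle]:
            right = middle          # keep searching the left half
        elif value > data[middle]:
            left = middle + 1       # keep searching the right half
        else:
            return middle           # value found, return its index
    raise ValueError('Value not found in the list')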
if __name__ == '__main__':
    data = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    print(binary_search(data, 8))
• If the searched value is lower than the value in the middle of the list, set a new right boundary.
• If the searched value is higher than the value in the middle of the list, set a new left boundary.
• If the searched value is equal to the value in the middle of the list, return the middle (the index).
• Repeat the steps above until the value is found or the left boundary is equal to or higher than the right boundary.
It is important to understand that an algorithm that must access all elements of its input
data cannot take logarithmic time, as the time taken for reading input of size n is of the
order of n.
Linear Time — O(n)
An algorithm is said to have a linear time complexity when the running time increases at most linearly with the size of the input data. This is the best possible time complexity when the algorithm must examine all values in the input data. For example:
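A minimal illustration is a single pass over the input, performing one operation for each value (the list here is just a sample):

data = [1, 2, 9, 8, 3, 4, 7, 6, 5]
for value in data:
    print(value)   # one operation per element: O(n)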
Let’s take a look at the example of a linear search, where we need to find the position of
an element in an unsorted list:
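The linear_search definition below is a minimal sketch of that idea (the exact implementation is an assumption):

def linear_search(data, value):
    # look at every position until the value is found
    for index in range(len(data)):
        if value == data[index]:
            return index
    raise ValueError('Value not found in the list')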
if __name__ == '__main__':
    data = [1, 2, 9, 8, 3, 4, 7, 6, 5]
    print(linear_search(data, 7))
Note that in this example, we need to look at all values in the list to find the value we are
looking for.
Quasilinear Time — O(n log n)
An algorithm is said to have a quasilinear time complexity when it performs a logarithmic-time operation (O(log n)) for each value in the input data (O(n)). For example: for each value in data1 (O(n)), use binary search (O(log n)) to look for the same value in data2.
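A small sketch of that idea, reusing the binary_search function from above (data1 and data2 are just assumed sample lists, and data2 must be sorted):

data1 = [9, 1, 7, 6, 2, 8, 5, 3, 4, 0]
data2 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

for value in data1:                        # O(n)
    index = binary_search(data2, value)    # O(log n) for each lookup
    print(value, index)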
Another, more complex example can be found in the Mergesort algorithm. Mergesort is an efficient, general-purpose, comparison-based sorting algorithm which has a quasilinear time complexity. Let's see an example:
def merge_sort(data):
    if len(data) <= 1:
        return

    mid = len(data) // 2
    left_data = data[:mid]
    right_data = data[mid:]

    merge_sort(left_data)
    merge_sort(right_data)

    # merge the two sorted halves back into data in order
    left_index = right_index = data_index = 0
    while left_index < len(left_data) and right_index < len(right_data):
        if left_data[left_index] < right_data[right_index]:
            data[data_index] = left_data[left_index]
            left_index += 1
        else:
            data[data_index] = right_data[right_index]
            right_index += 1
        data_index += 1
    # copy any values left over in either half
    data[data_index:] = left_data[left_index:] + right_data[right_index:]
if __name__ == '__main__':
    data = [9, 1, 7, 6, 2, 8, 5, 3, 4, 0]
    merge_sort(data)
    print(data)
The following image exemplifies the steps taken by the mergesort algorithm.
Quadratic Time — O(n²)
An algorithm is said to have a quadratic time complexity when it needs to perform a linear-time operation for each value in the input data, for example:

for x in data:
    for y in data:
        print(x, y)
Bubble sort is a great example of quadratic time complexity, since each value needs to be compared with all the other values in the list. Let's see an example:
def bubble_sort(data):
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(data)-1):
            if data[i] > data[i+1]:
                # swap adjacent values that are out of order
                data[i], data[i+1] = data[i+1], data[i]
                swapped = True

if __name__ == '__main__':
    data = [9, 1, 7, 6, 2, 8, 5, 3, 4, 0]
    bubble_sort(data)
    print(data)
Exponential Time — O(2^n)
An algorithm is said to have an exponential time complexity when the number of operations roughly doubles with each addition to the size of the input data. A classic example is the recursive calculation of Fibonacci numbers:

def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
If you don’t know what a recursive function is, let’s clarify it quickly: a recursive function
may be described as a function that calls itself in specific conditions. As you may have
noticed, the time complexity of recursive functions is a little harder to define since it
depends on how many times the function is called and the time complexity of a single
function call.
It makes more sense when we look at the recursion tree. The following recursion tree
was generated by the Fibonacci algorithm using n = 4:
Note that it will call itself until it reaches the leaves. When reaching the leaves it returns
the value itself.
Now, look how the recursion tree grows just by increasing n to 6:
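Another way to see this growth is to count how many times the function is called. The counter below is just an illustrative sketch added to the fibonacci function, not part of the original algorithm:

call_count = 0

def fibonacci(n):
    global call_count
    call_count += 1   # count every call, including the recursive ones
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

fibonacci(4)
print(call_count)   # 9 calls for n = 4

call_count = 0
fibonacci(6)
print(call_count)   # 25 calls for n = 6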
You can find a more complete explanation about the time complexity of the recursive
Fibonacci algorithm here on StackOverflow.
Factorial — O(n!)
An algorithm is said to have a factorial time complexity when it grows in a factorial way
based on the size of the input data, for example:
2! = 2 x 1 = 2
3! = 3 x 2 x 1 = 6
4! = 4 x 3 x 2 x 1 = 24
5! = 5 x 4 x 3 x 2 x 1 = 120
6! = 6 x 5 x 4 x 3 x 2 x 1 = 720
7! = 7 x 6 x 5 x 4 x 3 x 2 x 1 = 5,040
8! = 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = 40,320
As you can see, it grows very fast, even for a small input.
A great example of an algorithm which has a factorial time complexity is Heap's permutation algorithm, which is used to generate all possible permutations of n objects.
According to Wikipedia:
Heap found a systematic method for choosing at each step a pair of elements to switch, in
order to produce every possible permutation of these elements exactly once.
def heap_permutation(data, n):
    if n == 1:
        print(data)
        return

    for i in range(n):
        heap_permutation(data, n - 1)
        if n % 2 == 0:
            data[i], data[n-1] = data[n-1], data[i]
        else:
            data[0], data[n-1] = data[n-1], data[0]

if __name__ == '__main__':
    data = [1, 2, 3]
    heap_permutation(data, len(data))

The output will be:
[1, 2, 3]
[2, 1, 3]
[3, 1, 2]
[1, 3, 2]
[2, 3, 1]
[3, 2, 1]
Note that it will grow in a factorial way, based on the size of the input data, so we can
say the algorithm has factorial time complexity O(n!).
Important Notes
It is important to note that when analyzing the time complexity of an algorithm with several operations we need to describe the algorithm based on the largest complexity among all operations. For example:
def my_function(data):
    first_element = data[0]        # O(1)

    for value in data:             # O(n)
        print(value)

    for x in data:                 # O(n^2)
        for y in data:
            print(x, y)
Even though the operations in ‘my_function’ don't make much sense, we can see that it has multiple time complexities: O(1) + O(n) + O(n²). So, when increasing the size of the input data, the bottleneck of this algorithm will be the operation that takes O(n²). Based on this, we can describe the time complexity of this algorithm as O(n²).
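A quick numeric sketch of why the largest term dominates (the iteration counts below are just an illustration for a list of 1,000 elements):

n = 1000
constant_part = 1           # first_element = data[0]
linear_part = n             # the single loop: 1,000 iterations
quadratic_part = n * n      # the nested loops: 1,000,000 iterations

total = constant_part + linear_part + quadratic_part
print(total)                # 1001001, dominated by the n * n term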
Here is another sheet with the time complexity of the most common sorting algorithms.
Even when working with modern languages, like Python, which provide built-in functions such as sorting algorithms, someday you will probably need to implement an algorithm to perform some kind of operation on a certain amount of data. By studying time complexity you will understand the important concept of efficiency and will be able to find bottlenecks in your code which should be improved, mainly when working with huge data sets.
Besides that, if you plan to apply for a software engineer position at a big company like Google, Facebook, Twitter, or Amazon, you will need to be prepared to answer questions about time complexity using the Big-O notation.
Final Notes
Thanks for reading this story. I hope you have learned a little more about time complexity and the Big-O notation. If you enjoyed it, please give it a clap and share it. If you have any doubts or suggestions, feel free to comment or send me an email. Also, feel free to follow me on Twitter, Linkedin, and Github.
References
• Computational complexity: https://en.wikipedia.org/wiki/Computational_complexity