
Mastering Algorithm Complexity: Time and Space with Big-O, Omega, and Theta Notations

Khuddush

December 14, 2024
Algorithm in the Context of Complexity
An algorithm is a step-by-step procedure or set of instructions designed to solve a problem
or perform a computation. It takes an input, processes it according to its steps, and
produces an output. The efficiency of an algorithm is critical, especially when dealing
with large inputs or complex problems. Algorithm complexity refers to the measurement
of the resources (such as time and space) an algorithm consumes as a function of the size
of the input.

Algorithm Complexity: Types


There are two primary types of algorithm complexity: Time Complexity and Space
Complexity.

Time Complexity
Time complexity measures the rate at which the running time of an algorithm increases as the size of the input grows. It provides a quantitative understanding of the algorithm's efficiency in terms of execution time relative to the input size.

Space Complexity
Space complexity refers to the amount of memory an algorithm uses in relation to the size of its input. It helps to evaluate how efficiently an algorithm utilizes memory resources, which is crucial for handling large datasets or resource-constrained environments.

Significance of Time and Space Complexity
Both time and space complexity are essential metrics for analyzing the efficiency
of algorithms, especially when dealing with large-scale problems or systems where
both processing time and memory usage need to be optimized.

Time and Space Complexity

Both time and space complexity are measured using Big-O, Omega, and Big-Theta notations, which describe the upper, lower, and tight bounds of an algorithm's performance in terms of execution time and memory usage, respectively.

Big O, Θ, and Ω

(There are also small o, θ, and ω, but we won't discuss them here.)

Time Complexity with Examples


1. Big O: Upper Bound
Definition: Describes the upper bound (worst-case) of an algorithm's running time or space usage.

f(n) = O(g(n)) if there exist positive constants C and n₀ such that f(n) ≤ C · g(n) for all n ≥ n₀.

Big O represents the worst-case runtime of an algorithm.
Mathematical Example: Let f(n) = 5n² + 3n + 2. To show f(n) = O(n²):

f(n) = 5n² + 3n + 2 ≤ 5n² + 3n² + 2n² = 10n² for all n ≥ 1, so the definition holds with C = 10 and n₀ = 1.
Python Code: O(n²) Complexity Example
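A minimal sketch of a doubly nested loop with this behavior (the function name print_pairs and the printed output are illustrative assumptions):

def print_pairs(n):
    # Outer loop: runs n times
    for i in range(n):
        # Inner loop: runs n times per outer iteration
        for j in range(n):
            # Constant-time work executed n * n = n^2 times in total
            print(i, j)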

Explanation:

• Outer Loop: Runs n times, where n is the input size.

• Inner Loop: Runs n times for each iteration of the outer loop.

• Total Iterations: Outer loop × Inner loop = n × n = n².

• Big-O Complexity: O(n²), because the number of iterations grows quadratically with the input size.


2. Ω: Lower Bound

Definition: Describes the lower bound (best-case) of an algorithm's running time or space usage.

f(n) = Ω(g(n)) if there exist positive constants C and n₀ such that f(n) ≥ C · g(n) for all n ≥ n₀.

Ω represents the best-case runtime of an algorithm.


Mathematical Example: Let f(n) = 5n² + 3n + 2. To show f(n) = Ω(n²):

f(n) = 5n² + 3n + 2 ≥ 4n² for all n ≥ 1, with C = 4 and n₀ = 1.

Python Code: Ω(1) Complexity Example
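A sketch of the find-max function analyzed below, reconstructed from the line-by-line discussion (the name find_max and the empty-list handling are assumptions):

def find_max(lst):
    # Check that the list has at least one element
    if not lst:
        raise ValueError("list must be non-empty")
    # Assign the first element to max_val (constant time)
    max_val = lst[0]
    # Worst case: n - 1 comparisons; for a one-element list
    # the loop body never runs, giving the Ω(1) best case
    for num in lst[1:]:
        if num > max_val:
            max_val = num
    return max_val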
Time Complexity: Ω(1) (best case: constant time).
Big-O and Omega Analysis:
Big-O (Worst-case): In the worst case, the function checks every element of the
list to find the maximum value. For a list of length n, the function will perform n − 1
comparisons in the loop. Therefore, the time complexity is

O(n).

Omega (Best-case): In the best case, if the list contains only one element, the loop
does not execute at all, and the maximum value is returned in constant time. Therefore,
the time complexity is
Ω(1).
Explanation:
Best-case, Ω(1): If the list contains only one element, the function will:

• Check that the list has at least one element.


• Assign that element to max_val.

• No loop iteration happens because there is nothing else to compare. This is a constant-time operation, and it takes the same time regardless of the size of the list (even if the list has one element, it's done in constant time).

In this case, the time complexity is Ω(1) because the algorithm immediately returns
after checking just one element.
Worst-case, O(n): If the list contains many elements, the function will loop
through all of them, comparing each one to find the maximum. In this case, the time
complexity is O(n), where n is the number of elements in the list, because the algorithm
needs to check all n elements.
Why Ω(1)?
Ω(1) happens because, in the best case, the algorithm does not need to do more than
a constant amount of work. This means it doesn’t matter how many elements the list
has; it only needs to perform a single action (like accessing the first element).
In summary:
Ω(1) means the algorithm performs the task in constant time in the best case, meaning
the execution time does not depend on the size of the input.
In our example, the best-case scenario is when the list has only one element, and the
algorithm doesn’t need to perform any iterations to find the maximum.

3. Θ: Tight Bound
Definition: Describes the tight bound (exact behavior) of an algorithm’s running time
or space usage, providing both the upper and lower bounds.

f(n) = Θ(g(n)) if there exist positive constants C₁, C₂, and n₀ such that C₁ · g(n) ≤ f(n) ≤ C₂ · g(n) for all n ≥ n₀.

Θ represents the exact bound of an algorithm's runtime.
Mathematical Example: Let f(n) = 5n² + 3n + 2. To show f(n) = Θ(n²):

4n² ≤ 5n² + 3n + 2 ≤ 10n² for all n ≥ 1, with C₁ = 4, C₂ = 10, and n₀ = 1.

Python Code: Θ(n²) Complexity Example
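A sketch of print_grid, the function the analysis below refers to (the printed body is an assumption; the nested-loop structure comes from the discussion):

def print_grid(n):
    # Outer loop: n iterations
    for i in range(n):
        # Inner loop: n iterations per outer iteration,
        # so the body executes n * n = n^2 times
        for j in range(n):
            print(i, j)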


Time Complexity: Θ(n²).

Explanation:

• The outer loop runs n times.

• For each iteration of the outer loop, the inner loop also runs n times.

• Thus, the total number of iterations (or steps) is:

f(n) = n × n = n²

Therefore, the time complexity of this program is Θ(n²). This means that as the size n grows, the running time grows quadratically (doubling n quadruples the number of steps).

Mathematically: Let's dive into how we can determine the constants C₁, C₂, and g(n) mathematically when analyzing the time complexity of an algorithm.
An algorithm has Big-Theta complexity, Θ(g(n)), if there exist positive constants C₁, C₂, and n₀ such that:

C₁ · g(n) ≤ T(n) ≤ C₂ · g(n) for all n ≥ n₀

Where:

• T(n) is the time complexity of the algorithm as a function of n.

• g(n) is a function that describes the asymptotic growth of the algorithm.

• C₁ and C₂ are positive constants.

• n₀ is a sufficiently large value of n for which the inequalities hold.

Step 1: Define T(n)
The given function print_grid(n) has two nested loops, and for each iteration of the outer loop, the inner loop runs n times. Therefore, the total number of operations is:

T(n) = n × n = n²
Step 2: Find C₁ and C₂
Now, let's apply the definition of Big-Theta and identify constants C₁ and C₂ for the function T(n) = n².

• Lower bound (Big-Omega): The number of operations grows at least as fast as n², so there exists a constant C₁ such that for all sufficiently large n:

T(n) ≥ C₁ · n²

In this case, T(n) = n², so we can choose C₁ = 1. Therefore, the lower bound is:

T(n) ≥ n² for large n

• Upper bound (Big-O): The number of operations is also at most proportional to n², so there exists a constant C₂ such that:

T(n) ≤ C₂ · n²

Since T(n) = n², we can choose C₂ = 1. Therefore, the upper bound is:

T(n) ≤ n² for large n

Step 3: Choose g(n)
The function g(n) describes the growth rate of the algorithm, which in this case is n². So:

g(n) = n²

Step 4: Find n₀
We need to find a sufficiently large n₀ such that the inequalities:

C₁ · n² ≤ T(n) ≤ C₂ · n²

hold for all n ≥ n₀. In this case, since T(n) = n², the inequalities will always hold for all n ≥ 1. Therefore, we can choose:

n₀ = 1

Final Answer:
So, by the Big-Theta definition:

T(n) = Θ(n²)

Where:

• C₁ = 1

• C₂ = 1

• g(n) = n²

• n₀ = 1

This means that for sufficiently large n, the total number of operations performed by the function is proportional to n², with constants C₁ and C₂.

Space Complexities

Big-O: Space Complexity O(n)
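A sketch of the list-squaring function the line-by-line analysis below refers to, reconstructed from that analysis (the name square_list is an assumption):

def square_list(lst):
    squared_lst = []                   # Line 1: list that grows to n elements
    for num in lst:                    # Line 2: constant-space loop variable
        squared_lst.append(num ** 2)   # Line 3: one squared value per element
    return squared_lst                 # Line 4: returns a reference, no copy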

Explanation:

• Line 1: squared_lst = []

  – Creates a new list that will grow to hold n elements (one per element of lst).


  – Space Complexity: O(n).

• Line 2: for num in lst:

  – Iterates through lst. No extra space is needed for the loop variable.
  – Space Complexity: O(1).

• Line 3: squared_lst.append(num ** 2)

  – Appends a squared value to squared_lst. Each append takes constant space.
  – Space Complexity: O(1) per iteration.

• Line 4: return squared_lst

  – Returns the reference to squared_lst, which doesn't require additional space.
  – Space Complexity: O(1).

Final Space Complexity: O(n)

• The space used by squared_lst grows linearly with the input size n.

Important Note: Space Complexity


Warning: The space complexity does not depend on the size of the individual elements or the values within the list, only on the number of elements n that need to be squared and stored.


Omega: Space Complexity Ω(n²)
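A sketch of the pair-generating function analyzed below, reconstructed from the line-by-line discussion (the name generate_pairs is an assumption):

def generate_pairs(lst):
    pairs = []                    # Line 1: empty list
    for i in lst:                 # Line 2: outer loop
        for j in lst:             # Line 3: inner loop
            pairs.append((i, j))  # Line 4: stores n * n = n^2 pairs
    return pairs                  # Line 5: returns a reference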
Explanation:

• Line 1: pairs = []

  Initializes an empty list pairs.
  Space Complexity: Ω(1) (constant space).

• Line 2: for i in lst: (Outer Loop)

  Iterates through the list lst.
  Space Complexity: Ω(1) (constant space for the loop variable i).

• Line 3: for j in lst: (Inner Loop)

  Iterates through the list lst for each iteration of the outer loop.
  Space Complexity: Ω(1) (constant space for the loop variable j).

• Line 4: pairs.append((i, j))

  Appends a new pair (i, j) to the list pairs.
  Space Complexity: Ω(n²) — the list pairs will hold n × n = n² pairs, and each pair takes constant space.

• Line 5: return pairs

  Returns the pairs list, which already occupies n² space.
  Space Complexity: Ω(1) — no additional space is required to return the list.

Final Space Complexity:

Ω(n²) — the space grows quadratically with the input size n because the list pairs stores n² pairs.
In conclusion, the Big-Omega space complexity is Ω(n²) due to the storage of the pairs generated by the nested loops.

Theta: Space Complexity Θ(1)
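A sketch of the summation function analyzed below, reconstructed from the line-by-line discussion (the name sum_list is an assumption):

def sum_list(lst):
    total = 0         # Line 1: single accumulator variable
    for num in lst:   # Line 2: loop variable reuses constant space
        total += num  # Line 3: in-place update, no extra space
    return total      # Line 4: returns a single value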

Explanation:

• Line 1: total = 0 initializes a variable total to zero.

  – Space Complexity: Θ(1) (constant space for initialization).

• Line 2: for num in lst: iterates through each element of the input list lst.

  – Space Complexity: Θ(1) (constant space for the loop variable num).

• Line 3: total += num updates the value of total during each iteration.

  – Space Complexity: Θ(1) (constant space for the update operation).

• Line 4: return total returns the final value of total.

  – Space Complexity: Θ(1) (no additional space required for returning the result).

Final Space Complexity:

• The space complexity is Θ(1), as the algorithm only uses a fixed amount of space for the total variable and the loop variable num, regardless of the size of the input list.

Summary:
The space complexity of the function is Θ(1), since the space used by the algorithm does not grow with the size of the input list lst.

Time Complexities
1. Constant Time: O(1)
Definition: The runtime does not depend on the size of the input.
Example: Accessing an element in an array by its index.
Explanation: The operation is completed in a single step, regardless of the input size.

2. Logarithmic Time: O(log n)

Definition: The runtime increases logarithmically as the input size grows.
Example: Binary search in a sorted array.
Explanation: The problem size is halved at every step, leading to a logarithmic number of steps.
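A standard iterative binary search sketch illustrating the halving:

def binary_search(arr, target):
    # arr must be sorted; the search interval halves each iteration,
    # so at most about log2(n) iterations are needed
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not found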

3. Linear Time: O(n)


Definition: The runtime increases linearly with the input size.
Example: Traversing an array to compute the sum of its elements.
Explanation: Each element is processed once, resulting in n operations for n elements.

4. Linearithmic Time: O(n log n)


Definition: Combines linear and logarithmic growth, often seen in divide-and-conquer
algorithms.
Example: Merge Sort or Quick Sort (average case).
Explanation: The input is divided into smaller subproblems (logarithmic part) and
processed linearly at each level.
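A compact Merge Sort sketch: the list is split to a depth of about log₂ n, and a linear-time merge is performed at each level:

def merge_sort(lst):
    # Divide: split in half until single elements remain
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])
    right = merge_sort(lst[mid:])
    # Conquer: merge the two sorted halves in linear time
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged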

5. Quadratic Time: O(n²)

Definition: The runtime grows quadratically as the input size increases.
Example: Bubble Sort or nested loops iterating over the same list.
Explanation: For each of the n elements, the algorithm performs n operations, resulting in n × n = n² operations.
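A Bubble Sort sketch showing the nested-loop structure behind the O(n²) count:

def bubble_sort(lst):
    n = len(lst)
    # Outer pass runs n times; the inner loop does up to n - 1
    # comparisons per pass, giving O(n^2) comparisons overall
    for i in range(n):
        for j in range(n - 1 - i):
            if lst[j] > lst[j + 1]:
                # Swap adjacent out-of-order elements
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
    return lst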

6. Cubic Time: O(n³)

Definition: The runtime grows cubically as the input size grows.
Example: Naive matrix multiplication.
Explanation: For each of the n² entries of the result, an additional n operations are performed.
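A naive square-matrix multiplication sketch (assumes A and B are n × n lists of lists):

def matmul(A, B):
    # n^2 result entries, each requiring n multiply-adds: n^3 operations
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C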

7. Exponential Time: O(2ⁿ)

Definition: The runtime doubles with each additional input element.
Example: Solving the traveling salesman problem via brute force.
Explanation: For n elements, there are 2ⁿ possible combinations to explore.
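Brute-force TSP is lengthy to sketch; a simpler illustration of the same 2ⁿ growth is enumerating all subsets of a set:

def power_set(items):
    # Each element is either included or excluded, so the number
    # of subsets doubles per element: 2**n subsets in total
    subsets = [[]]
    for item in items:
        subsets += [s + [item] for s in subsets]
    return subsets

print(power_set([1, 2, 3]))  # 2**3 = 8 subsets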

8. Factorial Time: O(n!)

Definition: The runtime grows factorially as the input size increases, making it extremely inefficient for large inputs.
Example: Calculating all permutations of a set.
Explanation: For n elements, there are n × (n − 1) × … × 1 = n! permutations to consider.
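A short illustration using the standard library's itertools.permutations, which yields all n! orderings:

import itertools

def all_permutations(items):
    # Materializing every ordering produces n! tuples,
    # which is infeasible beyond small n
    return list(itertools.permutations(items))

print(all_permutations([1, 2, 3]))  # 3! = 6 tuples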

Algorithm                      | Best Case  | Worst Case | Average Case | Space Complexity
Linear Search                  | O(1)       | O(n)       | O(n)         | O(1)
Binary Search (Iterative)      | O(1)       | O(log n)   | O(log n)     | O(1)
Binary Search (Recursive)      | O(1)       | O(log n)   | O(log n)     | O(log n)
Bubble Sort                    | O(n)       | O(n²)      | O(n²)        | O(1)
Quick Sort                     | O(n log n) | O(n²)      | O(n log n)   | O(log n)
Merge Sort                     | O(n log n) | O(n log n) | O(n log n)   | O(n)
Insertion Sort (Nearly Sorted) | O(n)       | O(n²)      | O(n²)        | O(1)

Table 1: Complexity Table of Algorithms

Algorithm                      | Best Case  | Worst Case | Average Case | Space Complexity | Big-O      | Omega      | Theta
Linear Search                  | O(1)       | O(n)       | O(n)         | O(1)             | O(n)       | Ω(1)       | Θ(n)
Binary Search (Iterative)      | O(1)       | O(log n)   | O(log n)     | O(1)             | O(log n)   | Ω(1)       | Θ(log n)
Binary Search (Recursive)      | O(1)       | O(log n)   | O(log n)     | O(log n)         | O(log n)   | Ω(1)       | Θ(log n)
Bubble Sort                    | O(n)       | O(n²)      | O(n²)        | O(1)             | O(n²)      | Ω(n)       | Θ(n²)
Quick Sort                     | O(n log n) | O(n²)      | O(n log n)   | O(log n)         | O(n²)      | Ω(n log n) | Θ(n log n)
Merge Sort                     | O(n log n) | O(n log n) | O(n log n)   | O(n)             | O(n log n) | Ω(n log n) | Θ(n log n)
Insertion Sort (Nearly Sorted) | O(n)       | O(n²)      | O(n²)        | O(1)             | O(n²)      | Ω(n)       | Θ(n²)

Table 2: Complexity Table of Algorithms (Big-O, Omega, Theta)

Summary of Relationships

O(g(n)): Upper bound (worst case).
Ω(g(n)): Lower bound (best case).
Θ(g(n)): Tight bound (both upper and lower).

Hierarchy of Growth Rates (from smallest to largest):

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)
