Complexity
Khuddush
December 14, 2024
Algorithm in the Context of Complexity
An algorithm is a step-by-step procedure or set of instructions designed to solve a problem
or perform a computation. It takes an input, processes it according to its steps, and
produces an output. The efficiency of an algorithm is critical, especially when dealing
with large inputs or complex problems. Algorithm complexity refers to the measurement
of the resources (such as time and space) an algorithm consumes as a function of the size
of the input.
Time Complexity
Time complexity measures the rate at which the running time of an algorithm increases as the size of the input grows. It provides a quantitative understanding of the algorithm's efficiency in terms of execution time relative to the input size.
Space Complexity
Space complexity refers to the amount of memory an algorithm uses in relation to the size of its input. It helps evaluate how efficiently an algorithm utilizes memory, which is crucial when handling large datasets or working in resource-constrained environments.
Significance of Time and Space Complexity
Both time and space complexity are essential metrics for analyzing the efficiency of algorithms, especially when dealing with large-scale problems or systems where both processing time and memory usage need to be optimized.

Both time and space complexity are measured using Big-O, Omega, and Big-Theta notations, which describe, respectively, the upper, lower, and tight bounds of an algorithm's performance in terms of execution time or memory usage.
Big O, Θ, and Ω

1. O: Upper Bound

Definition: Describes the upper bound (worst-case) of an algorithm's running time or space usage.

f(n) = O(g(n)) if there exist positive constants C and n0 such that f(n) ≤ C·g(n) for all n ≥ n0.

Python Code:
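The code from the original figure is not reproduced here; the following is a minimal sketch consistent with the analysis below (the function name nested_loop is illustrative):

def nested_loop(n):
    count = 0
    for i in range(n):        # outer loop: runs n times
        for j in range(n):    # inner loop: runs n times per outer iteration
            count += 1        # constant-time work, executed n * n times
    return count              # returns n^2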
Explanation:
Outer Loop: Runs n times, where n is the input size.
Inner Loop: Runs n times for each iteration of the outer loop.
Total Iterations: Outer loop × Inner loop = n × n = n².
Big-O Complexity: O(n²), because the number of iterations grows quadratically with the input size.
2. Ω: Lower Bound
Definition: Describes the lower bound (best-case) of an algorithm's running time or space usage.

f(n) = Ω(g(n)) if there exist positive constants C and n0 such that f(n) ≥ C·g(n) for all n ≥ n0.
Python Code:
Code Example: Ω(1) Complexity
Time Complexity: Ω(1) (best case: constant time).
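The code from the original figure is not reproduced here; the following is a minimal sketch consistent with the analysis below (the function name find_max is illustrative):

def find_max(lst):
    max_value = lst[0]        # assumes a non-empty list; take the first element
    for value in lst[1:]:     # n - 1 comparisons in the worst case
        if value > max_value:
            max_value = value
    return max_value          # with one element, the loop body never runs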
Big-O and Omega Analysis:
Big-O (Worst-case): In the worst case, the function checks every element of the
list to find the maximum value. For a list of length n, the function will perform n − 1
comparisons in the loop. Therefore, the time complexity is
O(n).
Omega (Best-case): In the best case, if the list contains only one element, the loop
does not execute at all, and the maximum value is returned in constant time. Therefore,
the time complexity is
Ω(1).
Explanation:
Best-case (Omega(1)): If the list contains only one element, the function skips the loop entirely and returns that element immediately.
In this case, the time complexity is Ω(1) because the algorithm immediately returns
after checking just one element.
Worst-case (Big-O(n)): If the list contains many elements, the function will loop
through all of them, comparing each one to find the maximum. In this case, the time
complexity is O(n), where n is the number of elements in the list, because the algorithm
needs to check all n elements.
Why Omega(1)?
Ω(1) happens because, in the best case, the algorithm does not need to do more than
a constant amount of work. This means it doesn’t matter how many elements the list
has; it only needs to perform a single action (like accessing the first element).
In summary:
Ω(1) means the algorithm performs the task in constant time in the best case, meaning
the execution time does not depend on the size of the input.
In our example, the best-case scenario is when the list has only one element, and the
algorithm doesn’t need to perform any iterations to find the maximum.
3. Θ: Tight Bound
Definition: Describes the tight bound (exact behavior) of an algorithm’s running time
or space usage, providing both the upper and lower bounds.
Θ represents the exact bound of an algorithm's runtime.

Mathematical Example: Let f(n) = 5n² + 3n + 2. To show f(n) = Θ(n²):

4n² ≤ 5n² + 3n + 2 ≤ 6n² for all n ≥ 4, with C1 = 4, C2 = 6, and n0 = 4.

(The lower bound holds for every n ≥ 1; the upper bound requires n² ≥ 3n + 2, which first holds at n = 4, hence n0 = 4.)
Python Code:
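The code from the original figure is not reproduced here; the following is a sketch of a doubly nested loop matching the Θ(n²) analysis below (the function name print_pairs is illustrative):

def print_pairs(n):
    for i in range(n):        # outer loop: n iterations
        for j in range(n):    # inner loop: n iterations each time
            print(i, j)       # executed exactly n * n times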
For each iteration of the outer loop, the inner loop also runs n times.
f(n) = n × n = n²

Therefore, the time complexity of this program is Θ(n²). This means that as n grows, the running time grows quadratically (for example, doubling n roughly quadruples the running time).
Mathematically: Let's look at how to determine the constants C1, C2, and g(n) when analyzing the time complexity of an algorithm.

An algorithm has Big-Theta complexity, Θ(g(n)), if there exist positive constants C1, C2, and n0 such that:

C1 · g(n) ≤ T(n) ≤ C2 · g(n) for all n ≥ n0

Where:
T(n) is the time complexity of the algorithm as a function of n.
g(n) is a function that describes the asymptotic growth of the algorithm.

Step 1: Determine T(n)
For the nested-loop program above, the outer and inner loops each run n times, so:

T(n) = n × n = n²
Step 2: Find C1 and C2
Now, let's apply the definition of Big-Theta and identify constants C1 and C2 for the function T(n) = n². For the lower bound we need:

T(n) ≥ C1 · n²

Since T(n) = n², we can choose C1 = 1 and the bound holds with equality. For the upper bound we need:

T(n) ≤ C2 · n²

which likewise holds with C2 = 1.
Step 3: Choose g(n)
The function g(n) describes the growth rate of the algorithm, which in this case is n². So:

g(n) = n²
Step 4: Find n0
We need to find a sufficiently large n0 such that the inequalities:

C1 · n² ≤ T(n) ≤ C2 · n²

hold for all n ≥ n0. In this case, since T(n) = n², the inequalities hold for all n ≥ 1. Therefore, we can choose:

n0 = 1
Final Answer:
So, by the Big-Theta definition:
T(n) = Θ(n²)

Where:
C1 = 1
C2 = 1
g(n) = n²
n0 = 1

This means that for sufficiently large n, the total number of operations performed by the function is proportional to n², bounded by the constants C1 and C2.
Space Complexities:
Big-O
Space Complexity-O(n)
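The code from the original figure is not reproduced here; the following is a minimal sketch matching the line-by-line analysis below (the function name square_list is illustrative):

def square_list(lst):
    squared_lst = []                  # new list that will hold n results
    for num in lst:                   # loop variable uses constant space
        squared_lst.append(num ** 2)  # each append adds one element
    return squared_lst                # reference to the n-element list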
Explanation:
– Iterates through lst. No extra space is needed for the loop variable.
– Space Complexity: O(1).
– Appends a squared value to squared_lst. Each append takes constant space.
– Space Complexity: O(1) per iteration.
– Returns the reference to squared_lst, which doesn't require additional space.
– Space Complexity: O(1).
The space used by squared_lst grows linearly with the input size n, so the overall space complexity is O(n).
Space Complexity-Ω(n²)
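The code from the original figure is not reproduced here; the following is a sketch consistent with the explanation below, whose first line initializes pairs = [] (the function name all_pairs is illustrative):

def all_pairs(lst):
    pairs = []                    # Line 1: empty list, constant space
    for a in lst:                 # outer loop over n elements
        for b in lst:             # inner loop over n elements
            pairs.append((a, b))  # stores n * n pairs in total
    return pairs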
Explanation:
Line 1: pairs = []
Initializes an empty list pairs.
Space Complexity: Ω(1) (constant space).

Ω(n²) — The space grows quadratically with the input size n because the list pairs stores n² pairs.

In conclusion, the Big-Omega space complexity is Ω(n²) due to the storage of the pairs generated by the nested loops.
h
Space Complexity-Θ(1)
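The code from the original figure is not reproduced here; the line-by-line explanation below corresponds to a function like this sketch (the function name sum_list is illustrative):

def sum_list(lst):
    total = 0         # Line 1: constant space
    for num in lst:   # Line 2: loop variable reuses one slot
        total += num  # Line 3: in-place update
    return total      # Line 4: no extra space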
Explanation:
Line 1: total = 0 initializes a variable total to zero.
– Space Complexity: Θ(1) (constant space for initialization).
Line 2: for num in lst: iterates through each element of the input list lst.
– Space Complexity: Θ(1) (constant space for the loop variable num).
Line 3: total += num updates the value of total during each iteration.
– Space Complexity: Θ(1) (constant space for the update operation).
Line 4: return total returns the final value of total.
– Space Complexity: Θ(1) (no additional space required for returning the result).
Final Space Complexity:
The space complexity is Θ(1), as the algorithm only uses a fixed amount of space
for the total variable and the loop variable num, regardless of the size of the input
list.
Summary:
The space complexity of the function is Θ(1), since the space used by the algorithm
does not grow with the size of the input list lst.
Time Complexities
1. Constant Time: O(1)
Definition: The runtime does not depend on the size of the input.
Example: Accessing an element in an array by its index.
Explanation: The operation is completed in a single step, regardless of the input size.
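As a brief illustration, indexing a Python list touches a single slot regardless of the list's length:

arr = [10, 20, 30, 40]
x = arr[2]  # one step whether the list has 4 or 4 million elements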
2. Logarithmic Time: O(log n)
Definition: The runtime increases logarithmically as the input size grows.
Example: Binary search in a sorted array.
Explanation: The problem size is halved at every step, leading to a logarithmic number of steps.
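A standard binary search sketch in Python; each iteration halves the remaining search range, so at most about log2(n) iterations run:

def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2  # midpoint of the current range
        if arr[mid] == target:
            return mid           # found: return its index
        elif arr[mid] < target:
            low = mid + 1        # discard the lower half
        else:
            high = mid - 1       # discard the upper half
    return -1                    # target not present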
3. Linear Time: O(n)
Definition: The runtime grows proportionally to the size of the input.
Example: Scanning a list once to find the maximum element.
Explanation: Each of the n elements is examined exactly once.

4. Linearithmic Time: O(n log n)
Definition: The runtime grows in proportion to n multiplied by log n.
Example: Efficient comparison-based sorting, such as merge sort.
Explanation: The input is split logarithmically many times, and each level of splitting requires linear work.

5. Quadratic Time: O(n²)
Definition: The runtime grows proportionally to the square of the input size.
Example: Comparing every pair of elements using two nested loops.
Explanation: For each of the n elements, n operations are performed, giving n × n = n² operations.

6. Cubic Time: O(n³)
Definition: The runtime grows proportionally to the cube of the input size.
Example: Three nested loops over the input.
Explanation: For each of the n² pairs of elements, an additional n operations are performed.

7. Exponential Time: O(2ⁿ)
Definition: The runtime doubles with each additional element of input.
Example: Generating all subsets of a set.
Explanation: For n elements, there are 2ⁿ possible combinations to explore.
8. Factorial Time: O(n!)
Definition: The runtime grows factorially as the input size increases, making it extremely inefficient for large inputs.
Example: Calculating all permutations of a set.
Explanation: For n elements, there are n × (n − 1) × · · · × 1 = n! permutations to consider.
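For instance, enumerating permutations with the standard library makes the factorial growth concrete:

from itertools import permutations

items = [1, 2, 3]
perms = list(permutations(items))  # 3! = 6 tuples
print(len(perms))                  # prints 6; with 10 items it would be 3,628,800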
Summary of Relationships
O(g(n)): Upper bound (worst case).
Ω(g(n)): Lower bound (best case).
Θ(g(n)): Tight bound (both upper and lower).
Hierarchy of Growth Rates (from smallest to largest):

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)