Big O notation describes the long-term growth rate of an algorithm's time or space complexity as input size increases, focusing on the dominant factors while ignoring constants and lower-order terms. It is used to compare algorithms, predict performance, and identify bottlenecks. Common complexities range from O(1) for constant time to O(n!) for factorial time, with specific rules for simplification and analysis.
Big O Notation Class Notes
What is Big O Notation?
Big O notation is a way to describe the long-term growth rate of a
function, typically the time or space complexity of an algorithm, as the input size (n) grows. It focuses on how the runtime or space usage scales relative to the input size, ignoring constant factors and lower-order terms. It provides an upper bound on the growth rate.
Why Use Big O Notation?
Compare Algorithms: Big O notation allows us to compare the efficiency of different algorithms without getting bogged down in implementation details or hardware specifics.
Predict Performance: It helps predict how an algorithm will perform with larger inputs.
Identify Bottlenecks: It helps identify potential performance bottlenecks in code.
Abstract Complexity: It simplifies the analysis of algorithm complexity by focusing on the dominant factors.
How to Determine Big O Notation:
1. Identify the Dominant Operations: Determine the operations in the algorithm that are executed most frequently as the input size grows. These are the ones that contribute most significantly to the runtime or space usage.
2. Express Complexity as a Function of n: Express the number of dominant operations as a function of the input size (n).
3. Drop Constant Factors and Lower-Order Terms: Simplify the function by dropping constant factors and lower-order terms. For example, if the function is 2n^2 + 5n + 1, the Big O notation is O(n^2). We keep only the highest-order term and discard the constant multiplier.
4. Express with "O" Notation: Write the simplified function using Big O notation. (A worked sketch follows this list.)
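As a short sketch of these steps (the function and the operation counts below are illustrative assumptions, not part of the original notes), consider counting the work in a simple single-pass routine:

Python

def sum_of_squares(arr):
    # Setup: a constant number of operations.
    total = 0
    # Each iteration does a constant amount of work (multiply, add, assign),
    # and the loop runs n times, so the count is roughly c*n + constant.
    for value in arr:
        total += value * value
    # Dropping the constant factor and lower-order terms gives O(n).
    return total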
Common Big O Notations (from best to worst):
O(1) - Constant Time: The runtime or space usage is constant, regardless of the input size. Example: Accessing an element in an array by index.
O(log n) - Logarithmic Time: The runtime or space usage grows logarithmically with the input size. Example: Binary search (see the sketch after this list).
O(n) - Linear Time: The runtime or space usage grows linearly with the input size. Example: Linear search.
O(n log n) - Linearithmic Time: The runtime or space usage grows proportionally to n multiplied by the logarithm of n. Example: Merge sort.
O(n^2) - Quadratic Time: The runtime or space usage grows proportionally to the square of the input size. Example: Nested loops.
O(2^n) - Exponential Time: The runtime or space usage grows exponentially with the input size. Example: Trying all possible subsets.
O(n!) - Factorial Time: The runtime or space usage grows factorially with the input size. Example: Traveling salesman problem (brute-force approach).
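To make the logarithmic case concrete, here is a minimal iterative binary search sketch (standard textbook version, assuming the input list is sorted; not taken verbatim from the notes):

Python

def binary_search(sorted_arr, target):
    # O(log n): each iteration halves the remaining search range.
    low, high = 0, len(sorted_arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not found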
Rules for Big O Notation:
Constant Factors are Ignored: O(2n) is the same as O(n).
Lower-Order Terms are Ignored: O(n^2 + n) is the same as O(n^2).
Dominant Term Matters: O(n^3 + n^2 + 1) is the same as O(n^3).
Multiple Terms: If an algorithm has multiple parts, the Big O notation is determined by the part with the highest order of growth. For example, if one part is O(n) and another is O(n^2), the overall complexity is O(n^2). (See the sketch after this list.)
Logarithms: The base of the logarithm doesn't matter in Big O notation. O(log2 n) is the same as O(log10 n), which is simply written as O(log n).
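As a hedged illustration of the Multiple Terms rule (the function below is a made-up sketch; it relies on Python's built-in sorted, which runs in O(n log n)):

Python

def smallest_gap(values):
    # Part 1: sorting is O(n log n).
    ordered = sorted(values)
    # Part 2: a single linear pass over the sorted values is O(n).
    best = float('inf')
    for i in range(1, len(ordered)):
        gap = ordered[i] - ordered[i - 1]
        if gap < best:
            best = gap
    # Overall: O(n log n) + O(n) = O(n log n);
    # the part with the higher order of growth dominates.
    return best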
Big O Notation and Nested Loops:
Nested loops often lead to polynomial time complexity. If you have two nested loops that each iterate n times and the loop body does a constant amount of work, the complexity is O(n^2).
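A minimal sketch of this case (illustrative function, assuming the loop body does constant work):

Python

def count_ordered_pairs(arr):
    # The outer loop runs n times; for each outer iteration the inner loop
    # also runs n times, so the body executes n * n = n^2 times -> O(n^2).
    count = 0
    for x in arr:
        for y in arr:
            count += 1  # constant-time body
    return count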
Big O Notation and Space Complexity:
Big O notation can also be used to describe the space complexity of an
algorithm, i.e., how much memory it uses as a function of the input size.
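For contrast, here is a hedged sketch of space analysis (both functions are illustrative, not from the notes): one builds an output list that grows with the input, the other keeps only a single accumulator.

Python

def squares_list(arr):
    # O(n) space: the result list stores one entry per input element.
    result = []
    for value in arr:
        result.append(value * value)
    return result

def running_total(arr):
    # O(1) extra space: only a single accumulator, regardless of input size.
    total = 0
    for value in arr:
        total += value
    return total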
Example:
Python

def find_element(arr, target):  # O(n) - Linear time
    for element in arr:
        if element == target:
            return True
    return False

def get_first_element(arr):  # O(1) - Constant time
    return arr[0]
Common Misconceptions about Big O Notation:
Big O notation doesn't tell you the exact runtime of an algorithm; it only describes how the runtime scales with the input size.
A lower Big O complexity doesn't always mean an algorithm is faster for all input sizes; constant factors can matter for small inputs.
Big O notation is not a measure of code quality.
Further Study:
Understanding Big O notation is essential for writing efficient code.
Further study should include analyzing the time and space complexity of various algorithms and practicing how to determine the Big O notation of code snippets. It's important to think about how the input size affects the performance of your programs.