Ada - Module 1

Asymptotic notation is a mathematical tool used to describe the efficiency of algorithms as the input size increases. The key types are Big O (an upper bound, most often quoted for the worst case), Big Omega (a lower bound, most often quoted for the best case), and Big Theta (a tight bound, which holds when the upper and lower bounds coincide). It categorizes algorithms into efficiency classes such as logarithmic, linear, polynomial, and exponential based on their running time. Understanding these notations helps in comparing algorithm performance and making informed choices based on efficiency.


Asymptotic notation is a mathematical tool that describes how an algorithm's efficiency changes as the size of its input increases. It is used to compare the performance of different algorithms. Basic efficiency classes describe the running time of an algorithm.
Asymptotic notation
• Big O: Used to describe an upper bound on an algorithm's running time, most often quoted for the worst case
• Big Omega: Used to describe a lower bound on an algorithm's running time, most often quoted for the best case
• Big Theta: Used when the upper and lower bounds coincide, i.e., the running time has the same order of growth in all cases
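As a concrete illustration of best case versus worst case (a minimal linear-search sketch, not taken from the notes): the same algorithm finishes in one step when the target is first, which is the Ω(1) best case, and scans all n items when the target is absent, which is the O(n) worst case.

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:   # best case: target is the first element -> 1 step
            return i
    return -1                 # worst case: all n elements were examined
```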

Basic efficiency classes


• Linear: an algorithm whose running time grows linearly with the input size
• Logarithmic: an algorithm whose running time grows logarithmically with the input size
• Quadratic: an algorithm whose running time grows quadratically with the input size
• Exponential: an algorithm whose running time grows exponentially with the input size
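These classes can be compared by counting steps. The sketch below uses hypothetical step-count models for each class (the function names and formulas are illustrative assumptions, not from the notes); printing a few rows shows how quickly they diverge as n grows.

```python
import math

# Hypothetical step-count models for each efficiency class (illustrative only)
def steps_logarithmic(n): return math.floor(math.log2(n)) + 1  # e.g. binary search
def steps_linear(n):      return n                             # e.g. scanning a list
def steps_quadratic(n):   return n * n                         # e.g. comparing all pairs
def steps_exponential(n): return 2 ** n                        # e.g. enumerating all subsets

for n in (8, 16, 32):
    print(n, steps_logarithmic(n), steps_linear(n),
          steps_quadratic(n), steps_exponential(n))
```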

The asymptotic behavior of an algorithm is determined by the highest-order term of its complexity function.
1. Big-Oh notation:
Definition: A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≤ c g(n) for all n ≥ n0.

Informally, O(g(n)) is the set of all functions with the same or a lower order of growth than g(n).
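The definition can be checked numerically. A sketch assuming the hypothetical running time t(n) = 3n + 5, with witnesses c = 4 and n0 = 5 chosen by hand (3n + 5 ≤ 4n whenever n ≥ 5):

```python
def t(n): return 3 * n + 5   # hypothetical running-time function
def g(n): return n           # candidate bound: claim t(n) is in O(n)

c, n0 = 4, 5                 # witnesses: 3n + 5 <= 4n for every n >= 5
assert all(t(n) <= c * g(n) for n in range(n0, 1000))
```

A finite check like this cannot prove the bound for all n, but it is a quick sanity test of the chosen constants.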
2. Big-Omega notation:
Definition: A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≥ c g(n) for all n ≥ n0.
3. Big-Theta notation:
Definition: A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that
c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0.
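As with Big O, the Θ definition can be verified for a concrete case. A sketch assuming the hypothetical running time t(n) = 2n^2 + n, with witnesses c1 = 3 (upper), c2 = 2 (lower), and n0 = 1:

```python
def t(n): return 2 * n**2 + n   # hypothetical running-time function
def g(n): return n**2           # claim: t(n) is in Theta(n^2)

c1, c2, n0 = 3, 2, 1            # witnesses: 2n^2 <= 2n^2 + n <= 3n^2 for n >= 1
assert all(c2 * g(n) <= t(n) <= c1 * g(n) for n in range(n0, 1000))
```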
Benefits of asymptotic notation
• Helps compare the performance of different algorithms
• Helps understand how an algorithm will perform as the input size grows
• Helps make informed decisions about which algorithm to use based on efficiency and resource constraints
Determining Big O Notation
Big O notation is a mathematical notation used to describe the asymptotic behavior of a function as its input grows
infinitely large. It provides a way to characterize the efficiency of algorithms and data structures.

Steps to Determine Big O Notation:

1. Identify the Dominant Term:
• Examine the function and identify the term with the highest order of growth as the input size increases.
• Ignore any constant factors and lower-order terms.

2. Determine the Order of Growth:
• The order of growth of the dominant term determines the Big O notation.

3. Write the Big O Notation:
• The Big O notation is written as O(f(n)), where f(n) represents the dominant term.
• For example, if the dominant term is n^2, the Big O notation would be O(n^2).

4. Simplify the Notation (Optional):
• In some cases, the Big O notation can be simplified by removing constant factors.
• For instance, O(2n) simplifies to O(n).


Example:
Function: f(n) = 3n^3 + 2n^2 + 5n + 1
1. Dominant Term: 3n^3
2. Order of Growth: Cubic (n^3)
3. Big O Notation: O(n^3)
4. Simplified Notation: O(n^3)
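The worked example can be checked numerically: as n grows, f(n)/n^3 approaches the leading coefficient 3, which is exactly why the lower-order terms and constants are dropped (a small sketch):

```python
def f(n): return 3 * n**3 + 2 * n**2 + 5 * n + 1

# As n grows, f(n) / n^3 approaches the leading coefficient 3,
# so the lower-order terms stop mattering: f is O(n^3).
for n in (10, 100, 1000):
    print(n, f(n) / n**3)
```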
Mathematical Examples of Runtime Analysis
The numeric table further below illustrates how different orders of growth behave as the input size (n) increases.
Algorithmic Examples of Runtime Analysis
The table below categorizes algorithms by their runtime complexity and provides example algorithms for each type.
Type         Notation     Example Algorithms
Logarithmic  O(log n)     Binary Search
Linear       O(n)         Linear Search
Superlinear  O(n log n)   Heap Sort, Merge Sort
Polynomial   O(n^c)       Strassen's Matrix Multiplication, Bubble Sort, Selection Sort, Insertion Sort, Bucket Sort
Exponential  O(c^n)       Tower of Hanoi
Factorial    O(n!)        Determinant Expansion by Minors, Brute-force Search for the Traveling Salesman Problem
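As an example of one entry in the table, binary search achieves O(log n) by halving the search interval on every iteration (a standard implementation sketch, assuming a sorted input list):

```python
def binary_search(arr, target):
    """Classic binary search on a sorted list: O(log n) comparisons."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # halve the search interval each iteration
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1
```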

n     log(n)   n log(n)   n^2    2^n        n!
10    2.303    23.03      100    1024       3628800
20    2.996    59.91      400    1048576    ≈2.43 × 10^18

(Here log denotes the natural logarithm; the last entry is 20! ≈ 2.432902 × 10^18.)
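The rows of this table can be reproduced directly (a small Python sketch; it assumes the log column uses the natural logarithm, which matches ln(20) ≈ 2.996):

```python
import math

def row(n):
    # Reproduce one row of the growth table: n, ln(n), n*ln(n), n^2, 2^n, n!
    return (n, round(math.log(n), 3), round(n * math.log(n), 2),
            n * n, 2 ** n, math.factorial(n))

print(row(10))
print(row(20))
```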


Comparison of Big O Notation, Big Ω (Omega) Notation, and Big θ (Theta) Notation
Below is a table comparing Big O notation, Ω (Omega) notation, and θ (Theta) notation:

Notation    Definition                                      Explanation
Big O (O)   f(n) ≤ C · g(n) for all n ≥ n0                  Describes the upper bound of the algorithm's running time in the worst case.
Ω (Omega)   f(n) ≥ C · g(n) for all n ≥ n0                  Describes the lower bound of the algorithm's running time in the best case.
θ (Theta)   C1 · g(n) ≤ f(n) ≤ C2 · g(n) for all n ≥ n0     Describes both the upper and lower bounds of the algorithm's running time.

In each notation:
• f(n) represents the function being analyzed, typically the algorithm's time complexity.
• g(n) represents a specific function that bounds f(n).
• C, C1, and C2 are constants.
• n0 is the minimum input size beyond which the inequality holds.

These notations are used to analyze algorithms in terms of their worst-case (Big O) and best-case (Ω) behavior, while θ gives a tight bound that pins down the exact order of growth.
If we plot the most common Big O examples, the growth rates order, from slowest- to fastest-growing, as O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(2^n) < O(n!).
