
COMPLEXITY ANALYSIS OF ALGORITHMS: BIG O, OMEGA, AND THETA NOTATION


Order of Growth
Consider two different algorithms, A and B, that solve a problem X.
Algorithm A: T(n) = 7n^2 + 8n + 9
For sufficiently large values of n: T(n) ≈ 7n^2 + 8n ≈ 7n^2 ≈ n^2.
Algorithm B: T(n) = 4 · 2^n + 5n + 6
For sufficiently large values of n: T(n) = 4 · 2^n + 5n ≈ 4 · 2^n ≈ 2^n.
Observation
"Lower-order terms and the constant coefficient of the highest-order term, i.e., the leading term, are insignificant."

The order of growth of the running time of Algorithm A is n^2.
The order of growth of the running time of Algorithm B is 2^n.
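To see the leading-term dominance described above, the short sketch below (an illustrative addition, not part of the original slides) computes the ratio T(n)/n^2 for Algorithm A; the ratio settles toward the constant 7 as n grows, which is why only the order n^2 matters.

# Illustrative sketch: the lower-order terms of T(n) = 7n^2 + 8n + 9
# become negligible as n grows, so T(n)/n^2 approaches the constant 7.
def T(n):
    return 7 * n**2 + 8 * n + 9

for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, T(n) / n**2)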



Order of Growth
Examples:
Order of growth of T(n) = 7n^3 + 20n log n + 84n + 30 is n^3.
Order of growth of T(n) = n(n + 1)/2 is n^2.
Order of growth of T(n) = 20 is n^0 = 1.
Order of growth of T(n) = 3^n + 4n^2 + 8n − 3 log n is 3^n.

Note: An algorithm is more efficient than another if its running time has a lower order of growth.


Basic orders of growth in increasing order
1         constant
log n     logarithmic
n         linear
n log n   linearithmic
n^2       quadratic
n^3       cubic
2^n       exponential
n!        factorial
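To make the ordering concrete, the sketch below (an illustrative addition, not part of the original slides) evaluates each basic growth function at a few input sizes; the rows pull apart quickly as n increases.

import math

# Illustrative sketch: evaluate each basic order of growth for a few
# values of n to see how quickly the functions separate.
functions = [
    ("1", lambda n: 1),
    ("log n", lambda n: math.log2(n)),
    ("n", lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2", lambda n: n**2),
    ("n^3", lambda n: n**3),
    ("2^n", lambda n: 2**n),
    ("n!", lambda n: math.factorial(n)),
]

for name, f in functions:
    print(f"{name:8s}", [round(f(n), 1) for n in (2, 4, 8, 16)])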



Growth Rates
[Figure: growth-rate curves for the basic orders of growth.]


Asymptotic Analysis

The word "asymptotic" means approaching a value or curve arbitrarily closely (i.e., as some sort of limit is taken).
Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis.
In asymptotic analysis we evaluate the performance of an algorithm in terms of its input size; we do not measure the actual running time.

Asymptotic Analysis
The simplest example is the function f(n) = n^2 + 3n; the term 3n becomes insignificant compared to n^2 when n is very large.
The function f(n) is said to be "asymptotically equivalent to n^2 as n → ∞", written symbolically as f(n) ~ n^2.
Asymptotic notations are used to express the fastest and slowest possible running times of an algorithm; these are also referred to as the 'best case' and 'worst case' scenarios, respectively.
Asymptotic Notations



Θ Notation
The theta notation bounds a function from above and below, so it defines exact asymptotic behavior.
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
Example: 3n + 2 = Θ(n), since 3n + 2 ≥ 3n and 3n + 2 ≤ 4n for all n ≥ 2; take c1 = 3, c2 = 4, and n0 = 2.
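The Θ example above can be spot-checked numerically; the sketch below (an illustrative addition) verifies that 3n ≤ 3n + 2 ≤ 4n holds over a sample of values n ≥ 2.

# Illustrative check of the Theta example: c1*g(n) <= f(n) <= c2*g(n)
# with f(n) = 3n + 2, g(n) = n, c1 = 3, c2 = 4, n0 = 2.
assert all(3 * n <= 3 * n + 2 <= 4 * n for n in range(2, 10_000))
print("3n + 2 stays between 3n and 4n for all sampled n >= 2")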
Big O Notation
The Big O notation defines an upper bound of an algorithm; it bounds a function only from above.
O(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0}
Examples:
1. 3n + 2 = O(n), since 3n + 2 ≤ 4n for all n ≥ 2.
2. 3n + 3 = O(n), since 3n + 3 ≤ 4n for all n ≥ 3.
Ω Notation
Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that 0 <= c*g(n) <= f(n) for all n >= n0}.
Example: f(n) = 8n^2 + 2n − 3 ≥ 8n^2 − 3 = 7n^2 + (n^2 − 3) ≥ 7n^2 for all n ≥ 2, so f(n) = Ω(n^2).
Thus, c = 7 and n0 = 2.
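The inequality chain above can likewise be spot-checked; the sketch below (an illustrative addition) confirms that 8n^2 + 2n − 3 ≥ 7n^2 for a sample of values n ≥ 2.

# Illustrative check of the Omega example: f(n) >= c*g(n) for n >= n0,
# with f(n) = 8n^2 + 2n - 3, g(n) = n^2, c = 7, n0 = 2.
assert all(8 * n**2 + 2 * n - 3 >= 7 * n**2 for n in range(2, 10_000))
print("8n^2 + 2n - 3 >= 7n^2 for all sampled n >= 2")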
Little o notation
"Little-o" (o()) notation is used to describe an upper bound that is not asymptotically tight.
Let f(n) and g(n) be functions that map positive integers to positive real numbers.
We say that f(n) is o(g(n)) (or f(n) ∈ o(g(n)))
if for any real constant c > 0, there exists an integer constant n0 ≥ 1
such that 0 ≤ f(n) < c*g(n) for every integer n ≥ n0.
Little omega notation
We use ω notation to denote a lower bound that is not asymptotically tight.
Let f(n) and g(n) be functions that map positive integers to positive real numbers.
We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n)))
if for any real constant c > 0, there exists an integer constant n0 ≥ 1
such that f(n) > c*g(n) ≥ 0 for every integer n ≥ n0.
Intuition for Asymptotic Notation
Big-Oh
 f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
big-Omega
 f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
big-Theta
 f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
little-oh
 f(n) is o(g(n)) if f(n) is asymptotically strictly less than g(n)
little-omega
 f(n) is ω(g(n)) if f(n) is asymptotically strictly greater than g(n)


Some examples
Logarithmic algorithm – O(log n) – Binary Search.
Linear algorithm – O(n) – Linear Search.
Superlinear algorithm – O(n log n) – Heap Sort, Merge Sort.
Polynomial algorithm – O(n^c) – Strassen's Matrix Multiplication [O(n^2.81)], Bubble Sort [O(n^2)], Selection Sort [O(n^2)], Insertion Sort [O(n^2)], Bucket Sort [O(n^2)].
Exponential algorithm – O(c^n) – Tower of Hanoi [O(2^n)].
Factorial algorithm – O(n!) – Determinant Expansion by Minors, brute-force search algorithm for the Traveling Salesman Problem.



Big-Oh Notation
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that f(n) ≤ c·g(n) for n ≥ n0.
Example: 2n + 10 is O(n)
 2n + 10 ≤ cn
 (c − 2)n ≥ 10
 n ≥ 10/(c − 2)
 Pick c = 3 and n0 = 10
[Figure: log-log plot of n, 2n + 10, and 3n versus n.]


Big-Oh Example
Example: the function n^2 is not O(n)
 n^2 ≤ cn
 n ≤ c
 The above inequality cannot be satisfied, since c must be a constant.
[Figure: log-log plot of n, 10n, 100n, and n^2 versus n.]


More Big-Oh Examples
7n − 2
 7n − 2 is O(n)
 need c > 0 and n0 ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n0
 this is true for c = 7 and n0 = 1
3n^3 + 20n^2 + 5
 3n^3 + 20n^2 + 5 is O(n^3)
 need c > 0 and n0 ≥ 1 such that 3n^3 + 20n^2 + 5 ≤ c·n^3 for n ≥ n0
 this is true for c = 4 and n0 = 21
3 log n + log log n
 3 log n + log log n is O(log n)
 need c > 0 and n0 ≥ 1 such that 3 log n + log log n ≤ c·log n for n ≥ n0
 this is true for c = 4 and n0 = 2
Big-Oh and Growth Rate
The big-Oh notation gives an upper bound on the
growth rate of a function
The statement “f(n) is O(g(n))” means that the growth
rate of f(n) is no more than the growth rate of g(n)
We can use the big-Oh notation to rank functions
according to their growth rate



Big-Oh Rules
If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
1. Drop lower-order terms
2. Drop constant factors
Use the smallest possible class of functions
 Say "2n is O(n)" instead of "2n is O(n^2)"
Use the simplest expression of the class
 Say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"



Asymptotic Algorithm Analysis
The asymptotic analysis of an algorithm determines the running time in big-Oh notation.
To perform the asymptotic analysis:
 We find the worst-case number of primitive operations executed as a function of the input size
 We express this function with big-Oh notation
Example:
 We determine that algorithm arrayMax executes at most 7n − 1 primitive operations
 We say that algorithm arrayMax "runs in O(n) time"
Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations.
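The arrayMax algorithm referenced above is not listed in the slides; a minimal sketch of a typical version, assuming a non-empty input array, is given below to make the linear operation count concrete.

# Minimal sketch of arrayMax (an assumed, typical formulation; not shown in
# the slides). It scans the array once, so the number of primitive operations
# grows linearly with n and the algorithm runs in O(n) time.
def array_max(data):
    current_max = data[0]          # assumes a non-empty array
    for i in range(1, len(data)):  # n - 1 iterations
        if data[i] > current_max:
            current_max = data[i]
    return current_max

print(array_max([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 9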



Computing Prefix Averages
We further illustrate asymptotic analysis with two algorithms for prefix averages.
The i-th prefix average of an array X is the average of the first (i + 1) elements of X:
 A[i] = (X[0] + X[1] + … + X[i]) / (i + 1)
Computing the array A of prefix averages of another array X has applications to financial analysis.
[Figure: bar chart comparing an example array X with its array A of prefix averages.]


Prefix Averages (Quadratic)
The following algorithm computes prefix averages in quadratic time by applying the definition.

Algorithm prefixAverages1(X, n)
 Input: array X of n integers
 Output: array A of prefix averages of X      #operations
 A ← new array of n integers                  n
 for i ← 0 to n − 1 do                        n
   s ← X[0]                                   n
   for j ← 1 to i do                          1 + 2 + … + (n − 1)
     s ← s + X[j]                             1 + 2 + … + (n − 1)
   A[i] ← s / (i + 1)                         n
 return A                                     1
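A runnable version of prefixAverages1 is sketched below (an illustrative addition; it follows the pseudocode above, using Python's float division).

# Illustrative Python version of prefixAverages1: applies the definition
# directly, recomputing each prefix sum from scratch, hence O(n^2) time.
def prefix_averages_1(X):
    n = len(X)
    A = [0.0] * n
    for i in range(n):
        s = X[0]
        for j in range(1, i + 1):
            s += X[j]
        A[i] = s / (i + 1)
    return A

print(prefix_averages_1([1, 2, 3, 4]))  # [1.0, 1.5, 2.0, 2.5]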
Arithmetic Progression
The running time of prefixAverages1 is O(1 + 2 + … + n).
The sum of the first n integers is n(n + 1) / 2.
 There is a simple visual proof of this fact.
Thus, algorithm prefixAverages1 runs in O(n^2) time.
[Figure: bar chart giving the visual proof that 1 + 2 + … + n = n(n + 1) / 2.]


Prefix Averages (Linear)
The following algorithm computes prefix averages in linear time by keeping a running sum.

Algorithm prefixAverages2(X, n)
 Input: array X of n integers
 Output: array A of prefix averages of X      #operations
 A ← new array of n integers                  n
 s ← 0                                        1
 for i ← 0 to n − 1 do                        n
   s ← s + X[i]                               n
   A[i] ← s / (i + 1)                         n
 return A                                     1

Algorithm prefixAverages2 runs in O(n) time.
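For comparison, a runnable sketch of prefixAverages2 (again an illustrative addition) keeps the running sum so that each prefix average costs constant time.

# Illustrative Python version of prefixAverages2: maintains a running sum,
# so each prefix average is computed in O(1) and the whole loop is O(n).
def prefix_averages_2(X):
    n = len(X)
    A = [0.0] * n
    s = 0
    for i in range(n):
        s += X[i]
        A[i] = s / (i + 1)
    return A

print(prefix_averages_2([1, 2, 3, 4]))  # [1.0, 1.5, 2.0, 2.5]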
Math you need to Review
Logarithms and Exponents
properties of logarithms:
 log_b(xy) = log_b x + log_b y
 log_b(x/y) = log_b x − log_b y
 log_b(x^a) = a log_b x
 log_b a = log_x a / log_x b
properties of exponentials:
 a^(b+c) = a^b · a^c
 a^(bc) = (a^b)^c
 a^b / a^c = a^(b−c)
 b = a^(log_a b)
 b^c = a^(c·log_a b)
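These identities are easy to spot-check numerically; the sketch below (an illustrative addition) verifies a few of them with Python's math module on sample values.

import math

# Illustrative numeric spot-check of the logarithm and exponent identities.
b, x, y, a, c = 2.0, 8.0, 4.0, 3.0, 5.0

assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
assert math.isclose(math.log(x ** a, b), a * math.log(x, b))
assert math.isclose(math.log(a, b), math.log(a, x) / math.log(b, x))
assert math.isclose(a ** (b + c), a ** b * a ** c)
assert math.isclose(x ** c, a ** (c * math.log(x, a)))
print("all identities hold on the sample values")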



Some examples
Time Complexity
[Table: best-case, average-case, and worst-case time complexities of common sorting techniques.]



Analysis of the Binary Search

Algorithm: Binary-Search(numbers[], x, l, r)
 if l = r then
   return l
 else
   m := ⌊(l + r) / 2⌋
   if x ≤ numbers[m] then
     return Binary-Search(numbers[], x, l, m)
   else
     return Binary-Search(numbers[], x, m+1, r)
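A runnable sketch of this recursive search (an illustrative addition mirroring the pseudocode above) is shown below; it assumes numbers is sorted in ascending order.

# Illustrative Python version of the recursive Binary-Search pseudocode.
# Each call halves the index range [l, r], which is what gives the
# logarithmic running time analysed on the next slide.
def binary_search(numbers, x, l, r):
    if l == r:
        return l
    m = (l + r) // 2
    if x <= numbers[m]:
        return binary_search(numbers, x, l, m)
    return binary_search(numbers, x, m + 1, r)

numbers = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(numbers, 16, 0, len(numbers) - 1))  # prints index 4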



Analysis of the Binary Search

The running time of Binary-Search satisfies the recurrence

 T(N) = c              if N = 1
 T(N) = T(N/2) + 1     otherwise
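Unrolling this recurrence (assuming, for simplicity, that N is a power of 2) shows where the logarithm comes from:

 T(N) = T(N/2) + 1
      = T(N/4) + 2
      = …
      = T(1) + log2 N
      = c + log2 N

Hence the running time of Binary-Search is O(log N).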
