
19CCE202 Data Structures and Algorithms

LECTURE 7 – ALGORITHMS AND ANALYSIS – PART 2&3

Dr. R. Ramanathan
Associate Professor
Department of Electronics and Communication Engineering
Amrita School of Engineering, Coimbatore
[email protected]
Objective
To understand the rate of growth for algorithm complexity, the asymptotic analysis of algorithms,
and the Big-O, Omega-Ω and Theta-Θ notations.

Key concepts
• Rate of growth [CO03]
• Big-O notation [CO03]
• Omega-Ω notation [CO03]
• Theta-Θ notation [CO03]
• Rate of growth: geometric meaning [CO03]
• Asymptotic analysis [CO03]
What is Rate of Growth?

The rate at which the running time increases as a function of input size is called the rate of
growth.
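As a small illustrative sketch (not from the slides; the function names are ours), counting basic
operations shows how running time grows as the input grows:

# Compare how the number of basic operations grows with input size n
# for a linear-time scan and a quadratic-time pairwise comparison.
def linear_steps(n):
    return n            # one pass over the input

def quadratic_steps(n):
    return n * n        # compare every pair of elements

for n in (10, 100, 1000):
    print(f"n={n}: linear={linear_steps(n)}, quadratic={quadratic_steps(n)}")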
Big-O (Big Oh) Notation [Upper Bounding Function]

This notation gives the tight upper bound of the given function. Generally, it is represented as
f(n) = O(g(n)). That means, at larger values of n, the upper bound of f(n) is g(n).

The O notation is defined as O(g(n)) = {f(n): there exist positive constants c and n0 such that
0 ≤ f(n) ≤ cg(n) for all n ≥ n0}. g(n) is an asymptotic tight upper bound for f(n).

Our objective is to give the smallest rate of growth g(n) which is greater than or equal to the
given algorithm's rate of growth f(n). Generally we discard lower values of n. That means the
rate of growth at lower values of n is not important.

For example, if f(n) = n⁴ + 100n² + 10n + 50 is the given algorithm, then n⁴ is g(n). That
means g(n) gives the maximum rate of growth for f(n) at larger values of n.
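A quick numerical sanity check of the definition (a sketch; c = 2 and n0 = 11 are one workable
choice of constants, not the only one):

# Check 0 <= f(n) <= c*g(n) for all tested n >= n0,
# where f(n) = n^4 + 100n^2 + 10n + 50 and g(n) = n^4.
def f(n):
    return n**4 + 100*n**2 + 10*n + 50

def g(n):
    return n**4

c, n0 = 2, 11  # assumed constants for illustration
assert all(0 <= f(n) <= c * g(n) for n in range(n0, 1000))
print("f(n) = O(n^4) holds over the tested range with c=2, n0=11")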
Omega- Ω Notation [Lower Bounding Function]

Similar to the O discussion, this notation gives the tight lower bound of the given algorithm
and we represent it as f(n) = Ω(g(n)). That means, at larger values of n, the tight lower bound
of f(n) is g(n).

The Ω notation can be defined as Ω(g(n)) = {f(n): there exist positive constants c and n0 such
that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0}. g(n) is an asymptotic tight lower bound for f(n).

Our objective is to give the largest rate of growth g(n) which is less than or equal to the given
algorithm’s rate of growth f(n).

For example, if f(n) = 100n² + 10n + 50, then g(n) = n² and f(n) = Ω(n²).
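Applying the definition to this example, with c = 100 and n0 = 1 (one valid choice among many):

0 ≤ 100n² ≤ 100n² + 10n + 50 = f(n) for all n ≥ 1,

so f(n) = Ω(n²).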


Theta-Θ Notation [Order Function]

This notation decides whether the upper and lower bounds of a given function (algorithm) are
the same.

The Θ notation is defined as Θ(g(n)) = {f(n): there exist positive constants c1, c2
and n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0}. g(n) is an asymptotic tight bound for
f(n). Θ(g(n)) is the set of functions with the same order of growth as g(n).

The average running time of an algorithm is always between the lower bound and the upper
bound. If the upper bound (O) and lower bound (Ω) give the same result, then the Θ notation
will also have the same rate of growth.

For example, let us assume that f(n) = 10n + n is the expression. Then, its tight upper bound
g(n) is O(n), and the rate of growth in the best case is also g(n) = O(n); since both bounds
agree, f(n) = Θ(n).
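Applying the definition to the earlier example f(n) = 100n² + 10n + 50, one valid choice of
constants (a sketch; many choices work) is c1 = 100, c2 = 160 and n0 = 1:

0 ≤ 100n² ≤ 100n² + 10n + 50 ≤ 160n² for all n ≥ 1,

so f(n) = Θ(n²).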
Rate of growth: Big-O, Omega-Ω, Theta-Θ

[Figure (not reproduced): growth curves illustrating the Big-O, Omega-Ω and Theta-Θ bounds
around f(n)]

• Rate of growth at lower values of n is not important.
• Below n0 (the threshold for the given function), the rate of growth could be different.
• The general focus is on the upper bound (O).
• The lower bound (Ω) is of little practical importance.
• The Θ notation applies when the upper bound (O) and lower bound (Ω) are the same.
Why is it called Asymptotic Analysis?

• In every case, for a given function f(n), we try to find another function g(n)
which approximates f(n) at higher values of n.
• That is, g(n) is a curve which approximates f(n) at higher values of n.
• In mathematics, we call such a curve an asymptotic curve. In other terms, g(n) is the
asymptotic curve for f(n). For this reason, we call algorithm analysis asymptotic
analysis.
Objective
To understand the asymptotic analysis of algorithms and to gain deeper insight into Big-O
notation and its common complexity classes through visualization.

Key concepts
• Big-O notation [CO03]
– Common complexity classes
• Big-O visualization [CO03]
• Big-O physical implication [CO03]
• Insight into Big-O asymptotic analysis [CO03]
Big Oh notation

• Not interested in the actual function C(n) that describes the time complexity of an
algorithm in terms of the problem size n, but just its complexity class.
• Ignores any constant overheads and small constant factors, and just tells us about the
principal growth of the complexity function with problem size.
• Tells us something about the performance of the algorithm on large numbers of items.

Common complexity classes (in increasing order) are the following:

O(1), pronounced 'Oh of one', or constant complexity;
O(log₂ log₂ n), 'Oh of log log en';
O(log₂ n), 'Oh of log en', or logarithmic complexity;
O(n), 'Oh of en', or linear complexity;
O(n log₂ n), 'Oh of en log en';
O(n²), 'Oh of en squared', or quadratic complexity;
O(n³), 'Oh of en cubed', or cubic complexity;
O(2ⁿ), 'Oh of two to the en', or exponential complexity.
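To make the ordering concrete, a small sketch that tabulates these classes for a few input
sizes (the chosen n values are illustrative only):

import math

# Tabulate the common complexity classes for a few input sizes.
classes = [
    ("log2 n",   lambda n: math.log2(n)),
    ("n",        lambda n: n),
    ("n log2 n", lambda n: n * math.log2(n)),
    ("n^2",      lambda n: n**2),
    ("2^n",      lambda n: 2**n),
]
for n in (10, 20, 40):
    row = ", ".join(f"{name}={fn(n):,.0f}" for name, fn in classes)
    print(f"n={n}: {row}")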
Big Oh notation - insight

• For run time complexity analysis we use big Oh notation extensively.
• It provides an abstract measurement by which we can judge the performance of algorithms
without using mathematical proofs.

O(1) constant: the operation doesn't depend on the size of its input.
O(n) linear: the run time complexity is proportional to the size of n.
O(log n) logarithmic: normally associated with algorithms that break the problem into smaller
chunks per each invocation.
O(n log n): usually associated with an algorithm that breaks the problem into smaller chunks
per each invocation, and then takes the results of these smaller chunks and stitches them back
together.
O(n²) quadratic: e.g. bubble sort.
O(n³) cubic: very rare.
O(2ⁿ) exponential: incredibly rare.
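Two of these classes, sketched as standard textbook implementations (not taken from the
slides): binary search is O(log n) because each comparison halves the remaining range, while
bubble sort's nested passes give O(n²).

def binary_search(items, target):
    """O(log n): halve the search range on every comparison (items must be sorted)."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def bubble_sort(items):
    """O(n^2): nested passes compare and swap adjacent pairs."""
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items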
Big-O Visualization

• O(g(n)) is the set of functions with smaller or the same order of growth as g(n).
• For example, O(n²) includes O(1), O(n), O(n log n), etc.

Note: Analyze the algorithms at larger values of n only. What this means is, below n0 we do
not care about the rate of growth.
Big Oh notation - physical implication

[Table (not reproduced): approximate running times for each complexity class, assuming a MIPS
machine, i.e. a million instructions per second]
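Since the slide's table is not reproduced, a rough back-of-the-envelope sketch (assuming one
million basic steps per second and one step per instruction, both illustrative assumptions)
conveys the idea:

import math

# Rough running times at 1 MIPS (10**6 basic steps per second).
MIPS = 10**6
n = 100
for name, steps in [
    ("n",        n),
    ("n log2 n", n * math.log2(n)),
    ("n^2",      n**2),
    ("2^n",      2**n),
]:
    seconds = steps / MIPS
    print(f"{name:8s}: {seconds:.3g} seconds")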
Big Oh notation - insight

The biggest asset that Big Oh notation gives us is that it allows us to essentially
discard things like hardware.

Example: If you have two sorting algorithms, one with a quadratic run time, and the other with
a logarithmic run time, then the logarithmic algorithm will always be faster than the quadratic
one when the data set becomes suitably large. This applies even if the quadratic algorithm is
run on a machine that is far faster than the one running the logarithmic algorithm.

Why? Because big Oh notation isolates a key factor in algorithm analysis: growth.

An algorithm with a quadratic run time grows faster than one with a logarithmic run time. It is
generally said that at some point, as n → ∞, the logarithmic algorithm will become faster than
the quadratic algorithm.
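A minimal sketch of that claim (the 1000× speed factor is an arbitrary assumption standing in
for "far faster hardware"): even if the quadratic algorithm's machine does 1000 steps in the
time the other machine does one, the logarithmic algorithm still wins once n is large enough.

import math

# Quadratic algorithm on a machine 1000x faster vs. logarithmic on a slow one.
FAST = 1000  # assumed hardware advantage of the quadratic machine (illustrative)
for n in (10, 100, 10_000, 1_000_000):
    quadratic_time = n**2 / FAST       # quadratic steps, scaled by the faster machine
    logarithmic_time = math.log2(n)    # logarithmic steps on the slower machine
    winner = "logarithmic" if logarithmic_time < quadratic_time else "quadratic"
    print(f"n={n:>9,}: quadratic={quadratic_time:,.1f}  log={logarithmic_time:.1f}  -> {winner}")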
