Complexity of Algorithms

1) There are often multiple algorithms that can solve a problem. Algorithm complexity is measured by the amount of time and space required for a given input size. 2) Asymptotic analysis defines the mathematical bounds of an algorithm's performance. Big-O notation describes the worst-case time complexity as the input size increases. 3) Common time complexities include O(1) for constant time, O(n) for linear time, O(n^2) for quadratic time, and O(log n) for logarithmic time.


COMPLEXITY OF ALGORITHMS

ALGORITHMS

There is rarely a single algorithm that can solve a problem (i.e., one problem can usually be solved in multiple ways).

How can we express that one algorithm is better than another?
ALGORITHM COMPLEXITY

A measure of the amount of time and/or space required by an algorithm for an input of a given size (n).

Generally, we represent the efficiency of an algorithm that processes an input of size n as:

f(n) = efficiency of the algorithm


ASYMPTOTIC ANALYSIS

The asymptotic analysis of an algorithm refers to defining the mathematical bounds/framing of its performance.

Through this, we can infer the best-case, average-case, and worst-case scenarios of an algorithm.
BIG-O NOTATION

Big-O notation is the formal way of describing the upper bound or worst-case performance of an algorithm.

It can be used to describe the execution time required or the space used (e.g., in memory or on disk).

It is represented using asymptotic notation.


OTHER NOTATIONS

Big Omega notation (Ω) refers to the lower bound or best-case performance of an algorithm.

Big Theta notation (Θ) describes the tight bound or expected case of an algorithm.
O, OMEGA, AND THETA AS CONDITIONAL OPERATORS

Big O is like <=, i.e., the rate of growth of an algorithm is less than or equal to a specific value.

Big Omega is like >=, i.e., the rate of growth of an algorithm is greater than or equal to a specific value.

Big Theta is like ==, i.e., the rate of growth of an algorithm is equal to a specific value.
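The comparison-operator analogy above can be stated formally; a standard textbook definition (paraphrased, not taken from the slides) is:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 > 0 \text{ such that } f(n) \ge c\,g(n) \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
```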
O(1) CONSTANT TIME

O(1) describes an algorithm that will always execute in the same time (or
space) regardless of the size of the input data set.
x = n
if ( x > 0 ) {
    application code
    print x
}
O(n) LINEAR

O(n) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set.

x = n
while ( x > 0 ) {
    application code
    x = x - 1
}
O(n^2) QUADRATIC

O(n^2) represents an algorithm whose performance is directly proportional to the square of the size of the input data set.

x = n
while ( x > 0 ) {
    y = x
    while ( y > 0 ) {
        application code
        y = y - 1
    }
    x = x - 1
}
O(n^k) POLYNOMIAL

O(n^k) represents an algorithm whose performance is directly proportional to the k-th power of the size of the input data set.

x = n
while ( x > 0 ) {
    y = x
    while ( y > 0 ) {
        z = x
        while ( z > 0 ) {
            application code
            z = z - 1
        }
        y = y - 1
    }
    x = x - 1
}
O(2^n) EXPONENTIAL

O(2^n) denotes an algorithm whose growth doubles with each addition to the input data set.

int Fibonacci (int number)
{
    if (number <= 1) return number;
    return Fibonacci(number - 2) + Fibonacci(number - 1);
}
O(log n) LOGARITHMIC

O(log n) denotes an algorithm that repeatedly cuts the size of the remaining input (and thus the remaining work) in half at each step.

x = n
while ( x > 0 ) {
    x = x / 2
}
O(n log n) LINEAR LOGARITHMIC

O(n log n) denotes an algorithm that performs a logarithmic (halving) step for each of the n elements of the input, i.e., linear growth combined with a logarithmic inner step.

x = n
while ( x > 0 ) {
    y = n
    while ( y > 0 ) {
        y = y / 2
    }
    x = x - 1
}
BIG-O ANALYSIS

To derive Big-O from f(n), we do the following steps:

• In each term, set the coefficient of the term to 1

• Keep the largest term in the function and discard the others

Example:

f(n) = n ((n + 1) / 2) = n^2 / 2 + n / 2

Setting the coefficients to 1 gives n^2 + n; keeping the largest term gives O(n^2).