Asymptotic Notations Notes

The document discusses time complexity in algorithms, detailing worst, average, and best case scenarios. It outlines rules for counting operations in loops, nested loops, and consecutive program fragments, as well as asymptotic notations such as Big O, Omega, and Theta for analyzing algorithm efficiency. Additionally, it touches on recursion, base and recursive cases, and provides examples and exercises for determining time complexity.

Time Complexity

The time complexity is a function that gives the amount of time required by an algorithm to run to completion.
• Worst case time complexity: It is the function defined by the
maximum amount of time needed by an algorithm for an input
of size n.
• Average case time complexity: The average-case running time
of an algorithm is an estimate of the running time for an
“average” input. Computation of average-case running time
entails knowing all possible input sequences, the probability
distribution of occurrence of these sequences, and the running
times for the individual sequences.
• Best case time complexity: It is the minimum amount of time
that an algorithm requires for an input of size n. (Linear search,
sketched below, illustrates all three cases.)
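For example, a minimal sketch of linear search in C (added here as an illustration; the function name and comments are not from the original notes):

// Returns the index of key in a[0..n-1], or -1 if key is not present.
int linearSearch(int a[], int n, int key)
{
    for (int i = 0; i < n; i++)     // basic operation: the comparison a[i] == key
        if (a[i] == key)
            return i;               // best case: key is at a[0] -- one comparison, O(1)
    return -1;                      // worst case: key is absent -- n comparisons, O(n)
}
// Average case: if the key is equally likely to be at any of the n positions,
// a successful search makes about n/2 comparisons, which is still O(n).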
There are four rules to count the operations:
Rule 1: for loops - the size of the loop times the
running time of the body
• The running time of a for loop is at most the running
time of the statements inside the loop times the
number of iterations.
for( i = 0; i < n; i++)
sum = sum + i;
a. Find the running time of statements when
executed only once:

• The statements in the loop heading have a fixed number of operations, hence they have a constant running time O(1) when executed only once.
• The statement in the loop body has a fixed number of operations, hence it has a constant running time O(1) when executed only once.
b. Find how many times each statement is
executed.
• for( i = 0; i < n; i++) // i = 0; executed only once: O(1)
• // i < n; n + 1 times O(n)
• // i++ n times O(n)
• // total time of the loop heading:
• // O(1) + O(n) + O(n) = O(n)
• sum = sum + i; // executed n times, O(n)
• The loop heading plus the loop body will give: O(n) + O(n) =
O(n).
• Loop running time is: O(n)
• Mathematical analysis of how many times the
statements in the body are executed

• If
a) the size of the loop is n (loop variable runs
from 0, or some fixed constant, to n) and
b) the body has constant running time (no
nested loops) then the time is O(n)
Rule 2: Nested loops – the product of the size of the
loops times the running time of the body

• The total running time is the running time of the inside statements times the product of the sizes of all the loops.
sum = 0;
for( i = 0; i < n; i++)
for( j = 0; j < n; j++)
sum++;
• Applying Rule 1 for the nested loop (the ‘j’ loop) we get O(n)
for the body of the outer loop.
• The outer loop runs n times, therefore the total time for the
nested loops will be
• O(n) * O(n) = O(n*n) = O(n^2)
Analysis:
What happens if the inner loop does not start from 0?
sum = 0;
for( i = 0; i < n; i++)
for( j = i; j < n; j++)
sum++;

• Here, the number of the times the inner loop is executed depends on
the value of i
• i = 0, inner loop runs n times
• i = 1, inner loop runs (n-1) times
• i = 2, inner loop runs (n-2) times
• …
• i = n – 2, inner loop runs 2 times
• i = n – 1, inner loop runs once.
• Thus we get: (1 + 2 + … + n) = n*(n+1)/2 = O(n^2)
General rule for nested loops:
• Running time is the product of the size of the loops times the
running time of the body.
Example:
sum = 0;
for( i = 0; i < n; i++)
for( j = 0; j < 2*n; j++)
sum++;
• We have one operation inside the loops, and the product of the sizes is 2n^2.
• Hence the running time is O(2n^2) = O(n^2).
• Note: if the body contains a function call, its running time has
to be taken into consideration
sum = 0;
for( i = 0; i < n; i++)
for( j = 0; j < n; j++)
sum = sum + function(sum);
• Assume that the running time of function(sum) is known to be O(log(n)).
• Then the total running time will be O(n^2 * log(n)).
Rule 3: Consecutive program fragments
• The total running time is the maximum of the running time of
the individual fragments
sum = 0;
for( i = 0; i < n; i++)
sum = sum + i;
sum = 0;
for( i = 0; i < n; i++)
for( j = 0; j < 2*n; j++)
sum++;
• The first loop runs in O(n) time, the second in O(n^2) time; the maximum is O(n^2).
Rule 4: If statement
• if C
S1;
else
S2;
• The running time is the maximum of the running times of S1
and S2.
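For example, a minimal sketch (the names flag, sum, and n are illustrative placeholders, not from the original notes):

if (flag)
{
    for (i = 0; i < n; i++)    // S1: the loop body runs n times, O(n)
        sum = sum + i;
}
else
    sum = 0;                   // S2: a single assignment, O(1)
// Running time of the whole if statement: max(O(n), O(1)) = O(n)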
Summary
• Steps in analysis of non-recursive algorithms:
• Decide on parameter n indicating input size
• Identify algorithm’s basic operation
• Check whether the number of times the basic operation is
executed depends on some additional property of the input. If so,
determine worst, average, and best case for input of size n
• Count the number of operations using the rules above.
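As an illustration of these steps, here is a short sketch (added for illustration, not taken from the original notes) analyzing a function that finds the maximum element of an array:

// Parameter indicating input size: n, the number of array elements
// Basic operation: the comparison a[i] > max
int findMax(int a[], int n)
{
    int max = a[0];
    for (int i = 1; i < n; i++)     // the comparison executes n - 1 times
        if (a[i] > max)
            max = a[i];
    return max;
}
// The comparison is executed n - 1 times regardless of the input values,
// so the best, average, and worst cases coincide and the running time is O(n).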
ASYMPTOTIC NOTATIONS
• The main idea of asymptotic analysis is to have a measure of the
efficiency of algorithms that doesn't depend on machine-specific
constants, and that doesn't require the algorithms to be implemented
and the running times of the programs to be compared.
• Asymptotic notations are mathematical tools to represent time
complexity of algorithms for asymptotic analysis.
• The following 3 asymptotic notations are mostly used to
represent time complexity of algorithms.
Big O Notation:
• The Big O notation defines an upper bound of an algorithm; it
bounds a function only from above. For example,
• consider the case of Insertion Sort. It takes linear time in the best
case and quadratic time in the worst case.
• We can safely say that the time complexity of Insertion Sort is
O(n^2). Note that O(n^2) also covers linear time.
• If we use Θ notation to represent the time complexity of Insertion
Sort, we have to use two statements for the best and worst cases:
1. The worst case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).
• The Big O notation is useful when we only have an upper bound
on the time complexity of an algorithm.
• Many times we can easily find an upper bound by simply looking
at the algorithm.
• For example, consider the following expression: log n = O(n)
• Let f(n) = log n, g(n) = n
When,
• n = 1      f(n) = 0      g(n) = 1
• n = 10     f(n) = 3.3    g(n) = 10
• n = 100    f(n) = 6.64   g(n) = 100
• n = 1000   f(n) = 10     g(n) = 1000

Hence,
• 0 <= f(n) <= c*g(n) for all n >= n0 (here c = 1 and n0 = 1 work)

• f(n) = O(g(n))
Ω Notation:
• Just as Big O notation provides an asymptotic upper bound on
a function, Ω notation provides an asymptotic lower bound.
• Ω Notation can be useful when we have lower bound on time
complexity of an algorithm.
• As discussed earlier, the best case performance of an algorithm is
generally not very useful; hence
• the Omega notation is the least used notation among all three.
• For example, consider the following expression: n = Ω(log n)
• Let f(n) = n, g(n) = log n
• When,
• n = 1      f(n) = 1      g(n) = 0
• n = 10     f(n) = 10     g(n) = 3.3
• n = 100    f(n) = 100    g(n) = 6.64
• n = 1000   f(n) = 1000   g(n) = 10

Hence,
• 0 <= c*g(n) <= f(n) for all n >= n0 (here c = 1 and n0 = 1 work)

• f(n) = Ω(g(n))
Θ Notation:
• The theta notation bounds a function from above and below, so
it defines exact asymptotic behavior.
• A simple way to get the Theta notation of an expression is to drop
the lower order terms and ignore the leading constants.
• For example, consider the following expression: 3n^3 + 6n^2 +
6000 = Θ(n^3)

• Dropping lower order terms is always fine because there will
always be an n0 after which Θ(n^3) beats Θ(n^2), irrespective of
the constants involved.
• For a given function g(n), Θ(g(n)) denotes the following set of
functions:
• Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0
such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }
• The above definition means, if f(n) is theta of g(n), then the
value f(n) is always between c1*g(n) and c2*g(n) for large
values of n (n >= n0).
• The definition of theta also requires that f(n) must be non-
negative for values of n greater than n0.
• For example, consider the following expression: 6n^2 + 8 = Θ(n^2)
• Let f(n) = 6n^2 + 8, and with g(n) = n^2 take c1 = 4 and c2 = 10,
so that g1(n) = c1*g(n) = 4n^2 and g2(n) = c2*g(n) = 10n^2.
When,
• n = 1      g1(n) = 4        f(n) = 14      g2(n) = 10
• n = 10     g1(n) = 400      f(n) = 608     g2(n) = 1000
• n = 100    g1(n) = 40000    f(n) = 60008   g2(n) = 100000

Hence,
• 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 (the inequalities hold from n0 = 2 onward)

• f(n) = Θ(g(n))
Properties of Big-O Notation

• Fact 1 (Transitivity)
– If f(n)=O(g(n)) and g(n)=O(h(n)), then f(n)=O(h(n))
– Prove it (done in class)
• Fact 2
– If f(n)=O(h(n)) and g(n)=O(y(n)), then
f(n)+g(n) = O(max(h(n), y(n)))
– If f(n)=O(h(n)) and g(n)=O(y(n)), then
f(n)*g(n) = O(h(n)*y(n))
• Fact 3
– The function a*n^k = O(n^k), for any constant a
• Fact 4
– The function n^k = O(n^(k+j)), for any positive j
Properties of Big-O Notation (cont’d)
• It follows from those facts that every polynomial is big-O of n
raised to its largest power:

f(n) = a_k*n^k + a_(k-1)*n^(k-1) + . . . + a_1*n + a_0 = O(n^k)

• Fact 5
– If f(n) = c*g(n) for some constant c, then f(n) = O(g(n))
Rate of Growth functions
Exercises
1. Arrange the functions according to their order of growth:
O(n), O(2^n), O(n^2), O(log n), O(n^3), O(n log n), O(1)
2. True or False:
3n^2 log n = Θ(n^2)
3. Prove the inequality:
3n^3 + 6n^2 + 6000 = Θ(n^3)
Exercise
Find Time Complexity of the algorithms
1. for (i = 0 to n-1) do
       for (j = i+1 to n-1)
           if (A[i] == A[j])
               return false
2. Matrix Multiplication
3. for (i = 0; i < n; i = i+2)
       sum = sum + i;
4. while (n > 1)
   {
       count = count + 1;
       n = n / 2;
   }
   return count
Recursion
• Any function that calls itself is a recursive function.
• Recursion is used when a large task can be broken down into
repetitive subtasks.
Base Case
• The routine will come across a subtask that it can handle without
calling itself, known as the base case. It stops the recursion.
Recursive Case
• The routine has to call itself to complete its subtask.
Advantages
• More elegant and requires fewer variables, which makes the
program cleaner.
• Replaces complex nested code.
Disadvantages
• Harder to follow the logic of a recursive function
• Debugging is difficult
• Requires more memory
Example
Factorial(n)
{
    if (n == 0)
        return 1
    else
        return n * Factorial(n-1)
}
Recurrence Equation
• An equation that defines a sequence recursively.
• It is normally of the following form:
T(n) = T(n-1) + n   if n > 0
T(0) = 0
Deriving Recurrence Equation
Factorial(n)
{
    if (n == 0)
        return 1
    else
        return n * Factorial(n-1)
}
• Basic Operation is Multiplication
• M(n) = M(n-1) + 1   if n > 0
• M(0) = 0
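A short backward-substitution derivation (added here as a sketch; not in the original notes) shows how this recurrence is solved:
M(n) = M(n-1) + 1
     = [M(n-2) + 1] + 1 = M(n-2) + 2
     = ...
     = M(n-i) + i
     = M(0) + n = 0 + n = n
So Factorial(n) performs n multiplications, i.e. its time complexity is Θ(n).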
Binary Search
// initially called with low = 0, high = N – 1
BinarySearch_Right(A[0..N-1], value, low, high) {
if (high < low)
return low
else
{
mid = low + ((high - low) / 2)
if (A[mid] == value)
return mid
else if (A[mid] > value)
return BinarySearch_Right(A, value, low, mid-1)
else
return BinarySearch_Right(A, value, mid+1, high)
}
}
Recurrence Equation
Basic operation is the comparison
C(n) = C(n/2) + 1   if n > 1
C(1) = 1
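A short derivation (added as a sketch, assuming n is a power of 2 for simplicity; not in the original notes):
C(n) = C(n/2) + 1
     = [C(n/4) + 1] + 1 = C(n/4) + 2
     = ...
     = C(n/2^i) + i
     = C(1) + log2(n) = log2(n) + 1
So binary search makes O(log n) comparisons.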
