Time complexity

The document discusses the complexity of algorithms, focusing on time and space complexity as measures of efficiency. It outlines the steps for analyzing algorithms, the factors affecting running time, and the limitations of experimental studies. Additionally, it explains asymptotic complexity, including Big O, Big Omega, and Big Theta notations for classifying algorithm performance.


Complexity of Algorithms

• There are often many different algorithms which can be used to solve the same
problem. Thus, it makes sense to develop techniques that allow us to:
• compare different algorithms with respect to their “efficiency”
• choose the most efficient algorithm for the problem
• The complexity of any algorithmic solution to a problem is a measure of several
factors. Two important and general factors are:
• Time Complexity : the time it takes to execute.
• Space Complexity : the space (primary memory) it uses.
• We will focus on an algorithm’s efficiency with respect to time.
Analysis of Algorithms
• A complete analysis of the running time of an algorithm
involves the following steps:
• Implement the algorithm completely.
• Determine the time required for each basic operation.
• Identify unknown quantities that can be used to describe the frequency of execution of
the basic operations.
• Develop a realistic model for the input to the program.
• Analyze the unknown quantities, assuming the modelled input.
• Calculate the total running time by multiplying the time by the frequency for each
operation, then adding all the products.
Running Time of a Program
• Experimental Study
• Write a program that implements the algorithm
• Run the program with data sets of varying size and composition.
• Use a timing facility such as the clock() function from C's time.h or Java's System.currentTimeMillis() to get an accurate
measure of the actual running time (a sketch appears after this slide).
• Factors affecting running time
• Hardware
• Operating System
• Compiler
• Size of input
• Nature of Input
• Algorithm
• Which should be improved?
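Below is a minimal sketch of such an experimental study in C, using clock() from time.h to read CPU time. The measured function sum_till_n and the particular input sizes are illustrative assumptions, not taken from the slides; vary the sizes to see how the measured time grows.

/* Minimal experimental study: time one algorithm on inputs of varying size. */
#include <stdio.h>
#include <time.h>

long sum_till_n(long n)                      /* algorithm under measurement */
{
    long sum = 0;
    for (long i = 1; i <= n; i++)
        sum = sum + i;
    return sum;
}

int main(void)
{
    long sizes[] = {1000000L, 10000000L, 100000000L};    /* varying input sizes */
    for (int k = 0; k < 3; k++) {
        clock_t start = clock();                          /* CPU time before the run */
        volatile long s = sum_till_n(sizes[k]);           /* volatile: keep the call */
        clock_t end = clock();                            /* CPU time after the run */
        double ms = 1000.0 * (double)(end - start) / CLOCKS_PER_SEC;
        printf("n = %ld  result = %ld  time = %.2f ms\n", sizes[k], (long)s, ms);
    }
    return 0;
}

As the bullets above note, the numbers printed depend on the hardware, operating system and compiler, not only on the algorithm.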
Limitations
• Experimental studies have several limitations:
• It is necessary to implement and test the algorithm in order to determine its
running time.
• Experiments can be done only on a limited set of inputs, and may not be
indicative of the running time on other inputs not included in the experiment.
• In order to compare two algorithms, the same hardware and software
environments should be used.
Running Time of an Algorithm
• Depends upon
• Input Size
• Nature of Input
• To find running time of an algorithm
• count the number of basic/key/primitive operations/steps the algorithm performs.
• Ex. Comparisons in searching and sorting
• calculate how this number depends on the size of the input.
• A basic operation is an operation which takes a constant amount of time to execute.
• The time for other operations is much less than, or at most proportional to, the time for the basic
operations
• Machine independent
Complexity of Algorithms
• The complexity (or efficiency) of an algorithm is the number of basic operations it
performs. This number is a function of the input size n.
• Definition: The time complexity of an algorithm is the function f(n) which gives
the running time requirement of the algorithm in terms of size n of input data.
• Complexity function f(n) is found for three cases
• Best: minimum value of f(n) for any possible input
• Worst: maximum value of f(n) for any possible input
• Average: expected value of f(n)
• Ex: Linear search
• Best: 1, Worst: n, Average: (n+1)/2 (a sketch counting comparisons follows)
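As a hypothetical illustration of these counts (not part of the original slides), the C sketch below performs linear search and counts its key comparisons: a key sitting in the first position costs 1 comparison (best case) and a missing key costs n comparisons (worst case).

/* Linear search with an explicit counter for the basic operation (comparison). */
#include <stdio.h>

int linear_search(const int a[], int n, int key, long *comparisons)
{
    *comparisons = 0;
    for (int i = 0; i < n; i++) {
        (*comparisons)++;              /* one basic operation per element examined */
        if (a[i] == key)
            return i;                  /* found: best case is 1 comparison */
    }
    return -1;                         /* not found: worst case is n comparisons */
}

int main(void)
{
    int a[] = {7, 3, 9, 1, 5};
    long c;
    linear_search(a, 5, 7, &c);    printf("key at front : %ld comparison(s)\n", c);
    linear_search(a, 5, 42, &c);   printf("key missing  : %ld comparison(s)\n", c);
    return 0;
}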
WORST CASE COMPLEXITY
• We are usually interested in the worst case complexity: what are the most operations that might be
performed for a given problem size.
• Usually focus on worst case analysis
• Best case complexity has little use
• Average case is difficult to compute
• Worst case complexity is easier to compute
• Provides an upper bound on complexity, i.e., the complexity in every other case is no higher than the worst-case
complexity
• Usually close to the actual running time
• Crucial to real-time systems (e.g. air-traffic control, surgery)
Example 1
Swap (a, b)
{
temp ← a;
a ← b;
b ← temp;
}
Example 1
Swap (a, b)
{
temp ← a; ----------------------------------- 1
a ← b; ----------------------------------- 1
b ← temp; ----------------------------------- 1
}
Time Complexity:
T(n) = 1 + 1 + 1 = 3 => T(n) = O(1)
Space Complexity: a ------- 1, b ------- 1, temp ------- 1
S(n) = 1 + 1 + 1 = 3 words => S(n) = O(1)
Example 2
Algorithm: Sum_till_N(N)
sum ← 0;
for i ← 1 to N
sum ← sum + i;
end for
Example 2
Sum_till_N(N)
{
sum = 0;
for (i=1; i<=N;i++)
{
sum = sum + i;
}
return sum;
}
Example 2
Sum_till_N(N)
{
sum = 0; --------------------------------- 1
for (i=1; i<=N; i++) -------------------------- N+1 (the loop condition is tested N+1 times)
{
sum = sum + i; ------------------------- N
}
return sum; --------------------------- 1
}
Space Complexity: sum ------- 1, N ------- 1, i ------- 1
S(n) = 1 + 1 + 1 = 3 words => S(n) = O(1)

Total time taken by the algorithm = 1 + (N+1) + N + 1 = 2N + 3 => T(n) = O(N)


Example 3
Algorithm: Sum(A,n)
s ← 0;
for i ← 0 to n-1
s ← s + A[i]
end for
Example 3
Sum(A,n)
{
s=0;
for(i=0; i<n; i++)
{
s=s+A[i];
}
return s;
}
Example 3
Sum(A,n)
{
s=0; -----------------------------1
for(i=0;i<n;i++) ---------- n+1
{
s=s+A[i]; ---------- n
}
return s; --------------------------1
}
Example 3
Time Complexity: 1 + (n+1) + n + 1 = 2n + 3 => T(n) = O(n)
Space Complexity:
A -------- n
n ---------1
s ---------1
i ---------1
Space Complexity S(n)= n+1+1+1= n+3=> O(n)
Example 4
Algorithm: Add(A, B, n)
for i ← 0 to n-1
for j ← 0 to n-1
C[i][j] ← A[i][j] + B[i][j]
end for
end for
Example 4
Assume that the 2-D arrays (matrices) are of size n x n
Add(A, B, n)
{
for (i=0;i<n;i++)
{
for (j=0;j<n;j++)
{
C[i] [j]= A[i][j]+B[i][j]
}
}
}
Example 4
Assume that the 2-D arrays (matrices) are of size n x n
Add(A, B, n)
{
for (i=0;i<n;i++) ----------------------n+1
{
for (j=0;j<n;j++) --------------n
{
C[i] [j]= A[i][j]+B[i][j] --------------n
}
}
}
Example 4
Assume that the 2-D arrays (matrices) are of size n x n
Add(A, B, n)
{
for (i=0;i<n;i++) ----------------------n+1
{
for (j=0;j<n;j++) --------------n*(n+1)
{
C[i] [j]= A[i][j]+B[i][j] --------------n*n
}
}
}
Example 4
Time Complexity T(n): (n+1) + (n*(n+1)) + (n*n) => n + 1 + n² + n + n²
=> 2n² + 2n + 1
Time Complexity T(n) = O(n²)
Space Complexity:
A -------- n²
B --------- n²
C --------- n²
n --------- 1
i --------- 1
j --------- 1
Space Complexity S(n) = n² + n² + n² + 1 + 1 + 1 = 3n² + 3 => O(n²)
Example 5
for(i=0; i<n;i=i+2)
{
print i;
}
Example 5
for(i=0; i<n; i=i+2)-------------- n/2 + 1
{
print i; --------------------n/2
}
T(n)= O(n)
Types Of Time Functions
• O(1) – Constant
• O(log n) – Logarithmic
• O(n) – Linear
• O(n²) – Quadratic
• O(n³) – Cubic
• O(2^n) – Exponential
• These are also called classes of complexity.
Types Of Time Functions
• 1 < log n < √n < n < n log n < n² < n³ < …… < 2^n < 3^n < …… < n^n

        log n     n      n²     2^n     n^n
n = 1     0       1       1       2       1
n = 2     1       2       4       4       4
n = 4     2       4      16      16     256
n = 8     3       8      64     256     8^8 = 16,777,216
Note:
• log n, n (linear), n log n, n², n³ : polynomial time (easy or tractable)
• 2^n, n! : exponential time (hard or intractable)
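The values in the table above can be reproduced with a short C program; this is an illustrative sketch (not from the slides) using log2 and pow from math.h, so it must be linked with the math library (e.g. -lm).

/* Print growth rates log2 n, n^2, 2^n and n^n for a few sample values of n. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    int values[] = {1, 2, 4, 8};
    printf("%6s %8s %10s %12s %16s\n", "n", "log2 n", "n^2", "2^n", "n^n");
    for (int k = 0; k < 4; k++) {
        double n = values[k];
        printf("%6.0f %8.0f %10.0f %12.0f %16.0f\n",
               n, log2(n), n * n, pow(2.0, n), pow(n, n));
    }
    return 0;
}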
Complexity classes
[Figure: growth rates of some important complexity classes, showing curves for 2^n, n³, n², n log n, n (linear time) and log n as functions f(n) of the input size n.]

ASYMPTOTIC COMPLEXITY
• A time bound such as 2n² + 2n + 1 (Example 4) is said to “grow asymptotically” like n²
• This gives us an approximation of the complexity of the algorithm
• ASYMPTOTIC NOTATION
• Big Oh Notation: Upper bound
• Omega Notation: Lower bound
• Theta Notation: Tight bound
BIG OH NOTATION
• If f(n) and g(n) are two complexity functions, we say
f(n) = O(g(n))
(read "f(n) is order of g(n)", or "f(n) is big-O of g(n)")
if there are positive constants c and n0 such that
|f(n)| ≤ c ∙ |g(n)| for all n ≥ n0.
Examples:
• Is 17n² – 5 = O(n²)?

• Is 35n³ + 100 = O(n³)?

• Is 6 ∙ 2^n + n² = O(2^n)?
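Worked here for illustration (the slides leave these as questions), each bound can be verified directly from the definition:
• 17n² – 5 ≤ 17n² for all n ≥ 1, so 17n² – 5 = O(n²) with c = 17, n0 = 1.
• For n ≥ 5 we have 100 ≤ n³, so 35n³ + 100 ≤ 36n³, giving 35n³ + 100 = O(n³) with c = 36, n0 = 5.
• For n ≥ 4 we have n² ≤ 2^n, so 6 ∙ 2^n + n² ≤ 7 ∙ 2^n, giving 6 ∙ 2^n + n² = O(2^n) with c = 7, n0 = 4.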
Example
• For the functions f(n) = 2n + 6 and g(n) = n there are positive constants c and n0 such that:
f(n) ≤ c·g(n) for n ≥ n0
(for instance c = 3 and n0 = 6, since 2n + 6 ≤ 3n whenever n ≥ 6)

conclusion:
2n + 6 is O(n).
Another Example

On the other hand…

n² is not O(n), because there are no constants c and n0
such that n² ≤ cn for all n ≥ n0.
(No matter how large a c is chosen, there is an n big enough that n² > cn, namely any n > c.)
Comparing Functions
• As inputs get larger, any algorithm of a smaller order will be more efficient than an
algorithm of a larger order
Big-Oh vs. Actual Running Time
• Example 1: let algorithms A and B have running times TA(n) = 20n ms and
TB(n) = 0.1n log2 n ms
• In the “Big-Oh” sense, A is better than B…
• But: on which data volumes can A outperform B?
TA(n) < TB(n) if 20n < 0.1n log2 n, or
log2 n > 200, that is, when n > 2^200 ≈ 10^60 !
• Thus, in all practical cases B is better than A…
Big-Oh vs. Actual Running Time
• Example 2: let algorithms A and B have running times TA(n) = 20n ms and
TB(n) = 0.1n² ms
• In the “Big-Oh” sense, A is better than B…
• But: on which data volumes does A outperform B?
TA(n) < TB(n) if 20n < 0.1n², or n > 200
• Thus A is better than B in most practical cases, except for n < 200, when B
becomes faster…
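A brute-force check of the crossover point in Example 2 can be written in a few lines of C; this is an illustrative sketch (not from the slides) using the running-time formulas given above.

/* Find the smallest n at which TA(n) = 20n ms beats TB(n) = 0.1*n*n ms. */
#include <stdio.h>

int main(void)
{
    for (long n = 1; n <= 1000; n++) {
        double ta = 20.0 * n;          /* algorithm A: 20n ms    */
        double tb = 0.1 * n * n;       /* algorithm B: 0.1n^2 ms */
        if (ta < tb) {
            printf("A first outperforms B at n = %ld\n", n);   /* prints n = 201 */
            break;
        }
    }
    return 0;
}

(The crossover in Example 1 cannot be found this way: 2^200 is far beyond any loop counter, which is exactly the point of that example.)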
Useful Rules of BIG-OH NOTATION
• If the function f can be written as a finite sum of other functions, then the fastest
growing one determines the order of f (n).
Drop lower order terms and constant factors
7n + 3 is O(n)
8n² log n + 5n² + n is O(n² log n)
Big-Omega: Asymptotic lower bound
• The function f(n) is Ω(g(n)) iff there exist a real positive constant c > 0 and a
positive integer n0 such that f(n) ≥ c·g(n) for all n ≥ n0
• Big Omega is just the opposite of Big Oh
• It generalises the concept of “lower bound” (≥) in the same way as Big Oh
generalises the concept of “upper bound” (≤)
• If f(n) is O(g(n)) then g(n) is Ω(f(n))
Big-Omega
Example:
• Is 2^n = Ω(n^100)? Yes: when n is big enough, 2^n grows faster than n^100, so a
suitable c and n0 can be found.
Big-Theta: asymptotic tight bound

• The function f(n) is Θ(g(n)) iff there exist two real positive constants c1 > 0 and c2
> 0 and a positive integer n0 such that:
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0
• Whenever two functions, f and g, are of the same order, f(n) is Θ(g(n)), they are
each Big-Oh of the other: g(n) is O(f(n)) AND f(n) is O(g(n))
• g(n) is an asymptotic tight bound for f(n).

Big-Theta
Examples:
• Is 3n + 2 = Θ(n)?

• Is 3n + 2 = Θ(n²)?
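Worked here for illustration (the slides pose these as questions):
• 3n + 2 = Θ(n): since n ≤ 3n + 2 ≤ 5n for all n ≥ 1, the definition is satisfied with c1 = 1, c2 = 5 and n0 = 1.
• 3n + 2 ≠ Θ(n²): the upper bound holds (3n + 2 = O(n²)), but no constant c1 > 0 can make c1·n² ≤ 3n + 2 hold for all large n, because n² eventually outgrows every linear function; so 3n + 2 is not Ω(n²) and therefore not Θ(n²).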
Little-o notation
• f(n) is o(g(n)) when the upper bound is not asymptotically tight: for every constant c > 0
there is an n0 such that f(n) < c·g(n) for all n ≥ n0.
• Even though it is correct to say “7n + 3 is O(n³)”, a better statement is “7n + 3 is O(n)”; that is, one
should make the approximation as tight as possible. In these terms, 7n + 3 is o(n³) but not o(n).
Little-ω notation
• Similarly, little-ω denotes a lower bound that is not asymptotically tight.
• 4n² = Ω(n) is a correct lower bound but is not asymptotically tight, whereas
4n² = Ω(n²) is asymptotically tight; in little-ω terms, 4n² = ω(n) but 4n² ≠ ω(n²).
