Time complexity
• There are often many different algorithms which can be used to solve the same
problem. Thus, it makes sense to develop techniques that allow us to:
• compare different algorithms with respect to their “efficiency”
• choose the most efficient algorithm for the problem
• The complexity of any algorithmic solution to a problem is a measure of several
factors. Two important and general ones are:
• Time complexity: the time it takes to execute.
• Space complexity: the space (primary memory) it uses.
• We will focus on an algorithm’s efficiency with respect to time.
Analysis of Algorithms
• A complete analysis of the running time of an algorithm
involves the following steps:
• Implement the algorithm completely.
• Determine the time required for each basic operation.
• Identify unknown quantities that can be used to describe the frequency of execution of
the basic operations.
• Develop a realistic model for the input to the program.
• Analyze the unknown quantities, assuming the modelled input.
• Calculate the total running time by multiplying the time by the frequency for each
operation, then adding all the products.
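• Written out, the last step is (with ti the running time of basic operation i and fi its frequency of
execution; this notation is introduced here only for illustration):
T(n) = t1·f1 + t2·f2 + … + tk·fk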
Running Time of Program
• Experimental Study
• Write a program that implements the algorithm
• Run the program with data sets of varying size and composition.
• Use a timing facility such as clock() from time.h (C) or System.currentTimeMillis() (Java) to get an accurate
measure of the actual running time, as in the sketch after this list.
• Factors affecting running time
• Hardware
• Operating System
• Compiler
• Size of input
• Nature of Input
• Algorithm
• Which should be improved?
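A minimal C sketch of such a timing experiment, using clock() from time.h; the algorithm being timed (a
simple summation loop) and all names are illustrative:

#include <stdio.h>
#include <time.h>

/* The algorithm under test: sum the integers 1..n (illustrative). */
long long sum_till_n(long long n) {
    long long sum = 0;
    for (long long i = 1; i <= n; i++)
        sum = sum + i;
    return sum;
}

int main(void) {
    long long n = 100000000LL;           /* one input size; vary this across runs */
    clock_t start = clock();             /* CPU time before the run  */
    long long result = sum_till_n(n);
    clock_t end = clock();               /* CPU time after the run   */
    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("n = %lld, sum = %lld, time = %f s\n", n, result, seconds);
    return 0;
}

Repeating the run for several values of n (on the same machine) gives the measured growth of the running time.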
Limitations
• Experimental studies have several limitations:
• It is necessary to implement and test the algorithm in order to determine its
running time.
• Experiments can be done only on a limited set of inputs, and may not be
indicative of the running time on other inputs not included in the experiment.
• In order to compare two algorithms, the same hardware and software
environments should be used.
Running Time of an Algorithm
• Depends upon
• Input Size
• Nature of Input
• To find running time of an algorithm
• count the number of basic/key/primitive operations/steps the algorithm performs.
• Ex. Comparisons in searching and sorting
• calculate how this number depends on the size of the input.
• A basic operation is an operation that takes a constant amount of time to execute.
• The time for other operations is much less than, or at most proportional to, the time for the basic
operations
• Machine independent
Complexity of Algorithms
• The complexity (or efficiency) of an algorithm is the number of basic operations it
performs. This number is a function of the input size n.
• Definition: The time complexity of an algorithm is the function f(n) which gives
the running time requirement of the algorithm in terms of size n of input data.
• Complexity function f(n) is found for three cases
• Best: minimum value of f(n) for any possible input
• Worst: maximum value of f(n) for any possible input
• Average: expected value of f(n)
• Example: linear search (comparisons are counted in the sketch after this list)
• Best: 1, Worst: n, Average: (n+1)/2
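A short C sketch of linear search that counts the key comparisons, the basic operation here; the counter
and the sample data are illustrative:

#include <stdio.h>

/* Returns the index of key in a[0..n-1], or -1; *comparisons counts the basic operation. */
int linear_search(const int a[], int n, int key, int *comparisons) {
    *comparisons = 0;
    for (int i = 0; i < n; i++) {
        (*comparisons)++;             /* one comparison of key against a[i]                   */
        if (a[i] == key)
            return i;                 /* best case: 1 comparison (key at the front)           */
    }
    return -1;                        /* worst case: n comparisons (key at the end or absent) */
}

int main(void) {
    int a[] = {7, 3, 9, 4, 1};
    int comparisons;
    int idx = linear_search(a, 5, 9, &comparisons);
    printf("found at index %d after %d comparisons\n", idx, comparisons);
    return 0;
}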
WORST CASE COMPLEXITY
• We are usually interested in the worst-case complexity: the largest number of operations that might be
performed for a given problem size.
• We usually focus on worst-case analysis:
• Best-case complexity has little use
• Average-case complexity is difficult to compute
• Worst-case complexity is easier to compute
• It provides an upper bound on complexity, i.e. the complexity in all other cases is no higher than the worst-case
complexity
• Usually close to the actual running time
• Crucial to real-time systems (e.g. air-traffic control, surgery)
Example 1
Swap (a, b)
{
temp ← a;
a ← b;
b ← temp;
}
Example 1
Swap (a, b)
{
temp ← a; ----------------------------------- 1
a ← b; ----------------------------------- 1
b ← temp; ----------------------------------- 1
}
Time complexity: T(n) = 1 + 1 + 1 = 3 ⇒ O(1)
Space complexity: a, b and temp take 1 word each, so S(n) = 1 + 1 + 1 = 3 words ⇒ O(1)
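For reference, the same routine as runnable C; passing the variables by pointer is an assumption, since the
pseudocode does not say how a and b are passed:

#include <stdio.h>

void swap(int *a, int *b) {
    int temp = *a;   /* 1 basic operation, 1 extra word of storage */
    *a = *b;         /* 1 basic operation */
    *b = temp;       /* 1 basic operation */
}

int main(void) {
    int x = 3, y = 7;
    swap(&x, &y);
    printf("x = %d, y = %d\n", x, y);   /* prints x = 7, y = 3 */
    return 0;
}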
Example 2
Algorithm: Sum_till_N(N)
sum ← 0;
for i ← 1 to N
sum ← sum + i;
end for
return sum;
Example 2
Sum_till_N(N)
{
sum = 0;
for (i = 1; i <= N; i++)
{
sum = sum + i;
}
return sum;
}
Example 2
Sum_till_N(N)
{
sum = 0; ---------------------------------1
for (i = 1; i <= N; i++) --------------------1 + (N+1) + N
{
sum = sum + i; -------------------------N
}
return sum; ---------------------------1
}
Time complexity: T(n) = 1 + (2N + 2) + N + 1 = 3N + 4 ⇒ O(N)
Space complexity: sum, i and N take 1 word each, so S(n) = 1 + 1 + 1 = 3 words ⇒ O(1)
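As a cross-check, a small C sketch can count the basic operations at run time; the counter placement
mirrors the tallies above, and the names are illustrative:

#include <stdio.h>

long long ops = 0;                     /* counts basic operations */

long long sum_till_n(long long n) {
    long long sum = 0;  ops++;         /* 1: initialise sum    */
    long long i = 1;    ops++;         /* 1: initialise i      */
    while (1) {
        ops++;                         /* N+1: tests of i <= n */
        if (!(i <= n)) break;
        sum = sum + i;  ops++;         /* N: loop body         */
        i = i + 1;      ops++;         /* N: increment         */
    }
    ops++;                             /* 1: return            */
    return sum;
}

int main(void) {
    long long n = 10;
    long long s = sum_till_n(n);
    printf("sum = %lld, ops = %lld, 3N + 4 = %lld\n", s, ops, 3 * n + 4);
    return 0;
}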
        log n    n      n²      2ⁿ       nⁿ
n = 1     0      1      1       2        1
n = 2     1      2      4       4        4
n = 4     2      4      16      16       256
n = 8     3      8      64      256      16777216
Note:
• Polynomial time (easy or tractable): log n, n (linear), n log n, n², n³
• Exponential time (hard or intractable): 2ⁿ, n!
Complexity classes
• Growth of f(n) for the common complexity classes, from fastest-growing to slowest: 2ⁿ, n³, n², n log n, n (linear time), log n
• Is 35n³ + 100 = O(n³)?
• Is 6·2ⁿ + n² = O(2ⁿ)?
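One way to answer both is to pick the constants by hand (the particular c and n0 values are chosen here for illustration):
• 35n³ + 100 ≤ 36n³ whenever n³ ≥ 100, i.e. for n ≥ 5, so yes: 35n³ + 100 = O(n³) with c = 36, n0 = 5.
• 6·2ⁿ + n² ≤ 7·2ⁿ whenever n² ≤ 2ⁿ, which holds for all n ≥ 4, so yes: 6·2ⁿ + n² = O(2ⁿ) with c = 7, n0 = 4.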
Example
• For f(n) = 2n + 6 and g(n) = n, there are positive constants c and n0 such that
f(n) ≤ c·g(n) for n ≥ n0 (for instance c = 4 and n0 = 3, since 2n + 6 ≤ 4n once n ≥ 3)
• Conclusion: 2n + 6 is O(n).
Comparing Functions
• As inputs get larger, any algorithm of a smaller order will be more efficient than an
algorithm of a larger order
Big-Oh vs. Actual Running Time
• Example 1: let algorithms A and B have running times TA(n) = 20n ms and TB(n) = 0.1·n·log₂n ms
• In the “Big-Oh” sense, A is better than B…
• But: for which input sizes does A actually outperform B?
TA(n) < TB(n) if 20n < 0.1·n·log₂n, i.e. log₂n > 200, that is, when n > 2²⁰⁰ ≈ 1.6 × 10⁶⁰!
• Thus, in all practical cases B is better than A…
Big-Oh vs. Actual Running Time
• Example 2: let algorithms A and B have running times TA(n) = 20n ms and TB(n) = 0.1n² ms
• In the “Big-Oh” sense, A is better than B…
• But: for which input sizes does A actually outperform B?
TA(n) < TB(n) if 20n < 0.1n², i.e. when n > 200
• Thus A is better than B in most practical cases, except for n < 200, when B
becomes faster… (the sketch below evaluates both pairs of formulas)
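A small C sketch that evaluates both pairs of formulas at a few input sizes, to make the crossovers visible
(the sample sizes are chosen here for illustration):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Example 1: TA(n) = 20n ms vs TB(n) = 0.1 n log2(n) ms -> crossover near n = 2^200 */
    /* Example 2: TA(n) = 20n ms vs TB(n) = 0.1 n^2 ms       -> crossover at n = 200     */
    double sizes[] = {10, 100, 200, 1000, 1e6, 1.7e60};
    for (int i = 0; i < 6; i++) {
        double n = sizes[i];
        double tA  = 20.0 * n;               /* shared by both examples */
        double tB1 = 0.1 * n * log2(n);      /* Example 1's TB          */
        double tB2 = 0.1 * n * n;            /* Example 2's TB          */
        printf("n=%-10.3g TA=%-12.4g TB(n log n)=%-12.4g TB(n^2)=%-12.4g\n",
               n, tA, tB1, tB2);
    }
    return 0;
}

At every realistic size the n log n algorithm (B of Example 1) wins, while the n² algorithm (B of Example 2)
wins only below n = 200.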
Useful Rules of BIG-OH NOTATION
• If the function f can be written as a finite sum of other functions, then the fastest
growing one determines the order of f (n).
Drop lower order terms and constant factors
7n+3 is O(n)
8n² log n + 5n² + n is O(n² log n)
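For the second example, one way to see why only the fastest-growing term matters (taking log to mean log₂
here, an assumption made only for this check):
• For n ≥ 2, log n ≥ 1, so 5n² ≤ 5n² log n and n ≤ n² log n.
• Hence 8n² log n + 5n² + n ≤ (8 + 5 + 1)·n² log n = 14n² log n for all n ≥ 2, i.e. O(n² log n) with c = 14, n0 = 2.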
Big-Omega: Asymptotic lower bound
• The function f(n) is Ω(g(n)) iff there exist a real positive constant c > 0 and a
positive integer n0 such that f(n) ≥ c·g(n) for all n ≥ n0
• Big-Omega is just the opposite of Big-Oh
• It generalises the concept of “lower bound” (≥) in the same way as Big-Oh
generalises the concept of “upper bound” (≤)
• If f(n) is O(g(n)) then g(n) is Ω(f(n))
Big-Omega
Examples:
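• A worked check against the definition, with the function chosen here purely for illustration:
• 3n + 2 ≥ 3n for all n ≥ 1, so 3n + 2 = Ω(n) with c = 3 and n0 = 1.
• 3n + 2 is not Ω(n²): for any fixed c > 0, 3n + 2 < c·n² once n is large enough.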
Big-Theta: Asymptotic tight bound
• The function f(n) is Θ(g(n)) iff there exist two real positive constants c1 > 0 and c2
> 0 and a positive integer n0 such that:
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0
• Whenever two functions, f and g, are of the same order, f(n) is Θ(g(n)), they are
each Big-Oh of the other: g(n) is O(f(n)) AND f(n) is O(g(n))
• g(n) is an asymptotic tight bound for f(n).
Big-Theta
Examples:
• Is 3n + 2 = Θ(n)?
• Is 3n + 2 = Θ(n2)?
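• Worked answers, with the constants chosen here for illustration:
• Yes: 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so 3n + 2 = Θ(n) with c1 = 3, c2 = 4, n0 = 2.
• No: 3n + 2 is O(n²) but not Ω(n²), so it is not Θ(n²).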
Little-o notation
• Even though it is correct to say “7n + 3 is O(n³)”, a better statement is “7n + 3 is O(n)”; that is, one
should make the upper bound as tight as possible.
• Little-o denotes an upper bound that is not asymptotically tight: 7n + 3 = o(n³), but 7n + 3 ≠ o(n).
Little-ω notation
• 4n² = Ω(n) holds but is not asymptotically tight, whereas 4n² = Ω(n²) is asymptotically tight.
• Little-ω denotes a lower bound that is not asymptotically tight: 4n² = ω(n), but 4n² ≠ ω(n²).