Time Complexity
Program Efficiency & Complexity Analysis
Algorithm
An algorithm is a definite procedure for solving
a problem in a finite number of steps.
An algorithm is a well-defined computational
procedure that takes some value(s) as input
and produces some value(s) as output.
An algorithm is a finite sequence of computational
statements that transform the input into the output.
Good Algorithms?
Run in less time
Consume less memory
Example: an algorithm whose running time is 5N + 3 steps:
N = 10 => 53 steps
N = 100 => 503 steps
N = 1,000 => 5,003 steps
N = 1,000,000 => 5,000,003 steps
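As a sketch (the loop body here is hypothetical, standing in for whatever program produced these counts), counting primitive operations reproduces the 5N + 3 pattern: 3 setup steps plus 5 steps per iteration.

```python
def count_steps(n):
    """Count primitive steps for a hypothetical loop that does
    5 units of work per iteration after 3 units of setup."""
    steps = 3          # setup: e.g. initialize variables, read n
    for _ in range(n):
        steps += 5     # 5 primitive operations per loop iteration
    return steps

for n in (10, 100, 1_000, 1_000_000):
    print(n, count_steps(n))
```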
[Plot: growth of 10 n^2 vs n^3; time (0 to 2000 steps) against n = 1 to 15]
Comparing Functions
As inputs get larger, an algorithm of a
smaller order will eventually be more efficient
than an algorithm of a larger order
[Plot: 0.05 N^2 = O(N^2) vs 3N = O(N); time (steps) against input size; the curves cross at N = 60]
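A quick check of the crossover point shown above: 0.05 N^2 equals 3N exactly at N = 60, and beyond that the quadratic dominates.

```python
def f_quadratic(n):
    return 0.05 * n * n   # the O(N^2) algorithm's cost

def f_linear(n):
    return 3 * n          # the O(N) algorithm's cost

# First integer N at which the quadratic cost overtakes the linear cost.
crossover = next(n for n in range(1, 1000) if f_quadratic(n) > f_linear(n))
print(crossover)  # 61: the costs are equal at N = 60, then O(N^2) loses
```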
Big-Oh Notation
Even though it is correct to say "7n - 3 is
O(n^3)", the better statement is "7n - 3 is O(n)";
that is, one should make the approximation as
tight as possible
Simple Rule:
Drop lower-order terms and constant
factors
7n - 3 is O(n)
8n^2 log n + 5n^2 + n is O(n^2 log n)
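One way to see why the lower-order terms drop out: the ratio of the full expression to its leading term n^2 log n tends to the constant factor 8, which Big-Oh also discards.

```python
import math

def f(n):
    # Full cost expression from the slide.
    return 8 * n**2 * math.log2(n) + 5 * n**2 + n

for n in (10, 1_000, 1_000_000):
    leading = n**2 * math.log2(n)
    print(n, f(n) / leading)  # the ratio approaches the constant 8
```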
Performance Classification
f(n)      Classification
1         Constant: run time is fixed and does not depend on n. Most instructions are
          executed once, or only a few times, regardless of the amount of information
          being processed.
log n     Logarithmic: when n increases, so does run time, but much more slowly. Common
          in programs that solve large problems by transforming them into smaller problems.
n         Linear: run time varies directly with n. Typically, a small amount of processing
          is done on each element.
n log n   When n doubles, run time slightly more than doubles. Common in programs that
          break a problem down into smaller sub-problems, solve them independently, then
          combine the solutions.
n^2       Quadratic: when n doubles, run time increases fourfold. Practical only for small
          problems; typically the program processes all pairs of inputs (e.g. in a doubly
          nested loop).
n^3       Cubic: when n doubles, run time increases eightfold.
2^n       Exponential: when n doubles, run time squares. This is often the result of a
          natural, "brute force" solution.
Size does matter [1]
What happens if we double the input size N?
N     log2 N   5N     N log2 N   N^2      2^N
8     3        40     24         64       256
16    4        80     64         256      65536
32    5        160    160        1024     ~10^9
64    6        320    384        4096     ~10^19
128   7        640    896        16384    ~10^38
256   8        1280   2048       65536    ~10^77
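The table above can be regenerated directly; note how doubling N adds 1 to log2 N, doubles 5N, a bit more than doubles N log2 N, quadruples N^2, and squares 2^N.

```python
import math

def growth_row(n):
    """One row of the growth table for input size n."""
    lg = int(math.log2(n))
    return (n, lg, 5 * n, n * lg, n * n, 2 ** n)

for n in (8, 16, 32, 64, 128, 256):
    n_, lg, five_n, nlogn, sq, exp = growth_row(n)
    print(f"{n_:>3} {lg:>2} {five_n:>5} {nlogn:>6} {sq:>6} {exp:.2e}")
```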
Size does matter [2]
Suppose a program has run time O(n!) and the
run time for n = 10 is 1 second. Scaling by n!/10!:
n = 12 takes 11 x 12 = 132 seconds (about 2 minutes)
n = 14 takes 11 x 12 x 13 x 14 = 24,024 seconds (about 7 hours)
n = 16 takes about 5.8 x 10^6 seconds (about 2 months)
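A small sketch of this extrapolation, assuming the run time scales exactly as n! with 1 second at n = 10:

```python
import math

def runtime_seconds(n, base_n=10, base_seconds=1):
    """Extrapolated run time for an O(n!) program, assuming the run
    time scales exactly with n! and takes base_seconds at n = base_n."""
    return base_seconds * math.factorial(n) // math.factorial(base_n)

for n in (10, 12, 14, 16):
    print(n, runtime_seconds(n), "seconds")
```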
Worst Case
What is the time complexity of
if (condition)
    statement1;
else
    statement2;
where statement1 runs in O(N) time and statement2 runs in O(N^2)
time?
We use "worst case" complexity: the maximum running time over
all inputs of size N.
The analysis for the example above is therefore O(N^2)
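A hypothetical fragment with this shape (the condition and branch bodies below are illustrative, not from the slide). Worst-case analysis must assume the O(N^2) branch can be taken, so the whole fragment is O(N^2).

```python
def process(data):
    n = len(data)
    if n % 2 == 0:
        # statement1: O(N), one pass over the input
        return sum(data)
    else:
        # statement2: O(N^2), examines all pairs of elements
        return sum(abs(a - b) for a in data for b in data)
```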
Best Case
The best case is the input of size n that is
cheapest among all inputs of size n.
Misunderstanding: "The best case for my algorithm
is n = 1 because that is the fastest." WRONG!
The best case fixes the input size n and asks
which input of that size is cheapest.
Complexity Examples
What does the following algorithm compute?
procedure who_knows(a1, a2, …, an: integers)
    m := 0
    for i := 1 to n - 1
        for j := i + 1 to n
            if |ai - aj| > m then m := |ai - aj|
{m is the maximum difference between any two
numbers in the input sequence}
Comparisons: (n-1) + (n-2) + (n-3) + … + 1
= n(n - 1)/2 = 0.5n^2 - 0.5n
Time complexity is O(n^2).
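A direct Python transcription of the pseudocode, with a counter added to verify the n(n - 1)/2 comparison count:

```python
def who_knows(a):
    """Maximum difference between any two numbers, comparing all pairs."""
    n = len(a)
    m = 0
    comparisons = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1
            if abs(a[i] - a[j]) > m:
                m = abs(a[i] - a[j])
    return m, comparisons

print(who_knows([3, 9, 1, 7]))  # (8, 6): max diff 9 - 1, and 4*3/2 pairs
```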
Complexity Examples
Another algorithm solving the same problem:
procedure max_diff(a1, a2, …, an: integers)
    min := a1
    max := a1
    for i := 2 to n
        if ai < min then min := ai
        else if ai > max then max := ai
    m := max - min
Comparisons: at most 2n - 2
Time complexity is O(n).
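The same algorithm in Python, again counting comparisons. An increasing input forces both tests on every iteration, which is the worst case 2n - 2; a decreasing input needs only n - 1.

```python
def max_diff(a):
    """Maximum difference via a single pass tracking min and max."""
    lo = hi = a[0]
    comparisons = 0
    for x in a[1:]:
        comparisons += 1          # the "ai < min" test
        if x < lo:
            lo = x
        else:
            comparisons += 1      # the "ai > max" test
            if x > hi:
                hi = x
    return hi - lo, comparisons

print(max_diff([1, 2, 3, 4]))  # (3, 6): increasing input hits 2n - 2 = 6
print(max_diff([4, 3, 2, 1]))  # (3, 3): decreasing input needs only n - 1
```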