Time Complexity: Dr. Zahid Halim
Algorithm Definition
A finite set of statements that guarantees an optimal solution in a
finite interval of time
Good Algorithms?
• Run in less time
• Consume less memory
Measuring Efficiency
• The efficiency of an algorithm is a measure of the amount of resources
consumed in solving a problem of size n.
• The resource we are most interested in is time
• We can use the same techniques to analyze the consumption of other resources,
such as memory space.
• It would seem that the most obvious way to measure the efficiency of an
algorithm is to run it and measure how much processor time it needs
• Is this correct?
Factors
• Hardware
• Operating System
• Compiler
• Size of input
• Nature of Input
• Algorithm
For an algorithm that takes 5N + 3 steps:
N = 10        =>  53 steps
N = 100       =>  503 steps
N = 1,000     =>  5,003 steps
N = 1,000,000 =>  5,000,003 steps
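The 5N + 3 count can be modeled directly. A minimal sketch, assuming a hypothetical cost model of 3 setup steps plus 5 steps per loop iteration (the exact per-step breakdown is illustrative, not from the lecture):

```python
def count_steps(n):
    # Hypothetical cost model: 3 setup steps plus 5 steps per
    # iteration, giving the 5N + 3 totals in the table above.
    steps = 3
    for _ in range(n):
        steps += 5
    return steps

print(count_steps(10))   # 53
print(count_steps(100))  # 503
```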
Asymptotic Complexity
• The 5N+3 time bound is said to "grow asymptotically" like N
• This gives us an approximation of the complexity of the
algorithm
• Ignores lots of (machine-dependent) details, concentrating on
the bigger picture
Big Oh Notation
If f(N) and g(N) are two complexity functions, we say
f(N) = O(g(N))
if there are positive constants c and n0 such that f(N) ≤ c·g(N)
for all N ≥ n0; that is, g(N) bounds f(N) from above, up to a
constant factor, for all sufficiently large N
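For example, 5N + 3 is O(N): taking c = 6 and n0 = 3 (one valid choice of constants, not the only one), 5N + 3 ≤ 6N holds for every N ≥ 3. A quick numerical check:

```python
# Witnessing 5N + 3 = O(N) with the constants c = 6, n0 = 3:
# 5N + 3 <= 6N is equivalent to 3 <= N, so it holds for all N >= 3.
c, n0 = 6, 3
assert all(5*n + 3 <= c*n for n in range(n0, 100_000))
print("5N + 3 <= 6N verified for N in [3, 100000)")
```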
[Figure: plot of 10n² versus n³ for n = 1 to 15 (y-axis 0 to 3000); n³ overtakes 10n² once n exceeds 10.]
Comparing Functions
• As inputs get larger, any algorithm of a smaller order will be more efficient than an
algorithm of a larger order
[Figure: time (steps) versus input size for 0.05N² = O(N²) and 3N = O(N); the curves cross at N = 60, after which the linear algorithm is faster.]
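The crossover can be checked directly. A small sketch with the two cost functions from the figure (the function names are mine, for illustration):

```python
def linear_cost(n):
    # 3N: linear growth, but with a larger constant factor
    return 3 * n

def quadratic_cost(n):
    # 0.05N^2: quadratic growth with a small constant factor
    return 0.05 * n * n

# Below the crossover at N = 60 the quadratic is cheaper;
# above it, the linear algorithm wins and keeps winning.
print(quadratic_cost(30) < linear_cost(30))    # True  (45 < 90)
print(quadratic_cost(120) > linear_cost(120))  # True  (720 > 360)
```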
Big-Oh Notation
• Even though it is correct to say "7n - 3 is O(n³)", a better statement
is "7n - 3 is O(n)"; that is, one should make the approximation as tight as
possible
• Simple Rule:
Drop lower order terms and constant factors
7n - 3 is O(n)
8n²log n + 5n² + n is O(n²log n)
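The rule works because the lower-order terms vanish relative to the leading term. A quick check on the second example: the ratio of the full function to its leading term n²log n settles toward the constant 8 as n grows:

```python
import math

def f(n):
    return 8 * n**2 * math.log2(n) + 5 * n**2 + n

# Ratio of f(n) to its leading term n^2 log2 n: the excess over 8
# comes from the 5n^2 and n terms, and it shrinks as n grows.
for n in (10, 1_000, 100_000):
    print(n, f(n) / (n**2 * math.log2(n)))
```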
Performance Classification

f(n)      Classification
1         Constant: run time is fixed and does not depend on n. Most instructions are executed once, or only a few times, regardless of the amount of information being processed.
log n     Logarithmic: when n increases, so does run time, but much more slowly. Common in programs which solve large problems by transforming them into smaller problems.
n         Linear: run time varies directly with n. Typically, a small amount of processing is done on each element.
n log n   When n doubles, run time slightly more than doubles. Common in programs which break a problem down into smaller sub-problems, solve them independently, then combine the solutions.
n²        Quadratic: when n doubles, run time increases fourfold. Practical only for small problems; typically the program processes all pairs of inputs (e.g. in a doubly nested loop).
2^n       Exponential: when n doubles, run time squares. This is often the result of a natural, "brute force" solution.
N     log₂N   5N      N log₂N   N²       2^N
8     3       40      24        64       256
16    4       80      64        256      65,536
32    5       160     160       1,024    ~10⁹
64    6       320     384       4,096    ~10¹⁹
128   7       640     896       16,384   ~10³⁸
256   8       1,280   2,048     65,536   ~10⁷⁷
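The table above can be regenerated with a short script (exact 2^N values are printed; the table rounds the larger ones to powers of ten):

```python
import math

# Recompute each row of the growth-rate table: N, log2 N, 5N,
# N log2 N, N^2 and 2^N, for the sample sizes used in the lecture.
for n in (8, 16, 32, 64, 128, 256):
    log_n = int(math.log2(n))
    print(n, log_n, 5 * n, n * log_n, n * n, 2**n)
```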
Analyzing Loops
• Any loop has two parts:
– How many iterations are performed?
– How many steps per iteration?
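Multiplying the two parts gives the total cost. A minimal sketch with a doubly nested loop (the function is a hypothetical example, not from the lecture):

```python
def count_pairs(items):
    # Outer loop: n iterations. Inner loop: n steps per outer
    # iteration. Total: n * n constant-time steps, i.e. O(n^2).
    n = len(items)
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

print(count_pairs(list(range(10))))  # 100
```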
Analyzing Loops
SUM RULE: the running time of a sequence of statements is the sum of
their individual running times; an O(f(N)) step followed by an
O(g(N)) step takes O(f(N) + g(N)) = O(max(f(N), g(N))) time
Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi
Lecture 02: Time Complexity CS221: Data Structures & Algo.
What is the complexity of
if (condition)
    statement1;
else
    statement2;
where statement1 runs in O(N) time and statement2 runs in O(N2) time?
We use "worst case" complexity: among all inputs of size N, we take the
maximum running time.
The analysis for the example above is therefore O(N2)
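A sketch of the worst-case rule, with hypothetical branch bodies standing in for statement1 and statement2 (the function and its branches are mine, for illustration):

```python
def process(data, condition):
    if condition:
        # statement1: a single pass over the data, O(N)
        return sum(data)
    # statement2: a doubly nested pass over all pairs, O(N^2)
    total = 0
    for x in data:
        for y in data:
            total += x * y
    return total

# Worst-case analysis must assume the O(N^2) branch can be taken,
# so the if/else as a whole is classified as O(N^2).
print(process([1, 2, 3], True))   # 6
print(process([1, 2, 3], False))  # 36
```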