Time Complexity Lecture 4
Analysis
Dr Umesh K L
Algorithm Definition
Good Algorithms?
Measuring Efficiency
The efficiency of an algorithm is a measure of the amount
of resources consumed in solving a problem of size n.
The resource we are most interested in is time
We can use the same techniques to analyze the consumption
of other resources, such as memory space.
It would seem that the most obvious way to measure the
efficiency of an algorithm is to run it and measure how
much processor time is needed
Is it correct?
Factors
Hardware
Operating System
Compiler
Size of input
Nature of Input
Algorithm
RUNNING TIME OF AN
ALGORITHM
Depends upon
Input Size
Nature of Input
Finding running time of an
Algorithm / Analyzing an
Algorithm
Running time is measured by number of steps/primitive
operations performed
Simple Example (1)
Simple Example (2)
Simple Example (3): Growth of 5n+3
Estimated running time for different values of N:
N = 10 => 53 steps
N = 100 => 503 steps
N = 1,000 => 5003 steps
N = 1,000,000 => 5,000,003 steps
What Dominates in Previous
Example?
Asymptotic Complexity
COMPARING FUNCTIONS:
ASYMPTOTIC NOTATION
Big Oh Notation: Upper bound
Big Oh Notation [1]
f(N) = O(g(N)) if there exist positive constants c and N0 such that f(N) <= c·g(N) for all N >= N0
Big Oh Notation [2]
O(f(n))
Example (2): Comparing
Functions
10n^2 vs n^3
[Figure: plot of 10n^2 and n^3 for n = 1..15; n^3 overtakes 10n^2 at n = 10]
Comparing Functions
0.05N^2 = O(N^2) and 3N = O(N)
[Figure: plot of time (steps) vs input size; 0.05N^2 overtakes 3N at N = 60]
Big-Oh Notation
Simple Rule:
Drop lower order terms and constant factors
7n-3 is O(n)
8n^2 log n + 5n^2 + n is O(n^2 log n)
Big Omega Notation
If we wanted to say “running time is at least…” we use Ω
f(n) is Ω(g(n)) if there exist positive constants c and n0 such that f(n) >= c·g(n) >= 0 for all n >= n0
Big Theta Notation
If we wish to express tight bounds we use the theta notation, Θ: f(n) is Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n))
What does this all mean?
If f(n) = Θ(g(n)) we say that f(n) and g(n) grow at the
same rate, asymptotically
Which Notation do we use?
To express the efficiency of our algorithms which of
the three notations should we use?
Why?
Performance Classification
f(n) Classification
1        Constant: run time is fixed and does not depend on n. Most instructions are executed once, or only a few times, regardless of the amount of information being processed.
log n    Logarithmic: when n increases, run time also increases, but much more slowly. Common in programs that solve large problems by transforming them into smaller problems. Example: binary search.
n        Linear: run time varies directly with n. Typically a small amount of processing is done on each element. Example: linear search.
n log n  When n doubles, run time slightly more than doubles. Common in programs that break a problem into smaller sub-problems, solve them independently, then combine the solutions. Example: merge sort.
n^2      Quadratic: when n doubles, run time increases fourfold. Practical only for small problems; typically the program processes all pairs of inputs (e.g. in a doubly nested loop). Example: insertion sort.
2^n      Exponential: when n doubles, run time squares. Often the result of a natural "brute force" solution. Example: exhaustive (brute-force) search.
Note: log n, n, n log n, n^2 and n^3 are polynomial growth rates; 2^n is exponential (non-polynomial).
Size does matter[1]
N     log2 N   5N     N log2 N   N^2     2^N
8     3        40     24         64      256
16    4        80     64         256     65536
32    5        160    160        1024    ~10^9
64    6        320    384        4096    ~10^19
128   7        640    896        16384   ~10^38
256   8        1280   2048       65536   ~10^77
Complexity Classes
[Figure: growth curves of the common complexity classes, time (steps) vs input size]
Size does matter[2]
Suppose a program has run time O(n!) and the run time for
n = 10 is 1 second
Analyzing Loops
Constant time statements
Simplest case: O(1) time statements
Analyzing Loops[2]
Analyzing Loops – Linear Loops
Example (have a look at this code segment):
Analyzing Nested Loops[1]
Treat just like a single loop and evaluate each level of
nesting as needed:
int j,k;
for (j=0; j<N; j++)
for (k=N; k>0; k--)
sum += k+j;
Analyzing Nested Loops[2]
What if the number of iterations of one loop depends on the
counter of the other?
int j,k;
for (j=0; j < N; j++)
for (k=0; k < j; k++)
sum += k+j;
How Did We Get This Answer?
When doing Big-O analysis, we sometimes have to compute a
series like: 1 + 2 + 3 + ... + (n-1) + n = n(n+1)/2, which is O(n^2)
Sequence of Statements
For a sequence of statements, compute their complexity
functions individually and add them up
Conditional Statements
What about conditional statements such as
if (condition)
statement1;
else
statement2;
The worst-case running time is the time to evaluate the condition plus the larger of the running times of statement1 and statement2.
For the recursive power(x,n) example (whose code slide is not fully reproduced here): if N ≥ 2, then the running time T(N) is the cost of each step taken plus the time required to compute power(x,n-1), i.e. T(N) = 2 + T(N-1) for N ≥ 2.
So T(N) = 2N+2, which is O(N), for the last example.
Summary
Algorithms can be classified according to their
complexity => O-Notation
only relevant for large input sizes
References
Introduction to Algorithms by Thomas H. Cormen
Chapter 3 (Growth of Functions)