Analysis of Algorithms
© 2010 Goodrich, Tamassia

[Figure: Input → Algorithm → Output]

Running Time

• Most algorithms transform input objects into output objects.
• The running time of an algorithm typically grows with the input size.
• Average-case time is often difficult to determine.
• We focus on the worst-case running time.
  - Easier to analyze
  - Crucial to applications such as games, finance and robotics

[Chart: best-case, average-case and worst-case running times (0–120) versus input size (1000–4000)]

Experimental Studies

• Write a program implementing the algorithm
• Run the program with inputs of varying size and composition
• Use a method like clock() to get an accurate measure of the actual running time (see the C++ sketch after the next slide)
• Plot the results

[Chart: measured running time in ms (0–9000) versus input size (0–100)]

Limitations of Experiments

• It is necessary to implement the algorithm, which may be difficult
• Results may not be indicative of the running time on other inputs not included in the experiment
• In order to compare two algorithms, the same hardware and software environments must be used
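As a concrete version of the experimental recipe above, here is a minimal C++ sketch; sumAll is a stand-in workload introduced for illustration, and any routine whose running time depends on n could be substituted:

#include <cstdio>
#include <ctime>
#include <vector>

// Stand-in for the algorithm being measured (an assumption for illustration).
long long sumAll(const std::vector<int>& v) {
    long long s = 0;
    for (int x : v) s += x;
    return s;
}

int main() {
    for (int n = 1000; n <= 1000000; n *= 10) {
        std::vector<int> input(n, 1);              // input of size n
        std::clock_t start = std::clock();         // start the clock
        volatile long long result = sumAll(input); // run the algorithm
        double ms = 1000.0 * (std::clock() - start) / CLOCKS_PER_SEC;
        std::printf("n = %8d  time = %.3f ms\n", n, ms);
        (void)result;                              // keep the call from being optimized away
    }
    return 0;
}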
Theoretical Analysis

• Uses a high-level description of the algorithm instead of an implementation
• Characterizes running time as a function of the input size, n
• Takes into account all possible inputs
• Allows us to evaluate the speed of an algorithm independent of the hardware/software environment

Pseudocode

• High-level description of an algorithm
• More structured than English prose
• Less detailed than a program
• Preferred notation for describing algorithms
• Hides program design issues

Example: find the max element of an array

Algorithm arrayMax(A, n)
  Input array A of n integers
  Output maximum element of A
  currentMax ← A[0]
  for i ← 1 to n − 1 do
    if A[i] > currentMax then
      currentMax ← A[i]
  return currentMax
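A direct C++ transcription of arrayMax, as a sketch (it assumes the array is passed as a pointer plus a length n ≥ 1, matching the pseudocode's access to A[0]):

// C++ version of the arrayMax pseudocode above; assumes n >= 1.
int arrayMax(const int A[], int n) {
    int currentMax = A[0];
    for (int i = 1; i <= n - 1; ++i)   // for i <- 1 to n - 1
        if (A[i] > currentMax)         // if A[i] > currentMax
            currentMax = A[i];         //   currentMax <- A[i]
    return currentMax;
}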

Pseudocode Details

• Control flow
  - if … then … [else …]
  - while … do …
  - repeat … until …
  - for … do …
  - Indentation replaces braces
• Method declaration
  Algorithm method (arg [, arg…])
    Input …
    Output …
• Method call
  var.method (arg [, arg…])
• Return value
  return expression
• Expressions
  - ← Assignment (like = in C++)
  - = Equality testing (like == in C++)
  - n² Superscripts and other mathematical formatting allowed

The Random Access Machine (RAM) Model

• A CPU
• A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character
• Memory cells are numbered, and accessing any cell in memory takes unit time

[Figure: a CPU connected to memory cells numbered 0, 1, 2, …]
Seven Important Functions

• Seven functions that often appear in algorithm analysis:
  - Constant ≈ 1
  - Logarithmic ≈ log n
  - Linear ≈ n
  - N-Log-N ≈ n log n
  - Quadratic ≈ n²
  - Cubic ≈ n³
  - Exponential ≈ 2ⁿ
• In a log-log chart, the slope of the line corresponds to the growth rate (sample values are tabulated below)

[Chart: the seven functions on log-log axes, T(n) from 1E+0 to 1E+30, n from 1E+0 to 1E+10]

Functions Graphed Using "Normal" Scale (slide by Matt Stallmann, included with permission)

[Charts: g(n) = 1, g(n) = lg n, g(n) = n, g(n) = n lg n, g(n) = n², g(n) = n³ and g(n) = 2ⁿ, each plotted on linear axes]
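To make the seven growth rates above concrete, a small C++ sketch that tabulates each function at a few sample sizes (the sizes 8–64 are an arbitrary choice for illustration):

#include <cmath>
#include <cstdio>
#include <initializer_list>

int main() {
    // Header row, one column per function.
    std::printf("%6s %3s %6s %6s %8s %10s %12s %22s\n",
                "n", "1", "lg n", "n", "n lg n", "n^2", "n^3", "2^n");
    for (double n : {8.0, 16.0, 32.0, 64.0}) {
        double lg = std::log2(n);
        std::printf("%6.0f %3d %6.1f %6.0f %8.0f %10.0f %12.0f %22.0f\n",
                    n, 1, lg, n, n * lg, n * n, n * n * n, std::pow(2.0, n));
    }
    return 0;
}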

Primitive Operations

• Basic computations performed by an algorithm
• Identifiable in pseudocode
• Largely independent from the programming language
• Exact definition not important (we will see why later)
• Assumed to take a constant amount of time in the RAM model
• Examples:
  - Evaluating an expression
  - Assigning a value to a variable
  - Indexing into an array
  - Calling a method
  - Returning from a method

Counting Primitive Operations

• By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size

Algorithm arrayMax(A, n)              # operations
  currentMax ← A[0]                     2
  for i ← 1 to n − 1 do                 2n
    if A[i] > currentMax then           2(n − 1)
      currentMax ← A[i]                 2(n − 1)
    { increment counter i }             2(n − 1)
  return currentMax                     1
                              Total     8n − 2
Estimating Running Time

• Algorithm arrayMax executes 8n − 2 primitive operations in the worst case. Define:
  a = time taken by the fastest primitive operation
  b = time taken by the slowest primitive operation
• Let T(n) be the worst-case time of arrayMax. Then
  a(8n − 2) ≤ T(n) ≤ b(8n − 2)
• Hence, the running time T(n) is bounded by two linear functions

Growth Rate of Running Time

• Changing the hardware/software environment
  - affects T(n) by a constant factor, but
  - does not alter the growth rate of T(n)
• The linear growth rate of the running time T(n) is an intrinsic property of algorithm arrayMax

Why Growth Rate Matters (slide by Matt Stallmann, included with permission)

if runtime is…   time for n + 1       time for 2n         time for 4n
c lg n           c lg (n + 1)         c (lg n + 1)        c (lg n + 2)
c n              c (n + 1)            2c n                4c n
c n lg n         ~ c n lg n + c n     2c n lg n + 2c n    4c n lg n + 4c n
c n²             ~ c n² + 2c n        4c n²               16c n²
c n³             ~ c n³ + 3c n²       8c n³               64c n³
c 2ⁿ             c 2^(n+1)            c 2^(2n)            c 2^(4n)

(Note the c n² row: the runtime quadruples when the problem size doubles.)

Comparison of Two Algorithms (slide by Matt Stallmann, included with permission)

• The runtime of insertion sort is n²/4
• The runtime of merge sort is 2n lg n
• Sort a million items? Insertion sort takes roughly 70 hours, while merge sort takes roughly 40 seconds.
• This is a slow machine, but even one 100× as fast would take 40 minutes versus less than 0.5 seconds.
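Those figures can be checked directly, assuming the slide's implied speed of about one million operations per second:

  insertion sort: n²/4 = (10⁶)²/4 = 2.5 × 10¹¹ operations ≈ 2.5 × 10⁵ seconds ≈ 70 hours
  merge sort:     2n lg n = 2 × 10⁶ × lg 10⁶ ≈ 2 × 10⁶ × 20 = 4 × 10⁷ operations ≈ 40 seconds

Speeding the machine up by a factor of 100 divides both times by 100, giving roughly 40 minutes versus 0.4 seconds.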
Constant Factors

• The growth rate is not affected by
  - constant factors or
  - lower-order terms
• Examples
  - 10²n + 10⁵ is a linear function
  - 10⁵n² + 10⁸n is a quadratic function

[Chart: pairs of linear and quadratic functions on log-log axes, showing that constant factors do not change the slope]

Big-Oh Notation

• Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n₀ such that
  f(n) ≤ cg(n) for n ≥ n₀
• Example: 2n + 10 is O(n)
  - 2n + 10 ≤ cn
  - (c − 2)n ≥ 10
  - n ≥ 10/(c − 2)
  - Pick c = 3 and n₀ = 10

[Chart: 3n, 2n + 10 and n on log-log axes, n from 1 to 1,000]

Big-Oh Example

• Example: the function n² is not O(n)
  - n² ≤ cn
  - n ≤ c
  - The above inequality cannot be satisfied, since c must be a constant

[Chart: n², 100n, 10n and n on log-log axes, n from 1 to 1,000]

More Big-Oh Examples

• 7n − 2 is O(n)
  Need c > 0 and n₀ ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n₀; this is true for c = 7 and n₀ = 1
• 3n³ + 20n² + 5 is O(n³)
  Need c > 0 and n₀ ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n₀; this is true for c = 4 and n₀ = 21
• 3 log n + 5 is O(log n)
  Need c > 0 and n₀ ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n₀; this is true for c = 8 and n₀ = 2
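To see where c = 4 and n₀ = 21 come from in the second example: 3n³ + 20n² + 5 ≤ 4n³ is equivalent to 20n² + 5 ≤ n³, and at n = 21 this holds with 20(21²) + 5 = 8825 ≤ 9261 = 21³; since n³ − 20n² = n²(n − 20) only grows past n = 21, the inequality holds for all larger n as well.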
Big-Oh and Growth Rate

• The big-Oh notation gives an upper bound on the growth rate of a function
• The statement "f(n) is O(g(n))" means that the growth rate of f(n) is no more than the growth rate of g(n)
• We can use the big-Oh notation to rank functions according to their growth rate

                    f(n) is O(g(n))    g(n) is O(f(n))
g(n) grows more     Yes                No
f(n) grows more     No                 Yes
Same growth         Yes                Yes

Big-Oh Rules

• If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
  1. Drop lower-order terms
  2. Drop constant factors
• Use the smallest possible class of functions
  - Say "2n is O(n)" instead of "2n is O(n²)"
• Use the simplest expression of the class
  - Say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"

Asymptotic Algorithm Analysis

• The asymptotic analysis of an algorithm determines the running time in big-Oh notation
• To perform the asymptotic analysis
  - We find the worst-case number of primitive operations executed as a function of the input size
  - We express this function with big-Oh notation
• Example:
  - We determine that algorithm arrayMax executes at most 8n − 2 primitive operations
  - We say that algorithm arrayMax "runs in O(n) time"
• Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations

Computing Prefix Averages

• We further illustrate asymptotic analysis with two algorithms for prefix averages
• The i-th prefix average of an array X is the average of the first (i + 1) elements of X:
  A[i] = (X[0] + X[1] + … + X[i])/(i + 1)
• Computing the array A of prefix averages of another array X has applications to financial analysis

[Chart: an array X and its prefix-average array A plotted side by side for indices 1–7, values 0–35]
Prefix Averages (Quadratic)

The following algorithm computes prefix averages in quadratic time by applying the definition (a C++ version appears after the next slide)

Algorithm prefixAverages1(X, n)
  Input array X of n integers
  Output array A of prefix averages of X        # operations
  A ← new array of n integers                     n
  for i ← 0 to n − 1 do                           n
    s ← X[0]                                      n
    for j ← 1 to i do                             1 + 2 + … + (n − 1)
      s ← s + X[j]                                1 + 2 + … + (n − 1)
    A[i] ← s / (i + 1)                            n
  return A                                        1

Arithmetic Progression

• The running time of prefixAverages1 is O(1 + 2 + … + n)
• The sum of the first n integers is n(n + 1)/2
  - There is a simple visual proof of this fact
• Thus, algorithm prefixAverages1 runs in O(n²) time

[Chart: staircase diagram giving a visual proof that 1 + 2 + … + n = n(n + 1)/2]
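A C++ rendering of prefixAverages1 above, as a sketch (it returns the averages as doubles so the division is exact rather than truncated):

#include <cstddef>
#include <vector>

// Quadratic-time prefix averages: recomputes each prefix sum from scratch.
std::vector<double> prefixAverages1(const std::vector<int>& X) {
    std::size_t n = X.size();
    std::vector<double> A(n);
    for (std::size_t i = 0; i < n; ++i) {
        long long s = X[0];                   // s <- X[0]
        for (std::size_t j = 1; j <= i; ++j)  // for j <- 1 to i
            s += X[j];                        //   s <- s + X[j]
        A[i] = static_cast<double>(s) / (i + 1);
    }
    return A;
}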

Prefix Averages (Linear)

The following algorithm computes prefix averages in linear time by keeping a running sum (a C++ version appears after the next slide)

Algorithm prefixAverages2(X, n)
  Input array X of n integers
  Output array A of prefix averages of X        # operations
  A ← new array of n integers                     n
  s ← 0                                           1
  for i ← 0 to n − 1 do                           n
    s ← s + X[i]                                  n
    A[i] ← s / (i + 1)                            n
  return A                                        1

Algorithm prefixAverages2 runs in O(n) time

Math you need to Review

• Summations
• Logarithms and exponents
  - Properties of logarithms:
    log_b(xy) = log_b x + log_b y
    log_b(x/y) = log_b x − log_b y
    log_b(x^a) = a log_b x
    log_b a = log_x a / log_x b
  - Properties of exponentials:
    a^(b+c) = a^b · a^c
    a^(bc) = (a^b)^c
    a^b / a^c = a^(b−c)
    b = a^(log_a b)
    b^c = a^(c · log_a b)
• Proof techniques
• Basic probability
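And a matching C++ sketch of prefixAverages2; the only change from prefixAverages1 is that the running sum s survives across iterations, so each element is added exactly once:

#include <cstddef>
#include <vector>

// Linear-time prefix averages: maintains a running sum across iterations.
std::vector<double> prefixAverages2(const std::vector<int>& X) {
    std::size_t n = X.size();
    std::vector<double> A(n);
    long long s = 0;                    // s <- 0
    for (std::size_t i = 0; i < n; ++i) {
        s += X[i];                      // s <- s + X[i]
        A[i] = static_cast<double>(s) / (i + 1);
    }
    return A;
}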
Relatives of Big-Oh

• big-Omega
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀
• big-Theta
  f(n) is Θ(g(n)) if there are constants c′ > 0 and c″ > 0 and an integer constant n₀ ≥ 1 such that c′·g(n) ≤ f(n) ≤ c″·g(n) for n ≥ n₀

Intuition for Asymptotic Notation

• Big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
• big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
• big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)

Example Uses of the Relatives of Big-Oh

• 5n² is Ω(n²)
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀; let c = 5 and n₀ = 1
• 5n² is Ω(n)
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀; let c = 1 and n₀ = 1
• 5n² is Θ(n²)
  f(n) is Θ(g(n)) if it is Ω(n²) and O(n²). We have already seen the former; for the latter, recall that f(n) is O(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≤ c·g(n) for n ≥ n₀; let c = 5 and n₀ = 1
