Lecture 2: Analysis of Algorithms
Outline
1) Running time and theoretical analysis
2) Big-O notation
3) Big-Ω and Big-Θ
4) Analyzing Seamcarve runtime
5) Dynamic programming
6) Fibonacci sequence
Running Time
• The running time of an algorithm varies with the input and typically grows with the input size
• The average case is difficult to determine
• In most of computer science we focus on the worst-case running time
  • Easier to analyze
  • Crucial to many applications: what would happen if an autopilot algorithm ran drastically slower for some unforeseen, untested inputs?
Why Not Measure Running Time Experimentally?
• You have to implement the algorithm, which isn't always doable!
• Your inputs may not entirely test the algorithm
• The running time depends on the particular computer's hardware and software speed
Theoretical Analysis
• Uses a high-level description of the algorithm instead of an implementation
• Takes into account all possible inputs
• Allows us to evaluate the speed of an algorithm independent of the hardware or software environment
• By inspecting pseudocode, we can determine the number of statements executed by an algorithm as a function of the input size
Elementary Operations
• Algorithmic “time” is measured in elementary operations
• Math (+, -, *, /, max, min, log, sin, cos, abs, ...)
• Comparisons ( ==, >, <=, ...)
• Variable assignment
• Variable increment or decrement
• Array allocation
• Creating a new object
• Function calls and value returns
• (Careful: an object's constructor and function calls may contain elementary ops too!)
• In practice, all of these operations take different amounts of time
• For the purpose of algorithm analysis, we assume each of these operations takes the same amount of time: "1 operation"
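To make the counting concrete, here is a small Python example (a hypothetical function, not from the slides) of how inspecting code lets us tally elementary operations as a function of the input size n:

    # Counting elementary operations under the "each costs 1" convention above
    def sum_array(a):
        total = 0              # 1 assignment
        for x in a:            # roughly n loop steps (1 advance/check each)
            total = total + x  # n additions + n assignments
        return total           # 1 return
    # Roughly 3n + 2 operations in all, so the count grows linearly with n = len(a)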
Growth Rate
• Examples
  • 10⁵n² + 10⁸n and n² both grow with the same slope despite differing constants and lower-order terms
  • 10n + 10⁵ and n both grow with the same slope as well
[Figure: log-log plot of these functions; with log scale on both axes, the slope of a line corresponds to the growth rate of its respective function]
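One quick way to see why the constants and lower-order terms don't change the slope (a numeric sketch, not a proof): the ratio of 10⁵n² + 10⁸n to n² approaches the constant 10⁵ as n grows.

    # Ratio of 1e5*n**2 + 1e8*n to n**2 tends to the constant 1e5,
    # which is why both appear with the same slope on a log-log plot
    for n in (10**3, 10**6, 10**9):
        print(n, (1e5 * n**2 + 1e8 * n) / n**2)
    # ratios: 200000.0, 100100.0, 100000.0001 -> flattening out near 1e5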
Big-O Notation
• Given any two functions f(n) and g(n), we say that f(n) is O(g(n)) if there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀
• Example: 2n + 10 is O(n)
  • Pick c = 3 and n₀ = 10: then 2n + 10 ≤ 3n whenever n ≥ 10
  • Checking the boundary case n = 10: 2(10) + 10 = 30 ≤ 30 = 3(10)
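As a quick sanity check (not a proof), we can spot-test the chosen constants over a range of inputs in Python:

    # Empirical spot-check of the witnesses c = 3, n0 = 10 for 2n + 10 being O(n)
    c, n0 = 3, 10
    assert all(2 * n + 10 <= c * n for n in range(n0, 100_000))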
Big-Omega (Ω)
• We say that f(n) is Ω(g(n)) if there exist positive constants c and n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀
• In other words, Big-Ω gives an asymptotic lower bound, just as Big-O gives an upper bound
Big-Theta (Θ)
• We say that f(n) is Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n))
• In other words, g(n) is an asymptotically tight bound on f(n)
Big-Theta Examples (for positive constants a, b, c)
• an + b is Θ(n)
• an² + bn + c is Θ(n²)
• a is Θ(1)
• 3ⁿ + an⁴⁰ is Θ(3ⁿ)
• an + b log n is Θ(n)
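To see where the first of these comes from (a worked sketch, assuming a, b > 0): an + b ≤ (a + b)·n for all n ≥ 1, so an + b is O(n); and an + b ≥ a·n for all n ≥ 1, so an + b is Ω(n). The two bounds together give Θ(n).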
Seamcarve
• An algorithm that considers every possible solution is known as an exhaustive algorithm
• One solution to the seamcarve problem would be to consider all possible seams and choose the minimum
• What would be the big-O running time of that algorithm in terms of n input pixels?
  • Exponential in n: not good
Seamcarve
• What's the runtime of the solution we went over last class?
• Remember: constants don't affect big-O runtime
• The algorithm (see the Python sketch after this list):
  • Iterate over every pixel from bottom to top to populate the costs and dirs arrays
  • Create a seam by choosing the minimum value in the top row and tracing downward
• How many times do we evaluate each pixel?
  • A constant number of times
• Therefore the algorithm is linear, or O(n), where n is the number of pixels
• Hint: we also could have looked back at the pseudocode and counted the number of nested loops!
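A minimal Python sketch of this bottom-up pass (a sketch under assumptions, not the exact code from class): it assumes the image arrives as a 2-D list energies[row][col] of pixel costs with row 0 at the top, and it returns one column index per row, top to bottom.

    def carve_seam(energies):
        height, width = len(energies), len(energies[0])
        # costs[r][c]: cheapest seam cost from (r, c) down to the bottom row
        # dirs[r][c]: column offset (-1, 0, or +1) of the best pixel one row below
        costs = [row[:] for row in energies]   # bottom row is already correct
        dirs = [[0] * width for _ in range(height)]

        # Populate costs and dirs from bottom to top
        for r in range(height - 2, -1, -1):
            for c in range(width):
                best = 0   # offset of the cheapest lower neighbor seen so far
                for offset in (-1, 1):
                    nc = c + offset
                    if 0 <= nc < width and costs[r + 1][nc] < costs[r + 1][c + best]:
                        best = offset
                costs[r][c] = energies[r][c] + costs[r + 1][c + best]
                dirs[r][c] = best

        # Choose the cheapest pixel in the top row, then trace downward
        col = min(range(width), key=lambda c: costs[0][c])
        seam = [col]
        for r in range(height - 1):
            col += dirs[r][col]
            seam.append(col)
        return seam

The double loop touches each pixel a constant number of times, which is exactly where the O(n) bound comes from.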
Fibonacci: Recursive
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …
• The Fibonacci sequence is usually defined by the following recurrence relation:
  F₀ = 0, F₁ = 1
  Fₙ = Fₙ₋₁ + Fₙ₋₂
• This lends itself very well to a recursive function for finding the nth Fibonacci number:

    function fib(n):
        if n == 0:
            return 0
        if n == 1:
            return 1
        return fib(n-1) + fib(n-2)
Fibonacci: Recursive
• In order to calculate fib(4), how many times does fib() get called?

    fib(4)
    ├── fib(3)
    │   ├── fib(2)
    │   │   ├── fib(1)
    │   │   └── fib(0)
    │   └── fib(1)
    └── fib(2)
        ├── fib(1)
        └── fib(0)

• 9 calls in total; the repeated subtrees (fib(2) twice, fib(1) three times) only multiply as n grows
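We can confirm that count with a small instrumented version (a hypothetical helper, not part of the original slides):

    # count_calls(n) returns (fib(n), number of fib() invocations made)
    def count_calls(n):
        if n < 2:
            return n, 1
        a, calls_a = count_calls(n - 1)
        b, calls_b = count_calls(n - 2)
        return a + b, calls_a + calls_b + 1

    print(count_calls(4))   # (3, 9): fib(4) = 3, computed with 9 calls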
Fibonacci: Dynamic Programming
• Instead of recomputing the same Fibonacci numbers over and over, we'll compute each one only once, and store it for future reference.
• Like most dynamic programming algorithms, we'll need a table of some sort to keep track of intermediary values.

    function dynamicFib(n):
        fibs = []  // make an array of size n + 1, to hold fib(0) through fib(n)
        fibs[0] = 0
        fibs[1] = 1
        for i from 2 to n:
            fibs[i] = fibs[i-1] + fibs[i-2]
        return fibs[n]
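As runnable Python (a sketch; it adds a guard for n < 2 that the pseudocode glosses over):

    def dynamic_fib(n):
        if n < 2:                  # base cases: fib(0) = 0, fib(1) = 1
            return n
        fibs = [0] * (n + 1)       # table for fib(0) .. fib(n)
        fibs[1] = 1
        for i in range(2, n + 1):  # fill the table bottom-up, each entry once
            fibs[i] = fibs[i - 1] + fibs[i - 2]
        return fibs[n]

    print(dynamic_fib(10))  # 55

Filling each table entry exactly once makes this O(n), versus the exponential call tree of the plain recursive version.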
Readings
• Dasgupta Section 0.2, pp. 12-15
  • Goes through this Fibonacci example (although without mentioning dynamic programming)
  • This section is easily readable now
• Dasgupta Section 0.3, pp. 15-17
  • Describes big-O notation far better than I can
  • If you read only one thing in Dasgupta, read these 3 pages!
• Dasgupta Chapter 6, pp. 169-199
  • Goes into detail about dynamic programming, which it calls one of the "sledgehammers of the trade", i.e., powerful and generalizable.
  • This chapter builds significantly on earlier ones and will be challenging to read now, but we'll see much of it this semester.