Analysis of Algorithms: Dr. Max Alekseyev USC, 2010
Organization
Lecturer: Dr. Max Alekseyev
Time/Place: TTh 2:00-3:15pm / SWGN 2A15
Office hours: after each lecture or by appointment, SWGN 3A48
Course webpage: http://cse.sc.edu/~maxal/csce750/
Textbooks:
Required: Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein
Recommended: Computers and Intractability by Garey and Johnson
Assessment
Introduction to Algorithms
What they are and why you should care
Algorithms Course Materials by Jeff Erickson
http://compgeom.cs.uiuc.edu/~jeffe/teaching/algorithms/
Design and Analysis of Algorithms by Russell Impagliazzo
http://cseweb.ucsd.edu/classes/sp09/cse101/
Design and Analysis of Algorithms: How to Think About Algorithms by Jeff Edmonds
http://www.cse.yorku.ca/~jeff/notes/
An everyday algorithm
Good algorithm
Bad algorithm
A computational problem
Pseudocode: Assignment
Pseudocode: Arithmetic
Pseudocode: Conditional
Pseudocode: Loops
Pseudocode
More specifically
Generalized Problem
BetterChange
Input: M = 77, c = (25, 10, 5, 1), d = 4

r = 77
k = 1: i1 = 77/25 = 3,  r = 77 - 3*25 = 2
k = 2: i2 = 2/10 = 0,   r = 2 - 0*10 = 2
k = 3: i3 = 2/5 = 0,    r = 2 - 0*5 = 2
k = 4: i4 = 2/1 = 2,    r = 2 - 2*1 = 0
Return (3, 0, 0, 2)
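A minimal Python sketch of the greedy procedure traced above; the function name better_change and the argument order are illustrative, and the denominations are assumed to be listed in decreasing order, as in the trace.

def better_change(M, c):
    """Greedy change-making: for each denomination (largest first),
    take as many coins as fit into the remaining amount."""
    r = M                      # remaining amount, as in the trace
    counts = []                # i_k for each denomination c_k
    for coin in c:
        i_k = r // coin        # integer division
        counts.append(i_k)
        r -= i_k * coin
    return tuple(counts)

print(better_change(77, (25, 10, 5, 1)))   # -> (3, 0, 0, 2)

Note that this greedy strategy is not optimal for every set of denominations: for example, with c = (25, 20, 10, 5, 1) and M = 40 it returns three coins (25 + 10 + 5) although two coins (20 + 20) suffice.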
Brute Force
The values i1, ..., id range up to M/c1, ..., M/cd respectively, so there are about (M/c1)·(M/c2)···(M/cd) combinations to check. Each check does 2d + k operations (k is a constant). Hence, the total number of operations (running time complexity) is:

(M/c1)·(M/c2)···(M/cd)·(2d + k) = 2d·M^d/(c1···cd) + k·M^d/(c1···cd) = O(d·M^d)
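For comparison, a hedged Python sketch of the brute-force approach this count describes: it enumerates every coin-count vector (i1, ..., id) with 0 ≤ i_k ≤ M/c_k and keeps a vector that sums exactly to M using the fewest coins. The function name brute_force_change is illustrative.

from itertools import product

def brute_force_change(M, c):
    """Try every coin-count vector (i_1, ..., i_d) with 0 <= i_k <= M // c_k
    and return one that sums exactly to M using the fewest coins."""
    best = None
    ranges = [range(M // coin + 1) for coin in c]   # about M/c_k + 1 choices per coin
    for counts in product(*ranges):                 # roughly (M/c_1)*...*(M/c_d) checks
        if sum(i * coin for i, coin in zip(counts, c)) == M:
            if best is None or sum(counts) < sum(best):
                best = counts
    return best

print(brute_force_change(77, (25, 10, 5, 1)))       # -> (3, 0, 0, 2)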
Asymptotic Complexity
Finding the exact complexity, f(n) = number of basic operations, of an algorithm is difficult. We approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n), i.e., g(n) is sufficiently close to f(n) for large values of the input size n. This "approximate" measure of efficiency is called asymptotic complexity. Thus, the asymptotic complexity measure does not give the exact number of operations of an algorithm, but it shows how that number grows with the size of the input. This gives us a measure that will work for different platforms, operating systems, compilers, and CPUs.
Big-O Notation
The most commonly used notation for specifying asymptotic complexity is big-O notation. Big-O notation, O(g(n)), gives an upper bound (worst case) on a positive runtime function f(n), where n is the input size.
Definition of Big-O: For a function f(n) that is non-negative for all n ≥ 0, we say that f(n) = O(g(n)) (f(n) is Big-O of g(n))
if there exist n0 ≥ 0 and a constant c > 0 such that f(n) ≤ c·g(n) for all n ≥ n0.
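For instance, f(n) = 3n^2 + 5n is O(n^2): take c = 4 and n0 = 5; for every n ≥ 5 we have 5n ≤ n^2, so f(n) = 3n^2 + 5n ≤ 4n^2 = c·g(n).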
Order notation
Brute-force running time: (2d + k)·M^d/(c1·c2···cd) = O(d·M^d)
Examples
O(1)          Constant
O(log(n))     Logarithmic
O(n)          Linear
O(n·log(n))   n·log(n)
O(n^x)        Polynomial    (e.g., O(n^2), O(n^3), etc.)
O(a^n)        Exponential   (e.g., O(1.6^n), O(2^n), etc.)
O(n!)         Factorial
O(n^n)
Big-Omega Notation
Similarly, Ω(g(n)) is used to give a lower bound on a positive runtime function f(n), where n is the input size.
Definition: For a function f(n) that is non-negative for all n ≥ 0, we say that f(n) = Ω(g(n)) (f(n) is big-Omega of g(n))
if there exist n0 ≥ 0 and a constant c > 0 such that f(n) ≥ c·g(n) for all n ≥ n0.
Big-Theta Notation
Similarly, Θ(g(n)) is used to give a tight bound on a positive runtime function f(n), where n is the input size.
Definition: For a function f(n) that is non-negative for all n ≥ 0, we say that f(n) = Θ(g(n)) (f(n) is big-Theta of g(n))
if there exist n0 ≥ 0 and constants c1, c2 > 0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 (equivalently, if f(n) = O(g(n)) and f(n) = Ω(g(n))).
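Continuing the example above, f(n) = 3n^2 + 5n is Θ(n^2): 3n^2 ≤ f(n) for all n ≥ 0 and f(n) ≤ 4n^2 for all n ≥ 5, so c1 = 3, c2 = 4, n0 = 5 witness the tight bound; in particular, f(n) is also Ω(n^2).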
Two complexities
Algorithms have a complexity that determines their execution speed. Problems have a complexity that determines how fast the fastest algorithm can go!
Sorting can be done in polynomial time.
Listing all subsets takes exponential time.
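As a small illustration of the second claim, a Python sketch (using the standard itertools module) that lists all subsets of an n-element set; since an n-element set has 2^n subsets, any algorithm that outputs them all needs exponential time just to write its output. The function name all_subsets is illustrative.

from itertools import combinations

def all_subsets(items):
    """Yield every subset of items; there are 2**len(items) of them,
    so merely writing the output already takes exponential time."""
    for size in range(len(items) + 1):
        for subset in combinations(items, size):
            yield subset

subsets = list(all_subsets([1, 2, 3]))
print(len(subsets))   # -> 8, i.e., 2**3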
NP-completeness
There is a class of problems that might require exponential time: the NP-complete problems. Any problem in this class can be transformed into any other problem in the class in polynomial time, so in that sense they are all equivalent. It is considered very unlikely that a polynomial-time algorithm exists for any problem in this class.
Summary
Computational problems define a mapping from inputs to outputs.
Algorithms solve computational problems.
Algorithms can be communicated through pseudocode.
Two fundamental properties of algorithms are correctness and efficiency.
Problems also have an inherent complexity.
How can one design an algorithm, given a problem?
Is a particular algorithm correct?
How efficient is a particular algorithm?