
Analysis of Algorithms

Lecture 1
Dr. Max Alekseyev, USC, 2010

Organization

Lecturer: Dr. Max Alekseyev
Time/Place: TTh 2:00-3:15pm / SWGN 2A15
Office hours: after each lecture or by appointment, SWGN 3A48
Course webpage: http://cse.sc.edu/~maxal/csce750/
Textbooks:
Required: Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein
Recommended: Computers and Intractability by Garey and Johnson

Assessment

Homeworks: 30%
Mid-term: 30%
Final exam: 40%

Introduction to Algorithms
What they are and why you should care

Web Resources on Algorithms

Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani
http://www.cs.berkeley.edu/~vazirani/algorithms.html

Algorithms Course Materials by Jeff Erickson
http://compgeom.cs.uiuc.edu/~jeffe/teaching/algorithms/

Design and Analysis of Algorithms by Russell Impagliazzo
http://cseweb.ucsd.edu/classes/sp09/cse101/

Design and Analysis of Algorithms: How to Think About Algorithms by Jeff Edmonds
http://www.cse.yorku.ca/~jeff/notes/

An everyday algorithm

Good algorithm

Bad algorithm

A computational problem

Is defined by inputs and outputs, e.g.:
Input: amount of money to change, M
Output: set of coins summing to M

Requires precise formulation
Is solved by algorithms
Is a prerequisite to an algorithm

Pseudocode: Assignment

Pseudocode: Arithmetic

Pseudocode: Conditional

Pseudocode: Loops

Pseudocode: Array access
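The pseudocode examples from the original slides are not preserved in this text. The following minimal sketch, written in Python in place of the course pseudocode, illustrates the same constructs: assignment, arithmetic, a conditional, a loop, and array access.

def sum_of_squares(a):
    total = 0                             # assignment
    for i in range(len(a)):               # loop over array indices
        if a[i] > 0:                      # conditional
            total = total + a[i] * a[i]   # arithmetic and array access
    return total

print(sum_of_squares([3, -1, 4]))         # prints 25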

U.S. Change Problem

Pseudocode

More specifically

Inelegant, but correct

But what about, say, South Africa?

Generalized Problem

BetterChange
BetterChange(77, (25, 10, 5, 1), 4):

r ← 77
k = 1: i1 ← 77/25 (= 3), r ← 77 - 3*25 (= 2)
k = 2: i2 ← 2/10 (= 0), r ← 2 - 0*10 (= 2)
k = 3: i3 ← 2/5 (= 0), r ← 2 - 0*5 (= 2)
k = 4: i4 ← 2/1 (= 2), r ← 2 - 2*1 (= 0)
Return (3, 0, 0, 2)
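A sketch of the greedy BetterChange procedure as reconstructed from the trace above (written in Python rather than the course pseudocode; the parameter d is inferred here from the length of c):

def better_change(M, c):
    # Greedy change-making: c lists coin denominations in decreasing order.
    r = M                    # remaining amount to change
    counts = []
    for coin in c:
        i = r // coin        # how many coins of this denomination fit
        r = r - i * coin
        counts.append(i)
    return tuple(counts)

print(better_change(77, (25, 10, 5, 1)))   # (3, 0, 0, 2), matching the trace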

BetterChange: is it a good solution?

For c = (25, 10, 5, 1), the greedy BetterChange always returns a minimal set of coins. For c = (25, 20, 10, 5, 1), it does not: for M = 40 it returns 25 + 10 + 5 (three coins), while 20 + 20 (two coins) is optimal.
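A quick check using the better_change sketch above (the M = 40 instance is an illustrative assumption, not from the slides):

print(better_change(40, (25, 10, 5, 1)))      # (1, 1, 1, 0): 25+10+5, three coins (optimal here)
print(better_change(40, (25, 20, 10, 5, 1)))  # (1, 0, 1, 1, 0): again 25+10+5, but 20+20 needs only two coins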

Brute Force

How fast is it?


Number of iterations of the first index: M/c1
Number of iterations of the second index: M/c2
...
Number of iterations of the d-th index: M/cd

Each check takes 2d + k operations (k is a constant). Hence the total number of operations (the running time complexity) is:

(M/c1)·(M/c2)···(M/cd)·(2d + k) = (2d/(c1···cd))·M^d + (k/(c1···cd))·M^d = O(d·M^d)
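The original BruteForceChange pseudocode is not reproduced on these slides. The following Python sketch shows the strategy being counted above, assuming the goal is a minimal number of coins: every vector of counts (i1, ..., id) with 0 ≤ ik ≤ M/ck is generated and checked.

from itertools import product

def brute_force_change(M, c):
    # Assumed reconstruction: enumerate every combination of coin counts
    # and keep one that sums to M using the fewest coins.
    best = None
    ranges = [range(M // coin + 1) for coin in c]              # 0 .. M/ck for each denomination
    for counts in product(*ranges):                            # about (M/c1)*(M/c2)*...*(M/cd) vectors
        if sum(i * coin for i, coin in zip(counts, c)) == M:   # the "check", costing on the order of d operations
            if best is None or sum(counts) < sum(best):
                best = counts
    return best

print(brute_force_change(40, (25, 20, 10, 5, 1)))              # (0, 2, 0, 0, 0): two 20-cent coins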

Asymptotic Complexity

Finding the exact complexity of an algorithm, f(n) = the exact number of basic operations as a function of the input size n, is difficult. We therefore approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n), i.e., g(n) is sufficiently close to f(n) for large values of n. This "approximate" measure of efficiency is called asymptotic complexity. The asymptotic complexity measure does not give the exact number of operations performed by an algorithm, but it shows how that number grows with the size of the input. This gives us a measure of efficiency that carries over across platforms, operating systems, compilers, and CPUs.

Asymptotic = Far Out

Big-O Notation

The most commonly used notation for specifying asymptotic complexity is the big-O notation. The big-O notation, O(g(n)), is used to give an upper bound (worst case) on a positive runtime function f(n), where n is the input size.

Definition of Big-O: For a function f(n) that is non-negative for all n ≥ 0, we say that f(n) = O(g(n)) (f(n) is Big-O of g(n))

if there exist n0 ≥ 0 and a constant c > 0 such that f(n) ≤ c·g(n) for all n ≥ n0.
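A worked example (not from the original slides): f(n) = 3n^2 + 5n is O(n^2). Take c = 4 and n0 = 5; then 5n ≤ n^2 for all n ≥ 5, so 3n^2 + 5n ≤ 4n^2 = c·n^2 for all n ≥ n0.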

Order notation

From the analysis above, the brute-force running time is O(d·M^d).

Examples
O(1)          Constant
O(log(n))     Logarithmic
O(n)          Linear
O(n log(n))   n log(n)
O(n^x)        Polynomial    (e.g., O(n^2), O(n^3), etc.)
O(a^n)        Exponential   (e.g., O(1.6^n), O(2^n), etc.)
O(n!)         Factorial
O(n^n)

Big-Omega Notation

Similarly, Ω(g(n)) is used to give a lower bound on a positive runtime function f(n), where n is the input size.

Definition: For a function f(n) that is non-negative for all n ≥ 0, we say that f(n) = Ω(g(n)) (f(n) is big-Omega of g(n))

if there exist n0 ≥ 0 and a constant c > 0 such that f(n) ≥ c·g(n) for all n ≥ n0.
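Continuing the worked example (again not from the slides): f(n) = 3n^2 + 5n is Ω(n^2). Take c = 3 and n0 = 0; then 3n^2 + 5n ≥ 3n^2 = c·n^2 for all n ≥ 0.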

Big-Theta Notation

Similarly, Θ(g(n)) is used to give a tight bound on a positive runtime function f(n), where n is the input size.

Definition: For a function f(n) that is non-negative for all n ≥ 0, we say that f(n) = Θ(g(n)) (f(n) is big-Theta of g(n))

if f(n) = O(g(n)) and f(n) = Ω(g(n)).
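Combining the two worked examples above: since 3n^2 + 5n = O(n^2) and 3n^2 + 5n = Ω(n^2), we have 3n^2 + 5n = Θ(n^2).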

Two complexities

Algorithms have a complexity that determines their execution speed. Problems have a complexity that determines how fast the fastest algorithm can go!

Sorting can be done in polynomial time.
Listing all subsets takes exponential time.
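An illustrative contrast in Python (an assumed example, not from the slides): sorting n items needs only O(n log n) comparisons, while merely listing all subsets produces 2^n outputs.

from itertools import combinations

def all_subsets(items):
    # Enumerate all 2^n subsets; the output alone has exponential size.
    for k in range(len(items) + 1):
        for subset in combinations(items, k):
            yield subset

data = [5, 3, 8, 1]
print(sorted(data))                  # polynomial time: O(n log n) comparisons
print(len(list(all_subsets(data))))  # 2^4 = 16 subsets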

NP-completeness

There is a class of problems, the NP-complete problems, that appear to require exponential time. Every problem in this class is equivalent to every other one in the sense that each can be transformed (reduced) into any other in polynomial time. It is considered very unlikely that a polynomial-time algorithm exists for any problem in this class; if one existed, all of them would be solvable in polynomial time.

Good vs. Bad

Problems:
Good: clear; precise
Bad: allows silly/mean solutions

Algorithms:
Good: poly-time; correct
Bad: exponential or worse; incorrect

Implementations:
Good: as fast as the algorithm allows
Bad: dumb coding

Summary

Computational problems define a mapping from inputs to outputs.
Algorithms solve computational problems.
Algorithms can be communicated through pseudocode.
Two fundamental properties of algorithms are correctness and efficiency.
Problems also have an inherent complexity.

Design and Analysis of Algorithms

How can one design an algorithm, given a problem?
Is a particular algorithm correct?
How efficient is a particular algorithm?
