
Graphs and Network Flows IE411 Lecture 3

Dr. Ted Ralphs


Algorithms
algorithm 1. any systematic method of solving a certain kind of problem 2. a predetermined set of instructions for solving a specific problem in a limited number of steps. The concept of an algorithm is not new, but the formal study of efficiency is relatively new.

Webster's New World Dictionary


Introduction to Computational Complexity


What is the goal of computational complexity theory? To provide a method of comparing the difficulty of two different problems. To provide a method of comparing the efficiency of two different algorithms for the same problem. We would like to be able to rigorously define the meaning of an efficient algorithm. Complexity theory is built on a basic set of assumptions called the model of computation. We will not concern ourselves too much with the details of a particular model here. To deal with this topic in full rigor would require a full-semester course.


Problems, Instances, and Algorithms


A problem P is a mapping of a set of inputs to specified outputs. An instance is a problem along with a particular input. An algorithm is a procedure for computing the output expected from a given input. An algorithm solves a problem P if it produces the expected output for any input. Example: Traveling Salesman Problem. Given an undirected graph G = (N, A) and non-negative arc lengths dij for all (i, j) ∈ A, find a cycle that visits all nodes exactly once and is of minimum total length. How do we specify an instance?
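To make "specifying an instance" concrete, here is a minimal sketch (the data and names are my own, not from the lecture): a tiny TSP instance given as an explicit symmetric cost matrix, solved by brute force purely for illustration.

```python
from itertools import permutations

# Hypothetical instance: costs d[i][j] for 4 locations,
# with d[i][j] == d[j][i] and d[i][i] == 0.
d = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tour_length(tour, d):
    """Total length of a cycle visiting the nodes in `tour` order."""
    return sum(d[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

# Brute force over all tours starting at node 0 (fine only for tiny instances;
# the problem is the mapping from any such matrix to a minimum-length tour).
best = min(permutations(range(1, 4)), key=lambda p: tour_length((0,) + p, d))
print((0,) + best, tour_length((0,) + best, d))
```

Note that the instance here is just the matrix d; the number of entries (and, if unbounded, their magnitudes) is what the later slides call the size of the instance.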


Computational Complexity: What is the objective?


Complexity analysis is aimed at answering two types of questions: How hard is a given problem? How efficient is a given algorithm for a given problem? Our measure of efficiency will be running time, defined as either the actual wall-clock time required to execute the algorithm on a computer (problematic) or the number of elementary operations required (more on this later). The running time may differ by instance, algorithm, and computing platform. How should we measure performance so that we can select the best algorithm from among several?


What do We Measure?
Three methods of analysis: Empirical analysis: try to determine how algorithms behave in practice on real computational platforms under real-world conditions. Average-case analysis: try to determine analytically the expected running time of an algorithm. Worst-case analysis: provide an upper bound on the running time of an algorithm for any instance in a given set.


Drawbacks of Three Approaches


Empirical
1. Depends on programming language, compiler, etc.
2. Time consuming and expensive
3. Often inconclusive

Average-Case
1. Depends on probability distribution
2. Difficult to determine appropriate distribution
3. Intricate mathematical analysis
4. No information on distribution of outcomes

Worst-Case
1. Influenced by pathological instances


The Size of a Problem


Obviously, the time needed to solve a problem instance with a given algorithm depends on certain properties of the instance. The most easily identifiable such property is the size of the instance. However, it is again problematic to define what we mean by size. In many cases, the size of an instance can be taken to be the number of input parameters. For a linear program, this would be roughly determined by the number of variables and constraints. The running time of certain algorithms, however, depends explicitly on the magnitude of the input data.


Measuring the Size of a Problem


We will define the size of an instance to be the amount of information required to represent the instance. This is still not a clear definition because it depends on our representation of the data (the alphabet). Because computers store numbers in binary format, we use the size of a binary encoding (a two-symbol alphabet) as our standard measure. In other words, the size of a number l is the number of bits required to represent it in binary, i.e., ⌊log₂ l⌋ + 1 ≈ log₂ l. As long as the magnitude of the input data is bounded, this is equivalent to considering the number of input parameters. In practice, the magnitude of the input data is usually, but not always, bounded.
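A quick sketch of this measure, using Python's built-in int.bit_length, which computes exactly ⌊log₂ l⌋ + 1 for l ≥ 1:

```python
def encoding_size(l):
    """Bits needed to write the positive integer l in binary:
    floor(log2(l)) + 1, which Python exposes as int.bit_length()."""
    return l.bit_length()

# The size grows with the logarithm of the magnitude, not the magnitude itself:
for l in (1, 2, 255, 256, 10**6):
    print(l, encoding_size(l))
```

For example, one million is a large magnitude but only a 20-bit (tiny) encoding, which is exactly why magnitude and size must be kept distinct.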


More on the Size of a Problem


Note that many combinatorial problems are defined implicitly, i.e., independent of a particular formulation. An example of this is the Traveling Salesman Problem. The input data for an instance of the TSP may be either an explicit vector of costs for traveling between pairs of locations or explicit coordinates of each location, with the costs being implicitly defined as Euclidean distances. Hence, the size of an instance may be either the number of locations or the number of costs specified between pairs of locations. The magnitude of the costs may also affect the size (if this is not bounded).


The Running Time of an Algorithm


Running time is a measure of efficiency for an algorithm. For a given instance of a problem, we can determine (roughly) the time required to solve it with a given implementation on a given computing platform. Worst-case running time with respect to a given set of instances is the maximum time required over all instances. In most cases, worst-case running time depends primarily on the size of the instances, as we have defined it. Therefore, our measure will typically be the worst-case running time over all instances of a given size. However, we still need a measure of running time that is architecture independent. We will simply count the number of elementary operations required to perform the algorithm.


Elementary Operations
Elementary operations are very loosely defined to be additions, subtractions, multiplications, comparisons, etc. In most cases, we will assume that each of these can be performed in constant time. Again, this is a good assumption as long as the size of the numbers remains small as the calculation progresses. Generally we will want to ensure that the numbers can be encoded in a size polynomial in the size of the input. This justifies our assumption about constant-time operations. In some cases, we may have to be very careful about checking this assumption.


Asymptotic Analysis
So far, we have determined that our measure of running time will be a function of instance size (a positive integer). Determining the exact function is still problematic at best. We will only really be interested in approximately how quickly the function grows in the limit. To determine this, we will use asymptotic analysis. Order relations: f(n) ∈ O(g(n)) ⟺ ∃ c ∈ R⁺, n₀ ∈ Z⁺ s.t. f(n) ≤ c·g(n) ∀ n ≥ n₀. In this case, we say f is order g or f is big O of g. Using this relation, we can divide functions into classes that are all of the same order.
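A small sketch of the definition in use (the concrete functions are my own example, not from the lecture): f(n) = 3n² + 5n is in O(n²), witnessed by c = 4 and n₀ = 5.

```python
# Verifying an order relation f(n) in O(g(n)) by exhibiting witnesses:
# constants c and n0 such that f(n) <= c*g(n) for all n >= n0.

def f(n):
    return 3 * n**2 + 5 * n

def g(n):
    return n**2

c, n0 = 4, 5
# 3n^2 + 5n <= 4n^2 holds exactly when 5n <= n^2, i.e., n >= 5.
assert all(f(n) <= c * g(n) for n in range(n0, 10000))
# The bound fails below n0, which is why the definition allows
# finitely many exceptions:
assert f(4) > c * g(4)
```

The finite check over a range is only a sanity test, of course; the comment above gives the actual algebraic argument that the bound holds for all n ≥ n₀.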


Example
for i = 1 to p do
  for j = 1 to q do
    cij = aij + bij

How many elementary operations?
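The loop above, sketched in Python with an explicit operation counter (variable names mirror the pseudocode): the addition executes once per (i, j) pair, so there are p·q elementary additions, ignoring loop bookkeeping.

```python
p, q = 3, 4
a = [[1] * q for _ in range(p)]
b = [[2] * q for _ in range(p)]
c = [[0] * q for _ in range(p)]

ops = 0
for i in range(p):
    for j in range(q):
        c[i][j] = a[i][j] + b[i][j]  # one elementary addition
        ops += 1

print(ops)  # p * q = 12 additions, so the loop is O(pq)
```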


Order Relations
For polynomials, the order relation from the previous slide can be used to divide the set of functions into equivalence classes. We will only be concerned with which equivalence class the function belongs to. Note that class membership is invariant under multiplication by scalars and addition of lower-order terms. For polynomials, the class is determined by the largest exponent on any of the variables. For example, all functions of the form f(n) = an² + bn + c are Θ(n²).


Running Time and Complexity


Running time is a measure of the efficiency of an algorithm. Computational complexity is a measure of the difficulty of a problem. The computational complexity of a problem is the running time of the best possible algorithm. In most cases, we cannot prove that the best known algorithm is also the best possible algorithm. We can therefore only provide an upper bound on the computational complexity in most cases. That is why complexity is usually expressed using big O notation. A case in which we know the exact complexity is comparison-based sorting, but this is unusual.


Aside: Space Complexity


So far, we have discussed only the amount of computing time required to solve a problem. The amount of memory required to execute a given algorithm may also be an issue. This is known as space complexity. We can analyze space complexity in an analogous manner. This will be important in some cases.


Polynomial Time Algorithms


An algorithm is said to be polynomial-time if its worst-case complexity is bounded by a polynomial function of the input size. For network problems: A strongly polynomial algorithm is bounded by a polynomial function that involves only n and m. A weakly polynomial algorithm has a running time that is a function of the size of the whole input, including capacities, etc. An algorithm is said to be exponential-time if its worst-case complexity grows as a function that cannot be bounded by a polynomial function. An algorithm is pseudopolynomial-time if its running time is bounded by a polynomial function of the actual values of the input parameters, such as the largest arc capacity.
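A hedged illustration of the last distinction (my own example, not from the lecture): a loop that runs U times is polynomial in the value U but exponential in the size of its binary encoding, which is only about log₂ U bits — the hallmark of a pseudopolynomial-time algorithm.

```python
def count_to(U):
    """Runs in O(U) elementary steps: polynomial in the value of U,
    but exponential in its encoding size, since U = 2**(number of bits - 1)
    in the worst case."""
    steps = 0
    for _ in range(U):
        steps += 1
    return steps

U = 1024
print(count_to(U), U.bit_length())  # 1024 steps for an 11-bit input
```

Doubling the number of input bits squares the running time, which is why such algorithms are acceptable only when the capacities involved are small.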


Worst-Case Complexity of Algorithms


Dijkstra's Algorithm: O(n²)
Dial's Algorithm: O(m + nC)
Floyd-Warshall Algorithm: O(n³)
Shortest Augmenting Path Algorithm: O(n²m)
Out-of-Kilter Algorithm: O(nU)
Minimum Mean Cycle-Canceling Algorithm: O(n²m³ log n)
Kruskal's Algorithm: O(nm)


Computational Complexity: Activity!


Compare the following functions for various values of n. Determine which function is larger (according to big O) and the approximate value of n after which it is always larger. 1000n^2 and 2^n/100; n^0.001 and (log n)^3; 0.1n^2 and 10000n
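A sketch for checking your answers numerically (the helper is my own, not part of the activity). It locates two of the crossover points by direct search; the n^0.001 versus (log n)^3 crossover also exists, but it is astronomically large and cannot be found this way.

```python
def first_dominates(f, g, limit):
    """Smallest n in [1, limit] with f(n) > g(n); for these monotone
    comparisons it approximates the point after which f stays larger."""
    for n in range(1, limit + 1):
        if f(n) > g(n):
            return n
    return None

# 2^n/100 overtakes 1000n^2 at n = 27 and stays larger.
print(first_dominates(lambda n: 2**n / 100, lambda n: 1000 * n**2, 100))
# 0.1n^2 overtakes 10000n once n > 100000.
print(first_dominates(lambda n: 0.1 * n**2, lambda n: 10000 * n, 200000))
# n^0.001 eventually exceeds (log n)^3 (any polynomial beats a polylog),
# but the crossover is far too large for direct search.
```

The constants (1000, 1/100, 0.1, 10000) shift where the crossover happens but never change which function wins asymptotically, which is exactly the point of big O.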


Computational Complexity: Summary


The (theoretical) objective is to develop polynomial-time algorithms with the smallest possible growth rate. Why? We also need to consider empirical performance, because not all polynomial-time algorithms perform better in practice than exponential-time algorithms. Classic example? Explanation? Will we always be able to find a polynomial-time algorithm for every combinatorial optimization problem?
