02b Analysis

Computational tractability concerns which problems can be solved efficiently as the input size grows. An algorithm runs in polynomial time if doubling the input size increases its running time by at most a constant factor. Polynomial-time algorithms are considered efficient because they avoid the exponential blow-up of brute-force search. While some polynomial-time algorithms have high constants or exponents, those that arise in practice generally perform well, exploiting problem structure to avoid exponential running time.


Computational Tractability

Polynomial-Time

Brute force. For many non-trivial problems, there is a natural brute


force search algorithm that checks every possible solution.n ! for stable
 Typically takes 2N time or worse for inputs of size N. matching
with n M-members
 Unacceptable in practice. and n W-members

Desirable scaling property. When the input size doubles, the algorithm
should only slow down by some constant factor C.

There exists constants c > 0 and d > 0 such that on every


input of size N, its running time is bounded by c Nd steps.

Def. An algorithm is poly-time if the above scaling property holds.


choose C = 2d

1
Worst-Case Analysis

Worst-case running time. Obtain bound on largest possible running time
of algorithm on input of a given size N.
 • Generally captures efficiency in practice.
 • Draconian view, but hard to find an effective alternative.

Average-case running time. Obtain bound on running time of algorithm
on random input as a function of input size N.
 • Hard (or impossible) to accurately model real instances by random
   distributions.
 • An algorithm tuned for a certain distribution may perform poorly on
   other inputs.
Worst-Case Polynomial-Time

Def. An algorithm is efficient if its running time is polynomial.

Justification: It really works in practice!
 • Although 6.02 × 10^23 × N^20 is technically poly-time, it would be
   useless in practice.
 • In practice, the poly-time algorithms that people develop almost
   always have low constants and low exponents.
 • Breaking through the exponential barrier of brute force typically
   exposes some crucial structure of the problem.

Exceptions.
 • Some poly-time algorithms do have high constants and/or
   exponents, and are useless in practice.
 • Some exponential-time (or worse) algorithms are widely used because
   the worst-case instances seem to be rare (e.g., the simplex method,
   Unix grep).
Why It Matters

[Table comparing running-time growth as n increases, omitted in this
extraction. For scale: 10^25 ≈ 10 septillion.]
Asymptotic Order of Growth
A prelude: some functions we use

• The constant function: f(n) = c
• The logarithmic function: f(n) = log n
• The linear function: f(n) = n
• The n-log-n function: f(n) = n log n
• The quadratic function: f(n) = n^2
• The cubic function: f(n) = n^3
• Polynomials: f(n) = n^d for a constant d > 0
• Exponentials: f(n) = r^n for a constant r > 1
Asymptotic Order of Growth

Upper bounds. T(n) is O(f(n)) if there exist constants c > 0 and integer
n0 ≥ 0 such that for all n ≥ n0 we have T(n) ≤ c · f(n).

Lower bounds. T(n) is Ω(f(n)) if there exist constants c > 0 and n0 ≥ 0
such that for all n ≥ n0 we have T(n) ≥ c · f(n).

Tight bounds. T(n) is Θ(f(n)) if T(n) is both O(f(n)) and Ω(f(n)).

Ex: T(n) = 32n^2 + 17n + 32.
 • T(n) is O(n^2), O(n^3), Ω(n^2), Ω(n), and Θ(n^2).
 • T(n) is not O(n), Ω(n^3), Θ(n), or Θ(n^3).
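A quick numeric sanity check of this example, as a Python sketch. The
witnesses c = 81 and c = 32 with n0 = 1 are one valid choice (mine, not
from the slides):

# T(n) = 32n^2 + 17n + 32 <= (32 + 17 + 32)·n^2 = 81·n^2 for all n >= 1,
# and T(n) >= 32·n^2 for all n >= 0, so T(n) is Theta(n^2).
def T(n):
    return 32 * n**2 + 17 * n + 32

assert all(T(n) <= 81 * n**2 for n in range(1, 10_000))  # O(n^2): c = 81, n0 = 1
assert all(T(n) >= 32 * n**2 for n in range(1, 10_000))  # Omega(n^2): c = 32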

Notation

Slight abuse of notation: T(n) = O(f(n)).
 • Asymmetric:
   – f(n) = 5n^3; g(n) = 3n^2
   – f(n) = O(n^3) = g(n)
   – but f(n) ≠ g(n) and f(n) ≠ O(g(n)).
 • Better notation: T(n) ∈ O(f(n)).

Meaningless statement. "Any comparison-based sorting algorithm
requires at least O(n log n) comparisons."
 • Statement doesn't "type-check": O(·) gives an upper bound, which
   cannot express a requirement.
 • Use Ω for lower bounds.
Properties

Transitivity.
 • If f = O(g) and g = O(h) then f = O(h).
 • If f = Ω(g) and g = Ω(h) then f = Ω(h).
 • If f = Θ(g) and g = Θ(h) then f = Θ(h).

Additivity.
 • If f = O(h) and g = O(h) then f + g = O(h).
 • If f = Ω(h) and g = Ω(h) then f + g = Ω(h).
 • If f = Θ(h) and g = Θ(h) then f + g = Θ(h).

Is it true that if f = O(h) then 2^f = O(2^h)?
No: take f(n) = 2n and h(n) = n. Then f = O(h), but
2^f(n) / 2^h(n) = 2^(2n) / 2^n = 2^n is unbounded, so 2^f is not O(2^h).
Useful sufficient conditions

[The conditions on this slide were not captured in the extraction.]
Asymptotic Bounds for Some Common Functions

Polynomials. a0 + a1·n + … + ad·n^d is Θ(n^d) if ad > 0.

Polynomial time. Running time is O(n^d) for some constant d independent
of the input size n.

Logarithms. O(log_a n) = O(log_b n) for any constants a, b > 1,
so we can avoid specifying the base.

Logarithms. For every x > 0, log n = O(n^x):
log grows slower than every polynomial.

Exponentials. For every r > 1 and every d > 0, n^d = O(r^n):
every exponential grows faster than every polynomial.
A Survey of Common Running Times
Linear Time: O(n)

Linear time. Running time is at most a constant factor times the size
of the input.

Computing the maximum. Compute maximum of n numbers a1, …, an.

max ← a1
for i = 2 to n {
   if (ai > max)
      max ← ai
}
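A minimal Python rendering of the pseudocode above (the function name is
mine, not from the slides):

def maximum(a):
    """Return the maximum of n numbers a[0], ..., a[n-1] in O(n) time.

    Assumes a is non-empty.
    """
    max_val = a[0]           # max <- a1
    for x in a[1:]:          # for i = 2 to n
        if x > max_val:      # if (ai > max)
            max_val = x      #     max <- ai
    return max_val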
Linear Time: O(n)

Merge. Combine two sorted lists A = a1,a2,…,an with B = b1,b2,…,bn
into a sorted whole.

i = 1, j = 1
while (both lists are nonempty) {
   if (ai ≤ bj) append ai to output list and increment i
   else         append bj to output list and increment j
}
append remainder of nonempty list to output list

Claim. Merging two lists of size n takes O(n) time.
Pf. After each comparison, the length of the output list increases by 1,
and the output holds at most 2n elements, so there are at most 2n
comparisons.
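A Python sketch of the merge step (function name mine; assumes both
inputs are already sorted):

def merge(a, b):
    """Merge two sorted lists into one sorted list in O(len(a) + len(b)) time."""
    out = []
    i = j = 0
    while i < len(a) and j < len(b):   # both lists nonempty
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])                  # append remainder of the
    out.extend(b[j:])                  # nonempty list
    return out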
O(n log n) Time

O(n log n) time. Arises in divide-and-conquer algorithms; also referred
to as linearithmic time.

Sorting. Mergesort and heapsort are sorting algorithms that perform
O(n log n) comparisons.

Largest empty interval. Given n time-stamps x1, …, xn on which copies
of a file arrive at a server, what is the largest interval of time when no
copies of the file arrive?

O(n log n) solution. Sort the time-stamps. Scan the sorted list in
order, identifying the maximum gap between successive time-stamps,
as in the sketch below.
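A Python sketch of this solution (function name mine; assumes at least
two time-stamps):

def largest_gap(timestamps):
    """Largest interval between successive time-stamps.

    Sorting dominates at O(n log n); the scan for the maximum gap is O(n).
    """
    xs = sorted(timestamps)
    return max(xs[i + 1] - xs[i] for i in range(len(xs) - 1))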
Quadratic Time: O(n^2)

Quadratic time. Enumerate all pairs of elements.

Closest pair of points. Given a list of n points in the plane (x1, y1), …,
(xn, yn), find the pair that is closest.

O(n^2) solution. Try all pairs of points.

min ← (x1 - x2)^2 + (y1 - y2)^2
for i = 1 to n {
   for j = i+1 to n {
      d ← (xi - xj)^2 + (yi - yj)^2    // don't need to take square roots
      if (d < min)
         min ← d
   }
}
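The same brute-force idea as runnable Python (function name mine; this
version also returns the pair itself, not just the distance):

def closest_pair(points):
    """Closest pair among points [(x1, y1), ...] by trying all pairs: O(n^2).

    Compares squared distances, so no square roots are needed.
    """
    best_d = float("inf")
    best = None
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (xi, yi), (xj, yj) = points[i], points[j]
            d = (xi - xj) ** 2 + (yi - yj) ** 2
            if d < best_d:
                best_d, best = d, (points[i], points[j])
    return best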

Remark. Ω(n^2) seems inevitable, but this is just an illusion (see Chapter 5).
Cubic Time: O(n^3)

Cubic time. Enumerate all triples of elements.

Set disjointness. Given n sets S1, …, Sn, each of which is a subset of
{1, 2, …, n}, is there some pair of these which are disjoint?

O(n^3) solution. For each pair of sets, determine if they are disjoint.

foreach set Si {
   foreach other set Sj {
      foreach element p of Si {
         determine whether p also belongs to Sj
      }
      if (no element of Si belongs to Sj)
         report that Si and Sj are disjoint
   }
}
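A Python sketch of this enumeration (function name mine; the O(n^3)
count of membership tests assumes each test is O(1) on average, i.e.,
the inputs are Python sets rather than lists):

def report_disjoint_pairs(sets):
    """Report every disjoint pair among n sets over {1, ..., n}."""
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            # Test every element of Si for membership in Sj.
            if all(p not in sets[j] for p in sets[i]):
                print(f"S{i + 1} and S{j + 1} are disjoint")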

Polynomial Time: O(n^k)

Independent set of size k. Given a graph, are there k nodes such that
no two are joined by an edge? (Here k is a constant.)

O(n^k) solution. Enumerate all subsets of k nodes.

foreach subset S of k nodes {
   check whether S is an independent set
   if (S is an independent set)
      report S is an independent set
}

 • Checking whether S is an independent set takes O(k^2) time.
 • Number of k-element subsets = n(n-1)(n-2)⋯(n-k+1) / k! ≤ n^k / k!.
 • Total: O(k^2 · n^k / k!) = O(n^k).

Poly-time for k = 17, but not practical; a Python sketch follows.
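A minimal sketch in Python, assuming the graph is given as a node list
plus a set of frozenset edges (both the representation and the function
name are mine):

from itertools import combinations

def has_independent_set_of_size(edges, nodes, k):
    """Check for k nodes, no two joined by an edge: O(n^k) for constant k."""
    for subset in combinations(nodes, k):       # O(n^k / k!) subsets
        # Independence check: no pair in the subset is an edge, O(k^2).
        if all(frozenset((u, v)) not in edges
               for u, v in combinations(subset, 2)):
            return True
    return False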

Exponential Time

Independent set. Given a graph, what is the maximum size of an
independent set?

O(n^2 · 2^n) solution. Enumerate all subsets.

S* ← ∅
foreach subset S of nodes {
   check whether S is an independent set
   if (S is an independent set larger than S*)
      update S* ← S
}
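A Python sketch of this exhaustive search, under the same assumed graph
representation as above (node list plus a set of frozenset edges):

from itertools import combinations

def max_independent_set(edges, nodes):
    """Maximum independent set by enumerating all 2^n subsets: O(n^2 · 2^n)."""
    best = ()                                           # S* <- empty set
    for r in range(len(nodes) + 1):
        for subset in combinations(nodes, r):           # all 2^n subsets
            independent = all(frozenset((u, v)) not in edges
                              for u, v in combinations(subset, 2))
            if independent and len(subset) > len(best):  # largest seen so far
                best = subset                            # S* <- S
    return set(best)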

