Time Complexity - Class Lecture
Merge Sort:
Time Complexity (roughly): c2 · n · lg n, where c2 is a constant that does not depend on n.
Insertion sort's running time is roughly c1 · n², and insertion sort typically has a smaller constant factor than merge sort, so that c1 < c2.
We shall see that the constant factors can have far less of an impact on the running time than the
dependence on the input size n.
Insertion sort usually runs faster than merge sort for small input sizes. Once the input size n becomes
large enough, merge sort’s advantage of lg n vs. n will more than compensate for the difference in
constant factors.
No matter how much smaller c1 is than c2, there will always be a crossover point beyond which
merge sort is faster.
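To see the crossover numerically, here is a small Python sketch. The constants c1 = 1 and c2 = 100 are made-up values chosen only to show that a crossover exists even when c1 is much smaller than c2.

import math

def insertion_count(n, c1=1):
    return c1 * n * n              # roughly c1 * n^2 operations

def merge_count(n, c2=100):
    return c2 * n * math.log2(n)   # roughly c2 * n * lg n operations

# Find the first n at which merge sort's count drops below insertion sort's.
n = 2
while merge_count(n) >= insertion_count(n):
    n += 1
print("crossover point: n =", n)   # beyond this n, merge sort needs fewer operations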
n² vs n log n
Suppose that
the world’s craftiest programmer codes insertion sort in machine language for computer A, and
the resulting code requires 2n² instructions to sort n numbers.
an average programmer implements merge sort, using a high-level language with an inefficient
compiler, with the resulting code taking 50 n lg n instructions.
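As a rough numeric check of this scenario, the sketch below tabulates the two instruction counts, 2n² and 50 n lg n, taken directly from the example above; translating them into seconds would additionally depend on how fast each computer executes instructions, which is not specified here.

import math

for n in (10**3, 10**4, 10**6, 10**7):
    insertion = 2 * n**2                 # instructions for insertion sort
    merge = 50 * n * math.log2(n)        # instructions for merge sort
    print(f"n = {n:>9,}: 2n^2 = {insertion:.2e}, 50 n lg n = {merge:.2e}")
# The n^2 count grows much faster, so merge sort needs far fewer instructions
# once n is large, despite its bigger constant factor.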
We used some simplifying abstractions to ease our analysis of the INSERTIONSORT procedure.
First, we ignored the actual cost of each statement, using the constants ci to represent these costs.
Then, we expressed the worst-case running time as an² + bn + c for some constants a, b, and c that depend on the statement costs ci. We thus ignored not only the actual statement costs, but also the abstract costs ci.
We shall now make one more simplifying abstraction: it is the rate of growth, or order of growth, of the
running time that really interests us.
We therefore consider only the leading term of a formula (e.g., an²), since the lower-order terms
are relatively insignificant for large values of n.
We also ignore the leading term’s constant coefficient, since constant factors are less significant
than the rate of growth in determining computational efficiency for large inputs.
For insertion sort, when we ignore the lower-order terms and the leading term’s constant coefficient,
we are left with the factor of n² from the leading term.
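A quick numeric sketch of this abstraction (the values a = 3, b = 100, c = 500 are invented purely for illustration): as n grows, the full expression an² + bn + c is dominated by its leading term.

a, b, c = 3, 100, 500    # made-up statement-cost constants

for n in (10, 100, 1_000, 10_000, 100_000):
    full = a * n**2 + b * n + c
    leading = a * n**2
    print(f"n = {n:>7}: (an^2 + bn + c) / (an^2) = {full / leading:.4f}")
# The ratio tends to 1, so only the n^2 term matters for the rate of growth.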
CONCLUSION:
∑_{i=1}^{n} (n − i − 1 + 1)
= ∑_{i=1}^{n} (n − i)
= (n − 1) + (n − 2) + · · · + 3 + 2 + 1
= n(n − 1)/2
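A short Python check of this conclusion, using the closed form n(n − 1)/2 for the sum:

for n in (1, 5, 10, 100, 1_000):
    total = sum(n - i for i in range(1, n + 1))   # (n-1) + (n-2) + ... + 1 + 0
    assert total == n * (n - 1) // 2              # closed form of the sum
    print(f"n = {n:>5}: sum = {total}")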
Case 1:
Suppose a computer takes 1000 times as long to process the basic operation once in Algorithm A (the linear-time algorithm) as it takes to process the basic operation once in Algorithm B (the quadratic-time algorithm).
Here Algorithm B (n²) is efficient for small values of n, but we know that the linear-time algorithm should eventually be more efficient.
If one fast basic operation takes time t, Algorithm B becomes slower than Algorithm A when
n² · t > n · 1000 · t,
or n > 1000.
If the application never has an input size larger than 1000, Algorithm B should be implemented; otherwise, implement Algorithm A.
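The sketch below makes Case 1 concrete. The per-operation time t = 1e-9 seconds is an assumed figure, not from the lecture; only the 1000× ratio matters.

t = 1e-9   # assumed time for one fast basic operation (seconds)

def time_A(n):
    return n * 1000 * t    # linear algorithm, basic operation 1000 times slower

def time_B(n):
    return n * n * t       # quadratic algorithm, fast basic operation

for n in (100, 500, 2_000, 100_000):
    faster = "B" if time_B(n) < time_A(n) else "A"
    print(f"n = {n:>7}: A = {time_A(n):.2e} s, B = {time_B(n):.2e} s -> {faster} is faster")
# B wins below n = 1000; A wins beyond it, matching n^2 * t > n * 1000 * t  <=>  n > 1000.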
Observation:
An algorithm with time complexity n is more efficient than an algorithm with time complexity n² for sufficiently large values of n, regardless of how long it takes to process the basic operation in the two algorithms.
It is for these reasons that the efficiency analysis framework ignores multiplicative constants and concentrates on the count’s order of growth to within a constant multiple for large-size inputs.
Case 2:
Same Machine, Different Algorithms, Same Problem, Same Processing Time of Basic Operation
Suppose a computer takes the same time to process the basic operation once in Algorithm A as it takes to process the basic operation once in Algorithm B, and the overhead is about the same.
In that case the linear-time algorithm (100n) will be more efficient than the quadratic-time algorithm (0.01n²) when
0.01n² > 100n
or 0.01n > 100
or n > 10,000.
If it takes longer to process the basic operation in Algorithm A (the linear-time algorithm) than in Algorithm B, then there is simply some larger value of n at which Algorithm A becomes more efficient.
“Any linear-time algorithm is eventually more efficient than any quadratic-time algorithm.”
0.1n³ + n² + n + 100 ≈ n³
Orders of Growth
Why this emphasis on the count’s order of growth for large input sizes? A difference in running
times on small inputs is not what really distinguishes efficient algorithms from inefficient ones.
Comparison Analysis:
Compare the running time count of the given algorithm with common complexity functions.
Asymptotic Notation
Definition: Asymptotic
An asymptote (asymptotic line) is a line that a curve gets closer
and closer to as the curve heads out toward infinity.
Following are the commonly used basic Asymptotic Notations.
Orders of Growth:
Values (some approximate) of several functions important for analysis of algorithms.
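Since the table of values itself is not reproduced here, the following sketch prints approximate values of a few such functions (the particular input sizes are arbitrary):

import math

print(f"{'n':>8} {'lg n':>7} {'n lg n':>10} {'n^2':>12} {'n^3':>16} {'2^n':>12}")
for n in (10, 100, 1_000, 10_000):
    lg = math.log2(n)
    two_n = f"{2.0 ** n:.1e}" if n <= 1_000 else "astronomical"
    print(f"{n:>8} {lg:>7.1f} {n * lg:>10.0f} {n**2:>12} {n**3:>16} {two_n:>12}")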
Big O:
If f(n) is in O(n²), then
eventually, f(n) lies beneath some pure quadratic function cn² on a graph.
This means that
If f(n) is the time complexity for some algorithm,
eventually, the running time of the algorithm will be at least as fast as quadratic.
Or
eventually f(n) is at least as good as a pure quadratic function.
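A numeric sketch of the Big O idea (the function f, the constant c = 6, and the threshold n0 = 40 below are illustrative choices, not a formal proof):

def f(n):
    return 5 * n**2 + 30 * n + 7     # an example running-time count

c, n0 = 6, 40
# Beyond n0, f(n) never rises above the pure quadratic c * n^2.
assert all(f(n) <= c * n * n for n in range(n0, 100_000))
print("f(n) <= 6 n^2 for every tested n >= 40, consistent with f(n) in O(n^2)")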
Big Ω:
If f(n) is in Ω(n²), then
eventually, f(n) lies above some pure quadratic function on a graph.
For the purposes of analysis, this means that
eventually, f(n) is at least as bad as a pure quadratic function.
We say that “big Ω” puts an asymptotic lower bound on a function.
Informally,
Ω( g(n) ) is the set of all functions with a higher or the same order of growth as g(n) (to within a
constant multiple, as n goes to infinity).
Big Θ:
If f(n) is in Θ(n²)
(that is, f(n) is in both O(n²) and Ω(n²); Θ(n²) is the intersection of O(n²) and Ω(n²)), then
we can conclude that
eventually the function lies beneath some pure quadratic function on a graph and
eventually it lies above some pure quadratic function on a graph.
That is,
eventually it is at least as good as some pure quadratic function and
eventually it is at least as bad as some pure quadratic function.
We can therefore conclude that its growth is similar to that of a pure quadratic function.
We say that “big Θ” puts an asymptotic tight bound on a function.
Θ( g(n) ) is the set of all functions that have the same order of growth as g(n) (to within a constant multiple, as n
goes to infinity).
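And a matching sketch for Big Θ: the same kind of function is squeezed between two pure quadratics beyond some point (the constants c1 = 3, c2 = 4 and n0 = 20 are again illustrative assumptions):

def g(n):
    return 3 * n**2 + 10 * n + 100   # another example running-time count

c1, c2, n0 = 3, 4, 20
squeezed = all(c1 * n * n <= g(n) <= c2 * n * n for n in range(n0, 50_000))
print("3 n^2 <= g(n) <= 4 n^2 for every tested n >= 20:", squeezed)   # consistent with Theta(n^2)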
The time complexities of some algorithms do not increase with n. For example, recall that the
best-case time complexity B(n) for Algorithm 1.1 is 1 for every value of n. The complexity
category containing such functions can be represented by any constant, and for simplicity we
represent it by Θ(1).