Time Complexity - Class Lecture

The document compares the time complexities of insertion sort and merge sort. It notes that while insertion sort has a smaller constant factor, merge sort's O(n log n) runtime grows more slowly than insertion sort's O(n^2) runtime. As a result, there is a crossover point where merge sort becomes faster, and this point depends on the relative speed of the basic operations, not just the constant factors. A concrete example shows merge sort running over 17 times faster than insertion sort for large inputs, due to its more efficient asymptotic growth rate.


Insertion Sort:

Time Complexity (roughly): c1 · n^2, where c1 is a constant that does not depend on n.

Merge Sort:

Time Complexity (roughly): c2 · n · log2 n, where c2 is a constant that does not depend on n.

Insertion sort typically has a smaller constant factor than merge sort, so that c1 < c2.

We shall see that the constant factors can have far less of an impact on the running time than the
dependence on the input size n.

Insertion sort usually runs faster than merge sort for small input sizes. Once the input size n becomes
large enough, merge sort’s advantage of lg n vs. n will more than compensate for the difference in
constant factors.

No matter how much smaller c1 is than c2, there will always be a crossover point beyond which
merge sort is faster.
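To see why such a crossover point must exist, here is a short worked argument (a standard sketch, not spelled out in the notes themselves): merge sort is faster exactly when

    c_2 \, n \log_2 n \;<\; c_1 \, n^2 \;\;\Longleftrightarrow\;\; \frac{n}{\log_2 n} \;>\; \frac{c_2}{c_1}.

Since n / log2 n grows without bound as n increases, the inequality holds for all sufficiently large n, no matter how large the ratio c2/c1 is.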

n^2 vs. n log n

For a concrete example, suppose:

Computer A (faster computer): running insertion sort

Computer B (slower computer): running merge sort

Suppose that computer A executes 10 billion instructions per second, while computer B executes only 10 million instructions per second, so that computer A is 1000 times faster than computer B in raw computing power.

To make the difference even more dramatic, suppose that:

 the world’s craftiest programmer codes insertion sort in machine language for computer A, and the resulting code requires 2n^2 instructions to sort n numbers;
 an average programmer implements merge sort, using a high-level language with an inefficient compiler, and the resulting code takes 50 n lg n instructions.

To sort 10 million numbers, computer A takes

    ( 2 · (10^7)^2 instructions ) / ( 10^10 instructions/second ) = 20,000 seconds (more than 5.5 hrs),

while computer B takes

    ( 50 · 10^7 · lg 10^7 instructions ) / ( 10^7 instructions/second ) ≈ 1163 seconds (less than 20 minutes).
By using an algorithm whose running time grows more slowly, even with a poor compiler, computer B
runs more than 17 times faster than computer A!
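As a quick sanity check of this arithmetic, here is a minimal C sketch (written for these notes, not part of the original lecture; the constants 2n^2, 50 n lg n, 10^10 and 10^7 instructions/second all come from the example above):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double n = 1e7;                              /* 10 million numbers to sort     */
        double ips_a = 1e10;                         /* computer A: 10^10 instr/second */
        double ips_b = 1e7;                          /* computer B: 10^7 instr/second  */

        double time_a = 2.0 * n * n / ips_a;         /* insertion sort cost: 2 n^2     */
        double time_b = 50.0 * n * log2(n) / ips_b;  /* merge sort cost: 50 n lg n     */

        printf("Computer A (insertion sort): %.0f seconds\n", time_a);  /* about 20000 */
        printf("Computer B (merge sort):     %.0f seconds\n", time_b);  /* about 1163  */
        return 0;
    }

(Compile with the math library, e.g. cc example.c -lm.)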

We used some simplifying abstractions to ease our analysis of the INSERTION-SORT procedure.

 First, we ignored the actual cost of each statement, using the constants ci to represent these costs.
 Then we expressed the worst-case running time as an^2 + bn + c for some constants a, b, and c that depend on the statement costs ci. We thus ignored not only the actual statement costs, but also the abstract costs ci.

We shall now make one more simplifying abstraction: it is the rate of growth, or order of growth, of the
running time that really interests us.
 We therefore consider only the leading term of a formula (e.g., an^2), since the lower-order terms are relatively insignificant for large values of n.
 We also ignore the leading term’s constant coefficient, since constant factors are less significant
than the rate of growth in determining computational efficiency for large inputs.

For insertion sort, when we ignore the lower-order terms and the leading term’s constant coefficient, we are left with the factor of n^2 from the leading term.

CONCLUSION:

We usually consider one algorithm to be more efficient than another

 if its worst case running time has a lower order of growth.


 Due to constant factors and lower-order terms, an algorithm whose running time has a higher order of growth might take less time for small inputs than an algorithm whose running time has a lower order of growth. But for large enough inputs, a Θ(n^2) algorithm, for example, will run more quickly in the worst case than a Θ(n^3) algorithm.

Principal indicator of the algorithm’s efficiency:


order of growth of an algorithm’s basic operation count

Exchange sort Algorithm


typedef int keytype;                         /* assumed element type for this sketch */

void exchange_sort(int n, keytype S[]) {     /* sorts S[1..n] */
    for (int i = 1; i <= n; i++)
        for (int j = i + 1; j <= n; j++)     /* fixed: increment j here, not i */
            if (S[j] < S[i]) {               /* basic operation: the comparison */
                keytype tmp = S[i];          /* exchange S[i] and S[j] */
                S[i] = S[j];
                S[j] = tmp;
            }
}
The total number of times the basic operation (the comparison S[j] < S[i]) executes is

    ∑_{i=1}^{n} ∑_{j=i+1}^{n} 1  =  ∑_{i=1}^{n} ( n − (i+1) + 1 )  =  ∑_{i=1}^{n} ( n − i )  =  (n−1) + (n−2) + . . . + 3 + 2 + 1

Basic Operation’s Count (ignoring overhead and control-instruction time):

T(n) = 1 + 2 + 3 + . . . + (n−1) = 1/2 n(n−1)

C(n) = 1/2 n(n−1) = 1/2 n^2 − 1/2 n ≈ 1/2 n^2 (ignoring lower-order terms). Detail attached….
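As an illustration (written for these notes, not part of the original lecture), the count can be checked empirically by running the same two loops with a counter in place of the comparison-and-exchange:

    #include <stdio.h>

    /* Counts how many times the inner-loop body of exchange_sort would run. */
    static long count_basic_ops(int n) {
        long count = 0;
        for (int i = 1; i <= n; i++)
            for (int j = i + 1; j <= n; j++)
                count++;                      /* one basic operation per inner iteration */
        return count;
    }

    int main(void) {
        for (int n = 1; n <= 6; n++)
            printf("n = %d: count = %ld, n(n-1)/2 = %d\n",
                   n, count_basic_ops(n), n * (n - 1) / 2);
        return 0;
    }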

Case 1:

Same Machine, Different Algorithms, Same Problem

Suppose a computer takes 1000 times as long to process the basic operation once in algorithm A as it takes to process the basic operation once in algorithm B.

Here “process” includes the time it takes to execute the control instructions (assuming the overhead instructions’ execution time is negligible).

Processing Time of Basic Operation of Algorithm B: t


Processing Time of Basic Operation of Algorithm A: 1000 t

Basic Operation’s Count of Algorithm B (Every-Case Time Complexity) = n^2

Basic Operation’s Count of Algorithm A (Every-Case Time Complexity) = n

Execution Time to process an instance of size n by Algorithm A = n * 1000t

Execution Time to process an instance of size n by Algorithm B = n^2 * t

Here algorithm B (the n^2 algorithm) is faster for small values of n, even though we expect the linear-time algorithm to be the more efficient one.

To determine when algorithm A becomes the more efficient one, we solve the inequality that puts the slower algorithm’s time on the left and the faster algorithm’s time on the right:

n^2 * t > n * 1000t,   or   n > 1000

This shows that algorithm A is more efficient when n > 1000.

If the application never has an input size larger than 1000, algorithm B should be implemented; otherwise, implement algorithm A.

Observation:

The algorithm with time complexity n is more efficient than the algorithm with time complexity n^2 for sufficiently large values of n, regardless of how long it takes to process the basic operation in the two algorithms.
It is for these reasons that the efficiency analysis framework ignores multiplicative constants and
concentrates on the count’s order of growth to within a constant multiple for large-size inputs.

Case 2:

Same Machine, Different Algorithms, Same Problem, Same Processing Time of Basic Operation

Suppose a computer takes the same time to process the basic operation once in algorithm A as it takes to process the basic operation once in algorithm B.

Every-Case Time Complexity of Algorithm A = 100 n

Every-Case Time Complexity of Algorithm B = 0.01 n^2

If it takes the same amount of time to process the basic operations in both algorithms and the overhead is about the same, the first algorithm (algorithm A) will be more efficient if

0.01 n^2 > 100n
or
0.01 n > 100
n > 10,000

If it takes longer to process the basic operation in algorithm A than in algorithm B, then there is simply some larger value of n at which algorithm A becomes more efficient.
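A minimal C sketch (written for these notes, not part of the original lecture) that tabulates the two cost functions from Case 2 and shows the crossover at n = 10,000:

    #include <stdio.h>

    int main(void) {
        int sizes[] = {1000, 5000, 10000, 20000, 100000};
        for (int k = 0; k < 5; k++) {
            double n = sizes[k];
            double cost_a = 100.0 * n;        /* algorithm A: 100 n     */
            double cost_b = 0.01 * n * n;     /* algorithm B: 0.01 n^2  */
            printf("n = %6.0f:  A = %12.0f  B = %12.0f  faster: %s\n",
                   n, cost_a, cost_b,
                   cost_a < cost_b ? "A" : (cost_a > cost_b ? "B" : "tie"));
        }
        return 0;                             /* A and B tie exactly at n = 10,000 */
    }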

There is a fundamental principle here. That is,

“Any linear-time algorithm is eventually more efficient than any quadratic-time algorithm”.

In the theoretical analysis of an algorithm, we are interested in eventual behavior.

An Intuitive Introduction to Order

In theoretical analysis, we are interested in eventual behavior.


5n^2 and 5n^2 + 100 are pure quadratic functions (having no linear term).

0.1 n^2 + n + 100 is a complete quadratic function (having a linear term).

Eventually, the quadratic term dominates the lower-order terms.

We should always be able to throw away the lower-order terms. For example,

0.1 n^3 + n^2 + n + 100 ≈ 0.1 n^3, which has the same order of growth as n^3.
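A short worked check of this claim (a standard limit argument, not in the original notes): dividing by the leading term shows that the lower-order terms vanish as n grows,

    \lim_{n \to \infty} \frac{0.1\,n^3 + n^2 + n + 100}{0.1\,n^3}
      = \lim_{n \to \infty} \left( 1 + \frac{10}{n} + \frac{10}{n^2} + \frac{1000}{n^3} \right) = 1,

so for large n the whole expression behaves like its leading term.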

Orders of Growth
Why this emphasis on the count’s order of growth for large input sizes? A difference in running times on small inputs is not what really distinguishes efficient algorithms from inefficient ones.

Comparison Analysis:
Compare the running time count of the given algorithm with common complexity functions.

Asymptotic Notation

Definition: Asymptote
An asymptote is a line that gets closer and closer to a curve as the distance from the origin goes to infinity.
Following are the commonly used basic Asymptotic Notations.

O (big-Oh) - puts an asymptotic upper bound on a function

Ω (big-Omega) - puts an asymptotic lower bound on a function

Θ (big-Theta) - puts an asymptotic tight bound on a function (a bound from both above and below)

Basic Efficiency Classes – Common Complexity Functions


1 < log n < √n < n < n log n < n^2 < n^3 < . . . < 2^n < 3^n < n! < n^n

Some others are:

log n: log2 n, log3 n, log4 n, etc.
n^k: n^1, n^2, n^3, n^4, etc.
a^n: 2^n, 3^n, 4^n, 5^n, etc.

Orders of Growth:
Values (some approximate) of several functions important for analysis of algorithms.

Growth rates of some common complexity functions
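The table of values referred to above is not reproduced in this copy; as an illustration only (approximate values computed independently of the original table), such a table typically looks like:

    n        log2 n    sqrt(n)    n·log2 n     n^2       n^3       2^n            n!
    10       3.3       3.2        33           10^2      10^3      10^3           3.6·10^6
    10^2     6.6       10         6.6·10^2     10^4      10^6      1.3·10^30      9.3·10^157
    10^4     13        10^2       1.3·10^5     10^8      10^12     (astronomical)
    10^6     20        10^3       2.0·10^7     10^12     10^18     (astronomical)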


How are these three notations used?
In each of them, f(n) and g(n) can be any nonnegative functions defined on the set of natural numbers.
In the context we are interested in,
 f(n) will be an algorithm’s running time, and
 g(n) will be some simple function to compare the count with.

Big O:
If f(n) is in O(n^2), then
eventually, f(n) lies beneath some pure quadratic function c·n^2 on a graph.
This means that
if f(n) is the time complexity for some algorithm,
eventually, the running time of the algorithm will be at least as fast as quadratic,
or
eventually f(n) is at least as good as a pure quadratic function.

We say that “big O” puts an asymptotic upper bound on a function.

Definition: Upper Bound


An element greater than or equal to all
elements in a given set. 3 and 4 are upper
bounds of the set {1, 2, 3}.
Definition: Asymptotic Upper Bound
g(n) is an asymptotic upper bound of f(n) if the trend is for g(n) to stay greater than or equal to f(n) as n approaches infinity.
Definition: Least Upper Bound
An upper bound that is less than or equal to
all upper bounds of a particular set is called
Least Upper Bound. 3, 4 and 5 are upper
bounds but 3 is the least upper bound of a
set {1, 2, 3}.
Note: Don’t mix this up with worst, best, and average case.
Informally,
O ( g(n) ) is the set of all functions with a lower or same order of growth as g(n) (to within a
constant multiple, as n goes to infinity).
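Formally (the standard textbook definition, added here for reference; it is not spelled out in the notes above):

    f(n) \in O(g(n)) \iff \exists\, c > 0 \text{ and } n_0 \ge 1 \text{ such that } 0 \le f(n) \le c \cdot g(n) \text{ for all } n \ge n_0.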

Big Ω:
If f(n) is in Ω(n^2), then
eventually, f(n) lies above some pure quadratic function on a graph.
For the purposes of analysis, this means that
eventually, f(n) is at least as bad as a pure quadratic function.
We say that “big Ω” puts an asymptotic lower bound on a function.

Informally,
Ω( g(n) ), is the set of all functions with a higher or same order of growth as g(n) (to within a
constant multiple, as n goes to infinity).
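The corresponding formal definition (standard, added for reference):

    f(n) \in \Omega(g(n)) \iff \exists\, c > 0 \text{ and } n_0 \ge 1 \text{ such that } f(n) \ge c \cdot g(n) \ge 0 \text{ for all } n \ge n_0.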

Big Θ:
If f(n) is in Θ(n^2)
(that is, f(n) is in both O(n^2) and Ω(n^2); Θ(n^2) is the intersection of O(n^2) and Ω(n^2)), then
we can conclude that

eventually the function lies beneath some pure quadratic function on a graph, and
eventually it lies above some pure quadratic function on a graph.
That is,
eventually it is at least as good as some pure quadratic function, and
eventually it is at least as bad as some pure quadratic function.

We can therefore conclude that its growth is similar to that of a pure quadratic function.
We say that “big Θ” puts an asymptotic tight bound on a function.

Θ( g(n) ) is the set of all functions that have the same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
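Formally (standard definition, added for reference), Θ combines the two bounds above:

    f(n) \in \Theta(g(n)) \iff \exists\, c_1, c_2 > 0 \text{ and } n_0 \ge 1 \text{ such that } c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n) \text{ for all } n \ge n_0.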
The time complexities of some algorithms do not increase with n. For example, recall that the best-case time complexity B(n) for Algorithm 1.1 is 1 for every value of n. The complexity category containing such functions can be represented by any constant, and for simplicity we represent it by Θ(1).
