Complexity of Algorithms
In our previous articles on Analysis of Algorithms, we briefly discussed asymptotic notations and their
worst- and best-case performance. In this article, we discuss the analysis of algorithms using Big-O
asymptotic notation in detail.
Big-O Analysis of Algorithms
The Big-O notation defines an upper bound of an algorithm; it bounds a function only from above. For
example, consider Insertion Sort. It takes linear time in the best case and quadratic time in the worst
case. We can therefore safely say that the time complexity of Insertion Sort is O(n^2). Note that O(n^2) also
covers linear time.
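As a concrete illustration, here is a minimal sketch of Insertion Sort in Python (the function name and structure are my own; the article only names the algorithm):

```python
def insertion_sort(a):
    """Sort the list a in place and return it.

    Best case O(n): on already-sorted input the inner loop never runs.
    Worst case O(n^2): on reverse-sorted input every element shifts all the way left.
    """
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift elements larger than key one slot to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

Both cases are covered by the single upper bound O(n^2), which is exactly what Big-O expresses.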
The Big-O Asymptotic Notation gives us the Upper Bound Idea, mathematically described below:
f(n) = O(g(n)) if there exist a positive integer n0 and a positive constant c such that f(n) ≤ c·g(n) ∀ n ≥ n0
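The definition can be checked numerically for a hypothetical pair of functions; f, g, c, and n0 below are illustrative choices, not taken from the article:

```python
# Illustrative choices: f(n) = 3n^2 + 2n + 5 and g(n) = n^2, so f(n) = O(n^2).
def f(n):
    return 3 * n**2 + 2 * n + 5

def g(n):
    return n**2

c, n0 = 4, 4  # witnesses: 3n^2 + 2n + 5 <= 4n^2 once n >= 4
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
```

Note that the inequality may fail below n0 (here f(3) = 38 > 4·g(3) = 36), which is why the definition only requires it for all n ≥ n0.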
The general step wise procedure for Big-O runtime analysis is as follows:
1. Figure out what the input is and what n represents.
2. Express the maximum number of operations the algorithm performs in terms of n.
3. Eliminate all but the highest-order term.
4. Remove all constant factors.
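Applied to a hypothetical function (pair_sums below is my own example, not from the article), the four steps look like this:

```python
def pair_sums(nums):
    """Sum x + y over all ordered pairs drawn from nums."""
    total = 0                  # step 1: n is len(nums); this line is 1 operation
    for x in nums:             # n iterations
        for y in nums:         # n iterations for each x
            total += x + y     # step 2: ~n^2 operations in total
    return total
# Total work is roughly n^2 + n + 1 operations.
# Step 3: drop the lower-order terms n and 1  ->  n^2.
# Step 4: drop constant factors               ->  O(n^2).
```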
Some useful properties of Big-O notation analysis are as follows:
▪ Constant Multiplication:
If f(n) = c·g(n), then O(f(n)) = O(g(n)); where c is a nonzero constant.
▪ Polynomial Function:
If f(n) = a0 + a1·n + a2·n^2 + … + am·n^m, then O(f(n)) = O(n^m).
▪ Summation Function:
If f(n) = f1(n) + f2(n) + … + fm(n) and fi(n) ≤ fi+1(n) ∀ i = 1, 2, …, m−1,
then O(f(n)) = O(max(f1(n), f2(n), …, fm(n))).
▪ Logarithmic Function:
If f(n) = log_a n and g(n) = log_b n, then O(f(n)) = O(g(n));
all logarithmic functions grow in the same manner in terms of Big-O.
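The logarithmic property follows from the change-of-base identity log_a n = log_b n / log_b a; a quick numerical check (bases 2 and 10 are illustrative choices):

```python
import math

# log_2(n) and log_10(n) differ only by the constant factor log_2(10),
# so they belong to the same Big-O class.
for n in (10, 1000, 10**6):
    ratio = math.log2(n) / math.log10(n)
    assert abs(ratio - math.log2(10)) < 1e-9
```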
Basically, this asymptotic notation is used to measure and compare the worst-case scenarios of
algorithms theoretically. For any algorithm, the Big-O analysis should be straightforward as long as we
correctly identify the operations that are dependent on n, the input size.
Runtime Analysis of Algorithms
In general, we measure and compare the worst-case theoretical running-time complexities of
algorithms for performance analysis.
The fastest possible running time for any algorithm is O(1), commonly referred to as Constant Running
Time. In this case, the algorithm always takes the same amount of time to execute, regardless of the input
size. This is the ideal runtime for an algorithm, but it’s rarely achievable.
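A common example of constant time is indexing into an array-backed list (a sketch; the helper name is mine):

```python
def first_element(items):
    # O(1): indexing takes the same time whether items holds
    # ten entries or ten million.
    return items[0]
```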
In practice, the performance (runtime) of an algorithm depends on n, that is, the size of the input or
the number of operations required for each input item.
The algorithms can be classified as follows from the best-to-worst performance (Running Time
Complexity):
▪ A logarithmic algorithm – O(logn)
Runtime grows logarithmically in proportion to n.
▪ A linear algorithm – O(n)
Runtime grows directly in proportion to n.
▪ A superlinear algorithm – O(nlogn)
Runtime grows in proportion to nlogn, slightly faster than linear.
▪ A polynomial algorithm – O(n^c)
Runtime grows faster than all of the above, as a fixed power of n.
▪ An exponential algorithm – O(c^n)
Runtime grows even faster than any polynomial algorithm's.
▪ A factorial algorithm – O(n!)
Runtime grows the fastest and quickly becomes unusable for even
small values of n.
Here, n is the input size and c is a positive constant.
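The best-to-worst ordering above can be verified numerically for a sufficiently large n (a sketch, using c = 2 as an illustrative constant):

```python
import math

# For n = 100, the growth classes order exactly as listed:
# log n < n < n log n < n^c < c^n < n!   (with c = 2)
n = 100
values = [
    math.log2(n),         # logarithmic
    n,                    # linear
    n * math.log2(n),     # superlinear
    n**2,                 # polynomial
    2**n,                 # exponential
    math.factorial(n),    # factorial
]
assert values == sorted(values)
```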
Complexity
The whole point of the big-O/Ω/Θ stuff was to be able
to say something useful about algorithms.
o So, let's return to some algorithms and see if we
learned anything.
Good/bad times
We have said that these growth rates matter when it comes
to the running times of algorithms.
\(n\) | \(\log_2 n\) | \(n\) | \(n\log_2 n\) | \(n^2\) | \(n^{3}\) | \(2^n\)
\(10\) | 3.3 μs | 10 μs | 33 μs | 100 μs | 1 ms | 1 ms
\(10^2\) | 6.6 μs | 100 μs | 664 μs | 10 ms | 1 s | \(4\times 10^{16}\) years
\(10^4\) | 13 μs | 10 ms | 133 ms | 1.7 minutes | 11.6 days | \(10^{2997}\) years
\(10^6\) | 20 μs | 1 s | 20 s | 11.6 days | 32000 years | \(10^{300000}\) years
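The table's entries appear to assume one operation per microsecond; under that assumption, a short script reproduces the polynomial columns:

```python
import math

STEP = 1e-6  # seconds per operation (the table's apparent assumption)

for n in (10, 10**2, 10**4, 10**6):
    # e.g. n = 10^4: n log2 n -> ~0.133 s, matching the 133 ms entry;
    #      n = 10^6: n^3 -> 10^12 s, i.e. roughly 32000 years.
    print(f"n={n}: n log2 n -> {n * math.log2(n) * STEP:.3g} s, "
          f"n^2 -> {n**2 * STEP:.3g} s, n^3 -> {n**3 * STEP:.3g} s")
```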
A summary:
o If you can get \(O(\log n)\) life is good: hand it in
and go home.
o \(O(n\log n)\) is pretty good: hard to complain
about it.
o \(O(n^k)\) could be bad, depending on \(k\): you
won't be solving huge problems. These
are polynomial complexity algorithms for \(k\ge
1\).
o \(\Omega(k^n)\) is a disaster: almost as bad as no
algorithm at all if you have double-digit input
sizes. These are exponential
complexity algorithms for \(k\gt 1\).
o See also: Numbers everyone should know
Space Complexity
We have only been talking about running time/speed so
far.
o It also makes good sense to talk about the
complexity of other things.
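For example, two ways to sum a list take the same O(n) time but differ in space (hypothetical helpers of my own, for illustration):

```python
def sum_constant_space(nums):
    # O(1) extra space: a single accumulator, regardless of input size.
    total = 0
    for x in nums:
        total += x
    return total


def sum_linear_space(nums):
    # O(n) extra space: builds a full copy of the input before summing.
    copied = list(nums)
    return sum(copied)
```

Both return the same answer; space complexity distinguishes them where time complexity cannot.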