Best Case
The best case is the function that performs the minimum number of steps on input data of
n elements. The worst case is the function that performs the maximum number of steps on
input data of size n. The average case is the function that performs an average number of
steps on input data of n elements.
Sorting algorithms
See also: Sorting algorithm § Comparison of algorithms
Algorithm     Data structure   Best           Average        Worst          Space (worst)
Merge sort    Array            O(n log(n))    O(n log(n))    O(n log(n))    O(n)
Heap sort     Array            O(n log(n))    O(n log(n))    O(n log(n))    O(1)
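As a concrete illustration of the O(n log(n)) entries above, here is a minimal merge sort
sketch in Python; it is our own illustrative code, not taken from the table's source:

    def merge_sort(arr):
        # Recursion splits the input in half: about log(n) levels.
        if len(arr) <= 1:
            return arr
        mid = len(arr) // 2
        left = merge_sort(arr[:mid])
        right = merge_sort(arr[mid:])
        # Merging does O(n) work per level, so the total is O(n log(n))
        # in the best, average, and worst case; the merged list is the
        # O(n) auxiliary space shown in the last column.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]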
Data structures
                      Average case                                Worst case
Data structure        Access     Search     Insert     Delete     Access     Search     Insert     Delete     Space (worst)
Basic array           O(1)       O(n)       O(n)       O(n)       O(1)       O(n)       O(n)       O(n)       O(n)
Stack                 O(n)       O(n)       O(1)       O(1)       O(n)       O(n)       O(1)       O(1)       O(n)
Queue                 O(n)       O(n)       O(1)       O(1)       O(n)       O(n)       O(1)       O(1)       O(n)
Singly linked list    O(n)       O(n)       O(1)       O(1)       O(n)       O(n)       O(1)       O(1)       O(n)
Doubly linked list    O(n)       O(n)       O(1)       O(1)       O(n)       O(n)       O(1)       O(1)       O(n)
Skip list             O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(n)       O(n)       O(n)       O(n)       O(n log(n))
Binary search tree    O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(n)       O(n)       O(n)       O(n)       O(n)
B-tree                O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(n)
Red–black tree        O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(n)
AVL tree              O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(n)
K-d tree              O(log(n))  O(log(n))  O(log(n))  O(log(n))  O(n)       O(n)       O(n)       O(n)       O(n)
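To make one row of the table concrete, here is a hypothetical singly linked list sketch:
insertion at the head is O(1), while search may visit every node, hence O(n). The class
and method names are our own choices:

    class Node:
        def __init__(self, value, next_node=None):
            self.value = value
            self.next = next_node

    class SinglyLinkedList:
        def __init__(self):
            self.head = None

        def insert_front(self, value):
            # O(1): only the head pointer changes, whatever the length.
            self.head = Node(value, self.head)

        def search(self, value):
            # O(n) in the worst case: the target may be at the tail
            # or absent, forcing a walk over every node.
            node = self.head
            while node is not None:
                if node.value == value:
                    return True
                node = node.next
            return False

    lst = SinglyLinkedList()
    for v in [3, 1, 4]:
        lst.insert_front(v)
    print(lst.search(1))  # True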
Big O notation
Big O is a member of a family of notations invented by German mathematicians Paul
Bachmann,[1] Edmund Landau,[2] and others, collectively called Bachmann–Landau notation or
asymptotic notation.
In computer science, big O notation is used to classify algorithms according to how their run
time or space requirements grow as the input size grows.
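One way to see this classification in practice is to count basic operations as the input
size grows; a small sketch (the step-counting functions are ours):

    def count_linear(n):
        steps = 0
        for _ in range(n):
            steps += 1          # one pass over the input: O(n)
        return steps

    def count_quadratic(n):
        steps = 0
        for _ in range(n):
            for _ in range(n):
                steps += 1      # nested passes: O(n^2)
        return steps

    for n in [10, 100, 1000]:
        print(n, count_linear(n), count_quadratic(n))
    # Growing n tenfold grows the linear count tenfold,
    # but the quadratic count a hundredfold.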
Example
In typical usage the O notation is asymptotic, that is, it refers to very large x. In this setting, the
contribution of the terms that grow "most quickly" will eventually make the other ones irrelevant.
As a result, the following simplification rules can be applied:
● If f(x) is a sum of several terms, the one with the largest growth rate can be kept,
and all others omitted.
● If f(x) is a product of several factors, any constants (factors in the product that do not
depend on x) can be omitted.
For example, let f(x) = 6x^4 − 2x^3 + 5, and suppose we wish to simplify this function, using O
notation, to describe its growth rate as x approaches infinity. This function is the sum of three
terms: 6x^4, −2x^3, and 5. Of these three terms, the one with the highest growth rate is the one
with the largest exponent as a function of x, namely 6x^4. Now one may apply the second rule:
6x^4 is a product of 6 and x^4, in which the first factor does not depend on x. Omitting this factor
results in the simplified form x^4. Thus, we say that f(x) is a "big O" of x^4. Mathematically, we can
write f(x) = O(x^4). One may confirm this calculation using the formal definition: let f(x) = 6x^4 − 2x^3
+ 5 and g(x) = x^4. Applying the formal definition from above, the statement that f(x) = O(x^4) is
equivalent to its expansion,
|f(x)| ≤ M x^4
for some suitable choice of a real number x0 and a positive real number M and for all x > x0. To
prove this, let x0 = 1 and M = 13. Then, for all x > x0:
|6x^4 − 2x^3 + 5| ≤ 6x^4 + |2x^3| + 5 ≤ 6x^4 + 2x^4 + 5x^4 = 13x^4
so
|6x^4 − 2x^3 + 5| ≤ 13x^4.
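The inequality can also be sanity-checked numerically; a short sketch:

    # Spot-check |f(x)| <= 13 * x^4 at sample points x > x0 = 1.
    f = lambda x: 6 * x**4 - 2 * x**3 + 5
    for x in [1.1, 2.0, 10.0, 1000.0]:
        assert abs(f(x)) <= 13 * x**4
    print("the bound holds at every sampled x > 1")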
1. Big O Notation
We define an algorithm's worst-case time complexity using Big O notation, which describes
the set of functions that grow more slowly than, or at the same rate as, the given expression.
In other words, it bounds the maximum amount of time an algorithm requires over all input
values.
2. Omega Notation
Omega notation defines the best case of an algorithm's time complexity: it describes the set
of functions that grow faster than, or at the same rate as, the given expression. In other
words, it bounds the minimum amount of time an algorithm requires over all input values.
3. Theta Notation
Theta notation defines the average case of an algorithm's time complexity: when the set of
functions lies in both O(expression) and Omega(expression), Theta notation is used. This is
how we define the average-case time complexity of an algorithm.
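Read together, the three notations amount to inequality checks against witness constants.
A minimal sketch with hand-picked values (f, the constants, and n0 are our illustrative
choices):

    # f(n) = 3n + 5 is Theta(n): it lies in both O(n) and Omega(n).
    f = lambda n: 3 * n + 5
    c_lower, c_upper, n0 = 3, 4, 5   # witnesses: 3n <= f(n) <= 4n for n >= 5

    for n in range(n0, 10_000):
        assert c_lower * n <= f(n) <= c_upper * n
    print("3n + 5 is Theta(n) on the sampled range")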
In worst-case analysis, we calculate an upper bound on the running time of an algorithm.
We must know the case that causes the maximum number of operations to be executed. For
linear search, the worst case happens when the element to be searched for (x) is not present
in the array. When x is not present, the search() function compares it with all the elements
of arr[] one by one. Therefore, the worst-case time complexity of linear search is O(n).
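A minimal sketch of the search() function described above (the exact signature and the
return convention are our assumptions):

    def search(arr, x):
        # Compares x with the elements of arr[] one by one.
        # Worst case: x is absent, so all n elements are inspected -> O(n).
        for i, value in enumerate(arr):
            if value == x:
                return i    # index of the first match
        return -1           # x is not present

    arr = [2, 7, 1, 9, 4]
    print(search(arr, 8))   # -1: the worst case, all 5 elements compared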
In best-case analysis, we calculate a lower bound on the running time of an algorithm. We
must know the case that causes the minimum number of operations to be executed. In the linear
search problem, the best case occurs when x is present at the first location. The number of
operations in the best case is constant (not dependent on n), so the time complexity in the
best case would be Ω(1).
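With the same search() sketch, the best case takes exactly one comparison:

    print(search([2, 7, 1, 9, 4], 2))  # 0: x is at the first location, one comparison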
In average-case analysis, we take all possible inputs and calculate the computing time for
each of them, sum all the calculated values, and divide the sum by the total number of
inputs. We must know (or predict) the distribution of cases. For the linear search problem,
let us assume that all cases are uniformly distributed (including the case of x not being
present in the array). So we sum all the cases and divide the sum by (n+1). The value of the
average-case time complexity is then
(θ(1) + θ(2) + … + θ(n+1)) / (n+1) = θ(n).
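Under this uniform assumption, the average can be computed directly; a sketch (the case
costs follow the counting in the text):

    # Linear search over n+1 equally likely cases: finding x at position i
    # costs i comparisons (i = 1..n); the "x absent" case is counted as n+1.
    n = 1000
    costs = list(range(1, n + 2))    # n+1 cases in total
    average = sum(costs) / (n + 1)   # (n+2)/2, i.e. Theta(n)
    print(average)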
A) For some algorithms, all the cases (worst, best, average) are asymptotically the
same, i.e., there are no separate worst and best cases.
B) Whereas most other sorting algorithms have distinct worst and best cases; consider a
function whose work depends on whether n is even or odd, as in the sketch below.
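The original code for this example is not reproduced here, so the following is a
hypothetical sketch of a function with exactly the behavior the bullets below analyze:
constant work when n is even, a full linear pass when n is odd:

    def parity_dependent(arr):
        n = len(arr)
        if n % 2 == 0:
            return arr[0] if arr else None   # n even: O(1) work
        total = 0
        for value in arr:                    # n odd: O(n) work
            total += value
        return total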
● Best Case: The order of growth will be constant, because in the best case we assume
that n is even.
● Average Case: In this case, we assume that even and odd values of n are equally likely,
therefore the order of growth will be linear.
● Worst Case: The order of growth will be linear, because in this case we assume that n
is always odd.