Asymptotic Notation

Best case

The best case is the function describing the minimum number of steps an algorithm performs on input data of n elements. The worst case is the function describing the maximum number of steps performed on input data of size n. The average case is the function describing the average number of steps performed on input data of n elements.

Sorting algorithms
See also: Sorting algorithm § Comparison of algorithms

Algorithm      | Data structure | Time: Best  | Time: Worst | Time: Average | Space: Worst
Quick sort     | Array          | O(n log(n)) | O(n²)       | O(n log(n))   | O(n)
Merge sort     | Array          | O(n log(n)) | O(n log(n)) | O(n log(n))   | O(n)
Heap sort      | Array          | O(n log(n)) | O(n log(n)) | O(n log(n))   | O(1)
Smooth sort    | Array          | O(n)        | O(n log(n)) | O(n log(n))   | O(1)
Bubble sort    | Array          | O(n)        | O(n²)       | O(n²)         | O(1)
Insertion sort | Array          | O(n)        | O(n²)       | O(n²)         | O(1)
Selection sort | Array          | O(n²)       | O(n²)       | O(n²)         | O(1)
Bogo sort      | Array          | O(n)        | O(∞)        | O(n·n!)       | O(1)

[Figure: graphs of functions commonly used in the analysis of algorithms, showing the number of operations N versus input size n for each function.]

● Insertion sort applied to a list of n elements, assumed to be all different and initially in random order (see the sketch after this list). On average, half the elements in a list A1 ... Aj are less than element Aj+1, and half are greater. Therefore, the algorithm compares the (j + 1)th element to be inserted against, on average, half of the already sorted sub-list, so tj = j/2. Working out the resulting average-case running time yields a quadratic function of the input size, just like the worst-case running time.
● Quicksort applied to a list of n elements, again assumed to be all different and initially in random order. This popular sorting algorithm has an average-case performance of O(n log(n)), which contributes to making it a very fast algorithm in practice. But given a worst-case input, its performance degrades to O(n²). Also, when implemented with the "shortest first" policy (recursing into the smaller partition first), the worst-case space complexity is instead bounded by O(log(n)).
● Heapsort runs in O(n) time when all elements are equal: heapify takes O(n) time, and removing each of the n elements from the heap then takes O(1) time. The running time grows to O(n log(n)) when the elements are distinct.
● Bogosort runs in O(n) time when the elements happen to be sorted on the first iteration. In each iteration, all elements are checked to see whether they are in order. There are n! possible permutations; with a balanced random number generator, almost every permutation of the array is produced within n! iterations. Because computers have limited memory, the generated numbers eventually cycle, so it may not be possible to reach every permutation; in the worst case this leads to O(∞) time, an infinite loop.
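As a concrete illustration of the insertion-sort argument above, here is a minimal sketch (the function name and the comparison counter are illustrative additions, not taken from the source); the inner loop performs the comparisons against the already sorted sub-list:

```python
def insertion_sort(a):
    """Sort the list a in place and return the number of comparisons made."""
    comparisons = 0
    for j in range(1, len(a)):       # a[0:j] is already sorted
        key = a[j]                   # the (j + 1)th element to insert
        i = j - 1
        while i >= 0:
            comparisons += 1
            if a[i] <= key:          # insertion point found
                break
            a[i + 1] = a[i]          # shift the larger element one slot right
            i -= 1
        a[i + 1] = key
    return comparisons
```

On a random permutation the returned count is roughly n²/4, matching the tj = j/2 argument; on an already sorted list it is only n − 1.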

Data structures

See also: Search data structure § Asymptotic worst-case analysis

Data structure     | Indexing (avg / worst) | Search (avg / worst)  | Insertion (avg / worst) | Deletion (avg / worst) | Space (worst)
Basic array        | O(1) / O(1)            | O(n) / O(n)           | O(n) / O(n)             | O(n) / O(n)            | O(n)
Dynamic array      | O(1) / O(1)            | O(n) / O(n)           | O(n) / O(n)             | n/a / n/a              | O(n)
Stack              | O(n) / O(n)            | O(n) / O(n)           | O(1) / O(1)             | O(1) / O(1)            | O(n)
Queue              | O(n) / O(n)            | O(n) / O(n)           | O(1) / O(1)             | O(1) / O(1)            | O(n)
Singly linked list | O(n) / O(n)            | O(n) / O(n)           | O(1) / O(1)             | O(1) / O(1)            | O(n)
Doubly linked list | O(n) / O(n)            | O(n) / O(n)           | O(1) / O(1)             | O(1) / O(1)            | O(n)
Skip list          | O(log(n)) / O(n)       | O(log(n)) / O(n)      | O(log(n)) / O(n)        | O(log(n)) / O(n)       | O(n log(n))
Hash table         | n/a / n/a              | O(1) / O(n)           | O(1) / O(n)             | O(1) / O(n)            | O(n)
Binary search tree | O(log(n)) / O(n)       | O(log(n)) / O(n)      | O(log(n)) / O(n)        | O(log(n)) / O(n)       | O(n)
Cartesian tree     | n/a / n/a              | O(log(n)) / O(n)      | O(log(n)) / O(n)        | O(log(n)) / O(n)       | O(n)
B-tree             | O(log(n)) / O(log(n))  | O(log(n)) / O(log(n)) | O(log(n)) / O(log(n))   | O(log(n)) / O(log(n))  | O(n)
Red–black tree     | O(log(n)) / O(log(n))  | O(log(n)) / O(log(n)) | O(log(n)) / O(log(n))   | O(log(n)) / O(log(n))  | O(n)
Splay tree         | n/a / n/a              | O(log(n)) / O(log(n)) | O(log(n)) / O(log(n))   | O(log(n)) / O(log(n))  | O(n)
AVL tree           | O(log(n)) / O(log(n))  | O(log(n)) / O(log(n)) | O(log(n)) / O(log(n))   | O(log(n)) / O(log(n))  | O(n)
K-d tree           | O(log(n)) / O(n)       | O(log(n)) / O(n)      | O(log(n)) / O(n)        | O(log(n)) / O(n)       | O(n)

Big O notation
Big O is a member of a family of notations invented by German mathematicians Paul
Bachmann,[1] Edmund Landau,[2] and others, collectively called Bachmann–Landau notation or
asymptotic notation.
In computer science, big O notation is used to classify algorithms according to how their run
time or space requirements grow as the input size grows.

Example
In typical usage the O notation is asymptotic, that is, it refers to very large x. In this setting, the contribution of the terms that grow "most quickly" will eventually make the other ones irrelevant. As a result, the following simplification rules can be applied:

● If f(x) is a sum of several terms and one of them has a larger growth rate than the others, that term can be kept and all the others omitted.
● If f(x) is a product of several factors, any constants (factors in the product that do not depend on x) can be omitted.

For example, let f(x) = 6x⁴ − 2x³ + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x⁴, −2x³, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x⁴. Now one may apply the second rule: 6x⁴ is a product of 6 and x⁴ in which the first factor does not depend on x. Omitting this factor results in the simplified form x⁴. Thus, we say that f(x) is a "big O" of x⁴. Mathematically, we can write f(x) = O(x⁴). One may confirm this calculation using the formal definition: let f(x) = 6x⁴ − 2x³ + 5 and g(x) = x⁴. Applying the formal definition of big O, the statement that f(x) = O(x⁴) is equivalent to its expansion,

|f(x)| ≤ M·x⁴

for some suitable choice of a real number x₀ and a positive real number M and for all x > x₀. To prove this, let x₀ = 1 and M = 13. Then, for all x > x₀:

|6x⁴ − 2x³ + 5| ≤ 6x⁴ + |2x³| + 5 ≤ 6x⁴ + 2x⁴ + 5x⁴ = 13x⁴

so

|6x⁴ − 2x³ + 5| ≤ 13x⁴.
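As an informal numerical check of this bound (not part of the original derivation; the sample points are arbitrary), one can evaluate both sides in a few lines of Python:

```python
# Sanity-check the bound |6x^4 - 2x^3 + 5| <= 13*x^4 for x > x0 = 1.
def f(x):
    return 6 * x**4 - 2 * x**3 + 5

M, x0 = 13, 1.0
for x in [1.001, 2, 10, 1_000, 1_000_000]:
    assert abs(f(x)) <= M * x**4, f"bound violated at x = {x}"
print("|f(x)| <= 13*x^4 holds at every sampled x > 1")
```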

Popular Notations in Complexity Analysis of Algorithms


1. Big-O Notation

We describe an algorithm's worst-case time complexity using Big-O notation, which denotes the set of functions that grow slower than or at the same rate as the given expression. It expresses the maximum amount of time an algorithm can require over all input values.

2. Omega Notation

Omega notation describes the best case of an algorithm's time complexity: it denotes the set of functions that grow faster than or at the same rate as the given expression. It expresses the minimum amount of time an algorithm requires over all input values.

3. Theta Notation

Theta notation is used when the set of functions lies in both O(expression) and Omega(expression), i.e., when the expression bounds the running time from above and below. In this usage it describes the average case of an algorithm's time complexity.
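For reference, the standard formal definitions behind these three notations (stated here for completeness, and consistent with the expansion |f(x)| ≤ M·x⁴ used earlier) are:

f(n) = O(g(n)) if there exist constants c > 0 and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.
f(n) = Ω(g(n)) if there exist constants c > 0 and n₀ such that f(n) ≥ c·g(n) for all n ≥ n₀.
f(n) = Θ(g(n)) if f(n) = O(g(n)) and f(n) = Ω(g(n)).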

Measurement of Complexity of an Algorithm


Based on the above three notations of Time Complexity there are three cases to analyze an
algorithm:

1. Worst Case Analysis (Mostly used)

In the worst-case analysis, we calculate an upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. For linear search, the worst case happens when the element to be searched (x) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of linear search is O(n).
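A minimal sketch of the linear search being analyzed (the names search and arr follow the wording above; the exact signature is an assumption):

```python
def search(arr, x):
    """Return the index of x in arr, or -1 if x is not present."""
    for i, value in enumerate(arr):
        if value == x:
            return i      # best case: x at the first location, one comparison
    return -1             # worst case: x absent, n comparisons
```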

2. Best Case Analysis (Very Rarely used)

In the best-case analysis, we calculate a lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the time complexity in the best case is Θ(1).

3. Average Case Analysis (Rarely used)

In average-case analysis, we take all possible inputs and calculate the computing time for each of them, sum all the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, let us assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum the costs of all the cases and divide the sum by (n + 1). The resulting average-case time complexity is worked out below.
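A worked version of that calculation (reconstructed here, since the formula itself is not included in this excerpt): finding x at position i costs i comparisons, and the not-present case costs n comparisons, so

average cost = ( Σ_{i=1}^{n} i + n ) / (n + 1) = ( n(n + 1)/2 + n ) / (n + 1) = Θ(n).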

A) For some algorithms, all the cases (worst, best, average) are asymptotically the same, i.e., there are no separate worst and best cases.

● Example: Merge Sort performs Θ(n log(n)) operations in all cases.

B) Whereas most other sorting algorithms have distinct worst and best cases.

● Example 1: In the typical implementation of Quick Sort (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted, and the best case occurs when the pivot always divides the array into two halves.
● Example 2: For insertion sort, the worst case occurs when the array is reverse sorted, and the best case occurs when the array is already sorted in the same order as the output.
Time Complexity Analysis:

The following case analysis refers to an example function (not shown in this excerpt) that does constant work when n is even and linear work when n is odd; a sketch is given after the list.

● Best Case: The order of growth is constant, because in the best case we assume that n is even.
● Average Case: Assuming even and odd values of n are equally likely, the order of growth is linear.
● Worst Case: The order of growth is linear, because in the worst case we assume that n is always odd.
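A minimal sketch consistent with that description (an assumed reconstruction; the original example code is not part of this excerpt):

```python
def fun(n):
    """Hypothetical example: constant work when n is even, linear work when n is odd."""
    if n % 2 == 0:         # even n: a fixed number of steps -> Θ(1)
        return 0
    total = 0
    for i in range(n):     # odd n: the loop runs n times -> Θ(n)
        total += i
    return total
```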
