Types of Algorithm Analysis
By now we know that the running time of an algorithm generally increases with the size of the input; only if the running time is constant does it stay the same no matter what the input is.
Even when the size of the input is the same, the running time of the algorithm may still vary. Hence, we perform best-, average-, and worst-case analyses of algorithms, covering all the cases in which the algorithm may run unusually fast or unusually slow.
There are four types of analysis of algorithms. They are stated below:
1. Best case
2. Average case
3. Worst case
4. Amortized analysis
Best Case Analysis
The best case analysis of an algorithm gives us a lower bound on its running time for a given input. In simple words, it states that the program will need at least (greater than or equal to) that much time to run. For example, suppose an algorithm's best-case time complexity is O(N); then we can say that the program will take a minimum of O(N) time to run, and it can never take less time than that.
The best case time or space complexity is often represented in terms of the Big Omega (Ω) notation.
Example for Best Case Analysis:
Let us take the example of linear search to calculate the best-case time complexity. Suppose you have an array of numbers and you have to search for a number.
Code:
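Here is one possible version, a minimal sketch in Python (the function name linear_search and the sample array are illustrative, not from the original):

```python
# Linear search: scan the array from the front and return the index
# of target, or -1 if it is absent.
def linear_search(arr, target):
    for i, value in enumerate(arr):
        if value == target:
            return i  # found: stop scanning immediately
    return -1  # target is not in the array

arr = [7, 3, 9, 1, 5]
print(linear_search(arr, 7))  # best case: target at index 0 -> prints 0
```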
Now suppose the number you are searching for is present at the very first index of the array. In that case, your algorithm will take O(1) time to find the number in the best case. So, the best-case complexity for this algorithm becomes O(1), and you will get your output in constant time.
The best case is rarely used to measure the runtime of algorithms in practice; an algorithm is never designed around its best-case scenario.
Worst Case Analysis
The worst-case analysis of an algorithm gives us an upper bound on its running time for any given input. In simple words, it states that the program will need at most (less than or equal to) that much time to run. For example, suppose an algorithm's worst-case time complexity is O(N²); then we can say that the program will take a maximum of O(N²) time to run, and for an input of size N it can never take more time than that.
The worst-case time or space complexity is often represented in terms of the Big Oh (O) notation.
Example for Worst Case Analysis:
Let us take our previous example of linear search. Suppose, this time, the element we are trying to find is at the end of the array, so we will have to traverse the whole array before finding it. Hence, we can say that the worst case for this algorithm is O(N), because we have to go through at most N elements before finding our target. This is how we calculate the worst case of an algorithm.
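To make this concrete, here is a sketch (reusing the illustrative linear_search idea from above, extended to count comparisons) showing that a target at the last index, or an absent one, forces all N comparisons:

```python
# Linear search that also reports how many comparisons it made.
def linear_search_counting(arr, target):
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

arr = [7, 3, 9, 1, 5]
print(linear_search_counting(arr, 5))   # target at the end -> (4, 5): all N comparisons
print(linear_search_counting(arr, 42))  # absent target -> (-1, 5): also all N comparisons
```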
In practice, we typically analyze an algorithm's worst-case scenario. The longest running time W(n) of an algorithm over all inputs of size n is referred to as its worst-case time complexity.
Average Case Analysis
As the name suggests, the average case analysis of an algorithm takes the sum of the running times over every possible input and then takes the average. So, in this case, the stated execution time acts as both a lower and an upper bound on the typical running time. In simple terms, it gives us an idea of the average running time of the algorithm.
Generally, the average case of an algorithm is of the same order as its worst case, so it roughly gives us an estimate of the worst case itself.
The average case time or space complexity is often represented in terms of the Big Theta (Θ) notation.
Example for Average Case Analysis:
In our previous example, suppose we need to find an element that is present in the middle of our array. For that, we need to traverse half the length of the array, so it will take us O(n/2) time. The time complexity O(n/2) is of the same order as O(n). That is why we say that the average case, in most cases, reflects the worst case of an algorithm.
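As a rough check of this claim, here is a sketch (a hypothetical setup assuming each element of the array is searched for exactly once) that averages the comparison counts of linear search over every possible target position; the result is (N + 1) / 2, which grows linearly, i.e. O(n):

```python
# Comparisons linear search makes before finding target.
def comparisons_to_find(arr, target):
    for i, value in enumerate(arr):
        if value == target:
            return i + 1
    return len(arr)  # absent: every element was compared

arr = list(range(100))  # 100 distinct elements
total = sum(comparisons_to_find(arr, t) for t in arr)
print(total / len(arr))  # 50.5, i.e. (N + 1) / 2 -- linear in N
```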
To be precise, it is usually difficult to analyze the average-case running time of an algorithm; it is simpler to calculate the worst case instead. This is mainly because it is rarely clear what should count as an "average" input for a given problem. A useful examination of the average behavior of an algorithm therefore requires prior knowledge of the distribution of the inputs, which is often not available.
Amortized Analysis
The amortized analysis of an algorithm deals with the overall cost of a sequence of operations rather than the complexity of any one particular operation in the sequence. The total cost of the operations is the major focus of amortized analysis.
When only a few operations are slow but the majority of the operations are quicker, we perform an amortized analysis. Through it, we obtain the average running time per operation in the worst case.
For example, searching in a hash table generally takes O(1), or constant, time. However, there are situations when it takes O(N) time, because that many operations have to be executed to search for a value.
Similarly, when we insert a value into a hash table, it takes O(1) time in the majority of cases; but when multiple collisions occur, it can take O(N) time for the collision resolution.
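To illustrate, here is a sketch of a toy chained hash table (illustrative only, not any particular library's implementation) that doubles its bucket count and rehashes once the load factor would exceed 1. An insert that triggers rehashing moves all N stored keys, yet the total work over N inserts stays proportional to N, so the amortized cost per insert is O(1):

```python
class ToyHashTable:
    """A toy chained hash table that doubles its bucket count and
    rehashes every stored key once the load factor would exceed 1."""

    def __init__(self):
        self.buckets = [[] for _ in range(8)]
        self.size = 0
        self.moves = 0  # keys moved across all rehashes

    def insert(self, key):
        if self.size + 1 > len(self.buckets):  # load factor would exceed 1
            self._rehash()                     # the occasional O(N) step
        self.buckets[hash(key) % len(self.buckets)].append(key)
        self.size += 1

    def _rehash(self):
        old = self.buckets
        self.buckets = [[] for _ in range(2 * len(old))]
        for bucket in old:
            for key in bucket:
                self.buckets[hash(key) % len(self.buckets)].append(key)
                self.moves += 1

table = ToyHashTable()
n = 100_000
for k in range(n):
    table.insert(k)
# Total rehash work is proportional to n, so the amortized cost per
# insert stays O(1) even though an individual insert can cost O(N).
print(table.moves / n)  # about 1.3 here -- a small constant
```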
Other data structures whose operations call for amortized analysis include disjoint sets (union-find), among others.
Whenever a single operation or a very small number of operations occasionally runs slowly, while most of the operations run quickly, amortized case analysis is used.
Link: https://www.scaler.com/topics/analysis-of-algorithm/