Complexity Analysis of Algorithms (GeeksforGeeks)

The document discusses asymptotic notation and analysis in the complexity analysis of algorithms. Asymptotic analysis evaluates an algorithm's performance in terms of input size rather than actual running time. The three most commonly used notations, Big O, Omega, and Theta, provide upper, lower, and tight bounds on an algorithm's time or space complexity, describing how it scales with increasing input size. Asymptotic analysis is useful for comparing algorithms and predicting performance for large inputs.

Asymptotic Notation and Analysis (Based on input size) in Complexity Analysis of Algorithms

Asymptotic Analysis is the big idea that addresses the shortcomings of machine-dependent, experimental analysis (discussed below). In Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size (we don't measure the actual running time). We calculate how the time (or space) taken by an algorithm increases with the input size.

Asymptotic notation is a way to describe the running time or space complexity of an algorithm based on
the input size. It is commonly used in complexity analysis to describe how an algorithm performs as the
size of the input grows. The three most commonly used notations are Big O, Omega, and Theta.

Big O notation (O): This notation provides an upper bound on the growth rate of an algorithm's running time or space usage. It is typically used for the worst-case scenario, i.e., the maximum amount of time or space an algorithm may need to solve a problem. For example, if an algorithm's running time is O(n), then its running time grows at most linearly with the input size n.

Omega notation (Ω): This notation provides a lower bound on the growth rate of an algorithm's running time or space usage. It is typically used for the best-case scenario, i.e., the minimum amount of time or space an algorithm needs to solve a problem. For example, if an algorithm's running time is Ω(n), then its running time grows at least linearly with the input size n.

Theta notation (Θ): This notation provides both an upper and a lower bound on the growth rate of an algorithm's running time or space usage; it is a tight bound, and it is often loosely associated with the average or typical case. For example, if an algorithm's running time is Θ(n), then its running time grows linearly with the input size n, up to constant factors.

In general, the choice of asymptotic notation depends on the problem and the specific algorithm used to
solve it. It is important to note that asymptotic notation does not provide an exact running time or space
usage for an algorithm, but rather a description of how the algorithm scales with respect to input size. It
is a useful tool for comparing the efficiency of different algorithms and for predicting how they will
perform on large input sizes.
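As a small illustration (a sketch written for this summary, not taken from the original article), the following Java methods show the kind of growth these notations describe: firstElement does a constant amount of work regardless of input size, while sumOfArray does work proportional to the length of the array.

public class GrowthExamples {

    // Constant work regardless of the array length: O(1), Ω(1), and therefore Θ(1).
    static int firstElement(int[] arr) {
        return arr[0];
    }

    // Touches every element exactly once: O(n), Ω(n), and therefore Θ(n), where n = arr.length.
    static long sumOfArray(int[] arr) {
        long sum = 0;
        for (int value : arr) {
            sum += value;
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] arr = {3, 1, 4, 1, 5, 9};
        System.out.println(firstElement(arr)); // constant time
        System.out.println(sumOfArray(arr));   // linear time
    }
}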

Why performance analysis?


There are many important things that should be taken care of, like user-friendliness, modularity, security, maintainability, etc. Why worry about performance? The answer is simple: we can have all of the above only if we have performance. So performance is like a currency with which we can buy all of the above. Another reason for studying performance is that speed is fun! To summarize, performance == scale. Imagine a text editor that can load 1000 pages but can spell-check only 1 page per minute, or an image editor that takes 1 hour to rotate your image 90 degrees left, or ... you get it. If a software feature cannot cope with the scale of tasks users need to perform, it is as good as dead.

How to study the efficiency of algorithms?


One way to study the efficiency of an algorithm is to implement it and experiment by running the program on various test inputs while recording the time spent during each execution. A simple mechanism in Java is based on the currentTimeMillis() method of the System class for collecting such running times. That method reports the number of milliseconds that have passed since a benchmark time known as the epoch (January 1, 1970 UTC). If we record the time immediately before executing the algorithm and again immediately after it, the difference between the two readings is the elapsed running time.

long start = System.currentTimeMillis(); // record the starting time

/* (run the algorithm) */

long end = System.currentTimeMillis();   // record the ending time
long elapsed = end - start;              // total time elapsed

Measuring elapsed time in this way provides a reasonable, though machine-dependent, reflection of an algorithm's efficiency.
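Putting the pattern together, here is a minimal, self-contained sketch; the sumOfSquares method is only a hypothetical stand-in for whichever algorithm you want to measure.

public class TimingDemo {

    // Hypothetical algorithm to be timed: sums the squares of 1..n.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        int n = 10_000_000;

        long start = System.currentTimeMillis(); // record the starting time
        long result = sumOfSquares(n);           // run the algorithm
        long end = System.currentTimeMillis();   // record the ending time

        long elapsed = end - start;              // total time elapsed
        System.out.println("result = " + result + ", elapsed = " + elapsed + " ms");
    }
}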


Given two algorithms for a task, how do we find out which one is better?

One naive way of doing this is to implement both algorithms, run the two programs on your computer for different inputs, and see which one takes less time. There are many problems with this approach to the analysis of algorithms.

It might be possible that for some inputs the first algorithm performs better than the second, while for other inputs the second performs better.

It might also be possible that for some inputs the first algorithm performs better on one machine, while for some other inputs the second works better on another machine.

Asymptotic Analysis is the big idea that handles the above issues in analyzing algorithms. In Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size (we don't measure the actual running time). We calculate how the time (or space) taken by an algorithm increases with the input size.

For example, let us consider the search problem (searching a given item) in a sorted array.

Solutions to the above search problem include the following (minimal sketches of both are shown after this list):

Linear Search (order of growth is linear)

Binary Search (order of growth is logarithmic).
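For reference, here are minimal sketches of both approaches (assumed implementations written for illustration); each expects a sorted int array and returns the index of x, or -1 if x is absent.

public class SearchExamples {

    // Linear Search: checks elements one by one; order of growth is linear, O(n).
    static int linearSearch(int[] arr, int x) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == x) {
                return i;
            }
        }
        return -1;
    }

    // Binary Search: halves the search range at each step; order of growth is logarithmic, O(log n).
    static int binarySearch(int[] arr, int x) {
        int low = 0, high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (arr[mid] == x) {
                return mid;
            } else if (arr[mid] < x) {
                low = mid + 1;
            } else {
                high = mid - 1;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {2, 3, 5, 7, 11, 13};
        System.out.println(linearSearch(sorted, 7)); // prints 3
        System.out.println(binarySearch(sorted, 7)); // prints 3
    }
}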

To understand how Asymptotic Analysis solves the problems mentioned above, let us say we run Linear Search on a fast computer A and Binary Search on a slow computer B, and pick constant values for the two computers so that we can tell exactly how long, in seconds, each machine takes to perform its search.

Let's say the constant for A is 0.2 and the constant for B is 1000, which means that A is 5000 times more powerful than B.

For small values of input array size n, the fast computer may take less time.

But, after a certain value of input array size, the Binary Search will definitely start taking less time
compared to the Linear Search even though the Binary Search is being run on a slow machine.

The reason is the order of growth of Binary Search with respect to input size is logarithmic while the
order of growth of Linear Search is linear.

So the machine-dependent constants can always be ignored after a certain value of input size.

Running times for this example:

Linear Search running time in seconds on A: 0.2 * n

Binary Search running time in seconds on B: 1000*log(n)
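To see where the crossover happens, here is a small sketch that evaluates the two formulas above for growing n (the logarithm is taken base 2 here, an assumption since the article does not state the base):

public class CrossoverDemo {
    public static void main(String[] args) {
        // 0.2 * n        : Linear Search, in seconds, on fast machine A
        // 1000 * log2(n) : Binary Search, in seconds, on slow machine B
        for (long n = 10; n <= 1_000_000_000L; n *= 10) {
            double linearOnA = 0.2 * n;
            double binaryOnB = 1000 * (Math.log(n) / Math.log(2));
            System.out.printf("n = %-12d linear on A: %14.1f s   binary on B: %10.1f s%n",
                    n, linearOnA, binaryOnB);
        }
        // With these constants, Binary Search on the slow machine B overtakes
        // Linear Search on the fast machine A somewhere between n = 10,000 and n = 100,000.
    }
}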

Challenges of Experimental Analysis:

Experimental running times of two algorithms are difficult to compare directly unless the experiments are performed in the same hardware and software environments. Experiments can be done only on a limited set of test inputs; hence, they leave out the running times of inputs not included in the experiment (and these inputs may be important).

To overcome the challenges of experimental analysis, Asymptotic Analysis is used.


Does Asymptotic Analysis always work?

Asymptotic Analysis is not perfect, but it is the best way available for analyzing algorithms. For example, say there are two sorting algorithms that take 1000n log n and 2n log n time, respectively, on a machine. Both of these algorithms are asymptotically the same (the order of growth is n log n). So, with Asymptotic Analysis we can't judge which one is better, as we ignore constants in Asymptotic Analysis.

Also, in Asymptotic Analysis, we always talk about input sizes larger than a constant value. It might be possible that those large inputs are never given to your software, and that an asymptotically slower algorithm always performs better for your particular situation. So you may end up choosing an algorithm that is asymptotically slower but faster for your software.

Advantages and Disadvantages:
Advantages:
Asymptotic analysis provides a high-level understanding of how an algorithm performs with respect to
input size.
It is a useful tool for comparing the efficiency of different algorithms and selecting the best one for a
specific problem.
It helps in predicting how an algorithm will perform on larger input sizes, which is essential for real-world applications.
Asymptotic analysis is relatively easy to perform and requires only basic mathematical skills.
Disadvantages:

Asymptotic analysis does not provide an accurate running time or space usage of an algorithm.
It assumes that the input size is the only factor that affects an algorithm’s performance, which is not
always the case in practice.
Asymptotic analysis can sometimes be misleading, as two algorithms with the same asymptotic
complexity may have different actual running times or space usage.
It is not always straightforward to determine the best asymptotic complexity for an algorithm, as there
may be trade-offs between time and space complexity.

Worst, Average and Best Case Analysis of Algorithms

Above, we discussed how Asymptotic Analysis overcomes the problems of the naive way of analyzing algorithms. Now let's take an overview of asymptotic notation and learn what the worst, average, and best cases of an algorithm are:

Popular Notations in Complexity Analysis of Algorithms


1. Big-O Notation
We define an algorithm's worst-case time complexity using Big-O notation, which describes the set of functions that grow slower than or at the same rate as the given expression. It bounds the maximum amount of time an algorithm may require over all input values.

2. Omega Notation
Omega notation is used for the best case of an algorithm's time complexity; it describes the set of functions that grow faster than or at the same rate as the given expression. It bounds the minimum amount of time an algorithm requires over all input values.

3. Theta Notation
Theta notation is used when a function lies in both O(expression) and Omega(expression); that is, it grows at the same rate as the expression, up to constant factors. This is the notation commonly used when describing an algorithm's average-case time complexity.
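For reference, the standard formal definitions behind these three notations (stated in the usual textbook form; g(n) is the comparison function and c, c1, c2, n0 are positive constants):

\[
O(g(n)) = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le f(n) \le c\, g(n) \text{ for all } n \ge n_0 \,\}
\]
\[
\Omega(g(n)) = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le c\, g(n) \le f(n) \text{ for all } n \ge n_0 \,\}
\]
\[
\Theta(g(n)) = \{\, f(n) : \exists\, c_1, c_2 > 0,\ n_0 > 0 \text{ such that } 0 \le c_1\, g(n) \le f(n) \le c_2\, g(n) \text{ for all } n \ge n_0 \,\}
\]
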
Measurement of Complexity of an Algorithm

Based on the above three notations of Time Complexity there are three cases to analyze an algorithm:

1. Worst Case Analysis (Mostly used)

In the worst-case analysis, we calculate the upper bound on the running time of an algorithm. We must
know the case that causes a maximum number of operations to be executed. For Linear Search, the
worst case happens when the element to be searched (x) is not present in the array. When x is not
present, the search() function compares it with all the elements of arr[] one by one. Therefore, the
worst-case time complexity of the linear search would be O(n).

2. Best Case Analysis (Very Rarely used)

In the best-case analysis, we calculate the lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the time complexity in the best case would be Ω(1).

3. Average Case Analysis (Rarely used)

In average case analysis, we take all possible inputs and calculate the computing time for all of the
inputs. Sum all the calculated values and divide the sum by the total number of inputs. We must know (
or predict) the distribution of cases. For the linear search problem, let us assume that all cases are
uniformly distributed (including the case of x not being present in the array). So we sum all the cases
and divide the sum by (n+1). Following is the value of average-case time complexity.
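Assuming the search costs Θ(i) comparisons when x is found at position i (and Θ(n) when x is absent), the sum over the n+1 equally likely cases gives:

\[
\text{Average-case time} \;=\; \frac{\sum_{i=1}^{n+1} \Theta(i)}{n+1} \;=\; \frac{\Theta\!\left(\tfrac{(n+1)(n+2)}{2}\right)}{n+1} \;=\; \Theta(n)
\]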

Which complexity analysis is generally used?

Below are the three kinds of analysis, ranked by how commonly they are used:

1. Worst Case Analysis:

Most of the time, we do worst-case analysis to analyze algorithms. In the worst-case analysis, we guarantee an upper bound on the running time of an algorithm, which is good information.

2. Average Case Analysis

The average case analysis is not easy to do in most practical cases and it is rarely done. In the
average case analysis, we must know (or predict) the mathematical distribution of all possible inputs.

3. Best Case Analysis

The best-case analysis is of little practical use. Guaranteeing a lower bound on an algorithm doesn't provide much information, as in the worst case the algorithm may still take years to run.

Interesting information about asymptotic notations:

A) For some algorithms, all the cases (worst, best, average) are asymptotically the same, i.e., there are no distinct worst and best cases. Example: Merge Sort does Θ(n log n) operations in all cases.

B) Whereas most other sorting algorithms have distinct worst and best cases.

Example 1: In the typical implementation of Quick Sort (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted and the best case occurs when the pivot always divides the array into two halves.

Example 2: For insertion sort, the worst case occurs when the array is reverse sorted and the best
case occurs when the array is sorted in the same order as output.

Time Complexity Analysis: As another example, consider a function that returns immediately when n is even but performs n iterations when n is odd (a sketch is shown after the three cases below).

Best Case: The order of growth will be constant because in the best case we are assuming that (n) is
even.

Average Case: In this case, we will assume that even and odd values of n are equally likely; therefore, the order of growth will be linear.

Worst Case: The order of growth will be linear because in this case we are assuming that (n) is always odd.
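Here is a sketch, in Java, of a function consistent with the three cases above (an assumed reconstruction for illustration): it returns immediately when n is even and sums the first n elements of the array when n is odd.

public class EvenOddSum {

    // Best case: n is even, so the function returns after a constant amount of work, Θ(1).
    // Worst case: n is odd, so the loop runs n times, Θ(n).
    static int getSum(int[] arr, int n) {
        if (n % 2 == 0) {
            return 0;
        }
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += arr[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] arr = {1, 2, 3, 4, 5};
        System.out.println(getSum(arr, 4)); // even n: constant time, prints 0
        System.out.println(getSum(arr, 5)); // odd n: linear time, prints 15
    }
}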

For more details, please refer to Design and Analysis of Algorithms.

Worst, Average, and Best Case Analysis of Algorithms is a technique used to analyze the performance of algorithms under different conditions. Here are some advantages, disadvantages, and important points related to this analysis technique:

Advantages:

This technique allows developers to understand the performance of algorithms under different
scenarios, which can help in making informed decisions about which algorithm to use for a specific
task.

Worst case analysis provides a guarantee on the upper bound of the running time of an algorithm,
which can help in designing reliable and efficient algorithms.

Average case analysis provides a more realistic estimate of the running time of an algorithm, which
can be useful in real-world scenarios.

Disadvantages:

This technique can be time-consuming and requires a good understanding of the algorithm being
analyzed.

Worst case analysis does not provide any information about the typical running time of an algorithm,
which can be a disadvantage in real-world scenarios.

Average case analysis requires knowledge of the probability distribution of input data, which may not
always be available.

Important points:

The worst case analysis of an algorithm provides an upper bound on the running time of the algorithm
for any input size.

The average case analysis of an algorithm provides an estimate of the running time of the algorithm for
a random input.

The best case analysis of an algorithm provides a lower bound on the running time of the algorithm for
any input size.

The big O notation is commonly used to express the worst case running time of an algorithm.

Different algorithms may have different best, average, and worst case running times.
