Lecture # 2
Asymptotic notation
Asymptotic notation is a mathematical tool used to describe the efficiency of an algorithm or
the rate at which a function grows as the input size increases. It is a way to simplify and
express the behavior of a function as it approaches infinity.
There are three common types of asymptotic notation used in algorithm analysis: big O
notation, big Omega notation, and big Theta notation.
Big O notation is used to describe the upper bound of the growth rate of a function. It
represents the maximum amount of time or space required by an algorithm, as a function of
the input size, in the worst-case scenario. For example, if an algorithm has a time complexity
of O(n^2), this means that the algorithm will take no more than n^2 units of time to run,
where n is the size of the input.
Big Omega notation is used to describe the lower bound of the growth rate of a function. It
represents the minimum amount of time or space required by an algorithm.
Big-O notation describes the upper bound or worst-case scenario of an algorithm's time
complexity. It tells us how quickly the runtime of an algorithm grows as the size of the input
increases. For example, if an algorithm has a time complexity of O(n), it means that the
maximum time it takes to run will grow linearly with the size of the input.
Big-Omega notation describes the lower bound or best-case scenario of an algorithm's time
complexity. It tells us the minimum time an algorithm must take as the size of the input
increases. For example, if an algorithm has a time complexity of Omega(n), it means that the
minimum time it takes to run will grow at least linearly with the size of the input.
Big-Theta notation provides a tight bound on the algorithm's time complexity by describing
both the upper and lower bounds. It gives us a range of time complexity that an algorithm
will fall within. For example, if an algorithm has a time complexity of Theta(n), it means that
its runtime will grow linearly with the size of the input, and both the best-case and worst-case
scenarios will fall within that same linear growth rate.
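As a concrete (numeric, not formal) illustration of these definitions, the sketch below checks that a hypothetical cost function f(n) = 3n + 5 stays sandwiched between two linear functions, which is exactly what Theta(n) asserts. The function f and the constants c1, c2, n0 are made up for the example:

```python
# Numeric sketch of Theta(n): f(n) = 3n + 5 is a hypothetical cost
# function, and we check that c1*n <= f(n) <= c2*n once n >= n0.
def f(n):
    return 3 * n + 5  # made-up running-time function

c1, c2, n0 = 3, 4, 5  # chosen constants: 3n <= 3n+5 <= 4n holds for n >= 5

for n in range(n0, 1000):
    assert c1 * n <= f(n) <= c2 * n
print("3n + 5 lies between 3n and 4n for all n >= 5, i.e. it is Theta(n)")
```

This is only a spot check over a finite range, not a proof, but it shows what the upper and lower bound mean side by side.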
Asymptotic notation is a way of measuring how quickly an algorithm's runtime grows as the
size of the input increases. It provides us with a general idea of how efficient an algorithm is
without getting into the specifics.
Big-O notation tells us the maximum time an algorithm takes to run, Big-Omega tells us the
minimum time, and Big-Theta gives us a range in which the algorithm's time complexity
falls.
Design and Analysis of Algorithm Lecture # 2
Overall, asymptotic notation helps us compare different algorithms and choose the most
efficient one for a given problem, without worrying too much about the details.
1. Let's say you have an algorithm that sorts a list of numbers using the bubble sort
algorithm. The worst-case time complexity of this algorithm is O(n^2), which means
that the maximum time it takes to sort the list will grow quadratically with the size of
the list. In other words, if you have a list of 1000 numbers, the worst-case scenario is
that the algorithm will take around 1 million (1000^2) operations to complete.
2. Consider another algorithm that searches a list of numbers for a particular value by
scanning it from the start (linear search). The best-case scenario for this algorithm is
that the value is at the beginning of the list, so the algorithm only needs to look at the
first element to find it. In this case, the time complexity of the algorithm is Omega(1),
which means that the minimum time it takes to find the value is constant, regardless of
the size of the list.
3. Let's say you have a third algorithm that searches for an element in a sorted list using
the binary search algorithm. The time complexity of this algorithm is Theta(log n),
which means that the time it takes to search the list grows logarithmically with the
size of the list. In other words, if you have a list of 1000 elements, the algorithm will
take around 10 operations to find the element, since log base 2 of 1000 is
approximately 10.
These examples show how asymptotic notation can help us understand the behavior of
different algorithms as the input size grows larger. It allows us to compare algorithms and
choose the most efficient one for a given problem.
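The binary-search behaviour in example 3 can be sketched as follows; the step counter, list contents, and target value are mine, added only to make the logarithmic growth visible:

```python
# Sketch of binary search over a sorted list, counting loop iterations
# to illustrate the roughly log2(n) comparisons described above.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2          # look at the middle element
        if items[mid] == target:
            return mid, steps
        elif items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1, steps

data = list(range(1000))              # a sorted list of 1000 elements
index, steps = binary_search(data, 742)
print(index, steps)                   # steps is about log2(1000), i.e. ~10
```

Each iteration halves the remaining search range, which is why 1000 elements need only about 10 comparisons.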
Quadratically
Quadratically means that something is growing with the square of the input size. In other
words, if the input size doubles, the running time will roughly quadruple.
For example, if an algorithm has a time complexity of O(n^2), it means that the time it takes
to complete the algorithm will grow with the square of the input size. If the input size doubles
from 1000 to 2000, the time it takes to complete the algorithm will increase by a factor of 4
(since 2000^2 is 4 times larger than 1000^2).
So, quadratically refers to a growth rate that is proportional to the square of the input size.
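This 4x effect can be seen directly by counting the inner operations of a generic double loop; the function name below is mine:

```python
# Count the inner-loop operations of a generic O(n^2) double loop,
# to show that doubling n quadruples the total work.
def count_pairwise_ops(n):
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops

a = count_pairwise_ops(1000)   # 1000^2 = 1,000,000 operations
b = count_pairwise_ops(2000)   # 2000^2 = 4,000,000 operations
print(b / a)                   # prints 4.0: doubling n quadruples the work
```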
Logarithmically
Logarithmically means that something is growing with the logarithm of the input size. In
other words, the growth rate is proportional to the logarithm of the input size.
For example, if an algorithm has a time complexity of O(log n), it means that the time it takes
to complete the algorithm will grow logarithmically with the input size. If the input size
doubles from 1000 to 2000, the time it takes to complete the algorithm will only increase by a
small constant amount (since log(2000) is only slightly larger than log(1000)).
So, logarithmically refers to a growth rate that is proportional to the logarithm of the input
size.
In mathematics, the logarithm is a mathematical function that measures the number of times a
certain number (known as the base) must be multiplied by itself to obtain another number.
For example, the logarithm base 2 of 8 is 3, because 2 multiplied by itself three times gives us
8 (i.e., 2^3 = 8). Similarly, the logarithm base 10 of 1000 is 3, because 10 multiplied by itself
three times gives us 1000 (i.e., 10^3 = 1000).
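These worked examples can be checked directly with Python's math module:

```python
import math

# Checking the worked logarithm examples from the text.
print(math.log2(8))        # 3.0, because 2**3 == 8
print(math.log10(1000))    # 3.0, because 10**3 == 1000
print(math.log2(1000))     # ~9.97, which is why binary search on 1000
                           # elements needs about 10 comparisons
```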
-----------
The most commonly used asymptotic notations are the big O, big Omega, and big Theta
notations. These notations describe the upper bound, lower bound, and tight bound of the
algorithm's time complexity, respectively.
In simple words, the big O notation describes the maximum amount of resources an
algorithm will use in the worst-case scenario. It provides an upper bound on the growth rate
of the algorithm's time complexity as the input size increases.
The big Omega notation describes the minimum amount of resources an algorithm will use in
the best-case scenario. It provides a lower bound on the growth rate of the algorithm's time
complexity as the input size increases.
The big Theta notation describes the exact amount of resources an algorithm will use. It
provides a tight bound on the growth rate of the algorithm's time complexity as the input size
increases.
Using asymptotic notation, we can compare different algorithms and determine which is
more efficient for a given problem. However, it is important to note that asymptotic notation
does not provide information about the actual performance of an algorithm for a specific
input size or on a specific hardware platform.
function sum(n):
    result = 0
    for i in range(1, n+1):
        result += i
    return result
This algorithm calculates the sum of the first n natural numbers by iterating through them and
adding them up.
To calculate the efficiency of this algorithm using asymptotic notation, we need to determine
its time complexity. We can do this by analyzing the number of operations it performs as a
function of the input size n.
In this case, the algorithm performs n iterations of the loop, and each iteration performs a
constant amount of work (adding i to result). Therefore, the total number of operations is
proportional to n, so the time complexity is O(n).
This means that the algorithm's running time grows linearly with the size of the input. For
example, if we double the value of n, the algorithm will take approximately twice as long to
run.
We can use this information to compare this algorithm to other algorithms for solving the
same problem. If we find another algorithm with a lower time complexity, it will be more
efficient for large input sizes.
Here's an example of an algorithm that can calculate the sum of the first n natural numbers
with a lower time complexity than the previous example:
function sum(n):
    return (n * (n+1)) / 2
This algorithm uses a mathematical formula to directly calculate the sum of the first n natural
numbers, without iterating through them. The time complexity of this algorithm is constant,
or O(1), because it performs the same amount of work regardless of the input size.
Compared to the previous algorithm, this algorithm is much more efficient for large input
sizes. While the previous algorithm had a time complexity of O(n), which means it would
take longer to run as n increases, this algorithm always takes the same amount of time to run,
regardless of the value of n.
In practice, this means that for very large values of n, the second algorithm would be much
faster than the first algorithm. For example, if we wanted to calculate the sum of the first one
billion natural numbers, the first algorithm would take a long time to run, while the second
algorithm would return the result almost instantly.
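Putting the two versions side by side (renamed sum_loop and sum_formula here so both can live in one sketch), we can confirm that they agree on every input and that the closed form needs no loop:

```python
# The O(n) loop and the O(1) closed form from the text, side by side.
def sum_loop(n):
    result = 0
    for i in range(1, n + 1):
        result += i
    return result

def sum_formula(n):
    return n * (n + 1) // 2    # integer division keeps the result an int

# Both versions agree on every input we try.
for n in (0, 1, 10, 1000):
    assert sum_loop(n) == sum_formula(n)

print(sum_formula(1000))       # 500500, computed without any loop
```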
If bubble sort has a worst-case time complexity of O(n^2), then its best-case time complexity
is Ω(n).
The best-case time complexity of an algorithm refers to the minimum possible running time
of the algorithm for any input of size n. For bubble sort (with the usual optimization that
stops when a pass makes no swaps), the best case occurs when the input is already sorted. In
this case, the algorithm only needs to make a single pass over the input to confirm that no
swaps are required, resulting in a time complexity of Ω(n).
Note that the best-case time complexity is not the same as the average-case or worst-case
time complexity of an algorithm. It simply represents the lower bound on the algorithm's
running time for any input of size n.
In summary, the best-case time complexity of bubble sort is Ω(n), while its worst-case time
complexity is O(n^2). This means that the running time of bubble sort is at least linear, but
may be as much as quadratic, depending on the input.
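A sketch of bubble sort with the early-exit check that produces this Ω(n) best case; the pass counter is added only for illustration:

```python
# Bubble sort with an early-exit flag: if a full pass makes no swaps,
# the list is sorted and we stop. On already-sorted input this gives
# the Omega(n) best case (a single pass); worst case is still O(n^2).
def bubble_sort(items):
    items = list(items)                # work on a copy
    passes = 0
    for end in range(len(items) - 1, 0, -1):
        swapped = False
        passes += 1
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:                # no swaps: already sorted, stop
            break
    return items, passes

print(bubble_sort([3, 1, 2]))          # sorted result plus pass count
print(bubble_sort([1, 2, 3, 4]))       # sorted input: exactly one pass
```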