Lec-2 Algorithms Efficiency & Complexity Updated
-f(n) is the formula that tells us exactly how many operations the
function/algorithm in question performs when the problem size is n.
-g(n) is an upper bound for f(n): within a constant factor, the
number of operations required by your function is no worse than
g(n).
Big “O” Notation
Why is this useful?
–We want our algorithms to be scalable. Often, we write
programs and test them on relatively small inputs, yet we
expect users to run our programs with larger inputs.
Running-time analysis helps us predict how efficient our
program will be in the `real world'.
RUN-TIME COMPLEXITY TYPES (BIG-O
NOTATION TYPES)
Constant time f(n) = C.
An algorithm is said to have constant time complexity when its run time
does not depend on the input size n. This means that the algorithm/operation
will always take the same amount of time regardless of the number of
elements we’re working with. For example, accessing the first element
of a list is always O(1) regardless of how big the list is.
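A minimal sketch of such an O(1) operation (the function name is illustrative):

```python
def first_element(items):
    # Indexing a Python list takes the same time no matter
    # how long the list is: O(1).
    return items[0]

print(first_element([7, 3, 9]))                # 7
print(first_element(list(range(1_000_000))))   # 0 -- just as fast
```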
Logarithmic time f(n) = log n.
Algorithms with logarithmic time complexity reduce the input data size
in each step of the operation. Typically, binary tree and binary search
operations have O(log n) time complexity.
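A sketch of binary search over a sorted list, which halves the search range at every step:

```python
def binary_search(sorted_items, target):
    # Each comparison halves the remaining range, so the number
    # of steps grows as log2(n): O(log n).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
```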
Linear time f(n) = n.
An algorithm is said to have linear time complexity when
the run-time is directly proportional to the size of
the input data. This is the best possible time complexity when
the algorithm has to examine all the items in the input data.
For example:
for value in data:
    print(value)
Linear search is an example of such an operation, since the
iteration over the list is O(n).
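Linear search can be sketched as follows; in the worst case it examines every element once:

```python
def linear_search(data, target):
    # Scans the list front to back; worst case touches all
    # n elements, so the run time is O(n).
    for index, value in enumerate(data):
        if value == target:
            return index
    return -1  # target not present

print(linear_search([4, 8, 15, 16, 23, 42], 23))   # 4
```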
Quasilinear time f(n) = n log n.
Here a logarithmic-time amount of work is done for each of the
n items in the input data. This is commonly seen in optimized
sorting algorithms such as merge sort, timsort, and heapsort.
In merge sort, the input data is broken down into several
sub-lists until each sub-list consists of a single element,
and then the sub-lists are merged into a sorted list. This
gives us a time complexity of O(n log n).
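The merge sort described above can be sketched like this: the splitting gives log n levels, and the merging at each level is linear, for O(n log n) overall:

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(items) <= 1:
        return items
    # Split into halves and sort each recursively (log n levels).
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]
```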
Quadratic time f(n) = n².
An algorithm is said to have quadratic time complexity when the time it
takes to perform an operation is proportional to the square of the number
of items in the collection. This occurs when the algorithm needs to perform
a linear-time operation for each item in the input data. Bubble sort has
O(n²) time complexity. For example, a loop within a loop:
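Bubble sort illustrates the loop-within-a-loop pattern; a minimal sketch:

```python
def bubble_sort(items):
    # Nested loops: for each of n items we may scan up to n others,
    # so the total work grows as n * n = O(n^2).
    items = list(items)  # work on a copy
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]
```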
Exponential time f(n) = bⁿ.
An algorithm is said to have exponential time complexity when the
work roughly doubles with each addition to the input data set. This kind
of time complexity is usually seen in brute-force algorithms. For example,
the naive recursive Fibonacci algorithm has O(2ⁿ) time complexity.
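The naive recursive Fibonacci can be sketched as follows; each call spawns two more, so the call count roughly doubles as n grows:

```python
def fib(n):
    # Two recursive calls per level give roughly 2^n calls
    # in total: O(2^n) time for this naive version.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55
```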
Factorial time f(n) = n!
An algorithm is said to have factorial time complexity
when every single permutation of a collection is
computed, and hence the time it takes to perform the
operation is proportional to the factorial of the number of
items in the collection. Brute-force solutions to the Travelling
Salesman Problem and Heap’s algorithm (generating all possible
permutations of n objects) have O(n!) time complexity.
Disadvantage: it is very slow.
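A sketch of Heap's algorithm mentioned above; it produces all n! orderings, which is why the running time is O(n!):

```python
def heaps_permutations(items):
    # Heap's algorithm generates all n! permutations of the
    # input, so the running time is O(n!).
    result = []

    def generate(k, arr):
        if k <= 1:
            result.append(arr.copy())
            return
        for i in range(k - 1):
            generate(k - 1, arr)
            # Heap's rule: which swap to do depends on the parity of k.
            if k % 2 == 0:
                arr[i], arr[k - 1] = arr[k - 1], arr[i]
            else:
                arr[0], arr[k - 1] = arr[k - 1], arr[0]
        generate(k - 1, arr)

    generate(len(items), list(items))
    return result

perms = heaps_permutations([1, 2, 3])
print(len(perms))   # 6, i.e. 3!
```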
BIG-OH NOTATION: FEW EXAMPLES
Big “Ω” Notation
Definition: function f(n) is Ω(g(n)) if there exist constants c and
n0 such that for all n >= n0: f(n) >= c·g(n).
-The notation is often confusing: f = Ω(g) is read "f is big-omega
of g."
-Generally, when we see a statement of the form f(n) = Ω(g(n)), it
means g(n) is a lower bound for f(n): within a constant factor, the
number of operations required is at least g(n).