Unit 1: Algorithm Analysis
Analysis of Algorithms: Time and Space Complexities
An algorithm is a step-by-step process for solving a given problem.
The analysis of algorithms is the determination of the amount of resources (such as time and storage space) necessary to execute them.
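For instance (a minimal sketch, not from the course material: build_list and the size 100000 are illustrative choices), Python's standard library can report both resources for one run:

    # A sketch of measuring the two resources named above (time and
    # storage space) for one run of a function.
    import time
    import tracemalloc

    def build_list(n):
        # Builds a list of n squares; both time and space grow with n.
        return [i * i for i in range(n)]

    tracemalloc.start()
    start = time.perf_counter()
    build_list(100000)
    elapsed = time.perf_counter() - start
    peak = tracemalloc.get_traced_memory()[1]   # peak bytes allocated
    tracemalloc.stop()
    print("time:", round(elapsed, 4), "s  peak space:", peak, "bytes")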
Most algorithms are designed to work with inputs of any size. As the input size increases, the complexity, processing time (execution time), and storage space requirements of the algorithm (and program) also increase.
If an algorithm (and program) requires more processing time and more storage space, then it is less efficient (slower).
The processing time and storage space are used to
measure the efficiency (speed or performance) and
complexity of an algorithm (and program).
Time and space complexity depend on the input size and on the number of steps in the algorithm.
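For instance (a sketch assuming linear search as the example algorithm; the slides do not name one), the number of steps grows directly with the input size:

    # Counts the comparisons made by a linear search. In the worst
    # case (target absent) every element is checked, so the step
    # count equals the input size n.
    def linear_search_steps(values, target):
        steps = 0
        for v in values:
            steps += 1            # one comparison per element
            if v == target:
                break
        return steps

    for n in (10, 100, 1000):
        data = list(range(n))
        print(n, linear_search_steps(data, -1))   # worst case: n steps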
Classes of Algorithms
Algorithms can be classified by the design strategy (approach) they use to solve a problem. Some common classes are listed below, followed by a sketch of one of them:
1. Simple Recursive Algorithms
2. Backtracking Algorithms
3. Divide-and-Conquer Algorithms
4. Dynamic Programming Algorithms
5. Greedy Algorithms
6. Branch-and-Bound Algorithms
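As an illustration of class 3 (a sketch: binary search is a standard divide-and-conquer algorithm, chosen here as an example):

    # Binary search halves the sorted input at every step, a simple
    # instance of the divide-and-conquer strategy.
    def binary_search(sorted_values, target):
        lo, hi = 0, len(sorted_values) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_values[mid] == target:
                return mid                 # found: return its index
            if sorted_values[mid] < target:
                lo = mid + 1               # discard the left half
            else:
                hi = mid - 1               # discard the right half
        return -1                          # target not present

    print(binary_search([2, 3, 5, 7, 11, 13], 11))   # prints 4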
Asymptotic Notations
Asymptotic notation was first introduced by the German mathematicians Paul Bachmann and Edmund Landau; it is therefore called Landau notation or Bachmann–Landau notation, and the individual notations are known as members of the family of Bachmann–Landau notations.
There are different asymptotic notations used to analyze the time and space complexity of an algorithm; their formal definitions are given after this list:
1. Big O Notation (or Big-Oh Notation)
2. Big Omega Ω Notation
3. Big Theta Θ Notation
4. Small O Notation (or Small Oh Notation)
(also called Little O or Little Oh Notation)
5. Small Omega ω Notation
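For reference, the standard formal definitions of these five notations can be written in LaTeX as follows, for functions f and g of the input size x, a constant c > 0, and a threshold x_0:

    \begin{aligned}
    f(x) = O(g(x))      &\iff \exists\, c > 0,\ x_0 : 0 \le f(x) \le c\,g(x) \ \text{for all } x \ge x_0 \\
    f(x) = \Omega(g(x)) &\iff \exists\, c > 0,\ x_0 : 0 \le c\,g(x) \le f(x) \ \text{for all } x \ge x_0 \\
    f(x) = \Theta(g(x)) &\iff f(x) = O(g(x)) \ \text{and} \ f(x) = \Omega(g(x)) \\
    f(x) = o(g(x))      &\iff \forall\, c > 0\ \exists\, x_0 : 0 \le f(x) < c\,g(x) \ \text{for all } x \ge x_0 \\
    f(x) = \omega(g(x)) &\iff \forall\, c > 0\ \exists\, x_0 : 0 \le c\,g(x) < f(x) \ \text{for all } x \ge x_0
    \end{aligned}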
Big Oh Notation or Big O Notation
Big O notation is used to classify (categorize) algorithms by how they respond to changes in input size. When the input size increases, the complexity, the processing time (execution or running time), and the working space requirements (the memory needed to store values) of the algorithm also increase.
For example, when a function adds the two input values 10 and 20 digit by digit, there are only a few digits to process, so the complexity, the processing time, and the working space requirements of the algorithm are very small.
But when the same function adds the two input values 12345.5678 and 5566.778899, there are more digits to process, so the complexity, the processing time, and the working space requirements are higher than in the previous case.
Suppose f(x) and g(x) are two functions that describe how the amount of work grows with the input size x (here, the number of digits). If f(x) grows no faster than a constant multiple of g(x) for all sufficiently large x, then we can write:
f(x) = O(g(x))
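A minimal sketch of this idea (using whole-number digit strings for simplicity; the helper add_digit_by_digit is illustrative, not from the slides):

    # Adds two non-negative integers given as digit strings, one
    # digit pair per step, as with pencil and paper. The step count
    # grows with the number of digits in the inputs.
    def add_digit_by_digit(a, b):
        a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
        carry, digits, steps = 0, [], 0
        for da, db in zip(reversed(a), reversed(b)):
            steps += 1                            # one step per digit pair
            carry, d = divmod(int(da) + int(db) + carry, 10)
            digits.append(str(d))
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits)), steps

    print(add_digit_by_digit("10", "20"))                 # ('30', 2)
    print(add_digit_by_digit("123455678", "5566778899"))  # ('5690234577', 10)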
Big O notation is used when the function g(x) defines (indicates) an upper bound for the function f(x). It means the function g(x) grows at least as fast as the function f(x), so, up to a constant factor, f(x) never requires more processing time or working space than g(x) indicates.
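As a worked instance of this bound (the functions are illustrative choices): take f(x) = 3x + 2 and g(x) = x^2. Then

    f(x) = 3x + 2 \le 3x + 2x = 5x \le 5x^2 = 5\,g(x) \quad \text{for all } x \ge 1,

so f(x) = O(g(x)) holds with the constants c = 5 and x_0 = 1.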
Big Omega Notation or Big Omega Ω Notation
Big omega Ω notation is used when the function g(x) defines (indicates) a lower bound for the function f(x). It means the function f(x) grows at least as fast as the function g(x), so, up to a constant factor, f(x) requires at least as much processing time and working space as g(x) indicates. In that case we write:
f(x) = Ω(g(x))
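As a matching worked instance (again with illustrative functions): take f(x) = 3x + 2 and g(x) = x. Then

    f(x) = 3x + 2 \ge 3x = 3\,g(x) \quad \text{for all } x \ge 1,

so f(x) = Ω(g(x)) holds with c = 3 and x_0 = 1.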