Chapter-1. Introduction
What is an Algorithm?
A finite set of steps to solve a problem is called an algorithm.
Analysis is the process of comparing two algorithms with respect to the time and space they require.
In computer science, the analysis of algorithms is the process of finding the computational
complexity of algorithms – the amount of time, storage, or other resources needed to execute them.
Initially, the solution to a problem is written in natural language; this description is the algorithm, which is then converted into code.
If the algorithm is correct, the program should produce correct output on valid input; otherwise it should generate an appropriate error message.
An algorithm is said to be efficient when its cost function (the time or space it uses as a function of the input size) takes small values or grows slowly as the size of the input grows.
Different inputs of the same length may cause the algorithm to behave differently, which is why we distinguish best-case, worst-case and average-case behaviour.
An algorithm has the following characteristics:
(i) Input:
The range of inputs for which the algorithm works should be stated, and there should be a clear indication of the inputs for which the algorithm may fail.
(ii) Output:
An algorithm reads its input, processes it and produces at least one output.
(iii) Definiteness:
Each statement in the algorithm must be clear and precise; there should be no ambiguity in any statement.
(iv) Finiteness:
An algorithm should be finite, i.e. there should be no condition leading to a never-ending procedure that never completes the task.
(v) Effectiveness:
It should produce the result as quickly and efficiently as possible.
5. Loop Statement
a. while (condition) do
       Do some work
   end
b. for index ← start to end do
       Do some work
   end
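For concreteness, these two loop forms map directly onto ordinary programming-language loops; a minimal Python sketch (the variable names are illustrative, not from the notes):

# a. while (condition) do ... end
i = 1
total = 0
while i <= 5:            # condition
    total = total + i    # do some work
    i = i + 1            # make progress so the loop terminates (finiteness)

# b. for index <- start to end do ... end
total2 = 0
for index in range(1, 6):    # start = 1, end = 5 (inclusive)
    total2 = total2 + index  # do some work

print(total, total2)         # both print 15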
Algorithm Factorial(n)
// Description: Compute the factorial of a given number
// Input: Number n whose factorial is to be computed
// Output: Factorial of n = n × (n – 1) × … × 2 × 1
if (n ≤ 1) then
    return 1
else
    return n * Factorial(n – 1)
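A runnable Python version of the same recursive algorithm (a minimal sketch; the function name is ours):

def factorial(n):
    # Base case: 1! = 1 (also covers n = 0, so the recursion always terminates)
    if n <= 1:
        return 1
    # Recursive case: n! = n * (n - 1)!
    return n * factorial(n - 1)

print(factorial(5))  # 120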
Performance Analysis
• Efficiency:
An efficient algorithm is one that takes less space in memory and also requires less time to produce its output.
The parameters to be considered when judging the efficiency of an algorithm are:
1. Space Complexity
2. Time Complexity
1. Space Complexity
The amount of memory required to solve a given problem is called the space complexity of the algorithm.
Algorithm ADD_SCALAR(A,B)
// Description: Perform arithmetic addition of two numbers
// Input: Two scalar variables A and B
// Output: variable C, which holds the addition of A and B
C ← A + B
return C
The addition of two scalar numbers requires one extra memory location to hold the result. Thus the space complexity of this algorithm is constant, hence S(n) = O(1).
Algorithm ADD_ARRAY(A, B)
// Description: Perform element-wise arithmetic addition of two arrays
// Input: Two number arrays A and B, each of size n
// Output: Array C holding the element-wise sum of arrays A and B
for i ← 1 to n do
C[i] ← A[i] + B[i]
end
return C
Adding the corresponding elements of two arrays, each of size n, requires n extra memory locations to hold the result. As the input size n increases, the space required to hold the result grows linearly with the input. Thus, the space complexity of the above code segment is S(n) = O(n).
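A minimal Python sketch of ADD_ARRAY; the result list C is exactly the extra O(n) storage being counted:

def add_array(A, B):
    n = len(A)
    C = [0] * n              # n extra memory locations -> S(n) = O(n)
    for i in range(n):
        C[i] = A[i] + B[i]   # element-wise addition
    return C

print(add_array([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]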
The addition of all array elements requires only one extra variable, denoted sum; this is independent of the array size.
So the space complexity of this algorithm is S(n) = O(1).
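The summation algorithm being referred to is not reproduced in this copy of the notes; a minimal Python sketch of what it looks like (the name sum_array is ours):

def sum_array(A):
    total = 0                # the single extra variable ("sum"); its size does not depend on n
    for x in A:
        total = total + x    # one addition per element
    return total

print(sum_array([1, 2, 3, 4]))  # 10, computed with O(1) extra space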
2. Time Complexity
The time required by the algorithm to solve a given problem is called the time complexity of the algorithm.
Time complexity is not measured in physical clock ticks; rather, it is measured by counting how often the most frequent basic operation in the algorithm is executed.
We use the notation T(n) to denote time complexity.
The sum of two scalar numbers requires one addition operation. Thus the time complexity of this algorithm is constant, so T(n) = O(1).
As can be observed from the above code, adding the array elements requires iterating the loop n times.
The variable i is initialized once, the relation between the control variable i and n is checked n times, and i is incremented n times. Within the loop, an addition and an assignment are performed n times.
Thus, the total time of the algorithm is measured as
T(n) = 1 (initialization) + n × (comparison + increment + addition + assignment)
     = 1 + 4n
While doing the efficiency analysis of an algorithm, we are interested in the order of complexity in terms of the input size n, so all multiplicative and additive constants are dropped. Thus, for the given algorithm, T(n) = O(n).
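To make the 1 + 4n count concrete, here is a small instrumented Python sketch (the counter is purely illustrative and not part of the algorithm):

def add_array_counted(A, B):
    ops = 0
    n = len(A)
    C = [0] * n
    i = 0
    ops += 1                     # initialization of i
    while i < n:
        ops += 1                 # comparison of i with n
        C[i] = A[i] + B[i]
        ops += 2                 # one addition + one assignment
        i += 1
        ops += 1                 # increment of i
    return C, ops

_, ops = add_array_counted(list(range(10)), list(range(10)))
print(ops)  # 41 = 1 + 4 * 10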
The addition of all array elements requires n additions (we omit the comparisons, assignments, initialization, etc. to avoid the multiplicative and additive constants). The number of additions depends on the size of the array and grows linearly with the input size. Thus the time complexity of this code is T(n) = O(n).
Growth Function
Growth functions are used to estimate the number of steps an algorithm uses as its input
grows.
The order of growth indicates how quickly the time required by an algorithm grows with respect to the input size.
The largest number of steps needed to solve the given problem using an algorithm on an input of a specified size is the worst-case complexity.
Some commonly occurring growth classes and typical example problems are listed below.
Class          Order      Example problems / algorithms
Logarithmic    log n      Binary search; insert/delete an element in a binary search tree
Linear         n          Linear search
n log n        n log n    Merge sort, Quick sort, Heap sort
Quadratic      n²         Selection sort, Bubble sort, reading a 2D array, finding the maximum element of a 2D matrix
Cubic          n³         Matrix multiplication
Exponential    2ⁿ         Finding the power set of a set
These are the most widely used classes; many other classes exist.
Algorithms with exponential or factorial running time are unacceptable for practical use.
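To see why the higher growth classes quickly become impractical, here is a short illustrative Python script (not part of the original notes) that evaluates a few of these functions for growing n:

import math

for n in (10, 100, 1000):
    print(f"n = {n:>4}: log n = {math.log2(n):6.1f}, n log n = {n * math.log2(n):10.1f}, "
          f"n^2 = {n * n:8d}, 2^n has about {int(n * math.log10(2)) + 1} digits")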
Asymptotic Notation
Asymptotic notations are mathematical tools for describing the time and space complexity of an algorithm without implementing it in a programming language.
They are a way of describing the cost of an algorithm.
Asymptotic analysis is independent of processor speed, RAM and other machine details.
For example, in bubble sort, when the input array is already sorted, the time taken by the algorithm is linear; this is the best case.
When the input array is in reverse order, the algorithm takes the maximum (quadratic) time to sort the elements; this is the worst case.
When the input array is neither sorted nor in reverse order, it takes an average amount of time. These running times are described using asymptotic notations.
➢ There are mainly three asymptotic notations:
• Big-O notation
• Omega notation
• Theta notation
1. Big oh
This notation is denoted by ‘O’ and pronounced as Big oh.
It defines an asymptotic upper bound for the algorithm: the running time of the algorithm cannot grow faster than this upper bound.
f(n) = O(g(n)) if there exist positive constants c and n0 such that
f(n) ≤ c.g(n) for all n ≥ n0
Here f(n) lies on or below c.g(n).
2. Big Omega
This notation is denoted by ‘Ω’ and pronounced as Big
omega
It defines lower bound for the algorithm.
It means running time of algorithm cannot be less than its
asymptotic lower bound.
f(n) = Ω(g(n))
f (n) >= c.g(n)
In this f (n) lies on or above c.g(n).
3. Big Theta
This notation is denoted by ‘𝜃’ and pronounced as Big theta.
It defines an asymptotically tight bound for the algorithm: the running time can be neither less than nor greater than this bound, up to constant factors.
f(n) = 𝜃(g(n)) if there exist positive constants c1, c2 and n0 such that
c1.g(n) ≤ f(n) ≤ c2.g(n) for all n ≥ n0
Here f(n) lies between c1.g(n) and c2.g(n).
Example: Represent the following functions using Big oh, Omega and Theta notations.
(i) T(n) = 3n + 2 (ii) T(n) = 10n² + 2n + 1
Solution:
(A) Big oh (upper bound)
(i) T(n) = 3n + 2
To find an upper bound of f(n), we have to find c and n0 such that 0 ≤ f(n) ≤ c.g(n) for all n ≥ n0.
0 ≤ f(n) ≤ c.g(n)
0 ≤ 3n + 2 ≤ c.g(n)
0 ≤ 3n + 2 ≤ 3n + 2n, for all n ≥ 1 (there are infinitely many such choices)
0 ≤ 3n + 2 ≤ 5n
So, c = 5, g(n) = n and n0 = 1, hence T(n) = O(n).
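The corresponding lower and tight bounds for (i) do not appear in this copy of the notes; following the same pattern, they work out as follows.
(B) Big Omega (lower bound)
(i) T(n) = 3n + 2
We need c and n0 such that 0 ≤ c.g(n) ≤ f(n) for all n ≥ n0.
0 ≤ 3n ≤ 3n + 2, for all n ≥ 1
So, c = 3, g(n) = n and n0 = 1, hence T(n) = Ω(n).
(C) Big Theta (tight bound)
(i) T(n) = 3n + 2
We need c1, c2 and n0 such that c1.g(n) ≤ f(n) ≤ c2.g(n) for all n ≥ n0.
3n ≤ 3n + 2 ≤ 5n, for all n ≥ 1
So, c1 = 3, c2 = 5, g(n) = n and n0 = 1, hence T(n) = 𝜃(n).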
4. Problems which can be solved both theoretically and practically in a reasonable amount of time.
Problems which can be solved in polynomial time are known as P class problems (e.g., sorting and searching), taking time such as O(n), O(n²) or O(n³).
For example, finding the maximum element in an array, or checking whether a string is a palindrome. There are many problems which can be solved in polynomial time.
Problems which are not known to be solvable in polynomial time, but whose solutions can be verified in polynomial time, are NP problems, e.g., TSP (travelling salesman problem) and Sudoku.
That is, NP problems are checkable in polynomial time: given a candidate solution to a problem, we can check whether that solution is correct or not in polynomial time.
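As an illustration of “easy to verify”, here is a small Python sketch (a hypothetical helper, not from the notes) that checks a proposed TSP tour against a cost budget in polynomial time:

def verify_tsp_tour(dist, tour, budget):
    """Check in polynomial time whether 'tour' visits every city exactly once
    and has total length at most 'budget'. Verifying a candidate tour is easy;
    finding a good tour is the hard part."""
    n = len(dist)
    if sorted(tour) != list(range(n)):   # every city visited exactly once
        return False
    total = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total <= budget

# 4-city example: distance matrix, a proposed tour and a budget
dist = [[0, 1, 4, 2],
        [1, 0, 3, 5],
        [4, 3, 0, 1],
        [2, 5, 1, 0]]
print(verify_tsp_tour(dist, [0, 1, 2, 3], 10))  # True: tour length 1 + 3 + 1 + 2 = 7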
P is a subset of NP.
Reducibility: if we can convert any instance of a problem A into an instance of a problem B (an NP problem) in polynomial time, then A is reducible to B.
If A is reducible to B, then B is at least as hard as A.
NP-Hard: problems that are at least as hard as every problem in NP; every NP problem can be reduced to them in polynomial time. They are slow to solve, and their solutions need not even be quick to verify.
NP-Complete: the problems which are both in NP and NP-hard are known as NP-Complete problems.
Now suppose we have an NP-Complete problem R that is reducible to Q; then Q is at least as hard as R, and since R is NP-hard, Q is also at least NP-hard (it may be NP-complete as well).
NP-Complete problems are therefore quick to verify but slow to solve, and every problem in NP can be reduced to them in polynomial time.