
Analysis of Algorithm Introduction

Chapter 1: Introduction

What is an Algorithm?
A finite set of steps to solve a problem is called an algorithm.
Analysis is the process of comparing algorithms with respect to the time and space they require.
In computer science, the analysis of algorithms is the process of finding the computational
complexity of algorithms – the amount of time, storage, or other resources needed to execute them.

Initially, the solution to a problem is written in natural language; this description is the algorithm, which
is then converted into code.
If the algorithm is correct, the program should produce the correct output on every valid input; otherwise it
should generate an appropriate error message.
An algorithm is said to be efficient when its resource requirements are small, or grow slowly compared
to the growth in the size of the input.
Different inputs of the same length may cause the algorithm to behave differently, so we distinguish best,
worst and average cases.

➢ A good algorithm should have the following properties and characteristics:


(i) Input:
The range of inputs for which the algorithm works should be specified, and there should be a clear
indication of the inputs for which the algorithm may fail.
(ii) Output:
The algorithm reads the input, processes it and produces at least one output.
(iii) Definiteness:
Each statement in the algorithm must be clear and precise; there should be no ambiguity in
any statement.
(iv) Finiteness:
The algorithm should be finite, i.e. there should be no condition that leads to a never-ending
procedure and prevents the task from ever completing.
(v) Effectiveness:
It should produce the result as efficiently as possible.

How to write an algorithm


• An algorithm basically consists of two parts: a head and a body.
• The head consists of the algorithm name, a description of the problem, the input and the expected output.
• The body consists of a description of the functionality.
1. An algorithm should start with the keyword Algorithm followed by the name of the algorithm:
“Algorithm Find_Sum(A, B)”
2. Use a left arrow for assignment: C ← A + B
3. Input and output are performed using Read and Write statements:
Read(A) or Read “A”
Write(A) or Write “A” or Print “A”
4. Control Statements
If (Condition) then Statement end

If (Condition) then Statement1 else statement2 end

5. Loop Statement
a. While (condition) do
Do Some Work
end
b. For index  Start to end do
Do some work
End


Example: Write an algorithm for finding the factorial of a number n.

Algorithm Factorial(n)
// Description: Find factorial of given number
// Input: Number n whose factorial is to be computed.
// Output: Factorial of n = n × (n – 1) × … × 2 × 1

If (n ≤ 1) then
return 1
else
return n * Factorial(n – 1)
end
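
For reference, a minimal Python sketch equivalent to this pseudocode might look as follows (the function name and the n ≤ 1 base case are illustrative choices, not part of the original notation):

def factorial(n):
    # Base case: 0! = 1! = 1
    if n <= 1:
        return 1
    # Recursive case: n! = n * (n - 1)!
    return n * factorial(n - 1)

print(factorial(5))  # 120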

Performance Analysis
• Efficiency:
An efficient algorithm should take less space in memory and also require less time to produce the output.
The parameters to be considered for the efficiency of an algorithm are:
1. Space Complexity
2. Time Complexity

1. Space Complexity
The amount of memory required to solve a given problem is called the space complexity of the
algorithm.

➢ Space complexity is determined by two components:

1. Fixed-size component

The part of the program whose memory requirement does not change during program execution,
e.g., variables, constants, fixed-size arrays.

2. Variable-size component

The part of the program whose size depends on the problem instance being solved,
e.g., space that grows with a loop bound, or a dynamically growing linked list.
We use the notation S(n) to denote the space complexity,
where n is the size of the input.


Example: Addition of two scalar variables.

Algorithm ADD_SCALAR(A,B)
// Description: Perform arithmetic addition of two numbers
// Input: Two scalar variables A and B
// Output: variable C, which holds the addition of A and B
C←A+B
return C

The addition of two scalar numbers requires one extra memory location to hold the result. Thus
the space complexity of this algorithm is constant, hence S(n) = O(1).

Example: Addition of two Arrays.

Algorithm ADD_ARRAY(A, B)
// Description: Perform element-wise arithmetic addition of two arrays
// Input: Two number arrays A and B
// Output: Array C holding the element-wise sum of array A and B
for i ← 1 to n do
C[i] ← A[i] + B[i]
end
return C

Adding the corresponding elements of two arrays, each of size n, requires n extra memory locations
to hold the result. As the input size n increases, the space required to hold the result grows linearly with
the input. Thus, the space complexity of the above code segment is S(n) = O(n).
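
A rough Python sketch of the same algorithm (names are illustrative; Python lists are 0-indexed and n is taken from the input). The result list C is the variable-size component that grows with n:

def add_array(A, B):
    n = len(A)
    C = [0] * n                  # n extra memory locations for the result -> S(n) = O(n)
    for i in range(n):
        C[i] = A[i] + B[i]
    return C

print(add_array([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]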

Example: Sum of elements of Arrays.


Algorithm SUM_ARRAY_ELEMENTS(A)
// Description: Add all elements of array A
// Input: Array A of size n
// Output: Variable Sum which holds the addition of array elements
Sum ← 0
for i ← 1 to n do

Sum ← Sum + A[i]
end
return Sum

The addition of all array elements requires only one extra variable, Sum; this is independent
of the array size.
So the space complexity of the algorithm is S(n) = O(1).
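
In Python the same idea could be sketched as below (names are illustrative); only the single accumulator Sum is extra, regardless of the array size:

def sum_array_elements(A):
    Sum = 0                      # one extra variable, independent of len(A) -> S(n) = O(1)
    for x in A:
        Sum = Sum + x
    return Sum

print(sum_array_elements([1, 2, 3, 4]))  # 10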

2. Time Complexity
The time required by an algorithm to solve a given problem is called the time complexity of the algorithm.
Time complexity is not measured in physical clock ticks; rather, it is estimated by counting the most
frequent basic operation in the algorithm.
We use the notation T(n) to denote the time complexity.

Example: Addition of two scalar variables.


Algorithm ADD_SCALAR(A, B)
// Description: Perform arithmetic addition of two numbers
// Input: Two scalar variables A and B
// Output: Variable C, which holds the addition of A and B
C←A+B
return C

The sum of two scalar numbers requires one addition operation. Thus the time complexity of this
algorithm is constant, so T(n) = O(1).

Example: Perform addition of two Arrays.


Algorithm ADD_ARRAY(A, B)
// Description : Perform element wise arithmetic addition of two arrays
// Input : Two number arrays A and B of length n
// Output : Array C holding the element wise sum of array A and B
for i ← 1 to n do
C[i] ← A[i] + B[i]
end
return C


As can be observed from the above code, adding the array elements requires iterating the loop n times.
The variable i is initialized once, the relation between the control variable i and n is checked n times, and i is
incremented n times. Within the loop, the addition and assignment operations are each performed n times.
Thus, the total time of the algorithm is measured as
T(n) = 1 (initialization) + n × (comparison + increment + addition + assignment)
= 1 + 4n
While doing efficiency analysis of the algorithm, we are interested in the order of complexity in terms
of the input size n.
So, all additive and multiplicative constants are dropped. Thus, for the given algorithm T(n) = O(n).
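
The 1 + 4n model can be checked informally by instrumenting the loop; the counter in this hypothetical Python sketch charges one unit for the loop-variable initialization and four units per iteration (comparison, addition, assignment, increment):

def add_array_counted(A, B):
    n = len(A)
    C = [0] * n
    ops = 1                      # initialization of i
    i = 0
    while i < n:
        ops += 4                 # comparison + addition + assignment + increment
        C[i] = A[i] + B[i]
        i += 1
    return C, ops

_, ops = add_array_counted(list(range(1000)), list(range(1000)))
print(ops)                       # 1 + 4*1000 = 4001 -> grows linearly, so T(n) = O(n)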

Example: Sum of elements of array.


Algorithm SUM_ARRAY_ELEMENTS(A)
// Description: Add all elements of array A
// Input: Array A of size n
// Output: Variable Sum which holds the addition of array elements
Sum ← 0
for i ← 1 to n do
Sum ← Sum + A[i]
end

The addition of all array elements requires n additions (we omit the number of comparisons,
assignments, initializations, etc. to avoid the multiplicative and additive constants). The number of additions
depends on the size of the array; it grows linearly with the input size. Thus the time complexity of the
above code is T(n) = O(n).


Growth Function
Growth functions are used to estimate the number of steps an algorithm uses as its input
grows.
The order of growth indicates how quickly the time required by an algorithm grows with respect to
the input size.
The largest number of steps needed to solve the given problem using an algorithm on an input
of specified size is the worst-case complexity.

➢ Algorithms are categorized into the following efficiency classes:

Efficiency Class | Order of Growth | Examples
Constant         | 1               | Delete the first node of a linked list; remove the largest element from a max heap; add two numbers
Logarithmic      | log n           | Binary search; insert/delete an element in a binary search tree
Linear           | n               | Linear search; insert a node at the end of a linked list; find the minimum/maximum element of an array
n log n          | n log n         | Merge sort; quick sort; heap sort
Quadratic        | n²              | Selection sort; bubble sort; input a 2D array; find the maximum element of a 2D matrix
Cubic            | n³              | Matrix multiplication
Exponential      | 2ⁿ              | Find the power set of a set; find the optimal solution to the knapsack problem; solve TSP using dynamic programming

These are the most widely used classes; many other classes exist.
Algorithms with exponential or factorial running times are unacceptable for practical use.
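
To see why the higher classes quickly become impractical, one can tabulate a few of these growth functions for small input sizes; the following Python sketch is purely illustrative:

import math

def show_growth(ns=(10, 20, 30)):
    for n in ns:
        print(f"n={n:>3}  log n={math.log2(n):6.1f}  n log n={n * math.log2(n):8.1f}  "
              f"n^2={n**2:6d}  n^3={n**3:8d}  2^n={2**n:12d}")

show_growth()
# Even for n = 30, 2^n is already over a billion steps, while n^2 is only 900.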


Asymptotic Notation
Asymptotic notations are mathematical tools for estimating the time and space complexity of an algorithm
without implementing it in a programming language.
They are a way of describing the cost of an algorithm.
Asymptotic analysis of an algorithm is independent of processor speed, RAM and other machine details.
For example, in bubble sort, when the input array is already sorted, the time taken by the algorithm
is linear, i.e., the best case.
But when the input array is in reverse order, the algorithm takes the maximum (quadratic) time
to sort the elements, i.e., the worst case.
When the input array is neither sorted nor in reverse order, it takes average time. These
durations are denoted using asymptotic notations.
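
As an illustration, a bubble sort with an early-exit flag (a common variant, sketched below in Python; the counting is purely illustrative) performs one linear pass over an already-sorted array (best case) but about n²/2 comparisons on a reverse-sorted array (worst case):

def bubble_sort(a):
    n = len(a)
    comparisons = 0
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # already sorted: stop after one pass
            break
    return comparisons

print(bubble_sort(list(range(100))))         # best case: 99 comparisons, O(n)
print(bubble_sort(list(range(100, 0, -1))))  # worst case: 4950 comparisons, O(n^2)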
➢ There are mainly three asymptotic notations:
• Big-O notation
• Omega notation
• Theta notation
1. Big Oh
This notation is denoted by ‘O’ and pronounced “Big oh”.
It means the running time of the algorithm cannot exceed its
asymptotic upper bound.

• Big Oh (O) (upper bound)

f(n) = O(g(n)) if there exist positive constants c and n0 such that
f(n) <= c.g(n) for all n >= n0
Here f(n) lies on or below c.g(n).

2. Big Omega
This notation is denoted by ‘Ω’ and pronounced “Big omega”.
It defines the lower bound for the algorithm.
It means the running time of the algorithm cannot be less than its
asymptotic lower bound.

• Omega (Ω) (lower bound)

f(n) = Ω(g(n)) if there exist positive constants c and n0 such that
f(n) >= c.g(n) for all n >= n0
Here f(n) lies on or above c.g(n).

This document is property of RKDEMY and cannot be used, disclosed or duplicated without the prior written consent of RKDEMY pg.1- 8
Analysis of Algorithm Introduction

3. Big Theta
This notation is denoted by ‘θ’ and pronounced “Big theta”.
It defines a tight bound for the algorithm.
It means the running time of the algorithm cannot be less than or
greater than its asymptotic tight bound.

• Theta (θ) (tight bound)

f(n) = θ(g(n)) if there exist positive constants c1, c2 and n0 such that
c1.g(n) <= f(n) <= c2.g(n) for all n >= n0
Here f(n) lies between c1.g(n) and c2.g(n).

Example: Represent the following functions using Big Oh, Omega and Theta notations.
(i) f(n) = 3n + 2 (ii) f(n) = 10n² + 2n + 1
Solution:
(A) Big Oh (upper bound)
(i) f(n) = 3n + 2
To find the upper bound of f(n), we have to find c and n0 such that 0 ≤ f(n) ≤ c.g(n) for all n ≥ n0
0 ≤ f(n) ≤ c.g(n)
0 ≤ 3n + 2 ≤ c.g(n)
0 ≤ 3n + 2 ≤ 3n + 2n, for all n ≥ 1 (there are infinitely many such choices)
0 ≤ 3n + 2 ≤ 5n
So, c = 5, g(n) = n and n0 = 1
f(n) = O(g(n)) = O(n)

(ii) f(n) = 10n² + 2n + 1

To find the upper bound of f(n), we have to find c and n0 such that 0 ≤ f(n) ≤ c.g(n) for all n ≥ n0
0 ≤ f(n) ≤ c.g(n)
0 ≤ 10n² + 2n + 1 ≤ 10n² + 2n² + n², for all n ≥ 1
0 ≤ 10n² + 2n + 1 ≤ 13n²
So, c = 13, g(n) = n² and n0 = 1
f(n) = O(g(n)) = O(n²)


(B) Big Omega (lower bound)

(i) f(n) = 3n + 2
To find the lower bound of f(n), we have to find c and n0 such that 0 ≤ c.g(n) ≤ f(n) for all n ≥ n0
0 ≤ c.g(n) ≤ f(n)
0 ≤ c.g(n) ≤ 3n + 2
0 ≤ 3n ≤ 3n + 2 → true, for all n ≥ 1
f(n) = Ω(g(n)) = Ω(n) for c = 3, n0 = 1

(ii) f(n) = 10n² + 2n + 1

To find the lower bound of f(n), we have to find c and n0 such that 0 ≤ c.g(n) ≤ f(n) for all n ≥ n0
0 ≤ c.g(n) ≤ f(n)
0 ≤ c.g(n) ≤ 10n² + 2n + 1
0 ≤ 10n² ≤ 10n² + 2n + 1 → true, for all n ≥ 1
So, c = 10, g(n) = n² and n0 = 1
f(n) = Ω(g(n)) = Ω(n²) for c = 10, n0 = 1

(C) Big Theta (tight bound)

(i) f(n) = 3n + 2
To find the tight bound of f(n), we have to find c1, c2 and n0 such that 0 ≤ c1.g(n) ≤ f(n) ≤ c2.g(n) for
all n ≥ n0
0 ≤ c1.g(n) ≤ 3n + 2 ≤ c2.g(n)
0 ≤ 3n ≤ 3n + 2 ≤ 5n, for all n ≥ 1
The above inequality is true, and there exist infinitely many such inequalities.
So, f(n) = θ(g(n)) = θ(n) for c1 = 3, c2 = 5, n0 = 1

(ii) f(n) = 10n² + 2n + 1

To find the tight bound of f(n), we have to find c1, c2 and n0 such that 0 ≤ c1.g(n) ≤ f(n) ≤ c2.g(n)
for all n ≥ n0
0 ≤ c1.g(n) ≤ 10n² + 2n + 1 ≤ c2.g(n)
0 ≤ 10n² ≤ 10n² + 2n + 1 ≤ 13n², for all n ≥ 1
The above inequality is true, and there exist infinitely many such inequalities.
So, f(n) = θ(g(n)) = θ(n²) for c1 = 10, c2 = 13, n0 = 1
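
These inequalities can also be spot-checked numerically; the helper below is a purely illustrative Python sketch that tests the constants found above over a finite range of n:

def check_bounds(f, g, c1, c2, n0, n_max=1000):
    # Verify c1*g(n) <= f(n) <= c2*g(n) for all n0 <= n <= n_max
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, n_max + 1))

# (i)  f(n) = 3n + 2,          g(n) = n,   c1 = 3,  c2 = 5,  n0 = 1
print(check_bounds(lambda n: 3*n + 2, lambda n: n, 3, 5, 1))               # True
# (ii) f(n) = 10n^2 + 2n + 1,  g(n) = n^2, c1 = 10, c2 = 13, n0 = 1
print(check_bounds(lambda n: 10*n*n + 2*n + 1, lambda n: n*n, 10, 13, 1))  # True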


P, NP, NP-Hard and NP-Complete


➢ Problems can be classified into one of the following categories:

1. Problems that cannot be defined in a proper way.

2. Problems that can be defined but cannot be solved.
3. Problems that can be solved, but not feasibly: any known algorithm takes too long
to solve such a problem.

Intractable: a problem is called intractable if it takes too long to solve.

4. Problems that can be solved, theoretically and practically, in a reasonable amount of time.

Tractable: if a problem is solvable in polynomial time, it is called tractable; the class of such

problems is denoted by P.

Polynomial time                      Exponential time
• Linear search O(n)                 Travelling salesman O(2ⁿ)
• Binary search O(log n)             Sudoku O(2ⁿ)
• Inserting an element O(n)          Scheduling O(2ⁿ)
• Selection sort O(n²)

1. P – Polynomial-time solvable.

Problems that can be solved in polynomial time are known as class P problems (e.g., sorting and
searching); they take time such as O(n), O(n²) or O(n³).

E.g., finding the maximum element of an array, or checking whether a string is a palindrome. There
are many problems that can be solved in polynomial time.

P: problems are quick to solve.

2. NP – Non-deterministic polynomial time.

Problems for which no polynomial-time solution is known, but whose solutions can be verified in
polynomial time, e.g., TSP (travelling salesman problem) and Sudoku.

NP problems are checkable in polynomial time: given a candidate solution to the problem, we can
check whether that solution is correct or not in polynomial time.

NP: problems are quick to verify but slow to solve.
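
The "quick to verify" property can be illustrated with TSP: given a proposed tour and a cost budget, checking the answer takes only polynomial (here linear) time in the number of cities, even though no polynomial-time algorithm is known for finding an optimal tour. The verifier below is a hypothetical Python sketch:

def verify_tsp_tour(dist, tour, budget):
    # dist[i][j] = distance between city i and city j
    n = len(dist)
    # The tour must visit every city exactly once
    if sorted(tour) != list(range(n)):
        return False
    # Total length of the closed tour, computed in O(n) time
    total = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total <= budget

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(verify_tsp_tour(dist, [0, 1, 3, 2], 25))  # True: 2 + 4 + 8 + 9 = 23 <= 25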


P is a subset of NP.

Take two problems A and B, both in NP.

Reducibility: if we can convert any instance of problem A into an instance of problem B in polynomial
time, then A is reducible to B.

NP-hard: a problem is NP-hard if every problem in NP can be polynomially reduced to it.

Now suppose we find that A is reducible to B; then B is at least as hard as A.

NP-hard: problems that are slow to verify, slow to solve, and to which every problem in NP can be reduced.

NP-complete: the problems that are both in NP and NP-hard are known as NP-complete
problems.

NP-complete problems are the hardest problems in the NP set.

Now suppose we have an NP-complete problem R that is reducible to Q; then Q is at least as hard as R.
Since R is NP-hard, Q will also be at least NP-hard, and it may be NP-complete as well.

NP-complete: problems that are quick to verify, slow to solve, and to which every problem in NP can be reduced.
