
Design and Analysis of Algorithm

Course Code: MCAE253

Thara Chakkingal
Introduction
● An algorithm is a set of steps or operations that solves a problem by
performing calculation, data processing, and automated reasoning
tasks.
● It takes a set of values as input and produces a set of values as
output by solving the problem.
● Algorithm design involves creating an algorithm that solves a
problem efficiently, using minimum time and space.
Algorithms must satisfy the following criteria:
● Input: Zero or more quantities are externally supplied.
● Output: At least one quantity is produced.
● Definiteness: Each instruction is clear and unambiguous.
● Finiteness: If we trace out the instructions of an algorithm, then for all cases, the
algorithm terminates after a finite number of steps.
● Effectiveness: Every instruction must be very basic so that it can be carried out, in
principle, by a person using only pencil and paper. It is not enough that each operation
be definite as in criterion 3; it must also be feasible.


Analysis of algorithm
Algorithms to find the biggest of three numbers

Algorithm 1
big = a
if(b > big)
    big = b
if(c > big)
    big = c
return big

Algorithm 2
if(a > b)
{
    if(a > c)
        return a
    else
        return c
}
else
{
    if(b > c)
        return b
    else
        return c
}
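A minimal runnable sketch of the two algorithms (in Python, since the slides use pseudocode; the function names `biggest_v1` and `biggest_v2` are mine):

```python
def biggest_v1(a, b, c):
    # Algorithm 1: track a running maximum; always performs 2 comparisons
    big = a
    if b > big:
        big = b
    if c > big:
        big = c
    return big

def biggest_v2(a, b, c):
    # Algorithm 2: nested comparisons; also performs exactly 2 comparisons
    if a > b:
        return a if a > c else c
    else:
        return b if b > c else c

# Both algorithms agree on every ordering of the inputs
for trio in [(1, 2, 3), (3, 2, 1), (2, 3, 1), (5, 5, 1)]:
    assert biggest_v1(*trio) == biggest_v2(*trio) == max(trio)
```

Both versions make the same number of comparisons, so here the choice is one of style rather than efficiency.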
Algorithmic efficiency
● There may be more than one algorithm for solving a problem.
● One algorithm will be more efficient than the others.
● If a function is linear, efficiency depends on the number of instructions executed.

efficiency = f(n)

● n = number of instructions
Algorithmic efficiency
● Linear loops
for(i = 0; i < 1000; i++)
{
    code
}
○ n = loop factor = 1000
○ The number of iterations is directly proportional to the loop factor
○ f(n) = n
Algorithmic efficiency
● Linear loops
for(i = 0; i < 1000; i = i + 2)
{
    code
}
○ n = loop factor = 1000
○ The number of iterations is directly proportional to half the loop factor
○ f(n) = n/2
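The two linear-loop counts can be verified directly (a Python sketch of the C-style loops above):

```python
# Count iterations of the two linear loops with loop factor n = 1000
n = 1000

count1 = 0
for i in range(0, n):          # for(i = 0; i < 1000; i++)
    count1 += 1

count2 = 0
for i in range(0, n, 2):       # for(i = 0; i < 1000; i = i + 2)
    count2 += 1

assert count1 == n             # f(n) = n
assert count2 == n // 2        # f(n) = n/2
```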
Algorithmic efficiency
● Nested loops
● Iterations = outer loop iterations * inner loop iterations
● Quadratic loop
for(i=1; i <= n; i++)
{
for(j=1; j <= n; j++)
{
code
}
}
● f(n) = n²
Algorithmic efficiency
● Nested loops
● Dependent Quadratic loop
for(i=1; i <= n; i++)
{
for(j=1; j <= i; j++)
{
code
}
}
● f(n) = n (n+1) / 2
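Both nested-loop counts can be checked by translating the loops above into Python and counting iterations:

```python
n = 10

# Quadratic loop: the inner loop runs n times for each outer iteration
quad = 0
for i in range(1, n + 1):
    for j in range(1, n + 1):
        quad += 1

# Dependent quadratic loop: the inner loop runs i times on outer iteration i
dep = 0
for i in range(1, n + 1):
    for j in range(1, i + 1):
        dep += 1

assert quad == n * n              # f(n) = n^2
assert dep == n * (n + 1) // 2    # f(n) = n(n+1)/2
```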
Analysis of Algorithm

● Space Complexity

● Time Complexity
Space Complexity (Ref. DS by Salaria)
● The amount of memory needed by a program up to completion of
execution.
● The space needed by a program has the following components:
○ Instruction space
■ Space needed to store the compiled version of the program's instructions
○ Data space
■ Space needed by constants,
■ variables,
■ fixed-sized structured variables,
■ dynamically allocated space
○ Environmental stack space
■ Used to save the information needed to resume execution of partially completed
functions and methods
● E.g. one function calls another function
■ Each time a function is invoked, the following data are saved on the environment stack:
● Return address
● Values of local variables

Time Complexity (Ref. DS by Salaria)
● The amount of time a program needs to run to completion.
● Time complexity varies from system to system.
● The following are the steps to measure the time complexity accurately:
○ Count all sorts of operations performed in the algorithm.
○ Know the time required for each operation.
○ Compute the time required for execution of the algorithm.

● Note: The space and/or time complexity is usually expressed in the form of a function f(n), where n
is the input size for the given instance of the problem.
Time Complexity

● Execution time physically clocked

● Count number of operations

algo sum()
{
    s = 0             --------- 1
    for i = 1 to n    --------- n + 1
        s = s + a[i]  --------- n
    return s          --------- 1
}                               2n + 3
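The tally above can be reproduced in a short sketch (Python, since the slide uses pseudocode; the function name `sum_with_count` is mine). The counting convention matches the slide: the assignment costs 1, the loop test executes n + 1 times, the body n times, and the return costs 1:

```python
def sum_with_count(a):
    """Sum the array while counting operations as in the slide's tally."""
    s = 0
    ops = 1                    # s = 0            -> 1
    for x in a:
        ops += 1               # loop test        -> n times
        s = s + x
        ops += 1               # loop body        -> n times
    ops += 1                   # final failing loop test (the "+1" in n + 1)
    ops += 1                   # return s         -> 1
    return s, ops

s, ops = sum_with_count([1, 2, 3, 4])
assert s == 10
assert ops == 2 * 4 + 3        # 2n + 3 with n = 4
```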
Asymptotic Analysis
● Asymptotic analysis of an algorithm refers to defining the mathematical
bounds of its run-time performance.

● It is used to find the best case, average case, and worst case scenarios of
an algorithm.

● Asymptotic analysis is input bound.

● Other than the "input", all other factors are considered constant.
● Asymptotic analysis refers to computing the running time of any
operation in mathematical units of computation.

● For example, the running time of one operation may be computed as f(n),
while for another operation it may be computed as g(n²).

● The time required by an algorithm falls under three types:

○ Best Case − minimum time required for program execution.

○ Average Case − average time required for program execution.

○ Worst Case − maximum time required for program execution.

Asymptotic Notations
● Following are the commonly used asymptotic notations to calculate the
running time complexity of an algorithm.
○ Big - Oh (O) Notation
○ Big - Omega (Ω) Notation
○ Big - Theta (Θ) Notation

○ Note : (reference book Khushwaha)


O-Notation (Upper Bound)
● The notation O(n) is the formal way to express the upper bound of an
algorithm's running time.

● It measures the worst case time complexity, or the longest amount of
time an algorithm can possibly take to complete.
Big Oh: O-Notation (Upper Bound)

● Def - Given functions f(n) and g(n), we say that f(n) is O(g(n)) if and only if there are positive constants c and n0 such that

f(n) ≤ c * g(n) for all n ≥ n0.

● g(n) should be as small as possible.

f(n) = O(g(n))
● As n increases, f(n) grows no faster than g(n). In other words, g(n) is an asymptotic upper bound on f(n).

● Examples:

● Consider the following f(n) and g(n)...

f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)), then there must exist constants C > 0 and n0 >= 1 such that

f(n) <= C * g(n) for all n >= n0

● f(n) <= C * g(n)

⇒ 3n + 2 <= C * n

The above condition holds for C = 4 and all n >= 2.

● By using Big - Oh notation we can represent the time complexity as follows...

3n + 2 = O(n)
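The witness constants from the example can be checked numerically (a small Python sketch):

```python
# Check the Big-O witness constants: f(n) = 3n + 2 is O(n) with C = 4, n0 = 2
f = lambda n: 3 * n + 2
C, n0 = 4, 2

assert all(f(n) <= C * n for n in range(n0, 10_000))
# The bound fails below n0: f(1) = 5 > 4 * 1
assert not f(1) <= C * 1
```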
● Example 2
● Consider
○ f(n) = 3 * n^2
○ g(n) = n
● Is f(n) O(g(n))? That is, is 3 * n^2 O(n)?
● No: for any constant C, 3n^2 > C * n once n > C/3, so no constants C and n0 can satisfy the definition.
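The answer is no, and a numeric sketch shows why: whatever constant C we pick, 3n² eventually exceeds C·n (the crossover is at n = C/3):

```python
# 3n^2 is NOT O(n): for any fixed C, 3n^2 > C*n for all n beyond C/3
f = lambda n: 3 * n * n
for C in (10, 100, 1000):
    n = C // 3 + 1          # first integer past the crossover point C/3
    assert f(n) > C * n
```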
Omega: Ω-Notation (Lower Bound)
● The notation Ω(n) is the formal way to express the lower bound
of an algorithm's running time.
● It measures the best case time complexity, or the shortest amount
of time an algorithm can possibly take to complete.
Ω-Notation (Lower Bound)
● Def - Given functions f(n) and g(n), we say that f(n) is Ω(g(n)) if g(n) is O(f(n)); that is,
there exist positive constants c and n0 such that f(n) ≥ c * g(n) for all n ≥ n0.

● g(n) should be as large as possible. Gives the best case running time.

f(n) = Ω(g(n))
● Example
● Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)), then there must exist constants C > 0 and n0 >= 1 such that f(n) >= C * g(n) for all n >= n0

● f(n) >= C * g(n)

⇒ 3n + 2 >= C * n
The above condition holds for C = 1 and all n >= 1.

● By using Big - Omega notation we can represent the time
complexity as follows...
3n + 2 = Ω(n)
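Again the witness constants can be checked numerically (a small Python sketch):

```python
# Check the Big-Omega witness constants: f(n) = 3n + 2 is Omega(n)
# with C = 1 and n0 = 1
f = lambda n: 3 * n + 2
C, n0 = 1, 1

assert all(f(n) >= C * n for n in range(n0, 10_000))
```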
Theta: Θ-Notation (Same order)
● The notation Θ(n) is the formal way to express both the lower
bound and the upper bound of an algorithm's running time.

● Big - Theta notation is used to define a tight (exact-order) bound on
an algorithm's time complexity.

● That means Big - Theta notation bounds the running time both
above and below by the same order of growth.

● In other words, Big - Theta notation describes the exact order of
growth of an algorithm's time complexity for all sufficiently large inputs.
Θ-Notation (Same order)
● We say f(n) = Θ(g(n)) if there exist positive constants n0, c1 and c2 such that to the right of n0 the value of f(n)
always lies between c1 * g(n) and c2 * g(n) inclusive.

● c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0

● i.e. g(n) is both an upper and a lower bound of f(n).

f(n) = Θ(g(n))
● Example
● Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n

If we want to represent f(n) as Θ(g(n)), then there must exist constants C1, C2 > 0 and n0 >= 1 such that

C1 * g(n) <= f(n) <= C2 * g(n) for all n >= n0
● C1 * g(n) <= f(n) <= C2 * g(n)
C1 * n <= 3n + 2 <= C2 * n
The above condition holds for C1 = 1, C2 = 4 and all n >= 2.

● By using Big - Theta notation we can represent the time complexity
as follows...
3n + 2 = Θ(n)
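The Theta witness constants can also be checked numerically; note that the upper inequality 3n + 2 <= 4n only holds from n = 2 onward, so n0 = 2 here:

```python
# Check the Big-Theta witness constants: 1*n <= 3n + 2 <= 4*n for all n >= 2
f = lambda n: 3 * n + 2
C1, C2, n0 = 1, 4, 2

assert all(C1 * n <= f(n) <= C2 * n for n in range(n0, 10_000))
# At n = 1 the upper bound fails: f(1) = 5 > 4 * 1
assert not f(1) <= C2 * 1
```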
Calculation of Time complexity
● Drop lower order terms and constant factors.

● Drop the coefficient of the highest-order term.

● Time complexity for the following functions:

○ 7n - 3 is O(n)
○ 10n³ + 100n² + 11n + 900 is O(n³)
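The second example can be verified with explicit witness constants (my choice: replace every lower-order term by n³, giving 10n³ + 100n² + 11n + 900 <= 1021·n³ for n >= 1):

```python
# Witness constants for the two examples on the slide (constants are mine)
f1 = lambda n: 7 * n - 3
f2 = lambda n: 10 * n**3 + 100 * n**2 + 11 * n + 900

assert all(f1(n) <= 7 * n for n in range(1, 1000))        # 7n - 3 = O(n)
assert all(f2(n) <= 1021 * n**3 for n in range(1, 1000))  # ... = O(n^3)
```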
Calculation of Time complexity

● Special classes of algorithms in increasing order of growth:

○ constant: O(1)
○ logarithmic: O(log n)
○ linear: O(n)
○ quadratic: O(n²)
○ polynomial: O(nᵏ), k ≥ 1
○ exponential: O(aⁿ), a > 1 (a is a constant)
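The ordering of these growth classes can be observed at even a moderate input size (a Python sketch using n = 20):

```python
import math

# Evaluate representatives of each growth class at n = 20; the increasing
# order listed on the slide already holds at this size
n = 20
values = [1, math.log2(n), n, n**2, n**3, 2**n]
assert values == sorted(values)    # 1 < log n < n < n^2 < n^3 < 2^n
```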
The type of input also affects the running time:
● Best-case analysis:- based on “ideal” input
● Worst-case analysis:- based on worst possible input
● Average-case analysis:- based on the average outcome of running
an algorithm many times over random input
Cases to consider during algorithm analysis

● Best case input

○ With this input the algorithm takes the shortest time to execute.
○ E.g. for a searching algorithm, the number we search for is found at the first place itself
● Worst case input
○ With this input the algorithm takes the most time to execute.
○ E.g. for a searching algorithm, the number we search for is found at the last place
● Average case input
○ With this input the algorithm delivers average performance.
○ A(n) = Σ pi * ti for i = 1 to m
○ n = size of input
○ m = number of groups
○ pi = probability that the input will be from group i
○ ti = time that the algorithm takes for input from group i
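As a sketch of the average-case formula (my example: linear search where the key is equally likely to be at any of the n positions, so group i is "key found at position i" with pi = 1/n and ti = i comparisons, giving A(n) = (n + 1)/2):

```python
from fractions import Fraction

def average_case(n):
    """A(n) = sum over i = 1..n of pi * ti with pi = 1/n and ti = i."""
    p = Fraction(1, n)                       # every position equally likely
    return sum(p * i for i in range(1, n + 1))

assert average_case(9) == Fraction(10, 2)    # (9 + 1) / 2 = 5 comparisons
assert average_case(100) == Fraction(101, 2)
```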
