Algorithm Note 2

1. Algorithm:
An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the desired output. Algorithms are generally created independent of the underlying language, i.e. an algorithm can be implemented in more than one programming language.
Functional Diagram:

    Problem → Algorithm → Program (written in a Programming Language)
    Input → Program → Output

An algorithm can be specified in any of the three forms given below (a small illustration follows the list):

1. In English
2. In a programming language
3. As pseudocode
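The illustration: a small algorithm for finding the largest of n numbers, written first as pseudocode and then as an equivalent C function. This sketch is ours, not from the original notes; the name find_max is ours.

#include <stdio.h>

/* Pseudocode:
     max = a[0]
     for i = 1 to n-1
         if a[i] > max then max = a[i]
     return max
   The same algorithm expressed in C: */
int find_max(const int a[], int n)
{
    int max = a[0];               /* assumes n >= 1 */
    for (int i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    return max;
}

int main(void)
{
    int a[] = { 12, 45, 7, 3, 44 };
    printf("Largest element = %d\n", find_max(a, 5));
    return 0;
}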
1.1 Criteria of an Algorithm:
1. Input: There are some input data which are externally supplied to the algorithm.
2. Output: There will be at least one output.
3. Definiteness: Each instruction/step of the algorithm must be clear and unambiguous.
4. Finiteness: If we trace out the instructions of the algorithm, it must terminate after a finite number of steps.
5. Effectiveness: Each step of the algorithm must be sufficiently basic that it can be carried out by a person mechanically, using pen and paper.
1.2 Analyzing an Algorithm:
Once an algorithm is designed, there are two issues which have to be properly dealt with.
1.2.1 Validation of the algorithm: It is necessary to show that the designed algorithm gives correct output for all possible combinations of input values. This is called validation of the algorithm, or program proving.
1.2.2 Evaluation of complexity: When a program is executed, it uses the resources of a computing system, such as the central processing unit, the memory unit, etc. The study of an algorithm to determine the amount of time and storage it may require for execution is called evaluation of complexity.
1.3 What the analysis of an algorithm does not do
The analysis of an algorithm does not give a formula that tells us how many seconds or how many machine cycles a particular algorithm will take to solve a problem. Such a figure would not be useful for choosing the right algorithm anyway, because it depends on many variables:
• the type of the computer
• the instruction set used by the microprocessor
• the condition (load) of the computer
• the optimizations the compiler performs on the executable code
1.4 Difference Between Program and Algorithm
1. A programming language is strict in its syntax; otherwise the program would not compile and run. An algorithm is not strict about syntax: it can be written in any easily understandable language, as long as it accomplishes the intended task.
2. A program does not necessarily have to satisfy the finiteness condition (an operating system, for instance, is meant to run indefinitely), but an algorithm must satisfy all five criteria.
3. A program is just an implementation of an algorithm, and an algorithm is a blueprint of a program.
2. Algorithm Analysis
The efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation:
• A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency is measured by assuming that all other factors, for example processor speed, are constant and have no effect on the implementation.
• A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented in a programming language and executed on a target computer. In this analysis, actual statistics such as the running time and the space required are collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with the execution or running time of the various operations involved. The running time of an operation can be defined as the number of computer instructions executed per operation.
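As an aside, a posteriori analysis can be illustrated with a minimal timing sketch (ours, not part of the original notes) that measures the actual running time of an O(n) loop using the standard C clock() function:

#include <stdio.h>
#include <time.h>

int main(void)
{
    long long n = 100000000LL, sum = 0;
    clock_t start = clock();
    for (long long i = 0; i < n; i++)
        sum += i;                               /* the operation being timed */
    clock_t end = clock();
    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("sum = %lld, elapsed = %f s\n", sum, seconds);
    return 0;
}

The measured time depends on the machine, the compiler and its optimization level, which is exactly why a priori analysis abstracts these factors away.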
3. Algorithm Complexity
Suppose X is an algorithm and n is the size of the input data. The time and space used by the algorithm X are the two main factors which decide the efficiency of X.
• Time Factor − Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
• Space Factor − Space is measured by counting the maximum memory space required by the algorithm.
The complexity of an algorithm, f(n), gives the running time and/or the storage space required by the algorithm in terms of n, the size of the input data.
3.1 Space Complexity
The space complexity of an algorithm represents the amount of memory space required by the algorithm in its life cycle. The space required by an algorithm is equal to the sum of the following two components (a short sketch making both parts concrete follows the list):
• A fixed part, i.e. the space required to store certain data and variables that are independent of the size of the problem. For example, simple variables and constants used, program size, etc.
• A variable part, i.e. the space required by variables whose size depends on the size of the problem. For example, dynamic memory allocation, recursion stack space, etc.
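A hypothetical C sketch (the variable names are illustrative, not from the notes):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i, n, sum = 0;                   /* fixed part: a few scalars,  */
                                         /* independent of problem size */
    printf("Enter n : ");
    scanf("%d", &n);

    int *arr = malloc(n * sizeof *arr);  /* variable part: n integers,  */
    if (arr == NULL)                     /* grows with the problem size */
        return 1;

    for (i = 0; i < n; i++)
        arr[i] = i;
    for (i = 0; i < n; i++)
        sum += arr[i];
    printf("sum = %d\n", sum);

    free(arr);
    return 0;                            /* total space: O(1) + O(n) = O(n) */
}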
However, running time requirements are more critical than memory
requirements. Therefore, in this section, we will concentrate on the
running time efficiency of algorithms.

3.2 Time Complexity


Time complexity of an algorithm represents the amount of time required by the
algorithm to run to completion. Time requirements can be defined as a
numerical function T(n), where T(n) can be measured as the number of steps,
provided each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the
total computational time is T(n) = c n, where c is the time taken for the
addition of two bits. Here, we observe that T(n) grows linearly as the input size
increases.
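To make the n-bit example concrete, here is a minimal sketch (our illustration, not from the notes) that adds two n-bit numbers stored as bit arrays, least significant bit first; each iteration does a constant amount of work, so T(n) = c·n:

#include <stdio.h>

#define NBITS 8

/* Ripple-carry addition: one constant-time step per bit pair. */
void add_bits(const int a[], const int b[], int sum[], int n)
{
    int carry = 0;
    for (int i = 0; i < n; i++) {
        int s = a[i] + b[i] + carry;   /* one constant-time step */
        sum[i] = s % 2;
        carry  = s / 2;
    }
}

int main(void)
{
    int a[NBITS] = { 1, 0, 1, 0, 0, 0, 0, 0 };  /* 5 (LSB first) */
    int b[NBITS] = { 1, 1, 0, 0, 0, 0, 0, 0 };  /* 3 (LSB first) */
    int s[NBITS];
    add_bits(a, b, s, NBITS);
    for (int i = NBITS - 1; i >= 0; i--)
        printf("%d", s[i]);                     /* prints 00001000 = 8 */
    printf("\n");
    return 0;
}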

4.1 Worst Case Analysis (Usually Done)


This denotes the behaviour of an algorithm with respect to the worst possible
case of the input instance. The worst-case running time of an algorithm is an
upper bound on the running time for any input. Therefore, having the
knowledge of worst-case running time gives us an assurance that the algorithm
will never go beyond this time limit.
4.2 Average Case Analysis (Sometimes Done)
The average-case running time of an algorithm is an estimate of the running
time for an ‘average’ input. It specifies the expected behaviour of the algorithm
when the input is randomly drawn from a given distribution. Average-case
running time assumes that all inputs of a given size are equally likely.
4.3 Best Case Analysis (Rarely used)
The term ‘best-case performance’ is used to analyse an algorithm under optimal
conditions. For example, the best case for a simple linear search on an array
occurs when the desired element is the first in the list. However, while
developing and choosing an algorithm to solve a problem, we hardly base our
decision on the best-case performance. It is always recommended
to improve the average performance and the worst-case performance of an
algorithm.
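The linear search mentioned above makes a convenient running example for all three cases. A small sketch (our illustration, not from the notes):

#include <stdio.h>

/* Linear search.
   Best case:    key is at a[0]           -> 1 comparison,  O(1)
   Worst case:   key absent or at a[n-1]  -> n comparisons, O(n)
   Average case: key equally likely at any position -> about n/2
                 comparisons, still O(n). */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;          /* found: return its index */
    return -1;                 /* not found */
}

int main(void)
{
    int a[] = { 4, 8, 15, 16, 23, 42 };
    printf("best case  : index %d\n", linear_search(a, 6, 4));   /* first element */
    printf("worst case : index %d\n", linear_search(a, 6, 99));  /* absent */
    return 0;
}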
4.4 Amortized Running Time
Amortized running time refers to the time required to perform a sequence of (related) operations, averaged over all the operations performed. Amortized analysis guarantees the average performance of each operation in the worst case.
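The classic illustration is a dynamic array that doubles its capacity when full; the sketch below is ours, not from the notes. A single append is O(n) when it triggers a resize, but over n appends the total copying work is at most 1 + 2 + 4 + … + n < 2n, so the amortized cost per append is O(1).

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int *data;
    int size, capacity;
} DynArray;

void append(DynArray *a, int value)
{
    if (a->size == a->capacity) {          /* occasional O(n) resize */
        a->capacity = a->capacity ? 2 * a->capacity : 1;
        int *p = realloc(a->data, a->capacity * sizeof(int));
        if (p == NULL)
            exit(1);
        a->data = p;
    }
    a->data[a->size++] = value;            /* usual O(1) write */
}

int main(void)
{
    DynArray a = { NULL, 0, 0 };
    for (int i = 0; i < 1000; i++)
        append(&a, i);
    printf("size = %d, capacity = %d\n", a.size, a.capacity);
    free(a.data);
    return 0;
}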
5. Time–Space Trade-off
The best algorithm to solve a particular problem at hand is no doubt the one
that requires less memory space and takes less time to complete its execution.
But practically, designing such an ideal algorithm is not a trivial task. There can
be more than one algorithm to solve a particular problem. One may require less
memory space, while the other may require less CPU time to execute. Thus, it is
not uncommon to sacrifice one thing for the other. Hence, there exists a
time–space trade-off among algorithms.
So, if space is a big constraint, then one might choose a program that takes less
space at the cost of more CPU time. On the contrary, if time is a major
constraint, then one might choose a program that takes minimum time to
execute at the cost of more space.
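A small sketch of the trade-off (our illustration, not from the notes): computing Fibonacci numbers either with almost no extra memory but exponential time, or with an O(n) table and only O(n) time.

#include <stdio.h>

/* Plain recursion: O(1) extra space (besides the call stack) but
   exponential time, since the same subproblems are recomputed. */
long fib_slow(int n)
{
    return n < 2 ? n : fib_slow(n - 1) + fib_slow(n - 2);
}

/* Tabulated version: spends O(n) extra space on a table to bring
   the time down to O(n). */
long fib_fast(int n)
{
    long table[100] = { 0, 1 };     /* assumes n < 100 */
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}

int main(void)
{
    printf("fib_slow(30) = %ld\n", fib_slow(30));  /* ~1.3 million calls */
    printf("fib_fast(30) = %ld\n", fib_fast(30));  /* 29 additions */
    return 0;
}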

6.1 Calculate the time complexity equation to find the mean of n numbers using arrays.

Code                                                        Cost   Time
int i, n, arr[20], sum = 0;                                 C1     1
float mean = 0.0;                                           C2     1
printf("\n Enter the number of elements in the array : ");  C3     1
scanf("%d", &n);                                            C4     1
for(i=0;i<n;i++)                                            C5     n+1
    scanf("%d", &arr[i]);                                   C6     n
for(i=0;i<n;i++)                                            C7     n+1
    sum += arr[i];                                          C8     n
mean = (float)sum/n;                                        C9     1
printf("\n The sum of the array elements = %d", sum);       C10    1
printf("\n The mean of the array elements = %.2f", mean);   C11    1

Total time complexity = C1·1 + C2·1 + C3·1 + C4·1 + C5·(n+1) + C6·n + C7·(n+1) + C8·n + C9·1 + C10·1 + C11·1
= (C1 + C2 + C3 + C4 + C5 + C7 + C9 + C10 + C11) + n·(C5 + C6 + C7 + C8)
= C12 + C13·n, where C12 = (C1 + C2 + C3 + C4 + C5 + C7 + C9 + C10 + C11) and C13 = (C5 + C6 + C7 + C8)
Time complexity equation: T(n) = C12 + C13·n
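For completeness, the fragments above assembled into a runnable program (the includes and main are ours):

#include <stdio.h>

int main(void)
{
    int i, n, arr[20], sum = 0;
    float mean = 0.0;

    printf("\n Enter the number of elements in the array : ");
    scanf("%d", &n);                  /* assumes 0 < n <= 20 */
    for (i = 0; i < n; i++)           /* loop header tested n+1 times */
        scanf("%d", &arr[i]);         /* loop body executed n times   */
    for (i = 0; i < n; i++)
        sum += arr[i];
    mean = (float)sum / n;

    printf("\n The sum of the array elements = %d", sum);
    printf("\n The mean of the array elements = %.2f\n", mean);
    return 0;
}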
6.2 Calculate the time complexity equation to find whether an array of integers contains a duplicate number.

Code                                                  Cost   Time
int array[10], i, n, j, flag = 0;                     C1     1
printf("\n Enter the size of the array : ");          C2     1
scanf("%d", &n);                                      C3     1
for(i=0;i<n;i++)                                      C4     n+1
    scanf("%d", &array[i]);                           C5     n
for(i=0;i<n;i++)                                      C6     n+1
{                                                     C7     0
    for(j=i+1;j<n;j++)                                C8     n·n
    {                                                 C9     0
        if(array[i] == array[j] && i!=j)              C10    n·(n−1)
        {                                             C11    0
            flag = 1;                                 C12    n·(n−1)
            printf("\n Duplicate numbers found at     C13    n·(n−1)
                locations %d and %d", i, j);
        }                                             C14    0
    }                                                 C15    0
}                                                     C16    0
if(flag==0)                                           C17    1
    printf("\n No Duplicates Found");                 C18    1

(The counts n·n and n·(n−1) are worst-case upper bounds; the exact counts are n(n+1)/2 and n(n−1)/2 respectively, which leaves the quadratic order unchanged.)
Total time complexity = C1·1 + C2·1 + C3·1 + C4·(n+1) + C5·n + C6·(n+1) + C8·(n·n) + C10·n·(n−1) + C12·n·(n−1) + C13·n·(n−1) + C17·1 + C18·1
= (C1 + C2 + C3 + C4 + C6 + C17 + C18) + n·(C4 + C5 + C6 − C10 − C12 − C13) + n²·(C8 + C10 + C12 + C13)
= a + b·n + c·n², where a = (C1 + C2 + C3 + C4 + C6 + C17 + C18), b = (C4 + C5 + C6 − C10 − C12 − C13) and c = (C8 + C10 + C12 + C13)
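Again for completeness, a runnable version of the listing (the includes and main are ours; note that the i != j test in the original is redundant, since j always starts at i + 1):

#include <stdio.h>

int main(void)
{
    int array[10], i, n, j, flag = 0;

    printf("\n Enter the size of the array : ");
    scanf("%d", &n);                   /* assumes 0 < n <= 10 */
    for (i = 0; i < n; i++)
        scanf("%d", &array[i]);

    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)    /* j > i, so i != j always holds */
            if (array[i] == array[j]) {
                flag = 1;
                printf("\n Duplicate numbers found at locations %d and %d", i, j);
            }

    if (flag == 0)
        printf("\n No Duplicates Found");
    printf("\n");
    return 0;
}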

7. Asymptotic Analysis: The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that does not depend on machine-specific constants and does not require algorithms to be implemented or the running times of programs to be compared.
8. Asymptotic Notations: When analysing the complexity of an algorithm in terms of time and space, we can never give an exact number for the time and the space required by the algorithm; instead, we express them using standard notations known as asymptotic notations. These are mathematical tools to represent the complexity of algorithms for asymptotic analysis.
The following three asymptotic notations are most often used to represent the time complexity of algorithms:
i) Big O notation (Big Oh)  ii) Big Ω notation (omega)  iii) Big Θ notation (theta)

8.i) Big O Notation (Big Oh): Big-Oh (O) notation gives an upper bound for a function f(n) to within a constant factor.
We write f(n) = O(g(n)) if there are positive constants n₀ and c such that, to the right of n₀, f(n) always lies on or below c·g(n).
O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
Example 1: Calculate the Big O where f(n) = 3n³ + 2n + 7
f(n) = 3n³ + 2n + 7 ≤ 3n³ + 2n + 7n when n ≥ 1   (for n ≥ 1, 7n ≥ 7, so 3n³ + 2n + 7n ≥ 3n³ + 2n + 7)
= 3n³ + 9n
≤ 3n³ + 9n³ = 12n³
f(n) ≤ 12n³ where n ≥ 1
Now c = 12, n₀ = 1 and g(n) = n³.
This satisfies f(n) ≤ c·g(n) for all n ≥ n₀, so
f(n) = O(g(n)) = O(n³)
Shortcut method: 1) remove the constant term, 2) remove the lower-order terms, 3) remove the constant factor from the highest-order term.

O(3n³ + 2n + 7)
1) after removing the constant term 7 we get O(3n³ + 2n)
2) after removing the lower-order term 2n we get O(3n³)
3) after removing the constant factor of the highest-order term we get O(n³)
Example 2: Calculate the Big O where f(n) = 2ⁿ⁺¹
f(n) = 2ⁿ⁺¹ = 2·2ⁿ for all n ≥ 0, so f(n) = O(2ⁿ)
Example 3: Calculate the Big O where f(n) = n!
f(n) = n! = n·(n−1)·(n−2)·(n−3)·…·3·2·1
≤ n·n·n·…·n·n·n = nⁿ for all n > 0
f(n) = O(nⁿ)
Example 4: Calculate the Big O where f(n) = 10n·log n + 5n⁴ + 7·2ⁿ

n    n·log n    n⁴     2ⁿ
1    0          1      2
2    .61        16     4
3    1.43       81     8
4    2.40       256    16

(The table uses log₁₀. For small n the n⁴ column is the largest, but the exponential column eventually overtakes it: 2ⁿ ≥ n⁴ for all n ≥ 16, since 2¹⁶ = 65,536 = 16⁴.)
f(n) = 10n·log n + 5n⁴ + 7·2ⁿ ≤ 10·2ⁿ + 5n⁴ + 7·2ⁿ = 5n⁴ + 17·2ⁿ for all n ≥ 1
≤ 5·2ⁿ + 17·2ⁿ = 22·2ⁿ for all n ≥ 16
f(n) ≤ 22·2ⁿ for all n ≥ 16, so c = 22, n₀ = 16 and g(n) = 2ⁿ
f(n) = O(2ⁿ), since the exponential term dominates
8.ii) Big Ω Notation (omega): Big-Omega (Ω) notation gives a lower bound for a function f(n) to within a constant factor.
We write f(n) = Ω(g(n)) if there are positive constants n₀ and c such that, to the right of n₀, f(n) always lies on or above c·g(n).
Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }

Example 1: Calculate the Big Ω where f(n) = 3n³ + 2n + 7
f(n) = 3n³ + 2n + 7 ≥ 3n³ + 2n when n ≥ 0   (a + b ≥ a when b ≥ 0)
≥ 3n³
f(n) ≥ 3n³ where n ≥ 0
Now c = 3, n₀ = 0 and g(n) = n³.
This satisfies f(n) ≥ c·g(n) for all n ≥ n₀, so
f(n) = Ω(g(n)) = Ω(n³)
Example 2: Calculate the Big Ω where f(n) = 10n·log n + 5n⁴ + 7·2ⁿ
f(n) = 10n·log n + 5n⁴ + 7·2ⁿ
≥ 5n⁴ + 7·2ⁿ
≥ 5n⁴ for all n ≥ 1
f(n) ≥ c·g(n) where c = 5, g(n) = n⁴ and n₀ = 1
So f(n) = Ω(g(n)) = Ω(n⁴)
8.iii) Big Θ Notation (theta): Big-Theta (Θ) notation bounds a function f(n) from above and below to within constant factors, i.e. it gives a tight bound.
We write f(n) = Θ(g(n)) if there are positive constants n₀, c1 and c2 such that, to the right of n₀, f(n) always lies between c1·g(n) and c2·g(n) inclusive.
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n₀ such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n₀ }

Example: Calculate the Big Θ where f(n) = 3n³ + 2n + 7
f(n) = 3n³ + 2n + 7 ≤ 3n³ + 2n + 7n when n ≥ 1
= 3n³ + 9n
≤ 3n³ + 9n³ = 12n³
f(n) ≤ 12n³ where n ≥ 1
f(n) = 3n³ + 2n + 7 ≥ 3n³ + 2n ≥ 3n³ when n ≥ 0
f(n) ≥ 3n³ where n ≥ 1
So 0 ≤ 3n³ ≤ f(n) ≤ 12n³ for all n ≥ 1.
Here c1 = 3, c2 = 12, n₀ = 1 and g(n) = n³, so
f(n) = Θ(n³)

Two further notations are used in asymptotic analysis. Unlike their big counterparts, they require the bound to hold for every positive constant c, so they express strict (non-tight) bounds:
i) Little o notation: o(g(n)) = { f(n) : for every positive constant c there exists a constant n₀ > 0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n₀ }
ii) Little ω notation (omega): ω(g(n)) = { f(n) : for every positive constant c there exists a constant n₀ > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n₀ }
(There is no little-theta notation: a function cannot lie simultaneously strictly below and strictly above every constant multiple of g(n), so the tight bound is expressed by Θ itself.)
Typical Algorithm Complexities
O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(n⁴) < O(2ⁿ)
This table explains what every type of complexity (running time) means:

constant, O(1): It takes a constant number of steps to perform a given operation (for example 1, 5, 10 or some other number), and this count does not depend on the size of the input data.

logarithmic, O(log N): It takes on the order of log(N) steps, where the base of the logarithm is most often 2, to perform a given operation on N elements. For example, if N = 1,000,000, an algorithm with complexity O(log N) would do about 20 steps (to within a constant factor). Since the base of the logarithm is not of vital importance for the order of the operation count, it is usually omitted.

linear, O(N): It takes nearly the same number of steps as the number of elements to perform an operation on N elements. For example, with 1,000 elements it takes about 1,000 steps. Linear complexity means that the number of elements and the number of steps are linearly dependent; for example, the number of steps for N elements can be N/2 or 3·N.

linearithmic, O(N·log N): It takes N·log(N) steps to perform a given operation on N elements. For example, with 1,000 elements it will take about 10,000 steps.

quadratic, O(N²): It takes on the order of N² steps, where N is the size of the input data, to perform a given operation. For example, if N = 100, it takes about 10,000 steps. We have quadratic complexity whenever the number of steps is in a quadratic relation to the size of the input data; for example, for N elements the steps can be on the order of 3·N²/2.

cubic, O(N³): It takes on the order of N³ steps, where N is the size of the input data, to perform an operation on N elements. For example, with 100 elements it takes about 1,000,000 steps.

exponential, O(2ᴺ), O(N!), O(kᴺ) (k > 1), …: It takes a number of steps that grows exponentially with the size of the input data to perform an operation on N elements. For example, if N = 10, the exponential function 2ᴺ has a value of 1,024; if N = 20, it has a value of 1,048,576; and if N = 100, it has a value of a number with about 30 digits. The factorial function N! grows even faster: for N = 5 it has a value of 120, for N = 10 it has a value of 3,628,800, and for N = 20 it has a value of 2,432,902,008,176,640,000.
