DAA-Unit-I - Part A & B

Unit-I

Introduction:
Algorithm, Pseudo code for expressing algorithms,
Performance Analysis-Space complexity, Time
complexity,
Asymptotic Notation- Big oh notation, Omega notation,
Theta notation and Little oh notation,
Probabilistic analysis,
Amortized analysis.
Algorithm
• Definition
An algorithm is a finite sequence of unambiguous steps for solving a
specific problem.
Pseudo code for expressing algorithms
We present most of our algorithms using pseudo code that
looks like C and Pascal code.

1. Comments begin with // and continue until the end of the line.
2. Blocks are indicated with matching braces: { and }. A block may be
   i. a compound statement, or
   ii. the body of a function.
3. i. The data types of variables are not explicitly declared.
   ii. The types will be clear from the context.
   iii. Whether a variable is global or local to a function will also be
        clear from the context.
4. Assignment statement:
   <variable> := <expression>

5. Boolean values are true and false.
   Logical operators are and, or and not.
   Relational operators are <, ≤, =, ≠, ≥ and >.

6. Elements of multidimensional arrays are accessed using [ and ].
   For example, the (i,j)th element of the array A is denoted A[i,j].

7. Looping statements are for, while, and repeat-until.
   The general form of a while loop:
while( condition ) do
{
statement_1;
:
statement_n;
}
The general form of a for loop:
for variable := value1 to value2 step step do
{
statement_1;
:
statement_n;
}
– Here value1, value2, and step are arithmetic expressions.
– The clause “step step” is optional and taken as +1 if it does
not occur.
– step could be either positive or negative.

Ex: 1. for i := 1 to 10 step 2 do   // increment by 2, 5 iterations
    2. for i := 1 to 10 do          // increment by 1, 10 iterations
• The general form of a repeat-until loop:
repeat
<statement 1>
:
<statement n>
until ( condition )

• The statements are executed repeatedly as long as the condition is
  false; the loop stops once it becomes true.


Ex: sum:=0, number:=10;
repeat
sum := sum + number;
number := number - 1;
until number = 0;
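For comparison with a real language, a minimal C sketch of how the loop
forms above translate (the values are the ones used in the examples):

#include <stdio.h>

int main(void)
{
    /* for i := 1 to 10 step 2 do  --  increment by 2, 5 iterations */
    for (int i = 1; i <= 10; i += 2)
        printf("%d ", i);
    printf("\n");

    /* repeat ... until (number = 0)  --  a C do-while runs while its
       condition is TRUE, so the until-condition is negated */
    int sum = 0, number = 10;
    do {
        sum += number;
        number -= 1;
    } while (number != 0);
    printf("sum = %d\n", sum);   /* 10 + 9 + ... + 1 = 55 */
    return 0;
}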
8. Conditional statements:

   if <condition> then <statement>

   if <condition> then <statement 1> else <statement 2>

9. Input and output are done using the instructions read and
write.
10. A procedure or function starts with the word Algorithm.

General form :
Algorithm Name( <parameter list> )
{
body
}
where Name is the name of the procedure.
– Simple variables are passed to functions by value.
– Arrays and records are passed by reference.
Ex:-Algorithm that finds and returns the
maximum of n given numbers.
Algorithm max(a,n)
// a is an array of size n
{
result:=a[1];
for i:=2 to n do
if ( a[i] > result ) then
result:=a[i];

return result;
}
Ex:- Write an algorithm to find the sum of n numbers

Algorithm sum(a, n)
// a is an array of size n
{ sum:=0;
for i :=1 to n do
{
sum := sum+a[i];
}
return sum;
}
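A C translation of the two algorithms above, as a minimal sketch; note
that the pseudocode indexes arrays from 1 while C indexes from 0:

#include <stdio.h>

/* Returns the maximum of the n numbers in a[0..n-1]. */
int max(const int a[], int n)
{
    int result = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > result)
            result = a[i];
    return result;
}

/* Returns the sum of the n numbers in a[0..n-1]. */
int sum(const int a[], int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

int main(void)
{
    int a[] = {2, 5, 7, 8, 9};
    printf("max = %d, sum = %d\n", max(a, 5), sum(a, 5)); /* max = 9, sum = 31 */
    return 0;
}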
Space Complexity
• The space needed by an algorithm is called its space complexity.
• It has the following components.
– Instruction space / Program space
• It does not depend on the number of inputs. (Instance characteristics
of a problem ).
– Data space
• Constants & simple variables – Don’t depend on the number of
inputs.
• Dynamically allocated objects - Depends on the number of inputs.
• Stack space - Generally does not depend on the number of inputs
unless recursive functions are in use.
Recursive algorithm

Algorithm rfactorial(n)
// n is an integer
{
  fact := 1;
  if (n = 1 or n = 0) then return fact;
  else
    fact := n * rfactorial(n-1);
  return fact;
}
Note : Each time the recursive function is called, the current values
of n, fact and the address of the statement to return to on
completion are saved on the stack.
Execution of R := rfactorial(3)

void main()
{
  int R;
  R = rfactorial(3);
  printf("%d", R);
}

Each call to rfactorial saves a frame on the stack holding the location
to return to, fact, and n:

  frame at 4000:  n = 1, fact = 1          (returns 1)
  frame at 3000:  n = 2, fact = 2 * ...    (waiting on rfactorial(1))
  frame at 2000:  n = 3, fact = 3 * ...    (waiting on rfactorial(2); returns to main, R)

Unwinding the stack: rfactorial(1) returns 1, so the frame for n = 2
computes fact = 2 * 1 = 2 and returns 2; the frame for n = 3 then
computes fact = 3 * 2 = 6 and returns 6, which main stores in R.
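A runnable C version of the trace above, as a minimal sketch (the
untyped pseudocode variables become ints here):

#include <stdio.h>

/* Each recursive call pushes a frame holding n, fact and the
   return address, exactly as in the stack picture above. */
int rfactorial(int n)
{
    int fact = 1;
    if (n == 0 || n == 1)
        return fact;
    fact = n * rfactorial(n - 1);
    return fact;
}

int main(void)
{
    int R = rfactorial(3);
    printf("%d\n", R);   /* prints 6 */
    return 0;
}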
Contd..
Therefore, we can divide the total space required by a program into two
parts:
i) Fixed Space Requirements (C)
Independent of the characteristics of the problem instance ( I )
• Instruction space
• Space for simple variables and constants.
ii) Variable Space Requirements (SP(I))
depend on the characteristics of the problem instance ( I )
• Dynamically allocated objects (Number of inputs associated with
I)
• Recursive stack space ( formal parameters, local variables,
return address ).
– Therefore, the space requirement of any program P can be written as
  S(P) = C + Sp(instance characteristics).
Note:
– We concentrate only on estimating Sp(instance characteristics).
– We do not concentrate on estimating the fixed part C.
– We need to identify the instance characteristics of the problem to
  measure Sp.
Example1
Algorithm abc(a,b,c)
{
return a+b+b*c+(a+b-c)/(a+b)+4.0;
}
• A problem instance is characterized by the specific values of a, b,
  and c.
• If we assume one word (4 bytes) is adequate to store the values of
  each of a, b, and c, then the space required is independent of the
  instance characteristics.
  Therefore, Sabc(instance characteristics) = 0.
Example2
Algorithm sum(a,n)
{
s:=0;
for i:=1 to n do
s:=s+a[i];
return s;
}
• A problem instance is characterized by n.
• The amount of space needed for the array a depends on the value of n.

  Therefore, Ssum(n) = n.
Example3
Algorithm RSum(a,n)
{
if(n ≤ 0) then return 0;
else return RSum(a,n-1)+a[n];
}

Type                               Name   Number of bytes
formal parameter: int              a      2
formal parameter: int              n      2
return address (used internally)          2
Total per recursive call                  6

The depth of recursion is n+1 (calls with n, n-1, ..., 0),
therefore SRSum(n) = 6(n+1).
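For illustration, a C sketch of RSum, assuming 1-based data in a[1..n]
(the array is allocated with n+1 slots); on most modern machines ints
take 4 bytes rather than the 2 assumed above, so the per-frame constant
differs, but the space still grows linearly with the recursion depth:

/* Recursive sum of a[1..n]. Each active call keeps its own copy of
   a, n and a return address on the stack, so the maximum stack
   space is (bytes per frame) x (depth of recursion). */
int RSum(const int a[], int n)
{
    if (n <= 0)
        return 0;
    return RSum(a, n - 1) + a[n];
}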


Contd..
• Time Complexity:
    T(P) = C + Tp(I)
  – The time complexity T(P) is the sum of its compile time C and its
    run time Tp(I).
  – Compile time does not depend on the instance characteristics
    (number of inputs).
  – We will concentrate on estimating the run time Tp(I).
Contd..
Program step
  A program step is a program statement whose execution time is
  independent of the number of inputs.

Example:
  c := a + b;
  sum := sum + a[i];
• Comments count as zero steps.

• An assignment statement that does not involve any calls to other
  functions counts as one step.
• For loops such as for, while, and repeat-until, we consider the step
  counts only for the control part of the statement.
• The control parts of for and while statements have the following
  forms:
    for i := <expr1> to <expr2> do
    while ( <expr> ) do
• Each execution of the control part of a while statement counts as one
  step, unless <expr> is a function of the instance characteristics.
• Similarly, each execution of the control part of a for statement
  counts as one step, unless <expr1> and <expr2> are functions of the
  instance characteristics.
• Each execution of the condition of a conditional statement counts as
  one step, unless the condition is a function of the instance
  characteristics.
• If any statement (assignment statement, control part, condition,
  etc.) involves function calls, then the step count is equal to the
  number of steps assignable to the function plus one.
• Method to compute the step count: the tabular method.
  • Determine the number of steps contributed by each statement:
    steps per execution (s/e) × frequency (number of times the
    statement is executed).
  • Add up the contributions of all statements.
• Ex:- Iterative sum of n numbers

  Statement               s/e   frequency   Total steps (s/e × frequency)
  Algorithm sum(a, n)      0       --        0
  {                        0       --        0
    s := 0;                1        1        1
    for i := 1 to n do     1       n+1       n+1
      s := s + a[i];       1        n        n
    return s;              1        1        1
  }                        0       --        0
  Total                                      2n+3
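As a sanity check, a small instrumented C sketch (the count variable is
a hypothetical step counter added for illustration, not part of the
algorithm) reproduces the 2n+3 total:

#include <stdio.h>

/* Counts steps exactly as in the table: 1 for s := 0, n+1 executions
   of the for-control, n executions of the body, 1 for return. */
int steps_of_sum(const int a[], int n)
{
    int count = 0;
    int s = 0;      count++;              /* s := 0          : 1 step  */
    for (int i = 0; i < n; i++) {
        count++;                          /* for-control     : n steps */
        s += a[i];  count++;              /* s := s + a[i]   : n steps */
    }
    count++;                              /* final loop test : 1 step  */
    count++;                              /* return s        : 1 step  */
    return count;
}

int main(void)
{
    int a[10] = {0};
    printf("%d\n", steps_of_sum(a, 10));  /* 2*10 + 3 = 23 */
    return 0;
}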
• Ex:- Addition of two m×n matrices

  Statement                         s/e   frequency   Total steps (s/e × frequency)
  Algorithm Add(a, b, c, m, n)       0       --        0
  {                                  0       --        0
    for i := 1 to m do               1       m+1       m+1
      for j := 1 to n do             1     m(n+1)      mn+m
        c[i,j] := a[i,j] + b[i,j];   1       mn        mn
  }                                  0       --        0
  Total                                                2mn+2m+1
Best, Worst, Average Cases
1. Best-Case:-
Minimum number of steps taken by the algorithm
for the given inputs.
2. Worst-Case:-
Maximum number of steps taken by the algorithm
for the given inputs.
3. Average-Case:-
   Average number of steps taken by the algorithm.
Best, Worst, Average Cases
• The number of steps taken by the algorithm depends on the input.
Ex: Linear search.

Algorithm sequentialSrch(a, n, key)
{
  for i := 1 to n do
  {
    if ( a[i] = key ) then return i;
  }
}

Ex: Let us take a[] := { 2, 5, 7, 8, 9 }
  Best case:  key := 2
  Worst case: key := 9
  Average case: add up the total number of steps for key := 2, 5, 7, 8
  and 9, then divide the sum of the total number of steps by the number
  of elements.
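A C sketch of the sequential search with an illustrative comparison
counter, to make the three cases concrete (the names here are
assumptions, not from the original slides):

#include <stdio.h>

/* Returns the index of key in a[0..n-1], or -1 if absent;
   *cmps records how many elements were examined. */
int sequential_search(const int a[], int n, int key, int *cmps)
{
    *cmps = 0;
    for (int i = 0; i < n; i++) {
        (*cmps)++;
        if (a[i] == key)
            return i;
    }
    return -1;
}

int main(void)
{
    int a[] = {2, 5, 7, 8, 9}, n = 5, cmps, total = 0;
    for (int k = 0; k < n; k++) {             /* search for each element once */
        sequential_search(a, n, a[k], &cmps); /* best: 1 (key=2), worst: 5 (key=9) */
        total += cmps;
    }
    printf("average comparisons = %.1f\n", (double)total / n); /* (1+2+3+4+5)/5 = 3.0 */
    return 0;
}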
Asymptotic efficiency
• Asymptotic efficiency means studying the time complexity of
  algorithms for large inputs.
• To compare two algorithms with time complexities f(n) and g(n), we
  need a rough measure that tells how fast each function grows as n
  grows.
Rate of Growth
• Ex:- f(n) = n² + 100n + log₁₀n + 1000

  n        f(n)          n² (value / %)       100n (value / %)    log₁₀n (value / %)   1000 (value / %)
  1        1,101         1 / 0.1              100 / 9.1           0 / 0.0              1,000 / 90.83
  10       2,101         100 / 4.76           1,000 / 47.6        1 / 0.05             1,000 / 47.60
  100      21,002        10,000 / 47.6        10,000 / 47.6       2 / 0.001            1,000 / 4.76
  1,000    1,101,003     1,000,000 / 90.8     100,000 / 9.1       3 / 0.0003           1,000 / 0.09
  10,000   101,001,004   100,000,000 / 99.0   1,000,000 / 0.99    4 / 0.0              1,000 / 0.001
• The low-order terms and constants in a function are relatively
  insignificant for large n:
    n² + 100n + log₁₀n + 1000 ~ n²
  i.e., we say that n² + 100n + log₁₀n + 1000 and n² have the same rate
  of growth.

• Some more examples
  • n⁴ + 100n² + 10n + 50 is ~ n⁴
  • 10n³ + 2n² is ~ n³
  • n³ - n² is ~ n³
  • constants
    – 10 is ~ 1
    – 1273 is ~ 1
Asymptotic Notations
• Asymptotic notation describes the behavior of functions for large
  inputs.
• Big oh (O) notation:
  – The big oh notation specifies an upper bound for the growth rate of
    the function f.

Definition: [Big "oh"]
  – f(n) = O(g(n)) (read as "f of n is big oh of g of n") iff there
    exist positive constants c and n₀ such that
    f(n) ≤ c·g(n) for all n, n ≥ n₀.

• The definition states that the function f(n) is at most c times the
  function g(n) except when n is smaller than n₀.
• In other words, f(n) "grows slower than or at the same rate as" g(n).
• When providing an upper-bound function g for f, we normally use a
  single term in n.
• Examples
  – f(n) = 3n+2
    • 3n + 2 ≤ 4n for all n ≥ 2, ∴ 3n + 2 = O(n)
  – f(n) = 10n²+4n+2
    • 10n²+4n+2 ≤ 11n² for all n ≥ 5, ∴ 10n²+4n+2 = O(n²)
  – f(n) = 6·2ⁿ+n² = O(2ⁿ)   /* 6·2ⁿ+n² ≤ 7·2ⁿ for n ≥ 4 */

• It is also possible to write 10n²+4n+2 = O(n³), since
  10n²+4n+2 ≤ 7n³ for n ≥ 2.
• Although n³ is an upper bound for 10n²+4n+2, it is not a tight upper
  bound; we can find a smaller function (n²) that satisfies the big oh
  relation.
• But we cannot write 10n²+4n+2 = O(n), since it does not satisfy the
  big oh relation for sufficiently large inputs.

Note: We always consider the tight upper bound.
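The witness constants c and n₀ in these examples can be sanity-checked
numerically over a finite range; a minimal C sketch (the algebra,
4n+2 ≤ n² for n ≥ 5, guarantees the bound beyond the tested range):

#include <stdio.h>

int main(void)
{
    /* check 10n^2 + 4n + 2 <= 11n^2 for 5 <= n <= 100000 */
    for (long long n = 5; n <= 100000; n++) {
        long long f = 10*n*n + 4*n + 2;
        if (f > 11*n*n) {
            printf("bound fails at n = %lld\n", n);
            return 1;
        }
    }
    printf("10n^2 + 4n + 2 <= 11n^2 holds on the range tested\n");
    return 0;
}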


• Omega (Ω) notation:
  – The omega notation specifies a lower bound for the growth rate of
    the function f.

Definition: [Omega]
  – f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there
    exist positive constants c and n₀ such that
    f(n) ≥ c·g(n) for all n, n ≥ n₀.

• The definition states that the function f(n) is at least c times the
  function g(n) except when n is smaller than n₀.
• In other words, f(n) "grows faster than or at the same rate as" g(n).

• Examples
  – f(n) = 3n+2
    • 3n + 2 ≥ 3n for all n ≥ 1, ∴ 3n + 2 = Ω(n)
  – f(n) = 10n²+4n+2
    • 10n²+4n+2 ≥ n² for all n ≥ 1, ∴ 10n²+4n+2 = Ω(n²)

• It is also possible to write 10n²+4n+2 = Ω(n), since 10n²+4n+2 ≥ n
  for n ≥ 0.
• Although n is a lower bound for 10n²+4n+2, it is not a tight lower
  bound; we can find a larger function (n²) that satisfies the omega
  relation.
• But we cannot write 10n²+4n+2 = Ω(n³), since it does not satisfy the
  omega relation for sufficiently large inputs.

Note: We always consider the tight lower bound.


• Theta (Θ) notation:
  – The theta notation specifies tight upper and lower bounds for the
    growth rate of the function f.

Definition: [Theta]
  – f(n) = Θ(g(n)) (read as "f of n is theta of g of n") iff there
    exist positive constants c₁, c₂, and n₀ such that
    c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n, n ≥ n₀.

• The definition states that the function f(n) lies between c₁ times
  the function g(n) and c₂ times the function g(n) except when n is
  smaller than n₀.
• In other words, f(n) "grows at the same rate as" g(n).

• Examples:-
  – f(n) = 3n+2
    • 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, ∴ 3n + 2 = Θ(n)
  – f(n) = 10n²+4n+2
    • n² ≤ 10n²+4n+2 ≤ 11n² for all n ≥ 5, ∴ 10n²+4n+2 = Θ(n²)

• But we cannot write either 10n²+4n+2 = Θ(n) or 10n²+4n+2 = Θ(n³),
  since neither of these satisfies the theta relation.
• Little oh (o) notation:
  – The little oh notation specifies a strict upper bound for the
    growth rate of the function f.

Definition: [Little "oh"]
  – f(n) = o(g(n)) (read as "f of n is little oh of g of n") iff
      lim (n→∞) f(n)/g(n) = 0

• The definition states that, for every positive constant c, the
  function f(n) is less than c times the function g(n) for sufficiently
  large n.
• In other words, f(n) "grows strictly slower than" g(n).
• Examples
  – f(n) = 3n+2 = o(n²), since lim (n→∞) (3n+2)/n² = 0
  – However, 3n+2 ≠ o(n), since lim (n→∞) (3n+2)/n = 3 ≠ 0


Big-Oh, Theta, Omega and Little-oh

Tips:
• Think of O(g(n)) as "less than or equal to" g(n)
  – Upper bound: "grows slower than or at the same rate as" g(n)
• Think of Ω(g(n)) as "greater than or equal to" g(n)
  – Lower bound: "grows faster than or at the same rate as" g(n)
• Think of Θ(g(n)) as "equal to" g(n)
  – "Tight" bound: same growth rate
• Think of o(g(n)) as "strictly less than" g(n)
  – Strict upper bound: "grows strictly slower than" g(n)
• (All of the above hold for large n.)


Functions ordered by growth rate

Function   Name
1          Growth is constant
log n      Growth is logarithmic
n          Growth is linear
n log n    Growth is n-log-n
n²         Growth is quadratic
n³         Growth is cubic
2ⁿ         Growth is exponential
n!         Growth is factorial

1 < log n < n < n log n < n² < n³ < 2ⁿ < n!


– To get a feel for how the various functions grow with n, you are
  advised to study the following figures:
  [Figure: growth rates of the common complexity functions plotted
  against n.]
• [Figure: the time needed by a computer executing one billion
  instructions per second to run a program that performs f(n)
  instructions.]
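Since the figures are not reproduced here, a short C sketch (compile
with -lm) that tabulates these functions for a few values of n;
dividing each entry by 10⁹ gives the running time in seconds on the
one-billion-instructions-per-second machine described above:

#include <stdio.h>
#include <math.h>

int main(void)
{
    printf("%8s %10s %12s %14s %16s %22s\n",
           "n", "log n", "n log n", "n^2", "n^3", "2^n");
    int ns[] = {10, 20, 30, 40, 50, 60};
    for (int k = 0; k < 6; k++) {
        double n = ns[k];
        printf("%8.0f %10.2f %12.1f %14.0f %16.0f %22.0f\n",
               n, log2(n), n * log2(n), n*n, n*n*n, pow(2, n));
    }
    return 0;
}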
