
Unit-1

 Design and analysis of algorithms includes designing (or developing) algorithms and analyzing algorithms.

 An algorithm is a step-by-step procedure for solving a problem. It contains a sequence of steps which indicate how to solve a problem.

 The basic reason for writing algorithms is to implement programs easily. To design or develop algorithms, the following methods are used

1. Divide and Conquer


2. Greedy
3. Dynamic Programming
4. Backtracking
5. Branch and Bound

 Analysis of algorithms is measuring the performance of algorithms in terms of space complexity and time complexity, and making decisions based on those measures.

Consider the sorting problem as an example. For sorting a list of values, a number of techniques exist:

1. Bubble sort
2. Selection sort
3. Insertion sort
4. Quick sort
5. Merge sort
6. Heap sort
7. Radix sort

 The time and space complexities of all these algorithms are calculated in order to decide the best method for sorting a list of values.

 If there are a number of methods to solve a problem and we need to identify the best method for solving it, then performance analysis is used.

I. Criteria (or) characteristics (or) properties of an algorithm

The following are characteristics of an algorithm

- Input
- Output
- Definiteness
- Finiteness
- Effectiveness
1. Input
There are zero (or) more inputs to an algorithm.
2. Output
Every algorithm or its equivalent program generates one or more outputs.
3. Definiteness
Each step of the algorithm should be clear and unambiguous.
Ex: "add 5 or 6 to 7" is an ambiguous statement.
4. Finiteness
The algorithm should terminate after a finite number of steps. The algorithm should not
enter into an infinite loop.
5. Effectiveness
Each step of the algorithm should be such that it can be easily converted to an equivalent statement of the program.

Specification of algorithm
Two methods are generally used to specify an algorithm

1. Flow chart
2. Pseudo code
 In flow chart representation, the steps of the algorithm are represented using graphical notations. Flow chart representation is effective when the algorithm is simple and small.
 If the algorithm is large and complex, then pseudo code is used to represent the algorithm.

2. Pseudo code representation of algorithms


The syntax rules for specifying an algorithm in pseudo code are as follows

Delimiter: ; is used as the delimiter of statements.

Comments:
// notation is used to indicate comments.

Block of statements:
{} are used to indicate block of statements.

Variables:
Any variable name should start with a letter. No need to specify data type and scope for the
variables. Variables can be used at any place in the algorithm without declaring them.

Operators:
Relational operators: <, ≤, >, ≥, =, ≠
Logical operators: and, or, not
Assignment operator: :=
The symbols for the remaining operators are the same as in the C language.

Arrays:
Single dimensional arrays are used with the notation arrayname[index].
Multi-dimensional arrays are used with the notation arrayname[index of first dimension, index of second dimension, ...].

Ex: a[i]
a[i,j]
a[i,j,k]

Conditional Statements:
The conditional statements if and case are used in pseudo code.

if:
if statement is used to check one condition. The syntax of if statement is

if condition then
{
Block of statements
}

if condition then
{
Block of statements
}
else
{
Block of statements
}
case:
case statement is similar to the switch statement in C. It is used to check a number of conditions. The syntax of case statement is

case
{
:condition1: statements
:condition2: statements
:condition3: statements
.
.
:conditionn: statements
}
The conditions are checked one after another, and when any condition becomes true, the corresponding statements are executed and control comes out of the case statement.

Loop statements:

while
The syntax of while statement is

while condition do
{
Block of statements
}

repeat
The syntax of repeat statement is

repeat
Block of statements
until condition

for
The syntax of for statement is

for variable := value1 to value2 step step do


{
Block of statements
}

variable is any variable name. value1 is the starting value of variable and value2 is its ending value.
step is either a +ve or a -ve value. After each iteration, the value of variable is incremented by the step value if it is +ve, or decremented by the step value if it is -ve.
step is optional; its default value is +1.

Input & Output:


'read' statement is used to read input. 'write' statement is used to display output.

Heading of the algorithm:


Each algorithm should start with the heading

Algorithm name(list of parameters)


{
}

'name' is a user-defined name and the parameter list is optional. No need to specify data types for the parameters.

Ex1: Write an algorithm in pseudo code format to calculate sum of values in an array
Algorithm sum(a, n)
//a is an array containing list of values
//n is size of the array
{
sum := 0;
for i := 1 to n do
{
sum := sum + a[i];
}
write sum;
}
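
Since the pseudo code operators mirror C, the sum algorithm above translates almost line for line. A minimal C sketch (note the shift to C's 0-based array indexing; the names are ours):

#include <stdio.h>

/* C version of Algorithm sum(a, n): adds the n values stored in array a */
int sum(int a[], int n) {
    int s = 0;
    for (int i = 0; i < n; i++)   /* pseudo code's "for i := 1 to n do", 0-based here */
        s = s + a[i];
    return s;
}

int main(void) {
    int a[] = {10, 20, 30};
    printf("%d\n", sum(a, 3));    /* prints 60 */
    return 0;
}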

Ex2: Write an algorithm in pseudo code format to find maximum value in a list of values

Algorithm max(a, n)
//a is an array and n is size of the array
{
max := a[1];
for i := 2 to n do
{
if a[i] > max then
max := a[i];
}
write max;
}
Ex3: Write an algorithm in pseudo code format to check whether a number is Armstrong number
or not
Algorithm Armstrong(n)
// n is a positive integer
{
sum := 0;
m := n;
while n > 0 do
{
r := n % 10;
sum := sum + (r * r * r);
n := n / 10;
}
if sum = m then
write “the given number is Armstrong”;
else
write “the given number is not Armstrong”;
}

Ex4: Write an algorithm to check whether the given number is strong number or not

A number is said to be a strong number if the sum of the factorials of the digits of the given number is the same as the number itself.
Algorithm strong(n)
// n is a positive integer
{
sum := 0;
m := n;
while n > 0 do
{
r := n % 10;
f := 1;
for i := 1 to r do
{
f := f * i;
}
sum := sum + f;
n := n / 10;
}
if sum = m then
write "the given number is strong";
else
write "the given number is not strong";
}
3. Recursive algorithms
 An algorithm which calls itself is said to be recursive algorithm.
 Recursive algorithms are used to solve complex problems in an easy manner.
 Recursion can be used as replacement of loops.
 There are two types of recursive algorithms: 1) direct recursive algorithms, 2) indirect
recursive algorithms.

Direct recursive algorithm


An algorithm which calls itself is direct recursive algorithm.

Ex: Algorithm A()


{
.
.
.
A();
.
.
}

Indirect recursive algorithm


If A and B are two algorithms, and algorithm A calls algorithm B and algorithm B calls algorithm A, then A and B are called indirect recursive algorithms.

Ex: Algorithm A()


{
.
.
.
B();
.
.
}

Algorithm B()
{
.
.
A();
.
.
}

Ex: Write a recursive algorithm to find factorial of a number

Algorithm factorial(n)
//n is a positive integer
{
if n = 1 then
return 1;
else
return n * factorial(n-1);
}

Ex: Write an algorithm to find factorial of a number

Algorithm factorial(n)
//n is a positive integer
{
f := 1;
for i := 1 to n step 1 do
f := f * i;
write f;
}

Ex: Write a recursive algorithm to calculate sum of values in an array

Algorithm rsum(a, n)
// a is an array containing list of values and n is size of array
{
if n = 0 then
return 0;
else
return a[n] + rsum(a, n-1);
}
Ex: Write a recursive algorithm to calculate sum of digits in a number
Algorithm rsdigits(n)
// n is a positive number
{
if n = 0 then
return 0;

else
return n % 10 + rsdigits(n / 10);
}
4. Performance analysis
Performance of any algorithm is measured in terms of space and time complexity.
1. Space complexity of an algorithm indicates the memory requirement of the algorithm.
2. Time complexity of an algorithm indicates the total CPU time required to execute the
algorithm.
4.1 Space complexity for an algorithm
 Space complexity of an algorithm is sum of space required for fixed part of algorithm and
space required for variable part of algorithm.
 Under fixed part, the space for the following is considered
1) Code of algorithm
2) Simple variables or local variables
3) Defined constants
 Under variable part, the space for the following is considered
1) Variables whose size varies from one instance of the problem to another instance (arrays,
structures and so on)
2) Global or referenced variables
3) Recursion stack
 Recursion stack space is considered only for recursive algorithms. For each call of recursive
algorithm, the following information is stored in recursion stack
1) Values of formal parameters
2) Values of local variables
3) Return value

Ex1: Calculate space complexity of the following algorithm

Algorithm Add(a, b)
{
c := a+b;
write c;
}

Space complexity=space for fixed part + space for variable part

Space for fixed part:


Space for code=c words
Space for simple variables=3 (a, b, c) words
Space for defined constants=0 words
Space for variable part:
Space for arrays=0 words
Space for global variables=0 words
Space for recursion stack=0 words

Space complexity=c+3 +0+0+0+0=(c+3) words

Ex2: Calculate space complexity of the following algorithm

Algorithm Sum(a, n)
{
sum := 0;
for i := 1 to n do
sum := sum + a[i];
write sum;
}

Space for fixed part:


Space for code=c words
Space for simple variables=3 (n, sum, i) words
Space for defined constants=0 words

Space for variable part:


Space for arrays=n (a) words
Space for global variables=0 words
Space for recursion stack=0 words

Space complexity=c+3+0+n+0+0=(c+n+3) words

Ex3: Calculate space complexity for the following algorithm

Algorithm Armstrong(n)
// n is a positive integer
{
sum := 0;
m := n;
while n > 0 do
{
r := n % 10;
sum := sum + (r * r * r);
n := n / 10;
}
if sum = m then
write “the given number is Armstrong”;
else
write “the given number is not Armstrong”;
}
Space for fixed part:
Space for code=c words
Space for simple variables=4 (n, sum, m, r) words
Space for defined constants=0 words

Space for variable part:


Space for arrays=0 words
Space for global variables=0 words
Space for recursion stack=0 words

Space complexity=c+4+0+0+0+0=(c+4) words


Ex4: calculate space complexity for the following algorithm

Algorithm MatAdd(a, b, m, n)
// a, b are matrices of size mxn
{
for i := 1 to m do
{
for j := 1 to n do
{
c[i, j] := a[i, j] + b[i, j];
write c[i, j];
}
}
}

Space for fixed part:


Space for code=c words
Space for simple variables=4 (m, n, i, j) words
Space for defined constants=0 words

Space for variable part:


Space for arrays=3mn (a, b, c) words
Space for global variables=0 words
Space for recursion stack=0 words

Space complexity=c+4+0+3mn+0+0=(c+3mn+4) words

Ex5: calculate space complexity for the following algorithm

Algorithm MatMul(a, b, m, n)
// a, b are matrices of size mxn (m = n is assumed so that the product a*b is defined)
{
for i := 1 to m do
{
for j := 1 to n do
{
c[i, j] := 0;
for k := 1 to m do
{
c[i, j] := c[i, j] + a[i, k] * b[k, j];
}
}
write c[i, j];
}
}

Space for fixed part:


Space for code=c words
Space for simple variables=5 (m, n, i, j, k) words
Space for defined constants=0 words
Space for variable part:
Space for arrays=3mn (a, b, c) words
Space for global variables=0 words
Space for recursion stack=0 words

Space complexity=c+5+0+3mn+0+0=(c+3mn+5) words


Ex6: calculate space complexity for the following recursive algorithm

Algorithm factorial(n)
// n is a positive integer
{
if n = 1 then
return 1;
else
return n*factorial(n-1);
}

Space for fixed part:
Space for code=c words
Space for simple variables=1 (n) word
Space for defined constants=0 words

Space for variable part:
Space for arrays=0 words
Space for global variables=0 words
Space for recursion stack=2n words

For each call of the factorial algorithm, two values are stored in the recursion stack (the formal parameter n and the return value). The factorial algorithm is called n times, so the total space required by the recursion stack is 2n words.

Space complexity=c+1+0+0+0+2n=(c+2n+1) words

Ex7: calculate space complexity for the following recursive algorithm

Algorithm Rsum(a, n)
// a is an array of size n
{
if n = 0 then
return 0;
else
return a[n] + Rsum(a, n-1);
}

Space for fixed part:


Space for code=c words
Space for simple variables=1 (n) word
Space for defined constants=0 words

Space for variable part:


Space for arrays=n (a) words
Space for global variables=0 words
Space for recursion stack=3(n+1) words

For each call of the algorithm, three values are stored in the recursion stack (the formal parameters n and the starting address of the array, and the return value). The algorithm is called n+1 times. The total space required by the recursion stack is 3(n+1) words.

Space complexity = c+1+0+n+0+(n+1)3=(c+4n+4) words


4.2 Time complexity
Time complexity of an algorithm is the total time required for completing the execution of the
algorithm. Two methods are used to calculate time complexity of the algorithm

1) Step count
2) Frequency count

4.2.1 Step count method

In this method, a global variable called count with initial value 0 is used.
The value of count variable is incremented by 1 after each executable statement in the algorithm.
At the end of algorithm, the value of count variable indicates the time complexity of the
algorithm.
The computer executes a program in steps, and each step has a time cost associated with it. This means that a step can be done in a finite amount of time.

Program Step            Step Value    Description
Comments                0             Comments are not executed.
Assignments             1             Can be done in constant time.
Arithmetic operations   1             Can be done in constant time.
Loops                   count steps   If a loop runs n times, its control statement is counted as n+1 steps; any statement inside the loop is counted as n steps.
Flow control            1             Only the branch that is actually executed is counted.
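
As an illustration of the step count method, the following C sketch (ours, not part of the original notes) instruments the sum algorithm with a global count variable; after the call, count holds 2n+3, matching the analysis in Ex1 below:

#include <stdio.h>

int count = 0;                            /* global step counter, initially 0 */

int sum(int a[], int n) {
    int s = 0;    count++;                /* sum := 0          : 1 step  */
    for (int i = 0; i < n; i++) {
        count++;                          /* loop test (true)  : n steps */
        s = s + a[i];  count++;           /* body assignment   : n steps */
    }
    count++;                              /* loop test (false) : 1 step  */
    count++;                              /* write/return      : 1 step  */
    return s;
}

int main(void) {
    int a[] = {1, 2, 3, 4};
    printf("sum=%d steps=%d\n", sum(a, 4), count);   /* steps = 2*4+3 = 11 */
    return 0;
}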

Ex1: calculate time complexity of the algorithm

Algorithm sum(a, n)
// a is an array of size n
{
sum := 0; 1
for i := 1 to n do n+1
sum := sum + a[i]; n
write sum; 1
}

Time complexity=1+n+1+n+1=2n+3

Ex2: calculate time complexity of the algorithm


Algorithm Max(a, n)
// a is an array of size n
{
max := a[1]; 1
for i := 2 to n do n
{
if max < a[i] then n-1
max := a[i]; n-1
}
write max; 1
}

Time complexity=1+n+(n-1)+(n-1)+1=3n (worst case, when the assignment inside the loop executes in every iteration)

Ex3: calculate time complexity for the algorithm

Algorithm MatAdd(a, b, m, n)
// a, b are matrices of size mxn
{
for i := 1 to m do m+1
{
for j := 1 to n do m(n+1)
{
c[i, j] := a[i, j] + b[i, j]; mn
write c[i, j]; mn
}
}
}

Time complexity=m+1+m(n+1)+mn+mn=3mn+2m+1

Ex4: calculate time complexity for the algorithm

Algorithm MatMul(a, b, m, n)
// a, b are matrices of size mxn (m = n is assumed so that the product a*b is defined)
{
for i := 1 to m do m+1
{
for j := 1 to n do m(n+1)
{
c[i, j] := 0; mn
for k := 1 to m do mn(m+1)
{
c[i, j] := c[i, j] + a[i, k] * b[k, j]; mn(m)
}
}
write c[i, j]; mn
}
}

Time complexity=m+1+m(n+1)+mn+mn(m+1)+mn(m)+mn=2m²n+4mn+2m+1

Ex5: Calculate time complexity for the following algorithm

Algorithm Armstrong(n)
// n is a positive integer
{
sum := 0; 1
m := n; 1
while n > 0 do k+1
{
r := n % 10; k
sum := sum + (r * r * r); k
n := n / 10; k
}
if sum = m then 1
write "the given number is Armstrong"; 1
else
write "the given number is not Armstrong";
}
Time complexity=1+1+(k+1)+k+k+k+1+1=4k+5
where k is the number of digits in n. (Only one of the two write statements executes, so the if statement and one write together contribute 2 steps.)

Ex6: calculate time complexity of the algorithm

Algorithm factorial(n)
// n is a positive integer
{
if n = 1 then 1
return 1; 1
else 1
return n*factorial(n-1); 1
}

Time complexity:
Case1: when n=1
In this case, if and return statements are executed and the algorithm terminates. So, the time
complexity is
T(1)=2

Case2: when n>1


In this case, else and return statements are executed and the algorithm is called with (n-1). So,
the time complexity is
T(n)=2+T(n-1)

Solving the above equation


T(n)=2+T(n-1)
=2+2+T(n-2)
=2+2+2+T(n-3)
.
.
After (n-1) times
=2+2+2+2+……+T(1)
=2+2+2+2+…… n times
=2n

T(n)=2n

Ex7: Calculate time complexity for the following recursive algorithm


Algorithm Rsum(a, n)
// a is an array containing n number of values
{
if n=0 then 1
return 0; 1
else 1
return a[n]+Rsum(a,n-1); 1
}

Time complexity
Case1: when n=0
In this case, the if and return statements are executed and the algorithm terminates. So, the time complexity is
T(0)=2

Case2: when n>0


In this case, else and return statements are executed and the algorithm is called with (n-1). So,
the time complexity is
T(n)=2+T(n-1)

Solving the above equation


T(n)=2+T(n-1)
=2+2+T(n-2)
=2+2+2+T(n-3)
.
.
After n times
=2+2+2+2+……+T(0)
=2+2+2+2+…… (n+1) times
=2(n+1)

T(n)=2(n+1)


Ex8: Write recursive algorithm for Towers of Hanoi. Calculate space and time complexity.

Algorithm TOH(n, A, B, C)
// n is number of disks
// A, B, C are towers. A is source and C is destination
{
if n>0 then
{
TOH(n-1, A, C, B);
Move nth disk from tower A to tower C;
TOH(n-1, B, A, C);
}
}

Space for fixed part:


Space for code=c words
Space for simple variables=4 (n, A, B, C) words
Space for defined constants=0 words

Space for variable part:


Space for arrays=0
Space for global variables=0 words
Space for recursion stack=4(2^n - 1) words

This algorithm is called (2^n - 1) times. The recursive calls of the algorithm for n=3 are shown below. For each call of the algorithm, the values of the formal parameters (n, starting address of A, starting address of B and starting address of C) are stored in the recursion stack. These formal parameters require 4 words of memory. So, the total space required by the recursion stack is 4(2^n - 1) words.

                          TOH(n=3)
                         /        \
                 TOH(n=2)          TOH(n=2)
                 /     \           /     \
          TOH(n=1) TOH(n=1) TOH(n=1) TOH(n=1)

Space complexity = c+4+0+0+0+4(2^n - 1) = c+4+4(2^n - 1) words


Time Complexity:
Case1: when n=0
In this case, only if statement is executed. The time complexity is
T(0)=1

Case2: when n>0


In this case, time complexity is
T(n)=1+T(n-1)+1+T(n-1)
T(n)=2+2T(n-1)
=2+2[2+2T(n-2)]=2+2²+2²T(n-2)
=2+2²+2²[2+2T(n-3)]=2+2²+2³+2³T(n-3)
.
.
After n times
T(n)=2+2²+2³+….+2^n+2^n T(0)
=2(2^n - 1)+2^n
=3*2^n - 2
T(n)=O(2^n)
Ex9: Write recursive algorithm for displaying Fibonacci numbers. Calculate space and
time complexity.

Algorithm Fibonacci(n, a, b)
// n is number of Fibonacci numbers
// a, b are previous two Fibonacci numbers
{
if n>0 then
{
c:=a+b;
write c;
a := b;
b := c;
Fibonacci(n-1, a, b);
}
}
Space complexity:
Space for fixed part:
Space for code=c words
Space for simple variables=4 (n, a, b, c) words
Space for defined constants=0 words

Space for variable part:
Space for arrays=0 words
Space for global variables=0 words
Space for recursion stack=4n words

This algorithm is called n times. For each call of the algorithm, the values of the formal parameters (n, a, b) and the value of the local variable (c) are stored in the recursion stack. 4 words of memory are required for storing the information of each call, so the total space required by the recursion stack is 4n words.

Space complexity=c+4+0+0+0+4n=(c+4n+4) words

Time complexity:

Case1: when n=0
In this case, only the if statement is executed. The time complexity is
T(0)=1

Case2: when n>0
In this case, the time complexity is
T(n)=1+1+1+1+1+T(n-1)
=5+T(n-1)
T(n)=5+5+T(n-2)
T(n)=5+5+5+T(n-3)
.
.
After n times
T(n)=5+5+5+…. (n times)+T(0)
T(n)=5n+1
4.2.2 Frequency Count or Tabulation Method
 This is another method for calculating time complexity of an algorithm.
 In this method, steps per execution and frequency count are calculated for each executable statement in the algorithm.
 Steps per execution is the number of steps needed by one execution of the statement.
 Frequency count of a statement indicates the number of times that statement is executed.
 The total steps of all executable statements are added to get the time complexity of the algorithm.

Ex1:
Statements              Steps per execution   Frequency   Total steps

Algorithm Sum(a, n)
{
s := 0; 1 1 1
for i := 1 to n do 1 n+1 n+1
{
s := s + a[i]; 1 n n
}
write s; 1 1 1
}
Total: 2n+3

Time complexity=2n+3
 Steps needed per statement = steps per execution*frequency
 Sum of these steps gives the total step count.

4.3 Asymptotic notations


 Asymptotic notations are used to represent space and time complexity of
algorithms.
 Asymptotic notations are the mathematical notations used to describe the running
time of an algorithm when the input tends towards a particular value or a limiting
value.
Commonly used asymptotic notations are
1) Big oh (O)
2) Omega (Ω)
3) Theta (θ)
4) Small oh (o)
5) Small omega (ω)

1. Big oh notation (O)


 The Big-O notation describes the worst-case running time of an algorithm.
 Compute the Big-O of an algorithm by counting how many iterations an algorithm
will take in the worst-case scenario with an input of n.
 It denotes an upper bound.
 Definition:
If f(n) and g(n) are two functions defined in terms of n, then f(n)=O(g(n)) if and only if there exist two positive constants c and n0 such that f(n) ≤ c*g(n) for all values of n where n ≥ n0.
Ex1: if the complexity of an algorithm is 3n+2 then 3n+2=O(n)

f(n)=3n+2
g(n)=n
3n+2 ≤ 4n, n≥2
c=4 and n0=2
So, 3n+2=O(n)
Ex2: if the complexity of an algorithm is 100n+6 then 100n+6=O(n)

f(n)=100n+6
g(n)=n
100n+6 ≤ 101n, n≥6
c=101 and n0=6
So, 100n+6=O(n)

Ex3: if the complexity of an algorithm is 10n²+4n+6 then 10n²+4n+6=O(n²)

f(n)=10n²+4n+6
g(n)=n²
10n²+4n+6 ≤ 11n², n≥6
c=11 and n0=6
So, 10n²+4n+6=O(n²)

Ex4: if the complexity of an algorithm is 6*2^n + n² then 6*2^n + n²=O(2^n)

f(n)=6*2^n + n²
g(n)=2^n
6*2^n + n² ≤ 7*2^n, n≥4 (since n² ≤ 2^n for n≥4)
c=7 and n0=4
So, 6*2^n + n²=O(2^n)

Actually, 3n+2 can be represented as
3n+2=O(n) because 3n+2 ≤ 4n where c=4 and n0=2
or as 3n+2=O(n²) because 3n+2 ≤ 4n² where c=4 and n0=2
or as 3n+2=O(n³) because 3n+2 ≤ 4n³ where c=4 and n0=2

In Big Oh notation, the least upper bound has to be used. So, 3n+2=O(n)

2. Omega Notation (Ω)

 Omega notation represents the lower bound of the running time of an algorithm.
 It describes the best case complexity of an algorithm.
Definition:
If f(n) and g(n) are two functions defined in terms of n, then f(n)=Ω(g(n)) if and only if there exist two positive constants c and n0 such that f(n) ≥ c*g(n) for all values of n where n ≥ n0.

Ex1: if the complexity of an algorithm is 3n+2 then 3n+2=Ω(n)

f(n)=3n+2
g(n)=n
3n+2 ≥ 3n, n≥1
c=3 and n0=1
So, 3n+2=Ω(n)

Ex2: if the complexity of an algorithm is 100n+6 then 100n+6=Ω(n)

f(n)=100n+6
g(n)=n
100n+6 ≥ 100n, n≥1
c=100 and n0=1
So, 100n+6=Ω(n)

Ex3: if the complexity of an algorithm is 10n²+4n+6 then 10n²+4n+6=Ω(n²)

f(n)=10n²+4n+6
g(n)=n²
10n²+4n+6 ≥ 10n², n≥1
c=10 and n0=1
So, 10n²+4n+6=Ω(n²)

Ex4: if the complexity of an algorithm is 6*2^n + n² then 6*2^n + n²=Ω(2^n)

f(n)=6*2^n + n²
g(n)=2^n
6*2^n + n² ≥ 6*2^n, n≥1
c=6 and n0=1
So, 6*2^n + n²=Ω(2^n)

Actually, 3n+2 can be represented as
3n+2=Ω(n) because 3n+2 ≥ 3n where c=3 and n0=1
or as 3n+2=Ω(1) because 3n+2 ≥ 3 where c=3 and n0=1

In Omega notation, the highest lower bound has to be used. So, 3n+2=Ω(n)

3. Theta notation (θ)


 Theta notation encloses the function from above and below.
 Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.
Definition:
If f(n) and g(n) are two functions defined in terms of n, then f(n)=θ(g(n)) if and only if there exist three positive constants c1, c2 and n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all values of n where n ≥ n0.

Ex1: if the complexity of an algorithm is 3n+2 then 3n+2=θ(n)

f(n)=3n+2
g(n)=n
3n ≤ 3n+2 ≤ 4n, n≥2
c1=3, c2=4 and n0=2
So, 3n+2=θ(n)

Ex2: if the complexity of an algorithm is 100n+6 then 100n+6=θ(n)

f(n)=100n+6
g(n)=n
100n ≤ 100n+6 ≤ 101n, n≥6
c1=100, c2=101 and n0=6
So, 100n+6=θ(n)

Ex3: if the complexity of an algorithm is 10n²+4n+6 then 10n²+4n+6=θ(n²)

f(n)=10n²+4n+6
g(n)=n²
10n² ≤ 10n²+4n+6 ≤ 11n², n≥6
c1=10, c2=11 and n0=6
So, 10n²+4n+6=θ(n²)

Ex4: if the complexity of an algorithm is 6*2^n + n² then 6*2^n + n²=θ(2^n)

f(n)=6*2^n + n²
g(n)=2^n
6*2^n ≤ 6*2^n + n² ≤ 7*2^n, n≥4
c1=6, c2=7 and n0=4
So, 6*2^n + n²=θ(2^n)

4. Small Oh notation (o)

If f(n) and g(n) are two functions defined in terms of n, then f(n)=o(g(n)) if and only if
lim(n→∞) f(n)/g(n) = 0

Ex1: if the complexity of an algorithm is 3n+2 then 3n+2=o(n²) as
lim(n→∞) (3n+2)/n² = 0
Ex2: if the complexity of an algorithm is 10n²+4n+6 then 10n²+4n+6=o(n³) as
lim(n→∞) (10n²+4n+6)/n³ = 0
5. Small Omega notation (ω)
If f(n) and g(n) are two functions defined in terms of n, then f(n)=ω(g(n)) if and only if
lim(n→∞) g(n)/f(n) = 0
Ex1: if the complexity of an algorithm is 3n+2 then 3n+2=ω(1) as
lim(n→∞) 1/(3n+2) = 0

Out of the five notations, the frequently used notations are O, Ω and θ. The θ notation
accurately represents the complexity of algorithms.

Ex1: Show that 3n³+2n²=O(n³)

f(n)=3n³+2n²
g(n)=n³
3n³+2n² ≤ 4n³, n≥2
c=4 and n0=2
So, 3n³+2n²=O(n³)

Ex2: Show that 3^n ≠ O(2^n)

f(n)=3^n
g(n)=2^n

3^n ≤ c*2^n requires (3/2)^n ≤ c, and (3/2)^n grows without bound. So it is not possible to identify constants c and n0 such that 3^n ≤ c*2^n for all n ≥ n0. Hence, 3^n ≠ O(2^n)

Ex3: Show that 3n³+2n²=Ω(n³)

f(n)=3n³+2n²
g(n)=n³
3n³+2n² ≥ 3n³, n≥1
c=3 and n0=1
So, 3n³+2n²=Ω(n³)

Ex4: Show that 3n³+2n²=θ(n³)

f(n)=3n³+2n²
g(n)=n³
3n³ ≤ 3n³+2n² ≤ 4n³, n≥2
c1=3, c2=4 and n0=2
So, 3n³+2n²=θ(n³)

4.4 Performance Measurement


 Performance measurement is concerned with obtaining the actual space and time requirements of a particular algorithm.
 These quantities depend on the compiler and options used, as well as on the computer on which the program is executed.
 Here the focus is on measuring the computing time of a program.
 A clocking procedure is used to compute the time needed. This procedure assumes a function GetTime() that returns the current time in milliseconds.
 Worst case performance is measured by taking different values for the input size of the program.
Ex: measuring the performance of linear search.
 An input-generation algorithm produces the input data for each input size and fills the array.
 The TimeSearch algorithm marks the start and end times of executing the code related to linear search using GetTime() and computes the time needed as the difference between the start and end times.

Running TimeSearch produces a table of observed times.

 The time comes out as zero for some input sizes, because the program can finish before the clock changes its value.
 The process of running linear search can be repeated r times, and the time obtained is divided by r to get the time needed for running linear search once.
 In the refined algorithm, r is an array that contains the repetition factors. With repetition, a nonzero time can be computed; a sketch of this clocking procedure follows.
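
A minimal C sketch of this clocking idea, using clock() from the standard library in place of GetTime() (the array size and repetition factor are illustrative assumptions):

#include <stdio.h>
#include <time.h>

/* linear search: returns the index of x in a[0..n-1], or -1 if absent */
int lsearch(const int a[], int n, int x) {
    for (int i = 0; i < n; i++)
        if (a[i] == x) return i;
    return -1;
}

int main(void) {
    enum { N = 100000, R = 1000 };        /* input size and repetition factor (illustrative) */
    static int a[N];
    for (int i = 0; i < N; i++) a[i] = i;

    clock_t start = clock();
    int pos = 0;
    for (int r = 0; r < R; r++)
        pos += lsearch(a, N, -1);         /* worst case: key not present */
    clock_t end = clock();

    double total = (double)(end - start) / CLOCKS_PER_SEC;
    printf("time for one search: %g seconds (pos=%d)\n", total / R, pos);
    return 0;
}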
2. Divide and Conquer
1. General Method:
Divide and Conquer is one of the best-known general algorithm design techniques.
 Given a function to compute on 'n' inputs, the divide-and-conquer strategy suggests splitting the inputs into 'k' distinct subsets, 1 < k ≤ n, yielding 'k' sub problems.
 These sub problems must be solved, and then a method must be found to combine the sub solutions into a solution of the whole.
 If the sub problems are still relatively large, then the divide-and-conquer strategy can possibly be reapplied.
 Often the sub problems resulting from a divide-and-conquer design are of the same type as the original problem. For those cases the reapplication of the divide-and-conquer principle is naturally expressed by a recursive algorithm.
 Control Abstraction for divide and conquer:
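
The control abstraction referenced below is reconstructed here in the unit's pseudo code, following the standard Horowitz–Sahni formulation:

Algorithm DAndC(P)
{
if Small(P) then
return S(P);
else
{
divide P into smaller instances P1, P2, ..., Pk, k ≥ 1;
apply DAndC to each of these sub problems;
return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
}
}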

In the above specification,


 Initially DAndC(P) is invoked, where ‘P’ is the problem to be solved.
 Small(P) is a Boolean-valued function that determines whether the input size is small enough that the answer can be computed without splitting. If so, the function 'S' is invoked. Otherwise, the problem P is divided into smaller sub problems. These sub problems P1, P2, ..., Pk are solved by recursive application of DAndC.
 Combine is a function that determines the solution to P using the solutions to the
‘k’ sub problems.
Recurrence equation for divide and conquer:
If the size of problem P is n and the sizes of the 'k' sub problems are n1, n2, ..., nk, respectively, then the computing time of divide and conquer is described by the recurrence relation

T(n) = g(n), when n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n), otherwise

where
 T(n) is the time for divide and conquer method on any input of size n and
 g(n) is the time to compute answer directly for small inputs.
 The function f(n) is the time for dividing the problem ‘p’ and combining the
solutions to sub problems.
 More generally, an instance of size n can be divided into b instances of size n/b, with a of them needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.) Assuming that size n is a power of b (i.e. n = b^k), to simplify our analysis, we get the following recurrence for the running time T(n):

T(n) = aT(n/b) + f(n) ..... (1)
 where f(n) is a function that accounts for the time spent on dividing the
problem into smaller ones and on combining their solutions.

2. Binary Search
Problem definition: Let ai, 1 ≤ i ≤ n be a list of elements that are sorted in
non-decreasing order. The problem is to find whether a given element x is
present in the list or not. If x is present we have to determine a value j
(element’s position) such that aj=x. If x is not in the list, then j is set to zero.
Solution: Let P = (n, ai…al , x) denote an arbitrary instance of search problem
where n is the number of elements in the list, ai…al is the list of elements and
x is the key element to be searched for in the given list. Binary search on the
list is done as follows:
Step 1: Pick an index q in the middle of the range [i, l], i.e. q = ⌊(i+l)/2⌋, and compare x with aq.
Step 2: if x = aq, i.e. the key element is equal to the mid element, the problem is immediately solved.
Step 3: if x < aq, then x has to be searched for only in the sub-list ai, ai+1, ..., aq-1. Therefore, the problem reduces to (q-i, ai, ..., aq-1, x).
Step 4: if x > aq, then x has to be searched for only in the sub-list aq+1, ..., al. Therefore, the problem reduces to (l-q, aq+1, ..., al, x).
For the above solution procedure, the algorithm can be implemented as a recursive or a non-recursive (iterative) algorithm, as sketched below.
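
A hedged C sketch of both versions (0-based indexing; the function names are ours):

/* iterative binary search: returns index of x in sorted a[0..n-1], or -1 */
int binsearch(const int a[], int n, int x) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* avoids overflow of (low+high)/2 */
        if (a[mid] == x) return mid;
        else if (x < a[mid]) high = mid - 1;
        else low = mid + 1;
    }
    return -1;
}

/* recursive binary search over a[low..high] */
int rbinsearch(const int a[], int low, int high, int x) {
    if (low > high) return -1;              /* empty range: not found */
    int mid = low + (high - low) / 2;
    if (a[mid] == x) return mid;
    if (x < a[mid]) return rbinsearch(a, low, mid - 1, x);
    return rbinsearch(a, mid + 1, high, x);
}

A call binsearch(a, n, x) or rbinsearch(a, 0, n-1, x) returns the 0-based position, or -1 when x is absent (the pseudo code's j = 0 convention shifted to C).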
Analysis:
In binary search the basic operation is key comparison.
Binary Search can be analyzed with the best, worst, and average case number
of comparisons.
In Recursive Binary Search, count each pass through the if-then-else block as one comparison.
Best case – Θ(1): In the best case, the key is the middle element of the array. A constant number of comparisons (actually just 1) is required.
Worst case – Θ(log₂ n): In the worst case, the key does not exist in the array at all. Through each recursion or iteration of binary search, the size of the admissible range is halved. This halving can be done ⌈log₂ n⌉ times, so at most ⌈log₂ n⌉ comparisons are required. Sometimes a successful search may also take this maximum number of comparisons, so the worst case complexity of a successful binary search is also Θ(log₂ n).
Average case – Θ(log₂ n): To find the average case, take the sum of the product
of the number of comparisons required to find each element and the probability of
searching for that element. To simplify the analysis, assume that no item
which is not in array will be searched for, and that the probabilities of
searching for each element are uniform.

Space Complexity - The space requirements for the recursive and iterative
versions of binary search are different. Iterative Binary Search requires only a
constant amount of space, while Recursive Binary Search requires space
proportional to the number of comparisons to maintain the recursion stack.

3. Finding the maximum and minimum


Problem statement: Given a list of n elements, the problem is to find the
maximum and minimum items.
StraightMaxMin: A simple and straightforward algorithm to achieve this is sketched below. StraightMaxMin requires 2(n-1) comparisons in the best, average and worst cases.
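
A hedged C sketch of StraightMaxMin (0-based indexing; the names are ours):

/* scans the array once, comparing every element with both max and min */
void straight_max_min(const int a[], int n, int *max, int *min) {
    *max = a[0];
    *min = a[0];
    for (int i = 1; i < n; i++) {
        if (a[i] > *max) *max = a[i];   /* 1st comparison per element */
        if (a[i] < *min) *min = a[i];   /* 2nd comparison per element */
    }
}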
Algorithm based on Divide and
Conquer strategy
Let P = (n, a [i],……,a [j]) denote an arbitrary instance of the problem. Here
‘n’ is the no. of elements in the list (a[i],….,a[j]) and we are interested in
finding the maximum and minimum of the list. If the list has more than 2
elements, P has to be divided into smaller instances.
For example, we might divide P into the 2 instances
P1 = (⌊n/2⌋, a[1], ..., a[⌊n/2⌋])
P2 = (n-⌊n/2⌋, a[⌊n/2⌋+1], ..., a[n])
After having divided ‘P’ into 2 smaller sub problems, we can solve them by
recursively invoking the same divide-and-conquer algorithm.
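
A hedged C sketch of the divide-and-conquer version (names ours; indices follow C's 0-based convention):

/* finds the max and min of a[low..high] by divide and conquer */
void max_min(const int a[], int low, int high, int *max, int *min) {
    if (low == high) {                       /* one element */
        *max = *min = a[low];
    } else if (high == low + 1) {            /* two elements: 1 comparison */
        if (a[low] > a[high]) { *max = a[low];  *min = a[high]; }
        else                  { *max = a[high]; *min = a[low];  }
    } else {
        int mid = (low + high) / 2;
        int max1, min1, max2, min2;
        max_min(a, low, mid, &max1, &min1);      /* solve left half  */
        max_min(a, mid + 1, high, &max2, &min2); /* solve right half */
        *max = (max1 > max2) ? max1 : max2;      /* combine: 2 comparisons */
        *min = (min1 < min2) ? min1 : min2;
    }
}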

Complexity Analysis:
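
The number of element comparisons made by the divide-and-conquer MaxMin satisfies the standard textbook recurrence, sketched here for n a power of 2:

T(n) = 2T(n/2) + 2, for n > 2
T(2) = 1
T(1) = 0

Solving with n = 2^k:
T(n) = 2T(n/2) + 2
     = 2²T(n/2²) + 2² + 2
     = …
     = 2^(k-1) T(2) + (2^k - 2)
     = n/2 + n - 2
     = 3n/2 - 2

So the divide-and-conquer method uses 3n/2 - 2 comparisons, against the 2(n-1) comparisons of StraightMaxMin.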

4. Quick Sort
Quicksort is the other important sorting algorithm that is based on the
divide-and-conquer approach. Unlike mergesort, which divides its input
elements according to their position in the array, quicksort divides (or
partitions) them according to their value.
A partition is an arrangement of the array's elements so that all the elements to the left of some element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater than or equal to it.

Obviously, after a partition is achieved, A[s] will be in its final position in


the sorted array, and we can continue sorting the two subarrays to the left and
the right of A[s] independently (e.g., by the same method).
In quick sort, the entire work happens in the division stage, with no work
required to combine the solutions to the sub problems.

Partitioning:

We start by selecting a pivot, an element with respect to whose value we are going to divide the subarray. There are several different strategies for selecting a pivot. Here we use the method suggested by C.A.R. Hoare, the prominent British computer scientist who invented quicksort: select the subarray's first element, p = A[l].
Now scan the subarray from both ends, comparing the subarray's elements to the pivot.
1. The left-to-right scan, denoted below by index pointer i, starts with the second element. Since we want elements smaller than the pivot to be in the left part of the subarray, this scan skips over elements that are smaller than the pivot and stops upon encountering the first element greater than or equal to the pivot.
2. The right-to-left scan, denoted below by index pointer j, starts with the last element of the subarray. Since we want elements larger than the pivot to be in the right part of the subarray, this scan skips over elements that are larger than the pivot and stops on encountering the first element smaller than or equal to the pivot.
After both scans stop, three situations may arise, depending on whether or not the scanning indices have crossed.
1. If the scanning indices i and j have not crossed, i.e., i < j, we simply exchange A[i] and A[j] and resume the scans by incrementing i and decrementing j, respectively.

2. If the scanning indices have crossed over, i.e., i > j, we will have partitioned the subarray after exchanging the pivot with A[j].
3. If the scanning indices stop while pointing to the same element, i.e., i = j, the value they are pointing to must be equal to p. Thus, we have the subarray partitioned, with the split position s = i = j.
We can combine this with case 2 by exchanging the pivot with A[j] whenever i ≥ j, as in the sketch below.
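
A hedged C sketch of this partitioning scheme and of quicksort built on it (names ours):

/* swaps two array elements */
static void swap(int a[], int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

/* partitions a[l..r] around the pivot p = a[l]; returns the split position */
int hoare_partition(int a[], int l, int r) {
    int p = a[l];
    int i = l, j = r + 1;
    do {
        do { i++; } while (i <= r && a[i] < p);   /* left-to-right scan  */
        do { j--; } while (a[j] > p);             /* right-to-left scan  */
        if (i < j) swap(a, i, j);                 /* case 1: not crossed */
    } while (i < j);
    swap(a, l, j);        /* cases 2 and 3: put the pivot in its final position s = j */
    return j;
}

void quicksort(int a[], int l, int r) {
    if (l < r) {
        int s = hoare_partition(a, l, r);
        quicksort(a, l, s - 1);    /* sort left subarray  */
        quicksort(a, s + 1, r);    /* sort right subarray */
    }
}

Calling quicksort(a, 0, n-1) sorts the whole array; hoare_partition returns the split position s at which the pivot lands.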
 Time complexity analysis
Worst case time complexity:
The worst case occurs when the partition process always picks the greatest or smallest element as the pivot, so each partition reduces the problem size by only one element.

The worst case complexity is
T(n)=n+(n-1)+(n-2)+…+2
=n(n+1)/2 - 1
=(n²+n)/2 - 1
T(n)=O(n²)
Best case time complexity:
In the best case, the pivot lands in the middle, and the list is partitioned into two sub lists of approximately equal size. Hence
T(n)=2T(n/2)+cn
=2[2T(n/2²)+cn/2]+cn
=2²T(n/2²)+2cn
=2²[2T(n/2³)+cn/2²]+2cn
=2³T(n/2³)+3cn
:
:
After k times
=2^k T(n/2^k)+kcn
Assuming n=2^k, then k=log₂ n:
T(n)=nT(1)+cn log₂ n=n+cn log₂ n=O(n log₂ n)

Average case:
The comparisons needed when the pivot turns out to be the i-th smallest element are considered, and the average over all positions of the pivot is taken:
T(n)=(n+1)+[T(0)+T(1)+……+T(n-1)]/n+[T(n-1)+T(n-2)+……+T(0)]/n
T(n)=cn+(2/n)[T(0)+T(1)+……+T(n-1)]
Solving this recurrence gives the time complexity T(n)=O(n log₂ n)
