
1 INTRODUCTION

1.1 ALGORITHM
The term "algorithm" originates from the Latin translation of a book written by al-Khwarizmi, a Persian mathematician. The book was entitled "Algoritmi de numero Indorum", and the word "Algoritmi" in its title led to the term "algorithm". In mathematics and computer science, an algorithm is a step-by-step procedure for calculations.
An algorithm is an effective method for finding the solution to a given problem: a sequence of instructions that conveys the method to address the problem. The algorithm is the basis for any program in computer science and helps in the organization of the program; every program is structured, follows specified guidelines, and the algorithm gives the steps for the program. A problem can be solved by means of logical reasoning, but in order to implement the solution in the form of a program, we require an appropriate algorithm.
Formal Definitions
Definition 1: Step by step procedure to solve a problem is called Algorithm.
Definition 2: A countable set of instructions followed to complete the
required task is called an algorithm.
Definition 3: An algorithm is a finite set of instructions to do a particular
task.
Various criteria are considered to evaluate an algorithm, among which the essential ones are as follows:
1. Input: The algorithm should be given zero or more inputs explicitly. (≥ 0)


2. Output: One or more quantities are produced as an outcome. (> 0)
3. Definiteness: Every instruction must be free from ambiguity. (For example, "A = B + 3 or 5" is ambiguous.)
4. Finiteness: The algorithm must halt after a finite number of steps in all cases.
5. Effectiveness: Every step in the algorithm should be easy to understand and must be implementable in any programming language.
Each step of an algorithm must be subjected to one or more operations. For a computer to operate on the algorithm, certain constraints must be imposed on these operations: the time required for the algorithm to terminate must be reasonably short, and every operation performed must be one that could, in principle, be carried out manually within a limited amount of time. Algorithms which are both definite and effective are called computational procedures; an algorithm gains this property when it is implemented in a programming language.

1.2 LIFE CYCLE OF DESIGN AND ANALYSIS OF ALGORITHM



Understand the Problem: Before designing the algorithm, we need to understand the problem completely. This is a critical phase: if we make a mistake here, the entire algorithm becomes wrong. We therefore gather all the requirements from the user regarding the problem, and after that we find out the inputs necessary for solving it. An input to the algorithm is called an "instance" of the problem.
Exact vs. Approximate Solving: Solve the problem exactly if possible; otherwise use an approximation method. Some problems that are solvable by an exact method can nevertheless be solved faster using an approximation method, and in such situations the approximation method is used.
Algorithm Design Techniques: An important aspect of this book is to study
different design techniques which yielded good algorithms. Designing new
algorithms becomes easy once these design strategies are clearly understood.
Designing an algorithm needs human intervention and cannot be done by a
machine alone. Some of the techniques used in designing algorithms are:
1. Brute force
2. Divide and Conquer
3. Greedy Method
4. Dynamic Programming
5. Backtracking
6. Branch and Bound
Depending on the nature of the problem, a suitable design technique is adopted.
Expressing an algorithm: Algorithms are written using structured programming principles. This includes writing comments wherever necessary, providing indentation, adhering to standards, etc. Algorithms can be described in the following three ways:
1. Natural language like English: When this way is used, care should be taken to ensure that each and every statement is definite.
2. Graphic representation, called a flowchart: This method works well when the algorithm is small and simple.
3. Pseudo-code method: In this method, we describe the algorithm as a program in a notation that resembles languages like Pascal and Algol.
Algorithm validation: Checking whether the algorithm produces appropriate outputs for all valid inputs is called algorithm validation. The algorithm must be independent of the limitations of the programming language in which it is implemented; it must not be affected by any programming-language issues.

The solution can be expressed as a set of assertions about the input and output variables, and also as an expression in predicate calculus; if these two forms are proved to be equivalent, the solution is correct. Once the algorithm is built, we next have to prove its correctness. Usually validation is used for proving the correctness of the algorithm, i.e., it should be tested by providing all possible combinations of the inputs. Some of the techniques used for generating the inputs are:
1. Boundary Value Technique: If the correct value of a variable is 10, it is tested by giving values in and around 10 (e.g., 9, 9.5, 10.5, 11).
2. Equivalence Partitioning: The input is divided into different groups, and the algorithm is tested to work satisfactorily by giving values from these groups.
3. Random Generation: The algorithm is tested by generating a set of input values at random.
Analysis of an algorithm: Analyzing an algorithm involves studying how the algorithm stores and processes data on a computer. It measures the execution time of the algorithm and the space required by it; analysis is required to compare the performance of algorithms.
Algorithm analysis involves synthesizing a formula, or estimating how fast the algorithm is, as a function of the size of the problem it operates on. The size of the problem can be:
(a) The number of inputs/outputs of the algorithm.
E.g., for a multiplication algorithm, the numbers to be multiplied are the inputs and the product of the numbers is the output.
(b) The total number of operations involved in the algorithm.
E.g., to find the minimum of all the elements in an array, the number of comparisons made among the elements is the total number of operations.
Algorithm testing: Testing an algorithm involves debugging and profiling. Debugging refers to finding errors in the results and correcting the problem. Profiling refers to measuring the performance of a correct program when executed on a data set; this includes calculating the time and space required for the computation.
Coding an algorithm: After successful completion of all the phases, the algorithm is converted into a program in a suitable computer language.

1.3 PSEUDO‐CODE FOR EXPRESSING ALGORITHMS


1. Comments are denoted by ‘//’
2. Block of statements are enclosed within braces ‘{ }’.
3. There is no need for explicit declaration of a variable's data type. An identifier starts with a letter.
4. Records are used for the formation of complex data types. Here is an
example,
Node = Record
{
data type – 1 data – 1;
.
.
.
data type – n data – n;
node * link;
}
Here link is a pointer to the record type node. Individual data items of
a record can be accessed with → and period operators.
5. Values can be assigned to variables as follows:
<Variable>: = <expression>;
6. TRUE and FALSE are two Boolean values present.
→ Logical Operators AND, OR, NOT
→ Relational Operators <, <=, >, >=, =, !=
7. Loop control statements used are for, while and repeat-until.
while Loop:
while < condition > do
{
<statement-1>
.
.
.
<statement-n>
}
for Loop:
for variable: = value-1 to value-2 step step do
{
<statement-1>
.
.
.
<statement-n>
}
6 Design and Analysis of Algorithms

repeat-until:
repeat
<statement-1>
.
.
.
<statement-n>
until<condition>
8. A conditional statement has the following forms.
If <condition> then <statement>
If <condition> then <statement-1>
Else <statement-2>
Case statement:
case
{
: <condition-1> : <statement-1>
.
.
.
: <condition-n> : <statement-n>
: else : <statement-n+1>
}
9. Input and output are done using the instructions read and write.
10. There is only one type of procedure, called Algorithm. Its heading takes the form:
Algorithm Name(<list of parameters>)
As an example, the following algorithm computes the sum of the first 'n' natural numbers and returns the result:

Algorithm
1: Algorithm Sum(n) // n is number of natural numbers
2: {
3: NumSum:=0
4: for i:= 1 to n do
5: NumSum:= NumSum + i
6: return NumSum;
7: }

In the above example, Sum is the name of the algorithm and 'n' is its parameter; NumSum and 'i' are local variables.
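For comparison, the same algorithm can be written directly in C. The sketch below is a straightforward translation of the pseudocode (the function name sum_naturals and the use of long are our own choices, not part of the pseudocode conventions):

#include <stdio.h>

/* C rendering of Algorithm Sum(n): returns 1 + 2 + ... + n */
long sum_naturals(int n)
{
    long numSum = 0;
    for (int i = 1; i <= n; i++)
        numSum = numSum + i;
    return numSum;
}

int main(void)
{
    printf("Sum(10) = %ld\n", sum_naturals(10));  /* prints 55 */
    return 0;
}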

Case study: Selection Sort
Here is an example that demonstrates the method of transforming a problem into an algorithm.
• Consider the problem of sorting 'n' arbitrary elements in non-decreasing order, for which an algorithm must be devised.
• A simple solution is given by the following:
• Find the smallest among the unsorted elements and place it first in the sorted list.

Algorithm
1: For i := 1 to n do
2: {
3: Examine x[i] to x[n] and suppose the smallest
element is x[min];
4: swap x[i] and x[min];
5: }

The above algorithm has two tasks:
(i) Finding the smallest element
(ii) Swapping
The first task can be solved by assuming the minimum is x[i], then checking x[i] against x[i+1], x[i+2], ..., x[n], and whenever a smaller element is found, regarding it as the new minimum x[min]; x[n] is the last element compared with the current minimum. The second task, the swap, can be solved using the following code:
temp := x[i];
x[i] := x[min];
x[min] := temp;
Putting all these observations together, we get the algorithm Selection
sort.
In the first step of this algorithm, the smallest element in the given list is
identified. It is placed in the beginning of the list in the next step. A similar
procedure is followed with the remaining elements where each time the least
element among the elements is found and placed in appropriate position until
the whole list is in sorted order.
Example: List L = 15, 25, 20, 5, 10
5 is identified and swapped with 15 → 5, 25, 20, 15, 10
10 is identified and swapped with 25 → 5, 10, 20, 15, 25
15 is identified and swapped with 20 → 5, 10, 15, 20, 25
20 is identified and is already in place → 5, 10, 15, 20, 25

Algorithm
1: Algorithm SelectionSort(x, n)
2: // Sort the array x[1:n] into non-decreasing
//order.
3: {
4: for i:=1 to n do
5: {
6: min:=i;
7: for k:=i+1 to n do
8: if (x[k]<x[min]) then min:=k;
9: temp:=x[i];
10: x[i]:=x[min];
11: x[min]:=temp;
12: }
13: }
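The pseudocode translates almost line for line into C. Below is a minimal sketch, assuming 0-based C arrays in place of the 1-based x[1:n] above (the names selection_sort and temp are our own):

#include <stdio.h>

/* Selection sort on x[0..n-1]; note C arrays are 0-based,
   while the pseudocode above uses x[1:n] */
void selection_sort(int x[], int n)
{
    for (int i = 0; i < n; i++) {
        int min = i;
        for (int k = i + 1; k < n; k++)
            if (x[k] < x[min])
                min = k;
        int temp = x[i];          /* swap x[i] and x[min] */
        x[i] = x[min];
        x[min] = temp;
    }
}

int main(void)
{
    int L[] = {15, 25, 20, 5, 10};
    selection_sort(L, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", L[i]);      /* prints: 5 10 15 20 25 */
    printf("\n");
    return 0;
}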

1.4 RECURSIVE ALGORITHMS


The ability of a function to call itself is called recursion. Recursive algorithms are powerful mechanisms and can express convoluted code in a lucid manner; that is why, in certain cases, recursive algorithms are preferable.
Recursion is of two types: (i) direct recursion and (ii) indirect recursion. A direct recursive algorithm calls itself straight away. If 'P' is an algorithm that calls an algorithm 'Q' in its body, and 'Q' in turn calls 'P', then 'P' is said to be an indirect recursive algorithm.
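A small C sketch of the two kinds of recursion (the functions fact, is_even and is_odd are our own illustrative examples, not taken from the text):

#include <stdio.h>

/* Direct recursion: fact calls itself. */
int fact(int n) { return (n <= 1) ? 1 : n * fact(n - 1); }

/* Indirect recursion: is_even calls is_odd, which in turn calls is_even. */
int is_odd(int n);                                          /* forward declaration */
int is_even(int n) { return (n == 0) ? 1 : is_odd(n - 1); }
int is_odd(int n)  { return (n == 0) ? 0 : is_even(n - 1); }

int main(void)
{
    printf("%d %d\n", fact(5), is_even(10));  /* prints: 120 1 */
    return 0;
}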
The following two examples show how to develop recursive algorithms.
In the first, we consider the Towers of Hanoi problem, and in the second,
we generate all possible permutations of a list of characters.

1.4.1 Towers of Hanoi


Towers of Hanoi is a classical example of a recursion problem. It is simple, efficient, and does not make use of complex data structures. The task is to move all the disks from the source tower to the destination tower without violating the following rules:
1. Only one disk may be moved at a time.
2. A larger disk must never be placed on top of a smaller one.
3. Each move takes the upper disk from one tower and places it on another tower.
4. Only one auxiliary tower may be used for the intermediate storage of disks.
There is an interesting story behind the Towers of Hanoi. It is believed that when the world was brought into existence there was a tower accommodating 64 golden disks; it is also called the Tower of Brahma or Lucas' Tower. The disks were of different sizes, arranged in a stack in ascending order. There were two more empty towers beside the actual tower: let them be the auxiliary and destination towers, and let the actual tower be the source. The disks must be transferred from source to destination using the middle (auxiliary) tower.
It is said that Brahmin priests have been moving these disks, according to the unchangeable rules of Brahma, since that time, and that once the puzzle is completed the world will come to an end.
Note: If there are 'n' disks, then the minimum number of moves required to solve the Towers of Hanoi problem is 2^n – 1.
Algorithm for Towers of Hanoi problem: Name of the algorithm is TOH
and n disks are to be moved from source s to destination d using auxiliary
tower a.

Algorithm
1: Algorithm TOH(n, s, d, a)
2: {
3: If (n>=1) then
4: {
5: TOH(n-1,s,a,d);
6: Write(“move top disk from tower”, s, “to top of
tower”, d);
7: TOH(n-1,a,d,s);
8: }
9: }

1.4.1.1 Solution to Towers of Hanoi for Three Disks


In this problem, there are three towers named source, auxiliary and destination. The source tower holds three disks of different sizes, each disk resting on the one just larger than it.
Start: The source tower holds all 3 disks.

Step 1: Move disk 1 from source to destination.

Step 2: Move disk 2 from source to auxiliary.

Step 3: Move disk 1 from destination to auxiliary.

Step 4: Move disk 3 from source to destination.

Step 5: Move disk 1 from auxiliary to source.



Step 6: Move disk 2 from auxiliary to destination.

Step 7: Move disk 1 from source to destination.

Thus, the destination consists of three disks satisfying the above rule.

Analysis:
Number of disks = 3
Number of times disk 1 moved = 2^2 = 4
Number of times disk 2 moved = 2^1 = 2
Number of times disk 3 moved = 2^0 = 1
Total number of moves = 2^3 – 1 = 7
Note: The number of movements of each disk is a power of 2.

1.4.1.2 Solution to Towers of Hanoi for Four disks


Step 1: Move disk 1 from Source to Auxiliary.
Step 2: Move disk 2 from Source to Destination
Step 3: Move disk 1 from Auxiliary to Destination.
Step 4: Move disk 3 from Source to Auxiliary.
Step 5: Move disk 1 from Destination to Source
Step 6: Move disk 2 from Destination to Auxiliary
Step 7: Move disk 1 from Source to Auxiliary

Step 8: Move disk 4 from Source to Destination


Step 9: Move disk 1 from Auxiliary to Destination
Step 10: Move disk 2 from Auxiliary to Source
Step 11: Move disk 1 from Destination to Source
Step 12: Move disk 3 from Auxiliary to Destination
Step 13: Move disk 1 from Source to Auxiliary
Step 14: Move disk 2 from Source to Destination
Step 15: Move disk 1 from Auxiliary to Destination
Analysis:
Number of disks = 4
Number of times disk 1 moved = 2^3 = 8
Number of times disk 2 moved = 2^2 = 4
Number of times disk 3 moved = 2^1 = 2
Number of times disk 4 moved = 2^0 = 1
Total number of moves = 2^4 – 1 = 15
Note: The number of movements of each disk is a power of 2.

1.4.1.3 C Program to Solve Towers of Hanoi Problem


#include <stdio.h>
#include <math.h>

/* Move n disks from tower 'from' to tower 'to' using 'aux' as intermediate */
void Towers(int n, char from, char to, char aux)
{
    if (n >= 1)
    {
        Towers(n - 1, from, aux, to);
        printf("Move disk %d from %c to %c\n", n, from, to);
        Towers(n - 1, aux, to, from);
    }
}

int main(void)
{
    int disk, moves;
    printf("Enter number of disks : ");
    scanf("%d", &disk);
    moves = (int)(pow(2, disk) - 1);
    printf("Number of moves needed are : %d\n", moves);
    Towers(disk, 's', 'd', 'a');
    return 0;
}

1.4.1.4 Permutation Generator


The problem is to print all possible permutations of a set containing 'k' elements (k >= 1). For example, if the set is {x, y, z}, then the set of permutations is
{(x, y, z), (y, z, x), (z, x, y), (x, z, y), (y, x, z), (z, y, x)}
• Hence for 'k' elements there are k! permutations.
• A simple algorithm can be obtained by looking at the case of 4 elements (x, y, z, w).
• The answer can be constructed by writing
1. x followed by all the permutations of (y, z, w)
2. y followed by all the permutations of (x, z, w)
3. z followed by all the permutations of (x, y, w)
4. w followed by all the permutations of (x, y, z)

Algorithm
1: Algorithm perm(a,k,n)
2: {
3: if(k=n) then write (a[1:n]); // output permutation
4: else //a[k:n] has more than one permutation
5: // Generate this recursively.
6: for i:=k to n do
7: {
8: t:=a[k];
9: a[k]:=a[i];
10: a[i]:=t;
11: perm(a,k+1,n);
12: //all permutation of a[k+1:n]
13: t:=a[k];
14: a[k]:=a[i];
15: a[i]:=t;
16: }
17: }
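A C rendering of this algorithm is sketched below, assuming 0-based arrays of characters (the name perm follows the pseudocode; everything else, including the driver in main, is our own choice):

#include <stdio.h>

/* C rendering of Algorithm perm(a,k,n), 0-based: prints all
   permutations of a[k..n-1], keeping a[0..k-1] fixed */
void perm(char a[], int k, int n)
{
    if (k == n - 1) {                  /* only one element left: output */
        for (int j = 0; j < n; j++)
            putchar(a[j]);
        putchar('\n');
    } else {
        for (int i = k; i < n; i++) {
            char t = a[k]; a[k] = a[i]; a[i] = t;   /* place a[i] at position k */
            perm(a, k + 1, n);                      /* permute the rest */
            t = a[k]; a[k] = a[i]; a[i] = t;        /* undo the swap */
        }
    }
}

int main(void)
{
    char a[] = {'x', 'y', 'z'};
    perm(a, 0, 3);     /* prints all 3! = 6 permutations */
    return 0;
}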

1.5 PERFORMANCE ANALYSIS OF ALGORITHM


The study of algorithms is required for the following reasons:
• To estimate the computational time and space required while solving a problem.
• To prove that the method for obtaining a solution ends after finitely many steps, yielding a correct solution.
• To pick an efficient algorithm for solving a problem from among the available algorithms.

1.5.1 Efficiency
Efficiency of an algorithm depends upon the amount of resources utilized by the algorithm: the most efficient algorithm exhibits minimal resource utilization, and vice versa. Even though two algorithms may be designed to solve the same problem, they may have different efficiencies.
For example, consider the problem of sorting 'n' elements. If insertion sort is adopted to solve this problem, it takes approximately c1·n^2 units of time (c1 a constant). If the same problem is solved using merge sort, it takes c2·n·log2 n units of time (c2 a constant). For small inputs insertion sort is better than merge sort, but for larger inputs merge sort is better, because its running time grows more slowly with increasing input size (n log n grows more slowly than n^2).
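The crossover can be seen numerically. The sketch below tabulates the two cost models for growing n; the constants c1 = 1 and c2 = 8 are invented purely for illustration:

#include <stdio.h>
#include <math.h>

/* Compare c1*n^2 (insertion sort model) with c2*n*log2(n) (merge sort model).
   The constants c1 and c2 are made up for illustration only. */
int main(void)
{
    double c1 = 1.0, c2 = 8.0;
    for (long n = 2; n <= 2048; n *= 4)
        printf("n = %5ld   c1*n^2 = %10.0f   c2*n*log2(n) = %10.0f\n",
               n, c1 * n * n, c2 * n * log2((double)n));
    return 0;
}

For small n the quadratic model is cheaper; by n = 2048 the n log n model is far cheaper, exactly as the paragraph above describes.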
Analysis of an algorithm is the study of the algorithm, i.e., calculating its time and space complexity. Analysis of algorithms is of two types: priori analysis and posteriori analysis.

1.5.2 Priori Analysis


It is also known as performance analysis. The analysis of the algorithm is done before running it on any machine, i.e., before executing the algorithm we study its behavior. It gives an estimate of the running time of the algorithm.
We find the order of magnitude of an algorithm before a program is written and executed, so this analysis is machine and platform independent. The first objective of priori analysis is to associate with the algorithm a mathematical function of the input size 'n' that represents the rate of growth of its running time. Some algorithms have constant growth, taking constant time irrespective of the input size; others have logarithmic growth like O(log n); some have polynomial growth like n or n^2; and some have exponential growth like 2^n or 3^n. It is desirable to have polynomial-growth algorithms for problems, as they take less time than exponential-time algorithms. Priori analysis is less accurate than posteriori analysis, but its cost is low.
Advantages of priori analysis:
1. The analysis is done before implementing or running the algorithm on a machine.
2. It is simple and uniform, and hence easier for performance comparison.
Drawback: We get only estimated values, not real values.

1.5.3 Posteriori Analysis


It is also called performance measurement. During this stage of analysis, the target machine must be identified; the algorithm is converted to a program and run on that machine. The information collected while the algorithm executes, regarding execution time and primary memory requirements, is called posteriori analysis. It gives accurate values but is very costly.
Note: Priori analysis is always performed before posteriori analysis.
Advantage of posteriori analysis: The values we get are real or exact, in actual units of time.
Drawbacks:
1. Difficulty in conducting the experiments.
2. Difficulty in making performance comparisons, because of the non-uniform values obtained for a single algorithm.
Analysis of an algorithm means developing a formula or prediction of how fast the algorithm works as a function of the problem size.
Problem size of an algorithm: The size of the problem depends on the kind of problem dealt with.
Example 1: If an element is to be searched in a p-element array, size of problem = size of array = p.
Example 2: If the elements of two arrays of sizes 'p' and 'q' are to be merged, then size of problem = p + q.
Example 3: If the factorial of a number 'p' is to be computed, then size of the problem = p.

1.5.4 Complexity
Complexity is the time taken and the space required by an algorithm for its completion. It is a measure by which one can judge the quality of an algorithm, and it can be used for finding and sorting out better algorithms. Complexity is normally classified into two types:
1. Space complexity 2. Time complexity

1.5.4.1 Space Complexity


Space complexity is the amount of space an algorithm requires. The space needed by an algorithm is the sum of the following components:
1. A fixed part: space that does not depend on the characteristics of the inputs and outputs, e.g., the instruction space and the space for simple variables and constants.
2. A variable part: space needed by component variables whose size depends on the particular problem instance being solved.
The space requirement S(p) of an algorithm 'p' can be given as S(p) = C + Sp(instance characteristics), where C is a constant (the fixed part) and Sp is the variable part. For example, an algorithm that sums the n elements of an array needs space for the array plus the variables n, i and sum, so Sp(n) = n + 3 and S(p) = O(n).

1.5.4.2 Time Complexity


The time complexity of a program 'p' can be defined as the sum of its compile time and its execution time (run time). Since we typically compile a program once and run it several times, the run time of the program is what we consider.
Time complexity is the amount of time required by the algorithm for the completion of the problem. Here we have three cases: (i) best case, (ii) worst case, (iii) average case.
1. Best case: If an algorithm takes the minimum amount of time to run to completion for a specific set of inputs, it is called the best-case time complexity. E.g., while searching for a particular element using sequential search, if we find the desired element in the first place itself, that is the best case.
2. Worst case: If an algorithm takes the maximum amount of time to run to completion for a specific set of inputs, it is called the worst-case time complexity. Put the other way, the maximum amount of time taken by an algorithm to give an output is the worst case. E.g., while searching for an element using linear search, if the desired element is placed at the end of the list, we get the worst-case time complexity.
3. Average case: The average time taken by an algorithm to run to completion is called the average case.
This classification gives complexity information about the behavior of the algorithm on particular kinds of input instances.

Case Study – Linear Search:

Index:   1   2   3   4   5   6   7   8   9
Value:  10  20  15  08  20  30  50  40  25

The output in this problem is of two types: 1. Successful search, 2. Unsuccessful search.
1. Successful search: The element to be searched is present in the array. The minimum number of comparisons is required when we search for the 1st element, i.e., 10 in this problem, and the number of comparisons is 1; if the number of elements is increased from 9 to 100, the number of comparisons does not change. This is the best case for linear search.
Here the algorithm is the same and only the input changes; we get the output in the least amount of time, and it is O(1) or O(c) or O(k), where c and k are constants. O(1) does not mean 1 comparison; it means the algorithm takes constant time, independent of the number of inputs.
The maximum number of comparisons is required if we search for the last element, i.e., 25 in this problem; the number of comparisons equals the number of elements, i.e., 9 in this problem. As the number of elements increases, the number of comparisons also increases, so the time taken depends on the number of elements and is linear. This case falls under the worst case.
Worst case for linear search = O(n).
2. Unsuccessful search: If the element to be searched is not present in the array, the search is unsuccessful. The number of comparisons required is 'n'. So in this case the best case, worst case and average case are all the same, equal to O(n).
Best case = Worst case = Average case = O(n)
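A minimal C sketch of linear search illustrating the three situations above (the array contents repeat the table; the function name linear_search is our own):

#include <stdio.h>

/* Linear search: returns the 1-based position of key in a[0..n-1], or 0 if absent.
   Best case: key at a[0], 1 comparison.  Worst case: key at the end or absent,
   n comparisons. */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i + 1;
    return 0;
}

int main(void)
{
    int a[] = {10, 20, 15, 8, 20, 30, 50, 40, 25};
    printf("%d\n", linear_search(a, 9, 10));  /* best case: found at position 1 */
    printf("%d\n", linear_search(a, 9, 25));  /* worst case: position 9 */
    printf("%d\n", linear_search(a, 9, 99));  /* unsuccessful: 0 after 9 comparisons */
    return 0;
}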
The time complexity of an algorithm can be found in three ways: (i) brute force method, (ii) step count method, (iii) asymptotic notation.
1.5.4.2.1 Brute Force Method
The time complexity of a program 'p' is the sum of its compile time and its execution time (run time). Since we compile a program once and run it several times, only the run time is considered; let the run time be tp.
tp = time taken to perform all the operations present in the program (addition, subtraction, comparison, etc.).
As tp is influenced by many factors that are unknown when the program is conceived, only an estimate of tp is possible.

Let us assume that we know the characteristics of the compiler to be used, and let us determine the number of additions, subtractions, multiplications, divisions, comparisons, loads, stores, etc., that are performed by the code for p. Then an expression for tp is:
tp(k) = Ca·ADD(k) + Cs·SUB(k) + Cm·MUL(k) + Cd·DIV(k) + ...
Here 'k' denotes the instance characteristics, and Ca, Cs, Cm, Cd, etc., represent the time required for an addition, subtraction, multiplication and division respectively. ADD, SUB, MUL and DIV are functions giving the number of additions, subtractions, multiplications and divisions performed when the code for 'p' is run on an instance with characteristic 'k'.
But it is difficult to obtain such an exact formula, because the time of execution depends upon the numbers being operated upon. Also, in a multi-user system, the execution time depends on numerous factors such as the system load and the number of other programs running on the computer at that instant.
1.5.4.2.2 Step Count Method
A program step is defined as a meaningful statement of a program whose execution time is independent of the instance characteristics. It is assumed that all statements have the same cost (i.e., execution time). We can then determine the total number of steps needed by the program by counting them.
Order of magnitude of an algorithm: Each algorithm contains a finite number of statements, and each statement is executed one or more times. The sum of the numbers of executions of all the statements contained in an algorithm is called the order of magnitude of the algorithm.
Example:
For (i = 0; i < n; i++)
{
---------
--------- ‘k’ statements
----------
}

Assume the above fragment contains 'k' statements enclosed in the for loop and each statement takes one unit of time to execute. The execution of the 'k' statements then requires 'k' units of time, and if these 'k' statements are executed 'n' times, the execution time is n*k units.
Hence the order of magnitude of the above fragment = n*k units.

A few examples of finding the number of steps using the step count method:
1. A simple algorithm without loops:
Algorithm Sum()
{
read (a,b,c,d); ← 1 unit
x=a+b+c+d; ← 1 unit
write(x); ← 1 unit
} -------------
3 units
-------------
The above algorithm has four inputs, one output, and three steps.
Step count = 3
2. Finding the sum of n numbers stored in the array a “n” is known as
instance characteristics (input size):
Algorithm Sum(a,n)
{
sum:=0.0; ← 1 unit
for i:=1 to n do ← n+1 unit
sum:=sum + a[i]; ← n unit
write(sum); ← 1 unit
} -------------
2n+3 units
-------------
Step count = 2n + 3
3. Read n values from keyboard find their sum and average:
Algorithm Avg( )
{
sum:=0.0; ← 1 unit
Read n; ← 1 unit
for i:=1 to n do ← n+1 unit
{
Read num; ← n unit
Sum=sum + num; ← n unit
}
Avg=sum/n ← 1 unit
Write(sum, avg); ← 1 unit
}
-------------
3n+5 units
-------------
Step count = 3n + 5

4. Addition of matrix A and matrix B of dimension m × n, storing the result in matrix C:
Algorithm matadd(a,b,c,m,n)
{
for i:=1 to m do ← m+1 unit
for j:=1 to n do ← m(n+1) unit
c[i,j]=a[i,j]+b[i,j]; ← mn unit
} --------------
2mn+2m+1 units
--------------
Step count = 2mn + 2m + 1
5. Finding nth fibonacci number:
Algorithm Fibonacci(n)
{
if (n<=1) then ← 1 unit
write(n);
else
{
fib1 := 0; ← 1 unit
fib2 := 1; ← 1 unit
for i := 2 to n do ← n+1 units
{
fib := fib1 + fib2; ← n units
fib1 := fib2; ← n units
fib2 := fib; ← n units
}
write(fib); ← 1 unit
}
} --------------
4n+5 units
--------------
Step count = 4n + 5
6. Finding the sum of n numbers stored in the array a using recursion:
Algorithm RecSum(a,n) ← T(n) unit
{
if(n<=0) then ← 1 unit
{
Return 0;
}
else
{
return RecSum(a, n-1) + a[n]; ← T(n-1) + b units
}
}

Solution: T(n) = 1, n = 0
T(n) = T(n – 1) + b, n > 0
Step count method for C programs:
1. main( )
{
int a,b,c;
scanf(“%d%d”,&a,&b); ← 1 unit
c=a+b; ← 1 unit
printf(“%d”,c) ← 1 unit
} ------------
3 units
------------
Step count = 3;
2. main( )
{
int n;
printf(“enter a number”); ← 1 unit
scanf(“%d”, &n); ← 1 unit
for(int i=0;i<n;i++) ← 2n+2 units
printf(”shyam”); ← n unit
} --------------
3n+4 units
-------------
Step count = 3n + 4;

1.5.5 Asymptotic Notation: [Formal Definitions]


We use some simple abstractions to simplify algorithm analysis. First we ignore the actual cost of each statement and assume that all statements have the same cost (i.e., execution time). Then we observe that even handling these constants is complex: the running time of a simple (quadratic) sorting algorithm is a·n^2 + b·n + c for some positive constants a, b and c that depend on the statement costs.
Since the size 'n' can be very large, we are interested only in the growth function of the running time of the algorithm: we want to know whether the running time increases as 'n' increases. If it does not, the running time is independent of 'n' and the time complexity is O(1). If the running time increases with 'n', it depends on 'n'; and if the running time is a·n^2 + b·n + c, we consider only the leading term of the formula (a·n^2), since the lower-order terms are relatively insignificant for large n. We also ignore the coefficient of the leading term, since it is less significant than the rate of growth.
This study of growth functions is called asymptotic analysis of algorithms, and growth rates are denoted by asymptotic notations.
There are five asymptotic notations, as follows:
1. Big-Oh Notation (O-Notation)
2. Omega Notation (Ω-Notation)
3. Theta Notation (Θ-Notation)
4. Little-oh Notation (o-Notation)
5. Little-omega Notation (ω-Notation)
1.5.5.1 Big‐Oh Notation (O‐Notation)
Big-Oh notation, denoted by 'O', is a method of representing the upper bound or worst case of an algorithm's run time. Using Big-Oh notation, we can state the longest amount of time the algorithm may take to complete.
Definition: Let f(n) and g(n) be two non-negative functions. The function f(n) = O(g(n)) if there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
→ Means order at most.
→ Used to measure the worst-case time complexity.
Find the Big-Oh bound for the following functions:

1. f(n) = 2n^2 + 3n + 1
We need f(n) <= c·g(n) for all n >= n0:
2n^2 + 3n + 1 <= 1        false
2n^2 + 3n + 1 <= 3n       false
2n^2 + 3n + 1 <= n^2      false
2n^2 + 3n + 1 <= 2n^2     false
2n^2 + 3n + 1 <= 3n^2     true for n >= 4
f(n) <= 3n^2, so f(n) = O(n^2), where c = 3 and n0 = 4.
2. f(n) = 5n + 4
5n + 4 <= 4      false
5n + 4 <= 5n     false
5n + 4 <= 6n     true for n >= 4
f(n) = O(n), where c = 6 and n0 = 4.
3. f(n) = 10n^3 + 6n^2 + 6n + 2
10n^3 + 6n^2 + 6n + 2 <= 11n^3 for n >= 7
f(n) = O(n^3), where c = 11 and n0 = 7.
Consider the functions f(n) = 2n + 2 and g(n) = n^2. We have to find a constant c so that f(n) ≤ c·g(n); take c = 1, i.e., 2n + 2 ≤ n^2. For n = 1 or 2, f(n) is greater than g(n): when n = 1, f(n) = 4 and g(n) = 1; when n = 2, f(n) = 6 and g(n) = 4. When n ≥ 3 we obtain f(n) ≤ g(n). Hence f(n) = O(g(n)).
5n + 2 = O(n), as 5n + 2 <= 6n for all n >= 2.
109n + 6 = O(n), as 109n + 6 <= 110n for all n >= 6.
13n^2 + 6n + 2 = O(n^2), as 13n^2 + 6n + 2 <= 14n^2 for all n >= 7.
4·2^n + n^2 = O(2^n), as 4·2^n + n^2 <= 5·2^n for n >= 4.
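The constants found in example 1 can be sanity-checked numerically. The sketch below prints every n up to 100 for which f(n) <= c·g(n) fails; with c = 3 it should report only n = 1, 2 and 3, confirming n0 = 4 (the harness itself is our own, not part of the text):

#include <stdio.h>

/* Check f(n) = 2n^2 + 3n + 1 against c*g(n) = 3n^2.
   The inequality should fail only below n0 = 4. */
int main(void)
{
    for (long n = 1; n <= 100; n++) {
        long f  = 2*n*n + 3*n + 1;
        long cg = 3*n*n;
        if (f > cg)
            printf("n = %ld: f(n) = %ld > c*g(n) = %ld\n", n, f, cg);
    }
    return 0;    /* prints violations only for n = 1, 2, 3 */
}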
Various orders associated with Big-Oh are:
O(1) – constant computing time
O(n) – linear
O(n^2) – quadratic
O(n^3) – cubic
O(2^n) – exponential
O(log n) – logarithmic
The relationship among these computing times is O(1) < O(log n) < O(n) < O(n^2) < O(2^n).
O(2^n) – the rate of growth is very high (the algorithm is slower).
O(log n) – the rate of growth is very small (the algorithm is faster).
g(n) is an asymptotic upper bound for f(n).



1.5.5.2 Omega Notation (Ω‐Notation)
Omega notation, denoted by 'Ω', is a method of representing the lower bound of an algorithm's running time. Using omega notation, we can denote the shortest amount of time taken by the algorithm to complete.
Definition: Let f(n) and g(n) be two non-negative functions. The function f(n) = Ω(g(n)) if there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
→ Means order at least.
→ Used to measure the best-case time complexity.
g(n) is an asymptotic lower bound for f(n).
Example: Consider f(n) = 2n + 5 and g(n) = 2n; then 2n + 5 ≥ 2n for n ≥ 1, hence 2n + 5 = Ω(n).
5n + 2 = Ω(n), as 5n + 2 >= 5n for n >= 1 (the inequality holds for n >= 0, but the definition of Ω requires n0 > 0).
109n + 6 = Ω(n), as 109n + 6 >= 109n for all n >= 1.
13n^2 + 6n + 2 = Ω(n^2), as 13n^2 + 6n + 2 >= 13n^2 for all n >= 1.
4·2^n + n^2 = Ω(2^n), as 4·2^n + n^2 >= 4·2^n for n >= 1.
Observe also that
5n + 2 = Ω(1),
13n^2 + 6n + 2 = Ω(n),
13n^2 + 6n + 2 = Ω(1),
4·2^n + n^2 = Ω(n^2),
4·2^n + n^2 = Ω(n),
4·2^n + n^2 = Ω(1).
1.5.5.3 Theta Notation (Θ‐Notation)
Theta notation, denoted by 'Θ', is a method of representing a running time that is bounded both above and below.
Let f(n) and g(n) be two non-negative functions. The function f(n) = Θ(g(n)) if there exist positive constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
→ Means order exactly.
→ Used to find the average-case time complexity.
g(n) is an asymptotically tight bound for f(n).
E.g., for f(n) = 2n + 8 we have 2n ≤ 2n + 8 ≤ 6n for all n ≥ 2; hence 2n + 8 = Θ(n) with constants c1 = 2, c2 = 6 and n0 = 2.
The theta notation is more precise than both Big-Oh and omega notation.
1.5.5.4 Little "oh" Notation (o‐Notation)
Little-oh notation, denoted by 'o', is used to denote an upper bound that is not asymptotically tight.
Let f(n) and g(n) be two non-negative functions. The function f(n) = o(g(n)) if for every positive constant c there exists a constant n0 > 0 such that f(n) < c·g(n) for all n > n0. Equivalently,
lim(n→∞) f(n)/g(n) = 0
E.g., the function 3n + 2 = o(n^2): here f(n) = 3n + 2 and, for instance, 3n + 2 < n^2 for n >= 4, while
lim(n→∞) (3n + 2)/n^2 = 0
so 3n + 2 = o(n^2).
1.5.5.5 Little Omega Notation (ω‐Notation)
Little-omega notation, denoted by 'ω', is used to denote a lower bound that is not asymptotically tight.
Let f(n) and g(n) be two non-negative functions. The function f(n) = ω(g(n)) if for every positive constant c there exists a constant n0 > 0 such that f(n) > c·g(n) for all n > n0. Equivalently,
lim(n→∞) g(n)/f(n) = 0
E.g., the function n^2 + 6 = ω(n): here f(n) = n^2 + 6 and, for instance, n^2 + 6 > n for n >= 1, while
lim(n→∞) n/(n^2 + 6) = 0
so n^2 + 6 = ω(n).

1.5.6 Properties of Asymptotic Notations


Many of the relational properties of real numbers apply to asymptotic
comparisons as well. For the following, assume that f(n) and g(n) are
asymptotically positive.
Transitivity
f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)),
f(n) = O(g(n)) and g(n) = O(h(n)) imply f(n) = O(h(n)),
f(n) = Ω(g(n)) and g(n) = Ω(h(n)) imply f(n) = Ω(h(n)),
f(n) = o(g(n)) and g(n) = o(h(n)) imply f(n) = o(h(n)),
f(n) = ω(g(n)) and g(n) = ω(h(n)) imply f(n) = ω(h(n)).
Reflexivity
f(n) = Θ(f(n)),
f(n) = O(f(n)),
f(n) = Ω(f(n)).
Symmetry
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
Transpose symmetry
f(n) = O(g(n)) if and only if g(n) = Ω(f(n)),
f(n) = o(g(n)) if and only if g(n) = ω(f(n)).
Comparison of Asymptotic notations with real numbers: Let f(n) and
g(n) be two non negative functions and a and b be two real numbers.
f(n) = O(g(n)) ≈ a ≤ b,
f(n) = Ω(g(n)) ≈ a ≥ b,
f(n) = Θ(g(n)) ≈ a = b,
f(n) = o(g(n)) ≈ a < b,
f(n) = ω(g(n)) ≈ a > b.

We say that f(n) is asymptotically smaller than g(n) if f(n) = o(g(n)), and
f(n) is asymptotically larger than g(n) if f(n) = ω(g(n)).
One property of real numbers, however, does not carry over to asymptotic notation:
Trichotomy: For any two real numbers a and b, exactly one of a < b, a = b, or a > b must hold; but not all functions are asymptotically comparable.
Any two real numbers can be compared, but not all functions can. Let f(n) and g(n) be two non-negative functions; a case may exist in which neither f(n) = O(g(n)) nor f(n) = Ω(g(n)) holds. For example, the functions n and n^(1 + cos n) cannot be compared using asymptotic notation, since the value of 1 + cos n oscillates between 0 and 2, taking on all values in between.
Some further rules:
If f(n) = O(g(n)) and h(n) = O(d(n)), then f(n) + h(n) = O(max(g(n), d(n))).
If f(n) = O(g(n)) and h(n) = O(d(n)), then f(n) * h(n) = O(g(n) * d(n)).
If f(n) = O(g(n)), then a*f(n) = O(g(n)), where 'a' is a constant.
1.5.6.1 Time Complexity of Program Segments with Loops
1. for i :=1 to n
s;
complexity: O(n)
2. for i:=1 to n
for j:=1 to n
s;
complexity: O(n^2)
3. i=1, k=1;
while(k<=n)
{
i++;
k=k+i;
}
complexity: O(√n)   (k grows as i^2/2, so the loop runs about √(2n) times)
4. for (i=1; i*i<=n; ++i)
s;
complexity: O(√n)
5. j=1;
while(j<=n)
{
j =j*2;
}
complexity: O(log n)
6. for i:=1 to n/2
for j:=1 to n/3
for k:=1 to n/4
s;
complexity: O(n^3)
7. for i:=1 to n
for(j=1; j<=n; j=j*2)
s;
complexity: O(n log n)
8. for i:=1 to n
for j:=1 to n
for k:=1 to n
{ s;
break;
}
complexity: O(n^2)
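The O(√n) bound for fragment 3 can be checked empirically. The sketch below counts the loop's iterations for growing n and prints √n alongside (the counting harness is our own addition):

#include <stdio.h>
#include <math.h>

/* Count the iterations of fragment 3 (k = k + i) to confirm the O(sqrt(n)) bound */
int main(void)
{
    for (long n = 100; n <= 1000000; n *= 10) {
        long i = 1, k = 1, count = 0;
        while (k <= n) {
            i++;
            k = k + i;
            count++;
        }
        printf("n = %8ld   iterations = %5ld   sqrt(n) = %8.1f\n",
               n, count, sqrt((double)n));
    }
    return 0;
}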

1.6 RECURRENCE RELATIONS


Definition 1: A recurrence relation is an equation or inequality that describes a function in terms of its value on smaller inputs. Special techniques are required to analyze the time and space such functions describe.
Definition 2: A recurrence relation is an equation that recursively defines a sequence, once one or more initial terms are given; each further term of the sequence is defined as a function of the preceding terms.
A recurrence relation thus expresses the values of a sequence in terms of previous values in the sequence and base values.
Solving recurrence relations: The solution to a recurrence relation can be obtained using two methods: 1. the substitution method, 2. the master method.

1.6.1 Substitution Method


In this method we repeatedly substitute the recurrence into itself until a general pattern emerges, and then use the initial condition to obtain a closed form (whose correctness can be verified by induction).
Example Problems:
1. T(n) = T(n – 1) + n n>1
T(n) =1 n=1
Solution:
T(n) = T(n – 1) + n …..(1)

T(n – 1) = T(n – 2) + n – 1 …..(2)


Substituting (2) in (1)
T(n) = T(n – 2) + n – 1+ n …..(3)
T(n – 2) = T(n – 3) + n – 2 …..(4)
Substituting (4) in (3)
T(n) = T(n – 3) + n – 2 + n – 1 + n …..(5)
General equation:
T(n) = T(n – k) + (n – (k – 1)) + (n – (k – 2)) + … + (n – 1) + n   …..(6)
T(1) =1
n – k =1
k=n–1 …..(7)
Substituting (7) in (6)
T(n) = T(1) + 2 + 3 + ….. n – 1 + n
=1 + 2 + 3 + ….. + n
= n(n + 1)/2
T(n) = O(n^2)
2. T(n) = T(n – 1) + b n>1
T(n) = 1 n=1
Solution:
T(n) = T(n – 1) + b …..(1)
T(n – 1) = T(n – 2) + b …..(2)
Substituting (2) in (1)
T(n) = T(n – 2) + b + b
T(n) = T(n – 2) + 2b …..(3)
T(n – 2) = T(n – 3) + b …..(4)
Substituting (4) in (3)
T(n) = T(n – 3) + 3b
General equation
T(n) = T(n – k) + k.b
T(1) = 1
n–k=1
k=n–1
T(n) = T(1) + (n – 1) b
= 1 + bn – b
T(n) = O(n)

3. T(n) = 2 T(n – 1) + b n>1


T(n) = 1 n=1
Solution:
T(1) = 1
T(2) = 2T(1) + b = 2 + b = 2^1 + (2^1 – 1)b
T(3) = 2T(2) + b = 2(2 + b) + b = 4 + 3b = 2^2 + (2^2 – 1)b
T(4) = 2T(3) + b = 2(4 + 3b) + b = 8 + 7b = 2^3 + (2^3 – 1)b
General equation:
T(k) = 2^(k–1) + (2^(k–1) – 1)b
.
.
.
T(n) = 2^(n–1) + (2^(n–1) – 1)b
= 2^(n–1)(b + 1) – b
= 2^n(b + 1)/2 – b
Let c = (b + 1)/2; then
T(n) = c·2^n – b
= O(2^n)
4. T(n) = T(n/2) + b n>1
T(1) = 1 n=1
Solution:
T(n) = T(n/2) + b …..(1)
T(n/2) = T(n/4) + b …..(2)
Substituting (2) in (1)
T(n) = T(n/4) + b + b
= T(n/4) + 2b …..(3)
T(n/4) = T(n/8) + b …..(4)

Substituting (4) in (3)


T(n) = T(n/8) + 3b
= T(n/2^3) + 3b
General equation:
T(n) = T(n/2^k) + kb   …..(5)
T(1) = 1
n/2^k = 1
2^k = n
k = log2 n   …..(6)
Substituting (6) in (5):
T(n) = T(1) + b·log2 n
= 1 + b·log2 n
T(n) = O(log n)
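The closed form n(n + 1)/2 obtained in problem 1 can be verified by evaluating the recurrence directly. A small C sketch, assuming the base case T(1) = 1 as stated above:

#include <stdio.h>

/* Check the closed form n(n+1)/2 for T(n) = T(n-1) + n, T(1) = 1 (problem 1) */
long T(long n)
{
    return (n == 1) ? 1 : T(n - 1) + n;
}

int main(void)
{
    for (long n = 1; n <= 10; n++)
        printf("T(%ld) = %ld   n(n+1)/2 = %ld\n", n, T(n), n * (n + 1) / 2);
    return 0;
}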

1.6.2 The Master Method


This is a cookbook method for determining asymptotic solutions to recurrences of a specific form. The master method provides the solution to recurrence relations of the form
T(n) = a·T(n/b) + f(n)   for n > d
T(n) = c                 for n = d
where a, b, c and d are constants with a >= 1, b > 1, c >= 1, d >= 1, and we interpret n/b to mean either ⌈n/b⌉ or ⌊n/b⌋. Then T(n) can be bounded asymptotically as follows:
Case 1: If f(n) = O(n^(log_b a – ε)) for some constant ε > 0, then T(n) = θ(n^(log_b a)).
Case 2: If f(n) = θ(n^(log_b a) · log^k n) for some k >= 0, then T(n) = θ(n^(log_b a) · log^(k+1) n).
Case 3: If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) <= c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = θ(f(n)).
Use the master method to give tight asymptotic bounds for the following
recurrences:
1. T(n) = 4T(n/2) + n, n > 1; T(n) = 1, n = 1
From the above recurrence relation we obtain a = 4, b = 2, c = 1, d = 1, f(n) = n.
log_b a = log2 4 = 2, so n^(log_b a) = n^2.
f(n) = n = O(n^(2 – ε)) (take ε = 1), so this falls in Case 1, and
T(n) = θ(n^2)
2. T(n) = 4T(n/2) + n^2, n > 1; T(n) = 1, n = 1
a = 4, b = 2, c = 1, d = 1, f(n) = n^2, and n^(log_b a) = n^2.
f(n) = θ(n^2), so this falls in Case 2 (k = 0), and
T(n) = θ(n^2 log n)
3. T(n) = 4T(n/2) + n^3, n > 1; T(n) = 1, n = 1
a = 4, b = 2, c = 1, d = 1, f(n) = n^3, and n^(log_b a) = n^2.
f(n) = n^3 = Ω(n^(2 + ε)) (take ε = 1), so this falls in Case 3, and
T(n) = θ(n^3)
4. T(n) = 2T(n/2) + n, n > 1; T(n) = 1, n = 1
a = 2, b = 2, c = 1, d = 1, f(n) = n, and n^(log_b a) = n^(log2 2) = n.
f(n) = θ(n^(log_b a)) = θ(n), so this falls in Case 2, and
T(n) = θ(n log n)
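The Case 2 answer for example 4 can also be checked empirically. The sketch below evaluates T(n) = 2T(n/2) + n directly and prints n·log2 n alongside for comparison (powers of two only, so n/2 is exact; the harness is our own):

#include <stdio.h>
#include <math.h>

/* Compare T(n) = 2T(n/2) + n, T(1) = 1 (example 4) with n*log2(n) */
double T(long n)
{
    return (n <= 1) ? 1.0 : 2.0 * T(n / 2) + (double)n;
}

int main(void)
{
    for (long n = 2; n <= (1L << 20); n <<= 4)
        printf("n = %8ld   T(n) = %12.0f   n*log2(n) = %12.0f\n",
               n, T(n), n * log2((double)n));
    return 0;
}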

1.7 PROBABILISTIC ANALYSIS


To analyze the running time of an algorithm we can use the theory of probability. Probabilistic analysis is used to find the average running time over all possible inputs; to compute an expected running time, we must know (or assume) the distribution of the inputs.
For some problems the set of all possible inputs can be assumed; for other problems it cannot be determined. Probabilistic analysis can be applied to all problems for which a distribution over the possible inputs can be assumed; to the other problems we cannot apply it. The input distribution must be known to apply probabilistic analysis.
When some part of the algorithm's behavior is randomized, probability and randomness are used in the design and analysis of the algorithm, and such algorithms are called probabilistic or randomized algorithms. In these algorithms, random bits generated by a pseudo-random generator are used as an auxiliary input to obtain good performance in the average case. The performance of the algorithm is determined by the random input and is called the expected run time. The worst case is ignored, as the probability of its occurrence is very small.
Example: Consider an array of n elements having n/2 '1's and n/2 '0's, where the problem is to find a '1' in this array. Any deterministic algorithm may take n/2 comparisons, which is long, and it is not guaranteed for all possible inputs that the algorithm will complete quickly. If instead we check elements at random, then with high probability we can find a '1' quickly, for all possible inputs.
A randomized algorithm is used in Quicksort, where the complexity is reduced from O(n^2) to O(n log n) for inputs that would otherwise be bad, such as when the elements are already in sorted order.
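A minimal C sketch of the randomized search described above, assuming an array with the 1s in the second half and rand() as the pseudo-random generator (the array size and trial count are arbitrary choices of ours):

#include <stdio.h>
#include <stdlib.h>

/* Randomized search for a '1' in an array holding n/2 ones and n/2 zeros.
   The expected number of random probes is about 2, independent of n. */
int find_one(const int a[], int n)
{
    int probes = 0;
    for (;;) {
        probes++;
        if (a[rand() % n] == 1)
            return probes;
    }
}

int main(void)
{
    enum { N = 1000, TRIALS = 10000 };
    int a[N];
    for (int i = 0; i < N; i++)
        a[i] = (i < N / 2) ? 0 : 1;     /* first half 0s, second half 1s */
    long total = 0;
    for (int t = 0; t < TRIALS; t++)
        total += find_one(a, N);
    printf("average probes = %.2f (expected about 2)\n", (double)total / TRIALS);
    return 0;
}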

1.8 AMORTIZED ANALYSIS


An important analysis tool, useful for understanding the running times of algorithms that have steps with widely varying performance, is "amortization". The term itself comes from the field of accounting, which provides a monetary metaphor for algorithm analysis.
Formally, we define the amortized running time of an operation within a series of operations as the worst-case running time of the series of operations divided by the number of operations. When the series of operations is not specified, it is usually assumed to be a series of operations from the repertoire of a certain data structure, starting from an empty structure. For example, when we implement a clearable table ADT with an array, any series of n operations takes O(n) total time, so the amortized running time of each operation is O(1). Note that the actual running time of an operation may be much higher than its amortized running time; for example, a particular clear operation may take O(n) time.
The advantage of using amortization is that it gives us a way to do a robust average-case analysis without using any probability.
Amortized complexity: In amortized analysis we charge some of the actual cost of one operation to other operations. If, say, the amortized cost of each insertion is no more than 2 and that of each deletion is no more than 6, then the actual cost of any sequence of I insertions and D deletions is no more than 2·I + 6·D.

Amortized analysis means finding the average run time per operation over a worst-case sequence of operations.
Note: The only requirement is that the sum of the amortized complexities of all operations in any sequence of operations be greater than or equal to the sum of their actual complexities:
∑(1 <= i <= n) amortized(i) ≥ ∑(1 <= i <= n) actual(i)

Case Study 1: Linear Search

Index:   1   2   3   4   5   6   7   8   9
Value:  10  20  15  08  20  30  50  40  25

Number of comparisons required to search: 10 → 1 comparison
Number of comparisons required to search: 20 → 2 comparisons
Number of comparisons required to search: 15 → 3 comparisons
Number of comparisons required to search: 08 → 4 comparisons
Number of comparisons required to search: 20 → 5 comparisons
Number of comparisons required to search: 30 → 6 comparisons
Number of comparisons required to search: 50 → 7 comparisons
Number of comparisons required to search: 40 → 8 comparisons
Number of comparisons required to search: 25 → 9 comparisons

The total number of comparisons required to search for all the elements in the array is 45.
Then amortized cost = total number of comparisons / total number of elements
= 45/9 = 5
Case Study 2: Binary Search

Index:   1   2   3   4   5   6   7
Value:  10  15  20  21  40  45  70

Number of comparisons required to search: 10 → 3 comparisons
Number of comparisons required to search: 15 → 2 comparisons
Number of comparisons required to search: 20 → 3 comparisons
Number of comparisons required to search: 21 → 1 comparison
Number of comparisons required to search: 40 → 3 comparisons
Number of comparisons required to search: 45 → 2 comparisons
Number of comparisons required to search: 70 → 3 comparisons
The total number of comparisons required to search for all the elements in the array is 17.
Then amortized cost = total number of comparisons / total number of elements
= 17/7 ≈ 2.43
The comparison counts follow from simulating binary search for each key:
1. low = 1, high = 7, mid = (1+7)/2 = 4 → 1 comparison
2. low = mid +1 = 5, high = 7, mid = (5 + 7)/2 = 6 → 2 comparisons
3. low = 1, high = mid – 1 = 3, mid = (1 + 3)/2 = 2 → 2 comparisons
4. low = mid +1 = 7, high = 7, mid = (7 + 7)/2 = 7 → 3 comparisons
5. low = 5, high = mid – 1 = 5, mid = (5 + 5)/2 = 5 → 3 comparisons
6. low = mid +1 =3, high = 3, mid = (3 + 3)/2 = 3 → 3 comparisons
7. low = 1, high = mid – 1 = 1, mid = (1 + 1)/2 = 1 → 3 comparisons

Previous Gate Questions and Solutions


1. Consider the following three claims:
1. (n + k)^m = Θ(n^m), where k and m are constants
2. 2^(n+1) = O(2^n)
3. 2^(2n+1) = O(2^n)

Which of these claims are correct?


(a) 1 and 2 (b) 1 and 3
(c) 2 and 3 (d) 1, 2 and 3
Answer: (a)
Solution:
Consider each claim separately.
I. f(n) = (n + k)^m. Assume the constant k = 1; then f(n) = (1 + n)^m = 1 + C(m,1)n + C(m,2)n^2 + … + n^m, so f(n) = Θ(n^m), which is correct.
II. f(n) = 2^(n+1) = 2^n · 2 = 2·2^n, so f(n) = O(2^n), which is correct.
III. f(n) = 2^(2n+1) = 2^(2n) · 2 = 2·4^n, which is O(2^(2n)) but not O(2^n), so the claim is false.
Hence claims 1 and 2 are correct.
2. Let A [1,…., n] be an array storing a bit(1 or 0) at each location, and f(m)
is a function whose time complexity is θ(m). Consider the following
program fragment written in a C like language:
Counter = 0;
for(i=0;i<n;i++)
{if (A[i]==1)counter++;
else {f(counter);counter=0;
}}
The Complexity of the program fragment is:
(a) Ω(n^2) (b) Ω(n log n) and O(n^2)
(c) θ(n) (d) o(n)
Answer: (c)
Solution:
The given code is:
1. Counter = 0;
2. for(i=0;i<n;i++)
3. {if (A[i]==1)counter++;
4. else {f(counter);counter=0};
5. }
The time complexity of the program fragment depends on the frequency of lines 3 and 4. Line 3 costs O(1) per iteration, i.e., O(n) over the whole loop. Line 4 calls f(counter), whose cost is θ(counter); but counter is the number of consecutive 1s seen since the last call to f, so the total cost of all the calls to f over the entire loop is at most the total number of 1s in the array, which is at most n. Hence the overall running time of the fragment is θ(n).
3. The time complexity of the following C function is (assume n>0)
int recursive(int n) {
if(n==1)
return(1);
else
return(recursive(n-1) + recursive(n-1));
}
(a) O(n) (b) O(n log n) (c) O(n^2) (d) O(2^n)
Answer: (d)
Solution:
The given C function is recursive. The best way to find the time complexity of a recursive function is to convert the code into a recurrence equation; the solution of the recurrence gives the time complexity of the algorithm.
1. int recursive(int n) {
2. if(n==1) return(1);
3. else
4. return(recursive(n – 1) + recursive(n – 1));
5. }
The name of the function is recursive
Let recursive(n) = T(n)
According to line 2 if(n = 1) return(1)

Then the recursion equation is


T(n) = 1 n = 1
According to line 4 recursion equation is
T(n) = T(n – 1) + T(n – 1) n > 1
Or T(n) = 2T(n – 1) n > 1
So the complete recursion equation is
T(n) = 1 n = 1
T(n) = T(n – 1) + T(n – 1) n > 1
Or T(n) = 2T(n – 1) n > 1
T(1) = 1 = 2^0
T(2) = 2T(1) = 2·2^0 = 2^1
T(3) = 2T(2) = 2·2^1 = 2^2
T(4) = 2T(3) = 2·2^2 = 2^3
. . .
T(n) = 2^(n–1) = 2^n/2
So T(n) = O(2^n)
4. Suppose T(n) = 2T(n/2) + n, T(0) = T(1) = 1. Which one of the following is FALSE?
(a) T(n) = O(n^2) (b) T(n) = θ(n log n)
(c) T(n) = Ω(n^2) (d) T(n) = O(n log n)
Answer: (c)
Solution:
T(n) = 2T(n/2) + n, n > 1; T(n) = 1, n = 1.
From the recurrence we obtain a = 2, b = 2, f(n) = n.
n^(log_b a) = n^(log2 2) = n, and f(n) = θ(n^(log_b a)) = θ(n), so this falls in Case 2 of the master theorem:
T(n) = θ(n log n)
Hence T(n) = O(n log n) and T(n) = θ(n log n); and since n log n = O(n^2), T(n) = O(n^2) is also true. So (a), (b) and (d) all hold. But n log n is not Ω(n^2), so the FALSE statement is (c) T(n) = Ω(n^2).
5. Consider the following recurrence:
T(n) = 2T(⌈√n⌉) + 1, T(1) = 1
Which one of the following is true?
(a) T(n) = θ(log log n) (b) T(n) = θ(log n)
(c) T(n) = θ(√n) (d) T(n) = θ(n)
Answer: (b)
Solution:
Put n = 2^m and let S(m) = T(2^m). Then S(m) = 2S(m/2) + 1, and by the master theorem (a = 2, b = 2, f(m) = 1) we get S(m) = θ(m). Substituting back, T(n) = θ(log n).
(The recursion tree has depth log log n: the problem size at level k is n^(2^–k), and setting n^(2^–k) = 2 gives 2^k = log2 n, i.e., k = log(log2 n). But the tree has 2^k = log2 n leaves, so the total work is θ(log n), not θ(log log n).)
6. Consider the following functions:
f(n) = 2^n
g(n) = n!
h(n) = n log n
Which of the following statements about the asymptotic behavior of f(n),
g(n) and h(n) is true ?
(a) f(n) = O(g(n)); g(n) = O(h(n))
(b) f(n) = Ω (g(n)); g(n) = O(h(n))
(c) g(n) = O(f(n)); h(n ) = O(f(n))
(d) h(n) = O(f(n)); g(n) = Ω (f(n))
Answer: (d)

Solution:
f(n) = 2^n, g(n) = n!, h(n) = n log n.
The asymptotic ordering of common functions is
1 < log log n < log n < n^ε < n^c < n^(log n) < c^n < n! < n^n, where 0 < ε < 1 < c
[n < n^2 means n grows more slowly than n^2]. In particular, for large n,
n log n < 2^n < n!
so h(n) < f(n) < g(n) asymptotically. From this relation we get h(n) = O(f(n)) and g(n) = Ω(f(n)), which is option (d).
7. Consider the Quick sort algorithm. Suppose there is a procedure for
finding a pivot element which splits the list into sub-lists each of which
contains at least one-fifth of the elements. Let T(n) be the number of
comparisons required to sort n elements. Then
(a) T(n) ≤ 2T(n/5) + n (b) T(n) ≤ T(n/5) +T(4n/5)+ n
(c) T(n) ≤ 2T(4n/5) + n (d) T(n) ≤ 2T(n/2) + n
Answer: (b)
Solution:
If the pivot splits the list into two sub-lists, one containing at least one-fifth of the elements (n/5) and the other containing the remaining 4n/5, and partitioning itself takes n comparisons, then
T(n) ≤ T(n/5) + T(4n/5) + n
[Note: n – n/5 = (5n – n)/5 = 4n/5]
8. The running time of an algorithm is represented by the following recurrence relation:
T(n) = n                  n ≤ 3
T(n) = T(n/3) + cn        otherwise
Which one of the following represents the time complexity of the algorithm?
(a) θ(n) (b) θ(n log n)
(c) θ(n^2) (d) θ(n^2 log n)
Answer: (a)
Solution:
Complexity is decided by large values of n only, so consider T(n) = T(n/3) + cn for n > 3. Using the master theorem, a = 1, b = 3, so log_b a = log3 1 = 0, and f(n) = cn = θ(n^1). Since n^(log_b a) = n^0 lies polynomially below f(n) = θ(n^1), this belongs to Case 3 of the master theorem, and the solution is
T(n) = θ(f(n)) = θ(n)
9. Two alternative packages A and B are available for processing a database having 10^k records. Package A requires 0.0001n^2 time units and package B requires 10n·log10 n time units to process n records. What is the smallest value of k for which package B will be preferred over A?
(a) 12 (b) 10 (c) 6 (d) 5
Answer: (c)
Solution:
Package B is preferred over A when 10n·log10 n < 0.0001n^2, i.e., 10·log10 n < n/10^4, i.e., 10^5·log10 n < n. Putting n = 10^k gives 10^5·k < 10^k. For k = 5 this fails (5·10^5 > 10^5), but for k = 6 it holds (6·10^5 < 10^6). So the smallest value of k for which package B is preferred over A is 6.
10. Procedure A(n)
{
if (n<=2) return(1)
else
return(A(√n))
}
