Presenter Manual - DAA (UNIT 1) Edited
PRESENTER'S MANUAL
Concept class: This is a theory class that focuses on the concepts. Whenever required, this session also
demonstrates how these concepts are applied to analyse the efficiency of algorithms.
Lab Manual: The lab manual follows the concept class. The students learn to implement the concepts learnt
in the concept class.
Directed Learning Class: Learning and application may be challenging for some students. One of the oldest
and most comprehensive ways of delivering information, the self-directed class allows students to apply
themselves in a manner that makes the content more accessible. In this process, learners take the
initiative in their own learning by planning, implementing and evaluating it.
The entire purpose of this methodology is to make the students more:
Concept focused
Adapted to real life work environment
Algorithms can be seen as special kinds of solutions to problems-not answers but rather
precisely defined procedures for getting answers.
Well-trained computer science professionals know how to deal with algorithms:
- How to construct them.
- Manipulate them.
- Understand them.
- Analyze them.
The knowledge is preparation for much more than writing good computer programs.
Actually, a person does not really understand something until after teaching it to someone else.
Similarly, a person does not really understand an algorithm until after teaching it to a computer.
Course Objectives:
a. To critically analyze the efficiency of alternative algorithmic solutions for the same problem.
b. To illustrate brute force and divide-and-conquer design techniques.
c. To explain dynamic programming and the greedy technique for solving various problems.
d. To apply the iterative-improvement technique to solve optimization problems.
e. To examine the limitations of algorithmic power and the handling of those limitations in different problems.
SYLLABUS
AD3351 DESIGN AND ANALYSIS OF ALGORITHMS    L T P C
                                            3 0 2 4
Unit I: Introduction 8+3
Notion of an Algorithm - Fundamentals of Algorithmic Problem Solving - Important Problem Types -
Fundamentals of the Analysis of Algorithmic Efficiency - Asymptotic Notations and their Properties.
Analysis Framework - Empirical Analysis - Mathematical Analysis for Recursive and Non-recursive
Algorithms - Visualization.
Unit II: Brute Force And Divide-And-Conquer 10+3
Brute Force - String Matching - Exhaustive Search - Travelling Salesman Problem - Knapsack
Problem - Assignment Problem. Divide and Conquer Methodology - Multiplication of Large Integers
and Strassen's Matrix Multiplication - Closest-Pair and Convex-Hull Problems. Decrease and Conquer
Method: Topological Sorting. Transform and Conquer Method: Presorting - Heaps and Heap Sort.
Unit III: Dynamic Programming And Greedy Technique 11+3
Dynamic Programming - Principle of Optimality - Coin Changing Problem - Warshall's and Floyd's
Algorithms - Optimal Binary Search Trees - Multistage Graph - Knapsack Problem and Memory Functions.
Greedy Technique - Dijkstra's Algorithm - Huffman Trees and Codes - 0/1 Knapsack Problem.
Unit IV: Iterative Improvement 7+3
The Simplex Method - The Maximum-Flow Problem - Maximum Matching in Bipartite Graphs - The
Stable Marriage Problem.
Unit V: Coping with the limitations of algorithm power 9+3
Lower-Bound Arguments - P, NP, NP-Complete and NP-Hard Problems. Backtracking: n-Queens
Problem - Hamiltonian Circuit Problem - Subset-Sum Problem. Branch and Bound: LIFO Search
and FIFO Search - Assignment Problem - Knapsack Problem - Travelling Salesman Problem.
Approximation Algorithms for NP-Hard Problems: Travelling Salesman Problem - Knapsack
Problem.
Total hours: 45
TEXT BOOK:
1. Anany Levitin, Introduction to the Design and Analysis of Algorithms, Third Edition, Pearson Education, 2012.
REFERENCE BOOKS:
1. Ellis Horowitz, Sartaj Sahni and Sanguthevar Rajasekaran, Computer Algorithms/C++, Second
Edition, Universities Press, 2019.
2. Thomas H. Cormen, Charles E. Leiserson, Ronald L.Rivest and Clifford Stein, Introduction to
Algorithms, Third Edition, PHI Learning Private Limited, 2012.
3. S.Sridhar, Design and Analysis of Algorithms, Oxford University Press, 2014.
4. Alfred V. Aho, Data Structures and Algorithms, Pearson Education, Reprint 2006.
CO/PO  PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12  PSO1 PSO2 PSO3
CO1     3   3   3   1   1   -   -   -   1   1    2    2     3    2    1
CO2     2   1   1   3   2   -   -   -   2   2    1    2     2    2    2
CO3     3   2   1   2   2   -   -   -   2   1    1    2     1    3    3
CO4     3   2   3   2   2   -   -   -   3   3    3    2     2    1    2
CO5     3   1   2   3   3   -   -   -   2   2    2    2     3    1    3
AVG     3   2   2   2   2   -   -   -   2   2    2    2     2    2    2
1 - low, 2 - medium, 3 - high, '-' - no correlation
S.No | Topics to be covered | Method | Proposed Date | Lecture notes - Page Number | Ref | Teaching Method
1 | Notion of an Algorithm - Fundamentals of Algorithmic Problem Solving - Important Problem Types | CC | 15.07.24 | 1-2 | T1 | CB/L
2 | Fundamentals of the Analysis of Algorithmic Efficiency | CC | 16.07.24 | 3-7 | T1 | CB/L
3 | Asymptotic Notations and their properties | CC | 18.07.24 | 8-11 | T1 | CB/L
4 | Analysis Framework - Empirical analysis | CC | 19.07.24 | 11-12 | T1 | CB/L
5 | Mathematical analysis for Recursive Algorithms | CC | 20.07.24 | 13-15 | T1 | CB/L
6 | Mathematical analysis for Non-Recursive Algorithms | CC | 22.07.24 | 16-18 | T1 | CB/L
7 | Visualization | CC | 23.07.24 | 19-20 | T1 | CB/L
8 | Example Problems in Recursive Algorithm Analysis | CC | 24.07.24 | 23-24 | T1 | CB/L
9 | Example Problems in Non-Recursive Algorithm Analysis | CC | 25.07.24 | 25-27 | T1 | CB/L
1. Attendance (2 mins)
2. Technical Terms (2 mins)
8. Outcome (1 min): The student should be able to understand the concepts of big oh, omega and theta.
9. Next Class (1 min): Empirical efficiency
Remarks:
Faculty Incharge
1. Attendance (2 mins)
2. Technical Terms (2 mins)
3. Revision (7 mins)
4. Objective (1 min): Introduce the empirical analysis of algorithms
5. Content (35 mins): Design and analysis of the empirical algorithm
6. Questions by Students (3 mins)
7. Revision and Questions (3 mins): Properties of asymptotic notations? What are the basic efficiency classes?
8. Outcome (1 min): The student should be able to understand the concept of the empirical algorithm.
9. Next Class (1 min): Recursive and non-recursive algorithms
Remarks:
Faculty Incharge
1. Attendance (2 mins)
2. Technical Terms (2 mins)
3. Revision (7 mins)
4. Objective (1 min): Introduce recursive and non-recursive algorithms
5. Content (35 mins): Binary(n), factorial numbers, arrays
6. Questions by Students (3 mins)
7. Revision and Questions (3 mins): How is an algorithm's efficiency measured? Define profiling.
Remarks:
Faculty Incharge
1. Attendance (2 mins)
2. Technical Terms (2 mins)
3. Revision (7 mins)
4. Objective (1 min): Introduce the visualization of algorithms
5. Content (35 mins): Static algorithm visualization, dynamic algorithm visualization
6. Questions by Students (3 mins)
7. Revision and Questions (3 mins): Analyzing the efficiency of recursive algorithms
8. Outcome (1 min): The student should be able to understand the concept of algorithm visualization.
9. Next Class (1 min): Unit 2
Remarks:
Faculty Incharge
TECHNICAL TERMS
Unit I: INTRODUCTION
What is an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining
a required output for any legitimate input in a finite amount of time. This definition is
illustrated by the simple diagram in Fig 1.
Fig 1 Notion of an algorithm
-The range of inputs for which an algorithm works has to be specified carefully.
- Algorithms for the same problem can be based on very different ideas and can solve the
problem with different speeds.
Problem:
Compute gcd(m, n), the greatest common divisor of two non-negative, not-both-zero integers m and n,
defined as the largest integer that divides both m and n evenly, i.e., with a remainder of zero.
I Method:
Euclid's algorithm is based on repeatedly applying the equality gcd(m, n) = gcd(n, m mod n) until
m mod n becomes 0; since gcd(m, 0) = m, the last value of m is also the gcd of the initial m and n.
Step 1: If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.

ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid's algorithm
//Input: Two non-negative integers m and n, not both zero
//Output: The greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
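The pseudocode above can be sketched in Python (an illustrative addition, not part of the original manual):

```python
def euclid_gcd(m: int, n: int) -> int:
    """Compute gcd(m, n) by Euclid's algorithm.

    Assumes m and n are non-negative and not both zero.
    """
    while n != 0:
        # gcd(m, n) = gcd(n, m mod n)
        m, n = n, m % n
    return m

print(euclid_gcd(60, 24))  # 12
```

Note that, unlike the next two methods, this version also handles a zero input correctly, since gcd(m, 0) = m.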
II Method: Consecutive integer checking algorithm
Step 1: Assign the value of min{m, n} to t.
Step 2: Divide m by t. If the remainder of this division is 0, go to Step 3; otherwise, go to Step 4.
Step 3: Divide n by t. If the remainder of this division is 0, return the value of t as the answer
and stop; otherwise, proceed to Step 4.
Step 4: Decrease the value of t by 1. Go to Step 2.
This algorithm does not work correctly when one of its input numbers is zero. The example illustrates
the importance of specifying the range of an algorithm's inputs explicitly and carefully.
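For comparison, a minimal Python sketch of the consecutive-integer-checking method just described (assuming both inputs are positive, since, as noted, the method fails when one of them is zero):

```python
def cic_gcd(m: int, n: int) -> int:
    """Consecutive-integer-checking gcd; assumes m, n > 0."""
    t = min(m, n)                      # Step 1: start from min{m, n}
    while True:
        if m % t == 0 and n % t == 0:  # Steps 2-3: does t divide both?
            return t                   # t is the gcd
        t -= 1                         # Step 4: try the next smaller t
```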
III Method: Middle-school procedure
Step 1: Find the prime factorization of m.
Step 2: Find the prime factorization of n.
Step 3: Identify all the common factors in the two prime expansions found in Step 1 and Step 2.
Step 4: Compute the product of all the common factors and return it as the greatest common
divisor of the numbers given.
Ex: gcd(60, 24)
60 = 2 · 2 · 3 · 5
24 = 2 · 2 · 2 · 3
gcd(60, 24) = 2 · 2 · 3 = 12
This procedure is much more complex and slower than Euclid's algorithm. Moreover, it does not
qualify, in the form presented, as a legitimate algorithm, because the prime factorization steps
are not defined unambiguously: they require a list of prime numbers.
The sieve of Eratosthenes is a simple algorithm for generating consecutive primes not exceeding
any given integer n. It starts with the list of integers from 2 to n.
- On the first iteration, the algorithm eliminates from the list all multiples of 2, i.e., 4, 6 and so on.
- Then it moves to the next remaining item on the list, which is 3, and eliminates its multiples.
The algorithm continues in this fashion until no more numbers can be eliminated from the list.
2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
2 3   5   7   9    11    13    15    17    19    21    23    25
2 3   5   7        11    13          17    19          23    25
2 3   5   7        11    13          17    19          23
There is no need to consider multiples of any eliminated number, since they would eliminate
numbers already eliminated on previous iterations of the algorithm. The remaining numbers on
the list are the consecutive primes less than or equal to 25.
If p is a number whose multiples are being eliminated on the current pass, then the first
multiple we should consider is p · p, because all its smaller multiples 2p, ..., (p - 1)p have
already been eliminated on earlier passes.

ALGORITHM Sieve(n)
//Implements the sieve of Eratosthenes
//Input: A positive integer n > 1
//Output: Array L of all prime numbers less than or equal to n
for p ← 2 to n do A[p] ← p
for p ← 2 to ⌊√n⌋ do
    if A[p] ≠ 0        //p hasn't been eliminated on previous passes
        j ← p * p
        while j ≤ n do
            A[j] ← 0   //mark element as eliminated
            j ← j + p
//copy the remaining elements of A to array L of the primes
i ← 0
for p ← 2 to n do
    if A[p] ≠ 0
        L[i] ← A[p]
        i ← i + 1
return L
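A runnable Python rendering of the sieve pseudocode (an illustrative sketch, not from the manual):

```python
from math import isqrt

def sieve(n: int) -> list:
    """Sieve of Eratosthenes: all primes <= n, for n > 1."""
    a = list(range(n + 1))            # a[p] = p; 0 marks "eliminated"
    for p in range(2, isqrt(n) + 1):
        if a[p] != 0:                 # p has not been eliminated
            j = p * p                 # first useful multiple is p*p
            while j <= n:
                a[j] = 0
                j += p
    return [a[p] for p in range(2, n + 1) if a[p] != 0]

print(sieve(25))  # [2, 3, 5, 7, 11, 13, 17, 19, 23]
```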
So, we can incorporate the sieve into the middle-school procedure to get a legitimate algorithm
for computing the gcd.
1.1 Fundamentals of algorithmic problem solving
Understanding the problem:
Read the problem's description carefully and ask questions about anything that is unclear. A few
types of problems arise in computing applications quite often; if a known algorithm exists for
solving the problem, it helps to understand how that algorithm works and to know its strengths
and weaknesses. For some problems there will be no readily available algorithm, and we have to
design our own. It is very important to specify exactly the range of instances the algorithm
needs to handle; if we fail to do so, the algorithm may work for the majority of inputs but
crash on some "boundary" value.
Ascertaining the capabilities of a computational device:
The next step is to ascertain the capabilities of the computational device the algorithm is
intended for.
Most algorithms in use today are still designed to be programmed for a computer closely
resembling the classic von Neumann architecture. Its central assumption is that instructions
are executed one after another, one operation at a time; algorithms designed for this model
are called sequential algorithms. The central assumption does not hold for some newer computers
that can execute operations concurrently, i.e., in parallel; such algorithms are called
parallel algorithms.
If we are designing an algorithm as a practical tool, the answer may depend on the problem we
need to solve. For many problems we need not be concerned about a computer's speed and memory.
But there are important problems that are very complex or have to process huge volumes of data;
in such cases it is imperative to be aware of the speed and memory available on a particular
system.
Choosing between exact and approximate problem solving:
The next principal decision is to choose between solving the problem exactly (an exact
algorithm) or solving it approximately (an approximation algorithm). Why would one opt for
approximation?
i) Some important problems simply cannot be solved exactly.
ii) Available algorithms for solving a problem exactly may be unacceptably slow because of the
problem's intrinsic complexity.
iii) An approximation algorithm can be a part of a more sophisticated algorithm that solves a
problem exactly.
Methods of specifying an algorithm:
There are two options that are most widely used nowadays for specifying algorithms:
i) Pseudocode
ii) Flowchart
Pseudocode is usually more precise than a natural language, and its usage often yields more
succinct algorithm descriptions. Pseudocode cannot be directly fed into a computer; it needs
to be converted into a computer program written in a particular computer language.
Proving an algorithm's correctness:
Once an algorithm has been specified, you have to prove its correctness, i.e., prove that the
algorithm yields a required result for every legitimate input in a finite amount of time. For
Euclid's algorithm, for example, the proof rests on the equality gcd(m, n) = gcd(n, m mod n)
together with the simple observation that the second number gets smaller on every iteration of
the algorithm. For some algorithms a proof of correctness is quite easy; for others it can be
quite complex. To show that an algorithm is incorrect, we need just one instance of its input
for which the algorithm fails.
Analyzing an algorithm:
There are two kinds of algorithm efficiency:
i) Time efficiency, indicating how fast the algorithm runs.
ii) Space efficiency, indicating how much extra memory the algorithm needs.
Another desirable characteristic is simplicity: simpler algorithms are easier to understand and
easier to program, and the resulting programs usually contain fewer bugs.
Yet another desirable characteristic is generality, which raises two issues:
First issue: it is sometimes easier to design an algorithm for a problem posed in more general
terms. Ex: gcd.
Second issue: designing an algorithm that can handle the range of inputs that is natural for
the problem at hand.
If we are not satisfied with the algorithm's efficiency, simplicity or generality, we must
redesign the algorithm.
Coding an algorithm:
Implementing an algorithm correctly is necessary but not sufficient; the implementation should
also be efficient. Empirical analysis of the algorithm is based on timing the program on
several inputs and then analyzing the results obtained. An important issue of algorithmic
problem solving is whether or not every problem can be solved by an algorithm. Ex: finding a
real root of a quadratic equation with a negative discriminant; for such an instance the
problem has no solution.
1.2 Important problem types
There are many problems one may encounter in computing, but a few areas have attracted
particular attention from researchers:
i) Sorting
ii) Searching
iii) String processing
iv) Graph problems
v) Combinatorial problems
vi) Geometric problems
vii) Numerical problems
Sorting:
The sorting problem is to rearrange the items of a given list in ascending order. We usually
need to sort lists of numbers, characters from an alphabet, character strings and, most
importantly, records similar to those maintained by schools and companies about their students
and employees.
The most important use of sorting is to make searching easier; that is why dictionaries,
telephone books and so on are sorted. Although some algorithms are indeed better than others,
there is no algorithm that would be the best solution in all situations. Some algorithms are
simple but relatively slow, while others are faster but more complex; some work better on
randomly ordered inputs, while others do better on almost-sorted lists.
Some are suitable only for lists residing in the fast memory while others can be
adapted for sorting large files stored on a disk and so on.
Two properties of sorting algorithms deserve special mention:
i) Stability
ii) Being in-place
- A sorting algorithm is called stable if it preserves the relative order of any two equal
elements in its input: if an input contains two equal elements in positions i and j, where
i < j, then in the sorted list they are in positions i' and j', respectively, such that i' < j'.
Ex: if a list of students sorted alphabetically is re-sorted by GPA, a stable algorithm yields
a list in which students with the same GPA are still sorted alphabetically.
- An algorithm is said to be in-place if it does not require extra memory, except, possibly,
for a few memory units.
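The stability property can be seen directly in Python, whose built-in sort is stable; the student records below are hypothetical:

```python
# Hypothetical (name, GPA) records, already in alphabetical order.
students = [("Arun", 8.5), ("Bala", 9.1), ("Charu", 8.5), ("Devi", 9.1)]

# sorted() is stable: students with equal GPAs keep their
# original (alphabetical) relative order.
by_gpa = sorted(students, key=lambda s: s[1], reverse=True)
print(by_gpa)
# [('Bala', 9.1), ('Devi', 9.1), ('Arun', 8.5), ('Charu', 8.5)]
```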
Searching:
The searching problem deals with finding a given value, called a search key, in a given set.
There are many searching algorithms, ranging from the straightforward sequential search to the
spectacularly efficient but limited binary search. Some algorithms work faster than others but
require more memory; some are very fast but applicable only to sorted arrays; and so on.
String processing:
Text strings comprise letters, numbers and special characters; bit strings comprise zeros and
ones. String-processing algorithms have long been important in computer science in conjunction
with computer languages and compiling issues.
Graph problems:
A graph can be thought of as a collection of points called vertices, some of which are
connected by line segments called edges. Graphs can be used for modeling a wide variety of
real-life applications, including transportation and communication networks and project
scheduling. Some graph problems are computationally very hard: only very small instances of
such problems can be solved in a realistic amount of time, even with the fastest computers
imaginable.
Combinatorial problems:
For these problems, one needs to find a combinatorial object (such as a permutation, a
combination, or a subset) that satisfies certain constraints and has some desired property.
These are among the most difficult problems in computing: the number of combinatorial objects
typically grows extremely fast with a problem's size, and there are no known algorithms for
solving most such problems exactly in an acceptable amount of time.
Geometric problems:
Geometric problems deal with geometric objects such as points, lines and polygons.
Algorithms are developed for constructing simple geometric shapes – triangles, circles and so
on.
There are algorithms for two classic problems
o Closest pair problem Given n points in the plane, find the closest pair among them.
o Convex hull problem Find the smallest convex polygon that would include all points
of a given set.
Numerical problems:
This is another special area of applications, one involving mathematical objects of continuous
nature: solving equations and systems of equations, computing definite integrals, evaluating
functions and so on.
There are many algorithms that play a critical role in many scientific and engineering
applications.
1.3 Fundamentals of the analysis of algorithmic efficiency
There are two kinds of algorithm efficiency:
i) Time efficiency, indicating how fast the algorithm runs.
ii) Space efficiency, dealing with the extra space the algorithm requires.
Measuring an input's size:
The most obvious observation is that almost all algorithms run longer on larger inputs. Ex: it
takes longer to sort larger arrays, multiply larger matrices and so on. It is therefore logical
to investigate efficiency as a function of some parameter n indicating the input's size. Ex:
for sorting and searching, the size of the list is taken.
Ex: for evaluating a polynomial p(x) = a_n x^n + ... + a_0 of degree n, the size parameter may be
i) the polynomial's degree, or
ii) the number of its coefficients.
For problems on an n x n matrix, the parameter may be the matrix order n or the total number of
elements n^2; the algorithm's efficiency can be qualitatively different depending on which of
the two measures we use. Similarly:
i) if the algorithm examines individual characters of its input, measure the size by the number
of characters;
ii) if it works by processing words, count their number in the input.
A special note should be made about measuring the size of inputs for algorithms involving
properties of numbers. For such algorithms, computer scientists prefer measuring size by the
number b of bits in the number n's binary representation:
b = ⌊log2 n⌋ + 1
Units for measuring running time:
We could simply use some standard unit of time measurement (a second, a millisecond and so on)
to measure the running time of a program implementing the algorithm.
Drawbacks:
i) dependence on the speed of a particular computer;
ii) dependence on the quality of the program implementing the algorithm;
iii) dependence on the compiler used to generate the machine code;
iv) the difficulty of clocking the actual running time of the program.
We would like a metric that does not depend on these extraneous factors. One possible approach
is to identify the most important operation of the algorithm, called the basic operation (the
operation contributing the most to the total running time), and compute the number of times
the basic operation is executed.
Ex-1: Sorting algorithms work by comparing elements of a list being sorted with each other. The
basic operation is a key comparison.
Ex-2 : Matrix multiplication and polynomial evaluation which requires two arithmetic operations:
Multiplication and addition.
Thus, the established framework for the analysis of an algorithm's time efficiency suggests
measuring it by counting the number of times the algorithm's basic operation is executed on
inputs of size n.
Let Cop be the time of execution of an algorithm's basic operation on a particular computer,
and let C(n) be the number of times this operation needs to be executed for the algorithm.
Then we can estimate the running time T(n) of a program implementing this algorithm on that
computer by
T(n) ≈ Cop C(n)
The count C(n) does not contain any information about operations that are not basic, and the
count itself is computed only approximately. Still, unless n is extremely small or large, the
formula can give a reasonable estimate of the algorithm's running time.
How much longer will the algorithm run if we double its input size? For C(n) = ½ n(n - 1) ≈ ½ n^2:
T(2n)/T(n) ≈ Cop C(2n) / (Cop C(n)) ≈ ½ (2n)^2 / (½ n^2) = 4
so the answer is about four times longer. Note that
o the value of Cop was not known, but it was neatly cancelled out in the ratio;
o ½, the multiplicative constant in the formula for the count C(n), was also cancelled out.
Therefore, the efficiency analysis framework ignores multiplicative constants and concentrates
on the count's order of growth to within a constant multiple for large-size inputs.
1.3.4 Orders of growth:
A difference in running times on small inputs is not what really distinguishes efficient
algorithms from inefficient ones. Ex: when computing the gcd of two small numbers, the
advantage of Euclid's algorithm over the other two is not yet pronounced.
n      log2 n   n      n log2 n   n^2    n^3    2^n         n!
10     3.3      10^1   3.3·10^1   10^2   10^3   10^3        3.6·10^6
10^2   6.6      10^2   6.6·10^2   10^4   10^6   1.3·10^30   9.3·10^157
10^3   10       10^3   1.0·10^4   10^6   10^9

Table 1 Values (some approximate) of several functions important for analysis of algorithms
The magnitude of the numbers in Table 1 has a profound significance for the analysis of
algorithm.
The function growing the slowest among these is the logarithmic function.
At the other end of the spectrum are the exponential function 2^n and the factorial function n!.
Both grow so fast that their values become astronomically large even for rather small values of n.
Another way to appreciate the qualitative difference among the orders of growth of the
functions in Table 1 is to see how each reacts to a twofold increase in the value of its
argument n:
log2 n: log2 2n = log2 2 + log2 n = 1 + log2 n, an increase by just 1;
n log2 n: 2n log2 2n = 2n(1 + log2 n), an increase of slightly more than twofold;
n^2: (2n)^2 = 4n^2, a fourfold increase.
But there are many algorithms for which the running time depends not only on the input size
but also on the specifics of a particular input. Ex: sequential search, a straightforward
algorithm that searches for a given item (a search key K) in a list of n elements by checking
successive elements of the list until either a match with the search key is found or the list
is exhausted.
ALGORITHM SequentialSearch(A[0..n-1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n-1] and a search key K
//Output: Returns the index of the first element of A that matches K or -1 if there are no
//matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1
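A direct Python translation of the sequential-search pseudocode (illustrative sketch):

```python
def sequential_search(a: list, k) -> int:
    """Index of the first element of a equal to k, or -1."""
    i = 0
    while i < len(a) and a[i] != k:
        i += 1
    return i if i < len(a) else -1

print(sequential_search([9, 4, 7, 4], 4))  # 1 (first match)
print(sequential_search([9, 4, 7, 4], 5))  # -1 (worst case: n comparisons)
```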
The running time of this algorithm can be quite different for the same list size n, depending
on where (and whether) K appears in the list.
In the worst case, when there are no matching elements or the first matching element happens to be
the last one on the list, the algorithm makes the largest number of key comparisons among all
possible inputs of size n:
Cworst (n) = n
The Worst Case Efficiency (WCE) of an algorithm is its efficiency for the worst-case input of
size n, which is an input of size n for which the algorithm runs the longest among all possible inputs
of that size.
Computing the worst-case efficiency is straightforward: analyze what kind of inputs yield the
largest value of the basic operation's count C(n) among all possible inputs of size n, and
then compute this worst-case value Cworst(n).
WCE provides important information about an algorithm’s efficiency by bounding its running
time.
The Best Case Efficiency (BCE) of an algorithm is its efficiency for the best-case input of
size n, which is an input of size n for which the algorithm runs fastest among all possible
inputs of that size.
i) Determine the kind of inputs for which the count C(n) will be the smallest among all
possible inputs of size n. For sequential search, the best-case inputs are lists of size n
whose first element is equal to the search key, so
Cbest(n) = 1
The average-case efficiency of sequential search is analyzed under two standard assumptions:
i) the probability of a successful search is p (0 ≤ p ≤ 1);
ii) the probability of the first match occurring in the ith position of the list is the same
for every i.
The average number of key comparisons Cavg(n) is then found as follows.
Successful search: the probability of the first match occurring in the ith position of the
list is p/n for every i, and the number of comparisons made is i.
Unsuccessful search: the number of comparisons is n, and the probability of such a search is (1 - p).
Cavg(n) = [1 · p/n + 2 · p/n + ... + i · p/n + ... + n · p/n] + n(1 - p)
        = (p/n)(1 + 2 + ... + n) + n(1 - p) = p(n + 1)/2 + n(1 - p)
If p = 1 (the search must be successful):
Cavg(n) = 1 · (n + 1)/2 + n(1 - 1) = (n + 1)/2
i.e., the algorithm will inspect, on average, about half of the list's elements.
If p = 0 (the search must be unsuccessful):
Cavg(n) = 0 · (n + 1)/2 + n(1 - 0) = n
Therefore, the algorithm will inspect all n elements on all such inputs.
Investigation of the ACE is considerably more difficult than investigation of the worst- case &
best-case efficiencies.
The direct approach for doing it involves dividing all instances of size n into several classes so
that for each instance of the class the number of times the algorithm’s basic operation is
executed is the same.
The probability distribution of inputs needs to be obtained or assumed, so that the expected
value of the basic operation's count can then be derived.
Amortized efficiency applies not to a single run of an algorithm but rather to a sequence of
operations performed on the same data structure. It turns out that in some situations a single
operation can be expensive, but the total time for an entire sequence of n such operations is
always significantly better than the worst-case efficiency of that single operation multiplied
by n.
1.4 Asymptotic Notations and Basic Efficiency Classes
1.4.1 Introduction
Three notations are used to compare and rank the orders of growth of functions:
i) O (big oh)
ii) Ω (big omega)
iii) Θ (big theta)
Informal introduction:
O(g(n)) is the set of all functions with a smaller or same order of growth as g(n). Ω(g(n))
stands for the set of all functions with a larger or same order of growth as g(n). Θ(g(n)) is
the set of all functions with the same order of growth as g(n).
Examples:
n ∈ O(n²), 100n + 5 ∈ O(n²), ½ n(n - 1) ∈ O(n²)
The first two functions are linear and thus have a smaller order of growth than g(n) = n²; the
last one is quadratic and has the same order of growth as n².
Examples for the second notation, Ω(g(n)):
n³ ∈ Ω(n²), ½ n(n - 1) ∈ Ω(n²)
Finally, Θ(g(n)) is the set of all functions that have the same order of growth as g(n). Thus,
every quadratic function an² + bn + c with a > 0 is in Θ(n²).
1.4.3 O-notation
A function t(n) is said to be in O(g(n)), denoted t(n) ε O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and non-
negative integer n0 such that t(n)≤cg(n) for all n≥n0
Example:
100n+5 ε O(n2)
100n+5≤100n+n(for all n≥5)=101n ≤ 101n2
As the values of constants c and n0 required by the definition, we take 101 and 5
respectively.
Alternatively, 100n + 5 ≤ 100n + 5n (for all n ≥ 1) = 105n, with c = 105 and n0 = 1.
1.4.4 Ω-notation
A function t(n) is said to be in Ω(g(n)), denoted t(n) ε Ω(g(n)), if t(n) is bounded below
by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c
and some nonnegative integer n0 such that t(n)≥cg(n) for all n≥n0.
Example:
n3 ε Ω(n2)
n3≥n2 for all n≥0
i.e., we can select c=1 and n0=0.
1.4.5 Θ-notation
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both
above and below by some positive constant multiples of g(n) for all large n, i.e., if there
exist some positive constants c1 and c2 and some nonnegative integer n0 such that
c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0.
Example: prove that ½ n(n - 1) ∈ Θ(n²).
Right inequality (upper bound):
½ n(n - 1) = ½ n² - ½ n ≤ ½ n² for all n ≥ 0
Left inequality (lower bound):
½ n(n - 1) = ½ n² - ½ n ≥ ½ n² - ½ n · ½ n = ¼ n² for all n ≥ 2
Hence we can select c2 = ¼, c1 = ½ and n0 = 2.
Theorem: if t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
Proof: the proof rests on a simple fact about four arbitrary real numbers a1, a2, b1, b2:
if a1 ≤ b1 and a2 ≤ b2, then a1 + a2 ≤ 2 max{b1, b2}.
Since t1(n) ∈ O(g1(n)), there exist some positive constant c1 and some nonnegative integer n1
such that t1(n) ≤ c1 g1(n) for all n ≥ n1. Similarly, since t2(n) ∈ O(g2(n)),
t2(n) ≤ c2 g2(n) for all n ≥ n2. Let c3 = max{c1, c2} and consider n ≥ max{n1, n2} so that we
can use both inequalities:
t1(n) + t2(n) ≤ c1 g1(n) + c2 g2(n)
             ≤ c3 g1(n) + c3 g2(n) = c3 [g1(n) + g2(n)]
             ≤ c3 · 2 max{g1(n), g2(n)}
So t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
It implies that the algorithm’s overall efficiency is determined by the part with a larger
order of growth.
Using limits for comparing orders of growth: compute lim (n→∞) t(n)/g(n).
= 0 implies that t(n) has a smaller order of growth than g(n);
= c > 0 implies that t(n) has the same order of growth as g(n);
= ∞ implies that t(n) has a larger order of growth than g(n).
The first two cases mean that t(n) ∈ O(g(n)); the last two cases mean that t(n) ∈ Ω(g(n));
the second case means that t(n) ∈ Θ(g(n)).
1.4.8 Basic Efficiency Classes
The time efficiencies of a large number of algorithms fall into only a few classes. In
increasing order of growth, they are: 1 (constant), log n (logarithmic), n (linear),
n log n (linearithmic), n² (quadratic), n³ (cubic), 2ⁿ (exponential) and n! (factorial).
1.5 Empirical Analysis
An alternative to mathematical analysis is empirical analysis. The general plan for the
empirical analysis of algorithm time efficiency measures either the operation count or the
physical time:
1. Insert a counter into the program to count the number of times the algorithm's basic
operation is executed; or
2. Use a system command, such as the UNIX time command; or
3. Measure the running time of a code fragment by recording the times tstart before it and
tfinish after it, and then computing the difference (tfinish - tstart).
Important facts
The physical running time provides very specific information about an algorithm's performance
in a particular computing environment. Measuring the time spent on a program's different
segments is called profiling; such data are an important resource in the empirical analysis of
an algorithm's running time.
Use a sample representing typical inputs, to be developed by the experimenter. Several
instances of the same size can be included, and the instances can be generated randomly.
The empirical data obtained as the result of an experiment need to be recorded and then presented for
an analysis. Data can be presented numerically in a table or graphically in a scatterplot. One of the
possible applications of the empirical analysis is to predict the algorithm’s performance on an
instance not included in the sample.
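A minimal sketch combining options 1 and 3 of the plan above, using a hypothetical instrumented sequential search as the algorithm under study:

```python
import time
import random

def search_counted(a, k):
    """Sequential search instrumented with a basic-operation counter."""
    comparisons = 0               # option 1: count the basic operation
    for i, x in enumerate(a):
        comparisons += 1
        if x == k:
            return i, comparisons
    return -1, comparisons

# Option 3: time a code fragment on a randomly generated instance.
data = [random.randrange(10**6) for _ in range(10**5)]
t_start = time.perf_counter()
index, count = search_counted(data, -1)   # key absent: worst case
t_finish = time.perf_counter()
print(count)               # 100000 comparisons for the unsuccessful search
print(t_finish - t_start)  # physical time; specific to this environment
```

The printed time is meaningful only for this particular machine and run, which is exactly the point the section makes about physical running time.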
Before proceeding with examples, we review about list of summation formulas and two
basic rules of sum manipulation.
Ex (largest element in a list): the number of comparisons will be the same for all arrays of
size n, so there is no need to distinguish among the worst, average and best cases. We count
the number of times the basic operation (the comparison) is performed and formulate it as a
function of the size n. The algorithm makes one comparison on each execution of the loop,
which is repeated for each value of the loop's variable i within the bounds 1 and n - 1:
C(n) = Σ (i=1 to n-1) 1 = (n - 1) - 1 + 1 = n - 1 ∈ Θ(n)
General plan for analyzing the efficiency of non-recursive algorithms:
i) Decide on a parameter indicating an input's size.
ii) Identify the algorithm's basic operation.
iii) Check whether the number of times the basic operation is executed depends only on the
size of an input; if it also depends on some additional property, the worst-case, average-case
and best-case efficiencies have to be investigated separately.
iv) Set up a sum expressing the number of times the algorithm's basic operation is executed.
v) Using standard formulas and rules of sum manipulation, find a closed-form formula for the
count or, at the very least, establish its order of growth.
Ex (element uniqueness): the number of element comparisons will depend not only on n but also
on whether there are equal elements in the array and, if there are, which array positions they
occupy.
Example 3: Given two n x n matrices, find their product.
ALGORITHM MatrixMul(A[0..n-1, 0..n-1], B[0..n-1, 0..n-1])
//Computes the product of two matrices of order n x n
//Input: Two n x n matrices A and B
//Output: The resultant matrix C = AB
for i ← 0 to n-1 do
    for j ← 0 to n-1 do
        C[i, j] ← 0
        for k ← 0 to n-1 do
            C[i, j] ← C[i, j] + A[i, k] * B[k, j]
return C
Algorithm analysis: the total number of multiplications M(n) is expressed by the following
triple sum, computed using formula (S1) and rule (R1):
M(n) = Σ (i=0 to n-1) Σ (j=0 to n-1) Σ (k=0 to n-1) 1 = n^3
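The count can be confirmed with a small Python sketch that instruments the triple loop (an illustrative addition, not from the manual):

```python
def matrix_mul(a, b):
    """Product of two n x n matrices, plus the number of
    multiplications performed (which should equal n**3)."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    mults = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
                mults += 1    # one execution of the basic operation
    return c, mults

c, m = matrix_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(c)  # [[19, 22], [43, 50]]
print(m)  # 8 = 2**3
```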
Example 4: Find the number of digits in the binary representation of a positive decimal integer.
ALGORITHM Binary(n)
//Counts the number of digits in the binary representation of a number
//Input: A positive decimal integer n
//Output: The number of digits in n's binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count
Algorithm analysis:
- The input's size is n.
- The loop variable takes on only a few values between its lower and upper limits, so we count
the executions of the loop's exit condition instead of setting up a standard sum.
- Since the value of n is about halved on each repetition of the loop, the answer should be
about log2 n. The exact number of times the comparison n > 1 is executed is ⌊log2 n⌋ + 1,
which is the number of bits in n's binary representation.
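A Python sketch of the loop version, which can be checked against the formula ⌊log2 n⌋ + 1:

```python
def binary_digits(n: int) -> int:
    """Number of digits in the binary representation of n (n >= 1)."""
    count = 1
    while n > 1:
        count += 1
        n //= 2          # floor division halves n on each iteration
    return count

print(binary_digits(8))   # 4 (8 is 1000 in binary)
print(binary_digits(25))  # 5 (25 is 11001 in binary)
```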
- To determine a solution uniquely for a recurrence relation, we need an initial condition
that tells us the value with which the sequence starts.
- We can obtain this value by inspecting the condition that makes the algorithm stop its
recursive calls: if n = 0 return 1. Since no multiplications are made in this case, M(0) = 0.
- Thus, we succeed in setting up the recurrence relation and initial condition for the
algorithm's number of multiplications M(n):
M(n) = M(n - 1) + 1 for n > 0, M(0) = 0
- To solve the recurrence, we use the method of backward substitutions:
M(n) = M(n - 1) + 1
     = [M(n - 2) + 1] + 1 = M(n - 2) + 2
     = [M(n - 3) + 1] + 2 = M(n - 3) + 3
     ...
     = M(n - i) + i = ... = M(n - n) + n = n
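A Python sketch of the recursive factorial algorithm whose multiplication count M(n) = n was just derived:

```python
def factorial(n: int) -> int:
    """Recursive factorial; makes exactly n multiplications."""
    if n == 0:
        return 1          # stopping condition: no multiplication
    return factorial(n - 1) * n

print(factorial(5))  # 120
```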
General plan for analyzing the efficiency of recursive algorithms:
1) Decide on a parameter indicating an input's size.
2) Identify the algorithm's basic operation.
3) Check whether the number of times the basic operation is executed can vary on different
inputs of the same size; if it can, the worst-case, average-case and best-case efficiencies
must be investigated separately.
4) Set up a recurrence relation, with an appropriate initial condition, for the number of
times the basic operation is executed.
5) Solve the recurrence or, at least, ascertain the order of growth of its solution.
Ex (Tower of Hanoi): to move n > 1 disks from peg 1 to peg 3:
1. First move recursively n - 1 disks from peg 1 to peg 2, with peg 3 as auxiliary.
2. Then move the largest disk directly from peg 1 to peg 3.
3. Finally, move recursively n - 1 disks from peg 2 to peg 3, with peg 1 as auxiliary.
The number of moves M(n) therefore satisfies
M(n) = M(n - 1) + 1 + M(n - 1) = 2M(n - 1) + 1 for n > 1, with M(1) = 1
Solving by backward substitution:
M(n) = 2M(n - 1) + 1
     = 2[2M(n - 2) + 1] + 1 = 2^2 M(n - 2) + 2 + 1
     = 2^2 [2M(n - 3) + 1] + 2 + 1 = 2^3 M(n - 3) + 2^2 + 2 + 1
     ...
     = 2^i M(n - i) + 2^i - 1
     ...
Substituting i = n - 1:
M(n) = 2^(n-1) M(1) + 2^(n-1) - 1 = 2^(n-1) + 2^(n-1) - 1 = 2 · 2^(n-1) - 1 = 2^n - 1
Thus we have an exponential algorithm, which will run for an unimaginably long time even for
moderate values of n. This is not because the algorithm is poor; it is the problem's intrinsic
difficulty that makes it so computationally demanding.
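A Python sketch of the three-step recursive procedure, returning the move count so the closed form 2^n - 1 can be checked:

```python
def hanoi_moves(n: int) -> int:
    """Number of moves to solve Tower of Hanoi with n disks."""
    if n == 1:
        return 1                       # one disk: move it directly
    # move n-1 disks aside, move the largest, move n-1 disks back on top
    return hanoi_moves(n - 1) + 1 + hanoi_moves(n - 1)

print(hanoi_moves(4))   # 15 = 2**4 - 1
print(hanoi_moves(10))  # 1023
```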
Ex (BinRec): a recursive algorithm for finding the number of binary digits of n:
if n = 1 return 1
else return BinRec(⌊n/2⌋) + 1
- The number of additions made in computing BinRec(⌊n/2⌋) is A(⌊n/2⌋), plus one more addition
made by the algorithm to increase the returned value by 1; thus A(n) = A(⌊n/2⌋) + 1 for n > 1.
- If n = 1, the recursive calls end and no additions are made, so the initial condition is A(1) = 0.
- The presence of ⌊n/2⌋ in the function's argument makes the method of backward substitutions
stumble on values of n that are not powers of 2.
- Therefore the standard approach is to solve such a recurrence only for n = 2^k and then rely
on the smoothness rule, a theorem which gives the correct answer about the order of growth for
all values of n.
For n = 2^k the recurrence becomes A(2^k) = A(2^(k-1)) + 1 with A(2^0) = 0, and backward
substitution gives:
A(2^k) = A(2^(k-1)) + 1
       = [A(2^(k-2)) + 1] + 1 = A(2^(k-2)) + 2
       = [A(2^(k-3)) + 1] + 2 = A(2^(k-3)) + 3
       ...
       = A(2^(k-i)) + i
       ...
       = A(2^(k-k)) + k
Thus we end up with
A(2^k) = A(1) + k = 0 + k = k
Since n = 2^k, k = log2 n, and therefore A(n) = log2 n ∈ Θ(log n).
Fibonacci series: 0, 1, 1, 2, 3, 5, ..., defined by the recurrence F(n) = F(n - 1) + F(n - 2)
for n > 1, with F(0) = 0 and F(1) = 1.
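Two common ways to compute F(n), sketched in Python; the naive recursion mirrors the definition, while the loop avoids its exponential number of calls:

```python
def fib_recursive(n: int) -> int:
    """Direct translation of the definition; exponential time."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n: int) -> int:
    """Bottom-up computation; n additions."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib_iterative(i) for i in range(6)])  # [0, 1, 1, 2, 3, 5]
```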
Visualization:
Static algorithm visualization shows an algorithm's progress through a series of still images,
while algorithm animation shows a continuous presentation of an algorithm's operations. The
year 1981 saw the appearance of the algorithm-visualization classic: a 30-minute colour sound
film titled Sorting Out Sorting. Sorting can be visualized via vertical or horizontal bars or
sticks of different heights or lengths, which are rearranged according to their sizes; this is
convenient for smaller amounts of data.
There are two principal applications of algorithm visualization:
i) Research: based on expectations that algorithm visualization may help uncover some unknown
features of algorithms.
ii) Education: seeking to help students learn algorithms better.
Design an algorithm to find all the common elements in two sorted lists of numbers. For
example, for the lists 2, 5, 5, 5 and 2, 2, 3, 5, 5, 7, the output should be 2, 5, 5. What is
the maximum number of comparisons your algorithm makes if the lengths of the two given lists
are m and n, respectively?
A brute-force approach: for each ai, iterate over the entire list b to check whether ai ∈ b.
This has O(mn) time complexity, which implies O(n²) if m and n are comparable.
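As a sketch of a better alternative (not from the manual), a standard two-pointer merge over the two sorted lists runs in O(m + n) time and reproduces the expected output:

```python
def common_elements(a: list, b: list) -> list:
    """Common elements of two sorted lists, with multiplicity."""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])   # matched pair consumed from both lists
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1             # a[i] cannot appear later in b
        else:
            j += 1
    return out

print(common_elements([2, 5, 5, 5], [2, 2, 3, 5, 5, 7]))  # [2, 5, 5]
```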
Describe the standard algorithm for finding the binary representation of a positive decimal
integer: a. in English; b. in pseudocode.