Unit 3 Mathematical Aspects and Analysis of Algorithms: Structure
3.1 Introduction
In the earlier unit you were introduced to the concepts of analysis
framework. In this unit you will be learning about the basic concepts of
mathematical analysis of algorithms.
It is essential to check the efficiency of each algorithm in order to select the
best algorithm. The efficiency is generally measured by calculating the time
complexity of each algorithm. The shorthand way to represent time
complexity is asymptotic notation.
For simplicity, we can classify the algorithms into two categories as:
Non-recursive algorithms
Recursive algorithms
A non-recursive algorithm is performed only once to solve the problem.
In this unit, we will mathematically analyze non-recursive algorithms.
Objectives:
After studying this unit you should be able to:
explain the types of asymptotic notations
list the basic asymptotic efficiency classes
describe the efficient analysis of non-recursive algorithms with
illustrations
The graph of C*h(n) and T(n) can be seen in figure 3.1. As n becomes
larger, the running time increases considerably. For example, consider
T(n) = 13n^3 + 42n^2 + 2n log n + 4n. Here, as the value of n increases, n^3 grows much
faster than n^2, n log n and n. Hence it dominates the function T(n) and we
can consider the running time to grow in the order of n^3. Therefore it can be
written as T(n) = O(n^3). The bound T(n) ≤ C*h(n) is only required to hold for
n ≥ n0; therefore values of n less than n0 are not relevant.
Example:
Consider the functions T(n) = n + 2 and h(n) = n^2. Determine some constant C so
that T(n) ≤ C*h(n).
Take C = 1 and check small values of n.
For n = 1:
T(n) = (1) + 2 = 3 and h(n) = 1^2 = 1, so
T(n) > h(n)
If n = 3,
then T(n) = (3) + 2 = 5 and h(n) = 3^2 = 9, so
T(n) < h(n)
Hence for n ≥ 2 we have T(n) ≤ h(n), and therefore T(n) = O(n^2). Big Oh
notation always gives an upper bound that holds from some threshold n0 onwards.
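The check above can be carried out mechanically. The following sketch (Python is used here purely for illustration; the constants C = 1 and n0 = 2 are taken from the worked example) verifies the Big Oh bound over a range of values:

```python
# A quick numerical check of the example: with C = 1, the bound
# T(n) <= C * h(n) fails at n = 1 but holds from n0 = 2 onwards.

def T(n):
    return n + 2

def h(n):
    return n * n

C = 1
assert T(1) > C * h(1)                                  # 3 > 1: bound fails at n = 1
assert all(T(n) <= C * h(n) for n in range(2, 1000))    # holds for all n >= 2
```

A finite check of course does not prove the bound for all n, but it is a useful sanity check when choosing C and n0.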
Let us next discuss another asymptotic notation, the omega notation.
Example:
Assume T(n) = 2n + 5 and h(n) = n. With C1 = 2 we have 2n ≤ 2n + 5 for all n,
and with C2 = 6 we have 2n + 5 ≤ 6n for n ≥ 2. So for n ≥ 2, 2n ≤ 2n + 5 ≤ 6n.
Here C1 = 2, C2 = 6 and n0 = 2, and hence T(n) = Θ(n).
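The sandwich can be checked numerically as well. A small sketch (Python used for illustration; the constant C1 = 2 is used here because a lower-bound constant as large as 4 would fail once n ≥ 3):

```python
# Numerical check of the Theta sandwich C1*n <= 2n + 5 <= C2*n for n >= n0,
# with C1 = 2, C2 = 6 and n0 = 2.

def T(n):
    return 2 * n + 5

C1, C2, n0 = 2, 6, 2

assert all(C1 * n <= T(n) <= C2 * n for n in range(n0, 1000))
assert T(1) > C2 * 1    # n0 matters: the upper bound fails at n = 1
```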
Theta notation is more accurate than Big Oh notation and Omega notation.
We will next discuss some rules that help us to analyze algorithms.
The maximum rule
The maximum rule is a useful tool for proving that one function is in the
order of another function.
Let F, h: N → R≥0 be two arbitrary functions. The maximum rule says that
O(F(n) + h(n)) = O(max(F(n), h(n))).
Even though n^3 log n − 6n^2 is negative for small values of n, for large
values of n (i.e. n ≥ 4) it is always non-negative.
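The maximum rule rests on the inequality max(F(n), h(n)) ≤ F(n) + h(n) ≤ 2·max(F(n), h(n)) for non-negative functions. A quick numerical check (Python used for illustration; the pair F(n) = n^3 log n and h(n) = 6n^2 is chosen to echo the example above):

```python
# Checking max(F(n), h(n)) <= F(n) + h(n) <= 2 * max(F(n), h(n)),
# the inequality behind the maximum rule, for an illustrative pair.

import math

def F(n):
    return n ** 3 * math.log(n)

def h(n):
    return 6 * n ** 2

for n in range(2, 200):
    m = max(F(n), h(n))
    assert m <= F(n) + h(n) <= 2 * m
```

Since the sum is within a constant factor (2) of the maximum, the two have the same order of growth, which is exactly what the rule states.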
The limit rule
This powerful rule states that, given arbitrary functions F and h: N → R≥0:
1. If lim n→∞ F(n)/h(n) ∈ R+, then F(n) ∈ Θ(h(n)).
2. If lim n→∞ F(n)/h(n) = 0, then F(n) ∈ O(h(n)) but F(n) ∉ Θ(h(n)).
3. If lim n→∞ F(n)/h(n) = ∞, then F(n) ∈ Ω(h(n)) but F(n) ∉ Θ(h(n)).
L’Hôpital’s rule
L'Hôpital's rule uses derivatives to help compute limits with indeterminate
forms. Application of the rule converts an indeterminate form to a
determinate form, allowing easy computation of the limit.
In simple cases, L'Hôpital's rule states that for functions F(n) and h(n), if
lim n→∞ F(n) = lim n→∞ h(n) = 0 (or both limits are infinite), then:
lim n→∞ F(n)/h(n) = lim n→∞ F'(n)/h'(n)
where F'(n) and h'(n) are the derivatives of F(n) and h(n) respectively.
Example:
Let F(n) = log n and h(n) = √n be two functions. Since both F(n) and h(n) tend
to infinity as n tends to infinity, de l'Hôpital's rule is used to compute
lim n→∞ F(n)/h(n) = lim n→∞ (log n)/√n = lim n→∞ (1/n)/(1/(2√n)) = lim n→∞ 2/√n = 0
Therefore it is clear that F(n) ∈ O(h(n)) but F(n) ∉ Θ(h(n)).
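The limit can also be illustrated numerically. The sketch below (Python, for illustration only) prints the ratio log n/√n alongside the ratio of derivatives 2/√n, which by L'Hôpital's rule has the same limit; both shrink toward 0 as n grows:

```python
# Numerically illustrating lim log(n)/sqrt(n) = 0. The second column is
# the ratio of derivatives, 2/sqrt(n); both columns tend to 0.

import math

for n in (10, 10 ** 3, 10 ** 6, 10 ** 9):
    print(n, math.log(n) / math.sqrt(n), 2 / math.sqrt(n))

# By n = 10^9 the ratio is already below 0.001, consistent with limit 0.
assert math.log(10 ** 9) / math.sqrt(10 ** 9) < 0.001
```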
Conditional asymptotic notation
When we start the analysis of algorithms, it becomes easier if we
restrict our initial attention to instances whose size satisfies a certain
condition (like being a power of 2). Consider n as the size of the integers to be
multiplied. The algorithm proceeds directly if n = 1, which takes a
microseconds for a suitable constant a. If n > 1, the algorithm proceeds by
multiplying four pairs of integers of size n/2 (or three if we use the better
algorithm). To accomplish the additional tasks it takes some linear amount of time.
This algorithm takes a worst-case time which is given by the function
T: N → R≥0, recursively defined as follows, where R≥0 is the set of
non-negative reals:
T(1) = a
T(n) = 4T(⌈n/2⌉) + bn for n > 1
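The recurrence can be evaluated directly on powers of 2 to see its order of growth. A sketch (Python for illustration; the constants a = b = 1 are illustrative choices, as the analysis holds for any positive a and b):

```python
# Evaluating T(1) = a, T(n) = 4*T(n/2) + b*n on powers of 2.
# The ratio T(n)/n^2 approaches the constant a + b, illustrating
# that T(n) is in Theta(n^2) when n is a power of 2.

a, b = 1, 1

def T(n):
    if n == 1:
        return a
    return 4 * T(n // 2) + b * n   # n // 2 equals n/2 exactly for even n

for k in (4, 8, 12):
    n = 2 ** k
    print(n, T(n) / n ** 2)        # tends to a + b = 2

assert abs(T(2 ** 12) / (2 ** 12) ** 2 - (a + b)) < 0.01
```

Solving the recurrence exactly for powers of 2 gives T(n) = (a + b)n^2 − bn, which matches the ratios printed above.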
Conditional asymptotic notation is a simple notational convenience. Its main
interest is that we can eliminate it after using it to analyze the algorithm. A
function F: N → R≥0 is eventually non-decreasing if there is an integer
threshold n0 such that F(n) ≤ F(n+1) for all n ≥ n0. It follows by
mathematical induction that F(n) ≤ F(m) whenever m ≥ n ≥ n0.
Consider b ≥ 2 as an integer. Function F is b-smooth if it is eventually
non-decreasing and satisfies the condition F(bn) ∈ O(F(n)). In other words,
there should be a constant C (depending on b) such that F(bn) ≤ C F(n) for all
n ≥ n0. A function is said to be smooth if it is b-smooth for every integer b ≥ 2.
Most functions we expect to meet in the analysis of algorithms are smooth, such
as log n, n log n, n^2, or any polynomial whose leading coefficient is
positive. However, functions that grow too fast, such as n^(log n), 2^n or n!, are
not smooth because the ratio F(2n)/F(n) is unbounded.
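The smoothness condition can be probed empirically. A sketch (Python for illustration) compares the ratio F(2n)/F(n) for a smooth function with the same ratio for an exponential function:

```python
# Testing the smoothness condition F(2n) in O(F(n)): for the smooth
# function n*log n the ratio F(2n)/F(n) stays bounded by a constant,
# while for 2^n the ratio itself grows without bound.

import math

def smooth(n):
    return n * math.log(n)

ratios = [smooth(2 * n) / smooth(n) for n in range(3, 100)]
assert max(ratios) < 4                # bounded by a constant: n*log n is smooth

fast_ratio = (2 ** 200) / (2 ** 100)  # F(2n)/F(n) for F(n) = 2^n at n = 100
assert fast_ratio == 2 ** 100         # the ratio equals 2^n: unbounded
```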
This shows that (2n)^(log 2n) is not in O(n^(log n)), because the ratio
(2n)^(log 2n) / n^(log n) = 2n^2 is never bounded by a constant. Functions that
are bounded above by a polynomial, on the other hand, are usually smooth
provided they are eventually non-decreasing. Even if they are not eventually
non-decreasing, there is a good chance that the function is in the exact order
of some other function which is smooth. For
instance, let b(n) represent the number of bits equal to 1 in the binary
expansion of n; for instance b(13) = 3 because 13 is written as 1101 in
binary. Consider F(n) = b(n) + log n. It is easy to see that F(n) is not eventually
non-decreasing, and therefore not smooth, because b(2^k − 1) = k whereas
b(2^k) = 1 for all k. However F(n) ∈ Θ(log n), and log n is a smooth function.
A constructive property of smoothness is that if we assume F is b-smooth for
any specific integer b ≥ 2, then it is in fact smooth. To prove this, consider
any two integers a and b (both not smaller than 2). Assume that F is b-smooth. It
is important to show that F is a-smooth as well. Consider C and n0 as
constants such that F(bn) ≤ C F(n) and F(n) ≤ F(n+1) for all n ≥ n0. Let i = ⌈log_b a⌉.
By definition of the logarithm, a = b^(log_b a) ≤ b^i.
lim n→∞ T(n)/h(n) = C > 0 (order of growth of T(n) = order of growth of h(n))
lim n→∞ T(n)/h(n) = ∞ (order of growth of T(n) > order of growth of h(n))
3.2.2 Basic asymptotic efficiency classes
We have seen previously that different orders of growth can differ by a
constant multiple (C in C*h(n)), and we used different types of notations for
these orders of growth. But the classification of orders of growth is not
restricted to Ω, Θ and O. There are various efficiency classes, and each class
possesses certain characteristics, as shown in table 3.1.
Table 3.1: Basic Asymptotic Efficiency Classes

Growth order | Name of the efficiency class | Explanation | Example
1 | Constant | Specifies that the algorithm’s running time does not change with an increase in the size of the input. | Scanning the elements of an array
log n | Logarithmic | For each iteration of the algorithm, a constant factor shortens the problem’s size. | Performing the operations of binary search
n | Linear | Algorithms that examine a list of size n. | Performing the operations of sequential search
n log n | n-log-n | Divide-and-conquer algorithms. | Sorting elements using merge sort or quick sort
n^2 | Quadratic | Algorithms with two embedded loops. | Scanning the elements of a matrix
n^3 | Cubic | Algorithms with three embedded loops. | Executing matrix multiplication
2^n | Exponential | Algorithms that generate all the subsets of an n-element set. | Generating all the subsets of n different elements
n! | Factorial | Algorithms that generate all the permutations of an n-element set. | Generating all the permutations
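To get a feel for how quickly these classes separate, the sketch below (Python for illustration) tabulates approximate operation counts for each class of table 3.1 at a few input sizes:

```python
# Rough operation counts for the efficiency classes of Table 3.1,
# showing how quickly the classes separate as n grows.

import math

classes = {
    "1":       lambda n: 1,
    "log n":   lambda n: math.log2(n),
    "n":       lambda n: n,
    "n log n": lambda n: n * math.log2(n),
    "n^2":     lambda n: n ** 2,
    "n^3":     lambda n: n ** 3,
    "2^n":     lambda n: 2 ** n,
    "n!":      lambda n: math.factorial(n),
}

for name, f in classes.items():
    print(f"{name:8}", [round(f(n)) for n in (5, 10, 20)])
```

Already at n = 20 the exponential and factorial classes dwarf every polynomial class, which is why algorithms in those classes are practical only for very small inputs.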
Activity 1
Determine a constant p such that F(n) ≤ p*h(n), where
F(n) = 2n + 3 and h(n) = n^2.
Self Assessment Questions
1. ___________ is more accurate than Big Oh notation and Omega
notation.
2. ____________ asymptotic notation is a simple notational convenience.
3. ___________ depicts the running time between the upper bound and
lower bound.
Set up a summation formula for the number of times the basic operation is
executed.
Simplify the sum using standard formulas and rules.
The commonly used summation rules are listed next.
3) Σ (i=1 to n) i^k = 1^k + 2^k + ... + n^k ≈ n^(k+1)/(k+1) ∈ Θ(n^(k+1))
4) Σ (i=0 to n) a^i = 1 + a + ... + a^n = (a^(n+1) − 1)/(a − 1) ∈ Θ(a^n) for a > 1
5) Σ (i=1 to n) c·a_i = c Σ (i=1 to n) a_i
6) Σ (i=1 to n) (a_i ± b_i) = Σ (i=1 to n) a_i ± Σ (i=1 to n) b_i
7) Σ (i=k to n) 1 = n − k + 1, where n and k are the upper and lower limits
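The closed forms above can be spot-checked with concrete values. A small sketch (Python for illustration; the values n = 50, a = 3 and k = 10 are arbitrary choices):

```python
# Spot-checking the summation rules with concrete values.

n, a, k = 50, 3, 10

# Rule 3 with exponent 2: the sum of squares has the exact closed form
# n(n+1)(2n+1)/6, which is in Theta(n^3).
assert sum(i ** 2 for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6

# Rule 4: the geometric series 1 + a + ... + a^n = (a^(n+1) - 1)/(a - 1).
assert sum(a ** i for i in range(n + 1)) == (a ** (n + 1) - 1) // (a - 1)

# Rule 7: summing 1 from i = k to n gives n - k + 1 terms.
assert sum(1 for i in range(k, n + 1)) == n - k + 1
```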
Example 1
We will now discuss an example of identifying the element which has the
minimum value in a given array.
We can find the element with the help of the general plan. Let us now see
the algorithm for finding the minimum element in an array A[0..n−1].

Min_value ← A[0]
for i ← 1 to n − 1 do
    if A[i] < Min_value then
        Min_value ← A[i]
return Min_value

The basic operation is the comparison A[i] < Min_value, which is executed
once for each value of i from 1 to n − 1. Hence
h(n) = n − 1 ∈ Θ(n)    {using the rule Σ (i=1 to n−1) 1 = n − 1}
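The minimum-finding procedure can be written as a short runnable sketch (Python is used here purely for illustration of the pseudocode's logic):

```python
# Scan the array once, keeping the smallest element seen so far.
# The basic operation, the comparison A[i] < min_value, executes
# exactly n - 1 times, giving Theta(n).

def find_min(A):
    min_value = A[0]
    for i in range(1, len(A)):
        if A[i] < min_value:
            min_value = A[i]
    return min_value

assert find_min([7, 3, 9, 1, 4]) == 1
assert find_min([5]) == 5
```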
Example 2
Let us next discuss an algorithm for counting the number of bits in an
integer.
Mathematical analysis:
Step 1: Let the size of the input be p.
Step 2: The while loop contains the basic operation and checks whether
p > 1. The loop body is executed as long as p > 1 is true; the condition is
checked once more when p > 1 becomes false, at which point the statements
inside the loop are not executed.
Step 3: The value of p gets halved whenever the loop repeats. Hence
the efficiency of the loop is log2 p.
Step 4: The total number of times the while loop condition is checked is
given by ⌊log2 p⌋ + 1.
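The loop described in steps 1 to 4 can be reconstructed as a short sketch (Python for illustration; the function name and the exact loop shape are choices made here, not reproduced from the unit):

```python
# Count the bits in the binary representation of a positive integer p
# by halving p until it reaches 1. The comparison p > 1 is the basic
# operation; the final count equals floor(log2 p) + 1.

import math

def count_bits(p):
    count = 1           # p has at least one bit
    while p > 1:        # basic operation: checked floor(log2 p) + 1 times
        count += 1
        p //= 2         # p is halved on every iteration
    return count

assert count_bits(13) == 4                                # 13 is 1101 in binary
assert count_bits(1000) == math.floor(math.log2(1000)) + 1
```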
Let us now trace the algorithm for counting the number of bits in an
integer.
The tracing for the matrix multiplication algorithm is given below.
The puzzle consists of three pegs and a number of disks of
different sizes which can slide onto any peg. Initially, we arrange all the
disks in a neat stack in ascending order of size on one peg, putting the
smallest disk at the top. This makes a conical shape, as can be seen in
figure 3.4.
Algorithm for moving the rings to one post:
If n is odd then d := clockwise else d := counterclockwise
Repeat
    Move the smallest ring one post in the direction d.
    Make the legal move that does not involve the smallest ring.
until all the rings are on the same post.
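The repeat loop above can be sketched as follows (Python for illustration; representing pegs as lists whose last element is the top ring, and numbering the pegs 0, 1, 2, are choices made here, not part of the original algorithm). The loop alternates the smallest-ring move with the only other legal move and finishes in 2^n − 1 moves:

```python
def hanoi_iterative(n):
    """Iteratively solve Towers of Hanoi for n rings; return the move count."""
    pegs = [list(range(n, 0, -1)), [], []]   # ring n at the bottom, ring 1 on top
    d = 1 if n % 2 else -1                   # odd n: clockwise, even n: counterclockwise
    moves = 0
    while len(pegs[1]) != n and len(pegs[2]) != n:
        if moves % 2 == 0:
            # move the smallest ring one peg in the direction d
            src = next(i for i in range(3) if pegs[i] and pegs[i][-1] == 1)
            dst = (src + d) % 3
        else:
            # make the only legal move that does not involve the smallest ring
            src, dst = next(
                (i, j) for i in range(3) for j in range(3)
                if i != j and pegs[i] and pegs[i][-1] != 1
                and (not pegs[j] or pegs[j][-1] > pegs[i][-1]))
        pegs[dst].append(pegs[src].pop())
        moves += 1
    return moves

assert hanoi_iterative(3) == 7    # 2^3 - 1 moves
assert hanoi_iterative(4) == 15   # 2^4 - 1 moves
```

On odd-numbered moves exactly one legal move avoids the smallest ring, which is why a single `next(...)` suffices to find it.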
Activity 2
Write an algorithm for counting even number of bits in an integer
3.4 Summary
It is very important to obtain the best algorithm for analysis. For selecting the
best algorithm, checking the efficiency of each algorithm is essential. The
shorthand way for representing time complexity is asymptotic notation.
Asymptotic notation describes the behavior of a function in the limit, that is,
for large values of its parameter. The main characteristic of this approach is
that constant factors are neglected and importance is given to the terms
in the expression (for T(n)) that dominate the function's behavior
whenever n becomes large. This helps in classifying run-time functions
into broad efficiency classes. The different types of asymptotic notations are
Big Oh notation, Omega notation and Theta notation.
We classify algorithms broadly into recursive and non-recursive algorithms.
In this unit we have analyzed non-recursive algorithms mathematically with
suitable examples. Non-recursive algorithm is an algorithm which is
performed only once to solve the problem.
3.5 Glossary
Term Description
Recursive algorithm It is an algorithm which calls itself with smaller inputs
and obtains the inputs for the current input by applying
simple operations to the returned value of the smaller
input.
Runtime The time when a program or process is being executed
is called runtime.
Notation It is the activity of representing something by a special
system of characters.
3.7 Answers
Self Assessment Questions
1. Theta notation
2. Conditional
3. Theta notation
4. Mathematical
5. Constant
6. Choice
Terminal Questions
1. Refer section 3.2.1 – Asymptotic notations
2. Refer section 3.2.1 – Asymptotic notations
3. Refer section 3.2.1 – Asymptotic notations
4. Refer section 3.2.2 – Basic efficiency classes
5. Refer section 3.3.5 – Towers of Hanoi
6. Refer section 3.3.6 – Conversion of recursive algorithm in to non
recursive algorithm