CP4151 Advanced Data Structures and Algorithms: Unit 1 Notes

Program Performance Measurement

In computer science, there are usually several algorithms for solving a given problem.

When several different algorithms can solve a problem, we evaluate the performance of each of them. Performance evaluation helps in selecting the best algorithm from a set of competing algorithms for a given problem. So, algorithm performance analysis can be described as the process of making evaluative judgments about algorithms.

Factors Determining an Algorithm's Performance

To compare algorithms, we consider a collection of parameters such as the amount of memory the algorithm requires, its execution speed, how easy it is to understand and implement, and so on. In general, an algorithm's performance is determined by the following factors:
 Does the algorithm give a correct solution to the problem?
 Is it straightforward to understand?
 Is it simple to implement?
 How much memory (space) is needed to solve the problem?
 How long does it take to solve the problem?
When we analyze an algorithm, we consider only its space and time requirements and neglect everything else. On this basis, algorithm performance analysis may also be described as a technique for determining the space and time requirements of an algorithm.

The following metrics are used to evaluate the performance of an algorithm:

 The amount of space necessary to perform the algorithm's task (Space Complexity). It consists of both program space and data space.
 The time necessary to accomplish the algorithm's task (Time Complexity).
Space Complexity

When we create a problem-solving algorithm, it demands the use of computer memory to complete its execution. Memory is necessary for the following purposes in any algorithm:
 To store program instructions.
 To store constant values.
 To store variable values.
 Additionally, for a few other things such as function calls, jump statements, and so on.
When a program is running, it typically uses computer memory for the following purposes:

 The memory needed to hold the compiled instructions, referred to as instruction space.
 The memory used to hold information about partially executed functions at the time of a function call, known as the environmental stack.
 The memory needed to hold all of the variables and constants, referred to as data space.
Example
To compute the space complexity, we need to know how much memory is required to store values of each data type (this depends on the compiler). Take a look at the following code:

int square(int a)
{
    return a * a;
}

In the preceding piece of code, variable 'a' takes up 2 bytes of memory and the return value takes up another 2 bytes (assuming a compiler where an int occupies 2 bytes), i.e., it takes a total of 4 bytes of memory to finish its execution, and this 4 bytes of memory is fixed for any input value of 'a'. This type of space complexity is called Constant Space Complexity.
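To contrast this with input-dependent space, here is a minimal sketch (sum_to_n is an illustrative helper, not part of the notes): a recursive sum keeps one frame per call on the environmental stack, so its space grows linearly with n.

#include <stdio.h>

/* Constant space: only 'a' and the return value are stored,
   regardless of the input (as with square() above). */
int square(int a) {
    return a * a;
}

/* Linear space: each recursive call adds a stack frame holding
   'n' and a return address, so roughly n frames are live at the
   deepest point of the recursion, i.e. O(n) space. */
int sum_to_n(int n) {
    if (n <= 1)
        return n;
    return n + sum_to_n(n - 1);
}

int main(void) {
    printf("%d %d\n", square(5), sum_to_n(5)); /* prints: 25 15 */
    return 0;
}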
Time Complexity

Every algorithm needs a certain amount of computer time to carry out its
instructions and complete the operation. The amount of computer time
required is referred to as time complexity. In general, an algorithm’s
execution time is determined by the following:

 Whether it is a single-processor or a multi-processor machine.
 Whether it is a 32-bit or a 64-bit machine.
 The machine's read and write speeds.
 The time taken by the algorithm to perform arithmetic, logical, return, and assignment operations, among others.
 The input data.
Example
Calculating an algorithm’s Time Complexity based on the system
configuration is a challenging undertaking since the configuration varies
from one system to the next. We must assume a model machine with a
certain setup to tackle this challenge. As a result, we can compute
generalized time complexity using that model machine. Take a look at the
following code:

int sum(int a, int b)
{
    return a + b;
}

In the preceding example code, calculating a+b takes 1 unit of time and returning the value takes 1 unit of time, i.e., it takes two units of time to perform the task, and this is unaffected by the input values of a and b. This means it takes the same amount of time, 2 units, for all input values.
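On the same model machine, a function whose work repeats for each input element takes time proportional to n. The sketch below (an assumed example, not from the notes) illustrates this: the loop body runs n times, so the cost is roughly 2n + 2 units, i.e., O(n).

#include <stdio.h>

/* On the model machine: the loop comparison/increment and the
   addition in the body each cost about 1 unit and run n times,
   so the total time grows linearly with n, i.e. O(n). */
int sum_array(const int a[], int n) {
    int total = 0;          /* 1 unit (assignment) */
    for (int i = 0; i < n; i++)
        total += a[i];      /* ~2 units per iteration */
    return total;           /* 1 unit (return) */
}

int main(void) {
    int a[] = {1, 2, 3, 4};
    printf("%d\n", sum_array(a, 4)); /* prints: 10 */
    return 0;
}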
Notation of Performance Measurement

We must compute the complexity of an algorithm if we wish to analyze it. However, the complexity alone does not reveal the actual amount of resources required. Rather than stating the precise quantity of resources, we describe the complexity in a generic form (notation) that captures the algorithm's essential behavior. This notation is termed asymptotic notation and is a mathematical representation of the algorithm's complexity. The following three asymptotic notations are used to indicate time complexity, each based on a different situation, namely, the best case, worst case, and average case:

Big – O (Big-Oh)
The upper bound of an algorithm's execution time is represented by Big-O notation. It gives the maximum time the algorithm can take over all input values and thus indicates the algorithm's worst-case time complexity.

Big – Ω (Omega)

The Omega notation represents the lower bound of an algorithm's execution time. It specifies the shortest time the algorithm requires over all input values and is therefore the best-case time complexity of the algorithm.

Big – Θ (Theta)

In Theta notation, the function is bounded from above and below. It reflects the average-case time complexity of an algorithm and defines both the upper and lower bounds of its execution time.
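A single function can illustrate all three notations. The sketch below (linear search, chosen here as an assumed illustration, not from the notes) has a best case of Ω(1), a worst case of O(n), and an average case of Θ(n).

#include <stdio.h>

/* Linear search: one function whose running time illustrates all
   three notations.
   - Best case  (key at index 0):  1 comparison  -> Omega(1)
   - Worst case (key absent):      n comparisons -> O(n)
   - Average case (key uniformly placed): ~n/2   -> Theta(n) */
int linear_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;   /* found */
    return -1;          /* not found: the worst case */
}

int main(void) {
    int a[] = {7, 3, 9, 1};
    printf("%d %d\n", linear_search(a, 4, 7),  /* best case: 0 */
                      linear_search(a, 4, 5)); /* worst case: -1 */
    return 0;
}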

Recurrence Relation
A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function, defined on the natural numbers, that satisfies the recurrence.

For example, the worst-case running time T(n) of the MERGE SORT procedure is described by the recurrence:

T(n) = θ(1)             if n = 1
T(n) = 2T(n/2) + θ(n)   if n > 1

There are four methods for solving Recurrence:

1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method

1. Substitution Method:
The Substitution Method Consists of two main steps:

1. Guess the Solution.


2. Use mathematical induction to find the boundary condition and show that the guess is correct.

Example 1: Solve the following equation by the substitution method:

T(n) = T(n/2) + 1

We have to show that it is asymptotically bounded by O(log n).

Solution:

For T(n) = O(log n), we have to show that for some constant c:

T(n) <= c log n

Put this in the given recurrence equation:

T(n) <= c log(n/2) + 1
      = c log n - c log 2 + 1
      <= c log n   for c >= 1

Thus T(n) = O(log n).

Example 2: Consider the recurrence

T(n) = 2T(n/2) + n,   n > 1

Find an asymptotic bound on T.

Solution:

We guess the solution is O(n log n). Thus, for some constant c:

T(n) <= c n log n

Put this in the given recurrence equation. Now,

T(n) <= 2c(n/2) log(n/2) + n
      = c n log n - c n log 2 + n
      = c n log n - n(c log 2 - 1)
      <= c n log n   for c >= 1

Thus T(n) = O(n log n).

2. Iteration Method
The idea is to expand the recurrence and express it as a summation of terms depending on n and the initial condition.

Example 1: Consider the recurrence

T(n) = 1          if n = 1
     = 2T(n-1)    if n > 1

Solution:

T(n) = 2T(n-1)
     = 2[2T(n-2)] = 2^2 T(n-2)
     = 4[2T(n-3)] = 2^3 T(n-3)
     = 8[2T(n-4)] = 2^4 T(n-4)    .....(Eq. 1)

Repeating the procedure i times:

T(n) = 2^i T(n-i)
Put n - i = 1, i.e. i = n - 1, in (Eq. 1):
T(n) = 2^(n-1) T(1)
     = 2^(n-1) * 1    {T(1) = 1, given}
     = 2^(n-1)

Example 2: Consider the recurrence

T(n) = T(n-1) + 1,   T(1) = θ(1)

Solution:

T(n) = T(n-1) + 1
     = (T(n-2) + 1) + 1 = T(n-2) + 2
     = (T(n-3) + 1) + 2 = T(n-3) + 3
     = .....
     = T(n-k) + k
Where k = n - 1:
T(n-k) = T(1) = θ(1)
T(n) = θ(1) + (n-1) = 1 + n - 1 = n = θ(n)
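As a quick sanity check on the two closed forms above, the following sketch (an illustrative program, with T(1) taken as 1 in both cases) evaluates both recurrences directly and compares them with 2^(n-1) and n.

#include <stdio.h>

/* Example 1: T(n) = 2T(n-1), T(1) = 1; closed form 2^(n-1). */
long t1(int n) { return n == 1 ? 1 : 2 * t1(n - 1); }

/* Example 2: T(n) = T(n-1) + 1, T(1) = 1; closed form n. */
long t2(int n) { return n == 1 ? 1 : t2(n - 1) + 1; }

int main(void) {
    for (int n = 1; n <= 6; n++)
        printf("n=%d  T1=%ld (2^(n-1)=%ld)  T2=%ld\n",
               n, t1(n), 1L << (n - 1), t2(n));
    return 0;
}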

Recursion Tree Method


Recursion is a fundamental concept in computer science and mathematics that allows functions to call themselves, enabling complex problems to be solved by reduction to smaller instances. One visual representation commonly used to understand and analyze the execution of recursive functions is the recursion tree. In this section, we explore the theory behind recursion trees, their structure, and their significance in understanding recursive algorithms.

What is a Recursion Tree?


A recursion tree is a graphical representation that illustrates the execution flow of a
recursive function. It provides a visual breakdown of recursive calls, showcasing the
progression of the algorithm as it branches out and eventually reaches a base case.
The tree structure helps in analyzing the time complexity and understanding the
recursive process involved.

Tree Structure
Each node in a recursion tree represents a particular recursive call. The initial call is
depicted at the top, with subsequent calls branching out beneath it. The tree grows
downward, forming a hierarchical structure. The branching factor of each node
depends on the number of recursive calls made within the function. Additionally, the
depth of the tree corresponds to the number of recursive calls before reaching the
base case.

Base Case
The base case serves as the termination condition for a recursive function. It defines
the point at which the recursion stops and the function starts returning values. In a
recursion tree, the nodes representing the base case are usually depicted as leaf
nodes, as they do not result in further recursive calls.

Recursive Calls
The child nodes in a recursion tree represent the recursive calls made within the function. Each child node corresponds to a separate recursive call, resulting in the creation of new subproblems. The values or parameters passed to these recursive calls may differ, leading to variations in the subproblems' characteristics.

Execution Flow:
Traversing a recursion tree provides insights into the execution flow of a recursive
function. Starting from the initial call at the root node, we follow the branches to
reach subsequent calls until we encounter the base case. As the base cases are
reached, the recursive calls start to return, and their respective nodes in the tree are
marked with the returned values. The traversal continues until the entire tree has
been traversed.

Time Complexity Analysis


Recursion trees aid in analyzing the time complexity of recursive algorithms. By
examining the structure of the tree, we can determine the number of recursive calls
made and the work done at each level. This analysis helps in understanding the
overall efficiency of the algorithm and identifying any potential inefficiencies or
opportunities for optimization.

Introduction
o Think of a program that determines a number's factorial. This function takes a
number N as an input and returns the factorial of N as a result. This function's
pseudo-code will resemble,

// find factorial of a number
factorial(n) {
    // Base case: factorial of 0 and 1 is 1
    if n is less than 2:
        return 1

    // Recursive step
    return n * factorial(n-1)   // Factorial of 5 => 5 * Factorial(4)...
}

/* How function calls are made,

Factorial(5) [ 120 ]
|
5 * Factorial(4) ==> 120
|
4 * Factorial(3) ==> 24
|
3 * Factorial(2) ==> 6
|
2 * Factorial(1) ==> 2
|
1
*/
o The function mentioned above is an example of recursion. We invoke a function to determine a number's factorial, and this function then calls itself with a smaller value of the same number. This continues until we reach the base case, where no more function calls are made.
o Recursion is a technique for handling complicated problems in which the outcome depends on the outcomes of smaller instances of the same problem.
o In terms of functions, a function is said to be recursive if it keeps calling itself until it reaches the base case.
o Any recursive function has two primary components: the base case and the recursive step. We stop performing recursive steps once we reach the base case. Base cases must be properly defined; they are crucial for preventing endless recursion. Infinite recursion is a recursion that never reaches the base case, and if a program never reaches the base case, a stack overflow will occur.

Recursion Types
Generally speaking, there are two different forms of recursion:

o Linear Recursion
o Tree Recursion

Linear Recursion
o A function that calls itself just once each time it executes is said to be linearly
recursive. A nice illustration of linear recursion is the factorial function. The name
"linear recursion" refers to the fact that a linearly recursive function takes a linear
amount of time to execute.
o Take a look at the pseudo-code below:

function doSomething(n) {
    // base case to stop recursion
    if n is 0:
        return
    // here are some instructions
    // recursive step
    doSomething(n-1)
}
o If we look at the function doSomething(n), it accepts a parameter named n and does
some calculations before calling the same procedure once more but with lower values.
o When the method doSomething() is called with the argument value n, let's say that
T(n) represents the total amount of time needed to complete the computation. For this,
we can also formulate a recurrence relation, T(n) = T(n-1) + K. K serves as a constant
here. Constant K is included because it takes time for the function to allocate or de-
allocate memory to a variable or perform a mathematical operation. We use K to define
the time since it is so minute and insignificant.
o The time complexity of this recursive program is easy to calculate since, in the worst case, the method doSomething() is called n times. Formally, the function's time complexity is O(N), as the sketch below demonstrates.
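The linear call count can be checked directly. The following C sketch (an assumed translation of the pseudocode above, with a hypothetical call counter added for illustration) shows that the function runs n + 1 times, i.e., O(N):

#include <stdio.h>

static long calls = 0; /* counts how many times doSomething runs */

/* A C rendering of the pseudocode above: one recursive call per
   activation, so the call count is exactly n + 1, which is linear. */
void doSomething(int n) {
    calls++;
    if (n == 0)
        return;          /* base case */
    /* ... some constant-time work (the K in T(n) = T(n-1) + K) ... */
    doSomething(n - 1);  /* single recursive step */
}

int main(void) {
    doSomething(10);
    printf("calls = %ld\n", calls); /* prints: calls = 11, i.e. n + 1 */
    return 0;
}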

Tree Recursion
o When you make more than one recursive call in your recursive case, it is referred to as tree recursion. An effective illustration of tree recursion is the Fibonacci sequence. Tree-recursive functions operate in exponential time; they are not linear in their time complexity.
o Take a look at the pseudo-code below,

function doSomething(n) {
    // base case to stop recursion
    if n is less than 2:
        return n
    // here are some instructions
    // recursive step
    return doSomething(n-1) + doSomething(n-2)
}
o The only difference between this code and the previous one is that this one makes one
more call to the same function with a lower value of n.
o Let's put T(n) = T(n-1) + T(n-2) + k as the recurrence relation for this function. K serves
as a constant once more.
o When more than one call to the same function with smaller values is made, this sort of recursion is known as tree recursion. Now for the intriguing part: how much time does this function take?
o Take a guess based on the recursion tree for the same function.

o It may occur to you that it is challenging to estimate the time complexity by looking directly at a recursive function, particularly when it is a tree recursion. The Recursion Tree Method is one of several techniques for calculating the time complexity of such functions. Let's examine it in further detail.
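Before analyzing it formally, we can measure the growth empirically. The following C sketch (an assumed translation of the pseudocode above, with a hypothetical call counter added for illustration) shows how quickly the number of calls explodes:

#include <stdio.h>

static long calls = 0; /* counts activations of the function */

/* C rendering of the tree-recursive pseudocode above (the Fibonacci
   pattern): two recursive calls per activation, so the number of
   calls grows exponentially with n (roughly as 1.618^n). */
long doSomething(int n) {
    calls++;
    if (n < 2)
        return n;   /* base case */
    return doSomething(n - 1) + doSomething(n - 2);
}

int main(void) {
    for (int n = 5; n <= 25; n += 5) {
        calls = 0;
        long r = doSomething(n);
        printf("n=%2d  result=%ld  calls=%ld\n", n, r, calls);
    }
    return 0;
}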

What Is the Recursion Tree Method?

o Recurrence relations like T(N) = T(N/2) + N, or the two we covered earlier in the section on types of recursion, are solved using the recursion tree approach. Such recurrence relations often arise from a divide-and-conquer strategy.
o When a larger problem is broken down into smaller subproblems, it also takes time to combine the answers to those subproblems.
o For instance, the recurrence relation for merge sort is T(N) = 2 * T(N/2) + O(N): combining the answers to the two subproblems, each of size N/2, takes O(N) time, which is true at the implementation level as well.
o Similarly, the recurrence relation for binary search is T(N) = T(N/2) + 1: each iteration of binary search cuts the search space in half, and once the outcome is determined, we exit the function. The +1 in the recurrence accounts for this constant-time work.
o Consider the recurrence relation T(n) = 2T(n/2) + Kn, where Kn denotes the time required to combine the answers to the two subproblems of size n/2.
o Let's depict the recursion tree for this recurrence relation: the root is a problem of size n, and each node splits into two children of half the size. Studying the recursion tree, we may draw a few conclusions:

1. Only the size of the problem at each level matters for determining the value of a node. The problem size is n at level 0, n/2 at level 1, n/4 at level 2, and so on.

2. In general, we define the height of the tree as log(n), where n is the size of the problem; the height of this recursion tree equals the number of levels in the tree. This is true because, as we just established, these recurrence relations use a divide-and-conquer strategy, and getting from problem size n to problem size 1 requires only log(n) halving steps.

 Consider the value N = 16, for instance. If we are permitted to divide N by 2 at each step, how many steps are required to get to N = 1? Since we divide by two at each step, the answer is 4, which is log(16) base 2:

log(16) base 2
= log(2^4) base 2
= 4 * log(2) base 2,  since log(a) base a = 1
= 4

3. At each level, the second term in the recurrence gives the cost of the nodes at that level; at the root it is evaluated on the full problem size.

Although the word "tree" appears in the name of this strategy, you don't need to be
an expert on trees to comprehend it.
How to Use a Recursion Tree to Solve Recurrence Relations?
In the recursion tree technique, the cost of a subproblem is the time needed to solve it. Therefore, whenever you see the word "cost" linked with the recursion tree, it simply refers to the amount of time needed to solve a certain subproblem.

Let's understand all of these steps with a few examples.

Example

Consider the recurrence relation,

T(n) = 2T(n/2) + K

Solution

The given recurrence relation shows the following properties,

A problem of size n is divided into two subproblems, each of size n/2. The cost of combining the solutions to these subproblems is K.

Each problem of size n/2 is divided into two subproblems, each of size n/4, and so on.

At the last level, the subproblem size is reduced to 1. In other words, we finally hit the base case.

Let's follow the steps to solve this recurrence relation,

Step 1: Draw the Recursion Tree


Step 2: Calculate the Height of the Tree

When we continuously divide a number by 2, it is eventually reduced to 1. The same holds for the problem size n: suppose that after k divisions by 2, n becomes 1, which implies (n / 2^k) = 1.

Here n / 2^k is the problem size at the last level, and it is always equal to 1.

Now we can easily calculate the value of k from the above expression by taking log() on both sides. Below is the derivation:

n = 2^k

o log(n) = log(2^k)
o log(n) = k * log(2)
o k = log(n) / log(2)
o k = log(n) base 2

So the height of the tree is log (n) base 2.

Step 3: Calculate the cost at each level

o Cost at level 0 = K (one merge of two subproblems, at the root).
o Cost at level 1 = K + K = 2*K (two nodes, each merging two subproblems).
o Cost at level 2 = K + K + K + K = 4*K (four nodes), and so on....
Step 4: Calculate the number of nodes at each level

Let's first determine the number of nodes in the last level. From the recursion tree, we
can deduce this

o Level 0 has 1 (2^0) node
o Level 1 has 2 (2^1) nodes
o Level 2 has 4 (2^2) nodes
o Level 3 has 8 (2^3) nodes

So level log(n) should have 2^(log(n)) nodes, i.e. n nodes.

Step 5: Sum up the cost of all the levels

o The total cost can be written as,


o Total Cost = Cost of all levels except last level + Cost of last level
o Total Cost = Cost for level-0 + Cost for level-1 + Cost for level-2 +.... + Cost for
level-log(n) + Cost for last level

The cost of the last level is calculated separately because it is the base case and no
merging is done at the last level so, the cost to solve a single problem at this level is
some constant value. Let's take it as O (1).

Let's put the values into the formulae,

o T(n) = K + 2*K + 4*K + .... (log(n) terms) + O(1) * n
o T(n) = K(1 + 2 + 4 + .... log(n) terms) + O(n)
o T(n) = K(2^0 + 2^1 + 2^2 + .... + 2^(log(n) - 1)) + O(n)

If you take a close look at the above expression, it forms a geometric progression (a, ar, ar^2, ar^3, ......). The sum of the first m terms of a GP is S = a(r^m - 1)/(r - 1), where a is the first term and r is the common ratio. Here a = K, r = 2 and m = log(n), so the sum is K(2^log(n) - 1) = K(n - 1). Therefore,

o T(n) = K(n - 1) + O(n) = O(n)
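As a numeric check of this conclusion, the following sketch (an illustrative program, with T(1) = 1 and K = 3 chosen arbitrarily) evaluates T(n) = 2T(n/2) + K directly and compares it with the predicted K(n - 1) + n:

#include <stdio.h>

#define K 3  /* arbitrary constant merge cost, chosen for the check */

/* T(n) = 2T(n/2) + K with T(1) = 1, for n a power of two.
   The level-by-level sum above predicts T(n) = K(n - 1) + n,
   i.e. linear in n. */
long t(long n) { return n == 1 ? 1 : 2 * t(n / 2) + K; }

int main(void) {
    for (long n = 1; n <= 64; n *= 2)
        printf("n=%2ld  T(n)=%3ld  K*(n-1)+n=%3ld\n",
               n, t(n), K * (n - 1) + n);
    return 0;
}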

Master Method
The master method is used for solving the following type of recurrence:

T(n) = aT(n/b) + f(n)

with a >= 1 and b > 1 constant and f(n) a function, where T(n) is defined on the non-negative integers.

In the analysis of a recursive algorithm, the constants and function take on the following significance:

o n is the size of the problem.
o a is the number of subproblems in the recursion.
o n/b is the size of each subproblem. (Here it is assumed that all subproblems are essentially the same size.)
o f(n) is the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of combining the solutions to the subproblems.
o It is not always possible to bound the function as required, so we distinguish three cases that tell us what kind of bound we can place on T(n).

Master Theorem:
An asymptotically tight bound can be obtained in each of these three cases:

Case 1: If f(n) = O(n^(log_b(a) - ε)) for some constant ε > 0, then it follows that:

T(n) = Θ(n^log_b(a))

Example:
T(n) = 8T(n/2) + 1000n^2. Apply the master theorem to it.

Solution:

Compare T(n) = 8T(n/2) + 1000n^2 with

T(n) = aT(n/b) + f(n)

a = 8, b = 2, f(n) = 1000n^2, log_b(a) = log_2(8) = 3

Put all the values into the case 1 condition f(n) = O(n^(log_b(a) - ε)):

1000n^2 = O(n^(3-ε))

If we choose ε = 1, we get: 1000n^2 = O(n^(3-1)) = O(n^2)

Since this equation holds, the first case of the master theorem applies to the given recurrence relation, thus resulting in the conclusion:

T(n) = Θ(n^log_b(a))
Therefore: T(n) = Θ(n^3)

Case 2: If it is true, for some constant k >= 0, that:

f(n) = Θ(n^log_b(a) * (log n)^k), then it follows that: T(n) = Θ(n^log_b(a) * (log n)^(k+1))

Example:

T(n) = 2T(n/2) + 10n. Solve the recurrence by using the master method.

Compare the given problem with T(n) = aT(n/b) + f(n):

a = 2, b = 2, k = 0, f(n) = 10n, log_b(a) = log_2(2) = 1

Put all the values into f(n) = Θ(n^log_b(a) * (log n)^k); we get:

10n = Θ(n^1) = Θ(n), which is true.

Therefore: T(n) = Θ(n^log_b(a) * log n)
= Θ(n log n)
Case 3: If it is true that f(n) = Ω(n^(log_b(a) + ε)) for some constant ε > 0, and it is also true that a * f(n/b) <= c * f(n) for some constant c < 1 and all sufficiently large n, then:

T(n) = Θ(f(n))

Example: Solve the recurrence relation:

T(n) = 2T(n/2) + n^2

Solution:

Compare the given problem with T(n) = aT(n/b) + f(n):

a = 2, b = 2, f(n) = n^2, log_b(a) = log_2(2) = 1

Put all the values into f(n) = Ω(n^(log_b(a) + ε)) ..... (Eq. 1)

If we insert all the values into (Eq. 1), we get:

n^2 = Ω(n^(1+ε)); put ε = 1, then the equality will hold:
n^2 = Ω(n^(1+1)) = Ω(n^2)

Now we will also check the second condition:

2 * (n/2)^2 = n^2 / 2 <= c * n^2

If we choose c = 1/2, it is true for all n >= 1.

So it follows: T(n) = Θ(f(n))
T(n) = Θ(n^2)

Different Types of Recurrence Relations and Their Solutions

We will now see how we can solve different types of recurrence relations using different approaches. Before reading this section, you should have an idea of recurrence relations and the different methods to solve them (see: Worst, Average and Best Cases; Asymptotic Notations; Analysis of Loops).
Type 1: Divide and conquer recurrence relations –
Following are some of the examples of recurrence relations based on divide
and conquer.
T(n) = 2T(n/2) + cn
T(n) = 2T(n/2) + √n
These types of recurrence relations can be easily solved using Master
Method.
For the recurrence relation T(n) = 2T(n/2) + cn, the values are a = 2, b = 2 and k = 1. Here log_b(a) = log_2(2) = 1 = k. Therefore, the complexity will be Θ(n log n).
Similarly, for the recurrence relation T(n) = 2T(n/2) + √n, the values are a = 2, b = 2 and k = 1/2. Here log_b(a) = log_2(2) = 1 > k. Therefore, the complexity will be Θ(n).
Type 2: Linear recurrence relations –
Following are some of the examples of recurrence relations based on linear
recurrence relation.
T(n) = T(n-1) + n for n>0 and T(0) = 1
These types of recurrence relations can be easily solved using substitution
method.
For example,
T(n) = T(n-1) + n
     = T(n-2) + (n-1) + n
     = T(n-k) + (n-(k-1)) + ..... + (n-1) + n
Substituting k = n, we get
T(n) = T(0) + 1 + 2 + ..... + n = 1 + n(n+1)/2 = O(n^2)
Type 3: Value substitution before solving –
Sometimes, recurrence relations can’t be directly solved using techniques like
substitution, recurrence tree or master method. Therefore, we need to convert
the recurrence relation into appropriate form before solving. For example,
T(n) = T(√n) + 1
To solve this type of recurrence, substitute n = 2^m:
T(2^m) = T(2^(m/2)) + 1
Let T(2^m) = S(m); then
S(m) = S(m/2) + 1
Solving by the master method, we get
S(m) = Θ(log m)
As n = 2^m, i.e. m = log2(n),
T(n) = T(2^m) = S(m) = Θ(log m) = Θ(log log n)
Let us discuss some questions based on the approaches discussed.
Que – 1. What is the time complexity of the Tower of Hanoi problem?
(A) T(n) = O(sqrt(n))
(B) T(n) = O(n^2)
(C) T(n) = O(2^n)
(D) None
Solution: For the Tower of Hanoi, T(n) = 2T(n-1) + c for n > 1 and T(1) = 1.
Solving this,
T(n) = 2T(n-1) + c
= 2(2T(n-2) + c) + c = 2^2 * T(n-2) + (c + 2c)
= 2^k * T(n-k) + (c + 2c + .. + 2^(k-1)c)
Substituting k = (n-1), we get
T(n) = 2^(n-1) * T(1) + c(2^(n-1) - 1) = O(2^n), so option (C) is correct.
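The recurrence can also be verified by running the algorithm itself. The following C sketch (an illustrative implementation with a hypothetical move counter) confirms that Tower of Hanoi performs exactly 2^n - 1 moves:

#include <stdio.h>

static long moves = 0; /* total single-disk moves performed */

/* Move n disks from 'from' to 'to' using 'via' as the spare peg.
   The recurrence is T(n) = 2T(n-1) + 1, so moves = 2^n - 1. */
void hanoi(int n, char from, char to, char via) {
    if (n == 0)
        return;
    hanoi(n - 1, from, via, to);
    moves++;                     /* move disk n from 'from' to 'to' */
    hanoi(n - 1, via, to, from);
}

int main(void) {
    for (int n = 1; n <= 10; n++) {
        moves = 0;
        hanoi(n, 'A', 'C', 'B');
        printf("n=%2d  moves=%4ld  2^n-1=%4ld\n",
               n, moves, (1L << n) - 1);
    }
    return 0;
}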
Que – 2. Consider the following recurrence:
T(n) = 2 * T(ceil(sqrt(n))) + 1, T(1) = 1
Which one of the following is true?
(A) T(n) = Θ(log log n)
(B) T(n) = Θ(log n)
(C) T(n) = Θ(sqrt(n))
(D) T(n) = Θ(n)
Solution: To solve this type of recurrence, substitute n = 2^m:
T(2^m) = 2T(2^(m/2)) + 1
Let T(2^m) = S(m); then
S(m) = 2S(m/2) + 1
Solving by the master method, we get
S(m) = Θ(m)
As n = 2^m, i.e. m = log2(n),
T(n) = T(2^m) = S(m) = Θ(m) = Θ(log n). Option (B) is correct.

Practice Set for Recurrence Relations


Que-1. Solve the following recurrence relation?
T(n) = 7T(n/2) + 3n^2 + 2
(a) O(n^2.8)
(b) O(n^3)
(c) θ(n^2.8)
(d) θ(n^3)
Explanation –
T(n) = 7T(n/2) + 3n^2 + 2
As one can see from the formula above:
a = 7, b = 2, and f(n) = 3n^2 + 2
So, f(n) = O(n^c), where c = 2.
It falls in case 1 of the master theorem:
log_b(a) = log_2(7) = 2.81 > 2
It follows from the first case of the master theorem that T(n) = θ(n^2.81), i.e. θ(n^2.8), which also implies O(n^2.8) as well as O(n^3).
Therefore, options (a), (b), and (c) are all correct.
Que-2. Sort the following functions in the decreasing order of their
asymptotic (big-O) complexity:
f1(n) = n^√n , f2(n) = 2^n, f3(n) = (1.000001)^n , f4(n) = n^(10)*2^(n/2)
(a) f2> f4> f1> f3
(b) f2> f4> f3> f1
(c) f1> f2> f3> f4
(d) f2> f1> f4> f3
Explanation –
f2 > f4 because we can write f2(n) = 2^(n/2) * 2^(n/2) and f4(n) = n^10 * 2^(n/2); since 2^(n/2) grows faster than n^10, this clearly shows f2 > f4.
f4 > f3 because we can write f4(n) = n^10 * (√2)^n = n^10 * (1.414)^n, and (1.414)^n grows faster than (1.000001)^n, which clearly shows f4 > f3.
f3 > f1:
f1(n) = n^√n; take log on both sides: log f1 = √n log n
f3(n) = (1.000001)^n; take log on both sides: log f3 = n log(1.000001), which we can write as log f3 = √n * √n * log(1.000001), and √n * log(1.000001) > log n for large n.
So, the correct order is f2 > f4 > f3 > f1. Option (b) is correct.
Que-3. f(n) = 2^(2n)
Which of the following correctly represents the above function?
(a) O(2^n)
(b) Ω(2^n)
(c) Θ(2^n)
(d) None of these
Explanation – f(n) = 2^(2n) = 2^n * 2^n
Option (a) says f(n) <= c * 2^n, which is not true. Option (c) says c1 * 2^n <= f(n) <= c2 * 2^n; the lower bound is satisfied but the upper bound is not. Option (b) says c * 2^n <= f(n); this condition is satisfied, hence option (b) is correct.
Que-4. Master’s theorem can be applied on which of the following recurrence
relation?
(a) T (n) = 2T (n/2) + 2^n
(b) T (n) = 2T (n/3) + sin(n)
(c) T (n) = T (n-2) + 2n^2 + 1
(d) None of these
Explanation – The master theorem can be applied to recurrence relations of the following types:
T(n) = aT(n/b) + f(n) (dividing function) and T(n) = aT(n-b) + f(n) (decreasing function)
Option (a) is wrong because, to apply the master theorem, the function f(n) should be polynomial.
Option (b) is wrong because, to apply the master theorem, f(n) should be a monotonically increasing function.
Option (c) is of the decreasing-function form: T(n) = T(n-2) + 2n^2 + 1 can be treated as T(n) = T(n-2) + 2n^2. Therefore, the correct answer is (c).
Que-5. T(n) = 3T(n/2+ 47) + 2n^2 + 10*n – 1/2. T(n) will be
(a) O(n^2)
(b) O(n^(3/2))
(c) O(n log n)
(d) None of these
Explanation – For higher values of n, n/2 >> 47, so we can ignore 47; now T(n) will be
T(n) = 3T(n/2) + 2*n^2 + 10*n - 1/2 = 3T(n/2) + O(n^2)
Applying the master theorem, this is case 3 (log_2(3) = 1.58 < 2), so T(n) = O(n^2).
Option (a) is correct.
