ADS Unit 1
int square(int a)
{
    return a*a;
}
In the preceding piece of code, the variable 'a' takes up 2 bytes of memory and
the return value takes up another 2 bytes (assuming 2-byte integers), i.e., it takes
a total of 4 bytes of memory to finish its execution, and this 4-byte requirement
is fixed for any input value of 'a'. This type of space complexity is called
Constant Space Complexity.
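The 2-byte figure assumes 16-bit integers; on most modern platforms an int occupies
4 bytes. A minimal C sketch (added here for illustration, not part of the original
text) that checks this and exercises the function:

#include <stdio.h>

int square(int a)
{
    return a * a;
}

int main(void)
{
    /* The space square() needs is one int for 'a' plus one int for
       the return value -- a fixed amount for any input. */
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    printf("square(5) = %d\n", square(5));
    return 0;
}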
Time Complexity
Every algorithm needs a certain amount of computer time to carry out its
instructions and complete the operation. The amount of computer time
required is referred to as time complexity. In general, an algorithm's execution time
depends on factors such as the machine it runs on, the compiler, and the size of the
input. Consider the following function:
int sum(int a, int b)
{
    return a + b;
}
In the preceding example code, calculating a+b takes 1 unit of time and returning
the value takes another unit, i.e., it takes two units of time to perform the task,
unaffected by the input values of a and b. This indicates it takes the same amount
of time, 2 units, for all input values.
Notation of Performance Measurement
Big – O (Big-Oh)
Big-O notation represents the upper bound of an algorithm's execution time. It is
the maximum time an algorithm can take over all inputs of a given size, i.e., it
indicates an algorithm's worst-case time complexity.
Big – Ω (Omega)
Omega notation represents the lower bound of an algorithm's execution time. It is
the minimum time an algorithm can take over all inputs of a given size, i.e., it
indicates an algorithm's best-case time complexity.
Big – Θ (Theta)
Theta notation encloses the function from above and below. Since it defines both
the upper and lower bounds of an algorithm's execution time, it reflects the
average-case time complexity.
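As a worked illustration (added here, not part of the original text), consider
f(n) = 3n^2 + 2n:

3n^2 + 2n ≤ 5n^2 for all n ≥ 1, so f(n) = O(n^2)
3n^2 + 2n ≥ 3n^2 for all n ≥ 1, so f(n) = Ω(n^2)

Since the same bound holds from above and below, f(n) = Θ(n^2).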
Recurrence Relation
A recurrence is an equation or inequality that describes a function in terms of its
values on smaller inputs. To solve a recurrence relation means to obtain a function
defined on the natural numbers that satisfies the recurrence.
For example, the worst-case running time T(n) of the MERGE SORT procedure is
described by the recurrence:

T (n) = θ(1)               if n = 1
T (n) = 2T (n/2) + θ(n)    if n > 1

There are four methods for solving recurrence relations:
1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method
1. Substitution Method:
The Substitution Method consists of two main steps:
1. Guess the form of the solution.
2. Use mathematical induction to find the constants and show that the solution works.
Example 1: Solve T (n) = T (n/2) + 1 by the substitution method.

Solution:
1. Guess the solution: T (n) = O(log n), i.e., T (n) ≤ c log n for some constant c > 0.
2. Substitute the guess into the recurrence (log base 2):
T (n) ≤ c log(n/2) + 1
= c log n - c log 2 + 1
≤ c log n for c ≥ 1
Thus T (n) = O(log n).
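A familiar function whose running time satisfies T (n) = T (n/2) + 1 is binary
search, which does constant work and then recurses on half of the range. A minimal
C sketch (an illustrative example, not part of the original text):

/* Each call does O(1) work and recurses on half the range:
   T(n) = T(n/2) + 1, i.e., O(log n) time. */
int binary_search(const int arr[], int lo, int hi, int key)
{
    if (lo > hi)
        return -1;                   /* base case: key not present */
    int mid = lo + (hi - lo) / 2;
    if (arr[mid] == key)
        return mid;
    if (arr[mid] < key)
        return binary_search(arr, mid + 1, hi, key);
    return binary_search(arr, lo, mid - 1, key);
}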
Example 2: Solve T (n) = 2T (n/2) + n, n > 1, by the substitution method.

Solution:
Guess the solution: T (n) = O(n log n), i.e., T (n) ≤ c n log n for some constant c > 0.
Substitute the guess into the recurrence (log base 2):
T (n) ≤ 2c(n/2) log(n/2) + n
= c n (log n - 1) + n
= c n log n - c n + n
≤ c n log n for c ≥ 1
Thus T (n) = O(n log n).
2. Iteration Method:
This method expands the recurrence and expresses it as a summation of terms of n and
the initial condition.
Example 1:
T (n) = 1           if n = 1
      = 2T (n-1)    if n > 1
Solution:
T (n) = 2T (n-1)
= 2[2T (n-2)] = 2^2 T (n-2)
= 4[2T (n-3)] = 2^3 T (n-3)
= 8[2T (n-4)] = 2^4 T (n-4)    ...(Eq.1)

Repeating the procedure for i steps:
T (n) = 2^i T (n-i)
Put n-i = 1, i.e., i = n-1, in (Eq.1):
T (n) = 2^(n-1) T (1)
= 2^(n-1) · 1    {T (1) = 1, given}
= 2^(n-1)
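A function with this running time makes two recursive calls on an input one smaller.
A minimal C sketch (hypothetical name, added for illustration):

/* The number of base-case calls C(n) satisfies C(n) = 2C(n-1)
   with C(1) = 1, giving C(n) = 2^(n-1) -- the recurrence above. */
void doTwice(int n)
{
    if (n == 1)
        return;          /* base case */
    doTwice(n - 1);      /* first recursive call */
    doTwice(n - 1);      /* second recursive call */
}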
Example 2:
T (n) = T (n-1) + 1 and T (1) = θ(1)

Solution:
T (n) = T (n-1) + 1
= (T (n-2) + 1) + 1 = T (n-2) + 2
= (T (n-3) + 1) + 2 = T (n-3) + 3
= T (n-4) + 4
= T (n-5) + 5
...
= T (n-k) + k

Where k = n-1:
T (n-k) = T (1) = θ(1)
T (n) = θ(1) + (n-1) = 1 + n - 1 = n = θ(n).
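A function with this behaviour does one unit of work and then makes a single
recursive call on an input one smaller, e.g. a countdown printer. A minimal C
sketch (illustrative, not from the original text):

#include <stdio.h>

/* One unit of work plus a call on n-1: T(n) = T(n-1) + 1 = θ(n). */
void countdown(int n)
{
    if (n == 1) {        /* base case: T(1) = θ(1) */
        printf("1\n");
        return;
    }
    printf("%d\n", n);
    countdown(n - 1);
}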
Tree Structure
Each node in a recursion tree represents a particular recursive call. The initial call is
depicted at the top, with subsequent calls branching out beneath it. The tree grows
downward, forming a hierarchical structure. The branching factor of each node
depends on the number of recursive calls made within the function. Additionally, the
depth of the tree corresponds to the number of recursive calls before reaching the
base case.
Base Case
The base case serves as the termination condition for a recursive function. It defines
the point at which the recursion stops and the function starts returning values. In a
recursion tree, the nodes representing the base case are usually depicted as leaf
nodes, as they do not result in further recursive calls.
Recursive Calls
The child nodes in a recursion tree represent the recursive calls made within the
function. Each child node corresponds to a separate recursive call, resulting in the
creation of new sub-problems. The values or parameters passed to these recursive
calls may differ, leading to variations in the sub-problems' characteristics.
Execution Flow
Traversing a recursion tree provides insights into the execution flow of a recursive
function. Starting from the initial call at the root node, we follow the branches to
reach subsequent calls until we encounter the base case. As the base cases are
reached, the recursive calls start to return, and their respective nodes in the tree are
marked with the returned values. The traversal continues until the entire tree has
been traversed.
Introduction
o Think of a program that determines a number's factorial. This function takes a
number N as an input and returns the factorial of N as a result. This function's
pseudo-code will resemble,
function factorial(n) {
    if n is 0: return 1;  // base case
    // Recursive step
    return n * factorial(n-1); // Factorial of 5 => 5 * Factorial(4)...
}
Factorial(5) [ 120 ]
|
5 * Factorial(4) ==> 120
|
4 * Factorial(3) ==> 24
|
3 * Factorial(2) ==> 6
|
2 * Factorial(1) ==> 2
|
1
o The function above exemplifies recursion. We invoke a function to determine a
number's factorial, and this function then calls itself with a smaller value of the
same number. This continues until we reach the base case, at which point no more
function calls are made.
o Recursion is a technique for handling complicated problems whose outcome depends
on the outcomes of smaller instances of the same problem.
o If we think about functions, a function is said to be recursive if it keeps calling itself
until it reaches the base case.
o Any recursive function has two primary components: the base case and the recursive
step. We stop performing the recursive step once we reach the base case. Base cases
must be properly defined; they are crucial for preventing endless recursion. Infinite
recursion is recursion that never reaches the base case, and a program that never
reaches its base case eventually causes a stack overflow.
Recursion Types
Generally speaking, there are two different forms of recursion:
o Linear Recursion
o Tree Recursion
Linear Recursion
o A function that calls itself just once each time it executes is said to be linearly
recursive. A nice illustration of linear recursion is the factorial function. The name
"linear recursion" refers to the fact that a linearly recursive function takes a linear
amount of time to execute.
o Take a look at the pseudo-code below:
function doSomething(n) {
    // base case to stop recursion
    if n is 0:
        return
    // here are some instructions
    // recursive step
    doSomething(n-1);
}
o If we look at the function doSomething(n), it accepts a parameter named n and does
some calculations before calling the same procedure once more but with lower values.
o When the method doSomething() is called with the argument value n, let's say that
T(n) represents the total amount of time needed to complete the computation. For this,
we can also formulate a recurrence relation, T(n) = T(n-1) + K. K serves as a constant
here. Constant K is included because the function takes some time to allocate or de-
allocate memory for a variable or to perform a mathematical operation. We denote this
time by K because it is small and effectively constant.
o This recursive program's time complexity is simple to calculate since, in the worst
case, the method doSomething() is called n times. Formally speaking, the function's
time complexity is O(N).
Tree Recursion
o When you make more than one recursive call in your recursive case, it is referred to
as tree recursion. The Fibonacci sequence is an effective illustration of tree
recursion. Tree recursive functions operate in exponential time; they are not linear
in their time complexity.
o Take a look at the pseudo-code below,
function doSomething(n) {
    // base case to stop recursion
    if n is less than 2:
        return n;
    // here are some instructions
    // recursive step
    return doSomething(n-1) + doSomething(n-2);
}
o The only difference between this code and the previous one is that this one makes one
more call to the same function with a lower value of n.
o Let's put T(n) = T(n-1) + T(n-2) + K as the recurrence relation for this function.
K serves as a constant once more.
o When more than one call to the same function with smaller values is made, this
sort of recursion is known as tree recursion. Now for the intriguing question: how
time-consuming is this function?
o Take a guess based on the recursion tree below for the same function.
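Since the original figure is not reproduced here, a text sketch (added for
illustration) of the call tree for doSomething(4):

doSomething(4)
├── doSomething(3)
│   ├── doSomething(2)
│   │   ├── doSomething(1)   (base case)
│   │   └── doSomething(0)   (base case)
│   └── doSomething(1)       (base case)
└── doSomething(2)
    ├── doSomething(1)       (base case)
    └── doSomething(0)       (base case)

The number of calls roughly doubles at each level, which is why the running time
is exponential.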
o It may occur to you that it is challenging to estimate the time complexity by looking
directly at a recursive function, particularly when it is a tree recursion. The
Recursion Tree Method is one of several techniques for calculating the time complexity
of such functions. Let's examine it in further detail.
1. The size of the problem at each level is all that matters for determining the
value of a node. The problem size is n at level 0, n/2 at level 1, n/4 at level 2,
and so on.
2. In general, we define the height of the tree as log(n), where n is the size of
the problem, and the height of this recursion tree is equal to the number of levels
in the tree. This is true because, as we just established, the divide-and-conquer
strategy is used by recurrence relations to solve problems, and getting from problem
size n to problem size 1 simply requires taking log(n) halving steps. For example,
for a problem of size 16:
log(16) base 2 = log(2^4) base 2 = 4 levels
3. At each level, the second (non-recursive) term of the recurrence is regarded as
the cost of the node at that level.
Although the word "tree" appears in the name of this strategy, you don't need to be
an expert on trees to comprehend it.
How to Use a Recursion Tree to Solve Recurrence Relations?
The cost of a sub-problem in the recursion tree technique is the amount of time
needed to solve that sub-problem. Therefore, if you notice the phrase "cost" linked
with the recursion tree, it simply refers to the amount of time needed to solve a
certain sub-problem.
Example
T(n) = 2T(n/2) + K
Solution
A problem size n is divided into two sub-problems each of size n/2. The cost of
combining the solutions to these sub-problems is K.
Each sub-problem of size n/2 is divided into two sub-problems each of size n/4, and so on.
At the last level, the sub-problem size will be reduced to 1. In other words, we finally
hit the base case.
We know that when we continuously divide a number by 2, it is eventually reduced
to 1. The same holds for the problem size n: suppose that after k divisions by 2,
n becomes equal to 1, which implies (n / 2^k) = 1.
Here n / 2^k is the problem size at the last level, and it is always equal to 1.
Now we can easily calculate the value of k from the above expression by taking the
logarithm of both sides. Below is a clearer derivation:
o n = 2^k
o log(n) = log(2^k)
o log(n) = k * log(2)
o k = log(n) / log(2)
o k = log(n) base 2
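A quick way to convince yourself of this is to count the halvings directly. A
minimal C sketch (added for illustration, not part of the original text):

#include <stdio.h>

/* Count how many times n can be halved before it reaches 1;
   for powers of two the answer is log(n) base 2 -- the depth
   of the recursion tree. */
int halvings(int n)
{
    int k = 0;
    while (n > 1) {
        n /= 2;
        k++;
    }
    return k;
}

int main(void)
{
    printf("halvings(16) = %d\n", halvings(16)); /* prints 4 */
    return 0;
}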
Let's first determine the number of nodes in the last level. Since the number of
nodes doubles at every level, the last level (level log(n) base 2) contains
2^(log(n) base 2) = n nodes.
The cost of the last level is calculated separately because it is the base case and
no merging is done at the last level, so the cost to solve a single sub-problem at
this level is some constant value. Let's take it as O(1); the whole last level
therefore costs n · O(1) = O(n).
If you take a close look at the costs of the remaining levels (K, 2K, 4K, ...), they
form a geometric progression (a, ar, ar^2, ar^3, ...). The sum of the first k terms
of a GP with common ratio r > 1 is S = a(r^k - 1)/(r - 1), where a is the first term.
Here a = K and r = 2, so these levels contribute K(2^(log(n) base 2) - 1) = K(n - 1)
in total. Adding the O(n) cost of the last level gives T(n) = O(n).
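A function whose running time follows T (n) = 2T (n/2) + K is, for example, a
recursive range sum that splits its input in half and combines the two results with
constant work. A minimal C sketch (illustrative, not part of the original text):

/* Two sub-problems of size n/2 plus constant combine work:
   T(n) = 2T(n/2) + K, which solves to O(n) as derived above. */
int range_sum(const int arr[], int lo, int hi)
{
    if (hi - lo == 1)
        return arr[lo];              /* base case: one element */
    int mid = lo + (hi - lo) / 2;
    return range_sum(arr, lo, mid) + range_sum(arr, mid, hi);
}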
Master Method
The Master Method is used for solving recurrences of the form

T (n) = a T (n/b) + f (n)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function.
In the analysis of a recursive algorithm, the constants and function take on the
following significance:
o n is the size of the problem.
o a is the number of sub-problems in the recursion.
o n/b is the size of each sub-problem (all sub-problems are assumed to be of the same size).
o f(n) is the cost of the work done outside the recursive calls, i.e., dividing the
problem and merging the solutions.
Master Theorem:
An asymptotically tight bound can be obtained in these three cases:

Case 1: If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T (n) = Θ(n^(log_b a)).
Case 2: If f(n) = Θ(n^(log_b a)), then T (n) = Θ(n^(log_b a) log n).
Case 3: If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a f(n/b) ≤ c f(n)
for some constant c < 1 and all sufficiently large n, then T (n) = Θ(f(n)).
Example:
T (n) = 8T (n/2) + 1000n^2. Apply the Master Theorem to it.

Solution:
Compare with T (n) = a T (n/b) + f (n):
a = 8, b = 2, f (n) = 1000n^2, log_b a = log_2 8 = 3
f (n) = 1000n^2 = O(n^(3-ε)) for ε = 1.
Since this condition holds, the first case of the Master Theorem applies to the given
recurrence relation, thus resulting in the conclusion:
T (n) = Θ(n^(log_b a))
Therefore: T (n) = Θ(n^3)
Example:
T (n) = 2T (n/2) + n. Apply the Master Theorem to it.

Solution:
a = 2, b = 2, f (n) = n, log_b a = log_2 2 = 1
f (n) = n = Θ(n^(log_b a)), so the second case of the Master Theorem applies:
T (n) = Θ(n^(log_b a) log n)
Therefore: T (n) = Θ(n log n)
Example (Case 3):
T (n) = 2T (n/2) + n^2. Apply the Master Theorem to it.

Solution:
a = 2, b = 2, f (n) = n^2, log_b a = log_2 2 = 1
f (n) = n^2 = Ω(n^(1+ε)) for ε = 1, so Case 3 may apply if the regularity condition
a f(n/b) ≤ c f(n) also holds. Here a f(n/b) = 2(n/2)^2 = n^2/2.
If we choose c = 1/2, it is true that:
2(n/2)^2 = (1/2) n^2 ≤ (1/2) n^2 ∀ n ≥ 1
So it follows: T (n) = Θ(f (n))
Therefore: T (n) = Θ(n^2)