Daa Notes-1
Recurrence relations play a significant role in analyzing and optimizing the complexity of
algorithms. A strong understanding of recurrence relations also goes a long way in developing
an individual's problem-solving skills.
Following are some examples of linear recurrence relations.
T(n) = T(n-1) + n for n > 0 and T(0) = 1
These types of recurrence relations can be easily solved using the substitution method.
For example,
T(n) = T(n-1) + n
     = T(n-2) + (n-1) + n
     = T(n-k) + (n-(k-1)) + ... + (n-1) + n
Substituting k = n, we get
T(n) = T(0) + 1 + 2 + ... + n = 1 + n(n+1)/2 = O(n^2)
Substitution Recurrences:
Sometimes, recurrence relations can't be directly solved using techniques
like substitution, recurrence tree or master method. Therefore, we need to convert the
recurrence relation into an appropriate form before solving. For example,
T(n) = T(√n) + 1
Putting n = 2^m (i.e., m = log2(n)) gives T(2^m) = T(2^(m/2)) + 1. Letting S(m) = T(2^m), the
recurrence becomes S(m) = S(m/2) + 1, which solves to
S(m) = Θ(log m)
Therefore, T(n) = T(2^m) = S(m) = Θ(log m) = Θ(log log n)
Homogeneous Recurrence Relations:
A homogeneous recurrence relation is one in which the right-hand side is equal to zero, i.e.,
every term depends only on previous terms of the sequence.
Mathematically, a homogeneous recurrence relation of order k is represented as:
a_n = f(a_{n-1}, a_{n-2}, ..., a_{n-k})
Example: a_n = 2*a_{n-1} - a_{n-2}
Non-Homogeneous Recurrence Relations:
A non-homogeneous recurrence relation is one in which the right-hand side is not equal to zero. It
can be expressed as:
a_n = f(a_{n-1}, a_{n-2}, ..., a_{n-k}) + g(n)
where g(n) is a function that introduces a term not dependent on the previous terms. The presence of
g(n) makes the recurrence non-homogeneous.
Example: a_n = 2*a_{n-1} - a_{n-2} + 3n
Ways to Solve Recurrence Relations:
Here are the general steps to analyze the complexity of a recurrence relation:
1. Substitute the input size into the recurrence relation to obtain a sequence of terms.
2. Identify a pattern in the sequence of terms, if any, and simplify the recurrence relation to
obtain a closed-form expression for the number of operations performed by the algorithm.
3. Determine the order of growth of the closed-form expression by using techniques such as
the Master Theorem, or by finding the dominant term and ignoring lower-order terms.
4. Use the order of growth to determine the asymptotic upper bound on the running time of the
algorithm, which can be expressed in terms of big O notation.
It’s important to note that the above steps are just a general outline and that the specific details of
how to analyze the complexity of a recurrence relation can vary greatly depending on the specific
recurrence relation being analyzed.
We have already discussed the analysis of loops. Many algorithms are recursive. When we
analyze them, we get a recurrence relation for time complexity.
We get running time on an input of size n as a function of n and the running time on inputs of
smaller sizes. For example, in Merge Sort, to sort a given array, we divide it into two halves and
recursively repeat the process for the two halves.
Finally, we merge the results. Time complexity of Merge Sort can be written as T(n) = 2T(n/2) +
cn. There are many other algorithms like Binary Search, Tower of Hanoi, etc.
Overall, solving recurrences plays a crucial role in the analysis, design, and optimization of
algorithms, and is an important topic in computer science.
There are mainly three ways of solving recurrences:
1. Substitution Method
2. Recurrence Tree Method
3. Master Method
Substitution Method:
We make a guess for the solution and then use mathematical induction to prove whether the
guess is correct.
For example, consider the recurrence T(n) = 2T(n/2) + n
We guess the solution as T(n) = O(n log n). Now we use induction to prove our guess.
We need to prove that T(n) <= c*n*log n. We can assume that it is true for values smaller than n.
T(n) = 2T(n/2) + n
     <= 2*(c*(n/2)*log(n/2)) + n
     = c*n*log n - c*n*log 2 + n
     = c*n*log n - c*n + n
     <= c*n*log n (for c >= 1)
Recurrence Tree Method:
Recursion is a fundamental concept in computer science and mathematics that allows functions to
call themselves, enabling the solution of complex problems through iterative steps. One visual
representation commonly used to understand and analyze the execution of recursive functions is a
recursion tree. In this article, we will explore the theory behind recursion trees, their structure, and
their significance in understanding recursive algorithms.
Tree Structure
Each node in a recursion tree represents a particular recursive call. The initial call is depicted at the
top, with subsequent calls branching out beneath it. The tree grows downward, forming a
hierarchical structure. The branching factor of each node depends on the number of recursive calls
made within the function. Additionally, the depth of the tree corresponds to the number of recursive
calls before reaching the base case.
Base Case
The base case serves as the termination condition for a recursive function. It defines the point at
which the recursion stops and the function starts returning values. In a recursion tree, the nodes
representing the base case are usually depicted as leaf nodes, as they do not result in further
recursive calls.
Recursive Calls
The child nodes in a recursion tree represent the recursive calls made within the function. Each child
node corresponds to a separate recursive call, resulting in the creation of new subproblems. The
values or parameters passed to these recursive calls may differ, leading to variations in the
subproblems' characteristics.
Execution Flow:
Traversing a recursion tree provides insights into the execution flow of a recursive function. Starting
from the initial call at the root node, we follow the branches to reach subsequent calls until we
encounter the base case. As the base cases are reached, the recursive calls start to return, and their
respective nodes in the tree are marked with the returned values. The traversal continues until the
entire tree has been traversed.
Recursion trees aid in analyzing the time complexity of recursive algorithms. By examining the
structure of the tree, we can determine the number of recursive calls made and the work done at each
level. This analysis helps in understanding the overall efficiency of the algorithm and identifying any
potential inefficiencies or opportunities for optimization.
In this method, we draw a recurrence tree and calculate the time taken by every level of the tree.
Finally, we sum the work done at all levels. To draw the recurrence tree, we start from the given
recurrence and keep drawing till we find a pattern among levels. The pattern is typically an
arithmetic or geometric series.
Consider the recurrence relation T(n) = T(n/4) + T(n/2) + cn^2
          cn^2
         /    \
    T(n/4)   T(n/2)
If we further break down the expressions T(n/4) and T(n/2), we get the following
recursion tree.
            cn^2
          /      \
   c(n^2)/16   c(n^2)/4
    /     \     /     \
T(n/16) T(n/8) T(n/8) T(n/4)
To know the value of T(n), we need to calculate the sum of the tree nodes level by
level. If we sum the above tree level by level, we get the following series:
T(n) = c(n^2 + 5(n^2)/16 + 25(n^2)/256 + ...)
The level sums form a geometric series with ratio 5/16. To get an upper bound, we can sum
the infinite series. We get the sum as c(n^2)/(1 - 5/16) = (16/11)c(n^2), which is O(n^2).
Master Method:
The Master Method is used for solving recurrences of the following type:
T(n) = a*T(n/b) + f(n), where a >= 1 and b > 1 are constants and f(n) is an asymptotically
positive function.
In the analysis of a recursive algorithm, the constants and
function take on the following significance:
o n is the size of the problem.
o a is the number of subproblems in the recursion.
o n/b is the size of each subproblem. (Here it is assumed that all
subproblems are essentially the same size.)
o f(n) is the sum of the work done outside the recursive calls, which
includes the cost of dividing the problem and the cost of combining the
solutions to the subproblems.
It is not always possible to bound the function as required, so we distinguish three cases,
obtained by comparing f(n) with n^(log_b a), which tell us what kind of bound we can apply:
1. If f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) * log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and a*f(n/b) <= c*f(n) for some c < 1 and
all sufficiently large n, then T(n) = Θ(f(n)).
Counting Sort
Counting Sort is a linear time sorting algorithm which works faster by not making comparisons.
It assumes that the numbers to be sorted are in the range 1 to k, where k is small. The basic idea
is to determine the "rank" of each number in the final sorted array. The algorithm proceeds as
follows:
1. To determine the range, identify the minimum and maximum values of the input array.
2. Create a count array C of the range size, initialized to zeros.
3. Iterate over the input array and increment the count of each element found.
4. Modify the count array by calculating the cumulative totals to obtain the correct position for
each element.
5. Create an output array the same size as the input array.
6. Traverse the input array again, placing each element in its correct position in the output array
based on the count array.
7. The output array now contains the sorted elements.
The main advantage of Counting Sort is that it achieves a linear time complexity of O(n + k),
which makes it very efficient for large input sizes. However, its applicability is limited to
scenarios where the range of input elements is known in advance and relatively small.
Bucket Sort
Bucket Sort runs in linear time on average. Like Counting Sort, Bucket Sort is fast because it
assumes something about the input. Bucket Sort assumes that the input is generated by a random
process that distributes elements uniformly over the interval [0, 1).
Bucket Sort assumes that the input is an n-element array A and that each element A[i] in the array
satisfies 0 <= A[i] < 1. The code depends upon an auxiliary array B[0...n-1] of linked lists (buckets)
and assumes that there is a mechanism for maintaining such lists.
BUCKET-SORT (A)
1. n ← length[A]
2. for i ← 1 to n
3.     do insert A[i] into list B[⌊n · A[i]⌋]
4. for i ← 0 to n-1
5.     do sort list B[i] with insertion sort
6. Concatenate the lists B[0], B[1], ..., B[n-1] together in order.
Radix Sort
Radix Sort is a sorting algorithm that is useful when there is a constant d such that all keys are
d-digit numbers. To execute Radix Sort, for p = 1 to d, sort the numbers with respect to the
p-th digit from the right using any linear time stable sort.
The code for Radix Sort is straightforward. The following procedure assumes that each element
in the n-element array A has d digits, where digit 1 is the lowest-order digit and digit d is the
highest-order digit, and sorts A[1..n] where each number is d digits long.
Knapsack Problem
Given a bag with maximum weight capacity of W and a set of items, each having a weight and a
value associated with it. Decide the number of each item to take in a collection such that the
total weight is less than the capacity and the total value is maximized.
Types of Knapsack Problem:
The knapsack problem can be classified into the following types:
Fractional Knapsack Problem
0/1 Knapsack Problem
1. Fractional Knapsack Problem
The Fractional Knapsack problem can be defined as follows:
Given the weights and values of N items, put these items in a knapsack of capacity W to get the
maximum total value in the knapsack. In Fractional Knapsack, we can break items for
maximizing the total value of the knapsack.
2. 0/1 Knapsack Problem
The 0/1 Knapsack problem can be defined as follows:
We are given N items where each item has some weight (wi) and value (vi) associated with it.
We are also given a bag with capacity W. The target is to put the items into the bag such that
the sum of values associated with them is the maximum possible.
Note that here we can either put an item completely into the bag or cannot put it at all.
Mathematically the problem can be expressed as:
Given N items where each item has some weight and profit associated with it and also given a
bag with capacity W, [i.e., the bag can hold at most W weight in it]. The task is to put the items
into the bag such that the sum of profits associated with them is the maximum possible.
Note: The constraint here is we can either put an item completely into the bag or cannot put it at
all [It is not possible to put a part of an item into the bag].
Examples:
Input: N = 3, W = 4, profit[] = {1, 2, 3}, weight[] = {4, 5, 1}
Output: 3
Explanation: There are two items which have weight less than or equal to 4. If we select the
item with weight 4, the possible profit is 1. And if we select the item with weight 1, the possible
profit is 3. So the maximum possible profit is 3. Note that we cannot put both the items with
weight 4 and 1 together as the capacity of the bag is 4.
Input: N = 3, W = 3, profit[] = {1, 2, 3}, weight[] = {4, 5, 6}
Output: 0
Explanation: Every item has weight greater than the capacity of the bag, so no item can be
picked and the maximum possible profit is 0.
Dynamic Programming Approach for 0/1 Knapsack Problem
Memoization Approach for 0/1 Knapsack Problem:
Note: A plain recursive solution computes the same subproblems again
and again; in its recursion tree, a state such as K(1, 1) is evaluated more than once.
As there are repetitions of the same subproblem again and again, we can
implement the following idea to solve the problem.
When we encounter a subproblem for the first time, we solve it and store its result in a 2-D
array indexed by the state (n, w). Now if we come across the same state (n, w) again, instead
of calculating it in exponential complexity, we can directly return its result stored in the
table in constant time.
// Here is the top-down approach of
// dynamic programming
#include <bits/stdc++.h>
using namespace std;

// Returns the maximum profit achievable using items 0..index;
// dp[index][W] caches results so each state is solved only once
int knapSackRec(int W, int wt[], int val[], int index, vector<vector<int>>& dp)
{
    // Base condition: no items left or no remaining capacity
    if (index < 0 || W == 0)
        return 0;
    // Return the stored result if this state was already computed
    if (dp[index][W] != -1)
        return dp[index][W];
    if (wt[index] > W) {
        // Item too heavy: skip it, storing the value
        // of the function call in the table before return
        dp[index][W] = knapSackRec(W, wt, val, index - 1, dp);
        return dp[index][W];
    }
    else {
        // Store the better of taking or skipping the item before return
        dp[index][W] = max(val[index]
                               + knapSackRec(W - wt[index], wt, val,
                                             index - 1, dp),
                           knapSackRec(W, wt, val, index - 1, dp));
        return dp[index][W];
    }
}

int knapSack(int W, int wt[], int val[], int n)
{
    // -1 marks an uncomputed state
    vector<vector<int>> dp(n, vector<int>(W + 1, -1));
    return knapSackRec(W, wt, val, n - 1, dp);
}

// Driver Code
int main()
{
    int profit[] = { 60, 100, 120 };
    int weight[] = { 10, 20, 30 };
    int W = 50;
    int n = sizeof(profit) / sizeof(profit[0]);
    cout << knapSack(W, weight, profit, n);
    return 0;
}
Output
220
Time Complexity: O(N*W), as redundant calculations of states are avoided.
Auxiliary Space: O(N*W) + O(N), for the 2D array storing intermediate states plus the O(N)
auxiliary stack space used by the recursion.
UNIT –IV
What is Greedy Algorithm?
A greedy algorithm is a type of optimization algorithm that makes locally optimal
choices at each step to find a globally optimal solution. It operates on the principle of
“taking the best option now” without considering the long-term consequences.
If we have two sorted files containing n and m records respectively, they can be merged
together to obtain one sorted file in time O(n + m).
There are many ways in which pairwise merges can be done to get a single sorted file, and
different pairings require different amounts of computing time. The main thing is to
pairwise merge the n sorted files so that the number of comparisons will be minimized.
The greedy strategy is to pick the two smallest files, merge them, and repeat this until we are
left with only one file.