Solution: (D). According to the rate of growth: h(n) < f(n) < g(n) (g(n) is asymptotically greater
than f(n), and f(n) is asymptotically greater than h(n)). We can easily see this order by
taking logarithms of the three given functions: logn · logn < n < log(n!). Note that log(n!) = Θ(nlogn).
Problem-53 Consider the following segment of C-code:
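The code segment itself is missing from this copy; the loop this question is usually stated with (a reconstruction consistent with the solution below, with j doubling until it exceeds n) is:
int j, n;
j = 1;
while (j <= n)
    j = j * 2;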
The number of comparisons made in the execution of the loop for any n > 0 is:
(A) ⌈log₂n⌉ + 1
(B) n
(C) ⌈log₂n⌉
(D) ⌊log₂n⌋ + 1
Solution: (A). Let us assume that the loop executes k times. After the kth step the value of j is 2ᵏ,
and the loop terminates once 2ᵏ > n. Taking logarithms on both sides gives k = log₂n. Since we make
one more comparison to exit the loop, the answer is ⌈log₂n⌉ + 1.
Problem-54 Consider the following C code segment. Let T(n) denote the number of times the
for loop is executed by the program on input n. Which of the following is true?
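The code segment is missing from this copy. This question is commonly stated with the following primality-test segment (a reconstruction; only the loop's iteration count matters for T(n)):
#include <stdio.h>
#include <math.h>

int IsPrime(int n) {
    int i;
    for (i = 2; i <= sqrt(n); i++)   // runs at most sqrt(n) times
        if (n % i == 0) {
            printf("Not Prime\n");   // may exit after a single iteration
            return 0;
        }
    return 1;
}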
Solution: (B). Big O notation describes an upper bound and Big Omega notation describes a lower
bound for an algorithm. The for loop in the question runs at most √n times and at least once.
Therefore, T(n) = O(√n) and T(n) = Ω(1).
Problem-55 In the following C function, let n ≥ m. How many recursive calls are made by
this function?
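The function is missing from this copy; the question is usually stated with the following Euclid-style function (a reconstruction, with n ≥ m):
int gcd(int n, int m) {
    if (n % m == 0)
        return m;
    n = n % m;
    return gcd(m, n);   // one recursive call per step
}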
(A) Θ(log₂n)
(B) Ω(n)
(C) Θ(log₂log₂n)
(D) Θ(√n)
Solution: No option is correct. Big O notation describes an upper bound and Big Omega notation
describes a lower bound for an algorithm. For m = 2 and for all n = 2ⁱ, the first call returns
immediately (n % m == 0 holds), so no recursive call is made and the running time is O(1), which
contradicts every option.
Problem-56 Suppose T(n) = 2T(n/2) + n, T(0) = T(1) = 1. Which one of the following is false?
(A) T(n) = O(n²)
(B) T(n) = Θ(nlogn)
(C) T(n) = Ω(n²)
(D) T(n) = O(nlogn)
Solution: (C). Based on the Master theorem (a = 2, b = 2, f(n) = n = Θ(n^log₂2), the second case),
we get T(n) = Θ(nlogn). The tight upper and lower bounds coincide, so both O(nlogn) and Ω(nlogn)
are correct for the given recurrence, and O(n²) is also a correct (loose) upper bound. Only
T(n) = Ω(n²) is false, so option (C) is the answer.
Problem-57 Find the complexity of the below function:
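The listing is missing from this copy. Any loop of the following shape is consistent with the solution's count of n – 1 multiplications; as a hypothetical reconstruction, an iterative power function (which also sets up Problem-59):
double Power(double x, unsigned int n) {   // assumes n >= 1
    double result = x;                     // x^1
    for (unsigned int i = 1; i < n; i++)   // n - 1 multiplications in total
        result = result * x;
    return result;
}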
Solution:
Time Complexity: There are n – 1 multiplications and each takes constant time giving a Θ(n)
algorithm.
Problem-59 For Problem-58, can we improve the time complexity?
Solution: Refer to the Divide and Conquer chapter.
Problem-60 Find the time complexity of the recurrence T(n) = T(n/2) + T(n/4) + T(n/8) + n.
Solution: Let us solve this problem by the method of guessing. The total size on each level of the
recurrence tree is less than n, so we guess that f(n) = n will dominate. Assume for all i < n that
c1·i ≤ T(i) ≤ c2·i. Then,
c1(n/2 + n/4 + n/8) + n ≤ T(n) ≤ c2(n/2 + n/4 + n/8) + n
(7/8)c1·n + n ≤ T(n) ≤ (7/8)c2·n + n
If c1 ≤ 8 and c2 ≥ 8, this gives c1·n ≤ T(n) ≤ c2·n. So, T(n) = Θ(n). In general, if you have multiple
recursive calls, the sum of the arguments to those calls is less than n (in this case n/2 + n/4 + n/8 = 7n/8 < n),
and f(n) is reasonably large, then a good guess is T(n) = Θ(f(n)).
Problem-61 Solve the following recurrence relation using the recursion tree method:
T(n) = T(n/2) + T(n/4) + n².
Solution: The work at the root (level 0) is n². At level 1 the two subproblems are of size n/2 and
n/4, and together they take time (n/2)² + (n/4)² = (5/16)n². At level 2 the four subproblems are of
size n/4, n/8, n/8 and n/16 respectively, and these four subproblems take time (5/16)²n².
Similarly, the amount of work at level k is at most (5/16)ᵏn². Summing the geometric series gives
T(n) ≤ Σ(5/16)ᵏn² = (16/11)n² = O(n²), and since level 0 alone contributes n², T(n) = Θ(n²).
That is, the first level provides a constant fraction of the total runtime.
Problem-62 Rank the following functions by order of growth: (n + 1)!, n!, 4ⁿ, n × 3ⁿ, 3ⁿ + n²
+ 20n, , n² + 200, 20n + 500, 2^lgn, n^(2/3), 1.
Solution: In decreasing order of growth rate:
(n + 1)! > n! > 4ⁿ > n × 3ⁿ > 3ⁿ + n² + 20n > n² + 200 > 20n + 500 > 2^lgn > n^(2/3) > 1.
Note that 2^lgn = n, so 20n + 500 exceeds it only by a constant factor; both are Θ(n).
Problem-63 Find the complexity of the below function:
2.1 Introduction
In this chapter, we will look at one of the important topics, “recursion”, which will be used in
almost every chapter, and also its relative “backtracking”.
2.2 What is Recursion?
Any function which calls itself is called recursive. A recursive method solves a problem by
calling a copy of itself to work on a smaller problem. This is called the recursion step. The
recursion step can result in many more such recursive calls.
It is important to ensure that the recursion terminates. Each time, the function calls itself with a
slightly simpler version of the original problem. The sequence of smaller problems must
eventually converge on the base case.
2.3 Why Recursion?
Recursion is a useful technique borrowed from mathematics. Recursive code is generally shorter
and easier to write than iterative code. In functional languages, loops are generally turned into
recursive functions when they are compiled or interpreted.
Recursion is most useful for tasks that can be defined in terms of similar subtasks. For example,
sort, search, and traversal problems often have simple recursive solutions.
2.4 Format of a Recursive Function
A recursive function performs a task in part by calling itself to perform the subtasks. At some
point, the function encounters a subtask that it can perform without calling itself. This case, where
the function does not recur, is called the base case. The former, where the function calls itself to
perform a subtask, is referred to as the recursive case. We can write all recursive functions using
the format:
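The format block itself is missing from this copy; it has roughly this shape (pseudocode, not compilable as-is):
if (test for the base case)
    return some base case value
else if (test for another base case)
    return some other base case value
else
    // the recursive case
    return (some work and then a recursive call)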
As an example, consider the factorial function: n! is the product of all integers between 1 and n.
The recursive definition of factorial looks like:
n! = 1, if n = 0 or n = 1
n! = n × (n – 1)!, if n > 1
This definition can easily be converted to a recursive implementation. Here the problem is
determining the value of n!, and the subproblem is determining the value of (n – 1)!. In the
recursive case, when n is greater than 1, the function calls itself to determine the value of (n – 1)!
and multiplies that with n.
In the base case, when n is 0 or 1, the function simply returns 1. This looks like the following:
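A minimal sketch matching the definition above (the function name Fact is illustrative):
int Fact(int n) {
    if (n <= 1)               // base case: 0! = 1! = 1
        return 1;
    return n * Fact(n - 1);   // recursive case: n * (n - 1)!
}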
2.5 Recursion and Memory (Visualization)
Each recursive call makes a new copy of that method (actually only the variables) in memory.
Once a method ends (that is, returns some data), the copy of that returning method is removed
from memory. The recursive solutions look simple but visualization and tracing take time. For
better understanding, let us consider the following example.
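The example code is missing from this copy; here is a sketch of the kind of print function the figure below traces (the body is a reconstruction):
#include <stdio.h>

void Print(int n) {
    if (n == 0)          // base case: nothing left to print
        return;
    printf("%d ", n);    // work done before the recursive call
    Print(n - 1);        // each call keeps its own copy of n on the stack
}
Calling Print(4) prints 4 3 2 1, with four stack frames alive at the deepest point of the recursion.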
For this example, if we call the print function with n=4, visually our memory assignments may
look like:
Now, let us consider our factorial function. The visualization of factorial function with n=4 will
look like:
2.6 Recursion versus Iteration
While discussing recursion, the basic question that comes to mind is: which way is better? –
iteration or recursion? The answer to this question depends on what we are trying to do. A
recursive approach mirrors the problem that we are trying to solve. A recursive approach makes
it simpler to solve a problem that may not have the most obvious of answers. But, recursion adds
overhead for each recursive call (needs space on the stack frame).
Recursion:
• Terminates when a base case is reached.
• Each recursive call requires extra space on the stack frame (memory).
• If we get infinite recursion, the program may run out of memory, resulting in stack overflow.
• Solutions to some problems are easier to formulate recursively.
Iteration:
• Terminates when a condition is proven to be false.
• Each iteration does not require any extra space.
• An infinite loop could loop forever since no extra memory is being created.
• Iterative solutions to a problem may not always be as obvious as a recursive solution.
2.7 Notes on Recursion
• Recursive algorithms have two types of cases: recursive cases and base cases.
• Every recursive function must eventually terminate at a base case.
• Generally, iterative solutions are more efficient than recursive solutions [due to the
overhead of function calls].
• A recursive algorithm can be implemented without recursive function calls by using an
explicit stack, but it is usually more trouble than it is worth. That means any problem that can
be solved recursively can also be solved iteratively.
• For some problems, there are no obvious iterative algorithms.
• Some problems are best suited for recursive solutions while others are not.
In this chapter we cover a few problems with recursion and we will discuss the rest in other
chapters. By the time you complete reading the entire book, you will encounter many recursion
problems.
Problem-1 Discuss Towers of Hanoi puzzle.
Solution: The Towers of Hanoi is a mathematical puzzle. It consists of three rods (or pegs or
towers), and a number of disks of different sizes which can slide onto any rod. The puzzle starts
with the disks on one rod in ascending order of size, the smallest at the top, thus making a conical
shape. The objective of the puzzle is to move the entire stack to another rod, satisfying the
following rules:
• Only one disk may be moved at a time.
• Each move consists of taking the upper disk from one of the rods and sliding it onto
another rod, on top of the other disks that may already be present on that rod.
• No disk may be placed on top of a smaller disk.
Algorithm (a code sketch follows the steps below):
• Move the top n – 1 disks from Source to Auxiliary tower,
• Move the nth disk from Source to Destination tower,
• Move the n – 1 disks from Auxiliary tower to Destination tower.
• Transferring the top n – 1 disks from Source to Auxiliary tower can again be thought
of as a fresh problem and can be solved in the same manner. Once we solve Towers
of Hanoi with three disks, we can solve it with any number of disks with the above
algorithm.
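A minimal sketch of this algorithm in C (the function and parameter names are illustrative):
#include <stdio.h>

void TowersOfHanoi(int n, char source, char destination, char auxiliary) {
    if (n == 0)        // base case: no disks left to move
        return;
    TowersOfHanoi(n - 1, source, auxiliary, destination);             // step 1
    printf("Move disk %d from %c to %c\n", n, source, destination);   // step 2
    TowersOfHanoi(n - 1, auxiliary, destination, source);             // step 3
}
The recurrence for the number of moves is T(n) = 2T(n – 1) + 1, which gives 2ⁿ – 1 moves in total.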
Problem-2 Given an array, check whether the array is in sorted order with recursion.
Solution:
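The listing is missing from this copy; here is a sketch of the recursive check (names are illustrative): A[0..n – 1] is sorted iff its last pair is in order and its first n – 1 elements are sorted.
int IsArrayInSortedOrder(int A[], int n) {
    if (n <= 1)   // base case: zero or one element is always sorted
        return 1;
    return (A[n - 2] <= A[n - 1]) && IsArrayInSortedOrder(A, n - 1);
}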
Time Complexity: O(n). Space Complexity: O(n) for recursive stack space.
2.10 What is Backtracking?
Backtracking is a form of recursion. The usual scenario is that you are faced with a number of
options, and you must choose one of these. After you make your choice you will get a new set of
options; just what set of options you get depends on what choice you made. This procedure is
repeated over and over until you reach a final state. If you made a good sequence of choices, your
final state is a goal state; if you didn’t, it isn’t.
Backtracking can be thought of as a selective tree/graph traversal method. The tree is a way of
representing some initial starting position (the root node) and a final goal state (one of the
leaves). Backtracking allows us to deal with situations in which a raw brute-force approach
would explode into an impossible number of options to consider. Backtracking is a sort of refined
brute force. At each node, we eliminate choices that are obviously not possible and proceed to
recursively check only those that have potential.
What’s interesting about backtracking is that we back up only as far as needed to reach a previous
decision point with an as-yet-unexplored alternative. In general, that will be at the most recent
decision point. Eventually, more and more of these decision points will have been fully explored,
and we will have to backtrack further and further. If we backtrack all the way to our initial state
and have explored all alternatives from there, we can conclude the particular problem is
unsolvable. In such a case, we will have done all the work of the exhaustive recursion and will
know that no viable solution is possible.
• Sometimes the best algorithm for a problem is to try all possibilities.
• This is always slow, but there are standard tools that can be used to help.
• Tools: algorithms for generating basic objects, such as binary strings [2ⁿ
possibilities for an n-bit string], permutations [n!], combinations [n!/(r!(n – r)!)],
general strings [k-ary strings of length n have kⁿ possibilities], etc...
• Backtracking speeds the exhaustive search by pruning.
Problem-3 Generate all strings of n bits.
Solution: Assume the current bit-string is kept in a global array A[0..n – 1]; position n – 1 is
fixed to 0 and then to 1, recursing on the remaining n – 1 positions (a sketch follows below). The
recurrence is T(n) = 2T(n – 1) + O(1); using the Subtraction and Conquer Master theorem we get
T(n) = O(2ⁿ). Since there are 2ⁿ bit-strings to output, this means the algorithm for generating
bit-strings is optimal.
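A minimal sketch (the global array A and the name Binary are illustrative):
#include <stdio.h>

static char A[64];   // holds the current bit-string

// Fix position n-1 to '0' and then to '1', recursing on the remaining
// n-1 positions: T(n) = 2T(n - 1) + O(1).
void Binary(int n) {
    if (n < 1) {
        printf("%s\n", A);   // base case: every position is fixed
        return;
    }
    A[n - 1] = '0';
    Binary(n - 1);
    A[n - 1] = '1';
    Binary(n - 1);
}

int main(void) {
    int n = 3;
    A[n] = '\0';
    Binary(n);   // prints the 2^3 = 8 strings: 000, 100, 010, ...
    return 0;
}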
Problem-4 Generate all the strings of length n drawn from 0... k – 1.
Solution: Let us assume we keep the current k-ary string in an array A[0..n – 1]. Call the function
k-string(n, k):
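The listing is missing from this copy; here is a sketch under the assumptions above (values stored as digit characters, so this version assumes k ≤ 10; the C identifier KString stands in for k-string):
#include <stdio.h>

static char A[64];   // holds the current k-ary string

// Fix position n-1 to each of the k values in turn, then recurse on
// positions 0..n-2. There are k^n strings in total.
void KString(int n, int k) {
    if (n < 1) {
        printf("%s\n", A);   // base case: every position holds a value
        return;
    }
    for (int j = 0; j < k; j++) {
        A[n - 1] = (char)('0' + j);
        KString(n - 1, k);
    }
}

int main(void) {
    int n = 2, k = 3;
    A[n] = '\0';
    KString(n, k);   // prints the 3^2 = 9 strings: 00, 10, 20, 01, ...
    return 0;
}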
Problem-5 Find the largest connected region of 1s in a matrix of 0s and 1s (two cells are
connected if they are adjacent horizontally, vertically, or diagonally).
Solution: The simplest idea is: for each location, traverse in all 8 directions, and in each of
those directions keep track of the maximum region found. A sketch, together with a sample call,
follows:
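This is a reconstruction (the original listing was lost; all names are illustrative, and this version zeroes visited cells, so it destroys the input matrix):
#include <stdio.h>

#define ROWS 4
#define COLS 5

int getval(int A[][COLS], int i, int j) {
    if (i < 0 || i >= ROWS || j < 0 || j >= COLS)
        return 0;            // treat cells outside the matrix as 0
    return A[i][j];
}

void findRegion(int A[][COLS], int i, int j, int *size) {
    A[i][j] = 0;             // mark the cell as visited
    (*size)++;
    for (int di = -1; di <= 1; di++)       // explore all 8 directions
        for (int dj = -1; dj <= 1; dj++)
            if ((di != 0 || dj != 0) && getval(A, i + di, j + dj))
                findRegion(A, i + di, j + dj, size);
}

int getMaxRegion(int A[][COLS]) {
    int max = 0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            if (A[i][j]) {   // first cell of a not-yet-visited region
                int size = 0;
                findRegion(A, i, j, &size);
                if (size > max)
                    max = size;
            }
    return max;
}

int main(void) {
    int A[ROWS][COLS] = {
        {1, 1, 0, 0, 0},
        {0, 1, 1, 0, 0},
        {0, 0, 1, 0, 1},
        {1, 0, 0, 0, 1}
    };
    printf("Maximum region size: %d\n", getMaxRegion(A));   // prints 5
    return 0;
}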