1). Discuss the role of the stack in the case of recursion, with the help of a suitable example.
Ans: Consider the following example:
#include <iostream>
using namespace std;

void print_backwards();

int main()
{
    print_backwards();
    cout << "\n";
    return 0;
}

void print_backwards()
{
    char character;
    cout << "Enter a character ('.' to end program): ";
    cin >> character;
    if (character != '.')
    {
        print_backwards();   // recursive call comes first
        cout << character;   // output happens as the calls unwind
    }
}
We will examine how this function works in more detail in the next section. But notice that the recursive call to "print_backwards()" (within its own definition) is embedded in an "if" statement. In general, recursive definitions must always use some sort of branch statement with at least one non-recursive branch, which acts as the base case of the definition; otherwise they will loop forever. In the program above, the base case is the implicit "else" part of the "if" statement. We could have written the function with the base case made explicit, as follows:
void print_backwards()
{
    char character;
    cout << "Enter a character ('.' to end program): ";
    cin >> character;
    if (character == '.')
    {
        // base case: do nothing
    }
    else
    {
        print_backwards();
        cout << character;
    }
}
It is easy to see why this works with the aid of a few diagrams. When the main program executes, it begins with a call to "print_backwards()". At this point space is set aside in the computer's memory to execute this call (and, in other cases, in which to make copies of any value parameters). This space is represented as a box in the following figure.
The internal execution of this call begins with a character input, and then a second call
to "print_backwards()" (at this point, nothing has been output to the screen). Again,
space is set aside for this second call:
The process repeats, but inside the third call to "print_backwards()" a full-stop
character is input, thus allowing the third call to terminate with no further function
calls:
Technically speaking, C++ arranges the memory spaces needed for each function call
in a stack. The memory area for each new call is placed on the top of the stack, and
then taken off again when the execution of the call is completed. In the example
above, the stack goes through the following sequence:
C++ uses this stacking principle for all nested function calls, not just for recursively defined functions. A stack is an example of a "last in/first out" structure (as opposed to, for example, a queue, which is a "first in/first out" structure).
2). Discuss the various applications where queues are applied, and give justification for those applications.
Ans: An almost complete binary tree is a tree in which each node that has a right child also has a left child. Having a left child does not require a node to have a right child. Stated another way, in an almost complete binary tree, for every right child there is always a left child, but for a left child there need not be a right child.
The maximum number of nodes in a binary tree of height h is n = 2^(h+1) − 1, with 2^h nodes on the deepest level.
An almost complete strictly binary tree with N leaves has 2N – 1 nodes (as does any
other strictly binary tree). An almost complete binary tree with N leaves that is not
strictly binary has 2N nodes. There are two distinct almost complete binary trees
with N leaves, one of which is strictly binary and one of which is not.
There is only a single almost complete binary tree with N nodes. This tree is strictly
binary if and only if N is odd.
A complete binary tree of depth d is the strictly binary tree all of whose leaves are at
level d.
The total number of nodes in a complete binary tree of depth d equals 2^(d+1) − 1. Since all leaves in such a tree are at level d, the tree contains 2^d leaves and, therefore, 2^d − 1 internal nodes.
A complete binary tree may also be defined as a full binary tree in which all
leaves are at depth n or n-1 for some n. In order for a tree to be the latter kind
of complete binary tree, all the children on the last level must occupy the
leftmost spots consecutively, with no spot left unoccupied in between any two.
For example, if two nodes on the bottommost level each occupy a spot with an
empty spot between the two of them, but the rest of the children nodes are
tightly wedged together with no spots in between, then the tree cannot be a
complete binary tree due to the empty spot.
0 1 0 1 0
1 0 0 1 1
0 0 0 1 1
1 1 1 0 1
0 1 1 1 0
Ans: [flattened diagram: a tree with root A, nodes B and D below it, and C and E on the next level]
Part-b
1. Differentiate between a Max Heap and a Min Heap, and sort the following using heap sort:
D, A, T, A, S, T, R, U, C, T, U, R, E, S.
Ans: In a max heap, each node's value is greater than or equal to the values of its children, so the largest element is at the root; in a min heap, each node's value is less than or equal to the values of its children, so the smallest element is at the root.
[step-by-step heap-sort diagrams for the letters omitted]
Given a set S of values, a min-max heap on S is a
binary tree T with the following properties:
1) T has the heap-shape
2) T is min-max ordered: values stored at nodes on even (odd) levels are
smaller (greater) than or equal to the values stored at their descendants
(if any) where the root is at level zero. Thus, the smallest value of S is
stored at the root of T, whereas the largest value is stored at one of the
root’s children; an example of a min-max heap is shown in Figure 1
(p. 998). A min-max heap on n elements can be stored in an array A[1..n]. The ith location in the array corresponds to a node on level ⌊log₂ i⌋ of the heap. A max-min heap is defined analogously; in
such a heap, the maximum value is stored at the root, and the smallest
value is stored at one of the root’s children. It is interesting to observe
that the Hasse diagram for a min-max heap (i.e., the diagram representing
the order relationships implicit within the structure) is rather complex in
contrast with the one for a traditional heap (in this case, the Hasse
diagram is the heap itself); Figure 2 (p. 998) shows the Hasse
diagram for the example of Figure 1. Algorithms processing min-max
heaps are very similar to those corresponding to conventional heaps.
Creating a min-max heap is accomplished by an adaptation of Floyd's [4] linear-time heap-construction algorithm. When a new element is placed into the next available leaf position, it must then move up the diagram toward the top, or down toward the bottom, to ensure that all paths running from top to bottom remain sorted. Thus the algorithm must first determine whether the new element
should proceed further down the Hasse diagram (i.e., up the heap on
max-levels) or up the Hasse diagram (i.e., up the heap on successive min-levels). Once this has been determined, only grandparents along the path to the root of the heap need be examined, either those lying on min-levels or those lying on max-levels.
2. Give a description of some applications that require the usage of the Heap data structure rather than the Tree.
Order statistics: the Heap data structure can be used to efficiently find the kth smallest (or largest) element.
Heapsort: one of the best sorting methods, being in-place and with no quadratic worst-case scenarios.
Selection algorithms: Finding the min, max, both the min and max, median, or
even the k-th largest element can be done in linear time (often constant time)
using heaps.[4]
Graph algorithms: By using heaps as internal traversal data structures, run
time will be reduced by polynomial order. Examples of such problems are Prim's
minimum spanning tree algorithm and Dijkstra's shortest path problem.
Full and almost full binary heaps may be represented in a very space-efficient way
using an array alone. The first (or last) element will contain the root. The next two
elements of the array contain its children. The next four contain the four children of
the two child nodes, etc. Thus the children of the node at position n would be at
positions 2n and 2n+1 in a one-based array, or 2n+1 and 2n+2 in a zero-based array.
This allows moving up or down the tree by doing simple index computations.
Balancing a heap is done by swapping elements which are out of order. As we can
build a heap from an array without requiring extra memory (for the nodes, for
example), heapsort can be used to sort an array in-place.
One more advantage of heaps over trees in some applications is that construction of heaps can be done in linear time using Floyd's algorithm.
Applications: A heap has many applications, including the most efficient implementation of priority queues.
Variants:
• 2-3 heap
• Binary heap
• many others
Binary heap storage rules -- a heap implemented with a binary tree in which the following two rules are satisfied:
• The element contained by each node is greater than or equal to the elements of that node's children
• The tree is a complete binary tree.
Example: We want to insert a node with value 42 to the heap on the left.
1. Place the new element in the heap in the first available location. This keeps the structure a complete binary tree.
2. while (the new element has a greater value than its parent) swap the new element with its parent.
3. Notice that Step 2 will stop when the new element reaches the root or when the new element's parent has a value greater than or equal to the new element's value.
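The two insertion steps can be sketched on a zero-based array max heap (a minimal sketch; the function name and the sample values, including the 42 from the example, are assumptions):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Insert into a zero-based array max heap following the steps above:
// append at the first available location, then swap upward while the
// new element is greater than its parent.
void heap_insert(vector<int>& heap, int value)
{
    heap.push_back(value);                 // step 1: first available spot
    size_t i = heap.size() - 1;
    while (i > 0)
    {
        size_t parent = (i - 1) / 2;
        if (heap[i] <= heap[parent])
            break;                         // step 2 terminates
        swap(heap[i], heap[parent]);       // swap with parent
        i = parent;
    }
}
```

Inserting 42 into the heap {45, 35, 23, 27, 21, 22} appends it at index 6, swaps it above 23, and stops below 45, leaving {45, 35, 42, 27, 21, 22, 23}.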
The procedure for deleting the root from the heap (effectively extracting the maximum element in a max heap) is as follows:
1. Copy the element at the root of the heap to the variable used to return a value.
2. Copy the last element in the deepest level to the root and then take this last node out of the tree. This element is called the out-of-place element.
3. while (the out-of-place element has a value that is lower than one of its children) swap the out-of-place element with its larger child.
4. Return the answer that was saved in Step 1.
5. Notice that Step 3 will stop when the out-of-place element reaches a leaf or has a value that is greater than or equal to the values of its children.
This process is called reheapification downward.
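The numbered steps translate almost line-for-line into code (zero-based array max heap; the function name and the sample values are illustrative):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Extract the maximum from a zero-based array max heap, following the
// numbered steps above. Assumes the heap is non-empty.
int heap_extract_max(vector<int>& heap)
{
    int answer = heap[0];              // step 1: save the root
    heap[0] = heap.back();             // step 2: last element becomes
    heap.pop_back();                   //         the out-of-place element
    size_t i = 0, n = heap.size();
    while (2 * i + 1 < n)              // step 3: reheapification downward
    {
        size_t child = 2 * i + 1;
        if (child + 1 < n && heap[child + 1] > heap[child])
            ++child;                   // larger of the two children
        if (heap[i] >= heap[child])
            break;                     // step 5: stopping condition
        swap(heap[i], heap[child]);
        i = child;
    }
    return answer;                     // step 4
}
```

On {45, 35, 23, 27, 21, 22} this returns 45; the out-of-place 22 sinks past 35 and 27, leaving the valid heap {35, 27, 23, 22, 21}.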
Now, think about how to build a heap. Work through the example of inserting 27, 35, 23, 22, 4, 45, 21, 5, 42 and 19 one at a time.
Heap Implementation
It is perfectly acceptable to use a traditional binary tree data structure to implement a binary heap. There is, however, the overhead of storing two child pointers in every node.
A more common approach is to store the heap in an array. Since a heap is always a complete binary tree, it can be stored compactly using simple array indices.
• For each index i, element arr[i] has children at arr[2i + 1] and arr[2i + 2], and the parent at arr[floor((i − 1)/2)].
This implementation is particularly useful in the heapsort algorithm, where it allows the space in the input array to be reused for the heap. The fixed size of an array, however, makes this method less useful in priority-queue implementations, where the number of elements is unknown in advance.
Building a Heap
A heap could be built by successive insertions. This approach requires O(n log n) time for n elements, whereas the optimal method builds the heap bottom-up, sifting elements downward as in Floyd's algorithm.
As an example, let's build a heap with the following values: 20, 35, 23, 22, 4, 45, 21, 5, 42 and 19.
As can be shown, this optimal method requires O(n) time for n elements.
Priority Queues
A priority queue behaves much like an ordinary queue, except that each entry carries a priority, and it is always an entry with the highest priority that is removed first.
In the heap implementation of a priority queue, each node of the heap contains one element along with the element's priority, and the tree is maintained so that:
• The element contained by each node has a priority that is greater than or equal to the priorities of the elements of that node's children.
• The tree is a complete binary tree.
#include <iostream>
using namespace std;

struct node
{
    int data;
    struct node *left, *right;
};

struct node* newNode(int data)   // helper to allocate a tree node
{
    struct node* n = new node;
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

void printPostorder(struct node* node)
{
    if (node == NULL)
        return;
    printPostorder(node->left);
    printPostorder(node->right);
    cout << node->data << " ";
}

void printInorder(struct node* node)
{
    if (node == NULL)
        return;
    printInorder(node->left);
    cout << node->data << " ";
    printInorder(node->right);
}

void printPreorder(struct node* node)
{
    if (node == NULL)
        return;
    cout << node->data << " ";
    printPreorder(node->left);
    printPreorder(node->right);
}

int main()
{
    struct node *root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);
    root->left->right = newNode(5);
    cout << "Preorder: ";
    printPreorder(root);     // 1 2 4 5 3
    cout << "\nInorder: ";
    printInorder(root);      // 4 2 5 1 3
    cout << "\nPostorder: ";
    printPostorder(root);    // 4 5 2 3 1
    cout << "\n";
    return 0;
}
Upon termination of the algorithm, the d(j) part of the label of node j indicates the length of the shortest path from s to j, while the p(j) part indicates the predecessor node of j on this shortest path. By tracing back the p(j) parts, it is easy to identify the shortest paths between s and each of the nodes of G.
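The trace-back of the p(j) labels can be sketched as follows (the predecessor values used in the example are assumed for illustration; p[s] = -1 marks the source s):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Given predecessor labels p(j) produced by such a labeling algorithm
// (with p[s] = -1 for the source s), walk back from node j to recover
// the shortest path s -> ... -> j.
vector<int> trace_path(const vector<int>& p, int j)
{
    vector<int> path;
    for (int v = j; v != -1; v = p[v])
        path.push_back(v);               // visits j, p(j), p(p(j)), ..., s
    // The walk produces j ... s, so reverse it to read s ... j.
    return vector<int>(path.rbegin(), path.rend());
}
```

For example, with s = 0 and labels p = {-1, 0, 1, 1}, tracing node 3 yields the path 0 -> 1 -> 3.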