Let's consider a Binary Search Tree (BST) with the following structure:
        5
       / \
      3   7
     / \ / \
    2  4 6  8
A BST satisfies the rule that for any given node, all elements in its left subtree are smaller and all
elements in its right subtree are larger.
To list the elements in ascending order, we perform an in-order traversal, which applies three steps at every node:
1. Visit the left subtree: For the root (5), start with its left subtree.
2. Visit the node: After the left subtree is processed, visit the current node.
3. Visit the right subtree: Finally, traverse the right subtree.
At Node 5:
o Traverse left subtree (Node 3).
At Node 3:
o Traverse its left subtree (Node 2).
o At Node 2:
No left child → visit Node 2
No right child → return to Node 3.
o Visit Node 3.
o Traverse its right subtree (Node 4).
o At Node 4:
No left child → visit Node 4
No right child → return to Node 5.
Back at Node 5:
o Visit Node 5.
o Traverse the right subtree (Node 7).
At Node 7:
o Traverse its left subtree (Node 6).
o At Node 6:
No left child → visit Node 6
No right child → return to Node 7.
o Visit Node 7.
o Traverse its right subtree (Node 8).
o At Node 8:
No left child → visit Node 8
No right child → the traversal is complete.
The resulting order of visited nodes is: 2, 3, 4, 5, 6, 7, 8. Notice that this is a sorted (ascending)
order.
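For clarity, here is a minimal C sketch of this in-order traversal. The node layout (data, left, right) follows the struct used later in these answers, and the statically wired-up tree in main is only a demonstration of the example above.

#include <stdio.h>

struct Node {
    int data;
    struct Node *left, *right;
};

/* Recursive in-order traversal: left subtree, node, right subtree. */
void inorder(const struct Node* root) {
    if (root == NULL) return;      /* base case: empty subtree      */
    inorder(root->left);           /* 1. visit the left subtree     */
    printf("%d ", root->data);     /* 2. visit the node itself      */
    inorder(root->right);          /* 3. visit the right subtree    */
}

int main(void) {
    /* Nodes of the example BST, wired up statically for brevity. */
    struct Node n2 = {2, NULL, NULL}, n4 = {4, NULL, NULL};
    struct Node n6 = {6, NULL, NULL}, n8 = {8, NULL, NULL};
    struct Node n3 = {3, &n2, &n4}, n7 = {7, &n6, &n8};
    struct Node n5 = {5, &n3, &n7};

    inorder(&n5);                  /* prints: 2 3 4 5 6 7 8 */
    printf("\n");
    return 0;
}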
Yes, I agree with the statement that a Binary Tree can be represented in memory using a
structure. Here are some detailed reasons supporting this:
1. Node Representation: A binary tree is made up of nodes, and each node typically
contains some data and pointers (or references) to its left and right child nodes.
Programming languages like C or C++ have the struct (or class) construct, which
allows us to encapsulate this combined information neatly. For instance, in C, a simple
binary tree node can be defined as follows:
struct Node {
    int data;               /* value stored in this node             */
    struct Node* left;      /* pointer to the left child (or NULL)   */
    struct Node* right;     /* pointer to the right child (or NULL)  */
};
This structure clearly represents the concept of a binary tree node by holding the node's
value and pointers to its left and right children.
2. Dynamic and Recursive Nature: Binary trees are inherently recursive data structures.
Each subtree of a binary tree is itself a binary tree. A structure, especially when used with
dynamic memory allocation (e.g., using malloc in C), supports this recursive definition
seamlessly. Each node dynamically points to other nodes, allowing for trees of any shape
and size (a small allocation and insertion sketch is given after this list).
3. Flexibility and Ease of Manipulation: Using structures to represent binary trees allows
programmers to easily implement and manipulate tree operations such as insertion,
deletion, traversal, and balancing. The use of pointers (or references) makes it
straightforward to connect nodes, rearrange subtrees, or even convert a binary tree into
other data structures if needed.
4. Language and Implementation Versatility: While the structure approach is common in
languages that support pointers explicitly (like C or C++), even object-oriented languages
(like Java, Python, or C#) use a similar concept, albeit with classes. The object or class in
these languages serves the same purpose—encapsulating data and pointers (or references)
to child nodes.
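As a minimal sketch of points 2 and 3 above (the helper names createNode and insert are chosen for illustration, not taken from any standard library), the following C program allocates nodes dynamically with malloc and inserts keys into a BST by rewiring pointers recursively:

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};

/* Dynamically allocate a new leaf node holding the given value. */
struct Node* createNode(int value) {
    struct Node* node = (struct Node*)malloc(sizeof(struct Node));
    node->data = value;
    node->left = NULL;
    node->right = NULL;
    return node;
}

/* Recursive BST insertion: smaller keys go left, larger keys go right.
   The recursion mirrors the recursive definition of the tree itself. */
struct Node* insert(struct Node* root, int value) {
    if (root == NULL) return createNode(value);
    if (value < root->data)
        root->left = insert(root->left, value);
    else if (value > root->data)
        root->right = insert(root->right, value);
    return root;                      /* duplicates are ignored here */
}

int main(void) {
    struct Node* root = NULL;
    int keys[] = {5, 3, 7, 2, 4, 6, 8};   /* the example tree from earlier */
    for (int i = 0; i < 7; i++)
        root = insert(root, keys[i]);
    printf("root holds %d\n", root->data);   /* prints: root holds 5 */
    return 0;
}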
The following steps apply Kruskal's algorithm to the given graph:
1. Sort the Edges by Weight: Order the edges from smallest to largest weight. For the
given graph, the sorted list is:
o A–B: 1
o D–E: 2
o B–C: 3
o C–D: 4
o A–E: 5
o A–C: 7
o A–D: 10
2. Initialize the MST: Start with an empty MST and consider each edge in the sorted order.
3. Process Each Edge:
o Edge A–B (Weight 1): The MST is empty, so add A–B. MST now: {A, B}
o Edge D–E (Weight 2): D and E are not connected to A–B, so add D–E. MST
now forms two separate components: {A, B} and {D, E}
o Edge B–C (Weight 3): B is in the component {A, B} but C is not yet included.
Add B–C. MST now: {A, B, C} and {D, E}
o Edge C–D (Weight 4): C belongs to {A, B, C} and D belongs to {D, E}; adding
C–D connects these two components. MST now becomes one connected
component: {A, B, C, D, E}
o Remaining Edges: The next edge, A–E (weight 5), would form a cycle since A
and E are already connected through the MST. Similarly, edges A–C (7) and A–D
(10) would also create cycles. These are rejected.
4. Resulting Minimum Spanning Tree (MST): The MST consists of the edges:
o A–B: 1
o D–E: 2
o B–C: 3
o C–D: 4
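The same procedure can be sketched in C with a simple union-find to detect cycles. The vertex encoding (A = 0, ..., E = 4) and the helper names are assumptions made purely for this illustration:

#include <stdio.h>
#include <stdlib.h>

#define V 5   /* vertices A=0, B=1, C=2, D=3, E=4 */
#define E 7

struct Edge { int u, v, w; };

/* Union-find helpers used to detect whether an edge would form a cycle. */
int parent[V];

int find(int x) {
    while (parent[x] != x) x = parent[x];
    return x;
}

void unite(int a, int b) {
    parent[find(a)] = find(b);
}

int cmp(const void* a, const void* b) {
    return ((const struct Edge*)a)->w - ((const struct Edge*)b)->w;
}

int main(void) {
    /* Edge list taken from the worked example above. */
    struct Edge edges[E] = {
        {0, 1, 1},  /* A-B */
        {3, 4, 2},  /* D-E */
        {1, 2, 3},  /* B-C */
        {2, 3, 4},  /* C-D */
        {0, 4, 5},  /* A-E */
        {0, 2, 7},  /* A-C */
        {0, 3, 10}  /* A-D */
    };

    for (int i = 0; i < V; i++) parent[i] = i;
    qsort(edges, E, sizeof(struct Edge), cmp);   /* step 1: sort by weight */

    int total = 0;
    for (int i = 0; i < E; i++) {
        /* step 3: accept the edge only if it joins two different components */
        if (find(edges[i].u) != find(edges[i].v)) {
            unite(edges[i].u, edges[i].v);
            total += edges[i].w;
            printf("take %c-%c (%d)\n", 'A' + edges[i].u, 'A' + edges[i].v, edges[i].w);
        }
    }
    printf("MST weight = %d\n", total);   /* 1 + 2 + 3 + 4 = 10 */
    return 0;
}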
Q4 Apply Prim's algorithm to find the minimum spanning tree on the following
graph:
Let’s apply Prim’s algorithm step by step on the given graph. We’ll choose vertex 0 as the
starting point, and at every step, we add the edge of minimum weight that connects a vertex in
the growing spanning tree (MST) to a vertex outside it.
Start with vertex 0. Its candidate edges are (0,1) = 4 and (0,7) = 11; the minimum, (0,1) = 4, is added first, so vertex 1 joins the MST.
Candidate Edges:
o From vertex 0: (0,7) = 11
o From vertex 1:
(1,2) = 8
(1,7) = 2
Minimum edge: (1,7) (weight 2)
Candidate Edges:
o From vertex 1: (1,2) = 8
o From vertex 7:
(7,8) = 7
(6,7) = 1
o From vertex 0: (0,7) = 11 (but 7 is already included)
Minimum edge: (6,7) (weight 1)
Candidate Edges:
o From vertex 1: (1,2) = 8
o From vertex 7: (7,8) = 7
o From vertex 6:
(5,6) = 2
(6,8) = 6
o (Edges from vertex 0 are either already used or connect to 7)
Minimum edge: (5,6) (weight 2)
MST Edges so far: {(0,1)=4, (1,7)=2, (6,7)=1, (5,6)=2} MST Vertices: {0, 1, 5, 6, 7}
Candidate Edges:
o From vertex 1: (1,2) = 8
o From vertex 7: (7,8) = 7
o From vertex 6: (6,8) = 6
o From vertex 5:
(3,5) = 14
(4,5) = 10
Minimum edge: (6,8) (weight 6)
MST Edges so far: {(0,1)=4, (1,7)=2, (6,7)=1, (5,6)=2, (6,8)=6} MST Vertices: {0, 1, 5, 6, 7,
8}
Candidate Edges:
o From vertex 1: (1,2) = 8
o From vertex 8: (2,8) = 2
o Other edges from vertices 5 and 6 either lead to already included vertices or have
higher weights.
Minimum edge: (2,8) (weight 2)
MST Edges so far: {(0,1)=4, (1,7)=2, (6,7)=1, (5,6)=2, (6,8)=6, (2,8)=2} MST Vertices: {0, 1,
2, 5, 6, 7, 8}
Candidate Edges:
o From vertex 2: (2,3) = 7
o (Edge (1,2)=8 is ignored as vertex 1 is already in the MST)
Minimum edge: (2,3) (weight 7)
MST Edges so far: {(0,1)=4, (1,7)=2, (6,7)=1, (5,6)=2, (6,8)=6, (2,8)=2, (2,3)=7} MST
Vertices: {0, 1, 2, 3, 5, 6, 7, 8}
Candidate Edge:
o From vertex 3: (3,4) = 9
o (Alternatively, from vertex 5: (4,5)=10; hence (3,4) is preferred)
Minimum edge: (3,4) (weight 9)
With vertex 4 added, all vertices are included. The final MST consists of the edges:
(0,1) = 4
(1,7) = 2
(6,7) = 1
(5,6) = 2
(6,8) = 6
(2,8) = 2
(2,3) = 7
(3,4) = 9
Total weight: 4 + 2 + 1 + 2 + 6 + 2 + 7 + 9 = 33
Thus, the Minimum Spanning Tree (MST) found using Prim's algorithm has a total weight of 33.
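For reference, here is a minimal C sketch of Prim's algorithm using an adjacency matrix. The matrix is rebuilt only from the edge weights quoted in the walkthrough above (any edges of the original figure that are not mentioned there are omitted, which is an assumption), and it reproduces the same MST of weight 33:

#include <stdio.h>
#include <limits.h>

#define V 9

/* Adjacency matrix rebuilt from the edge weights quoted above; 0 means "no edge". */
int graph[V][V];

void addEdge(int u, int v, int w) {
    graph[u][v] = w;
    graph[v][u] = w;
}

int main(void) {
    int key[V], parent[V], inMST[V];

    addEdge(0, 1, 4);  addEdge(0, 7, 11);
    addEdge(1, 2, 8);  addEdge(1, 7, 2);
    addEdge(7, 8, 7);  addEdge(6, 7, 1);
    addEdge(5, 6, 2);  addEdge(6, 8, 6);
    addEdge(2, 8, 2);  addEdge(2, 3, 7);
    addEdge(3, 5, 14); addEdge(4, 5, 10);
    addEdge(3, 4, 9);

    for (int i = 0; i < V; i++) { key[i] = INT_MAX; parent[i] = -1; inMST[i] = 0; }
    key[0] = 0;   /* start from vertex 0, as in the walkthrough */

    int total = 0;
    for (int count = 0; count < V; count++) {
        /* pick the cheapest vertex not yet in the MST */
        int u = -1;
        for (int v = 0; v < V; v++)
            if (!inMST[v] && (u == -1 || key[v] < key[u])) u = v;

        inMST[u] = 1;
        if (parent[u] != -1) {
            printf("edge (%d,%d) = %d\n", parent[u], u, key[u]);
            total += key[u];
        }
        /* relax the keys of the neighbours of u */
        for (int v = 0; v < V; v++)
            if (graph[u][v] && !inMST[v] && graph[u][v] < key[v]) {
                key[v] = graph[u][v];
                parent[v] = u;
            }
    }
    printf("total MST weight = %d\n", total);   /* prints 33 */
    return 0;
}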
Q5 Compare and Contrast the Linear probing and Chaining methods of collision
resolution in Hashing. Also explain these techniques with the help of examples.
Below is a detailed comparison of the two popular collision resolution techniques in hash tables
—Linear Probing and Chaining—along with examples to illustrate each method.
1. Linear Probing (Open Addressing)
Concept: When a collision occurs (i.e., when two keys hash to the same index), linear probing
searches for the next available slot in the table by checking indices sequentially. If the hash
function computes an index that is already occupied, the algorithm probes the next index (using a
constant step of 1), wrapping around to the beginning of the array if necessary.
Example:
Suppose we have a hash table of size 7 and a simple hash function: h(key) = key mod 7
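A minimal C sketch of linear probing is shown below; the table size 7 and the hash function match the example, while the inserted keys (10, 17, 24, 6) are assumed purely for illustration since the original example values are not listed:

#include <stdio.h>

#define SIZE 7
#define EMPTY -1

int table[SIZE];

/* Insert a key using linear probing: start at key % SIZE and step forward
   one slot at a time (wrapping around) until a free slot is found. */
int insertLinear(int key) {
    int index = key % SIZE;
    for (int i = 0; i < SIZE; i++) {
        int probe = (index + i) % SIZE;
        if (table[probe] == EMPTY) {
            table[probe] = key;
            return probe;           /* slot where the key was placed */
        }
    }
    return -1;                      /* table is full */
}

int main(void) {
    for (int i = 0; i < SIZE; i++) table[i] = EMPTY;

    /* Illustrative keys: 10, 17 and 24 all hash to index 3, so the
       later ones are pushed along to slots 4 and 5. */
    int keys[] = {10, 17, 24, 6};
    for (int i = 0; i < 4; i++)
        printf("key %d -> slot %d\n", keys[i], insertLinear(keys[i]));
    return 0;
}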
Advantages:
o Cache Performance: Since the hash table is usually implemented as a contiguous
array, accessing sequential indices tends to use caches efficiently.
o Memory Efficiency: All elements are stored in one array without extra pointers.
Disadvantages:
o Primary Clustering: Collisions tend to create clusters of occupied slots, causing
longer probe sequences as the table fills up.
o Deletion Complexity: Removing an element can be problematic as it may break
the probe chain. Special markers (like tombstones) are often needed to signal
deleted entries without halting the search.
2. Chaining (Separate Chaining)
Concept: In chaining, each slot in the hash table contains a pointer to a linked list (or another
dynamic data structure) of elements. When two or more keys hash to the same slot, they are
simply added to the list at that index.
Example:
Assume the same hash table size of 7 with the hash function: h(key) = key mod 7
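Below is a matching C sketch of separate chaining using the same table size and hash function; the keys are again assumed only for illustration:

#include <stdio.h>
#include <stdlib.h>

#define SIZE 7

/* Each slot holds the head of a singly linked list of keys. */
struct ChainNode {
    int key;
    struct ChainNode* next;
};

struct ChainNode* table[SIZE];

/* Insert a key by prepending it to the list at index key % SIZE. */
void insertChain(int key) {
    int index = key % SIZE;
    struct ChainNode* node = (struct ChainNode*)malloc(sizeof(struct ChainNode));
    node->key = key;
    node->next = table[index];
    table[index] = node;
}

int main(void) {
    /* Same illustrative keys as before: 10, 17 and 24 all hash to
       index 3, so they end up chained together in the list at slot 3. */
    int keys[] = {10, 17, 24, 6};
    for (int i = 0; i < 4; i++) insertChain(keys[i]);

    for (int i = 0; i < SIZE; i++) {
        printf("slot %d:", i);
        for (struct ChainNode* p = table[i]; p != NULL; p = p->next)
            printf(" %d", p->key);
        printf("\n");
    }
    return 0;
}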
Advantages:
o Simplicity of Collision Resolution: There is no need for probing; collisions are
handled by simply adding the new key to the list.
o Flexible Load Factor: The hash table can handle load factors greater than 1 since
each slot can store an arbitrary number of elements.
o Easy Deletion: Removing a key entails deleting a node from a linked list—this is
a straightforward operation.
Disadvantages:
o Memory Overhead: Extra memory is needed for pointers in the linked lists.
o Cache Performance: Because the nodes of the lists may be scattered in memory,
cache performance might suffer, especially if many keys hash to the same slot.
o Worst-Case Performance: If a poor hash function results in many keys
clustering in the same list, search time can degrade to O(n) for that slot.
Comparison Summary
o Data Structure: Linear probing uses a single contiguous array (open addressing); chaining uses an array of linked lists (or dynamic arrays).
o Collision Handling: Linear probing finds the next available slot using a fixed sequence (e.g., index + 1); chaining stores multiple keys at the same index in a list.
o Memory Usage: Linear probing is more memory efficient, with no extra pointer overhead; chaining requires extra memory for pointers and list management.
o Cache Performance: Linear probing is generally better due to contiguous memory access; chaining may suffer due to non-contiguous memory access (pointer chasing).
o Deletion Handling: Linear probing is more complex, requiring removed elements to be marked as "deleted"; chaining is straightforward: simply remove the element from the linked list.
Conclusion
Both linear probing and chaining are effective methods for handling hash collisions, yet they
come with trade-offs. Linear probing offers excellent cache performance and minimal memory
overhead but can suffer from clustering and deletion complications. In contrast, chaining
provides flexibility in handling a high number of collisions and simplifies deletions, though it
incurs extra memory overhead and may have less favorable cache performance.