
Q1 "Inorder traversal of Binary Search Tree gives elements in sorted order".

Justify this statement using an example.

Let's consider a Binary Search Tree (BST) with the following structure:

        5
      /   \
     3     7
    / \   / \
   2   4 6   8

A BST satisfies the rule that for any given node, all elements in its left subtree are smaller and all
elements in its right subtree are larger.

Inorder Traversal Process:

1. Visit the left subtree: For the root (5), start with its left subtree.
2. Visit the node: After the left subtree is processed, visit the current node.
3. Visit the right subtree: Finally, traverse the right subtree.

Step-by-Step Inorder Traversal:

- At Node 5:
  - Traverse the left subtree (Node 3).
- At Node 3:
  - Traverse its left subtree (Node 2).
  - At Node 2:
    - No left child → visit Node 2.
    - No right child → return to Node 3.
  - Visit Node 3.
  - Traverse its right subtree (Node 4).
  - At Node 4:
    - No left child → visit Node 4.
    - No right child → return to Node 5.
- Back at Node 5:
  - Visit Node 5.
  - Traverse the right subtree (Node 7).
- At Node 7:
  - Traverse its left subtree (Node 6).
  - At Node 6:
    - No left child → visit Node 6.
    - No right child → return to Node 7.
  - Visit Node 7.
  - Traverse its right subtree (Node 8).
  - At Node 8:
    - No left child → visit Node 8.
    - No right child → traversal complete.

The resulting order of visited nodes is: 2, 3, 4, 5, 6, 7, 8. Notice that this is sorted (ascending) order.

Q2 "Binary Tree can be represented in memory using Structure". Do you agree
with this statement? Give reasons in support of your answer.

Yes, I agree with the statement that a Binary Tree can be represented in memory using a
structure. Here are some detailed reasons supporting this:

1. Node Representation: A binary tree is made up of nodes, and each node typically
contains some data and pointers (or references) to its left and right child nodes.
Programming languages like C or C++ have the struct (or class) construct, which
allows us to encapsulate this combined information neatly. For instance, in C, a simple
binary tree node can be defined as follows:

struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};

This structure clearly represents the concept of a binary tree node by holding the node's
value and pointers to its left and right children.

2. Dynamic and Recursive Nature: Binary trees are inherently recursive data structures.
Each subtree of a binary tree is itself a binary tree. A structure, especially when used with
dynamic memory allocation (e.g., using malloc in C), supports this recursive definition
seamlessly. Each node dynamically points to other nodes, allowing for trees of any shape
and size.
3. Flexibility and Ease of Manipulation: Using structures to represent binary trees allows
programmers to easily implement and manipulate tree operations such as insertion,
deletion, traversal, and balancing. The use of pointers (or references) makes it
straightforward to connect nodes, rearrange subtrees, or even convert a binary tree into
other data structures if needed.
4. Language and Implementation Versatility: While the structure approach is common in
languages that support pointers explicitly (like C or C++), even object-oriented languages
(like Java, Python, or C#) use a similar concept, albeit with classes. The object or class in
these languages serves the same purpose—encapsulating data and pointers (or references)
to child nodes.

Q3 Apply Kruskal's algorithm to find the minimum spanning tree on the
following graph:
Let's apply Kruskal’s algorithm step by step:

1. Sort the Edges by Weight: Order the edges from smallest to largest weight. For the given graph, the sorted list is:
   - A–B: 1
   - D–E: 2
   - B–C: 3
   - C–D: 4
   - A–E: 5
   - A–C: 7
   - A–D: 10
2. Initialize the MST: Start with an empty MST and consider each edge in sorted order.
3. Process Each Edge:
   - Edge A–B (weight 1): The MST is empty, so add A–B. Components: {A, B}
   - Edge D–E (weight 2): D and E are not connected to A–B, so add D–E. The MST now forms two separate components: {A, B} and {D, E}
   - Edge B–C (weight 3): B is in the component {A, B} but C is not yet included. Add B–C. Components: {A, B, C} and {D, E}
   - Edge C–D (weight 4): C belongs to {A, B, C} and D belongs to {D, E}; adding C–D connects the two components. The MST becomes one connected component: {A, B, C, D, E}
   - Remaining edges: The next edge, A–E (weight 5), would form a cycle since A and E are already connected through the MST. Similarly, edges A–C (7) and A–D (10) would also create cycles. These are rejected.
4. Resulting Minimum Spanning Tree (MST): The MST consists of the edges:
   - A–B: 1
   - D–E: 2
   - B–C: 3
   - C–D: 4

The total weight is: 1 + 2 + 3 + 4 = 10.

Q4 Apply Prim's algorithm to find the minimum spanning tree on the following
graph:
Let’s apply Prim’s algorithm step by step on the given graph. We’ll choose vertex 0 as the
starting point, and at every step, we add the edge of minimum weight that connects a vertex in
the growing spanning tree (MST) to a vertex outside it.

Step 1: Start at Vertex 0

- MST Vertices: {0}
- Candidate Edges from 0:
  - (0,1) with weight 4
  - (0,7) with weight 11

Minimum edge: (0,1) (weight 4)

- Action: Add edge (0,1) and vertex 1.

Step 2: MST Now Contains {0, 1}

- Candidate Edges:
  - From vertex 0: (0,7) = 11
  - From vertex 1:
    - (1,2) = 8
    - (1,7) = 2

Minimum edge: (1,7) (weight 2)

- Action: Add edge (1,7) and vertex 7.

MST Edges so far: {(0,1) = 4, (1,7) = 2}
MST Vertices: {0, 1, 7}

Step 3: MST Now Contains {0, 1, 7}

- Candidate Edges:
  - From vertex 1: (1,2) = 8
  - From vertex 7:
    - (7,8) = 7
    - (6,7) = 1
  - From vertex 0: (0,7) = 11 (but vertex 7 is already included)

Minimum edge: (6,7) (weight 1)

- Action: Add edge (6,7) and vertex 6.

MST Edges so far: {(0,1) = 4, (1,7) = 2, (6,7) = 1}
MST Vertices: {0, 1, 6, 7}

Step 4: MST Now Contains {0, 1, 6, 7}

- Candidate Edges:
  - From vertex 1: (1,2) = 8
  - From vertex 7: (7,8) = 7
  - From vertex 6:
    - (5,6) = 2
    - (6,8) = 6
  - (Edges from vertex 0 are either already used or connect to included vertices)

Minimum edge: (5,6) (weight 2)

- Action: Add edge (5,6) and vertex 5.

MST Edges so far: {(0,1) = 4, (1,7) = 2, (6,7) = 1, (5,6) = 2}
MST Vertices: {0, 1, 5, 6, 7}

Step 5: MST Now Contains {0, 1, 5, 6, 7}

- Candidate Edges:
  - From vertex 1: (1,2) = 8
  - From vertex 7: (7,8) = 7
  - From vertex 6: (6,8) = 6
  - From vertex 5:
    - (3,5) = 14
    - (4,5) = 10

Minimum edge: (6,8) (weight 6)

- Action: Add edge (6,8) and vertex 8.

MST Edges so far: {(0,1) = 4, (1,7) = 2, (6,7) = 1, (5,6) = 2, (6,8) = 6}
MST Vertices: {0, 1, 5, 6, 7, 8}

Step 6: MST Now Contains {0, 1, 5, 6, 7, 8}

- Candidate Edges:
  - From vertex 1: (1,2) = 8
  - From vertex 8: (2,8) = 2
  - Other edges from vertices 5 and 6 either lead to already included vertices or have higher weights.

Minimum edge: (2,8) (weight 2)

- Action: Add edge (2,8) and vertex 2.

MST Edges so far: {(0,1) = 4, (1,7) = 2, (6,7) = 1, (5,6) = 2, (6,8) = 6, (2,8) = 2}
MST Vertices: {0, 1, 2, 5, 6, 7, 8}

Step 7: MST Now Contains {0, 1, 2, 5, 6, 7, 8}

- Candidate Edges:
  - From vertex 2: (2,3) = 7
  - (Edge (1,2) = 8 is skipped, as both its endpoints are already in the MST and it would form a cycle)

Minimum edge: (2,3) (weight 7)

- Action: Add edge (2,3) and vertex 3.

MST Edges so far: {(0,1) = 4, (1,7) = 2, (6,7) = 1, (5,6) = 2, (6,8) = 6, (2,8) = 2, (2,3) = 7}
MST Vertices: {0, 1, 2, 3, 5, 6, 7, 8}

Step 8: MST Now Contains {0, 1, 2, 3, 5, 6, 7, 8}

- Candidate Edges:
  - From vertex 3: (3,4) = 9
  - (Alternatively, from vertex 5: (4,5) = 10; hence (3,4) is preferred)

- Action: Add edge (3,4) (weight 9) and vertex 4.

Final MST Edges:

 (0,1) = 4
 (1,7) = 2
 (6,7) = 1
 (5,6) = 2
 (6,8) = 6
 (2,8) = 2
 (2,3) = 7
 (3,4) = 9

MST Vertices: {0, 1, 2, 3, 4, 5, 6, 7, 8}

Total Weight Calculation

4 + 2 + 1 + 2 + 6 + 2 + 7 + 9 = 33

Thus, the Minimum Spanning Tree (MST) found using Prim's algorithm has a total weight of 33.

Q5 Compare and Contrast the Linear probing and Chaining methods of collision
resolution in Hashing. Also explain these techniques with the help of examples.

Below is a detailed comparison of the two popular collision resolution techniques in hash tables
—Linear Probing and Chaining—along with examples to illustrate each method.

1. Linear Probing (Open Addressing)

Concept: When a collision occurs (i.e., when two keys hash to the same index), linear probing
searches for the next available slot in the table by checking indices sequentially. If the hash
function computes an index that is already occupied, the algorithm probes the next index (using a
constant step of 1), wrapping around to the beginning of the array if necessary.

Example:

Suppose we have a hash table of size 7 and a simple hash function: h(key) = key mod 7

- Insert key 10:
  - h(10) = 10 mod 7 = 3.
  - Slot 3 is empty; place key 10 at index 3.
- Insert key 17:
  - h(17) = 17 mod 7 = 3.
  - Index 3 is already occupied by key 10.
  - Probe: check index 4; it is empty, so place key 17 there.
- Insert key 24:
  - h(24) = 24 mod 7 = 3.
  - Index 3 is occupied.
  - Probe: check index 4 (occupied by 17), then index 5; it is empty, so place key 24 there.

Advantages and Disadvantages:

- Advantages:
  - Cache Performance: Since the hash table is usually implemented as a contiguous array, accessing sequential indices tends to use caches efficiently.
  - Memory Efficiency: All elements are stored in one array without extra pointers.
- Disadvantages:
  - Primary Clustering: Collisions tend to create clusters of occupied slots, causing longer probe sequences as the table fills up.
  - Deletion Complexity: Removing an element can be problematic, as it may break the probe chain. Special markers (tombstones) are often needed to signal deleted entries without halting the search.
2. Chaining (Separate Chaining)

Concept: In chaining, each slot in the hash table contains a pointer to a linked list (or another
dynamic data structure) of elements. When two or more keys hash to the same slot, they are
simply added to the list at that index.

Example:

Assume the same hash table size of 7 with the hash function: h(key) = key mod 7

- Insert key 10:
  - h(10) = 10 mod 7 = 3.
  - Create a new list at index 3 and add key 10.
- Insert key 17:
  - h(17) = 17 mod 7 = 3.
  - Index 3 already has key 10, so simply append key 17 to the linked list at index 3.
- Insert key 24:
  - h(24) = 24 mod 7 = 3.
  - Index 3's list gains another element (key 24), so the list becomes [10, 17, 24].

Advantages and Disadvantages:

- Advantages:
  - Simplicity of Collision Resolution: There is no need for probing; collisions are handled by simply adding the new key to the list.
  - Flexible Load Factor: The hash table can handle load factors greater than 1, since each slot can store an arbitrary number of elements.
  - Easy Deletion: Removing a key entails deleting a node from a linked list, a straightforward operation.
- Disadvantages:
  - Memory Overhead: Extra memory is needed for the pointers in the linked lists.
  - Cache Performance: Because the list nodes may be scattered in memory, cache performance can suffer, especially if many keys hash to the same slot.
  - Worst-Case Performance: If a poor hash function clusters many keys in the same list, search time can degrade to O(n) for that slot.

Comparison Summary

Aspect              | Linear Probing                                                          | Chaining
--------------------|-------------------------------------------------------------------------|----------------------------------------------------------------
Data Structure      | Uses a single contiguous array (open addressing).                       | Uses an array of linked lists (or dynamic arrays).
Handling Collisions | Finds the next available slot using a fixed sequence (e.g., index + 1). | Stores multiple keys at the same index via a list.
Memory Usage        | More memory efficient; no extra pointer overhead.                       | Requires extra memory for pointers and list management.
Cache Performance   | Generally better due to contiguous memory access.                       | May suffer due to non-contiguous memory access (pointer chasing).
Clustering          | Prone to primary clustering, which can degrade performance.             | Clustering is less of an issue; performance depends on list length.
Deletion Handling   | More complex; requires marking removed elements as "deleted".           | Straightforward; remove the element from the linked list.

Conclusion

Both linear probing and chaining are effective methods for handling hash collisions, yet they
come with trade-offs. Linear probing offers excellent cache performance and minimal memory
overhead but can suffer from clustering and deletion complications. In contrast, chaining
provides flexibility in handling a high number of collisions and simplifies deletions, though it
incurs extra memory overhead and may have less favorable cache performance.
